8748548
Fixed-income attribution
Fixed-income attribution is the process of measuring returns generated by various sources of risk in a fixed income portfolio, particularly when multiple sources of return are active at the same time. Importance. The risks affecting the return on a bond portfolio, as an example, include the overall level of the yield curve, the slope of the yield curve, and the credit spreads of the bonds in the portfolio. A portfolio manager may hold firm views on the ways in which these factors will change in the near future, so in three separate risk decisions he positions the assets in the portfolio to take advantage of these expected market movements. If all views subsequently prove to be correct, then each decision will generate a profit. If one view is wrong, it will generate a loss, but the effect of the other bets may compensate. The overall performance will then be the sum of the performance contributions from each source of risk. Attribution is therefore an extremely useful tool in verifying a fund manager's claims to possessing particular investment skills. If a fund is marketed as being interest-rate neutral while providing consistent returns from superior credit research, then an attribution report will confirm this claim. Conversely, if the attribution report shows that this same manager is making non-zero returns from interest rate movements, then his exposure to interest rate risk is clearly not zero and his investment process clearly differs from his stated position. Fixed-income attribution therefore provides a much deeper level of information than is available from a simple portfolio performance report. Typically, such a report only shows returns at an aggregated level, and provides no feedback as to where the investor's true skills lie. For these reasons, fixed-income attribution is rapidly growing in importance in the investment industry; see . Sector-based attribution. Among the simplest fixed income attribution techniques is "sector-based" attribution. This is based on the standard Brinson-Fachler attribution scheme, where the securities in the portfolio and benchmark are divided up into buckets based on their modified duration. This scheme has the advantage that it is readily understandable, particularly by managers who have an equity background. However, it does not provide a very deep analysis. The overall effects of a parallel change in the yield curve are supplied but there is none of the more detailed analysis supplied by a true fixed-income decomposition. A useful account of sector-based attribution, with worked examples, is provided in Dynkin "et al." (1998). Yield curve attribution. A more widely used approach to fixed-income attribution is to decompose the returns of individual securities by source of risk, and then to aggregate these risk-specific returns over an entire portfolio. Typical sources of risk include yield return, return due to yield curve movements, and credit spread shifts. These sub-returns can then be aggregated over time and sector to give the overall portfolio return, attributed by source of risk. For a description of the mechanics of combining these sub-returns in a self-consistent manner, see Bacon (2004). Sources of return. Over a given interval, the return of each security will be made up of return from various sub-returns (see below for explanations) First principles versus perturbational attribution. 
To calculate the return arising from each effect, we can reprice the security from first principles by using a pricing formula, or some other algorithm, before and after each source of return is considered. For instance, in calculating yield return, we might calculate the price of the security at the start and end of the calculation interval, but using the yield at the beginning of the interval. Then the difference between the two prices may be used to calculate the security's return due to the passage of time. This approach is simple in principle but can lead to operational difficulties, since it requires accurate market data and a pricing algorithm for every type of security held, and every security must be repriced at each step of the decomposition. For these reasons, a pricing model-based approach to attribution may not be the right one where data sourcing or reconciliation is an issue. An alternative solution is to perform a Taylor expansion on the price of a security formula_0 and remove higher-order terms, which gives formula_1 Writing the return of the security as formula_2, this leads to the perturbation equation formula_3 where the last term denotes higher-order corrections that may be ignored, and formula_4 formula_5 The terms formula_6 and formula_7 measure first- and second-order interest rate sensitivity. These are conventionally referred to as the "modified duration" and "convexity" of the security, and are often called risk numbers. The data requirements for this approach to attribution are less onerous than for the first-principles approach. The perturbation equation does require externally calculated risk numbers, but this may not be a major obstacle, since these quantities are readily available from the same sources as yields and prices. There may also be inherent advantages in this approach with its ability to work with user-supplied risk numbers, since it allows the user to use sensitivity measures from in-house models, which is particularly useful where (for instance) the user has custom repayment models for mortgage-backed securities. The approach is also self-checking, in that the size of the residual returns should be very low. If this is not the case, there is presumably an error in the calculated return or the risk numbers, or some other source of risk is distorting the returns. Conveniently, the perturbational approach may be extended to new asset types without requiring any new pricing code or types of data, and it also works for benchmark sectors as well as individual securities, which is useful if benchmark data is only available at sector level.
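The perturbation equation above can be evaluated directly once the risk numbers are known. The following sketch is illustrative only: the function name, the sample figures, and the convention that yields and returns are expressed as decimals are assumptions for the example rather than part of any particular attribution system.

def perturbational_attribution(y, modified_duration, convexity, dy, dt, observed_return):
    # Decompose a security's return using the second-order perturbation equation:
    #   return ~= y*dt - MD*dy + 0.5*C*dy^2
    # y is the yield to maturity, dy the yield change over the interval,
    # dt the elapsed time in years, observed_return the actual total return.
    yield_return = y * dt                         # return due to the passage of time
    duration_return = -modified_duration * dy     # first-order interest rate effect
    convexity_return = 0.5 * convexity * dy ** 2  # second-order correction
    explained = yield_return + duration_return + convexity_return
    residual = observed_return - explained        # should be small if the risk numbers are sound
    return {"yield": yield_return,
            "duration": duration_return,
            "convexity": convexity_return,
            "residual": residual}

# Hypothetical example: a bond yielding 5% with modified duration 7 and convexity 60,
# over one month in which its yield fell by 20 basis points and it returned 1.90%.
print(perturbational_attribution(0.05, 7.0, 60.0, -0.0020, 1.0 / 12.0, 0.0190))

The residual line is the self-check described above: if it is not close to zero, either the risk numbers or the computed return are suspect.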
Modeling the yield curve. Historically, one of the most important drivers of return in fixed-income portfolios has been the yield curve, and many investment strategies are expressed in terms of changes in the curve. Any discussion of fixed-income attribution therefore requires an appreciation of how changes in the curve are described, and their effect on the performance of a portfolio. If one is only interested in gross changes in the yield curve at a particular maturity, then one can read yields off the various datasets, using interpolation where necessary, and there is no need to model any part of the curve. If, on the other hand, one wants to describe curve movements in terms used by traders (or to extrapolate), then some form of parameterization is required. The most widely used nomenclature for describing yield curve changes uses the terms "shift", "twist" and "butterfly": briefly, a shift is a roughly parallel movement of the whole curve up or down, a twist is a steepening or flattening of the curve, and a butterfly is a change in its curvature. Describing these movements in numerical terms typically requires fitting a model to the observed yield curve with a limited number of parameters. These parameters can then be translated into shift, twist, and butterfly movements – or whatever other interpretation the trader chooses to use. Such a fitted model is also often used to extrapolate the curve, for instance for CDS. Two of the most widely used models are polynomial functions and Nelson-Siegel functions (Nelson and Siegel (1987)). formula_8 where formula_9 is maturity, formula_10 are parameters to be fitted, and formula_11 is the yield of the curve at maturity formula_9. formula_12 where formula_11 and formula_9 are as above, and formula_13, formula_14, formula_15 and formula_16 are parameters to be fitted via a least-squares or similar algorithm (see Diebold and Li [2006]; Bolder and Stréliski [1999]): *formula_13 is interpreted as the long-run level of interest rates (the loading is 1, it is a constant that does not decay); *formula_14 is the short-term component (it starts at 1, and decays monotonically and quickly to 0); *formula_15 is the medium-term component (it starts at 0, increases, then decays to zero); *formula_16 is the decay factor: small values produce slow decay and can better fit the curve at long maturities, while large values produce fast decay and can better fit the curve at short maturities; formula_16 also governs where formula_15 achieves its maximum. Svensson (1994) adds a "second hump" term; this is the Nelson–Siegel–Svensson (NSS) model. The additional term is formula_17, and the interpretation is as for formula_15 and formula_16 above. Another generalization of Nelson-Siegel is the family of exponential polynomial models ("EPM(n)"), in which the number of linear coefficients is free. Once a curve has been fitted, the user can then define various measures of shift, twist and butterfly, and calculate their values from the fitted parameters. For instance, the amount of shift in a curve modeled by a polynomial function can be measured as the difference between the polynomial formula_18 parameters at successive dates. In practice, the Nelson-Siegel function has the advantages that it is well-behaved at long maturities, and that its parameters can be set to model virtually any yield curve (see Nelson and Siegel [1987]).
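As a concrete illustration of the curve-fitting step, the sketch below fits the Nelson-Siegel form to a handful of observed yields with an ordinary least-squares routine. The maturities, yields, starting values and bounds are hypothetical; in practice the fit would be run on a full set of market quotes each day.

import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(m, beta0, beta1, beta2, tau):
    # y(m) = b0 + b1*(1 - exp(-m/tau))/(m/tau) + b2*((1 - exp(-m/tau))/(m/tau) - exp(-m/tau))
    x = m / tau
    loading = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

# Hypothetical par yields (in percent) observed at these maturities (in years).
maturities = np.array([0.25, 1, 2, 3, 5, 7, 10, 20, 30], dtype=float)
observed = np.array([4.10, 4.25, 4.40, 4.55, 4.80, 4.95, 5.05, 5.20, 5.25])

params, _ = curve_fit(nelson_siegel, maturities, observed,
                      p0=[5.0, -1.0, 1.0, 2.0],
                      bounds=([-10.0, -10.0, -10.0, 0.05], [15.0, 10.0, 10.0, 30.0]))
beta0, beta1, beta2, tau = params
print("long-run level beta0:", round(beta0, 3))
print("fitted 15-year yield:", round(nelson_siegel(15.0, *params), 3))

Repeating the fit on successive dates and differencing the fitted parameters gives one possible set of shift, twist and butterfly measures, as described above.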
Factor-based attribution. A factor-based model of yield curve movements is calculated by deriving the covariance matrix of yield shifts at predefined maturities, and calculating the eigenvectors and eigenvalues of this matrix. Each eigenvector corresponds to a fundamental mode of the yield curve, and the eigenvectors are mutually orthogonal, so that the curve movement on any given day is a linear combination of the basis eigenvectors. The eigenvalues of this matrix then give the relative weights, or importance, of these curve shifts. [Phoa (1998)]. Factor models use a large sample of historical yield curve data and construct a set of basis functions that can be linearly combined to represent these curve movements in the most economical way. The algorithm always attributes as much of the curve movement as possible to the first basis function, then as much as possible to the second, and so on. Since these functions roughly correspond to our shift and twist motions, this approach attributes almost all of the curve change to these two modes, leaving a very small contribution from higher modes. Typical results attribute 90% of curve movements to shift changes, 8% to twist, and 2% to curvature (or butterfly) movements. However, the issue that these basis functions may be different from those in which the risk decisions were expressed is not widely appreciated. Since conventional risk analysis for fixed-income instruments usually assumes a parallel yield shift across all maturities, it would be most convenient if a parallel motion mode turned out to dominate the other modes, and in fact this is more or less what occurs. While a factor-based decomposition of term structure changes is mathematically elegant, it does have some significant drawbacks for attribution purposes. The great advantage of a factor-based approach is that it ensures that as much curve movement as possible is attributed to shift movement, and that twist and curvature motions are given values that are as small as possible. This allows apparently straightforward reporting, because hard-to-understand curve movements are always assigned small weights in an attribution analysis. However, this is at the cost of a distortion of the other results. On the other hand, a naïve interpretation of the terms shift, twist and curvature when applied to yield curve movements may well give rise to higher-order movements that are much higher than investors would expect. There are also problems in the exact definition of the terms shift and twist. Without fixing a twist point at the outset, there is no unique value for these terms in either a Nelson-Siegel or polynomial formulation. However, the location of this twist point may not match user expectations. For a deeper discussion of this point, see Colin (2005).
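A factor decomposition of this kind reduces to an eigen-decomposition of the covariance matrix of historical yield changes. The sketch below shows the mechanics on a randomly generated panel; with real curve history the first eigenvector typically has the flat, shift-like loadings and large variance share described above, which this placeholder data will not reproduce.

import numpy as np

# Placeholder panel of daily yield changes in basis points: rows are days,
# columns are curve maturities (e.g. 1y, 2y, 5y, 10y, 30y). A real analysis
# would use several years of observed curve history instead.
rng = np.random.default_rng(0)
yield_changes = rng.normal(size=(500, 5))

cov = np.cov(yield_changes, rowvar=False)         # covariance of yield shifts
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]             # sort factors by importance
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

print("variance share per factor:", np.round(eigenvalues / eigenvalues.sum(), 3))
print("first factor loadings:    ", np.round(eigenvectors[:, 0], 3))

# Any single day's curve move is exactly a linear combination of the factors:
day_move = yield_changes[0]
scores = eigenvectors.T @ day_move                # exposure to each factor that day
reconstructed = eigenvectors @ scores             # recovers day_move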
Interest returns. The first source of return in a fixed-income portfolio is that due to interest. The majority of securities will pay a regular coupon, and this is paid irrespective of what happens in the marketplace (ignoring defaults and similar catastrophes). For instance, a bond paying a 10% annual coupon will always pay 10% of its face value to the owner each year, even if there is no change in market conditions. However, the effective yield on the bond may well be different, since the market price of the bond is usually different from the face value. Yield return is calculated from formula_19 where formula_20 is the security's yield to maturity, and formula_21 is the elapsed time. Towards the end of the bond's life we often see a pull-to-parity effect. As maturity approaches, a bond's price converges to its nominal amount, irrespective of the level of interest rates, and this may cause a bond's price to move in a different way to what would normally be expected. Roll return. Roll return can occur when a yield curve is steeply sloped. In the absence of any changes in the curve, as a security is held over time its maturity will decrease and the yield (as read off the curve) will change. If the slope is positive, the yield will decrease and the security's price will increase. Positioning a portfolio's assets to take advantage of a steeply sloping yield curve is sometimes called riding the yield curve. Strictly speaking, roll return belongs in a separate category, as it is neither a strict yield effect nor a return caused by a change in the yield curve. Yield curve attribution. Changes in term structure form one of the most important sources of risk in a portfolio. Unlike an equity price, which just moves one-dimensionally, the price of a fixed-income security is calculated as a sum of discounted cash flows, where the discount rate used depends on the interest rate at that maturity. The magnitude and shape of curve changes are therefore of major importance to fixed-income managers. At the most basic level, we can break down yield changes in terms of treasury shift and credit shift. At any maturity, we can compare the change in the target security with the change in the corresponding government-backed security, which will have the highest credit rating and hence the lowest yield. All securities have yields equal to or greater than those of their equivalent-maturity government securities, which act as a benchmark for movements in the marketplace. Many investment-grade securities are traded at a spread to the Treasury curve, with the size of this spread depending on current economic conditions and the credit rating of the individual security. For instance, in April 2005 General Motors debt was downgraded to non-investment, or junk, status by the ratings agencies. As a result, the credit spread (or return demanded by investors for holding this riskier investment) rose by over 150 basis points, and the value of General Motors bonds accordingly fell. The resulting loss in performance was attributable entirely to credit effects. Since the yield of virtually any fixed-income instrument is affected by changes in the shape of the Treasury curve, it is not surprising that traders examine future and past performance in the light of changes to this curve. Appropriate yield curves. It is not always appropriate to use a single yield curve throughout a portfolio, even for instruments traded from a particular country. Inflation-linked securities use their own curve, whose movements may not show strong correlation with the yield curve of the broader market. Short-term money market securities may be better modeled by a separate model for the bill curve, and other markets may use the swap curve rather than the treasury curve. Credit attribution. The situation is complicated by recent innovations in the credit markets and the explosive growth of instruments that allow credit risk to be precisely targeted, such as credit-default swaps and the ability to split different tranches of instruments in collateralized debt obligations (CDO). The simplest way to regard return on credit is to see it as the return made by changes in a security's yield, after changes due to movements in the market's reference curve have been removed. This may be quite adequate for a simple portfolio, but for traders who are deliberately interest-rate neutral and are making all their returns from credit bets, something more detailed is probably necessary. An alternative way to regard the higher yields of credit instruments is to regard them as being priced off different yield curves, where these credit curves lie above the reference curve. The lower the credit rating, the higher the spread, thus reflecting the extra yield premium demanded for greater risk. Using this model we can describe returns of, say, an A-rated security in terms of movements in the AAA curve, plus movements (tightening or widening) in the credit spread. Other ways to look at the return generated by credit spreads are to measure the yield of each security against an industry sector curve, or (in the case of Eurobonds) to measure the spread between bonds of the same credit rating and currency but differing by country of issue.
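One way to implement the basic treasury/credit split described above is to compare a security's yield change with that of the matched-maturity government curve and treat the difference as spread movement, converting each piece into return via duration. The helper below is a hypothetical sketch of that bookkeeping, not a standard library routine; all names and figures are illustrative.

def credit_split(security_dy, treasury_dy, modified_duration):
    # Split a security's yield change into a curve (treasury) component and a
    # credit-spread component, then convert each to a return contribution using
    # the usual first-order approximation (return ~= -MD * dy).
    spread_dy = security_dy - treasury_dy
    return {"curve": -modified_duration * treasury_dy,
            "credit": -modified_duration * spread_dy}

# Hypothetical corporate bond: its yield rose 30 bp while the matched-maturity
# treasury yield rose only 10 bp, so 20 bp of widening is attributed to credit.
print(credit_split(0.0030, 0.0010, 5.5))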
Attribution on mortgage-backed securities. Mortgage-backed securities (MBS) are substantially more complex to price than vanilla bonds, due to the uncertainties implied by the prepayment option included in the instrument's structure. An MBS is therefore exposed to additional sources of risk beyond those of a vanilla bond, and ideally the returns generated by these risks should be shown in the attribution report. Simple risk measures. The simplest measure of interest-rate sensitivity for an MBS is its effective duration. The modified duration of a bond assumes that cash flows do not change in response to movements in the term structure, which is not the case for an MBS. For instance, when rates fall, the rate of prepayments will probably rise and the duration of the MBS will also fall, which is entirely the opposite behavior to a vanilla bond. For this reason, effective duration formula_22 is a better single-figure measure of interest-rate sensitivity, where formula_23 Here, formula_24 is the price of the MBS at yield formula_20, calculated using an appropriate prepayment model. While compact, effective duration only measures the effect of a parallel shift in the yield curve across all maturities. It does not take into account other risk factors, such as non-parallel yield curve shifts, convexity, option-adjusted spreads, and others. However, effective duration may suffice for many managers as a basic risk measure. Virtually no research has been published on the attribution of other sources of risk for MBS. Key rate durations. For managers who need to account for changes in the shape of the yield curve in detail, a single risk measure for interest-rate sensitivity is insufficient and a more detailed way of measuring changes across the entire term structure is required. One of the most popular techniques to accomplish this is the use of key-rate durations (KRDs), introduced by Thomas Ho (1992). Ho defines a number of maturities on the yield curve as the key rates, with typical values of 3 months, 1, 2, 3, 5, 7, 10, 15, 20, 25 and 30 years. At each point, we define a duration that measures interest-rate sensitivity to a movement at that point only, with the effect of the duration at other maturities decreasing linearly to the neighboring points. In other words, a key rate duration measures the effect of a change in the yield curve that is localized at a particular maturity, and restricted to the immediate vicinity of that maturity, usually by having the change drop linearly to zero at neighboring points. Of course, the yield curve is most unlikely to behave in this way. The idea is that the actual change in the yield curve can be modeled in terms of a sum of such saw-tooth functions. At each key-rate maturity, we know the change in the curve's yield, and can combine this change with the KRD to calculate the overall change in value of the portfolio. In other words, formula_25 where the sum is across all key rate maturities. The sum of an instrument's key rate durations is approximately equal to its modified duration. The sum may not be exact because modified duration assumes a flat yield curve, which is seldom the case. This approach can easily be combined with the earlier decomposition into shift, twist and curvature components to give price changes due to these yield curve movement types. For instance, suppose we know the amount by which the yield curve has steepened at each key rate maturity. Then the return of the MBS due to a steepening Treasury curve is given by formula_26
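The two measures above translate directly into code. The sketch below computes effective duration from three repriced values (the pricing with a prepayment model is assumed to happen elsewhere) and accumulates the key-rate contribution to return for a given curve movement. The maturities, durations and yield changes are illustrative, and the sign convention (positive duration, rising yields giving a negative contribution) is one common choice rather than the only one.

def effective_duration(price_up, price_down, price_mid, dy):
    # Central-difference effective duration: prices at y+dy and y-dy must come
    # from a prepayment-aware pricing model, which is not reproduced here.
    return -(price_up - price_down) / (2.0 * price_mid * dy)

def key_rate_return(krds, curve_changes):
    # Return contribution from a curve move, summed across key-rate maturities.
    # krds and curve_changes are aligned lists (e.g. at 2y, 5y, 10y, 30y).
    return sum(-krd * dy for krd, dy in zip(krds, curve_changes))

# Illustrative figures only.
print(effective_duration(97.1, 103.0, 100.0, 0.003))        # about 9.8
print(key_rate_return([0.8, 1.9, 2.6, 1.2],                 # key-rate durations
                      [-0.0005, 0.0, 0.0010, 0.0020]))      # a steepening move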
Other risk factors. MBS have many more risk factors than vanilla bonds, and an attribution scheme needs to model them all. While all these factors can be important in accounting for changes in MBS returns, in practice a particular user may only select a subset. The reason is that a perturbational analysis requires the provision of risk sensitivity numbers for each factor, and in some cases these may simply not be available. The return made by such uncomputed risks may be grouped into an ‘Other’ category in the attribution report. Benchmarks. The importance of benchmarks remains widely underestimated. To perform attribution on a portfolio, one must also run attribution on its associated benchmark, and this frequently presents substantial difficulties. To provide attribution information at the same level of detail for a benchmark, one needs extensive, detailed weights and returns, and these are often hard to find. For instance, many widely used benchmarks contain thousands of bonds. Deriving the security-level returns of an industry benchmark so that the overall returns match the published figures remains a major challenge for most practitioners. While benchmarks may have much greater uniformity of instrument type than managed portfolios, the sheer number of securities – and the data maintenance issues required to reprice each one, and to ensure that the correct coupon amount and timing is used when a coupon is paid – means that detailed benchmark modeling remains extremely difficult. There are also issues involving transparency of benchmark calculations, with many of the underlying actions remaining obscure. Even pricing data can be difficult to come by in some cases. For some Asian benchmarks, illiquid markets can mean that accurate yield data is not published at all, which can make calculation of risks very difficult. Future challenges. The sheer variety of the fixed-income markets, and the pace of innovation in this area, means that provision of an attribution capability from scratch will continue to present significant challenges. Nevertheless, while numerous challenges remain to be solved, the state of fixed-income attribution is much less murky than was the case even five years ago.
[ { "math_id": 0, "text": "P\\left( {y,t} \\right)" }, { "math_id": 1, "text": "\\delta P = \\frac{{\\partial P}}{{\\partial t}}\\delta t + \\frac{{\\partial P}}{{\\partial y}}\\delta y + \\frac{1}{2}\\frac{{\\partial ^2 P}}{{\\partial y^2 }}\\delta y^2 + O\\left( {\\delta t^2 ,\\delta y^3 } \\right)" }, { "math_id": 2, "text": "\\delta r = \\frac{{\\delta P}}{P}" }, { "math_id": 3, "text": "\\delta r = y \\cdot \\delta t - MD \\cdot \\delta y + \\frac{1}{2}C \\cdot \\delta y^2 + O\\left( {\\delta t^2 ,\\delta y^3 } \\right)" }, { "math_id": 4, "text": "MD = - \\frac{1}{P}\\frac{{\\partial P}}{{\\partial y}}" }, { "math_id": 5, "text": "C = \\frac{1}{P}\\frac{{\\partial ^2 P}}{{\\partial y^2 }}" }, { "math_id": 6, "text": "MD" }, { "math_id": 7, "text": "C" }, { "math_id": 8, "text": "y\\left( m \\right) = a_0 + a_1 m + a_2 m^2" }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "a_0, a_1, a_2" }, { "math_id": 11, "text": "y\\left( m \\right)" }, { "math_id": 12, "text": " y\\left( m \\right) = \\beta _0 + \\beta _1\\frac{{\\left[ {1 - \\exp \\left( { - m/\\tau} \\right)} \\right]}}{m/\\tau} + \\beta _2 {\\left(\\frac{{\\left[ {1 - \\exp \\left( { - m/\\tau} \\right)} \\right]}}{m/\\tau} - \\exp \\left( { - m/\\tau}\\right)\\right)} " }, { "math_id": 13, "text": "\\beta _0" }, { "math_id": 14, "text": "\\beta _1" }, { "math_id": 15, "text": "\\beta _2" }, { "math_id": 16, "text": "\\tau" }, { "math_id": 17, "text": " +\\beta _3 {\\left(\\frac{{\\left[ {1 - \\exp \\left( { - m/\\tau_2} \\right)} \\right]}}{m/\\tau_2} - \\exp \\left( { - m/\\tau_2}\\right)\\right)} " }, { "math_id": 18, "text": "a _0" }, { "math_id": 19, "text": "r_{yield} = y \\cdot \\delta t" }, { "math_id": 20, "text": "y" }, { "math_id": 21, "text": "\\delta t" }, { "math_id": 22, "text": "D_e " }, { "math_id": 23, "text": "D_e = - \\frac{{P\\left( {y + \\delta y} \\right) - P\\left( {y - \\delta y} \\right)}}{{2 \\cdot P\\left( y \\right) \\cdot \\delta y}}" }, { "math_id": 24, "text": "P\\left( y \\right)" }, { "math_id": 25, "text": "\\delta r_{yield} = \\sum\\limits_{i = 1}^m {KRD_i \\cdot \\delta y_i } " }, { "math_id": 26, "text": "\\delta r_{yield}^{steepening} = \\sum\\limits_{i = 1}^m {KRD_i \\cdot \\delta y_i^{steepening} } " } ]
https://en.wikipedia.org/wiki?curid=8748548
875134
Galactic halo
Spherical component of a galaxy which extends beyond the main, visible component. A galactic halo is an extended, roughly spherical component of a galaxy which extends beyond the main, visible component. Several distinct components of a galaxy comprise its halo: the stellar halo, the galactic corona, and the dark matter halo, each described below. The distinction between the halo and the main body of the galaxy is clearest in spiral galaxies, where the spherical shape of the halo contrasts with the flat disc. In an elliptical galaxy, there is no sharp transition between the other components of the galaxy and the halo. A halo can be studied by observing its effect on the passage of light from distant bright objects like quasars that are in line of sight beyond the galaxy in question. Components of the galactic halo. Stellar halo. The stellar halo is a nearly spherical population of field stars and globular clusters. It surrounds most disk galaxies as well as some elliptical galaxies of type cD. Only a small fraction (about one percent) of a galaxy's stellar mass resides in the stellar halo, so its luminosity is much lower than that of the other components of the galaxy. The Milky Way's stellar halo contains globular clusters, RR Lyrae stars with low metal content, and subdwarfs. In our stellar halo, stars tend to be old (most are greater than 12 billion years old) and metal-poor, but there are also halo star clusters with observed metal content similar to disk stars. The halo stars of the Milky Way have an observed radial velocity dispersion of about 200 km/s and a low average velocity of rotation of about 50 km/s. Star formation in the stellar halo of the Milky Way ceased long ago. Galactic corona. A galactic corona is a distribution of gas extending far away from the center of the galaxy. It can be detected by the distinct emission spectrum it gives off, showing the presence of HI gas (H I, the 21 cm microwave line) and other features detectable by X-ray spectroscopy. Dark matter halo. The dark matter halo is a theorized distribution of dark matter which extends throughout the galaxy and far beyond its visible components. The mass of the dark matter halo is far greater than the mass of the other components of the galaxy. Its existence is hypothesized in order to account for the gravitational potential that determines the dynamics of bodies within galaxies. The nature of dark matter halos is an important area in current research in cosmology, in particular its relation to galactic formation and evolution. The Navarro–Frenk–White profile is a widely accepted density profile of the dark matter halo determined through numerical simulations. It represents the mass density of the dark matter halo as a function of formula_0, the distance from the galactic center: formula_1 where formula_2 is a characteristic radius for the model, formula_3 is the critical density (with formula_4 being the Hubble constant), and formula_5 is a dimensionless constant. The invisible halo component cannot extend with this density profile indefinitely, however; this would lead to a diverging integral when calculating mass. It does, however, provide a finite gravitational potential for all formula_0. Most measurements that can be made are relatively insensitive to the outer halo's mass distribution. This is a consequence of Newton's laws, which state that if the shape of the halo is spheroidal or elliptical there will be no net gravitational effect from halo mass a distance formula_0 from the galactic center on an object that is closer to the galactic center than formula_0.
The only dynamical variable related to the extent of the halo that can be constrained is the escape velocity: the fastest-moving stellar objects still gravitationally bound to the Galaxy can give a lower bound on the mass profile of the outer edges of the dark halo. Formation of galactic halos. The formation of stellar halos occurs naturally in a cold dark matter model of the universe in which the evolution of systems such as halos occurs from the bottom-up, meaning the large scale structure of galaxies is formed starting with small objects. Halos, which are composed of both baryonic and dark matter, form by merging with each other. Evidence suggests that the formation of galactic halos may also be due to the effects of increased gravity and the presence of primordial black holes. The gas from halo mergers goes toward the formation of the central galactic components, while stars and dark matter remain in the galactic halo. On the other hand, the halo of the Milky Way Galaxy is thought to derive from the Gaia Sausage.
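The NFW profile quoted above is straightforward to evaluate numerically. The following sketch is illustrative only: the parameter values are placeholders rather than a fit to any real galaxy, and the enclosed-mass expression is the standard analytic integral of the profile, whose unbounded growth with radius is the divergence mentioned in the text.

import numpy as np

def nfw_density(r, rho_crit, delta_c, r_s):
    # NFW mass density at galactocentric radius r (same length unit as r_s).
    x = r / r_s
    return rho_crit * delta_c / (x * (1.0 + x) ** 2)

def nfw_enclosed_mass(r, rho_crit, delta_c, r_s):
    # Mass enclosed within radius r, from integrating the profile analytically:
    # M(r) = 4*pi*rho_crit*delta_c*r_s^3 * (ln(1 + r/r_s) - (r/r_s)/(1 + r/r_s)).
    # This grows without bound as r increases, so the profile cannot hold indefinitely.
    x = r / r_s
    return 4.0 * np.pi * rho_crit * delta_c * r_s ** 3 * (np.log(1.0 + x) - x / (1.0 + x))

# Placeholder values: radii in kpc, densities in solar masses per cubic kpc.
r_s, delta_c, rho_crit = 20.0, 2.0e4, 1.4e2
radii = np.array([1.0, 10.0, 50.0, 200.0])
print(nfw_density(radii, rho_crit, delta_c, r_s))
print(nfw_enclosed_mass(radii, rho_crit, delta_c, r_s))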
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": "\\rho (r) = \\frac{\\rho_\\text{crit} \\delta_{c}}{(r/r_{s})(1+r/r_{s})^{2}}" }, { "math_id": 2, "text": "r_{s}" }, { "math_id": 3, "text": "\\rho_\\text{crit} = 3H^2/8 \\pi G" }, { "math_id": 4, "text": "H" }, { "math_id": 5, "text": "\\delta_c" } ]
https://en.wikipedia.org/wiki?curid=875134
8752642
Nuclear structure
Structure of the atomic nucleus. Understanding the structure of the atomic nucleus is one of the central challenges in nuclear physics. Models. The cluster model. The cluster model describes the nucleus as a molecule-like collection of proton-neutron groups (e.g., alpha particles) with one or more valence neutrons occupying molecular orbitals. The liquid drop model. The liquid drop model is one of the first models of nuclear structure, proposed by Carl Friedrich von Weizsäcker in 1935. It describes the nucleus as a semiclassical fluid made up of neutrons and protons, with an internal repulsive electrostatic force proportional to the number of protons. The quantum mechanical nature of these particles appears via the Pauli exclusion principle, which states that no two nucleons of the same kind can be in the same state. Thus the fluid is actually what is known as a Fermi liquid. In this model, the binding energy of a nucleus with formula_0 protons and formula_1 neutrons is given by formula_2 where formula_3 is the total number of nucleons (mass number). The terms proportional to formula_4 and formula_5 represent the volume and surface energy of the liquid drop, the term proportional to formula_6 represents the electrostatic energy, the term proportional to formula_7 represents the Pauli exclusion principle, and the last term formula_8 is the pairing term, which lowers the energy for even numbers of protons or neutrons. The coefficients formula_9 and the strength of the pairing term may be estimated theoretically, or fit to data. This simple model reproduces the main features of the binding energy of nuclei. The treatment of the nucleus as a drop of Fermi liquid is still widely used in the form of the Finite Range Droplet Model (FRDM), because it reproduces nuclear binding energies well across the whole chart of nuclides, with the accuracy needed for predictions of unknown nuclei.
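The binding-energy formula above is easy to evaluate. The sketch below uses one commonly quoted set of fitted coefficients (values differ slightly between references, so they should be treated as illustrative) and the convention that the pairing term is negative for even-even nuclei, so that it increases the binding energy.

def liquid_drop_binding_energy(Z, N, a_V=15.75, a_S=17.8, a_C=0.711, a_A=23.7, a_P=11.18):
    # Semi-empirical (liquid drop) binding energy in MeV for Z protons and N neutrons.
    A = Z + N
    if A % 2 == 1:
        delta = 0.0                    # odd mass number: no pairing correction
    elif Z % 2 == 0:
        delta = -a_P / A ** 0.5        # even-even: pairing increases binding
    else:
        delta = +a_P / A ** 0.5        # odd-odd: pairing decreases binding
    return (a_V * A
            - a_S * A ** (2.0 / 3.0)            # surface term
            - a_C * Z ** 2 / A ** (1.0 / 3.0)   # Coulomb (electrostatic) term
            - a_A * (N - Z) ** 2 / A            # asymmetry (Pauli) term
            - delta)                            # pairing term

# Iron-56 (Z = 26, N = 30): roughly 490 MeV in total, about 8.8 MeV per nucleon.
b = liquid_drop_binding_energy(26, 30)
print(round(b, 1), round(b / 56, 2))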
The shell model. The expression "shell model" is ambiguous in that it refers to two different items. It was previously used to describe the existence of nucleon shells according to an approach closer to what is now called mean field theory. Nowadays, it refers to a formalism analogous to the configuration interaction formalism used in quantum chemistry. Introduction to the shell concept. Systematic measurements of the binding energy of atomic nuclei show systematic deviations with respect to those estimated from the liquid drop model. In particular, some nuclei having certain values for the number of protons and/or neutrons are bound more tightly together than predicted by the liquid drop model. These nuclei are called singly/doubly magic. This observation led scientists to assume the existence of a shell structure of nucleons (protons and neutrons) within the nucleus, like that of electrons within atoms. Indeed, nucleons are quantum objects. Strictly speaking, one should not speak of energies of individual nucleons, because they are all correlated with each other. However, as an approximation one may envision an average nucleus, within which nucleons propagate individually. Owing to their quantum character, they may only occupy "discrete" energy levels. These levels are by no means uniformly distributed; some intervals of energy are crowded, and some are empty, generating a gap in possible energies. A shell is such a set of levels separated from the other ones by a wide empty gap. The energy levels are found by solving the Schrödinger equation for a single nucleon moving in the average potential generated by all other nucleons. Each level may be occupied by a nucleon, or empty. Some levels accommodate several different quantum states with the same energy; they are said to be "degenerate". This occurs in particular if the average nucleus exhibits a certain symmetry, like a spherical shape. The concept of shells allows one to understand why some nuclei are bound more tightly than others. This is because two nucleons of the same kind cannot be in the same state (Pauli exclusion principle). Werner Heisenberg extended the Pauli exclusion principle to nucleons via the introduction of the iso-spin concept. In this picture the neutron and the proton are regarded as two states of the same particle, differing only through an intrinsic property associated with their iso-spin quantum number. This concept enables the explanation of the bound state of deuterium, in which the proton and neutron can couple their spin and iso-spin in two different manners. So the lowest-energy state of the nucleus is one where nucleons fill all energy levels from the bottom up to some level. Nuclei with an odd number of either protons or neutrons are less bound than nuclei with even numbers. A nucleus with full shells is exceptionally stable, as will be explained. As with electrons in the electron shell model, protons in the outermost shell are relatively loosely bound to the nucleus if there are only a few protons in that shell, because they are farthest from the center of the nucleus. Therefore, nuclei which have a full outer proton shell will be more tightly bound and have a higher binding energy than other nuclei with a similar total number of protons. This is also true for neutrons. Furthermore, the energy needed to excite the nucleus (i.e. moving a nucleon to a higher, previously unoccupied level) is exceptionally high in such nuclei. Whenever this unoccupied level is the next after a full shell, the only way to excite the nucleus is to raise one nucleon "across the gap", thus spending a large amount of energy. Otherwise, if the highest occupied energy level lies in a partly filled shell, much less energy is required to raise a nucleon to a higher state in the same shell. Some evolution of the shell structure observed in stable nuclei is expected away from the valley of stability. For example, observations of unstable isotopes have shown shifting and even a reordering of the single particle levels of which the shell structure is composed. This is sometimes observed as the creation of an island of inversion or in the reduction of excitation energy gaps above the traditional magic numbers. Basic hypotheses. Some basic hypotheses are made in order to give a precise conceptual framework to the shell model. Brief description of the formalism. The general process used in the shell model calculations is the following. First, a Hamiltonian for the nucleus is defined. Usually, for computational practicality, only one- and two-body terms are taken into account in this definition. The interaction is an effective theory: it contains free parameters which have to be fitted to experimental data. The next step consists in defining a basis of single-particle states, i.e. a set of wavefunctions describing all possible nucleon states. Most of the time, this basis is obtained via a Hartree–Fock computation.
With this set of one-particle states, Slater determinants are built, that is, wavefunctions for "Z" proton variables or "N" neutron variables, which are antisymmetrized products of single-particle wavefunctions (antisymmetrized meaning that under exchange of variables for any pair of nucleons, the wavefunction only changes sign). In principle, the number of quantum states available for a single nucleon at a finite energy is finite, say "n". The number of nucleons in the nucleus must be smaller than the number of available states, otherwise the nucleus cannot hold all of its nucleons. There are thus several ways to choose "Z" (or "N") states among the "n" possible. In combinatorial mathematics, the number of ways of choosing "Z" objects among "n" is the binomial coefficient C("n", "Z"). If "n" is much larger than "Z" (or "N"), this increases roughly like "n"^"Z". Practically, this number becomes so large that any such computation is impossible for "A" = "N" + "Z" larger than 8. To obviate this difficulty, the space of possible single-particle states is divided into a core and a valence space, by analogy with chemistry (see core electron and valence electron). The core is a set of single-particle states which are assumed to be inactive, in the sense that they are the well-bound lowest-energy states, and that there is no need to reexamine their situation. They do not appear in the Slater determinants, contrary to the states in the valence space, which is the space of all single-particle states "not in the core" that may be considered in building the "Z"- or "N"-body wavefunction. The set of all possible Slater determinants in the valence space defines a basis for the "Z"- or "N"-body states. The last step consists in computing the matrix of the Hamiltonian within this basis, and in diagonalizing it. In spite of the reduction of the dimension of the basis owing to the fixing of the core, the matrices to be diagonalized easily reach dimensions of the order of 10^9, and demand specific diagonalization techniques. The shell model calculations give in general an excellent fit with experimental data. They depend strongly, however, on two main factors: the way the single-particle space is partitioned into core and valence states, and the effective nucleon–nucleon interaction used.
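The combinatorial growth that motivates the core/valence split can be seen with a one-line calculation; the state and nucleon counts below are arbitrary illustrations, not taken from any particular nucleus.

from math import comb

def basis_dimension(n_states, n_nucleons):
    # Number of Slater determinants obtained by distributing identical nucleons
    # over the available single-particle states (protons and neutrons would each
    # contribute such a factor, multiplying the basis sizes together).
    return comb(n_states, n_nucleons)

print(basis_dimension(80, 12))   # roughly 6e13 configurations: hopeless to diagonalize directly
print(basis_dimension(20, 4))    # 4845 configurations: easily handled once a core is frozen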
Mean field theories. The independent-particle model (IPM). The interaction between nucleons, which is a consequence of strong interactions and binds the nucleons within the nucleus, exhibits the peculiar behaviour of having a finite range: it vanishes when the distance between two nucleons becomes too large; it is attractive at medium range, and repulsive at very small range. This last property correlates with the Pauli exclusion principle according to which two fermions (nucleons are fermions) cannot be in the same quantum state. This results in a very large mean free path predicted for a nucleon within the nucleus. The main idea of the Independent Particle approach is that a nucleon moves inside a certain potential well (which keeps it bound to the nucleus) independently from the other nucleons. This amounts to replacing an "N"-body problem ("N" particles interacting) by "N" single-body problems. This essential simplification of the problem is the cornerstone of mean field theories. These are also widely used in atomic physics, where electrons move in a mean field due to the central nucleus and the electron cloud itself. The independent particle model and mean field theories (we shall see that there exist several variants) have had great success in describing the properties of the nucleus starting from an effective interaction or an effective potential, and are thus a basic part of atomic nucleus theory. One should also notice that they are modular enough, in that it is quite easy to extend the model to introduce effects such as nuclear pairing, or collective motions of the nucleons like rotation or vibration, by adding the corresponding energy terms to the formalism. This implies that in many representations, the mean field is only a starting point for a more complete description which introduces correlations reproducing properties like collective excitations and nucleon transfer. Nuclear potential and effective interaction. A large part of the practical difficulties met in mean field theories is the definition (or calculation) of the potential of the mean field itself. One can very roughly distinguish between two approaches: either the mean field potential is postulated phenomenologically, or it is derived self-consistently from an effective nucleon–nucleon interaction, as in the Hartree–Fock approaches. In the case of the Hartree–Fock approaches, the trouble is not to find the mathematical function which best describes the nuclear potential, but that which best describes the nucleon–nucleon interaction. Indeed, in contrast with atomic physics, where the interaction is known (it is the Coulomb interaction), the nucleon–nucleon interaction within the nucleus is not known analytically. There are two main reasons for this fact. First, the strong interaction acts essentially among the quarks forming the nucleons. The nucleon–nucleon interaction in vacuum is a mere "consequence" of the quark–quark interaction. While the latter is well understood in the framework of the Standard Model at high energies, it is much more complicated at low energies due to color confinement and asymptotic freedom. Thus there is as yet no fundamental theory allowing one to deduce the nucleon–nucleon interaction from the quark–quark interaction. Furthermore, even if this problem were solved, there would remain a large difference between the ideal (and conceptually simpler) case of two nucleons interacting in vacuum, and that of these nucleons interacting in nuclear matter. To go further, it was necessary to invent the concept of effective interaction. The latter is basically a mathematical function with several arbitrary parameters, which are adjusted to agree with experimental data. Most modern interactions are zero-range, so they act only when the two nucleons are in contact, as introduced by Tony Skyrme. In a seminal paper by Dominique Vautherin and David M. Brink it was demonstrated that a Skyrme force that is density dependent can reproduce basic properties of atomic nuclei. Another commonly used interaction is the finite-range Gogny force. The self-consistent approaches of the Hartree–Fock type. In the Hartree–Fock approach to the "n"-body problem, the starting point is a Hamiltonian containing "n" kinetic energy terms, and potential terms. As mentioned before, one of the mean field theory hypotheses is that only the two-body interaction is to be taken into account. The potential term of the Hamiltonian represents all possible two-body interactions in the set of "n" fermions. This is the first hypothesis. The second step consists in assuming that the wavefunction of the system can be written as a Slater determinant of one-particle spin-orbitals. This statement is the mathematical translation of the independent-particle model. This is the second hypothesis.
It now remains to determine the components of this Slater determinant, that is, the individual wavefunctions of the nucleons. To this end, it is assumed that the total wavefunction (the Slater determinant) is such that the energy is a minimum. This is the third hypothesis. Technically, it means that one must compute the mean value of the (known) two-body Hamiltonian on the (unknown) Slater determinant, and impose that its mathematical variation vanishes. This leads to a set of equations where the unknowns are the individual wavefunctions: the Hartree–Fock equations. Solving these equations gives the wavefunctions and individual energy levels of the nucleons, and so the total energy of the nucleus and its wavefunction. This short account of the Hartree–Fock method explains why it is also called the variational approach. At the beginning of the calculation, the total energy is a "function of the individual wavefunctions" (a so-called functional), and everything is then done to optimize the choice of these wavefunctions so that the functional has a minimum – hopefully absolute, and not only local. To be more precise, it should be mentioned that the energy is a functional of the density, defined as the sum of the individual squared wavefunctions. This formulation in terms of the density is also the basis of density functional theory (DFT), which is widely used in atomic physics and condensed matter physics. The process of solving the Hartree–Fock equations can only be iterative, since these are in fact a Schrödinger equation in which the potential depends on the density, that is, precisely on the wavefunctions to be determined. Practically, the algorithm is started with a set of roughly reasonable individual wavefunctions (in general the eigenfunctions of a harmonic oscillator). These allow one to compute the density, and from it the Hartree–Fock potential. Once this is done, the Schrödinger equation is solved anew, and so on. The calculation stops – convergence is reached – when the difference between the wavefunctions, or energy levels, of two successive iterations is less than a fixed value. Then the mean field potential is completely determined, and the Hartree–Fock equations become standard Schrödinger equations. The corresponding Hamiltonian is then called the Hartree–Fock Hamiltonian. The relativistic mean field approaches. First developed in the 1970s with the works of John Dirk Walecka on quantum hadrodynamics, the relativistic models of the nucleus were sharpened towards the end of the 1980s by P. Ring and coworkers. The starting point of these approaches is relativistic quantum field theory. In this context, the nucleon interactions occur via the exchange of virtual particles called mesons. The idea is, in a first step, to build a Lagrangian containing these interaction terms. Second, by an application of the least action principle, one gets a set of equations of motion. The real particles (here the nucleons) obey the Dirac equation, whilst the virtual ones (here the mesons) obey the Klein–Gordon equations. In view of the non-perturbative nature of the strong interaction, and also since the exact potential form of this interaction between groups of nucleons is relatively poorly known, the use of such an approach in the case of atomic nuclei requires drastic approximations. The main simplification consists in replacing in the equations all field terms (which are operators in the mathematical sense) by their mean value (which are functions).
In this way, one gets a system of coupled integro-differential equations, which can be solved numerically, if not analytically. The interacting boson model. The interacting boson model (IBM) is a model in nuclear physics in which nucleons are represented as pairs, each of them acting as a boson particle, with integral spin of 0, 2 or 4. This makes calculations feasible for larger nuclei. There are several branches of this model - in one of them (IBM-1) one can group all types of nucleons in pairs, in others (for instance - IBM-2) one considers protons and neutrons in pairs separately. Spontaneous breaking of symmetry in nuclear physics. One of the focal points of all physics is symmetry. The nucleon–nucleon interaction and all effective interactions used in practice have certain symmetries. They are invariant by translation (changing the frame of reference so that directions are not altered), by rotation (turning the frame of reference around some axis), or parity (changing the sense of axes) in the sense that the interaction does not change under any of these operations. Nevertheless, in the Hartree–Fock approach, solutions which are not invariant under such a symmetry can appear. One speaks then of spontaneous symmetry breaking. Qualitatively, these spontaneous symmetry breakings can be explained in the following way: in the mean field theory, the nucleus is described as a set of independent particles. Most additional correlations among nucleons which do not enter the mean field are neglected. They can appear however by a breaking of the symmetry of the mean field Hamiltonian, which is only approximate. If the density used to start the iterations of the Hartree–Fock process breaks certain symmetries, the final Hartree–Fock Hamiltonian may break these symmetries, if it is advantageous to keep these broken from the point of view of the total energy. It may also converge towards a symmetric solution. In any case, if the final solution breaks the symmetry, for example, the rotational symmetry, so that the nucleus appears not to be spherical, but elliptic, all configurations deduced from this deformed nucleus by a rotation are just as good solutions for the Hartree–Fock problem. The ground state of the nucleus is then "degenerate". A similar phenomenon happens with the nuclear pairing, which violates the conservation of the number of baryons (see below). Extensions of the mean field theories. Nuclear pairing phenomenon. The most common extension to mean field theory is the nuclear pairing. Nuclei with an even number of nucleons are systematically more bound than those with an odd one. This implies that each nucleon binds with another one to form a pair, consequently the system cannot be described as independent particles subjected to a common mean field. When the nucleus has an even number of protons and neutrons, each one of them finds a partner. To excite such a system, one must at least use such an energy as to break a pair. Conversely, in the case of odd number of protons or neutrons, there exists an unpaired nucleon, which needs less energy to be excited. This phenomenon is closely analogous to that of Type 1 superconductivity in solid state physics. The first theoretical description of nuclear pairing was proposed at the end of the 1950s by Aage Bohr, Ben Mottelson, and David Pines (which contributed to the reception of the Nobel Prize in Physics in 1975 by Bohr and Mottelson). It was close to the BCS theory of Bardeen, Cooper and Schrieffer, which accounts for metal superconductivity. 
Theoretically, the pairing phenomenon as described by the BCS theory combines with the mean field theory: nucleons are both subject to the mean field potential and to the pairing interaction. The Hartree–Fock–Bogolyubov (HFB) method is a more sophisticated approach, enabling one to treat the pairing and mean field interactions consistently on an equal footing. HFB is now the de facto standard in the mean field treatment of nuclear systems. Symmetry restoration. A peculiarity of mean field methods is that they calculate nuclear properties by explicit symmetry breaking. The calculation of the mean field with self-consistent methods (e.g. Hartree–Fock) breaks rotational symmetry, and the calculation of pairing properties breaks particle-number conservation. Several techniques for symmetry restoration by projecting on good quantum numbers have been developed. Particle vibration coupling. Mean field methods (possibly including symmetry restoration) are a good approximation for the ground state of the system, even though they postulate a system of independent particles. Higher-order corrections take into account the fact that the particles interact with each other by means of correlations. These correlations can be introduced by taking into account the coupling of independent-particle degrees of freedom to the low-energy collective excitations of systems with even numbers of protons and neutrons. In this way, excited states can be reproduced by means of the random phase approximation (RPA), and corrections to the ground state can also be calculated consistently (e.g. by means of nuclear field theory).
[ { "math_id": 0, "text": "Z" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "E_{B} = a_{V} A - a_{S} A^{2/3} - a_{C} \\frac{Z^2}{A^{1/3}} - a_{A} \\frac{(N - Z)^{2}}{A} - \\delta(A,Z)" }, { "math_id": 3, "text": "A=Z+N" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "A^{2/3}" }, { "math_id": 6, "text": "Z^{2}" }, { "math_id": 7, "text": "(N-Z)^{2}" }, { "math_id": 8, "text": "\\delta(A,Z)" }, { "math_id": 9, "text": "a_{V},a_{S},a_{C},a_{A}" } ]
https://en.wikipedia.org/wiki?curid=8752642
8753939
GF method
The GF method, sometimes referred to as FG method, is a classical mechanical method introduced by Edgar Bright Wilson to obtain certain "internal coordinates" for a vibrating semi-rigid molecule, the so-called "normal coordinates" "Q"k. Normal coordinates decouple the classical vibrational motions of the molecule and thus give an easy route to obtaining vibrational amplitudes of the atoms as a function of time. In Wilson's GF method it is assumed that the molecular kinetic energy consists only of harmonic vibrations of the atoms, "i.e.," overall rotational and translational energy is ignored. Normal coordinates appear also in a quantum mechanical description of the vibrational motions of the molecule and the Coriolis coupling between rotations and vibrations. It follows from application of the Eckart conditions that the matrix G−1 gives the kinetic energy in terms of arbitrary linear internal coordinates, while F represents the (harmonic) potential energy in terms of these coordinates. The GF method gives the linear transformation from general internal coordinates to the special set of normal coordinates. The GF method. A non-linear molecule consisting of "N" atoms has 3"N" − 6 internal degrees of freedom, because positioning a molecule in three-dimensional space requires three degrees of freedom, and the description of its orientation in space requires another three degree of freedom. These degrees of freedom must be subtracted from the 3"N" degrees of freedom of a system of "N" particles. The interaction among atoms in a molecule is described by a potential energy surface (PES), which is a function of 3"N" − 6 coordinates. The internal degrees of freedom "s"1, ..., "s"3"N"−6 describing the PES in an optimal way are often non-linear; they are for instance "valence coordinates", such as bending and torsion angles and bond stretches. It is possible to write the quantum mechanical kinetic energy operator for such curvilinear coordinates, but it is hard to formulate a general theory applicable to any molecule. This is why Wilson linearized the internal coordinates by assuming small displacements. The linearized version of the internal coordinate "s"t is denoted by "S"t. The PES "V" can be Taylor expanded around its minimum in terms of the "S"t. The third term (the Hessian of "V") evaluated in the minimum is a force derivative matrix F. In the harmonic approximation the Taylor series is ended after this term. The second term, containing first derivatives, is zero because it is evaluated in the minimum of "V". The first term can be included in the zero of energy. Thus, formula_0 The classical vibrational kinetic energy has the form: formula_1 where "g""st" is an element of the metric tensor of the internal (curvilinear) coordinates. The dots indicate time derivatives. Mixed terms formula_2 generally present in curvilinear coordinates are not present here, because only linear coordinate transformations are used. Evaluation of the metric tensor g in the minimum s0 of "V" gives the positive definite and symmetric matrix G = g(s0)−1. One can solve the two matrix problems formula_3 simultaneously, since they are equivalent to the generalized eigenvalue problem formula_4 where formula_5 where "fi" is equal to formula_6 (formula_7 is the frequency of normal mode "i"); formula_8 is the unit matrix. 
The matrix L−1 contains the "normal coordinates" "Q"k in its rows: formula_9 Because of the form of the generalized eigenvalue problem, the method is called the GF method, often with the name of its originator attached to it: Wilson's GF method. By transposing both sides of the equation and using the fact that both G and F are symmetric matrices (as are diagonal matrices), one can recast this equation into a very similar one for FG. This is why the method is also referred to as Wilson's FG method. We introduce the vectors formula_10 which satisfy the relation formula_11 Upon use of the results of the generalized eigenvalue equation, the energy "E" = "T " + "V" (in the harmonic approximation) of the molecule becomes: formula_12 formula_13 formula_14 The Lagrangian "L" = "T" − "V" is formula_15 The corresponding Lagrange equations are identical to the Newton equations formula_16 for a set of uncoupled harmonic oscillators. These ordinary second-order differential equations are easily solved, yielding "Q""t" as a function of time; see the article on harmonic oscillators. Normal coordinates in terms of Cartesian displacement coordinates. Often the normal coordinates are expressed as linear combinations of Cartesian displacement coordinates. Let RA be the position vector of nucleus A and RA0 the corresponding equilibrium position. Then formula_17 is by definition the "Cartesian displacement coordinate" of nucleus A. Wilson's linearizing of the internal curvilinear coordinates "q""t" expresses the coordinate "S""t" in terms of the displacement coordinates formula_18 where s"t"A is known as a "Wilson s-vector". If we put the formula_19 into a (3"N" − 6) × 3"N" matrix B, this equation becomes in matrix language formula_20 The actual form of the matrix elements of B can be fairly complicated. Especially for a torsion angle, which involves 4 atoms, it requires tedious vector algebra to derive the corresponding values of the formula_19. For more details on this method, known as the "Wilson s-vector method", see the book by Wilson "et al." or the article on molecular vibration. Now, formula_21 which can be inverted and put in summation language: formula_22 Here D is a (3"N" − 6) × 3"N" matrix, which is given by (i) the linearization of the internal coordinates s (an algebraic process) and (ii) solution of Wilson's GF equations (a numeric process). Matrices involved in the analysis. There are several related coordinate systems commonly used in the GF matrix analysis. These quantities are related by a variety of matrices. For clarity, we provide the coordinate systems and their interrelations here. The relevant coordinates are: formula_23 the Cartesian displacement coordinates of the nuclei; formula_24 the (linearized) internal coordinates; formula_25 the mass-weighted Cartesian displacement coordinates; and formula_26 the normal coordinates. These coordinate systems are related to one another by: formula_27, where formula_28 is the matrix built from the Wilson s-vectors introduced above; formula_29, where formula_30 is the diagonal matrix containing the square roots of the atomic masses; formula_31, where formula_32 is the transformation matrix between mass-weighted and normal coordinates; and formula_33, where formula_34 is the matrix of the GF eigenvalue problem discussed above. Note the useful relationship: formula_35 These matrices allow one to construct the G matrix quite simply as formula_36 Relation to Eckart conditions. From the invariance of the internal coordinates "S""t" under overall rotation and translation of the molecule, the same follows for the linearized coordinates s"t"A. It can be shown that this implies that the following 6 conditions are satisfied by the internal coordinates, formula_37 These conditions follow from the Eckart conditions that hold for the displacement vectors, formula_38
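Numerically, the GF eigenvalue problem can be solved with any generalized symmetric eigensolver, since G F L = L Phi is equivalent to F L = G^-1 L Phi. The sketch below uses made-up 2x2 matrices purely to show the mechanics; for a real molecule G would be assembled as B M^-1 B^T from the Wilson B matrix and the atomic masses, and F would come from a quantum-chemical Hessian in the same internal coordinates.

import numpy as np
from scipy.linalg import eigh

# Illustrative, non-physical 2x2 matrices in internal coordinates.
G = np.array([[1.05, 0.10],
              [0.10, 2.30]])   # inverse kinetic-energy metric at equilibrium
F = np.array([[5.70, 0.30],
              [0.30, 0.65]])   # harmonic force-constant matrix

# Solve F L = G^-1 L Phi. scipy normalizes the eigenvectors so that
# L^T G^-1 L = identity, which is exactly Wilson's normalization condition.
phi, L = eigh(F, np.linalg.inv(G))

print("eigenvalues phi_i = 4*pi^2*nu_i^2:", phi)
print("normal coordinate transformation L:\n", L)
print("check L^T G^-1 L:\n", np.round(L.T @ np.linalg.inv(G) @ L, 10))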
[ { "math_id": 0, "text": " 2V \\approx \\sum_{s,t=1}^{3N-6} F_{st} S_s\\, S_t. " }, { "math_id": 1, "text": " 2T = \\sum_{s,t=1}^{3N-6} g_{st}(\\mathbf{s}) \\dot{S}_s\\dot{S}_t ," }, { "math_id": 2, "text": "S_s\\, \\dot{S}_t" }, { "math_id": 3, "text": " \\mathbf{L}^\\mathrm{T} \\mathbf{F} \\mathbf{L} =\\boldsymbol{\\Phi}\n\\quad \\mathrm{and}\\quad \\mathbf{L}^\\mathrm{T} \\mathbf{G}^{-1} \\mathbf{L} = \\mathbf{E},\n" }, { "math_id": 4, "text": "\n\\mathbf{G} \\mathbf{F} \\mathbf{L} = \\mathbf{L} \\boldsymbol{\\Phi},\n" }, { "math_id": 5, "text": "\\boldsymbol{\\Phi}=\\operatorname{diag}(f_1,\\ldots, f_{3N-6}) " }, { "math_id": 6, "text": "4{\\pi}^{2}{\\nu}_i^{2}" }, { "math_id": 7, "text": "{\\nu}_i" }, { "math_id": 8, "text": "\\mathbf{E}\\," }, { "math_id": 9, "text": " Q_k = \\sum_{t=1}^{3N-6} (\\mathbf{L}^{-1})_{kt} S_t , \\quad k=1,\\ldots, 3N-6. \\," }, { "math_id": 10, "text": "\\mathbf{s} = \\operatorname{col}(S_1,\\ldots, S_{3N-6})\n\\quad\\mathrm{and}\\quad\n\\mathbf{Q} = \\operatorname{col}(Q_1,\\ldots, Q_{3N-6}),\n" }, { "math_id": 11, "text": "\n\\mathbf{s} = \\mathbf{L} \\mathbf{Q}.\n" }, { "math_id": 12, "text": "\n2E = \\dot{\\mathbf{s}}^\\mathrm{T} \\mathbf{G}^{-1}\\dot{\\mathbf{s}}+\n\\mathbf{s}^\\mathrm{T}\\mathbf{F}\\mathbf{s}\n" }, { "math_id": 13, "text": "\n= \\dot{\\mathbf{Q}}^\\mathrm{T} \\; \\left( \\mathbf{L}^\\mathrm{T} \\mathbf{G}^{-1} \\mathbf{L}\\right) \\; \\dot{\\mathbf{Q}}+\n\\mathbf{Q}^\\mathrm{T} \\left( \\mathbf{L}^\\mathrm{T}\\mathbf{F}\\mathbf{L}\\right)\\; \\mathbf{Q}\n" }, { "math_id": 14, "text": "\n= \\dot{\\mathbf{Q}}^\\mathrm{T}\\dot{\\mathbf{Q}} + \\mathbf{Q}^\\mathrm{T}\\boldsymbol{\\Phi}\\mathbf{Q}\n = \\sum_{t=1}^{3N-6} \\big( \\dot{Q}_t^2 + f_t Q_t^2 \\big).\n" }, { "math_id": 15, "text": "\nL = \\frac{1}{2} \\sum_{t=1}^{3N-6} \\big( \\dot{Q}_t^2 - f_t Q_t^2 \\big).\n" }, { "math_id": 16, "text": "\n\\ddot{Q}_t + f_t \\,Q_t = 0\n" }, { "math_id": 17, "text": " \\mathbf{x}_A \\equiv \\mathbf{R}_A -\\mathbf{R}_A^0 " }, { "math_id": 18, "text": "\nS_t =\\sum_{A=1}^N \\sum_{i=1}^3 s^t_{Ai} \\, x_{Ai}= \\sum_{A=1}^N \\mathbf{s}^t_{A} \\cdot \\mathbf{x}_{A}, \\quad \\mathrm{for}\\quad t = 1,\\ldots,3N-6,\n" }, { "math_id": 19, "text": "s^t_{Ai}" }, { "math_id": 20, "text": " \\mathbf{s} = \\mathbf{B} \\mathbf{x}. 
" }, { "math_id": 21, "text": "\n\\mathbf{s} = \\mathbf{L} \\mathbf{Q} = \\mathbf{L} \\mathbf{l}^{tr} \\mathbf{q} = \\mathbf{B} \\mathbf{M}^{-1/2} \\mathbf{q} \\equiv \\mathbf{D} \\mathbf{q},\n" }, { "math_id": 22, "text": "\nQ_k = \\sum_{A=1}^N \\sum_{i=1}^3 D^k_{Ai}\\, d_{Ai} \\quad \\mathrm{for}\\quad k=1,\\ldots, 3N-6.\n" }, { "math_id": 23, "text": "\\mathbf{x}:" }, { "math_id": 24, "text": "\\mathbf{s}: " }, { "math_id": 25, "text": "\\mathbf{q}:" }, { "math_id": 26, "text": "\\mathbf{Q}: " }, { "math_id": 27, "text": "\\mathbf{s} = \\mathbf{B}\\mathbf{x}" }, { "math_id": 28, "text": "\\mathbf{B}" }, { "math_id": 29, "text": "\\mathbf{x} = \\mathbf{M}^{-1/2}\\mathbf{q}," }, { "math_id": 30, "text": "\\mathbf{M}^{1/2}" }, { "math_id": 31, "text": "\\mathbf{q} = \\mathbf{l} \\mathbf{Q}," }, { "math_id": 32, "text": "\\mathbf{l}" }, { "math_id": 33, "text": "\\mathbf{s} = \\mathbf{L}\\mathbf{Q}," }, { "math_id": 34, "text": "\\mathbf{L}" }, { "math_id": 35, "text": "\n\\mathbf{L}=\\mathbf{B} \\mathbf{M}^{-1/2} \\mathbf{l}.\n" }, { "math_id": 36, "text": "\n\\mathbf{G} = \\mathbf{B} \\mathbf{M}^{-1} \\mathbf{B}^{\\rm T}.\n" }, { "math_id": 37, "text": "\n\\sum_{A=1}^N \\mathbf{s}^t_{A} = 0\\quad\\mathrm{and}\\quad\n\\sum_{A=1}^N \\mathbf{R}^0_A\\times \\mathbf{s}^t_A= 0, \\quad t=1,\\ldots,3N-6.\n" }, { "math_id": 38, "text": " \\sum_{A=1}^N M_A\\; \\mathbf{d}_{A} = 0 \\quad\\mathrm{and}\\quad\n\\sum_{A=1}^N M_A\\; \\mathbf{R}^0_{A} \\times \\mathbf{d}_{A} = 0.\n" } ]
https://en.wikipedia.org/wiki?curid=8753939
8754928
Digital signature forgery
In a cryptographic digital signature or MAC system, digital signature forgery is the ability to create a pair consisting of a message, formula_0, and a signature (or MAC), formula_1, that is valid for formula_0, but has not been created in the past by the legitimate signer. There are different types of forgery. To each of these types, security definitions can be associated. A signature scheme is secure by a specific definition if no forgery of the associated type is possible. Types. The following definitions are ordered from lowest to highest achieved security, in other words, from the most powerful to the weakest attack. The definitions form a hierarchy, meaning that an attacker able to mount a specific attack can execute all the attacks further down the list. Likewise, a scheme that reaches a certain security goal also reaches all prior ones. Total break. More general than the following attacks, there is also a "total break": if an adversary can recover the private information and keys used by the signer, they can create any possible signature on any message. Universal forgery (universal unforgeability, UUF). Universal forgery is the creation (by an adversary) of a valid signature, formula_1, for "any" given message, formula_0. An adversary capable of universal forgery is able to sign messages they choose themselves (as in selective forgery), messages chosen at random, or even specific messages provided by an opponent. Selective forgery (selective unforgeability, SUF). Selective forgery is the creation of a message/signature pair formula_2 by an adversary, where formula_0 has been "chosen" by the attacker prior to the attack. formula_0 may be chosen to have interesting mathematical properties with respect to the signature algorithm; however, in selective forgery, formula_0 must be fixed before the start of the attack. The ability to successfully conduct a selective forgery attack implies the ability to successfully conduct an existential forgery attack. Existential forgery. Existential forgery (existential unforgeability, EUF) is the creation (by an adversary) of at least one message/signature pair, formula_2, where formula_0 has never been signed by the legitimate signer. The adversary can choose formula_0 freely; formula_0 need not have any particular meaning; the message content is irrelevant. As long as the pair, formula_2, is valid, the adversary has succeeded in constructing an existential forgery. Thus, creating an existential forgery is easier than a selective forgery, because the attacker may select a message formula_0 for which a forgery can easily be created, whereas in the case of a selective forgery, the challenger can ask for the signature of a "difficult" message. Example of an existential forgery. The RSA cryptosystem has the following multiplicative property: formula_3. This property can be exploited by creating a message formula_4 with a signature formula_5. A common defense to this attack is to hash the messages before signing them. Weak existential forgery (strong existential unforgeability, strong unforgeability; sEUF, or SUF). This notion is a stronger (more secure) variant of the existential forgery detailed above. Weak existential forgery is the creation (by an adversary) of at least one message/signature pair, formula_6, given a number of different message-signature pairs formula_2 produced by the legitimate signer. 
In contrast to existential forgeries, an adversary is also considered successful if they manage to create a new signature for an already signed message formula_7. Strong existential forgery is essentially the weakest adversarial goal, therefore the strongest schemes are those that are "strongly existentially unforgeable". References. <templatestyles src="Reflist/styles.css" />
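To make the multiplicative property used in the existential forgery example above concrete, the following sketch uses textbook (unpadded) RSA with tiny toy parameters; the primes, exponents and messages are hypothetical values chosen purely for illustration and are far too small to be secure.

```python
# Textbook RSA (no padding), illustrating sigma(m1)*sigma(m2) = sigma(m1*m2) mod n.
p, q = 61, 53
n = p * q                       # public modulus
phi = (p - 1) * (q - 1)
e = 17                          # public exponent
d = pow(e, -1, phi)             # private exponent (known only to the signer)

def sign(m):                    # performed by the legitimate signer
    return pow(m, d, n)

def verify(m, s):               # anyone can verify with the public key (n, e)
    return pow(s, e, n) == m % n

m1, m2 = 42, 55
s1, s2 = sign(m1), sign(m2)     # two legitimately signed messages

# The forger multiplies the known pairs to obtain a new valid pair:
m_forged = (m1 * m2) % n
s_forged = (s1 * s2) % n
print(verify(m_forged, s_forged))   # True: a signature the signer never produced
```

Hashing the message before signing, as noted above, defeats this particular trick because the hash of a product bears no useful relation to the product of the hashes.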
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "\\sigma" }, { "math_id": 2, "text": "(m, \\sigma)" }, { "math_id": 3, "text": "\\sigma(m_1) \\cdot \\sigma(m_2) = \\sigma (m_1 \\cdot m_2)" }, { "math_id": 4, "text": "m' = m_1 \\cdot m_2" }, { "math_id": 5, "text": "\\sigma\\left(m'\\right) = \\sigma (m_1 \\cdot m_2)= \\sigma(m_1) \\cdot \\sigma(m_2)" }, { "math_id": 6, "text": "\\left(m', \\sigma'\\right)" }, { "math_id": 7, "text": "m'" } ]
https://en.wikipedia.org/wiki?curid=8754928
875509
Jacobi field
In Riemannian geometry, a Jacobi field is a vector field along a geodesic formula_0 in a Riemannian manifold describing the difference between the geodesic and an "infinitesimally close" geodesic. In other words, the Jacobi fields along a geodesic form the tangent space to the geodesic in the space of all geodesics. They are named after Carl Jacobi. Definitions and properties. Jacobi fields can be obtained in the following way: Take a smooth one parameter family of geodesics formula_1 with formula_2, then formula_3 is a Jacobi field, and describes the behavior of the geodesics in an infinitesimal neighborhood of a given geodesic formula_0. A vector field "J" along a geodesic formula_0 is said to be a Jacobi field if it satisfies the Jacobi equation: formula_4 where "D" denotes the covariant derivative with respect to the Levi-Civita connection, "R" the Riemann curvature tensor, formula_5 the tangent vector field, and "t" is the parameter of the geodesic. On a complete Riemannian manifold, for any Jacobi field there is a family of geodesics formula_1 describing the field (as in the preceding paragraph). The Jacobi equation is a linear, second order ordinary differential equation; in particular, values of formula_6 and formula_7 at one point of formula_0 uniquely determine the Jacobi field. Furthermore, the set of Jacobi fields along a given geodesic forms a real vector space of dimension twice the dimension of the manifold. As trivial examples of Jacobi fields one can consider formula_8 and formula_9. These correspond respectively to the following families of reparametrisations: formula_10 and formula_11. Any Jacobi field formula_6 can be represented in a unique way as a sum formula_12, where formula_13 is a linear combination of trivial Jacobi fields and formula_14 is orthogonal to formula_8, for all formula_15. The field formula_16 then corresponds to the same variation of geodesics as formula_6, only with changed parameterizations. Motivating example. On a unit sphere, the geodesics through the North pole are great circles. Consider two such geodesics formula_17 and formula_1 with natural parameter, formula_18, separated by an angle formula_19. The geodesic distance formula_20 is formula_21 Computing this requires knowing the geodesics. The most interesting information is just that formula_22, for any formula_19. Instead, we can consider the derivative with respect to formula_19 at formula_23: formula_24 Notice that we still detect the intersection of the geodesics at formula_25. Notice further that to calculate this derivative we do not actually need to know formula_20, rather, all we need do is solve the equation formula_26, for some given initial data. Jacobi fields give a natural generalization of this phenomenon to arbitrary Riemannian manifolds. Solving the Jacobi equation. Let formula_27 and complete this to get an orthonormal basis formula_28 at formula_29. Parallel transport it to get a basis formula_30 all along formula_0. This gives an orthonormal basis with formula_31. The Jacobi field can be written in co-ordinates in terms of this basis as formula_32 and thus formula_33 and the Jacobi equation can be rewritten as a system formula_34 for each formula_35. This way we get a linear ordinary differential equation (ODE). Since this ODE has smooth coefficients we have that solutions exist for all formula_15 and are unique, given formula_36 and formula_37, for all formula_35. Examples. 
Consider a geodesic formula_38 with parallel orthonormal frame formula_39, formula_40, constructed as above. The vector fields along the geodesic given by formula_41 and formula_42 are Jacobi fields. In Euclidean space, as well as for spaces of constant zero sectional curvature, Jacobi fields are simply those fields linear in "t". For Riemannian manifolds of constant negative sectional curvature formula_43, any Jacobi field is a linear combination of formula_41, formula_42 and formula_44, where formula_45. For Riemannian manifolds of constant positive sectional curvature formula_46, any Jacobi field is a linear combination of formula_41, formula_42, formula_47 and formula_48, where formula_45.
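As a sanity check on the motivating example above, the Jacobi equation along a great circle on the unit sphere reduces, for the component orthogonal to the geodesic, to the scalar equation y'' + y = 0. A minimal numerical sketch, assuming SciPy is available:

```python
# Solve y'' + K y = 0 with K = 1 (unit sphere), y(0) = 0, y'(0) = 1,
# and compare with the closed-form answer |J(t)| = sin t.
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0  # constant sectional curvature of the unit sphere

def jacobi_rhs(t, state):
    y, yp = state
    return [yp, -K * y]

t_eval = np.linspace(0.0, np.pi, 200)
sol = solve_ivp(jacobi_rhs, (0.0, np.pi), [0.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

print(np.max(np.abs(sol.y[0] - np.sin(t_eval))))   # small: the numerical J matches sin t
print(abs(sol.y[0][-1]))                            # ~0: the field vanishes again at t = pi,
                                                    # detecting the intersection at the antipode
```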
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "\\gamma_\\tau" }, { "math_id": 2, "text": "\\gamma_0=\\gamma" }, { "math_id": 3, "text": "J(t)=\\left.\\frac{\\partial\\gamma_\\tau(t)}{\\partial \\tau}\\right|_{\\tau=0}" }, { "math_id": 4, "text": "\\frac{D^2}{dt^2}J(t)+R(J(t),\\dot\\gamma(t))\\dot\\gamma(t)=0," }, { "math_id": 5, "text": "\\dot\\gamma(t)=d\\gamma(t)/dt" }, { "math_id": 6, "text": "J" }, { "math_id": 7, "text": "\\frac{D}{dt}J" }, { "math_id": 8, "text": "\\dot\\gamma(t)" }, { "math_id": 9, "text": "t\\dot\\gamma(t)" }, { "math_id": 10, "text": "\\gamma_\\tau(t)=\\gamma(\\tau+t)" }, { "math_id": 11, "text": "\\gamma_\\tau(t)=\\gamma((1+\\tau)t)" }, { "math_id": 12, "text": "T+I" }, { "math_id": 13, "text": "T=a\\dot\\gamma(t)+bt\\dot\\gamma(t)" }, { "math_id": 14, "text": "I(t)" }, { "math_id": 15, "text": "t" }, { "math_id": 16, "text": "I" }, { "math_id": 17, "text": "\\gamma_0" }, { "math_id": 18, "text": "t\\in [0,\\pi]" }, { "math_id": 19, "text": "\\tau" }, { "math_id": 20, "text": "d(\\gamma_0(t),\\gamma_\\tau(t)) \\," }, { "math_id": 21, "text": "d(\\gamma_0(t),\\gamma_\\tau(t))=\\sin^{-1}\\bigg(\\sin t\\sin\\tau\\sqrt{1+\\cos^2 t\\tan^2(\\tau/2)}\\bigg)." }, { "math_id": 22, "text": "d(\\gamma_0(\\pi),\\gamma_\\tau(\\pi))=0 \\," }, { "math_id": 23, "text": "\\tau=0" }, { "math_id": 24, "text": "\\frac{\\partial}{\\partial\\tau}\\bigg|_{\\tau=0}d(\\gamma_0(t),\\gamma_\\tau(t))=|J(t)|=\\sin t." }, { "math_id": 25, "text": "t=\\pi" }, { "math_id": 26, "text": "y''+y=0 \\," }, { "math_id": 27, "text": "e_1(0)=\\dot\\gamma(0)/|\\dot\\gamma(0)|" }, { "math_id": 28, "text": "\\big\\{e_i(0)\\big\\}" }, { "math_id": 29, "text": "T_{\\gamma(0)}M" }, { "math_id": 30, "text": "\\{e_i(t)\\}" }, { "math_id": 31, "text": "e_1(t)=\\dot\\gamma(t)/|\\dot\\gamma(t)|" }, { "math_id": 32, "text": "J(t)=y^k(t)e_k(t)" }, { "math_id": 33, "text": "\\frac{D}{dt}J=\\sum_k\\frac{dy^k}{dt}e_k(t),\\quad\\frac{D^2}{dt^2}J=\\sum_k\\frac{d^2y^k}{dt^2}e_k(t)," }, { "math_id": 34, "text": "\\frac{d^2y^k}{dt^2}+|\\dot\\gamma|^2\\sum_j y^j(t)\\langle R(e_j(t),e_1(t))e_1(t),e_k(t)\\rangle=0" }, { "math_id": 35, "text": "k" }, { "math_id": 36, "text": "y^k(0)" }, { "math_id": 37, "text": "{y^k}'(0)" }, { "math_id": 38, "text": "\\gamma(t)" }, { "math_id": 39, "text": "e_i(t)" }, { "math_id": 40, "text": "e_1(t)=\\dot\\gamma(t)/|\\dot\\gamma|" }, { "math_id": 41, "text": "\\dot \\gamma(t)" }, { "math_id": 42, "text": "t\\dot \\gamma(t)" }, { "math_id": 43, "text": "-k^2" }, { "math_id": 44, "text": "\\exp(\\pm kt)e_i(t)" }, { "math_id": 45, "text": "i>1" }, { "math_id": 46, "text": "k^2" }, { "math_id": 47, "text": "\\sin(kt)e_i(t)" }, { "math_id": 48, "text": "\\cos(kt)e_i(t)" } ]
https://en.wikipedia.org/wiki?curid=875509
875650
Currency symbol
Symbol used to represent a monetary currency's name A currency symbol or currency sign is a graphic symbol used to denote a currency unit. Usually it is defined by a monetary authority, such as the national central bank for the currency concerned. A symbol may be positioned in various ways, according to national convention: before, between or after the numeric amounts: , and . Symbols are neither defined nor listed by international standard ISO 4217, which only assigns three-letter codes. Usage. When writing currency amounts, the location of the symbol varies by language. For currencies in English-speaking countries and in most of Latin America, the symbol is placed before the amount, as in . In most other countries, including many in Europe, the symbol is placed after the amount, as in . Exceptionally, the symbol for the Cape Verdean escudo (like the Portuguese escudo, to which it was formerly pegged) is placed in the decimal separator position, as in . Design. Older currency symbols have evolved slowly, often from previous currencies. The modern dollar and peso symbols originated from the mark employed to denote the Spanish dollar, whereas the pound and lira symbols evolved from the letter L (written until the seventeenth century in blackletter type as formula_0) standing for "libra", a Roman pound of silver. Newly invented currencies and currencies adopting new symbols have symbolism meaningful to their adopter. For example, the euro sign € is based on , an archaic form of the Greek epsilon, to represent Europe; the Indian rupee sign ₹ is a blend of the Latin letter 'R' with the Devanagari letter र (ra); and the Russian ruble sign ₽ is based on Р (the Cyrillic capital letter 'er'). There are other considerations, such as how the symbol is rendered on computers and in typesetting. For a new symbol to be used, its glyph needs to be added to computer fonts and keyboard mappings already in widespread use, and keyboard layouts need to be altered or shortcuts added to type the new symbol. For example, the European Commission was criticized for not considering how the euro sign would need to be customized to work in different fonts. The original design was also exceptionally wide. These two factors have led to most type foundries designing customized versions that match the 'look and feel' of the font to which it is to be added, often with reduced width. List of historical currency symbols. Some of these symbols may not display correctly. The Unicode CJK Compatibility block contains several square versions of the names of currencies in Japanese katakana. They are intended for compatibility with earlier character sets. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathfrak{L}" } ]
https://en.wikipedia.org/wiki?curid=875650
8757491
COIN-OR
Computational Infrastructure for Operations Research (COIN-OR) is a project that aims to "create for mathematical software what the open literature is for mathematical theory." The open literature (e.g., a research journal) provides the operations research (OR) community with a peer-review process and an archive. Papers in operations research journals on mathematical theory often contain supporting numerical results from computational studies. The software implementations, models, and data used to produce the numerical results are typically not published. The status quo impeded researchers needing to reproduce computational results, make fair comparisons, and extend the state of the art. The success of Linux, Apache, and other projects popularized the open-source model of software development and distribution. A group at IBM Research proposed open source as an analogous yet viable means to "publish" software, models, and data. COIN-OR was conceived as an initiative to promote open source in the computational operations research community and to provide the on-line resources and hosting services required to enable others to run their own open-source software projects. The COIN-OR website was launched as an experiment in 2000, in conjunction with the 17th International Symposium on Math Programming in Atlanta, Georgia. In 2007, COIN-OR had 25 application projects, including tools for linear programming (e.g., COIN-OR CLP), nonlinear programming (e.g., IPOPT), integer programming (e.g., CBC, Bcp and COIN-OR SYMPHONY), algebraic modeling languages (e.g., Coopr) and more. By 2011, this had grown to 48 projects. COIN-OR is hosted by the Institute for Operations Research and the Management Sciences, INFORMS, and run by the educational, non-profit COIN-OR Foundation. Projects. CLP. COIN-OR LP (CLP or Clp) is an open-source linear programming solver written in C++. It is published under the Common Public License so it can be used in proprietary software with none of the restrictions of the GNU General Public License. CLP is primarily meant to be used as a callable library, although a stand-alone executable version can be built. It is designed to be as reliable as any commercial solver, although several times slower, and to be able to tackle very large problems. CLP is designed to solve linear programming problems such as: minimize formula_0 subject to formula_1 formula_2 formula_3 formula_4 formula_5 with up to millions of variables and/or constraints. Its main algorithm is the simplex algorithm. CLP is used in other COIN-OR projects such as SYMPHONY, Branch Cut and Price (BCP), COIN-OR Branch and Cut (CBC), and others. CBC. COIN-OR branch and cut (CBC or Cbc) is an open-source mixed integer programming solver written in C++. It can be used as both a stand-alone executable and as a callable library (through "A Mathematical Programming Language" (AMPL) [natively], "General Algebraic Modeling System" (GAMS) [using the links provided by the "COIN-OR Optimization Services" (OS) and "GAMSlinks" projects], MPL [through the "CoinMP" project], AIMMS [through the "AIMMSlinks" project], PuLP, CMPL, OpenSolver for Excel, JuMP, or MiniZinc). Although it has been a popular choice of open source MIP solver for many years, its performance is now significantly inferior to HiGHS. SYMPHONY. Single- or multi-process optimization over networks (SYMPHONY) is an open source branch and cut framework for solving mixed integer programs (MIPs) over heterogeneous networks. 
It can use CLP, CPLEX, XPRESS or other linear programming solvers to solve the underlying linear programs. SYMPHONY is a callable library which implements both sequential and parallel versions of branch, cut and price to solve MILPs. A branch, cut and price algorithm is similar to a branch and bound algorithm but additionally includes cutting-plane methods and pricing algorithms. The user of the library can customize the algorithm in any number of ways by supplying application-specific subroutines for reading in custom data files, generating application-specific cutting planes, or applying custom branching rules, resulting in a customized branch and cut algorithm. Most components of the algorithm, e.g., search tree management, management of linear programming solution, cut pool management, and communication management, are internal to the library and need not be touched by the user. The executables can be built in any number of configurations ranging from completely sequential to fully parallel with independently functioning cut generators, cut pools, and LP solvers. The distributed version currently runs in any environment supported by the PVM message passing protocol. The same source code can also be compiled for shared-memory architectures using any OpenMP compliant compiler. SYMPHONY reads MPS (through the COIN-OR MPS reader) and GNU MathProg files. SYMPHONY does not have an LP-Solver of its own, but can be used with solvers like Clp, Cplex, Xpress through the Osi-interface. Cuts are generated using COIN's cut generation library: CGL. SYMPHONY also has structure specific implementations for problems like the traveling salesman problem, vehicle routing problem, set partitioning problem, mixed postman problem, etc. SYMPHONY also has an interactive shell where the user can enter commands to execute and control the program. PuLP. PuLP is an LP/IP modeler written in Python. It can generate MPS or LP files and call GLPK, CLP/CBC, and CPLEX, to solve linear problems. PuLP is the default optimization tool in SolverStudio for Excel. SMI. SMI is a stochastic programming modeler and solver written in C++. It can read Stochastic MPS and offers direct interfaces for constructing stochastic programs. It generates the deterministic equivalent linear program, solves it, and provides interfaces to access the scenario solutions.
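As a concrete illustration of the modeling layers described above, the following sketch builds a small linear program of the same shape as the CLP example (two variables, three inequality constraints, non-negativity) in PuLP and solves it with the bundled CBC solver. The numeric coefficients are hypothetical placeholders, not taken from any benchmark.

```python
# A toy LP modeled with PuLP and solved with CBC (PuLP's default solver).
from pulp import LpProblem, LpMinimize, LpVariable, LpStatus, value

prob = LpProblem("clp_style_example", LpMinimize)
x1 = LpVariable("x1", lowBound=0)          # x1 >= 0
x2 = LpVariable("x2", lowBound=0)          # x2 >= 0

prob += 3 * x1 + 2 * x2                    # objective c1*x1 + c2*x2
prob += x1 + 2 * x2 >= 14                  # hypothetical constraint 1
prob += 3 * x1 - x2 <= 8                   # hypothetical constraint 2
prob += x1 + x2 <= 20                      # hypothetical constraint 3

prob.solve()                               # writes the model out and calls CBC
print(LpStatus[prob.status], value(prob.objective), x1.value(), x2.value())
```

The same model object can be handed to the other solvers PuLP knows about (GLPK, CPLEX, and so on) without changing the formulation.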
[ { "math_id": 0, "text": "c_1 x_1 + c_2 x_2\\," }, { "math_id": 1, "text": "a_{11} x_1 + a_{12} x_2 \\le b_1" }, { "math_id": 2, "text": "a_{21} x_1 + a_{22} x_2 \\le b_2" }, { "math_id": 3, "text": "a_{31} x_1 + a_{32} x_2 \\le b_3" }, { "math_id": 4, "text": "x_1 \\ge 0" }, { "math_id": 5, "text": "x_2 \\ge 0" } ]
https://en.wikipedia.org/wiki?curid=8757491
875871
Bowling average
Statistic used to compare cricket bowlers In cricket, a player's bowling average is the number of runs they have conceded per wicket taken. The lower the bowling average is, the better the bowler is performing. It is one of a number of statistics used to compare bowlers, commonly used alongside the economy rate and the strike rate to judge the overall performance of a bowler. When a bowler has taken only a small number of wickets, their bowling average can be artificially high or low, and unstable, with further wickets taken or runs conceded resulting in large changes to their bowling average. Due to this, qualification restrictions are generally applied when determining which players have the best bowling averages. After applying these criteria, George Lohmann holds the record for the lowest average in Test cricket, having claimed 112 wickets at an average of 10.75 runs per wicket. Calculation. A cricketer's bowling average is calculated by dividing the number of runs they have conceded by the number of wickets they have taken. The number of runs conceded by a bowler is determined as the total number of runs that the opposing side have scored while the bowler was bowling, excluding any byes, leg byes, or penalty runs. The bowler receives credit for any wickets taken during their bowling that are either bowled, caught, hit wicket, leg before wicket or stumped. formula_0 A number of flaws have been identified for the statistic, most notable among these being the fact that a bowler who has taken no wickets cannot have a bowling average, as dividing by zero does not give a result. The effect of this is that the bowling average cannot distinguish between a bowler who has taken no wickets and conceded one run, and a bowler who has taken no wickets and conceded one hundred runs. The bowling average also does not tend to give a true reflection of the bowler's ability when the number of wickets they have taken is small, especially in comparison to the number of runs they have conceded. In his paper proposing an alternative method of judging batsmen and bowlers, Paul van Staden gives an example of this: Suppose a bowler has bowled a total of 80 balls, conceded 60 runs and has taken only 2 wickets so that... [their average is] 30. If the bowler takes a wicket with the next ball bowled (no runs obviously conceded), then [their average is] 20. Due to this, when establishing records for bowling averages, qualification criteria are generally set. For Test cricket, the "Wisden Cricketers' Almanack" sets this as 75 wickets, while ESPNcricinfo requires 2,000 deliveries. Similar restrictions are set for one-day cricket. Variations. A number of factors other than purely the ability level of the bowler have an effect on a player's bowling average. Most significant among these are the different eras in which cricket has been played. The bowling average tables in Test and first-class cricket are headed by players who competed in the nineteenth century, a period when pitches were uncovered and some were so badly looked after that they had rocks on them. Bowlers competing in the Howa Bowl, a competition played in South Africa during the apartheid era and restricted to non-white players, also benefited from poorly prepared pitches; according to Vincent Barnes: "Most of the wickets we played on were underprepared. For me, as a bowler, it was great." Other factors which provided an advantage to bowlers in that era included the lack of significant safety equipment; batting gloves and helmets were not worn, and batsmen had to be warier. 
Other variations are caused by frequent matches against stronger or weaker opposition, changes in the laws of cricket and the length of matches. Records. Due to the varying qualifying restrictions placed on the records by different statisticians, the record for the lowest career bowling average can be different from publication to publication. Test cricket. In Test cricket, George Lohmann is listed as having the superior average by each of the "Wisden Cricketers' Almanack", ESPNcricinfo and CricketArchive. Though all three use different restrictions, Lohmann's average of 10.75 is considered the best. If no qualification criteria were applied at all, three players (Wilf Barber, A. N. Hornby and Bruce Murray) would tie for the best average, all having claimed just one wicket in Test matches, without conceding any runs, thus averaging zero. ESPNcricinfo list Betty Wilson as having the best Women's Test cricket average with 11.80, while CricketArchive accept Mary Spear's average of 5.78. One Day Internationals. In One Day Internationals, the varying criteria set by ESPNcricinfo and CricketArchive result in the same player being listed as holding the record. ESPNcricinfo has the stricter restriction, requiring 1,000 deliveries, whereas CricketArchive has a more relaxed requirement of 400 deliveries. By both measures, Sandeep Lamichhane is the record-holder, having claimed his wickets at an average of 18.06. In women's One Day International cricket, Caroline Barrs tops the CricketArchive list with an average of 9.52, but by ESPNcricinfo's stricter guidelines, the record is instead held by Gill Smith's 12.53. T20 Internationals. The record is again split between the two websites for Twenty20 International cricket. In this situation ESPNcricinfo has the lower boundary, requiring just 30 balls to have been bowled; Dhruv Maisuria's average of 9.46 holds the record using those criteria. But the stricter 200 deliveries required by CricketArchive results in Andre Botha being listed as the record-holder, averaging 8.76. First Class cricket. Domestically, the records for first-class cricket are dominated by players from the nineteenth century, who make up sixteen of the top twenty by ESPNcricinfo's criteria of 5,000 deliveries. William Lillywhite, who was active from 1825 to 1853, has the lowest average, claiming his 1,576 wickets at an average of just 1.54. The leading players from the twentieth century are Stephen Draai and Vincent Barnes with averages of just under twelve, both of whom claimed the majority of their wickets in the South African Howa Bowl tournament during the apartheid era. References. <templatestyles src="Reflist/styles.css" />
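A minimal sketch of the calculation described in the Calculation section, reproducing van Staden's illustration; the helper function and its handling of the zero-wicket case are illustrative only.

```python
# Bowling average = runs conceded / wickets taken; it is undefined when no
# wicket has been taken, which is the flaw discussed above.
def bowling_average(runs_conceded, wickets_taken):
    if wickets_taken == 0:
        return None                    # no meaningful average exists
    return runs_conceded / wickets_taken

print(bowling_average(60, 2))          # 30.0, as in van Staden's example
print(bowling_average(60, 3))          # 20.0 after a wicket with the next ball
print(bowling_average(60, 0))          # None: dividing by zero gives no result
```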
[ { "math_id": 0, "text": "\\mathrm{Bowling~average} = \\frac{\\mathrm{Runs~conceded}}{\\mathrm{Wickets~taken}}" } ]
https://en.wikipedia.org/wiki?curid=875871
8761319
Skorokhod's embedding theorem
In mathematics and probability theory, Skorokhod's embedding theorem is either or both of two theorems that allow one to regard any suitable collection of random variables as a Wiener process (Brownian motion) evaluated at a collection of stopping times. Both results are named for the Ukrainian mathematician A. V. Skorokhod. Skorokhod's first embedding theorem. Let "X" be a real-valued random variable with expected value 0 and finite variance; let "W" denote a canonical real-valued Wiener process. Then there is a stopping time (with respect to the natural filtration of "W"), "τ", such that "W""τ" has the same distribution as "X", formula_0 and formula_1 Skorokhod's second embedding theorem. Let "X"1, "X"2, ... be a sequence of independent and identically distributed random variables, each with expected value 0 and finite variance, and let formula_2 Then there is a sequence of stopping times "τ"1 ≤ "τ"2 ≤ ... such that the formula_3 have the same joint distributions as the partial sums "S""n" and "τ"1, "τ"2 − "τ"1, "τ"3 − "τ"2, ... are independent and identically distributed random variables satisfying formula_4 and formula_5
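A small Monte Carlo sketch of the first theorem in its simplest case, X = +1 or -1 with equal probability: the stopping time is the first exit of the Brownian path from [-1, 1], so W_τ has the law of X and E[τ] should be close to E[X²] = 1. The step size and number of sample paths below are arbitrary illustrative choices.

```python
# Monte Carlo illustration of Skorokhod embedding for X = +/-1 with probability 1/2:
# stop the (discretized) Brownian path when it first leaves [-1, 1].
import numpy as np

rng = np.random.default_rng(0)
dt, n_paths, max_steps = 1e-3, 1000, 10_000

taus, endpoints = [], []
for _ in range(n_paths):
    w, t = 0.0, 0.0
    for _ in range(max_steps):
        w += np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(w) >= 1.0:              # first exit time of [-1, 1]
            break
    taus.append(t)
    endpoints.append(np.sign(w))        # W_tau is, up to discretization, +1 or -1

print("P(W_tau = +1) ~", np.mean(np.array(endpoints) > 0))   # ~0.5, the law of X
print("E[tau] ~", np.mean(taus))                              # ~1 = E[X^2]
```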
[ { "math_id": 0, "text": "\\operatorname{E}[\\tau] = \\operatorname{E}[X^2]" }, { "math_id": 1, "text": "\\operatorname{E}[\\tau^2] \\leq 4 \\operatorname{E}[X^4]." }, { "math_id": 2, "text": "S_n = X_1 + \\cdots + X_n." }, { "math_id": 3, "text": "W_{\\tau_{n}}" }, { "math_id": 4, "text": "\\operatorname{E}[\\tau_n - \\tau_{n - 1}] = \\operatorname{E}[X_1^2]" }, { "math_id": 5, "text": "\\operatorname{E}[(\\tau_{n} - \\tau_{n - 1})^2] \\le 4 \\operatorname{E}[X_1^4]." } ]
https://en.wikipedia.org/wiki?curid=8761319
8761903
Strain engineering
Strain engineering refers to a general strategy employed in semiconductor manufacturing to enhance device performance. Performance benefits are achieved by modulating strain in, for example, the transistor channel, which enhances electron mobility (or hole mobility) and thereby conductivity through the channel. Another example is the strain engineering of semiconductor photocatalysts for more effective use of sunlight. In CMOS manufacturing. The use of various strain engineering techniques has been reported by many prominent microprocessor manufacturers, including AMD, IBM, and Intel, primarily with regard to sub-130 nm technologies. One key consideration in using strain engineering in CMOS technologies is that PMOS and NMOS respond differently to different types of strain. Specifically, PMOS performance is best served by applying compressive strain to the channel, whereas NMOS receives benefit from tensile strain. Many approaches to strain engineering induce strain locally, allowing both n-channel and p-channel strain to be modulated independently. One prominent approach involves the use of a strain-inducing capping layer. CVD silicon nitride is a common choice for a strained capping layer, in that the magnitude and type of strain (e.g. tensile vs compressive) may be adjusted by modulating the deposition conditions, especially temperature. Standard lithography patterning techniques can be used to selectively deposit strain-inducing capping layers, to deposit a compressive film over only the PMOS, for example. Capping layers are key to the "Dual Stress Liner" (DSL) approach reported by IBM-AMD. In the DSL process, standard patterning and lithography techniques are used to selectively deposit a tensile silicon nitride film over the NMOS and a compressive silicon nitride film over the PMOS. A second prominent approach involves the use of a silicon-rich solid solution, especially silicon-germanium, to modulate channel strain. One manufacturing method involves epitaxial growth of silicon on top of a relaxed silicon-germanium underlayer. Tensile strain is induced in the silicon as the lattice of the silicon layer is stretched to mimic the larger lattice constant of the underlying silicon-germanium. Conversely, compressive strain could be induced by using a solid solution with a smaller lattice constant, such as silicon-carbon. See, e.g., U.S. Patent No. 7,023,018. Another closely related method involves replacing the source and drain region of a MOSFET with silicon-germanium. In thin films. Strain can be induced in thin films with either epitaxial growth or, more recently, topological growth. Epitaxial strain in thin films generally arises due to lattice mismatch between the film and its substrate and triple junction restructuring at the surface triple junction, which arises either during film growth or due to thermal expansion mismatch. Tuning this epitaxial strain can be used to moderate the properties of thin films and induce phase transitions. The misfit parameter (formula_0) is given by the equation below: formula_1 where formula_2 is the lattice parameter of the epitaxial film and formula_3 is the lattice parameter of the substrate. After some critical film thickness, it becomes energetically favorable to relieve some mismatch strain through the formation of misfit dislocations or microtwins. Misfit dislocations can be interpreted as a dangling bond at an interface between layers with different lattice constants. 
This critical thickness (formula_4) was computed by Matthews and Blakeslee to be: formula_5 where formula_6 is the length of the Burgers vector, formula_7 is the Poisson ratio, formula_8 is the angle between the Burgers vector and misfit dislocation line, and formula_9 is the angle between the Burgers vector and the vector normal to the dislocation's glide plane. The equilibrium in-plane strain for a thin film with a thickness (formula_10) that exceeds formula_4 is then given by the expression: formula_11 Strain relaxation at thin film interfaces via misfit dislocation nucleation and multiplication occurs in three stages which are distinguishable based on the relaxation rate. The first stage is dominated by glide of pre-existing dislocations and is characterized by a slow relaxation rate. The second stage has a faster relaxation rate, which depends on the mechanisms for dislocation nucleation in the material. Finally, the last stage represents a saturation in strain relaxation due to strain hardening. Strain engineering has been well-studied in complex oxide systems, in which epitaxial strain can strongly influence the coupling between the spin, charge, and orbital degrees of freedom, and thereby impact the electrical and magnetic properties. Epitaxial strain has been shown to induce metal-insulator transitions and shift the Curie temperature for the antiferromagnetic-to-ferromagnetic transition in La1-xSrxMnO3. In alloy thin films, epitaxial strain has been observed to impact the spinodal instability, and therefore impact the driving force for phase separation. This is explained as a coupling between the imposed epitaxial strain and the system's composition-dependent elastic properties. Researchers more recently have achieved strain in thick oxide films larger than that achieved in epitaxial growth by incorporating nano-structured topologies (Guerra and Vezenov, 2002) and nanorods/nanopillars within an oxide film matrix. Following this work, researchers world-wide have created such self-organized, phase-separated, nanorod/nanopillar structures in numerous oxide films as reviewed here. In 2008, Thulin and Guerra published calculations of strain-modified anatase titania band structures, which indicated a higher hole mobility with increasing strain. Additionally, in two-dimensional materials such as WSe2, strain has been shown to induce conversion from an indirect semiconductor to a direct semiconductor, allowing a hundred-fold increase in the light emission rate. In III-N LEDs. Strain engineering plays a major role in III-N LEDs, one of the most ubiquitous and efficient LED varieties, which gained broad popularity after the 2014 Nobel Prize in Physics. Most III-N LEDs utilize a combination of GaN and InGaN, the latter being used as the quantum well region. The composition of In within the InGaN layer can be tuned to change the color of the light emitted from these LEDs. However, the epilayers of the LED quantum well have inherently mismatched lattice constants, creating strain between the layers. Due to the quantum confined Stark effect (QCSE), the electron and hole wave functions are misaligned within the quantum well, resulting in a reduced overlap integral, decreased recombination probability, and increased carrier lifetime. 
As such, applying an external strain can negate the internal quantum well strain, reducing the carrier lifetime and making the LEDs a more attractive light source for communications and other applications requiring fast modulation speeds. With appropriate strain engineering, it is possible to grow III-N LEDs on Si substrates. This can be accomplished via strain relaxed templates, superlattices, and pseudo-substrates. Furthermore, electro-plated metal substrates have also shown promise in applying an external counterbalancing strain to increase the overall LED efficiency. In DUV LEDs. In addition to traditional strain engineering that takes place with III-N LEDs, Deep Ultraviolet (DUV) LEDs, which use AlN, AlGaN, and GaN, undergo a polarity switch from TE to TM at a critical Al composition within the active region. The polarity switch arises from the negative value of AlN’s crystal field splitting, which results in its valence bands switching character at this critical Al composition. Studies have established a linear relationship between this critical composition within the active layer and the Al composition used in the substrate templating region, underscoring the importance of strain engineering in the character of light emitted from DUV LEDs. Furthermore, any existing lattice mismatch causes phase separation and surface roughness, in addition to creating dislocations and point defects. The former results in local current leakage while the latter enhances the nonradiative recombination process, both reducing the device's internal quantum efficiency (IQE). Active layer thickness can trigger the bending and annihilation of threading dislocations, surface roughening, phase separation, misfit dislocation formation, and point defects. All of these mechanisms compete across different thicknesses. By delaying strain accumulation to grow at a thicker epilayer before reaching the target relaxation degree, certain adverse effects can be reduced. In nano-scale materials. Typically, the maximum elastic strain achievable in normal bulk materials ranges from 0.1% to 1%. This limits our ability to effectively modify material properties in a reversible and quantitative manner using strain. However, recent research on nanoscale materials has shown that the elastic strain range is much broader. Even the hardest material in nature, diamond, exhibits up to 9.0% uniform elastic strain at the nanoscale. Keeping in line with Moore's law, semiconductor devices are continuously shrinking in size to the nanoscale. With the concept of "smaller is stronger", elastic strain engineering can be fully exploited at the nanoscale. In nanoscale elastic strain engineering, the crystallographic direction plays a crucial role. Most materials are anisotropic, meaning their properties vary with direction. This is particularly true in elastic strain engineering, as applying strain in different crystallographic directions can have a significant impact on the material's properties. Taking diamond as an example, Density Functional Theory (DFT) simulations demonstrate distinct behaviors in the bandgap decreasing rates when strained along different directions. Straining along the <110> direction results in a higher bandgap decreasing rate, while straining along the <111> direction leads to a lower bandgap decreasing rate but a transition from an indirect to a direct bandgap. A similar indirect-direct bandgap transition can be observed in strained silicon. 
Theoretically, achieving this indirect-direct bandgap transition in silicon requires a strain of more than 14% uniaxial strain. In 2D materials. In the case of elastic strain, when the limit is exceeded, plastic deformation occurs due to slip and dislocation movement in the microstructure of the material. Plastic deformation is not commonly utilized in strain engineering due to the difficulty in controlling its uniform outcome. Plastic deformation is more influenced by local distortion rather than the global stress field observed in elastic strain. However, 2D materials have a greater range of elastic strain compared to bulk materials because they lack typical plastic deformation mechanisms like slip and dislocation. Additionally, it is easier to apply strain along a specific crystallographic direction in 2D materials compared to bulk materials. Recent research has shown significant progress in strain engineering in 2D materials through techniques such as deforming the substrate, inducing material rippling, and creating lattice asymmetry. These methods of applying strain effectively enhance the electric, magnetic, thermal, and optical properties of the material. For example, in the reference provided, the optical gap of monolayer and bilayer MoS2 decreases at rates of approximately 45 and 120 meV/%, respectively, under 0-2.2% uniaxial strain. Additionally, the photoluminescence intensity of monolayer MoS2 decreases at 1% strain, indicating an indirect-to-direct bandgap transition. The reference also demonstrates that strain-engineered rippling in black phosphorus leads to bandgap variations between +10% and -30%. In the case of ReSe2, the literature shows the formation of local wrinkle structures when the substrate is relaxed after stretching. This folding process results in a redshift in the absorption spectrum peak, leading to increased light absorption and changes in magnetic properties and bandgap. The research team also conducted I-V curve tests on the stretched samples and found that a 30% stretching resulted in lower resistance compared to the unstretched samples. However, a 50% stretching showed the opposite effect, with higher resistance compared to the unstretched samples. This behavior can be attributed to the folding of ReSe2, with the folded regions being particularly weak. References. <templatestyles src="Reflist/styles.css" />
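Returning to the thin-film relations earlier in this article, the Matthews–Blakeslee critical thickness is defined implicitly, since h_c appears on both sides of the equation, and is therefore usually obtained numerically. A minimal sketch with hypothetical parameter values (Burgers vector length, Poisson ratio, angles and misfit chosen only for illustration):

```python
# Fixed-point solution of the Matthews-Blakeslee relation
#   h_c = b (2 - nu cos^2(alpha)) [ln(h_c/b) + 1] / (8 pi |f| (1 + nu) cos(lambda))
# All parameter values are hypothetical, for illustration only.
import math

b = 0.4e-9                       # Burgers vector length (m)
nu = 0.27                        # Poisson ratio
alpha = math.radians(60.0)       # angle between Burgers vector and dislocation line
lam = math.radians(60.0)         # angle between Burgers vector and glide-plane normal (per the text)
f = 0.01                         # 1 % lattice misfit

def mb_rhs(h):
    return (b * (2 - nu * math.cos(alpha) ** 2) * (math.log(h / b) + 1)
            / (8 * math.pi * abs(f) * (1 + nu) * math.cos(lam)))

h = 10 * b                       # initial guess
for _ in range(100):             # simple fixed-point iteration
    h = mb_rhs(h)

print(f"critical thickness h_c ~ {h * 1e9:.1f} nm")
```

A larger misfit f pushes the critical thickness down, which is why highly mismatched layers must be kept very thin to remain coherently strained.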
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "f=(a_s - a_e)/a_e" }, { "math_id": 2, "text": "a_e " }, { "math_id": 3, "text": "a_s" }, { "math_id": 4, "text": "h_c" }, { "math_id": 5, "text": "h_c = \\frac{b(2-\\nu cos^2\\alpha)[ln(h_c/b)+1]}{8\\pi |f|(1+\\nu)cos\\lambda}" }, { "math_id": 6, "text": "b" }, { "math_id": 7, "text": "\\nu" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "\\lambda " }, { "math_id": 10, "text": "h" }, { "math_id": 11, "text": "\\epsilon_{||} = \\frac{f}{|f|}\\frac{b(1-\\nu cos^2(\\alpha)[ln(h/b) +1]}{8\\pi |f| (1+\\nu)cos\\lambda}" } ]
https://en.wikipedia.org/wiki?curid=8761903
876294
Borel functional calculus
Branch of functional analysis In functional analysis, a branch of mathematics, the Borel functional calculus is a "functional calculus" (that is, an assignment of operators from commutative algebras to functions defined on their spectra), which has particularly broad scope. Thus for instance if "T" is an operator, applying the squaring function "s" → "s"2 to "T" yields the operator "T"2. Using the functional calculus for larger classes of functions, we can for example define rigorously the "square root" of the (negative) Laplacian operator −Δ or the exponential formula_0 The 'scope' here means the kind of "function of an operator" which is allowed. The Borel functional calculus is more general than the continuous functional calculus, and its focus is different from that of the holomorphic functional calculus. More precisely, the Borel functional calculus allows for applying an arbitrary Borel function to a self-adjoint operator, in a way that generalizes applying a polynomial function. Motivation. If "T" is a self-adjoint operator on a finite-dimensional inner product space "H", then "H" has an orthonormal basis {"e"1, ..., "eℓ"} consisting of eigenvectors of "T", that is formula_1 Thus, for any positive integer "n", formula_2 If only polynomials in "T" are considered, then one gets the holomorphic functional calculus. The relation also holds for more general functions of "T". Given a Borel function "h", one can define an operator "h"("T") by specifying its behavior on the basis: formula_3 Generally, any self-adjoint operator "T" is unitarily equivalent to a multiplication operator; this means that for many purposes, "T" can be considered as an operator formula_4 acting on "L"2 of some measure space. The domain of "T" consists of those functions for which the above expression is in "L"2. In such a case, one can define analogously formula_5 For many technical purposes, the previous formulation is good enough. However, it is desirable to formulate the functional calculus in a way that does not depend on the particular representation of "T" as a multiplication operator. That's what we do in the next section. The bounded functional calculus. Formally, the bounded Borel functional calculus of a self-adjoint operator "T" on Hilbert space "H" is a mapping defined on the space of bounded complex-valued Borel functions "f" on the real line, formula_6 such that the following conditions hold: the mapping is an involution-preserving and unit-preserving homomorphism of rings; for every vector ξ in "H", the map formula_7 is a countably additive measure on the Borel sets "E" of R; and, writing η for the identity function on R, formula_8 <templatestyles src="Math_theorem/styles.css" /> Theorem — Any self-adjoint operator "T" has a unique Borel functional calculus. This defines the functional calculus for "bounded" functions applied to possibly "unbounded" self-adjoint operators. Using the bounded functional calculus, one can prove part of Stone's theorem on one-parameter unitary groups: <templatestyles src="Math_theorem/styles.css" /> Theorem — If "A" is a self-adjoint operator, then formula_9 is a 1-parameter strongly continuous unitary group whose infinitesimal generator is "iA". As an application, we consider the Schrödinger equation, or equivalently, the dynamics of a quantum mechanical system. In non-relativistic quantum mechanics, the Hamiltonian operator "H" models the total energy observable of a quantum mechanical system S. The unitary group generated by "iH" corresponds to the time evolution of S. We can also use the Borel functional calculus to abstractly solve some linear initial value problems such as the heat equation, or Maxwell's equations. Existence of a functional calculus. 
The existence of a mapping with the properties of a functional calculus requires proof. For the case of a bounded self-adjoint operator "T", the existence of a Borel functional calculus can be shown in an elementary way as follows: First pass from polynomial to continuous functional calculus by using the Stone–Weierstrass theorem. The crucial fact here is that, for a bounded self adjoint operator "T" and a polynomial "p", formula_10 Consequently, the mapping formula_11 is an isometry and a densely defined homomorphism on the ring of polynomial functions. Extending by continuity defines "f"("T") for a continuous function "f" on the spectrum of "T". The Riesz-Markov theorem then allows us to pass from integration on continuous functions to spectral measures, and this is the Borel functional calculus. Alternatively, the continuous calculus can be obtained via the Gelfand transform, in the context of commutative Banach algebras. Extending to measurable functions is achieved by applying Riesz-Markov, as above. In this formulation, "T" can be a normal operator. Given an operator "T", the range of the continuous functional calculus "h" → "h"("T") is the (abelian) C*-algebra "C"("T") generated by "T". The Borel functional calculus has a larger range, that is the closure of "C"("T") in the weak operator topology, a (still abelian) von Neumann algebra. The general functional calculus. We can also define the functional calculus for not necessarily bounded Borel functions "h"; the result is an operator which in general fails to be bounded. Using the multiplication by a function "f" model of a self-adjoint operator given by the spectral theorem, this is multiplication by the composition of "h" with "f". <templatestyles src="Math_theorem/styles.css" /> Theorem — Let "T" be a self-adjoint operator on "H", "h" a real-valued Borel function on R. There is a unique operator "S" such that formula_12 formula_13 The operator "S" of the previous theorem is denoted "h"("T"). More generally, a Borel functional calculus also exists for (bounded) normal operators. Resolution of the identity. Let formula_14 be a self-adjoint operator. If formula_15 is a Borel subset of R, and formula_16 is the indicator function of "E", then formula_17 is a self-adjoint projection on "H". Then mapping formula_18 is a projection-valued measure. The measure of R with respect to formula_19 is the identity operator on "H". In other words, the identity operator can be expressed as the spectral integral formula_20. Stone's formula expresses the spectral measure formula_21 in terms of the resolvent formula_22: formula_23 Depending on the source, the resolution of the identity is defined, either as a projection-valued measure formula_21, or as a one-parameter family of projection-valued measures formula_24 with formula_25. In the case of a discrete measure (in particular, when "H" is finite-dimensional), formula_26 can be written as formula_27 in the Dirac notation, where each formula_28 is a normalized eigenvector of "T". The set formula_29 is an orthonormal basis of "H". In physics literature, using the above as heuristic, one passes to the case when the spectral measure is no longer discrete and write the resolution of identity as formula_30 and speak of a "continuous basis", or "continuum of basis states", formula_29 Mathematically, unless rigorous justifications are given, this expression is purely formal. References. <templatestyles src="Reflist/styles.css" />
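In the finite-dimensional setting of the Motivation section, the Borel functional calculus amounts to applying h to the eigenvalues in an eigendecomposition. A minimal numerical sketch; the matrix and the particular Borel functions below are arbitrary illustrative choices.

```python
# Finite-dimensional functional calculus: h(T) = U diag(h(lambda_i)) U*
# for a self-adjoint T with eigendecomposition T = U diag(lambda_i) U*.
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 3.0]])                  # a self-adjoint (symmetric) operator

eigvals, U = np.linalg.eigh(T)

def borel_calculus(h):
    return U @ np.diag(h(eigvals)) @ U.conj().T

square = borel_calculus(lambda lam: lam ** 2)
print(np.allclose(square, T @ T))           # True: s -> s^2 yields T^2

# An indicator function of a Borel set gives a spectral projection 1_E(T):
proj = borel_calculus(lambda lam: (lam < 2.5).astype(float))
print(np.allclose(proj @ proj, proj))       # True: an orthogonal projection

# The one-parameter unitary group of Stone's theorem, U_t = exp(i t T):
t = 0.7
Ut = borel_calculus(lambda lam: np.exp(1j * t * lam))
print(np.allclose(Ut @ Ut.conj().T, np.eye(2)))   # True: U_t is unitary
```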
[ { "math_id": 0, "text": " e^{it \\Delta}." }, { "math_id": 1, "text": " T e_k = \\lambda_k e_k, \\qquad 1 \\leq k \\leq \\ell." }, { "math_id": 2, "text": " T^n e_k = \\lambda_k^n e_k." }, { "math_id": 3, "text": " h(T) e_k = h(\\lambda_k) e_k." }, { "math_id": 4, "text": " [T \\psi](x) = f(x) \\psi(x)" }, { "math_id": 5, "text": " [h(T) \\psi](x) = [h \\circ f](x) \\psi(x). " }, { "math_id": 6, "text": "\\begin{cases} \\pi_T: L^\\infty(\\mathbb{R},\\mathbb{C}) \\to \\mathcal{B}(\\mathcal{H})\\\\ f \\mapsto f(T) \\end{cases}" }, { "math_id": 7, "text": " \\nu_\\xi:E \\mapsto \\langle \\pi_T(\\mathbf{1}_E) \\xi, \\xi \\rangle " }, { "math_id": 8, "text": " \\pi_T \\left ([\\eta +i]^{-1} \\right ) = [T + i]^{-1}." }, { "math_id": 9, "text": " U_t = e^{i t A}, \\qquad t \\in \\mathbb{R} " }, { "math_id": 10, "text": "\\| p(T) \\| = \\sup_{\\lambda \\in \\sigma(T)} |p(\\lambda)|." }, { "math_id": 11, "text": " p \\mapsto p(T) " }, { "math_id": 12, "text": "\\operatorname{dom} S = \\left\\{\\xi \\in H: h \\in L^2_{\\nu_\\xi}(\\mathbb{R}) \\right\\}" }, { "math_id": 13, "text": "\\langle S \\xi, \\xi \\rangle = \\int_{\\mathbb{R}} h(t) \\ d\\nu_{\\xi} (t), \\quad \\text{for} \\quad \\xi \\in \\operatorname{dom} S" }, { "math_id": 14, "text": "T" }, { "math_id": 15, "text": "E" }, { "math_id": 16, "text": " \\mathbf{1}_E " }, { "math_id": 17, "text": " \\mathbf{1}_E(T) " }, { "math_id": 18, "text": " \\Omega_T: E \\mapsto \\mathbf{1}_E(T)" }, { "math_id": 19, "text": "\\Omega_T" }, { "math_id": 20, "text": "I = \\Omega_T([-\\infty,\\infty]) = \\int_{-\\infty}^{\\infty} d\\Omega_T" }, { "math_id": 21, "text": "\\Omega_T" }, { "math_id": 22, "text": "R_T(\\lambda) \\equiv \\left(T-\\lambda I \\right)^{-1}" }, { "math_id": 23, "text": "\\frac{1}{2\\pi i} \\lim_{\\epsilon \\to 0^+} \\int_a^b \\left[ R_T(\\lambda+i\\epsilon)) - R_T(\\lambda-i\\epsilon) \\right] \\, d\\lambda = \\Omega_T((a,b)) + \\frac{1}{2}\\left[ \\Omega_T(\\{a\\}) + \\Omega_T(\\{b\\}) \\right]." }, { "math_id": 24, "text": "\\{\\Sigma_\\lambda\\}" }, { "math_id": 25, "text": "-\\infty < \\lambda < \\infty" }, { "math_id": 26, "text": "I = \\int 1\\,d\\Omega_T" }, { "math_id": 27, "text": "I = \\sum_{i} \\left | i \\right \\rangle \\left \\langle i \\right |" }, { "math_id": 28, "text": "|i\\rangle" }, { "math_id": 29, "text": " \\{ |i\\rangle \\}" }, { "math_id": 30, "text": "I = \\int\\!\\! di~ |i\\rangle \\langle i|" } ]
https://en.wikipedia.org/wiki?curid=876294
876428
Divergent series
Infinite series that is not convergent <templatestyles src="Template:Quote_box/styles.css" /> Divergent series are in general something quite fatal, and it is a disgrace to base any proof on them. N. H. Abel, letter to Holmboe, January 1826, reprinted in volume 2 of his collected papers. In mathematics, a divergent series is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit. If a series converges, the individual terms of the series must approach zero. Thus any series in which the individual terms do not approach zero diverges. However, convergence is a stronger condition: not all series whose terms approach zero converge. A counterexample is the harmonic series formula_0 The divergence of the harmonic series was proven by the medieval mathematician Nicole Oresme. In specialized mathematical contexts, values can be objectively assigned to certain series whose sequences of partial sums diverge, in order to give meaning to the divergence of the series. A "summability method" or "summation method" is a partial function from the set of series to values. For example, Cesàro summation assigns Grandi's divergent series formula_1 the value 1/2. Cesàro summation is an "averaging" method, in that it relies on the arithmetic mean of the sequence of partial sums. Other methods involve analytic continuations of related series. In physics, there are a wide variety of summability methods; these are discussed in greater detail in the article on regularization. History. <templatestyles src="Template:Quote_box/styles.css" /> ... but it is broadly true to say that mathematicians before Cauchy asked not 'How shall we "define" 1 − 1 + 1...?' but 'What "is" 1 − 1 + 1...?', and that this habit of mind led them into unnecessary perplexities and controversies which were often really verbal. G. H. Hardy, Divergent series, page 6 Before the 19th century, divergent series were widely used by Leonhard Euler and others, but often led to confusing and contradictory results. A major problem was Euler's idea that any divergent series should have a natural sum, without first defining what is meant by the sum of a divergent series. Augustin-Louis Cauchy eventually gave a rigorous definition of the sum of a (convergent) series, and for some time after this, divergent series were mostly excluded from mathematics. They reappeared in 1886 with Henri Poincaré's work on asymptotic series. In 1890, Ernesto Cesàro realized that one could give a rigorous definition of the sum of some divergent series, and defined Cesàro summation. (This was not the first use of Cesàro summation, which was used implicitly by Ferdinand Georg Frobenius in 1880; Cesàro's key contribution was not the discovery of this method, but his idea that one should give an explicit definition of the sum of a divergent series.) In the years after Cesàro's paper, several other mathematicians gave other definitions of the sum of a divergent series, although these are not always compatible: different definitions can give different answers for the sum of the same divergent series; so, when talking about the sum of a divergent series, it is necessary to specify which summation method one is using. Theorems on methods for summing divergent series. A summability method "M" is "regular" if it agrees with the actual limit on all convergent series. Such a result is called an "Abelian theorem" for "M", from the prototypical Abel's theorem. 
More subtle are partial converse results, called "Tauberian theorems", from a prototype proved by Alfred Tauber. Here "partial converse" means that if "M" sums the series "Σ", and some side-condition holds, then "Σ" was convergent in the first place; without any side-condition such a result would say that "M" only summed convergent series (making it useless as a summation method for divergent series). The function giving the sum of a convergent series is "linear", and it follows from the Hahn–Banach theorem that it may be extended to a summation method summing any series with bounded partial sums. This is called the "Banach limit". This fact is not very useful in practice, since there are many such extensions, inconsistent with each other, and also since proving such operators exist requires invoking the axiom of choice or its equivalents, such as Zorn's lemma. They are therefore nonconstructive. The subject of divergent series, as a domain of mathematical analysis, is primarily concerned with explicit and natural techniques such as Abel summation, Cesàro summation and Borel summation, and their relationships. The advent of Wiener's tauberian theorem marked an epoch in the subject, introducing unexpected connections to Banach algebra methods in Fourier analysis. Summation of divergent series is also related to extrapolation methods and sequence transformations as numerical techniques. Examples of such techniques are Padé approximants, Levin-type sequence transformations, and order-dependent mappings related to renormalization techniques for large-order perturbation theory in quantum mechanics. Properties of summation methods. Summation methods usually concentrate on the sequence of partial sums of the series. While this sequence does not converge, we may often find that when we take an average of larger and larger numbers of initial terms of the sequence, the average converges, and we can use this average instead of a limit to evaluate the sum of the series. A "summation method" can be seen as a function from a set of sequences of partial sums to values. If A is any summation method assigning values to a set of sequences, we may mechanically translate this to a "series-summation method" A"Σ" that assigns the same values to the corresponding series. There are certain properties it is desirable for these methods to possess if they are to arrive at values corresponding to limits and sums, respectively: "regularity" (a convergent sequence is assigned its ordinary limit), "linearity", and "stability" (if the series "a"0 + "a"1 + "a"2 + ... is assigned the value "s", then "a"1 + "a"2 + ... is assigned the value "s" − "a"0). The third condition, stability, is less important, and some significant methods, such as Borel summation, do not possess it. One can also give a weaker alternative to the last condition. A desirable property for two distinct summation methods A and B to share is "consistency": A and B are consistent if for every sequence "s" to which both assign a value, A("s") = B("s"). (Using this language, a summation method A is regular iff it is consistent with the standard sum Σ.) If two methods are consistent, and one sums more series than the other, the one summing more series is "stronger". There are powerful numerical summation methods that are neither regular nor linear, for instance nonlinear sequence transformations like Levin-type sequence transformations and Padé approximants, as well as the order-dependent mappings of perturbative series based on renormalization techniques. Taking regularity, linearity and stability as axioms, it is possible to sum many divergent series by elementary algebraic manipulations. This partly explains why many different summation methods give the same answer for certain series.
For instance, whenever "r" ≠ 1, the geometric series formula_11 can be evaluated regardless of convergence. More rigorously, any summation method that possesses these properties and which assigns a finite value to the geometric series must assign this value. However, when "r" is a real number larger than 1, the partial sums increase without bound, and averaging methods assign a limit of infinity. Classical summation methods. The two classical summation methods for series, ordinary convergence and absolute convergence, define the sum as a limit of certain partial sums. These are included only for completeness; strictly speaking they are not true summation methods for divergent series since, by definition, a series is divergent only if these methods do not work. Most but not all summation methods for divergent series extend these methods to a larger class of sequences. Absolute convergence. Absolute convergence defines the sum of a sequence (or set) of numbers to be the limit of the net of all partial sums "a""k"1 + ... + "a""k""n", if it exists. It does not depend on the order of the elements of the sequence, and a classical theorem says that a sequence is absolutely convergent if and only if the sequence of absolute values is convergent in the standard sense. Sum of a series. Cauchy's classical definition of the sum of a series "a"0 + "a"1 + ... defines the sum to be the limit of the sequence of partial sums "a"0 + ... + "a""n". This is the default definition of convergence of a sequence. Nørlund means. Suppose "pn" is a sequence of positive terms, starting from "p"0. Suppose also that formula_12 If now we transform a sequence s by using "p" to give weighted means, setting formula_13 then the limit of "tn" as "n" goes to infinity is an average called the "Nørlund mean" N"p"("s"). The Nørlund mean is regular, linear, and stable. Moreover, any two Nørlund means are consistent. Cesàro summation. The most significant of the Nørlund means are the Cesàro sums. Here, if we define the sequence "pk" by formula_14 then the Cesàro sum "C""k" is defined by "C""k"("s") = N("pk")("s"). Cesàro sums are Nørlund means if "k" ≥ 0, and hence are regular, linear, stable, and consistent. "C"0 is ordinary summation, and "C"1 is ordinary Cesàro summation. Cesàro sums have the property that if "h" &gt; "k", then "C""h" is stronger than "C""k". Abelian means. Suppose "λ" = {"λ"0, "λ"1, "λ"2...} is a strictly increasing sequence tending towards infinity, and that "λ"0 ≥ 0. Suppose formula_15 converges for all real numbers "x" &gt; 0. Then the "Abelian mean" "A""λ" is defined as formula_16 More generally, if the series for "f" only converges for large "x" but can be analytically continued to all positive real "x", then one can still define the sum of the divergent series by the limit above. A series of this type is known as a generalized Dirichlet series; in applications to physics, this is known as the method of "heat-kernel regularization". Abelian means are regular and linear, but not stable and not always consistent between different choices of "λ". However, some special cases are very important summation methods. Abel summation. If "λ""n" = "n", then we obtain the method of "Abel summation". Here formula_17 where "z" = exp(−"x"). 
Then the limit of "f"("x") as "x" approaches 0 through positive reals is the limit of the power series for "f"("z") as "z" approaches 1 from below through positive reals, and the Abel sum "A"("s") is defined as formula_18 Abel summation is interesting in part because it is consistent with but more powerful than Cesàro summation: "A"("s") = "C""k"("s") whenever the latter is defined. The Abel sum is therefore regular, linear, stable, and consistent with Cesàro summation. Lindelöf summation. If "λ""n" = "n" log("n"), then (indexing from one) we have formula_19 Then "L"("s"), the "Lindelöf sum", is the limit of "f"("x") as "x" goes to positive zero. The Lindelöf sum is a powerful method when applied to power series among other applications, summing power series in the Mittag-Leffler star. If "g"("z") is analytic in a disk around zero, and hence has a Maclaurin series "G"("z") with a positive radius of convergence, then "L"("G"("z")) = "g"("z") in the Mittag-Leffler star. Moreover, convergence to "g"("z") is uniform on compact subsets of the star. Analytic continuation. Several summation methods involve taking the value of an analytic continuation of a function. Analytic continuation of power series. If Σ"a""n""x""n" converges for small complex "x" and can be analytically continued along some path from "x" = 0 to the point "x" = 1, then the sum of the series can be defined to be the value at "x" = 1. This value may depend on the choice of path. One of the first examples of potentially different sums for a divergent series, using analytic continuation, was given by Callet, who observed that if formula_20 then formula_21 Evaluating at formula_22, one gets formula_23 However, the gaps in the series are key. For formula_24 for example, we actually would get formula_25, so different sums correspond to different placements of the formula_26's. Another example of analytic continuation is the divergent alternating series formula_27 which is a sum over products of formula_28-functions and Pochhammer's symbols. Using the duplication formula of the formula_28-function, it reduces to a generalized hypergeometric series formula_29 Euler summation. Euler summation is essentially an explicit form of analytic continuation. If a power series converges for small complex "z" and can be analytically continued to the open disk with diameter from to 1 and is continuous at 1, then its value at "q" is called the Euler or (E,"q") sum of the series Σ"a""n". Euler used it before analytic continuation was defined in general, and gave explicit formulas for the power series of the analytic continuation. The operation of Euler summation can be repeated several times, and this is essentially equivalent to taking an analytic continuation of a power series to the point "z" = 1. Analytic continuation of Dirichlet series. This method defines the sum of a series to be the value of the analytic continuation of the Dirichlet series formula_30 at "s" = 0, if this exists and is unique. This method is sometimes confused with zeta function regularization. If "s" = 0 is an isolated singularity, the sum is defined by the constant term of the Laurent series expansion. Zeta function regularization. If the series formula_31 (for positive values of the "a""n") converges for large real "s" and can be analytically continued along the real line to "s" = −1, then its value at "s" = −1 is called the zeta regularized sum of the series "a"1 + "a"2 + ... Zeta function regularization is nonlinear. 
In applications, the numbers "a""i" are sometimes the eigenvalues of a self-adjoint operator "A" with compact resolvent, and "f"("s") is then the trace of "A"−"s". For example, if "A" has eigenvalues 1, 2, 3, ... then "f"("s") is the Riemann zeta function, "ζ"("s"), whose value at "s" = −1 is −1/12, assigning a value to the divergent series 1 + 2 + 3 + 4 + ... Other values of "s" can also be used to assign values for the divergent sums "ζ"(0) = 1 + 1 + 1 + ... = −1/2, "ζ"(−2) = 1 + 4 + 9 + ... = 0 and in general formula_32 where "Bk" is a Bernoulli number. Integral function means. If "J"("x") = Σ"p""n""x""n" is an integral function, then the "J" sum of the series "a"0 + ... is defined to be formula_33 if this limit exists. There is a variation of this method where the series for "J" has a finite radius of convergence "r" and diverges at "x" = "r". In this case one defines the sum as above, except taking the limit as "x" tends to "r" rather than infinity. Borel summation. In the special case when "J"("x") = "e""x" this gives one (weak) form of Borel summation. Valiron's method. Valiron's method is a generalization of Borel summation to certain more general integral functions "J". Valiron showed that under certain conditions it is equivalent to defining the sum of a series as formula_34 where "H" is the second derivative of "G" and "c"("n") = "e"−"G"("n"), and "a"0 + ... + "a""h" is to be interpreted as 0 when "h" < 0. Moment methods. Suppose that "dμ" is a measure on the real line such that all the moments formula_35 are finite. If "a"0 + "a"1 + ... is a series such that formula_36 converges for all "x" in the support of "μ", then the ("dμ") sum of the series is defined to be the value of the integral formula_37 if it is defined. (If the numbers "μ""n" increase too rapidly then they do not uniquely determine the measure "μ".) Borel summation. For example, if "dμ" = "e"−"x" "dx" for positive "x" and 0 for negative "x" then "μ""n" = "n"!, and this gives one version of Borel summation, where the value of a sum is given by formula_38 There is a generalization of this depending on a variable "α", called the (B′,"α") sum, where the sum of a series "a"0 + ... is defined to be formula_39 if this integral exists. A further generalization is to replace the sum under the integral by its analytic continuation from small "t". Miscellaneous methods. BGN hyperreal summation. This summation method works by using an extension to the real numbers known as the hyperreal numbers. Since the hyperreal numbers include distinct infinite values, these numbers can be used to represent the values of divergent series. The key method is to designate a particular infinite value that is being summed, usually formula_40, which is used as a unit of infinity. Instead of summing to an arbitrary infinity (as is typically done with formula_41), the BGN method sums to the specific hyperreal infinite value labeled formula_40. Therefore, the summations are of the form formula_42 This allows the usage of standard formulas for finite series such as arithmetic progressions in an infinite context. For instance, using this method, the sum of the progression formula_43 is formula_44, or, using just the most significant infinite hyperreal part, formula_45. Hutton's method. In 1812 Hutton introduced a method of summing divergent series by starting with the sequence of partial sums, and repeatedly applying the operation of replacing a sequence "s"0, "s"1, ... by the sequence of averages ("s"0 + "s"1)/2, ("s"1 + "s"2)/2, ..., and then taking the limit. Ingham summability.
The series "a"1 + ... is called Ingham summable to "s" if formula_46 Albert Ingham showed that if "δ" is any positive number then (C,−"δ") (Cesàro) summability implies Ingham summability, and Ingham summability implies (C,"δ") summability. Lambert summability. The series "a"1 + ... is called Lambert summable to "s" if formula_47 If a series is (C,"k") (Cesàro) summable for any "k" then it is Lambert summable to the same value, and if a series is Lambert summable then it is Abel summable to the same value. Le Roy summation. The series "a"0 + ... is called Le Roy summable to "s" if formula_48 Mittag-Leffler summation. The series "a"0 + ... is called Mittag-Leffler (M) summable to "s" if formula_49 Ramanujan summation. Ramanujan summation is a method of assigning a value to divergent series used by Ramanujan and based on the Euler–Maclaurin summation formula. The Ramanujan sum of a series "f"(0) + "f"(1) + ... depends not only on the values of "f" at integers, but also on values of the function "f" at non-integral points, so it is not really a summation method in the sense of this article. Riemann summability. The series "a"1 + ... is called (R,"k") (or Riemann) summable to "s" if formula_50 The series "a"1 + ... is called R2 summable to "s" if formula_51 Riesz means. If "λ""n" form an increasing sequence of real numbers and formula_52 then the Riesz (R,"λ","κ") sum of the series "a"0 + ... is defined to be formula_53 Vallée-Poussin summability. The series "a"1 + ... is called VP (or Vallée-Poussin) summable to "s" if formula_54 where formula_55 is the gamma function. Zeldovich summability. The series is Zeldovich summable if formula_56 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "1 + \\frac{1}{2} + \\frac{1}{3} + \\frac{1}{4} + \\frac{1}{5} + \\cdots =\\sum_{n=1}^\\infty\\frac{1}{n}." }, { "math_id": 1, "text": "1 - 1 + 1 - 1 + \\cdots" }, { "math_id": 2, "text": "\\text{ “ = ” }\\frac{1}{2}" }, { "math_id": 3, "text": "\\text{ “ = ” }\\frac{1}{4}" }, { "math_id": 4, "text": "\\text{ “ = ” }\\!\\!\\int_0^\\infty \\frac{e^{-x}}{1+x} \\, dx \\approx 0.596\\,347\\ldots" }, { "math_id": 5, "text": "\\text{ “ = ” }\\frac{1}{3}" }, { "math_id": 6, "text": "\\text{ “ = ” }-1" }, { "math_id": 7, "text": "\\text{ “ = ” }-\\frac{1}{2}" }, { "math_id": 8, "text": "\\text{ “ = ” }-\\frac{1}{12}" }, { "math_id": 9, "text": " f: \\mathbb{N} \\rightarrow \\mathbb{N} " }, { "math_id": 10, "text": " N \\in \\mathbb{N} " }, { "math_id": 11, "text": "\\begin{align}\nG(r,c) & = \\sum_{k=0}^\\infty cr^k & & \\\\\n & = c + \\sum_{k=0}^\\infty cr^{k+1} & & \\text{ (stability) } \\\\\n & = c + r \\sum_{k=0}^\\infty cr^k & & \\text{ (linearity) } \\\\\n & = c + r \\, G(r,c), & & \\text{ hence } \\\\\nG(r,c) & = \\frac{c}{1-r} ,\\text{ unless it is infinite} & & \\\\\n\\end{align}" }, { "math_id": 12, "text": "\\frac{p_n}{p_0+p_1 + \\cdots + p_n} \\rightarrow 0." }, { "math_id": 13, "text": "t_m = \\frac{p_m s_0 + p_{m-1}s_1 + \\cdots + p_0 s_m}{p_0+p_1+\\cdots+p_m}" }, { "math_id": 14, "text": "p_n^k = {n+k-1 \\choose k-1}" }, { "math_id": 15, "text": "f(x) = \\sum_{n=0}^\\infty a_n e^{-\\lambda_n x}" }, { "math_id": 16, "text": "A_\\lambda(s) = \\lim_{x \\rightarrow 0^{+}} f(x)." }, { "math_id": 17, "text": "f(x) = \\sum_{n=0}^\\infty a_n e^{-nx} = \\sum_{n=0}^\\infty a_n z^n," }, { "math_id": 18, "text": "A(s) = \\lim_{z \\rightarrow 1^{-}} \\sum_{n=0}^\\infty a_n z^n." }, { "math_id": 19, "text": "f(x) = a_1 + a_2 2^{-2x} + a_3 3^{-3x} + \\cdots ." }, { "math_id": 20, "text": "1 \\leq m < n" }, { "math_id": 21, "text": "\\frac{1-x^m}{1-x^n} = \\frac{1 + x + \\dots + x^{m-1}}{1 + x + \\dots x^{n-1}} = 1 - x^m + x^n - x^{n+m} + x^{2n} - \\dots" }, { "math_id": 22, "text": "x=1\n" }, { "math_id": 23, "text": "1 - 1 + 1 - 1 + \\dots = \\frac{m}{n} ." }, { "math_id": 24, "text": "m=1, n=3" }, { "math_id": 25, "text": "1-1+0+1-1+0+1-1+\\dots = \\frac{1}{3} \n" }, { "math_id": 26, "text": "0\n" }, { "math_id": 27, "text": "\n\\sum_{k \\ge 0} (-1)^{k+1}\\frac{1}{2k-1}\\binom{2k}{k}\n=1+2-2+4-10+28-84+264-858+2860-9724+\\cdots\n" }, { "math_id": 28, "text": "\\Gamma" }, { "math_id": 29, "text": "\n\\ldots =\n\\sum_{k \\ge 0} (-4)^k\\frac{(-1/2)_k}{k!}\n={}_1F_0(-1/2;;-4) = \\sqrt{5}.\n" }, { "math_id": 30, "text": "f(s) = \\frac{a_1}{1^s} + \\frac{a_2}{2^s} + \\frac{a_3}{3^s}+ \\cdots " }, { "math_id": 31, "text": "f(s) = \\frac{1}{a_1^s} + \\frac{1}{a_2^s} + \\frac{1}{a_3^s}+ \\cdots " }, { "math_id": 32, "text": "\\zeta(-s)=\\sum_{n=1}^\\infty n^s=1^s + 2^s + 3^s + \\cdots = -\\frac{B_{s+1}}{s+1}\\, ," }, { "math_id": 33, "text": "\\lim_{x\\rightarrow\\infty}\\frac{\\sum_np_n(a_0+\\cdots+a_n)x^n}{\\sum_np_nx^n}," }, { "math_id": 34, "text": " \\lim_{n\\rightarrow +\\infty}\\sqrt{\\frac{H(n)}{2\\pi}}\\sum_{h\\in Z} e^{-\\frac12 h^2H(n)}(a_0+\\cdots+a_h)" }, { "math_id": 35, "text": "\\mu_n=\\int x^n \\, d\\mu" }, { "math_id": 36, "text": "a(x)=\\frac{a_0x^0}{\\mu_0}+\\frac{a_1x^1}{\\mu_1}+\\cdots" }, { "math_id": 37, "text": "\\int a(x) \\, d\\mu" }, { "math_id": 38, "text": "\\int_0^\\infty e^{-t}\\sum\\frac{a_nt^n}{n!} \\, dt." 
}, { "math_id": 39, "text": "\\int_0^\\infty e^{-t}\\sum\\frac{a_nt^{n\\alpha}}{\\Gamma(n\\alpha+1)} \\, dt" }, { "math_id": 40, "text": "\\omega" }, { "math_id": 41, "text": "\\infty" }, { "math_id": 42, "text": "\\sum_{x=1}^\\omega f(x)" }, { "math_id": 43, "text": "1 + 2 + 3 + \\ldots" }, { "math_id": 44, "text": "\\frac{\\omega^2}{2} + \\frac{\\omega}{2}" }, { "math_id": 45, "text": "\\frac{\\omega^2}{2}" }, { "math_id": 46, "text": "\\lim_{x\\rightarrow \\infty} \\sum_{1\\le n\\le x} a_n\\frac{n}{x}\\left[\\frac{x}{n}\\right] = s." }, { "math_id": 47, "text": "\\lim_{y\\rightarrow 0^+} \\sum_{n\\ge 1} a_n\\frac{nye^{-ny}}{1-e^{-ny}} = s. " }, { "math_id": 48, "text": "\\lim_{\\zeta\\rightarrow 1^-} \\sum_{n} \\frac{\\Gamma(1+\\zeta n)}{\\Gamma(1+ n)} a_n = s. " }, { "math_id": 49, "text": "\\lim_{\\delta\\rightarrow 0} \\sum_{n} \\frac{a_n}{\\Gamma(1+\\delta n)} = s." }, { "math_id": 50, "text": "\\lim_{h\\rightarrow 0} \\sum_{n} a_n\\left(\\frac{\\sin nh}{nh}\\right)^k = s." }, { "math_id": 51, "text": "\\lim_{h\\rightarrow 0} \\frac{2}{\\pi}\\sum_n \\frac{\\sin^2 nh}{n^2h}(a_1+\\cdots + a_n) = s." }, { "math_id": 52, "text": "A_\\lambda(x)=a_0+\\cdots+a_n \\text{ for } \\lambda_n<x\\le \\lambda_{n+1}" }, { "math_id": 53, "text": "\\lim_{\\omega\\rightarrow\\infty} \\frac{\\kappa}{\\omega^\\kappa} \\int_0^\\omega A_\\lambda(x)(\\omega-x)^{\\kappa-1} \\, dx." }, { "math_id": 54, "text": "\\lim_{m\\rightarrow \\infty} \\sum_{k=0}^{m} a_k\\frac{[\\Gamma(m+1)]^2}{\\Gamma(m+1-k)\\,\\Gamma(m+1+k)} = \\lim_{m\\rightarrow \\infty} \\left[a_0+a_1\\frac{m}{m+1}+a_2\\frac{m(m-1)}{(m+1)(m+2)}+\\cdots\\right] = s, " }, { "math_id": 55, "text": "\\Gamma(x)" }, { "math_id": 56, "text": "\\lim_{\\alpha\\to 0^+}\\sum_n c_n e^{-\\alpha n^2} = s." } ]
https://en.wikipedia.org/wiki?curid=876428
8765022
Speed of electricity
Rate of travel of electric energy The word "electricity" refers generally to the movement of electrons, or other charge carriers, through a conductor in the presence of a potential difference or an electric field. The speed of this flow has multiple meanings. In everyday electrical and electronic devices, the signals travel as electromagnetic waves typically at 50%–99% of the speed of light in vacuum. The electrons themselves move much more slowly. See drift velocity and electron mobility. Electromagnetic waves. The speed at which energy or signals travel down a cable is actually the speed of the electromagnetic wave traveling along (guided by) the cable. That is, a cable is a form of waveguide. The propagation of the wave is affected by the interaction with the material(s) in and surrounding the cable, caused by the presence of electric charge carriers, interacting with the electric field component, and magnetic dipoles, interacting with the magnetic field component. These interactions are typically described using mean field theory by the permeability and the permittivity of the materials involved. The energy/signal usually flows overwhelmingly outside the electric conductor of a cable. The purpose of the conductor is thus not to conduct energy, but to guide the energy-carrying wave. Velocity of electromagnetic waves in good dielectrics. The velocity of electromagnetic waves in a low-loss dielectric is given by formula_0 where formula_1 is the speed of light in vacuum, formula_9 is the permittivity of the material (the relative permittivity formula_7 times the permittivity of free space formula_6), and formula_5 is its permeability (the relative permeability formula_3 times the permeability of free space formula_2). Velocity of electromagnetic waves in good conductors. The velocity of transverse electromagnetic (TEM) mode waves in a good conductor is given by formula_10 where formula_11 is the frequency, formula_12 is the angular frequency, and formula_16 is the electrical conductivity, written as the product of a relative conductivity formula_14 and a reference conductivity formula_13; the permeability factors formula_17 and formula_18 are defined as above in the dielectric case. This velocity is the speed with which electromagnetic waves penetrate into the conductor and is not the drift velocity of the conduction electrons. In copper at 60 Hz, formula_19 3.2 m/s. As a consequence of Snell's Law and the extremely low speed, electromagnetic waves always enter good conductors in a direction that is within a milliradian of normal to the surface, regardless of the angle of incidence. Electromagnetic waves in circuits. In the theoretical investigation of electric circuits, the velocity of propagation of the electromagnetic field through space is usually not considered; the field is assumed, as a precondition, to be present throughout space. The magnetic component of the field is considered to be in phase with the current, and the electric component is considered to be in phase with the voltage. The electric field starts at the conductor, and propagates through space at the velocity of light, which depends on the material it is traveling through. The electromagnetic fields do not move through space. It is the electromagnetic energy that moves. The corresponding fields simply grow and decline in a region of space in response to the flow of energy. At any point in space, the electric field corresponds not to the condition of the electric energy flow at that moment, but to that of the flow at a moment earlier. The latency is determined by the time required for the field to propagate from the conductor to the point under consideration. In other words, the greater the distance from the conductor, the more the electric field lags. Since the velocity of propagation is very high – about 300,000 kilometers per second – the wave of an alternating or oscillating current, even of high frequency, is of considerable length. At 60 cycles per second, the wavelength is 5,000 kilometers, and even at 100,000 hertz, the wavelength is 3 kilometers.
This is a very large distance compared to those typically used in field measurement and application. The important part of the electric field of a conductor extends to the return conductor, which usually is only a few feet distant. At greater distance, the aggregate field can be approximated by the differential field between conductor and return conductor, which tend to cancel. Hence, the intensity of the electric field is usually inappreciable at a distance which is still small compared to the wavelength. Within the range in which an appreciable field exists, this field is practically in phase with the flow of energy in the conductor. That is, the velocity of propagation has no appreciable effect unless the return conductor is very distant, or entirely absent, or the frequency is so high that the distance to the return conductor is an appreciable portion of the wavelength. Charge carrier drift. The drift velocity deals with the average velocity of a particle, such as an electron, due to an electric field. In general, an electron will propagate randomly in a conductor at the Fermi velocity. Free electrons in a conductor follow a random path. Without the presence of an electric field, the electrons have no net velocity. When a DC voltage is applied, the electron drift velocity increases in proportion to the strength of the electric field. The drift velocity in a 2 mm diameter copper wire carrying a current of 1 ampere is approximately 8 cm per hour. AC voltages cause no net movement. The electrons oscillate back and forth in response to the alternating electric field, over a distance of a few micrometers – see example calculation.
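The two velocity scales discussed above can be made concrete with a short calculation. The following Python sketch (an added illustration) evaluates the wave velocity inside a good conductor, formula_10, and the electron drift velocity for the copper-wire example; the material constants used (conductivity of copper about 5.96 × 107 S/m, free-electron density about 8.5 × 1028 m−3) are standard textbook values and are assumptions of this sketch, not figures taken from the article.
import math
# Wave velocity inside a good conductor: v = sqrt(2*omega / (sigma*mu))
f = 60.0                          # frequency in Hz
omega = 2 * math.pi * f
sigma = 5.96e7                    # conductivity of copper, S/m (textbook value)
mu = 4 * math.pi * 1e-7           # permeability of copper, approximately mu_0, in H/m
v_wave = math.sqrt(2 * omega / (sigma * mu))
print(f"EM wave speed in copper at {f} Hz: {v_wave:.2f} m/s")        # about 3.2 m/s
# Drift velocity in a 2 mm diameter copper wire carrying 1 A: v = I / (n*A*q)
I = 1.0                           # current, A
n = 8.5e28                        # free electrons per cubic metre in copper (textbook value)
q = 1.602e-19                     # elementary charge, C
A = math.pi * (1e-3) ** 2         # cross-section of a 2 mm diameter wire, m^2
v_drift = I / (n * A * q)
print(f"Drift velocity: {v_drift:.2e} m/s = {v_drift * 360000:.1f} cm per hour")  # about 8 cm/h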
[ { "math_id": 0, "text": " v = \\frac {1} \\sqrt{\\varepsilon \\mu} =\\frac {c} \\sqrt{\\varepsilon_r \\mu_r} ." }, { "math_id": 1, "text": " c " }, { "math_id": 2, "text": "\\mu_0 " }, { "math_id": 3, "text": "\\mu_r " }, { "math_id": 4, "text": " \\mu_r = 1" }, { "math_id": 5, "text": " \\mu = \\mu_r \\mu_0 " }, { "math_id": 6, "text": " \\varepsilon_0 " }, { "math_id": 7, "text": " \\varepsilon_r " }, { "math_id": 8, "text": " \\varepsilon_r = 1" }, { "math_id": 9, "text": " \\varepsilon = \\varepsilon_r \\varepsilon_0 " }, { "math_id": 10, "text": "v = \\sqrt{ \\frac {2 \\omega} {\\sigma \\mu}} = \\sqrt{ \\frac {4 \\pi} {\\sigma_c \\mu_0 }} \\sqrt{ \\frac { f } {\\sigma_r \\mu_r }} \\approx \\left (0.41 ~ \\mathrm{ m/s } \\right) \\sqrt{ \\frac { f / ( 1 ~ \\mathrm{Hz} ) } {\\sigma_r \\mu_r }} ." }, { "math_id": 11, "text": "f " }, { "math_id": 12, "text": "\\omega " }, { "math_id": 13, "text": "\\sigma_c " }, { "math_id": 14, "text": "\\sigma_r " }, { "math_id": 15, "text": " \\sigma_r " }, { "math_id": 16, "text": " \\sigma = \\sigma_r \\sigma_c " }, { "math_id": 17, "text": " \\mu_0 " }, { "math_id": 18, "text": " \\mu_r" }, { "math_id": 19, "text": " v \\approx " } ]
https://en.wikipedia.org/wiki?curid=8765022
876534
Full and faithful functors
Functors which are surjective and injective on hom-sets In category theory, a faithful functor is a functor that is injective on hom-sets, and a full functor is surjective on hom-sets. A functor that has both properties is called a fully faithful functor. Formal definitions. Explicitly, let "C" and "D" be (locally small) categories and let "F" : "C" → "D" be a functor from "C" to "D". The functor "F" induces a function formula_0 for every pair of objects "X" and "Y" in "C". The functor "F" is said to be "faithful" if this induced function is injective, "full" if it is surjective, and "fully faithful" if it is bijective, for each "X" and "Y" in "C". Properties. A faithful functor need not be injective on objects or morphisms. That is, two objects "X" and "X"′ may map to the same object in "D" (which is why the range of a full and faithful functor is not necessarily isomorphic to "C"), and two morphisms "f" : "X" → "Y" and "f"′ : "X"′ → "Y"′ (with different domains/codomains) may map to the same morphism in "D". Likewise, a full functor need not be surjective on objects or morphisms. There may be objects in "D" not of the form "FX" for some "X" in "C". Morphisms between such objects clearly cannot come from morphisms in "C". A full and faithful functor is necessarily injective on objects up to isomorphism. That is, if "F" : "C" → "D" is a full and faithful functor and formula_1 then formula_2. Generalization to (∞, 1)-categories. The notion of a functor being 'full' or 'faithful' does not translate directly to the notion of an (∞, 1)-category. In an (∞, 1)-category, the maps between any two objects are given by a space only up to homotopy. Since the notions of injection and surjection are not homotopy-invariant (consider an interval embedding into the real numbers vs. an interval mapping to a point), we do not have the notion of a functor being "full" or "faithful." However, we can define a functor of quasi-categories to be "fully faithful" if for every "X" and "Y" in "C," the map formula_3 is a weak equivalence.
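The hom-set condition in the formal definition can be checked mechanically for finite categories. The following Python sketch (an added illustration with made-up object and morphism names) encodes the action of a functor on each hom-set as a dictionary and tests injectivity (faithfulness) and surjectivity (fullness) hom-set by hom-set; the example functor collapses two parallel morphisms and is therefore full but not faithful.
# Hom-sets of a small category C (objects a, b) and of a category D (objects x, y)
hom_C = {("a", "a"): {"id_a"}, ("b", "b"): {"id_b"}, ("a", "b"): {"f", "g"}, ("b", "a"): set()}
hom_D = {("x", "x"): {"id_x"}, ("y", "y"): {"id_y"}, ("x", "y"): {"h"}, ("y", "x"): set()}
F_obj = {"a": "x", "b": "y"}                                  # action on objects
F_mor = {"id_a": "id_x", "id_b": "id_y", "f": "h", "g": "h"}  # action on morphisms

def faithful_and_full(hom_C, hom_D, F_obj, F_mor):
    faithful, full = True, True
    for (X, Y), morphisms in hom_C.items():
        image = [F_mor[m] for m in morphisms]
        target = hom_D[(F_obj[X], F_obj[Y])]
        faithful &= len(set(image)) == len(image)   # injective on this hom-set?
        full &= set(image) == target                # surjective onto the target hom-set?
    return faithful, full

print(faithful_and_full(hom_C, hom_D, F_obj, F_mor))  # (False, True): full but not faithful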
[ { "math_id": 0, "text": "F_{X,Y}\\colon\\mathrm{Hom}_{\\mathcal C}(X,Y)\\rightarrow\\mathrm{Hom}_{\\mathcal D}(F(X),F(Y))" }, { "math_id": 1, "text": "F(X)\\cong F(Y)" }, { "math_id": 2, "text": "X \\cong Y" }, { "math_id": 3, "text": "F_{X,Y}" } ]
https://en.wikipedia.org/wiki?curid=876534
8765524
Cyanogen halide
Molecule A cyanogen halide is a molecule consisting of cyanide and a halogen. Cyanogen halides are chemically classified as pseudohalogens. The cyanogen halides are a group of chemically reactive compounds which contain a cyano group (-CN) attached to a halogen element, such as fluorine, chlorine, bromine or iodine. Cyanogen halides are colorless, volatile, lacrimatory (tear-producing) and highly poisonous compounds. Production. Halogen cyanides can be obtained by the reaction of halogens with metal cyanides or the halogenation of hydrocyanic acid. formula_0 Cyanogen fluoride can be obtained by thermal decomposition of cyanuric fluoride. Properties. Halogen cyanides are stable at normal pressure below 20 °C and in the absence of moisture or acids. In the presence of free halogens or Lewis acids they easily polymerize to cyanuric halides, for example cyanogen chloride to cyanuric chloride. They are very toxic and tear-inducing (lachrymatory). Cyanogen chloride melts at −6 °C and boils at about 13 °C. Bromine cyanide melts at 52 °C and boils at 61 °C. Iodine cyanide sublimes at normal pressure. Cyanogen fluoride boils at −46 °C and polymerizes at room temperature to cyanuric fluoride. In some of their reactions they resemble halogens. The hydrolysis of cyanogen halides takes place in different ways depending on the electronegativity of the halogen and the resulting different polarity of the X-C bond. Cyanogen fluoride is a gas produced by heating cyanuric fluoride. Cyanogen chloride is a liquid produced by reacting chlorine with hydrocyanic acid. Biomedical effects and metabolism of cyanogen halides. Cyanide is naturally present in human tissues in very small quantities. It is metabolized by rhodanese, a liver enzyme, at a rate of approximately 17 μg/kg·min. Rhodanese catalyzes the irreversible reaction that forms thiocyanate from cyanide and sulfane; the thiocyanate is non-toxic and can be excreted through the urine. Under normal conditions, the availability of sulfane, which acts as a substrate for rhodanese, is the limiting factor. Sulfur can be administered therapeutically as sodium thiosulfate to accelerate the reaction. A lethal dose of cyanide is time-dependent because of the body's ability to detoxify and excrete small amounts of cyanide through this rhodanese-catalyzed reaction. If an amount of cyanide is absorbed slowly, the rhodanese pathway may be able to render it non-toxic through conversion to thiocyanate, whereas the same amount administered over a short period of time may be lethal. Use. Halogen cyanides, in particular cyanogen chloride and cyanogen bromide, are important starting materials for the incorporation of the cyanogen group, the production of other carbonic acid derivatives and heterocycles. It has been suggested that cyanogen chloride be used by the military as poison gas. Cyanogen bromide is a solid that is prepared by reacting bromine with hydrocyanic acid salts; it has been used as a chemical pesticide against insects and rodents and as a reagent for the study of protein structure. Cyanogen halides have been found to act as electrolytes in liquid solvents, sulfur dioxide, arsenous chloride, and sulfuryl chloride.
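The time-dependence of toxicity described in the metabolism section above amounts to a simple rate calculation. The short Python sketch below is an added illustration; the 70 kg body mass is an assumed example value, and the rate is the figure quoted above.
rate_ug_per_kg_min = 17.0        # detoxification rate quoted above, micrograms per kg per minute
body_mass_kg = 70.0              # assumed adult body mass (example value)
per_minute_mg = rate_ug_per_kg_min * body_mass_kg / 1000.0
print(f"~{per_minute_mg:.2f} mg of cyanide per minute, ~{per_minute_mg * 60:.0f} mg per hour")
# -> roughly 1.2 mg/min, on the order of 70 mg/h: a dose absorbed more slowly than this can be
#    handled by the rhodanese pathway, while the same total dose absorbed quickly may be lethal.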
[ { "math_id": 0, "text": "\\mathrm{MCN + X_2 \\longrightarrow XCN + MX}" } ]
https://en.wikipedia.org/wiki?curid=8765524
876732
Poincaré map
Type of map used in mathematics, particularly dynamical systems In mathematics, particularly in dynamical systems, a first recurrence map or Poincaré map, named after Henri Poincaré, is the intersection of a periodic orbit in the state space of a continuous dynamical system with a certain lower-dimensional subspace, called the Poincaré section, transversal to the flow of the system. More precisely, one considers a periodic orbit with initial conditions within a section of the space, which leaves that section afterwards, and observes the point at which this orbit first returns to the section. One then creates a map to send the first point to the second, hence the name "first recurrence map". The transversality of the Poincaré section means that periodic orbits starting on the subspace flow through it and not parallel to it. A Poincaré map can be interpreted as a discrete dynamical system with a state space that is one dimension smaller than the original continuous dynamical system. Because it preserves many properties of periodic and quasiperiodic orbits of the original system and has a lower-dimensional state space, it is often used for analyzing the original system in a simpler way. In practice this is not always possible as there is no general method to construct a Poincaré map. A Poincaré map differs from a recurrence plot in that space, not time, determines when to plot a point. For instance, the locus of the Moon when the Earth is at perihelion is a recurrence plot; the locus of the Moon when it passes through the plane perpendicular to the Earth's orbit and passing through the Sun and the Earth at perihelion is a Poincaré map. It was used by Michel Hénon to study the motion of stars in a galaxy, because the path of a star projected onto a plane looks like a tangled mess, while the Poincaré map shows the structure more clearly. Definition. Let (R, "M", "φ") be a global dynamical system, with R the real numbers, "M" the phase space and "φ" the evolution function. Let γ be a periodic orbit through a point "p" and "S" be a local differentiable and transversal section of "φ" through "p", called a Poincaré section through "p". Given an open and connected neighborhood formula_0 of "p", a function formula_1 is called Poincaré map for the orbit γ on the Poincaré section "S" through the point "p" if "P"("p") = "p", if "P"("U") is a neighborhood of "p" and "P" : "U" → "P"("U") is a diffeomorphism, and if for every point "x" in "U" the positive semi-orbit of "x" intersects "S" for the first time at "P"("x"). Example. Consider the following system of differential equations in polar coordinates, formula_2: formula_3 The flow of the system can be obtained by integrating the equation: for the formula_4 component we simply have formula_5 while for the formula_6 component we need to separate the variables and integrate: formula_7 Inverting the last expression gives formula_8 and since formula_9 we find formula_10 The flow of the system is therefore formula_11 The behaviour of the flow is the following: the circle of radius 1 (formula_12) is invariant, and any orbit starting at a radius other than 1 has "r"("t") → 1 as "t" → +∞. Therefore, the solution with initial data formula_13 draws a spiral that tends towards the radius 1 circle. We can take as Poincaré section for this flow the positive horizontal axis, namely formula_14: obviously we can use formula_6 as coordinate on the section. Every point in formula_15 returns to the section after a time formula_16 (this can be understood by looking at the evolution of the angle): we can take as Poincaré map the restriction of formula_17 to the section formula_15 computed at the time formula_18, formula_19. The Poincaré map is therefore: formula_20 The behaviour of the orbits of the discrete dynamical system formula_21 is the following: the point formula_22 is a fixed point of the map, with formula_23 for every formula_24, and every other orbit converges to this fixed point under forward iteration. Poincaré maps and stability analysis.
Poincaré maps can be interpreted as a discrete dynamical system. The stability of a periodic orbit of the original system is closely related to the stability of the fixed point of the corresponding Poincaré map. Let (R, "M", "φ") be a differentiable dynamical system with periodic orbit γ through "p". Let formula_1 be the corresponding Poincaré map through "p". We define formula_27 formula_28 formula_29 and formula_30 then (Z, "U", "P") is a discrete dynamical system with state space "U" and evolution function formula_31 Per definition this system has a fixed point at "p". The periodic orbit γ of the continuous dynamical system is stable if and only if the fixed point "p" of the discrete dynamical system is stable. The periodic orbit γ of the continuous dynamical system is asymptotically stable if and only if the fixed point "p" of the discrete dynamical system is asymptotically stable.
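The example worked out above lends itself to a direct numerical check. The following Python sketch (an added illustration, not part of the original text) iterates the closed-form Poincaré map formula_20 and also recovers it by integrating the radial equation over one period with a simple Runge–Kutta scheme; both show orbits on the section converging to the fixed point at "r" = 1.
import math

def Psi(r):
    # Closed-form Poincaré map of the example: radius after time 2*pi
    return (1.0 + math.exp(-4 * math.pi) * (1.0 / r**2 - 1.0)) ** -0.5

def Psi_numeric(r0, steps=20000):
    # Integrate dr/dt = (1 - r^2) r over one period with classical RK4 as a cross-check
    h = 2 * math.pi / steps
    f = lambda r: (1.0 - r * r) * r
    r = r0
    for _ in range(steps):
        k1 = f(r); k2 = f(r + h * k1 / 2); k3 = f(r + h * k2 / 2); k4 = f(r + h * k3)
        r += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return r

print(Psi(2.0), Psi_numeric(2.0))      # both ~1.0000013: one return already lands near r = 1
r = 0.5
for n in range(3):
    r = Psi(r)
    print(n + 1, r)                    # iterates of 0.5 approach the fixed point r = 1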
[ { "math_id": 0, "text": "U \\subset S" }, { "math_id": 1, "text": "P: U \\to S" }, { "math_id": 2, "text": "(\\theta, r)\\in \\mathbb{S}^1\\times \\mathbb{R}^+ " }, { "math_id": 3, "text": "\n\\begin{cases}\n\\dot{\\theta} = 1\\\\\n\\dot{r} = (1-r^2)r\n\\end{cases}\n" }, { "math_id": 4, "text": "\\theta" }, { "math_id": 5, "text": "\n\\theta(t) = \\theta_0 + t\n" }, { "math_id": 6, "text": "r" }, { "math_id": 7, "text": "\n\\int \\frac{1}{(1-r^2)r} dr = \\int dt \\Longrightarrow \\log\\left(\\frac{r}{\\sqrt{1-r^2}}\\right) = t+c\n" }, { "math_id": 8, "text": "\nr(t) = \\sqrt{\\frac{e^{2(t+c)}}{1+e^{2(t+c)}}}\n" }, { "math_id": 9, "text": "\nr(0)=\\sqrt{\\frac{e^{2c}}{1+e^{2c}}}\n" }, { "math_id": 10, "text": "\nr(t) = \\sqrt{\\frac{e^{2t}r_0^2}{1+r_0^2(e^{2t}-1)}} = \\sqrt{\\frac{1}{1+e^{-2t}\\left(\\frac{1}{r_0^2}-1\\right)}}\n" }, { "math_id": 11, "text": "\n\\Phi_t(\\theta, r) = \\left(\\theta+ t, \\sqrt{\\frac{1}{1+e^{-2t}\\left(\\frac{1}{r_0^2}-1\\right)}}\\right)\n" }, { "math_id": 12, "text": "\\bar{r}=1" }, { "math_id": 13, "text": "(\\theta_0, r_0\\neq 1)" }, { "math_id": 14, "text": "\n\\Sigma = \\{(\\theta, r) \\ : \\ \\theta =0 \\}\n" }, { "math_id": 15, "text": "\\Sigma" }, { "math_id": 16, "text": "t=2\\pi" }, { "math_id": 17, "text": "\\Phi" }, { "math_id": 18, "text": "2\\pi" }, { "math_id": 19, "text": "\\Phi_{2\\pi}|_{\\Sigma}" }, { "math_id": 20, "text": "\\Psi(r) = \\sqrt{\\frac{1}{1+e^{-4\\pi}\\left(\\frac{1}{r^2}-1\\right)}}" }, { "math_id": 21, "text": " (\\Sigma, \\mathbb{Z}, \\Psi) " }, { "math_id": 22, "text": "r=1" }, { "math_id": 23, "text": "\\Psi^n(1)=1" }, { "math_id": 24, "text": "n" }, { "math_id": 25, "text": "\\Psi^n(z) \\to 1" }, { "math_id": 26, "text": "n\\to \\pm \\infty" }, { "math_id": 27, "text": "P^{0} := \\operatorname{id}_{U}" }, { "math_id": 28, "text": "P^{n+1} := P \\circ P^n" }, { "math_id": 29, "text": "P^{-n-1} := P^{-1} \\circ P^{-n}" }, { "math_id": 30, "text": "P(n, x) := P^n(x)" }, { "math_id": 31, "text": "P: \\mathbb{Z} \\times U \\to U." } ]
https://en.wikipedia.org/wiki?curid=876732
8768758
Sylvester's formula
In matrix theory, Sylvester's formula or Sylvester's matrix theorem (named after J. J. Sylvester) or Lagrange−Sylvester interpolation expresses an analytic function "f"("A") of a matrix A as a polynomial in A, in terms of the eigenvalues and eigenvectors of A. It states that formula_0 where the "λ""i" are the eigenvalues of A, and the matrices formula_1 are the corresponding Frobenius covariants of A, which are (projection) matrix Lagrange polynomials of A. Conditions. Sylvester's formula applies for any diagonalizable matrix A with k distinct eigenvalues, λ1, ..., λ"k", and any function f defined on some subset of the complex numbers such that "f"("A") is well defined. The last condition means that every eigenvalue "λ""i" is in the domain of f, and that every eigenvalue "λ""i" with multiplicity m"i" > 1 is in the interior of the domain, with f being ("m""i" − 1) times differentiable at "λ""i". Example. Consider the two-by-two matrix: formula_2 This matrix has two eigenvalues, 5 and −2. Its Frobenius covariants are formula_3 Sylvester's formula then amounts to formula_4 For instance, if f is defined by "f"("x") = "x"−1, then Sylvester's formula expresses the matrix inverse "f"("A") = "A"−1 as formula_5 Generalization. Sylvester's formula is only valid for diagonalizable matrices; an extension due to Arthur Buchheim, based on Hermite interpolating polynomials, covers the general case: formula_6, where formula_7. A concise form is further given by Hans Schwerdtfeger, formula_8, where A"i" are the corresponding Frobenius covariants of A. Special case. If a matrix A is both Hermitian and unitary, then it can only have eigenvalues of formula_9, and therefore formula_10, where formula_11 is the projector onto the subspace with eigenvalue +1, and formula_12 is the projector onto the subspace with eigenvalue formula_13. By the completeness of the eigenbasis, formula_14. Therefore, for any analytic function f, formula_15 In particular, formula_16 and formula_17.
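The two-by-two example above is easy to reproduce numerically. The following NumPy sketch (an added illustration) builds the Frobenius covariants from the eigenvalues 5 and −2 and evaluates "f"("A") for "f"("x") = "x"−1 and "f"("x") = "e""x", comparing the results with a direct inverse and with the exponential computed from an eigendecomposition.
import numpy as np

A = np.array([[1.0, 3.0], [4.0, 2.0]])
lam1, lam2 = 5.0, -2.0
I = np.eye(2)
A1 = (A - lam2 * I) / (lam1 - lam2)     # Frobenius covariant for eigenvalue 5
A2 = (A - lam1 * I) / (lam2 - lam1)     # Frobenius covariant for eigenvalue -2

def f_of_A(f):
    # Sylvester's formula for a diagonalizable 2x2 matrix with distinct eigenvalues
    return f(lam1) * A1 + f(lam2) * A2

print(f_of_A(lambda x: 1.0 / x))        # [[-0.2, 0.3], [0.4, -0.1]], as in the text
print(np.linalg.inv(A))                 # the same matrix, computed directly

w, V = np.linalg.eig(A)
expA_eig = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)
print(np.allclose(f_of_A(np.exp), expA_eig))   # True: the matrix exponentials agree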
[ { "math_id": 0, "text": " f(A) = \\sum_{i=1}^k f(\\lambda_i) ~A_i ~," }, { "math_id": 1, "text": " A_i \\equiv \\prod_{j=1 \\atop j \\ne i}^k \\frac{1}{\\lambda_i - \\lambda_j} \\left(A - \\lambda_j I\\right) " }, { "math_id": 2, "text": " A = \\begin{bmatrix} 1 & 3 \\\\ 4 & 2 \\end{bmatrix}." }, { "math_id": 3, "text": " \\begin{align}\nA_1 &= c_1 r_1 = \\begin{bmatrix} 3 \\\\ 4 \\end{bmatrix} \\begin{bmatrix} \\frac{1}{7} & \\frac{1}{7} \\end{bmatrix} = \\begin{bmatrix} \\frac{3}{7} & \\frac{3}{7} \\\\ \\frac{4}{7} & \\frac{4}{7} \\end{bmatrix} = \\frac{A + 2I}{5 - (-2)}\\\\\nA_2 &= c_2 r_2 = \\begin{bmatrix} \\frac{1}{7} \\\\ -\\frac{1}{7} \\end{bmatrix} \\begin{bmatrix} 4 & -3 \\end{bmatrix} = \\begin{bmatrix} \\frac{4}{7} & -\\frac{3}{7} \\\\ -\\frac{4}{7} & \\frac{3}{7} \\end{bmatrix} = \\frac{A - 5I}{-2 - 5}.\n\\end{align} " }, { "math_id": 4, "text": " f(A) = f(5) A_1 + f(-2) A_2. \\, " }, { "math_id": 5, "text": " \\frac{1}{5} \\begin{bmatrix} \\frac{3}{7} & \\frac{3}{7} \\\\ \\frac{4}{7} & \\frac{4}{7} \\end{bmatrix} - \\frac{1}{2} \\begin{bmatrix} \\frac{4}{7} & -\\frac{3}{7} \\\\ -\\frac{4}{7} & \\frac{3}{7} \\end{bmatrix} = \\begin{bmatrix} -0.2 & 0.3 \\\\ 0.4 & -0.1 \\end{bmatrix}. " }, { "math_id": 6, "text": "f(A) = \\sum_{i=1}^{s} \\left[ \\sum_{j=0}^{n_{i}-1} \\frac{1}{j!} \\phi_i^{(j)}(\\lambda_i)\\left(A - \\lambda_i I\\right)^j \\prod_{{j=1,j\\ne i}}^{s}\\left(A - \\lambda_j I\\right)^{n_j} \\right]" }, { "math_id": 7, "text": "\\phi_i(t) := f(t)/\\prod_{j\\ne i}\\left(t - \\lambda_j\\right)^{n_j}" }, { "math_id": 8, "text": "f(A)=\\sum_{i=1}^{s} A_{i} \\sum_{j=0}^{n_{i}-1} \\frac{f^{(j)}(\\lambda_i)}{j!}(A-\\lambda_iI)^{j}" }, { "math_id": 9, "text": "\\plusmn 1" }, { "math_id": 10, "text": "A=A_+-A_-" }, { "math_id": 11, "text": "A_+" }, { "math_id": 12, "text": "A_-" }, { "math_id": 13, "text": "- 1" }, { "math_id": 14, "text": "A_++A_-=I" }, { "math_id": 15, "text": "\\begin{align} f(\\theta A)&=f(\\theta)A_{+1}+f(-\\theta)A_{-1} \\\\\n&=f(\\theta)\\frac{I+A}{2}+f(-\\theta)\\frac{I-A}{2}\\\\\n&=\\frac{f(\\theta)+f(-\\theta)}{2}I+\\frac{f(\\theta)-f(-\\theta)}{2}A\\\\\n\\end{align} .\n" }, { "math_id": 16, "text": " e^{i\\theta A}=(\\cos \\theta)I+(i\\sin \\theta) A" }, { "math_id": 17, "text": " A =e^{i\\frac{\\pi}{2}(I-A)}=e^{-i\\frac{\\pi}{2}(I-A)}" } ]
https://en.wikipedia.org/wiki?curid=8768758
8770485
Institute of Medicine Equation
The Institute of Medicine Equation was published in September 2002. It is the equation which is behind the 2005 Dietary Guidelines for Americans and the new food pyramid, MyPyramid. The Institute of Medicine equation uses a different approach from most others. The equation doesn't measure basal metabolic rate, but uses experiments based on doubly labelled water. The scientists at the Institute of Medicine said in their report that the factorial method tended to underestimate calorie expenditure. Equations. The Estimated Energy Requirement, formula_0, is the estimated number of daily kilocalories, or Calories, an individual requires in order to maintain his or her current weight. For a person with a body mass of formula_1 (kg), height of formula_2 (m), age of formula_3 (years) and Physical Activity formula_4, this is given by the sex- and age-specific equations below: formula_5 for adult men, formula_6 for adult women, and formula_7 and formula_8 for boys and girls respectively. These equations are for healthy weight children and adults. Correction formulae are used for overweight and obese individuals. These corrections for children and adolescents have been debated by S. J. Woodruff, R. M. Hanning, and S. I. Barr in a paper in Obesity Reviews published January 1, 2009. The issue is whether or not the different formulae are actually necessary or possibly even harmful if overestimation occurs and thus contributes to an even higher and unhealthier body weight in these individuals. However, they are as follows: formula_10 and formula_11. Here formula_4 is the Physical Activity coefficient. The activity coefficient depends on the person's activity level: Sedentary means only the light physical activity associated with independent living, moderately active means about half an hour of moderate to vigorous exercise in addition to this. Active means at least an hour of exercise and very active means being physically active for several hours each day.
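As a worked example of the adult equations, the Python sketch below is an added illustration; the subjects' ages, weights and heights and the sedentary activity coefficient PA = 1.0 are assumed example values, and the assignment of formula_5 and formula_6 to adult men and women follows the reading given above.
def eer_adult_man(age, mass_kg, height_m, pa=1.0):
    # formula_5: EER = 662 - 9.53*a + PA*(15.91*m + 539.6*h)
    return 662 - 9.53 * age + pa * (15.91 * mass_kg + 539.6 * height_m)

def eer_adult_woman(age, mass_kg, height_m, pa=1.0):
    # formula_6: EER = 354 - 6.91*a + PA*(9.36*m + 726*h)
    return 354 - 6.91 * age + pa * (9.36 * mass_kg + 726 * height_m)

print(round(eer_adult_man(30, 70, 1.75)))    # about 2434 kcal/day
print(round(eer_adult_woman(30, 60, 1.65)))  # about 1906 kcal/day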
[ { "math_id": 0, "text": "\\text{EER}" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "h" }, { "math_id": 3, "text": "a" }, { "math_id": 4, "text": "\\text{PA}" }, { "math_id": 5, "text": "\\text{EER}=662-(9.53 \\times a)+ \\text{PA} \\times ((15.91 \\times m)+(539.6 \\times h))" }, { "math_id": 6, "text": "\\text{EER}=354-(6.91 \\times a)+\\text{PA}\\times ((9.36 \\times m)+(726 \\times h))" }, { "math_id": 7, "text": "\\text{EER}=88.5-(61.9 \\times a)+\\text{PA} \\times ((26.7 \\times m)+(903 \\times h))" }, { "math_id": 8, "text": "\\text{EER}=135.3-(30.8 \\times a)+\\text{PA}\\times ((10 \\times m)+(934 \\times h))" }, { "math_id": 9, "text": "\\text{EER}=89 \\times m-80" }, { "math_id": 10, "text": "\\text{EER}=389-(41.2 \\times a)+\\text{PA}\\times ((15.0 \\times m)+(701.6 \\times h))" }, { "math_id": 11, "text": "\\text{EER}=-114-(50.9 \\times a)+\\text{PA}\\times ((19.5 \\times m)+(1161.4 \\times h))" } ]
https://en.wikipedia.org/wiki?curid=8770485
8771473
Kernel (statistics)
Window function The term kernel is used in statistical analysis to refer to a window function. The term "kernel" has several distinct meanings in different branches of statistics. Bayesian statistics. In statistics, especially in Bayesian statistics, the kernel of a probability density function (pdf) or probability mass function (pmf) is the form of the pdf or pmf in which any factors that are not functions of any of the variables in the domain are omitted. Note that such factors may well be functions of the parameters of the pdf or pmf. These factors form part of the normalization factor of the probability distribution, and are unnecessary in many situations. For example, in pseudo-random number sampling, most sampling algorithms ignore the normalization factor. In addition, in Bayesian analysis of conjugate prior distributions, the normalization factors are generally ignored during the calculations, and only the kernel considered. At the end, the form of the kernel is examined, and if it matches a known distribution, the normalization factor can be reinstated. Otherwise, it may be unnecessary (for example, if the distribution only needs to be sampled from). For many distributions, the kernel can be written in closed form, but not the normalization constant. An example is the normal distribution. Its probability density function is formula_0 and the associated kernel is formula_1 Note that the factor in front of the exponential has been omitted, even though it contains the parameter formula_2, because it is not a function of the domain variable formula_3. Pattern analysis. The kernel of a reproducing kernel Hilbert space is used in the suite of techniques known as kernel methods to perform tasks such as statistical classification, regression analysis, and cluster analysis on data in an implicit space. This usage is particularly common in machine learning. Nonparametric statistics. In nonparametric statistics, a kernel is a weighting function used in non-parametric estimation techniques. Kernels are used in kernel density estimation to estimate random variables' density functions, or in kernel regression to estimate the conditional expectation of a random variable. Kernels are also used in time-series analysis, in the use of the periodogram to estimate the spectral density, where they are known as window functions. An additional use is in the estimation of a time-varying intensity for a point process where window functions (kernels) are convolved with time-series data. Commonly, kernel widths must also be specified when running a non-parametric estimation. Definition. A kernel is a non-negative real-valued integrable function "K." For most applications, it is desirable to define the function to satisfy two additional requirements: formula_4 formula_5 The first requirement ensures that the method of kernel density estimation results in a probability density function. The second requirement ensures that the average of the corresponding distribution is equal to that of the sample used. If "K" is a kernel, then so is the function "K"* defined by "K"*("u") = λ"K"(λ"u"), where λ > 0. This can be used to select a scale that is appropriate for the data. Kernel functions in common use. Several types of kernel functions are commonly used: uniform, triangle, Epanechnikov, quartic (biweight), tricube, triweight, Gaussian, quadratic and cosine. If a kernel formula_6 is given with a bounded support, then formula_7 for values of "u" lying outside the support.
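As an illustration of the definition above, the following Python sketch is an added example; the sample data and the bandwidth are arbitrary assumed values. It implements the Epanechnikov kernel, checks numerically that it integrates to one, and uses it in a simple kernel density estimate.
import numpy as np

def epanechnikov(u):
    # A kernel in the sense defined above: non-negative, symmetric, integrates to 1
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

# Numerical check of the normalization requirement
grid = np.linspace(-2, 2, 100001)
du = grid[1] - grid[0]
print(epanechnikov(grid).sum() * du)                # approximately 1.0

def kde(x, sample, h):
    # Kernel density estimate: average of rescaled kernels K((x - x_i)/h) / h
    x = np.asarray(x, dtype=float)[:, None]
    return epanechnikov((x - sample[None, :]) / h).mean(axis=1) / h

sample = np.array([-1.2, -0.5, 0.1, 0.3, 0.4, 1.1, 1.8])   # arbitrary example data
xs = np.linspace(-3, 3, 7)
print(kde(xs, sample, h=0.8))                        # density estimate at a few points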
[ { "math_id": 0, "text": "p(x|\\mu,\\sigma^2) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}} e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}" }, { "math_id": 1, "text": "p(x|\\mu,\\sigma^2) \\propto e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}" }, { "math_id": 2, "text": "\\sigma^2" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "\\int_{-\\infty}^{+\\infty}K(u)\\,du = 1\\,;" }, { "math_id": 5, "text": "K(-u) = K(u) \\mbox{ for all values of } u\\,." }, { "math_id": 6, "text": "K" }, { "math_id": 7, "text": " K(u) = 0 " } ]
https://en.wikipedia.org/wiki?curid=8771473
8771567
Partial correlation
Concept in probability theory and statistics In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed. When determining the numerical relationship between two variables of interest, using their correlation coefficient will give misleading results if there is another confounding variable that is numerically related to both variables of interest. This misleading information can be avoided by controlling for the confounding variable, which is done by computing the partial correlation coefficient. This is precisely the motivation for including other right-side variables in a multiple regression; but while multiple regression gives unbiased results for the effect size, it does not give a numerical value of a measure of the strength of the relationship between the two variables of interest. For example, given economic data on the consumption, income, and wealth of various individuals, consider the relationship between consumption and income. Failing to control for wealth when computing a correlation coefficient between consumption and income would give a misleading result, since income might be numerically related to wealth which in turn might be numerically related to consumption; a measured correlation between consumption and income might actually be contaminated by these other correlations. The use of a partial correlation avoids this problem. Like the correlation coefficient, the partial correlation coefficient takes on a value in the range from –1 to 1. The value –1 conveys a perfect negative correlation controlling for some variables (that is, an exact linear relationship in which higher values of one variable are associated with lower values of the other); the value 1 conveys a perfect positive linear relationship, and the value 0 conveys that there is no linear relationship. The partial correlation coincides with the conditional correlation if the random variables are jointly distributed as the multivariate normal, other elliptical, multivariate hypergeometric, multivariate negative hypergeometric, multinomial, or Dirichlet distribution, but not in general otherwise. Formal definition. Formally, the partial correlation between "X" and "Y" given a set of "n" controlling variables Z = {"Z"1, "Z"2, ..., "Z""n"}, written "ρ""XY"·Z, is the correlation between the residuals "e""X" and "e""Y" resulting from the linear regression of "X" with Z and of "Y" with Z, respectively. The first-order partial correlation (i.e., when "n" = 1) is the difference between a correlation and the product of the removable correlations divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation with joint variance through correlation are available in Guilford (1973, pp. 344–345). Computation. Using linear regression. A simple way to compute the sample partial correlation for some data is to solve the two associated linear regression problems and calculate the correlation between the residuals. Let "X" and "Y" be random variables taking real values, and let Z be the "n"-dimensional vector-valued random variable. Let "xi", "yi" and z"i" denote the "i"th of formula_0 i.i.d. observations from some joint probability distribution over real random variables "X", "Y", and Z, with z"i" having been augmented with a 1 to allow for a constant term in the regression. 
Solving the linear regression problem amounts to finding ("n"+1)-dimensional regression coefficient vectors formula_1 and formula_2 such that formula_3 formula_4 where formula_0 is the number of observations, and formula_5 is the scalar product between the vectors formula_6 and formula_7. The residuals are then formula_8 formula_9 and the sample partial correlation is then given by the usual formula for sample correlation, but between these new "derived" values: formula_10 In the first expression the three terms after minus signs all equal 0 since each contains the sum of residuals from an ordinary least squares regression. Example. Consider the following data on three variables, "X", "Y", and "Z" (the same values used in the R code below): "X" = (2, 4, 15, 20), "Y" = (1, 2, 3, 4), "Z" = (0, 0, 1, 1). Computing the Pearson correlation coefficient between variables "X" and "Y" results in approximately 0.970, while computing the partial correlation between "X" and "Y", using the formula given above, gives a partial correlation of 0.919. The computations were done using R with the following code. > X <- c(2,4,15,20) > Y <- c(1,2,3,4) > Z <- c(0,0,1,1) > mm1 <- lm(X~Z) > res1 <- mm1$residuals > mm2 <- lm(Y~Z) > res2 <- mm2$residuals > cor(res1,res2) [1] 0.919145 > cor(X,Y) [1] 0.9695016 > generalCorr::parcorMany(cbind(X,Y,Z)) nami namj partij partji rijMrji [1,] "X" "Y" "0.8844" "1" "-0.1156" [2,] "X" "Z" "0.1581" "1" "-0.8419" The lower part of the above code reports the generalized nonlinear partial correlation coefficient between "X" and "Y" after removing the nonlinear effect of "Z" to be 0.8844, and the generalized partial correlation coefficient between "X" and "Z" after removing the nonlinear effect of "Y" to be 0.1581. See the R package `generalCorr' and its vignettes for details. Simulation and other details are in Vinod (2017) "Generalized correlation and kernel causality with applications in development economics," Communications in Statistics - Simulation and Computation, vol. 46, [4513, 4534], available online: 29 Dec 2015, URL https://doi.org/10.1080/03610918.2015.1122048. Using recursive formula. It can be computationally expensive to solve the linear regression problems. Actually, the "n"th-order partial correlation (i.e., with |Z| = "n") can be easily computed from three ("n" - 1)th-order partial correlations. The zeroth-order partial correlation "ρ""XY"·Ø is defined to be the regular correlation coefficient "ρ""XY". It holds, for any formula_11, that formula_12 Naïvely implementing this computation as a recursive algorithm yields an exponential time complexity. However, this computation has the overlapping subproblems property, such that using dynamic programming or simply caching the results of the recursive calls yields a complexity of formula_13. Note in the case where "Z" is a single variable, this reduces to: formula_14 Using matrix inversion. The partial correlation can also be written in terms of the joint precision matrix. Consider a set of random variables, formula_15 of cardinality "n". We want the partial correlation between two variables formula_16 and formula_17 given all others, i.e., formula_18. Suppose the (joint/full) covariance matrix formula_19 is positive definite and therefore invertible. If the precision matrix is defined as formula_20, then the partial correlation between formula_16 and formula_17 given all the remaining variables equals −"p""ij"/√("p""ii" "p""jj"), where "p""ij", "p""ii" and "p""jj" denote entries of the precision matrix; call this Equation (1). Computing this requires formula_21, the inverse of the covariance matrix formula_22 which runs in formula_13 time (using the sample covariance matrix to obtain a sample partial correlation).
Note that only a single matrix inversion is required to give "all" the partial correlations between pairs of variables in formula_23. To prove Equation (1), return to the previous notation (i.e. formula_24) and start with the definition of partial correlation: "ρ""XY"·Z is the correlation between the residuals "e""X" and "e""Y" resulting from the linear regression of "X" with Z and of "Y" with Z, respectively. First, suppose formula_25 are the coefficients of the linear regression fits; that is, formula_26 formula_27 Write the joint covariance matrix for the vector formula_28 as formula_29 where formula_30 Then the standard formula for linear regression gives formula_31 Hence, the residuals can be written as formula_32 Note that formula_33 has expectation zero because of the inclusion of an intercept term in formula_34. Computing the covariance now shows that the covariance matrix of the residual pair ("R""X", "R""Y") equals "C"11 − "C"12"C"22−1"C"21 in the block notation above; this identity is Equation (2). Next, write the precision matrix formula_35 in a similar block form: formula_36 Then, by Schur's formula for block-matrix inversion, formula_37 The entries of the right-hand-side matrix are precisely the covariances previously computed in (2), giving formula_38 Using the formula for the inverse of a 2×2 matrix gives formula_39 So indeed, the partial correlation is formula_40 as claimed in (1). Interpretation. Geometrical. Let three variables "X", "Y", "Z" (where "Z" is the "control" or "extra variable") be chosen from a joint probability distribution over "n" variables V. Further, let v"i", 1 ≤ "i" ≤ "N", be "N" "n"-dimensional i.i.d. observations taken from the joint probability distribution over V. The geometrical interpretation comes from considering the "N"-dimensional vectors x (formed by the successive values of "X" over the observations), y (formed by the values of "Y"), and z (formed by the values of "Z"). It can be shown that the residuals "eX,i" coming from the linear regression of "X" on Z, if also considered as an "N"-dimensional vector e"X", have a zero scalar product with the vector z generated by Z. This means that the residuals vector lies on an ("N"–1)-dimensional hyperplane "S"z that is perpendicular to z. The same also applies to the residuals "eY,i", which generate a vector e"Y". The desired partial correlation is then the cosine of the angle "φ" between the projections e"X" and e"Y" of x and y, respectively, onto the hyperplane perpendicular to z. As conditional independence test. With the assumption that all involved variables are multivariate Gaussian, the partial correlation "ρ""XY"·Z is zero if and only if "X" is conditionally independent from "Y" given Z. This property does not hold in the general case. To test whether a sample partial correlation formula_41 implies that the true population partial correlation differs from 0, Fisher's "z-transform of the partial correlation" can be used: formula_42 The null hypothesis is formula_43, to be tested against the two-tailed alternative formula_44. formula_45 can be rejected if formula_46 where formula_47 is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, formula_48 is the significance level of formula_45, and formula_0 is the sample size. This "z"-transform is approximate, and the actual distribution of the sample (partial) correlation coefficient is not straightforward. However, an exact "t"-test based on a combination of the partial regression coefficient, the partial correlation coefficient, and the partial variances is available.
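The test just described takes only a few lines of R. The helper name partial_cor_test and its argument names below are illustrative choices, not a standard function.

# Sketch: Fisher z-transform test of H0: the partial correlation is 0
# r_hat = sample partial correlation, N = sample size, k = number of controlling variables
partial_cor_test <- function(r_hat, N, k, alpha = 0.05) {
  z    <- atanh(r_hat)            # Fisher z-transform, (1/2) log((1 + r)/(1 - r))
  stat <- sqrt(N - k - 3) * abs(z)
  stat > qnorm(1 - alpha / 2)     # TRUE means H0 is rejected at level alpha
}

The approximation requires a sample size larger than k + 3; for the four-observation example earlier in this article the statistic is degenerate, so the test is only meaningful for larger samples.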
The distribution of the sample partial correlation was described by Fisher. Semipartial correlation (part correlation). The semipartial (or part) correlation statistic is similar to the partial correlation statistic; both compare variations of two variables after certain factors are controlled for. However, to calculate the semipartial correlation, one holds the third variable constant for either "X" or "Y" but not both; whereas for the partial correlation, one holds the third variable constant for both. The semipartial correlation compares the unique variation of one variable (having removed variation associated with the "Z" variable(s)) with the unfiltered variation of the other, while the partial correlation compares the unique variation of one variable to the unique variation of the other. The semipartial correlation can be viewed as more practically relevant "because it is scaled to (i.e., relative to) the total variability in the dependent (response) variable." Conversely, it is less theoretically useful because it is less precise about the role of the unique contribution of the independent variable. The absolute value of the semipartial correlation of "X" with "Y" is always less than or equal to that of the partial correlation of "X" with "Y". The reason is this: Suppose the correlation of "X" with "Z" has been removed from "X", giving the residual vector "e""x" . In computing the semipartial correlation, "Y" still contains both unique variance and variance due to its association with "Z". But "e""x" , being uncorrelated with "Z", can only explain some of the unique part of the variance of "Y" and not the part related to "Z". In contrast, with the partial correlation, only "e""y" (the part of the variance of "Y" that is unrelated to "Z") is to be explained, so there is less variance of the type that "e""x" cannot explain. Use in time series analysis. In time series analysis, the partial autocorrelation function (sometimes "partial correlation function") of a time series is defined, for lag "formula_49", as formula_50 This function is used to determine the appropriate lag length for an autoregression. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
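Continuing the small numerical example from the computation section, the semipartial correlation of X with Y (controlling for Z in X only) can be read off from the residuals of a single regression; the sketch below reuses the vectors X, Y and Z defined there.

e_x <- residuals(lm(X ~ Z))   # remove the linear effect of Z from X only
cor(e_x, Y)                   # semipartial (part) correlation of X with Y
# about 0.41, smaller in absolute value than the partial correlation 0.919, as expected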
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "\\mathbf{w}_X^*" }, { "math_id": 2, "text": "\\mathbf{w}_Y^*" }, { "math_id": 3, "text": "\\mathbf{w}_X^* = \\arg\\min_{\\mathbf{w}} \\left\\{ \\sum_{i=1}^N (x_i - \\langle\\mathbf{w}, \\mathbf{z}_i \\rangle)^2 \\right\\} " }, { "math_id": 4, "text": "\\mathbf{w}_Y^* = \\arg\\min_{\\mathbf{w}} \\left\\{ \\sum_{i=1}^N (y_i - \\langle\\mathbf{w}, \\mathbf{z}_i \\rangle)^2 \\right\\} " }, { "math_id": 5, "text": "\\langle\\mathbf{w}, \\mathbf{z}_i \\rangle" }, { "math_id": 6, "text": "\\mathbf{w}" }, { "math_id": 7, "text": "\\mathbf{z}_i" }, { "math_id": 8, "text": "e_{X,i} = x_i - \\langle\\mathbf{w}_X^*,\\mathbf{z}_i \\rangle" }, { "math_id": 9, "text": "e_{Y,i} = y_i - \\langle\\mathbf{w}_Y^*,\\mathbf{z}_i \\rangle" }, { "math_id": 10, "text": "\\begin{align}\n\\hat{\\rho}_{XY\\cdot\\mathbf{Z}}&=\\frac{N\\sum_{i=1}^N e_{X,i}e_{Y,i}-\\sum_{i=1}^N e_{X,i}\\sum_{i=1}^N e_{Y,i}}\n{\\sqrt{N\\sum_{i=1}^N e_{X,i}^2-\\left(\\sum_{i=1}^N e_{X,i}\\right)^2}~\\sqrt{N\\sum_{i=1}^N e_{Y,i}^2-\\left(\\sum_{i=1}^N e_{Y,i}\\right)^2}}\\\\\n&=\\frac{N\\sum_{i=1}^N e_{X,i}e_{Y,i}}\n{\\sqrt{N\\sum_{i=1}^N e_{X,i}^2}~\\sqrt{N\\sum_{i=1}^N e_{Y,i}^2}}.\n\\end{align}" }, { "math_id": 11, "text": "Z_0 \\in \\mathbf{Z}," }, { "math_id": 12, "text": "\\rho_{XY\\cdot \\mathbf{Z} } =\n \\frac{\\rho_{XY\\cdot\\mathbf{Z}\\setminus\\{Z_0\\}} - \\rho_{XZ_0\\cdot\\mathbf{Z}\\setminus\\{Z_0\\}}\\rho_{Z_0Y\\cdot\\mathbf{Z}\\setminus\\{Z_0\\}}}\n {\\sqrt{1-\\rho_{XZ_0\\cdot\\mathbf{Z}\\setminus\\{Z_0\\}}^2} \\sqrt{1-\\rho_{Z_0Y\\cdot\\mathbf{Z}\\setminus\\{Z_0\\}}^2}}" }, { "math_id": 13, "text": "\\mathcal{O}(n^3)" }, { "math_id": 14, "text": "\\rho_{XY\\cdot Z}=\n \\frac{\\rho_{XY}-\\rho_{XZ}\\rho_{ZY}}\n {\\sqrt{1-\\rho_{XZ}^2}\\sqrt{1-\\rho_{ZY}^2}}" }, { "math_id": 15, "text": "\\mathbf{V} = {X_1,\\dots, X_n}" }, { "math_id": 16, "text": "X_i" }, { "math_id": 17, "text": "X_j" }, { "math_id": 18, "text": "\\mathbf{V} \\setminus \\{X_i,X_j\\}" }, { "math_id": 19, "text": "\\Sigma = (\\sigma_{ij})" }, { "math_id": 20, "text": "\\Omega = (p_{ij}) = \\Sigma^{-1}" }, { "math_id": 21, "text": "\\Sigma^{-1}" }, { "math_id": 22, "text": "\\Sigma" }, { "math_id": 23, "text": "\\mathbf{V}" }, { "math_id": 24, "text": "X,Y,\\mathbf{Z} \\leftrightarrow X_i,X_j, \\mathbf{V} \\setminus \\{X_i,X_j\\}" }, { "math_id": 25, "text": "\\beta,\\gamma" }, { "math_id": 26, "text": "\\beta = \\operatorname{argmin}_\\beta \\mathbb{E} \\|X - \\beta ^T Z\\|^2 " }, { "math_id": 27, "text": "\\gamma = \\operatorname{argmin}_\\gamma \\mathbb{E} \\|Y - \\gamma ^T Z\\|^2 " }, { "math_id": 28, "text": "(X,Y,Z^T)^T" }, { "math_id": 29, "text": " \\Sigma = \n\\begin{bmatrix}\n\\Sigma_{XX} & \\Sigma_{XY} & \\Sigma_{XZ} \\\\\n\\Sigma_{YX} & \\Sigma_{YY} & \\Sigma_{YZ} \\\\\n\\Sigma_{ZX} & \\Sigma_{ZY} & \\Sigma_{ZZ}\n\\end{bmatrix} =\n\\begin{bmatrix}\nC_{11} & C_{12} \\\\\nC_{21} & C_{22} \\\\\n\\end{bmatrix}\n" }, { "math_id": 30, "text": "C_{11} = \\begin{bmatrix}\n\\Sigma_{XX} & \\Sigma_{XY} \\\\\n\\Sigma_{YX} & \\Sigma_{YY}\n\\end{bmatrix}, \\qquad\nC_{12} = \\begin{bmatrix} \\Sigma_{XZ} \\\\ \\Sigma_{YZ} \\end{bmatrix}, \\qquad\nC_{21} = \\begin{bmatrix} \\Sigma_{ZX} & \\Sigma_{ZY} \\end{bmatrix}, \\qquad\nC_{22} = \\Sigma_{ZZ}\n" }, { "math_id": 31, "text": " \\beta = \\left(\\Sigma_{ZZ}\\right)^{-1} \\Sigma_{ZX} " }, { "math_id": 32, "text": " R_X = X - \\beta^T Z = X - \\Sigma_{XZ} \\left(\\Sigma_{ZZ}\\right)^{-1} Z " }, { "math_id": 33, "text": "R_X" }, { "math_id": 34, "text": "Z" }, { 
"math_id": 35, "text": " \\Omega = \\Sigma^{-1}" }, { "math_id": 36, "text": " \\Omega = \n\\begin{bmatrix}\n\\Omega_{XX} & \\Omega_{XY} & \\Omega_{XZ} \\\\\n\\Omega_{YX} & \\Omega_{YY} & \\Omega_{YZ} \\\\\n\\Omega_{ZX} & \\Omega_{ZY} & \\Omega_{ZZ}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nP_{11} & P_{12} \\\\\nP_{21} & P_{22} \\\\\n\\end{bmatrix}\n" }, { "math_id": 37, "text": " P_{11}^{-1} = C_{11} - C_{12} C_{22}^{-1} C_{21} " }, { "math_id": 38, "text": " P_{11}^{-1} = \n\\begin{bmatrix}\n\\operatorname{Cov}(R_X,R_X) & \\operatorname{Cov}(R_X,R_Y) \\\\\n\\operatorname{Cov}(R_Y,R_X) & \\operatorname{Cov}(R_Y,R_Y) \\\\\n\\end{bmatrix}\n" }, { "math_id": 39, "text": "\n\\begin{align}\nP_{11}^{-1}\n& =\n\\frac{1}{\\text{det} P_{11}}\n\\begin{pmatrix}\n[P_{11}]_{22} & -[P_{11}]_{12} \\\\\n-[P_{11}]_{21} & [P_{11}]_{11} \\\\\n\\end{pmatrix} \\\\\n& = \n\\frac{1}{\\text{det} P_{11}}\n\\begin{pmatrix}\np_{YY} & -p_{XY} \\\\\n-p_{YX} & p_{XX} \\\\\n\\end{pmatrix}\n\\end{align}\n" }, { "math_id": 40, "text": " \\rho_{XY \\cdot Z} \n= \\frac{\\operatorname{Cov}(R_X,R_Y)}{\\sqrt{\\operatorname{Cov}(R_X,R_X)\\operatorname{Cov}(R_Y,R_Y)}}\n= \\frac{-\\tfrac{1}{\\text{det} P_{11}}p_{XY}}{\\sqrt{\\tfrac{1}{\\text{det} P_{11}}p_{XX}\\tfrac{1}{\\text{det} P_{11}}p_{YY}}}\n\n= -\\frac{p_{XY}}{\\sqrt{p_{XX}p_{YY}}}\n" }, { "math_id": 41, "text": "\\hat{\\rho}_{XY\\cdot\\mathbf{Z}}" }, { "math_id": 42, "text": "z(\\hat{\\rho}_{XY\\cdot\\mathbf{Z}}) = \\frac{1}{2} \\ln\\left(\\frac{1+\\hat{\\rho}_{XY\\cdot\\mathbf{Z}}}{1-\\hat{\\rho}_{XY\\cdot\\mathbf{Z}}}\\right)" }, { "math_id": 43, "text": "H_0: \\rho_{XY\\cdot\\mathbf{Z}} = 0" }, { "math_id": 44, "text": "H_A: \\rho_{XY\\cdot\\mathbf{Z}} \\neq 0" }, { "math_id": 45, "text": "H_0" }, { "math_id": 46, "text": "\\sqrt{N - |\\mathbf{Z}| - 3}\\cdot |z(\\hat{\\rho}_{XY\\cdot\\mathbf{Z}})| > \\Phi^{-1}(1-\\alpha/2)" }, { "math_id": 47, "text": "\\Phi" }, { "math_id": 48, "text": "\\alpha" }, { "math_id": 49, "text": "h" }, { "math_id": 50, "text": "\\varphi(h)= \\rho_{X_0X_h\\,\\cdot\\, \\{X_1,\\,\\dots\\,,X_{h-1} \\}}" } ]
https://en.wikipedia.org/wiki?curid=8771567
8771825
Kernel regression
Technique in statistics In statistics, kernel regression is a non-parametric technique to estimate the conditional expectation of a random variable. The objective is to find a non-linear relation between a pair of random variables X and Y. In any nonparametric regression, the conditional expectation of a variable formula_0 relative to a variable formula_1 may be written: formula_2 where formula_3 is an unknown function. Nadaraya–Watson kernel regression. Nadaraya and Watson, both in 1964, proposed to estimate formula_3 as a locally weighted average, using a kernel as a weighting function. The Nadaraya–Watson estimator is: formula_4 where formula_5 is a kernel with a bandwidth formula_6 such that formula_7 is of order at least 1, that is formula_8. Derivation. Starting with the definition of conditional expectation, formula_9 we estimate the joint density "f"("x","y") and the marginal density "f"("x") using kernel density estimation with a kernel K: formula_10 formula_11 We get: formula_12 which is the Nadaraya–Watson estimator. Priestley–Chao kernel estimator. formula_13 where formula_14 is the bandwidth (or smoothing parameter). Gasser–Müller kernel estimator. formula_15 where formula_16 Example. This example is based upon Canadian cross-section wage data consisting of a random sample taken from the 1971 Canadian Census Public Use Tapes for male individuals having common education (grade 13). There are 205 observations in total. The figure produced by the script below shows the estimated regression function using a second-order Gaussian kernel along with asymptotic variability bounds. Script for example. The following commands of the R programming language use the npreg function to deliver optimal smoothing and to create the figure. These commands can be entered at the command prompt via cut and paste.

install.packages("np")
library(np)  # non-parametric library
data(cps71)
attach(cps71)
m <- npreg(logwage~age)
plot(m, plot.errors.method="asymptotic",
     plot.errors.style="band",
     ylim=c(11, 15.2))
points(age, logwage, cex=.25)
detach(cps71)

Related. According to David Salsburg, the algorithms used in kernel regression were independently developed and used in fuzzy systems: "Coming up with almost exactly the same computer algorithm, fuzzy systems and kernel density-based regressions appear to have been developed completely independently of one another." References.
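For readers who want to see the estimator itself rather than rely on the np package, the following sketch implements the Nadaraya–Watson formula directly with a Gaussian kernel; the fixed bandwidth h and the simulated data are arbitrary illustrative choices, not the data-driven bandwidth selected by npreg.

# Hand-rolled Nadaraya-Watson estimate: m_hat(x) = sum_i K_h(x - x_i) y_i / sum_i K_h(x - x_i)
nw_estimate <- function(x, xi, yi, h) {
  sapply(x, function(x0) {
    w <- dnorm((x0 - xi) / h)   # Gaussian kernel weights; the 1/h factor cancels in the ratio
    sum(w * yi) / sum(w)
  })
}

set.seed(42)
xi   <- runif(200, 0, 10)
yi   <- sin(xi) + rnorm(200, sd = 0.3)
grid <- seq(0, 10, length.out = 101)
plot(xi, yi, cex = 0.4)
lines(grid, nw_estimate(grid, xi, yi, h = 0.5))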
[ { "math_id": 0, "text": "Y" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\operatorname{E}(Y \\mid X) = m(X)" }, { "math_id": 3, "text": "m" }, { "math_id": 4, "text": " \\widehat{m}_h(x)=\\frac{\\sum_{i=1}^n K_h(x-x_i) y_i}{\\sum_{i=1}^nK_h(x-x_i)}\n" }, { "math_id": 5, "text": "K_h(t) = \\frac{1}{h}K\\left(\\frac{t}{h}\\right)" }, { "math_id": 6, "text": "h" }, { "math_id": 7, "text": "K(\\cdot)" }, { "math_id": 8, "text": "\\int_{-\\infty}^\\infty u K(u) \\, du = 0" }, { "math_id": 9, "text": "\n\\operatorname{E}(Y \\mid X=x) = \\int y f(y\\mid x) \\, dy = \\int y \\frac{f(x,y)}{f(x)} \\, dy\n" }, { "math_id": 10, "text": "\n\\hat{f}(x,y) = \\frac{1}{n}\\sum_{i=1}^n K_h(x-x_i) K_h(y-y_i),\n" }, { "math_id": 11, "text": "\n\\hat{f}(x) = \\frac{1}{n} \\sum_{i=1}^n K_h(x-x_i),\n" }, { "math_id": 12, "text": "\n\\begin{align}\n\\operatorname{\\hat E}(Y \\mid X=x) &= \\int y \\frac{\\hat{f}(x,y)}{\\hat{f}(x)} \\, dy,\\\\[6pt]\n&= \\int y \\frac{\\sum_{i=1}^n K_h(x-x_i) K_h(y-y_i)}{\\sum_{j=1}^n K_h(x-x_j)} \\,dy,\\\\[6pt]\n&= \\frac{\\sum_{i=1}^n K_h(x-x_i) \\int y \\, K_h(y-y_i) \\,dy}{\\sum_{j=1}^n K_h(x-x_j)},\\\\[6pt]\n&= \\frac{\\sum_{i=1}^n K_h(x-x_i) y_i}{\\sum_{j=1}^n K_h(x-x_j)},\n\\end{align}\n" }, { "math_id": 13, "text": "\n\\widehat{m}_{PC}(x) = h^{-1} \\sum_{i=2}^n (x_i - x_{i-1}) K\\left(\\frac{x-x_i}{h}\\right) y_i\n" }, { "math_id": 14, "text": " h " }, { "math_id": 15, "text": "\n\\widehat{m}_{GM}(x) = h^{-1} \\sum_{i=1}^n \\left[\\int_{s_{i-1}}^{s_i} K\\left(\\frac{x-u}{h}\\right) \\, du\\right] y_i\n" }, { "math_id": 16, "text": "s_i = \\frac{x_{i-1} + x_i}{2}." } ]
https://en.wikipedia.org/wiki?curid=8771825
877209
Zassenhaus lemma
Technical lemma in group theory In mathematics, the butterfly lemma or Zassenhaus lemma, named after Hans Zassenhaus, is a technical result on the lattice of subgroups of a group or the lattice of submodules of a module, or more generally for any modular lattice. Lemma. Suppose formula_0 is a group with subgroups formula_1 and formula_2. Suppose formula_3 and formula_4 are normal subgroups. Then there is an isomorphism of quotient groups: formula_5 This can be generalized to the case of a group with operators formula_6 with stable subgroups formula_1 and formula_2, the above statement being the case of formula_7 acting on itself by conjugation. Zassenhaus proved this lemma specifically to give the most direct proof of the Schreier refinement theorem. The 'butterfly' becomes apparent when trying to draw the Hasse diagram of the various groups involved. Zassenhaus' lemma for groups can be derived from a more general result known as Goursat's theorem stated in a Goursat variety (of which groups are an instance); however the group-specific modular law also needs to be used in the derivation.
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "B\\triangleleft A" }, { "math_id": 4, "text": "D\\triangleleft C" }, { "math_id": 5, "text": "\\frac{(A\\cap C)B}{(A\\cap D)B} \\cong \\frac{(A\\cap C)D}{(B\\cap C)D}." }, { "math_id": 6, "text": "(G, \\Omega)" }, { "math_id": 7, "text": "\\Omega=G" } ]
https://en.wikipedia.org/wiki?curid=877209
877315
Independent increments
In probability theory, independent increments are a property of stochastic processes and random measures. In many cases, a process or random measure has independent increments by definition, which underlines their importance. Some of the stochastic processes that by definition possess independent increments are the Wiener process, all Lévy processes, all additive processes and the Poisson point process. Definition for stochastic processes. Let formula_0 be a stochastic process. In most cases, formula_1 or formula_2. Then the stochastic process has independent increments if and only if for every formula_3 and any choice formula_4 with formula_5 the random variables formula_6 are stochastically independent. Definition for random measures. A random measure formula_7 has independent increments if and only if the random variables formula_8 are stochastically independent for every selection of pairwise disjoint measurable sets formula_9 and every formula_3. Independent S-increments. Let formula_7 be a random measure on formula_10 and define for every bounded measurable set formula_11 the random measure formula_12 on formula_13 as formula_14 Then formula_7 is called a random measure with independent S-increments if, for all bounded sets formula_15 and all formula_16, the random measures formula_17 are independent. Application. Independent increments are a basic property of many stochastic processes and are often incorporated in their definition. The notion of independent increments and independent S-increments of random measures plays an important role in the characterization of the Poisson point process and of infinite divisibility.
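As a concrete illustration of the definition for stochastic processes, the sketch below simulates a Wiener process in R by summing independent Gaussian increments; the grid on [0, 1] is an arbitrary choice made for illustration.

# A Wiener process built from its independent increments
set.seed(1)
t  <- seq(0, 1, by = 0.001)
dW <- rnorm(length(t) - 1, mean = 0, sd = sqrt(diff(t)))  # independent N(0, dt) increments
W  <- c(0, cumsum(dW))                                    # W_t is the running sum of increments
# Increments over disjoint intervals, e.g. W[501] - W[1] and W[1001] - W[501],
# are sums over disjoint sets of dW terms and hence independent.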
[ { "math_id": 0, "text": " (X_t)_{t \\in T} " }, { "math_id": 1, "text": " T= \\N " }, { "math_id": 2, "text": " T=\\R^+ " }, { "math_id": 3, "text": " m \\in \\N " }, { "math_id": 4, "text": " t_0, t_1, t_2, \\dots,t_{m-1}, t_m \\in T" }, { "math_id": 5, "text": " t_0 < t_1 < t_2< \\dots < t_m " }, { "math_id": 6, "text": " (X_{t_1}-X_{t_0}),(X_{t_2}-X_{t_1}), \\dots, (X_{t_m}-X_{t_{m-1}} )" }, { "math_id": 7, "text": " \\xi " }, { "math_id": 8, "text": "\\xi(B_1), \\xi(B_2), \\dots, \\xi(B_m) " }, { "math_id": 9, "text": " B_1, B_2, \\dots, B_m " }, { "math_id": 10, "text": " S \\times T " }, { "math_id": 11, "text": " B " }, { "math_id": 12, "text": " \\xi_B " }, { "math_id": 13, "text": " T " }, { "math_id": 14, "text": " \\xi_B(\\cdot):= \\xi(B \\times \\cdot ) " }, { "math_id": 15, "text": " B_1, B_2, \\dots, B_n " }, { "math_id": 16, "text": " n \\in \\N " }, { "math_id": 17, "text": " \\xi_{B_1},\\xi_{B_2}, \\dots, \\xi_{B_n}" } ]
https://en.wikipedia.org/wiki?curid=877315
877320
Texel (graphics)
Fundamental unit of a texture map In computer graphics, a texel, texture element, or texture pixel is the fundamental unit of a texture map. Textures are represented by arrays of texels representing the texture space, just as other images are represented by arrays of pixels. Texels can also be described by image regions that are obtained through simple procedures such as thresholding. Voronoi tessellation can be used to define their spatial relationships: divisions are made at the midpoints between the centroids of each texel and the centroids of every surrounding texel for the entire texture. This results in each texel centroid having a Voronoi polygon surrounding it, which consists of all points that are closer to its own texel centroid than to any other centroid. Rendering. When texturing a 3D surface or surfaces (a process known as texture mapping), the renderer maps texels to appropriate pixels in the geometric fragment (typically a triangle) in the output picture. On modern computers, this operation is accomplished on the graphics processing unit. The texturing process starts with a location in space. The location can be in world space, but typically it is local to a model space so that the texture moves with the model. A projector function is applied to the location to change it from a three-element vector (formula_0) to a two-element vector (formula_1) with values ranging from zero to one (uv). These values are multiplied by the resolution of the texture to obtain the location of the texel. When a texel is requested that is not on an integer position, texture filtering is applied. When a texel is requested that is outside of the texture, one of two techniques is used: clamping or wrapping. Clamping limits the texel to the texture size, moving it to the nearest edge if it falls outside the texture. Wrapping moves the texel in increments of the texture's size to bring it back into the texture. Wrapping causes a texture to be repeated; clamping causes it to appear in one spot only. References.
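The clamping and wrapping rules just described amount to two small transformations of the requested texel index. The R sketch below is only an illustration; size stands for the texture resolution along one axis, and indices are taken to be 0-based.

# Clamping: out-of-range indices are moved to the nearest edge texel
clamp_texel <- function(i, size) pmin(pmax(i, 0), size - 1)

# Wrapping: indices are shifted by multiples of the texture size back into range
wrap_texel <- function(i, size) i %% size   # R's %% already maps negatives into [0, size)

clamp_texel(c(-3, 5, 300), 256)   # returns 0, 5, 255
wrap_texel(c(-3, 5, 300), 256)    # returns 253, 5, 44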
[ { "math_id": 0, "text": "\\left (u,v,z \\right )" }, { "math_id": 1, "text": "\\left (x,y\\right )" } ]
https://en.wikipedia.org/wiki?curid=877320
8776627
The Twelve Days of Christmas (song)
English Christmas carol "The Twelve Days of Christmas" is an English Christmas carol. A classic example of a cumulative song, the lyrics detail a series of increasingly numerous gifts given to the speaker by their "true love" on each of the twelve days of Christmas (the twelve days that make up the Christmas season, starting with Christmas Day). The carol, whose words were first published in England in the late eighteenth century, has a Roud Folk Song Index number of 68. A large number of different melodies have been associated with the song, of which the best known is derived from a 1909 arrangement of a traditional folk melody by English composer Frederic Austin. Lyrics. "The Twelve Days of Christmas" is a cumulative song, meaning that each verse is built on top of the previous verses. There are twelve verses, each describing a gift given by "my true love" on one of the twelve days of Christmas. There are many variations in the lyrics. The lyrics given here are from Frederic Austin's 1909 publication that established the current form of the carol. The first three verses run, in full, as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; &lt;poem&gt; On the first day of Christmas my true love sent to me A partridge in a pear tree On the second day of Christmas my true love sent to me Two turtle doves, "And a partridge in a pear tree." On the third day of Christmas my true love sent to me Three French hens, "Two turtle doves," "And a partridge in a pear tree." &lt;/poem&gt; Subsequent verses follow the same pattern. Each verse deals with the next day of Christmastide, adding one new gift and then repeating all the earlier gifts, so that each verse is one line longer than its predecessor. Variations of the lyrics. The earliest known publications of the words to "The Twelve Days of Christmas" were an illustrated children's book, "Mirth Without Mischief", published in London in 1780, and a broadsheet by Angus, of Newcastle, dated to the late eighteenth or early nineteenth centuries. While the words as published in "Mirth Without Mischief" and the Angus broadsheet were almost identical, subsequent versions (beginning with James Orchard Halliwell's "Nursery Rhymes of England" of 1842) have displayed considerable variation: For ease of comparison with Austin's 1909 version given above: Scotland. A similar cumulative verse from Scotland, "The Yule Days", has been likened to "The Twelve Days of Christmas" in the scholarly literature. It has thirteen days rather than twelve, and the number of gifts does not increase in the manner of "The Twelve Days". Its final verse, as published in Chambers, "Popular Rhymes, Fireside Stories, and Amusements of Scotland" (1842), runs as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;&lt;poem&gt;The king sent his lady on the thirteenth Yule day, Three stalks o' merry corn, Three maids a-merry dancing, Three hinds a-merry hunting, An Arabian baboon, Three swans a-merry swimming, Three ducks a-merry laying, A bull that was brown, Three goldspinks, Three starlings, A goose that was grey, Three plovers, Three partridges, A pippin go aye; Wha learns my carol and carries it away? &lt;/poem&gt; "Pippin go aye" (also spelled "papingo-aye" in later editions) is a Scots word for peacock or parrot. Similarly, Iceland has a Christmas tradition where "Yule Lads" put gifts in the shoes of children for each of the 13 nights of Christmas. Faroe Islands. In the Faroe Islands, there is a comparable counting Christmas song. 
The gifts include: one feather, two geese, three sides of meat, four sheep, five cows, six oxen, seven dishes, eight ponies, nine banners, ten barrels, eleven goats, twelve men, thirteen hides, fourteen rounds of cheese and fifteen deer. These were illustrated in 1994 by local cartoonist Óli Petersen (born 1936) on a series of two stamps issued by the Faroese Philatelic Office. Sweden. In Blekinge and Småland, southern Sweden, a similar song was also sung. It featured one hen, two barley seeds, three grey geese, four pounds of pork, six flayed sheep, a sow with six pigs, seven åtting grain, eight grey foals with golden saddles, nine newly born cows, ten pairs of oxen, eleven clocks, and finally twelve churches, each with twelve altars, each with twelve priests, each with twelve capes, each with twelve coin-purses, each with twelve daler inside. France. "Les Douze Mois" ("The Twelve Months") (also known as "La Perdriole"—"The Partridge") is another similar cumulative verse from France that has been likened to "The Twelve Days of Christmas". Its final verse, as published in de Coussemaker, "Chants Populaires des Flamands de France" (1856), runs as follows: &lt;templatestyles src="Verse translation/styles.css" /&gt; According to de Coussemaker, the song was recorded "in the part of [French] Flanders that borders on the Pas de Calais". Another similar folksong, "Les Dons de l'An", was recorded in the Cambresis region of France. Its final verse, as published in 1864, runs: &lt;templatestyles src="Verse translation/styles.css" /&gt; History and meaning. Origins. The exact origins and the meaning of the song are unknown, but it is highly probable that it originated from a children's memory and forfeit game. The twelve days in the song are the twelve days starting with Christmas Day to the day before Epiphany (6 January). Twelfth Night is defined by the "Oxford English Dictionary" as "the evening of January 5th, the day before Epiphany, which traditionally marks the end of Christmas celebrations". The best known English version was first printed in "Mirth without Mischief", a children's book published in London around 1780. The work was heavily illustrated with woodcuts, attributed in one source to Thomas Bewick. In the northern counties of England, the song was often called the "Ten Days of Christmas", as there were only ten gifts. It was also known in Somerset, Dorset, and elsewhere in England. The kinds of gifts vary in a number of the versions, some of them becoming alliterative tongue-twisters. "The Twelve Days of Christmas" was also widely popular in the United States and Canada. It is mentioned in the section on "Chain Songs" in Stith Thompson's "Motif-Index of Folk-Literature" (Indiana University Studies, Vol. 5, 1935), p. 416. There is evidence pointing to the North of England, specifically the area around Newcastle upon Tyne, as the origin of the carol. Husk, in the 1864 excerpt quoted below, stated that the carol was "found on broadsides printed at Newcastle at various periods during the last hundred and fifty years", i.e. from approximately 1714. In addition, many of the nineteenth century citations come from the Newcastle area. Peter and Iona Opie suggest that "if '[t]he partridge in the peartree' is to be taken literally it looks as if the chant comes from France, since the Red Leg partridge, which perches in trees more frequently than the common partridge, was not successfully introduced into England until about 1770". 
Some authors suggest a connection to a religious verse entitled "Twelfth Day", found in a thirteenth century manuscript at Trinity College, Cambridge; this theory is criticised as "erroneous" by Yoffie. It has also been suggested that this carol is connected to the "old ballad" which Sir Toby Belch begins to sing in Shakespeare's "Twelfth Night". Manner of performance. Many early sources suggest that "The Twelve Days of Christmas" was a "memory-and-forfeits" game, in which participants were required to repeat a verse of poetry recited by the leader. Players who made an error were required to pay a penalty, in the form of offering a kiss or confection. Halliwell, writing in 1842, stated that "[e]ach child in succession repeats the gifts of the day, and forfeits for each mistake." Salmon, writing from Newcastle, claimed in 1855 that the song "[had] been, up to within twenty years, extremely popular as a schoolboy's Christmas chant". Husk, writing in 1864, stated: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;This piece is found on broadsides printed at Newcastle at various periods during the last hundred and fifty years. On one of these sheets, nearly a century old, it is entitled "An Old English Carol," but it can scarcely be said to fall within that description of composition, being rather fitted for use in playing the game of "Forfeits," to which purpose it was commonly applied in the metropolis upwards of forty years since. The practice was for one person in the company to recite the first three lines; a second, the four following; and so on; the person who failed in repeating her portion correctly being subjected to some trifling forfeit. Thomas Hughes, in a short story published in 1864, described a fictional game of Forfeits involving the song:&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[A] cry for forfeits arose. So the party sat down round Mabel on benches brought out from under the table, and Mabel began, -- The first day of Christmas my true love sent to me a partridge and a pear-tree; The second day of Christmas my true love sent to me two turtle-doves, a partridge, and a pear-tree; The third day of Christmas my true love sent to me three fat hens, two turtle-doves, a partridge, and a pear-tree; The fourth day of Christmas my true love sent to me four ducks quacking, three fat hens, two turtle-doves, a partridge, and a pear-tree; The fifth day of Christmas my true love sent to me five hares running, four ducks quacking, three fat hens, two turtle-doves, a partridge, and a pear-tree; And so on. Each day was taken up and repeated all round; and for every breakdown (except by little Maggie, who struggled with desperately earnest round eyes to follow the rest correctly, but with very comical results), the player who made the slip was duly noted down by Mabel for a forfeit. Barnes (1882), stated that the last verse "is to be said in one breath". Scott (1892), reminiscing about Christmas and New Year's celebrations in Newcastle around the year 1844, described a performance thus:&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A lady begins it, generally an elderly lady, singing the first line in a high clear voice, the person sitting next takes up the second, the third follows, at first gently, but before "twelfth day" is reached the whole circle were joining in with stentorian noise and wonderful enjoyment. Lady Gomme wrote in 1898: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"The Twelve Days" was a Christmas game. 
It was a customary thing in a friend's house to play "The Twelve Days," or "My Lady's Lap Dog," every Twelfth Day night. The party was usually a mixed gathering of juveniles and adults, mostly relatives, and before supper—that is, before eating mince pies and twelfth cake—this game and the cushion dance were played, and the forfeits consequent upon them always cried. The company were all seated round the room. The leader of the game commenced by saying the first line. [...] The lines for the "first day" of Christmas was said by each of the company in turn; then the first "day" was repeated, with the addition of the "second" by the leader, and then this was said all round the circle in turn. This was continued until the lines for the "twelve days" were said by every player. For every mistake a forfeit—a small article belonging to the person—had to be given up. These forfeits were afterwards "cried" in the usual way, and were not returned to the owner until they had been redeemed by the penalty inflicted being performed. Meanings of the gifts. Partridge in a pear tree. An anonymous "antiquarian", writing in 1867, speculated that "pear-tree" is a corruption of French "perdrix" (, "partridge"). This was also suggested by Anne Gilchrist, who observed in 1916 that "from the constancy in English, French, and Languedoc versions of the 'merry little partridge,' I suspect that 'pear-tree' is really "perdrix" (Old French ) carried into England". The variant text "part of a juniper tree", found as early as c. 1840, is likely not original, since "partridge" is found in the French versions. It is probably a corruption of "partridge in a pear tree", though Gilchrist suggests "juniper tree" could have been "joli perdrix", [pretty partridge]. Another suggestion is that an old English drinking song may have furnished the idea for the first gift. William B. Sandys refers to it as a "convivial glee introduced a few years since, 'A Pie [i.e., a magpie] sat on a Pear Tree,' where one drinks while the others sing." The image of the bird in the pear tree also appears in lines from a children's counting rhyme an old Mother Goose. A pye sate on a pear tree, Heigh O Once so merrily hopp'd she; Heigh O Twice so merrily, etc. Thrice so, etc. French hens. Gilchrist suggests that the adjective "French" may mean "foreign". Sharp reports that one singer sings "Britten chains", which he interprets as a corruption of "Breton hens". William and Ceil Baring-Gould also suggest that the birds are Breton hens, which they see as another indication that the carol is of French origin. Colly birds. The word "colly", found in the earliest publications, was the source of considerable confusion. Multiple sources confirm that it is a dialectal word, found in Somerset and elsewhere, meaning "black", so "colly birds" are blackbirds. Despite this, other theories about the word's origin are also found in the literature, such as that the word is a corruption of French ("ruff"), or of "coloured". Gold rings. Shahn suggests that "the five golden rings refer to the ringed pheasant". William and Ceil Baring-Gould reiterate this idea, which implies that the gifts for first seven days are all birds. Others suggest the gold rings refer to "five goldspinks"—a goldspink being an old name for a goldfinch; or even canaries. However, the 1780 publication includes an illustration that clearly depicts the "five gold rings" as being jewellery. General. 
According to "The Oxford Dictionary of Nursery Rhymes", "Suggestions have been made that the gifts have significance, as representing the food or sport for each month of the year. Importance [certainly has] long been attached to the Twelve Days, when, for instance, the weather on each day was carefully observed to see what it would be in the corresponding month of the coming year. Nevertheless, whatever the ultimate origin of the chant, it seems probable [that] the lines that survive today both in England and France are merely an irreligious travesty." In 1979, a Canadian hymnologist, Hugh D. McKellar, published an article, "How to Decode the Twelve Days of Christmas", in which he suggested that "The Twelve Days of Christmas" lyrics were intended as a catechism song to help young English Catholics learn their faith, at a time when practising Catholicism was against the law (from 1558 until 1829). McKellar offered no evidence for his claim. Three years later, in 1982, Fr. Hal Stockert wrote an article (subsequently posted online in 1995) in which he suggested a similar possible use of the twelve gifts as part of a catechism. The possibility that the twelve gifts were used as a catechism during the period of Catholic repression was also hypothesised in this same time period (1987 and 1992) by Fr. James Gilhooley, chaplain of Mount Saint Mary College of Newburgh, New York. Snopes.com, a website reviewing urban legends, Internet rumours, e-mail forwards, and other stories of unknown or questionable origin, concludes that the hypothesis of the twelve gifts of Christmas being a surreptitious Catholic catechism is incorrect. None of the enumerated items would distinguish Catholics from Protestants, and so would hardly need to be secretly encoded. Music. Standard melody. The now-standard melody for the carol was popularised by the English baritone and composer Frederic Austin. The singer, having arranged the music for solo voice with piano accompaniment, included it in his concert repertoire from 1905 onwards. A "Times" review from 1906 praised the "quaint folk-song", while noting that "the words ... are better known than the excellent if intricate tune". Austin's arrangement was published by Novello &amp; Co. in 1909. According to a footnote added to the posthumous 1955 reprint of his musical setting, Austin wrote: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;This song was, in my childhood, current in my family. I have not met with the tune of it elsewhere, nor with the particular version of the words, and have, in this setting, recorded both to the best of my recollection. "F. A." A number of later publications state that Austin's music for "five gold rings" is an original addition to an otherwise traditional melody. An early appearance of this claim is found in the 1961 "University Carol Book", which states:&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;This is a traditional English singing game but the melody of five gold rings was added by Richard ["sic"] Austin whose fine setting (Novello) should be consulted for a fuller accompaniment. Similar statements are found in John Rutter's 1967 arrangement, and in the 1992 "New Oxford Book of Carols". Many of the decisions Austin made with regard to the lyrics subsequently became widespread: The time signature of this song is not constant, unlike most popular music. This irregular meter perhaps reflects the song's folk origin. 
The introductory lines "On the ["n"th] day of Christmas, my true love gave to me", are made up of two bars, while most of the lines naming gifts receive one bar per gift with the exception of "Five gold rings", which receives two bars, "Two turtle doves" getting a bar with "And a" on its fourth beat and "partridge in a pear tree" getting two bars of music. In most versions, a bar of music immediately follows "partridge in a pear tree". "On the" is found in that bar on the fourth (pickup) beat for the next verse. The successive bars of three for the gifts surrounded by bars of four give the song its hallmark "hurried" quality. The second to fourth verses' melody is different from that of the fifth to twelfth verses. Before the fifth verse (when "Five gold rings" is first sung), the melody, using solfege, is "sol re mi fa re" for the fourth to second items, and this same melody is thereafter sung for the twelfth to sixth items. However, the melody for "four colly birds, three French hens, two turtle doves" changes from this point, differing from the way these lines were sung in the opening four verses. In the final verse, Austin inserted a flourish on the words "Five gold rings". This has not been copied by later versions, which simply repeat the melody from the earlier verses. Earlier melodies. The earliest known sources for the text, such as "Mirth Without Mischief", do not include music. A melody, possibly related to the "traditional" melody on which Austin based his arrangement, was recorded in Providence, Rhode Island in 1870 and published in 1905. Cecil Sharp's "Folk Songs from Somerset" (1905) contains two different melodies for the song, both distinct from the now-standard melody. Several folklorists have recorded the carol using traditional melodies. Peter Kennedy recorded the Copper family of Sussex, England singing a version in 1955 which differs slightly from the common version, whilst Helen Hartness Flanders recorded several different versions in the 1930s and 40s in New England, where the song seems to have been particularly popular. Edith Fowke recorded a single version sung by Woody Lambe of Toronto, Canada in 1963, whilst Herbert Halpert recorded one version sung by Oscar Hampton and Sabra Bare in Morgantown, North Carolina One interesting version was also recorded in 1962 in Deer, Arkansas, performed by Sara Stone; the recording is available online courtesy of the University of Arkansas. Christmas Price Index. Since 1984, the cumulative costs of the items mentioned in the Frederic Austin version have been used as a tongue-in-cheek economic indicator. Assuming the gifts are repeated in full in each round of the song, then a total of 364 items are delivered by the twelfth day. This custom began with and is maintained by PNC Bank. Two pricing charts are created, referred to as the Christmas Price Index and The True Cost of Christmas. The former is an index of the current costs of one set of each of the gifts given by the True Love to the singer of the song "The Twelve Days of Christmas". The latter is the cumulative cost of all the gifts with the repetitions listed in the song. The people mentioned in the song are hired, not purchased. The total costs of all goods and services for the 2023 Christmas Price Index is US$46,729.86, or US$201,972.18 for all 364 items. The original 1984 cost was $12,623.10. The index has been humorously criticised for not accurately reflecting the true cost of the gifts featured in the Christmas carol. 
John Julius Norwich's 1998 book, "The Twelve Days of Christmas (Correspondence)", uses the motif of repeating the previous gifts on each subsequent day, to humorous effect. Computational complexity. In the famous article "The Complexity of Songs", Donald Knuth computes the space complexity of the song as function of the number of days, observing that a hypothetical "The formula_0 Days of Christmas" requires a memory space of formula_1 as formula_2 where formula_3 is the length of the song, showing that songs with complexity lower than formula_4 indeed exist. Incidentally, it is also observed that the total number of gifts after formula_0 days equals formula_5. In 1988, a C program authored by Ian Philipps won the International Obfuscated C Code Contest. The code, which according to the jury of the contest "looked like what you would get by pounding on the keys of an old typewriter at random", takes advantage of the recursive structure of the song to print its lyrics with code that is shorter than the lyrics themselves. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
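As a quick arithmetic check of the cumulative gift count discussed in the computational complexity section, a two-line R sketch (the closed form is the cubic polynomial given above):

# Gifts received on day d total 1 + 2 + ... + d; summing over m days gives the overall count
total_gifts <- function(m) sum(cumsum(seq_len(m)))
total_gifts(12)                             # 364
total_gifts(12) == 12^3/6 + 12^2/2 + 12/3   # TRUE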
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "O\\left(\\sqrt{n/\\log n}\\right)" }, { "math_id": 2, "text": "m\\to\\infty" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "O(\\sqrt{n})" }, { "math_id": 5, "text": "m^3/6 + m^2/2 + m/3" } ]
https://en.wikipedia.org/wiki?curid=8776627
877698
General linear model
Statistical linear model The general linear model or general multivariate regression model is a compact way of simultaneously writing several multiple linear regression models. In that sense it is not a separate statistical linear model. The various multiple linear regression models may be compactly written as formula_0 where Y is a matrix with series of multivariate measurements (each column being a set of measurements on one of the dependent variables), X is a matrix of observations on independent variables that might be a design matrix (each column being a set of observations on one of the independent variables), B is a matrix containing parameters that are usually to be estimated, and U is a matrix containing errors (noise). The errors are usually assumed to be uncorrelated across measurements and to follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax assumptions about Y and U. The general linear model incorporates a number of different statistical models: ANOVA, ANCOVA, MANOVA, MANCOVA, ordinary linear regression, "t"-test and "F"-test. The general linear model is a generalization of multiple linear regression to the case of more than one dependent variable. If Y, B, and U were column vectors, the matrix equation above would represent multiple linear regression. Hypothesis tests with the general linear model can be made in two ways: multivariate or as several independent univariate tests. In multivariate tests the columns of Y are tested together, whereas in univariate tests the columns of Y are tested independently, i.e., as multiple univariate tests with the same design matrix. Comparison to multiple linear regression. Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is formula_1 or more compactly formula_2 for each observation "i" = 1, ... , "n". In the formula above we consider "n" observations of one dependent variable and "p" independent variables. Thus, "Y""i" is the "i"th observation of the dependent variable, "X""ik" is the "i"th observation of the "k"th independent variable, for "k" = 1, 2, ..., "p". The values "β""k" represent parameters to be estimated, and "ε""i" is the "i"th independent identically distributed normal error. In the more general multivariate linear regression, there is one equation of the above form for each of "m" > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other: formula_3 or more compactly formula_4 for all observations indexed as "i" = 1, ... , "n" and for all dependent variables indexed as "j" = 1, ... , "m". Note that, since each dependent variable has its own set of regression parameters to be fitted, from a computational point of view the general multivariate regression is simply a sequence of standard multiple linear regressions using the same explanatory variables. Comparison to generalized linear model. The general linear model and the generalized linear model (GLM) are two commonly used families of statistical methods to relate some number of continuous and/or categorical predictors to a single outcome variable.
The main difference between the two approaches is that the general linear model strictly assumes that the residuals will follow a conditionally normal distribution, while the GLM loosens this assumption and allows for a variety of other distributions from the exponential family for the residuals. Of note, the general linear model is a special case of the GLM in which the distribution of the residuals follows a conditionally normal distribution. The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLM family. Commonly used models in the GLM family include binary logistic regression for binary or dichotomous outcomes, Poisson regression for count outcomes, and linear regression for continuous, normally distributed outcomes. This means that GLM may be spoken of as a general family of statistical models or as specific models for specific outcome types. Applications. An application of the general linear model appears in the analysis of multiple brain scans in scientific experiments where Y contains data from brain scanners and X contains experimental design variables and confounds. It is usually tested in a univariate way (usually referred to as "mass-univariate" in this setting) and is often referred to as statistical parametric mapping. Notes.
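Returning to the matrix formulation Y = XB + U at the start of this article, the least-squares estimate of B for all dependent variables can be computed in one step; the simulated data and dimensions below are purely illustrative.

# Multivariate least squares: one solve of the normal equations gives every column of B
set.seed(123)
n <- 100
X <- cbind(1, rnorm(n), rnorm(n))              # design matrix with an intercept column
B <- matrix(c(1, 2, -1,
              0.5, 0, 3), nrow = 3)            # assumed "true" 3 x 2 coefficient matrix
Y <- X %*% B + matrix(rnorm(n * 2), n, 2)      # two dependent variables sharing the same X
B_hat <- solve(crossprod(X), crossprod(X, Y))  # (X'X)^{-1} X'Y; column j estimates B[, j]

Each column of B_hat coincides with the coefficients of the separate univariate regression lm(Y[, j] ~ X[, -1]), reflecting the remark above that the multivariate fit is computationally a sequence of standard multiple regressions; R's lm() also accepts a matrix response, so lm(Y ~ X[, -1]) returns the same coefficient matrix.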
[ { "math_id": 0, "text": "\\mathbf{Y} = \\mathbf{X}\\mathbf{B} + \\mathbf{U}," }, { "math_id": 1, "text": " Y_i = \\beta_0 + \\beta_1 X_{i1} + \\beta_2 X_{i2} + \\ldots + \\beta_p X_{ip} + \\epsilon_i" }, { "math_id": 2, "text": "Y_i = \\beta_0 + \\sum \\limits_{k=1}^{p} {\\beta_k X_{ik}} + \\epsilon_i" }, { "math_id": 3, "text": " Y_{ij} = \\beta_{0j} + \\beta_{1j} X_{i1} + \\beta_{2j}X_{i2} + \\ldots + \\beta_{pj} X_{ip} + \\epsilon_{ij}" }, { "math_id": 4, "text": "Y_{ij} = \\beta_{0j} + \\sum \\limits_{k=1}^{p} { \\beta_{kj} X_{ik}} + \\epsilon_{ij}" } ]
https://en.wikipedia.org/wiki?curid=877698
877761
Stiefel–Whitney class
Set of topological invariants In mathematics, in particular in algebraic topology and differential geometry, the Stiefel–Whitney classes are a set of topological invariants of a real vector bundle that describe the obstructions to constructing everywhere independent sets of sections of the vector bundle. Stiefel–Whitney classes are indexed from 0 to "n", where "n" is the rank of the vector bundle. If the Stiefel–Whitney class of index "i" is nonzero, then there cannot exist formula_0 everywhere linearly independent sections of the vector bundle. A nonzero "n"th Stiefel–Whitney class indicates that every section of the bundle must vanish at some point. A nonzero first Stiefel–Whitney class indicates that the vector bundle is not orientable. For example, the first Stiefel–Whitney class of the Möbius strip, as a line bundle over the circle, is not zero, whereas the first Stiefel–Whitney class of the trivial line bundle over the circle, formula_1, is zero. The Stiefel–Whitney class was named for Eduard Stiefel and Hassler Whitney and is an example of a formula_2-characteristic class associated to real vector bundles. In algebraic geometry one can also define analogous Stiefel–Whitney classes for vector bundles with a non-degenerate quadratic form, taking values in etale cohomology groups or in Milnor K-theory. As a special case one can define Stiefel–Whitney classes for quadratic forms over fields, the first two cases being the discriminant and the Hasse–Witt invariant . Introduction. General presentation. For a real vector bundle "E", the Stiefel–Whitney class of "E" is denoted by "w"("E"). It is an element of the cohomology ring formula_3 where "X" is the base space of the bundle "E", and formula_2 (often alternatively denoted by formula_4) is the commutative ring whose only elements are 0 and 1. The component of formula_5 in formula_6 is denoted by formula_7 and called the "i"-th Stiefel–Whitney class of "E". Thus, formula_8, where each formula_7 is an element of formula_6. The Stiefel–Whitney class formula_5 is an invariant of the real vector bundle "E"; i.e., when "F" is another real vector bundle which has the same base space "X" as "E", and if "F" is isomorphic to "E", then the Stiefel–Whitney classes formula_5 and formula_9 are equal. (Here "isomorphic" means that there exists a vector bundle isomorphism formula_10 which covers the identity formula_11.) While it is in general difficult to decide whether two real vector bundles "E" and "F" are isomorphic, the Stiefel–Whitney classes formula_5 and formula_9 can often be computed easily. If they are different, one knows that "E" and "F" are not isomorphic. As an example, over the circle formula_12, there is a line bundle (i.e., a real vector bundle of rank 1) that is not isomorphic to a trivial bundle. This line bundle "L" is the Möbius strip (which is a fiber bundle whose fibers can be equipped with vector space structures in such a way that it becomes a vector bundle). The cohomology group formula_13 has just one element other than 0. This element is the first Stiefel–Whitney class formula_14 of "L". Since the trivial line bundle over formula_12 has first Stiefel–Whitney class 0, it is not isomorphic to "L". Two real vector bundles "E" and "F" which have the same Stiefel–Whitney class are not necessarily isomorphic. This happens for instance when "E" and "F" are trivial real vector bundles of different ranks over the same base space "X". 
It can also happen when "E" and "F" have the same rank: the tangent bundle of the 2-sphere formula_15 and the trivial real vector bundle of rank 2 over formula_15 have the same Stiefel–Whitney class, but they are not isomorphic. But if two real "line" bundles over "X" have the same Stiefel–Whitney class, then they are isomorphic. Origins. The Stiefel–Whitney classes formula_7 get their name because Eduard Stiefel and Hassler Whitney discovered them as mod-2 reductions of the obstruction classes to constructing formula_16 everywhere linearly independent sections of the vector bundle "E" restricted to the "i"-skeleton of "X". Here "n" denotes the dimension of the fibre of the vector bundle formula_17. To be precise, provided "X" is a CW-complex, Whitney defined classes formula_18 in the "i"-th cellular cohomology group of "X" with twisted coefficients. The coefficient system being the formula_19-st homotopy group of the Stiefel manifold formula_20 of formula_16 linearly independent vectors in the fibres of "E". Whitney proved that formula_21 if and only if "E", when restricted to the "i"-skeleton of "X", has formula_16 linearly-independent sections. Since formula_22 is either infinite-cyclic or isomorphic to formula_2, there is a canonical reduction of the formula_18 classes to classes formula_23 which are the Stiefel–Whitney classes. Moreover, whenever formula_24, the two classes are identical. Thus, formula_25 if and only if the bundle formula_26 is orientable. The formula_27 class contains no information, because it is equal to 1 by definition. Its creation by Whitney was an act of creative notation, allowing the Whitney sum Formula formula_28 to be true. Definitions. Throughout, formula_29 denotes singular cohomology of a space "X" with coefficients in the group "G". The word "map" means always a continuous function between topological spaces. Axiomatic definition. The Stiefel-Whitney characteristic class formula_30 of a finite rank real vector bundle "E" on a paracompact base space "X" is defined as the unique class such that the following axioms are fulfilled: The uniqueness of these classes is proved for example, in section 17.2 – 17.6 in Husemoller or section 8 in Milnor and Stasheff. There are several proofs of the existence, coming from various constructions, with several different flavours, their coherence is ensured by the unicity statement. Definition "via" infinite Grassmannians. The infinite Grassmannians and vector bundles. This section describes a construction using the notion of classifying space. For any vector space "V", let formula_41 denote the Grassmannian, the space of "n"-dimensional linear subspaces of "V", and denote the infinite Grassmannian formula_42. Recall that it is equipped with the tautological bundle formula_43 a rank "n" vector bundle that can be defined as the subbundle of the trivial bundle of fiber "V" whose fiber at a point formula_44 is the subspace represented by "W". Let formula_45, be a continuous map to the infinite Grassmannian. Then, up to isomorphism, the bundle induced by the map "f" on "X" formula_46 depends only on the homotopy class of the map ["f"]. The pullback operation thus gives a morphism from the set formula_47 of maps formula_48 "modulo" homotopy equivalence, to the set formula_49 of isomorphism classes of vector bundles of rank "n" over "X". Now, by the naturality axiom (4) above, formula_50. So it suffices in principle to know the values of formula_51 for all "j". 
However, the cohomology ring formula_52 is free on specific generators formula_53 arising from a standard cell decomposition, and it then turns out that these generators are in fact just given by formula_54. Thus, for any rank-n bundle, formula_55, where "f" is the appropriate classifying map. This in particular provides one proof of the existence of the Stiefel–Whitney classes. The case of line bundles. We now restrict the above construction to line bundles, "ie" we consider the space, formula_56 of line bundles over "X". The Grassmannian of lines formula_57 is just the infinite projective space formula_58 which is doubly covered by the infinite sphere formula_59 with antipodal points as fibres. This sphere formula_59 is contractible, so we have formula_60 Hence P∞(R) is the Eilenberg-Maclane space formula_61. It is a property of Eilenberg-Maclane spaces, that formula_62 for any "X", with the isomorphism given by "f" → "f*"η, where η is the generator formula_63. Applying the former remark that α : ["X", "Gr"1] → Vect1("X") is also a bijection, we obtain a bijection formula_64 this defines the Stiefel–Whitney class "w"1 for line bundles. The group of line bundles. If Vect1("X") is considered as a group under the operation of tensor product, then the Stiefel–Whitney class, "w"1 : Vect1("X") → "H"1("X"; Z/2Z), is an isomorphism. That is, "w"1(λ ⊗ μ) = "w"1(λ) + "w"1(μ) for all line bundles λ, μ → "X". For example, since "H"1("S"1; Z/2Z) = Z/2Z, there are only two line bundles over the circle up to bundle isomorphism: the trivial one, and the open Möbius strip (i.e., the Möbius strip with its boundary deleted). The same construction for complex vector bundles shows that the Chern class defines a bijection between complex line bundles over "X" and "H"2("X"; Z), because the corresponding classifying space is P∞(C), a K(Z, 2). This isomorphism is true for topological line bundles, the obstruction to injectivity of the Chern class for algebraic vector bundles is the Jacobian variety. Properties. Uniqueness of the Stiefel–Whitney classes. The bijection above for line bundles implies that any functor θ satisfying the four axioms above is equal to "w", by the following argument. The second axiom yields θ(γ1) = 1 + θ1(γ1). For the inclusion map "i" : P1(R) → P∞(R), the pullback bundle formula_68 is equal to formula_69. Thus the first and third axiom imply formula_70 Since the map formula_71 is an isomorphism, formula_72 and θ(γ1) = "w"(γ1) follow. Let "E" be a real vector bundle of rank "n" over a space "X". Then "E" admits a splitting map, i.e. a map "f" : "X′" → "X" for some space "X′" such that formula_73 is injective and formula_74 for some line bundles formula_75. Any line bundle over "X" is of the form formula_76 for some map "g", and formula_77 by naturality. Thus θ = "w" on formula_78. It follows from the fourth axiom above that formula_79 Since formula_80 is injective, θ = "w". Thus the Stiefel–Whitney class is the unique functor satisfying the four axioms above. Non-isomorphic bundles with the same Stiefel–Whitney classes. Although the map formula_81 is a bijection, the corresponding map is not necessarily injective in higher dimensions. For example, consider the tangent bundle formula_82 for "n" even. With the canonical embedding of formula_83 in formula_84, the normal bundle formula_85 to formula_83 is a line bundle. Since formula_83 is orientable, formula_85 is trivial. 
The sum formula_86 is just the restriction of formula_87 to formula_83, which is trivial since formula_84 is contractible. Hence "w"("TSn") = "w"("TSn")"w"(ν) = w("TSn" ⊕ ν) = 1. But, provided n is even, "TSn" → "Sn" is not trivial; its Euler class formula_88, where ["Sn"] denotes a fundamental class of "Sn" and χ the Euler characteristic. Related invariants. Stiefel–Whitney numbers. If we work on a manifold of dimension "n", then any product of Stiefel–Whitney classes of total degree "n" can be paired with the Z/2Z-fundamental class of the manifold to give an element of Z/2Z, a Stiefel–Whitney number of the vector bundle. For example, if the manifold has dimension 3, there are three linearly independent Stiefel–Whitney numbers, given by formula_89. In general, if the manifold has dimension "n", the number of possible independent Stiefel–Whitney numbers is the number of partitions of "n". The Stiefel–Whitney numbers of the tangent bundle of a smooth manifold are called the Stiefel–Whitney numbers of the manifold. They are known to be cobordism invariants. It was proven by Lev Pontryagin that if "B" is a smooth compact ("n"+1)–dimensional manifold with boundary equal to "M", then the Stiefel-Whitney numbers of "M" are all zero. Moreover, it was proved by René Thom that if all the Stiefel-Whitney numbers of "M" are zero then "M" can be realised as the boundary of some smooth compact manifold. One Stiefel–Whitney number of importance in surgery theory is the "de Rham invariant" of a (4"k"+1)-dimensional manifold, formula_90 Wu classes. The Stiefel–Whitney classes formula_91 are the Steenrod squares of the Wu classes formula_92, defined by Wu Wenjun in 1947. Most simply, the total Stiefel–Whitney class is the total Steenrod square of the total Wu class: formula_93. Wu classes are most often defined implicitly in terms of Steenrod squares, as the cohomology class representing the Steenrod squares. Let the manifold "X" be "n" dimensional. Then, for any cohomology class "x" of degree formula_94, formula_95. Or more narrowly, we can demand formula_96, again for cohomology classes "x" of degree formula_94. Integral Stiefel–Whitney classes. The element formula_97 is called the "i" + 1 "integral" Stiefel–Whitney class, where β is the Bockstein homomorphism, corresponding to reduction modulo 2, Z → Z/2Z: formula_98 For instance, the third integral Stiefel–Whitney class is the obstruction to a Spinc structure. Relations over the Steenrod algebra. Over the Steenrod algebra, the Stiefel–Whitney classes of a smooth manifold (defined as the Stiefel–Whitney classes of the tangent bundle) are generated by those of the form formula_99. In particular, the Stiefel–Whitney classes satisfy the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Wu formula, named for Wu Wenjun: formula_100 References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
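A worked example may make the Whitney sum formula and the Stiefel–Whitney numbers above more concrete. The following sketch computes the classes of real projective space; it assumes the classical splitting TP^n(R) ⊕ ε^1 ≅ (n+1)γ^1, where γ^1 here denotes the tautological line bundle over P^n(R) and "a" the generator of H^1(P^n(R); Z/2Z). It is an illustrative addition, not part of the exposition above.

```latex
% Stiefel–Whitney classes of real projective space P^n(R), assuming the
% splitting  T P^n(R) (+) eps^1  ~=  (n+1) gamma^1  over P^n(R).
% The trivial summand eps^1 does not change w, so the Whitney formula gives
\begin{align*}
  w\bigl(T\mathbf{P}^n(\mathbf{R})\bigr)
      &= w\bigl(T\mathbf{P}^n(\mathbf{R}) \oplus \varepsilon^1\bigr)
       = w\bigl(\gamma^1\bigr)^{n+1}
       = (1+a)^{n+1}
       \quad\text{in } (\mathbf{Z}/2\mathbf{Z})[a]/(a^{n+1}),\\
  w\bigl(T\mathbf{P}^2(\mathbf{R})\bigr)
      &= (1+a)^3 = 1 + 3a + 3a^2 + a^3 = 1 + a + a^2
       \quad\text{(coefficients mod 2, } a^3 = 0\text{)}.
\end{align*}
% Hence w_1 = a and w_2 = a^2 are both nonzero: P^2(R) is not orientable,
% and its two Stiefel–Whitney numbers <w_1^2,[P^2(R)]> and <w_2,[P^2(R)]>
% both equal 1.
```

In particular, by the result of Pontryagin quoted above, P^2(R) cannot be realised as the boundary of any smooth compact 3-manifold.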
[ { "math_id": 0, "text": "(n-i+1)" }, { "math_id": 1, "text": "S^1 \\times\\R" }, { "math_id": 2, "text": "\\Z/2\\Z" }, { "math_id": 3, "text": "H^\\ast(X; \\Z/2\\Z) = \\bigoplus_{i\\geq0} H^i(X; \\Z/2\\Z)" }, { "math_id": 4, "text": "\\Z_2" }, { "math_id": 5, "text": "w(E)" }, { "math_id": 6, "text": "H^i(X; \\Z/2\\Z)" }, { "math_id": 7, "text": "w_i(E)" }, { "math_id": 8, "text": "w(E) = w_0(E) + w_1(E) + w_2(E) + \\cdots" }, { "math_id": 9, "text": "w(F)" }, { "math_id": 10, "text": "E \\to F" }, { "math_id": 11, "text": "\\mathrm{id}_X\\colon X\\to X" }, { "math_id": 12, "text": "S^1" }, { "math_id": 13, "text": "H^1(S^1; \\Z/2\\Z)" }, { "math_id": 14, "text": "w_1(L)" }, { "math_id": 15, "text": "S^2" }, { "math_id": 16, "text": "n-i+1" }, { "math_id": 17, "text": "F\\to E\\to X" }, { "math_id": 18, "text": "W_i(E)" }, { "math_id": 19, "text": "(i-1)" }, { "math_id": 20, "text": "V_{n-i+1}(F)" }, { "math_id": 21, "text": "W_i(E)=0" }, { "math_id": 22, "text": "\\pi_{i-1}V_{n-i+1}(F)" }, { "math_id": 23, "text": "w_i(E) \\in H^i(X; \\Z/2\\Z)" }, { "math_id": 24, "text": "\\pi_{i-1}V_{n-i+1}(F) = \\Z/2\\Z" }, { "math_id": 25, "text": "w_1(E) = 0" }, { "math_id": 26, "text": "E\\to X" }, { "math_id": 27, "text": "w_0(E)" }, { "math_id": 28, "text": "w(E_1 \\oplus E_2) = w(E_1)w(E_2)" }, { "math_id": 29, "text": "H^i(X; G)" }, { "math_id": 30, "text": "w(E)\\in H^*(X; \\Z/2\\Z)" }, { "math_id": 31, "text": "\\mathbf{P}^1(\\R)" }, { "math_id": 32, "text": "w(\\gamma^1_1)= 1 + a \\in H^*(\\mathbf{P}^1(\\R); \\Z/2\\Z)= (\\Z/2\\Z)[a]/(a^2)" }, { "math_id": 33, "text": "w_0(E) = 1 \\in H^0(X)," }, { "math_id": 34, "text": "w_i = 0 \\in H^i(X)" }, { "math_id": 35, "text": "w(E) \\in H^{\\leqslant \\mathrm{rank} (E)}(X)." }, { "math_id": 36, "text": "w(E\\oplus F)= w(E) \\smile w(F)" }, { "math_id": 37, "text": "w(f^*E) = f^*w(E)" }, { "math_id": 38, "text": "E \\to X" }, { "math_id": 39, "text": "f\\colon X' \\to X" }, { "math_id": 40, "text": "f^*E" }, { "math_id": 41, "text": "Gr_n(V)" }, { "math_id": 42, "text": "Gr_n = Gr_n(\\R^\\infty)" }, { "math_id": 43, "text": "\\gamma^n \\to Gr_n," }, { "math_id": 44, "text": "W \\in Gr_n (V)" }, { "math_id": 45, "text": "f\\colon X \\to Gr_n" }, { "math_id": 46, "text": "f^*\\gamma^n \\in \\mathrm{Vect}_n(X)" }, { "math_id": 47, "text": " [X; Gr_n]" }, { "math_id": 48, "text": "X \\to Gr_n" }, { "math_id": 49, "text": "\\mathrm{Vect}_n(X)" }, { "math_id": 50, "text": "w_j (f^*\\gamma^n)= f^* w_j (\\gamma^n)" }, { "math_id": 51, "text": "w_j (\\gamma^n)" }, { "math_id": 52, "text": "H^*(Gr_n, \\Z_2)" }, { "math_id": 53, "text": "x_j\\in H^j(Gr_n, \\Z_2)" }, { "math_id": 54, "text": "x_j=w_j (\\gamma^n)" }, { "math_id": 55, "text": "w_j= f^*x_j" }, { "math_id": 56, "text": "\\mathrm{Vect}_1(X)" }, { "math_id": 57, "text": "Gr_1" }, { "math_id": 58, "text": "\\mathbf{P}^\\infty(\\mathbf{R}) = \\mathbf{R}^\\infty/\\mathbf{R}^*," }, { "math_id": 59, "text": "S^{\\infty}" }, { "math_id": 60, "text": "\\begin{align}\n\\pi_1(\\mathbf{P}^\\infty(\\mathbf{R})) &= \\mathbf{Z}/2\\mathbf{Z} \\\\\n\\pi_i(\\mathbf{P}^\\infty(\\mathbf{R})) &= \\pi_i(S^\\infty) = 0 && i > 1\n\\end{align}" }, { "math_id": 61, "text": "K(\\Z/2\\Z, 1)" }, { "math_id": 62, "text": " \\left [X; \\mathbf{P}^\\infty(\\mathbf{R}) \\right ] = H^1(X; \\Z/2\\Z)" }, { "math_id": 63, "text": "H^1(\\mathbf{P}^\\infty(\\mathbf{R}); \\mathbf{Z}/2\\mathbf{Z}) = \\Z/2\\Z" }, { "math_id": 64, "text": "w_1\\colon \\text{Vect}_1(X) \\to H^1(X; \\mathbf{Z}/2\\mathbf{Z})" }, { "math_id": 65, "text": 
"s_1,\\ldots,s_{\\ell}" }, { "math_id": 66, "text": "\\ell" }, { "math_id": 67, "text": "w_{k-\\ell+1}=\\cdots=w_k=0" }, { "math_id": 68, "text": "i^*\\gamma^1" }, { "math_id": 69, "text": "\\gamma_1^1" }, { "math_id": 70, "text": "i^* \\theta_1 \\left (\\gamma^1 \\right ) = \\theta_1 \\left (i^* \\gamma^1 \\right ) = \\theta_1 \\left (\\gamma_1^1 \\right ) = w_1 \\left (\\gamma_1^1 \\right ) = w_1 \\left (i^* \\gamma^1 \\right ) = i^* w_1 \\left (\\gamma^1 \\right )." }, { "math_id": 71, "text": "i^*: H^1 \\left (\\mathbf{P}^\\infty(\\mathbf{R} \\right ); \\mathbf{Z}/2\\mathbf{Z}) \\to H^1 \\left (\\mathbf{P}^1(\\mathbf{R}); \\mathbf{Z}/2\\mathbf{Z} \\right )" }, { "math_id": 72, "text": "\\theta_1(\\gamma^1) = w_1(\\gamma^1)" }, { "math_id": 73, "text": "f^*:H^*(X; \\mathbf{Z}/2\\mathbf{Z})) \\to H^*(X'; \\mathbf{Z}/2\\mathbf{Z})" }, { "math_id": 74, "text": "f^* E = \\lambda_1 \\oplus \\cdots \\oplus \\lambda_n" }, { "math_id": 75, "text": "\\lambda_i \\to X'" }, { "math_id": 76, "text": "g^*\\gamma^1" }, { "math_id": 77, "text": "\\theta \\left (g^*\\gamma^1 \\right ) = g^*\\theta \\left ( \\gamma^1 \\right ) = g^* w \\left ( \\gamma^1 \\right ) = w \\left ( g^*\\gamma^1 \\right )," }, { "math_id": 78, "text": "\\text{Vect}_1(X)" }, { "math_id": 79, "text": "f^*\\theta(E) = \\theta(f^*E) = \\theta(\\lambda_1 \\oplus \\cdots \\oplus \\lambda_n) = \\theta(\\lambda_1) \\cdots \\theta(\\lambda_n) = w(\\lambda_1) \\cdots w(\\lambda_n) = w(f^*E) = f^* w(E)." }, { "math_id": 80, "text": "f^*" }, { "math_id": 81, "text": "w_1 \\colon \\mathrm{Vect}_1(X) \\to H^1(X; \\Z/2\\Z)" }, { "math_id": 82, "text": "TS^n" }, { "math_id": 83, "text": "S^n" }, { "math_id": 84, "text": "\\R^{n+1}" }, { "math_id": 85, "text": "\\nu" }, { "math_id": 86, "text": "TS^n \\oplus \\nu" }, { "math_id": 87, "text": "T\\R^{n+1}" }, { "math_id": 88, "text": "e(TS^n) = \\chi(TS^n)[S^n] = 2[S^n] \\not =0" }, { "math_id": 89, "text": "w_1^3, w_1 w_2, w_3" }, { "math_id": 90, "text": "w_2w_{4k-1}." }, { "math_id": 91, "text": "w_k" }, { "math_id": 92, "text": "v_k" }, { "math_id": 93, "text": "\\operatorname{Sq}(v) = w" }, { "math_id": 94, "text": "n-k" }, { "math_id": 95, "text": "v_k \\cup x = \\operatorname{Sq}^k(x)" }, { "math_id": 96, "text": "\\langle v_k \\cup x, \\mu\\rangle = \\langle \\operatorname{Sq}^k(x), \\mu \\rangle" }, { "math_id": 97, "text": "\\beta w_i \\in H^{i+1}(X;\\mathbf{Z})" }, { "math_id": 98, "text": "\\beta\\colon H^i(X;\\mathbf{Z}/2\\mathbf{Z}) \\to H^{i+1}(X;\\mathbf{Z})." }, { "math_id": 99, "text": "w_{2^i}" }, { "math_id": 100, "text": "Sq^i(w_j)=\\sum_{t=0}^i {j+t-i-1 \\choose t} w_{i-t}w_{j+t}." } ]
https://en.wikipedia.org/wiki?curid=877761
Joseph-Louis Lagrange
Italian-French mathematician and astronomer (1736–1813) Joseph-Louis Lagrange (born Giuseppe Luigi Lagrangia or Giuseppe Ludovico De la Grange Tournier; 25 January 1736 – 10 April 1813), also reported as Giuseppe Luigi Lagrange or Lagrangia, was an Italian mathematician, physicist and astronomer, later naturalized French. He made significant contributions to the fields of analysis, number theory, and both classical and celestial mechanics. In 1766, on the recommendation of Leonhard Euler and d'Alembert, Lagrange succeeded Euler as the director of mathematics at the Prussian Academy of Sciences in Berlin, Prussia, where he stayed for over twenty years, producing many volumes of work and winning several prizes of the French Academy of Sciences. Lagrange's treatise on analytical mechanics ("Mécanique analytique", 4. ed., 2 vols. Paris: Gauthier-Villars et fils, 1788–89), which was written in Berlin and first published in 1788, offered the most comprehensive treatment of classical mechanics since Newton and formed a basis for the development of mathematical physics in the nineteenth century. In 1787, at age 51, he moved from Berlin to Paris and became a member of the French Academy of Sciences. He remained in France until the end of his life. He was instrumental in the decimalisation process in Revolutionary France, became the first professor of analysis at the École Polytechnique upon its opening in 1794, was a founding member of the Bureau des Longitudes, and became Senator in 1799. Scientific contribution. Lagrange was one of the creators of the calculus of variations, deriving the Euler–Lagrange equations for extrema of functionals. He extended the method to include possible constraints, arriving at the method of Lagrange multipliers. Lagrange invented the method of solving differential equations known as variation of parameters, applied differential calculus to the theory of probabilities and worked on solutions for algebraic equations. He proved that every natural number is a sum of four squares. His treatise "Theorie des fonctions analytiques" laid some of the foundations of group theory, anticipating Galois. In calculus, Lagrange developed a novel approach to interpolation and Taylor's theorem. He studied the three-body problem for the Earth, Sun and Moon (1764) and the movement of Jupiter's satellites (1766), and in 1772 found the special-case solutions to this problem that yield what are now known as Lagrangian points. Lagrange is best known for transforming Newtonian mechanics into a branch of analysis, Lagrangian mechanics. He presented the mechanical "principles" as simple results of the variational calculus. Biography. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; In appearance he was of medium height, and slightly formed, with pale blue eyes and a colourless complexion. In character he was nervous and timid, he detested controversy, and to avoid it willingly allowed others to take the credit for what he had himself done. He always thought out the subject of his papers before he began to compose them, and usually wrote them straight off without a single erasure or correction. W.W. Rouse Ball Early years. Firstborn of eleven children as "Giuseppe Lodovico Lagrangia", Lagrange was of Italian and French descent. His paternal great-grandfather was a French captain of cavalry, whose family originated from the French region of Tours. 
After serving under Louis XIV, he had entered the service of Charles Emmanuel II, Duke of Savoy, and married a Conti from the noble Roman family. Lagrange's father, Giuseppe Francesco Lodovico, was a doctor in Law at the University of Torino, while his mother was the only child of a rich doctor of Cambiano, in the countryside of Turin. He was raised as a Roman Catholic (but later on became an agnostic). His father, who had charge of the king's military chest and was Treasurer of the Office of Public Works and Fortifications in Turin, should have maintained a good social position and wealth, but before his son grew up he had lost most of his property in speculations. A career as a lawyer was planned out for Lagrange by his father, and certainly Lagrange seems to have accepted this willingly. He studied at the University of Turin and his favourite subject was classical Latin. At first, he had no great enthusiasm for mathematics, finding Greek geometry rather dull. It was not until he was seventeen that he showed any taste for mathematics – his interest in the subject being first excited by a paper by Edmond Halley from 1693 which he came across by accident. Alone and unaided he threw himself into mathematical studies; at the end of a year's incessant toil he was already an accomplished mathematician. Charles Emmanuel III appointed Lagrange to serve as the "Sostituto del Maestro di Matematica" (mathematics assistant professor) at the Royal Military Academy of the Theory and Practice of Artillery in 1755, where he taught courses in calculus and mechanics to support the Piedmontese army's early adoption of the ballistics theories of Benjamin Robins and Leonhard Euler. In that capacity, Lagrange was the first to teach calculus in an engineering school. According to , the academy's military commander and famous artillery theorist, Lagrange unfortunately proved to be a problematic professor with his oblivious teaching style, abstract reasoning, and impatience with artillery and fortification-engineering applications. In this academy one of his students was François Daviet. Variational calculus. Lagrange is one of the founders of the calculus of variations. Starting in 1754, he worked on the problem of the tautochrone, discovering a method of maximizing and minimizing functionals in a way similar to finding extrema of functions. Lagrange wrote several letters to Leonhard Euler between 1754 and 1756 describing his results. He outlined his "δ-algorithm", leading to the Euler–Lagrange equations of variational calculus and considerably simplifying Euler's earlier analysis. Lagrange also applied his ideas to problems of classical mechanics, generalising the results of Euler and Maupertuis. Euler was very impressed with Lagrange's results. It has been stated that "with characteristic courtesy he withheld a paper he had previously written, which covered some of the same ground, in order that the young Italian might have time to complete his work, and claim the undisputed invention of the new calculus"; however, this chivalric view has been disputed. Lagrange published his method in two memoirs of the Turin Society in 1762 and 1773. "Miscellanea Taurinensia". In 1758, with the aid of his pupils (mainly with Daviet), Lagrange established a society, which was subsequently incorporated as the Turin Academy of Sciences, and most of his early writings are to be found in the five volumes of its transactions, usually known as the "Miscellanea Taurinensia". Many of these are elaborate papers. 
The first volume contains a paper on the theory of the propagation of sound; in this he indicates a mistake made by Newton, obtains the general differential equation for the motion, and integrates it for motion in a straight line. This volume also contains the complete solution of the problem of a string vibrating transversely; in this paper, he points out a lack of generality in the solutions previously given by Brook Taylor, D'Alembert, and Euler, and arrives at the conclusion that the form of the curve at any time "t" is given by the equation formula_0. The article concludes with a masterly discussion of echoes, beats, and compound sounds. Other articles in this volume are on recurring series, probabilities, and the calculus of variations. The second volume contains a long paper embodying the results of several papers in the first volume on the theory and notation of the calculus of variations, and he illustrates its use by deducing the principle of least action, and by solutions of various problems in dynamics. The third volume includes the solution of several dynamical problems by means of the calculus of variations; some papers on the integral calculus; a solution of a Fermat's problem: given an integer "n" which is not a perfect square, to find a number "x" such that "nx"2 + 1 is a perfect square; and the general differential equations of motion for three bodies moving under their mutual attractions. The next work he produced was in 1764 on the libration of the Moon, and an explanation as to why the same face was always turned to the earth, a problem which he treated by the aid of virtual work. His solution is especially interesting as containing the germ of the idea of generalised equations of motion, equations which he first formally proved in 1780. Berlin. Already by 1756, Euler and Maupertuis, seeing Lagrange's mathematical talent, tried to persuade Lagrange to come to Berlin, but he shyly refused the offer. In 1765, d'Alembert interceded on Lagrange's behalf with Frederick of Prussia and by letter, asked him to leave Turin for a considerably more prestigious position in Berlin. He again turned down the offer, responding that "It seems to me that Berlin would not be at all suitable for me while M.Euler is there". In 1766, after Euler left Berlin for Saint Petersburg, Frederick himself wrote to Lagrange expressing the wish of "the greatest king in Europe" to have "the greatest mathematician in Europe" resident at his court. Lagrange was finally persuaded. He spent the next twenty years in Prussia, where he produced a long series of papers published in the Berlin and Turin transactions, and composed his monumental work, the "Mécanique analytique". In 1767, he married his cousin Vittoria Conti. Lagrange was a favourite of the king, who frequently lectured him on the advantages of perfect regularity of life. The lesson was accepted, and Lagrange studied his mind and body as though they were machines, and experimented to find the exact amount of work which he could do before exhaustion. Every night he set himself a definite task for the next day, and on completing any branch of a subject he wrote a short analysis to see what points in the demonstrations or the subject-matter were capable of improvement. He carefully planned his papers before writing them, usually without a single erasure or correction. Nonetheless, during his years in Berlin, Lagrange's health was rather poor, and that of his wife Vittoria was even worse. 
She died in 1783 after years of illness and Lagrange was very depressed. In 1786, Frederick II died, and the climate of Berlin became difficult for Lagrange. Paris. In 1786, following Frederick's death, Lagrange received similar invitations from states including Spain and Naples, and he accepted the offer of Louis XVI to move to Paris. In France he was received with every mark of distinction and special apartments in the Louvre were prepared for his reception, and he became a member of the French Academy of Sciences, which later became part of the Institut de France (1795). At the beginning of his residence in Paris, he was seized with an attack of melancholy, and even the printed copy of his "Mécanique" on which he had worked for a quarter of a century lay for more than two years unopened on his desk. Curiosity as to the results of the French Revolution first stirred him out of his lethargy, a curiosity which soon turned to alarm as the revolution developed. It was about the same time, 1792, that the unaccountable sadness of his life and his timidity moved the compassion of 24-year-old Renée-Françoise-Adélaïde Le Monnier, daughter of his friend, the astronomer Pierre Charles Le Monnier. She insisted on marrying him and proved a devoted wife to whom he became warmly attached. In September 1793, the Reign of Terror began. Under the intervention of Antoine Lavoisier, who himself was by then already thrown out of the academy along with many other scholars, Lagrange was specifically exempted by name in the decree of October 1793 that ordered all foreigners to leave France. On 4 May 1794, Lavoisier and 27 other tax farmers were arrested and sentenced to death and guillotined on the afternoon after the trial. Lagrange said on the death of Lavoisier: "It took only a moment to cause this head to fall and a hundred years will not suffice to produce its like." Though Lagrange had been preparing to escape from France while there was yet time, he was never in any danger; different revolutionary governments (and at a later time, Napoleon) gave him honours and distinctions. This luckiness or safety may to some extent be due to his life attitude he expressed many years before: "I believe that, in general, one of the first principles of every wise man is to conform strictly to the laws of the country in which he is living, even when they are unreasonable". A striking testimony to the respect in which he was held was shown in 1796 when the French commissary in Italy was ordered to attend in the full state on Lagrange's father and tender the congratulations of the republic on the achievements of his son, who "had done honour to all mankind by his genius, and whom it was the special glory of Piedmont to have produced". It may be added that Napoleon, when he attained power, warmly encouraged scientific studies in France, and was a liberal benefactor of them. Appointed senator in 1799, he was the first signer of the Sénatus-consulte which in 1802 annexed his fatherland Piedmont to France. He acquired French citizenship in consequence. The French claimed he was a French mathematician, but the Italians continued to claim him as Italian." Units of measurement. Lagrange was involved in the development of the metric system of measurement in the 1790s. He was offered the presidency of the Commission for the reform of weights and measures ("la Commission des Poids et Mesures") when he was preparing to escape. 
After Lavoisier's death in 1794, it was largely Lagrange who influenced the choice of the metre and kilogram units with decimal subdivision, by the commission of 1799. Lagrange was also one of the founding members of the Bureau des Longitudes in 1795. École Normale. In 1795, Lagrange was appointed to a mathematical chair at the newly established École Normale, which enjoyed only a short existence of four months. His lectures there were elementary; they contain nothing of any mathematical importance, though they do provide a brief historical insight into his reason for proposing undecimal or Base 11 as the base number for the reformed system of weights and measures. The lectures were published because the professors had to "pledge themselves to the representatives of the people and to each other neither to read nor to repeat from memory" ["Les professeurs aux Écoles Normales ont pris, avec les Représentants du Peuple, et entr'eux l'engagement de ne point lire ou débiter de mémoire des discours écrits"]. The discourses were ordered and taken down in shorthand to enable the deputies to see how the professors acquitted themselves. It was also thought the published lectures would interest a significant portion of the citizenry ["Quoique des feuilles sténographiques soient essentiellement destinées aux élèves de l'École Normale, on doit prévoir quיelles seront lues par une grande partie de la Nation"]. École Polytechnique. In 1794, Lagrange was appointed professor of the École Polytechnique; and his lectures there, described by mathematicians who had the good fortune to be able to attend them, were almost perfect both in form and matter. Beginning with the merest elements, he led his hearers on until, almost unknown to themselves, they were themselves extending the bounds of the subject: above all he impressed on his pupils the advantage of always using general methods expressed in a symmetrical notation. However, Lagrange does not seem to have been a successful teacher. Fourier, who attended his lectures in 1795, wrote: his voice is very feeble, at least in that he does not become heated; he has a very marked Italian accent and pronounces the "s" like "z" [...] The students, of whom the majority are incapable of appreciating him, give him little welcome, but the "professeurs" make amends for it. Late years. In 1810, Lagrange started a thorough revision of the "Mécanique analytique", but he was able to complete only about two-thirds of it before his death in Paris in 1813, in 128 rue du Faubourg Saint-Honoré. Napoleon honoured him with the Grand Croix of the Ordre Impérial de la Réunion just two days before he died. He was buried that same year in the Panthéon in Paris. The inscription on his tomb reads in translation:JOSEPH LOUIS LAGRANGE. Senator. Count of the Empire. Grand Officer of the Legion of Honour. Grand Cross of the Imperial Order of the Reunion. Member of the Institute and the Bureau of Longitude. Born in Turin on 25 January 1736. Died in Paris on 10 April 1813. Work in Berlin. Lagrange was extremely active scientifically during the twenty years he spent in Berlin. Not only did he produce his "Mécanique analytique", but he contributed between one and two hundred papers to the Academy of Turin, the Berlin Academy, and the French Academy. Some of these are really treatises, and all without exception are of a high order of excellence. Except for a short time when he was ill he produced on average about one paper a month. Of these, note the following as amongst the most important. 
First, his contributions to the fourth and fifth volumes, 1766–1773, of the "Miscellanea Taurinensia"; of which the most important was the one in 1771, in which he discussed how numerous astronomical observations should be combined so as to give the most probable result. And later, his contributions to the first two volumes, 1784–1785, of the transactions of the Turin Academy; to the first of which he contributed a paper on the pressure exerted by fluids in motion, and to the second an article on integration by infinite series, and the kind of problems for which it is suitable. Most of the papers sent to Paris were on astronomical questions, and among these, including his paper on the Jovian system in 1766, his essay on the problem of three bodies in 1772, his work on the secular equation of the Moon in 1773, and his treatise on cometary perturbations in 1778. These were all written on subjects proposed by the Académie française, and in each case, the prize was awarded to him. Lagrangian mechanics. &lt;templatestyles src="Hlist/styles.css"/&gt; Between 1772 and 1788, Lagrange re-formulated Classical/Newtonian mechanics to simplify formulas and ease calculations. These mechanics are called Lagrangian mechanics. Algebra. The greater number of his papers during this time were, however, contributed to the Prussian Academy of Sciences. Several of them deal with questions in algebra. Number theory. Several of his early papers also deal with questions of number theory. 1 has a nontrivial solution in the integers for any non-square natural number n. Other mathematical work. There are also numerous articles on various points of analytical geometry. In two of them, written rather later, in 1792 and 1793, he reduced the equations of the quadrics (or conicoids) to their canonical forms. During the years from 1772 to 1785, he contributed a long series of papers which created the science of partial differential equations. A large part of these results was collected in the second edition of Euler's integral calculus which was published in 1794. Astronomy. Lastly, there are numerous papers on problems in astronomy. Of these the most important are the following: Fundamental treatise. Over and above these various papers he composed his fundamental treatise, the "Mécanique analytique". In this book, he lays down the law of virtual work, and from that one fundamental principle, by the aid of the calculus of variations, deduces the whole of mechanics, both of solids and fluids. The object of the book is to show that the subject is implicitly included in a single principle, and to give general formulae from which any particular result can be obtained. The method of generalised co-ordinates by which he obtained this result is perhaps the most brilliant result of his analysis. Instead of following the motion of each individual part of a material system, as D'Alembert and Euler had done, he showed that, if we determine its configuration by a sufficient number of variables "x", called generalized coordinates, whose number is the same as that of the degrees of freedom possessed by the system, then the kinetic and potential energies of the system can be expressed in terms of those variables, and the differential equations of motion thence deduced by simple differentiation. 
For example, in dynamics of a rigid system he replaces the consideration of the particular problem by the general equation, which is now usually written in the form formula_3 where "T" represents the kinetic energy and "V" represents the potential energy of the system. He then presented what we now know as the method of Lagrange multipliers—though this is not the first time that method was published—as a means to solve this equation. Amongst other minor theorems here given it may suffice to mention the proposition that the kinetic energy imparted by the given impulses to a material system under given constraints is a maximum, and the principle of least action. All the analysis is so elegant that Sir William Rowan Hamilton said the work could be described only as a scientific poem. Lagrange remarked that mechanics was really a branch of pure mathematics analogous to a geometry of four dimensions, namely, the time and the three coordinates of the point in space; and it is said that he prided himself that from the beginning to the end of the work there was not a single diagram. At first no printer could be found who would publish the book; but Legendre at last persuaded a Paris firm to undertake it, and it was issued under the supervision of Laplace, Cousin, Legendre (editor) and Condorcet in 1788. Work in France. Differential calculus and calculus of variations. Lagrange's lectures on the differential calculus at École Polytechnique form the basis of his treatise "Théorie des fonctions analytiques", which was published in 1797. This work is the extension of an idea contained in a paper he had sent to the Berlin papers in 1772, and its object is to substitute for the differential calculus a group of theorems based on the development of algebraic functions in series, relying in particular on the principle of the generality of algebra. A somewhat similar method had been previously used by John Landen in the "Residual Analysis", published in London in 1758. Lagrange believed that he could thus get rid of those difficulties, connected with the use of infinitely large and infinitely small quantities, to which philosophers objected in the usual treatment of the differential calculus. The book is divided into three parts: of these, the first treats of the general theory of functions, and gives an algebraic proof of Taylor's theorem, the validity of which is, however, open to question; the second deals with applications to geometry; and the third with applications to mechanics. Another treatise on the same lines was his "Leçons sur le calcul des fonctions", issued in 1804, with the second edition in 1806. It is in this book that Lagrange formulated his celebrated method of Lagrange multipliers, in the context of problems of variational calculus with integral constraints. These works devoted to differential calculus and calculus of variations may be considered as the starting point for the researches of Cauchy, Jacobi, and Weierstrass. Infinitesimals. 
At a later period Lagrange fully embraced the use of infinitesimals in preference to founding the differential calculus on the study of algebraic forms; and in the preface to the second edition of the "Mécanique Analytique", which was issued in 1811, he justifies the employment of infinitesimals, and concludes by saying that: "When we have grasped the spirit of the infinitesimal method, and have verified the exactness of its results either by the geometrical method of prime and ultimate ratios, or by the analytical method of derived functions, we may employ infinitely small quantities as a sure and valuable means of shortening and simplifying our proofs." Number theory. His "Résolution des équations numériques", published in 1798, was also the fruit of his lectures at École Polytechnique. There he gives the method of approximating the real roots of an equation by means of continued fractions, and enunciates several other theorems. In a note at the end, he shows how Fermat's little theorem, that is formula_4 where "p" is a prime and "a" is prime to "p", may be applied to give the complete algebraic solution of any binomial equation. He also here explains how the equation whose roots are the squares of the differences of the roots of the original equation may be used so as to give considerable information as to the position and nature of those roots. Celestial mechanics. A theory of the planetary motions had formed the subject of some of the most remarkable of Lagrange's Berlin papers. In 1806 the subject was reopened by Poisson, who, in a paper read before the French Academy, showed that Lagrange's formulae led to certain limits for the stability of the orbits. Lagrange, who was present, now discussed the whole subject afresh, and in a letter communicated to the academy in 1808 explained how, by the variation of arbitrary constants, the periodical and secular inequalities of any system of mutually interacting bodies could be determined. Prizes and distinctions. Euler proposed Lagrange for election to the Berlin Academy and he was elected on 2 September 1756. He was elected a Fellow of the Royal Society of Edinburgh in 1790, a Fellow of the Royal Society and a foreign member of the Royal Swedish Academy of Sciences in 1806. In 1808, Napoleon made Lagrange a Grand Officer of the Legion of Honour and a Count of the Empire. He was awarded the Grand Croix of the Ordre Impérial de la Réunion in 1813, a week before his death in Paris, and was buried in the Panthéon, a mausoleum dedicated to the most honoured French people. Lagrange was awarded the 1764 prize of the French Academy of Sciences for his memoir on the libration of the Moon. In 1766 the academy proposed a problem of the motion of the satellites of Jupiter, and the prize again was awarded to Lagrange. He also shared or won the prizes of 1772, 1774, and 1778. Lagrange is one of the 72 prominent French scientists who were commemorated on plaques at the first stage of the Eiffel Tower when it first opened. "Rue Lagrange" in the 5th Arrondissement in Paris is named after him. In Turin, the street where the house of his birth still stands is named "via Lagrange". The lunar crater Lagrange and the asteroid 1006 Lagrangea also bear his name. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. 
&lt;templatestyles src="Refbegin/styles.css" /&gt; The initial version of this article was taken from the public domain resource "A Short Account of the History of Mathematics" (4th edition, 1908) by W. W. Rouse Ball.
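As a brief illustration of the general equation of motion quoted above from the "Mécanique analytique", the following sketch applies it to a single mass on a spring; the symbols m, k and the coordinate x are chosen here purely for the example.

```latex
% Lagrange's equation  d/dt(dT/dxdot) - dT/dx + dV/dx = 0  for a mass m on a
% spring of stiffness k, with the displacement x as the generalized coordinate.
\begin{align*}
  T &= \tfrac{1}{2} m \dot{x}^2, \qquad V = \tfrac{1}{2} k x^2,\\
  \frac{d}{dt}\frac{\partial T}{\partial \dot{x}} &= m\ddot{x}, \qquad
  \frac{\partial T}{\partial x} = 0, \qquad
  \frac{\partial V}{\partial x} = kx,\\
  &\Longrightarrow\quad m\ddot{x} + kx = 0, \qquad
  x(t) = A\cos\!\left(\sqrt{k/m}\;t + \varphi\right).
\end{align*}
```

The same differentiations, applied to each generalized coordinate in turn, yield the equations of motion of an arbitrary constrained system, which is the economy of the method described above.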
[ { "math_id": 0, "text": "y = a \\sin (mx) \\sin (nt)\\," }, { "math_id": 1, "text": "ax^n" }, { "math_id": 2, "text": "b=0" }, { "math_id": 3, "text": "\n \\frac{d}{dt}\n \\frac{\\partial T}{\\partial \\dot{x}}\n- \\frac{\\partial T}{\\partial x}\n+ \\frac{\\partial V}{\\partial x} = 0,\n" }, { "math_id": 4, "text": " a^{p-1}-1 \\equiv 0\\pmod p " } ]
https://en.wikipedia.org/wiki?curid=87793
Ratio
Relationship between two numbers of the same kind In mathematics, a ratio () shows how many times one number contains another. For example, if there are eight oranges and six lemons in a bowl of fruit, then the ratio of oranges to lemons is eight to six (that is, 8:6, which is equivalent to the ratio 4:3). Similarly, the ratio of lemons to oranges is 6:8 (or 3:4) and the ratio of oranges to the total amount of fruit is 8:14 (or 4:7). The numbers in a ratio may be quantities of any kind, such as counts of people or objects, or such as measurements of lengths, weights, time, etc. In most contexts, both numbers are restricted to be positive. A ratio may be specified either by giving both constituting numbers, written as ""a" to "b" or "a:b"", or by giving just the value of their quotient . Equal quotients correspond to equal ratios. A statement expressing the equality of two ratios is called a proportion. Consequently, a ratio may be considered as an ordered pair of numbers, a fraction with the first number in the numerator and the second in the denominator, or as the value denoted by this fraction. Ratios of counts, given by (non-zero) natural numbers, are rational numbers, and may sometimes be natural numbers. A more specific definition adopted in physical sciences (especially in metrology) for "ratio" is the dimensionless quotient between two physical quantities measured with the same unit. A quotient of two quantities that are measured with different units may be called a "rate". Notation and terminology. The ratio of numbers "A" and "B" can be expressed as: When a ratio is written in the form "A":"B", the two-dot character is sometimes the colon punctuation mark. In Unicode, this is , although Unicode also provides a dedicated ratio character, . The numbers "A" and "B" are sometimes called "terms of the ratio", with "A" being the "antecedent" and "B" being the "consequent". A statement expressing the equality of two ratios "A":"B" and "C":"D" is called a proportion, written as "A":"B" = "C":"D" or "A":"B"∷"C":"D". This latter form, when spoken or written in the English language, is often expressed as ("A" is to "B") as ("C" is to "D"). "A", "B", "C" and "D" are called the terms of the proportion. "A" and "D" are called its "extremes", and "B" and "C" are called its "means". The equality of three or more ratios, like "A":"B" = "C":"D" = "E":"F", is called a continued proportion. Ratios are sometimes used with three or even more terms, e.g., the proportion for the edge lengths of a "two by four" that is ten inches long is therefore formula_1 (unplaned measurements; the first two numbers are reduced slightly when the wood is planed smooth) a good concrete mix (in volume units) is sometimes quoted as formula_2 For a (rather dry) mixture of 4/1 parts in volume of cement to water, it could be said that the ratio of cement to water is 4:1, that there is 4 times as much cement as water, or that there is a quarter (1/4) as much water as cement. The meaning of such a proportion of ratios with more than two terms is that the ratio of any two terms on the left-hand side is equal to the ratio of the corresponding two terms on the right-hand side. History and etymology. It is possible to trace the origin of the word "ratio" to the Ancient Greek ("logos"). Early translators rendered this into Latin as ' ("reason"; as in the word "rational"). A more modern interpretation of Euclid's meaning is more akin to computation or reckoning. 
Medieval writers used the word ' ("proportion") to indicate ratio and "" ("proportionality") for the equality of ratios. Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers. The Pythagoreans' conception of number included only what would today be called rational numbers, casting doubt on the validity of the theory in geometry where, as the Pythagoreans also discovered, incommensurable ratios (corresponding to irrational numbers) exist. The discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus of Cnidus. The exposition of the theory of proportions that appears in Book VII of The Elements reflects the earlier theory of ratios of commensurables. The existence of multiple theories seems unnecessarily complex since ratios are, to a large extent, identified with quotients and their prospective values. However, this is a comparatively recent development, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios and quotients. The reasons for this are twofold: first, there was the previously mentioned reluctance to accept irrational numbers as true numbers, and second, the lack of a widely used symbolism to replace the already established terminology of ratios delayed the full acceptance of fractions as alternative until the 16th century. Euclid's definitions. Book V of Euclid's Elements has 18 definitions, all of which relate to ratios. In addition, Euclid uses ideas that were in such common usage that he did not include definitions for them. The first two definitions say that a "part" of a quantity is another quantity that "measures" it and conversely, a "multiple" of a quantity is another quantity that it measures. In modern terminology, this means that a multiple of a quantity is that quantity multiplied by an integer greater than one—and a part of a quantity (meaning aliquot part) is a part that, when multiplied by an integer greater than one, gives the quantity. Euclid does not define the term "measure" as used here, However, one may infer that if a quantity is taken as a unit of measurement, and a second quantity is given as an integral number of these units, then the first quantity "measures" the second. These definitions are repeated, nearly word for word, as definitions 3 and 5 in book VII. Definition 3 describes what a ratio is in a general way. It is not rigorous in a mathematical sense and some have ascribed it to Euclid's editors rather than Euclid himself. Euclid defines a ratio as between two quantities "of the same type", so by this definition the ratios of two lengths or of two areas are defined, but not the ratio of a length and an area. Definition 4 makes this more rigorous. It states that a ratio of two quantities exists, when there is a multiple of each that exceeds the other. In modern notation, a ratio exists between quantities "p" and "q", if there exist integers "m" and "n" such that "mp"&gt;"q" and "nq"&gt;"p". This condition is known as the Archimedes property. Definition 5 is the most complex and difficult. It defines what it means for two ratios to be equal. Today, this can be done by simply stating that ratios are equal when the quotients of the terms are equal, but such a definition would have been meaningless to Euclid. 
In modern notation, Euclid's definition of equality is that given quantities "p", "q", "r" and "s", "p":"q"∷"r" :"s" if and only if, for any positive integers "m" and "n", "np"&lt;"mq", "np"="mq", or "np"&gt;"mq" according as "nr"&lt;"ms", "nr"="ms", or "nr"&gt;"ms", respectively. This definition has affinities with Dedekind cuts as, with "n" and "q" both positive, "np" stands to "mq" as stands to the rational number (dividing both terms by "nq"). Definition 6 says that quantities that have the same ratio are "proportional" or "in proportion". Euclid uses the Greek ἀναλόγον (analogon), this has the same root as λόγος and is related to the English word "analog". Definition 7 defines what it means for one ratio to be less than or greater than another and is based on the ideas present in definition 5. In modern notation it says that given quantities "p", "q", "r" and "s", "p":"q"&gt;"r":"s" if there are positive integers "m" and "n" so that "np"&gt;"mq" and "nr"≤"ms". As with definition 3, definition 8 is regarded by some as being a later insertion by Euclid's editors. It defines three terms "p", "q" and "r" to be in proportion when "p":"q"∷"q":"r". This is extended to four terms "p", "q", "r" and "s" as "p":"q"∷"q":"r"∷"r":"s", and so on. Sequences that have the property that the ratios of consecutive terms are equal are called geometric progressions. Definitions 9 and 10 apply this, saying that if "p", "q" and "r" are in proportion then "p":"r" is the "duplicate ratio" of "p":"q" and if "p", "q", "r" and "s" are in proportion then "p":"s" is the "triplicate ratio" of "p":"q". Number of terms and use of fractions. In general, a comparison of the quantities of a two-entity ratio can be expressed as a fraction derived from the ratio. For example, in a ratio of 2:3, the amount, size, volume, or quantity of the first entity is formula_3 that of the second entity. If there are 2 oranges and 3 apples, the ratio of oranges to apples is 2:3, and the ratio of oranges to the total number of pieces of fruit is 2:5. These ratios can also be expressed in fraction form: there are 2/3 as many oranges as apples, and 2/5 of the pieces of fruit are oranges. If orange juice concentrate is to be diluted with water in the ratio 1:4, then one part of concentrate is mixed with four parts of water, giving five parts total; the amount of orange juice concentrate is 1/4 the amount of water, while the amount of orange juice concentrate is 1/5 of the total liquid. In both ratios and fractions, it is important to be clear what is being compared to what, and beginners often make mistakes for this reason. Fractions can also be inferred from ratios with more than two entities; however, a ratio with more than two entities cannot be completely converted into a single fraction, because a fraction can only compare two quantities. A separate fraction can be used to compare the quantities of any two of the entities covered by the ratio: for example, from a ratio of 2:3:7 we can infer that the quantity of the second entity is formula_4 that of the third entity. Proportions and percentage ratios. If we multiply all quantities involved in a ratio by the same number, the ratio remains valid. For example, a ratio of 3:2 is the same as 12:8. It is usual either to reduce terms to the lowest common denominator, or to express them in parts per hundred (percent). If a mixture contains substances A, B, C and D in the ratio 5:9:4:2 then there are 5 parts of A for every 9 parts of B, 4 parts of C and 2 parts of D. 
As 5+9+4+2=20, the total mixture contains 5/20 of A (5 parts out of 20), 9/20 of B, 4/20 of C, and 2/20 of D. If we divide all numbers by the total and multiply by 100, we have converted to percentages: 25% A, 45% B, 20% C, and 10% D (equivalent to writing the ratio as 25:45:20:10). If the two or more ratio quantities encompass all of the quantities in a particular situation, it is said that "the whole" contains the sum of the parts: for example, a fruit basket containing two apples and three oranges and no other fruit is made up of two parts apples and three parts oranges. In this case, formula_5, or 40% of the whole is apples and formula_6, or 60% of the whole is oranges. This comparison of a specific quantity to "the whole" is called a proportion. If the ratio consists of only two values, it can be represented as a fraction, in particular as a decimal fraction. For example, older televisions have a 4:3 "aspect ratio", which means that the width is 4/3 of the height (this can also be expressed as 1.33:1 or just 1.33 rounded to two decimal places). More recent widescreen TVs have a 16:9 aspect ratio, or 1.78 rounded to two decimal places. One of the popular widescreen movie formats is 2.35:1 or simply 2.35. Representing ratios as decimal fractions simplifies their comparison. When comparing 1.33, 1.78 and 2.35, it is obvious which format offers wider image. Such a comparison works only when values being compared are consistent, like always expressing width in relation to height. Reduction. Ratios can be reduced (as fractions are) by dividing each quantity by the common factors of all the quantities. As for fractions, the simplest form is considered that in which the numbers in the ratio are the smallest possible integers. Thus, the ratio 40:60 is equivalent in meaning to the ratio 2:3, the latter being obtained from the former by dividing both quantities by 20. Mathematically, we write 40:60 = 2:3, or equivalently 40:60∷2:3. The verbal equivalent is "40 is to 60 as 2 is to 3." A ratio that has integers for both quantities and that cannot be reduced any further (using integers) is said to be in simplest form or lowest terms. Sometimes it is useful to write a ratio in the form 1:"x" or "x":1, where "x" is not necessarily an integer, to enable comparisons of different ratios. For example, the ratio 4:5 can be written as 1:1.25 (dividing both sides by 4) Alternatively, it can be written as 0.8:1 (dividing both sides by 5). Where the context makes the meaning clear, a ratio in this form is sometimes written without the 1 and the ratio symbol (:), though, mathematically, this makes it a factor or multiplier. Irrational ratios. Ratios may also be established between incommensurable quantities (quantities whose ratio, as value of a fraction, amounts to an irrational number). The earliest discovered example, found by the Pythagoreans, is the ratio of the length of the diagonal d to the length of a side s of a square, which is the square root of 2, formally formula_7 Another example is the ratio of a circle's circumference to its diameter, which is called π, and is not just an irrational number, but a transcendental number. 
Also well known is the golden ratio of two (mostly) lengths a and b, which is defined by the proportion formula_8 or, equivalently formula_9 Taking the ratios as fractions and formula_10 as having the value x, yields the equation formula_11 or formula_12 which has the positive, irrational solution formula_13 Thus at least one of "a" and "b" has to be irrational for them to be in the golden ratio. An example of an occurrence of the golden ratio in math is as the limiting value of the ratio of two consecutive Fibonacci numbers: even though all these ratios are ratios of two integers and hence are rational, the limit of the sequence of these rational ratios is the irrational golden ratio. Similarly, the silver ratio of a and b is defined by the proportion formula_14 corresponding to formula_15 This equation has the positive, irrational solution formula_16 so again at least one of the two quantities "a" and "b" in the silver ratio must be irrational. Odds. "Odds" (as in gambling) are expressed as a ratio. For example, odds of "7 to 3 against" (7:3) mean that there are seven chances that the event will not happen to every three chances that it will happen. The probability of success is 30%. In every ten trials, there are expected to be three wins and seven losses. Units. Ratios may be unitless, as in the case they relate quantities in units of the same dimension, even if their units of measurement are initially different. For example, the ratio one minute : 40 seconds can be reduced by changing the first value to 60 seconds, so the ratio becomes 60 seconds : 40 seconds. Once the units are the same, they can be omitted, and the ratio can be reduced to 3:2. On the other hand, there are non-dimensionless quotients, also known as "rates" (sometimes also as ratios). In chemistry, mass concentration ratios are usually expressed as weight/volume fractions. For example, a concentration of 3% w/v usually means 3 g of substance in every 100 mL of solution. This cannot be converted to a dimensionless ratio, as in weight/weight or volume/volume fractions. Triangular coordinates. The locations of points relative to a triangle with vertices "A", "B", and "C" and sides "AB", "BC", and "CA" are often expressed in extended ratio form as "triangular coordinates". In barycentric coordinates, a point with coordinates "α, β, γ" is the point upon which a weightless sheet of metal in the shape and size of the triangle would exactly balance if weights were put on the vertices, with the ratio of the weights at "A" and "B" being "α" : "β", the ratio of the weights at "B" and "C" being "β" : "γ", and therefore the ratio of weights at "A" and "C" being "α" : "γ". In trilinear coordinates, a point with coordinates "x" :"y" :"z" has perpendicular distances to side "BC" (across from vertex "A") and side "CA" (across from vertex "B") in the ratio "x" :"y", distances to side "CA" and side "AB" (across from "C") in the ratio "y" :"z", and therefore distances to sides "BC" and "AB" in the ratio "x" :"z". Since all information is expressed in terms of ratios (the individual numbers denoted by "α, β, γ, x, y," and "z" have no meaning by themselves), a triangle analysis using barycentric or trilinear coordinates applies regardless of the size of the triangle. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
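The reduction and percentage conventions described above are mechanical, and a few lines of code can make them explicit. The following Python sketch is illustrative only; the helper names "simplify" and "to_percentages" are not standard library functions.

```python
from functools import reduce
from math import gcd

def simplify(*terms):
    """Reduce a ratio such as 40:60 to lowest terms (2:3) by dividing
    every term by the greatest common divisor of all the terms."""
    g = reduce(gcd, terms)
    return tuple(t // g for t in terms)

def to_percentages(*terms):
    """Express each term of a ratio as a percentage of the whole,
    e.g. 5:9:4:2 becomes 25%, 45%, 20%, 10%."""
    total = sum(terms)
    return tuple(100 * t / total for t in terms)

print(simplify(40, 60))            # (2, 3)
print(simplify(5, 9, 4, 2))        # (5, 9, 4, 2), already in lowest terms
print(to_percentages(5, 9, 4, 2))  # (25.0, 45.0, 20.0, 10.0)
```

The printed values agree with the worked examples above: 40:60 reduces to 2:3, and the mixture 5:9:4:2 corresponds to 25%, 45%, 20% and 10% of the whole.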
[ { "math_id": 0, "text": "\\tfrac{A}{B}" }, { "math_id": 1, "text": "\\text{thickness : width : length } = 2:4:10;" }, { "math_id": 2, "text": "\\text{cement : sand : gravel } = 1:2:4." }, { "math_id": 3, "text": "\\tfrac{2}{3}" }, { "math_id": 4, "text": "\\tfrac{3}{7}" }, { "math_id": 5, "text": "\\tfrac{2}{5}" }, { "math_id": 6, "text": "\\tfrac{3}{5}" }, { "math_id": 7, "text": "a:d = 1:\\sqrt{2}." }, { "math_id": 8, "text": "a:b = (a+b):a \\quad" }, { "math_id": 9, "text": "\\quad a:b = (1+b/a):1." }, { "math_id": 10, "text": "a:b" }, { "math_id": 11, "text": "x=1+\\tfrac 1x \\quad" }, { "math_id": 12, "text": "\\quad x^2-x-1 = 0," }, { "math_id": 13, "text": "x=\\tfrac{a}{b}=\\tfrac{1+\\sqrt{5}}{2}." }, { "math_id": 14, "text": "a:b = (2a+b):a \\quad (= (2+b/a):1)," }, { "math_id": 15, "text": "x^2-2x-1 = 0." }, { "math_id": 16, "text": "x = \\tfrac{a}{b}=1+\\sqrt{2}," } ]
https://en.wikipedia.org/wiki?curid=87837
87872
Antiproton
Subatomic particle The antiproton, p̄ (pronounced "p-bar"), is the antiparticle of the proton. Antiprotons are stable, but they are typically short-lived, since any collision with a proton will cause both particles to be annihilated in a burst of energy. The existence of the antiproton with electric charge of −1 e, opposite to the electric charge of +1 e of the proton, was predicted by Paul Dirac in his 1933 Nobel Prize lecture. Dirac received the Nobel Prize for his 1928 publication of his Dirac equation that predicted the existence of positive and negative solutions to Einstein's energy equation (formula_0) and the existence of the positron, the antimatter analog of the electron, with opposite charge and spin. The antiproton was first experimentally confirmed in 1955 at the Bevatron particle accelerator by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. In terms of valence quarks, an antiproton consists of two up antiquarks and one down antiquark. The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception that the antiproton has electric charge and magnetic moment that are the opposites of those in the proton, which is to be expected from the antimatter equivalent of a proton. The questions of how matter is different from antimatter, and the relevance of antimatter in explaining how our universe survived the Big Bang, remain open problems—open, in part, due to the relative scarcity of antimatter in today's universe. Occurrence in nature. Antiprotons have been detected in cosmic rays beginning in 1979, first by balloon-borne experiments and more recently by satellite-based detectors. The standard picture for their presence in cosmic rays is that they are produced in collisions of cosmic ray protons with atomic nuclei in the interstellar medium, via the reaction, where A represents a nucleus: p + A → p + p + p̄ + A. The secondary antiprotons (p̄) then propagate through the galaxy, confined by the galactic magnetic fields. Their energy spectrum is modified by collisions with other atoms in the interstellar medium, and antiprotons can also be lost by "leaking out" of the galaxy. The antiproton cosmic ray energy spectrum is now measured reliably and is consistent with this standard picture of antiproton production by cosmic ray collisions. These experimental measurements set upper limits on the number of antiprotons that could be produced in exotic ways, such as from annihilation of supersymmetric dark matter particles in the galaxy or from the Hawking radiation caused by the evaporation of primordial black holes. This also provides a lower limit on the antiproton lifetime of about 1–10 million years: since the galactic storage time of antiprotons is about 10 million years, an intrinsic decay lifetime would modify the galactic residence time and distort the spectrum of cosmic ray antiprotons. This is significantly more stringent than the best laboratory measurements of the antiproton lifetime. The magnitudes of the properties of the antiproton are predicted by CPT symmetry to be exactly related to those of the proton. In particular, CPT symmetry predicts the mass and lifetime of the antiproton to be the same as those of the proton, and the electric charge and magnetic moment of the antiproton to be opposite in sign and equal in magnitude to those of the proton.
CPT symmetry is a basic consequence of quantum field theory and no violations of it have ever been detected. Modern experiments and applications. Production. Antiprotons were routinely produced at Fermilab for collider physics operations in the Tevatron, where they were collided with protons. The use of antiprotons allows for a higher average energy of collisions between quarks and antiquarks than would be possible in proton–proton collisions. This is because the valence quarks in the proton, and the valence antiquarks in the antiproton, tend to carry the largest fraction of the proton or antiproton's momentum. Formation of antiprotons requires energy equivalent to a temperature of 10 trillion K (10^13 K), and this does not tend to happen naturally. However, at CERN, protons are accelerated in the Proton Synchrotron to an energy of 26 GeV and then smashed into an iridium rod. The protons bounce off the iridium nuclei with enough energy for matter to be created. A range of particles and antiparticles are formed, and the antiprotons are separated off using magnets in vacuum. Measurements. In July 2011, the ASACUSA experiment at CERN determined the mass of the antiproton to be about 1,836.15 times that of the electron. This is the same as the mass of a proton, within the level of certainty of the experiment. In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter. In January 2022, by comparing the charge-to-mass ratios of the antiproton and the negatively charged hydrogen ion, the BASE experiment determined that the antiproton's charge-to-mass ratio is identical to the proton's, down to 16 parts per trillion. Possible applications. Antiprotons have been shown within laboratory experiments to have the potential to treat certain cancers, in a method similar to that currently used for ion (proton) therapy. The primary difference between antiproton therapy and proton therapy is that following ion energy deposition the antiproton annihilates, depositing additional energy in the cancerous region. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
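The figure of roughly 10 trillion kelvin quoted above can be checked by equating the antiproton's rest energy with a thermal energy k_B T, and the energy scale needed for production can be illustrated by the threshold of the simplest fixed-target reaction p + p → p + p + p + p̄. The sketch below uses standard physical constants; the threshold formula T = 6 m_p c^2 is textbook kinematics and is not taken from this article.

```python
k_B = 1.380649e-23        # Boltzmann constant, J/K
m_p_c2_MeV = 938.272      # proton (and antiproton) rest energy, MeV
J_per_MeV = 1.602176634e-13

# Temperature whose characteristic thermal energy k_B*T equals one antiproton rest energy.
T = m_p_c2_MeV * J_per_MeV / k_B
print(f"{T:.2e} K")       # ~1.1e13 K, i.e. about ten trillion kelvin

# Threshold kinetic energy for p + p -> p + p + p + pbar with a stationary target proton:
# the invariant mass must reach 4*m_p*c^2, which works out to T_threshold = 6*m_p*c^2.
print(6 * m_p_c2_MeV / 1000, "GeV")   # ~5.6 GeV
```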
[ { "math_id": 0, "text": "E = mc^2" } ]
https://en.wikipedia.org/wiki?curid=87872
8788855
Kharitonov region
A Kharitonov region is a concept in mathematics. It arises in the study of the stability of polynomials. Let formula_0 be a simply connected set in the complex plane and let formula_1 be a family of polynomials. formula_0 is said to be a Kharitonov region if formula_2 is a subset of formula_3 Here, formula_4 denotes the set of all vertex polynomials of complex interval polynomials formula_5 and formula_6 denotes the set of all vertex polynomials of real interval polynomials formula_7 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "D" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "V_T^n(V_S^n)" }, { "math_id": 3, "text": "P." }, { "math_id": 4, "text": "V_T^n" }, { "math_id": 5, "text": "(T^n)" }, { "math_id": 6, "text": "V_S^n" }, { "math_id": 7, "text": "(S^n)." } ]
https://en.wikipedia.org/wiki?curid=8788855
8790052
Nelson complexity index
Conversion calculation in petroleum refinery The Nelson complexity index (NCI) is a measure to compare the secondary conversion capacity of a petroleum refinery with the primary distillation capacity. The index provides an easy metric for quantifying and ranking the complexity of various refineries and units. To calculate the index, it is necessary to use complexity factors, which compare the cost of upgrading units to the cost of the crude distillation unit. History. It was developed by Wilbur L. Nelson in a series of articles that appeared in the "Oil & Gas Journal" from 1960 to 1961 (Mar. 14, p. 189; Sept. 26, p. 216; and June 19, p. 109). In 1976, he elaborated on the concept in another series of articles, again in the "Oil & Gas Journal" (Sept. 13, p. 81; Sept. 20, p. 202; and Sept. 27, p. 83). Formula. formula_0 Where: formula_1 is the complexity factor of process unit "i", formula_2 is the capacity (throughput) of unit "i", formula_3 is the capacity of the crude distillation unit, and formula_4 is the number of units in the refinery. The NCI assigns a complexity factor to each major piece of refinery equipment based on its complexity and cost in comparison to crude distillation, which is assigned a complexity factor of 1.0. The complexity of each piece of refinery equipment is then calculated by multiplying its complexity factor by its throughput ratio as a percentage of crude distillation capacity. Adding up the complexity values assigned to each piece of equipment, including crude distillation, determines a refinery’s complexity on the NCI. The NCI indicates not only the investment intensity or cost index of the refinery but also its potential value addition. Thus, the higher the index number, the greater the cost of the refinery and the higher the value of its products. In the second edition of the book "Petroleum Refinery Process Economics" (2000), author Robert Maples notes that U.S. refineries rank highest in complexity index, averaging 9.5, compared with Europe's at 6.5. The Jamnagar Refinery belonging to India-based Reliance Industries Limited is now one of the most complex refineries in the world with a Nelson complexity index of 21.1. The "Oil and Gas Journal" annually calculates and publishes a list of refineries with their associated Nelson complexity index scores. Complexity factors. Some factors for various processing units can be seen in the example below: crude distillation is 1.0 by definition, while vacuum distillation carries a factor of 2.0 and catalytic reforming a factor of 5.0. Example. If an oil refinery has a crude distillation unit (100 kbd), vacuum distillation unit (60 kbd), and catalytic reforming unit (30 kbd), then the NCI will be 1*(100/100) + 2*(60/100) + 5*(30/100) = 1.0 + 1.2 + 1.5 = 3.7. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
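The worked example above translates directly into code. The complexity factors and capacities below are simply the ones quoted in the example (crude distillation 1.0 at 100 kbd, vacuum distillation 2.0 at 60 kbd, catalytic reforming 5.0 at 30 kbd); this is an illustrative sketch, not a complete table of Nelson factors.

```python
def nelson_complexity_index(units, cdu_capacity):
    # units: iterable of (complexity_factor, capacity) pairs, including the CDU itself.
    return sum(factor * capacity / cdu_capacity for factor, capacity in units)

units = [
    (1.0, 100.0),  # crude distillation unit (factor 1.0 by definition), 100 kbd
    (2.0, 60.0),   # vacuum distillation unit, 60 kbd
    (5.0, 30.0),   # catalytic reforming unit, 30 kbd
]
print(nelson_complexity_index(units, cdu_capacity=100.0))  # 3.7, as in the example
```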
[ { "math_id": 0, "text": "\\text{NCI} = \\sum_{i=1}^N F_i \\cdot \\frac{C_i}{C_\\text{CDU}} " }, { "math_id": 1, "text": "F_i" }, { "math_id": 2, "text": "C_i" }, { "math_id": 3, "text": "C_\\text{CDU}" }, { "math_id": 4, "text": "N" } ]
https://en.wikipedia.org/wiki?curid=8790052
8790877
Pressure regulator
Control valve that maintains the pressure of a fluid or gas A pressure regulator is a valve that controls the pressure of a fluid to a desired value, using negative feedback from the controlled pressure. Regulators are used for gases and liquids, and can be an integral device with a pressure setting, a restrictor and a sensor all in the one body, or consist of a separate pressure sensor, controller and flow valve. Two types are found: the pressure reducing regulator and the back-pressure regulator. Both types of regulator use feedback of the regulated pressure as input to the control mechanism, and are commonly actuated by a spring loaded diaphragm or piston reacting to changes in the feedback pressure to control the valve opening, and in both cases the valve should be opened only enough to maintain the set regulated pressure. The actual mechanism may be very similar in all respects except the placing of the feedback pressure tap. As in other feedback control mechanisms, the level of damping is important to achieve a balance between fast response to a change in the measured pressure, and stability of output. Insufficient damping may lead to hunting oscillation of the controlled pressure, while excessive friction of moving parts may cause hysteresis. Pressure reducing regulator. Operation. A pressure reducing regulator's primary function is to match the flow of gas through the regulator to the demand for fluid placed upon it, whilst maintaining a sufficiently constant output pressure. If the load flow decreases, then the regulator flow must decrease as well. If the load flow increases, then the regulator flow must increase in order to keep the controlled pressure from decreasing due to a shortage of fluid in the pressure system. It is desirable that the controlled pressure does not vary greatly from the set point for a wide range of flow rates, but it is also desirable that flow through the regulator is stable and the regulated pressure is not subject to excessive oscillation. A pressure regulator includes a "restricting element", a "loading element", and a "measuring element": the restricting element is a valve that can provide a variable restriction to the flow; the loading element applies the needed force to the restricting element (for example a weight, a spring, or a piston actuator); and the measuring element determines when the inlet flow equals the outlet flow (a diaphragm is often used as the measuring element). In the pictured single-stage regulator, a force balance is used on the diaphragm to control a poppet valve in order to regulate pressure. With no inlet pressure, the spring above the diaphragm pushes it down on the poppet valve, holding it open. Once inlet pressure is introduced, the open poppet allows flow to the diaphragm and pressure in the upper chamber increases, until the diaphragm is pushed upward against the spring, causing the poppet to reduce flow, finally stopping further increase of pressure. By adjusting the top screw, the downward pressure on the diaphragm can be increased, requiring more pressure in the upper chamber to maintain equilibrium. In this way, the outlet pressure of the regulator is controlled. formula_0 where: formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 Single stage regulator. High pressure gas from the supply enters the regulator through the inlet port. The inlet pressure gauge will indicate this pressure. The gas then passes through the normally open pressure control valve orifice and the downstream pressure rises until the valve actuating diaphragm is deflected sufficiently to close the valve, preventing any more gas from entering the low pressure side until the pressure drops again. The outlet pressure gauge will indicate this pressure.
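The force balance formula_0 above can be rearranged to give the outlet pressure at which the diaphragm is in equilibrium, P_o = (F - f - P_i*s)/(S - s). The sketch below evaluates that rearrangement; all of the numerical values are hypothetical and chosen only to show the shape of the calculation, not to describe any real regulator.

```python
def outlet_pressure(F, f, P_i, s, S):
    # Solve F = (P_i - P_o)*s + P_o*S + f for the outlet pressure P_o.
    return (F - f - P_i * s) / (S - s)

# Hypothetical values in SI units: forces in newtons, areas in square metres, pressures in pascals.
F = 50.0      # diaphragm spring force
f = 2.0       # poppet spring force
P_i = 10e5    # inlet pressure (10 bar)
s = 2.0e-5    # poppet area
S = 2.0e-3    # diaphragm area
print(outlet_pressure(F, f, P_i, s, S))  # ~1.4e4 Pa for these made-up numbers
```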
The outlet pressure on the diaphragm and the inlet pressure and poppet spring force on the upstream part of the valve hold the diaphragm/poppet assembly in the closed position against the force of the diaphragm loading spring. If the supply pressure falls, the closing force due to supply pressure is reduced, and downstream pressure will rise slightly to compensate. Thus, if the supply pressure falls, the outlet pressure will increase, provided the outlet pressure remains below the falling supply pressure. This is the cause of end-of-tank dump where the supply is provided by a pressurized gas tank. The operator can compensate for this effect by adjusting the spring load by turning the knob to restore outlet pressure to the desired level. With a single stage regulator, when the supply pressure gets low, the lower inlet pressure causes the outlet pressure to climb. If the diaphragm loading spring compression is not adjusted to compensate, the poppet can remain open and allow the tank to rapidly dump its remaining contents. Double stage regulator. Two stage regulators are two regulators in series in the same housing that operate to reduce the pressure progressively in two steps instead of one. The first stage, which is preset, reduces the pressure of the supply gas to an intermediate stage; gas at that pressure passes into the second stage. The gas emerges from the second stage at a pressure (working pressure) set by user by adjusting the pressure control knob at the diaphragm loading spring. Two stage regulators may have two safety valves, so that if there is any excess pressure between stages due to a leak at the first stage valve seat the rising pressure will not overload the structure and cause an explosion. An unbalanced single stage regulator may need frequent adjustment. As the supply pressure falls, the outlet pressure may change, necessitating adjustment. In the two stage regulator, there is improved compensation for any drop in the supply pressure. Applications. Pressure reducing regulators. Air compressors. Air compressors are used in industrial, commercial, and home workshop environments to perform an assortment of jobs including blowing things clean; running air powered tools; and inflating things like tires, balls, etc. Regulators are often used to adjust the pressure coming out of an air receiver (tank) to match what is needed for the task. Often, when one large compressor is used to supply compressed air for multiple uses (often referred to as "shop air" if built as a permanent installation of pipes throughout a building), additional regulators will be used to ensure that each separate tool or function receives the pressure it needs. This is important because some air tools, or uses for compressed air, require pressures that may cause damage to other tools or materials. Aircraft. Pressure regulators are found in aircraft cabin pressurization, canopy seal pressure control, potable water systems, and waveguide pressurization. Aerospace. Aerospace pressure regulators have applications in propulsion pressurant control for reaction control systems (RCS) and Attitude Control Systems (ACS), where high vibration, large temperature extremes and corrosive fluids are present. Cooking. Pressurized vessels can be used to cook food much more rapidly than at atmospheric pressure, as the higher pressure raises the boiling point of the contents. 
All modern pressure cookers will have a pressure regulator valve and a pressure relief valve as a safety mechanism to prevent explosion in the event that the pressure regulator valve fails to adequately release pressure. Some older models lack a safety release valve. Most home cooking models are built to maintain a low and high pressure setting. These settings are usually . Almost all home cooking units will employ a very simple single-stage pressure regulator. Older models will simply use a small weight on top of an opening that will be lifted by excessive pressure to allow excess steam to escape. Newer models usually incorporate a spring-loaded valve that lifts and allows pressure to escape as pressure in the vessel rises. Some pressure cookers will have a quick release setting on the pressure regulator valve that will, essentially, lower the spring tension to allow the pressure to escape at a quick, but still safe rate. Commercial kitchens also use pressure cookers, in some cases using oil based pressure cookers to quickly deep fry fast food. Pressure vessels of this sort can also be used as autoclaves to sterilize small batches of equipment and in home canning operations. Water pressure reduction. A water pressure regulating valve limits inflow by dynamically changing the valve opening so that when less pressure is on the outside, the valve opens up fully, and too much pressure on the outside causes the valve to shut. In a no pressure situation, where water could flow backwards, it won't be impeded. A water pressure regulating valve does not function as a check valve. They are used in applications where the water pressure is too high at the end of the line to avoid damage to appliances or pipes. Welding and cutting. Oxy-fuel welding and cutting processes require gases at specific pressures, and regulators will generally be used to reduce the high pressures of storage cylinders to those usable for cutting and welding. Oxygen and fuel gas regulators usually have two stages: The first stage of the regulator releases the gas at a constant pressure from the cylinder despite the pressure in the cylinder becoming less as the gas is released. The second stage of the regulator controls the pressure reduction from the intermediate pressure to low pressure. The final flow rate may be adjusted at the torch. The regulator assembly usually has two pressure gauges, one indicating cylinder pressure, the other indicating delivery pressure. Inert gas shielded arc welding also uses gas stored at high pressure provided through a regulator. There may be a flow gauge calibrated to the specific gas. Propane/LP gas. All propane and LP gas applications require the use of a regulator. Because pressures in propane tanks can fluctuate significantly with temperature, regulators must be present to deliver a steady pressure to downstream appliances. These regulators normally compensate for tank pressures between and commonly deliver 11 inches water column for residential applications and 35 inches of water column for industrial applications. Propane regulators differ in size and shape, delivery pressure and adjustability, but are uniform in their purpose to deliver a constant outlet pressure for downstream requirements. Common international settings for domestic LP gas regulators are 28 mbar for butane and 37 mbar for propane. Gas powered vehicles. 
All vehicular motors that run on compressed gas as a fuel (internal combustion engine or fuel cell electric power train) require a pressure regulator to reduce the stored gas (CNG or hydrogen) pressure from 700, 500, 350 or 200 bar (or 70, 50, 35 and 20 MPa) to operating pressure. Recreational vehicles. For recreational vehicles with plumbing, a pressure regulator is required to reduce the pressure of an external water supply connected to the vehicle plumbing, as the supply may be at a much higher elevation than the campground, and water pressure depends on the height of the water column. Without a pressure regulator, the intense pressure encountered at some campgrounds in mountainous areas may be enough to burst the camper's water pipes or unseat the plumbing joints, causing flooding. Pressure regulators for this purpose are typically sold as small screw-on accessories that fit inline with the hoses used to connect an RV to the water supply, which are almost always screw-thread-compatible with the common garden hose. Breathing gas supply. Pressure regulators are used with diving cylinders for scuba diving. The tank may contain pressures in excess of 200 bar, which could cause a fatal barotrauma injury to a person breathing it directly. A demand controlled regulator provides a flow of breathing gas at the ambient pressure (which varies by depth in the water). Pressure reducing regulators are also used to supply breathing gas to surface-supplied divers, and people who use self-contained breathing apparatus (SCBA) for rescue and hazmat work on land. The interstage pressure for SCBA at normal atmospheric pressure can generally be left constant at a factory setting, but for surface supplied divers it is controlled by the gas panel operator, depending on the diver depth and flow rate requirements. Supplementary oxygen for high altitude flight in unpressurised aircraft and medical gases are also commonly dispensed through pressure reducing regulators from high-pressure storage. Supplementary oxygen may also be dispensed through a regulator which both reduces the pressure, and supplies the gas at a metered flow rate, to be mixed with ambient air. One way of producing a constant mass flow at variable ambient pressure is to use a choked flow, where the flow through the metering orifice is sonic. For a given gas in choked flow, the mass flow rate may be controlled by setting the orifice size or the upstream pressure. To produce a choked flow in oxygen, the absolute pressure ratio of upstream and downstream gas must exceed 1.893 at 20 °C. At normal atmospheric pressure this requires an upstream pressure of more than 1.013 × 1.893 = 1.918 bar. A typical nominal regulated gauge pressure from a medical oxygen regulator is about 3.4 bar, for an absolute pressure of approximately 4.4 bar and a pressure ratio of about 4.4 without back pressure, so they will have choked flow in the metering orifices for a downstream (outlet) pressure of up to about 2.3 bar absolute. This type of regulator commonly uses a rotor plate with calibrated orifices and detents to hold it in place when the orifice corresponding to the desired flow rate is selected. This type of regulator may also have one or two uncalibrated takeoff connections from the intermediate pressure chamber with diameter index safety system (DISS) or similar connectors to supply gas to other equipment, and the high pressure connection is commonly a pin index safety system (PISS) yoke clamp.
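The 1.893 figure quoted above is the critical (choked-flow) pressure ratio for a diatomic gas such as oxygen; it follows from the isentropic relation ((gamma + 1)/2)^(gamma/(gamma - 1)) with gamma ≈ 1.4. A small sketch of that arithmetic, reusing the 1.013 bar ambient pressure from the text (the value of gamma is a standard assumption, not something given in this article):

```python
gamma = 1.4                      # ratio of specific heats assumed for oxygen (diatomic gas)
critical_ratio = ((gamma + 1) / 2) ** (gamma / (gamma - 1))
print(critical_ratio)            # ~1.893

ambient = 1.013                  # bar, absolute
print(ambient * critical_ratio)  # ~1.918 bar: minimum upstream absolute pressure for choked flow
```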
Similar mechanisms can be used for flow rate control for aviation and mountaineering regulators. Mining industry. As the pressure in water pipes builds rapidly with depth, underground mining operations require a fairly complex water system with pressure reducing valves. These devices must be installed at a certain vertical interval, usually . Without such valves, pipes could burst and pressure would be too great for equipment operation. Natural gas industry. Pressure regulators are used extensively within the natural gas industry. Natural gas is compressed to high pressures in order to be distributed throughout the country through large transmission pipelines. The transmission pressure can be over and must be reduced through various stages to a usable pressure for industrial, commercial, and residential applications. There are three main pressure reduction locations in this distribution system. The first reduction is located at the city gate, whereas the transmission pressure is dropped to a distribution pressure to feed throughout the city. This is also the location where the odorless natural gas is odorized with mercaptan. The distribution pressure is further reduced at a district regulator station, located at various points in the city, to below 60 psig. The final cut would occur at the end users location. Generally, the end user reduction is taken to low pressures ranging from 0.25 psig to 5 psig. Some industrial applications can require a higher pressure. Back-pressure regulators. Hyperbaric chambers. Where the pressure drop on a built-in breathing system exhaust system is too great, typically in saturation systems, a back-pressure regulator may be used to reduce the exhaust pressure drop to a safer and more manageable pressure. Reclaim diving helmets. The depth at which most heliox breathing mixtures are used in surface-supplied diving is generally at least 5 bar above surface atmospheric pressure, and the exhaust gas from the diver must pass through a reclaim valve, which is a back-pressure valve activated by the increase in pressure in the diver's helmet above ambient pressure caused by diver exhalation. The reclaim gas hose which carries the exhaled gas back to the surface for recycling must not be at too great a pressure difference from the ambient pressure at the diver. An additional back-pressure regulator in this line allows finer setting of the reclaim valve for lower work of breathing at variable depths. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F = (P_i - P_o)s + P_oS + f" }, { "math_id": 1, "text": "F:\\text{ diaphragm spring force}" }, { "math_id": 2, "text": "f:\\text{ poppet spring force}" }, { "math_id": 3, "text": "P_i:\\text{ inlet pressure}" }, { "math_id": 4, "text": "P_o:\\text{ outlet pressure}" }, { "math_id": 5, "text": "s:\\text{ poppet area}" }, { "math_id": 6, "text": "S:\\text{ diaphragm area}" } ]
https://en.wikipedia.org/wiki?curid=8790877
879282
Neutron transport
Study of motions and interactions of neutrons Neutron transport (also known as neutronics) is the study of the motions and interactions of neutrons with materials. Nuclear scientists and engineers often need to know where neutrons are in an apparatus, in what direction they are going, and how quickly they are moving. It is commonly used to determine the behavior of nuclear reactor cores and experimental or industrial neutron beams. Neutron transport is a type of radiative transport. Background. Neutron transport has roots in the Boltzmann equation, which was used in the 1800s to study the kinetic theory of gases. It did not receive large-scale development until the invention of chain-reacting nuclear reactors in the 1940s. As neutron distributions came under detailed scrutiny, elegant approximations and analytic solutions were found in simple geometries. However, as computational power has increased, numerical approaches to neutron transport have become prevalent. Today, with massively parallel computers, neutron transport is still under very active development in academia and research institutions throughout the world. It remains a computationally challenging problem since it depends on time and the 3 dimensions of space, and the variables of energy span several orders of magnitude (from fractions of meV to several MeV). Modern solutions use either discrete ordinates or Monte Carlo methods, or even a hybrid of both. Neutron transport equation. The neutron transport equation is a balance statement that conserves neutrons. Each term represents a gain or a loss of a neutron, and the balance, in essence, claims that neutrons gained equals neutrons lost. It is formulated as follows: formula_0 formula_1 formula_2 Where: The transport equation can be applied to a given part of phase space (time t, energy E, location formula_3, and direction of travel formula_4). The first term represents the time rate of change of neutrons in the system. The second terms describes the movement of neutrons into or out of the volume of space of interest. The third term accounts for all neutrons that have a collision in that phase space. The first term on the right hand side is the production of neutrons in this phase space due to fission, while the second term on the right hand side is the production of neutrons in this phase space due to delayed neutron precursors (i.e., unstable nuclei which undergo neutron decay). The third term on the right hand side is in-scattering, these are neutrons that enter this area of phase space as a result of scattering interactions in another. The fourth term on the right is a generic source. The equation is usually solved to find formula_5, since that will allow for the calculation of reaction rates, which are of primary interest in shielding and dosimetry studies. Types of neutron transport calculations. Several basic types of neutron transport problems exist, depending on the type of problem being solved. Fixed source. A fixed source calculation involves imposing a known neutron source on a medium and determining the resulting neutron distribution throughout the problem. This type of problem is particularly useful for shielding calculations, where a designer would like to minimize the neutron dose outside of a shield while using the least amount of shielding material. For instance, a spent nuclear fuel cask requires shielding calculations to determine how much concrete and steel is needed to safely protect the truck driver who is shipping it. Criticality. 
Fission is the process through which a nucleus splits into (typically two) smaller atoms. If fission is occurring, it is often of interest to know the asymptotic behavior of the system. A reactor is called “critical” if the chain reaction is self-sustaining and time-independent. If the system is not in equilibrium the asymptotic neutron distribution, or the fundamental mode, will grow or decay exponentially over time. Criticality calculations are used to analyze steady-state multiplying media (multiplying media can undergo fission), such as a critical nuclear reactor. The loss terms (absorption, out-scattering, and leakage) and the source terms (in-scatter and fission) are proportional to the neutron flux, contrasting with fixed-source problems where the source is independent of the flux. In these calculations, the presumption of time invariance requires that neutron production exactly equals neutron loss. Since this criticality can only be achieved by very fine manipulations of the geometry (typically via control rods in a reactor), it is unlikely that the modeled geometry will be truly critical. To allow some flexibility in the way models are set up, these problems are formulated as eigenvalue problems, where one parameter is artificially modified until criticality is reached. The most common formulations are the time-absorption and the multiplication eigenvalues, also known as the alpha and k eigenvalues. The alpha and k are the tunable quantities. K-eigenvalue problems are the most common in nuclear reactor analysis. The number of neutrons produced per fission is multiplicatively modified by the dominant eigenvalue. The resulting value of this eigenvalue reflects the time dependence of the neutron density in a multiplying medium. In the case of a nuclear reactor, neutron flux and power density are proportional, hence during reactor start-up "keff" > 1, during reactor operation "keff" = 1 and "keff" < 1 at reactor shutdown. Computational methods. Both fixed-source and criticality calculations can be solved using deterministic methods or stochastic methods. In deterministic methods the transport equation (or an approximation of it, such as diffusion theory) is solved as a differential equation. In stochastic methods such as Monte Carlo discrete particle histories are tracked and averaged in a random walk directed by measured interaction probabilities. Deterministic methods usually involve multi-group approaches while Monte Carlo can work with multi-group and continuous energy cross-section libraries. Multi-group calculations are usually iterative, because the group constants are calculated using flux-energy profiles, which are determined as the result of the neutron transport calculation. Discretization in deterministic methods. To numerically solve the transport equation using algebraic equations on a computer, the spatial, angular, energy, and time variables must be discretized. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
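The k-eigenvalue formulation described above amounts to finding the dominant eigenvalue of an operator that maps one fission-source generation to the next, which is commonly done by power iteration. The toy sketch below applies power iteration to a small made-up matrix; the matrix entries are invented purely for illustration and have no physical meaning.

```python
import numpy as np

def k_eigenvalue(M, iterations=200):
    # Power iteration: dominant eigenvalue of the generation-to-generation operator M.
    source = np.ones(M.shape[0])
    k = 1.0
    for _ in range(iterations):
        new_source = M @ source
        k = new_source.sum() / source.sum()  # ratio of successive fission-source generations
        source = new_source / new_source.sum()
    return k

# Invented 2x2 operator standing in for a discretized toy problem.
M = np.array([[0.6, 0.5],
              [0.3, 0.4]])
print(k_eigenvalue(M))                  # ~0.9 here: a subcritical toy system (k = 1 would be critical)
print(max(abs(np.linalg.eigvals(M))))   # cross-check against numpy's eigenvalue routine
```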
[ { "math_id": 0, "text": "\\left(\\frac{1}{v(E)}\\frac{\\partial}{\\partial t}+\\mathbf{\\hat{\\Omega}}\\cdot\\nabla+\\Sigma_t(\\mathbf{r},E,t)\\right)\n\\psi(\\mathbf{r},E,\\mathbf{\\hat{\\Omega}},t)=\\quad" }, { "math_id": 1, "text": "\\quad\\frac{\\chi_p \\left( E \\right)}{4\\pi}\\int_0^{\\infty} dE^{\\prime}\\nu_p \\left( E^{\\prime} \\right) \\Sigma_f \\left(\\mathbf{r}, E^{\\prime}, t \\right) \\phi \\left( \\mathbf{r}, E^{\\prime}, t \\right) + \\sum_{i=1}^N \\frac{\\chi_{di}\\left( E \\right)}{4\\pi} \\lambda_i C_i \\left( \\mathbf{r}, t \\right)+\\quad" }, { "math_id": 2, "text": "\\quad\\int_{4\\pi}d\\Omega^\\prime\\int^{\\infty}_{0}dE^\\prime\\,\\Sigma_s(\\mathbf{r},E^\\prime\\rightarrow E,\\mathbf{\\hat{\\Omega}}^\\prime\\rightarrow \\mathbf{\\hat{\\Omega}},t)\\psi(\\mathbf{r},E^\\prime,\\mathbf{\\hat{\\Omega}^\\prime},t)+s(\\mathbf{r},E,\\mathbf{\\hat{\\Omega}},t)" }, { "math_id": 3, "text": "\\mathbf{r}" }, { "math_id": 4, "text": "\\mathbf{\\hat{\\Omega}}" }, { "math_id": 5, "text": "\\phi(\\mathbf{r},E)" } ]
https://en.wikipedia.org/wiki?curid=879282
879358
Coordinate vector
Concept in linear algebra In linear algebra, a coordinate vector is a representation of a vector as an ordered list of numbers (a tuple) that describes the vector in terms of a particular ordered basis. An easy example may be a position such as (5, 2, 1) in a 3-dimensional Cartesian coordinate system with the basis as the axes of this system. Coordinates are always specified relative to an ordered basis. Bases and their associated coordinate representations let one realize vector spaces and linear transformations concretely as column vectors, row vectors, and matrices; hence, they are useful in calculations. The idea of a coordinate vector can also be used for infinite-dimensional vector spaces, as addressed below. Definition. Let "V" be a vector space of dimension "n" over a field "F" and let formula_0 be an ordered basis for "V". Then for every formula_1 there is a unique linear combination of the basis vectors that equals "formula_2": formula_3 The coordinate vector of "formula_2" relative to "B" is the sequence of coordinates formula_4 This is also called the "representation of formula_2 with respect to B", or the "B representation of formula_2". The formula_5 are called the "coordinates of formula_2". The order of the basis becomes important here, since it determines the order in which the coefficients are listed in the coordinate vector. Coordinate vectors of finite-dimensional vector spaces can be represented by matrices as column or row vectors. In the above notation, one can write formula_6 and formula_7 where formula_8 is the transpose of the matrix formula_9. The standard representation. We can mechanize the above transformation by defining a function formula_10, called the "standard representation of V with respect to B", that takes every vector to its coordinate representation: formula_11. Then formula_10 is a linear transformation from "V" to "F""n". In fact, it is an isomorphism, and its inverse formula_12 is simply formula_13 Alternatively, we could have defined formula_14 to be the above function from the beginning, realized that formula_14 is an isomorphism, and defined formula_10 to be its inverse. Examples. Example 1. Let formula_15 be the space of all the algebraic polynomials of degree at most 3 (i.e. the highest exponent of "x" can be 3). This space is linear and spanned by the following polynomials: formula_16 matching formula_17 then the coordinate vector corresponding to the polynomial formula_18 is formula_19 According to that representation, the differentiation operator "d"/"dx" which we shall mark "D" will be represented by the following matrix: formula_20 Using that method it is easy to explore the properties of the operator, such as: invertibility, Hermitian or anti-Hermitian or neither, spectrum and eigenvalues, and more. Example 2. The Pauli matrices, which represent the spin operator when transforming the spin eigenstates into vector coordinates. Basis transformation matrix. Let "B" and "C" be two different bases of a vector space "V", and let us mark with formula_21 the matrix which has columns consisting of the "C" representation of basis vectors "b1, b2, …, bn": formula_22 This matrix is referred to as the basis transformation matrix from "B" to "C". It can be regarded as an automorphism over formula_23. 
Any vector "v" represented in "B" can be transformed to a representation in "C" as follows: formula_24 Under the transformation of basis, notice that the superscript on the transformation matrix, "M", and the subscript on the coordinate vector, "v", are the same, and seemingly cancel, leaving the remaining subscript. While this may serve as a memory aid, it is important to note that no such cancellation, or similar mathematical operation, is taking place. Corollary. The matrix "M" is an invertible matrix and "M"−1 is the basis transformation matrix from "C" to "B". In other words, formula_25 Infinite-dimensional vector spaces. Suppose "V" is an infinite-dimensional vector space over a field "F". If the dimension is "κ", then there is some basis of "κ" elements for "V". After an order is chosen, the basis can be considered an ordered basis. The elements of "V" are finite linear combinations of elements in the basis, which give rise to unique coordinate representations exactly as described before. The only change is that the indexing set for the coordinates is not finite. Since a given vector "v" is a "finite" linear combination of basis elements, the only nonzero entries of the coordinate vector for "v" will be the nonzero coefficients of the linear combination representing "v". Thus the coordinate vector for "v" is zero except in finitely many entries. The linear transformations between (possibly) infinite-dimensional vector spaces can be modeled, analogously to the finite-dimensional case, with infinite matrices. The special case of the transformations from "V" into "V" is described in the full linear ring article. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " B = \\{ b_1, b_2, \\ldots, b_n \\} " }, { "math_id": 1, "text": " v \\in V " }, { "math_id": 2, "text": " v " }, { "math_id": 3, "text": " v = \\alpha _1 b_1 + \\alpha _2 b_2 + \\cdots + \\alpha _n b_n ." }, { "math_id": 4, "text": " [v]_B = (\\alpha _1, \\alpha _2, \\ldots, \\alpha _n) ." }, { "math_id": 5, "text": " \\alpha _1, \\alpha _2, \\ldots, \\alpha _n" }, { "math_id": 6, "text": " [v]_B = \\begin{bmatrix} \\alpha_1 \\\\ \\vdots \\\\ \\alpha_n \\end{bmatrix}" }, { "math_id": 7, "text": "[v]_B^T = \\begin{bmatrix} \\alpha_1 & \\alpha_2 & \\cdots & \\alpha_n \\end{bmatrix}" }, { "math_id": 8, "text": "[v]_B^T" }, { "math_id": 9, "text": "[v]_B" }, { "math_id": 10, "text": "\\phi_B" }, { "math_id": 11, "text": "\\phi_B(v)=[v]_B" }, { "math_id": 12, "text": "\\phi_B^{-1}:F^n\\to V" }, { "math_id": 13, "text": "\\phi_B^{-1}(\\alpha_1,\\ldots,\\alpha_n)=\\alpha_1 b_1+\\cdots+\\alpha_n b_n." }, { "math_id": 14, "text": "\\phi_B^{-1}" }, { "math_id": 15, "text": "P_3" }, { "math_id": 16, "text": "B_P = \\left\\{ 1, x, x^2, x^3 \\right\\}" }, { "math_id": 17, "text": "\n 1 := \\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ 0 \\end{bmatrix} ; \\quad\n x := \\begin{bmatrix} 0 \\\\ 1 \\\\ 0 \\\\ 0 \\end{bmatrix} ; \\quad\n x^2 := \\begin{bmatrix} 0 \\\\ 0 \\\\ 1 \\\\ 0 \\end{bmatrix} ; \\quad\n x^3 := \\begin{bmatrix} 0 \\\\ 0 \\\\ 0 \\\\ 1 \\end{bmatrix}\n" }, { "math_id": 18, "text": "p \\left( x \\right) = a_0 + a_1 x + a_2 x^2 + a_3 x^3" }, { "math_id": 19, "text": "\\begin{bmatrix} a_0 \\\\ a_1 \\\\ a_2 \\\\ a_3 \\end{bmatrix}." }, { "math_id": 20, "text": "Dp(x) = P'(x) ; \\quad [D] = \n \\begin{bmatrix}\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 2 & 0 \\\\\n 0 & 0 & 0 & 3 \\\\\n 0 & 0 & 0 & 0 \\\\\n \\end{bmatrix} \n" }, { "math_id": 21, "text": "\\lbrack M \\rbrack_C^B" }, { "math_id": 22, "text": "\\lbrack M\\rbrack_C^B = \\begin{bmatrix} \\lbrack b_1\\rbrack_C & \\cdots & \\lbrack b_n\\rbrack_C \\end{bmatrix} " }, { "math_id": 23, "text": "F^n" }, { "math_id": 24, "text": "\\lbrack v\\rbrack_C = \\lbrack M\\rbrack_C^B \\lbrack v\\rbrack_B. " }, { "math_id": 25, "text": "\\begin{align}\n \\operatorname{Id} &= \\lbrack M\\rbrack_C^B \\lbrack M\\rbrack_B^C = \\lbrack M\\rbrack_C^C \\\\[3pt]\n &= \\lbrack M\\rbrack_B^C \\lbrack M\\rbrack_C^B = \\lbrack M\\rbrack_B^B\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=879358
87947
Sharkovskii's theorem
Mathematical rule In mathematics, Sharkovskii's theorem (also spelled Sharkovsky, Sharkovskiy, Šarkovskii or Sarkovskii), named after Oleksandr Mykolayovych Sharkovsky, who published it in 1964, is a result about discrete dynamical systems. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period. Statement. For some interval formula_0, suppose that formula_1 is a continuous function. The number formula_2 is called a "periodic point of period formula_3" if formula_4, where formula_5 denotes the iterated function obtained by composition of formula_3 copies of formula_6. The number formula_2 is said to have "least period formula_3" if, in addition, formula_7 for all formula_8. Sharkovskii's theorem concerns the possible least periods of periodic points of formula_6. Consider the following ordering of the positive integers, sometimes called the Sharkovskii ordering: formula_9 It consists of: the odd numbers greater than one in increasing order formula_10, then two times those odd numbers formula_11, then four times the odd numbers formula_12, then eight times the odd numbers formula_13, and so on for every power of two times the odd numbers formula_14, followed at the end by the powers of two in decreasing order formula_15. This ordering is a total order: every positive integer appears exactly once somewhere on this list. However, it is not a well-order. In a well-order, every non-empty subset would have an earliest element, but in this order there is no earliest power of two. Sharkovskii's theorem states that if formula_6 has a periodic point of least period formula_3, and formula_3 precedes formula_16 in the above ordering, then formula_6 also has a periodic point of least period formula_16. One consequence is that if formula_6 has only finitely many periodic points, then they must all have periods that are powers of two. Furthermore, if there is a periodic point of period three, then there are periodic points of all other periods. Sharkovskii's theorem does not state that there are "stable" cycles of those periods, just that there are cycles of those periods. For systems such as the logistic map, the bifurcation diagram shows a range of parameter values for which apparently the only cycle has period 3. In fact, there must be cycles of all periods there, but they are not stable and therefore not visible on the computer-generated picture. The assumption of continuity is important. Without this assumption, the discontinuous piecewise linear function formula_17 defined as: formula_18 for which every value has period 3, would be a counterexample. Similarly essential is the assumption of formula_6 being defined on an interval. Otherwise formula_19, which is defined on the real numbers excluding the point 1, that is on formula_20 and for which every non-zero value has period 3, would be a counterexample. Generalizations and related results. Sharkovskii also proved the converse theorem: every upper set of the above order is the set of periods for some continuous function from an interval to itself. In fact all such sets of periods are achieved by the family of functions formula_21, formula_22 for formula_23, except for the empty set of periods which is achieved by formula_24, formula_25. On the other hand, with additional information on the combinatorial structure of the interval map acting on the points in a periodic orbit, a period-n point may force period-3 (and hence all periods). Namely, if the orbit type (the cyclic permutation generated by the map acting on the points in the periodic orbit) has a so-called stretching pair, then this implies the existence of a periodic point of period-3.
It can be shown (in an asymptotic sense) that almost all cyclic permutations admit at least one stretching pair, and hence almost all orbit types imply period-3. Tien-Yien Li and James A. Yorke showed in 1975 that not only does the existence of a period-3 cycle imply the existence of cycles of all periods, but in addition it implies the existence of an uncountable infinitude of points that never map to any cycle (chaotic points)—a property known as period three implies chaos. Sharkovskii's theorem does not immediately apply to dynamical systems on other topological spaces. It is easy to find a circle map with periodic points of period 3 only: take a rotation by 120 degrees, for example. But some generalizations are possible, typically involving the mapping class group of the space minus a periodic orbit. For example, Peter Kloeden showed that Sharkovskii's theorem holds for triangular mappings, i.e., mappings for which the component "fi" depends only on the first "i" components "x1..., xi". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
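The Sharkovskii ordering itself is mechanical enough to implement: write each positive integer as an odd number times a power of two and compare according to the rules above. The helper below is a sketch; the function names are ours and not standard terminology.

```python
def odd_and_power(m):
    # Write m = odd * 2**k and return (odd, k).
    k = 0
    while m % 2 == 0:
        m //= 2
        k += 1
    return m, k

def precedes(m, n):
    # True if m comes strictly before n in the Sharkovskii ordering.
    (om, km), (on, kn) = odd_and_power(m), odd_and_power(n)
    if om > 1 and on > 1:      # both are an odd number > 1 times a power of two
        return (km, om) < (kn, on)
    if om > 1:                 # pure powers of two come after everything else
        return True
    if on > 1:
        return False
    return m > n               # among pure powers of two the order is decreasing

print(precedes(3, 5))    # True: 3 is first in the ordering
print(precedes(6, 12))   # True: 3*2 precedes 3*4
print(precedes(8, 2))    # True: powers of two appear in decreasing order
print(precedes(2, 7))    # False: every number that is not a power of two precedes 2
```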
[ { "math_id": 0, "text": "I\\subset \\mathbb{R}" }, { "math_id": 1, "text": "f : I \\to I" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "m" }, { "math_id": 4, "text": "f^{(m)}(x)=x" }, { "math_id": 5, "text": "f^{(m)}" }, { "math_id": 6, "text": "f" }, { "math_id": 7, "text": "f^{(k)}(x)\\ne x" }, { "math_id": 8, "text": "0<k<m" }, { "math_id": 9, "text": "\\begin{array}{cccccccc}\n3 & 5 & 7 & 9 & 11 & \\ldots & (2n+1)\\cdot2^{0} & \\ldots\\\\\n3\\cdot2 & 5\\cdot2 & 7\\cdot2 & 9\\cdot2 & 11\\cdot2 & \\ldots & (2n+1)\\cdot2^{1} & \\ldots\\\\\n3\\cdot2^{2} & 5\\cdot2^{2} & 7\\cdot2^{2} & 9\\cdot2^{2} & 11\\cdot2^{2} & \\ldots & (2n+1)\\cdot2^{2} & \\ldots\\\\\n3\\cdot2^{3} & 5\\cdot2^{3} & 7\\cdot2^{3} & 9\\cdot2^{3} & 11\\cdot2^{3} & \\ldots & (2n+1)\\cdot2^{3} & \\ldots\\\\\n & \\vdots\\\\\n\\ldots & 2^{n} & \\ldots & 2^{4} & 2^{3} & 2^{2} & 2 & 1\\end{array}" }, { "math_id": 10, "text": " = (2n+1)\\cdot2^0" }, { "math_id": 11, "text": " = (2n+1)\\cdot2^1" }, { "math_id": 12, "text": " = (2n+1)\\cdot2^2" }, { "math_id": 13, "text": " = (2n+1)\\cdot2^3" }, { "math_id": 14, "text": " = (2n+1)\\cdot2^m" }, { "math_id": 15, "text": " = 2^n" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "f:[0,3) \\to [0,3)" }, { "math_id": 18, "text": "f: x\\mapsto \\begin{cases}x+1 &\\mathrm{for\\ } 0\\le x<2 \\\\ x-2 &\\mathrm{for\\ } 2\\le x< 3\\end{cases}" }, { "math_id": 19, "text": "f : x \\mapsto (1 - x)^{-1}" }, { "math_id": 20, "text": "\\mathbb R\\setminus\\{1\\}," }, { "math_id": 21, "text": "T_h:[0,1]\\to[0,1]" }, { "math_id": 22, "text": "x\\mapsto\\min(h,1-2|x-1/2|)" }, { "math_id": 23, "text": "h\\in[0,1]" }, { "math_id": 24, "text": "T:\\mathbb R\\to\\mathbb R" }, { "math_id": 25, "text": "x\\mapsto x+1" } ]
https://en.wikipedia.org/wiki?curid=87947
8795656
Copositive matrix
In mathematics, specifically linear algebra, a real matrix "A" is copositive if formula_0 for every nonnegative vector formula_1. The collection of all copositive matrices is a proper cone; it includes as a subset the collection of real positive-definite matrices. Copositive matrices find applications in economics, operations research, and statistics.
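The definition suggests a crude numerical screen for copositivity: evaluate the quadratic form x^T A x at many random nonnegative vectors and report any violation. The sketch below is only such a screen, so it can refute copositivity but never prove it; the example matrices are chosen by us for illustration.

```python
import numpy as np

def copositivity_screen(A, trials=100_000, seed=0):
    # Return a nonnegative x with x^T A x < 0 if one is found, else None (inconclusive).
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.random(A.shape[0])          # entrywise nonnegative sample
        if x @ A @ x < 0:
            return x
    return None

A_psd = np.array([[1.0, -1.0], [-1.0, 1.0]])   # positive semidefinite, hence copositive
A_not = np.array([[1.0, -3.0], [-3.0, 1.0]])   # x = (1, 1) gives x^T A x = -4
print(copositivity_screen(A_psd))   # None: no violation found, consistent with copositivity
print(copositivity_screen(A_not))   # a nonnegative vector witnessing non-copositivity
```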
[ { "math_id": 0, "text": "x^TAx\\geq 0" }, { "math_id": 1, "text": "x\\geq 0" } ]
https://en.wikipedia.org/wiki?curid=8795656
879637
Joint probability distribution
Type of probability distribution Figure: many sample observations (black) from a joint probability distribution of formula_0 and formula_1; the marginal densities formula_2 and formula_3 are shown as well (in blue and in red). Given two random variables that are defined on the same probability space, the joint probability distribution is the corresponding probability distribution on all possible pairs of outputs. The joint distribution can just as well be considered for any given number of random variables. The joint distribution encodes the marginal distributions, i.e. the distributions of each of the individual random variables, and the conditional probability distributions, which deal with how the outputs of one random variable are distributed when given information on the outputs of the other random variable(s). In the formal mathematical setup of measure theory, the joint distribution is given by the pushforward measure, by the map obtained by pairing together the given random variables, of the sample space's probability measure. In the case of real-valued random variables, the joint distribution, as a particular multivariate distribution, may be expressed by a multivariate cumulative distribution function, or by a multivariate probability density function together with a multivariate probability mass function. In the special case of continuous random variables, it is sufficient to consider probability density functions, and in the case of discrete random variables, it is sufficient to consider probability mass functions. Examples. Draws from an urn. Each of two urns contains twice as many red balls as blue balls, and no others, and one ball is randomly selected from each urn, with the two draws independent of each other. Let formula_4 and formula_5 be discrete random variables associated with the outcomes of the draw from the first urn and second urn respectively. The probability of drawing a red ball from either of the urns is 2/3, and the probability of drawing a blue ball is 1/3. The joint probability distribution, laid out as a table with rows indexed by A and columns indexed by B, is: P(A=red, B=red) = 4/9, P(A=red, B=blue) = 2/9, P(A=blue, B=red) = 2/9, P(A=blue, B=blue) = 1/9, with marginal totals P(A=red) = 2/3 and P(A=blue) = 1/3 in the final column, and P(B=red) = 2/3 and P(B=blue) = 1/3 in the final row. Each of the four inner cells shows the probability of a particular combination of results from the two draws; these probabilities are the joint distribution. In any one cell the probability of a particular combination occurring is (since the draws are independent) the product of the probability of the specified result for A and the probability of the specified result for B. The probabilities in these four cells sum to 1, as with all probability distributions. Moreover, the final row and the final column give the marginal probability distribution for A and the marginal probability distribution for B respectively. For example, for A the first of these cells gives the sum of the probabilities for A being red, regardless of which possibility for B in the column above the cell occurs, as 2/3. Thus the marginal probability distribution for formula_4 gives formula_4's probabilities "unconditional" on formula_5, in a margin of the table. Coin flips. Consider the flip of two fair coins; let formula_4 and formula_5 be discrete random variables associated with the outcomes of the first and second coin flips respectively. Each coin flip is a Bernoulli trial and has a Bernoulli distribution. If a coin displays "heads" then the associated random variable takes the value 1, and it takes the value 0 otherwise.
The probability of each of these outcomes is 1/2, so the marginal (unconditional) density functions are formula_6 formula_7 The joint probability mass function of formula_4 and formula_5 defines probabilities for each pair of outcomes. All possible outcomes are formula_8 Since each outcome is equally likely, the joint probability mass function becomes formula_9 Since the coin flips are independent, the joint probability mass function is the product of the marginals: formula_10 Rolling a die. Consider the roll of a fair die and let formula_11 if the number is even (i.e. 2, 4, or 6) and formula_12 otherwise. Furthermore, let formula_13 if the number is prime (i.e. 2, 3, or 5) and formula_14 otherwise. Then, the joint distribution of formula_4 and formula_5, expressed as a probability mass function, is formula_15 formula_16 These probabilities necessarily sum to 1, since the probability of "some" combination of formula_4 and formula_5 occurring is 1. Marginal probability distribution. If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables. If the joint probability density function of random variables X and Y is formula_17, the marginal probability density functions of X and Y, which define the marginal distributions, are given by: formula_18 and formula_19 where the first integral is over all points in the range of (X,Y) for which X=x and the second integral is over all points in the range of (X,Y) for which Y=y. Joint cumulative distribution function. For a pair of random variables formula_20, the joint cumulative distribution function (CDF) formula_21 is given by F_{X,Y}(x,y) = P(X ≤ x, Y ≤ y) (Eq.1), where the right-hand side represents the probability that the random variable formula_0 takes on a value less than or equal to formula_22 and that formula_1 takes on a value less than or equal to formula_23. For formula_24 random variables formula_25, the joint CDF formula_26 is given by F_{X_1,...,X_N}(x_1,...,x_N) = P(X_1 ≤ x_1, ..., X_N ≤ x_N). Interpreting the formula_24 random variables as a random vector formula_27 yields a shorter notation: formula_28 Joint density function or mass function. Discrete case. The joint probability mass function of two discrete random variables formula_29 is p_{X,Y}(x,y) = P(X = x and Y = y), or written in terms of conditional distributions formula_30 where formula_31 is the probability of formula_32 given that formula_33. The generalization of the preceding two-variable case is the joint probability distribution of formula_34 discrete random variables formula_35 which is p_{X_1,...,X_n}(x_1,...,x_n) = P(X_1 = x_1 and ... and X_n = x_n), or equivalently formula_36. This identity is known as the chain rule of probability. Since these are probabilities, in the two-variable case formula_37 which generalizes for formula_34 discrete random variables formula_38 to formula_39 Continuous case. The joint probability density function formula_17 for two continuous random variables is defined as the derivative of the joint cumulative distribution function (see Eq.1): f_{X,Y}(x,y) = ∂²F_{X,Y}(x,y) / (∂x ∂y). This is equal to: formula_40 where formula_41 and formula_42 are the conditional distributions of formula_1 given formula_43 and of formula_0 given formula_44 respectively, and formula_45 and formula_46 are the marginal distributions for formula_0 and formula_1 respectively.
The definition extends naturally to more than two random variables: Again, since these are probability distributions, one has formula_47 respectively formula_48 Mixed case. The "mixed joint density" may be defined where one or more random variables are continuous and the other random variables are discrete. With one variable of each type formula_49 One example of a situation in which one may wish to find the cumulative distribution of one random variable which is continuous and another random variable which is discrete arises when one wishes to use a logistic regression in predicting the probability of a binary outcome Y conditional on the value of a continuously distributed outcome formula_0. One "must" use the "mixed" joint density when finding the cumulative distribution of this binary outcome because the input variables formula_50 were initially defined in such a way that one could not collectively assign it either a probability density function or a probability mass function. Formally, formula_17 is the probability density function of formula_50 with respect to the product measure on the respective supports of formula_0 and formula_1. Either of these two decompositions can then be used to recover the joint cumulative distribution function: formula_51 The definition generalizes to a mixture of arbitrary numbers of discrete and continuous random variables. Additional properties. Joint distribution for independent variables. In general two random variables formula_0 and formula_1 are independent if and only if the joint cumulative distribution function satisfies formula_52 Two discrete random variables formula_0 and formula_1 are independent if and only if the joint probability mass function satisfies formula_53 for all formula_22 and formula_23. While the number of independent random events grows, the related joint probability value decreases rapidly to zero, according to a negative exponential law. Similarly, two absolutely continuous random variables are independent if and only if formula_54 for all formula_22 and formula_23. This means that acquiring any information about the value of one or more of the random variables leads to a conditional distribution of any other variable that is identical to its unconditional (marginal) distribution; thus no variable provides any information about any other variable. Joint distribution for conditionally dependent variables. If a subset formula_4 of the variables formula_55 is conditionally dependent given another subset formula_5 of these variables, then the probability mass function of the joint distribution is formula_56. formula_56 is equal to formula_57. Therefore, it can be efficiently represented by the lower-dimensional probability distributions formula_58 and formula_59. Such conditional independence relations can be represented with a Bayesian network or copula functions. Covariance. When two or more random variables are defined on a probability space, it is useful to describe how they vary together; that is, it is useful to measure the relationship between the variables. A common measure of the relationship between two random variables is the covariance. Covariance is a measure of linear relationship between the random variables. If the relationship between the random variables is nonlinear, the covariance might not be sensitive to the relationship, which means, it does not relate the correlation between two variables. The covariance between the random variable X and Y, denoted as cov(X,Y), is : formula_60 Correlation. 
There is another measure of the relationship between two random variables that is often easier to interpret than the covariance. The correlation just scales the covariance by the product of the standard deviation of each variable. Consequently, the correlation is a dimensionless quantity that can be used to compare the linear relationships between pairs of variables in different units. If the points in the joint probability distribution of X and Y that receive positive probability tend to fall along a line of positive (or negative) slope, ρXY is near +1 (or −1). If ρXY equals +1 or −1, it can be shown that the points in the joint probability distribution that receive positive probability fall exactly along a straight line. Two random variables with nonzero correlation are said to be correlated. Similar to covariance, the correlation is a measure of the linear relationship between random variables. The correlation between random variables X and Y is given by formula_61 Important named distributions. Named joint distributions that arise frequently in statistics include the multivariate normal distribution, the multivariate stable distribution, the multinomial distribution, the negative multinomial distribution, the multivariate hypergeometric distribution, and the elliptical distribution.
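The covariance and correlation defined above can likewise be computed directly from a joint probability mass function. The sketch below (Python, illustrative only) applies the formulas σXY = E(XY) − μXμY and ρXY = σXY/(σXσY) to the die-rolling example, where formula_4 indicates an even face and formula_5 a prime face:

import math

# Joint PMF of the die example: A = even indicator, B = prime indicator.
joint = {(0, 0): 1/6, (0, 1): 2/6, (1, 0): 2/6, (1, 1): 1/6}

def expectation(g):
    """E[g(A, B)] under the joint probability mass function."""
    return sum(p * g(a, b) for (a, b), p in joint.items())

mu_A = expectation(lambda a, b: a)                      # 1/2
mu_B = expectation(lambda a, b: b)                      # 1/2
cov = expectation(lambda a, b: a * b) - mu_A * mu_B     # E(AB) - mu_A * mu_B = -1/12
var_A = expectation(lambda a, b: (a - mu_A) ** 2)       # 1/4
var_B = expectation(lambda a, b: (b - mu_B) ** 2)       # 1/4
rho = cov / math.sqrt(var_A * var_B)                    # -1/3

# A weak negative linear relationship: even faces are less often prime.
print(cov, rho)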
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "p(X)" }, { "math_id": 3, "text": "p(Y)" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "P(A)=1/2 \\quad \\text{for} \\quad A\\in \\{0, 1\\};" }, { "math_id": 7, "text": "P(B)=1/2 \\quad \\text{for} \\quad B\\in \\{0, 1\\}." }, { "math_id": 8, "text": "\n(A=0,B=0),\n(A=0,B=1),\n(A=1,B=0),\n(A=1,B=1).\n" }, { "math_id": 9, "text": "P(A,B)=1/4 \\quad \\text{for} \\quad A,B\\in\\{0,1\\}." }, { "math_id": 10, "text": "P(A,B)=P(A)P(B) \\quad \\text{for} \\quad A,B \\in\\{0,1\\}." }, { "math_id": 11, "text": "A=1" }, { "math_id": 12, "text": "A=0" }, { "math_id": 13, "text": "B=1" }, { "math_id": 14, "text": "B=0" }, { "math_id": 15, "text": "\n \\mathrm{P}(A=0,B=0)=P\\{1\\}=\\frac{1}{6},\\quad \\quad \\mathrm{P}(A=1,B=0)=P\\{4,6\\}=\\frac{2}{6},\n" }, { "math_id": 16, "text": "\n \\mathrm{P}(A=0,B=1)=P\\{3,5\\}=\\frac{2}{6},\\quad \\quad \\mathrm{P}(A=1,B=1)=P\\{2\\}=\\frac{1}{6}.\n" }, { "math_id": 17, "text": "f_{X,Y}(x,y)" }, { "math_id": 18, "text": "f_{X}(x)= \\int f_{X,Y}(x,y) \\; dy " }, { "math_id": 19, "text": "f_{Y}(y)= \\int f_{X,Y}(x,y) \\; dx " }, { "math_id": 20, "text": "X,Y" }, { "math_id": 21, "text": "F_{X,Y}" }, { "math_id": 22, "text": "x" }, { "math_id": 23, "text": "y" }, { "math_id": 24, "text": "N" }, { "math_id": 25, "text": "X_1,\\ldots,X_N" }, { "math_id": 26, "text": "F_{X_1,\\ldots,X_N}" }, { "math_id": 27, "text": "\\mathbf{X} = (X_1,\\ldots,X_N)^T" }, { "math_id": 28, "text": "F_{\\mathbf{X}}(\\mathbf{x}) = \\operatorname{P}(X_1 \\leq x_1,\\ldots,X_N \\leq x_N)" }, { "math_id": 29, "text": "X, Y" }, { "math_id": 30, "text": "p_{X,Y}(x,y) = \\mathrm{P}(Y=y \\mid X=x) \\cdot \\mathrm{P}(X=x) = \\mathrm{P}(X=x \\mid Y=y) \\cdot \\mathrm{P}(Y=y)" }, { "math_id": 31, "text": " \\mathrm{P}(Y=y \\mid X=x) " }, { "math_id": 32, "text": " Y = y " }, { "math_id": 33, "text": " X = x " }, { "math_id": 34, "text": "n\\," }, { "math_id": 35, "text": "X_1, X_2, \\dots,X_n" }, { "math_id": 36, "text": "\n\\begin{align}\np_{X_1,\\ldots,X_n}(x_1,\\ldots,x_n) & = \\mathrm{P}(X_1=x_1) \\cdot \\mathrm{P}(X_2=x_2\\mid X_1=x_1) \\\\ & \\cdot \\mathrm{P}(X_3=x_3\\mid X_1=x_1,X_2=x_2) \\\\ & \\dots \\\\ & \\cdot P(X_n=x_n\\mid X_1=x_1,X_2=x_2,\\dots,X_{n-1}=x_{n-1}).\n\\end{align}\n" }, { "math_id": 37, "text": "\\sum_i \\sum_j \\mathrm{P}(X=x_i\\ \\mathrm{and}\\ Y=y_j) = 1,\\," }, { "math_id": 38, "text": "X_1, X_2, \\dots , X_n" }, { "math_id": 39, "text": "\\sum_{i} \\sum_{j} \\dots \\sum_{k} \\mathrm{P}(X_1=x_{1i},X_2=x_{2j}, \\dots, X_n=x_{nk}) = 1.\\;" }, { "math_id": 40, "text": "f_{X,Y}(x,y) = f_{Y\\mid X}(y\\mid x)f_X(x) = f_{X\\mid Y}(x\\mid y)f_Y(y)" }, { "math_id": 41, "text": "f_{Y\\mid X}(y\\mid x)" }, { "math_id": 42, "text": "f_{X\\mid Y}(x\\mid y)" }, { "math_id": 43, "text": "X=x" }, { "math_id": 44, "text": "Y=y" }, { "math_id": 45, "text": "f_X(x)" }, { "math_id": 46, "text": "f_Y(y)" }, { "math_id": 47, "text": "\\int_x \\int_y f_{X,Y}(x,y) \\; dy \\; dx= 1" }, { "math_id": 48, "text": "\\int_{x_1} \\ldots \\int_{x_n} f_{X_1,\\ldots,X_n}(x_1,\\ldots,x_n) \\; dx_n \\ldots \\; dx_1 = 1" }, { "math_id": 49, "text": "\n\\begin{align}\nf_{X,Y}(x,y) = f_{X \\mid Y}(x \\mid y)\\mathrm{P}(Y=y)= \\mathrm{P}(Y=y \\mid X=x) f_X(x).\n\\end{align}\n" }, { "math_id": 50, "text": "(X,Y)" }, { "math_id": 51, "text": "\n\\begin{align}\nF_{X,Y}(x,y)&=\\sum\\limits_{t\\le y}\\int_{s=-\\infty}^x f_{X,Y}(s,t)\\;ds.\n\\end{align}\n" }, { "math_id": 52, "text": 
" F_{X,Y}(x,y) = F_X(x) \\cdot F_Y(y) " }, { "math_id": 53, "text": " P(X = x \\ \\mbox{and} \\ Y = y ) = P( X = x) \\cdot P( Y = y) " }, { "math_id": 54, "text": " f_{X,Y}(x,y) = f_X(x) \\cdot f_Y(y) " }, { "math_id": 55, "text": "X_1,\\cdots,X_n" }, { "math_id": 56, "text": "\\mathrm{P}(X_1,\\ldots,X_n)" }, { "math_id": 57, "text": "P(B)\\cdot P(A\\mid B)" }, { "math_id": 58, "text": "P(B)" }, { "math_id": 59, "text": "P(A\\mid B)" }, { "math_id": 60, "text": "\\sigma_{XY}=E[(X-\\mu_x)(Y-\\mu_y)]=E(XY)-\\mu_x\\mu_y" }, { "math_id": 61, "text": "\\rho_{XY}=\\frac{cov(X,Y)}{\\sqrt{V(X)V(Y)}}=\\frac{\\sigma_{XY}}{\\sigma_X\\sigma_Y}" } ]
https://en.wikipedia.org/wiki?curid=879637
879681
Trigonometric interpolation
Interpolation with trigonometric polynomials In mathematics, trigonometric interpolation is interpolation with trigonometric polynomials. Interpolation is the process of finding a function which goes through some given data points. For trigonometric interpolation, this function has to be a trigonometric polynomial, that is, a sum of sines and cosines of given periods. This form is especially suited for interpolation of periodic functions. An important special case is when the given data points are equally spaced, in which case the solution is given by the discrete Fourier transform. Formulation of the interpolation problem. A trigonometric polynomial of degree "K" has the form This expression contains 2"K" + 1 coefficients, "a"0, "a"1, … "a""K", "b"1, …, "b""K", and we wish to compute those coefficients so that the function passes through "N" points: formula_0 Since the trigonometric polynomial is periodic with period 2π, the "N" points can be distributed and ordered in one period as formula_1 (Note that we do "not" in general require these points to be equally spaced.) The interpolation problem is now to find coefficients such that the trigonometric polynomial "p" satisfies the interpolation conditions. Formulation in the complex plane. The problem becomes more natural if we formulate it in the complex plane. We can rewrite the formula for a trigonometric polynomial as formula_2 where "i" is the imaginary unit. If we set "z" = "e""ix", then this becomes formula_3 with formula_4 This reduces the problem of trigonometric interpolation to that of polynomial interpolation on the unit circle. Existence and uniqueness for trigonometric interpolation now follows immediately from the corresponding results for polynomial interpolation. For more information on formulation of trigonometric interpolating polynomials in the complex plane, see p. 156 of Interpolation using Fourier Polynomials. Solution of the problem. Under the above conditions, there exists a solution to the problem for "any" given set of data points {"x""k", "y""k"} as long as "N", the number of data points, is not larger than the number of coefficients in the polynomial, i.e., "N" ≤ 2"K"+1 (a solution may or may not exist if "N"&gt;2"K"+1 depending upon the particular set of data points). Moreover, the interpolating polynomial is unique if and only if the number of adjustable coefficients is equal to the number of data points, i.e., "N" = 2"K" + 1. In the remainder of this article, we will assume this condition to hold true. Odd number of points. If the number of points "N" is odd, say "N=2K+1", applying the Lagrange formula for polynomial interpolation to the polynomial formulation in the complex plane yields that the solution can be written in the form where formula_5 The factor formula_6 in this formula compensates for the fact that the complex plane formulation contains also negative powers of formula_7 and is therefore not a polynomial expression in formula_7. The correctness of this expression can easily be verified by observing that formula_8 and that formula_9 is a linear combination of the right powers of formula_7. Upon using the identity the coefficient formula_9 can be written in the form Even number of points. If the number of points "N" is even, say "N=2K", applying the Lagrange formula for polynomial interpolation to the polynomial formulation in the complex plane yields that the solution can be written in the form where Here, the constants formula_10 can be chosen freely. 
This is caused by the fact that the interpolating function (1) contains an odd number of unknown constants. A common choice is to require that the highest frequency is of the form a constant times formula_11, i.e. the formula_12 term vanishes, but in general the phase of the highest frequency can be chosen to be formula_13. To get an expression for formula_10, we obtain by using (2) that (3) can be written on the form formula_14 This yields formula_15 and formula_16 Note that care must be taken in order to avoid infinities caused by zeros in the denominators. Equidistant nodes. Further simplification of the problem is possible if nodes formula_17 are equidistant, i.e. formula_18 see Zygmund for more details. Odd number of points. Further simplification by using (4) would be an obvious approach, but is obviously involved. A much simpler approach is to consider the Dirichlet kernel formula_19 where formula_20 is odd. It can easily be seen that formula_21 is a linear combination of the right powers of formula_7 and satisfies formula_22 Since these two properties uniquely define the coefficients formula_9 in (5), it follows that formula_23 Here, the sinc-function prevents any singularities and is defined by formula_24 Even number of points. For formula_25 even, we define the Dirichlet kernel as formula_26 Again, it can easily be seen that formula_21 is a linear combination of the right powers of formula_7, does not contain the term formula_27 and satisfies formula_22 Using these properties, it follows that the coefficients formula_9 in (6) are given by formula_28 Note that formula_9 does not contain the formula_27 as well. Finally, note that the function formula_27 vanishes at all the points formula_17. Multiples of this term can, therefore, always be added, but it is commonly left out. Implementation. A MATLAB implementation of the above can be found here and is given by: function P = triginterp(xi,x,y) % TRIGINTERP Trigonometric interpolation. % Input: % xi evaluation points for the interpolant (vector) % x equispaced interpolation nodes (vector, length N) % y interpolation values (vector, length N) % Output: % P values of the trigonometric interpolant (vector) N = length(x); % Adjust the spacing of the given independent variable. h = 2/N; scale = (x(2)-x(1)) / h; x = x/scale; xi = xi/scale; % Evaluate interpolant. P = zeros(size(xi)); for k = 1:N P = P + y(k)*trigcardinal(xi-x(k),N); end function tau = trigcardinal(x,N) ws = warning('off','MATLAB:divideByZero'); % Form is different for even and odd N. if rem(N,2)==1 % odd tau = sin(N*pi*x/2) ./ (N*sin(pi*x/2)); else % even tau = sin(N*pi*x/2) ./ (N*tan(pi*x/2)); end warning(ws) tau(x==0) = 1; % fix value at x=0 Relation with the discrete Fourier transform. The special case in which the points "x""n" are equally spaced is especially important. In this case, we have formula_29 The transformation that maps the data points "y""n" to the coefficients "a""k", "b""k" is obtained from the discrete Fourier transform (DFT) of order N. formula_30 formula_31 The case of the cosine-only interpolation for equally spaced points, corresponding to a trigonometric interpolation when the points have even symmetry, was treated by Alexis Clairaut in 1754. In this case the solution is equivalent to a discrete cosine transform. The sine-only expansion for equally spaced points, corresponding to odd symmetry, was solved by Joseph Louis Lagrange in 1762, for which the solution is a discrete sine transform. 
The full cosine and sine interpolating polynomial, which gives rise to the DFT, was solved by Carl Friedrich Gauss in unpublished work around 1805, at which point he also derived a fast Fourier transform algorithm to evaluate it rapidly. Clairaut, Lagrange, and Gauss were all concerned with studying the problem of inferring the orbit of planets, asteroids, etc., from a finite set of observation points; since the orbits are periodic, a trigonometric interpolation was a natural choice. See also Heideman "et al." (1984). Applications in numerical computing. Chebfun, a fully integrated software system written in MATLAB for computing with functions, uses trigonometric interpolation and Fourier expansions for computing with periodic functions. Many algorithms related to trigonometric interpolation are readily available in Chebfun; several examples are available here.
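For equally spaced nodes, the relation to the discrete Fourier transform described above gives a very direct implementation. The following sketch (Python with NumPy, restricted to an odd number of points; function names are illustrative) obtains the coefficients c_k of the interpolant p(x) = Σ c_k e^(ikx) from a single FFT and checks that p reproduces the data at the nodes:

import numpy as np

def trig_interp_coeffs(y):
    """Coefficients c_k (k = -K..K) of the trigonometric interpolant through the
    samples y_n given at the equally spaced nodes x_n = 2*pi*n/N, with N = 2K+1 odd."""
    N = len(y)
    if N % 2 == 0:
        raise ValueError("this sketch assumes an odd number of points")
    Y = np.fft.fft(y)               # Y_k = sum_n y_n exp(-2i*pi*n*k/N), as in the text
    c = np.fft.fftshift(Y) / N      # reorder to k = -K, ..., K and divide by N
    k = np.arange(-(N // 2), N // 2 + 1)
    return k, c

def trig_interp_eval(x, k, c):
    """Evaluate p(x) = sum_k c_k exp(i*k*x); for real data the imaginary part is
    only rounding error, so it is discarded."""
    return np.real(np.exp(1j * np.outer(np.atleast_1d(x), k)) @ c)

# Interpolate a smooth 2*pi-periodic function through N = 7 equally spaced nodes.
N = 7
nodes = 2 * np.pi * np.arange(N) / N
data = np.exp(np.sin(nodes))
k, c = trig_interp_coeffs(data)

print(np.max(np.abs(trig_interp_eval(nodes, k, c) - data)))  # ~1e-15: exact at the nodes
xx = np.linspace(0, 2 * np.pi, 400)
print(np.max(np.abs(trig_interp_eval(xx, k, c) - np.exp(np.sin(xx)))))  # small in between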
[ { "math_id": 0, "text": " p(x_n) = y_n, \\quad n=0, \\ldots, N-1. \\, " }, { "math_id": 1, "text": " 0 \\leq x_0 < x_1 < x_2 < \\ldots < x_{N-1} < 2 \\pi. \\, " }, { "math_id": 2, "text": " p(x) = \\sum_{k=-K}^K c_k e^{ikx}, \\, " }, { "math_id": 3, "text": " q(z) = \\sum_{k=-K}^K c_k z^{k}, \\, " }, { "math_id": 4, "text": " q(e^{ix}) \\triangleq p(x). \\, " }, { "math_id": 5, "text": " t_k(x) = e^{-iKx+iKx_k} \\prod_{\\begin{align}m&=0 \\\\[-4mu] m &\\ne k\\end{align}}^{2K} \\frac{e^{ix}-e^{ix_m}}{e^{ix_k}-e^{ix_m}}." }, { "math_id": 6, "text": "e^{-iKx+iKx_k}" }, { "math_id": 7, "text": "e^{ix}" }, { "math_id": 8, "text": "t_k(x_k)=1" }, { "math_id": 9, "text": "t_k(x)" }, { "math_id": 10, "text": "\\alpha_k" }, { "math_id": 11, "text": "\\cos(Kx)" }, { "math_id": 12, "text": "\\sin(Kx)" }, { "math_id": 13, "text": "\\varphi_K" }, { "math_id": 14, "text": " t_k(x) = \\frac{\\cos\\tfrac12\\Biggl(2Kx-\\alpha_k+\\displaystyle\\sum\\limits_{m=0,\\,m \\ne k}^{2K-1} x_m\\Biggr)+\\sum\\limits_{m=-(K-1)}^{K-1}c_k e^{imx}}{2^N\\sin\\tfrac12(x_k-\\alpha_k)\\displaystyle\\prod\\limits_{m=0,\\,m \\ne k}^{2K-1}\\sin\\tfrac12(x_k-x_m)}." }, { "math_id": 15, "text": "\\alpha_k=\\sum_{\\begin{align}m&=0 \\\\[-4mu] m &\\ne k\\end{align}}^{2K-1} x_m - 2 \\varphi_K" }, { "math_id": 16, "text": " t_k(x) = \\frac{\\sin\\tfrac12(x-\\alpha_k)}{\\sin\\tfrac12(x_k-\\alpha_k)}\\prod_{\\begin{align}m&=0 \\\\[-4mu] m &\\ne k\\end{align}}^{2K-1} \\frac{\\sin\\tfrac12(x-x_m)}{\\sin\\tfrac12(x_k-x_m)}." }, { "math_id": 17, "text": "x_m" }, { "math_id": 18, "text": "x_m=\\frac{2\\pi m}{N}," }, { "math_id": 19, "text": "D(x,N)=\\frac{1}{N} +\\frac{2}{N} \\sum_{k=1}^{(N-1)/2}\\cos(kx) = \\frac{\\sin\\tfrac12 Nx}{N\\sin\\tfrac12 x}," }, { "math_id": 20, "text": "N>0" }, { "math_id": 21, "text": "D(x,N)" }, { "math_id": 22, "text": "D(x_m,N)=\\begin{cases}0\\text{ for } m\\neq0 \\\\1\\text{ for } m=0\\end{cases}." }, { "math_id": 23, "text": "\\begin{align}\nt_k(x) &= D(x-x_k,N)=\\begin{cases}\n\\dfrac{\\sin\\tfrac12 N(x-x_k)}{N\\sin\\tfrac12 (x-x_k)} \\text{ for } x\\neq x_k\\\\[10mu]\n\\lim\\limits_{x\\to 0} \\dfrac{\\sin\\tfrac12 Nx}{N\\sin\\tfrac12 x}=1 \\text{ for } x= x_k\n\\end{cases}\\\\&= \\frac{\\mathrm{sinc}\\,\\tfrac12 N(x-x_k)}{\\mathrm{sinc}\\,\\tfrac12 (x-x_k)}.\n\\end{align}" }, { "math_id": 24, "text": " \\mathrm{sinc}\\,x=\\frac{\\sin x}{x}." }, { "math_id": 25, "text": "N" }, { "math_id": 26, "text": "D(x,N)=\\frac{1}{N} +\\frac{1}{N}\\cos \\tfrac12 Nx + \\frac{2}{N} \\sum_{k=1}^{(N-1)/2}\\cos(kx) = \\frac{\\sin\\tfrac12 Nx}{N\\tan\\tfrac12 x}." }, { "math_id": 27, "text": " \\sin \\tfrac12 Nx " }, { "math_id": 28, "text": "\\begin{align}\nt_k(x) &= D(x-x_k,N)=\\begin{cases}\n\\dfrac{\\sin\\tfrac12 N(x-x_k)}{N\\tan\\tfrac12 (x-x_k)}\\text{ for } x\\neq x_k\\\\[10mu]\n\\lim\\limits_{x\\to 0} \\dfrac{\\sin\\tfrac12 Nx}{N\\tan\\tfrac12 x}=1 \\text{ for } x= x_k.\n\\end{cases}\\\\&= \\frac{\\mathrm{sinc}\\,\\tfrac12 N(x-x_k)}{ \\mathrm{sinc}\\,\\tfrac12 (x-x_k)}\\cos\\tfrac12 (x-x_k)\n\\end{align}" }, { "math_id": 29, "text": " x_n = 2 \\pi \\frac{n}{N}, \\qquad 0 \\leq n < N." }, { "math_id": 30, "text": " Y_k = \\sum_{n=0}^{N-1} y_n \\ e^{-i 2 \\pi nk/N} \\, " }, { "math_id": 31, "text": " y_n = p(x_n) = \\frac{1}{N} \\sum_{k=0}^{N-1} Y_k \\ e^{i 2 \\pi nk/N} \\, " } ]
https://en.wikipedia.org/wiki?curid=879681
8798339
Supervaluationism
Semantics for dealing with irreferential singular terms and vagueness In philosophical logic, supervaluationism is a semantics for dealing with irreferential singular terms and vagueness. It allows one to apply the tautologies of propositional logic in cases where truth values are undefined. According to supervaluationism, a proposition can have a definite truth value even when its components do not. The proposition "Pegasus likes licorice", for example, is often interpreted as having no truth-value given the assumption that the name "Pegasus" fails to refer. If indeed reference fails for "Pegasus", then it seems as though there is nothing that can justify an assignment of a truth-value to any apparent assertion in which the term "Pegasus" occurs. The statement "Pegasus likes licorice or Pegasus doesn't like licorice", however, is an instance of the valid schema formula_0 ("formula_1 or not-formula_1"), so, according to supervaluationism, it should be true regardless of whether or not its disjuncts have a truth value; that is, it should be true in all interpretations. If, in general, something is true in all precisifications, supervaluationism describes it as "supertrue", while something false in all precisifications is described as "superfalse". Supervaluations were first formalized by Bas van Fraassen. Example abstraction. Let "v" be a classical valuation defined on every atomic sentence of the language "L" and let At("x") be the number of distinct atomic sentences in a formula "x". There are then at most 2^At("x") classical valuations defined on every sentence "x". A supervaluation "V" is a function from sentences to truth values such that "x" is supertrue (i.e. "V"("x")=True) if and only if "v"("x")=True for every "v". Likewise for superfalse. "V(x)" is undefined when there are two valuations "v" and "v"* such that "v(x)"=True and "v"*"(x)"=False. For example, let "Lp" be the formal translation of "Pegasus likes licorice". There are then exactly two classical valuations "v" and "v"* on "Lp", namely "v(Lp)"=True and "v"*"(Lp)"=False. So "Lp" is neither supertrue nor superfalse.
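The "example abstraction" above can be made concrete with a few lines of code. The sketch below (Python; the formula encoding is ad hoc and not from any standard library) enumerates all classical valuations of the atomic sentences occurring in a formula and classifies the formula as supertrue, superfalse, or neither:

from itertools import product

# A formula is represented as a nested tuple: an atom name (a string), or
# ('not', f), ('or', f, g), ('and', f, g).
def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(sub) for sub in f[1:]))

def evaluate(f, v):
    """Classical truth value of f under a valuation v (a dict atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == 'not':
        return not evaluate(f[1], v)
    if op == 'or':
        return evaluate(f[1], v) or evaluate(f[2], v)
    if op == 'and':
        return evaluate(f[1], v) and evaluate(f[2], v)
    raise ValueError(op)

def supervaluation(f):
    """'supertrue' if f is true under every classical valuation of its atoms,
    'superfalse' if false under every one, otherwise None (undefined)."""
    names = sorted(atoms(f))
    values = [evaluate(f, dict(zip(names, bits)))
              for bits in product([False, True], repeat=len(names))]
    if all(values):
        return 'supertrue'
    if not any(values):
        return 'superfalse'
    return None

Lp = 'Lp'                                         # "Pegasus likes licorice"
print(supervaluation(('or', Lp, ('not', Lp))))    # supertrue: an instance of p or not-p
print(supervaluation(Lp))                         # None: neither supertrue nor superfalse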
[ { "math_id": 0, "text": "p \\vee \\neg p" }, { "math_id": 1, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=8798339
8799355
Complementary sequences
Pairs of sequences "For complementary sequences in biology, see complementarity (molecular biology). For integer sequences with complementary sets of members see Lambek–Moser theorem." In applied mathematics, complementary sequences (CS) are pairs of sequences with the useful property that their out-of-phase aperiodic autocorrelation coefficients sum to zero. Binary complementary sequences were first introduced by Marcel J. E. Golay in 1949. In 1961–1962 Golay gave several methods for constructing sequences of length 2"N" and gave examples of complementary sequences of lengths 10 and 26. In 1974 R. J. Turyn gave a method for constructing sequences of length "mn" from sequences of lengths "m" and "n" which allows the construction of sequences of any length of the form 2"N"10"K"26"M". Later the theory of complementary sequences was generalized by other authors to polyphase complementary sequences, multilevel complementary sequences, and arbitrary complex complementary sequences. Complementary sets have also been considered; these can contain more than two sequences. Definition. Let ("a"0, "a"1, ..., "a""N" − 1) and ("b"0, "b"1, ..., "b""N" − 1) be a pair of bipolar sequences, meaning that "a"("k") and "b"("k") have values +1 or −1. Let the aperiodic autocorrelation function of the sequence x be defined by formula_0 Then the pair of sequences "a" and "b" is complementary if: formula_1 for "k" = 0, and formula_2 for "k" = 1, ..., "N" − 1. Or using Kronecker delta we can write: formula_3 So we can say that the sum of autocorrelation functions of complementary sequences is a delta function, which is an ideal autocorrelation for many applications like radar pulse compression and spread spectrum telecommunications. formula_4 where "C""S" is a constant. "S""a" and "S""b" are defined as a squared magnitude of the Fourier transform of the sequences. The Fourier transform can be a direct DFT of the sequences, it can be a DFT of zero padded sequences or it can be a continuous Fourier transform of the sequences which is equivalent to the Z transform for "Z" = "e""j"ω. formula_5 also formula_6 Golay pair. A complementary pair "a", "b" may be encoded as polynomials "A"("z") = "a"(0) + "a"(1)"z" + ... + "a"("N" − 1)"z""N"−1 and similarly for "B"("z"). The complementarity property of the sequences is equivalent to the condition formula_7 for all "z" on the unit circle, that is, |"z"| = 1. If so, "A" and "B" form a Golay pair of polynomials. Examples include the Shapiro polynomials, which give rise to complementary sequences of length a power of two. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
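The defining property, namely that the aperiodic autocorrelations of the two sequences sum to 2N at zero shift and to 0 elsewhere, is easy to verify numerically. The sketch below (Python; the two pairs shown are standard binary Golay pairs used here purely as illustrations) implements the autocorrelation formula_0 above and checks the property for lengths 2 and 4:

def aperiodic_autocorrelation(x, k):
    """R_x(k) = sum_{j=0}^{N-k-1} x_j * x_{j+k}, as defined in the article."""
    return sum(x[j] * x[j + k] for j in range(len(x) - k))

def is_complementary(a, b):
    """Check that R_a(k) + R_b(k) equals 2N for k = 0 and 0 for k = 1, ..., N-1."""
    N = len(a)
    sums = [aperiodic_autocorrelation(a, k) + aperiodic_autocorrelation(b, k)
            for k in range(N)]
    return sums[0] == 2 * N and all(s == 0 for s in sums[1:]), sums

# Two standard binary (Golay) pairs, written as +1/-1 sequences.
pairs = [
    ([1, 1], [1, -1]),                       # length 2
    ([1, 1, 1, -1], [1, 1, -1, 1]),          # length 4
]
for a, b in pairs:
    ok, sums = is_complementary(a, b)
    print(ok, sums)   # True, [2N, 0, ..., 0]: a delta function, as stated above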
[ { "math_id": 0, "text": "R_x(k)=\\sum_{j=0}^{N-k-1} x_jx_{j+k}.\\," }, { "math_id": 1, "text": "R_a(k) + R_b(k) = 2N,\\, " }, { "math_id": 2, "text": "R_a(k) + R_b(k) = 0,\\, " }, { "math_id": 3, "text": "R_a(k) + R_b(k) = 2N\\delta(k),\\, " }, { "math_id": 4, "text": "S_a + S_b = C_S," }, { "math_id": 5, "text": "S_a = C_S - S_b < C_S," }, { "math_id": 6, "text": "S_b < C_S." }, { "math_id": 7, "text": "\\vert A(z) \\vert^2 + \\vert B(z) \\vert^2 = 2N \\, " } ]
https://en.wikipedia.org/wiki?curid=8799355
880145
First class constraint
In physics, a first class constraint is a dynamical quantity in a constrained Hamiltonian system whose Poisson bracket with all the other constraints vanishes on the constraint surface in phase space (the surface implicitly defined by the simultaneous vanishing of all the constraints). To calculate the first class constraint, one assumes that there are no second class constraints, or that they have been calculated previously, and their Dirac brackets generated. First and second class constraints were introduced by Dirac (1950, p.136, 1964, p.17) as a way of quantizing mechanical systems such as gauge theories where the symplectic form is degenerate. The terminology of first and second class constraints is confusingly similar to that of primary and secondary constraints, reflecting the manner in which these are generated. These divisions are independent: both first and second class constraints can be either primary or secondary, so this gives altogether four different classes of constraints. Poisson brackets. Consider a Poisson manifold "M" with a smooth Hamiltonian over it (for field theories, "M" would be infinite-dimensional). Suppose we have some constraints formula_0 for "n" smooth functions formula_1 These will only be defined chartwise in general. Suppose that everywhere on the constrained set, the "n" derivatives of the "n" functions are all linearly independent and also that the Poisson brackets formula_2 and formula_3 all vanish on the constrained subspace. This means we can write formula_4 for some smooth functions formula_5 −−there is a theorem showing this; and formula_6 for some smooth functions formula_7. This can be done globally, using a partition of unity. Then, we say we have an irreducible first-class constraint ("irreducible" here is in a different sense from that used in representation theory). Geometric theory. For a more elegant way, suppose given a vector bundle over formula_8, with formula_9-dimensional fiber formula_10. Equip this vector bundle with a connection. Suppose too we have a smooth section f of this bundle. Then the covariant derivative of f with respect to the connection is a smooth linear map formula_11 from the tangent bundle formula_12 to formula_10, which preserves the base point. Assume this linear map is right invertible (i.e. there exists a linear map formula_13 such that formula_14 is the identity map) for all the fibers at the zeros of f. Then, according to the implicit function theorem, the subspace of zeros of f is a submanifold. The ordinary Poisson bracket is only defined over formula_15, the space of smooth functions over "M". However, using the connection, we can extend it to the space of smooth sections of f if we work with the algebra bundle with the graded algebra of "V"-tensors as fibers. Assume also that under this Poisson bracket, formula_16 (note that it's not true that formula_17 in general for this "extended Poisson bracket" anymore) and formula_18 on the submanifold of zeros of f (If these brackets also happen to be zero everywhere, then we say the constraints close off shell). It turns out the right invertibility condition and the commutativity of flows conditions are "independent" of the choice of connection. So, we can drop the connection provided we are working solely with the restricted subspace. Intuitive meaning. What does it all mean intuitively? 
It means the Hamiltonian and constraint flows all commute with each other on the constrained subspace; or alternatively, that if we start on a point on the constrained subspace, then the Hamiltonian and constraint flows all bring the point to another point on the constrained subspace. Since we wish to restrict ourselves to the constrained subspace only, this suggests that the Hamiltonian, or any other physical observable, should only be defined on that subspace. Equivalently, we can look at the equivalence class of smooth functions over the symplectic manifold, which agree on the constrained subspace (the quotient algebra by the ideal generated by the f 's, in other words). The catch is, the Hamiltonian flows on the constrained subspace depend on the gradient of the Hamiltonian there, not its value. But there's an easy way out of this. Look at the orbits of the constrained subspace under the action of the symplectic flows generated by the f 's. This gives a local foliation of the subspace because it satisfies integrability conditions (Frobenius theorem). It turns out if we start with two different points on a same orbit on the constrained subspace and evolve both of them under two different Hamiltonians, respectively, which agree on the constrained subspace, then the time evolution of both points under their respective Hamiltonian flows will always lie in the same orbit at equal times. It also turns out if we have two smooth functions "A"1 and "B"1, which are constant over orbits at least on the constrained subspace (i.e. physical observables) (i.e. {A1,f}={B1,f}=0 over the constrained subspace)and another two A2 and B2, which are also constant over orbits such that A1 and B1 agrees with A2 and B2 respectively over the restrained subspace, then their Poisson brackets {A1, B1} and {A2, B2} are also constant over orbits and agree over the constrained subspace. In general, one cannot rule out "ergodic" flows (which basically means that an orbit is dense in some open set), or "subergodic" flows (which an orbit dense in some submanifold of dimension greater than the orbit's dimension). We can't have self-intersecting orbits. For most "practical" applications of first-class constraints, we do not see such complications: the quotient space of the restricted subspace by the f-flows (in other words, the orbit space) is well behaved enough to act as a differentiable manifold, which can be turned into a symplectic manifold by projecting the symplectic form of M onto it (this can be shown to be well defined). In light of the observation about physical observables mentioned earlier, we can work with this more "physical" smaller symplectic manifold, but with 2n fewer dimensions. In general, the quotient space is a bit difficult to work with when doing concrete calculations (not to mention nonlocal when working with diffeomorphism constraints), so what is usually done instead is something similar. Note that the restricted submanifold is a bundle (but not a fiber bundle in general) over the quotient manifold. So, instead of working with the quotient manifold, we can work with a section of the bundle instead. This is called gauge fixing. The "major" problem is this bundle might not have a global section in general. This is where the "problem" of global anomalies comes in, for example. A global anomaly is different from the Gribov ambiguity, which is when a gauge fixing doesn't work to fix a gauge uniquely, in a global anomaly, there is no consistent definition of the gauge field. 
A global anomaly is a barrier to defining a quantum gauge theory discovered by Witten in 1980. What have been described are irreducible first-class constraints. Another complication is that Δf might not be right invertible on subspaces of the restricted submanifold of codimension 1 or greater (which violates the stronger assumption stated earlier in this article). This happens, for example in the cotetrad formulation of general relativity, at the subspace of configurations where the cotetrad field and the connection form happen to be zero over some open subset of space. Here, the constraints are the diffeomorphism constraints. One way to get around this is this: For reducible constraints, we relax the condition on the right invertibility of Δ"f" into this one: Any smooth function that vanishes at the zeros of "f" is the fiberwise contraction of "f" with (a non-unique) smooth section of a formula_19-vector bundle where formula_19 is the dual vector space to the constraint vector space "V". This is called the "regularity condition". Constrained Hamiltonian dynamics from a Lagrangian gauge theory. First of all, we will assume the action is the integral of a local Lagrangian that only depends up to the first derivative of the fields. The analysis of more general cases, while possible is more complicated. When going over to the Hamiltonian formalism, we find there are constraints. Recall that in the action formalism, there are on shell and off shell configurations. The constraints that hold off shell are called primary constraints while those that only hold on shell are called secondary constraints. Examples. Consider the dynamics of a single point particle of mass m with no internal degrees of freedom moving in a pseudo-Riemannian spacetime manifold S with metric g. Assume also that the parameter τ describing the trajectory of the particle is arbitrary (i.e. we insist upon reparametrization invariance). Then, its symplectic space is the cotangent bundle "T*S" with the canonical symplectic form ω. If we coordinatize "T" * "S" by its position x in the base manifold S and its position within the cotangent space p, then we have a constraint "f" = "m"2 −g("x")−1(p,p) = 0 . The Hamiltonian H is, surprisingly enough, H = 0. In light of the observation that the Hamiltonian is only defined up to the equivalence class of smooth functions agreeing on the constrained subspace, we can use a new Hamiltonian H '= f instead. Then, we have the interesting case where the Hamiltonian is the same as a constraint! See Hamiltonian constraint for more details. Consider now the case of a Yang–Mills theory for a real simple Lie algebra L (with a negative definite Killing form η) minimally coupled to a real scalar field σ, which transforms as an orthogonal representation ρ with the underlying vector space V under L in ( d − 1) + 1 Minkowski spacetime. For l in L, we write "ρ(l)[σ]" as "l[σ]" for simplicity. Let A be the L-valued connection form of the theory. Note that the A here differs from the A used by physicists by a factor of i and g. This agrees with the mathematician's convention. The action S is given by formula_20 where g is the Minkowski metric, F is the curvature form formula_21 (no is or gs!) where the second term is a formal shorthand for pretending the Lie bracket is a commutator, D is the covariant derivative Dσ = dσ − A[σ] and α is the orthogonal form for ρ. What is the Hamiltonian version of this model? Well, first, we have to split A noncovariantly into a time component φ and a spatial part "A". 
Then, the resulting symplectic space has the conjugate variables σ, "πσ" (taking values in the underlying vector space of formula_22, the dual rep of ρ), "A", "π""A", "φ" and "πφ". For each spatial point, we have the constraints, "πφ"=0 and the Gaussian constraint formula_23 where since ρ is an intertwiner formula_24, ρ ' is the dualized intertwiner formula_25 ( L is self-dual via η). The Hamiltonian, formula_26 The last two terms are a linear combination of the Gaussian constraints and we have a whole family of (gauge equivalent) Hamiltonians parametrized by f. In fact, since the last three terms vanish for the constrained states, we may drop them. Second class constraints. In a constrained Hamiltonian system, a dynamical quantity is second class if its Poisson bracket with at least one constraint is nonvanishing. A constraint that has a nonzero Poisson bracket with at least one other constraint, then, is a second class constraint. See Dirac brackets for diverse illustrations. An example: a particle confined to a sphere. Before going on to the general theory, consider a specific example step by step to motivate the general analysis. Start with the action describing a Newtonian particle of mass m constrained to a spherical surface of radius R within a uniform gravitational field g. When one works in Lagrangian mechanics, there are several ways to implement a constraint: one can switch to generalized coordinates that manifestly solve the constraint, or one can use a Lagrange multiplier while retaining the redundant coordinates so constrained. In this case, the particle is constrained to a sphere, therefore the natural solution would be to use angular coordinates to describe the position of the particle instead of Cartesian and solve (automatically eliminate) the constraint in that way (the first choice). For pedagogical reasons, instead, consider the problem in (redundant) Cartesian coordinates, with a Lagrange multiplier term enforcing the constraint. The action is given by formula_27 where the last term is the Lagrange multiplier term enforcing the constraint. Of course, as indicated, we could have just used different, non-redundant, spherical coordinates and written it as formula_28 instead, without extra constraints; but we are considering the former coordinatization to illustrate constraints. The conjugate momenta are given by formula_29, formula_30, formula_31, formula_32. Note that we can't determine •"λ" from the momenta. The Hamiltonian is given by formula_33. We cannot eliminate •"λ" at this stage yet. We are here treating •"λ" as a shorthand for a function of the symplectic space which we have yet to determine and "not" as an independent variable. For notational consistency, define "u"1 = •"λ" from now on. The above Hamiltonian with the "p""λ" term is the "naive Hamiltonian". Note that since, on-shell, the constraint must be satisfied, one cannot distinguish, on-shell, between the naive Hamiltonian and the above Hamiltonian with the undetermined coefficient "u"1. We have the primary constraint "pλ" = 0. We require, on the grounds of consistency, that the Poisson bracket of all the constraints with the Hamiltonian vanish at the constrained subspace. In other words, the constraints must not evolve in time if they are going to be identically zero along the equations of motion.
From this consistency condition, we immediately get the secondary constraint formula_34 This constraint should be added into the Hamiltonian with an undetermined (not necessarily constant) coefficient u2, enlarging the Hamiltonian to formula_35 Similarly, from this secondary constraint, we find the tertiary constraint formula_36 Again, one should add this constraint into the Hamiltonian, since, on-shell, no one can tell the difference. Therefore, so far, the Hamiltonian looks like formula_37 where u1, u2, and u3 are still completely undetermined. Note that, frequently, all constraints that are found from consistency conditions are referred to as "secondary constraints" and secondary, tertiary, quaternary, etc., constraints are not distinguished. We keep turning the crank, demanding this new constraint have vanishing Poisson bracket formula_38 We might despair and think that there is no end to this, but because one of the new Lagrange multipliers has shown up, this is not a new constraint, but a condition that fixes the Lagrange multiplier: formula_39 Plugging this into our Hamiltonian gives us (after a little algebra) formula_40 Now that there are new terms in the Hamiltonian, one should go back and check the consistency conditions for the primary and secondary constraints. The secondary constraint's consistency condition gives formula_41 Again, this is "not" a new constraint; it only determines that formula_42 At this point there are "no more constraints or consistency conditions to check"! Putting it all together, formula_43. When finding the equations of motion, one should use the above Hamiltonian, and as long as one is careful to never use constraints before taking derivatives in the Poisson bracket then one gets the correct equations of motion. That is, the equations of motion are given by formula_44 Before analyzing the Hamiltonian, consider the three constraints, formula_45 Note the nontrivial Poisson bracket structure of the constraints. In particular, formula_46 The above Poisson bracket does not just fail to vanish off-shell, which might be anticipated, but "even on-shell it is nonzero". Therefore, "φ"2 and "φ"3 are second class constraints while "φ"1 is a first class constraint. Note that these constraints satisfy the regularity condition. Here, we have a symplectic space where the Poisson bracket does not have "nice properties" on the constrained subspace. However, Dirac noticed that we can turn the underlying differential manifold of the symplectic space into a Poisson manifold using his eponymous modified bracket, called the Dirac bracket, such that this "Dirac bracket of any (smooth) function with any of the second class constraints always vanishes". Effectively, these brackets (illustrated for this spherical surface in the Dirac bracket article) project the system back onto the constraints surface. If one then wished to canonically quantize this system, then one need promote the canonical Dirac brackets, "not" the canonical Poisson brackets to commutation relations. Examination of the above Hamiltonian shows a number of interesting things happening. One thing to note is that, on-shell when the constraints are satisfied, the extended Hamiltonian is identical to the naive Hamiltonian, as required. Also, note that λ dropped out of the extended Hamiltonian. Since "φ"1 is a first class primary constraint, it should be interpreted as a generator of a gauge transformation. 
The gauge freedom is the freedom to choose λ, which has ceased to have any effect on the particle's dynamics. Therefore, that λ dropped out of the Hamiltonian, that u1 is undetermined, and that "φ"1 = "pλ" is first class, are all closely interrelated. Note that it would be more natural not to start with a Lagrangian with a Lagrange multiplier, but instead take "r"² − "R"² as a primary constraint and proceed through the formalism: The result would be the elimination of the extraneous λ dynamical quantity. However, the example is more edifying in its current form. Example: Proca action. Another example we will use is the Proca action. The fields are formula_47 and the action is formula_48 where formula_49 and formula_50. formula_51 and formula_52 are canonical variables. The second class constraints are formula_53 and formula_54. The Hamiltonian is given by formula_55.
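Returning to the particle confined to a sphere, the constraint algebra quoted above (vanishing brackets of "pλ" with the other constraints, but {φ2, φ3} = 2r² ≠ 0) can be verified symbolically. A minimal SymPy sketch, assuming the canonical Poisson bracket on the variables (x, y, z, λ; p_x, p_y, p_z, p_λ):

import sympy as sp

# Canonical coordinates and momenta for the sphere-constrained particle above.
x, y, z, lam = sp.symbols('x y z lambda')
px, py, pz, plam = sp.symbols('p_x p_y p_z p_lambda')
R = sp.symbols('R', positive=True)
qs, ps = [x, y, z, lam], [px, py, pz, plam]

def poisson_bracket(f, g):
    """Canonical bracket {f, g} = sum_i df/dq_i dg/dp_i - df/dp_i dg/dq_i."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(qs, ps))

phi1 = plam                           # primary constraint   p_lambda = 0
phi2 = x**2 + y**2 + z**2 - R**2      # secondary constraint r^2 - R^2 = 0
phi3 = x*px + y*py + z*pz             # tertiary constraint  p . r = 0

print(sp.simplify(poisson_bracket(phi1, phi2)))   # 0: phi1 commutes with phi2
print(sp.simplify(poisson_bracket(phi1, phi3)))   # 0: phi1 is first class
print(sp.simplify(poisson_bracket(phi2, phi3)))   # 2*x**2 + 2*y**2 + 2*z**2, i.e. 2 r^2,
                                                  # nonzero even on-shell: second class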
[ { "math_id": 0, "text": " f_i(x)=0, " }, { "math_id": 1, "text": "\\{ f_i \\}_{i= 1}^n" }, { "math_id": 2, "text": "\\{f_i,f_j\\}" }, { "math_id": 3, "text": "\\{f_i,H\\}" }, { "math_id": 4, "text": "\\{f_i,f_j\\}=\\sum_k c_{ij}^k f_k" }, { "math_id": 5, "text": "c_{ij}^k" }, { "math_id": 6, "text": "\\{f_i,H\\}=\\sum_j v_i^j f_j" }, { "math_id": 7, "text": "v_i^j" }, { "math_id": 8, "text": "\\mathcal M" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "V" }, { "math_id": 11, "text": "\\nabla f" }, { "math_id": 12, "text": "T\\mathcal M" }, { "math_id": 13, "text": "g" }, { "math_id": 14, "text": "(\\Delta f)g" }, { "math_id": 15, "text": "C^{\\infty}(M)" }, { "math_id": 16, "text": "\\{f,f\\}=0" }, { "math_id": 17, "text": "\\{g,g\\}=0" }, { "math_id": 18, "text": "\\{f,H\\}=0" }, { "math_id": 19, "text": "\\bar{V}" }, { "math_id": 20, "text": "S[\\mathbf{A},\\sigma]=\\int d^dx \\frac{1}{4g^2}\\eta((\\mathbf{g}^{-1}\\otimes \\mathbf{g}^{-1})(\\mathbf{F},\\mathbf{F}))+\\frac{1}{2}\\alpha(\\mathbf{g}^{-1}(D\\sigma,D\\sigma))" }, { "math_id": 21, "text": "d\\mathbf{A}+\\mathbf{A}\\wedge\\mathbf{A}" }, { "math_id": 22, "text": "\\bar{\\rho}" }, { "math_id": 23, "text": "\\vec{D}\\cdot\\vec{\\pi}_A-\\rho'(\\pi_\\sigma,\\sigma)=0" }, { "math_id": 24, "text": "\\rho:L\\otimes V\\rightarrow V" }, { "math_id": 25, "text": "\\rho':\\bar{V}\\otimes V\\rightarrow L" }, { "math_id": 26, "text": "H_f=\\int d^{d-1}x \\frac{1}{2}\\alpha^{-1}(\\pi_\\sigma,\\pi_\\sigma)+\\frac{1}{2}\\alpha(\\vec{D}\\sigma\\cdot\\vec{D}\\sigma)-\\frac{g^2}{2}\\eta(\\vec{\\pi}_A,\\vec{\\pi}_A)-\\frac{1}{2g^2}\\eta(\\mathbf{B}\\cdot \\mathbf{B})-\\eta(\\pi_\\phi,f)-<\\pi_\\sigma,\\phi[\\sigma]>-\\eta(\\phi,\\vec{D}\\cdot\\vec{\\pi}_A)." }, { "math_id": 27, "text": "S=\\int dt L=\\int dt \\left[\\frac{m}{2}(\\dot{x}^2+\\dot{y}^2+\\dot{z}^2)-mgz+\\frac{\\lambda}{2}(x^2+y^2+z^2-R^2)\\right]" }, { "math_id": 28, "text": "S=\\int dt \\left[\\frac{mR^2}{2}(\\dot{\\theta}^2+\\sin^2(\\theta)\\dot{\\phi}^2)+mgR\\cos(\\theta)\\right]" }, { "math_id": 29, "text": "p_x=m\\dot{x}" }, { "math_id": 30, "text": "p_y=m\\dot{y}" }, { "math_id": 31, "text": "p_z=m\\dot{z}" }, { "math_id": 32, "text": "p_\\lambda=0" }, { "math_id": 33, "text": "H= \\vec{p}\\cdot\\dot{\\vec{r}}+p_\\lambda \\dot{\\lambda}-L=\\frac{p^2}{2m}+p_\\lambda \\dot{\\lambda}+mgz-\\frac{\\lambda}{2}(r^2-R^2)" }, { "math_id": 34, "text": "\\begin{align}\n0&=\\{H,p_\\lambda\\}_\\text{PB}\\\\\n&=\\sum_{i}\\frac{\\partial H}{\\partial q_i}\\frac{\\partial p_\\lambda}{\\partial p_i}-\\frac{\\partial H}{\\partial p_i}\\frac{\\partial p_\\lambda}{\\partial q_i}\\\\\n&=\\frac{\\partial H}{\\partial \\lambda}\\\\\n&=\\frac{1}{2}(r^2-R^2)\\\\\n&\\Downarrow\\\\\n0&=r^2-R^2 \n\\end{align}" }, { "math_id": 35, "text": "\nH = \\frac{p^2}{2m} + mgz - \\frac{\\lambda}{2}(r^2-R^2) + u_1 p_\\lambda + u_2 (r^2-R^2) ~.\n" }, { "math_id": 36, "text": "\\begin{align}\n0&=\\{H,r^2-R^2\\}_{PB}\\\\\n&=\\{H,x^2\\}_{PB}+\\{H,y^2\\}_{PB}+\\{H,z^2\\}_{PB}\\\\\n&=\\frac{\\partial H}{\\partial p_x}2x+\\frac{\\partial H}{\\partial p_y}2y+\\frac{\\partial H}{\\partial p_z}2z\\\\\n&=\\frac{2}{m}(p_xx+p_yy+p_zz)\\\\\n&\\Downarrow\\\\\n0&=\\vec p\\cdot\\vec r \n\\end{align}" }, { "math_id": 37, "text": "\nH = \\frac{p^2}{2m} + mgz - \\frac{\\lambda}{2}(r^2-R^2) + u_1 p_\\lambda + u_2 (r^2-R^2) + u_3 \\vec{p}\\cdot\\vec{r}~,\n" }, { "math_id": 38, "text": "\n0=\\{\\vec{p}\\cdot\\vec{r},\\, H\\}_{PB} = \\frac{p^2}{m} - mgz+ \\lambda r^2 -2 u_2 r^2.\n" }, { "math_id": 39, "text": "\nu_2 = 
\\frac{\\lambda}{2} + \\frac{1}{r^2}\\left(\\frac{p^2}{2m}-\\frac{1}{2}mgz \\right).\n" }, { "math_id": 40, "text": "\nH = \\frac{p^2}{2m}(2-\\frac{R^2}{r^2}) + \\frac{1}{2}mgz(1+\\frac{R^2}{r^2})+u_1p_\\lambda+u_3\\vec p \\cdot\\vec r\n" }, { "math_id": 41, "text": "\n\\frac{2}{m}\\vec{r}\\cdot\\vec{p} + 2 u_3 r^2 = 0.\n" }, { "math_id": 42, "text": "\nu_3 = -\\frac{\\vec{r}\\cdot\\vec{p}}{m r^2}~.\n" }, { "math_id": 43, "text": "H=\\left(2-\\frac{R^2}{r^2}\\right)\\frac{p^2}{2m} + \\frac{1}{2}\\left(1+\\frac{R^2}{r^2}\\right)mgz - \\frac{(\\vec{r}\\cdot\\vec{p})^2}{mr^2} + u_1 p_\\lambda" }, { "math_id": 44, "text": "\n\\dot{\\vec{r}} = \\{\\vec{r}, \\, H\\}_{PB}, \\quad \\dot{\\vec{p}} = \\{ \\vec{p},\\, H\\}_{PB}, \\quad \\dot{\\lambda} = \\{ \\lambda,\\, H\\}_{PB},\n\\quad \\dot{p}_\\lambda = \\{ p_\\lambda, H\\}_{PB}.\n" }, { "math_id": 45, "text": "\n\\varphi_1 = p_\\lambda, \\quad \\varphi_2 = r^2-R^2, \\quad \\varphi_3 = \\vec{p}\\cdot\\vec{r}.\n" }, { "math_id": 46, "text": "\n\\{\\varphi_2, \\varphi_3\\} = 2 r^2 \\neq 0.\n" }, { "math_id": 47, "text": "A^\\mu = (\\vec{A},\\phi)" }, { "math_id": 48, "text": "S = \\int d^dx dt \\left[ \\frac{1}{2}E^2 - \\frac{1}{4}B_{ij}B_{ij} - \\frac{m^2}{2}A^2 + \\frac{m^2}{2}\\phi^2\\right]" }, { "math_id": 49, "text": "\\vec{E} \\equiv -\\nabla\\phi - \\dot{\\vec{A}}" }, { "math_id": 50, "text": "B_{ij} \\equiv \\frac{\\partial A_j}{\\partial x_i} - \\frac{\\partial A_i}{\\partial x_j}" }, { "math_id": 51, "text": "(\\vec{A},-\\vec{E})" }, { "math_id": 52, "text": "(\\phi,\\pi)" }, { "math_id": 53, "text": "\\pi \\approx 0" }, { "math_id": 54, "text": "\\nabla\\cdot\\vec{E} + m^2 \\phi \\approx 0" }, { "math_id": 55, "text": "H = \\int d^dx \\left[ \\frac{1}{2}E^2 + \\frac{1}{4}B_{ij}B_{ij} - \\pi\\nabla\\cdot\\vec{A} + \\vec{E}\\cdot\\nabla\\phi + \\frac{m^2}{2}A^2 - \\frac{m^2}{2}\\phi^2\\right]" } ]
https://en.wikipedia.org/wiki?curid=880145
8801912
Arden Buck equation
The Arden Buck equations are a group of empirical correlations that relate the saturation vapor pressure to temperature for moist air. The curve fits have been optimized for more accuracy than the Goff–Gratch equation in the range . A set of several equations was developed, each of which is applicable in a different situation. Formula. The equations suggested by (which are modifications of the equations in ) are: formula_0, over liquid water, T > 0 °C formula_1, over ice, T < 0 °C where: "P"s is the saturation vapor pressure in hPa (equivalently, mbar) and "T" is the air temperature in degrees Celsius. Buck (1981) also lists enhancement factors for a temperature range of −80 to 50 °C (−112 to 122 °F) at pressures of 1,000 mb, 500 mb, and 250 mb.
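A direct implementation of the two correlations above is straightforward. In the sketch below (Python; the function name is illustrative), the temperature is taken in degrees Celsius and the result is the saturation vapor pressure in hPa, switching between the liquid-water and ice fits at 0 °C:

import math

def buck_saturation_vapor_pressure(temperature_celsius):
    """Saturation vapor pressure in hPa from the correlations quoted above:
    the first fit over liquid water (T > 0 C), the second over ice (T < 0 C).
    At exactly 0 C both fits give about 6.11 hPa; the liquid branch is used here."""
    T = temperature_celsius
    if T >= 0:
        return 6.1121 * math.exp((18.678 - T / 234.5) * (T / (257.14 + T)))
    return 6.1115 * math.exp((23.036 - T / 333.7) * (T / (279.82 + T)))

print(round(buck_saturation_vapor_pressure(20.0), 2))   # about 23.4 hPa over water
print(round(buck_saturation_vapor_pressure(-20.0), 2))  # about 1.03 hPa over ice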
[ { "math_id": 0, "text": "\nP_{\\mathrm{s}}\\left(T \\right) = 6.1121 \\exp \\left(\\left( 18.678 - \\frac{T} {234.5}\\right)\\left( \\frac{T} {257.14 + T} \\right)\\right)\n" }, { "math_id": 1, "text": "\nP_{\\mathrm{s}}\\left(T \\right) = 6.1115 \\exp \\left(\\left( 23.036 - \\frac{T} {333.7}\\right)\\left( \\frac{T} {279.82 + T} \\right)\\right)\n" } ]
https://en.wikipedia.org/wiki?curid=8801912
8802094
Absorption cross section
Measures the probability of an absorption process In physics, absorption cross section is a measure for the probability of an absorption process. More generally, the term cross section is used in physics to quantify the probability of a certain particle-particle interaction, e.g., scattering, electromagnetic absorption, etc. (Note that light in this context is described as consisting of particles, i.e., photons.) Typical absorption cross section has units of cm2⋅molecule−1. In honor of the fundamental contribution of Maria Goeppert Mayer to this area, the unit for the two-photon absorption cross section is named the "GM". One GM is 10−50 cm4⋅s⋅photon−1. In the context of ozone shielding of ultraviolet light, absorption cross section is the ability of a molecule to absorb a photon of a particular wavelength and polarization. Analogously, in the context of nuclear engineering it refers to the probability of a particle (usually a neutron) being absorbed by a nucleus. Although the units are given as an area, it does not refer to an actual physical area, at least partially because the density or state of the target molecule will affect the probability of absorption. Quantitatively, the number formula_0 of photons absorbed, between the points formula_1 and formula_2 along the path of a beam is the product of the number formula_3 of photons penetrating to depth formula_1 times the number formula_4 of absorbing molecules per unit volume times the absorption cross section formula_5: formula_6. The absorption cross-section is closely related to molar absorptivity formula_7 and the mass absorption coefficient: formula_8 For a given particle and its energy, the absorption cross-section of the target material can be calculated from the mass absorption coefficient using: formula_9 where: formula_10 is the mass absorption coefficient of the target material, formula_11 is its molar mass, and formula_12 is the Avogadro constant. This is also commonly expressed as: formula_13 where: formula_14 is the absorption coefficient and "n" is the number density of absorbing molecules.
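The conversions above can be illustrated numerically. In the sketch below (Python; the example value of the molar absorptivity is purely illustrative), the first function implements σ = ln(10)·10³·ε/N_A with ε in L·mol−1·cm−1, and the second implements σ = (μ/ρ)·m_a/N_A with μ/ρ in cm²·g−1 and the molar mass m_a in g·mol−1, both returning σ in cm² per molecule (or atom):

import math

AVOGADRO = 6.02214076e23   # mol^-1

def cross_section_from_molar_absorptivity(epsilon_L_per_mol_cm):
    """sigma [cm^2/molecule] = ln(10) * 1e3 [cm^3/L] * epsilon / N_A,
    the relation between cross section and molar absorptivity quoted above."""
    return math.log(10) * 1e3 * epsilon_L_per_mol_cm / AVOGADRO

def cross_section_from_mass_absorption(mu_over_rho_cm2_per_g, molar_mass_g_per_mol):
    """sigma = (mu/rho) * m_a / N_A, with m_a taken here as the molar mass."""
    return mu_over_rho_cm2_per_g * molar_mass_g_per_mol / AVOGADRO

# A strongly absorbing dye with epsilon ~ 1e5 L mol^-1 cm^-1 (illustrative value).
print(cross_section_from_molar_absorptivity(1e5))   # roughly 3.8e-16 cm^2 per molecule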
[ { "math_id": 0, "text": "dN" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "x + dx" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "\\sigma" }, { "math_id": 6, "text": "\\frac{dN}{dx}= -N n \\sigma" }, { "math_id": 7, "text": "\\varepsilon" }, { "math_id": 8, "text": "\\sigma= \\frac{\\ln(10)\\times 10^3}{N_\\text{A}}\\times \\varepsilon" }, { "math_id": 9, "text": "\\sigma= (\\mu/\\rho) m_\\text{a}/N_\\text{A}" }, { "math_id": 10, "text": "\\mu/\\rho" }, { "math_id": 11, "text": "m_\\text{a}" }, { "math_id": 12, "text": "N_\\text{A}" }, { "math_id": 13, "text": "\\sigma= \\alpha/n" }, { "math_id": 14, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=8802094
880235
Envelope (mathematics)
Curve external to a family of curves in geometry In geometry, an envelope of a planar family of curves is a curve that is tangent to each member of the family at some point, and these points of tangency together form the whole envelope. Classically, a point on the envelope can be thought of as the intersection of two "infinitesimally adjacent" curves, meaning the limit of intersections of nearby curves. This idea can be generalized to an envelope of surfaces in space, and so on to higher dimensions. To have an envelope, it is necessary that the individual members of the family of curves are differentiable curves as the concept of tangency does not apply otherwise, and there has to be a smooth transition proceeding through the members. But these conditions are not sufficient – a given family may fail to have an envelope. A simple example of this is given by a family of concentric circles of expanding radius. Envelope of a family of curves. Let each curve "C""t" in the family be given as the solution of an equation "f""t"("x", "y")=0 (see implicit curve), where "t" is a parameter. Write "F"("t", "x", "y")="f""t"("x", "y") and assume "F" is differentiable. The envelope of the family "C""t" is then defined as the set formula_0 of points ("x","y") for which, simultaneously, formula_1 for some value of "t", where formula_2 is the partial derivative of "F" with respect to "t". If "t" and "u", "t"≠"u" are two values of the parameter then the intersection of the curves "C""t" and "C""u" is given by formula_3 or, equivalently, formula_4 Letting "u" → "t" gives the definition above. An important special case is when "F"("t", "x", "y") is a polynomial in "t". This includes, by clearing denominators, the case where "F"("t", "x", "y") is a rational function in "t". In this case, the definition amounts to "t" being a double root of "F"("t", "x", "y"), so the equation of the envelope can be found by setting the discriminant of "F" to 0 (because the definition demands F=0 at some t and first derivative =0 i.e. its value 0 and it is min/max at that t). For example, let "C""t" be the line whose "x" and "y" intercepts are "t" and 11−"t", this is shown in the animation above. The equation of "C""t" is formula_5 or, clearing fractions, formula_6 The equation of the envelope is then formula_7 Often when "F" is not a rational function of the parameter it may be reduced to this case by an appropriate substitution. For example, if the family is given by "C"θ with an equation of the form "u"("x", "y")cos θ+"v"("x", "y")sin θ="w"("x", "y"), then putting "t"="e""i"θ, cos θ=("t"+1/"t")/2, sin θ=("t"-1/"t")/2"i" changes the equation of the curve to formula_8 or formula_9 The equation of the envelope is then given by setting the discriminant to 0: formula_10 or formula_11 Alternative definitions. Then formula_12, formula_13 and formula_14, where formula_0 is the set of points defined at the beginning of this subsection's parent section. Examples. Example 1. These definitions "E"1, "E"2, and "E"3 of the envelope may be different sets. Consider for instance the curve "y" = "x"3 parametrised by γ : R → R2 where γ("t") = ("t","t"3). The one-parameter family of curves will be given by the tangent lines to γ. First we calculate the discriminant formula_15. The generating function is formula_16 Calculating the partial derivative "F""t" = 6"t"("x" – "t"). It follows that either "x" = "t" or "t" = 0. First assume that "x" = "t" and "t" ≠ 0. 
Substituting into F: formula_17 and so, assuming that "t" ≠ 0, it follows that "F" = "F""t" = 0 if and only if ("x","y") = ("t","t"3). Next, assuming that "t" = 0 and substituting into "F" gives "F"(0,("x","y")) = −"y". So, assuming "t" = 0, it follows that "F" = "F""t" = 0 if and only if "y" = 0. Thus the discriminant is the original curve and its tangent line at γ(0): formula_18 Next we calculate "E"1. One curve is given by "F"("t",("x","y")) = 0 and a nearby curve is given by "F"("t" + ε,("x","y")) where ε is some very small number. The intersection point comes from looking at the limit of "F"("t",("x","y")) = "F"("t" + ε,("x","y")) as ε tends to zero. Notice that "F"("t",("x","y")) = "F"("t" + ε,("x","y")) if and only if formula_19 If "t" ≠ 0 then "L" has only a single factor of ε. Assuming that "t" ≠ 0 then the intersection is given by formula_20 Since "t" ≠ 0 it follows that "x" = "t". The "y" value is calculated by knowing that this point must lie on a tangent line to the original curve γ: that "F"("t",("x","y")) = 0. Substituting and solving gives "y" = "t"3. When "t" = 0, "L" is divisible by ε2. Assuming that "t" = 0 then the intersection is given by formula_21 It follows that "x" = 0, and knowing that "F"("t",("x","y")) = 0 gives "y" = 0. It follows that formula_22 Next we calculate "E"2. The curve itself is the curve that is tangent to all of its own tangent lines. It follows that formula_23 Finally we calculate "E"3. Every point in the plane has at least one tangent line to γ passing through it, and so region filled by the tangent lines is the whole plane. The boundary "E"3 is therefore the empty set. Indeed, consider a point in the plane, say ("x"0,"y"0). This point lies on a tangent line if and only if there exists a "t" such that formula_24 This is a cubic in "t" and as such has at least one real solution. It follows that at least one tangent line to γ must pass through any given point in the plane. If "y" &gt; "x"3 and "y" &gt; 0 then each point ("x","y") has exactly one tangent line to γ passing through it. The same is true if "y" &lt; "x"3 "y" &lt; 0. If "y" &lt; "x"3 and "y" &gt; 0 then each point ("x","y") has exactly three distinct tangent lines to γ passing through it. The same is true if "y" &gt; "x"3 and "y" &lt; 0. If "y" = "x"3 and "y" ≠ 0 then each point ("x","y") has exactly two tangent lines to γ passing through it (this corresponds to the cubic having one ordinary root and one repeated root). The same is true if "y" ≠ "x"3 and "y" = 0. If "y" = "x"3 and "x" = 0, i.e., "x" = "y" = 0, then this point has a single tangent line to γ passing through it (this corresponds to the cubic having one real root of multiplicity 3). It follows that formula_25 Example 2. In string art it is common to cross-connect two lines of equally spaced pins. What curve is formed? For simplicity, set the pins on the "x"- and "y"-axes; a non-orthogonal layout is a rotation and scaling away. A general straight-line thread connects the two points (0, "k"−"t") and ("t", 0), where "k" is an arbitrary scaling constant, and the family of lines is generated by varying the parameter "t". From simple geometry, the equation of this straight line is "y" = −("k" − "t")"x"/"t" + "k" − "t". Rearranging and casting in the form "F"("x","y","t") = 0 gives: Now differentiate "F"("x","y","t") with respect to "t" and set the result equal to zero, to get These two equations jointly define the equation of the envelope. 
From (2) we have: formula_26 Substituting this value of "t" into (1) and simplifying gives an equation for the envelope, (3): "y" = "x" − 2√("kx") + "k". Or, rearranging into a more elegant form that shows the symmetry between "x" and "y", (4): √"x" + √"y" = √"k". We can take a rotation of the axes where the "b" axis is the line "y=x" oriented northeast and the "a" axis is the line "y"=−"x" oriented southeast. These new axes are related to the original "x-y" axes by "x"=("b"+"a")/√2 and "y"=("b"−"a")/√2 . We obtain, after substitution into (4) and expansion and simplification, (5): "b" = "a"2/(√2 "k") + "k"/(2√2), which is the equation of a parabola with axis along "a"=0, or "y"="x". Example 3. Let "I" ⊂ R be an open interval and let γ : "I" → R2 be a smooth plane curve parametrised by arc length. Consider the one-parameter family of normal lines to γ("I"). A line is normal to γ at γ("t") if it passes through γ("t") and is perpendicular to the tangent vector to γ at γ("t"). Let T denote the unit tangent vector to γ and let N denote the unit normal vector. Using a dot to denote the dot product, the generating family for the one-parameter family of normal lines is given by "F" : "I" × R2 → R where formula_27 Clearly (x − γ)·T = 0 if and only if x − γ is perpendicular to T, or equivalently, if and only if x − γ is parallel to N, or equivalently, if and only if x = γ + λN for some λ ∈ R. It follows that formula_28 is exactly the normal line to γ at γ("t"0). To find the discriminant of "F" we need to compute its partial derivative with respect to "t": formula_29 where κ is the plane curve curvature of γ. It has been seen that "F" = 0 if and only if x - γ = λN for some λ ∈ R. Assuming that "F" = 0 gives formula_30 Assuming that κ ≠ 0 it follows that λ = 1/κ and so formula_31 This is exactly the evolute of the curve γ. Example 4. The following example shows that in some cases the envelope of a family of curves may be seen as the topological boundary of a union of sets, whose boundaries are the curves of the envelope. For formula_32 and formula_33 consider the (open) right triangle in a Cartesian plane with vertices formula_34, formula_35 and formula_36 formula_37 Fix an exponent formula_38, and consider the union of all the triangles formula_39 subject to the constraint formula_40, that is the open set formula_41 To write a Cartesian representation for formula_42, start with any formula_43, formula_44 satisfying formula_40 and any formula_45. The Hölder inequality in formula_46 with respect to the conjugate exponents formula_47 and formula_48 gives: formula_49, with equality if and only if formula_50. In terms of a union of sets the latter inequality reads: the point formula_51 belongs to the set formula_42, that is, it belongs to some formula_52 with formula_53, if and only if it satisfies formula_54 Moreover, the boundary in formula_55 of the set formula_56 is the envelope of the corresponding family of line segments formula_57 (that is, the hypotenuses of the triangles), and has Cartesian equation formula_58 Notice that, in particular, the value formula_59 gives the arc of parabola of Example 2, and the value formula_60 (meaning that all hypotenuses are unit length segments) gives the astroid. Example 5. We consider the following example of an envelope arising in a problem of motion. Suppose at initial height 0, one casts a projectile into the air with constant initial speed "v" but different elevation angles θ. Let "x" be the horizontal axis in the plane of motion, and let "y" denote the vertical axis.
Then the motion gives the following differential dynamical system: formula_61 which satisfies four initial conditions: formula_62 Here "t" denotes time, θ is the elevation angle, "g" denotes gravitational acceleration, and "v" is the constant initial speed (not velocity). The solution of the above system can take an implicit form: formula_63 To find its envelope equation, one may compute the desired derivative: formula_64 By eliminating θ, one may reach the following envelope equation: formula_65 Clearly the resulting envelope is also a concave parabola. Envelope of a family of surfaces. A one-parameter family of surfaces in three-dimensional Euclidean space is given by a set of equations formula_66 depending on a real parameter "a". For example, the tangent planes to a surface along a curve in the surface form such a family. Two surfaces corresponding to different values "a" and "a' " intersect in a common curve defined by formula_67 In the limit as "a' " approaches "a", this curve tends to a curve contained in the surface at "a" formula_68 This curve is called the characteristic of the family at "a". As "a" varies the locus of these characteristic curves defines a surface called the envelope of the family of surfaces. The envelope of a family of surfaces is tangent to each surface in the family along the characteristic curve in that surface. Generalisations. The idea of an envelope of a family of smooth submanifolds follows naturally. In general, if we have a family of submanifolds with codimension "c" then we need to have at least a "c"-parameter family of such submanifolds. For example: a one-parameter family of curves in three-space ("c" = 2) does not, generically, have an envelope. Applications. Ordinary differential equations. Envelopes are connected to the study of ordinary differential equations (ODEs), and in particular singular solutions of ODEs. Consider, for example, the one-parameter family of tangent lines to the parabola "y" = "x"2. These are given by the generating family "F"("t",("x","y")) = "t"2 – 2"tx" + "y". The zero level set "F"("t"0,("x","y")) = 0 gives the equation of the tangent line to the parabola at the point ("t"0,"t"02). The equation "t"2 – 2"tx" + "y" = 0 can always be solved for "y" as a function of "x" and so, consider formula_69 Substituting formula_70 gives the ODE formula_71 Not surprisingly, the tangent lines "y" = 2"tx" − "t"2 are all solutions to this ODE. However, the envelope of this one-parameter family of lines, which is the parabola "y" = "x"2, is also a solution to this ODE. Another famous example is Clairaut's equation. Partial differential equations. Envelopes can be used to construct more complicated solutions of first order partial differential equations (PDEs) from simpler ones. Let "F"("x","u",D"u") = 0 be a first order PDE, where "x" is a variable with values in an open set Ω ⊂ R"n", "u" is an unknown real-valued function, D"u" is the gradient of "u", and "F" is a continuously differentiable function that is regular in D"u". Suppose that "u"("x";"a") is an "m"-parameter family of solutions: that is, for each fixed "a" ∈ "A" ⊂ R"m", "u"("x";"a") is a solution of the differential equation. A new solution of the differential equation can be constructed by first solving (if possible) formula_72 for "a" = φ("x") as a function of "x".
The envelope of the family of functions {"u"(·,"a")}"a"∈"A" is defined by formula_73 and also solves the differential equation (provided that it exists as a continuously differentiable function). Geometrically, the graph of "v"("x") is everywhere tangent to the graph of some member of the family "u"("x";"a"). Since the differential equation is first order, it only puts a condition on the tangent plane to the graph, so that any function everywhere tangent to a solution must also be a solution. The same idea underlies the solution of a first order equation as an integral of the Monge cone. The Monge cone is a cone field in the space R"n"+1 of the ("x","u") variables cut out by the envelope of the tangent spaces to the first order PDE at each point. A solution of the PDE is then an envelope of the cone field. In Riemannian geometry, if a smooth family of geodesics through a point "P" in a Riemannian manifold has an envelope, then "P" has a conjugate point where any geodesic of the family intersects the envelope. The same is true more generally in the calculus of variations: if a family of extremals to a functional through a given point "P" has an envelope, then a point where an extremal intersects the envelope is a conjugate point to "P". Caustics. In geometrical optics, a caustic is the envelope of a family of light rays. Consider, for example, an arc of a circle. Light rays coming from a source "at infinity" arrive parallel. When they hit the circular arc the light rays are scattered in different directions according to the law of reflection. When a light ray hits the arc at a point the light will be reflected as though it had been reflected by the arc's tangent line at that point. The reflected light rays give a one-parameter family of lines in the plane. The envelope of these lines is the reflective caustic. A reflective caustic will generically consist of smooth points and ordinary cusp points. From the point of view of the calculus of variations, Fermat's principle (in its modern form) implies that light rays are the extremals for the length functional formula_74 among smooth curves γ on ["a","b"] with fixed endpoints γ("a") and γ("b"). The caustic determined by a given point "P" (for parallel incoming rays, the point is at infinity) is the set of conjugate points to "P". Huygens's principle. Light may pass through anisotropic inhomogeneous media at different rates depending on the direction and starting position of a light ray. The boundary of the set of points to which light can travel from a given point q after a time "t" is known as the wave front after time "t", denoted here by Φq("t"). It consists of precisely the points that can be reached from q in time "t" by travelling at the speed of light. Huygens's principle asserts that the wave front set Φq0("s" + "t") is the envelope of the family of wave fronts Φq("s") for q ∈ Φq0("t"). More generally, the point q0 could be replaced by any curve, surface or closed set in space.
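The discriminant description of the envelope given above can be checked directly with a computer algebra system. The following minimal Python sketch (assuming the sympy library; the variable names are chosen here for illustration) recovers the envelope equation of the line family with intercepts "t" and 11 − "t".

    import sympy as sp

    t, x, y = sp.symbols('t x y', real=True)
    # the family with intercepts t and 11 - t, cleared of fractions
    F = t**2 + (-x + y - 11)*t + 11*x

    # The envelope requires F = 0 and dF/dt = 0 simultaneously, i.e. t is a
    # double root of F, so the discriminant of F in t must vanish.
    print(sp.expand(sp.discriminant(F, t)))       # x**2 - 2*x*y - 22*x + y**2 - 22*y + 121

    # Cross-check by eliminating t from F = 0 and dF/dt = 0 directly.
    t_star = sp.solve(sp.diff(F, t), t)[0]
    print(sp.expand(-4 * F.subs(t, t_star)))      # the same polynomial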
[ { "math_id": 0, "text": "\\mathcal{D}" }, { "math_id": 1, "text": "F(t, x, y) = 0~~\\mathsf{and}~~{\\partial F \\over \\partial t}(t, x, y) = 0" }, { "math_id": 2, "text": "\\partial F/\\partial t" }, { "math_id": 3, "text": "F(t, x, y) = F(u, x, y) = 0\\," }, { "math_id": 4, "text": "F(t, x, y) = 0~~\\mathsf{and}~~\\frac{F(u, x, y)-F(t, x, y)}{u-t} = 0." }, { "math_id": 5, "text": "\\frac{x}{t}+\\frac{y}{11-t}=1" }, { "math_id": 6, "text": "x(11-t)+yt-t(11-t)=t^2+(-x+y-11)t+11x=0.\\," }, { "math_id": 7, "text": "(-x+y-11)^2-44x=(x-y)^2-22(x+y)+121=0.\\," }, { "math_id": 8, "text": "u{1 \\over 2}(t+{1\\over t})+v{1 \\over 2i}(t-{1\\over t})=w" }, { "math_id": 9, "text": "(u-iv)t^2-2wt+(u+iv)=0.\\," }, { "math_id": 10, "text": "(u-iv)(u+iv)-w^2=0\\," }, { "math_id": 11, "text": "u^2+v^2=w^2.\\," }, { "math_id": 12, "text": "E_1 \\subseteq \\mathcal{D}" }, { "math_id": 13, "text": "E_2 \\subseteq \\mathcal{D}" }, { "math_id": 14, "text": "E_3 \\subseteq \\mathcal{D}" }, { "math_id": 15, "text": "\\mathcal D" }, { "math_id": 16, "text": " F(t,(x,y)) = 3t^2x - y - 2t^3." }, { "math_id": 17, "text": "F(t,(t,y)) = t^3 - y \\, " }, { "math_id": 18, "text": " \\mathcal{D} = \\{(x,y) \\in \\R^2 : y = x^3\\} \\cup \\{(x,y) \\in \\R^2 : y = 0 \\} \\ . " }, { "math_id": 19, "text": " L := F(t,(x,y)) - F(t+\\varepsilon,(x,y)) = 2\\varepsilon^3+6\\varepsilon t^2+6\\varepsilon^2t-(3\\varepsilon^2+6\\varepsilon t)x = 0. " }, { "math_id": 20, "text": "\\lim_{\\varepsilon \\to 0} \\frac{1}{\\varepsilon} L = 6t(t-x) \\ . " }, { "math_id": 21, "text": "\\lim_{\\varepsilon \\to 0} \\frac{1}{\\varepsilon^2} L = 3x \\ . " }, { "math_id": 22, "text": " E_1 = \\{(x,y) \\in \\R^2 : y = x^3 \\} \\ . " }, { "math_id": 23, "text": " E_2 = \\{(x,y) \\in \\R^2 : y = x^3 \\} \\ . " }, { "math_id": 24, "text": "F(t,(x_0,y_0)) = 3t^2x_0 - y_0 - 2t^3 = 0 \\ . " }, { "math_id": 25, "text": "E_3 = \\varnothing. " }, { "math_id": 26, "text": "t = \\sqrt{kx} \\," }, { "math_id": 27, "text": " F(t,{\\mathbf x}) = ({\\mathbf x} - \\gamma(t)) \\cdot {\\mathbf T}(t) \\ . " }, { "math_id": 28, "text": " L_{t_0} := \\{ {\\mathbf x} \\in \\R^2 : F(t_0,{\\mathbf x}) = 0 \\} " }, { "math_id": 29, "text": " \\frac{\\partial F}{\\partial t}(t,{\\mathbf x}) = \\kappa (t) ({\\mathbf x}-\\gamma(t))\\cdot {\\mathbf N}(t) - 1 \\ , " }, { "math_id": 30, "text": " \\frac{\\partial F}{\\partial t} = \\lambda \\kappa(t) - 1 \\ . " }, { "math_id": 31, "text": " \\mathcal{D} = \\gamma(t) + \\frac{1}{\\kappa(t)}{\\mathbf N}(t) \\ . " }, { "math_id": 32, "text": "s>0" }, { "math_id": 33, "text": "t>0" }, { "math_id": 34, "text": "(0,0)" }, { "math_id": 35, "text": "(s,0)" }, { "math_id": 36, "text": "(0,t)" }, { "math_id": 37, "text": "T_{s,t}:=\\left\\{(x,y)\\in\\R_+^2:\\ \\frac{x}{s}+\\frac{y}{t}<1\\right\\}.\n" }, { "math_id": 38, "text": "\\alpha>0" }, { "math_id": 39, "text": "T_{s,t} " }, { "math_id": 40, "text": "\\textstyle s^\\alpha+t^\\alpha=1 " }, { "math_id": 41, "text": "\\Delta_\\alpha:=\\bigcup_ {s^\\alpha+t^\\alpha=1} T_{s,t}." 
}, { "math_id": 42, "text": "\\textstyle\\Delta_\\alpha" }, { "math_id": 43, "text": "\\textstyle s>0" }, { "math_id": 44, "text": "\\textstyle t>0" }, { "math_id": 45, "text": "\\textstyle(x,y)\\in\\R_+^2" }, { "math_id": 46, "text": "\\textstyle\\R^2" }, { "math_id": 47, "text": "p:=1+\\frac{1}{\\alpha}" }, { "math_id": 48, "text": "\\textstyle q:={1+\\alpha}" }, { "math_id": 49, "text": "x^\\frac{\\alpha}{\\alpha+1}+y^\\frac{\\alpha}{\\alpha+1}\\leq \\left(\\frac{x}{s}+\\frac{y}{t}\\right)^\\frac{\\alpha}{\\alpha+1}\\Big(s^\\alpha+t^\\alpha\\Big)^\\frac{1}{\\alpha+1}=\\left(\\frac{x}{s}+\\frac{y}{t}\\right)^\\frac{\\alpha}{\\alpha+1}" }, { "math_id": 50, "text": "\\textstyle s:\\,t=x^\\frac{1}{1+\\alpha}:\\,y^\\frac{1}{1+\\alpha}" }, { "math_id": 51, "text": "(x,y)\\in\\R_+^2" }, { "math_id": 52, "text": "\\textstyle T_{s,t}" }, { "math_id": 53, "text": "\\textstyle s^\\alpha+t^\\alpha=1" }, { "math_id": 54, "text": "x^\\frac{\\alpha}{\\alpha+1}+y^\\frac{\\alpha}{\\alpha+1}<1." }, { "math_id": 55, "text": "\\R_+^2" }, { "math_id": 56, "text": "\\textstyle \\Delta_\\alpha" }, { "math_id": 57, "text": "\\left\\{(x,y)\\in\\R_+^2:\\ \\frac{x}{s}+\\frac{y}{t}=1\\right\\}\\ ,\\qquad s^\\alpha+t^\\alpha=1" }, { "math_id": 58, "text": "x^\\frac{\\alpha}{\\alpha+1}+y^\\frac{\\alpha}{\\alpha+1}=1." }, { "math_id": 59, "text": "\\alpha=1" }, { "math_id": 60, "text": "\\alpha=2" }, { "math_id": 61, "text": "\\frac{d^2 y}{dt^2} = -g,\\; \\frac{d^2 x}{dt^2} = 0, " }, { "math_id": 62, "text": "\\frac{dx}{dt}\\bigg|_{t=0} = v \\cos \\theta,\\; \\frac{dy}{dt}\\bigg|_{t=0} = v \\sin \\theta,\\; x\\bigg|_{t=0} = y\\bigg|_{t=0} = 0." }, { "math_id": 63, "text": "F(x,y,\\theta) = x\\tan \\theta - \\frac{gx^2}{2v^2 \\cos^2 \\theta} - y = 0." }, { "math_id": 64, "text": "\\frac{\\partial F}{\\partial \\theta} = \\frac{x}{\\cos^2 \\theta} - \\frac{gx^2 \\tan \\theta}{v^2 \\cos^2 \\theta} = 0." }, { "math_id": 65, "text": "y = \\frac{v^2}{2g} - \\frac{g}{2v^2}x^2." }, { "math_id": 66, "text": "F(x,y,z,a)=0" }, { "math_id": 67, "text": " F(x,y,z,a)=0,\\,\\,{F(x,y,z,a^\\prime)-F(x,y,z,a)\\over a^\\prime -a}=0." }, { "math_id": 68, "text": " F(x,y,z,a)=0,\\,\\,{\\partial F\\over \\partial a}(x,y,z,a)=0." }, { "math_id": 69, "text": " t^2 - 2tx + y(x) = 0. \\ " }, { "math_id": 70, "text": " t = \\left(\\frac{dy}{dx}\\right)/2 " }, { "math_id": 71, "text": " \\left(\\frac{dy}{dx}\\right)^2 \\!\\! - 4x\\frac{dy}{dx} + 4y = 0. " }, { "math_id": 72, "text": "D_a u(x;a) = 0\\," }, { "math_id": 73, "text": "v(x) = u(x;\\varphi(x)),\\quad x\\in\\Omega," }, { "math_id": 74, "text": "L[\\gamma] = \\int_a^b |\\gamma'(t)|\\,dt" } ]
https://en.wikipedia.org/wiki?curid=880235
8802504
Voltage graph
Directed graph whose edges are labelled invertibly by elements of a group In graph theory, a voltage graph is a directed graph whose edges are labelled invertibly by elements of a group. It is formally identical to a gain graph, but it is generally used in topological graph theory as a concise way to specify another graph called the derived graph of the voltage graph. Typical choices of the groups used for voltage graphs include the two-element group formula_0 (for defining the bipartite double cover of a graph), free groups (for defining the universal cover of a graph), "d"-dimensional integer lattices formula_1 (viewed as a group under vector addition, for defining periodic structures in "d"-dimensional Euclidean space), and finite cyclic groups formula_2 for "n" > 2. When Π is a cyclic group, the voltage graph may be called a "cyclic-voltage graph". Definition. Formal definition of a Π-voltage graph, for a given group Π: a Π-voltage graph is a directed graph "G" in which every edge is labelled by a group element formula_3; in the case formula_4 the labels may be taken to be residues modulo "n". The labelling is a voltage assignment formula_5, and the voltage graph is the pair formula_6. Note that the voltages of a voltage graph need not satisfy Kirchhoff's voltage law, that the sum of voltages around a closed path is 0 (the identity element of the group), although this law does hold for the derived graphs described below. Thus, the name may be somewhat misleading. It results from the origin of voltage graphs as dual to the current graphs of topological graph theory. The derived graph. The derived graph of a voltage graph formula_7 is the graph formula_8 whose vertex set is formula_9 and whose edge set is formula_10, where the endpoints of an edge ("e", "k") such that "e" has tail "v" and head "w" are formula_11 and formula_12. Although voltage graphs are defined for digraphs, they may be extended to undirected graphs by replacing each undirected edge by a pair of oppositely ordered directed edges and by requiring that these edges have labels that are inverse to each other in the group structure. In this case, the derived graph will also have the property that its directed edges form pairs of oppositely oriented edges, so the derived graph may itself be interpreted as being an undirected graph. The derived graph is a covering graph of the given voltage graph. If no edge label of the voltage graph is the identity element, then the group elements associated with the vertices of the derived graph provide a coloring of the derived graph with a number of colors equal to the group order. An important special case is the bipartite double cover, the derived graph of a voltage graph in which all edges are labeled with the non-identity element of a two-element group. Because the order of the group is two, the derived graph in this case is guaranteed to be bipartite. Polynomial time algorithms are known for determining whether the derived graph of a formula_1-voltage graph contains any directed cycles. Examples. Any Cayley graph of a group Π, with a given set Γ of generators, may be defined as the derived graph for a Π-voltage graph having one vertex and Γ self-loops, each labeled with one of the generators in Γ. The Petersen graph is the derived graph for a formula_13-voltage graph in the shape of a dumbbell with two vertices and three edges: one edge connecting the two vertices, and one self-loop on each vertex. One self-loop is labeled with 1, the other with 2, and the edge connecting the two vertices is labeled 0. More generally, the same construction allows any generalized Petersen graph GP("n","k") to be constructed as a derived graph of the same dumbbell graph with labels 1, 0, and "k" in the group formula_14.
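As an illustration of the derived-graph construction just described, the following minimal Python sketch (the variable names are ours) builds the derived graph of the dumbbell voltage graph with voltages 1, 0 and 2 over the group of integers modulo 5, and confirms that the result has the 10 vertices, 15 edges and 3-regularity of the Petersen graph.

    n = 5
    # (tail, head, voltage): a loop at u with voltage 1, a loop at v with
    # voltage 2, and an edge from u to v with voltage 0
    voltage_edges = [('u', 'u', 1), ('v', 'v', 2), ('u', 'v', 0)]

    derived_edges = set()
    for tail, head, a in voltage_edges:
        for k in range(n):
            # the edge (e, k) joins (tail, k) to (head, k + a mod n)
            derived_edges.add(frozenset({(tail, k), (head, (k + a) % n)}))

    vertices = {w for e in derived_edges for w in e}
    degrees = {w: sum(w in e for e in derived_edges) for w in vertices}
    print(len(vertices), len(derived_edges), set(degrees.values()))   # 10 15 {3}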
The vertices and edges of any periodic tessellation of the plane may be formed as the derived graph of a finite graph, with voltages in formula_15.
[ { "math_id": 0, "text": "\\mathbb{Z}_2" }, { "math_id": 1, "text": "\\mathbb{Z}^d" }, { "math_id": 2, "text": "\\mathbb{Z_n}" }, { "math_id": 3, "text": "x \\in \\Pi" }, { "math_id": 4, "text": "\\Pi = \\mathbb{Z}_n" }, { "math_id": 5, "text": "\\alpha : E(G) \\rightarrow \\Pi" }, { "math_id": 6, "text": "( G, \\alpha: E(G) \\rightarrow \\Pi )" }, { "math_id": 7, "text": "( G, \\alpha: E(G) \\rightarrow \\mathbb{Z}_{n} )" }, { "math_id": 8, "text": "\\tilde G" }, { "math_id": 9, "text": "\\tilde V = V \\times \\mathbb{Z}_{n}" }, { "math_id": 10, "text": "\\tilde E = E \\times \\mathbb{Z}_{n}" }, { "math_id": 11, "text": "(v,\\ k)" }, { "math_id": 12, "text": "(w,\\ k+\\alpha(e))" }, { "math_id": 13, "text": "\\mathbb{Z}_5" }, { "math_id": 14, "text": "\\mathbb{Z}_n" }, { "math_id": 15, "text": "\\mathbb{Z}^2" } ]
https://en.wikipedia.org/wiki?curid=8802504
880406
Caustic (mathematics)
Envelope of rays either reflected or refracted by a manifold In differential geometry, a caustic is the envelope of rays either reflected or refracted by a manifold. It is related to the concept of caustics in geometric optics. The ray's source may be a point (called the radiant) or parallel rays from a point at infinity, in which case a direction vector of the rays must be specified. More generally, especially as applied to symplectic geometry and singularity theory, a caustic is the critical value set of a Lagrangian mapping ("π" ○ "i") : "L" ↪ "M" ↠ "B"; where "i" : "L" ↪ "M" is a Lagrangian immersion of a Lagrangian submanifold "L" into a symplectic manifold "M", and "π" : "M" ↠ "B" is a Lagrangian fibration of the symplectic manifold "M". The caustic is a subset of the Lagrangian fibration's base space "B". Explanation. Concentration of light, especially sunlight, can burn. The word "caustic", in fact, comes from the Greek καυστός, burnt, via the Latin "causticus", burning. A common situation where caustics are visible is when light shines on a drinking glass. The glass casts a shadow, but also produces a curved region of bright light. In ideal circumstances (including perfectly parallel rays, as if from a point source at infinity), a nephroid-shaped patch of light can be produced. Rippling caustics are commonly formed when light shines through waves on a body of water. Another familiar caustic is the rainbow. Scattering of light by raindrops causes different wavelengths of light to be refracted into arcs of differing radius, producing the bow. Catacaustic. A catacaustic is the reflective case. With a radiant, it is the evolute of the orthotomic of the radiant. The planar, parallel-source-rays case: suppose the direction vector is formula_0 and the mirror curve is parametrised as formula_1. The normal vector at a point is formula_2; the reflection of the direction vector (the formula itself normalizes the normal, so the normal need not be a unit vector) is formula_3 Having found the components of the reflected vector, treat it as the tangent direction of the reflected ray through the point of reflection, giving formula_4 Using the simplest envelope form, formula_5 formula_6 formula_7 formula_8 These expressions may be unaesthetic, but formula_9 gives a linear system in formula_10 and so it is elementary to obtain a parametrisation of the catacaustic; Cramer's rule would serve. Example. Let the direction vector be (0,1) and the mirror be formula_11 Then formula_12, formula_13, formula_14, formula_15, formula_16, formula_17, formula_18, formula_19, and formula_9 has solution formula_20; "i.e.", light entering a parabolic mirror parallel to its axis is reflected through the focus.
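The parabolic-mirror example can be verified numerically. The short Python sketch below (plain Python; the function and variable names are ours) reflects a few rays with direction (0, 1) off the mirror ("t", "t"2) and checks that each reflected ray passes through the focus (0, 1/4).

    def reflected_direction(t):
        # normal to the mirror (t, t^2) is (-2t, 1); reflect d = (0, 1) in it
        nx, ny = -2.0 * t, 1.0
        dx, dy = 0.0, 1.0
        s = 2.0 * (nx * dx + ny * dy) / (nx * nx + ny * ny)
        return (s * nx - dx, s * ny - dy)

    focus = (0.0, 0.25)
    for t in (-2.0, -0.7, 0.3, 1.5):
        rx, ry = reflected_direction(t)
        px, py = t, t * t                           # point of reflection
        cross = (focus[0] - px) * ry - (focus[1] - py) * rx
        print(t, abs(cross) < 1e-12)                # True: the ray passes through the focus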
[ { "math_id": 0, "text": "(a,b)" }, { "math_id": 1, "text": "(u(t),v(t))" }, { "math_id": 2, "text": "(-v'(t),u'(t))" }, { "math_id": 3, "text": "2\\mbox{proj}_nd-d=\\frac{2n}{\\sqrt{n\\cdot n}}\\frac{n\\cdot d}{\\sqrt{n\\cdot n}}-d=2n\\frac{n\\cdot d}{n\\cdot n}-d=\\frac{\n(av'^2-2bu'v'-au'^2,bu'^2-2au'v'-bv'^2)\n}{v'^2+u'^2}" }, { "math_id": 4, "text": "(x-u)(bu'^2-2au'v'-bv'^2)=(y-v)(av'^2-2bu'v'-au'^2)." }, { "math_id": 5, "text": "F(x,y,t)=(x-u)(bu'^2-2au'v'-bv'^2)-(y-v)(av'^2-2bu'v'-au'^2)" }, { "math_id": 6, "text": "=x(bu'^2-2au'v'-bv'^2)\n-y(av'^2-2bu'v'-au'^2)\n+b(uv'^2-uu'^2-2vu'v')\n+a(-vu'^2+vv'^2+2uu'v')" }, { "math_id": 7, "text": "F_t(x,y,t)=2x(bu'u''-a(u'v''+u''v')-bv'v'')\n-2y(av'v''-b(u''v'+u'v'')-au'u'')" }, { "math_id": 8, "text": "+b( u'v'^2 +2uv'v'' -u'^3 -2uu'u'' -2u'v'^2 -2u''vv' -2u'vv'')\n+a(-v'u'^2 -2vu'u'' +v'^3 +2vv'v'' +2v'u'^2 +2v''uu' +2v'uu'')" }, { "math_id": 9, "text": "F=F_t=0" }, { "math_id": 10, "text": "(x,y)" }, { "math_id": 11, "text": "(t,t^2)." }, { "math_id": 12, "text": "u'=1" }, { "math_id": 13, "text": "u''=0" }, { "math_id": 14, "text": "v'=2t" }, { "math_id": 15, "text": "v''=2" }, { "math_id": 16, "text": "a=0" }, { "math_id": 17, "text": "b=1" }, { "math_id": 18, "text": "F(x,y,t)=(x-t)(1-4t^2)+4t(y-t^2)=x(1-4t^2)+4ty-t" }, { "math_id": 19, "text": "F_t(x,y,t)=-8tx+4y-1" }, { "math_id": 20, "text": "(0,1/4)" } ]
https://en.wikipedia.org/wiki?curid=880406
880483
Self number
Type of natural number In number theory, a self number or Devlali number in a given number base formula_0 is a natural number that cannot be written as the sum of any other natural number formula_1 and the individual digits of formula_1. 20 is a self number (in base 10), because no such combination can be found (all formula_2 give a result less than 20; all other formula_1 give a result greater than 20). 21 is not, because it can be written as 15 + 1 + 5 using "n" = 15. These numbers were first described in 1949 by the Indian mathematician D. R. Kaprekar. Definition and properties. Let formula_1 be a natural number. We define the formula_0-self function for base formula_3 formula_4 to be the following: formula_5 where formula_6 is the number of digits in the number in base formula_0, and formula_7 is the value of each digit of the number. A natural number formula_1 is a formula_0-self number if the preimage of formula_1 for formula_8 is the empty set. In general, for even bases, all odd numbers below the base number are self numbers, since any number below such an odd number would have to also be a 1-digit number which when added to its digit would result in an even number. For odd bases, all odd numbers are self numbers. The set of self numbers in a given base formula_0 is infinite and has a positive asymptotic density: when formula_0 is odd, this density is 1/2. Self numbers in specific bases. For base 2 self numbers, see OEIS: . (written in base 10) The first few base 10 self numbers are: 1, 3, 5, 7, 9, 20, 31, 42, 53, 64, 75, 86, 97, 108, 110, 121, 132, 143, 154, 165, 176, 187, 198, 209, 211, 222, 233, 244, 255, 266, 277, 288, 299, 310, 312, 323, 334, 345, 356, 367, 378, 389, 400, 411, 413, 424, 435, 446, 457, 468, 479, 490, ... (sequence in the OEIS) Self primes. A self prime is a self number that is prime. The first few self primes in base 10 are 3, 5, 7, 31, 53, 97, 211, 233, 277, 367, 389, 457, 479, 547, 569, 613, 659, 727, 839, 883, 929, 1021, 1087, 1109, 1223, 1289, 1447, 1559, 1627, 1693, 1783, 1873, ... (sequence in the OEIS)
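The defining function lends itself to a direct computation. The following minimal Python sketch (the function names are ours) sieves out the base-10 self numbers below 100 and reproduces the start of the list above; since every value of the function exceeds its argument, no generator outside the search range is missed.

    def F(n, b=10):
        # n plus the digit sum of n in base b
        s, m = n, n
        while m:
            s += m % b
            m //= b
        return s

    def self_numbers(limit, b=10):
        generated = {F(n, b) for n in range(1, limit + 1)}
        return [n for n in range(1, limit + 1) if n not in generated]

    print(self_numbers(100))
    # [1, 3, 5, 7, 9, 20, 31, 42, 53, 64, 75, 86, 97]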
[ { "math_id": 0, "text": "b" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "n < 15" }, { "math_id": 3, "text": "b > 1" }, { "math_id": 4, "text": "F_b : \\mathbb{N} \\rightarrow \\mathbb{N}" }, { "math_id": 5, "text": "F_{b}(n) = n + \\sum_{i=0}^{k - 1} d_i. " }, { "math_id": 6, "text": "k = \\lfloor \\log_{b}{n} \\rfloor + 1" }, { "math_id": 7, "text": "d_i = \\frac{n \\bmod{b^{i+1}} - n \\bmod b^i}{b^i}" }, { "math_id": 8, "text": "F_b" } ]
https://en.wikipedia.org/wiki?curid=880483
880710
Maximum modulus principle
Mathematical theorem in complex analysis In mathematics, the maximum modulus principle in complex analysis states that if formula_1 is a holomorphic function, then the modulus formula_2 cannot exhibit a strict maximum that is strictly within the domain of formula_1. In other words, either formula_1 is locally a constant function, or, for any point formula_3 inside the domain of formula_1 there exist other points arbitrarily close to formula_3 at which formula_2 takes larger values. Formal statement. Let formula_1 be a holomorphic function on some connected open subset formula_4 of the complex plane formula_5 and taking complex values. If formula_3 is a point in formula_4 such that formula_6 for all formula_0 in some neighborhood of formula_3, then formula_1 is constant on formula_4. This statement can be viewed as a special case of the open mapping theorem, which states that a nonconstant holomorphic function maps open sets to open sets: If formula_2 attains a local maximum at formula_0, then the image of a sufficiently small open neighborhood of formula_0 cannot be open, so formula_1 is constant. Related statement. Suppose that formula_4 is a bounded nonempty connected open subset of formula_5. Let formula_7 be the closure of formula_4. Suppose that formula_8 is a continuous function that is holomorphic on formula_4. Then formula_9 attains a maximum at some point of the boundary of formula_4. This follows from the first version as follows. Since formula_7 is compact and nonempty, the continuous function formula_9 attains a maximum at some point formula_3 of formula_7. If formula_3 is not on the boundary, then the maximum modulus principle implies that formula_1 is constant, so formula_9 also attains the same maximum at any point of the boundary. Minimum modulus principle. For a holomorphic function formula_1 on a connected open set formula_4 of formula_5, if formula_3 is a point in formula_4 such that formula_10 for all formula_0 in some neighborhood of formula_3, then formula_1 is constant on formula_4. Proof: Apply the maximum modulus principle to formula_11. Sketches of proofs. Using the maximum principle for harmonic functions. One can use the equality formula_12 for complex natural logarithms to deduce that formula_13 is a harmonic function. Since formula_3 is a local maximum for this function also, it follows from the maximum principle that formula_14 is constant. Then, using the Cauchy–Riemann equations we show that formula_15 = 0, and thus that formula_16 is constant as well. Similar reasoning shows that formula_17 can only have a local minimum (which necessarily has value 0) at an isolated zero of formula_16. Using Gauss's mean value theorem. Another proof works by using Gauss's mean value theorem to "force" all points within overlapping open disks to assume the same value as the maximum. The disks are laid such that their centers form a polygonal path from the value where formula_16 is maximized to any other point in the domain, while being totally contained within the domain. Thus the existence of a maximum value implies that all the values in the domain are the same, thus formula_16 is constant. Using Cauchy's Integral Formula. As formula_4 is open, there exists formula_18 (a closed ball centered at formula_19 with radius formula_20) such that formula_21. We then define the boundary of the closed ball with positive orientation as formula_22. Invoking Cauchy's integral formula, we obtain formula_23 For all formula_24, formula_25, so formula_26. 
This also holds for all balls of radius less than formula_27 centered at formula_28. Therefore, formula_29 for all formula_30. Now consider the constant function formula_31 for all formula_32. Then one can construct a sequence of distinct points located in formula_18 where the holomorphic function formula_33 vanishes. As formula_18 is compact, the sequence has an accumulation point in formula_34; by the identity theorem, formula_35 then vanishes everywhere in formula_4, which implies formula_29 for all formula_32. Physical interpretation. A physical interpretation of this principle comes from the heat equation. That is, since formula_36 is harmonic, it is thus the steady state of a heat flow on the region formula_4. If a strict maximum were attained in the interior of formula_4, the heat at this maximum would be dispersing to the points around it, which would contradict the assumption that this represents the steady state of a system. Applications. The maximum modulus principle has many uses in complex analysis, and may be used to prove the following: the fundamental theorem of algebra, the Schwarz lemma, the Phragmén–Lindelöf principle, the Borel–Carathéodory theorem, and the Hadamard three-lines theorem.
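A crude numerical illustration of the principle is easy to set up. The sketch below (Python with the numpy library assumed; the sample function "z"2 + 1 is our choice) samples the modulus on a grid covering the closed unit disk and confirms that the maximum is attained on the boundary circle.

    import numpy as np

    xs = np.linspace(-1.0, 1.0, 801)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y
    inside = np.abs(Z) <= 1.0                     # the closed unit disk

    mod = np.where(inside, np.abs(Z**2 + 1), -np.inf)
    i, j = np.unravel_index(np.argmax(mod), mod.shape)
    print(mod[i, j], abs(Z[i, j]))                # ~2.0, attained with |z| = 1 (z = +1 or -1)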
[ { "math_id": 0, "text": "z" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "|f|" }, { "math_id": 3, "text": "z_0" }, { "math_id": 4, "text": "D" }, { "math_id": 5, "text": "\\mathbb{C}" }, { "math_id": 6, "text": "|f(z_0)|\\ge |f(z)|" }, { "math_id": 7, "text": "\\overline{D}" }, { "math_id": 8, "text": "f \\colon \\overline{D} \\to \\mathbb{C}" }, { "math_id": 9, "text": "|f(z)|" }, { "math_id": 10, "text": "0 < |f(z_0)| \\le |f(z)|" }, { "math_id": 11, "text": "1/f" }, { "math_id": 12, "text": "\\log f(z) = \\ln |f(z)| + i\\arg f(z)" }, { "math_id": 13, "text": " \\ln |f (z) | " }, { "math_id": 14, "text": "| f (z) | " }, { "math_id": 15, "text": "f'(z)" }, { "math_id": 16, "text": "f(z)" }, { "math_id": 17, "text": " | f (z) | " }, { "math_id": 18, "text": "\\overline{B}(a,r)" }, { "math_id": 19, "text": "a \\in D" }, { "math_id": 20, "text": "r>0" }, { "math_id": 21, "text": "\\overline{B}(a,r) \\subset D" }, { "math_id": 22, "text": "\\gamma(t)=a+re^{it}, t \\in [0,2\\pi]" }, { "math_id": 23, "text": " 0 \\leq \\int_{0}^{2\\pi} |f(a)|-| f(a+re^{it})| \\,dt \\leq 0 " }, { "math_id": 24, "text": "t \\in [0,2\\pi]" }, { "math_id": 25, "text": "| f(a) |-| f(a+re^{it}) | \\geq 0" }, { "math_id": 26, "text": "| f(a)|=| f(a+re^{it}) |" }, { "math_id": 27, "text": "r" }, { "math_id": 28, "text": "a" }, { "math_id": 29, "text": "f(z)=f(a)" }, { "math_id": 30, "text": "z \\in \\overline{B}(a,r)" }, { "math_id": 31, "text": "g(z)=f(a)" }, { "math_id": 32, "text": "z \\in D" }, { "math_id": 33, "text": "g-f" }, { "math_id": 34, "text": "\\overline{B}(a,r) \\in D" }, { "math_id": 35, "text": "f-g" }, { "math_id": 36, "text": " \\log | f(z) | " } ]
https://en.wikipedia.org/wiki?curid=880710
880754
Restricted representation
In group theory, restriction forms a representation of a subgroup using a known representation of the whole group. Restriction is a fundamental construction in representation theory of groups. Often the restricted representation is simpler to understand. Rules for decomposing the restriction of an irreducible representation into irreducible representations of the subgroup are called branching rules, and have important applications in physics. For example, in the case of explicit symmetry breaking, the symmetry group of the problem is reduced from the whole group to one of its subgroups. In quantum mechanics, this reduction in symmetry appears as a splitting of degenerate energy levels into multiplets, as in the Stark or Zeeman effect. The induced representation is a related operation that forms a representation of the whole group from a representation of a subgroup. The relation between restriction and induction is described by Frobenius reciprocity and the Mackey theorem. Restriction to a normal subgroup behaves particularly well and is often called Clifford theory after the theorem of A. H. Clifford. Restriction can be generalized to other group homomorphisms and to other rings. For any group "G", its subgroup "H", and a linear representation "ρ" of "G", the restriction of "ρ" to "H", denoted formula_0, is a representation of "H" on the same vector space by the same operators: formula_1 Classical branching rules. Classical branching rules describe the restriction of an irreducible complex representation (π, "V") of a classical group "G" to a classical subgroup "H", i.e. the multiplicity with which an irreducible representation ("σ", "W") of "H" occurs in π. By Frobenius reciprocity for compact groups, this is equivalent to finding the multiplicity of π in the unitary representation induced from σ. Branching rules for the classical groups were worked out by a number of authors. The results are usually expressed graphically using Young diagrams to encode the signatures used classically to label irreducible representations, familiar from classical invariant theory. Hermann Weyl and Richard Brauer discovered a systematic method for determining the branching rule when the groups "G" and "H" share a common maximal torus: in this case the Weyl group of "H" is a subgroup of that of "G", so that the rule can be deduced from the Weyl character formula. A systematic modern interpretation has been given by Howe in the context of his theory of dual pairs. The special case where σ is the trivial representation of "H" was first used extensively by Hua in his work on the Szegő kernels of bounded symmetric domains in several complex variables, where the Shilov boundary has the form "G"/"H". More generally the Cartan-Helgason theorem gives the decomposition when "G"/"H" is a compact symmetric space, in which case all multiplicities are one; a generalization to arbitrary σ has since been obtained. Similar geometric considerations have also been used to rederive Littlewood's rules, which involve the celebrated Littlewood–Richardson rules for tensoring irreducible representations of the unitary groups. Littelmann has found generalizations of these rules to arbitrary compact semisimple Lie groups, using his path model, an approach to representation theory close in spirit to the theory of crystal bases of Lusztig and Kashiwara. His methods yield branching rules for restrictions to subgroups containing a maximal torus. The study of branching rules is important in classical invariant theory and its modern counterpart, algebraic combinatorics.
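Before turning to the classical groups, restriction can be illustrated concretely for a finite group. The following minimal Python sketch (the choice of the symmetric group S3 and its cyclic subgroup C3 is ours) restricts the two-dimensional irreducible character of S3 to C3 and reads off the decomposition from character inner products; it splits into the two nontrivial one-dimensional characters, consistent with Clifford's theorem below.

    import cmath

    w = cmath.exp(2j * cmath.pi / 3)               # primitive cube root of unity

    # values of the 2-dimensional irreducible character of S3 on C3 = {e, c, c^2}
    chi = {0: 2, 1: -1, 2: -1}

    def multiplicity(k):
        # inner product with the C3 character sending c^j to w**(j*k)
        total = sum(chi[j] * (w ** (j * k)).conjugate() for j in range(3))
        return round((total / 3).real)

    print([multiplicity(k) for k in range(3)])     # [0, 1, 1]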
Example. The unitary group "U"("N") has irreducible representations labelled by signatures formula_2 where the "f""i" are integers. In fact if a unitary matrix "U" has eigenvalues "z""i", then the character of the corresponding irreducible representation πf is given by formula_3 The branching rule from "U"("N") to "U"("N" – 1) states that the restriction of πf to "U"("N" – 1) is the multiplicity-free direct sum of the irreducible representations πg whose signatures interleave with f, that is, "f"1 ≥ "g"1 ≥ "f"2 ≥ "g"2 ≥ ··· ≥ "g""N"−1 ≥ "f""N". Example. The unitary symplectic group or quaternionic unitary group, denoted Sp("N") or "U"("N", H), is the group of all transformations of H"N" which commute with right multiplication by the quaternions H and preserve the H-valued hermitian inner product formula_4 on H"N", where "q"* denotes the quaternion conjugate to "q". Realizing quaternions as 2 × 2 complex matrices, the group Sp("N") is just the group of block matrices ("q""ij") in SU(2"N") with formula_5 where "α""ij" and "β""ij" are complex numbers. Each matrix "U" in Sp("N") is conjugate to a block diagonal matrix with entries formula_6 where |"z""i"| = 1. Thus the eigenvalues of "U" are ("z""i"±1). The irreducible representations of Sp("N") are labelled by signatures formula_7 where the "f""i" are integers. The character of the corresponding irreducible representation "σ"f is given by formula_8 The branching rule from Sp("N") to Sp("N" – 1) states that the restriction of "σ"f decomposes as a direct sum of irreducible representations "σ"g of Sp("N" – 1) with multiplicities "m"(f, g). Here "f""N"+1 = 0 and the multiplicity "m"(f, g) is given by formula_9 where formula_10 is the non-increasing rearrangement of the 2"N" non-negative integers ("f"i), ("g""j") and 0. Example. The branching from U(2"N") to Sp("N") relies on two identities of Littlewood: formula_11 where Πf,0 is the irreducible representation of "U"(2"N") with signature "f"1 ≥ ··· ≥ "f""N" ≥ 0 ≥ ··· ≥ 0. formula_12 where "f""i" ≥ 0. The branching rule from U(2"N") to Sp("N") is given by Littlewood's rule: the multiplicity of "σ"g in the restriction of Πf,0 is the sum of the coefficients "M" (g, h; f) over signatures h with even column-lengths ("h"2"i"−1 = "h"2"i"), where all the signatures are non-negative and the coefficient "M" (g, h; k) is the multiplicity of the irreducible representation πk of "U"("N") in the tensor product πg formula_13 πh. It is given combinatorially by the Littlewood–Richardson rule, the number of lattice permutations of the skew diagram k/h of weight g. There is an extension of Littlewood's branching rule to arbitrary signatures. The Littlewood–Richardson coefficients "M" (g, h; f) are extended to allow the signature f to have 2"N" parts but restricting g to have even column-lengths ("g"2"i"−1 = "g"2"i"). In this case the formula reads as before with "M" replaced by "M""N", where "M""N" (g, h; f) counts the number of lattice permutations of f/h of weight g for which 2"j" + 1 appears no lower than row "N" + "j" of f for 1 ≤ "j" ≤ |"g"|/2. Example. The special orthogonal group SO("N") has irreducible ordinary and spin representations labelled by signatures formula_14 for "N" = 2"n" and formula_15 for "N" = 2"n" + 1. The "f""i" are taken in Z for ordinary representations and in ½ + Z for spin representations. In fact if an orthogonal matrix "U" has eigenvalues "z""i"±1 for 1 ≤ "i" ≤ "n", then the character of the corresponding irreducible representation πf is given by formula_16 for "N" = 2"n" and by formula_17 for "N" = 2"n"+1. The branching rules from SO("N") to SO("N" – 1) state that the restriction of πf is the multiplicity-free direct sum of the representations πg satisfying "f"1 ≥ "g"1 ≥ "f"2 ≥ "g"2 ≥ ··· ≥ "f""n" ≥ |"g""n"| for "N" = 2"n" + 1 and "f"1 ≥ "g"1 ≥ "f"2 ≥ "g"2 ≥ ··· ≥ "g""n"−1 ≥ |"f""n"| for "N" = 2"n", where the differences "f""i" − "g""i" must be integers. Gelfand–Tsetlin basis. Since the branching rules from formula_18 to formula_19 or formula_20 to formula_21 have multiplicity one, the irreducible summands corresponding to smaller and smaller "N" will eventually terminate in one-dimensional subspaces.
In this way Gelfand and Tsetlin were able to obtain a basis of any irreducible representation of formula_18 or formula_20 labelled by a chain of interleaved signatures, called a Gelfand–Tsetlin pattern. Explicit formulas for the action of the Lie algebra on the Gelfand–Tsetlin basis are known. Specifically, for formula_22, the Gelfand–Tsetlin basis of the irreducible representation of formula_23 with dimension formula_24 is given by the complex spherical harmonics formula_25. For the remaining classical group formula_26, the branching is no longer multiplicity free, so that if "V" and "W" are irreducible representations of formula_27 and formula_26, the space of intertwiners formula_28 can have dimension greater than one. It turns out that the Yangian formula_29, a Hopf algebra introduced by Ludwig Faddeev and collaborators, acts irreducibly on this multiplicity space, a fact which made it possible to extend the construction of Gelfand–Tsetlin bases to formula_26. Clifford's theorem. In 1937 Alfred H. Clifford proved the following result on the restriction of finite-dimensional irreducible representations from a group "G" to a normal subgroup "N" of finite index: Theorem. Let π: "G" formula_30 GL("n","K") be an irreducible representation with "K" a field. Then the restriction of π to "N" breaks up into a direct sum of irreducible representations of "N" of equal dimensions. These irreducible representations of "N" lie in one orbit for the action of "G" by conjugation on the equivalence classes of irreducible representations of "N". In particular the number of distinct summands is no greater than the index of "N" in "G". Twenty years later George Mackey found a more precise version of this result for the restriction of irreducible unitary representations of locally compact groups to closed normal subgroups in what has become known as the "Mackey machine" or "Mackey normal subgroup analysis". Abstract algebraic setting. From the point of view of category theory, restriction is an instance of a forgetful functor. This functor is exact, and its left adjoint functor is called "induction". The relation between restriction and induction in various contexts is called Frobenius reciprocity. Taken together, the operations of induction and restriction form a powerful set of tools for analyzing representations. This is especially true whenever the representations have the property of complete reducibility, for example, in representation theory of finite groups over a field of characteristic zero. Generalizations. This rather evident construction may be extended in numerous and significant ways. For instance we may take any group homomorphism φ from "H" to "G", instead of the inclusion map, and define the restricted representation of "H" by the composition formula_31 We may also apply the idea to other categories in abstract algebra: associative algebras, rings, Lie algebras, Lie superalgebras, Hopf algebras to name some. Representations or modules "restrict" to subobjects, or via homomorphisms.
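The multiplicity-one branching from "U"("N") to "U"("N" − 1) that underlies the Gelfand–Tsetlin construction can be checked on small examples by comparing dimensions. The sketch below (plain Python; the sample signature (4, 2, 1, 0) is our choice) enumerates the interleaving signatures and verifies, using the Weyl dimension formula, that their dimensions add up to the dimension of the original representation.

    from itertools import combinations

    def dim_u(f):
        # Weyl dimension formula for a U(N) signature f_1 >= ... >= f_N
        num, den = 1, 1
        for i, j in combinations(range(len(f)), 2):
            num *= f[i] - f[j] + j - i
            den *= j - i
        return num // den

    def interleaving(f):
        # all U(N-1) signatures g with f_1 >= g_1 >= f_2 >= ... >= g_{N-1} >= f_N
        def rec(i, prefix):
            if i == len(f) - 1:
                yield prefix
                return
            for g in range(f[i + 1], f[i] + 1):
                yield from rec(i + 1, prefix + (g,))
        yield from rec(0, ())

    f = (4, 2, 1, 0)
    print(dim_u(f), sum(dim_u(g) for g in interleaving(f)))   # 140 140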
[ { "math_id": 0, "text": " \\rho \\, \\Big|_H " }, { "math_id": 1, "text": " \\rho\\,\\Big|_H(h) = \\rho(h). " }, { "math_id": 2, "text": "\\mathbf{f} \\,\\colon \\,f_1\\ge f_2\\ge \\cdots \\ge f_N" }, { "math_id": 3, "text": " \\operatorname{Tr} \\pi_{\\mathbf{f}}(U) = {\\det z_j^{f_i +N -i}\\over \\prod_{i<j} (z_i-z_j)}." }, { "math_id": 4, "text": " (q_1,\\ldots,q_N)\\cdot (r_1,\\ldots,r_N) = \\sum r_i^*q_i" }, { "math_id": 5, "text": "q_{ij}=\\begin{pmatrix}\n\\alpha_{ij}&\\beta_{ij}\\\\\n-\\overline{\\beta}_{ij}&\\overline{\\alpha}_{ij}\n\\end{pmatrix}," }, { "math_id": 6, "text": "q_i=\\begin{pmatrix}\nz_i&0\\\\\n0&\\overline{z}_i\n\\end{pmatrix}," }, { "math_id": 7, "text": "\\mathbf{f} \\,\\colon \\,f_1\\ge f_2\\ge \\cdots \\ge f_N\\ge 0" }, { "math_id": 8, "text": " \\operatorname{Tr} \\sigma_{\\mathbf{f}}(U) = {\\det z_j^{f_i +N -i +1 } - z_j^{-f_i - N +i -1}\\over \\prod (z_i-z_i^{-1})\\cdot \\prod_{i<j} (z_i +z_i^{-1} - z_j - z_j^{-1})}." }, { "math_id": 9, "text": " m(\\mathbf{f},\\mathbf{g})=\\prod_{i=1}^N (a_i - b_i +1)" }, { "math_id": 10, "text": " a_1\\ge b_1 \\ge a_2 \\ge b_2 \\ge \\cdots \\ge a_N \\ge b_N=0" }, { "math_id": 11, "text": "\n\\begin{align}\n& \\sum_{f_1\\ge f_2\\ge f_N\\ge 0} \\operatorname{Tr}\\Pi_{\\mathbf{f},0}(z_1,z_1^{-1},\\ldots, z_N,z_N^{-1}) \\cdot \\operatorname{Tr}\\pi_{\\mathbf{f}}(t_1,\\ldots,t_N) \\\\[5pt]\n= {} & \\sum_{f_1\\ge f_2\\ge f_N\\ge 0} \\operatorname{Tr}\\sigma_{\\mathbf{f}}(z_1,\\ldots, z_N) \\cdot \\operatorname{Tr}\\pi_{\\mathbf{f}}(t_1,\\ldots,t_N)\\cdot \\prod_{i<j} (1-z_iz_j)^{-1},\n\\end{align}\n" }, { "math_id": 12, "text": "\\prod_{i<j} (1-z_iz_j)^{-1} = \\sum_{f_{2i-1}=f_{2i}} \\operatorname{Tr} \\pi_{f}(z_1,\\ldots,z_N)," }, { "math_id": 13, "text": "\\otimes" }, { "math_id": 14, "text": " f_1\\ge f_2 \\ge \\cdots \\ge f_{n-1}\\ge|f_n|" }, { "math_id": 15, "text": " f_1 \\ge f_2 \\ge \\cdots \\ge f_n \\ge 0" }, { "math_id": 16, "text": " \\operatorname{Tr} \\, \\pi_{\\mathbf{f}}(U) = {\\det (z_j^{f_i +n -i} + z_j^{-f_i-n +i}) \\over \\prod_{i<j} (z_i +z_i^{-1}-z_j-z_j^{-1})}" }, { "math_id": 17, "text": "\\operatorname{Tr} \\pi_{\\mathbf{f}}(U) = {\\det (z_j^{f_i +1/2 +n -i} - z_j^{-f_i -1/2-n +i})\\over \\prod_{i<j} (z_i +z_i^{-1}-z_j-z_j^{-1}) \\cdot\\prod_k(z_k^{1/2} -z_k^{-1/2})}" }, { "math_id": 18, "text": "U(N)" }, { "math_id": 19, "text": "<math>U(N-1)</matH>" }, { "math_id": 20, "text": "SO(N)" }, { "math_id": 21, "text": "<matH>SO(N-1)</math>" }, { "math_id": 22, "text": "N=3" }, { "math_id": 23, "text": "SO(3)" }, { "math_id": 24, "text": "2l+1" }, { "math_id": 25, "text": "\\{Y_m^l | -l\\leq m\\leq l\\}" }, { "math_id": 26, "text": "Sp(N)" }, { "math_id": 27, "text": "Sp(N-1)" }, { "math_id": 28, "text": "Hom_{Sp(N-1)}(V,W)" }, { "math_id": 29, "text": "Y(\\mathfrak{gl}_2)" }, { "math_id": 30, "text": "\\rightarrow " }, { "math_id": 31, "text": " \\rho\\circ\\varphi \\, " } ]
https://en.wikipedia.org/wiki?curid=880754
8808200
Roger Evans Howe
American mathematician Roger Evans Howe (born May 23, 1945) is the William R. Kenan, Jr. Professor Emeritus of Mathematics at Yale University, and Curtis D. Robert Endowed Chair in Mathematics Education at Texas A&amp;M University. He is known for his contributions to representation theory, in particular for the notion of a reductive dual pair and the Howe correspondence, and his contributions to mathematics education. Biography. He attended Ithaca High School, then Harvard University as an undergraduate, becoming a Putnam Fellow in 1964. He obtained his Ph.D. from University of California, Berkeley in 1969. His thesis, titled "On representations of nilpotent groups", was written under the supervision of Calvin Moore. Between 1969 and 1974, Howe taught at the State University of New York in Stony Brook before joining the Yale faculty in 1974. His doctoral students include Ju-Lee Kim, Jian-Shu Li, Zeev Rudnick, Eng-Chye Tan, and Chen-Bo Zhu. He moved to Texas A&amp;M University in 2015. He has been a fellow of the American Academy of Arts and Sciences since 1993, and a member of the National Academy of Sciences since 1994. Howe received a Lester R. Ford Award in 1984. In 2006 he was awarded the American Mathematical Society Distinguished Public Service Award in recognition of his "multifaceted contributions to mathematics and to mathematics education." In 2012 he became a fellow of the American Mathematical Society. In 2015 he received the inaugural Award for Excellence in Mathematics Education. A conference in his honor was held at the National University of Singapore in 2006, and at Yale University in 2015.
[ { "math_id": 0, "text": "Gl_n" } ]
https://en.wikipedia.org/wiki?curid=8808200
8808602
Kaufmann–Bucherer–Neumann experiments
The Kaufmann–Bucherer–Neumann experiments measured the dependence of the inertial mass (or momentum) of an object on its velocity. The historical importance of this series of experiments performed by various physicists between 1901 and 1915 is due to the results being used to test the predictions of special relativity. The developing precision and data analysis of these experiments and the resulting influence on theoretical physics during those years is still a topic of active historical discussion, since the early experimental results at first contradicted Einstein's then newly published theory (1905), but later versions of this experiment confirmed it. For modern experiments of that kind, see Tests of relativistic energy and momentum, for general information see Tests of special relativity. Historical context. In 1896, Henri Becquerel discovered radioactive decay in a variety of chemical elements. Subsequently, the beta radiation from these decays was discovered to be the emission of a negatively charged particle. Later these particles were identified with the electron, discovered in cathode ray experiments by J. J. Thomson in 1897. This was connected with the theoretical prediction of the electromagnetic mass by J. J. Thomson in 1881, who showed that the electromagnetic energy contributes to the mass of a moving charged body. Thomson (1893) and George Frederick Charles Searle (1897) also calculated that this mass depends on velocity, and that it becomes infinitely great when the body moves at the speed of light with respect to the luminiferous aether. Also Hendrik Antoon Lorentz (1899, 1900) assumed such a velocity dependence as a consequence of his theory of electrons. At this time, the electromagnetic mass was separated into "transverse" and "longitudinal" mass, and was sometimes denoted as "apparent mass", while the invariant Newtonian mass was denoted as "real mass". On the other hand, it was the belief of the German theoretician Max Abraham that all mass would ultimately prove to be of electromagnetic origin, and that Newtonian mechanics would become subsumed into the laws of electrodynamics. In 1904, Lorentz proposed that the mass along the trajectory of motion (longitudinal mass) and the mass in the directions perpendicular to the motion (transverse) are different. In his theory, the longitudinal mass formula_0 and the transverse mass formula_1, where formula_2 is the Lorentz factor and formula_3 is the rest mass of the electron. The concept of (transverse) electromagnetic mass formula_4, which was based on specific models of the electron, was later transmuted into the purely kinematical concept of relativistic mass which concerns all forms of energy, not only electromagnetic energy. Currently, however, the concept of relativistic mass, although still frequently referred to in popular works on relativity, is now seldom used among professional working physicists, and has been superseded by the expressions for relativistic energy and momentum, which also predict that the speed of light cannot be reached by massive bodies. This is because all of those relations involve the Lorentz factor: formula_5 Therefore, the Bucherer–Kaufmann–Neumann experiments can be seen as early tests of relativistic energy and momentum. (For the following historical description of the experiments, the concepts of "transverse" or "relativistic mass" are still used). Kaufmann's experiments. First experiments. 
Walter Kaufmann began to experiment with beta rays using a device similar to a cathode ray tube, where the source of the electrons was the decay of radium that was placed in an evacuated container. (See Fig. 1) Such rays emitted from radium were called "Becquerel rays" at that time. Contrary to the then known cathode rays which reached speeds only up to 0.3"c, c" being the speed of light, Becquerel rays reached velocities up to 0.9"c". However, since beta particles have different velocities, the radiation was inhomogeneous. Therefore, Kaufmann applied electric and magnetic fields aligned "parallel" to each other, so that the deflections caused by them were perpendicular to each other. Their impacts on a photographic plate produced a deflection curve, whose individual points corresponded to a certain velocity and a certain mass of the electrons. By reversing the charge on the condenser, thus inverting the electric field, two symmetric curves could be obtained, whose center line determined the direction of the magnetic deflection. Kaufmann published a first analysis of his data in 1901 – he actually was able to measure a decrease of the charge-to-mass ratio, thus demonstrating that mass or momentum increases with velocity. Using Searle's formula (1897) for the electromagnetic energy increase of charged bodies with velocity, he calculated the increase of the electron's electromagnetic mass as a function of velocity: formula_6, Kaufmann noticed that the observed increase cannot be explained by this formula, so he separated the measured "total mass" into a "mechanical (true) mass" and an "electromagnetic (apparent) mass", the mechanical mass being considerably greater than the electromagnetic one. However, he made two mistakes: As shown by Max Abraham, Kaufmann overlooked that Searle's formula only applies in the longitudinal direction, but for deflection measurements the formula for the transverse direction was important. Therefore, Abraham introduced the "transverse electromagnetic mass" with the following velocity dependence: formula_7 Kaufmann also made a calculation mistake in deriving the deflection curves. Those errors were corrected by him in 1902. In 1902 and 1903 Kaufmann performed another series of tests with updated and improved experimental techniques. The results were interpreted by him as a confirmation of Abraham's theory and of the assumption that the electron's mass is completely of electromagnetic origin. Hermann Starke conducted similar measurements in 1903, although he used cathode rays limited to 0.3c. The results that he obtained were interpreted by him as being in agreement with those of Kaufmann. Competing theories. In 1902, Max Abraham published a theory based on the assumption that the electron was a rigid, perfect sphere, with its charge being distributed evenly on its surface. As explained above, he introduced the so-called "transverse electromagnetic mass" besides the "longitudinal electromagnetic mass", and argued that the entire electron mass is of electromagnetic origin. Meanwhile, Lorentz (1899, 1904) extended his theory of electrons, assuming that an electron's charge was spread throughout its volume, and that in Kaufmann's experiment, its shape would be compressed in the direction of motion and would stay unchanged in the transverse directions. To Kaufmann's surprise, Lorentz could show that his model agreed with his experimental data as well. 
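The competing predictions for the transverse mass can be tabulated for a few velocities. In the sketch below (plain Python), the Lorentz–Einstein value is simply the Lorentz factor, as stated above; the closed-form expression used for Abraham's transverse mass is the commonly quoted one and should be treated as an assumption of this sketch rather than as a formula taken from the text.

    import math

    def lorentz(beta):
        # Lorentz-Einstein prediction: m_T / m_0 is the Lorentz factor
        return 1.0 / math.sqrt(1.0 - beta**2)

    def abraham(beta):
        # commonly quoted form of Abraham's transverse electromagnetic mass,
        # in units of the rest mass (an assumption of this sketch)
        return 0.75 / beta**2 * ((1 + beta**2) / (2 * beta)
                                 * math.log((1 + beta) / (1 - beta)) - 1)

    for beta in (0.3, 0.5, 0.7, 0.9):
        print(beta, round(lorentz(beta), 3), round(abraham(beta), 3))
    # Below about 0.5c the two predictions differ by only a few percent,
    # which is why the early deflection experiments struggled to separate them.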
This model was further elaborated and perfected by Henri Poincaré (1905), so that Lorentz's theory was now in agreement with the principle of relativity. A similar theory was developed by Alfred Bucherer and Paul Langevin in 1904, with the difference that the total volume occupied by the deformed electron was assumed unchanged. It turned out that this theory's prediction was closer to Abraham's theory than to Lorentz's. Finally, Albert Einstein's theory of special relativity (1905) predicted the change of the point-like electron's mass due to the properties of the transformation between the rest-frame of the particle and the laboratory frame in which the measurements were performed. Mathematically, this calculation predicts the same dependence between velocity and mass as Lorentz's theory, although it assumes very different physical concepts. As regards the increase of transverse electromagnetic mass, the predictions of the various theories were (Fig. 3): formula_8 Experiments of 1905. In order to make a decision between those theories, Kaufmann again performed his experiments with higher precision. Kaufmann believed that he had conclusively disproven the formula of Lorentz–Einstein, and had therefore also disproven the principle of relativity. In his view, the only remaining options were between the theories of Abraham and Bucherer. Lorentz was perplexed and wrote that he was "at the end of his Latin". However, criticism of Kaufmann's experiment arose. Shortly after Kaufmann published his results and the conclusions of his analysis, Max Planck decided to re-analyze the data obtained by the experiment. In 1906 and 1907, Planck published his own conclusion on the behavior of the inertial mass of electrons with high speeds. Using just nine data points from Kaufmann's publication in 1905, he recalculated the exact setup of the fields for each point, and compared the measurements against the predictions of the two competing theories. He showed that Kaufmann's results are not fully decisive and would lead to superluminal velocities. Einstein remarked in 1907 that although Kaufmann's results were better in agreement with Abraham's and Bucherer's theories than with his own, the foundations of the other theories were not plausible and therefore had only little probability of being correct. Subsequent experiments. Bucherer. The main problem of Kaufmann's experiments was his use of "parallel" magnetic and electric fields, as pointed out by Adolf Bestelmeyer (1907). Using a method based on "perpendicular" magnetic and electric fields (introduced by J. J. Thomson and further developed to a velocity filter by Wilhelm Wien), Bestelmeyer obtained considerably different values for the charge-to-mass ratio for cathode rays up to 0.3c. However, Bestelmeyer added that his experiment was not precise enough to provide a definite decision between the theories. Therefore, Alfred Bucherer (1908) conducted a precise measurement using a velocity filter similar to Bestelmeyer's. See Figs. 4 &amp; 5. A radium beta source was placed at the center of a circular condenser consisting of two silvered glass plates spaced 0.25 mm apart and charged to about 500 volts, set in a homogeneous 140 Gauss magnetic field. The radium emitted beta rays in all directions, but in any particular direction α, only those beta rays exited the velocity filter whose speed was such that the electric and magnetic fields exactly compensated each other. 
After leaving the condenser, the rays were deflected by the magnetic field, and exposed a photographic plate set parallel to the condenser rim and perpendicular to the non-deflected rays. For his final analysis, Bucherer recalculated the measured values of five runs with Lorentz's and Abraham's formulas respectively, in order to obtain the charge-to-mass ratio as if the electrons were at rest. Since the ratio does not vary for resting electrons, the data points should lie on a single horizontal line (see Fig. 6). However, this was approximately the case only when the data were calculated with Lorentz's formula, while the results of Abraham's formula sharply deviated (the red and blue lines represent the average value according to both formulas). The agreement with the Lorentz–Einstein formula was interpreted by Bucherer as a confirmation of the relativity principle and the Lorentz–Einstein theory – a result immediately applauded by Lorentz, Einstein, and Hermann Minkowski. In addition, Bucherer's apparatus was improved in 1909 by his student Kurt Wolz, who also obtained agreement with the Lorentz–Einstein formula (though he did not compare Abraham's formula with his data, Fig. 7). Even though many physicists accepted Bucherer's result, some doubts still remained. For instance, Bestelmeyer published a paper in which he cast doubt on the validity of Bucherer's result. He argued that one experiment alone cannot establish the correctness of an important physical law, that Bucherer's result might be significantly distorted by non-compensated rays reaching the photographic plate, and that extensive data protocols and error analysis are necessary. A polemic dispute between those two scholars followed in a series of publications, in which Bestelmeyer argued that Wolz's experiments were affected by the same problems. Hupka. Unlike Kaufmann and Bucherer, Karl Erich Hupka (1909) used cathode rays at 0.5c for his measurements. The radiation (generated at a copper cathode) was strongly accelerated by the field between cathode and anode in a highly evacuated discharge tube. After passing the anode, which served as a diaphragm, the ray traveled with constant velocity and cast the shadow image of two Wollaston wires onto a phosphorescent screen behind a second diaphragm. If a current was generated behind this diaphragm, the ray was deflected and the shadow image was displaced. The results were in agreement with Lorentz–Einstein, even though Hupka remarked that this experiment did not represent a definitive result. Subsequently, W. Heil published some papers dealing with criticisms and interpretations of the result, to which Hupka replied. Neumann and Guye/Lavanchy. In 1914, Günther Neumann conducted new measurements using Bucherer's equipment, in particular making some improvements to address Bestelmeyer's criticisms, especially the question of non-compensated rays, and making extensive refinements to the data protocols. The calculation method was the same as Bucherer's (see Fig. 6). Also in this experiment, the data corresponding to Lorentz's formula lie nearly on a horizontal line as required, while the data obtained from Abraham's formula sharply deviate (see Fig. 8). Neumann concluded that his experiments were in agreement with those of Bucherer and Hupka, definitively confirming the Lorentz–Einstein formula in the range 0.4–0.7c and refuting Abraham's formula.
Instrumental uncertainties occurred in the range 0.7–0.8c, so the deviation from the Lorentz–Einstein formula in this range was not considered significant. In 1915, Charles-Eugène Guye and Charles Lavanchy measured the deflection of cathode rays at 0.25c–0.5c. They used a tube with a cathode and anode in order to accelerate the rays. A diaphragm at the anode produced a beam which was deflected. A screen was placed at the end of the apparatus, where the impacts were photographed by a camera. They subsequently computed the ratio of transverse electromagnetic mass "mT" and rest mass "m0", indicated by the red and blue curves, and obtained good agreement with the Lorentz–Einstein formula (see Fig. 9), supplementing Neumann's result. Neumann's and Guye/Lavanchy's experiments were considered by many as conclusively proving the Lorentz–Einstein formula. Lorentz summarized these efforts in 1915: "Later experiments [..] have confirmed the formula [..] for the transverse electromagnetic mass, so that, in all probability, the only objection that could be raised against the hypothesis of the deformable electron and the principle of relativity has now been removed." Further development. Zahn & Spees (1938) and Faragó & Lajos Jánossy (1954) argued that many assumptions employed in those early experiments as to the nature and the properties of electrons and the experimental setup were wrong or imprecise. As with Kaufmann's experiments, the Bucherer–Neumann experiments would only show a qualitative increase of mass, and were incapable of deciding between the competing theories. While the results of those electron deflection experiments were disputed for a long time, the investigations of the fine structure of the hydrogen lines by Karl Glitscher (based on the work of Arnold Sommerfeld) had already in 1917 provided a clear confirmation of the Lorentz–Einstein formula, because the relativistic expressions for momentum and energy were necessary to derive the fine structure, and this constituted a refutation of Abraham's theory. In addition, the first electron deflection experiment with sufficient precision was conducted by Rogers "et al." (1940), who developed an improved setup. The radium decay series yields a spectrum of beta particles with a wide range of energies. The earlier measurements by Kaufmann, Bucherer, and others had used flat parallel plate condensers which provided no focusing of the beta particles. Rogers "et al." (Fig. 10) instead constructed an electrostatic spectrograph capable of resolving the energy maxima of individual beta particle lines from the radium decay series. The electrostatic spectrograph was constructed of segments of two cylinders and was enclosed in an evacuated iron box. The beta rays were emitted from a fine platinum wire coated with radium active-deposit. The dispersed rays were incident on a slit in front of a Geiger counter. The data from this experiment was combined with previous magnetic spectrometer measurements of "Hρ" to yield the charge-to-mass ratio, which was subsequently compared with the predictions of Lorentz and Abraham for the ratio of transverse mass and rest mass. The points were all on the curve representing the Lorentz–Einstein formula to within 1% (see Fig. 11). This experiment is seen as being sufficiently precise to distinguish between the theories. Modern tests.
Since then, many additional experiments concerning the relativistic energy–momentum relation have been conducted, including measurements of the deflection of electrons, all of them confirming special relativity to high precision. Also in modern particle accelerators, the predictions of special relativity are routinely confirmed. References. Primary sources. Secondary sources.
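As a rough illustration of how the competing predictions summarized in formula_8 diverge at high velocity, the following minimal Python sketch (an illustrative addition, not part of the historical analyses) evaluates the three velocity-dependence factors for the transverse mass; the only assumption made here is that "lg" in the historical formulas denotes the natural logarithm.

```python
# Compare the predicted transverse-mass factors phi(beta) = m_T / m for the
# three competing theories (Abraham, Lorentz-Einstein, Bucherer-Langevin).
import numpy as np

def phi_abraham(beta):
    return 3.0 / (4.0 * beta**2) * ((1.0 + beta**2) / (2.0 * beta)
                                    * np.log((1.0 + beta) / (1.0 - beta)) - 1.0)

def phi_lorentz_einstein(beta):
    return (1.0 - beta**2) ** -0.5

def phi_bucherer_langevin(beta):
    return (1.0 - beta**2) ** (-1.0 / 3.0)

for beta in (0.3, 0.5, 0.7, 0.9):   # velocities as fractions of c
    print(f"beta={beta:.1f}  Abraham={phi_abraham(beta):.4f}  "
          f"Lorentz-Einstein={phi_lorentz_einstein(beta):.4f}  "
          f"Bucherer-Langevin={phi_bucherer_langevin(beta):.4f}")
```

At β ≈ 0.7, for example, the factors come out to roughly 1.30 (Abraham), 1.40 (Lorentz–Einstein) and 1.25 (Bucherer–Langevin), which is why deflection measurements in this velocity range could in principle discriminate between the theories.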
[ { "math_id": 0, "text": "m_L= {{\\gamma}^3}m" }, { "math_id": 1, "text": "m_T= {\\gamma}m" }, { "math_id": 2, "text": "\\gamma" }, { "math_id": 3, "text": "m" }, { "math_id": 4, "text": "m_T" }, { "math_id": 5, "text": "\\frac{m_{T}}{m}=\\frac{p}{mv}=\\frac{E}{mc^{2}}=\\frac{1}{\\sqrt{1-\\frac{v^{2}}{c^{2}}}}" }, { "math_id": 6, "text": "\\phi(\\beta)=\\frac{3}{4\\beta^{2}}\\left[\\frac{1}{\\beta}\\lg\\frac{1-\\beta}{1+\\beta}+\\frac{2}{1-\\beta^{2}}\\right],\\;\\beta=\\frac{v}{c}" }, { "math_id": 7, "text": "\\phi(\\beta)=\\frac{3}{4\\beta^{2}}\\left(\\frac{1+\\beta^{2}}{2\\beta}\\lg\\frac{1+\\beta}{1-\\beta}-1\\right), " }, { "math_id": 8, "text": "\\begin{align}\n&\\text{Abraham} & \\phi(\\beta) &=\\frac{3}{4\\beta^{2}}\\left(\\frac{1+\\beta^{2}}{2\\beta}\\lg\\frac{1+\\beta}{1-\\beta}-1\\right)\\\\\n&\\text{Lorentz--Einstein} & \\phi(\\beta) &=(1-\\beta^{2})^{-1/2}\\\\\n&\\text{Bucherer--Langevin} & \\phi(\\beta) &=(1-\\beta^{2})^{-1/3}\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=8808602
8810355
Rugosity
Measure of small-scale variations of amplitude in the height of a surface Rugosity, "f"r, is a measure of small-scale variations of amplitude in the height of a surface, formula_0 where "A"r is the real (true, actual) surface area and "A"g is the geometric surface area. Utility. Rugosity calculations are commonly used, for example, in materials science to characterize surfaces and in marine science to characterize seafloor habitats. A common technique to measure seafloor rugosity is Risk's chain-and-tape method, but with the advent of underwater photography less invasive quantitative methods have been developed. Some examples include measuring small-scale seafloor bottom roughness from microtopographic laser scanning (Du Preez and Tunnicliffe 2012), and deriving multi-scale measures of rugosity, slope and aspect from benthic stereo image reconstructions (Friedman et al. 2012). Inconsistency. Despite the popularity of using rugosity for two- and three-dimensional surface analyses, methodological inconsistency has been problematic. Building on recent advances, the new arc-chord ratio (ACR) rugosity index is capable of measuring the rugosity of two-dimensional profiles and three-dimensional surfaces using a single method (Du Preez 2015). The ACR rugosity index is defined as the contoured (real) surface area divided by the area of the surface orthogonally projected onto a plane of best fit (POBF), where the POBF is a function (linear interpolation) of the boundary data only. Using a POBF, instead of an arbitrary horizontal geometric plane, results in an important advantage of the ACR rugosity index: unlike most rugosity indices, ACR rugosity is not confounded by slope. Ecology: As a measure of complexity, rugosity is presumed to be an indicator of the amount of habitat available for colonization by benthic organisms (those attached to the seafloor), and of shelter and foraging area for mobile organisms. Geology: For marine geologists and geomorphologists, rugosity is a useful characteristic in distinguishing different types of seafloors in remote sensing applications (e.g., sonar and laser altimetry from ships, planes or satellites). Oceanography: Among oceanographers, rugosity is recognized to influence small-scale hydrodynamics by converting organized laminar or oscillatory flow into energy-dissipating turbulence. Coral biology: High rugosity is often an indication of the presence of coral, which creates a complex surface as it grows. A rugose seafloor's tendency to generate turbulence is understood to promote the growth of coral and coralline algae by delivering nutrient-rich water after the organisms have depleted the nutrients from the envelope of water immediately surrounding their tissues. References.
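To make the definition concrete, here is a minimal illustrative sketch (not one of the published methods cited above) that estimates formula_0, i.e. f_r = A_r/A_g, for a surface sampled on a regular grid by triangulating each grid cell. Note that it uses the horizontal footprint as A_g rather than the plane of best fit required by the ACR index, so it is confounded by slope in the way discussed above.

```python
# Estimate rugosity f_r = A_r / A_g for a gridded height field z(x, y).
import numpy as np

def rugosity(z, dx=1.0, dy=1.0):
    """z: 2-D array of surface heights on a regular grid with spacing dx, dy."""
    ny, nx = z.shape
    real_area = 0.0
    for i in range(ny - 1):
        for j in range(nx - 1):
            # Corners of one grid cell, split into two triangles.
            p00 = np.array([j * dx,       i * dy,       z[i, j]])
            p10 = np.array([(j + 1) * dx, i * dy,       z[i, j + 1]])
            p01 = np.array([j * dx,       (i + 1) * dy, z[i + 1, j]])
            p11 = np.array([(j + 1) * dx, (i + 1) * dy, z[i + 1, j + 1]])
            for a, b, c in ((p00, p10, p11), (p00, p11, p01)):
                real_area += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    geometric_area = (nx - 1) * dx * (ny - 1) * dy   # planar footprint A_g
    return real_area / geometric_area

# A perfectly flat surface has f_r = 1; any roughness raises it above 1.
flat = np.zeros((50, 50))
bumpy = np.sin(np.linspace(0, 8 * np.pi, 50))[None, :] * np.ones((50, 1))
print(rugosity(flat), rugosity(bumpy))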
[ { "math_id": 0, "text": "f_{\\text{r}} = A_{\\text{r}}/A_{\\text{g}}" } ]
https://en.wikipedia.org/wiki?curid=8810355
8810651
Quasiregular polyhedron
Polyhedron with two kinds of faces In geometry, a quasiregular polyhedron is a uniform polyhedron that has exactly two kinds of regular faces, which alternate around each vertex. They are vertex-transitive and edge-transitive, hence a step closer to regular polyhedra than the semiregular, which are merely vertex-transitive. Their dual figures are face-transitive and edge-transitive; they have exactly two kinds of regular vertex figures, which alternate around each face. They are sometimes also considered quasiregular. There are only two convex quasiregular polyhedra: the cuboctahedron and the icosidodecahedron. Their names, given by Kepler, come from recognizing that their faces are all the faces (turned differently) of the dual-pair cube and octahedron, in the first case, and of the dual-pair icosahedron and dodecahedron, in the second case. These forms representing a pair of a regular figure and its dual can be given a vertical Schläfli symbol formula_0 or r{p,q}, to represent that their faces are all the faces (turned differently) of both the regular {p,q} and the dual regular {q,p}. A quasiregular polyhedron with this symbol will have a vertex configuration p.q.p.q (or (p.q)^2). More generally, a quasiregular figure can have a vertex configuration (p.q)^r, representing r (2 or more) sequences of the faces around the vertex. Tilings of the plane can also be quasiregular, specifically the trihexagonal tiling, with vertex configuration (3.6)^2. Other quasiregular tilings exist on the hyperbolic plane, like the triheptagonal tiling, (3.7)^2. Or more generally: (p.q)^2, with 1/p + 1/q < 1/2. Regular polyhedra and tilings with an even number of faces at each vertex can also be considered quasiregular by differentiating between faces of the same order, by representing them differently, like coloring them alternately (without defining any surface orientation). A regular figure with Schläfli symbol {p,q} can be considered quasiregular, with vertex configuration (p.p)^(q/2), if q is even. Examples: The regular octahedron, with Schläfli symbol {3,4} and 4 being even, can be considered quasiregular as a "tetratetrahedron" (2 sets of 4 triangles of the tetrahedron), with vertex configuration (3.3)^(4/2) = (3a.3b)^2, alternating two colors of triangular faces. The square tiling, with vertex configuration 4^4 and 4 being even, can be considered quasiregular, with vertex configuration (4.4)^(4/2) = (4a.4b)^2, colored as a "checkerboard". The triangular tiling, with vertex configuration 3^6 and 6 being even, can be considered quasiregular, with vertex configuration (3.3)^(6/2) = (3a.3b)^3, alternating two colors of triangular faces. Wythoff construction. Coxeter defines a "quasiregular polyhedron" as one having a Wythoff symbol in the form "p | q r", and it is regular if q=2 or q=r. The Coxeter-Dynkin diagram is another symbolic representation that shows the quasiregular relation between the two dual-regular forms. The convex quasiregular polyhedra. There are two uniform convex quasiregular polyhedra: the cuboctahedron, formula_1, and the icosidodecahedron, formula_2. In addition, the octahedron, which is also regular, formula_3, vertex configuration (3.3)^2, can be considered quasiregular if alternate faces are given different colors. In this form it is sometimes known as the "tetratetrahedron". The remaining convex regular polyhedra have an odd number of faces at each vertex so cannot be colored in a way that preserves edge transitivity. Each of these forms the common core of a dual pair of regular polyhedra.
The names of two of these give clues to the associated dual pair: respectively cube formula_4 octahedron, and icosahedron formula_4 dodecahedron. The octahedron is the common core of a dual pair of tetrahedra (a compound known as the stella octangula); when derived in this way, the octahedron is sometimes called the "tetratetrahedron", as tetrahedron formula_4 tetrahedron. Each of these quasiregular polyhedra can be constructed by a rectification operation on either regular parent, truncating the vertices fully, until each original edge is reduced to its midpoint. Quasiregular tilings. This sequence continues as the trihexagonal tiling, vertex figure "(3.6)^2" - a quasiregular tiling based on the triangular tiling and hexagonal tiling. The checkerboard pattern is a quasiregular coloring of the square tiling, vertex figure "(4.4)^2". The triangular tiling can also be considered quasiregular, with three sets of alternating triangles at each vertex, (3.3)^3. In the hyperbolic plane, this sequence continues further, for example the triheptagonal tiling, vertex figure "(3.7)^2" - a quasiregular tiling based on the "order-7 triangular tiling" and "heptagonal tiling". Nonconvex examples. Coxeter, H.S.M. et al. (1954) also classify certain star polyhedra, having the same characteristics, as being quasiregular. Two are based on dual pairs of regular Kepler–Poinsot solids, in the same way as for the convex examples: the great icosidodecahedron formula_5, and the dodecadodecahedron formula_6. Nine more are the hemipolyhedra, which are faceted forms of the aforementioned quasiregular polyhedra derived from rectification of regular polyhedra. These include equatorial faces passing through the centre of the polyhedra. Lastly there are three ditrigonal forms, all facetings of the regular dodecahedron, whose vertex figures contain three alternations of the two face types. In the Euclidean plane, the sequence of hemipolyhedra continues with four star tilings, in which apeirogons appear as the aforementioned equatorial polygons. Quasiregular duals. Some authorities argue that, since the duals of the quasiregular solids share the same symmetries, these duals should be called quasiregular too. But not everybody uses this terminology. These duals are transitive on their edges and faces (but not on their vertices); they are the edge-transitive Catalan solids. The convex ones are, in corresponding order as above, the rhombic dodecahedron and the rhombic triacontahedron. In addition, by duality with the octahedron, the cube, which is usually regular, can be made quasiregular if alternate vertices are given different colors. Their face configurations are of the form V3.n.3.n. These three quasiregular duals are also characterised by having rhombic faces. This rhombic-faced pattern continues as V(3.6)^2, the rhombille tiling. Quasiregular polytopes and honeycombs. In higher dimensions, Coxeter defined a quasiregular polytope or honeycomb to have regular facets and quasiregular vertex figures. It follows that all vertex figures are congruent and that there are two kinds of facets, which alternate. In Euclidean 4-space, the regular 16-cell can also be seen as quasiregular as an alternated tesseract, h{4,3,3}, composed of two alternating sets of tetrahedral cells. Its vertex figure is the quasiregular tetratetrahedron (an octahedron with tetrahedral symmetry).
The only quasiregular honeycomb in Euclidean 3-space is the alternated cubic honeycomb, h{4,3,4}, composed of alternating tetrahedral and octahedral cells. Its vertex figure is the quasiregular cuboctahedron. In hyperbolic 3-space, one quasiregular honeycomb is the alternated order-5 cubic honeycomb, h{4,3,5}, composed of alternating tetrahedral and icosahedral cells. Its vertex figure is the quasiregular icosidodecahedron. A related paracompact alternated order-6 cubic honeycomb, h{4,3,6}, has alternating tetrahedral and hexagonal tiling cells; its vertex figure is the quasiregular trihexagonal tiling. Regular polychora or honeycombs of the form {p,3,4} can have their symmetry cut in half into quasiregular form, creating alternately colored {p,3} cells. These cases include the Euclidean cubic honeycomb {4,3,4} with cubic cells, compact hyperbolic {5,3,4} with dodecahedral cells, and paracompact {6,3,4} with infinite hexagonal tiling cells. They have four cells around each edge, alternating in two colors. Their vertex figures are quasiregular tetratetrahedra. Similarly, regular hyperbolic honeycombs of the form {p,3,6} can have their symmetry cut in half into quasiregular form, creating alternately colored {p,3} cells. They have six cells around each edge, alternating in two colors. Their vertex figures are quasiregular triangular tilings. Notes.
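The criterion quoted earlier — that a vertex configuration (p.q)^2 belongs to a hyperbolic tiling when 1/p + 1/q < 1/2, to a Euclidean tiling when the sum equals 1/2, and to a convex polyhedron otherwise — can be checked mechanically. The following small Python sketch (an illustrative addition, not from the source) classifies a few of the configurations mentioned above.

```python
# Classify quasiregular vertex configurations (p.q)^2 by curvature.
from fractions import Fraction

def classify(p, q):
    s = Fraction(1, p) + Fraction(1, q)
    if s > Fraction(1, 2):
        return "spherical (convex polyhedron)"
    if s == Fraction(1, 2):
        return "Euclidean tiling"
    return "hyperbolic tiling"

for p, q in [(3, 3), (3, 4), (3, 5), (3, 6), (3, 7), (4, 4)]:
    print(f"({p}.{q})^2: {classify(p, q)}")
```

Running this reproduces the cases in the text: (3.4)^2 and (3.5)^2 give the cuboctahedron and icosidodecahedron, (3.6)^2 and (4.4)^2 give the trihexagonal tiling and checkerboard square tiling, and (3.7)^2 gives the hyperbolic triheptagonal tiling.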
[ { "math_id": 0, "text": "\\begin{Bmatrix} p \\\\ q \\end{Bmatrix}" }, { "math_id": 1, "text": "\\begin{Bmatrix} 3 \\\\ 4 \\end{Bmatrix}" }, { "math_id": 2, "text": "\\begin{Bmatrix} 3 \\\\ 5 \\end{Bmatrix}" }, { "math_id": 3, "text": "\\begin{Bmatrix} 3 \\\\ 3 \\end{Bmatrix}" }, { "math_id": 4, "text": "\\cap" }, { "math_id": 5, "text": "\\begin{Bmatrix} 3 \\\\ 5/2 \\end{Bmatrix}" }, { "math_id": 6, "text": "\\begin{Bmatrix} 5 \\\\ 5/2 \\end{Bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=8810651
8811
Discrete Fourier transform
Type of Fourier transform in discrete mathematics In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT (IDFT) is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous (and periodic), and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle. The DFT is the most important discrete transform, used to perform Fourier analysis in many practical applications. In digital signal processing, the function is any quantity or signal that varies over time, such as the pressure of a sound wave, a radio signal, or daily temperature readings, sampled over a finite time interval (often defined by a window function). In image processing, the samples can be the values of pixels along a row or column of a raster image. The DFT is also used to efficiently solve partial differential equations, and to perform other operations such as convolutions or multiplying large integers. Since it deals with a finite amount of data, it can be implemented in computers by numerical algorithms or even dedicated hardware. These implementations usually employ efficient fast Fourier transform (FFT) algorithms; so much so that the terms "FFT" and "DFT" are often used interchangeably. Prior to its current usage, the "FFT" initialism may have also been used for the ambiguous term "finite Fourier transform". The DFT has many applications, including purely mathematical ones with no physical interpretation. But physically it can be related to signal processing as a discrete version (i.e. samples) of the discrete-time Fourier transform (DTFT), which is a continuous and periodic function. The DFT computes N equally-spaced samples of one cycle of the DTFT (see Fig. 2). Definition. The "discrete Fourier transform" transforms a sequence of "N" complex numbers formula_0 into another sequence of complex numbers, formula_1 which is defined by (Eq.1): X_k = Σ_{n=0}^{N−1} x_n · e^(−i 2π k n / N), for k = 0, …, N − 1. The transform is sometimes denoted by the symbol formula_2, as in formula_3 or formula_4 or formula_5. Eq.1 can be interpreted or derived in various ways. For example, it can also be evaluated outside the domain formula_8, and that extended sequence is formula_6-periodic. Accordingly, other sequences of formula_6 indices are sometimes used, such as formula_9 (if formula_6 is even) and formula_10 (if formula_6 is odd), which amounts to swapping the left and right halves of the result of the transform. The inverse transform (Eq.2) is given by: x_n = (1/N) Σ_{k=0}^{N−1} X_k · e^(i 2π k n / N). Eq.2 is also formula_6-periodic (in index n). In Eq.2, each formula_11 is a complex number whose polar coordinates are the amplitude and phase of a complex sinusoidal component formula_12 of function formula_13 (see Discrete Fourier series). The sinusoid's frequency is formula_14 cycles per formula_6 samples.
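Before turning to normalization conventions, here is a direct O(N²) evaluation of Eq.1 and Eq.2 as just defined — a minimal sketch assuming NumPy, useful only for illustration since practical work uses FFT algorithms. It can be checked against the worked example given below.

```python
# Direct evaluation of the DFT (Eq.1) and inverse DFT (Eq.2).
import numpy as np

def dft(x):
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.exp(-2j * np.pi * k * n / N) @ x        # Eq.1

def idft(X):
    N = len(X)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.exp(2j * np.pi * k * n / N) @ X / N     # Eq.2

x = np.array([1, 2 - 1j, -1j, -1 + 2j])               # input of the worked example below
X = dft(x)
print(np.round(X, 10))                                # expected: [2, -2-2j, -2j, 4+4j]
print(np.allclose(idft(X), x))                        # the round trip recovers x
print(np.allclose(X, np.fft.fft(x)))                  # agrees with an FFT implementation
```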
The normalization factor multiplying the DFT and IDFT (here 1 and formula_15) and the signs of the exponents are the most common conventions. The only actual requirements of these conventions are that the DFT and IDFT have opposite-sign exponents and that the product of their normalization factors be formula_16 An uncommon normalization of formula_17 for both the DFT and IDFT makes the transform-pair unitary. Example. This example demonstrates how to apply the DFT to a sequence of length formula_18 with the input vector formula_19 Calculating the DFT of formula_20 using Eq.1, formula_21 results in formula_22 Properties. Linearity. The DFT is a linear transform, i.e. if formula_23 and formula_24, then for any complex numbers formula_25: formula_26 Time and frequency reversal. Reversing the time (i.e. replacing formula_27 by formula_28) in formula_7 corresponds to reversing the frequency (i.e. formula_14 by formula_29). Mathematically, if formula_30 represents the vector x, then formula_23 implies formula_31 Conjugation in time. If formula_32 then formula_33. Real and imaginary part. Simple operations on formula_7 in the time domain, such as taking its real part, imaginary part, or complex conjugate, have corresponding effects on its DFT formula_11 in the frequency domain (see the conjugation symmetries for real and purely imaginary signals below). Orthogonality. The vectors formula_34 form an orthogonal basis over the set of "N"-dimensional complex vectors: formula_35 where formula_36 is the Kronecker delta. (In the last step, the summation is trivial if formula_37, where it is 1 + 1 + ⋯ = "N", and otherwise is a geometric series that can be explicitly summed to obtain zero.) This orthogonality condition can be used to derive the formula for the IDFT from the definition of the DFT, and is equivalent to the unitarity property below. The Plancherel theorem and Parseval's theorem. If formula_11 and formula_38 are the DFTs of formula_7 and formula_39 respectively then Parseval's theorem states: formula_40 where the star denotes complex conjugation. The Plancherel theorem is a special case of Parseval's theorem and states: formula_41 These theorems are also equivalent to the unitary condition below. Periodicity. The periodicity can be shown directly from the definition: formula_42 Similarly, it can be shown that the IDFT formula leads to a periodic extension. Shift theorem. Multiplying formula_7 by a "linear phase" formula_43 for some integer "m" corresponds to a "circular shift" of the output formula_11: formula_11 is replaced by formula_44, where the subscript is interpreted modulo "N" (i.e., periodically). Similarly, a circular shift of the input formula_7 corresponds to multiplying the output formula_11 by a linear phase. Mathematically, if formula_30 represents the vector x, then formula_23 implies formula_45 and formula_46 Circular convolution theorem and cross-correlation theorem. The convolution theorem for the discrete-time Fourier transform (DTFT) indicates that a convolution of two sequences can be obtained as the inverse transform of the product of the individual transforms. An important simplification occurs when one of the sequences is N-periodic, denoted here by formula_47 because formula_48 is non-zero at only discrete frequencies, and therefore so is its product with the continuous function formula_49 That leads to a considerable simplification of the inverse transform. formula_50 where formula_51 is a periodic summation of the formula_52 sequence: formula_53 Customarily, the DFT and inverse DFT summations are taken over the domain formula_54.
Defining those DFTs as formula_55 and formula_56, the result is: formula_57 In practice, the formula_52 sequence is usually length "N" or less, and formula_58 is a periodic extension of an N-length formula_59-sequence, which can also be expressed as a "circular function": formula_60 Then the convolution can be written as: formula_61 which gives rise to the interpretation as a "circular" convolution of formula_52 and formula_62 It is often used to efficiently compute their linear convolution (see Circular convolution, Fast convolution algorithms, and Overlap-save). Similarly, the cross-correlation of formula_52 and formula_58 is given by: formula_63 Uniqueness of the Discrete Fourier Transform. As seen above, the discrete Fourier transform has the fundamental property of carrying convolution into componentwise product. A natural question is whether it is the only one with this ability. It has been shown that any linear transform that turns convolution into pointwise product is the DFT up to a permutation of coefficients. Since the number of permutations of n elements equals n!, there exist exactly n! linear and invertible maps with the same fundamental property as the DFT with respect to convolution. Convolution theorem duality. It can also be shown that: formula_64 formula_65 which is the circular convolution of formula_66 and formula_67. Trigonometric interpolation polynomial. The trigonometric interpolation polynomial formula_68 where the coefficients "X""k" are given by the DFT of "x""n" above, satisfies the interpolation property formula_69 for formula_70. For even "N", notice that the Nyquist component formula_71 is handled specially. This interpolation is "not unique": aliasing implies that one could add "N" to any of the complex-sinusoid frequencies (e.g. changing formula_72 to formula_73) without changing the interpolation property, but giving "different" values in between the formula_7 points. The choice above, however, is typical because it has two useful properties. First, it consists of sinusoids whose frequencies have the smallest possible magnitudes: the interpolation is bandlimited. Second, if the formula_7 are real numbers, then formula_74 is real as well. In contrast, the most obvious trigonometric interpolation polynomial is the one in which the frequencies range from 0 to formula_75 (instead of roughly formula_76 to formula_77 as above), similar to the inverse DFT formula. This interpolation does "not" minimize the slope, and is "not" generally real-valued for real formula_7; its use is a common mistake. The unitary DFT. Another way of looking at the DFT is to note that in the above discussion, the DFT can be expressed as the DFT matrix, a Vandermonde matrix (a form introduced in 1867): formula_78 where formula_79 is a primitive "N"th root of unity. For example, in the case when formula_80, formula_81, and formula_82 (which is a Hadamard matrix) or when formula_18 as in the example above, formula_83, and formula_84 The inverse transform is then given by the inverse of the above matrix, formula_85 With unitary normalization constants formula_86, the DFT becomes a unitary transformation, defined by a unitary matrix: formula_87 where formula_88 is the determinant function. The determinant is the product of the eigenvalues, which are always formula_89 or formula_90 as described below. In a real vector space, a unitary transformation can be thought of as simply a rigid rotation of the coordinate system, and all of the properties of a rigid rotation can be found in the unitary DFT.
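As a quick numerical check of the unitary form just described (a sketch assuming NumPy, not part of the original exposition), one can verify that U = F/√N is unitary, that |det U| = 1, that the inverse of the DFT matrix is F*/N, and that lengths are preserved.

```python
# Numerical check of the unitary DFT matrix U = F / sqrt(N).
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)        # DFT matrix, omega_N = e^{-2*pi*i/N}
U = F / np.sqrt(N)                                   # unitary normalization

print(np.allclose(U @ U.conj().T, np.eye(N)))        # U is unitary
print(np.isclose(abs(np.linalg.det(U)), 1.0))        # |det(U)| = 1
print(np.allclose(np.linalg.inv(F), F.conj() / N))   # inverse DFT matrix equals F*/N

rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # lengths are preserved
```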
The orthogonality of the DFT is now expressed as an orthonormality condition (which arises in many areas of mathematics as described in root of unity): formula_91 If X is defined as the unitary DFT of the vector x, then formula_92 and the Parseval's theorem is expressed as formula_93 If we view the DFT as just a coordinate transformation which simply specifies the components of a vector in a new coordinate system, then the above is just the statement that the dot product of two vectors is preserved under a unitary DFT transformation. For the special case formula_94, this implies that the length of a vector is preserved as well — this is just Plancherel theorem, formula_95 A consequence of the circular convolution theorem is that the DFT matrix F diagonalizes any circulant matrix. Expressing the inverse DFT in terms of the DFT. A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.) First, we can compute the inverse DFT by reversing all but one of the inputs (Duhamel "et al.", 1988): formula_96 Second, one can also conjugate the inputs and outputs: formula_99 Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying pointers). Define formula_100 as formula_7 with its real and imaginary parts swapped—that is, if formula_101 then formula_100 is formula_102. Equivalently, formula_100 equals formula_103. Then formula_104 That is, the inverse transform is the same as the forward transform with the real and imaginary parts swapped for both input and output, up to a normalization (Duhamel "et al.", 1988). The conjugation trick can also be used to define a new transform, closely related to the DFT, that is involutory—that is, which is its own inverse. In particular, formula_105 is clearly its own inverse: formula_106. A closely related involutory transformation (by a factor of formula_107) is formula_108, since the formula_109 factors in formula_110 cancel the 2. For real inputs formula_20, the real part of formula_111 is none other than the discrete Hartley transform, which is also involutory. Eigenvalues and eigenvectors. The eigenvalues of the DFT matrix are simple and well-known, whereas the eigenvectors are complicated, not unique, and are the subject of ongoing research. Explicit formulas are given with a significant amount of number theory. Consider the unitary form formula_112 defined above for the DFT of length "N", where formula_113 This matrix satisfies the matrix polynomial equation: formula_114 This can be seen from the inverse properties above: operating formula_112 twice gives the original data in reverse order, so operating formula_112 four times gives back the original data and is thus the identity matrix. This means that the eigenvalues formula_115 satisfy the equation: formula_116 Therefore, the eigenvalues of formula_112 are the fourth roots of unity: formula_115 is +1, −1, +"i", or −"i". Since there are only four distinct eigenvalues for this formula_117 matrix, they have some multiplicity. The multiplicity gives the number of linearly independent eigenvectors corresponding to each eigenvalue. 
(There are "N" independent eigenvectors; a unitary matrix is never defective.) The problem of their multiplicity was solved by McClellan and Parks (1972), although it was later shown to have been equivalent to a problem solved by Gauss (Dickinson and Steiglitz, 1982). The multiplicity depends on the value of "N" modulo 4, and is given by the following table: Otherwise stated, the characteristic polynomial of formula_112 is: formula_118 No simple analytical formula for general eigenvectors is known. Moreover, the eigenvectors are not unique because any linear combination of eigenvectors for the same eigenvalue is also an eigenvector for that eigenvalue. Various researchers have proposed different choices of eigenvectors, selected to satisfy useful properties like orthogonality and to have "simple" forms (e.g., McClellan and Parks, 1972; Dickinson and Steiglitz, 1982; Grünbaum, 1982; Atakishiyev and Wolf, 1997; Candan "et al.", 2000; Hanna "et al.", 2004; Gurevich and Hadani, 2008). One method to construct DFT eigenvectors to an eigenvalue formula_115 is based on the linear combination of operators: formula_119 For an arbitrary vector formula_120, vector formula_121 satisfies: formula_122 hence, vector formula_123 is, indeed, the eigenvector of DFT matrix formula_112. Operators formula_124 project vectors onto subspaces which are orthogonal for each value of formula_115. That is, for two eigenvectors, formula_121 and formula_125 we have: formula_126 However, in general, projection operator method does not produce orthogonal eigenvectors within one subspace. The operator formula_127 can be seen as a matrix, whose columns are eigenvectors of formula_112, but they are not orthogonal. When a set of vectors formula_128, spanning formula_129-dimensional space (where formula_129 is the multiplicity of eigenvalue formula_115) is chosen to generate the set of eigenvectors formula_130 to eigenvalue formula_115, the mutual orthogonality of formula_131 is not guaranteed. However, the orthogonal set can be obtained by further applying orthogonalization algorithm to the set formula_132, e.g. Gram-Schmidt process. A straightforward approach to obtain DFT eigenvectors is to discretize an eigenfunction of the continuous Fourier transform, of which the most famous is the Gaussian function. Since periodic summation of the function means discretizing its frequency spectrum and discretization means periodic summation of the spectrum, the discretized and periodically summed Gaussian function yields an eigenvector of the discrete transform: The closed form expression for the series can be expressed by Jacobi theta functions as Several other simple closed-form analytical eigenvectors for special DFT period "N" were found (Kong, 2008 and Casper-Yakimov, 2024): For DFT period "N" = 2"L" + 1 = 4"K" + 1, where "K" is an integer, the following is an eigenvector of DFT: For DFT period "N" = 2"L" = 4"K", where "K" is an integer, the following are eigenvectors of DFT: For DFT period "N" = 4"K" - 1, where "K" is an integer, the following are eigenvectors of DFT: The choice of eigenvectors of the DFT matrix has become important in recent years in order to define a discrete analogue of the fractional Fourier transform—the DFT matrix can be taken to fractional powers by exponentiating the eigenvalues (e.g., Rubio and Santhanam, 2005). 
For the continuous Fourier transform, the natural orthogonal eigenfunctions are the Hermite functions, so various discrete analogues of these have been employed as the eigenvectors of the DFT, such as the Kravchuk polynomials (Atakishiyev and Wolf, 1997). The "best" choice of eigenvectors to define a fractional discrete Fourier transform remains an open question, however. Uncertainty principles. Probabilistic uncertainty principle. If the random variable "X""k" is constrained by formula_140 then formula_141 may be considered to represent a discrete probability mass function of n, with an associated probability mass function constructed from the transformed variable, formula_142 For the case of continuous functions formula_143 and formula_144, the Heisenberg uncertainty principle states that formula_145 where formula_146 and formula_147 are the variances of formula_148 and formula_149 respectively, with the equality attained in the case of a suitably normalized Gaussian distribution. Although the variances may be analogously defined for the DFT, an analogous uncertainty principle is not useful, because the uncertainty will not be shift-invariant. Still, a meaningful uncertainty principle has been introduced by Massar and Spindel. However, the Hirschman entropic uncertainty will have a useful analog for the case of the DFT. The Hirschman uncertainty principle is expressed in terms of the Shannon entropy of the two probability functions. In the discrete case, the Shannon entropies are defined as formula_150 and formula_151 and the entropic uncertainty principle becomes formula_152 The equality is obtained for formula_153 equal to translations and modulations of a suitably normalized Kronecker comb of period formula_154 where formula_154 is any exact integer divisor of formula_6. The probability mass function formula_155 will then be proportional to a suitably translated Kronecker comb of period formula_156. Deterministic uncertainty principle. There is also a well-known deterministic uncertainty principle that uses signal sparsity (or the number of non-zero coefficients). Let formula_157 and formula_158 be the number of non-zero elements of the time and frequency sequences formula_159 and formula_160, respectively. Then, formula_161 As an immediate consequence of the inequality of arithmetic and geometric means, one also has formula_162. Both uncertainty principles were shown to be tight for specifically-chosen "picket-fence" sequences (discrete impulse trains), and find practical use for signal recovery applications. formula_165, where formula_166 denotes complex conjugation. DFT of real and purely imaginary signals. It follows that for even formula_6 formula_167 and formula_168 are real-valued, and the remainder of the DFT is completely specified by just formula_169 complex numbers. formula_170, where formula_166 denotes complex conjugation. Generalized DFT (shifted and non-linear phase). It is possible to shift the transform sampling in time and/or frequency domain by some real shifts "a" and "b", respectively. This is sometimes known as a generalized DFT (or GDFT), also called the shifted DFT or offset DFT, and has analogous properties to the ordinary DFT: formula_171 Most often, shifts of formula_172 (half a sample) are used. While the ordinary DFT corresponds to a periodic signal in both time and frequency domains, formula_173 produces a signal that is anti-periodic in frequency domain (formula_174) and vice versa for formula_175. 
Thus, the specific case of formula_176 is known as an "odd-time odd-frequency" discrete Fourier transform (or O2 DFT). Such shifted transforms are most often used for symmetric data, to represent different boundary symmetries, and for real-symmetric data they correspond to different forms of the discrete cosine and sine transforms. Another interesting choice is formula_177, which is called the centered DFT (or CDFT). The centered DFT has the useful property that, when "N" is a multiple of four, all four of its eigenvalues (see above) have equal multiplicities (Rubio and Santhanam, 2005) The term GDFT is also used for the non-linear phase extensions of DFT. Hence, GDFT method provides a generalization for constant amplitude orthogonal block transforms including linear and non-linear phase types. GDFT is a framework to improve time and frequency domain properties of the traditional DFT, e.g. auto/cross-correlations, by the addition of the properly designed phase shaping function (non-linear, in general) to the original linear phase functions (Akansu and Agirman-Tosun, 2010). The discrete Fourier transform can be viewed as a special case of the z-transform, evaluated on the unit circle in the complex plane; more general z-transforms correspond to "complex" shifts "a" and "b" above. Multidimensional DFT. The ordinary DFT transforms a one-dimensional sequence or array formula_7 that is a function of exactly one discrete variable "n". The multidimensional DFT of a multidimensional array formula_178 that is a function of "d" discrete variables formula_179 for formula_180 in formula_181 is defined by: formula_182 where formula_183 as above and the "d" output indices run from formula_184. This is more compactly expressed in vector notation, where we define formula_185 and formula_186 as "d"-dimensional vectors of indices from 0 to formula_187, which we define as formula_188: formula_189 where the division formula_190 is defined as formula_191 to be performed element-wise, and the sum denotes the set of nested summations above. The inverse of the multi-dimensional DFT is, analogous to the one-dimensional case, given by: formula_192 As the one-dimensional DFT expresses the input formula_7 as a superposition of sinusoids, the multidimensional DFT expresses the input as a superposition of plane waves, or multidimensional sinusoids. The direction of oscillation in space is formula_193. The amplitudes are formula_194. This decomposition is of great importance for everything from digital image processing (two-dimensional) to solving partial differential equations. The solution is broken up into plane waves. The multidimensional DFT can be computed by the composition of a sequence of one-dimensional DFTs along each dimension. In the two-dimensional case formula_195 the formula_196 independent DFTs of the rows (i.e., along formula_197) are computed first to form a new array formula_198. Then the formula_199 independent DFTs of "y" along the columns (along formula_200) are computed to form the final result formula_201. Alternatively the columns can be computed first and then the rows. The order is immaterial because the nested summations above commute. An algorithm to compute a one-dimensional DFT is thus sufficient to efficiently compute a multidimensional DFT. This approach is known as the "row-column" algorithm. There are also intrinsically multidimensional FFT algorithms. The real-input multidimensional DFT. 
For input data formula_178 consisting of real numbers, the DFT outputs have a conjugate symmetry similar to the one-dimensional case above: formula_202 where the star again denotes complex conjugation and the formula_180-th subscript is again interpreted modulo formula_203 (for formula_204). Applications. The DFT has seen wide usage across a large number of fields; we only sketch a few examples below (see also the references at the end). All applications of the DFT depend crucially on the availability of a fast algorithm to compute discrete Fourier transforms and their inverses, a fast Fourier transform. Spectral analysis. When the DFT is used for signal spectral analysis, the formula_30 sequence usually represents a finite set of uniformly spaced time-samples of some signal formula_205, where formula_206 represents time. The conversion from continuous time to samples (discrete-time) changes the underlying Fourier transform of formula_207 into a discrete-time Fourier transform (DTFT), which generally entails a type of distortion called aliasing. Choice of an appropriate sample-rate (see "Nyquist rate") is the key to minimizing that distortion. Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called "leakage", which is manifested as a loss of detail (a.k.a. resolution) in the DTFT. Choice of an appropriate sub-sequence length is the primary key to minimizing that effect. When the available data (and time to process it) is more than the amount needed to attain the desired frequency resolution, a standard technique is to perform multiple DFTs, for example to create a spectrogram. If the desired result is a power spectrum and noise or randomness is present in the data, averaging the magnitude components of the multiple DFTs is a useful procedure to reduce the variance of the spectrum (also called a periodogram in this context); two examples of such techniques are the Welch method and the Bartlett method; the general subject of estimating the power spectrum of a noisy signal is called spectral estimation. A final source of distortion (or perhaps "illusion") is the DFT itself, because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. That can be mitigated by increasing the resolution of the DFT. Optics, diffraction, and tomography. The discrete Fourier transform is widely used with spatial frequencies in modeling the way that light, electrons, and other probes travel through optical systems and scatter from objects in two and three dimensions. The dual (direct/reciprocal) vector space of three dimensional objects further makes available a three dimensional reciprocal lattice, whose construction from translucent object shadows (via the Fourier slice theorem) allows tomographic reconstruction of three dimensional objects with a wide range of applications e.g. in modern medicine. Filter bank. Data compression. The field of digital signal processing relies heavily on operations in the frequency domain (i.e. on the Fourier transform). For example, several lossy image and sound compression methods employ the discrete Fourier transform: the signal is cut into short segments, each is transformed, and then the Fourier coefficients of high frequencies, which are assumed to be unnoticeable, are discarded. The decompressor computes the inverse transform based on this reduced number of Fourier coefficients.
(Compression applications often use a specialized form of the DFT, the discrete cosine transform or sometimes the modified discrete cosine transform.) Some relatively recent compression algorithms, however, use wavelet transforms, which give a more uniform compromise between time and frequency domain than obtained by chopping data into segments and transforming each segment. In the case of JPEG2000, this avoids the spurious image features that appear when images are highly compressed with the original JPEG. Partial differential equations. Discrete Fourier transforms are often used to solve partial differential equations, where again the DFT is used as an approximation for the Fourier series (which is recovered in the limit of infinite "N"). The advantage of this approach is that it expands the signal in complex exponentials formula_208, which are eigenfunctions of differentiation: formula_209. Thus, in the Fourier representation, differentiation is simple—we just multiply by formula_210. (However, the choice of formula_27 is not unique due to aliasing; for the method to be convergent, a choice similar to that in the trigonometric interpolation section above should be used.) A linear differential equation with constant coefficients is transformed into an easily solvable algebraic equation. One then uses the inverse DFT to transform the result back into the ordinary spatial representation. Such an approach is called a spectral method. Polynomial multiplication. Suppose we wish to compute the polynomial product "c"("x") = "a"("x") · "b"("x"). The ordinary product expression for the coefficients of "c" involves a linear (acyclic) convolution, where indices do not "wrap around." This can be rewritten as a cyclic convolution by taking the coefficient vectors for "a"("x") and "b"("x") with constant term first, then appending zeros so that the resultant coefficient vectors a and b have dimension "d" &gt; deg("a"("x")) + deg("b"("x")). Then, formula_211 Where c is the vector of coefficients for "c"("x"), and the convolution operator formula_212 is defined so formula_213 But convolution becomes multiplication under the DFT: formula_214 Here the vector product is taken elementwise. Thus the coefficients of the product polynomial "c"("x") are just the terms 0, ..., deg("a"("x")) + deg("b"("x")) of the coefficient vector formula_215 With a fast Fourier transform, the resulting algorithm takes "O"("N" log "N") arithmetic operations. Due to its simplicity and speed, the Cooley–Tukey FFT algorithm, which is limited to composite sizes, is often chosen for the transform operation. In this case, "d" should be chosen as the smallest integer greater than the sum of the input polynomial degrees that is factorizable into small prime factors (e.g. 2, 3, and 5, depending upon the FFT implementation). Multiplication of large integers. The fastest known algorithms for the multiplication of very large integers use the polynomial multiplication method outlined above. Integers can be treated as the value of a polynomial evaluated specifically at the number base, with the coefficients of the polynomial corresponding to the digits in that base (ex. formula_216). After polynomial multiplication, a relatively low-complexity carry-propagation step completes the multiplication. Convolution. 
When data is convolved with a function with wide support, such as for downsampling by a large sampling ratio, because of the convolution theorem and the FFT algorithm it may be faster to transform the data, multiply it pointwise by the transform of the filter, and then reverse transform it. Alternatively, a good filter is obtained by simply truncating the transformed data and re-transforming the shortened data set. Generalizations. Representation theory. The DFT can be interpreted as a complex-valued representation of the finite cyclic group. In other words, a sequence of formula_27 complex numbers can be thought of as an element of formula_27-dimensional complex space formula_217 or equivalently a function formula_218 from the finite cyclic group of order formula_27 to the complex numbers, formula_219. So formula_218 is a class function on the finite cyclic group, and thus can be expressed as a linear combination of the irreducible characters of this group, which are the roots of unity. From this point of view, one may generalize the DFT to representation theory generally, or more narrowly to the representation theory of finite groups. More narrowly still, one may generalize the DFT by either changing the target (taking values in a field other than the complex numbers), or the domain (a group other than a finite cyclic group), as detailed in the sequel. Other fields. Many of the properties of the DFT only depend on the fact that formula_220 is a primitive root of unity, sometimes denoted formula_221 or formula_222 (so that formula_223). Such properties include the completeness, orthogonality, Plancherel/Parseval, periodicity, shift, convolution, and unitarity properties above, as well as many FFT algorithms. For this reason, the discrete Fourier transform can be defined by using roots of unity in fields other than the complex numbers, and such generalizations are commonly called "number-theoretic transforms" (NTTs) in the case of finite fields. For more information, see number-theoretic transform and discrete Fourier transform (general). Other finite groups. The standard DFT acts on a sequence "x"0, "x"1, ..., "x""N"−1 of complex numbers, which can be viewed as a function {0, 1, ..., "N" − 1} → C. The multidimensional DFT acts on multidimensional sequences, which can be viewed as functions formula_224 This suggests the generalization to Fourier transforms on arbitrary finite groups, which act on functions "G" → C where "G" is a finite group. In this framework, the standard DFT is seen as the Fourier transform on a cyclic group, while the multidimensional DFT is a Fourier transform on a direct sum of cyclic groups. Further, the Fourier transform can be defined on cosets of a group. Alternatives. There are various alternatives to the DFT for various applications, prominent among which are wavelets. The analog of the DFT is the discrete wavelet transform (DWT). From the point of view of time–frequency analysis, a key limitation of the Fourier transform is that it does not include "location" information, only "frequency" information, and thus has difficulty in representing transients. As wavelets have location as well as frequency, they are better able to represent location, at the expense of greater difficulty representing frequency. For details, see comparison of the discrete wavelet transform with the discrete Fourier transform. Notes. References.
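Returning to the polynomial multiplication application described earlier, the following short sketch (an illustrative addition, assuming NumPy) carries out that procedure: pad both coefficient vectors, constant term first, to a common length d greater than deg(a) + deg(b), multiply their DFTs pointwise, and invert.

```python
# Polynomial multiplication via the DFT, as described in the text above.
import numpy as np

def poly_multiply(a, b):
    d = len(a) + len(b) - 1                  # large enough that no wrap-around occurs
    A = np.fft.fft(a, d)                     # zero-pads each input to length d
    B = np.fft.fft(b, d)
    c = np.fft.ifft(A * B).real              # cyclic convolution equals linear convolution here
    return np.rint(c).astype(int)            # exact for small integer coefficients

# (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
print(poly_multiply([1, 2, 3], [4, 5]))      # [ 4 13 22 15]
```

The same idea underlies the large-integer multiplication mentioned above, with the coefficients playing the role of digits in the chosen base and a carry-propagation pass applied afterwards.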
[ { "math_id": 0, "text": " \\left \\{ \\mathbf{x}_n \\right \\} := x_0, x_1, \\ldots, x_{N-1}" }, { "math_id": 1, "text": "\\left \\{ \\mathbf{X}_k \\right \\} := X_0, X_1, \\ldots, X_{N-1}," }, { "math_id": 2, "text": "\\mathcal{F}" }, { "math_id": 3, "text": "\\mathbf{X} = \\mathcal{F} \\left \\{ \\mathbf{x} \\right \\} " }, { "math_id": 4, "text": "\\mathcal{F} \\left ( \\mathbf{x} \\right )" }, { "math_id": 5, "text": "\\mathcal{F} \\mathbf{x}" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "x_n" }, { "math_id": 8, "text": "k \\in [0,N-1]" }, { "math_id": 9, "text": "\\left[-\\frac{N}{2}, \\frac{N}{2} - 1\\right]" }, { "math_id": 10, "text": "\\left[-\\frac{N-1}{2}, \\frac{N-1}{2}\\right]" }, { "math_id": 11, "text": "X_k" }, { "math_id": 12, "text": "\\left(e^{i 2 \\pi \\tfrac{k}{N}n}\\right)" }, { "math_id": 13, "text": "x_n." }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "\\tfrac{1}{N}" }, { "math_id": 16, "text": "\\tfrac{1}{N}." }, { "math_id": 17, "text": "\\sqrt{\\tfrac{1}{N}}" }, { "math_id": 18, "text": "N = 4" }, { "math_id": 19, "text": "\\mathbf{x} =\n\\begin{pmatrix}\nx_0 \\\\ x_1 \\\\ x_2 \\\\ x_3\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n1 \\\\ 2-i \\\\ -i \\\\ -1+2i\n\\end{pmatrix}.\n" }, { "math_id": 20, "text": "\\mathbf{x}" }, { "math_id": 21, "text": "\\begin{align}\nX_0 &= e^{-i 2 \\pi 0 \\cdot 0 / 4} \\cdot 1 + e^{-i 2 \\pi 0 \\cdot 1 / 4} \\cdot (2-i) + e^{-i 2 \\pi 0 \\cdot 2 / 4} \\cdot (-i) + e^{-i 2 \\pi 0 \\cdot 3 / 4} \\cdot (-1+2i) = 2 \\\\\nX_1 &= e^{-i 2 \\pi 1 \\cdot 0 / 4} \\cdot 1 + e^{-i 2 \\pi 1 \\cdot 1 / 4} \\cdot (2-i) + e^{-i 2 \\pi 1 \\cdot 2 / 4} \\cdot (-i) + e^{-i 2 \\pi 1 \\cdot 3 / 4} \\cdot (-1+2i) = -2-2i \\\\\nX_2 &= e^{-i 2 \\pi 2 \\cdot 0 / 4} \\cdot 1 + e^{-i 2 \\pi 2 \\cdot 1 / 4} \\cdot (2-i) + e^{-i 2 \\pi 2 \\cdot 2 / 4} \\cdot (-i) + e^{-i 2 \\pi 2 \\cdot 3 / 4} \\cdot (-1+2i) = -2i \\\\\nX_3 &= e^{-i 2 \\pi 3 \\cdot 0 / 4} \\cdot 1 + e^{-i 2 \\pi 3 \\cdot 1 / 4} \\cdot (2-i) + e^{-i 2 \\pi 3 \\cdot 2 / 4} \\cdot (-i) + e^{-i 2 \\pi 3 \\cdot 3 / 4} \\cdot (-1+2i) = 4+4i\n\\end{align}" }, { "math_id": 22, "text": "\\mathbf{X} =\n\\begin{pmatrix}\nX_0 \\\\\nX_1 \\\\\nX_2 \\\\\nX_3\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n2 \\\\\n-2-2i \\\\\n-2i \\\\\n4+4i\n\\end{pmatrix}.\n" }, { "math_id": 23, "text": "\\mathcal{F}(\\{x_n\\})_k=X_k" }, { "math_id": 24, "text": "\\mathcal{F}(\\{y_n\\})_k=Y_k" }, { "math_id": 25, "text": "a,b" }, { "math_id": 26, "text": "\\mathcal{F}(\\{a x_n + b y_n\\})_k=a X_k + b Y_k" }, { "math_id": 27, "text": "n" }, { "math_id": 28, "text": "N-n" }, { "math_id": 29, "text": "N-k" }, { "math_id": 30, "text": "\\{x_n\\}" }, { "math_id": 31, "text": "\\mathcal{F}(\\{ x_{N-n} \\})_k=X_{N-k}" }, { "math_id": 32, "text": "\\mathcal{F}(\\{x_n\\})_k = X_k" }, { "math_id": 33, "text": "\\mathcal{F}(\\{ x_n^* \\})_k = X_{N-k}^*" }, { "math_id": 34, "text": "u_k = \\left[\\left. 
e^{ \\frac{i 2\\pi}{N} kn} \\;\\right|\\; n=0,1,\\ldots,N-1 \\right]^\\mathsf{T}" }, { "math_id": 35, "text": "u^\\mathsf{T}_k u_{k'}^* \n = \\sum_{n=0}^{N-1} \\left(e^{ \\frac{i 2\\pi}{N} kn}\\right) \\left(e^{\\frac{i 2\\pi}{N} (-k')n}\\right)\n = \\sum_{n=0}^{N-1} e^{ \\frac{i 2\\pi}{N} (k-k') n} \n = N~\\delta_{kk'}\n" }, { "math_id": 36, "text": "\\delta_{kk'}" }, { "math_id": 37, "text": "k=k'" }, { "math_id": 38, "text": "Y_k" }, { "math_id": 39, "text": "y_n" }, { "math_id": 40, "text": "\\sum_{n=0}^{N-1} x_n y^*_n = \\frac{1}{N} \\sum_{k=0}^{N-1} X_k Y^*_k" }, { "math_id": 41, "text": "\\sum_{n=0}^{N-1} |x_n|^2 = \\frac{1}{N} \\sum_{k=0}^{N-1} |X_k|^2." }, { "math_id": 42, "text": "X_{k+N} \\ \\triangleq \\ \\sum_{n=0}^{N-1} x_n e^{-\\frac{i 2\\pi}{N} (k+N) n} =\n\\sum_{n=0}^{N-1} x_n e^{-\\frac{i 2\\pi}{N} k n} \\underbrace{e^{-i 2 \\pi n}}_{1} = \\sum_{n=0}^{N-1} x_n e^{-\\frac{i 2\\pi}{N} k n} = X_k. " }, { "math_id": 43, "text": "e^{\\frac{i 2\\pi}{N} nm}" }, { "math_id": 44, "text": "X_{k-m}" }, { "math_id": 45, "text": "\\mathcal{F}\\left(\\left\\{ x_n \\cdot e^{\\frac{i 2\\pi}{N}n m} \\right\\}\\right)_k=X_{k-m}" }, { "math_id": 46, "text": "\\mathcal{F}\\left(\\left\\{x_{n-m}\\right\\}\\right)_k=X_k \\cdot e^{-\\frac{i 2\\pi}{N}k m}" }, { "math_id": 47, "text": "y_{_N}," }, { "math_id": 48, "text": "\\scriptstyle \\text{DTFT} \\displaystyle \\{y_{_N}\\}" }, { "math_id": 49, "text": "\\scriptstyle \\text{DTFT} \\displaystyle \\{x\\}." }, { "math_id": 50, "text": "x * y_{_N}\\ =\\ \\scriptstyle{\\rm DTFT}^{-1} \\displaystyle \\left[\\scriptstyle{\\rm DTFT} \\displaystyle \\{x\\}\\cdot \\scriptstyle{\\rm DTFT} \\displaystyle \\{y_{_N}\\}\\right]\\ =\\ \\scriptstyle{\\rm DFT}^{-1} \\displaystyle \\left[\\scriptstyle{\\rm DFT} \\displaystyle \\{x_{_N}\\}\\cdot \\scriptstyle{\\rm DFT} \\displaystyle \\{y_{_N}\\}\\right]," }, { "math_id": 51, "text": "x_{_N}" }, { "math_id": 52, "text": "x" }, { "math_id": 53, "text": "(x_{_N})_n\\ \\triangleq \\sum_{m=-\\infty}^{\\infty} x_{(n-mN)}." }, { "math_id": 54, "text": "[0,N-1]" }, { "math_id": 55, "text": "X" }, { "math_id": 56, "text": "Y" }, { "math_id": 57, "text": "\n(x * y_{_N})_n \\triangleq \\sum_{\\ell=-\\infty}^{\\infty}x_\\ell \\cdot (y_{_N})_{n-\\ell} = \\underbrace{\\mathcal{F}^{-1}}_{\\rm DFT^{-1}} \\left \\{ X\\cdot Y \\right \\}_n." }, { "math_id": 58, "text": "y_{_N}" }, { "math_id": 59, "text": "y" }, { "math_id": 60, "text": "(y_{_N})_n = \\sum_{p=-\\infty}^\\infty y_{(n-pN)} = y_{(n\\operatorname{mod}N)}, \\quad n\\in\\mathbb{Z}." }, { "math_id": 61, "text": "\n\\mathcal{F}^{-1} \\left \\{ X\\cdot Y \\right \\}_n = \\sum_{\\ell=0}^{N-1}x_\\ell \\cdot y_{_{(n-\\ell)\\operatorname{mod}N}}\n" }, { "math_id": 62, "text": "y." }, { "math_id": 63, "text": "(x \\star y_{_N})_n \\triangleq \\sum_{\\ell=-\\infty}^{\\infty} x_\\ell^* \\cdot (y_{_N})_{n+\\ell} = \\mathcal{F}^{-1} \\left \\{ X^* \\cdot Y \\right \\}_n." 
}, { "math_id": 64, "text": "\\mathcal{F} \\left \\{ \\mathbf{x\\cdot y} \\right \\}_k \\ \\triangleq\n\\sum_{n=0}^{N-1} x_n \\cdot y_n \\cdot e^{-i \\frac{2\\pi}{N} k n}" }, { "math_id": 65, "text": "=\\frac{1}{N} (\\mathbf{X * Y_N})_k, " }, { "math_id": 66, "text": "\\mathbf{X}" }, { "math_id": 67, "text": "\\mathbf{Y}" }, { "math_id": 68, "text": "p(t) = \\begin{cases}\n \\frac{1}{N} \\left[ X_0 + X_1 e^{i 2\\pi t} + \\cdots + X_{N/2-1} e^{i 2\\pi(N/2-1) t} + X_{N/2} \n \\cos(N\\pi t) + X_{N/2+1} e^{-i 2\\pi(N/2-1) t} + \\cdots + X_{N-1} e^{-i 2\\pi t} \\right]\n & N\\text{ even} \\\\\n \\frac{1}{N} \\left[ X_0 + X_1 e^{i 2\\pi t} + \\cdots + X_{(N-1)/2} e^{i 2\\pi(N-1) t} + X_{(N+1)/2} \n e^{-i 2\\pi(N-1) t} + \\cdots + X_{N-1} e^{-i 2\\pi t} \\right]\n & N\\text{ odd}\n\\end{cases}" }, { "math_id": 69, "text": "p(n/N) = x_n" }, { "math_id": 70, "text": "n = 0, \\ldots, N-1" }, { "math_id": 71, "text": "\\frac{X_{N/2}}{N} \\cos(N\\pi t)" }, { "math_id": 72, "text": "e^{-it}" }, { "math_id": 73, "text": "e^{i(N-1)t}" }, { "math_id": 74, "text": "p(t)" }, { "math_id": 75, "text": "N-1" }, { "math_id": 76, "text": "-N/2" }, { "math_id": 77, "text": "+N/2" }, { "math_id": 78, "text": "\\mathbf{F} =\n\\begin{bmatrix}\n \\omega_N^{0 \\cdot 0} & \\omega_N^{0 \\cdot 1} & \\cdots & \\omega_N^{0 \\cdot (N-1)} \\\\\n \\omega_N^{1 \\cdot 0} & \\omega_N^{1 \\cdot 1} & \\cdots & \\omega_N^{1 \\cdot (N-1)} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\omega_N^{(N-1) \\cdot 0} & \\omega_N^{(N-1) \\cdot 1} & \\cdots & \\omega_N^{(N-1) \\cdot (N-1)} \\\\\n\\end{bmatrix}\n" }, { "math_id": 79, "text": "\\omega_N = e^{-i 2 \\pi/N}" }, { "math_id": 80, "text": "N = 2" }, { "math_id": 81, "text": "\\omega_N = e^{-i \\pi}=-1" }, { "math_id": 82, "text": "\\mathbf{F} =\n\\begin{bmatrix}\n 1 & 1 \\\\\n 1 & -1 \\\\\n\\end{bmatrix},\n" }, { "math_id": 83, "text": "\\omega_N = e^{-i \\pi/2}=-i" }, { "math_id": 84, "text": "\\mathbf{F} =\n\\begin{bmatrix}\n 1 & 1 & 1 & 1 \\\\\n 1 & -i & -1 & i \\\\\n 1 & -1 & 1 & -1 \\\\\n 1 & i & -1 & -i \\\\\n\\end{bmatrix}.\n" }, { "math_id": 85, "text": "\\mathbf{F}^{-1}=\\frac{1}{N}\\mathbf{F}^*" }, { "math_id": 86, "text": "1/\\sqrt{N}" }, { "math_id": 87, "text": "\\begin{align}\n \\mathbf{U} &= \\frac{1}{\\sqrt{N}}\\mathbf{F} \\\\\n \\mathbf{U}^{-1} &= \\mathbf{U}^* \\\\\n\\left|\\det(\\mathbf{U})\\right| &= 1\n\\end{align}" }, { "math_id": 88, "text": "\\det()" }, { "math_id": 89, "text": "\\pm 1" }, { "math_id": 90, "text": "\\pm i" }, { "math_id": 91, "text": "\\sum_{m=0}^{N-1}U_{km}U_{mn}^* = \\delta_{kn}" }, { "math_id": 92, "text": "X_k = \\sum_{n=0}^{N-1} U_{kn} x_n" }, { "math_id": 93, "text": "\\sum_{n=0}^{N-1}x_n y_n^* = \\sum_{k=0}^{N-1}X_k Y_k^*" }, { "math_id": 94, "text": "\\mathbf{x} = \\mathbf{y}" }, { "math_id": 95, "text": "\\sum_{n=0}^{N-1} |x_n|^2 = \\sum_{k=0}^{N-1} |X_k|^2" }, { "math_id": 96, "text": "\\mathcal{F}^{-1}(\\{x_n\\}) = \\frac{1}{N}\\mathcal{F}(\\{x_{N - n}\\})" }, { "math_id": 97, "text": "n = 0" }, { "math_id": 98, "text": "x_{N-0} = x_0" }, { "math_id": 99, "text": "\\mathcal{F}^{-1}(\\mathbf{x}) = \\frac{1}{N}\\mathcal{F}\\left(\\mathbf{x}^*\\right)^*" }, { "math_id": 100, "text": "\\operatorname{swap}(x_n)" }, { "math_id": 101, "text": "x_n = a + b i" }, { "math_id": 102, "text": "b + a i" }, { "math_id": 103, "text": "i x_n^*" }, { "math_id": 104, "text": "\\mathcal{F}^{-1}(\\mathbf{x}) = \\frac{1}{N}\\operatorname{swap}(\\mathcal{F}(\\operatorname{swap}(\\mathbf{x})))" }, { "math_id": 105, "text": "T(\\mathbf{x}) = 
\\mathcal{F}\\left(\\mathbf{x}^*\\right) / \\sqrt{N}" }, { "math_id": 106, "text": "T(T(\\mathbf{x})) = \\mathbf{x}" }, { "math_id": 107, "text": "\\frac{1 + i}{\\sqrt{2}}" }, { "math_id": 108, "text": "H(\\mathbf{x}) = \\mathcal{F}\\left((1 + i) \\mathbf{x}^*\\right) / \\sqrt{2N}" }, { "math_id": 109, "text": "(1 + i)" }, { "math_id": 110, "text": "H(H(\\mathbf{x}))" }, { "math_id": 111, "text": "H(\\mathbf{x})" }, { "math_id": 112, "text": "\\mathbf{U}" }, { "math_id": 113, "text": "\\mathbf{U}_{m,n} = \\frac 1{\\sqrt{N}}\\omega_N^{(m-1)(n-1)} = \\frac 1{\\sqrt{N}}e^{-\\frac{i 2\\pi}N (m-1)(n-1)}." }, { "math_id": 114, "text": "\\mathbf{U}^4 = \\mathbf{I}." }, { "math_id": 115, "text": "\\lambda" }, { "math_id": 116, "text": "\\lambda^4 = 1." }, { "math_id": 117, "text": "N\\times N" }, { "math_id": 118, "text": "\\det (\\lambda I - \\mathbf{U})=\n(\\lambda-1)^{\\left\\lfloor \\tfrac {N+4}{4}\\right\\rfloor}\n(\\lambda+1)^{\\left\\lfloor \\tfrac {N+2}{4}\\right\\rfloor}\n(\\lambda+i)^{\\left\\lfloor \\tfrac {N+1}{4}\\right\\rfloor}\n(\\lambda-i)^{\\left\\lfloor \\tfrac {N-1}{4}\\right\\rfloor}." }, { "math_id": 119, "text": " \\mathcal{P}_\\lambda=\\frac{1}{4}\\left( \\mathbf{I}+\\lambda^{-1}\\mathbf{U}+\\lambda^{-2}\\mathbf{U}^2+\\lambda^{-3} \\mathbf{U}^3\\right)" }, { "math_id": 120, "text": "\\mathbf{v}" }, { "math_id": 121, "text": "\\mathbf{u}(\\lambda)=\\mathcal{P}_{\\lambda}\\mathbf{v}" }, { "math_id": 122, "text": " \\textbf{U}\\mathbf{u}(\\lambda)=\\lambda \\mathbf{u}(\\lambda) " }, { "math_id": 123, "text": "\\mathbf{u}(\\lambda)" }, { "math_id": 124, "text": " \\mathcal{P}_{\\lambda} " }, { "math_id": 125, "text": "\\mathbf{u}'(\\lambda')=\\mathcal{P}_{\\lambda'}\\mathbf{v}'" }, { "math_id": 126, "text": "\\mathbf{u}^\\dagger(\\lambda) \\mathbf{u}'(\\lambda')= \\delta_{\\lambda\\lambda'}\\mathbf{u}^\\dagger(\\lambda) \\mathbf{v}' " }, { "math_id": 127, "text": "\\mathcal{P}_{\\lambda}" }, { "math_id": 128, "text": "\\{\\mathbf{v}_n\\}_{n=1,\\dots,N_{\\lambda}}" }, { "math_id": 129, "text": "N_{\\lambda}" }, { "math_id": 130, "text": "\\{\\mathbf{u}_n(\\lambda)=\\mathcal{P}_{\\lambda}\\mathbf{v}_n\\}_{n=1,\\dots,N_{\\lambda}}" }, { "math_id": 131, "text": "\\mathbf{u}_n(\\lambda)" }, { "math_id": 132, "text": "\\{\\mathbf{u}_n(\\lambda)\\}_{n=1,\\dots,N_{\\lambda}}" }, { "math_id": 133, "text": "F(m) = \\sum_{k\\in\\mathbb{Z}} \\exp\\left(-\\frac{\\pi\\cdot(m+N\\cdot k)^2}{N}\\right)." }, { "math_id": 134, "text": "F(m) = \\frac1{\\sqrt{N}}\\vartheta_3\\left(\\frac{\\pi m}N, \\exp\\left(-\\frac{\\pi}N \\right)\\right)." 
}, { "math_id": 135, "text": "F(m) = \\prod_{s=K+1}^L \\left[\\cos\\left(\\frac{2\\pi}{N}m\\right) - \\cos\\left(\\frac{2\\pi}{N}s\\right)\\right]" }, { "math_id": 136, "text": "F(m) = \\sin\\left(\\frac{2\\pi}{N}m\\right) \\prod_{s=K+1}^{L-1}\\left[\\cos\\left(\\frac{2\\pi}{N}m\\right)- \\cos\\left(\\frac{2\\pi}{N}s\\right)\\right]" }, { "math_id": 137, "text": "F(m) = \\cos\\left(\\frac{\\pi}{N}m\\right)\\prod_{s=K+1}^{3K-1} \\sin\\left(\\frac{\\pi(s-m)}{N}\\right)" }, { "math_id": 138, "text": "F(m) = \\sin\\left(\\frac{2\\pi}{N}m\\right)\\prod_{s=K+1}^{3K-2} \\sin\\left(\\frac{\\pi(s-m)}{N}\\right)" }, { "math_id": 139, "text": "F(m) = \\left(\\cos\\left(\\frac{2\\pi}{N}m\\right)-\\cos\\left(\\frac{2\\pi}{N}K\\right)\\pm\\sin\\left(\\frac{2\\pi}{N}K\\right)\\right)\\prod_{s=K+1}^{3K-2} \\sin\\left(\\frac{\\pi(s-m)}{N}\\right)" }, { "math_id": 140, "text": "\\sum_{n=0}^{N-1} |X_n|^2 = 1 ," }, { "math_id": 141, "text": "P_n=|X_n|^2" }, { "math_id": 142, "text": "Q_m = N |x_m|^2 ." }, { "math_id": 143, "text": "P(x)" }, { "math_id": 144, "text": "Q(k)" }, { "math_id": 145, "text": "D_0(X)D_0(x)\\ge\\frac{1}{16\\pi^2}" }, { "math_id": 146, "text": "D_0(X)" }, { "math_id": 147, "text": "D_0(x)" }, { "math_id": 148, "text": "|X|^2" }, { "math_id": 149, "text": "|x|^2" }, { "math_id": 150, "text": "H(X)=-\\sum_{n=0}^{N-1} P_n\\ln P_n" }, { "math_id": 151, "text": "H(x)=-\\sum_{m=0}^{N-1} Q_m\\ln Q_m ," }, { "math_id": 152, "text": "H(X)+H(x) \\ge \\ln(N) ." }, { "math_id": 153, "text": "P_n" }, { "math_id": 154, "text": "A" }, { "math_id": 155, "text": "Q_m" }, { "math_id": 156, "text": "B=N/A" }, { "math_id": 157, "text": "\\left\\|x\\right\\|_0" }, { "math_id": 158, "text": "\\left\\|X\\right\\|_0" }, { "math_id": 159, "text": "x_0,x_1,\\ldots,x_{N-1}" }, { "math_id": 160, "text": "X_0,X_1,\\ldots,X_{N-1}" }, { "math_id": 161, "text": "N \\leq \\left\\|x\\right\\|_0 \\cdot \\left\\|X\\right\\|_0." }, { "math_id": 162, "text": "2\\sqrt{N} \\leq \\left\\|x\\right\\|_0 + \\left\\|X\\right\\|_0" }, { "math_id": 163, "text": "x_0, \\ldots, x_{N-1}" }, { "math_id": 164, "text": "X_0, \\ldots, X_{N-1}" }, { "math_id": 165, "text": "x_n \\in \\mathbb{R} \\quad \\forall n \\in \\{0,\\ldots,N-1 \\} \\implies X_k = X_{-k \\mod N}^* \\quad \\forall k \\in \\{0,\\ldots,N-1 \\}" }, { "math_id": 166, "text": "X^*\\," }, { "math_id": 167, "text": "X_0" }, { "math_id": 168, "text": "X_{N/2}" }, { "math_id": 169, "text": "N/2-1" }, { "math_id": 170, "text": "x_n \\in i \\mathbb{R} \\quad \\forall n \\in \\{0,\\ldots,N-1 \\} \\implies X_k = -X_{-k \\mod N}^* \\quad \\forall k \\in \\{0,\\ldots,N-1 \\}" }, { "math_id": 171, "text": "X_k = \\sum_{n=0}^{N-1} x_n e^{-\\frac{i 2 \\pi}{N} (k+b) (n+a)} \\quad \\quad k = 0, \\dots, N-1." 
}, { "math_id": 172, "text": "1/2" }, { "math_id": 173, "text": "a=1/2" }, { "math_id": 174, "text": "X_{k+N} = - X_k" }, { "math_id": 175, "text": "b=1/2" }, { "math_id": 176, "text": "a = b = 1/2" }, { "math_id": 177, "text": "a=b=-(N-1)/2" }, { "math_id": 178, "text": "x_{n_1, n_2, \\dots, n_d}" }, { "math_id": 179, "text": "n_\\ell = 0, 1, \\dots, N_\\ell-1" }, { "math_id": 180, "text": "\\ell" }, { "math_id": 181, "text": "1, 2, \\dots, d" }, { "math_id": 182, "text": "X_{k_1, k_2, \\dots, k_d} = \\sum_{n_1=0}^{N_1-1} \\left(\\omega_{N_1}^{~k_1 n_1} \\sum_{n_2=0}^{N_2-1} \\left( \\omega_{N_2}^{~k_2 n_2} \\cdots \\sum_{n_d=0}^{N_d-1} \\omega_{N_d}^{~k_d n_d}\\cdot x_{n_1, n_2, \\dots, n_d} \\right) \\right) , " }, { "math_id": 183, "text": "\\omega_{N_\\ell} = \\exp(-i 2\\pi/N_\\ell)" }, { "math_id": 184, "text": "k_\\ell = 0, 1, \\dots, N_\\ell-1" }, { "math_id": 185, "text": "\\mathbf{n} = (n_1, n_2, \\dots, n_d)" }, { "math_id": 186, "text": "\\mathbf{k} = (k_1, k_2, \\dots, k_d)" }, { "math_id": 187, "text": "\\mathbf{N} - 1" }, { "math_id": 188, "text": "\\mathbf{N} - 1 = (N_1 - 1, N_2 - 1, \\dots, N_d - 1)" }, { "math_id": 189, "text": "X_\\mathbf{k} = \\sum_{\\mathbf{n}=\\mathbf{0}}^{\\mathbf{N}-1} e^{-i 2\\pi \\mathbf{k} \\cdot (\\mathbf{n} / \\mathbf{N})} x_\\mathbf{n} \\, ," }, { "math_id": 190, "text": "\\mathbf{n} / \\mathbf{N}" }, { "math_id": 191, "text": "\\mathbf{n} / \\mathbf{N} = (n_1/N_1, \\dots, n_d/N_d)" }, { "math_id": 192, "text": "x_\\mathbf{n} = \\frac{1}{\\prod_{\\ell=1}^d N_\\ell} \\sum_{\\mathbf{k}=\\mathbf{0}}^{\\mathbf{N}-1} e^{i 2\\pi \\mathbf{n} \\cdot (\\mathbf{k} / \\mathbf{N})} X_\\mathbf{k} \\, ." }, { "math_id": 193, "text": "\\mathbf{k} / \\mathbf{N}" }, { "math_id": 194, "text": "X_\\mathbf{k}" }, { "math_id": 195, "text": "x_{n_1,n_2}" }, { "math_id": 196, "text": "N_1" }, { "math_id": 197, "text": "n_2" }, { "math_id": 198, "text": "y_{n_1,k_2}" }, { "math_id": 199, "text": "N_2" }, { "math_id": 200, "text": "n_1" }, { "math_id": 201, "text": "X_{k_1,k_2}" }, { "math_id": 202, "text": "X_{k_1, k_2, \\dots, k_d} = X_{N_1 - k_1, N_2 - k_2, \\dots, N_d - k_d}^* ," }, { "math_id": 203, "text": "N_\\ell" }, { "math_id": 204, "text": "\\ell = 1,2,\\ldots,d" }, { "math_id": 205, "text": "x(t)\\," }, { "math_id": 206, "text": "t" }, { "math_id": 207, "text": "x(t)" }, { "math_id": 208, "text": "e^{inx}" }, { "math_id": 209, "text": "{\\text{d} \\big( e^{inx} \\big) }/\\text{d}x = in e^{inx}" }, { "math_id": 210, "text": "in" }, { "math_id": 211, "text": "\\mathbf{c} = \\mathbf{a} * \\mathbf{b}" }, { "math_id": 212, "text": "*\\," }, { "math_id": 213, "text": "c_n = \\sum_{m=0}^{d-1}a_m b_{n-m\\ \\mathrm{mod}\\ d} \\qquad\\qquad\\qquad n=0,1\\dots,d-1" }, { "math_id": 214, "text": "\\mathcal{F}(\\mathbf{c}) = \\mathcal{F}(\\mathbf{a})\\mathcal{F}(\\mathbf{b})" }, { "math_id": 215, "text": "\\mathbf{c} = \\mathcal{F}^{-1}(\\mathcal{F}(\\mathbf{a})\\mathcal{F}(\\mathbf{b}))." }, { "math_id": 216, "text": "123 = 1 \\cdot 10^2 + 2 \\cdot 10^1 + 3 \\cdot 10^0" }, { "math_id": 217, "text": "\\mathbb{C}^n" }, { "math_id": 218, "text": "f" }, { "math_id": 219, "text": "\\mathbb{Z}_n \\mapsto \\mathbb{C}" }, { "math_id": 220, "text": "e^{-\\frac{i 2 \\pi}{N}}" }, { "math_id": 221, "text": "\\omega_N" }, { "math_id": 222, "text": "W_N" }, { "math_id": 223, "text": "\\omega_N^N = 1" }, { "math_id": 224, "text": " \\{0, 1, \\ldots, N_1-1\\} \\times \\cdots \\times \\{0, 1, \\ldots, N_d-1\\} \\to \\mathbb{C}. " } ]
https://en.wikipedia.org/wiki?curid=8811
8811105
Braking distance
Physics concept relating to automobiles Braking distance refers to the distance a vehicle will travel from the point when its brakes are fully applied to when it comes to a complete stop. It is primarily affected by the original speed of the vehicle and the coefficient of friction between the tires and the road surface, and negligibly by the tires' rolling resistance and vehicle's air drag. The type of brake system in use only affects trucks and large mass vehicles, which cannot supply enough force to match the static frictional force. The braking distance is one of two principal components of the total stopping distance. The other component is the reaction distance, which is the product of the speed and the perception-reaction time of the driver/rider. A perception-reaction time of 1.5 seconds and a coefficient of kinetic friction of 0.7 are standard for the purpose of determining a bare baseline for accident reconstruction and judicial notice; most people can stop slightly sooner under ideal conditions. Braking distance is not to be confused with stopping sight distance. The latter is a road alignment visibility standard that provides motorists driving at or below the design speed an assured clear distance ahead (ACDA) which exceeds a safety factor distance that would be required by a slightly or nearly negligent driver to stop under a worst likely case scenario: typically slippery conditions (deceleration 0.35g) and a slow responding driver (2.5 seconds). Because the stopping sight distance far exceeds the actual stopping distance under most conditions, an otherwise capable driver who uses the full stopping sight distance and thereby causes injury may be negligent for not stopping sooner. Derivation. Energy equation. The theoretical braking distance can be found by determining the work required to dissipate the vehicle's kinetic energy. The kinetic energy "E" is given by the formula: formula_0, where "m" is the vehicle's mass and "v" is the speed at the start of braking. The work "W" done by braking is given by: formula_1, where μ is the coefficient of friction between the road surface and the tires, "g" is the gravity of Earth, and "d" is the distance travelled. The braking distance (which is commonly measured as the skid length) given an initial driving speed "v" is then found by putting "W" = "E", from which it follows that formula_2. The maximum speed given an available braking distance "d" is given by: formula_3. Newton's law and equation of motion. From Newton's second law: formula_4 For a level surface, the frictional force resulting from coefficient of friction formula_5 is: formula_6 Equating the two yields the deceleration: formula_7 The formula_8 form of the formulas for constant acceleration is: formula_9 Setting formula_10 and then substituting formula_11 into the equation yields the braking distance: formula_12 Total stopping distance. (The original article includes a table of speeds and the stopping distances permitted by good tires and clean, dry, level pavement.) The total stopping distance is the sum of the perception-reaction distance and the braking distance. formula_13 A common baseline value of formula_14 is used in stopping distance charts. These values incorporate the ability of the vast majority of drivers under normal road conditions. However, a keen and alert driver may have perception-reaction times well below 1 second, and a modern car with computerized anti-skid brakes may have a friction coefficient of 0.9, or with sticky tires even one exceeding 1.0.
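The energy and total stopping distance formulas above are straightforward to evaluate numerically. The short Python sketch below uses the baseline perception-reaction time of 1.5 s and friction coefficient of 0.7 quoted above; the example speed is an arbitrary illustration.

```python
G = 9.81  # standard gravity, m/s^2

def braking_distance(v, mu=0.7):
    """Braking distance d = v^2 / (2 * mu * g), with v in m/s."""
    return v ** 2 / (2 * mu * G)

def total_stopping_distance(v, t_pr=1.5, mu=0.7):
    """Perception-reaction distance v * t_pr plus the braking distance."""
    return v * t_pr + braking_distance(v, mu)

v_kmh = 100.0            # example speed (illustrative only)
v = v_kmh / 3.6          # convert km/h to m/s
print(f"{v_kmh:.0f} km/h: braking {braking_distance(v):.1f} m, "
      f"total {total_stopping_distance(v):.1f} m")
```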
Experts historically used a reaction time of 0.75 seconds, but now incorporate perception as well, resulting in an average perception-reaction time of 1 second for the population as an average; occasionally a two-second rule to simulate the elderly or neophyte; or even a 2.5 second reaction time to specifically accommodate very elderly, debilitated, intoxicated, or distracted drivers. The coefficient of friction may be 0.25 or lower on wet or frozen asphalt, and anti-skid brakes and season specific performance tires may somewhat compensate for driver error and conditions. In legal contexts, conservative values suggestive of greater minimum stopping distances are often used so as to be sure to exceed the pertinent legal burden of proof, with care not to go as far as to condone negligence. Thus, the reaction time chosen can be related to the burden's corresponding population percentile; generally a reaction time of 1 second corresponds to a preponderance of the evidence (more probable than not), 1.5 seconds to clear and convincing evidence, and 2.5 seconds to beyond a reasonable doubt. The same principle applies to the friction coefficient values. Actual total stopping distance. The actual total stopping distance may differ from the baseline value when the road or tire conditions are substantially different from the baseline conditions, or when the driver's cognitive function is superior or deficient. To determine actual total stopping distance, one would typically empirically obtain the coefficient of friction between the tire material and the exact road spot under the same road conditions and temperature. They would also measure the person's perception and reaction times. A driver who has innate reflexes, and thus braking distances, that are far below the safety margins provided in the road design or expected by other users, may not be safe to drive. Most old roads were not engineered with the deficient driver in mind, and often used a defunct 3/4 second reaction time standard. There have been recent road standard changes to make modern roadways more accessible to an increasingly aging population of drivers. For rubber tires on cars, the coefficient of friction (μ) decreases as the mass of the car increases. Additionally, μ depends on whether the wheels are locked or rolling during the braking, and a few more parameters such as rubber temperature (which increases during braking) and speed. Rules of thumb. In a non-metric country, the stopping distance in feet given a velocity in MPH can be approximated as follows: square the tens digit of the velocity, append a zero, divide by 2, then add twice the velocity. Example: velocity = 50 MPH. stopping distance = 5 squared = 25, append a zero = 250, divide by 2 = 125, add 2*50 = 100, giving 225 feet (the exact value can be calculated using the total stopping distance formula given above). In Germany the rule of thumb for the stopping distance in a city in good conditions is the 1-second rule, i.e. the distance covered in 1 second should at most be the distance to the vehicle ahead. At 50 km/h this corresponds to about 15 m. For higher speeds up to about 100 km/h outside built-up areas, a similarly defined 2-second rule applies, which for 100 km/h translates to about 50 m. For speeds on the order of 100 km/h there is also the more or less equivalent rule that the stopping distance in metres be half the speed in km/h, referred to as the "halber Tacho" ("half the speedometer") rule, e.g. for 100 km/h the stopping distance should be about 50 m.
Additionally, German driving schools teach their pupils that the total stopping distance (in metres, for a speed given in km/h) is typically: formula_15 In the UK, the typical total stopping distances (thinking distance plus braking distance) used in The Highway Code are quoted in Rule 126. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
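A minimal sketch of the two rules of thumb quoted above, assuming the usual reading that the German driving-school formula takes the speed in km/h and returns metres, and that the imperial rule works from the tens digit of the speed in mph:

```python
def german_school_rule(speed_kmh):
    """Total stopping distance in metres: (v/10)*3 reaction plus (v/10)**2 braking."""
    return (speed_kmh / 10) * 3 + (speed_kmh / 10) ** 2

def imperial_rule(speed_mph):
    """Stopping distance in feet: square the tens digit, append a zero,
    halve the result, then add twice the speed."""
    tens = speed_mph // 10
    return (tens ** 2) * 10 / 2 + 2 * speed_mph

print(german_school_rule(100))  # 130.0 metres
print(imperial_rule(50))        # 225.0 feet
```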
[ { "math_id": 0, "text": "E=\\frac{1}{2}mv^{2}" }, { "math_id": 1, "text": "W=\\mu mgd" }, { "math_id": 2, "text": "d=\\frac{v^{2}}{2\\mu g}" }, { "math_id": 3, "text": "v=\\sqrt{2\\mu gd}" }, { "math_id": 4, "text": "F=ma" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "F_{frict}=-\\mu mg" }, { "math_id": 7, "text": "a=-\\mu g" }, { "math_id": 8, "text": "d_f(d_i,v_i,v_f)" }, { "math_id": 9, "text": "d_f=d_i + \\frac{v_f^2-v_i^2}{2a}" }, { "math_id": 10, "text": "d_i, v_f =0" }, { "math_id": 11, "text": "a" }, { "math_id": 12, "text": "d_f=\\frac{-v_i^2}{2a}=\\frac{v_i^2}{2 \\mu g}" }, { "math_id": 13, "text": "D_{total}=D_{p-r}+D_{braking}=v t_{p-r}+ \\frac{v^2}{2 \\mu g}" }, { "math_id": 14, "text": "t_{p-r}=1.5 s, \\mu=0.7" }, { "math_id": 15, "text": "(Speed\\div10)\\times3+(Speed\\div10)^2" } ]
https://en.wikipedia.org/wiki?curid=8811105
881269
Rotational transition
Abrupt change in a quantum particle's angular momentum In quantum mechanics, a rotational transition is an abrupt change in angular momentum. Like all other properties of a quantum particle, angular momentum is quantized, meaning it can only equal certain discrete values, which correspond to different rotational energy states. When a particle loses angular momentum, it is said to have transitioned to a lower rotational energy state. Likewise, when a particle gains angular momentum, a positive rotational transition is said to have occurred. Rotational transitions are important in physics due to the unique spectral lines that result. Because there is a net gain or loss of energy during a transition, electromagnetic radiation of a particular frequency must be absorbed or emitted. This forms spectral lines at that frequency which can be detected with a spectrometer, as in rotational spectroscopy or Raman spectroscopy. Diatomic molecules. Molecules have rotational energy owing to rotational motion of the nuclei about their center of mass. Due to quantization, these energies can take only certain discrete values. A rotational transition thus corresponds to a transition of the molecule from one rotational energy level to another through gain or loss of a photon. The analysis is simple in the case of diatomic molecules. Nuclear wave function. Quantum theoretical analysis of a molecule is simplified by use of the Born–Oppenheimer approximation. Typically, rotational energies of molecules are smaller than electronic transition energies by a factor of "m"/"M" ≈ 10^−3–10^−5, where "m" is the electronic mass and "M" is a typical nuclear mass. From the uncertainty principle, the period of a motion is of the order of the Planck constant "h" divided by its energy. Hence nuclear rotational periods are much longer than the electronic periods, so electronic and nuclear motions can be treated separately. In the simple case of a diatomic molecule, the radial part of the Schrödinger equation for a nuclear wave function "F""s"(R), in an electronic state "s", is written as (neglecting spin interactions) formula_0 where "μ" is the reduced mass of the two nuclei, R is the vector joining the two nuclei, "E""s"("R") is the energy eigenvalue of the electronic wave function Φ"s" representing the electronic state "s", and "N" is the orbital angular momentum operator for the relative motion of the two nuclei, given by formula_1 The total wave function for the molecule is formula_2 where r"i" are the position vectors from the center of mass of the molecule to the "i"th electron. As a consequence of the Born–Oppenheimer approximation, the electronic wave function Φs is considered to vary very slowly with R. Thus the Schrödinger equation for an electronic wave function is first solved to obtain "E""s"("R") for different values of "R". "E""s" then plays the role of a potential well in the analysis of the nuclear wave functions "F""s"(R). Rotational energy levels. The first term in the above nuclear wave function equation corresponds to the kinetic energy of the nuclei due to their radial motion. The second term represents the rotational kinetic energy of the two nuclei, about their center of mass, in the given electronic state Φ"s". Its possible values are the different rotational energy levels of the molecule. The orbital angular momentum for the rotational motion of the nuclei can be written as formula_3 where J is the total orbital angular momentum of the whole molecule and L is the orbital angular momentum of the electrons.
If the internuclear vector R is taken along the z-axis, the component of N along the z-axis, "N""z", becomes zero, since formula_4 Hence formula_5 Since the molecular wave function Ψs is a simultaneous eigenfunction of "J"^2 and "J""z", formula_6 where "J" is called the rotational quantum number and can be a positive integer or zero. formula_7 where −"J" ≤ "M""j" ≤ "J". Also, since the electronic wave function Φ"s" is an eigenfunction of "L""z", formula_8 Hence the molecular wave function Ψs is also an eigenfunction of "L""z" with eigenvalue ±Λ"ħ". Since "L""z" and "J""z" are equal, Ψ"s" is an eigenfunction of "J""z" with the same eigenvalue ±Λ"ħ". As the magnitude of the "J""z" eigenvalue cannot exceed "J""ħ", we have "J" ≥ Λ. So the possible values of the rotational quantum number are formula_9 Thus the molecular wave function Ψ"s" is a simultaneous eigenfunction of "J"^2, "J""z" and "L""z". Since the molecule is in an eigenstate of "L""z", the expectation value of the components perpendicular to the direction of the z-axis (the internuclear line) is zero. Hence formula_10 and formula_11 Thus formula_12 Putting all these results together, formula_13 The Schrödinger equation for the nuclear wave function can now be rewritten as formula_14 where formula_15 "E"′s now serves as an effective potential in the radial nuclear wave function equation. Sigma states. Molecular states in which the total orbital angular momentum of the electrons is zero are called sigma states. In sigma states Λ = 0, and thus "E"′s("R") = "E"s("R"). As the nuclear motion of a stable molecule is generally confined to a small interval around "R"0, where "R"0 corresponds to the internuclear distance at the minimum value of the potential, "E"s("R"0), the rotational energies are given by formula_16 with formula_9 Here "I"0 is the moment of inertia of the molecule corresponding to the equilibrium distance "R"0, and "B" is called the rotational constant for the given electronic state Φ"s". Since the reduced mass "μ" is much greater than the electronic mass, the last two terms in the expression for "E"′"s"("R") are small compared to "E"s. Hence, even for states other than sigma states, the rotational energy is approximately given by the above expression. Rotational spectrum. When a rotational transition occurs, there is a change in the value of the rotational quantum number "J". The selection rules for a rotational transition are Δ"J" = ±1 when Λ = 0, and Δ"J" = 0, ±1 when Λ ≠ 0, since the absorbed or emitted photon can make an equal and opposite change in the total nuclear angular momentum and the total electronic angular momentum without changing the value of "J". The pure rotational spectrum of a diatomic molecule consists of lines in the far infrared or microwave region. The frequency of these lines is given by formula_17 Thus the values of "B", "I"0 and "R"0 of a substance can be determined from its observed rotational spectrum.
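As a numerical illustration of the last two formulas (not part of the original article), the Python sketch below estimates the rotational constant "B" = ħ²/(2μ"R"0²) and the first few absorption line frequencies 2"B"("J" + 1)/"h" for a diatomic molecule; the reduced mass and bond length used are rough values for carbon monoxide and are assumptions of this example.

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
H = 6.62607015e-34       # Planck constant, J*s
AMU = 1.66053906660e-27  # atomic mass unit, kg

# Rough values for CO, assumed for illustration only.
mu = (12.0 * 15.995) / (12.0 + 15.995) * AMU  # reduced mass, kg
r0 = 1.128e-10                                # equilibrium bond length, m

I0 = mu * r0 ** 2           # moment of inertia
B = HBAR ** 2 / (2 * I0)    # rotational constant, joules

# Pure rotational absorption lines J -> J+1 have photon energy 2B(J+1).
for J in range(4):
    freq_ghz = 2 * B * (J + 1) / H / 1e9
    print(f"J = {J} -> {J + 1}: {freq_ghz:6.1f} GHz")
```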
[ { "math_id": 0, "text": "\\left[- \\frac{\\hbar^2}{2\\mu R^2} \\frac{\\partial}{\\partial R} \\left(R^2 \\frac{\\partial}{\\partial R}\\right)+ \\frac{\\langle \\Phi_s|N^2|\\Phi_s \\rangle}{2\\mu R^2}+ E_s(R)-E\\right]F_s(\\mathbf R) = 0 " }, { "math_id": 1, "text": " N^2 = -\\hbar^2 \\left[ \\frac{1}{\\sin\\Theta} \\frac{\\partial}{\\partial \\Theta}\\left(\\sin \\Theta \\frac{\\partial}{\\partial \\Theta}\\right)+ \\frac{1}{\\sin^2\\Theta} \\frac{\\partial^2}{\\partial \\Phi^2} \\right] " }, { "math_id": 2, "text": " \\Psi_s = F_s(\\mathbf R)\\Phi_s(\\mathbf R,\\mathbf r_1, \\mathbf r_2, \\dots, \\mathbf r_N)" }, { "math_id": 3, "text": " \\mathbf N = \\mathbf J - \\mathbf L " }, { "math_id": 4, "text": " \\mathbf N = \\mathbf R \\times \\mathbf P " }, { "math_id": 5, "text": " J_z = L_z " }, { "math_id": 6, "text": " J^2 \\Psi_s = J(J+1) \\hbar^2 \\Psi_s " }, { "math_id": 7, "text": " J_z \\Psi_s = M_j\\hbar \\Psi_s " }, { "math_id": 8, "text": " L_z \\Phi_s = \\pm \\Lambda\\hbar \\Phi_s " }, { "math_id": 9, "text": " J = \\Lambda, \\Lambda +1, \\Lambda+2, \\dots " }, { "math_id": 10, "text": " \\langle \\Psi_s|L_x|\\Psi_s\\rangle = \\langle L_x \\rangle = 0 " }, { "math_id": 11, "text": " \\langle \\Psi_s|L_y|\\Psi_s\\rangle = \\langle L_y \\rangle = 0 " }, { "math_id": 12, "text": " \\langle \\mathbf J . \\mathbf L \\rangle = \\langle J_z L_z \\rangle = \\langle {L_z}^2 \\rangle " }, { "math_id": 13, "text": " \\begin{align}\n\\langle \\Phi_s |N^2|\\Phi_s \\rangle F_s(\\mathbf R) &= \\langle \\Phi_s | \\left(J^2 + L^2 - 2 \\mathbf J \\cdot \\mathbf L\\right) |\\Phi_s \\rangle F_s(\\mathbf R) \\\\\n&= \\hbar^2 \\left[J(J+1)-\\Lambda^2\\right] F_s(\\mathbf R) + \\langle \\Phi_s | \\left({L_x}^2 + {L_y}^2\\right) |\\Phi_s \\rangle F_s(\\mathbf R)\n\\end{align}" }, { "math_id": 14, "text": "- \\frac{\\hbar^2}{2\\mu R^2}\\left[ \\frac{\\partial}{\\partial R} \\left(R^2 \\frac{\\partial}{\\partial R}\\right)- J(J+1)\\right]F_s(\\mathbf R)+[{E'}_s(R)-E]F_s(\\mathbf R) = 0 " }, { "math_id": 15, "text": " {E'}_s(R) = E_s(R) - \\frac{\\Lambda^2 \\hbar^2}{2\\mu R^2} + \\frac{1}{2\\mu R^2} \\langle \\Phi_s |\\left({L_x}^2 + {L_y}^2\\right)|\\Phi_s \\rangle " }, { "math_id": 16, "text": " E_r = \\frac{\\hbar^2}{2\\mu {R_0}^2} J(J+1) = \\frac{\\hbar^2}{2I_0} J(J+1) = BJ(J+1) " }, { "math_id": 17, "text": " \\hbar \\omega = E_r(J+1)-E_r(J) = 2B(J+1) " } ]
https://en.wikipedia.org/wiki?curid=881269
8812794
Molecular replacement
Molecular replacement (MR) is a method of solving the phase problem in X-ray crystallography. MR relies upon the existence of a previously solved protein structure which is similar to our unknown structure from which the diffraction data is derived. This could come from a homologous protein, or from the lower-resolution protein NMR structure of the same protein. The first goal of the crystallographer is to obtain an electron density map, the density being related to the diffracted wave as follows: formula_0 With usual detectors only the intensity formula_1 is measured, and all information about the phase (formula_2) is lost. Then, in the absence of phases (Φ), we are unable to complete the shown Fourier transform relating the experimental data from X-ray crystallography (in reciprocal space) to real-space electron density, into which the atomic model is built. MR tries to find, among known structures, the model which best fits the experimental intensities. Principles of Patterson-based molecular replacement. We can derive a Patterson map for the intensities, which is an interatomic vector map created by squaring the structure factor amplitudes and setting all phases to zero. This vector map contains a peak for each atom related to every other atom, with a large peak at 0,0,0, where vectors relating atoms to themselves "pile up". Such a map is far too noisy to derive any high resolution structural information; however, if we generate Patterson maps for the data derived from our unknown structure and from the structure of a previously solved homologue, in the correct orientation and position within the unit cell, the two Patterson maps should be closely correlated. This principle lies at the heart of MR, and can allow us to infer information about the orientation and location of an unknown molecule within its unit cell. Due to historic limitations in computing power, an MR search is typically divided into two steps: rotation and translation. Rotation function. In the rotation function, our unknown Patterson map is compared to Patterson maps derived from our known homologue structure in different orientations. Historically, r-factors and/or correlation coefficients were used to score the rotation function; modern programs, however, use maximum likelihood-based algorithms. The highest correlations (and therefore scores) are obtained when the two structures (known and unknown) are in similar orientation(s); these can then be output in Euler angles or spherical polar angles. Translation function. In the translation function, the now correctly oriented known model can be correctly positioned by translating it to the correct co-ordinates within the asymmetric unit. This is accomplished by moving the model, calculating a new Patterson map, and comparing it to the unknown-derived Patterson map. This brute-force search is computationally expensive, and fast translation functions are now more commonly used. Positions with high correlations are output in Cartesian coordinates. Using "de novo" predicted structures in molecular replacement. With the improvement of "de novo" protein structure prediction, many protocols including MR-Rosetta, QUARK, AWSEM-Suite and I-TASSER-MR can generate many native-like decoy structures that are useful for solving the phase problem by molecular replacement. The next step. Following this, we should have correctly oriented and translated phasing models, from which we can derive phases which are (hopefully) accurate enough to derive electron density maps.
These can be used to build and refine an atomic model of our unknown structure. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
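A toy numerical illustration of the Patterson idea described above (a schematic sketch only, not how crystallographic software is organized): starting from structure factors on a grid, squaring the amplitudes and discarding the phases before the inverse Fourier transform yields the interatomic-vector (Patterson) map, complete with its large origin peak.

```python
import numpy as np

# Toy "electron density": three point atoms on a small 3-D grid (illustrative).
n = 32
rho = np.zeros((n, n, n))
for atom in [(5, 7, 9), (12, 20, 4), (25, 11, 30)]:
    rho[atom] = 1.0

# Structure factors are the Fourier transform of the density.
F = np.fft.fftn(rho)

# Patterson map: square the amplitudes, set all phases to zero, transform back.
patterson = np.fft.ifftn(np.abs(F) ** 2).real

print(f"origin peak: {patterson[0, 0, 0]:.1f}  (one contribution per atom)")
```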
[ { "math_id": 0, "text": "\\rho(x,y,z)=\\frac{1}{V} \\sum_h\\sum_k\\sum_\\ell|F_{hk\\ell}|\\exp(2\\pi i(hx+ky+\\ell z)+i\\Phi(hk\\ell))." }, { "math_id": 1, "text": "I=F\\cdot F^*" }, { "math_id": 2, "text": "\\Phi" } ]
https://en.wikipedia.org/wiki?curid=8812794
881311
Aperture synthesis
Mixing signals from many telescopes to produce images with high angular resolution Aperture synthesis or synthesis imaging is a type of interferometry that mixes signals from a collection of telescopes to produce images having the same angular resolution as an instrument the size of the entire collection. At each separation and orientation, the lobe-pattern of the interferometer produces an output which is one component of the Fourier transform of the spatial distribution of the brightness of the observed object. The image (or "map") of the source is produced from these measurements. Astronomical interferometers are commonly used for high-resolution optical, infrared, submillimetre and radio astronomy observations. For example, the Event Horizon Telescope project derived the first image of a black hole using aperture synthesis. Technical issues. Aperture synthesis is possible only if both the amplitude and the phase of the incoming signal are measured by each telescope. For radio frequencies, this is possible with electronics, while for optical frequencies, the electromagnetic field cannot be measured directly and correlated in software, but must be propagated by sensitive optics and interfered optically. Accurate optical delay and atmospheric wavefront aberration correction are required, a very demanding technology that became possible only in the 1990s. This is why imaging with aperture synthesis has been used successfully in radio astronomy since the 1950s and in optical/infrared astronomy only since the turn of the millennium. See astronomical interferometer for more information. In order to produce a high quality image, a large number of different separations between different telescopes is required (the projected separation between any two telescopes as seen from the radio source is called a baseline) – as many different baselines as possible are required in order to get a good quality image. The number of baselines ("n"b) for an array of "n" telescopes is given by "n"b = ("n"^2 − "n")/2. (This is formula_0 or "n"C2). For example, the Very Large Array has 27 telescopes giving 351 independent baselines at once, and can give high quality images. In contrast to radio arrays, the largest optical arrays currently have only 6 telescopes, giving poorer image quality from the 15 baselines between the telescopes. Most radio frequency aperture synthesis interferometers use the rotation of the Earth to increase the number of different baselines included in an observation (see diagram on right). Taking data at different times provides measurements with different telescope separations and angles without the need for additional telescopes or moving the telescopes manually, as the rotation of the Earth moves the telescopes to new baselines. The use of Earth rotation was discussed in detail in the 1950 paper "A preliminary survey of the radio stars in the Northern Hemisphere". Some instruments use artificial rotation of the interferometer array instead of Earth rotation, such as in aperture masking interferometry. History. The concept of aperture synthesis was first formulated in 1946 by Australian radio astronomers Ruby Payne-Scott and Joseph Pawsey. Working from Dover Heights in Sydney, Payne-Scott carried out the earliest interferometer observations in radio astronomy on 26 January 1946 using an Australian Army radar as a radio telescope. Aperture synthesis imaging was later developed at radio wavelengths by Martin Ryle and coworkers from the Radio Astronomy Group at Cambridge University.
Martin Ryle and Tony Hewish jointly received a Nobel Prize for this and other contributions to the development of radio interferometry. The radio astronomy group in Cambridge went on to found the Mullard Radio Astronomy Observatory near Cambridge in the 1950s. During the late 1960s and early 1970s, as computers (such as the Titan) became capable of handling the computationally intensive Fourier transform inversions required, they used aperture synthesis to create a 'One-Mile' and later a '5 km' effective aperture using the One-Mile and Ryle telescopes, respectively. The technique was subsequently further developed in very-long-baseline interferometry to obtain baselines of thousands of kilometers and even in optical telescopes. The term "aperture synthesis" can also refer to a type of radar system known as synthetic aperture radar, but this is technically unrelated to the radio astronomy method and developed independently. Originally it was thought necessary to make measurements at essentially every baseline length and orientation out to some maximum: such a fully sampled Fourier transform formally contains the information exactly equivalent to the image from a conventional telescope with an aperture diameter equal to the maximum baseline, hence the name "aperture synthesis". It was rapidly discovered that in many cases, useful images could be made with a relatively sparse and irregular set of baselines, especially with the help of non-linear deconvolution algorithms such as the maximum entropy method. The alternative name "synthesis imaging" acknowledges the shift in emphasis from trying to synthesize the complete aperture (allowing image reconstruction by Fourier transform) to trying to synthesize the image from whatever data is available, using powerful but computationally expensive algorithms.
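A trivial illustration of the baseline count "n"b = ("n"^2 − "n")/2 quoted above, evaluated for two of the arrays mentioned in the text:

```python
def n_baselines(n_telescopes: int) -> int:
    """Number of independent baselines for an array of n telescopes."""
    return n_telescopes * (n_telescopes - 1) // 2

for name, n in [("6-element optical array", 6), ("Very Large Array", 27)]:
    print(f"{name}: {n_baselines(n)} baselines")
```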
[ { "math_id": 0, "text": "\\binom{n}{2}" } ]
https://en.wikipedia.org/wiki?curid=881311
8814960
Kenneth Stewart Cole
American biophysicist (1900–1984) Kenneth Stewart Cole (July 10, 1900 – April 18, 1984) was an American biophysicist described by his peers as "a pioneer in the application of physical science to biology". Cole was awarded the National Medal of Science in 1967. Biography. He was born on July 10, 1900, in Ithaca, New York, to Charles Nelson Cole, an instructor in Latin at Cornell University, and Mabel Stewart. Kenneth had a younger brother, Robert H. Cole, with whom he remained very close throughout his life despite a large difference in age; they were joint authors of four papers published between 1936 and 1942. In 1902 the family moved to Oberlin, Ohio, when his father took a post at Oberlin College. His father would later become the Dean there. Cole graduated from Oberlin College in 1922 and received a Ph.D. in physics with Floyd K. Richtmyer from Cornell University in 1926. He spent summers working at the General Electric laboratory in Schenectady, New York. In 1932, Cole married Elizabeth Evans Roberts, an attorney. Later, her work was mostly concerned with civil rights, and in 1957 she joined the staff of the United States Commission on Civil Rights. Kenneth joined the staff of Columbia University in 1937 and remained there until 1946. He had also been associated with the Presbyterian Hospital, and the Guggenheim Foundation for Advanced Study at Princeton University and the University of Chicago. From 1949 to 1954 he was the technical director of the Naval Medicine Research Institute in Bethesda, Maryland. In 1954 he became chief of the laboratory of biophysics of the National Institute of Neurological Diseases and Blindness. He achieved advances that led to the "sodium theory" of nerve transmission that later won Nobel Prizes for Alan L. Hodgkin and Andrew F. Huxley in 1963. Cole was elected a Fellow of the American Physical Society in 1931, a member of the National Academy of Sciences in 1956, and a Fellow of the American Academy of Arts and Sciences in 1964. He was awarded the National Medal of Science in 1967; the award citation read: "As a result, we know far more about how the nervous system functions." In 1972 he was made a member of the Royal Society of London. The Biophysical Society awards the Kenneth S. Cole medal to a scientist studying cell membranes. In 1980 he became an adjunct professor of the Department of Neurosciences at the Scripps Institute of Oceanography in San Diego. He had a son, Roger Braley Cole, and a daughter, Sarah Roberts Cole. He died on April 18, 1984, in La Jolla, California. Electrical Model of Tissue. Tissue can be modeled as an electrical circuit with resistive and capacitive properties. Its dispersion and absorption are represented by the empirical formula: formula_0 In this equation formula_1 is the complex dielectric constant, formula_2 and formula_3 are the "static" and "infinite frequency" dielectric constants, formula_4 times the frequency, and formula_5 is a generalized relaxation time. The parameter formula_6 can assume values between 0 and 1, the former value giving the result of Debye for polar dielectrics. This expression requires that the locus of the dielectric constant in the complex plane be a circular arc with end points on the axis of reals and center below the axis. It is worth emphasizing that the Cole–Cole model is an empirical model of the measured data.
It has been successfully applied to a wide variety of tissues over the past 60 years, but it does not give any information about the underlying causes of the phenomena being measured. Several references in the literature use a form of the Cole equation written in terms of impedance instead of a complex permittivity. The impedance formula_7 is given by: formula_8 where formula_9 and formula_10 are the resistances at zero frequency (i.e. DC) and infinity, respectively. formula_11 is often referred to as the characteristic frequency. The characteristic frequency is not the same when the analysis is carried out in terms of the complex permittivity. A simple interpretation of the above equation is in terms of a circuit where a resistance formula_12 is in series with a capacitor formula_13, and this combination is placed in parallel with a resistance formula_14. In this case formula_15 and formula_16. It can be shown that formula_11 is given by formula_17. Electrical measurements of tissues. In a series of papers in the 1930s and 1940s, he experimentally studied the electrical properties of living tissues, such as Nitella, frog eggs, and, most famously, the squid giant axon. Figure 4 of one of these papers is sometimes used as an artistic representation of biophysics. It also appeared, rotated 90 degrees, in Swedish apartments as modern art. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
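A small Python sketch of the impedance form of the Cole equation given above; the parameter values are arbitrary illustrative choices, not tissue measurements.

```python
import numpy as np

def cole_impedance(f, r0, r_inf, fc, alpha):
    """Cole model impedance Z = R_inf + (R_0 - R_inf) / (1 + (j f / f_c)**(1 - alpha))."""
    return r_inf + (r0 - r_inf) / (1 + (1j * f / fc) ** (1 - alpha))

# Illustrative parameters only.
freqs = np.logspace(1, 7, 7)  # 10 Hz to 10 MHz
z = cole_impedance(freqs, r0=1000.0, r_inf=100.0, fc=50e3, alpha=0.2)

for f, zi in zip(freqs, z):
    print(f"{f:10.0f} Hz   |Z| = {abs(zi):7.1f} ohm   phase = {np.degrees(np.angle(zi)):6.1f} deg")
```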
[ { "math_id": 0, "text": "\\epsilon^* - \\epsilon_\\infty = \\dfrac{\\epsilon_0 - \\epsilon_\\infty}{1 + (i\\omega\\tau_0)^{1-\\alpha}}" }, { "math_id": 1, "text": "\\epsilon^*" }, { "math_id": 2, "text": " {\\epsilon_0 }" }, { "math_id": 3, "text": "\\epsilon_\\infty" }, { "math_id": 4, "text": "\\omega = 2\\pi" }, { "math_id": 5, "text": "\\tau_0" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "Z" }, { "math_id": 8, "text": "\nZ = R_\\infty + \\frac{R_0-R_\\infty}{1+(\\tfrac{jf}{f_c})^{1-\\alpha}}\n" }, { "math_id": 9, "text": "R_0" }, { "math_id": 10, "text": "R_\\infty" }, { "math_id": 11, "text": "f_c" }, { "math_id": 12, "text": "S" }, { "math_id": 13, "text": "C" }, { "math_id": 14, "text": "R" }, { "math_id": 15, "text": "R_0 = R" }, { "math_id": 16, "text": "R_\\infty\\ = \\tfrac{RS}{R+S}" }, { "math_id": 17, "text": "f_c=\\tfrac{1}{2\\pi C(R+S)}" } ]
https://en.wikipedia.org/wiki?curid=8814960
8815
Dual polyhedron
Polyhedron associated with another by swapping vertices for faces In geometry, every polyhedron is associated with a second dual structure, where the vertices of one correspond to the faces of the other, and the edges between pairs of vertices of one correspond to the edges between pairs of faces of the other. Such dual figures remain combinatorial or abstract polyhedra, but not all can also be constructed as geometric polyhedra. Starting with any given polyhedron, the dual of its dual is the original polyhedron. Duality preserves the symmetries of a polyhedron. Therefore, for many classes of polyhedra defined by their symmetries, the duals belong to a corresponding symmetry class. For example, the regular polyhedra – the (convex) Platonic solids and (star) Kepler–Poinsot polyhedra – form dual pairs, where the regular tetrahedron is self-dual. The dual of an isogonal polyhedron (one in which any two vertices are equivalent under symmetries of the polyhedron) is an isohedral polyhedron (one in which any two faces are equivalent [...]), and vice versa. The dual of an isotoxal polyhedron (one in which any two edges are equivalent [...]) is also isotoxal. Duality is closely related to "polar reciprocity", a geometric transformation that, when applied to a convex polyhedron, realizes the dual polyhedron as another convex polyhedron. Kinds of duality. There are many kinds of duality. The kinds most relevant to elementary polyhedra are polar reciprocity and topological or abstract duality. Polar reciprocation. In Euclidean space, the dual of a polyhedron formula_0 is often defined in terms of polar reciprocation about a sphere. Here, each vertex (pole) is associated with a face plane (polar plane or just polar) so that the ray from the center to the vertex is perpendicular to the plane, and the product of the distances from the center to each is equal to the square of the radius. When the sphere has radius formula_1 and is centered at the origin (so that it is defined by the equation formula_2), then the polar dual of a convex polyhedron formula_0 is defined as &lt;templatestyles src="Block indent/styles.css"/&gt;formula_3 for all formula_4 in formula_5 where formula_6 denotes the standard dot product of formula_7 and formula_4. Typically when no sphere is specified in the construction of the dual, then the unit sphere is used, meaning formula_8 in the above definitions. For each face plane of formula_0 described by the linear equation formula_9 the corresponding vertex of the dual polyhedron formula_10 will have coordinates formula_11. Similarly, each vertex of formula_0 corresponds to a face plane of formula_10, and each edge line of formula_0 corresponds to an edge line of formula_10. The correspondence between the vertices, edges, and faces of formula_0 and formula_10 reverses inclusion. For example, if an edge of formula_0 contains a vertex, the corresponding edge of formula_10 will be contained in the corresponding face. For a polyhedron with a center of symmetry, it is common to use a sphere centered on this point, as in the Dorman Luke construction (mentioned below). Failing that, for a polyhedron with a circumscribed sphere, inscribed sphere, or midsphere (one with all edges as tangents), this can be used. However, it is possible to reciprocate a polyhedron about any sphere, and the resulting form of the dual will depend on the size and position of the sphere; as the sphere is varied, so too is the dual form. 
The choice of center for the sphere is sufficient to define the dual up to similarity. If a polyhedron in Euclidean space has a face plane, edge line, or vertex lying on the center of the sphere, the corresponding element of its dual will go to infinity. Since Euclidean space never reaches infinity, the projective equivalent, called extended Euclidean space, may be formed by adding the required 'plane at infinity'. Some theorists prefer to stick to Euclidean space and say that there is no dual. Meanwhile, a way has been found to represent these infinite duals, in a manner suitable for making models (of some finite portion). The concept of "duality" here is closely related to the duality in projective geometry, where lines and edges are interchanged. Projective polarity works well enough for convex polyhedra. But for non-convex figures such as star polyhedra, when we seek to rigorously define this form of polyhedral duality in terms of projective polarity, various problems appear. Because of the definitional issues for geometric duality of non-convex polyhedra, it has been argued that any proper definition of a non-convex polyhedron should include a notion of a dual polyhedron. Canonical duals. Any convex polyhedron can be distorted into a canonical form, in which a unit midsphere (or intersphere) exists tangent to every edge, and such that the average position of the points of tangency is the center of the sphere. This form is unique up to congruences. If we reciprocate such a canonical polyhedron about its midsphere, the dual polyhedron will share the same edge-tangency points, and thus will also be canonical. It is the canonical dual, and the two together form a canonical dual compound. Dorman Luke construction. For a uniform polyhedron, each face of the dual polyhedron may be derived from the original polyhedron's corresponding vertex figure by using the Dorman Luke construction. Topological duality. Even when a pair of polyhedra cannot be obtained by reciprocation from each other, they may be called duals of each other as long as the vertices of one correspond to the faces of the other, and the edges of one correspond to the edges of the other, in an incidence-preserving way. Such pairs of polyhedra are still topologically or abstractly dual. The vertices and edges of a convex polyhedron form a graph (the 1-skeleton of the polyhedron), embedded on the surface of the polyhedron (a topological sphere). This graph can be projected to form a Schlegel diagram on a flat plane. The graph formed by the vertices and edges of the dual polyhedron is the dual graph of the original graph. More generally, for any polyhedron whose faces form a closed surface, the vertices and edges of the polyhedron form a graph embedded on this surface, and the vertices and edges of the (abstract) dual polyhedron form the dual graph of the original graph. An abstract polyhedron is a certain kind of partially ordered set (poset) of elements, such that incidences, or connections, between elements of the set correspond to incidences between elements (faces, edges, vertices) of a polyhedron. Every such poset has a dual poset, formed by reversing all of the order relations. If the poset is visualized as a Hasse diagram, the dual poset can be visualized simply by turning the Hasse diagram upside down. Every geometric polyhedron corresponds to an abstract polyhedron in this way, and has an abstract dual polyhedron. However, for some types of non-convex geometric polyhedra, the dual polyhedra may not be realizable geometrically.
Self-dual polyhedra. Topologically, a polyhedron is said to be self-dual if its dual has exactly the same connectivity between vertices, edges, and faces. Abstractly, they have the same Hasse diagram. A geometrically self-dual polyhedron is not only topologically self-dual, but its polar reciprocal about a certain point, typically its centroid, is a similar figure. For example, the dual of a regular tetrahedron is another regular tetrahedron, reflected through the origin. Every polygon is topologically self-dual, since it has the same number of vertices as edges, and these are switched by duality. But it is not necessarily self-dual (up to rigid motion, for instance). Every polygon has a regular form which is geometrically self-dual about its intersphere: all angles are congruent, as are all edges, so under duality these congruences swap. Similarly, every topologically self-dual convex polyhedron can be realized by an equivalent geometrically self-dual polyhedron, its canonical polyhedron, reciprocal about the center of the midsphere. There are infinitely many geometrically self-dual polyhedra. The simplest infinite family is the pyramids. Another infinite family, elongated pyramids, consists of polyhedra that can be roughly described as a pyramid sitting on top of a prism (with the same number of sides). Adding a frustum (pyramid with the top cut off) below the prism generates another infinite family, and so on. There are many other convex self-dual polyhedra. For example, there are 6 different ones with 7 vertices and 16 with 8 vertices. A self-dual non-convex icosahedron with hexagonal faces was identified by Brückner in 1900. Other non-convex self-dual polyhedra have been found, under certain definitions of non-convex polyhedra and their duals. Dual polytopes and tessellations. Duality can be generalized to "n"-dimensional space and dual polytopes; in two dimensions these are called dual polygons. The vertices of one polytope correspond to the ("n" − 1)-dimensional elements, or facets, of the other, and the "j" points that define a ("j" − 1)-dimensional element will correspond to "j" hyperplanes that intersect to give a ("n" − "j")-dimensional element. The dual of an "n"-dimensional tessellation or honeycomb can be defined similarly. In general, the facets of a polytope's dual will be the topological duals of the polytope's vertex figures. For the polar reciprocals of the regular and uniform polytopes, the dual facets will be polar reciprocals of the original's vertex figure. For example, in four dimensions, the vertex figure of the 600-cell is the icosahedron; the dual of the 600-cell is the 120-cell, whose facets are dodecahedra, which are the dual of the icosahedron. Self-dual polytopes and tessellations. The primary class of self-dual polytopes are regular polytopes with palindromic Schläfli symbols. All regular polygons {a} are self-dual, as are polyhedra of the form {a,a}, 4-polytopes of the form {a,b,a}, 5-polytopes of the form {a,b,b,a}, etc. The self-dual regular convex polytopes are the regular polygons {a}, the regular simplices {3,3,...,3} (including the tetrahedron {3,3}), and the 24-cell {3,4,3}. The self-dual (infinite) regular Euclidean honeycombs are the apeirogon {∞}, the square tiling {4,4}, the cubic honeycomb {4,3,4}, and, in general, all hypercubic honeycombs {4,3,...,3,4}. The self-dual (infinite) regular hyperbolic honeycombs include the hyperbolic tilings {p,p}, the compact honeycombs {3,5,3} and {5,3,5}, and paracompact honeycombs such as {3,6,3}, {6,3,6} and {4,4,4}. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
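As a concrete numerical illustration of the polar reciprocation described earlier (a sketch, with the unit sphere and a specific cube chosen for the example): each face plane n · x = d of a convex polyhedron containing the origin maps to the dual vertex (r²/d) n, so the six faces of the cube |x|, |y|, |z| ≤ 1 map to the six vertices of an octahedron.

```python
import numpy as np

def polar_dual_vertices(face_normals, face_offsets, r=1.0):
    """Map each face plane n . x = d (origin inside the polyhedron) to the
    dual vertex (r**2 / d) * n, reciprocating about a sphere of radius r."""
    face_normals = np.asarray(face_normals, dtype=float)
    face_offsets = np.asarray(face_offsets, dtype=float)
    return (r ** 2 / face_offsets)[:, None] * face_normals

# The cube |x|, |y|, |z| <= 1 has six face planes with unit normals and offset 1.
normals = [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]]
offsets = [1, 1, 1, 1, 1, 1]

print(polar_dual_vertices(normals, offsets))  # vertices of the dual octahedron
```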
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "x^2 + y^2 + z^2 = r^2" }, { "math_id": 3, "text": "P^\\circ = \\{ q~\\big|~q \\cdot p \\leq r^2" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "P \\} ," }, { "math_id": 6, "text": "q \\cdot p" }, { "math_id": 7, "text": "q" }, { "math_id": 8, "text": "r=1" }, { "math_id": 9, "text": "x_0x + y_0y + z_0z = r^2," }, { "math_id": 10, "text": "P^\\circ" }, { "math_id": 11, "text": "(x_0,y_0,z_0)" } ]
https://en.wikipedia.org/wiki?curid=8815
881512
C1
C1, C01, C.I or C-1 may refer to: &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; See also. &lt;templatestyles src="Dmbox/styles.css" /&gt; Topics referred to by the same term. This page lists articles associated with the same title formed as a letter–number combination.
[ { "math_id": 0, "text": "C^1" } ]
https://en.wikipedia.org/wiki?curid=881512
8816788
History of Grandi's series
Geometry and infinite zeros. Grandi. Guido Grandi (1671–1742) reportedly provided a simplistic account of the series in 1703. He noticed that inserting parentheses into 1 − 1 + 1 − 1 + · · · produced varying results: either formula_0 or formula_1 Grandi's explanation of this phenomenon became well known for its religious overtones: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;By putting parentheses into the expression 1 − 1 + 1 − 1 + · · · in different ways, I can, if I want, obtain 0 or 1. But then the idea of the creation "ex nihilo" is perfectly plausible. In fact, the series was not an idle subject for Grandi, and he didn't think it summed to either 0 or 1. Rather, like many mathematicians to follow, he thought the true value of the series was 1⁄2 for a variety of reasons. Grandi's mathematical treatment of 1 − 1 + 1 − 1 + · · · occurs in his 1703 book "Quadratura circula et hyperbolae per infinitas hyperbolas geometrice exhibita". Broadly interpreting Grandi's work, he derived 1 − 1 + 1 − 1 + · · · = 1⁄2 through geometric reasoning connected with his investigation of the witch of Agnesi. Eighteenth-century mathematicians immediately translated and summarized his argument in analytical terms: for a generating circle with diameter "a", the equation of the witch "y" = "a"3/("a"2 + "x"2) has the series expansion formula_2 and setting "a" = "x" = 1, one has 1 − 1 + 1 − 1 + · · · = 1⁄2. formula_3 and substituted "x" = 1 to get 1 − 1 + 1 − 1 + · · · = 1⁄2. Grandi "also argued that since the sum was both 0 and 1⁄2, he had proved that the world could be created out of nothing." Grandi offered a new explanation that 1 − 1 + 1 − 1 + · · · = 1⁄2 in 1710, both in the second edition of the "Quadratura circula" and in a new work, "De Infinitis infinitorum, et infinite parvorum ordinibus disquisitio geometrica". Two brothers inherit a priceless gem from their father, whose will forbids them to sell it, so they agree that it will reside in each other's museums on alternating years. If this agreement lasts for all eternity between the brother's descendants, then the two families will each have half possession of the gem, even though it changes hands infinitely often. This argument was later criticized by Leibniz. The parable of the gem is the first of two additions to the discussion of the corollary that Grandi added to the second edition. The second repeats the link between the series and the creation of the universe by God: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Sed inquies: aggregatum ex infinitis differentiis infinitarum ipsi DV æqualium, sive continuè, sive alternè sumptarum, est demum summa ex infinitis nullitatibus, seu 0, quomodo ergo quantitatem notabilem aggreget? At repono, eam Infiniti vim agnoscendam, ut etiam quod per se nullum est multiplicando, in aliquid commutet, sicuti finitam magnitudiné dividendo, in nullam degenerare cogit: unde per infinitam Dei Creatoris potentiam omnia ex nihlo facta, omniaque in nihilum redigi posse: neque adeò absurdum esse, quantitatem aliquam, ut ita dicam, creari per infinitam vel multiplicationem, vel additionem ipsius nihili, aut quodvis quantum infinita divisione, aut subductione in nihilum redigit. Marchetti. After Grandi published the second edition of the "Quadratura", his fellow countryman Alessandro Marchetti became one of his first critics. One historian charges that Marchetti was motivated more by jealousy than any other reason. 
Marchetti found the claim that an infinite number of zeros could add up to a finite quantity absurd, and he inferred from Grandi's treatment the danger posed by theological reasoning. The two mathematicians began attacking each other in a series of open letters; their debate was ended only by Marchetti's death in 1714. Leibniz. With the help and encouragement of Antonio Magliabechi, Grandi sent a copy of the 1703 "Quadratura" to Leibniz, along with a letter expressing compliments and admiration for the master's work. Leibniz received and read this first edition in 1705, and he called it an unoriginal and less-advanced "attempt" at his calculus. Grandi's treatment of 1 − 1 + 1 − 1 + · · · would not catch Leibniz's attention until 1711, near the end of his life, when Christian Wolff sent him a letter on Marchetti's behalf describing the problem and asking for Leibniz's opinion. Background. As early as 1674, in a minor, lesser-known writing "De Triangulo Harmonico" on the harmonic triangle, Leibniz mentioned 1 − 1 + 1 − 1 + · · · very briefly in an example: formula_4 Presumably he arrived at this series by repeated substitution: formula_5 formula_6 formula_7 And so on. The series 1 − 1 + 1 − 1 + · · · also appears indirectly in a discussion with Tschirnhaus in 1676. Leibniz had already considered the divergent alternating series 1 − 2 + 4 − 8 + 16 − · · · as early as 1673. In that case he argued that by subtracting either on the left or on the right, one could produce either positive or negative infinity, and therefore both answers are wrong and the whole should be finite. Two years after that, Leibniz formulated the first convergence test in the history of mathematics, the alternating series test, in which he implicitly applied the modern definition of convergence. Solutions. In the 1710s, Leibniz described Grandi's series in his correspondence with several other mathematicians. The letter with the most lasting impact was his first reply to Wolff, which he published in the "Acta Eruditorum". In this letter, Leibniz attacked the problem from several angles. In general, Leibniz believed that the algorithms of calculus were a form of "blind reasoning" that ultimately had to be founded upon geometrical interpretations. Therefore, he agreed with Grandi that 1 − 1 + 1 − 1 + · · · = 1⁄2, claiming that the relation was well-founded because there existed a geometric demonstration. On the other hand, Leibniz sharply criticized Grandi's example of the shared gem, claiming that the series 1 − 1 + 1 − 1 + · · · has no relation to the story. He pointed out that for any finite, even number of years, the brothers have equal possession, yet the sum of the corresponding terms of the series is zero. Leibniz thought that the argument from 1/(1 + "x") was valid; he took it as an example of his law of continuity. Since the relation 1 − "x" + "x"2 − "x"3 + · · · = 1/(1 + "x") holds for all "x" less than 1, it should hold for "x" equal to 1 as well. Still, Leibniz thought that one should be able to find the sum of the series 1 − 1 + 1 − 1 + · · · directly, without needing to refer back to the expression 1/(1 + "x") from which it came. This approach may seem obvious by modern standards, but it is a significant step from the point of view of the history of summing divergent series. In the 18th century, the study of series was dominated by power series, and summing a numerical series by expressing it as "f"(1) of some function's power series was thought to be the most natural strategy. 
Leibniz begins by observing that taking an even number of terms from the series, the last term is −1 and the sum is 0: 1 − 1 = 1 − 1 + 1 − 1 = 1 − 1 + 1 − 1 + 1 − 1 = 0. Taking an odd number of terms, the last term is +1 and the sum is 1: 1 = 1 − 1 + 1 = 1 − 1 + 1 − 1 + 1 = 1. Now, the infinite series 1 − 1 + 1 − 1 + · · · has neither an even nor an odd number of terms, so it produces neither 0 nor 1; by taking the series out to infinity, it becomes something between those two options. There is no more reason why the series should take one value than the other, so the theory of "probability" and the "law of justice" dictate that one should take the arithmetic mean of 0 and 1, which is (0 + 1) / 2 = 1/2. Eli Maor says of this solution, "Such a brazen, careless reasoning indeed seems incredible to us today…" Kline portrays Leibniz as more self-conscious: "Leibniz conceded that his argument was more metaphysical than mathematical, but said that there is more metaphysical truth in mathematics than is generally recognized." Charles Moore muses that Leibniz would hardly have had such confidence in his metaphysical strategy if it did not give the same result (namely 1⁄2) as other approaches. Mathematically, this was no accident: Leibniz's treatment would be partially justified when the compatibility of averaging techniques and power series was finally proven in 1880. Reactions. When he had first raised the question of Grandi's series to Leibniz, Wolff was inclined toward skepticism along with Marchetti. Upon reading Leibniz's reply in mid-1712, Wolff was so pleased with the solution that he sought to extend the arithmetic mean method to more divergent series such as 1 − 2 + 4 − 8 + 16 − · · ·. Leibniz's intuition prevented him from straining his solution this far, and he wrote back that Wolff's idea was interesting but invalid for several reasons. For one, the terms of a summable series should decrease to zero; even 1 − 1 + 1 − 1 + · · · could be expressed as a limit of such series. Leibniz described Grandi's series along with the general problem of convergence and divergence in letters to Nicolaus I Bernoulli in 1712 and early 1713. J. Dutka suggests that this correspondence, along with Nicolaus I Bernoulli's interest in probability, motivated him to formulate the St. Petersburg paradox, another situation involving a divergent series, in September 1713. According to Pierre-Simon Laplace in his "Essai Philosophique sur les Probabilités", Grandi's series was connected with Leibniz seeing "an image of the Creation in his binary arithmetic", and thus Leibniz wrote a letter to Jesuit missionary Claudio Filippo Grimaldi, court mathematician in China, in the hope that Claudio Filippo Grimaldi's interest in science and the mathematical "emblem of creation" might combine to convert the nation to Christianity. Laplace remarks, "I record this anecdote only to show how far the prejudices of infancy may mislead the greatest men." Divergence. Jacob Bernoulli. Jacob Bernoulli (1654–1705) dealt with a similar series in 1696 in the third part of his "Positiones arithmeticae de seriebus infinitis". Applying Nicholas Mercator's method for polynomial long division to the ratio "k"/("m" + "n"), he noticed that one always had a remainder. If "m" &gt; "n" then this remainder decreases and "finally is less than any given quantity", and one has formula_8 If "m" = "n", then this equation becomes formula_9 Bernoulli called this equation a "not inelegant paradox". Varignon. 
Pierre Varignon (1654–1722) treated Grandi's series in his report "Précautions à prendre dans l'usage des Suites ou Series infinies résultantes…". The first of his purposes for this paper was to point out the divergence of Grandi's series and expand on Jacob Bernoulli's 1696 treatment. The final version of Varignon's paper is dated February 16, 1715, and it appeared in a volume of the "Mémories" of the French Academy of Sciences that was itself not published until 1718. For such a relatively late treatment of Grandi's series, it is surprising that Varignon's report does not even mention Leibniz's earlier work. But most of the "Précautions" was written in October 1712, while Varignon was away from Paris. The Abbé Poignard's 1704 book on magic squares, "Traité des Quarrés sublimes", had become a popular subject around the Academy, and the second revised and expanded edition weighed in at 336 pages. To make the time to read the "Traité", Varignon had to escape to the countryside for nearly two months, where he wrote on the topic of Grandi's series in relative isolation. Upon returning to Paris and checking in at the Academy, Varignon soon discovered that the great Leibniz had ruled in favor of Grandi. Having been separated from his sources, Varignon still had to revise his paper by looking up and including the citation to Jacob Bernoulli. Rather than also take Leibniz's work into account, Varignon explains in a postscript to his report that the citation was the only revision he had made in Paris, and that "if" other research on the topic arose, his thoughts on it would have to wait for a future report. In the 1751 "Encyclopédie", Jean le Rond d'Alembert echoes the view that Grandi's reasoning based on division had been refuted by Varignon in 1715. (Actually, d'Alembert attributes the problem to "Guido Ubaldus", an error that is still occasionally propagated today.) Riccati and Bougainville. In a 1715 letter to Jacopo Riccati, Leibniz mentioned the question of Grandi's series and advertised his own solution in the "Acta Eruditorum". Later, Riccati would criticize Grandi's argument in his 1754 "Saggio intorno al sistema dell'universo", saying that it causes contradictions. He argues that one could just as well write "n" − "n" + "n" − "n" + · · · = "n"/(1 + 1), but that this series has "the same quantity of zeroes" as Grandi's series. These zeroes lack any evanescent character of "n", as Riccati points out that the equality 1 − 1 = "n" − "n" is guaranteed by 1 + "n" = "n" + 1. He concludes that the fundamental mistake is in using a divergent series to begin with: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;In fact, it doesn't happen that if we stop this series, the following terms can be neglected in comparison with preceding terms; this property is verified only for convergent series." Another 1754 publication also criticized Grandi's series on the basis of its collapse to 0. Louis Antoine de Bougainville briefly treats the series in his acclaimed 1754 textbook "Traité du calcul intégral". He explains that a series is "true" if its sum is equal to the expression from which is expanded; otherwise it is "false". Thus Grandi's series is false because 1/(1 + 1) = 1/2 and yet (1 − 1) + (1 − 1) + · · · = 0. Euler. Leonhard Euler treats 1 − 1 + 1 − 1 + · · · along with other divergent series in his "De seriebus divergentibus", a 1746 paper that was read to the Academy in 1754 and published in 1760. 
He identifies the series as being first considered by Leibniz, and he reviews Leibniz's 1713 argument based on the series 1 − "a" + "a"2 − "a"3 + "a"4 − "a"5 + · · ·, calling it "fairly sound reasoning", and he also mentions the even/odd median argument. Euler writes that the usual objection to the use of 1/(1 + "a") is that it does not equal 1 − "a" + "a"2 − "a"3 + "a"4 − "a"5 + · · · unless "a" is less than 1; otherwise all one can say is that formula_10 where the last remainder term does not vanish and cannot be disregarded as "n" is taken to infinity. Still writing in the third person, Euler mentions a possible rebuttal to the objection: essentially, since an infinite series has no last term, there is no place for the remainder and it should be neglected. After reviewing more badly divergent series like 1 + 2 + 4 + 8 + · · ·, where he judges his opponents to have firmer support, Euler seeks to define away the issue: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Yet however substantial this particular dispute seems to be, neither side can be convicted of any error by the other side, whenever the use of such series occurs in analysis, and this ought to be a strong argument that neither side is in error, but that all disagreement is solely verbal. For if in a calculation I arrive at this series 1 − 1 + 1 − 1 + 1 − 1 etc. and if in its place I substitute 1/2, no one will rightly impute to me an error, which however everyone would do had I put some other number in the place of this series. Whence no doubt can remain that in fact the series 1 − 1 + 1 − 1 + 1 − 1 + etc. and the fraction 1/2 are equivalent quantities and that it is always permitted to substitute one for the other without error. Thus the whole question is seen to reduce to this, whether we call the fraction 1/2 the correct sum of 1 − 1 + 1 − 1 + etc.; and it is strongly to be feared that those who insist on denying this and who at the same time do not dare to deny the equivalence have stumbled into a battle over words. But I think all this wrangling can be easily ended if we should carefully attend to what follows… Euler also used finite differences to attack 1 − 1 + 1 − 1 + · · ·. In modern terminology, he took the Euler transform of the sequence and found that it equalled 1⁄2. As late as 1864, De Morgan claims that "this transformation has always appeared one of the strongest presumptions in favour of 1 − 1 + 1 − … being 1⁄2." Dilution and new values. Despite the confident tone of his papers, Euler expressed doubt over divergent series in his correspondence with Nicolaus I Bernoulli. Euler claimed that his attempted definition had never failed him, but Bernoulli pointed out a clear weakness: it does not specify how one should determine "the" finite expression that generates a given infinite series. Not only is this a practical difficulty, it would be theoretically fatal if a series were generated by expanding two expressions with different values. Euler's treatment of 1 − 1 + 1 − 1 + · · · rests upon his firm belief that 1⁄2 is the only possible value of the series; what if there were another? In a 1745 letter to Christian Goldbach, Euler claimed that he was not aware of any such counterexample, and in any case Bernoulli had not provided one. Several decades later, when Jean-Charles Callet finally asserted a counterexample, it was aimed at 1 − 1 + 1 − 1 + · · ·. The background of the new idea begins with Daniel Bernoulli in 1771. Daniel Bernoulli. 
Daniel Bernoulli, who accepted the probabilistic argument that 1 − 1 + 1 − 1 + · · · = 1⁄2, noticed that by inserting 0s into the series in the right places, it could achieve any value between 0 and 1. In particular, the argument suggested that 1 + 0 − 1 + 1 + 0 − 1 + 1 + 0 − 1 + · · · = 2⁄3. Callet and Lagrange. In a memorandum sent to Joseph Louis Lagrange toward the end of the century, Callet pointed out that 1 − 1 + 1 − 1 + · · · could also be obtained from the series formula_11 substituting "x" = 1 now suggests a value of 2⁄3, not 1⁄2. Lagrange approved Callet's submission for publication in the "Mémoires" of the French Academy of Sciences, but it was never directly published. Instead, Lagrange (along with Charles Bossut) summarized Callet's work and responded to it in the "Mémoires" of 1799. He defended Euler by suggesting that Callet's series actually should be written with the 0 terms left in: formula_12 which reduces to 1 + 0 − 1 + 1 + 0 − 1 + 1 + 0 − 1 + · · · instead. 19th century. The 19th century is remembered as the approximate period of Cauchy's and Abel's largely successful ban on the use of divergent series, but Grandi's series continued to make occasional appearances. Some mathematicians did not follow Abel's lead, mostly outside France, and British mathematicians especially took "a long time" to understand the analysis coming from the continent. In 1803, Robert Woodhouse proposed that 1 − 1 + 1 − 1 + · · · summed to something called formula_13 which could be distinguished from 1⁄2. Ivor Grattan-Guinness remarks on this proposal, "… R. Woodhouse … wrote with admirable honesty on the problems which he failed to understand. … Of course, there is no harm in defining new symbols such as 1⁄1+1; but the idea is 'formalist' in the unflattering sense, and it does not bear on the problem of the convergence of series." Algebraic reasoning. In 1830, a mathematician identified only as "M. R. S." wrote in the "Annales de Gergonne" on a technique to numerically find fixed points of functions of one variable. If one can transform a problem into the form of an equation "x = A + f(x)", where "A" can be chosen at will, then formula_14 should be a solution, and truncating this infinite expression results in a sequence of approximations. Conversely, given the series "x" = "a" − "a" + "a" − "a" + · · ·, the author recovers the equation formula_15 to which the solution is (1⁄2)"a". M. R. S. notes that the approximations in this case are "a", 0, "a", 0, …, but there is no need for Leibniz's "subtle reasoning". Moreover, the argument for averaging the approximations is problematic in a wider context. For equations not of the form "x = A + f(x)", M. R. S.'s solutions are continued fractions, continued radicals, and other infinite expressions. In particular, the expression "a" / ("a" / ("a" / · · · ))) should be a solution of the equation "x" = "a"/"x". Here, M. R. S. writes that based on Leibniz's reasoning, one is tempted to conclude that "x" is the average of the truncations "a", 1, "a", 1, …. This average is (1 + "a")/2, but the solution to the equation is the square root of "a". Bernard Bolzano criticized M. R. S.' algebraic solution of the series. In reference to the step formula_16 Bolzano charged, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The series within parentheses has clearly not the same set of numbers of that originally indicated with "x", as the first term "a" is missing. 
This comment exemplifies Bolzano's intuitively appealing but deeply problematic views on infinity. In his defense, Cantor himself pointed out that Bolzano worked in a time when the concept of the cardinality of a set was absent. De Morgan and company. As late as 1844, Augustus De Morgan commented that if a single instance where 1 − 1 + 1 − 1 + · · · did not equal 1⁄2 could be given, he would be willing to reject the entire theory of trigonometric series. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I do not argue with those who reject everything that is not within the providence of arithmetic, but only with those who abandon the use of infinitely divergent series and yet appear to employ finitely divergent series with confidence. Such appears to be the practice, both at home and abroad. They seem perfectly reconciled to 1 − 1 + 1 − 1 + · · ·, but cannot admit 1 + 2 + 4 + · · · = −1. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The whole fabric of periodic series and integrals … would fall instantly if it were shown to be possible that 1 − 1 + 1 − 1 + · · · might be one quantity as a limiting form of "A"0 − "A"1 + "A"2 − · · ·and another as a limiting form of "A"0 − "A"1 + "A"2 − · · ·. The same volume contains papers by Samuel Earnshaw and J. R. Young dealing in part with 1 − 1 + 1 − 1 + · · ·. G. H. Hardy dismisses both of these as "little more than nonsense", in contrast to De Morgan's "remarkable mixture of acuteness and confusion"; in any case, Earnshaw got De Morgan's attention with the following remarks: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;…it is not very unusual to cast a mantle of mystery over this subject, by introducing zeros into the expansion of 1⁄1+1+1. But such a device, however much it may serve to satisfy the eye, cannot satisfy the head… De Morgan fired back in 1864 in the same journal: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I cannot approve of introducing ciphers to satisfy the eye: but to me they always "introduced themselves." … those who reject casual evanescents out of a routine of operation have no right to charge those who do "not reject" with "introduction." Frobenius and modern mathematics. The last scholarly article to be motivated by 1 − 1 + 1 − 1 + · · · might be identified as the first article in the modern history of divergent series. Georg Frobenius published an article titled "Ueber die Leibnitzsche Reihe" ("On Leibniz's series") in 1880. He had found Leibniz's old letter to Wolff, citing it along with an 1836 article by Joseph Ludwig Raabe, who in turn drew on ideas by Leibniz and Daniel Bernoulli. Frobenius' short paper, barely two pages, begins by quoting from Leibniz's treatment of 1 − 1 + 1 − 1 + · · ·. He infers that Leibniz was actually stating a generalization of Abel's Theorem. The result, now known as Frobenius' theorem, has a simple statement in modern terms: any series that is Cesàro summable is also Abel summable to the same sum. Historian Giovanni Ferraro emphasizes that Frobenius did not actually state the theorem in such terms, and Leibniz did not state it at all. Leibniz was defending the association of the divergent series 1 − 1 + 1 − 1 + · · · with the value 1⁄2, while Frobenius' theorem is stated in terms of convergent sequences and the epsilon-delta formulation of the limit of a function. Frobenius' theorem was soon followed with further generalizations by Otto Hölder and Thomas Joannes Stieltjes in 1882. 
Again, to a modern reader their work strongly suggests new definitions of the sum of a divergent series, but those authors did not yet make that step. Ernesto Cesàro proposed a systematic definition for the first time in 1890. Since then, mathematicians have explored many different summability methods for divergent series. Most of these, especially the simpler ones with historical parallels, sum Grandi's series to 1⁄2. Others, motivated by Daniel Bernoulli's work, sum the series to another value, and a few do not sum it at all. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; The full texts of many of the following references are publicly available on the Internet from Google Books; the Euler archive at Dartmouth College; DigiZeitschriften, a service of Deutsche Forschungsgemeinschaft; or Gallica, a service of the Bibliothèque nationale de France.
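In present-day notation, the values that recur throughout this history can be checked directly (a modern gloss, not taken from the period sources). Writing \(s_n\) for the \(n\)-th partial sum of Grandi's series, the partial sums alternate between 1 and 0, so
\[ \frac{s_0+s_1+\cdots+s_{n-1}}{n}\;\longrightarrow\;\frac{1}{2}\quad\text{(Cesàro)}, \qquad \lim_{x\to 1^-}\frac{1}{1+x}=\frac{1}{2}\quad\text{(Abel)}, \]
while Euler's finite-difference transform replaces the terms \((-1)^k\) by
\[ e_k=\frac{1}{2^{k+1}}\sum_{j=0}^{k}\binom{k}{j}(-1)^j=\frac{(1-1)^k}{2^{k+1}}, \]
which vanish for \(k\ge 1\), so the transformed series is \(\tfrac12+0+0+\cdots=\tfrac12\). By contrast, the diluted series 1 + 0 − 1 + 1 + 0 − 1 + · · · discussed above has partial sums 1, 1, 0, 1, 1, 0, …, whose averages tend to 2⁄3, matching the value 2⁄3 that appears in Daniel Bernoulli's and Callet's arguments.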
[ { "math_id": 0, "text": "(1-1) + (1-1) + \\cdots = 0" }, { "math_id": 1, "text": "1+(-1+1)+(-1+1) +\\cdots = 1." }, { "math_id": 2, "text": "\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{a^{2n-1}}=a - \\frac{x^2}{a\\vphantom{a^3}} + \\frac{x^4}{a^3} - \\frac{x^6}{a^5} + \\cdots" }, { "math_id": 3, "text": "\\frac{1}{1+x} = 1 - x + x^2 - x^3 + \\cdots" }, { "math_id": 4, "text": "\\frac{1}{1+1} = \\frac11-\\frac{1}{1+1}. \\;\\mathrm{Ergo}\\; \\frac{1}{1+1} = 1-1+1-1+1-1 \\;\\mathrm{etc.}" }, { "math_id": 5, "text": "\\frac{1}{1+1} = 1 - ( 1 - \\frac{1}{1+1} )" }, { "math_id": 6, "text": "\\frac{1}{1+1} = 1 - ( 1 - ( 1 - ( 1 - \\frac{1}{1+1} ) ) )" }, { "math_id": 7, "text": "\\frac{1}{1+1} = 1 - ( 1 - ( 1 - ( 1 - ( 1 - ( 1 - \\frac{1}{1+1} ) ) ) ) )" }, { "math_id": 8, "text": "\\frac{k}{m+n\\vphantom{m^2}}=\\frac{k}{m\\vphantom{m^2}} - \\frac{kn}{m^2} + \\frac{kn^2}{m^3} - \\frac{kn^3}{m^4} + \\cdots." }, { "math_id": 9, "text": "\\frac{k}{2m\\vphantom{m^2}}=\\frac{k}{m\\vphantom{m^2}} - \\frac{k}{m\\vphantom{m^2}} + \\frac{k}{m\\vphantom{m^2}} - \\frac{k}{m\\vphantom{m^2}} + \\cdots." }, { "math_id": 10, "text": "\\frac{1}{1+a} = 1 - a + a^2 - a^3 + \\cdots \\pm a^n \\mp \\frac{a^{n+1}}{1+a}," }, { "math_id": 11, "text": "\\frac{1+x}{1+x+x^2}=1-x^2+x^3-x^5+x^6-x^8+\\cdots;" }, { "math_id": 12, "text": "1+0-x^2+x^3+0-x^5+x^6+0-x^8+\\cdots," }, { "math_id": 13, "text": "\\frac{1}{1+1}," }, { "math_id": 14, "text": "x = A+f(A+f(A+f(\\cdots)))" }, { "math_id": 15, "text": "x = a - x," }, { "math_id": 16, "text": "x = a-a+a-a+\\cdots = a-(a-a+a-\\cdots)," } ]
https://en.wikipedia.org/wiki?curid=8816788
8818504
2-opt
Local search algorithm In optimization, 2-opt is a simple local search algorithm for solving the traveling salesman problem. The 2-opt algorithm was first proposed by Croes in 1958, although the basic move had already been suggested by Flood. The main idea behind it is to take a route that crosses over itself and reorder it so that it does not. A complete 2-opt local search will compare every possible valid combination of the swapping mechanism. This technique can be applied to the traveling salesman problem as well as many related problems. These include the vehicle routing problem (VRP) as well as the capacitated VRP, which require minor modification of the algorithm. Pseudocode. Visually, one swap looks like:
- A   B -          - A - B -
      x      ==>
- C   D -          - C - D -
In pseudocode, the mechanism by which the 2-opt swap manipulates a given route is as follows. Here v1 and v2 are the first vertices of the edges that are to be swapped when traversing through the route:
procedure 2optSwap(route, v1, v2) {
    1. take route[start] to route[v1] and add them in order to new_route
    2. take route[v1+1] to route[v2] and add them in reverse order to new_route
    3. take route[v2+1] to route[end] and add them in order to new_route
    return new_route;
}
Here is an example of the above with arbitrary input: This is the complete 2-opt swap making use of the above mechanism:
repeat until no improvement is made {
    best_distance = calculateTotalDistance(existing_route)
    start_again:
    for (i = 0; i <= number of nodes eligible to be swapped - 1; i++) {
        for (j = i + 1; j <= number of nodes eligible to be swapped; j++) {
            new_route = 2optSwap(existing_route, i, j)
            new_distance = calculateTotalDistance(new_route)
            if (new_distance < best_distance) {
                existing_route = new_route
                best_distance = new_distance
                goto start_again
            }
        }
    }
}
The particular nodes or depots that are at the start and at the end of the path should be removed from the search as eligible candidates for swapping, as reversing the order would cause an invalid path. For example, with depot at A:
A → B → C → D → A
Swapping using node[0] and node[2] would yield
C → B → A → D → A
which is not valid (does not leave from A, the depot). Efficient implementation. Building the new route and calculating the distance of the new route can be a very expensive operation, usually formula_0 where n is the number of vertices in the route. In a symmetric case (where the distance between A and B is the same as between B and A), this can be skipped by performing a formula_1 operation. Since a 2-opt operation involves removing 2 edges and adding 2 different edges, we can subtract and add the distances of only those edges:
lengthDelta = - dist(route[v1], route[v1+1]) - dist(route[v2], route[v2+1])
              + dist(route[v1+1], route[v2+1]) + dist(route[v1], route[v2])
If lengthDelta is negative, that would mean that the new distance after the swap would be smaller. Once it is known that lengthDelta is negative, we perform a 2-opt swap. This saves us a lot of computation. C++ code.
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

using namespace std;

class Point {
public:
    float x, y;

    Point(float x, float y) {
        this->x = x;
        this->y = y;
    }

    Point() {
        this->x = 0.0;
        this->y = 0.0;
    }

    // Distance between two points
    inline float dist(const Point &other) const {
        float diffx = x - other.x;
        float diffy = y - other.y;
        return sqrt(diffx * diffx + diffy * diffy);
    }
};

// Calculate the distance of the whole circuit (truncated to an integer)
int pathLength(vector<Point> &path) {
    int n = path.size();
    float length = path[n - 1].dist(path[0]);
    for (int i = 0; i < n - 1; i++) {
        length += path[i].dist(path[i + 1]);
    }
    return length;
}

// Replace edges path[i]->path[i+1] and path[j]->path[j+1]
// with path[i]->path[j] and path[i+1]->path[j+1]
// by reversing the segment path[i+1 .. j] in place.
void swap_edges(vector<Point> &path, int i, int j) {
    i += 1;
    while (i < j) {
        Point temp = path[i];
        path[i] = path[j];
        path[j] = temp;
        i++;
        j--;
    }
}

// Print the path.
void printPath(string pathName, vector<Point> &path) {
    printf("%s = [", pathName.c_str());
    for (int i = 0; i < (int)path.size(); i++) {
        if (i % 10 == 0) {
            printf("\n  ");
        }
        if (i < (int)path.size() - 1) {
            printf("[%.1f, %.1f], ", path[i].x, path[i].y);
        } else {
            printf("[%.1f, %.1f]", path[i].x, path[i].y);
        }
    }
    printf("\n];\n");
}

// Create a path of length n with random points between 0 and 1000
vector<Point> createRandomPath(int n) {
    vector<Point> path;
    for (int i = 0; i < n; i++) {
        float x = (float)rand() / (float)(RAND_MAX / 1000);
        float y = (float)rand() / (float)(RAND_MAX / 1000);
        path.push_back(Point(x, y));
    }
    return path;
}

int main() {
    vector<Point> path = createRandomPath(100);
    printPath("path1", path);
    float curLength = pathLength(path);
    printf("path1len = %.1f;\n\n", curLength);

    int n = path.size();
    bool foundImprovement = true;
    while (foundImprovement) {
        foundImprovement = false;
        for (int i = 0; i < n - 1; i++) {
            for (int j = i + 2; j < n; j++) {
                // Change in tour length if edges (i, i+1) and (j, j+1) are
                // replaced by (i, j) and (i+1, j+1); the index (j + 1) % n
                // handles the closing edge of the tour.
                float lengthDelta = -path[i].dist(path[i + 1]) - path[j].dist(path[(j + 1) % n])
                                    + path[i].dist(path[j]) + path[i + 1].dist(path[(j + 1) % n]);

                // If the length of the path is reduced, do a 2-opt swap
                if (lengthDelta < 0) {
                    swap_edges(path, i, j);
                    curLength += lengthDelta;
                    foundImprovement = true;
                }
            }
        }
    }

    printPath("path2", path);
    printf("path2len = %.1f;\n", curLength);
    return 0;
}
Output.
path1 = [ [0.0, 131.5], [755.6, 458.7], [532.8, 219.0], [47.0, 678.9], [679.3, 934.7], [383.5, 519.4], [831.0, 34.6], [53.5, 529.7], [671.1, 7.7], [383.4, 66.8], [417.5, 686.8], [589.0, 930.4], [846.2, 526.9], [92.0, 653.9], [416.0, 701.2], [910.3, 762.2], [262.5, 47.5], [736.1, 328.2], [632.6, 756.4], [991.0, 365.3], [247.0, 982.6], [722.7, 753.4], [651.5, 72.7], [631.6, 884.7], [272.7, 436.4], [766.5, 477.7], [237.8, 274.9], [359.3, 166.5], [486.5, 897.7], [909.2, 60.6], [904.7, 504.5], [516.3, 319.0], [986.6, 494.0], [266.1, 90.7], [947.8, 73.7], [500.7, 384.1], [277.1, 913.8], [529.7, 464.4], [941.0, 50.1], [761.5, 770.2], [827.8, 125.4], [15.9, 688.5], [868.2, 629.5], [736.2, 725.4], [999.5, 888.6], [233.2, 306.3], [351.0, 513.3], [591.1, 846.0], [412.1, 841.5], [269.3, 415.4], [537.3, 467.9], [287.2, 178.3], [153.7, 571.7], [802.4, 33.1], [534.4, 498.5], [955.4, 748.3], [554.6, 890.7], [624.8, 842.0], [159.8, 212.8], [714.7, 130.4], [91.0, 274.6], [3.0, 414.3], [26.9, 709.8], [937.9, 239.9], [180.9, 317.5], [887.0, 652.1], [150.3, 681.3], [385.8, 387.7], [499.7, 147.5], [587.2, 845.6], [590.1, 955.4], [556.1, 148.2], [983.3, 408.8], [141.8, 564.9], [252.1, 488.5], [464.0, 961.1], [126.0, 199.8], [319.2, 629.3], [126.7, 651.3], [621.6, 803.1], [247.8, 476.4], [389.3, 203.3], [28.4, 901.7], [426.5, 142.0], [947.5, 410.3], [131.2, 885.6], [92.2, 162.2], [71.1, 365.3], [253.1, 135.1], [783.2, 455.3], [349.5, 452.3], [808.9, 931.7], [651.6, 215.2], [679.6, 908.9], [250.1, 860.9], [471.3, 506.0], [600.4, 817.6], [755.8, 462.2], [951.4, 632.7], [439.3, 824.7] path1len = 55723.0; path2 = [ [0.0, 131.5], [91.0, 274.6], [71.1, 365.3], [3.0, 414.3], [53.5, 529.7], [92.0, 653.9], [47.0, 678.9], [15.9, 688.5], [26.9, 709.8], [28.4, 901.7], [131.2, 885.6], [247.0, 982.6], [277.1, 913.8], [464.0, 961.1], [486.5, 897.7], [439.3, 824.7], [412.1, 841.5], [250.1, 860.9], [150.3, 681.3], [126.7, 651.3], [141.8, 564.9], [153.7, 571.7], [247.8, 476.4], [252.1, 488.5], [319.2, 629.3], [416.0, 701.2], [417.5, 686.8], [534.4, 498.5], [537.3, 467.9], [529.7, 464.4], [516.3, 319.0], [500.7, 384.1], [471.3, 506.0], [383.5, 519.4], [351.0, 513.3], [349.5, 452.3], [385.8, 387.7], [272.7, 436.4], [269.3, 415.4], [180.9, 317.5], [233.2, 306.3], [237.8, 274.9], [287.2, 178.3], [389.3, 203.3], [532.8, 219.0], [736.1, 328.2], [783.2, 455.3], [755.6, 458.7], [755.8, 462.2], [766.5, 477.7], [846.2, 526.9], [904.7, 504.5], [868.2, 629.5], [736.2, 725.4], [761.5, 770.2], [722.7, 753.4], [632.6, 756.4], [621.6, 803.1], [600.4, 817.6], [624.8, 842.0], [631.6, 884.7], [591.1, 846.0], [587.2, 845.6], [554.6, 890.7], [589.0, 930.4], [590.1, 955.4], [679.3, 934.7], [679.6, 908.9], [808.9, 931.7], [999.5, 888.6], [955.4, 748.3], [910.3, 762.2], [887.0, 652.1], [951.4, 632.7], [986.6, 494.0], [947.5, 410.3], [983.3, 408.8], [991.0, 365.3], [937.9, 239.9], [827.8, 125.4], [947.8, 73.7], [941.0, 50.1], [909.2, 60.6], [831.0, 34.6], [802.4, 33.1], [671.1, 7.7], [651.5, 72.7], [714.7, 130.4], [651.6, 215.2], [556.1, 148.2], [499.7, 147.5], [426.5, 142.0], [359.3, 166.5], [383.4, 66.8], [262.5, 47.5], [266.1, 90.7], [253.1, 135.1], [159.8, 212.8], [126.0, 199.8], [92.2, 162.2] path2len = 8586.2; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O(n)" }, { "math_id": 1, "text": "O(1)" } ]
https://en.wikipedia.org/wiki?curid=8818504
8818695
3-opt
In optimization, 3-opt is a simple local search heuristic for finding approximate solutions to the travelling salesperson problem and related network optimization problems. Compared to the simpler 2-opt algorithm, it is slower but can generate higher-quality solutions. 3-opt analysis involves deleting three edges from the current solution to the problem, creating three sub-tours. There are eight ways of connecting these sub-tours back into a single tour, one of which consists of the three deleted edges. These reconnections are analysed to find the optimum one. This process is then repeated for a different set of 3 connections, until all possible combinations have been tried in a network. A single pass through all triples of edges has a time complexity of formula_0. Iterated 3-opt, in which passes are repeated until no more improvements can be found, has a higher time complexity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
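As a rough sketch of the mechanics (covering only one of the eight reconnections, the segment exchange that needs no reversal; names such as segmentExchangeGain are ours, not from any library), the following evaluates and applies the move that removes edges (t[i], t[i+1]), (t[j], t[j+1]), (t[k], t[k+1]) and reconnects them as (t[i], t[j+1]), (t[k], t[i+1]), (t[j], t[k+1]):
#include <cmath>
#include <cstdio>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Total length of a closed tour t under the distance matrix d.
double tourLength(const std::vector<int> &t, const Matrix &d) {
    double len = 0.0;
    for (size_t i = 0; i < t.size(); i++)
        len += d[t[i]][t[(i + 1) % t.size()]];
    return len;
}

// Gain of one particular 3-opt reconnection for i < j < k: remove edges
// (t[i], t[i+1]), (t[j], t[j+1]), (t[k], t[k+1]) and add (t[i], t[j+1]),
// (t[k], t[i+1]), (t[j], t[k+1]).  This exchanges the two middle segments
// without reversing either of them; a positive gain shortens the tour.
double segmentExchangeGain(const std::vector<int> &t, const Matrix &d,
                           int i, int j, int k) {
    int a = t[i], b = t[i + 1];
    int c = t[j], e = t[j + 1];
    int f = t[k], g = t[(k + 1) % t.size()];
    return (d[a][b] + d[c][e] + d[f][g]) - (d[a][e] + d[f][b] + d[c][g]);
}

// Apply that reconnection: the tour becomes t[0..i] + t[j+1..k] + t[i+1..j] + rest.
std::vector<int> applySegmentExchange(const std::vector<int> &t, int i, int j, int k) {
    std::vector<int> out(t.begin(), t.begin() + i + 1);
    out.insert(out.end(), t.begin() + j + 1, t.begin() + k + 1);
    out.insert(out.end(), t.begin() + i + 1, t.begin() + j + 1);
    out.insert(out.end(), t.begin() + k + 1, t.end());
    return out;
}

int main() {
    // Six cities placed on a line at coordinates 0..5, so d[u][v] = |u - v|.
    int n = 6;
    Matrix d(n, std::vector<double>(n));
    for (int u = 0; u < n; u++)
        for (int v = 0; v < n; v++)
            d[u][v] = std::fabs(u - v);

    std::vector<int> tour = {0, 2, 4, 1, 3, 5}; // a deliberately poor tour
    int i = 0, j = 2, k = 4;
    double gain = segmentExchangeGain(tour, d, i, j, k);
    std::vector<int> improved = applySegmentExchange(tour, i, j, k);
    std::printf("old length %.1f, gain %.1f, new length %.1f\n",
                tourLength(tour, d), gain, tourLength(improved, d));
    return 0;
}
A full 3-opt pass would evaluate all eight reconnections for every triple i < j < k (the others involve reversing one or both of the middle segments, as 2-opt does) and keep an improving one.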
[ { "math_id": 0, "text": "O(n^3)" } ]
https://en.wikipedia.org/wiki?curid=8818695
8818888
Lin–Kernighan heuristic
In combinatorial optimization, Lin–Kernighan is one of the best heuristics for solving the symmetric travelling salesman problem. It belongs to the class of local search algorithms, which take a tour (Hamiltonian cycle) as part of the input and attempt to improve it by searching in the neighbourhood of the given tour for one that is shorter, and upon finding one repeats the process from that new one, until encountering a local minimum. As in the case of the related 2-opt and 3-opt algorithms, the relevant measure of "distance" between two tours is the number of edges which are in one but not the other; new tours are built by reassembling pieces of the old tour in a different order, sometimes changing the direction in which a sub-tour is traversed. Lin–Kernighan is adaptive and has no fixed number of edges to replace at a step, but favours small numbers such as 2 or 3. Derivation. For a given instance formula_0 of the travelling salesman problem, tours are uniquely determined by their sets of edges, so we may as well encode them as such. In the main loop of the local search, we have a current tour formula_1 and are looking for new tour formula_2 such that the symmetric difference formula_3 is not too large and the length formula_4 of the new tour is less than the length formula_5 of the current tour. Since formula_6 is typically much smaller than formula_7 and formula_8, it is convenient to consider the quantity formula_9 — the gain of using formula_10 when switching from formula_11 — since formula_12: how much longer the current tour formula_7 is than the new tour formula_8. Naively formula_13-opt can be regarded as examining all formula_10 with exactly formula_14 elements (formula_13 in formula_7 but not in formula_8, and another formula_13 in formula_8 but not in formula_7) such that formula_15 is again a tour, looking for such a set which has formula_16. It is however easier to do those tests in the opposite order: first search for plausible formula_17 with positive gain, and only second check if formula_15 is in fact a tour. Define a trail in formula_18 to be alternating (with respect to formula_7) if its edges are alternatingly in formula_7 and not in formula_7, respectively. Because the subgraphs formula_19 and formula_20 are formula_21-regular, the subgraph formula_22 will have vertices of degree formula_23, formula_21, and formula_24 only, and at each vertex there are as many incident edges from formula_7 as there are from formula_8. Hence (essentially by Hierholzer's algorithm for finding Eulerian circuits) the graph formula_25 decomposes into closed alternating trails. Sets formula_10 that may satisfy formula_3 for some tour formula_8 may thus be found by enumerating closed alternating trails in formula_18, even if not every closed alternating trail formula_6 makes formula_15 into a tour; it could alternatively turn out to be a disconnected formula_21-regular subgraph. Key idea. Alternating trails (closed or open) are built by extending a shorter alternating trail, so when exploring the neighbourhood of the current tour formula_7, one is exploring a search tree of alternating trails. The key idea of the Lin–Kernighan algorithm is to remove from this tree all alternating trails which have gain formula_26. This does not prevent finding every closed trail with positive gain, thanks to the following lemma. Lemma. 
"If formula_27 are numbers such that formula_28, then there is a cyclic permutation of these numbers such that all partial sums are positive as well, i.e., there is some formula_13 such that" "formula_29 for all formula_30." For a closed alternating trail formula_31, one may define formula_32 if formula_33 and formula_34 if formula_35; the sum formula_36 is then the gain formula_37. Here the lemma implies that there for every closed alternating trail with positive gain exists at least one starting vertex formula_38 for which all the gains of the partial trails are positive as well, so formula_6 will be found when the search explores the branch of alternating trails starting at formula_38. (Prior to that the search may have considered other subtrails of formula_6 starting at other vertices but backed out because some subtrail failed the positive gain constraint.) Reducing the number of branches to explore translates directly to a reduction in runtime, and the sooner a branch can be pruned, the better. This yields the following algorithm for finding all closed, positive gain alternating trails in the graph. State: a stack of triples formula_39, where formula_40 is a vertex, formula_41 is the current number of edges in the trail, and formula_42 is the current trail gain. As an enumeration algorithm this is slightly flawed, because it may report the same trail multiple times, with different starting points, but Lin–Kernighan does not care because it mostly aborts the enumeration after finding the first hit. It should however be remarked that: Basic Lin–Kernighan algorithm. The basic form of the Lin–Kernighan algorithm not only does a local search counterpart of the above enumeration, but it also introduces two parameters that narrow the search. Because there are formula_62 alternating trails of length formula_58, and the final round of the algorithm may have to check all of them before concluding that the current tour is locally optimal, we get formula_63 (standard value formula_21) as a lower bound on the exponent of the algorithm complexity. Lin &amp; Kernighan report formula_64 as an empirical exponent of formula_65 in the average overall running time for their algorithm, but other implementors have had trouble reproducing that result. It appears unlikely that the worst-case running time is polynomial. In terms of a stack as above, the algorithm is: Input: an instance formula_0 of the travelling salesman problem, and a tour formula_1 Output: a locally optimal tour Variables: a stack of triples formula_39, where formula_40 is a vertex, formula_41 is the current number of edges in the trail, and formula_42 is the current trail gain, the sequence formula_66 of vertices in the current alternating trail, the best set formula_17 of exchange edges found for current tour, and its corresponding gain formula_67. Initialise the stack to being empty. Repeat Set formula_68 and formula_69. For all formula_40, push formula_43 onto the stack. While the stack is nonempty: Pop formula_39 off the stack and let formula_44. If formula_46 is even then for each formula_40 such that formula_47, push formula_48 onto the stack if: formula_70, or formula_71 and formula_72 is a tour (Hamiltonicity check) else (formula_46 is odd): If formula_49, formula_73, and formula_74 is a tour (Hamiltonicity check) then let formula_75 and formula_76. For each formula_40 such that formula_52 and formula_53, push formula_55 onto the stack. End if. Let formula_77 be the top element on the stack (peek, not pop). 
If formula_78 then if formula_79 then set formula_80 (update current tour) and clear the stack. else if formula_81 then pop all elements formula_77 off the stack that have formula_82 end if end if end while until formula_83. Return formula_7 The length of the alternating trails considered are thus not explicitly bounded, but beyond the backtracking depth formula_58 no more than one way of extending the current trail is considered, which in principle stops those explorations from raising the exponent in the runtime complexity. Limitations. The closed alternating trails found by the above method are all connected, but the symmetric difference formula_84 of two tours need not be, so in general this method of alternating trails cannot explore the full neighbourhood of a trail formula_7. The literature on the Lin–Kernighan heuristic uses the term "sequential" exchanges for those that are described by a single alternating trail. The smallest non-sequential exchange would however replace 4 edges and consist of two cycles of 4 edges each (2 edges added, 2 removed), so it is long compared to the typical Lin–Kernighan exchange, and there are few of these compared to the full set of 4-edge exchanges. In at least one implementation by Lin &amp; Kernighan there was an extra final step considering such non-sequential exchanges of 4 edges before declaring a tour locally optimal, which would mean the tours produced are 4-opt unless one introduces further constraints on the search (which Lin and Kernighan in fact did). The literature is vague on exactly what is included in the Lin–Kernighan heuristic proper, and what constitutes further refinements. For the "asymmetric" TSP, the idea of using positive gain alternating trails to find favourable exchanges is less useful, because there are fewer ways in which pieces of a tour can be rearranged to yield new tours when one may not reverse the orientation of a piece. Two pieces can only be patched together to reproduce the original tour. Three pieces can be patched together to form a different tour in one way only, and the corresponding alternating trail does not extend to a closed trail for rearranging four pieces into a new tour. To rearrange four pieces, one needs a non-sequential exchange. Checking Hamiltonicity. The Lin–Kernighan heuristic checks the validity of tour candidates formula_15 at two points: obviously when deciding whether a better tour has been found, but also as a constraint to descending in the search tree, as controlled via the infeasibility depth formula_60. Concretely, at larger depths in the search a vertex formula_85 is only appended to the alternating trail if formula_86 is a tour. By design that set of edges constitutes a 2-factor in formula_87, so what needs to be determined is whether that 2-factor consists of a single Hamiltonian cycle, or instead is made up of several cycles. If naively posing this subproblem as giving a subroutine the set of formula_65 edges as input, one ends up with formula_54 as the time complexity for this check, since it is necessary to walk around the full tour before being able to determine that it is in fact a Hamiltonian cycle. That is too slow for the second usage of this test, which gets carried out for every alternating trail with more than formula_21 edges from formula_7. If keeping track of more information, the test can instead be carried out in constant time. 
A useful degree of freedom here is that one may choose the order in which step 2.3.2 iterates over all vertices; in particular, one may follow the known tour formula_7. After picking formula_13 edges from formula_7, the remaining subgraph formula_88 consists of formula_13 paths. The outcome of the Hamiltonicity test done when considering the formula_89th edge formula_90 depends only on in which of these paths that formula_91 resides and whether formula_85 is before or after formula_91. Hence it would be sufficient to examine formula_92 different cases as part of performing step 2.3.2 for formula_93; as far as formula_85 is concerned, the outcome of this test can be inherited information rather than something that has to be computed fresh. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
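The cyclic-rotation lemma quoted in the derivation above has a simple constructive form, which is what makes the positive-gain pruning safe. The following sketch (the name positiveStart is ours) picks the starting index immediately after the last minimum of the prefix sums; from that index every partial sum of the rotated gain sequence stays positive whenever the total gain is positive.
#include <cstdio>
#include <vector>

// Given gains a[0..n-1] around a closed alternating trail with positive total,
// return a starting index k such that every partial sum of the cyclic rotation
// a[k], a[k+1], ..., a[k+n-1 (mod n)] is positive.  The classical construction
// starts right after the prefix-sum minimum (taking the last position at which
// the minimum is attained).
int positiveStart(const std::vector<double> &a) {
    int n = a.size();
    double prefix = 0.0, minPrefix = 0.0;
    int k = 0;
    for (int m = 0; m < n - 1; m++) {   // prefix sums S_1 .. S_{n-1}
        prefix += a[m];
        if (prefix <= minPrefix) {      // "<=" picks the last minimiser
            minPrefix = prefix;
            k = m + 1;
        }
    }
    return k;
}

int main() {
    // Example: the total gain is +1, but the partial sums starting at index 0
    // dip below zero; starting at the returned index they stay positive.
    std::vector<double> a = {2.0, -3.0, 1.0, 2.0, -1.0};
    int k = positiveStart(a);
    std::printf("start at index %d\n", k);
    double s = 0.0;
    for (size_t r = 0; r < a.size(); r++) {
        s += a[(k + r) % a.size()];
        std::printf("partial sum after %zu terms: %g\n", r + 1, s);
    }
    return 0;
}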
[ { "math_id": 0, "text": "(G,c)" }, { "math_id": 1, "text": " T \\subset \\mathrm{E}(G) " }, { "math_id": 2, "text": " T' \\subset \\mathrm{E}(G) " }, { "math_id": 3, "text": " F = T \\mathbin{\\triangle} T' " }, { "math_id": 4, "text": " \\sum_{e \\in T'} c(e) " }, { "math_id": 5, "text": " \\sum_{e \\in T} c(e) " }, { "math_id": 6, "text": "F" }, { "math_id": 7, "text": "T" }, { "math_id": 8, "text": "T'" }, { "math_id": 9, "text": " g(F) = \\sum_{e \\in F \\cap T} c(e) - \\sum_{e \\in F \\setminus T} c(e) \\quad" }, { "math_id": 10, "text": " F \\subseteq \\mathrm{E}(G) " }, { "math_id": 11, "text": " T " }, { "math_id": 12, "text": " g(T \\mathbin{\\triangle} T') = \\sum_{e \\in T} c(e) - \\sum_{e \\in T'} c(e) " }, { "math_id": 13, "text": "k" }, { "math_id": 14, "text": " 2k " }, { "math_id": 15, "text": " T \\mathbin{\\triangle} F " }, { "math_id": 16, "text": " g(F) > 0 " }, { "math_id": 17, "text": " F " }, { "math_id": 18, "text": "G" }, { "math_id": 19, "text": "\\bigl( \\mathrm{V}(G),T \\bigr)" }, { "math_id": 20, "text": "\\bigl( \\mathrm{V}(G),T' \\bigr)" }, { "math_id": 21, "text": "2" }, { "math_id": 22, "text": "G[T \\mathbin{\\triangle} T'] = \\bigl( \\mathrm{V}(G),T \\mathbin{\\triangle} T'\\bigr)" }, { "math_id": 23, "text": "0" }, { "math_id": 24, "text": "4" }, { "math_id": 25, "text": "G[T \\mathbin{\\triangle} T']" }, { "math_id": 26, "text": "\\leq 0" }, { "math_id": 27, "text": " a_0,\\dotsc,a_{n-1} " }, { "math_id": 28, "text": " \\sum_{i=0}^{n-1} a_i > 0 " }, { "math_id": 29, "text": " \\sum_{i=0}^r a_{(k+i) \\bmod n} > 0 " }, { "math_id": 30, "text": "r=0,1,\\dotsc,n-1" }, { "math_id": 31, "text": "F = e_0 \\, e_1 \\, \\dots \\, e_{n-1}" }, { "math_id": 32, "text": " a_i = c(e_i) " }, { "math_id": 33, "text": " e_i \\in T " }, { "math_id": 34, "text": " a_i = -c(e_i) " }, { "math_id": 35, "text": " e_i \\notin T " }, { "math_id": 36, "text": " \\sum\\nolimits_{i=0}^{n-1} a_i " }, { "math_id": 37, "text": " g(F) " }, { "math_id": 38, "text": "v_0" }, { "math_id": 39, "text": "(u,i,g)" }, { "math_id": 40, "text": "u \\in \\mathrm{V}(G)" }, { "math_id": 41, "text": "i \\geq 0" }, { "math_id": 42, "text": "g" }, { "math_id": 43, "text": "(u,0,0)" }, { "math_id": 44, "text": "v_i := u" }, { "math_id": 45, "text": " F = \\{ v_0 v_1, v_1 v_2, \\dotsc, v_{i-1} v_i \\} " }, { "math_id": 46, "text": "i" }, { "math_id": 47, "text": " v_i u \\in T \\setminus \\{ v_0 v_1, v_1 v_2, \\dotsc, v_{i-1} v_i \\}" }, { "math_id": 48, "text": "\\bigl( u, i+1, g+c(v_i u) \\bigr)" }, { "math_id": 49, "text": " g > c(v_i v_0) " }, { "math_id": 50, "text": " \\{ v_0 v_1, v_1 v_2, \\dotsc, v_{i-1} v_i, v_i v_0 \\} " }, { "math_id": 51, "text": " g - c(v_i v_0) > 0 " }, { "math_id": 52, "text": " g > c(v_i u)" }, { "math_id": 53, "text": " v_i u \\notin T \\cup \\{ v_0 v_1, v_1 v_2, \\dotsc, v_{i-1} v_i \\}" }, { "math_id": 54, "text": "O(n)" }, { "math_id": 55, "text": "\\bigl( u, i+1, g-c(v_i u) \\bigr)" }, { "math_id": 56, "text": "T \\mathbin{\\triangle} F" }, { "math_id": 57, "text": " i = 2n " }, { "math_id": 58, "text": "p_1" }, { "math_id": 59, "text": " p_1 = 5 " }, { "math_id": 60, "text": "p_2" }, { "math_id": 61, "text": " p_2 = 2 " }, { "math_id": 62, "text": " O( n^{\\lfloor p_1/2 \\rfloor} ) " }, { "math_id": 63, "text": " \\lfloor p_1/2 \\rfloor " }, { "math_id": 64, "text": " 2.2 " }, { "math_id": 65, "text": "n" }, { "math_id": 66, "text": " v_0, v_1, \\dotsc " }, { "math_id": 67, "text": " g^* " }, { "math_id": 68, "text": " g^* := 0 " }, { "math_id": 69, "text": " F 
:= \\varnothing " }, { "math_id": 70, "text": " i \\leq p_2 " }, { "math_id": 71, "text": " u v_0 \\notin T \\cup \\{ v_0 v_1, v_1 v_2, \\dotsc, v_{i-1} v_i, v_i u \\} " }, { "math_id": 72, "text": " T \\mathbin{\\triangle} \n\\{ v_0 v_1, v_1 v_2, \\dotsc, v_{i-1} v_i, v_i u, u v_0 \\} " }, { "math_id": 73, "text": " g - c(v_i v_0) > g^* " }, { "math_id": 74, "text": " T \\mathbin{\\triangle} \n\\{ v_0 v_1, v_1 v_2, \\dotsc, v_{i-1} v_i, v_i v_0 \\} " }, { "math_id": 75, "text": " F := \\{ v_0 v_1, v_1 v_2, \\dotsc, v_{i-1} v_i, v_i v_0 \\} " }, { "math_id": 76, "text": " g^* := g - c(v_i v_0) " }, { "math_id": 77, "text": " (u,j,g) " }, { "math_id": 78, "text": " i \\leq j " }, { "math_id": 79, "text": " g^* > 0 " }, { "math_id": 80, "text": " T := T \\mathbin{\\triangle} F " }, { "math_id": 81, "text": " i > p_1 " }, { "math_id": 82, "text": " j > p_1 " }, { "math_id": 83, "text": " g^*=0" }, { "math_id": 84, "text": " T \\mathbin{\\triangle} T' " }, { "math_id": 85, "text": " v_{2k+1} " }, { "math_id": 86, "text": " T \\mathbin{\\triangle} \\{ v_0 v_1, v_1 v_2, \\dotsc, v_{2k} v_{2k+1}, v_{2k+1} v_0 \\} " }, { "math_id": 87, "text": " G " }, { "math_id": 88, "text": " \\bigl( \\mathrm{V}(G), T \\setminus \\{ v_0 v_1, \\dotsc, v_{2k-2} v_{2k-1} \\} \\bigr) " }, { "math_id": 89, "text": "(k+1)" }, { "math_id": 90, "text": " v_{2k} v_{2k+1} " }, { "math_id": 91, "text": " v_{2k} " }, { "math_id": 92, "text": "2k" }, { "math_id": 93, "text": " v_{2k-1} " } ]
https://en.wikipedia.org/wiki?curid=8818888
882010
Cayley transform
Mathematical operation In mathematics, the Cayley transform, named after Arthur Cayley, is any of a cluster of related things. As originally described by , the Cayley transform is a mapping between skew-symmetric matrices and special orthogonal matrices. The transform is a homography used in real analysis, complex analysis, and quaternionic analysis. In the theory of Hilbert spaces, the Cayley transform is a mapping between linear operators . Real homography. A simple example of a Cayley transform can be done on the real projective line. The Cayley transform here will permute the elements of {1, 0, −1, ∞} in sequence. For example, it maps the positive real numbers to the interval [−1, 1]. Thus the Cayley transform is used to adapt Legendre polynomials for use with functions on the positive real numbers with Legendre rational functions. As a real homography, points are described with projective coordinates, and the mapping is formula_0 Complex homography. On the upper half of the complex plane, the Cayley transform is: formula_1 Since formula_2 is mapped to formula_3, and Möbius transformations permute the generalised circles in the complex plane, formula_4 maps the real line to the unit circle. Furthermore, since formula_4 is a homeomorphism and formula_5 is taken to 0 by formula_4, the upper half-plane is mapped to the unit disk. In terms of the models of hyperbolic geometry, this Cayley transform relates the Poincaré half-plane model to the Poincaré disk model. In electrical engineering the Cayley transform has been used to map a reactance half-plane to the Smith chart used for impedance matching of transmission lines. Quaternion homography. In the four-dimensional space of quaternions formula_6, the versors formula_7 form the unit 3-sphere. Since quaternions are non-commutative, elements of its projective line have homogeneous coordinates written formula_8 to indicate that the homogeneous factor multiplies on the left. The quaternion transform is formula_9 The real and complex homographies described above are instances of the quaternion homography where formula_10 is zero or formula_11, respectively. Evidently the transform takes formula_12 and takes formula_13. Evaluating this homography at formula_14 maps the versor formula_15 into its axis: formula_16 But formula_17 Thus formula_18 In this form the Cayley transform has been described as a rational parametrization of rotation: Let formula_19 in the complex number identity formula_20 where the right hand side is the transform of formula_21 and the left hand side represents the rotation of the plane by negative formula_22 radians. Inverse. Let formula_23 Since formula_24 where the equivalence is in the projective linear group over quaternions, the inverse of formula_25 is formula_26 Since homographies are bijections, formula_27 maps the vector quaternions to the 3-sphere of versors. As versors represent rotations in 3-space, the homography formula_28 produces rotations from the ball in formula_29. Matrix map. Among "n"×"n" square matrices over the reals, with "I" the identity matrix, let "A" be any skew-symmetric matrix (so that "A"T = −"A"). Then "I" + "A" is invertible, and the Cayley transform formula_30 produces an orthogonal matrix, "Q" (so that "Q"T"Q" = "I"). The matrix multiplication in the definition of "Q" above is commutative, so "Q" can be alternatively defined as formula_31. In fact, "Q" must have determinant +1, so is special orthogonal. 
Conversely, let "Q" be any orthogonal matrix which does not have −1 as an eigenvalue; then formula_32 is a skew-symmetric matrix. (See also: Involution.) The condition on "Q" automatically excludes matrices with determinant −1, but also excludes certain special orthogonal matrices. However, any rotation (special orthogonal) matrix "Q" can be written as formula_33 for some skew-symmetric matrix "A"; more generally any orthogonal matrix "Q" can be written as formula_34 for some skew-symmetric matrix "A" and some diagonal matrix "E" with ±1 as entries. A slightly different form is also seen, requiring different mappings in each direction, formula_35 The mappings may also be written with the order of the factors reversed; however, "A" always commutes with (μ"I" ± "A")−1, so the reordering does not affect the definition. Examples. In the 2×2 case, we have formula_36 The 180° rotation matrix, −"I", is excluded, though it is the limit as tan θ⁄2 goes to infinity. In the 3×3 case, we have formula_37 where "K" = "w"2 + "x"2 + "y"2 + "z"2, and where "w" = 1. This we recognize as the rotation matrix corresponding to quaternion formula_38 (by a formula Cayley had published the year before), except scaled so that "w" = 1 instead of the usual scaling so that "w"2 + "x"2 + "y"2 + "z"2 = 1. Thus vector ("x","y","z") is the unit axis of rotation scaled by tan θ⁄2. Again excluded are 180° rotations, which in this case are all "Q" which are symmetric (so that "Q"T = "Q"). Other matrices. One can extend the mapping to complex matrices by substituting "unitary" for "orthogonal" and "skew-Hermitian" for "skew-symmetric", the difference being that the transpose (·T) is replaced by the conjugate transpose (·H). This is consistent with replacing the standard real inner product with the standard complex inner product. In fact, one may extend the definition further with choices of adjoint other than transpose or conjugate transpose. Formally, the definition only requires some invertibility, so one can substitute for "Q" any matrix "M" whose eigenvalues do not include −1. For example, formula_39 Note that "A" is skew-symmetric (respectively, skew-Hermitian) if and only if "Q" is orthogonal (respectively, unitary) with no eigenvalue −1. Operator map. An infinite-dimensional version of an inner product space is a Hilbert space, and one can no longer speak of matrices. However, matrices are merely representations of linear operators, and these can be used. So, generalizing both the matrix mapping and the complex plane mapping, one may define a Cayley transform of operators. formula_40 Here the domain of "U", dom "U", is ("A"+i"I") dom "A". See self-adjoint operator for further details. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "[y,\\ 1] = \\left[\\frac {x - 1}{x +1},\\ 1\\right] \\thicksim [x - 1, \\ x + 1] = [x,\\ 1]\\begin{pmatrix}1 & 1 \\\\ -1 & 1 \\end{pmatrix} ." }, { "math_id": 1, "text": "f(z) = \\frac {z - i}{z + i} ." }, { "math_id": 2, "text": "\\{\\infty, 1, -1\\}" }, { "math_id": 3, "text": "\\{1, -i, i\\}" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "i" }, { "math_id": 6, "text": "a+b\\vec{i}+c\\vec{j}+d\\vec{k}" }, { "math_id": 7, "text": "u(\\theta, r) = \\cos \\theta + r \\sin \\theta " }, { "math_id": 8, "text": "U[a,b]" }, { "math_id": 9, "text": "f(u,q) = U[q,1]\\begin{pmatrix}1 & 1 \\\\ -u & u \\end{pmatrix} = U[q - u,\\ q + u] \\sim U[(q + u)^{-1}(q - u),\\ 1]." }, { "math_id": 10, "text": "\\theta" }, { "math_id": 11, "text": "\\pi/2" }, { "math_id": 12, "text": "u\\to 0\\to -1" }, { "math_id": 13, "text": "-u \\to \\infty \\to 1" }, { "math_id": 14, "text": "q=1" }, { "math_id": 15, "text": "u" }, { "math_id": 16, "text": "f(u,1) =(1+u)^{-1}(1-u) = (1+u)^*(1-u)/ |1+u|^2." }, { "math_id": 17, "text": "|1+u|^2 = (1+u)(1+u^*) = 2 + 2 \\cos \\theta ,\\quad \\text{and}\\quad (1+u^*)(1-u) = -2 r \\sin \\theta ." }, { "math_id": 18, "text": "f(u,1) = -r \\frac {\\sin \\theta}{1 + \\cos \\theta} = -r \\tan \\frac{\\theta}{2} ." }, { "math_id": 19, "text": "t=\\tan\\phi/2" }, { "math_id": 20, "text": "e^{-i \\varphi} = \\frac{1 - ti}{1 + ti} " }, { "math_id": 21, "text": "ti" }, { "math_id": 22, "text": "\\phi" }, { "math_id": 23, "text": "u^* = \\cos \\theta - r \\sin \\theta = u^{-1} ." }, { "math_id": 24, "text": "\\begin{pmatrix} 1 & 1 \\\\ -u & u \\end{pmatrix}\\ \\begin{pmatrix} 1 & -u^* \\\\ 1 & u^* \\end{pmatrix} \\ = \\ \\begin{pmatrix} 2 & 0 \\\\ 0 & 2 \\end{pmatrix} \\ \\sim \\ \\begin{pmatrix} 1 & 0 \\\\ 0 & 1 \\end{pmatrix} \\ ," }, { "math_id": 25, "text": "f(u,1)" }, { "math_id": 26, "text": "U[p,1] \\begin{pmatrix} 1 & -u^* \\\\ 1 & u^* \\end{pmatrix} \\ = \\ U[p+1,\\ (1-p)u^*] \\sim U[u(1-p)^{-1} (p+1), \\ 1] ." }, { "math_id": 27, "text": "f^{-1} (u,1)" }, { "math_id": 28, "text": "f^{-1}" }, { "math_id": 29, "text": "\\R^3" }, { "math_id": 30, "text": " Q = (I - A)(I + A)^{-1} \\,\\!" }, { "math_id": 31, "text": " Q = (I + A)^{-1}(I - A)" }, { "math_id": 32, "text": " A = (I - Q)(I + Q)^{-1} \\,\\!" }, { "math_id": 33, "text": "Q = \\bigl((I - A)(I + A)^{-1}\\bigr)^2" }, { "math_id": 34, "text": "Q = E(I - A)(I + A)^{-1}" }, { "math_id": 35, "text": "\\begin{align}\n Q &= (I - A)^{-1}(I + A), \\\\[5mu]\n A &= (Q - I)(Q + I)^{-1}.\n\\end{align}" }, { "math_id": 36, "text": "\n\\begin{bmatrix} 0 & \\tan \\frac{\\theta}{2} \\\\ -\\tan \\frac{\\theta}{2} & 0 \\end{bmatrix}\n\\leftrightarrow\n\\begin{bmatrix} \\cos \\theta & -\\sin \\theta \\\\ \\sin \\theta & \\cos \\theta \\end{bmatrix} .\n" }, { "math_id": 37, "text": "\n\\begin{bmatrix} 0 & z & -y \\\\ -z & 0 & x \\\\ y & -x & 0 \\end{bmatrix}\n\\leftrightarrow\n\\frac{1}{K}\n\\begin{bmatrix}\n w^2+x^2-y^2-z^2 & 2 (x y-w z) & 2 (w y+x z) \\\\\n 2 (x y+w z) & w^2-x^2+y^2-z^2 & 2 (y z-w x) \\\\\n 2 (x z-w y) & 2 (w x+y z) & w^2-x^2-y^2+z^2\n\\end{bmatrix} ,\n" }, { "math_id": 38, "text": " w + \\mathbf{i} x + \\mathbf{j} y + \\mathbf{k} z \\,\\!" 
}, { "math_id": 39, "text": "\n\\begin{bmatrix} 0 & -a & ab - c \\\\ 0 & 0 & -b \\\\ 0 & 0 & 0 \\end{bmatrix}\n\\leftrightarrow\n\\begin{bmatrix} 1 & 2a & 2c \\\\ 0 & 1 & 2b \\\\ 0 & 0 & 1 \\end{bmatrix} .\n" }, { "math_id": 40, "text": "\\begin{align}\n U &{}= (A - \\mathbf{i}I) (A + \\mathbf{i}I)^{-1} \\\\\n A &{}= \\mathbf{i}(I + U) (I - U)^{-1}\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=882010
882112
BMW M73
The BMW M73 is a naturally-aspirated, SOHC, V12 petrol engine which replaced the BMW M70 and was produced from 1993 to 2002. It was used in the BMW E38 7 Series, E31 8 Series and Rolls-Royce Silver Seraph. Design. Compared with its M70 predecessor, the M73 features an updated roller-rocker valve-train and an increased displacement, due to an increase in cylinder bore to 85mm and an increase in piston stroke to 79mm. While most other engines in the BMW range had switched to dual overhead camshafts with four valves per cylinder, the M73 used a single overhead camshaft with two valves per cylinder. However, BMW engineers did create a prototype four-valve-per-cylinder V12, called the M72, which developed more power and torque than the production two-valve engine. However, this 48-valve engine did not fulfill the comfort demands of the large sedan class in all respects and, as a result, did not go into production. These engines have two separate Bosch Motronic engine control units, which are connected to a third Siemens control unit, called EML. The engine utilizes four knock sensors mounted on the heads, a cam position sensor mounted on the right head and two crank sensors, one for each ECU. Throttle by wire is used. Similar to its predecessor, the M70, spark distributors are still present for each cylinder bank. The hydrogen versions of the engine feature a formula_0 lean burn concept, which allows quality torque control similar to a Diesel engine (i.e. no engine throttling). Due to the lower LHV of a hydrogen-air mixture, and the fact that the engine was designed as a Bi-Fuel (petrol and hydrogen) engine, the power output is reduced by 38 per cent compared to the petrol-only version of the engine. Models. M73B54. Applications: M73TUB54. Due to more stringent emissions standards, the 1999 model year marked the addition of an electronically heated catalyst, variable MAP thermostat, and water cooled alternator. Applications: M72B54 prototype. In 1989, as part of a feasibility study, BMW's engine development department produced a four-valve version of its V12 engine. Only one engine was produced, an up-sized variant of the M70B50 engine called the M72B54. Bore and stroke were increased to 85mm and 79mm, giving a total displacement of 5.4L and matching the values of the standard M73 engine. The power and torque were both increased. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\lambda \\ge 3.0" } ]
https://en.wikipedia.org/wiki?curid=882112
88213
Apsis
Either of two extreme points in a celestial object's orbit An apsis (from grc " ' ()" 'arch, vault'; pl. apsides ) is the farthest or nearest point in the orbit of a planetary body about its primary body. The line of apsides is the line connecting the two extreme values. Apsides pertaining to orbits around the Sun have distinct names to differentiate themselves from other apsides; these names are aphelion for the farthest and perihelion for the nearest point in the solar orbit. The Moon's two apsides are the farthest point, apogee, and the nearest point, perigee, of its orbit around the host Earth. Earth's two apsides are the farthest point, "aphelion", and the nearest point, "perihelion", of its orbit around the host Sun. The terms "aphelion" and "perihelion" apply in the same way to the orbits of Jupiter and the other planets, the comets, and the asteroids of the Solar System. General description. There are two apsides in any elliptic orbit. The name for each apsis is created from the prefixes "ap-", "apo-" (from " "ἀπ(ό)", (ap(o)-)" 'away from') for the farthest or "peri-" (from " "περί" (peri-)" 'near') for the closest point to the primary body, with a suffix that describes the primary body. The suffix for Earth is "-gee", so the apsides' names are "apogee" and "perigee". For the Sun, the suffix is "-helion", so the names are "aphelion" and "perihelion". According to Newton's laws of motion, all periodic orbits are ellipses. The barycenter of the two bodies may lie well within the bigger body—e.g., the Earth–Moon barycenter is about 75% of the way from Earth's center to its surface. If, compared to the larger mass, the smaller mass is negligible (e.g., for satellites), then the orbital parameters are independent of the smaller mass. When used as a suffix—that is, "-apsis"—the term can refer to the two distances from the primary body to the orbiting body when the latter is located: 1) at the "periapsis" point, or 2) at the "apoapsis" point (compare both graphics, second figure). The line of apsides denotes the distance of the line that joins the nearest and farthest points across an orbit; it also refers simply to the extreme range of an object orbiting a host body (see top figure; see third figure). In orbital mechanics, the apsides technically refer to the distance measured between the center of mass of the central body and the center of mass of the orbiting body. However, in the case of a spacecraft, the terms are commonly used to refer to the orbital altitude of the spacecraft above the surface of the central body (assuming a constant, standard reference radius). Terminology. The words "pericenter" and "apocenter" are often seen, although periapsis/apoapsis are preferred in technical usage. Etymology. The words "perihelion" and "aphelion" were coined by Johannes Kepler to describe the orbital motions of the planets around the Sun. The words are formed from the prefixes "peri-" (Greek: "περί", near) and "apo-" (Greek: "ἀπό", away from), affixed to the Greek word for the Sun, ("ἥλιος", or "hēlíos"). Various related terms are used for other celestial objects. The suffixes "-gee", "-helion", "-astron" and "-galacticon" are frequently used in the astronomical literature when referring to the Earth, Sun, stars, and the Galactic Center respectively. The suffix "-jove" is occasionally used for Jupiter, but "-saturnium" has very rarely been used in the last 50 years for Saturn. 
The "-gee" form is also used as a generic closest-approach-to "any planet" term—instead of applying it only to Earth. During the Apollo program, the terms "pericynthion" and "apocynthion" were used when referring to orbiting the Moon; they reference Cynthia, an alternative name for the Greek Moon goddess Artemis. More recently, during the Artemis program, the terms "perilune" and "apolune" have been used. Regarding black holes, the term peribothron was first used in a 1976 paper by J. Frank and M. J. Rees, who credit W. R. Stoeger for suggesting creating a term using the greek word for pit: "bothron". The terms "perimelasma" and "apomelasma" (from a Greek root) were used by physicist and science-fiction author Geoffrey A. Landis in a story published in 1998, thus appearing before "perinigricon" and "aponigricon" (from Latin) in the scientific literature in 2002. Terminology summary. The suffixes shown below may be added to prefixes "peri-" or "apo-" to form unique names of apsides for the orbiting bodies of the indicated host/(primary) system. However, only for the Earth, Moon and Sun systems are the unique suffixes commonly used. Exoplanet studies commonly use "-astron", but typically, for other host systems the generic suffix, "-apsis", is used instead. Perihelion and aphelion. The perihelion (q) and aphelion (Q) are the nearest and farthest points respectively of a body's direct orbit around the Sun. Comparing osculating elements at a specific epoch to effectively those at a different epoch will generate differences. The time-of-perihelion-passage as one of six osculating elements is not an exact prediction (other than for a generic two-body model) of the actual minimum distance to the Sun using the full dynamical model. Precise predictions of perihelion passage require numerical integration. Inner planets and outer planets. The two images below show the orbits, orbital nodes, and positions of perihelion (q) and aphelion (Q) for the planets of the Solar System as seen from above the northern pole of Earth's ecliptic plane, which is coplanar with Earth's orbital plane. The planets travel counterclockwise around the Sun and for each planet, the blue part of their orbit travels north of the ecliptic plane, the pink part travels south, and dots mark perihelion (green) and aphelion (orange). The first image (below-left) features the "inner" planets, situated outward from the Sun as Mercury, Venus, Earth, and Mars. The "reference" Earth-orbit is colored yellow and represents the orbital plane of reference. At the time of vernal equinox, the Earth is at the bottom of the figure. The second image (below-right) shows the "outer" planets, being Jupiter, Saturn, Uranus, and Neptune. The orbital nodes are the two end points of the "line of nodes" where a planet's tilted orbit intersects the plane of reference; here they may be 'seen' as the points where the blue section of an orbit meets the pink. Lines of apsides. The chart shows the extreme range—from the closest approach (perihelion) to farthest point (aphelion)—of several orbiting celestial bodies of the Solar System: the planets, the known dwarf planets, including Ceres, and Halley's Comet. The length of the horizontal bars correspond to the extreme range of the orbit of the indicated body around the Sun. These extreme distances (between perihelion and aphelion) are "the lines of apsides" of the orbits of various objects around a host body. Distances of selected bodies of the Solar System from the Sun. 
The left and right edges of each bar correspond to the perihelion and aphelion of the body, respectively, hence long bars denote high orbital eccentricity. The radius of the Sun is 0.7 million km, and the radius of Jupiter (the largest planet) is 0.07 million km, both too small to resolve on this image. Earth perihelion and aphelion. Currently, the Earth reaches perihelion in early January, approximately 14 days after the December solstice. At perihelion, the Earth's center is about astronomical units (AU) or from the Sun's center. In contrast, the Earth reaches aphelion currently in early July, approximately 14 days after the June solstice. The aphelion distance between the Earth's and Sun's centers is currently about or . The dates of perihelion and aphelion change over time due to precession and other orbital factors, which follow cyclical patterns known as Milankovitch cycles. In the short term, such dates can vary up to 2 days from one year to another. This significant variation is due to the presence of the Moon: while the Earth–Moon barycenter is moving on a stable orbit around the Sun, the position of the Earth's center which is on average about from the barycenter, could be shifted in any direction from it—and this affects the timing of the actual closest approach between the Sun's and the Earth's centers (which in turn defines the timing of perihelion in a given year). Because of the increased distance at aphelion, only 93.55% of the radiation from the Sun falls on a given area of Earth's surface as does at perihelion, but this does not account for the seasons, which result instead from the tilt of Earth's axis of 23.4° away from perpendicular to the plane of Earth's orbit. Indeed, at both perihelion and aphelion it is summer in one hemisphere while it is winter in the other one. Winter falls on the hemisphere where sunlight strikes least directly, and summer falls where sunlight strikes most directly, regardless of the Earth's distance from the Sun. In the northern hemisphere, summer occurs at the same time as aphelion, when solar radiation is lowest. Despite this, summers in the northern hemisphere are on average warmer than in the southern hemisphere, because the northern hemisphere contains larger land masses, which are easier to heat than the seas. Perihelion and aphelion do however have an indirect effect on the seasons: because Earth's orbital speed is minimum at aphelion and maximum at perihelion, the planet takes longer to orbit from June solstice to September equinox than it does from December solstice to March equinox. Therefore, summer in the northern hemisphere lasts slightly longer (93 days) than summer in the southern hemisphere (89 days). Astronomers commonly express the timing of perihelion relative to the First Point of Aries not in terms of days and hours, but rather as an angle of orbital displacement, the so-called longitude of the periapsis (also called longitude of the pericenter). For the orbit of the Earth, this is called the "longitude of perihelion", and in 2000 it was about 282.895°; by 2010, this had advanced by a small fraction of a degree to about 283.067°, i.e. a mean increase of 62" per year. For the orbit of the Earth around the Sun, the time of apsis is often expressed in terms of a time relative to seasons, since this determines the contribution of the elliptical orbit to seasonal variations. 
The variation of the seasons is primarily controlled by the annual cycle of the elevation angle of the Sun, which is a result of the tilt of the axis of the Earth measured from the plane of the ecliptic. The Earth's eccentricity and other orbital elements are not constant, but vary slowly due to the perturbing effects of the planets and other objects in the solar system (Milankovitch cycles). On a very long time scale, the dates of the perihelion and of the aphelion progress through the seasons, and they make one complete cycle in 22,000 to 26,000 years. There is a corresponding movement of the position of the stars as seen from Earth, called the apsidal precession. (This is closely related to the precession of the axes.) The dates and times of the perihelions and aphelions for several past and future years are listed in the following table: Other planets. The following table shows the distances of the planets and dwarf planets from the Sun at their perihelion and aphelion. Mathematical formulae. These formulae characterize the pericenter and apocenter of an orbit: While, in accordance with Kepler's laws of planetary motion (based on the conservation of angular momentum) and the conservation of energy, these two quantities are constant for a given orbit: where: Note that for conversion from heights above the surface to distances between an orbit and its primary, the radius of the central body has to be added, and conversely. The arithmetic mean of the two limiting distances is the length of the semi-major axis "a". The geometric mean of the two distances is the length of the semi-minor axis "b". The geometric mean of the two limiting speeds is formula_10 which is the speed of a body in a circular orbit whose radius is formula_11. Time of perihelion. Orbital elements such as the "time of perihelion passage" are defined at the epoch chosen using an unperturbed two-body solution that does not account for the n-body problem. To get an accurate time of perihelion passage you need to use an epoch close to the perihelion passage. For example, using an epoch of 1996, Comet Hale–Bopp shows perihelion on 1 April 1997. Using an epoch of 2008 shows a less accurate perihelion date of 30 March 1997. Short-period comets can be even more sensitive to the epoch selected. Using an epoch of 2005 shows 101P/Chernykh coming to perihelion on 25 December 2005, but using an epoch of 2012 produces a less accurate unperturbed perihelion date of 20 January 2006. Numerical integration shows dwarf planet Eris will come to perihelion around December 2257. Using an epoch of 2021, which is 236 years early, less accurately shows Eris coming to perihelion in 2260. 4 Vesta came to perihelion on 26 December 2021, but using a two-body solution at an epoch of July 2021 less accurately shows Vesta came to perihelion on 25 December 2021. Short arcs. Trans-Neptunian objects discovered when 80+ AU from the Sun need dozens of observations over multiple years to well constrain their orbits because they move very slowly against the background stars. Due to statistics of small numbers, trans-Neptunian objects such as when it had only 8 observations over an observation arc of 1 year that have not or will not come to perihelion for roughly 100 years can have a 1-sigma uncertainty of in the perihelion date. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
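The apsis formulae can be illustrated with a short numerical sketch (assuming Python; the rounded values of Earth's semi-major axis, eccentricity and the Sun's gravitational parameter are approximations used only for the example). Using r_per = (1 − e)a and r_ap = (1 + e)a, it gives values close to the figures usually quoted for Earth and reproduces the roughly 93.5% aphelion-to-perihelion irradiance ratio mentioned above.

```python
import math

AU_KM = 149_597_870.7       # kilometres per astronomical unit (rounded)
MU_SUN = 1.32712440e11      # Sun's gravitational parameter, km^3/s^2 (rounded)

def apsides(a_au, e):
    """Perihelion/aphelion distances (AU) and speeds (km/s) for a solar orbit."""
    r_per, r_ap = a_au * (1 - e), a_au * (1 + e)
    v_circ = math.sqrt(MU_SUN / (a_au * AU_KM))
    v_per = v_circ * math.sqrt((1 + e) / (1 - e))
    v_ap = v_circ * math.sqrt((1 - e) / (1 + e))
    return r_per, r_ap, v_per, v_ap

# Earth: a ~ 1 AU, e ~ 0.0167
r_per, r_ap, v_per, v_ap = apsides(1.000, 0.0167)
print(f"perihelion ~{r_per * AU_KM:,.0f} km at ~{v_per:.2f} km/s")
print(f"aphelion   ~{r_ap * AU_KM:,.0f} km at ~{v_ap:.2f} km/s")

# Inverse-square ratio of sunlight received at aphelion vs. perihelion (~93.5%)
print(f"irradiance ratio ~{(r_per / r_ap) ** 2:.4f}")
```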
[ { "math_id": 0, "text": "v_\\text{per} = \\sqrt{ \\frac{(1 + e)\\mu}{(1 - e)a} } \\," }, { "math_id": 1, "text": "r_\\text{per} = (1 - e)a" }, { "math_id": 2, "text": " v_\\text{ap} = \\sqrt{\\frac{(1 - e)\\mu}{(1 + e)a} } \\," }, { "math_id": 3, "text": "r_\\text{ap} = (1 + e)a" }, { "math_id": 4, "text": "h = \\sqrt{\\left(1 - e^2\\right)\\mu a}" }, { "math_id": 5, "text": "\\varepsilon = -\\frac{\\mu}{2a}" }, { "math_id": 6, "text": "r_\\text{ap}" }, { "math_id": 7, "text": "r_\\text{per}" }, { "math_id": 8, "text": "a = \\frac{r_\\text{per} + r_\\text{ap}}{2}" }, { "math_id": 9, "text": "e = \\frac{r_\\text{ap} - r_\\text{per}}{r_\\text{ap} + r_\\text{per}} = 1 - \\frac{2}{\\frac{r_\\text{ap}}{r_\\text{per}} + 1}" }, { "math_id": 10, "text": "\\sqrt{-2\\varepsilon} = \\sqrt{\\frac{\\mu}{a}}" }, { "math_id": 11, "text": "a" } ]
https://en.wikipedia.org/wiki?curid=88213
8822002
Hitachi DX07
The Hitachi DX07 is a 2.9 inch (73 mm) liquid crystal display screen which went into mass production in December 2006: Each pixel is therefore 78μm x 77μm (1μm = 1/1000mm). Note: It's also safe to assume that this is an 8-bit per channel display (a 6-bit per channel display can only dither 16.2 million colours) Optical Properties of the DX07. The DX07 has a number of interesting properties related to its small size and high resolution. The following are true for someone with "normal" vision using this screen as a hand held device: Notes: Smallest Visible Pixel Size for a Hand Held Device. The smallest visible pixel size ("p") for a hand held device can be calculated assuming the screen is held at a comfortable distance (250mm) for someone with "normal" vision (able to see detail at a 1/60 degree angle): formula_0 It can therefore be concluded that the pixel size for the Hitachi DX07 screen is only 5% greater than the smallest possible for practical use. Note: Smaller pixel sizes would dither in much the same way as printed dots (72.7μm is approximately 350dpi) Other Hand Held Screen Sizes. If other hand held devices were manufactured for the latest high-definition video screen, then their smallest sizes without loss of detail (i.e. 72.7μm pixels) would be as follows:
[ { "math_id": 0, "text": "p = 250 \\cdot sin\\tfrac{1}{60} \\approx 0.0727mm = 72.7\\mu m" } ]
https://en.wikipedia.org/wiki?curid=8822002
882332
Alexander Macfarlane
Scottish physicist and mathematician (1851–1913) Alexander Macfarlane FRSE LLD (21 April 1851 – 28 August 1913) was a Scottish logician, physicist, and mathematician. Life. Macfarlane was born in Blairgowrie, Scotland, to Daniel MacFarlane (Shoemaker, Blairgowrie) and Ann Small. He studied at the University of Edinburgh. His doctoral thesis "The disruptive discharge of electricity" reported on experimental results from the laboratory of Peter Guthrie Tait. In 1878 Macfarlane spoke at the Royal Society of Edinburgh on algebraic logic as introduced by George Boole. He was elected a Fellow of the Royal Society of Edinburgh. His proposers were Peter Guthrie Tait, Philip Kelland, Alexander Crum Brown, and John Hutton Balfour. The next year he published "Principles of the Algebra of Logic" which interpreted Boolean variable expressions with algebraic manipulation. During his life, Macfarlane played a prominent role in research and education. He taught at the universities of Edinburgh and St Andrews, was physics professor at the University of Texas (1885–1894), professor of Advanced Electricity, and later of mathematical physics, at Lehigh University. In 1896 Macfarlane encouraged the association of quaternion students to promote the algebra. He became the Secretary of the Quaternion Society, and in 1909 its president. He edited the "Bibliography of Quaternions" that the Society published in 1904. Macfarlane was also the author of a popular 1916 collection of mathematical biographies ("Ten British Mathematicians"), a similar work on physicists ("Lectures on Ten British Physicists of the Nineteenth Century", 1919). Macfarlane was caught up in the revolution in geometry during his lifetime, in particular through the influence of G. B. Halsted who was mathematics professor at the University of Texas. Macfarlane originated an "Algebra of Physics", which was his adaptation of quaternions to physical science. His first publication on "Space Analysis" preceded the presentation of Minkowski Space by seventeen years. Macfarlane actively participated in several International Congresses of Mathematicians including the primordial meeting in Chicago, 1893, and the Paris meeting of 1900 where he spoke on "Application of space analysis to curvilinear coordinates". Macfarlane retired to Chatham, Ontario, where he died in 1913. Space analysis. Alexander Macfarlane stylized his work as "Space Analysis". In 1894 he published his five earlier papers and a book review of Alexander McAulay's "Utility of Quaternions in Physics". Page numbers are carried from previous publications, and the reader is presumed familiar with quaternions. The first paper is "Principles of the Algebra of Physics" where he first proposes the hyperbolic quaternion algebra, since "a student of physics finds a difficulty in principle of quaternions which makes the square of a vector negative." The second paper is "The Imaginary of the Algebra". Similar to Homersham Cox (1882/83), Macfarlane uses the hyperbolic versor as the hyperbolic quaternion corresponding to the versor of Hamilton. The presentation is encumbered by the notation formula_0 Later he conformed to the notation exp(A α) used by Euler and Sophus Lie. The expression formula_1 is meant to emphasize that α is a "right versor", where π/2 is the measure of a right angle in radians. The π/2 in the exponent is, in fact, superfluous. Paper three is "Fundamental Theorems of Analysis Generalized for Space". 
At the 1893 mathematical congress Macfarlane read his paper "On the definition of the trigonometric functions" where he proposed that the radian be defined as a ratio of areas rather than of lengths: "the true analytical argument for the circular ratios is not the ratio of the arc to the radius, but the ratio of twice the area of a sector to the square on the radius." The paper was withdrawn from the published proceedings of the mathematical congress (acknowledged at page 167), and privately published in his "Papers on Space Analysis" (1894). Macfarlane reached this idea of ratios of areas while considering the basis for hyperbolic angle, which is analogously defined. The fifth paper is "Elliptic and Hyperbolic Analysis" which considers the spherical law of cosines as the fundamental theorem of the sphere, and proceeds to analogues for the ellipsoid of revolution, general ellipsoid, and equilateral hyperboloids of one and two sheets, where he provides the hyperbolic law of cosines. In 1900 Macfarlane published "Hyperbolic Quaternions" with the Royal Society in Edinburgh, and included a sheet of nine figures, two of which display conjugate hyperbolas. Having been stung in the "Great Vector Debate" over the non-associativity of his Algebra of Physics, he restored associativity by reverting to biquaternions, an algebra used by students of Hamilton since 1853. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
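The hyperbolic-versor identity behind formula_0 — that exp(Aα) = cosh A + α sinh A whenever α² = +1 — can be checked numerically with a 2×2 matrix standing in for the right versor α (a sketch assuming Python with NumPy/SciPy; the particular matrix and the value of A are arbitrary choices of the illustration).

```python
import numpy as np
from scipy.linalg import expm

# A 2x2 stand-in for a "right versor" alpha with alpha^2 = +1 (the hyperbolic case,
# in contrast to Hamilton's quaternion units, whose squares are -1).
alpha = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
A = 0.8                        # an arbitrary hyperbolic angle

lhs = expm(A * alpha)                                  # exp(A * alpha)
rhs = np.cosh(A) * np.eye(2) + np.sinh(A) * alpha      # cosh A + alpha sinh A
print(np.allclose(lhs, rhs))   # True
```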
[ { "math_id": 0, "text": "h \\alpha ^ A = \\cosh A + \\sinh A \\ \\alpha ^{\\pi/2}." }, { "math_id": 1, "text": "\\alpha ^{\\pi/2}" } ]
https://en.wikipedia.org/wiki?curid=882332
8823369
Fedor Bogomolov
Russian and American mathematician Fedor Alekseyevich Bogomolov (born 26 September 1946) (Фёдор Алексеевич Богомолов) is a Russian and American mathematician, known for his research in algebraic geometry and number theory. Bogomolov worked at the Steklov Institute in Moscow before he became a professor at the Courant Institute in New York. He is most famous for his pioneering work on hyperkähler manifolds. Born in Moscow, Bogomolov graduated from Moscow State University, Faculty of Mechanics and Mathematics, and earned his doctorate ("candidate degree") in 1973, at the Steklov Institute. His doctoral advisor was Sergei Novikov. Geometry of Kähler manifolds. Bogomolov's Ph.D. thesis was entitled "Compact Kähler varieties". In his early papers Bogomolov studied the manifolds which were later called Calabi–Yau and hyperkähler. He proved a decomposition theorem, used for the classification of manifolds with trivial canonical class. It has been re-proven using the Calabi–Yau theorem and Berger's classification of Riemannian holonomies, and is foundational for modern string theory. In the late 1970s and early 1980s Bogomolov studied the deformation theory for manifolds with trivial canonical class. He discovered what is now known as Bogomolov–Tian–Todorov theorem, proving the smoothness and un-obstructedness of the deformation space for hyperkaehler manifolds (in 1978 paper) and then extended this to all Calabi–Yau manifolds in the 1981 IHES preprint. Some years later, this theorem became the mathematical foundation for Mirror Symmetry. While studying the deformation theory of hyperkähler manifolds, Bogomolov discovered what is now known as the Bogomolov–Beauville–Fujiki form on formula_0. Studying properties of this form, Bogomolov erroneously concluded that compact hyperkaehler manifolds do not exist, with the exception of K3 surfaces, tori, and their products. Almost four years passed since this publication before Akira Fujiki found a counterexample. Other works in algebraic geometry. Bogomolov's paper on "Holomorphic tensors and vector bundles on projective manifolds" proves what is now known as the Bogomolov–Miyaoka–Yau inequality, and also proves that a stable bundle on a surface, restricted to a curve of sufficiently big degree, remains stable. In "Families of curves on a surface of general type", Bogomolov laid the foundations to the now popular approach to the theory of diophantine equations through geometry of hyperbolic manifolds and dynamical systems. In this paper Bogomolov proved that on any surface of general type with formula_1, there is only a finite number of curves of bounded genus. Some 25 years later, Michael McQuillan extended this argument to prove the famous Green–Griffiths conjecture for such surfaces. In "Classification of surfaces of class formula_2 with formula_3", Bogomolov made the first step in a famously difficult (and still unresolved) problem of classification of surfaces of Kodaira class VII. These are compact complex surfaces with formula_4. If they are in addition minimal, they are called "class formula_2". Kunihiko Kodaira classified all compact complex surfaces except class VII, which are still not understood, except the case formula_3 (Bogomolov) and formula_5 (Andrei Teleman, 2005). Later career. Bogomolov obtained his Habilitation (Russian "Dr. of Sciences") in 1983. In 1994, he emigrated to the United States and became a full professor at the Courant Institute. He is very active in algebraic geometry and number theory. 
From 2009 till March 2014 he served as the Editor-in-Chief of the Central European Journal of Mathematics. Since 2014 he serves as the Editor-in-Chief of the European Journal of Mathematics. Since 2010 he is the academic supervisor of the HSE Laboratory of algebraic geometry and its applications. Bogomolov has extensively contributed to the revival of Russian mathematics. Three major international conferences commemorating his 70th birthday were held in 2016: at the Courant Institute, the University of Nottingham, and the Higher School of Economics in Moscow.
[ { "math_id": 0, "text": "H^2(M)" }, { "math_id": 1, "text": "c_1^2>c_2" }, { "math_id": 2, "text": "VII_0" }, { "math_id": 3, "text": "b_{2}=0" }, { "math_id": 4, "text": "b_2=1" }, { "math_id": 5, "text": "b_{2}=1" } ]
https://en.wikipedia.org/wiki?curid=8823369
882410
Untouchable number
Number that cannot be written as an aliquot sum In mathematics, an untouchable number is a positive integer that cannot be expressed as the sum of all the proper divisors of any positive integer. That is, these numbers are not in the image of the aliquot sum function. Their study goes back at least to Abu Mansur al-Baghdadi (circa 1000 AD), who observed that both 2 and 5 are untouchable. Examples. The first few untouchable numbers are 2, 5, 52, 88, 96, 120, 124, 146, 162, 188, 206, 210, 216, 238, 246, 248, 262, 268, 276, 288, 290, 292, 304, 306, 322, 324, 326, 336, 342, 372, 406, 408, 426, 430, 448, 472, 474, 498, ... (sequence in the OEIS). Properties. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Are there any odd untouchable numbers other than 5? The number 5 is believed to be the only odd untouchable number, but this has not been proven. It would follow from a slightly stronger version of the Goldbach conjecture, since the sum of the proper divisors of "pq" (with "p", "q" distinct primes) is 1 + "p" + "q". Thus, if a number "n" can be written as a sum of two distinct primes, then "n" + 1 is not an untouchable number. It is expected that every even number larger than 6 is a sum of two distinct primes, so probably no odd number larger than 7 is an untouchable number, and formula_0, formula_1, formula_2, so only 5 can be an odd untouchable number. Thus it appears that besides 2 and 5, all untouchable numbers are composite numbers (since except 2, all even numbers are composite). No perfect number is untouchable, since, at the very least, it can be expressed as the sum of its own proper divisors. Similarly, none of the amicable numbers or sociable numbers are untouchable. Also, none of the Mersenne numbers are untouchable, since "M""n" = 2"n" − 1 is equal to the sum of the proper divisors of 2"n". No untouchable number is one more than a prime number, since if "p" is prime, then the sum of the proper divisors of "p"2 is "p" + 1. Also, no untouchable number is three more than a prime number, except 5, since if "p" is an odd prime then the sum of the proper divisors of 2"p" is "p" + 3. Infinitude. There are infinitely many untouchable numbers, a fact that was proven by Paul Erdős. According to Chen &amp; Zhao, their natural density is at least d &gt; 0.06. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
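The opening terms above can be recovered with a short sieve over aliquot sums (a Python sketch; the search bound in the comments follows the standard estimate s(m) ≥ 1 + √m for composite m):

```python
def untouchable_up_to(n_max):
    # For composite m the proper divisors include 1 and a divisor >= sqrt(m),
    # so its aliquot sum s(m) >= 1 + sqrt(m); primes contribute only s(m) = 1.
    # Hence any m with s(m) = n <= n_max satisfies m <= (n_max - 1)**2.
    bound = (n_max - 1) ** 2
    aliquot = [0] * (bound + 1)
    for d in range(1, bound // 2 + 1):          # sieve proper-divisor sums
        for m in range(2 * d, bound + 1, d):
            aliquot[m] += d
    reachable = set(aliquot[2:])                # all values s(m) for 2 <= m <= bound
    return [n for n in range(2, n_max + 1) if n not in reachable]

print(untouchable_up_to(100))   # [2, 5, 52, 88, 96]
```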
[ { "math_id": 0, "text": "1=\\sigma(2)-2" }, { "math_id": 1, "text": "3=\\sigma(4)-4" }, { "math_id": 2, "text": "7=\\sigma(8)-8" } ]
https://en.wikipedia.org/wiki?curid=882410
882427
Categorial grammar
Family of formalisms in natural language syntax Categorial grammar is a family of formalisms in natural language syntax that share the central assumption that syntactic constituents combine as functions and arguments. Categorial grammar posits a close relationship between the syntax and semantic composition, since it typically treats syntactic categories as corresponding to semantic types. Categorial grammars were developed in the 1930s by Kazimierz Ajdukiewicz and in the 1950s by Yehoshua Bar-Hillel and Joachim Lambek. It saw a surge of interest in the 1970s following the work of Richard Montague, whose Montague grammar assumed a similar view of syntax. It continues to be a major paradigm, particularly within formal semantics. Basics. A categorial grammar consists of two parts: a lexicon, which assigns a set of types (also called categories) to each basic symbol, and some type inference rules, which determine how the type of a string of symbols follows from the types of the constituent symbols. It has the advantage that the type inference rules can be fixed once and for all, so that the specification of a particular language grammar is entirely determined by the lexicon. A categorial grammar shares some features with the simply typed lambda calculus. Whereas the lambda calculus has only one function type formula_0, a categorial grammar typically has two function types, one type that is applied on the left, and one on the right. For example, a simple categorial grammar might have two function types formula_1 and formula_2. The first, formula_1, is the type of a phrase that results in a phrase of type formula_3 when followed (on the right) by a phrase of type formula_4. The second, formula_5, is the type of a phrase that results in a phrase of type formula_3 when preceded (on the left) by a phrase of type formula_4. The notation is based upon algebra. A fraction when multiplied by (i.e. concatenated with) its denominator yields its numerator. As concatenation is not commutative, it makes a difference whether the denominator occurs to the left or right. The concatenation must be on the same side as the denominator for it to cancel out. The first and simplest kind of categorial grammar is called a basic categorial grammar, or sometimes an AB-grammar (after Ajdukiewicz and Bar-Hillel). Given a set of primitive types formula_6, let formula_7 be the set of types constructed from primitive types. In the basic case, this is the least set such that formula_8 and if formula_9 then formula_10. Think of these as purely formal expressions freely generated from the primitive types; any semantics will be added later. Some authors assume a fixed infinite set of primitive types used by all grammars, but by making the primitive types part of the grammar, the whole construction is kept finite. A basic categorial grammar is a tuple formula_11 where formula_12 is a finite set of symbols, formula_6 is a finite set of primitive types, and formula_13. The relation formula_14 is the lexicon, which relates types to symbols formula_15. Since the lexicon is finite, it can be specified by listing a set of pairs like formula_16. Such a grammar for English might have three basic types formula_17, assigning count nouns the type formula_18, complete noun phrases the type formula_19, and sentences the type formula_20. Then an adjective could have the type formula_21, because if it is followed by a noun then the whole phrase is a noun. 
Similarly, a determiner has the type formula_22, because it forms a complete noun phrase when followed by a noun. Intransitive verbs have the type formula_23, and transitive verbs the type formula_24. Then a string of words is a sentence if it has overall type formula_20. For example, take the string "the bad boy made that mess". Now "the" and "that" are determiners, "boy" and "mess" are nouns, "bad" is an adjective, and "made" is a transitive verb, so the lexicon is {formula_25, formula_26, formula_27, formula_28, formula_29, formula_30}, and the sequence of types in the string is formula_31 Now find functions and appropriate arguments and reduce them according to the two inference rules formula_32 and formula_33: formula_34&lt;br&gt; formula_35&lt;br&gt; formula_36&lt;br&gt; formula_37&lt;br&gt; formula_38&lt;br&gt; formula_39 The fact that the result is formula_20 means that the string is a sentence, while the sequence of reductions shows that it can be parsed as ((the (bad boy)) (made (that mess))). Categorial grammars of this form (having only function application rules) are equivalent in generative capacity to context-free grammars and are thus often considered inadequate for theories of natural language syntax. Unlike CFGs, categorial grammars are lexicalized, meaning that only a small number of (mostly language-independent) rules are employed, and all other syntactic phenomena derive from the lexical entries of specific words. Another appealing aspect of categorial grammars is that it is often easy to assign them a compositional semantics, by first assigning interpretation types to all the basic categories, and then associating all the derived categories with appropriate function types. The interpretation of any constituent is then simply the value of a function at an argument. With some modifications to handle intensionality and quantification, this approach can be used to cover a wide variety of semantic phenomena. Lambek calculus. A Lambek grammar is an elaboration of this idea that has a concatenation operator for types, and several other inference rules. Mati Pentus has shown that these still have the generative capacity of context-free grammars. For the Lambek calculus, there is a type concatenation operator formula_40, so that formula_8 and if formula_9 then formula_41. The Lambek calculus consists of several deduction rules, which specify how type inclusion assertions can be derived. In the following rules, upper case roman letters stand for types, upper case Greek letters stand for sequences of types. A sequent of the form formula_42 can be read: a string is of type X if it consists of the concatenation of strings of each of the types in Γ. If a type is interpreted as a set of strings, then the ← may be interpreted as ⊇, that is, "includes as a subset". A horizontal line means that the inclusion above the line implies the one below the line. The process is begun by the Axiom rule, which has no antecedents and just says that any type includes itself. formula_43 The Cut rule says that inclusions can be composed. formula_44 The other rules come in pairs, one pair for each type construction operator, each pair consisting of one rule for the operator in the target, one in the source, of the arrow. The name of a rule consists of the operator and an arrow, with the operator on the side of the arrow on which it occurs in the conclusion. For an example, here is a derivation of "type raising", which says that formula_45. The names of rules and the substitutions used are to the right.
formula_46 Relation to context-free grammars. Recall that a context-free grammar is a 4-tuple formula_47 where From the point of view of categorial grammars, a context-free grammar can be seen as a calculus with a set of special purpose axioms for each language, but with no type construction operators and no inference rules except Cut. Specifically, given a context-free grammar as above, define a categorial grammar formula_53 where formula_54, and formula_55. Let there be an axiom formula_56 for every symbol formula_57, an axiom formula_58 for every production rule formula_59, a lexicon entry formula_60 for every terminal symbol formula_61, and Cut for the only rule. This categorial grammar generates the same language as the given CFG. Of course, this is not a basic categorial grammar, since it has special axioms that depend upon the language; i.e. it is not lexicalized. Also, it makes no use at all of non-primitive types. To show that any context-free language can be generated by a basic categorial grammar, recall that any context-free language can be generated by a context-free grammar in Greibach normal form. The grammar is in Greibach normal form if every production rule is of the form formula_62, where capital letters are variables, formula_61, and formula_63, that is, the right side of the production is a single terminal symbol followed by zero or more (non-terminal) variables. Now given a CFG in Greibach normal form, define a basic categorial grammar with a primitive type for each non-terminal variable formula_64, and with an entry in the lexicon formula_65, for each production rule formula_62. It is fairly easy to see that this basic categorial grammar generates the same language as the original CFG. Note that the lexicon of this grammar will generally assign multiple types to each symbol. The same construction works for Lambek grammars, since they are an extension of basic categorial grammars. It is necessary to verify that the extra inference rules do not change the generated language. This can be done and shows that every context-free language is generated by some Lambek grammar. To show the converse, that every language generated by a Lambek grammar is context-free, is much more difficult. It was an open problem for nearly thirty years, from the early 1960s until about 1991 when it was proven by Pentus. The basic idea is, given a Lambek grammar, formula_53 construct a context-free grammar formula_66 with the same set of terminal symbols, the same start symbol, with variables some (not all) types formula_67, and with a production rule formula_68 for each entry formula_69 in the lexicon, and production rules formula_70 for certain sequents formula_71 that are derivable in the Lambek calculus. Of course, there are infinitely many types and infinitely many derivable sequents, so in order to make a finite grammar it is necessary put a bound on the size of the types and sequents that are needed. The heart of Pentus's proof is to show that there is such a finite bound. Notation. The notation in this field is not standardized. The notations used in formal language theory, logic, category theory, and linguistics, conflict with each other. In logic, arrows point to the more general from the more particular, that is, to the conclusion from the hypotheses. In this article, this convention is followed, i.e. the target of the arrow is the more general (inclusive) type. In logic, arrows usually point left to right. 
In this article this convention is reversed for consistency with the notation of context-free grammars, where the single non-terminal symbol is always on the left. We use the symbol formula_72 in a production rule as in Backus–Naur form. Some authors use an arrow, which unfortunately may point in either direction, depending on whether the grammar is thought of as generating or recognizing the language. Some authors on categorial grammars write formula_73 instead of formula_2. The convention used here follows Lambek and algebra. Historical notes. The basic ideas of categorial grammar date from work by Kazimierz Ajdukiewicz (in 1935) and other scholars from the Polish tradition of mathematical logic including Stanisław Leśniewski, Emil Post and Alfred Tarski. Ajdukiewicz's formal approach to syntax was influenced by Edmund Husserl's pure logical grammar, which was formalized by Rudolph Carnap. It represents a development in the historical idea of universal logical grammar as an underlying structure of all languages. A core concept of the approach is the substitutability of syntactic categories—hence the name categorial grammar. The membership of an element (e.g., word or phrase) in a syntactic category (word class, phrase type) is established by the commutation test, and the formal grammar is constructed through series of such tests. The term categorial grammar was coined by Yehoshua Bar-Hillel (in 1953). In 1958, Joachim Lambek introduced a syntactic calculus that formalized the function type constructors along with various rules for the combination of functions. This calculus is a forerunner of linear logic in that it is a substructural logic. Montague grammar is based on the same principles as categorial grammar. Montague's work helped to bolster interest in categorial grammar by associating it with his highly successful formal treatment of natural language semantics. Later work in categorial grammar has focused on the improvement of syntactic coverage. One formalism that has received considerable attention in recent years is Steedman and Szabolcsi's combinatory categorial grammar, which builds on combinatory logic invented by Moses Schönfinkel and Haskell Curry. There are a number of related formalisms of this kind in linguistics, such as type logical grammar and abstract categorial grammar. Refinements of categorical grammar. A variety of changes to categorial grammar have been proposed to improve syntactic coverage. Some of the most common are listed below. Features and subcategories. Most systems of categorial grammar subdivide categories. The most common way to do this is by tagging them with features, such as person, gender, number, and tense. Sometimes only atomic categories are tagged in this way. In Montague grammar, it is traditional to subdivide function categories using a multiple slash convention, so "A/B" and "A//B" would be two distinct categories of left-applying functions, that took the same arguments but could be distinguished between by other functions taking them as arguments. Function composition. Rules of function composition are included in many categorial grammars. An example of such a rule would be one that allowed the concatenation of a constituent of type "A/B" with one of type "B/C" to produce a new constituent of type "A/C". The semantics of such a rule would simply involve the composition of the functions involved. 
Function composition is important in categorial accounts of conjunction and extraction, especially as they relate to phenomena like right node raising. The introduction of function composition into a categorial grammar leads to many kinds of derivational ambiguity that are vacuous in the sense that they do not correspond to semantic ambiguities. Conjunction. Many categorial grammars include a typical conjunction rule, of the general form "X CONJ X → X", where "X" is a category. Conjunction can generally be applied to nonstandard constituents resulting from type raising or function composition. Discontinuity. The grammar is extended to handle linguistic phenomena such as discontinuous idioms, gapping and extraction. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
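The basic (AB) reduction procedure from the worked example in the Basics section can be made concrete with a small CYK-style recognizer (a Python sketch; the tuple encoding of types and the helper names are choices of the illustration, not a standard implementation):

```python
# Types are atomic strings or triples: ('/', X, Y) encodes X/Y (combines with a Y
# on its right to give X); ('\\', Y, X) encodes Y\X (combines with a Y on its left).

def combine(left, right):
    out = set()
    for a in left:
        for b in right:
            if isinstance(a, tuple) and a[0] == '/' and a[2] == b:
                out.add(a[1])                  # X/Y , Y  =>  X
            if isinstance(b, tuple) and b[0] == '\\' and b[1] == a:
                out.add(b[2])                  # Y , Y\X  =>  X
    return out

def parses_as(words, lexicon, goal='S'):
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexicon[w])
    for span in range(2, n + 1):               # CYK over all substrings
        for i in range(n - span + 1):
            for k in range(i + 1, i + span):
                chart[i][i + span] |= combine(chart[i][k], chart[k][i + span])
    return goal in chart[0][n]

NP_N = ('/', 'NP', 'N')                        # determiner
N_N = ('/', 'N', 'N')                          # adjective
VP = ('\\', 'NP', 'S')                         # intransitive verb / verb phrase
TV = ('/', VP, 'NP')                           # transitive verb
lexicon = {'the': [NP_N], 'that': [NP_N], 'boy': ['N'], 'mess': ['N'],
           'bad': [N_N], 'made': [TV]}

print(parses_as('the bad boy made that mess'.split(), lexicon))   # True
```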
[ { "math_id": 0, "text": "A \\rightarrow B" }, { "math_id": 1, "text": "B/A\\,\\!" }, { "math_id": 2, "text": "A\\backslash B" }, { "math_id": 3, "text": "B\\,\\!" }, { "math_id": 4, "text": "A\\,\\!" }, { "math_id": 5, "text": "A\\backslash B\\,\\!" }, { "math_id": 6, "text": "\\text{Prim}\\,\\!" }, { "math_id": 7, "text": "\\text{Tp}(\\text{Prim})\\,\\!" }, { "math_id": 8, "text": "\\text{Prim}\\subseteq \\text{Tp}(\\text{Prim})" }, { "math_id": 9, "text": "X, Y\\in \\text{Tp}(\\text{Prim})" }, { "math_id": 10, "text": "(X/Y), (Y\\backslash X) \\in \\text{Tp}(\\text{Prim})" }, { "math_id": 11, "text": "(\\Sigma, \\text{Prim}, S, \\triangleleft)" }, { "math_id": 12, "text": "\\Sigma\\,\\!" }, { "math_id": 13, "text": "S \\in \\text{Tp}(\\text{Prim})" }, { "math_id": 14, "text": "\\triangleleft" }, { "math_id": 15, "text": "(\\triangleleft) \\subseteq \\text{Tp}(\\text{Prim}) \\times \\Sigma" }, { "math_id": 16, "text": "TYPE\\triangleleft\\text{symbol}" }, { "math_id": 17, "text": "(N,NP, \\text{ and } S)\\,\\!" }, { "math_id": 18, "text": "N\\,\\!" }, { "math_id": 19, "text": "NP\\,\\!" }, { "math_id": 20, "text": "S\\,\\!" }, { "math_id": 21, "text": "N/N\\,\\!" }, { "math_id": 22, "text": "NP/N\\,\\!" }, { "math_id": 23, "text": "NP\\backslash S" }, { "math_id": 24, "text": "(NP\\backslash S)/NP" }, { "math_id": 25, "text": "NP/N\\triangleleft\\text{the}" }, { "math_id": 26, "text": "NP/N\\triangleleft\\text{that}" }, { "math_id": 27, "text": "N\\triangleleft\\text{boy}" }, { "math_id": 28, "text": "N\\triangleleft\\text{mess}" }, { "math_id": 29, "text": "N/N\\triangleleft\\text{bad}" }, { "math_id": 30, "text": "(NP\\backslash S)/NP\\triangleleft\\text{made}" }, { "math_id": 31, "text": "\n{\\text{the}\\atop {NP/N,}}\n{\\text{bad}\\atop {N/N,}}\n{\\text{boy}\\atop {N,}}\n{\\text{made}\\atop {(NP\\backslash S)/NP,}}\n{\\text{that}\\atop {NP/N,}}\n{\\text{mess}\\atop {N}}\n" }, { "math_id": 32, "text": " X\\leftarrow X/Y,\\; Y" }, { "math_id": 33, "text": " X\\leftarrow Y,\\; Y\\backslash X" }, { "math_id": 34, "text": ".\\qquad NP/N,\\; N/N,\\; N,\\; (NP\\backslash S)/NP,\\; \\underbrace{NP/N,\\; N}" }, { "math_id": 35, "text": ".\\qquad NP/N,\\; N/N,\\; N,\\; \\underbrace{(NP\\backslash S)/NP, \\quad NP}" }, { "math_id": 36, "text": ".\\qquad NP/N,\\; \\underbrace{N/N,\\; N}, \\qquad (NP\\backslash S)" }, { "math_id": 37, "text": ".\\qquad \\underbrace{NP/N,\\; \\quad N},\\; \\qquad (NP\\backslash S)" }, { "math_id": 38, "text": ".\\qquad \\qquad\\underbrace{NP,\\; \\qquad (NP\\backslash S)}" }, { "math_id": 39, "text": ".\\qquad \\qquad\\qquad\\quad\\;\\;\\; S" }, { "math_id": 40, "text": "\\star" }, { "math_id": 41, "text": "(X/Y), (X\\backslash Y), (X\\star Y)\\in \\text{Tp}(\\text{Prim})" }, { "math_id": 42, "text": " X \\leftarrow \\Gamma " }, { "math_id": 43, "text": "\n\\text{(Axiom)}\\quad\n{{}\\over X \\leftarrow X}\n" }, { "math_id": 44, "text": "\n\\text{(Cut)} \\quad\n{Z \\leftarrow \\Delta X \\Delta' \\qquad X \\leftarrow \\Gamma\n \\over\n Z \\leftarrow \\Delta \\Gamma \\Delta'}\n" }, { "math_id": 45, "text": "(B/A)\\backslash B \\leftarrow A" }, { "math_id": 46, "text": " \n\\dfrac {\\dfrac{}{B \\leftarrow B} \\qquad \\dfrac{}{A \\leftarrow A} }\n {\\dfrac {B \\leftarrow (B/A), \\;\\; A} \n {(B/A)\\backslash B \\leftarrow A} }\n\\qquad\n\\begin{matrix}\n \\mbox{(Axioms)}\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad{ }\\\\\n {(\\leftarrow/)\\,\\,[Z=Y=B,X=A,\\Gamma=(A),\\Delta=\\Delta'=()]}\\\\\n 
{(\\backslash\\leftarrow)\\,\\,[Y=B,X=(B/A),\\Gamma=(A)]}\\qquad\\qquad\\qquad{ }\\\\\n\\end{matrix}\n" }, { "math_id": 47, "text": "G = (V,\\, \\Sigma,\\, ::=,\\, S)" }, { "math_id": 48, "text": "V\\, " }, { "math_id": 49, "text": "\\Sigma\\," }, { "math_id": 50, "text": "::=\\," }, { "math_id": 51, "text": "(::=)\\subseteq V \\times (V \\cup \\Sigma)^*" }, { "math_id": 52, "text": "S\\," }, { "math_id": 53, "text": "(\\text{Prim},\\, \\Sigma,\\, \\triangleleft,\\, S)" }, { "math_id": 54, "text": "\\text{Prim}=V\\cup\\Sigma" }, { "math_id": 55, "text": "\\text{Tp}(\\text{Prim})=\\text{Prim}\\,\\!" }, { "math_id": 56, "text": "{x \\leftarrow x}" }, { "math_id": 57, "text": "x \\in V\\cup\\Sigma" }, { "math_id": 58, "text": "{X \\leftarrow \\Gamma}" }, { "math_id": 59, "text": "X ::= \\Gamma\\,\\!" }, { "math_id": 60, "text": "{s \\triangleleft s}" }, { "math_id": 61, "text": "s \\in \\Sigma" }, { "math_id": 62, "text": " A ::= s A_0 \\ldots A_{N-1}" }, { "math_id": 63, "text": "N\\ge 0" }, { "math_id": 64, "text": "\\text{Prim}=V\\,\\!" }, { "math_id": 65, "text": " A/A_{N-1}/ \\ldots /A_0 \\triangleleft s " }, { "math_id": 66, "text": "(V,\\, \\Sigma,\\, ::=,\\, S)" }, { "math_id": 67, "text": "V\\subseteq \\text{Tp}(\\text{Prim})\\,\\!" }, { "math_id": 68, "text": "T::=\\text{s}\\,\\!" }, { "math_id": 69, "text": "T\\triangleleft\\text{s}" }, { "math_id": 70, "text": "T::=\\Gamma\\,\\!" }, { "math_id": 71, "text": "T\\leftarrow\\Gamma" }, { "math_id": 72, "text": "::=" }, { "math_id": 73, "text": "B\\backslash A" } ]
https://en.wikipedia.org/wiki?curid=882427
88254
Thomas Johann Seebeck
German physicist (1770–1831) Thomas Johann Seebeck (; 9 April 1770 – 10 December 1831) was a German physicist, who, in 1822, observed a relationship between heat and magnetism. Later, in 1823, Hans Christian Ørsted called this phenomenon the thermoelectric effect. Seebeck was born in Reval (today Tallinn) to a wealthy Baltic German merchant family. He received a medical degree in 1802 from the University of Göttingen, but preferred to study physics. From 1821 to 1823, Seebeck performed a series of experiments trying to understand Ørsted's findings from 1820. During his experiments, he observed that a junction of dissimilar metals produces a deflection of a magnetic needle (compass) when exposed to a temperature gradient. Because Ørsted had discovered that an electric current deflects a compass needle transversally to the wire, Seebeck's results were interpreted as a thermoelectric effect. This is now called the Peltier–Seebeck effect and is the basis of thermocouples and thermopiles. Seebeck effect. In 1822, after previous experiments on voltaic current and magnetism, Thomas Johann Seebeck found that a circuit made from two dissimilar metals with junctions at different temperatures would deflect a compass magnet. Seebeck believed this was due to magnetism induced by the temperature difference. Based on this result, Seebeck elaborated a table relating different metal junctions and the deflection of the compass. His main conclusion at the end of these experiments concerned the influence of metals and volcanoes on terrestrial magnetism. However, during the 1820s, there were at least two different explanations of the relationship between electricity and magnetism. One of them was related to the belief in the polarity of Nature (Naturphilosophie); the other followed Newton's concept of force. Ørsted, Seebeck, Ritter and some German chemists and physicists believed in this polarity and looked for a relationship among different forces of Nature, like electricity, magnetism, heat, light and chemical reactions. Following Newton's concept of force were André-Marie Ampère and some French physicists. Ørsted interpreted Seebeck's experiment as supporting a relationship between electricity, magnetism and heat. After the discovery of the electron and its fundamental charge, it was quickly realized that Seebeck's effect is due to an induced electric current, which deflects the magnet according to Ampère's law. More specifically, the temperature difference produces an electric potential (voltage) which can drive an electric current in a closed circuit. Today, this effect is known as the Peltier–Seebeck effect. The voltage produced is proportional to the temperature difference between the two junctions. The proportionality constant (a) is known as the Seebeck coefficient, and is often referred to as the thermoelectric power or thermopower. The Seebeck voltage does not depend on the distribution of temperature along the metals between the junctions. This effect is the physical basis for a thermocouple, which is often used for temperature measurement. formula_0 The voltage difference, "V", produced across the terminals of an open circuit made from a pair of dissimilar metals, A and B, whose two junctions are held at different temperatures, is directly proportional to the difference between the hot and cold junction temperatures, "T"h − "T"c.
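A minimal numerical sketch of this linear relation follows (it is not part of the original article; the coefficient used is only an assumed, order-of-magnitude value of the kind quoted for a chromel-alumel pair, and real thermocouples are characterised by tabulated, non-linear reference functions).

```python
# Minimal sketch of the linear Seebeck relation V = a * (T_h - T_c).
# The coefficient below is an assumed placeholder value, roughly of the
# magnitude quoted for a type K (chromel-alumel) thermocouple.
def seebeck_voltage(a_volts_per_kelvin, t_hot_kelvin, t_cold_kelvin):
    """Open-circuit voltage of a two-metal junction pair, linear model."""
    return a_volts_per_kelvin * (t_hot_kelvin - t_cold_kelvin)

a = 41e-6  # assumed Seebeck coefficient, volts per kelvin
print(seebeck_voltage(a, t_hot_kelvin=373.15, t_cold_kelvin=293.15))
# -> about 3.3e-3 V for an 80 K temperature difference
```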
The voltage or current produced across the junctions of two different metals is caused by the diffusion of electrons from a high electron density region to a low electron density region, as the density of electrons differs between the two metals. The conventional current flows in the opposite direction. If both junctions are kept at the same temperature, electrons diffuse at the same rate at both of them; the currents at the two junctions are then equal and opposite, and the net current is zero. If the junctions are kept at different temperatures, the diffusion rates at the two junctions differ, the two currents no longer cancel, and the net current is non-zero. This phenomenon is known as thermoelectricity. Precursors to color photography. In 1810, at Jena, Seebeck described the action of light on silver chloride sensitised paper (a technique used by Johann Ritter). He observed that the exposed chemical would sometimes take on an approximate, pale version of the color of the solar spectrum as projected from a prism to which it had been exposed, and also reported the action of light for wavelengths beyond the violet end of the spectrum. Seebeck reported that violet light produced red-brown; blue appeared in the blue segment and spread into the green; yellow light gave black or yellowish tones; and red produced rose red or hortensia red. The experiment could not be preserved because he could not fix the silver chloride to prevent its further reaction to light, though Hannavy reports that "in a spectrum attributed to Seebeck in a private collection the purple and violet currently remain visible," albeit weakly. He corresponded with J. W. Goethe, who was writing on the "Theory of Colours" (Zur Farbenlehre) and who included Seebeck's discovery as an appendix. Other achievements. In 1808, Seebeck was the first to produce and describe the amalgam of potassium. In 1810, he observed the magnetic properties of nickel and cobalt. In 1818, Seebeck discovered the optical activity of sugar solutions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V = a(T_h - T_c)\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=88254
882686
Frobenius–Schur indicator
In mathematics, and especially the discipline of representation theory, the Schur indicator, named after Issai Schur, or Frobenius–Schur indicator describes what invariant bilinear forms a given irreducible representation of a compact group on a complex vector space has. It can be used to classify the irreducible representations of compact groups on real vector spaces. Definition. If a finite-dimensional continuous representation of a compact group "G" has character χ its Frobenius–Schur indicator is defined to be formula_0 for Haar measure μ with μ("G") = 1. When "G" is finite it is given by formula_1 If χ is a complex irreducible representation, then its Frobenius–Schur indicator is 1, 0, or -1. It provides a criterion for deciding whether an irreducible representation of "G" is real, complex or quaternionic, in a specific sense defined below. Much of the content below discusses the case of finite groups, but the general compact case is analogous. Real irreducible representations. There are "three types" of irreducible real representations of a finite group on a real vector space "V", as Schur's lemma implies that the endomorphism ring commuting with the group action is a real associative division algebra and by the Frobenius theorem can only be isomorphic to either the real numbers, or the complex numbers, or the quaternions. Moreover, every irreducible representation on a complex vector space can be constructed from a unique irreducible representation on a real vector space in a way corresponding to one of these three cases. So knowing the irreducible representations on complex spaces and their Schur indicators allows one to read off the irreducible representations on real spaces. Real representations can be complexified to get a complex representation of the same dimension, and complex representations can be converted into real representations of twice the dimension by treating the real and imaginary components separately. Also, every finite-dimensional complex representation can be turned into a unitary representation, and for unitary representations the dual representation is also the (complex) conjugate representation, because the Hilbert space inner product gives an antilinear bijective map from the representation to its dual representation. Self-dual complex irreducible representations correspond either to real irreducible representations of the same dimension or to real irreducible representations of twice the dimension called quaternionic representations (but not both), while non-self-dual complex irreducible representations correspond to real irreducible representations of twice the dimension. Note that in the latter case, both the complex irreducible representation and its dual give rise to the same real irreducible representation. An example of a quaternionic representation would be the four-dimensional real irreducible representation of the quaternion group "Q"8. Definition in terms of the symmetric and alternating square. If V is the underlying vector space of a representation of a group G, then the tensor product representation formula_2 can be decomposed as the direct sum of two subrepresentations, the symmetric square, denoted formula_3 (also often denoted by formula_4 or formula_5) and the alternating square, formula_6 (also often denoted by formula_7, formula_8, or formula_9). In terms of these square representations, the indicator has the following alternate definition: formula_10 where formula_11 is the trivial representation.
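As a concrete illustration of the character-sum definition above, the following numerical sketch (not part of the article; the matrices are one standard realisation of the two-dimensional irreducible representation of the quaternion group "Q"8 mentioned earlier) evaluates the indicator as an average of χ("g"2) over the group and recovers the value −1 expected for a quaternionic-type representation.

```python
# Sketch: Frobenius-Schur indicator (1/|G|) * sum_g chi(g^2) for the
# 2-dimensional irreducible representation of the quaternion group Q8,
# realised by the unit quaternions +-1, +-i, +-j, +-k as 2x2 complex
# matrices; the character chi is the matrix trace.
import numpy as np

one = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j  # equals [[0, 1j], [1j, 0]]

Q8 = [s * g for s in (1, -1) for g in (one, i, j, k)]  # all 8 group elements

indicator = sum(np.trace(g @ g) for g in Q8).real / len(Q8)
print(indicator)  # -1.0, the quaternionic case
```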
To see why the two definitions agree, note that the term formula_12 naturally arises in the characters of these representations; to wit, we have formula_13 and formula_14. Substituting either of these formulae, the Frobenius–Schur indicator takes on the structure of the natural G-invariant inner product on class functions: formula_15 The inner product counts the multiplicities of direct summands; the equivalence of the definitions then follows immediately. Applications. Let V be an irreducible complex representation of a group G (or equivalently, an irreducible formula_16-module, where formula_16 denotes the group ring). Then V admits a nonzero G-invariant bilinear form if and only if formula_17; such a form is symmetric precisely when formula_18 and skew-symmetric precisely when formula_19. The above is a consequence of the universal properties of the symmetric algebra and exterior algebra, which are the underlying vector spaces of the symmetric and alternating square. Additionally, formula_18 holds if and only if V can be realized over formula_22 (V is of real type), formula_20 holds if and only if the character formula_21 is not real-valued (complex type), and formula_19 holds if and only if formula_21 is real-valued but V cannot be realized over formula_22 (quaternionic type). Higher Frobenius-Schur indicators. Just as formula_23 is a self-intertwiner for any complex representation ρ, so formula_24 is also a self-intertwiner for any integer "n". By Schur's lemma, this will be a multiple of the identity for irreducible representations. The trace of this self-intertwiner is called the nth "Frobenius-Schur indicator". The original case of the Frobenius–Schur indicator is that for "n" = 2. The zeroth indicator is the dimension of the irreducible representation, the first indicator is 1 for the trivial representation and zero for the other irreducible representations. It resembles the Casimir invariants for Lie algebra irreducible representations. In fact, since any representation of G can be thought of as a module for C["G"] and vice versa, we can look at the center of C["G"]. This is analogous to looking at the center of the universal enveloping algebra of a Lie algebra. It is simple to check that formula_25 belongs to the center of C["G"], which is simply the subspace of class functions on "G". References. &lt;templatestyles src="Refbegin/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\int_{g\\in G}\\chi(g^2)\\,d\\mu" }, { "math_id": 1, "text": "{1\\over |G|}\\sum_{g\\in G}\\chi(g^2)." }, { "math_id": 2, "text": "V\\otimes V" }, { "math_id": 3, "text": "\\operatorname{Sym}^2(V)" }, { "math_id": 4, "text": "V\\otimes_S V" }, { "math_id": 5, "text": "V\\odot V" }, { "math_id": 6, "text": "\\operatorname{Alt}^2(V) " }, { "math_id": 7, "text": "\\wedge^2V" }, { "math_id": 8, "text": "V\\otimes_A V" }, { "math_id": 9, "text": "V \\wedge V" }, { "math_id": 10, "text": "\\iota\\chi_V=\\begin{cases}\n 1 &\\text{if }W_{\\text{triv}}\\text{ is a subrepresentation of }\\operatorname{Sym}^2(V) \\\\\n -1 &\\text{if }W_{\\text{triv}}\\text{ is a subrepresentation of }\\operatorname{Alt}^2(V) \\\\\n 0 &\\text{otherwise}\n\\end{cases}" }, { "math_id": 11, "text": "W_{\\text{triv}}" }, { "math_id": 12, "text": "\\chi(g^2)" }, { "math_id": 13, "text": "\\chi_V(g^2)=\\chi_V(g)^2-2\\chi_{\\wedge^2V}(g)" }, { "math_id": 14, "text": "\\chi_V(g^2)=2\\chi_{\\operatorname{Sym}^2(V)}(g)-\\chi_V(g)^2" }, { "math_id": 15, "text": "\\iota\\chi_V =\n\\begin{cases}\n 1 &\\langle\\chi_{\\text{triv}},\\chi_{\\operatorname{Sym}^2(V)}\\rangle=1 \\\\\n -1 &\\langle\\chi_{\\text{triv}},\\chi_{\\operatorname{Alt}^2(V)}\\rangle=1 \\\\\n 0 &\\text{otherwise} \\\\\n\\end{cases}" }, { "math_id": 16, "text": "\\mathbb{C}[G]" }, { "math_id": 17, "text": "\\iota\\chi\\neq 0" }, { "math_id": 18, "text": "\\iota\\chi=1" }, { "math_id": 19, "text": "\\iota\\chi=-1" }, { "math_id": 20, "text": "\\iota\\chi=0" }, { "math_id": 21, "text": "\\chi" }, { "math_id": 22, "text": "\\mathbb{R}" }, { "math_id": 23, "text": "\\frac{1}{|G|}\\sum_{g\\in G}\\rho(g)" }, { "math_id": 24, "text": "\\frac{1}{|G|}\\sum_{g\\in G}\\rho(g^n)" }, { "math_id": 25, "text": "\\sum_{g\\in G}g^n" } ]
https://en.wikipedia.org/wiki?curid=882686
882793
Poisson superalgebra
Z2-graded generalization of a Poisson algebra In mathematics, a Poisson superalgebra is a Z2-graded generalization of a Poisson algebra. Specifically, a Poisson superalgebra is an (associative) superalgebra "A" together with a second product, a Lie superbracket formula_0 such that ("A", [·,·]) is a Lie superalgebra and the operator formula_1 is a superderivation of "A": formula_2 Here, formula_3 is the grading of a (pure) element formula_4. A supercommutative Poisson algebra is one for which the (associative) product is supercommutative. This is one of two possible ways of "super"izing the Poisson algebra. This gives the classical dynamics of fermion fields and classical spin-1/2 particles. The other way is to define an antibracket algebra or Gerstenhaber algebra, used in the BRST and Batalin-Vilkovisky formalism. The difference between these two is in the grading of the Lie bracket. In the Poisson superalgebra, the grading of the bracket is zero: formula_5 whereas in the Gerstenhaber algebra, the bracket decreases the grading by one: formula_6 Examples. If formula_7 is any associative superalgebra, then defining the supercommutator formula_8 by formula_9 for pure elements turns formula_7 into a Poisson superalgebra.
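A small numerical sanity check of the superderivation property is given below (an illustrative sketch, not from the article; the grading used is the usual one on 2×2 matrices, with diagonal matrix units even and off-diagonal ones odd).

```python
# Sketch: verify the super-Leibniz rule  [x, yz] = [x,y] z + (-1)^{|x||y|} y [x,z]
# for the supercommutator  [x,y] = xy - (-1)^{|x||y|} yx  on the superalgebra of
# 2x2 matrices, graded so that diagonal matrix units have degree 0 and
# off-diagonal ones have degree 1.
import numpy as np
from itertools import product

def unit(r, c):
    m = np.zeros((2, 2))
    m[r, c] = 1.0
    return m

# homogeneous basis elements: (matrix, Z2-degree)
basis = [(unit(0, 0), 0), (unit(1, 1), 0), (unit(0, 1), 1), (unit(1, 0), 1)]

def sbracket(x, dx, y, dy):
    return x @ y - (-1) ** (dx * dy) * y @ x

ok = True
for (x, dx), (y, dy), (z, dz) in product(basis, repeat=3):
    lhs = sbracket(x, dx, y @ z, (dy + dz) % 2)
    rhs = sbracket(x, dx, y, dy) @ z + (-1) ** (dx * dy) * y @ sbracket(x, dx, z, dz)
    ok = ok and np.allclose(lhs, rhs)
print(ok)  # True: [x, .] acts as a superderivation on homogeneous elements
```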
[ { "math_id": 0, "text": "[\\cdot,\\cdot] : A\\otimes A\\to A" }, { "math_id": 1, "text": "[x,\\cdot] : A\\to A" }, { "math_id": 2, "text": "[x,yz] = [x,y]z + (-1)^{|x||y|}y[x,z]." }, { "math_id": 3, "text": "|a|=\\deg a" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "|[a,b]| = |a|+|b|" }, { "math_id": 6, "text": "|[a,b]| = |a|+|b| - 1" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "[\\cdot,\\cdot]" }, { "math_id": 9, "text": "[x,y]:=xy-(-1)^{|x||y|}yx" } ]
https://en.wikipedia.org/wiki?curid=882793
882902
Poisson manifold
Mathematical structure in differential geometry In differential geometry, a field in mathematics, a Poisson manifold is a smooth manifold endowed with a Poisson structure. The notion of Poisson manifold generalises that of symplectic manifold, which in turn generalises the phase space from Hamiltonian mechanics. A Poisson structure (or Poisson bracket) on a smooth manifold formula_0 is a function formula_1 on the vector space formula_2 of smooth functions on formula_0, making it into a Lie algebra subject to a Leibniz rule (also known as a Poisson algebra). Poisson structures on manifolds were introduced by André Lichnerowicz in 1977 and are named after the French mathematician Siméon Denis Poisson, due to their early appearance in his works on analytical mechanics. A Poisson structure on a manifold formula_3 gives a way of deforming the product of functions on formula_3 to a new product that is typically not commutative. This process is known as deformation quantization, since classical mechanics can be based on Poisson structures, while quantum mechanics involves non-commutative rings. Introduction. From phase spaces of classical mechanics to symplectic and Poisson manifolds. In classical mechanics, the phase space of a physical system consists of all the possible values of the position and of the momentum variables allowed by the system. It is naturally endowed with a Poisson bracket/symplectic form (see below), which allows one to formulate the Hamilton equations and describe the dynamics of the system through the phase space in time. For instance, a single particle freely moving in the formula_4-dimensional Euclidean space (i.e. having formula_5 as configuration space) has phase space formula_6. The coordinates formula_7 describe respectively the positions and the generalised momenta. The space of observables, i.e. the smooth functions on formula_6, is naturally endowed with a binary operation called the Poisson bracket, defined as formula_8 Such a bracket satisfies the standard properties of a Lie bracket, plus a further compatibility with the product of functions, namely the Leibniz identity formula_9. Equivalently, the Poisson bracket on formula_6 can be reformulated using the symplectic form formula_10 Indeed, if one considers the Hamiltonian vector field formula_11 associated to a function formula_12, then the Poisson bracket can be rewritten as formula_13 A standard example of a symplectic manifold, and thus of a Poisson manifold, is the cotangent bundle formula_14 of any finite-dimensional smooth manifold formula_15 The coordinates on formula_16 are interpreted as particle positions; the cotangent space at each point forms the space of (canonically) conjugate momenta. If formula_16 is formula_4-dimensional, formula_14 is a smooth manifold of dimension formula_17 it can be regarded as the associated phase space. The cotangent bundle is naturally equipped with a canonical symplectic form, which, in canonical coordinates, coincides with the one described above. In general, by the Darboux theorem, any symplectic manifold formula_18 admits special coordinates where the form formula_19 and the bracket formula_20 coincide with, respectively, the symplectic form and the Poisson bracket of formula_6. Symplectic geometry is therefore the natural mathematical setting to describe classical Hamiltonian mechanics. Poisson manifolds are further generalisations of symplectic manifolds, which arise by axiomatising the properties satisfied by the Poisson bracket on formula_21.
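A small symbolic computation (an illustrative sketch, not part of the article) makes the algebraic properties of this canonical bracket tangible in the one-degree-of-freedom case, using the same sign convention as the bracket defined above.

```python
# Sketch: the canonical Poisson bracket on R^2 with coordinates (q, p),
# following the convention {f, g} = df/dp dg/dq - df/dq dg/dp used above;
# the Leibniz and Jacobi identities are checked symbolically on test functions.
import sympy as sp

q, p = sp.symbols('q p')

def bracket(f, g):
    return sp.diff(f, p) * sp.diff(g, q) - sp.diff(f, q) * sp.diff(g, p)

f, g, h = q**2 * p, sp.sin(q) + p, q * p**3

# Leibniz identity: {f, g*h} = g*{f, h} + {f, g}*h
print(sp.simplify(bracket(f, g * h) - (g * bracket(f, h) + bracket(f, g) * h)))  # 0

# Jacobi identity: {f, {g, h}} + {g, {h, f}} + {h, {f, g}} = 0
jacobi = (bracket(f, bracket(g, h))
          + bracket(g, bracket(h, f))
          + bracket(h, bracket(f, g)))
print(sp.simplify(jacobi))  # 0
```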
More precisely, a Poisson manifold consists of a smooth manifold formula_3 (not necessarily of even dimension) together with an abstract bracket formula_22, still called Poisson bracket, which does not necessarily arise from a symplectic form formula_23, but satisfies the same algebraic properties. Poisson geometry is closely related to symplectic geometry: for instance every Poisson bracket determines a foliation of the manifold into symplectic submanifolds. However, the study of Poisson geometry requires techniques that are usually not employed in symplectic geometry, such as the theory of Lie groupoids and algebroids. Moreover, there are natural examples of structures which should be "morally" symplectic, but exhibit singularities, i.e. their "symplectic form" should be allowed to be degenerate. For example, the smooth quotient of a symplectic manifold by a group acting by symplectomorphisms is a Poisson manifold, which in general is not symplectic. This situation models the case of a physical system which is invariant under symmetries: the "reduced" phase space, obtained by quotienting the original phase space by the symmetries, is in general no longer symplectic, but is Poisson. History. Although the modern definition of Poisson manifold appeared only in the 1970s and 1980s, its origin dates back to the nineteenth century. Alan Weinstein summarized the early history of Poisson geometry as follows: "Poisson invented his brackets as a tool for classical dynamics. Jacobi realized the importance of these brackets and elucidated their algebraic properties, and Lie began the study of their geometry." Indeed, Siméon Denis Poisson introduced in 1809 what we now call the Poisson bracket in order to obtain new integrals of motion, i.e. quantities which are preserved throughout the motion. More precisely, he proved that, if two functions formula_12 and formula_24 are integrals of motion, then there is a third function, denoted by formula_25, which is an integral of motion as well. In the Hamiltonian formulation of mechanics, where the dynamics of a physical system is described by a given function formula_26 (usually the energy of the system), an integral of motion is simply a function formula_12 which Poisson-commutes with formula_26, i.e. such that formula_27. What would become known as Poisson's theorem can then be formulated as formula_28 Poisson's computations occupied many pages, and his results were rediscovered and simplified two decades later by Carl Gustav Jacob Jacobi. Jacobi was the first to identify the general properties of the Poisson bracket as a binary operation. Moreover, he established the relation between the (Poisson) bracket of two functions and the (Lie) bracket of their associated Hamiltonian vector fields, i.e. formula_29 in order to reformulate (and give a much shorter proof of) Poisson's theorem on integrals of motion. Jacobi's work on Poisson brackets influenced the pioneering studies of Sophus Lie on symmetries of differential equations, which led to the discovery of Lie groups and Lie algebras. For instance, what are now called linear Poisson structures (i.e. Poisson brackets on a vector space which send linear functions to linear functions) correspond precisely to Lie algebra structures. Moreover, the integrability of a linear Poisson structure (see below) is closely related to the integrability of its associated Lie algebra to a Lie group.
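To make the correspondence between linear Poisson brackets and Lie algebras concrete, here is a small symbolic sketch (not from the article; with the standard conventions, the Lie–Poisson bracket on the dual of the rotation algebra so(3), identified with R3, evaluates two functions at a point x as the scalar triple product of x with their gradients). The squared norm of x then Poisson-commutes with every function, so it is an integral of motion for every Hamiltonian, and the symplectic leaves are spheres centred at the origin.

```python
# Sketch: the Lie-Poisson bracket on so(3)* ~ R^3,
#   {f, g}(x) = x . (grad f  x  grad g),
# and a check that |x|^2 is a Casimir function (it Poisson-commutes with everything).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])

def lie_poisson(f, g):
    grad_f = sp.Matrix([sp.diff(f, v) for v in (x1, x2, x3)])
    grad_g = sp.Matrix([sp.diff(g, v) for v in (x1, x2, x3)])
    return sp.expand(X.dot(grad_f.cross(grad_g)))

casimir = x1**2 + x2**2 + x3**2
for f in (x1, x2 * x3, sp.sin(x1) + x2**2):
    print(sp.simplify(lie_poisson(casimir, f)))  # 0 in every case
```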
The twentieth century saw the development of modern differential geometry, but only in 1977 did André Lichnerowicz introduce Poisson structures as geometric objects on smooth manifolds. Poisson manifolds were further studied in the foundational 1983 paper of Alan Weinstein, where many basic structure theorems were first proved. These works exerted a huge influence in the subsequent decades on the development of Poisson geometry, which today is a field of its own, and at the same time is deeply entangled with many others, including non-commutative geometry, integrable systems, topological field theories and representation theory. Formal definition. There are two main points of view to define Poisson structures: it is customary and convenient to switch between them. As bracket. Let formula_0 be a smooth manifold and let formula_2 denote the real algebra of smooth real-valued functions on formula_0, where the multiplication is defined pointwise. A Poisson bracket (or Poisson structure) on formula_0 is an formula_30-bilinear map formula_31 defining a structure of Poisson algebra on formula_2, i.e. satisfying the following three conditions: skew-symmetry, formula_32; the Jacobi identity, formula_33; and the Leibniz rule, formula_34. The first two conditions ensure that formula_35 defines a Lie-algebra structure on formula_2, while the third guarantees that, for each formula_36, the linear map formula_37 is a derivation of the algebra formula_2, i.e., it defines a vector field formula_38 called the Hamiltonian vector field associated to formula_12. Choosing local coordinates formula_39, any Poisson bracket is given by formula_40 where formula_41 is the Poisson bracket of the coordinate functions. As bivector. A Poisson bivector on a smooth manifold formula_0 is a bivector field formula_42 satisfying the non-linear partial differential equation formula_43, where formula_44 denotes the Schouten–Nijenhuis bracket on multivector fields. Choosing local coordinates formula_39, any Poisson bivector is given by formula_45 where formula_46 are skew-symmetric smooth functions on formula_47. Equivalence of the definitions. Let formula_35 be a bilinear skew-symmetric bracket (called an "almost Lie bracket") satisfying Leibniz's rule; then the function formula_25 can be described as formula_48 for a unique smooth bivector field formula_49. Conversely, given any smooth bivector field formula_50 on formula_0, the same formula formula_51 defines an almost Lie bracket formula_35 that automatically obeys Leibniz's rule. Then the following integrability conditions are equivalent: the bracket formula_35 satisfies the Jacobi identity (so that it is a Poisson bracket); the bivector field formula_50 satisfies formula_43 (so that it is a Poisson bivector); the map formula_52 is a Lie algebra homomorphism, i.e. formula_53; the graph formula_54 defines a Dirac structure, i.e. a Lagrangian subbundle formula_55 which is closed under the Courant bracket. A Poisson structure without any of the four requirements above is also called an almost Poisson structure. Holomorphic Poisson structures. The definition of Poisson structure for "real" smooth manifolds can also be adapted to the complex case. A holomorphic Poisson manifold is a complex manifold formula_3 whose sheaf of holomorphic functions formula_56 is a sheaf of Poisson algebras. Equivalently, recall that a holomorphic bivector field formula_57 on a complex manifold formula_3 is a section formula_58 such that formula_59. Then a holomorphic Poisson structure on formula_60 is a holomorphic bivector field satisfying the equation formula_61. Holomorphic Poisson manifolds can also be characterised in terms of Poisson-Nijenhuis structures. Many results for real Poisson structures, e.g. regarding their integrability, extend also to holomorphic ones.
Holomorphic Poisson structures appear naturally in the context of generalised complex structures: locally, any generalised complex manifold is the product of a symplectic manifold and a holomorphic Poisson manifold. Deformation quantization. The notion of a Poisson manifold arises naturally from the deformation theory of associative algebras. For a smooth manifold formula_3, the smooth functions formula_62 form a commutative algebra over the real numbers formula_63, using pointwise addition and multiplication (meaning that formula_64 for points formula_65 in formula_3). An formula_66th-order deformation of this algebra is given by a formula formula_67 for formula_68 such that the star-product is associative (modulo formula_69), but not necessarily commutative. A first-order deformation of formula_62 is equivalent to an "almost Poisson structure" as defined above, that is, a bilinear "bracket" map formula_31 that is skew-symmetric and satisfies Leibniz's Rule. Explicitly, one can go from the deformation to the bracket by formula_70 A first-order deformation is also equivalent to a bivector field, that is, a smooth section of formula_71. A bracket satisfies the Jacobi identity (that is, it is a Poisson structure) if and only if the corresponding first-order deformation of formula_62 can be extended to a second-order deformation. Remarkably, the Kontsevich quantization formula shows that every Poisson manifold has a deformation quantization. That is, if a first-order deformation of formula_62 can be extended to second order, then it can be extended to infinite order. Example: For any smooth manifold formula_3, the cotangent bundle formula_72 is a symplectic manifold, and hence a Poisson manifold. The corresponding non-commutative deformation of formula_73 is related to the algebra of linear differential operators on formula_3. When formula_3 is the real line formula_63, the non-commutativity of the algebra of differential operators (known as the Weyl algebra) follows from the calculation that formula_74 Symplectic leaves. A Poisson manifold is naturally partitioned into regularly immersed symplectic manifolds of possibly different dimensions, called its symplectic leaves. These arise as the maximal integral submanifolds of the completely integrable singular foliation spanned by the Hamiltonian vector fields. Rank of a Poisson structure. Recall that any bivector field can be regarded as a skew homomorphism, the musical morphism formula_75. The image formula_76 consists therefore of the values formula_77 of all Hamiltonian vector fields evaluated at every formula_78. The rank of formula_50 at a point formula_78 is the rank of the induced linear mapping formula_79. A point formula_78 is called regular for a Poisson structure formula_50 on formula_0 if and only if the rank of formula_50 is constant on an open neighborhood of formula_78; otherwise, it is called a singular point. Regular points form an open dense subspace formula_80; when formula_81, i.e. the map formula_82 is of constant rank, the Poisson structure formula_50 is called regular. Examples of regular Poisson structures include trivial and nondegenerate structures (see below). The regular case. For a regular Poisson manifold, the image formula_76 is a regular distribution; it is easy to check that it is involutive; therefore, by the Frobenius theorem, formula_0 admits a partition into leaves. Moreover, the Poisson bivector restricts nicely to each leaf, which therefore becomes a symplectic manifold. The non-regular case.
For a non-regular Poisson manifold the situation is more complicated, since the distribution formula_76 is singular, i.e. the vector subspaces formula_83 have different dimensions. An integral submanifold for formula_84 is a path-connected submanifold formula_85 satisfying formula_86 for all formula_87. Integral submanifolds of formula_50 are automatically regularly immersed manifolds, and maximal integral submanifolds of formula_50 are called the leaves of formula_50. Moreover, each leaf formula_88 carries a natural symplectic form formula_89 determined by the condition formula_90 for all formula_91 and formula_87. Correspondingly, one speaks of the symplectic leaves of formula_50. Moreover, both the space formula_92 of regular points and its complement are saturated by symplectic leaves, so symplectic leaves may be either regular or singular. Weinstein splitting theorem. To show the existence of symplectic leaves in the non-regular case, one can use the Weinstein splitting theorem (also called the Darboux-Weinstein theorem). It states that any Poisson manifold formula_93 splits locally around a point formula_94 as the product of a symplectic manifold formula_95 and a transverse Poisson submanifold formula_96 whose Poisson structure vanishes at formula_97. More precisely, if formula_98, there are local coordinates formula_99 such that the Poisson bivector formula_50 splits as the sum formula_100 where formula_101 Notice that, when the rank of formula_50 is maximal (e.g. the Poisson structure is nondegenerate, so that formula_102), one recovers the classical Darboux theorem for symplectic structures. Examples. Trivial Poisson structures. Every manifold formula_0 carries the trivial Poisson structure formula_103, equivalently described by the bivector formula_104. Every point of formula_0 is therefore a zero-dimensional symplectic leaf. Nondegenerate Poisson structures. A bivector field formula_50 is called nondegenerate if formula_105 is a vector bundle isomorphism. Nondegenerate Poisson bivector fields are actually the same thing as symplectic manifolds formula_18. Indeed, there is a bijective correspondence between nondegenerate bivector fields formula_50 and nondegenerate 2-forms formula_19, given by the musical isomorphism formula_106 where formula_19 is encoded by formula_107. Furthermore, formula_50 is Poisson if and only if formula_19 is closed; in that case, the bracket becomes the canonical Poisson bracket from Hamiltonian mechanics: formula_108 Non-degenerate Poisson structures have only one symplectic leaf, namely formula_0 itself, and their Poisson algebra formula_109 becomes a Poisson ring. Linear Poisson structures. A Poisson structure formula_110 on a vector space formula_111 is called linear when the bracket of two linear functions is still linear. The class of vector spaces with linear Poisson structures coincides with that of the duals of Lie algebras. The dual formula_112 of any finite-dimensional Lie algebra formula_113 carries a linear Poisson bracket, known in the literature under the names of Lie-Poisson, Kirillov-Poisson or KKS (Kostant-Kirillov-Souriau) structure: formula_114 where formula_115 and the derivatives formula_116 are interpreted as elements of the bidual formula_117. Equivalently, the Poisson bivector can be locally expressed as formula_118 where formula_119 are coordinates on formula_112 and formula_120 are the associated structure constants of formula_121. Conversely, any linear Poisson structure formula_110 on formula_111 must be of this form, i.e.
there exists a natural Lie algebra structure induced on formula_122 whose Lie-Poisson bracket recovers formula_110. The symplectic leaves of the Lie-Poisson structure on formula_123 are the orbits of the coadjoint action of formula_124 on formula_123. Fibrewise linear Poisson structures. The previous example can be generalised as follows. A Poisson structure on the total space of a vector bundle formula_125 is called fibrewise linear when the bracket of two smooth functions formula_126 whose restrictions to the fibres are linear is itself linear when restricted to the fibres. Equivalently, the Poisson bivector field formula_50 is asked to satisfy formula_127 for any formula_128, where formula_129 is the scalar multiplication formula_130. The class of vector bundles with fibrewise linear Poisson structures coincides with that of the duals of Lie algebroids. The dual formula_131 of any Lie algebroid formula_132 carries a fibrewise linear Poisson bracket, uniquely defined by formula_133 where formula_134 is the evaluation by formula_135. Equivalently, the Poisson bivector can be locally expressed as formula_136 where formula_119 are coordinates around a point formula_78, formula_137 are fibre coordinates on formula_131, dual to a local frame formula_138 of formula_139, and formula_140 and formula_141 are the structure functions of formula_139, i.e. the unique smooth functions satisfying formula_142 Conversely, any fibrewise linear Poisson structure formula_110 on formula_143 must be of this form, i.e. there exists a natural Lie algebroid structure induced on formula_144 whose Lie-Poisson bracket recovers formula_110. The symplectic leaves of formula_131 are the cotangent bundles of the algebroid orbits formula_145; equivalently, if formula_139 is integrable to a Lie groupoid formula_146, they are the connected components of the orbits of the cotangent groupoid formula_147. For formula_148 one recovers linear Poisson structures, while for formula_149 the fibrewise linear Poisson structure is the nondegenerate one given by the canonical symplectic structure of the cotangent bundle formula_150. Poisson cohomology. The Poisson cohomology groups formula_166 of a Poisson manifold are the cohomology groups of the cochain complex formula_167 where the operator formula_168 is the Schouten-Nijenhuis bracket with formula_50. Notice that such a sequence can be defined for every bivector on formula_0; the condition formula_169 is equivalent to formula_170, i.e. formula_0 being Poisson. Using the morphism formula_105, one obtains a morphism from the de Rham complex formula_171 to the Poisson complex formula_172, inducing a group homomorphism formula_173. In the nondegenerate case, this becomes an isomorphism, so that the Poisson cohomology of a symplectic manifold fully recovers its de Rham cohomology. Poisson cohomology is difficult to compute in general, but the low degree groups contain important geometric information on the Poisson structure: formula_174 is the space of Casimir functions, i.e. the smooth functions Poisson-commuting with all others; formula_175 is the space of Poisson vector fields modulo Hamiltonian vector fields; formula_176 is the space of infinitesimal deformations of the Poisson structure modulo trivial deformations; formula_177 contains the obstructions to extending infinitesimal deformations to actual deformations. Modular class. The modular class of a Poisson manifold is a class in the first Poisson cohomology group, which is the obstruction to the existence of a volume form invariant under the Hamiltonian flows. It was introduced by Koszul and Weinstein. Recall that the divergence of a vector field formula_178 with respect to a given volume form formula_179 is the function formula_180 defined by formula_181.
The modular vector field of a Poisson manifold, with respect to a volume form formula_179, is the vector field formula_182 defined by the divergence of the Hamiltonian vector fields: formula_183. The modular vector field is a Poisson 1-cocycle, i.e. it satisfies formula_184. Moreover, given two volume forms formula_185 and formula_186, the difference formula_187 is a Hamiltonian vector field. Accordingly, the Poisson cohomology class formula_188 does not depend on the original choice of the volume form formula_179, and it is called the modular class of the Poisson manifold. A Poisson manifold is called unimodular if its modular class vanishes. Notice that this happens if and only if there exists a volume form formula_179 such that the modular vector field formula_182 vanishes, i.e. formula_189 for every formula_190; in other words, formula_179 is invariant under the flow of any Hamiltonian vector field. For instance: symplectic structures are always unimodular, since the Liouville volume form is invariant under all Hamiltonian flows; the linear Poisson structure on the dual formula_192 of a Lie algebra formula_191 is unimodular if and only if formula_191 is a unimodular Lie algebra. Poisson homology. Poisson cohomology was introduced in 1977 by Lichnerowicz himself; a decade later, Brylinski introduced a homology theory for Poisson manifolds, using the operator formula_193. Several results have been proved relating Poisson homology and cohomology. For instance, for orientable "unimodular" Poisson manifolds, Poisson homology turns out to be isomorphic to Poisson cohomology: this was proved independently by Xu and Evans-Lu-Weinstein. Poisson maps. A smooth map formula_194 between Poisson manifolds is called a Poisson map if it respects the Poisson structures, i.e. one of the following equivalent conditions holds (compare with the equivalent definitions of Poisson structures above): the brackets formula_195 and formula_196 satisfy formula_197 for every pair of smooth functions formula_198; the bivector fields formula_199 and formula_200 are formula_201-related, i.e. formula_202; the Hamiltonian vector fields associated to every smooth function formula_203 are formula_201-related, i.e. formula_204; the differential formula_205 is a forward Dirac morphism. An anti-Poisson map satisfies analogous conditions with a minus sign on one side. Poisson manifolds are the objects of a category formula_206, with Poisson maps as morphisms. If a Poisson map formula_207 is also a diffeomorphism, then we call formula_208 a Poisson-diffeomorphism. Examples. The canonical projections formula_209, for formula_210, of a product Poisson manifold formula_154 are Poisson maps; the inclusion of a symplectic leaf, or of an open subset, is a Poisson map; given two Lie algebras formula_121 and formula_211, the dual of any Lie algebra homomorphism formula_212 induces a Poisson map formula_213 between their linear Poisson structures; given two Lie algebroids formula_214 and formula_215, the dual of any Lie algebroid morphism formula_216 over the identity induces a Poisson map formula_217 between their fibrewise linear Poisson structures. One should notice that the notion of a Poisson map is fundamentally different from that of a symplectic map. For instance, with their standard symplectic structures, there exist no Poisson maps formula_218, whereas symplectic maps abound. Symplectic realisations. A symplectic realisation of a Poisson manifold M consists of a symplectic manifold formula_219 together with a Poisson map formula_220 which is a surjective submersion. Roughly speaking, the role of a symplectic realisation is to "desingularise" a complicated (degenerate) Poisson manifold by passing to a bigger, but easier (non-degenerate), one. Notice that some authors define symplectic realisations without this last condition (so that, for instance, the inclusion of a symplectic leaf in a Poisson manifold is an example) and call full a symplectic realisation where formula_221 is a surjective submersion. Examples of (full) symplectic realisations include the following: for the trivial Poisson structure formula_222, one takes as formula_223 the cotangent bundle formula_150, with its canonical symplectic structure, and as formula_221 the projection formula_224; for a nondegenerate Poisson structure formula_18, one takes as formula_223 the manifold formula_0 itself and as formula_221 the identity map formula_225; for the Lie-Poisson structure on formula_123, one takes as formula_223 the cotangent bundle formula_226 of a Lie group formula_124 integrating formula_121 and as formula_221 the map formula_227 dual to the differential at the identity of the (left or right) translations formula_228. A symplectic realisation formula_221 is called complete if, for any complete Hamiltonian vector field formula_229, the vector field formula_230 is complete as well. While symplectic realisations always exist for every Poisson manifold (and several different proofs are available), complete ones do not, and their existence plays a fundamental role in the integrability problem for Poisson manifolds (see below). Integration of Poisson manifolds. Any Poisson manifold formula_162 induces a structure of Lie algebroid on its cotangent bundle formula_224, also called the cotangent algebroid.
The anchor map is given by formula_105 while the Lie bracket on formula_231 is defined as formula_232 Several notions defined for Poisson manifolds can be interpreted via its Lie algebroid formula_150: for instance, the symplectic leaves of formula_162 coincide with the orbits of formula_150, and the Poisson cohomology of formula_162 coincides with the Lie algebroid cohomology of formula_150. It is of crucial importance to notice that the Lie algebroid formula_150 is not always integrable to a Lie groupoid. Symplectic groupoids. A symplectic groupoid is a Lie groupoid formula_146 together with a symplectic form formula_233 which is also multiplicative, i.e. it satisfies the following algebraic compatibility with the groupoid multiplication: formula_234. Equivalently, the graph of formula_235 is asked to be a Lagrangian submanifold of formula_236. Among the several consequences, the dimension of formula_237 is automatically twice the dimension of formula_0. The notion of symplectic groupoid was introduced at the end of the 1980s independently by several authors. A fundamental theorem states that the base space of any symplectic groupoid admits a unique Poisson structure formula_50 such that the source map formula_238 and the target map formula_239 are, respectively, a Poisson map and an anti-Poisson map. Moreover, the Lie algebroid formula_240 is isomorphic to the cotangent algebroid formula_150 associated to the Poisson manifold formula_162. Conversely, if the cotangent bundle formula_150 of a Poisson manifold is integrable to some Lie groupoid formula_146, then formula_237 is automatically a symplectic groupoid. Accordingly, the integrability problem for a Poisson manifold consists in finding a (symplectic) Lie groupoid which integrates its cotangent algebroid; when this happens, the Poisson structure is called integrable. While any Poisson manifold admits a local integration (i.e. a symplectic groupoid where the multiplication is defined only locally), there are general topological obstructions to its integrability, coming from the integrability theory for Lie algebroids. Using such obstructions, one can show that a Poisson manifold is integrable if and only if it admits a complete symplectic realisation. The candidate formula_241 for the symplectic groupoid integrating a given Poisson manifold formula_162 is called the Poisson homotopy groupoid and is simply the Weinstein groupoid of the cotangent algebroid formula_224, consisting of the quotient of the Banach space of a special class of paths in formula_150 by a suitable equivalence relation. Equivalently, formula_241 can be described as an infinite-dimensional symplectic quotient. Submanifolds. A Poisson submanifold of formula_250 is an immersed submanifold formula_251 such that the immersion map formula_252 is a Poisson map. Equivalently, one asks that every Hamiltonian vector field formula_253, for formula_153, is tangent to formula_254. This definition is very natural and satisfies several good properties, e.g. the transverse intersection of two Poisson submanifolds is again a Poisson submanifold. However, it also has a few problems: Poisson submanifolds are rare (for instance, the only Poisson submanifolds of a symplectic manifold are its open subsets), and the definition does not behave well functorially (the preimage formula_256 of a Poisson submanifold Q under a Poisson map formula_255 is not in general a Poisson submanifold). In order to overcome these problems, one often uses the notion of a Poisson transversal (originally called cosymplectic submanifold). This can be defined as a submanifold formula_257 which is transverse to every symplectic leaf formula_88 and such that the intersection formula_258 is a symplectic submanifold of formula_259. It follows that any Poisson transversal formula_260 inherits a canonical Poisson structure formula_261 from formula_50.
In the case of a nondegenerate Poisson manifold formula_250 (whose only symplectic leaf is formula_0 itself), Poisson transversals are the same thing as symplectic submanifolds. More general classes of submanifolds play an important role in Poisson geometry, including Lie–Dirac submanifolds, Poisson–Dirac submanifolds, coisotropic submanifolds and pre-Poisson submanifolds. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " M " }, { "math_id": 1, "text": " \\{ \\cdot,\\cdot \\}: \\mathcal{C}^\\infty(M) \\times \\mathcal{C}^\\infty(M) \\to \\mathcal{C}^\\infty(M) " }, { "math_id": 2, "text": " {C^{\\infty}}(M) " }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": " n " }, { "math_id": 5, "text": " \\mathbb{R}^n " }, { "math_id": 6, "text": " \\mathbb{R}^{2n} " }, { "math_id": 7, "text": " (q^1,...,q^n,p_1,...,p_n) " }, { "math_id": 8, "text": " \\{ f,g \\} := \\sum_{i=1}^n \\left( \\frac{\\partial f}{\\partial p_i} \\frac{\\partial g}{\\partial q_i} - \\frac{\\partial f}{\\partial q_i} \\frac{\\partial g}{\\partial p_i} \\right) ." }, { "math_id": 9, "text": " \\{f,g \\cdot h\\} = g \\cdot \\{f,h\\} + \\{f,g\\} \\cdot h " }, { "math_id": 10, "text": " \\omega := \\sum_{i=1}^n dp_i \\wedge dq^i ." }, { "math_id": 11, "text": " X_f := \\sum_{i=1}^n \\frac{\\partial f}{\\partial p_i} \\partial_{q_i} - \\frac{\\partial f}{\\partial q_i} \\partial_{p_i} " }, { "math_id": 12, "text": " f " }, { "math_id": 13, "text": " \\{f,g\\} = \\omega (X_f,X_g). " }, { "math_id": 14, "text": " T^*Q " }, { "math_id": 15, "text": " Q ." }, { "math_id": 16, "text": " Q " }, { "math_id": 17, "text": " 2n ;" }, { "math_id": 18, "text": " (M,\\omega) " }, { "math_id": 19, "text": " \\omega " }, { "math_id": 20, "text": " \\{f,g\\} = \\omega (X_f,X_g) " }, { "math_id": 21, "text": "\\mathbb{R}^{2n}" }, { "math_id": 22, "text": "\\{\\cdot,\\cdot\\}: \\mathcal{C}^\\infty(M) \\times \\mathcal{C}^\\infty(M) \\to \\mathcal{C}^\\infty(M) " }, { "math_id": 23, "text": "\\omega" }, { "math_id": 24, "text": " g " }, { "math_id": 25, "text": " \\{ f,g \\} " }, { "math_id": 26, "text": " h " }, { "math_id": 27, "text": " \\{f,h\\} = 0 " }, { "math_id": 28, "text": " \\{f,h\\} = 0, \\{g,h\\} = 0 \\Rightarrow \\{\\{f,g\\},h\\} = 0." 
}, { "math_id": 29, "text": " X_{\\{f,g\\}} = [X_f,X_g]," }, { "math_id": 30, "text": " \\mathbb{R} " }, { "math_id": 31, "text": " \\{ \\cdot,\\cdot \\}: {C^{\\infty}}(M) \\times {C^{\\infty}}(M) \\to {C^{\\infty}}(M) " }, { "math_id": 32, "text": " \\{ f,g \\} = - \\{ g,f \\} " }, { "math_id": 33, "text": " \\{ f,\\{ g,h \\} \\} + \\{ g,\\{ h,f \\} \\} + \\{ h,\\{ f,g \\} \\} = 0 " }, { "math_id": 34, "text": " \\{ f g,h \\} = f \\{ g,h \\} + g \\{ f,h \\} " }, { "math_id": 35, "text": " \\{ \\cdot,\\cdot \\} " }, { "math_id": 36, "text": " f \\in {C^{\\infty}}(M) " }, { "math_id": 37, "text": " X_f := \\{ f,\\cdot \\}: {C^{\\infty}}(M) \\to {C^{\\infty}}(M) " }, { "math_id": 38, "text": " X_{f} \\in \\mathfrak{X}(M) " }, { "math_id": 39, "text": " (U, x^i) " }, { "math_id": 40, "text": " \\{f, g\\}_{\\mid U} = \\sum_{i,j} \\pi^{ij} \\frac{\\partial f}{\\partial x^i} \\frac{\\partial g}{\\partial x^j}, " }, { "math_id": 41, "text": " \\pi^{ij} = \\{ x^i, x^j \\} " }, { "math_id": 42, "text": " \\pi \\in \\mathfrak{X}^2(M) := \\Gamma \\big( \\wedge^{2} T M \\big) " }, { "math_id": 43, "text": " [\\pi,\\pi] = 0 " }, { "math_id": 44, "text": " [\\cdot,\\cdot]: {\\mathfrak{X}^{p}}(M) \\times {\\mathfrak{X}^{q}}(M) \\to {\\mathfrak{X}^{p + q - 1}}(M) " }, { "math_id": 45, "text": " \\pi_{\\mid U} = \\sum_{i,j} \\pi^{ij} \\frac{\\partial}{\\partial x^i} \\frac{\\partial}{\\partial x^j}, " }, { "math_id": 46, "text": " \\pi^{ij} " }, { "math_id": 47, "text": " U " }, { "math_id": 48, "text": " \\{ f,g \\} = \\pi(df \\wedge dg), " }, { "math_id": 49, "text": " \\pi \\in \\mathfrak{X}^2(M) " }, { "math_id": 50, "text": " \\pi " }, { "math_id": 51, "text": " \\{ f,g \\} = \\pi(df \\wedge dg) " }, { "math_id": 52, "text": " {C^{\\infty}}(M) \\to \\mathfrak{X}(M), f \\mapsto X_f " }, { "math_id": 53, "text": " [X_f, X_g] = X_{\\{f,g\\}} " }, { "math_id": 54, "text": " {\\rm Graph}(\\pi) \\subset TM \\oplus T^*M " }, { "math_id": 55, "text": " D \\subset TM \\oplus T^*M " }, { "math_id": 56, "text": " \\mathcal{O}_M " }, { "math_id": 57, "text": "\\pi" }, { "math_id": 58, "text": " \\pi \\in \\Gamma (\\wedge^2 T^{1,0}M)" }, { "math_id": 59, "text": " \\bar{\\partial} \\pi = 0" }, { "math_id": 60, "text": "M " }, { "math_id": 61, "text": "[\\pi,\\pi]=0" }, { "math_id": 62, "text": "C^{\\infty}(M)" }, { "math_id": 63, "text": "\\mathbf{R}" }, { "math_id": 64, "text": "(fg)(x) = f(x)g(x)" }, { "math_id": 65, "text": "x" }, { "math_id": 66, "text": "n" }, { "math_id": 67, "text": " f*g = fg + \\epsilon B_1(f,g) + \\cdots + \\epsilon^n B_n(f,g) \\pmod{\\epsilon^{n+1}}" }, { "math_id": 68, "text": "f,g\\in C^{\\infty}(M)" }, { "math_id": 69, "text": "\\epsilon^{n+1}" }, { "math_id": 70, "text": "f*g-g*f=\\epsilon \\{ f,g \\} \\pmod{\\epsilon^2}." }, { "math_id": 71, "text": "\\wedge^2 TM" }, { "math_id": 72, "text": "T^*M" }, { "math_id": 73, "text": "C^{\\infty}(T^*M)" }, { "math_id": 74, "text": "\\bigg[ \\frac{\\partial}{\\partial x},x \\bigg] =1." 
}, { "math_id": 75, "text": " \\pi^{\\sharp}: T^{*} M \\to T M, \\alpha \\mapsto \\pi(\\alpha,\\cdot) " }, { "math_id": 76, "text": " {\\pi^{\\sharp}}(T^{*} M) \\subset TM " }, { "math_id": 77, "text": " {X_{f}}(x) " }, { "math_id": 78, "text": " x \\in M " }, { "math_id": 79, "text": " \\pi^{\\sharp}_{x} " }, { "math_id": 80, "text": " M_{\\mathrm{reg}} \\subseteq M " }, { "math_id": 81, "text": " M_{\\mathrm{reg}} = M " }, { "math_id": 82, "text": " \\pi^\\sharp " }, { "math_id": 83, "text": " {\\pi^{\\sharp}}(T^{*}_x M) \\subset T_xM " }, { "math_id": 84, "text": " {\\pi^{\\sharp}}(T^{*} M) " }, { "math_id": 85, "text": " S \\subseteq M " }, { "math_id": 86, "text": " T_{x} S = {\\pi^{\\sharp}}(T^{\\ast}_{x} M) " }, { "math_id": 87, "text": " x \\in S " }, { "math_id": 88, "text": " S " }, { "math_id": 89, "text": " \\omega_{S} \\in {\\Omega^{2}}(S) " }, { "math_id": 90, "text": " [{\\omega_{S}}(X_{f},X_{g})](x) = - \\{ f,g \\}(x) " }, { "math_id": 91, "text": " f,g \\in {C^{\\infty}}(M) " }, { "math_id": 92, "text": " M_{\\mathrm{reg}} " }, { "math_id": 93, "text": " (M^n, \\pi) " }, { "math_id": 94, "text": " x_0 \\in M " }, { "math_id": 95, "text": " (S^{2k}, \\omega) " }, { "math_id": 96, "text": " (T^{n-2k}, \\pi_T) " }, { "math_id": 97, "text": " x_0 " }, { "math_id": 98, "text": " \\mathrm{rank}(\\pi_{x_0}) = 2k " }, { "math_id": 99, "text": " (U, p_1,\\ldots,p_k,q^1,\\ldots, q^k,x^1,\\ldots,x^{n-2k}) " }, { "math_id": 100, "text": " \\pi_{\\mid U} = \\sum_{i=1}^{k} \\frac{\\partial}{\\partial q^i} \\frac{\\partial}{\\partial p_i} + \\frac{1}{2} \\sum_{i,j=1}^{n-2k} \\phi^{ij}(x) \\frac{\\partial}{\\partial x^i} \\frac{\\partial}{\\partial x^j}, " }, { "math_id": 101, "text": " \\phi^{ij}(x_0) = 0 ." }, { "math_id": 102, "text": "n=2k" }, { "math_id": 103, "text": " \\{ f,g \\} = 0 " }, { "math_id": 104, "text": " \\pi=0 " }, { "math_id": 105, "text": " \\pi^{\\sharp}: T^{*} M \\to T M " }, { "math_id": 106, "text": " \\pi^\\sharp = (\\omega^{\\flat})^{-1}, " }, { "math_id": 107, "text": " \\omega^{\\flat}: TM \\to T^*M, \\quad v \\mapsto \\omega(v,\\cdot) " }, { "math_id": 108, "text": " \\{ f,g \\} := \\omega (X_f,X_g). 
" }, { "math_id": 109, "text": " (\\mathcal{C}^{\\infty}(M), \\{\\cdot, \\cdot \\}) " }, { "math_id": 110, "text": " \\{ \\cdot, \\cdot \\} " }, { "math_id": 111, "text": " V " }, { "math_id": 112, "text": " \\mathfrak{g}^{*} " }, { "math_id": 113, "text": " (\\mathfrak{g},[\\cdot,\\cdot]) " }, { "math_id": 114, "text": " \\{ f, g \\} (\\xi) := \\xi ([d_\\xi f,d_\\xi g]_{\\mathfrak{g}}), " }, { "math_id": 115, "text": " f,g \\in \\mathcal{C}^{\\infty}(\\mathfrak{g}^*), \\xi \\in \\mathfrak{g}^* " }, { "math_id": 116, "text": " d_\\xi f, d_\\xi g: T_{\\xi} \\mathfrak{g}^* \\to \\mathbb{R} " }, { "math_id": 117, "text": " \\mathfrak{g}^{**} \\cong \\mathfrak{g} " }, { "math_id": 118, "text": " \\pi = \\sum_{i,j,k} c^{ij}_k x^k \\frac{\\partial}{\\partial x^i} \\frac{\\partial}{\\partial x^j}, " }, { "math_id": 119, "text": " x^i " }, { "math_id": 120, "text": " c_k^{ij} " }, { "math_id": 121, "text": " \\mathfrak{g} " }, { "math_id": 122, "text": " \\mathfrak{g}:=V^* " }, { "math_id": 123, "text": " \\mathfrak{g}^* " }, { "math_id": 124, "text": " G " }, { "math_id": 125, "text": " E \\to M " }, { "math_id": 126, "text": " E \\to \\mathbb{R} " }, { "math_id": 127, "text": " (m_t)^*\\pi = t \\pi " }, { "math_id": 128, "text": " t >0 " }, { "math_id": 129, "text": " m_t: E \\to E " }, { "math_id": 130, "text": " v \\mapsto tv " }, { "math_id": 131, "text": " A^* " }, { "math_id": 132, "text": " (A, [\\cdot, \\cdot]) " }, { "math_id": 133, "text": " \\{ \\mathrm{ev}_\\alpha, \\mathrm{ev}_\\beta \\}:= ev_{[\\alpha,\\beta]} \\quad \\quad \\forall \\alpha, \\beta \\in \\Gamma(A), " }, { "math_id": 134, "text": " \\mathrm{ev}_\\alpha: A^* \\to \\mathbb{R}, \\phi \\mapsto \\phi(\\alpha) " }, { "math_id": 135, "text": " \\alpha " }, { "math_id": 136, "text": " \\pi = \\sum_{i,a} B^i_a(x) \\frac{\\partial}{\\partial y_a} \\frac{\\partial}{\\partial x^i} + \\sum_{a < b,c} C_{ab}^c(x) y_c \\frac{\\partial}{\\partial y_a} \\frac{\\partial}{\\partial y_b}, " }, { "math_id": 137, "text": " y_a " }, { "math_id": 138, "text": " e_a " }, { "math_id": 139, "text": " A " }, { "math_id": 140, "text": " B^i_a " }, { "math_id": 141, "text": " C^c_{ab} " }, { "math_id": 142, "text": " \\rho(e_a) = \\sum_i B^i_a (x) \\frac{\\partial}{\\partial x^i}, \\quad \\quad [e_a, e_b] = \\sum_c C^c_{ab} (x) e_c. 
" }, { "math_id": 143, "text": " E " }, { "math_id": 144, "text": " A:=E^* " }, { "math_id": 145, "text": " \\mathcal{O} \\subseteq A " }, { "math_id": 146, "text": " \\mathcal{G} \\rightrightarrows M " }, { "math_id": 147, "text": " T^* \\mathcal{G} \\rightrightarrows A^* " }, { "math_id": 148, "text": " M = \\{*\\} " }, { "math_id": 149, "text": " A = TM " }, { "math_id": 150, "text": " T^*M " }, { "math_id": 151, "text": " [\\pi,\\pi] " }, { "math_id": 152, "text": " f \\pi " }, { "math_id": 153, "text": " f \\in \\mathcal{C}^\\infty(M) " }, { "math_id": 154, "text": " (M_{0} \\times M_{1},\\pi_{0} \\times \\pi_{1}) " }, { "math_id": 155, "text": " (M_{0},\\pi_{0}) " }, { "math_id": 156, "text": " (M_{1},\\pi_{1}) " }, { "math_id": 157, "text": " \\mathcal{F} " }, { "math_id": 158, "text": " 2 r " }, { "math_id": 159, "text": " \\omega \\in {\\Omega^{2}}(\\mathcal{F}) " }, { "math_id": 160, "text": " \\omega^{r} " }, { "math_id": 161, "text": " \\omega|_S " }, { "math_id": 162, "text": " (M,\\pi) " }, { "math_id": 163, "text": " M/G " }, { "math_id": 164, "text": " \\pi_{M/G} " }, { "math_id": 165, "text": " (M,\\pi) \\to (M/G,\\pi_{M/G}) " }, { "math_id": 166, "text": " H^k(M,\\pi) " }, { "math_id": 167, "text": " \\ldots \\xrightarrow{d_\\pi} \\mathfrak{X}^\\bullet(M) \\xrightarrow{d_\\pi} \\mathfrak{X}^{\\bullet+1}(M) \\xrightarrow{d_\\pi} \\ldots \\color{white}{\\sum^i} " }, { "math_id": 168, "text": " d_\\pi = [\\pi,-] " }, { "math_id": 169, "text": " d_\\pi \\circ d_\\pi = 0 " }, { "math_id": 170, "text": " [\\pi,\\pi]=0 " }, { "math_id": 171, "text": " (\\Omega^\\bullet(M),d_{dR}) " }, { "math_id": 172, "text": " (\\mathfrak{X}^\\bullet(M), d_\\pi) " }, { "math_id": 173, "text": " H_{dR}^\\bullet(M) \\to H^\\bullet(M,\\pi) " }, { "math_id": 174, "text": " H^0(M,\\pi) " }, { "math_id": 175, "text": " H^1(M,\\pi) " }, { "math_id": 176, "text": " H^2(M,\\pi) " }, { "math_id": 177, "text": " H^3(M,\\pi) " }, { "math_id": 178, "text": "X \\in \\mathfrak{X}(M)" }, { "math_id": 179, "text": "\\lambda" }, { "math_id": 180, "text": "{\\rm div}_\\lambda (X) \\in \\mathcal{C}^\\infty(M)" }, { "math_id": 181, "text": " {\\rm div}_\\lambda (X) = \\frac{\\mathcal{L}_{X} \\lambda}{\\lambda}" }, { "math_id": 182, "text": "X_\\lambda" }, { "math_id": 183, "text": "X_\\lambda: f \\mapsto {\\rm div}_\\lambda (X_f)" }, { "math_id": 184, "text": "\\mathcal{L}_{X_\\lambda} \\pi = 0" }, { "math_id": 185, "text": "\\lambda_1" }, { "math_id": 186, "text": "\\lambda_2" }, { "math_id": 187, "text": "X_{\\lambda_1} - X_{\\lambda_2}" }, { "math_id": 188, "text": "[X_\\lambda]_\\pi \\in H^1 (M,\\pi) " }, { "math_id": 189, "text": " {\\rm div}_\\lambda (X_f) = 0" }, { "math_id": 190, "text": "f" }, { "math_id": 191, "text": "\\mathfrak{g}" }, { "math_id": 192, "text": "\\mathfrak{g}^*" }, { "math_id": 193, "text": "\\partial_\\pi = [d, \\iota_\\pi]" }, { "math_id": 194, "text": " \\varphi: M \\to N " }, { "math_id": 195, "text": " \\{ \\cdot,\\cdot \\}_{M} " }, { "math_id": 196, "text": " \\{ \\cdot,\\cdot \\}_{N} " }, { "math_id": 197, "text": " {\\{ f,g \\}_{N}}(\\varphi(x)) = {\\{ f \\circ \\varphi,g \\circ \\varphi \\}_{M}}(x) " }, { "math_id": 198, "text": " f,g \\in {C^{\\infty}}(N) " }, { "math_id": 199, "text": " \\pi_{M} " }, { "math_id": 200, "text": " \\pi_{N} " }, { "math_id": 201, "text": " \\varphi " }, { "math_id": 202, "text": " \\pi_N = \\varphi_* \\pi_M " }, { "math_id": 203, "text": " H \\in \\mathcal{C}^\\infty(N) " }, { "math_id": 204, "text": "X_H = \\varphi_* X_{H \\circ \\phi}" }, { 
"math_id": 205, "text": " d\\varphi: (TM,{\\rm Graph}(\\pi_M)) \\to (TN,{\\rm Graph}(\\pi_N)) " }, { "math_id": 206, "text": " \\mathfrak{Poiss} " }, { "math_id": 207, "text": "\\varphi: M\\to N" }, { "math_id": 208, "text": "\\varphi" }, { "math_id": 209, "text": " \\mathrm{pr}_{i}: M_{0} \\times M_{1} \\to M_{i} " }, { "math_id": 210, "text": " i \\in \\{ 0,1 \\} " }, { "math_id": 211, "text": " \\mathfrak{h} " }, { "math_id": 212, "text": " \\mathfrak{g} \\to \\mathfrak{h} " }, { "math_id": 213, "text": " \\mathfrak{h}^* \\to \\mathfrak{g}^* " }, { "math_id": 214, "text": " A \\to M " }, { "math_id": 215, "text": " B \\to M " }, { "math_id": 216, "text": " A \\to B " }, { "math_id": 217, "text": " B^* \\to A^* " }, { "math_id": 218, "text": " \\mathbb{R}^{2} \\to \\mathbb{R}^{4} " }, { "math_id": 219, "text": " (P,\\omega) " }, { "math_id": 220, "text": " \\phi: (P,\\omega) \\to (M,\\pi) " }, { "math_id": 221, "text": " \\phi " }, { "math_id": 222, "text": " (M,0 ) " }, { "math_id": 223, "text": " P " }, { "math_id": 224, "text": " T^*M \\to M " }, { "math_id": 225, "text": " M \\to M " }, { "math_id": 226, "text": " T^*G " }, { "math_id": 227, "text": " \\phi: T^*G \\to \\mathfrak{g}^* " }, { "math_id": 228, "text": " G \\to G " }, { "math_id": 229, "text": "X_H" }, { "math_id": 230, "text": "X_{H \\circ \\phi}" }, { "math_id": 231, "text": " \\Gamma(T^*M) = \\Omega^1(M) " }, { "math_id": 232, "text": " [\\alpha, \\beta] := \\mathcal{L}_{\\pi^\\sharp(\\alpha)} (\\beta) - \\iota_{\\pi^\\sharp(\\beta)} d\\alpha = \\mathcal{L}_{\\pi^\\sharp(\\alpha)} (\\beta) - \\mathcal{L}_{\\pi^\\sharp(\\beta)} (\\alpha) - d\\pi (\\alpha, \\beta). " }, { "math_id": 233, "text": " \\omega \\in \\Omega^2(\\mathcal{G}) " }, { "math_id": 234, "text": " m^*\\omega = {\\rm pr}_1^* \\omega + {\\rm pr}_2^* \\omega " }, { "math_id": 235, "text": " m " }, { "math_id": 236, "text": " (\\mathcal{G} \\times \\mathcal{G} \\times \\mathcal{G}, \\omega \\oplus \\omega \\oplus - \\omega) " }, { "math_id": 237, "text": " \\mathcal{G} " }, { "math_id": 238, "text": " s: (\\mathcal{G}, \\omega) \\to (M,\\pi) " }, { "math_id": 239, "text": " t: (\\mathcal{G}, \\omega) \\to (M,\\pi) " }, { "math_id": 240, "text": " {\\rm Lie}(\\mathcal{G}) " }, { "math_id": 241, "text": " \\Pi(M,\\pi) " }, { "math_id": 242, "text": " (M,0) " }, { "math_id": 243, "text": " T^*M \\rightrightarrows M " }, { "math_id": 244, "text": " M \\times M \\rightrightarrows M " }, { "math_id": 245, "text": " s^* \\omega - t^* \\omega " }, { "math_id": 246, "text": " \\pi^\\sharp = (\\omega^{\\flat})^{-1} " }, { "math_id": 247, "text": " G \\times \\mathfrak{g}^* \\rightrightarrows \\mathfrak{g}^* " }, { "math_id": 248, "text": " T^*G \\cong G \\times \\mathfrak{g}^* " }, { "math_id": 249, "text": " T^*\\mathcal{G} \\rightrightarrows A^* " }, { "math_id": 250, "text": " (M, \\pi) " }, { "math_id": 251, "text": " N \\subseteq M " }, { "math_id": 252, "text": " (N,\\pi_{\\mid N}) \\hookrightarrow (M,\\pi) " }, { "math_id": 253, "text": " X_f " }, { "math_id": 254, "text": " N " }, { "math_id": 255, "text": " \\Phi: (M,\\pi_M) \\to (N,\\pi_N) " }, { "math_id": 256, "text": " \\Phi^{-1} (Q) " }, { "math_id": 257, "text": " X \\subseteq M " }, { "math_id": 258, "text": " X \\cap S " }, { "math_id": 259, "text": " (S,\\omega_S) " }, { "math_id": 260, "text": " X \\subseteq (M,\\pi) " }, { "math_id": 261, "text": " \\pi_X " } ]
https://en.wikipedia.org/wiki?curid=882902
882936
Nambu mechanics
Generalization of Hamiltonian mechanics involving multiple Hamiltonians In mathematics, Nambu mechanics is a generalization of Hamiltonian mechanics involving multiple Hamiltonians. Recall that Hamiltonian mechanics is based upon the flows generated by a smooth Hamiltonian over a symplectic manifold. The flows are symplectomorphisms and hence obey Liouville's theorem. This was soon generalized to flows generated by a Hamiltonian over a Poisson manifold. In 1973, Yoichiro Nambu suggested a generalization involving Nambu–Poisson manifolds with more than one Hamiltonian. Nambu bracket. Specifically, consider a differential manifold M, for some integer "N" ≥ 2; one has a smooth "N"-linear map from "N" copies of "C"∞ ("M") to itself, such that it is completely antisymmetric: the Nambu bracket, formula_0 which acts as a derivation formula_1 whence the Filippov Identities (FI) (evocative of the Jacobi identities, but unlike them, "not" antisymmetrized in all arguments, for "N" ≥ 3): formula_2 formula_3 so that {"f"1, ..., "f""N"−1, •} acts as a generalized derivation over the "N"-fold product {·, ..., ·}. Hamiltonians and flow. There are "N" − 1 Hamiltonians, "H"1, ..., "H""N"−1, generating an incompressible flow, formula_4 The generalized phase-space velocity is divergenceless, enabling Liouville's theorem. The case "N" = 2 reduces to a Poisson manifold, and conventional Hamiltonian mechanics. For larger even "N", the "N" − 1 Hamiltonians identify with the maximal number of independent invariants of motion (cf. Conserved quantity) characterizing a superintegrable system that evolves in "N"-dimensional phase space. Such systems are also describable by conventional Hamiltonian dynamics; but their description in the framework of Nambu mechanics is substantially more elegant and intuitive, as all invariants enjoy the "same" geometrical status as the Hamiltonian: the trajectory in phase space is the intersection of the "N" − 1 hypersurfaces specified by these invariants. Thus, the flow is perpendicular to all "N" − 1 gradients of these Hamiltonians, whence parallel to the generalized cross product specified by the respective Nambu bracket. Nambu mechanics can be extended to fluid dynamics, where the resulting Nambu brackets are non-canonical and the Hamiltonians are identified with Casimirs of the system, such as enstrophy or helicity. Quantizing Nambu dynamics leads to intriguing structures that coincide with conventional quantization ones when superintegrable systems are involved, as they must.
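To make the bracket and the flow concrete, the sketch below works with the canonical "N" = 3 bracket on R3, given by the Jacobian determinant of three functions. It is an illustration of the definitions above, not part of the original text; SymPy, the variable names, and the choice of Hamiltonians are my own assumptions. With one Hamiltonian the squared angular momentum and the other the kinetic energy of a free rigid body, the flow d"f"/d"t" = {"f", "H"1, "H"2} reproduces Euler's rigid-body equations, Nambu's original example.

```python
# Canonical N = 3 Nambu bracket on R^3: {f, g, h} = det d(f, g, h)/d(x, y, z).
# Illustrative sketch only; names and the rigid-body example are assumptions.
import sympy as sp

x, y, z = sp.symbols('x y z')

def nambu_bracket(f, g, h, coords=(x, y, z)):
    """Jacobian-determinant form of the N = 3 Nambu bracket."""
    return sp.Matrix([[sp.diff(F, q) for q in coords] for F in (f, g, h)]).det()

# Derivation (Leibniz) property in the last slot:
# {h1, h2, f*g} = {h1, h2, f} g + f {h1, h2, g}
h1, h2, f, g = x**2 + y**2, y*z, sp.sin(x), z**3
lhs = nambu_bracket(h1, h2, f*g)
rhs = nambu_bracket(h1, h2, f)*g + f*nambu_bracket(h1, h2, g)
assert sp.simplify(lhs - rhs) == 0

# Flow generated by two Hamiltonians: df/dt = {f, H1, H2}.  With H1 = |L|^2/2 and
# H2 the kinetic energy, this gives Euler's equations for the free rigid body,
# e.g. dx/dt = y*z*(1/I3 - 1/I2).
I1, I2, I3 = sp.symbols('I1 I2 I3', positive=True)
H1 = (x**2 + y**2 + z**2) / 2
H2 = (x**2/I1 + y**2/I2 + z**2/I3) / 2
for coord in (x, y, z):
    print(coord, '->', sp.expand(nambu_bracket(coord, H1, H2)))
```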
[ { "math_id": 0, "text": "\\{h_1,\\ldots,h_{N-1}, \\cdot \\} : C^\\infty(M) \\times \\cdots C^\\infty(M) \\rightarrow C^\\infty(M)," }, { "math_id": 1, "text": "\\{h_1,\\ldots,h_{N-1},fg\\} = \\{h_1,\\ldots,h_{N-1},f\\}g + f\\{h_1,\\ldots,h_{N-1},g\\}," }, { "math_id": 2, "text": " \\{ f_1,\\cdots , ~f_{N-1},~ \\{ g_1,\\cdots,~ g_N\\}\\} = \\{ \\{ f_1, \\cdots, ~ f_{N-1},~g_1\\},~g_2,\\cdots,~g_N\\}+\\{g_1, \\{f_1,\\cdots,f_{N-1}, ~g_2\\},\\cdots,g_N\\}+\\dots " }, { "math_id": 3, "text": "+\\{g_1,\\cdots, g_{N-1},\\{f_1,\\cdots,f_{N-1},~g_N\\}\\}, " }, { "math_id": 4, "text": "\\frac{d}{dt}f = \\{f, H_1, \\ldots, H_{N-1}\\}," } ]
https://en.wikipedia.org/wiki?curid=882936
8830237
Subspace theorem
Points of small height in projective space lie in a finite number of hyperplanes In mathematics, the subspace theorem says that points of small height in projective space lie in a finite number of hyperplanes. It is a result obtained by Wolfgang M. Schmidt (1972). Statement. The subspace theorem states that if "L"1, ..., "L""n" are linearly independent linear forms in "n" variables with algebraic coefficients and if ε > 0 is any given real number, then the non-zero integer points "x" with formula_0 lie in a finite number of proper subspaces of Q"n". A quantitative form of the theorem, which determines the number of subspaces containing all solutions, was also obtained by Schmidt, and the theorem was later generalised to allow more general absolute values on number fields. Applications. The theorem may be used to obtain results on Diophantine equations such as Siegel's theorem on integral points and the solution of the S-unit equation. A corollary on Diophantine approximation. The following corollary to the subspace theorem is often itself referred to as the "subspace theorem". If "a"1, ..., "a""n" are algebraic such that 1, "a"1, ..., "a""n" are linearly independent over Q and ε > 0 is any given real number, then there are only finitely many rational "n"-tuples ("x"1/"y", ..., "x""n"/"y") with formula_1 The specialization "n" = 1 gives the Thue–Siegel–Roth theorem. One may also note that the exponent 1 + 1/"n" + ε is best possible by Dirichlet's theorem on Diophantine approximation.
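For the "n" = 1 case of the corollary (the Thue–Siegel–Roth theorem), the scarcity of good approximations is easy to observe numerically. The snippet below is only an illustration of the statement, not part of the theorem or its proof; the exponent, the search bound, and the choice of √2 are arbitrary, and ordinary floating-point precision is adequate at this scale.

```python
# Count rationals x/y with |alpha - x/y| < y**-(2 + eps); for an algebraic
# irrational alpha the corollary (with n = 1) says there are only finitely many.
from math import sqrt

def good_approximations(alpha, eps=0.5, max_y=100_000):
    hits = []
    for y in range(1, max_y + 1):
        x = round(alpha * y)                    # best numerator for this denominator
        if abs(alpha - x / y) < y ** -(2 + eps):
            hits.append((x, y))
    return hits

print(good_approximations(sqrt(2)))     # just [(1, 1), (3, 2), (7, 5)]
# For a rational alpha the hypothesis fails (1 and alpha are dependent over Q),
# and exact hits x/y = alpha recur for every suitable denominator:
print(len(good_approximations(1.5)))    # tens of thousands of solutions
```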
[ { "math_id": 0, "text": "|L_1(x)\\cdots L_n(x)|<|x|^{-\\epsilon}" }, { "math_id": 1, "text": "|a_i-x_i/y|<y^{-(1+1/n+\\epsilon)},\\quad i=1,\\ldots,n." } ]
https://en.wikipedia.org/wiki?curid=8830237
883034
Gerstenhaber algebra
In mathematics and theoretical physics, a Gerstenhaber algebra (sometimes called an antibracket algebra or braid algebra) is an algebraic structure discovered by Murray Gerstenhaber (1963) that combines the structures of a supercommutative ring and a graded Lie superalgebra. It is used in the Batalin–Vilkovisky formalism. It also appears in the generalization of Hamiltonian formalism known as the De Donder–Weyl theory as the algebra of generalized Poisson brackets defined on differential forms. Definition. A Gerstenhaber algebra is a graded-commutative algebra with a Lie bracket of degree −1 satisfying the Poisson identity. Everything is understood to satisfy the usual superalgebra sign conventions. More precisely, the algebra has two products, one written as ordinary multiplication and one written as [,], and a Z-grading called degree (in theoretical physics sometimes called ghost number). The degree of an element "a" is denoted by |"a"|. These satisfy the following identities: the multiplication is associative and graded-commutative; the bracket is graded-antisymmetric and satisfies the graded Jacobi identity with respect to the degree shifted down by one; and the bracket acts as a graded derivation of the multiplication (the Poisson identity). One common way of writing these identities out is sketched below. Gerstenhaber algebras differ from Poisson superalgebras in that the Lie bracket has degree −1 rather than degree 0. The Jacobi identity may also be expressed in a symmetrical form formula_0
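Spelled out, with |"a"| denoting the degree of "a", the identities take the following form under one common sign convention (this is my own summary for orientation; sign conventions vary between authors):

```latex
% One common convention for the Gerstenhaber algebra identities.
\begin{align*}
  |ab| &= |a| + |b|, \qquad |[a,b]| = |a| + |b| - 1,\\
  (ab)c &= a(bc), \qquad ab = (-1)^{|a|\,|b|}\, ba,\\
  [a,b] &= -(-1)^{(|a|-1)(|b|-1)}\,[b,a],\\
  [a,[b,c]] &= [[a,b],c] + (-1)^{(|a|-1)(|b|-1)}\,[b,[a,c]],\\
  [a,bc] &= [a,b]\,c + (-1)^{(|a|-1)\,|b|}\; b\,[a,c].
\end{align*}
```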
[ { "math_id": 0, "text": "(-1)^{(|a|-1)(|c|-1)}[a,[b,c]]+(-1)^{(|b|-1)(|a|-1)}[b,[c,a]]+(-1)^{(|c|-1)(|b|-1)}[c,[a,b]] = 0.\\," } ]
https://en.wikipedia.org/wiki?curid=883034
8837457
Imperative logic
Imperative logic is the field of logic concerned with imperatives. In contrast to declaratives, it is not clear whether imperatives denote propositions or more generally what role truth and falsity play in their semantics. Thus, there is almost no consensus on any aspect of imperative logic. Jørgensen's dilemma. One of a logic's principal concerns is logical validity. It seems that arguments with imperatives can be valid. Consider: P1. Take all the books off the table! P2. "Foundations of Arithmetic" is on the table. C1. Therefore, take "Foundations of Arithmetic" off the table! However, an argument is valid if the conclusion follows from the premises. This means the premises give us reason to believe the conclusion, or, alternatively, the truth of the premises determines truth of the conclusion. Since imperatives are neither true nor false and since they are not proper objects of belief, none of the standard accounts of logical validity apply to arguments containing imperatives. Here is the dilemma. Either arguments containing imperatives can be valid or not. On the one hand, if such arguments can be valid, we need a new or expanded account of logical validity and the concomitant details. Providing such an account has proved challenging. On the other hand, if such arguments cannot be valid (either because such arguments are all invalid or because validity is not a notion that applies to imperatives), then our logical intuitions regarding the above argument (and others similar to it) are mistaken. Since either answer seems problematic, this has come to be known as Jørgensen's dilemma, named after Jørgen Jørgensen (). While this problem was first noted in a footnote by Frege, it received a more developed formulation by Jørgensen. Deontic logic takes the approach of adding a modal operator formula_0 to an argument with imperatives such that a truth-value can be assigned to the proposition. For example, it may be hard to assign a truth-value to the argument "Take all the books off the table!", but formula_0("take all the books off the table"), which means "It is obligatory to take all the books off the table", can be assigned a truth-value, because it is in the indicative mood. Ross's paradox. Alf Ross observed that applying the classical rule of disjunction introduction under the scope of an imperative operator leads to unintuitive (or apparently absurd) results. When applied to simple declaratives, the result appears to be valid deduction. P1. The room is clean. C1. Therefore, the room is clean or grass is green. However, a similar inference does not seem to be valid for imperatives. Consider: P1. Clean your room! C1. Therefore, clean your room or burn the house down! Ross's paradox highlights the challenge faced by anyone who wants to modify or add to the standard account of validity. The challenge is what we mean by a valid imperative inference. For valid declarative inference, the premises give you a reason to believe the conclusion. One might think that for imperative inference, the premises give you a reason to do as the conclusion says; While Ross's paradox seems to suggest otherwise, its severity has been subject of much debate. The semantics for Deontic logic requires that all obligations in the domain of discourse be fulfilled in an acceptable possible world; The conclusion "It is obligatory to clean your room or burn the house down" does not falsify the premise "It is obligatory to clean your room". 
In addition, based on the context, it may also be true that "It is obligatory to not burn the house down", in which case any acceptable possible world must make both "Your room is cleaned" and "The house is not burnt down" true. Some strands of this debate connect it to Hans Kamp's paradox of free choice, in which disjunction introduction leads to absurd conclusions when applied under the scope of a possibility modal. Mixed inferences. The following is an example of a pure imperative inference: P1. Do both of the following: wash the dishes and clean your room! C1. Therefore, clean your room! In this case, all the sentences making up the argument are imperatives. Not all imperative inferences are of this kind. Consider again: P1. Take all the books off the table! P2. "Foundations of Arithmetic" is on the table. C1. Therefore, take "Foundations of Arithmetic" off the table! Notice that this argument is composed of both imperatives and declaratives and has an imperative conclusion. Mixed inferences are of special interest to logicians. For instance, Henri Poincaré held that no imperative conclusion can be validly drawn from a set of premises which does not contain at least one imperative, while R. M. Hare held that no declarative conclusion can be validly drawn from a set of premises unless it can validly be drawn from the declaratives among those premises alone. There is no consensus among logicians about the truth or falsity of these (or similar) claims, and mixed imperative and declarative inference remains a vexed topic. Applications. Aside from intrinsic interest, imperative logic has other applications. The use of imperatives in moral theory should make imperative inference an important subject for ethics and metaethics.
[ { "math_id": 0, "text": "O" } ]
https://en.wikipedia.org/wiki?curid=8837457
8839340
Graph toughness
In graph theory, toughness is a measure of the connectivity of a graph. A graph "G" is said to be "t"-tough for a given real number t if, for every integer "k" > 1, "G" cannot be split into "k" different connected components by the removal of fewer than "tk" vertices. For instance, a graph is 1-tough if the number of components formed by removing a set of vertices is always at most as large as the number of removed vertices. The toughness of a graph is the maximum "t" for which it is "t"-tough; this is a finite number for all finite graphs except the complete graphs, which by convention have infinite toughness. Graph toughness was first introduced by Václav Chvátal (1973). Since then there has been extensive work by other mathematicians on toughness; a recent survey lists 99 theorems and 162 papers on the subject. Examples. Removing k vertices from a path graph can split the remaining graph into as many as "k" + 1 connected components. The maximum ratio of components to removed vertices is achieved by removing one vertex (from the interior of the path) and splitting it into two components. Therefore, paths are 1/2-tough. In contrast, removing k vertices from a cycle graph leaves at most k remaining connected components, and sometimes leaves exactly k connected components, so a cycle is 1-tough. Connection to vertex connectivity. If a graph is t-tough, then one consequence (obtained by setting "k" = 2) is that any set of 2"t" − 1 nodes can be removed without splitting the graph in two. That is, every t-tough graph is also 2"t"-vertex-connected. Connection to Hamiltonicity. Unsolved problem in mathematics: Is there a number formula_0 such that every formula_0-tough graph is Hamiltonian? Chvátal observed that every cycle, and therefore every Hamiltonian graph, is 1-tough; that is, being 1-tough is a necessary condition for a graph to be Hamiltonian. He conjectured that the connection between toughness and Hamiltonicity goes in both directions: that there exists a threshold t such that every t-tough graph is Hamiltonian. Chvátal's original conjecture, that "t" = 2 suffices, would have implied Fleischner's theorem but was later disproved. The existence of a larger toughness threshold for Hamiltonicity remains open, and is sometimes called Chvátal's toughness conjecture. Computational complexity. Testing whether a graph is 1-tough is co-NP-complete. That is, the decision problem whose answer is "yes" for a graph that is not 1-tough, and "no" for a graph that is 1-tough, is NP-complete. The same is true for any fixed positive rational number q: testing whether a graph is q-tough is co-NP-complete.
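For very small graphs the definition can be checked directly by brute force, which is a useful sanity check on the examples above. The sketch below is my own illustration (it assumes the networkx library and is exponential in the number of vertices, so it is not a practical algorithm; as noted in the complexity discussion above, the general problem is co-NP-complete).

```python
# Brute-force toughness: min |S| / c(G - S) over vertex sets S whose removal
# leaves at least two components; infinite for complete graphs by convention.
from itertools import combinations
import networkx as nx

def toughness(G):
    best = float('inf')
    nodes = list(G.nodes)
    for r in range(1, len(nodes)):                 # candidate cutset sizes
        for S in combinations(nodes, r):
            H = G.copy()
            H.remove_nodes_from(S)
            k = nx.number_connected_components(H)
            if k >= 2:                             # S really disconnects G
                best = min(best, r / k)
    return best

print(toughness(nx.path_graph(6)))      # 0.5 -> paths are 1/2-tough
print(toughness(nx.cycle_graph(6)))     # 1.0 -> cycles are 1-tough
print(toughness(nx.complete_graph(4)))  # inf -> complete graphs
```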
[ { "math_id": 0, "text": "t" } ]
https://en.wikipedia.org/wiki?curid=8839340
8840378
Harmonic generation
Nonlinear optical process Harmonic generation (HG, also called multiple harmonic generation) is a nonlinear optical process in which formula_0 photons with the same frequency interact with a nonlinear material, are "combined", and generate a new photon with formula_0 times the energy of the initial photons (equivalently, formula_0 times the frequency and the wavelength divided by formula_0). General process. In a medium having a substantial nonlinear susceptibility, harmonic generation is possible. Note that for even orders (formula_1), the medium must have no center of symmetry (it must be non-centrosymmetric). Because the process requires that many photons are present at the same time and at the same place, the generation process has a low probability of occurring, and this probability decreases with the order formula_0. To generate efficiently, the symmetry of the medium must allow the signal to be amplified (through phase matching, for instance), and the light source must be intense and well controlled spatially (with a collimated laser) and temporally (more signal is obtained if the laser has short pulses). Sum-frequency generation (SFG). A special case in which the number of photons in the interaction is formula_2, but with two different photons at frequencies formula_3 and formula_4. Second-harmonic generation (SHG). A special case in which the number of photons in the interaction is formula_2. It is also a special case of sum-frequency generation in which both photons are at the same frequency formula_5. Third-harmonic generation (THG). A special case in which the number of photons in the interaction is formula_6, if all the photons have the same frequency formula_5. If they have different frequencies, the more general term four-wave mixing is preferred. This process involves the 3rd-order nonlinear susceptibility formula_7. Unlike SHG, it is a volumetric process and has been demonstrated in liquids. However, it is enhanced at interfaces. Materials used for THG. Nonlinear crystals such as BBO (β-BaB2O4) or LBO can be used for THG conversion; in microscopy, THG can also be generated at membranes. Fourth-harmonic generation (FHG or 4HG). A special case in which the number of photons in the interaction is formula_8. First reported around the year 2000, efficient FHG is now possible with powerful lasers. This process involves the 4th-order nonlinear susceptibility formula_9. Materials used for FHG. Some BBO (β-BaB2O4) crystals are used for FHG. Harmonic generation for formula_10. Harmonic generation for formula_11 (5HG) or more is theoretically possible, but the interaction requires a very large number of photons to interact and therefore has a low probability of happening: the signal at higher harmonics will be very weak, and very intense lasers are required to generate it. To generate high harmonics (like formula_12 and so on), the substantially different process of high harmonic generation can be used.
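Since the "n"-th harmonic carries "n" times the photon energy, its wavelength is simply the pump wavelength divided by "n". The short helper below is my own illustration of this bookkeeping; the 1064 nm Nd:YAG pump is just a familiar example and is not taken from the text above.

```python
# n-th harmonic of a pump: wavelength / n, photon energy * n.
H_PLANCK = 6.62607015e-34        # Planck constant, J*s
C_LIGHT = 2.99792458e8           # speed of light, m/s
EV = 1.602176634e-19             # J per eV

def harmonic(wavelength_nm, n):
    lam_nm = wavelength_nm / n
    energy_ev = H_PLANCK * C_LIGHT / (lam_nm * 1e-9) / EV
    return lam_nm, energy_ev

for n in (2, 3, 4):
    lam, ev = harmonic(1064.0, n)          # Nd:YAG fundamental
    print(f"n={n}: {lam:.0f} nm, {ev:.2f} eV")
# n=2: 532 nm, 2.33 eV (SHG)   n=3: 355 nm, 3.50 eV (THG)   n=4: 266 nm, 4.66 eV (FHG)
```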
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "n = 2,4,\\dots" }, { "math_id": 2, "text": "n = 2" }, { "math_id": 3, "text": "\\omega_1" }, { "math_id": 4, "text": "\\omega_2" }, { "math_id": 5, "text": "\\omega" }, { "math_id": 6, "text": "n = 3" }, { "math_id": 7, "text": "\\chi^{(3)}" }, { "math_id": 8, "text": "n = 4" }, { "math_id": 9, "text": "\\chi^{(4)}" }, { "math_id": 10, "text": "n > 4" }, { "math_id": 11, "text": "n = 5" }, { "math_id": 12, "text": "n = 30" } ]
https://en.wikipedia.org/wiki?curid=8840378
884040
Bivector
Sum of directed areas in exterior algebra In mathematics, a bivector or 2-vector is a quantity in exterior algebra or geometric algebra that extends the idea of scalars and vectors. Considering a scalar as a degree-zero quantity and a vector as a degree-one quantity, a bivector is of degree two. Bivectors have applications in many areas of mathematics and physics. They are related to complex numbers in two dimensions and to both pseudovectors and vector quaternions in three dimensions. They can be used to generate rotations in a space of any number of dimensions, and are a useful tool for classifying such rotations. Geometrically, a simple bivector can be interpreted as characterizing a directed plane segment, much as vectors can be thought of as characterizing "directed line segments". The bivector a ∧ b has an "attitude" (direction) of the plane spanned by a and b, has an area that is a scalar multiple of any reference plane segment with the same attitude (and in geometric algebra, it has a "magnitude" equal to the area of the parallelogram with edges a and b), and has an "orientation" being the side of a on which b lies within the plane spanned by a and b. In layman terms, any surface defines the same bivector if it is parallel to the same plane (same attitude), has the same area, and same orientation (see figure). Bivectors are generated by the exterior product on vectors: given two vectors a and b, their exterior product a ∧ b is a bivector, as is any sum of bivectors. Not all bivectors can be expressed as an exterior product without such summation. More precisely, a bivector that can be expressed as an exterior product is called "simple"; in up to three dimensions all bivectors are simple, but in higher dimensions this is not the case. The exterior product of two vectors is alternating, so a ∧ a is the zero bivector, and b ∧ a is the negative of the bivector a ∧ b, producing the opposite orientation. Concepts directly related to bivector are rank-2 antisymmetric tensor and skew-symmetric matrix. History. The bivector was first defined in 1844 by German mathematician Hermann Grassmann in exterior algebra as the result of the exterior product of two vectors. Just the previous year, in Ireland, William Rowan Hamilton had discovered quaternions. Hamilton coined both "vector" and "bivector", the latter in his "Lectures on Quaternions" (1853) as he introduced biquaternions, which have bivectors for their vector parts. It was not until English mathematician William Kingdon Clifford in 1888 added the geometric product to Grassmann's algebra, incorporating the ideas of both Hamilton and Grassmann, and founded Clifford algebra, that the bivector of this article arose. Henry Forder used the term "bivector" to develop exterior algebra in 1941. In the 1890s Josiah Willard Gibbs and Oliver Heaviside developed vector calculus, which included separate cross product and dot products that were derived from quaternion multiplication. The success of vector calculus, and of the book "Vector Analysis" by Gibbs and Wilson, had the effect that the insights of Hamilton and Clifford were overlooked for a long time, since much of 20th century mathematics and physics was formulated in vector terms. Gibbs used vectors to fill the role of bivectors in three dimensions, and used bivector in Hamilton's sense, a use that has sometimes been copied. Today the bivector is largely studied as a topic in geometric algebra, a Clifford algebra over real or complex vector spaces with a quadratic form. 
Its resurgence was led by David Hestenes who, along with others, applied geometric algebra to a range of new applications in physics. Derivation. For this article, the bivector will be considered only in real geometric algebras, which may be applied in most areas of physics. Also unless otherwise stated, all examples have a Euclidean metric and so a positive-definite quadratic form. Geometric algebra and the geometric product. The bivector arises from the definition of the geometric product over a vector space with an associated quadratic form sometimes called the "metric". For vectors a, b and c, the geometric product satisfies the following properties: it is associative, formula_0 it is distributive over vector addition, formula_1 and the square of any vector is a scalar, equal to the quadratic form evaluated on that vector. Scalar product. From associativity, a(ab) = a2b is a scalar times b. When b is not parallel to and hence not a scalar multiple of a, ab cannot be a scalar. But formula_2 is a sum of scalars and so a scalar. From the law of cosines on the triangle formed by the vectors its value is |a| |b| cos "θ", where θ is the angle between the vectors. It is therefore identical to the scalar product between two vectors, and is written the same way, formula_3 It is symmetric, scalar-valued, and can be used to determine the angle between two vectors: in particular if a and b are orthogonal the product is zero. Exterior product. Just as the scalar product can be formulated as the symmetric part of the geometric product, another quantity, the exterior product (sometimes known as the "wedge" or "progressive" product), can be formulated as its antisymmetric part: formula_4 It is antisymmetric in a and b formula_5 and by addition: formula_6 That is, the geometric product is the sum of the symmetric scalar product and the alternating exterior product. To examine the nature of a ∧ b, consider the formula formula_7 which using the Pythagorean trigonometric identity gives the value of (a ∧ b)2 formula_8 With a negative square, it cannot be a scalar or vector quantity, so it is a new sort of object, a bivector. It has magnitude |a| |b| sin "θ", where "θ" is the angle between the vectors, and so is zero for parallel vectors. To distinguish them from vectors, bivectors are written here with bold capitals, for example: formula_9 although other conventions are used, in particular as vectors and bivectors are both elements of the geometric algebra. Properties. The algebra generated by the geometric product (that is, all objects formed by taking repeated sums and geometric products of scalars and vectors) is the geometric algebra over the vector space. For a Euclidean vector space, this algebra is written formula_10 or Cl"n"(R), where n is the dimension of the vector space R"n". Cl"n"(R) is both a vector space and an algebra, generated by all the products between vectors in R"n", so it contains all vectors and bivectors. More precisely, as a vector space it contains the vectors and bivectors as linear subspaces, though not as subalgebras (since the geometric product of two vectors is not generally another vector). The space ⋀2R"n". The space of all bivectors has dimension "n"("n" − 1)/2 and is written ⋀2R"n", and is the second exterior power of the original vector space. Even subalgebra. The subalgebra generated by the bivectors is the "even subalgebra" of the geometric algebra, written Cl0"n"(R). This algebra results from considering all repeated sums and geometric products of scalars and bivectors. It has dimension 2"n"−1 (that is, 2 raised to the power "n" − 1), and contains ⋀2R"n" as a linear subspace. In two and three dimensions the even subalgebra contains only scalars and bivectors, and each is of particular interest.
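Returning to the split of the geometric product introduced above: in three dimensions a bivector is determined by its three components on (e23, e31, e12), and both the decomposition ab = a · b + a ∧ b and the negative square of a ∧ b are easy to check numerically. The snippet below is my own sketch (NumPy and the random test vectors are assumptions); it verifies that (a · b)2 − (a ∧ b)2 = a2b2, using the fact that the square of a simple bivector is minus its squared magnitude.

```python
# Numerical check of ab = a.b + a^b in R^3, with the bivector a^b stored by its
# components on the basis bivectors (e23, e31, e12).
import numpy as np

def wedge(a, b):
    """Components of a ^ b on (e23, e31, e12)."""
    return np.array([a[1]*b[2] - a[2]*b[1],
                     a[2]*b[0] - a[0]*b[2],
                     a[0]*b[1] - a[1]*b[0]])

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)
B = wedge(a, b)

# (a.b)^2 - (a^b)^2 = |a|^2 |b|^2, since (a^b)^2 = -|B|^2 for a simple bivector
lhs = np.dot(a, b)**2 + B @ B
rhs = np.dot(a, a) * np.dot(b, b)
assert np.isclose(lhs, rhs)
print("bivector components (e23, e31, e12):", B)
```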
In two dimensions, the even subalgebra is isomorphic to the complex numbers, C, while in three it is isomorphic to the quaternions, H. The even subalgebra contains the rotations in any dimension. Magnitude. As noted in the previous section the magnitude of a simple bivector, that is one that is the exterior product of two vectors a and b, is , where θ is the angle between the vectors. It is written , where B is the bivector. For general bivectors, the magnitude can be calculated by taking the norm of the bivector considered as a vector in the space ⋀2R"n". If the magnitude is zero then all the bivector's components are zero, and the bivector is the zero bivector which as an element of the geometric algebra equals the scalar zero. Unit bivectors. A unit bivector is one with unit magnitude. Such a bivector can be derived from any non-zero bivector by dividing the bivector by its magnitude, that is formula_11 Of particular utility are the unit bivectors formed from the products of the standard basis of the vector space. If e"i" and e"j" are distinct basis vectors then the product e"i" ∧ e"j" is a bivector. As e"i" and e"j" are orthogonal, e"i" ∧ e"j" = e"i"e"j", written e"ij", and has unit magnitude as the vectors are unit vectors. The set of all bivectors produced from the basis in this way form a basis for ⋀2R"n". For instance, in four dimensions the basis for ⋀2R4 is (e1e2, e1e3, e1e4, e2e3, e2e4, e3e4) or (e12, e13, e14, e23, e24, e34). Simple bivectors. The exterior product of two vectors is a bivector, but not all bivectors are exterior products of two vectors. For example, in four dimensions the bivector formula_12 cannot be written as the exterior product of two vectors. A bivector that can be written as the exterior product of two vectors is simple. In two and three dimensions all bivectors are simple, but not in four or more dimensions; in four dimensions every bivector is the sum of at most two exterior products. A bivector has a real square if and only if it is simple, and only simple bivectors can be represented geometrically by a directed plane area. Product of two bivectors. The geometric product of two bivectors, A and B, is formula_13 The quantity A · B is the scalar-valued scalar product, while A ∧ B is the grade 4 exterior product that arises in four or more dimensions. The quantity A × B is the bivector-valued commutator product, given by formula_14 The space of bivectors ⋀2R"n" is a Lie algebra over R, with the commutator product as the Lie bracket. The full geometric product of bivectors generates the even subalgebra. Of particular interest is the product of a bivector with itself. As the commutator product is antisymmetric the product simplifies to formula_15 If the bivector is "simple" the last term is zero and the product is the scalar-valued A · A, which can be used as a check for simplicity. In particular the exterior product of bivectors only exists in four or more dimensions, so all bivectors in two and three dimensions are simple. General bivectors and matrices. Bivectors are isomorphic to skew-symmetric matrices in any number of dimensions. For example, the general bivector "B"23e23 + "B"31e31 + "B"12e12 in three dimensions maps to the matrix formula_16 This multiplied by vectors on both sides gives the same vector as the product of a vector and bivector minus the exterior product; an example is the angular velocity tensor. Skew symmetric matrices generate orthogonal matrices with determinant 1 through the exponential map. 
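The correspondence with skew-symmetric matrices and the exponential map can be seen directly with a few lines of code. The sketch below is my own illustration (it assumes NumPy and SciPy and uses arbitrary component values): the bivector "B"23e23 + "B"31e31 + "B"12e12 is packed into the matrix "M""B" shown above, and its matrix exponential is an orthogonal matrix with determinant 1, that is, a rotation.

```python
import numpy as np
from scipy.linalg import expm

def skew_from_bivector(B23, B31, B12):
    """The skew-symmetric matrix M_B associated with B23 e23 + B31 e31 + B12 e12."""
    return np.array([[0.0,   B12, -B31],
                     [-B12,  0.0,  B23],
                     [ B31, -B23,  0.0]])

M = skew_from_bivector(0.3, -0.5, 1.1)
R = expm(M)                                   # exponential map: bivector -> rotation

assert np.allclose(R @ R.T, np.eye(3))        # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)      # determinant 1: a proper rotation
print(np.round(R, 3))
```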
In particular, applying the exponential map to a bivector that is associated with a rotation yields a rotation matrix. The rotation matrix "M""R" given by the skew-symmetric matrix above is formula_17 The rotation described by "M""R" is the same as that described by the rotor R given by formula_18 and the matrix "M""R" can be also calculated directly from rotor R. In three dimensions, this is given by formula_19 Bivectors are related to the eigenvalues of a rotation matrix. Given a rotation matrix M the eigenvalues can be calculated by solving the characteristic equation for that matrix 0 = det("M" − "λI"). By the fundamental theorem of algebra this has three roots (only one of which is real as there is only one eigenvector, i.e., the axis of rotation). The other roots must be a complex conjugate pair. They have unit magnitude so purely imaginary logarithms, equal to the magnitude of the bivector associated with the rotation, which is also the angle of rotation. The eigenvectors associated with the complex eigenvalues are in the plane of the bivector, so the exterior product of two non-parallel eigenvectors results in the bivector (or a multiple thereof). Two dimensions. When working with coordinates in geometric algebra it is usual to write the basis vectors as (e1, e2, ...), a convention that will be used here. A vector in real two-dimensional space R2 can be written a "a"1e1 + "a"2e2, where "a"1 and "a"2 are real numbers, e1 and e2 are orthonormal basis vectors. The geometric product of two such vectors is formula_20 This can be split into the symmetric, scalar-valued, scalar product and an antisymmetric, bivector-valued exterior product: formula_21 All bivectors in two dimensions are of this form, that is multiples of the bivector e1e2, written e12 to emphasise it is a bivector rather than a vector. The magnitude of e12 is 1, with formula_22 so it is called the unit bivector. The term unit bivector can be used in other dimensions but it is only uniquely defined (up to a sign) in two dimensions and all bivectors are multiples of e12. As the highest grade element of the algebra e12 is also the pseudoscalar which is given the symbol i. Complex numbers. With the properties of negative square and unit magnitude, the unit bivector can be identified with the imaginary unit from complex numbers. The bivectors and scalars together form the even subalgebra of the geometric algebra, which is isomorphic to the complex numbers C. The even subalgebra has basis (1, e12), the whole algebra has basis (1, e1, e2, e12). The complex numbers are usually identified with the coordinate axes and two-dimensional vectors, which would mean associating them with the vector elements of the geometric algebra. There is no contradiction in this, as to get from a general vector to a complex number an axis needs to be identified as the real axis, e1 say. This multiplies by all vectors to generate the elements of even subalgebra. All the properties of complex numbers can be derived from bivectors, but two are of particular interest. First as with complex numbers products of bivectors and so the even subalgebra are commutative. This is only true in two dimensions, so properties of the bivector in two dimensions that depend on commutativity do not usually generalise to higher dimensions. Second a general bivector can be written formula_23 where θ is a real number. 
Putting this into the Taylor series for the exponential map and using the property e122 = −1 results in a bivector version of Euler's formula, formula_24 which when multiplied by any vector rotates it through an angle "θ" about the origin: formula_25 The product of a vector with a bivector in two dimensions is anticommutative, so the following products all generate the same rotation formula_26 Of these the last product is the one that generalises into higher dimensions. The quantity needed is called a rotor and is given the symbol R, so in two dimensions a rotor that rotates through angle θ can be written formula_27 and the rotation it generates is formula_28 Three dimensions. In three dimensions the geometric product of two vectors is formula_29 This can be split into the symmetric, scalar-valued, scalar product and the antisymmetric, bivector-valued, exterior product: formula_30 In three dimensions all bivectors are simple and so the result of an exterior product. The unit bivectors e23, e31 and e12 form a basis for the space of bivectors ⋀2R3, which is itself a three-dimensional linear space. So if a general bivector is: formula_31 they can be added like vectors formula_32 while when multiplied they produce the following formula_33 which can be split into symmetric scalar and antisymmetric bivector parts as follows formula_34 The exterior product of two bivectors in three dimensions is zero. A bivector B can be written as the product of its magnitude and a unit bivector, so writing β for and using the Taylor series for the exponential map it can be shown that formula_35 This is another version of Euler's formula, but with a general bivector in three dimensions. Unlike in two dimensions bivectors are not commutative so properties that depend on commutativity do not apply in three dimensions. For example, in general exp(A + B) ≠ exp(A) exp(B) in three (or more) dimensions. The full geometric algebra in three dimensions, Cl3(R), has basis (1, e1, e2, e3, e23, e31, e12, e123). The element e123 is a trivector and the pseudoscalar for the geometry. Bivectors in three dimensions are sometimes identified with pseudovectors to which they are related, as discussed below. Quaternions. Bivectors are not closed under the geometric product, but the even subalgebra is. In three dimensions it consists of all scalar and bivector elements of the geometric algebra, so a general element can be written for example "a" + A, where a is the scalar part and A is the bivector part. It is written Cl and has basis (1, e23, e31, e12). The product of two general elements of the even subalgebra is formula_36 The even subalgebra, that is the algebra consisting of scalars and bivectors, is isomorphic to the quaternions, H. This can be seen by comparing the basis to the quaternion basis, or from the above product which is identical to the quaternion product, except for a change of sign which relates to the negative products in the bivector scalar product A · B. Other quaternion properties can be similarly related to or derived from geometric algebra. This suggests that the usual split of a quaternion into scalar and vector parts would be better represented as a split into scalar and bivector parts; if this is done the quaternion product is merely the geometric product. It also relates quaternions in three dimensions to complex numbers in two, as each is isomorphic to the even subalgebra for the dimension, a relationship that generalises to higher dimensions. Rotation vector. 
The rotation vector, from the axis–angle representation of rotations, is a compact way of representing rotations in three dimensions. In its most compact form, it consists of a vector, the product of a unit vector "ω" that is the axis of rotation with the (signed) angle of rotation "θ", so that the magnitude of the overall rotation vector "θω" equals the (unsigned) rotation angle. The quaternion associated with the rotation is formula_37 In geometric algebra the rotation is represented by a bivector. This can be seen in its relation to quaternions. Let Ω be a unit bivector in the plane of rotation, and let θ be the angle of rotation. Then the rotation bivector is Ωθ". The quaternion closely corresponds to the exponential of half of the bivector Ωθ". That is, the components of the quaternion correspond to the scalar and bivector parts of the following expression: formula_38 The exponential can be defined in terms of its power series, and easily evaluated using the fact that Ω squared is −1. So rotations can be represented by bivectors. Just as quaternions are elements of the geometric algebra, they are related by the exponential map in that algebra. Rotors. The bivector Ω"θ" generates a rotation through the exponential map. The even elements generated rotate a general vector in three dimensions in the same way as quaternions: formula_39 As in two dimensions, the quantity exp(−Ωθ") is called a rotor and written R. The quantity exp(Ωθ") is then "R"−1, and they generate rotations as formula_40 This is identical to two dimensions, except here rotors are four-dimensional objects isomorphic to the quaternions. This can be generalised to all dimensions, with rotors, elements of the even subalgebra with unit magnitude, being generated by the exponential map from bivectors. They form a double cover over the rotation group, so the rotors R and −"R" represent the same rotation. Axial vectors. The rotation vector is an example of an axial vector. Axial vectors, or pseudovectors, are vectors with the special feature that their coordinates undergo a sign change relative to the usual vectors (also called "polar vectors") under inversion through the origin, reflection in a plane, or other orientation-reversing linear transformation. Examples include quantities like torque, angular momentum and vector magnetic fields. Quantities that would use axial vectors in vector algebra are properly represented by bivectors in geometric algebra. More precisely, if an underlying orientation is chosen, the axial vectors are naturally identified with the usual vectors; the Hodge dual then gives the isomorphism between axial vectors and bivectors, so each axial vector is associated with a bivector and vice versa; that is formula_41 where &amp;NoBreak;}&amp;NoBreak; is the Hodge star. Note that if the underlying orientation is reversed by inversion through the origin, both the identification of the axial vectors with the usual vectors and the Hodge dual change sign, but the bivectors don't budge. Alternately, using the unit pseudoscalar in Cl3(R), "i" = e1e2e3 gives formula_42 This is easier to use as the product is just the geometric product. But it is antisymmetric because (as in two dimensions) the unit pseudoscalar i squares to −1, so a negative is needed in one of the products. 
This relationship extends to operations like the vector-valued cross product and bivector-valued exterior product, as when written as determinants they are calculated in the same way: formula_43 so are related by the Hodge dual: formula_44 Bivectors have a number of advantages over axial vectors. They better disambiguate axial and polar vectors, that is the quantities represented by them, so it is clearer which operations are allowed and what their results are. For example, the inner product of a polar vector and an axial vector resulting from the cross product in the triple product should result in a pseudoscalar, a result which is more obvious if the calculation is framed as the exterior product of a vector and bivector. They generalise to other dimensions; in particular bivectors can be used to describe quantities like torque and angular momentum in two as well as three dimensions. Also, they closely match geometric intuition in a number of ways, as seen in the next section. Geometric interpretation. As suggested by their name and that of the algebra, one of the attractions of bivectors is that they have a natural geometric interpretation. This can be described in any dimension but is best done in three where parallels can be drawn with more familiar objects, before being applied to higher dimensions. In two dimensions the geometric interpretation is trivial, as the space is two-dimensional so has only one plane, and all bivectors are associated with it differing only by a scale factor. All bivectors can be interpreted as planes, or more precisely as directed plane segments. In three dimensions there are three properties of a bivector that can be interpreted geometrically: In three dimensions all bivectors can be generated by the exterior product of two vectors. If the bivector B = a ∧ b then the magnitude of B is formula_45 where θ is the angle between the vectors. This is the area of the parallelogram with edges a and b, as shown in the diagram. One interpretation is that the area is swept out by b as it moves along a. The exterior product is antisymmetric, so reversing the order of a and b to make a move along b results in a bivector with the opposite direction that is the negative of the first. The plane of bivector a ∧ b contains both a and b so they are both parallel to the plane. Bivectors and axial vectors are related by Hodge dual. In a real vector space the Hodge dual relates a subspace to its orthogonal complement, so if a bivector is represented by a plane then the axial vector associated with it is simply the plane's surface normal. The plane has two normals, one on each side, giving the two possible orientations for the plane and bivector. This relates the cross product to the exterior product. It can also be used to represent physical quantities, like torque and angular momentum. In vector algebra they are usually represented by vectors, perpendicular to the plane of the force, linear momentum or displacement that they are calculated from. But if a bivector is used instead the plane is the plane of the bivector, so is a more natural way to represent the quantities and the way they act. It also unlike the vector representation generalises into other dimensions. The product of two bivectors has a geometric interpretation. For non-zero bivectors A and B the product can be split into symmetric and antisymmetric parts as follows: formula_46 Like vectors these have magnitudes |A · B| = |A| |B| cos "θ" and |A × B| = |A| |B| sin "θ", where θ is the angle between the planes. 
In three dimensions it is the same as the angle between the normal vectors dual to the planes, and it generalises to some extent in higher dimensions. Bivectors can be added together as areas. Given two non-zero bivectors B and C in three dimensions it is always possible to find a vector that is contained in both, a say, so the bivectors can be written as exterior products involving a: formula_47 This can be interpreted geometrically as seen in the diagram: the two areas sum to give a third, with the three areas forming faces of a prism with a, b, c and b + c as edges. This corresponds to the two ways of calculating the area using the distributivity of the exterior product: formula_48 This only works in three dimensions as it is the only dimension where a vector parallel to both bivectors must exist. In higher dimensions bivectors generally are not associated with a single plane, or if they are (simple bivectors) two bivectors may have no vector in common, and so sum to a non-simple bivector. Four dimensions. In four dimensions, the basis elements for the space ⋀2R4 of bivectors are (e12, e13, e14, e23, e24, e34), so a general bivector is of the form formula_49 Orthogonality. In four dimensions, the Hodge dual of a bivector is a bivector, and the space ⋀2R4 is dual to itself. Normal vectors are not unique, instead every plane is orthogonal to all the vectors in its Hodge dual space. This can be used to partition the bivectors into two 'halves', in the following way. We have three pairs of orthogonal bivectors: (e12, e34), (e13, e24) and (e14, e23). There are four distinct ways of picking one bivector from each of the first two pairs, and once these first two are picked their sum yields the third bivector from the other pair. For example, (e12, e13, e14) and (e23, e24, e34). Simple bivectors in 4D. In four dimensions bivectors are generated by the exterior product of vectors in R4, but with one important difference from R3 and R2. In four dimensions not all bivectors are simple. There are bivectors such as e12 + e34 that cannot be generated by the exterior product of two vectors. This also means they do not have a real, that is scalar, square. In this case formula_50 The element e1234 is the pseudoscalar in Cl4, distinct from the scalar, so the square is non-scalar. All bivectors in four dimensions can be generated using at most two exterior products and four vectors. The above bivector can be written as formula_51 Similarly, every bivector can be written as the sum of two simple bivectors. It is useful to choose two orthogonal bivectors for this, and this is always possible to do. Moreover, for a generic bivector the choice of simple bivectors is unique, that is, there is only one way to decompose into orthogonal bivectors; the only exception is when the two orthogonal bivectors have equal magnitudes (as in the above example): in this case the decomposition is not unique. The decomposition is always unique in the case of simple bivectors, with the added bonus that one of the orthogonal parts is zero. Rotations in R4. As in three dimensions bivectors in four dimension generate rotations through the exponential map, and all rotations can be generated this way. As in three dimensions if B is a bivector then the rotor R is exp B and rotations are generated in the same way: formula_52 The rotations generated are more complex though. They can be categorised as follows: "simple" rotations are those that fix a plane in 4D, and rotate by an angle "about" this plane. 
"double" rotations have only one fixed point, the origin, and rotate through two angles about two orthogonal planes. In general the angles are different and the planes are uniquely specified "isoclinic" rotations are double rotations where the angles of rotation are equal. In this case the planes about which the rotation is taking place are not unique. These are generated by bivectors in a straightforward way. Simple rotations are generated by simple bivectors, with the fixed plane the dual or orthogonal to the plane of the bivector. The rotation can be said to take place about that plane, in the plane of the bivector. All other bivectors generate double rotations, with the two angles of the rotation equalling the magnitudes of the two simple bivectors that the non-simple bivector is composed of. Isoclinic rotations arise when these magnitudes are equal, in which case the decomposition into two simple bivectors is not unique. Bivectors in general do not commute, but one exception is orthogonal bivectors and exponents of them. So if the bivector B B1 + B2, where B1 and B2 are orthogonal simple bivectors, is used to generate a rotation it decomposes into two simple rotations that commute as follows: formula_53 It is always possible to do this as all bivectors can be expressed as sums of orthogonal bivectors. Spacetime rotations. Spacetime is a mathematical model for our universe used in special relativity. It consists of three space dimensions and one time dimension combined into a single four-dimensional space. It is naturally described using geometric algebra and bivectors, with the Euclidean metric replaced by a Minkowski metric. That algebra is identical to that of Euclidean space, except the signature is changed, so formula_54 (Note the order and indices above are not universal – here e4 is the time-like dimension). The geometric algebra is Cl3,1(R), and the subspace of bivectors is ⋀2R3,1. The simple bivectors are of two types. The simple bivectors e23, e31 and e12 have negative squares and span the bivectors of the three-dimensional subspace corresponding to Euclidean space, R3. These bivectors generate ordinary rotations in R3. The simple bivectors e14, e24 and e34 have positive squares and as planes span a space dimension and the time dimension. These also generate rotations through the exponential map, but instead of trigonometric functions, hyperbolic functions are needed, which generates a rotor as follows: formula_55 where Ω is the bivector (e14, etc.), identified via the metric with an antisymmetric linear transformation of R3,1. These are Lorentz boosts, expressed in a particularly compact way, using the same kind of algebra as in R3 and R4. In general all spacetime rotations are generated from bivectors through the exponential map, that is, a general rotor generated by bivector A is of the form formula_56 The set of all rotations in spacetime form the Lorentz group, and from them most of the consequences of special relativity can be deduced. More generally this show how transformations in Euclidean space and spacetime can all be described using the same kind of algebra. Maxwell's equations. Maxwell's equations are used in physics to describe the relationship between electric and magnetic fields. Normally given as four differential equations they have a particularly compact form when the fields are expressed as a spacetime bivector from ⋀2R3,1. 
If the electric and magnetic fields in R3 are "E" and "B" then the "electromagnetic bivector" is formula_57 where e4 is again the basis vector for the time-like dimension and c is the speed of light. The product of "B" with the three-dimensional unit pseudoscalar yields the bivector that is Hodge dual to "B" in three dimensions, as discussed above, while the product of "E" with e4, being a product of orthogonal vectors, is also bivector-valued. As a whole it is the electromagnetic tensor expressed more compactly as a bivector, and is used as follows. First it is related to the 4-current J, a vector quantity given by formula_58 where "j" is the current density and ρ is the charge density. They are related by a differential operator ∂, which is formula_59 The operator ∇ is a differential operator in geometric algebra, acting on the space dimensions and given by ∇M = ∇·M + ∇∧M. When applied to vectors ∇·M is the divergence and ∇∧M is the curl but with a bivector rather than vector result, that is dual in three dimensions to the curl. For a general quantity M they act as grade lowering and raising differential operators. In particular if M is a scalar then this operator is just the gradient, and it can be thought of as a geometric algebraic del operator. Together these can be used to give a particularly compact form for Maxwell's equations with sources: formula_60 This equation, when decomposed according to geometric algebra, using geometric products which have both grade raising and grade lowering effects, is equivalent to Maxwell's four equations. It is also related to the electromagnetic four-potential, a vector A given by formula_61 where "A" is the magnetic vector potential (a vector in R3) and V is the electric potential. It is related to the electromagnetic bivector as follows formula_62 using the same differential operator "∂". Higher dimensions. As has been suggested in earlier sections much of geometric algebra generalises well into higher dimensions. The geometric algebra for the real space R"n" is Cl"n"(R), and the subspace of bivectors is ⋀2R"n". The number of simple bivectors needed to form a general bivector rises with the dimension, so for n odd it is ("n" − 1) / 2, for n even it is "n" / 2. So for four and five dimensions only two simple bivectors are needed but three are required for six and seven dimensions. For example, in six dimensions with standard basis (e1, e2, e3, e4, e5, e6) the bivector formula_63 is the sum of three simple bivectors but no less. As in four dimensions it is always possible to find orthogonal simple bivectors for this sum. Rotations in higher dimensions. As in three and four dimensions rotors are generated by the exponential map, so formula_64 is the rotor generated by bivector B. Simple rotations, which take place in a plane of rotation around a fixed blade of dimension ("n" − 2), are generated by simple bivectors, while other bivectors generate more complex rotations which can be described in terms of the simple bivectors they are sums of, each related to a plane of rotation. All bivectors can be expressed as the sum of orthogonal and commutative simple bivectors, so rotations can always be decomposed into a set of commutative rotations about the planes associated with these bivectors. The group of the rotors in n dimensions is the spin group, Spin("n"). One notable feature, related to the number of simple bivectors and so rotation planes, is that in odd dimensions every rotation has a fixed axis – it is misleading to call it an axis of rotation as in higher dimensions rotations are taking place in multiple planes orthogonal to it.
This is related to bivectors, as bivectors in odd dimensions decompose into the same number of bivectors as the even dimension below, so have the same number of planes, but one extra dimension. As each plane generates rotations in two dimensions in odd dimensions there must be one dimension, that is an axis, that is not being rotated. Bivectors are also related to the rotation matrix in n dimensions. As in three dimensions the characteristic equation of the matrix can be solved to find the eigenvalues. In odd dimensions this has one real root, with eigenvector the fixed axis, and in even dimensions it has no real roots, so either all or all but one of the roots are complex conjugate pairs. Each pair is associated with a simple component of the bivector associated with the rotation. In particular, the log of each pair is the magnitude up to a sign, while eigenvectors generated from the roots are parallel to and so can be used to generate the bivector. In general the eigenvalues and bivectors are unique, and the set of eigenvalues gives the full decomposition into simple bivectors; if roots are repeated then the decomposition of the bivector into simple bivectors is not unique. Projective geometry. Geometric algebra can be applied to projective geometry in a straightforward way. The geometric algebra used is Cl"n"(R), "n" ≥ 3, the algebra of the real vector space R"n". This is used to describe objects in the real projective space RP"n"−1. The non-zero vectors in Cl"n"(R) or R"n" are associated with points in the projective space so vectors that differ only by a scale factor, so their exterior product is zero, map to the same point. Non-zero simple bivectors in ⋀2R"n" represent lines in RP"n"−1, with bivectors differing only by a (positive or negative) scale factor representing the same line. A description of the projective geometry can be constructed in the geometric algebra using basic operations. For example, given two distinct points in RP"n"−1 represented by vectors a and b the line containing them is given by a ∧ b (or b ∧ a). Two lines intersect in a point if A ∧ B 0 for their bivectors A and B. This point is given by the vector formula_65 The operation "∨" is the meet, which can be defined as above in terms of the join, "J" A ∧ B for non-zero A ∧ B. Using these operations projective geometry can be formulated in terms of geometric algebra. For example, given a third (non-zero) bivector C the point p lies on the line given by C if and only if formula_66 So the condition for the lines given by A, B and C to be collinear is formula_67 which in Cl3(R) and RP2 simplifies to formula_68 where the angle brackets denote the scalar part of the geometric product. In the same way all projective space operations can be written in terms of geometric algebra, with bivectors representing general lines in projective space, so the whole geometry can be developed using geometric algebra. Tensors and matrices. As noted above a bivector can be written as a skew-symmetric matrix, which through the exponential map generates a rotation matrix that describes the same rotation as the rotor, also generated by the exponential map but applied to the vector. But it is also used with other bivectors such as the angular velocity tensor and the electromagnetic tensor, respectively a 3×3 and 4×4 skew-symmetric matrix or tensor. Real bivectors in ⋀2R"n" are isomorphic to "n" × "n" skew-symmetric matrices, or alternately to antisymmetric tensors of degree 2 on R"n". 
While bivectors are isomorphic to vectors (via the dual) in three dimensions they can be represented by skew-symmetric matrices in any dimension. This is useful for relating bivectors to problems described by matrices, so they can be re-cast in terms of bivectors, given a geometric interpretation, then often solved more easily or related geometrically to other bivector problems. More generally, every real geometric algebra is isomorphic to a matrix algebra. These contain bivectors as a subspace, though often in a way which is not especially useful. These matrices are mainly of interest as a way of classifying Clifford algebras.
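As a closing illustration of the four-dimensional discussion above: whether a bivector in ⋀2R4 is simple can be read off from its components, since B ∧ B vanishes exactly when B is an exterior product of two vectors. The sketch below is my own (the component ordering and the test values are arbitrary choices); the quantity it computes is the coefficient of e1234 in B ∧ B, which is the classical Plücker relation.

```python
# Simplicity test for a bivector in R^4, components ordered (B12, B13, B14, B23, B24, B34).
def wedge_square_e1234(B):
    """Coefficient of e1234 in B ^ B."""
    B12, B13, B14, B23, B24, B34 = B
    return 2 * (B12*B34 - B13*B24 + B14*B23)

def is_simple(B, tol=1e-12):
    return abs(wedge_square_e1234(B)) < tol

print(is_simple([1, 0, 0, 0, 0, 0]))   # e12: simple -> True
print(is_simple([1, 0, 0, 0, 0, 1]))   # e12 + e34: not simple -> False

# The exterior product of two vectors is always simple:
a, b = [1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 1.0, 2.0]
B = [a[0]*b[1]-a[1]*b[0], a[0]*b[2]-a[2]*b[0], a[0]*b[3]-a[3]*b[0],
     a[1]*b[2]-a[2]*b[1], a[1]*b[3]-a[3]*b[1], a[2]*b[3]-a[3]*b[2]]
print(is_simple(B))                    # True
```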
[ { "math_id": 0, "text": " (\\mathbf{ab})\\mathbf{c} = \\mathbf{a}(\\mathbf{bc}) " }, { "math_id": 1, "text": "\\begin{align}\n \\mathbf{a}(\\mathbf{b} + \\mathbf{c}) &= \\mathbf{ab} + \\mathbf{ac} \\\\\n (\\mathbf{b} + \\mathbf{c})\\mathbf{a} &= \\mathbf{ba} + \\mathbf{ca}\n\\end{align}" }, { "math_id": 2, "text": "\\tfrac{1}{2}(\\mathbf{ab} + \\mathbf{ba}) = \\tfrac{1}{2} \\left((\\mathbf{a} + \\mathbf{b})^2 - \\mathbf{a}^2 - \\mathbf{b}^2\\right)" }, { "math_id": 3, "text": "\\mathbf{a} \\cdot \\mathbf{b} = \\tfrac{1}{2}(\\mathbf{ab} + \\mathbf{ba})." }, { "math_id": 4, "text": "\\mathbf{a} \\wedge \\mathbf{b} = \\tfrac{1}{2}(\\mathbf{ab} - \\mathbf{ba})" }, { "math_id": 5, "text": "\\mathbf{b} \\wedge \\mathbf{a} = \\tfrac{1}{2}(\\mathbf{ba} - \\mathbf{ab}) = -\\tfrac{1}{2}(\\mathbf{ab} - \\mathbf{ba}) = -\\mathbf{a} \\wedge \\mathbf{b}" }, { "math_id": 6, "text": "\\mathbf{a} \\cdot \\mathbf{b} + \\mathbf{a} \\wedge \\mathbf{b} = \\tfrac{1}{2}(\\mathbf{ab} + \\mathbf{ba}) + \\tfrac{1}{2}(\\mathbf{ab} - \\mathbf{ba}) = \\mathbf{ab}" }, { "math_id": 7, "text": "(\\mathbf{a} \\cdot \\mathbf{b})^2 - (\\mathbf{a} \\wedge \\mathbf{b})^2 = \\mathbf{a}^2\\mathbf{b}^2," }, { "math_id": 8, "text": "(\\mathbf{a} \\wedge \\mathbf{b})^2 = (\\mathbf{a} \\cdot \\mathbf{b})^2 - \\mathbf{a}^2\\mathbf{b}^2 = \\left|\\mathbf{a}\\right|^2\\left|\\mathbf{b}\\right|^2( \\cos^2 \\theta - 1) = -\\left|\\mathbf{a}\\right|^2\\left|\\mathbf{b}\\right|^2\\sin^2 \\theta" }, { "math_id": 9, "text": "\\mathbf{A} = \\mathbf{a} \\wedge \\mathbf{b} = -\\mathbf{b} \\wedge \\mathbf{a} \\,," }, { "math_id": 10, "text": "\\mathcal{G}_n" }, { "math_id": 11, "text": "\\frac{\\mathbf{B}}{\\left\\vert\\mathbf{B}\\right\\vert}." }, { "math_id": 12, "text": "\\mathbf{B} = \\mathbf{e}_1 \\wedge \\mathbf{e}_2 + \\mathbf{e}_3 \\wedge \\mathbf{e}_4 = \\mathbf{e}_1\\mathbf{e}_2 + \\mathbf{e}_3\\mathbf{e}_4 = \\mathbf{e}_{12} + \\mathbf{e}_{34}" }, { "math_id": 13, "text": "\\mathbf{A}\\mathbf{B} = \\mathbf{A} \\cdot \\mathbf{B} + \\mathbf{A} \\times \\mathbf{B} + \\mathbf{A} \\wedge \\mathbf{B}." }, { "math_id": 14, "text": "\\mathbf{A} \\times \\mathbf{B} = \\tfrac{1}{2}(\\mathbf{AB} - \\mathbf{BA})," }, { "math_id": 15, "text": "\\mathbf{A}\\mathbf{A} = \\mathbf{A} \\cdot \\mathbf{A} + \\mathbf{A} \\wedge \\mathbf{A}." }, { "math_id": 16, "text": "M_B = \\begin{pmatrix} 0 & B_{12} & -B_{31} \\\\ -B_{12} & 0 & B_{23}\\\\ B_{31} & -B_{23} & 0 \\end{pmatrix}." }, { "math_id": 17, "text": "M_R = \\exp{M_B}." }, { "math_id": 18, "text": "R = \\exp{\\tfrac{1}{2} B}," }, { "math_id": 19, "text": "M_R = \\begin{pmatrix} (R\\mathbf{e}_1R^{-1}) \\cdot \\mathbf{e}_1 & (R\\mathbf{e}_2R^{-1}) \\cdot \\mathbf{e}_1 & (R\\mathbf{e}_3R^{-1}) \\cdot \\mathbf{e}_1 \\\\ (R\\mathbf{e}_1R^{-1}) \\cdot \\mathbf{e}_2 & (R\\mathbf{e}_2R^{-1}) \\cdot \\mathbf{e}_2 & (R\\mathbf{e}_3R^{-1}) \\cdot \\mathbf{e}_2 \\\\ (R\\mathbf{e}_1R^{-1}) \\cdot \\mathbf{e}_3 & (R\\mathbf{e}_2R^{-1}) \\cdot \\mathbf{e}_3 & (R\\mathbf{e}_3R^{-1}) \\cdot \\mathbf{e}_3 \\end{pmatrix}." }, { "math_id": 20, "text": "\\begin{align} \\mathbf{a}\\mathbf{b} &= (a_1\\mathbf{e}_1 + a_2\\mathbf{e}_2)(b_1\\mathbf{e}_1 + b_2\\mathbf{e}_2) \\\\&= a_1b_1\\mathbf{e}_1\\mathbf{e}_1 + a_1b_2\\mathbf{e}_1\\mathbf{e}_2 + a_2b_1\\mathbf{e}_2\\mathbf{e}_1 + a_2b_2\\mathbf{e}_2\\mathbf{e}_2 \\\\&= a_1b_1 + a_2b_2 + (a_1b_2 - a_2b_1)\\mathbf{e}_1\\mathbf{e}_2. 
\\end{align}" }, { "math_id": 21, "text": "\\begin{align} \\mathbf{a} \\cdot \\mathbf{b} &= a_1b_1 + a_2b_2, \\\\ \\mathbf{a} \\wedge \\mathbf{b} &= (a_1b_2 - a_2b_1)\\mathbf{e}_1\\mathbf{e}_2 = (a_1b_2 - a_2b_1)\\mathbf{e}_{12}. \\end{align}" }, { "math_id": 22, "text": "\\mathbf{e}_{12}^2 = -1," }, { "math_id": 23, "text": "\\theta\\mathbf{e}_{12} = i\\theta," }, { "math_id": 24, "text": "\\exp{\\theta\\mathbf{e}_{12}} = \\exp{i\\theta} = \\cos{\\theta} + i\\sin{\\theta}," }, { "math_id": 25, "text": "(x'\\mathbf{e}_1 + y'\\mathbf{e}_2) = (x\\mathbf{e}_1 + y\\mathbf{e}_2) \\exp {i\\theta}." }, { "math_id": 26, "text": "\\mathbf{v}' = \\mathbf{v} \\exp{i\\theta} = \\exp(-i\\theta)\\,\\mathbf{v} = \\exp({-i\\theta}/{2})\\,\\mathbf{v} \\exp({i\\theta}/{2})." }, { "math_id": 27, "text": "R = \\exp({-\\tfrac{1}{2} i\\theta}) = \\exp({-\\tfrac{1}{2}\\theta\\mathbf{e}_{12}})," }, { "math_id": 28, "text": "\\mathbf{v}' = R\\mathbf{v}R^{-1}." }, { "math_id": 29, "text": "\\begin{align} \\mathbf{ab} &= (a_1\\mathbf{e}_1 + a_2\\mathbf{e}_2 + a_3\\mathbf{e}_3)(b_1\\mathbf{e}_1 + b_2\\mathbf{e}_2 + b_3\\mathbf{e}_3) \\\\ &= a_1 b_1{\\mathbf{e}_1}^2 + a_2 b_2{\\mathbf{e}_2}^2 + a_3 b_3{\\mathbf{e}_3}^2 + (a_2 b_3 - a_3 b_2)\\mathbf{e}_2\\mathbf{e}_3 + (a_3 b_1 - a_1 b_3)\\mathbf{e}_3\\mathbf{e}_1 + (a_1 b_2 - a_2 b_1)\\mathbf{e}_1\\mathbf{e}_2. \\end{align}" }, { "math_id": 30, "text": "\\begin{align} \\mathbf{a} \\cdot \\mathbf{b} &= a_1b_1 + a_2b_2 + a_3b_3 \\\\ \\mathbf{a} \\wedge \\mathbf{b} &= (a_2 b_3 - a_3 b_2)\\mathbf{e}_{23} + (a_3 b_1 - a_1 b_3)\\mathbf{e}_{31} + (a_1 b_2 - a_2 b_1)\\mathbf{e}_{12}. \\end{align}" }, { "math_id": 31, "text": "\\mathbf{A} = A_{23}\\mathbf{e}_{23} + A_{31}\\mathbf{e}_{31} + A_{12}\\mathbf{e}_{12}, " }, { "math_id": 32, "text": "\\mathbf{A} + \\mathbf{B} = (A_{23} + B_{23})\\mathbf{e}_{23} + (A_{31} + B_{31})\\mathbf{e}_{31} + (A_{12} + B_{12})\\mathbf{e}_{12}." }, { "math_id": 33, "text": "\\mathbf{A} \\mathbf{B} = -A_{23}B_{23} - A_{31}B_{31} - A_{12}B_{12} + (A_{12}B_{31} - A_{31}B_{12})\\mathbf{e}_{23} + (A_{23}B_{12} - A_{12}B_{23})\\mathbf{e}_{31} + (A_{31}B_{23} - A_{23}B_{31})\\mathbf{e}_{12}" }, { "math_id": 34, "text": "\\begin{align} \\mathbf{A} \\cdot \\mathbf{B} &= -A_{12}B_{12} - A_{31}B_{31} - A_{23}B_{23} \\\\ \\mathbf{A} \\times \\mathbf{B} &= (A_{23}B_{31} - A_{31}B_{23})\\mathbf{e}_{12} + (A_{12}B_{23} - A_{23}B_{12})\\mathbf{e}_{13} + (A_{31}B_{12} - A_{12}B_{31})\\mathbf{e}_{23}. \\end{align}" }, { "math_id": 35, "text": "\\exp{\\mathbf{B}} = \\exp({\\beta\\frac{\\mathbf{B}}{\\beta}}) = \\cos{\\beta} + \\frac{\\mathbf{B}}{\\beta}\\sin{\\beta}." }, { "math_id": 36, "text": "(a + \\mathbf{A})(b + \\mathbf{B}) = ab + a\\mathbf{B} + b\\mathbf{A} + \\mathbf{A} \\cdot \\mathbf{B} + \\mathbf{A} \\times \\mathbf{B}." }, { "math_id": 37, "text": "q = \\left(\\cos{\\tfrac{{1}}{2} \\theta}, \\omega \\sin{\\tfrac{1}{2} \\theta}\\right)" }, { "math_id": 38, "text": "\\exp{\\tfrac{1}{2} \\boldsymbol{\\Omega} \\theta} = \\cos{\\tfrac{1}{2} \\theta} + \\boldsymbol{\\Omega}\\sin{\\tfrac{1}{2} \\theta} " }, { "math_id": 39, "text": "\\mathbf{v}' = \\exp(-\\tfrac{1}{2} \\boldsymbol{\\Omega} \\theta)\\,\\mathbf{v} \\exp(\\tfrac{1}{2} \\boldsymbol{\\Omega} \\theta)." }, { "math_id": 40, "text": "\\mathbf{v}' = R\\mathbf{v}R^{-1}." }, { "math_id": 41, "text": "\\mathbf{A} = {\\star} \\mathbf{a} \\,,\\quad \\mathbf{a} = {\\star} \\mathbf{A}" }, { "math_id": 42, "text": "\\mathbf{A} = \\mathbf{a}i \\,,\\quad \\mathbf{a} = - \\mathbf{A} i." 
}, { "math_id": 43, "text": "\\mathbf{a} \\times \\mathbf{b} = \\begin{vmatrix} \\mathbf{e}_1 & \\mathbf{e}_2 & \\mathbf{e}_3\\\\a_1 & a_2 & a_3\\\\b_1 & b_2 & b_3 \\end{vmatrix} \\,,\\quad \\mathbf{a} \\wedge \\mathbf{b} = \\begin{vmatrix} \\mathbf{e}_{23} & \\mathbf{e}_{31} & \\mathbf{e}_{12}\\\\a_1 & a_2 & a_3\\\\b_1 & b_2 & b_3 \\end{vmatrix}\\,," }, { "math_id": 44, "text": "{\\star} (\\mathbf a \\wedge \\mathbf b ) = \\mathbf {a \\times b} \\,,\\quad {\\star} (\\mathbf {a \\times b} ) = \\mathbf a \\wedge \\mathbf b\\,." }, { "math_id": 45, "text": "|\\mathbf{B}| = |\\mathbf{a}||\\mathbf{b}|\\sin\\theta," }, { "math_id": 46, "text": "\\mathbf{AB} = \\mathbf{A} \\cdot \\mathbf{B} + \\mathbf{A} \\times \\mathbf{B}." }, { "math_id": 47, "text": "\\begin{align}\\mathbf{B} &= \\mathbf{a} \\wedge \\mathbf{b}\\\\ \\mathbf{C} &= \\mathbf{a} \\wedge \\mathbf{c}\\end{align}" }, { "math_id": 48, "text": "\\begin{align} \\mathbf{B} + \\mathbf{C} &= \\mathbf{a} \\wedge \\mathbf{b} + \\mathbf{a} \\wedge \\mathbf{c} \\\\ &= \\mathbf{a} \\wedge (\\mathbf{b} + \\mathbf{c}).\\end{align}" }, { "math_id": 49, "text": "\\mathbf{A} = a_{12}\\mathbf{e}_{12} + a_{13}\\mathbf{e}_{13} + a_{14}\\mathbf{e}_{14} + a_{23}\\mathbf{e}_{23} + a_{24}\\mathbf{e}_{24} + a_{34}\\mathbf{e}_{34}." }, { "math_id": 50, "text": "(\\mathbf{e}_{12} + \\mathbf{e}_{34})^2 =\\mathbf{e}_{12} \\mathbf{e}_{12} + \\mathbf{e}_{12} \\mathbf{e}_{34} + \\mathbf{e}_{34} \\mathbf{e}_{12} + \\mathbf{e}_{34} \\mathbf{e}_{34} = -2 + 2 \\mathbf{e}_{1234}." }, { "math_id": 51, "text": "\\mathbf{e}_{12} + \\mathbf{e}_{34} = \\mathbf{e}_{1} \\wedge \\mathbf{e}_{2} + \\mathbf{e}_{3} \\wedge \\mathbf{e}_{4}." }, { "math_id": 52, "text": "v' = RvR^{-1}." }, { "math_id": 53, "text": "R = \\exp(\\tfrac{1}{2} (\\mathbf{B}_1 + \\mathbf{B}_2)) = \\exp(\\tfrac{1}{2} \\mathbf{B}_1)\\,\\exp(\\tfrac{1}{2} \\mathbf{B}_2) = \\exp(\\tfrac{1}{2} \\mathbf{B}_2)\\,\\exp(\\tfrac{1}{2} \\mathbf{B}_1)" }, { "math_id": 54, "text": "\\mathbf{e}_i^2 = \\begin{cases} 1, & i = 1, 2, 3 \\\\ -1, & i = 4 \\end{cases}" }, { "math_id": 55, "text": "\\exp{\\tfrac{1}{2}{\\boldsymbol{\\Omega}\\theta}} = \\cosh{\\tfrac{1}{2} \\theta} + \\boldsymbol{\\Omega}\\sinh{\\tfrac{1}{2} \\theta}," }, { "math_id": 56, "text": "R = \\exp{\\tfrac{1}{2} \\mathbf{A}}." }, { "math_id": 57, "text": "\\mathbf{F} = \\frac{1}{c}\\overline{E}\\mathbf{e}_4 + \\overline{B}\\mathbf{e}_{123}," }, { "math_id": 58, "text": "\\mathbf{J} = \\overline{j} + c\\rho\\mathbf{e}_4," }, { "math_id": 59, "text": "\\partial = \\nabla - \\mathbf{e}_4\\frac{1}{c}\\frac{\\partial}{\\partial t}." }, { "math_id": 60, "text": "\\partial\\mathbf{F} = \\mathbf{J}." }, { "math_id": 61, "text": "\\mathbf{A} = \\overline{A} + \\frac{1}{c}V\\mathbf{e}_4," }, { "math_id": 62, "text": "\\partial\\mathbf{A} = -\\mathbf{F}," }, { "math_id": 63, "text": "\\mathbf{e}_{12} + \\mathbf{e}_{34} + \\mathbf{e}_{56}" }, { "math_id": 64, "text": "\\exp{\\tfrac{1}{2} \\mathbf{B}}" }, { "math_id": 65, "text": "\\mathbf{p} = \\mathbf{A} \\lor \\mathbf{B} = (\\mathbf{A} \\times \\mathbf{B}) J^{-1}." }, { "math_id": 66, "text": "\\mathbf{p} \\land \\mathbf{C} = 0." }, { "math_id": 67, "text": "(\\mathbf{A} \\lor \\mathbf{B}) \\land \\mathbf{C} = 0," }, { "math_id": 68, "text": "\\langle \\mathbf{ABC} \\rangle = 0," } ]
https://en.wikipedia.org/wiki?curid=884040
884142
Klein–Nishina formula
Electron-photon scattering cross section In particle physics, the Klein–Nishina formula gives the differential cross section (i.e. the "likelihood" and angular distribution) of photons scattered from a single free electron, calculated in the lowest order of quantum electrodynamics. It was first derived in 1928 by Oskar Klein and Yoshio Nishina, constituting one of the first successful applications of the Dirac equation. The formula describes both the Thomson scattering of low energy photons (e.g. visible light) and the Compton scattering of high energy photons (e.g. x-rays and gamma-rays), showing that the total cross section and expected deflection angle decrease with increasing photon energy. Formula. For an incident unpolarized photon of energy formula_0, the differential cross section is: formula_1 where The angular dependent photon wavelength (or energy, or frequency) ratio is formula_6 as required by the conservation of relativistic energy and momentum (see Compton scattering). The dimensionless quantity formula_7 expresses the energy of the incident photon in terms of the electron rest energy (~511 keV), and may also be expressed as formula_8, where formula_9 is the Compton wavelength of the electron (~2.42 pm). Notice that the scatter ratio formula_10 increases monotonically with the deflection angle, from formula_11 (forward scattering, no energy transfer) to formula_12 (180 degree backscatter, maximum energy transfer). In some cases it is convenient to express the classical electron radius in terms of the Compton wavelength: formula_13, where formula_14 is the fine structure constant (~1/137) and formula_15 is the "reduced" Compton wavelength of the electron (~0.386 pm), so that the constant in the cross section may be given as: formula_16 Polarized photons. If the incoming photon is polarized, the scattered photon is no longer isotropic with respect to the azimuthal angle. For a linearly polarized photon scattered with a free electron at rest, the differential cross section is instead given by: formula_17 where formula_18 is the azimuthal scattering angle. Note that the unpolarized differential cross section can be obtained by averaging over formula_19. Limits. Low energy. For low energy photons the wavelength shift becomes negligible (formula_20) and the Klein–Nishina formula reduces to the classical Thomson expression: formula_21 which is symmetrical in the scattering angle, i.e. the photon is just as likely to scatter backwards as forwards. With increasing energy this symmetry is broken and the photon becomes more likely to scatter in the forward direction. High energy. For high energy photons it is useful to distinguish between small and large angle scattering. For large angles, where formula_22, the scatter ratio formula_23 is large and formula_24 showing that the (large angle) differential cross section is inversely proportional to the photon energy. The differential cross section has a constant peak in the forward direction: formula_25 independent of formula_26. From the large angle analysis it follows that this peak can only extend to about formula_27. The forward peak is thus confined to a small solid angle of approximately formula_28, and we may conclude that the total small angle cross section decreases with formula_29. Total cross section. The differential cross section may be integrated to find the total cross section: formula_30 In the low-energy limit there is no energy dependence, and we recover the Thomson cross section (~66.5 fm2): formula_31 History. 
The Klein–Nishina formula was derived in 1928 by Oskar Klein and Yoshio Nishina, and was one of the first results obtained from the study of quantum electrodynamics. Consideration of relativistic and quantum mechanical effects allowed development of an accurate equation for the scattering of radiation from a target electron. Before this derivation, the electron cross section had been classically derived by the British physicist and discoverer of the electron, J.J. Thomson. However, scattering experiments showed significant deviations from the results predicted by the Thomson cross section. Further scattering experiments agreed perfectly with the predictions of the Klein–Nishina formula. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
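The formulas above are straightforward to evaluate numerically. The following sketch is an added illustration, not part of the article: it computes the unpolarized differential cross section and the closed-form total cross section in Python and checks the low-energy Thomson limit. The numerical value assumed for the classical electron radius (about 2.818 fm) and all function and variable names are choices made for this sketch.

```python
# Sketch: evaluate the Klein-Nishina differential and total cross sections quoted above
# and check the low-energy (Thomson) limit. Lengths are in femtometres.
import numpy as np

R_E = 2.8179403262          # classical electron radius in fm (assumed value)

def scatter_ratio(eps, theta):
    """lambda/lambda' = 1 / (1 + eps*(1 - cos theta)), with eps = E_gamma / (m_e c^2)."""
    return 1.0 / (1.0 + eps * (1.0 - np.cos(theta)))

def dsigma_domega(eps, theta):
    """Unpolarized Klein-Nishina differential cross section, in fm^2 per steradian."""
    r = scatter_ratio(eps, theta)
    return 0.5 * R_E**2 * r**2 * (r + 1.0 / r - np.sin(theta)**2)

def total_cross_section(eps):
    """Closed-form total cross section from the article, in fm^2."""
    e = eps
    return 2 * np.pi * R_E**2 * (
        (1 + e) / e**3 * (2 * e * (1 + e) / (1 + 2 * e) - np.log(1 + 2 * e))
        + np.log(1 + 2 * e) / (2 * e)
        - (1 + 3 * e) / (1 + 2 * e) ** 2
    )

thomson = 8.0 / 3.0 * np.pi * R_E**2
print(thomson)                      # ~66.5 fm^2, the Thomson cross section
print(total_cross_section(1e-4))    # approaches the Thomson value at low energy
print(total_cross_section(1.0))     # noticeably smaller at E_gamma = m_e c^2
```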
[ { "math_id": 0, "text": "E_\\gamma" }, { "math_id": 1, "text": " \\frac{d\\sigma}{d\\Omega} = \\frac{1}{2} r_e^2 \\left(\\frac{\\lambda}{\\lambda'}\\right)^{2} \\left[\\frac{\\lambda}{\\lambda'} + \\frac{\\lambda'}{\\lambda} - \\sin^2(\\theta)\\right] " }, { "math_id": 2, "text": "r_e" }, { "math_id": 3, "text": "r_e^2" }, { "math_id": 4, "text": "\\lambda/\\lambda'" }, { "math_id": 5, "text": "\\theta" }, { "math_id": 6, "text": "\n\\frac{\\lambda}{\\lambda'} = \\frac{E_{\\gamma'}}{E_\\gamma} = \\frac{\\omega'}{\\omega} = \\frac{1}{1 + \\epsilon(1-\\cos\\theta)}\n" }, { "math_id": 7, "text": "\\epsilon = E_\\gamma/m_e c^2" }, { "math_id": 8, "text": "\n\\epsilon = \\lambda_c/\\lambda\n" }, { "math_id": 9, "text": "\\lambda_c = h/m_e c" }, { "math_id": 10, "text": "\n\\lambda'/\\lambda\n" }, { "math_id": 11, "text": "\n1\n" }, { "math_id": 12, "text": "\n1+2\\epsilon\n" }, { "math_id": 13, "text": "r_e=\\alpha \\bar\\lambda_c = \\alpha \\lambda_c/2\\pi" }, { "math_id": 14, "text": "\\alpha" }, { "math_id": 15, "text": "\\bar\\lambda_c=\\hbar/m_e c" }, { "math_id": 16, "text": "\n\\frac{1}{2}r_e^2 = \\frac{1}{2}\\alpha^2\\bar\\lambda_c^2 = \\frac{\\alpha^2\\lambda_c^2}{8\\pi^2} = \\frac{\\alpha^2\\hbar^2}{2m_e^2c^2}\n" }, { "math_id": 17, "text": " \\frac{d\\sigma}{d\\Omega} = \\frac{1}{2} r_e^2 \\left(\\frac{\\lambda}{\\lambda'}\\right)^{2} \\left[\\frac{\\lambda}{\\lambda'} + \\frac{\\lambda'}{\\lambda} - 2 \\sin^2(\\theta) \\cos^2 (\\phi)\\right] " }, { "math_id": 18, "text": "\\phi" }, { "math_id": 19, "text": "\\cos^2 (\\phi)" }, { "math_id": 20, "text": "\\lambda/\\lambda'\\approx 1" }, { "math_id": 21, "text": " \\frac{d\\sigma}{d\\Omega} \\approx \\frac{1}{2} r_e^2 \\left(1 + \\cos^2(\\theta)\\right) \\qquad (\\epsilon \\ll 1) " }, { "math_id": 22, "text": "\\epsilon(1-\\cos\\theta) \\gg 1" }, { "math_id": 23, "text": "\\lambda'/\\lambda" }, { "math_id": 24, "text": " \\frac{d\\sigma}{d\\Omega} \\approx \\frac{1}{2} r_e^2 \\frac{\\lambda}{\\lambda'} \\approx \\frac{1}{2} r_e^2 \\frac{1}{1+\\epsilon(1-\\cos\\theta)} \\qquad (\\epsilon \\gg 1, \\theta \\gg \\epsilon^{-1/2}) " }, { "math_id": 25, "text": " \\left(\\frac{d\\sigma}{d\\Omega}\\right)_{\\theta=0} = r_e^2 " }, { "math_id": 26, "text": "\\epsilon" }, { "math_id": 27, "text": "\\theta_c \\approx \\epsilon^{-1/2}" }, { "math_id": 28, "text": "\\pi\\theta_c^2" }, { "math_id": 29, "text": "\\epsilon^{-1}" }, { "math_id": 30, "text": "\\sigma = 2 \\pi r_e^2 \\Biggl[ \\frac{1 + \\epsilon}{\\epsilon^3} \\Biggl( \\frac{2\\epsilon (1 + \\epsilon)}{1 + 2\\epsilon} - \\ln{(1 + 2\\epsilon)} \\Biggr) + \\frac{\\ln{(1 + 2\\epsilon)}}{2\\epsilon} - \\frac{1 + 3\\epsilon}{(1 +2\\epsilon)^2} \\Biggr]" }, { "math_id": 31, "text": " \\sigma \\approx \\frac{8}{3} \\pi r_e^2 \\qquad (E_\\gamma \\ll m_e c^2) " } ]
https://en.wikipedia.org/wiki?curid=884142
884352
Friedrichs extension
In functional analysis, the Friedrichs extension is a canonical self-adjoint extension of a non-negative densely defined symmetric operator. It is named after the mathematician Kurt Friedrichs. This extension is particularly useful in situations where an operator may fail to be essentially self-adjoint or whose essential self-adjointness is difficult to show. An operator "T" is non-negative if formula_0 Examples. Example. Multiplication by a non-negative function on an "L"2 space is a non-negative self-adjoint operator. Example. Let "U" be an open set in R"n". On "L"2("U") we consider differential operators of the form formula_1 where the functions "a""i j" are infinitely differentiable real-valued functions on "U". We consider "T" acting on the dense subspace of infinitely differentiable complex-valued functions of compact support, in symbols formula_2 If for each "x" ∈ "U" the "n" × "n" matrix formula_3 is non-negative semi-definite, then "T" is a non-negative operator. This means that the matrix is hermitian and that formula_4 for every choice of complex numbers "c"1, ..., "c"n. This is proved using integration by parts. These operators are elliptic although in general elliptic operators may not be non-negative. They are however bounded from below. Definition of Friedrichs extension. The definition of the Friedrichs extension is based on the theory of closed positive forms on Hilbert spaces. If "T" is non-negative, then formula_5 is a sesquilinear form on dom "T" and formula_6 Thus Q defines an inner product on dom "T". Let "H"1 be the completion of dom "T" with respect to Q. "H"1 is an abstractly defined space; for instance its elements can be represented as equivalence classes of Cauchy sequences of elements of dom "T". It is not obvious that all elements in "H"1 can be identified with elements of "H". However, the following can be proved: The canonical inclusion formula_7 extends to an "injective" continuous map "H"1 → "H". We regard "H"1 as a subspace of "H". Define an operator "A" by formula_8 In the above formula, "bounded" is relative to the topology on "H"1 inherited from "H". By the Riesz representation theorem applied to the linear functional φξ extended to "H", there is a unique "A" ξ ∈ "H" such that formula_9 Theorem. "A" is a non-negative self-adjoint operator such that "T"1="A" - I extends "T". "T"1 is the Friedrichs extension of "T". Another way to obtain this extension is as follows. Let formula_10 be the bounded inclusion operator. The inclusion is a bounded injective operator with dense image. Hence formula_11 is a bounded injective operator with dense image, where formula_12 is the adjoint of formula_13 as an operator between abstract Hilbert spaces. Therefore the operator formula_14 is a non-negative self-adjoint operator whose domain is the image of formula_15. Then formula_16 extends T. Krein's theorem on non-negative self-adjoint extensions. M. G. Krein has given an elegant characterization of all non-negative self-adjoint extensions of a non-negative symmetric operator "T". If "T", "S" are non-negative self-adjoint operators, write formula_17 if, and only if, *formula_18 *formula_19 Theorem. There are unique self-adjoint extensions "T"min and "T"max of any non-negative symmetric operator "T" such that formula_20 and every non-negative self-adjoint extension "S" of "T" is between "T"min and "T"max, i.e. formula_21 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
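A standard concrete illustration, not spelled out above, is the Friedrichs extension of the negative Laplacian. The following LaTeX sketch records it; the identification of the completion with the Sobolev space H01(U), and of the resulting extension with the Dirichlet Laplacian, are standard facts that are assumed here rather than derived.

```latex
% Illustrative example (standard, but not worked out in the article):
% the Friedrichs extension of the negative Laplacian is the Dirichlet Laplacian.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $T = -\Delta$ with $\operatorname{dom} T = C_c^{\infty}(U) \subseteq L^2(U)$
for an open set $U \subseteq \mathbb{R}^n$. Integration by parts gives, for
$\phi \in C_c^{\infty}(U)$,
\[
  \langle \phi \mid T\phi \rangle = \int_U |\nabla \phi|^2 \, dx \; \geq \; 0,
\]
so $T$ is non-negative, and the form of the construction above is
\[
  Q(\phi,\phi) = \int_U |\nabla \phi|^2 \, dx + \int_U |\phi|^2 \, dx .
\]
The completion $H_1$ of $C_c^{\infty}(U)$ in the norm $Q(\phi,\phi)^{1/2}$ is the
Sobolev space $H_0^1(U)$, and the Friedrichs extension of $-\Delta$ is the
Dirichlet Laplacian, the self-adjoint operator with domain
$\{\, u \in H_0^1(U) : \Delta u \in L^2(U) \,\}$.
\end{document}
```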
[ { "math_id": 0, "text": " \\langle \\xi \\mid T \\xi \\rangle \\geq 0 \\quad \\xi \\in \\operatorname{dom}\\ T " }, { "math_id": 1, "text": " [T \\phi](x) = -\\sum_{i,j} \\partial_{x_i} \\{a_{i j}(x) \\partial_{x_j} \\phi(x)\\} \\quad x \\in U, \\phi \\in \\operatorname{C}_c^\\infty(U), " }, { "math_id": 2, "text": " \\operatorname{C}_c^\\infty(U) \\subseteq L^2(U). " }, { "math_id": 3, "text": " \\begin{bmatrix} a_{1 1}(x) & a_{1 2}(x) & \\cdots & a_{1 n}(x) \\\\ a_{2 1}(x) & a_{2 2} (x) & \\cdots & a_{2 n}(x) \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ a_{n 1}(x) & a_{n 2}(x) & \\cdots & a_{n n}(x) \\end{bmatrix} " }, { "math_id": 4, "text": " \\sum_{i, j} a_{i j }(x) c_i \\overline{c_j} \\geq 0 " }, { "math_id": 5, "text": " \\operatorname{Q}(\\xi, \\eta) = \\langle \\xi \\mid T \\eta \\rangle + \\langle \\xi \\mid \\eta \\rangle " }, { "math_id": 6, "text": " \\operatorname{Q}(\\xi, \\xi) = \\langle \\xi \\mid T \\xi\\rangle + \\langle \\xi \\mid \\xi \\rangle \\geq \\|\\xi\\|^2." }, { "math_id": 7, "text": " \\operatorname{dom} T \\rightarrow H " }, { "math_id": 8, "text": " \\operatorname{dom}\\ A = \\{\\xi \\in H_1: \\phi_\\xi: \\eta \\mapsto \\operatorname{Q}(\\xi, \\eta) \\mbox{ is bounded linear.} \\} " }, { "math_id": 9, "text": " \\operatorname{Q}(\\xi,\\eta) = \\langle A \\xi \\mid \\eta \\rangle \\quad \\eta \\in H_1 " }, { "math_id": 10, "text": " L:H_1\\rightarrow H " }, { "math_id": 11, "text": " LL^*:H\\rightarrow H " }, { "math_id": 12, "text": " L^* " }, { "math_id": 13, "text": " L " }, { "math_id": 14, "text": " A:=(LL^*)^{-1} " }, { "math_id": 15, "text": " LL^* " }, { "math_id": 16, "text": " A-I " }, { "math_id": 17, "text": " T \\leq S " }, { "math_id": 18, "text": " \\operatorname{dom}(S^{1/2}) \\subseteq \\operatorname{dom}(T^{1/2}) " }, { "math_id": 19, "text": " \\langle T^{1/2} \\xi \\mid T^{1/2} \\xi \\rangle \\leq \\langle S^{1/2} \\xi \\mid S^{1/2} \\xi \\rangle \\quad \\forall \\xi \\in \\operatorname{dom}(S^{1/2}) " }, { "math_id": 20, "text": " T_{\\mathrm{min}} \\leq T_{\\mathrm{max}}, " }, { "math_id": 21, "text": " T_{\\mathrm{min}} \\leq S \\leq T_{\\mathrm{max}}. " } ]
https://en.wikipedia.org/wiki?curid=884352
8845390
Rebound rate
Basketball statistic In basketball statistics, rebound rate or rebound percentage is a statistic to gauge how effective a player is at gaining possession of the basketball after a missed field goal or free throw. Rebound rate is an estimate of the percentage of missed shots a player rebounded while he was on the floor. Using raw rebound totals to evaluate rebounding fails to take into account external factors unrelated to a player's ability, such as the number of shots taken in games and the percentage of those shots that are made. Both factors affect the number of missed shots that are available to be rebounded. Rebound rate takes these factors into account. The formulas are: formula_0 formula_1 formula_2 In the National Basketball Association (NBA), the statistic is available for seasons since the 1970–71 season. The highest career rebound rate by a player is 23.4, by Dennis Rodman. The highest rebound rate for one season is 29.7, also by Dennis Rodman. He also owns seven of the top ten rebound percentage seasons (four of the top five) in NBA history. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
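As a sketch, the first of the formulas above can be written as a plain function. This illustration is not part of the article; the argument names and the example totals are made up purely to show how the quantities combine.

```python
# Sketch: the overall rebound rate formula quoted above, as a plain Python function.
# Inputs are season totals for the player, his team and the opposing teams.
def rebound_rate(rebounds, minutes_played, team_minutes, team_rebounds, opp_rebounds):
    """Estimated percentage of available missed shots the player rebounded while on the floor."""
    return (100.0 * rebounds * (team_minutes / 5.0)) / (
        minutes_played * (team_rebounds + opp_rebounds)
    )

# Illustrative (made-up) numbers: 892 rebounds in 2989 minutes, on a team that played
# 19830 total minutes and grabbed 3445 rebounds to the opponents' 3380.
print(round(rebound_rate(892, 2989, 19830, 3445, 3380), 1))
```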
[ { "math_id": 0, "text": " \\text{Rebound Rate} = \\dfrac{100 \\times\\text{Rebounds}\\times \\dfrac{\\text{Team Minutes Played}}{5}}{\\text{Minutes Played}\\times \\left (\\text{Team Total Rebounds} + \\text{Opposing Team Total Rebounds}\\right )}" }, { "math_id": 1, "text": " \\text{Offensive Rebound Rate} = \\dfrac{100 \\times\\text{Offensive Rebounds}\\times \\dfrac{\\text{Team Minutes Played}}{5}}{\\text{Minutes Played}\\times \\left (\\text{Team Offensive Rebounds} + \\text{Opposing Team Defensive Rebounds}\\right )}" }, { "math_id": 2, "text": " \\text{Defensive Rebound Rate} = \\dfrac{100 \\times\\text{Defensive Rebounds}\\times \\dfrac{\\text{Team Minutes Played}}{5}}{\\text{Minutes Played}\\times \\left (\\text{Team Defensive Rebounds} + \\text{Opposing Team Offensive Rebounds}\\right )}" } ]
https://en.wikipedia.org/wiki?curid=8845390
8845470
Golod–Shafarevich theorem
Theorem in algebra In mathematics, the Golod–Shafarevich theorem was proved in 1964 by Evgeny Golod and Igor Shafarevich. It is a result in non-commutative homological algebra which solves the class field tower problem, by showing that class field towers can be infinite. The inequality. Let "A" = "K"⟨"x"1, ..., "x""n"⟩ be the free algebra over a field "K" in "n" = "d" + 1 non-commuting variables "x""i". Let "J" be the 2-sided ideal of "A" generated by homogeneous elements "f""j" of "A" of degree "d""j" with 2 ≤ "d"1 ≤ "d"2 ≤ ... where "d""j" tends to infinity. Let "r""i" be the number of "d""j" equal to "i". Let "B"="A"/"J", a graded algebra. Let "b""j" = dim "B""j". The "fundamental inequality" of Golod and Shafarevich states that formula_0 As a consequence: "B" is infinite dimensional if "r""i" ≤ "d"2/4 for all "i". Applications. This result has important applications in combinatorial group theory: if "G" is a nontrivial finite "p"-group, then "r" > "d"2/4, where "d" is the minimal number of generators of "G" and "r" is the number of relations in a minimal presentation. As a consequence, for every prime "p" there exist infinite finitely generated "p"-groups in which every element has order a power of "p", giving a negative answer to the generalized Burnside problem. In class field theory, the class field tower of a number field "K" is created by iterating the Hilbert class field construction. The class field tower problem asks whether this tower is always finite; the question has been attributed to Furtwängler, though Furtwängler said he had heard it from Schreier. Another consequence of the Golod–Shafarevich theorem is that such towers may be infinite (in other words, do not always terminate in a field equal to its Hilbert class field). Specifically, an imaginary quadratic field whose discriminant has at least 6 distinct prime factors has an infinite class field tower. More generally, a number field with sufficiently many prime factors in the discriminant has an infinite class field tower.
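The fundamental inequality can be checked mechanically for given dimension data. The following sketch is an added illustration with names chosen here: it verifies the inequality term by term, and uses the free algebra itself (no relations, where equality holds) as a trivial test case.

```python
# Sketch: a direct check of the fundamental inequality
#   b_j >= n*b_{j-1} - sum_{i=2..j} b_{j-i} * r_i
# for given graded dimensions b_j and relation counts r_i. Names are illustrative.
def satisfies_fundamental_inequality(b, r, n):
    """b[j] = dim B_j (with b[0] = 1); r[i] = number of defining relations of degree i."""
    for j in range(1, len(b)):
        correction = sum(b[j - i] * r[i] for i in range(2, j + 1) if i < len(r))
        if b[j] < n * b[j - 1] - correction:
            return False
    return True

# Example: the free algebra on n generators (no relations) has b_j = n**j,
# and the inequality then holds with equality.
n = 3
b = [n**j for j in range(6)]
print(satisfies_fundamental_inequality(b, r=[0, 0, 0], n=n))   # True
```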
[ { "math_id": 0, "text": " b_j\\ge nb_{j-1} -\\sum_{i=2}^{j} b_{j-i} r_i." } ]
https://en.wikipedia.org/wiki?curid=8845470
8845623
Offensive rating
Offensive proficiency rating or offensive productive efficiency is a statistic used in basketball to measure either a team's offensive performance or an individual player's efficiency at producing points for the offense. It approximates the number of points generated by a team or individual per 100 possessions. It was created by author and statistician Dean Oliver. For teams, the formula is: Offensive Team Rating = 100 × (Points Scored / Possessions). For players, the formula is: formula_0 Points can be produced through field goals, free throws, assists, and offensive rebounds. Individual possessions are the sum of a player's scoring possessions (field goals, free throws, plus partial credit for assists), missed field goals and free throws that the defense rebounds, and turnovers. Because it is measured per possession, the rating reflects a player's own offensive contribution rather than the pace of his team or the defense played by his teammates. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
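A minimal sketch of the per-100-possessions calculation is given below. It is an illustration only: the points-produced and individual-possessions inputs are assumed to have been computed already (Oliver's full formulas for those quantities involve many more box-score terms than shown here), and the example numbers are made up.

```python
# Sketch: offensive rating as points per 100 possessions.
def team_offensive_rating(points_scored, possessions):
    return 100.0 * points_scored / possessions

def player_offensive_rating(points_produced, individual_possessions):
    return 100.0 * points_produced / individual_possessions

# Made-up season totals, purely to show the scale of typical ratings.
print(round(team_offensive_rating(8920, 8100), 1))      # points scored per 100 team possessions
print(round(player_offensive_rating(1450, 1300), 1))    # points produced per 100 individual possessions
```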
[ { "math_id": 0, "text": "\nOffensive Production Rating = \\frac{Points Produced}{Individual Possessions} \\times 100\n" } ]
https://en.wikipedia.org/wiki?curid=8845623
8846521
BCJR algorithm
Error correction algorithm The Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is an algorithm for maximum a posteriori decoding of error correcting codes defined on trellises (principally convolutional codes). The algorithm is named after its inventors: Bahl, Cocke, Jelinek and Raviv. This algorithm is critical to modern iteratively-decoded error-correcting codes, including turbo codes and low-density parity-check codes. Steps involved. Based on the trellis, the algorithm: computes forward probabilities formula_0; computes backward probabilities formula_1; and combines formula_0 and formula_1 with the branch transition probabilities to obtain the smoothed, a posteriori probabilities of the states and of the transmitted bits. Variations. SBGT BCJR. Berrou, Glavieux and Thitimajshima simplification.
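The recursions can be sketched for a generic trellis as follows. This is an added illustration rather than a full decoder: how the branch metrics are built from the received signal, the assumption that the trellis starts and terminates in state 0, and the per-step normalisation are all choices made for this sketch.

```python
# Sketch: the forward-backward recursions behind BCJR on a generic trellis, using numpy.
# gamma[t, s1, s2] is the branch metric for moving from state s1 to state s2 at step t
# (transition probability times channel likelihood); building gamma from the received
# signal is outside the scope of this sketch.
import numpy as np

def bcjr_pair_posteriors(gamma):
    T, S, _ = gamma.shape
    alpha = np.zeros((T + 1, S))
    beta = np.zeros((T + 1, S))
    alpha[0, 0] = 1.0                       # trellis assumed to start in state 0
    beta[T, 0] = 1.0                        # ... and to terminate in state 0
    for t in range(T):                      # forward recursion
        alpha[t + 1] = alpha[t] @ gamma[t]
        alpha[t + 1] /= alpha[t + 1].sum()  # normalise to avoid numerical underflow
    for t in range(T - 1, -1, -1):          # backward recursion
        beta[t] = gamma[t] @ beta[t + 1]
        beta[t] /= beta[t].sum()
    # posterior probability of each state transition at each step
    post = alpha[:T, :, None] * gamma * beta[1:, None, :]
    return post / post.sum(axis=(1, 2), keepdims=True)
```

A full decoder would typically go one step further and sum these pairwise posteriors over the branches associated with each possible input bit to form log-likelihood ratios, which is the quantity exchanged between component decoders in iterative (turbo) decoding.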
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\beta" } ]
https://en.wikipedia.org/wiki?curid=8846521
8847
Commutator subgroup
Smallest normal subgroup by which the quotient is commutative In mathematics, more specifically in abstract algebra, the commutator subgroup or derived subgroup of a group is the subgroup generated by all the commutators of the group. The commutator subgroup is important because it is the smallest normal subgroup such that the quotient group of the original group by this subgroup is abelian. In other words, formula_0 is abelian if and only if formula_1 contains the commutator subgroup of formula_2. So in some sense it provides a measure of how far the group is from being abelian; the larger the commutator subgroup is, the "less abelian" the group is. Commutators. For elements formula_3 and formula_4 of a group "G", the commutator of formula_3 and formula_4 is formula_5. The commutator formula_6 is equal to the identity element "e" if and only if formula_7, that is, if and only if formula_3 and formula_4 commute. In general, formula_8. However, the notation is somewhat arbitrary and there is a non-equivalent variant definition for the commutator that has the inverses on the right hand side of the equation: formula_9 in which case formula_10 but instead formula_11. An element of "G" of the form formula_6 for some "g" and "h" is called a commutator. The identity element "e" = ["e","e"] is always a commutator, and it is the only commutator if and only if "G" is abelian. Here are some simple but useful commutator identities, true for any elements "s", "g", "h" of a group "G": *formula_12 *formula_13 where formula_14 (or, respectively, formula_15) is the conjugate of formula_3 by formula_16 *for any homomorphism formula_17, formula_18. The first and second identities imply that the set of commutators in "G" is closed under inversion and conjugation. If in the third identity we take "H" = "G", we get that the set of commutators is stable under any endomorphism of "G". This is in fact a generalization of the second identity, since we can take "f" to be the conjugation automorphism on "G", formula_19, to get the second identity. However, the product of two or more commutators need not be a commutator. A generic example is ["a","b"]["c","d"] in the free group on "a","b","c","d". It is known that the least order of a finite group for which there exist two commutators whose product is not a commutator is 96; in fact there are two nonisomorphic groups of order 96 with this property. Definition. This motivates the definition of the commutator subgroup formula_20 (also called the derived subgroup, and denoted formula_21 or formula_22) of "G": it is the subgroup generated by all the commutators. It follows from this definition that any element of formula_20 is of the form formula_23 for some natural number formula_24, where the "g""i" and "h""i" are elements of "G". Moreover, since formula_25, the commutator subgroup is normal in "G". For any homomorphism "f": "G" → "H", formula_26, so that formula_27. This shows that the commutator subgroup can be viewed as a functor on the category of groups, some implications of which are explored below. Moreover, taking "G" = "H" it shows that the commutator subgroup is stable under every endomorphism of "G": that is, ["G","G"] is a fully characteristic subgroup of "G", a property considerably stronger than normality. The commutator subgroup can also be defined as the set of elements "g" of the group that have an expression as a product "g" = "g"1 "g"2 ... "g""k" that can be rearranged to give the identity. Derived series.
This construction can be iterated: formula_28 formula_29 The groups formula_30 are called the second derived subgroup, third derived subgroup, and so forth, and the descending normal series formula_31 is called the derived series. This should not be confused with the lower central series, whose terms are formula_32. For a finite group, the derived series terminates in a perfect group, which may or may not be trivial. For an infinite group, the derived series need not terminate at a finite stage, and one can continue it to infinite ordinal numbers via transfinite recursion, thereby obtaining the transfinite derived series, which eventually terminates at the perfect core of the group. Abelianization. Given a group formula_2, a quotient group formula_0 is abelian if and only if formula_33. The quotient formula_34 is an abelian group called the abelianization of formula_2 or formula_2 made abelian. It is usually denoted by formula_35 or formula_36. There is a useful categorical interpretation of the map formula_37. Namely formula_38 is universal for homomorphisms from formula_2 to an abelian group formula_39: for any abelian group formula_39 and homomorphism of groups formula_40 there exists a unique homomorphism formula_41 such that formula_42. As usual for objects defined by universal mapping properties, this shows the uniqueness of the abelianization formula_35 up to canonical isomorphism, whereas the explicit construction formula_43 shows existence. The abelianization functor is the left adjoint of the inclusion functor from the category of abelian groups to the category of groups. The existence of the abelianization functor Grp → Ab makes the category Ab a reflective subcategory of the category of groups, defined as a full subcategory whose inclusion functor has a left adjoint. Another important interpretation of formula_35 is as formula_44, the first homology group of formula_2 with integral coefficients. Classes of groups. A group formula_2 is an abelian group if and only if the derived group is trivial: ["G","G"] = {"e"}. Equivalently, if and only if the group equals its abelianization. See above for the definition of a group's abelianization. A group formula_2 is a perfect group if and only if the derived group equals the group itself: ["G","G"] = "G". Equivalently, if and only if the abelianization of the group is trivial. This is "opposite" to abelian. A group with formula_45 for some "n" in N is called a solvable group; this is weaker than abelian, which is the case "n" = 1. A group with formula_46 for all "n" in N is called a non-solvable group. A group with formula_47 for some ordinal number, possibly infinite, is called a hypoabelian group; this is weaker than solvable, which is the case "α" is finite (a natural number). Perfect group. Whenever a group formula_2 has derived subgroup equal to itself, formula_48, it is called a perfect group. This includes non-abelian simple groups and the special linear groups formula_49 for a fixed field formula_50. Examples. Map from Out. Since the derived subgroup is characteristic, any automorphism of "G" induces an automorphism of the abelianization. Since the abelianization is abelian, inner automorphisms act trivially, hence this yields a map formula_53
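For small finite groups the commutator subgroup can be computed by brute force directly from the definition. The following sketch is an added illustration; the permutation representation, helper names and the closure loop are choices made here. It finds the derived subgroup of the symmetric group S3, recovering the alternating group A3.

```python
# Sketch: brute-force computation of the commutator subgroup of S3.
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations are tuples mapping index -> image
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def commutator_subgroup(G):
    # all commutators [g, h] = g^{-1} h^{-1} g h
    gens = {compose(compose(inverse(g), inverse(h)), compose(g, h))
            for g in G for h in G}
    # close the set of commutators under the group operation
    # (it is already closed under inversion, since [g,h]^{-1} = [h,g])
    H = set(gens)
    changed = True
    while changed:
        changed = False
        for a in list(H):
            for b in list(H):
                c = compose(a, b)
                if c not in H:
                    H.add(c)
                    changed = True
    return H

S3 = list(permutations(range(3)))
derived = commutator_subgroup(S3)
print(len(derived))   # 3: the alternating group A3
```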
[ { "math_id": 0, "text": "G/N" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "h" }, { "math_id": 5, "text": "[g,h] = g^{-1}h^{-1}gh" }, { "math_id": 6, "text": "[g,h]" }, { "math_id": 7, "text": "gh = hg" }, { "math_id": 8, "text": "gh = hg[g,h]" }, { "math_id": 9, "text": "[g,h] = ghg^{-1}h^{-1}" }, { "math_id": 10, "text": "gh \\neq hg[g,h]" }, { "math_id": 11, "text": "gh = [g,h]hg" }, { "math_id": 12, "text": "[g,h]^{-1} = [h,g]," }, { "math_id": 13, "text": "[g,h]^s = [g^s,h^s]," }, { "math_id": 14, "text": "g^s = s^{-1}gs" }, { "math_id": 15, "text": " g^s = sgs^{-1}" }, { "math_id": 16, "text": "s," }, { "math_id": 17, "text": "f: G \\to H " }, { "math_id": 18, "text": "f([g, h]) = [f(g), f(h)]." }, { "math_id": 19, "text": " x \\mapsto x^s " }, { "math_id": 20, "text": "[G, G]" }, { "math_id": 21, "text": "G'" }, { "math_id": 22, "text": "G^{(1)}" }, { "math_id": 23, "text": "[g_1,h_1] \\cdots [g_n,h_n] " }, { "math_id": 24, "text": "n" }, { "math_id": 25, "text": "([g_1,h_1] \\cdots [g_n,h_n])^s = [g_1^s,h_1^s] \\cdots [g_n^s,h_n^s]" }, { "math_id": 26, "text": "f([g_1,h_1] \\cdots [g_n,h_n]) = [f(g_1),f(h_1)] \\cdots [f(g_n),f(h_n)]" }, { "math_id": 27, "text": "f([G,G]) \\subseteq [H,H]" }, { "math_id": 28, "text": "G^{(0)} := G" }, { "math_id": 29, "text": "G^{(n)} := [G^{(n-1)},G^{(n-1)}] \\quad n \\in \\mathbf{N}" }, { "math_id": 30, "text": "G^{(2)}, G^{(3)}, \\ldots" }, { "math_id": 31, "text": "\\cdots \\triangleleft G^{(2)} \\triangleleft G^{(1)} \\triangleleft G^{(0)} = G" }, { "math_id": 32, "text": "G_n := [G_{n-1},G]" }, { "math_id": 33, "text": "[G, G]\\subseteq N" }, { "math_id": 34, "text": "G/[G, G]" }, { "math_id": 35, "text": "G^{\\operatorname{ab}}" }, { "math_id": 36, "text": "G_{\\operatorname{ab}}" }, { "math_id": 37, "text": "\\varphi: G \\rightarrow G^{\\operatorname{ab}}" }, { "math_id": 38, "text": "\\varphi" }, { "math_id": 39, "text": "H" }, { "math_id": 40, "text": "f: G \\to H" }, { "math_id": 41, "text": "F: G^{\\operatorname{ab}}\\to H" }, { "math_id": 42, "text": "f = F \\circ \\varphi" }, { "math_id": 43, "text": "G\\to G/[G, G]" }, { "math_id": 44, "text": "H_1(G, \\mathbb{Z})" }, { "math_id": 45, "text": "G^{(n)}=\\{e\\}" }, { "math_id": 46, "text": "G^{(n)} \\neq \\{e\\}" }, { "math_id": 47, "text": "G^{(\\alpha)}=\\{e\\}" }, { "math_id": 48, "text": "G^{(1)} =G" }, { "math_id": 49, "text": "\\operatorname{SL}_n(k)" }, { "math_id": 50, "text": "k" }, { "math_id": 51, "text": "\\operatorname{GL}_n(k)" }, { "math_id": 52, "text": "n \\ne 2" }, { "math_id": 53, "text": "\\operatorname{Out}(G) \\to \\operatorname{Aut}(G^{\\mbox{ab}})" } ]
https://en.wikipedia.org/wiki?curid=8847
8849356
Bertrand's box paradox
Mathematical paradox Bertrand's box paradox is a veridical paradox in elementary probability theory. It was first posed by Joseph Bertrand in his 1889 work "Calcul des Probabilités". There are three boxes: *a box containing two gold coins, *a box containing two silver coins, *a box containing one gold coin and one silver coin. Choose a box at random. From this box, withdraw one coin at random. If that happens to be a gold coin, then what is the probability that the next coin drawn from the same box is also a gold coin? A veridical paradox is a paradox whose correct solution seems to be counterintuitive. It may seem intuitive that the probability that the remaining coin is gold should be 1/2, but the probability is actually 2/3. Bertrand showed that if 1/2 were correct, it would result in a contradiction, so 1/2 cannot be correct. This simple but counterintuitive puzzle is used as a standard example in teaching probability theory. The solution illustrates some basic principles, including the Kolmogorov axioms. Solution. The problem can be reframed by describing the boxes as each having one drawer on each of two sides. Each drawer contains a coin. One box has a gold coin on each side (GG), one a silver coin on each side (SS), and the other a gold coin on one side and a silver coin on the other (GS). A box is chosen at random, a random drawer is opened, and a gold coin is found inside it. What is the chance of the coin on the other side being gold? The following faulty reasoning appears to give a probability of 1/2: *Originally, all three boxes were equally likely to be chosen. *The chosen box cannot be box SS. *So it must be box GG or GS. *The two remaining possibilities are equally likely. So the probability that the box is GG, and the other coin is also gold, is 1/2. The flaw is in the last step. While those two cases were originally equally likely, the fact that you are certain to find a gold coin if you had chosen the GG box, but are only 50% sure of finding a gold coin if you had chosen the GS box, means they are no longer equally likely given that you have found a gold coin. Specifically: *The probability that GG would produce a gold coin is 1. *The probability that SS would produce a gold coin is 0. *The probability that GS would produce a gold coin is 1/2. Initially GG, SS and GS are equally likely formula_0. Therefore, by Bayes' rule the conditional probability that the chosen box is GG, given we have observed a gold coin, is: formula_1 The correct answer of 2/3 can also be obtained as follows: *Originally, all six coins were equally likely to be chosen. *The chosen coin cannot be from either drawer of box SS, or from the silver drawer of box GS. *So it must come from the gold drawer of box GS, or from either drawer of box GG. *These three remaining possibilities are equally likely, so the probability that the box is GG is 2/3. Bertrand's purpose for constructing this example was to show that merely counting cases is not always proper. Instead, one should sum the probabilities that the cases would produce the observed result; and the two methods are equivalent only if this probability is either 1 or 0 in every case. This condition is applied correctly by the second solution method, but not by the first. Experimental data. In a survey of 53 Psychology freshmen taking an introductory probability course, 35 incorrectly responded 1/2; only 3 students correctly responded 2/3. Related problems. Other veridical paradoxes of probability include: *the Monty Hall problem *the Three Prisoners problem *the Boy or Girl paradox. The Monty Hall and Three Prisoners problems are identical mathematically to Bertrand's Box paradox. The construction of the Boy or Girl paradox is similar, essentially adding a fourth box with a gold coin and a silver coin. Its answer is controversial, based on how one assumes the "drawer" was chosen. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
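The 2/3 answer can also be checked by simulation. The following sketch is an added illustration: it draws a random box and a random coin many times and estimates the conditional probability that the other coin in the same box is gold, given that the drawn coin was gold.

```python
# Sketch: Monte Carlo check of Bertrand's box paradox.
import random

def simulate(trials=1_000_000, seed=0):
    rng = random.Random(seed)
    boxes = [("gold", "gold"), ("silver", "silver"), ("gold", "silver")]
    saw_gold = other_also_gold = 0
    for _ in range(trials):
        box = rng.choice(boxes)
        first = rng.randrange(2)            # which of the two coins is drawn first
        if box[first] == "gold":
            saw_gold += 1
            if box[1 - first] == "gold":
                other_also_gold += 1
    return other_also_gold / saw_gold

print(simulate())   # close to 2/3, not 1/2
```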
[ { "math_id": 0, "text": "\\left(\\mathrm{i.e., P(GG) = P(SS) = P(GS)} = \\frac13\\right)" }, { "math_id": 1, "text": "\\mathrm{ P(GG \\mid see\\ gold) = \\frac { P(see\\ gold \\mid GG)\\times\\frac13} { P(see\\ gold \\mid GG)\\times\\frac13+P(see\\ gold \\mid SS)\\times\\frac13+P(see\\ gold \\mid GS)\\times\\frac13 }} = \\frac{\\frac13}{\\frac13}\\times\\frac{1}{1+0+\\frac12} = \\frac{2}{3}" } ]
https://en.wikipedia.org/wiki?curid=8849356
8849460
Split-ring resonator
A resonator A split-ring resonator (SRR) is an artificially produced structure common to metamaterials. Its purpose is to produce the desired magnetic susceptibility (magnetic response) in various types of metamaterials up to 200 terahertz. Background. Split ring resonators (SRRs) consist of a pair of concentric metallic rings, etched on a dielectric substrate, with slits etched on opposite sides. SRRs can produce the effect of being electrically smaller when responding to an oscillating electromagnetic field. These resonators have been used for the synthesis of left-handed and negative refractive index media, where the necessary value of the negative effective permeability is due to the presence of the SRRs. When an array of electrically small SRRs is excited by means of a time-varying magnetic field, the structure behaves as an effective medium with negative effective permeability in a narrow band above SRR resonance. SRRs have also been coupled to planar transmission lines for the synthesis of metamaterials transmission line. These media create the necessary strong magnetic coupling to an applied electromagnetic field not otherwise available in conventional materials. For example, an effect such as negative permeability is produced with a periodic array of split ring resonators. A single-cell SRR has a pair of enclosed loops with splits in them at opposite ends. The loops are made of nonmagnetic metal like copper and have a small gap between them. The loops can be concentric or square, and gapped as needed. A magnetic flux penetrating the metal rings will induce rotating currents in the rings, which produce their own flux to enhance or oppose the incident field (depending on the SRR resonant properties). This field pattern is dipolar. The small gaps between the rings produces large capacitance values, which lowers the resonating frequency. Hence the dimensions of the structure are small compared to the resonant wavelength. This results in low radiative losses and very high quality factors. The split ring resonator was a microstructure design featured in the paper by Pendry et al in 1999 called, "Magnetism from Conductors and Enhanced Nonlinear Phenomena". It proposed that the split ring resonator design, built out of nonmagnetic material, could enhance the magnetic activity unseen in natural materials. In the simple microstructure design, it is shown that in an array of conducting cylinders, with an applied external formula_0 field parallel to the cylinders, the effective permeability can be written as the following. (This model is very limited and the effective permeability cannot be less than zero or greater than one.) formula_1 Where formula_2 is the resistance of the cylinder surface per unit area, a is the spacing of the cylinders, formula_3 is the angular frequency, formula_4 is the permeability of free space and r is the radius. Moreover, when gaps are introduced to a double cylinder design similar to the image above, we see that the gaps produce a capacitance. This capacitor and inductor microstructure design introduces a resonance that amplifies the magnetic effect. The new form of the effective permeability resembles a familiar response known in plasmonic materials. formula_5 Where d is the spacing of the concentric conducting sheets. The final design replaces the double concentric cylinders with a pair of flat concentric c-shaped sheets, placed on each side of a unit cell. The unit cells are stacked on top of each other by a length, l. 
The final result of the effective permeability can be seen below. formula_6 where c is the thickness of the c-shaped sheet and formula_2 is the resistance of unit length of the sheets measured around the circumference. Characteristics. The split ring resonator and the metamaterial itself are composite materials. Each SRR has an individual tailored response to the electromagnetic field. However, the periodic construction of many SRR cells is such that the electromagnetic wave interacts as if these were homogeneous materials. This is similar to how light actually interacts with everyday materials; materials such as glass or lenses are made of atoms, an averaging or macroscopic effect is produced. The SRR is designed to mimic the magnetic response of atoms, only on a much larger scale. Also, as part of periodic composite structure, the SRR is designed to have a stronger magnetic coupling than is found in nature. The larger scale allows for more control over the magnetic response, while each unit is smaller than the radiated electromagnetic wave. SRRs are much more active than ferromagnetic materials found in nature. The pronounced magnetic response in such lightweight materials demonstrates an advantage over heavier, naturally occurring materials. Each unit can be designed to have its own magnetic response. The response can be enhanced or lessened as desired. In addition, the overall effect reduces power requirements. SRR configuration. There are a variety of split-ring resonators and periodic structures: rod-split-rings, nested split-rings, single split rings, deformed split-rings, spiral split-rings, and extended S-structures. The variations of split ring resonators have achieved different results, including smaller and higher frequency structures. The research which involves some of these types are discussed throughout the article. To date (December 2009) the capability for desired results in the visible spectrum has not been achieved. However, in 2005 it was noted that, physically, a nested circular split-ring resonator must have an inner radius of 30 to 40 nanometers for success in the mid-range of the visible spectrum. Microfabrication and nanofabrication techniques may utilize direct laser beam writing or electron beam lithography depending on the desired resolution. Various configurations. Split-ring resonators (SRR) are one of the most common elements used to fabricate metamaterials. Split-ring resonators are non-magnetic materials, which initially were fabricated from circuit board material to create metamaterials. Looking at the image directly to the right, it can be seen that at first a single SRR looks like an object with two square perimeters, with each perimeter having a small section removed. This results in square "C" shapes on fiberglass printed circuit board material. In this type of configuration it is actually two concentric bands of non-magnetic conductor material. There is one gap in each band placed 180° relative to each other. The gap in each band gives it the distinctive "C" shape, rather than a totally circular or square shape. Then multiple cells of this double band configuration are fabricated onto circuit board material by an etching technique and lined with copper wire strip arrays. After processing, the boards are cut and assembled into an interlocking unit. It is constructed into a periodic array with a large number of SRRs. There are now a number of different configurations that use the SRR nomenclature. Demonstrations. 
A periodic array of SRRs was used for the first demonstration of a negative index of refraction. For this demonstration, "square shaped SRRs", with the lined wire configurations, were fabricated into a periodic, arrayed, cell structure. This is the substance of the metamaterial. Then a metamaterial prism was cut from this material. The prism experiment demonstrated a negative index of refraction for the first time in the year 2000; the paper about the demonstration was submitted to the journal Science on January 8, 2001, accepted on February 22, 2001 and published on April 6, 2001. Just before this prism experiment, Pendry et al. was able to demonstrate that a three-dimensional array of intersecting thin wires could be used to create negative values of ε. In a later demonstration, a periodic array of copper split-ring resonators could produce an effective negative μ. In 2000 Smith et al. were the first to successfully combine the two arrays and produce a so-called left-handed material, which has negative values of ε and μ for a band of frequencies in the GHz range. SRRs were first used to fabricate left-handed metamaterials for the microwave range, and several years later for the terahertz range. By 2007, experimental demonstration of this structure at microwave frequencies has been achieved by many groups. In addition, SRRs have been used for research in acoustic metamaterials. The arrayed SRRs and wires of the first left-handed metamaterial were melded into alternating layers. This concept and methodology was then applied to (dielectric) materials with optical resonances producing negative effective permittivity for certain frequency intervals resulting in "photonic bandgap frequencies". Another analysis showed left-handed materials to be fabricated from inhomogeneous constituents, which yet results in a macroscopically homogeneous material. SRRs had been used to focus a signal from a point source, increasing the transmission distance for near field waves. Furthermore, another analysis showed SRRs with a negative index of refraction capable of high-frequency magnetic response, which created an artificial magnetic device composed of non-magnetic materials (dielectric circuit board). The resonance phenomena that occurs in this system is essential to achieving the desired effects. SRRs also exhibit resonant electric response in addition to their resonant magnetic response. The response, when combined with an array of identical wires, is averaged over the whole composite structure which results in effective values, including the refractive index. The original logic behind SRRs specifically, and metamaterials generally was to create a structure, which imitates an arrayed atomic structure only on a much larger scale. Several types of SRR. In research based in metamaterials, and specifically negative refractive index, there are different types of split-ring resonators. Of the examples mentioned below, most of them have a gap in each ring. In other words, with a double ring structure, each ring has a gap. There is the "1-D Split-Ring Structure" with two square rings, one inside the other. One set of cited "unit cell" dimensions would be an outer square of 2.62 mm and an inner square of 0.25 mm. 1-D structures such as this are easier to fabricate compared with constructing a rigid 2-D structure. The "Symmetrical-Ring Structure" is another classic example. 
Described by the nomenclature these are two rectangular square D type configurations, exactly the same size, lying flat, side by side, in the unit cell. Also these are not concentric. One set of cited dimensions are 2 mm on the shorter side, and 3.12 mm on the longer side. The gaps in each ring face each other, in the unit cell. The "Omega Structure", as the nomenclature describes, has an Ω-shaped ring structure. There are two of these, standing vertical, side by side, instead of lying flat, in the unit cell. In 2005 these were considered to be a new type of metamaterial. One set of cited dimensions are annular parameters of R=1.4 mm and r=1 mm, and the straight edge is 3.33 mm. Another new metamaterial in 2005 was a coupled S-shaped structure. There are two vertical S-shaped structures, side by side, in a unit cell. There is no gap as in the ring structure; however, there is a space between the top and middle parts of the S and space between the middle part and bottom part of the S. Furthermore, it still has the properties of having an electric plasma frequency and a magnetic resonant frequency. Research. On May 1, 2000, research was published about an experiment which involved conducting wires placed symmetrically within each cell of a periodic split-ring resonator array. This effectively achieved negative permeability and permittivity for electromagnetic waves in the microwave regime. The concept was and still is used to build interacting elements smaller than the applied electromagnetic radiation. In addition, the spacing between the resonators is much smaller than the wavelength of the applied radiation. Additionally, the splits in the ring allow the SRR unit to achieve resonance at wavelengths much larger than the diameter of the ring. The unit is designed to generate a large capacitance, lower the resonant frequency, and concentrate the electric field. Combining units creates a design as a periodic medium. Furthermore, the multiple unit structure has strong magnetic coupling with low radiative losses. Research has also covered variations in magnetic resonances for different SRR configurations. Research has continued into terahertz radiations with SRRs Other related work fashioned metamaterial configurations with fractals and non-SRR structures. These can be constructed with materials such as periodic metallic crosses, or an ever-widening concentric ring structures known as Swiss rolls. Permeability for only the red wavelength at 780 nm has been analyzed and along with other related work. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Academic journals Metamaterials books References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links.
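The effective-permeability expression quoted in the Background section can be evaluated numerically to show the resonant response and the narrow band of negative permeability just above resonance. In the sketch below every geometric and material value (ring radius, lattice spacing, stacking length, sheet thickness, sheet gap, and sheet resistance per unit length) is an assumed illustrative number chosen only to make the behaviour visible; none of them is taken from the article.

```python
# Sketch: evaluating the split-ring effective permeability mu_eff(omega) quoted above
# for an assumed, purely illustrative geometry.
import numpy as np

mu0 = 4e-7 * np.pi        # permeability of free space (H/m)
c0 = 2.998e8              # speed of light in vacuum (m/s)

# assumed geometry (metres) and sheet resistance per unit length (ohm/m)
r, a, l, c, d, sigma1 = 2.0e-3, 5.0e-3, 2.0e-3, 1.0e-3, 1.0e-4, 200.0

def mu_eff(omega):
    filling = np.pi * r**2 / a**2
    denom = (1.0
             + 2j * l * sigma1 / (omega * r * mu0)
             - 3 * l * c0**2 / (np.pi * omega**2 * r**3 * np.log(2 * c / d)))
    return 1.0 - filling / denom

# loss-free resonance, where the real part of the denominator vanishes
omega0 = np.sqrt(3 * l * c0**2 / (np.pi * r**3 * np.log(2 * c / d)))
print(omega0 / (2 * np.pi) / 1e9, "GHz (illustrative resonance frequency)")

for f_ghz in (10, 13, 14, 20):
    w = 2 * np.pi * f_ghz * 1e9
    print(f_ghz, "GHz ->", np.round(mu_eff(w), 3))   # negative real part just above resonance
```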
[ { "math_id": 0, "text": " H_0 " }, { "math_id": 1, "text": " \\mu_{eff} = 1 - \\frac{\\pi r^2}{a^2}\\ \\left(1 + i\\frac{2\\sigma }{\\omega r \\mu_0}\\ \\right)^{-1} " }, { "math_id": 2, "text": " \\sigma " }, { "math_id": 3, "text": " \\omega " }, { "math_id": 4, "text": " \\mu_0 " }, { "math_id": 5, "text": " \\mu_{eff} = 1 - \\frac{\\frac{\\pi r^2}{a^2}\\ }{1+\\frac{2\\sigma i}{\\omega r \\mu_0}\\ - \\frac{3dc^2_0}{\\pi^2 \\omega^2 r^3}\\ }\\ " }, { "math_id": 6, "text": " \\mu_{eff} = 1 - \\frac{\\frac{\\pi r^2}{a^2}\\ }{1+\\frac{2l\\sigma_1 i}{\\omega r \\mu_0}\\ - \\frac{3lc^2_0}{\\pi \\omega^2 r^3ln(\\frac{2c}{d}\\ )}\\ }\\ " } ]
https://en.wikipedia.org/wiki?curid=8849460
8850783
Formal ethics
Formal ethics is a formal logical system for describing and evaluating the "form" as opposed to the "content" of ethical principles. Formal ethics was introduced by Harry J. Gensler, in part in his 1990 logic textbook "Symbolic Logic: Classical and Advanced Systems", but was more fully developed and justified in his 1996 book "Formal Ethics". Formal ethics is related to ethical formalism in that its focus is the forms of moral judgments, but the exposition in "Formal Ethics" makes it clear that Gensler, unlike previous ethical formalists, does not consider formal ethics to be a complete ethical theory (such that the correct form would be necessary and sufficient for an ethical principle to be "correct"). In fact, the theorems of formal ethics could be seen as a largest common subset of most widely recognized ethical theories, in that none of its axioms (with the possible exception of "rationality") is controversial among philosophers of ethics. Symbolic representation. The axioms and theorems of formal ethics can be represented with the standard notation of predicate logic (but with a grammar closer to higher-order logics), augmented with imperative, deontic, belief, and modal logic symbols. Formal logic uses an underlined symbol (e.g. formula_0) to represent an imperative. If the same symbol is used without an underline, then the plain symbol is an indicative and the underlined symbol is an imperative version of the same proposition. For example, if we take the symbol formula_1 to mean the indicative "You eat an apple", then formula_0 means the imperative "Eat an apple". When a proposition is given as a predicate with one or more of the arguments representing agents, the agent to which the imperative applies is underlined. For example, if formula_2 means "You give a dollar to x" then formula_3 is the correct way to express "Give a dollar to x". Within the system of formal ethics, an imperative is taken to represent a preference rather than a demand (called "anti-modal" view, because an underline doesn't behave like a modal operator). With this interpretation, the negation of an imperative (e.g. formula_4) is taken to mean "Don't do A", not "You may omit A". To express demands, an imperative modal operator formula_5 (for "may") is defined, so that formula_6 = "You may do A" and formula_7 = "You may not omit doing A" = "You must do A". Note that formula_5 is different from the deontic formula_8 "all right" operator defined below, as "You must do A" is still an imperative, without any "ought" judgment (i.e. not the same as "You ought to do A"). Following Castañeda's approach, the deontic operators formula_9 (for "ought") and formula_8 (for "all right", represented formula_10 for "permissible" in some deontic logic notations) are applied to imperatives. This is opposed to many deontic logics which apply the deontic operators to indicatives. Doing so avoids a difficulty of many deontic logics to express conditional imperatives. An often given example is "If you smoke, then you ought to use an ashtray". If the deontic operators formula_9 and formula_8 only attach to indicatives, then it is not clear that either of the following representations is adequate: formula_11 formula_12 However, by attaching the deontic operators to imperatives, we have unambiguously formula_13 Belief logic symbols, when combined with imperative logic, allow beliefs and desires to be expressed. The notation formula_14 is used for beliefs ("You believe A") and formula_15 for desires ("You desire A"). 
In formal ethics, "desire" is taken in a strong sense when the agent of the belief is the same as the agent of the imperative. The following table shows the different interpretations for formula_16 depending on the agent and the tense of the imperative: This strong interpretation of desires precludes statements such as "I want to get out of bed (right now), but I don't act to get out of bed". It does not, however, preclude "I want to get out of bed (right now), but I don't get out of bed". Perhaps I act to get out of bed (make my best effort), but can't for some reason (e.g. I am tied down, my legs are broken, etc.). Beliefs may be indicative, as above, or imperative (e.g. formula_17 "Believe A", formula_18 "Desire A"). They may also be combined with the deontic operators. For example, if formula_19 means "God exists", then formula_20 is "You ought to believe that God exists", and formula_21 is "Everyone ought to believe that God exists". The modal operators formula_22 and formula_23 are used with their normal meanings in modal logic. In addition, to address the fact that logicians may disagree on what is "logically" necessary or possible, causal modal operators are separately defined to express that something is "causally" necessary or possible. The causal modal operators are represented formula_24 and formula_25. In addition, an operator formula_26 is used to mean "in every actual or hypothetical case". This is used, for example, when expressing deontic and prescriptive counterfactuals, and is weaker than formula_22. For example, formula_27 means "In every actual or hypothetical case, if you ought to do A, do A", whereas formula_28 means "You ought to do A "logically entails" do A". Finally, formal ethics is a higher-order logic in that it allows "properties", predicates that apply to other predicates. Properties can only be applied to actions, and the imperative notation is used (e.g. formula_29 = "action A has property F"). The only types of property that formal ethics admits are "universal properties": properties that are not evaluative and do not make reference to proper names or pointer words. The following are examples of properties that are not universal properties: Requiring a property to be universal, however, is different from requiring it to be morally relevant. formula_35, where formula_36 means "Act A is done by a black person", is a universal property, but would not be considered morally relevant to most acts in most ethical theories. Formal ethics has a definition of "relevantly similar" actions that imposes certain consistency constraints, but does not have a definition of "morally relevant properties". The notation formula_37 is used to mean "G is a complete description of A in universal terms". Put another way, formula_19 is a logical conjunction of all universal properties that formula_0 has. The formula_37 notation is the basis for the definition of "exactly similar" actions and is used in the definition of "relevantly similar" actions. Axioms. Formal ethics has four axioms in addition to the axioms of predicate and modal logic. These axioms (with the possible exception of "Rationality", see below) are largely uncontroversial within ethical theory. In natural language, the axioms might be given as follows: Care must be taken in translating each of these natural language axioms to a symbolic representation, in order to avoid axioms that produce absurd results or contradictions. In particular, the axioms advocated by Gensler avoid "if-then" forms in favor of "don't combine" forms.
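The structural point about where the deontic operator attaches can be made concrete with a small syntax-tree sketch. The following Python fragment is only an illustrative assumption: the class names (Indicative, Imperative, Ought, Implies) and the underscore rendering of the underline are invented for this example, and the code merely builds and prints the three candidate formalizations of the ashtray sentence without performing any inference.

```python
# Hypothetical encoding (not Gensler's notation or any published implementation)
# of the three candidate formalizations of "If you smoke, then you ought to
# use an ashtray".  It only constructs and prints formulas; no inference is done.
from dataclasses import dataclass


@dataclass(frozen=True)
class Indicative:
    """A plain proposition, e.g. 'you smoke'."""
    name: str
    def __str__(self):
        return self.name


@dataclass(frozen=True)
class Imperative:
    """An underlined (imperative) formula; the underline is rendered as _..._."""
    name: str
    def __str__(self):
        return f"_{self.name}_"


@dataclass(frozen=True)
class Implies:
    antecedent: object
    consequent: object
    def __str__(self):
        return f"{self.antecedent} -> {self.consequent}"


@dataclass(frozen=True)
class Ought:
    """The deontic 'O' operator applied to whatever formula it wraps."""
    body: object
    def __str__(self):
        return f"O({self.body})"


smoke = Indicative("smoke")
ashtray = Indicative("ashtray")
ashtray_imp = Imperative("ashtray")

print(Ought(Implies(smoke, ashtray)))       # O(smoke -> ashtray): wide scope, indicative
print(Implies(smoke, Ought(ashtray)))       # smoke -> O(ashtray): narrow scope, indicative
print(Ought(Implies(smoke, ashtray_imp)))   # O(smoke -> _ashtray_): O over an imperative consequent
```

Printing the three objects makes the scope difference visible at a glance: the deontic operator wraps the whole conditional in the first and third cases, but only the consequent in the second.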
[ { "math_id": 0, "text": "\\underline{A}" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "Dux" }, { "math_id": 3, "text": "D\\underline{u}x" }, { "math_id": 4, "text": "\\neg \\underline{A}" }, { "math_id": 5, "text": "M" }, { "math_id": 6, "text": "M \\underline{A}" }, { "math_id": 7, "text": "\\neg M \\neg \\underline{A}" }, { "math_id": 8, "text": "R" }, { "math_id": 9, "text": "O" }, { "math_id": 10, "text": "P" }, { "math_id": 11, "text": "O(\\mathrm{smoke} \\to \\mathrm{ashtray})" }, { "math_id": 12, "text": "\\mathrm{smoke} \\to O(\\mathrm{ashtray})" }, { "math_id": 13, "text": "O(\\mathrm{smoke} \\to \\underline{\\mathrm{ashtray}})" }, { "math_id": 14, "text": "u:A" }, { "math_id": 15, "text": "u:\\underline{A}" }, { "math_id": 16, "text": "i:\\underline{A}" }, { "math_id": 17, "text": "\\underline{u} : A" }, { "math_id": 18, "text": "\\underline{u} : \\underline{A}" }, { "math_id": 19, "text": "G" }, { "math_id": 20, "text": "O(\\underline{u}:G)" }, { "math_id": 21, "text": "(x)O(\\underline{x}:G)" }, { "math_id": 22, "text": "\\square" }, { "math_id": 23, "text": "\\diamond" }, { "math_id": 24, "text": "\\underset{c}{\\square}" }, { "math_id": 25, "text": "\\underset{c}{\\diamond}" }, { "math_id": 26, "text": "\\blacksquare" }, { "math_id": 27, "text": "\\blacksquare(O\\underline{A} \\to \\underline{A})" }, { "math_id": 28, "text": "\\square(O\\underline{A} \\to \\underline{A})" }, { "math_id": 29, "text": "F\\underline{A}" }, { "math_id": 30, "text": "W" }, { "math_id": 31, "text": "W\\underline{A}" }, { "math_id": 32, "text": "G\\underline{A}" }, { "math_id": 33, "text": "I" }, { "math_id": 34, "text": "I\\underline{A}" }, { "math_id": 35, "text": "B" }, { "math_id": 36, "text": "B \\underline{A}" }, { "math_id": 37, "text": "G \\ast \\underline{A}" }, { "math_id": 38, "text": "U" }, { "math_id": 39, "text": "E" } ]
https://en.wikipedia.org/wiki?curid=8850783
8851949
U wave
ECG waveform The U wave is a wave on an electrocardiogram (ECG). It comes after the T wave of ventricular repolarization and may not always be observed as a result of its small size. U waves are thought to represent repolarization of the Purkinje fibers. However, the exact source of the U wave remains unclear. The most common theories for the origin are: Description. According to V. Gorshkov-Cantacuzene: "The U wave is the momentum carried by the blood in the coronary arteries and blood vessels". The resistivity of stationary blood is expressed as formula_0, where formula_1 is a coefficient and formula_2 is the hematocrit, whereas during acceleration of the blood flow there is a sharp decrease in the longitudinal resistance, with small relaxation times. Multiple factors affect blood resistivity. Erythrocyte aggregation occurs only at low shear rates, which implies that in all vessels (with the exception of large veins) the effect of aggregation is negligible. The interior of a blood vessel includes a near-wall layer of plasma (acting as a lubricant), the thickness of which depends strongly on the Reynolds number and the shear rate of the flowing blood. Given that the thickness of this layer is less than 5 microns in all blood vessels except capillaries, and that the resistivity of plasma is about half that of whole blood, the contribution of this layer to the overall resistivity (treating the layers as resistances in parallel) is negligible. As the speed of blood flow decreases, the profile of hematocrit across the radius of the vessel becomes more elongated; however, at normal values of Ht this effect is also insignificant. At sufficiently high shear rates the red blood cells become susceptible to deformation; the contribution of this phenomenon is difficult to assess because it occurs against the background of all the above effects. Even the sum of all these factors, however, has little effect on blood resistivity. It follows that, at the moment of ejection of blood from the left ventricle, part of the momentum is carried away by the blood, whose electrical resistivity is negligible during flow and gradually increases higher up in the coronary arteries and blood vessels. The U wave is the momentum carried by the blood in the coronary arteries and blood vessels. It is possible to trace this momentum back to the Purkinje fibers along the vessels of the myocardium. This idea is also supported by the fact that in left ventricular hypertrophy, myocardial ischemia, and coronary insufficiency the momentum cannot be transferred to the Purkinje fibers, and therefore a negative U wave is recorded on the ECG. Interpretation. According to many studies, U waves are often registered in all leads except V6, most frequently in V2 and V3, when the heart rate is greater than 96 beats per minute. Their amplitude is often 0.1–0.33 mV. It is particularly difficult to delineate the boundaries of the U wave against the background of the T wave and R wave, with which it may partially or (in the case of the T wave) completely merge. At higher heart rates or in hypocalcemia the U wave is superimposed on the T wave, and in tachycardia it merges with the R wave of the next cardiac cycle. Prominent U waves (described as prominent if they exceed 1–2 mm or 25% of the height of the T wave) are most often seen in hypokalemia but may be present in hypercalcemia, thyrotoxicosis, or exposure to digitalis, epinephrine and Class 1A and 3 antiarrhythmics, as well as in congenital long QT syndrome and in the setting of intracranial hemorrhage.
An inverted U wave may represent myocardial ischemia (and in particular appears to have a high positive predictive accuracy for left anterior descending coronary artery disease) or left ventricular volume overload. A U wave can sometimes be seen in normal younger, athletic individuals. The U wave increases in older and less athletic adults. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rho\\left(\\text{Ht}\\right) = \\text{Ht} \\cdot (1 + \\alpha \\text{Ht})" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "\\text{Ht}" } ]
https://en.wikipedia.org/wiki?curid=8851949
8853472
Ponderomotive energy
In strong-field laser physics, ponderomotive energy is the cycle-averaged quiver energy of a free electron in an electromagnetic field. Equation. The ponderomotive energy is given by formula_0, where formula_1 is the electron charge, formula_2 is the linearly polarised electric field amplitude, formula_3 is the laser carrier frequency and formula_4 is the electron mass. In terms of the laser intensity formula_5, using formula_6, it reads less simply: formula_7, where formula_8 is the vacuum permittivity. For the typical orders of magnitude involved in laser physics, this becomes: formula_9, where the laser wavelength is formula_10, and formula_11 is the speed of light. The units are electronvolts (eV), watts (W), centimeters (cm) and micrometers (μm). Atomic units. In atomic units, formula_12, formula_13, formula_14, where formula_15. If one uses the atomic unit of electric field, then the ponderomotive energy is just formula_16 Derivation. The formula for the ponderomotive energy can be easily derived. A free particle of charge formula_17 interacts with an electric field formula_18. The force on the charged particle is formula_19. The acceleration of the particle is formula_20. Because the particle executes harmonic motion, its position is formula_21. For a particle experiencing harmonic motion, the time-averaged energy is formula_22. In laser physics, this is called the ponderomotive energy formula_23. References and notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
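Because the practical rule of thumb above is just the SI expression with the constants folded in, the two forms can be cross-checked numerically. The short Python sketch below does this; the intensity and wavelength are arbitrary illustrative values, and the function names are invented for the example rather than taken from any standard library.

```python
# Numerical cross-check of the two expressions for the ponderomotive energy:
# the SI formula U_p = e^2 I / (2 c eps0 m omega0^2) and the practical
# rule of thumb U_p[eV] ~ 9.33 * I[10^14 W/cm^2] * lambda^2[um^2].
import math

# CODATA-style constants in SI units
e = 1.602176634e-19        # elementary charge, C
m_e = 9.1093837015e-31     # electron mass, kg
c = 2.99792458e8           # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

def ponderomotive_energy_eV(intensity_W_cm2: float, wavelength_um: float) -> float:
    """U_p from the SI formula, returned in electronvolts."""
    I = intensity_W_cm2 * 1e4                 # W/cm^2 -> W/m^2
    lam = wavelength_um * 1e-6                # um -> m
    omega0 = 2 * math.pi * c / lam            # laser carrier frequency, rad/s
    U_p = e**2 * I / (2 * c * eps0 * m_e * omega0**2)   # joules
    return U_p / e                            # J -> eV

def ponderomotive_rule_of_thumb_eV(intensity_W_cm2: float, wavelength_um: float) -> float:
    """U_p[eV] ~ 9.33 * I[10^14 W/cm^2] * lambda^2[um^2]."""
    return 9.33 * (intensity_W_cm2 / 1e14) * wavelength_um**2

if __name__ == "__main__":
    I, lam = 1e14, 0.8     # illustrative values: 10^14 W/cm^2 at 0.8 um
    print(ponderomotive_energy_eV(I, lam))         # ~5.96 eV
    print(ponderomotive_rule_of_thumb_eV(I, lam))  # ~5.97 eV
```

For 10^14 W/cm^2 at 0.8 μm both routes give roughly 6 eV, consistent with the 9.33 coefficient quoted above.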
[ { "math_id": 0, "text": "U_p = {e^2 E^2 \\over 4m \\omega_0^2}" }, { "math_id": 1, "text": "e" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "\\omega_0" }, { "math_id": 4, "text": "m" }, { "math_id": 5, "text": "I" }, { "math_id": 6, "text": "I=c\\epsilon_0 E^2/2" }, { "math_id": 7, "text": "U_p={e^2 I \\over 2 c \\epsilon_0 m \\omega_0^2}={2e^2 \\over c \\epsilon_0 m} \\cdot {I \\over 4\\omega_0^2}" }, { "math_id": 8, "text": "\\epsilon_0" }, { "math_id": 9, "text": " U_p (\\mathrm{eV}) = 9.33 \\cdot I(10^{14}\\ \\mathrm{W/cm}^2) \\cdot \\lambda^2(\\mathrm{\\mu m}^2) " }, { "math_id": 10, "text": "\\lambda= 2\\pi c/\\omega_0" }, { "math_id": 11, "text": "c" }, { "math_id": 12, "text": "e=m=1" }, { "math_id": 13, "text": "\\epsilon_0=1/4\\pi" }, { "math_id": 14, "text": "\\alpha c=1" }, { "math_id": 15, "text": "\\alpha \\approx 1/137" }, { "math_id": 16, "text": "U_p = \\frac{E^2}{4\\omega_0^2}." }, { "math_id": 17, "text": "q" }, { "math_id": 18, "text": "E \\, \\cos(\\omega t)" }, { "math_id": 19, "text": "F = qE \\, \\cos(\\omega t)" }, { "math_id": 20, "text": "a_{m} = {F \\over m} = {q E \\over m} \\cos(\\omega t)" }, { "math_id": 21, "text": "x = {-a \\over \\omega^2}= -\\frac{qE}{m\\omega^2} \\, \\cos(\\omega t) = -\\frac{q}{m\\omega^2} \\sqrt{\\frac{2I_0}{c\\epsilon_0}} \\, \\cos(\\omega t)" }, { "math_id": 22, "text": "U = \\textstyle{\\frac{1}{2}}m\\omega^2 \\langle x^2\\rangle = {q^2 E^2 \\over 4 m \\omega^2}" }, { "math_id": 23, "text": "U_p" } ]
https://en.wikipedia.org/wiki?curid=8853472
8854508
Ostriker–Peebles criterion
In astronomy, the Ostriker–Peebles criterion, named after its discoverers Jeremiah Ostriker and Jim Peebles, describes the formation of barred galaxies. The rotating disc of a spiral galaxy, consisting of stars and solar systems, may become unstable in such a way that the stars in the outer parts of the "arms" are released from the galaxy system, resulting in the collapse of the remaining stars into a bar-shaped galaxy. This occurs in approximately 1/3 of the known spiral galaxies. Based on the rotational kinetic energy "T" of the disc and the magnitude of its total gravitational energy "W", a galaxy will become barred when formula_0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
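Applying the criterion to a model galaxy is a one-line ratio test. The sketch below assumes the two energies are supplied in consistent units; the function name and example numbers are invented for illustration, not taken from any survey.

```python
# Minimal sketch of applying the criterion: given a model galaxy's rotational
# kinetic energy T and gravitational energy W (negative for a bound system),
# flag it as bar-unstable when T/|W| exceeds the threshold quoted above.
def bar_unstable(T: float, W: float, threshold: float = 0.15) -> bool:
    """Return True if the disc exceeds the Ostriker-Peebles threshold."""
    return T / abs(W) > threshold

# Example with invented energies: T = 2.0e52 J and W = -1.0e53 J give
# T/|W| = 0.20, above 0.15, so this model disc would be expected to form a bar.
print(bar_unstable(2.0e52, -1.0e53))   # True
```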
[ { "math_id": 0, "text": "\\frac{T}{|W|} > 0.15" } ]
https://en.wikipedia.org/wiki?curid=8854508
885651
Stationary point
Zero of the derivative of a function In mathematics, particularly in calculus, a stationary point of a differentiable function of one variable is a point on the graph of the function where the function's derivative is zero. Informally, it is a point where the function "stops" increasing or decreasing (hence the name). For a differentiable function of several real variables, a stationary point is a point on the surface of the graph where all its partial derivatives are zero (equivalently, the gradient has zero norm). The notion of stationary points of a real-valued function is generalized as "critical points" for complex-valued functions. Stationary points are easy to visualize on the graph of a function of one variable: they correspond to the points on the graph where the tangent is horizontal (i.e., parallel to the "x"-axis). For a function of two variables, they correspond to the points on the graph where the tangent plane is parallel to the "xy" plane. The notion of a "stationary point" allows the mathematical description of an astronomical phenomenon that was unexplained before the time of Copernicus. A "stationary point" is the point in the apparent trajectory of a planet on the celestial sphere where the motion of the planet seems to stop before restarting in the other direction (see apparent retrograde motion). This occurs because of the projection of the planet's orbit onto the ecliptic circle. Turning points. A turning point of a differentiable function is a point at which the derivative has an isolated zero and changes sign at the point. A turning point may be either a relative maximum or a relative minimum (also known as local minimum and maximum). A turning point is thus a stationary point, but not all stationary points are turning points. If the function is twice differentiable, the isolated stationary points that are not turning points are horizontal inflection points. For example, the function formula_0 has a stationary point at "x" = 0, which is also an inflection point, but is not a turning point. Classification. Isolated stationary points of a formula_1 real-valued function formula_2 are classified into four kinds by the first derivative test: The first two options are collectively known as "local extrema". Similarly, a point that is either a global (or absolute) maximum or a global (or absolute) minimum is called a global (or absolute) extremum. The last two options, stationary points that are "not" local extrema, are known as saddle points. By Fermat's theorem, global extrema must occur (for a formula_1 function) on the boundary or at stationary points. Curve sketching. Determining the position and nature of stationary points aids in curve sketching of differentiable functions. Solving the equation "f′"("x") = 0 returns the "x"-coordinates of all stationary points; the "y"-coordinates are trivially the function values at those "x"-coordinates. The specific nature of a stationary point at "x" can in some cases be determined by examining the second derivative "f″"("x"): A more straightforward way of determining the nature of a stationary point is by examining the function values between the stationary points (if the function is defined and continuous between them). A simple example of a point of inflection is the function "f"("x") = "x"^3. There is a clear change of concavity about the point "x" = 0, and we can prove this by means of calculus. The second derivative of "f" is the everywhere-continuous 6"x", and at "x" = 0, "f″" = 0, and the sign changes about this point.
So "x" = 0 is a point of inflection. More generally, the stationary points of a real-valued function formula_3 are those points "x"_0 where the derivative in every direction equals zero, or equivalently, the gradient is zero. Examples. For the function "f"("x") = "x"^4 we have "f′"(0) = 0 and "f″"(0) = 0. Even though "f″"(0) = 0, this point is not a point of inflection. The reason is that the sign of "f′"("x") changes from negative to positive. For the function "f"("x") = sin("x") we have "f′"(0) ≠ 0 and "f″"(0) = 0. But this is not a stationary point, rather it is a point of inflection. This is because the concavity changes from concave upwards to concave downwards and the sign of "f′"("x") does not change; it stays positive. For the function "f"("x") = "x"^3 we have "f′"(0) = 0 and "f″"(0) = 0. This is both a stationary point and a point of inflection. This is because the concavity changes from concave downwards to concave upwards and the sign of "f′"("x") does not change; it stays positive. For the function "f"("x") = 0, one has "f′"(0) = 0 and "f″"(0) = 0. The point 0 is a non-isolated stationary point which is neither a turning point nor a horizontal point of inflection, as the signs of "f′"("x") and "f″"("x") do not change. The function "f"("x") = "x"^5 sin(1/"x") for "x" ≠ 0, and "f"(0) = 0, gives an example where "f′"("x") and "f″"("x") are both continuous, "f′"(0) = 0 and "f″"(0) = 0, and yet "f"("x") does not have a local maximum, a local minimum, nor a point of inflection at 0. So, 0 is a stationary point that is not isolated. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
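The behaviour of the second-derivative test described above, including the inconclusive cases where "f″" vanishes, can be demonstrated mechanically. The Python sketch below uses SymPy; the helper function and the test polynomials are chosen only to mirror the worked examples, and no attempt is made at the higher-order analysis required when the test fails.

```python
# Locate and classify stationary points symbolically with SymPy.
# The test functions x**2 - 4*x, x**3 and x**4 mirror the examples above.
import sympy as sp

x = sp.symbols('x', real=True)

def stationary_points(f):
    """Return (point, f''(point)) pairs for the stationary points of f."""
    f1, f2 = sp.diff(f, x), sp.diff(f, x, 2)
    return [(p, f2.subs(x, p)) for p in sp.solve(sp.Eq(f1, 0), x)]

for f in (x**2 - 4*x, x**3, x**4):
    for p, second in stationary_points(f):
        if second > 0:
            kind = "local minimum"
        elif second < 0:
            kind = "local maximum"
        else:
            kind = "second-derivative test inconclusive (check the sign of f' on each side)"
        print(f, "at x =", p, "->", kind)
```

As expected, the quadratic is classified as a local minimum, while x**3 and x**4 both fall into the inconclusive case, matching the discussion of the examples above.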
[ { "math_id": 0, "text": "x \\mapsto x^3" }, { "math_id": 1, "text": "C^1" }, { "math_id": 2, "text": "f\\colon \\mathbb{R} \\to \\mathbb{R}" }, { "math_id": 3, "text": "f\\colon \\mathbb{R}^{n} \\to \\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=885651