id | title | text | formulas | url
---|---|---|---|---
14763077
|
Classical modal logic
|
In modal logic, a classical modal logic L is any modal logic containing (as axiom or theorem) the duality of the modal operators
formula_0
that is also closed under the rule
formula_1
Alternatively, one can give a dual definition of L by which L is classical if and only if it contains (as axiom or theorem)
formula_2
and is closed under the rule
formula_3
The weakest classical system is sometimes referred to as E and is non-normal. Both algebraic and neighborhood semantics characterize familiar classical modal systems that are weaker than the weakest normal modal logic K.
Every regular modal logic is classical, and every normal modal logic is regular and hence classical.
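The first of these claims (that every regular modal logic is classical) can be checked with a short derivation. The sketch below is not part of the article; it uses only ordinary propositional reasoning together with the regular rule (from (A ∧ B) → C infer (□A ∧ □B) → □C) to recover the classical closure rule.
\begin{align*}
& A \leftrightarrow B && \text{(assumed theorem)}\\
& (A \land A) \to B && \text{(propositional logic)}\\
& (\Box A \land \Box A) \to \Box B && \text{(regular rule)}\\
& \Box A \to \Box B && \text{(propositional logic)}\\
& \Box B \to \Box A && \text{(same steps with $A$ and $B$ exchanged)}\\
& \Box A \leftrightarrow \Box B && \text{(propositional logic)}
\end{align*}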
|
[
{
"math_id": 0,
"text": "\\Diamond A \\leftrightarrow \\lnot\\Box\\lnot A"
},
{
"math_id": 1,
"text": "\\frac{ A \\leftrightarrow B }{\\Box A\\leftrightarrow \\Box B}."
},
{
"math_id": 2,
"text": "\\Box A \\leftrightarrow \\lnot\\Diamond\\lnot A"
},
{
"math_id": 3,
"text": "\\frac{ A \\leftrightarrow B }{\\Diamond A\\leftrightarrow \\Diamond B}."
}
] |
https://en.wikipedia.org/wiki?curid=14763077
|
14764284
|
Regular modal logic
|
In modal logic, a regular modal logic is a modal logic containing (as axiom or theorem) the duality of the modal operators:
formula_0
and closed under the rule
formula_1
Every normal modal logic is regular, and every regular modal logic is classical.
|
[
{
"math_id": 0,
"text": "\\Diamond A \\leftrightarrow \\lnot\\Box\\lnot A"
},
{
"math_id": 1,
"text": "\\frac{(A\\land B)\\to C}{(\\Box A\\land\\Box B)\\to\\Box C}."
}
] |
https://en.wikipedia.org/wiki?curid=14764284
|
1476440
|
Cobweb model
|
Representation of cyclical supply and demand developed by Nicholas Kaldor
The cobweb model or cobweb theory is an economic model that explains why prices may be subjected to periodic fluctuations in certain types of markets. It describes cyclical supply and demand in a market where the amount produced must be chosen before prices are observed. Producers' expectations about prices are assumed to be based on observations of previous prices. Nicholas Kaldor analyzed the model in 1934, coining the term "cobweb theorem" (see Kaldor, 1938 and Pashigian, 2008), citing previous analyses in German by Henry Schultz and Umberto Ricci.
The model.
The cobweb model is generally based on a time lag between supply and demand decisions. Agricultural markets are a context where the cobweb model might apply, since there is a lag between planting and harvesting (Kaldor, 1934, pp. 133–134 gives two agricultural examples: rubber and corn). Suppose, for example, that as a result of unexpectedly bad weather, farmers go to market with an unusually small crop of strawberries. This shortage, equivalent to a leftward shift in the market's supply curve, results in high prices. If farmers expect these high price conditions to continue, then in the following year they will raise their production of strawberries relative to other crops. Therefore, when they go to market the supply will be high, resulting in low prices. If they then expect low prices to continue, they will decrease their production of strawberries for the next year, resulting in high prices again.
This process is illustrated by the adjacent diagrams. The equilibrium price is at the intersection of the supply and demand curves. A poor harvest in period 1 means supply falls to Q1, so that prices rise to P1. If producers plan their period 2 production under the expectation that this high price will continue, then the period 2 supply will be higher, at Q2. Prices therefore fall to P2 when they try to sell all their output. As this process repeats itself, oscillating between periods of low supply with high prices and then high supply with low prices, the price and quantity trace out a spiral. They may spiral inwards, as in the top figure, in which case the economy converges to the equilibrium where supply and demand cross; or they may spiral outwards, with the fluctuations increasing in magnitude.
The cobweb model can have two types of outcomes:
If the supply curve is steeper than the demand curve (in absolute value) near the equilibrium, the fluctuations decrease in magnitude with each cycle, and prices and quantities spiral inwards towards the equilibrium. This is the stable or convergent case.
If the demand curve is steeper than the supply curve (in absolute value) near the equilibrium, the fluctuations increase in magnitude with each cycle, and prices and quantities spiral outwards. This is the unstable or divergent case.
Two other possibilities are:
Fluctuations may remain of constant magnitude, so that prices and quantities trace out a fixed rectangle, when the supply and demand curves have exactly the same slope (in absolute value).
With nonlinear curves, prices and quantities may initially spiral away from the equilibrium but settle into a closed loop (a limit cycle) rather than diverging indefinitely.
In either of the first two scenarios, the combination of the spiral and the supply and demand curves often looks like a cobweb, hence the name of the theory.
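The period-by-period reasoning above is easy to simulate. The following Python sketch uses made-up linear supply and demand curves with naive (last-period) price expectations; all parameter values are illustrative assumptions, not estimates of any real market.

# Cobweb dynamics with naive (last-period) price expectations.
# Demand:  Qd(t) = a - b * P(t)       (consumers respond to the current price)
# Supply:  Qs(t) = c + d * P(t-1)     (producers plan using last period's price)
# Market clearing Qd(t) = Qs(t) gives P(t) = (a - c)/b - (d/b) * P(t-1).
# All parameter values below are illustrative assumptions.
a, b = 100.0, 2.0     # demand intercept and slope
c, d = 10.0, 1.5      # supply intercept and slope (d < b gives the convergent case)

p_eq = (a - c) / (b + d)       # equilibrium price, where the two curves cross
price = 0.6 * p_eq             # start away from equilibrium (a "poor harvest")

for t in range(1, 11):
    quantity = c + d * price            # output planned at last period's price
    price = (a - quantity) / b          # price adjusts so the whole crop is sold
    print(f"period {t:2d}: quantity = {quantity:6.2f}, price = {price:6.2f}")

print(f"equilibrium price = {p_eq:.2f}; the path "
      f"{'converges' if d / b < 1 else 'diverges'} because d/b = {d / b:.2f}")

Setting d larger than b reproduces the divergent case, and d equal to b the constant-amplitude oscillation.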
Gori et al. find that cobweb models can undergo Hopf bifurcations (Gori et al., 2014, 2015a, 2015b).
Elasticities versus slopes.
When supply and demand are linear functions the outcomes of the cobweb model are stated above in terms of slopes, but they are more commonly described in terms of elasticities. The "convergent case" requires that the slope of the (inverse) supply curve be greater than the absolute value of the slope of the (inverse) demand curve:
formula_0
In standard microeconomics terminology, define the "elasticity of supply" as formula_1, and the "elasticity of demand" as formula_2. If we evaluate these two elasticities at the equilibrium point, that is formula_3 and formula_4, then we see that the "convergent case" requires
formula_5
whereas the "divergent case" requires
formula_6
In words, the "convergent case" occurs when the demand curve is more elastic than the supply curve, at the equilibrium point. The "divergent case" occurs when the supply curve is more elastic than the demand curve, at the equilibrium point (see Kaldor, 1934, page 135, propositions (i) and (ii).)
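For linear curves, the link between this slope condition and convergence can be made explicit. The sketch below is a supplementary derivation under the same assumptions as above (demand Q^D = a - bP, supply Q^S = c + dP^e, with the expected price P^e equal to last period's price); it is not taken from the sources cited in the article.
\begin{align*}
a - b P_t &= c + d P_{t-1} && \text{(market clearing with } P^e_t = P_{t-1})\\
P_t &= \frac{a - c}{b} - \frac{d}{b}\, P_{t-1} \\
P_t - P^\ast &= -\frac{d}{b}\,\bigl(P_{t-1} - P^\ast\bigr) && \text{with } P^\ast = \frac{a - c}{b + d}.
\end{align*}
The deviation from equilibrium shrinks each period exactly when d/b < 1, that is, when the inverse supply slope 1/d exceeds the inverse demand slope 1/b in absolute value, which is formula_0.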
Role of expectations.
One reason to be skeptical of this model's predictions is that it assumes producers are extremely shortsighted. Assuming that farmers look back at the most recent prices in order to forecast future prices might seem very reasonable, but this backward-looking forecasting (which is called adaptive expectations) turns out to be crucial for the model's fluctuations. When farmers expect high prices to continue, they produce too much and therefore end up with low prices, and vice versa.
In the stable case, this may not be an unbelievable outcome, since the farmers' prediction errors (the difference between the price they expect and the price that actually occurs) become smaller every period. In this case, after several periods prices and quantities will come close to the point where supply and demand cross, and predicted prices will be very close to actual prices. But in the unstable case, the farmers' errors get "larger" every period. This seems to indicate that adaptive expectations is a misleading assumption—how could farmers fail to notice that last period's price is "not" a good predictor of this period's price?
The fact that agents with adaptive expectations may make ever-increasing errors over time has led many economists to conclude that it is better to assume rational expectations, that is, expectations consistent with the actual structure of the economy. However, the rational expectations assumption is controversial since it may exaggerate agents' understanding of the economy. The cobweb model serves as one of the best examples to illustrate why understanding expectation formation is so important for understanding economic dynamics, and also why expectations are so controversial in recent economic theory.
The "Anpassung nach Unten" and "Schraube nach Unten" argument.
The German concepts, which translate literally as "adjustment downwards" and "screw downwards", are known from the works of Hans-Peter Martin and Harald Schumann, the authors of "The Global Trap" (1997). Martin and Schumann describe the process of worsening living standards as screw-shaped. Mordecai Ezekiel's "The Cobweb Theorem" (1938) illustrates a screw-shaped, expectations-driven process. Eino Haikala has analyzed Ezekiel's work, among others, and clarified that time constitutes the axis of the screw shape. Thus Martin and Schumann point out that the cobweb theorem works to worsen standards of living as well. The idea of varying expectations, and thus of modeled and induced expectations, is shown clearly in Oskar Morgenstern's "Vollkommene Voraussicht und Wirtschaftliches Gleichgewicht" ("Perfect Foresight and Economic Equilibrium"). This article also shows that the concept of perfect foresight (vollkommene Voraussicht) is not an invention of Robert E. Lucas or of rational expectations but rests in game theory, Morgenstern and John von Neumann being the authors of "Theory of Games and Economic Behavior" (1944). This does not mean that the rational expectations hypothesis (REH) is not game theory or is separate from the cobweb theorem, but rather the reverse. Alan A. Walters's claim that "there must be" a random component alone shows that rational (consistent) expectations is game theory, since the component is there to create an illusion of a random walk.
Alan A. Walters (1971) also claims that "extrapolators" are "unsophisticated", thus differentiating between prediction and forecasting. Using induced, modeled expectations is prediction, not forecasting, unless these expectations are based on extrapolation. A prediction does not even have to try to be true. To avoid being falsified, a prediction has to be kept private, according to Franco Modigliani and Emile Grunberg's article "The Predictability of Social Events". Thus a public prediction serves a private one in the REH. Haikala (1956) claims that the cobweb theorem is a theorem of deceiving farmers, thus seeing the cobweb theorem as a kind of rational, or rather consistent, expectations model with a game-theoretic feature. This makes sense when considering the argument of Hans-Peter Martin and Harald Schumann. The truth value of a prediction is one measure for differentiating between non-deceiving and deceiving models. In Martin and Schumann's context, a claim that anti-Keynesian policies lead to greater welfare for the majority of mankind should be analyzed in terms of truth. One way to do this is to investigate past historical data. This is contrary to the principles of the REH, where the measure of policies is an economic model, not reality, and credibility, not truth. The importance of the intellectual climate emphasized in Friedman's work means that the credibility of a prediction can be increased by manipulating public opinion, despite its lack of truth. Morgenstern (1935) states that when varying expectations, the expectation of the future always has to be positive (and the prediction has to be credible).
Expectation is a dynamic component in both the REH and the cobweb theorem, and the question of expectation formation is the key to Hans-Peter Martin's and Harald Schumann's argument, which deals with trading current welfare for expected future welfare, with actually worsening policies in between. This 'in order to achieve that then, we have to do this now' reasoning is central to Bertrand de Jouvenel's work. The cobweb theorem and the rational (consistent) expectations hypothesis are part of welfare economics, which, according to Martin and Schumann's argument, now acts to worsen the welfare of the majority of mankind. Nicholas Kaldor's work "The Scourge of Monetarism" is an analysis of how the policies described by Martin and Schumann came to the United Kingdom.
Evidence.
Livestock herds.
The cobweb model has been interpreted as an explanation of fluctuations in various livestock markets, like those documented by Arthur Hanau in German hog markets; see Pork cycle. However, Rosen et al. (1994) proposed an alternative model which showed that because of the three-year life cycle of beef cattle, cattle populations would fluctuate over time even if ranchers had perfectly rational expectations.
Human experimental data.
In 1989, Wellford conducted twelve experimental sessions, each with five participants trading over thirty periods, simulating the stable and unstable cases. Her results show that the unstable case did not produce the divergent behavior implied by cobweb expectations; rather, the participants converged toward the rational expectations equilibrium. However, the variance of the price path in the unstable case was greater than that in the stable case (and the difference was shown to be statistically significant).
One way of interpreting these results is to say that in the long run, the participants behaved as if they had rational expectations, but that in the short run they made mistakes. These mistakes caused larger fluctuations in the unstable case than in the stable case.
Housing sector in Israel.
Primarily as a result of waves of immigration, the residential construction sector was, and still is, a principal factor in the structure of business cycles in Israel. Increasing population, financing methods, higher incomes, and investment needs converged and came to be reflected in skyrocketing demand for housing. On the supply side, technology, private and public entrepreneurship, the housing inventory and the availability of labor converged. The position and direction of the housing sector in the business cycle can be identified using a cobweb model.
|
[
{
"math_id": 0,
"text": "\\frac{dP^S}{dQ^S} > \\left|\\frac{dP^D}{dQ^D}\\right|."
},
{
"math_id": 1,
"text": "\\frac{dQ^S/Q^S}{dP^S/P^S}"
},
{
"math_id": 2,
"text": "\\frac{dQ^D/Q^D}{dP^D/P^D}"
},
{
"math_id": 3,
"text": "P^S=P^D=P>0"
},
{
"math_id": 4,
"text": "Q^S=Q^D=Q>0"
},
{
"math_id": 5,
"text": "\\frac{dQ^S/Q}{dP^S/P}<\\left|\\frac{dQ^D/Q}{dP^D/P}\\right|,"
},
{
"math_id": 6,
"text": "\\frac{dQ^S/Q}{dP^S/P}>\\left|\\frac{dQ^D/Q}{dP^D/P}\\right|."
}
] |
https://en.wikipedia.org/wiki?curid=1476440
|
1476858
|
Electrophoretic deposition
|
Electrophoretic deposition (EPD) is a term for a broad range of industrial processes which includes electrocoating, cathodic electrodeposition, anodic electrodeposition, and electrophoretic coating or electrophoretic painting. A characteristic feature of this process is that colloidal particles suspended in a liquid medium migrate under the influence of an electric field (electrophoresis) and are deposited onto an electrode. All colloidal particles that can be used to form stable suspensions and that can carry a charge can be used in electrophoretic deposition. This includes materials such as polymers, pigments, dyes, ceramics and metals.
The process is useful for applying materials to any electrically conductive surface. The materials which are being deposited are the major determining factor in the actual processing conditions and equipment which may be used.
Due to the wide utilization of electrophoretic painting processes in many industries, aqueous EPD is the most common commercially used EPD process. However, non-aqueous electrophoretic deposition applications are known. Applications of non-aqueous EPD are currently being explored for use in the fabrication of electronic components and the production of ceramic coatings. Non-aqueous processes have the advantage of avoiding the electrolysis of water and the oxygen evolution which accompanies electrolysis.
Uses.
This process is industrially used for applying coatings to metal fabricated products. It has been widely used to coat automobile bodies and parts, tractors and heavy equipment, electrical switch gear, appliances, metal furniture, beverage containers, fasteners, and many other industrial products.
EPD processes are often applied for the fabrication of supported titanium dioxide (TiO2) photocatalysts for water purification applications, using precursor powders which can be immobilised using EPD methods onto various support materials. Thick films produced this way allow cheaper and more rapid synthesis relative to sol-gel thin-films, along with higher levels of photocatalyst surface area.
In the fabrication of solid oxide fuel cells EPD techniques are widely employed for the fabrication of porous ZrO2 anodes from powder precursors onto conductive substrates.
EPD processes have a number of advantages which have made such methods widely used.
Thick, complex ceramic pieces have been made in several research laboratories. Furthermore, EPD has been used to produce customized microstructures, such as functional gradients and laminates, through suspension control during processing.
History.
The first patent for the use of electrophoretic painting was awarded in 1917 to Davey and General Electric. Since the 1920s, the process has been used for the deposition of rubber latex. In the 1930s the first patents were issued which described base neutralized, water dispersible resins specifically designed for EPD.
Electrophoretic coating began to take its current shape in the late 1950s, when Dr. George E. F. Brewer and the Ford Motor Company team began working on developing the process for the coating of automobiles. The first commercial anodic automotive system began operations in 1963.
The first patent for a cathodic EPD product was issued in 1965 and assigned to BASF AG. PPG Industries, Inc. was the first to introduce cathodic EPD commercially, in 1970. The first use of cathodic EPD in the automotive industry was in 1975. Today, around 70% of the volume of EPD in use worldwide is of the cathodic type, largely due to the high usage of the technology in the automotive industry, where it has resulted in a great extension of vehicle body life.
There are thousands of patents which have been issued relating to various EPD compositions, EPD processes, and articles coated with EPD. Although patents have been issued by various government patent offices, virtually all of the significant developments can be followed by reviewing the patents issued by the U.S. Patent and Trademark Office.
Process.
The overall industrial process of electrophoretic deposition consists of several sub-processes:
During the EPD process itself, direct current is applied to a solution of polymers with ionizable groups or a colloidal suspension of polymers with ionizable groups which may also incorporate solid materials such as pigments and fillers. The ionizable groups incorporated into the polymer are formed by the reaction of an acid and a base to form a salt. The particular charge, positive or negative, which is imparted to the polymer depends on the chemical nature of the ionizable group. If the ionizable groups on the polymer are acids, the polymer will carry a negative charge when salted with a base. If the ionizable groups on the polymer are bases, the polymer will carry a positive charge when salted with an acid.
There are two types of EPD processes, anodic and cathodic. In the anodic process, negatively charged material is deposited on the positively charged electrode, or anode. In the cathodic process, positively charged material is deposited on the negatively charged electrode, or cathode.
When an electric field is applied, all of the charged species migrate by the process of electrophoresis towards the electrode with the opposite charge. There are several mechanisms by which material can be deposited on the electrode:
The primary electrochemical process which occurs during aqueous electrodeposition is the electrolysis of water. This can be shown by the following two half reactions which occur at the two electrodes:
Anode: 2 H2O → O2(gas) + 4 H+ + 4 e−
Cathode: 4 H2O + 4 e− → 4 OH− + 2 H2(gas)
In anodic deposition, the material being deposited will have salts of an acid as the charge bearing group. These negatively charged anions react with the positively charged hydrogen ions (protons) which are being produced at the anode by the electrolysis of water to reform the original acid. The fully protonated acid carries no charge (charge destruction) and is less soluble in water, and may precipitate out of the water onto the anode.
The analogous situation occurs in cathodic deposition, except that the material being deposited will have salts of a base as the charge-bearing group. If the salt of the base has been formed by protonation of the base, the protonated base will react with the hydroxyl ions being formed by electrolysis of water to yield the neutral base (again charge destruction) and water. The uncharged polymer is less soluble in water than it was when it was charged, and precipitation onto the cathode occurs.
Onium salts, which have been used in the cathodic process, are not protonated bases and do not deposit by the mechanism of charge destruction. These types of materials can be deposited on the cathode by concentration coagulation and salting out. As the colloidal particles reach the solid object to be coated, they become squeezed together, and the water in the interstices is forced out. As the individual micelles are squeezed, they collapse to form increasingly larger micelles. Colloidal stability is inversely proportional to the size of the micelle, so as the micelles get bigger, they become less and less stable until they precipitate from solution onto the object to be coated. As more and more charged groups are concentrated into a smaller volume, the ionic strength of the medium increases, which also assists in precipitating the materials out of solution. Both of these processes occur simultaneously and both contribute to the deposition of material.
Factors affecting electrophoretic painting.
During the aqueous deposition process, gas is formed at both electrodes: hydrogen at the cathode and oxygen at the anode. For a given amount of charge transferred, exactly twice as much hydrogen is generated as oxygen on a molecular basis, since the half reactions above show that four electrons liberate two molecules of H2 at the cathode but only one molecule of O2 at the anode.
This has some significant effects on the coating process. The most obvious is in the appearance of the deposited film prior to the baking process. The cathodic process results in considerably more gas being trapped within the film than the anodic process. Since the gas has a higher electrical resistance than either depositing film or the bath itself, the amount of gas has a significant effect on the current at a given applied voltage. This is why cathodic processes are often able to be operated at significantly higher voltages than the corresponding anodic processes.
The deposited coating has significantly higher resistance than the object which is being coated. As the deposited film precipitates, the resistance increases. The increase in resistance is proportional to the thickness of the deposited film, and thus, at a given voltage, the electric current decreases as the film gets thicker until it finally reaches a point where deposition has slowed or stopped occurring (self-limiting). Thus the applied voltage is the primary control for the amount of film applied.
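A toy calculation can make the self-limiting behaviour concrete. The Python sketch below is only a schematic model, not the actual process physics: it assumes a constant applied voltage, a film resistance proportional to thickness, and a growth rate proportional to current, with all numbers invented for illustration.

# Schematic model of self-limiting electrophoretic film growth.
# Assumptions (illustrative only): constant applied voltage, film resistance
# proportional to film thickness, deposition rate proportional to current.
V = 250.0        # applied voltage, V (assumed)
R_bath = 50.0    # fixed bath plus substrate resistance, ohm (assumed)
r_film = 100.0   # film resistance per micrometre of thickness, ohm/um (assumed)
k_dep = 0.01     # thickness deposited per coulomb of charge, um/C (assumed)
dt = 0.1         # time step, s

thickness = 0.0
for step in range(int(120 / dt)):                     # simulate two minutes
    current = V / (R_bath + r_film * thickness)       # film adds series resistance
    thickness += k_dep * current * dt                 # growth follows the current
    if step % 300 == 0:
        print(f"t = {step * dt:5.1f} s   I = {current:5.2f} A   film = {thickness:4.2f} um")
# The printed current falls as the film thickens, so growth slows: the coating
# is self-limiting at a fixed applied voltage, as described above.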
The ability for the EPD coating to coat interior recesses of a part is called the "throwpower". In many applications, it is desirable to use coating materials with a high throwpower. The throwpower of a coating is dependent on a number of variables, but generally, it can be stated that the higher the coating voltage, the further a given coating will "throw" into recesses. High throwpower electrophoretic paints typically use application voltages in excess of 300 volts DC.
The coating temperature is also an important variable affecting the EPD process. The coating temperature has an effect on the bath conductivity and deposited film conductivity, which increases as temperature increases. Temperature also has an effect on the viscosity of the deposited film, which in turn affects the ability of the deposited film to release the gas bubbles being formed.
The coalescence temperature of the coating system is also an important variable for the coating designer. It can be determined by plotting the film build of a given system against coating temperature while keeping the coating time and voltage application profile constant. At temperatures below the coalescence temperature, film growth and rupturing behavior are quite different from the usual practice as a result of porous deposition.
The coating time also is an important variable in determining the film thickness, the quality of the deposited film, and the throwpower. Depending on the type of object being coated, coating times of several seconds up to several minutes may be appropriate.
The maximum voltage which can be utilized depends on the type of coating system and a number of other factors. As already stated, film thickness and throwpower depend on the application voltage. However, at excessively high voltages, a phenomenon called "rupture" can occur; the voltage at which it occurs is called the "rupture voltage". The result of rupture is a film that is usually very thick and porous, which is normally not acceptable cosmetically or functionally. The causes and mechanisms of rupturing are not completely understood.
Types of EPD chemistries.
There are two major categories of EPD chemistries: anodic and cathodic. Both continue to be used commercially, although the anodic process has been in use industrially for a longer period of time and is thus considered to be the older of the two processes. There are advantages and disadvantages for both types of processes, and different experts may have different perspectives on some of the pros and cons of each.
The major advantages that are normally touted for the anodic process are:
The major advantages that are normally touted for the cathodic processes are:
A significant and real difference which is not often mentioned is the fact that acid-catalyzed crosslinking technologies are more appropriate to the anodic process. Such crosslinkers are widely used in all types of coating applications. They include popular and relatively inexpensive crosslinkers such as melamine-formaldehyde, phenol-formaldehyde, urea-formaldehyde, and acrylamide-formaldehyde.
Melamine-formaldehyde type crosslinkers in particular are widely used in anodic electrocoatings. These types of crosslinkers are relatively inexpensive and provide a wide range of cure and performance characteristics which allow the coating designer to tailor the product for the desired end use. Coatings formulated with this type of crosslinker can have acceptable UV light resistance. Many of them are relatively low-viscosity materials and can act as a reactive plasticizer, replacing some of the organic solvent that might otherwise be necessary. The amount of free formaldehyde, as well as the formaldehyde which may be released during the baking process, is of concern, as these are considered hazardous air pollutants.
The deposited film in cathodic systems is quite alkaline, and acid catalyzed crosslinking technologies have not been preferred in cathodic products in general, although there have been some exceptions. The most common type of crosslinking chemistry in use today with cathodic products are based on urethane and urea chemistries.
The aromatic polyurethane and urea type crosslinker is one of the significant reasons why many cathodic electrocoats show high levels of protection against corrosion. It is of course not the only reason, but if one compares electrocoating compositions containing aromatic urethane crosslinkers with analogous systems containing aliphatic urethane crosslinkers, the systems with aromatic urethane crosslinkers consistently perform significantly better. However, coatings containing aromatic urethane crosslinkers generally do not perform well in terms of UV light resistance. If the resulting coating contains aromatic urea crosslinks, the UV resistance will be considerably worse than if only urethane crosslinks can occur. A disadvantage of aromatic urethanes is that they can cause yellowing of the coating itself as well as of subsequent topcoat layers. A significant undesired side reaction which occurs during the baking process produces aromatic polyamines: urethane crosslinkers based on toluene diisocyanate (TDI) can be expected to produce toluene diamine, whereas those based on methylene diphenyl diisocyanate produce diaminodiphenylmethane and higher-order aromatic polyamines. These undesired aromatic polyamines can inhibit the cure of subsequent acid-catalysed topcoat layers and can cause delamination of the subsequent topcoat layers after exposure to sunlight. Although the industry has never acknowledged this problem, many of these undesired aromatic polyamines are known or suspected carcinogens.
Besides the two major categories of anodic and cathodic, EPD products can also be described by the base polymer chemistry which is utilized. Several polymer types have been used commercially. Many of the earlier anodic types were based on maleinized oils of various kinds, tall oil and linseed oil being two of the more common. Today, epoxy and acrylic types predominate.
Kinetics.
The rate of electrophoretic deposition (EPD) is dependent on multiple different kinetic processes acting in concert. One of the primary kinetic processes involved in EPD is electrophoresis, the movement of charged particles in response to an electric field. But as the local concentration of particles decreases near the electrodes, particle diffusion from areas of high concentration to low concentration, driven by a difference in chemical potential, will also influence the rate of deposition. This section will discuss the conditions that determine the rates of each of these processes and how those variables are incorporated into different models used to evaluate EPD.
For either process to occur, the particles must form a stable suspension. There are four common processes by which a particle can obtain the surface charge needed to form a stable dispersion: (1) dissociation or ionization of a surface group, (2) preferential adsorption of ions, (3) adsorption of ionized surfactants, and (4) isomorphic substitution. The particle's surface chemistry and its local environment determine how it obtains a surface charge. Without sufficient surface charge to balance the van der Waals attractive forces between particles, they will aggregate. A charged surface is not the only parameter that influences colloidal stability: particle size, zeta potential, and the solvent's conductivity, viscosity, and dielectric constant also determine the dispersion's stability. So long as the dispersion is stable, the initial rate of deposition is determined primarily by the electric field strength. Solution resistance can dissipate part of the applied voltage, so the effective voltage at each electrode may be lower than intended. The charged particles attach to a substrate located on the oppositely charged electrode. As a simplification, under low voltages and short deposition times, Hamaker's law describes a linear relationship between the deposited mass, the field strength, and time.
formula_0
This equation gives the electrophoretically deposited mass "m" in grams as a function of the electrophoretic mobility "μ" (in cm2 V−1 s−1), the solids loading "Cs" (in g cm−3), the covered surface area "S" (in cm2), the electric field strength "E" (in V cm−1) and the deposition time "t" (in s). It is useful for evaluating the efficiency of applied EPD processes relative to theoretical values.
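As a quick illustration of how this expression is used, the snippet below evaluates Hamaker's law for a set of assumed (purely illustrative) process values and compares a hypothetical weighed deposit against the theoretical mass to estimate a deposition efficiency.

# Hamaker's law for electrophoretic deposition: m = mu * Cs * S * E * t
# All numbers below are illustrative assumptions, not data from a real process.
mu = 2.0e-4     # electrophoretic mobility, cm^2 V^-1 s^-1 (assumed)
Cs = 0.05       # solids loading, g cm^-3 (assumed)
S = 25.0        # coated surface area, cm^2 (assumed)
E = 10.0        # electric field strength, V cm^-1 (assumed)
t = 120.0       # deposition time, s (assumed)

m_theory = mu * Cs * S * E * t     # theoretical deposited mass, in grams
m_measured = 0.24                  # hypothetical weighed deposit, g (assumed)

print(f"theoretical mass: {m_theory:.3f} g")                        # 0.300 g
print(f"apparent efficiency: {100 * m_measured / m_theory:.0f} %")  # 80 %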
The simple linear approximation applied by Hamaker's law degrades under higher voltages and longer deposition times. Under higher voltage, chemical reactions, such as reduction, driven by the influence of the applied field can obscure the kinetics. So, solvents with high reduction-oxidation potentials should be used to avoid electrolysis and the gas evolution. And if the deposited particles are insulating, then as the deposited layer grows thicker the effective electric field will decrease. In addition, the area surrounding the electroactive region near the electrodes will be depleted of particles. Particle diffusion from the bulk to the electroactive region may limit the rate of growth. The diffusion of particles from high to low concentration can be approximated by Fick's laws and its rate will be determined by the difference in particle concentration as well as solvent viscosity, particle mass, and colloidal stability. Eventually, as deposition thickness increases and field strength decreases, the growth will saturate. The change in thickness that occurs at the onset of saturation is described by the following equation.
formula_1 where formula_2
Here w is the weight of solid particles deposited on the electrode, k the kinetic constant, t the deposition time, A the area of the electrode, V the slurry volume, formula_3 the starting weight of the solid particles in the slurry, ε the dielectric constant of the liquid, ξ the zeta potential of the particles in the solvent, η the viscosity of the solvent, E the applied direct-current voltage, and formula_4E the voltage drop across the deposited layer.
Before saturation there is a linear relationship between deposition thickness and time. The onset of saturation leads to a decrease in the rate of deposition that is modelled as parabolic behavior. The critical transition time between linear and parabolic behavior is approximated by the following equation.
formula_5
t is the critical transition time, formula_6 is the slope of the parabolic regime, and formula_7 is the slope of the rate of deposition layer growth in the linear regime.
In determining the applicability of EPD to a system it is necessary to ensure the colloidal stability, and the combination of applied voltage and reaction time that will yield the intended deposited thickness.
Non-aqueous electrophoretic deposition.
In certain applications, such as the deposition of ceramic materials, voltages above 3–4V cannot be applied in aqueous EPD if it is necessary to avoid the electrolysis of water. However, higher application voltages may be desirable in order to achieve higher coating thicknesses or to increase the rate of deposition. In such applications, organic solvents are used instead of water as the liquid medium. The organic solvents used are generally polar solvents such as alcohols and ketones. Ethanol, acetone, and methyl ethyl ketone are examples of solvents which have been reported as suitable candidates for use in electrophoretic deposition.
|
[
{
"math_id": 0,
"text": "m=\\mu \\times C_s \\times S\\times E \\times t"
},
{
"math_id": 1,
"text": "{dw(t) \\over dt}=w_of\\exp(-kt)"
},
{
"math_id": 2,
"text": "k={A \\over V}{\\epsilon\\xi \\over 4\\pi\\eta}(E-\\Delta E)"
},
{
"math_id": 3,
"text": "w_o"
},
{
"math_id": 4,
"text": "\\Delta"
},
{
"math_id": 5,
"text": "t^{\\operatorname{1}\\over\\operatorname{2}}e^{\\operatorname{-k}t}={S_2 \\over 2S_1}"
},
{
"math_id": 6,
"text": "S_2"
},
{
"math_id": 7,
"text": "S_1"
}
] |
https://en.wikipedia.org/wiki?curid=1476858
|
1477110
|
Napierian logarithm
|
The term Napierian logarithm or Naperian logarithm, named after John Napier, is often used to mean the natural logarithm. Napier did not introduce this "natural" logarithmic function, although it is named after him.
However, if it is taken to mean the "logarithms" as originally produced by Napier, it is a function given by (in terms of the modern natural logarithm):
formula_0
The Napierian logarithm satisfies identities quite similar to the modern logarithm, such as
formula_1
or
formula_2
In Napier's 1614 "Mirifici Logarithmorum Canonis Descriptio", he provides tables of logarithms of sines for 0 to 90°, where the values given (columns 3 and 5) are
formula_3
Properties.
Napier's "logarithm" is related to the natural logarithm by the relation
formula_4
and to the common logarithm by
formula_5
Note that
formula_6
and
formula_7
Napierian logarithms are essentially natural logarithms with decimal points shifted 7 places rightward and with sign reversed. For instance the logarithmic values
formula_8
formula_9
would have the corresponding Napierian logarithms:
formula_10
formula_11
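These relations are easy to verify numerically. The short Python sketch below evaluates NapLog(x) = -10^7 ln(x/10^7) and reproduces, to rounding, the values quoted above.

import math

def nap_log(x):
    # Napier's logarithm expressed through the modern natural logarithm.
    return -1e7 * math.log(x / 1e7)

print(round(nap_log(5000000)))   # 6931472
print(round(nap_log(3333333)))   # 10986124
print(nap_log(1e7))              # 0.0: NapLog(10^7) = 0, and NapLog decreases as x grows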
For further detail, see history of logarithms.
|
[
{
"math_id": 0,
"text": "\\mathrm{NapLog}(x) = -10^7 \\ln (x/10^7) "
},
{
"math_id": 1,
"text": "\\mathrm{NapLog}(xy) \\approx \\mathrm{NapLog}(x)+\\mathrm{NapLog}(y)-161180956"
},
{
"math_id": 2,
"text": "\\mathrm{NapLog}(xy/10^7) = \\mathrm{NapLog}(x)+\\mathrm{NapLog}(y) "
},
{
"math_id": 3,
"text": "\\mathrm{NapLog}(\\theta) = -10^7 \\ln (\\sin(\\theta)) "
},
{
"math_id": 4,
"text": "\\mathrm{NapLog} (x) \\approx 10000000 (16.11809565 - \\ln x)"
},
{
"math_id": 5,
"text": "\\mathrm{NapLog} (x) \\approx 23025851 (7 - \\log_{10} x)."
},
{
"math_id": 6,
"text": "16.11809565 \\approx 7 \\ln \\left(10\\right) "
},
{
"math_id": 7,
"text": "23025851 \\approx 10^7 \\ln (10)."
},
{
"math_id": 8,
"text": "\\ln(.5000000) = -0.6931471806"
},
{
"math_id": 9,
"text": "\\ln(.3333333) = -1.0986123887"
},
{
"math_id": 10,
"text": "\\mathrm{NapLog}(5000000) = 6931472"
},
{
"math_id": 11,
"text": "\\mathrm{NapLog}(3333333) = 10986124"
}
] |
https://en.wikipedia.org/wiki?curid=1477110
|
14773
|
Information theory
|
Scientific study of digital information
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and put on a firm footing by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering.
A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security.
Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet. The theory has found applications in other areas, including statistical inference, cryptography, neurobiology, perception, linguistics, the evolution and function of molecular codes (bioinformatics), thermal physics, molecular dynamics, quantum computing, black holes, information retrieval, intelligence gathering, plagiarism detection, pattern recognition, anomaly detection, imaging system design, epistemology, and even art creation.
Overview.
Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled "A Mathematical Theory of Communication", in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent.
Coding theory is concerned with finding explicit methods, called "codes", for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible.
A third class of information theory codes are cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis, such as the unit ban.
Historical background.
The landmark event "establishing" the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the "Bell System Technical Journal" in July and October 1948. He came to be known as the "father of information theory". Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush.
Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, "Certain Factors Affecting Telegraph Speed", contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation "W" = "K" log "m" (recalling the Boltzmann constant), where "W" is the speed of transmission of intelligence, "m" is the number of different voltage levels to choose from at each time step, and "K" is a constant. Ralph Hartley's 1928 paper, "Transmission of Information", uses the word "information" as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as "H" = log "S""n" = "n" log "S", where "S" was the number of possible symbols, and "n" the number of symbols in a transmission. The unit of information was therefore the decimal digit, which since has sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German second world war Enigma ciphers.
Much of the mathematics behind information theory with events of different probabilities were developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in "Entropy in thermodynamics and information theory".
In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion:
"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
With it came the ideas of
Quantities of information.
Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits. Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of measure of information in a single random variable. Another useful concept is mutual information defined on two random variables, which describes the measure of information in common between those variables, which can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution.
The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm.
In what follows, an expression of the form "p" log "p" is considered by convention to be equal to zero whenever "p" = 0. This is justified because formula_0 for any logarithmic base.
Entropy of an information source.
Based on the probability mass function of each source symbol to be communicated, the Shannon entropy "H", in units of bits (per symbol), is given by
formula_1
where "pi" is the probability of occurrence of the "i"-th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 28 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.
Intuitively, the entropy "H"("X") of a discrete random variable "X" is a measure of the amount of "uncertainty" associated with the value of "X" when only its distribution is known.
The entropy of a source that emits a sequence of "N" symbols that are independent and identically distributed (iid) is "N" ⋅ "H" bits (per message of "N" symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length "N" will be less than "N" ⋅ "H".
If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If formula_2 is the set of all messages {"x"1, ..., "x""n"} that "X" could be, and "p"("x") is the probability of some formula_3, then the entropy, "H", of "X" is defined:
formula_4
(Here, "I"("x") is the self-information, which is the entropy contribution of an individual message, and formula_5 is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable "p"("x") = 1/"n"; i.e., most unpredictable, in which case "H"("X") = log "n".
The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit:
formula_6
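A direct implementation of these definitions is given below as a minimal sketch; it uses base-2 logarithms, so results are in shannons (bits), and it reproduces the fair-coin versus fair-die comparison made earlier as well as the binary entropy function.

import math

def entropy(probs, base=2.0):
    # H(X) = -sum p(x) log p(x); terms with p(x) = 0 contribute nothing.
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def binary_entropy(p):
    # Entropy of a two-outcome (Bernoulli) variable, in shannons.
    return entropy([p, 1.0 - p])

print(entropy([0.5, 0.5]))      # fair coin: 1 bit
print(entropy([1 / 6] * 6))     # fair die: log2(6), about 2.585 bits
print(entropy([1.0, 0.0]))      # a certain outcome: 0 bits
print(binary_entropy(0.25))     # a biased coin carries less than 1 bit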
Joint entropy.
The joint entropy of two discrete random variables "X" and "Y" is merely the entropy of their pairing: ("X", "Y"). This implies that if "X" and "Y" are independent, then their joint entropy is the sum of their individual entropies.
For example, if ("X", "Y") represents the position of a chess piece—"X" the row and "Y" the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece.
formula_7
Despite similar notation, joint entropy should not be confused with cross-entropy.
Conditional entropy (equivocation).
The conditional entropy or "conditional uncertainty" of "X" given random variable "Y" (also called the "equivocation" of "X" about "Y") is the average conditional entropy over "Y":
formula_8
Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that:
formula_9
Mutual information (transinformation).
"Mutual information" measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of "X" relative to "Y" is given by:
formula_10
where SI ("S"pecific mutual Information) is the pointwise mutual information.
A basic property of the mutual information is that
formula_11
That is, knowing "Y", we can save an average of "I"("X"; "Y") bits in encoding "X" compared to not knowing "Y".
Mutual information is symmetric:
formula_12
Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of "X" given the value of "Y" and the prior distribution on "X":
formula_13
In other words, this is a measure of how much, on the average, the probability distribution on "X" will change if we are given the value of "Y". This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution:
formula_14
Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution.
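The sketch below computes mutual information from a small, arbitrary joint distribution using the identity I(X;Y) = H(X) + H(Y) - H(X,Y), which follows from the definitions of entropy, joint entropy and mutual information given above; the probability table is invented purely for illustration.

import math

def H(probs):
    # Shannon entropy in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Arbitrary illustrative joint distribution p(x, y) over two binary variables.
joint = {(0, 0): 0.40, (0, 1): 0.10,
         (1, 0): 0.15, (1, 1): 0.35}

p_x = [sum(p for (x, _), p in joint.items() if x == v) for v in (0, 1)]
p_y = [sum(p for (_, y), p in joint.items() if y == v) for v in (0, 1)]

mi = H(p_x) + H(p_y) - H(joint.values())
print(f"I(X;Y) = {mi:.4f} bits")   # 0 bits would indicate independence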
Kullback–Leibler divergence (information gain).
The "Kullback–Leibler divergence" (or "information divergence", "information gain", or "relative entropy") is a way of comparing two distributions: a "true" probability distribution &NoBreak;&NoBreak;, and an arbitrary probability distribution &NoBreak;&NoBreak;. If we compress data in a manner that assumes &NoBreak;&NoBreak; is the distribution underlying some data, when, in reality, &NoBreak;&NoBreak; is the correct distribution, the Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression. It is thus defined
formula_15
Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric).
Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number "X" is about to be drawn randomly from a discrete set with probability distribution "p"("x"). If Alice knows the true distribution "p"("x"), while Bob believes (has a prior) that the distribution is "q"("x"), then Bob will be more surprised than Alice, on average, upon seeing the value of "X". The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the "log" is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
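A direct numerical sketch of this definition: with a made-up "true" distribution p and a mismatched distribution q over the same three-symbol alphabet, the snippet below computes the divergence in bits and shows the asymmetry mentioned above.

import math

def kl_divergence(p, q):
    # D(p || q) in bits; assumes q(x) > 0 wherever p(x) > 0.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]        # "true" distribution (illustrative)
q = [1 / 3, 1 / 3, 1 / 3]  # assumed/prior distribution (illustrative)

print(kl_divergence(p, q))   # extra bits per symbol paid for coding with q
print(kl_divergence(q, p))   # a different value: the divergence is not symmetric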
Directed Information.
Directed information, formula_16, is an information theory measure that quantifies the information flow from the random process formula_17 to the random process formula_18. The term "directed information" was coined by James Massey and is defined as
formula_19,
where formula_20 is the conditional mutual information formula_21.
In contrast to "mutual" information, "directed" information is not symmetric. The quantity formula_16 measures the information bits that are transmitted causally from formula_22 to formula_23. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback, the capacity of discrete memoryless networks with feedback, gambling with causal side information, compression with causal side information, real-time control communication settings, and statistical physics.
Other quantities.
Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision.
Coding theory.
Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.
This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal.
Source theory.
Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.
Rate.
Information "rate" is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is
formula_24
that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the "average rate" is
formula_25
that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.
Information rate is defined as
formula_26
It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.
Channel capacity.
Communications over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.
Consider the communications process over a discrete channel. A simple model of the process is shown below:
formula_27
Here "X" represents the space of messages transmitted, and "Y" the space of messages received during a unit time over our channel. Let "p"("y"|"x") be the conditional probability distribution function of "Y" given "X". We will consider "p"("y"|"x") to be an inherent fixed property of our communications channel (representing the nature of the "noise" of our channel). Then the joint distribution of "X" and "Y" is completely determined by our channel and by our choice of "f"("x"), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the "signal", we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by:
formula_28
This capacity has the following property related to communicating at information rate "R" (where "R" is usually bits per symbol). For any information rate "R" < "C" and coding error "ε" > 0, for large enough "N", there exists a code of length "N" and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ "ε"; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate "R" > "C", it is impossible to transmit with arbitrarily small block error.
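As a standard concrete example of this maximization (not specific to the article), the sketch below evaluates the mutual information of a binary symmetric channel with crossover probability p over a grid of input distributions; the maximum is attained at the uniform input and equals 1 - H_b(p), the well-known capacity of that channel.

import math

def h2(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def mi_bsc(px1, p):
    # I(X;Y) for a binary symmetric channel with crossover probability p,
    # when the input X equals 1 with probability px1: I = H(Y) - H(Y|X).
    py1 = px1 * (1 - p) + (1 - px1) * p
    return h2(py1) - h2(p)

p = 0.11   # crossover probability (assumed for illustration)
capacity = max(mi_bsc(k / 1000, p) for k in range(1001))
print(f"numerical capacity  : {capacity:.4f} bits per channel use")
print(f"closed form 1 - H(p): {1 - h2(p):.4f} bits per channel use")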
"Channel coding" is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.
Channels with memory and directed information.
In practice many channels have memory. Namely, at time formula_29 the channel is given by the conditional probability formula_30.
It is often more convenient to use the notation formula_31, in which case the channel becomes formula_32.
In such a case the capacity is given by the mutual information rate when no feedback is available, and by the directed information rate whether or not feedback is available (if there is no feedback, the directed information equals the mutual information).
Fungible information.
Fungible information is the information for which the means of encoding is not important. Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information.
Applications to other fields.
Intelligence uses and secrecy applications.
Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability.
Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time.
Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.
Pseudorandom number generation.
Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor, and hence not for cryptographic use.
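The distinction can be made concrete with a small sketch (illustrative numbers only): a distribution may look quite "random" by Shannon's measure and still have low min-entropy, because a single outcome dominates.
<syntaxhighlight lang="python">
import math

def shannon_entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def min_entropy(p):
    return -math.log2(max(p))

# One symbol occurs half the time; the other 255 symbols share the rest evenly.
p = [0.5] + [0.5 / 255] * 255
print(f"Shannon entropy: {shannon_entropy(p):.2f} bits")   # about 5.0 bits
print(f"min-entropy:     {min_entropy(p):.2f} bits")        # exactly 1.0 bit
</syntaxhighlight>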
Seismic exploration.
One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods.
Semiotics.
Semioticians Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing."
Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and others to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones.
Integrated process organization of neural information.
Quantitative information-theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience. In this context, an information-theoretical measure is defined on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations; examples are functional clusters (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH)) and effective information (Tononi's integrated information theory (IIT) of consciousness). Alternatively, the minimization of free energy is measured on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis).
Miscellaneous applications.
Information theory also has applications in the search for extraterrestrial intelligence, black holes, bioinformatics, and gambling.
See also.
<templatestyles src="Div col/styles.css"/>
Applications.
<templatestyles src="Div col/styles.css"/>
Theory.
<templatestyles src="Div col/styles.css"/>
Concepts.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
The classic work.
<templatestyles src="Refbegin/styles.css" />
Other journal articles.
<templatestyles src="Refbegin/styles.css" />
Textbooks on information theory.
<templatestyles src="Refbegin/styles.css" />
Other books.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\lim_{p \\rightarrow 0+} p \\log p = 0"
},
{
"math_id": 1,
"text": "H = - \\sum_{i} p_i \\log_2 (p_i)"
},
{
"math_id": 2,
"text": "\\mathbb{X}"
},
{
"math_id": 3,
"text": "x \\in \\mathbb X"
},
{
"math_id": 4,
"text": " H(X) = \\mathbb{E}_{X} [I(x)] = -\\sum_{x \\in \\mathbb{X}} p(x) \\log p(x)."
},
{
"math_id": 5,
"text": "\\mathbb{E}_X"
},
{
"math_id": 6,
"text": "H_{\\mathrm{b}}(p) = - p \\log_2 p - (1-p)\\log_2 (1-p)."
},
{
"math_id": 7,
"text": "H(X, Y) = \\mathbb{E}_{X,Y} [-\\log p(x,y)] = - \\sum_{x, y} p(x, y) \\log p(x, y) \\,"
},
{
"math_id": 8,
"text": " H(X|Y) = \\mathbb E_Y [H(X|y)] = -\\sum_{y \\in Y} p(y) \\sum_{x \\in X} p(x|y) \\log p(x|y) = -\\sum_{x,y} p(x,y) \\log p(x|y)."
},
{
"math_id": 9,
"text": " H(X|Y) = H(X,Y) - H(Y) .\\,"
},
{
"math_id": 10,
"text": "I(X;Y) = \\mathbb{E}_{X,Y} [SI(x,y)] = \\sum_{x,y} p(x,y) \\log \\frac{p(x,y)}{p(x)\\, p(y)}"
},
{
"math_id": 11,
"text": "I(X;Y) = H(X) - H(X|Y).\\,"
},
{
"math_id": 12,
"text": "I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y).\\,"
},
{
"math_id": 13,
"text": "I(X;Y) = \\mathbb E_{p(y)} [D_{\\mathrm{KL}}( p(X|Y=y) \\| p(X) )]."
},
{
"math_id": 14,
"text": "I(X; Y) = D_{\\mathrm{KL}}(p(X,Y) \\| p(X)p(Y))."
},
{
"math_id": 15,
"text": "D_{\\mathrm{KL}}(p(X) \\| q(X)) = \\sum_{x \\in X} -p(x) \\log {q(x)} \\, - \\, \\sum_{x \\in X} -p(x) \\log {p(x)} = \\sum_{x \\in X} p(x) \\log \\frac{p(x)}{q(x)}."
},
{
"math_id": 16,
"text": "I(X^n\\to Y^n) "
},
{
"math_id": 17,
"text": "X^n = \\{X_1,X_2,\\dots,X_n\\}"
},
{
"math_id": 18,
"text": "Y^n = \\{Y_1,Y_2,\\dots,Y_n\\}"
},
{
"math_id": 19,
"text": "I(X^n\\to Y^n) \\triangleq \\sum_{i=1}^n I(X^i;Y_i|Y^{i-1})"
},
{
"math_id": 20,
"text": "I(X^{i};Y_i|Y^{i-1})"
},
{
"math_id": 21,
"text": "I(X_1,X_2,...,X_{i};Y_i|Y_1,Y_2,...,Y_{i-1})"
},
{
"math_id": 22,
"text": "X^n"
},
{
"math_id": 23,
"text": "Y^n"
},
{
"math_id": 24,
"text": "r = \\lim_{n \\to \\infty} H(X_n|X_{n-1},X_{n-2},X_{n-3}, \\ldots);"
},
{
"math_id": 25,
"text": "r = \\lim_{n \\to \\infty} \\frac{1}{n} H(X_1, X_2, \\dots X_n);"
},
{
"math_id": 26,
"text": "r = \\lim_{n \\to \\infty} \\frac{1}{n} I(X_1, X_2, \\dots X_n;Y_1,Y_2, \\dots Y_n);"
},
{
"math_id": 27,
"text": "\n\\xrightarrow[\\text{Message}]{W}\n\\begin{array}{ |c| }\\hline \\text{Encoder} \\\\ f_n \\\\ \\hline\\end{array} \\xrightarrow[\\mathrm{Encoded \\atop sequence}]{X^n} \\begin{array}{ |c| }\\hline \\text{Channel} \\\\ p(y|x) \\\\ \\hline\\end{array} \\xrightarrow[\\mathrm{Received \\atop sequence}]{Y^n} \\begin{array}{ |c| }\\hline \\text{Decoder} \\\\ g_n \\\\ \\hline\\end{array} \\xrightarrow[\\mathrm{Estimated \\atop message}]{\\hat W}"
},
{
"math_id": 28,
"text": " C = \\max_{f} I(X;Y).\\! "
},
{
"math_id": 29,
"text": " i "
},
{
"math_id": 30,
"text": " P(y_i|x_i,x_{i-1},x_{i-2},...,x_1,y_{i-1},y_{i-2},...,y_1). "
},
{
"math_id": 31,
"text": " x^i=(x_i,x_{i-1},x_{i-2},...,x_1) "
},
{
"math_id": 32,
"text": " P(y_i|x^i,y^{i-1}). "
}
] |
https://en.wikipedia.org/wiki?curid=14773
|
14777833
|
Barnes–Wall lattice
|
In mathematics, the Barnes–Wall lattice Λ16, discovered by Eric Stephen Barnes and G. E. (Tim) Wall (), is the 16-dimensional positive-definite even integral lattice of discriminant 28 with no norm-2 vectors. It is the sublattice of the Leech lattice fixed by a certain automorphism of order 2, and is analogous to the Coxeter–Todd lattice.
The automorphism group of the Barnes–Wall lattice has order 89181388800 = 2^21·3^5·5^2·7 and has structure 2^(1+8).PSO8+(F2). There are 4320 vectors of norm 4 in the Barnes–Wall lattice (the shortest nonzero vectors in this lattice).
The genus of the Barnes–Wall lattice was described by and contains 24 lattices; all the elements other than the Barnes–Wall lattice have root system of maximal rank 16.
The Barnes–Wall lattice is described in detail in .
While Λ16 is often referred to as "the" Barnes–Wall lattice, their original article in fact constructs a family of lattices of increasing dimension n = 2^k for any integer k, and increasing normalized minimal distance, namely n^(1/4). This is to be compared to the normalized minimal distance of 1 for the trivial lattice formula_0, and an upper bound of formula_1 given by Minkowski's theorem applied to Euclidean balls. Interestingly, this family comes with a polynomial-time decoding algorithm by .
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}^n"
},
{
"math_id": 1,
"text": " 2 \\cdot \\Gamma\\left(\\frac n 2 + 1\\right)^{1/n} \\big/ \\sqrt{\\pi} = \\sqrt{\\frac {2n}{\\pi e}} + o(\\sqrt{n})"
}
] |
https://en.wikipedia.org/wiki?curid=14777833
|
14780124
|
Rasta filtering
|
RASTA filtering and mean subtraction were introduced to support perceptual linear prediction
(PLP) preprocessing. RASTA filtering uses bandpass filtering in the log spectral domain to remove slow channel variations. It has also been applied to cepstrum feature-based preprocessing with both log spectral and cepstral domain filtering.
In general a RASTA filter is defined by
formula_0
The numerator is a regression filter with N being the order (which must be odd) and the denominator is an integrator with time decay. The pole controls the lower cutoff frequency and is normally around 0.9. RASTA filtering can be changed to use mean subtraction, implementing a moving-average filter. Filtering is normally performed in the cepstral domain. The mean becomes the long-term cepstrum and is typically computed on the speech part of each separate utterance. A period of silence is necessary to detect each utterance.
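A minimal sketch of such a filter, written with SciPy, is given below; the order "N", pole "ρ" and gain "k" are illustrative choices (the text above only prescribes that "N" is odd and that the pole is around 0.9), and the filter simply applies the numerator/denominator structure of the transfer function above along the time axis of a feature matrix.
<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import lfilter

def rasta_filter(features, N=5, rho=0.9, k=0.1):
    """Apply the RASTA bandpass filter along the time (frame) axis of a
    (frames x bands) array of log-spectral or cepstral features.

    Numerator: regression filter with coefficients k * (n - (N-1)/2),
    denominator: leaky integrator 1 - rho * z^-1.
    """
    n = np.arange(N)
    b = k * (n - (N - 1) / 2.0)        # e.g. k * [-2, -1, 0, 1, 2] for N = 5
    a = np.array([1.0, -rho])
    return lfilter(b, a, features, axis=0)

# Example: filter a random stand-in for a log mel spectrogram (200 frames, 20 bands).
features = np.log(np.random.rand(200, 20) + 1e-6)
filtered = rasta_filter(features)
print(filtered.shape)                  # (200, 20)
</syntaxhighlight>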
|
[
{
"math_id": 0,
"text": "T(z) = ( k * \\sum (n-(N-1) / 2) * z^{-n}) / (1-\\rho/x) \\,\\!"
}
] |
https://en.wikipedia.org/wiki?curid=14780124
|
1478246
|
External sorting
|
Class of sorting algorithms that can handle massive amounts of data
External sorting is a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead they must reside in the slower external memory, usually a disk drive. Thus, external sorting algorithms are external memory algorithms and thus applicable in the external memory model of computation.
External sorting algorithms generally fall into two types, distribution sorting, which resembles quicksort, and external merge sort, which resembles merge sort. External merge sort typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data small enough to fit in main memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted subfiles are combined into a single larger file.
Model.
External sorting algorithms can be analyzed in the external memory model. In this model, a cache or internal memory of size M and an unbounded external memory are divided into blocks of size B, and the running time of an algorithm is determined by the number of memory transfers between internal and external memory. Like their cache-oblivious counterparts, asymptotically optimal external sorting algorithms achieve a running time (in Big O notation) of formula_0.
External merge sort.
One example of external sorting is the external merge sort algorithm, which uses a K-way merge algorithm. It sorts chunks that each fit in RAM, then merges the sorted chunks together.
The algorithm first sorts M items at a time and puts the sorted lists back into external memory. It does a formula_1-way merge on those sorted lists, recursing if there is not enough main memory to merge efficiently in one pass. During a merge pass, B elements from each sorted list are in internal memory, and the minimum is repeatedly outputted.
For example, for sorting 900 megabytes of data using only 100 megabytes of RAM:
The merge pass is key to making external merge sort work externally. The merge algorithm only makes one pass through each chunk, so chunks do not have to be loaded all at once; rather, sequential parts of the chunk are loaded as needed. And as long as the blocks read are relatively large (like the 10 MB in this example), the reads can be relatively efficient even on media with low random-read performance, like hard drives.
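A compact, purely illustrative Python sketch of the sort-then-merge strategy is shown below; it ignores block-level buffering and uses a one-integer-per-line text file, a hypothetical record format chosen only for brevity.
<syntaxhighlight lang="python">
import heapq
import os
import tempfile

def external_sort(input_path, output_path, chunk_size=100_000):
    """Sort a file of one integer per line that may not fit in memory;
    chunk_size is the number of lines sorted in memory at a time."""
    chunk_paths = []
    # Sorting phase: read, sort and spill fixed-size chunks to temporary files.
    with open(input_path) as src:
        while True:
            lines = [line for _, line in zip(range(chunk_size), src)]
            if not lines:
                break
            lines = [l if l.endswith("\n") else l + "\n" for l in lines]
            lines.sort(key=int)
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as tmp:
                tmp.writelines(lines)
            chunk_paths.append(path)
    # Merge phase: a single k-way merge over all sorted chunks.
    chunks = [open(p) for p in chunk_paths]
    try:
        with open(output_path, "w") as out:
            out.writelines(heapq.merge(*chunks, key=int))
    finally:
        for f in chunks:
            f.close()
        for p in chunk_paths:
            os.remove(p)
</syntaxhighlight>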
Historically, instead of a sort, sometimes a replacement-selection algorithm was used to perform the initial distribution, to produce on average half as many output chunks of double the length.
Additional passes.
The previous example is a two-pass sort: first sort, then merge. The sort ends with a single "k"-way merge, rather than a series of two-way merge passes as in a typical in-memory merge sort. This is because each merge pass reads and writes "every value" from and to disk, so reducing the number of passes more than compensates for the additional cost of a "k"-way merge.
The limitation to single-pass merging is that as the number of chunks increases, memory will be divided into more buffers, so each buffer is smaller. Eventually, the reads become so small that more time is spent on disk seeks than data transfer. A typical magnetic hard disk drive might have a 10 ms access time and 100 MB/s data transfer rate, so each seek takes as much time as transferring 1 MB of data.
Thus, for sorting, say, 50 GB in 100 MB of RAM, using a single 500-way merge pass isn't efficient: we can only read 100 MB / 501 ≈ 200 KB from each chunk at once, so 5/6 of the disk's time is spent seeking. Using two merge passes solves the problem. Then the sorting process might look like this:
Although this requires an additional pass over the data, each read is now 4 MB long, so only 1/5 of the disk's time is spent seeking. The improvement in data transfer efficiency during the merge passes (16.6% to 80% is almost a 5× improvement) more than makes up for the doubled number of merge passes.
Variations include using an intermediate medium like solid-state disk for some stages; the fast temporary storage needn't be big enough to hold the whole dataset, just substantially larger than available main memory. Repeating the example above with 1 GB of temporary SSD storage, the first pass could merge 10×100 MB sorted chunks read from that temporary space to write 50x1 GB sorted chunks to HDD. The high bandwidth and random-read throughput of SSDs help speed the first pass, and the HDD reads for the second pass can then be 2 MB, large enough that seeks will not take up most of the read time. SSDs can also be used as read buffers in a merge phase, allowing fewer larger reads (20MB reads in this example) from HDD storage. Given the lower cost of SSD capacity relative to RAM, SSDs can be an economical tool for sorting large inputs with very limited memory.
Like in-memory sorts, efficient external sorts require O("n" log "n") time: exponentially growing datasets require linearly increasing numbers of passes that each take O(n) time. Under reasonable assumptions at least 500 GB of data stored on a hard drive can be sorted using 1 GB of main memory before a third pass becomes advantageous, and many times that much data can be sorted before a fourth pass becomes useful.
Main memory size is important. Doubling memory dedicated to sorting halves the number of chunks "and" the number of reads per chunk, reducing the number of seeks required by about three-quarters. The ratio of RAM to disk storage on servers often makes it convenient to do huge sorts on a cluster of machines rather than on one machine with multiple passes. Media with high random-read performance like solid-state drives (SSDs) also increase the amount that can be sorted before additional passes improve performance.
External distribution sort.
External distribution sort is analogous to quicksort. The algorithm finds approximately formula_1 pivots and uses them to divide the N elements into approximately equally sized subarrays, each of whose elements are all smaller than the next, and then recurse until the sizes of the subarrays are less than the block size. When the subarrays are less than the block size, sorting can be done quickly because all reads and writes are done in the cache, and in the external memory model requires formula_2 operations.
However, finding exactly formula_1 pivots would not be fast enough to make the external distribution sort asymptotically optimal. Instead, we find slightly fewer pivots. To find these pivots, the algorithm splits the N input elements into formula_3 chunks, and takes every formula_4 elements, and recursively uses the median of medians algorithm to find formula_5 pivots.
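The recursive structure can be sketched in memory as a simple sample sort; the pivot count and the "block size" base case below are stand-ins for the formula_5 pivots and block-sized subarrays described above (illustrative choices, not the asymptotically optimal ones).
<syntaxhighlight lang="python">
import bisect
import random

def sample_sort(data, block_size=1024):
    """In-memory sketch of the distribution-sort recursion: sample pivots,
    partition into buckets, and recurse until a bucket 'fits in a block'."""
    data = list(data)
    if len(data) <= block_size or min(data) == max(data):
        return sorted(data)
    k = max(2, int(len(data) ** 0.5))                      # illustrative pivot count
    pivots = sorted(set(random.sample(data, min(k, len(data)))))
    buckets = [[] for _ in range(len(pivots) + 1)]
    for x in data:
        buckets[bisect.bisect_right(pivots, x)].append(x)
    result = []
    for bucket in buckets:
        result.extend(sample_sort(bucket, block_size))
    return result

print(sample_sort([5, 3, 8, 1, 9, 2], block_size=2))       # [1, 2, 3, 5, 8, 9]
</syntaxhighlight>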
There is a duality, or fundamental similarity, between merge- and distribution-based algorithms.
Performance.
The Sort Benchmark, created by computer scientist Jim Gray, compares external sorting algorithms implemented using finely tuned hardware and software. Winning implementations use several techniques:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O \\left(\\tfrac{N}{B}\\log_{\\tfrac{M}{B}} \\tfrac{N}{B} \\right)"
},
{
"math_id": 1,
"text": "\\tfrac{M}{B}"
},
{
"math_id": 2,
"text": "O(1)"
},
{
"math_id": 3,
"text": "\\tfrac{N}{M}"
},
{
"math_id": 4,
"text": "\\sqrt{\\tfrac{M}{16B}}"
},
{
"math_id": 5,
"text": "\\sqrt{\\tfrac{M}{B}}"
}
] |
https://en.wikipedia.org/wiki?curid=1478246
|
147853
|
Speed of sound
|
Speed of sound wave through elastic medium
The speed of sound is the distance travelled per unit of time by a sound wave as it propagates through an elastic medium. More simply, the speed of sound is how fast vibrations travel. At , the speed of sound in air is about , or in or one mile in . It depends strongly on temperature as well as the medium through which a sound wave is propagating. At , the speed of sound in air is about .
The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior.
In colloquial speech, "speed of sound" refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: typically, sound travels most slowly in gases, faster in liquids, and fastest in solids. For example, while sound travels at in air, it travels at in water (almost 4.3 times as fast) and at in iron (almost 15 times as fast). In an exceptionally stiff material such as diamond, sound travels at , – about 35 times its speed in air and about the fastest it can travel under normal conditions.
In theory, the speed of sound is actually the speed of vibrations.
Sound waves in solids are composed of compression waves (just as in gases and liquids) and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves in solids usually travel at different speeds than compression waves, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus, and density. The speed of shear waves is determined only by the solid material's shear modulus and density.
In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound (in the same medium) is called the object's Mach number. Objects moving at speeds greater than the speed of sound ("") are said to be traveling at supersonic speeds.
<templatestyles src="Template:TOC limit/styles.css" />
Earth.
In Earth's atmosphere, the speed of sound varies greatly from about at high altitudes to about at high temperatures.
History.
Sir Isaac Newton's 1687 "Principia" includes a computation of the speed of sound in air as . This is too low by about 15%. The discrepancy is due primarily to neglecting the (then unknown) effect of rapidly fluctuating temperature in a sound wave (in modern terms, sound wave compression and expansion of air is an adiabatic process, not an isothermal process). This error was later rectified by Laplace.
During the 17th century there were several attempts to measure the speed of sound accurately, including attempts by Marin Mersenne in 1630 (1,380 Parisian feet per second), Pierre Gassendi in 1635 (1,473 Parisian feet per second) and Robert Boyle (1,125 Parisian feet per second). In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound, at 1,072 Parisian feet per second. (The Parisian foot was . This is longer than the standard "international foot" in common use today, which was officially defined in 1959 as , making the speed of sound at 1,055 Parisian feet per second).
Derham used a telescope from the tower of the church of St. Laurence, Upminster to observe the flash of a distant shotgun being fired, and then measured the time until he heard the gunshot with a half-second pendulum. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. The distance was known by triangulation, and thus the speed that the sound had travelled was calculated.
Basic concepts.
The transmission of sound can be illustrated by using a model consisting of an array of spherical objects interconnected by springs.
In real material terms, the spheres represent the material's molecules and the springs represent the bonds between them. Sound passes through the system by compressing and expanding the springs, transmitting the acoustic energy to neighboring spheres. This helps transmit the energy in-turn to the neighboring sphere's springs (bonds), and so on.
The speed of sound through the model depends on the stiffness/rigidity of the springs, and the mass of the spheres. As long as the spacing of the spheres remains constant, stiffer springs/bonds transmit energy more quickly, while more massive spheres transmit energy more slowly.
In a real material, the stiffness of the springs is known as the "elastic modulus", and the mass corresponds to the material density. Sound will travel more slowly in spongy materials and faster in stiffer ones. Effects like dispersion and reflection can also be understood using this model.
Some textbooks mistakenly state that the speed of sound "increases" with density. This notion is illustrated by presenting data for three materials, such as air, water, and steel and noting that the speed of sound is higher in the denser materials. But the example fails to take into account that the materials have vastly different compressibility, which more than makes up for the differences in density, which would "slow" wave speeds in the denser materials. An illustrative example of the two effects is that sound travels only 4.3 times faster in water than air, despite enormous differences in compressibility of the two media. The reason is that the greater density of water, which works to "slow" sound in water relative to the air, nearly makes up for the compressibility differences in the two media.
For instance, sound will travel 1.59 times faster in nickel than in bronze, due to the greater stiffness of nickel at about the same density. Similarly, sound travels about 1.41 times faster in light hydrogen (protium) gas than in heavy hydrogen (deuterium) gas, since deuterium has similar properties but twice the density. At the same time, "compression-type" sound will travel faster in solids than in liquids, and faster in liquids than in gases, because the solids are more difficult to compress than liquids, while liquids, in turn, are more difficult to compress than gases.
A practical example can be observed in Edinburgh when the "One o'Clock Gun" is fired at the eastern end of Edinburgh Castle. Standing at the base of the western end of the Castle Rock, the sound of the Gun can be heard through the rock, slightly before it arrives by the air route, partly delayed by the slightly longer route. It is particularly effective if a multi-gun salute such as for "The Queen's Birthday" is being fired.
Compression and shear waves.
In a gas or liquid, sound consists of compression waves. In solids, waves propagate as two different types. A longitudinal wave is associated with compression and decompression in the direction of travel, and is the same process in gases and liquids, with an analogous compression-type wave in solids. Only compression waves are supported in gases and liquids. An additional type of wave, the transverse wave, also called a shear wave, occurs only in solids because only solids support elastic deformations. It is due to elastic deformation of the medium perpendicular to the direction of wave travel; the direction of shear-deformation is called the "polarization" of this type of wave. In general, transverse waves occur as a pair of orthogonal polarizations.
These different waves (compression waves and the different polarizations of shear waves) may have different speeds at the same frequency. Therefore, they arrive at an observer at different times, an extreme example being an earthquake, where sharp compression waves arrive first and rocking transverse waves seconds later.
The speed of a compression wave in a fluid is determined by the medium's compressibility and density. In solids, the compression waves are analogous to those in fluids, depending on compressibility and density, but with the additional factor of shear modulus which affects compression waves due to off-axis elastic energies which are able to influence effective tension and relaxation in a compression. The speed of shear waves, which can occur only in solids, is determined simply by the solid material's shear modulus and density.
Equations.
The speed of sound in mathematical notation is conventionally represented by "c", from the Latin "celeritas" meaning "swiftness".
For fluids in general, the speed of sound "c" is given by the Newton–Laplace equation:
formula_0
where
formula_3, where formula_4 is the pressure and the derivative is taken isentropically, that is, at constant entropy "s". This is because a sound wave travels so fast that its propagation can be approximated as an adiabatic process, meaning that there isn't enough time, during a pressure cycle of the sound, for significant heat conduction and radiation to occur.
Thus, the speed of sound increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material and decreases with an increase in density. For ideal gases, the bulk modulus "K" is simply the gas pressure multiplied by the dimensionless adiabatic index, which is about 1.4 for air under normal conditions of pressure and temperature.
For general equations of state, if classical mechanics is used, the speed of sound "c" can be derived as follows:
Consider the sound wave propagating at speed formula_5 through a pipe aligned with the formula_6 axis and with a cross-sectional area of formula_7. In time interval formula_8 it moves length formula_9. In steady state, the mass flow rate formula_10 must be the same at the two ends of the tube, therefore the mass flux formula_11 is constant and formula_12. Per Newton's second law, the pressure-gradient force provides the acceleration:
formula_13
And therefore:
formula_14
If relativistic effects are important, the speed of sound is calculated from the relativistic Euler equations.
In a non-dispersive medium, the speed of sound is independent of sound frequency, so the speeds of energy transport and sound propagation are the same for all frequencies. Air, a mixture of oxygen and nitrogen, constitutes a non-dispersive medium. However, air does contain a small amount of CO2 which "is" a dispersive medium, and causes dispersion to air at ultrasonic frequencies (greater than ).
In a dispersive medium, the speed of sound is a function of sound frequency, through the dispersion relation. Each frequency component propagates at its own speed, called the phase velocity, while the energy of the disturbance propagates at the group velocity. The same phenomenon occurs with light waves; see optical dispersion for a description.
Dependence on the properties of the medium.
The speed of sound is variable and depends on the properties of the substance through which the wave is travelling. In solids, the speed of transverse (or shear) waves depends on the shear deformation under shear stress (called the shear modulus), and the density of the medium. Longitudinal (or compression) waves in solids depend on the same two factors with the addition of a dependence on compressibility.
In fluids, only the medium's compressibility and density are the important factors, since fluids do not transmit shear stresses. In heterogeneous fluids, such as a liquid filled with gas bubbles, the density of the liquid and the compressibility of the gas affect the speed of sound in an additive manner, as demonstrated in the hot chocolate effect.
In gases, adiabatic compressibility is directly related to pressure through the heat capacity ratio (adiabatic index), while pressure and density are inversely related to the temperature and molecular weight, thus making only the completely independent properties of "temperature and molecular structure" important (heat capacity ratio may be determined by temperature and molecular structure, but simple molecular weight is not sufficient to determine it).
Sound propagates faster in low molecular weight gases such as helium than it does in heavier gases such as xenon. For monatomic gases, the speed of sound is about 75% of the mean speed that the atoms move in that gas.
For a given ideal gas the molecular composition is fixed, and thus the speed of sound depends only on its temperature. At a constant temperature, the gas pressure has no effect on the speed of sound, since the density will increase, and since pressure and density (also proportional to pressure) have equal but opposite effects on the speed of sound, and the two contributions cancel out exactly. In a similar way, compression waves in solids depend both on compressibility and density—just as in liquids—but in gases the density contributes to the compressibility in such a way that some part of each attribute factors out, leaving only a dependence on temperature, molecular weight, and heat capacity ratio which can be independently derived from temperature and molecular composition (see derivations below). Thus, for a single given gas (assuming the molecular weight does not change) and over a small temperature range (for which the heat capacity is relatively constant), the speed of sound becomes dependent on only the temperature of the gas.
In non-ideal gas behavior regimen, for which the Van der Waals gas equation would be used, the proportionality is not exact, and there is a slight dependence of sound velocity on the gas pressure.
Humidity has a small but measurable effect on the speed of sound (causing it to increase by about 0.1%–0.6%), because oxygen and nitrogen molecules of the air are replaced by lighter molecules of water. This is a simple mixing effect.
Altitude variation and implications for atmospheric acoustics.
In the Earth's atmosphere, the chief factor affecting the speed of sound is the temperature. For a given ideal gas with constant heat capacity and composition, the speed of sound is dependent "solely" upon temperature; see "" below. In such an ideal case, the effects of decreased density and decreased pressure of altitude cancel each other out, save for the residual effect of temperature.
Since temperature (and thus the speed of sound) decreases with increasing altitude up to , sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. The decrease of the speed of sound with height is referred to as a negative sound speed gradient.
However, there are variations in this trend above . In particular, in the stratosphere above about , the speed of sound increases with height, due to an increase in temperature from heating within the ozone layer. This produces a positive speed of sound gradient in this region. Still another region of positive gradient occurs at very high altitudes, in the thermosphere above .
Details.
Speed of sound in ideal gases and air.
For an ideal gas, "K" (the bulk modulus in equations above, equivalent to "C", the coefficient of stiffness in solids) is given by
formula_15
Thus, from the Newton–Laplace equation above, the speed of sound in an ideal gas is given by
formula_16
where
Using the ideal gas law to replace "p" with "nRT"/"V", and replacing "ρ" with "nM"/"V", the equation for an ideal gas becomes
formula_18
where
This equation applies only when the sound wave is a small perturbation on the ambient condition, and the certain other noted conditions are fulfilled, as noted below. Calculated values for "c"air have been found to vary slightly from experimentally determined values.
Newton famously considered the speed of sound before most of the development of thermodynamics and so incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of "γ" but was otherwise correct.
Numerical substitution of the above values gives the ideal gas approximation of sound velocity for gases, which is accurate at relatively low gas pressures and densities (for air, this includes standard Earth sea-level conditions). Also, for diatomic gases the use of "γ" = 1.4000 requires that the gas exists in a temperature range high enough that rotational heat capacity is fully excited (i.e., molecular rotation is fully used as a heat energy "partition" or reservoir); but at the same time the temperature must be low enough that molecular vibrational modes contribute no heat capacity (i.e., insignificant heat goes into vibration, as all vibrational quantum modes above the minimum-energy-mode have energies that are too high to be populated by a significant number of molecules at this temperature). For air, these conditions are fulfilled at room temperature, and also temperatures considerably below room temperature (see tables below). See the section on gases in specific heat capacity for a more complete discussion of this phenomenon.
For air, we introduce the shorthand
formula_19
In addition, we switch to the Celsius temperature "θ" = "T" − 273.15 K, which is useful to calculate air speed in the region near 0 °C. Then, for dry air,
formula_20
Substituting numerical values
formula_21
formula_22
and using the ideal diatomic gas value of "γ" = 1.4000, we have
formula_23
Finally, Taylor expansion of the remaining square root in formula_24 yields
formula_25
A graph comparing results of the two equations is to the right, using the slightly more accurate value of for the speed of sound at .
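Both expressions are easy to evaluate directly; the sketch below compares the exact ideal-gas formula with the linear approximation at a few temperatures, using the constants quoted above.
<syntaxhighlight lang="python">
import math

GAMMA = 1.4                # ideal diatomic gas
R = 8.31446261815324       # molar gas constant, J/(mol*K)
M_AIR = 0.0289645          # molar mass of dry air, kg/mol

def c_exact(theta):
    """Speed of sound in dry air (m/s) from the ideal-gas formula, theta in deg C."""
    return math.sqrt(GAMMA * R * (theta + 273.15) / M_AIR)

def c_linear(theta):
    """First-order Taylor approximation around 0 deg C (m/s)."""
    return 331.3 + 0.606 * theta

for theta in (-20, 0, 20, 40):
    print(f"{theta:4d} C   exact {c_exact(theta):6.1f} m/s   approx {c_linear(theta):6.1f} m/s")
</syntaxhighlight>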
Effects due to wind shear.
The speed of sound varies with temperature. Since temperature and sound velocity normally decrease with increasing altitude, sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. Wind shear of 4 m/(s · km) can produce refraction equal to a typical temperature lapse rate of . Higher values of wind gradient will refract sound downward toward the surface in the downwind direction, eliminating the acoustic shadow on the downwind side. This will increase the audibility of sounds downwind. This downwind refraction effect occurs because there is a wind gradient; the fact that sound is carried along by the wind is not important.
For sound propagation, the exponential variation of wind speed with height can be defined as follows:
formula_26
where
In the 1862 American Civil War Battle of Iuka, an acoustic shadow, believed to have been enhanced by a northeast wind, kept two divisions of Union soldiers out of the battle, because they could not hear the sounds of battle only six miles downwind.
Tables.
In the standard atmosphere:
In fact, assuming an ideal gas, the speed of sound "c" depends on temperature and composition only, not on the pressure or density (since these change in lockstep for a given temperature and cancel out). Air is almost an ideal gas. The temperature of the air varies with altitude, giving the following variations in the speed of sound using the standard atmosphere—"actual conditions may vary".
Given normal atmospheric conditions, the temperature, and thus speed of sound, varies with altitude:
Effect of frequency and gas composition.
General physical considerations.
The medium in which a sound wave is travelling does not always respond adiabatically, and as a result, the speed of sound can vary with frequency.
The limitations of the concept of speed of sound due to extreme attenuation are also of concern. The attenuation which exists at sea level for high frequencies applies to successively lower frequencies as atmospheric pressure decreases, or as the mean free path increases. For this reason, the concept of speed of sound (except for frequencies approaching zero) progressively loses its range of applicability at high altitudes. The standard equations for the speed of sound apply with reasonable accuracy only to situations in which the wavelength of the sound wave is considerably longer than the mean free path of molecules in a gas.
The molecular composition of the gas contributes both as the mass (M) of the molecules, and their heat capacities, and so both have an influence on speed of sound. In general, at the same molecular mass, monatomic gases have slightly higher speed of sound (over 9% higher) because they have a higher "γ" (5/3 = 1.66...) than diatomics do (7/5 = 1.4). Thus, at the same molecular mass, the speed of sound of a monatomic gas goes up by a factor of
formula_27
This gives the 9% difference, and would be a typical ratio for speeds of sound at room temperature in helium vs. deuterium, each with a molecular weight of 4. Sound travels faster in helium than deuterium because adiabatic compression heats helium more since the helium molecules can store heat energy from compression only in translation, but not rotation. Thus helium molecules (monatomic molecules) travel faster in a sound wave and transmit sound faster. (Sound travels at about 70% of the mean molecular speed in gases; the figure is 75% in monatomic gases and 68% in diatomic gases).
In this example we have assumed that temperature is low enough that heat capacities are not influenced by molecular vibration (see heat capacity). However, vibrational modes simply cause gammas which decrease toward 1, since vibration modes in a polyatomic gas give the gas additional ways to store heat which do not affect temperature, and thus do not affect molecular velocity and sound velocity. Thus, the effect of higher temperatures and vibrational heat capacity acts to increase the difference between the speed of sound in monatomic vs. polyatomic molecules, with the speed remaining greater in monatomics.
Practical application to air.
By far, the most important factor influencing the speed of sound in air is temperature. The speed is proportional to the square root of the absolute temperature, giving an increase of about 0.6 m/s per degree Celsius. For this reason, the pitch of a musical wind instrument increases as its temperature increases.
The speed of sound is raised by humidity. The difference between 0% and 100% humidity is about at standard pressure and temperature, but the size of the humidity effect increases dramatically with temperature.
The dependence on frequency and pressure are normally insignificant in practical applications. In dry air, the speed of sound increases by about as the frequency rises from to . For audible frequencies above it is relatively constant. Standard values of the speed of sound are quoted in the limit of low frequencies, where the wavelength is large compared to the mean free path.
As shown above, the approximate value 1000/3 = 333.33... m/s is exact a little below and is a good approximation for all "usual" outside temperatures (in temperate climates, at least), hence the usual rule of thumb to determine how far lightning has struck: count the seconds from the start of the lightning flash to the start of the corresponding roll of thunder and divide by 3: the result is the distance in kilometers to the nearest point of the lightning bolt.
Mach number.
Mach number, a useful quantity in aerodynamics, is the ratio of air speed to the local speed of sound. At altitude, for reasons explained, Mach number is a function of temperature.
Aircraft flight instruments, however, operate using pressure differential to compute Mach number, not temperature. The assumption is that a particular pressure represents a particular altitude and, therefore, a standard temperature. Aircraft flight instruments need to operate this way because the stagnation pressure sensed by a Pitot tube is dependent on altitude as well as speed.
Experimental methods.
A range of different methods exist for the measurement of sound in air.
The earliest reasonably accurate estimate of the speed of sound in air was made by William Derham and acknowledged by Isaac Newton. Derham had a telescope at the top of the tower of the Church of St Laurence in Upminster, England. On a calm day, a synchronized pocket watch would be given to an assistant who would fire a shotgun at a pre-determined time from a conspicuous point some miles away, across the countryside. This could be confirmed by telescope. He then measured the interval between seeing gunsmoke and arrival of the sound using a half-second pendulum. The distance from where the gun was fired was found by triangulation, and simple division (distance/time) provided velocity. Lastly, by making many observations, using a range of different distances, the inaccuracy of the half-second pendulum could be averaged out, giving his final estimate of the speed of sound. Modern stopwatches enable this method to be used today over distances as short as 200–400 metres, and not needing something as loud as a shotgun.
Single-shot timing methods.
The simplest concept is the measurement made using two microphones and a fast recording device such as a digital storage scope. This method uses the following idea.
If a sound source and two microphones are arranged in a straight line, with the sound source at one end, then the following can be measured:
Then "v" = "x"/"t".
Other methods.
In these methods, the time measurement has been replaced by a measurement of the inverse of time (frequency).
Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume. It has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. This is an example of a compact experimental setup.
A tuning fork can be held near the mouth of a long pipe which is dipping into a barrel of water. In this system it is the case that the pipe can be brought to resonance if the length of the air column in the pipe is equal to (1 + 2"n")"λ"/4 where "n" is an integer. As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe it is best to find two or more points of resonance and then measure half a wavelength between these.
Here it is the case that "v" = "fλ".
High-precision measurements in air.
The effect of impurities can be significant when making high-precision measurements. Chemical desiccants can be used to dry the air, but will, in turn, contaminate the sample. The air can be dried cryogenically, but this has the effect of removing the carbon dioxide as well; therefore many high-precision measurements are performed with air free of carbon dioxide rather than with natural air. A 2002 review found that a 1963 measurement by Smith and Harlow using a cylindrical resonator gave "the most probable value of the standard speed of sound to date." The experiment was done with air from which the carbon dioxide had been removed, but the result was then corrected for this effect so as to be applicable to real air. The experiments were done at but corrected for temperature in order to report them at . The result was 331.45 ± 0.01 m/s for dry air at STP, for frequencies from to 1,500 Hz.
Non-gaseous media.
Speed of sound in solids.
Three-dimensional solids.
In a solid, there is a non-zero stiffness both for volumetric deformations and shear deformations. Hence, it is possible to generate sound waves with different velocities dependent
on the deformation mode. Sound waves generating volumetric deformations (compression) and shear deformations (shearing) are called pressure waves (longitudinal waves) and shear waves (transverse waves), respectively. In earthquakes, the corresponding seismic waves are called P-waves (primary waves) and S-waves (secondary waves), respectively. The sound velocities of these two types of waves propagating in a homogeneous 3-dimensional solid are respectively given by
formula_28
formula_29
where
The last quantity is not an independent one, as E = 3K(1 − 2ν). The speed of pressure waves depends both on the pressure and shear resistance properties of the material, while the speed of shear waves depends on the shear properties only.
Typically, pressure waves travel faster in materials than do shear waves, and in earthquakes this is the reason that the onset of an earthquake is often preceded by a quick upward-downward shock, before arrival of waves that produce a side-to-side motion. For example, for a typical steel alloy, "K" = 170 GPa, "G" = 80 GPa and "ρ" = 7,700 kg/m3, yielding a compressional speed "c"solid,p of 6,000 m/s. This is in reasonable agreement with "c"solid,p measured experimentally at 5,930 m/s for a (possibly different) type of steel. The shear speed "c"solid,s is estimated at 3,200 m/s using the same numbers.
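These relations are straightforward to evaluate numerically; the sketch below reproduces the steel estimate above from the quoted moduli and density.
<syntaxhighlight lang="python">
import math

def solid_wave_speeds(K, G, rho):
    """Pressure- and shear-wave speeds in a homogeneous isotropic solid
    from the bulk modulus K, shear modulus G and density rho (SI units)."""
    c_p = math.sqrt((K + 4.0 * G / 3.0) / rho)
    c_s = math.sqrt(G / rho)
    return c_p, c_s

# Values quoted above for a typical steel alloy.
c_p, c_s = solid_wave_speeds(K=170e9, G=80e9, rho=7700)
print(f"pressure wave: {c_p:.0f} m/s, shear wave: {c_s:.0f} m/s")   # about 6000 and 3200 m/s
</syntaxhighlight>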
Speed of sound in semiconductor solids can be very sensitive to the amount of electronic dopant in them.
One-dimensional solids.
The speed of sound for pressure waves in stiff materials such as metals is sometimes given for "long rods" of the material in question, in which the speed is easier to measure. In rods where their diameter is shorter than a wavelength, the speed of pure pressure waves may be simplified and is given by:
formula_30
where "E" is Young's modulus. This is similar to the expression for shear waves, save that Young's modulus replaces the shear modulus. This speed of sound for pressure waves in long rods will always be slightly less than the same speed in homogeneous 3-dimensional solids, and the ratio of the speeds in the two different types of objects depends on Poisson's ratio for the material.
Speed of sound in liquids.
In a fluid, the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces).
Hence the speed of sound in a fluid is given by
formula_31
where "K" is the bulk modulus of the fluid.
Water.
In fresh water, sound travels at about at (see the External Links section below for online calculators). Applications of underwater sound can be found in sonar, acoustic communication and acoustical oceanography.
Seawater.
In salt water that is free of air bubbles or suspended sediment, sound travels at about ( at , and 3% salinity by one method). The speed of sound in seawater depends on pressure (hence depth), temperature (a change of ~ ), and salinity (a change of 1‰ ~ ), and empirical equations have been derived to accurately calculate the speed of sound from these variables. Other factors affecting the speed of sound are minor. Since in most ocean regions temperature decreases with depth, the profile of the speed of sound with depth decreases to a minimum at a depth of several hundred metres. Below the minimum, sound speed increases again, as the effect of increasing pressure overcomes the effect of decreasing temperature (right). For more information see Dushaw et al.
An empirical equation for the speed of sound in sea water is provided by Mackenzie:
formula_32
where "T" is the temperature in degrees Celsius, "S" is the salinity in parts per thousand, and "z" is the depth in metres.
The constants "a"1, "a"2, ..., "a"9 are
formula_33
with check value for "T" = 25 °C, "S" = 35 parts per thousand, "z" = 1,000 m. This equation has a standard error of for salinity between 25 and 40 ppt. See for an online calculator.
(The Sound Speed vs. Depth graph does "not" correlate directly to the MacKenzie formula.
This is due to the fact that the temperature and salinity varies at different depths.
When "T" and "S" are held constant, the formula itself is always increasing with depth.)
Other equations for the speed of sound in sea water are accurate over a wide range of conditions, but are far more complicated, e.g., that by V. A. Del Grosso and the Chen-Millero-Li Equation.
Speed of sound in plasma.
The speed of sound in a plasma for the common case that the electrons are hotter than the ions (but not too much hotter) is given by the formula (see here)
formula_34
where
In contrast to a gas, the pressure and the density are provided by separate species: the pressure by the electrons and the density by the ions. The two are coupled through a fluctuating electric field.
Mars.
The speed of sound on Mars varies as a function of frequency. Higher frequencies travel faster than lower frequencies. Higher frequency sound from lasers travels at , while low frequency sound topped out at .
Gradients.
When sound spreads out evenly in all directions in three dimensions, the intensity drops in proportion to the inverse square of the distance. However, in the ocean, there is a layer called the 'deep sound channel' or SOFAR channel which can confine sound waves at a particular depth.
In the SOFAR channel, the speed of sound is lower than that in the layers above and below. Just as light waves will refract towards a region of higher refractive index, sound waves will refract towards a region where their speed is reduced. The result is that sound gets confined in the layer, much the way light can be confined to a sheet of glass or optical fiber. Thus, the sound is confined in essentially two dimensions. In two dimensions the intensity drops in proportion to only the inverse of the distance. This allows waves to travel much further before being undetectably faint.
A similar effect occurs in the atmosphere. Project Mogul successfully used this effect to detect a nuclear explosion at a considerable distance.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "c = \\sqrt{\\frac{K_s}{\\rho}},"
},
{
"math_id": 1,
"text": "K_s"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "K_s = \\rho \\left(\\frac{\\partial P}{\\partial\\rho}\\right)_s"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "v"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "dt"
},
{
"math_id": 9,
"text": "dx = v \\, dt"
},
{
"math_id": 10,
"text": "\\dot m = \\rho v A "
},
{
"math_id": 11,
"text": "j=\\rho v "
},
{
"math_id": 12,
"text": "v \\, d\\rho = -\\rho \\, dv"
},
{
"math_id": 13,
"text": "\\begin{align}\n\\frac{dv}{dt}\n&=-\\frac{1}{\\rho}\\frac{dP}{dx} \\\\[1ex]\n\\rightarrow\ndP&=(-\\rho \\,dv)\\frac{dx}{dt}=(v \\, d\\rho)v \\\\[1ex]\n\\rightarrow\nv^2& \\equiv c^2=\\frac{dP}{d\\rho}\n\\end{align}\n"
},
{
"math_id": 14,
"text": "c = \\sqrt{\\left(\\frac{\\partial P}{\\partial\\rho}\\right)_s} = \\sqrt{\\frac{K_s}{\\rho}},"
},
{
"math_id": 15,
"text": "K = \\gamma \\cdot p ."
},
{
"math_id": 16,
"text": "c = \\sqrt{\\gamma \\cdot {p \\over \\rho}},"
},
{
"math_id": 17,
"text": "C_p/C_v"
},
{
"math_id": 18,
"text": "c_{\\mathrm{ideal}} = \\sqrt{\\gamma \\cdot {p \\over \\rho}} = \\sqrt{\\gamma \\cdot R \\cdot T \\over M} = \\sqrt{\\gamma \\cdot k \\cdot T \\over m},"
},
{
"math_id": 19,
"text": "R_* = R/M_{\\mathrm{air}}."
},
{
"math_id": 20,
"text": "\\begin{align}\nc_{\\mathrm{air}} &= \\sqrt{\\gamma \\cdot R_* \\cdot T} = \\sqrt{\\gamma \\cdot R_* \\cdot (\\theta + 273.15\\,\\mathrm{K})},\\\\\nc_{\\mathrm{air}} &= \\sqrt{\\gamma \\cdot R_* \\cdot 273.15\\,\\mathrm{K}} \\cdot \\sqrt{1 + \\frac{\\theta}{273.15\\,\\mathrm{K}}} .\n\\end{align}"
},
{
"math_id": 21,
"text": "R = 8.314\\,462\\,618\\,153\\,24~\\mathrm{J/(mol{\\cdot}K)}"
},
{
"math_id": 22,
"text": "M_{\\mathrm{air}} = 0.028\\,964\\,5~\\mathrm{kg/mol}"
},
{
"math_id": 23,
"text": "c_{\\mathrm{air}} \\approx 331.3\\,\\mathrm{m/s} \\times \\sqrt{1 + \\frac{\\theta}{273.15\\,\\mathrm{K}}} ."
},
{
"math_id": 24,
"text": "\\theta"
},
{
"math_id": 25,
"text": "\\begin{align}\nc_{\\mathrm{air}} & \\approx 331.3\\,\\mathrm{m/s} \\times \\left(1 + \\frac{\\theta}{2 \\times 273.15\\,\\mathrm{K}}\\right),\\\\\n & \\approx 331.3\\,\\mathrm{m/s} + \\theta \\times 0.606 \\,\\mathrm{(m/s)/^\\circ C} .\n\\end{align}"
},
{
"math_id": 26,
"text": "\\begin{align}\nU(h) &= U(0) h^\\zeta, \\\\\n\\frac{\\mathrm{d}U}{\\mathrm{d}H}(h) &= \\zeta \\frac{U(h)}{h},\n\\end{align}"
},
{
"math_id": 27,
"text": "{c_{\\mathrm{gas,monatomic}} \\over c_{\\mathrm{gas,diatomic}}} = \\sqrt{{{{5/3} \\over {7/5}}}} = \\sqrt{25 \\over 21} = 1.091\\ldots"
},
{
"math_id": 28,
"text": "c_{\\mathrm{solid,p}} = \\sqrt{\\frac{K + \\frac{4}{3}G}{\\rho}} = \\sqrt{\\frac{E(1 - \\nu)}{\\rho (1 + \\nu)(1 - 2 \\nu)}},"
},
{
"math_id": 29,
"text": "c_{\\mathrm{solid,s}} = \\sqrt{\\frac{G}{\\rho}},"
},
{
"math_id": 30,
"text": "c_{\\mathrm{solid}} = \\sqrt{\\frac{E}{\\rho}},"
},
{
"math_id": 31,
"text": "c_{\\mathrm{fluid}} = \\sqrt{\\frac{K}{\\rho}},"
},
{
"math_id": 32,
"text": "c(T, S, z) = a_1 + a_2 T + a_3 T^2 + a_4 T^3 + a_5 (S - 35) + a_6 z + a_7 z^2 + a_8 T(S - 35) + a_9 T z^3,"
},
{
"math_id": 33,
"text": "\\begin{align}\na_1 &= 1,448.96, & a_2 &= 4.591, & a_3 &= -5.304 \\times 10^{-2},\\\\\na_4 &= 2.374 \\times 10^{-4}, & a_5 &= 1.340, & a_6 &= 1.630 \\times 10^{-2},\\\\\na_7 &= 1.675 \\times 10^{-7}, & a_8 &= -1.025 \\times 10^{-2}, & a_9 &= -7.139 \\times 10^{-13},\n\\end{align}"
},
{
"math_id": 34,
"text": "c_s = \\left(\\frac{\\gamma Z k T_\\mathrm{e}}{m_\\mathrm{i}}\\right)^{1/2} = \\left(\\frac{\\gamma Z T_e}{\\mu} \\right)^{1/2} \\times 90.85~\\mathrm{m/s},"
}
] |
https://en.wikipedia.org/wiki?curid=147853
|
147864
|
Random optimization
|
Random optimization (RO) is a family of numerical optimization methods that do not require the gradient of the problem to be optimized and RO can hence be used on functions that are not continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.
The name random optimization is attributed to Matyas who made an early presentation of RO along with basic mathematical analysis. RO works by iteratively moving to better positions in the search-space which are sampled using e.g. a normal distribution surrounding the current position.
Algorithm.
Let formula_0 be the fitness or cost function which must be minimized. Let formula_1 designate a position or candidate solution in the search-space. The basic RO algorithm can then be described as: initialize "x" with a random position in the search-space; until a termination criterion is met (e.g. a fixed number of iterations performed, or adequate fitness reached), sample a new position "y" by adding a normally distributed random vector to the current position "x", and move to the new position (set "x" = "y") if "f"("y") < "f"("x"); finally, return "x" as the best position found.
This algorithm corresponds to a (1+1) evolution strategy with constant step-size.
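A minimal Python sketch of this scheme is given below; the step size, iteration budget, and function names are illustrative assumptions rather than part of Matyas' original formulation.
<syntaxhighlight lang="python">
import numpy as np

def random_optimization(f, x0, step=0.1, iterations=1000, rng=None):
    """Basic (1+1)-style random optimization: keep a candidate, accept improvements."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iterations):
        y = x + rng.normal(scale=step, size=x.shape)  # sample around the current position
        fy = f(y)
        if fy < fx:  # move only when the new position is better
            x, fx = y, fy
    return x, fx

# Example: minimize the sphere function f(x) = sum(x_i^2)
best_x, best_f = random_optimization(lambda v: float(np.sum(v**2)), x0=[3.0, -2.0])
</syntaxhighlight>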
Convergence and variants.
Matyas showed that the basic form of RO converges to the optimum of a simple unimodal function by using a limit-proof which shows that convergence to the optimum is certain to occur if a potentially infinite number of iterations are performed. However, this proof is not useful in practice because only a finite number of iterations can ever be executed. In fact, such a theoretical limit-proof will also show that purely random sampling of the search-space will inevitably yield a sample arbitrarily close to the optimum.
Mathematical analyses were also conducted by Baba, and by Solis and Wets, to establish that convergence to a region surrounding the optimum is inevitable under some mild conditions for RO variants that sample from other probability distributions. An estimate of the number of iterations required to approach the optimum was derived by Dorea. These analyses were criticized through empirical experiments by Sarma, who used the optimizer variants of Baba and Dorea on two real-world problems: the optimum was approached very slowly, and the methods proved unable to locate a solution of adequate fitness unless the process was started sufficiently close to the optimum to begin with.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f: \\mathbb{R}^{n} \\rarr \\mathbb{R}"
},
{
"math_id": 1,
"text": "x \\isin \\mathbb{R}^{n}"
}
] |
https://en.wikipedia.org/wiki?curid=147864
|
14787365
|
Glass batch calculation
|
Glass batch calculation or glass batching is used to determine the correct mix of raw materials (batch) for a glass melt.
Principle.
The raw materials mixture for glass melting is termed "batch". The batch must be measured properly to achieve a given, desired glass formulation. This batch calculation is based on the common linear regression equation:
formula_0
with NB and NG being the molarities 1-column matrices of the batch and glass components respectively, and B being the batching matrix. The symbol "T" stands for the matrix transpose operation, "−1" indicates matrix inversion, and the sign "·" means the scalar product. From the molarities matrices N, percentages by weight (wt%) can easily be derived using the appropriate molar masses.
Example calculation.
An example batch calculation may be demonstrated here. The desired glass composition in wt% is: 67 SiO2, 12 Na2O, 10 CaO, 5 Al2O3, 1 K2O, 2 MgO, 3 B2O3, and as raw materials are used sand, trona, lime, albite, orthoclase, dolomite, and borax. The formulas and molar masses of the glass and batch components are listed in the following table:
The batching matrix B indicates the relation of the molarity in the batch (columns) and in the glass (rows). For example, the batch component SiO2 adds 1 mol SiO2 to the glass, therefore, the intersection of the first column and row shows "1". Trona adds 1.5 mol Na2O to the glass; albite adds 6 mol SiO2, 1 mol Na2O, and 1 mol Al2O3, and so on. For the example given above, the complete batching matrix is listed below. The molarity matrix NG of the glass is simply determined by dividing the desired wt% concentrations by the appropriate molar masses, e.g., for SiO2 67/60.0843 = 1.1151.
formula_1 formula_2
The resulting molarity matrix of the batch, NB, is given here. After multiplication with the appropriate molar masses of the batch ingredients one obtains the batch mass fraction matrix MB:
formula_3 formula_4 or formula_5
The matrix MB, normalized to sum up to 100% as seen above, contains the final batch composition in wt%: 39.216 sand, 16.012 trona, 10.242 lime, 16.022 albite, 4.699 orthoclase, 7.276 dolomite, 6.533 borax. If this batch is melted into a glass, the desired composition given above is obtained. During glass melting, carbon dioxide (from trona, lime, dolomite) and water (from trona, borax) evaporate.
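The example can be reproduced numerically; the sketch below (assuming NumPy, with the batching matrix and glass molarities exactly as given above) solves the least-squares equation for NB. Converting NB to the final weight percentages additionally requires the raw-material molar masses from the table referred to above, which are not repeated here.
<syntaxhighlight lang="python">
import numpy as np

# Batching matrix B: rows = glass oxides, columns = raw materials
# (sand, trona, lime, albite, orthoclase, dolomite, borax)
B = np.array([
    [1, 0,   0, 6, 6, 0, 0],   # SiO2
    [0, 1.5, 0, 1, 0, 0, 1],   # Na2O
    [0, 0,   1, 0, 0, 1, 0],   # CaO
    [0, 0,   0, 1, 1, 0, 0],   # Al2O3
    [0, 0,   0, 0, 1, 0, 0],   # K2O
    [0, 0,   0, 0, 0, 1, 0],   # MgO
    [0, 0,   0, 0, 0, 0, 2],   # B2O3
], dtype=float)

# Glass molarities N_G = desired wt% divided by molar mass, e.g. 67/60.0843 for SiO2
N_G = np.array([1.1151, 0.1936, 0.1783, 0.0490, 0.0106, 0.0496, 0.0431])

# N_B = (B^T B)^(-1) B^T N_G, computed as a least-squares solution
N_B, *_ = np.linalg.lstsq(B, N_G, rcond=None)
print(N_B)  # approx [0.8209, 0.0891, 0.1287, 0.0384, 0.0106, 0.0496, 0.0216]
</syntaxhighlight>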
Simple glass batch calculation can be found at the website of the University of Washington.
Advanced batch calculation by optimization.
If the number of glass and batch components is not equal, if it is impossible to exactly obtain the desired glass composition using the selected batch ingredients, or if the matrix equation is not solvable for other reasons (e.g., because rows/columns are linearly dependent), the batch composition must be determined by optimization techniques.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N_B = (B^T\\cdot B)^{-1}\\cdot B^T \\cdot N_G"
},
{
"math_id": 1,
"text": "\\mathbf{B} = \\begin{bmatrix}\n1 & 0 & 0 & 6 & 6 & 0 & 0 \\\\\n0 & 1.5 & 0 & 1 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 2 \\end{bmatrix}\n\n\n"
},
{
"math_id": 2,
"text": "\\mathbf{N_{G}} = \\begin{bmatrix}\n1.1151 \\\\\n0.1936 \\\\\n0.1783 \\\\\n0.0490 \\\\\n0.0106 \\\\\n0.0496 \\\\\n0.0431 \\end{bmatrix}"
},
{
"math_id": 3,
"text": "\\mathbf{N_{B}} = \\begin{bmatrix}\n0.82087 \\\\\n0.08910 \\\\\n0.12870 \\\\\n0.03842 \\\\\n0.01062 \\\\\n0.04962 \\\\\n0.02155 \\end{bmatrix}"
},
{
"math_id": 4,
"text": "\\mathbf{M_{B}} = \\begin{bmatrix}\n49.321 \\\\\n20.138 \\\\\n12.881 \\\\\n20.150 \\\\\n5.910 \\\\\n9.150 \\\\\n8.217 \\end{bmatrix}"
},
{
"math_id": 5,
"text": "\\mathbf{M_{B}(100\\% normalized)} = \\begin{bmatrix}\n39.216 \\\\\n16.012 \\\\\n10.242 \\\\\n16.022 \\\\\n4.699 \\\\\n7.276 \\\\\n6.533 \\end{bmatrix}"
}
] |
https://en.wikipedia.org/wiki?curid=14787365
|
14790084
|
Logarithmic decrement
|
Measure for the damping of an oscillator
Logarithmic decrement, formula_0, is used to find the damping ratio of an underdamped system in the time domain.
The method of logarithmic decrement becomes less and less precise as the damping ratio increases past about 0.5; it does not apply at all for a damping ratio greater than 1.0 because the system is overdamped.
Method.
The logarithmic decrement is defined as the natural log of the ratio of the amplitudes of any two successive peaks:
formula_1
where "x"("t") is the overshoot (amplitude - final value) at time "t" and "x"("t" + "nT") is the overshoot of the peak "n" periods away, where "n" is any integer number of successive, positive peaks.
The damping ratio is then found from the logarithmic decrement by:
formula_2
Thus logarithmic decrement also permits evaluation of the Q factor of the system:
formula_3
formula_4
The damping ratio can then be used to find the natural frequency "ω""n" of vibration of the system from the damped natural frequency "ω""d":
formula_5
formula_6
where "T", the period of the waveform, is the time between two successive amplitude peaks of the underdamped system.
Simplified variation.
The damping ratio can be found for any two adjacent peaks. This method is used when "n" = 1 and is derived from the general method above:
formula_7
where "x"0 and "x"1 are amplitudes of any two successive peaks.
For systems where formula_8 (not too close to the critically damped regime, where formula_9), the damping ratio can be approximated by:
formula_10
Method of fractional overshoot.
The method of fractional overshoot can be useful for damping ratios between about 0.5 and 0.8. The fractional overshoot OS is:
formula_11
where "x""p" is the amplitude of the first peak of the step response and "x""f" is the settling amplitude. Then the damping ratio is
formula_12
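A corresponding one-line computation (a sketch; since the overshoot OS lies between 0 and 1, its logarithm is negative, but it enters only squared):
<syntaxhighlight lang="python">
import math

def damping_from_overshoot(x_p, x_f):
    OS = (x_p - x_f) / x_f                                  # fractional overshoot
    return 1.0 / math.sqrt(1.0 + (math.pi / math.log(OS))**2)
</syntaxhighlight>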
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\delta "
},
{
"math_id": 1,
"text": " \\delta = \\frac{1}{n} \\ln \\frac{x(t)}{x(t+nT)} "
},
{
"math_id": 2,
"text": " \\zeta = \\frac{\\delta}{\\sqrt{4\\pi^2 + \\delta^2}} "
},
{
"math_id": 3,
"text": " Q = \\frac{1}{2\\zeta} "
},
{
"math_id": 4,
"text": " Q = \\frac{1}{2} \\sqrt{1 + \\left(\\frac{n2\\pi}{\\ln \\frac{x(t)}{x(t+nT)}}\\right)^2} "
},
{
"math_id": 5,
"text": " \\omega_d = \\frac{2\\pi}{T} "
},
{
"math_id": 6,
"text": " \\omega_n = \\frac{\\omega_d}{\\sqrt{1 - \\zeta^2}} "
},
{
"math_id": 7,
"text": " \\zeta = \\frac{1}{\\sqrt{1 + \\left(\\frac{2\\pi}{\\ln \\left(\\frac{x_0}{x_1}\\right)}\\right)^2}} "
},
{
"math_id": 8,
"text": " \\zeta \\ll 1 "
},
{
"math_id": 9,
"text": " \\zeta \\approx 1 "
},
{
"math_id": 10,
"text": " \\zeta \\approx \\frac{\\ln \\left(\\frac{x_0}{x_1}\\right)}{2\\pi} "
},
{
"math_id": 11,
"text": "\\mathrm{ OS} = \\frac{x_p - x_f}{x_f} "
},
{
"math_id": 12,
"text": " \\zeta = \\frac{1}{\\sqrt{1 + \\left(\n\\frac{\\pi}{\\ln(\\mathrm{OS})}\n\\right)^2}} "
}
] |
https://en.wikipedia.org/wiki?curid=14790084
|
1479085
|
Stieltjes constants
|
Constants in the zeta function's Laurent series expansion
In mathematics, the Stieltjes constants are the numbers formula_0 that occur in the Laurent series expansion of the Riemann zeta function:
formula_1
The constant formula_2 is known as the Euler–Mascheroni constant.
Representations.
The Stieltjes constants are given by the limit
formula_3
Cauchy's differentiation formula leads to the integral representation
formula_4
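As a quick numerical illustration of the limit representation (a sketch only: the limit converges slowly, and the call to mpmath's stieltjes() function is assumed to be available as a cross-check):
<syntaxhighlight lang="python">
from math import log
import mpmath

def stieltjes_partial(n, m):
    """Partial sum of the limit representation of gamma_n, truncated at k = m."""
    s = sum(log(k)**n / k for k in range(1, m + 1))
    return s - log(m)**(n + 1) / (n + 1)

print(stieltjes_partial(1, 10**6))   # roughly -0.0728, slowly approaching gamma_1
print(mpmath.stieltjes(1))           # reference value -0.0728158...
</syntaxhighlight>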
Various representations in terms of integrals and infinite series are given in works of Jensen, Franel, Hermite, Hardy, Ramanujan, Ainsworth, Howell, Coppo, Connon, Coffey, Choi, Blagouchine and some other authors. In particular, Jensen-Franel's integral formula, often erroneously attributed to Ainsworth and Howell, states that
formula_5
where δ"n,k" is the Kronecker symbol (Kronecker delta). Among other formulae, we find
formula_6
formula_7
Concerning series representations, a famous series involving the integer part of a logarithm was given by Hardy in 1912
formula_8
Israilov gave semi-convergent series in terms of Bernoulli numbers formula_9
formula_10
Connon, Blagouchine and Coppo gave several series with the binomial coefficients
formula_11
where "G""n" are Gregory's coefficients, also known as reciprocal logarithmic numbers ("G"1=+1/2, "G"2=−1/12, "G"3=+1/24, "G"4=−19/720... ).
More general series of the same nature include these examples
formula_12
and
formula_13
or
formula_14
where "ψn"("a") are the Bernoulli polynomials of the second kind and "Nn,r"("a") are the polynomials given by the generating equation
formula_15
respectively (note that "Nn,1"("a") = "ψn"("a")).
Oloa and Tauraso showed that series with harmonic numbers may lead to Stieltjes constants
formula_16
Blagouchine obtained slowly-convergent series involving unsigned Stirling numbers of the first kind
formula_17
formula_18
as well as semi-convergent series with rational terms only
formula_19
where "m"=0,1,2... In particular, series for the first Stieltjes constant has a surprisingly simple form
formula_20
where "H""n" is the "n"th harmonic number.
More complicated series for Stieltjes constants are given in works of Lehmer, Liang, Todd, Lavrik, Israilov, Stankus, Keiper, Nan-You, Williams, Coffey.
Bounds and asymptotic growth.
The Stieltjes constants satisfy the bound
formula_21
given by Berndt in 1972. Better bounds in terms of elementary functions were obtained by Lavrik
formula_22
by Israilov
formula_23
with "k"=1,2... and "C"(1)=1/2, "C"(2)=7/12... , by Nan-You and Williams
formula_24
by Blagouchine
formula_25
where "B""n" are Bernoulli numbers, and by Matsuoka
formula_26
As concerns estimates resorting to non-elementary functions and solutions of equations, Knessl, Coffey and Fekih-Ahmed obtained quite accurate results. For example, Knessl and Coffey give the following formula that approximates the Stieltjes constants relatively well for large "n". If "v" is the unique solution of
formula_27
with formula_28, and if formula_29, then
formula_30
where
formula_31
formula_32
formula_33
formula_34
Up to n = 100000, the Knessl-Coffey approximation correctly predicts the sign of γ"n" with the single exception of n = 137.
In 2022 K. Maślanka gave an asymptotic expression for the Stieltjes constants, which is both simpler and more accurate than those previously known. In particular, it reproduces with a relatively small error the troublesome value for n = 137.
Namely, when formula_35
formula_36
where formula_37 are the saddle points:
formula_38
formula_39 is the Lambert function and formula_40 is a constant:
formula_41
Defining a complex "phase" formula_42
formula_43
we get a particularly simple expression in which both the rapidly increasing amplitude and the oscillations are clearly seen:
formula_44
Numerical values.
The first few values are γ0 = 0.5772156649... (the Euler–Mascheroni constant), γ1 = −0.0728158454..., γ2 = −0.0096903632..., and γ3 = 0.0020538344...
For large "n", the Stieltjes constants grow rapidly in absolute value, and change signs in a complex pattern.
Further information related to the numerical evaluation of Stieltjes constants may be found in works of Keiper, Kreminski, Plouffe, Johansson and Blagouchine. First, Johansson provided values of the Stieltjes constants up to "n" = 100000, accurate to over 10000 digits each (the numerical values can be retrieved from the LMFDB). Later, Johansson and Blagouchine devised a particularly efficient algorithm for computing generalized Stieltjes constants (see below) for large "n" and complex "a", which can also be used for ordinary Stieltjes constants. In particular, it allows one to compute "γ""n" to 1000 digits in a minute for any "n" up to "n" = 10^100.
Generalized Stieltjes constants.
General information.
More generally, one can define Stieltjes constants γ"n"(a) that occur in the Laurent series expansion of the Hurwitz zeta function:
formula_45
Here "a" is a complex number with Re("a")>0. Since the Hurwitz zeta function is a generalization of the Riemann zeta function, we have γ"n"(1)=γ"n" The zeroth constant is simply the digamma-function γ0(a)=-Ψ(a), while other constants are not known to be reducible to any elementary or classical function of analysis. Nevertheless, there are numerous representations for them. For example, there exists the following asymptotic representation
formula_46
due to Berndt and Wilton. The analog of Jensen-Franel's formula for the generalized Stieltjes constant is the Hermite formula
formula_47
Similar representations are given by the following formulas:
formula_48
and
formula_49
Generalized Stieltjes constants satisfy the following recurrence relation
formula_50
as well as the multiplication theorem
formula_51
where formula_52 denotes the binomial coefficient (see the references, pp. 101–102).
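The recurrence relation can be checked numerically; the sketch below assumes that the mpmath library's stieltjes() function accepts a second argument for the generalized constants.
<syntaxhighlight lang="python">
import mpmath as mp

n, a = 2, mp.mpf('1.5')
lhs = mp.stieltjes(n, a + 1)
rhs = mp.stieltjes(n, a) - mp.log(a)**n / a
print(lhs - rhs)   # should be ~0 up to the working precision
</syntaxhighlight>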
First generalized Stieltjes constant.
The first generalized Stieltjes constant has a number of remarkable properties.
formula_53
where "m" and "n" are positive integers such that "m"<"n".
This formula has long been attributed to Almkvist and Meurman, who derived it in the 1990s. However, it was recently reported that this identity, albeit in a slightly different form, was first obtained by Carl Malmsten in 1846.
formula_54
see Blagouchine. An alternative proof was later proposed by Coffey and several other authors.
formula_55
For more details and further summation formulae, see the references.
formula_56
At the points 1/4, 3/4 and 1/3, the values of the first generalized Stieltjes constant were independently obtained by Connon and Blagouchine
formula_57
At points 2/3, 1/6 and 5/6
formula_58
These values were calculated by Blagouchine. To the same author are also due
formula_59
Second generalized Stieltjes constant.
The second generalized Stieltjes constant is much less studied than the first constant. Similarly to the first generalized Stieltjes constant, the second generalized Stieltjes constant at rational argument may be evaluated via the following formula
formula_60
see Blagouchine.
An equivalent result was later obtained by Coffey by another method.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\gamma_k"
},
{
"math_id": 1,
"text": "\\zeta(1+s)=\\frac{1}{s}+\\sum_{n=0}^\\infty \\frac{(-1)^n}{n!} \\gamma_n s^n."
},
{
"math_id": 2,
"text": "\\gamma_0 = \\gamma = 0.577\\dots"
},
{
"math_id": 3,
"text": " \\gamma_n = \\lim_{m\\to\\infty} \\left\\{\\sum_{k=1}^m \\frac{(\\ln k)^n}{k} - \\int_1^m\\frac{(\\ln x)^n}{x}\\,dx\\right\\} = \\lim_{m \\rightarrow \\infty}\n{\\left\\{\\sum_{k = 1}^m \\frac{(\\ln k)^n}{k} - \\frac{(\\ln m)^{n+1}}{n+1}\\right\\}}. "
},
{
"math_id": 4,
"text": "\\gamma_n = \\frac{(-1)^n n!}{2\\pi} \\int_0^{2\\pi} e^{-nix} \\zeta\\left(e^{ix}+1\\right) dx."
},
{
"math_id": 5,
"text": "\n\\gamma_n = \\frac{1}{2}\\delta_{n,0}+\\frac{1}{i}\\int_0^\\infty \\frac{dx}{e^{2\\pi x}-1} \\left\\{\n\\frac{(\\ln(1-ix))^n}{1-ix} - \\frac{(\\ln(1+ix))^n}{1+ix} \n\\right\\}\\,,\n\\qquad\\quad n=0, 1, 2,\\ldots\n"
},
{
"math_id": 6,
"text": "\n\\gamma_n = -\\frac{\\pi}{2(n+1)} \\int_{-\\infty}^\\infty\n\\frac{\\left(\\ln\\left(\\frac{1}{2}\\pm ix\\right)\\right)^{n+1}}{\\cosh^2 \\pi x}\\, dx \n\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad n=0, 1, 2,\\ldots\n"
},
{
"math_id": 7,
"text": "\n\\begin{array}{l}\n\\displaystyle \n\\gamma_1 =-\\left[\\gamma -\\frac{\\ln2}{2}\\right]\\ln2 + i\\int_0^\\infty \\frac{dx}{e^{\\pi x}+1} \\left\\{\n\\frac{\\ln(1-ix)}{1-ix} - \\frac{\\ln(1+ix)}{1+ix} \n\\right\\} \\\\[6mm]\n\\displaystyle\n\\gamma_1 = -\\gamma^2 - \\int_0^\\infty \\left[\\frac{1}{1-e^{-x}}-\\frac{1}{x}\\right] e^{-x}\\ln x \\, dx\n\\end{array}\n"
},
{
"math_id": 8,
"text": "\n\\gamma_1 = \\frac{\\ln2}{2}\\sum_{k=2}^\\infty \\frac{(-1)^k}{k} \\lfloor \\log_2{k}\\rfloor\\cdot\n\\left(2\\log_2{k} - \\lfloor \\log_2{2k}\\rfloor\\right)\n"
},
{
"math_id": 9,
"text": "B_{2k}"
},
{
"math_id": 10,
"text": "\n\\gamma_m = \\sum_{k=1}^n \\frac{(\\ln k)^m}{k} - \\frac{(\\ln n)^{m+1}}{m+1}\n- \\frac{(\\ln n)^m}{2n} - \\sum_{k=1}^{N-1} \\frac{B_{2k}}{(2k)!}\\left[\\frac{(\\ln x)^m}{x}\\right]^{(2k-1)}_{x=n} \n- \\theta\\cdot\\frac{B_{2N}}{(2N)!}\\left[\\frac{(\\ln x)^m}{x}\\right]^{(2N-1)}_{x=n} \\,,\\qquad 0<\\theta<1\n"
},
{
"math_id": 11,
"text": "\n\\begin{array}{l}\n\\displaystyle\n\\gamma_m = -\\frac{1}{m+1}\\sum_{n=0}^\\infty\\frac{1}{n+1}\n\\sum_{k=0}^n (-1)^k \\binom{n}{k}(\\ln(k+1))^{m+1} \\\\[7mm]\n\\displaystyle \n\\gamma_m = -\\frac{1}{m+1}\\sum_{n=0}^\\infty\\frac{1}{n+2}\n\\sum_{k=0}^n (-1)^k \\binom{n}{k}\\frac{(\\ln(k+1))^{m+1}}{k+1} \\\\[7mm]\n\\displaystyle \n\\gamma_m=-\\frac{1}{m+1}\\sum_{n=0}^\\infty H_{n+1}\\sum_{k=0}^n (-1)^k \\binom{n}{k}(\\ln(k+2))^{m+1}\\\\[7mm]\n\\displaystyle \n\\gamma_m = \\sum_{n=0}^\\infty\\left|G_{n+1}\\right| \\sum_{k=0}^n\n(-1)^k \\binom{n}{k}\\frac{(\\ln(k+1))^m}{k+1}\n\\end{array}\n"
},
{
"math_id": 12,
"text": "\n\\gamma_m=-\\frac{(\\ln(1+a))^{m+1}}{m+1} + \\sum_{n=0}^\\infty (-1)^n \\psi_{n+1}(a)\n\\sum_{k=0}^{n} (-1)^k \\binom{n}{k}\\frac{(\\ln (k+1))^m}{k+1},\\quad \\Re(a)>-1\n"
},
{
"math_id": 13,
"text": "\n\\gamma_m=-\\frac{1}{r(m+1)}\\sum_{l=0}^{r-1}(\\ln(1+a+l))^{m+1} + \\frac{1}{r}\\sum_{n=0}^\\infty (-1)^n N_{n+1,r}(a)\n\\sum_{k=0}^{n} (-1)^k \\binom{n}{k}\\frac{(\\ln (k+1))^{m}}{k+1},\\quad \\Re(a)>-1, \\; r=1,2,3,\\ldots\n"
},
{
"math_id": 14,
"text": "\n \\gamma_m=-\\frac{1}{\\tfrac{1}{2}+a}\n\\left\\{\\frac{(-1)^m}{m+1}\\,\\zeta^{(m+1)}(0,1+a)- (-1)^m \\zeta^{(m)}(0)\n- \\sum_{n=0}^\\infty (-1)^n \\psi_{n+2}(a) \n\\sum_{k=0}^{n} (-1)^k \\binom{n}{k}\\frac{(\\ln(k+1))^m}{k+1}\\right\\} ,\\quad \\Re(a)>-1\n"
},
{
"math_id": 15,
"text": "\n\\frac{(1+z)^{a+m}-(1+z)^{a}}{\\ln(1+z)}=\\sum_{n=0}^\\infty N_{n,m}(a) z^n , \\qquad |z|<1,\n"
},
{
"math_id": 16,
"text": "\n\\begin{array}{l}\n\\displaystyle \n\\sum_{n=1}^\\infty \\frac{H_n - (\\gamma+\\ln n)}{n} =\n-\\gamma_1 -\\frac{1}{2}\\gamma^2+\\frac{1}{12}\\pi^2 \\\\[6mm]\n\\displaystyle \n\\sum_{n=1}^\\infty \\frac{H^2_n - (\\gamma+\\ln n)^2}{n} =\n-\\gamma_2 -2\\gamma\\gamma_1 -\\frac{2}{3}\\gamma^3+\\frac{5}{3}\\zeta(3)\n\\end{array}\n"
},
{
"math_id": 17,
"text": "\\left[{\\cdot \\atop \\cdot}\\right]"
},
{
"math_id": 18,
"text": "\n\\gamma_m = \\frac{1}{2}\\delta_{m,0}+\n\\frac{(-1)^m m!}{\\pi} \\sum_{n=1}^\\infty\\frac{1}{n\\cdot n!}\n\\sum_{k=0}^{\\lfloor n/2\\rfloor}\\frac{(-1)^{k}\\cdot\\left[{2k+2\\atop m+1}\\right] \\cdot\\left[{n\\atop 2k+1}\\right]}\n{(2\\pi)^{2k+1}}\\,,\\qquad m=0,1,2,...,\n"
},
{
"math_id": 19,
"text": "\n\\gamma_m = \\frac{1}{2}\\delta_{m,0}+(-1)^{m} m!\\cdot\\sum_{k=1}^{N}\\frac{\\left[{2k\\atop m+1}\\right]\\cdot B_{2k}}{(2k)!} \n+ \\theta\\cdot\\frac{(-1)^{m} m!\\cdot \\left[{2N+2\\atop m+1}\\right]\\cdot B_{2N+2}}{(2N+2)!},\\qquad 0<\\theta<1,\n"
},
{
"math_id": 20,
"text": "\n\\gamma_1 = -\\frac{1}{2}\\sum_{k=1}^{N}\\frac{B_{2k}\\cdot H_{2k-1}}{k}\n+ \\theta\\cdot\\frac{B_{2N+2}\\cdot H_{2N+1}}{2N+2},\\qquad 0<\\theta<1,\n"
},
{
"math_id": 21,
"text": "\n|\\gamma_n| \\leq\n\\begin{cases}\n\\displaystyle \\frac{2(n-1)!}{\\pi^n}\\,,\\qquad & n=1, 3, 5,\\ldots \\\\[3mm]\n\\displaystyle \\frac{4(n-1)!}{\\pi^n}\\,,\\qquad & n=2, 4, 6,\\ldots \n\\end{cases}\n"
},
{
"math_id": 22,
"text": "\n|\\gamma_n| \\leq \\frac{n!}{2^{n+1}},\\qquad n=1, 2, 3,\\ldots \n"
},
{
"math_id": 23,
"text": "\n|\\gamma_n| \\leq \\frac{n! C(k)}{(2k)^{n}},\\qquad n=1, 2, 3,\\ldots \n"
},
{
"math_id": 24,
"text": "\n|\\gamma_n| \\leq\n\\begin{cases}\n\\displaystyle \\frac{2(2n)!}{n^{n+1}(2\\pi)^n}\\,,\\qquad & n=1, 3, 5,\\ldots \\\\[4mm]\n\\displaystyle \\frac{4(2n)!}{n^{n+1}(2\\pi)^n}\\,,\\qquad & n=2, 4, 6,\\ldots \n\\end{cases}\n"
},
{
"math_id": 25,
"text": "\n\\begin{array}{ll}\n\\displaystyle-\\frac{\\big|{B}_{m+1}\\big|}{m+1} < \\gamma_m <\n\\frac{(3m+8)\\cdot\\big|{B}_{m+3}\\big|}{24} - \\frac{\\big|{B}_{m+1}\\big|}{m+1} , \n& m=1, 5, 9,\\ldots\\\\[12pt]\n\\displaystyle \n\\frac{\\big|B_{m+1}\\big|}{m+1} - \\frac{(3m+8)\\cdot\\big|B_{m+3}\\big|}{24}\n< \\gamma_m < \\frac{\\big|{B}_{m+1}\\big|}{m+1} , & m=3, 7, 11,\\ldots\\\\[12pt]\n\\displaystyle -\\frac{\\big|{B}_{m+2}\\big|}{2} < \\gamma_m\n < \\frac{(m+3)(m+4)\\cdot\\big|{B}_{m+4}\\big|}{48} - \\frac{\\big|B_{m+2}\\big|}{2} ,\n \\qquad & m=2, 6, 10, \\ldots\\\\[12pt]\n\\displaystyle \n\\frac{\\big|{B}_{m+2}\\big|}{2} - \\frac{(m+3)(m+4)\\cdot\\big|{B}_{m+4}\\big|}{48}\n< \\gamma_m < \\frac{\\big|{B}_{m+2}\\big|}{2}, & m=4, 8, 12, \\ldots\\\\\n\\end{array}\n"
},
{
"math_id": 26,
"text": "\n|\\gamma_n| < 10^{-4} e^{n \\ln \\ln n}\\,,\\qquad n=5,6,7,\\ldots\n"
},
{
"math_id": 27,
"text": "2 \\pi \\exp(v \\tan v) = n \\frac{\\cos(v)}{v}"
},
{
"math_id": 28,
"text": "0 < v < \\pi/2"
},
{
"math_id": 29,
"text": "u = v \\tan v"
},
{
"math_id": 30,
"text": "\\gamma_n \\sim \\frac{B}{\\sqrt{n}} e^{nA} \\cos(an+b)"
},
{
"math_id": 31,
"text": "A = \\frac{1}{2} \\ln(u^2+v^2) - \\frac{u}{u^2+v^2}"
},
{
"math_id": 32,
"text": "B = \\frac{2 \\sqrt{2\\pi} \\sqrt{u^2+v^2}}{[(u+1)^2+v^2]^{1/4}}"
},
{
"math_id": 33,
"text": "a = \\tan^{-1}\\left(\\frac{v}{u}\\right) + \\frac{v}{u^2+v^2}"
},
{
"math_id": 34,
"text": "b = \\tan^{-1}\\left(\\frac{v}{u}\\right) - \\frac{1}{2} \\left(\\frac{v}{u+1}\\right)."
},
{
"math_id": 35,
"text": "n >> 1"
},
{
"math_id": 36,
"text": "\\gamma_{n} \\sim \\sqrt{\\frac{2}{\\pi}} n! \\mathrm{ Re } \\frac{\\Gamma \\left(s_{n}\\right) e^{-cs_{n}}}{\\left( s_{n}\\right) ^{n}\\sqrt{n+s_{n}+\\frac{3}{2}}}"
},
{
"math_id": 37,
"text": "s_{n}"
},
{
"math_id": 38,
"text": "s_{n}=\\frac{n+\\frac{3}{2}}{W\\left( \\pm \\frac{n+\\frac{3}{2}}{2\\pi i}\\right) }"
},
{
"math_id": 39,
"text": "W"
},
{
"math_id": 40,
"text": "c"
},
{
"math_id": 41,
"text": "c=\\log (2\\pi )+\\frac{\\pi }{2}i"
},
{
"math_id": 42,
"text": "\\varphi_{n}"
},
{
"math_id": 43,
"text": "\\varphi _{n}\\equiv \\frac{1}{2}\\ln (8\\pi )-n+(n+\\frac{1}{2})\\ln (n)+(s_{n}-n-\\frac{1}{2})\\ln \\left( s_{n}\\right) -\\frac{1}{2}\\ln \\left( n+s_{n}\\right)-(c+1)s_{n}"
},
{
"math_id": 44,
"text": "\\gamma _{n}\\sim \\mathrm{Re} \\left[ e^{\\varphi _{n}}\\right] =e^{\\mathrm{Re}\\varphi_{n}}\\cos \\left(\\mathrm{Im}\\varphi _{n}\\right)"
},
{
"math_id": 45,
"text": "\\zeta(s,a)=\\frac{1}{s-1}+\\sum_{n=0}^\\infty \\frac{(-1)^n}{n!} \\gamma_n(a) (s-1)^n."
},
{
"math_id": 46,
"text": "\n\\gamma_n(a) = \\lim_{m\\to\\infty}\\left\\{\n\\sum_{k=0}^m \\frac{(\\ln (k+a))^n}{k+a} - \\frac{(\\ln (m+a))^{n+1}}{n+1}\n\\right\\}, \\qquad\n\\begin{array}{l}\nn=0, 1, 2,\\ldots \\\\[1mm]\na\\neq0, -1, -2, \\ldots\n\\end{array}\n"
},
{
"math_id": 47,
"text": "\n\\gamma_n(a) =\\left[\\frac{1}{2a}-\\frac{\\ln{a}}{n+1} \\right](\\ln a)^n\n-i\\int_0^\\infty \\frac{dx}{e^{2\\pi x}-1} \\left\\{\n\\frac{(\\ln(a-ix))^n}{a-ix} - \\frac{(\\ln(a+ix))^n}{a+ix} \n\\right\\} , \\qquad\n\\begin{array}{l}\nn=0, 1, 2,\\ldots \\\\[1mm]\n\\Re(a)>0\n\\end{array}\n"
},
{
"math_id": 48,
"text": "\n\\gamma_n(a) = - \\frac{\\big(\\ln(a-\\frac12)\\big)^{n+1}}{n+1}\n+i\\int_0^\\infty \\frac{dx}{e^{2\\pi x}+1} \\left\\{\n\\frac{\\big(\\ln(a-\\frac12-ix)\\big)^n}{a-\\frac12-ix} - \\frac{\\big(\\ln(a-\\frac12+ix)\\big)^n}{a-\\frac12+ix} \n\\right\\} , \\qquad\n\\begin{array}{l}\nn=0, 1, 2,\\ldots \\\\[1mm]\n\\Re(a)>\\frac12\n\\end{array}\n"
},
{
"math_id": 49,
"text": "\n\\gamma_n(a) = -\\frac{\\pi}{2(n+1)}\\int_0^\\infty \\frac{\\big(\\ln(a-\\frac12-ix)\\big)^{n+1} + \n\\big(\\ln(a-\\frac12+ix)\\big)^{n+1}}{\\big(\\cosh(\\pi x)\\big)^2} \\, dx , \\qquad\n\\begin{array}{l}\nn=0, 1, 2,\\ldots \\\\[1mm]\n\\Re(a)>\\frac12\n\\end{array}\n"
},
{
"math_id": 50,
"text": "\n\\gamma_n(a+1) = \\gamma_n(a) - \\frac{(\\ln a)^n}{a} \\,, \\qquad\n\\begin{array}{l}\nn=0, 1, 2,\\ldots \\\\[1mm]\na\\neq0, -1, -2, \\ldots\n\\end{array}\n"
},
{
"math_id": 51,
"text": "\n\\sum_{l=0}^{n-1} \\gamma_p \\left(a+\\frac{l}{n} \\right) =\n(-1)^p n \\left[\\frac{\\ln n}{p+1} - \\Psi(an) \\right](\\ln n)^p + n\\sum_{r=0}^{p-1}(-1)^r \\binom{p}{r} \\gamma_{p-r}(an) \\cdot (\\ln n)^r\\,,\n\\qquad\\qquad n=2, 3, 4,\\ldots\n"
},
{
"math_id": 52,
"text": "\\binom{p}{r}"
},
{
"math_id": 53,
"text": "\n\\gamma_1 \\biggl(\\frac{m}{n}\\biggr)- \\gamma_1 \\biggl(1-\\frac{m}{n} \n\\biggr) =2\\pi\\sum_{l=1}^{n-1} \\sin\\frac{2\\pi m l}{n} \\cdot\\ln\\Gamma \\biggl(\\frac{l}{n} \\biggr)\n-\\pi(\\gamma+\\ln2\\pi n)\\cot\\frac{m\\pi}{n}\n"
},
{
"math_id": 54,
"text": "\n\\begin{array}{ll}\n\\displaystyle \n\\gamma_1 \\biggl(\\frac{r}{m} \\biggr)\n =& \\displaystyle\n\\gamma_1 +\\gamma^2 + \\gamma\\ln2\\pi m + \\ln2\\pi\\cdot\\ln{m}+\\frac{1}{2}(\\ln m)^2\n+ (\\gamma+\\ln2\\pi m)\\cdot\\Psi\\left(\\frac{r}{m}\\right) \\\\[5mm]\n\\displaystyle & \\displaystyle\\qquad\n+\\pi\\sum_{l=1}^{m-1} \\sin\\frac{2\\pi r l}{m} \\cdot\\ln\\Gamma \\biggl(\\frac{l}{m} \\biggr) \n+ \\sum_{l=1}^{m-1} \\cos\\frac{2\\pi rl}{m}\\cdot\\zeta''\\left(0,\\frac{l}{m}\\right) \n\\end{array}\\,,\\qquad\\quad r=1, 2, 3,\\ldots, m-1\\,.\n"
},
{
"math_id": 55,
"text": "\n\\begin{array}{ll}\n\\displaystyle \n\\sum_{r=0}^{m-1} \\gamma_1\\left( a+\\frac{r}{m} \\right) =\nm\\ln{m}\\cdot\\Psi(am) - \\frac{m}{2}(\\ln m)^2 + m\\gamma_1(am)\\,,\\qquad a\\in\\mathbb{C}\\\\[6mm]\n\\displaystyle\n\\sum_{r=1}^{m-1} \\gamma_1\\left(\\frac{r}{m} \\right) =\n(m-1)\\gamma_1 - m\\gamma\\ln{m} - \\frac{m}{2}(\\ln m)^2 \\\\[6mm]\n\\displaystyle\n\\sum_{r=1}^{2m-1} (-1)^r \\gamma_1 \\biggl(\\frac{r}{2m} \\biggr)\n= -\\gamma_1+m(2\\gamma+\\ln2+2\\ln m)\\ln2\\\\[6mm]\n\\displaystyle\n\\sum_{r=0}^{2m-1} (-1)^r \\gamma_1\\biggl(\\frac{2r+1}{4m} \\biggr)\n= m\\left\\{4\\pi\\ln\\Gamma \\biggl(\\frac{1}{4} \\biggr) - \\pi\\big(4\\ln2+3\\ln\\pi+\\ln m+\\gamma \\big)\\right\\}\\\\[6mm]\n\\displaystyle\n\\sum_{r=1}^{m-1} \\gamma_1 \\biggl(\\frac{r}{m}\\biggr)\n\\cdot\\cos\\dfrac{2\\pi rk}{m} = -\\gamma_1 + m(\\gamma+\\ln2\\pi m)\n\\ln\\left(2\\sin\\frac{k\\pi}{m}\\right) \n+\\frac{m}{2}\n\\left\\{\\zeta''\\left( 0,\\frac{k}{m}\\right) + \\zeta''\\left( 0,1-\\frac{k}{m}\\right) \\right\\}\\,, \\qquad k=1,2,\\ldots,m-1 \\\\[6mm]\n\\displaystyle\n\\sum_{r=1}^{m-1} \\gamma_1\\biggl(\\frac{r}{m} \\biggr)\n\\cdot\\sin\\dfrac{2\\pi rk}{m} =\\frac{\\pi}{2} (\\gamma+\\ln2\\pi m)(2k-m) \n- \\frac{\\pi m}{2} \\left\\{\\ln\\pi -\\ln\\sin\\frac{k\\pi}{m} \\right\\} \n+ m\\pi\\ln\\Gamma \\biggl(\\frac{k}{m} \\biggr) \\,, \\qquad k=1,2,\\ldots,m-1 \\\\[6mm]\n\\displaystyle\n\\sum_{r=1}^{m-1} \\gamma_1 \\biggl(\\frac{r}{m} \\biggr)\\cdot\\cot\\frac{\\pi r}{m} = \\displaystyle\n\\frac{\\pi }{6} \\Big\\{(1-m)(m-2)\\gamma + 2(m^2-1)\\ln2\\pi - (m^2+2)\\ln{m}\\Big\\}\n-2\\pi\\sum_{l=1}^{m-1} l\\cdot\\ln\\Gamma\\left( \\frac{l}{m}\\right) \\\\[6mm]\n\\displaystyle\n\\sum_{r=1}^{m-1} \\frac{r}{m} \\cdot\\gamma_1 \\biggl(\\frac{r}{m} \\biggr) =\n\\frac{1}{2}\\left\\{(m-1)\\gamma_1 - m\\gamma\\ln{m} - \\frac{m}{2}(\\ln m)^2 \\right\\}\n-\\frac{\\pi}{2m}(\\gamma+\\ln2\\pi m) \\sum_{l=1}^{m-1} l\\cdot \\cot\\frac{\\pi l}{m} \n-\\frac{\\pi}{2} \\sum_{l=1}^{m-1} \\cot\\frac{\\pi l}{m} \\cdot\\ln\\Gamma\\biggl(\\frac{l}{m} \\biggr) \n\\end{array}\n"
},
{
"math_id": 56,
"text": "\n\\gamma_1\\left(\\frac{1}{2}\\right) = - 2\\gamma\\ln 2 - (\\ln 2)^2 + \\gamma_1 = -1.353459680\\ldots\n"
},
{
"math_id": 57,
"text": "\n\\begin{array}{l}\n\\displaystyle\n\\gamma_1\\left(\\frac{1}{4}\\right) = 2\\pi\\ln\\Gamma\\left(\\frac{1}{4} \\right) \n- \\frac{3\\pi}{2}\\ln\\pi - \\frac{7}{2}(\\ln 2)^2 - (3\\gamma+2\\pi)\\ln2 - \\frac{\\gamma\\pi}{2}+\\gamma_1 = -5.518076350\\ldots \\\\[6mm]\n\\displaystyle\n\\gamma_1\\left(\\frac{3}{4} \\right) = -2\\pi\\ln\\Gamma\\left(\\frac{1}{4} \\right) \n+ \\frac{3\\pi}{2}\\ln\\pi - \\frac{7}{2}(\\ln 2)^2 - (3\\gamma-2\\pi)\\ln2 + \\frac{\\gamma\\pi}{2}+\\gamma_1 = -0.3912989024\\ldots \\\\[6mm]\n\\displaystyle\n\\gamma_1\\left(\\frac{1}{3} \\right) = -\\frac{3\\gamma}{2}\\ln3 - \\frac{3}{4}(\\ln 3)^2 \n+ \\frac{\\pi}{4\\sqrt{3}}\\left\\{\\ln3 - 8\\ln2\\pi -2\\gamma +12 \\ln\\Gamma\\left(\\frac{1}{3} \\right) \\right\\}\n+ \\gamma_1 = -3.259557515\\ldots\n\\end{array}\n"
},
{
"math_id": 58,
"text": "\n\\begin{array}{l}\n\\displaystyle\n\\gamma_1\\left(\\frac{2}{3} \\right) = -\\frac{3\\gamma}{2}\\ln3 - \\frac{3}{4}(\\ln 3)^2 \n- \\frac{\\pi}{4\\sqrt{3}}\\left\\{\\ln 3 - 8\\ln 2\\pi -2\\gamma + 12 \\ln\\Gamma\\left(\\frac{1}{3} \\right) \\right\\} \n+ \\gamma_1 = -0.5989062842\\ldots \\\\[6mm]\n\\displaystyle\n\\gamma_1\\left(\\frac{1}{6} \\right) = -\\frac{3\\gamma}{2}\\ln3 - \\frac{3}{4}(\\ln 3)^2\n- (\\ln 2)^2 - (3\\ln3+2\\gamma)\\ln2 + \\frac{3\\pi\\sqrt{3}}{2}\\ln\\Gamma\\left(\\frac{1}{6} \\right) \\\\[5mm]\n\\displaystyle\\qquad\\qquad\\quad\n- \\frac{\\pi}{2\\sqrt{3}}\\left\\{3\\ln3 + 11\\ln2 + \\frac{15}{2}\\ln\\pi + 3\\gamma \\right\\} + \\gamma_1 = -10.74258252\\ldots\\\\[6mm]\n\\displaystyle\n\\gamma_1\\left(\\frac{5}{6} \\right) = -\\frac{3\\gamma}{2}\\ln 3 - \\frac{3}{4}(\\ln 3)^2 \n- (\\ln 2)^2 - (3\\ln3+2\\gamma)\\ln2 - \\frac{3\\pi\\sqrt{3}}{2}\\ln\\Gamma\\left(\\frac{1}{6} \\right) \\\\[6mm]\n\\displaystyle\\qquad\\qquad\\quad\n+ \\frac{\\pi}{2\\sqrt{3}}\\left\\{3\\ln3 + 11\\ln2 + \\frac{15}{2}\\ln\\pi + 3\\gamma \\right\\}+ \\gamma_1 = -0.2461690038\\ldots\n\\end{array}\n"
},
{
"math_id": 59,
"text": "\n\\begin{array}{ll}\n\\displaystyle \n \\gamma_1\\biggl(\\frac{1}{5} \\biggr)=& \\displaystyle\n\\gamma_1 + \\frac{\\sqrt{5}}{2}\\left\\{\\zeta''\\left( 0,\\frac{1}{5}\\right) \n+ \\zeta''\\left( 0,\\frac{4}{5}\\right)\\right\\}\n+ \\frac{\\pi\\sqrt{10+2\\sqrt5}}{2} \\ln\\Gamma \\biggl(\\frac{1}{5} \\biggr)\n\\\\[5mm]\n& \\displaystyle \n+ \\frac{\\pi\\sqrt{10-2\\sqrt5}}{2} \\ln\\Gamma \\biggl(\\frac{2}{5} \\biggr)\n+\\left\\{\\frac{\\sqrt{5}}{2} \\ln{2} -\\frac{\\sqrt{5}}{2} \\ln\\big(1+\\sqrt{5}\\big) -\\frac{5}{4}\\ln5\n-\\frac{\\pi\\sqrt{25+10\\sqrt5}}{10} \\right\\}\\cdot\\gamma \\\\[5mm]\n& \\displaystyle \n- \\frac{\\sqrt{5}}{2}\\left\\{\\ln2+\\ln5+\\ln\\pi+\\frac{\\pi\\sqrt{25-10\\sqrt5}}{10}\\right\\}\\cdot\\ln\\big(1+\\sqrt{5}) \n+\\frac{\\sqrt{5}}{2}(\\ln 2)^2 + \\frac{\\sqrt{5}\\big(1-\\sqrt{5}\\big)}{8}(\\ln 5)^2 \\\\[5mm]\n& \\displaystyle \n +\\frac{3\\sqrt{5}}{4}\\ln2\\cdot\\ln5 + \\frac{\\sqrt{5}}{2}\\ln2\\cdot\\ln\\pi+\\frac{\\sqrt{5}}{4}\\ln5\\cdot\\ln\\pi\n- \\frac{\\pi\\big(2\\sqrt{25+10\\sqrt5}+5\\sqrt{25+2\\sqrt5} \\big)}{20}\\ln2\\\\[5mm]\n& \\displaystyle \n- \\frac{\\pi\\big(4\\sqrt{25+10\\sqrt5}-5\\sqrt{5+2\\sqrt5} \\big)}{40}\\ln5\n- \\frac{\\pi\\big(5\\sqrt{5+2\\sqrt5}+\\sqrt{25+10\\sqrt5} \\big)}{10}\\ln\\pi\\\\[5mm]\n& \\displaystyle \n= -8.030205511\\ldots \\\\[6mm]\n\\displaystyle \n \\gamma_1\\biggl(\\frac{1}{8} \\biggr)\n =& \\displaystyle\\gamma_1 + \\sqrt{2}\\left\\{\\zeta''\\left( 0,\\frac{1}{8}\\right) \n+ \\zeta''\\left( 0,\\frac{7}{8}\\right)\\right\\}\n+ 2\\pi\\sqrt{2}\\ln\\Gamma \\biggl(\\frac{1}{8} \\biggr)\n-\\pi \\sqrt{2}\\big(1-\\sqrt2\\big)\\ln\\Gamma \\biggl(\\frac{1}{4} \\biggr)\n\\\\[5mm]\n& \\displaystyle \n-\\left\\{\\frac{1+\\sqrt2}{2}\\pi+4\\ln{2} +\\sqrt{2}\\ln\\big(1+\\sqrt{2}\\big) \\right\\}\\cdot\\gamma \n- \\frac{1}{\\sqrt{2}}\\big(\\pi+8\\ln2+2\\ln\\pi\\big)\\cdot\\ln\\big(1+\\sqrt{2}) \n\\\\[5mm]\n& \\displaystyle \n - \\frac{7\\big(4-\\sqrt2\\big)}{4}(\\ln 2)^2 + \\frac{1}{\\sqrt{2}}\\ln2\\cdot\\ln\\pi \n -\\frac{\\pi\\big(10+11\\sqrt2\\big)}{4}\\ln2\n -\\frac{\\pi\\big(3+2\\sqrt2\\big)}{2}\\ln\\pi\\\\[5mm]\n& \\displaystyle \n= -16.64171976\\ldots \\\\[6mm]\n\\displaystyle \n \\gamma_1\\biggl(\\frac{1}{12} \\biggr)\n =& \\displaystyle\\gamma_1 + \\sqrt{3}\\left\\{\\zeta''\\left( 0,\\frac{1}{12}\\right) \n+ \\zeta''\\left( 0,\\frac{11}{12}\\right)\\right\\}\n+ 4\\pi\\ln\\Gamma \\biggl(\\frac{1}{4} \\biggr)\n+3\\pi \\sqrt{3}\\ln\\Gamma \\biggl(\\frac{1}{3} \\biggr)\n\\\\[5mm]\n& \\displaystyle \n-\\left\\{\\frac{2+\\sqrt3}{2}\\pi+\\frac{3}{2}\\ln3 -\\sqrt3(1-\\sqrt3)\\ln{2} +2\\sqrt{3}\\ln\\big(1+\\sqrt{3}\\big) \\right\\}\\cdot\\gamma \n\\\\[5mm]\n& \\displaystyle \n- 2\\sqrt3\\big(3\\ln2+\\ln3 +\\ln\\pi\\big)\\cdot\\ln\\big(1+\\sqrt{3}) \n - \\frac{7-6\\sqrt3}{2}(\\ln 2)^2 - \\frac{3}{4}(\\ln 3)^2 \\\\[5mm]\n& \\displaystyle \n+ \\frac{3\\sqrt3(1-\\sqrt3)}{2}\\ln3\\cdot\\ln2\n + \\sqrt3\\ln2\\cdot\\ln\\pi \n -\\frac{\\pi\\big(17+8\\sqrt3\\big)}{2\\sqrt3}\\ln2 \\\\[5mm]\n& \\displaystyle \n +\\frac{\\pi\\big(1-\\sqrt3\\big)\\sqrt3}{4}\\ln3\n -\\pi\\sqrt3(2+\\sqrt3)\\ln\\pi\n= -29.84287823\\ldots\n\\end{array}\n"
},
{
"math_id": 60,
"text": "\n\\begin{array}{rl}\n\\displaystyle\n\\gamma_2 \\biggl(\\frac{r}{m} \\biggr) =\n\\gamma_2 + \\frac{2}{3}\\sum_{l=1}^{m-1}\n\\cos\\frac{2\\pi r l}{m} \\cdot\\zeta'''\\left(0,\\frac{l}{m}\\right) -\n2 (\\gamma+\\ln2\\pi m) \\sum_{l=1}^{m-1}\n\\cos\\frac{2\\pi r l}{m} \\cdot\\zeta''\\left(0,\\frac{l}{m}\\right) \\\\[6mm]\n\\displaystyle \\quad\n+ \\pi\\sum_{l=1}^{m-1}\n\\sin\\frac{2\\pi r l}{m} \\cdot\\zeta''\\left(0,\\frac{l}{m}\\right) \n-2\\pi(\\gamma+\\ln2\\pi m)\n\\sum_{l=1}^{m-1}\n\\sin\\frac{2\\pi r l}{m} \\cdot\\ln\\Gamma \\biggl(\\frac{l}{m} \\biggr) \n - 2\\gamma_1 \\ln{m} \\\\[6mm]\n\\displaystyle\\quad\n- \\gamma^3 \n-\\left[(\\gamma+\\ln2\\pi m)^2-\\frac{\\pi^2}{12}\\right]\\cdot\n\\Psi\\biggl(\\frac{r}{m} \\biggr) + \n\\frac{\\pi^3}{12}\\cot\\frac{\\pi r}{m} \n -\\gamma^2\\ln\\big(4\\pi^2 m^3\\big) +\\frac{\\pi^2}{12}(\\gamma+\\ln{m}) \\\\[6mm]\n\\displaystyle\\quad\n - \\gamma\\big((\\ln 2\\pi)^2 + 4\\ln m \\cdot\\ln 2\\pi + 2(\\ln m)^2\\big)\n - \\left\\{(\\ln 2\\pi)^2 + 2\\ln 2\\pi \\cdot \\ln m + \\frac{2}{3}(\\ln m)^2\\right\\}\\ln m\n\\end{array}\\,,\\qquad\\quad r=1, 2, 3,\\ldots, m-1.\n"
}
] |
https://en.wikipedia.org/wiki?curid=1479085
|
147909
|
Linearity of differentiation
|
Calculus property
In calculus, the derivative of any linear combination of functions equals the same linear combination of the derivatives of the functions; this property is known as linearity of differentiation, the rule of linearity, or the superposition rule for differentiation. It is a fundamental property of the derivative that encapsulates in a single rule two simpler rules of differentiation, the sum rule (the derivative of the sum of two functions is the sum of the derivatives) and the constant factor rule (the derivative of a constant multiple of a function is the same constant multiple of the derivative). Thus it can be said that differentiation is linear, or the differential operator is a linear operator.
Statement and derivation.
Let "f" and "g" be functions, with "α" and "β" constants. Now consider
formula_0
By the sum rule in differentiation, this is
formula_1
and by the constant factor rule in differentiation, this reduces to
formula_2
Therefore,
formula_3
Omitting the brackets, this is often written as:
formula_4
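A quick symbolic check of this identity (a sketch assuming the SymPy library; the particular choice of functions is arbitrary):
<syntaxhighlight lang="python">
import sympy as sp

x, a, b = sp.symbols('x a b')
f, g = sp.sin(x), sp.exp(x)            # any two differentiable functions
lhs = sp.diff(a*f + b*g, x)
rhs = a*sp.diff(f, x) + b*sp.diff(g, x)
print(sp.simplify(lhs - rhs))          # prints 0, confirming linearity for this pair
</syntaxhighlight>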
Detailed proofs/derivations from definition.
We can prove the entire linearity principle at once, or, we can prove the individual steps (of constant factor and adding) individually. Here, both will be shown.
Proving linearity directly also proves the constant factor rule, the sum rule, and the difference rule as special cases. The sum rule is obtained by setting both constant coefficients to formula_5. The difference rule is obtained by setting the first constant coefficient to formula_5 and the second constant coefficient to formula_6. The constant factor rule is obtained by setting either the second constant coefficient or the second function to formula_7. (From a technical standpoint, the domain of the second function must also be considered; one way to avoid issues is to set the second function equal to the first function and the second constant coefficient equal to formula_7. One could also define both the second constant coefficient and the second function to be 0, where the domain of the second function is a superset of the domain of the first function, among other possibilities.)
Conversely, if we first prove the constant factor rule and the sum rule, we can prove linearity and the difference rule. Proving linearity is done by defining the first and second functions as two other functions multiplied by constant coefficients. Then, as shown in the derivation from the previous section, we can first use the sum rule of differentiation, and then use the constant factor rule, which reaches our conclusion for linearity. In order to prove the difference rule, the second function can be redefined as another function multiplied by the constant coefficient formula_6. This would, when simplified, give us the difference rule for differentiation.
In the proofs/derivations below, the coefficients formula_8 are used; they correspond to the coefficients formula_9 above.
Linearity (directly).
Let formula_10. Let formula_11 be functions. Let formula_12 be a function, where formula_12 is defined only where formula_13 and formula_14 are both defined. (In other words, the domain of formula_12 is the intersection of the domains of formula_13 and formula_14.) Let formula_15 be in the domain of formula_12. Let formula_16.
We want to prove that formula_17.
By definition, we can see that
formula_18
In order to use the limit law for the sum of limits, we need to know that formula_19 and formula_20 both individually exist. For these smaller limits, we need to know that formula_21 and formula_22 both individually exist to use the coefficient law for limits. By definition, formula_23 and formula_24. So, if we know that formula_25 and formula_26 both exist, we will know that formula_21 and formula_22 both individually exist. This allows us to use the coefficient law for limits to write
formula_27
and
formula_28
With this, we can go back to apply the limit law for the sum of limits, since we know that formula_29 and formula_30 both individually exist. From here, we can directly go back to the derivative we were working on.
formula_31
Finally, we have shown what we claimed in the beginning: formula_17.
Sum.
Let formula_11 be functions. Let formula_12 be a function, where formula_12 is defined only where formula_13 and formula_14 are both defined.
(In other words, the domain of formula_12 is the intersection of the domains of formula_13 and formula_14.) Let formula_15 be in the domain of formula_12. Let formula_32.
We want to prove that formula_33.
By definition, we can see that
formula_34
In order to use the law for the sum of limits here, we need to show that the individual limits, formula_35 and formula_36, both exist. By definition, formula_37 and formula_38, so the limits exist whenever the derivatives formula_25 and formula_26 exist. So, assuming that the derivatives exist, we can continue the above derivation
formula_39
Thus, we have shown what we wanted to show, that: formula_33.
Difference.
Let formula_11 be functions. Let formula_12 be a function, where formula_12 is defined only where formula_13 and formula_14 are both defined. (In other words, the domain of formula_12 is the intersection of the domains of formula_13 and formula_14.) Let formula_15 be in the domain of formula_12. Let formula_40.
We want to prove that formula_41.
By definition, we can see that:
formula_42
In order to use the law for the difference of limits here, we need to show that the individual limits, formula_35 and formula_36, both exist. By definition, formula_37 and formula_38, so these limits exist whenever the derivatives formula_25 and formula_26 exist. So, assuming that the derivatives exist, we can continue the above derivation
formula_43
Thus, we have shown what we wanted to show, that: formula_41.
Constant coefficient.
Let formula_13 be a function. Let formula_44; formula_45 will be the constant coefficient. Let formula_12 be a function, where j is defined only where formula_13 is defined. (In other words, the domain of formula_12 is equal to the domain of formula_13.) Let formula_15 be in the domain of formula_12. Let formula_46.
We want to prove that formula_47.
By definition, we can see that:
formula_48
Now, in order to use a limit law for constant coefficients to show that
formula_49 we need to show that formula_35 exists.
However, formula_37, by the definition of the derivative. So, if formula_25 exists, then formula_35 exists.
Thus, if we assume that formula_25 exists, we can use the limit law and continue our proof.
formula_50
Thus, we have proven that when formula_46, we have formula_51.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\mbox{d}}{\\mbox{d} x} ( \\alpha \\cdot f(x) + \\beta \\cdot g(x) )."
},
{
"math_id": 1,
"text": "\\frac{\\mbox{d}}{\\mbox{d} x} ( \\alpha \\cdot f(x) ) + \\frac{\\mbox{d}}{\\mbox{d} x} (\\beta \\cdot g(x)),"
},
{
"math_id": 2,
"text": "\\alpha \\cdot f'(x) + \\beta \\cdot g'(x)."
},
{
"math_id": 3,
"text": "\\frac{\\mbox{d}}{\\mbox{d} x}(\\alpha \\cdot f(x) + \\beta \\cdot g(x)) = \\alpha \\cdot f'(x) + \\beta \\cdot g'(x)."
},
{
"math_id": 4,
"text": "(\\alpha \\cdot f + \\beta \\cdot g)' = \\alpha \\cdot f'+ \\beta \\cdot g'."
},
{
"math_id": 5,
"text": "1"
},
{
"math_id": 6,
"text": "-1"
},
{
"math_id": 7,
"text": "0"
},
{
"math_id": 8,
"text": "a, b"
},
{
"math_id": 9,
"text": "\\alpha, \\beta"
},
{
"math_id": 10,
"text": "a, b \\in \\mathbb{R}"
},
{
"math_id": 11,
"text": "f, g"
},
{
"math_id": 12,
"text": "j"
},
{
"math_id": 13,
"text": "f"
},
{
"math_id": 14,
"text": "g"
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "j(x) = af(x) + bg(x)"
},
{
"math_id": 17,
"text": "j^{\\prime}(x) = af^{\\prime}(x) + bg^{\\prime}(x)"
},
{
"math_id": 18,
"text": "\\begin{align}\nj^{\\prime}(x) &= \\lim_{h \\rightarrow 0} \\frac{j(x + h) - j(x)}{h} \\\\\n&= \\lim_{h \\rightarrow 0} \\frac{\\left( af(x + h) + bg(x + h) \\right) - \\left( af(x) + bg(x) \\right)}{h} \\\\\n&= \\lim_{h \\rightarrow 0} \\left( a\\frac{f(x + h) - f(x)}{h} + b\\frac{g(x + h) - g(x)}{h} \\right) \\\\\n\\end{align}"
},
{
"math_id": 19,
"text": "\\lim_{h \\to 0} a\\frac{f(x + h) - f(x)}{h}"
},
{
"math_id": 20,
"text": "\\lim_{h \\to 0} b\\frac{g(x + h) - g(x)}{h}"
},
{
"math_id": 21,
"text": "\\lim_{h \\to 0} \\frac{f(x + h) - f(x)}{h}"
},
{
"math_id": 22,
"text": "\\lim_{h \\to 0} \\frac{g(x + h) - g(x)}{h}"
},
{
"math_id": 23,
"text": "f^{\\prime}(x) = \\lim_{h \\to 0} \\frac{f(x + h) - f(x)}{h}"
},
{
"math_id": 24,
"text": "g^{\\prime}(x) = \\lim_{h \\to 0} \\frac{g(x + h) - g(x)}{h}"
},
{
"math_id": 25,
"text": "f^{\\prime}(x)"
},
{
"math_id": 26,
"text": "g^{\\prime}(x)"
},
{
"math_id": 27,
"text": "\n\\lim_{h \\to 0} a\\frac{f(x + h) - f(x)}{h}\n= a\\lim_{h \\to 0}\\frac{f(x + h) - f(x)}{h}\n"
},
{
"math_id": 28,
"text": "\n\\lim_{h \\to 0} b\\frac{g(x + h) - g(x)}{h}\n= b\\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}.\n"
},
{
"math_id": 29,
"text": "\\lim_{h \\rightarrow 0} a\\frac{f(x + h) - f(x)}{h}"
},
{
"math_id": 30,
"text": "\\lim_{h \\rightarrow 0} b\\frac{g(x + h) - g(x)}{h}"
},
{
"math_id": 31,
"text": "\\begin{align}\nj^{\\prime}(x) &= \\lim_{h \\rightarrow 0} \\left( a\\frac{f(x + h) - f(x)}{h} + b\\frac{g(x + h) - g(x)}{h} \\right) \\\\\n&= \\lim_{h \\rightarrow 0} \\left( a\\frac{f(x + h) - f(x)}{h}\\right) + \\lim_{h \\rightarrow 0} \\left(b\\frac{g(x + h) - g(x)}{h} \\right) \\\\\n&= a\\lim_{h \\rightarrow 0} \\left( \\frac{f(x + h) - f(x)}{h}\\right) + b\\lim_{h \\rightarrow 0} \\left(\\frac{g(x + h) - g(x)}{h} \\right) \\\\\n&= af^{\\prime}(x) + bg^{\\prime}(x)\n\\end{align}"
},
{
"math_id": 32,
"text": "j(x) = f(x) + g(x)"
},
{
"math_id": 33,
"text": "j^{\\prime}(x) = f^{\\prime}(x) + g^{\\prime}(x)"
},
{
"math_id": 34,
"text": "\\begin{align}\nj^{\\prime}(x) &= \\lim_{h \\rightarrow 0} \\frac{j(x + h) - j(x)}{h} \\\\\n&= \\lim_{h \\rightarrow 0} \\frac{\\left( f(x + h) + g(x + h) \\right) - \\left( f(x) + g(x) \\right)}{h} \\\\\n&= \\lim_{h \\rightarrow 0} \\left( \\frac{f(x + h) - f(x)}{h} + \\frac{g(x + h) - g(x)}{h} \\right) \\\\\n\\end{align}"
},
{
"math_id": 35,
"text": "\\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x)}{h}"
},
{
"math_id": 36,
"text": "\\lim_{h \\rightarrow 0} \\frac{g(x + h) - g(x)}{h}"
},
{
"math_id": 37,
"text": "f^{\\prime}(x) = \\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x)}{h}"
},
{
"math_id": 38,
"text": "g^{\\prime}(x) = \\lim_{h \\rightarrow 0} \\frac{g(x + h) - g(x)}{h}"
},
{
"math_id": 39,
"text": "\\begin{align}\nj^{\\prime}(x) &= \\lim_{h \\rightarrow 0} \\left( \\frac{f(x + h) - f(x)}{h} + \\frac{g(x + h) - g(x)}{h} \\right) \\\\\n&= \\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x)}{h} + \\lim_{h \\rightarrow 0} \\frac{g(x + h) - g(x)}{h} \\\\\n&= f^{\\prime}(x) + g^{\\prime}(x)\n\\end{align}"
},
{
"math_id": 40,
"text": "j(x) = f(x) - g(x)"
},
{
"math_id": 41,
"text": "j^{\\prime}(x) = f^{\\prime}(x) - g^{\\prime}(x)"
},
{
"math_id": 42,
"text": "\\begin{align}\nj^{\\prime}(x) &= \\lim_{h \\rightarrow 0} \\frac{j(x + h) - j(x)}{h} \\\\\n&= \\lim_{h \\rightarrow 0} \\frac{\\left( f(x + h) - (g(x + h) \\right) - \\left( f(x) - g(x) \\right)}{h} \\\\\n&= \\lim_{h \\rightarrow 0} \\left( \\frac{f(x + h) - f(x)}{h} - \\frac{g(x + h) - g(x)}{h} \\right) \\\\\n\\end{align}"
},
{
"math_id": 43,
"text": "\\begin{align}\nj^{\\prime}(x) &= \\lim_{h \\rightarrow 0} \\left( \\frac{f(x + h) - f(x)}{h} - \\frac{g(x + h) - g(x)}{h} \\right) \\\\\n&= \\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x)}{h} - \\lim_{h \\rightarrow 0} \\frac{g(x + h) - g(x)}{h} \\\\\n&= f^{\\prime}(x) - g^{\\prime}(x)\n\\end{align}"
},
{
"math_id": 44,
"text": "a \\in \\mathbb{R}"
},
{
"math_id": 45,
"text": "a"
},
{
"math_id": 46,
"text": "j(x) = af(x)"
},
{
"math_id": 47,
"text": " j^{\\prime}(x) = af^{\\prime}(x)"
},
{
"math_id": 48,
"text": "\\begin{align}\nj^{\\prime}(x) &= \\lim_{h \\rightarrow 0} \\frac{j(x + h) - j(x)}{h} \\\\\n&= \\lim_{h \\rightarrow 0} \\frac{af(x + h) - af(x)}{h} \\\\\n&= \\lim_{h \\rightarrow 0} a\\frac{f(x + h) - f(x)}{h} \\\\\n\\end{align}"
},
{
"math_id": 49,
"text": "\n\\lim_{h \\rightarrow 0} a\\frac{f(x + h) - f(x)}{h} = a\\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x)}{h} \n"
},
{
"math_id": 50,
"text": "\\begin{align}\nj^{\\prime}(x) &= \\lim_{h \\rightarrow 0} a\\frac{f(x + h) - f(x)}{h} \\\\\n&= a\\lim_{h \\rightarrow 0} \\frac{f(x + h) - f(x)}{h} \\\\\n&= af^{\\prime}(x) \\\\\n\\end{align}"
},
{
"math_id": 51,
"text": "j^{\\prime}(x) = af^{\\prime}(x)"
}
] |
https://en.wikipedia.org/wiki?curid=147909
|
147912
|
Power rule
|
Method of differentiating single term polynomials
In calculus, the power rule is used to differentiate functions of the form formula_0, whenever formula_1 is a real number. Since differentiation is a linear operation on the space of differentiable functions, polynomials can also be differentiated using this rule. The power rule underlies the Taylor series as it relates a power series with a function's derivatives.
Statement of the power rule.
Let formula_2 be a function satisfying formula_3 for all formula_4, where formula_5. Then,
formula_6
The power rule for integration states that
formula_7
for any real number formula_8. It can be derived by inverting the power rule for differentiation. In this equation C is any constant.
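Both statements can be checked symbolically; a sketch assuming the SymPy library (the integration rule is verified by differentiating the claimed antiderivative):
<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x', positive=True)
r = sp.symbols('r', real=True)

# Differentiation: d/dx x^r = r x^(r-1)
print(sp.simplify(sp.diff(x**r, x) - r*x**(r - 1)))   # prints 0

# Integration (r != -1): d/dx [x^(r+1)/(r+1)] = x^r
F = x**(r + 1) / (r + 1)
print(sp.simplify(sp.diff(F, x) - x**r))              # prints 0
</syntaxhighlight>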
Proofs.
Proof for real exponents.
To start, we should choose a working definition of the value of formula_0, where formula_1 is any real number. Although it is feasible to define the value as the limit of a sequence of rational powers that approach the irrational power whenever we encounter such a power, or as the least upper bound of a set of rational powers less than the given power, this type of definition is not amenable to differentiation. It is therefore preferable to use a functional definition, which is usually taken to be formula_9 for all values of formula_10, where formula_11 is the natural exponential function and formula_12 is Euler's number. First, we may demonstrate that the derivative of formula_13 is formula_14.
If formula_13, then formula_15, where formula_16 is the natural logarithm function, the inverse function of the exponential function, as demonstrated by Euler. Since the latter two functions are equal for all values of formula_10, their derivatives are also equal, whenever either derivative exists, so we have, by the chain rule,
formula_17
or formula_18, as was required.
Therefore, applying the chain rule to formula_19, we see that
formula_20
which simplifies to formula_21.
When formula_22, we may use the same definition with formula_23, where we now have formula_24. This necessarily leads to the same result. Note that because formula_25 does not have a conventional definition when formula_1 is not a rational number, irrational power functions are not well defined for negative bases. In addition, as rational powers of −1 with even denominators (in lowest terms) are not real numbers, these expressions are only real valued for rational powers with odd denominators (in lowest terms).
Finally, whenever the function is differentiable at formula_26, the defining limit for the derivative is:
formula_27
which yields 0 only when formula_1 is a rational number with odd denominator (in lowest terms) and formula_28, and 1 when formula_29. For all other values of formula_1, the expression formula_30 is not well-defined for formula_31, as was covered above, or is not a real number, so the limit does not exist as a real-valued derivative. For the two cases that do exist, the values agree with the value of the existing power rule at 0, so no exception need be made.
The exclusion of the expression formula_32 (the case formula_26) from our scheme of exponentiation is due to the fact that the function formula_33 has no limit at (0,0), since formula_34 approaches 1 as x approaches 0, while formula_35 approaches 0 as y approaches 0. Thus, it would be problematic to ascribe any particular value to it, as the value would contradict one of the two cases, dependent on the application. It is traditionally left undefined.
Proofs for integer exponents.
Proof by induction (natural numbers).
Let formula_36. It is required to prove that formula_37 The base case may be when formula_38 or formula_39, depending on how the set of natural numbers is defined.
When formula_38, formula_40
When formula_39, formula_41
Therefore, the base case holds either way.
Suppose the statement holds for some natural number "k", i.e. formula_42
When formula_43,
formula_44
By the principle of mathematical induction, the statement is true for all natural numbers "n".
Proof by binomial theorem (natural number).
Let formula_45, where formula_46.
Then,
formula_47
Generalization to negative integer exponents.
For a negative integer "n", let formula_48 so that "m" is a positive integer.
Using the reciprocal rule,
formula_49
In conclusion, for any integer formula_50, formula_51
Generalization to rational exponents.
Upon proving that the power rule holds for integer exponents, the rule can be extended to rational exponents.
Proof by chain rule.
This proof is composed of two steps that involve the use of the chain rule for differentiation.
From the above results, we can conclude that when formula_1 is a rational number, formula_64
Proof by implicit differentiation.
A more straightforward generalization of the power rule to rational exponents makes use of implicit differentiation.
Let formula_65, where formula_66 so that formula_67.
Then,
formula_68
Differentiating both sides of the equation with respect to formula_4,
formula_69
Solving for formula_56,
formula_70
Since formula_71,
formula_72
Applying laws of exponents,
formula_73
Thus, letting formula_74, we can conclude that formula_75 when formula_1 is a rational number.
History.
The power rule for integrals was first demonstrated in a geometric form by Italian mathematician Bonaventura Cavalieri in the early 17th century for all positive integer values of formula_76, and during the mid 17th century for all rational powers by the mathematicians Pierre de Fermat, Evangelista Torricelli, Gilles de Roberval, John Wallis, and Blaise Pascal, each working independently. At the time, these were treatises on determining the area between the graph of a rational power function and the horizontal axis. With hindsight, however, it is considered the first general theorem of calculus to be discovered. The power rule for differentiation was derived independently by Isaac Newton and Gottfried Wilhelm Leibniz for rational power functions in the mid 17th century; both then used it to derive the power rule for integrals as the inverse operation. This mirrors the conventional way the related theorems are presented in modern basic calculus textbooks, where differentiation rules usually precede integration rules.
Although both men stated that their rules, demonstrated only for rational quantities, worked for all real powers, neither sought a proof of such, as at the time the applications of the theory were not concerned with such exotic power functions, and questions of convergence of infinite series were still ambiguous.
The unique case of formula_77 was resolved by Flemish Jesuit and mathematician Grégoire de Saint-Vincent and his student Alphonse Antonio de Sarasa in the mid 17th century, who demonstrated that the associated definite integral,
formula_78
representing the area between the rectangular hyperbola formula_79 and the x-axis, was a logarithmic function, whose base was eventually discovered to be the transcendental number e. The modern notation for the value of this definite integral is formula_80, the natural logarithm.
Generalizations.
Complex power functions.
If we consider functions of the form formula_81 where formula_82 is any complex number and formula_83 is a complex number in a slit complex plane that excludes the branch point of 0 and any branch cut connected to it, and we use the conventional multivalued definition formula_84, then it is straightforward to show that, on each branch of the complex logarithm, the same argument used above yields a similar result: formula_85.
In addition, if formula_82 is a positive integer, then there is no need for a branch cut: one may define formula_86, or define positive integral complex powers through complex multiplication, and show that formula_87 for all complex formula_83, from the definition of the derivative and the binomial theorem.
However, due to the multivalued nature of complex power functions for non-integer exponents, one must be careful to specify the branch of the complex logarithm being used. In addition, no matter which branch is used, if formula_82 is not a positive integer, then the function is not differentiable at 0.
References.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(x) = x^r"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "f(x)=x^r"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "r \\in \\mathbb{R}"
},
{
"math_id": 6,
"text": "f'(x) = rx^{r-1} \\, ."
},
{
"math_id": 7,
"text": "\\int\\! x^r \\, dx=\\frac{x^{r+1}}{r+1}+C"
},
{
"math_id": 8,
"text": "r \\neq -1"
},
{
"math_id": 9,
"text": "x^r = \\exp(r\\ln x)"
},
{
"math_id": 10,
"text": "x > 0"
},
{
"math_id": 11,
"text": "\\exp(x) = e^x"
},
{
"math_id": 12,
"text": "e"
},
{
"math_id": 13,
"text": "f(x) = e^x"
},
{
"math_id": 14,
"text": "f'(x) = e^x"
},
{
"math_id": 15,
"text": "\\ln (f(x)) = x"
},
{
"math_id": 16,
"text": "\\ln"
},
{
"math_id": 17,
"text": "\\frac{1}{f(x)}\\cdot f'(x) = 1"
},
{
"math_id": 18,
"text": "f'(x) = f(x) = e^x"
},
{
"math_id": 19,
"text": "f(x) = e^{r\\ln x}"
},
{
"math_id": 20,
"text": "f'(x)=\\frac{r}{x} e^{r\\ln x}= \\frac{r}{x}x^r"
},
{
"math_id": 21,
"text": "rx^{r-1}"
},
{
"math_id": 22,
"text": "x < 0"
},
{
"math_id": 23,
"text": "x^r = ((-1)(-x))^r = (-1)^r(-x)^r"
},
{
"math_id": 24,
"text": "-x > 0"
},
{
"math_id": 25,
"text": "(-1)^r"
},
{
"math_id": 26,
"text": "x = 0"
},
{
"math_id": 27,
"text": "\\lim_{h\\to 0} \\frac{h^r - 0^r}{h}"
},
{
"math_id": 28,
"text": "r > 1"
},
{
"math_id": 29,
"text": "r = 1"
},
{
"math_id": 30,
"text": "h^r"
},
{
"math_id": 31,
"text": "h < 0"
},
{
"math_id": 32,
"text": "0^0"
},
{
"math_id": 33,
"text": "f(x, y) = x^y"
},
{
"math_id": 34,
"text": "x^0"
},
{
"math_id": 35,
"text": "0^y"
},
{
"math_id": 36,
"text": "n\\in\\N"
},
{
"math_id": 37,
"text": "\\frac{d}{dx} x^n = nx^{n-1}."
},
{
"math_id": 38,
"text": "n=0"
},
{
"math_id": 39,
"text": "n=1"
},
{
"math_id": 40,
"text": "\\frac{d}{dx} x^0 = \\frac{d}{dx} (1) = \\lim_{h \\to 0}\\frac{1-1}{h} = \\lim_{h \\to 0}\\frac{0}{h} = 0 = 0x^{0-1}."
},
{
"math_id": 41,
"text": "\\frac{d}{dx} x^1 = \\lim_{h \\to 0}\\frac{(x+h)-x}{h} = \\lim_{h \\to 0}\\frac{h}{h} = 1 = 1x^{1-1}."
},
{
"math_id": 42,
"text": "\\frac{d}{dx}x^k = kx^{k-1}."
},
{
"math_id": 43,
"text": "n=k+1"
},
{
"math_id": 44,
"text": "\\frac{d}{dx}x^{k+1} = \\frac{d}{dx}(x^k \\cdot x) = x^k \\cdot \\frac{d}{dx}x + x \\cdot \\frac{d}{dx}x^k = x^k + x \\cdot kx^{k-1} = x^k + kx^k = (k+1)x^k = (k+1)x^{(k+1)-1}"
},
{
"math_id": 45,
"text": "y=x^n"
},
{
"math_id": 46,
"text": "n\\in \\mathbb{N} "
},
{
"math_id": 47,
"text": "\\begin{align}\n\\frac{dy}{dx}\n&=\\lim_{h\\to 0}\\frac{(x+h)^n-x^n}h\\\\[4pt]\n&=\\lim_{h\\to 0}\\frac{1}{h} \\left[x^n+\\binom n1 x^{n-1}h+\\binom n2 x^{n-2}h^2+\\dots+\\binom nn h^n-x^n \\right]\\\\[4pt]\n&=\\lim_{h\\to 0}\\left[\\binom n 1 x^{n-1} + \\binom n2 x^{n-2}h+ \\dots+\\binom nn h^{n-1}\\right]\\\\[4pt]\n&=nx^{n-1}\n\\end{align}"
},
{
"math_id": 48,
"text": "n=-m"
},
{
"math_id": 49,
"text": "\\frac{d}{dx}x^n = \\frac{d}{dx} \\left(\\frac{1}{x^m}\\right) = \\frac{-\\frac{d}{dx}x^m}{(x^m)^2} = -\\frac{mx^{m-1}}{x^{2m}} = -mx^{-m-1} = nx^{n-1}."
},
{
"math_id": 50,
"text": "n"
},
{
"math_id": 51,
"text": "\\frac{d}{dx}x^n = nx^{n-1}."
},
{
"math_id": 52,
"text": "y=x^r=x^\\frac1n"
},
{
"math_id": 53,
"text": "n\\in\\N^+"
},
{
"math_id": 54,
"text": "y^n=x"
},
{
"math_id": 55,
"text": "ny^{n-1}\\cdot\\frac{dy}{dx}=1"
},
{
"math_id": 56,
"text": "\\frac{dy}{dx}"
},
{
"math_id": 57,
"text": "\\frac{dy}{dx}\n=\\frac{1}{ny^{n-1}}\n=\\frac{1}{n\\left(x^\\frac1n\\right)^{n-1}}\n=\\frac{1}{nx^{1-\\frac1n}}\n=\\frac{1}{n}x^{\\frac1n-1}\n=rx^{r-1}"
},
{
"math_id": 58,
"text": "1/n"
},
{
"math_id": 59,
"text": "p/q"
},
{
"math_id": 60,
"text": "y=x^r=x^{p/q}"
},
{
"math_id": 61,
"text": "p\\in\\Z, q\\in\\N^+,"
},
{
"math_id": 62,
"text": "r\\in\\Q"
},
{
"math_id": 63,
"text": "\\frac{dy}{dx}\n=\\frac{d}{dx}\\left(x^\\frac1q\\right)^p\n=p\\left(x^\\frac1q\\right)^{p-1}\\cdot\\frac{1}{q}x^{\\frac1q-1}\n=\\frac{p}{q}x^{p/q-1}=rx^{r-1}"
},
{
"math_id": 64,
"text": "\\frac{d}{dx} x^r=rx^{r-1}."
},
{
"math_id": 65,
"text": " y=x^r=x^{p/q}"
},
{
"math_id": 66,
"text": "p, q \\in \\mathbb{Z}"
},
{
"math_id": 67,
"text": "r \\in \\mathbb{Q}"
},
{
"math_id": 68,
"text": "y^q=x^p"
},
{
"math_id": 69,
"text": "qy^{q-1}\\cdot\\frac{dy}{dx} = px^{p-1}"
},
{
"math_id": 70,
"text": "\\frac{dy}{dx} = \\frac{px^{p-1}}{qy^{q-1}}."
},
{
"math_id": 71,
"text": "y=x^{p/q}"
},
{
"math_id": 72,
"text": "\\frac d{dx}x^{p/q} = \\frac{px^{p-1}}{qx^{p-p/q}}."
},
{
"math_id": 73,
"text": "\\frac d{dx}x^{p/q} = \\frac{p}{q}x^{p-1}x^{-p+p/q} = \\frac{p}{q}x^{p/q-1}."
},
{
"math_id": 74,
"text": "r=\\frac{p}{q}"
},
{
"math_id": 75,
"text": "\\frac d{dx}x^r = rx^{r-1}"
},
{
"math_id": 76,
"text": "{\\displaystyle n}"
},
{
"math_id": 77,
"text": "r = -1"
},
{
"math_id": 78,
"text": "\\int_1^x \\frac{1}{t}\\, dt"
},
{
"math_id": 79,
"text": "xy = 1"
},
{
"math_id": 80,
"text": "\\ln(x)"
},
{
"math_id": 81,
"text": "f(z) = z^c"
},
{
"math_id": 82,
"text": "c"
},
{
"math_id": 83,
"text": "z"
},
{
"math_id": 84,
"text": "z^c := \\exp(c\\ln z)"
},
{
"math_id": 85,
"text": "f'(z) = \\frac{c}{z}\\exp(c\\ln z)"
},
{
"math_id": 86,
"text": "f(0) = 0"
},
{
"math_id": 87,
"text": "f'(z) = cz^{c-1}"
}
] |
https://en.wikipedia.org/wiki?curid=147912
|
14793522
|
Superstatistics
|
Superstatistics is a branch of statistical mechanics or statistical physics devoted to the study of non-linear and non-equilibrium systems. It is characterized by using the superposition of multiple differing statistical models to achieve the desired non-linearity. In terms of ordinary statistical ideas, this is equivalent to compounding the distributions of random variables and it may be considered a simple case of a doubly stochastic model.
Consider an extended thermodynamical system which is locally in equilibrium and has a Boltzmann distribution, that is, the probability of finding the system in a state with energy formula_0 is proportional to formula_1. Here formula_2 is the local inverse temperature. A non-equilibrium thermodynamical system is modeled by considering macroscopic fluctuations of the local inverse temperature. These fluctuations happen on time scales which are much larger than the microscopic relaxation times to the Boltzmann distribution. If the fluctuations of formula_2 are characterized by a distribution formula_3, the "superstatistical Boltzmann factor" of the system is given by
formula_4
This defines the superstatistical partition function
formula_5
for system that can assume discrete energy states formula_6. The probability of finding the system in state formula_7 is then given by
formula_8
Modeling the fluctuations of formula_2 leads to a description in terms of statistics of Boltzmann statistics, or "superstatistics". For example, if formula_2 follows a Gamma distribution, the resulting superstatistics corresponds to Tsallis statistics. Superstatistics can also lead to other statistics such as power-law distributions or stretched exponentials. Note that the word "super" here is short for "superposition of statistics".
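As an added illustration (not part of the original text; it assumes NumPy and SciPy are installed), the following Python sketch evaluates the superstatistical Boltzmann factor numerically for a Gamma-distributed inverse temperature and compares it with the closed-form power law (1 + "bE")^(−"c"), the Tsallis-type result mentioned above; the shape and scale parameters are arbitrary choices.

```python
# Superstatistical Boltzmann factor B(E) = integral of f(beta) * exp(-beta*E)
# for a Gamma-distributed inverse temperature beta, compared with the closed
# form (1 + b*E)**(-c), a Tsallis-type power law.  Assumes NumPy and SciPy.
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

c, b = 3.0, 0.5            # shape and scale of the Gamma distribution of beta

def B(E):
    integrand = lambda beta: gamma.pdf(beta, a=c, scale=b) * np.exp(-beta * E)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for E in [0.0, 0.5, 1.0, 5.0]:
    print(f"E = {E:4.1f}   numeric B(E) = {B(E):.6f}   (1 + b*E)**(-c) = {(1 + b*E)**(-c):.6f}")
```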
This branch is closely related to the exponential family and to mixing. These concepts are used in many approximation approaches, such as particle filtering (where the distribution is approximated by delta functions).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "\\exp(-\\beta E)"
},
{
"math_id": 2,
"text": "\\beta"
},
{
"math_id": 3,
"text": "f(\\beta)"
},
{
"math_id": 4,
"text": "\nB(E)=\\int_0^\\infty d\\beta f(\\beta)\\exp(-\\beta E).\n"
},
{
"math_id": 5,
"text": "\nZ = \\sum_{i=1}^W B(E_i)\n"
},
{
"math_id": 6,
"text": "\\{E_i\\}_{i=1}^W"
},
{
"math_id": 7,
"text": "E_i"
},
{
"math_id": 8,
"text": "\np_i=\\frac{1}{Z}B(E_i).\n"
}
] |
https://en.wikipedia.org/wiki?curid=14793522
|
147939
|
Constant of integration
|
Constant expressing ambiguity from indefinite integrals
In calculus, the constant of integration, often denoted by formula_0 (or formula_1), is a constant term added to an antiderivative of a function formula_2 to indicate that the indefinite integral of formula_2 (i.e., the set of all antiderivatives of formula_2), on a connected domain, is only defined up to an additive constant. This constant expresses an ambiguity inherent in the construction of antiderivatives.
More specifically, if a function formula_2 is defined on an interval, and formula_3 is an antiderivative of formula_4 then the set of "all" antiderivatives of formula_2 is given by the functions formula_5 where formula_0 is an arbitrary constant (meaning that "any" value of formula_0 would make formula_6 a valid antiderivative). For that reason, the indefinite integral is often written as formula_7 although the constant of integration might be sometimes omitted in lists of integrals for simplicity.
Origin.
The derivative of any constant function is zero. Once one has found one antiderivative formula_3 for a function formula_4 adding or subtracting any constant formula_0 will give us another antiderivative, because formula_8 The constant is a way of expressing that every function with at least one antiderivative will have an infinite number of them.
Let formula_9 and formula_10 be two everywhere differentiable functions. Suppose that formula_11 for every real number "x". Then there exists a real number formula_0 such that formula_12 for every real number "x".
To prove this, notice that formula_13 So formula_14 can be replaced by formula_15 and formula_16 by the constant function formula_17, reducing the goal to proving that an everywhere differentiable function whose derivative is always zero must be constant:
Choose a real number formula_18 and let formula_19 For any "x", the fundamental theorem of calculus, together with the assumption that the derivative of formula_14 vanishes, implies that
formula_20
thereby showing that formula_14 is a constant function.
Two facts are crucial in this proof. First, the real line is connected. If the real line were not connected, one would not always be able to integrate from our fixed "a" to any given "x". For example, if one were to ask for functions defined on the union of intervals [0,1] and [2,3], and if "a" were 0, then it would not be possible to integrate from 0 to 3, because the function is not defined between 1 and 2. Here, there will be "two" constants, one for each connected component of the domain. In general, by replacing constants with locally constant functions, one can extend this theorem to disconnected domains. For example, there are two constants of integration for formula_21, and infinitely many for formula_22, so for example, the general form for the integral of 1/"x" is:
formula_23
Second, formula_14 and formula_16 were assumed to be everywhere differentiable. If formula_14 and formula_16 are not differentiable at even one point, then the theorem might fail. As an example, let formula_3 be the Heaviside step function, which is zero for negative values of "x" and one for non-negative values of "x", and let formula_24 Then the derivative of formula_14 is zero where it is defined, and the derivative of formula_16 is always zero. Yet it is clear that formula_14 and formula_16 do not differ by a constant. Even if it is assumed that formula_14 and formula_16 are everywhere continuous and almost everywhere differentiable, the theorem still fails. As an example, take formula_14 to be the Cantor function and again let formula_25
It turns out that adding and subtracting constants is the only flexibility available in finding different antiderivatives of the same function. That is, all antiderivatives are the same up to a constant. To express this fact for formula_26 one can write:
formula_27
where formula_0 is constant of integration. It is easily determined that all of the following functions are antiderivatives of formula_28:
formula_29
Significance.
The inclusion of the constant of integration is necessary in some, but not all, circumstances. For instance, when evaluating definite integrals using the fundamental theorem of calculus, the constant of integration can be ignored as it will always cancel with itself.
However, different methods of computation of indefinite integrals can result in multiple resulting antiderivatives, each implicitly containing different constants of integration, and no particular option may be considered simplest. For example, formula_30 can be integrated in at least three different ways.
formula_31
Additionally, omission of the constant, or setting it to zero, may make it impossible to deal with a number of problems, such as those with initial value conditions. A general solution containing the arbitrary constant is often necessary to identify the correct particular solution. For example, to obtain the antiderivative of formula_28 that has the value 400 at "x" = π, only one value of formula_0 will work (in this case formula_32).
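The following SymPy sketch (an added illustration; it assumes SymPy is installed) verifies that the three antiderivatives above differ only by constants and solves the initial value condition just described.

```python
# Antiderivatives of the same function differ by a constant, and an initial
# condition pins down the constant of integration.  Assumes SymPy is installed.
import sympy as sp

x, C = sp.symbols('x C')

# Three antiderivatives of 2*sin(x)*cos(x); their pairwise differences are constants.
F1 = sp.sin(x)**2
F2 = -sp.cos(x)**2
F3 = -sp.cos(2*x) / 2
print(sp.simplify(F1 - F2))   # 1
print(sp.simplify(F1 - F3))   # 1/2

# Initial value problem: the antiderivative of cos(x) with value 400 at x = pi.
F = sp.sin(x) + C
print(sp.solve(sp.Eq(F.subs(x, sp.pi), 400), C))   # [400], so F(x) = sin(x) + 400
```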
The constant of integration also implicitly or explicitly appears in the language of differential equations. Almost all differential equations will have many solutions, and each constant represents the unique solution of a well-posed initial value problem.
An additional justification comes from abstract algebra. The space of all (suitable) real-valued functions on the real numbers is a vector space, and the differential operator formula_33 is a linear operator. The operator formula_33 maps a function to zero if and only if that function is constant. Consequently, the kernel of formula_33 is the space of all constant functions. The process of indefinite integration amounts to finding a pre-image of a given function. There is no canonical pre-image for a given function, but the set of all such pre-images forms a coset. Choosing a constant is the same as choosing an element of the coset. In this context, solving an initial value problem is interpreted as lying in the hyperplane given by the initial conditions.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "f(x)"
},
{
"math_id": 3,
"text": "F(x)"
},
{
"math_id": 4,
"text": "f(x),"
},
{
"math_id": 5,
"text": "F(x) + C,"
},
{
"math_id": 6,
"text": "F(x) + C"
},
{
"math_id": 7,
"text": "\\int f(x) \\, dx = F(x) + C,"
},
{
"math_id": 8,
"text": "\\frac{d}{dx}(F(x) + C) = \\frac{d}{dx}F(x) + \\frac{d}{dx}C = F'(x) = f(x) ."
},
{
"math_id": 9,
"text": "F:\\R\\to\\R"
},
{
"math_id": 10,
"text": "G:\\R\\to\\R"
},
{
"math_id": 11,
"text": "F\\,'(x) = G\\,'(x)"
},
{
"math_id": 12,
"text": "F(x) - G(x) = C"
},
{
"math_id": 13,
"text": "[F(x) - G(x)]' = 0 ."
},
{
"math_id": 14,
"text": "F"
},
{
"math_id": 15,
"text": "F-G,"
},
{
"math_id": 16,
"text": "G"
},
{
"math_id": 17,
"text": "0,"
},
{
"math_id": 18,
"text": "a,"
},
{
"math_id": 19,
"text": "C = F(a) ."
},
{
"math_id": 20,
"text": "\\begin{align}\n& 0= \\int_a^x F'(t)\\,dt\\\\\n& 0= F(x)-F(a) \\\\\n& 0= F(x)-C \\\\ \n& F(x)=C \\\\\n\\end{align}"
},
{
"math_id": 21,
"text": "\\int dx/x"
},
{
"math_id": 22,
"text": "\\int \\tan x\\,dx"
},
{
"math_id": 23,
"text": "\\int \\frac{dx}{x} = \\begin{cases}\n\\ln \\left|x \\right| + C^- & x < 0\\\\\n\\ln \\left|x \\right| + C^+ & x > 0\n\\end{cases}"
},
{
"math_id": 24,
"text": "G(x) = 0 ."
},
{
"math_id": 25,
"text": "G = 0 ."
},
{
"math_id": 26,
"text": "\\cos(x),"
},
{
"math_id": 27,
"text": "\\int \\cos(x)\\,dx = \\sin(x) + C,"
},
{
"math_id": 28,
"text": "\\cos(x)"
},
{
"math_id": 29,
"text": "\\begin{align}\n\\frac{d}{dx}[\\sin(x) + C]\n&= \\frac{d}{dx} \\sin(x) + \\frac{d}{dx}C \\\\\n&= \\cos(x) + 0 \\\\\n&= \\cos(x)\n\\end{align}"
},
{
"math_id": 30,
"text": "2\\sin(x)\\cos(x)"
},
{
"math_id": 31,
"text": "\\begin{alignat}{4}\n\\int 2\\sin(x)\\cos(x)\\,dx =&& \\sin^2(x) + C =&& -\\cos^2(x) + 1 + C =&& -\\frac 1 2 \\cos(2x) + \\frac 1 2 + C\\\\\n\\int 2\\sin(x)\\cos(x)\\,dx =&& -\\cos^2(x) + C =&& \\sin^2(x) - 1 + C =&& -\\frac 1 2 \\cos(2x) - \\frac 1 2 + C\\\\\n\\int 2\\sin(x)\\cos(x)\\,dx =&& -\\frac 1 2 \\cos(2x) + C =&& \\sin^2(x) + C =&& -\\cos^2(x) + C \\\\\n\\end{alignat}"
},
{
"math_id": 32,
"text": "C = 400"
},
{
"math_id": 33,
"text": "\\frac{d}{dx}"
}
] |
https://en.wikipedia.org/wiki?curid=147939
|
14795246
|
Tate's algorithm
|
In the theory of elliptic curves, Tate's algorithm takes as input an integral model of an elliptic curve "E" over formula_0, or more generally an algebraic number field, and a prime or prime ideal "p". It returns the exponent "f""p" of "p" in the conductor of "E", the type of reduction at "p", the local index
formula_1
where formula_2 is the group of formula_3-points
whose reduction mod "p" is a non-singular point. Also, the algorithm determines whether or not the given integral model is minimal at "p", and, if not, returns an integral model with integral coefficients for which the valuation at "p" of the discriminant is minimal.
Tate's algorithm also gives the structure of the singular fibers given by the Kodaira symbol or Néron symbol, for which, see elliptic surfaces: in turn this determines the exponent "f""p" of the conductor "E".
Tate's algorithm can be greatly simplified if the characteristic of the residue class field is not 2 or 3; in this case the type and "c" and "f" can be read off from the valuations of "j" and Δ (defined below).
Tate's algorithm was introduced by John Tate (1975) as an improvement of the description of the Néron model of an elliptic curve by Néron (1964).
Notation.
Assume that all the coefficients of the equation of the curve lie in a complete discrete valuation ring "R" with perfect residue field "K" and maximal ideal generated by a prime π.
The elliptic curve is given by the equation
formula_4
Define:
formula_5 the formula_6-adic valuation of formula_7, that is, the exponent of formula_6 in the prime factorization of formula_7, or infinity if formula_8
formula_9
formula_10
formula_11
formula_12
formula_13
formula_14
formula_15
formula_16
formula_17
formula_18.
If π^3 does not divide "b"6 then the type is IV, "c"=3 if formula_19 has two roots in K and 1 if it has two roots outside of K, and "f"=v(Δ)−2.
formula_20
If formula_21 has 3 distinct roots modulo π then the type is I0*, "f"=v(Δ)−4, and "c" is 1+(number of roots of "P" in "K").
formula_22.
If formula_23 has two distinct roots modulo π then the type is IV*, "f"=v(Δ)−6, and "c" is 3 if the roots are in "K", 1 otherwise.
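As an added illustrative sketch (not part of Tate's paper or of any named implementation), the following Python function computes the quantities defined in the Notation section from the Weierstrass coefficients, using exact rational arithmetic; the sample curve and its expected discriminant of −11 are an assumption made for the example.

```python
# Compute b2, b4, b6, b8, c4, c6, the discriminant and the j-invariant from the
# Weierstrass coefficients a1, a2, a3, a4, a6, with exact rational arithmetic.
from fractions import Fraction

def curve_invariants(a1, a2, a3, a4, a6):
    a1, a2, a3, a4, a6 = map(Fraction, (a1, a2, a3, a4, a6))
    b2 = a1**2 + 4*a2
    b4 = a1*a3 + 2*a4
    b6 = a3**2 + 4*a6
    b8 = a1**2*a6 - a1*a3*a4 + 4*a2*a6 + a2*a3**2 - a4**2
    c4 = b2**2 - 24*b4
    c6 = -b2**3 + 36*b2*b4 - 216*b6
    disc = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
    j = c4**3 / disc if disc != 0 else None
    return {'b2': b2, 'b4': b4, 'b6': b6, 'b8': b8,
            'c4': c4, 'c6': c6, 'disc': disc, 'j': j}

# Example curve y^2 + y = x^3 - x^2, i.e. (a1, a2, a3, a4, a6) = (0, -1, 1, 0, 0);
# the computed discriminant should be -11.
print(curve_invariants(0, -1, 1, 0, 0))
```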
Implementations.
The algorithm is implemented for algebraic number fields in the PARI/GP computer algebra system, available through the function elllocalred.
|
[
{
"math_id": 0,
"text": "\\mathbb{Q}"
},
{
"math_id": 1,
"text": "c_p=[E(\\mathbb{Q}_p):E^0(\\mathbb{Q}_p)],"
},
{
"math_id": 2,
"text": "E^0(\\mathbb{Q}_p)"
},
{
"math_id": 3,
"text": "\\mathbb{Q}_p"
},
{
"math_id": 4,
"text": "y^2+a_1xy+a_3y = x^3+a_2x^2+a_4x+a_6."
},
{
"math_id": 5,
"text": "v(\\Delta)="
},
{
"math_id": 6,
"text": "\\pi"
},
{
"math_id": 7,
"text": "\\Delta"
},
{
"math_id": 8,
"text": "\\Delta = 0"
},
{
"math_id": 9,
"text": "a_{i,m}=a_i/\\pi^m"
},
{
"math_id": 10,
"text": "b_2=a_1^2+4a_2"
},
{
"math_id": 11,
"text": "b_4=a_1a_3+2a_4^{}"
},
{
"math_id": 12,
"text": "b_6=a_3^2+4a_6"
},
{
"math_id": 13,
"text": "b_8=a_1^2a_6-a_1a_3a_4+4a_2a_6+a_2a_3^2-a_4^2"
},
{
"math_id": 14,
"text": "c_4=b_2^2-24b_4"
},
{
"math_id": 15,
"text": "c_6=-b_2^3+36b_2b_4-216b_6"
},
{
"math_id": 16,
"text": "\\Delta=-b_2^2b_8-8b_4^3-27b_6^2+9b_2b_4b_6"
},
{
"math_id": 17,
"text": "j=c_4^3/\\Delta."
},
{
"math_id": 18,
"text": "Q_1(Y) = Y^2+a_{3,1}Y-a_{6,2}."
},
{
"math_id": 19,
"text": "Q_1(Y)"
},
{
"math_id": 20,
"text": "P(T) = T^3+a_{2,1}T^2+a_{4,2}T+a_{6,3}."
},
{
"math_id": 21,
"text": "P(T)"
},
{
"math_id": 22,
"text": "Q_2(Y) = Y^2+a_{3,2}Y-a_{6,4}."
},
{
"math_id": 23,
"text": "Q_2(Y)"
}
] |
https://en.wikipedia.org/wiki?curid=14795246
|
14797458
|
Compound of two snub cubes
|
Polyhedral compound
This uniform polyhedron compound is a composition of the 2 enantiomers of the snub cube. As a holosnub, it is represented by Schläfli symbol βr{4,3} and by a Coxeter diagram.
The vertex arrangement of this compound is shared by a convex nonuniform truncated cuboctahedron, having rectangular faces, alongside irregular hexagons and octagons, each alternating with two edge lengths.
Together with its convex hull, it represents the snub cube-first projection of the nonuniform snub cubic antiprism.
Cartesian coordinates.
Cartesian coordinates for the vertices are all the permutations of
(±1, ±"ξ", ±1/"ξ")
where "ξ" is the real solution to
formula_0
which can be written
formula_1
or approximately 0.543689. ξ is the reciprocal of the tribonacci constant.
Equivalently, the coordinates can be written in terms of the tribonacci constant, "t", just as for the snub cube:
(±1, ±"t", ±1/"t")
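For a quick numerical cross-check (an added sketch using only the Python standard library), the closed-form value of "ξ" can be compared with a bisection root of the cubic and with the reciprocal of the tribonacci constant.

```python
# Check that the closed form for xi matches the real root of
# x^3 + x^2 + x - 1 = 0 and equals 1/t, where t is the tribonacci constant
# (the real root of t^3 - t^2 - t - 1 = 0).  Standard library only.
from math import sqrt

# Closed form quoted above.
xi_closed = ((17 + 3 * sqrt(33)) ** (1 / 3)
             - (-17 + 3 * sqrt(33)) ** (1 / 3) - 1) / 3

def bisect(f, lo, hi, steps=80):
    """Simple bisection for a root of f on [lo, hi] with f(lo) < 0 < f(hi)."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

xi_root = bisect(lambda x: x**3 + x**2 + x - 1, 0.0, 1.0)
t = bisect(lambda t: t**3 - t**2 - t - 1, 1.0, 2.0)   # tribonacci constant

print(xi_closed, xi_root, 1 / t)   # all approximately 0.543689
```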
Truncated cuboctahedron.
This compound can be seen as the union of the two chiral alternations of a truncated cuboctahedron:
|
[
{
"math_id": 0,
"text": "\\xi^3+\\xi^2+\\xi=1, \\,"
},
{
"math_id": 1,
"text": "\\xi = \\frac{1}{3}\\left(\\sqrt[3]{17+3\\sqrt{33}} - \\sqrt[3]{-17+3\\sqrt{33}} - 1\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=14797458
|
1480325
|
Normal family
|
In mathematics, with special application to complex analysis, a "normal family" is a pre-compact subset of the space of continuous functions. Informally, this means that the functions in the family are not widely spread out, but rather stick together in a somewhat "clustered" manner. Note that a compact family of continuous functions is automatically a normal family.
Sometimes, if each function in a normal family "F" satisfies a particular property (e.g. is holomorphic),
then the property also holds for each limit point of the set "F".
More formally, let "X" and "Y" be topological spaces. The set of continuous functions formula_0 has a natural topology called the compact-open topology. A normal family is a pre-compact subset with respect to this topology.
If "Y" is a metric space, then the compact-open topology is equivalent to the topology of compact convergence, and we obtain a definition which is closer to the classical one: A collection "F" of continuous functions is called a normal family
if every sequence of functions in "F" contains a subsequence which converges uniformly on compact subsets of "X" to a continuous function from "X" to "Y". That is, for every sequence of functions in "F", there is a subsequence formula_1 and a continuous function formula_2 from "X" to "Y" such that the following holds for every compact subset "K" contained in "X":
formula_3
where formula_4 is the metric of "Y".
Normal families of holomorphic functions.
The concept arose in complex analysis, that is the study of holomorphic functions. In this case, "X" is an open subset of the complex plane, "Y" is the complex plane, and the metric on "Y" is given by formula_5. As a consequence of Cauchy's integral theorem, a sequence of holomorphic functions that converges uniformly on compact sets must
converge to a holomorphic function. That is, each limit point of a normal family is holomorphic.
Normal families of holomorphic functions provide the quickest way of proving the Riemann mapping theorem.
More generally, if the spaces "X" and "Y" are Riemann surfaces, and "Y" is equipped with the metric coming from the uniformization theorem, then each limit point of a normal family of holomorphic functions formula_6 is also holomorphic.
For example, if "Y" is the Riemann sphere, then the metric of uniformization is the spherical distance. In this case, a holomorphic function from "X" to "Y" is called a meromorphic function, and so each limit point of a normal family of meromorphic functions is a meromorphic function.
Criteria.
In the classical context of holomorphic functions, there are several criteria that can be used to establish that a family is normal:
Montel's theorem states that a family of locally bounded holomorphic functions is normal. The Montel–Carathéodory theorem states that the family of meromorphic functions that omit three distinct values in the extended complex plane is normal. For a family of holomorphic functions, this reduces to requiring that two values be omitted, by viewing each function as a meromorphic function omitting the value infinity.
Marty's theorem
provides a criterion equivalent to normality in the context of meromorphic functions: A family formula_7 of meromorphic functions from a domain formula_8 to the complex plane is a normal family if and only if for each compact subset "K" of "U" there exists a constant "C" so that for each formula_9 and each "z" in "K" we have
formula_10
Indeed, the expression on the left is the formula for the pull-back of the arclength element on the Riemann sphere to the complex plane via the inverse of stereographic projection.
History.
Paul Montel first coined the term "normal family" in 1911.
Because the concept of a normal family has continually been very important to complex analysis, Montel's terminology is still used to this day, even though from a modern perspective, the phrase "pre-compact subset" might be preferred by some mathematicians. Note that though the notion of compact open topology generalizes and clarifies the concept, in many applications the original definition is more practical.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
"This article incorporates material from normal family on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "f: X \\to Y"
},
{
"math_id": 1,
"text": "f_n(x)"
},
{
"math_id": 2,
"text": "f(x)"
},
{
"math_id": 3,
"text": "\\lim_{n\\rightarrow\\infty} \\sup_{x\\in K} d_Y(f_n(x),f(x)) = 0"
},
{
"math_id": 4,
"text": "d_Y"
},
{
"math_id": 5,
"text": "d_Y(y_1,y_2) = |y_1-y_2|"
},
{
"math_id": 6,
"text": " f: X \\to Y"
},
{
"math_id": 7,
"text": "F"
},
{
"math_id": 8,
"text": " U \\subset \\mathbb{C} "
},
{
"math_id": 9,
"text": " f \\in F "
},
{
"math_id": 10,
"text": " \\frac{2|f'(z)|}{1 + |f(z)|^2} \\leq C. "
}
] |
https://en.wikipedia.org/wiki?curid=1480325
|
148048
|
Fire hose
|
Flexible tube used for delivering water or foam at high pressure, to fight fires
A fire hose (or firehose) is a high-pressure hose that carries water or other fire retardant (such as foam) to a fire to extinguish it. Outdoors, it attaches either to a fire engine, fire hydrant, or a portable fire pump. Indoors, it can permanently attach to a building's standpipe or plumbing system.
The usual working pressure of a firehose depends on its type and use, while per the NFPA 1961 Fire Hose Standard, its bursting pressure is in excess of 110 bar (11,000 kPa; 1,600 psi).
Hose is one of the basic, essential pieces of fire-fighting equipment. It is necessary to convey water either from an open water supply, or pressurized water supply. Hoses are divided into two categories, based on their use: suction hose, and delivery hose.
After use, a fire hose is usually hung to dry, because standing water that remains in a hose for a long time can deteriorate the material and render it unreliable or unusable. Therefore, the typical fire station often has a high structure to accommodate the length of a hose for such preventive maintenance, known as a hose tower.
On occasion, fire hoses are used for crowd control (see also water cannon), including by Bull Connor in the Birmingham campaign against protesters during the Civil Rights Movement in 1963.
History.
Until the mid-19th century, most fires were fought by water transported to the scene in buckets. Original hand pumpers discharged their water through a small pipe or monitor attached to the top of the pump tub. It was not until the late 1860s that hoses became widely available to convey water more easily from the hand pumps, and later steam pumpers, to the fire.
In Amsterdam in the Dutch Republic, the Superintendent of the Fire Brigade, Jan van der Heyden, and his son Nicholaas took firefighting to its next step with the fashioning of the first fire hose in 1673. These lengths of leather were sewn together like a boot leg. Even with the limitations of pressure, the attachment of the hose to the gooseneck nozzle allowed closer approaches and more accurate water application. Van der Heyden was also credited with an early version of a suction hose using wire to keep it rigid. In the United States, the fire hose was introduced in Philadelphia in 1794. This canvas hose proved insufficiently durable, and sewn leather hose was then used. The sewn leather hose tended to burst, so a hose fabricated of leather fastened together with copper rivets and washers was invented by members of Philadelphia's Humane Hose Company.
Around 1890, unlined fire hoses made of circular woven linen yarns began to replace leather hoses. They were certainly much lighter. As the hose fibers, made of flax, became wet, they swelled up and tightened the weave, causing the hose to become watertight. Unlined hoses, because of their lack of durability, were rapidly replaced with rubber hoses in municipal fire service use. They continued to be used on interior hose lines and hose rack until the 1960s to 1980s. In January 1981, the Occupational Safety and Health Administration revised their standards such that unlined hoses were to no longer be installed for interior hose lines.
Following the invention of the vulcanization process as a means of curing raw soft rubber into a harder, more useful product, the fire service slowly made the transition from bulky and unreliable leather hose to the unlined linen hose, then to a multi-layer, rubber lined and coated hose with interior fabric reinforcement. This rubber hose was as bulky, heavy, and stiff as a leather hose, but was not prone to leaking. It also proved more durable than unlined linen hose. Its wrapped construction resembled some hoses used today by industry, for example, fuel delivery hoses used to service airliners.
Modern usage.
Modern fire hoses use a variety of natural and synthetic fabrics and elastomers in their construction. These materials allow the hoses to be stored wet without rotting and to resist the damaging effects of exposure to sunlight and chemicals. Modern hoses are lighter weight than older designs, which has reduced the physical strain on firefighters. Various devices are becoming more prevalent to remove air from the interior of fire hose, commonly referred to as fire hose vacuums. This makes hoses smaller and somewhat rigid, allowing more hose to be packed into the same compartment on a fire-fighting apparatus.
Suction Hose
Suction hose is laid down on the suction side of the pump (inlet), where the water passing through it is at a pressure either below or above that of the atmosphere. It is designed to resist internal and external pressure. It should have sufficient strength to withstand the pressure of external air when a vacuum has formed inside. It should also be strong enough to resist hydrant pressure. Usually an appliance has to carry about 10 m of suction hose in either 3 m or 2.5 m lengths. The diameter of the hose depends on the capacity of the pump; three standard sizes (75 mm, 100 mm, and 140 mm) are generally used.
Partially Embedded suction hose
Partially Embedded suction hose is usually made of a tough rubber lining embedded fully as a spiral, with tempered, galvanized steel wire. This embedding is arranged so that it provides a full waterway and a relatively smooth internal surface. The wall of the hose is prepared from several layers of canvas and rubber lining so that turns of each one lie midway between turns of the other. The complete wall is consolidated by vulcanizing.
Fully embedded (smooth bore) suction hose
Fully embedded (smooth bore) suction hose has a thick, internal rubber lining embedded fully with a spiral of wire. Suction hose should be constructed to withstand a pressure of 10.5 bar.
Delivery Hose
Delivery hose is laid down from the delivery side of the pump (outlet), and the water passing through it is always at a pressure greater than that of the atmosphere. Delivery hose is divided into two categories: percolating hose, and non-percolating hose.
Percolating hose
Percolating hose is used mainly to fight forest fires. The seepage of water through the hose protects the hose against damage by glowing embers falling onto it or the hose being laid on hot ground.
Non-percolating hose
In fire services, non-percolating hoses are generally used for delivering water. Non-percolating hose consists of a reinforced jacket made from polyester or nylon yarns. This type of hose has an inner lining of vulcanized rubber fixed to the jacket by an adhesive. The use of non-percolating hose is recommended in certain applications, as friction losses will be much less than that of percolating hoses.
Lined hoses are divided into three types:
Type 1: Lined hose without external jacket treatment:
Such hose absorbs liquid into reinforcement jacket and requires drying after use.
Type 2: Coated lined hose:
This has a thin, elastic outer coating that reduces liquid absorption into the jacket and may slightly improve abrasion resistance.
Type 3: Covered lined hose:
Covered lined hose has a thicker elastic cover that prevents liquid absorption but also adds substantial improvements to abrasion and heat resistance.
Types.
There are several types of hose designed specifically for the fire service. Those designed to operate under positive pressure are called discharge hoses; they include: attack hose, supply hose, relay hose, forestry hose, and booster hose. Those designed to operate under negative pressure are called suction hoses.
Another suction hose, called a soft-suction hose, is actually a short length of fabric-covered, flexible discharge hose used to connect the fire pumper suction inlet with a pressurized hydrant. It is not a true suction hose, since it cannot withstand negative pressure.
Raw materials.
In the past, cotton was the most common fiber used in fire hoses, but most modern hoses use synthetic fiber like polyester or nylon filament. The synthetic fibers provide additional strength and better resistance to abrasion. The fiber yarns may be dyed various colors, or may be left natural.
Coatings and liners use synthetic rubbers, which provide varying degrees of resistance to chemicals, temperature, ozone, ultraviolet (UV) radiation, mold, mildew, and abrasion. Different coatings and liners are chosen for specific applications.
Manufacturing process.
Fire hose is usually manufactured in a plant that specializes in providing hose products to municipal, industrial, and forestry fire departments. Here is a typical sequence of operations used to manufacture a double jacket, rubber-lined fire hose.
In addition to the final pressure testing, each hose is subjected to a variety of inspections and tests at each stage of manufacture. Some of these inspections and tests include visual inspections, ozone resistance tests, accelerated aging tests, adhesion tests of the bond between the liner and inner jacket, determination of the amount of hose twist under pressure, dimensional checks, and many more.
Future.
The trend in fire hose construction over the last 20 years has been to use lighter, stronger, lower maintenance materials.
This trend is expected to continue in the future as new materials and manufacturing methods evolve. One result of this trend has been the introduction of lightweight supply hoses in diameters never possible before. Hoses of larger diameter and with higher pressure ratings than previously available are now produced. These hoses are expected to find applications in large-scale industrial firefighting, as well as in disaster relief efforts and military operations.
Fire hoses come in a variety of diameters. Lightweight, single-jacket construction, 3⁄4, 1, and 1 1⁄2 inch diameter hose lines are commonly used in wildfire suppression applications. Heavy-duty, double-jacket, 1 1⁄2, 1 3⁄4, 2, 2 1⁄2, and on occasion 3-inch lines are used for structural applications. Supply lines, used to supply firefighting apparatus with water, are frequently found in 3 1⁄2, 4, 4 1⁄2, 5 and 6-inch diameters.
There are several systems available for repairing holes in fire hoses, the most common being the Stenor Merlin, which offer patching materials for Type 1, 2, and 3 hoses. The patches come in two different sizes and two different colours (red and yellow). The patches are vulcanised onto the hose and usually last the lifetime of the hose.
Connections.
Hose connections are often made from brass, though hardened aluminum connections are also specified. In countries which use quick-action couplers for attack hoses, forged aluminum has been used for decades because the weight penalty of brass for Storz couplers is higher than for threaded connections.
Threaded hose couplings are used in the United States and Canada. Each of these countries uses a different kind of threading. Many other countries have standardized on quick-action couplings, which do not have a male and female end, but connect either way. Again, there is no international standard: In Central Europe, the Storz connector is used by several countries. Belgium and France use the Guillemin connector. Spain, Sweden and Norway each have their own quick coupling. Countries of the former Soviet Union area use the Gost coupling. Baarle-Nassau and Baarle-Hertog, two municipalities on the Belgian-Dutch border, share a common international fire department. The fire trucks have been equipped with adapters to allow them to work with both Storz and Guillemin connectors.
In the United States, a growing number of departments use Storz couplers for large-diameter supply hose, or other quick-action couplings. Because the usage is not standardized, mutual aid apparatus might have a compartment on their trucks dedicated to a multitude of hose adapters.
The different styles of hose couplings have influenced fireground tactics. Apparatus in the United States features "preconnects": Hose for a certain task is put into an open compartment, and each attack hose is connected to the pump. Time-consuming multiple connections or problems with male and female ends are avoided by such tactics. In countries where Storz (or similar) connectors have been used for attack hoses for generations, firefighters drop a manifold at the border of the danger zone, which is connected to the apparatus by a single supply line. As a result, the tiny item "hose coupler" has also influenced the looks and design of fire apparatus.
Forces on fire hoses and nozzles.
Fire hoses must sustain high tensile forces during operation. These arise from both pressure and flow. The magnitude of the axial tension in a fire hose is
formula_0
where p is pressure in the hose relative to the ambient pressure, A1 is the hose cross-sectional area, "ρ" is the water density, and Q is the volumetric flow rate. This tension is the same regardless of the bend angle of the hose.
When a nozzle is connected to a hose and water is ejected, the nozzle must be restrained by an anchor such as the hands of a firefighter. This anchor must apply a force in the direction of the spray, which is called the nozzle reaction. The magnitude of the nozzle reaction is the jet momentum flow rate,
formula_1
where A2 is the cross-sectional area of the nozzle.
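As an added worked example (the pressure, flow rate, and diameters below are illustrative values, not figures from this article), the following Python snippet evaluates both formulas for a 65 mm hose feeding a 25 mm nozzle tip.

```python
# Axial hose tension T = p*A1 + rho*Q**2/A1 and nozzle reaction R = rho*Q**2/A2
# for illustrative values: 7 bar gauge pressure, 500 L/min flow, 65 mm hose,
# 25 mm nozzle tip.  Standard library only.
from math import pi

rho = 1000.0                  # water density, kg/m^3
p = 7.0e5                     # gauge pressure in the hose, Pa (7 bar)
Q = 500 / 60000.0             # flow rate: 500 L/min converted to m^3/s
d_hose, d_tip = 0.065, 0.025  # hose and nozzle-tip diameters, m

A1 = pi * d_hose**2 / 4       # hose cross-sectional area, m^2
A2 = pi * d_tip**2 / 4        # nozzle-tip cross-sectional area, m^2

T = p * A1 + rho * Q**2 / A1  # axial tension in the hose, N
R = rho * Q**2 / A2           # nozzle reaction force, N

print(f"hose tension    T = {T:7.1f} N")
print(f"nozzle reaction R = {R:7.1f} N")
```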
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T = p A_1 + \\rho Q^2 / A_1 ,"
},
{
"math_id": 1,
"text": "R = \\rho Q^2 / A_2 ,"
}
] |
https://en.wikipedia.org/wiki?curid=148048
|
1480484
|
Menelaus's theorem
|
Geometric relation on line segments formed by a line cutting through a triangle
In Euclidean geometry, Menelaus's theorem, named for Menelaus of Alexandria, is a proposition about triangles in plane geometry. Suppose we have a triangle △"ABC", and a transversal line that crosses BC, AC, AB at points D, E, F respectively, with D, E, F distinct from A, B, C. A weak version of the theorem states that
formula_0
where "| |" denotes absolute value (i.e., all segment lengths are positive).
The theorem can be strengthened to a statement about signed lengths of segments, which provides some additional information about the relative order of collinear points. Here, the length AB is taken to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line; for example, formula_1 is defined as having positive value when F is between A and B and negative otherwise. The signed version of Menelaus's theorem states
formula_2
Equivalently,
formula_3
Some authors organize the factors differently and obtain the seemingly different relation
formula_4
but as each of these factors is the negative of the corresponding factor above, the relation is seen to be the same.
The converse is also true: If points D, E, F are chosen on BC, AC, AB respectively so that
formula_5
then D, E, F are collinear. The converse is often included as part of the theorem. (Note that the converse of the weaker, unsigned statement is not necessarily true.)
The theorem is very similar to Ceva's theorem in that their equations differ only in sign. By re-writing each in terms of cross-ratios, the two theorems may be seen as projective duals.
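As an added numerical sanity check (not part of the original exposition; the coordinates are arbitrary choices), the following plain Python snippet picks a triangle and a transversal, computes the intersection points D, E, F on the extended sides, and verifies that the product of the signed ratios is −1.

```python
# Verify (AF/FB) * (BD/DC) * (CE/EA) = -1 for an arbitrary triangle and an
# arbitrary transversal line.  Points are 2D tuples; standard library only.

def intersect(p1, p2, q1, q2):
    """Intersection of line p1p2 with line q1q2 (assumed non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, q1, q2
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def signed_ratio(x, z, y):
    """Signed ratio XZ/ZY for collinear points X, Z, Y (Z distinct from Y)."""
    xz = (z[0] - x[0], z[1] - x[1])
    zy = (y[0] - z[0], y[1] - z[1])
    return (xz[0] * zy[0] + xz[1] * zy[1]) / (zy[0] ** 2 + zy[1] ** 2)

A, B, C = (0.0, 0.0), (6.0, 1.0), (2.0, 5.0)   # an arbitrary triangle
P, Q = (-1.0, 2.0), (7.0, 3.0)                 # two points on the transversal

D = intersect(B, C, P, Q)   # transversal meets line BC at D
E = intersect(C, A, P, Q)   # transversal meets line CA at E
F = intersect(A, B, P, Q)   # transversal meets line AB at F

product = signed_ratio(A, F, B) * signed_ratio(B, D, C) * signed_ratio(C, E, A)
print(product)              # approximately -1
```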
Proofs.
A standard proof.
First, the sign of the left-hand side will be negative since either all three of the ratios are negative, the case where the line DEF misses the triangle (lower diagram), or one is negative and the other two are positive, the case where DEF crosses two sides of the triangle. (See Pasch's axiom.)
To check the magnitude, construct perpendiculars from A, B, C to the line DEF and let their lengths be a, b, c respectively. Then by similar triangles it follows that
formula_6
Therefore,
formula_7
For a simpler, if less symmetrical, way to check the magnitude, draw CK parallel to AB, where the line DEF meets CK at K. Then by similar triangles
formula_8
and the result follows by eliminating CK from these equations.
The converse follows as a corollary. Let D, E, F be given on the lines BC, AC, AB so that the equation holds. Let F' be the point where DE crosses AB. Then by the theorem, the equation also holds for D, E, F'. Comparing the two,
formula_9
But at most one point can cut a segment in a given ratio so "F" = "F'."
A proof using homotheties.
The following proof uses only notions of affine geometry, notably homotheties.
Whether or not D, E, F are collinear, there are three homotheties with centers D, E, F that respectively send B to C, C to A, and A to B. The composition of the three then is an element of the group of homothety-translations that fixes B, so it is a homothety with center B, possibly with ratio 1 (in which case it is the identity). This composition fixes the line DE if and only if F is collinear with D, E (since the first two homotheties certainly fix DE, and the third does so only if F lies on DE). Therefore D, E, F are collinear if and only if this composition is the identity, which means that the magnitude of the product of the three ratios is 1:
formula_10
which is equivalent to the given equation.
History.
It is uncertain who actually discovered the theorem; however, the oldest extant exposition appears in "Spherics" by Menelaus. In this book, the plane version of the theorem is used as a lemma to prove a spherical version of the theorem.
In Almagest, Ptolemy applies the theorem to a number of problems in spherical astronomy. During the Islamic Golden Age, Muslim scholars devoted a number of works to the study of Menelaus's theorem, which they referred to as "the proposition on the secants" ("shakl al-qatta"'). The complete quadrilateral was called the "figure of secants" in their terminology. Al-Biruni's work, "The Keys of Astronomy", lists a number of those works, which can be classified into studies as part of commentaries on Ptolemy's "Almagest", as in the works of al-Nayrizi and al-Khazin where each demonstrated particular cases of Menelaus's theorem that led to the sine rule, or works composed as independent treatises.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\left|\\frac{\\overline{AF}}{\\overline{FB}}\\right| \\times \\left|\\frac{\\overline{BD}}{\\overline{DC}}\\right| \\times \\left|\\frac{\\overline{CE}}{\\overline{EA}}\\right| = 1,"
},
{
"math_id": 1,
"text": "\\tfrac{\\overline{AF}}{\\overline{FB}}"
},
{
"math_id": 2,
"text": "\\frac{\\overline{AF}}{\\overline{FB}} \\times \\frac{\\overline{BD}}{\\overline{DC}} \\times \\frac{\\overline{CE}}{\\overline{EA}} = - 1."
},
{
"math_id": 3,
"text": "\\overline{AF} \\times \\overline{BD} \\times \\overline{CE} = - \\overline{FB} \\times \\overline{DC} \\times \\overline{EA}."
},
{
"math_id": 4,
"text": "\\frac{\\overline{FA}}{\\overline{FB}} \\times \\frac{\\overline{DB}}{\\overline{DC}} \\times \\frac{\\overline{EC}}{\\overline{EA}} = 1,"
},
{
"math_id": 5,
"text": "\\frac{\\overline{AF}}{\\overline{FB}} \\times \\frac{\\overline{BD}}{\\overline{DC}} \\times \\frac{\\overline{CE}}{\\overline{EA}} = -1,"
},
{
"math_id": 6,
"text": "\\left|\\frac{\\overline{AF}}{\\overline{FB}}\\right| = \\left|\\frac{a}{b}\\right|, \\quad \\left|\\frac{\\overline{BD}}{\\overline{DC}}\\right| = \\left|\\frac{b}{c}\\right|, \\quad \\left|\\frac{\\overline{CE}}{\\overline{EA}}\\right| = \\left|\\frac{c}{a}\\right|."
},
{
"math_id": 7,
"text": "\\left|\\frac{\\overline{AF}}{\\overline{FB}}\\right| \\times \\left|\\frac{\\overline{BD}}{\\overline{DC}}\\right| \\times \\left|\\frac{\\overline{CE}}{\\overline{EA}}\\right| = \\left| \\frac{a}{b} \\times \\frac{b}{c} \\times \\frac{c}{a} \\right| = 1."
},
{
"math_id": 8,
"text": "\\left|\\frac{\\overline{BD}}{\\overline{DC}}\\right| = \\left|\\frac{\\overline{BF}}{\\overline{CK}}\\right|, \\quad \\left|\\frac{\\overline{AE}}{\\overline{EC}}\\right| = \\left|\\frac{\\overline{AF}}{\\overline{CK}}\\right|,"
},
{
"math_id": 9,
"text": "\\frac{\\overline{AF}}{\\overline{FB}} = \\frac{\\overline{AF'}}{\\overline{F'B}}\\ ."
},
{
"math_id": 10,
"text": "\\frac{\\overrightarrow{DC}}{\\overrightarrow{DB}} \\times \n \\frac{\\overrightarrow{EA}}{\\overrightarrow{EC}} \\times\n \\frac{\\overrightarrow{FB}}{\\overrightarrow{FA}} = 1,"
}
] |
https://en.wikipedia.org/wiki?curid=1480484
|
148064
|
Inverse function rule
|
Calculus identity
In calculus, the inverse function rule is a formula that expresses the derivative of the inverse of a bijective and differentiable function f in terms of the derivative of f. More precisely, if the inverse of formula_0 is denoted as formula_1, where formula_2 if and only if formula_3, then the inverse function rule is, in Lagrange's notation,
formula_4.
This formula holds in general whenever formula_0 is continuous and injective on an interval I, with formula_0 being differentiable at formula_5 (formula_6) and where formula_7. The same formula is also equivalent to the expression
formula_8
where formula_9 denotes the unary derivative operator (on the space of functions) and formula_10 denotes function composition.
Geometrically, a function and its inverse function have graphs that are reflections of each other in the line formula_11. This reflection operation turns the gradient of any line into its reciprocal.
Assuming that formula_0 has an inverse in a neighbourhood of formula_12 and that its derivative at that point is non-zero, its inverse is guaranteed to be differentiable at formula_12 and have a derivative given by the above formula.
The inverse function rule may also be expressed in Leibniz's notation. As that notation suggests,
formula_13
This relation is obtained by differentiating the equation formula_14 with respect to "x" and applying the chain rule, yielding:
formula_15
considering that the derivative of x with respect to "x" is 1.
Derivation.
Let formula_0 be an invertible (bijective) function, let formula_12 be in the domain of formula_0, and let formula_16 be in the codomain of formula_0. Since f is a bijective function, formula_16 is in the range of formula_0. This also means that formula_16 is in the domain of formula_1, and that formula_12 is in the codomain of formula_1. Since formula_0 is an invertible function, we know that formula_17. The inverse function rule can be obtained by taking the derivative of this equation.
formula_18
The right side is equal to 1 and the chain rule can be applied to the left side:
formula_19
Rearranging then gives
formula_20
Rather than using formula_16 as the variable, we can rewrite this equation using formula_21 as the input for formula_1, and we get the following:
formula_22
Examples.
formula_23 (for positive formula_12) has inverse formula_24.
formula_25
formula_26
At formula_27, however, there is a problem: the graph of the square root function becomes vertical, corresponding to a horizontal tangent for the square function.
formula_28 (for real formula_12) has inverse formula_29 (for positive formula_16).
formula_30
formula_31
formula_32
This is only useful if the integral exists. In particular we need formula_33 to be non-zero across the range of integration.
It follows that a function that has a continuous derivative has an inverse in a neighbourhood of every point where the derivative is non-zero. This need not be true if the derivative is not continuous.
formula_34
Where formula_35 denotes the antiderivative of formula_36.
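The following SymPy sketch (an added illustration; it assumes SymPy is installed) checks both the inverse function rule and the antiderivative identity above for the pair f = exp, f⁻¹ = ln.

```python
# Check (f^{-1})'(x) = 1/f'(f^{-1}(x)) and the antiderivative-of-the-inverse
# identity for f(x) = e^x, whose inverse is ln(x) and whose antiderivative is
# F(x) = e^x.  Assumes SymPy is installed.
import sympy as sp

x = sp.symbols('x', positive=True)
f, f_inv, F = sp.exp, sp.log, sp.exp   # F is an antiderivative of f

# Inverse function rule.
lhs = sp.diff(f_inv(x), x)
rhs = 1 / sp.diff(f(x), x).subs(x, f_inv(x))
print(sp.simplify(lhs - rhs))          # 0

# Integral of the inverse: x*f^{-1}(x) - F(f^{-1}(x)), up to a constant.
integral = sp.integrate(f_inv(x), x)
identity = x * f_inv(x) - F(f_inv(x))
print(sp.simplify(integral - identity))   # a constant (here 0)
```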
Additional properties.
Let formula_37. Then we have, assuming formula_38:
formula_39
This can be shown using the previous notation formula_40. Then we have:
formula_41
Therefore:
formula_42
By induction, we can generalize this result for any integer formula_43, with formula_44, the nth derivative of f(x), and formula_45, assuming formula_46:
formula_47
Higher derivatives.
The chain rule given above is obtained by differentiating the identity formula_48 with respect to x. One can continue the same process for higher derivatives. Differentiating the identity twice with respect to "x", one obtains
formula_49
that is simplified further by the chain rule as
formula_50
Replacing the first derivative, using the identity obtained earlier, we get
formula_51
Similarly for the third derivative:
formula_52
or using the formula for the second derivative,
formula_53
These formulas are generalized by the Faà di Bruno's formula.
These formulas can also be written using Lagrange's notation. If "f" and "g" are inverses, then
formula_54
Example.
formula_28 has the inverse formula_55. Using the formula for the second derivative of the inverse function,
formula_56
so that
formula_57,
which agrees with the direct calculation.
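The same example can be checked symbolically. The following SymPy sketch (an added illustration; it assumes SymPy is installed) evaluates the Lagrange-notation formula for the second derivative of the inverse with f = exp and g = ln and recovers −1/"y"².

```python
# Check g''(y) = -f''(g(y)) / [f'(g(y))]**3 for the inverse pair f = exp,
# g = ln, matching d^2x/dy^2 = -1/y^2 from the example above.
# Assumes SymPy is installed.
import sympy as sp

y = sp.symbols('y', positive=True)
f, g = sp.exp, sp.log

lhs = sp.diff(g(y), y, 2)                                            # g''(y)
rhs = -sp.diff(f(y), y, 2).subs(y, g(y)) / sp.diff(f(y), y).subs(y, g(y))**3
print(sp.simplify(lhs), sp.simplify(rhs))                            # both -1/y**2
```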
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "f^{-1}"
},
{
"math_id": 2,
"text": "f^{-1}(y) = x"
},
{
"math_id": 3,
"text": "f(x) = y"
},
{
"math_id": 4,
"text": "\\left[f^{-1}\\right]'(a)=\\frac{1}{f'\\left( f^{-1}(a) \\right)}"
},
{
"math_id": 5,
"text": "f^{-1}(a)"
},
{
"math_id": 6,
"text": "\\in I"
},
{
"math_id": 7,
"text": "f'(f^{-1}(a)) \\ne 0"
},
{
"math_id": 8,
"text": "\\mathcal{D}\\left[f^{-1}\\right]=\\frac{1}{(\\mathcal{D} f)\\circ \\left(f^{-1}\\right)},"
},
{
"math_id": 9,
"text": "\\mathcal{D}"
},
{
"math_id": 10,
"text": "\\circ"
},
{
"math_id": 11,
"text": "y=x"
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "\\frac{dx}{dy}\\,\\cdot\\, \\frac{dy}{dx} = 1."
},
{
"math_id": 14,
"text": "f^{-1}(y)=x"
},
{
"math_id": 15,
"text": "\\frac{dx}{dy}\\,\\cdot\\, \\frac{dy}{dx} = \\frac{dx}{dx}"
},
{
"math_id": 16,
"text": "y"
},
{
"math_id": 17,
"text": "f(f^{-1}(y)) = y"
},
{
"math_id": 18,
"text": "\n\\dfrac{\\mathrm{d}}{\\mathrm{d}y} f(f^{-1}(y)) = \\dfrac{\\mathrm{d}}{\\mathrm{d}y} y \n"
},
{
"math_id": 19,
"text": "\n\\begin{align}\n\\dfrac{\\mathrm{d}\\left( f(f^{-1}(y)) \\right)}{\\mathrm{d}\\left( f^{-1}(y) \\right)} \\dfrac{\\mathrm{d}\\left(f^{-1}(y)\\right)}{\\mathrm{d}y}\n&= 1 \\\\\n\\dfrac{\\mathrm{d}f(f^{-1}(y))}{\\mathrm{d}f^{-1}(y)} \\dfrac{\\mathrm{d}f^{-1}(y)}{\\mathrm{d}y}\n&= 1 \\\\\nf^{\\prime}(f^{-1}(y))\n(f^{-1})^{\\prime}(y)\n&= 1 \n\\end{align}\n"
},
{
"math_id": 20,
"text": "\n (f^{-1})^{\\prime}(y) = \\frac{1}{f^{\\prime}(f^{-1}(y))}\n"
},
{
"math_id": 21,
"text": "a"
},
{
"math_id": 22,
"text": "\n(f^{-1})^{\\prime}(a) = \\frac{1}{f^{\\prime}\\left( f^{-1}(a) \\right)}\n"
},
{
"math_id": 23,
"text": "y = x^2"
},
{
"math_id": 24,
"text": "x = \\sqrt{y}"
},
{
"math_id": 25,
"text": " \\frac{dy}{dx} = 2x \n\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ };\n\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\n\\frac{dx}{dy} = \\frac{1}{2\\sqrt{y}}=\\frac{1}{2x} "
},
{
"math_id": 26,
"text": " \\frac{dy}{dx}\\,\\cdot\\,\\frac{dx}{dy} = 2x \\cdot\\frac{1}{2x} = 1. "
},
{
"math_id": 27,
"text": "x=0"
},
{
"math_id": 28,
"text": "y = e^x"
},
{
"math_id": 29,
"text": "x = \\ln{y}"
},
{
"math_id": 30,
"text": " \\frac{dy}{dx} = e^x\n\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ };\n\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\n\\frac{dx}{dy} = \\frac{1}{y} = e^{-x} "
},
{
"math_id": 31,
"text": " \\frac{dy}{dx}\\,\\cdot\\,\\frac{dx}{dy} = e^x \\cdot e^{-x} = 1. "
},
{
"math_id": 32,
"text": "{f^{-1}}(x)=\\int\\frac{1}{f'({f^{-1}}(x))}\\,{dx} + C."
},
{
"math_id": 33,
"text": "f'(x)"
},
{
"math_id": 34,
"text": " \\int f^{-1}(x)\\, {dx} = x f^{-1}(x) - F(f^{-1}(x)) + C "
},
{
"math_id": 35,
"text": " F "
},
{
"math_id": 36,
"text": " f "
},
{
"math_id": 37,
"text": " z = f'(x)"
},
{
"math_id": 38,
"text": " f''(x) \\neq 0"
},
{
"math_id": 39,
"text": " \\frac{d(f')^{-1}(z)}{dz} = \\frac{1}{f''(x)}"
},
{
"math_id": 40,
"text": " y = f(x)"
},
{
"math_id": 41,
"text": " f'(x) = \\frac{dy}{dx} = \\frac{dy}{dz} \\frac{dz}{dx} = \\frac{dy}{dz} f''(x) \\Rightarrow \\frac{dy}{dz} = \\frac{f'(x) }{f''(x)}"
},
{
"math_id": 42,
"text": " \\frac{d(f')^{-1}(z)}{dz} = \\frac{dx}{dz} = \\frac{dy}{dz}\\frac{dx}{dy} = \\frac{f'(x)}{f''(x)}\\frac{1}{f'(x)} = \\frac{1}{f''(x)}"
},
{
"math_id": 43,
"text": " n \\ge 1"
},
{
"math_id": 44,
"text": " z = f^{(n)}(x)"
},
{
"math_id": 45,
"text": " y = f^{(n-1)}(x)"
},
{
"math_id": 46,
"text": " f^{(i)}(x) \\neq 0 \\text{ for } 0 < i \\le n+1 "
},
{
"math_id": 47,
"text": " \\frac{d(f^{(n)})^{-1}(z)}{dz} = \\frac{1}{f^{(n+1)}(x)}"
},
{
"math_id": 48,
"text": "f^{-1}(f(x))=x"
},
{
"math_id": 49,
"text": " \\frac{d^2y}{dx^2}\\,\\cdot\\,\\frac{dx}{dy} + \\frac{d}{dx} \\left(\\frac{dx}{dy}\\right)\\,\\cdot\\,\\left(\\frac{dy}{dx}\\right) = 0, "
},
{
"math_id": 50,
"text": " \\frac{d^2y}{dx^2}\\,\\cdot\\,\\frac{dx}{dy} + \\frac{d^2x}{dy^2}\\,\\cdot\\,\\left(\\frac{dy}{dx}\\right)^2 = 0."
},
{
"math_id": 51,
"text": " \\frac{d^2y}{dx^2} = - \\frac{d^2x}{dy^2}\\,\\cdot\\,\\left(\\frac{dy}{dx}\\right)^3. "
},
{
"math_id": 52,
"text": " \\frac{d^3y}{dx^3} = - \\frac{d^3x}{dy^3}\\,\\cdot\\,\\left(\\frac{dy}{dx}\\right)^4 -\n3 \\frac{d^2x}{dy^2}\\,\\cdot\\,\\frac{d^2y}{dx^2}\\,\\cdot\\,\\left(\\frac{dy}{dx}\\right)^2"
},
{
"math_id": 53,
"text": " \\frac{d^3y}{dx^3} = - \\frac{d^3x}{dy^3}\\,\\cdot\\,\\left(\\frac{dy}{dx}\\right)^4 +\n3 \\left(\\frac{d^2x}{dy^2}\\right)^2\\,\\cdot\\,\\left(\\frac{dy}{dx}\\right)^5"
},
{
"math_id": 54,
"text": " g''(x) = \\frac{-f''(g(x))}{[f'(g(x))]^3}"
},
{
"math_id": 55,
"text": "x = \\ln y"
},
{
"math_id": 56,
"text": " \\frac{dy}{dx} = \\frac{d^2y}{dx^2} = e^x = y \n\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ };\n\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\n\\left(\\frac{dy}{dx}\\right)^3 = y^3;"
},
{
"math_id": 57,
"text": "\n\\frac{d^2x}{dy^2}\\,\\cdot\\,y^3 + y = 0\n\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ };\n\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\n\\frac{d^2x}{dy^2} = -\\frac{1}{y^2}\n"
}
] |
https://en.wikipedia.org/wiki?curid=148064
|
14808522
|
Triadic closure
|
Triadic closure is a concept in social network theory, first suggested by German sociologist Georg Simmel in his 1908 book "Soziologie" ["Sociology: Investigations on the Forms of Sociation"]. Triadic closure is the property among three nodes A, B, and C (representing people, for instance), that if the connections A-B and A-C exist, there is a tendency for the new connection B-C to be formed. Triadic closure can be used to understand and predict the growth of networks, although it is only one of many mechanisms by which new connections are formed in complex networks.
History.
Triadic closure was made popular by Mark Granovetter in his 1973 article "The Strength of Weak Ties". There he synthesized the theory of cognitive balance first introduced by Fritz Heider in 1946 with a Simmelian understanding of social networks. In general terms, cognitive balance refers to the propensity of two individuals to want to feel the same way about an object. If the triad of three individuals is not closed, then the person connected to both of the individuals will want to close this triad in order to achieve closure in the relationship network.
Measurements.
The two most common measures of triadic closure for a graph are the clustering coefficient and the transitivity of that graph.
Clustering coefficient.
One measure for the presence of triadic closure is clustering coefficient, as follows:
Let formula_0 be an undirected simple graph (i.e., a graph having no self-loops or multiple edges) with V the set of vertices and E the set of edges. Also, let formula_1 and formula_2 denote the number of vertices and edges in G, respectively, and let formula_3 be the degree of vertex i.
We can define a triangle among the triple of vertices formula_4, formula_5, and formula_6 to be a set with the following three edges: {(i,j), (j,k), (i,k)}.
We can also define the number of triangles that vertex formula_4 is involved in as formula_7 and, as each triangle is counted three times, we can express the number of triangles in G as formula_8.
Assuming that triadic closure holds, only two strong edges are required for a triple to form. Thus, the number of theoretical triples that should be present under the triadic closure hypothesis for a vertex formula_4 is formula_9, assuming formula_10. We can express formula_11.
Now, for a vertex formula_4 with formula_10, the clustering coefficient formula_12 of vertex formula_4 is the fraction of triples for vertex formula_4 that are closed, and can be measured as formula_13. Thus, the clustering coefficient formula_14 of graph formula_15 is given by formula_16, where formula_17 is the number of nodes with degree at least 2.
Transitivity.
Another measure for the presence of triadic closure is transitivity, defined as formula_18.
Causes and effects.
In a trust network, triadic closure is likely to develop due to the transitive property. If a node A trusts node B, and node B trusts node C, node A will have the basis to trust node C. In a social network, strong triadic closure occurs because there is increased opportunity for nodes A and C with common neighbor B to meet and therefore create at least weak ties. Node B also has the incentive to bring A and C together to decrease the latent stress in two separate relationships.
Networks that stay true to this principle become highly interconnected and have very high clustering coefficients. However, networks that do not follow this principle turn out to be poorly connected and may suffer from instability once negative relations are included.
Triadic closure is a good model for how networks will evolve over time. While simple graph theory tends to analyze networks at one point in time, applying the triadic closure principle can predict the development of ties within a network and show the progression of connectivity.
In social networks, triadic closure facilitates cooperative behavior, but when new connections are made via referrals from existing connections the average global fraction of cooperators is less than when individuals choose new connections randomly from the population at large. Two possible effects of these are by the structural and informational construction. The structural construction arises from the propensity toward high clusterability. The informational construction comes from the assumption that an individual knows something about a friend's friend, as opposed to a random stranger.
Strong Triadic Closure Property and local bridges.
A node A with strong ties to two neighbors B and C obeys the Strong Triadic Closure Property if these neighbors have an edge (either a weak or strong tie) between them.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G = (V,E)"
},
{
"math_id": 1,
"text": "N = |V|"
},
{
"math_id": 2,
"text": "M = |E|"
},
{
"math_id": 3,
"text": "d_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "\\delta (i)"
},
{
"math_id": 8,
"text": "\\delta (G) = \\frac{1}{3} \\sum_{i\\in V} \\ \\delta (i)"
},
{
"math_id": 9,
"text": "\\tau (i) = \\binom{d_i}{2}"
},
{
"math_id": 10,
"text": "d_i \\ge 2"
},
{
"math_id": 11,
"text": "\\tau (G) = \\frac{1}{3} \\sum_{i\\in V} \\ \\tau (i)"
},
{
"math_id": 12,
"text": "c(i)"
},
{
"math_id": 13,
"text": "\\frac{\\delta (i)}{\\tau (i)}"
},
{
"math_id": 14,
"text": "C(G)"
},
{
"math_id": 15,
"text": "G"
},
{
"math_id": 16,
"text": "C(G) = \\frac {1}{N_2} \\sum_{i \\in V, d_i \\ge 2} c(i)"
},
{
"math_id": 17,
"text": "N_2"
},
{
"math_id": 18,
"text": "T(G) = \\frac{3\\delta (G)}{\\tau (G)}"
}
] |
https://en.wikipedia.org/wiki?curid=14808522
|
14809904
|
Phosphoglycerate dehydrogenase
|
Metabolic enzyme PHGDH
Phosphoglycerate dehydrogenase (PHGDH) is an enzyme that catalyzes the chemical reactions
3-phospho-D-glycerate + NAD+ formula_0 3-phosphonooxypyruvate + NADH + H+
2-hydroxyglutarate + NAD+ formula_0 2-oxoglutarate + NADH + H+
The two substrates of this enzyme are 3-phospho-D-glycerate and NAD+, whereas its 3 products are 3-phosphohydroxypyruvate, NADH, and H+.
It is also possible that two substrates of this enzyme are 2-hydroxyglutarate and NAD+, whereas its 3 products are 2-oxoglutarate, NADH, and H+.
As of 2012, the most widely studied variants of PHGDH are from the "E. coli" and "M. tuberculosis" genomes. In humans, this enzyme is encoded by the "PHGDH" gene.
Function.
3-Phosphoglycerate dehydrogenase catalyzes the transition of 3-phosphoglycerate into 3-phosphohydroxypyruvate, which is the committed step in the phosphorylated pathway of L-serine biosynthesis. It is also essential in cysteine and glycine synthesis, which lie further downstream. This pathway represents the only way to synthesize serine in most organisms except plants, which uniquely possess multiple synthetic pathways. Nonetheless, the phosphorylated pathway that PHGDH participates in is still suspected to have an essential role in serine synthesis used in the developmental signaling of plants.
Because of serine and glycine's role as neurotrophic factors in the developing brain, PHGDH has been shown to have high expression in glial and astrocyte cells during neural development.
Mechanism and regulation.
3-phosphoglycerate dehydrogenase works via an induced fit mechanism to catalyze the transfer of a hydride from the substrate to NAD+, a required cofactor. In its active conformation, the enzyme's active site has multiple cationic residues that likely stabilize the transition state of the reaction between the negatively charged substrate and NAD+. The positioning is such that the substrate's alpha carbon and the C4 of the nicotinamide ring are brought into a proximity that facilitates the hydride transfer producing NADH and the oxidized substrate.
PHGDH is allosterically regulated by its downstream product, L-serine. This feedback inhibition is understandable considering that 3-phosphoglycerate is an intermediate in the glycolytic pathway. Given that PHGDH represents the committed step in the production of serine in the cell, flux through the pathway must be carefully controlled.
L-serine binding has been shown to exhibit cooperative behavior. Mutants that decreased this cooperativity also increased in sensitivity to serine's allosteric inhibition, suggesting a separation of the chemical mechanisms that result in allosteric binding cooperativity and active site inhibition. The mechanism of inhibition is Vmax type, indicating that serine affects the reaction rate rather than the binding affinity of the active site.
Although L-serine's allosteric effects are usually the focus of regulatory investigation, it has been noted that in some variants of the enzyme, 3-phosphoglycerate dehydrogenase is inhibited at separate positively charged allosteric site by high concentrations of its own substrate.
Structure.
3-Phosphoglycerate dehydrogenase is a tetramer, composed of four identical, asymmetric subunits. At any time, only a maximum of two adjacent subunits present a catalytically active site; the other two are forced into an inactive conformation. This results in half-of-the-sites activity with regard to both active and allosteric sites, meaning that only the two sites of the active subunits must be bound for essentially maximal effect with regard to catalysis and inhibition respectively. There is some evidence that further inhibition occurs with the binding of the third and fourth serine molecules, but it is relatively minimal.
The subunits from the "E. coli" PHGDH have three distinct domains, whereas those from "M. tuberculosis" have four. It is noted that the human enzyme more closely resembles that of "M. tuberculosis", including the site for allosteric substrate inhibition. Concretely, three general types of PHGDH have been proposed: Type I, II, and III. Type III has two distinct domains, lacks both allosteric sites, and is found in various unicellular organisms. Type II has serine binding sites and encompasses the well-studied "E. coli" PHGDH. Type I possesses both the serine and substrate allosteric binding sites and encompasses "M. tuberculosis" and mammalian PHGDHs.
The regulation of catalytic activity is thought to be a result of the movement of rigid domains about flexible “hinges.” When the substrate binds to the open active site, the hinge rotates and closes the cleft. Allosteric inhibition thus likely works by locking the hinge in a state that produces the open active site cleft.
The variant from "M. tuberculosis" also exhibits an uncommon dual pH optimum for catalytic activity.
Evolution.
3-Phosphoglycerate dehydrogenase possesses less than 20% homology to other NAD-dependent oxidoreductases and exhibits significant variance between species. There does appear to be conservation in specific binding domain residues, but there is still some variation in the positively charged active site residues between variants. For example, Type III PHGDH enzymes can be broken down into two subclasses where the key histidine residue is replaced with a lysine residue.
Disease relevance.
Homozygous or compound heterozygous mutations in 3-phosphoglycerate dehydrogenase cause Neu–Laxova syndrome and phosphoglycerate dehydrogenase deficiency. In addition to significantly shortening lifespan, PHGDH deficiencies are known to cause congenital microcephaly, psychomotor retardation, and intractable seizures in both humans and rats, presumably due to the essential signaling within the nervous system that serine, glycine, and other downstream molecules are intimately involved with. Treatment typically involves oral supplementation of serine and glycine and has been shown most effective when started "in utero" via oral ingestion by the mother.
Mutations that result in increased PHGDH activity are also associated with increased risk of oncogenesis, including certain breast cancers. This finding suggests that pathways providing an outlet for diverting carbon out of glycolysis may be beneficial for rapid cell growth.
It has been reported that PHGDH can also catalyze the conversion of alpha-ketoglutarate to 2-Hydroxyglutaric acid in certain variants. Thus, a mutation in the enzyme is hypothesized to contribute to 2-Hydroxyglutaric aciduria in humans, although there is debate as to whether or not this catalysis is shared by human PHGDH.
Research results suggest that PHGDH could serve as a blood biomarker of Alzheimer's disease.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14809904
|
14809983
|
Compound of three cubes
|
Polyhedral compound
In geometry, the compound of three cubes is a uniform polyhedron compound formed from three cubes arranged with octahedral symmetry. It has been depicted in works by Max Brückner and M.C. Escher.
History.
This compound appears in Max Brückner's book "Vielecke und Vielflache" (1900), and in the lithograph print "Waterfall" (1961) by M.C. Escher, who learned of it from Brückner's book. Its dual, the compound of three octahedra, forms the central image in an earlier Escher woodcut, "Stars".
In the 15th-century manuscript "De quinque corporibus regularibus", Piero della Francesca includes a drawing of an octahedron circumscribed around a cube, with eight of the cube edges lying in the octahedron's eight faces. Three cubes inscribed in this way within a single octahedron would form the compound of three cubes, but della Francesca does not depict the compound.
Construction and coordinates.
This compound can be constructed by superimposing three identical cubes, and then rotating each by 45 degrees about a separate axis (that passes through the centres of two opposite faces).
Cartesian coordinates for the vertices of this compound can be chosen as all the permutations of formula_0.
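The 24 vertices (eight for each of the three cubes) can be enumerated directly from this description; the short Python sketch below is offered only as an illustration.

```python
from itertools import permutations
from math import sqrt

vertices = set()
for s1 in (1, -1):
    for s2 in (1, -1):
        # all orderings of the three coordinates 0, ±1, ±√2, with both sign choices
        vertices.update(permutations((0.0, s1 * 1.0, s2 * sqrt(2))))

print(len(vertices))   # 24 = 3 cubes × 8 vertices
```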
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(0,\\pm 1,\\pm\\sqrt{2})"
}
] |
https://en.wikipedia.org/wiki?curid=14809983
|
14810768
|
Ferromagnetic superconductor
|
Superconductors whose ferromagnetism is related to their superconductivity
Ferromagnetic superconductors are materials that display intrinsic coexistence of ferromagnetism and superconductivity. They include UGe2, URhGe, and UCoGe. Evidence of ferromagnetic superconductivity was also reported for ZrZn2 in 2001, but later reports question these findings. These materials exhibit superconductivity in proximity to a magnetic quantum critical point.
The nature of the superconducting state in ferromagnetic superconductors is currently under debate. Early investigations studied the coexistence of conventional "s"-wave superconductivity with itinerant ferromagnetism. However, the scenario of spin-triplet pairing soon gained the upper hand. A mean-field model for coexistence of spin-triplet pairing and ferromagnetism was developed in 2005.
These models consider uniform coexistence of ferromagnetism and superconductivity, i.e. the same electrons which are both ferromagnetic and superconducting at the same time. Another scenario where there is an interplay between magnetic and superconducting order in the same material is superconductors with spiral or helical magnetic order. Examples of such include ErRh4B4 and HoMo6S8. In these cases, the superconducting and magnetic order parameters entwine each other in a spatially modulated pattern, which allows for their mutual coexistence, although it is no longer uniform. Even spin-singlet pairing may coexist with ferromagnetism in this manner.
Theory.
In conventional superconductors, the electrons constituting the Cooper pair have opposite spin, forming so-called spin-singlet pairs. However, other types of pairings are also permitted by the governing Pauli principle. In the presence of a magnetic field, spins tend to align themselves with the field, which means that a magnetic field is detrimental for the existence of spin-singlet Cooper pairs. A viable mean-field Hamiltonian for modelling itinerant ferromagnetism coexisting with a non-unitary spin-triplet state may after diagonalization be written as
formula_0
formula_1
formula_2
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H = H_0 + \\sum_{\\mathbf{k}\\sigma} E_{\\mathbf{k}\\sigma} \\gamma_{\\mathbf{k}\\sigma}^\\dagger \\gamma_{\\mathbf{k}\\sigma},"
},
{
"math_id": 1,
"text": "H_0 = \\frac{1}{2} \\sum_{\\mathbf{k}\\sigma} (\\xi_{\\mathbf{k}\\sigma} - E_{\\mathbf{k}\\sigma} - \\Delta_{\\mathbf{k}\\sigma}^\\dagger b_{\\mathbf{k}\\sigma}) + INM^2/2,"
},
{
"math_id": 2,
"text": "E_{\\mathbf{k}\\sigma} = \\sqrt{\\xi_{\\mathbf{k}\\sigma}^2 + |\\Delta_{\\mathbf{k}\\sigma}|^2}."
}
] |
https://en.wikipedia.org/wiki?curid=14810768
|
1481119
|
Cournot competition
|
Economic model
Cournot competition is an economic model used to describe an industry structure in which companies compete on the amount of output they will produce, which they decide on independently of each other and at the same time. It is named after Antoine Augustin Cournot (1801–1877) who was inspired by observing competition in a spring water duopoly. It has the following features:
An essential assumption of this model is that each firm aims to maximize profits, based on the conjecture that its own output decision will not have an effect on the decisions of its rivals.
Price is a commonly known decreasing function of total output. All firms know formula_0, the total number of firms in the market, and take the output of the others as given. The market price is set at a level such that demand equals the total quantity produced by all firms.
Each firm takes the quantity set by its competitors as a given, evaluates its residual demand, and then behaves as a monopoly.
History.
<templatestyles src="Rquote/styles.css"/>{ class="rquote pullquote floatright" role="presentation" style="display:table; border-collapse:collapse; border-style:none; float:right; margin:0.5em 0.75em; width:33%; "
Antoine Augustin Cournot (1801–1877) first outlined his theory of competition in his 1838 volume "Recherches sur les Principes Mathématiques de la Théorie des Richesses" as a way of describing the competition with a market for spring water dominated by two suppliers (a duopoly). The model was one of a number that Cournot set out "explicitly and with mathematical precision" in the volume. Specifically, Cournot constructed profit functions for each firm, and then used partial differentiation to construct a function representing a firm's best response for given (exogenous) output levels of the other firm(s) in the market. He then showed that a stable equilibrium occurs where these functions intersect (i.e., the simultaneous solution of the best response functions of each firm).
The consequence of this is that in equilibrium, each firm's expectations of how other firms will act are shown to be correct; when all is revealed, no firm wants to change its output decision. This idea of stability was later taken up and built upon as a description of Nash equilibria, of which Cournot equilibria are a subset.
The legacy of the "Recherches".
Cournot's economic theory was little noticed until Léon Walras credited him as a forerunner. This led to an unsympathetic review of Cournot's book by Joseph Bertrand which in turn received heavy criticism. Irving Fisher found Cournot's treatment of oligopoly "brilliant and suggestive, but not free from serious objections". He arranged for a translation to be made by Nathaniel Bacon in 1897.
Reactions to this aspect of Cournot's theory have ranged from searing condemnation to half-hearted endorsement. It has received sympathy in recent years as a contribution to game theory rather than economics. James W. Friedman explains:
In current language and interpretation, Cournot postulated a particular game to represent an oligopolistic market...
The maths in Cournot's book is elementary and the presentation not difficult to follow. The account below follows Cournot's words and diagrams closely. The diagrams were presumably included as an oversized plate in the original edition, and are missing from some modern reprints.
Cournot's conceptual framework.
Cournot's discussion of oligopoly draws on two theoretical advances made in earlier pages of his book. Both have passed (with some adjustment) into microeconomic theory, particularly within subfield of Industrial Organization where Cournot's assumptions can be relaxed to study various Market Structures and Industries, for example, the Stackelberg Competition model. Cournot's discussion of monopoly influenced later writers such as Edward Chamberlin and Joan Robinson during the 1930s revival of interest in imperfect competition.
The 'Law of Demand' or 'of Sales'.
Cournot was wary of psychological notions of demand, defining it simply as the amount sold of a particular good (helped along by the fact that the French word "débit", meaning 'sales quantity', has the same initial letter as "demande", meaning 'demand'). He formalised it mathematically as follows:
We will regard the sales quantity or annual demand formula_1, for any commodity, to be a function formula_2 of its price.
It follows that his demand curves do some of the work of modern supply curves, since producers who are able to limit the amount sold have an influence on Cournot's demand curve.
Cournot remarks that the demand curve will usually be a decreasing function of price, and that the total value of the good sold is formula_3, which will generally increase to a maximum and then decline towards 0. The condition for a maximum is that the derivative of formula_3, i.e., formula_4, should be 0 (where formula_5 is the derivative of formula_2).
Cournot's duopoly theory.
Monopoly and duopoly.
Cournot insists that each duopolist seeks "independently" to maximize profits, and this restriction is essential, since Cournot tells us that if they came to an understanding between each other so as each to obtain the maximum possible revenue, then completely different results would be obtained, indistinguishable from the consumer's point of view from those entailed by monopoly.
Cournot's price model.
Cournot presents a mathematically correct analysis of the equilibrium condition corresponding to a certain logically consistent model of duopolist behaviour. However his model is not stated and is not particularly natural (Shapiro remarked that observed practice constituted a "natural objection to the Cournot quantity model"), and "his words and the mathematics do not quite match".
His model can be grasped more easily if we slightly embellish it. Suppose that there are two owners of mineral water springs, each able to produce unlimited quantities at zero price. Suppose that instead of selling water to the public they offer it to a middle man. Each proprietor notifies the middle man of the quantity he or she intends to produce. The middle man finds the market-clearing price, which is determined by the demand function formula_6 and the aggregate supply. He or she sells the water at this price, passing the proceeds back to the proprietors.
The consumer demand formula_1 for mineral water at price formula_7 is denoted by formula_2; the inverse of formula_6 is written formula_8 and the market-clearing price is given by formula_9, where formula_10 and formula_11 is the amount supplied by proprietor formula_12.
Each proprietor is assumed to know the amount being supplied by his or her rival, and to adjust his or her own supply in the light of it to maximize his or her profits. The position of equilibrium is one in which neither proprietor is inclined to adjust the quantity supplied.
It needs mental contortions to imagine the same market behaviour arising without a middle man.
Interpretative difficulties.
A feature of Cournot's model is that a single price applies to both proprietors. He justified this assumption by saying that "dès lors le prix est nécessairement le même pour l'un et l'autre propriétaire". de Bornier expands on this by saying that "the obvious conclusion that only a single price can exist at a given moment" follows from "an essential assumption concerning his model, [namely] product homogeneity".
Later on Cournot writes that a proprietor can adjust his supply "en modifiant correctement le prix". Again, this is nonsense: it is impossible for a single price to be simultaneously under the control of two suppliers. If there is a single price, then it must be determined by the market as a consequence of the proprietors' decisions on matters under their individual control.
Cournot's account threw his English translator (Nathaniel Bacon) so completely off-balance that his words were corrected to "properly adjusting "his" price". Edgeworth regarded equality of price in Cournot as "a particular condition, not... abstractly necessary in cases of imperfect competition". Jean Magnan de Bornier says that in Cournot's theory "each owner will use price as a variable to control quantity" without saying how one price can govern two quantities. A. J. Nichol claimed that Cournot's theory makes no sense unless "prices are directly determined by buyers". Shapiro, perhaps in despair, remarked that "the actual process of price formation in Cournot's theory is somewhat mysterious".
Collusion.
Cournot's duopolists are not true profit-maximizers. Either supplier could increase his or her profits by cutting out the middle man and cornering the market by marginally undercutting his or her rival; thus the middle man can be seen as a mechanism for restricting competition.
Finding the Cournot duopoly equilibrium.
Example 1.
Cournot's model of competition is typically presented for the case of a duopoly market structure; the following example provides a straightforward analysis of the Cournot model for the case of Duopoly. Therefore, suppose we have a market consisting of only two firms which we will call firm 1 and firm 2. For simplicity, we assume each firm faces the same marginal cost. That is, for a given firm formula_12's output quantity, denoted formula_13 where formula_14, firm formula_12's cost of producing formula_13 units of output is given by formula_15, where formula_16 is the marginal cost.
This assumption tells us that both firms face the same cost-per-unit produced. Therefore, as each firm's profit is equal to its revenues minus costs, where revenue equals the number of units produced multiplied by the market price, we can denote the profit functions for firm 1 and firm 2 as follows:
formula_17
formula_18
In the above profit functions we have price as a function of total output which we denote as formula_19 and for two firms we must have formula_20. For example's sake, let us assume that price (inverse demand function) is linear and of the form formula_21. So, the inverse demand function can then be rewritten as formula_22.
Now, substituting our equation for price in place of formula_23 we can write each firm's profit function as:
formula_24
formula_25
As firms are assumed to be profit-maximizers, the first-order conditions (F.O.C.s) for each firm are:
formula_26
formula_27
The F.O.C.s state that firm formula_12 is producing at the profit-maximizing level of output when the marginal cost (formula_28) is equal to the marginal revenue (formula_29). Intuitively, this suggests that firms will produce up to the point where it remains profitable to do so, as any further production past this point will mean that formula_30, and therefore production beyond this point results in the firm losing money for each additional unit produced. Notice that at the profit-maximizing quantity where formula_31, we must have formula_32 which is why we set the above equations equal to zero.
Now that we have two equations describing the states at which each firm is producing at the profit-maximizing quantity, we can simply solve this system of equations to obtain each firm's optimal level of output, formula_33 for firms 1 and 2 respectively. So, we obtain:
formula_34
formula_35
These functions describe each firm's optimal (profit-maximizing) quantity of output given the price firms face in the market, formula_7, the marginal cost, formula_36, and output quantity of rival firms. The functions can be thought of as describing a firm's "Best Response" to the other firm's level of output.
We can now find a Cournot-Nash equilibrium using our "Best Response" functions above for the output quantity of firms 1 and 2. Recall that both firms face the same cost-per-unit (formula_36) and price (formula_7). Therefore, using this symmetrical relationship between firms, we find the equilibrium quantity by fixing formula_37. We can be sure this setup gives us the equilibrium levels because neither firm has an incentive to change its level of output, as doing so would harm the firm to the benefit of its rival. Now, substituting formula_38 for formula_33 and solving, we obtain the symmetric (same for each firm) equilibrium output quantity as
formula_39.
This equilibrium value describes the optimal level of output for firms 1 and 2, where each firm is producing an output quantity of formula_38. So, at equilibrium, the total market output formula_19 will be formula_40.
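A small numerical sketch of this example is given below. The values chosen for the demand intercept, slope and marginal cost are illustrative assumptions only; the iteration of best responses converges to the same equilibrium as the closed-form expression.

```python
a, b, chi = 10.0, 1.0, 1.0        # illustrative demand intercept, slope and marginal cost

def best_response(q_other):
    """A firm's profit-maximising output given its rival's output (from the F.O.C. above)."""
    return (a - chi) / (2 * b) - q_other / 2

q1 = q2 = 0.0
for _ in range(50):               # iterate the best responses; they settle at the equilibrium
    q1, q2 = best_response(q2), best_response(q1)

q_star = (a - chi) / (3 * b)      # closed-form Cournot-Nash quantity per firm
print(round(q1, 6), round(q2, 6), q_star)     # all three equal 3.0
print("market price:", a - b * (q1 + q2))     # p = a - bQ = 4.0
```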
Example 2.
The revenues accruing to the two proprietors are formula_41 and formula_42, i.e., formula_43 and formula_44. The first proprietor maximizes profit by optimizing over the parameter formula_45 under his control, giving the condition that the partial derivative of his profit with respect to formula_45 should be 0, and the mirror-image reasoning applies to his or her rival. We thus get the equations:
formula_46 and
formula_47.
The equilibrium position is found by solving these two equations simultaneously. This is most easily done by adding and subtracting them, turning them into:
formula_48 and
formula_49, where formula_10.
Thus, we see that the two proprietors supply equal quantities, and that the total quantity sold is the root of a single nonlinear equation in formula_1.
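The equilibrium condition can also be located numerically for a non-linear demand function. The sketch below uses an exponential inverse demand chosen purely for illustration; it is not a demand function considered by Cournot.

```python
from math import exp
from scipy.optimize import brentq

def f(D):                 # illustrative inverse demand p = f(D)
    return 10.0 * exp(-D / 5.0)

def f_prime(D):           # its derivative f'(D)
    return -2.0 * exp(-D / 5.0)

g = lambda D: 2.0 * f(D) + D * f_prime(D)   # the duopoly equilibrium condition above
D_star = brentq(g, 1e-6, 40.0)              # total quantity sold at equilibrium
print(D_star, D_star / 2, f(D_star))        # D = 10, each proprietor supplies 5, price ≈ 1.35
```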
Cournot goes further than this simple solution, investigating the stability of the equilibrium. Each of his original equations defines a relation between formula_45 and formula_50 which may be drawn on a graph. If the first proprietor was providing quantity formula_51, then the second proprietor would adopt quantity formula_52 from the red curve to maximize his or her revenue. But then, by similar reasoning, the first proprietor will adjust his supply to formula_53 to give him or her the maximum return as shown by the blue curve when formula_50 is equal to formula_52. This will lead to the second proprietor adapting to the supply value formula_54, and so forth until equilibrium is reached at the point of intersection formula_12, whose coordinates are formula_55.
Since proprietors move towards the equilibrium position it follows that the equilibrium is stable, but Cournot remarks that if the red and blue curves were interchanged then this would cease to be true. He adds that it is easy to see that the corresponding diagram would be inadmissible since, for instance, it is necessarily the case that formula_56. To verify this, notice that when formula_45 is 0, the two equations reduce to:
formula_57 and
formula_58.
The first of these corresponds to the quantity formula_50 sold when the price is zero (which is the maximum quantity the public is willing to consume), while the second states that the derivative of formula_59 with respect to formula_50 is 0, but formula_59 is the monetary value of an aggregate sales quantity formula_50, and the turning point of this value is a maximum. Evidently, the sales quantity which maximizes monetary value is reached before the maximum possible sales quantity (which corresponds to a value of 0). So, the root formula_60 of the first equation is necessarily greater than the root formula_61 of the second equation.
Comparison with monopoly.
We have seen that Cournot's system reduces to the equation formula_62. formula_1 is functionally related to formula_7 via formula_8 in one direction and formula_6 in the other. If we re-express this equation in terms of formula_7, it tells us that formula_63, which can be compared with the equation formula_64 obtained earlier for monopoly.
If we plot another variable formula_65 against formula_7, then we may draw a curve of the function formula_66. The monopoly price is the formula_7 for which this curve intersects the line formula_67, while the duopoly price is given by the intersection of the curve with the steeper line formula_68. Regardless of the shape of the curve, its intersection with formula_68 occurs to the left of (i.e., at a lower price than) its intersection with formula_67. Hence, prices are lower under duopoly than under monopoly, and quantities sold are accordingly higher.
Extension to oligopoly.
When there are formula_69 proprietors, the price equation becomes formula_70. The price can be read from the diagram from the intersection of formula_71 with the curve. Hence, the price diminishes indefinitely as the number of proprietors increases. With an infinite number of proprietors, the price becomes zero; or more generally, if we allow for costs of production, the price becomes the marginal cost.
Bertrand's critique.
The French mathematician Joseph Bertrand, when reviewing Walras's "Théorie Mathématique de la Richesse Sociale", was drawn to Cournot's book by Walras's high praise of it. Bertrand was critical of Cournot's reasoning and assumptions; he claimed that "removing the symbols would reduce the book to just a few pages". His summary of Cournot's theory of duopoly has remained influential:
Cournot assumes that one of the proprietors will reduce his price to attract buyers to him, and that the other will in turn reduce his price even more to attract buyers back to him. They will only stop undercutting each other in this way, when either proprietor, even if the other abandoned the struggle, has nothing more to gain from reducing his price. One major objection to this is that there is no solution under this assumption, in that there is no limit to the downward movement... If Cournot's formulation conceals this obvious result, it is because he most inadvertently introduces as D and D' the two proprietors' respective outputs, and by considering them as independent variables, he assumes that should either proprietor change his output then the other proprietor's output could remain constant. It quite obviously could not.
Pareto was unimpressed by Bertrand's critique, concluding from it that Bertrand 'wrote his article without consulting the books he criticised'.
Irving Fisher outlined a model of duopoly similar to the one Bertrand had accused Cournot of analysing incorrectly:
A more natural hypothesis, and one often tacitly adopted, is that each [producer] assumes his rival's "price" will remain fixed, while his own price is adjusted. Under this hypothesis each would undersell the other as long as any profit remained, so that the final result would be identical with the result of unlimited competition.
Fisher seemed to regard Bertrand as having been the first to present this model, and it has since entered the literature as Bertrand competition.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "F(p)"
},
{
"math_id": 3,
"text": "pF(p)"
},
{
"math_id": 4,
"text": "F(p)+pF'(p)"
},
{
"math_id": 5,
"text": "F'(p)"
},
{
"math_id": 6,
"text": "F"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "f"
},
{
"math_id": 9,
"text": "p=f(D)"
},
{
"math_id": 10,
"text": "D=D_1+D_2"
},
{
"math_id": 11,
"text": "D_i"
},
{
"math_id": 12,
"text": "i"
},
{
"math_id": 13,
"text": "q_i"
},
{
"math_id": 14,
"text": "i \\in \\{1,2\\}"
},
{
"math_id": 15,
"text": "C(q_i)=\\chi q_i"
},
{
"math_id": 16,
"text": "\\chi "
},
{
"math_id": 17,
"text": "\\Pi_1(Q)=p(Q)q_1 - \\chi q_1"
},
{
"math_id": 18,
"text": "\\Pi_2(Q)=p(Q)q_2 - \\chi q_2"
},
{
"math_id": 19,
"text": "Q"
},
{
"math_id": 20,
"text": "Q=q_1+q_2"
},
{
"math_id": 21,
"text": "p=a-bQ"
},
{
"math_id": 22,
"text": "p=a-bq_1-bq_2"
},
{
"math_id": 23,
"text": "p(Q)"
},
{
"math_id": 24,
"text": "\\Pi_1(q_1,q_2)=(a-bq_1-bq_2- \\chi)q_1"
},
{
"math_id": 25,
"text": "\\Pi_2(q_1,q_2)=(a-bq_1-bq_2- \\chi)q_2"
},
{
"math_id": 26,
"text": "\\frac{\\partial \\Pi_1(q_1,q_2)}{\\partial q_1}=a-2bq_1-bq_2-\\chi=0"
},
{
"math_id": 27,
"text": "\\frac{\\partial \\Pi_2(q_1,q_2)}{\\partial q_2}=a-bq_1-2bq_2-\\chi=0"
},
{
"math_id": 28,
"text": "\\text{MC}"
},
{
"math_id": 29,
"text": "\\text{MR}"
},
{
"math_id": 30,
"text": "\\text{MC} > \\text{MR}"
},
{
"math_id": 31,
"text": "\\text{MC}=\\text{MR}"
},
{
"math_id": 32,
"text": "\\text{MC}-\\text{MR}=0"
},
{
"math_id": 33,
"text": "q_1,q_2"
},
{
"math_id": 34,
"text": "q_1=\\frac{a-\\chi}{2b}-\\frac{q_2}{2}"
},
{
"math_id": 35,
"text": "q_2=\\frac{a-\\chi}{2b}-\\dfrac{q_1}{2}"
},
{
"math_id": 36,
"text": "\\chi"
},
{
"math_id": 37,
"text": "q_1=q_2=q^*"
},
{
"math_id": 38,
"text": "q^*"
},
{
"math_id": 39,
"text": "q^*=\\frac{a-\\chi}{3b}"
},
{
"math_id": 40,
"text": "Q=q_1^*+q_2^*=\\frac{2(a-\\chi)}{3b}"
},
{
"math_id": 41,
"text": "pD_1"
},
{
"math_id": 42,
"text": "pD_2"
},
{
"math_id": 43,
"text": "f(D_1+D_2) \\cdot D_1"
},
{
"math_id": 44,
"text": "f(D_1+D_2) \\cdot D_2"
},
{
"math_id": 45,
"text": "D_1"
},
{
"math_id": 46,
"text": "f(D_1+D_2) + D_1 f'(D_1+D_2) = 0"
},
{
"math_id": 47,
"text": "f(D_1+D_2) + D_2 f'(D_1+D_2) = 0"
},
{
"math_id": 48,
"text": "D_1=D_2"
},
{
"math_id": 49,
"text": "2 f(D) + D f'(D) = 0"
},
{
"math_id": 50,
"text": "D_2"
},
{
"math_id": 51,
"text": "x_\\textsf{l}"
},
{
"math_id": 52,
"text": "y_\\textsf{l}"
},
{
"math_id": 53,
"text": "x_\\textsf{ll}"
},
{
"math_id": 54,
"text": "y_\\textsf{ll}"
},
{
"math_id": 55,
"text": "(x,y)"
},
{
"math_id": 56,
"text": "m_1>m_2"
},
{
"math_id": 57,
"text": "f(D_2)=0"
},
{
"math_id": 58,
"text": "f(D_2) + D_2 f'(D_2) = 0"
},
{
"math_id": 59,
"text": "D_2 f(D_2)"
},
{
"math_id": 60,
"text": "m_1"
},
{
"math_id": 61,
"text": "m_2"
},
{
"math_id": 62,
"text": "2f(D) + Df'(D)=0"
},
{
"math_id": 63,
"text": "F(p)+2pF'(p)=0"
},
{
"math_id": 64,
"text": "F(p)+pF'(p)=0"
},
{
"math_id": 65,
"text": "u"
},
{
"math_id": 66,
"text": "u=-\\frac{F(p)}{F'(p)}"
},
{
"math_id": 67,
"text": "u=p"
},
{
"math_id": 68,
"text": "u=2p"
},
{
"math_id": 69,
"text": "n"
},
{
"math_id": 70,
"text": "F(p)+npF'(p)=0"
},
{
"math_id": 71,
"text": "u=np"
}
] |
https://en.wikipedia.org/wiki?curid=1481119
|
14812
|
Interquartile range
|
Measure of statistical dispersion
In descriptive statistics, the interquartile range (IQR) is a measure of statistical dispersion, which is the spread of the data. The IQR may also be called the midspread, middle 50%, fourth spread, or H‑spread. It is defined as the difference between the 75th and 25th percentiles of the data. To calculate the IQR, the data set is divided into quartiles, or four rank-ordered even parts via linear interpolation. These quartiles are denoted by Q1 (also called the lower quartile), "Q"2 (the median), and "Q"3 (also called the upper quartile). The lower quartile corresponds with the 25th percentile and the upper quartile corresponds with the 75th percentile, so IQR = "Q"3 − "Q"1.
The IQR is an example of a trimmed estimator, defined as the 25% trimmed range, which enhances the accuracy of dataset statistics by dropping lower-contribution, outlying points. It is also used as a robust measure of scale. It can be clearly visualized by the box on a box plot.
Use.
Unlike total range, the interquartile range has a breakdown point of 25% and is thus often preferred to the total range.
The IQR is used to build box plots, simple graphical representations of a probability distribution.
The IQR is used in businesses as a marker for their income rates.
For a symmetric distribution (where the median equals the midhinge, the average of the first and third quartiles), half the IQR equals the median absolute deviation (MAD).
The median is the corresponding measure of central tendency.
The IQR can be used to identify outliers (see below). The IQR also may indicate the skewness of the dataset.
The quartile deviation or semi-interquartile range is defined as half the IQR.
Algorithm.
The IQR of a set of values is calculated as the difference between the upper and lower quartiles, Q3 and Q1. Each quartile is a median calculated as follows.
Given an even "2n" or odd "2n+1" number of values
"first quartile Q1" = median of the "n" smallest values
"third quartile Q3" = median of the "n" largest values
The "second quartile Q2" is the same as the ordinary median.
Examples.
Data set in a table.
The following table has 13 rows, and follows the rules for the odd number of entries.
For the data in this table the interquartile range is IQR = Q3 − Q1 = 119 - 31 = 88.
Data set in a plain-text box plot.
For the data set in this box plot:
This means the 1.5*IQR whiskers can be uneven in lengths. The median, minimum, maximum, and the first and third quartile constitute the Five-number summary.
Distributions.
The interquartile range of a continuous distribution can be calculated by integrating the probability density function (which yields the cumulative distribution function—any other means of calculating the CDF will also work). The lower quartile, "Q"1, is a number such that integral of the PDF from -∞ to "Q"1 equals 0.25, while the upper quartile, "Q"3, is such a number that the integral from -∞ to "Q"3 equals 0.75; in terms of the CDF, the quartiles can be defined as follows:
formula_0
formula_1
where CDF−1 is the quantile function.
The interquartile range and median of some common distributions are shown below
Interquartile range test for normality of distribution.
The IQR, mean, and standard deviation of a population "P" can be used in a simple test of whether or not "P" is normally distributed, or Gaussian. If "P" is normally distributed, then the standard score of the first quartile, "z"1, is −0.67, and the standard score of the third quartile, "z"3, is +0.67. Given "mean" = formula_2 and "standard deviation" = σ for "P", if "P" is normally distributed, the first quartile
formula_3
and the third quartile
formula_4
If the actual values of the first or third quartiles differ substantially from the calculated values, "P" is not normally distributed. However, a normal distribution can be trivially perturbed to maintain its Q1 and Q3 standard scores at −0.67 and +0.67 and not be normally distributed (so the above test would produce a false positive). A better test of normality, such as a Q–Q plot, would be indicated here.
Outliers.
The interquartile range is often used to find outliers in data. Outliers here are defined as observations that fall below Q1 − 1.5 IQR or above Q3 + 1.5 IQR. In a boxplot, the highest and lowest occurring value within this limit are indicated by "whiskers" of the box (frequently with an additional bar at the end of the whisker) and any outliers as individual points.
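The following short Python sketch implements this outlier rule using the quartile algorithm given above; the data values are illustrative and unrelated to the examples in this article.

```python
import statistics

def iqr_fences(data):
    xs = sorted(data)
    n = len(xs)
    half = n // 2
    q1 = statistics.median(xs[:half])            # median of the n smallest values
    q3 = statistics.median(xs[half + n % 2:])    # median of the n largest values
    iqr = q3 - q1
    return q1, q3, iqr, q1 - 1.5 * iqr, q3 + 1.5 * iqr

data = [7, 9, 16, 36, 39, 45, 45, 46, 48, 51, 100]
q1, q3, iqr, lo, hi = iqr_fences(data)
print(q1, q3, iqr)                                  # 16, 48, 32
print([x for x in data if x < lo or x > hi])        # [100] falls above Q3 + 1.5*IQR
```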
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Q_1 = \\text{CDF}^{-1}(0.25) ,"
},
{
"math_id": 1,
"text": "Q_3 = \\text{CDF}^{-1}(0.75) ,"
},
{
"math_id": 2,
"text": "\\bar{P}"
},
{
"math_id": 3,
"text": "Q_1 = (\\sigma \\, z_1) + \\bar{P}"
},
{
"math_id": 4,
"text": "Q_3 = (\\sigma \\, z_3) + \\bar{P}"
}
] |
https://en.wikipedia.org/wiki?curid=14812
|
1481533
|
Lag operator
|
Operator for offsetting time series elements
In time series analysis, the lag operator (L) or backshift operator (B) operates on an element of a time series to produce the previous element. For example, given some time series
formula_0
then
formula_1 for all formula_2
or similarly in terms of the backshift operator "B": formula_3 for all formula_2. Equivalently, this definition can be represented as
formula_4 for all formula_5
The lag operator (as well as backshift operator) can be raised to arbitrary integer powers so that
formula_6
and
formula_7
Lag polynomials.
Polynomials of the lag operator can be used, and this is a common notation for ARMA (autoregressive moving average) models. For example,
formula_8
specifies an AR("p") model.
A polynomial of lag operators is called a lag polynomial so that, for example, the ARMA model can be concisely specified as
formula_9
where formula_10 and formula_11 respectively represent the lag polynomials
formula_12
and
formula_13
Polynomials of lag operators follow similar rules of multiplication and division as do numbers and polynomials of variables. For example,
formula_14
means the same thing as
formula_15
As with polynomials of variables, a polynomial in the lag operator can be divided by another one using polynomial long division. In general dividing one such polynomial by another, when each has a finite order (highest exponent), results in an infinite-order polynomial.
An annihilator operator, denoted formula_16, removes the entries of the polynomial with negative power (future values).
Note that formula_17 denotes the sum of coefficients:
formula_18
Difference operator.
In time series analysis, the first difference operator formula_19 works as follows:
formula_20
Similarly, the second difference operator works as follows:
formula_21
The above approach generalises to the "i"-th difference operator
formula_22
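In practice, the lag and difference operators correspond to simple shift and difference operations on a stored series. The brief pandas sketch below is offered only as an illustration; the series values are arbitrary.

```python
import pandas as pd

x = pd.Series([2.0, 4.0, 7.0, 11.0, 16.0])

lagged = x.shift(1)               # L x_t = x_{t-1}  (lag / backshift)
first_diff = x.diff(1)            # (1 - L) x_t = x_t - x_{t-1}
second_diff = x.diff(1).diff(1)   # (1 - L)^2 x_t

print(pd.DataFrame({"x": x, "Lx": lagged, "d1": first_diff, "d2": second_diff}))
```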
Conditional expectation.
It is common in stochastic processes to care about the expected value of a variable given a previous information set. Let formula_23 be all information that is common knowledge at time "t" (this is often subscripted below the expectation operator); then the expected value of the realisation of "X", "j" time-steps in the future, can be written equivalently as:
formula_24
With these time-dependent conditional expectations, there is the need to distinguish between the backshift operator ("B") that only adjusts the date of the forecasted variable and the Lag operator ("L") that adjusts equally the date of the forecasted variable and the information set:
formula_25
formula_26
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X= \\{X_1, X_2, \\dots \\}"
},
{
"math_id": 1,
"text": " L X_t = X_{t-1} "
},
{
"math_id": 2,
"text": " t > 1"
},
{
"math_id": 3,
"text": " B X_t = X_{t-1} "
},
{
"math_id": 4,
"text": " X_t = L X_{t+1}"
},
{
"math_id": 5,
"text": " t \\geq 1"
},
{
"math_id": 6,
"text": " L^{-1} X_{t} = X_{t+1}"
},
{
"math_id": 7,
"text": " L^k X_{t} = X_{t-k}."
},
{
"math_id": 8,
"text": " \\varepsilon_t = X_t - \\sum_{i=1}^p \\varphi_i X_{t-i} = \\left(1 - \\sum_{i=1}^p \\varphi_i L^i\\right) X_t"
},
{
"math_id": 9,
"text": " \\varphi (L) X_t = \\theta (L) \\varepsilon_t"
},
{
"math_id": 10,
"text": " \\varphi (L)"
},
{
"math_id": 11,
"text": "\\theta (L)"
},
{
"math_id": 12,
"text": " \\varphi (L) = 1 - \\sum_{i=1}^p \\varphi_i L^i"
},
{
"math_id": 13,
"text": " \\theta (L)= 1 + \\sum_{i=1}^q \\theta_i L^i.\\,"
},
{
"math_id": 14,
"text": " X_t = \\frac{\\theta (L) }{\\varphi (L)}\\varepsilon_t,"
},
{
"math_id": 15,
"text": "\\varphi (L) X_t = \\theta (L) \\varepsilon_t ."
},
{
"math_id": 16,
"text": "[\\ ]_+"
},
{
"math_id": 17,
"text": "\\varphi \\left( 1 \\right)"
},
{
"math_id": 18,
"text": " \\varphi \\left( 1 \\right) = 1 - \\sum_{i=1}^p \\varphi_i "
},
{
"math_id": 19,
"text": " \\Delta "
},
{
"math_id": 20,
"text": "\n\\begin{align}\n \\Delta X_t & = X_t - X_{t-1} \\\\\n \\Delta X_t & = (1-L)X_t ~.\n\\end{align}\n"
},
{
"math_id": 21,
"text": "\n\\begin{align}\n \\Delta ( \\Delta X_t ) & = \\Delta X_t - \\Delta X_{t-1} \\\\\n \\Delta^2 X_t & = (1-L)\\Delta X_t \\\\\n \\Delta^2 X_t & = (1-L)(1-L)X_t \\\\\n \\Delta^2 X_t & = (1-L)^2 X_t ~.\n\\end{align}\n"
},
{
"math_id": 22,
"text": " \\Delta ^i X_t = (1-L)^i X_t \\ ."
},
{
"math_id": 23,
"text": "\\Omega_t"
},
{
"math_id": 24,
"text": "E [ X_{t+j} | \\Omega_t] = E_t [ X_{t+j} ] ."
},
{
"math_id": 25,
"text": "L^n E_t [ X_{t+j} ] = E_{t-n} [ X_{t+j-n} ] ,"
},
{
"math_id": 26,
"text": "B^n E_t [ X_{t+j} ] = E_t [ X_{t+j-n} ] ."
}
] |
https://en.wikipedia.org/wiki?curid=1481533
|
1482061
|
Bekenstein bound
|
Upper limit on entropy in physics
In physics, the Bekenstein bound (named after Jacob Bekenstein) is an upper limit on the thermodynamic entropy "S", or Shannon entropy "H", that can be contained within a given finite region of space which has a finite amount of energy—or conversely, the maximum amount of information required to perfectly describe a given physical system down to the quantum level. It implies that the information of a physical system, or the information necessary to perfectly describe that system, must be finite if the region of space and the energy are finite.
Equations.
The universal form of the bound was originally found by Jacob Bekenstein in 1981 as the inequality
formula_0
where "S" is the entropy, "k" is the Boltzmann constant, "R" is the radius of a sphere that can enclose the given system, "E" is the total mass–energy including any rest masses, "ħ" is the reduced Planck constant, and "c" is the speed of light. Note that while gravity plays a significant role in its enforcement, the expression for the bound does not contain the gravitational constant "G", and so, it ought to apply to quantum field theory in curved spacetime.
The Bekenstein–Hawking boundary entropy of three-dimensional black holes exactly saturates the bound. The Schwarzschild radius is given by
formula_1
and so the two-dimensional area of the black hole's event horizon is
formula_2
and using the Planck length
formula_3
the Bekenstein–Hawking entropy is
formula_4
One interpretation of the bound makes use of the microcanonical formula for entropy,
formula_5
where formula_6 is the number of energy eigenstates accessible to the system. This is equivalent to saying that the dimension of the Hilbert space describing the system is
formula_7
The bound is closely associated with black hole thermodynamics, the holographic principle and the covariant entropy bound of quantum gravity, and can be derived from a conjectured strong form of the latter.
Origins.
Bekenstein derived the bound from heuristic arguments involving black holes. If a system exists that violates the bound, i.e., by having too much entropy, Bekenstein argued that it would be possible to violate the second law of thermodynamics by lowering it into a black hole. In 1995, Ted Jacobson demonstrated that the Einstein field equations (i.e., general relativity) can be derived by assuming that the Bekenstein bound and the laws of thermodynamics are true. However, while a number of arguments were devised which show that some form of the bound must exist in order for the laws of thermodynamics and general relativity to be mutually consistent, the precise formulation of the bound was a matter of debate until Casini's work in 2008.
The following is a heuristic derivation that shows formula_8 for some constant formula_9. Showing that formula_10 requires a more technical analysis.
Suppose we have a black hole of mass formula_11, then the Schwarzschild radius of the black hole is formula_12, and the Bekenstein–Hawking entropy of the black hole is formula_13.
Now take a box of energy formula_14, entropy formula_15, and side length formula_16. If we throw the box into the black hole, the mass of the black hole goes up to formula_17, and the entropy goes up by formula_18. Since entropy does not decrease, formula_19.
In order for the box to fit inside the black hole, formula_20. If the two are comparable, formula_21, then we have derived the Bekenstein bound: formula_22.
Proof in quantum field theory.
A proof of the Bekenstein bound in the framework of quantum field theory was given in 2008 by Casini. One of the crucial insights of the proof was to find a proper interpretation of the quantities appearing on both sides of the bound.
Naive definitions of entropy and energy density in Quantum Field Theory suffer from ultraviolet divergences. In the case of the Bekenstein bound, ultraviolet divergences can be avoided by taking differences between quantities computed in an excited state and the same quantities computed in the vacuum state. For example, given a spatial region formula_23, Casini defines the entropy on the left-hand side of the Bekenstein bound as
formula_24
where formula_25 is the Von Neumann entropy of the reduced density matrix formula_26 associated with formula_23 in the excited state formula_27, and formula_28 is the corresponding Von Neumann entropy for the vacuum state formula_29.
On the right-hand side of the Bekenstein bound, a difficult point is to give a rigorous interpretation of the quantity formula_30, where formula_16 is a characteristic length scale of the system and formula_14 is a characteristic energy. This product has the same units as the generator of a Lorentz boost, and the natural analog of a boost in this situation is the modular Hamiltonian of the vacuum state formula_31. Casini defines the right-hand side of the Bekenstein bound as the difference between the expectation value of the modular Hamiltonian in the excited state and the vacuum state,
formula_32
With these definitions, the bound reads
formula_33
which can be rearranged to give
formula_34
This is simply the statement of positivity of quantum relative entropy, which proves the Bekenstein bound.
However, the modular Hamiltonian can only be interpreted as a weighted form of energy for conformal field theories, and when V is a sphere.
This construction allows us to make sense of the Casimir effect where the localized energy density is "lower" than that of the vacuum, i.e. a "negative" localized energy. The localized entropy of the vacuum is nonzero, and so, the Casimir effect is possible for states with a lower localized entropy than that of the vacuum. Hawking radiation can be explained by dumping localized negative energy into a black hole.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S \\leq \\frac{2 \\pi k R E}{\\hbar c},"
},
{
"math_id": 1,
"text": "r_{\\rm s} = \\frac{2 G M}{c^2},"
},
{
"math_id": 2,
"text": "A = 4 \\pi r_{\\rm s}^2 = \\frac{16 \\pi G^2 M^2}{c^4},"
},
{
"math_id": 3,
"text": "l_{\\rm P}^2 = \\hbar G/c^3,"
},
{
"math_id": 4,
"text": "S = \\frac{kA}{4 \\ l_{\\rm P}^2} = \\frac{4 \\pi k G M^2}{\\hbar c}."
},
{
"math_id": 5,
"text": "S = k \\log \\Omega,"
},
{
"math_id": 6,
"text": "\\Omega"
},
{
"math_id": 7,
"text": "\\dim \\mathcal{H} = \\exp \\left(\\frac{2\\pi R E}{\\hbar c}\\right)."
},
{
"math_id": 8,
"text": "S \\leq K\\frac{kRE}{\\hbar c} "
},
{
"math_id": 9,
"text": "K"
},
{
"math_id": 10,
"text": "K = 2\\pi"
},
{
"math_id": 11,
"text": "M"
},
{
"math_id": 12,
"text": "R_{bh} \\sim \\frac{GM}{c^2}"
},
{
"math_id": 13,
"text": "\\sim \\frac{kc^3 R_{bh}^2}{\\hbar G} \\sim \\frac{kGM^2}{\\hbar c}"
},
{
"math_id": 14,
"text": "E"
},
{
"math_id": 15,
"text": "S"
},
{
"math_id": 16,
"text": "R"
},
{
"math_id": 17,
"text": "M+\\frac{E}{c^2}"
},
{
"math_id": 18,
"text": "\\frac{kGME}{\\hbar c^3}"
},
{
"math_id": 19,
"text": "\\frac{kGME}{\\hbar c^3}\\gtrsim S"
},
{
"math_id": 20,
"text": "R \\lesssim \\frac{GM}{c^2} "
},
{
"math_id": 21,
"text": "R \\sim \\frac{GM}{c^2} "
},
{
"math_id": 22,
"text": "S \\lesssim \\frac{kRE}{\\hbar c} "
},
{
"math_id": 23,
"text": "V"
},
{
"math_id": 24,
"text": "S_V = S(\\rho_V) - S(\\rho^0_V) = - \\mathrm{tr}(\\rho_V \\log \\rho_V) + \\mathrm{tr}(\\rho_V^0 \\log \\rho_V^0)"
},
{
"math_id": 25,
"text": "S(\\rho_V)"
},
{
"math_id": 26,
"text": "\\rho_V"
},
{
"math_id": 27,
"text": "\\rho"
},
{
"math_id": 28,
"text": "S(\\rho^0_V)"
},
{
"math_id": 29,
"text": "\\rho^0"
},
{
"math_id": 30,
"text": "2\\pi R E"
},
{
"math_id": 31,
"text": "K=-\\log \\rho_V^0"
},
{
"math_id": 32,
"text": " K_V = \\mathrm{tr}(K \\rho_V) - \\mathrm{tr}(K \\rho^0_V). "
},
{
"math_id": 33,
"text": " S_V \\leq K_V, "
},
{
"math_id": 34,
"text": "\\mathrm{tr}(\\rho_V \\log \\rho_V) - \\mathrm{tr}(\\rho_V \\log \\rho_V^0) \\geq 0. "
}
] |
https://en.wikipedia.org/wiki?curid=1482061
|
1482083
|
Real coordinate space
|
Space formed by the "n"-tuples of real numbers
In mathematics, the real coordinate space or real coordinate "n"-space, of dimension n, denoted Rn or formula_0, is the set of all ordered n-tuples of real numbers, that is the set of all sequences of n real numbers, also known as "coordinate vectors".
Special cases are called the "real line" R1, the "real coordinate plane" R2, and the "real coordinate three-dimensional space" R3.
With component-wise addition and scalar multiplication, it is a real vector space.
The coordinates over any basis of the elements of a real vector space form a "real coordinate space" of the same dimension as that of the vector space. Similarly, the Cartesian coordinates of the points of a Euclidean space of dimension n, En (Euclidean line, E; Euclidean plane, E2; Euclidean three-dimensional space, E3) form a "real coordinate space" of dimension n.
These one to one correspondences between vectors, points and coordinate vectors explain the names of "coordinate space" and "coordinate vector". It allows using geometric terms and methods for studying real coordinate spaces, and, conversely, to use methods of calculus in geometry. This approach of geometry was introduced by René Descartes in the 17th century. It is widely used, as it allows locating points in Euclidean spaces, and computing with them.
Definition and structures.
For any natural number n, the set R"n" consists of all n-tuples of real numbers (R). It is called the "n-dimensional real space" or the "real n-space".
An element of R"n" is thus a n-tuple, and is written
formula_1
where each "x""i" is a real number. So, in multivariable calculus, the domain of a function of several real variables and the codomain of a real vector valued function are subsets of R"n" for some n.
The real n-space has several further properties, notably:
These properties and structures of R"n" make it fundamental in almost all areas of mathematics and their application domains, such as statistics, probability theory, and many parts of physics.
The domain of a function of several variables.
Any function "f"("x"1, "x"2, ..., "x""n") of n real variables can be considered as a function on R"n" (that is, with R"n" as its domain). The use of the real n-space, instead of several variables considered separately, can simplify notation and suggest reasonable definitions. Consider, for "n" = 2, a function composition of the following form:
formula_2
where functions "g"1 and "g"2 are continuous. If
then F is not necessarily continuous. Continuity is a stronger condition: the continuity of f in the natural R2 topology (discussed below), also called "multivariable continuity", which is sufficient for continuity of the composition F.
Vector space.
The coordinate space R"n" forms an n-dimensional vector space over the field of real numbers with the addition of the structure of linearity, and is often still denoted R"n". The operations on R"n" as a vector space are typically defined by
formula_3
formula_4
The zero vector is given by
formula_5
and the additive inverse of the vector x is given by
formula_6
This structure is important because any n-dimensional real vector space is isomorphic to the vector space R"n".
Matrix notation.
In standard matrix notation, each element of R"n" is typically written as a column vector
formula_7
and sometimes as a row vector:
formula_8
The coordinate space R"n" may then be interpreted as the space of all "n" × 1 column vectors, or all 1 × "n" row vectors with the ordinary matrix operations of addition and scalar multiplication.
Linear transformations from R"n" to R"m" may then be written as "m" × "n" matrices which act on the elements of R"n" via left multiplication (when the elements of R"n" are column vectors) and on elements of R"m" via right multiplication (when they are row vectors). The formula for left multiplication, a special case of matrix multiplication, is:
formula_9
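For illustration, a small Python sketch (not from the original text) of this left-multiplication formula, with a hypothetical 2 × 3 matrix acting on a vector of R3:

def mat_vec(A, x):
    # (A x)_k = sum over l of A[k][l] * x[l]
    return [sum(a_kl * x_l for a_kl, x_l in zip(row, x)) for row in A]

A = [[1, 2, 3],
     [0, 1, 0]]            # a 2 x 3 matrix: a linear map from R^3 to R^2
x = [1, 1, 2]
print(mat_vec(A, x))       # [9, 1]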
Any linear transformation is a continuous function (see below). Also, a matrix defines an open map from R"n" to R"m" if and only if the rank of the matrix equals m.
Standard basis.
The coordinate space R"n" comes with a standard basis:
formula_10
To see that this is a basis, note that an arbitrary vector in R"n" can be written uniquely in the form
formula_11
Geometric properties and uses.
Orientation.
The fact that real numbers, unlike many other fields, constitute an ordered field yields an orientation structure on R"n". Any full-rank linear map of R"n" to itself either preserves or reverses orientation of the space depending on the sign of the determinant of its matrix. If one permutes coordinates (or, in other words, elements of the basis), the resulting orientation will depend on the parity of the permutation.
Diffeomorphisms of R"n", or of domains in it, by virtue of having a nonzero Jacobian, are likewise classified as orientation-preserving or orientation-reversing. This has important consequences for the theory of differential forms, whose applications include electrodynamics.
Another manifestation of this structure is that the point reflection in R"n" has different properties depending on evenness of n. For even n it preserves orientation, while for odd n it is reversed (see also improper rotation).
Affine space.
R"n" understood as an affine space is the same space, where R"n" as a vector space acts by translations. Conversely, a vector has to be understood as a "difference between two points", usually illustrated by a directed line segment connecting two points. The distinction says that there is no canonical choice of where the origin should go in an affine n-space, because it can be translated anywhere.
Convexity.
In a real vector space, such as R"n", one can define a convex cone, which contains all "non-negative" linear combinations of its vectors. The corresponding concept in an affine space is a convex set, which allows only convex combinations (non-negative linear combinations that sum to 1).
In the language of universal algebra, a vector space is an algebra over the universal vector space R∞ of finite sequences of coefficients, corresponding to finite sums of vectors, while an affine space is an algebra over the universal affine hyperplane in this space (of finite sequences summing to 1), a cone is an algebra over the universal orthant (of finite sequences of nonnegative numbers), and a convex set is an algebra over the universal simplex (of finite sequences of nonnegative numbers summing to 1). This geometrizes the axioms in terms of "sums with (possible) restrictions on the coordinates".
Another concept from convex analysis is a convex function from R"n" to real numbers, which is defined through an inequality between its value on a convex combination of points and sum of values in those points with the same coefficients.
Euclidean space.
The dot product
formula_12
defines the norm |x| = √x ⋅ x on the vector space R"n". If every vector has its Euclidean norm, then for any pair of points the distance
formula_13
is defined, providing a metric space structure on R"n" in addition to its affine structure.
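A short Python sketch (illustrative only, with made-up sample vectors) of the dot product and the Euclidean distance it induces:

import math

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def euclidean_distance(x, y):
    diff = [xi - yi for xi, yi in zip(x, y)]
    return math.sqrt(dot(diff, diff))

x, y = (1.0, 2.0, 2.0), (0.0, 0.0, 0.0)
print(dot(x, y))                  # 0.0
print(euclidean_distance(x, y))   # 3.0, since sqrt(1 + 4 + 4) = 3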
As for vector space structure, the dot product and Euclidean distance usually are assumed to exist in R"n" without special explanations. However, the real n-space and a Euclidean n-space are distinct objects, strictly speaking. Any Euclidean n-space has a coordinate system where the dot product and Euclidean distance have the form shown above, called "Cartesian". But there are "many" Cartesian coordinate systems on a Euclidean space.
Conversely, the above formula for the Euclidean metric defines the "standard" Euclidean structure on R"n", but it is not the only possible one. Actually, any positive-definite quadratic form q defines its own "distance" √"q"(x − y), but it is not very different from the Euclidean one in the sense that
formula_14
Such a change of the metric preserves some of its properties, for example the property of being a complete metric space.
This also implies that any full-rank linear transformation of R"n", or any affine transformation of it, does not magnify distances by more than some fixed factor "C"2, and does not shrink distances by more than a fixed factor 1 / "C"1.
The aforementioned equivalence of metric functions remains valid if √"q"(x − y) is replaced with "M"(x − y), where M is any convex positive homogeneous function of degree 1, i.e. a vector norm (see Minkowski distance for useful examples). Since any "natural" metric on R"n" is not especially different from the Euclidean metric, R"n" is not always distinguished from a Euclidean "n"-space even in professional mathematical works.
In algebraic and differential geometry.
Although the definition of a manifold does not require that its model space should be R"n", this choice is the most common, and almost exclusive one in differential geometry.
On the other hand, Whitney embedding theorems state that any real differentiable m-dimensional manifold can be embedded into R2"m".
Other appearances.
Other structures considered on R"n" include the one of a pseudo-Euclidean space, symplectic structure (even n), and contact structure (odd n). All these structures, although they can be defined in a coordinate-free manner, admit standard (and reasonably simple) forms in coordinates.
R"n" is also a real vector subspace of C"n" which is invariant to complex conjugation; see also complexification.
Polytopes in R"n".
There are three families of polytopes which have simple representations in R"n" spaces, for any n, and can be used to visualize any affine coordinate system in a real n-space. Vertices of a hypercube have coordinates ("x"1, "x"2, ..., "x""n") where each xk takes on one of only two values, typically 0 or 1. However, any two numbers can be chosen instead of 0 and 1, for example −1 and 1. An n-hypercube can be thought of as the Cartesian product of n identical intervals (such as the unit interval [0,1]) on the real line. As an n-dimensional subset it can be described with a system of 2"n" inequalities:
formula_15 for [0,1], and
formula_16 for [−1,1].
Each vertex of the cross-polytope has, for some k, the xk coordinate equal to ±1 and all other coordinates equal to 0 (such that it is the kth standard basis vector up to sign). This is a dual polytope of hypercube. As an n-dimensional subset it can be described with a single inequality which uses the absolute value operation:
formula_17
but this can be expressed with a system of 2"n" linear inequalities as well.
The third polytope with simply enumerable coordinates is the standard simplex, whose vertices are n standard basis vectors and the origin (0, 0, ..., 0). As an n-dimensional subset it is described with a system of "n" + 1 linear inequalities:
formula_18
Replacement of all "≤" with "<" gives interiors of these polytopes.
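The three systems of inequalities translate directly into membership tests; the following Python sketch (an illustration, not part of the article, with an arbitrary sample point) checks a point of R3 against each polytope:

def in_hypercube(x):        # 0 <= x_k <= 1 for every coordinate
    return all(0 <= xk <= 1 for xk in x)

def in_cross_polytope(x):   # sum of |x_k| at most 1
    return sum(abs(xk) for xk in x) <= 1

def in_standard_simplex(x): # x_k >= 0 and the coordinates sum to at most 1
    return all(xk >= 0 for xk in x) and sum(x) <= 1

p = (0.2, 0.3, 0.1)
print(in_hypercube(p), in_cross_polytope(p), in_standard_simplex(p))   # True True True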
Topological properties.
The topological structure of R"n" (called standard topology, Euclidean topology, or usual topology) can be obtained not only from the Cartesian product construction. It is also identical to the natural topology induced by the Euclidean metric discussed above: a set is open in the Euclidean topology if and only if it contains an open ball around each of its points. Also, R"n" is a linear topological space (see continuity of linear maps above), and there is only one possible (non-trivial) topology compatible with its linear structure. As there are many open linear maps from R"n" to itself which are not isometries, there can be many Euclidean structures on R"n" which correspond to the same topology. Actually, it does not depend much even on the linear structure: there are many non-linear diffeomorphisms (and other homeomorphisms) of R"n" onto itself, or of its parts such as a Euclidean open ball or the interior of a hypercube.
R"n" has the topological dimension n.
An important result on the topology of R"n", that is far from superficial, is Brouwer's invariance of domain. Any subset of R"n" (with its subspace topology) that is homeomorphic to another open subset of R"n" is itself open. An immediate consequence of this is that R"m" is not homeomorphic to R"n" if "m" ≠ "n" – an intuitively "obvious" result which is nonetheless difficult to prove.
Despite the difference in topological dimension, and contrary to a naïve perception, it is possible to map a lesser-dimensional real space continuously and surjectively onto R"n". A continuous (although not smooth) space-filling curve (an image of R1) is possible.
Examples.
"n" ≤ 1.
Cases of 0 ≤ "n" ≤ 1 do not offer anything new: R1 is the real line, whereas R0 (the space containing the empty column vector) is a singleton, understood as a zero vector space. However, it is useful to include these as trivial cases of theories that describe different n.
"n" = 2.
The case of ("x,y") where "x" and "y" are real numbers has been developed as the Cartesian plane "P". Further structure has been attached with Euclidean vectors representing directed line segments in "P". The plane has also been developed as the field extension formula_19 by appending roots of X2 + 1 = 0 to the real field formula_20 The root i acts on P as a quarter turn with counterclockwise orientation. This root generates the group formula_21. When ("x,y") is written "x" + "y" i it is a complex number.
Another group action by formula_22, where the actor has been expressed as j, uses the line "y"="x" for the involution of flipping the plane ("x,y") ↦ ("y,x"), an exchange of coordinates. In this case points of "P" are written "x" + "y" j and called split-complex numbers. These numbers, with the coordinate-wise addition and multiplication according to "jj"=+1, form a ring that is not a field.
Another ring structure on "P" uses a nilpotent e to write "x" + "y" e for ("x,y"). The action of e on "P" reduces the plane to a line: It can be decomposed into the projection into the x-coordinate, then quarter-turning the result to the y-axis: e ("x" + "y" e) = "x" e since e2 = 0. A number "x" + "y" e is a dual number. The dual numbers form a ring, but, since e has no multiplicative inverse, it does not generate a group so the action is not a group action.
Excluding (0,0) from "P" makes ["x" : "y"] projective coordinates which describe the real projective line, a one-dimensional space. Since the origin is excluded, at least one of the ratios "x"/"y" and "y"/"x" exists. Then ["x" : "y"] = ["x"/"y" : 1] or ["x" : "y"] = [1 : "y"/"x"]. The projective line P1(R) is a topological manifold covered by two coordinate charts, ["z" : 1] → "z" or [1 : "z"] → "z", which form an atlas. For points covered by both charts the "transition function" is multiplicative inversion on an open neighborhood of the point, which provides a homeomorphism as required in a manifold. One application of the real projective line is found in Cayley–Klein metric geometry.
"n" = 4.
R4 can be imagined using the fact that 16 points ("x"1, "x"2, "x"3, "x"4), where each xk is either 0 or 1, are vertices of a tesseract (pictured), the 4-hypercube (see above).
The first major use of R4 is a spacetime model: three spatial coordinates plus one temporal. This is usually associated with the theory of relativity, although four dimensions have been used for such models since Galileo. The choice of theory leads to a different structure, though: in Galilean relativity the t coordinate is privileged, but in Einsteinian relativity it is not. Special relativity is set in Minkowski space. General relativity uses curved spaces, which may be thought of as R4 with a curved metric for most practical purposes. None of these structures provide a (positive-definite) metric on R4.
Euclidean R4 also attracts the attention of mathematicians, for example due to its relation to quaternions, which themselves form a 4-dimensional real algebra. See rotations in 4-dimensional Euclidean space for some information.
In differential geometry, "n" = 4 is the only case where R"n" admits a non-standard differential structure: see exotic R4.
Norms on R"n".
One could define many norms on the vector space R"n". Some common examples are the "p"-norm, defined by formula_23 for all formula_24 where formula_25 is a positive integer (the case formula_26 gives the familiar Euclidean norm), and the formula_27-norm or maximum norm, defined by formula_28, which is the limit of the "p"-norms: formula_29.
A surprising and useful result is that all norms defined on R"n" are equivalent. This means that for two arbitrary norms formula_30 and formula_31 on R"n" one can always find positive real numbers formula_32 such that
formula_33 for all formula_34.
This defines an equivalence relation on the set of all norms on R"n". With this result one can check that a sequence of vectors in R"n" converges with respect to formula_30 if and only if it converges with respect to formula_31.
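As a numerical illustration (a Python sketch with an arbitrary sample vector, not a proof), the maximum norm and the Euclidean norm bound each other with the constants 1 and √n:

def p_norm(x, p):
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def max_norm(x):
    return max(abs(xi) for xi in x)

x = [3.0, -4.0, 1.0]
n = len(x)
# here alpha = 1 and beta = sqrt(n) work for the pair (max norm, Euclidean norm)
assert max_norm(x) <= p_norm(x, 2) <= n ** 0.5 * max_norm(x)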
Here is a sketch of what a proof of this result may look like:
Because of the equivalence relation, it is enough to show that every norm on R"n" is equivalent to the Euclidean norm formula_35. Let formula_30 be an arbitrary norm on R"n". The proof is divided into two steps:
|
[
{
"math_id": 0,
"text": "\\R^n"
},
{
"math_id": 1,
"text": "(x_1, x_2, \\ldots, x_n)"
},
{
"math_id": 2,
"text": " F(t) = f(g_1(t),g_2(t)),"
},
{
"math_id": 3,
"text": "\\mathbf x + \\mathbf y = (x_1 + y_1, x_2 + y_2, \\ldots, x_n + y_n)"
},
{
"math_id": 4,
"text": "\\alpha \\mathbf x = (\\alpha x_1, \\alpha x_2, \\ldots, \\alpha x_n)."
},
{
"math_id": 5,
"text": "\\mathbf 0 = (0, 0, \\ldots, 0)"
},
{
"math_id": 6,
"text": "-\\mathbf x = (-x_1, -x_2, \\ldots, -x_n)."
},
{
"math_id": 7,
"text": "\\mathbf x = \\begin{bmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_n \\end{bmatrix}"
},
{
"math_id": 8,
"text": "\\mathbf x = \\begin{bmatrix} x_1 & x_2 & \\cdots & x_n \\end{bmatrix}."
},
{
"math_id": 9,
"text": "(A{\\mathbf x})_k = \\sum_{l=1}^n A_{kl} x_l"
},
{
"math_id": 10,
"text": "\\begin{align}\n\\mathbf e_1 & = (1, 0, \\ldots, 0) \\\\\n\\mathbf e_2 & = (0, 1, \\ldots, 0) \\\\\n& {}\\;\\; \\vdots \\\\\n\\mathbf e_n & = (0, 0, \\ldots, 1)\n\\end{align}"
},
{
"math_id": 11,
"text": "\\mathbf x = \\sum_{i=1}^n x_i \\mathbf{e}_i."
},
{
"math_id": 12,
"text": "\\mathbf{x}\\cdot\\mathbf{y} = \\sum_{i=1}^n x_iy_i = x_1y_1+x_2y_2+\\cdots+x_ny_n"
},
{
"math_id": 13,
"text": "d(\\mathbf{x}, \\mathbf{y}) = \\|\\mathbf{x} - \\mathbf{y}\\| = \\sqrt{\\sum_{i=1}^n (x_i - y_i)^2}"
},
{
"math_id": 14,
"text": "\\exist C_1 > 0,\\ \\exist C_2 > 0,\\ \\forall \\mathbf{x}, \\mathbf{y} \\in \\mathbb{R}^n:\n C_1 d(\\mathbf{x}, \\mathbf{y}) \\le \\sqrt{q(\\mathbf{x} - \\mathbf{y})} \\le\n C_2 d(\\mathbf{x}, \\mathbf{y}). "
},
{
"math_id": 15,
"text": "\\begin{matrix}\n0 \\le x_1 \\le 1 \\\\\n\\vdots \\\\\n0 \\le x_n \\le 1\n\\end{matrix}"
},
{
"math_id": 16,
"text": "\\begin{matrix}\n|x_1| \\le 1 \\\\\n\\vdots \\\\\n|x_n| \\le 1\n\\end{matrix}"
},
{
"math_id": 17,
"text": "\\sum_{k=1}^n |x_k| \\le 1\\,,"
},
{
"math_id": 18,
"text": "\\begin{matrix}\n0 \\le x_1 \\\\\n\\vdots \\\\\n0 \\le x_n \\\\\n\\sum\\limits_{k=1}^n x_k \\le 1\n\\end{matrix}"
},
{
"math_id": 19,
"text": "\\mathbf{C}"
},
{
"math_id": 20,
"text": "\\mathbf{R}."
},
{
"math_id": 21,
"text": "\\{i, -1, -i, +1\\} \\equiv \\mathbf{Z}/4\\mathbf{Z}"
},
{
"math_id": 22,
"text": "\\mathbf{Z}/2\\mathbf{Z}"
},
{
"math_id": 23,
"text": "\\|\\mathbf{x}\\|_p := \\sqrt[p]{\\sum_{i=1}^n|x_i|^p}"
},
{
"math_id": 24,
"text": "\\mathbf{x} \\in \\mathbf{R}^n"
},
{
"math_id": 25,
"text": "p"
},
{
"math_id": 26,
"text": "p = 2"
},
{
"math_id": 27,
"text": "\\infty"
},
{
"math_id": 28,
"text": "\\|\\mathbf{x}\\|_\\infty:=\\max \\{x_1,\\dots,x_n\\}"
},
{
"math_id": 29,
"text": "\\|\\mathbf{x}\\|_\\infty = \\lim_{p \\to \\infty} \\sqrt[p]{\\sum_{i=1}^n|x_i|^p}"
},
{
"math_id": 30,
"text": "\\|\\cdot\\|"
},
{
"math_id": 31,
"text": "\\|\\cdot\\|'"
},
{
"math_id": 32,
"text": "\\alpha,\\beta > 0"
},
{
"math_id": 33,
"text": "\\alpha \\cdot \\|\\mathbf{x}\\| \\leq \\|\\mathbf{x}\\|' \\leq \\beta\\cdot\\|\\mathbf{x}\\|"
},
{
"math_id": 34,
"text": "\\mathbf{x} \\in \\R^n"
},
{
"math_id": 35,
"text": "\\|\\cdot\\|_2"
},
{
"math_id": 36,
"text": "\\beta > 0"
},
{
"math_id": 37,
"text": "\\|\\mathbf{x}\\| \\leq \\beta \\cdot \\|\\mathbf{x}\\|_2"
},
{
"math_id": 38,
"text": "\\mathbf{x} = (x_1, \\dots, x_n) \\in \\mathbf{R}^n"
},
{
"math_id": 39,
"text": "\\mathbf{x} = \\sum_{i=1}^n e_i \\cdot x_i"
},
{
"math_id": 40,
"text": "\\|\\mathbf{x}\\| = \\left\\|\\sum_{i=1}^n e_i \\cdot x_i \\right\\|\\leq \\sum_{i=1}^n \\|e_i\\| \\cdot |x_i|\n\\leq \\sqrt{\\sum_{i=1}^n \\|e_i\\|^2} \\cdot \\sqrt{\\sum_{i=1}^n |x_i|^2} = \\beta \\cdot \\|\\mathbf{x}\\|_2,"
},
{
"math_id": 41,
"text": "\\beta := \\sqrt{\\sum_{i=1}^n \\|e_i\\|^2}"
},
{
"math_id": 42,
"text": "\\alpha > 0"
},
{
"math_id": 43,
"text": "\\alpha\\cdot\\|\\mathbf{x}\\|_2 \\leq \\|\\mathbf{x}\\|"
},
{
"math_id": 44,
"text": "\\alpha"
},
{
"math_id": 45,
"text": "k \\in \\mathbf{N}"
},
{
"math_id": 46,
"text": "\\mathbf{x}_k \\in \\mathbf{R}^n"
},
{
"math_id": 47,
"text": "\\|\\mathbf{x}_k\\|_2 > k \\cdot \\|\\mathbf{x}_k\\|"
},
{
"math_id": 48,
"text": "(\\tilde{\\mathbf{x}}_k)_{k \\in \\mathbf{N}}"
},
{
"math_id": 49,
"text": "\\tilde{\\mathbf{x}}_k := \\frac{\\mathbf{x}_k}{\\|\\mathbf{x}_k\\|_2}"
},
{
"math_id": 50,
"text": "\\|\\tilde{\\mathbf{x}}_k\\|_2 = 1"
},
{
"math_id": 51,
"text": "(\\tilde{\\mathbf{x}}_{k_j})_{j\\in\\mathbf{N}}"
},
{
"math_id": 52,
"text": "\\mathbf{a} \\in"
},
{
"math_id": 53,
"text": "\\|\\mathbf{a}\\|_2 = 1"
},
{
"math_id": 54,
"text": "\\mathbf{a} = \\mathbf{0}"
},
{
"math_id": 55,
"text": "\\|\\mathbf{a}\\| \\leq \\left\\|\\mathbf{a} - \\tilde{\\mathbf{x}}_{k_j}\\right\\| + \\left\\|\\tilde{\\mathbf{x}}_{k_j}\\right\\| \\leq \\beta \\cdot \\left\\|\\mathbf{a} - \\tilde{\\mathbf{x}}_{k_j}\\right\\|_2 + \\frac{\\|\\mathbf{x}_{k_j}\\|}{\\|\\mathbf{x}_{k_j}\\|_2} \\ \\overset{j \\to \\infty}{\\longrightarrow} \\ 0,"
},
{
"math_id": 56,
"text": "\\|\\mathbf{a}-\\tilde{\\mathbf{x}}_{k_j}\\| \\to 0"
},
{
"math_id": 57,
"text": "0 \\leq \\frac{\\|\\mathbf{x}_{k_j}\\|}{\\|\\mathbf{x}_{k_j}\\|_2} < \\frac{1}{k_j}"
},
{
"math_id": 58,
"text": "\\frac{\\|\\mathbf{x}_{k_j}\\|}{\\|\\mathbf{x}_{k_j}\\|_2} \\to 0"
},
{
"math_id": 59,
"text": "\\|\\mathbf{a}\\| = 0"
},
{
"math_id": 60,
"text": "\\mathbf{a}= \\mathbf{0}"
},
{
"math_id": 61,
"text": "\\|\\mathbf{a}\\|_2 = \\left\\| \\lim_{j \\to \\infty}\\tilde{\\mathbf{x}}_{k_j} \\right\\|_2 = \\lim_{j \\to \\infty} \\left\\| \\tilde{\\mathbf{x}}_{k_j} \\right\\|_2 = 1"
}
] |
https://en.wikipedia.org/wiki?curid=1482083
|
1482085
|
Euclidean topology
|
Topological structure of Euclidean space
In mathematics, and especially general topology, the Euclidean topology is the natural topology induced on formula_0-dimensional Euclidean space formula_1 by the Euclidean metric.
Definition.
The Euclidean norm on formula_1 is the non-negative function formula_2 defined by
formula_3
Like all norms, it induces a canonical metric defined by formula_4 The metric formula_5 induced by the Euclidean norm is called the Euclidean metric or the Euclidean distance and the distance between points formula_6 and formula_7 is
formula_8
In any metric space, the open balls form a base for a topology on that space.
The Euclidean topology on formula_1 is the topology generated by these balls.
In other words, the open sets of the Euclidean topology on formula_1 are given by (arbitrary) unions of the open balls formula_9 defined as formula_10 for all real formula_11 and all formula_12 where formula_13 is the Euclidean metric.
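A small Python sketch (illustrative, with a hypothetical sample point) of the open-ball criterion: around any point of the open unit ball one can fit a smaller open ball that stays inside it, so the open unit ball is open in this topology.

import math

def dist(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def in_open_ball(x, center, r):
    return dist(x, center) < r

origin = (0.0, 0.0)
p = (0.6, 0.3)                      # a point of the open unit ball around the origin
r = 1.0 - dist(p, origin)           # radius of a ball around p contained in the unit ball
nearby = [(p[0] + 0.4 * r, p[1]), (p[0], p[1] - 0.4 * r)]
assert all(in_open_ball(x, origin, 1.0) for x in nearby)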
Properties.
When endowed with this topology, the real line formula_14 is a T5 space.
Given two subsets, say formula_15 and formula_16, of formula_14 with formula_17 where formula_18 denotes the closure of formula_19 there exist open sets formula_20 and formula_21 with formula_22 and formula_23 such that formula_24
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\R^n"
},
{
"math_id": 2,
"text": "\\|\\cdot\\| : \\R^n \\to \\R"
},
{
"math_id": 3,
"text": "\\left\\|\\left(p_1, \\ldots, p_n\\right)\\right\\| ~:=~ \\sqrt{p_1^2 + \\cdots + p_n^2}."
},
{
"math_id": 4,
"text": "d(p, q) = \\|p - q\\|."
},
{
"math_id": 5,
"text": "d : \\R^n \\times \\R^n \\to \\R"
},
{
"math_id": 6,
"text": "p = \\left(p_1, \\ldots, p_n\\right)"
},
{
"math_id": 7,
"text": "q = \\left(q_1, \\ldots, q_n\\right)"
},
{
"math_id": 8,
"text": "d(p, q) ~=~ \\|p - q\\| ~=~ \\sqrt{\\left(p_1 - q_1\\right)^2 + \\left(p_2 - q_2\\right)^2 + \\cdots + \\left(p_i - q_i\\right)^2 + \\cdots + \\left(p_n - q_n\\right)^2}."
},
{
"math_id": 9,
"text": "B_r(p)"
},
{
"math_id": 10,
"text": "B_r(p) := \\left\\{x \\in \\R^n : d(p,x) < r\\right\\},"
},
{
"math_id": 11,
"text": "r > 0"
},
{
"math_id": 12,
"text": "p \\in \\R^n,"
},
{
"math_id": 13,
"text": "d"
},
{
"math_id": 14,
"text": "\\R"
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "B"
},
{
"math_id": 17,
"text": "\\overline{A} \\cap B = A \\cap \\overline{B} = \\varnothing,"
},
{
"math_id": 18,
"text": "\\overline{A}"
},
{
"math_id": 19,
"text": "A,"
},
{
"math_id": 20,
"text": "S_A"
},
{
"math_id": 21,
"text": "S_B"
},
{
"math_id": 22,
"text": "A \\subseteq S_A"
},
{
"math_id": 23,
"text": "B \\subseteq S_B"
},
{
"math_id": 24,
"text": "S_A \\cap S_B = \\varnothing."
}
] |
https://en.wikipedia.org/wiki?curid=1482085
|
14822
|
Irreducible fraction
|
Fully simplified fraction
An irreducible fraction (or fraction in lowest terms, simplest form or reduced fraction) is a fraction in which the numerator and denominator are integers that have no common divisors other than 1 (and −1, when negative numbers are considered). In other words, a fraction "a"/"b" is irreducible if and only if "a" and "b" are coprime, that is, if "a" and "b" have a greatest common divisor of 1. In higher mathematics, "irreducible fraction" may also refer to rational fractions such that the numerator and the denominator are coprime polynomials. Every rational number can be represented as an irreducible fraction with positive denominator in exactly one way.
An equivalent definition is sometimes useful: if "a" and "b" are integers, then the fraction "a"/"b" is irreducible if and only if there is no other equal fraction "c"/"d" such that |"c"| < |"a"| or |"d"| < |"b"|, where |"a"| means the absolute value of "a". (Two fractions "a"/"b" and "c"/"d" are "equal" or "equivalent" if and only if "ad" = "bc".)
For example, 1/4, 5/6, and −101/100 are all irreducible fractions. On the other hand, 2/4 is reducible since it is equal in value to 1/2, and the numerator of 1/2 is less than the numerator of 2/4.
A fraction that is reducible can be reduced by dividing both the numerator and denominator by a common factor. It can be fully reduced to lowest terms if both are divided by their greatest common divisor. In order to find the greatest common divisor, the Euclidean algorithm or prime factorization can be used. The Euclidean algorithm is commonly preferred because it allows one to reduce fractions with numerators and denominators too large to be easily factored.
formula_0
Examples.
In the first step both numbers were divided by 10, which is a factor common to both 120 and 90. In the second step, they were divided by 3. The final result, , is an irreducible fraction because 4 and 3 have no common factors other than 1.
The original fraction could have also been reduced in a single step by using the greatest common divisor of 90 and 120, which is 30. As 120 ÷ 30 = 4, and 90 ÷ 30 = 3, one gets
formula_1
Which method is faster "by hand" depends on the fraction and the ease with which common factors are spotted. In case a denominator and numerator remain that are too large to ensure they are coprime by inspection, a greatest common divisor computation is needed anyway to ensure the fraction is actually irreducible.
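The reduction procedure is easy to express in code; here is a minimal Python sketch (not from the article) that uses the built-in greatest common divisor:

from math import gcd

def reduce_fraction(a, b):
    """Return the irreducible form of a/b with a positive denominator."""
    if b == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    g = gcd(a, b)               # gcd in Python is always non-negative
    a, b = a // g, b // g
    if b < 0:                   # normalise so the denominator is positive
        a, b = -a, -b
    return a, b

print(reduce_fraction(120, 90))   # (4, 3)
print(reduce_fraction(4, -6))     # (-2, 3)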
Uniqueness.
Every rational number has a "unique" representation as an irreducible fraction with a positive denominator (however, 2/3 = −2/−3, although both are irreducible). Uniqueness is a consequence of the unique prime factorization of integers, since "a"/"b" = "c"/"d" implies "ad" = "bc", and so both sides of the latter must share the same prime factorization, yet "a" and "b" share no prime factors so the set of prime factors of "a" (with multiplicity) is a subset of those of "c" and vice versa, meaning "a" = "c" and by the same argument "b" = "d".
Applications.
The fact that any rational number has a unique representation as an irreducible fraction is utilized in various proofs of the irrationality of the square root of 2 and of other irrational numbers. For example, one proof notes that if √2 could be represented as a ratio of integers, then it would have in particular the fully reduced representation "a"/"b" where "a" and "b" are the smallest possible; but given that "a"/"b" equals √2, so does (2"b" − "a")/("a" − "b") (since cross-multiplying this with "a"/"b" shows that they are equal). Since "a" > "b" (because √2 is greater than 1), the latter is a ratio of two smaller integers. This is a contradiction, so the premise that the square root of two has a representation as the ratio of two integers is false.
Generalization.
The notion of irreducible fraction generalizes to the field of fractions of any unique factorization domain: any element of such a field can be written as a fraction in which denominator and numerator are coprime, by dividing both by their greatest common divisor. This applies notably to rational expressions over a field. The irreducible fraction for a given element is unique up to multiplication of denominator and numerator by the same invertible element. In the case of the rational numbers this means that any number has two irreducible fractions, related by a change of sign of both numerator and denominator; this ambiguity can be removed by requiring the denominator to be positive. In the case of rational functions the denominator could similarly be required to be a monic polynomial.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\frac{120}{90}=\\frac{12}{9}=\\frac{4}{3}"
},
{
"math_id": 1,
"text": " \\frac{120}{90}=\\frac{4}{3}"
}
] |
https://en.wikipedia.org/wiki?curid=14822
|
1482218
|
Total relation
|
Type of logical relation
In mathematics, a binary relation "R" ⊆ "X"×"Y" between two sets "X" and "Y" is total (or left total) if the source set "X" equals the domain {"x" : there is a "y" with "xRy" }. Conversely, "R" is called right total if "Y" equals the range {"y" : there is an "x" with "xRy" }.
When "f": "X" → "Y" is a function, the domain of "f" is all of "X", hence "f" is a total relation. On the other hand, if "f" is a partial function, then the domain may be a proper subset of "X", in which case "f" is not a total relation.
"A binary relation is said to be total with respect to a universe of discourse just in case everything in that universe of discourse stands in that relation to something else."
Algebraic characterization.
Total relations can be characterized algebraically by equalities and inequalities involving compositions of relations. To this end, let formula_0 be two sets, and let formula_1 For any two sets formula_2 let formula_3 be the universal relation between formula_4 and formula_5 and let formula_6 be the identity relation on formula_7 We use the notation formula_8 for the converse relation of formula_9
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X,Y"
},
{
"math_id": 1,
"text": "R\\subseteq X\\times Y."
},
{
"math_id": 2,
"text": "A,B,"
},
{
"math_id": 3,
"text": "L_{A,B}=A\\times B"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "B,"
},
{
"math_id": 6,
"text": "I_A=\\{(a,a):a\\in A\\}"
},
{
"math_id": 7,
"text": "A."
},
{
"math_id": 8,
"text": "R^\\top"
},
{
"math_id": 9,
"text": "R."
},
{
"math_id": 10,
"text": "R"
},
{
"math_id": 11,
"text": "W"
},
{
"math_id": 12,
"text": "S\\subseteq W\\times X,"
},
{
"math_id": 13,
"text": "S\\ne\\emptyset"
},
{
"math_id": 14,
"text": "SR\\ne\\emptyset."
},
{
"math_id": 15,
"text": "I_X\\subseteq RR^\\top."
},
{
"math_id": 16,
"text": "L_{X,Y}=RL_{Y,Y}."
},
{
"math_id": 17,
"text": "Y\\ne\\emptyset."
},
{
"math_id": 18,
"text": "\\overline{RL_{Y,Y}}=\\emptyset."
},
{
"math_id": 19,
"text": "\\overline R\\subseteq R\\overline{I_Y}."
},
{
"math_id": 20,
"text": "Z"
},
{
"math_id": 21,
"text": "S\\subseteq Y\\times Z,"
},
{
"math_id": 22,
"text": "\\overline{RS}\\subseteq R\\overline S."
}
] |
https://en.wikipedia.org/wiki?curid=1482218
|
14823018
|
Coherence condition
|
Collection of conditions requiring that various compositions of elementary morphisms are equal
In mathematics, and particularly category theory, a coherence condition is a collection of conditions requiring that various compositions of elementary morphisms are equal. Typically the elementary morphisms are part of the data of the category. A coherence theorem states that, in order to be assured that all these equalities hold, it suffices to check a small number of identities.
An illustrative example: a monoidal category.
Part of the data of a monoidal category is a chosen morphism
formula_0, called the "associator":
formula_1
for each triple of objects formula_2 in the category. Using compositions of these formula_0, one can construct a morphism
formula_3
Actually, there are many ways to construct such a morphism as a composition of various formula_0. One coherence condition that is typically imposed is that these compositions are all equal.
Typically one proves a coherence condition using a coherence theorem, which states that one only needs to check a few equalities of compositions in order to show that the rest also hold. In the above example, one only needs to check that, for all quadruples of objects formula_4, the following diagram commutes.
Any two morphisms from formula_5 to formula_6 constructed as compositions of various formula_0 are equal.
Further examples.
Two simple examples that illustrate the definition are as follows. Both are directly from the definition of a category.
Identity.
Let "f" : "A" → "B" be a morphism of a category containing two objects "A" and "B". Associated with these objects are the identity morphisms 1"A" : "A" → "A" and 1"B" : "B" → "B". By composing these with "f", we construct two morphisms:
"f" 1"A" : "A" → "B", and
1"B" "f" : "A" → "B".
Both are morphisms between the same objects as "f". We have, accordingly, the following coherence statement:
"f" 1"A" = "f" = 1"B" "f".
Associativity of composition.
Let "f" : "A" → "B", "g" : "B" → "C" and "h" : "C" → "D" be morphisms of a category containing objects "A", "B", "C" and "D". By repeated composition, we can construct a morphism from "A" to "D" in two ways:
("h" "g") "f" : "A" → "D", and
"h" ("g" "f") : "A" → "D".
We have now the following coherence statement:
("h" "g") "f" = "h" ("g" "f").
In these two particular examples, the coherence statements are "theorems" for the case of an abstract category, since they follow directly from the axioms; in fact, they "are" axioms. For the case of a concrete mathematical structure, they can be viewed as conditions, namely as requirements for the mathematical structure under consideration to be a concrete category, requirements that such a structure may meet or fail to meet.
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\alpha_{A,B,C}"
},
{
"math_id": 1,
"text": "\\alpha_{A,B,C} \\colon (A\\otimes B)\\otimes C \\rightarrow A\\otimes(B\\otimes C)"
},
{
"math_id": 2,
"text": "A, B, C"
},
{
"math_id": 3,
"text": "( ( A_N \\otimes A_{N-1} ) \\otimes A_{N-2} ) \\otimes \\cdots \\otimes A_1) \\rightarrow ( A_N \\otimes ( A_{N-1} \\otimes \\cdots \\otimes ( A_2 \\otimes A_1) ). "
},
{
"math_id": 4,
"text": "A,B,C,D"
},
{
"math_id": 5,
"text": " ( ( \\cdots ( A_N \\otimes A_{N-1} ) \\otimes \\cdots ) \\otimes A_2 ) \\otimes A_1) "
},
{
"math_id": 6,
"text": " ( A_N \\otimes ( A_{N-1} \\otimes ( \\cdots \\otimes ( A_2 \\otimes A_1) \\cdots ) ) "
}
] |
https://en.wikipedia.org/wiki?curid=14823018
|
1482326
|
Term symbol
|
Notation in quantum physics
In atomic physics, a term symbol is an abbreviated description of the total spin and orbital angular momentum quantum numbers of the electrons in a multi-electron atom. So while the word "symbol" suggests otherwise, it represents an actual "value" of a physical quantity.
For a given electron configuration of an atom, its state depends also on its total angular momentum, including spin and orbital components, which are specified by the term symbol. The usual atomic term symbols assume LS coupling (also known as Russell–Saunders coupling) in which the all-electron total quantum numbers for orbital ("L"), spin ("S") and total ("J") angular momenta are good quantum numbers.
In the terminology of atomic spectroscopy, "L" and "S" together specify a term; "L", "S", and "J" specify a level; and "L", "S", "J" and the magnetic quantum number "M""J" specify a state. The conventional term symbol has the form 2"S"+1"L""J", where "J" is written optionally in order to specify a level. "L" is written using spectroscopic notation: for example, it is written "S", "P", "D", or "F" to represent "L" = 0, 1, 2, or 3 respectively. For coupling schemes other than LS coupling, such as the jj coupling that applies to some heavy elements, other notations are used to specify the term.
Term symbols apply to both neutral and charged atoms, and to their ground and excited states. Term symbols usually specify the total for all electrons in an atom, but are sometimes used to describe electrons in a given subshell or set of subshells, for example to describe each open subshell in an atom having more than one. The ground state term symbol for neutral atoms is described, in most cases, by Hund's rules. Neutral atoms of the chemical elements have the same term symbol "for each column" in the s-block and p-block elements, but differ in d-block and f-block elements where the ground-state electron configuration changes within a column, where exceptions to Hund's rules occur. Ground state term symbols for the chemical elements are given below.
Term symbols are also used to describe angular momentum quantum numbers for atomic nuclei and for molecules. For molecular term symbols, Greek letters are used to designate the component of orbital angular momenta along the molecular axis.
The use of the word "term" for an atom's electronic state is based on the Rydberg–Ritz combination principle, an empirical observation that the wavenumbers of spectral lines can be expressed as the difference of two "terms". This was later summarized by the Bohr model, which identified the terms with quantized energy levels, and the spectral wavenumbers of these levels with photon energies.
Tables of atomic energy levels identified by their term symbols are available for atoms and ions in ground and excited states from the National Institute of Standards and Technology (NIST).
Term symbols with "LS" coupling.
The usual atomic term symbols assume LS coupling (also known as Russell–Saunders coupling), in which the atom's total spin quantum number "S" and the total orbital angular momentum quantum number "L" are "good quantum numbers". (Russell–Saunders coupling is named after Henry Norris Russell and Frederick Albert Saunders, who described it in 1925). The spin-orbit interaction then couples the total spin and orbital moments to give the total electronic angular momentum quantum number "J". Atomic states are then well described by term symbols of the form:
formula_0
where
The orbital symbols S, P, D and F are derived from the characteristics of the spectroscopic lines corresponding to s, p, d, and f orbitals: sharp, principal, diffuse, and fundamental; the rest are named in alphabetical order from G onwards (omitting J, S and P). When used to describe electronic states of an atom, the term symbol is often written following the electron configuration. For example, 1s22s22p2 3P0 represents the ground state of a neutral carbon atom. The superscript 3 indicates that the spin multiplicity 2"S" + 1 is 3 (it is a triplet state), so "S" = 1; the letter "P" is spectroscopic notation for "L" = 1; and the subscript 0 is the value of "J" (in this case "J" = "L" − "S").
Small letters refer to individual orbitals or one-electron quantum numbers, whereas capital letters refer to many-electron states or their quantum numbers.
Terminology: terms, levels, and states.
For a given electron configuration,
The product (2"S" + 1)(2"L" + 1) as a number of possible states formula_7 with given "S" and "L" is also the number of basis states in the uncoupled representation, where formula_1, formula_8, formula_2, formula_9 (formula_8 and formula_9 are the z-axis components of total spin and total orbital angular momentum respectively) are good quantum numbers whose corresponding operators mutually commute. With given formula_1 and formula_2, the eigenstates formula_7 in this representation span a function space of dimension (2"S" + 1)(2"L" + 1), as formula_10 and formula_11. In the coupled representation where total angular momentum (spin + orbital) is treated, the associated states (or eigenstates) are formula_12 and these states span a function space with dimension of
<templatestyles src="Block indent/styles.css"/>formula_13
as formula_14. Obviously, the dimension of function space in both representations must be the same.
As an example, for formula_15, there are (2×1+1)(2×2+1) = 15 different states (= eigenstates in the uncoupled representation) corresponding to the 3D "term", of which (2×3+1) = 7 belong to the 3D3 ("J" = 3) level. The sum of (2"J" + 1) for all levels in the same term equals (2"S"+1)(2"L"+1), as the dimensions of both representations must be equal as described above. In this case, "J" can be 1, 2, or 3, so 3 + 5 + 7 = 15.
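The agreement of the two dimension counts is easy to verify numerically; a brief Python sketch (illustrative only) for the 3D term used above:

S, L = 1, 2                                   # the 3D term: S = 1, L = 2
uncoupled = (2 * S + 1) * (2 * L + 1)         # states labelled by M_S and M_L
coupled = sum(2 * J + 1 for J in range(abs(L - S), L + S + 1))   # J = 1, 2, 3
print(uncoupled, coupled)                     # 15 15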
Term symbol parity.
The parity of a term symbol is calculated as
<templatestyles src="Block indent/styles.css"/>formula_16
where formula_17 is the orbital quantum number for each electron. formula_18 means even parity while formula_19 is for odd parity. In fact, only electrons in odd orbitals (with formula_20 odd) contribute to the total parity: an odd number of electrons in odd orbitals (those with an odd formula_20 such as in p, f...) correspond to an odd term symbol, while an even number of electrons in odd orbitals correspond to an even term symbol. The number of electrons in even orbitals is irrelevant as any sum of even numbers is even. For any closed subshell, the number of electrons is formula_21 which is even, so the summation of formula_17 in closed subshells is always an even number. The summation of quantum numbers formula_22 over open (unfilled) subshells of odd orbitals (formula_20 odd) determines the parity of the term symbol. If the number of electrons in this "reduced" summation is odd (even) then the parity is also odd (even).
When it is odd, the parity of the term symbol is indicated by a superscript letter "o", otherwise it is omitted:
<templatestyles src="Block indent/styles.css"/>2P has odd parity, but 3P0 has even parity.
Alternatively, parity may be indicated with a subscript letter "g" or "u", standing for "gerade" (German for "even") or "ungerade" ("odd"):
<templatestyles src="Block indent/styles.css"/>
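The parity rule is a one-line computation; a small Python sketch (illustrative, with assumed example configurations):

def term_parity(orbital_quantum_numbers):
    """Return +1 (even) or -1 (odd) from the list of l_i of all electrons."""
    return (-1) ** sum(orbital_quantum_numbers)

print(term_parity([1, 1]))      # +1: two p electrons (as in 2p2) give even parity
print(term_parity([1, 1, 1]))   # -1: three p electrons give odd parity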
Ground state term symbol.
It is relatively easy to predict the term symbol for the ground state of an atom using Hund's rules. It corresponds to a state with maximum "S" and "L".
Electrons in the open subshell are assigned spin +<templatestyles src="Fraction/styles.css" />1⁄2 until the subshell is half filled, and only then is spin −<templatestyles src="Fraction/styles.css" />1⁄2 assigned to them. The total angular momentum quantum number of the ground level is then
"J" = |"L" − "S"| if the open subshell is less than half filled;
"J" = "L" + "S" if it is more than half filled;
"J" = "S" if it is exactly half filled (in which case "L" = 0).
As an example, in the case of fluorine, the electronic configuration is 1s22s22p5.
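The steps above can be automated for a single open subshell. The following Python sketch (a simplified illustration assuming LS coupling and exactly one open subshell, not a general tool) reproduces the 2P3/2 ground term of fluorine and the 3P0 ground term of carbon:

def ground_term(l, n):
    """Hund's-rules S, L, J for n electrons in a subshell with orbital number l."""
    assert 0 < n <= 2 * (2 * l + 1)
    m_l_values = list(range(l, -l - 1, -1))
    spins, m_ls = [], []
    for i in range(n):
        if i < 2 * l + 1:                      # first pass: all spins +1/2
            spins.append(0.5)
            m_ls.append(m_l_values[i])
        else:                                  # second pass: spins -1/2
            spins.append(-0.5)
            m_ls.append(m_l_values[i - (2 * l + 1)])
    S = abs(sum(spins))
    L = abs(sum(m_ls))
    J = abs(L - S) if n <= 2 * l + 1 else L + S
    return S, L, J

print(ground_term(1, 5))   # (0.5, 1, 1.5): the 2P(3/2) ground state of fluorine 2p5
print(ground_term(1, 2))   # (1.0, 1, 0.0): the 3P0 ground state of carbon 2p2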
Atomic term symbols of the chemical elements.
In the periodic table, because atoms of elements in a column usually have the same outer electron structure, and always have the same electron structure in the "s-block" and "p-block" elements (see block (periodic table)), all elements may share the same ground state term symbol for the column. Thus, hydrogen and the alkali metals are all 2S<templatestyles src="Fraction/styles.css" />1⁄2, the alkaline earth metals are 1S0, the boron column elements are 2P<templatestyles src="Fraction/styles.css" />1⁄2, the carbon column elements are 3P0, the pnictogens are 4S<templatestyles src="Fraction/styles.css" />3⁄2, the chalcogens are 3P2, the halogens are 2P<templatestyles src="Fraction/styles.css" />3⁄2, and the inert gases are 1S0, per the rule for full shells and subshells stated above.
Term symbols for the ground states of most chemical elements are given in the collapsed table below. In the d-block and f-block, the term symbols are not always the same for elements in the same column of the periodic table, because open shells of several d or f electrons have several closely spaced terms whose energy ordering is often perturbed by the addition of an extra complete shell to form the next element in the column.
For example, the table shows that the first pair of vertically adjacent atoms with different ground-state term symbols are V and Nb. The 6D<templatestyles src="Fraction/styles.css" />1⁄2 ground state of Nb corresponds to an excited state of V 2112 cm−1 above the 4F<templatestyles src="Fraction/styles.css" />3⁄2 ground state of V, which in turn corresponds to an excited state of Nb 1143 cm−1 above the Nb ground state. These energy differences are small compared to the 15158 cm−1 difference between the ground and first excited state of Ca, which is the last element before V with no d electrons.
Term symbols for an electron configuration.
The process to calculate all possible term symbols for a given electron configuration is somewhat longer.
Alternative method using group theory.
For configurations with at most two electrons (or holes) per subshell, an alternative and much quicker method of arriving at the same result can be obtained from group theory. The configuration 2p2 has the symmetry of the following direct product in the full rotation group:
<templatestyles src="Block indent/styles.css"/>Γ(1) × Γ(1) = Γ(0) + [Γ(1)] + Γ(2),
which, using the familiar labels Γ(0) = S, Γ(1) = P and Γ(2) = D, can be written as
<templatestyles src="Block indent/styles.css"/>P × P = S + [P] + D.
The square brackets enclose the anti-symmetric square. Hence the 2p2 configuration has components with the following symmetries:
<templatestyles src="Block indent/styles.css"/>S + D (from the symmetric square and hence having symmetric spatial wavefunctions);
<templatestyles src="Block indent/styles.css"/>P (from the anti-symmetric square and hence having an anti-symmetric spatial wavefunction).
The Pauli principle and the requirement for electrons to be described by anti-symmetric wavefunctions imply that only the following combinations of spatial and spin symmetry are allowed:
<templatestyles src="Block indent/styles.css"/>1S + 1D (spatially symmetric, spin anti-symmetric)
<templatestyles src="Block indent/styles.css"/>3P (spatially anti-symmetric, spin symmetric).
Then one can move to step five in the procedure above, applying Hund's rules.
The group theory method can be carried out for other such configurations, like 3d2, using the general formula
<templatestyles src="Block indent/styles.css"/>Γ(j) × Γ(j) = Γ(2j) + Γ(2j−2) + ⋯ + Γ(0) + [Γ(2j−1) + ⋯ + Γ(1)].
The symmetric square will give rise to singlets (such as 1S, 1D, & 1G), while the anti-symmetric square gives rise to triplets (such as 3P & 3F).
More generally, one can use
<templatestyles src="Block indent/styles.css"/>Γ("j") × Γ("k") = Γ("j"+"k") + Γ("j"+"k"−1) + ⋯ + Γ(|"j"−"k"|)
where, since the product is not a square, it is not split into symmetric and anti-symmetric parts. Where two electrons come from inequivalent orbitals, both a singlet and a triplet are allowed in each case.
Summary of various coupling schemes and corresponding term symbols.
Basic concepts for all coupling schemes:
"LS"1 coupling.
The most famous coupling schemes are introduced here, but these schemes can be mixed to express the energy state of an atom. This summary is based on .
Racah notation and Paschen notation.
These are notations for describing states of singly excited atoms, especially noble gas atoms. Racah notation is basically a combination of "LS" or Russell–Saunders coupling and "J"1"L"2 coupling. "LS" coupling is for a parent ion and "J"1"L"2 coupling is for a coupling of the parent ion and the excited electron. The parent ion is an unexcited part of the atom. For example, in Ar atom excited from a ground state ...3p6 to an excited state ...3p54p in electronic configuration, 3p5 is for the parent ion while 4p is for the excited electron.
In Racah notation, states of excited atoms are denoted as formula_73. Quantities with a subscript 1 are for the parent ion, n and ℓ are principal and orbital quantum numbers for the excited electron, "K" and "J" are quantum numbers for formula_74 and formula_75 where formula_24 and formula_25 are orbital angular momentum and spin for the excited electron respectively. “"o"” represents a parity of excited atom. For an inert (noble) gas atom, usual excited states are "N"p5"nℓ" where "N" = 2, 3, 4, 5, 6 for Ne, Ar, Kr, Xe, Rn, respectively in order. Since the parent ion can only be 2P1/2 or 2P3/2, the notation can be shortened to formula_76 or formula_77, where nℓ means the parent ion is in 2P3/2 while nℓ′ is for the parent ion in 2P1/2 state.
Paschen notation is a somewhat odd notation; it is an old notation made to attempt to fit an emission spectrum of neon to a hydrogen-like theory. It has a rather simple structure to indicate energy levels of an excited atom. The energy levels are denoted as n′ℓ#. ℓ is just an orbital quantum number of the excited electron. n′ℓ is written in a way that 1s for ("n" = "N" + 1, "ℓ" = 0), 2p for ("n" = "N" + 1, "ℓ" = 1), 2s for ("n" = "N" + 2, "ℓ" = 0), 3p for ("n" = "N" + 2, "ℓ" = 1), 3s for ("n" = "N" + 3, "ℓ" = 0), etc. Rules of writing n′ℓ from the lowest electronic configuration of the excited electron are: (1) ℓ is written first, (2) n′ is consecutively written from 1 and the relation of "ℓ" = "n′" − 1, "n′" − 2, ... , 0 (like a relation between n and ℓ) is kept. n′ℓ is an attempt to describe electronic configuration of the excited electron in a way of describing electronic configuration of hydrogen atom. "#" is an additional number denoted to each energy level of given n′ℓ (there can be multiple energy levels of given electronic configuration, denoted by the term symbol). "#" denotes each level in order, for example, "#" = 10 is for a lower energy level than "#" = 9 level and "#" = 1 is for the highest level in a given n′ℓ. An example of Paschen notation is below.
|
[
{
"math_id": 0,
"text": "\n^{2S+1}L_J\n"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "(2S+1)(2L+1)"
},
{
"math_id": 4,
"text": "J"
},
{
"math_id": 5,
"text": "2J+1"
},
{
"math_id": 6,
"text": "M_J"
},
{
"math_id": 7,
"text": "|S,M_S,L,M_L\\rangle"
},
{
"math_id": 8,
"text": "M_S"
},
{
"math_id": 9,
"text": "M_L"
},
{
"math_id": 10,
"text": "M_S=S,S-1,\\dots, -S+1, -S"
},
{
"math_id": 11,
"text": "M_L=L,L-1,...,-L+1,-L"
},
{
"math_id": 12,
"text": "|J,M_J,S,L\\rangle"
},
{
"math_id": 13,
"text": "\\sum_{J=J_{\\min}=|L-S|}^{J_{\\max}=L+S}(2J+1)"
},
{
"math_id": 14,
"text": "M_J=J,J-1,\\dots,-J+1,-J"
},
{
"math_id": 15,
"text": "S = 1, L = 2"
},
{
"math_id": 16,
"text": "P = (-1)^{\\sum_i \\ell_i} ,"
},
{
"math_id": 17,
"text": "\\ell_i"
},
{
"math_id": 18,
"text": "P=1\n"
},
{
"math_id": 19,
"text": "P=-1"
},
{
"math_id": 20,
"text": "\\ell"
},
{
"math_id": 21,
"text": "2(2\\ell+1)"
},
{
"math_id": 22,
"text": "\\sum_{i}\\ell_{i} "
},
{
"math_id": 23,
"text": "m_\\ell"
},
{
"math_id": 24,
"text": "\\boldsymbol{\\ell}"
},
{
"math_id": 25,
"text": "\\mathbf{s}"
},
{
"math_id": 26,
"text": "\\mathbf{j}"
},
{
"math_id": 27,
"text": "\\mathbf{j} = \\boldsymbol{\\ell} + \\mathbf{s}"
},
{
"math_id": 28,
"text": "\\mathbf{L}"
},
{
"math_id": 29,
"text": "\\mathbf{L}=\\sum_{i}\\boldsymbol{\\ell}_{i}"
},
{
"math_id": 30,
"text": "\\mathbf{S}"
},
{
"math_id": 31,
"text": "\\mathbf{S}=\\sum_{i}\\mathbf{s}_{i}"
},
{
"math_id": 32,
"text": "\\mathbf{J}"
},
{
"math_id": 33,
"text": "\\mathbf{J} = \\mathbf{L} + \\mathbf{S}"
},
{
"math_id": 34,
"text": "\\mathbf{J} = \\sum_{i}\\mathbf{j}_{i}"
},
{
"math_id": 35,
"text": "{{\\hat{\\ell}}^2}\\left| \\ell,m_\\ell,\\ldots \\right\\rangle ={\\hbar ^2}\\ell\\left( \\ell+1 \\right)\\left| \\ell,m_\\ell,\\ldots \\right\\rangle "
},
{
"math_id": 36,
"text": "\\mathbf{S}= \\mathbf{S}_{A} + \\mathbf{S}_{B}"
},
{
"math_id": 37,
"text": "\\mathbf{L}=\\mathbf{L}_{A}+ \\mathbf{L}_{B}"
},
{
"math_id": 38,
"text": "X= X_{A}+X_{B},X_{A}+X_{B}-1, \\dots, |X_{A}-X_{B}|"
},
{
"math_id": 39,
"text": "\\mathbf{J}=\\mathbf{L}+\\mathbf{S}"
},
{
"math_id": 40,
"text": "n{\\ell^N}{{(}^{(2S+1)}}{L_J})"
},
{
"math_id": 41,
"text": "{{(}^{(2S+1)}}{{L}_{J}})"
},
{
"math_id": 42,
"text": "n{\\ell^N}"
},
{
"math_id": 43,
"text": "n,\\ell"
},
{
"math_id": 44,
"text": "n \\ell"
},
{
"math_id": 45,
"text": "L > S"
},
{
"math_id": 46,
"text": "S>L"
},
{
"math_id": 47,
"text": "{{(}^{(2S+1)}}{L_J})"
},
{
"math_id": 48,
"text": "{^{\\left( 2S+1 \\right)}{L}}"
},
{
"math_id": 49,
"text": "P={{\\left( -1 \\right)}^{\\underset{i}{\\mathop \\sum }\\, {\\ell_i}}}"
},
{
"math_id": 50,
"text": "P = -1"
},
{
"math_id": 51,
"text": "\\mathbf{J} = \\sum_{i} \\mathbf{j}_{i}"
},
{
"math_id": 52,
"text": "{{\\left( {n_1}{\\ell_1}_{j_1}^{N_1}{n_2}{\\ell_2}_{j_2}^{N_2}\\ldots \\right)}_{J}}"
},
{
"math_id": 53,
"text": "{{\\left( \\text{6p}_{\\frac{1}{2}}^{2}\\text{6p}_{\\frac{3}{2}}^{} \\right)}^{o}}_{3/2}"
},
{
"math_id": 54,
"text": "\\text{6p}^{2}_{1/2}"
},
{
"math_id": 55,
"text": "\\text{6p}_{\\frac{3}{2}}^{}"
},
{
"math_id": 56,
"text": "j=1/2"
},
{
"math_id": 57,
"text": "j=3/2"
},
{
"math_id": 58,
"text": "\\text{4d}_{5/2}^{3}\\text{4d}_{3/2}^{2}~\\ {{\\left( \\frac{9}{2},2 \\right)}_{11/2}}"
},
{
"math_id": 59,
"text": "J_1"
},
{
"math_id": 60,
"text": "\\text{4d}^{3}_{5/2}"
},
{
"math_id": 61,
"text": "\\text{4d}^{2}_{3/2}"
},
{
"math_id": 62,
"text": "\\mathbf{J}=\\mathbf{J}_{1}+\\mathbf{J}_{2}"
},
{
"math_id": 63,
"text": "\\mathbf{K}=\\mathbf{J}_1+\\mathbf{L}_2"
},
{
"math_id": 64,
"text": "\\mathbf{J}= \\mathbf{K}+\\mathbf{S}_2"
},
{
"math_id": 65,
"text": "{n_1}{\\ell_1}^{N_1}\\left( \\mathrm{term}_1 \\right){n_2}{\\ell_2}^{N_2}\\left( \\mathrm{term}_2 \\right)~\\ {^{\\left( 2{S_2}+1 \\right)}{{{\\left[ K \\right]}_J}}}"
},
{
"math_id": 66,
"text": "{J_1}= \\frac{1}{2},{l_2}=4,~{s_2}=1/2"
},
{
"math_id": 67,
"text": "{J_1} = \\frac{7}{2},{L_2}=2,~{S_2}=0"
},
{
"math_id": 68,
"text": "\\mathbf{K} \\ell= \\mathbf{L} +\\mathbf{S_{1}}"
},
{
"math_id": 69,
"text": "\\mathbf{J} = \\mathbf{K}+\\mathbf{S_{2}}"
},
{
"math_id": 70,
"text": "{n_1}{\\ell_1}^{N_1}\\left( \\mathrm{term}_1\\right){n_2} {\\ell_2}^{N_2} \\left(\\mathrm{term}_2\\right)\\ ~L~\\ {^{\\left(2{S_2}+1\\right)}{{{\\left[K\\right]}_{J}}}}"
},
{
"math_id": 71,
"text": "{L_1}=1,~{L_2}=1,~{S_1}=\\frac{3}{2}, ~{S_2}=1"
},
{
"math_id": 72,
"text": "L=2, K=5/2, J=7/2"
},
{
"math_id": 73,
"text": "\\left( ^{\\left( 2{{S}_{1}}+1 \\right)}{{L}_{1}}_{{{J}_{1}}} \\right)n\\ell\\left[ K \\right]_{J}^{o}"
},
{
"math_id": 74,
"text": "\\mathbf{K}=\\mathbf{J}_{1}+\\boldsymbol{\\ell}"
},
{
"math_id": 75,
"text": "\\mathbf{J}=\\mathbf{K}+\\mathbf{s}"
},
{
"math_id": 76,
"text": "n\\ell\\left[ K \\right]_{J}^{o}"
},
{
"math_id": 77,
"text": "n\\ell'\\left[ K \\right]_{J}^{o}"
}
] |
https://en.wikipedia.org/wiki?curid=1482326
|
14824044
|
Trace identity
|
Equations involving the trace of a matrix
In mathematics, a trace identity is any equation involving the trace of a matrix.
Properties.
Trace identities are invariant under simultaneous conjugation.
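A quick numerical check of this invariance (a sketch using NumPy with randomly generated matrices, purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
P = rng.normal(size=(3, 3)) + 3 * np.eye(3)    # assumed invertible for this sketch
P_inv = np.linalg.inv(P)

# the trace expression tr(AB) is unchanged when A and B are conjugated simultaneously
lhs = np.trace(A @ B)
rhs = np.trace((P @ A @ P_inv) @ (P @ B @ P_inv))
print(np.isclose(lhs, rhs))    # True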
Uses.
They are frequently used in the invariant theory of formula_0 matrices to find the generators and relations of the ring of invariants, and therefore are useful in answering questions similar to that posed by Hilbert's fourteenth problem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n \\times n"
},
{
"math_id": 1,
"text": "\\operatorname{tr}\\left(A^n\\right) - c_{n-1} \\operatorname{tr}\\left(A^{n - 1}\\right) + \\cdots + (-1)^n n \\det(A) = 0\\,"
},
{
"math_id": 2,
"text": "c_i"
},
{
"math_id": 3,
"text": "\\operatorname{tr}(A) = \\operatorname{tr}\\left(A^\\mathsf{T}\\right).\\,"
}
] |
https://en.wikipedia.org/wiki?curid=14824044
|
14826
|
Isomorphism class
|
Equivalence class of isomorphic mathematical objects
In mathematics, an isomorphism class is a collection of mathematical objects which are isomorphic to each other.
Isomorphism classes are often used when the exact identity of the objects is considered irrelevant, so that isomorphic objects can be treated as indistinguishable.
Definition in category theory.
Isomorphisms and isomorphism classes can be formalized in great generality using the language of category theory. Let formula_0 be a category. A morphism formula_1 is called an isomorphism if there is a morphism formula_2 such that formula_3 and formula_4. Consider the equivalence relation that regards two objects as related if there is an isomorphism between them. The equivalence classes of this equivalence relation are called isomorphism classes.
Examples.
Examples of isomorphism classes are plentiful in mathematics.
However, there are circumstances in which the isomorphism class of an object conceals vital information about it.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "f : A \\to B"
},
{
"math_id": 2,
"text": "g : B \\to A"
},
{
"math_id": 3,
"text": "gf = \\text{id}_A"
},
{
"math_id": 4,
"text": "fg = \\text{id}_B"
},
{
"math_id": 5,
"text": "X"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "\\pi_1(X,p)"
},
{
"math_id": 8,
"text": "\\pi_1(X)"
}
] |
https://en.wikipedia.org/wiki?curid=14826
|
148271
|
Best-first search
|
Algorithm
Best-first search is a class of search algorithms, which explores a graph by expanding the most promising node chosen according to a specified rule.
Judea Pearl described the best-first search as estimating the promise of node "n" by a "heuristic evaluation function formula_0 which, in general, may depend on the description of "n", the description of the goal, the information gathered by the search up to that point, and most importantly, on any extra knowledge about the problem domain."
Some authors have used "best-first search" to refer specifically to a search with a heuristic that attempts to predict how close the end of a path is to a solution (or, goal), so that paths which are judged to be closer to a solution (or, goal) are extended first. This specific type of search is called "greedy best-first search" or "pure heuristic search".
Efficient selection of the current best candidate for extension is typically implemented using a priority queue.
The A* search algorithm is an example of a best-first search algorithm, as is B*. Best-first algorithms are often used for path finding in combinatorial search. Neither A* nor B* is a greedy best-first search, as they incorporate the distance from the start in addition to estimated distances to the goal.
Greedy BeFS.
Using a greedy algorithm, expand the first successor of the parent. After a successor is generated:
Below is a pseudocode example of this algorithm, where queue represents a priority queue which orders nodes based on their heuristic distances from the goal. This implementation keeps track of visited nodes, and can therefore be used for undirected graphs. It can be modified to retrieve the path.
procedure GBS(start, target) is:
mark start as visited
add start to queue
while queue is not empty do:
current_node ← vertex of queue with min distance to target
remove current_node from queue
foreach neighbor n of current_node do:
if n not in visited then:
if n is target:
return n
else:
mark n as visited
add n to queue
return failure
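A runnable Python version of the pseudocode above (a sketch using a binary heap as the priority queue; the example graph and heuristic values are made up):

import heapq

def greedy_best_first_search(graph, h, start, target):
    """graph: dict mapping a node to its neighbours; h: heuristic distance to target."""
    visited = {start}
    queue = [(h(start), start)]
    while queue:
        _, current = heapq.heappop(queue)      # node with minimum heuristic value
        for n in graph.get(current, ()):
            if n not in visited:
                if n == target:
                    return n
                visited.add(n)
                heapq.heappush(queue, (h(n), n))
    return None                                # failure

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(greedy_best_first_search(graph, h.get, "A", "D"))   # D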
|
[
{
"math_id": 0,
"text": "f(n)"
}
] |
https://en.wikipedia.org/wiki?curid=148271
|
14828
|
Isomorphism
|
In mathematics, invertible homomorphism
In mathematics, an isomorphism is a structure-preserving mapping between two structures of the same type that can be reversed by an inverse mapping. Two mathematical structures are isomorphic if an isomorphism exists between them. The word isomorphism is derived from the Ancient Greek: ἴσος "isos" "equal", and μορφή "morphe" "form" or "shape".
The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may be identified. In mathematical jargon, one says that two objects are the same up to an isomorphism.
An automorphism is an isomorphism from a structure to itself. An isomorphism between two structures is a canonical isomorphism (a canonical map that is an isomorphism) if there is only one isomorphism between the two structures (as is the case for solutions of a universal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for every prime number p, all fields with p elements are canonically isomorphic, with a unique isomorphism. The isomorphism theorems provide canonical isomorphisms that are not unique.
The term isomorphism is mainly used for algebraic structures. In this case, mappings are called homomorphisms, and a homomorphism is an isomorphism if and only if it is bijective.
In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example:
Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea.
Examples.
Logarithm and exponential.
Let formula_0 be the multiplicative group of positive real numbers, and let formula_1 be the additive group of real numbers.
The logarithm function formula_2 satisfies formula_3 for all formula_4 so it is a group homomorphism. The exponential function formula_5 satisfies formula_6 for all formula_7 so it too is a homomorphism.
The identities formula_8 and formula_9 show that formula_10 and formula_11 are inverses of each other. Since formula_10 is a homomorphism that has an inverse that is also a homomorphism, formula_10 is an isomorphism of groups.
The formula_10 function is an isomorphism which translates multiplication of positive real numbers into addition of real numbers. This facility makes it possible to multiply real numbers using a ruler and a table of logarithms, or using a slide rule with a logarithmic scale.
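As a quick numerical illustration of these identities (a sketch, not part of the original argument), the homomorphism and inverse properties can be checked in Python:
from math import log, exp, isclose

x, y = 2.5, 7.0
assert isclose(log(x * y), log(x) + log(y))                  # log sends products to sums
assert isclose(exp(x + y), exp(x) * exp(y))                  # exp sends sums to products
assert isclose(exp(log(x)), x) and isclose(log(exp(y)), y)   # the two maps are mutually inverse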
Integers modulo 6.
Consider the group formula_12 the integers from 0 to 5 with addition modulo 6. Also consider the group formula_13 the ordered pairs where the "x" coordinates can be 0 or 1, and the "y" coordinates can be 0, 1, or 2, where addition in the "x"-coordinate is modulo 2 and addition in the "y"-coordinate is modulo 3.
These structures are isomorphic under addition, under the following scheme:
formula_14
or in general formula_15
For example, formula_16 which translates in the other system as formula_17
Even though these two groups "look" different in that the sets contain different elements, they are indeed isomorphic: their structures are exactly the same. More generally, the direct product of two cyclic groups formula_18 and formula_19 is isomorphic to formula_20 if and only if "m" and "n" are coprime, per the Chinese remainder theorem.
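The isomorphism formula_15 can be checked mechanically; the following short Python sketch (with illustrative names) verifies that the map is a bijection and that it preserves addition:
Z6 = list(range(6))
Z2xZ3 = [(a, b) for a in range(2) for b in range(3)]

f = lambda ab: (3 * ab[0] + 4 * ab[1]) % 6                   # the map (a, b) -> (3a + 4b) mod 6
add = lambda u, v: ((u[0] + v[0]) % 2, (u[1] + v[1]) % 3)    # addition in Z2 x Z3

assert sorted(f(x) for x in Z2xZ3) == Z6                     # f is a bijection
assert all(f(add(u, v)) == (f(u) + f(v)) % 6                 # f preserves addition
           for u in Z2xZ3 for v in Z2xZ3)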
Relation-preserving isomorphism.
If one object consists of a set "X" with a binary relation R and the other object consists of a set "Y" with a binary relation S then an isomorphism from "X" to "Y" is a bijective function formula_21 such that:
formula_22
S is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, well-order, strict weak order, total preorder (weak order), an equivalence relation, or a relation with any other special properties, if and only if R is.
For example, if R is an ordering ≤ and S an ordering formula_23 then an isomorphism from "X" to "Y" is a bijective function formula_21 such that
formula_24
Such an isomorphism is called an order isomorphism or (less commonly) an isotone isomorphism.
If formula_25 then this is a relation-preserving automorphism.
Applications.
In algebra, isomorphisms are defined for all algebraic structures. Some are more specifically studied; for example:
Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebras sharing a common structure form a heap. Letting a particular isomorphism identify the two structures turns this heap into a group.
In mathematical analysis, the Laplace transform is an isomorphism mapping hard differential equations into easier algebraic equations.
In graph theory, an isomorphism between two graphs "G" and "H" is a bijective map "f" from the vertices of "G" to the vertices of "H" that preserves the "edge structure" in the sense that there is an edge from vertex "u" to vertex "v" in "G" if and only if there is an edge from formula_26 to formula_27 in "H". See graph isomorphism.
In mathematical analysis, an isomorphism between two Hilbert spaces is a bijection preserving addition, scalar multiplication, and inner product.
In early theories of logical atomism, the formal relationship between facts and true propositions was theorized by Bertrand Russell and Ludwig Wittgenstein to be isomorphic. An example of this line of thinking can be found in Russell's "Introduction to Mathematical Philosophy".
In cybernetics, the good regulator or Conant–Ashby theorem states: "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system.
Category theoretic view.
In category theory, given a category "C", an isomorphism is a morphism formula_28 that has an inverse morphism formula_29 that is, formula_30 and formula_31 For example, a bijective linear map is an isomorphism between vector spaces, and a bijective continuous function whose inverse is also continuous is an isomorphism between topological spaces, called a homeomorphism.
Two categories C and D are isomorphic if there exist functors formula_32 and formula_33 which are mutually inverse to each other, that is, formula_34 (the identity functor on D) and formula_35 (the identity functor on C).
Isomorphism vs. bijective morphism.
In a concrete category (roughly, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as the category of topological spaces or categories of algebraic objects (like the category of groups, the category of rings, and the category of modules), an isomorphism must be bijective on the underlying sets. In algebraic categories (specifically, categories of varieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces).
Relation to equality.
Although there are cases where isomorphic objects can be considered equal, one must distinguish equality and isomorphism. Equality is when two objects are the same, and therefore everything that is true about one object is true about the other. On the other hand, isomorphisms are related to some structure, and two isomorphic objects share only the properties that are related to this structure.
For example, the sets
formula_36
are equal; they are merely different representations—the first an intensional one (in set builder notation), and the second extensional (by explicit enumeration)—of the same subset of the integers. By contrast, the sets formula_37 and formula_38 are not equal since they do not have the same elements. They are isomorphic as sets, but there are many choices (in fact 6) of an isomorphism between them: one isomorphism is
formula_39
while another is
formula_40
and no one isomorphism is intrinsically better than any other. On this view and in this sense, these two sets are not equal because one cannot consider them identical: one can choose an isomorphism between them, but that is a weaker claim than identity—and valid only in the context of the chosen isomorphism.
Also, integers and even numbers are isomorphic as ordered sets and abelian groups (for addition), but cannot be considered equal sets, since one is a proper subset of the other.
On the other hand, when sets (or other mathematical objects) are defined only by their properties, without considering the nature of their elements, one often considers them to be equal. This is generally the case with solutions of universal properties.
For example, the rational numbers are usually defined as equivalence classes of pairs of integers, although nobody thinks of a rational number as a set (equivalence class). The universal property of the rational numbers is essentially that they form a field that contains the integers and does not contain any proper subfield. It results that given two fields with these properties, there is a unique field isomorphism between them. This allows identifying these two fields, since every property of one of them can be transferred to the other through the isomorphism. For example the real numbers that are obtained by dividing two integers (inside the real numbers) form the smallest subfield of the real numbers. There is thus a unique isomorphism from the rational numbers (defined as equivalence classes of pairs) to the quotients of two real numbers that are integers. This allows identifying these two sorts of rational numbers.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\R ^+ "
},
{
"math_id": 1,
"text": "\\R"
},
{
"math_id": 2,
"text": "\\log : \\R^+ \\to \\R"
},
{
"math_id": 3,
"text": "\\log(xy) = \\log x + \\log y"
},
{
"math_id": 4,
"text": "x, y \\in \\R^+,"
},
{
"math_id": 5,
"text": "\\exp : \\R \\to \\R^+"
},
{
"math_id": 6,
"text": "\\exp(x+y) = (\\exp x)(\\exp y)"
},
{
"math_id": 7,
"text": "x, y \\in \\R,"
},
{
"math_id": 8,
"text": "\\log \\exp x = x"
},
{
"math_id": 9,
"text": "\\exp \\log y = y"
},
{
"math_id": 10,
"text": "\\log"
},
{
"math_id": 11,
"text": "\\exp "
},
{
"math_id": 12,
"text": "(\\Z_6, +),"
},
{
"math_id": 13,
"text": "\\left(\\Z_2 \\times \\Z_3, +\\right),"
},
{
"math_id": 14,
"text": "\\begin{alignat}{4}\n(0, 0) &\\mapsto 0 \\\\\n(1, 1) &\\mapsto 1 \\\\\n(0, 2) &\\mapsto 2 \\\\\n(1, 0) &\\mapsto 3 \\\\\n(0, 1) &\\mapsto 4 \\\\\n(1, 2) &\\mapsto 5 \\\\\n\\end{alignat}"
},
{
"math_id": 15,
"text": "(a, b) \\mapsto (3 a + 4 b) \\mod 6."
},
{
"math_id": 16,
"text": "(1, 1) + (1, 0) = (0, 1),"
},
{
"math_id": 17,
"text": "1 + 3 = 4."
},
{
"math_id": 18,
"text": "\\Z_m"
},
{
"math_id": 19,
"text": "\\Z_n"
},
{
"math_id": 20,
"text": "(\\Z_{mn}, +)"
},
{
"math_id": 21,
"text": "f : X \\to Y"
},
{
"math_id": 22,
"text": "\\operatorname{S}(f(u),f(v)) \\quad \\text{ if and only if } \\quad \\operatorname{R}(u,v) "
},
{
"math_id": 23,
"text": "\\scriptstyle \\sqsubseteq,"
},
{
"math_id": 24,
"text": "f(u) \\sqsubseteq f(v) \\quad \\text{ if and only if } \\quad u \\leq v."
},
{
"math_id": 25,
"text": "X = Y,"
},
{
"math_id": 26,
"text": "f(u)"
},
{
"math_id": 27,
"text": "f(v)"
},
{
"math_id": 28,
"text": "f : a \\to b"
},
{
"math_id": 29,
"text": "g : b \\to a,"
},
{
"math_id": 30,
"text": "f g = 1_b"
},
{
"math_id": 31,
"text": "g f = 1_a."
},
{
"math_id": 32,
"text": "F : C \\to D"
},
{
"math_id": 33,
"text": "G : D \\to C"
},
{
"math_id": 34,
"text": "FG = 1_D"
},
{
"math_id": 35,
"text": "GF = 1_C"
},
{
"math_id": 36,
"text": "A = \\left\\{ x \\in \\Z \\mid x^2 < 2\\right\\} \\quad \\text{ and } \\quad B = \\{-1, 0, 1\\}"
},
{
"math_id": 37,
"text": "\\{A, B, C\\}"
},
{
"math_id": 38,
"text": "\\{1, 2, 3\\}"
},
{
"math_id": 39,
"text": "\\text{A} \\mapsto 1, \\text{B} \\mapsto 2, \\text{C} \\mapsto 3,"
},
{
"math_id": 40,
"text": "\\text{A} \\mapsto 3, \\text{B} \\mapsto 2, \\text{C} \\mapsto 1,"
}
] |
https://en.wikipedia.org/wiki?curid=14828
|
1483049
|
Circuit rank
|
Fewest graph edges whose removal breaks all cycles
In graph theory, a branch of mathematics, the circuit rank, cyclomatic number, cycle rank, or nullity of an undirected graph is the minimum number of edges that must be removed from the graph to break all its cycles, making it into a tree or forest. It is equal to the number of independent cycles in the graph (the size of a cycle basis). Unlike the corresponding feedback arc set problem for directed graphs, the circuit rank r is easily computed using the formula
formula_0,
where m is the number of edges in the given graph, n is the number of vertices, and c is the number of connected components.
It is also possible to construct a minimum-size set of edges that breaks all cycles efficiently, either using a greedy algorithm or by complementing a spanning forest.
The circuit rank can be explained in terms of algebraic graph theory as the dimension of the cycle space of a graph, in terms of matroid theory as the corank of a graphic matroid, and in terms of topology as one of the Betti numbers of a topological space derived from the graph. It counts the ears in an ear decomposition of the graph, forms the basis of parameterized complexity on almost-trees, and has been applied in software metrics as part of the definition of cyclomatic complexity of a piece of code. Under the name of cyclomatic number, the concept was introduced by Gustav Kirchhoff.
Matroid rank and construction of a minimum feedback edge set.
The circuit rank of a graph G may be described using matroid theory as the corank of the graphic matroid of G. Using the greedy property of matroids, this means that one can find a minimum set of edges that breaks all cycles using a greedy algorithm that at each step chooses an edge that belongs to at least one cycle of the remaining graph.
Alternatively, a minimum set of edges that breaks all cycles can be found by constructing a spanning forest of G and choosing the complementary set of edges that do not belong to the spanning forest.
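A minimal self-contained Python sketch of this spanning-forest construction is given below (all names are illustrative); it uses a union–find structure to build a spanning forest, returns the non-forest edges as a minimum feedback edge set, and checks their number against the formula r = m − n + c.
def circuit_rank_and_feedback_edges(n, edges):
    # n: number of vertices, labelled 0 .. n-1; edges: list of (u, v) pairs
    parent = list(range(n))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    feedback = []                      # edges not placed in the spanning forest
    components = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:                   # the edge would close a cycle
            feedback.append((u, v))
        else:                          # the edge joins two components of the forest
            parent[ru] = rv
            components -= 1

    r = len(edges) - n + components    # circuit rank formula r = m - n + c
    assert r == len(feedback)          # complement of a spanning forest has exactly r edges
    return r, feedback

# A triangle with a pendant vertex attached: circuit rank 1, one feedback edge.
print(circuit_rank_and_feedback_edges(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))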
The number of independent cycles.
In algebraic graph theory, the circuit rank is also the dimension of the cycle space of formula_1. Intuitively, this can be explained as meaning that the circuit rank counts the number of independent cycles in the graph, where a collection of cycles is independent if it is not possible to form one of the cycles as the symmetric difference of some subset of the others.
This count of independent cycles can also be explained using homology theory, a branch of topology. Any graph G may be viewed as an example of a 1-dimensional simplicial complex, a type of topological space formed by representing each graph edge by a line segment and gluing these line segments together at their endpoints.
The cyclomatic number is the rank of the first (integer) homology group of this complex,
formula_2
Because of this topological connection, the cyclomatic number of a graph G is also called the first Betti number of G. More generally, the first Betti number of any topological space, defined in the same way, counts the number of independent cycles in the space.
Applications.
Meshedness coefficient.
A variant of the circuit rank for planar graphs, normalized by dividing by the maximum possible circuit rank of any planar graph with the same vertex set, is called the meshedness coefficient. For a connected planar graph with m edges and n vertices, the meshedness coefficient can be computed by the formula
formula_3
Here, the numerator formula_4 of the formula is the circuit rank of the given graph, and the denominator formula_5 is the largest possible circuit rank of an n-vertex planar graph. The meshedness coefficient ranges between 0 for trees and 1 for maximal planar graphs.
Ear decomposition.
The circuit rank controls the number of ears in an ear decomposition of a graph, a partition of the edges of the graph into paths and cycles that is useful in many graph algorithms.
In particular, a graph is 2-vertex-connected if and only if it has an open ear decomposition. This is a sequence of subgraphs, where the first subgraph is a simple cycle, the remaining subgraphs are all simple paths, each path starts and ends on vertices that belong to previous subgraphs,
and each internal vertex of a path appears for the first time in that path. In any biconnected graph with circuit rank formula_6, every open ear decomposition has exactly formula_6 ears.
Almost-trees.
A graph with cyclomatic number formula_6 is also called a "r"-almost-tree, because only "r" edges need to be removed from the graph to make it into a tree or forest. A 1-almost-tree is a near-tree: a connected near-tree is a pseudotree, a cycle with a (possibly trivial) tree rooted at each vertex.
Several authors have studied the parameterized complexity of graph algorithms on "r"-near-trees, parameterized by formula_6.
Generalizations to directed graphs.
The cycle rank is an invariant of directed graphs that measures the level of nesting of cycles in the graph. It has a more complicated definition than circuit rank (closely related to the definition of tree-depth for undirected graphs) and is more difficult to compute. Another problem for directed graphs related to the circuit rank is the minimum feedback arc set, the smallest set of edges whose removal breaks all directed cycles. Both cycle rank and the minimum feedback arc set are NP-hard to compute.
It is also possible to compute a simpler invariant of directed graphs by ignoring the directions of the edges and computing the circuit rank of the underlying undirected graph. This principle forms the basis of the definition of cyclomatic complexity, a software metric for estimating how complicated a piece of computer code is.
Computational chemistry.
In the fields of chemistry and cheminformatics, the circuit rank of a molecular graph (the number of rings in the smallest set of smallest rings) is sometimes referred to as the Frèrejacque number.
Parameterized complexity.
Some computational problems on graphs are NP-hard in general, but can be solved in polynomial time for graphs with a small circuit rank. An example is the path reconfiguration problem.
Related concepts.
Other numbers defined in terms of deleting things from graphs are:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "r = m - n + c"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "r = \\operatorname{rank}\\left[H_1(G,\\Z)\\right]."
},
{
"math_id": 3,
"text": "\\frac{m-n+1}{2n-5}."
},
{
"math_id": 4,
"text": "m-n+1"
},
{
"math_id": 5,
"text": "2n-5"
},
{
"math_id": 6,
"text": "r"
}
] |
https://en.wikipedia.org/wiki?curid=1483049
|
1483291
|
Work (electric field)
|
Electric field work is the work performed by an electric field on a charged particle in its vicinity. A particle located in the field experiences an interaction with the electric field. The work per unit of charge is defined by moving a negligible test charge between two points, and is expressed as the difference in electric potential at those points. The work can be done, for example, by electrochemical devices (electrochemical cells) or by junctions of different metals generating an electromotive force.
Electric field work is formally equivalent to work by other force fields in physics, and the formalism for electrical work is identical to that of mechanical work.
Physical process.
Particles that are free to move, if positively charged, normally tend towards regions of lower electric potential (net negative charge), while negatively charged particles tend to shift towards regions of higher potential (net positive charge).
Any movement of a positive charge into a region of higher potential requires external work to be done against the electric field, which is equal to the work that the electric field would do in moving that positive charge the same distance in the opposite direction. Similarly, it requires positive external work to transfer a negatively charged particle from a region of higher potential to a region of lower potential.
Kirchhoff's voltage law, one of the most fundamental laws governing electrical and electronic circuits, tells us that the voltage gains and the drops in any electrical circuit always sum to zero.
The formalism for electric work has an equivalent format to that of mechanical work. The work per unit of charge, when moving a negligible test charge between two points, is defined as the voltage between those points.
formula_0
where
"Q" is the electric charge of the particle
E is the electric field, which at a location is the force at that location divided by a unit ('test') charge
FE is the Coulomb (electric) force
r is the displacement
"formula_1" is the dot product operator
Mathematical description.
Consider a charged object Q+ in empty space. To move a test charge q+ "closer" to Q+ (starting from formula_2, where the potential energy is 0, for convenience), we would have to apply an external force against the Coulomb field, and positive work would be performed. Mathematically, using the definition of a conservative force, we know that we can relate this force to a potential energy gradient as:
formula_3
Where U(r) is the potential energy of q+ at a distance r from the source Q. So, integrating and using Coulomb's Law for the force:
formula_4
Now, use the relationship
formula_5
To show that the external work done to move a point charge q+ from infinity to a distance r is:
formula_6
This could have been obtained equally by using the definition of W and integrating F with respect to r, which will "prove" the above relationship.
In the example both charges are positive; this equation is applicable to any charge configuration (as the product of the charges will be either positive or negative according to their (dis)similarity).
If one of the charges were to be negative in the earlier example, the work taken to wrench that charge away to infinity would be exactly the same as the work needed in the earlier example to push that charge back to that same position.
This is easy to see mathematically, as reversing the boundaries of integration reverses the sign.
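As a numerical illustration (a sketch; the constant values are the standard CODATA ones, and the function name is illustrative), the following Python snippet evaluates formula_6 for two elementary charges brought to a separation of one nanometre:
from math import pi

EPSILON_0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19        # elementary charge, C

def work_from_infinity(q1, q2, r):
    # external work to bring q2 from infinity to distance r from q1: q1*q2 / (4*pi*eps0*r)
    return q1 * q2 / (4 * pi * EPSILON_0 * r)

W = work_from_infinity(E_CHARGE, E_CHARGE, 1e-9)   # two elementary charges, 1 nm apart
print(W)                                           # about 2.3e-19 J, i.e. roughly 1.44 eV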
Uniform electric field.
Where the electric field is constant (i.e. "not" a function of displacement, r), the work equation simplifies to:
formula_7
or 'force times distance' (times the cosine of the angle between them).
Electric power.
The electric power is the rate of energy transferred in an electric circuit. As a partial derivative, it is expressed as the change of work over time:
formula_8,
where V is the voltage. Work is defined by:
formula_9
Therefore
formula_10
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\nW = Q \\int_{a}^{b} \\mathbf{E} \\cdot \\, d \\mathbf{r} = Q \\int_{a}^{b} \\frac{\\mathbf{F_E}}{Q} \\cdot \\, d \\mathbf{r}= \\int_{a}^{b} \\mathbf{F_E} \\cdot \\, d \\mathbf{r}\n"
},
{
"math_id": 1,
"text": "\\cdot"
},
{
"math_id": 2,
"text": " r_0 = \\infty "
},
{
"math_id": 3,
"text": "\\frac{\\partial U}{\\partial \\mathbf{r}} = \\mathbf{F}_{ext}"
},
{
"math_id": 4,
"text": "U(r) = \\Delta U = \\int_{r_0}^{r} \\mathbf{F}_{ext} \\cdot \\, d \\mathbf{r}= \\int_{r_0}^{r} \\frac{1}{4\\pi\\varepsilon_0}\\frac{q_1q_2}{\\mathbf{r^2}} \\cdot \\, d \\mathbf{r}= - \\frac{q_1q_2}{4\\pi\\varepsilon_0}\\left(\\frac{1}{r_0}- \\frac{1}{r}\\right) = \\frac{q_1q_2}{4\\pi\\varepsilon_0} \\frac{1}{r} "
},
{
"math_id": 5,
"text": " W = -\\Delta U \\!"
},
{
"math_id": 6,
"text": "W_{ext} = \\frac{q_1q_2}{4\\pi\\varepsilon_0}\\frac{1}{r}"
},
{
"math_id": 7,
"text": "W =\n\n Q (\\mathbf{E} \\cdot \\, \\mathbf{r})=\\mathbf{F_E} \\cdot \\, \\mathbf{r}"
},
{
"math_id": 8,
"text": "P=\\frac{\\partial W}{\\partial t}=\\frac{\\partial QV}{\\partial t}"
},
{
"math_id": 9,
"text": " \\delta W = \\mathbf{F}\\cdot\\mathbf{v}\\delta t,"
},
{
"math_id": 10,
"text": "\\frac{\\partial W}{\\partial t}=\\mathbf{F_E} \\cdot \\,\\mathbf{v}"
}
] |
https://en.wikipedia.org/wiki?curid=1483291
|
14834796
|
Dominance order
|
Discrete math concept
In discrete mathematics, dominance order (synonyms: dominance ordering, majorization order, natural ordering) is a partial order on the set of partitions of a positive integer "n" that plays an important role in algebraic combinatorics and representation theory, especially in the context of symmetric functions and representation theory of the symmetric group.
Definition.
If "p" = ("p"1,"p"2...) and "q" = ("q"1,"q"2...) are partitions of "n", with the parts arranged in the weakly decreasing order, then "p" precedes "q" in the dominance order if for any "k" ≥ 1, the sum of the "k" largest parts of "p" is less than or equal to the sum of the "k" largest parts of "q":
formula_0
In this definition, partitions are extended by appending zero parts at the end as necessary.
Conjugation of partitions reverses the dominance order: formula_1 if and only if formula_2
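A direct Python sketch of the definition (names are illustrative; partitions are given as weakly decreasing lists of positive integers with the same sum) compares the partial sums term by term:
def dominates(q, p):
    # True if q dominates p in the dominance order, i.e. p "precedes" q
    sum_p = sum_q = 0
    for i in range(max(len(p), len(q))):
        sum_p += p[i] if i < len(p) else 0     # partitions are padded with zero parts
        sum_q += q[i] if i < len(q) else 0
        if sum_p > sum_q:                      # a partial sum of p exceeds that of q
            return False
    return True

print(dominates([3, 3], [2, 2, 1, 1]))       # True
print(dominates([3, 1, 1, 1], [2, 2, 2]))    # False: these two partitions are incomparable
print(dominates([2, 2, 2], [3, 1, 1, 1]))    # False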
Lattice structure.
Partitions of "n" form a lattice under the dominance ordering, denoted "L""n", and the operation of conjugation is an antiautomorphism of this lattice. To explicitly describe the lattice operations, for each partition "p" consider the associated ("n" + 1)-tuple:
formula_3
The partition "p" can be recovered from its associated ("n"+1)-tuple by applying the step 1 difference, formula_4 Moreover, the ("n"+1)-tuples associated to partitions of "n" are characterized among all integer sequences of length "n" + 1 by the following three properties: the sequence is nondecreasing, formula_5 it is concave, formula_6 and its first and last terms are fixed, formula_7
By the definition of the dominance ordering, partition "p" precedes partition "q" if and only if the associated ("n" + 1)-tuple of "p" is term-by-term less than or equal to the associated ("n" + 1)-tuple of "q". If "p", "q", "r" are partitions then formula_8 if and only if formula_9 The componentwise minimum of two nondecreasing concave integer sequences is also nondecreasing and concave. Therefore, for any two partitions of "n", "p" and "q", their meet is the partition of "n" whose associated ("n" + 1)-tuple has components formula_10 The natural idea to use a similar formula for the join "fails", because the componentwise maximum of two concave sequences need not be concave. For example, for "n" = 6, the partitions [3,1,1,1] and [2,2,2] have associated sequences (0,3,4,5,6,6,6) and (0,2,4,6,6,6,6), whose componentwise maximum (0,3,4,6,6,6,6) does not correspond to any partition. To show that any two partitions of "n" have a join, one uses the conjugation antiautomorphism: the join of "p" and "q" is the conjugate partition of the meet of "p"′ and "q"′:
formula_11
For the two partitions "p" and "q" in the preceding example, their conjugate partitions are [4,1,1] and [3,3] with meet [3,2,1], which is self-conjugate; therefore, the join of "p" and "q" is [3,2,1].
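The meet-and-join construction just described can be made concrete in a short Python sketch (names are illustrative); it reproduces the example above, computing the meet from componentwise minima of the associated tuples and the join by conjugation:
def prefix_sums(p, n):
    # the (n+1)-tuple of partial sums of p, after padding p with zero parts
    parts = list(p) + [0] * (n - len(p))
    sums = [0]
    for x in parts:
        sums.append(sums[-1] + x)
    return sums

def conjugate(p):
    # conjugate partition (transpose of the Ferrers diagram)
    return [sum(1 for part in p if part > i) for i in range(p[0])] if p else []

def meet(p, q):
    # meet in the dominance lattice: componentwise minimum of the associated tuples
    n = sum(p)
    m = [min(a, b) for a, b in zip(prefix_sums(p, n), prefix_sums(q, n))]
    return [d for d in (m[i + 1] - m[i] for i in range(n)) if d > 0]

def join(p, q):
    # join, computed as the conjugate of the meet of the conjugates
    return conjugate(meet(conjugate(p), conjugate(q)))

print(meet([3, 1, 1, 1], [2, 2, 2]))   # [2, 2, 1, 1]
print(join([3, 1, 1, 1], [2, 2, 2]))   # [3, 2, 1], as in the example above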
Thomas Brylawski has determined many invariants of the lattice "L""n", such as the minimal height and the maximal covering number, and classified the intervals of small length. While "L""n" is not distributive for "n" ≥ 7, it shares some properties with distributive lattices: for example, its Möbius function takes on only values 0, 1, −1.
Generalizations.
Partitions of "n" can be graphically represented by Young diagrams on "n" boxes.
Standard Young tableaux are certain ways to fill Young diagrams with numbers, and a partial order on them (sometimes called the "dominance order on Young tableaux") can be defined in terms of the dominance order on the Young diagrams. For a Young tableau "T" to dominate another Young tableau "S", the shape of "T" must dominate that of "S" as a partition, and moreover the same must hold whenever "T" and "S" are first truncated to their sub-tableaux containing entries up to a given value "k", for each choice of "k".
Similarly, there is a dominance order on the set of standard Young bitableaux, which plays a role in the theory of "standard monomials".
|
[
{
"math_id": 0,
"text": " p\\trianglelefteq q \\text{ if and only if } p_1+\\cdots+p_k \\leq q_1+\\cdots+q_k \\text{ for all } k\\geq 1."
},
{
"math_id": 1,
"text": "p\\trianglelefteq q "
},
{
"math_id": 2,
"text": "q^{\\prime}\\trianglelefteq p^{\\prime}."
},
{
"math_id": 3,
"text": " \\hat{p}=(0, p_1, p_1+p_2, \\ldots, p_1+p_2+\\cdots+p_n). "
},
{
"math_id": 4,
"text": "p_i=\\hat{p}_i-\\hat{p}_{i-1}."
},
{
"math_id": 5,
"text": "\\hat{p}_i\\leq \\hat{p}_{i+1};"
},
{
"math_id": 6,
"text": "2\\hat{p}_i\\geq \\hat{p}_{i-1}+\\hat{p}_{i+1};"
},
{
"math_id": 7,
"text": "\\hat{p}_0=0, \\hat{p}_n=n."
},
{
"math_id": 8,
"text": "r\\trianglelefteq p, r\\trianglelefteq q"
},
{
"math_id": 9,
"text": "\\hat{r}\\leq\\hat{p}, \\hat{r}\\leq\\hat{q}."
},
{
"math_id": 10,
"text": "\\operatorname{min}(\\hat{p}_i,\\hat{q}_i)."
},
{
"math_id": 11,
"text": " p\\lor q=(p^{\\prime} \\land q^{\\prime})^{\\prime}. "
}
] |
https://en.wikipedia.org/wiki?curid=14834796
|
14835049
|
Covering relation
|
Mathematical relation inside orderings
In mathematics, especially order theory, the covering relation of a partially ordered set is the binary relation which holds between comparable elements that are immediate neighbours. The covering relation is commonly used to graphically express the partial order by means of the Hasse diagram.
Definition.
Let formula_0 be a set with a partial order formula_1.
As usual, let formula_2 be the relation on formula_0 such that formula_3 if and only if formula_4 and formula_5.
Let formula_6 and formula_7 be elements of formula_0.
Then formula_7 covers formula_6, written formula_8,
if formula_3 and there is no element formula_9 such that formula_10. Equivalently, formula_7 covers formula_6 if the interval formula_11 is the two-element set formula_12.
When formula_8, it is said that formula_7 is a cover of formula_6. Some authors also use the term cover to denote any such pair formula_13 in the covering relation.
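For a finite poset the covering relation can be computed directly from the definition; the following Python sketch (with illustrative names) lists the covering pairs, which are exactly the edges one draws in the Hasse diagram:
from itertools import product

def covering_relation(elements, leq):
    # covering pairs (x, y): x < y with no z strictly between them
    elems = list(elements)
    lt = lambda a, b: leq(a, b) and a != b
    return [(x, y) for x, y in product(elems, elems)
            if lt(x, y) and not any(lt(x, z) and lt(z, y) for z in elems)]

# Divisibility order on the divisors of 12:
divisors = [1, 2, 3, 4, 6, 12]
print(covering_relation(divisors, lambda a, b: b % a == 0))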
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\le"
},
{
"math_id": 2,
"text": "<"
},
{
"math_id": 3,
"text": "x<y"
},
{
"math_id": 4,
"text": "x\\le y"
},
{
"math_id": 5,
"text": "x\\neq y"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "y"
},
{
"math_id": 8,
"text": "x\\lessdot y"
},
{
"math_id": 9,
"text": "z"
},
{
"math_id": 10,
"text": "x<z<y"
},
{
"math_id": 11,
"text": "[x,y]"
},
{
"math_id": 12,
"text": "\\{x,y\\}"
},
{
"math_id": 13,
"text": "(x,y)"
}
] |
https://en.wikipedia.org/wiki?curid=14835049
|
14835428
|
Temporal finitism
|
Doctrine that time is finite in the past
Temporal finitism is the doctrine that time is finite in the past. The philosophy of Aristotle, expressed in such works as his "Physics", held that although space was finite, with only void existing beyond the outermost sphere of the heavens, time was infinite. This caused problems for mediaeval Islamic, Jewish, and Christian philosophers who, primarily creationist, were unable to reconcile the Aristotelian conception of the eternal with the Genesis creation narrative.
Medieval background.
In contrast to ancient Greek philosophers who believed that the universe had an infinite past with no beginning, medieval philosophers and theologians developed the concept of the universe having a finite past with a beginning. This view was inspired by the creation myth shared by the three Abrahamic religions: Judaism, Christianity and Islam.
Prior to Maimonides, it was held that it was possible to prove, philosophically, creation theory. The Kalam cosmological argument held that creation was provable, for example. Maimonides himself held that neither creation nor Aristotle's infinite time were provable, or at least that no proof was available. (According to scholars of his work, he didn't make a formal distinction between unprovability and the simple absence of proof.) Thomas Aquinas was influenced by this belief, and held in his Summa Theologica that neither hypothesis was demonstrable. Some of Maimonides' Jewish successors, including Gersonides and Crescas, conversely held that the question was decidable, philosophically.
John Philoponus was probably the first to use the argument that infinite time is impossible in order to establish temporal finitism. He was followed by many others including St. Bonaventure.
Philoponus' arguments for temporal finitism were severalfold. "Contra Aristotelem" has been lost, and is chiefly known through the citations used by Simplicius of Cilicia in his commentaries on Aristotle's "Physics" and "De Caelo". Philoponus' refutation of Aristotle extended to six books, the first five addressing "De Caelo" and the sixth addressing "Physics", and, from comments on Philoponus made by Simplicius, can be deduced to have been quite lengthy.
A full exposition of Philoponus' several arguments, as reported by Simplicius, can be found in Sorabji.
One such argument was based upon Aristotle's own theorem that there were not multiple infinities, and ran as follows: If time were infinite, then as the universe continued in existence for another hour, the infinity of its age since creation at the end of that hour must be one hour greater than the infinity of its age since creation at the start of that hour. But since Aristotle holds that such treatments of infinity are impossible and ridiculous, the world cannot have existed for infinite time.
The most sophisticated medieval arguments against an infinite past were later developed by the early Muslim philosopher, Al-Kindi (Alkindus); the Jewish philosopher, Saadia Gaon (Saadia ben Joseph); and the Muslim theologian, Al-Ghazali (Algazel). They developed two logical arguments against an infinite past, the first being the "argument from the impossibility of the existence of an actual infinite", which states:
"An actual infinite cannot exist."
"An infinite temporal regress of events is an actual infinite."
"Thus an infinite temporal regress of events cannot exist."
This argument depends on the (unproved) assertion that an actual infinite cannot exist; and that an infinite past implies an infinite succession of "events", a word not clearly defined. The second argument, the "argument from the impossibility of completing an actual infinite by successive addition", states:
"An actual infinite cannot be completed by successive addition."
"The temporal series of past events has been completed by successive addition."
"Thus the temporal series of past events cannot be an actual infinite."
The first statement states, correctly, that a finite (number) cannot be made into an infinite one by the finite addition of more finite numbers. The second skirts around this; the analogous idea in mathematics, that the (infinite) sequence of negative integers "..., -3, -2, -1" may be extended by appending zero, then one, and so forth, is perfectly valid.
Both arguments were adopted by later Christian philosophers and theologians, and the second argument in particular became more famous after it was adopted by Immanuel Kant in his thesis of the first antinomy concerning time.
Modern revival.
Immanuel Kant's argument for temporal finitism from his First Antinomy, runs as follows:
<templatestyles src="Template:Blockquote/styles.css" />If we assume that the world has no beginning in time, then up to every given moment an eternity has elapsed, and there has passed away in that world an infinite series of successive states of things. Now the infinity of a series consists in the fact that it can never be completed through successive synthesis. It thus follows that it is impossible for an infinite world-series to have passed away, and that a beginning of the world is therefore a necessary condition of the world's existence.
Modern mathematics generally incorporates infinity. For most purposes it is simply used as convenient; when considered more carefully it is incorporated, or not, according to whether the axiom of infinity is included. This is the mathematical concept of infinity; while this may provide useful analogies or ways of thinking about the physical world, it says nothing directly about the physical world. Georg Cantor recognized two different kinds of infinity. The first, used in calculus, he called the variable finite, or potential infinite, represented by the formula_0 sign (known as the lemniscate); the second is the actual infinite, which Cantor called the "true infinite." His notion of transfinite arithmetic became the standard system for working with infinity within set theory. David Hilbert thought that the role of the actual infinite was relegated only to the abstract realm of mathematics. "The infinite is nowhere to be found in reality. It neither exists in nature nor provides a legitimate basis for rational thought... The role that remains for the infinite to play is solely that of an idea." Philosopher William Lane Craig argues that if the past were infinitely long, it would entail the existence of actual infinites in reality.
Craig and Sinclair also argue that an actual infinite cannot be formed by successive addition. Quite independent of the absurdities arising from an actual infinite number of past events, the formation of an actual infinite has its own problems. For any finite number n, n+1 equals a finite number. An actual infinity has no immediate predecessor.
The Tristram Shandy paradox is an attempt to illustrate the absurdity of an infinite past. Imagine Tristram Shandy, an immortal man who writes his biography so slowly that for every day that he lives, it takes him a year to record that day. Suppose that Shandy had always existed. Since there is a one-to-one correspondence between the number of past days and the number of past years on an infinite past, one could reason that Shandy could write his entire autobiography. From another perspective, Shandy would only get farther and farther behind, and given a past eternity, would be infinitely far behind.
Craig asks us to suppose that we met a man who claims to have been counting down from infinity and is now just finishing. We could ask why he did not finish counting yesterday or the day before, since eternity would have been over by then. In fact for any day in the past, if the man would have finished his countdown by day n, he would have finished his countdown by n-1. It follows that the man could not have finished his countdown at any point in the finite past, since he would have already been done.
Input from physicists.
In 1984 physicist Paul Davies deduced a finite-time origin of the universe in a quite different way, from physical grounds: "the universe will eventually die, wallowing, as it were, in its own entropy. This is known among physicists as the 'heat death' of the universe... The universe cannot have existed for ever, otherwise it would have reached its equilibrium end state an infinite time ago. Conclusion: the universe did not always exist."
More recently though physicists have proposed various ideas for how the universe could have existed for an infinite time, such as eternal inflation. But in 2012, Alexander Vilenkin and Audrey Mithani of Tufts University wrote a paper claiming that in any such scenario past time could not have been infinite.
It could, however, have been "before any nameable time", according to Leonard Susskind. There are also very exotic but consistent physical scenarios under which the Universe has existed from eternity.
Critical reception.
Kant's argument for finitism has been widely discussed, for instance Jonathan Bennett points out that Kant's argument is not a sound logical proof: His assertion that "Now the infinity of a series consists in the fact that it can never be completed through successive synthesis. It thus follows that it is impossible for an infinite world-series to have passed away", assumes that the universe was created at a beginning and then progressed from there, which seems to assume the conclusion. A universe that simply existed and had not been created, or a universe that was created as an infinite progression, for instance, would still be possible. Bennett quotes Strawson:
<templatestyles src="Template:Blockquote/styles.css" />"A temporal process both completed and infinite in duration appears to be impossible only on the assumption that it has a beginning. If ... it is urged that we cannot conceive of a process of surveying which does not have a beginning, then we must inquire with what relevance and by what right the notion of surveying is introduced into the discussion at all."
Some of the criticism of William Lane Craig's argument for temporal finitism has been discussed and expanded on by Stephen Puryear.
In this, he writes Craig's argument as:
Puryear points out that Aristotle and Aquinas had an opposing view to point 2, but that the most contentious is point 3. Puryear says that many philosophers have disagreed with point 3, and adds his own objection:
"Consider the fact that things move from one point in space to another. In so doing, the moving object passes through an actual infinity of intervening points. Hence, motion involves traversing an actual infinite ... Accordingly, the finitist of this stripe must be mistaken. Similarly, whenever some period of time elapses, an actual infinite has been traversed, namely, the actual infinity of instants that make up that period of time."
Puryear then points out that Craig has defended his position by saying that time might or must be naturally divided and so there is not an actual infinity of instants between two times. Puryear then goes on to argue that if Craig is willing to turn an infinity of points into a finite number of divisions, then points 1, 2 and 4 are not true.
An article by Louis J. Swingrover makes a number of points relating to the idea that Craig's "absurdities" are not contradictions in themselves: they are all either mathematically consistent (like Hilbert's hotel or the man counting down to today), or do not lead to inescapable conclusions. He argues that if one makes the assumption that any mathematically coherent model is metaphysically possible, then it can be shown that an infinite temporal chain is metaphysically possible, since one can show that there exist mathematically coherent models of an infinite progression of times. He also says that Craig might be making a cardinality error similar to assuming that because an infinitely extended temporal series would contain an infinite number of times, then it would have to contain the number "infinity".
Quentin Smith attacks "their supposition that an infinite series of past events must contain some events separated from the present event by an infinite number of intermediate events, and consequently that from one of these infinitely distant past events the present could never have been reached".
Smith asserts that Craig and Whitrow are making a cardinality error by confusing an unending sequence with a sequence whose members must be separated by an infinity: none of the integers is separated from any other integer by an infinite number of integers, so why assert that an infinite series of times must contain a time infinitely far back in the past?
Smith then says that Craig uses false presuppositions when he makes statements about infinite collections (in particular the ones relating to Hilbert's Hotel and infinite sets being equivalent to proper subsets of them), often based on Craig finding things "unbelievable", when they are actually mathematically correct. He also points out that the Tristram Shandy paradox is mathematically coherent, but some of Craig's conclusions about when the biography would be finished are incorrect.
Ellery Eells expands on this last point by showing that the Tristram Shandy paradox is internally consistent and fully compatible with an infinite universe.
Graham Oppy, embroiled in debate with Oderberg, points out that the Tristram Shandy story has been used in many versions. For it to be useful to the temporal finitism side, a version must be found that is logically consistent and not compatible with an infinite universe. To see this, note that the argument runs as follows:
The problem for the finitist is that point 1 is not necessarily true. If a version of the Tristram Shandy story is internally inconsistent, for instance, then the infinitist could just assert that an infinite past is possible, but that particular Tristram Shandy is not because it's not internally consistent. Oppy then lists the different versions of the Tristram Shandy story that have been put forward and shows that they are all either internally inconsistent or they don't lead to contradiction.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\infty"
}
] |
https://en.wikipedia.org/wiki?curid=14835428
|
14836297
|
Young's lattice
|
In mathematics, Young's lattice is a lattice that is formed by all integer partitions. It is named after Alfred Young, who, in a series of papers "On quantitative substitutional analysis," developed the representation theory of the symmetric group. In Young's theory, the objects now called Young diagrams and the partial order on them played a key, even decisive, role. Young's lattice prominently figures in algebraic combinatorics, forming the simplest example of a differential poset in the sense of . It is also closely connected with the crystal bases for affine Lie algebras.
Definition.
Young's lattice is a lattice (and hence also a partially ordered set) "Y" formed by all integer partitions ordered by inclusion of their Young diagrams (or Ferrers diagrams).
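As an illustration of this ordering (a sketch with illustrative names, not part of the original text), the following Python functions test containment of Young diagrams and list the partitions covering a given partition, i.e. those obtained by adding a single box:
def contains(q, p):
    # True if the Young diagram of q contains that of p, i.e. p <= q in Young's lattice
    return len(p) <= len(q) and all(p[i] <= q[i] for i in range(len(p)))

def covers_of(p):
    # partitions covering p: add one box so that the rows stay weakly decreasing
    result = []
    for i in range(len(p) + 1):
        row = p[i] if i < len(p) else 0
        above = p[i - 1] if i > 0 else float("inf")
        if row + 1 <= above:
            q = list(p)
            if i < len(p):
                q[i] += 1
            else:
                q.append(1)
            result.append(q)
    return result

print(contains([3, 2], [2, 2]))   # True: [2, 2] fits inside [3, 2]
print(covers_of([2, 1]))          # [[3, 1], [2, 2], [2, 1, 1]]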
Significance.
The traditional application of Young's lattice is to the description of the irreducible representations of symmetric groups S"n" for all "n", together with their branching properties, in characteristic zero. The equivalence classes of irreducible representations may be parametrized by partitions or Young diagrams, the restriction from S"n"+1 to S"n" is multiplicity-free, and the representation of S"n" with partition "p" is contained in the representation of S"n"+1 with partition "q" if and only if "q" covers "p" in Young's lattice. Iterating this procedure, one arrives at "Young's semicanonical basis" in the irreducible representation of S"n" with partition "p", which is indexed by the standard Young tableaux of shape "p".
The Möbius function of Young's lattice is given by formula_0
Dihedral symmetry.
Conventionally, Young's lattice is depicted in a Hasse diagram with all elements of the same rank shown at the same height above the bottom.
has shown that a different way of depicting some subsets of Young's lattice shows some unexpected symmetries.
The partition
formula_1
of the "n"th triangular number has a Ferrers diagram that looks like a staircase. The largest elements whose Ferrers diagrams are rectangular and that lie under the staircase are these:
formula_2
Partitions of this form are the only ones that have only one element immediately below them in Young's lattice. Suter showed that the set of all elements less than or equal to these particular partitions has not only the bilateral symmetry that one expects of Young's lattice, but also rotational symmetry: the rotation group of order "n" + 1 acts on this poset. Since this set has both bilateral symmetry and rotational symmetry, it must have dihedral symmetry: the ("n" + 1)st dihedral group acts faithfully on this set. The size of this set is 2"n".
For example, when "n" = 4, the maximal elements under the "staircase" that have rectangular Ferrers diagrams are
1 + 1 + 1 + 1
2 + 2 + 2
3 + 3
4
The subset of Young's lattice lying below these partitions has both bilateral symmetry and 5-fold rotational symmetry. Hence the dihedral group "D"5 acts faithfully on this subset of Young's lattice.
|
[
{
"math_id": 0,
"text": "\\mu(q,p) = \\begin{cases}\n(-1)^{|p| - |q|} & \\text{if the skew diagram }p/q\\text{ is a disconnected union of squares} \\\\\n & \\text{(no common edges);} \\\\[10pt]\n0 & \\text{otherwise}. \\end{cases}"
},
{
"math_id": 1,
"text": "n + \\cdots + 3 + 2 + 1"
},
{
"math_id": 2,
"text": "\\begin{array}{c}\n\\underbrace{1 + \\cdots\\cdots\\cdots + 1}_{n\\text{ terms}} \\\\\n\\underbrace{2 + \\cdots\\cdots + 2}_{n-1\\text{ terms}} \\\\\n\\underbrace{3 + \\cdots + 3}_{n-2\\text{ terms}} \\\\\n\\vdots \\\\\n\\underbrace{{}\\quad n\\quad {}}_{1\\text{ term}}\n\\end{array}"
}
] |
https://en.wikipedia.org/wiki?curid=14836297
|
1483799
|
Geometrical frustration
|
In condensed matter physics, the term geometrical frustration (or in short: frustration) refers to a phenomenon where atoms tend to stick to non-trivial positions or where, on a regular crystal lattice, conflicting inter-atomic forces (each one favoring rather simple, but different structures) lead to quite complex structures. As a consequence of the frustration in the geometry or in the forces, a plenitude of distinct ground states may result at zero temperature, and usual thermal ordering may be suppressed at higher temperatures. Much studied examples are amorphous materials, glasses, or dilute magnets.
The term "frustration", in the context of magnetic systems, was introduced by Gerard Toulouse in 1977. Frustrated magnetic systems had been studied even before. Early work includes a study of the Ising model on a triangular lattice with nearest-neighbor spins coupled antiferromagnetically, by G. H. Wannier, published in 1950. Related features occur in magnets with "competing interactions", where both ferromagnetic as well as antiferromagnetic couplings between pairs of spins or magnetic moments are present, with the type of interaction depending on the separation distance of the spins. In that case commensurability, such as helical spin arrangements, may result, as had been discussed originally, especially, by A. Yoshimori, T. A. Kaplan, R. J. Elliott, and others, starting in 1959, to describe experimental findings on rare-earth metals. A renewed interest in such spin systems with frustrated or competing interactions arose about two decades later, beginning in the 1970s, in the context of spin glasses and spatially modulated magnetic superstructures. In spin glasses, frustration is augmented by stochastic disorder in the interactions, as may occur experimentally in non-stoichiometric magnetic alloys. Carefully analyzed spin models with frustration include the Sherrington–Kirkpatrick model, describing spin glasses, and the ANNNI model, describing commensurability magnetic superstructures. Recently, the concept of frustration has been used in brain network analysis to identify the non-trivial assemblage of neural connections and highlight the adjustable elements of the brain.
Magnetic ordering.
Geometrical frustration is an important feature in magnetism, where it stems from the relative arrangement of spins. A simple 2D example is shown in Figure 1. Three magnetic ions reside on the corners of a triangle with antiferromagnetic interactions between them; the energy is minimized when each spin is aligned opposite to neighbors. Once the first two spins align antiparallel, the third one is "frustrated" because its two possible orientations, up and down, give the same energy. The third spin cannot simultaneously minimize its interactions with both of the other two. Since this effect occurs for each spin, the ground state is sixfold degenerate. Only the two states where all spins are up or down have more energy.
Similarly in three dimensions, four spins arranged in a tetrahedron (Figure 2) may experience geometric frustration. If there is an antiferromagnetic interaction between spins, then it is not possible to arrange the spins so that all interactions between spins are antiparallel. There are six nearest-neighbor interactions, four of which are antiparallel and thus favourable, but two of which (between 1 and 2, and between 3 and 4) are unfavourable. It is impossible to have all interactions favourable, and the system is frustrated.
Geometrical frustration is also possible if the spins are arranged in a non-collinear way. If we consider a tetrahedron with a spin on each vertex pointing along the "easy axis" (that is, directly towards or away from the centre of the tetrahedron), then it is possible to arrange the four spins so that there is no net spin (Figure 3). This is exactly equivalent to having an antiferromagnetic interaction between each pair of spins, so in this case there is no geometrical frustration. With these axes, geometric frustration arises if there is a ferromagnetic interaction between neighbours, where energy is minimized by parallel spins. The best possible arrangement is shown in Figure 4, with two spins pointing towards the centre and two pointing away. The net magnetic moment points upwards, maximising ferromagnetic interactions in this direction, but left and right vectors cancel out (i.e. are antiferromagnetically aligned), as do forwards and backwards. There are three different equivalent arrangements with two spins out and two in, so the ground state is three-fold degenerate.
Mathematical definition.
The mathematical definition is simple (and analogous to the so-called Wilson loop in quantum chromodynamics): One considers for example expressions ("total energies" or "Hamiltonians") of the form
formula_0
where "G" is the graph considered, whereas the quantities "I""kν","kμ" are the so-called "exchange energies" between nearest-neighbours, which (in the energy units considered) assume the values ±1 (mathematically, this is a signed graph), while the "S""kν"·"S""kμ" are inner products of scalar or vectorial spins or pseudo-spins. If the graph "G" has quadratic or triangular faces "P", the so-called "plaquette variables" "PW", "loop-products" of the following kind, appear:
formula_1 and formula_2 respectively,
which are also called "frustration products". One has to perform a sum over these products, summed over all plaquettes. The result for a single plaquette is either +1 or −1. In the last-mentioned case the plaquette is "geometrically frustrated".
It can be shown that the result has a simple gauge invariance: it does "not" change – nor do other measurable quantities, e.g. the "total energy" formula_3 – even if locally the exchange integrals and the spins are simultaneously modified as follows:
formula_4
Here the numbers "εi" and "εk" are arbitrary signs, i.e. +1 or −1, so that the modified structure may look totally random.
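A small Python sketch (illustrative, with couplings restricted to ±1 as above) computes the frustration product of a plaquette and illustrates the gauge invariance: flipping the sign of the couplings incident to one site leaves the loop product unchanged.
def frustration_product(couplings):
    # product of the signed exchange couplings (+1 or -1) around a closed plaquette;
    # the plaquette is geometrically frustrated when the product is -1
    prod = 1
    for j in couplings:
        prod *= j
    return prod

print(frustration_product([-1, -1, -1]))   # -1: an antiferromagnetic triangle is frustrated

# Gauge transformation: flipping one site multiplies the two couplings meeting it by -1,
# which leaves the loop product (and hence the frustration) unchanged.
j12, j23, j31 = -1, +1, -1
print(frustration_product([j12, j23, j31]), frustration_product([-j12, j23, -j31]))   # 1 1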
Water ice.
Although most previous and current research on frustration focuses on spin systems, the phenomenon was first studied in ordinary ice. In 1936 Giauque and Stout published "The Entropy of Water and the Third Law of Thermodynamics. Heat Capacity of Ice from 15 K to 273 K", reporting calorimeter measurements on water through the freezing and vaporization transitions up to the high temperature gas phase. The entropy was calculated by integrating the heat capacity and adding the latent heat contributions; the low temperature measurements were extrapolated to zero, using Debye's then recently derived formula. The resulting entropy, "S"1 = 44.28 cal/(K·mol) = 185.3 J/(mol·K), was compared to the theoretical result from statistical mechanics of an ideal gas, "S"2 = 45.10 cal/(K·mol) = 188.7 J/(mol·K). The two values differ by "S"0 = 0.82 ± 0.05 cal/(K·mol) = 3.4 J/(mol·K). This result was then explained to an excellent approximation by Linus Pauling, who showed that ice possesses a finite entropy (estimated as 0.81 cal/(K·mol) or 3.4 J/(mol·K)) at zero temperature due to the configurational disorder intrinsic to the protons in ice.
In the hexagonal or cubic ice phase the oxygen ions form a tetrahedral structure with an O–O bond length 2.76 Å (276 pm), while the O–H bond length measures only 0.96 Å (96 pm). Every oxygen (white) ion is surrounded by four hydrogen ions (black) and each hydrogen ion is surrounded by 2 oxygen ions, as shown in Figure 5. Maintaining the internal H2O molecule structure, the minimum energy position of a proton is not half-way between two adjacent oxygen ions. There are two equivalent positions a hydrogen may occupy on the line of the O–O bond, a far and a near position. Thus a rule leads to the frustration of positions of the proton for a ground state configuration: for each oxygen two of the neighboring protons must reside in the far position and two of them in the near position, so-called ‘ice rules’. Pauling proposed that the open tetrahedral structure of ice affords many equivalent states satisfying the ice rules.
Pauling went on to compute the configurational entropy in the following way: consider one mole of ice, consisting of "N" O2− and 2"N" protons. Each O–O bond has two positions for a proton, leading to 22"N" possible configurations. However, among the 16 possible configurations associated with each oxygen, only 6 are energetically favorable, maintaining the H2O molecule constraint. Then an upper bound of the numbers that the ground state can take is estimated as "Ω" < 22"N"(6/16)"N". Correspondingly the configurational entropy "S"0 = "k""B"ln("Ω") = "Nk""B"ln(3/2) = 0.81 cal/(K·mol) = 3.4 J/(mol·K) is in amazing agreement with the missing entropy measured by Giauque and Stout.
Although Pauling's calculation neglected both the global constraint on the number of protons and the local constraint arising from closed loops on the Wurtzite lattice, the estimate was subsequently shown to be of excellent accuracy.
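The arithmetic behind Pauling's estimate can be reproduced in a few lines (a sketch; the gas constant is the standard value, and 4.184 J/cal is used for the unit conversion):
from math import log

R = 8.314462618          # gas constant, J/(mol*K), equal to N_A * k_B
S0 = R * log(3 / 2)      # Pauling's configurational entropy per mole of ice
print(S0)                # about 3.37 J/(mol*K)
print(S0 / 4.184)        # about 0.81 cal/(K*mol), as quoted above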
Spin ice.
A mathematically analogous situation to the degeneracy in water ice is found in the spin ices. A common spin ice structure is shown in Figure 6 in the cubic pyrochlore structure with one magnetic atom or ion residing on each of the four corners. Due to the strong crystal field in the material, each of the magnetic ions can be represented by an Ising ground state doublet with a large moment. This suggests a picture of Ising spins residing on the corner-sharing tetrahedral lattice with spins fixed along the local quantization axis, the <111> cubic axes, which coincide with the lines connecting each tetrahedral vertex to the center. Every tetrahedral cell must have two spins pointing in and two pointing out in order to minimize the energy. Currently the spin ice model has been approximately realized by real materials, most notably the rare earth pyrochlores Ho2Ti2O7, Dy2Ti2O7, and Ho2Sn2O7. These materials all show nonzero residual entropy at low temperature.
Extension of Pauling’s model: General frustration.
The spin ice model is only one subdivision of frustrated systems. The word frustration was initially introduced to describe a system's inability to simultaneously minimize the competing interaction energy between its components. In general frustration is caused either by competing interactions due to site disorder (see also the "Villain model") or by lattice structure such as in the triangular, face-centered cubic (fcc), hexagonal-close-packed, tetrahedron, pyrochlore and kagome lattices with antiferromagnetic interaction. So frustration is divided into two categories: the first corresponds to the spin glass, which has both disorder in structure and frustration in spin; the second is the geometrical frustration with an ordered lattice structure and frustration of spin. The frustration of a spin glass is understood within the framework of the RKKY model, in which the interaction property, either ferromagnetic or anti-ferromagnetic, is dependent on the distance of the two magnetic ions. Due to the lattice disorder in the spin glass, one spin of interest and its nearest neighbors could be at different distances and have a different interaction property, which thus leads to different preferred alignment of the spin.
Artificial geometrically frustrated ferromagnets.
With the help of lithography techniques, it is possible to fabricate sub-micrometer size magnetic islands whose geometric arrangement reproduces the frustration found in naturally occurring spin ice materials. Recently R. F. Wang et al. reported the discovery of an artificial geometrically frustrated magnet composed of arrays of lithographically fabricated single-domain ferromagnetic islands. These islands are manually arranged to create a two-dimensional analog to spin ice. The magnetic moments of the ordered ‘spin’ islands were imaged with magnetic force microscopy (MFM) and then the local accommodation of frustration was thoroughly studied. In their previous work on a square lattice of frustrated magnets, they observed both ice-like short-range correlations and the absence of long-range correlations, just like in the spin ice at low temperature. These results provide firm ground on which the real physics of frustration can be visualized and modeled by these artificial geometrically frustrated magnets, and they inspire further research activity.
These artificially frustrated ferromagnets can exhibit unique magnetic properties when studying their global response to an external field using the magneto-optical Kerr effect. In particular, a non-monotonic angular dependence of the square lattice coercivity is found to be related to disorder in the artificial spin ice system.
Geometric frustration without lattice.
Another type of geometrical frustration arises from the propagation of a local order. A main question that a condensed matter physicist faces is to explain the stability of a solid.
It is sometimes possible to establish some local rules, of chemical nature, which lead to low energy configurations and therefore govern structural and chemical order. This is not generally the case and often the local order defined by local interactions cannot propagate freely, leading to geometric frustration. A common feature of all these systems is that, even with simple local rules, they present a large set of, often complex, structural realizations. Geometric frustration plays a role in fields of condensed matter, ranging from clusters and amorphous solids to complex fluids.
The general method of approach to resolve these complications follows two steps. First, the constraint of perfect space-filling is relaxed by allowing for space curvature. An ideal, unfrustrated, structure is defined in this curved space. Then, specific distortions are applied to this ideal template in order to embed it into three dimensional Euclidean space. The final structure is a mixture of ordered regions, where the local order is similar to that of the template, and defects arising from the embedding. Among the possible defects, disclinations play an important role.
Simple two-dimensional examples.
Two-dimensional examples are helpful in order to get some understanding about the origin of the competition between local rules and geometry in the large. Consider first an arrangement of identical disks (a model for a hypothetical two-dimensional metal) on a plane; we suppose that the interaction between disks is isotropic and locally tends to arrange the disks in the densest way possible. The best arrangement for three disks is trivially an equilateral triangle with the disk centers located at the triangle vertices. The study of the long range structure can therefore be reduced to that of plane tilings with equilateral triangles. A well known solution is provided by the triangular tiling with a total compatibility between the local and global rules: the system is said to be "unfrustrated".
Now suppose instead that the interaction energy is at a minimum when atoms sit on the vertices of a regular pentagon. Trying to propagate in the long range a packing of these pentagons sharing edges (atomic bonds) and vertices (atoms) is impossible. This is due to the impossibility of tiling a plane with regular pentagons, simply because the pentagon vertex angle does not divide 2π. Three such pentagons can easily fit at a common vertex, but a gap remains between two edges. It is this kind of discrepancy which is called "geometric frustration". There is one way to overcome this difficulty. Let the surface to be tiled be free of any presupposed topology, and let us build the tiling with a strict application of the local interaction rule. In this simple example, we observe that the surface inherits the topology of a sphere and so receives a curvature. The final structure, here a pentagonal dodecahedron, allows for a perfect propagation of the pentagonal order. It is called an "ideal" (defect-free) model for the considered structure.
Dense structures and tetrahedral packings.
The stability of metals is a longstanding question of solid state physics, which can only be understood in the quantum mechanical framework by properly taking into account the interaction between the positively charged ions and the valence and conduction electrons. It is nevertheless possible to use a very simplified picture of metallic bonding and keep only an isotropic type of interaction, leading to structures which can be represented as densely packed spheres. And indeed the crystalline simple metal structures are often either close-packed face-centered cubic (fcc) or hexagonal close-packed (hcp) lattices. To some extent amorphous metals and quasicrystals can also be modeled by close packing of spheres. The local atomic order is well modeled by a close packing of tetrahedra, leading to an imperfect icosahedral order.
A regular tetrahedron is the densest configuration for the packing of four equal spheres. The dense random packing of hard spheres problem can thus be mapped onto the tetrahedral packing problem. It is a practical exercise to try to pack table tennis balls in order to form only tetrahedral configurations. One starts with four balls arranged as a perfect tetrahedron, and tries to add new spheres, while forming new tetrahedra. The next solution, with five balls, is trivially two tetrahedra sharing a common face; note that already with this solution, the fcc structure, which contains individual tetrahedral holes, does not show such a configuration (the tetrahedra share edges, not faces). With six balls, three regular tetrahedra are built, and the cluster is incompatible with all compact crystalline structures (fcc and hcp). Adding a seventh sphere gives a new cluster consisting of two "axial" balls touching each other and five others touching the latter two balls, the outer shape being an almost regular pentagonal bi-pyramid. However, we are facing now a real packing problem, analogous to the one encountered above with the pentagonal tiling in two dimensions. The dihedral angle of a tetrahedron is not commensurable with 2π; consequently, a hole remains between two faces of neighboring tetrahedra. As a consequence, a perfect tiling of the Euclidean space R"3" is impossible with regular tetrahedra. The frustration has a topological character: it is impossible to fill Euclidean space with tetrahedra, even severely distorted, if we impose that a constant number of tetrahedra (here five) share a common edge.
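The incommensurability can be made quantitative with a short check (an illustration added here, not part of the original text): the dihedral angle of a regular tetrahedron is arccos(1/3), and five tetrahedra around a shared edge fall short of a full turn.

```python
import math

# Dihedral angle of a regular tetrahedron and the angular gap left by five of
# them sharing a common edge.
dihedral = math.degrees(math.acos(1 / 3))   # about 70.53 degrees
gap = 360.0 - 5 * dihedral                  # about 7.36 degrees left over
print(f"dihedral = {dihedral:.2f} deg, five tetrahedra span {5 * dihedral:.2f} deg, gap = {gap:.2f} deg")
```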
The next step is crucial: the search for an unfrustrated structure by allowing for curvature in the space, in order for the local configurations to propagate identically and without defects throughout the whole space.
Regular packing of tetrahedra: the polytope {3,3,5}.
Twenty irregular tetrahedra pack with a common vertex in such a way that the twelve outer vertices form a regular icosahedron. Indeed, the icosahedron edge length "l" is slightly longer than the circumsphere radius "r" ("l" ≈ 1.05"r"). There is a solution with regular tetrahedra if the space is not Euclidean, but spherical. It is the polytope {3,3,5}, using the Schläfli notation, also known as the 600-cell.
There are one hundred and twenty vertices which all belong to the hypersphere "S"3 with radius equal to the golden ratio ("φ" = (1 + √5)/2) if the edges are of unit length. The six hundred cells are regular tetrahedra grouped by five around a common edge and by twenty around a common vertex. This structure is called a polytope (see Coxeter), which is the general name in higher dimensions for the series containing polygons and polyhedra. Even though this structure is embedded in four dimensions, it has been considered as a three-dimensional (curved) manifold. This point is conceptually important for the following reason. The ideal models that have been introduced in the curved space are three-dimensional curved templates. They look locally like three-dimensional Euclidean models. So, the {3,3,5} polytope, which is a tiling by tetrahedra, provides a very dense atomic structure if atoms are located on its vertices. It is therefore naturally used as a template for amorphous metals, but one should not forget that it is at the price of successive idealizations.
|
[
{
"math_id": 0,
"text": "\\mathcal H=\\sum_G -I_{k_\\nu , k_\\mu}\\,\\,S_{k_\\nu}\\cdot S_{k_\\mu}\\,,"
},
{
"math_id": 1,
"text": "P_W=I_{1,2}\\,I_{2,3}\\,I_{3,4}\\,I_{4,1}"
},
{
"math_id": 2,
"text": "P_W=I_{1,2}\\,I_{2,3}\\,I_{3,1}\\,,"
},
{
"math_id": 3,
"text": "\\mathcal H"
},
{
"math_id": 4,
"text": "I_{i,k}\\to\\varepsilon_i I_{i,k}\\varepsilon_k ,\\quad S_i\\to\\varepsilon_i S_i ,\\quad S_k\\to \\varepsilon_k S_k\\,."
}
] |
https://en.wikipedia.org/wiki?curid=1483799
|
14838
|
Inertial frame of reference
|
Fundamental concept of classical mechanics
<templatestyles src="Hlist/styles.css"/>
In classical physics and special relativity, an inertial frame of reference (also called inertial space, or Galilean reference frame) is a stationary or uniformly moving frame of reference. From this viewpoint, objects at rest remain at rest and objects in motion continue moving at constant velocity unless acted upon by external forces, and the laws of nature can be observed without corrections for the acceleration of the frame.
All frames of reference with zero acceleration are in a state of constant rectilinear motion (straight-line motion) with respect to one another. In such a frame, an object with zero net force acting on it is perceived to move with a constant velocity, or, equivalently, Newton's first law of motion holds. Such frames are known as inertial. Originally, some physicists, like Isaac Newton, thought that one of these frames was absolute, namely the one approximated by the fixed stars. However, this is not required for the definition, and it is now known that those stars are in fact moving.
According to the principle of special relativity, all physical laws look the same in all inertial reference frames, and no inertial frame is privileged over another. Measurements of objects in one inertial frame can be converted to measurements in another by a simple transformation — the Galilean transformation in Newtonian physics or the Lorentz transformation (combined with a translation) in special relativity; these approximately match when the relative speed of the frames is low, but differ as it approaches the speed of light.
By contrast, a "non-inertial reference frame" has non-zero acceleration. In such a frame, the interactions between physical objects vary depending on the acceleration of that frame with respect to an inertial frame. Viewed from the perspective of classical mechanics and special relativity, the usual physical forces caused by the interaction of objects have to be supplemented by fictitious forces caused by inertia.
Viewed from the perspective of general relativity theory, the fictitious (i.e. inertial) forces are attributed to geodesic motion in spacetime.
Due to Earth's rotation, its surface is not an inertial frame of reference. The Coriolis effect can deflect certain forms of motion as seen from Earth, and the centrifugal force will reduce the effective gravity at the equator. Nevertheless, the Earth is a good approximation of an inertial reference frame, adequate for many applications.
Introduction.
The motion of a body can only be described relative to something else—other bodies, observers, or a set of spacetime coordinates. These are called frames of reference. According to the first postulate of special relativity, all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation:<templatestyles src="Template:Blockquote/styles.css" />Special principle of relativity: If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K' moving in uniform translation relatively to K.
This simplicity manifests itself in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames has external causes. The principle of simplicity can be used within Newtonian physics as well as in special relativity:
<templatestyles src="Template:Blockquote/styles.css" />The laws of Newtonian mechanics do not always hold in their simplest form...If, for instance, an observer is placed on a disc rotating relative to the earth, he/she will sense a 'force' pushing him/her toward the periphery of the disc, which is not caused by any interaction with other bodies. Here, the acceleration is not the consequence of the usual force, but of the so-called inertial force. Newton's laws hold in their simplest form only in a family of reference frames, called inertial frames. This fact represents the essence of the Galilean principle of relativity: The laws of mechanics have the same form in all inertial frames.
However, this definition of inertial frames is understood to apply in the Newtonian realm and ignores relativistic effects.
In practical terms, the equivalence of inertial reference frames means that scientists within a box moving with a constant absolute velocity cannot determine this velocity by any experiment. Otherwise, the differences would set up an absolute standard reference frame. According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. In Newtonian mechanics, inertial frames of reference are related by the Galilean group of symmetries.
Newton's inertial frame of reference.
Absolute space.
Newton posited an absolute space considered well-approximated by a frame of reference stationary relative to the fixed stars. An inertial frame was then one in uniform translation relative to absolute space. However, some "relativists", even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced.
The expression "inertial frame of reference" () was coined by Ludwig Lange in 1885, to replace Newton's definitions of "absolute space and time" with a more operational definition:
<templatestyles src="Template:Blockquote/styles.css" />
The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojevich:<templatestyles src="Template:Blockquote/styles.css" />*The existence of absolute space contradicts the internal logic of classical mechanics since, according to the Galilean principle of relativity, none of the inertial frames can be singled out.
The utility of operational definitions was carried much further in the special theory of relativity. Some historical background including Lange's definition is provided by DiSalle, who says in summary:
<templatestyles src="Template:Blockquote/styles.css" />The original question, "relative to what frame of reference do the laws of motion hold?" is revealed to be wrongly posed. The laws of motion essentially determine a class of reference frames, and (in principle) a procedure for constructing them.
Newtonian mechanics.
Classical theories that use the Galilean transformation postulate the equivalence of all inertial reference frames. The Galilean transformation transforms coordinates from one inertial reference frame, formula_0, to another, formula_1, by simple addition or subtraction of coordinates:
formula_2
formula_3
where r0 and "t"0 represent shifts in the origin of space and time, and v is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time "t"2 − "t"1 between two events is the same for all reference frames and the distance between two simultaneous events (or, equivalently, the length of any object, |r2 − r1|) is also the same.
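A small sketch of the transformation (the function name and the numbers are illustrative only, not from the text) shows how an event is converted between two inertial frames and why distances between simultaneous events are preserved.

```python
import numpy as np

def galilean(r, t, v, r0=np.zeros(3), t0=0.0):
    """Map an event (r, t) in frame S to frame S' moving with velocity v."""
    r = np.asarray(r, dtype=float)
    v = np.asarray(v, dtype=float)
    return r - r0 - v * t, t - t0

# Two simultaneous events one metre apart in S stay one metre apart in S':
r1p, _ = galilean([0.0, 0.0, 0.0], 2.0, v=[10.0, 0.0, 0.0])
r2p, _ = galilean([1.0, 0.0, 0.0], 2.0, v=[10.0, 0.0, 0.0])
print(np.linalg.norm(r2p - r1p))   # 1.0
```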
Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton's first law of motion is valid. However, the principle of special relativity generalizes the notion of an inertial frame to include all physical laws, not simply Newton's first law.
Newton viewed the first law as valid in any reference frame that is in uniform motion (neither rotating nor accelerating) relative to absolute space; as a practical matter, "absolute space" was considered to be the fixed stars. In the theory of relativity the notion of absolute space or a privileged frame is abandoned, and an inertial frame in the field of classical mechanics is defined as:
<templatestyles src="Template:Blockquote/styles.css" />An inertial frame of reference is one in which the motion of a particle not subject to forces is in a straight line at constant speed.
Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton's first law of motion), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed. Newtonian inertial frames transform among each other according to the Galilean group of symmetries.
If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then being able to determine when zero net force is applied is crucial. The problem was summarized by Einstein:
<templatestyles src="Template:Blockquote/styles.css" />The weakness of the principle of inertia lies in this, that it involves an argument in a circle: a mass moves without acceleration if it is sufficiently far from other bodies; we know that it is sufficiently far from other bodies only by the fact that it moves without acceleration.
There are several approaches to this issue. One approach is to argue that all real forces drop off with distance from their sources in a known manner, so it is only needed that a body is far enough away from all sources to ensure that no force is present. A possible issue with this approach is the historically long-lived view that the distant universe might affect matters (Mach's principle). Another approach is to identify all real sources for real forces and account for them. A possible issue with this approach is the possibility of missing something, or accounting inappropriately for their influence, perhaps, again, due to Mach's principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when shifting reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames and have complicated rules of transformation in general cases. Based on the universality of physical law and the request for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces.
Newton enunciated a principle of relativity himself in one of his corollaries to the laws of motion: <templatestyles src="Template:Blockquote/styles.css" />The motions of bodies included in a given space are the same among themselves, whether that space is at rest or moves uniformly forward in a straight line.
This principle differs from the special principle in two ways: first, it is restricted to mechanics, and second, it makes no mention of simplicity. It shares with the special principle the invariance of the form of the description among mutually translating reference frames. The role of fictitious forces in classifying reference frames is pursued further below.
Special relativity.
Einstein's theory of special relativity, like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant, the transformation between inertial frames is the Lorentz transformation, not the Galilean transformation which is used in Newtonian mechanics.
The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation, length contraction, and the relativity of simultaneity. The predictions of special relativity have been extensively verified experimentally. The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero.
Examples.
Simple example.
Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 meters. The car in front is traveling at 22 meters per second and the car behind is traveling at 30 meters per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose.
First, we could observe the two cars from the side of the road. We define our "frame of reference" "S" as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance "d" = 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where formula_4 is the position in meters of car one after time "t" in seconds and formula_5 is the position of car two after time "t".
formula_6
Notice that these formulas predict that at "t" = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which formula_7. Therefore, we set formula_7 and solve for formula_8, that is:
formula_9
formula_10
formula_11
Alternatively, we could choose a frame of reference "S′" situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of "v"2 − "v"1 = 8 m/s. To catch up to the first car, it will take a time of "d"/("v"2 − "v"1) = 200/8 s, that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m/s.
It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one can convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you can deduct five minutes from the time displayed on your watch to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three).
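A short computation (values taken from the example above) confirms that the roadside frame and the frame of the first car give the same catch-up time.

```python
d, v1, v2 = 200.0, 22.0, 30.0     # separation (m) and speeds (m/s) from the example

# Frame S (roadside observer): positions x1(t) = d + v1*t and x2(t) = v2*t coincide when
t_S = d / (v2 - v1)

# Frame S' (riding in the first car): the second car closes the 200 m gap at v2 - v1 = 8 m/s
t_S_prime = d / (v2 - v1)

print(t_S, t_S_prime)             # 25.0 25.0 -- the same answer in either frame
```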
Additional example.
For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving to the right. However, for the person facing west, the car was moving to the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system.
For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the "x"-axis, and the direction in front of him as the positive "y"-axis. To him, the car moves along the "x" axis with some velocity "v" in the positive "x"-direction. Alfred's frame of reference is considered an inertial frame because he is not accelerating, ignoring effects such as Earth's rotation and gravity.
Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive "x"-axis, and the direction in front of her as the positive "y"-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving – for instance, as she drives past Alfred, she observes him moving with velocity "v" in the negative "y"-direction. If she is driving north, then north is the positive "y"-direction; if she turns east, east becomes the positive "y"-direction.
Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be "a" in the negative "x"-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity "v" is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, "a" in the negative "y"-direction. However, if she is accelerating at rate "A" in the negative "y"-direction (in other words, slowing down), she will find Candace's acceleration to be "a′" = "a" − "A" in the negative "y"-direction—a smaller value than Alfred has measured. Similarly, if she is accelerating at rate "A" in the positive "y"-direction (speeding up), she will observe Candace's acceleration as "a′" = "a" + "A" in the negative "y"-direction—a larger value than Alfred's measurement.
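The arithmetic of the last paragraph can be summarized in a few lines (the numerical values are invented for illustration):

```python
a = 3.0   # magnitude of Candace's acceleration as measured by Alfred (m/s^2)
A = 1.0   # magnitude of Betsy's own acceleration (m/s^2)

a_inertial = a        # Betsy at constant velocity measures the same value as Alfred
a_slowing  = a - A    # Betsy decelerating measures a smaller value: 2.0 m/s^2
a_speeding = a + A    # Betsy speeding up measures a larger value: 4.0 m/s^2
print(a_inertial, a_slowing, a_speeding)
```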
Non-inertial frames.
Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below.
General relativity.
General relativity is based upon the principle of equivalence:<templatestyles src="Template:Blockquote/styles.css" />There is no experiment observers can perform to distinguish whether an acceleration arises because of a gravitational force or because their reference frame is accelerating.
This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911. Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 10^11. For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin.
Einstein's general theory modifies the distinction between nominally "inertial" and "non-inertial" effects by replacing special relativity's "flat" Minkowski Space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity.
However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a "local theory". "Local" can encompass, for example, the entire Milky Way galaxy: The astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the Solar System. Schwarzschild pointed out that that was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian.
Inertial frames and rotation.
In an inertial frame, Newton's first law, the "law of inertia", is satisfied: Any free motion has a constant magnitude and direction. Newton's second law for a particle takes the form:
formula_12
with F the net force (a vector), "m" the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame. The force F is the vector sum of all "real" forces on the particle, such as contact forces, electromagnetic, gravitational, and nuclear forces.
In contrast, Newton's second law in a rotating frame of reference (a non-inertial frame of reference), rotating at angular rate "Ω" about an axis, takes the form:
formula_13
which looks the same as in an inertial frame, but now the force F′ is the resultant of not only F, but also additional terms (the paragraph following this equation presents the main points without detailed mathematics):
formula_14
where the angular rotation of the frame is expressed by the vector Ω pointing in the direction of the axis of rotation, and with magnitude equal to the angular rate of rotation "Ω", symbol × denotes the vector cross product, vector x"B" locates the body and vector v"B" is the velocity of the body according to a rotating observer (different from the velocity seen by the inertial observer).
The extra terms in the force F′ are the "fictitious" forces for this frame, whose causes are external to the system in the frame. The first extra term is the Coriolis force, the second the centrifugal force, and the third the Euler force. These terms all have these properties: they vanish when "Ω" = 0; that is, they are zero for an inertial frame (which, of course, does not rotate); they take on a different magnitude and direction in every rotating frame, depending upon its particular value of Ω; they are ubiquitous in the rotating frame (affect every particle, regardless of circumstance); and they have no apparent source in identifiable physical sources, in particular, matter. Also, fictitious forces do not drop off with distance (unlike, for example, nuclear forces or electrical forces). For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis.
All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present.
In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket. In both cases, application of Newton's second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket).
As now known, the fixed stars are not fixed. Those that reside in the Milky Way turn with the galaxy, exhibiting proper motions. Those that are outside our galaxy (such as nebulae once mistaken to be stars) participate in their own motion as well, partly due to expansion of the universe, and partly due to peculiar velocities. For instance, the Andromeda Galaxy is on collision course with the Milky Way at a speed of 117 km/s. The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based on the simplicity of the laws of physics in the frame.
The laws of nature take a simpler form in inertial frames of reference because in these frames one does not have to introduce inertial forces when writing down Newton's law of motion.
In practice, using a frame of reference based upon the fixed stars as though it were an inertial frame of reference introduces little discrepancy. For example, the centrifugal acceleration of the Earth because of its rotation about the Sun is about thirty million times greater than that of the Sun about the galactic center.
To illustrate further, consider the question: "Does the Universe rotate?" An answer might explain the shape of the Milky Way galaxy using the laws of physics, although other observations might be more definitive; that is, provide larger discrepancies or less measurement uncertainty, like the anisotropy of the microwave background radiation or Big Bang nucleosynthesis. The flatness of the Milky Way depends on its rate of rotation in an inertial frame of reference. If its apparent rate of rotation is attributed entirely to rotation in an inertial frame, a different "flatness" is predicted than if it is supposed that part of this rotation is actually due to rotation of the universe and should not be included in the rotation of the galaxy itself. Based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the Universe. If the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. If no value of the rotation parameter is successful and theory is not within observational error, a modification of physical law is considered; for example, dark matter is invoked to explain the galactic rotation curve. So far, observations show any rotation of the universe is very slow, no faster than once every 6×10^13 years (10^−13 rad/yr), and debate persists over whether there is "any" rotation. However, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity.
When quantum effects are important, there are additional conceptual complications that arise in quantum reference frames.
Primed frames.
An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. "x′", "y′", "a′".
The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r′.
From the geometry of the situation
formula_15
Taking the first and second derivatives of this with respect to time
formula_16
formula_17
where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame.
These equations allow transformations between the two coordinate systems; for example, Newton's second law can be written as
formula_18
When there is accelerated motion due to a force being exerted, inertia manifests itself. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical reality of the manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for it occurs in response to a change in velocity due to a force. Seen from the perspective of a rotating frame of reference, the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect).
A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried).
This arrangement leads to the equation (see Fictitious force for a derivation):
formula_19
or, to solve for the acceleration in the accelerated frame,
formula_20
Multiplying through by the mass "m" gives
formula_21
where
formula_22 (Euler force),
formula_23 (Coriolis force),
formula_24 (centrifugal force).
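For a concrete feel for these terms, the sketch below evaluates them for made-up values of "Ω", r′ and v′ in a frame rotating at a constant rate about the z-axis (so the Euler term vanishes); it is an illustration only.

```python
import numpy as np

m = 2.0                                   # mass (kg)
omega = np.array([0.0, 0.0, 0.5])         # angular velocity of the frame (rad/s)
omega_dot = np.zeros(3)                   # constant rotation rate
r_p = np.array([3.0, 0.0, 0.0])           # position in the rotating frame (m)
v_p = np.array([0.0, 1.0, 0.0])           # velocity in the rotating frame (m/s)

F_euler       = -m * np.cross(omega_dot, r_p)
F_coriolis    = -2.0 * m * np.cross(omega, v_p)
F_centrifugal = -m * np.cross(omega, np.cross(omega, r_p))

print(F_euler)         # [0. 0. 0.]
print(F_coriolis)      # about [2, 0, 0] N, radially outward here
print(F_centrifugal)   # about [1.5, 0, 0] N, i.e. m * omega^2 * r pointing outward
```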
Separating non-inertial from inertial reference frames.
Theory.
Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces. <templatestyles src="Template:Blockquote/styles.css" />The effect of this being in the noninertial frame is to require the observer to introduce a fictitious force into his calculations….
The presence of fictitious forces indicates that the physical laws are not the simplest laws available; in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame:
<templatestyles src="Template:Blockquote/styles.css" />The equations of motion in a non-inertial system differ from the equations in an inertial system by additional terms called inertial forces. This allows us to detect experimentally the non-inertial nature of a system.
Bodies in non-inertial reference frames are subject to so-called "fictitious" forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames.
To apply the Newtonian definition of an inertial frame, the understanding of separation between "fictitious" forces and "real" forces must be made clear.
For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle, and is subject to centripetal force. How can it be decided that the rotating frame is a non-inertial frame? There are two approaches to this resolution: one approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force). It will be found there are no sources for these forces, no associated force carriers, no originating bodies. A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame.
Newton examined this problem himself using rotating spheres, as shown in Figure 2 and Figure 3. He pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed. If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish.
For linear acceleration, Newton expressed the idea of undetectability of straight-line accelerations held in common:
<templatestyles src="Template:Blockquote/styles.css" />If bodies, any how moved among themselves, are urged in the direction of parallel lines by equal accelerative forces, they will continue to move among themselves, after the same manner as if they had been urged by no such forces.
This principle generalizes the notion of an inertial frame. For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, inertial frame is a relative concept. With this in mind, inertial frames can collectively be defined as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set.
For these ideas to apply, everything observed in the frame has to be subject to a base-line, common acceleration shared by the frame itself. That situation would apply, for example, to the elevator example, where all objects are subject to the same gravitational acceleration, and the elevator itself accelerates at the same rate.
Applications.
Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external or internal signal source.
A gyrocompass, employed for navigation of seagoing vessels, finds the geometric north. It does so, not by sensing the Earth's magnetic field, but by using inertial space as its reference. The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line. When the gyroscope wheel inside the gyrocompass device is spun up, the way the gyroscope wheel is suspended causes the gyroscope wheel to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can be stationary with respect to the Earth and not be required to change direction with respect to inertial space. After being spun up, a gyrocompass can reach the direction of alignment with the Earth's axis in as little as a quarter of an hour.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{s}"
},
{
"math_id": 1,
"text": "\\mathbf{s}^{\\prime}"
},
{
"math_id": 2,
"text": "\n\\mathbf{r}^{\\prime} = \\mathbf{r} - \\mathbf{r}_{0} - \\mathbf{v} t\n"
},
{
"math_id": 3,
"text": "\nt^{\\prime} = t - t_{0}\n"
},
{
"math_id": 4,
"text": "x_1(t)"
},
{
"math_id": 5,
"text": "x_2(t)"
},
{
"math_id": 6,
"text": "x_1(t) = d + v_1 t = 200 + 22t,\\quad x_2(t) = v_2 t = 30t."
},
{
"math_id": 7,
"text": "x_1=x_2"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "200 + 22t = 30t,"
},
{
"math_id": 10,
"text": "8t = 200,"
},
{
"math_id": 11,
"text": "t = 25\\ \\mathrm{seconds}."
},
{
"math_id": 12,
"text": "\\mathbf{F} = m \\mathbf{a} \\ ,"
},
{
"math_id": 13,
"text": "\\mathbf{F}' = m \\mathbf{a} \\ ,"
},
{
"math_id": 14,
"text": "\\mathbf{F}' = \\mathbf{F} - 2m \\mathbf{\\Omega} \\times \\mathbf{v}_{B} - m \\mathbf{\\Omega} \\times (\\mathbf{\\Omega} \\times \\mathbf{x}_B ) - m \\frac{d \\mathbf{\\Omega}}{dt} \\times \\mathbf{x}_B \\ , "
},
{
"math_id": 15,
"text": "\\mathbf r = \\mathbf R + \\mathbf r'."
},
{
"math_id": 16,
"text": "\\mathbf v = \\mathbf V + \\mathbf v',"
},
{
"math_id": 17,
"text": "\\mathbf a = \\mathbf A + \\mathbf a'."
},
{
"math_id": 18,
"text": "\\mathbf F = m\\mathbf a = m\\mathbf A + m\\mathbf a'."
},
{
"math_id": 19,
"text": "\\mathbf a = \\mathbf a' + \\dot{\\boldsymbol\\omega} \\times \\mathbf r' + 2\\boldsymbol\\omega \\times \\mathbf v' + \\boldsymbol\\omega \\times (\\boldsymbol\\omega \\times \\mathbf r') + \\mathbf A_0,"
},
{
"math_id": 20,
"text": "\\mathbf a' = \\mathbf a - \\dot{\\boldsymbol\\omega} \\times \\mathbf r' - 2\\boldsymbol\\omega \\times \\mathbf v' - \\boldsymbol\\omega \\times (\\boldsymbol\\omega \\times \\mathbf r') - \\mathbf A_0."
},
{
"math_id": 21,
"text": "\\mathbf F' = \\mathbf F_\\mathrm{physical} + \\mathbf F'_\\mathrm{Euler} + \\mathbf F'_\\mathrm{Coriolis} + \\mathbf F'_\\mathrm{centripetal} - m\\mathbf A_0,"
},
{
"math_id": 22,
"text": "\\mathbf F'_\\mathrm{Euler} = -m\\dot{\\boldsymbol\\omega} \\times \\mathbf r'"
},
{
"math_id": 23,
"text": "\\mathbf F'_\\mathrm{Coriolis} = -2m\\boldsymbol\\omega \\times \\mathbf v'"
},
{
"math_id": 24,
"text": "\\mathbf F'_\\mathrm{centrifugal} = -m\\boldsymbol\\omega \\times (\\boldsymbol\\omega \\times \\mathbf r') = m(\\omega^2 \\mathbf r' - (\\boldsymbol\\omega \\cdot \\mathbf r')\\boldsymbol\\omega)"
}
] |
https://en.wikipedia.org/wiki?curid=14838
|
1483960
|
Charge invariance
|
Principle in particle physics
Charge invariance refers to the fixed value of the electric charge of a particle regardless of its motion. Like mass, total spin and magnetic moment, a particle's charge quantum number remains unchanged between two reference frames in relative motion. For example, an electron has a specific charge "e", total spin formula_0, and invariant mass "m"e. Accelerate that electron, and the charge, spin and mass assigned to it in all physical laws in the frame at rest and the moving frame remain the same – "e", formula_0, "m"e. In contrast, the particle's total relativistic energy and de Broglie wavelength change between the reference frames.
The origin of charge invariance, and of all relativistic invariants, is presently unclear. There may be some hints proposed by string/M-theory. It is possible that the concept of charge invariance may provide a key to unlocking the mystery of unification in physics – the single theory of gravity, electromagnetism, and the strong and weak nuclear forces.
The property of charge invariance is embedded in the charge density – current density four-vector formula_1, whose vanishing divergence formula_2 then signifies charge conservation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tfrac{\\hbar}{2}"
},
{
"math_id": 1,
"text": "j^\\mu = \\left(c\\rho, \\vec{j}\\right)"
},
{
"math_id": 2,
"text": "\\partial_\\mu j^\\mu = 0"
}
] |
https://en.wikipedia.org/wiki?curid=1483960
|
14841405
|
Pseudo-zero set
|
In complex analysis (a branch of mathematical analysis), the pseudo-zero set or root neighborhood of a degree-"m" polynomial "p"("z") is the set of all complex numbers that are roots of polynomials whose coefficients differ from those of "p" by a small amount. Namely, given a norm |·| on the space formula_0 of polynomial coefficients, the pseudo-zero set is the set of all zeros of all degree-"m" polynomials "q" such that |p − q| (as vectors of coefficients) is less than a given "ε".
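The definition can be explored numerically; the sketch below (an illustration with arbitrary choices of polynomial, norm and "ε", not taken from the references) samples the root neighborhood of "p"("z") = "z"2 − 2"z" + 1 under the max-norm on coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([1.0, -2.0, 1.0])   # coefficients of z^2 - 2z + 1, a double root at z = 1
eps = 0.1

roots = []
for _ in range(2000):
    q = p + rng.uniform(-eps, eps, size=p.size)   # |p - q|_inf < eps
    roots.extend(np.roots(q))

roots = np.array(roots)
print(np.max(np.abs(roots - 1.0)))
# The double root at z = 1 spreads into a neighborhood of radius roughly sqrt(eps),
# illustrating the sensitivity of multiple roots to coefficient perturbations.
```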
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{C}^{m+1}"
}
] |
https://en.wikipedia.org/wiki?curid=14841405
|
148420
|
Euler characteristic
|
Topological invariant in mathematics
In mathematics, and more specifically in algebraic topology and polyhedral combinatorics, the Euler characteristic (or Euler number, or Euler–Poincaré characteristic) is a topological invariant, a number that describes a topological space's shape or structure regardless of the way it is bent. It is commonly denoted by formula_0 (Greek lower-case letter chi).
The Euler characteristic was originally defined for polyhedra and used to prove various theorems about them, including the classification of the Platonic solids. It was stated for Platonic solids in 1537 in an unpublished manuscript by Francesco Maurolico. Leonhard Euler, for whom the concept is named, introduced it for convex polyhedra more generally but failed to rigorously prove that it is an invariant. In modern mathematics, the Euler characteristic arises from homology and, more abstractly, homological algebra.
Polyhedra.
The Euler characteristic χ was classically defined for the surfaces of polyhedra, according to the formula
formula_1
where V, E, and F are respectively the numbers of vertices (corners), edges and faces in the given polyhedron. Any convex polyhedron's surface has Euler characteristic
formula_2
This equation, stated by Euler in 1758,
is known as Euler's polyhedron formula. It corresponds to the Euler characteristic of the sphere (i.e. formula_3), and applies identically to spherical polyhedra. An illustration of the formula on all Platonic polyhedra is given below.
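As a quick numerical check, the vertex, edge and face counts below (the standard values for each Platonic solid, listed here for illustration) all satisfy the formula:

```python
# Standard counts (V, E, F) for the five Platonic solids
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (V, E, F) in platonic.items():
    print(f"{name:12s}  V - E + F = {V - E + F}")   # 2 for every solid
```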
The surfaces of nonconvex polyhedra can have various Euler characteristics:
For regular polyhedra, Arthur Cayley derived a modified form of Euler's formula using the density D, vertex figure density formula_4 and face density formula_5:
formula_6
This version holds both for convex polyhedra (where the densities are all 1) and the non-convex Kepler–Poinsot polyhedra.
Projective polyhedra all have Euler characteristic 1, like the real projective plane, while the surfaces of toroidal polyhedra all have Euler characteristic 0, like the torus.
Plane graphs.
The Euler characteristic can be defined for connected plane graphs by the same formula_7 formula as for polyhedral surfaces, where F is the number of faces in the graph, including the exterior face.
The Euler characteristic of any plane connected graph G is 2. This is easily proved by induction on the number of faces determined by G, starting with a tree as the base case. For trees, formula_8 and formula_9 If G has C components (disconnected graphs), the same argument by induction on F shows that formula_10 One of the few graph theory papers of Cauchy also proves this result.
Via stereographic projection the plane maps to the 2-sphere, such that a connected graph maps to a polygonal decomposition of the sphere, which has Euler characteristic 2. This viewpoint is implicit in Cauchy's proof of Euler's formula given below.
Proof of Euler's formula.
There are many proofs of Euler's formula. One was given by Cauchy in 1811, as follows. It applies to any convex polyhedron, and more generally to any polyhedron whose boundary is topologically equivalent to a sphere and whose faces are topologically equivalent to disks.
Remove one face of the polyhedral surface. By pulling the edges of the missing face away from each other, deform all the rest into a planar graph of points and curves, in such a way that the perimeter of the missing face is placed externally, surrounding the graph obtained, as illustrated by the first of the three graphs for the special case of the cube. (The assumption that the polyhedral surface is homeomorphic to the sphere at the beginning is what makes this possible.) After this deformation, the regular faces are generally not regular anymore. The number of vertices and edges has remained the same, but the number of faces has been reduced by 1. Therefore, proving Euler's formula for the polyhedron reduces to proving formula_11 for this deformed, planar object.
If there is a face with more than three sides, draw a diagonal—that is, a curve through the face connecting two vertices that are not yet connected. Each new diagonal adds one edge and one face and does not change the number of vertices, so it does not change the quantity formula_12 (The assumption that all faces are disks is needed here, to show via the Jordan curve theorem that this operation increases the number of faces by one.) Continue adding edges in this manner until all of the faces are triangular.
Apply repeatedly either of the following two transformations, maintaining the invariant that the exterior boundary is always a simple cycle:
1. Remove a triangle with only one edge adjacent to the exterior. This removes one edge and one face and does not change the number of vertices, so it preserves formula_12
2. Remove a triangle with two edges shared with the exterior of the network. Each such removal deletes one vertex, two edges and one face, so it also preserves formula_12
These transformations eventually reduce the planar graph to a single triangle. (Without the simple-cycle invariant, removing a triangle might disconnect the remaining triangles, invalidating the rest of the argument. A valid removal order is an elementary example of a shelling.)
At this point the lone triangle has formula_13, formula_14, and formula_15, so that formula_16 Since each of the two above transformation steps preserved this quantity, we have shown formula_11 for the deformed, planar object, thus demonstrating formula_17 for the polyhedron. This proves the theorem.
For additional proofs, see Eppstein (2013). Multiple proofs, including their flaws and limitations, are used as examples in "Proofs and Refutations" by Lakatos (1976).
Topological definition.
The polyhedral surfaces discussed above are, in modern language, two-dimensional finite CW-complexes. (When only triangular faces are used, they are two-dimensional finite simplicial complexes.) In general, for any finite CW-complex, the Euler characteristic can be defined as the alternating sum
formula_18
where "k""n" denotes the number of cells of dimension "n" in the complex.
Similarly, for a simplicial complex, the Euler characteristic equals the alternating sum
formula_18
where "k""n" denotes the number of "n"-simplexes in the complex.
Betti number alternative.
More generally still, for any topological space, we can define the "n"th Betti number "b""n" as the rank of the "n"-th singular homology group. The Euler characteristic can then be defined as the alternating sum
formula_19
This quantity is well-defined if the Betti numbers are all finite and if they are zero beyond a certain index "n"0. For simplicial complexes, this is not the same definition as in the previous paragraph but a homology computation shows that the two definitions will give the same value for formula_20.
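The same kind of alternating sum can be evaluated over Betti numbers; a short illustrative check, using Betti numbers assumed from standard homology computations for the sphere, the torus and a contractible space:

```python
def euler_from_betti(betti_numbers):
    """Alternating sum b0 - b1 + b2 - ... of Betti numbers."""
    return sum((-1) ** n * b for n, b in enumerate(betti_numbers))

assert euler_from_betti([1, 0, 1]) == 2   # 2-sphere: b0 = 1, b1 = 0, b2 = 1
assert euler_from_betti([1, 2, 1]) == 0   # 2-torus:  b0 = 1, b1 = 2, b2 = 1
assert euler_from_betti([1]) == 1         # any contractible space
```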
Properties.
The Euler characteristic behaves well with respect to many basic operations on topological spaces, as follows.
Homotopy invariance.
Homology is a topological invariant, and moreover a homotopy invariant: Two topological spaces that are homotopy equivalent have isomorphic homology groups. It follows that the Euler characteristic is also a homotopy invariant.
For example, any contractible space (that is, one homotopy equivalent to a point) has trivial homology, meaning that the 0th Betti number is 1 and the others 0. Therefore, its Euler characteristic is 1. This case includes Euclidean space formula_21 of any dimension, as well as the solid unit ball in any Euclidean space — the one-dimensional interval, the two-dimensional disk, the three-dimensional ball, etc.
For another example, any convex polyhedron is homeomorphic to the three-dimensional ball, so its surface is homeomorphic (hence homotopy equivalent) to the two-dimensional sphere, which has Euler characteristic 2. This explains why convex polyhedra have Euler characteristic 2.
Inclusion–exclusion principle.
If "M" and "N" are any two topological spaces, then the Euler characteristic of their disjoint union is the sum of their Euler characteristics, since homology is additive under disjoint union:
formula_22
More generally, if "M" and "N" are subspaces of a larger space "X", then so are their union and intersection. In some cases, the Euler characteristic obeys a version of the inclusion–exclusion principle:
formula_23
This is true in the following cases:
if "M" and "N" form an excisive couple, in particular if the interiors of "M" and "N" together cover their union;
if "X" is a locally compact space and Euler characteristics with compact supports are used, in which case no assumptions on "M" or "N" are needed.
In general, the inclusion–exclusion principle is false. A counterexample is given by taking "X" to be the real line, "M" a subset consisting of one point and "N" the complement of "M".
Connected sum.
For two connected closed n-manifolds formula_24 one can obtain a new connected manifold formula_25 via the connected sum operation. The Euler characteristic is related by the formula
formula_26
Product property.
Also, the Euler characteristic of any product space "M" × "N" is
formula_27
These addition and multiplication properties are also enjoyed by the cardinality of sets. In this way, the Euler characteristic can be viewed as a generalisation of cardinality.
Covering spaces.
Similarly, for a "k"-sheeted covering space formula_28 one has
formula_29
More generally, for a ramified covering space, the Euler characteristic of the cover can be computed from the above, with a correction factor for the ramification points, which yields the Riemann–Hurwitz formula.
Fibration property.
The product property holds much more generally, for fibrations with certain conditions.
If formula_30 is a fibration with fiber "F," with the base "B" path-connected, and the fibration is orientable over a field "K," then the Euler characteristic with coefficients in the field "K" satisfies the product property:
formula_31
This includes product spaces and covering spaces as special cases,
and can be proven by the Serre spectral sequence on homology of a fibration.
For fiber bundles, this can also be understood in terms of a transfer map formula_32 – note that this is a lifting and goes "the wrong way" – whose composition with the projection map formula_33 is multiplication by the Euler class of the fiber:
formula_34
Examples.
Surfaces.
The Euler characteristic can be calculated easily for general surfaces by finding a polygonization of the surface (that is, a description as a CW-complex) and using the above definitions.
Soccer ball.
It is common to construct soccer balls by stitching together pentagonal and hexagonal pieces, with three pieces meeting at each vertex (see for example the Adidas Telstar). If P pentagons and H hexagons are used, then there are formula_35 faces, formula_36 vertices, and formula_37 edges. The Euler characteristic is thus
formula_38
Because the sphere has Euler characteristic 2, it follows that formula_39 That is, a soccer ball constructed in this way always has 12 pentagons. The number of hexagons can be any nonnegative integer except 1.
This result is applicable to fullerenes and Goldberg polyhedra.
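The counting argument can be checked numerically; the sketch below (illustrative only) uses the standard truncated-icosahedron ball with 12 pentagons and 20 hexagons:

```python
def soccer_ball_counts(pentagons, hexagons):
    """Vertex, edge and face counts for a pentagon/hexagon polyhedron
    in which exactly three faces meet at every vertex."""
    sides = 5 * pentagons + 6 * hexagons
    return sides // 3, sides // 2, pentagons + hexagons   # V, E, F

v, e, f = soccer_ball_counts(12, 20)   # the classic truncated icosahedron
assert (v, e, f) == (60, 90, 32)
assert v - e + f == 2                  # Euler characteristic of the sphere
```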
Arbitrary dimensions.
The n dimensional sphere has singular homology groups equal to
formula_40
hence has Betti number 1 in dimensions 0 and n, and all other Betti numbers are 0. Its Euler characteristic is then 1 + (−1)^"n", that is, either 0 if n is odd, or 2 if n is even.
The n dimensional real projective space is the quotient of the n sphere by the antipodal map. It follows that its Euler characteristic is exactly half that of the corresponding sphere – either 0 or 1.
The n dimensional torus is the product space of n circles. Its Euler characteristic is 0, by the product property. More generally, any compact parallelizable manifold, including any compact Lie group, has Euler characteristic 0.
The Euler characteristic of any closed odd-dimensional manifold is also 0. The case for orientable examples is a corollary of Poincaré duality. This property applies more generally to any compact stratified space all of whose strata have odd dimension. It also applies to closed odd-dimensional non-orientable manifolds, via the two-to-one orientable double cover.
Relations to other invariants.
The Euler characteristic of a closed orientable surface can be calculated from its genus g (the number of tori in a connected sum decomposition of the surface; intuitively, the number of "handles") as
formula_41
The Euler characteristic of a closed non-orientable surface can be calculated from its non-orientable genus k (the number of real projective planes in a connected sum decomposition of the surface) as
formula_42
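A trivial encoding of these two genus formulas, with a few standard surfaces as test values (illustrative sketch, not from the source):

```python
def chi_orientable(g):
    """Euler characteristic of a closed orientable surface of genus g."""
    return 2 - 2 * g

def chi_nonorientable(k):
    """Euler characteristic of a closed non-orientable surface of genus k."""
    return 2 - k

assert chi_orientable(0) == 2      # sphere
assert chi_orientable(1) == 0      # torus
assert chi_orientable(2) == -2     # genus-2 surface
assert chi_nonorientable(1) == 1   # real projective plane
assert chi_nonorientable(2) == 0   # Klein bottle
```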
For closed smooth manifolds, the Euler characteristic coincides with the Euler number, i.e., the Euler class of its tangent bundle evaluated on the fundamental class of a manifold. The Euler class, in turn, relates to all other characteristic classes of vector bundles.
For closed Riemannian manifolds, the Euler characteristic can also be found by integrating the curvature; see the Gauss–Bonnet theorem for the two-dimensional case and the generalized Gauss–Bonnet theorem for the general case.
A discrete analog of the Gauss–Bonnet theorem is Descartes' theorem that the "total defect" of a polyhedron, measured in full circles, is the Euler characteristic of the polyhedron.
Hadwiger's theorem characterizes the Euler characteristic as the "unique" (up to scalar multiplication) translation-invariant, finitely additive, not-necessarily-nonnegative set function defined on finite unions of compact convex sets in ℝ"n" that is "homogeneous of degree 0".
Generalizations.
For every combinatorial cell complex, one defines the Euler characteristic as the number of 0-cells, minus the number of 1-cells, plus the number of 2-cells, etc., if this alternating sum is finite. In particular, the Euler characteristic of a finite set is simply its cardinality, and the Euler characteristic of a graph is the number of vertices minus the number of edges. (Olaf Post calls this a "well-known formula".)
More generally, one can define the Euler characteristic of any chain complex to be the alternating sum of the ranks of the homology groups of the chain complex, assuming that all these ranks are finite.
A version of Euler characteristic used in algebraic geometry is as follows. For any coherent sheaf formula_43 on a proper scheme X, one defines its Euler characteristic to be
formula_44
where formula_45 is the dimension of the i-th sheaf cohomology group of formula_43. In this case, the dimensions are all finite by Grothendieck's finiteness theorem. This is an instance of the Euler characteristic of a chain complex, where the chain complex is a finite resolution of formula_46 by acyclic sheaves.
Another generalization of the concept of Euler characteristic on manifolds comes from orbifolds (see Euler characteristic of an orbifold). While every manifold has an integer Euler characteristic, an orbifold can have a fractional Euler characteristic. For example, the teardrop orbifold has Euler characteristic 1 + 1/"p", where "p" is a prime number corresponding to the cone angle 2π/"p".
The concept of Euler characteristic of the reduced homology of a bounded finite poset is another generalization, important in combinatorics. A poset is "bounded" if it has smallest and largest elements; call them 0 and 1. The Euler characteristic of such a poset is defined as the integer "μ"(0,1), where μ is the Möbius function in that poset's incidence algebra.
This can be further generalized by defining a rational valued Euler characteristic for certain finite categories, a notion compatible with the Euler characteristics of graphs, orbifolds and posets mentioned above. In this setting, the Euler characteristic of a finite group or monoid G is 1/|"G"|, and the Euler characteristic of a finite groupoid is the sum of 1/|"G"|, where we picked one representative group "G" for each connected component of the groupoid.
References.
Notes.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " \\chi "
},
{
"math_id": 1,
"text": " \\chi = V - E + F "
},
{
"math_id": 2,
"text": "\\ \\chi = V - E + F = 2 ~."
},
{
"math_id": 3,
"text": "\\ \\chi = 2\\ "
},
{
"math_id": 4,
"text": "\\ d_v\\ ,"
},
{
"math_id": 5,
"text": "\\ d_f\\ :"
},
{
"math_id": 6,
"text": "\\ d_v V - E + d_f F = 2 D ~."
},
{
"math_id": 7,
"text": "\\ V - E + F\\ "
},
{
"math_id": 8,
"text": "\\ E = V - 1\\ "
},
{
"math_id": 9,
"text": "\\ F = 1 ~."
},
{
"math_id": 10,
"text": "\\ V - E + F - C = 1 ~."
},
{
"math_id": 11,
"text": "\\ V - E + F = 1\\ "
},
{
"math_id": 12,
"text": "\\ V - E + F ~."
},
{
"math_id": 13,
"text": "\\ V = 3\\ ,"
},
{
"math_id": 14,
"text": "\\ E = 3\\ ,"
},
{
"math_id": 15,
"text": "\\ F = 1\\ ,"
},
{
"math_id": 16,
"text": "\\ V - E + F = 1 ~."
},
{
"math_id": 17,
"text": "\\ V - E + F = 2\\ "
},
{
"math_id": 18,
"text": "\\chi = k_0 - k_1 + k_2 - k_3 + \\cdots,"
},
{
"math_id": 19,
"text": "\\chi = b_0 - b_1 + b_2 - b_3 + \\cdots."
},
{
"math_id": 20,
"text": "\\chi"
},
{
"math_id": 21,
"text": "\\mathbb{R}^n"
},
{
"math_id": 22,
"text": "\\chi(M \\sqcup N) = \\chi(M) + \\chi(N)."
},
{
"math_id": 23,
"text": "\\chi(M \\cup N) = \\chi(M) + \\chi(N) - \\chi(M \\cap N)."
},
{
"math_id": 24,
"text": "M, N"
},
{
"math_id": 25,
"text": "M \\# N"
},
{
"math_id": 26,
"text": " \\chi(M \\# N) = \\chi(M) + \\chi(N) - \\chi(S^n)."
},
{
"math_id": 27,
"text": "\\chi(M \\times N) = \\chi(M) \\cdot \\chi(N)."
},
{
"math_id": 28,
"text": "\\tilde{M} \\to M,"
},
{
"math_id": 29,
"text": "\\chi(\\tilde{M}) = k \\cdot \\chi(M)."
},
{
"math_id": 30,
"text": "p\\colon E \\to B"
},
{
"math_id": 31,
"text": "\\chi(E) = \\chi(F)\\cdot \\chi(B)."
},
{
"math_id": 32,
"text": "\\tau\\colon H_*(B) \\to H_*(E)"
},
{
"math_id": 33,
"text": "p_*\\colon H_*(E) \\to H_*(B)"
},
{
"math_id": 34,
"text": "p_* \\circ \\tau = \\chi(F) \\cdot 1."
},
{
"math_id": 35,
"text": "\\ F = P + H\\ "
},
{
"math_id": 36,
"text": "\\ V = \\tfrac{1}{3}\\left(\\ 5 P + 6 H\\ \\right)\\ "
},
{
"math_id": 37,
"text": "\\ E = \\tfrac{1}{2} \\left(\\ 5P + 6H\\ \\right)\\ "
},
{
"math_id": 38,
"text": " V - E + F = \\tfrac{1}{3} \\left(\\ 5 P + 6 H\\ \\right) - \\tfrac{1}{2} \\left(\\ 5 P + 6 H\\ \\right) + P + H = \\tfrac{1}{6} P ~."
},
{
"math_id": 39,
"text": "\\ P = 12 ~."
},
{
"math_id": 40,
"text": "H_k(\\mathrm{S}^n) = \\begin{cases} \\mathbb{Z} ~& k = 0 ~~ \\mathsf{ or } ~~ k = n \\\\ \\{0\\} & \\mathsf{otherwise}\\ , \\end{cases}"
},
{
"math_id": 41,
"text": "\\chi = 2 - 2g ~."
},
{
"math_id": 42,
"text": "\\chi = 2 - k ~."
},
{
"math_id": 43,
"text": "\\mathcal{F}"
},
{
"math_id": 44,
"text": " \\chi ( \\mathcal{F})= \\sum_i (-1)^i h^i(X,\\mathcal{F})\\ ,"
},
{
"math_id": 45,
"text": "\\ h^i(X, \\mathcal{F})\\ "
},
{
"math_id": 46,
"text": "\\ \\mathcal{F}\\ "
}
] |
https://en.wikipedia.org/wiki?curid=148420
|
1484228
|
Montel's theorem
|
Two theorems about families of holomorphic functions
In complex analysis, an area of mathematics, Montel's theorem refers to one of two theorems about families of holomorphic functions. These are named after French mathematician Paul Montel, and give conditions under which a family of holomorphic functions is normal.
Locally uniformly bounded families are normal.
The first, and simpler, version of the theorem states that a family of holomorphic functions defined on an open subset of the complex numbers is normal if and only if it is locally uniformly bounded.
This theorem has the following formally stronger corollary. Suppose that
formula_0 is a family of
meromorphic functions on an open set formula_1. If formula_2 is such that
formula_0 is not normal at formula_3, and formula_4 is a neighborhood of formula_3, then formula_5 is dense
in the complex plane.
Functions omitting two values.
The stronger version of Montel's Theorem (occasionally referred to as the Fundamental Normality Test) states that a family of holomorphic functions, all of which omit the same two values formula_6 is normal.
Necessity.
The conditions in the above theorems are sufficient, but not necessary for normality. Indeed,
the family formula_7 is normal, but does not omit any complex value.
Proofs.
The first version of Montel's theorem is a direct consequence of Marty's Theorem (which
states that a family is normal if and only if the spherical derivatives are locally bounded)
and Cauchy's integral formula.
This theorem has also been called the Stieltjes–Osgood theorem, after Thomas Joannes Stieltjes and William Fogg Osgood.
The Corollary stated above is deduced as follows. Suppose that all the functions in formula_0 omit the same neighborhood of the point formula_8. By postcomposing with the map formula_9 we obtain a uniformly bounded family, which is normal by the first version of the theorem.
The second version of Montel's theorem can be deduced from the first by using the fact that there exists a holomorphic universal covering from the unit disk to the twice punctured plane formula_10. (Such a covering is given by the elliptic modular function).
This version of Montel's theorem can be also derived from Picard's theorem,
by using Zalcman's lemma.
Relationship to theorems for entire functions.
A heuristic principle known as Bloch's Principle (made precise by Zalcman's lemma) states that properties that imply that an entire function is constant correspond to properties that ensure that a family of holomorphic functions is normal.
For example, the first version of Montel's theorem stated above is the analog of Liouville's theorem, while the second version corresponds to Picard's theorem.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
"This article incorporates material from Montel's theorem on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "\\mathcal{F}"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "z_0\\in D"
},
{
"math_id": 3,
"text": "z_0"
},
{
"math_id": 4,
"text": "U\\subset D"
},
{
"math_id": 5,
"text": "\\bigcup_{f\\in\\mathcal{F}}f(U)"
},
{
"math_id": 6,
"text": "a,b\\in\\mathbb{C},"
},
{
"math_id": 7,
"text": "\\{z\\mapsto z\\}"
},
{
"math_id": 8,
"text": "z_1"
},
{
"math_id": 9,
"text": "z\\mapsto \\frac{1}{z-z_1}"
},
{
"math_id": 10,
"text": "\\mathbb{C}\\setminus\\{a,b\\}"
}
] |
https://en.wikipedia.org/wiki?curid=1484228
|
14843
|
Interstellar travel
|
Hypothetical travel between stars or planetary systems
Interstellar travel is the hypothetical travel of spacecraft from one star system, solitary star, or planetary system to another. Interstellar travel is expected to prove much more difficult than interplanetary spaceflight due to the vast difference in the scale of the involved distances. Whereas the distance between any two planets in the Solar System is less than 55 astronomical units (AU), stars are typically separated by hundreds of thousands of AU, causing these distances to typically be expressed instead in light-years. Because of the vastness of these distances, non-generational interstellar travel based on known physics would need to occur at a high percentage of the speed of light; even so, travel times would be long, at least decades and perhaps millennia or longer.
As of 2024, five uncrewed spacecraft, all launched and operated by the United States, have achieved the escape velocity required to leave the Solar System as part of missions to explore parts of the outer system. They will therefore continue to travel through interstellar space indefinitely. However, they will not approach another star for hundreds of thousands of years, long after they have ceased to operate (though in theory the Voyager Golden Record would be playable in the event that the spacecraft is retrieved by an extraterrestrial civilization).
The speeds required for interstellar travel in a human lifetime far exceed what current methods of space travel can provide. Even with a hypothetically perfectly efficient propulsion system, the kinetic energy corresponding to those speeds is enormous by today's standards of energy development. Moreover, collisions by spacecraft with cosmic dust and gas at such speeds would be very dangerous for both passengers and the spacecraft itself.
A number of strategies have been proposed to deal with these problems, ranging from giant arks that would carry entire societies and ecosystems, to microscopic space probes. Many different spacecraft propulsion systems have been proposed to give spacecraft the required speeds, including nuclear propulsion, beam-powered propulsion, and methods based on speculative physics.
Humanity would need to overcome considerable technological and economic challenges to achieve either crewed or uncrewed interstellar travel. Even the most optimistic views forecast that it will be decades before this milestone is reached. However, in spite of the challenges, a wide range of scientific benefits are expected should interstellar travel become a reality.
Most interstellar travel concepts require a developed space logistics system capable of moving millions of tonnes to a construction / operating location, and most would require gigawatt-scale power for construction or power (such as Star Wisp– or Light Sail–type concepts). Such a system could grow organically if space-based solar power became a significant component of Earth's energy mix. Consumer demand for a multi-terawatt system would create the necessary multi-million ton/year logistical system.
Challenges.
Interstellar distances.
Distances between the planets in the Solar System are often measured in astronomical units (AU), defined as the average distance between the Sun and Earth, some 150 million kilometres (93 million miles). Venus, the closest planet to Earth, is (at closest approach) 0.28 AU away. Neptune, the farthest planet from the Sun, is 29.8 AU away. As of January 20, 2023, Voyager 1, the farthest human-made object from Earth, is 163 AU away, exiting the Solar System at a speed of 17 km/s (0.006% of the speed of light).
The closest known star, Proxima Centauri, is approximately 4.2 light-years (about 268,000 AU) away, or over 9,000 times farther away than Neptune.
Because of this, distances between stars are usually expressed in light-years (defined as the distance that light travels in vacuum in one Julian year) or in parsecs (one parsec is 3.26 ly, the distance at which stellar parallax is exactly one arcsecond, hence the name). Light in a vacuum travels around 300,000 kilometres (186,000 mi) per second, so 1 light-year is about 9.46 trillion kilometres (5.88 trillion miles) or 63,241 AU. Hence, Proxima Centauri is approximately 4.243 light-years from Earth.
Another way of understanding the vastness of interstellar distances is by scaling: One of the closest stars to the Sun, Alpha Centauri A (a Sun-like star that is one of two companions of Proxima Centauri), can be pictured by scaling down the Earth–Sun distance to one metre. On this scale, the distance to Alpha Centauri A would be about 276 kilometres.
The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/390 of a light-year in 46 years and is currently moving at 1/17,600 the speed of light. At this rate, a journey to Proxima Centauri would take 75,000 years.
Required energy.
A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy formula_0 where formula_1 is the final mass. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the lower bound for the required energy is doubled to formula_2.
The velocity for a crewed round trip of a few decades to even the nearest star is several thousand times greater than those of present space vehicles. This means that due to the formula_3 term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 4.5×10^17 joules, or about 125 terawatt-hours (world energy consumption 2008 was 143,851 terawatt-hours), without factoring in the efficiency of the propulsion mechanism. This energy has to be generated onboard from stored fuel, harvested from the interstellar medium, or projected over immense distances.
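A back-of-the-envelope check of this lower bound, using the non-relativistic kinetic-energy formula and ignoring propulsion inefficiency (illustrative sketch only):

```python
C = 299_792_458.0      # speed of light, m/s
J_PER_TWH = 3.6e15     # joules per terawatt-hour

def kinetic_energy_joules(mass_kg, fraction_of_c):
    """Non-relativistic lower bound E = 1/2 m v^2."""
    v = fraction_of_c * C
    return 0.5 * mass_kg * v ** 2

e = kinetic_energy_joules(1000.0, 0.1)        # one metric ton to 0.1 c
print(f"{e:.2e} J = {e / J_PER_TWH:.0f} TWh") # about 4.5e17 J, roughly 125 TWh
```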
Interstellar medium.
A knowledge of the properties of the interstellar gas and dust through which the vehicle must pass is essential for the design of any interstellar space mission. A major issue with traveling at extremely high speeds is that due to the requisite high relative speeds and large kinetic energies, collisions with interstellar dust could cause considerable damage to the craft. Various shielding methods to mitigate this problem have been proposed. Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects and mitigation methods have been discussed in literature, but many unknowns remain. An additional consideration is that due the non-homogeneous distribution of interstellar matter around the Sun, these risks would vary between different trajectories. Although a high density interstellar medium may cause difficulties for many interstellar travel concepts, interstellar ramjets, and some proposed concepts for decelerating interstellar spacecraft, would actually benefit from a denser interstellar medium.
Hazards.
The crew of an interstellar ship would face several significant hazards, including the psychological effects of long-term isolation, the physiological effects of extreme acceleration, the effects of exposure to ionising radiation, and the physiological effects of weightlessness to the muscles, joints, bones, immune system, and eyes. There also exists the risk of impact by micrometeoroids and other space debris. These risks represent challenges that have yet to be overcome.
Wait calculation.
The speculative fiction writer and physicist Robert L. Forward has argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and not yet having reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion (the incessant obsolescence postulate). In 2006, Andrew Kennedy calculated ideal departure dates for a trip to Barnard's Star using a more precise concept of the wait calculation where for a given destination and growth rate in propulsion capacity there is a departure point that overtakes earlier launches and will not be overtaken by later ones and concluded "an interstellar journey of 6 light years can best be made in about 635 years from now if growth continues at about 1.4% per annum", or approximately 2641 AD. It may be the most significant calculation for competing cultures occupying the galaxy.
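The trade-off behind the wait calculation can be sketched numerically: if achievable cruise speed grows at a fixed annual rate, the total time to arrival (years waited plus transit time at the speed then available) has a minimum over the departure year. The growth model, starting speed and speed cap below are illustrative assumptions, not Kennedy's actual figures:

```python
# Minimal sketch of a "wait calculation": pick the departure year that
# minimises (years waited) + (transit time at the speed available then).
DISTANCE_LY = 6.0    # roughly the distance to Barnard's Star
V0 = 6e-5            # assumed present-day cruise speed, as a fraction of c
GROWTH = 0.014       # assumed 1.4 % annual growth in achievable speed
V_MAX = 0.5          # arbitrary cap, as a fraction of c

def total_time(wait_years):
    v = min(V0 * (1 + GROWTH) ** wait_years, V_MAX)
    return wait_years + DISTANCE_LY / v

best = min(range(2000), key=total_time)
print(f"best departure after ~{best} years; "
      f"arrival ~{total_time(best):.0f} years from now")
```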
Prime targets for interstellar travel.
There are 59 known stellar systems within 40 light years of the Sun, containing 81 visible stars. The following could be considered prime targets for interstellar missions:
Existing astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration.
Proposed methods.
Slow, uncrewed probes.
"Slow" interstellar missions (still fast by other standards) based on current and near-future propulsion technologies are associated with trip times starting from about several decades to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes like those used in the Voyager program. By taking along no crew, the cost and complexity of the mission is significantly reduced, as is the mass that needs to be accelerated, although technology lifetime is still a significant issue next to obtaining a reasonable speed of travel. Proposed concepts include Project Daedalus, Project Icarus, Project Dragonfly, Project Longshot, and more recently Breakthrough Starshot.
Fast, uncrewed probes.
Nanoprobes.
Near-lightspeed nano spacecraft might be possible within the near future, built on existing microchip technology combined with newly developed nanoscale thrusters. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators shooting conductive nanoparticles out into space.
Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that a large number of nanoprobes would need to be sent due to the vulnerability of very small probes to be easily deflected by magnetic fields, micrometeorites and other dangers to ensure the chances that at least one nanoprobe will survive the journey and reach the destination.
As a near-term solution, small, laser-propelled interstellar probes, based on current CubeSat technology were proposed in the context of Project Dragonfly.
Slow, crewed missions.
In crewed missions, the duration of a slow interstellar journey presents a major obstacle and existing concepts deal with this problem in different ways. They can be distinguished by the "state" in which humans are transported on-board of the spacecraft.
Generation ships.
A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises.
Suspended animation.
Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships in which the passengers lie inert for the long duration of the voyage.
Frozen embryos.
A robotic interstellar mission carrying some number of frozen early stage human embryos is another theoretical possibility. This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents.
Island hopping through interstellar space.
Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way.
Fast, crewed missions.
If a spaceship could average 10 percent of light speed (and decelerate at the destination, for human crewed missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts have been proposed that might be eventually developed to accomplish this (see § Propulsion below), but none of them are ready for near-term (few decades) developments at acceptable cost.
Time dilation.
Physicists generally believe faster-than-light travel is impossible.
Relativistic time dilation allows a traveler to experience time more slowly, the closer their speed is to the speed of light. This apparent slowing becomes noticeable when velocities above 80% of the speed of light are attained. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time (see diagram). Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth.
For example, a spaceship could travel to a star 32 light-years away, initially accelerating at a constant 1.03g (i.e. 10.1 m/s2) for 1.32 years (ship time), then stopping its engines and coasting for the next 17.3 years (ship time) at a constant speed, then decelerating again for 1.32 ship-years, and coming to a stop at the destination. After a short visit, the astronaut could return to Earth the same way. After the full round-trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch.
From the viewpoint of the astronaut, onboard clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 light years per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light years as measured by the astronaut.
At higher speeds, the time on board will run even slower, so the astronaut could travel to the center of the Milky Way (30,000 light years from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 light year per Earth year, so, when back home, the astronaut will find that more than 60 thousand years will have passed on Earth.
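The figures above can be reproduced approximately from the standard relativistic constant-acceleration formulas. The sketch below (illustrative, not from the source) assumes continuous 1 g acceleration to the midpoint and deceleration thereafter, with no coasting phase, so it gives somewhat shorter ship times than the coasting itinerary described for the 32-light-year example:

```python
import math

G_LY_PER_YR2 = 1.03   # 1 g expressed in light-years per year squared (approx.)

def one_way_trip(distance_ly, accel=G_LY_PER_YR2):
    """Earth time and ship (proper) time, in years, for a trip that
    accelerates to the midpoint and decelerates to rest at the target.
    Works in units where c = 1 (light-years and years)."""
    x = distance_ly / 2.0
    earth_half = math.sqrt(x * x + 2.0 * x / accel)
    ship_half = math.acosh(1.0 + accel * x) / accel
    return 2.0 * earth_half, 2.0 * ship_half

for d in (4.24, 30_000.0):   # Proxima Centauri; the galactic centre
    t_earth, t_ship = one_way_trip(d)
    print(f"{d:>9} ly: {t_ship:7.1f} yr aboard, {t_earth:10.1f} yr on Earth")
```

For the galactic centre this gives roughly 20 years of ship time each way (about 40 years for the round trip) against roughly 30,000 Earth years each way, consistent with the figures quoted above.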
Constant acceleration.
Regardless of how it is achieved, a propulsion system that could produce acceleration continuously from departure to arrival would be the fastest method of travel. A constant acceleration journey is one where the propulsion system accelerates the ship at a constant rate for the first half of the journey, and then decelerates for the second half, so that it arrives at the destination stationary relative to where it began. If this were performed with an acceleration similar to that experienced at the Earth's surface, it would have the added advantage of producing artificial "gravity" for the crew. Supplying the energy required, however, would be prohibitively expensive with current technology.
From the perspective of a planetary observer, the ship will appear to accelerate steadily at first, but then more gradually as it approaches the speed of light (which it cannot exceed). It will undergo hyperbolic motion. The ship will be close to the speed of light after about a year of accelerating and remain at that speed until it brakes for the end of the journey.
From the perspective of an onboard observer, the crew will feel a gravitational field opposite the engine's acceleration, and the universe ahead will appear to fall in that field, undergoing hyperbolic motion. As part of this, distances between objects in the direction of the ship's motion will gradually contract until the ship begins to decelerate, at which time an onboard observer's experience of the gravitational field will be reversed.
When the ship reaches its destination, if it were to exchange a message with its origin planet, it would find that less time had elapsed on board than had elapsed for the planetary observer, due to time dilation and length contraction.
The result is an impressively fast journey for the crew.
Propulsion.
Rocket concepts.
All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, the ratio of initial ("M"0, including fuel) to final ("M"1, fuel depleted) mass.
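The constraint can be made concrete with the (non-relativistic) Tsiolkovsky rocket equation; the exhaust-velocity figures below are illustrative assumptions:

```python
import math

C = 299_792.458   # speed of light, km/s

def delta_v(exhaust_velocity, mass_ratio):
    """Ideal rocket equation: delta-v = v_e * ln(M0 / M1)."""
    return exhaust_velocity * math.log(mass_ratio)

def mass_ratio_needed(dv, exhaust_velocity):
    """Mass ratio M0 / M1 required to reach a given delta-v."""
    return math.exp(dv / exhaust_velocity)

# A fusion-like exhaust at 5% of c reaches 0.1 c with a mass ratio of ~7.4 ...
print(mass_ratio_needed(0.1 * C, 0.05 * C))
print(delta_v(0.05 * C, 7.39) / C)        # ~0.1 c, as a cross-check

# ... while a chemical exhaust (~4.5 km/s) would need a mass ratio of
# roughly 10**2893 for the same delta-v (printed as orders of magnitude):
print(0.1 * C / 4.5 / math.log(10))
```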
Very high specific power, the ratio of thrust to total vehicle mass, is required to reach interstellar targets within sub-century time-frames. Some heat transfer is inevitable, resulting in an extreme thermal load.
Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle.
Ion engine.
A type of electric propulsion, spacecraft such as "Dawn" use an ion engine. In an ion engine, electric power is used to create charged particles of the propellant, usually the gas xenon, and accelerate them to extremely high velocities. The exhaust velocity of conventional rockets is limited to about 5 km/s by the chemical energy stored in the fuel's molecular bonds. They produce a high thrust (about 10^6 N), but they have a low specific impulse, and that limits their top speed. By contrast, ion engines have low force, but the top speed in principle is limited only by the electrical power available on the spacecraft and on the gas ions being accelerated. The exhaust speed of the charged particles ranges from 15 km/s to 35 km/s.
Nuclear fission powered.
Fission-electric.
Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, have the potential to reach speeds much greater than chemically powered vehicles or nuclear-thermal rockets. Such vehicles probably have the potential to power solar system exploration with reasonable trip times within the current century. Because of their low-thrust propulsion, they would be limited to off-planet, deep-space operation. Electrically powered spacecraft propulsion powered by a portable power source, say a nuclear reactor, producing only small accelerations, would take centuries to reach, for example, 15% of the velocity of light, and is thus unsuitable for interstellar flight during a single human lifetime.
Fission-fragment.
Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, which are ejected at speeds of a few percent of the speed of light. With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel and limits the effective exhaust velocity to about 5% of the velocity of light. For maximum velocity, the reaction mass should optimally consist of fission products, the "ash" of the primary energy source, so no extra reaction mass needs to be accounted for in the mass ratio.
Nuclear pulse.
Based on work in the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. driven by a series of nuclear explosions. This propulsion system contains the prospect of very high specific impulse and high specific power.
Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion that used pure deuterium fusion detonations with a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to achieve a 20,000 km/s delta-v allowing a flight-time to Alpha Centauri of 130 years. Later studies indicate that the top cruise velocity that can theoretically be achieved by a Teller-Ulam thermonuclear unit powered Orion starship, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08-0.1c). An atomic (fission) Orion can achieve perhaps 3%-5% of the speed of light. A nuclear pulse drive starship powered by fusion-antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range and pure matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% and 80% of the speed of light. In each case saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant, this would allow the ship to travel near the maximum theoretical velocity. Alternative designs utilizing similar principles include Project Longshot, Project Daedalus, and Mini-Mag Orion. The principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight.
In the 1970s the Nuclear Pulse Propulsion concept further was refined by Project Daedalus by use of externally triggered inertial confinement fusion, in this case producing fusion explosions via compressing fusion fuel pellets with high-powered electron beams. Since then, lasers, ion beams, neutral particle beams and hyper-kinetic projectiles have been suggested to produce nuclear pulses for propulsion purposes.
A current impediment to the development of "any" nuclear-explosion-powered spacecraft is the 1963 Partial Test Ban Treaty, which includes a prohibition on the detonation of any nuclear devices (even non-weapon based) in outer space. This treaty would, therefore, need to be renegotiated, although a project on the scale of an interstellar mission using currently foreseeable technology would probably require international cooperation on at least the scale of the International Space Station.
Another issue to be considered, would be the g-forces imparted to a rapidly accelerated spacecraft, cargo, and passengers inside (see Inertia negation).
Nuclear fusion rockets.
Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of that of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light. These would "burn" such light element fuels as deuterium, tritium, 3He, 11B, and 7Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases <0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of the speed of light. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries.
Early studies include Project Daedalus, performed by the British Interplanetary Society in 1973–1978, and Project Longshot, a student project sponsored by NASA and the US Naval Academy, completed in 1988. Another fairly detailed vehicle system, "Discovery II", designed and optimized for crewed Solar System exploration, based on the D3He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7×10^−3 "g", with a ship initial mass of ~1700 metric tons, and payload fraction above 10%. Although these are still far short of the requirements for interstellar travel on human timescales, the study seems to represent a reasonable benchmark towards what may be approachable within several decades, which is not impossibly beyond the current state-of-the-art. Based on the concept's 2.2% burnup fraction it could achieve a pure fusion product exhaust velocity of ~3,000 km/s.
Antimatter rockets.
An antimatter rocket would have a far higher energy density and specific impulse than any other proposed class of rocket. If energy resources and efficient production methods are found to make antimatter in the quantities required and store it safely, it would be theoretically possible to reach speeds of several tens of percent that of light. Whether antimatter propulsion could lead to the higher speeds (>90% that of light) at which relativistic time dilation would become more noticeable, thus making time pass at a slower rate for the travelers as perceived by an outside observer, is doubtful owing to the large quantity of antimatter that would be required.
Speculating that production and storage of antimatter should become feasible, two further issues need to be considered. First, in the annihilation of antimatter, much of the energy is lost as high-energy gamma radiation, and especially also as neutrinos, so that only about 40% of "mc"2 would actually be available if the antimatter were simply allowed to annihilate into radiations thermally. Even so, the energy available for propulsion would be substantially higher than the ~1% of "mc"2 yield of nuclear fusion, the next-best rival candidate.
Second, heat transfer from the exhaust to the vehicle seems likely to transfer enormous wasted energy into the ship (e.g. for 0.1"g" ship acceleration, approaching 0.3 trillion watts per ton of ship mass), considering the large fraction of the energy that goes into penetrating gamma rays. Even assuming shielding was provided to protect the payload (and passengers on a crewed vehicle), some of the energy would inevitably heat the vehicle, and may thereby prove a limiting factor if useful accelerations are to be achieved.
More recently, Friedwardt Winterberg proposed that a matter-antimatter GeV gamma ray laser photon rocket is possible by a relativistic proton-antiproton pinch discharge, where the recoil from the laser beam is transmitted by the Mössbauer effect to the spacecraft.
Rockets with an external energy source.
Rockets deriving their power from external sources, such as a laser, could replace their internal energy source with an energy collector, potentially reducing the mass of the ship greatly and allowing much higher travel speeds. Geoffrey A. Landis proposed an interstellar probe propelled by an ion thruster powered by the energy beamed to it from a base station laser. Lenard and Andrews proposed using a base station laser to accelerate nuclear fuel pellets towards a Mini-Mag Orion spacecraft that ignites them for propulsion.
Non-rocket concepts.
A problem with all traditional rocket propulsion methods is that the spacecraft would need to carry its fuel with it, thus making it very massive, in accordance with the rocket equation. Several concepts attempt to escape from this problem:
RF resonant cavity thruster.
A radio frequency (RF) resonant cavity thruster is a device that is claimed to be a spacecraft thruster. In 2016, the Advanced Propulsion Physics Laboratory at NASA reported observing a small apparent thrust from one such test, a result not since replicated. One of the designs is called EMDrive. In December 2002, Satellite Propulsion Research Ltd described a working prototype with an alleged total thrust of about 0.02 newtons powered by an 850 W cavity magnetron. The device could operate for only a few dozen seconds before the magnetron failed, due to overheating. The latest test on the EMDrive concluded that it does not work.
Helical engine.
Proposed in 2019 by NASA scientist Dr. David Burns, the helical engine concept would use a particle accelerator to accelerate particles to near the speed of light. Since particles traveling at such speeds acquire more mass, it is believed that this mass change could create acceleration. According to Burns, the spacecraft could theoretically reach 99% the speed of light.
Interstellar ramjets.
In 1960, Robert W. Bussard proposed the Bussard ramjet, a fusion rocket in which a huge scoop would collect the diffuse hydrogen in interstellar space, "burn" it on the fly using a proton–proton chain reaction, and expel it out of the back. Later calculations with more accurate estimates suggest that the thrust generated would be less than the drag caused by any conceivable scoop design. Yet the idea is attractive because the fuel would be collected "en route" (commensurate with the concept of "energy harvesting"), so the craft could theoretically accelerate to near the speed of light. The limitation is due to the fact that the reaction can only accelerate the propellant to 0.12c. Thus the drag of catching interstellar dust and the thrust of accelerating that same dust to 0.12c would be the same when the speed is 0.12c, preventing further acceleration.
Beamed propulsion.
A light sail or magnetic sail powered by a massive laser or particle accelerator in the home star system could potentially reach even greater speeds than rocket- or pulse propulsion methods, because it would not need to carry its own reaction mass and therefore would only need to accelerate the craft's payload. Robert L. Forward proposed a means for decelerating an interstellar craft with a light sail of 100 kilometers in the destination star system without requiring a laser array to be present in that system. In this scheme, a secondary sail of 30 kilometers is deployed to the rear of the spacecraft, while the large primary sail is detached from the craft to keep moving forward on its own. Light is reflected from the large primary sail to the secondary sail, which is used to decelerate the secondary sail and the spacecraft payload. In 2002, Geoffrey A. Landis of NASA's Glenn Research Center also proposed a laser-powered propulsion sail ship that would host a diamond sail (a few nanometers thick) powered with the use of solar energy. With this proposal, this interstellar ship would, theoretically, be able to reach 10 percent the speed of light. It has also been proposed to use beam-powered propulsion to accelerate a spacecraft, and electromagnetic propulsion to decelerate it, thus eliminating the problem that the Bussard ramjet has with the drag produced during acceleration.
A magnetic sail could also decelerate at its destination without depending on carried fuel or a driving beam in the destination system, by interacting with the plasma found in the solar wind of the destination star and the interstellar medium.
The following table lists some example concepts using beamed laser propulsion as proposed by the physicist Robert L. Forward:
Interstellar travel catalog to use photogravitational assists for a full stop.
The following table is based on work by Heller, Hippke and Kervella.
Pre-accelerated fuel.
Achieving start-stop interstellar trip times of less than a human lifetime requires mass ratios of between 1,000 and 1,000,000, even for the nearer stars. This could be achieved by multi-staged vehicles on a vast scale. Alternatively, large linear accelerators could propel fuel to fission-propelled space vehicles, avoiding the limitations of the rocket equation.
Dynamic soaring.
Dynamic soaring as a way to travel across interstellar space has been proposed.
Theoretical concepts.
Transmission of minds with light.
Uploaded human minds or AI could be transmitted with laser or radio signals at the speed of light. This requires a receiver at the destination which would first have to be set up e.g. by humans, probes, self replicating machines (potentially along with AI or uploaded humans), or an alien civilization (which might also be in a different galaxy, perhaps a Kardashev type III civilization).
Artificial black hole.
A theoretical idea for enabling interstellar travel is to propel a starship by creating an artificial black hole and using a parabolic reflector to reflect its Hawking radiation. Although beyond current technological capabilities, a black hole starship offers some advantages compared to other possible methods. Getting the black hole to act as a power source and engine also requires a way to convert the Hawking radiation into energy and thrust. One potential method involves placing the hole at the focal point of a parabolic reflector attached to the ship, creating forward thrust. A slightly easier, but less efficient method would involve simply absorbing all the gamma radiation heading towards the fore of the ship to push it onwards, and let the rest shoot out the back.
Faster-than-light travel.
Scientists and authors have postulated a number of ways by which it might be possible to surpass the speed of light, but even the most serious-minded of these are highly speculative.
It is also debatable whether faster-than-light travel is physically possible, in part because of causality concerns: travel faster than light may, under certain conditions, permit travel backwards in time within the context of special relativity. Proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter and, it is not known if it could be produced in sufficient quantities, if at all.
Alcubierre drive.
In physics, the Alcubierre drive is based on an argument, within the framework of general relativity and without the introduction of wormholes, that it is possible to modify spacetime in a way that allows a spaceship to travel with an arbitrarily large speed by a local expansion of spacetime behind the spaceship and an opposite contraction in front of it. Nevertheless, this concept would require the spaceship to incorporate a region of exotic matter, or the hypothetical concept of negative mass.
Wormholes.
Wormholes are conjectural distortions in spacetime that theorists postulate could connect two arbitrary points in the universe, across an Einstein–Rosen Bridge. It is not known whether wormholes are possible in practice. Although there are solutions to the Einstein equation of general relativity that allow for wormholes, all of the currently known solutions involve some assumption, for example the existence of negative mass, which may be unphysical. However, Cramer "et al." argue that such wormholes might have been created in the early universe, stabilized by cosmic strings. The general theory of wormholes is discussed by Visser in the book "Lorentzian Wormholes".
Designs and studies.
Project Hyperion.
Project Hyperion has looked into various feasibility issues of crewed interstellar travel. Notable results of the project include an assessment of world ship system architectures and adequate population size. Its members continue to publish on crewed interstellar travel in collaboration with the Initiative for Interstellar Studies.
Enzmann starship.
The Enzmann starship, as detailed by G. Harry Stine in the October 1973 issue of "Analog", was a design for a future starship, based on the ideas of Robert Duncan-Enzmann. The spacecraft itself as proposed used a 12,000,000 ton ball of frozen deuterium to power 12–24 thermonuclear pulse propulsion units. Twice as long as the Empire State Building is tall and assembled in-orbit, the spacecraft was part of a larger project preceded by interstellar probes and telescopic observation of target star systems.
NASA research.
NASA has been researching interstellar travel since its formation, translating important foreign language papers and conducting early studies on applying fusion propulsion, in the 1960s, and laser propulsion, in the 1970s, to interstellar travel.
In 1994, NASA and JPL cosponsored a "Workshop on Advanced Quantum/Relativity Theory Propulsion" to "establish and use new frames of reference for thinking about the faster-than-light (FTL) question".
The NASA Breakthrough Propulsion Physics Program (terminated in FY 2003 after a 6-year, $1.2-million study, because "No breakthroughs appear imminent.") identified some breakthroughs that are needed for interstellar travel to be possible.
Geoffrey A. Landis of NASA's Glenn Research Center states that a laser-powered interstellar sail ship could possibly be launched within 50 years, using new methods of space travel. "I think that ultimately we're going to do it, it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with extensive sails, propelled by laser light to about one-tenth the speed of light. It would take such a ship about 43 years to reach Alpha Centauri if it passed through the system without stopping. Slowing down to stop at Alpha Centauri could increase the trip to 100 years, whereas a journey without slowing down raises the issue of making sufficiently accurate and useful observations and measurements during a fly-by.
100 Year Starship study.
The 100 Year Starship (100YSS) study was the name of a one-year project to assess the attributes of and lay the groundwork for an organization that can carry forward the 100 Year Starship vision. 100YSS-related symposia were organized between 2011 and 2015.
Harold ("Sonny") White from NASA's Johnson Space Center is a member of Icarus Interstellar, the nonprofit foundation whose mission is to realize interstellar flight before the year 2100. At the 2012 meeting of 100YSS, he reported using a laser to try to warp spacetime by 1 part in 10 million with the aim of helping to make interstellar travel possible.
Non-profit organizations.
A few organisations dedicated to interstellar propulsion research and advocacy for the case exist worldwide. These are still in their infancy, but are already backed up by a membership of a wide variety of scientists, students and professionals.
Feasibility.
The energy requirements make interstellar travel very difficult. It has been reported that at the 2008 Joint Propulsion Conference, multiple experts opined that it was improbable that humans would ever explore beyond the Solar System. Brice N. Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated that at least 100 times the total energy output of the entire world [in a given year] would be required to send a probe to the nearest star.
Astrophysicist Sten Odenwald stated that the basic problem is that through intensive studies of thousands of detected exoplanets, most of the closest destinations within 50 light years do not yield Earth-like planets in the star's habitable zones. Given the multitrillion-dollar expense of some of the proposed technologies, travelers will have to spend up to 200 years traveling at 20% the speed of light to reach the best known destinations. Moreover, once the travelers arrive at their destination (by any means), they will not be able to travel down to the surface of the target world and set up a colony unless the atmosphere is non-lethal. The prospect of making such a journey, only to spend the rest of the colony's life inside a sealed habitat and venturing outside in a spacesuit, may eliminate many prospective targets from the list.
Moving at a speed close to the speed of light and encountering even a tiny stationary object like a grain of sand will have fatal consequences. For example, a gram of matter moving at 90% of the speed of light contains a kinetic energy corresponding to a small nuclear bomb (around 30 kt TNT).
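A quick numerical check of this figure, using the relativistic kinetic-energy formula (illustrative sketch only):

```python
C = 299_792_458.0       # speed of light, m/s
J_PER_KT_TNT = 4.184e12

def relativistic_kinetic_energy(mass_kg, fraction_of_c):
    """(gamma - 1) * m * c^2 for a mass moving at the given fraction of c."""
    gamma = 1.0 / (1.0 - fraction_of_c ** 2) ** 0.5
    return (gamma - 1.0) * mass_kg * C ** 2

e = relativistic_kinetic_energy(0.001, 0.9)          # one gram at 0.9 c
print(f"{e:.2e} J ~ {e / J_PER_KT_TNT:.0f} kt TNT")  # roughly 28 kt
```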
One of the major stumbling blocks is carrying enough onboard spares and repair facilities for such a lengthy journey, assuming all other considerations are solved, without access to all the resources available on Earth.
Interstellar missions not for human benefit.
Explorative high-speed missions to Alpha Centauri, as planned for by the Breakthrough Starshot initiative, are projected to be realizable within the 21st century. It is alternatively possible to plan for uncrewed slow-cruising missions taking millennia to arrive. These probes would not be for human benefit in the sense that one can not foresee whether there would be anybody around on Earth interested in then back-transmitted science data. An example would be the Genesis mission, which aims to bring unicellular life, in the spirit of directed panspermia, to habitable but otherwise barren planets. Comparatively slow cruising Genesis probes, with a typical speed of formula_4, corresponding to about formula_5, can be decelerated using a magnetic sail. Uncrewed missions not for human benefit would hence be feasible.
Discovery of Earth-like planets.
On August 24, 2016, the discovery of the Earth-size exoplanet Proxima Centauri b, orbiting in the habitable zone of Proxima Centauri 4.2 light-years away, was announced. It is the nearest known potentially habitable exoplanet outside the Solar System.
In February 2017, NASA announced that its Spitzer Space Telescope had revealed seven Earth-size planets in the TRAPPIST-1 system orbiting an ultra-cool dwarf star 40 light-years away from the Solar System. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water. The discovery sets a new record for greatest number of habitable-zone planets found around a single star outside the Solar System. All of these seven planets could have liquid water – the key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K = \\tfrac{1}{2}mv^2"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "mv^2"
},
{
"math_id": 3,
"text": "v^2"
},
{
"math_id": 4,
"text": "c/300"
},
{
"math_id": 5,
"text": "1000\\,\\mbox{km/s}"
}
] |
https://en.wikipedia.org/wiki?curid=14843
|
1484457
|
Atmospheric focusing
|
Type of wave interaction causing shock waves
Atmospheric focusing is a type of wave interaction causing shock waves to affect areas at a greater distance than otherwise expected. Variations in the atmosphere create distortions in the wavefront by refracting segments of it, allowing them to converge at certain points and constructively interfere. In the case of destructive shock waves, this may result in areas of damage far beyond the theoretical extent of the blast effect. Examples are seen in sonic booms from supersonic flight, large extraterrestrial impacts from objects such as meteors, and nuclear explosions.
Density variations in the atmosphere (e.g. due to temperature variations) or airspeed variations cause refraction along the shock wave, allowing the initially uniform wavefront to separate and eventually interfere, dispersing the wave at some points and focusing it at others. A similar effect occurs in water when a wave travels through a patch of fluid of different density, causing it to diverge over a large distance. For powerful shock waves this can cause damage farther than expected; at focal points the shock wave energy density exceeds the values expected from uniform geometric spreading (a formula_0 falloff for weak shock or acoustic waves at large distances).
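As a minimal illustration of that geometric baseline (toy numbers, not measured data), the unfocused energy density simply scales as the inverse square of distance; focusing is inferred where observations exceed this curve:

```python
# Toy illustration of the 1/r^2 baseline for an unfocused weak shock:
# measured intensities above this curve indicate atmospheric focusing.
reference_r = 1.0   # km, reference distance
reference_e = 1.0   # energy density at the reference distance (arbitrary units)

for r_km in (10, 100, 1000):
    expected = reference_e * (reference_r / r_km) ** 2
    print(f"r = {r_km:5.0f} km: expected relative energy density {expected:.1e}")
```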
Types of atmospheric focusing.
Supersonic booms.
Atmospheric focusing from supersonic booms is a modern occurrence and a result of the actions of air forces across the world. When objects like planes travel faster than the speed of sound, they create sonic booms and pressure waves that can be focused. Atmospheric factors present when these waves are created can focus the waves and cause damage.
Planes can also create boom waves and explosion waves that can be focused. Consideration for atmospheric focusing in flight plans is critical. The wind and altitude during a flight can create environments for atmospheric focusing, which can be determined through reference to a focusing curve. When this is the case, supersonic flight may cause damage on the ground.
Meteor impacts.
Meteors can also cause shock waves that can be focused. As the meteor enters Earth’s atmosphere and reaches lower altitudes, it can create a shock wave. The shock wave is impacted by what the meteor is made of, temperature, and pressure. Because the meteors need to have a large size and mass, there is only a small percentage of meteors that can create these shock waves. Radar and Infrasonic methodologies are able to detect meteor shock waves. These tools are used to study these shock waves and can help create new methods of learning about meteor shock waves.
Nuclear explosions and bombs.
Nuclear explosions and bombs can also lead to atmospheric focusing. The effects of focusing may be found hundreds of kilometers from the blast site. An example of this is the case of the Tsar Bomba test, where damage was caused up to approximately 1,000 km away. Atmospheric focusing can increase the damage caused by these explosions.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "1/r^2"
}
] |
https://en.wikipedia.org/wiki?curid=1484457
|
14846664
|
Solar car racing
|
Solar car racing refers to competitive races of electric vehicles which are powered by solar energy obtained from solar panels on the surface of the car (solar cars). The first solar car race was the Tour de Sol in 1985 which led to several similar races in Europe, US and Australia. Such challenges are often entered by universities to develop their students' engineering and technological skills, but many business corporations have entered competitions in the past. A small number of high school teams participate in solar car races designed exclusively for high school students.
Distance races.
The two most notable solar car distance (overland) races are the World Solar Challenge and the American Solar Challenge. They are contested by a variety of university and corporate teams. Corporate teams participate in the races to give their design teams experience of working with both alternative energy sources and advanced materials. University teams participate in order to give their students experience in designing high technology cars and working with environmental and advanced materials technology. These races are often sponsored by government or educational agencies, and businesses such as Toyota keen to promote renewable energy sources.
Support.
The cars require intensive support teams similar in size to professional motor racing teams. This is especially the case with the World Solar Challenge where sections of the race run through very remote country. The solar car will travel escorted by a small caravan of support cars. In a long distance race each solar car will be preceded by a lead car that can identify problems or obstacles ahead of the race car. Behind the solar car there will be a mission control vehicle from which the race pace is controlled. Here tactical decisions are made based on information from the solar car and environmental information about the weather and terrain. Behind the mission control there might be one or more other vehicles carrying replacement drivers and maintenance support as well as supplies and camping equipment for the entire team.
World Solar Challenge.
This race features a field of competitors from around the world who race to cross the Australian continent. The 30th anniversary race of the World Solar Challenge was held in October 2017. Major regulation changes released in June 2006 were intended to increase safety, to encourage a new generation of solar car which with little modification could form the basis of a practical proposition for sustainable transport, and to slow down cars in the main event, which in previous years could easily exceed the speed limit (110 km/h).
In 2013 the organisers of the event introduced the Cruiser Class to the World Solar Challenge, designed to encourage contestants to design a "practical" solar powered vehicle. This race requires that vehicles have four wheels and upright seating for passengers, and is judged on a number of factors including time, payload, passenger miles, and external energy use. The Dutch TU Eindhoven solar racing team were the inaugural Cruiser Class winner with their vehicle "Stella".
American Solar Challenge.
The American Solar Challenge, previously known as the 'North American Solar Challenge' and 'Sunrayce', features mostly collegiate teams racing in timed intervals in the United States and Canada. The annual Formula Sun Grand Prix track race is used as a qualifier for ASC.
The American Solar Challenge was funded in part by several small sponsors. However, funding was cut near the end of 2005, and the NASC 2007 was cancelled. The North American solar racing community worked to find a solution, bringing in Toyota as a primary sponsor for a 2008 race. Toyota has since dropped the sponsorship. The last North American Solar Challenge was run in 2016, from Brecksville, OH to Hot Springs, SD. The race was won by the University of Michigan, which has won the event the last six times it has been held.
The Solar Car Challenge (Highschool).
The Solar Car Challenge is an annual event that fosters education and innovation in renewable energy by engaging high school students in the design, engineering, and racing of solar-powered vehicles. Founded in 1989 by Dr. Lehman Marks, the challenge has grown to become a premier educational program, combining science, technology, engineering, and mathematics (STEM) principles with hands-on experience. Participants are tasked with building and racing solar cars, allowing them to apply theoretical knowledge to practical problems while promoting sustainable technology and teamwork.
Held over several days, the Solar Car Challenge typically includes a cross-country race or a track event, depending on the year. The event draws teams from across the United States and occasionally international participants, fostering a spirit of friendly competition and collaboration. Beyond the race itself, the Solar Car Challenge provides extensive educational resources, workshops, and mentorship to help students succeed. This competition not only highlights the potential of solar energy but also inspires the next generation of engineers, scientists, and environmentally-conscious citizens.
South African Solar Challenge.
The South African Solar Challenge is a biennial, two-week solar-powered car race through the length and breadth of South Africa. The first challenge in 2008 proved that this event can attract the interest of the public, and that it has the necessary international backing from the FIA. Late in September, all entrants will take off from Pretoria and make their way to Cape Town, then drive along the coast to Durban, before climbing the escarpment on their way back to the finish line in Pretoria 11 days later. The event has (in both 2008 and 2010) been endorsed by International Solarcar Federation (ISF), Fédération Internationale de l'Automobile (FIA), World Wildlife Fund (WWF) making it the first Solar Race to receive endorsement from these 3 organizations. The last race took place in 2016. Sasol confirmed their support of the South Africa Solar Challenge, by taking naming rights to the event, so that for the duration of their sponsorship, the event was known as the Sasol Solar Challenge, South Africa.
Carrera Solar Atacama.
The Carrera Solar Atacama is the first solar-powered car race of its kind in Latin America; the race covers from Santiago to Arica in the north of Chile. The race's founder, La Ruta Solar, claims it is the most extreme of the vehicular races due to the high levels of solar radiation, up to 8.5 kWh/m2/day, encountered while traversing the Atacama Desert, as well as challenging participating teams to climb above sea level. After the 2018 race, La Ruta Solar organized its next edition for 2020, but it never came to be. At the end of 2019, the organization struggled with funding and decided to cancel the race. A few months later they declared bankruptcy.
Solar drag races.
Solar drag races are another form of solar racing. Unlike long distance solar races, solar dragsters do not use any batteries or pre-charged energy storage devices. Racers go head-to-head over a straight quarter kilometer distance. Currently, a solar drag race is held each year on the Saturday closest to the summer solstice in Wenatchee, Washington, USA. The world record for this event is 29.5 seconds set by the South Whidbey High School team on June 23, 2007.
Model and educational solar races.
Solar vehicle technology can be applied on a small scale, which makes it ideal for educational purposes in the STEM areas. Some events are:
Model Solar Vehicle Challenge Victoria.
The Victorian Model Solar Vehicle Challenge is an engineering competition undertaken by students across Victoria, year 1 to Year 12. Students design and construct their own vehicle, be it a car or boat. This event is currently held at ScienceWorks (Melbourne) in October each year. The first event was held in 1986. The goal of the challenge is to provide students with an experience of what it is like to work in STEM and to understand what can be achieved with renewable technology.
Junior Solar Sprint.
Junior Solar Sprint was created in the 1980s by the National Renewable Energy Laboratory (NREL) to teach younger children about the importance and challenges of using renewable energy. The project also teaches students how the engineering process is applied, and how solar panels, transmission, and aerodynamics can be used in practice.
Speed records.
Fédération Internationale de l'Automobile (FIA).
The FIA recognise a land speed record for vehicles powered only by solar panels. The current record was set by Solar Team Twente of the University of Twente with their car SolUTra. The record of 37.757 km/h was set in 2005. The record takes place over a flying 1000 m run, and is the average speed of two runs in opposite directions.
In July 2014, a group of Australian students from the UNSW Sunswift solar racing team at the University of New South Wales broke a world record in their solar car, for the fastest electric car weighing less than and capable of travelling on a single battery charge. This particular record was overseen by the Confederation of Australian Motorsport on behalf of the FIA and is not exclusive to solar-powered cars but open to any electric car, and so during the attempt, the solar panels were disconnected from the electrical systems. The previous record of - which had been set in 1988 - was broken by the team with an average speed of over the distance.
Guinness world record.
Guinness World Records recognize a land speed record for vehicles powered only by solar panels. This record is currently held by the University of New South Wales with the car Sunswift IV. Its battery was removed so the vehicle was powered only by its solar panels. The record of was set on 7 January 2011 at the naval air base in Nowra, breaking the record previously held by the General Motors car Sunraycer of . The record takes place over a flying stretch, and is the average of two runs in opposite directions.
Miscellaneous records.
Australian Transcontinental (Perth to Sydney) Speed Record.
The Perth to Sydney Transcontinental record has held a certain allure in Solar Car Racing. Hans Tholstrup (the founder of the World Solar Challenge) first completed this journey in "The Quiet Achiever" in under 20 days in 1983. This vehicle is in the collection of the National Museum of Australia in Canberra.
The record was beaten by Dick Smith and the Aurora Solar Vehicle Association racing in the "Aurora Q1"
The current record was set in 2007 by the UNSW Solar Racing Team with their car "Sunswift III mk2"
Vehicle design.
Solar cars combine technology used in the aerospace, bicycle, alternative energy and automotive industries. Unlike most race cars, solar cars are designed with severe energy constraints imposed by the race regulations. These rules limit the energy used to only that collected from solar radiation, albeit starting with a fully charged battery pack. Some vehicle classes also allow human power input. As a result, optimizing the design to account for aerodynamic drag, vehicle weight, rolling resistance and electrical efficiency are paramount.
A usual design for today's successful vehicles is a small canopy in the middle of a curved wing-like array, entirely covered in cells, with 3 wheels. Before, the cockroach style with a smooth nose fairing into the panel was more successful. At lower speeds, with less powerful arrays, other configurations are viable and easier to construct, e.g. covering available surfaces of existing electric vehicles with solar cells or fastening solar canopies above them.
Electrical system.
The electrical system controls all of the power entering and leaving the system. The battery pack stores surplus solar energy produced when the vehicle is stationary or travelling slowly or downhill. Solar cars use a range of batteries including lead-acid batteries, nickel-metal hydride batteries (NiMH), nickel-cadmium batteries (NiCd), lithium ion batteries and lithium polymer batteries.
Power electronics may be used to optimize the electrical system. The maximum power tracker adjusts the operating point of the solar array to the voltage that produces the most power for the given conditions, e.g. temperature. The battery manager protects the batteries from overcharging. The motor controller controls the desired motor power. Many controllers allow regenerative braking, i.e. power is fed back into the battery during deceleration.
Some solar cars have complex data acquisition systems that monitor the whole electrical system, while basic cars show battery voltage and motor current. In order to judge the range available with varying solar production and motive consumption, an ampere-hour meter integrates the battery current over time; combined with the present rate of consumption, this indicates the remaining vehicle range at each moment under the given conditions.
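A minimal sketch of that bookkeeping (all numbers below are assumed): battery current is integrated over time to track the remaining capacity, which together with the present net draw gives an estimated range:

```python
# Ampere-hour accounting sketch: (hours, amps) samples, negative = net charging.
samples = [(1.0, 12.0), (1.0, 15.0), (1.0, -4.0), (1.0, 10.0)]

capacity_ah = 80.0
used_ah = sum(hours * amps for hours, amps in samples)
remaining_ah = capacity_ah - used_ah

speed_kmh = 75.0        # present cruise speed (assumed)
net_draw_a = 10.0       # present net battery current at that speed (assumed)
range_km = remaining_ah / net_draw_a * speed_kmh
print(f"remaining capacity {remaining_ah:.0f} Ah, estimated range {range_km:.0f} km")
```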
A wide variety of motor types have been used. The most efficient motors exceed 98% efficiency. These are brushless three-phase DC, electronically commutated, wheel motors, with a Halbach array configuration for the neodymium-iron-boron magnets, and Litz wire for the windings. Cheaper alternatives are asynchronous AC or brushed DC motors.
Mechanical systems.
The mechanical systems are designed to keep friction and weight to a minimum while maintaining strength and stiffness. Designers normally use aluminium, titanium and composites to provide a structure that meets strength and stiffness requirements whilst being fairly light. Steel is used for some suspension parts on many cars.
Solar cars usually have three wheels, but some have four. Three-wheelers usually have two front wheels and one rear wheel: the front wheels steer and the rear wheel follows. Four-wheel vehicles are set up like normal cars or similarly to three-wheeled vehicles with the two rear wheels close together.
Solar cars have a wide range of suspensions because of varying bodies and chassis. The most common front suspension is the double wishbone suspension. The rear suspension is often a trailing-arm suspension as found in motorcycles.
Solar cars are required to meet rigorous standards for brakes. Disc brakes are the most commonly used due to their good braking ability and ability to adjust. Mechanical and hydraulic brakes are both widely used. The brake pads or shoes are typically designed to retract to minimize brake drag, on leading cars.
Steering systems for solar cars also vary. The major design factors for steering systems are efficiency, reliability and precision alignment to minimize tire wear and power loss. The popularity of solar car racing has led to some tire manufacturers designing tires for solar vehicles. This has increased overall safety and performance.
All the top teams now use wheel motors, eliminating belt or chain drives.
Testing is essential to demonstrating vehicle reliability prior to a race. It is easy to spend a hundred thousand dollars to gain a two-hour advantage, and equally easy to lose two hours due to reliability issues.
Solar array.
The solar array consists of hundreds (or thousands) of photovoltaic solar cells converting sunlight into electricity. Cars can use a variety of solar cell technologies; most often polycrystalline silicon, mono-crystalline silicon, or gallium arsenide. The cells are wired together into strings while strings are often wired together to form a panel. Panels normally have voltages close to the nominal battery voltage. The main aim is to get as much cell area in as small a space as possible. Designers encapsulate the cells to protect them from the weather and breakage.
Designing a solar array is more than just stringing a bunch of cells together. A solar array acts like many very small batteries all hooked together in series. The total voltage produced is the sum of all cell voltages. The problem is that if a single cell is in shadow it acts like a diode, blocking the current for the entire string of cells. To design against this, array designers use by-pass diodes in parallel with smaller segments of the string of cells, allowing current around the non-functioning cell(s). Another consideration is that the battery itself can force current backward through the array unless there are blocking diodes put at the end of each panel.
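The effect of bypass diodes can be illustrated with a toy model (the segment currents, cell and diode voltages below are arbitrary assumptions; real designs work from full I–V curves rather than fixed values):

```python
# Toy series-string model: without bypass diodes the whole string is limited
# by the most shaded segment; with them, shaded segments are skipped at the
# cost of one diode drop each.
def string_power(segment_currents, cells_per_segment, v_cell=0.5, v_diode=0.4):
    no_bypass = min(segment_currents) * v_cell * cells_per_segment * len(segment_currents)
    i_active = max(segment_currents)
    with_bypass = 0.0
    for i_seg in segment_currents:
        if i_seg >= i_active:
            with_bypass += i_active * v_cell * cells_per_segment   # unshaded segment
        else:
            with_bypass -= i_active * v_diode                      # bypassed segment
    return no_bypass, with_bypass

# Three segments of 12 cells; one segment heavily shaded.
print(string_power([5.0, 5.0, 1.0], cells_per_segment=12))   # -> (18.0, 58.0) watts
```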
The power produced by the solar array depends on the weather conditions, the position of the sun and the capacity of the array. At noon on a bright day, a good array can produce over 2 kilowatts (2.6 hp). A 6 m2 array of 20% cells will produce roughly 6 kW·h (about 22 MJ) of energy during a typical day on the WSC.
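As a rough cross-check of those figures (a sketch with assumed irradiance and sun-hour values, not race data), peak power is simply area × irradiance × efficiency, and daily energy is roughly peak power × equivalent full-sun hours:

```python
# Back-of-the-envelope solar array estimate (assumed values).
area = 6.0           # m^2 of cells
irradiance = 1000.0  # W/m^2, bright sun near noon
efficiency = 0.20    # cell efficiency, as in the 20% figure above
sun_hours = 5.0      # equivalent full-sun hours per day (assumed)

peak_kw = area * irradiance * efficiency / 1000.0
daily_kwh = peak_kw * sun_hours
print(f"peak power ~ {peak_kw:.1f} kW, daily energy ~ {daily_kwh:.1f} kWh")
# -> ~1.2 kW peak and ~6 kWh/day; the 2 kW peak quoted above implies
#    higher-efficiency cells (e.g. gallium arsenide) or a larger array.
```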
Some cars have employed free-standing or integrated sails to harness wind energy. Races including the WSC and ASC, consider wind energy to be solar energy, so their race regulations allow this practice.
Aerodynamics.
Aerodynamic drag is the main source of losses on a solar race car. The aerodynamic drag of a vehicle is the product of the frontal area and its "C""d". For most solar cars the frontal area is 0.75 to 1.3 m2. While "C""d" as low as 0.10 have been reported, 0.13 is more typical. This needs a great deal of attention to detail.
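For a sense of scale (illustrative density, drag coefficient and frontal area; not figures for any particular car), drag power grows with the cube of speed:

```python
# Aerodynamic drag power P = 0.5 * rho * Cd * A * v^3 at several speeds.
rho = 1.2            # air density, kg/m^3
Cd, A = 0.13, 1.0    # typical solar-car drag coefficient and frontal area (m^2)

for kmh in (60, 80, 100):
    v = kmh / 3.6
    drag_power = 0.5 * rho * Cd * A * v ** 3
    print(f"{kmh} km/h: {drag_power:.0f} W")
```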
Mass.
The vehicle's mass is also a significant factor. A light vehicle generates less rolling resistance and will need smaller lighter brakes and other suspension components. This is the virtuous circle when designing lightweight vehicles.
Rolling resistance.
Rolling resistance can be minimized by using the right tires, inflated to the right pressure, correctly aligned, and by minimizing the weight of the vehicle.
Performance equation.
The design of a solar car is governed by the following work equation:
formula_0
which can be usefully simplified to the performance equation
formula_1
for long-distance races, and values seen in practice.
Briefly, the left-hand side represents the energy input into the car (batteries and power from the sun) and the right-hand side is the energy needed to drive the car along the race route (overcoming rolling resistance, aerodynamic drag, going uphill and accelerating). Everything in this equation can be estimated except "v". The parameters include:
Note 1 For the WSC the average panel power can be approximated as (7/9)×nominal power.
Solving the long form of the equation for velocity results in a large equation (approximately 100 terms). Using the power equation as the arbiter, vehicle designers can compare various car designs and evaluate the comparative performance over a given route. Combined with CAE and systems modeling, the power equation can be a useful tool in solar car design.
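A minimal numerical sketch of the simplified performance equation is shown below. All parameter values are illustrative assumptions rather than figures for any particular car or race, and the equation is solved for "v" by simple bisection:

```python
# Solve eta*(eta_b*E*v/x + P) = W*Crr1*v + 0.5*rho*Cd*A*v^3 for cruise speed v.
eta, eta_b = 0.95, 0.90        # drivetrain and battery efficiencies (assumed)
E = 5.0 * 3.6e6                # usable battery energy, J (5 kWh, assumed)
x = 3000e3                     # race distance, m
P = 1000.0                     # average array power, W (assumed)
W = 300 * 9.81                 # vehicle weight, N (300 kg, assumed)
Crr1 = 0.006                   # rolling-resistance coefficient (assumed)
rho, Cd, A = 1.2, 0.13, 1.0    # air density, drag coefficient, frontal area

def surplus(v):
    """Power available minus power required at speed v (watts)."""
    return eta * (eta_b * E * v / x + P) - (W * Crr1 * v + 0.5 * rho * Cd * A * v ** 3)

low, high = 1.0, 60.0          # m/s; surplus(low) > 0 > surplus(high)
for _ in range(60):
    mid = 0.5 * (low + high)
    low, high = (mid, high) if surplus(mid) > 0 else (low, mid)

print(f"steady cruise speed ~ {low:.1f} m/s ({low * 3.6:.0f} km/h)")
```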
Race route considerations.
The directional orientation of a solar car race route affects the apparent position of the sun in the sky during a race day, which in turn affects the energy input to the vehicle.
This is significant to designers, who seek to maximize energy input to a panel of solar cells (often called an "array" of cells) by designing the array to point directly toward the sun for as long as possible during the race day. Thus, a south-north race car designer might increase the car's total energy input by using solar cells on the sides of the vehicle where the sun will strike them (or by creating a convex array coaxial with the vehicle's movement). In contrast, an east-west race alignment might reduce the benefit from having cells on the side of the vehicle, and thus might encourage design of a flat array.
Because solar cars are often purpose-built, and because arrays do not usually move in relation to the rest of the vehicle (with notable exceptions), this race-route-driven, flat-panel versus convex design compromise is one of the most significant decisions that a solar car designer must make.
For example, the 1990 and 1993 Sunrayce USA events were won by vehicles with significantly convex arrays, corresponding to the south-north race alignments; by 1997, however, most cars in that event had flat arrays to match the change to an east-west route.
Race strategy.
Energy consumption.
Optimizing energy consumption is of prime importance in a solar car race. Therefore, it is useful to be able to continually monitor and optimize the vehicle's energy parameters. Given the variable conditions, most teams have race speed optimization programs that continuously update the team on how fast the vehicle should be traveling. Some teams employ telemetry that relays vehicle performance data to a following support vehicle, which can provide the vehicle's driver with an optimum strategy.
Race route.
The race route itself will affect strategy, because the apparent position of the sun in the sky will vary depending on various factors which are specific to the vehicle's orientation (see "Race Route Considerations," above).
In addition, elevation changes over a race route can dramatically change the amount of power needed to travel the route. For example, the 2001 and 2003 North American Solar Challenge route crossed the Rocky Mountains (see graph at right).
Weather forecasting.
A successful solar car racing team will need to have access to reliable weather forecasts in order to predict the power input to the vehicle from the sun during each race day.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\eta \\left\\{\\eta_bE + \\frac{Px}{v}\\right\\} = \\left\\{W C_{rr1} + N C_{rr2} v + \\frac{1}{2}\\rho C_d A v^2\\right\\}x +Wh + \\frac{N_a W v^2}{2g}"
},
{
"math_id": 1,
"text": "\\eta \\left\\{\\eta_bEv/x + P\\right\\} = \\left\\{W C_{rr1} v + \\frac{1}{2}\\rho C_d A v^3\\right\\} "
}
] |
https://en.wikipedia.org/wiki?curid=14846664
|
14848497
|
Nuclear timescale
|
Estimate of the lifetime of a star
In astrophysics, the nuclear timescale is an estimate of the lifetime of a star based solely on its rate of fuel consumption. Along with the thermal and free-fall (also known as dynamical) time scales, it is used to estimate the length of time a particular star will remain in a certain phase of its life and its lifespan if hypothetical conditions are met. In reality, the lifespan of a star is greater than what is estimated by the nuclear time scale because as one fuel becomes scarce, another will generally take its place: hydrogen burning gives way to helium burning, etc. However, all the phases after hydrogen burning combined typically add up to less than 10% of the duration of hydrogen burning.
Stellar astrophysics.
Hydrogen generally determines a star's nuclear lifetime because it is used as the main source of fuel in a main sequence star. Hydrogen becomes helium in the nuclear reaction that takes place within stars; when the hydrogen has been exhausted, the star moves on to another phase of its life and begins burning the helium.
formula_0
where M is the mass of the star, X is the fraction of the star (by mass) that is composed of the fuel, L is the star's luminosity, Q is the energy released per unit mass of fuel by nuclear fusion (determined from the particular nuclear reaction involved), and F is the fraction of the star over which the fuel is burned (F is typically about 0.1). As an example, the Sun's nuclear time scale is approximately 10 billion years.
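Plugging rough solar values into the formula reproduces that order of magnitude. The value of Q below (about 0.7% of mc² for hydrogen burning) and F = 0.1 are assumptions typical of textbook estimates:

```python
# Nuclear timescale of the Sun: tau = (M * X / (L / Q)) * F
M = 1.989e30     # solar mass, kg
X = 0.7          # hydrogen mass fraction
L = 3.828e26     # solar luminosity, W
Q = 6.3e14       # energy released per kg of hydrogen fused, J/kg (~0.7% of mc^2)
F = 0.1          # fraction of the hydrogen actually burned

tau_seconds = (M * X / (L / Q)) * F
tau_years = tau_seconds / 3.156e7
print(f"nuclear timescale ~ {tau_years:.1e} years")   # ~7e9 years, order 10 Gyr
```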
|
[
{
"math_id": 0,
"text": " \\tau_{nuc} = \\frac{\\mbox{total mass of fuel available}}{\\mbox{rate of fuel consumption}} \\times \\mbox{fraction of star over which fuel is burned} = \\frac{MX}{\\frac{L}{Q}} \\times F "
}
] |
https://en.wikipedia.org/wiki?curid=14848497
|
1485104
|
Chemical ionization
|
Technique in mass spectroscopy
Chemical ionization (CI) is a soft ionization technique used in mass spectrometry. This was first introduced by Burnaby Munson and Frank H. Field in 1966. This technique is a branch of gaseous ion-molecule chemistry. Reagent gas molecules (often methane or ammonia) are ionized by electron ionization to form reagent ions, which subsequently react with analyte molecules in the gas phase to create analyte ions for analysis by mass spectrometry. Negative chemical ionization (NCI), charge-exchange chemical ionization, atmospheric-pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) are some of the common variants of the technique. CI mass spectrometry finds general application in the identification, structure elucidation and quantitation of organic compounds as well as some utility in biochemical analysis. Samples to be analyzed must be in vapour form, or else (in the case of liquids or solids), must be vapourized before introduction into the source.
Principles of operation.
The chemical ionization process generally imparts less energy to an analyte molecule than does electron impact (EI) ionization, resulting in less fragmentation and usually a simpler spectrum. The amount of fragmentation, and therefore the amount of structural information produced by the process can be controlled to some degree by selection of the reagent ion. In addition to some characteristic fragment ion peaks, a CI spectrum usually has an identifiable protonated molecular ion peak [M+1]+, allowing determination of the molecular mass. CI is thus useful as an alternative technique in cases where EI produces excessive fragmentation of the analyte, causing the molecular-ion peak to be weak or completely absent.
Instrumentation.
The CI source design for a mass spectrometer is very similar to that of the EI source. To facilitate the reactions between the ions and molecules, the chamber is kept relatively gas tight at a pressure of about 1 torr. Electrons are produced externally to the source volume (at a lower pressure of 10⁻⁴ torr or below) by heating a metal filament made of tungsten, rhenium, or iridium. The electrons are introduced through a small aperture in the source wall at energies of 200–1000 eV so that they penetrate to at least the centre of the box. In contrast to EI, the magnet and the electron trap are not needed for CI, since the electrons do not travel to the end of the chamber. Many modern sources are dual or combination EI/CI sources and can be switched from EI mode to CI mode and back in seconds.
Mechanism.
A CI experiment involves the use of gas phase acid-base reactions in the chamber. Some common reagent gases include: methane, ammonia, water and isobutane. Inside the ion source, the reagent gas is present in large excess compared to the analyte. Electrons entering the source will mainly ionize the reagent gas because it is in large excess compared to the analyte. The primary reagent ions then undergo secondary ion/molecule reactions (as below) to produce more stable reagent ions which ultimately collide and react with the lower concentration analyte molecules to form product ions. The collisions between reagent ions and analyte molecules occur at close to thermal energies, so that the energy available to fragment the analyte ions is limited to the exothermicity of the ion-molecule reaction. For a proton transfer reaction, this is just the difference in proton affinity between the neutral reagent molecule and the neutral analyte molecule. This results in significantly less fragmentation than does 70 eV electron ionization (EI).
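The role of that exothermicity can be illustrated numerically. The proton affinities below are approximate literature values, the analyte is just an example, and the 150 kJ/mol threshold is an arbitrary illustrative cut-off:

```python
# Energy available to fragment the protonated analyte, roughly
# PA(analyte) - PA(reagent neutral), for two common reagent gases.
pa = {            # proton affinities, kJ/mol (approximate literature values)
    "CH4": 544,
    "NH3": 854,
    "acetone": 812,
}

analyte = "acetone"
for reagent in ("CH4", "NH3"):
    exo = pa[analyte] - pa[reagent]
    if exo <= 0:
        note = "proton transfer unfavourable; adduct ions more likely"
    elif exo > 150:
        note = "substantial excess energy; more fragmentation expected"
    else:
        note = "mild protonation; little fragmentation expected"
    print(f"{reagent}-based CI of {analyte}: {exo:+d} kJ/mol ({note})")
```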
The following reactions are possible with methane as the reagent gas.
<chem>CH4{} + e^- -> CH4^{+\bullet}{} + 2e^-</chem>
<chem>CH4{} + CH4^{+\bullet} -> CH5+{} + CH3^{\bullet}</chem>
<chem>CH4 + CH3^+ -> C2H5+ + H2</chem>
<chem>M + CH5+ -> CH4 + [M + H]+</chem> (protonation)
<chem>AH + CH3+ -> CH4 + A+</chem> (<chem>H^-</chem> abstraction)
<chem>M + C2H5+ -> [M + C2H5]+</chem> (adduct formation)
<chem>A + CH4+ -> CH4 + A+</chem> (charge exchange)
Product ion formation.
If ammonia is the reagent gas,
<chem>NH3{} + e^- -> NH3^{+\bullet}{} + 2e^-</chem>
<chem>NH3{} + NH3^{+\bullet} -> NH4+{} + NH2</chem>
<chem>M + NH4^+ -> MH+ + NH3</chem>
For isobutane as the reagent gas,
formula_0
<chem>C3H7^+{} + C4H10^{+\bullet} -> C4H9^+{} + C3H8 </chem>
<chem>M + C4H9^+ -> MH^+ + C4H8 </chem>
Self chemical ionization is possible if the reagent ion is an ionized form of the analyte.
Advantages and limitations.
One of the main advantages of CI over EI is the reduced fragmentation noted above, which, for more fragile molecules, results in a peak in the mass spectrum indicative of the molecular weight of the analyte. This proves to be a particular advantage for biological applications where EI often does not yield useful molecular ions in the spectrum. The spectra given by CI are simpler than EI spectra and CI can be more sensitive than other ionization methods, at least in part due to the reduced fragmentation which concentrates the ion signal in fewer and therefore more intense peaks. The extent of fragmentation can be somewhat controlled by proper selection of reagent gases. Moreover, CI is often coupled to chromatographic separation techniques, thereby improving its usefulness in identification of compounds. As with EI, the method is limited to compounds that can be vapourized in the ion source. The lower degree of fragmentation can be a disadvantage in that less structural information is provided. Additionally, the degree of fragmentation, and therefore the mass spectrum, can be sensitive to source conditions such as pressure, temperature, and the presence of impurities (such as water vapour) in the source. Because of this lack of reproducibility, libraries of CI spectra have not been generated for compound identification.
Applications.
CI mass spectrometry is a useful tool in the structure elucidation of organic compounds. This is possible because fragmentation of the [M+1]+ ion tends to proceed by elimination of a stable neutral molecule, whose identity can be used to infer the functional groups present. In addition, CI makes the molecular ion peak easier to detect, owing to the less extensive fragmentation. Chemical ionization can also be used to identify and quantify an analyte present in a sample, by coupling chromatographic separation techniques to CI such as gas chromatography (GC), high performance liquid chromatography (HPLC) and capillary electrophoresis (CE). This allows selective ionization of an analyte from a mixture of compounds, where accurate and precise results can be obtained.
Variants.
Negative chemical ionization.
Chemical ionization for gas phase analysis is either positive or negative. Almost all neutral analytes can form positive ions through the reactions described above.
In order to see a response by negative chemical ionization (NCI, also NICI), the analyte must be capable of producing a negative ion (stabilize a negative charge), for example by electron capture ionization. Because not all analytes can do this, using NCI provides a certain degree of selectivity that is not available with other, more universal ionization techniques (EI, PCI). NCI can be used for the analysis of compounds containing acidic groups or electronegative elements (especially halogens). Moreover, negative chemical ionization is more selective and demonstrates a higher sensitivity toward oxidizing agents and alkylating agents.
Because of the high electronegativity of halogen atoms, NCI is a common choice for their analysis. This includes many groups of compounds, such as PCBs, pesticides, and fire retardants. Most of these compounds are environmental contaminants, thus much of the NCI analysis that takes place is done under the auspices of environmental analysis. In cases where very low limits of detection are needed, environmental toxic substances such as halogenated species, oxidizing and alkylating agents are frequently analyzed using an electron capture detector coupled to a gas chromatograph.
Negative ions are formed by resonance capture of a near-thermal energy electron, dissociative capture of a low energy electron and via ion-molecular interactions such as proton transfer, charge transfer and hydride transfer. Compared to the other methods involving negative ion techniques, NCI is quite advantageous, as the reactivity of anions can be monitored in the absence of a solvent. Electron affinities and energies of low-lying valencies can be determined by this technique as well.
Charge-exchange chemical ionization.
This is also similar to CI and the difference lies in the production of a radical cation with an odd number of electrons. The reagent gas molecules are bombarded with high energy electrons and the product reagent gas ions abstract electrons from the analyte to form radical cations. The common reagent gases used for this technique are toluene, benzene, NO, Xe, Ar and He.
Careful control over the selection of reagent gases and the consideration toward the difference between the resonance energy of the reagent gas radical cation and the ionization energy of the analyte can be used to control fragmentation. The reactions for charge-exchange chemical ionization are as follows.
<chem> He{} + e^- -> He^{+\bullet}{} + 2e^- </chem>
<chem> He^{+\bullet}{} + M -> M^{+\bullet} </chem>
Atmospheric-pressure chemical ionization.
Chemical ionization in an atmospheric pressure electric discharge is called atmospheric pressure chemical ionization (APCI), which usually uses water as the reagent gas. An APCI source is composed of a liquid chromatography outlet nebulizing the eluent, a heated vaporizer tube, a corona discharge needle and a pinhole entrance to a 10⁻³ torr vacuum. The analyte is a gas or liquid spray and ionization is accomplished using an atmospheric pressure corona discharge. This ionization method is often coupled with high performance liquid chromatography, where the mobile phase containing the eluting analyte is sprayed with high flow rates of nitrogen or helium and the aerosol spray is subjected to a corona discharge to create ions. It is applicable to relatively less polar and thermally less stable compounds. The difference between APCI and CI is that APCI functions under atmospheric pressure, where the frequency of collisions is higher. This enables the improvement in sensitivity and ionization efficiency.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ce{C4H10{} + e^- -> C4H10^{+\\bullet}{} + 2e^-} (\\ce{ + C3H7+} \\text{and other ions}) "
}
] |
https://en.wikipedia.org/wiki?curid=1485104
|
14854511
|
Ayaks
|
Russian hypersonic aircraft program
The Ayaks () is a hypersonic waverider aircraft program started in the Soviet Union and currently under development by the Hypersonic Systems Research Institute (HSRI) of Leninets Holding Company in Saint Petersburg, Russia.
Purpose.
Ayaks was initially a classified Soviet spaceplane project aimed to design a new kind of global range hypersonic cruise vehicle capable of flying and conducting a variety of military missions in the mesosphere. The original concept revolved around a hypersonic reconnaissance aircraft project, but later was expanded into the wider concept of hypersonic multi-purpose military and civilian jets, as well as a SSTO platform for launching satellites.
The mesosphere is the layer of the Earth's atmosphere from to high, above the stratosphere and below the thermosphere. It is very difficult to fly in the mesosphere — the air is too rarefied for aircraft wings to generate lift, but sufficiently dense to cause aerodynamic drag on satellites. In addition, parts of the mesosphere fall inside the ionosphere, meaning the air is ionized due to solar radiation.
The ability to conduct military activities in the mesosphere gives a country some significant military potential.
History.
In the late 1970s, Soviet scientists began to explore a novel type of hypersonic propulsion system concept, first described publicly in a Russian newspaper in a short interview with Ayaks' inventor, Pr. Vladimir L. Fraĭshtadt. Fraĭshtadt worked at that time at the aero branch of the PKB Nevskoye-Neva Design Bureau in Leningrad. He developed the Ayaks concept around the idea that an efficient hypersonic vehicle cannot afford to lose energy to its surroundings (i.e. to overcome air resistance), but should instead take advantage of the energy carried by the high speed incoming flux. At that time, the whole concept was unknown to the West, although early developments involved the cooperation of Soviet industrial enterprises, technical institutes, the Military-Industrial Commission of the USSR (VPK) and the Russian Academy of Sciences.
In 1990, two articles by defense specialist and writer Nikolai Novichkov gave more details about the Ayaks program. The second was the first document available in English.
Shortly after the dissolution of the Soviet Union, funding was cut and the Ayaks program had to evolve, especially as the US government announced the National Aero-Space Plane (NASP) program. At that time, Fraĭshtadt became director of the OKB-794 Design Bureau, publicly known as "Leninets", a holding company running the open joint-stock company "State Hypersonic Systems Research Institute" (HSRI) ( pr: "NIPGS") in Saint Petersburg.
In early 1993, as an answer to the American announcement of the X-30 NASP demonstrator, the Ayaks project was integrated into the wider national ORYOL ( pr: "Or'yol", "Eagle") program, which federated all Russian hypersonic work to design a competing spaceplane as a reusable launch system.
In September 1993 the program was unveiled and a first small-scale model of Ayaks was publicly shown for the first time on the Leninetz booth at the 2nd MAKS Air Show in Moscow.
In 1994 Novichkov revealed that the Russian Federation was ready to fund the Ayaks program for eight years and that a reusable small-scale flight test module had been built by the Arsenal Design Bureau. He also stated that Ayaks' working principles had been validated with an engine test stand in a wind tunnel. The same year, the American NASP project was cancelled and replaced by the "Hypersonic Systems Technology Program" (HySTP), cancelled as well after three months. In 1995 NASA launched the Advanced Reusable Transportation Technologies (ARTT) program, part of the Highly Reusable Space Transportation (HRST) initiative, but experts from consulting firm ANSER evaluating Ayaks technologies did not believe at first in the performances announced by the Russians and did not recommend development along the same path.
However, between October 1995 and April 1997, a series of Russian patents covering Ayaks technologies were granted to "Leninetz HLDG Co." and consequently available publicly, the oldest having been filed 14 years before.
As the information available out of Russia started to grow, three western academic researchers started to collect the sparse data about Ayaks: Claudio Bruno, professor at the Sapienza University of Rome; Paul A. Czysz, professor at the Parks College of Engineering, Aviation and Technology at Saint Louis University; and S. N. B. Murthy, professor at Purdue University. In September 1996, as part of the Capstone Design Course and the Hypersonic Aero-Propulsion Integration Course at Parks College, Czysz assigned his students to analyze the information gathered, as the "ODYSSEUS" project. Thereafter the three researchers copublished a conference paper summarizing the Western analysis of Ayaks principles.
With such information, long-time ANSER main expert Ramon L. Chase reviewed his former position and assembled a team to evaluate and develop American versions of Ayaks technologies within the HRST program. He recruited H. David Froning Jr., CEO of "Flight Unlimited"; Leon E. McKinney, world expert in fluid dynamics; Paul A. Czysz; Mark J. Lewis, aerodynamicist at the University of Maryland, College Park, specialist of waveriders and airflows around leading edges and director of the NASA-sponsored Maryland Center for Hypersonic Education and Research; Dr. Robert Boyd of Lockheed Martin Skunk Works able to build real working prototypes with allocated budgets from black projects, whose contractor General Atomics is a world leader in superconducting magnets (that Ayaks uses); and Dr. Daniel Swallow from Textron Systems, one of the few firms still possessing expertise in magnetohydrodynamic converters, which Ayaks extensively uses.
Novel technologies.
MHD bypass.
The Ayaks was projected to employ a novel engine using a magnetohydrodynamic generator to collect and slow down highly ionized and rarefied air upstream of airbreathing jet engines, usually scramjets, although HSRI project lead Vladimir L. Fraĭshtadt said in a 2001 interview that the Ayaks MHD bypass system could decelerate the incoming hypersonic airflow sufficiently to almost use conventional turbomachinery. This would be a surprising technical solution considering such hypersonic speeds, yet confirmed as feasible by independent studies using Mach 2.7 turbojets or even subsonic ramjets.
The air is mixed with fuel into the mixture that burns in the combustor, while the electricity produced by the inlet MHD generator feeds the MHD accelerator located behind the jet engine near the single expansion ramp nozzle to provide additional thrust and specific impulse. The plasma funnel developed over the air inlet from the Lorentz forces greatly increases the ability of the engine to collect air, increasing the effective diameter of the air inlet up to hundreds of meters. It also extends the Mach regime and altitude the aircraft can cruise to. Thus, it is theorized that the Ayaks' engine can operate using atmospheric oxygen even at heights above .
A non-equilibrium MHD generator typically produces 1–5 MWe with such parameters (channel cross-section, magnetic field strength, pressure, degree of ionization and velocity of the working fluid) but the increased effective diameter of the air inlet by the virtual plasma funnel greatly increases the power produced to 45–100 MWe per engine. As Ayaks may use two to four of such engines, some electrical energy could be diverted to peaceful or military directed-energy devices.
Thermochemical reactors.
The fuel feed system of the Ayaks engine is also novel. At supersonic speeds, air recompresses abruptly downstream of the shock wave near the stagnation point, producing heat. At hypersonic speeds, the heat flux from shock waves and air friction on the body of an aircraft, especially at the nose and leading edges, becomes considerable, as the stagnation temperature rise is proportional to the square of the Mach number. That is why hypersonic speeds are problematic with respect to the strength of materials, a limit often referred to as the "heat barrier".
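The scaling behind the "heat barrier" can be sketched with the ideal-gas stagnation-temperature relation (the ambient temperature below is assumed; real surface temperatures also depend on boundary-layer, radiation and real-gas effects):

```python
# Stagnation temperature T0 = T * (1 + (gamma - 1)/2 * M^2) at several Mach numbers.
gamma = 1.4
T_ambient = 220.0   # K, high-altitude static temperature (assumed)

for mach in (2, 6, 10, 14):
    T0 = T_ambient * (1 + (gamma - 1) / 2 * mach ** 2)
    print(f"Mach {mach:2d}: stagnation temperature ~ {T0:.0f} K")
```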
Ayaks uses thermochemical reactors (TCRs): the heating energy from air friction is used to increase the energy content (heating value) of the fuel, by cracking the fuel with a catalytic chemical reaction. The aircraft has a double-walled shield in the hot parts of the airframe, between which water and ordinary, cheap kerosene circulate. The energy of surface heating is absorbed through heat exchangers to trigger a series of chemical reactions in the presence of a nickel catalyst, called hydrocarbon steam reforming. In a first stage, the kerosene and water are reformed into methane (70–80% by volume) and carbon dioxide (20–30%):
CnHm + H2O formula_0 CH4 + CO2
Then methane and water reform in their turn in a second stage into hydrogen, a new fuel of better quality, in a strong endothermic reaction:
CH4 + H2O formula_0 CO + 3H2
CO + H2O formula_0 CO2 + H2
Thus, the heating value of the fuel increases, and the surface of the aircraft cools down.
The calorific value of the mixture CO + 3H2 produced from 1 kg of methane through water steam reforming (62,900 kJ) is 25% higher than that of methane only (50,100 kJ).
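A quick check of that percentage, using the heating values quoted above:

```python
# Relative gain in heating value of the CO + 3H2 reformate over methane.
reformate_kj = 62_900   # kJ, from 1 kg of methane after steam reforming
methane_kj = 50_100     # kJ, burning that methane directly
gain = (reformate_kj - methane_kj) / methane_kj * 100
print(f"gain ~ {gain:.0f}%")   # ~26%, consistent with the ~25% quoted above
```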
Besides a more energetic fuel, the mixture is populated by many free radicals that enhance the degree of ionization of the plasma, further increased by the combined use of e-beams that control electron concentration, and HF pulse repetitive discharges (PRDs) that control electron temperature. Such systems create streamer discharges that irrigate the ionized flow with free electrons, increasing combustion effectiveness, a process known as plasma-assisted combustion (PAC).
Such concept was initially named "Magneto-Plasma-Chemical Engine" (MPCE), and the working principle referred to as "Chemical Heat Regeneration and Fuel Transformation" (CHRFT). In subsequent literature, the accent has been put more on magnetohydrodynamics than on the chemical part of these engines, which are now simply referred to as a "scramjet with MHD bypass" as these concepts intimately require each other to work efficiently.
The idea of thermally shielding the engine is detailed in the fundamental analysis of an ideal turbojet for maximum thrust in the aerothermodynamics literature: that is, putting the turbine (work extraction) upstream and the compressor (work addition) downstream. For a conventional jet engine the thermodynamics works; however, advanced thermo-fluid analysis shows that in order to add sufficient heat to power the aircraft without thermally choking the flow (and unstarting the engine), the combustor has to grow, and the amount of heat added grows as well. It uses the heat more "efficiently"; it just needs a lot of heat. While thermodynamically very sound, the real engine is too large and consumes too much power to ever fly on an aircraft. These issues do not arise in the Ayaks concept, as the plasma funnel virtually increases the cross-section of the air inlet while maintaining its limited physical size, and additional energy is taken from the flow itself. As Fraĭshtadt said, "Since it takes advantage of the CHRFT technology, Ayaks cannot be analyzed as a classical heat engine."
Plasma sheath.
As altitude increases, the electrical resistance of air decreases according to Paschen's law. The air at the nose of Ayaks is ionized. Besides e-beams and HF pulse discharges, a high voltage is produced by the Hall effect in the MHD generator that allows a planar glow discharge to be emitted from the sharp nose of the aircraft and the thin leading edges of its wings, by a St. Elmo's fire effect. Such a plasma cushion in front and around the aircraft is said to offer several advantages:
Specifications.
According to the data presented at the 2001 MAKS Airshow, the specifications of the Ayaks are:
Later publications cite even more impressive numbers, with an expected service ceiling of 60 km and a cruising speed of Mach 10–20, and the ability to reach the orbital speed of 28,440 km/h with the addition of booster rockets, the spaceplane then flying in boost-glide trajectories (successive rebounds or "skips" off the upper layers of the atmosphere, alternating unpowered gliding and powered phases), similarly to the US hypersonic waverider project HyperSoar, with a high glide ratio of 40:1.
Speculation.
In 2003, French engineer Jean-Pierre Petit's study was based on a paper published in January 2001 in the French magazine "Air et Cosmos" by Alexandre-David Szamès, and in the same month from information gathered in a small workshop on advanced propulsion in Brighton, England, especially after discussions with David Froning Jr. from "Flight Unlimited" about his prior work involving electric and electromagnetic discharges in hypersonic flows, presented during the workshop.
Petit wrote about a large and long multipole wall MHD converter on the upper flat surface of the aircraft in contact with the freestream, instead of the linear cross-field Faraday converters located within a channel usually considered. In such a multipole converter, magnetic field is produced by many parallel superconducting thin wires instead of pairs of bigger electromagnets. These wires run below the surface directly in contact with the airflow, their profile following the body of the vehicle. Air is progressively decelerated in the boundary layer in a laminar flow without too much recompression, down to subsonic values as it enters the inlet then the air-breathing jet engines. Such an open wall MHD-controlled inlet will be exposed by two scientists of the Ayaks program in a similar way two years later, although they propose to locate it on the surface of the inclined front ramp underneath the aircraft, to vector the shock wave as a "shock-on-lip" upon the air inlet, whatever the speed and altitude.
As subsonic velocities can be achieved internally while the external flow is still hypersonic, Petit proposes that such platform could use almost conventional turbojets and ramjets instead of scramjets more difficult to control, and such plane would not need vertical stabilizers nor fins anymore, as it would maneuver through locally increasing or reducing drag on particular regions of the wetted area with electromagnetic forces. He then describes a similar multipole MHD accelerator located on the physical surface of the semi-guided ramp nozzle, which accelerates the conductive exhaust gases downstream the jet engines.
Ten years before Petit, Dr. Vladimir I. Krementsov, head of the Nizhny Novgorod Research Institute of Radio Engineering (NIIRT), and Dr. Anatoly Klimov, chief of the Moscow Radiotechnical Institute of the Russian Academy of Sciences (MRTI RAS), told William Kaufmann that the MHD bypass system of the Ayaks concept might already have been built into the rumored Aurora secret spaceplane, successor of the Lockheed SR-71 Blackbird.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14854511
|
14855183
|
Prismatic compound of antiprisms
|
Polyhedral compound
In geometry, a prismatic compound of antiprism is a category of uniform polyhedron compound. Each member of this infinite family of uniform polyhedron compounds is a symmetric arrangement of antiprisms sharing a common axis of rotational symmetry.
Infinite family.
This infinite family can be enumerated as follows:
Where "p"/"q"=2, the component is the tetrahedron (or dyadic antiprism). In this case, if "n"=2 then the compound is the stella octangula, with higher symmetry (Oh).
Compounds of two antiprisms.
Compounds of two "n"-antiprisms share their vertices with a 2"n"-prism, and exist as two alternating sets of vertices.
Cartesian coordinates for the vertices of an antiprism with "n"-gonal bases and isosceles triangles are
formula_0
and
formula_1
with "k" ranging from 0 to 2"n"−1; if the triangles are equilateral,
formula_2
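A short sketch of these coordinates (using the equilateral-triangle condition formula_2 to fix the half-height; the choice "n" = 4 below is just an example):

```python
# Generate the 2n vertices (cos(k*pi/n), sin(k*pi/n), (-1)^k * h) of an
# n-gonal antiprism whose side triangles are equilateral.
import math

def antiprism_vertices(n):
    h = math.sqrt((math.cos(math.pi / n) - math.cos(2 * math.pi / n)) / 2)
    verts = []
    for k in range(2 * n):
        x = math.cos(k * math.pi / n)
        y = math.sin(k * math.pi / n)
        z = h if k % 2 == 0 else -h
        verts.append((x, y, z))
    return verts

# Example: the square antiprism (n = 4) has 8 vertices.
for v in antiprism_vertices(4):
    print(tuple(round(c, 3) for c in v))
```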
Compound of two trapezohedra (duals).
The duals of the prismatic compound of antiprisms are compounds of trapezohedra:
Compound of three antiprisms.
For compounds of three digonal antiprisms, they are rotated 60 degrees, while three triangular antiprisms are rotated 40 degrees.
|
[
{
"math_id": 0,
"text": "\\left( \\cos\\frac{k\\pi}{n}, \\sin\\frac{k\\pi}{n}, (-1)^k h \\right)"
},
{
"math_id": 1,
"text": "\\left( \\cos\\frac{k\\pi}{n}, \\sin\\frac{k\\pi}{n}, (-1)^{k+1} h \\right)"
},
{
"math_id": 2,
"text": "2h^2=\\cos\\frac{\\pi}{n}-\\cos\\frac{2\\pi}{n}."
}
] |
https://en.wikipedia.org/wiki?curid=14855183
|
148555
|
Spin glass
|
Disordered magnetic state
In condensed matter physics, a spin glass is a magnetic state characterized by randomness as well as cooperative behavior in the freezing of spins at a temperature called the "freezing temperature" "T"f. In ferromagnetic solids, component atoms' magnetic spins all align in the same direction. A spin glass, when contrasted with a ferromagnet, is defined as a "disordered" magnetic state in which the spins are aligned randomly, without a regular pattern, and the couplings too are random.
The term "glass" comes from an analogy between the "magnetic" disorder in a spin glass and the "positional" disorder of a conventional, chemical glass, e.g., a window glass. In window glass or any amorphous solid the atomic bond structure is highly irregular; in contrast, a crystal has a uniform pattern of atomic bonds. In ferromagnetic solids, magnetic spins all align in the same direction; this is analogous to a crystal's lattice-based structure.
The individual atomic bonds in a spin glass are a mixture of roughly equal numbers of ferromagnetic bonds (where neighbors have the same orientation) and antiferromagnetic bonds (where neighbors have exactly the opposite orientation: north and south poles are flipped 180 degrees). These patterns of aligned and misaligned atomic magnets create what are known as frustrated interactions – distortions in the geometry of atomic bonds compared to what would be seen in a regular, fully aligned solid. They may also create situations where more than one geometric arrangement of atoms is stable.
There are two main aspects of spin glass. On the physical side, spin glasses are real materials with distinctive properties, which have been reviewed in the literature. On the mathematical side, simple statistical mechanics models, inspired by real spin glasses, are widely studied and applied.
Spin glasses and the complex internal structures that arise within them are termed "metastable" because they are "stuck" in stable configurations other than the lowest-energy configuration (which would be aligned and ferromagnetic). The mathematical complexity of these structures is difficult but fruitful to study experimentally or in simulations, with applications to physics, chemistry, materials science, and artificial neural networks in computer science.
Magnetic behavior.
It is the time dependence which distinguishes spin glasses from other magnetic systems.
Above the spin glass transition temperature, "T"c, the spin glass exhibits typical magnetic behaviour (such as paramagnetism).
If a magnetic field is applied as the sample is cooled to the transition temperature, magnetization of the sample increases as described by the Curie law. Upon reaching "T"c, the sample becomes a spin glass, and further cooling results in little change in magnetization. This is referred to as the "field-cooled" magnetization.
When the external magnetic field is removed, the magnetization of the spin glass falls rapidly to a lower value known as the "remanent" magnetization.
Magnetization then decays slowly as it approaches zero (or some small fraction of the original value – this remains unknown). This decay is non-exponential, and no simple function can fit the curve of magnetization versus time adequately. This slow decay is particular to spin glasses. Experimental measurements on the order of days have shown continual changes above the noise level of instrumentation.
Spin glasses differ from ferromagnetic materials by the fact that after the external magnetic field is removed from a ferromagnetic substance, the magnetization remains indefinitely at the remanent value. Paramagnetic materials differ from spin glasses by the fact that, after the external magnetic field is removed, the magnetization rapidly falls to zero, with no remanent magnetization. The decay is rapid and exponential.
If the sample is cooled below "T"c in the absence of an external magnetic field, and a magnetic field is applied after the transition to the spin glass phase, there is a rapid initial increase to a value called the "zero-field-cooled" magnetization. A slow upward drift then occurs toward the field-cooled magnetization.
Surprisingly, the sum of the two complicated functions of time (the zero-field-cooled and remanent magnetizations) is a constant, namely the field-cooled value, and thus both share identical functional forms with time, at least in the limit of very small external fields.
Edwards–Anderson model.
This is similar to the Ising model. In this model, we have spins arranged on a formula_0-dimensional lattice with only nearest neighbor interactions. This model can be solved exactly for the critical temperatures and a glassy phase is observed to exist at low temperatures. The Hamiltonian for this spin system is given by:
formula_1
where formula_2 refers to the Pauli spin matrix for the spin-half particle at lattice point formula_3, and the sum over formula_4 refers to summing over neighboring lattice points formula_3 and formula_5. A negative value of formula_6 denotes an antiferromagnetic type interaction between spins at points formula_3 and formula_5. The sum runs over all nearest neighbor positions on a lattice, of any dimension. The variables formula_6 representing the magnetic nature of the spin-spin interactions are called bond or link variables.
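As a concrete illustration of this Hamiltonian (a sketch only; the two-dimensional square lattice, periodic boundaries, Gaussian couplings and NumPy are choices of the illustration, not part of the model's definition), the energy of a given spin configuration can be evaluated as follows:

```python
# Sketch: Edwards-Anderson energy H = -sum_<ij> J_ij S_i S_j on an L x L square
# lattice with periodic boundaries; each nearest-neighbour bond is counted once.
import numpy as np

rng = np.random.default_rng(0)
L = 16
S = rng.choice([-1, 1], size=(L, L))          # Ising spins S_i = +-1
J_right = rng.normal(0.0, 1.0, size=(L, L))   # bond to the right neighbour
J_down = rng.normal(0.0, 1.0, size=(L, L))    # bond to the neighbour below

def ea_energy(S, J_right, J_down):
    right = np.roll(S, -1, axis=1)
    down = np.roll(S, -1, axis=0)
    return -(np.sum(J_right * S * right) + np.sum(J_down * S * down))

print(ea_energy(S, J_right, J_down))
```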
In order to determine the partition function for this system, one needs to average the free energy formula_7 where formula_8, over all possible values of formula_6. The distribution of values of formula_6 is taken to be a Gaussian with a mean formula_9 and a variance formula_10:
formula_11
Solving for the free energy using the replica method, below a certain temperature, a new magnetic phase called the spin glass phase (or glassy phase) of the system is found to exist which is characterized by a vanishing magnetization formula_12 along with a non-vanishing value of the two point correlation function between spins at the same lattice point but at two different replicas:
formula_13
where formula_14 are replica indices. The order parameter for the ferromagnetic to spin glass phase transition is therefore formula_15, and that for paramagnetic to spin glass is again formula_15. Hence the new set of order parameters describing the three magnetic phases consists of both formula_16 and formula_15.
Under the assumption of replica symmetry, the mean-field free energy is given by the expression:
formula_17
Sherrington–Kirkpatrick model.
In addition to unusual experimental properties, spin glasses are the subject of extensive theoretical and computational investigations. A substantial part of early theoretical work on spin glasses dealt with a form of mean-field theory based on a set of replicas of the partition function of the system.
An important, exactly solvable model of a spin glass was introduced by David Sherrington and Scott Kirkpatrick in 1975. It is an Ising model with long range frustrated ferro- as well as antiferromagnetic couplings. It corresponds to a mean-field approximation of spin glasses describing the slow dynamics of the magnetization and the complex non-ergodic equilibrium state.
Unlike the Edwards–Anderson (EA) model, although only two-spin interactions are considered, the range of each interaction can be potentially infinite (of the order of the size of the lattice). Therefore, any two spins can be linked with a ferromagnetic or an antiferromagnetic bond, and the distribution of these bonds is given exactly as in the case of the Edwards–Anderson model. The Hamiltonian for the SK model is very similar to that of the EA model:
formula_18
where formula_19 have same meanings as in the EA model. The equilibrium solution of the model, after some initial attempts by Sherrington, Kirkpatrick and others, was found by Giorgio Parisi in 1979 with the replica method. The subsequent work of interpretation of the Parisi solution—by M. Mezard, G. Parisi, M.A. Virasoro and many others—revealed the complex nature of a glassy low temperature phase characterized by ergodicity breaking, ultrametricity and non-selfaverageness. Further developments led to the creation of the cavity method, which allowed study of the low temperature phase without replicas. A rigorous proof of the Parisi solution has been provided in the work of Francesco Guerra and Michel Talagrand.
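A minimal numerical sketch of the SK Hamiltonian with Gaussian couplings of unit variance, together with a basic Metropolis update (the system size, temperature and number of sweeps are arbitrary choices of this illustration, not part of the model):

```python
# Sketch: energy of the Sherrington-Kirkpatrick model and a simple Metropolis sweep.
import numpy as np

rng = np.random.default_rng(1)
N = 200
J = rng.normal(0.0, 1.0, size=(N, N))
J = np.triu(J, k=1)            # keep each pair i<j exactly once
J = J + J.T                    # symmetrise (zero diagonal) for convenience

def sk_energy(S, J, N):
    # H = -(1/sqrt(N)) sum_{i<j} J_ij S_i S_j; the 1/2 undoes the double counting.
    return -0.5 / np.sqrt(N) * S @ J @ S

def metropolis_sweep(S, J, N, beta):
    for i in rng.permutation(N):
        dE = 2.0 / np.sqrt(N) * S[i] * (J[i] @ S)   # energy change of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            S[i] = -S[i]
    return S

S = rng.choice([-1, 1], size=N)
for _ in range(50):
    S = metropolis_sweep(S, J, N, beta=2.0)
print(sk_energy(S, J, N) / N)   # energy per spin
```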
Phase diagram.
When there is a uniform external magnetic field of magnitude formula_20, the energy function becomesformula_21Let all couplings formula_22 be IID samples from the Gaussian distribution with mean 0 and variance formula_23. In 1979, J.R.L. de Almeida and David Thouless found that, as in the case of the Ising model, the mean-field solution to the SK model becomes unstable in the low-temperature, low-magnetic-field regime.
The stability region on the phase diagram of the SK model is determined by two dimensionless parameters formula_24. Its phase diagram has two parts, divided by the "de Almeida–Thouless curve". The curve is the solution set of the equationsformula_25The phase transition occurs at formula_26. Just below it, we haveformula_27In the low-temperature, high-magnetic-field limit, the line isformula_28
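The de Almeida–Thouless line can be traced numerically by solving the two equations above for "q" and "y" at each reduced temperature "x". The following sketch assumes NumPy/SciPy; the finite "z"-grid used for the Gaussian average, the temperatures and the initial guesses are all choices of the illustration:

```python
# Sketch: numerically solve the two coupled equations for (q, y) at a given x,
# approximating the Gaussian average by a Riemann sum on a finite z-grid.
import numpy as np
from scipy.optimize import fsolve

z = np.linspace(-8.0, 8.0, 2001)
dz = z[1] - z[0]
w = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)

def gauss_avg(f):
    return np.sum(w * f) * dz

def at_equations(vars, x):
    q, y = vars
    u = (np.sqrt(max(q, 0.0)) * z + y) / x
    eq1 = gauss_avg(1.0 / np.cosh(u)**4) - x**2   # first equation
    eq2 = gauss_avg(np.tanh(u)**2) - q            # second equation
    return [eq1, eq2]

for x in [0.8, 0.6, 0.4]:
    q, y = fsolve(at_equations, x0=[1.0 - x, 1.0 - x], args=(x,))
    print(f"x = {x:.1f}:  q = {q:.3f},  y = M/J = {abs(y):.3f}")
```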
Infinite-range model.
This is also called the "p-spin model". The infinite-range model is a generalization of the Sherrington–Kirkpatrick model where we not only consider two-spin interactions but formula_29-spin interactions, where formula_30 and formula_31 is the total number of spins. Unlike the Edwards–Anderson model, but similar to the SK model, the interaction range is infinite. The Hamiltonian for this model is described by:
formula_32
where formula_33 have similar meanings as in the EA model. The formula_34 limit of this model is known as the random energy model. In this limit, the probability of the spin glass existing in a particular state depends only on the energy of that state and not on the individual spin configurations in it.
A Gaussian distribution of magnetic bonds across the lattice is usually assumed in solving this model. Any other distribution is expected to give the same result, as a consequence of the central limit theorem. The Gaussian distribution function, with mean formula_35 and variance formula_36, is given as:
formula_37
The order parameters for this system are given by the magnetization formula_16 and the two point spin correlation between spins at the same site formula_15, in two different replicas, which are the same as for the SK model. This infinite range model can be solved explicitly for the free energy in terms of formula_16 and formula_15, under the assumption of replica symmetry as well as 1-Replica Symmetry Breaking.
formula_38
Non-ergodic behavior and applications.
A thermodynamic system is ergodic when, given any (equilibrium) instance of the system, it eventually visits every other possible (equilibrium) state (of the same energy). One characteristic of spin glass systems is that, below the freezing temperature formula_39, instances are trapped in a "non-ergodic" set of states: the system may fluctuate between several states, but cannot transition to other states of equivalent energy. Intuitively, one can say that the system cannot escape from deep minima of the hierarchically disordered energy landscape; the distances between minima are given by an ultrametric, with tall energy barriers between minima. The participation ratio counts the number of states that are accessible from a given instance, that is, the number of states that participate in the ground state. The ergodic aspect of spin glass was instrumental in the awarding of half the 2021 Nobel Prize in Physics to Giorgio Parisi.
For physical systems, such as dilute manganese in copper, the freezing temperature is typically as low as 30 kelvins (−240 °C), and so the spin-glass magnetism appears to be practically without applications in daily life. The non-ergodic states and rugged energy landscapes are, however, quite useful in understanding the behavior of certain neural networks, including Hopfield networks, as well as many problems in computer science optimization and genetics.
Spin-glass without structural disorder.
Elemental crystalline neodymium is paramagnetic at room temperature and becomes an antiferromagnet with incommensurate order upon cooling below 19.9 K. Below this transition temperature it exhibits a complex set of magnetic phases that have long spin relaxation times and spin-glass behavior that does not rely on structural disorder.
History.
A detailed account of the history of spin glasses from the early 1960s to the late 1980s can be found in a series of popular articles by Philip W. Anderson in "Physics Today".
Discovery.
In the 1930s, materials scientists discovered the Kondo effect, in which the resistivity of nominally pure gold reaches a minimum at 10 K, and similarly for nominally pure Cu at 2 K. It was later understood that the Kondo effect occurs when a nonmagnetic metal is infused with dilute magnetic atoms.
Unusual behavior was observed in iron-in-gold alloys (Au"Fe") and manganese-in-copper alloys (Cu"Mn") at around 1 to 10 atomic percent. Cannella and Mydosh observed in 1972 that Au"Fe" had an unexpected cusplike peak in the a.c. susceptibility at a well-defined temperature, which would later be termed the "spin glass freezing temperature".
It was also called "mictomagnet" (micto- is Greek for "mixed"). The term arose from the observation that these materials often contain a mix of ferromagnetic (formula_40) and antiferromagnetic (formula_41) interactions, leading to their disordered magnetic structure. This term fell out of favor as the theoretical understanding of spin glasses evolved, recognizing that the magnetic frustration arises not just from a simple mixture of ferro- and antiferromagnetic interactions, but from their randomness and frustration in the system.
Sherrington-Kirkpatrick model.
Sherrington and Kirkpatrick proposed the SK model in 1975, and solved it by the replica method. They discovered that at low temperatures, its entropy becomes negative, which they thought was because the replica method is a heuristic method that does not apply at low temperatures.
It was then discovered that the replica method was correct, but the problem lies in the fact that the low-temperature broken symmetry in the SK model cannot be characterized by the Edwards-Anderson order parameter alone. Instead, further order parameters are necessary, which leads to the replica symmetry breaking ansatz of Giorgio Parisi. In the full replica symmetry breaking ansatz, infinitely many order parameters are required to characterize a stable solution.
Applications.
The formalism of replica mean-field theory has also been applied in the study of neural networks, where it has enabled calculations of properties such as the storage capacity of simple neural network architectures without requiring a training algorithm (such as backpropagation) to be designed or implemented.
More realistic spin glass models with short range frustrated interactions and disorder, like the Gaussian model where the couplings between neighboring spins follow a Gaussian distribution, have been studied extensively as well, especially using Monte Carlo simulations. These models display spin glass phases bordered by sharp phase transitions.
Besides its relevance in condensed matter physics, spin glass theory has acquired a strongly interdisciplinary character, with applications to neural network theory, computer science, theoretical biology, econophysics etc.
Spin glass models were adapted to the folding funnel model of protein folding.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "H = -\\sum_{\\langle ij\\rangle} J_{ij} S_i S_j,"
},
{
"math_id": 2,
"text": "S_i"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "\\langle ij\\rangle"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "J_{ij}"
},
{
"math_id": 7,
"text": "f\\left[J_{ij}\\right] = -\\frac{1}{\\beta} \\ln\\mathcal{Z}\\left[J_{ij}\\right]"
},
{
"math_id": 8,
"text": "\\mathcal{Z}\\left[J_{ij}\\right] = \\operatorname{Tr}_S \\left(e^{-\\beta H}\\right)"
},
{
"math_id": 9,
"text": "J_0"
},
{
"math_id": 10,
"text": "J^2"
},
{
"math_id": 11,
"text": "P(J_{ij}) = \\sqrt{\\frac{N}{2\\pi J^2}} \\exp\\left\\{-\\frac N {2J^2} \\left(J_{ij} - \\frac{J_0}{N}\\right)^2\\right\\}."
},
{
"math_id": 12,
"text": "m = 0"
},
{
"math_id": 13,
"text": "q = \\sum_{i=1}^N S^\\alpha_i S^\\beta_i \\neq 0,"
},
{
"math_id": 14,
"text": "\\alpha, \\beta"
},
{
"math_id": 15,
"text": "q"
},
{
"math_id": 16,
"text": "m"
},
{
"math_id": 17,
"text": "\\begin{align}\n \\beta f ={} - \\frac{\\beta^2 J^2}{4}(1 - q)^2 + \\frac{\\beta J_0 m^2}{2}\n - \\int \\exp\\left( -\\frac{z^2} 2 \\right) \\log \\left(2\\cosh\\left(\\beta Jz + \\beta J_0 m\\right)\\right) \\, \\mathrm{d}z.\n\\end{align}"
},
{
"math_id": 18,
"text": "\nH = - \\frac 1\\sqrt N \\sum_{i<j} J_{ij} S_i S_j\n"
},
{
"math_id": 19,
"text": "J_{ij}, S_i, S_j"
},
{
"math_id": 20,
"text": "\nM\n"
},
{
"math_id": 21,
"text": "\nH = - \\frac 1\\sqrt N \\sum_{i<j} J_{ij} S_i S_j - M \\sum_i S_i\n"
},
{
"math_id": 22,
"text": "\nJ_{ij}\n"
},
{
"math_id": 23,
"text": "\nJ^2\n"
},
{
"math_id": 24,
"text": "\nx := \\frac{kT}{J}, \\quad y := \\frac{M}{J}\n"
},
{
"math_id": 25,
"text": "\n\\begin{aligned}\n& x^2 = \\frac{1}{(2 \\pi)^{1 / 2}} \\int \\mathrm{d} z\\; \\mathrm{e}^{-\\frac 12 z^2} \\operatorname{sech}^4\\left(\\frac{q^{1 / 2} z + y}{x}\\right), \\\\\n& q=\\frac{1}{(2 \\pi)^{1 / 2}} \\int \\mathrm{d} z\\; \\mathrm{e}^{-\\frac{1}{2} z^2} \\tanh ^2\\left(\\frac{q^{1 / 2} z + y}{x}\\right) .\n\\end{aligned}\n"
},
{
"math_id": 26,
"text": "x = 1"
},
{
"math_id": 27,
"text": "\ny^2 \\approx \\frac 43 ( 1-x)^3.\n"
},
{
"math_id": 28,
"text": "\nx \\approx \\frac{4}{3\\sqrt{2\\pi}} e^{-\\frac 12 y^2}\n"
},
{
"math_id": 29,
"text": "p"
},
{
"math_id": 30,
"text": "p \\leq N"
},
{
"math_id": 31,
"text": "N"
},
{
"math_id": 32,
"text": "\nH = -\\sum_{i_1 < i_2 < \\cdots < i_p} J_{i_1 \\dots i_p} S_{i_1}\\cdots S_{i_p}\n"
},
{
"math_id": 33,
"text": "J_{i_1\\dots i_p}, S_{i_1},\\dots, S_{i_p}"
},
{
"math_id": 34,
"text": "p\\to \\infty"
},
{
"math_id": 35,
"text": "\\frac{J_0}{N} "
},
{
"math_id": 36,
"text": "\\frac{J^2}{N}"
},
{
"math_id": 37,
"text": "\nP\\left(J_{i_1\\cdots i_p}\\right) = \\sqrt{\\frac{N^{p-1}}{J^2 \\pi p!}} \\exp\\left\\{-\\frac{N^{p-1}}{J^2 p!} \\left(J_{i_1 \\cdots i_p} - \\frac{J_0 p!}{2N^{p-1}}\\right)\\right\\}\n"
},
{
"math_id": 38,
"text": "\\begin{align}\n \\beta f ={} &\\frac{1}{4}\\beta^2 J^2 q^p - \\frac{1}{2}p\\beta^2 J^2 q^p - \\frac{1}{4}\\beta^2 J^2 + \\frac{1}{2}\\beta J_0 p m^p + \\frac{1}{4\\sqrt{2\\pi}}p\\beta^2 J^2 q^{p-1} +{} \\\\\n &\\int \\exp\\left(-\\frac{1}{2}z^2\\right) \\log\\left(2\\cosh\\left(\\beta Jz \\sqrt{\\frac{1}{2}pq^{p-1}} + \\frac{1}{2}\\beta J_0 p m^{p-1}\\right)\\right)\\, \\mathrm{d}z\n\\end{align}"
},
{
"math_id": 39,
"text": "T_\\text{f}"
},
{
"math_id": 40,
"text": "J > 0"
},
{
"math_id": 41,
"text": "J < 0"
}
] |
https://en.wikipedia.org/wiki?curid=148555
|
14855633
|
Composition ring
|
In mathematics, a composition ring, introduced in , is a commutative ring ("R", 0, +, −, ·), possibly without an identity 1 (see non-unital ring), together with an operation
formula_0
such that, for any three elements formula_1 one has
It is "not" generally the case that formula_5, "nor" is it generally the case that formula_6 (or formula_7) has any algebraic relationship to formula_8 and formula_9.
Examples.
There are a few ways to make a commutative ring "R" into a composition ring without introducing anything new.
More interesting examples can be formed by defining a composition on another ring constructed from "R".
formula_16
However, as for formal power series, the composition cannot always be defined when the right operand "g" is a constant: in the formula given the denominator formula_17 should not be identically zero. One must therefore restrict to a subring of "R"("X") to have a well-defined composition operation; a suitable subring is given by the rational functions of which the numerator has zero constant term, but the denominator has nonzero constant term. Again this composition ring has no multiplicative unit; if "R" is a field, it is in fact a subring of the formal power series example.
For a concrete example take the ring formula_19, considered as the ring of polynomial maps from the integers to itself. A ring endomorphism
formula_20
of formula_19 is determined by the image under formula_21 of the variable formula_22, which we denote by
formula_23
and this image formula_24 can be any element of formula_19. Therefore, one may consider the elements formula_25 as endomorphisms and assign formula_26, accordingly. One easily verifies that formula_19 satisfies the above axioms. For example, one has
formula_27
This example is isomorphic to the given example for "R"["X"] with "R" equal to formula_28, and also to the subring of all functions formula_29 formed by the polynomial functions.
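The composition in this example is easy to check by direct computation; the following sketch (using SymPy, an implementation choice rather than anything from the article) verifies the identity above along with the two distributive axioms:

```python
# Sketch: checking the composition (x^2 + 3x + 5) o (x - 2) in Z[x] with SymPy,
# together with the axioms (f+g) o h = f o h + g o h and (f*g) o h = (f o h)(g o h).
import sympy as sp

x = sp.symbols('x')
f = x**2 + 3*x + 5
g = x - 2
print(sp.expand(f.subs(x, g)))          # x**2 - x + 3, as above

h = 2*x + 1
sum_axiom  = sp.expand((f + g).subs(x, h) - (f.subs(x, h) + g.subs(x, h)))
prod_axiom = sp.expand((f * g).subs(x, h) - f.subs(x, h) * g.subs(x, h))
print(sum_axiom, prod_axiom)            # 0 0
```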
|
[
{
"math_id": 0,
"text": "\\circ: R \\times R \\rightarrow R"
},
{
"math_id": 1,
"text": "f,g,h\\in R"
},
{
"math_id": 2,
"text": "(f+g)\\circ h=(f\\circ h)+(g\\circ h)"
},
{
"math_id": 3,
"text": "(f\\cdot g)\\circ h = (f\\circ h)\\cdot (g\\circ h)"
},
{
"math_id": 4,
"text": "(f\\circ g)\\circ h = f\\circ (g\\circ h)."
},
{
"math_id": 5,
"text": "f\\circ g=g\\circ f"
},
{
"math_id": 6,
"text": "f\\circ (g+h)"
},
{
"math_id": 7,
"text": "f\\circ (g\\cdot h)"
},
{
"math_id": 8,
"text": "f\\circ g"
},
{
"math_id": 9,
"text": "f\\circ h"
},
{
"math_id": 10,
"text": "f\\circ g=0"
},
{
"math_id": 11,
"text": "f\\circ g=f"
},
{
"math_id": 12,
"text": "f\\circ g=fg"
},
{
"math_id": 13,
"text": "(f\\circ g) (x)=f(g(x))"
},
{
"math_id": 14,
"text": "f, g \\in R"
},
{
"math_id": 15,
"text": "g_2^n"
},
{
"math_id": 16,
"text": "\\frac{f_1}{f_2}\\circ g=\\frac{f_1\\circ g}{f_2\\circ g}."
},
{
"math_id": 17,
"text": "f_2\\circ g"
},
{
"math_id": 18,
"text": "\\circ"
},
{
"math_id": 19,
"text": "{\\mathbb Z}[x]"
},
{
"math_id": 20,
"text": "F:{\\mathbb Z}[x]\\rightarrow{\\mathbb Z}[x]"
},
{
"math_id": 21,
"text": " F"
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "f=F(x)"
},
{
"math_id": 24,
"text": "f"
},
{
"math_id": 25,
"text": "f\\in{\\mathbb Z}[x]"
},
{
"math_id": 26,
"text": "\\circ:{\\mathbb Z}[x]\\times{\\mathbb Z}[x]\\rightarrow{\\mathbb Z}[x]"
},
{
"math_id": 27,
"text": "(x^2+3x+5)\\circ(x-2)=(x-2)^2+3(x-2)+5=x^2-x+3."
},
{
"math_id": 28,
"text": "\\mathbb Z"
},
{
"math_id": 29,
"text": "\\mathbb Z\\to\\mathbb Z"
}
] |
https://en.wikipedia.org/wiki?curid=14855633
|
14855952
|
Johnson–Lindenstrauss lemma
|
Mathematical result
In mathematics, the Johnson–Lindenstrauss lemma is a result named after William B. Johnson and Joram Lindenstrauss concerning low-distortion embeddings of points from high-dimensional into low-dimensional Euclidean space. The lemma states that a set of points in a high-dimensional space can be embedded into a space of much lower dimension in such a way that distances between the points are nearly preserved. In the classical proof of the lemma, the embedding is a random orthogonal projection.
The lemma has applications in compressed sensing, manifold learning, dimensionality reduction, and graph embedding. Much of the data stored and manipulated on computers, including text and images, can be represented as points in a high-dimensional space (see vector space model for the case of text). However, the essential algorithms for working with such data tend to become bogged down very quickly as dimension increases. It is therefore desirable to reduce the dimensionality of the data in a way that preserves its relevant structure. The Johnson–Lindenstrauss lemma is a classic result in this vein.
Also, the lemma is tight up to a constant factor, i.e. there exists a set of points of size "m" that needs dimension
formula_0
in order to preserve the distances between all pairs of points within a factor of formula_1.
Lemma.
Given formula_2, a set formula_3 of formula_4 points in formula_5 , and an integer formula_6, there is a linear map formula_7 such that
formula_8
for all formula_9.
The formula can be rearranged:formula_10
Alternatively, for any formula_11 and any integer formula_12 there exists a linear function formula_7 such that the restriction formula_13 is formula_14-bi-Lipschitz.
The classical proof of the lemma takes formula_15 to be a scalar multiple of an orthogonal projection formula_16 onto a random subspace of dimension formula_17 in formula_5. An orthogonal projection collapses some dimensions of the space it is applied to, which reduces the length of all vectors, as well as distance between vectors in the space. Under the conditions of the lemma, concentration of measure ensures there is a nonzero chance that a random orthogonal projection reduces pairwise distances between all points in formula_3 by roughly a constant factor formula_18. Since the chance is nonzero, such projections must exist, so we can choose one formula_16 and set formula_19.
To obtain the projection algorithmically, it suffices to repeatedly sample orthogonal projection matrices at random: each sample succeeds with nonzero probability, so repeating the sampling yields a suitable projection in expected polynomial time.
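The following sketch illustrates how pairwise distances are approximately preserved. It is a sketch only: it uses a matrix of i.i.d. Gaussian entries scaled by formula_17−1/2 (a common variant) rather than the exact orthogonal projection of the classical proof, and the number of points, ambient dimension and formula_1-parameter are arbitrary choices.

```python
# Sketch: random Gaussian map f(v) = A v preserving pairwise squared distances
# up to a factor 1 +- eps, with n chosen from the lemma's bound.
import numpy as np

rng = np.random.default_rng(0)
m, N, eps = 100, 10_000, 0.25
n = int(np.ceil(8 * np.log(m) / eps**2))      # target dimension from the lemma

X = rng.normal(size=(m, N))                   # m points in R^N
A = rng.normal(size=(n, N)) / np.sqrt(n)      # scaled Gaussian map

def pairwise_sq_dists(P):
    G = P @ P.T
    d = np.diag(G)
    return d[:, None] + d[None, :] - 2 * G

D_high = pairwise_sq_dists(X)
D_low = pairwise_sq_dists(X @ A.T)
mask = ~np.eye(m, dtype=bool)
ratio = D_low[mask] / D_high[mask]
print(ratio.min(), ratio.max())               # typically within [1 - eps, 1 + eps]
```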
Alternate statement.
A related lemma is the distributional JL lemma. This lemma states that for any formula_20 and positive integer formula_21, there exists a distribution over formula_22 from which the matrix formula_23 is drawn such that for formula_24 and for any unit-length vector formula_25, the claim below holds.
formula_26
One can obtain the JL lemma from the distributional version by setting formula_27 and formula_28 for some pair "u","v" both in "X". Then the JL lemma follows by a union bound over all such pairs.
Speeding up the JL transform.
Given "A", computing the matrix vector product takes formula_29 time. There has been some work in deriving distributions for which the matrix vector product can be computed in less than formula_29 time.
There are two major lines of work. The first, "Fast Johnson Lindenstrauss Transform" (FJLT), was introduced by Ailon and Chazelle in 2006.
This method allows the computation of the matrix vector product in just formula_30 for any constant formula_31.
Another approach is to build a distribution supported over matrices that are sparse.
This method allows keeping only an formula_32 fraction of the entries in the matrix, which means the computation can be done in just formula_33 time.
Furthermore, if the vector has only formula_34 non-zero entries, the Sparse JL takes time formula_35, which may be much less than the formula_36 time used by Fast JL.
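A simple instance of such a sparse construction (each column holding a fixed small number of nonzero entries equal to ±1/√s; the parameters below are purely illustrative and not taken from the cited analyses) might look as follows:

```python
# Sketch: a sparse random k x d projection whose columns each have exactly s
# nonzero entries +-1/sqrt(s), applied to a unit vector.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
k, d, s = 256, 20_000, 8

rows = np.concatenate([rng.choice(k, size=s, replace=False) for _ in range(d)])
cols = np.repeat(np.arange(d), s)
vals = rng.choice([-1.0, 1.0], size=d * s) / np.sqrt(s)
A = sparse.csc_matrix((vals, (rows, cols)), shape=(k, d))

x = rng.normal(size=d)
x /= np.linalg.norm(x)
print(np.linalg.norm(A @ x))   # close to 1 for most draws of A
```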
Tensorized random projections.
It is possible to combine two JL matrices by taking the so-called face-splitting product, which is defined as the tensor products of the rows (this product was proposed by V. Slyusar in 1996 for radar and digital antenna array applications).
More directly, let formula_37 and formula_38 be two matrices.
Then the face-splitting product formula_39 is
formula_40
This idea of tensorization was used by Kasiviswanathan et al. for differential privacy.
JL matrices defined like this use fewer random bits, and can be applied quickly to vectors that have tensor structure, due to the following identity:
formula_41,
where formula_42 is the element-wise (Hadamard) product.
Such computations have been used to efficiently compute polynomial kernels and many other related computations.
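The identity above is easy to verify numerically; a small sketch (NumPy, with arbitrary matrix sizes chosen only for illustration) follows:

```python
# Sketch: check (C • D)(x ⊗ y) = (C x) ∘ (D y), where • is the face-splitting
# (row-wise Kronecker) product and ∘ the element-wise product.
import numpy as np

rng = np.random.default_rng(0)
k, d1, d2 = 5, 3, 4
C = rng.normal(size=(k, d1))
D = rng.normal(size=(k, d2))
x = rng.normal(size=d1)
y = rng.normal(size=d2)

face_split = np.vstack([np.kron(C[i], D[i]) for i in range(k)])   # C • D, shape (k, d1*d2)
lhs = face_split @ np.kron(x, y)
rhs = (C @ x) * (D @ y)
print(np.allclose(lhs, rhs))   # True
```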
In 2020 it was shown that if the matrices formula_43 are independent formula_44 or Gaussian matrices, the combined matrix formula_45 satisfies the distributional JL lemma if the number of rows is at least
formula_46.
For large formula_47 this is as good as the completely random Johnson-Lindenstrauss, but
a matching lower bound in the same paper shows that this exponential dependency on formula_48 is necessary.
Alternative JL constructions are suggested to circumvent this.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\Omega \\left(\\frac{\\log(m)}{\\varepsilon^2}\\right)"
},
{
"math_id": 1,
"text": "(1 \\pm \\varepsilon)"
},
{
"math_id": 2,
"text": "0 < \\varepsilon < 1"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "m"
},
{
"math_id": 5,
"text": "\\mathbb{R}^N"
},
{
"math_id": 6,
"text": "n > 8(\\ln m)/\\varepsilon^2"
},
{
"math_id": 7,
"text": "f: \\mathbb{R}^N \\rightarrow \\mathbb{R}^n"
},
{
"math_id": 8,
"text": "(1-\\varepsilon)\\|u-v\\|^2 \\leq \\|f(u) - f(v)\\|^2 \\leq (1+\\varepsilon)\\|u-v\\|^2"
},
{
"math_id": 9,
"text": "u,v \\in X"
},
{
"math_id": 10,
"text": "(1+\\varepsilon)^{-1}\\|f(u)-f(v)\\|^2 \\leq \\|u-v\\|^2 \\leq (1-\\varepsilon)^{-1}\\|f(u)-f(v)\\|^2 "
},
{
"math_id": 11,
"text": "\\epsilon\\in(0,1)"
},
{
"math_id": 12,
"text": "n\\ge15(\\ln m)/\\varepsilon^2"
},
{
"math_id": 13,
"text": "f|_X"
},
{
"math_id": 14,
"text": "(1+\\varepsilon)"
},
{
"math_id": 15,
"text": "f"
},
{
"math_id": 16,
"text": "P"
},
{
"math_id": 17,
"text": "n"
},
{
"math_id": 18,
"text": "c"
},
{
"math_id": 19,
"text": "f(v) = Pv/c"
},
{
"math_id": 20,
"text": " 0 < \\varepsilon, \\delta < 1/2 "
},
{
"math_id": 21,
"text": " d "
},
{
"math_id": 22,
"text": " \\mathbb{R}^{k \\times d} "
},
{
"math_id": 23,
"text": " A "
},
{
"math_id": 24,
"text": " k = O(\\varepsilon^{-2} \\log(1/\\delta)) "
},
{
"math_id": 25,
"text": " x \\in \\mathbb{R}^{d} "
},
{
"math_id": 26,
"text": " P(|\\Vert Ax\\Vert_2^2-1|>\\varepsilon)<\\delta"
},
{
"math_id": 27,
"text": "x = (u-v)/\\|u-v\\|_2"
},
{
"math_id": 28,
"text": "\\delta < 1/n^2"
},
{
"math_id": 29,
"text": "O(kd)"
},
{
"math_id": 30,
"text": "d\\log d + k^{2+\\gamma}"
},
{
"math_id": 31,
"text": "\\gamma>0"
},
{
"math_id": 32,
"text": "\\varepsilon"
},
{
"math_id": 33,
"text": "kd\\varepsilon"
},
{
"math_id": 34,
"text": "b"
},
{
"math_id": 35,
"text": "kb\\varepsilon"
},
{
"math_id": 36,
"text": "d\\log d"
},
{
"math_id": 37,
"text": "{C}\\in\\mathbb R^{3\\times 3}"
},
{
"math_id": 38,
"text": "{D}\\in\\mathbb R^{3\\times 3}"
},
{
"math_id": 39,
"text": "{C}\\bullet {D}"
},
{
"math_id": 40,
"text": "\n{C} \\bullet {D}\n= \n\\left[\n\\begin{array} { c }\n{C}_1 \\otimes {D}_1\\\\\\hline \n{C}_2 \\otimes {D}_2\\\\\\hline\n{C}_3 \\otimes {D}_3\\\\\n\\end{array}\n\\right].\n"
},
{
"math_id": 41,
"text": "(\\mathbf{C} \\bull \\mathbf{D})(x\\otimes y) = \\mathbf{C}x \\circ \\mathbf{D} y\n= \\left[\n\\begin{array} { c }\n(\\mathbf{C}x)_1 (\\mathbf{D} y)_1 \\\\\n(\\mathbf{C}x)_2 (\\mathbf{D} y)_2 \\\\\n\\vdots\n\\end{array}\\right]\n"
},
{
"math_id": 42,
"text": "\\circ"
},
{
"math_id": 43,
"text": "C_1, C_2, \\dots, C_c"
},
{
"math_id": 44,
"text": "\\pm1"
},
{
"math_id": 45,
"text": "C_1 \\bullet \\dots \\bullet C_c"
},
{
"math_id": 46,
"text": "O(\\epsilon^{-2}\\log1/\\delta + \\epsilon^{-1}(\\tfrac1c\\log1/\\delta)^c)"
},
{
"math_id": 47,
"text": "\\epsilon"
},
{
"math_id": 48,
"text": "(\\log1/\\delta)^c"
}
] |
https://en.wikipedia.org/wiki?curid=14855952
|
14856
|
Inner product space
|
Generalization of the dot product; used to define Hilbert spaces
In mathematics, an inner product space (or, rarely, a Hausdorff pre-Hilbert space) is a real vector space or a complex vector space with an operation called an inner product. The inner product of two vectors in the space is a scalar, often denoted with angle brackets such as in formula_0. Inner products allow formal definitions of intuitive geometric notions, such as lengths, angles, and orthogonality (zero inner product) of vectors. Inner product spaces generalize Euclidean vector spaces, in which the inner product is the dot product or "scalar product" of Cartesian coordinates. Inner product spaces of infinite dimension are widely used in functional analysis. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898.
An inner product naturally induces an associated norm (denoted formula_1 and formula_2 in the picture); so, every inner product space is a normed vector space. If this normed space is also complete (that is, a Banach space) then the inner product space is a Hilbert space. If an inner product space H is not a Hilbert space, it can be "extended" by completion to a Hilbert space formula_3 This means that formula_4 is a linear subspace of formula_5 the inner product of formula_4 is the restriction of that of formula_5 and formula_4 is dense in formula_6 for the topology defined by the norm.
Definition.
In this article, "F" denotes a field that is either the real numbers formula_7 or the complex numbers formula_8 A scalar is thus an element of "F". A bar over an expression representing a scalar denotes the complex conjugate of this scalar. A zero vector is denoted formula_9 for distinguishing it from the scalar 0.
An "inner product" space is a vector space "V" over the field "F" together with an "inner product", that is, a map
formula_10
that satisfies the following three properties for all vectors formula_11 and all scalars
If the positive-definiteness condition is replaced by merely requiring that formula_21 for all formula_18, then one obtains the definition of "positive semi-definite Hermitian form". A positive semi-definite Hermitian form formula_22 is an inner product if and only if for all formula_18, if formula_23 then formula_24.
Basic properties.
In the following properties, which result almost immediately from the definition of an inner product, "x", "y" and z are arbitrary vectors, and a and b are arbitrary scalars.
Over formula_16, conjugate-symmetry reduces to symmetry, and sesquilinearity reduces to bilinearity. Hence an inner product on a real vector space is a "positive-definite symmetric bilinear form". The binomial expansion of a square becomes
formula_31
Convention variant.
Some authors, especially in physics and matrix algebra, prefer to define inner products and sesquilinear forms with linearity in the second argument rather than the first. Then the first argument becomes conjugate linear, rather than the second. Bra-ket notation in quantum mechanics also uses slightly different notation, i.e. formula_32, where formula_33.
Notation.
Several notations are used for inner products, including
formula_34,
formula_35,
formula_32 and
formula_36, as well as the usual dot product.
Examples.
Real and complex numbers.
Among the simplest examples of inner product spaces are formula_16 and formula_8
The real numbers formula_16 are a vector space over formula_16 that becomes an inner product space with arithmetic multiplication as its inner product:
formula_37
The complex numbers formula_38 are a vector space over formula_38 that becomes an inner product space with the inner product
formula_39
Unlike with the real numbers, the assignment formula_40 does not define a complex inner product on formula_8
Euclidean vector space.
More generally, the real formula_41-space formula_42 with the dot product is an inner product space, an example of a Euclidean vector space.
formula_43
where formula_44 is the transpose of formula_45
A function formula_46 is an inner product on formula_42 if and only if there exists a symmetric positive-definite matrix formula_47 such that formula_48 for all formula_49 If formula_47 is the identity matrix then formula_48 is the dot product. For another example, if formula_50 and formula_51 is positive-definite (which happens if and only if formula_52 and one/both diagonal elements are positive) then for any formula_53
formula_54
As mentioned earlier, every inner product on formula_55 is of this form (where formula_56 and formula_57 satisfy formula_58).
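A small numerical illustration of this characterization (Python/NumPy; the particular matrix below is an arbitrary positive-definite choice, not one taken from the article):

```python
# Sketch: <x, y> = x^T M y for a symmetric positive-definite M defines an inner
# product on R^2; here a = 2, b = 1, d = 3, so ad - b^2 = 5 > 0.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def ip(x, y):
    return x @ M @ y

x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])

print(ip(x, y), ip(y, x))           # symmetry
print(ip(x, x) > 0, ip(y, y) > 0)   # positivity on these vectors
print(np.isclose(ip(2*x + y, y), 2*ip(x, y) + ip(y, y)))  # linearity in the first argument
```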
Complex coordinate space.
The general form of an inner product on formula_59 is known as the Hermitian form and is given by
formula_60
where formula_61 is any Hermitian positive-definite matrix and formula_62 is the conjugate transpose of formula_63 For the real case, this corresponds to the dot product of the results of directionally-different scaling of the two vectors, with positive scale factors and orthogonal directions of scaling. It is a weighted-sum version of the dot product with positive weights—up to an orthogonal transformation.
Hilbert space.
The article on Hilbert spaces has several examples of inner product spaces, wherein the metric induced by the inner product yields a complete metric space. An example of an inner product space which induces an incomplete metric is the space formula_64 of continuous complex valued functions formula_65 and formula_66 on the interval formula_67 The inner product is
formula_68
This space is not complete; consider, for example, for the interval [−1, 1], the sequence of continuous "step" functions formula_69 defined by:
formula_70
This sequence is a Cauchy sequence for the norm induced by the preceding inner product, which does not converge to a continuous function.
Random variables.
For real random variables formula_71 and formula_72 the expected value of their product
formula_73
is an inner product. In this case, formula_74 if and only if formula_75 (that is, formula_76 almost surely), where formula_77 denotes the probability of the event. This definition of expectation as inner product can be extended to random vectors as well.
Complex matrices.
The inner product for complex square matrices of the same size is the Frobenius inner product formula_78. Since trace and transposition are linear and the conjugation is on the second matrix, it is a sesquilinear operator. We further get Hermitian symmetry by,
formula_79
Finally, since for formula_80 nonzero, formula_81, we get that the Frobenius inner product is positive definite too, and so is an inner product.
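A short numerical sketch of the Frobenius inner product and these properties (NumPy; the random matrices are used purely for illustration):

```python
# Sketch: Frobenius inner product <A, B> = tr(A B^dagger) for complex matrices,
# checked for conjugate symmetry and for <A, A> = sum |A_ij|^2 > 0.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

def frob(X, Y):
    return np.trace(X @ Y.conj().T)

print(np.isclose(frob(A, B), np.conj(frob(B, A))))   # Hermitian symmetry
print(np.isclose(frob(A, A), np.sum(np.abs(A)**2)))  # positive definiteness
```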
Vector spaces with forms.
On an inner product space, or more generally a vector space with a nondegenerate form (hence an isomorphism formula_82), vectors can be sent to covectors (in coordinates, via transpose), so that one can take the inner product and outer product of two vectors—not simply of a vector and a covector.
Basic results, terminology, and definitions.
Norm properties.
Every inner product space induces a norm, called its , that is defined by
formula_83
With this norm, every inner product space becomes a normed vector space.
So, every general property of normed vector spaces applies to inner product spaces.
In particular, one has the following properties:
<templatestyles src="Glossary/styles.css" />
Orthogonality.
<templatestyles src="Glossary/styles.css" />
Real and complex parts of inner products.
Suppose that formula_22 is an inner product on formula_86 (so it is antilinear in its second argument). The polarization identity shows that the real part of the inner product is
formula_88
If formula_86 is a real vector space then
formula_89
and the imaginary part (also called the complex part) of formula_22 is always formula_90
Assume for the rest of this section that formula_86 is a complex vector space.
The polarization identity for complex vector spaces shows that
formula_91
The map defined by formula_92 for all formula_93 satisfies the axioms of the inner product except that it is antilinear in its first, rather than its second, argument. The real parts of both formula_94 and formula_87 are equal to formula_95 but the inner products differ in their complex part:
formula_96
The last equality is similar to the formula expressing a linear functional in terms of its real part.
These formulas show that every complex inner product is completely determined by its real part. Moreover, this real part defines an inner product on formula_97 considered as a real vector space. There is thus a one-to-one correspondence between complex inner products on a complex vector space formula_97 and real inner products on formula_98
For example, suppose that formula_99 for some integer formula_100 When formula_86 is considered as a real vector space in the usual way (meaning that it is identified with the formula_101dimensional real vector space formula_102 with each formula_103 identified with formula_104), then the dot product formula_105 defines a real inner product on this space. The unique complex inner product formula_106 on formula_107 induced by the dot product is the map that sends formula_108 to formula_109 (because the real part of this map formula_106 is equal to the dot product).
Real vs. complex inner products
Let formula_110 denote formula_86 considered as a vector space over the real numbers rather than complex numbers.
The real part of the complex inner product formula_87 is the map formula_111 which necessarily forms a real inner product on the real vector space formula_112 Every inner product on a real vector space is a bilinear and symmetric map.
For example, if formula_113 with inner product formula_114 where formula_86 is a vector space over the field formula_115 then formula_116 is a vector space over formula_16 and formula_117 is the dot product formula_118 where formula_119 is identified with the point formula_120 (and similarly for formula_84); thus the standard inner product formula_114 on formula_38 is an "extension" of the dot product. Also, had formula_87 instead been defined to be the symmetric map formula_121 (rather than the usual conjugate symmetric map formula_122) then its real part formula_117 would not be the dot product; furthermore, without the complex conjugate, if formula_123 but formula_124 then formula_125 so the assignment formula_126 would not define a norm.
The next examples show that although real and complex inner products have many properties and results in common, they are not entirely interchangeable.
For instance, if formula_127 then formula_128 but the next example shows that the converse is in general not true.
Given any formula_129 the vector formula_130 (which is the vector formula_18 rotated by 90°) belongs to formula_86 and so also belongs to formula_110 (although scalar multiplication of formula_18 by formula_131 is not defined in formula_132 the vector in formula_86 denoted by formula_130 is nevertheless still also an element of formula_110). For the complex inner product, formula_133 whereas for the real inner product the value is always formula_134
If formula_106 is a complex inner product and formula_135 is a continuous linear operator that satisfies formula_136 for all formula_129 then formula_137 This statement is no longer true if formula_106 is instead a real inner product, as this next example shows.
Suppose that formula_113 has the inner product formula_138 mentioned above. Then the map formula_135 defined by formula_139 is a linear map (linear for both formula_86 and formula_110) that denotes rotation by formula_140 in the plane. Because formula_18 and formula_141 are perpendicular vectors and formula_142 is just the dot product, formula_143 for all vectors formula_144 nevertheless, this rotation map formula_80 is certainly not identically formula_90 In contrast, using the complex inner product gives formula_145 which (as expected) is not identically zero.
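This contrast is easy to reproduce numerically; a small sketch (Python, with ordinary complex numbers standing in for elements of the space) follows:

```python
# Sketch: on V = C with <x, y> = x * conj(y), the rotation A x = i x satisfies
# Re<x, A x> = 0 for every x, while <x, A x> = -i |x|^2 is not identically zero.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(3):
    x = complex(rng.normal(), rng.normal())
    Ax = 1j * x
    ip = x * np.conj(Ax)                                   # complex inner product <x, A x>
    print(np.isclose(ip.real, 0.0), np.isclose(ip, -1j * abs(x)**2))
```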
Orthonormal sequences.
Let formula_86 be a finite dimensional inner product space of dimension formula_146 Recall that every basis of formula_86 consists of exactly formula_41 linearly independent vectors. Using the Gram–Schmidt process we may start with an arbitrary basis and transform it into an orthonormal basis. That is, into a basis in which all the elements are orthogonal and have unit norm. In symbols, a basis formula_147 is orthonormal if formula_148 for every formula_149 and formula_150 for each index formula_151
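The Gram–Schmidt process itself is short to write down; the following sketch (NumPy, on the Euclidean space R^"n" with the dot product, an illustrative choice of inner product space) turns an arbitrary basis into an orthonormal one:

```python
# Sketch: Gram-Schmidt orthonormalization of a list of linearly independent vectors.
import numpy as np

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)   # remove components along earlier vectors
        basis.append(w / np.linalg.norm(w))
    return basis

rng = np.random.default_rng(0)
vectors = [rng.normal(size=4) for _ in range(4)]       # almost surely a basis of R^4
E = np.array(gram_schmidt(vectors))
print(np.allclose(E @ E.T, np.eye(4)))                 # orthonormality: <e_i, e_j> = delta_ij
```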
This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let formula_86 be any inner product space. Then a collection
formula_152
is a basis for formula_86 if the subspace of formula_86 generated by finite linear combinations of elements of formula_153 is dense in formula_86 (in the norm induced by the inner product). Say that formula_153 is an orthonormal basis for formula_86 if it is a basis and
formula_154
if formula_155 and formula_156 for all formula_157
Using an infinite-dimensional analog of the Gram-Schmidt process one may show:
Theorem. Any separable inner product space has an orthonormal basis.
Using the Hausdorff maximal principle and the fact that in a complete inner product space orthogonal projection onto linear subspaces is well-defined, one may also show that
Theorem. Any complete inner product space has an orthonormal basis.
The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is a non-trivial result, and is proved below. The following proof is taken from Halmos's "A Hilbert Space Problem Book" (see the references).
Parseval's identity leads immediately to the following theorem:
Theorem. Let formula_86 be a separable inner product space and formula_158 an orthonormal basis of formula_98 Then the map
formula_159
is an isometric linear map formula_160 with a dense image.
This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided formula_161 is defined appropriately, as is explained in the article Hilbert space). In particular, we obtain the following result in the theory of Fourier series:
Theorem. Let formula_86 be the inner product space formula_162 Then the sequence (indexed on set of all integers) of continuous functions
formula_163
is an orthonormal basis of the space formula_164 with the formula_165 inner product. The mapping
formula_166
is an isometric linear map with dense image.
Orthogonality of the sequence formula_167 follows immediately from the fact that if formula_168 then
formula_169
Normality of the sequence is by design: the coefficients are chosen so that the norm comes out to 1. Finally, the fact that the sequence has a dense algebraic span, in the inner product norm, follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on formula_170 with the uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials.
Operators on inner product spaces.
Several types of linear maps formula_171 between inner product spaces formula_86 and formula_172 are of relevance:
From the point of view of inner product space theory, there is no need to distinguish between two spaces which are isometrically isomorphic. The spectral theorem provides a canonical form for symmetric, unitary and more generally normal operators on finite dimensional inner product spaces. A generalization of the spectral theorem holds for continuous normal operators in Hilbert spaces.
Generalizations.
Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are closest to inner products occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is weakened.
Degenerate inner products.
If formula_86 is a vector space and formula_179 a semi-definite sesquilinear form, then the function:
formula_180
makes sense and satisfies all the properties of norm except that formula_181 does not imply formula_182 (such a functional is then called a semi-norm). We can produce an inner product space by considering the quotient formula_183 The sesquilinear form formula_179 factors through formula_184
This construction is used in numerous contexts. The Gelfand–Naimark–Segal construction is a particularly important example of the use of this technique. Another example is the representation of semi-definite kernels on arbitrary sets.
Nondegenerate conjugate symmetric forms.
Alternatively, one may require that the pairing be a nondegenerate form, meaning that for all non-zero formula_185 there exists some formula_84 such that formula_186 though formula_84 need not equal formula_18; in other words, the induced map to the dual space formula_82 is injective. This generalization is important in differential geometry: a manifold whose tangent spaces have an inner product is a Riemannian manifold, while if the tangent spaces instead carry a nondegenerate conjugate symmetric form, the manifold is a pseudo-Riemannian manifold. By Sylvester's law of inertia, just as every inner product is similar to the dot product with positive weights on a set of vectors, every nondegenerate conjugate symmetric form is similar to the dot product with nonzero weights on a set of vectors, and the numbers of positive and negative weights are called respectively the positive index and negative index. The product of vectors in Minkowski space is an example of an indefinite inner product, although, technically speaking, it is not an inner product according to the standard definition above. Minkowski space has four dimensions and indices 3 and 1 (assignment of "+" and "−" to them differs depending on conventions).
Purely algebraic statements (ones that do not use positivity) usually only rely on the nondegeneracy (the injective homomorphism formula_82) and thus hold more generally.
Related products.
The term "inner product" is opposed to outer product, which is a slightly more general opposite. Simply, in coordinates, the inner product is the product of a formula_187 covector with an formula_188 vector, yielding a formula_189 matrix (a scalar), while the outer product is the product of an formula_190 vector with a formula_187 covector, yielding an formula_191 matrix. The outer product is defined for different dimensions, while the inner product requires the same dimension. If the dimensions are the same, then the inner product is the trace of the outer product (trace only being properly defined for square matrices). In an informal summary: "inner is horizontal times vertical and shrinks down, outer is vertical times horizontal and expands out".
More abstractly, the outer product is the bilinear map formula_192 sending a vector and a covector to a rank 1 linear transformation (simple tensor of type (1, 1)), while the inner product is the bilinear evaluation map formula_193 given by evaluating a covector on a vector; the order of the domain vector spaces here reflects the covector/vector distinction.
The inner product and outer product should not be confused with the interior product and exterior product, which are instead operations on vector fields and differential forms, or more generally on the exterior algebra.
As a further complication, in geometric algebra the inner product and the exterior (Grassmann) product are combined in the geometric product (the Clifford product in a Clifford algebra) – the inner product sends two vectors (1-vectors) to a scalar (a 0-vector), while the exterior product sends two vectors to a bivector (2-vector) – and in this context the exterior product is usually called the outer product (alternatively, wedge product). The inner product is more correctly called a scalar product in this context, as the nondegenerate quadratic form in question need not be positive definite (need not be an inner product).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\langle a, b \\rangle"
},
{
"math_id": 1,
"text": "|x|"
},
{
"math_id": 2,
"text": "|y|"
},
{
"math_id": 3,
"text": "\\overline{H}."
},
{
"math_id": 4,
"text": "H"
},
{
"math_id": 5,
"text": "\\overline{H},"
},
{
"math_id": 6,
"text": "\\overline{H}"
},
{
"math_id": 7,
"text": "\\R,"
},
{
"math_id": 8,
"text": "\\Complex."
},
{
"math_id": 9,
"text": "\\mathbf 0"
},
{
"math_id": 10,
"text": " \\langle \\cdot, \\cdot \\rangle : V \\times V \\to F "
},
{
"math_id": 11,
"text": "x,y,z\\in V"
},
{
"math_id": 12,
"text": "\\langle x, y \\rangle = \\overline{\\langle y, x \\rangle}."
},
{
"math_id": 13,
"text": "\n a = \\overline{a}\n"
},
{
"math_id": 14,
"text": "a"
},
{
"math_id": 15,
"text": "\\langle x, x \\rangle "
},
{
"math_id": 16,
"text": "\\R"
},
{
"math_id": 17,
"text": "\n \\langle ax+by, z \\rangle = a \\langle x, z \\rangle + b \\langle y, z \\rangle."
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "\n \\langle x, x \\rangle > 0\n"
},
{
"math_id": 20,
"text": "\\langle x, x \\rangle"
},
{
"math_id": 21,
"text": "\\langle x, x \\rangle \\geq 0"
},
{
"math_id": 22,
"text": "\\langle \\cdot, \\cdot \\rangle"
},
{
"math_id": 23,
"text": "\\langle x, x \\rangle = 0"
},
{
"math_id": 24,
"text": "x = \\mathbf 0"
},
{
"math_id": 25,
"text": "\\langle \\mathbf{0}, x \\rangle=\\langle x,\\mathbf{0}\\rangle=0."
},
{
"math_id": 26,
"text": " \\langle x, x \\rangle"
},
{
"math_id": 27,
"text": "x=\\mathbf{0}."
},
{
"math_id": 28,
"text": "\\langle x, ay+bz \\rangle= \\overline a \\langle x, y \\rangle + \\overline b \\langle x, z \\rangle."
},
{
"math_id": 29,
"text": "\\langle x + y, x + y \\rangle = \\langle x, x \\rangle + 2\\operatorname{Re}(\\langle x, y \\rangle) + \\langle y, y \\rangle,"
},
{
"math_id": 30,
"text": "\\operatorname{Re}"
},
{
"math_id": 31,
"text": "\\langle x + y, x + y \\rangle = \\langle x, x \\rangle + 2\\langle x, y \\rangle + \\langle y, y \\rangle ."
},
{
"math_id": 32,
"text": " \\langle \\cdot | \\cdot \\rangle "
},
{
"math_id": 33,
"text": " \\langle x | y \\rangle := \\left ( y, x \\right ) "
},
{
"math_id": 34,
"text": " \\langle \\cdot, \\cdot \\rangle "
},
{
"math_id": 35,
"text": " \\left ( \\cdot, \\cdot \\right ) "
},
{
"math_id": 36,
"text": " \\left ( \\cdot | \\cdot \\right ) "
},
{
"math_id": 37,
"text": "\\langle x, y \\rangle := x y \\quad \\text{ for } x, y \\in \\R."
},
{
"math_id": 38,
"text": "\\Complex"
},
{
"math_id": 39,
"text": "\\langle x, y \\rangle := x \\overline{y} \\quad \\text{ for } x, y \\in \\Complex."
},
{
"math_id": 40,
"text": "(x, y) \\mapsto x y"
},
{
"math_id": 41,
"text": "n"
},
{
"math_id": 42,
"text": "\\R^n"
},
{
"math_id": 43,
"text": "\n\\left\\langle\n \\begin{bmatrix} x_1 \\\\ \\vdots \\\\ x_n \\end{bmatrix},\n \\begin{bmatrix} y_1 \\\\ \\vdots \\\\ y_n \\end{bmatrix}\n\\right\\rangle \n= x^\\textsf{T} y = \\sum_{i=1}^n x_i y_i = x_1 y_1 + \\cdots + x_n y_n,\n"
},
{
"math_id": 44,
"text": "x^{\\operatorname{T}}"
},
{
"math_id": 45,
"text": "x."
},
{
"math_id": 46,
"text": "\\langle \\,\\cdot, \\cdot\\, \\rangle : \\R^n \\times \\R^n \\to \\R"
},
{
"math_id": 47,
"text": "\\mathbf{M}"
},
{
"math_id": 48,
"text": "\\langle x, y \\rangle = x^{\\operatorname{T}} \\mathbf{M} y"
},
{
"math_id": 49,
"text": "x, y \\in \\R^n."
},
{
"math_id": 50,
"text": "n = 2"
},
{
"math_id": 51,
"text": "\\mathbf{M} = \\begin{bmatrix} a & b \\\\ b & d \\end{bmatrix}"
},
{
"math_id": 52,
"text": "\\det \\mathbf{M} = a d - b^2 > 0"
},
{
"math_id": 53,
"text": "x := \\left[x_1, x_2\\right]^{\\operatorname{T}}, y := \\left[y_1, y_2\\right]^{\\operatorname{T}} \\in \\R^2,"
},
{
"math_id": 54,
"text": "\\langle x, y \\rangle \n:= x^{\\operatorname{T}} \\mathbf{M} y \n= \\left[x_1, x_2\\right] \\begin{bmatrix} a & b \\\\ b & d \\end{bmatrix} \\begin{bmatrix} y_1 \\\\ y_2 \\end{bmatrix} \n= a x_1 y_1 + b x_1 y_2 + b x_2 y_1 + d x_2 y_2."
},
{
"math_id": 55,
"text": "\\R^2"
},
{
"math_id": 56,
"text": "b \\in \\R, a > 0"
},
{
"math_id": 57,
"text": "d > 0"
},
{
"math_id": 58,
"text": "a d > b^2"
},
{
"math_id": 59,
"text": "\\Complex^n"
},
{
"math_id": 60,
"text": "\\langle x, y \\rangle = y^\\dagger \\mathbf{M} x = \\overline{x^\\dagger \\mathbf{M} y},"
},
{
"math_id": 61,
"text": "M"
},
{
"math_id": 62,
"text": "y^{\\dagger}"
},
{
"math_id": 63,
"text": "y."
},
{
"math_id": 64,
"text": "C([a, b])"
},
{
"math_id": 65,
"text": "f"
},
{
"math_id": 66,
"text": "g"
},
{
"math_id": 67,
"text": "[a, b]."
},
{
"math_id": 68,
"text": "\\langle f, g \\rangle = \\int_a^b f(t) \\overline{g(t)} \\, \\mathrm{d}t."
},
{
"math_id": 69,
"text": "\\{ f_k \\}_k,"
},
{
"math_id": 70,
"text": "f_k(t) = \\begin{cases} 0 & t \\in [-1, 0] \\\\ 1 & t \\in \\left[\\tfrac{1}{k}, 1\\right] \\\\ kt & t \\in \\left(0, \\tfrac{1}{k}\\right) \\end{cases}"
},
{
"math_id": 71,
"text": "X"
},
{
"math_id": 72,
"text": "Y,"
},
{
"math_id": 73,
"text": "\\langle X, Y \\rangle = \\mathbb{E}[XY]"
},
{
"math_id": 74,
"text": "\\langle X, X \\rangle = 0"
},
{
"math_id": 75,
"text": "\\mathbb{P}[X = 0] = 1"
},
{
"math_id": 76,
"text": "X = 0"
},
{
"math_id": 77,
"text": "\\mathbb{P}"
},
{
"math_id": 78,
"text": "\\langle A, B \\rangle := \\operatorname{tr}\\left(AB^\\dagger\\right)"
},
{
"math_id": 79,
"text": "\\langle A, B \\rangle = \\operatorname{tr}\\left(AB^\\dagger\\right) = \\overline{\\operatorname{tr}\\left(BA^\\dagger\\right)} = \\overline{\\left\\langle B,A \\right\\rangle}"
},
{
"math_id": 80,
"text": "A"
},
{
"math_id": 81,
"text": "\\langle A, A\\rangle = \\sum_{ij} \\left|A_{ij}\\right|^2 > 0 "
},
{
"math_id": 82,
"text": "V \\to V^*"
},
{
"math_id": 83,
"text": "\\|x\\| = \\sqrt{\\langle x, x \\rangle}."
},
{
"math_id": 84,
"text": "y"
},
{
"math_id": 85,
"text": "x \\in V."
},
{
"math_id": 86,
"text": "V"
},
{
"math_id": 87,
"text": "\\langle x, y \\rangle"
},
{
"math_id": 88,
"text": "\\operatorname{Re} \\langle x, y \\rangle = \\frac{1}{4} \\left(\\|x + y\\|^2 - \\|x - y\\|^2\\right)."
},
{
"math_id": 89,
"text": "\\langle x, y \\rangle\n= \\operatorname{Re} \\langle x, y \\rangle\n= \\frac{1}{4} \\left(\\|x + y\\|^2 - \\|x - y\\|^2\\right)"
},
{
"math_id": 90,
"text": "0."
},
{
"math_id": 91,
"text": "\\begin{alignat}{4}\n\\langle x, \\ y \\rangle\n&= \\frac{1}{4} \\left(\\|x + y\\|^2 - \\|x - y\\|^2 + i\\|x + iy\\|^2 - i\\|x - iy\\|^2 \\right) \\\\\n&= \\operatorname{Re} \\langle x, y \\rangle + i \\operatorname{Re} \\langle x, i y \\rangle. \\\\\n\\end{alignat}"
},
{
"math_id": 92,
"text": "\\langle x \\mid y \\rangle = \\langle y, x \\rangle"
},
{
"math_id": 93,
"text": "x, y \\in V"
},
{
"math_id": 94,
"text": "\\langle x \\mid y \\rangle"
},
{
"math_id": 95,
"text": "\\operatorname{Re} \\langle x, y \\rangle"
},
{
"math_id": 96,
"text": "\\begin{alignat}{4}\n\\langle x \\mid y \\rangle\n&= \\frac{1}{4} \\left(\\|x + y\\|^2 - \\|x - y\\|^2 - i\\|x + iy\\|^2 + i\\|x - iy\\|^2 \\right) \\\\\n&= \\operatorname{Re} \\langle x, y \\rangle - i \\operatorname{Re} \\langle x, i y \\rangle. \\\\\n\\end{alignat}"
},
{
"math_id": 97,
"text": "V,"
},
{
"math_id": 98,
"text": "V."
},
{
"math_id": 99,
"text": "V = \\Complex^n"
},
{
"math_id": 100,
"text": "n > 0."
},
{
"math_id": 101,
"text": "2 n-"
},
{
"math_id": 102,
"text": "\\R^{2n},"
},
{
"math_id": 103,
"text": "\\left(a_1 + i b_1, \\ldots, a_n + i b_n\\right) \\in \\Complex^n"
},
{
"math_id": 104,
"text": "\\left(a_1, b_1, \\ldots, a_n, b_n\\right) \\in \\R^{2n}"
},
{
"math_id": 105,
"text": "x \\,\\cdot\\, y = \\left(x_1, \\ldots, x_{2n}\\right) \\, \\cdot \\, \\left(y_1, \\ldots, y_{2n}\\right) := x_1 y_1 + \\cdots + x_{2n} y_{2n}"
},
{
"math_id": 106,
"text": "\\langle \\,\\cdot, \\cdot\\, \\rangle"
},
{
"math_id": 107,
"text": "V = \\C^n"
},
{
"math_id": 108,
"text": "c = \\left(c_1, \\ldots, c_n\\right), d = \\left(d_1, \\ldots, d_n\\right) \\in \\Complex^n"
},
{
"math_id": 109,
"text": "\\langle c, d \\rangle := c_1 \\overline{d_1} + \\cdots + c_n \\overline{d_n}"
},
{
"math_id": 110,
"text": "V_{\\R}"
},
{
"math_id": 111,
"text": "\\langle x, y \\rangle_{\\R} = \\operatorname{Re} \\langle x, y \\rangle ~:~ V_{\\R} \\times V_{\\R} \\to \\R,"
},
{
"math_id": 112,
"text": "V_{\\R}."
},
{
"math_id": 113,
"text": "V = \\Complex"
},
{
"math_id": 114,
"text": "\\langle x, y \\rangle = x \\overline{y},"
},
{
"math_id": 115,
"text": "\\Complex,"
},
{
"math_id": 116,
"text": "V_{\\R} = \\R^2"
},
{
"math_id": 117,
"text": "\\langle x, y \\rangle_{\\R}"
},
{
"math_id": 118,
"text": "x \\cdot y,"
},
{
"math_id": 119,
"text": "x = a + i b \\in V = \\Complex"
},
{
"math_id": 120,
"text": "(a, b) \\in V_{\\R} = \\R^2"
},
{
"math_id": 121,
"text": "\\langle x, y \\rangle = x y"
},
{
"math_id": 122,
"text": "\\langle x, y \\rangle = x \\overline{y}"
},
{
"math_id": 123,
"text": "x \\in \\C"
},
{
"math_id": 124,
"text": "x \\not\\in \\R"
},
{
"math_id": 125,
"text": "\\langle x, x \\rangle = x x = x^2 \\not\\in [0, \\infty)"
},
{
"math_id": 126,
"text": "x \\mapsto \\sqrt{\\langle x, x \\rangle}"
},
{
"math_id": 127,
"text": "\\langle x, y \\rangle = 0"
},
{
"math_id": 128,
"text": "\\langle x, y \\rangle_{\\R} = 0,"
},
{
"math_id": 129,
"text": "x \\in V,"
},
{
"math_id": 130,
"text": "i x"
},
{
"math_id": 131,
"text": "i = \\sqrt{-1}"
},
{
"math_id": 132,
"text": "V_{\\R},"
},
{
"math_id": 133,
"text": "\\langle x, ix \\rangle = -i \\|x\\|^2,"
},
{
"math_id": 134,
"text": "\\langle x, ix \\rangle_{\\R} = 0."
},
{
"math_id": 135,
"text": "A : V \\to V"
},
{
"math_id": 136,
"text": "\\langle x, A x \\rangle = 0"
},
{
"math_id": 137,
"text": "A = 0."
},
{
"math_id": 138,
"text": "\\langle x, y \\rangle := x \\overline{y}"
},
{
"math_id": 139,
"text": "A x = ix"
},
{
"math_id": 140,
"text": "90^{\\circ}"
},
{
"math_id": 141,
"text": "A x"
},
{
"math_id": 142,
"text": "\\langle x, Ax \\rangle_{\\R}"
},
{
"math_id": 143,
"text": "\\langle x, Ax \\rangle_{\\R} = 0"
},
{
"math_id": 144,
"text": "x;"
},
{
"math_id": 145,
"text": "\\langle x, Ax \\rangle = -i \\|x\\|^2,"
},
{
"math_id": 146,
"text": "n."
},
{
"math_id": 147,
"text": "\\{e_1, \\ldots, e_n\\}"
},
{
"math_id": 148,
"text": "\\langle e_i, e_j \\rangle = 0"
},
{
"math_id": 149,
"text": "i \\neq j"
},
{
"math_id": 150,
"text": "\\langle e_i, e_i \\rangle = \\|e_a\\|^2 = 1"
},
{
"math_id": 151,
"text": "i."
},
{
"math_id": 152,
"text": "E = \\left\\{ e_a \\right\\}_{a \\in A}"
},
{
"math_id": 153,
"text": "E"
},
{
"math_id": 154,
"text": "\\left\\langle e_{a}, e_{b} \\right\\rangle = 0"
},
{
"math_id": 155,
"text": "a \\neq b"
},
{
"math_id": 156,
"text": "\\langle e_a, e_a \\rangle = \\|e_a\\|^2 = 1"
},
{
"math_id": 157,
"text": "a, b \\in A."
},
{
"math_id": 158,
"text": "\\left\\{e_k\\right\\}_k"
},
{
"math_id": 159,
"text": "x \\mapsto \\bigl\\{\\langle e_k, x \\rangle\\bigr\\}_{k \\in \\N}"
},
{
"math_id": 160,
"text": "V \\rightarrow \\ell^2"
},
{
"math_id": 161,
"text": "\\ell^2"
},
{
"math_id": 162,
"text": "C[-\\pi, \\pi]."
},
{
"math_id": 163,
"text": "e_k(t) = \\frac{e^{i k t}}{\\sqrt{2 \\pi}}"
},
{
"math_id": 164,
"text": "C[-\\pi, \\pi]"
},
{
"math_id": 165,
"text": "L^2"
},
{
"math_id": 166,
"text": "f \\mapsto \\frac{1}{\\sqrt{2 \\pi}} \\left\\{\\int_{-\\pi}^\\pi f(t) e^{-i k t} \\, \\mathrm{d}t \\right\\}_{k \\in \\Z}"
},
{
"math_id": 167,
"text": "\\{ e_k \\}_k"
},
{
"math_id": 168,
"text": "k \\neq j,"
},
{
"math_id": 169,
"text": "\\int_{-\\pi}^\\pi e^{-i (j - k) t} \\, \\mathrm{d}t = 0."
},
{
"math_id": 170,
"text": "[-\\pi, \\pi]"
},
{
"math_id": 171,
"text": "A : V \\to W"
},
{
"math_id": 172,
"text": "W"
},
{
"math_id": 173,
"text": "\\{ \\|Ax\\| : \\|x\\| \\leq 1\\},"
},
{
"math_id": 174,
"text": "\\langle Ax, y \\rangle = \\langle x, Ay \\rangle"
},
{
"math_id": 175,
"text": "x, y \\in V."
},
{
"math_id": 176,
"text": "\\|A x\\| = \\|x\\|"
},
{
"math_id": 177,
"text": "\\langle Ax, Ay \\rangle = \\langle x, y \\rangle"
},
{
"math_id": 178,
"text": "A(0) = 0."
},
{
"math_id": 179,
"text": "\\langle \\,\\cdot\\,, \\,\\cdot\\, \\rangle"
},
{
"math_id": 180,
"text": "\\|x\\| = \\sqrt{\\langle x, x\\rangle}"
},
{
"math_id": 181,
"text": "\\|x\\| = 0"
},
{
"math_id": 182,
"text": "x = 0"
},
{
"math_id": 183,
"text": "W = V / \\{x : \\|x\\| = 0\\}."
},
{
"math_id": 184,
"text": "W."
},
{
"math_id": 185,
"text": "x \\neq 0"
},
{
"math_id": 186,
"text": "\\langle x, y \\rangle \\neq 0,"
},
{
"math_id": 187,
"text": "1 \\times n"
},
{
"math_id": 188,
"text": "n \\times 1"
},
{
"math_id": 189,
"text": "1 \\times 1"
},
{
"math_id": 190,
"text": "m \\times 1"
},
{
"math_id": 191,
"text": "m \\times n"
},
{
"math_id": 192,
"text": "W \\times V^* \\to \\hom(V, W)"
},
{
"math_id": 193,
"text": "V^* \\times V \\to F"
}
] |
https://en.wikipedia.org/wiki?curid=14856
|
148594
|
Euler's constant
|
Relates logarithm and harmonic series
Constant value used in mathematics
Euler's constant (sometimes called the Euler–Mascheroni constant) is a mathematical constant, usually denoted by the lowercase Greek letter gamma ("γ"), defined as the limiting difference between the harmonic series and the natural logarithm, denoted here by log:
formula_0
Here, ⌊·⌋ represents the floor function.
The numerical value of Euler's constant, to 50 decimal places, is:
<templatestyles src="Block indent/styles.css"/>
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Is Euler's constant irrational? If so, is it transcendental?
History.
The constant first appeared in a 1734 paper by the Swiss mathematician Leonhard Euler, titled "De Progressionibus harmonicis observationes" (Eneström Index 43). Euler used the notations "C" and "O" for the constant. In 1790, the Italian mathematician Lorenzo Mascheroni used the notations "A" and "a" for the constant. The notation "γ" appears nowhere in the writings of either Euler or Mascheroni, and was chosen at a later time, perhaps because of the constant's connection to the gamma function. For example, the German mathematician Carl Anton Bretschneider used the notation "γ" in 1835, and Augustus De Morgan used it in a textbook published in parts from 1836 to 1842.
Appearances.
Euler's constant appears, among other places, in the following (where '*' means that this entry contains an explicit equation):
Properties.
The number "γ" has not been proved algebraic or transcendental. In fact, it is not even known whether "γ" is irrational. Using a continued fraction analysis, Papanikolaou showed in 1997 that if "γ" is rational, its denominator must be greater than 10244663. The ubiquity of "γ" revealed by the large number of equations below makes the irrationality of "γ" a major open question in mathematics.
However, some progress has been made. Kurt Mahler showed in 1968 that the number formula_2 is transcendental (here, formula_3 and formula_4 are Bessel functions). In 2009 Alexander Aptekarev proved that at least one of Euler's constant "γ" and the Euler–Gompertz constant "δ" is irrational; Tanguy Rivoal proved in 2012 that at least one of them is transcendental. In 2010 M. Ram Murty and N. Saradha showed that at most one of the numbers of the form
formula_5
with q ≥ 2 and 1 ≤ a < q is algebraic; this family includes the special case γ(2,4) = γ/4. In 2013 M. Ram Murty and A. Zaytseva found a different family containing γ, which is based on sums of reciprocals of integers not divisible by a fixed list of primes, with the same property.
Relation to gamma function.
γ is related to the digamma function Ψ, and hence the derivative of the gamma function Γ, when both functions are evaluated at 1. Thus:
formula_6
This is equal to the limits:
formula_7
Further limit results are:
formula_8
A limit related to the beta function (expressed in terms of gamma functions) is
formula_9
Relation to the zeta function.
γ can also be expressed as an infinite sum whose terms involve the Riemann zeta function evaluated at positive integers:
formula_10
The constant formula_11 can also be expressed in terms of the sum of the reciprocals of non-trivial zeros formula_12 of the zeta function:
formula_13
Other series related to the zeta function include:
formula_14
The error term in the last equation is a rapidly decreasing function of n. As a result, the formula is well-suited for efficient computation of the constant to high precision.
Other interesting limits equaling Euler's constant are the antisymmetric limit:
formula_15
and the following formula, established in 1898 by de la Vallée-Poussin:
formula_16
where ⌈·⌉ denotes the ceiling function.
This formula indicates that when taking any positive integer n and dividing it by each positive integer k less than n, the average fraction by which the quotient falls short of the next integer tends to γ (rather than 0.5) as n tends to infinity.
Closely related to this is the rational zeta series expression. By taking separately the first few terms of the series above, one obtains an estimate for the classical series limit:
formula_17
where ζ(m, n+1) is the Hurwitz zeta function. The sum in this equation involves the harmonic numbers H_n. Expanding some of the terms in the Hurwitz zeta function gives:
formula_18
where
γ can also be expressed as follows where A is the Glaisher–Kinkelin constant:
formula_19
γ can also be expressed as follows, which can be proven by expressing the zeta function as a Laurent series:
formula_20
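As a simple numerical illustration (a minimal Python sketch, not a high-precision method; the function name is purely illustrative), the truncated expansion of the harmonic numbers given above can be used to estimate γ:

```python
import math

def estimate_gamma(n: int) -> float:
    """Estimate Euler's constant from the truncated expansion
    H_n ~ log(n) + gamma + 1/(2n) - 1/(12n^2) + 1/(120n^4)."""
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    return harmonic - math.log(n) - 1 / (2 * n) + 1 / (12 * n**2) - 1 / (120 * n**4)

print(estimate_gamma(100))   # 0.57721566490153..., correct to roughly 13 decimal places
```

The rapid decay of the error term is what makes such truncated expansions practical even for small n.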
Relation to triangular numbers.
Numerous formulations have been derived that express formula_11 in terms of sums and logarithms of triangular numbers. One of the earliest of these is a formula for the formula_21th harmonic number attributed to Srinivasa Ramanujan where formula_11 is related to formula_22 in a series that considers the powers of formula_23 (an earlier, less-generalizable proof by Ernesto Cesàro gives the first two terms of the series, with an error term):
formula_24
From Stirling's approximation follows a similar series:
formula_25
The series of inverse triangular numbers also features in the study of the Basel problem posed by Pietro Mengoli. Mengoli proved that formula_26, a result Jacob Bernoulli later used to estimate the value of formula_27, placing it between formula_28 and formula_29. This identity appears in a formula used by Bernhard Riemann to compute roots of the zeta function, where formula_11 is expressed in terms of the sum of roots formula_12 plus the difference between Boya's expansion and the series of exact unit fractions formula_30:
formula_31
Integrals.
γ equals the value of a number of definite integrals:
formula_32
where is the fractional harmonic number, and formula_33 is the fractional part of formula_34.
The third formula in the integral list can be proved in the following way:
formula_35
The integral on the second line of the equation stands for the Debye function value of +∞, which is m!ζ(m+1).
Definite integrals in which γ appears include:
formula_36
One can express γ using a special case of Hadjicostas's formula as a double integral with equivalent series:
formula_37
An interesting comparison by Sondow is the double integral and alternating series
formula_38
It shows that log may be thought of as an "alternating Euler constant".
The two constants are also related by the pair of series
formula_39
where N1(n) and N0(n) are the number of 1s and 0s, respectively, in the base 2 expansion of n.
We also have Catalan's 1875 integral
formula_40
Series expansions.
In general,
formula_41
for any α > -1. However, the rate of convergence of this expansion depends significantly on α. In particular, γ_n(1/2) exhibits much more rapid convergence than the conventional expansion γ_n(0). This is because
formula_42
while
formula_43
Even so, there exist other series expansions which converge more rapidly than this; some of these are discussed below.
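These bounds can be checked numerically; the following short Python sketch (illustrative only, with the reference value of γ hard-coded) compares the two rates of convergence:

```python
import math

GAMMA = 0.5772156649015329   # reference value of Euler's constant

def gamma_n(n: int, alpha: float) -> float:
    """Partial approximation H_n - log(n + alpha) from the expansion above."""
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n + alpha)

for n in (10, 100, 1000):
    print(n, abs(gamma_n(n, 0.0) - GAMMA), abs(gamma_n(n, 0.5) - GAMMA))
# The alpha = 0 error shrinks like 1/(2n); the alpha = 1/2 error like 1/(24 n^2).
```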
Euler showed that the following infinite series approaches γ:
formula_44
The series for γ is equivalent to a series Nielsen found in 1897:
formula_45
In 1910, Vacca found the closely related series
formula_46
where log2 is the logarithm to base 2 and ⌊·⌋ is the floor function.
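A short numerical sketch of the partial sums of Vacca's series (illustrative only; the convergence is slow):

```python
def vacca_partial_sum(n_terms: int) -> float:
    """Partial sum of Vacca's series: sum over k >= 2 of (-1)^k * floor(log2 k) / k."""
    # k.bit_length() - 1 equals floor(log2 k) exactly for positive integers k.
    return sum((-1) ** k * (k.bit_length() - 1) / k for k in range(2, n_terms + 1))

print(vacca_partial_sum(2**20))   # ~0.5772; the series converges only slowly
```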
In 1926 he found a second series:
formula_47
From the Malmsten–Kummer expansion for the logarithm of the gamma function we get:
formula_48
An important expansion for Euler's constant is due to Fontana and Mascheroni
formula_49
where G_n are the Gregory coefficients. This series is the special case k = 1 of the expansions
formula_50
convergent for k = 1, 2, ...
A similar series with the Cauchy numbers of the second kind is
formula_51
Blagouchine (2018) found an interesting generalisation of the Fontana–Mascheroni series
formula_52
where ψ_n(x) are the Bernoulli polynomials of the second kind, which are defined by the generating function
formula_53
For any rational a this series contains rational terms only. For example, at a = 1, it becomes
formula_54
Other series with the same polynomials include these examples:
formula_55
and
formula_56
where Γ is the gamma function.
A series related to the Akiyama–Tanigawa algorithm is
formula_57
where G_n(2) are the Gregory coefficients of the second order.
As a series of prime numbers:
formula_58
Asymptotic expansions.
γ equals the following asymptotic formulas (where H_n is the nth harmonic number):
The third formula is also called the Ramanujan expansion.
Alabdulmohsin derived closed-form expressions for the sums of errors of these approximations. He showed that (Theorem A.1):
formula_62
Exponential.
The constant e^γ is important in number theory. It equals the following limit, where p_n is the nth prime number:
formula_63
This restates the third of Mertens' theorems. The numerical value of e^γ is:
<templatestyles src="Block indent/styles.css"/>.
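As a rough numerical check of this limit, the following Python sketch (illustrative only; convergence is slow, so only a couple of digits can be expected) forms the product over all primes below a bound:

```python
import math

def mertens_e_gamma(limit: int) -> float:
    """Rough check of e^gamma via Mertens' theorem:
    (1 / log p_n) * product of p / (p - 1) over all primes p <= limit."""
    sieve = bytearray([1]) * (limit + 1)      # simple sieve of Eratosthenes
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    product, largest_prime = 1.0, 2
    for p in range(2, limit + 1):
        if sieve[p]:
            product *= p / (p - 1)
            largest_prime = p
    return product / math.log(largest_prime)

print(mertens_e_gamma(10**6))   # roughly 1.78, to be compared with e^gamma = 1.78107...
```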
Other infinite products relating to e^γ include:
formula_64
These products result from the Barnes G-function.
In addition,
formula_65
where the nth factor is the (n + 1)th root of
formula_66
This infinite product, first discovered by Ser in 1926, was rediscovered by Sondow using hypergeometric functions.
It also holds that
formula_67
Continued fraction.
The continued fraction expansion of γ begins [0; 1, 1, 2, 1, 2, 1, 4, 3, 13, 5, 1, 1, 8, 1, 2, 4, 1, 1, 40, ...], which has no "apparent" pattern. The continued fraction is known to have at least 475,006 terms, and it has infinitely many terms if and only if γ is irrational.
Generalizations.
"Euler's generalized constants" are given by
formula_68
for , with γ as the special case α = 1. This can be further generalized to
formula_69
for some arbitrary decreasing function f. For example,
formula_70
gives rise to the Stieltjes constants, and
formula_71
gives
formula_72
where again the limit
formula_73
appears.
A two-dimensional limit generalization is the Masser–Gramain constant.
"Euler–Lehmer constants" are given by summation of inverses of numbers in a common
modulo class:
formula_74
The basic properties are
formula_75
and if the greatest common divisor gcd(a,q) = d then
formula_76
Published digits.
Euler initially calculated the constant's value to 6 decimal places. In 1781, he calculated it to 16 decimal places. Mascheroni attempted to calculate the constant to 32 decimal places, but made errors in the 20th–22nd and 31st–32nd decimal places; starting from the 20th digit, he calculated ...1811209008239 when the correct value is ...0651209008240.
References.
Footnotes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\n\\gamma &= \\lim_{n\\to\\infty}\\left(-\\log n + \\sum_{k=1}^n \\frac1{k}\\right)\\\\[5px]\n&=\\int_1^\\infty\\left(-\\frac1x+\\frac1{\\lfloor x\\rfloor}\\right)\\,\\mathrm dx.\n\\end{align}"
},
{
"math_id": 1,
"text": "2e^\\gamma/\\pi=1.134"
},
{
"math_id": 2,
"text": "\\frac{\\pi}{2}\\frac{Y_0(2)}{J_0(2)}-\\gamma"
},
{
"math_id": 3,
"text": "J_\\alpha(x)"
},
{
"math_id": 4,
"text": "Y_\\alpha(x)"
},
{
"math_id": 5,
"text": "\\gamma(a,q) = \\lim_{n\\rightarrow\\infty}\\left(\\left(\\sum_{k=0}^n{\\frac{1}{a+kq}}\\right) - \\frac{\\log{(a+nq})}{q}\\right)"
},
{
"math_id": 6,
"text": "-\\gamma = \\Gamma'(1) = \\Psi(1). "
},
{
"math_id": 7,
"text": "\\begin{align}-\\gamma &= \\lim_{z\\to 0}\\left(\\Gamma(z) - \\frac1{z}\\right) \\\\&= \\lim_{z\\to 0}\\left(\\Psi(z) + \\frac1{z}\\right).\\end{align}"
},
{
"math_id": 8,
"text": "\\begin{align} \\lim_{z\\to 0}\\frac1{z}\\left(\\frac1{\\Gamma(1+z)} - \\frac1{\\Gamma(1-z)}\\right) &= 2\\gamma \\\\\n\\lim_{z\\to 0}\\frac1{z}\\left(\\frac1{\\Psi(1-z)} - \\frac1{\\Psi(1+z)}\\right) &= \\frac{\\pi^2}{3\\gamma^2}. \\end{align}"
},
{
"math_id": 9,
"text": "\\begin{align} \\gamma &= \\lim_{n\\to\\infty}\\left(\\frac{ \\Gamma\\left(\\frac1{n}\\right) \\Gamma(n+1)\\, n^{1+\\frac1{n}}}{\\Gamma\\left(2+n+\\frac1{n}\\right)} - \\frac{n^2}{n+1}\\right) \\\\\n&= \\lim\\limits_{m\\to\\infty}\\sum_{k=1}^m{m \\choose k}\\frac{(-1)^k}{k}\\log\\big(\\Gamma(k+1)\\big). \\end{align}"
},
{
"math_id": 10,
"text": "\\begin{align}\\gamma &= \\sum_{m=2}^{\\infty} (-1)^m\\frac{\\zeta(m)}{m} \\\\\n &= \\log\\frac4{\\pi} + \\sum_{m=2}^{\\infty} (-1)^m\\frac{\\zeta(m)}{2^{m-1}m}.\\end{align} "
},
{
"math_id": 11,
"text": "\\gamma"
},
{
"math_id": 12,
"text": "\\rho"
},
{
"math_id": 13,
"text": "\\gamma = \\log 4\\pi + \\sum_{\\rho} \\frac{2}{\\rho} - 2"
},
{
"math_id": 14,
"text": "\\begin{align} \\gamma &= \\tfrac3{2}- \\log 2 - \\sum_{m=2}^\\infty (-1)^m\\,\\frac{m-1}{m}\\big(\\zeta(m)-1\\big) \\\\\n &= \\lim_{n\\to\\infty}\\left(\\frac{2n-1}{2n} - \\log n + \\sum_{k=2}^n \\left(\\frac1{k} - \\frac{\\zeta(1-k)}{n^k}\\right)\\right) \\\\\n &= \\lim_{n\\to\\infty}\\left(\\frac{2^n}{e^{2^n}} \\sum_{m=0}^\\infty \\frac{2^{mn}}{(m+1)!} \\sum_{t=0}^m \\frac1{t+1} - n \\log 2+ O \\left (\\frac1{2^{n}\\, e^{2^n}}\\right)\\right).\\end{align}"
},
{
"math_id": 15,
"text": "\\begin{align} \\gamma &= \\lim_{s\\to 1^+}\\sum_{n=1}^\\infty \\left(\\frac1{n^s}-\\frac1{s^n}\\right) \\\\&= \\lim_{s\\to 1}\\left(\\zeta(s) - \\frac{1}{s-1}\\right) \\\\&= \\lim_{s\\to 0}\\frac{\\zeta(1+s)+\\zeta(1-s)}{2} \\end{align}"
},
{
"math_id": 16,
"text": "\\gamma = \\lim_{n\\to\\infty}\\frac1{n}\\, \\sum_{k=1}^n \\left(\\left\\lceil \\frac{n}{k} \\right\\rceil - \\frac{n}{k}\\right)"
},
{
"math_id": 17,
"text": "\\gamma =\\lim_{n\\to\\infty}\\left( \\sum_{k=1}^n \\frac1{k} - \\log n -\\sum_{m=2}^\\infty \\frac{\\zeta(m,n+1)}{m}\\right),"
},
{
"math_id": 18,
"text": "H_n = \\log(n) + \\gamma + \\frac1{2n} - \\frac1{12n^2} + \\frac1{120n^4} - \\varepsilon,"
},
{
"math_id": 19,
"text": "\\gamma =12\\,\\log(A)-\\log(2\\pi)+\\frac{6}{\\pi^2}\\,\\zeta'(2)"
},
{
"math_id": 20,
"text": "\\gamma=\\lim_{n\\to\\infty}\\left(-n+\\zeta\\left(\\frac{n+1}{n}\\right)\\right)"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "\\textstyle \\ln 2T_{k}"
},
{
"math_id": 23,
"text": "\\textstyle \\frac{1}{T_{k}}"
},
{
"math_id": 24,
"text": "\\begin{align} \n \\gamma \n &= H_u - \\frac{1}{2} \\ln 2T_u - \\sum_{k=1}^{v}\\frac{R(k)}{T_{u}^{k}}-\\Theta_{v}\\,\\frac{R(v+1)}{T_{u}^{v+1}}\n\\end{align}"
},
{
"math_id": 25,
"text": "\\gamma = \\ln 2\\pi - \\sum_{k=2}^{n} \\frac{\\zeta(k)}{T_{k}}"
},
{
"math_id": 26,
"text": "\\textstyle \\sum_{k = 1}^\\infty \\frac{1}{2T_k} = 1"
},
{
"math_id": 27,
"text": "\\zeta(2)"
},
{
"math_id": 28,
"text": "1"
},
{
"math_id": 29,
"text": "\\textstyle \\sum_{k = 1}^\\infty \\frac{2}{2T_k} = \\sum_{k = 1}^\\infty \\frac{1}{T_{k}} = 2"
},
{
"math_id": 30,
"text": "\\textstyle \\sum_{k = 1}^n \\frac{1}{T_{k}}"
},
{
"math_id": 31,
"text": "\\gamma - \\ln 2 = \\ln 2\\pi + \\sum_{\\rho} \\frac{2}{\\rho} - \\sum_{k = 1}^n \\frac{1}{T_k}"
},
{
"math_id": 32,
"text": "\\begin{align}\n\\gamma &= - \\int_0^\\infty e^{-x} \\log x \\,dx \\\\\n &= -\\int_0^1\\log\\left(\\log\\frac 1 x \\right) dx \\\\ \n &= \\int_0^\\infty \\left(\\frac1{e^x-1}-\\frac1{x\\cdot e^x} \\right)dx \\\\\n &= \\int_0^1\\frac{1-e^{-x}}{x} \\, dx -\\int_1^\\infty \\frac{e^{-x}}{x}\\, dx\\\\\n &= \\int_0^1\\left(\\frac1{\\log x} + \\frac1{1-x}\\right)dx\\\\\n &= \\int_0^\\infty \\left(\\frac1{1+x^k}-e^{-x}\\right)\\frac{dx}{x},\\quad k>0\\\\\n &= 2\\int_0^\\infty \\frac{e^{-x^2}-e^{-x}}{x} \\, dx ,\\\\\n&= \\log\\frac{\\pi}{4}-\\int_0^\\infty \\frac{\\log x}{\\cosh^2x} \\, dx ,\\\\\n &= \\int_0^1 H_x \\, dx, \\\\\n &= \\frac{1}{2}+\\int_{0}^{\\infty}\\log\\left(1+\\frac{\\log\\left(1+\\frac{1}{t}\\right)^{2}}{4\\pi^{2}}\\right)dt \\\\\n&= 1-\\int_0^1 \\{1/x\\} dx \n \\end{align}\n "
},
{
"math_id": 33,
"text": "\\{1/x\\}"
},
{
"math_id": 34,
"text": "1/x"
},
{
"math_id": 35,
"text": "\\begin{align}\n&\\int_0^{\\infty} \\left(\\frac{1}{e^x - 1} - \\frac{1}{x e^x} \\right) dx\n = \\int_0^{\\infty} \\frac{e^{-x} + x - 1}{x[e^x -1]} dx\n = \\int_0^{\\infty} \\frac{1}{x[e^x - 1]} \\sum_{m = 1}^{\\infty} \\frac{(-1)^{m+1}x^{m+1}}{(m+1)!} dx \\\\[2pt]\n&= \\int_0^{\\infty} \\sum_{m = 1}^{\\infty} \\frac{(-1)^{m+1}x^m}{(m+1)![e^x -1]} dx\n = \\sum_{m = 1}^{\\infty} \\int_0^{\\infty} \\frac{(-1)^{m+1}x^m}{(m+1)![e^x -1]} dx\n = \\sum_{m = 1}^{\\infty} \\frac{(-1)^{m+1}}{(m+1)!} \\int_0^{\\infty} \\frac{x^m}{e^x - 1} dx \\\\[2pt]\n&= \\sum_{m = 1}^{\\infty} \\frac{(-1)^{m+1}}{(m+1)!} m!\\zeta(m+1)\n = \\sum_{m = 1}^{\\infty} \\frac{(-1)^{m+1}}{m+1}\\zeta(m+1)\n = \\sum_{m = 1}^{\\infty} \\frac{(-1)^{m+1}}{m+1} \\sum_{n = 1}^{\\infty}\\frac{1}{n^{m+1}}\n = \\sum_{m = 1}^{\\infty} \\sum_{n = 1}^{\\infty} \\frac{(-1)^{m+1}}{m+1}\\frac{1}{n^{m+1}} \\\\[2pt]\n&= \\sum_{n = 1}^{\\infty} \\sum_{m = 1}^{\\infty} \\frac{(-1)^{m+1}}{m+1}\\frac{1}{n^{m+1}}\n = \\sum_{n = 1}^{\\infty} \\left[\\frac{1}{n} - \\log\\left(1+\\frac{1}{n}\\right)\\right]\n = \\gamma\n\\end{align}"
},
{
"math_id": 36,
"text": "\\begin{align}\n\\int_0^\\infty e^{-x^2} \\log x \\,dx &= -\\frac{(\\gamma+2\\log 2)\\sqrt{\\pi}}{4} \\\\\n\\int_0^\\infty e^{-x} \\log^2 x \\,dx &= \\gamma^2 + \\frac{\\pi^2}{6}\n\\end{align}"
},
{
"math_id": 37,
"text": "\\begin{align}\n\\gamma &= \\int_0^1 \\int_0^1 \\frac{x-1}{(1-xy)\\log xy}\\,dx\\,dy \\\\\n&= \\sum_{n=1}^\\infty \\left(\\frac 1 n -\\log\\frac{n+1} n \\right).\n\\end{align}"
},
{
"math_id": 38,
"text": "\\begin{align}\n\\log\\frac 4 \\pi &= \\int_0^1 \\int_0^1 \\frac{x-1}{(1+xy)\\log xy} \\,dx\\,dy \\\\\n&= \\sum_{n=1}^\\infty \\left((-1)^{n-1}\\left(\\frac 1 n -\\log\\frac{n+1} n \\right)\\right).\n\\end{align}"
},
{
"math_id": 39,
"text": "\\begin{align}\n\\gamma &= \\sum_{n=1}^\\infty \\frac{N_1(n) + N_0(n)}{2n(2n+1)} \\\\\n\\log\\frac4{\\pi} &= \\sum_{n=1}^\\infty \\frac{N_1(n) - N_0(n)}{2n(2n+1)} ,\n\\end{align}"
},
{
"math_id": 40,
"text": "\\gamma = \\int_0^1 \\left(\\frac1{1+x}\\sum_{n=1}^\\infty x^{2^n-1}\\right)\\,dx."
},
{
"math_id": 41,
"text": "\n\\gamma = \\lim_{n \\to \\infty}\\left(\\frac{1}{1}+\\frac{1}{2}+\\frac{1}{3} + \\ldots + \\frac{1}{n} - \\log(n+\\alpha) \\right) \\equiv \\lim_{n \\to \\infty} \\gamma_n(\\alpha)\n"
},
{
"math_id": 42,
"text": "\n\\frac{1}{2(n+1)} < \\gamma_n(0) - \\gamma < \\frac{1}{2n},\n"
},
{
"math_id": 43,
"text": "\n\\frac{1}{24(n+1)^2} < \\gamma_n(1/2) - \\gamma < \\frac{1}{24n^2}.\n"
},
{
"math_id": 44,
"text": "\\gamma = \\sum_{k=1}^\\infty \\left(\\frac 1 k - \\log\\left(1+\\frac 1 k \\right)\\right)."
},
{
"math_id": 45,
"text": "\\gamma = 1 - \\sum_{k=2}^\\infty (-1)^k\\frac{\\left\\lfloor\\log_2 k\\right\\rfloor}{k+1}."
},
{
"math_id": 46,
"text": "\\begin{align}\n\\gamma & = \\sum_{k=2}^\\infty (-1)^k\\frac{\\left\\lfloor\\log_2 k\\right\\rfloor} k \\\\[5pt]\n& = \\tfrac12-\\tfrac13 + 2\\left(\\tfrac14 - \\tfrac15 + \\tfrac16 - \\tfrac17\\right) + 3\\left(\\tfrac18 - \\tfrac19 + \\tfrac1{10} - \\tfrac1{11} + \\cdots - \\tfrac1{15}\\right) + \\cdots,\n\\end{align}"
},
{
"math_id": 47,
"text": "\\begin{align}\n\\gamma + \\zeta(2) & = \\sum_{k=2}^\\infty \\left( \\frac1{\\left\\lfloor\\sqrt{k}\\right\\rfloor^2} - \\frac1{k}\\right) \\\\[5pt]\n& = \\sum_{k=2}^\\infty \\frac{k - \\left\\lfloor\\sqrt{k}\\right\\rfloor^2}{k \\left\\lfloor \\sqrt{k} \\right\\rfloor^2} \\\\[5pt]\n&= \\frac12 + \\frac23 + \\frac1{2^2}\\sum_{k=1}^{2\\cdot 2} \\frac{k}{k+2^2} + \\frac1{3^2}\\sum_{k=1}^{3\\cdot 2} \\frac{k}{k+3^2} + \\cdots\n\\end{align}"
},
{
"math_id": 48,
"text": "\\gamma = \\log\\pi - 4\\log\\left(\\Gamma(\\tfrac34)\\right) + \\frac 4 \\pi \\sum_{k=1}^\\infty (-1)^{k+1}\\frac{\\log(2k+1)}{2k+1}."
},
{
"math_id": 49,
"text": "\\gamma = \\sum_{n=1}^\\infty \\frac{|G_n|}{n} = \\frac12 + \\frac1{24} + \\frac1{72} + \\frac{19}{2880} + \\frac3{800} + \\cdots,"
},
{
"math_id": 50,
"text": "\\begin{align}\n \\gamma &= H_{k-1} - \\log k + \\sum_{n=1}^{\\infty}\\frac{(n-1)!|G_n|}{k(k+1) \\cdots (k+n-1)} && \\\\\n &= H_{k-1} - \\log k + \\frac1{2k} + \\frac1{12k(k+1)} + \\frac1{12k(k+1)(k+2)} + \\frac{19}{120k(k+1)(k+2)(k+3)} + \\cdots &&\n\\end{align}"
},
{
"math_id": 51,
"text": "\\gamma = 1 - \\sum_{n=1}^\\infty \\frac{C_{n}}{n \\, (n+1)!} =1- \\frac{1}{4} -\\frac{5}{72} - \\frac{1}{32} - \\frac{251}{14400} - \\frac{19}{1728} - \\ldots"
},
{
"math_id": 52,
"text": "\\gamma=\\sum_{n=1}^\\infty\\frac{(-1)^{n+1}}{2n}\\Big\\{\\psi_{n}(a)+ \\psi_{n}\\Big(-\\frac{a}{1+a}\\Big)\\Big\\},\n\\quad a>-1"
},
{
"math_id": 53,
"text": "\n\\frac{z(1+z)^s}{\\log(1+z)}= \\sum_{n=0}^\\infty z^n \\psi_n(s) ,\\qquad |z|<1.\n"
},
{
"math_id": 54,
"text": "\\gamma=\\frac{3}{4} - \\frac{11}{96} - \\frac{1}{72} - \\frac{311}{46080} - \\frac{5}{1152} - \\frac{7291}{2322432} - \\frac{243}{100352} - \\ldots"
},
{
"math_id": 55,
"text": "\\gamma= -\\log(a+1) - \\sum_{n=1}^\\infty\\frac{(-1)^n \\psi_{n}(a)}{n},\\qquad \\Re(a)>-1 "
},
{
"math_id": 56,
"text": "\\gamma= -\\frac{2}{1+2a} \\left\\{\\log\\Gamma(a+1) -\\frac{1}{2}\\log(2\\pi) + \\frac{1}{2} + \\sum_{n=1}^\\infty\\frac{(-1)^n \\psi_{n+1}(a)}{n}\\right\\},\\qquad \\Re(a)>-1 "
},
{
"math_id": 57,
"text": "\\gamma= \\log(2\\pi) - 2 - 2 \\sum_{n=1}^\\infty\\frac{(-1)^n G_{n}(2)}{n}=\n\\log(2\\pi) - 2 + \\frac{2}{3} + \\frac{1}{24}+ \\frac{7}{540} + \\frac{17}{2880}+ \\frac{41}{12600} + \\ldots "
},
{
"math_id": 58,
"text": "\\gamma = \\lim_{n\\to\\infty}\\left(\\log n - \\sum_{p\\le n}\\frac{\\log p}{p-1}\\right)."
},
{
"math_id": 59,
"text": "\\gamma \\sim H_n - \\log n - \\frac1{2n} + \\frac1{12n^2} - \\frac1{120n^4} + \\cdots"
},
{
"math_id": 60,
"text": "\\gamma \\sim H_n - \\log\\left({n + \\frac1{2} + \\frac1{24n} - \\frac1{48n^2} + \\cdots}\\right)"
},
{
"math_id": 61,
"text": "\\gamma \\sim H_n - \\frac{\\log n + \\log(n+1)}{2} - \\frac1{6n(n+1)} + \\frac1{30n^2(n+1)^2} - \\cdots"
},
{
"math_id": 62,
"text": "\\begin{align}\n\\sum_{n=1}^\\infty \\log n +\\gamma - H_n + \\frac{1}{2n} &= \\frac{\\log (2\\pi)-1-\\gamma}{2} \\\\\n\\sum_{n=1}^\\infty \\log \\sqrt{n(n+1)} +\\gamma - H_n &= \\frac{\\log (2\\pi)-1}{2}-\\gamma \\\\\n\\sum_{n=1}^\\infty (-1)^n\\Big(\\log n +\\gamma - H_n\\Big) &= \\frac{\\log \\pi-\\gamma}{2}\n\\end{align}"
},
{
"math_id": 63,
"text": "e^\\gamma = \\lim_{n\\to\\infty}\\frac1{\\log p_n} \\prod_{i=1}^n \\frac{p_i}{p_i-1}."
},
{
"math_id": 64,
"text": "\\begin{align}\n\\frac{e^{1+\\frac{\\gamma}{2}}}{\\sqrt{2\\pi}} &= \\prod_{n=1}^\\infty e^{-1+\\frac1{2n}}\\left(1+\\frac1{n}\\right)^n \\\\\n\\frac{e^{3+2\\gamma}}{2\\pi} &= \\prod_{n=1}^\\infty e^{-2+\\frac2{n}}\\left(1+\\frac2{n}\\right)^n. \\end{align}"
},
{
"math_id": 65,
"text": "e^{\\gamma} = \\sqrt{\\frac2{1}} \\cdot \\sqrt[3]{\\frac{2^2}{1\\cdot 3}} \\cdot \\sqrt[4]{\\frac{2^3\\cdot 4}{1\\cdot 3^3}} \\cdot \\sqrt[5]{\\frac{2^4\\cdot 4^4}{1\\cdot 3^6\\cdot 5}} \\cdots"
},
{
"math_id": 66,
"text": "\\prod_{k=0}^n (k+1)^{(-1)^{k+1}{n \\choose k}}."
},
{
"math_id": 67,
"text": "\\frac{e^\\frac{\\pi}{2}+e^{-\\frac{\\pi}{2}}}{\\pi e^\\gamma}=\\prod_{n=1}^\\infty\\left(e^{-\\frac{1}{n}}\\left(1+\\frac{1}{n}+\\frac{1}{2n^2}\\right)\\right)."
},
{
"math_id": 68,
"text": "\\gamma_\\alpha = \\lim_{n\\to\\infty}\\left(\\sum_{k=1}^n \\frac1{k^\\alpha} - \\int_1^n \\frac1{x^\\alpha}\\,dx\\right),"
},
{
"math_id": 69,
"text": "c_f = \\lim_{n\\to\\infty}\\left(\\sum_{k=1}^n f(k) - \\int_1^n f(x)\\,dx\\right)"
},
{
"math_id": 70,
"text": "f_n(x) = \\frac{(\\log x)^n}{x}"
},
{
"math_id": 71,
"text": "f_a(x) = x^{-a}"
},
{
"math_id": 72,
"text": "\\gamma_{f_a} = \\frac{(a-1)\\zeta(a)-1}{a-1}"
},
{
"math_id": 73,
"text": "\\gamma = \\lim_{a\\to 1}\\left(\\zeta(a) - \\frac1{a-1}\\right)"
},
{
"math_id": 74,
"text": "\\gamma(a,q) = \\lim_{x\\to \\infty}\\left (\\sum_{0<n\\le x \\atop n\\equiv a \\pmod q} \\frac1{n}-\\frac{\\log x}{q}\\right)."
},
{
"math_id": 75,
"text": "\\begin{align}\n&\\gamma(0,q) = \\frac{\\gamma -\\log q}{q}, \\\\\n&\\sum_{a=0}^{q-1} \\gamma(a,q)=\\gamma, \\\\\n&q\\gamma(a,q) = \\gamma-\\sum_{j=1}^{q-1}e^{-\\frac{2\\pi aij}{q}}\\log\\left(1-e^{\\frac{2\\pi ij}{q}}\\right),\n\\end{align}"
},
{
"math_id": 76,
"text": "q\\gamma(a,q) = \\frac{q}{d}\\gamma\\left(\\frac{a}{d},\\frac{q}{d}\\right)-\\log d."
}
] |
https://en.wikipedia.org/wiki?curid=148594
|
1486120
|
Nicholas Mercator
|
German mathematician (c.1620 – 1687)
Nicholas (Nikolaus) Mercator (c. 1620, Holstein – 1687, Versailles), also known by his German name Kauffmann, was a 17th-century mathematician.
He was born in Eutin, Schleswig-Holstein, Germany and educated at Rostock and Leyden after which he lived from 1642 to 1648 in the Netherlands. He lectured at the University of Copenhagen during 1648–1654 and lived in Paris from 1655 to 1657. He was mathematics tutor to Joscelyne Percy, son of the 10th Earl of Northumberland, at Petworth, Sussex (1657). He taught mathematics in London (1658–1682). On 3 May 1661 he observed a transit of Mercury with Christiaan Huygens and Thomas Streete from Long Acre, London. On 14 November 1666 he was elected a Fellow of the Royal Society. He designed a marine chronometer for Charles II.
In 1682 Jean Colbert invited Mercator to assist in the design and construction of the fountains at the Palace of Versailles, so he relocated there, but a falling-out with Colbert followed.
Mathematically, he is best known for his treatise "Logarithmo-technia" on logarithms, published in 1668. In this treatise he described the Mercator series:
formula_0
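A minimal numerical sketch of the partial sums of this series (illustrative only; the function name is not standard):

```python
import math

def mercator_series(x: float, terms: int) -> float:
    """Partial sum of the Mercator series x - x^2/2 + x^3/3 - ... for ln(1 + x)."""
    return sum((-1) ** (k + 1) * x**k / k for k in range(1, terms + 1))

print(mercator_series(0.5, 30), math.log(1.5))   # both ~0.405465; the series converges for -1 < x <= 1
```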
Nicholas Mercator was the first person to use the term natural logarithm.
To the field of music, Mercator contributed the first precise account of 53 equal temperament, which was of theoretical importance, but not widely practised.
He died at Versailles in 1687.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ln(1 + x) = x - \\frac{1}{2}x^2 + \\frac{1}{3}x^3 - \\frac{1}{4}x^4 + \\cdots."
}
] |
https://en.wikipedia.org/wiki?curid=1486120
|
14861779
|
View factor
|
In radiative heat transfer, a view factor, formula_0, is the proportion of the radiation which leaves surface formula_1 that strikes surface formula_2. In a complex 'scene' there can be any number of different objects, which can be divided in turn into even more surfaces and surface segments.
View factors are also sometimes known as configuration factors, form factors, angle factors or shape factors.
Relations.
Summation.
Radiation leaving a surface "within an enclosure" is conserved. Because of this, the sum of all view factors "from" a given surface, formula_3, within the enclosure is unity as defined by the "summation rule"
formula_4
where formula_5 is the number of surfaces in the enclosure. Any enclosure with formula_5 surfaces has a total of formula_6 view factors.
For example, consider a case where two blobs with surfaces "A" and "B" are floating around in a cavity with surface "C". All of the radiation that leaves "A" must either hit "B" or "C", or if "A" is concave, it could hit "A". 100% of the radiation leaving "A" is divided up among "A", "B", and "C".
Confusion often arises when considering the radiation that "arrives" at a "target" surface. In that case, it generally does not make sense to sum view factors, since the view factor from "A" and the view factor from "B" (above) are fractions of two different totals. "C" may see 10% of "A" 's radiation and 50% of "B" 's radiation and 20% of "C" 's radiation, but without knowing how much each surface radiates, it does not even make sense to say that "C" receives 80% of the total radiation.
Reciprocity.
The "reciprocity relation" for view factors allows one to calculate formula_7 if one already knows formula_8 and is given as
formula_9
where formula_10 and formula_11 are the areas of the two surfaces.
Self-viewing.
For a convex surface, no radiation can leave the surface and then hit it later, because radiation travels in straight lines. Hence, for convex surfaces, formula_12
For concave surfaces this does not apply, and so formula_13
Superposition.
The superposition rule (or summation rule) is useful when a certain geometry is not available with given charts or graphs. The superposition rule allows us to express the geometry that is being sought using the sum or difference of geometries that are known.
formula_14
View factors of differential areas.
Taking the limit of a small flat surface gives differential areas; the view factor between two differential areas formula_15 and formula_16 at a distance "s" is given by:
formula_17
where formula_18 and formula_19 are the angles between the respective surface normals and the ray joining the two differential areas.
The view factor from a general surface formula_20 to another general surface formula_21 is given by:
formula_22
Similarly, the view factor formula_23 is defined as the fraction of radiation that leaves formula_21 and is intercepted by formula_20, yielding the equation formula_24
The view factor is related to the etendue.
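As a rough illustration of the double-integral definition above, the following Monte Carlo sketch in Python (the function name, geometry and sample count are illustrative assumptions) estimates the view factor between two directly opposed parallel unit squares; published tables give roughly 0.2 for unit separation:

```python
import math
import random

def view_factor_parallel_squares(h: float, samples: int = 200_000) -> float:
    """Monte Carlo estimate of F_{1->2} for two directly opposed parallel unit
    squares a distance h apart, sampling the double-integral definition."""
    total = 0.0
    for _ in range(samples):
        x1, y1 = random.random(), random.random()   # point on surface 1 (plane z = 0)
        x2, y2 = random.random(), random.random()   # point on surface 2 (plane z = h)
        s_sq = (x2 - x1) ** 2 + (y2 - y1) ** 2 + h * h
        # For parallel faces, cos(theta_1) = cos(theta_2) = h / s.
        total += (h * h) / (math.pi * s_sq * s_sq)
    return total / samples        # A_1 = A_2 = 1, so no extra area factors appear

print(view_factor_parallel_squares(1.0))   # should land near 0.2 for unit separation
```

Because the two areas are equal, the reciprocity relation implies the same value is obtained when the roles of the surfaces are exchanged.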
Example solutions.
For complex geometries, the view factor integral equation defined above can be cumbersome to solve. Solutions are often referenced from a table of theoretical geometries. Common solutions are included in the following table:
Nusselt analog.
A geometrical picture that can aid intuition about the view factor was developed by Wilhelm Nusselt, and is called the Nusselt analog. The view factor between a differential element d"A"i and the element "A"j can be obtained projecting the element "A"j onto the surface of a unit hemisphere, and then projecting that in turn onto a unit circle around the point of interest in the plane of "A"i.
The view factor is then equal to the differential area d"A"i times the proportion of the unit circle covered by this projection.
The projection onto the hemisphere, giving the solid angle subtended by "A"j, takes care of the factors cos(θ2) and 1/"r"2; the projection onto the circle and the division by its area then takes care of the local factor cos(θ1) and the normalisation by π.
The Nusselt analog has on occasion been used to actually measure form factors for complicated surfaces, by photographing them through a suitable fish-eye lens. (see also Hemispherical photography). But its main value now is essentially in building intuition.
References.
<templatestyles src="Reflist/styles.css" />
External links.
A large number of 'standard' view factors can be calculated with the use of tables that are commonly provided in heat transfer textbooks.
|
[
{
"math_id": 0,
"text": "F_{A \\rarr B}"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "S_i"
},
{
"math_id": 4,
"text": "\\sum_{j=1}^n {F_{S_i \\rarr S_j}} = 1"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "n^2"
},
{
"math_id": 7,
"text": "F_{i \\rarr j}"
},
{
"math_id": 8,
"text": "F_{j \\rarr i}"
},
{
"math_id": 9,
"text": "A_i F_{i \\rarr j} = A_j F_{j \\rarr i}"
},
{
"math_id": 10,
"text": "A_i"
},
{
"math_id": 11,
"text": "A_j"
},
{
"math_id": 12,
"text": "F_{i \\rarr i} = 0."
},
{
"math_id": 13,
"text": "F_{i \\rarr i} > 0."
},
{
"math_id": 14,
"text": "F_{1 \\rarr (2,3)}=F_{1 \\rarr 2}+F_{1\\rarr 3}."
},
{
"math_id": 15,
"text": "\\hbox{d}A_1"
},
{
"math_id": 16,
"text": "\\hbox{d}A_2"
},
{
"math_id": 17,
"text": "\ndF_{1 \\rarr 2} = \\frac{\\cos\\theta_1 \\cos\\theta_2}{\\pi s^2}\\hbox{d}A_2\n"
},
{
"math_id": 18,
"text": "\\theta_1"
},
{
"math_id": 19,
"text": "\\theta_2"
},
{
"math_id": 20,
"text": "A_1"
},
{
"math_id": 21,
"text": "A_2"
},
{
"math_id": 22,
"text": "\nF_{1 \\rarr 2} = \\frac{1}{A_1} \\int_{A_1} \\int_{A_2} \\frac{\\cos\\theta_1 \\cos\\theta_2}{\\pi s^2}\\, \\hbox{d}A_2\\, \\hbox{d}A_1.\n"
},
{
"math_id": 23,
"text": "F_{2\\rightarrow 1}"
},
{
"math_id": 24,
"text": "\nF_{2 \\rarr 1} = \\frac{1}{A_2} \\int_{A_1} \\int_{A_2} \\frac{\\cos\\theta_1 \\cos\\theta_2}{\\pi s^2}\\, \\hbox{d}A_2\\, \\hbox{d}A_1.\n"
}
] |
https://en.wikipedia.org/wiki?curid=14861779
|
14862766
|
Otto Stolz
|
Austrian mathematician (1842–1905)
Otto Stolz (3 July 1842 – 23 November 1905) was an Austrian mathematician noted for his work on mathematical analysis and infinitesimals. Born in Hall in Tirol, he studied at the University of Innsbruck from 1860 and the University of Vienna from 1863, receiving his habilitation there in 1867. Two years later he studied in Berlin under Karl Weierstrass, Ernst Kummer and Leopold Kronecker, and in 1871 heard lectures in Göttingen by Alfred Clebsch and Felix Klein (with whom he would later correspond), before returning to Innsbruck permanently as a professor of mathematics.
His work began with geometry (on which he wrote his thesis) but after the influence of Weierstrass it shifted to real analysis, and many small useful theorems are credited to him. For example, he proved that a continuous function "f" on a closed interval ["a", "b"] with midpoint convexity, i.e., formula_0, has left and right derivatives at each point in ("a", "b").
He died in 1905 shortly after finishing work on "Einleitung in die Funktionentheorie". His name lives on in the Stolz–Cesàro theorem.
Work on non-Archimedean systems.
Stolz published a number of papers containing constructions of non-Archimedean extensions of the real numbers, as detailed by Ehrlich (2006). His work, as well as that of Paul du Bois-Reymond, was sharply criticized by Georg Cantor as an "abomination". Cantor published a "proof-sketch" of the inconsistency of infinitesimals. The errors in Cantor's proof are analyzed by Ehrlich (2006).
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f\\left(\\frac{x + y}2\\right) \\leq \\frac{f(x)+f(y)}{2}"
}
] |
https://en.wikipedia.org/wiki?curid=14862766
|
148633
|
Value at risk
|
Estimated potential loss for an investment under a given set of conditions
Value at risk (VaR) is a measure of the risk of loss of investment or capital. It estimates how much a set of investments might lose (with a given probability), given normal market conditions, in a set time period such as a day. VaR is typically used by firms and regulators in the financial industry to gauge the amount of assets needed to cover possible losses.
For a given portfolio, time horizon, and probability "p", the "p" VaR can be defined informally as the maximum possible loss during that time after excluding all worse outcomes whose combined probability is at most "p". This assumes mark-to-market pricing, and no trading in the portfolio.
For example, if a portfolio of stocks has a one-day 5% VaR of $1 million, that means that there is a 0.05 probability that the portfolio will fall in value by more than $1 million over a one-day period if there is no trading. Informally, a loss of $1 million or more on this portfolio is expected on 1 day out of 20 days (because of 5% probability).
More formally, "p" VaR is defined such that the probability of a loss greater than VaR is (at most) "(1-p)" while the probability of a loss less than VaR is (at least) "p". A loss which exceeds the VaR threshold is termed a "VaR breach".
For a fixed "p", the "p" VaR does not assess the magnitude of loss when a VaR breach occurs and therefore is considered by some to be a questionable metric for risk management. For instance, assume someone makes a bet that flipping a coin seven times will not give seven heads. The terms are that they win $100 if this does not happen (with probability 127/128) and lose $12,700 if it does (with probability 1/128). That is, the possible loss amounts are $0 or $12,700. The 1% VaR is then $0, because the probability of any loss at all is 1/128 which is less than 1%. They are, however, exposed to a possible loss of $12,700 which can be expressed as the "p" VaR for any "p ≤ 0.78125% (1/128)".
VaR has four main uses in finance: risk management, financial control, financial reporting and computing regulatory capital. VaR is sometimes used in non-financial applications as well. However, it is a controversial risk management tool.
Important related ideas are economic capital, backtesting, stress testing, expected shortfall, and tail conditional expectation.
Details.
Common parameters for VaR are 1% and 5% probabilities and one day and two week horizons, although other combinations are in use.
The reason for assuming normal markets and no trading, and to restricting loss to things measured in daily accounts, is to make the loss observable. In some extreme financial events it can be impossible to determine losses, either because market prices are unavailable or because the loss-bearing institution breaks up. Some longer-term consequences of disasters, such as lawsuits, loss of market confidence and employee morale and impairment of brand names can take a long time to play out, and may be hard to allocate among specific prior decisions. VaR marks the boundary between normal days and extreme events. Institutions can lose far more than the VaR amount; all that can be said is that they will not do so very often.
The probability level is about equally often specified as one minus the probability of a VaR break, so that the VaR in the example above would be called a one-day 95% VaR instead of one-day 5% VaR. This generally does not lead to confusion because the probability of VaR breaks is almost always small, certainly less than 50%.
Although it virtually always represents a loss, VaR is conventionally reported as a positive number. A negative VaR would imply the portfolio has a high probability of making a profit, for example a one-day 5% VaR of negative $1 million implies the portfolio has a 95% chance of making more than $1 million over the next day.
Another inconsistency is that VaR is sometimes taken to refer to profit-and-loss at the end of the period, and sometimes as the maximum loss at any point during the period. The original definition was the latter, but in the early 1990s when VaR was aggregated across trading desks and time zones, end-of-day valuation was the only reliable number so the former became the "de facto" definition. As people began using multiday VaRs in the second half of the 1990s, they almost always estimated the distribution at the end of the period only. It is also easier theoretically to deal with a point-in-time estimate versus a maximum over an interval. Therefore, the end-of-period definition is the most common both in theory and practice today.
Varieties.
The definition of VaR is nonconstructive; it specifies a property VaR must have, but not how to compute VaR. Moreover, there is wide scope for interpretation in the definition. This has led to two broad types of VaR, one used primarily in risk management and the other primarily for risk measurement. The distinction is not sharp, however, and hybrid versions are typically used in financial control, financial reporting and computing regulatory capital.
To a risk manager, VaR is a system, not a number. The system is run periodically (usually daily) and the published number is compared to the computed price movement in opening positions over the time horizon. There is never any subsequent adjustment to the published VaR, and there is no distinction between VaR breaks caused by input errors (including IT breakdowns, fraud and rogue trading), computation errors (including failure to produce a VaR on time) and market movements.
A frequentist claim is made that the long-term frequency of VaR breaks will equal the specified probability, within the limits of sampling error, and that the VaR breaks will be independent in time and independent of the level of VaR. This claim is validated by a backtest, a comparison of published VaRs to actual price movements. In this interpretation, many different systems could produce VaRs with equally good backtests, but wide disagreements on daily VaR values.
For risk measurement a number is needed, not a system. A Bayesian probability claim is made that given the information and beliefs at the time, the subjective probability of a VaR break was the specified level. VaR is adjusted after the fact to correct errors in inputs and computation, but not to incorporate information unavailable at the time of computation. In this context, "backtest" has a different meaning. Rather than comparing published VaRs to actual market movements over the period of time the system has been in operation, VaR is retroactively computed on scrubbed data over as long a period as data are available and deemed relevant. The same position data and pricing models are used for computing the VaR as determining the price movements.
Although some of the sources listed here treat only one kind of VaR as legitimate, most of the recent ones seem to agree that risk management VaR is superior for making short-term and tactical decisions in the present, while risk measurement VaR should be used for understanding the past, and making medium term and strategic decisions for the future. When VaR is used for financial control or financial reporting it should incorporate elements of both. For example, if a trading desk is held to a VaR limit, that is both a risk-management rule for deciding what risks to allow today, and an input into the risk measurement computation of the desk's risk-adjusted return at the end of the reporting period.
In governance.
VaR can also be applied to governance of endowments, trusts, and pension plans. Essentially, trustees adopt portfolio Values-at-Risk metrics for the entire pooled account and the diversified parts individually managed. Instead of probability estimates they simply define maximum levels of acceptable loss for each. Doing so provides an easy metric for oversight and adds accountability as managers are then directed to manage, but with the additional constraint to avoid losses within a defined risk parameter. VaR utilized in this manner adds relevance as well as an easy way to monitor risk measurement control far more intuitive than Standard Deviation of Return. Use of VaR in this context, as well as a worthwhile critique on board governance practices as it relates to investment management oversight in general can be found in "Best Practices in Governance."
Mathematical definition.
Let formula_0 be a profit and loss distribution (loss negative and profit positive). The VaR at level formula_1 is the smallest number formula_2 such that the probability that formula_3 does not exceed formula_2 is at least formula_4. Mathematically, formula_5 is the formula_6-quantile of formula_7, i.e.,
formula_8
This is the most general definition of VaR and the two identities are equivalent (indeed, for any real random variable formula_0 its cumulative distribution function formula_9 is well defined).
However this formula cannot be used directly for calculations unless we assume that formula_0 has some parametric distribution.
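As an illustrative sketch only (the function and the synthetic data below are assumptions, not part of any standard), the quantile definition can be applied directly to an empirical profit-and-loss sample, which is essentially what historical simulation does:

```python
import numpy as np

def historical_var(pnl: np.ndarray, p: float = 0.05) -> float:
    """p-VaR as the empirical p-quantile of a profit-and-loss sample
    (loss negative, profit positive), reported as a positive loss figure."""
    return -np.quantile(pnl, p)

# Synthetic example: 1000 hypothetical daily P&L observations.
rng = np.random.default_rng(seed=0)
pnl = rng.normal(loc=0.0, scale=1_000_000.0, size=1000)
print(historical_var(pnl, 0.05))   # on the order of 1.6 million for this synthetic sample
```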
Risk managers typically assume that some fraction of the bad events will have undefined losses, either because markets are closed or illiquid, or because the entity bearing the loss breaks apart or loses the ability to compute accounts. Therefore, they do not accept results based on the assumption of a well-defined probability distribution. Nassim Taleb has labeled this assumption, "charlatanism". On the other hand, many academics prefer to assume a well-defined distribution, albeit usually one with fat tails. This point has probably caused more contention among VaR theorists than any other.
Value at risk can also be written as a distortion risk measure given by the distortion function formula_10
Risk measure and risk metric.
The term "VaR" is used both for a risk measure and a risk metric. This sometimes leads to confusion. Sources earlier than 1995 usually emphasize the risk measure, later sources are more likely to emphasize the metric.
The VaR risk measure defines risk as mark-to-market loss on a fixed portfolio over a fixed time horizon. There are many alternative risk measures in finance. Given the inability to use mark-to-market (which uses market prices to define loss) for future performance, loss is often defined (as a substitute) as change in fundamental value. For example, if an institution holds a loan that declines in market price because interest rates go up, but has no change in cash flows or credit quality, some systems do not recognize a loss. Also some try to incorporate the economic cost of harm not measured in daily financial statements, such as loss of market confidence or employee morale, impairment of brand names or lawsuits.
Rather than assuming a static portfolio over a fixed time horizon, some risk measures incorporate the dynamic effect of expected trading (such as a stop loss order) and consider the expected holding period of positions.
The VaR risk metric summarizes the distribution of possible losses by a quantile, a point with a specified probability of greater losses. A common alternative metric is expected shortfall.
VaR risk management.
Supporters of VaR-based risk management claim the first and possibly greatest benefit of VaR is the improvement in systems and modeling it forces on an institution. In 1997, Philippe Jorion wrote:[T]he greatest benefit of VAR lies in the imposition of a structured methodology for critically thinking about risk. Institutions that go through the process of computing their VAR are forced to confront their exposure to financial risks and to set up a proper risk management function. Thus the process of getting to VAR may be as important as the number itself.
Publishing a daily number, on-time and with specified statistical properties holds every part of a trading organization to a high objective standard. Robust backup systems and default assumptions must be implemented. Positions that are reported, modeled or priced incorrectly stand out, as do data feeds that are inaccurate or late and systems that are too-frequently down. Anything that affects profit and loss that is left out of other reports will show up either in inflated VaR or excessive VaR breaks. "A risk-taking institution that "does not" compute VaR might escape disaster, but an institution that "cannot" compute VaR will not."
The second claimed benefit of VaR is that it separates risk into two regimes. Inside the VaR limit, conventional statistical methods are reliable. Relatively short-term and specific data can be used for analysis. Probability estimates are meaningful because there are enough data to test them. In a sense, there is no true risk because these are a sum of many independent observations with a left bound on the outcome. For example, a casino does not worry about whether red or black will come up on the next roulette spin. Risk managers encourage productive risk-taking in this regime, because there is little true cost. People tend to worry too much about these risks because they happen frequently, and not enough about what might happen on the worst days.
Outside the VaR limit, all bets are off. Risk should be analyzed with stress testing based on long-term and broad market data. Probability statements are no longer meaningful. Knowing the distribution of losses beyond the VaR point is both impossible and useless. The risk manager should concentrate instead on making sure good plans are in place to limit the loss if possible, and to survive the loss if not.
One specific system uses three regimes.
Another reason VaR is useful as a metric is due to its ability to compress the riskiness of a portfolio to a single number, making it comparable across different portfolios (of different assets). Within any portfolio it is also possible to isolate specific positions that might better hedge the portfolio to reduce, and minimise, the VaR.
Computation methods.
VaR can be estimated either parametrically (for example, variance-covariance VaR or delta-gamma VaR) or nonparametrically (for examples, historical simulation VaR or resampled VaR). Nonparametric methods of VaR estimation are discussed in Markovich and Novak. A comparison of a number of strategies for VaR prediction is given in Kuester et al.
A McKinsey report published in May 2012 estimated that 85% of large banks were using historical simulation. The other 15% used Monte Carlo methods (often applying a PCA decomposition).
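For the parametric (variance-covariance) approach mentioned above, a minimal sketch under a normal-P&L assumption (the figures and function name are illustrative only):

```python
from statistics import NormalDist

def parametric_var(mean: float, stdev: float, p: float = 0.05) -> float:
    """Variance-covariance (delta-normal) VaR: the one-day P&L is assumed
    Normal(mean, stdev); the p-quantile loss is returned as a positive number."""
    z = NormalDist().inv_cdf(p)          # about -1.645 for p = 0.05
    return -(mean + z * stdev)

print(parametric_var(0.0, 1_000_000.0, 0.05))   # about 1.64 million
```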
Backtesting.
Backtesting is the process to determine the accuracy of VaR forecasts vs. actual portfolio profit and losses.
A key advantage to VaR over most other measures of risk such as expected shortfall is the availability of several backtesting procedures for validating a set of VaR forecasts. Early examples of backtests can be found in Christoffersen (1998), later generalized by Pajhede (2017), which models a "hit-sequence" of losses greater than the VaR and proceed to tests for these "hits" to be independent from one another and with a correct probability of occurring. E.g. a 5% probability of a loss greater than VaR should be observed over time when using a 95% VaR, these hits should occur independently.
A number of other backtests are available which model the time between hits in the hit-sequence, see Christoffersen and Pelletier (2004), Haas (2006), Tokpavi et al. (2014). and Pajhede (2017) As pointed out in several of the papers, the asymptotic distribution is often poor when considering high levels of coverage, e.g. a 99% VaR, therefore the parametric bootstrap method of Dufour (2006) is often used to obtain correct size properties for the tests. Backtest toolboxes are available in Matlab, or R—though only the first implements the parametric bootstrap method.
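A minimal sketch of an unconditional-coverage check in the spirit of these backtests (this is the Kupiec-style proportion-of-failures statistic; the independence components of the tests cited above are not shown, and the example hit counts are hypothetical):

```python
import math

def kupiec_pof_statistic(hits, p):
    """Unconditional-coverage (proportion-of-failures) likelihood-ratio statistic
    for a VaR hit sequence; asymptotically chi-squared with one degree of freedom,
    so values above about 3.84 reject correct coverage at the 5% level."""
    n, x = len(hits), sum(hits)
    if x in (0, n):                       # degenerate cases need separate handling
        return float("nan")
    pi_hat = x / n
    ll_null = (n - x) * math.log(1 - p) + x * math.log(p)
    ll_alt = (n - x) * math.log(1 - pi_hat) + x * math.log(pi_hat)
    return -2.0 * (ll_null - ll_alt)

# Example: 9 breaches of a 99% (p = 0.01) VaR over 250 trading days.
hits = [True] * 9 + [False] * 241
print(kupiec_pof_statistic(hits, 0.01))   # ~10.2, so correct coverage would be rejected
```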
The second pillar of Basel II includes a backtesting step to validate the VaR figures.
History.
The problem of risk measurement is an old one in statistics, economics and finance. Financial risk management has been a concern of regulators and financial executives for a long time as well. Retrospective analysis has found some VaR-like concepts in this history. But VaR did not emerge as a distinct concept until the late 1980s. The triggering event was the stock market crash of 1987. This was the first major financial crisis in which a lot of academically-trained quants were in high enough positions to worry about firm-wide survival.
The crash was so unlikely given standard statistical models, that it called the entire basis of quant finance into question. A reconsideration of history led some quants to decide there were recurring crises, about one or two per decade, that overwhelmed the statistical assumptions embedded in models used for trading, investment management and derivative pricing. These affected many markets at once, including ones that were usually not correlated, and seldom had discernible economic cause or warning (although after-the-fact explanations were plentiful). Much later, they were named "Black Swans" by Nassim Taleb and the concept extended far beyond finance.
If these events were included in quantitative analysis they dominated results and led to strategies that did not work day to day. If these events were excluded, the profits made in between "Black Swans" could be much smaller than the losses suffered in the crisis. Institutions could fail as a result.
VaR was developed as a systematic way to segregate extreme events, which are studied qualitatively over long-term history and broad market events, from everyday price movements, which are studied quantitatively using short-term data in specific markets. It was hoped that "Black Swans" would be preceded by increases in estimated VaR or increased frequency of VaR breaks, in at least some markets. The extent to which this has proven to be true is controversial.
Abnormal markets and trading were excluded from the VaR estimate in order to make it observable. It is not always possible to define loss if, for example, markets are closed as after 9/11, or severely illiquid, as happened several times in 2008. Losses can also be hard to define if the risk-bearing institution fails or breaks up. A measure that depends on traders taking certain actions, and avoiding other actions, can lead to self reference.
This is risk management VaR. It was well established in quantitative trading groups at several financial institutions, notably Bankers Trust, before 1990, although neither the name nor the definition had been standardized. There was no effort to aggregate VaRs across trading desks.
The financial events of the early 1990s found many firms in trouble because the same underlying bet had been made at many places in the firm, in non-obvious ways. Since many trading desks already computed risk management VaR, and it was the only common risk measure that could be both defined for all businesses and aggregated without strong assumptions, it was the natural choice for reporting firmwide risk. J. P. Morgan CEO Dennis Weatherstone famously called for a "4:15 report" that combined all firm risk on one page, available within 15 minutes of the market close.
Risk measurement VaR was developed for this purpose. Development was most extensive at J. P. Morgan, which published the methodology and gave free access to estimates of the necessary underlying parameters in 1994. This was the first time VaR had been exposed beyond a relatively small group of quants. Two years later, the methodology was spun off into an independent for-profit business now part of RiskMetrics Group (now part of MSCI).
In 1997, the U.S. Securities and Exchange Commission ruled that public corporations must disclose quantitative information about their derivatives activity. Major banks and dealers chose to implement the rule by including VaR information in the notes to their financial statements.
Worldwide adoption of the Basel II Accord, beginning in 1999 and nearing completion today, gave further impetus to the use of VaR. VaR is the preferred measure of market risk, and concepts similar to VaR are used in other parts of the accord.
Criticism.
VaR has been controversial since it moved from trading desks into the public eye in 1994. A famous 1997 debate between Nassim Taleb and Philippe Jorion set out some of the major points of contention. Taleb claimed VaR:
In 2008 David Einhorn and Aaron Brown debated VaR in Global Association of Risk Professionals Review. Einhorn compared VaR to "an airbag that works all the time, except when you have a car accident". He further charged that VaR:
New York Times reporter Joe Nocera wrote an extensive piece Risk Mismanagement on January 4, 2009, discussing the role VaR played in the Financial crisis of 2007–2008. After interviewing risk managers (including several of the ones cited above) the article suggests that VaR was very useful to risk experts, but nevertheless exacerbated the crisis by giving false security to bank executives and regulators. A powerful tool for professional risk managers, VaR is portrayed as both easy to misunderstand, and dangerous when misunderstood.
Taleb in 2009 testified in Congress asking for the banning of VaR for a number of reasons. One was that tail risks are non-measurable. Another was that for anchoring reasons VaR leads to higher risk taking.
VaR is not subadditive: VaR of a combined portfolio can be larger than the sum of the VaRs of its components.
For example, the average bank branch in the United States is robbed about once every ten years. A single-branch bank has about 0.0004% chance of being robbed on a specific day, so the risk of robbery would not figure into one-day 1% VaR. It would not even be within an order of magnitude of that, so it is in the range where the institution should not worry about it; it should insure against it and take advice from insurers on precautions. The whole point of insurance is to aggregate risks that are beyond individual VaR limits, and bring them into a large enough portfolio to get statistical predictability. It does not pay for a one-branch bank to have a security expert on staff.
As institutions get more branches, the risk of a robbery on a specific day rises to within an order of magnitude of VaR. At that point it makes sense for the institution to run internal stress tests and analyze the risk itself. It will spend less on insurance and more on in-house expertise. For a very large banking institution, robberies are a routine daily occurrence. Losses are part of the daily VaR calculation, and tracked statistically rather than case-by-case. A sizable in-house security department is in charge of prevention and control, the general risk manager just tracks the loss like any other cost of doing business.
As portfolios or institutions get larger, specific risks change from low-probability/low-predictability/high-impact to statistically predictable losses of low individual impact. That means they move from the range of far outside VaR, to be insured, to near outside VaR, to be analyzed case-by-case, to inside VaR, to be treated statistically.
VaR is a static measure of risk. By definition, VaR is a particular characteristic of the probability distribution of the underlying (namely, VaR is essentially a quantile). For a dynamic measure of risk, see Novak, ch. 10.
There are common abuses of VaR:
VaR, CVaR, RVaR and EVaR.
The VaR is not a coherent risk measure since it violates the sub-additivity property, which is
formula_11
However, it can be bounded by coherent risk measures like Conditional Value-at-Risk (CVaR) or entropic value at risk (EVaR).
CVaR is defined as the average of VaR values for confidence levels between 0 and α.
However VaR, unlike CVaR, has the property of being a robust statistic.
A related class of risk measures is the 'Range Value at Risk' (RVaR), which is a robust version of CVaR.
For formula_12 (with formula_13 the set of all Borel measurable functions whose moment-generating function exists for all positive real values) we have
formula_14
where
formula_15
in which formula_16 is the moment-generating function of X at z. In the above equations the variable X denotes the financial loss, rather than wealth as is typically the case.
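As a numerical illustration of the chain of inequalities in formula_14, the following Python sketch evaluates VaR, CVaR and EVaR for a normally distributed loss directly from the definitions in formula_15, using the closed-form moment-generating function of the normal distribution; the grid-based minimization and the chosen parameters are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import norm

mu, sigma, alpha = 0.0, 1.0, 0.05        # loss X ~ N(mu, sigma^2), tail level alpha

# VaR_{1-alpha}: the (1 - alpha) quantile of the loss distribution
var_ = norm.ppf(1 - alpha, mu, sigma)

# CVaR_{1-alpha}: average of VaR_{1-gamma} over gamma in (0, alpha)
gammas = np.linspace(1e-6, alpha, 100_000)
cvar = norm.ppf(1 - gammas, mu, sigma).mean()

# EVaR_{1-alpha}: inf over z > 0 of z^{-1} * ln(M_X(z) / alpha), with
# M_X(z) = exp(mu*z + 0.5*sigma^2*z^2) for a normal loss
z = np.linspace(1e-3, 10, 100_000)
evar = np.min((mu * z + 0.5 * sigma**2 * z**2 - np.log(alpha)) / z)

print(f"VaR = {var_:.3f}, CVaR = {cvar:.3f}, EVaR = {evar:.3f}")
# approximately 1.645 <= 2.06 <= 2.45, consistent with VaR <= CVaR <= EVaR
```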
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\alpha\\in(0,1)"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "Y:=-X"
},
{
"math_id": 4,
"text": "1-\\alpha"
},
{
"math_id": 5,
"text": "\\operatorname{VaR}_{\\alpha}(X)"
},
{
"math_id": 6,
"text": "(1-\\alpha)"
},
{
"math_id": 7,
"text": "Y"
},
{
"math_id": 8,
"text": "\\operatorname{VaR}_\\alpha(X)=-\\inf\\big\\{x\\in\\mathbb{R}:F_X(x)>\\alpha\\big\\} = F^{-1}_Y(1-\\alpha)."
},
{
"math_id": 9,
"text": "F_X"
},
{
"math_id": 10,
"text": "g(x) = \\begin{cases}0 & \\text{if }0 \\leq x < 1-\\alpha\\\\ 1 & \\text{if }1-\\alpha \\leq x \\leq 1\\end{cases}."
},
{
"math_id": 11,
"text": "\\mathrm{If}\\; X,Y \\in \\mathbf{L} ,\\; \\mathrm{then}\\; \\rho(X + Y) \\leq \\rho(X) + \\rho(Y)."
},
{
"math_id": 12,
"text": " X\\in \\mathbf{L}_{M^+} "
},
{
"math_id": 13,
"text": "\\mathbf{L}_{M^+} "
},
{
"math_id": 14,
"text": "\\text{VaR}_{1-\\alpha}(X)\\leq \\text{RVaR}_{\\alpha,\\beta}(X) \\leq \\text{CVaR}_{1-\\alpha}(X)\\leq\\text{EVaR}_{1-\\alpha}(X),"
},
{
"math_id": 15,
"text": "\n\\begin{align}\n&\\text{VaR}_{1-\\alpha}(X):=\\inf_{t\\in\\mathbf{R}}\\{t:\\text{Pr}(X\\leq t)\\geq 1-\\alpha\\},\\\\\n&\\text{CVaR}_{1-\\alpha}(X) := \\frac{1}{\\alpha}\\int_0^{\\alpha} \\text{VaR}_{1-\\gamma}(X)d\\gamma,\\\\\n&\\text{RVaR}_{\\alpha,\\beta}(X) := \\frac{1}{\\beta-\\alpha}\\int_{\\alpha}^{\\beta} \\text{VaR}_{1-\\gamma}(X)d\\gamma,\\\\\n&\\text{EVaR}_{1-\\alpha}(X):=\\inf_{z>0}\\{z^{-1}\\ln(M_X(z)/\\alpha)\\},\n\\end{align}\n"
},
{
"math_id": 16,
"text": " M_X(z) "
}
] |
https://en.wikipedia.org/wiki?curid=148633
|
14864881
|
Continuous-time quantum walk
|
Quantum random walk dictated by a time-varying unitary matrix that relies on the Hamiltonian
A continuous-time quantum walk (CTQW) is a quantum walk on a given (simple) graph that is dictated by a time-varying unitary matrix that relies on the Hamiltonian of the quantum system and the adjacency matrix. The concept of a CTQW is believed to have been first considered for quantum computation by Edward Farhi and Sam Gutmann; since many classical algorithms are based on (classical) random walks, the concept of CTQWs were originally considered to see if there could be quantum analogues of these algorithms with e.g. better time-complexity than their classical counterparts. In recent times, problems such as deciding what graphs admit properties such as perfect state transfer with respect to their CTQWs have been of particular interest.
Definitions.
Suppose that formula_0 is a graph on formula_1 vertices, and that formula_2.
Continuous-time quantum walks.
The continuous-time quantum walk formula_3 on formula_0 at time formula_4 is defined as formula_5, letting formula_6 denote the adjacency matrix of formula_0.
It is also possible to similarly define a continuous-time quantum walk on formula_0 relative to its Laplacian matrix; although, unless stated otherwise, a CTQW on a graph will mean a CTQW relative to its adjacency matrix for the remainder of this article.
Mixing matrices.
The mixing matrix formula_7 of formula_0 at time formula_4 is defined as formula_8.
Mixing matrices are symmetric doubly-stochastic matrices obtained from CTQWs on graphs: formula_9 gives the probability of formula_10 transitioning to formula_11 at time formula_4 for any vertices formula_10 and v on formula_0.
Periodic vertices.
A vertex formula_10 on formula_0 is said to be periodic at time formula_4 if formula_12.
Perfect state transfer.
Distinct vertices formula_10 and formula_11 on formula_0 are said to admit perfect state transfer at time formula_4 if formula_13.
If a pair of vertices on formula_0 admit perfect state transfer at time t, then formula_0 itself is said to admit perfect state transfer (at time t).
A set formula_14 of pairs of distinct vertices on formula_0 is said to admit perfect state transfer (at time formula_4) if each pair of vertices in formula_14 admits perfect state transfer at time formula_4.
A set formula_14 of vertices on formula_0 is said to admit perfect state transfer (at time formula_4) if for all formula_15 there is a formula_16 such that formula_10 and formula_11 admit perfect state transfer at time formula_4.
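These definitions can be illustrated on the smallest nontrivial example, the complete graph formula_22: the Python sketch below computes formula_3 as a matrix exponential, forms the mixing matrix entrywise, and observes perfect state transfer between the two vertices at time π/2. The choice of graph and of the time are assumptions made for the demonstration.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of K_2 (a single edge between vertices 0 and 1)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def mixing_matrix(A, t):
    """M(t) = U(t) o U(-t) with U(t) = exp(i t A); entries are |U(t)_{uv}|^2."""
    U = expm(1j * t * A)
    return np.real(U * U.conj())         # Hadamard (entrywise) product

M = mixing_matrix(A, np.pi / 2)
print(np.round(M, 6))
# [[0. 1.]
#  [1. 0.]]   M(t)_{0,1} = 1: perfect state transfer between the two vertices
```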
Periodic graphs.
A graph formula_0 itself is said to be periodic if there is a time formula_17 such that all of its vertices are periodic at time formula_4.
A graph is periodic if and only if its (non-zero) eigenvalues are all rational multiples of each other.
Moreover, a regular graph is periodic if and only if it is an integral graph.
Perfect state transfer.
Necessary conditions.
If a pair of vertices formula_10 and formula_11 on a graph formula_0 admit perfect state transfer at time formula_4, then both formula_10 and formula_11 are periodic at time formula_18.
Perfect state transfer on products of graphs.
Consider graphs formula_0 and formula_19.
If both formula_0 and formula_19 admit perfect state transfer at time formula_4, then their Cartesian product formula_20 admits perfect state transfer at time formula_4.
If either formula_0 or formula_19 admits perfect state transfer at time formula_4, then their disjoint union formula_21 admits perfect state transfer at time formula_4.
Perfect state transfer on walk-regular graphs.
If a walk-regular graph admits perfect state transfer, then all of its eigenvalues are integers.
If formula_0 is a graph in a homogeneous coherent algebra that admits perfect state transfer at time formula_4, such as e.g. a vertex-transitive graph or a graph in an association scheme, then all of the vertices on formula_0 admit perfect state transfer at time formula_4. Moreover, a graph formula_0 must have a perfect matching that admits perfect state transfer if it admits perfect state transfer between a pair of adjacent vertices and is a graph in a homogeneous coherent algebra.
A regular edge-transitive graph formula_0 cannot admit perfect state transfer between a pair of adjacent vertices, unless it is a disjoint union of copies of the complete graph formula_22.
A strongly regular graph admits perfect state transfer if and only if it is the complement of the disjoint union of an even number of copies of formula_22.
The only cubic distance-regular graph that admits perfect state transfer is the cubical graph.
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "t \\in \\mathbb{R}"
},
{
"math_id": 3,
"text": "U(t) \\in \\operatorname{Mat}_{n \\times n}(\\mathbb{C})"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "U(t) := e^{itA}"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "M(t) \\in \\operatorname{Mat}_{n \\times n}(\\mathbb{R})"
},
{
"math_id": 8,
"text": "M(t) := U(t) \\circ U(-t)"
},
{
"math_id": 9,
"text": "{M(t)}_{u,v}"
},
{
"math_id": 10,
"text": "u"
},
{
"math_id": 11,
"text": "v"
},
{
"math_id": 12,
"text": "{M(t)}_{u,u} = 1"
},
{
"math_id": 13,
"text": "M(t)_{u, v} = 1"
},
{
"math_id": 14,
"text": "S"
},
{
"math_id": 15,
"text": "u \\in S"
},
{
"math_id": 16,
"text": "v \\in S"
},
{
"math_id": 17,
"text": "t \\neq 0"
},
{
"math_id": 18,
"text": "2t"
},
{
"math_id": 19,
"text": "H"
},
{
"math_id": 20,
"text": "G \\, \\square \\, H"
},
{
"math_id": 21,
"text": "G \\sqcup H"
},
{
"math_id": 22,
"text": "K_2"
}
] |
https://en.wikipedia.org/wiki?curid=14864881
|
1486691
|
Multicollinearity
|
Linear dependency situation in a regression model
In statistics, multicollinearity or collinearity is a situation where the predictors in a regression model are linearly dependent.
Perfect multicollinearity refers to a situation where the predictive variables have an "exact" linear relationship. When there is perfect collinearity, the design matrix formula_0 has less than full rank, and therefore the moment matrix formula_1 cannot be inverted. In this situation, the parameter estimates of the regression are not well-defined, as the system of equations has infinitely many solutions.
Imperfect multicollinearity refers to a situation where the predictive variables have a "nearly" exact linear relationship.
Contrary to popular belief, neither the Gauss–Markov theorem nor the more common maximum likelihood justification for ordinary least squares relies on any kind of correlation structure between dependent predictors (although perfect collinearity can cause problems with some software).
There is no justification for the practice of removing collinear variables as part of regression analysis, and doing so may constitute scientific misconduct. Including collinear variables does not reduce the predictive power or reliability of the model as a whole, and does not reduce the accuracy of coefficient estimates.
High collinearity indicates that it is exceptionally important to include all collinear variables, as excluding any will cause worse coefficient estimates, strong confounding, and downward-biased estimates of standard errors.
Perfect multicollinearity.
<templatestyles src="Stack/styles.css"/>
Perfect multicollinearity refers to a situation where the predictors are linearly dependent (one can be written as an exact linear function of the others). Ordinary least squares requires inverting the matrix formula_1, where
formula_2
is an "formula_3" matrix, where "formula_4" is the number of observations, "formula_5" is the number of explanatory variables, and "formula_6". If there is an exact linear relationship among the independent variables, then at least one of the columns of formula_7 is a linear combination of the others, and so the rank of formula_7 (and therefore of formula_1) is less than "formula_8", and the matrix formula_1 will not be invertible.
Resolution.
Perfect collinearity is typically caused by including redundant variables in a regression. For example, a dataset may include variables for income, expenses, and savings. However, because income is equal to expenses plus savings by definition, it is incorrect to include all 3 variables in a regression simultaneously. Similarly, including a dummy variable for every category (e.g., summer, autumn, winter, and spring) as well as an intercept term will result in perfect collinearity. This is known as the dummy variable trap.
The other common cause of perfect collinearity is attempting to use ordinary least squares when working with very wide datasets (those with more variables than observations). These require more advanced data analysis techniques like Bayesian hierarchical modeling to produce meaningful results.
Numerical issues.
Sometimes, the variables formula_9 are nearly collinear. In this case, the matrix formula_1 has an inverse, but it is ill-conditioned. A computer algorithm may or may not be able to compute an approximate inverse; even if it can, the resulting inverse may have large rounding errors.
The standard measure of ill-conditioning in a matrix is the condition index. This determines if the inversion of the matrix is numerically unstable with finite-precision numbers, indicating the potential sensitivity of the computed inverse to small changes in the original matrix. The condition number is computed by finding the maximum singular value divided by the minimum singular value of the design matrix. In the context of collinear variables, the variance inflation factor is the condition number for a particular coefficient.
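A small Python sketch of these diagnostics follows; the simulated data, the singular-value-ratio condition number and the R²-based variance inflation factors are textbook quantities, but the particular data and the numbers printed are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)          # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])    # design matrix with intercept

# Condition number: largest singular value divided by the smallest
s = np.linalg.svd(X, compute_uv=False)
print("condition number:", s.max() / s.min())

# Variance inflation factor VIF_j = 1 / (1 - R^2_j), where R^2_j comes from
# regressing predictor j on the remaining columns of the design matrix
def vif(X, j):
    y = X[:, j]
    Z = np.delete(X, j, axis=1)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

for j, name in [(1, "x1"), (2, "x2")]:
    print(f"VIF({name}) = {vif(X, j):.0f}")   # very large values flag near-collinearity
```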
Solutions.
Numerical problems in estimating can be solved by applying standard techniques from linear algebra to estimate the equations more precisely:
Effects on coefficient estimates.
In addition to causing numerical problems, imperfect collinearity makes precise estimation of variables difficult. In other words, highly correlated variables lead to poor estimates and large standard errors.
As an example, say that we notice Alice wears her boots whenever it is raining and that there are only puddles when it rains. Then, we cannot tell whether she wears boots to keep the rain from landing on her feet, or to keep her feet dry if she steps in a puddle.
The problem with trying to identify how much each of the two variables matters is that they are confounded with each other: our observations are explained equally well by either variable, so we do not know which one of them causes the observed correlations.
There are two ways to discover this information:
This confounding becomes substantially worse when researchers attempt to ignore or suppress it by excluding these variables from the regression (see #Misuse). Excluding multicollinear variables from regressions will invalidate causal inference and produce worse estimates by removing important confounders.
Remedies.
There are many ways to prevent multicollinearity from affecting results by planning ahead of time. However, these methods require researchers to decide on a procedure and analysis "before" data has been collected (see post hoc analysis and #Misuse).
Regularized estimators.
Many regression methods are naturally "robust" to multicollinearity and generally perform better than ordinary least squares regression, even when variables are independent. Regularized regression techniques such as ridge regression, LASSO, elastic net regression, or spike-and-slab regression are less sensitive to including "useless" predictors, a common cause of collinearity. These techniques can detect and remove these predictors automatically to avoid problems. Bayesian hierarchical models (provided by software like BRMS) can perform such regularization automatically, learning informative priors from the data.
Often, problems caused by the use of frequentist estimation are misunderstood or misdiagnosed as being related to multicollinearity. Researchers are often frustrated not by multicollinearity, but by their inability to incorporate relevant prior information in regressions. For example, complaints that coefficients have "wrong signs" or confidence intervals that "include unrealistic values" indicate there is important prior information that is not being incorporated into the model. When this information is available, it should be incorporated into the prior using Bayesian regression techniques.
Stepwise regression (the procedure of excluding "collinear" or "insignificant" variables) is especially vulnerable to multicollinearity, and is one of the few procedures wholly invalidated by it (with any collinearity resulting in heavily biased estimates and invalidated p-values).
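A sketch of how a regularized estimator behaves with collinear predictors is given below; it compares ordinary least squares with a closed-form ridge estimator on simulated data. The penalty value, the simulated data and the decision to leave the intercept unpenalized are illustrative assumptions, not a recommended configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)             # strongly collinear pair
y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])

# Ordinary least squares: (X'X)^{-1} X'y, unstable when X'X is near-singular
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge regression: (X'X + lam*P)^{-1} X'y, with the intercept left unpenalized
lam = 1.0
P = np.eye(X.shape[1])
P[0, 0] = 0.0
beta_ridge = np.linalg.solve(X.T @ X + lam * P, X.T @ y)

print("OLS  :", np.round(beta_ols, 2))    # the two slopes individually are erratic
print("Ridge:", np.round(beta_ridge, 2))  # the slopes are shrunk towards a stable split
```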
Improved experimental design.
When conducting experiments where researchers have control over the predictive variables, researchers can often avoid collinearity by choosing an optimal experimental design in consultation with a statistician.
Acceptance.
While the above strategies work in some situations, they typically do not have a substantial effect. More advanced techniques may still result in large standard errors. Thus the most common response to multicollinearity should be to "do nothing". The scientific process often involves null or inconclusive results; not every experiment will be "successful" in the sense of providing decisive confirmation of the researcher's original hypothesis.
Edward Leamer notes that "The solution to the weak evidence problem is more and better data. Within the confines of the given data set there is nothing that can be done about weak evidence"; researchers who believe there is a problem with the regression results should look at the prior probability, not the likelihood function.
Damodar Gujarati writes that "we should rightly accept [our data] are sometimes not very informative about parameters of interest". Olivier Blanchard quips that "multicollinearity is God's will, not a problem with OLS"; in other words, when working with observational data, researchers cannot "fix" multicollinearity, only accept it.
Misuse.
Variance inflation factors are often misused as criteria in stepwise regression (i.e. for variable inclusion/exclusion), a use that "lacks any logical basis but also is fundamentally misleading as a rule-of-thumb".
Excluding collinear variables leads to artificially small estimates for standard errors, but does not reduce the true (not estimated) standard errors for regression coefficients. Excluding variables with a high variance inflation factor also invalidates the calculated standard errors and p-values, by turning the results of the regression into a post hoc analysis.
Because collinearity leads to large standard errors and p-values, which can make publishing articles more difficult, some researchers will try to suppress inconvenient data by removing strongly-correlated variables from their regression. This procedure falls into the broader categories of p-hacking, data dredging, and post hoc analysis. Dropping (useful) collinear predictors will generally worsen the accuracy of the model and coefficient estimates.
Similarly, trying many different models or estimation procedures (e.g. ordinary least squares, ridge regression, etc.) until finding one that can "deal with" the collinearity creates a forking paths problem. P-values and confidence intervals derived from post hoc analyses are invalidated by ignoring the uncertainty in the model selection procedure.
It is reasonable to exclude unimportant predictors if they are known ahead of time to have little or no effect on the outcome; for example, local cheese production should not be used to predict the height of skyscrapers. However, this must be done when first specifying the model, prior to observing any data, and potentially-informative variables should always be included.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X^{\\mathsf{T}}X"
},
{
"math_id": 2,
"text": " X = \\begin{bmatrix}\n\n 1 & X_{11} & \\cdots & X_{k1} \\\\\n\n \\vdots & \\vdots & & \\vdots \\\\\n\n 1 & X_{1N} & \\cdots & X_{kN}\n\n\\end{bmatrix}"
},
{
"math_id": 3,
"text": " N \\times (k+1) "
},
{
"math_id": 4,
"text": " N "
},
{
"math_id": 5,
"text": " k "
},
{
"math_id": 6,
"text": " N \\ge k+1 "
},
{
"math_id": 7,
"text": " X "
},
{
"math_id": 8,
"text": " k+1 "
},
{
"math_id": 9,
"text": " X_j "
},
{
"math_id": 10,
"text": "x_1"
},
{
"math_id": 11,
"text": "x_1^2"
},
{
"math_id": 12,
"text": "x_1 \\times x_2"
}
] |
https://en.wikipedia.org/wiki?curid=1486691
|
14868
|
International Mathematical Union
|
International non-governmental organisation
The International Mathematical Union (IMU) is an international organization devoted to international cooperation in the field of mathematics across the world. It is a member of the International Science Council (ISC) and supports the International Congress of Mathematicians (ICM). Its members are national mathematics organizations from more than 80 countries.
The objectives of the International Mathematical Union are: promoting international cooperation in mathematics, supporting and assisting the International Congress of Mathematicians and other international scientific meetings/conferences, acknowledging outstanding research contributions to mathematics through the awarding of scientific prizes, and encouraging and supporting other international mathematical activities, considered likely to contribute to the development of mathematical science in any of its aspects, whether pure, applied, or educational.
History.
The IMU was established in 1920, but dissolved in September 1932 and reestablished in 1950 at the Constitutive Convention in New York, de jure on September 10, 1951, when ten countries had become members. The last milestone was the General Assembly in March 1952, in Rome, Italy where the activities of the new IMU were inaugurated and the first Executive Committee, President and various commissions were elected. In 1952 the IMU was also readmitted to the ICSU. The past president of the Union is Carlos Kenig (2019–2022). The current president is Hiraku Nakajima.
At the 16th meeting of the IMU General Assembly in Bangalore, India, in August 2010, Berlin was chosen as the location of the permanent office of the IMU, which was opened on January 1, 2011, and is hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), an institute of the Gottfried Wilhelm Leibniz Scientific Community, with about 120 scientists engaging in mathematical research applied to complex problems in industry and commerce.
Commissions and committees.
IMU has a close relationship to mathematics education through its International Commission on Mathematical Instruction (ICMI). This commission is organized similarly to IMU with its own Executive Committee and General Assembly.
Developing countries are a high priority for the IMU and a significant percentage of its budget, including grants received from individuals, mathematical societies, foundations, and funding agencies, is spent on activities for developing countries. Since 2011 this has been coordinated by the Commission for Developing Countries (CDC).
The Committee for Women in Mathematics (CWM) is concerned with issues related to women in mathematics worldwide. It organizes the World Meeting for Women in Mathematics formula_0 as a satellite event of ICM.
The International Commission on the History of Mathematics (ICHM) is operated jointly by the IMU and the Division of the History of Science (DHS) of the International Union of History and Philosophy of Science (IUHPS).
The Committee on Electronic Information and Communication (CEIC) advises IMU on matters concerning mathematical information, communication, and publishing.
Prizes.
The scientific prizes awarded by the IMU, in the quadrennial International Congress of Mathematicians (ICM), are deemed to be some of the highest distinctions in the mathematical world. These are:
Membership and General Assembly.
The IMU's members are Member Countries and each Member country is represented through an Adhering Organization, which may be its principal academy, a mathematical society, its research council or some other institution or association of institutions, or an appropriate agency of its government. A country starting to develop its mathematical culture and interested in building links with mathematicians all over the world is invited to join IMU as an Associate Member. For the purpose of facilitating jointly sponsored activities and jointly pursuing the objectives of the IMU, multinational mathematical societies and professional societies can join IMU as an Affiliate Member. Every four years, the IMU membership gathers in a General Assembly (GA), which consists of delegates appointed by the Adhering Organizations, together with the members of the executive committee. All important decisions are made at the GA, including the election of the officers, establishment of commissions, the approval of the budget, and any changes to the statutes and by-laws.
Members and Associate Members.
The IMU has 83 (full) Member countries and two Associate Members (Bangladesh and Paraguay, marked below by light grey background).
<templatestyles src="Reflist/styles.css" />
Affiliate members.
The IMU has five affiliate members:
Organization and Executive Committee.
The International Mathematical Union is administered by an executive committee (EC) which conducts the business of the Union. The EC consists of the President, two vice-presidents, the Secretary, six Members-at-Large, all elected for a term of four years, and the Past President. The EC is responsible for all policy matters and for tasks, such as choosing the members of the ICM Program Committee and various prize committees.
Publications.
Every two months IMU publishes an electronic newsletter, "IMU-Net", that aims to improve communication between IMU and the worldwide mathematical community by reporting on decisions and recommendations of the Union, major international mathematical events and developments, and on other topics of general mathematical interest. IMU Bulletins are published annually with the aim to inform IMU's members about the Union's current activities. In 2009 IMU published the document "Best Current Practices for Journals".
IMU’s Involvement in developing countries.
The IMU took its first organized steps towards the promotion of mathematics in developing countries in the early 1970s and has, since then supported various activities. In 2010 IMU formed the Commission for Developing Countries (CDC) which brings together all of the past and current initiatives in support of mathematics and mathematicians in the developing world.
Some IMU Supported Initiatives:
IMU also supports the "International Commission on Mathematical Instruction" (ICMI) with its programmes, exhibits and workshops in emerging countries, especially in Asia and Africa.
IMU released a report in 2008, "Mathematics in Africa: Challenges and Opportunities", on the current state of mathematics in Africa and on opportunities for new initiatives to support mathematical development. In 2014, the IMU's Commission for Developing Countries CDC released an update of the report.
Additionally, reports about mathematics in Latin America and the Caribbean and in South East Asia were published.
In July 2014 IMU released the report: The International Mathematical Union in the Developing World: Past, Present and Future (July 2014).
MENAO Symposium at the ICM.
In 2014, the IMU held a day-long symposium prior to the opening of the International Congress of Mathematicians (ICM), entitled "Mathematics in Emerging Nations: Achievements and Opportunities" (MENAO). Approximately 260 participants from around the world, including representatives of embassies, scientific institutions, private business and foundations attended this session. Attendees heard inspiring stories of individual mathematicians and specific developing nations.
Presidents.
List of presidents of the International Mathematical Union from 1952 to the present:
1952–1954: Marshall Harvey Stone (vice: Émile Borel, Erich Kamke)
1955–1958: Heinz Hopf (vice: Arnaud Denjoy, W. V. D. Hodge)
1959–1962: Rolf Nevanlinna (vice: Pavel Alexandrov, Marston Morse)
1963–1966: Georges de Rham (vice: Henri Cartan, Kazimierz Kuratowski)
1967–1970: Henri Cartan (vice: Mikhail Lavrentyev, Deane Montgomery)
1971–1974: K. S. Chandrasekharan (vice: Abraham Adrian Albert, Lev Pontryagin)
1975–1978: Deane Montgomery (vice: J. W. S. Cassels, Miron Nicolescu, Gheorghe Vrânceanu)
1979–1982: Lennart Carleson (vice: Masayoshi Nagata, Yuri Vasilyevich Prokhorov)
1983–1986: Jürgen Moser (vice: Ludvig Faddeev, Jean-Pierre Serre)
1987–1990: Ludvig Faddeev (vice: Walter Feit, Lars Hörmander)
1991–1994: Jacques-Louis Lions (vice: John H. Coates, David Mumford)
1995–1998: David Mumford (vice: Vladimir Arnold, Albrecht Dold)
1999–2002: Jacob Palis (vice: Simon Donaldson, Shigefumi Mori)
2003–2006: John M. Ball (vice: Jean-Michel Bismut, Masaki Kashiwara)
2007–2010: László Lovász (vice: Zhi-Ming Ma, Claudio Procesi)
2011–2014: Ingrid Daubechies (vice: Christiane Rousseau, Marcelo Viana)
2015–2018: Shigefumi Mori (vice: Alicia Dickenstein, Vaughan Jones)
2019–2022: Carlos Kenig (vice: Nalini Joshi, Loyiso Nongxa)
2023–2026: Hiraku Nakajima (vice: Ulrike Tillmann, Tatiana Toro)
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "((\\mathrm{WM})^2)"
}
] |
https://en.wikipedia.org/wiki?curid=14868
|
1486933
|
Forward rate
|
Future yield on a bond
The forward rate is the future yield on a bond. It is calculated using the yield curve. For example, the yield on a three-month Treasury bill six months from now is a "forward rate".
Forward rate calculation.
To extract the forward rate, we need the zero-coupon yield curve.
We are trying to find the future interest rate formula_0 for time period formula_1, formula_2 and formula_3 expressed in years, given the rate formula_4 for time period formula_5 and rate formula_6 for time period formula_7. To do this, we use the property that the proceeds from investing at rate formula_4 for time period formula_5 and then reinvesting those proceeds at rate formula_0 for time period formula_1 is equal to the proceeds from investing at rate formula_6 for time period formula_7.
formula_0 depends on the rate calculation mode (simple, yearly compounded or continuously compounded), which yields three different results.
Mathematically it reads as follows:
formula_8
Simple rate.
Solving for formula_0 yields:
formula_9
The discount factor for the period (0, "t"), with formula_10 its length expressed in years and formula_11 the rate for this period, being
formula_12,
the forward rate can be expressed in terms of discount factors:
formula_13
Yearly compounded rate.
In this case the relation reads
formula_14
Solving for formula_0 yields:
formula_15
The discount factor for the period (0, "t"), with formula_10 its length expressed in years and formula_11 the rate for this period, being
formula_16,
the forward rate can be expressed in terms of discount factors:
formula_17
Continuously compounded rate.
In this case the relation reads
formula_18
Solving for formula_0 yields:
STEP 1→ formula_19
STEP 2→ formula_20
STEP 3→ formula_21
STEP 4→ formula_22
STEP 5→ formula_23
The discount factor for the period (0, "t"), with formula_10 its length expressed in years and formula_11 the rate for this period, being
formula_24,
the forward rate can be expressed in terms of discount factors:
formula_25
where formula_26 is the forward rate between time formula_27 and time formula_28, and formula_29 is the zero-coupon yield for the time period formula_30 ("k" = 1,2).
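The three conventions can be compared numerically; the Python sketch below computes formula_0 under the simple, yearly compounded and continuously compounded definitions for illustrative zero rates and maturities, and checks the continuous case against its discount-factor form.

```python
import numpy as np

r1, t1 = 0.02, 1.0    # zero-coupon rate for the period (0, t1)
r2, t2 = 0.03, 2.0    # zero-coupon rate for the period (0, t2)

# Simple compounding
f_simple = ((1 + r2 * t2) / (1 + r1 * t1) - 1) / (t2 - t1)

# Yearly compounding
f_annual = ((1 + r2) ** t2 / (1 + r1) ** t1) ** (1 / (t2 - t1)) - 1

# Continuous compounding
f_cont = (r2 * t2 - r1 * t1) / (t2 - t1)

# Check the continuous case against discount factors DF(0, t) = exp(-r * t)
df1, df2 = np.exp(-r1 * t1), np.exp(-r2 * t2)
assert np.isclose(f_cont, -np.log(df2 / df1) / (t2 - t1))

print(f"simple: {f_simple:.4%}, yearly: {f_annual:.4%}, continuous: {f_cont:.4%}")
```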
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "r_{1,2}"
},
{
"math_id": 1,
"text": "(t_1, t_2)"
},
{
"math_id": 2,
"text": "t_1"
},
{
"math_id": 3,
"text": "t_2"
},
{
"math_id": 4,
"text": "r_1"
},
{
"math_id": 5,
"text": "(0, t_1)"
},
{
"math_id": 6,
"text": "r_2"
},
{
"math_id": 7,
"text": "(0, t_2)"
},
{
"math_id": 8,
"text": "(1+r_1t_1)(1+ r_{1,2}(t_2-t_1)) = 1+r_2t_2"
},
{
"math_id": 9,
"text": "r_{1,2} = \\frac{1}{t_2-t_1}\\left(\\frac{1+r_2t_2}{1+r_1t_1}-1\\right)"
},
{
"math_id": 10,
"text": "\\Delta_t"
},
{
"math_id": 11,
"text": "r_t"
},
{
"math_id": 12,
"text": "DF(0, t)=\\frac{1}{(1+r_t \\, \\Delta_t)}"
},
{
"math_id": 13,
"text": "r_{1,2} = \\frac{1}{t_2-t_1}\\left(\\frac{DF(0, t_1)}{DF(0, t_2)}-1\\right)"
},
{
"math_id": 14,
"text": "(1+r_1)^{t_1}(1+r_{1,2})^{t_2-t_1} = (1+r_2)^{t_2}"
},
{
"math_id": 15,
"text": "r_{1,2} = \\left(\\frac{(1+r_2)^{t_2}}{(1+r_1)^{t_1}}\\right)^{1/(t_2-t_1)} - 1"
},
{
"math_id": 16,
"text": "DF(0, t)=\\frac{1}{(1+r_t)^{\\Delta_t}}"
},
{
"math_id": 17,
"text": "r_{1,2}=\\left(\\frac{DF(0, t_1)}{DF(0, t_2)}\\right)^{1/(t_2-t_1)}-1"
},
{
"math_id": 18,
"text": "e^{r_2 \\cdot t_2} = e^{r_1 \\cdot t_1} \\cdot \\ e^{r_{1,2} \\cdot \\left(t_2 - t_1 \\right)}"
},
{
"math_id": 19,
"text": "e^{r_2 \\cdot t_2} = e^{r_1 \\cdot t_1 + r_{1,2} \\cdot \\left(t_2 - t_1 \\right)}"
},
{
"math_id": 20,
"text": "\\ln \\left(e^{r_2 \\cdot t_2} \\right) = \\ln \\left(e^{r_1 \\cdot t_1 + r_{1,2} \\cdot \\left(t_2 - t_1 \\right)}\\right)"
},
{
"math_id": 21,
"text": "r_2 \\cdot t_2 = r_1 \\cdot t_1 + r_{1,2} \\cdot \\left(t_2 - t_1 \\right)"
},
{
"math_id": 22,
"text": "r_{1,2} \\cdot \\left(t_2 - t_1 \\right) = r_2 \\cdot t_2 - r_1 \\cdot t_1"
},
{
"math_id": 23,
"text": "r_{1,2} = \\frac{ r_2 \\cdot t_2 - r_1 \\cdot t_1}{t_2 - t_1}"
},
{
"math_id": 24,
"text": "DF(0, t)=e^{-r_t\\,\\Delta_t}"
},
{
"math_id": 25,
"text": "r_{1,2} = \\frac{\\ln \\left(DF \\left(0, t_1 \\right)\\right) - \\ln \\left(DF \\left(0, t_2 \\right)\\right)}{t_2 - t_1} \n= \\frac{- \\ln \\left( \\frac{ DF \\left(0, t_2 \\right)}{ DF \\left(0, t_1 \\right)} \\right)}{t_2 - t_1} "
},
{
"math_id": 26,
"text": "r_{1,2} "
},
{
"math_id": 27,
"text": " t_1 "
},
{
"math_id": 28,
"text": " t_2 "
},
{
"math_id": 29,
"text": " r_k "
},
{
"math_id": 30,
"text": " (0, t_k) "
}
] |
https://en.wikipedia.org/wiki?curid=1486933
|
148721
|
T1 space
|
Topological space in which all singleton sets are closed
In topology and related branches of mathematics, a T1 space is a topological space in which, for every pair of distinct points, each has a neighborhood not containing the other point. An R0 space is one in which this holds for every pair of topologically distinguishable points. The properties T1 and R0 are examples of separation axioms.
Definitions.
Let "X" be a topological space and let "x" and "y" be points in "X". We say that "x" and "y" are separated if each lies in a neighbourhood that does not contain the other point.
A T1 space is also called an accessible space or a space with Fréchet topology and an R0 space is also called a symmetric space. (The term Fréchet space also has an entirely different meaning in functional analysis. For this reason, the term "T1 space" is preferred. There is also a notion of a Fréchet–Urysohn space as a type of sequential space. The term symmetric space also has another meaning.)
A topological space is a T1 space if and only if it is both an R0 space and a Kolmogorov (or T0) space (i.e., a space in which distinct points are topologically distinguishable). A topological space is an R0 space if and only if its Kolmogorov quotient is a T1 space.
Properties.
If formula_0 is a topological space then the following conditions are equivalent:
If formula_0 is a topological space then the following conditions are equivalent: (where formula_8 denotes the closure of formula_2)
In any topological space we have, as properties of any two points, the following implications
separated formula_17 topologically distinguishable formula_17 distinct
If the first arrow can be reversed the space is R0. If the second arrow can be reversed the space is T0. If the composite arrow can be reversed the space is T1. A space is T1 if and only if it is both R0 and T0.
A finite T1 space is necessarily discrete (since every set is closed).
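The defining condition can be checked mechanically for small finite spaces; the Python sketch below encodes a topology as a collection of open sets and tests the T1 condition, confirming that the discrete topology on a three-point set is T1 while a nested, non-discrete topology is not. The encoding is an illustrative assumption.

```python
from itertools import chain, combinations

def is_topology(X, opens):
    """Closure under pairwise intersections and unions (enough for a finite space)."""
    opens = {frozenset(s) for s in opens}
    if frozenset() not in opens or frozenset(X) not in opens:
        return False
    return all(a & b in opens and a | b in opens for a in opens for b in opens)

def is_T1(X, opens):
    """Each point of a distinct pair has an open neighbourhood missing the other."""
    opens = [frozenset(s) for s in opens]
    return all(any(x in U and y not in U for U in opens)
               for x in X for y in X if x != y)

X = {0, 1, 2}
discrete = [set(s) for s in chain.from_iterable(combinations(X, r) for r in range(4))]
nested = [set(), {0}, {0, 1}, {0, 1, 2}]

print(is_topology(X, discrete), is_T1(X, discrete))   # True True
print(is_topology(X, nested), is_T1(X, nested))       # True False
```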
A space that is locally T1, in the sense that each point has a T1 neighbourhood (when given the subspace topology), is also T1. Similarly, a space that is locally R0 is also R0. In contrast, the corresponding statement does not hold for T2 spaces. For example, the line with two origins is not a Hausdorff space but is locally Hausdorff.
* the open set formula_20 contains formula_11 but not formula_21 and the open set formula_22 contains formula_4 and not formula_11;
* equivalently, every singleton set formula_23 is the complement of the open set formula_24 so it is a closed set;
so the resulting space is T1 by each of the definitions above. This space is not T2, because the intersection of any two open sets formula_18 and formula_25 is formula_26 which is never empty. Alternatively, the set of even integers is compact but not closed, which would be impossible in a Hausdorff space.
formula_31
The resulting space is not T0 (and hence not T1), because the points formula_4 and formula_32 (for formula_4 even) are topologically indistinguishable; but otherwise it is essentially equivalent to the previous example.
Generalisations to other kinds of spaces.
The terms "T1", "R0", and their synonyms can also be applied to such variations of topological spaces as uniform spaces, Cauchy spaces, and convergence spaces.
The characteristic that unites the concept in all of these examples is that limits of fixed ultrafilters (or constant nets) are unique (for T1 spaces) or unique up to topological indistinguishability (for R0 spaces).
As it turns out, uniform spaces, and more generally Cauchy spaces, are always R0, so the T1 condition in these cases reduces to the T0 condition.
But R0 alone can be an interesting condition on other sorts of convergence spaces, such as pretopological spaces.
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "x \\in X,"
},
{
"math_id": 2,
"text": "\\{x\\}"
},
{
"math_id": 3,
"text": "X."
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "x."
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "S."
},
{
"math_id": 8,
"text": "\\operatorname{cl}\\{x\\}"
},
{
"math_id": 9,
"text": "x,y\\in X,"
},
{
"math_id": 10,
"text": "\\{y\\}"
},
{
"math_id": 11,
"text": "y"
},
{
"math_id": 12,
"text": "\\{x\\}."
},
{
"math_id": 13,
"text": "x\\in X"
},
{
"math_id": 14,
"text": "F"
},
{
"math_id": 15,
"text": "F\\cap\\operatorname{cl}\\{x\\}=\\emptyset."
},
{
"math_id": 16,
"text": "\\operatorname{cl}\\{x\\}."
},
{
"math_id": 17,
"text": "\\implies"
},
{
"math_id": 18,
"text": "O_A"
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "O_{\\{ x \\}}"
},
{
"math_id": 21,
"text": "x,"
},
{
"math_id": 22,
"text": "O_{\\{ y \\}}"
},
{
"math_id": 23,
"text": "\\{ x \\}"
},
{
"math_id": 24,
"text": "O_{\\{ x \\}},"
},
{
"math_id": 25,
"text": "O_B"
},
{
"math_id": 26,
"text": "O_A \\cap O_B = O_{A \\cup B},"
},
{
"math_id": 27,
"text": "G_x"
},
{
"math_id": 28,
"text": "G_x = O_{\\{ x, x+1 \\}}"
},
{
"math_id": 29,
"text": "G_x = O_{\\{ x-1, x \\}}"
},
{
"math_id": 30,
"text": "A,"
},
{
"math_id": 31,
"text": "U_A := \\bigcap_{x \\in A} G_x. "
},
{
"math_id": 32,
"text": "x + 1"
},
{
"math_id": 33,
"text": "\\left(c_1, \\ldots, c_n\\right)"
},
{
"math_id": 34,
"text": "x_1 - c_1, \\ldots, x_n - c_n."
},
{
"math_id": 35,
"text": "A."
},
{
"math_id": 36,
"text": "O_a"
},
{
"math_id": 37,
"text": "a \\in A."
},
{
"math_id": 38,
"text": "O_a \\cap O_b = O_{ab}"
},
{
"math_id": 39,
"text": "O_0 = \\varnothing"
},
{
"math_id": 40,
"text": "O_1 = X."
},
{
"math_id": 41,
"text": "a."
}
] |
https://en.wikipedia.org/wiki?curid=148721
|
1487868
|
Poincaré metric
|
Metric tensor describing constant negative (hyperbolic) curvature
In mathematics, the Poincaré metric, named after Henri Poincaré, is the metric tensor describing a two-dimensional surface of constant negative curvature. It is the natural metric commonly used in a variety of calculations in hyperbolic geometry or Riemann surfaces.
There are three equivalent representations commonly used in two-dimensional hyperbolic geometry. One is the Poincaré half-plane model, defining a model of hyperbolic space on the upper half-plane. The Poincaré disk model defines a model for hyperbolic space on the unit disk. The disk and the upper half plane are related by a conformal map, and isometries are given by Möbius transformations. A third representation is on the punctured disk, where relations for "q"-analogues are sometimes expressed. These various forms are reviewed below.
Overview of metrics on Riemann surfaces.
A metric on the complex plane may be generally expressed in the form
formula_0
where λ is a real, positive function of formula_1 and formula_2. The length of a curve γ in the complex plane is thus given by
formula_3
The area of a subset of the complex plane is given by
formula_4
where formula_5 is the exterior product used to construct the volume form. The determinant of the metric is equal to formula_6, so the square root of the determinant is formula_7. The Euclidean volume form on the plane is formula_8 and so one has
formula_9
A function formula_10 is said to be the potential of the metric if
formula_11
The Laplace–Beltrami operator is given by
formula_12
The Gaussian curvature of the metric is given by
formula_13
This curvature is one-half of the Ricci scalar curvature.
Isometries preserve angles and arc-lengths. On Riemann surfaces, isometries are identical to changes of coordinate: that is, both the Laplace–Beltrami operator and the curvature are invariant under isometries. Thus, for example, let "S" be a Riemann surface with metric formula_14 and "T" be a Riemann surface with metric formula_15. Then a map
formula_16
with formula_17 is an isometry if and only if it is conformal and if
formula_18.
Here, the requirement that the map is conformal is nothing more than the statement
formula_19
that is,
formula_20
Metric and volume element on the Poincaré plane.
The Poincaré metric tensor in the Poincaré half-plane model is given on the upper half-plane H as
formula_21
where we write formula_22 and formula_23.
This metric tensor is invariant under the action of SL(2,R). That is, if we write
formula_24
for formula_25 then we can work out that
formula_26
and
formula_27
The infinitesimal transforms as
formula_28
and so
formula_29
thus making it clear that the metric tensor is invariant under SL(2,R). Indeed,
formula_30
The invariant volume element is given by
formula_31
The metric is given by
formula_32
formula_33
for formula_34
Another interesting form of the metric can be given in terms of the "cross-ratio". Given any four points formula_35 and formula_36 in the compactified complex plane formula_37 the cross-ratio is defined by
formula_38
Then the metric is given by
formula_39
Here, formula_40 and formula_41 are the endpoints, on the real number line, of the geodesic joining formula_42 and formula_43. These are numbered so that formula_42 lies in between formula_40 and formula_43.
The geodesics for this metric tensor are circular arcs perpendicular to the real axis (half-circles whose origin is on the real axis) and straight vertical lines ending on the real axis.
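The invariance derived above can be checked numerically; the Python sketch below implements the distance formula formula_32 and verifies that it is unchanged when both points are moved by a Möbius transformation with real coefficients satisfying formula_25. The chosen matrix and points are arbitrary.

```python
import numpy as np

def dist_H(z1, z2):
    """Poincare distance on the upper half-plane: 2*artanh(|z1 - z2| / |z1 - conj(z2)|)."""
    return 2 * np.arctanh(abs(z1 - z2) / abs(z1 - np.conj(z2)))

def mobius(z, a, b, c, d):
    return (a * z + b) / (c * z + d)

a, b, c, d = 2.0, 1.0, 3.0, 2.0            # ad - bc = 1, an element of SL(2, R)
z1, z2 = 1.0 + 2.0j, -0.5 + 0.7j           # two points in the upper half-plane

print(dist_H(z1, z2))
print(dist_H(mobius(z1, a, b, c, d), mobius(z2, a, b, c, d)))   # equal: the distance is invariant
```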
Conformal map of plane to disk.
The upper half plane can be mapped conformally to the unit disk with the Möbius transformation
formula_44
where "w" is the point on the unit disk that corresponds to the point "z" in the upper half plane. In this mapping, the constant "z"0 can be any point in the upper half plane; it will be mapped to the center of the disk. The real axis formula_45 maps to the edge of the unit disk formula_46 The constant real number formula_47 can be used to rotate the disk by an arbitrary fixed amount.
The canonical mapping is
formula_48
which takes "i" to the center of the disk, and "0" to the bottom of the disk.
Metric and volume element on the Poincaré disk.
The Poincaré metric tensor in the Poincaré disk model is given on the open unit disk
formula_49
by
formula_50
The volume element is given by
formula_51
The Poincaré metric is given by
formula_52
for formula_53
The geodesics for this metric tensor are circular arcs whose endpoints are orthogonal to the boundary of the disk. Geodesic flows on the Poincaré disk are Anosov flows; that article develops the notation for such flows.
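The correspondence between the two models can also be checked numerically: the sketch below maps two points of the upper half-plane to the disk with the canonical map formula_48 and compares the half-plane distance formula_32 with the disk distance formula_52. The chosen points are arbitrary.

```python
import numpy as np

def dist_H(z1, z2):
    """Distance in the half-plane model."""
    return 2 * np.arctanh(abs(z1 - z2) / abs(z1 - np.conj(z2)))

def dist_D(w1, w2):
    """Distance in the disk model."""
    return 2 * np.arctanh(abs(w1 - w2) / abs(1 - w1 * np.conj(w2)))

def cayley(z):
    """Canonical conformal map from the upper half-plane to the unit disk."""
    return (1j * z + 1) / (z + 1j)

z1, z2 = 0.3 + 1.5j, -1.0 + 0.4j
print(dist_H(z1, z2), dist_D(cayley(z1), cayley(z2)))   # equal: the map is an isometry
```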
The punctured disk model.
A second common mapping of the upper half-plane to a disk is the q-mapping
formula_54
where "q" is the nome and τ is the half-period ratio:
formula_55.
In the notation of the previous sections, τ is the coordinate in the upper half-plane formula_56. The mapping is to the punctured disk, because the value "q"=0 is not in the image of the map.
The Poincaré metric on the upper half-plane induces a metric on the q-disk
formula_57
The potential of the metric is
formula_58
Schwarz lemma.
The Poincaré metric is distance-decreasing on harmonic functions. This is an extension of the Schwarz lemma, called the Schwarz–Ahlfors–Pick theorem.
|
[
{
"math_id": 0,
"text": "ds^2=\\lambda^2(z,\\overline{z})\\, dz\\,d\\overline{z}"
},
{
"math_id": 1,
"text": "z"
},
{
"math_id": 2,
"text": "\\overline{z}"
},
{
"math_id": 3,
"text": "l(\\gamma)=\\int_\\gamma \\lambda(z,\\overline{z})\\, |dz|"
},
{
"math_id": 4,
"text": "\\text{Area}(M)=\\int_M \\lambda^2 (z,\\overline{z})\\,\\frac{i}{2}\\,dz \\wedge d\\overline{z}"
},
{
"math_id": 5,
"text": "\\wedge"
},
{
"math_id": 6,
"text": "\\lambda^4"
},
{
"math_id": 7,
"text": "\\lambda^2"
},
{
"math_id": 8,
"text": "dx\\wedge dy"
},
{
"math_id": 9,
"text": "dz \\wedge d\\overline{z}=(dx+i\\,dy)\\wedge (dx-i \\, dy)= -2i\\,dx\\wedge dy."
},
{
"math_id": 10,
"text": "\\Phi(z,\\overline{z})"
},
{
"math_id": 11,
"text": "4\\frac{\\partial}{\\partial z} \n\\frac{\\partial}{\\partial \\overline{z}} \\Phi(z,\\overline{z})=\\lambda^2(z,\\overline{z})."
},
{
"math_id": 12,
"text": "\\Delta = \\frac{4}{\\lambda^2} \n\\frac {\\partial}{\\partial z} \n\\frac {\\partial}{\\partial \\overline{z}}\n= \\frac{1}{\\lambda^2} \\left(\n\\frac {\\partial^2}{\\partial x^2} + \n\\frac {\\partial^2}{\\partial y^2}\n\\right)."
},
{
"math_id": 13,
"text": "K=-\\Delta \\log \\lambda.\\,"
},
{
"math_id": 14,
"text": "\\lambda^2(z,\\overline{z})\\, dz \\, d\\overline{z}"
},
{
"math_id": 15,
"text": "\\mu^2(w,\\overline{w})\\, dw\\,d\\overline{w}"
},
{
"math_id": 16,
"text": "f:S\\to T\\,"
},
{
"math_id": 17,
"text": "f=w(z)"
},
{
"math_id": 18,
"text": "\\mu^2(w,\\overline{w}) \\;\n\\frac {\\partial w}{\\partial z}\n\\frac {\\partial \\overline {w}} {\\partial \\overline {z}} = \n\\lambda^2 (z, \\overline {z})\n"
},
{
"math_id": 19,
"text": "w(z,\\overline{z})=w(z),"
},
{
"math_id": 20,
"text": "\\frac{\\partial}{\\partial \\overline{z}} w(z) = 0."
},
{
"math_id": 21,
"text": "ds^2=\\frac{dx^2+dy^2}{y^2}=\\frac{dz \\, d\\overline{z}}{y^2}"
},
{
"math_id": 22,
"text": "dz=dx+i\\,dy"
},
{
"math_id": 23,
"text": "d\\overline{z}=dx-i\\,dy"
},
{
"math_id": 24,
"text": "z'=x'+iy'=\\frac{az+b}{cz+d}"
},
{
"math_id": 25,
"text": "ad-bc=1"
},
{
"math_id": 26,
"text": "x'=\\frac{ac(x^2+y^2)+x(ad+bc)+bd}{|cz+d|^2}"
},
{
"math_id": 27,
"text": "y'=\\frac{y}{|cz+d|^2}."
},
{
"math_id": 28,
"text": "dz'= \\frac{\\partial}{\\partial z} \\Big(\\frac{az+b}{cz+d}\\Big) \\, dz = \\frac{a (cz+d) - c(az+b)}{(cz+d)^2} \\, dz = \\frac{acz+ad - caz-cb}{(cz+d)^2} \\, dz = \\frac{ad-cb}{(cz+d)^2} \\, dz \\,\\,\\overset{ad-cb = 1}{=}\\,\\, \\frac{1}{(cz+d)^2} \\, dz = \\frac{dz}{(cz+d)^2}"
},
{
"math_id": 29,
"text": "dz'd\\overline{z}' = \\frac{dz\\,d\\overline{z}}{|cz+d|^4}"
},
{
"math_id": 30,
"text": "\\frac{dz' \\, d\\overline{z}'}{y'^2} = \\frac{\\frac{dz d\\overline{z}}{|cz+d|^4}}{\\frac{y^2}{|cz+d|^4}} = \\frac{dz \\, d\\overline{z}}{y^2}. "
},
{
"math_id": 31,
"text": "d\\mu=\\frac{dx\\,dy}{y^2}."
},
{
"math_id": 32,
"text": "\\rho(z_1,z_2)=2\\tanh^{-1}\\frac{|z_1-z_2|}{|z_1-\\overline{z_2}|}"
},
{
"math_id": 33,
"text": "\\rho(z_1,z_2)=\\log\\frac{|z_1-\\overline{z_2}|+|z_1-z_2|}{|z_1-\\overline{z_2}|-|z_1-z_2|}"
},
{
"math_id": 34,
"text": "z_1,z_2 \\in \\mathbb{H}."
},
{
"math_id": 35,
"text": "z_1, z_2, z_3"
},
{
"math_id": 36,
"text": "z_4"
},
{
"math_id": 37,
"text": "\\hat{\\Complex} = \\Complex \\cup \\{\\infty\\},"
},
{
"math_id": 38,
"text": "(z_1, z_2; z_3, z_4) = \\frac{(z_1-z_3)(z_2-z_4)}{(z_1-z_4)(z_2-z_3)}."
},
{
"math_id": 39,
"text": " \\rho(z_1,z_2)= \\log \\left (z_1, z_2; z_1^\\times, z_2^\\times \\right )."
},
{
"math_id": 40,
"text": "z_1^\\times"
},
{
"math_id": 41,
"text": "z_2^\\times"
},
{
"math_id": 42,
"text": "z_1"
},
{
"math_id": 43,
"text": "z_2"
},
{
"math_id": 44,
"text": "w=e^{i\\phi}\\frac{z-z_0}{z-\\overline {z_0}}"
},
{
"math_id": 45,
"text": "\\Im z =0"
},
{
"math_id": 46,
"text": "|w|=1."
},
{
"math_id": 47,
"text": "\\phi"
},
{
"math_id": 48,
"text": "w=\\frac{iz+1}{z+i}"
},
{
"math_id": 49,
"text": "U= \\left \\{z=x+iy:|z|=\\sqrt{x^2+y^2} < 1 \\right \\}"
},
{
"math_id": 50,
"text": "ds^2=\\frac{4(dx^2+dy^2)}{(1-(x^2+y^2))^2}=\\frac{4 dz\\,d\\overline{z}}{(1-|z|^2)^2}."
},
{
"math_id": 51,
"text": "d\\mu=\\frac{4 dx\\,dy}{(1-(x^2+y^2))^2}=\\frac{4 dx\\,dy}{(1-|z|^2)^2}."
},
{
"math_id": 52,
"text": "\\rho(z_1,z_2)=2\\tanh^{-1}\\left|\\frac{z_1-z_2}{1-z_1\\overline{z_2}}\\right|"
},
{
"math_id": 53,
"text": "z_1,z_2 \\in U."
},
{
"math_id": 54,
"text": "q=\\exp(i\\pi\\tau)"
},
{
"math_id": 55,
"text": " \\tau = \\frac{\\omega_2}{\\omega_1} "
},
{
"math_id": 56,
"text": "\\Im \\tau >0"
},
{
"math_id": 57,
"text": "ds^2=\\frac{4}{|q|^2 (\\log |q|^2)^2} dq \\, d\\overline{q}"
},
{
"math_id": 58,
"text": "\\Phi(q,\\overline{q})=4 \\log \\log |q|^{-2}"
}
] |
https://en.wikipedia.org/wiki?curid=1487868
|
1487910
|
Schwarz–Ahlfors–Pick theorem
|
Extension of the Schwarz lemma for hyperbolic geometry
In mathematics, the Schwarz–Ahlfors–Pick theorem is an extension of the Schwarz lemma for hyperbolic geometry, such as the Poincaré half-plane model.
The Schwarz–Pick lemma states that every holomorphic function from the unit disk "U" to itself, or from the upper half-plane "H" to itself, will not increase the Poincaré distance between points. The unit disk "U" with the Poincaré metric has negative Gaussian curvature −1. In 1938, Lars Ahlfors generalised the lemma to maps from the unit disk to other negatively curved surfaces:
Theorem (Schwarz–Ahlfors–Pick). Let "U" be the unit disk with Poincaré metric formula_0; let "S" be a Riemann surface endowed with a Hermitian metric formula_1 whose Gaussian curvature is ≤ −1; let formula_2 be a holomorphic function. Then
formula_3
for all formula_4
A generalization of this theorem was proved by Shing-Tung Yau in 1973.
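In the special case where "S" is again the unit disk with the Poincaré metric, the statement reduces to the Schwarz–Pick lemma and can be illustrated numerically; the Python sketch below uses the holomorphic self-map f(z) = z² of the disk, an arbitrary illustrative choice, and checks that the Poincaré distance between two points does not increase.

```python
import numpy as np

def dist_D(w1, w2):
    """Poincare distance on the unit disk."""
    return 2 * np.arctanh(abs(w1 - w2) / abs(1 - w1 * np.conj(w2)))

f = lambda z: z ** 2             # a holomorphic map of the unit disk into itself

z1, z2 = 0.5 + 0.3j, -0.2 + 0.6j
print(dist_D(z1, z2))            # distance before applying f
print(dist_D(f(z1), f(z2)))      # smaller: f decreases the Poincare distance
```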
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "f:U\\rightarrow S"
},
{
"math_id": 3,
"text": "\\sigma(f(z_1),f(z_2)) \\leq \\rho(z_1,z_2)"
},
{
"math_id": 4,
"text": "z_1,z_2 \\in U."
}
] |
https://en.wikipedia.org/wiki?curid=1487910
|
1488195
|
Fundamental theorems of welfare economics
|
Complete, full information, perfectly competitive markets are Pareto efficient
There are two fundamental theorems of welfare economics. The first states that in economic equilibrium, a set of complete markets, with complete information, and in perfect competition, will be Pareto optimal (in the sense that no further exchange would make one person better off without making another worse off). The requirements for perfect competition are these:
The theorem is sometimes seen as an analytical confirmation of Adam Smith's "invisible hand" principle, namely that "competitive markets ensure an efficient allocation of resources". However, there is no guarantee that the Pareto optimal market outcome is equitative, as there are many possible Pareto efficient allocations of resources differing in their desirability (e.g. one person may own everything and everyone else nothing).
The second theorem states that any Pareto optimum can be supported as a competitive equilibrium for some initial set of endowments. The implication is that any desired Pareto optimal outcome can be supported; Pareto efficiency can be achieved with any redistribution of initial wealth. However, attempts to correct the distribution may introduce distortions, and so full optimality may not be attainable with redistribution.
The theorems can be visualized graphically for a simple pure exchange economy by means of the Edgeworth box diagram.
History of the fundamental theorems.
Adam Smith (1776).
In a discussion of import tariffs Adam Smith wrote that:
Every individual necessarily labours to render the annual revenue of the society as great as he can... He is in this, as in many other ways, led by an invisible hand to promote an end which was no part of his intention... By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it.
Note that Smith's ideas were not directed towards welfare economics specifically, as this field of economics had not been created at the time. However, his arguments have been credited with contributing to the creation of the branch as well as to the fundamental theorems of welfare economics.
Léon Walras (1870).
Walras wrote that 'exchange under free competition is an operation by which all parties obtain the maximum satisfaction subject to buying and selling at a uniform price'.
F. Y. Edgeworth (1881).
Edgeworth took a step towards the first fundamental theorem in his 'Mathematical Psychics', looking at a pure exchange economy with no production. He included imperfect competition in his analysis. His definition of equilibrium is almost the same as Pareto's later definition of optimality: it is a point such that...
"in whatever direction" we take an infinitely small step, "P" and Π [the utilities of buyer and seller] do not increase together, but that, while one increases, the other decreases.
Instead of concluding that equilibrium was Pareto optimal, Edgeworth concluded that the equilibrium maximizes the sum of utilities of the parties, which is a special case of Pareto efficiency:
It seems to follow on general dynamical principles applied to this special case that equilibrium is attained when the "total pleasure-energy of the contractors is a maximum relative", or subject, to conditions...
Vilfredo Pareto (1906/9).
Pareto stated the first fundamental theorem in his "Manuale" (1906) and with more rigour in its French revision ("Manuel", 1909). He was the first to claim optimality under his own criterion or to support the claim by convincing arguments.
He defines equilibrium more abstractly than Edgeworth as a state which would maintain itself indefinitely in the absence of external pressures and shows that in an exchange economy it is the point at which a common tangent to the parties' indifference curves passes through the endowment.
His definition of optimality is given in Chap. VI:
We will say that the members of a collectivity enjoy a "maximum of ophelimity" [i.e. of utility] at a certain position when it is impossible to move a small step away such that the ophelimity enjoyed by each individual in the collectivity increases, or such that it diminishes. [He has previously defined an increase in individual ophelimity as a move onto a higher indifference curve.] That is to say that any small step is bound to increase the ophelimity of some individuals while diminishing that of others.
The following paragraph gives us a theorem:
For phenomena of type I [i.e. perfect competition], when equilibrium takes place at a point of tangency of indifference curves, the members of the collectivity enjoy a maximum of ophelimity.
He adds that 'a rigorous proof cannot be given without the help of mathematics' and refers to his Appendix.
Wicksell, referring to his definition of optimality, commented:
With such a definition it is almost self-evident that this so-called maximum obtains under free competition, because "if", after an exchange is effected, it were possible by means of a further series of direct or indirect exchanges to produce an additional satisfaction of needs for the participators, then to that extent such a continued exchange would doubtless have taken place, and the original position could not be one of final equilibrium.
Pareto didn't find it so straightforward. He gives a diagrammatic argument in his text, applying solely to exchange, and a 32-page mathematical argument in the Appendix which Samuelson found 'not easy to follow'. Pareto was hampered by not having a concept of the production–possibility frontier, whose development was due partly to his collaborator Enrico Barone. His own 'indifference curves for obstacles' seem to have been a false path.
Shortly after stating the first fundamental theorem, Pareto asks a question about distribution:
Consider a collectivist society which seeks to maximise the ophelimity of its members. The problem divides into two parts. Firstly we have a problem of distribution: how should the goods within a society be shared between its members? And secondly, how should production be organised so that, when goods are so distributed, the members of society obtain the maximum ophelimity?
His answer is an informal precursor of the second theorem:
Having distributed goods according to the answer to the first problem, the state should allow the members of the collectivity to operate a second distribution, or operate it itself, in either case making sure that it is performed in conformity with the workings of free competition.
Enrico Barone (1908).
Barone, an associate of Pareto, proved an optimality property of perfect competition, namely that – assuming exogenous prices – it maximises the monetary value of the return from productive activity, this being the sum of the values of leisure, savings, and goods for consumption, all taken in the desired proportions. He makes no argument that the prices chosen by the market are themselves optimal.
His paper wasn't translated into English until 1935. It received an approving summary from Samuelson but seems not to have influenced the development of the welfare theorems as they now stand.
Abba Lerner (1934).
In 1934 Lerner restated Edgeworth's condition for exchange that indifference curves should meet as tangents, presenting it as an optimality property. He stated a similar condition for production, namely that the production–possibility frontier ("PPF", to which he gave the alternative name of 'productive indifference curve') should be tangential with an indifference curve for the community. He was one of the originators of the PPF, having used it in a paper on international trade in 1932. He shows that the two arguments can be presented in the same terms, since the PPF plays the same role as the mirror-image indifference curve in an Edgeworth box. He also mentions that there's no need for the curves to be differentiable, since the same result obtains if they touch at pointed corners.
His definition of optimality was equivalent to Pareto's:
If... it is possible to move one individual into a preferred position without moving another individual into a worse position... we may say that the relative optimum is not reached...
The optimality condition for production is equivalent to the pair of requirements that (i) price should equal marginal cost and (ii) output should be maximised subject to (i). Lerner thus reduces optimality to tangency for both production and exchange, but does not say why the implied point on the PPF should be the equilibrium condition for a free market. Perhaps he considered it already sufficiently well established.
Lerner ascribes to his LSE colleague Victor Edelberg the credit for suggesting the use of indifference curves. Samuelson surmised that Lerner obtained his results independently of Pareto's work.
Harold Hotelling (1938).
Hotelling put forward a new argument to show that 'sales at marginal costs are a condition of maximum general welfare' (under Pareto's definition). He accepted that this condition was satisfied by perfect competition, but argued in consequence that perfect competition "could not" be optimal since some beneficial projects would be unable to recoup their fixed costs by charging at this rate (for example, in a natural monopoly).
Oscar Lange (1942).
Lange's paper 'The Foundations of Welfare Economics' is the source of the now-traditional pairing of two theorems, one governing markets, the other distribution. He justified the Pareto definition of optimality for the first theorem by reference to Lionel Robbins's rejection of interpersonal utility comparisons, and suggested various ways to reintroduce interpersonal comparisons for the second theorem such as the adjudications of a democratically elected Congress. Lange believed that such a congress could act in a similar way to a capitalist: through setting price vectors, it could achieve any optimal production plan, thereby attaining both efficiency and social equality.
His reasoning is a mathematical translation (into Lagrange multipliers) of Lerner's graphical argument. The second theorem does not take its familiar form in his hands; rather he simply shows that the optimisation conditions for a genuine social utility function are similar to those for Pareto optimality.
Abram Bergson and Paul Samuelson (1947).
Samuelson (crediting Abram Bergson for the substance of his ideas) brought Lange's second welfare theorem to approximately its modern form. He follows Lange in deriving a set of equations which are necessary for Pareto optimality, and then considers what additional constraints arise if the economy is required to satisfy a genuine social welfare function, finding a further set of equations from which it follows 'that all of the action necessary to achieve a given ethical "desideratum" may take the form of "lump sum taxes or bounties"'.
Kenneth Arrow and Gérard Debreu (separately, 1951).
Arrow's and Debreu's two papers (written independently and published almost simultaneously) sought to improve on the rigour of Lange's first theorem. Their accounts refer to (short-run) production as well as exchange, expressing the conditions for both through linear functions.
Equilibrium for production is expressed by the constraint that the value of a manufacturer's net output, i.e. the dot product of the production vector with the price vector, should be maximised over the manufacturer's production set. This is interpreted as profit maximisation.
Equilibrium for exchange is interpreted as meaning that the individual's utility should be maximised over the positions obtainable from the endowment through exchange, these being the positions whose value is no greater than the value of his or her endowment, where the value of an allocation is its dot product with the price vector.
Arrow motivated his paper by reference to the need to extend proofs to cover equilibria at the edge of the space, and Debreu by the possibility of indifference curves being non-differentiable. Modern texts follow their style of proof.
Greenwald–Stiglitz theorem.
In their 1986 paper, "Externalities in Economies with Imperfect Information and Incomplete Markets", Bruce Greenwald and Joseph Stiglitz showed that the fundamental welfare theorems do not hold if there are incomplete markets or imperfect information. The paper establishes that a competitive equilibrium of an economy with asymmetric information is generically not even constrained Pareto efficient. A government facing the same information constraints as the private individuals in the economy can nevertheless find Pareto-improving policy interventions.
Greenwald and Stiglitz noted several relevant situations, including how moral hazard may render a situation inefficient (e.g. an alcohol tax may be Pareto improving as it reduces automobile accidents).
Assumptions for the fundamental theorems.
In principle, there are two commonly found versions of the fundamental theorems, one relating to an exchange economy in which endowments are exogenously given, and one relating to an economy in which production occurs. The production economy is more general and entails additional assumptions. The assumptions stated here follow the treatment in a standard graduate microeconomics textbook.
The fundamental theorems do not generally ensure existence, nor uniqueness of the equilibria.
Second Fundamental Theorem.
The second fundamental theorem has more demanding conditions.
Common failures of the assumptions.
The following provides a non-exhaustive list of common failures of the assumptions underlying the fundamental theorems.
Another instance in which the welfare theorems fail to hold is in the canonical Overlapping generations model (OLG). A further assumption that is implicit in the statement of the theorem is that the value of total endowments in the economy (some of which might be transformed into other goods via production) is finite. In the OLG model, the finiteness of endowments fails, giving rise to similar problems as described by Hilbert's paradox of the Grand Hotel.
Whether the assumptions underlying the fundamental theorems are an adequate description of markets is at least partially an empirical question and may differ case by case.
Proof of the first fundamental theorem.
The first fundamental theorem holds under general conditions. A formal statement is as follows: "If preferences are locally nonsatiated, and if formula_2 is a price equilibrium with transfers, then the allocation formula_3 is Pareto optimal." An equilibrium in this sense either relates to an exchange economy only or presupposes that firms are allocatively and productively efficient, which can be shown to follow from perfectly competitive factor and production markets.
Given a set formula_4 of types of goods we work in the real vector space over formula_4, formula_5 and use boldface for vector valued variables. For instance, if formula_6 then formula_5 would be a three dimensional vector space and the vector formula_7 would represent the bundle of goods containing 1 unit of butter, 2 units of cookies and 3 units of milk.
Suppose that consumer "i" has wealth formula_8 such that formula_9 where formula_10 is the aggregate endowment of goods (i.e. the sum of all consumer and producer endowments) and formula_11 is the production of firm "j".
Preference maximization (from the definition of price equilibrium with transfers) implies (using formula_12 to denote the preference relation for consumer "i"):
if formula_13 then formula_14
In other words, if a bundle of goods is strictly preferred to formula_15 it must be unaffordable at price formula_16. Local nonsatiation additionally implies:
if formula_17 then formula_18
To see why, imagine that formula_17 but formula_19. Then by local nonsatiation we could find formula_20 arbitrarily close to formula_21 (and so still affordable) but which is strictly preferred to formula_15. But formula_15 is the result of preference maximization, so this is a contradiction.
An allocation is a pair formula_22 where formula_23 and formula_24, i.e. formula_25 is the 'matrix' (allowing potentially infinite rows/columns) whose "i"th column is the bundle of goods allocated to consumer "i" and formula_26 is the 'matrix' whose "j"th column is the production of firm "j". We restrict our attention to feasible allocations, which are those allocations in which no consumer sells or producer consumes goods which they lack; i.e., for every good, each consumer's initial endowment plus their net demand must be non-negative, and similarly for producers.
Now consider an allocation formula_22 that Pareto dominates formula_27. This means that formula_17 for all "i" and formula_13 for some "i". By the above, we know formula_28 for all "i" and formula_29 for some "i". Summing, we find:
formula_30.
Because formula_31 is profit maximizing, we know formula_32, so formula_33. But goods must be conserved so formula_34. Hence, formula_22 is not feasible. Since all Pareto-dominating allocations are not feasible, formula_3 must itself be Pareto optimal.
Note that, while the profit-maximizing behaviour of formula_31 is simply assumed in the statement of the theorem, the result is only useful/interesting to the extent that such a profit-maximizing allocation of production is possible. Fortunately, for any restriction of the production allocation formula_31 and price to a closed subset on which the marginal price is bounded away from 0, e.g., any reasonable choice of continuous functions to parameterize possible productions, such a maximum exists. This follows from the fact that the minimal marginal price and finite wealth limit the maximum feasible production (0 limits the minimum), and Tychonoff's theorem ensures that the product of these compact spaces is compact, guaranteeing that a maximum of whatever continuous function we desire exists.
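The statement of the first theorem can also be illustrated numerically. The sketch below (an illustration, not part of the proof above) sets up an assumed two-consumer, two-good Cobb-Douglas exchange economy, computes its competitive equilibrium, and checks over a grid that no feasible allocation makes both consumers strictly better off; all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the article):
a1, a2 = 0.5, 0.3                       # Cobb-Douglas expenditure shares on good x
e1, e2 = (1.0, 0.0), (0.0, 1.0)         # endowments (x, y); aggregate endowment (1, 1)

def u(x, y, a):
    return x ** a * y ** (1 - a)

# Competitive equilibrium: normalise p_y = 1.  Each consumer spends the share
# a_i of wealth on good x, so market clearing for x (a1*w1 + a2*w2 = p_x, with
# w1 = p_x and w2 = 1) pins down the price of x.
p_x = a2 / (1 - a1)
w1, w2 = p_x * e1[0] + e1[1], p_x * e2[0] + e2[1]
x1, y1 = a1 * w1 / p_x, (1 - a1) * w1
x2, y2 = a2 * w2 / p_x, (1 - a2) * w2
assert abs(x1 + x2 - 1) < 1e-9 and abs(y1 + y2 - 1) < 1e-9   # feasibility

u1_star, u2_star = u(x1, y1, a1), u(x2, y2, a2)

# First theorem, checked by brute force: no feasible grid allocation makes
# both consumers strictly better off than at the equilibrium.
grid = np.linspace(1e-3, 1 - 1e-3, 200)
dominated = any(u(x, y, a1) > u1_star + 1e-9 and
                u(1 - x, 1 - y, a2) > u2_star + 1e-9
                for x in grid for y in grid)
print("equilibrium:", (round(x1, 3), round(y1, 3)), (round(x2, 3), round(y2, 3)))
print("Pareto-dominated by a feasible grid allocation?", dominated)   # expect False
```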
Proof of the second fundamental theorem.
The second theorem formally states that, under the assumptions that every production set formula_1 is convex and every preference relation formula_35 is convex and locally nonsatiated, any desired Pareto-efficient allocation can be supported as a price "quasi"-equilibrium with transfers. Further assumptions are needed to prove this statement for price equilibria with transfers.
The proof proceeds in two steps: first, we prove that any Pareto-efficient allocation can be supported as a price quasi-equilibrium with transfers; then, we give conditions under which a price quasi-equilibrium is also a price equilibrium.
Let us define a price quasi-equilibrium with transfers as an allocation formula_36, a price vector "p", and a vector of wealth levels "w" (achieved by lump-sum transfers) with formula_37 (where formula_38 is the aggregate endowment of goods and formula_39 is the production of firm "j") such that:
i. formula_40 for all formula_41 (firms maximize profit by producing formula_42)
ii. For all "i", if formula_43 then formula_44 (if formula_45 is strictly preferred to formula_46 then it cannot cost less than formula_46)
iii. formula_47 (budget constraint satisfied)
The only difference between this definition and the standard definition of a price equilibrium with transfers is in statement ("ii"). The inequality is weak here (formula_44) making it a price quasi-equilibrium. Later we will strengthen this to make a price equilibrium.
Define formula_48 to be the set of all consumption bundles strictly preferred to formula_46 by consumer "i", and let "V" be the sum of all formula_48. formula_48 is convex due to the convexity of the preference relation formula_35. "V" is convex because every formula_48 is convex. Similarly formula_49, the (Minkowski) sum of the production sets formula_50 plus the aggregate endowment, is convex because every formula_50 is convex. We also know that the intersection of "V" and formula_49 must be empty, because if it were not it would imply there existed a bundle that is strictly preferred to formula_36 by everyone and is also achievable with the economy's endowment and production possibilities. This is ruled out by the Pareto-optimality of formula_36.
These two convex, non-intersecting sets allow us to apply the separating hyperplane theorem. This theorem states that there exists a price vector formula_51 and a number "r" such that formula_52 for every formula_53 and formula_54 for every formula_55. In other words, there exists a price vector that defines a hyperplane that perfectly separates the two convex sets.
Next we argue that if formula_56 for all "i" then formula_57. This is due to local nonsatiation: there must be a bundle formula_58 arbitrarily close to formula_45 that is strictly preferred to formula_46 and hence part of formula_48, so formula_59. Taking the limit as formula_60 does not change the weak inequality, so formula_57 as well. In other words, formula_45 is in the closure of "V".
Using this relation we see that for formula_46 itself formula_61. We also know that formula_62, so formula_63 as well. Combining these we find that formula_64. We can use this equation to show that formula_65 fits the definition of a price quasi-equilibrium with transfers.
Because formula_64 and formula_66 we know that for any firm j:
formula_67 for formula_68
which implies formula_40. Similarly we know:
formula_69 for formula_70
which implies formula_71. These two statements, along with the feasibility of the allocation at the Pareto optimum, satisfy the three conditions for a price quasi-equilibrium with transfers supported by wealth levels formula_72 for all "i".
We now turn to conditions under which a price quasi-equilibrium is also a price equilibrium, in other words, conditions under which the statement "if formula_43 then formula_44" implies "if formula_43 then formula_73". For this to be true we need now to assume that the consumption set formula_74 is convex and the preference relation formula_35 is continuous. Then, if there exists a consumption vector formula_58 such that formula_75 and formula_76, a price quasi-equilibrium is a price equilibrium.
To see why, assume to the contrary formula_43 and formula_77, and formula_45 exists. Then by the convexity of formula_74 we have a bundle formula_78 with formula_79. By the continuity of formula_35 for formula_80 close to 1 we have formula_81. This is a contradiction, because this bundle is preferred to formula_46 and costs less than formula_8.
Hence, for price quasi-equilibria to be price equilibria it is sufficient that the consumption set be convex, the preference relation to be continuous, and for there always to exist a "cheaper" consumption bundle formula_58. One way to ensure the existence of such a bundle is to require wealth levels formula_8 to be strictly positive for all consumers "i".
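The second theorem can likewise be illustrated numerically. In the sketch below (again with assumed, illustrative parameters) a point on the contract curve of a two-consumer, two-good Cobb-Douglas exchange economy is chosen as the target Pareto optimum; the supporting prices are read off from the common marginal rate of substitution, and lump-sum transfers are computed so that the target allocation is exactly each consumer's demand.

```python
# Illustrative assumptions: Cobb-Douglas shares, aggregate endowment (1, 1).
a1, a2 = 0.5, 0.3                         # expenditure shares on good x
e1, e2 = (1.0, 0.0), (0.0, 1.0)           # initial endowments

# Choose a target Pareto optimum on the contract curve (MRS1 = MRS2).
A, B = a1 / (1 - a1), a2 / (1 - a2)
x1 = 0.8                                  # assumed target for consumer 1
y1 = B * x1 / (A * (1 - x1) + B * x1)     # solves A*y1/x1 = B*(1-y1)/(1-x1)
x2, y2 = 1 - x1, 1 - y1

# Supporting prices: the common MRS, with p_y normalised to 1.
p_x, p_y = A * y1 / x1, 1.0

# Wealth levels that make the target affordable, and the implied transfers.
w1 = p_x * x1 + p_y * y1
w2 = p_x * x2 + p_y * y2
t1 = w1 - (p_x * e1[0] + p_y * e1[1])     # lump-sum transfer to consumer 1
t2 = w2 - (p_x * e2[0] + p_y * e2[1])     # transfers sum to zero

# Check: at (p, w) each consumer's Cobb-Douglas demand is exactly the target.
assert abs(a1 * w1 / p_x - x1) < 1e-9 and abs((1 - a1) * w1 / p_y - y1) < 1e-9
assert abs(a2 * w2 / p_x - x2) < 1e-9 and abs((1 - a2) * w2 / p_y - y2) < 1e-9
assert abs(t1 + t2) < 1e-9
print("prices:", (p_x, p_y), "transfers:", (t1, t2))
```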
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\geq_i"
},
{
"math_id": 1,
"text": "Y_j"
},
{
"math_id": 2,
"text": "(\\mathbf{X^*},\\mathbf{Y^*}, \\mathbf{p})"
},
{
"math_id": 3,
"text": "(\\mathbf{X^*},\\mathbf{Y^*})"
},
{
"math_id": 4,
"text": "G"
},
{
"math_id": 5,
"text": "\\mathbb{R}^{G}"
},
{
"math_id": 6,
"text": "G=\\lbrace \\text{butter}, \\text{cookies}, \\text{milk} \\rbrace"
},
{
"math_id": 7,
"text": "\\langle 1, 2, 3 \\rangle"
},
{
"math_id": 8,
"text": "w_i"
},
{
"math_id": 9,
"text": "\\Sigma_i w_i = \\mathbf{p} \\cdot \\mathbf{e} + \\Sigma _j \\mathbf{p} \\cdot \\mathbf{y^*_j}"
},
{
"math_id": 10,
"text": " \\mathbf{e} "
},
{
"math_id": 11,
"text": "\\mathbf{y^*_j}"
},
{
"math_id": 12,
"text": " >_i"
},
{
"math_id": 13,
"text": "\\mathbf{x_i} >_i \\mathbf{x^*_i}"
},
{
"math_id": 14,
"text": "\\mathbf{p} \\cdot \\mathbf{x_i} > \\mathbf{w_i}"
},
{
"math_id": 15,
"text": "\\mathbf{x^*_i}"
},
{
"math_id": 16,
"text": "\\mathbf{p}"
},
{
"math_id": 17,
"text": "\\mathbf{x_i} \\geq _i \\mathbf{x^*_i}"
},
{
"math_id": 18,
"text": "\\mathbf{p} \\cdot \\mathbf{x_i} \\geq \\mathbf{w_i}"
},
{
"math_id": 19,
"text": "\\mathbf{p} \\cdot \\mathbf{x_i} < w_i"
},
{
"math_id": 20,
"text": "\\mathbf{x'_i}"
},
{
"math_id": 21,
"text": "\\mathbf{x_i}"
},
{
"math_id": 22,
"text": "(\\mathbf{X},\\mathbf{Y})"
},
{
"math_id": 23,
"text": "\\mathbf{X} \\in \\Pi_{i \\in I} \\mathbb{R}^{G} "
},
{
"math_id": 24,
"text": "\\mathbf{Y} \\in \\Pi_{j \\in J} \\mathbb{R}^{G} "
},
{
"math_id": 25,
"text": "\\mathbf{X}"
},
{
"math_id": 26,
"text": "\\mathbf{Y}"
},
{
"math_id": 27,
"text": "(\\mathbf{X^*}, Y^*)"
},
{
"math_id": 28,
"text": "\\mathbf{p} \\cdot \\mathbf{x_i} \\geq w_i"
},
{
"math_id": 29,
"text": "\\mathbf{p} \\cdot \\mathbf{x_i} > w_i"
},
{
"math_id": 30,
"text": "\\Sigma _i \\mathbf{p} \\cdot \\mathbf{x_i} > \\Sigma _i w_i = \\Sigma _j \\mathbf{p} \\cdot \\mathbf{y^*_j}"
},
{
"math_id": 31,
"text": " \\mathbf{Y^*}"
},
{
"math_id": 32,
"text": " \\Sigma _j \\mathbf{p} \\cdot y^*_j \\geq \\Sigma _j p \\cdot y_j "
},
{
"math_id": 33,
"text": "\\Sigma _i \\mathbf{p} \\cdot \\mathbf{x_i} > \\Sigma _j \\mathbf{p} \\cdot \\mathbf{y_j}"
},
{
"math_id": 34,
"text": "\\Sigma _i \\mathbf{x_i} > \\Sigma _j \\mathbf{y_j}"
},
{
"math_id": 35,
"text": "\\geq _i"
},
{
"math_id": 36,
"text": "(x^*,y^*)"
},
{
"math_id": 37,
"text": "\\Sigma _i w_i = p \\cdot \\omega + \\Sigma _j p \\cdot y^*_j"
},
{
"math_id": 38,
"text": " \\omega "
},
{
"math_id": 39,
"text": "y^*_j"
},
{
"math_id": 40,
"text": "p \\cdot y_j \\leq p \\cdot y_j^*"
},
{
"math_id": 41,
"text": "y_j \\in Y_j"
},
{
"math_id": 42,
"text": "y_j^*"
},
{
"math_id": 43,
"text": "x_i >_i x_i^*"
},
{
"math_id": 44,
"text": "p \\cdot x_i \\geq w_i"
},
{
"math_id": 45,
"text": "x_i"
},
{
"math_id": 46,
"text": "x_i^*"
},
{
"math_id": 47,
"text": "\\Sigma_i x_i^* = \\omega + \\Sigma _j y_j^*"
},
{
"math_id": 48,
"text": "V_i"
},
{
"math_id": 49,
"text": "Y + \\{\\omega\\}"
},
{
"math_id": 50,
"text": "Y_i"
},
{
"math_id": 51,
"text": "p \\neq 0"
},
{
"math_id": 52,
"text": "p \\cdot z \\geq r"
},
{
"math_id": 53,
"text": "z \\in V"
},
{
"math_id": 54,
"text": "p \\cdot z \\leq r"
},
{
"math_id": 55,
"text": "z \\in Y + \\{\\omega\\}"
},
{
"math_id": 56,
"text": "x_i \\geq _i x_i^*"
},
{
"math_id": 57,
"text": "p \\cdot (\\Sigma _i x_i) \\geq r"
},
{
"math_id": 58,
"text": "x'_i"
},
{
"math_id": 59,
"text": "p \\cdot (\\Sigma _i x'_i) \\geq r"
},
{
"math_id": 60,
"text": "x'_i \\rightarrow x_i"
},
{
"math_id": 61,
"text": "p \\cdot (\\Sigma _i x_i^*) \\geq r"
},
{
"math_id": 62,
"text": "\\Sigma _i x_i^* \\in Y + \\{\\omega\\}"
},
{
"math_id": 63,
"text": "p \\cdot (\\Sigma _i x_i^*) \\leq r"
},
{
"math_id": 64,
"text": "p \\cdot (\\Sigma _i x_i^*) = r"
},
{
"math_id": 65,
"text": "(x^*,y^*,p)"
},
{
"math_id": 66,
"text": "\\Sigma _i x_i^* = \\omega + \\Sigma _j y_j^*"
},
{
"math_id": 67,
"text": "p \\cdot (\\omega + y_j + \\Sigma_h y_h^*) \\leq r = p \\cdot (\\omega + y_j^* + \\Sigma_h y_h^*)"
},
{
"math_id": 68,
"text": "h \\neq j"
},
{
"math_id": 69,
"text": "p \\cdot (x_i + \\Sigma_k x_k^*) \\geq r = p \\cdot (x_i^* + \\Sigma_k x_k^*)"
},
{
"math_id": 70,
"text": "k \\neq i"
},
{
"math_id": 71,
"text": "p \\cdot x_i \\geq p \\cdot x_i^*"
},
{
"math_id": 72,
"text": "w_i = p \\cdot x_i^*"
},
{
"math_id": 73,
"text": "p \\cdot x_i > w_i"
},
{
"math_id": 74,
"text": "X_i"
},
{
"math_id": 75,
"text": "x'_i \\in X_i"
},
{
"math_id": 76,
"text": "p \\cdot x'_i < w_i"
},
{
"math_id": 77,
"text": "p \\cdot x_i = w_i"
},
{
"math_id": 78,
"text": "x''_i = \\alpha x_i + (1 - \\alpha)x'_i \\in X_i"
},
{
"math_id": 79,
"text": "p \\cdot x''_i < w_i"
},
{
"math_id": 80,
"text": "\\alpha"
},
{
"math_id": 81,
"text": "\\alpha x_i + (1 - \\alpha)x'_i >_i x_i^*"
}
] |
https://en.wikipedia.org/wiki?curid=1488195
|
1488320
|
No-communication theorem
|
Principle in quantum information theory
In physics, the no-communication theorem or no-signaling principle is a no-go theorem from quantum information theory which states that, during measurement of an entangled quantum state, it is not possible for one observer, by making a measurement of a subsystem of the total state, to communicate information to another observer. The theorem is important because, in quantum mechanics, quantum entanglement is an effect by which certain widely separated events can be correlated in ways that, at first glance, suggest the possibility of communication faster-than-light. The no-communication theorem gives conditions under which such transfer of information between two observers is impossible. These results can be applied to understand the so-called paradoxes in quantum mechanics, such as the EPR paradox, or violations of local realism obtained in tests of Bell's theorem. In these experiments, the no-communication theorem shows that failure of local realism does not lead to what could be referred to as "spooky communication at a distance" (in analogy with Einstein's labeling of quantum entanglement as requiring "spooky action at a distance" on the assumption of QM's completeness).
Informal overview.
The no-communication theorem states that, within the context of quantum mechanics, it is not possible to transmit classical bits of information by means of carefully prepared mixed or pure states, whether entangled or not. The theorem gives only a sufficient condition: it states that if the Kraus matrices commute then no communication through the quantum entangled states is possible, and this applies to all communication. From the perspective of relativity and quantum field theory, faster-than-light or "instantaneous" communication is likewise disallowed.
Being only a sufficient condition, it leaves open further cases where communication is not allowed, and there can also be cases where it is still possible to communicate through the quantum channel by encoding more than the classical information.
In regards to communication, a quantum channel can always be used to transfer classical information by means of shared quantum states.
In 2008, Matthew Hastings proved a counterexample showing that the minimum output entropy is not additive for all quantum channels. Therefore, by an equivalence result due to Peter Shor, the Holevo capacity is not simply additive but, like the entropy, can be super-additive, and in consequence there may be some quantum channels over which one can transfer more than the single-use classical capacity. Typically, overall communication happens at the same time via quantum and non-quantum channels, and in general time ordering and causality cannot be violated.
The basic assumption entering into the theorem is that a quantum-mechanical system is prepared in an initial state with some entangled states, and that this initial state is describable as a mixed or pure state in a Hilbert space "H". After a certain amount of time the system is divided into two parts, each of which contains some non-entangled states and half of the quantum entangled states, and the two parts become spatially distinct, "A" and "B", sent to two distinct observers, Alice and Bob, who are free to perform quantum mechanical measurements on their portion of the total system (viz, A and B). The question is: is there any action that Alice can perform on A that would be detectable by Bob making an observation of B? The theorem replies 'no'.
An important assumption going into the theorem is that neither Alice nor Bob is allowed, in any way, to affect the preparation of the initial state. If Alice were allowed to take part in the preparation of the initial state, it would be trivially easy for her to encode a message into it; thus neither Alice nor Bob participates in the preparation of the initial state. The theorem does not require that the initial state be somehow 'random' or 'balanced' or 'uniform': indeed, a third party preparing the initial state could easily encode messages in it, received by Alice and Bob. Simply, the theorem states that, given some initial state, prepared in some way, there is no action that Alice can take that would be detectable by Bob.
The proof proceeds by defining how the total Hilbert space "H" can be split into two parts, "H""A" and "H""B", describing the subspaces accessible to Alice and Bob. The total state of the system is assumed to be described by a density matrix σ. This appears to be a reasonable assumption, as a density matrix is sufficient to describe both pure and mixed states in quantum mechanics. Another important part of the theorem is that measurement is performed by applying a generalized projection operator "P" to the state σ. This again is reasonable, as projection operators give the appropriate mathematical description of quantum measurements. After a measurement by Alice, the state of the total system is said to have "collapsed" to a state "P"(σ).
The goal of the theorem is to prove that Bob cannot in any way distinguish the pre-measurement state σ from the post-measurement state "P"(σ). This is accomplished mathematically by comparing the trace of σ and the trace of "P"(σ), with the trace being taken over the subspace "H""A". Since the trace is only over a subspace, it is technically called a partial trace. Key to this step is the assumption that the (partial) trace adequately summarizes the system from Bob's point of view. That is, everything that Bob has access to, or could ever have access to, measure, or detect, is completely described by a partial trace over "H"A of the system σ. Again, this is a reasonable assumption, as it is a part of standard quantum mechanics. The fact that this trace never changes as Alice performs her measurements is the conclusion of the proof of the no-communication theorem.
Formulation.
The proof of the theorem is commonly illustrated for the setup of Bell tests in which two observers Alice and Bob perform local observations on a common bipartite system, and uses the statistical machinery of quantum mechanics, namely density states and quantum operations.
Alice and Bob perform measurements on system S whose underlying Hilbert space is
formula_0
It is also assumed that everything is finite-dimensional to avoid convergence issues. The state of the composite system is given by a density operator on "H". Any density operator σ on "H" is a sum of the form:
formula_1
where "Ti" and "Si" are operators on "H""A" and "H""B" respectively. For the following, it is not required to assume that "Ti" and "Si" are state projection operators: "i.e." they need not necessarily be non-negative, nor have a trace of one. That is, σ can have a definition somewhat broader than that of a density matrix; the theorem still holds. Note that the theorem holds trivially for separable states. If the shared state σ is separable, it is clear that any local operation by Alice will leave Bob's system intact. Thus the point of the theorem is that no communication can be achieved via a shared entangled state.
Alice performs a local measurement on her subsystem. In general, this is described by a quantum operation, on the system state, of the following kind
formula_2
where "V""k" are called Kraus matrices which satisfy
formula_3
The term
formula_4
from the expression
formula_5
means that Alice's measurement apparatus does not interact with Bob's subsystem.
Supposing the combined system is prepared in state σ and assuming, for purposes of argument, a non-relativistic situation, immediately (with no time delay) after Alice performs her measurement, the relative state of Bob's system is given by the partial trace of the overall state with respect to Alice's system. In symbols, the relative state of Bob's system after Alice's operation is
formula_6
where formula_7 is the partial trace mapping with respect to Alice's system.
One can directly calculate this state:
formula_8
From this it is argued that, statistically, Bob cannot tell the difference between what Alice did and a random measurement (or whether she did anything at all).
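The invariance of the partial trace can also be checked in a small numerical example. The sketch below is an illustration only (qubit systems, an assumed Bell-state preparation and an assumed X-basis measurement, using numpy): Bob's reduced density matrix is unchanged by a non-selective local measurement on Alice's side, even for a maximally entangled state.

```python
import numpy as np

def partial_trace_A(rho, dA=2, dB=2):
    """Trace out Alice's subsystem from a density matrix on H_A (x) H_B."""
    rho = rho.reshape(dA, dB, dA, dB)
    return np.trace(rho, axis1=0, axis2=2)   # sum over Alice's indices

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density matrix sigma.
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
sigma = np.outer(phi, phi.conj())

# Alice measures her qubit in an arbitrary basis (here: the X basis).
# The Kraus operators are projectors onto |+> and |->, acting as V_k (x) I;
# since projectors are self-adjoint, V sigma V^dagger matches the formula above.
plus  = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
minus = np.array([[1], [-1]], dtype=complex) / np.sqrt(2)
kraus = [v @ v.conj().T for v in (plus, minus)]      # projectors summing to I

post = sum(np.kron(V, np.eye(2)) @ sigma @ np.kron(V, np.eye(2)).conj().T
           for V in kraus)

rho_B_before = partial_trace_A(sigma)
rho_B_after  = partial_trace_A(post)
print(np.allclose(rho_B_before, rho_B_after))        # True: Bob sees no change
```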
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " H = H_A \\otimes H_B. "
},
{
"math_id": 1,
"text": " \\sigma = \\sum_i T_i \\otimes S_i "
},
{
"math_id": 2,
"text": " P(\\sigma) = \\sum_k (V_k \\otimes I_{H_B})^* \\ \\sigma \\ (V_k \\otimes I_{H_B}), "
},
{
"math_id": 3,
"text": " \\sum_k V_k V_k^* = I_{H_A}."
},
{
"math_id": 4,
"text": "I_{H_B}"
},
{
"math_id": 5,
"text": "(V_k \\otimes I_{H_B})"
},
{
"math_id": 6,
"text": " \\operatorname{tr}_{H_A}(P(\\sigma))"
},
{
"math_id": 7,
"text": "\\operatorname{tr}_{H_A}"
},
{
"math_id": 8,
"text": " \\begin{align}\n\\operatorname{tr}_{H_A}(P(\\sigma))\n& = \\operatorname{tr}_{H_A} \\left(\\sum_k (V_k \\otimes I_{H_B})^* \\sigma (V_k \\otimes I_{H_B} )\\right) \\\\\n& = \\operatorname{tr}_{H_A} \\left(\\sum_k \\sum_i V_k^* T_i V_k \\otimes S_i \\right)\\\\\n& = \\sum_i \\sum_k \\operatorname{tr}(V_k^* T_i V_k) S_i \\\\\n& = \\sum_i \\sum_k \\operatorname{tr}(T_i V_k V_k^*) S_i \\\\\n& = \\sum_i \\operatorname{tr}\\left(T_i \\sum_k V_k V_k^*\\right) S_i \\\\\n& = \\sum_i \\operatorname{tr}(T_i) S_i \\\\\n& = \\operatorname{tr}_{H_A}(\\sigma).\n\\end{align}"
},
{
"math_id": 9,
"text": "P(\\sigma)"
},
{
"math_id": 10,
"text": "|z+\\rangle_B"
},
{
"math_id": 11,
"text": "|z-\\rangle_B"
}
] |
https://en.wikipedia.org/wiki?curid=1488320
|
14884
|
Intermediate value theorem
|
Continuous function on an interval takes on every value between its values at the ends
In mathematical analysis, the intermediate value theorem states that if formula_0 is a continuous function whose domain contains the interval ["a", "b"], then it takes on any given value between formula_5 and formula_6 at some point within the interval.
This has two important corollaries: if a continuous function has values of opposite sign inside an interval, then it has a root in that interval (Bolzano's theorem); and the image of a continuous function over an interval is itself an interval.
Motivation.
This captures an intuitive property of continuous functions over the real numbers: given "formula_0" continuous on formula_7 with the known values formula_8 and formula_9, then the graph of formula_10 must pass through the horizontal line formula_11 while formula_2 moves from formula_12 to formula_13. It represents the idea that the graph of a continuous function on a closed interval can be drawn without lifting a pencil from the paper.
Theorem.
The intermediate value theorem states the following:
Consider an interval formula_14 of real numbers formula_15 and a continuous function formula_16. Then
Version I: if formula_17 is a number between formula_5 and formula_6, that is, formula_18 then there is some formula_19 such that formula_20.
Version II: the image set formula_21 is also an interval, and it contains formula_22.
Remark: "Version II" states that the set of function values has no gap. For any two function values formula_23 with formula_24, even if they are outside the interval between formula_5 and formula_6, all points in the interval formula_25 are also function values, formula_26
A subset of the real numbers with no internal gap is an interval. "Version I" is naturally contained in "Version II".
Relation to completeness.
The theorem depends on, and is equivalent to, the completeness of the real numbers. The intermediate value theorem does not apply to the rational numbers Q because gaps exist between rational numbers; irrational numbers fill those gaps. For example, the function formula_27 for formula_28 satisfies formula_29 and formula_30. However, there is no rational number formula_2 such that formula_31, because formula_32 is an irrational number.
Proof.
Proof version A.
The theorem may be proven as a consequence of the completeness property of the real numbers as follows:
We shall prove the first case, formula_33. The second case is similar.
Let formula_34 be the set of all formula_35 such that formula_36. Then formula_34 is non-empty since formula_3 is an element of formula_34. Since formula_34 is non-empty and bounded above by formula_4, by completeness, the supremum formula_37 exists. That is, formula_38 is the smallest number that is greater than or equal to every member of formula_34.
Note that, due to the continuity of formula_0 at formula_3, we can keep formula_39 within any formula_40 of formula_5 by keeping formula_2 sufficiently close to formula_3. Since formula_41 is a strict inequality, consider the implication when formula_42 is the distance between formula_17 and formula_5. No formula_2 sufficiently close to formula_3 can then make formula_39 greater than or equal to formula_17, which means there are values greater than formula_3 in formula_34. A more detailed proof goes like this:
Choose formula_43. Then formula_44 such that formula_45, formula_46 Consider the interval formula_47. Notice that formula_48 and every formula_49 satisfies the condition formula_50. Therefore for every formula_49 we have formula_36. Hence formula_38 cannot be formula_3.
Likewise, due to the continuity of formula_0 at formula_4, we can keep formula_39 within any formula_51 of formula_6 by keeping formula_2 sufficiently close to formula_4. Since formula_52 is a strict inequality, consider the similar implication when formula_42 is the distance between formula_17 and formula_6. Every formula_2 sufficiently close to formula_4 must then make formula_39 greater than formula_17, which means there are values smaller than formula_4 that are upper bounds of formula_34. A more detailed proof goes like this:
Choose formula_53. Then formula_44 such that formula_45, formula_54 Consider the interval formula_55. Notice that formula_56 and every formula_57 satisfies the condition formula_58. Therefore for every formula_57 we have formula_59. Hence formula_38 cannot be formula_4.
With formula_60 and formula_61, it must be the case formula_62. Now we claim that formula_20.
Fix some formula_51. Since formula_0 is continuous at formula_38, formula_63 such that formula_45, formula_64.
Since formula_62 and formula_65 is open, formula_66 such that formula_67. Set formula_68. Then we have
formula_69
for all formula_70. By the properties of the supremum, there exists some formula_71 that is contained in formula_34, and so
formula_72
Picking formula_73, we know that formula_74 because formula_38 is the supremum of formula_34. This means that
formula_75
Both inequalities
formula_76
are valid for all formula_51, from which we deduce formula_77 as the only possible value, as stated.
Proof version B.
We will only prove the case of formula_78, as the formula_79 case is similar.
Define formula_80, which is equivalent to formula_81 and lets us rewrite formula_78 as formula_82; we then have to prove that formula_83 for some formula_84, which is more intuitive. We further define the set formula_85. Because formula_86, we know that formula_87, so formula_34 is not empty. Moreover, as formula_88, we know that formula_34 is bounded and non-empty, so by completeness the supremum formula_89 exists.
There are 3 cases for the value of formula_90, those being formula_91 and formula_83. For contradiction, let us assume that formula_92. Then, by the definition of continuity, for formula_93 there exists a formula_94 such that formula_70 implies formula_95, which is equivalent to formula_96. If we choose formula_97, where formula_98, then formula_96 and formula_99, so formula_100. But formula_101, so formula_34 contains an element greater than formula_38, contradicting the fact that formula_38 is an upper bound of formula_34; hence formula_92 is impossible and formula_102. Assume then that formula_103. We similarly choose formula_104 and know that there exists a formula_94 such that formula_70 implies formula_105. We can rewrite this as formula_106, which implies that formula_107. If we now choose formula_108, then formula_107 and formula_109. It follows that formula_2 is an upper bound for formula_34 (every point strictly between formula_2 and formula_38 is within that distance of formula_38 and therefore also satisfies formula_107, while no point greater than formula_38 can lie in formula_34). However, formula_110, which contradicts the minimality of the least upper bound formula_38, so formula_103 is impossible as well. Combining both results, formula_83, or equivalently formula_20, is the only remaining possibility.
Remark: The intermediate value theorem can also be proved using the methods of non-standard analysis, which places "intuitive" arguments involving infinitesimals on a rigorous footing.
History.
A form of the theorem was postulated as early as the 5th century BCE, in the work of Bryson of Heraclea on squaring the circle. Bryson argued that, as circles larger than and smaller than a given square both exist, there must exist a circle of equal area. The theorem was first proved by Bernard Bolzano in 1817. Bolzano used the following formulation of the theorem:
Let formula_111 be continuous functions on the interval between formula_112 and formula_113 such that formula_114 and formula_115. Then there is an formula_2 between formula_112 and formula_113 such that formula_116.
The equivalence between this formulation and the modern one can be shown by setting formula_117 to the appropriate constant function. Augustin-Louis Cauchy provided the modern formulation and a proof in 1821. Both were inspired by the goal of formalizing the analysis of functions and the work of Joseph-Louis Lagrange. The idea that continuous functions possess the intermediate value property has an earlier origin. Simon Stevin proved the intermediate value theorem for polynomials (using a cubic as an example) by providing an algorithm for constructing the decimal expansion of the solution. The algorithm iteratively subdivides the interval into 10 parts, producing an additional decimal digit at each step of the iteration. Before the formal definition of continuity was given, the intermediate value property was given as part of the definition of a continuous function. Proponents include Louis Arbogast, who assumed the functions to have no jumps, satisfy the intermediate value property and have increments whose sizes corresponded to the sizes of the increments of the variable.
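The digit-by-digit subdivision just described lends itself to a short computational sketch. The cubic and the helper function below are illustrative assumptions (Stevin worked with a different polynomial), but the narrowing of the interval into ten parts per pass is the same idea.

```python
# A minimal sketch of decimal subdivision: each pass splits the current interval
# into 10 parts and keeps the part where the sign change occurs, yielding one
# more decimal digit of the root per pass (the intermediate value theorem
# guarantees a root in every retained subinterval).
def stevin_root(f, lo, hi, digits=8):
    assert f(lo) < 0 < f(hi)               # sign change on [lo, hi]
    for _ in range(digits):
        step = (hi - lo) / 10
        # largest subdivision point at which f is still non-positive
        lo = max(lo + k * step for k in range(10) if f(lo + k * step) <= 0)
        hi = lo + step
    return lo

root = stevin_root(lambda x: x**3 - 2*x - 5, 2.0, 3.0)   # assumed example cubic
print(root)                                 # ~2.09455148, where f is nearly 0
```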
Earlier authors held the result to be intuitively obvious and requiring no proof. The insight of Bolzano and Cauchy was to define a general notion of continuity (in terms of infinitesimals in Cauchy's case and using real inequalities in Bolzano's case), and to provide a proof based on such definitions.
Converse is false.
A Darboux function is a real-valued function f that has the "intermediate value property," i.e., that satisfies the conclusion of the intermediate value theorem: for any two values a and b in the domain of f, and any y between "f"("a") and "f"("b"), there is some c between a and b with "f"("c") = "y". The intermediate value theorem says that every continuous function is a Darboux function. However, not every Darboux function is continuous; i.e., the converse of the intermediate value theorem is false.
As an example, take the function "f" : [0, ∞) → [−1, 1] defined by "f"("x") = sin(1/"x") for "x" > 0 and "f"(0) = 0. This function is not continuous at "x" = 0 because the limit of "f"("x") as x tends to 0 does not exist; yet the function has the intermediate value property. Another, more complicated example is given by the Conway base 13 function.
In fact, Darboux's theorem states that all functions that result from the differentiation of some other function on some interval have the intermediate value property (even though they need not be continuous).
Historically, this intermediate value property has been suggested as a definition for continuity of real-valued functions; this definition was not adopted.
Generalizations.
Multi-dimensional spaces.
The Poincaré-Miranda theorem is a generalization of the Intermediate value theorem from a (one-dimensional) interval to a (two-dimensional) rectangle, or more generally, to an "n"-dimensional cube.
Vrahatis presents a similar generalization to triangles, or more generally, "n"-dimensional simplices. Let "Dn" be an "n"-dimensional simplex with "n"+1 vertices denoted by "v"0,...,"vn". Let "F"=("f"1,...,"fn") be a continuous function from "Dn" to "Rn" that never equals 0 on the boundary of "Dn". Suppose "F" satisfies the following conditions:
Then there is a point "z" in the interior of "Dn" on which "F"("z")=(0,...,0).
It is possible to normalize the "fi" such that "fi"("vi")>0 for all "i"; then the conditions become simpler:
The theorem can be proved based on the Knaster–Kuratowski–Mazurkiewicz lemma. It can be used for approximating fixed points and zeros.
General metric and topological spaces.
The intermediate value theorem is closely linked to the topological notion of connectedness and follows from the basic properties of connected sets in metric spaces and connected subsets of R in particular:
(*) If formula_118 and formula_119 are metric spaces, formula_120 is a continuous map, and formula_121 is a connected subset, then formula_122 is connected as well.
(**) A set formula_123 is connected if and only if it satisfies the following property: formula_124.
In fact, connectedness is a topological property and (*) generalizes to topological spaces: "If formula_118 and formula_119 are topological spaces, formula_120 is a continuous map, and formula_118 is a connected space, then formula_125 is connected." The preservation of connectedness under continuous maps can be thought of as a generalization of the intermediate value theorem, a property of continuous, real-valued functions of a real variable, to continuous functions in general spaces.
Recall the first version of the intermediate value theorem, stated previously:
<templatestyles src="Math_theorem/styles.css" />
Intermediate value theorem ("Version I") — Consider a closed interval formula_14 in the real numbers formula_15 and a continuous function formula_126. Then, if formula_127 is a real number such that formula_128, there exists formula_62 such that formula_77.
The intermediate value theorem is an immediate consequence of these two properties of connectedness:
<templatestyles src="Math_proof/styles.css" />Proof
By (**), formula_14 is a connected set. It follows from (*) that the image, formula_21, is also connected. For convenience, assume that formula_129. Then once more invoking (**), formula_33 implies that formula_130, or formula_77 for some formula_131. Since formula_132, formula_133 must actually hold, and the desired conclusion follows. The same argument applies if formula_134, so we are done. Q.E.D.
The intermediate value theorem generalizes in a natural way: Suppose that X is a connected topological space and ("Y", <) is a totally ordered set equipped with the order topology, and let "f" : "X" → "Y" be a continuous map. If a and b are two points in X and u is a point in Y lying between "f"("a") and "f"("b") with respect to <, then there exists c in X such that "f"("c") = "u". The original theorem is recovered by noting that R is connected and that its natural topology is the order topology.
The Brouwer fixed-point theorem is a related theorem that, in one dimension, gives a special case of the intermediate value theorem.
In constructive mathematics.
In constructive mathematics, the intermediate value theorem is not true. Instead, one has to weaken the conclusion: for any continuous function formula_135 with formula_136 and formula_137, and for every formula_51, there exists a point formula_35 such that formula_138.
Practical applications.
A similar result is the Borsuk–Ulam theorem, which says that a continuous map from the formula_139-sphere to Euclidean formula_139-space will always map some pair of antipodal points to the same place.
<templatestyles src="Math_proof/styles.css" />Proof for 1-dimensional case
Take formula_0 to be any continuous function on a circle. Draw a line through the center of the circle, intersecting it at two opposite points formula_140 and formula_141. Define formula_142 to be formula_143. If the line is rotated 180 degrees, the value −"d" will be obtained instead. Due to the intermediate value theorem there must be some intermediate rotation angle for which "d" = 0, and as a consequence "f"("A") = "f"("B") at this angle.
In general, for any continuous function whose domain is some closed convex formula_139-dimensional shape and any point inside the shape (not necessarily its center), there exist two antipodal points with respect to the given point whose functional value is the same.
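The one-dimensional argument can be made concrete numerically. In the sketch below the sample function on the circle is an assumed illustration, and a bisection search on d(θ) = f(θ) − f(θ + π) stands in for the appeal to the intermediate value theorem.

```python
import math

def f(theta):                      # an assumed continuous function on the circle
    return math.sin(theta) + 0.5 * math.cos(3 * theta) + 2.0

def d(theta):                      # difference between antipodal values
    return f(theta) - f(theta + math.pi)

# d(0) = -d(pi), so d changes sign on [0, pi]; bisect to locate a zero of d.
lo, hi = 0.0, math.pi
if d(lo) > 0:                      # orient endpoints so that d(lo) <= 0 <= d(hi)
    lo, hi = hi, lo
for _ in range(60):
    mid = (lo + hi) / 2
    if d(mid) <= 0:
        lo = mid
    else:
        hi = mid
theta = (lo + hi) / 2
print(theta, f(theta), f(theta + math.pi))   # the two antipodal values agree
```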
The theorem also underpins the explanation of why rotating a wobbly table will bring it to stability (subject to certain easily met constraints).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "[a,b]"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "f(a)"
},
{
"math_id": 6,
"text": "f(b)"
},
{
"math_id": 7,
"text": "[1,2]"
},
{
"math_id": 8,
"text": "f(1) = 3"
},
{
"math_id": 9,
"text": "f(2) = 5"
},
{
"math_id": 10,
"text": "y = f(x)"
},
{
"math_id": 11,
"text": "y = 4"
},
{
"math_id": 12,
"text": "1"
},
{
"math_id": 13,
"text": "2"
},
{
"math_id": 14,
"text": "I = [a,b]"
},
{
"math_id": 15,
"text": "\\R"
},
{
"math_id": 16,
"text": "f \\colon I \\to \\R"
},
{
"math_id": 17,
"text": "u"
},
{
"math_id": 18,
"text": "\\min(f(a),f(b))<u<\\max(f(a),f(b)),"
},
{
"math_id": 19,
"text": "c\\in (a,b)"
},
{
"math_id": 20,
"text": "f(c)=u"
},
{
"math_id": 21,
"text": "f(I)"
},
{
"math_id": 22,
"text": "\\bigl[\\min(f(a), f(b)),\\max(f(a), f(b))\\bigr]"
},
{
"math_id": 23,
"text": "c,d \\in f(I)"
},
{
"math_id": 24,
"text": "c < d"
},
{
"math_id": 25,
"text": "\\bigl[c,d\\bigr]"
},
{
"math_id": 26,
"text": "\\bigl[c,d\\bigr]\\subseteq f(I)."
},
{
"math_id": 27,
"text": "f(x) = x^2"
},
{
"math_id": 28,
"text": "x\\in\\Q"
},
{
"math_id": 29,
"text": "f(0) = 0"
},
{
"math_id": 30,
"text": "f(2) = 4"
},
{
"math_id": 31,
"text": "f(x)=2"
},
{
"math_id": 32,
"text": "\\sqrt 2"
},
{
"math_id": 33,
"text": "f(a) < u < f(b)"
},
{
"math_id": 34,
"text": "S"
},
{
"math_id": 35,
"text": "x \\in [a,b]"
},
{
"math_id": 36,
"text": "f(x)<u"
},
{
"math_id": 37,
"text": "c=\\sup S"
},
{
"math_id": 38,
"text": "c"
},
{
"math_id": 39,
"text": "f(x)"
},
{
"math_id": 40,
"text": "\\varepsilon>0"
},
{
"math_id": 41,
"text": "f(a)<u"
},
{
"math_id": 42,
"text": "\\varepsilon"
},
{
"math_id": 43,
"text": "\\varepsilon=u-f(a)>0"
},
{
"math_id": 44,
"text": "\\exists \\delta>0"
},
{
"math_id": 45,
"text": "\\forall x \\in [a,b]"
},
{
"math_id": 46,
"text": "|x-a|<\\delta \\implies |f(x)-f(a)|<u-f(a) \\implies f(x)<u."
},
{
"math_id": 47,
"text": "[a,\\min(a+\\delta,b))=I_1"
},
{
"math_id": 48,
"text": "I_1 \\subseteq [a,b]"
},
{
"math_id": 49,
"text": "x \\in I_1"
},
{
"math_id": 50,
"text": "|x-a|<\\delta"
},
{
"math_id": 51,
"text": "\\varepsilon > 0"
},
{
"math_id": 52,
"text": "u<f(b)"
},
{
"math_id": 53,
"text": "\\varepsilon=f(b)-u>0"
},
{
"math_id": 54,
"text": "|x-b|<\\delta \\implies |f(x)-f(b)|<f(b)-u \\implies f(x)>u."
},
{
"math_id": 55,
"text": "(\\max(a,b-\\delta),b]=I_2"
},
{
"math_id": 56,
"text": "I_2 \\subseteq [a,b]"
},
{
"math_id": 57,
"text": "x \\in I_2"
},
{
"math_id": 58,
"text": "|x-b|<\\delta"
},
{
"math_id": 59,
"text": "f(x)>u"
},
{
"math_id": 60,
"text": "c \\neq a"
},
{
"math_id": 61,
"text": "c \\neq b"
},
{
"math_id": 62,
"text": "c \\in (a,b)"
},
{
"math_id": 63,
"text": "\\exists \\delta_1>0"
},
{
"math_id": 64,
"text": "|x-c|<\\delta_1 \\implies |f(x) - f(c)| < \\varepsilon"
},
{
"math_id": 65,
"text": "(a,b)"
},
{
"math_id": 66,
"text": "\\exists \\delta_2>0"
},
{
"math_id": 67,
"text": "(c-\\delta_2,c+\\delta_2) \\subseteq (a,b)"
},
{
"math_id": 68,
"text": "\\delta= \\min(\\delta_1,\\delta_2)"
},
{
"math_id": 69,
"text": "f(x)-\\varepsilon<f(c)<f(x)+\\varepsilon"
},
{
"math_id": 70,
"text": "x\\in(c-\\delta,c+\\delta)"
},
{
"math_id": 71,
"text": "a^*\\in (c-\\delta,c]"
},
{
"math_id": 72,
"text": "f(c)<f(a^*)+\\varepsilon<u+\\varepsilon."
},
{
"math_id": 73,
"text": "a^{**}\\in(c,c+\\delta)"
},
{
"math_id": 74,
"text": "a^{**}\\not\\in S"
},
{
"math_id": 75,
"text": "f(c)>f(a^{**})-\\varepsilon \\geq u-\\varepsilon."
},
{
"math_id": 76,
"text": "u-\\varepsilon<f(c)< u+\\varepsilon"
},
{
"math_id": 77,
"text": "f(c) = u"
},
{
"math_id": 78,
"text": "f(a)<u<f(b)"
},
{
"math_id": 79,
"text": "f(a)>u>f(b)"
},
{
"math_id": 80,
"text": "g(x)=f(x)-u"
},
{
"math_id": 81,
"text": "f(x)=g(x)+u"
},
{
"math_id": 82,
"text": "g(a)<0<g(b)"
},
{
"math_id": 83,
"text": "g(c)=0"
},
{
"math_id": 84,
"text": "c\\in[a,b]"
},
{
"math_id": 85,
"text": "S=\\{x\\in[a,b]:g(x)\\leq 0\\}"
},
{
"math_id": 86,
"text": "g(a)<0"
},
{
"math_id": 87,
"text": "a\\in S"
},
{
"math_id": 88,
"text": "S\\subseteq[a,b]"
},
{
"math_id": 89,
"text": "c=\\sup(S)"
},
{
"math_id": 90,
"text": "g(c)"
},
{
"math_id": 91,
"text": "g(c)<0,g(c)>0"
},
{
"math_id": 92,
"text": "g(c)<0"
},
{
"math_id": 93,
"text": "\\epsilon=0-g(c)"
},
{
"math_id": 94,
"text": "\\delta>0"
},
{
"math_id": 95,
"text": "|g(x)-g(c)|<-g(c)"
},
{
"math_id": 96,
"text": "g(x)<0"
},
{
"math_id": 97,
"text": "x=c+\\frac{\\delta}{N}"
},
{
"math_id": 98,
"text": "N>\\frac{\\delta}{b-c}"
},
{
"math_id": 99,
"text": "c<x<b"
},
{
"math_id": 100,
"text": "x\\in S"
},
{
"math_id": 101,
"text": "x>c"
},
{
"math_id": 102,
"text": "g(c)\\geq 0"
},
{
"math_id": 103,
"text": "g(c)>0"
},
{
"math_id": 104,
"text": "\\epsilon=g(c)-0"
},
{
"math_id": 105,
"text": "|g(x)-g(c)|<g(c)"
},
{
"math_id": 106,
"text": "-g(c)<g(x)-g(c)<g(c)"
},
{
"math_id": 107,
"text": "g(x)>0"
},
{
"math_id": 108,
"text": "x=c-\\frac{\\delta}{2}"
},
{
"math_id": 109,
"text": "a<x<c"
},
{
"math_id": 110,
"text": "x<c"
},
{
"math_id": 111,
"text": "f, \\varphi"
},
{
"math_id": 112,
"text": "\\alpha"
},
{
"math_id": 113,
"text": "\\beta"
},
{
"math_id": 114,
"text": "f(\\alpha) < \\varphi(\\alpha)"
},
{
"math_id": 115,
"text": "f(\\beta) > \\varphi(\\beta)"
},
{
"math_id": 116,
"text": "f(x) = \\varphi(x)"
},
{
"math_id": 117,
"text": "\\varphi"
},
{
"math_id": 118,
"text": "X"
},
{
"math_id": 119,
"text": "Y"
},
{
"math_id": 120,
"text": "f \\colon X \\to Y"
},
{
"math_id": 121,
"text": "E \\subset X"
},
{
"math_id": 122,
"text": "f(E)"
},
{
"math_id": 123,
"text": "E \\subset \\R"
},
{
"math_id": 124,
"text": "x,y\\in E,\\ x < r < y \\implies r \\in E"
},
{
"math_id": 125,
"text": "f(X)"
},
{
"math_id": 126,
"text": "f\\colon I\\to\\R"
},
{
"math_id": 127,
"text": " u"
},
{
"math_id": 128,
"text": "\\min(f(a),f(b))< u < \\max(f(a),f(b))"
},
{
"math_id": 129,
"text": "f(a) < f(b)"
},
{
"math_id": 130,
"text": "u \\in f(I)"
},
{
"math_id": 131,
"text": "c\\in I"
},
{
"math_id": 132,
"text": "u\\neq f(a), f(b)"
},
{
"math_id": 133,
"text": "c\\in(a,b)"
},
{
"math_id": 134,
"text": "f(b) < f(a)"
},
{
"math_id": 135,
"text": "f:[a,b] \\to R"
},
{
"math_id": 136,
"text": "f(a) < 0"
},
{
"math_id": 137,
"text": "0 < f(b)"
},
{
"math_id": 138,
"text": "\\vert f(x) \\vert < \\varepsilon"
},
{
"math_id": 139,
"text": "n"
},
{
"math_id": 140,
"text": "A"
},
{
"math_id": 141,
"text": "B"
},
{
"math_id": 142,
"text": "d"
},
{
"math_id": 143,
"text": "f(A)-f(B)"
}
] |
https://en.wikipedia.org/wiki?curid=14884
|
14887052
|
41 equal temperament
|
In music, 41 equal temperament, abbreviated 41-TET, 41-EDO, or 41-ET, is the tempered scale derived by dividing the octave into 41 equally sized steps (equal frequency ratios). Each step represents a frequency ratio of 21/41, or 29.27 cents (), an interval close in size to the septimal comma. 41-ET can be seen as a tuning of the schismatic, magic and miracle temperaments. It is the second smallest equal temperament, after 29-ET, whose perfect fifth is closer to just intonation than that of 12-ET. In other words, formula_0 is a better approximation to the ratio formula_1 than either formula_2 or formula_3.
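These figures are easy to verify numerically; the short sketch below (helper code assumed, not part of the article) reproduces the step size and the fifth approximations quoted above.

```python
import math

step_cents = 1200 / 41
print(round(step_cents, 2))                    # 29.27 cents per step

just_fifth = 3 / 2
for n, k in [(41, 24), (29, 17), (12, 7)]:     # k steps of n-EDO approximating 3/2
    approx = 2 ** (k / n)
    error_cents = 1200 * math.log2(approx / just_fifth)
    print(f"{n}-EDO fifth: {approx:.5f}  error: {error_cents:+.2f} cents")
```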
History and use.
Although 41-ET has not seen as wide use as other temperaments such as 19-ET or 31-ET, pianist and engineer Paul von Janko built a piano using this tuning, which is on display at the Gemeentemuseum in The Hague. 41-ET can also be seen as an octave-based approximation of the Bohlen–Pierce scale.
41-ET guitars have been built, notably by Yossi Tamim. The frets on such guitars are very tightly spaced. To make a more playable 41-ET guitar, an approach called "The Kite Tuning" omits every other fret (in other words, 41 frets per two octaves or 20.5 frets per octave) while tuning adjacent strings to an odd number of steps of 41. Thus, any two adjacent strings together contain all the pitch classes of the full 41-ET system. The Kite Guitar's main tuning uses 13 steps of 41-ET (which approximates a 5/4 ratio) between strings. With that tuning, all simple ratios of odd limit 9 or less are available within spans of at most 4 frets.
41-ET is also a subset of 205-ET, for which the keyboard layout of the
Tonal Plexus is designed.
Interval size.
Here are the sizes of some common intervals (shaded rows mark relatively poor matches):
As the table above shows, the 41-ET both distinguishes between and closely matches all intervals involving the ratios in the harmonic series up to and including the 10th overtone. This includes the distinction between the major tone and minor tone (thus 41-ET is not a meantone tuning). These close fits make 41-ET a good approximation for 5-, 7- and 9-limit music.
41-ET also closely matches a number of other intervals involving higher harmonics. It distinguishes between and closely matches all intervals involving up through the 12th overtones, with the exception of the greater undecimal neutral second (11:10). Although not as accurate, it can be considered a full 15-limit tuning as well.
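As a rough, hand-checkable illustration of how these approximations arise (a sketch added here, not part of the source article), the step size and nearest-step errors can be computed in a few lines of Python; the ratios below are just a sample:

```python
# Step size of 41-EDO in cents, and the nearest-step error for a few
# just-intonation ratios.  Purely illustrative.
from math import log2

EDO = 41
step_cents = 1200 / EDO                      # about 29.27 cents per step

for num, den in [(3, 2), (5, 4), (7, 4), (9, 8), (10, 9), (11, 10)]:
    just_cents = 1200 * log2(num / den)
    steps = round(EDO * log2(num / den))     # nearest number of 41-EDO steps
    error = steps * step_cents - just_cents
    print(f"{num}:{den} -> {steps} steps, error {error:+.2f} cents")
```

The output shows, for example, that 9:8 and 10:9 map to different step counts (7 and 6), consistent with 41-ET not being a meantone tuning, and that 11:10 is the poorest match of the ratios listed.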
Tempering.
Intervals not tempered out by 41-ET include the lesser diesis (128:125), septimal diesis (49:48), septimal sixth-tone (50:49), septimal comma (64:63), and the syntonic comma (81:80).
41-ET tempers out 100:99, which is the difference between the greater undecimal neutral second and the minor tone, as well as the septimal kleisma (225:224), 1029:1024 (the difference between three intervals of 8:7 and the interval 3:2), and the small diesis (3125:3072).
Notation.
Using extended Pythagorean notation results in double and even triple sharps and flats. Furthermore, the notes run out of order. The chromatic scale is C, B♯, A/E, D♭, C♯, B, E, D... These issues can be avoided by using ups and downs notation. The up and down arrows are written as a caret or a lower-case "v", usually in a sans-serif font. One arrow equals one step of 41-TET. In note names, the arrows come first, to facilitate chord naming. The many enharmonic equivalences allow great freedom of spelling.
Chords of 41 equal temperament.
Because ups and downs notation names the intervals of 41-TET, it can provide precise chord names. The Pythagorean minor chord with 32/27 on C is still named Cm and still spelled C–E♭–G. But the 5-limit "up"minor chord uses the upminor 3rd 6/5 and is spelled C–^E♭–G. This chord is named C^m. Compare with ^Cm (^C–^E♭–^G).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "2^{24/41} \\approx 1.50042"
},
{
"math_id": 1,
"text": "3/2 = 1.5"
},
{
"math_id": 2,
"text": "2^{17/29} \\approx 1.50129"
},
{
"math_id": 3,
"text": "2^{7/12} \\approx 1.49831"
}
] |
https://en.wikipedia.org/wiki?curid=14887052
|
14889315
|
Weyl connection
|
In differential geometry, a Weyl connection (also called a Weyl structure) is a generalization of the Levi-Civita connection that makes sense on a conformal manifold. They were introduced by Hermann Weyl in an attempt to unify general relativity and electromagnetism. His approach, although it did not lead to a successful theory, led to further developments of the theory in conformal geometry, including a detailed study by Élie Cartan. They were also discussed in .
Specifically, let formula_0 be a smooth manifold, and formula_1 a conformal class of (non-degenerate) metric tensors on formula_0, where formula_2 iff formula_3 for some smooth function formula_4 (see Weyl transformation). A Weyl connection is a torsion free affine connection on formula_0 such that, for any formula_5,
formula_6
where formula_7 is a one-form depending on formula_8.
If formula_9 is a Weyl connection and formula_3, then
formula_10
so the one-form transforms by
formula_11
Thus the notion of a Weyl connection is conformally invariant, and the change in one-form is mediated by a de Rham cocycle.
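For readers who want to check the transformation rule quoted above, the computation is a one-liner using the Leibniz rule and the defining property of the Weyl connection (a verification sketch, not part of the source article):

```latex
\nabla h = \nabla\left(e^{2\gamma}g\right)
         = 2e^{2\gamma}\,d\gamma\otimes g + e^{2\gamma}\,\nabla g
         = e^{2\gamma}\left(2\,d\gamma + \alpha_g\right)\otimes g
         = \left(2\,d\gamma + \alpha_g\right)\otimes h.
```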
An example of a Weyl connection is the Levi-Civita connection for any metric in the conformal class formula_1, with formula_12. This is not the most general case, however, as any such Weyl connection has the property that the one-form formula_13 is closed for all formula_14 belonging to the conformal class. In general, the Ricci curvature of a Weyl connection is not symmetric. Its skew part is the dimension times the two-form formula_15, which is independent of formula_8 in the conformal class, because the difference between two formula_7 is a de Rham cocycle. Thus, by the Poincaré lemma, the Ricci curvature is symmetric if and only if the Weyl connection is locally the Levi-Civita connection of some element of the conformal class.
Weyl's original hope was that the form formula_7 could represent the vector potential of electromagnetism (a gauge dependent quantity), and formula_15 the field strength (a gauge invariant quantity). This synthesis is unsuccessful in part because the gauge group is wrong: electromagnetism is associated with a formula_16 gauge field, not an formula_17 gauge field.
showed that an affine connection is a Weyl connection if and only if its holonomy group is a subgroup of the conformal group. The possible holonomy algebras in Lorentzian signature were analyzed in .
A Weyl manifold is a manifold admitting a global Weyl connection. The global analysis of Weyl manifolds is actively being studied. For example, considered complete Weyl manifolds such that the Einstein vacuum equations hold, an Einstein–Weyl geometry, obtaining a complete characterization in three dimensions.
Weyl connections also have current applications in string theory and holography.
Weyl connections have been generalized to the setting of parabolic geometries, of which conformal geometry is a special case, in .
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "[g]"
},
{
"math_id": 2,
"text": "h,g\\in[g]"
},
{
"math_id": 3,
"text": "h=e^{2\\gamma}g"
},
{
"math_id": 4,
"text": "\\gamma"
},
{
"math_id": 5,
"text": "g\\in [g]"
},
{
"math_id": 6,
"text": "\\nabla g = \\alpha_g \\otimes g"
},
{
"math_id": 7,
"text": "\\alpha_g"
},
{
"math_id": 8,
"text": "g"
},
{
"math_id": 9,
"text": "\\nabla"
},
{
"math_id": 10,
"text": "\\nabla h = (2\\,d\\gamma+\\alpha_g)\\otimes h"
},
{
"math_id": 11,
"text": "\\alpha_{e^{2\\gamma}g} = 2\\,d\\gamma+\\alpha_g."
},
{
"math_id": 12,
"text": "\\alpha_g=0"
},
{
"math_id": 13,
"text": "\\alpha_h"
},
{
"math_id": 14,
"text": "h"
},
{
"math_id": 15,
"text": "d\\alpha_g"
},
{
"math_id": 16,
"text": "U(1)"
},
{
"math_id": 17,
"text": "\\mathbb R"
}
] |
https://en.wikipedia.org/wiki?curid=14889315
|
14889378
|
Conformal dimension
|
In mathematics, the conformal dimension of a metric space "X" is the infimum of the Hausdorff dimension over the conformal gauge of "X", that is, the class of all metric spaces quasisymmetric to "X".
Formal definition.
Let "X" be a metric space and formula_0 be the collection of all metric spaces that are quasisymmetric to "X". The conformal dimension of "X" is defined as
formula_1
Properties.
We have the following inequalities, for a metric space "X":
formula_2
The second inequality is true by definition. The first one is deduced from the fact that the topological dimension T is invariant under homeomorphisms, and thus can be defined as the infimum of the Hausdorff dimension over all spaces homeomorphic to "X".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{G}"
},
{
"math_id": 1,
"text": " \\mathrm{Cdim} X = \\inf_{Y \\in \\mathcal{G}} \\dim_H Y"
},
{
"math_id": 2,
"text": "\\dim_T X \\leq \\mathrm{Cdim} X \\leq \\dim_H X"
},
{
"math_id": 3,
"text": "\\mathbf{R}^N"
}
] |
https://en.wikipedia.org/wiki?curid=14889378
|
148911
|
Separated sets
|
Type of relation for subsets of a topological space
In topology and related branches of mathematics, separated sets are pairs of subsets of a given topological space that are related to each other in a certain way: roughly speaking, neither overlapping nor touching. The notion of when two sets are separated or not is important both to the notion of connected spaces (and their connected components) as well as to the separation axioms for topological spaces.
Separated sets should not be confused with separated spaces (defined below), which are somewhat related but different. Separable spaces are again a completely different topological concept.
Definitions.
There are various ways in which two subsets formula_0 and formula_1 of a topological space formula_2 can be considered to be separated. A most basic way in which two sets can be separated is if they are disjoint, that is, if their intersection is the empty set. This property has nothing to do with topology as such, but only set theory. Each of the properties below is stricter than disjointness, incorporating some topological information. The properties are presented in increasing order of specificity, each being a stronger notion than the preceding one.
A more restrictive property is that formula_0 and formula_1 are <templatestyles src="Template:Visible anchor/styles.css" />separated in formula_2 if each is disjoint from the other's closure:
formula_3
This property is known as the Hausdorff−Lennes Separation Condition. Since every set is contained in its closure, two separated sets automatically must be disjoint. The closures themselves do "not" have to be disjoint from each other; for example, the intervals formula_4 and formula_5 are separated in the real line formula_6 even though the point 1 belongs to both of their closures. A more general example is that in any metric space, two open balls formula_7 and formula_8 are separated whenever formula_9 The property of being separated can also be expressed in terms of derived set (indicated by the prime symbol): formula_0 and formula_1 are separated when they are disjoint and each is disjoint from the other's derived set, that is, formula_10 (As in the case of the first version of the definition, the derived sets formula_11 and formula_12 are not required to be disjoint from each other.)
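To see why two such open balls are separated when formula_9 holds, the triangle inequality suffices (a short verification added here for convenience):

```latex
x \in B_r(p) \;\Longrightarrow\; d(q,x) \ge d(p,q) - d(p,x) > (r+s) - r = s,
```

so every point of formula_7 lies outside the set of points at distance at most "s" from "q", which contains the closure of formula_8; by symmetry, each ball is disjoint from the other's closure.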
The sets formula_0 and formula_1 are <templatestyles src="Template:Visible anchor/styles.css" />separated by neighbourhoods if there are neighbourhoods formula_13 of formula_0 and formula_14 of formula_1 such that formula_13 and formula_14 are disjoint. (Sometimes you will see the requirement that formula_13 and formula_14 be "open" neighbourhoods, but this makes no difference in the end.) For the example of formula_15 and formula_16 you could take formula_17 and formula_18 Note that if any two sets are separated by neighbourhoods, then certainly they are separated. If formula_0 and formula_1 are open and disjoint, then they must be separated by neighbourhoods; just take formula_19 and formula_20 For this reason, separatedness is often used with closed sets (as in the normal separation axiom).
The sets formula_0 and formula_1 are <templatestyles src="Template:Visible anchor/styles.css" />separated by closed neighbourhoods if there is a closed neighbourhood formula_13 of formula_0 and a closed neighbourhood formula_14 of formula_1 such that formula_13 and formula_14 are disjoint. Our examples, formula_4 and formula_21 are not separated by closed neighbourhoods. You could make either formula_13 or formula_14 closed by including the point 1 in it, but you cannot make them both closed while keeping them disjoint. Note that if any two sets are separated by closed neighbourhoods, then certainly they are separated by neighbourhoods.
The sets formula_0 and formula_1 are <templatestyles src="Template:Visible anchor/styles.css" />separated by a continuous function if there exists a continuous function formula_22 from the space formula_2 to the real line formula_23 such that formula_24 and formula_25, that is, members of formula_0 map to 0 and members of formula_1 map to 1. (Sometimes the unit interval formula_26 is used in place of formula_23 in this definition, but this makes no difference.) In our example, formula_4 and formula_5 are not separated by a function, because there is no way to continuously define formula_27 at the point 1. If two sets are separated by a continuous function, then they are also separated by closed neighbourhoods; the neighbourhoods can be given in terms of the preimage of formula_27 as formula_28 and formula_29 where formula_30 is any positive real number less than formula_31
The sets formula_0 and formula_1 are <templatestyles src="Template:Visible anchor/styles.css" />precisely separated by a continuous function if there exists a continuous function formula_22 such that formula_32 and formula_33 (Again, you may also see the unit interval in place of formula_6 and again it makes no difference.) Note that if any two sets are precisely separated by a function, then they are separated by a function. Since formula_34 and formula_35 are closed in formula_6 only closed sets are capable of being precisely separated by a function, but just because two sets are closed and separated by a function does not mean that they are automatically precisely separated by a function (even a different function).
Relation to separation axioms and separated spaces.
The "separation axioms" are various conditions that are sometimes imposed upon topological spaces, many of which can be described in terms of the various types of separated sets. As an example we will define the T2 axiom, which is the condition imposed on separated spaces. Specifically, a topological space is "separated" if, given any two distinct points "x" and "y", the singleton sets {"x"} and {"y"} are separated by neighbourhoods.
Separated spaces are usually called "Hausdorff spaces" or "T2 spaces".
Relation to connected spaces.
Given a topological space "X", it is sometimes useful to consider whether it is possible for a subset "A" to be separated from its complement. This is certainly true if "A" is either the empty set or the entire space "X", but there may be other possibilities. A topological space "X" is "connected" if these are the only two possibilities. Conversely, if a nonempty subset "A" is separated from its own complement, and if the only subset of "A" to share this property is the empty set, then "A" is an "open-connected component" of "X". (In the degenerate case where "X" is itself the empty set formula_36, authorities differ on whether formula_36 is connected and whether formula_36 is an open-connected component of itself.)
Relation to topologically distinguishable points.
Given a topological space "X", two points "x" and "y" are "topologically distinguishable" if there exists an open set that one point belongs to but the other point does not. If "x" and "y" are topologically distinguishable, then the singleton sets {"x"} and {"y"} must be disjoint. On the other hand, if the singletons {"x"} and {"y"} are separated, then the points "x" and "y" must be topologically distinguishable. Thus for singletons, topological distinguishability is a condition in between disjointness and separatedness.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "A \\cap \\bar{B} = \\varnothing = \\bar{A} \\cap B."
},
{
"math_id": 4,
"text": "[0, 1)"
},
{
"math_id": 5,
"text": "(1, 2]"
},
{
"math_id": 6,
"text": "\\Reals,"
},
{
"math_id": 7,
"text": "B_r(p) = \\{x \\in X : d(p, x) < r\\}"
},
{
"math_id": 8,
"text": "B_s(q) = \\{x \\in X : d(q, x) < s\\}"
},
{
"math_id": 9,
"text": "d(p, q) \\geq r + s."
},
{
"math_id": 10,
"text": "A' \\cap B = \\varnothing = B' \\cap A."
},
{
"math_id": 11,
"text": "A'"
},
{
"math_id": 12,
"text": "B'"
},
{
"math_id": 13,
"text": "U"
},
{
"math_id": 14,
"text": "V"
},
{
"math_id": 15,
"text": "A = [0, 1)"
},
{
"math_id": 16,
"text": "B = (1, 2],"
},
{
"math_id": 17,
"text": "U = (-1, 1)"
},
{
"math_id": 18,
"text": "V = (1, 3)."
},
{
"math_id": 19,
"text": "U = A"
},
{
"math_id": 20,
"text": "V = B."
},
{
"math_id": 21,
"text": "(1, 2],"
},
{
"math_id": 22,
"text": "f : X \\to \\Reals"
},
{
"math_id": 23,
"text": "\\Reals"
},
{
"math_id": 24,
"text": "A \\subseteq f^{-1}(0)"
},
{
"math_id": 25,
"text": "B \\subseteq f^{-1}(1)"
},
{
"math_id": 26,
"text": "[0, 1]"
},
{
"math_id": 27,
"text": "f"
},
{
"math_id": 28,
"text": "U = f^{-1}[-c, c]"
},
{
"math_id": 29,
"text": "V = f^{-1}[1 - c, 1 + c],"
},
{
"math_id": 30,
"text": "c"
},
{
"math_id": 31,
"text": "1/2."
},
{
"math_id": 32,
"text": "A = f^{-1}(0)"
},
{
"math_id": 33,
"text": "B = f^{-1}(1)."
},
{
"math_id": 34,
"text": "\\{0\\}"
},
{
"math_id": 35,
"text": "\\{1\\}"
},
{
"math_id": 36,
"text": "\\emptyset"
}
] |
https://en.wikipedia.org/wiki?curid=148911
|
14891304
|
Abstract family of acceptors
|
An abstract family of acceptors (AFA) is a grouping of generalized acceptors. Informally, an acceptor is a device with a finite state control, a finite number of input symbols, and an internal store with a read and write function. Each acceptor has a start state and a set of accepting states. The device reads a sequence of symbols, transitioning from state to state for each input symbol. If the device ends in an accepting state, the device is said to accept the sequence of symbols. A family of acceptors is a set of acceptors with the same type of internal store. The study of AFA is part of AFL (abstract families of languages) theory.
Formal definitions.
AFA Schema.
An "AFA Schema" is an ordered 4-tuple formula_0, where
Abstract family of acceptors.
An "abstract family of acceptors (AFA)" is an ordered pair formula_23 such that:
For a given acceptor, let formula_40 be the relation on formula_41 defined by: For formula_42 in formula_43, formula_44 if there exists a formula_45 and formula_46 such that formula_45 is in formula_9, formula_47 is in formula_48 and formula_49. Let formula_50 denote the transitive closure of formula_40.
Let formula_23 be an AFA and formula_28 = (formula_29, formula_30, formula_31, formula_32, formula_33) be in formula_27. Define formula_51 to be the set formula_52. For each subset formula_53 of formula_27, let formula_54.
Define formula_55 to be the set formula_56. For each subset formula_53 of formula_27, let formula_57.
Informal discussion.
AFA Schema.
An AFA schema defines a store or memory with read and write function. The symbols in formula_1 are called "storage symbols" and the symbols in formula_2 are called "instructions". The write function formula_3 returns a new storage state given the current storage state and an instruction. The read function formula_5 returns the current state of memory. Condition (3) ensures the empty storage configuration is distinct from other configurations. Condition (4) requires there be an identity instruction that allows the state of memory to remain unchanged while the acceptor changes state or advances the input. Condition (5) ensures that the set of storage symbols for any given acceptor is finite.
Abstract family of acceptors.
An AFA is the set of all acceptors over a given pair of state and input alphabets which have the same storage mechanism defined by a given AFA schema. The formula_40 relation defines one step in the operation of an acceptor. formula_55 is the set of words accepted by acceptor formula_28 by having the acceptor enter an accepting state. formula_51 is the set of words accepted by acceptor formula_28 by having the acceptor simultaneously enter an accepting state and have an empty storage.
The abstract acceptors defined by AFA are generalizations of other types of acceptors (e.g. finite state automata, pushdown automata, etc.). They have a finite state control like other automata, but their internal storage may vary widely from the stacks and tapes used in classical automata.
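As a concrete, deliberately simplified illustration of the step relation and the two notions of acceptance described above, the following Python sketch instantiates one possible storage type - a pushdown store - and runs the resulting acceptor on a few words. The schema, acceptor, and all names below are invented for this example; it is not the general AFA construction.

```python
# Toy instance of an abstract acceptor: a finite control plus a storage
# defined by read/write functions, here specialised to a pushdown store.
# Everything below is an illustrative assumption, not the general theory.

def write(storage, instruction):
    """f: apply an instruction to the storage state (a tuple of symbols)."""
    op = instruction[0]
    if op == "id":
        return storage
    if op == "push":
        return storage + (instruction[1],)
    if op == "pop":
        return storage[:-1] if storage else None   # None marks an undefined move
    return None

def read(storage):
    """g: what the finite control may see - the top symbol, or '' when empty."""
    return storage[-1] if storage else ""

# Acceptor for { a^n b^n : n >= 1 }.
START, READ_A, READ_B, DONE = "q0", "qa", "qb", "qf"
FINAL = {DONE}

def delta(state, symbol, top):
    """Finite transition table: (state, input symbol or '', read value) -> moves."""
    if state in (START, READ_A) and symbol == "a":
        return {(READ_A, ("push", "A"))}
    if state in (READ_A, READ_B) and symbol == "b" and top == "A":
        return {(READ_B, ("pop",))}
    if state == READ_B and symbol == "" and top == "":
        return {(DONE, ("id",))}
    return set()

def accepts(word):
    """Search the step relation for an accepting configuration (q, '', empty)."""
    todo, seen = {(START, word, ())}, set()
    while todo:
        state, rest, storage = todo.pop()
        if state in FINAL and rest == "" and storage == ():
            return True
        if (state, rest, storage) in seen:
            continue
        seen.add((state, rest, storage))
        moves = [("", rest)]                        # epsilon move: consume no input
        if rest:
            moves.append((rest[0], rest[1:]))       # consume one input symbol
        for symbol, remaining in moves:
            for next_state, instruction in delta(state, symbol, read(storage)):
                new_storage = write(storage, instruction)
                if new_storage is not None:
                    todo.add((next_state, remaining, new_storage))
    return False

print([w for w in ["ab", "aabb", "aab", "ba", ""] if accepts(w)])   # ['ab', 'aabb']
```

Acceptance by final state together with empty storage corresponds to formula_51, while dropping the `storage == ()` test gives acceptance by final state alone, corresponding to formula_55.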
Results from AFL theory.
The main result from AFL theory is that a family of languages formula_58 is a full AFL if and only if formula_59 for some AFA formula_23. Equally important is the result that formula_58 is a full semi-AFL if and only if formula_60 for some AFA formula_23.
Origins.
Seymour Ginsburg of the University of Southern California and Sheila Greibach of Harvard University first presented their AFL theory paper at the IEEE Eighth Annual Symposium on Switching and Automata Theory in 1967.
|
[
{
"math_id": 0,
"text": "(\\Gamma, I, f, g)"
},
{
"math_id": 1,
"text": "\\Gamma"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "f : \\Gamma^* \\times I \\rightarrow \\Gamma^* \\cup \\{\\empty\\}"
},
{
"math_id": 5,
"text": "g"
},
{
"math_id": 6,
"text": "\\Gamma^*"
},
{
"math_id": 7,
"text": "g (\\epsilon) = \\{ \\epsilon \\}"
},
{
"math_id": 8,
"text": "\\epsilon"
},
{
"math_id": 9,
"text": "g(\\gamma)"
},
{
"math_id": 10,
"text": "\\gamma = \\epsilon"
},
{
"math_id": 11,
"text": "\\gamma"
},
{
"math_id": 12,
"text": "g(\\Gamma^*)"
},
{
"math_id": 13,
"text": "1_\\gamma"
},
{
"math_id": 14,
"text": "f(\\gamma', 1_\\gamma) = \\gamma'"
},
{
"math_id": 15,
"text": "\\gamma'"
},
{
"math_id": 16,
"text": "g(\\gamma')"
},
{
"math_id": 17,
"text": "\\Gamma_u"
},
{
"math_id": 18,
"text": "\\Gamma_1"
},
{
"math_id": 19,
"text": "\\Gamma_1^*"
},
{
"math_id": 20,
"text": "f(\\gamma,u) \\ne \\empty"
},
{
"math_id": 21,
"text": "f(\\gamma,u)"
},
{
"math_id": 22,
"text": "(\\Gamma_1 \\cup \\Gamma_u)^*"
},
{
"math_id": 23,
"text": "(\\Omega, \\mathcal{D})"
},
{
"math_id": 24,
"text": "\\Omega"
},
{
"math_id": 25,
"text": "K"
},
{
"math_id": 26,
"text": "\\Sigma"
},
{
"math_id": 27,
"text": "\\mathcal{D}"
},
{
"math_id": 28,
"text": "D"
},
{
"math_id": 29,
"text": "K_1"
},
{
"math_id": 30,
"text": "\\Sigma_1"
},
{
"math_id": 31,
"text": "\\delta"
},
{
"math_id": 32,
"text": "q_0"
},
{
"math_id": 33,
"text": "F"
},
{
"math_id": 34,
"text": "K_1 \\times (\\Sigma_1 \\cup \\{ \\epsilon \\}) \\times g(\\Gamma^*)"
},
{
"math_id": 35,
"text": "K_1 \\times I"
},
{
"math_id": 36,
"text": "G_D = \\{ \\gamma "
},
{
"math_id": 37,
"text": "\\delta(q,a,\\gamma) "
},
{
"math_id": 38,
"text": "q"
},
{
"math_id": 39,
"text": "a \\}"
},
{
"math_id": 40,
"text": "\\vdash"
},
{
"math_id": 41,
"text": "K_1 \\times \\Sigma_1^* \\times \\Gamma^*"
},
{
"math_id": 42,
"text": "a"
},
{
"math_id": 43,
"text": "\\Sigma_1 \\cup \\{ \\epsilon \\}"
},
{
"math_id": 44,
"text": "(p,aw,\\gamma) \\vdash (p',w,\\gamma')"
},
{
"math_id": 45,
"text": "\\overline{\\gamma}"
},
{
"math_id": 46,
"text": "u"
},
{
"math_id": 47,
"text": "(p',u)"
},
{
"math_id": 48,
"text": "\\delta(p,a,\\overline{\\gamma})"
},
{
"math_id": 49,
"text": "f(\\gamma,u)=\\gamma'"
},
{
"math_id": 50,
"text": "\\vdash^*"
},
{
"math_id": 51,
"text": "L(D)"
},
{
"math_id": 52,
"text": "\\{ w \\in \\Sigma_1^* | \\exists q \\in F . (q_0,w,\\epsilon) \\vdash^* (q,\\epsilon,\\epsilon)\\}"
},
{
"math_id": 53,
"text": "\\mathcal{E}"
},
{
"math_id": 54,
"text": "\\mathcal{L}(\\mathcal{E}) = \\{L(D) | D \\in \\mathcal{E} \\}"
},
{
"math_id": 55,
"text": "L_f(D)"
},
{
"math_id": 56,
"text": "\\{ w \\in \\Sigma_1^* | \\exists(q \\in F)\\exists(\\gamma \\in \\Gamma^*) . (q_0,w,\\epsilon) \\vdash^* (q,\\epsilon,\\gamma)\\}"
},
{
"math_id": 57,
"text": "\\mathcal{L}_f(\\mathcal{E}) = \\{L_f(D) | D \\in \\mathcal{E} \\}"
},
{
"math_id": 58,
"text": "\\mathcal{L}"
},
{
"math_id": 59,
"text": "\\mathcal{L} = \\mathcal{L}(\\mathcal{D})"
},
{
"math_id": 60,
"text": "\\mathcal{L} = \\mathcal{L}_f(\\mathcal{D})"
}
] |
https://en.wikipedia.org/wiki?curid=14891304
|
14892659
|
Stellar-wind bubble
|
A stellar-wind bubble is a cavity light-years across filled with hot gas blown into the interstellar medium by the high-velocity (several thousand km/s) stellar wind from a single massive star of type O or B. Weaker stellar winds also blow bubble structures, which are also called astrospheres. The heliosphere blown by the solar wind, within which all the major planets of the Solar System are embedded, is a small example of a stellar-wind bubble.
Stellar-wind bubbles have a two-shock structure. The freely-expanding stellar wind hits an inner termination shock, where its kinetic energy is thermalized, producing 106 K, X-ray-emitting plasma. The hot, high-pressure, shocked wind expands, driving a shock into the surrounding interstellar gas. If the surrounding gas is dense enough (number densities formula_0 or so), the swept-up gas radiatively cools far faster than the hot interior, forming a thin, relatively dense shell around the hot, shocked wind.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n > 0.1 \\mbox{ cm}^{-3}"
}
] |
https://en.wikipedia.org/wiki?curid=14892659
|
14892705
|
Compound of six cubes with rotational freedom
|
Polyhedral compound
This uniform polyhedron compound is a symmetric arrangement of 6 cubes, considered as square prisms. It can be constructed by superimposing six identical cubes, and then rotating them in pairs about the three axes that pass through the centres of two opposite cubic faces. Each cube is rotated by an equal (and opposite, within a pair) angle "θ".
When "θ" = 0, all six cubes coincide. When "θ" is 45 degrees, the cubes coincide in pairs yielding (two superimposed copies of) the compound of three cubes.
Cartesian coordinates.
Cartesian coordinates for the vertices of this compound are all the permutations of
formula_0
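A small script makes the vertex count and the two degenerate cases easy to check (an illustrative sketch; the rounding tolerance is an arbitrary choice):

```python
# Enumerate the vertex set described above for a given angle theta:
# all coordinate permutations and sign choices of (cos t + sin t, cos t - sin t, 1).
from itertools import permutations, product
from math import cos, sin, radians

def vertices(theta_deg):
    t = radians(theta_deg)
    base = (cos(t) + sin(t), cos(t) - sin(t), 1.0)
    points = set()
    for perm in permutations(base):
        for signs in product((1, -1), repeat=3):
            points.add(tuple(round(s * c, 9) for s, c in zip(signs, perm)))
    return points

print(len(vertices(20)))   # 48: six cubes with 8 vertices each, for a generic angle
print(len(vertices(45)))   # 24: the cubes coincide in pairs (compound of three cubes)
print(len(vertices(0)))    # 8: all six cubes coincide
```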
|
[
{
"math_id": 0,
"text": "(\\pm(\\cos(\\theta)+\\sin(\\theta)), \\pm(\\cos(\\theta)-\\sin(\\theta)), \\pm1)."
}
] |
https://en.wikipedia.org/wiki?curid=14892705
|
14892992
|
Lamb waves
|
Elastic waves propagating in solid plates or spheres
Lamb waves propagate in solid plates or spheres. They are elastic waves whose particle motion lies in the plane that contains the direction of wave propagation and the direction perpendicular to the plate. In 1917, the English mathematician Horace Lamb published his classic analysis and description of acoustic waves of this type. Their properties turned out to be quite complex. An infinite medium supports just two wave modes traveling at unique velocities; but plates support two infinite sets of Lamb wave modes, whose velocities depend on the relationship between wavelength and plate thickness.
Since the 1990s, the understanding and utilization of Lamb waves have advanced greatly, thanks to the rapid increase in the availability of computing power. Lamb's theoretical formulations have found substantial practical application, especially in the field of non-destructive testing.
The term Rayleigh–Lamb waves embraces the Rayleigh wave, a type of wave that propagates along a single surface. Both Rayleigh and Lamb waves are constrained by the elastic properties of the surface(s) that guide them.
Lamb's characteristic equations.
In general, elastic waves in solid materials are guided by the boundaries of the media in which they propagate. An approach to guided wave propagation, widely used in physical acoustics, is to seek sinusoidal solutions to the wave equation for linear elastic waves subject to boundary conditions representing the structural geometry. This is a classic eigenvalue problem.
Waves in plates were among the first guided waves to be analyzed in this way. The analysis was developed and published in 1917 by Horace Lamb, a leader in the mathematical physics of his day.
Lamb's equations were derived by setting up formalism for a solid plate having infinite extent in the "x" and "y" directions, and thickness "d" in the "z" direction. Sinusoidal solutions to the wave equation were postulated, having x- and z-displacements of the form
formula_0
formula_1
This form represents sinusoidal waves propagating in the "x" direction with wavelength 2π/k and frequency ω/2π. Displacement is a function of "x", "z", "t" only; there is no displacement in the "y" direction and no variation of any physical quantities in the "y" direction.
The physical boundary condition for the free surfaces of the plate is that the component of stress in the "z" direction at "z" = +/- "d"/2 is zero.
Applying these two conditions to the above-formalized solutions to the wave equation, a pair of characteristic equations can be found. These are:
formula_2
for symmetric modes and
formula_3
for asymmetric modes, where
formula_4
Inherent in these equations is a relationship between the angular frequency ω and the wave number k. Numerical methods are used to find the phase velocity "cp = fλ = ω/k", and the group velocity "cg" = "dω"/"dk", as functions of "d"/"λ" or "fd". "cl" and "ct" are the longitudinal wave and shear wave velocities respectively.
The solution of these equations also reveals the precise form of the particle motion, which equations (1) and (2) represent in generic form only. It is found that equation (3) gives rise to a family of waves whose motion is symmetrical about the midplane of the plate (the plane z = 0), while equation (4) gives rise to a family of waves whose motion is antisymmetric about the midplane. Figure 1 illustrates a member of each family.
Lamb’s characteristic equations were established for waves propagating in an infinite plate - a homogeneous, isotropic solid bounded by two parallel planes beyond which no wave energy can propagate. In formulating his problem, Lamb confined the components of particle motion to the direction of the plate normal ("z"-direction) and the direction of wave propagation ("x"-direction). By definition, Lamb waves have no particle motion in the "y"-direction. Motion in the "y"-direction in plates is found in the so-called SH or shear-horizontal wave modes. These have no motion in the "x"- or "z"-directions, and are thus complementary to the Lamb wave modes. These two are the only wave types which can propagate with straight, infinite wave fronts in a plate as defined above.
Velocity dispersion inherent in the characteristic equations.
Lamb waves exhibit velocity dispersion; that is, their velocity of propagation "c" depends on the frequency (or wavelength), as well as on the elastic constants and density of the material. This phenomenon is central to the study and understanding of wave behavior in plates. Physically, the key parameter is the ratio of plate thickness "d" to wavelength formula_7. This ratio determines the effective stiffness of the plate and hence the velocity of the wave. In technological applications, a more practical parameter readily derived from this is used, namely the product of thickness and frequency:
The relationship between velocity and frequency (or wavelength) is inherent in the characteristic equations. In the case of the plate, these equations are not simple and their solution requires numerical methods. This was an intractable problem until the advent of the digital computer forty years after Lamb's original work. The publication of computer-generated "dispersion curves" by Viktorov in the former Soviet Union, Firestone followed by Worlton in the United States, and eventually many others brought Lamb wave theory into the realm of practical applicability. The free "Dispersion Calculator" (DC) software allows computation of dispersion diagrams for isotropic plates and multilayered anisotropic specimens. Experimental waveforms observed in plates can be understood by interpretation with reference to the dispersion curves.
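To give a flavour of the numerical root-finding involved, the sketch below scans candidate phase velocities for the symmetric characteristic equation, using the steel-like velocities quoted later in this article and an arbitrary 19 mm plate at 200 kHz. It is only a coarse illustration - grid minima are candidates that a real solver would bracket and refine - and it is not a substitute for dedicated software such as the Dispersion Calculator.

```python
# Coarse scan for symmetric-mode roots of Lamb's characteristic equation (3),
# written in cross-multiplied form so there are no divisions.  Illustrative only.
import cmath

cl, ct = 5890.0, 3260.0          # longitudinal and shear velocities, m/s (steel-like)
d = 0.019                        # plate thickness, m
f = 200e3                        # frequency, Hz
w = 2 * cmath.pi * f             # angular frequency

def symmetric_residual(cp):
    """tanh(beta d/2)(k^2+beta^2)^2 - 4 alpha beta k^2 tanh(alpha d/2); zero at a root."""
    k = w / cp
    alpha = cmath.sqrt(k**2 - (w / cl) ** 2)
    beta = cmath.sqrt(k**2 - (w / ct) ** 2)
    return (cmath.tanh(beta * d / 2) * (k**2 + beta**2) ** 2
            - 4 * alpha * beta * k**2 * cmath.tanh(alpha * d / 2))

# Scan phase velocities and report local minima of |residual| as candidate roots.
# (The cross-multiplied form is trivially zero at cp = ct, so candidates near the
# shear velocity should be inspected by hand.)
cps = [1001.0 + 10.0 * i for i in range(900)]
mags = [abs(symmetric_residual(cp)) for cp in cps]
candidates = [cps[i] for i in range(1, len(mags) - 1)
              if mags[i] < mags[i - 1] and mags[i] < mags[i + 1]]
print("candidate symmetric-mode phase velocities (m/s):", candidates)
```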
Dispersion curves - graphs that show relationships between wave velocity, wavelength and frequency in dispersive systems - can be presented in various forms. The form that gives the greatest insight into the underlying physics has formula_5 (angular frequency) on the "y"-axis and "k" (wave number) on the "x"-axis. The form used by Viktorov, that brought Lamb waves into practical use, has wave velocity on the "y"-axis and formula_8, the thickness/wavelength ratio, on the "x"-axis. The most practical form of all, for which credit is due to J. and H. Krautkrämer as well as to Floyd Firestone (who, incidentally, coined the phrase "Lamb waves") has wave velocity on the y-axis and "fd", the frequency-thickness product, on the "x"-axis.
Lamb's characteristic equations indicate the existence of two entire families of sinusoidal wave modes in infinite plates of width formula_6. This stands in contrast with the situation in unbounded media where there are just two wave modes, the longitudinal wave and the transverse or shear wave. As in Rayleigh waves which propagate along single free surfaces, the particle motion in Lamb waves is elliptical with its "x" and "z" components depending on the depth within the plate. In one family of modes, the motion is symmetrical about the midthickness plane. In the other family it is antisymmetric.
The phenomenon of velocity dispersion leads to a rich variety of experimentally observable waveforms when acoustic waves propagate in plates. It is the group velocity "cg", not the above-mentioned phase velocity "c" or "cp", that determines the modulations seen in the observed waveform. The appearance of the waveforms depends critically on the frequency range selected for observation. The flexural and extensional modes are relatively easy to recognize and this has been advocated as a technique of nondestructive testing.
The zero-order modes.
The symmetrical and antisymmetric zero-order modes deserve special attention. These modes have "nascent frequencies" of zero. Thus they are the only modes that exist over the entire frequency spectrum from zero to indefinitely high frequencies. In the low frequency range (i.e. when the wavelength is greater than the plate thickness) these modes are often called the “extensional mode” and the “flexural mode" respectively, terms that describe the nature of the motion and the elastic stiffnesses that govern the velocities of propagation. The elliptical particle motion is mainly in the plane of the plate for the symmetrical, extensional mode and perpendicular to the plane of the plate for the antisymmetric, flexural mode. These characteristics change at higher frequencies.
These two modes are the most important because (a) they exist at all frequencies and (b) in most practical situations they carry more energy than the higher-order modes.
The zero-order symmetrical mode (designated S0) travels at the "plate velocity" in the low-frequency regime where it is properly called the "extensional mode". In this regime, the plate stretches in the direction of propagation and contracts correspondingly in the thickness direction. As the frequency increases and the wavelength becomes comparable with the plate thickness, curving of the plate starts to have a significant influence on its effective stiffness. The phase velocity drops smoothly while the group velocity drops somewhat precipitously towards a minimum. At higher frequencies yet, both the phase velocity and the group velocity converge towards the Rayleigh wave velocity - the phase velocity from above, and the group velocity from below.
In the low-frequency limit for the extensional mode, the z- and x-components of the surface displacement are in quadrature and the ratio of their amplitudes is given by:
formula_9
where formula_10 is Poisson's ratio.
The zero-order antisymmetric mode (designated A0) is highly dispersive in the low frequency regime where it is properly called the "flexural mode" or the "bending mode". For very low frequencies (very thin plates) the phase and group velocities are both proportional to the square root of the frequency; the group velocity is twice the phase velocity. This simple relationship is a consequence of the stiffness/thickness relationship for thin plates in bending. At higher frequencies where the wavelength is no longer much greater than the plate thickness, these relationships break down. The phase velocity rises less and less quickly and converges towards the Rayleigh wave velocity in the high frequency limit. The group velocity passes through a maximum, a little faster than the shear wave velocity, when the wavelength is approximately equal to the plate thickness. It then converges, from above, to the Rayleigh wave velocity in the high frequency limit.
In experiments that allow both extensional and flexural modes to be excited and detected, the extensional mode often appears as a higher-velocity, lower-amplitude precursor to the flexural mode. The flexural mode is the more easily excited of the two and often carries most of the energy.
The higher-order modes.
As the frequency is raised, the higher-order wave modes make their appearance in addition to the zero-order modes. Each higher-order mode is “born” at a resonant frequency of the plate, and exists only above that frequency. For example, in a 3⁄4 inch (19 mm) thick steel plate at a frequency of 200 kHz, the first four Lamb wave modes are present, and at 300 kHz, the first six. The first few higher-order modes can be distinctly observed under favorable experimental conditions. Under less than favorable conditions they overlap and cannot be distinguished.
The higher-order Lamb modes are characterized by nodal planes within the plate, parallel to the plate surfaces. Each of these modes exists only above a certain frequency which can be called its "nascent frequency". There is no upper frequency limit for any of the modes. The nascent frequencies can be pictured as the resonant frequencies for longitudinal or shear waves propagating perpendicular to the plane of the plate, i.e.
formula_11
where "n" is any positive integer. Here "c" can be either the longitudinal wave velocity or the shear wave velocity, and for each resulting set of resonances the corresponding Lamb wave modes are alternately symmetrical and antisymmetric. The interplay of these two sets results in a pattern of nascent frequencies that at first glance seems irregular. For example, in a 3/4 inch (19mm) thick steel plate having longitudinal and shear velocities of 5890 m/s and 3260 m/s respectively, the nascent frequencies of the antisymmetric modes A1 and A2 are 86 kHz and 310 kHz respectively, while the nascent frequencies of the symmetric modes S1, S2 and S3 are 155 kHz, 172 kHz and 343 kHz respectively.
At its nascent frequency, each of these modes has an infinite phase velocity and a group velocity of zero. In the high frequency limit, the phase and group velocities of all these modes converge to the shear wave velocity. Because of these convergences, the Rayleigh and shear velocities (which are very close to one another) are of major importance in thick plates. Simply stated in terms of the material of greatest engineering significance, most of the high-frequency wave energy that propagates long distances in steel plates is traveling at 3000–3300 m/s.
Particle motion in the Lamb wave modes is in general elliptical, having components both perpendicular to and parallel to the plane of the plate. These components are in quadrature, i.e. they have a 90° phase difference. The relative magnitude of the components is a function of frequency. For certain frequency-thickness products, the amplitude of one component passes through zero so that the motion is entirely perpendicular or parallel to the plane of the plate. For particles on the plate surface, these conditions occur when the Lamb wave phase velocity is √2"c""t" or for symmetric modes only "c""l", respectively. These directionality considerations are important when considering the radiation of acoustic energy from plates into adjacent fluids.
The particle motion is also entirely perpendicular or entirely parallel to the plane of the plate, at a mode's nascent frequency. Close to the nascent frequencies of modes corresponding to longitudinal-wave resonances of the plate, their particle motion will be almost entirely perpendicular to the plane of the plate; and near the shear-wave resonances, parallel.
J. and H. Krautkrämer have pointed out that Lamb waves can be conceived as a system of longitudinal and shear waves propagating at suitable angles across and along the plate. These waves reflect and mode-convert and combine to produce a sustained, coherent wave pattern. For this coherent wave pattern to be formed, the plate thickness has to be just right relative to the angles of propagation and wavelengths of the underlying longitudinal and shear waves; this requirement leads to the velocity dispersion relationships.
Lamb waves with cylindrical symmetry; plate waves from point sources.
While Lamb's analysis assumed a straight wavefront, it has been shown that the same characteristic equations apply to cylindrical plate waves (i.e. waves propagating outwards from a line source, the line lying perpendicular to the plate). The difference is that whereas the "carrier" for the straight wavefront is a sinusoid, the "carrier" for the axisymmetric wave is a Bessel function. The Bessel function takes care of the singularity at the source, then converges towards sinusoidal behavior at great distances.
These cylindrical waves are the eigenfunctions from which the plate's response to point disturbances can be composed. Thus a plate's response to a point disturbance can be expressed as a combination of Lamb waves, plus evanescent terms in the near field. The overall result can be loosely visualized as a pattern of circular wavefronts, like ripples from a stone dropped into a pond but changing more profoundly in form as they progress outwards. Lamb wave theory relates only to motion in the (r,z) direction; transverse motion is a different topic.
Guided Lamb waves.
This phrase is quite often encountered in non-destructive testing. "Guided Lamb Waves" can be defined as Lamb-like waves that are guided by the finite dimensions of real test objects. To add the prefix "guided" to the phrase "Lamb wave" is thus to recognize that Lamb's infinite plate is, in reality, nowhere to be found.
In reality we deal with finite plates, or plates wrapped into cylindrical pipes or vessels, or plates cut into thin strips, etc. Lamb wave theory often gives a very good account of much of the wave behavior of such structures. It will not give a perfect account, and that is why the phrase "Guided Lamb Waves" is more practically relevant than "Lamb Waves". One question is how the velocities and mode shapes of the Lamb-like waves will be influenced by the real geometry of the part. For example, the velocity of a Lamb-like wave in a thin cylinder will depend slightly on the radius of the cylinder and on whether the wave is traveling along the axis or round the circumference. Another question is what completely different acoustical behaviors and wave modes may be present in the real geometry of the part. For example, a cylindrical pipe has flexural modes associated with bodily movement of the whole pipe, quite different from the Lamb-like flexural mode of the pipe wall.
Lamb waves in ultrasonic testing.
The purpose of ultrasonic testing is usually to find and characterize individual flaws in the object being tested. Such flaws are detected when they reflect or scatter the impinging wave and the reflected or scattered wave reaches the search unit with sufficient amplitude.
Traditionally, ultrasonic testing has been conducted with waves whose wavelength is very much shorter than the dimension of the part being inspected. In this high-frequency regime, the ultrasonic inspector uses waves that approximate to the infinite-medium longitudinal and shear wave modes, zig-zagging to and fro across the thickness of the plate. Although the Lamb wave pioneers worked on non-destructive testing applications and drew attention to the theory, widespread use did not come about until the 1990s when computer programs for calculating dispersion curves and relating them to experimentally observable signals became much more widely available. These computational tools, along with a more widespread understanding of the nature of Lamb waves, made it possible to devise techniques for nondestructive testing using wavelengths that are comparable with or greater than the thickness of the plate. At these longer wavelengths the attenuation of the wave is less so that flaws can be detected at greater distances.
A major challenge and skill in the use of Lamb waves for ultrasonic testing is the generation of specific modes at specific frequencies that will propagate well and give clean return "echoes". This requires careful control of the excitation. Techniques for this include the use of comb transducers, wedges, waves from liquid media and electromagnetic acoustic transducers (EMAT's).
Lamb waves in acousto-ultrasonic testing.
Acousto-ultrasonic testing differs from ultrasonic testing in that it was conceived as a means of assessing damage (and other material attributes) distributed over substantial areas, rather than characterizing flaws individually. Lamb waves are well suited to this concept, because they irradiate the whole plate thickness and propagate substantial distances with consistent patterns of motion.
Lamb waves in acoustic emission testing.
Acoustic emission uses much lower frequencies than traditional ultrasonic testing, and the sensor is typically expected to detect active flaws at distances up to several meters. A large fraction of the structures customarily tested with acoustic emission are fabricated from steel plate - tanks, pressure vessels, pipes and so on. Lamb wave theory is, therefore, the prime theory for explaining the signal forms and propagation velocities that are observed when conducting acoustic emission testing. The analysis of Acoustic Emission signals via guided wave theory is referred to as Modal Acoustic Emission (MAE). Substantial improvements in the accuracy of source location (a major technique of AE testing) can be achieved through good understanding and skillful utilization of the Lamb wave body of knowledge.
Ultrasonic and acoustic emission testing contrasted.
An arbitrary mechanical excitation applied to a plate will generate a multiplicity of Lamb waves carrying energy across a range of frequencies. Such is the case for the acoustic emission wave. In acoustic emission testing, the challenge is to recognize the multiple Lamb wave components in the received waveform and to interpret them in terms of source motion. This contrasts with the situation in ultrasonic testing, where the first challenge is to generate a single, well-controlled Lamb wave mode at a single frequency. But even in ultrasonic testing, mode conversion takes place when the generated Lamb wave interacts with flaws, so the interpretation of reflected signals compounded from multiple modes becomes a means of flaw characterization.
|
[
{
"math_id": 0,
"text": "\\xi = A_x f_x(z) e^{i(\\omega t - kx)} \\quad \\quad (1) "
},
{
"math_id": 1,
"text": "\\zeta = A_z f_z(z) e^{i(\\omega t - k x)} \\quad \\quad (2) "
},
{
"math_id": 2,
"text": "\n\\frac{\\tanh(\\beta d / 2)} {\\tanh(\\alpha d / 2)} = \\frac\n{4 \\alpha \\beta k^2}\n{(k^2 + \\beta^2)^2}\\ \\quad \\quad \\quad \\quad (3)\n"
},
{
"math_id": 3,
"text": "\n\\frac{\\tanh(\\beta d / 2)} {\\tanh(\\alpha d / 2)} = \\frac\n{(k^2 + \\beta^2)^2}\n{4 \\alpha \\beta k^2}\\ \\quad \\quad \\quad \\quad (4)\n"
},
{
"math_id": 4,
"text": " \\alpha^2 = k^2-\\frac{\\omega^2}{c_l^2}\n\\quad \\quad \\text{and}\\quad\\quad \\beta^2 = k^2-\\frac{\\omega^2}{c_t^2}. "
},
{
"math_id": 5,
"text": "\\omega"
},
{
"math_id": 6,
"text": "d"
},
{
"math_id": 7,
"text": "\\lambda"
},
{
"math_id": 8,
"text": "d/\\lambda"
},
{
"math_id": 9,
"text": " \\frac {a_z}{a_x} = \\frac{\\pi \\nu}{(1 - \\nu)} . \\frac{d}{ \\lambda}"
},
{
"math_id": 10,
"text": "\\nu"
},
{
"math_id": 11,
"text": " d = \\frac{n \\lambda}{2} \\quad \\quad \\text{or} \\quad \\quad\nf = \\frac{nc}{2d}"
}
] |
https://en.wikipedia.org/wiki?curid=14892992
|
14893994
|
Ordered weighted averaging
|
In applied mathematics, specifically in fuzzy logic, the ordered weighted averaging (OWA) operators provide a parameterized class of mean type aggregation operators. They were introduced by Ronald R. Yager.
Many notable mean operators such as the max, arithmetic average, median and min, are members of this class. They have been widely used in computational intelligence because of their ability to model linguistically expressed aggregation instructions.
Definition.
An OWA operator of dimension formula_0 is a mapping formula_1 that has an associated collection of weights formula_2 lying in the unit interval and summing to one and with
formula_3
where formula_4 is the "j"th largest of the formula_5.
By choosing different "W" one can implement different aggregation operators. The OWA operator is a non-linear operator as a result of the process of determining the "b""j".
formula_6 if formula_7 and formula_8 for formula_9
formula_10 if formula_11 and formula_8 for formula_12
formula_13 if formula_14 for all formula_15
Properties.
The OWA operator is a mean operator. It is bounded, monotonic, symmetric, and idempotent, as defined below.
Characterizing features.
Two features have been used to characterize the OWA operators. The first is the attitudinal character, also called "orness". This is defined as
formula_16
It is known that formula_17.
In addition, A − C(max) = 1, A − C(ave) = A − C(med) = 0.5 and A − C(min) = 0. Thus A − C goes from 1 to 0 as we go from Max to Min aggregation. The attitudinal character characterizes the similarity of the aggregation to the OR operation (OR is defined as the Max).
The second feature is the dispersion. This is defined as
formula_18
An alternative definition is formula_19 The dispersion characterizes how uniformly the arguments are being used.
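The definition and both characterizing features are straightforward to compute; the following Python sketch (with arbitrary example data) reproduces the max, min and average special cases listed above:

```python
# Minimal sketch of the OWA operator, its orness (attitudinal character)
# and its dispersion.  The weights and arguments are arbitrary examples.
import math

def owa(weights, values):
    """F(a_1,...,a_n) = sum_j w_j * b_j, where b_j is the j-th largest argument."""
    b = sorted(values, reverse=True)
    return sum(w * x for w, x in zip(weights, b))

def orness(weights):
    """A-C(W) = (1/(n-1)) * sum_j (n - j) * w_j."""
    n = len(weights)
    return sum((n - j) * w for j, w in enumerate(weights, start=1)) / (n - 1)

def dispersion(weights):
    """H(W) = -sum_j w_j ln(w_j), with the convention that zero weights contribute 0."""
    return -sum(w * math.log(w) for w in weights if w > 0)

a = [0.2, 0.9, 0.5, 0.4]
cases = {"max": [1, 0, 0, 0], "min": [0, 0, 0, 1], "average": [0.25] * 4}
for name, w in cases.items():
    print(name, owa(w, a), "orness:", orness(w), "dispersion:", dispersion(w))
```

For the three weight vectors above, the orness comes out as 1, 0 and 0.5 respectively, matching the statement that A − C runs from 1 (Max) down to 0 (Min).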
Type-1 OWA aggregation operators.
The above Yager's OWA operators are used to aggregate the crisp values. Can we aggregate fuzzy sets in the OWA mechanism? The
Type-1 OWA operators have been proposed for this purpose.
So the type-1 OWA operators provide us with a new technique for directly aggregating uncertain information with uncertain weights via the OWA mechanism in soft decision making and data mining, where these uncertain objects are modelled by fuzzy sets.
The type-1 OWA operator is defined according to the alpha-cuts of fuzzy sets as follows:
Given the "n" linguistic weights formula_20 in the form of fuzzy sets defined on the domain of discourse formula_21, then for each formula_22, an formula_23-level type-1 OWA operator with formula_23-level sets formula_24 to aggregate the formula_23-cuts of fuzzy sets formula_25 is given as
formula_26
where formula_27, and formula_28 is a permutation function such that formula_29, i.e., formula_30 is the formula_31th largest
element in the set formula_32.
The computation of the type-1 OWA output is implemented by computing the left end-points and right end-points of the intervals formula_33:
formula_34 and formula_35
where formula_36. Then membership function of resulting aggregation fuzzy set is:
formula_37
For the left end-points, we need to solve the following programming problem:
formula_38
while for the right end-points, we need to solve the following programming problem:
formula_39
A fast method has been presented for solving these two programming problems, so that the type-1 OWA aggregation operation can be performed efficiently.
OWA for committee voting.
Amanatidis, Barrot, Lang, Markakis and Ries present voting rules for multi-issue voting, based on OWA and the Hamming distance. Barrot, Lang and Yokoo study the manipulability of these rules.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\ n "
},
{
"math_id": 1,
"text": " F: \\mathbb{R}^n \\rightarrow \\mathbb{R} "
},
{
"math_id": 2,
"text": " \\ W = [w_1, \\ldots, w_n] "
},
{
"math_id": 3,
"text": " F(a_1, \\ldots , a_n) = \\sum_{j=1}^n w_j b_j"
},
{
"math_id": 4,
"text": " b_j "
},
{
"math_id": 5,
"text": " a_i "
},
{
"math_id": 6,
"text": " \\ F(a_1, \\ldots, a_n) = \\max(a_1, \\ldots, a_n) "
},
{
"math_id": 7,
"text": " \\ w_1 = 1 "
},
{
"math_id": 8,
"text": " \\ w_j = 0 "
},
{
"math_id": 9,
"text": " j \\ne 1 "
},
{
"math_id": 10,
"text": " \\ F(a_1, \\ldots, a_n) = \\min(a_1, \\ldots, a_n) "
},
{
"math_id": 11,
"text": " \\ w_n = 1 "
},
{
"math_id": 12,
"text": " j \\ne n "
},
{
"math_id": 13,
"text": " \\ F(a_1, \\ldots, a_n) = \\mathrm{average}(a_1, \\ldots, a_n) "
},
{
"math_id": 14,
"text": " \\ w_j = \\frac{1}{n} "
},
{
"math_id": 15,
"text": " j \\in [1, n] "
},
{
"math_id": 16,
"text": "A-C(W)= \\frac{1}{n-1} \\sum_{j=1}^n (n - j) w_j. "
},
{
"math_id": 17,
"text": " A-C(W) \\in [0, 1] "
},
{
"math_id": 18,
"text": "H(W) = -\\sum_{j=1}^n w_j \\ln (w_j)."
},
{
"math_id": 19,
"text": "E(W) = \\sum_{j=1}^n w_j^2 ."
},
{
"math_id": 20,
"text": "\\left\\{ {W^i} \\right\\}_{i =1}^n "
},
{
"math_id": 21,
"text": "U = [0,\\;\\;1]"
},
{
"math_id": 22,
"text": "\\alpha \\in [0,\\;1]"
},
{
"math_id": 23,
"text": "\\alpha "
},
{
"math_id": 24,
"text": "\\left\\{ {W_\\alpha ^i } \\right\\}_{i = 1}^n "
},
{
"math_id": 25,
"text": "\\left\\{ {A^i} \\right\\}_{i =1}^n "
},
{
"math_id": 26,
"text": "\n\\Phi_\\alpha \\left( {A_\\alpha ^1 , \\ldots ,A_\\alpha ^n } \\right) =\\left\\{ {\\frac{\\sum\\limits_{i = 1}^n {w_i a_{\\sigma (i)} } }{\\sum\\limits_{i = 1}^n {w_i } }\\left| {w_i \\in W_\\alpha ^i ,\\;a_i } \\right. \\in A_\\alpha ^i ,\\;i = 1, \\ldots ,n} \\right\\}"
},
{
"math_id": 27,
"text": "W_\\alpha ^i= \\{w| \\mu_{W_i }(w) \\geq \\alpha \\}, A_\\alpha ^i=\\{ x| \\mu _{A_i }(x)\\geq \\alpha \\}"
},
{
"math_id": 28,
"text": "\\sigma :\\{\\;1, \\ldots ,n\\;\\} \\to \\{\\;1, \\ldots ,n\\;\\}"
},
{
"math_id": 29,
"text": "a_{\\sigma (i)} \\ge a_{\\sigma (i + 1)} ,\\;\\forall \\;i = 1, \\ldots ,n - 1"
},
{
"math_id": 30,
"text": "a_{\\sigma (i)} "
},
{
"math_id": 31,
"text": "i"
},
{
"math_id": 32,
"text": "\\left\\{ {a_1 , \\ldots ,a_n } \\right\\}"
},
{
"math_id": 33,
"text": "\\Phi _\\alpha \\left( {A_\\alpha ^1 , \\ldots ,A_\\alpha ^n } \\right)"
},
{
"math_id": 34,
"text": "\\Phi _\\alpha \\left( {A_\\alpha ^1 , \\ldots ,A_\\alpha ^n } \\right)_{-} "
},
{
"math_id": 35,
"text": "\n\\Phi _\\alpha \\left( {A_\\alpha ^1 , \\ldots ,A_\\alpha ^n } \\right)_ {+},"
},
{
"math_id": 36,
"text": "A_\\alpha ^i=[A_{\\alpha-}^i, A_{\\alpha+}^i], W_\\alpha ^i=[W_{\\alpha-}^i, W_{\\alpha+}^i]"
},
{
"math_id": 37,
"text": "\\mu _{G} (x) = \\mathop \\vee _{\\alpha :x \\in \\Phi _\\alpha \\left( {A_\\alpha ^1 , \\cdots\n,A_\\alpha ^n } \\right)_\\alpha } \\alpha "
},
{
"math_id": 38,
"text": " \\Phi _\\alpha \\left( {A_\\alpha ^1 , \\cdots ,A_\\alpha ^n } \\right)_{-} = \\min\\limits_{\\begin{array}{l} W_{\\alpha - }^i \\le w_i \\le W_{\\alpha + }^i A_{\\alpha - }^i \\le a_i \\le A_{\\alpha + }^i \\end{array}} \\sum\\limits_{i = 1}^n {w_i a_{\\sigma (i)} / \\sum\\limits_{i = 1}^n {w_i } } "
},
{
"math_id": 39,
"text": "\\Phi _\\alpha \\left( {A_\\alpha ^1 , \\cdots , A_\\alpha ^n } \\right)_{+} = \\max\\limits_{\\begin{array}{l} W_{\\alpha - }^i \\le w_i \\le W_{\\alpha + }^i A_{\\alpha - }^i \\le a_i \\le A_{\\alpha + }^i \\end{array}} \\sum\\limits_{i = 1}^n {w_i a_{\\sigma (i)} / \\sum\\limits_{i =\n1}^n {w_i } } "
}
] |
https://en.wikipedia.org/wiki?curid=14893994
|
14894187
|
Compound of eight octahedra with rotational freedom
|
Polyhedral compound
The compound of eight octahedra with rotational freedom is a uniform polyhedron compound. It is composed of a symmetric arrangement of 8 octahedra, considered as triangular antiprisms. It can be constructed by superimposing eight identical octahedra, and then rotating them in pairs about the four axes that pass through the centres of two opposite octahedral faces. Each octahedron is rotated by an equal (and opposite, within a pair) angle "θ".
It can be constructed by superimposing two compounds of four octahedra with rotational freedom, one with a rotation of "θ", and the other with a rotation of −"θ".
When "θ" = 0, all eight octahedra coincide. When "θ" is 60 degrees, the octahedra coincide in pairs yielding (two superimposed copies of) the compound of four octahedra.
Cartesian coordinates.
Cartesian coordinates for the vertices of this compound are all the permutations of
formula_0
|
[
{
"math_id": 0,
"text": "(\\pm(1 - \\cos(\\theta) + \\sqrt{3} \\sin(\\theta)), \\pm(1 - \\cos(\\theta) - \\sqrt{3} \\sin(\\theta)), \\pm(1 + 2 \\cos(\\theta)))."
}
] |
https://en.wikipedia.org/wiki?curid=14894187
|
14894345
|
Compound of four octahedra with rotational freedom
|
Polyhedral compound
The compound of four octahedra with rotational freedom is a uniform polyhedron compound. It consists of a symmetric arrangement of 4 octahedra, considered as triangular antiprisms. It can be constructed by superimposing four identical octahedra, and then rotating each by an equal angle "θ" about a separate axis passing through the centres of two opposite octahedral faces, in such a way as to preserve pyritohedral symmetry.
Superimposing this compound with a second copy, in which the octahedra have been rotated by the same angle "θ" in the opposite direction, yields the compound of eight octahedra with rotational freedom.
When "θ" = 0, all four octahedra coincide. When "θ" is 60 degrees, the more symmetric compound of four octahedra (without rotational freedom) arises. In another notable case (pictured), when
formula_0
24 of the triangles form coplanar pairs, and the compound assumes the form of the compound of five octahedra with one of the octahedra removed.
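As a quick numerical check (an illustrative addition, not from the article's sources), the special angle quoted above can be evaluated directly:
<syntaxhighlight lang="python">
from math import atan, sqrt, degrees

theta = 2 * atan(sqrt(15) - 2 * sqrt(3))
print(degrees(theta))   # ~44.4775 degrees, matching the value quoted above
</syntaxhighlight>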
|
[
{
"math_id": 0,
"text": "\\theta = 2 \\tan^{-1}\\left(\\sqrt{15}-2\\sqrt{3}\\right) \\approx 44.47751^\\circ,"
}
] |
https://en.wikipedia.org/wiki?curid=14894345
|
1489559
|
Net energy gain
|
Net Energy Gain (NEG) is a concept used in energy economics that refers to the difference between the energy expended to harvest an energy source and the amount of energy gained from that harvest. The net energy gain, which can be expressed in joules, differs from the net financial gain that may result from the energy harvesting process, in that various sources of energy (e.g. natural gas, coal, etc.) can be priced differently for the same amount of energy.
Calculating NEG.
A net energy gain is achieved by expending less energy acquiring a source of energy than is contained in the source to be consumed. That is
formula_0
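As a minimal illustration of this relation (the numbers below are hypothetical and not figures from any cited source), the gain is simply the energy delivered minus the energy spent obtaining it:
<syntaxhighlight lang="python">
def net_energy_gain(energy_consumable_joules, energy_expended_joules):
    """NEG = energy contained in the harvested source minus energy spent harvesting it."""
    return energy_consumable_joules - energy_expended_joules

# Hypothetical example: 6.0e9 J delivered per unit of fuel, 2.0e9 J spent
# extracting, refining and transporting it -> a positive gain of 4.0e9 J.
print(net_energy_gain(6.0e9, 2.0e9))
</syntaxhighlight>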
Factors to consider when calculating NEG are the type of energy, the way the energy is used and acquired, and the methods used to store or transport it. The equation can also be overcomplicated by accounting for an endless list of externalities and inefficiencies that may be present during the energy harvesting process.
Sources of energy.
The definition of an energy source is not rigorous: anything that can provide energy to anything else can qualify. Wood in a stove is full of potential thermal energy; in a car, mechanical energy is acquired from the combustion of gasoline; and in a power station, the thermal energy released by burning coal is converted to mechanical and then to electrical energy.
Examples of energy sources include:
The term net energy gain can be used in slightly different ways:
Non-sustainables.
The usual definition of net energy gain compares the energy required to extract energy (that is, to find it, remove it from the ground, refine it, and ship it to the energy user) with the amount of energy produced and transmitted to a user from some (typically underground) energy resource. To better understand this, assume an economy has a certain amount of finite oil reserves that are still underground, unextracted. To get to that energy, some of the extracted oil needs to be consumed in the extraction process to run the engines driving the pumps; therefore, after extraction the net energy produced will be less than the amount of energy in the ground before extraction, because some had to be used up.
Extraction can be viewed in one of two ways: profitable (NEG > 0) or unprofitable (NEG < 0). For instance, in the Athabasca Oil Sands, the highly diffuse nature of the tar sands and the low price of crude oil rendered them uneconomical to mine until the late 1950s (NEG < 0). Since then, the price of oil has risen and a new steam extraction technique has been developed, allowing the sands to become the largest oil provider in Alberta (NEG > 0).
Sustainables.
The situation is different with sustainable energy sources, such as hydroelectric, wind, solar, and geothermal energy sources, because there is no bulk reserve to account for (other than the Sun's lifetime); the energy arrives as a continuous flow, so only the energy required for extraction is considered.
In all energy extraction cases, the life cycle of the energy-extraction device is crucial for the NEG ratio. If an extraction device is defunct after 10 years, its NEG will be significantly lower than if it operates for 30 years. Therefore, the "energy payback time" (sometimes referred to as energy amortization) can be used instead, which is the time, usually given in years, a plant must operate until the running NEG becomes positive (i.e. until the amount of energy needed for the plant infrastructure has been harvested from the plant).
Biofuels.
Net energy gain of biofuels has been a particular source of controversy for ethanol derived from corn (bioethanol). The actual net energy of biofuel production is highly dependent on both the bio source that is converted into energy, how it is grown and harvested (and in particular the use of petroleum-derived fertilizer), and how efficient the process of conversion to usable energy is. Details on this can be found in the Ethanol fuel energy balance article. Similar considerations also apply to biodiesel and other fuels.
ISO 13602.
ISO 13602-1 provides methods to analyse, characterize and compare technical energy systems (TES) with all their inputs, outputs and risk factors. It contains rules and guidelines for the methodology for such analyses.
ISO 13602-1 describes a means to establish relations between inputs and outputs (net energy) and thus to facilitate certification, marking, and labelling, comparable characterizations, coefficient of performance, energy resource planning, environmental impact assessments, meaningful energy statistics and forecasting of the direct natural energy resource or energyware inputs, technical energy system investments and the performed and expected future energy service outputs.
In ISO 13602-1:2002, renewable resource is defined as "natural resource for which the ratio of the creation of the natural resource to the output of that resource from nature to the technosphere is equal to or greater than one".
Examples.
During the 1920s, of crude oil were extracted for every barrel of crude used in the extraction and refining process. Today only are harvested for every barrel used. When the net energy gain of an energy source reaches zero, then the source is no longer contributing energy to an economy.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "NEG = Energy_{\\hbox{Consumable}} - Energy_{\\hbox{Expended}}."
}
] |
https://en.wikipedia.org/wiki?curid=1489559
|
14895644
|
Compound of twenty octahedra with rotational freedom
|
Polyhedral compound
The compound of twenty octahedra with rotational freedom is a uniform polyhedron compound. It is composed of a symmetric arrangement of 20 octahedra, considered as triangular antiprisms. It can be constructed by superimposing two copies of the compound of 10 octahedra UC16, and for each resulting pair of octahedra, rotating each octahedron in the pair by an equal and opposite angle "θ".
When "θ" is zero or 60 degrees, the octahedra coincide in pairs yielding (two superimposed copies of) the compounds of ten octahedra UC16 and UC15 respectively. When
formula_0
octahedra (from distinct rotational axes) coincide in sets of four, yielding the compound of five octahedra. When
formula_1
the vertices coincide in pairs, yielding the compound of twenty octahedra (without rotational freedom).
Cartesian coordinates.
Cartesian coordinates for the vertices of this compound are all the cyclic permutations of
formula_2
where "τ" = (1 + √5)/2 is the golden ratio (sometimes written "φ").
|
[
{
"math_id": 0,
"text": "\\theta=2\\tan^{-1}\\left(\\sqrt{\\frac{1}{3}\\left(13-4\\sqrt{10}\\right)}\\right)\\approx 37.76124^\\circ,"
},
{
"math_id": 1,
"text": "\\theta=2\\tan^{-1}\\left(\\frac{-4\\sqrt{3}-2\\sqrt{15}+\\sqrt{132+60\\sqrt{5}}}{4+\\sqrt{2}+2\\sqrt{5}+\\sqrt{10}}\\right)\\approx14.33033^\\circ,"
},
{
"math_id": 2,
"text": "\n\\begin{align}\n& \\scriptstyle \\Big( \\pm2\\sqrt3\\sin\\theta,\\, \\pm(\\tau^{-1}\\sqrt2+2\\tau\\cos\\theta),\\, \\pm(\\tau\\sqrt2-2\\tau^{-1}\\cos\\theta) \\Big) \\\\\n& \\scriptstyle \\Big( \\pm(\\sqrt2 -\\tau^2\\cos\\theta + \\tau^{-1}\\sqrt3\\sin\\theta),\\, \\pm(\\sqrt2 + (2\\tau-1)\\cos\\theta + \\sqrt3\\sin\\theta),\\, \\pm(\\sqrt2 + \\tau^{-2}\\cos\\theta - \\tau\\sqrt3\\sin\\theta) \\Big) \\\\\n& \\scriptstyle \\Big(\\pm(\\tau^{-1}\\sqrt2-\\tau\\cos\\theta-\\tau\\sqrt3\\sin\\theta),\\, \\pm(\\tau\\sqrt2 + \\tau^{-1}\\cos\\theta+\\tau^{-1}\\sqrt3\\sin\\theta),\\, \\pm(3\\cos\\theta-\\sqrt3\\sin\\theta) \\Big) \\\\\n& \\scriptstyle \\Big(\\pm(-\\tau^{-1}\\sqrt2 + \\tau\\cos\\theta - \\tau\\sqrt 3\\sin\\theta),\\, \\pm(\\tau\\sqrt2 + \\tau^{-1}\\cos\\theta-\\tau^{-1}\\sqrt3\\sin\\theta),\\, \\pm(3\\cos\\theta+\\sqrt3\\sin\\theta) \\Big) \\\\\n& \\scriptstyle \\Big(\\pm(-\\sqrt 2 + \\tau^2\\cos\\theta+\\tau^{-1}\\sqrt 3 \\sin\\theta), \\, \\pm(\\sqrt 2 + (2\\tau-1)\\cos\\theta - \\sqrt 3 \\sin\\theta), \\, \\pm(\\sqrt 2 + \\tau^{-2}\\cos\\theta + \\tau\\sqrt 3 \\sin\\theta) \\Big)\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=14895644
|
14896
|
Inductor
|
Passive two-terminal electrical component that stores energy in its magnetic field
An inductor, also called a coil, choke, or reactor, is a passive two-terminal electrical component that stores energy in a magnetic field when electric current flows through it. An inductor typically consists of an insulated wire wound into a coil.
When the current flowing through the coil changes, the time-varying magnetic field induces an electromotive force ("emf") (voltage) in the conductor, described by Faraday's law of induction. According to Lenz's law, the induced voltage has a polarity (direction) which opposes the change in current that created it. As a result, inductors oppose any changes in current through them.
An inductor is characterized by its inductance, which is the ratio of the voltage to the rate of change of current. In the International System of Units (SI), the unit of inductance is the henry (H), named for the 19th-century American scientist Joseph Henry. In the measurement of magnetic circuits, it is equivalent to weber per ampere (Wb/A). Inductors have values that typically range from 1 μH (10−6 H) to 20 H. Many inductors have a magnetic core made of iron or ferrite inside the coil, which serves to increase the magnetic field and thus the inductance. Along with capacitors and resistors, inductors are one of the three passive linear circuit elements that make up electronic circuits. Inductors are widely used in alternating current (AC) electronic equipment, particularly in radio equipment. They are used to block AC while allowing DC to pass; inductors designed for this purpose are called chokes. They are also used in electronic filters to separate signals of different frequencies, and in combination with capacitors to make tuned circuits, used to tune radio and TV receivers.
The term inductor seems to come from Heinrich Daniel Ruhmkorff, who called the induction coil he invented in 1851 an inductorium.
Description.
An electric current flowing through a conductor generates a magnetic field surrounding it. The magnetic flux linkage formula_0 generated by a given current formula_1 depends on the geometric shape of the circuit. Their ratio defines the inductance formula_2. Thus
formula_3.
The inductance of a circuit depends on the geometry of the current path as well as the magnetic permeability of nearby materials. An inductor is a component consisting of a wire or other conductor shaped to increase the magnetic flux through the circuit, usually in the shape of a coil or helix, with two terminals. Winding the wire into a coil increases the number of times the magnetic flux lines link the circuit, increasing the field and thus the inductance. The more turns, the higher the inductance. The inductance also depends on the shape of the coil, separation of the turns, and many other factors. By adding a "magnetic core" made of a ferromagnetic material like iron inside the coil, the magnetizing field from the coil will induce magnetization in the material, increasing the magnetic flux. The high permeability of a ferromagnetic core can increase the inductance of a coil by a factor of several thousand over what it would be without it.
Constitutive equation.
Any change in the current through an inductor creates a changing flux, inducing a voltage across the inductor. By Faraday's law of induction, the voltage formula_4 induced by any change in magnetic flux through the circuit is given by
formula_5.
Reformulating the definition of L above, we obtain
formula_6.
It follows that
formula_7
formula_8
if L is independent of time, current and magnetic flux linkage. Thus, inductance is also a measure of the amount of electromotive force (voltage) generated for a given rate of change of current. This is usually taken to be the constitutive relation (defining equation) of the inductor.
Lenz's law.
The polarity (direction) of the induced voltage is given by Lenz's law, which states that the induced voltage will be such as to oppose the change in current. For example, if the current through a 1 henry inductor is increasing at a rate of 1 ampere per second, the induced 1 volt of potential difference will be positive at the current's entrance point and negative at the exit point, tending to oppose the additional current. The energy from the external circuit necessary to overcome this potential "hill" is being stored in the magnetic field of the inductor. If the current is decreasing, the induced voltage will be negative at the current's entrance point and positive at the exit point, tending to maintain the current. In this case energy from the magnetic field is being returned to the circuit.
Positive form of current–voltage relationship.
Because the induced voltage is positive at the current's entrance terminal, the inductor's current–voltage relationship is often expressed without a negative sign by using the current's exit terminal as the reference point for the voltage formula_9 at the current's entrance terminal (as labeled in the schematic).
The derivative form of this current–voltage relationship is then:formula_10The integral form of this current–voltage relationship, starting at time formula_11 with some initial current formula_12, is then:formula_13The dual of the inductor is the capacitor, which stores energy in an electric field rather than a magnetic field. Its current–voltage relation replaces L with the capacitance C and has current and voltage swapped from these equations.
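As an illustrative numerical check of these two forms (a sketch with arbitrary component values, not part of the original text), one can differentiate a test current to obtain the voltage and then integrate the voltage back to recover the current:
<syntaxhighlight lang="python">
import numpy as np

L = 10e-3                                   # 10 mH (arbitrary illustrative value)
t = np.linspace(0.0, 1e-3, 2001)            # 1 ms window
i = 0.5 * np.sin(2 * np.pi * 5e3 * t)       # 0.5 A, 5 kHz test current

# Derivative form: v = L di/dt
v = L * np.gradient(i, t)

# Integral form: i(t) = i(t0) + (1/L) * integral of v dtau (trapezoidal rule)
dt = np.diff(t)
i_recovered = i[0] + np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt))) / L

print(np.max(np.abs(i_recovered - i)))      # small discretisation error only
</syntaxhighlight>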
Energy stored in an inductor.
One intuitive explanation as to why a potential difference is induced on a change of current in an inductor goes as follows:
When there is a change in current through an inductor there is a change in the strength of the magnetic field. For example, if the current is increased, the magnetic field increases. This, however, does not come without a price. The magnetic field contains potential energy, and increasing the field strength requires more energy to be stored in the field. This energy comes from the electric current through the inductor. The increase in the magnetic potential energy of the field is provided by a corresponding drop in the electric potential energy of the charges flowing through the windings. This appears as a voltage drop across the windings as long as the current increases. Once the current is no longer increased and is held constant, the energy in the magnetic field is constant and no additional energy must be supplied, so the voltage drop across the windings disappears.
Similarly, if the current through the inductor decreases, the magnetic field strength decreases, and the energy in the magnetic field decreases. This energy is returned to the circuit in the form of an increase in the electrical potential energy of the moving charges, causing a voltage rise across the windings.
Derivation.
The work done per unit charge on the charges passing through the inductor is formula_14. The negative sign indicates that the work is done "against" the emf, and is not done "by" the emf. The current formula_1 is the charge per unit time passing through the inductor. Therefore, the rate of work formula_15 done by the charges against the emf, that is the rate of change of energy of the current, is given by
formula_16
From the constitutive equation for the inductor, formula_17 so
formula_18
formula_19
In a ferromagnetic core inductor, when the magnetic field approaches the level at which the core saturates, the inductance begins to change and becomes a function of the current formula_20. Neglecting losses, the energy formula_15 stored by an inductor with a current formula_21 passing through it is equal to the amount of work required to establish the current through the inductor.
This is given by:
formula_22, where formula_23 is the so-called "differential inductance" and is defined as: formula_24.
In an air core inductor or a ferromagnetic core inductor below saturation, the inductance is constant (and equal to the differential inductance), so the stored energy is
formula_25
formula_26
For inductors with magnetic cores, the above equation is only valid for linear regions of the magnetic flux, at currents below the saturation level of the inductor, where the inductance is approximately constant. Where this is not the case, the integral form must be used with formula_27 variable.
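The following sketch (illustrative values, assuming an unsaturated core so that the differential inductance is constant) compares the closed form with the integral form:
<syntaxhighlight lang="python">
import numpy as np

L = 100e-6                       # 100 uH (illustrative)
I0 = 2.0                         # final current, A

W_closed = 0.5 * L * I0**2       # (1/2) L I^2

# Numerical integration of L_d(i) * i di with a constant differential inductance
i = np.linspace(0.0, I0, 10001)
L_d = np.full_like(i, L)
integrand = L_d * i
W_integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(i))

print(W_closed, W_integral)      # both ~2.0e-4 J
</syntaxhighlight>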
Voltage step response.
When a voltage step is applied to an inductor:
Ideal and real inductors.
The constitutive equation describes the behavior of an "ideal inductor" with inductance formula_2, and without resistance, capacitance, or energy dissipation. In practice, inductors do not follow this theoretical model; "real inductors" have a measurable resistance due to the resistance of the wire and energy losses in the core, and parasitic capacitance between turns of the wire.
A real inductor's parasitic capacitance becomes increasingly significant as frequency rises, and at a certain frequency the inductor behaves as a resonant circuit. Above this self-resonant frequency, the capacitive reactance is the dominant part of the inductor's impedance. At higher frequencies, resistive losses in the windings also increase due to the skin effect and proximity effect.
Inductors with ferromagnetic cores experience additional energy losses due to hysteresis and eddy currents in the core, which increase with frequency. At high currents, magnetic core inductors also show sudden departure from ideal behavior due to nonlinearity caused by magnetic saturation of the core.
Inductors radiate electromagnetic energy into surrounding space and may absorb electromagnetic emissions from other circuits, resulting in potential electromagnetic interference.
An early solid-state electrical switching and amplifying device called a saturable reactor exploits saturation of the core as a means of stopping the inductive transfer of current via the core.
"Q" factor.
The winding resistance appears as a resistance in series with the inductor; it is referred to as DCR (DC resistance). This resistance dissipates some of the reactive energy. The quality factor (or "Q") of an inductor is the ratio of its inductive reactance to its resistance at a given frequency, and is a measure of its efficiency. The higher the Q factor of the inductor, the closer it approaches the behavior of an ideal inductor. High Q inductors are used with capacitors to make resonant circuits in radio transmitters and receivers. The higher the Q is, the narrower the bandwidth of the resonant circuit.
The Q factor of an inductor is defined as
formula_28
where formula_2 is the inductance, formula_29 is the DC resistance, and the product formula_30 is the inductive reactance
"Q" increases linearly with frequency if "L" and "R" are constant. Although they are constant at low frequencies, the parameters vary with frequency. For example, skin effect, proximity effect, and core losses increase "R" with frequency; winding capacitance and variations in permeability with frequency affect "L".
At low frequencies and within limits, increasing the number of turns "N" improves "Q" because "L" varies as "N"2 while "R" varies linearly with "N". Similarly increasing the radius "r" of an inductor improves (or increases) "Q" because "L" varies with "r"2 while "R" varies linearly with "r". So high "Q" air core inductors often have large diameters and many turns. Both of those examples assume the diameter of the wire stays the same, so both examples use proportionally more wire. If the total mass of wire is held constant, then there would be no advantage to increasing the number of turns or the radius of the turns because the wire would have to be proportionally thinner.
Using a high permeability ferromagnetic core can greatly increase the inductance for the same amount of copper, so the core can also increase the Q. Cores however also introduce losses that increase with frequency. The core material is chosen for best results for the frequency band. High Q inductors must avoid saturation; one way is by using a (physically larger) air core inductor. At VHF or higher frequencies an air core is likely to be used. A well designed air core inductor may have a Q of several hundred.
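As a numerical illustration of the definition above (the component values are arbitrary assumptions), the Q of a coil at a given frequency is simply its reactance divided by its series resistance:
<syntaxhighlight lang="python">
from math import pi

def q_factor(inductance_h, resistance_ohm, frequency_hz):
    """Q = omega * L / R at the given frequency."""
    return 2 * pi * frequency_hz * inductance_h / resistance_ohm

# e.g. a 10 uH coil with 0.2 ohm of effective series resistance at 10 MHz:
print(q_factor(10e-6, 0.2, 10e6))   # ~3142; in practice skin effect raises R and lowers Q
</syntaxhighlight>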
Applications.
Inductors are used extensively in analog circuits and signal processing. Applications range from the use of large inductors in power supplies, which in conjunction with filter capacitors remove ripple which is a multiple of the mains frequency (or the switching frequency for switched-mode power supplies) from the direct current output, to the small inductance of the ferrite bead or torus installed around a cable to prevent radio frequency interference from being transmitted down the wire.
Inductors are used as the energy storage device in many switched-mode power supplies to produce DC current. The inductor supplies energy to the circuit to keep current flowing during the "off" switching periods and enables topologies where the output voltage is higher than the input voltage.
A tuned circuit, consisting of an inductor connected to a capacitor, acts as a resonator for oscillating current. Tuned circuits are widely used in radio frequency equipment such as radio transmitters and receivers, as narrow bandpass filters to select a single frequency from a composite signal, and in electronic oscillators to generate sinusoidal signals.
Two (or more) inductors in proximity that have coupled magnetic flux (mutual inductance) form a transformer, which is a fundamental component of every electric utility power grid. The efficiency of a transformer may decrease as the frequency increases due to eddy currents in the core material and skin effect on the windings. The size of the core can be decreased at higher frequencies. For this reason, aircraft use 400 hertz alternating current rather than the usual 50 or 60 hertz, allowing a great saving in weight from the use of smaller transformers. Transformers enable switched-mode power supplies that galvanically isolate the output from the input.
Inductors are also employed in electrical transmission systems, where they are used to limit switching currents and fault currents. In this field, they are more commonly referred to as reactors.
Inductors have parasitic effects which cause them to depart from ideal behavior. They create and suffer from electromagnetic interference (EMI). Their physical size prevents them from being integrated on semiconductor chips. So the use of inductors is declining in modern electronic devices, particularly compact portable devices. Real inductors are increasingly being replaced by active circuits such as the gyrator which can synthesize inductance using capacitors.
Inductor construction.
An inductor usually consists of a coil of conducting material, typically insulated copper wire, wrapped around a core either of plastic (to create an air-core inductor) or of a ferromagnetic (or ferrimagnetic) material; the latter is called an "iron core" inductor. The high permeability of the ferromagnetic core increases the magnetic field and confines it closely to the inductor, thereby increasing the inductance. Low frequency inductors are constructed like transformers, with cores of electrical steel laminated to prevent eddy currents. 'Soft' ferrites are widely used for cores above audio frequencies, since they do not cause the large energy losses at high frequencies that ordinary iron alloys do. Inductors come in many shapes. Some inductors have an adjustable core, which enables changing of the inductance. Inductors used to block very high frequencies are sometimes made by stringing a ferrite bead on a wire.
Small inductors can be etched directly onto a printed circuit board by laying out the trace in a spiral pattern. Some such planar inductors use a planar core. Small value inductors can also be built on integrated circuits using the same processes that are used to make interconnects. Aluminium interconnect is typically used, laid out in a spiral coil pattern. However, the small dimensions limit the inductance, and it is far more common to use a circuit called a "gyrator" that uses a capacitor and active components to behave similarly to an inductor. Regardless of the design, because of the low inductances and low power dissipation on-die inductors allow, they are currently only commercially used for high frequency RF circuits.
Shielded inductors.
Inductors used in power regulation systems, lighting, and other systems that require low-noise operating conditions are often partially or fully shielded. In telecommunication circuits employing induction coils and repeating transformers, shielding of inductors in close proximity reduces circuit cross-talk.
Types.
Air-core inductor.
The term "air core coil" describes an inductor that does not use a magnetic core made of a ferromagnetic material. The term refers to coils wound on plastic, ceramic, or other nonmagnetic forms, as well as those that have only air inside the windings. Air core coils have lower inductance than ferromagnetic core coils, but are often used at high frequencies because they are free from energy losses called core losses that occur in ferromagnetic cores, which increase with frequency. A side effect that can occur in air core coils in which the winding is not rigidly supported on a form is 'microphony': mechanical vibration of the windings can cause variations in the inductance.
Radio-frequency inductor.
At high frequencies, particularly radio frequencies (RF), inductors have higher resistance and other losses. In addition to causing power loss, in resonant circuits this can reduce the Q factor of the circuit, broadening the bandwidth. In RF inductors specialized construction techniques are used to minimize these losses. The losses are due to these effects:
To reduce parasitic capacitance and proximity effect, high Q RF coils are constructed to avoid having many turns lying close together, parallel to one another. The windings of RF coils are often limited to a single layer, and the turns are spaced apart. To reduce resistance due to skin effect, in high-power inductors such as those used in transmitters the windings are sometimes made of a metal strip or tubing which has a larger surface area, and the surface is silver-plated.
Small inductors for low current and low power are made in molded cases resembling resistors. These may be either plain (phenolic) core or ferrite core. An ohmmeter readily distinguishes them from similar-sized resistors by showing the low resistance of the inductor.
Ferromagnetic-core inductor.
Ferromagnetic-core or iron-core inductors use a magnetic core made of a ferromagnetic or ferrimagnetic material such as iron or ferrite to increase the inductance. A magnetic core can increase the inductance of a coil by a factor of several thousand, by increasing the magnetic field due to its higher magnetic permeability. However the magnetic properties of the core material cause several side effects which alter the behavior of the inductor and require special construction:
<templatestyles src="Glossary/styles.css" />
Laminated-core inductor.
Low-frequency inductors are often made with laminated cores to prevent eddy currents, using construction similar to transformers. The core is made of stacks of thin steel sheets or laminations oriented parallel to the field, with an insulating coating on the surface. The insulation prevents eddy currents between the sheets, so any remaining currents must be within the cross sectional area of the individual laminations, reducing the area of the loop and thus reducing the energy losses greatly. The laminations are made of low-conductivity silicon steel to further reduce eddy current losses.
Ferrite-core inductor.
For higher frequencies, inductors are made with cores of ferrite. Ferrite is a ceramic ferrimagnetic material that is nonconductive, so eddy currents cannot flow within it. The formulation of ferrite is xxFe2O4 where xx represents various metals. For inductor cores soft ferrites are used, which have low coercivity and thus low hysteresis losses.
Powdered-iron-core inductor.
Another material is powdered iron cemented with a binder. Medium frequency equipment almost exclusively uses powdered iron cores, and inductors and transformers built for the lower shortwaves are made using either cemented powdered iron or ferrites.
Toroidal-core inductor.
In an inductor wound on a straight rod-shaped core, the magnetic field lines emerging from one end of the core must pass through the air to re-enter the core at the other end. This reduces the field, because much of the magnetic field path is in air rather than the higher permeability core material and is a source of electromagnetic interference. A higher magnetic field and inductance can be achieved by forming the core in a closed magnetic circuit. The magnetic field lines form closed loops within the core without leaving the core material. The shape often used is a toroidal or doughnut-shaped ferrite core. Because of their symmetry, toroidal cores allow a minimum of the magnetic flux to escape outside the core (called "leakage flux"), so they radiate less electromagnetic interference than other shapes. Toroidal core coils are manufactured of various materials, primarily ferrite, powdered iron and laminated cores.
Variable inductor.
Probably the most common type of variable inductor today is one with a moveable ferrite magnetic core, which can be slid or screwed in or out of the coil. Moving the core farther into the coil increases the permeability, increasing the magnetic field and the inductance. Many inductors used in radio applications (usually less than 100 MHz) use adjustable cores in order to tune such inductors to their desired value, since manufacturing processes have certain tolerances (inaccuracy). Sometimes such cores for frequencies above 100 MHz are made from highly conductive non-magnetic material such as aluminum. They decrease the inductance because the magnetic field must bypass them.
Air core inductors can use sliding contacts or multiple taps to increase or decrease the number of turns included in the circuit, to change the inductance. A type much used in the past but mostly obsolete today has a spring contact that can slide along the bare surface of the windings. The disadvantage of this type is that the contact usually short-circuits one or more turns. These turns act like a single-turn short-circuited transformer secondary winding; the large currents induced in them cause power losses.
A type of continuously variable air core inductor is the "variometer". This consists of two coils with the same number of turns connected in series, one inside the other. The inner coil is mounted on a shaft so its axis can be turned with respect to the outer coil. When the two coils' axes are collinear, with the magnetic fields pointing in the same direction, the fields add and the inductance is maximum. When the inner coil is turned so its axis is at an angle with the outer, the mutual inductance between them is smaller so the total inductance is less. When the inner coil is turned 180° so the coils are collinear with their magnetic fields opposing, the two fields cancel each other and the inductance is very small. This type has the advantage that it is continuously variable over a wide range. It is used in antenna tuners and matching circuits to match low frequency transmitters to their antennas.
Another method to control the inductance without any moving parts requires an additional DC current bias winding which controls the permeability of an easily saturable core material. "See" Magnetic amplifier.
Choke.
A choke is an inductor designed specifically for blocking high-frequency alternating current (AC) in an electrical circuit, while allowing DC or low-frequency signals to pass. Because the inductor restricts or "chokes" the changes in current, this type of inductor is called a choke. It usually consists of a coil of insulated wire wound on a magnetic core, although some consist of a donut-shaped "bead" of ferrite material strung on a wire. Like other inductors, chokes resist changes in current passing through them increasingly with frequency. The difference between chokes and other inductors is that chokes do not require the high Q factor construction techniques that are used to reduce the resistance in inductors used in tuned circuits.
Circuit analysis.
The effect of an inductor in a circuit is to oppose changes in current through it by developing a voltage across it proportional to the rate of change of the current. An ideal inductor would offer no resistance to a constant direct current; however, only superconducting inductors have truly zero electrical resistance.
The relationship between the time-varying voltage "v"("t") across an inductor with inductance "L" and the time-varying current "i"("t") passing through it is described by the differential equation:
formula_31
When there is a sinusoidal alternating current (AC) through an inductor, a sinusoidal voltage is induced. The amplitude of the voltage is proportional to the product of the amplitude (formula_32) of the current and the angular frequency (formula_33) of the current.
formula_34
In this situation, the phase of the current lags that of the voltage by π/2 (90°). For sinusoids, as the voltage across the inductor goes to its maximum value, the current goes to zero, and as the voltage across the inductor goes to zero, the current through it goes to its maximum value.
If an inductor is connected to a direct current source with value "I" via a resistance "R" (at least the DCR of the inductor), and then the current source is short-circuited, the differential relationship above shows that the current through the inductor will discharge with an exponential decay:
formula_35
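A small numerical sketch of this decay (with arbitrary illustrative values) shows the current falling with time constant "τ" = "L"/"R":
<syntaxhighlight lang="python">
from math import exp

I0 = 1.0         # initial current, A
R = 50.0         # series resistance, ohms
L = 10e-3        # inductance, H  ->  tau = L/R = 200 microseconds

tau = L / R
for t in (0.0, tau, 3 * tau, 5 * tau):
    print(t, I0 * exp(-R * t / L))   # ~1.0, 0.37, 0.05, 0.007 A
</syntaxhighlight>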
Reactance.
The ratio of the peak voltage to the peak current in an inductor energised from an AC source is called the reactance and is denoted "X"L.
formula_36
Thus,
formula_37
where "ω" is the angular frequency.
Reactance is measured in ohms, like resistance, but differs from it in that no energy is dissipated: energy is stored in the magnetic field as the current rises and returned to the circuit as the current falls. Inductive reactance is proportional to frequency. At low frequency the reactance falls; at DC, the inductor behaves as a short circuit. As frequency increases the reactance increases, and at a sufficiently high frequency the reactance approaches that of an open circuit.
Corner frequency.
In filtering applications, with respect to a particular load impedance, an inductor has a corner frequency defined as:
formula_38
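Both quantities can be computed directly; the sketch below uses arbitrary illustrative values and confirms that at the corner frequency the inductive reactance equals the load resistance:
<syntaxhighlight lang="python">
from math import pi

L = 1e-3           # 1 mH (illustrative)
R_load = 8.0       # ohms (hypothetical load impedance)

def inductive_reactance(frequency_hz):
    return 2 * pi * frequency_hz * L        # X_L = omega * L

f_corner = R_load / (2 * pi * L)            # corner frequency for this load
print(f_corner, inductive_reactance(f_corner))   # ~1273 Hz, X_L ~= 8 ohms
</syntaxhighlight>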
Laplace circuit analysis (s-domain).
When using the Laplace transform in circuit analysis, the impedance of an ideal inductor with no initial current is represented in the "s" domain by:
formula_39
where
"formula_2" is the inductance, and
"formula_40" is the complex frequency.
If the inductor does have initial current, it can be represented by:
Inductor networks.
Inductors in a parallel configuration each have the same potential difference (voltage). To find their total equivalent inductance ("L"eq):
formula_41
The current through inductors in series stays the same, but the voltage across each inductor can be different. The sum of the potential differences (voltage) is equal to the total voltage. To find their total inductance:
formula_42
These simple relationships hold true only when there is no mutual coupling of magnetic fields between individual inductors.
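A minimal sketch of these two combination rules (assuming no mutual coupling, as stated above; the example values are arbitrary):
<syntaxhighlight lang="python">
def series_inductance(*inductances):
    return sum(inductances)

def parallel_inductance(*inductances):
    return 1.0 / sum(1.0 / L for L in inductances)

print(series_inductance(10e-6, 22e-6, 47e-6))   # 7.9e-05 H
print(parallel_inductance(10e-6, 10e-6))        # 5.0e-06 H (two equal inductors halve)
</syntaxhighlight>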
Mutual inductance.
Mutual inductance occurs when the magnetic field of an inductor induces a magnetic field in an adjacent inductor. Mutual induction is the basis of transformer construction.
formula_43
where M is the maximum mutual inductance possible between two inductors and L1 and L2 are the self-inductances of the two inductors.
In general
formula_44
as only a fraction of self flux is linked with the other.
This fraction is called "Coefficient of flux linkage (K)" or "Coefficient of coupling".
formula_45
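The relation can be sketched as follows (illustrative values; the function name is arbitrary), with the coupling coefficient constrained to lie between 0 and 1:
<syntaxhighlight lang="python">
from math import sqrt

def mutual_inductance(L1, L2, K):
    """M = K * sqrt(L1 * L2); K = 1 corresponds to perfect flux linkage."""
    if not 0.0 <= K <= 1.0:
        raise ValueError("coupling coefficient must lie in [0, 1]")
    return K * sqrt(L1 * L2)

print(mutual_inductance(1e-3, 4e-3, 1.0))   # 2.0e-3 H, the maximum possible M
print(mutual_inductance(1e-3, 4e-3, 0.6))   # 1.2e-3 H
</syntaxhighlight>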
Inductance formulas.
The table below lists some common simplified formulas for calculating the approximate inductance of several inductor constructions.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Phi_\\mathbf{B}"
},
{
"math_id": 1,
"text": "I"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "L := \\frac{\\Phi_\\mathbf{B}}{I}"
},
{
"math_id": 4,
"text": "\\mathcal{E}"
},
{
"math_id": 5,
"text": "\\mathcal{E} = -\\frac{d\\Phi_\\mathbf{B}}{dt}"
},
{
"math_id": 6,
"text": " \\Phi_\\mathbf{B} = LI"
},
{
"math_id": 7,
"text": "\\mathcal{E} = -\\frac{d\\Phi_\\mathbf{B}}{dt} = -\\frac{d}{dt}(LI)"
},
{
"math_id": 8,
"text": "\\quad\\mathcal{E} = -L\\frac{dI}{dt}\\quad"
},
{
"math_id": 9,
"text": "V(t)"
},
{
"math_id": 10,
"text": "V(t) = L\\frac{\\mathrm{d}I(t)}{\\mathrm{d}t} \\, ."
},
{
"math_id": 11,
"text": "t_0"
},
{
"math_id": 12,
"text": "I(t_0)"
},
{
"math_id": 13,
"text": "I(t) = I(t_0) + \\frac{1}{L}\\int_{t_0}^t V(\\tau) \\, \\mathrm{d}\\tau \\, ."
},
{
"math_id": 14,
"text": "-\\mathcal{E}"
},
{
"math_id": 15,
"text": "W"
},
{
"math_id": 16,
"text": "\\frac{dW}{dt} = -\\mathcal{E}I "
},
{
"math_id": 17,
"text": "-\\mathcal{E} = L\\frac{dI}{dt}"
},
{
"math_id": 18,
"text": "\\frac{dW}{dt}= L\\frac{dI}{dt} \\cdot I = LI \\cdot \\frac{dI}{dt}"
},
{
"math_id": 19,
"text": "dW = L I \\cdot dI"
},
{
"math_id": 20,
"text": "L(I)"
},
{
"math_id": 21,
"text": "I_0"
},
{
"math_id": 22,
"text": "W = \\int_0^{I_0} L_d(I) \\, I \\, dI"
},
{
"math_id": 23,
"text": "L_d(I)"
},
{
"math_id": 24,
"text": "L_d = \\frac{d\\Phi_{\\mathbf{B}}}{dI}"
},
{
"math_id": 25,
"text": "W = L\\int_0^{I_0} I \\, dI"
},
{
"math_id": 26,
"text": "\\quad W = \\frac{1}{2}L {I_0}^2\\quad"
},
{
"math_id": 27,
"text": "L_d"
},
{
"math_id": 28,
"text": "Q = \\frac{\\omega L}{R}"
},
{
"math_id": 29,
"text": "R"
},
{
"math_id": 30,
"text": "\\omega L"
},
{
"math_id": 31,
"text": "v(t) = L \\frac{di(t)}{dt}"
},
{
"math_id": 32,
"text": "I_P"
},
{
"math_id": 33,
"text": "\\omega"
},
{
"math_id": 34,
"text": "\\begin{align}\n i(t) &= I_\\mathrm P \\sin(\\omega t) \\\\\n \\frac{di(t)}{dt} &= I_\\mathrm P \\omega \\cos(\\omega t) \\\\\n v(t) &= L I_\\mathrm P \\omega \\cos(\\omega t)\n\\end{align}"
},
{
"math_id": 35,
"text": "i(t) = I e^{-\\frac{R}{L}t}"
},
{
"math_id": 36,
"text": "X_\\mathrm L = \\frac {V_\\mathrm P}{I_\\mathrm P} = \\frac {\\omega L I_\\mathrm P}{I_\\mathrm P} "
},
{
"math_id": 37,
"text": "X_\\mathrm L = \\omega L "
},
{
"math_id": 38,
"text": "f_\\mathrm{3\\,dB} = \\frac{R}{2\\pi L}"
},
{
"math_id": 39,
"text": "Z(s) = Ls\\, "
},
{
"math_id": 40,
"text": "s"
},
{
"math_id": 41,
"text": " L_\\mathrm{eq} = \\left(\\sum_{i=1}^n{1\\over L_i}\\right)^{-1} = \\left({1\\over L_1} + {1\\over L_2} + \\dots + {1\\over L_n}\\right)^{-1}."
},
{
"math_id": 42,
"text": " L_\\mathrm{eq} = \\sum_{i=1}^n L_i = L_1 + L_2 + \\cdots + L_n.\\,\\! "
},
{
"math_id": 43,
"text": " M = \\sqrt{L_1L_2} "
},
{
"math_id": 44,
"text": " M \\leq \\sqrt{L_1L_2} "
},
{
"math_id": 45,
"text": " M = K\\sqrt{L_1L_2} "
}
] |
https://en.wikipedia.org/wiki?curid=14896
|
1489628
|
Capital account
|
Record of the net flow of investment into an economy
In macroeconomics and international finance, the capital account, also known as the capital and financial account, records the net flow of investment into an economy. It is one of the two primary components of the balance of payments, the other being the current account. Whereas the current account reflects a nation's net income, the capital account reflects net change in ownership of national assets.
A surplus in the capital account means money is flowing into the country, but unlike a surplus in the current account, the inbound flows effectively represent borrowings or sales of assets rather than payment for work. A deficit in the capital account means money is flowing out of the country, and it suggests the nation is increasing its ownership of foreign assets.
The term "capital account" is used with a narrower meaning by the International Monetary Fund (IMF) and affiliated sources. The IMF splits what the rest of the world calls the capital account into two top-level divisions: "financial account" and "capital account", with by far the bulk of the transactions being recorded in its financial account.
Definitions.
At high level:
formula_0
Breaking this down:
formula_1
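As a purely illustrative sketch of this decomposition (the figures are hypothetical and not drawn from any statistics cited here), the capital account is simply the sum of its four components:
<syntaxhighlight lang="python">
def capital_account(fdi, portfolio, other, reserve):
    """Sum of the four standard components; positive values are net inflows."""
    return fdi + portfolio + other + reserve

# Hypothetical year, in billions of a currency unit: strong inward FDI and
# portfolio investment, partly offset by central bank reserve accumulation.
print(capital_account(fdi=40.0, portfolio=25.0, other=-10.0, reserve=-30.0))   # 25.0
</syntaxhighlight>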
Central bank operations and the reserve account.
Conventionally, central banks have two principal tools to influence the value of their nation's currency: raising or lowering the base rate of interest and, more effectively, buying or selling their currency. Setting a higher interest rate than other major central banks will tend to attract funds via the nation's capital account, and this will act to raise the value of its currency. A relatively low interest rate will have the opposite effect. Since World War II, interest rates have largely been set with a view to the needs of the domestic economy, and moreover, changing the interest rate alone has only a limited effect.
A nation's ability to prevent a fall in the value of its own currency is limited mainly by the size of its foreign reserves: it needs to use the reserves to buy back its currency. Starting in 2013, a trend has developed for some central banks to attempt to exert upward pressure on their currencies by means of currency swaps rather than by directly selling their foreign reserves. In the absence of foreign reserves, central banks may affect international pricing indirectly by selling assets (usually government bonds) domestically, which, however, diminishes liquidity in the economy and may lead to deflation.
When a currency rises higher than monetary authorities might like (making exports less competitive internationally), it is usually considered relatively easy for an independent central bank to counter this. By buying foreign currency or foreign financial assets (usually other governments' bonds), the central bank has a ready means to lower the value of its own currency; if it needs to, it can always create more of its own currency to fund these purchases. The risk, however, is general price inflation. The term "printing money" is often used to describe such monetization, but is an anachronism, since most money exists in the form of deposits and its supply is manipulated through the purchase of bonds. A third mechanism that central banks and governments can use to raise or lower the value of their currency is simply to talk it up or down, by hinting at future action that may discourage speculators. Quantitative easing, a practice used by major central banks in 2009, consisted of large-scale bond purchases by central banks. The desire was to stabilize banking systems and, if possible, encourage investment to reduce unemployment.
As an example of direct intervention to manage currency valuation, in the 20th century Great Britain's central bank, the Bank of England, would sometimes use its reserves to buy large amounts of pound sterling to prevent it falling in value. Black Wednesday was a case where it had insufficient reserves of foreign currency to do this successfully. Conversely, in the early 21st century, several major emerging economies effectively sold large amounts of their currencies in order to prevent their value rising, and in the process built up large reserves of foreign currency, principally the US dollar.
Sometimes the reserve account is classified as "below the line" and thus not reported as part of the capital account. Flows to or from the reserve account can substantially affect the overall capital account. Taking the example of China in the early 21st century, and excluding the activity of its central bank, China's capital account had a large surplus, as it had been the recipient of much foreign investment. If the reserve account is included, however, China's capital account has been in large deficit, as its central bank purchased large amounts of foreign assets (chiefly US government bonds) to a degree sufficient to offset not just the rest of the capital account, but its large current account surplus as well.
Sterilization.
In the financial literature, "sterilization" is a term commonly used to refer to operations of a central bank that mitigate the potentially undesirable effects of inbound capital: currency appreciation and inflation. Depending on the source, sterilization can mean the relatively straightforward recycling of inbound capital to prevent currency appreciation and/or a range of measures to check the inflationary impact of inbound capital. The classic way to sterilize the inflationary effect of the extra money flowing into the domestic base from the capital account is for the central bank to use open market operations where it sells bonds domestically, thereby soaking up new cash that would otherwise circulate around the home economy. A central bank normally makes a small loss from its overall sterilization operations, as the interest it earns from buying foreign assets to prevent appreciation is usually less than what it has to pay out on the bonds it issues domestically to check inflation. In some cases, however, a profit can be made. In the strict textbook definition, sterilization refers only to measures aimed at keeping the domestic monetary base stable; an intervention to prevent currency appreciation that involved merely buying foreign assets without counteracting the resulting increase of the domestic money supply would not count as sterilization. A textbook sterilization would be, for example, the Federal Reserve's purchase of $1 billion in foreign assets. This would create additional liquidity in foreign hands. At the same time, the Fed would sell $1 billion of debt securities into the US market, draining the domestic economy of $1 billion. With $1 billion added abroad and $1 billion removed from the domestic economy, the net capital inflow that would have influenced the currency's exchange rate has undergone sterilization.
International Monetary Fund.
The above definition is the one most widely used in economic literature, in the financial press, by corporate and government analysts (except when they are reporting to the IMF), and by the World Bank. In contrast, what the rest of the world calls the capital account is labelled the "financial account" by the International Monetary Fund (IMF) and the United Nations System of National Accounts (SNA). In the IMF's definition, the capital account represents a small subset of what the standard definition designates the capital account, largely comprising transfers. Transfers are one-way flows, such as gifts, as opposed to commercial exchanges (i.e., buying/selling and barter). The largest type of transfer between nations is typically foreign aid, but that is mostly recorded in the current account. An exception is debt forgiveness, which in a sense is the transfer of ownership of an asset. When a country receives significant debt forgiveness, that will typically comprise the bulk of its overall IMF capital account entry for that year.
The IMF's capital account does include some non-transfer flows, which are sales involving non-financial and non-produced assets—for example, natural resources like land, leases and licenses, and marketing assets such as brands—but the sums involved are typically very small, as most movement in these items occurs when both seller and buyer are of the same nationality.
Transfers apart from debt forgiveness recorded in the IMF's capital account include the transfer of goods and financial assets by migrants leaving or entering a country, the transfer of ownership on fixed assets, the transfer of funds received to the sale or acquisition of fixed assets, gift and inheritance taxes, death levies, and uninsured damage to fixed assets. In a non-IMF representation, these items might be grouped in the "other" subtotal of the capital account. They typically amount to a very small amount in comparison to loans and flows into and out of short-term bank accounts.
Capital controls.
Capital controls are measures imposed by a state's government aimed at managing capital account transactions. They include outright prohibitions against some or all capital account transactions, transaction taxes on the international sale of specific financial assets, or caps on the size of international sales and purchases of specific financial assets. While usually aimed at the financial sector, controls can affect ordinary citizens, for example in the 1960s British families were at one point restricted from taking more than £50 with them out of the country for their foreign holidays. Countries without capital controls that limit the buying and selling of their currency at market rates are said to have full capital account convertibility.
Following the Bretton Woods agreement established at the close of World War II, most nations put in place capital controls to prevent large flows either into or out of their capital account. John Maynard Keynes, one of the architects of the Bretton Woods system, considered capital controls to be a permanent part of the global economy. Both advanced and emerging nations adopted controls; in basic theory it may be supposed that large inbound investments will speed an emerging economy's development, but empirical evidence suggests this does not reliably occur, and in fact large capital inflows can hurt a nation's economic development by causing its currency to appreciate, by contributing to inflation, and by causing an unsustainable "bubble" of economic activity that often precedes financial crisis. The inflows sharply reverse once capital flight takes place after the crisis occurs.
As part of the displacement of Keynesianism in favor of free market orientated policies, countries began abolishing their capital controls, starting between 1973–74 with the US, Canada, Germany and Switzerland and followed by Great Britain in 1979. Most other advanced and emerging economies followed, chiefly in the 1980s and early 1990s.
An exception to this trend was Malaysia, which in 1998 imposed capital controls in the wake of the 1997 Asian Financial Crisis. While most Asian economies didn't impose controls, after the 1997 crises they ceased to be net importers of capital and became net exporters instead. Large inbound flows were directed "uphill" from emerging economies to the US and other developed nations. According to economist C. Fred Bergsten the large inbound flow into the US was one of the causes of the financial crisis of 2007–2008. By the second half of 2009, low interest rates and other aspects of the government led response to the global crises had resulted in increased movement of capital back towards emerging economies. In November 2009 the "Financial Times" reported several emerging economies such as Brazil and India had begun to implement or at least signal the possible adoption of capital controls to reduce the flow of foreign capital into their economies.
Notes and references.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n \\text{Capital Account} = \\left[ {\\text{Change in foreign ownership} \\atop \\text{of domestic assets}} \\right] - \\left[ {\\text{Change in domestic} \\atop \\text{ownership of foreign assets}} \\right]\n"
},
{
"math_id": 1,
"text": "\n \\text{Capital Account} = \\left[ {\\text{Foreign Direct} \\atop \\text{Investment}} \\right] + \\left[ {\\text{Portfolio} \\atop \\text{Investment}} \\right] + \\left[ {\\text{Other} \\atop \\text{Investment}} \\right] + \\left[ {\\text{Reserve} \\atop \\text{Account}} \\right]\n"
}
] |
https://en.wikipedia.org/wiki?curid=1489628
|
14897216
|
Episodic tremor and slip
|
Seismological phenomenon observed in some subduction zones
Episodic tremor and slip (ETS) is a seismological phenomenon observed in some subduction zones that is characterized by non-earthquake seismic rumbling, or tremor, and slow slip along the plate interface. Slow slip events are distinguished from earthquakes by their propagation speed and focus. In slow slip events, there is an apparent reversal of crustal motion, although the fault motion remains consistent with the direction of subduction. ETS events themselves are imperceptible to human beings and do not cause damage.
Discovery.
Nonvolcanic, episodic tremor was first identified in southwest Japan in 2002. Shortly afterwards, the Geological Survey of Canada coined the term "episodic tremor and slip" to characterize observations of GPS measurements in the Vancouver Island area. Vancouver Island lies in the eastern, North American region of the Cascadia subduction zone. ETS events in Cascadia were observed to reoccur cyclically with a period of approximately 14 months. Analysis of measurements led to the successful prediction of ETS events in following years (e.g., 2003, 2004, 2005, and 2007). In Cascadia, these events are marked by about two weeks of 1 to 10 Hz seismic trembling and non-earthquake ("aseismic") slip on the plate boundary equivalent to a magnitude 7 earthquake. (Tremor is a weak seismological signal only detectable by very sensitive seismometers.) Recent episodes of tremor and slip in the Cascadia region have occurred down-dip of the region ruptured in the 1700 Cascadia earthquake.
Since the initial discovery of this seismic mode in the Cascadia region, slow slip and tremor have been detected in other subduction zones around the world, including Japan and Mexico.
Slow slip is not accompanied by tremor in the Hikurangi Subduction Zone.
Every five years a year-long quake of this type occurs beneath the New Zealand capital, Wellington. It was first measured in 2003, and has reappeared in 2008 and 2013.
Characteristics.
Slip behaviour.
In the Cascadia subduction zone, the Juan de Fuca Plate, a relic of the ancient Farallon Plate, is actively subducting eastward underneath the North American Plate. The boundary between the Juan de Fuca and North American plates is generally "locked" due to interplate friction. A GPS marker on the surface of the North American plate above the locked region will trend eastward as it is dragged by the subduction process. Geodetic measurements show periodic reversals in the motion (i.e., westward movement) of the overthrusting North American Plate. During these reversals, the GPS marker will be displaced to the west over a period of days to weeks. Because these events occur over a much longer duration than earthquakes, they are termed "slow slip events".
Slow slip events have been observed to occur in the Cascadia, Japan, and Mexico subduction zones. Unique characteristics of slow slip events include periodicity on timescales of months to years, focus near or down-dip of the locked zone, and along-strike propagation of 5 to 15 km/d. In contrast, a typical earthquake rupture velocity is 70 to 90% of the S-wave velocity, or approximately 3.5 km/s.
Because slow slip events occur in subduction zones, their relationship to megathrust earthquakes is of economic, human, and scientific importance. The seismic hazard posed by ETS events is dependent on their focus. If the slow slip event extends into the seismogenic zone, accumulated stress would be released, decreasing the risk of a catastrophic earthquake. However, if the slow slip event occurs down-dip of the seismogenic zone, it may "load" the region with stress. The probability of a great earthquake (moment magnitude scale formula_0) occurring has been suggested to be 30 times greater during an ETS event than otherwise, but more recent observations have shown this theory to be simplistic. One factor is that tremor occurs in many segments at different times along the plate boundary; another factor is that rarely have tremor and large earthquakes been observed to correlate in timing.
Tremor.
Slow slip events are frequently linked to non-volcanic seismological "rumbling", or tremor. Tremor is distinguished from earthquakes in several key respects: frequency, duration, and origin. Seismic waves generated by earthquakes are high-frequency and short-lived. These characteristics allow seismologists to determine the hypocentre of an earthquake using first-arrival methods. In contrast, tremor signals are weak and extended in duration. Furthermore, while earthquakes are caused by the rupture of faults, tremor is generally attributed to underground movement of fluids (magmatic or hydrothermal). As well as in subduction zones, tremor has been detected in transform faults such as the San Andreas.
In both the Cascadia and Nankai subduction zones, slow slip events are directly associated with tremor. In the Cascadia subduction zone, slip events and seismological tremor signals are spatially and temporally coincident, but this relationship does not extend to the Mexican subduction zone. Furthermore, this association is not an intrinsic characteristic of slow slip events. In the Hikurangi Subduction Zone, New Zealand, episodic slip events are associated with distinct, reverse-faulted microearthquakes.
Two types of tremor have been identified: one associated with geodetic deformation (as described above), and one associated with 5 to 10 second bursts excited by distant earthquakes. The second type of tremor has been detected worldwide; for example, it has been triggered in the San Andreas Fault by the 2002 Denali earthquake and in Taiwan by the 2001 Kunlun earthquake.
Geological interpretation.
Tremor is commonly associated with the underground movement of magmatic or hydrothermal fluids. As a plate is subducted into the mantle, it loses water from its pore space and through phase changes of hydrous minerals (such as amphibole). It has been proposed that this liberated water forms a supercritical fluid at the plate interface, lubricating plate motion, and that the opening of fractures in the surrounding rock by this fluid is the source of the tremor signal. Mathematical modelling has successfully reproduced the periodicity of episodic tremor and slip in the Cascadia region by incorporating this dehydration effect. In this interpretation, tremor may be enhanced where the subducting oceanic crust is young, hot, and wet rather than older and colder.
However, alternative models have also been proposed. Tremor has been demonstrated to be influenced by tides or variable fluid flow through a fixed volume. Tremor has also been attributed to shear slip at the plate interface. Recent contributions in mathematical modelling reproduce the sequences of Cascadia and Hikurangi (New Zealand), and suggest in-situ dehydration as the cause for the episodic tremor and slip events.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M_w \\geq 8.0"
}
] |
https://en.wikipedia.org/wiki?curid=14897216
|
14898988
|
Fuhrmann circle
|
__notoc__
In geometry, the Fuhrmann circle of a triangle, named after the German Wilhelm Fuhrmann (1833–1904), is the circle that has as a diameter the line segment between the orthocenter formula_0 and the Nagel point formula_1. This circle is identical with the circumcircle of the Fuhrmann triangle.
The radius of the Fuhrmann circle of a triangle with sides "a", "b", and "c" and circumradius "R" is
formula_2
which is also the distance between the circumcenter and incenter.
Aside from the orthocenter, the Fuhrmann circle intersects each altitude of the triangle in one additional point. Those points all lie at distance formula_3 from their associated vertices of the triangle, where formula_4 denotes the radius of the triangle's incircle.
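As a quick numerical check (an illustrative sketch, not part of the original article), the radius formula above can be compared with the circumcenter–incenter distance obtained from Euler's relation d² = R(R − 2r):

```python
import math

def fuhrmann_radius(a, b, c):
    """Radius of the Fuhrmann circle from the side lengths, per the formula above."""
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))          # Heron's formula
    R = a * b * c / (4 * area)                                  # circumradius
    num = (a**3 + b**3 + c**3 + 3*a*b*c
           - a**2*b - a*b**2 - a**2*c - b**2*c - a*c**2 - b*c**2)
    return R * math.sqrt(num / (a * b * c))

def circumcenter_incenter_distance(a, b, c):
    """Distance between circumcenter and incenter via Euler's relation d^2 = R(R - 2r)."""
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    R = a * b * c / (4 * area)
    r = area / s                                                # inradius
    return math.sqrt(R * (R - 2 * r))

a, b, c = 7.0, 8.0, 5.0
print(fuhrmann_radius(a, b, c), circumcenter_incenter_distance(a, b, c))
# both print the same value (about 1.5275 for this triangle)
```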
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": " N"
},
{
"math_id": 2,
"text": " R\\sqrt{\\frac{a^3-a^2b-ab^2+b^3-a^2c+3abc-b^2c-ac^2+c^3}{abc}},"
},
{
"math_id": 3,
"text": "2r"
},
{
"math_id": 4,
"text": "r"
}
] |
https://en.wikipedia.org/wiki?curid=14898988
|
1490148
|
Perturbation (astronomy)
|
Classical approach to the many-body problem of astronomy
In astronomy, perturbation is the complex motion of a massive body subjected to forces other than the gravitational attraction of a single other massive body. The other forces can include a third (fourth, fifth, etc.) body, resistance, as from an atmosphere, and the off-center attraction of an oblate or otherwise misshapen body.
Introduction.
The study of perturbations began with the first attempts to predict planetary motions in the sky. In ancient times the causes were unknown. Isaac Newton, at the time he formulated his laws of motion and of gravitation, applied them to the first analysis of perturbations, recognizing the complex difficulties of their calculation.
Many of the great mathematicians since then have given attention to the various problems involved; throughout the 18th and 19th centuries there was demand for accurate tables of the position of the Moon and planets for marine navigation.
The complex motions of gravitational perturbations can be broken down. The hypothetical motion that the body follows under the gravitational effect of one other body only is a conic section, and can be described in geometrical terms. This is called a two-body problem, or an unperturbed Keplerian orbit. The differences between that and the actual motion of the body are perturbations due to the additional gravitational effects of the remaining body or bodies. If there is only one other significant body then the perturbed motion is a three-body problem; if there are multiple other bodies it is an n‑body problem. A general analytical solution (a mathematical expression to predict the positions and motions at any future time) exists for the two-body problem; when more than two bodies are considered analytic solutions exist only for special cases. Even the two-body problem becomes insoluble if one of the bodies is irregular in shape.
Most systems that involve multiple gravitational attractions present one primary body which is dominant in its effects (for example, a star, in the case of the star and its planet, or a planet, in the case of the planet and its satellite). The gravitational effects of the other bodies can be treated as perturbations of the hypothetical unperturbed motion of the planet or satellite around its primary body.
Mathematical analysis.
General perturbations.
In methods of general perturbations, general differential equations, either of motion or of change in the orbital elements, are solved analytically, usually by series expansions. The result is usually expressed in terms of algebraic and trigonometric functions of the orbital elements of the body in question and the perturbing bodies. This can be applied generally to many different sets of conditions, and is not specific to any particular set of gravitating objects. Historically, general perturbations were investigated first. The classical methods are known as "variation of the elements", "variation of parameters" or "variation of the constants of integration". In these methods, it is considered that the body is always moving in a conic section, however the conic section is constantly changing due to the perturbations. If all perturbations were to cease at any particular instant, the body would continue in this (now unchanging) conic section indefinitely; this conic is known as the osculating orbit and its orbital elements at any particular time are what are sought by the methods of general perturbations.
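As an illustration of the osculating orbit, the instantaneous elements can be read off directly from a position–velocity pair. The sketch below (illustrative, rounded heliocentric values; only the semi-major axis and eccentricity are computed) uses the vis-viva equation and the eccentricity vector:

```python
import numpy as np

def osculating_a_e(r_vec, v_vec, mu):
    """Semi-major axis and eccentricity of the osculating (instantaneous two-body)
    orbit from position and velocity vectors; mu = G(M + m)."""
    r = np.linalg.norm(r_vec)
    v2 = np.dot(v_vec, v_vec)
    a = 1.0 / (2.0 / r - v2 / mu)                                   # vis-viva, solved for a
    e_vec = ((v2 - mu / r) * r_vec - np.dot(r_vec, v_vec) * v_vec) / mu
    return a, np.linalg.norm(e_vec)

mu_sun = 1.327e20                                                   # m^3 s^-2
a, e = osculating_a_e(np.array([1.496e11, 0.0, 0.0]),               # ~1 au
                      np.array([0.0, 2.978e4, 0.0]), mu_sun)        # ~29.8 km/s
print(f"a = {a / 1.496e11:.3f} au, e = {e:.4f}")                    # near-circular orbit
```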
General perturbations takes advantage of the fact that in many problems of celestial mechanics, the two-body orbit changes rather slowly due to the perturbations; the two-body orbit is a good first approximation. General perturbations is applicable only if the perturbing forces are about one order of magnitude smaller, or less, than the gravitational force of the primary body. In the Solar System, this is usually the case; Jupiter, the second largest body, has a mass of about 1/1000 that of the Sun.
General perturbation methods are preferred for some types of problems, as the sources of certain observed motions are readily found. This is not necessarily so for special perturbations; the motions would be predicted with similar accuracy, but no information on the configurations of the perturbing bodies (for instance, an orbital resonance) which caused them would be available.
Special perturbations.
In methods of special perturbations, numerical datasets, representing values for the positions, velocities and accelerative forces on the bodies of interest, are made the basis of numerical integration of the differential equations of motion. In effect, the positions and velocities are perturbed directly, and no attempt is made to calculate the curves of the orbits or the orbital elements.
Special perturbations can be applied to any problem in celestial mechanics, as it is not limited to cases where the perturbing forces are small. Once applied only to comets and minor planets, special perturbation methods are now the basis of the most accurate machine-generated planetary ephemerides of the great astronomical almanacs. Special perturbations are also used for modeling an orbit with computers.
Cowell's formulation.
Cowell's formulation (so named for Philip H. Cowell, who, with A.C.D. Crommelin, used a similar method to predict the return of Halley's comet) is perhaps the simplest of the special perturbation methods. In a system of formula_1 mutually interacting bodies, this method mathematically solves for the Newtonian forces on body formula_0 by summing the individual interactions from the other formula_2 bodies:
formula_3
where formula_4 is the acceleration vector of body formula_5, formula_6 is the gravitational constant, formula_7 is the mass of body formula_2, formula_8 and formula_9 are the position vectors of objects formula_0 and formula_10 respectively, and formula_11 is the distance from object formula_5 to object formula_10, all vectors being referred to the barycenter of the system. This equation is resolved into components in formula_12 formula_13 and formula_14 and these are integrated numerically to form the new velocity and position vectors. This process is repeated as many times as necessary. The advantage of Cowell's method is ease of application and programming. A disadvantage is that when perturbations become large in magnitude (as when an object makes a close approach to another) the errors of the method also become large.
However, for many problems in celestial mechanics, this is never the case. Another disadvantage is that in systems with a dominant central body, such as the Sun, it is necessary to carry many significant digits in the arithmetic because of the large difference in the forces of the central body and the perturbing bodies, although with high precision numbers built into modern computers this is not as much of a limitation as it once was.
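A minimal sketch of Cowell's formulation is given below. It is illustrative only: the masses and starting conditions are rounded, the frame is not strictly barycentric, and a simple fixed-step leapfrog integrator stands in for the high-order integrators used in production ephemerides.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def accelerations(masses, positions):
    """Cowell's formulation: for each body, sum the attraction of every other body."""
    acc = np.zeros_like(positions)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                rij = positions[j] - positions[i]
                acc[i] += G * masses[j] * rij / np.linalg.norm(rij) ** 3
    return acc

def step(masses, positions, velocities, dt):
    """One kick-drift-kick (leapfrog) step; positions and velocities are (n, 3) arrays."""
    velocities = velocities + 0.5 * dt * accelerations(masses, positions)
    positions = positions + dt * velocities
    velocities = velocities + 0.5 * dt * accelerations(masses, positions)
    return positions, velocities

# Toy Sun–Earth–Moon system (rounded values).
masses = np.array([1.989e30, 5.972e24, 7.35e22])
positions = np.array([[0.0, 0.0, 0.0],
                      [1.496e11, 0.0, 0.0],
                      [1.496e11 + 3.84e8, 0.0, 0.0]])
velocities = np.array([[0.0, 0.0, 0.0],
                       [0.0, 2.978e4, 0.0],
                       [0.0, 2.978e4 + 1.02e3, 0.0]])

for _ in range(24 * 365):                      # one year in one-hour steps
    positions, velocities = step(masses, positions, velocities, 3600.0)
```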
Encke's method.
Encke's method begins with the osculating orbit as a reference and integrates numerically to solve for the variation from the reference as a function of time.
Its advantages are that perturbations are generally small in magnitude, so the integration can proceed in larger steps (with resulting lesser errors), and the method is much less affected by extreme perturbations. Its disadvantage is complexity; it cannot be used indefinitely without occasionally updating the osculating orbit and continuing from there, a process known as "rectification". Encke's method is similar to the general perturbation method of variation of the elements, except the rectification is performed at discrete intervals rather than continuously.
Letting formula_15 be the radius vector of the osculating orbit, formula_16 the radius vector of the perturbed orbit, and formula_17 the variation from the osculating orbit,
formula_18 and formula_19 are just the equations of motion of formula_16 and formula_20
where formula_21 is the gravitational parameter with formula_22 and formula_23 the masses of the central body and the perturbed body, formula_24 is the perturbing acceleration, and formula_25 and formula_26 are the magnitudes of formula_16 and formula_15.
Substituting from equations (3) and (4) into equation (2),
which, in theory, could be integrated twice to find formula_17. Since the osculating orbit is easily calculated by two-body methods, formula_15 and formula_17 are accounted for and formula_16 can be solved. In practice, the quantity in the brackets, formula_27, is the difference of two nearly equal vectors, and further manipulation is necessary to avoid the need for extra significant digits. Encke's method was more widely used before the advent of modern computers, when much orbit computation was performed on mechanical calculating machines.
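The numbered equations referred to in this passage can be stated explicitly. In the standard textbook presentation of Encke's method (given here for completeness, with the numbering matching the references above) they are:

\begin{align}
\delta\mathbf{r} &= \mathbf{r} - \boldsymbol{\rho} && (1)\\
\ddot{\delta\mathbf{r}} &= \ddot{\mathbf{r}} - \ddot{\boldsymbol{\rho}} && (2)\\
\ddot{\mathbf{r}} &= \mathbf{a}_{\text{per}} - \frac{\mu}{r^{3}}\,\mathbf{r} && (3)\\
\ddot{\boldsymbol{\rho}} &= -\frac{\mu}{\rho^{3}}\,\boldsymbol{\rho} && (4)\\
\ddot{\delta\mathbf{r}} &= \mathbf{a}_{\text{per}} + \mu\left(\frac{\boldsymbol{\rho}}{\rho^{3}} - \frac{\mathbf{r}}{r^{3}}\right) && (5)
\end{align}

Substituting (3) and (4) into (2) yields (5), whose bracketed term is the difference of two nearly equal vectors mentioned above.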
Periodic nature.
In the Solar System, many of the disturbances of one planet by another are periodic, consisting of small impulses each time a planet passes another in its orbit. This causes the bodies to follow motions that are periodic or quasi-periodic – such as the Moon in its strongly perturbed orbit, which is the subject of lunar theory. This periodic nature led to the discovery of Neptune in 1846 as a result of its perturbations of the orbit of Uranus.
On-going mutual perturbations of the planets cause long-term quasi-periodic variations in their orbital elements, most apparent when two planets' orbital periods are nearly in sync. For instance, five orbits of Jupiter (59.31 years) is nearly equal to two of Saturn (58.91 years). This causes large perturbations of both, with a period of 918 years, the time required for the small difference in their positions at conjunction to make one complete circle, first discovered by Laplace. Venus currently has the orbit with the least eccentricity, i.e. it is the closest to circular, of all the planetary orbits. In 25,000 years' time, Earth will have a more circular (less eccentric) orbit than Venus. It has been shown that long-term periodic disturbances within the Solar System can become chaotic over very long time scales; under some circumstances one or more planets can cross the orbit of another, leading to collisions.
The orbits of many of the minor bodies of the Solar System, such as comets, are often heavily perturbed, particularly by the gravitational fields of the gas giants. While many of these perturbations are periodic, others are not, and these in particular may represent aspects of chaotic motion. For example, in April 1996, Jupiter's gravitational influence caused the period of Comet Hale–Bopp's orbit to decrease from 4,206 to 2,380 years, a change that will not revert on any periodic basis.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ i\\ "
},
{
"math_id": 1,
"text": "\\ n\\ "
},
{
"math_id": 2,
"text": "j"
},
{
"math_id": 3,
"text": "\\mathbf{\\ddot{r}}_i = \\sum_{\\underset{j \\ne i}{j=1}}^n \\ G\\ m_j \\frac{\\ (\\mathbf{r}_j-\\mathbf{r}_i)\\ }{\\ \\| \\mathbf{r}_j-\\mathbf{r}_i \\|^3 }"
},
{
"math_id": 4,
"text": "\\ \\mathbf{\\ddot{r}}_i\\ "
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "G"
},
{
"math_id": 7,
"text": "\\ m_j\\ "
},
{
"math_id": 8,
"text": "\\ \\mathbf{r}_i\\ "
},
{
"math_id": 9,
"text": "\\ \\mathbf{r}_j\\ "
},
{
"math_id": 10,
"text": "\\ j\\ "
},
{
"math_id": 11,
"text": "\\ r_{ij} \\equiv \\| \\mathbf{r}_j-\\mathbf{r}_i \\|\\ "
},
{
"math_id": 12,
"text": "\\ x\\ ,"
},
{
"math_id": 13,
"text": "\\ y\\ ,"
},
{
"math_id": 14,
"text": "\\ z\\ ,"
},
{
"math_id": 15,
"text": "\\boldsymbol{\\rho}"
},
{
"math_id": 16,
"text": "\\mathbf{r}"
},
{
"math_id": 17,
"text": "\\delta \\mathbf{r}"
},
{
"math_id": 18,
"text": "\\mathbf{\\ddot{r}}"
},
{
"math_id": 19,
"text": "\\boldsymbol{\\ddot{\\rho}}"
},
{
"math_id": 20,
"text": "\\boldsymbol{\\rho},"
},
{
"math_id": 21,
"text": "\\mu = G(M+m)"
},
{
"math_id": 22,
"text": "M"
},
{
"math_id": 23,
"text": "m"
},
{
"math_id": 24,
"text": "\\mathbf{a}_{\\text{per}}"
},
{
"math_id": 25,
"text": "r"
},
{
"math_id": 26,
"text": "\\rho"
},
{
"math_id": 27,
"text": " {\\boldsymbol{\\rho} \\over \\rho^3} - {\\mathbf{r} \\over r^3} "
}
] |
https://en.wikipedia.org/wiki?curid=1490148
|
1490155
|
Price level
|
Hypothetical measure of overall prices
The general price level is a hypothetical measure of overall prices for some set of goods and services (the consumer basket), in an economy or monetary union during a given interval (generally one day), normalized relative to some base set. Typically, the general price level is approximated with a daily price "index", normally the Daily CPI. The general price level can change more than once per day during hyperinflation.
Theoretical foundation.
The classical dichotomy is the assumption that there is a relatively clean distinction between overall increases or decreases in prices and underlying, “real” economic variables. Thus, if prices "overall" increase or decrease, it is assumed that this change can be decomposed as follows:
Given a set formula_0 of goods and services, the total value of transactions in formula_0 at time formula_1 is
formula_2
where
formula_3 represents the quantity of formula_4 at time formula_1
formula_5 represents the prevailing price of formula_4 at time formula_1
formula_6 represents the “real” price of formula_4 at time formula_1
formula_7 is the price level at time formula_1
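A small numerical illustration of this decomposition (all figures invented): if every “real” price is scaled by the same price level, total nominal spending scales by exactly that factor.

```python
# Invented basket: "real" prices p', quantities q, and a price level P_t = 1.25.
real_prices = {"bread": 2.0, "milk": 1.5, "fuel": 4.0}
quantities  = {"bread": 10,  "milk": 6,   "fuel": 3}
price_level = 1.25

nominal_prices = {c: price_level * p for c, p in real_prices.items()}

total_nominal = sum(nominal_prices[c] * quantities[c] for c in quantities)
total_real    = sum(real_prices[c] * quantities[c] for c in quantities)

print(total_nominal, price_level * total_real)   # both equal 51.25
```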
The general price "level" is distinguished from a price "index" in that the existence of the former depends upon the classical dichotomy, while the latter is simply a computation, and many such will be possible regardless of whether they are meaningful.
Significance.
If, indeed, a general price level component could be distinguished, then it would be possible to "measure" the difference in overall prices between two regions or intervals. For example, the inflation rate could be measured as
formula_8
and “real” economic growth or contraction could be distinguished from mere price changes by deflating GDP or some other measure.
formula_9
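With invented figures, both expressions above can be evaluated directly, treating the price level as a plain index number:

```python
# Invented price levels and nominal GDP at two dates one year apart.
P_t0, P_t1 = 1.00, 1.03                  # price level at t0 and t1
gdp_t0, gdp_t1 = 20_000.0, 21_218.0      # nominal GDP in currency units
years = 1.0

inflation_rate = ((P_t1 - P_t0) / P_t0) / years        # 0.03, i.e. 3% per year
real_growth = gdp_t1 / P_t1 - gdp_t0 / P_t0            # change in deflated GDP: 600.0

print(f"inflation = {inflation_rate:.1%} per year, real growth = {real_growth:.0f}")
```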
Measuring price level.
Applicable indices are the consumer price index (CPI), the GDP deflator (implicit price deflator), and the Producer Price Index.
Price indices are used not only to measure the rate of inflation, but also to deflate nominal values into measures of real output and productivity.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "\\sum_{c\\,\\in\\, C} (p_{c,t}\\cdot q_{c,t})=\\sum_{c\\,\\in\\, C} [(P_t\\cdot p'_{c,t})\\cdot q_{c,t}]=P_t\\cdot \\sum_{c\\,\\in\\, C} (p'_{c,t}\\cdot q_{c,t})"
},
{
"math_id": 3,
"text": "q_{c,t}\\, "
},
{
"math_id": 4,
"text": "c"
},
{
"math_id": 5,
"text": "p_{c,t}\\,"
},
{
"math_id": 6,
"text": "p'_{c,t}"
},
{
"math_id": 7,
"text": "P_t"
},
{
"math_id": 8,
"text": "\\frac{(P_{t_1}-P_{t_0})/P_{t_0}}{t_1 -t_0}"
},
{
"math_id": 9,
"text": "\\frac{(GDP)_{t_1}}{P_{t_1}}-\\frac{(GDP)_{t_0}}{P_{t_0}}"
}
] |
https://en.wikipedia.org/wiki?curid=1490155
|
14903178
|
Yield gap
|
The yield gap or yield ratio is the ratio of the dividend yield of an equity to the yield of a long-term government bond. Typically equities have a higher yield (as a percentage of the market price of the equity), reflecting the higher risk of holding an equity.
formula_0
The purpose of calculating the yield gap is to assess whether the equity is over- or under-priced compared to bonds. For a given equity, the ratio can then be compared with unity and with its historical average.
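A short numerical sketch (invented figures; the interpretation in the closing comment is a common rule of thumb, not part of the definition above):

```python
dividend_yield_equity = 0.035   # 3.5% dividend yield on the equity
bond_yield = 0.028              # 2.8% yield on a long-term government bond

yield_gap = dividend_yield_equity / bond_yield
print(f"yield gap = {yield_gap:.2f}")   # 1.25

# Rule-of-thumb reading (an assumption, not from the article): a gap well above
# its historical norm suggests the equity is cheap relative to bonds, and a gap
# well below the norm suggests it is expensive.
```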
See also.
Yield (finance)
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mbox{Yield Gap} = \\frac {\\mbox{Yield Ratio of Equity}} {\\mbox{Yield Ratio of Bond}}"
}
] |
https://en.wikipedia.org/wiki?curid=14903178