id | title | text | formulas | url
---|---|---|---|---|
5779282
|
Projected dynamical system
|
Projected dynamical systems theory is a mathematical theory investigating the behaviour of dynamical systems whose solutions are restricted to a constraint set. The discipline shares connections to and applications with both the static world of optimization and equilibrium problems and the dynamical world of ordinary differential equations. A projected dynamical system is given by the flow to the projected differential equation
formula_0
where "K" is our constraint set. Differential equations of this form are notable for having a discontinuous vector field.
History of projected dynamical systems.
Projected dynamical systems have evolved out of the desire to dynamically model the behaviour of nonstatic solutions in equilibrium problems over some parameter, typically taken to be time. These dynamics differ from those of ordinary differential equations in that solutions are still restricted to whatever constraint set the underlying equilibrium problem was working on, e.g. nonnegativity of investments in financial modeling, convex polyhedral sets in operations research, etc. One particularly important class of equilibrium problems which has aided in the rise of projected dynamical systems has been that of variational inequalities.
The formalization of projected dynamical systems began in the 1990s in Section 5.3 of the paper of Dupuis and Ishii. However, similar concepts can be found in the mathematical literature which predate this, especially in connection with variational inequalities and differential inclusions.
Projections and Cones.
Any solution to our projected differential equation must remain inside of our constraint set "K" for all time. This desired result is achieved through the use of projection operators and two particular important classes of convex cones. Here we take "K" to be a closed, convex subset of some Hilbert space "X".
The "normal cone" to the set "K" at the point "x" in "K" is given by
formula_1
The "tangent cone" (or "contingent cone") to the set "K" at the point "x" is given by
formula_2
The "projection operator" (or "closest element mapping") of a point "x" in "X" to "K" is given by the point formula_3 in "K" such that
formula_4
for every "y" in "K".
The "vector projection operator" of a vector "v" in "X" at a point "x" in "K" is given by
formula_5
This is just the (one-sided) Gateaux derivative of the projection operator, computed in the direction of the vector field.
Projected Differential Equations.
Given a closed, convex subset "K" of a Hilbert space "X" and a vector field "-F" which takes elements from "K" into "X", the projected differential equation associated with "K" and "-F" is defined to be
formula_6
On the interior of "K" solutions behave as they would if the system were an unconstrained ordinary differential equation. However, since the vector field is discontinuous along the boundary of the set, projected differential equations belong to the class of discontinuous ordinary differential equations. While this makes much of ordinary differential equation theory inapplicable, it is known that when "-F" is a Lipschitz continuous vector field, a unique absolutely continuous solution exists through each initial point "x(0)=x0" in "K" on the interval formula_7.
This differential equation can be alternately characterized by
formula_8
or
formula_9
The convention of denoting the vector field "-F" with a negative sign arises from a particular connection projected dynamical systems shares with variational inequalities. The convention in the literature is to refer to the vector field as positive in the variational inequality, and negative in the corresponding projected dynamical system.
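As an informal numerical illustration, the flow can be approximated by a projected forward-Euler scheme: take an unconstrained Euler step along "-F" and project the result back onto "K". The Python sketch below assumes "K" is the nonnegative orthant and uses a linear vector field; the names project_onto_K and projected_euler are illustrative, not standard.

```python
import numpy as np

def project_onto_K(x):
    """Projection onto the constraint set K, here taken to be the
    nonnegative orthant (an illustrative choice of closed convex set)."""
    return np.maximum(x, 0.0)

def projected_euler(F, x0, dt=1e-3, steps=5000):
    """Approximate the flow of dx/dt = Pi_K(x, -F(x)) by a projected
    forward-Euler scheme: step along -F, then project back onto K."""
    x = project_onto_K(np.asarray(x0, dtype=float))
    for _ in range(steps):
        x = project_onto_K(x - dt * F(x))
    return x

# A linear vector field whose unconstrained equilibrium lies outside K:
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
F = lambda x: A @ x + b

print(projected_euler(F, x0=[1.0, 1.0]))  # settles near (0, 2), on the boundary of the orthant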
|
[
{
"math_id": 0,
"text": "\n\\frac{dx(t)}{dt} = \\Pi_K(x(t),-F(x(t)))\n"
},
{
"math_id": 1,
"text": "\nN_K(x) = \\{ p \\in V | \\langle p, x - x^* \\rangle \\geq 0, \\forall x^* \\in K \\}.\n"
},
{
"math_id": 2,
"text": "\nT_K(x) = \\overline{\\bigcup_{h>0} \\frac{1}{h} (K-x)}.\n"
},
{
"math_id": 3,
"text": "P_K(x)"
},
{
"math_id": 4,
"text": "\n\\| x-P_K(x) \\| \\leq \\| x-y \\|\n"
},
{
"math_id": 5,
"text": "\n\\Pi_K(x,v)=\\lim_{\\delta \\to 0^+} \\frac{P_K(x+\\delta v)-x}{\\delta}.\n"
},
{
"math_id": 6,
"text": "\n\\frac{dx(t)}{dt} = \\Pi_K(x(t),-F(x(t))).\n"
},
{
"math_id": 7,
"text": "[0,\\infty)"
},
{
"math_id": 8,
"text": "\n\\frac{dx(t)}{dt} = P_{T_K(x(t))}(-F(x(t)))\n"
},
{
"math_id": 9,
"text": "\n\\frac{dx(t)}{dt} = -F(x(t))-P_{N_K(x(t))}(-F(x(t))).\n"
}
] |
https://en.wikipedia.org/wiki?curid=5779282
|
577966
|
Slip angle
|
Term or maneuver in vehicle dynamics
In vehicle dynamics, slip angle or sideslip angle is the angle between the direction in which a wheel is pointing and the direction in which it is actually traveling (i.e., the angle between the forward velocity vector formula_1 and the vector sum of wheel forward velocity formula_1 and lateral velocity formula_2). This slip angle results in a force, the cornering force, which is in the plane of the contact patch and perpendicular to the intersection of the contact patch and the midplane of the wheel. This cornering force increases approximately linearly for the first few degrees of slip angle, then increases non-linearly to a maximum before beginning to decrease.
The slip angle, formula_0, is defined as formula_3
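A direct transcription of this definition into code (a minimal sketch; the function name and the sample speeds are illustrative):

```python
import numpy as np

def slip_angle(v_x, v_y):
    """Slip angle alpha = -arctan(v_y / |v_x|), in radians."""
    return -np.arctan2(v_y, abs(v_x))

# A wheel rolling forward at 20 m/s while slipping sideways at 1.5 m/s:
print(np.degrees(slip_angle(20.0, 1.5)))  # about -4.3 degrees
```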
Causes.
A non-zero slip angle arises because of deformation in the tire carcass and tread. As the tire rotates, the friction between the contact patch and the road results in individual tread 'elements' (finite sections of tread) remaining stationary with respect to the road. If a side-slip velocity "u" is introduced, the contact patch will be deformed. When a tread element enters the contact patch, the friction between the road and the tire causes the tread element to remain stationary, yet the tire continues to move laterally. Thus the tread element will be ‘deflected’ sideways. While it is equally valid to frame this as the tire/wheel being deflected away from the stationary tread element, convention is for the co-ordinate system to be fixed around the wheel mid-plane.
While the tread element moves through the contact patch it is deflected further from the wheel mid-plane. This deflection gives rise to the slip angle, and to the cornering force. The rate at which the cornering force builds up is described by the relaxation length.
Effects.
The ratios between the slip angles of the front and rear axles (a function of the slip angles of the front and rear tires respectively) will determine the vehicle's behavior in a given turn. If the ratio of front to rear slip angles is greater than 1:1, the vehicle will tend to understeer, while a ratio of less than 1:1 will produce oversteer. Actual instantaneous slip angles depend on many factors, including the condition of the road surface, but a vehicle's suspension can be designed to promote specific dynamic characteristics. A principal means of adjusting developed slip angles is to alter the relative roll couple (the rate at which weight transfers from the inside to the outside wheel in a turn) front to rear by varying the relative amount of front and rear lateral load transfer. This can be achieved by modifying the height of the roll centers, or by adjusting roll stiffness, either through suspension changes or the addition of an anti-roll bar.
Because of asymmetries in the side-slip along the length of the contact patch, the resultant force of this side-slip occurs away from the geometric center of the contact patch, a distance described as the pneumatic trail, and so creates a torque on the tire, the so-called self aligning torque.
Measurement of slip angle.
There are two main ways to measure slip angle of a tire: on a vehicle as it moves, or on a dedicated testing device.
There are a number of devices which can be used to measure slip angle on a vehicle as it moves; some use optical methods, some use inertial methods, some GPS and some both GPS and inertial.
Various test machines have been developed to measure slip angle in a controlled environment. A motorcycle tire test machine at the University of Padua uses a 3-meter-diameter disk that rotates under a tire held at a fixed steer and camber angle of up to 54 degrees. Sensors measure the force and moment generated, and a correction is made to account for the curvature of the track. Other devices use the inner or outer surface of rotating drums, sliding planks, conveyor belts, or a trailer that presses the test tire to an actual road surface.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "v_x"
},
{
"math_id": 2,
"text": "v_y"
},
{
"math_id": 3,
"text": "\\alpha \\triangleq -\\arctan\\left(\\frac{v_y}{|v_x|}\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=577966
|
57797421
|
Heilbronn set
|
In mathematics, a Heilbronn set is an infinite set "S" of natural numbers for which every real number can be arbitrarily closely approximated by a fraction whose denominator is in "S". For any given real number formula_0 and natural number formula_1, it is easy to find the integer formula_2 such that formula_3 is closest to formula_0. For example, for the real number formula_4 and formula_5 we have formula_6. If we call the closeness of formula_0 to formula_3 the absolute difference between formula_7 and formula_2, the closeness is always less than 1/2 (in our example it is 0.15926...). A collection of numbers is a Heilbronn set if for any formula_0 we can always find a sequence of values for formula_1 in the set where the closeness tends to zero.
More mathematically, let formula_8 denote the distance from formula_9 to the nearest integer; then formula_10 is a Heilbronn set if and only if for every real number formula_0 and every formula_11 there exists formula_12 such that formula_13.
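The distance-to-nearest-integer quantity can be explored numerically. The short Python sketch below (illustrative only, with an arbitrary search bound) measures the closeness for formula_0 equal to formula_4 over the natural numbers up to 10,000:

```python
from math import pi

def dist_to_nearest_int(x):
    """The quantity ||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

# For theta = pi and the natural numbers as candidate set, the closeness
# ||h * theta|| can be made very small by choosing h suitably:
theta = pi
best_h = min(range(1, 10_000), key=lambda h: dist_to_nearest_int(h * theta))
print(best_h, dist_to_nearest_int(best_h * theta))  # 113 gives about 3.0e-5
```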
Examples.
The natural numbers are a Heilbronn set as Dirichlet's approximation theorem shows that there exists formula_14 with formula_15.
The formula_16th powers of integers are a Heilbronn set. This follows from a result of I. M. Vinogradov who showed that for every formula_17 and formula_16 there exists an exponent formula_18 and formula_19 such that formula_20. In the case formula_21 Hans Heilbronn was able to show that formula_22 may be taken arbitrarily close to 1/2. Alexandru Zaharescu has improved Heilbronn's result to show that formula_22 may be taken arbitrarily close to 4/7.
Any Van der Corput set is also a Heilbronn set.
Example of a non-Heilbronn set.
The powers of 10 are not a Heilbronn set. Take formula_23; then the statement that formula_24 for some formula_16 is equivalent to saying that the decimal expansion of formula_0 has a run of three zeros or three nines somewhere. This is not true for all real numbers.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\theta"
},
{
"math_id": 1,
"text": "h"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "g/h"
},
{
"math_id": 4,
"text": "\\pi"
},
{
"math_id": 5,
"text": "h=100"
},
{
"math_id": 6,
"text": "g=314"
},
{
"math_id": 7,
"text": "h\\theta"
},
{
"math_id": 8,
"text": "\\|\\alpha\\|"
},
{
"math_id": 9,
"text": "\\alpha"
},
{
"math_id": 10,
"text": "\\mathcal H"
},
{
"math_id": 11,
"text": "\\varepsilon>0"
},
{
"math_id": 12,
"text": "h\\in\\mathcal H"
},
{
"math_id": 13,
"text": "\\|h\\theta\\|<\\varepsilon"
},
{
"math_id": 14,
"text": "q<[1/\\varepsilon]"
},
{
"math_id": 15,
"text": "\\|q\\theta\\|<\\varepsilon"
},
{
"math_id": 16,
"text": "k"
},
{
"math_id": 17,
"text": "N"
},
{
"math_id": 18,
"text": "\\eta_k>0"
},
{
"math_id": 19,
"text": "q<N"
},
{
"math_id": 20,
"text": "\\|q^k\\theta\\|\\ll N^{-\\eta_k}"
},
{
"math_id": 21,
"text": "k=2"
},
{
"math_id": 22,
"text": "\\eta_2"
},
{
"math_id": 23,
"text": "\\varepsilon=0.001"
},
{
"math_id": 24,
"text": "\\|10^k\\theta\\|<\\varepsilon"
}
] |
https://en.wikipedia.org/wiki?curid=57797421
|
57800229
|
Cramér's theorem (large deviations)
|
Cramér's theorem is a fundamental result in the theory of large deviations, a subdiscipline of probability theory. It determines the rate function of a sequence of iid random variables.
A weak version of this result was first shown by Harald Cramér in 1938.
Statement.
The logarithmic moment generating function (which is the cumulant-generating function) of a random variable is defined as:
formula_0
Let formula_1 be a sequence of iid real random variables with finite logarithmic moment generating function, i.e. formula_2 for all formula_3.
Then the Legendre transform of formula_4:
formula_5
satisfies
formula_6
for all formula_7
In the terminology of the theory of large deviations the result can be reformulated as follows:
If formula_1 is a sequence of iid random variables, then the distributions formula_8 satisfy a large deviation principle with rate function formula_9.
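The statement can be checked by simulation. The sketch below (illustrative; sample sizes and trial counts are arbitrary) uses standard normal increments, for which formula_4 equals t²/2 and hence formula_9 evaluated at x is x²/2, and compares Monte Carlo estimates of −(1/n) log P(mean ≥ x) against that value:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rate(x, n, trials=200_000):
    """Monte Carlo estimate of -(1/n) * log P(sample mean of n iid N(0,1) >= x)."""
    means = rng.standard_normal((trials, n)).mean(axis=1)
    p = (means >= x).mean()
    return -np.log(p) / n if p > 0 else float("inf")

x = 0.5
print("Legendre transform at x:", x**2 / 2)  # 0.125
for n in (10, 25, 50):
    # The finite-n estimates approach 0.125 slowly from above
    # (and become noisy as the tail probability shrinks).
    print(n, empirical_rate(x, n))
```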
|
[
{
"math_id": 0,
"text": " \\Lambda(t)=\\log \\operatorname E [\\exp(tX_1)]. "
},
{
"math_id": 1,
"text": " X_1, X_2, \\dots "
},
{
"math_id": 2,
"text": " \\Lambda(t) < \\infty "
},
{
"math_id": 3,
"text": " t \\in \\mathbb R "
},
{
"math_id": 4,
"text": " \\Lambda "
},
{
"math_id": 5,
"text": " \\Lambda^*(x):= \\sup_{t \\in \\mathbb R} \\left(tx-\\Lambda(t) \\right) "
},
{
"math_id": 6,
"text": " \\lim_{n \\to \\infty} \\frac 1n \\log \\left(P\\left(\\sum_{i=1}^n X_i \\geq nx \\right)\\right) = -\\Lambda^*(x) "
},
{
"math_id": 7,
"text": " x > \\operatorname E[X_1]. "
},
{
"math_id": 8,
"text": " \\left(\\mathcal L ( \\tfrac 1n \\sum_{i=1}^n X_i) \\right)_{n \\in \\N}"
},
{
"math_id": 9,
"text": " \\Lambda^* "
}
] |
https://en.wikipedia.org/wiki?curid=57800229
|
57800775
|
Green data center
|
Server facility which utilizes energy-efficient technologies
A green data center, or sustainable data center, is a service facility which utilizes energy-efficient technologies. They do not contain obsolete systems (such as inactive or underused servers), and take advantage of newer, more efficient technologies.
With the exponential growth and usage of the Internet, power consumption in data centers has increased significantly. Due to the resulting environmental impact, increase in public awareness, higher cost of energy and legislative action, increased pressure has been placed on companies to follow a green policy. For these reasons, the creation of sustainable data centers has become essential in an environmental and a business sense.
Energy use.
The use of high-performance computing techniques has increased, trading energy consumption for increased performance. Industry estimates suggest that data centers consume three to five percent of the world's global energy. According to an AFCOM State of the Data Center survey, 70 percent of data-center providers indicated that power density per rack has increased significantly since 2013. Managers have been forced to find new ways to power their data centers with renewable energy sources such as hydro, solar, geothermal, and wind. More efficient technologies were developed to decrease data-center power consumption.
Metrics.
Several metrics have been developed to measure power efficiency in data centers. Power usage effectiveness (PUE) and carbon usage effectiveness (CUE) are two frequently-used metrics created by the Green Grid (TGG), a global consortium dedicated to advancing energy efficiency in data centers.
Power usage effectiveness.
PUE was invented in 2007, and proposed new guidelines to measure energy use in data centers.
formula_0
This ratio describes how much extra energy a data center needs to maintain IT equipment for every watt delivered to the equipment. The best PUE a data center can have is 1: an ideal situation, with no extra energy use. When PUE was introduced, studies found that the industry-average PUE was between 2.5 and 3. In more recent studies, the average PUE fell to about 1.7 by using this framework. PUE began the shift of the data-center industry towards energy efficiency.
Although PUE is the most frequently used metric for data centers to measure energy efficiency, its reliability is still debated.
Carbon usage effectiveness.
Carbon usage effectiveness (CUE) is another metric used to measure energy usage and sustainability in data centers. It is calculated with the following formula:
formula_1
Another way to express this formula is as the product of the carbon dioxide emission factor (CEF) and the PUE, where the CEF is the kg of <chem>CO2</chem> produced for each kilowatt-hour of electricity:
formula_2 × formula_3
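A small worked example of both metrics (the facility figures and the grid emission factor below are hypothetical, chosen only to show that CUE equals CEF times PUE):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def cue(total_co2_kg, it_equipment_kwh):
    """Carbon usage effectiveness: kg CO2 emitted / IT equipment energy (kWh)."""
    return total_co2_kg / it_equipment_kwh

# Illustrative figures (not from the article): a facility drawing 1.7 GWh per
# year, 1.0 GWh of which reaches IT equipment, on a grid emitting 0.4 kg CO2/kWh.
it_kwh = 1_000_000
facility_kwh = 1_700_000
cef = 0.4  # kg CO2 per kWh
print("PUE:", pue(facility_kwh, it_kwh))         # 1.7
print("CUE:", cue(cef * facility_kwh, it_kwh))   # 0.68 = CEF * PUE
```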
Certifications.
Data centers in the United States may apply to be certified as green data centers. The most widely used green building rating system is the Leadership in Energy and Environmental Design (LEED). Developed by the U.S. Green Building Council, it is available in several categories. Depending on ratings, data centers may receive a silver, gold or platinum certification. The platinum certification is given to data centers with the highest level of environmentally-responsible construction and efficient use of resources.
Data centers may also be certified under the National Data Center Energy Efficiency Information Program by Energy Star, part of an initiative by the U.S. Environmental Protection Agency and the U.S. Department of Energy. The program certifies buildings and consumer products for energy efficiency. Only data centers which are in the top 25 percent in energy performance may receive Energy Star certification.
Technologies.
Several technologies increase efficiency and decrease energy consumption in data centers.
Low-power servers.
Low-power servers are more energy-efficient than conventional servers in data centers. They use the technology of smartphone computing, which tries to balance performance with energy consumption. The first low-power servers were introduced in 2012 by large IT providers such as Dell and Hewlett-Packard. Used correctly, low-power servers can be much more efficient than conventional servers. They can have a significant impact on data-center efficiency, decreasing power consumption and the operating cost of cooling facilities.
Modular data centers.
A modular data center is a portable data center which can be placed anywhere data capacity is needed. Compared with traditional data centers, they are designed for rapid deployment, energy efficiency and high density. These ready-made data centers in a box became very popular. The HP EcoPod modular data center supports over 4,000 servers with a PUE rating of 1.05, using free-air cooling.
Free air cooling.
Free air cooling systems use outside air instead of traditional data-center computer room air conditioner (CRAC) units. Although outdoor air still needs to be filtered and humidified, much less energy is required to cool a data center with this method. Outdoor air temperature is an issue here, and the data center's location plays a critical role in this technology.
Hot and cold aisle containment.
In this method, the rows of racks are aligned with the backs of the servers facing each other; the aisles are enclosed, to capture the air. In hot aisle containment, the heat produced by the servers is pumped to the cooling units. In cold aisle containment, cold air is pumped to the enclosed aisles. Both containment methods are more effective than traditional cooling technologies, and can help reduce energy consumption (and its impact). Although it may be more difficult to implement, hot aisle containment is more effective than cold aisle containment.
Reusing waste heat.
Data centers use electric power, releasing more than 98 percent of this electricity as heat. Waste heat can be actively reused, and a data center becomes a closed-loop heating system with no waste. Examples include:
Ultrasonic humidification.
Some humidity is necessary for data centers to work efficiently and prevent damage to devices and servers. Ultrasonic humidification uses ultrasound to create moisture, using 90 percent less energy than conventional methods such as resistance steam humidifiers.
Evaporative cooling.
Evaporative cooling reduces heat by the evaporation of water. Two main methods are used: evaporation pads and high-pressure spray systems. With evaporation pads – the more popular method – air is drawn through the pads, making water evaporate and cooling the air. The other technique, high-pressure spray systems, needs a larger area and consumes more energy with pumps. Evaporative cooling is dependent on geographical location and season, because both affect the moisture level of the air. Compared to traditional mechanical cooling systems, evaporative cooling generally uses significantly less electricity.
Direct current data centers.
Direct current data centers are data centers that produce direct current on site with solar panels and store the electricity on site in a battery storage power station. Computers run on direct current, so the need to invert the AC power from the grid would be eliminated. The data center site could still use AC grid power as a backup. DC data centers could be 10% more efficient and use less floor space for inverting components.
Investment in Green Data Center.
According to a study by Arizton Advisory & Intelligence, total investment in the global green data center market reached $35.58 billion in 2021 and is expected to grow at a CAGR of 7.6%.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathrm{PUE} = {\\mbox{Total Facility Power} \\over \\mbox{IT Equipment Power}} "
},
{
"math_id": 1,
"text": " \\mathrm{CUE} = \\frac{Total \\rm{CO_2} Emissions Caused}{IT Equipment Power} "
},
{
"math_id": 2,
"text": " \\mathrm{CUE} = \\frac{\\rm{CO_2} emitted (kg\\rm{CO_2}eq)}{unit of energy (kWh)}"
},
{
"math_id": 3,
"text": " \\frac{Total Facility Power}{IT Equipment Power}"
}
] |
https://en.wikipedia.org/wiki?curid=57800775
|
578038
|
Reversal potential
|
In a biological membrane, the reversal potential is the membrane potential at which the direction of ionic current reverses. At the reversal potential, there is no net flow of ions from one side of the membrane to the other. For channels that are permeable to only a single type of ion, the reversal potential is identical to the equilibrium potential of the ion.
Equilibrium potential.
The equilibrium potential for an ion is the membrane potential at which there is no net movement of the ion. The flow of any inorganic ion, such as Na+ or K+, through an ion channel (since membranes are normally impermeable to ions) is driven by the electrochemical gradient for that ion. This gradient consists of two parts, the difference in the concentration of that ion across the membrane, and the voltage gradient. When these two influences balance each other, the electrochemical gradient for the ion is zero and there is no net flow of the ion through the channel; this also translates to no current across the membrane so long as only one ionic species is involved. The voltage gradient at which this equilibrium is reached is the equilibrium potential for the ion and it can be calculated from the Nernst equation.
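For instance (a minimal sketch with typical, assumed mammalian K+ concentrations, not values from this article), the Nernst equation E = (RT/zF)·ln([out]/[in]) gives an equilibrium potential of roughly −90 mV for potassium at 37 °C:

```python
import math

def nernst_potential(z, conc_out, conc_in, temp_c=37.0):
    """Equilibrium (Nernst) potential in volts: E = (R*T)/(z*F) * ln([out]/[in])."""
    R, F = 8.314, 96485.0
    T = temp_c + 273.15
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Approximate mammalian K+ concentrations: 5 mM outside, 140 mM inside.
print(nernst_potential(z=1, conc_out=5.0, conc_in=140.0))  # about -0.089 V
```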
Mathematical models and the driving force.
We can consider as an example a positively charged ion, such as K+, and a negatively charged membrane, as it is commonly the case in most organisms. The membrane voltage opposes the flow of the potassium ions out of the cell and the ions can leave the interior of the cell only if they have sufficient thermal energy to overcome the energy barrier produced by the negative membrane voltage. However, this biasing effect can be overcome by an opposing concentration gradient if the interior concentration is high enough which favours the potassium ions leaving the cell.
An important concept related to the equilibrium potential is the driving force. The driving force is simply defined as the difference between the actual membrane potential and an ion's equilibrium potential, formula_0, where formula_1 refers to the equilibrium potential for a specific ion. Relatedly, the membrane current per unit area due to the type formula_2 ion channel is given by the following equation:
formula_3
where formula_0 is the driving force and formula_4 is the specific conductance, or conductance per unit area. Note that the ionic current will be zero if the membrane is impermeable to that ion in question or if the membrane voltage is exactly equal to the equilibrium potential of that ion.
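A minimal numerical sketch of this relation (the conductance and voltage values are illustrative assumptions, not taken from the article):

```python
def channel_current(g_i, v_m, e_i):
    """Membrane current per unit area for ion type i:
    i_i = g_i * (V_m - E_i), where (V_m - E_i) is the driving force."""
    return g_i * (v_m - e_i)

# Illustrative values: K+ conductance of 5 mS/cm^2, membrane at -60 mV,
# K+ equilibrium potential at -90 mV.
print(channel_current(5e-3, -60e-3, -90e-3))  # 1.5e-4 A/cm^2 (outward K+ current)
```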
Use in research.
When Vm is at the reversal potential for an event such as a synaptic potential ("V"m − "E"rev is equal to 0), the identity of the ions that flow during an end-plate current (EPC) can be deduced by comparing the reversal potential of the EPC to the equilibrium potential for various ions. For instance, several excitatory ionotropic ligand-gated neurotransmitter receptors including glutamate receptors (AMPA, NMDA, and kainate), nicotinic acetylcholine (nACh), and serotonin (5-HT3) receptors are nonselective cation channels that pass Na+ and K+ in nearly equal proportions, giving a reversal potential close to zero. The inhibitory ionotropic ligand-gated neurotransmitter receptors that carry Cl−, such as GABAA and glycine receptors, have reversal potentials close to the resting potential (approximately –70 mV) in neurons.
This line of reasoning led to the development of experiments (by Akira Takeuchi and Noriko Takeuchi in 1960) that demonstrated that acetylcholine-activated ion channels are approximately equally permeable to Na+ and K+ ions. The experiment was performed by lowering the external Na+ concentration, which lowers (makes more negative) the Na+ equilibrium potential and produces a negative shift in reversal potential. Conversely, increasing the external K+ concentration raises (makes more positive) the K+ equilibrium potential and produces a positive shift in reversal potential. A general expression for reversal potential of synaptic events, including for decreases in conductance, has been derived.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V_\\mathrm{m}-E_\\mathrm{i}\\ "
},
{
"math_id": 1,
"text": "E_\\mathrm{i}\\ "
},
{
"math_id": 2,
"text": "i "
},
{
"math_id": 3,
"text": "i_\\mathrm{i} = g_\\mathrm{i} \\left(V_\\mathrm{m}-E_\\mathrm{i}\\right) "
},
{
"math_id": 4,
"text": "g_\\mathrm{i} "
}
] |
https://en.wikipedia.org/wiki?curid=578038
|
5780875
|
Residually finite group
|
In the mathematical field of group theory, a group "G" is residually finite or finitely approximable if for every element "g" that is not the identity in "G" there is a homomorphism "h" from "G" to a finite group, such that
formula_0
There are a number of equivalent definitions: a group is residually finite if and only if the intersection of all its subgroups of finite index is trivial; equivalently, if and only if the intersection of all its normal subgroups of finite index is trivial; equivalently, if and only if it embeds into a direct product of a family of finite groups; equivalently, if and only if the natural homomorphism from the group to its profinite completion is injective.
Examples.
Examples of groups that are residually finite are finite groups, free groups, finitely generated nilpotent groups, polycyclic-by-finite groups, finitely generated linear groups, and fundamental groups of compact 3-manifolds.
Subgroups of residually finite groups are residually finite, and direct products of residually finite groups are residually finite. Any inverse limit of residually finite groups is residually finite. In particular, all profinite groups are residually finite.
Examples of non-residually finite groups can be constructed using the fact that all finitely generated residually finite groups are Hopfian groups. For example the Baumslag–Solitar group "B"(2,3) is not Hopfian, and therefore not residually finite.
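To make the defining condition concrete, the following sketch (illustrative; the function name is made up) exhibits, for the residually finite group of integers under addition, a homomorphism onto a finite group that does not kill a given nonzero element:

```python
def separating_quotient(g):
    """For the residually finite group (Z, +): given a nonzero integer g,
    return a modulus n such that reduction mod n -- a homomorphism onto the
    finite group Z/nZ -- does not send g to the identity."""
    assert g != 0
    n = abs(g) + 1
    assert g % n != 0  # h(g) = g mod n is nonzero, as required
    return n

print(separating_quotient(6))   # 7: 6 mod 7 = 6 != 0
print(separating_quotient(-4))  # 5: -4 mod 5 = 1 != 0
```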
Profinite topology.
Every group "G" may be made into a topological group by taking as a basis of open neighbourhoods of the identity, the collection of all normal subgroups of finite index in "G". The resulting topology is called the profinite topology on "G". A group is residually finite if, and only if, its profinite topology is Hausdorff.
A group whose cyclic subgroups are closed in the profinite topology is said to be formula_1.
Groups each of whose finitely generated subgroups are closed in the profinite topology are called subgroup separable (also LERF, for "locally extended residually finite").
A group in which every conjugacy class is closed in the profinite topology is called conjugacy separable.
Varieties of residually finite groups.
One question is: what are the properties of a variety all of whose groups are residually finite? Two results about these are:
|
[
{
"math_id": 0,
"text": "h(g) \\neq 1.\\,"
},
{
"math_id": 1,
"text": "\\Pi_C\\,"
}
] |
https://en.wikipedia.org/wiki?curid=5780875
|
5781226
|
Quasiconvex function
|
Mathematical function with convex lower level sets
In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form formula_0 is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave.
Quasiconvexity is a more general property than convexity in that all convex functions are also quasiconvex, but not all quasiconvex functions are convex. "Univariate" unimodal functions are quasiconvex or quasiconcave, however this is not necessarily the case for functions with multiple arguments. For example, the 2-dimensional Rosenbrock function is unimodal but not quasiconvex and functions with star-convex sublevel sets can be unimodal without being quasiconvex.
Definition and properties.
A function formula_1 defined on a convex subset formula_2 of a real vector space is quasiconvex if for all formula_3 and formula_4 we have
formula_5
In words, formula_6 is quasiconvex if its value at any point lying directly between two other points never exceeds the larger of its values at those two points. Note that the points formula_7 and formula_8, and the point directly between them, can be points on a line or more generally points in "n"-dimensional space.
An alternative way (see introduction) of defining a quasi-convex function formula_9 is to require that each sublevel set
formula_10
is a convex set.
If furthermore
formula_11
for all formula_12 and formula_13, then formula_6 is strictly quasiconvex. That is, strict quasiconvexity requires that a point directly between two other points must give a lower value of the function than one of the other points does.
A quasiconcave function is a function whose negative is quasiconvex, and a strictly quasiconcave function is a function whose negative is strictly quasiconvex. Equivalently a function formula_6 is quasiconcave if
formula_14
and strictly quasiconcave if
formula_15
A (strictly) quasiconvex function has (strictly) convex lower contour sets, while a (strictly) quasiconcave function has (strictly) convex upper contour sets.
A function that is both quasiconvex and quasiconcave is quasilinear.
A particular case of quasi-concavity, if formula_16, is unimodality, in which there is a locally maximal value.
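The defining inequality is easy to test numerically. The sketch below (illustrative) shows that the 2-dimensional Rosenbrock function mentioned above fails it, even though the function is unimodal:

```python
def rosenbrock(x, y):
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Two points and their midpoint:
p, q = (1.0, 1.0), (-1.0, 1.0)
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

f_p, f_q, f_mid = rosenbrock(*p), rosenbrock(*q), rosenbrock(*mid)
print(f_p, f_q, f_mid)  # 0.0, 4.0, 101.0
# Quasiconvexity would require f(mid) <= max(f(p), f(q)); here 101 > 4,
# so the Rosenbrock function is not quasiconvex.
print(f_mid <= max(f_p, f_q))  # False
```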
Applications.
Quasiconvex functions have applications in mathematical analysis, in mathematical optimization, and in game theory and economics.
Mathematical optimization.
In nonlinear optimization, quasiconvex programming studies iterative methods that converge to a minimum (if one exists) for quasiconvex functions. Quasiconvex programming is a generalization of convex programming. Quasiconvex programming is used in the solution of "surrogate" dual problems, whose biduals provide quasiconvex closures of the primal problem, which therefore provide tighter bounds than do the convex closures provided by Lagrangian dual problems. In theory, quasiconvex programming and convex programming problems can be solved in a reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated); however, such theoretically "efficient" methods use "divergent-series" step size rules, which were first developed for classical subgradient methods. Classical subgradient methods using divergent-series rules are much slower than modern methods of convex minimization, such as subgradient projection methods, bundle methods of descent, and nonsmooth filter methods.
Economics and partial differential equations: Minimax theorems.
In microeconomics, quasiconcave utility functions imply that consumers have convex preferences. Quasiconvex functions are important
also in game theory, industrial organization, and general equilibrium theory, particularly for applications of Sion's minimax theorem. Generalizing a minimax theorem of John von Neumann, Sion's theorem is also used in the theory of partial differential equations.
|
[
{
"math_id": 0,
"text": "(-\\infty,a)"
},
{
"math_id": 1,
"text": "f:S \\to \\mathbb{R}"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "x, y \\in S"
},
{
"math_id": 4,
"text": "\\lambda \\in [0,1]"
},
{
"math_id": 5,
"text": "f(\\lambda x + (1 - \\lambda)y)\\leq\\max\\big\\{f(x),f(y)\\big\\}."
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "y"
},
{
"math_id": 9,
"text": "f(x)"
},
{
"math_id": 10,
"text": "S_\\alpha(f) = \\{x\\mid f(x) \\leq \\alpha\\}"
},
{
"math_id": 11,
"text": "f(\\lambda x + (1 - \\lambda)y)<\\max\\big\\{f(x),f(y)\\big\\}"
},
{
"math_id": 12,
"text": "x \\neq y"
},
{
"math_id": 13,
"text": "\\lambda \\in (0,1)"
},
{
"math_id": 14,
"text": "f(\\lambda x + (1 - \\lambda)y)\\geq\\min\\big\\{f(x),f(y)\\big\\}."
},
{
"math_id": 15,
"text": "f(\\lambda x + (1 - \\lambda)y)>\\min\\big\\{f(x),f(y)\\big\\}"
},
{
"math_id": 16,
"text": "S \\subset \\mathbb{R}"
},
{
"math_id": 17,
"text": "f = \\max \\left\\lbrace f_1 , \\ldots , f_n \\right\\rbrace"
},
{
"math_id": 18,
"text": "g : \\mathbb{R}^{n} \\rightarrow \\mathbb{R}"
},
{
"math_id": 19,
"text": "h : \\mathbb{R} \\rightarrow \\mathbb{R}"
},
{
"math_id": 20,
"text": "f = h \\circ g"
},
{
"math_id": 21,
"text": "f(x,y)"
},
{
"math_id": 22,
"text": "C"
},
{
"math_id": 23,
"text": "h(x) = \\inf_{y \\in C} f(x,y)"
},
{
"math_id": 24,
"text": "f(x), g(x)"
},
{
"math_id": 25,
"text": "(f+g)(x) = f(x) + g(x)"
},
{
"math_id": 26,
"text": "f(x), g(y)"
},
{
"math_id": 27,
"text": "h(x,y) = f(x) + g(y)"
},
{
"math_id": 28,
"text": "x \\mapsto \\log(x)"
},
{
"math_id": 29,
"text": "x\\mapsto \\lfloor x\\rfloor"
}
] |
https://en.wikipedia.org/wiki?curid=5781226
|
578150
|
Standard hydrogen electrode
|
Reference redox electrode used under standard conditions
In electrochemistry, the standard hydrogen electrode (abbreviated SHE), is a redox electrode which forms the basis of the thermodynamic scale of oxidation-reduction potentials. Its absolute electrode potential is estimated to be 4.44 ± 0.02 V at 25 °C, but to form a basis for comparison with all other electrochemical reactions, hydrogen's standard electrode potential ("E"°) is declared to be zero volts at any temperature. Potentials of all other electrodes are compared with that of the standard hydrogen electrode at the same temperature.
Nernst equation for SHE.
The hydrogen electrode is based on the redox half-cell corresponding to the reduction of two hydrated protons into one gaseous hydrogen molecule.
General equation for a reduction reaction:
formula_0
The reaction quotient (Qr) of the half-reaction is the ratio between the chemical activities (a) of the reduced form (the reductant, "a"red) and the oxidized form (the oxidant, "a"ox).
formula_1
Considering the redox couple:
<chem>2H_{(aq)}+ + 2e- <=> H2_{(g)}</chem>
at chemical equilibrium, the ratio Qr of the reaction products to the reagents is equal to the equilibrium constant K of the half-reaction:
formula_2
where formula_3 and formula_4 are the chemical activities of the reduced and the oxidized form (here formula_6, the activity of hydrogen gas, and formula_5, the activity of the hydrated protons, respectively), and formula_9, where formula_10 is the mole fraction of hydrogen gas in the system and formula_11 is the total pressure of the gas phase.
More details on managing gas fugacity to get rid of the pressure unit in thermodynamic calculations can be found at thermodynamic activity#Gases. The followed approach is the same as for chemical activity and molar concentration of solutes in solution. In the SHE, pure hydrogen gas (formula_12) at the standard pressure formula_11 of 1 bar is engaged in the system. Meanwhile the general SHE equation can also be applied to other thermodynamic systems with different mole fraction or total pressure of hydrogen.
This redox reaction occurs at a platinized platinum electrode.
The electrode is immersed in the acidic solution and pure hydrogen gas is bubbled over its surface. The concentrations of both the reduced and oxidized forms of hydrogen are maintained at unity. That implies that the pressure of hydrogen gas is 1 bar (100 kPa) and the activity coefficient of hydrogen ions in the solution is unity. The activity of hydrogen ions is their effective concentration, which is equal to the formal concentration times the activity coefficient. These unit-less activity coefficients are close to 1.00 for very dilute water solutions, but usually lower for more concentrated solutions.
As the general form of the Nernst equation at equilibrium is the following:
formula_13
and as formula_14 by definition in the case of the SHE, the Nernst equation for the SHE becomes:
formula_15
formula_16
Simply neglecting the pressure unit present in formula_8, this last equation can often be directly written as:
formula_17
Evaluating the numerical value of the term
formula_18
the practical formula commonly used in the calculations of this Nernst equation is:
formula_19 (unit: volt)
As under standard conditions formula_20 so that formula_21 the equation simplifies to:
formula_22 (unit: volt)
This last equation describes a straight line with a negative slope of −0.0591 volt per pH unit, delimiting the lower stability region of water in a Pourbaix diagram, where gaseous hydrogen is evolving because of water decomposition.
where: formula_8 is the partial pressure of hydrogen gas, expressed in bar (1 bar = 10^5 Pa); the standard pressure p^0 is 1 bar (10^5 Pa); and the activity of the hydrogen ions is formula_23, i.e. the activity coefficient multiplied by the molar concentration relative to the standard concentration.
Note: as the system is at chemical equilibrium, hydrogen gas is also in equilibrium with dissolved hydrogen, and the Nernst equation implicitly takes into account Henry's law for gas dissolution. Therefore, there is no need to independently consider the gas dissolution process in the system, as it is already "de facto" included.
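A small numerical sketch of the practical formula above (the function name is illustrative; 25 °C is assumed):

```python
import math

def she_potential(ph, p_h2_bar=1.0, temp_c=25.0):
    """Hydrogen-electrode potential (V) from the Nernst equation:
    E = -2.303*(R*T/F) * (pH + 0.5*log10(p_H2 / 1 bar))."""
    R, F = 8.314, 96485.0
    T = temp_c + 273.15
    return -2.303 * R * T / F * (ph + 0.5 * math.log10(p_h2_bar))

print(she_potential(ph=0.0))  # 0.0 V: standard conditions
print(she_potential(ph=7.0))  # about -0.414 V, the lower water-stability line at pH 7
```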
SHE vs NHE vs RHE.
During the early development of electrochemistry, researchers used the normal hydrogen electrode as their standard for zero potential. This was convenient because it could "actually be constructed" by "[immersing] a platinum electrode into a solution of 1 N strong acid and [bubbling] hydrogen gas through the solution at about 1 atm pressure". However, this electrode/solution interface was later changed. What replaced it was a theoretical electrode/solution interface, where the concentration of H+ was 1 M, but the H+ ions were assumed to have no interaction with other ions (a condition not physically attainable at those concentrations). To differentiate this new standard from the previous one, it was given the name 'standard hydrogen electrode'.
Finally, there are also reversible hydrogen electrodes (RHEs), which are practical hydrogen electrodes whose potential depends on the pH of the solution.
In summary,
NHE (normal hydrogen electrode): potential of a platinum electrode in 1 M acid solution with 1 bar of hydrogen bubbled through
SHE (standard hydrogen electrode): potential of a platinum electrode in a theoretical ideal solution (the current "standard" for zero potential for all temperatures)
RHE (reversible hydrogen electrode): a practical hydrogen electrode whose potential depends on the pH of the solution
Choice of platinum.
The choice of platinum for the hydrogen electrode is due to several factors: the inertness of platinum (it does not corrode), its capability to catalyze the reduction of protons, its high intrinsic exchange current density for this reaction, and the excellent reproducibility of the resulting potential.
The surface of platinum is platinized (i.e., covered with a layer of fine powdered platinum also known as platinum black) to increase the total electrode surface area and to improve the adsorption of hydrogen at the interface, both of which improve the reaction kinetics.
Other metals can be used for fabricating electrodes with a similar function such as the palladium-hydrogen electrode.
Interference.
Because of the high adsorption activity of the platinized platinum electrode, it is very important to protect the electrode surface and the solution from the presence of organic substances as well as from atmospheric oxygen. Inorganic ions that can be reduced to a lower valency state at the electrode also have to be avoided (e.g., Fe3+, CrO42−). A number of organic substances are also reduced by hydrogen on a platinum surface, and these also have to be avoided.
Cations that can be reduced and deposited on the platinum can be source of interference: silver, mercury, copper, lead, cadmium and thallium.
Substances that can inactivate ("poison") the catalytic sites include arsenic, sulfides and other sulfur compounds, colloidal substances, alkaloids, and material found in biological systems.
Isotopic effect.
The standard redox potential of the deuterium couple is slightly different from that of the proton couple (ca. −0.0044 V vs SHE). Various values in this range have been obtained: −0.0061 V, −0.00431 V, −0.0074 V.
<chem>2 D_{(aq)}+ + 2 e- -> D2_{(g)}</chem>
A difference also occurs when hydrogen deuteride (HD, or deuterated hydrogen, DH) is used instead of hydrogen in the electrode.
Experimental setup.
The scheme of the standard hydrogen electrode:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\underset{\\text{ox}}{\\text{(oxidant)}} + z\\ce{e- <=>}\\ \\underset{\\text{red}}{\\text{(reductant)}}"
},
{
"math_id": 1,
"text": "Q_r = \\frac{a_\\text{red}}{a_\\text{ox}}"
},
{
"math_id": 2,
"text": "K = \\frac{a_\\text{red}}{a_\\text{ox}} = \\frac{a_\\mathrm{H_2}}{a_\\mathrm{H^+}^2} = \\frac{p_\\mathrm{H_2}/p^0}{a_\\mathrm{H^+}^2} = \\frac{x_\\mathrm{H_2}p/p^0}{a_\\mathrm{H^+}^2}"
},
{
"math_id": 3,
"text": "a_\\text{red}"
},
{
"math_id": 4,
"text": "a_\\text{ox}"
},
{
"math_id": 5,
"text": "a_\\mathrm{H^+}"
},
{
"math_id": 6,
"text": "a_\\mathrm{H_2}"
},
{
"math_id": 7,
"text": "p_{\\mathrm{H_2}}."
},
{
"math_id": 8,
"text": "p_\\mathrm{H_2}"
},
{
"math_id": 9,
"text": "p_{\\mathrm{H_2}} = x_{\\mathrm{H_2}} \\cdot p,"
},
{
"math_id": 10,
"text": "x_{\\mathrm{H_2}}"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "x_{\\mathrm{H_2}} = 1"
},
{
"math_id": 13,
"text": "E_\\text{cell} = E^\\ominus_\\text{cell} - \\frac{RT}{zF} \\ln K"
},
{
"math_id": 14,
"text": "E^\\ominus_\\text{cell} = 0"
},
{
"math_id": 15,
"text": "E=0-{RT \\over 2F}\\ln \\frac{{p_\\mathrm{H_2}/p^0}}{a_\\mathrm{H^+}^2}"
},
{
"math_id": 16,
"text": "E=-2.303\\,{RT \\over F} \\left( \\mathrm{pH} + \\frac{1}{2} \\log \\frac{p_\\mathrm{H_2}}{p^0} \\right) "
},
{
"math_id": 17,
"text": "E=-2.303\\,{RT \\over F} \\left( \\mathrm{pH} + \\frac{1}{2} \\log p_\\mathrm{H_2} \\right) "
},
{
"math_id": 18,
"text": "-2.303\\,{RT \\over F} = -2.303 \\left( \\frac{8.314 \\times 298.15}{96,485} \\right) = -0.0591 \\ \\mathrm{volts,}"
},
{
"math_id": 19,
"text": "E=-0.0591 \\left( \\mathrm{pH} + \\frac{1}{2} \\log p_\\mathrm{H_2} \\right)"
},
{
"math_id": 20,
"text": "p_\\mathrm{H_2} = 1 \\text{ bar,}"
},
{
"math_id": 21,
"text": "\\log p_\\mathrm{H_2} = \\log 1 = 0,"
},
{
"math_id": 22,
"text": "E=-0.0591 \\ \\mathrm{pH}"
},
{
"math_id": 23,
"text": "a_\\mathrm{H^+} = \\gamma_\\mathrm{H^+} \\tfrac{C_\\mathrm{H^+}}{C^0}"
}
] |
https://en.wikipedia.org/wiki?curid=578150
|
57820758
|
HD 89345 b
|
Neptune-like exoplanet
HD 89345 b is a Neptune-like exoplanet that orbits a G-type star. It is also called K2-234b. Its mass is equivalent to 35.7 Earths, it takes 11.8 days to complete one orbit of its star, and it is 0.105 AU away from its star. It was discovered by a team of 43 astrophysicists, one of whom was V. Van Eylen, and its discovery was announced in 2018.
Overview.
The exoplanet HD 89345 b, which has a mass of 0.1 MJ and a radius of 0.61 RJ, was assigned to the class of ocean planets. The parent star of the planet, which is about 5.3 billion years old, belongs to the spectral class G5V-G6V. It is 66 percent larger and 22 percent more massive than the Sun, and is located 413 light-years away. The effective temperature of the star is 5609 K. Considering that HD 89345 b makes one revolution around the star in 11.8 days at a distance of 0.11 AU, this planet was described by researchers as a warm sub-Saturn with an equilibrium temperature of 1059 K.
Discovery.
HD 89345 b, a Saturn-sized exoplanet orbiting the slightly evolved star HD 89345, was discovered in 2018 using the transit photometry method, the process that detects distant planets by measuring the minute dimming of a star as an orbiting planet passes between it and the Earth. It is the only known planet orbiting HD 89345, a G5-class star situated in the constellation of Leo, 413 light-years from the Sun. The star is about 9.4 billion years old. HD 89345 b orbits its star in about 12 terrestrial days on an elliptical orbit. The orbit is closer to the star than the inner limit of the habitable zone. The planet has a low density and may be composed largely of gas.
Its parent star, HD 89345, is a bright star (apparent magnitude 9.3) observed by the K2 mission with one-minute time sampling. It exhibits solar-like oscillations. The data were analysed using asteroseismology, which makes it possible to determine the parameters of the star, including its mass and radius. Its mass is 1.12 M☉ and its mean radius is 1.657 R☉. The star appears to have recently left the main sequence, based on the inferred age, 9.4 Gyr, and the non-detection of mixed modes. The star hosts a "warm Saturn" with an orbital period of approximately 11.8 days and a radius of . Radial-velocity follow-up observations performed with the FIES, HARPS, and HARPS-N spectrographs show that the planet has a mass of . The data also show that the planet's orbit is eccentric (formula_0).
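The reported period and semi-major axis are mutually consistent, as a quick check with Kepler's third law in solar units shows (values taken from this article; the calculation is only a rough consistency check):

```python
import math

# Kepler's third law in solar units: P[yr] = sqrt(a[AU]^3 / M[solar masses]).
a_au = 0.105    # semi-major axis from the article
m_star = 1.12   # stellar mass in solar masses
p_years = math.sqrt(a_au ** 3 / m_star)
print(p_years * 365.25)  # about 11.7 days, matching the reported ~11.8-day period
```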
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "e \\thickapprox 0.2\n"
}
] |
https://en.wikipedia.org/wiki?curid=57820758
|
57821564
|
Wiman-Valiron theory
|
Mathematical theory
Wiman-Valiron theory is a mathematical theory invented by Anders Wiman as a tool to study the behavior of arbitrary entire functions. After the work of Wiman, the theory was developed by other mathematicians, and extended to more general classes of analytic functions. The main result of the theory is an asymptotic formula for the function and its derivatives near the point where the maximum modulus of this function is attained.
Maximal term and central index.
By definition, an entire function can be represented by a power series which is convergent for all complex formula_0:
formula_1
The terms of this series tend to 0 as formula_2, so for each formula_0 there is a term of maximal modulus.
This term depends on formula_3.
Its modulus is called the "maximal term" of the series:
formula_4
Here formula_5 is the exponent for which the maximum is attained; if there are several maximal terms, we define formula_5 as the largest of these exponents. This number formula_5 depends on formula_6; it is denoted by formula_7 and is called the "central index".
Let
formula_8
be the maximum modulus of the function formula_9.
Cauchy's inequality implies that formula_10 for all formula_11.
The converse estimate formula_12 was first proved by Borel, and a more precise estimate due to Wiman reads
formula_13
in the sense that for every formula_14 there exist arbitrarily large values of formula_6 for which this inequality holds. In fact, it was shown by Valiron that the above relation holds for "most" values of formula_6: the exceptional set formula_15 for which it does not hold has finite logarithmic measure:
formula_16
Improvements of these inequalities were the subject of much research in the 20th century.
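For a concrete example, the sketch below (illustrative) computes the maximal term and central index of the exponential function, whose power series has coefficients 1/n!; the central index is close to formula_6, and the ratio of the maximum modulus to the maximal term grows only like a small power of the logarithm of the maximal term, in line with Wiman's estimate:

```python
from math import exp, sqrt, pi

def maximal_term_and_index(r, n_max=1000):
    """For f(z) = exp(z), i.e. a_n = 1/n!: the maximal term
    mu(r) = max_n r^n / n! and the central index n(r),
    computed via the recursion t_n = t_{n-1} * r / n."""
    t, mu, nu = 1.0, 1.0, 0
    for n in range(1, n_max + 1):
        t *= r / n
        if t >= mu:
            mu, nu = t, n
    return mu, nu

for r in (10.0, 50.0, 100.0):
    mu, nu = maximal_term_and_index(r)
    # n(r) is about r, and M(r)/mu(r) grows only like sqrt(2*pi*r),
    # consistent with M <= mu * (log mu)^(1/2 + eps) for most r.
    print(r, nu, exp(r) / mu, sqrt(2 * pi * r))
```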
The main asymptotic formula.
The following result of Wiman is fundamental for various applications: let formula_17 be the point for which the maximum in the definition of formula_18 is attained; by the Maximum Principle we have formula_19. It turns out that formula_20 behaves near the point formula_17 like a monomial: there are arbitrarily large values of formula_6 such that the formula
formula_21
holds in the disk
formula_22
Here formula_14 is an arbitrary positive number, and the o(1) refers to formula_23, where formula_15 is the exceptional set described above. This disk is usually called the "Wiman-Valiron disk".
Applications.
The formula for formula_20 for formula_0 near formula_17 can be differentiated so we have an asymptotic relation
formula_24
This is useful for studies of entire solutions of differential equations.
Another important application is due to Valiron, who noticed that the image of the Wiman-Valiron disk contains a "large" annulus (formula_25 where both formula_26 and formula_27 are arbitrarily large). This implies the important theorem of Valiron that there are arbitrarily large discs in the plane in which the inverse branches of an entire function can be defined. A quantitative version of this statement is known as the Bloch theorem.
This theorem of Valiron has further applications in holomorphic dynamics: it is used in the proof of the fact that the escaping set of an entire function is not empty.
Later development.
In 1938, Macintyre found that one can get rid of the central index and of the power series itself in this theory. Macintyre replaced the central index by the quantity
formula_28
and proved the main relation in the form
formula_29
This statement does not mention the power series, but the assumption that formula_9 is entire was used by Macintyre.
The final generalization was achieved by Bergweiler, Rippon and Stallard, who showed that this relation persists for every unbounded analytic function formula_9 defined in an arbitrary unbounded region formula_30 of the complex plane, under the only assumption that formula_31 is bounded for formula_32.
The key statement which makes this generalization possible is that the Wiman-Valiron disk is actually contained in formula_30 for all non-exceptional formula_6.
|
[
{
"math_id": 0,
"text": "z"
},
{
"math_id": 1,
"text": "f(z)=\\sum_{n=0}^\\infty a_nz^n."
},
{
"math_id": 2,
"text": "n\\to\\infty"
},
{
"math_id": 3,
"text": "r:=|z|"
},
{
"math_id": 4,
"text": "\\mu(r,f)=\\max_k |a_k|r^k=:|a_n|r^n,\\quad r\\geq 0."
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "n(r,f)"
},
{
"math_id": 8,
"text": " M(r,f)=\\max\\{ |f(z)|: |z|\\leq r \\}"
},
{
"math_id": 9,
"text": "f"
},
{
"math_id": 10,
"text": "\\mu(r,f)\\leq M(r,f)"
},
{
"math_id": 11,
"text": "r\\geq 0"
},
{
"math_id": 12,
"text": "M(r,f)\\leq (\\mu(r,f))^{1+\\epsilon}"
},
{
"math_id": 13,
"text": "M(r,f)\\leq \\mu(r,f)\\left(\\log\\mu(r,f)\\right)^{1/2+\\epsilon},"
},
{
"math_id": 14,
"text": "\\epsilon>0"
},
{
"math_id": 15,
"text": "E"
},
{
"math_id": 16,
"text": "\\int_E\\frac{dr}{r}<\\infty."
},
{
"math_id": 17,
"text": "z_r"
},
{
"math_id": 18,
"text": "M(r,f)"
},
{
"math_id": 19,
"text": "|z_r|=r"
},
{
"math_id": 20,
"text": "f(z)"
},
{
"math_id": 21,
"text": "f(z)=(1+o(1))\\left(\\frac{z}{z_r}\\right)^{n(r,f)}f(z_r),"
},
{
"math_id": 22,
"text": "|z-z_r|<\\frac{r}{\\left(n(r)\\right)^{1/2+\\epsilon}}."
},
{
"math_id": 23,
"text": "r\\to\\infty,\\; r\\not\\in E"
},
{
"math_id": 24,
"text": "f^{(m)}(z)=(1+o(1))\\left(\\frac{n(r)}{z}\\right)^m\\left(\\frac{r}{z_r}\\right)^{n(r)}f(z_r)."
},
{
"math_id": 25,
"text": "\\{ z:r_1<|z|<r_2\\}"
},
{
"math_id": 26,
"text": "r_1"
},
{
"math_id": 27,
"text": "r_2/r_1"
},
{
"math_id": 28,
"text": "a(r,f):=r\\frac{M'(r,f)}{M(r,f)}"
},
{
"math_id": 29,
"text": "f^{(m)}(z)=(1+o(1))\\left(\\frac{a(r,f)}{z}\\right)^m\\left(\\frac{z}{z_r}\\right)^{a(r,f)}f(z_r)\\quad\\mbox{for}\n\\quad|z-z_r|\\leq\\frac{r}{(a(r,f))^{1/2+\\epsilon}}."
},
{
"math_id": 30,
"text": "D"
},
{
"math_id": 31,
"text": "|f(z)|"
},
{
"math_id": 32,
"text": "z\\in\\partial D"
}
] |
https://en.wikipedia.org/wiki?curid=57821564
|
5782346
|
Self-averaging
|
A self-averaging physical property of a disordered system is one that can be described by averaging over a sufficiently large sample. The concept was introduced by Ilya Mikhailovich Lifshitz.
Definition.
Frequently in physics one comes across situations where quenched randomness plays an important role. Any physical property "X" of such a system, would require an averaging over all disorder realisations. The system can be completely described by the average ["X"] where [...] denotes averaging over realisations (“averaging over samples”) provided the relative variance "R""X" = "V""X" / ["X"]2 → 0 as "N"→∞, where "V""X" = ["X"2] − ["X"]2 and "N" denotes the size of the realisation. In such a scenario a single large system is sufficient to represent the whole ensemble. Such quantities are called self-averaging. Away from criticality, when the larger lattice is built from smaller blocks, then due to the additivity property of an extensive quantity, the central limit theorem guarantees that "R""X" ~ "N"−1 thereby ensuring self-averaging. On the other hand, at the critical point, the question whether formula_0 is self-averaging or not becomes nontrivial, due to long range correlations.
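A quick simulation of the off-critical case (a minimal sketch with an arbitrary choice of observable: X is a sum of independent uniform random variables, standing in for an extensive quantity) shows the strong self-averaging behaviour "R""X" ~ "N"−1:

```python
import numpy as np

rng = np.random.default_rng(1)

def relative_variance(n, realisations=2000):
    """R_X = V_X / [X]^2 for X = sum of n iid uniform(0,1) variables,
    estimated from `realisations` independent samples."""
    x = rng.random((realisations, n)).sum(axis=1)
    return x.var() / x.mean() ** 2

for n in (100, 400, 1600):
    r = relative_variance(n)
    print(n, r, r * n)  # r * n stays near 1/3, i.e. R_X ~ 1/N
```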
Non self-averaging systems.
At the pure critical point, randomness is classified as relevant if, by the standard definition of relevance, it leads to a change in the critical behaviour (i.e., the critical exponents) of the pure system. It has been shown by recent renormalization group and numerical studies that the self-averaging property is lost if randomness or disorder is relevant. Most importantly, as N → ∞, RX at the critical point approaches a constant. Such systems are called non self-averaging. Thus, unlike the self-averaging scenario, numerical simulations cannot lead to an improved picture in larger lattices (large N), even if the critical point is exactly known. In summary, various types of self-averaging can be indexed with the help of the asymptotic size dependence of a quantity like RX. If RX falls off to zero with size, the system is self-averaging, whereas if RX approaches a constant as N → ∞, the system is non-self-averaging.
Strong and weak self-averaging.
There is a further classification of self-averaging systems as strong and weak. If the exhibited behavior is "R""X" ~ "N"−1, as suggested by the central limit theorem mentioned earlier, the system is said to be strongly self-averaging. Some systems show a slower power-law decay "R""X" ~ "N"−"z" with 0 < "z" < 1. Such systems are classified as weakly self-averaging. The known critical exponents of the system determine the exponent "z".
It must also be added that relevant randomness does not necessarily imply non self-averaging, especially in a mean-field scenario.
The RG arguments mentioned above need to be extended to situations with a sharp limit of the "T""c" distribution and long-range interactions.
|
[
{
"math_id": 0,
"text": "X"
}
] |
https://en.wikipedia.org/wiki?curid=5782346
|
57824
|
Semi-continuity
|
Property of functions which is weaker than continuity
In mathematical analysis, semicontinuity (or semi-continuity) is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function formula_0 is upper (respectively, lower) semicontinuous at a point formula_1 if, roughly speaking, the function values for arguments near formula_1 are not much higher (respectively, lower) than formula_2
A function is continuous if and only if it is both upper and lower semicontinuous. If we take a continuous function and increase its value at a certain point formula_1 to formula_3 for some formula_4, then the result is upper semicontinuous; if we decrease its value to formula_5 then the result is lower semicontinuous.
The notion of upper and lower semicontinuous function was first introduced and studied by René Baire in his thesis in 1899.
Definitions.
Assume throughout that formula_6 is a topological space and formula_7 is a function with values in the extended real numbers formula_8.
Upper semicontinuity.
A function formula_7 is called upper semicontinuous at a point formula_9 if for every real formula_10 there exists a neighborhood formula_11 of formula_1 such that formula_12 for all formula_13.
Equivalently, formula_0 is upper semicontinuous at formula_1 if and only if
formula_14
where lim sup is the limit superior of the function formula_0 at the point formula_15
If formula_6 is a metric space with distance function formula_16 and formula_17 this can also be restated using an formula_18-formula_19 formulation, similar to the definition of continuous function. Namely, for each formula_20 there is a formula_21 such that formula_22 whenever formula_23
A function formula_7 is called upper semicontinuous if it satisfies any of the following equivalent conditions:
(1) The function is upper semicontinuous at every point of its domain.
(2) For each formula_24, the set formula_25 is open in formula_6, where formula_26.
(3) For each formula_24, the formula_27-superlevel set formula_28 is closed in formula_6.
(4) The hypograph formula_29 is closed in formula_30.
(5) The function formula_0 is continuous when the codomain formula_31 is given the left order topology. This is just a restatement of condition (2) since the left order topology is generated by all the intervals formula_32.
Lower semicontinuity.
A function formula_7 is called lower semicontinuous at a point formula_33 if for every real formula_34 there exists a neighborhood formula_11 of formula_1 such that formula_35 for all formula_13.
Equivalently, formula_0 is lower semicontinuous at formula_1 if and only if
formula_36
where formula_37 is the limit inferior of the function formula_0 at point formula_15
If formula_6 is a metric space with distance function formula_16 and formula_17 this can also be restated as follows: For each formula_20 there is a formula_21 such that formula_38 whenever formula_23
A function formula_7 is called lower semicontinuous if it satisfies any of the following equivalent conditions:
(1) The function is lower semicontinuous at every point of its domain.
(2) For each formula_24, the set formula_39 is open in formula_6, where formula_40.
(3) For each formula_24, the formula_27-sublevel set formula_41 is closed in formula_6.
(4) The epigraph formula_42 is closed in formula_30.
(5) The function formula_0 is continuous when the codomain formula_31 is given the right order topology. This is just a restatement of condition (2) since the right order topology is generated by all the intervals formula_43.
Examples.
Consider the function formula_44 piecewise defined by:
formula_45
This function is upper semicontinuous at formula_46 but not lower semicontinuous.
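Both conditions at this point can be checked numerically; the following Python sketch samples near zero rather than taking a genuine limit, so it is only illustrative:

import numpy as np

def f(x):
    # The piecewise example above: -1 for x < 0, 1 for x >= 0.
    return np.where(x < 0, -1.0, 1.0)

x0, fx0 = 0.0, 1.0
for delta in (1e-1, 1e-3, 1e-6):
    xs = np.linspace(x0 - delta, x0 + delta, 100001)
    print(delta, f(xs).max() <= fx0, f(xs).min() >= fx0)
# The supremum near 0 never exceeds f(0) = 1 (upper semicontinuous),
# while the infimum near 0 stays at -1 < f(0) (not lower semicontinuous).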
The floor function formula_47 which returns the greatest integer less than or equal to a given real number formula_48 is everywhere upper semicontinuous. Similarly, the ceiling function formula_49 is lower semicontinuous.
Upper and lower semicontinuity bear no relation to continuity from the left or from the right for functions of a real variable. Semicontinuity is defined in terms of an ordering in the range of the functions, not in the domain. For example, the function
formula_50
is upper semicontinuous at formula_51, while the one-sided limits of the function from the left or from the right at zero do not even exist.
If formula_52 is a Euclidean space (or more generally, a metric space) and formula_53 is the space of curves in formula_6 (with the supremum distance formula_54), then the length functional formula_55 which assigns to each curve formula_56 its length formula_57 is lower semicontinuous. As an example, consider approximating the unit square diagonal by a staircase from below. The staircase always has length 2, while the diagonal line has only length formula_58.
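The staircase example can be checked directly; the following Python sketch computes the polygonal lengths, showing that the approximating curves converge to the diagonal while their lengths stay at 2, consistent with lower (but not upper) semicontinuity of the length functional:

import numpy as np

def polyline_length(points):
    # Sum of Euclidean distances between consecutive vertices.
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

def staircase(n):
    # Staircase from (0,0) to (1,1) with n right/up steps, lying below the diagonal.
    pts = [(0.0, 0.0)]
    for k in range(1, n + 1):
        pts.append((k / n, (k - 1) / n))
        pts.append((k / n, k / n))
    return np.array(pts)

for n in (1, 10, 1000):
    print(n, polyline_length(staircase(n)))  # always 2.0
print("diagonal:", np.sqrt(2.0))             # ~1.414, the length of the limit curve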
Let formula_59 be a measure space and let formula_60 denote the set of positive measurable functions endowed with the
topology of convergence in measure with respect to formula_61 Then by Fatou's lemma the integral, seen as an operator from formula_60 to formula_62 is lower semicontinuous.
Tonelli's theorem in functional analysis characterizes the weak lower semicontinuity of nonlinear functionals on "L""p" spaces in terms of the convexity of another function.
Properties.
Unless specified otherwise, all functions below are from a topological space formula_6 to the extended real numbers formula_63 Several of the results hold for semicontinuity at a specific point, but for brevity they are only stated for semicontinuity over the whole domain.
Binary Operations on Semicontinuous Functions.
Let formula_73.
In particular, the limit of a monotone increasing sequence formula_87 of continuous functions is lower semicontinuous. (The Theorem of Baire below provides a partial converse.) The limit function will only be lower semicontinuous in general, not continuous. An example is given by the functions formula_88 defined for formula_89 and formula_90
Likewise, the infimum of an arbitrary family of upper semicontinuous functions is upper semicontinuous. And the limit of a monotone decreasing sequence of continuous functions is upper semicontinuous.
("Proof for the upper semicontinuous case": By condition (5) in the definition, formula_0 is continuous when formula_31 is given the left order topology. So its image formula_96 is compact in that topology. And the compact sets in that topology are exactly the sets with a maximum. For an alternative proof, see the article on the extreme value theorem.)
The Theorem of Baire states that if formula_6 is a metric space and formula_7 is lower semicontinuous, then formula_0 is the limit of a monotone increasing sequence formula_98 of extended real-valued continuous functions formula_99 on formula_6; that is,
formula_100 and
formula_101
If formula_0 does not take the value formula_102, the continuous functions can be taken to be real-valued.
Additionally, every upper semicontinuous function formula_7 is the limit of a monotone decreasing sequence of extended real-valued continuous functions on formula_103 if formula_0 does not take the value formula_104 the continuous functions can be taken to be real-valued.
Semicontinuity of Set-valued Functions.
For set-valued functions, several concepts of semicontinuity have been defined, namely "upper", "lower", "outer", and "inner" semicontinuity, as well as "upper" and "lower hemicontinuity".
A set-valued function formula_106 from a set formula_69 to a set formula_107 is written formula_108 For each formula_109 the function formula_106 defines a set formula_110
The preimage of a set formula_111 under formula_106 is defined as
formula_112
That is, formula_113 is the set that contains every point formula_114 in formula_69 such that formula_115 is not disjoint from formula_116.
Upper and Lower Semicontinuity.
A set-valued map formula_117 is "upper semicontinuous" at formula_118 if for every open set formula_119 such that formula_120, there exists a neighborhood formula_121 of formula_114 such that formula_122
A set-valued map formula_117 is "lower semicontinuous" at formula_118 if for every open set formula_119 such that formula_123 there exists a neighborhood formula_121 of formula_114 such that formula_124
Upper and lower set-valued semicontinuity are also defined more generally for a set-valued maps between topological spaces by replacing formula_125 and formula_126 in the above definitions with arbitrary topological spaces.
Note that there is not a direct correspondence between single-valued lower and upper semicontinuity and set-valued lower and upper semicontinuity.
An upper semicontinuous single-valued function is not necessarily upper semicontinuous when considered as a set-valued map.
For example, the function formula_127 defined by
formula_45
is upper semicontinuous in the single-valued sense but the set-valued map formula_128 is not upper semicontinuous in the set-valued sense.
Inner and Outer Semicontinuity.
A set-valued function formula_117 is called "inner semicontinuous" at formula_114 if for every formula_129 and every convergent sequence formula_130 in formula_125 such that formula_131, there exists
a sequence formula_132 in formula_126 such that formula_133 and formula_134 for all sufficiently large formula_135
A set-valued function formula_117 is called "outer semicontinuous" at formula_114 if for every convergent sequence formula_130 in formula_125 such that formula_131 and every convergent sequence formula_132 in formula_126 such that formula_136 for each formula_137 the sequence formula_132 converges to a point in formula_115 (that is, formula_138).
Notes.
References.
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "x_0"
},
{
"math_id": 2,
"text": "f\\left(x_0\\right)."
},
{
"math_id": 3,
"text": "f\\left(x_0\\right) + c"
},
{
"math_id": 4,
"text": "c>0"
},
{
"math_id": 5,
"text": "f\\left(x_0\\right) - c"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "f:X\\to\\overline{\\R}"
},
{
"math_id": 8,
"text": "\\overline{\\R}=\\R \\cup \\{-\\infty,\\infty\\} = [-\\infty,\\infty]"
},
{
"math_id": 9,
"text": "x_0 \\in X"
},
{
"math_id": 10,
"text": "y > f\\left(x_0\\right)"
},
{
"math_id": 11,
"text": "U"
},
{
"math_id": 12,
"text": "f(x)<y"
},
{
"math_id": 13,
"text": "x\\in U"
},
{
"math_id": 14,
"text": "\\limsup_{x \\to x_0} f(x) \\leq f(x_0)"
},
{
"math_id": 15,
"text": "x_0."
},
{
"math_id": 16,
"text": "d"
},
{
"math_id": 17,
"text": "f(x_0)\\in\\R,"
},
{
"math_id": 18,
"text": "\\varepsilon"
},
{
"math_id": 19,
"text": "\\delta"
},
{
"math_id": 20,
"text": "\\varepsilon>0"
},
{
"math_id": 21,
"text": "\\delta>0"
},
{
"math_id": 22,
"text": "f(x)<f(x_0)+\\varepsilon"
},
{
"math_id": 23,
"text": "d(x,x_0)<\\delta."
},
{
"math_id": 24,
"text": "y\\in\\R"
},
{
"math_id": 25,
"text": "f^{-1}([ -\\infty ,y))=\\{x\\in X : f(x)<y\\}"
},
{
"math_id": 26,
"text": "[ -\\infty ,y)=\\{t\\in\\overline{\\R}:t<y\\}"
},
{
"math_id": 27,
"text": "y"
},
{
"math_id": 28,
"text": "f^{-1}([y, \\infty)) = \\{x\\in X : f(x)\\ge y\\}"
},
{
"math_id": 29,
"text": "\\{(x,t)\\in X\\times\\R : t\\le f(x)\\}"
},
{
"math_id": 30,
"text": "X\\times\\R"
},
{
"math_id": 31,
"text": "\\overline{\\R}"
},
{
"math_id": 32,
"text": "[ -\\infty,y)"
},
{
"math_id": 33,
"text": "x_0\\in X"
},
{
"math_id": 34,
"text": "y < f\\left(x_0\\right)"
},
{
"math_id": 35,
"text": "f(x)>y"
},
{
"math_id": 36,
"text": "\\liminf_{x \\to x_0} f(x) \\ge f(x_0)"
},
{
"math_id": 37,
"text": "\\liminf"
},
{
"math_id": 38,
"text": "f(x)>f(x_0)-\\varepsilon"
},
{
"math_id": 39,
"text": "f^{-1}((y,\\infty ])=\\{x\\in X : f(x)>y\\}"
},
{
"math_id": 40,
"text": "(y,\\infty ]=\\{t\\in\\overline{\\R}:t>y\\}"
},
{
"math_id": 41,
"text": "f^{-1}((-\\infty, y]) = \\{x\\in X : f(x)\\le y\\}"
},
{
"math_id": 42,
"text": "\\{(x,t)\\in X\\times\\R : t\\ge f(x)\\}"
},
{
"math_id": 43,
"text": "(y,\\infty ] "
},
{
"math_id": 44,
"text": "f,"
},
{
"math_id": 45,
"text": "f(x) = \\begin{cases}\n-1 & \\mbox{if } x < 0,\\\\\n 1 & \\mbox{if } x \\geq 0\n\\end{cases}"
},
{
"math_id": 46,
"text": "x_0 = 0,"
},
{
"math_id": 47,
"text": "f(x) = \\lfloor x \\rfloor,"
},
{
"math_id": 48,
"text": "x,"
},
{
"math_id": 49,
"text": "f(x) = \\lceil x \\rceil"
},
{
"math_id": 50,
"text": "f(x) = \\begin{cases}\n\\sin(1/x) & \\mbox{if } x \\neq 0,\\\\\n1 & \\mbox{if } x = 0,\n\\end{cases}"
},
{
"math_id": 51,
"text": "x = 0"
},
{
"math_id": 52,
"text": "X = \\R^n"
},
{
"math_id": 53,
"text": "\\Gamma = C([0,1], X)"
},
{
"math_id": 54,
"text": "d_\\Gamma(\\alpha,\\beta) = \\sup\\{d_X(\\alpha(t),\\beta(t)):t\\in[0,1]\\}"
},
{
"math_id": 55,
"text": "L : \\Gamma \\to [0, +\\infty],"
},
{
"math_id": 56,
"text": "\\alpha"
},
{
"math_id": 57,
"text": "L(\\alpha),"
},
{
"math_id": 58,
"text": "\\sqrt 2"
},
{
"math_id": 59,
"text": "(X,\\mu)"
},
{
"math_id": 60,
"text": "L^+(X,\\mu)"
},
{
"math_id": 61,
"text": "\\mu."
},
{
"math_id": 62,
"text": "[-\\infty, +\\infty]"
},
{
"math_id": 63,
"text": "\\overline{\\R}= [-\\infty,\\infty]."
},
{
"math_id": 64,
"text": "A\\subset X"
},
{
"math_id": 65,
"text": "\\mathbf{1}_A(x)=1"
},
{
"math_id": 66,
"text": "x\\in A"
},
{
"math_id": 67,
"text": "0"
},
{
"math_id": 68,
"text": "x\\notin A"
},
{
"math_id": 69,
"text": "A"
},
{
"math_id": 70,
"text": "A \\subset X"
},
{
"math_id": 71,
"text": "\\chi_{A}(x)=0"
},
{
"math_id": 72,
"text": "\\chi_A(x) = \\infty"
},
{
"math_id": 73,
"text": "f,g : X \\to \\overline{\\R}"
},
{
"math_id": 74,
"text": "g"
},
{
"math_id": 75,
"text": "f+g"
},
{
"math_id": 76,
"text": "f(x)+g(x)"
},
{
"math_id": 77,
"text": "-\\infty+\\infty"
},
{
"math_id": 78,
"text": "f g"
},
{
"math_id": 79,
"text": "-f"
},
{
"math_id": 80,
"text": "f \\circ g"
},
{
"math_id": 81,
"text": "x \\mapsto \\max\\{f(x), g(x)\\}"
},
{
"math_id": 82,
"text": "x \\mapsto \\min\\{f(x), g(x)\\}"
},
{
"math_id": 83,
"text": "\\R"
},
{
"math_id": 84,
"text": "(f_i)_{i\\in I}"
},
{
"math_id": 85,
"text": "f_i:X\\to\\overline{\\R}"
},
{
"math_id": 86,
"text": "f(x)=\\sup\\{f_i(x):i\\in I\\}"
},
{
"math_id": 87,
"text": "f_1\\le f_2\\le f_3\\le\\cdots"
},
{
"math_id": 88,
"text": "f_n(x)=1-(1-x)^n"
},
{
"math_id": 89,
"text": "x\\in[0,1]"
},
{
"math_id": 90,
"text": "n=1,2,\\ldots."
},
{
"math_id": 91,
"text": "C"
},
{
"math_id": 92,
"text": "[a, b]"
},
{
"math_id": 93,
"text": "f : C \\to \\overline{\\R}"
},
{
"math_id": 94,
"text": "C."
},
{
"math_id": 95,
"text": "C,"
},
{
"math_id": 96,
"text": "f(C)"
},
{
"math_id": 97,
"text": "X."
},
{
"math_id": 98,
"text": "\\{f_i\\}"
},
{
"math_id": 99,
"text": "f_i : X \\to \\overline\\R"
},
{
"math_id": 100,
"text": "f_i(x) \\leq f_{i+1}(x) \\quad \\forall x \\in X,\\ \\forall i = 0, 1, 2, \\dots"
},
{
"math_id": 101,
"text": "\\lim_{i \\to \\infty} f_i(x) = f(x) \\quad \\forall x \\in X. "
},
{
"math_id": 102,
"text": "-\\infty"
},
{
"math_id": 103,
"text": "X;"
},
{
"math_id": 104,
"text": "\\infty,"
},
{
"math_id": 105,
"text": "f : X \\to \\N"
},
{
"math_id": 106,
"text": "F"
},
{
"math_id": 107,
"text": "B"
},
{
"math_id": 108,
"text": "F : A \\rightrightarrows B."
},
{
"math_id": 109,
"text": "x \\in A,"
},
{
"math_id": 110,
"text": "F(x) \\subset B."
},
{
"math_id": 111,
"text": "S \\subset B"
},
{
"math_id": 112,
"text": "F^{-1}(S) :=\\{x \\in A: F(x) \\cap S \\neq \\varnothing\\}."
},
{
"math_id": 113,
"text": "F^{-1}(S)"
},
{
"math_id": 114,
"text": "x"
},
{
"math_id": 115,
"text": "F(x)"
},
{
"math_id": 116,
"text": "S"
},
{
"math_id": 117,
"text": "F: \\mathbb{R}^m \\rightrightarrows \\mathbb{R}^n"
},
{
"math_id": 118,
"text": "x \\in \\mathbb{R}^m"
},
{
"math_id": 119,
"text": "U \\subset \\mathbb{R}^n"
},
{
"math_id": 120,
"text": "F(x) \\subset U"
},
{
"math_id": 121,
"text": "V"
},
{
"math_id": 122,
"text": "F(V) \\subset U."
},
{
"math_id": 123,
"text": "x \\in F^{-1}(U),"
},
{
"math_id": 124,
"text": "V \\subset F^{-1}(U)."
},
{
"math_id": 125,
"text": "\\mathbb{R}^m"
},
{
"math_id": 126,
"text": "\\mathbb{R}^n"
},
{
"math_id": 127,
"text": "f : \\mathbb{R} \\to \\mathbb{R}"
},
{
"math_id": 128,
"text": "x \\mapsto F(x) := \\{f(x)\\}"
},
{
"math_id": 129,
"text": "y \\in F(x)"
},
{
"math_id": 130,
"text": "(x_i)"
},
{
"math_id": 131,
"text": "x_i \\to x"
},
{
"math_id": 132,
"text": "(y_i)"
},
{
"math_id": 133,
"text": "y_i \\to y"
},
{
"math_id": 134,
"text": "y_i \\in F\\left(x_i\\right)"
},
{
"math_id": 135,
"text": "i \\in \\mathbb{N}."
},
{
"math_id": 136,
"text": "y_i \\in F(x_i)"
},
{
"math_id": 137,
"text": "i\\in\\mathbb{N},"
},
{
"math_id": 138,
"text": "\\lim _{i \\to \\infty} y_i \\in F(x)"
}
] |
https://en.wikipedia.org/wiki?curid=57824
|
57831158
|
Isosceles set
|
In discrete geometry, an isosceles set is a set of points with the property that every three of them form an isosceles triangle. More precisely, each three points should determine at most two distances; this also allows degenerate isosceles triangles formed by three equally-spaced points on a line.
History.
The problem of finding the largest isosceles set in a Euclidean space of a given dimension was posed in 1946 by Paul Erdős. In his statement of the problem, Erdős observed that the largest such set in the Euclidean plane has six points. In his 1947 solution, Leroy Milton Kelly showed more strongly that the unique six-point planar isosceles set consists of the vertices and center of a regular pentagon. In three dimensions, Kelly found an eight-point isosceles set, six points of which are the same as in the planar solution; the remaining two points lie on a line perpendicular to the pentagon through its center, at the same distance as the pentagon vertices from the center. This three-dimensional example was later proven to be optimal, and to be the unique optimal solution.
Decomposition into 2-distance sets.
Kelly's eight-point three-dimensional isosceles set can be decomposed into two sets formula_0 (the three points on a line perpendicular to the pentagon) and formula_1 (the five vertices of the pentagon), with the property that each point in formula_0 is equidistant from all points of formula_1. When such a decomposition is possible, in Euclidean spaces of any dimension, formula_0 and formula_1 must lie in perpendicular subspaces, formula_0 must be an isosceles set within its subspace, and the set formula_2 formed from formula_1 by adding the point at the intersection of its two subspaces must also be an isosceles set within its subspace. In this way, an isosceles set in high dimensions can sometimes be decomposed into isosceles sets in lower dimensions. On the other hand, when an isosceles set has no decomposition of this type, then it must have a stronger property than being isosceles: it has only two distances, among all pairs of points.
Despite this decomposition theorem, it is possible for the largest two-distance set and the largest isosceles set in the same dimension to have different sizes. This happens, for instance, in the plane, where the largest two-distance set has five points (the vertices of a regular pentagon), while the largest isosceles set has six points. In this case, the six-point isosceles set has a decomposition where formula_0 is the singleton set of the central point (in a space of zero dimensions) and formula_1 consists of all remaining points.
Upper bounds.
In formula_3-dimensional space, an isosceles set can have at most
formula_4
points. This is tight for formula_5 and for formula_6 but not necessarily for other dimensions.
The maximum number of points in a formula_3-dimensional isosceles set, for formula_7, is known to be
3, 6, 8, 11, 17, 28, 30, 45 (sequence in the OEIS)
but these numbers are not known for higher dimensions.
Construction.
Lisoněk provides the following construction of two-distance sets with
formula_8
points, which also produces isosceles sets with
formula_9
points. In formula_10-dimensional Euclidean space, let formula_11 (for formula_12) denote the vector a unit distance from the origin along the formula_13th coordinate axis, and construct the set formula_14 consisting of all points formula_15 for formula_16. Then formula_14 lies in the formula_3-dimensional subspace of points with coordinate sum formula_17; its convex hull is the hypersimplex formula_18. It has only two distances: two points formed from sums of overlapping pairs of unit vectors have distance formula_19, while two points formed from disjoint pairs of unit vectors have distance formula_17. Adding one more point to formula_14 at its centroid forms a formula_3-dimensional isosceles set. For instance, for formula_20, this construction produces a suboptimal isosceles set with seven points, the vertices and center of a regular octahedron, rather than the optimal eight-point set.
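The construction is easy to verify computationally. The following Python sketch (for formula_20, with the dimension as an adjustable parameter) builds the set formula_14, confirms that it realizes only the two distances formula_19 and formula_17, and checks that adding the centroid still gives an isosceles set:

import itertools
import numpy as np

d = 3                               # dimension of the resulting isosceles set
n = d + 1                           # work in (d+1)-dimensional space
e = np.eye(n)
S = np.array([e[i] + e[j] for i, j in itertools.combinations(range(n), 2)])

dists = {round(np.linalg.norm(p - q), 9)
         for p, q in itertools.combinations(S, 2)}
print(sorted(dists))                # two distances: sqrt(2) and 2

T = np.vstack([S, S.mean(axis=0)])  # append the centroid

def is_isosceles(points):
    # Every triple of points determines at most two distinct distances.
    for a, b, c in itertools.combinations(points, 3):
        ds = {round(np.linalg.norm(u - v), 9) for u, v in ((a, b), (a, c), (b, c))}
        if len(ds) > 2:
            return False
    return True

print(len(T), is_isosceles(T))      # 7 points, True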
Generalization.
The same problem can also be considered for other metric spaces. For instance, for Hamming spaces, somewhat smaller upper bounds are known than for Euclidean spaces of the same dimension. In an ultrametric space, the whole space (and any of its subsets) is an isosceles set. Therefore, ultrametric spaces are sometimes called isosceles spaces. However, not every isosceles set is ultrametric; for instance, obtuse Euclidean isosceles triangles are not ultrametric.
References.
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "Y'"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "\\binom{d+2}{2}"
},
{
"math_id": 5,
"text": "d=6"
},
{
"math_id": 6,
"text": "d=8"
},
{
"math_id": 7,
"text": "d=1,2,\\dots, 8"
},
{
"math_id": 8,
"text": "\\binom{d+1}{2}"
},
{
"math_id": 9,
"text": "\\binom{d+1}{2}+1"
},
{
"math_id": 10,
"text": "d+1"
},
{
"math_id": 11,
"text": "e_i"
},
{
"math_id": 12,
"text": "i=1,\\dots,d+1"
},
{
"math_id": 13,
"text": "i"
},
{
"math_id": 14,
"text": "S"
},
{
"math_id": 15,
"text": "e_i+e_j"
},
{
"math_id": 16,
"text": "i\\ne j"
},
{
"math_id": 17,
"text": "2"
},
{
"math_id": 18,
"text": "\\Delta_{d+1,2}"
},
{
"math_id": 19,
"text": "\\sqrt2"
},
{
"math_id": 20,
"text": "d=3"
}
] |
https://en.wikipedia.org/wiki?curid=57831158
|
57831336
|
Gaia Sausage
|
Remains of a galaxy merger in the Milky Way
The Gaia Sausage or Gaia Enceladus is the remains of a dwarf galaxy (the Sausage Galaxy, or Gaia-Enceladus-Sausage, or Gaia-Sausage-Enceladus) that merged with the Milky Way about 8–11 billion years ago. At least eight globular clusters were added to the Milky Way along with 50 billion solar masses of stars, gas and dark matter. It represents the last major merger of the Milky Way.
Etymology.
The "Gaia Sausage" is so-called because of the characteristic sausage shape of the population in a chart of velocity space, in particular a plot of radial (formula_0) versus azimuthal velocity (formula_1) of stars (See spherical coordinate system), using data from the Gaia Mission. The stars that have merged with the Milky Way have orbits that are highly elongated. The outermost points of their orbits are around 20 kiloparsecs from the Galactic Center at what is called the "halo break." These stars had previously been seen in Hipparcos data and identified as originаting from an accreted galaxy.
The name "Enceladus" refers to the mythological giant Enceladus, who was buried under Mount Etna and caused earthquakes. Thus this former galaxy was buried in the Milky Way, and caused the puffing up of the thick disc.
Components.
Globular clusters.
The globular clusters firmly identified as former Sausage members are Messier 2, Messier 56, Messier 75, Messier 79, NGC 1851, NGC 2298, and NGC 5286.
The nature of NGC 2808.
NGC 2808 is another globular-like cluster of the Sausage. It is composed of three generations of stars, all born within 200 million years of the formation of the cluster.
One theory to account for three generations of stars is that NGC 2808 is the former core of the Sausage. This could also account for its stellar population of over a million stars, which is unusually large for a globular cluster.
Stars.
The stars from this dwarf orbit the Milky Way core with extreme eccentricities on the order of about 0.9. Their metallicity is also typically higher than that of other halo stars, with most having [Fe/H] > −1.7 dex, i.e., at least 2% of the solar value.
The "Gaia Sausage" reconstructed the Milky Way by puffing up the thin disk to make it a thick disk, whilst the gas it brought into the Milky Way triggered a fresh round of star formation and replenished the thin disk. The debris from the dwarf galaxy provides most of the metal-rich part of the galactic halo.
References.
|
[
{
"math_id": 0,
"text": "\\boldsymbol{v}_r"
},
{
"math_id": 1,
"text": "\\boldsymbol{v}_\\theta"
}
] |
https://en.wikipedia.org/wiki?curid=57831336
|
578327
|
Future value
|
Value of an asset at a specific date
Future value is the value of an asset at a specific date. It measures the nominal future sum of money that a given sum of money is "worth" at a specified time in the future assuming a certain interest rate, or more generally, rate of return; it is the present value multiplied by the accumulation function.
The value does not include corrections for inflation or other factors that affect the true value of money in the future. This is used in time value of money calculations.
Overview.
Money value fluctuates over time: $100 today has a different value than $100 in five years. This is because one can invest $100 today in an interest-bearing bank account or any other investment, and that money will grow/shrink due to the rate of return. Also, if $100 today allows the purchase of an item, it is possible that $100 will not be enough to purchase the same item in five years, because of inflation (increase in purchase price).
An investor who has some money has two options: to spend it right now or to invest it. The financial compensation for saving it (and not spending it) is that the money will accrue value through the interest received from a borrower (the bank account in which the money is deposited).
Therefore, to evaluate the real worth of an amount of money today after a given period of time, economic agents compound the amount of money at a given interest rate. Most actuarial calculations use the risk-free interest rate, which corresponds to the minimum guaranteed rate provided by a bank's savings account, for example. To compare the change in purchasing power, the real interest rate (nominal interest rate minus inflation rate) should be used.
The operation of evaluating a present value into the future value is called capitalization (how much will $100 today be worth in 5 years?). The reverse operation, evaluating the present value of a future amount of money, is called discounting (how much is $100 that will be received in 5 years, from a lottery for example, worth today?).
It follows that if one has to choose between receiving $100 today and $100 in one year, the rational decision is to take the $100 today. If the money is to be received in one year and the savings account interest rate is 5%, the person has to be offered at least $105 in one year for the two options to be equivalent (either receiving $100 today or receiving $105 in one year). This is because $100 deposited in a savings account today will have grown to $105 in one year.
Simple interest.
To determine future value (FV) using simple interest (i.e., without compounding):
formula_0
where "PV" is the present value or principal, "t" is the time in years (or a fraction of year), and "r" stands for the per annum interest rate. Simple interest is rarely used, as compounding is considered more meaningful . Indeed, the Future Value in this case grows linearly (it's a linear function of the initial investment): it doesn't take into account the fact that the interest earned might be compounded itself and produce further interest (which corresponds to an exponential growth of the initial investment -see below-).
Compound interest.
To determine future value using compound interest:
formula_1
where "PV" is the present value, "t" is the number of compounding periods (not necessarily an integer), and "i" is the interest rate for that period. Thus the future value increases exponentially with time when "i" is positive. The growth rate is given by the period, and "i", the interest rate for that period. Alternatively the growth rate is expressed by the interest per unit time based on continuous compounding. For example, the following all represent the same growth rate:
Also the growth rate may be expressed in a percentage per period (nominal rate), with another period as compounding basis; for the same growth rate we have:
To convert an interest rate from one compounding basis to another compounding basis (between different periodic interest rates), the following formula applies:
formula_2
where
"i"1 is the periodic interest rate with compounding frequency "n"1 and
"i"2 is the periodic interest rate with compounding frequency "n"2.
If the compounding frequency is annual, "n"2 will be 1, and to get the annual interest rate (which may be referred to as the effective interest rate, or the annual percentage rate), the formula can be simplified to:
formula_3
where "r" is the annual rate, "i" the periodic rate, and "n" the number of compounding periods per year.
Problems become more complex as you account for more variables. For example, when accounting for annuities (annual payments), there is no simple "PV" to plug into the equation. Either the "PV" must be calculated first, or a more complex annuity equation must be used. Another complication is when the interest rate is applied multiple times per period. For example, suppose the 10% interest rate in the earlier example is compounded twice a year (semi-annually). Compounding means that each successive application of the interest rate applies to all of the previously accumulated amount, so instead of getting 0.05 each 6 months, one must figure out the true annual interest rate, which in this case would be 1.1025 (one would divide the 10% by two to get 5%, then apply it twice: 1.05^2 = 1.1025). This 1.1025 represents the original amount 1.00 plus 0.05 in 6 months to make a total of 1.05, and get the same rate of interest on that 1.05 for the remaining 6 months of the year. The second six-month period returns more than the first six months because the interest rate applies to the accumulated interest as well as the original amount.
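The semi-annual example and the rate-conversion formula above can be checked with a short Python sketch (the numbers are the ones used in the text):

def future_value_compound(pv, i, t):
    # FV = PV * (1 + i)**t with periodic rate i over t compounding periods.
    return pv * (1 + i) ** t

def effective_annual_rate(nominal, n):
    # r = (1 + nominal/n)**n - 1 for a nominal annual rate compounded n times a year.
    return (1 + nominal / n) ** n - 1

print(future_value_compound(1.0, 0.05, 2))  # 1.1025 (10% nominal, semi-annual)
print(effective_annual_rate(0.10, 2))       # 0.1025, i.e. 10.25% effective per year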
This formula gives the future value (FV) of an ordinary annuity (assuming compound interest):
formula_4
where "r" = interest rate; "n" = number of periods. The simplest way to understand the above formula is to cognitively split the right side of the equation into two parts, the payment amount, and the ratio of compounding over basic interest. The ratio of compounding is composed of the aforementioned effective interest rate over the basic (nominal) interest rate. This provides a ratio that increases the payment amount in terms present value.
References.
|
[
{
"math_id": 0,
"text": "FV = PV(1+rt)"
},
{
"math_id": 1,
"text": "FV = PV(1+i)^t"
},
{
"math_id": 2,
"text": "i_2=\\left[\\left(1+\\frac{i_1}{n_1}\\right)^\\frac{n_1}{n_2}-1\\right]{\\times}n_2"
},
{
"math_id": 3,
"text": "r = \\left( 1 + { i \\over n } \\right)^n - 1 "
},
{
"math_id": 4,
"text": "FV_\\mathrm{annuity} = {(1+r)^n - 1 \\over r} \\cdot \\mathrm{(payment\\ amount)}"
}
] |
https://en.wikipedia.org/wiki?curid=578327
|
5783569
|
List of formulas in Riemannian geometry
|
This is a list of formulas encountered in Riemannian geometry. Einstein notation is used throughout this article. This article uses the "analyst's" sign convention for Laplacians, except when noted otherwise.
Christoffel symbols, covariant derivative.
In a smooth coordinate chart, the Christoffel symbols of the first kind are given by
formula_0
and the Christoffel symbols of the second kind by
formula_1
Here formula_2 is the inverse matrix to the metric tensor formula_3. In other words,
formula_4
and thus
formula_5
is the dimension of the manifold.
Christoffel symbols satisfy the symmetry relations
formula_6 or, respectively, formula_7,
the second of which is equivalent to the torsion-freeness of the Levi-Civita connection.
The contracting relations on the Christoffel symbols are given by
formula_8
and
formula_9
where |"g"| is the absolute value of the determinant of the metric tensor formula_10. These are useful when dealing with divergences and Laplacians (see below).
The covariant derivative of a vector field with components formula_11 is given by:
formula_12
and similarly the covariant derivative of a formula_13-tensor field with components formula_14 is given by:
formula_15
For a formula_16-tensor field with components formula_17 this becomes
formula_18
and likewise for tensors with more indices.
The covariant derivative of a function (scalar) formula_19 is just its usual differential:
formula_20
Because the Levi-Civita connection is metric-compatible, the covariant derivatives of metrics vanish,
formula_21
as well as the covariant derivatives of the metric's determinant (and volume element)
formula_22
The geodesic formula_23 starting at the origin with initial speed formula_11 has Taylor expansion in the chart:
formula_24
Curvature tensors.
Identities.
Basic symmetries.
The Weyl tensor has the same basic symmetries as the Riemann tensor, but its 'analogue' of the Ricci tensor is zero:
The Ricci tensor, the Einstein tensor, and the traceless Ricci tensor are symmetric 2-tensors:
Twice-contracted second Bianchi identity.
Equivalently:
Ricci identity.
If formula_57 is a vector field then
formula_58
which is just the definition of the Riemann tensor. If formula_59 is a one-form then
formula_60
More generally, if formula_61 is a (0,k)-tensor field then
formula_62
Remarks.
A classical result says that, in dimension at least four, formula_63 if and only if formula_64 is locally conformally flat, i.e. if and only if formula_65 can be covered by smooth coordinate charts relative to which the metric tensor is of the form formula_66 for some function formula_67 on the chart.
Gradient, divergence, Laplace–Beltrami operator.
The gradient of a function formula_19 is obtained by raising the index of the differential formula_68, whose components are given by:
formula_69
The divergence of a vector field with components formula_70 is
formula_71
The Laplace–Beltrami operator acting on a function formula_72 is given by the divergence of the gradient:
formula_73
The divergence of an antisymmetric tensor field of type formula_16 simplifies to
formula_74
The Hessian of a map formula_75 is given by
formula_76
Kulkarni–Nomizu product.
The Kulkarni–Nomizu product is an important tool for constructing new tensors from existing tensors on a Riemannian manifold. Let formula_77 and formula_78 be symmetric covariant 2-tensors. In coordinates,
formula_79
Then we can multiply these in a sense to get a new covariant 4-tensor, which is often denoted formula_80. The defining formula is
formula_81
Clearly, the product satisfies
formula_82
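A direct numerical implementation of the defining formula is straightforward; the following Python sketch uses NumPy, in dimension 3 with random symmetric 2-tensors, purely to check the symmetries:

import numpy as np

def kulkarni_nomizu(a, b):
    # Kulkarni-Nomizu product: A_ik B_jl + A_jl B_ik - A_il B_jk - A_jk B_il
    return (np.einsum('ik,jl->ijkl', a, b) + np.einsum('jl,ik->ijkl', a, b)
            - np.einsum('il,jk->ijkl', a, b) - np.einsum('jk,il->ijkl', a, b))

rng = np.random.default_rng(0)
a = rng.random((3, 3)); a = a + a.T
b = rng.random((3, 3)); b = b + b.T
t = kulkarni_nomizu(a, b)

print(np.allclose(t, kulkarni_nomizu(b, a)))        # commutative
print(np.allclose(t, -t.transpose(1, 0, 2, 3)))     # antisymmetric in the first pair
print(np.allclose(t, t.transpose(2, 3, 0, 1)))      # symmetric under exchange of pairs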
In an inertial frame.
An orthonormal inertial frame is a coordinate chart such that, at the origin, one has the relations formula_83 and formula_84 (but these may not hold at other points in the frame). These coordinates are also called normal coordinates.
In such a frame, the expression for several operators is simpler. Note that the formulae given below are valid "at the origin of the frame only".
formula_85
formula_86
Conformal change.
Let formula_87 be a Riemannian or pseudo-Riemanniann metric on a smooth manifold formula_65, and formula_67 a smooth real-valued function on formula_65. Then
formula_88
is also a Riemannian metric on formula_65. We say that formula_89 is (pointwise) conformal to formula_87. Evidently, conformality of metrics is an equivalence relation. Here are some formulas for conformal changes in tensors associated with the metric. (Quantities marked with a tilde will be associated with formula_89, while those unmarked with such will be associated with formula_87.)
(4,0) Riemann curvature tensor.
Using the Kulkarni–Nomizu product:
Hodge Laplacian on p-forms.
The "geometer's" sign convention is used for the Hodge Laplacian here. In particular it has the opposite sign on functions as the usual Laplacian.
Second fundamental form of an immersion.
Suppose formula_64 is Riemannian and formula_113 is a twice-differentiable immersion. Recall that the second fundamental form is, for each formula_114 a symmetric bilinear map formula_115 which is valued in the formula_116-orthogonal linear subspace to formula_117 Then
Here formula_120 denotes the formula_116-orthogonal projection of formula_121 onto the formula_116-orthogonal linear subspace to formula_117
Mean curvature of an immersion.
In the same setting as above (and suppose formula_122 has dimension formula_123), recall that the mean curvature vector is for each formula_124 an element formula_125 defined as the formula_87-trace of the second fundamental form. Then
Note that this transformation formula is for the mean curvature vector, and the formula for the mean curvature formula_127 in the hypersurface case is
formula_128
where formula_129 is a (local) normal vector field.
Variation formulas.
Let formula_65 be a smooth manifold and let formula_130 be a one-parameter family of Riemannian or pseudo-Riemannian metrics. Suppose that it is a differentiable family in the sense that for any smooth coordinate chart, the derivatives formula_131 exist and are themselves as differentiable as necessary for the following expressions to make sense. formula_132 is a one-parameter family of symmetric 2-tensor fields.
Principal symbol.
The variation formula computations above define the principal symbol of the mapping which sends a pseudo-Riemannian metric to its Riemann tensor, Ricci tensor, or scalar curvature.
formula_144
formula_146
formula_148
Notes.
|
[
{
"math_id": 0,
"text": "\\Gamma_{kij}=\\frac12 \\left(\n \\frac{\\partial}{\\partial x^j} g_{ki}\n +\\frac{\\partial}{\\partial x^i} g_{kj}\n -\\frac{\\partial}{\\partial x^k} g_{ij}\n \\right)\n =\\frac12 \\left( g_{ki,j} + g_{kj,i} - g_{ij,k} \\right) \\,,\n"
},
{
"math_id": 1,
"text": "\\begin{align}\n \\Gamma^m{}_{ij} &= g^{mk}\\Gamma_{kij}\\\\\n &=\\frac{1}{2}\\, g^{mk} \\left(\n \\frac{\\partial}{\\partial x^j} g_{ki}\n +\\frac{\\partial}{\\partial x^i} g_{kj}\n -\\frac{\\partial}{\\partial x^k} g_{ij}\n \\right)\n =\\frac{1}{2}\\, g^{mk} \\left( g_{ki,j} + g_{kj,i} - g_{ij,k} \\right) \\,.\n \\end{align}\n"
},
{
"math_id": 2,
"text": "g^{ij}"
},
{
"math_id": 3,
"text": "g_{ij}"
},
{
"math_id": 4,
"text": "\n\\delta^i{}_j = g^{ik}g_{kj}\n"
},
{
"math_id": 5,
"text": "\nn = \\delta^i{}_i = g^i{}_i = g^{ij}g_{ij}\n"
},
{
"math_id": 6,
"text": "\\Gamma_{kij} = \\Gamma_{kji} "
},
{
"math_id": 7,
"text": " \\Gamma^i{}_{jk}=\\Gamma^i{}_{kj}"
},
{
"math_id": 8,
"text": "\\Gamma^i{}_{ki}=\\frac{1}{2} g^{im}\\frac{\\partial g_{im}}{\\partial x^k}=\\frac{1}{2g} \\frac{\\partial g}{\\partial x^k} = \\frac{\\partial \\log \\sqrt{|g|}}{\\partial x^k} "
},
{
"math_id": 9,
"text": "g^{k\\ell}\\Gamma^i{}_{k\\ell}=\\frac{-1}{\\sqrt{|g|}} \\;\\frac{\\partial\\left(\\sqrt{|g|}\\,g^{ik}\\right)} {\\partial x^k}"
},
{
"math_id": 10,
"text": "g_{ik}"
},
{
"math_id": 11,
"text": "v^i"
},
{
"math_id": 12,
"text": "\nv^i {}_{;j}=(\\nabla_j v)^i=\\frac{\\partial v^i}{\\partial x^j}+\\Gamma^i{}_{jk}v^k\n"
},
{
"math_id": 13,
"text": "(0,1)"
},
{
"math_id": 14,
"text": "v_i"
},
{
"math_id": 15,
"text": "\nv_{i;j}=(\\nabla_j v)_i=\\frac{\\partial v_i}{\\partial x^j}-\\Gamma^k{}_{ij} v_k\n"
},
{
"math_id": 16,
"text": "(2,0)"
},
{
"math_id": 17,
"text": "v^{ij}"
},
{
"math_id": 18,
"text": "\nv^{ij}{}_{;k}=\\nabla_k v^{ij}=\\frac{\\partial v^{ij}}{\\partial x^k} +\\Gamma^i{}_{k\\ell}v^{\\ell j}+\\Gamma^j{}_{k\\ell}v^{i\\ell}\n"
},
{
"math_id": 19,
"text": "\\phi"
},
{
"math_id": 20,
"text": "\n\\nabla_i \\phi=\\phi_{;i}=\\phi_{,i}=\\frac{\\partial \\phi}{\\partial x^i}\n"
},
{
"math_id": 21,
"text": "\n(\\nabla_k g)_{ij} = 0, \\quad (\\nabla_k g)^{ij} = 0\n"
},
{
"math_id": 22,
"text": "\n\\nabla_k \\sqrt{|g|}=0\n"
},
{
"math_id": 23,
"text": "X(t)"
},
{
"math_id": 24,
"text": "\nX(t)^i=tv^i-\\frac{t^2}{2}\\Gamma^i{}_{jk}v^jv^k+O(t^3)\n"
},
{
"math_id": 25,
"text": "{R_{ijk}}^l=\\frac{\\partial\\Gamma_{ik}^l}{\\partial x^j}-\\frac{\\partial\\Gamma_{jk}^l}{\\partial x^i}+ \\big(\\Gamma_{ik}^p\\Gamma_{jp}^l-\\Gamma_{jk}^p\\Gamma_{ip}^l\\big)"
},
{
"math_id": 26,
"text": "R(u,v)w=\\nabla_u\\nabla_vw-\\nabla_v\\nabla_uw-\\nabla_{[u,v]}w"
},
{
"math_id": 27,
"text": "{R^i_{jkl}}=\\frac{\\partial\\Gamma_{lj}^i}{\\partial x^k}-\\frac{\\partial\\Gamma_{kj}^i}{\\partial x^l}+ \\big(\\Gamma_{kp}^i\\Gamma_{lj}^p-\\Gamma_{lp}^i\\Gamma_{kj}^p\\big)"
},
{
"math_id": 28,
"text": "R_{ik}={R_{ijk}}^j"
},
{
"math_id": 29,
"text": "\\operatorname{Ric}(v,w)=\\operatorname{tr}(u\\mapsto R(u,v)w)"
},
{
"math_id": 30,
"text": "R= g^{ik}R_{ik}"
},
{
"math_id": 31,
"text": "R=\\operatorname{tr}_g\\operatorname{Ric}"
},
{
"math_id": 32,
"text": "Q_{ik}=R_{ik}-\\frac{1}{n}Rg_{ik}"
},
{
"math_id": 33,
"text": "Q(u,v)=\\operatorname{Ric}(u,v)-\\frac{1}{n}Rg(u,v)"
},
{
"math_id": 34,
"text": "R_{ijkl}= {R_{ijk}}^pg_{pl}"
},
{
"math_id": 35,
"text": "\\operatorname{Rm}(u,v,w,x)=g\\big(R(u,v)w,x\\big)"
},
{
"math_id": 36,
"text": "W_{ijkl}=R_{ijkl}-\\frac{1}{n(n-1)}R\\big(g_{ik}g_{jl}-g_{il}g_{jk}\\big)-\\frac{1}{n-2}\\big(Q_{ik}g_{jl}-Q_{jk}g_{il}-Q_{il}g_{jk}+Q_{jl}g_{ik}\\big)"
},
{
"math_id": 37,
"text": "W(u,v,w,x)=\\operatorname{Rm}(u,v,w,x)-\\frac{1}{n(n-1)}R\\big(g(u,w)g(v,x)-g(u,x)g(v,w)\\big)-\\frac{1}{n-2}\\big(Q(u,w)g(v,x)-Q(v,w)g(u,x)-Q(u,x)g(v,w)+Q(v,x)g(u,w)\\big)"
},
{
"math_id": 38,
"text": "G_{ik}=R_{ik}-\\frac{1}{2}Rg_{ik}"
},
{
"math_id": 39,
"text": "G(u,v)=\\operatorname{Ric}(u,v)-\\frac{1}{2}Rg(u,v)"
},
{
"math_id": 40,
"text": "{R_{ijk}}^l=-{R_{jik}}^l"
},
{
"math_id": 41,
"text": "R_{ijkl}=-R_{jikl}=-R_{ijlk}=R_{klij}"
},
{
"math_id": 42,
"text": "W_{ijkl}=-W_{jikl}=-W_{ijlk}=W_{klij}"
},
{
"math_id": 43,
"text": " g^{il}W_{ijkl}=0"
},
{
"math_id": 44,
"text": "R_{jk}=R_{kj}"
},
{
"math_id": 45,
"text": "G_{jk}=G_{kj}"
},
{
"math_id": 46,
"text": "Q_{jk}=Q_{kj}"
},
{
"math_id": 47,
"text": "R_{ijkl}+R_{jkil}+R_{kijl}=0"
},
{
"math_id": 48,
"text": "W_{ijkl}+W_{jkil}+W_{kijl}=0"
},
{
"math_id": 49,
"text": "\\nabla_pR_{ijkl}+\\nabla_iR_{jpkl}+\\nabla_jR_{pikl}=0"
},
{
"math_id": 50,
"text": "(\\nabla_u\\operatorname{Rm})(v,w,x,y)+(\\nabla_v\\operatorname{Rm})(w,u,x,y)+(\\nabla_w\\operatorname{Rm})(u,v,x,y)=0"
},
{
"math_id": 51,
"text": "\\nabla_jR_{pk}-\\nabla_pR_{jk}=-\\nabla^lR_{jpkl}"
},
{
"math_id": 52,
"text": "(\\nabla_u\\operatorname{Ric})(v,w)-(\\nabla_v\\operatorname{Ric})(u,w)=-\\operatorname{tr}_g\\big((x,y)\\mapsto(\\nabla_x\\operatorname{Rm})(u,v,w,y)\\big)"
},
{
"math_id": 53,
"text": "g^{pq}\\nabla_pR_{qk}=\\frac{1}{2}\\nabla_k R"
},
{
"math_id": 54,
"text": "\\operatorname{div}_g\\operatorname{Ric}=\\frac{1}{2}dR"
},
{
"math_id": 55,
"text": "g^{pq}\\nabla_pG_{qk}=0"
},
{
"math_id": 56,
"text": "\\operatorname{div}_gG=0"
},
{
"math_id": 57,
"text": "X"
},
{
"math_id": 58,
"text": "\\nabla_i\\nabla_jX^k-\\nabla_j\\nabla_iX^k=-{R_{ijp}}^kX^p,"
},
{
"math_id": 59,
"text": "\\omega"
},
{
"math_id": 60,
"text": "\\nabla_i\\nabla_j\\omega_k-\\nabla_j\\nabla_i\\omega_k={R_{ijk}}^p\\omega_p."
},
{
"math_id": 61,
"text": "T"
},
{
"math_id": 62,
"text": "\\nabla_i\\nabla_j T_{l_1\\cdots l_k}-\\nabla_j\\nabla_iT_{l_1\\cdots l_k}={R_{ijl_1}}^pT_{pl_2\\cdots l_k}+\\cdots+{R_{ijl_k}}^pT_{l_1\\cdots l_{k-1}p}."
},
{
"math_id": 63,
"text": "W=0"
},
{
"math_id": 64,
"text": "(M,g)"
},
{
"math_id": 65,
"text": "M"
},
{
"math_id": 66,
"text": "g_{ij}=e^\\varphi \\delta_{ij}"
},
{
"math_id": 67,
"text": "\\varphi"
},
{
"math_id": 68,
"text": "\\partial_i\\phi dx^i"
},
{
"math_id": 69,
"text": "\\nabla^i \\phi=\\phi^{;i}=g^{ik}\\phi_{;k}=g^{ik}\\phi_{,k}=g^{ik}\\partial_k \\phi=g^{ik}\\frac{\\partial \\phi}{\\partial x^k}\n"
},
{
"math_id": 70,
"text": "V^m"
},
{
"math_id": 71,
"text": "\\nabla_m V^m = \\frac{\\partial V^m}{\\partial x^m} + V^k \\frac{\\partial \\log \\sqrt{|g|}}{\\partial x^k} = \\frac{1}{\\sqrt{|g|}} \\frac{\\partial (V^m\\sqrt{|g|})}{\\partial x^m}."
},
{
"math_id": 72,
"text": "f"
},
{
"math_id": 73,
"text": "\n\\begin{align}\n\\Delta f &= \\nabla_i \\nabla^i f \n= \\frac{1}{\\sqrt{|g|}} \\frac{\\partial }{\\partial x^j}\\left(g^{jk}\\sqrt{|g|}\\frac{\\partial f}{\\partial x^k}\\right) \\\\\n &=\ng^{jk}\\frac{\\partial^2 f}{\\partial x^j \\partial x^k} + \\frac{\\partial g^{jk}}{\\partial x^j} \\frac{\\partial\nf}{\\partial x^k} + \\frac12 g^{jk}g^{il}\\frac{\\partial g_{il}}{\\partial x^j}\\frac{\\partial f}{\\partial x^k}\n= g^{jk}\\frac{\\partial^2 f}{\\partial x^j \\partial x^k} - g^{jk}\\Gamma^l{}_{jk}\\frac{\\partial f}{\\partial x^l}\n\\end{align}\n"
},
{
"math_id": 74,
"text": "\\nabla_k A^{ik}= \\frac{1}{\\sqrt{|g|}} \\frac{\\partial (A^{ik}\\sqrt{|g|})}{\\partial x^k}."
},
{
"math_id": 75,
"text": "\\phi: M \\rightarrow N "
},
{
"math_id": 76,
"text": " \\left( \\nabla \\left( d \\phi\\right) \\right) _{ij} ^\\gamma= \\frac{\\partial ^2 \\phi ^\\gamma}{\\partial x^i \\partial x^j}- ^M \\Gamma ^k{}_{ij} \\frac{\\partial \\phi ^\\gamma}{\\partial x^k} + ^N \\Gamma ^{\\gamma}{}_{\\alpha \\beta} \\frac{\\partial \\phi ^\\alpha}{\\partial x^i}\\frac{\\partial \\phi ^\\beta}{\\partial x^j}."
},
{
"math_id": 77,
"text": "A"
},
{
"math_id": 78,
"text": "B"
},
{
"math_id": 79,
"text": "A_{ij} = A_{ji} \\qquad \\qquad B_{ij} = B_{ji} "
},
{
"math_id": 80,
"text": " A {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} B"
},
{
"math_id": 81,
"text": "\\left(A {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} B\\right)_{ijkl} = A_{ik}B_{jl} + A_{jl}B_{ik} - A_{il}B_{jk} - A_{jk}B_{il}"
},
{
"math_id": 82,
"text": "A {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} B = B {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} A"
},
{
"math_id": 83,
"text": "g_{ij}=\\delta_{ij}"
},
{
"math_id": 84,
"text": "\\Gamma^i{}_{jk}=0"
},
{
"math_id": 85,
"text": "R_{ik\\ell m}=\\frac{1}{2}\\left(\n\\frac{\\partial^2g_{im}}{\\partial x^k \\partial x^\\ell} \n+ \\frac{\\partial^2g_{k\\ell}}{\\partial x^i \\partial x^m}\n- \\frac{\\partial^2g_{i\\ell}}{\\partial x^k \\partial x^m}\n- \\frac{\\partial^2g_{km}}{\\partial x^i \\partial x^\\ell} \\right)\n"
},
{
"math_id": 86,
"text": "R^\\ell{}_{ijk}=\n\\frac{\\partial}{\\partial x^j} \\Gamma^\\ell{}_{ik}-\\frac{\\partial}{\\partial x^k}\\Gamma^\\ell{}_{ij}\n"
},
{
"math_id": 87,
"text": "g"
},
{
"math_id": 88,
"text": "\\tilde g = e^{2\\varphi}g "
},
{
"math_id": 89,
"text": "\\tilde g"
},
{
"math_id": 90,
"text": "\\widetilde{\\Gamma}_{ij}^k=\\Gamma_{ij}^k+\\frac{\\partial\\varphi}{\\partial x^i}\\delta_j^k+\\frac{\\partial\\varphi}{\\partial x^j}\\delta_i^k-\\frac{\\partial\\varphi}{\\partial x^l}g^{lk}g_{ij}"
},
{
"math_id": 91,
"text": "\\widetilde{\\nabla}_XY=\\nabla_XY+d\\varphi(X)Y+d\\varphi(Y)X-g(X,Y)\\nabla \\varphi"
},
{
"math_id": 92,
"text": "\\widetilde{R}_{ijkl}=e^{2\\varphi}R_{ijkl}-e^{2\\varphi}\\big(g_{ik}T_{jl}+g_{jl}T_{ik}-g_{il}T_{jk}-g_{jk}T_{il}\\big)"
},
{
"math_id": 93,
"text": "T_{ij}=\\nabla_i\\nabla_j\\varphi-\\nabla_i\\varphi\\nabla_j\\varphi+\\frac{1}{2}|d\\varphi|^2g_{ij}"
},
{
"math_id": 94,
"text": "\\widetilde{\\operatorname{Rm}} = e^{2\\varphi}\\operatorname{Rm} - e^{2\\varphi}g {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} \\left( \\operatorname{Hess}\\varphi - d\\varphi\\otimes d\\varphi + \\frac{1}{2}|d\\varphi|^2g \\right)"
},
{
"math_id": 95,
"text": "\\widetilde{R}_{ij}=R_{ij}-(n-2)\\big(\\nabla_i\\nabla_j\\varphi-\\nabla_i\\varphi\\nabla_j\\varphi\\big)-\\big(\\Delta\\varphi+(n-2)|d\\varphi|^2\\big)g_{ij}"
},
{
"math_id": 96,
"text": "\\widetilde{\\operatorname{Ric}}=\\operatorname{Ric}-(n-2)\\big(\\operatorname{Hess}\\varphi-d\\varphi\\otimes d\\varphi\\big)-\\big(\\Delta\\varphi+(n-2)|d\\varphi|^2\\big)g"
},
{
"math_id": 97,
"text": "\\widetilde{R}=e^{-2\\varphi}R-2(n-1)e^{-2\\varphi}\\Delta\\varphi-(n-2)(n-1)e^{-2\\varphi}|d\\varphi|^2"
},
{
"math_id": 98,
"text": "n\\neq 2"
},
{
"math_id": 99,
"text": "\\tilde R = e^{-2\\varphi}\\left[R - \\frac{4(n-1)}{(n-2)}e^{-(n-2)\\varphi/2}\\Delta\\left( e^{(n-2)\\varphi/2} \\right) \\right] "
},
{
"math_id": 100,
"text": "\\widetilde{R}_{ij}-\\frac{1}{n}\\widetilde{R}\\widetilde{g}_{ij}=R_{ij}-\\frac{1}{n}Rg_{ij}-(n-2)\\big(\\nabla_i\\nabla_j\\varphi-\\nabla_i\\varphi\\nabla_j\\varphi\\big)+\\frac{(n-2)}{n}\\big(\\Delta\\varphi-|d\\varphi|^2\\big)g_{ij}"
},
{
"math_id": 101,
"text": "\\widetilde{\\operatorname{Ric}}-\\frac{1}{n}\\widetilde{R}\\widetilde{g}=\\operatorname{Ric}-\\frac{1}{n}Rg-(n-2)\\big(\\operatorname{Hess}\\varphi-d\\varphi\\otimes d\\varphi\\big)+\\frac{(n-2)}{n}\\big(\\Delta\\varphi-|d\\varphi|^2\\big)g"
},
{
"math_id": 102,
"text": "{\\widetilde{W}_{ijk}}^l={W_{ijk}}^l"
},
{
"math_id": 103,
"text": "\\widetilde{W}(X,Y,Z)=W(X,Y,Z)"
},
{
"math_id": 104,
"text": "X,Y,Z"
},
{
"math_id": 105,
"text": "\\sqrt{\\det \\widetilde{g}}=e^{n\\varphi}\\sqrt{\\det g}"
},
{
"math_id": 106,
"text": "d\\mu_{\\widetilde{g}}=e^{n\\varphi}\\,d\\mu_g"
},
{
"math_id": 107,
"text": "\\widetilde{\\ast}_{i_1\\cdots i_{n-p}}^{j_1\\cdots j_p}=e^{(n-2p)\\varphi}\\ast_{i_1\\cdots i_{n-p}}^{j_1\\cdots j_p}"
},
{
"math_id": 108,
"text": "\\widetilde{\\ast}=e^{(n-2p)\\varphi}\\ast"
},
{
"math_id": 109,
"text": "\\widetilde{d^\\ast}_{j_1\\cdots j_{p-1}}^{i_1\\cdots i_p}=e^{-2\\varphi}(d^\\ast)_{j_1\\cdots j_{p-1}}^{i_1\\cdots i_p}-(n-2p)e^{-2\\varphi}\\nabla^{i_1}\\varphi\\delta_{j_1}^{i_2}\\cdots\\delta_{j_{p-1}}^{i_p}"
},
{
"math_id": 110,
"text": "\\widetilde{d^\\ast}=e^{-2\\varphi}d^\\ast-(n-2p)e^{-2\\varphi}\\iota_{\\nabla\\varphi}"
},
{
"math_id": 111,
"text": "\\widetilde{\\Delta}\\Phi=e^{-2\\varphi}\\Big(\\Delta\\Phi + (n-2)g(d\\varphi,d\\Phi)\\Big)"
},
{
"math_id": 112,
"text": "\\widetilde{\\Delta^d}\\omega=e^{-2\\varphi}\\Big(\\Delta^d\\omega-(n-2p)d\\circ \\iota_{\\nabla\\varphi}\\omega-(n-2p-2)\\iota_{\\nabla\\varphi}\\circ d\\omega+2(n-2p)d\\varphi\\wedge\\iota_{\\nabla\\varphi}\\omega-2d\\varphi\\wedge d^\\ast\\omega\\Big)"
},
{
"math_id": 113,
"text": "F:\\Sigma\\to(M,g)"
},
{
"math_id": 114,
"text": "p\\in M,"
},
{
"math_id": 115,
"text": "h_p:T_p\\Sigma\\times T_p\\Sigma\\to T_{F(p)}M,"
},
{
"math_id": 116,
"text": "g_{F(p)}"
},
{
"math_id": 117,
"text": "dF_p(T_p\\Sigma)\\subset T_{F(p)}M."
},
{
"math_id": 118,
"text": "\\widetilde{h}(u,v)=h(u,v)-(\\nabla\\varphi)^\\perp g(u,v)"
},
{
"math_id": 119,
"text": "u,v\\in T_pM"
},
{
"math_id": 120,
"text": "(\\nabla\\varphi)^\\perp"
},
{
"math_id": 121,
"text": "\\nabla\\varphi\\in T_{F(p)}M"
},
{
"math_id": 122,
"text": "\\Sigma"
},
{
"math_id": 123,
"text": "n"
},
{
"math_id": 124,
"text": "p\\in\\Sigma"
},
{
"math_id": 125,
"text": "\\textbf H_p\\in T_{F(p)}M"
},
{
"math_id": 126,
"text": "\\widetilde{\\textbf H}=e^{-2\\varphi}(\\textbf H-n(\\nabla\\varphi)^\\perp)."
},
{
"math_id": 127,
"text": "H"
},
{
"math_id": 128,
"text": "\\widetilde{H}=e^{-\\varphi}(H-n\\langle\\nabla\\varphi,\\eta\\rangle)"
},
{
"math_id": 129,
"text": "\\eta"
},
{
"math_id": 130,
"text": "g_t"
},
{
"math_id": 131,
"text": "v_{ij}=\\frac{\\partial}{\\partial t}\\big((g_t)_{ij}\\big)"
},
{
"math_id": 132,
"text": "v=\\frac{\\partial g}{\\partial t} "
},
{
"math_id": 133,
"text": "\\frac{\\partial}{\\partial t}\\Gamma_{ij}^k=\\frac{1}{2}g^{kp}\\Big(\\nabla_i v_{jp}+\\nabla_jv_{ip}-\\nabla_pv_{ij}\\Big)."
},
{
"math_id": 134,
"text": "\\frac{\\partial}{\\partial t}R_{ijkl}=\\frac{1}{2}\\Big(\\nabla_j\\nabla_k v_{il}+\\nabla_i\\nabla_lv_{jk}-\\nabla_i\\nabla_kv_{jl}-\\nabla_j\\nabla_lv_{ik}\\Big)+\\frac{1}{2}{R_{ijk}}^pv_{pl}-\\frac{1}{2}{R_{ijl}}^pv_{pk}"
},
{
"math_id": 135,
"text": "\\frac{\\partial}{\\partial t}R_{ik}=\\frac{1}{2}\\Big(\\nabla^p\\nabla_kv_{ip}+\\nabla_i(\\operatorname{div}v)_k-\\nabla_i\\nabla_k(\\operatorname{tr}_gv)-\\Delta v_{ik}\\Big)+\\frac{1}{2}R_i^pv_{pk}-\\frac{1}{2} R_i{}^p{}_k{}^qv_{pq}"
},
{
"math_id": 136,
"text": "\\frac{\\partial}{\\partial t}R=\\operatorname{div}_g\\operatorname{div}_gv-\\Delta(\\operatorname{tr}_gv)-\\langle v,\\operatorname{Ric}\\rangle_g"
},
{
"math_id": 137,
"text": "\\frac{\\partial}{\\partial t}d\\mu_g=\\frac{1}{2} g^{pq}v_{pq}\\,d\\mu_g"
},
{
"math_id": 138,
"text": "\\frac{\\partial}{\\partial t}\\nabla_i\\nabla_j\\Phi=\\nabla_i\\nabla_j\\frac{\\partial\\Phi}{\\partial t}-\\frac{1}{2}g^{kp}\\Big(\\nabla_i v_{jp}+\\nabla_jv_{ip}-\\nabla_pv_{ij}\\Big)\\frac{\\partial\\Phi}{\\partial x^k}"
},
{
"math_id": 139,
"text": "\\frac{\\partial}{\\partial t}\\Delta\\Phi=-\\langle v,\\operatorname{Hess}\\Phi\\rangle_g-g\\Big(\\operatorname{div}v-\\frac{1}{2}d(\\operatorname{tr}_gv),d\\Phi\\Big)"
},
{
"math_id": 140,
"text": "g\\mapsto\\operatorname{Rm}^g"
},
{
"math_id": 141,
"text": "\\xi\\in T_p^\\ast M"
},
{
"math_id": 142,
"text": "T_pM"
},
{
"math_id": 143,
"text": "T_pM,"
},
{
"math_id": 144,
"text": "v\\mapsto \\frac{\\xi_j\\xi_kv_{il}+\\xi_i\\xi_lv_{jk}-\\xi_i\\xi_kv_{jl}-\\xi_j\\xi_lv_{ik}}{2} = -\\frac12 (\\xi \\otimes \\xi) {~\\wedge\\!\\!\\!\\!\\!\\!\\!\\!\\;\\bigcirc~} v."
},
{
"math_id": 145,
"text": "g\\mapsto\\operatorname{Ric}^g"
},
{
"math_id": 146,
"text": "v\\mapsto v(\\xi^\\sharp,\\cdot)\\otimes\\xi+\\xi\\otimes v(\\xi^\\sharp,\\cdot)-(\\operatorname{tr}_{g_p}v)\\xi\\otimes\\xi-|\\xi|_g^2 v."
},
{
"math_id": 147,
"text": "g\\mapsto R^g"
},
{
"math_id": 148,
"text": "v\\mapsto |\\xi|_g^2\\operatorname{tr}_gv+v(\\xi^\\sharp,\\xi^\\sharp)."
}
] |
https://en.wikipedia.org/wiki?curid=5783569
|
57837450
|
Point counting (geology)
|
In geology, point counting is a method to determine the proportion of an area that is covered by some objects of interest. In most cases the area is a thin section or a polished slab. The objects of interest vary between subdisciplines and can for example be quartz or feldspar grains in sedimentology, any type of mineral in petrology or different taxonomic groups in paleontology.
Method.
The basic method is to cover the area by a grid of points. Then for each of these points, the underlying object is identified. Then the estimate for the proportion of the area covered by the type of object is given as
formula_0,
where formula_1 is the estimated proportion of the area covered by objects of type formula_2, formula_3 is the number of grid points that fall on an object of type formula_2, and formula_4 is the total number of grid points.
There exist many variations of this procedure that can for example vary in grid geometry.
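A minimal simulated example in Python (with made-up circular "grains" on a unit slab and a regular square grid) illustrates the estimator formula_0:

import numpy as np

rng = np.random.default_rng(1)
centers = rng.random((50, 2))   # hypothetical grain centres on a 1 x 1 slab
radius = 0.03

def hits_grain(p):
    return bool(np.any(np.linalg.norm(centers - p, axis=1) < radius))

m = 40                                           # 40 x 40 counting grid
coords = (np.arange(m) + 0.5) / m
grid = [(x, y) for x in coords for y in coords]
h = sum(hits_grain(np.array(p)) for p in grid)
print("point-count estimate:", h / len(grid))
print("nominal grain fraction:", 50 * np.pi * radius ** 2)  # ignores overlaps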
|
[
{
"math_id": 0,
"text": " p_i\\approx \\frac{h_i}{N} "
},
{
"math_id": 1,
"text": " p_i"
},
{
"math_id": 2,
"text": " i "
},
{
"math_id": 3,
"text": " h_i "
},
{
"math_id": 4,
"text": " N "
}
] |
https://en.wikipedia.org/wiki?curid=57837450
|
57837828
|
Zero-divisor graph
|
Graph of zero divisors of a commutative ring
In mathematics, and more specifically in combinatorial commutative algebra, a zero-divisor graph is an undirected graph representing the zero divisors of a commutative ring. It has elements of the ring as its vertices, and pairs of elements whose product is zero as its edges.
Definition.
There are two variations of the zero-divisor graph commonly used.
In the original definition of , the vertices represent all elements of the ring. In a later variant studied by , the vertices represent only the zero divisors of the given ring.
Examples.
If formula_1 is a semiprime number (the product of two prime numbers)
then the zero-divisor graph of the ring of integers modulo formula_1 (with only the zero divisors as its vertices) is either a complete graph or a complete bipartite graph.
It is a complete graph formula_2 in the case that formula_3 for some prime number formula_4. In this case the vertices are all the nonzero multiples of formula_4, and the product of any two of these numbers is zero modulo formula_5.
It is a complete bipartite graph formula_6 in the case that formula_7 for two distinct prime numbers formula_4 and formula_8. The two sides of the bipartition are the formula_9 nonzero multiples of formula_8 and the formula_10 nonzero multiples of formula_4, respectively. Two numbers (that are not themselves zero modulo formula_1) multiply to zero modulo formula_1 if and only if one is a multiple of formula_4 and the other is a multiple of formula_8, so this graph has an edge between each pair of vertices on opposite sides of the bipartition, and no other edges. More generally, the zero-divisor graph is a complete bipartite graph for any ring that is a product of two integral domains.
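These two cases are easy to reproduce computationally; the following Python sketch builds the zero-divisor graph of the integers modulo formula_1 (zero divisors as vertices) by brute force:

from itertools import combinations

def zero_divisor_graph(n):
    # Vertices: zero divisors of Z_n; edges: pairs whose product is 0 mod n.
    vertices = [a for a in range(1, n)
                if any(a * b % n == 0 for b in range(1, n))]
    edges = {(a, b) for a, b in combinations(vertices, 2) if a * b % n == 0}
    return vertices, edges

print(zero_divisor_graph(9))        # ([3, 6], {(3, 6)}): the complete graph K_2
v, e = zero_divisor_graph(15)
print(v, len(e))                    # [3, 5, 6, 9, 10, 12], 8 edges: K_{4,2}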
The only cycle graphs that can be realized as zero-divisor graphs (with zero divisors as vertices) are the cycles of length 3 or 4.
The only trees that may be realized as zero-divisor graphs are the stars (complete bipartite graphs that are trees) and the five-vertex tree formed as the zero-divisor graph of formula_0.
Properties.
In the version of the graph that includes all elements, 0 is a universal vertex, and the zero divisors can be identified as the vertices that have a neighbor other than 0.
Because it has a universal vertex, the graph of all ring elements is always connected and has diameter at most two. The graph of all zero divisors is non-empty for every ring that is not an integral domain. It remains connected, has diameter at most three, and (if it contains a cycle) has girth at most four.
The zero-divisor graph of a ring that is not an integral domain is finite if and only if the ring is finite. More concretely, if the graph has maximum degree formula_11, the ring has at most formula_12 elements.
If the ring and the graph are infinite, every edge has an endpoint with infinitely many neighbors.
conjectured that (like the perfect graphs) zero-divisor graphs always have equal clique number and chromatic number. However, this is not true; a counterexample was discovered by .
References.
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}_2\\times\\mathbb{Z}_4"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "K_{p-1}"
},
{
"math_id": 3,
"text": "n=p^2"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "p^2"
},
{
"math_id": 6,
"text": "K_{p-1,q-1}"
},
{
"math_id": 7,
"text": "n=pq"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "p-1"
},
{
"math_id": 10,
"text": "q-1"
},
{
"math_id": 11,
"text": "d"
},
{
"math_id": 12,
"text": "(d^2-2d+2)^2"
}
] |
https://en.wikipedia.org/wiki?curid=57837828
|
5783949
|
Extremally disconnected space
|
Topological space in which the closure of every open set is open
In mathematics, an extremally disconnected space is a topological space in which the closure of every open set is open. (The term "extremally disconnected" is correct, even though the word "extremally" does not appear in most dictionaries, and is sometimes mistaken by spellcheckers for the homophone "extremely disconnected".)
An extremally disconnected space that is also compact and Hausdorff is sometimes called a Stonean space. This is not the same as a Stone space, which is a totally disconnected compact Hausdorff space. Every Stonean space is a Stone space, but not vice versa. In the duality between Stone spaces and Boolean algebras, the Stonean spaces correspond to the complete Boolean algebras.
An extremally disconnected first-countable collectionwise Hausdorff space must be discrete. In particular, for metric spaces, the property of being extremally disconnected (the closure of every open set is open) is equivalent to the property of being discrete (every set is open).
Examples and non-examples.
The following spaces are not extremally disconnected:
Equivalent characterizations.
A theorem due to says that the projective objects of the category of compact Hausdorff spaces are exactly the extremally disconnected compact Hausdorff spaces. A simplified proof of this fact is given by .
A compact Hausdorff space is extremally disconnected if and only if it is a retract of the Stone–Čech compactification of a discrete space.
Applications.
proves the Riesz–Markov–Kakutani representation theorem by reducing it to the case of extremally disconnected spaces, in which case the representation theorem can be proved by elementary means.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C(X)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\{\\{x,y\\},\\{x,y,z\\}\\}"
}
] |
https://en.wikipedia.org/wiki?curid=5783949
|
5784135
|
Rational representation
|
In mathematics, in the representation theory of algebraic groups, a linear representation of an algebraic group is said to be rational if, viewed as a map from the group to the general linear group, it is a rational map of algebraic varieties.
Finite direct sums and products of rational representations are rational.
A rational formula_0-module is a module that can be expressed as a sum (not necessarily direct) of rational representations.
|
[
{
"math_id": 0,
"text": "G"
}
] |
https://en.wikipedia.org/wiki?curid=5784135
|
57842154
|
Lugiato–Lefever equation
|
The numerical models of lasers and of most nonlinear optical systems stem from the Maxwell–Bloch equations (MBE). This full set of partial differential equations includes the Maxwell equations for the electromagnetic field and the semiclassical equations of the two-level (or multilevel) atoms. For this reason, simplified theoretical approaches have been developed for the numerical simulation of laser beam formation and propagation since the early years of the laser era. The slowly varying envelope approximation of the MBE follows from the standard nonlinear wave equation with the nonlinear polarization formula_0 as a source:
formula_1
where formula_2,
resulting in the standard "parabolic" wave equation:
formula_3, under the conditions
formula_4 and formula_5.
Averaging over the longitudinal coordinate formula_6 results in the "mean-field" Suchkov–Letokhov equation (SLE), which describes the nonstationary evolution of the transverse mode pattern.
The model usually designated as Lugiato–Lefever equation (LLE) was formulated in 1987 by Luigi Lugiato and René Lefever
as a paradigm for spontaneous pattern formation in nonlinear optical systems. The patterns originate from the interaction of a coherent field, that is injected into a resonant optical cavity, with a Kerr medium that fills the cavity.
The same equation governs two types of patterns: stationary patterns that arise in the planes orthogonal with respect to the direction of propagation of light ("transverse patterns") and patterns that form in the longitudinal direction ("longitudinal" "patterns"), travel along the cavity with the velocity of light in the medium and give rise to a sequence of pulses in the output of the cavity.
The case of longitudinal patterns is intrinsically linked to the phenomenon of “Kerr frequency combs” in microresonators, discovered in 2007 by Tobias Kippenberg and collaborators, that has raised a very lively interest, especially because of the applicative avenue it has opened.
The LLE equation.
Figure 1 shows a light beam that propagates in the formula_7 direction, while formula_8 and formula_9 are the transverse directions. If we assume that the electric field formula_10, where formula_11 denotes time, is linearly polarized and can therefore be treated as a scalar, we can express it in terms of the slowly varying normalized complex envelope formula_12 in this way
formula_13
where formula_14 is the frequency of the light beam that is injected into the cavity and formula_15 is the light velocity in the Kerr medium that fills the cavity. For definiteness, consider a ring cavity (Fig. 2) of very high quality (high-Q cavity).
In the original LLE, one assumes conditions such that the envelope formula_16 is independent of the longitudinal variable formula_7 (i.e. uniform along the cavity), so that formula_17. The equation reads
formula_18
where formula_19, formula_20 and formula_21 are normalized temporal and spatial variables, i.e. formula_22, formula_23, formula_24, with formula_25 being the cavity decay rate (cavity linewidth) and formula_26 the diffraction length in the cavity. formula_27 is the cavity detuning parameter, with formula_28 being the cavity frequency nearest to formula_14. On the right-hand side of Eq.(1), the first term formula_29 is the normalized amplitude of the input field that is injected into the cavity,
the second is the decay term, the third is the detuning term, the fourth is the cubic nonlinear term that takes the Kerr medium into account, and the last term, with the transverse Laplacian formula_30, describes diffraction in the paraxial approximation. Conditions of self-focusing are assumed.
We refer to Eq.(1) as the transverse LLE. Some years later, the longitudinal LLE was formulated, in which diffraction is replaced by dispersion. In this case one assumes that the envelope formula_16 is independent of the transverse variables formula_8 and formula_9, so that formula_31. The longitudinal LLE reads
with formula_32, where formula_33 depends, in particular, on the second-order dispersion parameter. Conditions of anomalous dispersion are assumed. An important point is that, once formula_34 is obtained by solving Eq.(2), one must come back to the original variables formula_35 and replace formula_7 by formula_36, so that a formula_7-dependent stationary solution (stationary pattern) becomes a travelling pattern (with velocity formula_15).
From a mathematical viewpoint, the LLE amounts to a driven, damped, detuned nonlinear Schroedinger equation.
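As an illustration of this structure, the longitudinal equation can be integrated numerically with a split-step Fourier scheme. The Python sketch below assumes the commonly used dimensionless form dE/dt = E_in - (1 + i*theta)E + i|E|^2 E + i d^2E/dtau^2 (anomalous dispersion); the parameter values are purely illustrative:
# Sketch: split-step Fourier integration of the normalized longitudinal LLE
#   dE/dt = E_in - (1 + i*theta)*E + i*|E|^2*E + i*d^2E/dtau^2
# Parameter values are illustrative, not taken from any particular experiment.
import numpy as np

N, L = 256, 50.0                                   # grid points, length of the fast-time window
tau = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)         # Fourier frequencies conjugate to tau

theta, E_in = 2.0, 1.6                             # detuning and driving amplitude
dt, steps = 0.01, 5000

rng = np.random.default_rng(0)
E = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))   # weak initial noise

lin = np.exp((-(1 + 1j * theta) - 1j * k**2) * dt)  # loss, detuning and dispersion (Fourier space)
for _ in range(steps):
    E = E * np.exp(1j * np.abs(E)**2 * dt) + E_in * dt   # Kerr nonlinearity and driving
    E = np.fft.ifft(lin * np.fft.fft(E))

# A patterned (modulationally unstable) state shows a peak intensity well above the mean.
print(round(np.max(np.abs(E)**2), 2), round(np.mean(np.abs(E)**2), 2))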
The transverse LLE (1) is in 2D from the spatial viewpoint. In a waveguide configuration formula_16 depends only on one spatial variable, say formula_8, and the transverse Laplacian is replaced by formula_37 and one has the transverse LLE in 1D. The longitudinal LLE (2) is equivalent to the transverse LLE in 1D.
In some papers dealing with the longitudinal case one considers dispersion beyond the second order, so that Eq.(2) includes also terms with derivatives of order higher than second with respect to formula_38.
Uniform stationary solutions. Connection with "optical bistability". Four-wave mixing and pattern formation.
Let us focus on the case in which the envelope formula_16 is constant, i.e. on the stationary solutions that are independent of all spatial variables. By dropping all derivatives in Eqs.(1) and (2), and taking the squared modulus, one obtains the stationary equation
If we plot the stationary curve of formula_39 as a function of formula_40,
when formula_41 we obtain a curve like that shown in Fig.3.
The curve is formula_42-shaped and
there is an interval of values of formula_40 where one has three stationary states. However, the states that lie in the segment with negative slope are unstable, so that in the interval there are two coexisting stable stationary states: this phenomenon is called "optical bistability". If the input intensity formula_40 is increased and then decreased, one covers a hysteresis cycle.
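In the normalized variables used here the stationary relation can be written as |E_in|^2 = |E|^2 [1 + (theta - |E|^2)^2]; this explicit form is stated as an assumption, since the display of Eq.(3) is not reproduced above. A few lines of Python then trace the curve of Fig. 3 and locate the bistable interval:
# Sketch: stationary response |E_in|^2 = Y * (1 + (theta - Y)^2) with Y = |E|^2.
# The S-shaped (bistable) branch appears for theta > sqrt(3); theta below is illustrative.
import numpy as np

theta = 3.0
Y = np.linspace(0.0, 6.0, 2001)            # intracavity intensity |E|^2
X = Y * (1.0 + (theta - Y)**2)             # input intensity |E_in|^2

slope = np.gradient(X, Y)
middle_branch = Y[slope < 0]               # negative-slope (unstable) segment
if middle_branch.size:
    print("negative-slope branch for |E|^2 between about",
          round(middle_branch.min(), 2), "and", round(middle_branch.max(), 2))
else:
    print("single-valued response (theta <= sqrt(3))")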
If we refer to the modes of the empty cavity, in the case of the uniform stationary solutions described by Eq.(3) the electric field is singlemode, corresponding to the mode of frequency formula_28 quasi-resonant with the input frequency formula_14.
In the transverse configuration of Eq.(1), in the case of these stationary solutions E corresponds to a singlemode plane wave formula_43 with formula_44, where formula_45 and formula_46 are the transverse components of the wave vector, exactly as the input field formula_29.
The cubic Kerr nonlinearity of Eqs.(1) and (2) gives rise to four-wave mixing (FWM), which can generate other modes, so that the envelope formula_16 displays a spatial pattern: in the transverse plane in the case of Eq.(1), along the cavity in the case of Eq.(2).
Transverse patterns and "cavity solitons".
In the transverse case of Eq.(1) the pattern arises from the interplay of FWM and diffraction. The FWM can give rise, for example, to processes in which pairs of photons with formula_44 are absorbed and, simultaneously, the system emits pairs of photons with formula_47, formula_48 and formula_49, formula_48 in such a way that the total energy of photons, and their total momentum, are conserved (Fig.4).
Actually further FWM processes enter into play, so that formula_50 assumes the configuration of a hexagonal pattern (see Fig.5).
A pattern displays an ordered array of intensity peaks. It is also possible to generate isolated intensity peaks, which are called "cavity solitons" (see Fig. 6). Since cavity solitons can be "written" and "erased" one by one in the transverse plane, like on a blackboard, they are of great interest for applications to optical information processing and telecommunications.
Longitudinal patterns and cavity solitons.
In the longitudinal case of Eq.(2) the patterns arise from the interplay between FWM and dispersion. The FWM can give rise, for example, to processes in which pairs of photons of the longitudinal mode quasi-resonant with formula_14 are absorbed and, simultaneously, the system emits photon pairs corresponding to cavity modes symmetrically adjacent to the quasi-resonant mode, in such a way that the total photon energy, as well as the total longitudinal photon momentum, are conserved.
Figure 7 shows an example of the patterns that are generated, and travel along the cavity and out of the cavity. Like in the transverse case, also in the longitudinal configuration single or multiple Kerr cavity solitons can be generated; Figure 8 illustrates the case of a single cavity soliton that circulates in the cavity and produces a sequence of narrow pulses in the output. Such solitons have been observed for the first time in a fiber cavity.
It is important to note that the instability which gives rise to longitudinal patterns and cavity solitons in the LLE is a special case of the multimode instability of optical bistability, predicted by Bonifacio and Lugiato in and first observed experimentally in.
Microresonator Kerr frequency combs and cavity solitons.
Optical frequency combs constitute an equidistant set of laser frequencies that can be employed to count the cycles of light. This technique, introduced by Theodor Haensch and John Hall using mode-locked lasers, has led to myriad applications. The work demonstrated the realization of broadband optical frequency combs exploiting the whispering gallery modes activated by a CW laser field injected into a high-Q microresonator filled with a Kerr medium, that gives rise to FWM. Since that time Kerr frequency combs (KFC), whose bandwidth can exceed an octave with repetition rates in the microwave to THz frequencies, have been generated in a wide variety of microresonators; for reviews on this subject see e.g. They offer substantial potential for miniaturization and chip-scale photonic integration, as well as for power reduction. Today KFC generation is a mature field, and this technology has been applied to several areas, including coherent telecommunications, spectroscopy, atomic clocks as well as laser ranging and astrophysical spectrometer calibration.
A key impetus to these developments has been the realization of Kerr cavity solitons in microresonators, opening the possibility of utilizing Kerr cavity solitons in photonic integrated microresonators.
The longitudinal LLE (2) provides a spatio-temporal picture of the involved phenomena, but from the spectral viewpoint its solutions correspond to KFC. The link between the topic of optical KFC and the LLE was theoretically developed in. These authors showed that the LLE (or generalizations including higher order dispersion terms) is the model which describes the generation of KFC and is capable of predicting their properties when the system parameters are varied. The spontaneous formation of spatial patterns and solitons travelling along the cavity described by the LLE is the spatiotemporal equivalent of the frequency combs and governs their features. The rather idealized conditions assumed in the formulation of the LLE, especially the high-Q condition, have been perfectly materialized by the spectacular technological progress that has occurred in the meantime in the field of photonics and has led, in particular, to the discovery of KFC.
The Suchkov-Letokhov equation.
Averaging over the longitudinal coordinate formula_6 results in the "mean-field" SLE, in which the longitudinal derivative is absent:
formula_51.
A rigorous procedure demonstrated that this precursor of the LLE is applicable to modeling the nonstationary evolution of the transverse mode pattern in the disk laser (1966). Under the condition of a stationary Kerr nonlinearity, the SLE reduces to the LLE.
Quantum aspects.
The two photons that, as shown in Fig.4, are emitted in symmetrically tilted directions in the FWM process, are in a state of "quantum entanglement": they are precisely correlated, for example in energy and momentum. This fact is fundamental for the quantum aspects of optical patterns. For instance, the difference between the intensities of the two symmetrical beams is squeezed, i.e. exhibits fluctuations below the shot noise level; the longitudinal analogue of this phenomenon has been observed experimentally in KFC. In turn, such quantum aspects are basic for the field of quantum imaging.
Review articles.
For reviews on the subject of the LLE, see also.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{P}^\\text{NL}"
},
{
"math_id": 1,
"text": " \\nabla^2 {\\cal E(\\vec r,t)}- \\frac{n^2}{c^2}\\frac{\\partial^2}{\\partial t^2} \n{\\cal E(\\vec r,t)}\n= \\frac{1}{\\varepsilon_0 c^2}\\frac{\\partial^2}{\\partial t^2}\\mathbf{P}^\\text{NL},"
},
{
"math_id": 2,
"text": "{\\cal E}(\\vec r,t)\\propto E(\\vec r_\\perp,z,t)e^{i(n k_0)(z-ct)} + \\text{c.c.} "
},
{
"math_id": 3,
"text": " \\frac{\\partial E}{\\partial z} + \\frac{\\; \\omega_0\\ }{ k_0 \n c^2} \\frac{\\partial E}{\\partial t} - \\tfrac{1}{2 k_0 }\\ i\\ \\nabla^2_\\perp E = \n\\frac{1}{\\varepsilon_0 k_0 c^2}\\frac{\\partial^2}{\\partial t^2}\\mathbf{P}^\\text{NL}~"
},
{
"math_id": 4,
"text": "\\displaystyle \\left|\\ \\nabla^2 E\\ \\right| \\ll \\left|\\ k_0 \\nabla E\\ \\right| "
},
{
"math_id": 5,
"text": "\\displaystyle \\left|\\ \\frac{\\partial^2 E}{\\partial t^2}\\ \\right| \\ll \\left|\\ \\omega_0\\, \\frac{\\partial E}{\\partial t}\\ \\right|\\ "
},
{
"math_id": 6,
"text": "z "
},
{
"math_id": 7,
"text": "z"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "y"
},
{
"math_id": 10,
"text": "{\\cal E}(x,y,z,t)"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "E(x,y,z,t)"
},
{
"math_id": 13,
"text": "{\\cal E}(x,y,z,t)\\propto E(x,y,z,t)e^{i(\\omega_0/\\tilde{c})(z-\\tilde{c}t)} + \\text{c.c.} "
},
{
"math_id": 14,
"text": "\\omega_0"
},
{
"math_id": 15,
"text": "\\tilde{c}"
},
{
"math_id": 16,
"text": "E"
},
{
"math_id": 17,
"text": "E=E(x,y,t)"
},
{
"math_id": 18,
"text": "\\nabla^2_\\perp E=\\frac{\\partial^2E}{\\partial \\bar{x}^2}+\\frac{\\partial^2E}{\\partial \\bar{y}^2}"
},
{
"math_id": 19,
"text": "\\bar{t}"
},
{
"math_id": 20,
"text": "\\bar{x}"
},
{
"math_id": 21,
"text": "\\bar{y}"
},
{
"math_id": 22,
"text": "\\bar{t}=\\kappa t"
},
{
"math_id": 23,
"text": "\\bar{x}=x/\\ell_d"
},
{
"math_id": 24,
"text": "\\bar{y}=y/\\ell_d"
},
{
"math_id": 25,
"text": "\\kappa"
},
{
"math_id": 26,
"text": "\\ell_d"
},
{
"math_id": 27,
"text": "\\theta=(\\omega_c-\\omega_0)/\\kappa"
},
{
"math_id": 28,
"text": "\\omega_c"
},
{
"math_id": 29,
"text": "E_\\text{in}"
},
{
"math_id": 30,
"text": "\\nabla_\\perp^2"
},
{
"math_id": 31,
"text": "E=E(z,t)"
},
{
"math_id": 32,
"text": "\\bar{z}=z/a"
},
{
"math_id": 33,
"text": "a"
},
{
"math_id": 34,
"text": "E(\\bar{z},\\bar{t})"
},
{
"math_id": 35,
"text": "z,t"
},
{
"math_id": 36,
"text": "z-\\tilde{c}t"
},
{
"math_id": 37,
"text": "\\frac{\\partial^2 E}{\\partial \\bar{x}^2}"
},
{
"math_id": 38,
"text": "\\bar{z}"
},
{
"math_id": 39,
"text": "|E|^2"
},
{
"math_id": 40,
"text": "|E_\\text{in}|^2"
},
{
"math_id": 41,
"text": "\\theta>\\sqrt{3} "
},
{
"math_id": 42,
"text": "S"
},
{
"math_id": 43,
"text": "e^{i(k_x x+k_y y)}"
},
{
"math_id": 44,
"text": "k_x=k_y=0"
},
{
"math_id": 45,
"text": "k_x"
},
{
"math_id": 46,
"text": "k_y"
},
{
"math_id": 47,
"text": "k_x=\\bar{k}_x"
},
{
"math_id": 48,
"text": "k_y=0"
},
{
"math_id": 49,
"text": "k_x=-\\bar{k}_x"
},
{
"math_id": 50,
"text": "E(x,y)"
},
{
"math_id": 51,
"text": " \\frac{\\; \\omega_0\\ }{ k_0 \n c^2} \\frac{\\partial E}{\\partial t} - \\tfrac{1}{2 k_0 }\\ i\\ \\nabla^2_\\perp E = \n\\frac{1}{\\varepsilon_0 k_0 c^2}\\frac{\\partial^2}{\\partial t^2}\\mathbf{P}^\\text{NL}~"
}
] |
https://en.wikipedia.org/wiki?curid=57842154
|
57843630
|
Alloy broadening
|
Alloy broadening is a spectral-line broadening mechanism caused by random distribution of the atoms in an alloy.
Alloy broadening is one of several line broadening mechanisms. The random distribution of atoms in an alloy causes the material composition to differ from position to position. In semiconductors and insulators, the varying material composition leads to different band gap energies, and hence to different exciton recombination energies. Therefore, depending on the position at which an exciton recombines, the emitted light has a different energy. Alloy broadening is an inhomogeneous line broadening, meaning that its line shape is Gaussian.
Binary alloy.
In the mathematical description it is assumed that no clustering occurs within the alloy. Then, for a binary alloy of the form <chem>A_{1-x}B_{x}</chem>, e.g. <chem>Si_{1-x}Ge_{x}</chem>, the standard deviation of the composition is given by:
formula_0,
where formula_1 is the number of atoms within the exciton volume, i.e. formula_2 with formula_3 being the number of atoms per unit volume.
In general, the band gap energy formula_4 of a semiconducting alloy depends on the composition, i.e. formula_4 is a function of x. The band gap energy can be considered to be the fluorescence energy. Therefore, the standard deviation in fluorescence is
formula_5
As the alloy broadening belongs to the group of inhomogeneous broadenings the line shape of the fluorescence intensity formula_6 is Gaussian:
formula_7
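As a numerical illustration, the following Python sketch evaluates formula_0, formula_5 and the Gaussian line shape formula_7 for assumed, order-of-magnitude input values (the composition, exciton radius, atomic density and band-gap slope are placeholders, not measured data):
# Sketch: alloy-broadening linewidth for a binary alloy A_{1-x}B_x.
# All numerical inputs are illustrative placeholders, not measured values.
import numpy as np

x = 0.3                                    # composition
n_atoms = 5e28                             # atoms per m^3 (order of magnitude for a semiconductor)
V_exc = (4.0 / 3.0) * np.pi * (5e-9)**3    # exciton volume for an assumed 5 nm radius
dEg_dx = 1.0                               # band-gap slope dE_g/dx in eV (placeholder)

N = V_exc * n_atoms                        # atoms inside the exciton volume
dx = np.sqrt(x * (1 - x) / N)              # standard deviation of the composition
dE = dEg_dx * dx                           # standard deviation of the emission energy (eV)
print(f"N = {N:.3g}, sigma_x = {dx:.3g}, sigma_E = {dE*1e3:.3g} meV")

# Gaussian line shape I(E) ~ exp(-(E - E0)^2 / (2 * dE^2)) around an assumed E0
E0 = 1.5                                   # eV, placeholder emission energy
E = np.linspace(E0 - 5 * dE, E0 + 5 * dE, 11)
I = np.exp(-(E - E0)**2 / (2 * dE**2))
print(np.round(I, 3))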
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta x = \\sqrt{\\frac{x \\cdot (1-x)}{N}}"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "N = V_{exc} \\cdot n"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "E_{g}"
},
{
"math_id": 5,
"text": "\\Delta E = \\frac{\\mathrm d E_{g}}{\\mathrm dx} \\cdot \\sqrt{x \\cdot \\frac{1-x}{N}}"
},
{
"math_id": 6,
"text": "I(E)"
},
{
"math_id": 7,
"text": "I(E) \\sim \\exp\\left(- \\frac{(E - E_{0})^2}{2 \\cdot \\Delta E^2}\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=57843630
|
57845886
|
Separation oracle
|
Concept in the mathematical theory of convex optimization
A separation oracle (also called a cutting-plane oracle) is a concept in the mathematical theory of convex optimization. It is a method to describe a convex set that is given as an input to an optimization algorithm. Separation oracles are used as input to ellipsoid methods.
Definition.
Let "K" be a convex and compact set in R"n". A strong separation oracle for "K" is an oracle (black box) that, given a vector "y" in R"n", returns one of the following:
A strong separation oracle is completely accurate, and thus may be hard to construct. For practical reasons, a weaker version is considered, which allows for small errors in the boundary of "K" and the inequalities. Given a small error tolerance "d">0, we say that:
The weak version also considers "rational" numbers, which have a representation of finite length, rather than arbitrary real numbers. A weak separation oracle for "K" is an oracle that, given a vector "y" in Q"n" and a rational number "d">0, returns one of the following:
Implementation.
A special case of a convex set is a set represented by linear inequalities: "formula_2". Such a set is called a convex "polytope". A strong separation oracle for a convex polytope can be implemented, but its run-time depends on the input format.
Representation by inequalities.
If the matrix "A" and the vector "b" are given as input, so that "formula_2", then a strong separation oracle can be implemented as follows. Given a point "y", compute "formula_3":
This oracle runs in polynomial time as long as the number of constraints is polynomial.
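This procedure translates directly into code. The Python sketch below is illustrative only (the function name is not from any library); it returns either nothing, meaning "y" is in "K", or the most violated row of "A", which serves as the separating vector:
# Sketch: strong separation oracle for K = { x : A x <= b }.
# Returns None if y is in K; otherwise returns a row a_j of A with
# a_j . y > b_j >= a_j . x for every x in K, i.e. a separating hyperplane.
import numpy as np

def separation_oracle(A, b, y, tol=1e-12):
    violations = A @ y - b
    j = int(np.argmax(violations))
    if violations[j] <= tol:
        return None          # all constraints hold: y is in K
    return A[j]              # the most violated constraint provides the cut

# Example: the unit square 0 <= x1, x2 <= 1.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
print(separation_oracle(A, b, np.array([0.5, 0.5])))   # None (inside)
print(separation_oracle(A, b, np.array([2.0, 0.5])))   # [1. 0.]  (violates x1 <= 1)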
Representation by vertices.
Suppose the set of vertices of "K" is given as an input, so that "formula_8" the convex hull of its vertices. Then, deciding whether "y" is in "K" requires checking whether "y" is a convex combination of the input vectors, that is, whether there exist coefficients "z"1, ..., "z""k" such that:
This is a linear program with "k" variables and "n" equality constraints (one for each element of "y"). If "y" is not in "K", then the above program has no solution, and the separation oracle needs to find a vector "c" such that
Note that the two representations above can be very different in size: it is possible that a polytope can be represented by a small number of inequalities but has exponentially many vertices (for example, an "n"-dimensional cube). Conversely, it is possible that a polytope has a small number of vertices but requires exponentially many inequalities (for example, the convex hull of the 2"n" vectors of the form (0,...,±1,...,0)).
Problem-specific representation.
In some linear optimization problems, even though the number of constraints is exponential, one can still write a custom separation oracle that works in polynomial time. Some examples are:
Non-linear sets.
Let "f" be a convex function on R"n". The set "formula_12" is a convex set in R"n"+1. Given an evaluation oracle for "f" (a black box that returns the value of "f" for every given point), one can easily check whether a vector ("y", "t") is in "K". In order to get a separation oracle, we need also an oracle to evaluate the subgradient of "f". Suppose some vector ("y", "s") is not in "K", so "f"("y") > "s". Let "g" be the subgradient of "f" at "y" ("g" is a vector in R"n")"." Denote "formula_13".Then, "formula_14", and for all ("x", "t") in "K": "formula_15". By definition of a subgradient: "formula_16" for all "x" in R"n". Therefore, "formula_17", so "formula_18 ," and "c" represents a separating hyperplane.
Usage.
A strong separation oracle can be given as an input to the ellipsoid method for solving a linear program. Consider the linear program "formula_19". The ellipsoid method maintains an ellipsoid that initially contains the entire feasible domain formula_20. At each iteration "t", it takes the center formula_21 of the current ellipsoid, and sends it to the separation oracle:
After making a cut, we construct a new, smaller ellipsoid, that contains the remaining region. It can be shown that this process converges to an approximate solution, in time polynomial in the required accuracy.
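To make the interaction concrete, here is a minimal central-cut ellipsoid iteration in Python for a feasibility problem (a sketch under the stated assumptions, with no numerical safeguards); the separation oracle used in the example is the polytope oracle described above, written inline:
# Sketch: central-cut ellipsoid method for finding a point in a convex body K,
# driven by a separation oracle.  Minimal illustration, no numerical safeguards.
import numpy as np

def ellipsoid_feasibility(oracle, center, radius, max_iter=200):
    n = center.size
    P = (radius ** 2) * np.eye(n)          # ellipsoid { x : (x-c)^T P^{-1} (x-c) <= 1 }
    c = center.astype(float)
    for _ in range(max_iter):
        a = oracle(c)
        if a is None:
            return c                       # the current center lies in K
        # K is contained in the half-space { x : a.x <= a.c }; cut the ellipsoid there.
        Pa = P @ a
        g = Pa / np.sqrt(a @ Pa)
        c = c - g / (n + 1)
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
    return None

def square_oracle(y):
    # K = [2, 3] x [2, 3]; return None if y is inside, else a violated row of A.
    A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    b = np.array([3.0, -2.0, 3.0, -2.0])
    v = A @ y - b
    j = int(np.argmax(v))
    return None if v[j] <= 1e-12 else A[j]

print(ellipsoid_feasibility(square_oracle, center=np.zeros(2), radius=10.0))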
Converting a weak oracle to a strong oracle.
Given a weak separation oracle for a "polyhedron", it is possible to construct a strong separation oracle by a careful method of rounding, or by diophantine approximations.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a\\cdot y > a\\cdot x "
},
{
"math_id": 1,
"text": "a\\cdot y +d\\geq a\\cdot x "
},
{
"math_id": 2,
"text": "K = \\{x | Ax \\leq b \\}"
},
{
"math_id": 3,
"text": "Ay"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "c\\cdot y"
},
{
"math_id": 7,
"text": "c\\cdot y > b \\geq c\\cdot x"
},
{
"math_id": 8,
"text": "K = \\text{conv}(v_1,\\ldots,v_k) ="
},
{
"math_id": 9,
"text": "z_1\\cdot v_1 + \\cdots + z_k\\cdot v_k = y"
},
{
"math_id": 10,
"text": "0 \\leq z_i\\leq 1"
},
{
"math_id": 11,
"text": "c\\cdot y > c\\cdot v_i"
},
{
"math_id": 12,
"text": "K = \\{(x, t) | f(x)\\leq t \\}"
},
{
"math_id": 13,
"text": "c := (g, -1)"
},
{
"math_id": 14,
"text": "c\\cdot (y,s) = g\\cdot y - s > g\\cdot y - f(y)"
},
{
"math_id": 15,
"text": "c\\cdot (x,t) = g\\cdot x - t \\leq g\\cdot x - f(x)"
},
{
"math_id": 16,
"text": "f(x)\\geq f(y) + g\\cdot (x-y)"
},
{
"math_id": 17,
"text": "g\\cdot y - f(y) \\geq g\\cdot x-f(x)"
},
{
"math_id": 18,
"text": "c\\cdot(y,s) > c\\cdot(x,t)"
},
{
"math_id": 19,
"text": "\\text{maximize}~~ c\\cdot x ~~\\text{subject to}~~ Ax \\leq b, x\\geq 0"
},
{
"math_id": 20,
"text": "A x \\leq b"
},
{
"math_id": 21,
"text": "x_t"
},
{
"math_id": 22,
"text": "Ax \\leq b"
},
{
"math_id": 23,
"text": "c \\cdot x < c \\cdot x_t"
},
{
"math_id": 24,
"text": "a_j "
},
{
"math_id": 25,
"text": "a_j\\cdot x_t > b_j "
},
{
"math_id": 26,
"text": "a_j \\cdot x \\leq b_j "
},
{
"math_id": 27,
"text": "a_j\\cdot x_t > a_j\\cdot x "
},
{
"math_id": 28,
"text": "a_j\\cdot y > a_j\\cdot x_t"
}
] |
https://en.wikipedia.org/wiki?curid=57845886
|
578460
|
Loop invariant
|
Invariants used to prove properties of loops
In computer science, a loop invariant is a property of a program loop that is true before (and after) each iteration. It is a logical assertion, sometimes checked with a code assertion. Knowing its invariant(s) is essential in understanding the effect of a loop.
In formal program verification, particularly the Floyd-Hoare approach, loop invariants are expressed by formal predicate logic and used to prove properties of loops and by extension algorithms that employ loops (usually correctness properties).
The loop invariants will be true on entry into a loop and following each iteration, so that on exit from the loop both the loop invariants and the loop termination condition can be guaranteed.
From a programming methodology viewpoint, the loop invariant can be viewed as a more abstract specification of the loop, which characterizes the deeper purpose of the loop beyond the details of this implementation. A survey article covers fundamental algorithms from many areas of computer science (searching, sorting, optimization, arithmetic etc.), characterizing each of them from the viewpoint of its invariant.
Because of the similarity of loops and recursive programs, proving partial correctness of loops with invariants is very similar to proving the correctness of recursive programs via induction. In fact, the loop invariant is often the same as the inductive hypothesis to be proved for a recursive program equivalent to a given loop.
Informal example.
The following C subroutine codice_0 returns the maximum value in its argument array codice_1, provided its length codice_2 is at least 1.
Comments are provided at lines 3, 6, 9, 11, and 13. Each comment makes an assertion about the values of one or more variables at that stage of the function.
The highlighted assertions within the loop body, at the beginning and end of the loop (lines 6 and 11), are exactly the same. They thus describe an invariant property of the loop.
When line 13 is reached, this invariant still holds, and it is known that the loop condition codice_3 from line 5 has become false. Both properties together imply that codice_4 equals the maximum value in codice_5, that is, that the correct value is returned from line 14.
int max(int n, const int a[]) {
    int m = a[0];
    // m equals the maximum value in a[0...0]
    int i = 1;
    while (i != n) {
        // m equals the maximum value in a[0...i-1]
        if (m < a[i])
            m = a[i];
        // m equals the maximum value in a[0...i]
        ++i;
        // m equals the maximum value in a[0...i-1]
    }
    // m equals the maximum value in a[0...i-1], and i==n
    return m;
}
Following a defensive programming paradigm, the loop condition codice_3 in line 5 should rather be modified to codice_7, in order to avoid endless looping for illegitimate negative values of codice_2. While this change in code intuitively shouldn't make a difference, the reasoning leading to its correctness becomes somewhat more complicated, since then only codice_9 is known in line 13. In order to obtain that codice_10 also holds, that condition has to be included in the loop invariant. It is easy to see that codice_10, too, is an invariant of the loop, since codice_7 in line 6 can be obtained from the (modified) loop condition in line 5, and hence codice_10 holds in line 11 after codice_14 has been incremented in line 10. However, when loop invariants have to be provided manually for formal program verification, intuitively obvious properties such as codice_10 are often overlooked.
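To make the role of the extra conjunct concrete, here is a hedged Python rendering of the same loop (it is not part of the original C example); the full invariant, including the bound on the counter, is checked at run time with assertions:
# Sketch: the max-of-array loop with its invariant checked at run time.
# Python rendering for illustration; the original example is the C function above.
def max_value(a):
    n = len(a)
    assert n >= 1
    m = a[0]
    i = 1
    while i < n:                       # defensive form of the loop condition
        # loop invariant: i <= n and m equals the maximum of a[0...i-1]
        assert i <= n and m == max(a[:i])
        if m < a[i]:
            m = a[i]
        i += 1
        assert i <= n and m == max(a[:i])
    # on exit the invariant still holds and the loop condition is false,
    # so i == n and m equals the maximum of the whole array
    assert i == n and m == max(a)
    return m

print(max_value([3, 1, 4, 1, 5, 9, 2, 6]))   # 9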
Floyd–Hoare logic.
In Floyd–Hoare logic, the partial correctness of a while loop is governed by the following rule of inference:
formula_0
This means:
In other words: The rule above is a deductive step that has as its premise the Hoare triple formula_3. This triple is actually a relation on machine states. It holds whenever starting from a state in which the boolean expression formula_4 is true and successfully executing some code called formula_1, the machine ends up in a state in which I is true. If this relation can be proven, the rule then allows us to conclude that successful execution of the program formula_2 will lead from a state in which I is true to a state in which formula_5 holds. The boolean formula I in this rule is called a loop invariant.
With some variations in the notation used, and with the premise that the loop halts, this rule is also known as the Invariant Relation Theorem. As one 1970s textbook presents it in a way meant to be accessible to student programmers:
Let the notation codice_16 mean that if codice_17 is true before the sequence of statements codice_18 run, then codice_19 is true after it. Then the invariant relation theorem holds that
codice_20
implies
codice_21
Example.
The following example illustrates how this rule works. Consider the program
while (x < 10)
x := x+1;
One can then prove the following Hoare triple:
formula_6
The condition "C" of the codice_22 loop is formula_7. A useful loop invariant I has to be guessed; it will turn out that formula_8 is appropriate. Under these assumptions it is possible to prove the following Hoare triple:
formula_9
While this triple can be derived formally from the rules of Floyd-Hoare logic governing assignment, it is also intuitively justified: Computation starts in a state where formula_10 is true, which means simply that formula_7 is true. The computation adds 1 to x, which means that formula_8 is still true (for integer x).
Under this premise, the rule for codice_22 loops permits the following conclusion:
formula_11
However, the post-condition formula_12 (x is less than or equal to 10, but it is not less than 10) is logically equivalent to formula_13, which is what we wanted to show.
The property formula_14 is another invariant of the example loop, and the trivial property formula_15 is another one.
Applying the above inference rule to the former invariant yields formula_16.
Applying it to invariant formula_15 yields formula_17, which is slightly more expressive.
Programming language support.
Eiffel.
The Eiffel programming language provides native support for loop invariants. A loop invariant is expressed with the same syntax used for a class invariant. In the sample below, the loop invariant expression codice_24 must be true following the loop initialization, and after each execution of the loop body; this is checked at runtime.
from
x := 0
invariant
x <= 10
until
x > 10
loop
x := x + 1
end
Whiley.
The Whiley programming language also provides first-class support for loop invariants. Loop invariants are expressed using one or more codice_25 clauses, as the following illustrates:
function max(int[] items) -> (int r)
// Requires at least one element to compute max
requires |items| > 0
// (1) Result is not smaller than any element
ensures all { i in 0..|items| | items[i] <= r }
// (2) Result matches at least one element
ensures some { i in 0..|items| | items[i] == r }:
    nat i = 1
    int m = items[0]
    while i < |items|
    // (1) No item seen so far is larger than m
    where all { k in 0..i | items[k] <= m }
    // (2) One or more items seen so far matches m
    where some { k in 0..i | items[k] == m }:
        if items[i] > m:
            m = items[i]
        i = i + 1
    return m
The codice_0 function determines the largest element in an integer array. For this to be defined, the array must contain at least one element. The postconditions of codice_0 require that the returned value is: (1) not smaller than any element; and, (2) that it matches at least one element. The loop invariant is defined inductively through two codice_25 clauses, each of which corresponds to a clause in the postcondition. The fundamental difference is that each clause of the loop invariant identifies the result as being correct up to the current element codice_14, whilst the postconditions identify the result as being correct for all elements.
Use of loop invariants.
A loop invariant can serve one of the following purposes:
For 1., a natural language comment (like codice_30 in the above example) is sufficient.
For 2., programming language support is required, such as the C library assert.h, or the above-shown codice_31 clause in Eiffel. Often, run-time checking can be switched on (for debugging runs) and off (for production runs) by a compiler or a runtime option.
For 3., some tools exist to support mathematical proofs, usually based on the above-shown Floyd–Hoare rule, that a given loop code in fact satisfies a given (set of) loop invariant(s).
The technique of abstract interpretation can be used to detect loop invariants of given code automatically. However, this approach is limited to very simple invariants (such as codice_32).
Distinction from loop-invariant code.
Loop-invariant code consists of statements or expressions that can be moved outside a loop body without affecting the program semantics. Such transformations, called loop-invariant code motion, are performed by some compilers to optimize programs.
A loop-invariant code example (in the C programming language) is
for (int i=0; i<n; ++i) {
    x = y+z;
    a[i] = 6*i + x*x;
}
where the calculations codice_33 and codice_34 can be moved before the loop, resulting in an equivalent, but faster, program:
x = y+z;
t1 = x*x;
for (int i=0; i<n; ++i) {
    a[i] = 6*i + t1;
}
In contrast, e.g. the property codice_35 is a loop invariant for both the original and the optimized program, but is not part of the code, hence it doesn't make sense to speak of "moving it out of the loop".
Loop-invariant code may induce a corresponding loop-invariant property. For the above example, the easiest way to see it is to consider a program where the loop invariant code is computed both before and within the loop:
x1 = y+z;
t1 = x1*x1;
for (int i=0; i<n; ++i) {
    x2 = y+z;
    a[i] = 6*i + t1;
}
A loop-invariant property of this code is codice_36, indicating that the values computed before the loop agree with those computed within (except before the first iteration).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\{C\\land I\\}\\;\\mathrm{body}\\;\\{I\\}} {\\{I\\}\\;\\mathtt{while}\\ (C)\\ \\mathrm{body}\\;\\{\\lnot C\\land I\\}}"
},
{
"math_id": 1,
"text": "\\mathrm{body}"
},
{
"math_id": 2,
"text": "\\mathtt{while}\\ (C)\\ \\mathrm{body}"
},
{
"math_id": 3,
"text": "\\{C\\land I\\}\\;\\mathrm{body}\\;\\{I\\}"
},
{
"math_id": 4,
"text": "C\\land I"
},
{
"math_id": 5,
"text": "\\lnot C\\land I"
},
{
"math_id": 6,
"text": "\\{x\\leq10\\}\\; \\mathtt{while}\\ (x<10)\\ x := x+1\\;\\{x=10\\}"
},
{
"math_id": 7,
"text": "x<10"
},
{
"math_id": 8,
"text": "x\\leq10"
},
{
"math_id": 9,
"text": "\\{x<10 \\land x\\leq10\\}\\; x := x+1 \\;\\{x\\leq10\\}"
},
{
"math_id": 10,
"text": "x<10 \\land x\\leq10"
},
{
"math_id": 11,
"text": "\\{x\\leq10\\}\\; \\mathtt{while}\\ (x<10)\\ x := x+1 \\;\\{\\lnot(x<10) \\land x\\leq10\\}"
},
{
"math_id": 12,
"text": "\\lnot(x<10)\\land x\\leq10"
},
{
"math_id": 13,
"text": "x=10"
},
{
"math_id": 14,
"text": "0 \\leq x"
},
{
"math_id": 15,
"text": "\\mathrm{true}"
},
{
"math_id": 16,
"text": "\\{0 \\leq x\\}\\; \\mathtt{while}\\ (x<10)\\ x := x+1\\;\\{10 \\leq x\\}"
},
{
"math_id": 17,
"text": "\\{\\mathrm{true}\\}\\; \\mathtt{while}\\ (x<10)\\ x := x+1\\;\\{10 \\leq x\\}"
}
] |
https://en.wikipedia.org/wiki?curid=578460
|
5784666
|
Ambient construction
|
In conformal geometry, the ambient construction refers to a construction of Charles Fefferman and Robin Graham for which a conformal manifold of dimension "n" is realized ("ambiently") as the boundary of a certain Poincaré manifold, or alternatively as the celestial sphere of a certain pseudo-Riemannian manifold.
The ambient construction is canonical in the sense that it is performed only using the conformal class of the metric: it is conformally invariant. However, the construction only works asymptotically, up to a certain order of approximation. There is, in general, an obstruction to continuing this extension past the critical order. The obstruction itself is of tensorial character, and is known as the (conformal) obstruction tensor. It is, along with the Weyl tensor, one of the two primitive invariants in conformal differential geometry.
Aside from the obstruction tensor, the ambient construction can be used to define a class of conformally invariant differential operators known as the GJMS operators.
A related construction is the tractor bundle.
Overview.
The model flat geometry for the ambient construction is the future null cone in Minkowski space, with the origin deleted. The celestial sphere at infinity is the conformal manifold "M", and the null rays in the cone determine a line bundle over "M". Moreover, the null cone carries a metric which degenerates in the direction of the generators of the cone.
The ambient construction in this flat model space then asks: if one is provided with such a line bundle, along with its degenerate metric, to what extent is it possible to "extend" the metric off the null cone in a canonical way, thus recovering the ambient Minkowski space? In formal terms, the degenerate metric supplies a Dirichlet boundary condition for the extension problem and, as it happens, the natural condition is for the extended metric to be Ricci flat (because of the normalization of the normal conformal connection.)
The ambient construction generalizes this to the case when "M" is conformally curved, first by constructing a natural null line bundle "N" with a degenerate metric, and then solving the associated Dirichlet problem on "N" × (-1,1).
Details.
This section provides an overview of the construction, first of the null line bundle, and then of its ambient extension.
The null line bundle.
Suppose that "M" is a conformal manifold, and that ["g"] denotes the conformal metric defined on "M". Let π : "N" → "M" denote the tautological subbundle of T*"M" ⊗ T*"M" defined by all representatives of the conformal metric. In terms of a fixed background metric "g"0, "N" consists of all positive multiples ω2"g"0 of the metric. There is a natural action of R+ on "N", given by
formula_0
Moreover, the total space of "N" carries a tautological degenerate metric, for if "p" is a point of the fibre of π : "N" → "M" corresponding to the conformal representative "g"p, then let
formula_1
This metric degenerates along the vertical directions. Furthermore, it is homogeneous of degree 2 under the R+ action on "N":
formula_2
Let "X" be the vertical vector field generating the scaling action. Then the following properties are immediate:
"h"("X",-) = 0
LXh = 2"h", where LX is the Lie derivative along the vector field "X".
The ambient space.
Let "N"~ = "N" × (-1,1), with the natural inclusion "i" : "N" → "N"~. The dilations δω extend naturally to "N"~, and hence so does the generator "X" of dilation.
An ambient metric on "N"~ is a Lorentzian metric "h"~ such that
Suppose that a fixed representative of the conformal metric "g" and a local coordinate system "x" = ("x"i) are chosen on "M". These induce coordinates on "N" by identifying a point in the fibre of "N" with ("x","t"2"g"("x")) where "t" > 0 is the fibre coordinate. (In these coordinates, "X" = "t" ∂"t".) Finally, if ρ is a defining function of "N" in "N"~ which is homogeneous of degree 0 under dilations, then ("x","t",ρ) are coordinates of "N"~. Furthermore, any extension metric which is homogeneous of degree 2 can be written in these coordinates in the form:
formula_3
where the "g"ij are "n"2 functions with "g"("x",0) = "g"("x"), the given conformal representative.
After some calculation one shows that the Ricci flatness is equivalent to the following differential equation, where the prime is differentiation with respect to ρ:
formula_4
One may then formally solve this equation as a power series in ρ to obtain the asymptotic development of the ambient metric off the null cone. For example, substituting ρ = 0 and solving gives
"g"ij′("x",0) = 2"P"ij
where "P" is the Schouten tensor. Next, differentiating again and substituting the known value of "g"ij′("x",0) into the equation, the second derivative can be found to be a multiple of the Bach tensor. And so forth.
|
[
{
"math_id": 0,
"text": "\\delta_\\omega g = \\omega^2 g"
},
{
"math_id": 1,
"text": "h_p(X_p,Y_p) = g_p(\\pi_*X,\\pi_*Y)."
},
{
"math_id": 2,
"text": "\\delta^*_\\omega h = \\omega^2 h"
},
{
"math_id": 3,
"text": "h^\\sim = t^2 g_{ij}(x,\\rho)dx^idx^j+2\\rho dt^2+2tdtd\\rho,\\, "
},
{
"math_id": 4,
"text": "\\rho g_{ij}''-\\rho g^{kl}g_{ik}'g_{jl}+\\tfrac12\\rho g^{kl}g_{kl}'g_{ij}'+\\frac{2-n}{2}g_{ij}'-\\tfrac12 g^{kl}g_{kl}'g_{ij}+\\mathrm{Ric}(g)_{ij}=0."
}
] |
https://en.wikipedia.org/wiki?curid=5784666
|
5784784
|
Bach tensor
|
In differential geometry and general relativity, the Bach tensor is a trace-free tensor of rank 2 which is conformally invariant in dimension "n" = 4. Before 1968, it was the only known conformally invariant tensor that is algebraically independent of the Weyl tensor. In abstract indices the Bach tensor is given by
formula_0
where "formula_1" is the Weyl tensor, and "formula_2" the Schouten tensor given in terms of the Ricci tensor "formula_3" and scalar curvature "formula_4" by
formula_5
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "B_{ab} = P_{cd}{{{W_a}^c}_b}^d+\\nabla^c\\nabla_cP_{ab}-\\nabla^c\\nabla_aP_{bc}"
},
{
"math_id": 1,
"text": "W"
},
{
"math_id": 2,
"text": "P"
},
{
"math_id": 3,
"text": "R_{ab}"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "P_{ab}=\\frac{1}{n-2}\\left(R_{ab}-\\frac{R}{2(n-1)}g_{ab}\\right)."
}
] |
https://en.wikipedia.org/wiki?curid=5784784
|
57854817
|
Filtration (probability theory)
|
Model of information available at a given point of a random process
In the theory of stochastic processes, a subdiscipline of probability theory, filtrations are totally ordered collections of subsets that are used to model the information that is available at a given point and therefore play an important role in the formalization of random (stochastic) processes.
Definition.
Let formula_0 be a probability space and let formula_1 be an index set with a total order formula_2 (often formula_3, formula_4, or a subset of formula_5).
For every formula_6 let formula_7 be a sub-"σ"-algebra of formula_8. Then
formula_9
is called a filtration, if formula_10 for all formula_11. So filtrations are families of "σ"-algebras that are ordered non-decreasingly. If formula_12 is a filtration, then formula_13 is called a filtered probability space.
Example.
Let formula_14 be a stochastic process on the probability space formula_15.
Let formula_16 denote the "σ"-algebra generated by the random variables formula_17.
Then
formula_18
is a "σ"-algebra and formula_19 is a filtration.
formula_12 really is a filtration, since by definition all formula_20 are "σ"-algebras and
formula_21
This is known as the natural filtration of formula_22 with respect to formula_23.
Types of filtrations.
Right-continuous filtration.
If formula_24 is a filtration, then the corresponding right-continuous filtration is defined as
formula_25
with
formula_26
The filtration formula_12 itself is called right-continuous if formula_27.
Complete filtration.
Let formula_28 be a probability space and let,
formula_29
be the set of all sets that are contained within a formula_30-null set.
A filtration formula_24 is called a complete filtration, if every formula_7 contains formula_31. This implies formula_32 is a complete measure space for every formula_33 (The converse is not necessarily true.)
Augmented filtration.
A filtration is called an augmented filtration if it is complete and right continuous. For every filtration formula_12 there exists a smallest augmented filtration formula_34 refining formula_12.
If a filtration is an augmented filtration, it is said to satisfy the usual hypotheses or the usual conditions.
|
[
{
"math_id": 0,
"text": " (\\Omega, \\mathcal A, P) "
},
{
"math_id": 1,
"text": " I "
},
{
"math_id": 2,
"text": " \\leq "
},
{
"math_id": 3,
"text": " \\N "
},
{
"math_id": 4,
"text": " \\R^+ "
},
{
"math_id": 5,
"text": " \\mathbb R^+ "
},
{
"math_id": 6,
"text": " i \\in I "
},
{
"math_id": 7,
"text": " \\mathcal F_i "
},
{
"math_id": 8,
"text": " \\mathcal A "
},
{
"math_id": 9,
"text": " \\mathbb F:= (\\mathcal F_i)_{i \\in I} "
},
{
"math_id": 10,
"text": " \\mathcal F_k \\subseteq \\mathcal F_\\ell"
},
{
"math_id": 11,
"text": " k \\leq \\ell "
},
{
"math_id": 12,
"text": " \\mathbb F "
},
{
"math_id": 13,
"text": " (\\Omega, \\mathcal A, \\mathbb F, P) "
},
{
"math_id": 14,
"text": " (X_n)_{n \\in \\N} "
},
{
"math_id": 15,
"text": " (\\Omega, \\mathcal A, P) "
},
{
"math_id": 16,
"text": " \\sigma(X_k \\mid k \\leq n) "
},
{
"math_id": 17,
"text": " X_1, X_2, \\dots, X_n "
},
{
"math_id": 18,
"text": " \\mathcal F_n:=\\sigma(X_k \\mid k \\leq n) "
},
{
"math_id": 19,
"text": " \\mathbb F= (\\mathcal F_n)_{n \\in \\N} "
},
{
"math_id": 20,
"text": " \\mathcal F_n "
},
{
"math_id": 21,
"text": " \\sigma(X_k \\mid k \\leq n) \\subseteq \\sigma(X_k \\mid k \\leq n+1). "
},
{
"math_id": 22,
"text": "\\mathcal A"
},
{
"math_id": 23,
"text": "(X_n)_{n \\in \\N}"
},
{
"math_id": 24,
"text": " \\mathbb F= (\\mathcal F_i)_{i \\in I} "
},
{
"math_id": 25,
"text": " \\mathbb F^+:= (\\mathcal F_i^+)_{i \\in I}, "
},
{
"math_id": 26,
"text": " \\mathcal F_i^+:= \\bigcap_{i < z} \\mathcal F_z. "
},
{
"math_id": 27,
"text": " \\mathbb F^+ = \\mathbb F "
},
{
"math_id": 28,
"text": " (\\Omega, \\mathcal F, P) "
},
{
"math_id": 29,
"text": " \\mathcal N_P:= \\{A \\subseteq \\Omega \\mid A \\subseteq B \\text{ for some } B \\in \\mathcal F \\text{ with } P(B)=0 \\} "
},
{
"math_id": 30,
"text": " P "
},
{
"math_id": 31,
"text": " \\mathcal N_P "
},
{
"math_id": 32,
"text": " (\\Omega, \\mathcal F_i, P) "
},
{
"math_id": 33,
"text": " i \\in I. "
},
{
"math_id": 34,
"text": " \\tilde {\\mathbb F} "
}
] |
https://en.wikipedia.org/wiki?curid=57854817
|
5785677
|
Landau's problems
|
Four basic unsolved problems about prime numbers
At the 1912 International Congress of Mathematicians, Edmund Landau listed four basic problems about prime numbers. These problems were characterised in his speech as "unattackable at the present state of mathematics" and are now known as Landau's problems. They are as follows:
As of 2024, all four problems are unresolved.
Progress toward solutions.
Goldbach's conjecture.
Goldbach's weak conjecture, every odd number greater than 5 can be expressed as the sum of three primes, is a consequence of Goldbach's conjecture. Ivan Vinogradov proved it for large enough "n" (Vinogradov's theorem) in 1937, and Harald Helfgott extended this to a full proof of Goldbach's weak conjecture in 2013.
Chen's theorem, another weakening of Goldbach's conjecture, proves that for all sufficiently large "n", formula_0 where "p" is prime and "q" is either prime or semiprime. Bordignon, Johnston, and Starichkova, correcting and improving on Yamada, proved an explicit version of Chen's theorem: every even number greater than formula_1 is the sum of a prime and a product of at most two primes. Bordignon and Starichkova reduce this to formula_2 assuming the Generalized Riemann hypothesis (GRH) for Dirichlet L-functions. Johnson and Starichkova give a version working for all "n" >= 4 at the cost of using a number which is the product of at most 369 primes rather than a prime or semiprime; under GRH they improve 369 to 33.
Montgomery and Vaughan showed that the exceptional set of even numbers not expressible as the sum of two primes has density zero, although the set is not proven to be finite. The best current bound on the exceptional set is formula_3 (for large enough "x") due to Pintz, and formula_4 under RH, due to Goldston.
Linnik proved that large enough even numbers could be expressed as the sum of two primes and some (ineffective) constant "K" of powers of 2. Following many advances (see Pintz for an overview), Pintz and Ruzsa improved this to "K" = 8. Assuming the GRH, this can be improved to "K" = 7.
Twin prime conjecture.
Yitang Zhang showed that there are infinitely many prime pairs with gap bounded by 70 million, and this result has been improved to gaps of length 246 by a collaborative effort of the Polymath Project. Under the generalized Elliott–Halberstam conjecture this was improved to 6, extending earlier work by Maynard and Goldston, Pintz and Yıldırım.
Chen showed that there are infinitely many primes "p" (later called Chen primes) such that "p" + 2 is either a prime or a semiprime.
Legendre's conjecture.
It suffices to check that each prime gap starting at "p" is smaller than formula_5. A table of maximal prime gaps shows that the conjecture holds up to 2^64 ≈ 1.8×10^19. A counterexample near that size would require a prime gap a hundred million times the size of the average gap.
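The check described above is easy to carry out for small ranges. The following Python sketch (illustrative only) sieves the primes below a bound and verifies that every gap starting at a prime "p" in that range is smaller than 2√"p":
# Sketch: verify that each prime gap starting at p satisfies gap < 2*sqrt(p),
# which implies a prime between n^2 and (n+1)^2 within the range checked.
from math import isqrt

def primes_up_to(limit):
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(limit) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, flag in enumerate(sieve) if flag]

limit = 10**6
ps = primes_up_to(limit)
worst = max((q - p) / (2 * p**0.5) for p, q in zip(ps, ps[1:]))
print(all(q - p < 2 * p**0.5 for p, q in zip(ps, ps[1:])), round(worst, 3))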
Järviniemi, improving on Heath-Brown and Matomäki, shows that there are at most formula_6 exceptional primes followed by gaps larger than formula_7; in particular,
formula_8
A result due to Ingham shows that there is a prime between formula_9 and formula_10 for every large enough "n".
Near-square primes.
Landau's fourth problem asked whether there are infinitely many primes which are of the form formula_11 for integer "n". (The list of known primes of this form is .) The existence of infinitely many such primes would follow as a consequence of other number-theoretic conjectures such as the Bunyakovsky conjecture and Bateman–Horn conjecture. As of 2024, this problem is open.
Examples of near-square primes are the Fermat primes. Henryk Iwaniec showed that there are infinitely many numbers of the form formula_12 with at most two prime factors. Ankeny and Kubilius proved that, assuming the extended Riemann hypothesis for "L"-functions on Hecke characters, there are infinitely many primes of the form formula_13 with formula_14. Landau's conjecture is for the stronger formula_15. The best unconditional result is due to Harman and Lewis and it gives formula_16.
Merikoski, improving on previous works, showed that there are infinitely many numbers of the form formula_12 with greatest prime factor at least formula_17. Replacing the exponent with 2 would yield Landau's conjecture.
The Friedlander–Iwaniec theorem shows that infinitely many primes are of the form formula_18.
Baier and Zhao prove that there are infinitely many primes of the form formula_19 with formula_20; the exponent can be improved to formula_21 under the Generalized Riemann Hypothesis for L-functions and to formula_22 under a certain Elliott-Halberstam type hypothesis.
The Brun sieve establishes an upper bound on the density of primes having the form formula_11: there are formula_23 such primes up to formula_24. Hence almost all numbers of the form formula_12 are composite.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "2n=p+q"
},
{
"math_id": 1,
"text": "e^{e^{32,7}} \\approx 1.4\\cdot10^{69057979807814}"
},
{
"math_id": 2,
"text": "e^{e^{15.85}} \\approx 3.6\\cdot10^{3321634}"
},
{
"math_id": 3,
"text": "E(x) < x^{0.72}"
},
{
"math_id": 4,
"text": "E(x) \\ll x^{0.5}\\log^3 x"
},
{
"math_id": 5,
"text": "2 \\sqrt p"
},
{
"math_id": 6,
"text": "x^{7/100+\\varepsilon}"
},
{
"math_id": 7,
"text": "\\sqrt{2p}"
},
{
"math_id": 8,
"text": "\\sum_{\\stackrel{p_{n+1}-p_n > \\sqrt{p_n}^{1/2}}{p_n \\leq x}}p_{n+1}-p_n\\ll x^{0.57+\\varepsilon}."
},
{
"math_id": 9,
"text": "n^3"
},
{
"math_id": 10,
"text": "(n+1)^3"
},
{
"math_id": 11,
"text": "p=n^2+1"
},
{
"math_id": 12,
"text": "n^2+1"
},
{
"math_id": 13,
"text": "p=x^2+y^2"
},
{
"math_id": 14,
"text": "y=O(\\log p)"
},
{
"math_id": 15,
"text": "y=1"
},
{
"math_id": 16,
"text": "y=O(p^{0.119})"
},
{
"math_id": 17,
"text": "n^{1.279}"
},
{
"math_id": 18,
"text": "x^2+y^4"
},
{
"math_id": 19,
"text": "p=an^2+1"
},
{
"math_id": 20,
"text": "a < p^{5/9+\\varepsilon}"
},
{
"math_id": 21,
"text": "1/2+\\varepsilon"
},
{
"math_id": 22,
"text": "\\varepsilon"
},
{
"math_id": 23,
"text": "O(\\sqrt x/\\log x)"
},
{
"math_id": 24,
"text": "x"
}
] |
https://en.wikipedia.org/wiki?curid=5785677
|
57857677
|
Euler measure
|
In measure theory, the Euler measure of a polyhedral set equals the Euler integral of its indicator function.
The magnitude of an Euler measure.
By induction, it is easy to show that, independently of dimension, the Euler measure of a closed bounded convex polyhedron always equals 1, while the Euler measure of a "d"-dimensional relatively open bounded convex polyhedron is formula_0.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(-1)^d"
}
] |
https://en.wikipedia.org/wiki?curid=57857677
|
5786179
|
Acoustic wave
|
Type of energy propagation
Acoustic waves are a type of energy propagation through a medium by means of adiabatic loading and unloading. Important quantities for describing acoustic waves are acoustic pressure, particle velocity, particle displacement and acoustic intensity. Acoustic waves travel with a characteristic acoustic velocity that depends on the medium they're passing through. Some examples of acoustic waves are audible sound from a speaker (waves traveling through air at the speed of sound), seismic waves (ground vibrations traveling through the earth), or ultrasound used for medical imaging (waves traveling through the body).
Wave properties.
An acoustic wave is a mechanical wave that transmits energy through the movement of atoms and molecules. In fluids, acoustic waves propagate in a longitudinal manner (the particle motion is parallel to the direction of propagation of the wave), in contrast to electromagnetic waves, which propagate in a transverse manner (the particle motion is at a right angle to the direction of propagation of the wave). In solids, however, acoustic waves propagate in both longitudinal and transverse manners, owing to the presence of shear moduli in that state of matter.
Acoustic wave equation.
The acoustic wave equation describes the propagation of sound waves. The acoustic wave equation for sound pressure in one dimension is given by
formula_0
where
The wave equation for particle velocity has the same shape and is given by
formula_5
where
For lossy media, more intricate models need to be applied in order to take into account frequency-dependent attenuation and phase speed. Such models include acoustic wave equations that incorporate fractional derivative terms, see also the acoustic attenuation article.
D'Alembert gave the general solution for the lossless wave equation. For sound pressure, a solution would be
formula_7
where
For formula_11 the wave becomes a travelling wave moving rightwards, for formula_12 the wave becomes a travelling wave moving leftwards. A standing wave can be obtained by formula_13.
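This statement is easy to check numerically. The Python sketch below (illustrative only; it assumes the one-dimensional wave equation in the form d^2p/dx^2 - (1/c^2) d^2p/dt^2 = 0, consistent with the lossless equation above) verifies that a superposition of a right- and a left-travelling waveform satisfies the equation to finite-difference accuracy:
# Sketch: check that p(x,t) = f(x - c t) + g(x + c t) satisfies the 1D wave
# equation d^2p/dx^2 - (1/c^2) d^2p/dt^2 = 0, up to finite-difference error.
import numpy as np

c = 343.0                                   # assumed speed of sound in air, m/s
f = lambda u: np.exp(-u ** 2)               # rightward-travelling Gaussian pulse
g = lambda u: 0.5 * np.cos(u)               # leftward-travelling wave
p = lambda x, t: f(x - c * t) + g(x + c * t)

x, t, h = 1.3, 0.002, 1e-3                  # evaluation point and step size
d2p_dx2 = (p(x + h, t) - 2 * p(x, t) + p(x - h, t)) / h ** 2
d2p_dt2 = (p(x, t + h / c) - 2 * p(x, t) + p(x, t - h / c)) / (h / c) ** 2

print(d2p_dx2 - d2p_dt2 / c ** 2)           # close to zero (dominated by discretization error)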
Phase.
In a travelling wave pressure and particle velocity are in phase, which means the phase angle between the two quantities is zero.
This can be easily proven using the ideal gas law
formula_14
where
Consider a volume formula_15. As an acoustic wave propagates through the volume, adiabatic compression and decompression occur. For an adiabatic change, the following relation between the volume formula_15 of a parcel of fluid and the pressure formula_1 holds
formula_18
where formula_19 is the adiabatic index without unit and the subscript formula_20 denotes the mean value of the respective variable.
As a sound wave propagates through a volume, the horizontal displacement of a particle formula_21 occurs along the wave propagation direction.
formula_22
where
From this equation it can be seen that when pressure is at its maximum, particle displacement from average position reaches zero. As mentioned before, the oscillating pressure for a rightward traveling wave can be given by
formula_24
Since displacement is at a maximum when the pressure is zero, there is a 90 degree phase difference, so the displacement is given by
formula_25
Particle velocity is the first derivative of particle displacement: formula_26. Differentiation of a sine gives a cosine again
formula_27
During adiabatic change, temperature changes with pressure as well following
formula_28
This fact is exploited within the field of thermoacoustics.
Propagation speed.
The propagation speed, or acoustic velocity, of acoustic waves is a function of the medium of propagation. In general, the acoustic velocity "c" is given by the Newton-Laplace equation:
formula_29
where "C" is a coefficient of stiffness (the bulk modulus for fluid media) and formula_30 is the density.
Thus the acoustic velocity increases with the stiffness (the resistance of an elastic body to deformation by an applied force) of the material, and decreases with the density.
For general equations of state, if classical mechanics is used, the acoustic velocity formula_3 is given by
formula_31
with formula_1 as the pressure and formula_30 the density, where differentiation is taken with respect to adiabatic change.
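As a numerical illustration of the Newton–Laplace relation, the following Python sketch computes the acoustic velocity from a stiffness coefficient and a density; the material constants used are approximate illustrative values (roughly those of water), not data from this article.
import math
def acoustic_velocity(stiffness, density):
    """Newton-Laplace relation: c = sqrt(C / rho)."""
    return math.sqrt(stiffness / density)
# Approximate values for water: bulk modulus ~2.2 GPa, density ~1000 kg/m^3.
print(acoustic_velocity(2.2e9, 1000.0))    # roughly 1483 m/s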
Phenomena.
Acoustic waves are elastic waves that exhibit phenomena like diffraction, reflection and interference. Note that sound waves in air are not polarized since they oscillate along the same direction as they move.
Interference.
Interference is the addition of two or more waves that results in a new wave pattern. Interference of sound waves can be observed when two loudspeakers transmit the same signal: at certain locations constructive interference occurs, doubling the local sound pressure, while at other locations destructive interference occurs, causing a local sound pressure of zero pascals.
Standing wave.
A standing wave is a special kind of wave that can occur in a resonator. In a resonator, superposition of the incident and reflected waves occurs, causing a standing wave. Pressure and particle velocity are 90 degrees out of phase in a standing wave.
Consider a tube with two closed ends acting as a resonator. The resonator has normal modes at frequencies given by
formula_32
where formula_3 is the speed of sound, formula_33 the distance between the two closed ends of the tube, and "N" a positive integer.
At the ends the particle velocity becomes zero since there can be no particle displacement. Pressure, however, doubles at the ends because of interference of the incident wave with the reflected wave. As pressure is maximal at the ends while velocity is zero, there is a 90 degree phase difference between them.
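For instance, the normal-mode frequencies given by the relation above can be tabulated directly; the tube length and speed of sound in the following Python sketch are illustrative values only.
def resonator_frequencies(c, d, modes=3):
    """Normal modes f = N*c/(2*d) of a tube closed at both ends."""
    return [N * c / (2 * d) for N in range(1, modes + 1)]
# Example: a 0.5 m tube with a speed of sound of 343 m/s gives 343, 686, 1029 Hz.
print(resonator_frequencies(343.0, 0.5))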
Reflection.
An acoustic travelling wave can be reflected by a solid surface. If a travelling wave is reflected, the reflected wave can interfere with the incident wave, causing a standing wave in the near field. As a consequence, the local pressure in the near field is doubled and the particle velocity becomes zero.
Attenuation causes the reflected wave to decrease in power as the distance from the reflective material increases. As the power of the reflected wave decreases relative to the power of the incident wave, interference also decreases, and with it the phase difference between sound pressure and particle velocity. At a large enough distance from the reflective material no interference remains; at this distance one speaks of the far field.
The amount of reflection is given by the reflection coefficient which is the ratio of the reflected intensity over the incident intensity
formula_34
Absorption.
Acoustic waves can be absorbed. The amount of absorption is given by the absorption coefficient which is given by
formula_35
where formula_36 is the absorption coefficient and "R" the reflection coefficient.
Often acoustic absorption of materials is given in decibels instead.
Layered media.
When an acoustic wave propagates through a non-homogeneous medium, it will undergo diffraction at the impurities it encounters or at the interfaces between layers of different materials. This is a phenomenon very similar to that of the refraction, absorption and transmission of light in Bragg mirrors. The concept of acoustic wave propagation through periodic media is exploited with great success in acoustic metamaterial engineering.
The acoustic absorption, reflection and transmission in multilayer materials can be calculated with the transfer-matrix method.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " { \\partial^2 p \\over \\partial x ^2 } - {1 \\over c^2} { \\partial^2 p \\over \\partial t ^2 } = 0 "
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": " { \\partial^2 u \\over \\partial x ^2 } - {1 \\over c^2} { \\partial^2 u \\over \\partial t ^2 } = 0 "
},
{
"math_id": 6,
"text": "u"
},
{
"math_id": 7,
"text": " p = R \\cos(\\omega t - kx) + (1-R) \\cos(\\omega t+kx) "
},
{
"math_id": 8,
"text": "\\omega"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "R"
},
{
"math_id": 11,
"text": "R=1"
},
{
"math_id": 12,
"text": "R=0"
},
{
"math_id": 13,
"text": "R=0.5"
},
{
"math_id": 14,
"text": " pV = nRT"
},
{
"math_id": 15,
"text": "V"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "8.314\\,472(15)~\\frac{\\mathrm{J}}{\\mathrm{mol~K}}"
},
{
"math_id": 18,
"text": " { \\partial V \\over V_m } = { -1 \\over \\ \\gamma } {\\partial p \\over p_m } "
},
{
"math_id": 19,
"text": "\\gamma"
},
{
"math_id": 20,
"text": "m"
},
{
"math_id": 21,
"text": "\\eta"
},
{
"math_id": 22,
"text": " { \\partial \\eta \\over V_m } A = { \\partial V \\over V_m } = { -1 \\over \\ \\gamma } {\\partial p \\over p_m } "
},
{
"math_id": 23,
"text": "A"
},
{
"math_id": 24,
"text": " p = p_0 \\cos(\\omega t - kx)"
},
{
"math_id": 25,
"text": " \\eta = \\eta_0 \\sin(\\omega t - kx)"
},
{
"math_id": 26,
"text": "u = \\partial \\eta / \\partial t"
},
{
"math_id": 27,
"text": " u = u_0 \\cos(\\omega t - kx)"
},
{
"math_id": 28,
"text": " { \\partial T \\over T_m } = { \\gamma - 1 \\over \\ \\gamma } {\\partial p \\over p_m } "
},
{
"math_id": 29,
"text": "c = \\sqrt{\\frac{C}{\\rho}}"
},
{
"math_id": 30,
"text": "\\rho"
},
{
"math_id": 31,
"text": "c^2 = \\frac{\\partial p}{\\partial\\rho}"
},
{
"math_id": 32,
"text": "f = \\frac{Nc}{2d}\\qquad\\qquad N \\in \\{1,2,3,\\dots\\}"
},
{
"math_id": 33,
"text": "d"
},
{
"math_id": 34,
"text": "R = \\frac{ I_{\\text{reflected}} }{ I_{\\text{incident}} }"
},
{
"math_id": 35,
"text": "\\alpha = 1 - R^2"
},
{
"math_id": 36,
"text": "\\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=5786179
|
578650
|
Scholz conjecture
|
Conjecture
In mathematics, the Scholz conjecture is a conjecture on the length of certain addition chains.
It is sometimes also called the Scholz–Brauer conjecture or the Brauer–Scholz conjecture, after Arnold Scholz who formulated it in 1937 and Alfred Brauer who studied it soon afterward and proved a weaker bound.
Neill Clift has announced an example showing that the bound of the conjecture is not always tight.
Statement.
The conjecture states that
"l"(2"n" − 1) ≤ "n" − 1 + "l"("n"),
where "l"("n") is the length of the shortest addition chain producing "n".
Here, an addition chain is defined as a sequence of numbers, starting with 1, such that every number after the first can be expressed as a sum of two earlier numbers (which are allowed to both be equal). Its length is the number of sums needed to express all its numbers, which is one less than the length of the sequence of numbers (since there is no sum of previous numbers for the first number in the sequence, 1). Computing the length of the shortest addition chain that contains a given number x can be done by dynamic programming for small numbers, but it is not known whether it can be done in polynomial time measured as a function of the length of the binary representation of x. Scholz's conjecture, if true, would provide short addition chains for numbers x of a special form, the Mersenne numbers.
Example.
As an example, "l"(5) = 3: it has a shortest addition chain
1, 2, 4, 5
of length three, determined by the three sums
1 + 1 = 2,
2 + 2 = 4,
4 + 1 = 5.
Also, "l"(31) = 7: it has a shortest addition chain
1, 2, 3, 6, 12, 24, 30, 31
of length seven, determined by the seven sums
1 + 1 = 2,
2 + 1 = 3,
3 + 3 = 6,
6 + 6 = 12,
12 + 12 = 24,
24 + 6 = 30,
30 + 1 = 31.
Both "l"(31) and 5 − 1 + "l"(5) equal 7.
Therefore, these values obey the inequality (which in this case is an equality) and the Scholz conjecture is true for the case "n" = 5.
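The values above can be reproduced by exhaustive search. The following Python sketch finds "l"("n") by iterative deepening over strictly increasing addition chains, which is feasible only for small "n"; the function name is purely illustrative.
def shortest_chain_length(n):
    """Length l(n) of a shortest addition chain for n (exhaustive search, small n only)."""
    if n == 1:
        return 0
    def extend(chain, steps_left):
        last = chain[-1]
        if last == n:
            return True
        # Even doubling the largest element every remaining step cannot reach n.
        if steps_left == 0 or last << steps_left < n:
            return False
        tried = set()
        for i in range(len(chain) - 1, -1, -1):
            for j in range(i, -1, -1):
                s = chain[i] + chain[j]
                # A shortest chain can always be taken strictly increasing and bounded by n.
                if s <= last or s > n or s in tried:
                    continue
                tried.add(s)
                if extend(chain + [s], steps_left - 1):
                    return True
        return False
    length = 1
    while not extend([1], length):
        length += 1
    return length
l5 = shortest_chain_length(5)            # 3
l31 = shortest_chain_length(2 ** 5 - 1)  # 7
print(l5, l31, l31 <= 5 - 1 + l5)        # 3 7 True, as in the example above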
Partial results.
By using a combination of computer search techniques and mathematical characterizations of optimal addition chains, Clift showed that the conjecture is true for all "n" < 5784689. Additionally, he verified that for all "n" ≤ 64, the inequality of the conjecture is actually an equality.
The bound of the conjecture is not always an exact equality. For instance, for formula_0, formula_1, with formula_2.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n=9307543"
},
{
"math_id": 1,
"text": "l(2^n-1)\\le 9307570< 9307571=n-1+l(n)"
},
{
"math_id": 2,
"text": "l(n)=29"
}
] |
https://en.wikipedia.org/wiki?curid=578650
|
578656
|
Addition chain
|
In mathematics, an addition chain for computing a positive integer n can be given by a sequence of natural numbers starting with 1 and ending with n, such that each number in the sequence is the sum of two previous numbers. The "length" of an addition chain is the number of sums needed to express all its numbers, which is one less than the cardinality of the sequence of numbers.
Examples.
As an example: (1,2,3,6,12,24,30,31) is an addition chain for 31 of length 7, since
2 = 1 + 1
3 = 2 + 1
6 = 3 + 3
12 = 6 + 6
24 = 12 + 12
30 = 24 + 6
31 = 30 + 1
Addition chains can be used for addition-chain exponentiation. This method allows exponentiation with integer exponents to be performed using a number of multiplications equal to the length of an addition chain for the exponent. For instance, the addition chain for 31 leads to a method for computing the 31st power of any number n using only seven multiplications, instead of the 30 multiplications that one would get from repeated multiplication, and eight multiplications with exponentiation by squaring:
n2 = n × n
n3 = n2 × n
n6 = n3 × n3
n12 = n6 × n6
n24 = n12 × n12
n30 = n24 × n6
n31 = n30 × n
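The multiplication count can be checked mechanically. The following Python sketch (the helper name is illustrative) evaluates a power by walking an addition chain, performing one multiplication per chain step:
def chain_pow(x, chain):
    """Compute x**chain[-1] using the addition chain; one multiplication per step."""
    powers = {1: x}
    multiplications = 0
    for exponent in chain[1:]:
        # Each chain element is a sum of two earlier elements a + b = exponent.
        for a in list(powers):
            b = exponent - a
            if b in powers:
                powers[exponent] = powers[a] * powers[b]
                multiplications += 1
                break
    return powers[chain[-1]], multiplications
print(chain_pow(2, [1, 2, 3, 6, 12, 24, 30, 31]))   # (2147483648, 7): 2**31 in seven multiplications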
Methods for computing addition chains.
Calculating an addition chain of minimal length is not easy; a generalized version of the problem, in which one must find a chain that simultaneously forms each of a sequence of values, is NP-complete. There is no known algorithm which can calculate a minimal addition chain for a given number with any guarantees of reasonable timing or small memory usage. However, several techniques are known to calculate relatively short chains that are not always optimal.
One very well known technique to calculate relatively short addition chains is the "binary method", similar to exponentiation by squaring. In this method, an addition chain for the number formula_0 is obtained recursively, from an addition chain for formula_1. If formula_0 is even, it can be obtained in a single additional sum, as formula_2. If formula_0 is odd, this method uses two sums to obtain it, by computing formula_3 and then adding one.
The "factor method" for finding addition chains is based on the prime factorization of the number formula_0 to be represented. If formula_0 has a number formula_4 as one of its prime factors, then an addition chain for formula_0 can be obtained by starting with a chain for formula_5, and then concatenating onto it a chain for formula_4, modified by multiplying each of its numbers by formula_5. The ideas of the factor method and binary method can be combined into "Brauer's m-ary method" by choosing any number formula_6 (regardless of whether it divides formula_0), recursively constructing a chain for formula_7, concatenating a chain for formula_6 (modified in the same way as above) to obtain formula_8, and then adding the remainder. Additional refinements of these ideas lead to a family of methods called "sliding window methods".
Chain length.
Let formula_9 denote the smallest formula_10 so that there exists an addition chain
of length formula_10 which computes formula_0.
It is known that
formula_11,
where formula_12 is the Hamming weight (the number of ones) of the binary expansion of formula_0.
One can obtain an addition chain for formula_13 from an addition chain for formula_0 by including one additional sum formula_14, from which follows the inequality formula_15 on the lengths of the chains for formula_0 and formula_13. However, this is not always an equality,
as in some cases formula_13 may have a shorter chain than the one obtained in this way. For instance, formula_16, observed by Knuth. It is even possible for formula_13 to have a shorter chain than formula_0, so that formula_17; the smallest formula_0 for which this happens is formula_18, which is followed by formula_19, formula_20, and so on (sequence in the OEIS).
Brauer chain.
A Brauer chain or star addition chain is an addition chain in which each of the sums used to calculate its numbers uses the immediately previous number. A Brauer number is a number for which a Brauer chain is optimal.
Brauer proved that
"l"*(2"n"−1) ≤ "n" − 1 + "l"*("n")
where "l"*("n") is the length of the shortest star chain. For many values of "n", and in particular for "n" < 12509, they are equal: "l"("n") = "l"*("n"). But Hansen showed that there are some values of "n" for which "l"("n") ≠ "l"*("n"), such as "n" = 2^6106 + 2^3048 + 2^2032 + 2^2016 + 1, which has "l"*("n") = 6110 and "l"("n") ≤ 6109. The smallest such "n" is 12509.
Scholz conjecture.
The Scholz conjecture (sometimes called the "Scholz–Brauer" or "Brauer–Scholz conjecture"), named after Arnold Scholz and Alfred T. Brauer, is a conjecture from 1937 stating that
formula_21
This inequality is known to hold for all Hansen numbers, a generalization of Brauer numbers; Neill Clift checked by computer that all formula_22 are Hansen (while 5784689 is not). Clift further verified that in fact formula_23 for all formula_24.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n'=\\lfloor n/2\\rfloor"
},
{
"math_id": 2,
"text": "n=n'+n'"
},
{
"math_id": 3,
"text": "n-1=n'+n'"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "n/p"
},
{
"math_id": 6,
"text": "m"
},
{
"math_id": 7,
"text": "\\lfloor n/m\\rfloor"
},
{
"math_id": 8,
"text": "m\\lfloor n/m\\rfloor"
},
{
"math_id": 9,
"text": "l(n)"
},
{
"math_id": 10,
"text": "s"
},
{
"math_id": 11,
"text": "\\log_2(n)+ \\log_2(\\nu(n))-2.13\\leq l(n) \\leq \\log_2(n) + \\log_2(n)(1+o(1))/\\log_2(\\log_2(n))"
},
{
"math_id": 12,
"text": "\\nu(n)"
},
{
"math_id": 13,
"text": "2n"
},
{
"math_id": 14,
"text": "2n=n+n"
},
{
"math_id": 15,
"text": "l(2n)\\le l(n)+1"
},
{
"math_id": 16,
"text": "l(382)=l(191)=11"
},
{
"math_id": 17,
"text": "l(2n)< l(n)"
},
{
"math_id": 18,
"text": "n=375494703"
},
{
"math_id": 19,
"text": "602641031"
},
{
"math_id": 20,
"text": "619418303"
},
{
"math_id": 21,
"text": " l(2^n-1) \\le n - 1 + l(n). "
},
{
"math_id": 22,
"text": " n \\le 5784688 "
},
{
"math_id": 23,
"text": " l(2^n-1) = n - 1 + l(n)"
},
{
"math_id": 24,
"text": "n \\le 64"
}
] |
https://en.wikipedia.org/wiki?curid=578656
|
5786821
|
Rothe–Hagen identity
|
Generalization of Vandermonde's identity
In mathematics, the Rothe–Hagen identity is a mathematical identity valid for all complex numbers (formula_0) except where its denominators vanish:
formula_1
It is a generalization of Vandermonde's identity, and is named after Heinrich August Rothe and Johann Georg Hagen.
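For nonnegative integer arguments the identity can be checked directly with exact rational arithmetic. The following Python sketch verifies it for a few small integer values of "x", "y", "z", chosen so that no denominator vanishes.
from fractions import Fraction
from math import comb
def lhs(x, y, z, n):
    total = Fraction(0)
    for k in range(n + 1):
        total += (Fraction(x, x + k * z) * comb(x + k * z, k)
                  * Fraction(y, y + (n - k) * z) * comb(y + (n - k) * z, n - k))
    return total
def rhs(x, y, z, n):
    return Fraction(x + y, x + y + n * z) * comb(x + y + n * z, n)
for (x, y, z, n) in [(2, 3, 1, 4), (1, 5, 2, 3), (4, 4, 3, 5)]:
    assert lhs(x, y, z, n) == rhs(x, y, z, n)
print("Rothe-Hagen identity verified for the sample parameters")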
|
[
{
"math_id": 0,
"text": "x, y, z"
},
{
"math_id": 1,
"text": "\\sum_{k=0}^n\\frac{x}{x+kz}{x+kz \\choose k}\\frac{y}{y+(n-k)z}{y+(n-k)z \\choose n-k}=\\frac{x+y}{x+y+nz}{x+y+nz \\choose n}."
}
] |
https://en.wikipedia.org/wiki?curid=5786821
|
5787012
|
Lerche–Newberger sum rule
|
Finds the sum of certain infinite series involving Bessel functions of the first kind
The Lerche–Newberger, or Newberger, sum rule, discovered by B. S. Newberger in 1982, finds the sum of certain infinite series involving Bessel functions "J""α" of the first kind.
It states that if "μ" is any non-integer complex number, formula_0, and Re("α" + "β") > −1, then
formula_1
Newberger's formula generalizes a formula of this type proven by Lerche in 1966; Newberger discovered it independently. Lerche's formula has γ =1; both extend a standard rule for the summation of Bessel functions, and are useful in plasma physics.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\scriptstyle\\gamma \\in (0,1]"
},
{
"math_id": 1,
"text": "\\sum_{n=- \\infin}^\\infin\\frac{(-1)^n J_{\\alpha - \\gamma n}(z)J_{\\beta + \\gamma n}(z)}{n+\\mu}=\\frac{\\pi}{\\sin \\mu \\pi}J_{\\alpha + \\gamma \\mu}(z)J_{\\beta - \\gamma \\mu}(z)."
}
] |
https://en.wikipedia.org/wiki?curid=5787012
|
578753
|
Pollard's p − 1 algorithm
|
Special-purpose algorithm for factoring integers
Pollard's "p" − 1 algorithm is a number theoretic integer factorization algorithm, invented by John Pollard in 1974. It is a special-purpose algorithm, meaning that it is only suitable for integers with specific types of factors; it is the simplest example of an algebraic-group factorisation algorithm.
The factors it finds are ones for which the number preceding the factor, "p" − 1, is powersmooth; the essential observation is that, by working in the multiplicative group modulo a composite number "N", we are also working in the multiplicative groups modulo all of "N"'s factors.
The existence of this algorithm leads to the concept of safe primes, being primes for which "p" − 1 is two times a Sophie Germain prime "q" and thus minimally smooth. These primes are sometimes construed as "safe for cryptographic purposes", but they might be "unsafe" — in current recommendations for cryptographic strong primes ("e.g." ANSI X9.31), it is necessary but not sufficient that "p" − 1 has at least one large prime factor. Most sufficiently large primes are strong; if a prime used for cryptographic purposes turns out to be non-strong, it is much more likely to be through malice than through an accident of random number generation. This terminology is considered obsolete by the cryptography industry: the ECM factorization method is more efficient than Pollard's algorithm and finds safe prime factors just as quickly as it finds non-safe prime factors of similar size, thus the size of "p" is the key security parameter, not the smoothness of "p-1".
Base concepts.
Let "n" be a composite integer with prime factor "p". By Fermat's little theorem, we know that for all integers "a" coprime to "p" and for all positive integers "K":
formula_0
If a number "x" is congruent to 1 modulo a factor of "n", then the gcd("x" − 1, "n") will be divisible by that factor.
The idea is to make the exponent a large multiple of "p" − 1 by making it a number with very many prime factors; generally, we take the product of all prime powers less than some limit "B". Start with a random "x", and repeatedly replace it by formula_1 as "w" runs through those prime powers. Check at each stage, or once at the end if you prefer, whether gcd("x" − 1, "n") is not equal to 1.
Multiple factors.
It is possible that for all the prime factors "p" of "n", "p" − 1 is divisible by small primes, at which point the Pollard "p" − 1 algorithm simply returns "n".
Algorithm and running time.
The basic algorithm can be written as follows:
Inputs: "n": a composite number
Output: a nontrivial factor of "n" or failure
# select a smoothness bound "B"
# define formula_2 (note: explicitly evaluating "M" may not be necessary)
# randomly pick a positive integer, "a", which is coprime to "n" (note: we can actually fix "a", e.g. if "n" is odd, then we can always select "a" = 2, random selection here is not imperative)
# compute "g"
gcd("a""M" − 1, "n") (note: exponentiation can be done modulo "n")
# if 1 < "g" < "n" then return "g"
# if "g"
1 then select a larger "B" and go to step 2 or return failure
# if "g"
"n" then select a smaller "B" and go to step 2 or return failure
If "g"
1 in step 6, this indicates there are no prime factors "p" for which "p-1" is "B"-powersmooth. If "g"
"n" in step 7, this usually indicates that all factors were "B"-powersmooth, but in rare cases it could indicate that "a" had a small order modulo "n". Additionally, when the maximum prime factors of "p-1" for each prime factors "p" of "n" are all the same in some rare cases, this algorithm will fail.
The running time of this algorithm is O("B" × log "B" × log2 "n"); larger values of "B" make it run slower, but are more likely to produce a factor.
Example.
If we want to factor the number "n" = 299.
# We select "B" = 5.
# Thus "M" = 22 × 31 × 51.
# We select "a" = 2.
# "g" = gcd("a""M" − 1, "n") = 13.
# Since 1 < 13 < 299, thus return 13.
# 299 / 13 = 23 is prime, thus it is fully factored: 299 = 13 × 23.
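A direct Python translation of the basic algorithm reproduces this worked example; the helper names are illustrative and the prime generation is deliberately naive.
from math import gcd
def small_primes(limit):
    """All primes up to limit, by trial division (adequate for small bounds B)."""
    primes = []
    for m in range(2, limit + 1):
        if all(m % p for p in primes):
            primes.append(m)
    return primes
def pollard_p_minus_1(n, B, a=2):
    """One-stage Pollard p-1 with smoothness bound B.
    Returns a nontrivial factor of n, or None when g = 1 or g = n
    (in which case B should be enlarged or reduced, respectively)."""
    for q in small_primes(B):
        e = 1
        while q ** (e + 1) <= B:   # q^e is the largest power of q not exceeding B
            e += 1
        a = pow(a, q ** e, n)      # exponentiation is done modulo n
    g = gcd(a - 1, n)
    return g if 1 < g < n else None
print(pollard_p_minus_1(299, 5))   # 13, matching the example: gcd(2^60 - 1, 299) = 13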
Methods of choosing "B".
Since the algorithm is incremental, it is able to keep running with the bound constantly increasing.
Assume that "p" − 1, where "p" is the smallest prime factor of "n", can be modelled as a random number of size less than √"n". By Dixon's theorem, the probability that the largest factor of such a number is less than ("p" − 1)"1/ε" is roughly "ε"−"ε"; so there is a probability of about 3−3 = 1/27 that a "B" value of "n"1/6 will yield a factorisation.
In practice, the elliptic curve method is faster than the Pollard "p" − 1 method once the factors are at all large; running the "p" − 1 method up to "B" = 2^32 will find a quarter of all 64-bit factors and 1/27 of all 96-bit factors.
Two-stage variant.
A variant of the basic algorithm is sometimes used; instead of requiring that "p" − 1 has all its factors less than "B", we require it to have all but one of its factors less than some "B"1, and the remaining factor less than some "B"2 ≫ "B"1. After completing the first stage, which is the same as the basic algorithm, instead of computing a new
formula_3
for "B"2 and checking gcd("a""M"' − 1, "n"), we compute
formula_4
where "H"
"a""M" and check if gcd("Q", "n") produces a nontrivial factor of "n". As before, exponentiations can be done modulo "n".
Let {"q"1, "q"2, …} be successive prime numbers in the interval ("B"1, "B"2] and "d""n" = "q""n" − "q""n"−1 the difference between consecutive prime numbers. Since typically "B"1 > 2, "d""n" are even numbers. The distribution of prime numbers is such that the "d""n" will all be relatively small. It is suggested that "d""n" ≤ ln2 "B"2. Hence, the values of "H"2, "H"4, "H"6, … (mod "n") can be stored in a table, and "H""q""n" be computed from "H""q""n"−1⋅"H""d""n", saving the need for exponentiations.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a^{K(p-1)} \\equiv 1\\pmod{p}"
},
{
"math_id": 1,
"text": "x^w \\bmod n"
},
{
"math_id": 2,
"text": "M = \\prod_{\\text{primes}~q \\le B} q^{ \\lfloor \\log_q{B} \\rfloor }"
},
{
"math_id": 3,
"text": "M' = \\prod_{\\text{primes }q \\le B_2} q^{ \\lfloor \\log_q B_2 \\rfloor }\n"
},
{
"math_id": 4,
"text": "Q = \\prod_{\\text{primes } q \\in (B_1, B_2]} (H^q - 1)"
}
] |
https://en.wikipedia.org/wiki?curid=578753
|
57888813
|
Zvi Bern
|
American theoretical particle physicist
Zvi Bern (born 17 September 1960) is an American theoretical particle physicist. He is a professor at University of California, Los Angeles (UCLA).
Bern studied physics and mathematics at the Massachusetts Institute of Technology and earned his doctorate in 1986 in theoretical physics from the University of California, Berkeley under the supervision of Martin Halpern. Bern's dissertation manuscript can currently be found in Lawrence Berkeley Laboratory's archives, examining "possible nonperturbative continuum regularization schemes for quantum field theory which are based upon the Langevin equation of Parisi and Wu."
Bern developed new methods for the computation of Feynman diagrams that were originally introduced in quantum electrodynamics for the perturbative computation of scattering amplitudes. In more complicated quantum field theories such as Yang–Mills theory or quantum field theories with gravity, the computer calculation of the perturbative evolution using Feynman diagrams quickly reached its limits due to the exponential growth in diagrams. The new theoretical developments of the 1990s and 2000s came in time for a renewed interest in extensive calculations in the context of the experiments at the Large Hadron Collider. Bern and colleagues developed twistor-space methods applied to gauge-theory amplitudes. Bern and colleagues developed the method of "generalized unitarity as a means for obtaining loop amplitudes from on-shell tree amplitudes". The method of generalized unitarity provided new insights into the perturbative treatment of N = 8 supergravity and showed that there is a smaller degree of divergence than expected; higher-loop evidence suggested that "N = 8 supergravity has the same degree of divergence as N = 4 super-Yang–Mills theory and is ultraviolet finite in four dimensions". Prior to this, it had been generally assumed that quantum gravitation from three loops resulted in uncontrollable divergences. In 2010, with his students Carrasco and Johansson, Bern found that diagrams for supersymmetric gravitational theories are equivalent to those of two copies of supersymmetric Yang–Mills theories (theories with gluons), which is known as double copy theory. They used a previously found duality between kinematics and color degrees of freedom. At three loops, only about 10 terms had to be evaluated instead of roughly formula_0; at four loops, about 100 terms instead of formula_1; and at five loops, about 1000 terms instead of formula_2. Furthermore, there were no uncontrollable divergences at three and four loops; such divergences had been predicted by the majority of experts in the 1980s and constituted one of the reasons for favoring string theory.
Bern was elected in 2004 a fellow of the American Physical Society. In 2014, he received the Sakurai Prize with David A. Kosower and Lance J. Dixon for "pathbreaking contributions to the calculation of perturbative scattering amplitudes, which led to a deeper understanding of quantum field theory and to powerful new tools for computing QCD processes." In 2023, Bern and his collaborators David A. Kosower and Lance J. Dixon were awarded the Galileo Galilei Medal from Italy's Istituto Nazionale di Fisica Nucleare.
Bern's Erdős number is three.
Currently, Bern is the director of the Mani Lal Bhaumik Institute for Theoretical Physics at UCLA, which aims to "provide an exceptional environment for excellence in theoretical physics research".
He was elected a Member of the National Academy of Sciences in 2024.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "10^{20}"
},
{
"math_id": 1,
"text": "10^{26}"
},
{
"math_id": 2,
"text": "10^{31}"
}
] |
https://en.wikipedia.org/wiki?curid=57888813
|
57900656
|
Measurable group
|
In mathematics, a measurable group is a special type of group in the intersection between group theory and measure theory. Measurable groups are used to study measures in an abstract setting and are often closely related to topological groups.
Definition.
Let formula_0 a group with group law
formula_1.
Let further formula_2 be a σ-algebra of subsets of the set formula_3.
The group, or more formally the triple formula_4, is called a measurable group if the inversion formula_5 is measurable from formula_2 to formula_2 and the group law formula_6 is measurable from formula_7 to formula_2.
Here, formula_8 denotes the formation of the product σ-algebra of the σ-algebras formula_9 and formula_10.
Topological groups as measurable groups.
Every second-countable topological group formula_11 can be taken as a measurable group. This is done by equipping the group with the Borel σ-algebra
formula_12,
which is the σ-algebra generated by the topology. Since by definition of a topological group, the group law and the formation of the inverse element is continuous, both operations are in this case also measurable from formula_13 to formula_13 and from formula_14 to formula_13, respectively. Second countability ensures that formula_15, and therefore the group formula_3 is also a measurable group.
Related concepts.
Measurable groups can be seen as measurable acting groups that act on themselves.
|
[
{
"math_id": 0,
"text": " (G, \\circ) "
},
{
"math_id": 1,
"text": " \\circ : G \\times G \\to G "
},
{
"math_id": 2,
"text": " \\mathcal G "
},
{
"math_id": 3,
"text": " G "
},
{
"math_id": 4,
"text": "(G,\\circ,\\mathcal G)"
},
{
"math_id": 5,
"text": " g \\mapsto g^{-1} "
},
{
"math_id": 6,
"text": " (g_1, g_2) \\mapsto g_1 \\circ g_2 "
},
{
"math_id": 7,
"text": " \\mathcal G \\otimes \\mathcal G "
},
{
"math_id": 8,
"text": " \\mathcal A \\otimes \\mathcal B "
},
{
"math_id": 9,
"text": " \\mathcal A "
},
{
"math_id": 10,
"text": " \\mathcal B "
},
{
"math_id": 11,
"text": " (G, \\mathcal O) "
},
{
"math_id": 12,
"text": " \\mathcal B(G)= \\sigma(\\mathcal O) "
},
{
"math_id": 13,
"text": " \\mathcal B(G) "
},
{
"math_id": 14,
"text": " \\mathcal B(G\\times G) "
},
{
"math_id": 15,
"text": " \\mathcal B(G)\\otimes \\mathcal B(G) = \\mathcal B(G\\times G) "
}
] |
https://en.wikipedia.org/wiki?curid=57900656
|
57901740
|
Rhenium(IV) chloride
|
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Rhenium(IV) chloride is the inorganic compound with the formula ReCl4. This black solid is of interest as a binary phase but otherwise is of little practical value. A second polymorph of ReCl4 is also known.
Preparation.
ReCl4 can be prepared by comproportionation of rhenium(V) chloride and rhenium(III) chloride. It can also be produced by reduction of rhenium(V) chloride with antimony trichloride.
formula_0
Tetrachloroethylene at 120 °C is also effective as a reductant:
formula_1
Structure.
X-ray crystallography reveals a polymeric structure. The Re–Re bonding distance is 2.728 Å. Re centers are octahedral, being surrounded by six chloride ligands. Pairs of octahedra share faces. The Re2Cl9 subunits are linked by bridging chloride ligands. The structural motif - corner-shared bioctahedra - is unusual in the binary metal halides.
|
[
{
"math_id": 0,
"text": "\\mathrm{2 \\ ReCl_5 + SbCl_3 \\longrightarrow 2 \\ ReCl_4 + SbCl_5}"
},
{
"math_id": 1,
"text": "\\mathrm{2 \\ ReCl_5 + C_2Cl_4 \\longrightarrow 2 \\ ReCl_4 + C_2Cl_6}"
}
] |
https://en.wikipedia.org/wiki?curid=57901740
|
579026
|
Gravitational potential
|
Fundamental study of potential theory
In classical mechanics, the gravitational potential is a scalar field associating with each point in space the work (energy transferred) per unit mass that would be needed to move an object to that point from a fixed reference point. It is analogous to the electric potential with mass playing the role of charge. The reference point, where the potential is zero, is by convention infinitely far away from any mass, resulting in a negative potential at any finite distance.
In mathematics, the gravitational potential is also known as the Newtonian potential and is fundamental in the study of potential theory. It may also be used for solving the electrostatic and magnetostatic fields generated by uniformly charged or polarized ellipsoidal bodies.
Potential energy.
The gravitational potential ("V") at a location is the gravitational potential energy ("U") at that location per unit mass:
formula_0
where "m" is the mass of the object. Potential energy is equal (in magnitude, but negative) to the work done by the gravitational field moving a body to its given position in space from infinity. If the body has a mass of 1 kilogram, then the potential energy to be assigned to that body is equal to the gravitational potential. So the potential can be interpreted as the negative of the work done by the gravitational field moving a unit mass in from infinity.
In some situations, the equations can be simplified by assuming a field that is nearly independent of position. For instance, in a region close to the surface of the Earth, the gravitational acceleration, "g", can be considered constant. In that case, the difference in potential energy from one height to another is, to a good approximation, linearly related to the difference in height:
formula_1
Mathematical form.
The gravitational potential "V" at a distance "x" from a point mass of mass "M" can be defined as the work "W" that needs to be done by an external agent to bring a unit mass in from infinity to that point:
formula_2
where "G" is the gravitational constant, and F is the gravitational force. The product "GM" is the standard gravitational parameter and is often known to higher precision than "G" or "M" separately. The potential has units of energy per mass, e.g., J/kg in the MKS system. By convention, it is always negative where it is defined, and as "x" tends to infinity, it approaches zero.
The gravitational field, and thus the acceleration of a small body in the space around the massive object, is the negative gradient of the gravitational potential. Because the potential increases (toward zero) with distance from the mass, its gradient points away from the mass, so the field, being the negative gradient, points toward the massive object and the acceleration is attractive. Because the potential has no angular components, its gradient is
formula_3
where x is a vector of length "x" pointing from the point mass toward the small body and formula_4 is a unit vector pointing from the point mass toward the small body. The magnitude of the acceleration therefore follows an inverse square law:
formula_5
The potential associated with a mass distribution is the superposition of the potentials of point masses. If the mass distribution is a finite collection of point masses, and if the point masses are located at the points x1, ..., x"n" and have masses "m"1, ..., "m""n", then the potential of the distribution at the point x is
formula_6
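As a small numerical illustration, the following Python sketch evaluates this superposition for a finite set of point masses; the masses and positions used are arbitrary example values.
import math
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
def potential(x, masses, positions):
    """Gravitational potential at point x due to point masses m_i at positions x_i."""
    total = 0.0
    for m, p in zip(masses, positions):
        r = math.dist(x, p)
        total += -G * m / r
    return total
# Two example point masses of 1e12 kg on the x-axis, field point 2 km away on the y-axis.
print(potential((0.0, 2000.0, 0.0),
                [1e12, 1e12],
                [(1000.0, 0.0, 0.0), (-1000.0, 0.0, 0.0)]))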
If the mass distribution is given as a mass measure "dm" on three-dimensional Euclidean space R3, then the potential is the convolution of −"G"/|r| with "dm". In good cases this equals the integral
formula_7
where |x − r| is the distance between the points x and r. If there is a function "ρ"(r) representing the density of the distribution at r, so that "dm"(r) = "ρ"(r) "dv"(r), where "dv"(r) is the Euclidean volume element, then the gravitational potential is the volume integral
formula_8
If "V" is a potential function coming from a continuous mass distribution "ρ"(r), then "ρ" can be recovered using the Laplace operator, Δ:
formula_9
This holds pointwise whenever "ρ" is continuous and is zero outside of a bounded set. In general, the mass measure "dm" can be recovered in the same way if the Laplace operator is taken in the sense of distributions. As a consequence, the gravitational potential satisfies Poisson's equation. See also Green's function for the three-variable Laplace equation and Newtonian potential.
The integral may be expressed in terms of known transcendental functions for all ellipsoidal shapes, including the symmetrical and degenerate ones. These include the sphere, where the three semi-axes are equal; the oblate (see reference ellipsoid) and prolate spheroids, where two semi-axes are equal; the degenerate ones where one semi-axis is infinite (the elliptical and circular cylinder); and the unbounded sheet where two semi-axes are infinite. All these shapes are widely used in the applications of the gravitational potential integral (apart from the constant "G", with 𝜌 being a constant charge density) to electromagnetism.
Spherical symmetry.
A spherically symmetric mass distribution behaves to an observer completely outside the distribution as though all of the mass was concentrated at the center, and thus effectively as a point mass, by the shell theorem. On the surface of the earth, the acceleration is given by so-called standard gravity "g", approximately 9.8 m/s2, although this value varies slightly with latitude and altitude. The magnitude of the acceleration is a little larger at the poles than at the equator because Earth is an oblate spheroid.
Within a spherically symmetric mass distribution, it is possible to solve Poisson's equation in spherical coordinates. Within a uniform spherical body of radius "R", density ρ, and mass "m", the gravitational force "g" inside the sphere varies linearly with distance "r" from the center, giving the gravitational potential inside the sphere, which is
formula_10
which differentiably connects to the potential function for the outside of the sphere (see the figure at the top).
General relativity.
In general relativity, the gravitational potential is replaced by the metric tensor. When the gravitational field is weak and the sources are moving very slowly compared to light-speed, general relativity reduces to Newtonian gravity, and the metric tensor can be expanded in terms of the gravitational potential.
Multipole expansion.
The potential at a point x is given by
formula_11
The potential can be expanded in a series of Legendre polynomials. Represent the points x and r as position vectors relative to the center of mass. The denominator in the integral is expressed as the square root of the square to give
formula_12
where, in the last integral, "r" = |r| and θ is the angle between x and r.
(See "mathematical form".) The integrand can be expanded as a Taylor series in "Z" = "r"/|x|, by explicit calculation of the coefficients. A less laborious way of achieving the same result is by using the generalized binomial theorem. The resulting series is the generating function for the Legendre polynomials:
formula_13
valid for |"X"| ≤ 1 and |"Z"| < 1. The coefficients "P""n" are the Legendre polynomials of degree "n". Therefore, the Taylor coefficients of the integrand are given by the Legendre polynomials in "X" = cos "θ". So the potential can be expanded in a series that is convergent for positions x such that "r" < |x| for all mass elements of the system (i.e., outside a sphere, centered at the center of mass, that encloses the system):
formula_14
The integral formula_15 is the component of the center of mass in the x direction; this vanishes because the vector x emanates from the center of mass. So, bringing the integral under the sign of the summation gives
formula_16
This shows that elongation of the body causes a lower potential in the direction of elongation, and a higher potential in perpendicular directions, compared to the potential due to a spherical mass, if we compare cases with the same distance to the center of mass. (If we compare cases with the same distance to the "surface", the opposite is true.)
Numerical values.
The absolute value of the gravitational potential at a number of locations, with respect to the gravitation from the Earth, the Sun, and the Milky Way, illustrates the energies involved: an object at Earth's surface would need 60 MJ/kg to "leave" Earth's gravity field, another 900 MJ/kg to also leave the Sun's gravity field and more than 130 GJ/kg to leave the gravity field of the Milky Way. The potential is half the square of the escape velocity.
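The figure of roughly 60 MJ/kg for leaving Earth's gravity, and its relation to the escape velocity, can be checked with standard approximate values for Earth's mass and radius:
import math
G = 6.674e-11       # m^3 kg^-1 s^-2
M_earth = 5.972e24  # kg (approximate)
R_earth = 6.371e6   # m (mean radius, approximate)
potential_surface = -G * M_earth / R_earth
print(potential_surface / 1e6)                 # about -62.6 MJ/kg
print(math.sqrt(2 * abs(potential_surface)))   # escape velocity, about 11.2 km/s (printed in m/s)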
Compare the gravity at these locations.
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "V = \\frac{U}{m},"
},
{
"math_id": 1,
"text": "\\Delta U \\approx mg \\Delta h."
},
{
"math_id": 2,
"text": "V(\\mathbf{x}) = \\frac{W}{m} = \\frac{1}{m} \\int_{\\infty}^{x} \\mathbf{F} \\cdot d\\mathbf{x} = \\frac{1}{m} \\int_{\\infty}^{x} \\frac{G m M}{x^2} dx = -\\frac{G M}{x},"
},
{
"math_id": 3,
"text": "\\mathbf{a} = -\\frac{GM}{x^3} \\mathbf{x} = -\\frac{GM}{x^2} \\hat{\\mathbf{x}},"
},
{
"math_id": 4,
"text": "\\hat{\\mathbf{x}}"
},
{
"math_id": 5,
"text": "\\|\\mathbf{a}\\| = \\frac{GM}{x^2}."
},
{
"math_id": 6,
"text": "V(\\mathbf{x}) = \\sum_{i=1}^n -\\frac{Gm_i}{\\|\\mathbf{x} - \\mathbf{x}_i\\|}."
},
{
"math_id": 7,
"text": "V(\\mathbf{x}) = -\\int_{\\R^3} \\frac{G}{\\|\\mathbf{x} - \\mathbf{r}\\|}\\,dm(\\mathbf{r}),"
},
{
"math_id": 8,
"text": "V(\\mathbf{x}) = -\\int_{\\R^3} \\frac{G}{\\|\\mathbf{x}-\\mathbf{r}\\|}\\,\\rho(\\mathbf{r})dv(\\mathbf{r})."
},
{
"math_id": 9,
"text": "\\rho(\\mathbf{x}) = \\frac{1}{4\\pi G}\\Delta V(\\mathbf{x})."
},
{
"math_id": 10,
"text": "V(r) = \\frac {2}{3} \\pi G \\rho \\left[r^2 - 3 R^2\\right] = \\frac{Gm}{2R^3} \\left[r^2 -3 R^2\\right], \\qquad r \\leq R,"
},
{
"math_id": 11,
"text": "V(\\mathbf{x}) = - \\int_{\\R^3} \\frac{G}{|\\mathbf{x}-\\mathbf{r}|}\\ dm(\\mathbf{r})."
},
{
"math_id": 12,
"text": "\\begin{align}\nV(\\mathbf{x}) &= - \\int_{\\R^3} \\frac{G}{ \\sqrt{|\\mathbf{x}|^2 -2 \\mathbf{x} \\cdot \\mathbf{r} + |\\mathbf{r}|^2}}\\,dm(\\mathbf{r})\\\\\n&=- \\frac{1}{|\\mathbf{x}|}\\int_{\\R^3} \\frac{G} \\sqrt{1 -2 \\frac{r}{|\\mathbf{x}|} \\cos \\theta + \\left( \\frac{r}{|\\mathbf{x}|} \\right)^2}\\,dm(\\mathbf{r})\n\\end{align}"
},
{
"math_id": 13,
"text": "\\left(1- 2 X Z + Z^2 \\right) ^{- \\frac{1}{2}} \\ = \\sum_{n=0}^\\infty Z^n P_n(X)"
},
{
"math_id": 14,
"text": " \\begin{align}\nV(\\mathbf{x}) &= - \\frac{G}{|\\mathbf{x}|} \\int \\sum_{n=0}^\\infty \\left(\\frac{r}{|\\mathbf{x}|} \\right)^n P_n(\\cos \\theta) \\, dm(\\mathbf{r})\\\\\n&= - \\frac{G}{|\\mathbf{x}|} \\int \\left(1 + \\left(\\frac{r}{|\\mathbf{x}|}\\right) \\cos \\theta + \\left(\\frac{r}{|\\mathbf{x}|}\\right)^2\\frac {3 \\cos^2 \\theta - 1}{2} + \\cdots\\right)\\,dm(\\mathbf{r})\n\\end{align}"
},
{
"math_id": 15,
"text": "\\int r \\cos(\\theta) \\, dm"
},
{
"math_id": 16,
"text": " V(\\mathbf{x}) = - \\frac{GM}{|\\mathbf{x}|} - \\frac{G}{|\\mathbf{x}|} \\int \\left(\\frac{r}{|\\mathbf{x}|}\\right)^2 \\frac {3 \\cos^2 \\theta - 1}{2} dm(\\mathbf{r}) + \\cdots"
}
] |
https://en.wikipedia.org/wiki?curid=579026
|
579041
|
Magnetomotive force
|
Concept in physics
In physics, the magnetomotive force (abbreviated mmf or MMF, symbol formula_0) is a quantity appearing in the equation for the magnetic flux in a magnetic circuit, Hopkinson's law. It is the property of certain substances or phenomena that give rise to magnetic fields:
formula_1
where Φ is the magnetic flux and formula_2 is the reluctance of the circuit. It can be seen that the magnetomotive force plays a role in this equation analogous to the voltage V in Ohm's law, "V" = "IR", since it is the cause of magnetic flux in a magnetic circuit. The magnetomotive force of a coil of "N" turns carrying a current "I" is formula_3, and for a magnetic field of strength "H" acting along a path of length "L" it is formula_5.
Units.
The SI unit of mmf is the ampere, the same as the unit of current (analogously the units of emf and voltage are both the volt). Informally, and frequently, this unit is stated as the ampere-turn to avoid confusion with current. This was the unit name in the MKS system. Occasionally, the cgs system unit of the gilbert may also be encountered.
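As a numerical illustration of these relations, the short Python sketch below computes the mmf of a coil and the resulting flux from Hopkinson's law; the turn count, current and reluctance are assumed example values, not data from this article.
def mmf_of_coil(turns, current):
    """Magnetomotive force F = N * I, in amperes (ampere-turns)."""
    return turns * current
def flux(mmf, reluctance):
    """Hopkinson's law: magnetic flux = F / R, in webers."""
    return mmf / reluctance
F = mmf_of_coil(500, 0.2)    # 100 A (ampere-turns)
print(F, flux(F, 1.0e6))     # 100.0  0.0001 Wb, for an assumed reluctance of 1e6 H^-1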
History.
The term "magnetomotive force" was coined by Henry Augustus Rowland in 1880. Rowland intended this to indicate a direct analogy with electromotive force. The idea of a magnetic analogy to electromotive force can be found much earlier in the work of Michael Faraday (1791–1867) and it is hinted at by James Clerk Maxwell (1831–1879). However, Rowland coined the term and was the first to make explicit an Ohm's law for magnetic circuits in 1873.
"Ohm's law for magnetic circuits" is sometimes referred to as Hopkinson's law rather than Rowland's law as some authors attribute the law to John Hopkinson instead of Rowland. According to a review of magnetic circuit analysis methods this is an incorrect attribution originating from an 1885 paper by Hopkinson. Furthermore, Hopkinson actually cites Rowland's 1873 paper in this work.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal F"
},
{
"math_id": 1,
"text": " \\mathcal{F} = \\Phi \\mathcal{R} ,"
},
{
"math_id": 2,
"text": "\\mathcal{R}"
},
{
"math_id": 3,
"text": "\\mathcal{F} = NI"
},
{
"math_id": 4,
"text": "\\mathcal{F} = \\Phi \\mathcal{R}"
},
{
"math_id": 5,
"text": "\\mathcal{F} = HL"
}
] |
https://en.wikipedia.org/wiki?curid=579041
|
57904446
|
Jacqueline Jensen-Vallin
|
American mathematician & academic
Jacqueline Ann Jensen-Vallin is an American mathematician. She is a chair and professor of mathematics at Lamar University, former editor-in-chief of "MAA FOCUS", the newsletter of the Mathematical Association of America (MAA), and a former governor of the Texas Section of the MAA. Her research interests include combinatorial group theory, low-dimensional topology, and knot theory; she is also known for her work in mathematics education and the history of women in mathematics.
Education and career.
Jensen-Vallin did her undergraduate studies at the University of Connecticut, completing a double major in mathematics and psychology in 1995. She went to the University of Oregon for her graduate studies, and completed her doctorate there in 2002. Her dissertation, "Finding formula_0-Generators for Exotic Homotopy Types of Two-Complexes", concerned algebraic topology and was supervised by Micheal Dyer.
After completing her doctorate, she joined the faculty at Sam Houston State University in 2002. She moved to Lamar University in 2014.
Book.
With Janet Beery, Sarah J. Greenwald, and Maura B. Mast, Jensen-Vallin is an editor of the book "Women in Mathematics: Celebrating the Centennial of the Mathematical Association of America" (Springer, 2017).
Awards and honors.
Jensen-Vallin was one of the 2008 winners of the Henry L. Alder Award for Distinguished Teaching by a Beginning College or University Mathematics Faculty Member.
The Association for Women in Mathematics gave her their Service Award in 2018.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\pi_2"
}
] |
https://en.wikipedia.org/wiki?curid=57904446
|
579061
|
Monad (functional programming)
|
Design pattern in functional programming to build generic types
In functional programming, a monad is a structure that combines program fragments (functions) and wraps their return values in a type with additional computation. In addition to defining a wrapping monadic type, monads define two operators: one to wrap a value in the monad type, and another to compose together functions that output values of the monad type (these are known as monadic functions). General-purpose languages use monads to reduce boilerplate code needed for common operations (such as dealing with undefined values or fallible functions, or encapsulating bookkeeping code). Functional languages use monads to turn complicated sequences of functions into succinct pipelines that abstract away control flow, and side-effects.
Both the concept of a monad and the term originally come from category theory, where a monad is defined as a functor with additional structure. Research beginning in the late 1980s and early 1990s established that monads could bring seemingly disparate computer-science problems under a unified, functional model. Category theory also provides a few formal requirements, known as the monad laws, which should be satisfied by any monad and can be used to verify monadic code.
Since monads make semantics explicit for a kind of computation, they can also be used to implement convenient language features. Some languages, such as Haskell, even offer pre-built definitions in their core libraries for the general monad structure and common instances.
Overview.
"For a monad codice_0, a value of type codice_1 represents having access to a value of type codice_2 within the context of the monad." —C. A. McCann
More exactly, a monad can be used where unrestricted access to a value is inappropriate for reasons specific to the scenario. In the case of the Maybe monad, it is because the value may not exist. In the case of the IO monad, it is because the value may not be known yet, such as when the monad represents user input that will only be provided after a prompt is displayed. In all cases the scenarios in which access makes sense are captured by the bind operation defined for the monad; for the Maybe monad a value is bound only if it exists, and for the IO monad a value is bound only after the previous operations in the sequence have been performed.
A monad can be created by defining a type constructor "M" and two operations:
With these elements, the programmer composes a sequence of function calls (a "pipeline") with several "bind" operators chained together in an expression. Each function call transforms its input plain-type value, and the bind operator handles the returned monadic value, which is fed into the next step in the sequence.
Typically, the bind operator codice_7 may contain code unique to the monad that performs additional computation steps not available in the function received as a parameter. Between each pair of composed function calls, the bind operator can inject into the monadic value codice_1 some additional information that is not accessible within the function codice_9, and pass it along down the pipeline. It can also exert finer control of the flow of execution, for example by calling the function only under some conditions, or executing the function calls in a particular order.
An example: Maybe.
One example of a monad is the codice_20 type. Undefined null results are a particular pain point for which many procedural languages provide no specific tools, forcing the use of the null object pattern or of checks for invalid values at each operation. This causes bugs and makes it harder to build robust software that gracefully handles errors. The codice_20 type forces the programmer to deal with these potentially undefined results by explicitly defining the two states of a result: codice_22 or codice_23. For example, the programmer might be constructing a parser, which is to return an intermediate result, or else signal a condition which the parser has detected and which the programmer must also handle. With just a little extra functional spice on top, this codice_20 type transforms into a fully-featured monad.
In most languages, the Maybe monad is also known as an option type, which is just a type that marks whether or not it contains a value. Typically they are expressed as some kind of enumerated type. In this Rust example we will call it codice_25 and variants of this type can either be a value of generic type codice_26, or the empty variant: codice_23.
// The <T> represents a generic type "T"
enum Maybe<T> {
Just(T),
Nothing,
}
codice_25 can also be understood as a "wrapping" type, and this is where its connection to monads comes in. In languages with some form of the codice_20 type, there are functions that aid in their use such as composing monadic functions with each other and testing if a codice_20 contains a value.
In the following hard-coded example, a codice_20 type is used as a result of functions that may fail, in this case the type returns nothing if there is a divide-by-zero.
fn divide(x: Decimal, y: Decimal) -> Maybe<Decimal> {
    if y == 0.0 { Nothing } else { Just(x / y) } // body sketched here for completeness
}
// divide(1.0, 4.0) -> returns Just(0.25)
// divide(3.0, 0.0) -> returns Nothing
One such way to test whether or not a codice_20 contains a value is to use codice_33 statements.
let m_x = divide(3.14, 0.0); // see divide function above
// The if statement extracts x from m_x if m_x is the Just variant of Maybe
if let Just(x) = m_x {
println!("answer: ", x)
} else {
println!("division failed, divide by zero error...")
Other languages may have pattern matching
let result = divide(3.0, 2.0);
match result {
Just(x) => println!("Answer: {}", x),
Nothing => println!("division failed; we'll get 'em next time."),
}
Monads can compose functions that return codice_20, putting them together. A concrete example might have one function take in several codice_20 parameters, and return a single codice_20 whose value is codice_23 when any of the parameters are codice_23, as in the following:
fn chainable_division(maybe_x: Maybe<Decimal>, maybe_y: Maybe<Decimal>) -> Maybe<Decimal> {
match (maybe_x, maybe_y) {
(Just(x), Just(y)) => divide(x, y), // If both inputs are Just, check for division by zero and divide accordingly
_ => Nothing, // Otherwise return Nothing
}
}
chainable_division(chainable_division(Just(2.0), Just(0.0)), Just(1.0)); // the inner chainable_division fails, so the outer one returns Nothing
Instead of repeating codice_39 expressions, we can use something called a "bind" operator (also known as "map", "flatmap", or "shove"). This operation takes a monad and a function that returns a monad, runs the function on the inner value of the passed monad, and returns the monad produced by the function.
// Rust example using ".map". maybe_x is passed through 2 functions that return Maybe<Decimal> and Maybe<String> respectively.
// As with normal function composition the inputs and outputs of functions feeding into each other should match wrapped types. (i.e. the add_one function should return a Maybe<Decimal> which then can be unwrapped to a Decimal for the decimal_to_string function)
let maybe_x: Maybe<Decimal> = Just(1.0);
let maybe_result = maybe_x.map(add_one).map(decimal_to_string);
In Haskell, there is an operator "bind", or (codice_7) that allows for this monadic composition in a more elegant form similar to function composition.
halve :: Int -> Maybe Int
halve x
| even x = Just (x `div` 2)
| odd x = Nothing
-- This code halves x twice. it evaluates to Nothing if x is not a multiple of 4
halve x >>= halve
With codice_7 available, codice_42 can be expressed much more succinctly with the help of anonymous functions (i.e. lambdas). Notice in the expression below how the two nested lambdas each operate on the wrapped value in the passed codice_20 monad using the bind operator.
chainable_division(mx,my) = mx >>= ( λx -> my >>= (λy -> Just (x / y)) )
What has been shown so far is basically a monad, but to be more concise, the following is a strict list of qualities necessary for a monad as defined by the following section.
A type (codice_20)
A type converter (codice_45)
A combinator for monadic functions ( codice_7 or codice_47)
These are the 3 things necessary to form a monad. Other monads may embody different logical processes, and some may have additional properties, but all of them will have these three similar components.
Definition.
The more common definition for a monad in functional programming, used in the above example, is actually based on a Kleisli triple ⟨T, η, μ⟩ rather than category theory's standard definition. The two constructs turn out to be mathematically equivalent, however, so either definition will yield a valid monad. Given any well-defined, basic types T, U, a monad consists of three parts: a type constructor "M" that builds up the monadic type "M T" from the underlying type "T"; a "unit" (or "return") operation that wraps a value of type "T" into a monadic value of type "M T"; and a "bind" operation that takes a monadic value of type "M T" and a function of type "T → M U" and combines them into a monadic value of type "M U".
To fully qualify as a monad though, these three parts must also respect a few laws: unit must act as a left identity and as a right identity of bind, and bind must be associative.
Algebraically, this means any monad both gives rise to a category (called the Kleisli category) "and" a monoid in the category of functors (from values to computations), with monadic composition as a binary operator in the monoid and unit as identity in the monoid.
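The Maybe examples above can be restated compactly in Python, treating None as codice_23 and any other value as the wrapped value (a simplification that conflates a wrapped None with codice_23). The sketch below defines unit and bind and checks the three monad laws on sample values.
def unit(x):
    """Wrap a plain value into the Maybe monad (here the value itself stands for Just x)."""
    return x
def bind(mx, f):
    """Apply the monadic function f to mx, short-circuiting on Nothing (None)."""
    return None if mx is None else f(mx)
def halve(x):
    """Monadic function from the Haskell example: defined only for even numbers."""
    return x // 2 if x % 2 == 0 else None
print(bind(bind(unit(8), halve), halve))   # 2
print(bind(bind(unit(6), halve), halve))   # None, since 6 is not a multiple of 4
# The three monad laws, checked on sample values:
assert bind(unit(4), halve) == halve(4)                       # left identity
assert bind(halve(8), unit) == halve(8)                       # right identity
assert (bind(bind(unit(8), halve), halve)
        == bind(unit(8), lambda x: bind(halve(x), halve)))    # associativity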
Usage.
The value of the monad pattern goes beyond merely condensing code and providing a link to mathematical reasoning.
Whatever language or default programming paradigm a developer uses, following the monad pattern brings many of the benefits of purely functional programming.
By reifying a specific kind of computation, a monad not only encapsulates the tedious details of that computational pattern, but it does so in a declarative way, improving the code's clarity.
As monadic values explicitly represent not only computed values, but computed "effects", a monadic expression can be replaced with its value in referentially transparent positions, much like pure expressions can be, allowing for many techniques and optimizations based on rewriting.
Typically, programmers will use bind to chain monadic functions into a sequence, which has led some to describe monads as "programmable semicolons", a reference to how many imperative languages use semicolons to separate statements.
However, monads do not actually order computations; even in languages that use them as central features, simpler function composition can arrange steps within a program.
A monad's general utility rather lies in simplifying a program's structure and improving separation of concerns through abstraction.
The monad structure can also be seen as a uniquely mathematical and compile time variation on the decorator pattern.
Some monads can pass along extra data that is inaccessible to functions, and some even exert finer control over execution, for example only calling a function under certain conditions.
Because they let application programmers implement domain logic while offloading boilerplate code onto pre-developed modules, monads can even be considered a tool for aspect-oriented programming.
One other noteworthy use for monads is isolating side-effects, like input/output or mutable state, in otherwise purely functional code.
Even purely functional languages "can" still implement these "impure" computations without monads, via an intricate mix of function composition and continuation-passing style (CPS) in particular.
With monads though, much of this scaffolding can be abstracted away, essentially by taking each recurring pattern in CPS code and bundling it into a distinct monad.
If a language does not support monads by default, it is still possible to implement the pattern, often without much difficulty.
When translated from category-theory to programming terms, the monad structure is a generic concept and can be defined directly in any language that supports an equivalent feature for bounded polymorphism.
A concept's ability to remain agnostic about operational details while working on underlying types is powerful, but the unique features and stringent behavior of monads set them apart from other concepts.
Applications.
Discussions of specific monads will typically focus on solving a narrow implementation problem since a given monad represents a specific computational form.
In some situations though, an application can even meet its high-level goals by using appropriate monads within its core logic.
Here are just a few applications that have monads at the heart of their designs:
History.
The term "monad" in programming dates to the APL and J programming languages, which do tend toward being purely functional. However, in those languages, "monad" is only shorthand for a function taking one parameter (a function with two parameters being a "dyad", and so on).
The mathematician Roger Godement was the first to formulate the concept of a monad (dubbing it a "standard construction") in the late 1950s, though the term "monad" that came to dominate was popularized by category-theorist Saunders Mac Lane. The form defined above using bind, however, was originally described in 1965 by mathematician Heinrich Kleisli in order to prove that any monad could be characterized as an adjunction between two (covariant) functors.
Starting in the 1980s, a vague notion of the monad pattern began to surface in the computer science community.
According to programming language researcher Philip Wadler, computer scientist John C. Reynolds anticipated several facets of it in the 1970s and early 1980s, when he discussed the value of continuation-passing style, of category theory as a rich source for formal semantics, and of the type distinction between values and computations.
The research language Opal, which was actively designed up until 1990, also effectively based I/O on a monadic type, but the connection was not realized at the time.
The computer scientist Eugenio Moggi was the first to explicitly link the monad of category theory to functional programming, in a conference paper in 1989, followed by a more refined journal submission in 1991. In earlier work, several computer scientists had advanced using category theory to provide semantics for the lambda calculus. Moggi's key insight was that a real-world program is not just a function from values to other values, but rather a transformation that forms "computations" on those values. When formalized in category-theoretic terms, this leads to the conclusion that monads are the structure to represent these computations.
Several others popularized and built on this idea, including Philip Wadler and Simon Peyton Jones, both of whom were involved in the specification of Haskell. In particular, Haskell used a problematic "lazy stream" model up through v1.2 to reconcile I/O with lazy evaluation, until switching over to a more flexible monadic interface. The Haskell community would go on to apply monads to many problems in functional programming, and in the 2010s, researchers working with Haskell eventually recognized that monads are applicative functors; and that both monads and arrows are monoids.
At first, programming with monads was largely confined to Haskell and its derivatives, but as functional programming has influenced other paradigms, many languages have incorporated a monad pattern (in spirit if not in name). Formulations now exist in Scheme, Perl, Python, Racket, Clojure, Scala, F#, and have also been considered for a new ML standard.
Analysis.
One benefit of the monad pattern is bringing mathematical precision on the composition of computations.
Not only can the monad laws be used to check an instance's validity, but features from related structures (like functors) can be used through subtyping.
Verifying the monad laws.
Returning to the codice_20 example, its components were declared to make up a monad, but no proof was given that it satisfies the monad laws.
This can be rectified by plugging the specifics of codice_20 into one side of the general laws, then algebraically building a chain of equalities to reach the other side:
Law 1 (left identity):
    eta(a) >>= f(x)  ⇔  (Just a) >>= f(x)  ⇔  f(a)

Law 2 (right identity):
    ma >>= eta(x)  ⇔  ma
    if ma is (Just a) then
        eta(a) ⇔ Just a
    else
        Nothing ⇔ Nothing
    end if

Law 3 (associativity):
    (ma >>= f(x)) >>= g(y)  ⇔  ma >>= (f(x) >>= g(y))

    Left-hand side:                         Right-hand side:
    if (ma >>= f(x)) is (Just b) then       if ma is (Just a) then
        g(ma >>= f(x))                          (f(x) >>= g(y)) a
    else                                    else
        Nothing                                 Nothing
    end if                                  end if

    Both sides reduce to:
    if ma is (Just a) and f(a) is (Just b) then
        (g ∘ f) a
    else if ma is (Just a) and f(a) is Nothing then
        Nothing
    else
        Nothing
    end if
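The same equalities can also be spot-checked mechanically. The following Haskell sketch (with illustrative functions f and g, not taken from the example above) evaluates all three laws for concrete arguments:
f, g :: Int -> Maybe Int
f x = if even x then Just (x `div` 2) else Nothing
g x = Just (x + 1)

lawsHold :: Int -> Maybe Int -> Bool
lawsHold a m =
  (return a >>= f) == f a                              -- left identity
  && (m >>= return) == m                               -- right identity
  && ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))    -- associativity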
Derivation from functors.
Though rarer in computer science, one can use category theory directly, which defines a monad as a functor with two additional natural transformations.
So to begin, a structure requires a higher-order function (or "functional") named map to qualify as a functor:
<templatestyles src="Block indent/styles.css"/>codice_62
This is not always a major issue, however, especially when a monad is derived from a pre-existing functor, whereupon the monad inherits map automatically. (For historical reasons, this codice_63 is instead called codice_64 in Haskell.)
A monad's first transformation is actually the same unit from the Kleisli triple, but following the hierarchy of structures closely, it turns out unit characterizes an applicative functor, an intermediate structure between a monad and a basic functor. In the applicative context, unit is sometimes referred to as pure but is still the same function. What does differ in this construction is the law unit must satisfy; as bind is not defined, the constraint is given in terms of map instead:
<templatestyles src="Block indent/styles.css"/>
The final leap from applicative functor to monad comes with the second transformation, the join function (in category theory this is a natural transformation usually called μ), which "flattens" nested applications of the monad:
<templatestyles src="Block indent/styles.css"/>codice_65
As the characteristic function, join must also satisfy three variations on the monad laws:
<templatestyles src="Block indent/styles.css"/>codice_66
<templatestyles src="Block indent/styles.css"/>codice_67
<templatestyles src="Block indent/styles.css"/>codice_68
Regardless of whether a developer defines a direct monad or a Kleisli triple, the underlying structure will be the same, and the forms can be derived from each other easily:
<templatestyles src="Block indent/styles.css"/>codice_69
<templatestyles src="Block indent/styles.css"/>codice_70
<templatestyles src="Block indent/styles.css"/>codice_71
Another example: List.
The List monad naturally demonstrates how deriving a monad from a simpler functor can come in handy.
In many languages, a list structure comes pre-defined along with some basic features, so a codice_72 type constructor and append operator (represented with codice_73 for infix notation) are assumed as already given here.
Embedding a plain value in a list is also trivial in most languages:
unit(x) = [x]
From here, applying a function iteratively with a list comprehension may seem like an easy choice for bind and converting lists to a full monad.
The difficulty with this approach is that bind expects monadic functions, which in this case will output lists themselves;
as more functions are applied, layers of nested lists will accumulate, requiring more than a basic comprehension.
However, a procedure to apply any "simple" function over the whole list, in other words map, is straightforward:
(map φ) xlist = [ φ(x1), φ(x2), ..., φ(xn) ]
Now, these two procedures already promote codice_72 to an applicative functor.
To fully qualify as a monad, only a correct notion of join to flatten repeated structure is needed, but for lists, that just means unwrapping an outer list to append the inner ones that contain values:
join(xlistlist) = join([xlist1, xlist2, ..., xlistn])
= xlist1 ++ xlist2 ++ ... ++ xlistn
The resulting monad is not only a list, but one that automatically resizes and condenses itself as functions are applied.
bind can now also be derived with just a formula, then used to feed codice_72 values through a pipeline of monadic functions:
(xlist >>= f) = join ∘ (map f) xlist
One application for this monadic list is representing nondeterministic computation.
codice_72 can hold results for all execution paths in an algorithm, then condense itself at each step to "forget" which paths led to which results (a sometimes important distinction from deterministic, exhaustive algorithms).
Another benefit is that checks can be embedded in the monad; specific paths can be pruned transparently at their first point of failure, with no need to rewrite functions in the pipeline.
A second situation where codice_72 shines is composing multivalued functions.
For instance, the nth complex root of a number should yield n distinct complex numbers, but if another mth root is then taken of those results, the final m•n values should be identical to the output of the m•nth root.
codice_72 completely automates this issue away, condensing the results from each step into a flat, mathematically correct list.
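A small Haskell sketch of the nondeterministic reading: each step returns every possible outcome, and bind flattens the branching results automatically (the functions step1 and step2 are purely illustrative):
step1, step2 :: Int -> [Int]
step1 x = [x + 1, x - 1]        -- two possible outcomes per input
step2 x = [x * 2, x * 3]

paths :: [Int]
paths = [0] >>= step1 >>= step2   -- [2, 3, -2, -3]: all four execution paths, flattened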
Techniques.
Monads present opportunities for interesting techniques beyond just organizing program logic. Monads can lay the groundwork for useful syntactic features while their high-level and mathematical nature enable significant abstraction.
Syntactic sugar: do-notation.
Although using bind openly often makes sense, many programmers prefer a syntax that mimics imperative statements
(called "do-notation" in Haskell, "perform-notation" in OCaml, "computation expressions" in F#, and "for comprehension" in Scala). This is only syntactic sugar that disguises a monadic pipeline as a code block; the compiler will then quietly translate these expressions into underlying functional code.
Translating the codice_79 function from the codice_20 example into Haskell can show this feature in action. A non-monadic version of codice_79 in Haskell looks like this:
add mx my =
  case mx of
    Nothing -> Nothing
    Just x  -> case my of
      Nothing -> Nothing
      Just y  -> Just (x + y)
In monadic Haskell, codice_82 is the standard name for unit, plus lambda expressions must be handled explicitly, but even with these technicalities, the codice_20 monad makes for a cleaner definition:
add mx my =
  mx >>= (\x ->
  my >>= (\y ->
    return (x + y)))
With do-notation though, this can be distilled even further into a very intuitive sequence:
add mx my = do
  x <- mx
  y <- my
  return (x + y)
A second example shows how codice_20 can be used in an entirely different language: F#.
With computation expressions, a "safe division" function that returns codice_85 for an undefined operand "or" division by zero can be written as:
let readNum () =
  let s = Console.ReadLine()
  let succ, v = Int32.TryParse(s)
  if (succ) then Some(v) else None

let secure_div =
  maybe {
    let! x = readNum()
    let! y = readNum()
    if (y = 0)
    then None
    else return (x / y)
  }
At build-time, the compiler will internally "de-sugar" this function into a denser chain of bind calls:
maybe.Delay(fun () ->
  maybe.Bind(readNum(), fun x ->
    maybe.Bind(readNum(), fun y ->
      if (y=0) then None else maybe.Return(x / y))))
For a last example, even the general monad laws themselves can be expressed in do-notation:
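A sketch of those laws rendered as Haskell do-notation comments (m stands for an arbitrary monadic value, f and g for monadic functions, and x for a plain value):
-- Left identity:   do { x' <- return x; f x' }           ≡  do { f x }
-- Right identity:  do { x' <- m; return x' }             ≡  do { m }
-- Associativity:   do { y <- do { x' <- m; f x' }; g y } ≡  do { x' <- m; y <- f x'; g y }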
General interface.
Every monad needs a specific implementation that meets the monad laws, but other aspects like the relation to other structures or standard idioms within a language are shared by all monads.
As a result, a language or library may provide a general codice_86 interface with function prototypes, subtyping relationships, and other general facts.
Besides providing a head-start to development and guaranteeing a new monad inherits features from a supertype (such as functors), checking a monad's design against the interface adds another layer of quality control.
Operators.
Monadic code can often be simplified even further through the judicious use of operators.
The map functional can be especially helpful since it works on more than just ad-hoc monadic functions; so long as a monadic function should work analogously to a predefined operator, map can be used to instantly "lift" the simpler operator into a monadic one.
With this technique, the definition of codice_79 from the codice_20 example could be distilled into:
add(mx,my) = map (+) mx my
The process could be taken even one step further by defining codice_79 not just for codice_20, but for the whole codice_86 interface.
By doing this, any new monad that matches the structure interface and implements its own map will immediately inherit a lifted version of codice_79 too.
The only change to the function needed is generalizing the type signature:
add : (Monad Number, Monad Number) → Monad Number
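In Haskell, for example, this generic lifted addition can be written with the library helper liftM2 from Control.Monad, which lifts a two-argument function into any monad (a sketch, not the article's own definition):
import Control.Monad (liftM2)

addM :: (Monad m, Num a) => m a -> m a -> m a
addM = liftM2 (+)
-- addM (Just 1) (Just 2) == Just 3;  addM (Just 1) Nothing == Nothing;  addM [1,2] [10] == [11,12]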
Another monadic operator that is also useful for analysis is monadic composition (represented as infix codice_93 here), which allows chaining monadic functions in a more mathematical style:
(f >=> g)(x) = f(x) >>= g
With this operator, the monad laws can be written in terms of functions alone, highlighting the correspondence to associativity and existence of an identity:
(unit >=> g) ↔ g
(f >=> unit) ↔ f
(f >=> g) >=> h ↔ f >=> (g >=> h)
In turn, the above shows the meaning of the "do" block in Haskell:
do
  _p <- f(x)
  _q <- g(_p)
  h(_q)               ↔   ( f >=> g >=> h )(x)
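For reference, monadic composition can be defined generically in one line; the following Haskell sketch names it kleisli to stay self-contained, although standard Haskell already exports the same operator as >=> from Control.Monad (safeRecip and safeSqrt are illustrative helpers):
kleisli :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
kleisli f g = \x -> f x >>= g

-- Example with Maybe: take a reciprocal, then a square root, failing safely on 0 or negatives.
safeRecip, safeSqrt :: Double -> Maybe Double
safeRecip x = if x == 0 then Nothing else Just (1 / x)
safeSqrt  x = if x < 0  then Nothing else Just (sqrt x)

recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip `kleisli` safeSqrt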
More examples.
Identity monad.
The simplest monad is the Identity monad, which just annotates plain values and functions to satisfy the monad laws:
newtype Id T = T
unit(x) = x
(x >>= f) = f(x)
codice_94 does actually have valid uses though, such as providing a base case for recursive monad transformers.
It can also be used to perform basic variable assignment within an imperative-style block.
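A Haskell rendering of the same monad is only a few lines; this sketch uses a newtype wrapper with illustrative names (the standard library provides an equivalent in Data.Functor.Identity):
newtype Identity a = Identity { runIdentity :: a }

unitI :: a -> Identity a
unitI = Identity

bindI :: Identity a -> (a -> Identity b) -> Identity b
bindI (Identity x) f = f x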
Collections.
Any collection with a proper append is already a free monoid, but it turns out that codice_72 is not the only collection that also has a well-defined join and qualifies as a monad.
One can even mutate codice_72 into these other monadic collections by simply imposing special properties on append:
IO monad (Haskell).
As already mentioned, pure code should not have unmanaged side effects, but that does not preclude a program from "explicitly" describing and managing effects.
This idea is central to Haskell's IO monad, where an object of type codice_97 can be seen as describing an action to be performed in the world, optionally providing information about the world of type codice_2. An action that provides no information about the world has the type codice_99, "providing" the dummy value codice_100.
When a programmer binds an codice_101 value to a function, the function computes the next action to be performed based on the information about the world provided by the previous action (input from users, files, etc.). Most significantly, because the value of the IO monad can only be bound to a function that computes another IO monad, the bind function imposes a discipline of a sequence of actions where the result of an action can only be provided to a function that will compute the next action to perform. This means that actions which do not need to be performed never are, and actions that do need to be performed have a well defined sequence, solving the problem of (IO) actions not being referentially transparent.
For example, Haskell has several functions for acting on the wider file system, including one that checks whether a file exists and another that deletes a file.
Their two type signatures are:
doesFileExist :: FilePath -> IO Bool
removeFile :: FilePath -> IO ()
The first is interested in whether a given file really exists, and as a result, outputs a Boolean value within the codice_101 monad.
The second function, on the other hand, is only concerned with acting on the file system so the codice_101 container it outputs is empty.
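Because both actions live in codice_101, they compose with bind like any other monadic functions. A small sketch (using doesFileExist and removeFile from System.Directory and when from Control.Monad) deletes a file only if it actually exists:
import Control.Monad (when)
import System.Directory (doesFileExist, removeFile)

removeIfPresent :: FilePath -> IO ()
removeIfPresent path = do
  exists <- doesFileExist path      -- IO Bool: query the file system
  when exists (removeFile path)     -- IO (): act only if the file is there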
codice_101 is not limited just to file I/O though; it even allows for user I/O, and along with imperative syntax sugar, can mimic a typical "Hello, World!" program:
main :: IO ()
main = do
putStrLn "Hello, world!"
putStrLn "What is your name, user?"
name <- getLine
putStrLn ("Nice to meet you, " ++ name ++ "!")
Desugared, this translates into the following monadic pipeline (codice_105 in Haskell is just a variant of bind for when only monadic effects matter and the underlying result can be discarded):
main :: IO ()
main =
putStrLn "Hello, world!" »
putStrLn "What is your name, user?" »
getLine »= (\name ->
putStrLn ("Nice to meet you, " ++ name ++ "!"))
Writer monad (JavaScript).
Another common situation is keeping a log file or otherwise reporting a program's progress.
Sometimes, a programmer may want to log even more specific, technical data for later profiling or debugging.
The Writer monad can handle these tasks by generating auxiliary output that accumulates step-by-step.
To show how the monad pattern is not restricted to primarily functional languages, this example implements a codice_106 monad in JavaScript.
First, an array (with nested tails) allows constructing the codice_106 type as a linked list.
The underlying output value will live in position 0 of the array, and position 1 will implicitly hold a chain of auxiliary notes:
const writer = value => [value, []];
Defining unit is also very simple:
const unit = value => [value, []];
Only unit is needed to define simple functions that output codice_106 objects with debugging notes:
const squared = x => [x * x, [`${x} was squared.`]];
const halved = x => [x / 2, [`${x} was halved.`]];
A true monad still requires bind, but for codice_106, this amounts simply to concatenating a function's output to the monad's linked list:
const bind = (writer, transform) => {
  const [value, log] = writer;
  const [result, updates] = transform(value);
  return [result, log.concat(updates)];
};
The sample functions can now be chained together using bind, but defining a version of monadic composition (called codice_110 here) allows applying these functions even more succinctly:
const pipelog = (writer, ...transforms) =>
transforms.reduce(bind, writer);
The final result is a clean separation of concerns between stepping through computations and logging them to audit later:
pipelog(unit(4), squared, halved);
// Resulting writer object = [8, ['4 was squared.', '16 was halved.']]
Environment monad.
An environment monad (also called a "reader monad" and a "function monad") allows a computation to depend on values from a shared environment. The monad type constructor maps a type T to functions of type "E" → "T", where E is the type of the shared environment. The monad functions are:
formula_0
The following monadic operations are useful:
formula_1
The ask operation is used to retrieve the current context, while local executes a computation in a modified subcontext. As in a state monad, computations in the environment monad may be invoked by simply providing an environment value and applying it to an instance of the monad.
Formally, a value in an environment monad is equivalent to a function with an additional, anonymous argument; return and bind are equivalent to the K and S combinators, respectively, in the SKI combinator calculus.
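A direct Haskell transcription of these definitions is short; the sketch below uses a plain type synonym and suffixed names to avoid clashing with the Prelude, and Config/width are made-up example names:
type Reader e t = e -> t

returnR :: t -> Reader e t
returnR t = \_ -> t

bindR :: Reader e t -> (t -> Reader e t') -> Reader e t'
bindR r f = \e -> f (r e) e

ask :: Reader e e
ask = id

local :: (e -> e) -> Reader e t -> Reader e t
local f c = c . f

-- Example: a computation that reads a field from a shared configuration value.
data Config = Config { width :: Int }

halfWidth :: Reader Config Int
halfWidth = ask `bindR` (\cfg -> returnR (width cfg `div` 2))
-- halfWidth (Config 10) == 5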
State monads.
A state monad allows a programmer to attach state information of any type to a calculation. Given any value type, the corresponding type in the state monad is a function which accepts a state, then outputs a new state (of type codice_111) along with a return value (of type codice_112). This is similar to an environment monad, except that it also returns a new state, and thus allows modeling a "mutable" environment.
type State s t = s -> (t, s)
Note that this monad takes a type parameter, the type of the state information. The monad operations are defined as follows:
-- "return" produces the given value without changing the state.
return x = \s -> (x, s)
-- "bind" modifies m so that it applies f to its result.
m >>= f = \r -> let (x, s) = m r in (f x) s
Useful state operations include:
get = \s -> (s, s) -- Examine the state at this point in the computation.
put s = \_ -> ((), s) -- Replace the state.
modify f = \s -> ((), f s) -- Update the state
Another operation applies a state monad to a given initial state:
runState :: State s a -> s -> (a, s)
runState t s = t s
do-blocks in a state monad are sequences of operations that can examine and update the state data.
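As a small, self-contained sketch of these operations in use (suffixed names avoid clashing with the Prelude, and an explicit bindS is used since the bare type synonym is not a Monad instance by itself), the following counter returns the current state and increments the stored value:
type State' s t = s -> (t, s)

returnS :: a -> State' s a
returnS x = \s -> (x, s)

bindS :: State' s a -> (a -> State' s b) -> State' s b
bindS m f = \s -> let (x, s') = m s in f x s'

getS :: State' s s
getS = \s -> (s, s)

putS :: s -> State' s ()
putS s = \_ -> ((), s)

-- Return the current counter and increment the stored state.
tick :: State' Int Int
tick = getS `bindS` (\n ->
       putS (n + 1) `bindS` (\_ ->
       returnS n))
-- tick 5 == (5, 6)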
Informally, a state monad of state type S maps the type of return values T into functions of type formula_2, where S is the underlying state. The return and bind function are:
formula_3.
From the category theory point of view, a state monad is derived from the adjunction between the product functor and the exponential functor, which exists in any cartesian closed category by definition.
Continuation monad.
A continuation monad with return type R maps type T into functions of type formula_4. It is used to model continuation-passing style. The return and bind functions are as follows:
formula_5
The call-with-current-continuation function is defined as follows:
formula_6
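A Haskell sketch of these definitions, with the continuation type wrapped in a newtype so the signatures stay readable (unitC, bindC, and callCC below mirror the formulas above; comparable types ship with common libraries such as mtl):
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

unitC :: a -> Cont r a
unitC x = Cont (\k -> k x)

bindC :: Cont r a -> (a -> Cont r b) -> Cont r b
bindC c f = Cont (\k -> runCont c (\t -> runCont (f t) k))

callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\t -> Cont (\_ -> k t))) k)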
Program logging.
The following code is pseudocode. Suppose we have two functions codice_113 and codice_114, with types
foo : int -> int
bar : int -> int
That is, both functions take in an integer and return another integer. Then we can apply the functions in succession like so:
foo (bar x)
Where the result is the result of codice_113 applied to the result of codice_114 applied to codice_117.
But suppose we are debugging our program, and we would like to add logging messages to codice_113 and codice_114.
So we change the types as so:
foo : int -> int * string
bar : int -> int * string
So that both functions return a tuple, with the result of the application as the integer,
and a logging message with information about the applied function and all the previously applied functions as the string.
Unfortunately, this means we can no longer compose codice_113 and codice_114, as their input type codice_122 is not compatible with their output type codice_123. And although we can again gain composability by modifying the types of each function to be codice_124, this would require us to add boilerplate code to each function to extract the integer from the tuple, which would get tedious as the number of such functions increases.
Instead, let us define a helper function to abstract away this boilerplate for us:
bind : int * string -> (int -> int * string) -> int * string
codice_16 takes in an integer and string tuple, then takes in a function (like codice_113) that maps from an integer to an integer and string tuple. Its output is an integer and string tuple, which is the result of applying the input function to the integer within the input integer and string tuple.
In this way, we only need to write boilerplate code to extract the integer from the tuple once, in codice_16.
Now we have regained some composability. For example:
bind (bind (x,s) bar) foo
Where codice_128 is an integer and string tuple.
To make the benefits even clearer, let us define an infix operator as an alias for codice_16:
(>>=) : int * string -> (int -> int * string) -> int * string
So that codice_130 is the same as codice_131.
Then the above example becomes:
((x,s) >>= bar) >>= foo
Finally, we define a new function to avoid writing codice_132 every time we wish to create an empty logging message, where codice_133 is the empty string.
return : int -> int * string
Which wraps codice_117 in the tuple described above.
The result is a pipeline for logging messages:
((return x) >>= bar) >>= foo
That allows us to more easily log the effects of codice_114 and codice_113 on codice_117.
codice_123 denotes a pseudo-coded monadic value. codice_16 and codice_82 are analogous to the corresponding functions of the same name.
In fact, codice_123, codice_16, and codice_82 form a monad.
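Rendered in Haskell, the same pseudocode becomes a handful of definitions; this sketch keeps the names from the text where possible (the bodies of foo and bar are made up for illustration):
type Logged = (Int, String)

bindL :: Logged -> (Int -> Logged) -> Logged
bindL (x, s) f = let (y, msg) = f x in (y, s ++ msg)

returnL :: Int -> Logged
returnL x = (x, "")

foo, bar :: Int -> Logged
foo x = (x + 1, "foo was applied. ")
bar x = (x * 2, "bar was applied. ")

pipeline :: Int -> Logged
pipeline x = returnL x `bindL` bar `bindL` foo
-- pipeline 3 == (7, "bar was applied. foo was applied. ")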
Additive monads.
An additive monad is a monad endowed with an additional closed, associative, binary operator mplus and an identity element under mplus, called mzero.
The codice_20 monad can be considered additive, with codice_23 as mzero and a variation on the OR operator as mplus.
codice_72 is also an additive monad, with the empty list codice_147 acting as mzero and the concatenation operator codice_73 as mplus.
Intuitively, mzero represents a monadic wrapper with no value from an underlying type, but is also considered a "zero" (rather than a "one") since it acts as an absorber for bind, returning mzero whenever bound to a monadic function.
This property is two-sided, and bind will also return mzero when any value is bound to a monadic zero function.
In category-theoretic terms, an additive monad qualifies once as a monoid over monadic functions with bind (as all monads do), and again over monadic values via mplus.
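In Haskell these two extra operations are captured by the MonadPlus class; the following sketch simply evaluates them for codice_20 and codice_72 (expected results shown in comments):
import Control.Monad (mzero, mplus)

exMaybe :: Maybe Int
exMaybe = Nothing `mplus` Just 3          -- Just 3: mzero (Nothing) is the identity for mplus

exList :: [Int]
exList = [1, 2] `mplus` [3]               -- [1,2,3]: for lists, mplus is concatenation

exAbsorb :: Maybe Int
exAbsorb = mzero >>= (\x -> Just (x + 1)) -- Nothing: mzero absorbs bind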
Free monads.
Sometimes, the general outline of a monad may be useful, but no simple pattern recommends one monad or another.
This is where a free monad comes in; as a free object in the category of monads, it can represent monadic structure without any specific constraints beyond the monad laws themselves.
Just as a free monoid concatenates elements without evaluation, a free monad allows chaining computations with markers to satisfy the type system, but otherwise imposes no deeper semantics itself.
For example, by working entirely through the codice_39 and codice_23 markers, the codice_20 monad is in fact a free monad.
The codice_72 monad, on the other hand, is not a free monad since it brings extra, specific facts about lists (like append) into its definition.
One last example is an abstract free monad:
data Free f a
  = Pure a
  | Free (f (Free f a))
unit :: a -> Free f a
unit x = Pure x
bind :: Functor f => Free f a -> (a -> Free f b) -> Free f b
bind (Pure x) f = f x
bind (Free x) f = Free (fmap (\y -> bind y f) x)
Free monads, however, are "not" restricted to a linked list, as in this example, and can be built around other structures such as trees.
Using free monads intentionally may seem impractical at first, but their formal nature is particularly well-suited for syntactic problems.
A free monad can be used to track syntax and type while leaving semantics for later, and has found use in parsers and interpreters as a result.
Others have applied them to more dynamic, operational problems too, such as providing iteratees within a language.
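As a sketch of that "syntax now, semantics later" idea, the Free type above can wrap a tiny instruction functor and be interpreted afterwards; Teletype, hello, and run below are made-up illustrations, not part of the article's definitions:
data Teletype next = PrintLine String next

instance Functor Teletype where
  fmap f (PrintLine s next) = PrintLine s (f next)

-- A purely syntactic program: nothing is printed until an interpreter is chosen.
hello :: Free Teletype ()
hello = Free (PrintLine "Hello" (Free (PrintLine "World" (Pure ()))))

-- One possible interpreter assigns IO semantics after the fact.
run :: Free Teletype a -> IO a
run (Pure x)                  = return x
run (Free (PrintLine s next)) = putStrLn s >> run next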
Comonads.
Besides generating monads with extra properties, for any given monad, one can also define a comonad.
Conceptually, if monads represent computations built up from underlying values, then comonads can be seen as reductions back down to values.
Monadic code, in a sense, cannot be fully "unpacked"; once a value is wrapped within a monad, it remains quarantined there along with any side-effects (a good thing in purely functional programming).
Sometimes though, a problem is more about consuming contextual data, which comonads can model explicitly.
Technically, a comonad is the categorical dual of a monad, which loosely means that it will have the same required components, only with the direction of the type signatures "reversed".
Starting from the bind-centric monad definition, a comonad consists of:
counit(wa) : W T → T
(wa =>> f) : (W U, W U → T) → W T
extend and counit must also satisfy duals of the monad laws:
counit ∘ ( (wa =>> f) → wb ) ↔ f(wa) → b
wa =>> counit ↔ wa
wa ( (=>> f(wx = wa)) → wb (=>> g(wy = wb)) → wc ) ↔ ( wa (=>> f(wx = wa)) → wb ) (=>> g(wy = wb)) → wc
Analogous to monads, comonads can also be derived from functors using a dual of join:
duplicate(wa) : W T → W (W T)
While operations like extend are reversed, however, a comonad does "not" reverse functions it acts on, and consequently, comonads are still functors with map, not cofunctors.
The alternate definition with duplicate, counit, and map must also respect its own comonad laws:
((map duplicate) ∘ duplicate) wa ↔ (duplicate ∘ duplicate) wa ↔ wwwa
((map counit) ∘ duplicate) wa ↔ (counit ∘ duplicate) wa ↔ wa
((map map φ) ∘ duplicate) wa ↔ (duplicate ∘ (map φ)) wa ↔ wwb
And as with monads, the two forms can be converted automatically:
(map φ) wa ↔ wa =>> (φ ∘ counit) wx
duplicate wa ↔ wa =>> wx
wa =>> f(wx) ↔ ((map f) ∘ duplicate) wa
A simple example is the Product comonad, which outputs values based on an input value and shared environment data.
In fact, the codice_154 comonad is just the dual of the codice_106 monad and effectively the same as the codice_156 monad (both discussed below).
codice_154 and codice_156 differ only in which function signatures they accept, and how they complement those functions by wrapping or unwrapping values.
A less trivial example is the Stream comonad, which can be used to represent data streams and attach filters to the incoming signals with extend.
In fact, while not as popular as monads, researchers have found comonads particularly useful for stream processing and modeling dataflow programming.
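A minimal Haskell sketch of such a stream comonad, with extend implemented via duplicate and a two-sample moving-average filter as an example (all names here are illustrative):
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

extract :: Stream a -> a
extract (Cons x _) = x

duplicate :: Stream a -> Stream (Stream a)
duplicate s@(Cons _ xs) = Cons s (duplicate xs)

extend :: (Stream a -> b) -> Stream a -> Stream b
extend f = fmap f . duplicate

-- A filter attached with extend: average each sample with its successor.
smooth :: Stream Double -> Stream Double
smooth = extend (\(Cons x (Cons y _)) -> (x + y) / 2)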
Due to their strict definitions, however, one cannot simply move objects back and forth between monads and comonads.
As an even higher abstraction, arrows can subsume both structures, but finding more granular ways to combine monadic and comonadic code is an active area of research.
See also.
Alternatives for modeling computations:
Related design concepts:
Generalizations of monads:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
External links.
HaskellWiki references:
Tutorials:
Interesting cases:
|
[
{
"math_id": 0,
"text": "\\begin{array}{ll}\n\\text{return} \\colon & T \\rarr E \\rarr T = t \\mapsto e \\mapsto t \\\\\n\\text{bind} \\colon & (E \\rarr T) \\rarr (T \\rarr E \\rarr T') \\rarr E \\rarr T' = r \\mapsto f \\mapsto e \\mapsto f \\, (r \\, e) \\, e\n\\end{array}\n"
},
{
"math_id": 1,
"text": "\\begin{array}{ll}\n\\text{ask} \\colon & E \\rarr E = \\text{id}_E \\\\\n\\text{local} \\colon & (E \\rarr E) \\rarr (E \\rarr T) \\rarr E \\rarr T = f \\mapsto c \\mapsto e \\mapsto c \\, (f \\, e)\n\\end{array}\n"
},
{
"math_id": 2,
"text": "S \\rarr T \\times S"
},
{
"math_id": 3,
"text": "\\begin{array}{ll}\n\\text{return} \\colon & T \\rarr S \\rarr T \\times S = t \\mapsto s \\mapsto (t, s) \\\\\n\\text{bind} \\colon & (S \\rarr T \\times S) \\rarr (T \\rarr S \\rarr T' \\times S) \\rarr S \\rarr T' \\times S \\ = m \\mapsto k \\mapsto s \\mapsto (k \\ t \\ s') \\quad \\text{where} \\; (t, s') = m \\, s\n\\end{array}\n"
},
{
"math_id": 4,
"text": "\\left(T \\rarr R \\right) \\rarr R"
},
{
"math_id": 5,
"text": "\\begin{array}{ll}\n\\text{return} \\colon &T \\rarr \\left(T \\rarr R \\right) \\rarr R = t \\mapsto f \\mapsto f \\, t\\\\\n\\text{bind} \\colon &\\left(\\left(T \\rarr R \\right) \\rarr R \\right) \\rarr \\left(T \\rarr \\left(T' \\rarr R \\right) \\rarr R \\right) \\rarr \\left(T' \\rarr R \\right) \\rarr R = c \\mapsto f \\mapsto k \\mapsto c \\, \\left(t \\mapsto f \\, t \\, k \\right)\n\\end{array}"
},
{
"math_id": 6,
"text": "\\text{call/cc} \\colon \\ \\left(\\left(T \\rarr \\left(T' \\rarr R \\right) \\rarr R \\right) \\rarr \\left(T \\rarr R \\right) \\rarr R \\right) \\rarr \\left(T \\rarr R \\right) \\rarr R = f \\mapsto k \\mapsto \\left(f \\left(t \\mapsto x \\mapsto k \\, t \\right) \\, k \\right)"
}
] |
https://en.wikipedia.org/wiki?curid=579061
|
57907267
|
Mixed binomial process
|
A mixed binomial process is a special point process in probability theory. Such processes arise naturally as restrictions of (mixed) Poisson processes to bounded intervals.
Definition.
Let formula_0 be a probability distribution and let formula_1 be i.i.d. random variables with distribution formula_0. Let formula_2 be a random variable taking a.s. (almost surely) values in formula_3. Assume that formula_4 are independent and let formula_5 denote the Dirac measure on the point formula_6.
Then a random measure formula_7 is called a mixed binomial process iff it has a representation as
formula_8
This is equivalent to formula_7 conditionally on formula_9 being a binomial process based on formula_10 and formula_0.
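As an illustration only (not part of the formal definition), one realization of such a process can be simulated by first drawing formula_2 and then drawing that many i.i.d. points from formula_0; the Haskell sketch below arbitrarily takes formula_2 uniform on {0,…,10} and formula_0 the uniform distribution on [0,1]:
import Control.Monad (replicateM)
import System.Random (randomRIO)

simulateMixedBinomial :: IO [Double]
simulateMixedBinomial = do
  k <- randomRIO (0, 10 :: Int)       -- draw the random number of points K
  replicateM k (randomRIO (0, 1))     -- K i.i.d. points with distribution P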
Properties.
Laplace transform.
Conditional on formula_11, a mixed binomial process has the Laplace transform
formula_12
for any positive, measurable function formula_13.
Restriction to bounded sets.
For a point process formula_7 and a bounded measurable set formula_14, define the restriction of formula_7 to formula_14 as
formula_15.
Mixed binomial processes are stable under restrictions in the sense that if formula_7 is a mixed binomial process based on formula_0 and formula_2, then formula_16 is a mixed binomial process based on
formula_17
and some random variable formula_18.
Also if formula_7 is a Poisson process or a mixed Poisson process, then formula_16 is a mixed binomial process.
Examples.
Poisson-type random measures are a family of three random counting measures that are closed under restriction to a subspace (i.e., closed under thinning) and are examples of mixed binomial processes. They are the only distributions in the canonical non-negative power series family of distributions to possess this property and include the Poisson distribution, negative binomial distribution, and binomial distribution. Poisson-type (PT) random measures include the Poisson random measure, negative binomial random measure, and binomial random measure.
|
[
{
"math_id": 0,
"text": " P "
},
{
"math_id": 1,
"text": " X_i, X_2, \\dots "
},
{
"math_id": 2,
"text": " K "
},
{
"math_id": 3,
"text": " \\mathbb N= \\{0,1,2, \\dots \\} "
},
{
"math_id": 4,
"text": " K, X_1, X_2, \\dots "
},
{
"math_id": 5,
"text": " \\delta_x "
},
{
"math_id": 6,
"text": " x "
},
{
"math_id": 7,
"text": " \\xi "
},
{
"math_id": 8,
"text": " \\xi= \\sum_{i=0}^K \\delta_{X_i} "
},
{
"math_id": 9,
"text": "\\{ K =n \\}"
},
{
"math_id": 10,
"text": "n "
},
{
"math_id": 11,
"text": " K=n "
},
{
"math_id": 12,
"text": " \\mathcal L(f)= \\left( \\int \\exp(-f(x))\\; P(\\mathrm dx)\\right)^n "
},
{
"math_id": 13,
"text": " f "
},
{
"math_id": 14,
"text": " B "
},
{
"math_id": 15,
"text": " \\xi_B(\\cdot )= \\xi(B \\cap \\cdot) "
},
{
"math_id": 16,
"text": " \\xi_B "
},
{
"math_id": 17,
"text": " P_B(\\cdot)= \\frac{P(B \\cap \\cdot)}{P(B)} "
},
{
"math_id": 18,
"text": " \\tilde K "
}
] |
https://en.wikipedia.org/wiki?curid=57907267
|
57910947
|
FIFA World Ranking system (2006–2018)
|
The 2006–2018 FIFA men's ranking system was a calculation technique previously used by FIFA for ranking men's national teams in football. The ranking system was introduced by FIFA after the 2006 FIFA World Cup, as an update to an earlier system, and was replaced after the 2018 World Cup with a revised Elo-based system.
The system, like the previous ones, closely resembles a league format, with adjustments made to keep it representative of team performance even though teams play differing numbers of matches per year and face opposition of differing strength. The factors taken into account are as follows:
Teams' actual scores are a result of the average points gained over each calendar year; matches from the previous four years are considered, with more weight being given to recent ones.
Origin.
The new rankings were compiled in response to criticism from the media. Meetings were attended by FIFA staff and external experts and a large amount of research was conducted by this group, resulting in the new ranking system. The new system was confirmed in Leipzig on 7 December 2005 by a committee of FIFA executives. Notable changes include the dropping of the home or away advantage and number of goals from the calculation, and the simplification of many aspects of the system.
International "A" matches.
In October 2012, FIFA released a press circular defining what is considered to be an international "A" match.
<templatestyles src="Template:Blockquote/styles.css" />For the purposes of the ranking, FIFA defines an international "A" match as a match between two FIFA members for which both members field their first representative team ("A" team).
The FIFA/Coca-Cola World Ranking is based on a list of all international "A" matches that are recognised by FIFA.
International "A" matches include matches played as part of the FIFA World Cup, FIFA World Cup qualifiers, FIFA Confederations Cup, continental final tournaments, continental qualifying competitions and international friendlies.
Win, draw or defeat.
In previous years a complicated system of points allocation was used, depending on how strong the opponent was, and how large the loss margin, which allowed weaker losing teams to gain points when playing a much stronger opposition, if they managed to put up a decent match. With the new system, the points allocation is simpler: three points for a win, one point for a draw, and zero points for a loss, in line with most league systems around the world.
In the event of a match being decided by a penalty shootout, the winning team receives two points, and the losing team one point.
Until November 2012, in two-legged play-offs, if Team A lost the first leg 2 – 0, then matched the result in the return leg and won after a penalty shootout, it received two points. However, if Team A won the return leg by one goal only, being eliminated in the process, it received 3 points. FIFA fixed this flaw starting with the November 2012 ranking.
Match status.
Different matches have different importance to teams, and FIFA has tried to respect this by using a weighting system, where the most significant matches are in the World Cup finals, and the lowest weighted are friendly matches. FIFA states that it wishes to recognise that friendlies are still important, since they make up half of the competitive matches counted in the rankings. FIFA also stated, however, that it did not plan to make any adjustment for teams that qualify directly for major tournaments.
The match status multipliers are as follows:
Opponent strength.
A win against a very highly ranked opponent is a considerably greater achievement than a win against a low-rated opponent, thus the strength of the opposing team is an important factor.
The new system uses an opposition strength factor based on team rankings. The previous system was based on points difference.
The formula used is:
formula_0
with the exceptions that the team ranked #1 is given a multiplier of 2, and teams ranked 150th and below are assigned the minimum multiplier of 0.5.
The ranking position is taken from the opposition's ranking in the most recently published FIFA World Ranking before the match is included in the ranking calculation.
The rankings published before July 2006 are purely historical and are not used for the new ranking calculation. Instead, FIFA went back as far as 1996 to apply the new formula and is using those new rankings for the current calculations.
See the detailed break-down of point totals for teams from the top 20 in the October 2007 rankings.
Regional strength.
In addition to the opposition strength multiplier, FIFA considers the relative strength of entire confederations in the calculation. Each confederation is assigned a weighting between 0.85 and 1.0, based on the relative performance of the confederations in the last three World Cups. Their values are as follows:
The multiplier used in the calculation is the average of the regional strength weighting of the two teams:
formula_1
FIFA changed the formula used to compute the confederation weightings after the 2010 FIFA World Cup without public announcement. Without this modification, UEFA's multiplier would have dropped for the first time below 1, with CONMEBOL remaining the only confederation with a multiplier of 1.
The confederation weighting for AFC was increased in August 2011 from 0.85 to 0.86 after a computer programmer found an error in FIFA's calculations.
Assessment period.
Matches played over the last four years (48 months) are included in the calculation, but there is a weighting to put more emphasis on recent results. Previously an eight-year period was used. The date weighting is as follows:
If a team exceeds the assessment period without playing a match, it is temporarily removed from the rankings, and is reinstated as soon as it plays a match again. The most recent team to be temporarily absent from the rankings is São Tomé and Príncipe (reinstated in November 2011, after having been removed in December 2007).
Ranking formula.
The final ranking points figure for a single match is multiplied by 100 and rounded to the nearest whole number.
formula_2
Results for all matches played in the year are averaged together (assuming at least five matches have been played). The average ranking points for the four previous years, weighted by their multiplier mentioned above, are added together to arrive at the final ranking points.
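The per-match computation can be written down directly; the sketch below is illustrative only, with the result points, match-status multiplier, opponent's rank, and the two confederation weightings all supplied by the caller, and follows the formula and the opposition-strength rule above:
oppositionStrength :: Int -> Double
oppositionStrength rank
  | rank == 1   = 2.0                              -- special case for the top-ranked team
  | rank >= 150 = 0.5                              -- floor for teams ranked 150th and below
  | otherwise   = fromIntegral (200 - rank) / 100

-- Single-match ranking points, multiplied by 100 and rounded as described above.
matchPoints :: Double -> Double -> Int -> Double -> Double -> Int
matchPoints resultPts status oppRank conf1 conf2 =
  round (resultPts * status * oppositionStrength oppRank * ((conf1 + conf2) / 2) * 100)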
Examples.
The following examples use these hypothetical teams and confederations, and assume the games are played within the last 12 months:
A friendly match is played between Amplistan and Bestrudia. Amplistan wins 2–1.
Bestrudia gets no ranking points because it lost the game, so all factors are multiplied by zero.
Amplistan's 141 ranking points are calculated like this:
More examples:
Conesto gets more points than Bestrudia for defeating the same team (Amplistan) because of the higher weighting of its confederation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{Opposition strength multiplier} = ({200-\\text{ranking position})/100}"
},
{
"math_id": 1,
"text": "\\text{Regional strength multiplier} = \\frac{\\text{Team 1 regional weighting} + \\text{Team 2 regional weighting}}{2}"
},
{
"math_id": 2,
"text": "\\text{Ranking points} = \\text{Result points} \\times \\text{Match status} \\times \\text{Opposition strength} \\times \\text{Regional strength}"
}
] |
https://en.wikipedia.org/wiki?curid=57910947
|
5791679
|
Mechanical singularity
|
In engineering, a mechanical singularity is a position or configuration of a mechanism or a machine where the subsequent behaviour cannot be predicted, or the forces or other physical quantities involved become infinite or nondeterministic.
When the underlying engineering equations of a mechanism or machine are evaluated at the singular configuration (if any exists), then those equations exhibit mathematical singularity.
Examples of mechanical singularities are gimbal lock and in static mechanical analysis, an under-constrained system.
Types of singularities.
There are three types of singularities that can be found in mechanisms: direct-kinematics singularities, inverse-kinematics singularities, and combined singularities. These singularities occur when one or both Jacobian matrices of the mechanism become singular or rank-deficient. The relationship between the input and output velocities of the mechanism is defined by the following general equation:
formula_0
where formula_1 is the vector of output velocities, formula_2 is the vector of input velocities, formula_3 is the direct-kinematics Jacobian, and formula_4 is the inverse-kinematics Jacobian.
Type-I: Inverse-kinematics singularities.
This first kind of singularity occurs when:
formula_5
Type-II: Direct-kinematics singularities.
This second kind of singularity occurs when:
formula_6
Type-III: Combined singularities.
This kind of singularity occurs when, for a particular configuration, both formula_7 and formula_8 become singular simultaneously.
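As a simple illustration (not taken from the references), consider a planar two-link serial arm with link lengths l1, l2 and joint angle t2 at the elbow: its forward kinematics give output velocities equal to J(q) times the joint velocities, so one may take formula_3 as the identity and formula_4 = −J, and the vanishing of det(J) = l1·l2·sin(t2) then signals a Type-I singularity (arm fully stretched or folded). A Haskell sketch of this check:
detJ :: Double -> Double -> Double -> Double
detJ l1 l2 t2 = l1 * l2 * sin t2                -- determinant of the 2R arm's Jacobian

isTypeI :: Double -> Double -> Double -> Bool
isTypeI l1 l2 t2 = abs (detJ l1 l2 t2) < 1e-9   -- true at t2 = 0 or t2 = pi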
|
[
{
"math_id": 0,
"text": "\\textbf{A}\\dot{\\textbf{x}}+\\textbf{B}\\dot{\\textbf{q}}=\\textbf{0}\n"
},
{
"math_id": 1,
"text": "\\dot{\\textbf{x}}\n"
},
{
"math_id": 2,
"text": "\\dot{\\textbf{q}}\n"
},
{
"math_id": 3,
"text": "\\textbf{A}\n"
},
{
"math_id": 4,
"text": "\\textbf{B}\n"
},
{
"math_id": 5,
"text": "\\det(\\textbf{B})=0\n"
},
{
"math_id": 6,
"text": "\\det(\\textbf{A})=0\n"
},
{
"math_id": 7,
"text": "\\textbf {A}\n"
},
{
"math_id": 8,
"text": "\\textbf {B} \n"
}
] |
https://en.wikipedia.org/wiki?curid=5791679
|
57927311
|
Eduard Wirsing
|
German mathematician (1931–2022)
Eduard Wirsing (28 June 1931 – 22 March 2022) was a German mathematician, specializing in number theory.
Biography.
Wirsing was born on 28 June 1931 in Berlin.
Wirsing studied at the University of Göttingen and the Free University of Berlin, where he received his doctorate in 1957 under the supervision of Hans-Heinrich Ostmann with the thesis "Über wesentliche Komponenten in der additiven Zahlentheorie" (On Essential Components in Additive Number Theory). In 1967/68 he was a professor at Cornell University, and from 1969 a full professor at the University of Marburg, where he had been since 1965. In 1970/71 he was at the Institute for Advanced Study. From 1974 he was a professor at the University of Ulm, where he led the 1976 Mathematical Colloquium. He retired as professor emeritus in 1999, but continued to be mathematically active.
Wirsing organized conferences on analytical number theory at the Oberwolfach Research Institute for Mathematics.
In his spare time he played go and chess, played alto recorder, and made electronic devices.
Wirsing died on 22 March 2022.
Research.
In 1960 he proved for algebraic number fields a generalization of Roth's 1955 Thue-Siegel-Roth theorem:
Let formula_0 be algebraic of degree formula_1; then there are only finitely many algebraic numbers formula_2 of degree "n" such that
formula_3 for arbitrarily small positive formula_4, where formula_5 is the height of formula_2.
The exponent on the right was improved to "n+1" (replacing "2n") by Wolfgang M. Schmidt in 1970.
In 1961 Wirsing proved a theorem about the asymptotic means of non-negative multiplicative functions, showing that under certain conditions these are essentially determined by the functions' values at the prime numbers (and not also by their values at higher prime powers). In 1967 he sharpened his theorem and proved a conjecture of Paul Erdős (every multiplicative function that takes only the values 1 and formula_6 has a mean value).
In 1956, with Alfred Stöhr, Wirsing gave simpler examples (than the example given by Yuri Linnik in 1942) demonstrating that there are essential components that are not additive bases.
In 1957 he, with Bernhard Hornfeck, gave an asymptotic estimate for the density of perfect numbers. In 1959 Wirsing gave an asymptotic estimate for the density of multiply perfect numbers.
He gave in 1962 an elementary proof of a sharpened form of the prime-number theorem (with remainder). (In this context, "elementary" means "not using methods from complex function theory".) About the same time, similar results were published by Robert Breusch (1960) and Enrico Bombieri (1962). Elementary proofs of the prime number theorem were first published by Paul Erdős and Atle Selberg in 1949.
Wirsing is also known for his work on the Gauss–Kuzmin–Lévy distribution (named after Carl Friedrich Gauss, Rodion Kuzmin, and Paul Lévy). He gave asymptotic estimates for the distribution of the coefficients of the regular continued fraction expansion of a random variable uniformly distributed in the unit interval. In this context, he also introduced a universal mathematical constant (the Gauss–Kuzmin–Wirsing constant).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\geq 3"
},
{
"math_id": 2,
"text": "\\beta"
},
{
"math_id": 3,
"text": "\\left|\\alpha - \\beta \\right| < H (\\beta)^{-(2 n + \\varepsilon)}"
},
{
"math_id": 4,
"text": "\\varepsilon"
},
{
"math_id": 5,
"text": "H(\\beta)"
},
{
"math_id": 6,
"text": "-1"
}
] |
https://en.wikipedia.org/wiki?curid=57927311
|
57927962
|
NGC 1266
|
Galaxy in the constellation Eridanus
NGC 1266 is a lenticular galaxy in the constellation Eridanus. Although not currently starbursting, it has undergone a period of intense star formation in the recent past, ceasing only ≈500 Myr ago. The galaxy is host to an obscured active galactic nucleus.
A massive molecular outflow, containing 2.4 × 10^7 Mformula_0 of hydrogen, emanates from the nucleus of the galaxy at a rate of 110 Mformula_0 yr^−1. Less than 2% of the gas (2 Mformula_0 yr^−1) is escaping the galaxy. Momentum coupling to the jet of the AGN is likely driving the outflow.
The current observed star-formation rate (SFR) of ~0.87 Mformula_0 yr^−1 is significantly lower than expected for a galaxy of its properties, suppressed by a factor of 50 to 150. Authors have put forth several hypotheses to explain these observations. The most likely scenario is that the AGN-driven molecular outflow is injecting turbulence into the nuclear regions, preventing gravitational collapse of molecular clouds. NGC 1266 is the first known intermediate-mass galaxy to show AGN-driven suppression of star formation.
Two hypotheses exist to explain NGC 1266's nuclear activity and excessive far-IR emission. Either a heavily obscured ultracompact starburst is present in the nuclear regions, or a powerful buried AGN is present, beyond what has been inferred from other observations. Neither scenario is without problems. The black hole at the center of the galaxy is likely growing according to the M–sigma relation, and eventually the outflow will result in the removal of the majority of the gas from the nucleus.
References.
<templatestyles src="Reflist/styles.css" />
External links.
Coordinates: 03h 16m 00.7s, −02° 25′ 38″
|
[
{
"math_id": 0,
"text": "_{\\odot}"
}
] |
https://en.wikipedia.org/wiki?curid=57927962
|
5793
|
Cumulative distribution function
|
Probability that random variable X is less than or equal to x
In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable formula_0, or just distribution function of formula_0, evaluated at formula_1, is the probability that formula_0 will take a value less than or equal to formula_1.
Every probability distribution supported on the real numbers, discrete or "mixed" as well as continuous, is uniquely identified by a right-continuous monotone increasing function (a càdlàg function) formula_2 satisfying formula_3 and formula_4.
In the case of a scalar continuous distribution, it gives the area under the probability density function from negative infinity to formula_1. Cumulative distribution functions are also used to specify the distribution of multivariate random variables.
Definition.
The cumulative distribution function of a real-valued random variable formula_0 is the function given by
F_X(x) = P(X ≤ x),
where the right-hand side represents the probability that the random variable formula_0 takes on a value less than or equal to formula_1.
The probability that formula_0 lies in the semi-closed interval formula_5, where formula_6, is therefore
P(a < X ≤ b) = F_X(b) − F_X(a).
In the definition above, the "less than or equal to" sign, "≤", is a convention, not a universally used one (e.g. Hungarian literature uses "<"), but the distinction is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation.
If treating several random variables formula_7 etc. the corresponding letters are used as subscripts while, if treating only one, the subscript is usually omitted. It is conventional to use a capital formula_8 for a cumulative distribution function, in contrast to the lower-case formula_9 used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution uses formula_10 and formula_11 instead of formula_8 and formula_9, respectively.
The probability density function of a continuous random variable can be determined from the cumulative distribution function by differentiating using the Fundamental Theorem of Calculus; i.e. given formula_12,
formula_13
as long as the derivative exists.
The CDF of a continuous random variable formula_0 can be expressed as the integral of its probability density function formula_14 as follows:
formula_15
In the case of a random variable formula_0 which has distribution having a discrete component at a value formula_16,
formula_17
If formula_18 is continuous at formula_16, this equals zero and there is no discrete component at formula_16.
Properties.
Every cumulative distribution function formula_18 is non-decreasing and right-continuous, which makes it a càdlàg function. Furthermore,
formula_19
Every function with these three properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable.
If formula_0 is a purely discrete random variable, then it attains values formula_20 with probability formula_21, and the CDF of formula_0 will be discontinuous at the points formula_22:
formula_23
If the CDF formula_18 of a real valued random variable formula_0 is continuous, then formula_0 is a continuous random variable; if furthermore formula_18 is absolutely continuous, then there exists a Lebesgue-integrable function formula_24 such that
formula_25
for all real numbers formula_26 and formula_16. The function formula_14 is equal to the derivative of formula_18 almost everywhere, and it is called the probability density function of the distribution of formula_0.
If formula_0 has finite L1-norm, that is, the expectation of formula_27 is finite, then the expectation is given by the Riemann–Stieltjes integral
formula_28
and for any formula_29,
formula_30
as well as
formula_31
as shown in the diagram with the two red rectangles. In particular, we have
formula_32
In addition, the (finite) expected value of the real-valued random variable formula_0 can be defined on the graph of its cumulative distribution function as illustrated by the drawing in the definition of expected value for arbitrary real-valued random variables.
Examples.
As an example, suppose formula_0 is uniformly distributed on the unit interval formula_33.
Then the CDF of formula_0 is given by
formula_34
Suppose instead that formula_0 takes only the discrete values 0 and 1, with equal probability.
Then the CDF of formula_0 is given by
formula_35
Suppose formula_0 is exponentially distributed. Then the CDF of formula_0 is given by
formula_36
Here "λ" > 0 is the parameter of the distribution, often called the rate parameter.
Suppose formula_0 is normally distributed. Then the CDF of formula_0 is given by
formula_37
Here the parameter formula_38 is the mean or expectation of the distribution; and formula_39 is its standard deviation.
A table of the CDF of the standard normal distribution is often used in statistical applications, where it is named the standard normal table, the unit normal table, or the Z table.
Suppose formula_0 is binomially distributed. Then the CDF of formula_0 is given by
formula_40
Here formula_41 is the probability of success, the function denotes the discrete probability distribution of the number of successes in a sequence of formula_42 independent experiments, and formula_43 is the "floor" under formula_44, i.e. the greatest integer less than or equal to formula_44.
Derived functions.
Complementary cumulative distribution function (tail distribution).
Sometimes, it is useful to study the opposite question and ask how often the random variable is "above" a particular level. This is called the complementary cumulative distribution function (ccdf) or simply the tail distribution or exceedance, and is defined as
formula_45
This has applications in statistical hypothesis testing, for example, because the one-sided p-value is the probability of observing a test statistic "at least" as extreme as the one observed. Thus, provided that the test statistic, "T", has a continuous distribution, the one-sided p-value is simply given by the ccdf: for an observed value formula_46 of the test statistic
formula_47
In survival analysis, formula_48 is called the survival function and denoted formula_49, while the term "reliability function" is common in engineering.
Folded cumulative distribution.
While the plot of a cumulative distribution formula_8 often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over, that is
formula_60
where formula_61 denotes the indicator function and the second summand is the survivor function, thus using two scales, one for the upslope and another for the downslope. This form of illustration emphasises the median, dispersion (specifically, the mean absolute deviation from the median) and skewness of the distribution or of the empirical results.
Inverse distribution function (quantile function).
If the CDF "F" is strictly increasing and continuous then formula_62 is the unique real number formula_63 such that formula_64. This defines the inverse distribution function or quantile function.
Some distributions do not have a unique inverse (for example if formula_65 for all formula_66, causing formula_18 to be constant). In this case, one may use the generalized inverse distribution function, which is defined as
formula_67
Some useful properties of the inverse cdf (which are also preserved in the definition of the generalized inverse distribution function) are: formula_71 is nondecreasing; formula_72; formula_73; formula_74 if and only if formula_75; and if formula_76 has a formula_77 distribution, then formula_78 has cdf formula_8.
The inverse of the cdf can be used to translate results obtained for the uniform distribution to other distributions.
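A minimal Python sketch of this translation (inverse transform sampling); the exponential target and the rate 2.0 are arbitrary example choices:

```python
import math
import random

def exponential_quantile(p, rate):
    """Inverse CDF (quantile function) of the exponential distribution."""
    return -math.log(1.0 - p) / rate

random.seed(0)
# If U is uniform on [0, 1), then exponential_quantile(U, rate) follows the exponential law.
samples = [exponential_quantile(random.random(), rate=2.0) for _ in range(100000)]
print(sum(samples) / len(samples))  # close to the exponential mean 1/rate = 0.5
```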
Empirical distribution function.
The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.
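A minimal Python sketch of the empirical distribution function for a small made-up sample:

```python
def ecdf(sample, x):
    """Empirical distribution function: the fraction of observations less than or equal to x."""
    return sum(1 for s in sample if s <= x) / len(sample)

data = [0.2, 0.5, 0.5, 0.9, 1.4]
print(ecdf(data, 0.5))  # 0.6
print(ecdf(data, 2.0))  # 1.0
```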
Multivariate case.
Definition for two random variables.
When dealing simultaneously with more than one random variable the joint cumulative distribution function can also be defined. For example, for a pair of random variables formula_84, the joint CDF formula_85 is given by
F_{X,Y}(x, y) = \operatorname{P}(X \leq x, Y \leq y),
where the right-hand side represents the probability that the random variable formula_0 takes on a value less than or equal to formula_1 and that formula_76 takes on a value less than or equal to formula_86.
Example of joint cumulative distribution function:
For two continuous variables "X" and "Y": formula_87
For two discrete random variables, it is beneficial to generate a table of probabilities and address the cumulative probability for each potential range of "X" and "Y", and here is the example:
given the joint probability mass function in tabular form, determine the joint cumulative distribution function.
Solution: using the given table of probabilities for each potential range of "X" and "Y", the joint cumulative distribution function may be constructed in tabular form:
Definition for more than two random variables.
For formula_88 random variables formula_89, the joint CDF formula_90 is given by
F_{X_1,\ldots,X_N}(x_1,\ldots,x_N) = \operatorname{P}(X_1 \leq x_1,\ldots,X_N \leq x_N).
Interpreting the formula_88 random variables as a random vector formula_91 yields a shorter notation:
formula_92
Properties.
Every multivariate CDF is monotonically non-decreasing in each of its variables and right-continuous in each of its variables, and it satisfies formula_93 and formula_94.
Not every function satisfying the above four properties is a multivariate CDF, unlike in the single dimension case. For example, let formula_95 for formula_96 or formula_97 or formula_98 and let formula_99 otherwise. It is easy to see that the above conditions are met, and yet formula_8 is not a CDF since if it was, then formula_100 as explained below.
The probability that a point belongs to a hyperrectangle is analogous to the 1-dimensional case:
formula_101
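A short Python check of this rectangle formula applied to the counterexample above, using the points 1/3 and 1 from the text:

```python
def F(x, y):
    """The counterexample function: 0 below the line x + y = 1 or for negative arguments, 1 otherwise."""
    return 0.0 if (x < 0 or y < 0 or x + y < 1) else 1.0

a, b, c, d = 1/3, 1.0, 1/3, 1.0
rectangle_probability = F(b, d) - F(a, d) - F(b, c) + F(a, c)
print(rectangle_probability)  # -1.0, which no probability can be, so F is not a CDF
```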
Complex case.
Complex random variable.
The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form formula_102 make no sense. However expressions of the form formula_103 make sense. Therefore, we define the cumulative distribution of complex random variables via the joint distribution of their real and imaginary parts:
formula_104
Complex random vector.
Generalization of the definition above yields
formula_105
as the definition for the CDF of a complex random vector formula_106.
Use in statistical analysis.
The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests. Such tests can assess whether there is evidence against a sample of data having arisen from a given distribution, or evidence against two samples of data having arisen from the same (unknown) population distribution.
Kolmogorov–Smirnov and Kuiper's tests.
The Kolmogorov–Smirnov test is based on cumulative distribution functions and can be used to test to see whether two empirical distributions are different or whether an empirical distribution is different from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic as in day of the week. For instance Kuiper's test might be used to see if the number of tornadoes varies during the year or if sales of a product vary by day of the week or day of the month.
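An illustrative sketch using SciPy (the simulated data and the choice of SciPy routines are assumptions of the example, not part of the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)  # simulated data

# One-sample test: is the empirical distribution consistent with a standard normal?
print(stats.kstest(sample, "norm"))

# Two-sample test: could these two empirical distributions come from the same population?
other = rng.normal(loc=0.3, scale=1.0, size=200)
print(stats.ks_2samp(sample, other))
```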
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "F \\colon \\mathbb R \\rightarrow [0,1]"
},
{
"math_id": 3,
"text": "\\lim_{x\\rightarrow-\\infty}F(x)=0"
},
{
"math_id": 4,
"text": "\\lim_{x\\rightarrow\\infty}F(x)=1"
},
{
"math_id": 5,
"text": "(a,b]"
},
{
"math_id": 6,
"text": "a < b"
},
{
"math_id": 7,
"text": "X, Y, \\ldots"
},
{
"math_id": 8,
"text": "F"
},
{
"math_id": 9,
"text": "f"
},
{
"math_id": 10,
"text": "\\Phi"
},
{
"math_id": 11,
"text": "\\phi"
},
{
"math_id": 12,
"text": "F(x)"
},
{
"math_id": 13,
"text": "f(x) = \\frac{dF(x)}{dx}"
},
{
"math_id": 14,
"text": "f_X"
},
{
"math_id": 15,
"text": "F_X(x) = \\int_{-\\infty}^x f_X(t) \\, dt."
},
{
"math_id": 16,
"text": "b"
},
{
"math_id": 17,
"text": "\\operatorname{P}(X=b) = F_X(b) - \\lim_{x \\to b^-} F_X(x)."
},
{
"math_id": 18,
"text": "F_X"
},
{
"math_id": 19,
"text": "\\lim_{x \\to -\\infty} F_X(x) = 0, \\quad \\lim_{x \\to +\\infty} F_X(x) = 1."
},
{
"math_id": 20,
"text": "x_1,x_2,\\ldots"
},
{
"math_id": 21,
"text": "p_i = p(x_i)"
},
{
"math_id": 22,
"text": "x_i"
},
{
"math_id": 23,
"text": "F_X(x) = \\operatorname{P}(X\\leq x) = \\sum_{x_i \\leq x} \\operatorname{P}(X = x_i) = \\sum_{x_i \\leq x} p(x_i)."
},
{
"math_id": 24,
"text": "f_X(x)"
},
{
"math_id": 25,
"text": "F_X(b)-F_X(a) = \\operatorname{P}(a< X\\leq b) = \\int_a^b f_X(x)\\,dx"
},
{
"math_id": 26,
"text": "a"
},
{
"math_id": 27,
"text": "|X|"
},
{
"math_id": 28,
"text": "\n\\mathbb E[X] = \\int_{-\\infty}^\\infty t\\,dF_X(t)\n"
},
{
"math_id": 29,
"text": "x \\geq 0"
},
{
"math_id": 30,
"text": "\nx (1-F_X(x)) \\leq \\int_x^{\\infty} t\\,dF_X(t)\n"
},
{
"math_id": 31,
"text": "\nx F_X(-x) \\leq \\int_{-\\infty}^{-x} (-t)\\,dF_X(t)\n"
},
{
"math_id": 32,
"text": "\n\\lim_{x \\to -\\infty} x F(x) = 0, \\quad \\lim_{x \\to +\\infty} x (1-F(x)) = 0.\n"
},
{
"math_id": 33,
"text": "[0,1]"
},
{
"math_id": 34,
"text": "F_X(x) = \\begin{cases}\n0 &:\\ x < 0\\\\\nx &:\\ 0 \\le x \\le 1\\\\\n1 &:\\ x > 1\n\\end{cases}"
},
{
"math_id": 35,
"text": "F_X(x) = \\begin{cases}\n0 &:\\ x < 0\\\\\n1/2 &:\\ 0 \\le x < 1\\\\\n1 &:\\ x \\ge 1\n\\end{cases}"
},
{
"math_id": 36,
"text": "F_X(x;\\lambda) = \\begin{cases}\n1-e^{-\\lambda x} & x \\ge 0, \\\\\n0 & x < 0.\n\\end{cases}"
},
{
"math_id": 37,
"text": "F(x;\\mu,\\sigma) = \\frac{1}{\\sigma\\sqrt{2\\pi}} \\int_{-\\infty}^x \\exp \\left( -\\frac{(t - \\mu)^2}{2\\sigma^2} \\right)\\, dt. "
},
{
"math_id": 38,
"text": "\\mu"
},
{
"math_id": 39,
"text": "\\sigma"
},
{
"math_id": 40,
"text": "F(k;n,p) = \\Pr(X\\leq k) = \\sum _{i=0}^{\\lfloor k\\rfloor }{n \\choose i} p^{i} (1-p)^{n-i}"
},
{
"math_id": 41,
"text": "p"
},
{
"math_id": 42,
"text": "n"
},
{
"math_id": 43,
"text": "\\lfloor k\\rfloor"
},
{
"math_id": 44,
"text": "k"
},
{
"math_id": 45,
"text": "\\bar F_X(x) = \\operatorname{P}(X > x) = 1 - F_X(x)."
},
{
"math_id": 46,
"text": "t"
},
{
"math_id": 47,
"text": "p= \\operatorname{P}(T \\ge t) = \\operatorname{P}(T > t) = 1 - F_T(t)."
},
{
"math_id": 48,
"text": "\\bar F_X(x)"
},
{
"math_id": 49,
"text": "S(x)"
},
{
"math_id": 50,
"text": "\\bar F_X(x) \\leq \\frac{\\operatorname{E}(X)}{x} ."
},
{
"math_id": 51,
"text": "x \\to \\infty, \\bar F_X(x) \\to 0"
},
{
"math_id": 52,
"text": "\\bar F_X(x) = o(1/x)"
},
{
"math_id": 53,
"text": "\\operatorname{E}(X)"
},
{
"math_id": 54,
"text": "c > 0"
},
{
"math_id": 55,
"text": "\n\\operatorname{E}(X) = \\int_0^\\infty x f_X(x) \\, dx \\geq \\int_0^c x f_X(x) \\, dx + c\\int_c^\\infty f_X(x) \\, dx\n"
},
{
"math_id": 56,
"text": "\\bar F_X(c) = \\int_c^\\infty f_X(x) \\, dx"
},
{
"math_id": 57,
"text": "\n0 \\leq c\\bar F_X(c) \\leq \\operatorname{E}(X) - \\int_0^c x f_X(x) \\, dx \\to 0 \\text{ as } c \\to \\infty\n"
},
{
"math_id": 58,
"text": "\\operatorname{E}(X) = \\int_0^\\infty \\bar F_X(x) \\, dx - \\int_{-\\infty}^0 F_X(x) \\, dx"
},
{
"math_id": 59,
"text": "\\operatorname{E}(X) = \\sum_{n=0}^\\infty \\bar F_X(n)."
},
{
"math_id": 60,
"text": "F_\\text{fold}(x)=F(x)1_{\\{F(x)\\leq 0.5\\}}+(1-F(x))1_{\\{F(x)>0.5\\}}"
},
{
"math_id": 61,
"text": "1_{\\{A\\}}"
},
{
"math_id": 62,
"text": " F^{-1}( p ), p \\in [0,1], "
},
{
"math_id": 63,
"text": " x "
},
{
"math_id": 64,
"text": " F(x) = p "
},
{
"math_id": 65,
"text": "f_X(x)=0"
},
{
"math_id": 66,
"text": "a<x<b"
},
{
"math_id": 67,
"text": "\nF^{-1}(p) = \\inf \\{x \\in \\mathbb{R}: F(x) \\geq p \\}, \\quad \\forall p \\in [0,1].\n"
},
{
"math_id": 68,
"text": "F^{-1}( 0.5 )"
},
{
"math_id": 69,
"text": " \\tau = F^{-1}( 0.95 ) "
},
{
"math_id": 70,
"text": " \\tau "
},
{
"math_id": 71,
"text": "F^{-1}"
},
{
"math_id": 72,
"text": "F^{-1}(F(x)) \\leq x"
},
{
"math_id": 73,
"text": "F(F^{-1}(p)) \\geq p"
},
{
"math_id": 74,
"text": "F^{-1}(p) \\leq x"
},
{
"math_id": 75,
"text": "p \\leq F(x)"
},
{
"math_id": 76,
"text": "Y"
},
{
"math_id": 77,
"text": "U[0, 1]"
},
{
"math_id": 78,
"text": "F^{-1}(Y)"
},
{
"math_id": 79,
"text": "\\{X_\\alpha\\}"
},
{
"math_id": 80,
"text": "Y_\\alpha"
},
{
"math_id": 81,
"text": "U[0,1]"
},
{
"math_id": 82,
"text": "F^{-1}(Y_\\alpha) = X_\\alpha"
},
{
"math_id": 83,
"text": "\\alpha"
},
{
"math_id": 84,
"text": "X,Y"
},
{
"math_id": 85,
"text": "F_{XY}"
},
{
"math_id": 86,
"text": "y"
},
{
"math_id": 87,
"text": " \\Pr(a < X < b \\text{ and } c < Y < d) = \\int_a^b \\int_c^d f(x,y) \\, dy \\, dx;"
},
{
"math_id": 88,
"text": "N"
},
{
"math_id": 89,
"text": "X_1,\\ldots,X_N"
},
{
"math_id": 90,
"text": "F_{X_1,\\ldots,X_N}"
},
{
"math_id": 91,
"text": "\\mathbf{X} = (X_1, \\ldots, X_N)^T"
},
{
"math_id": 92,
"text": "F_{\\mathbf{X}}(\\mathbf{x}) = \\operatorname{P}(X_1 \\leq x_1,\\ldots,X_N \\leq x_N)"
},
{
"math_id": 93,
"text": "0\\leq F_{X_1 \\ldots X_n}(x_1,\\ldots,x_n)\\leq 1,"
},
{
"math_id": 94,
"text": "\\lim_{x_1,\\ldots,x_n \\rightarrow+\\infty}F_{X_1 \\ldots X_n}(x_1,\\ldots,x_n)=1 \\text{ and } \\lim_{x_i\\rightarrow-\\infty}F_{X_1 \\ldots X_n}(x_1,\\ldots,x_n)=0, \\text{for all } i."
},
{
"math_id": 95,
"text": "F(x,y)=0"
},
{
"math_id": 96,
"text": "x<0"
},
{
"math_id": 97,
"text": "x+y<1"
},
{
"math_id": 98,
"text": "y<0"
},
{
"math_id": 99,
"text": "F(x,y)=1"
},
{
"math_id": 100,
"text": "\\operatorname{P}\\left(\\frac{1}{3} < X \\leq 1, \\frac{1}{3} < Y \\leq 1\\right)=-1"
},
{
"math_id": 101,
"text": "F_{X_1,X_2}(a, c) + F_{X_1,X_2}(b, d) - F_{X_1,X_2}(a, d) - F_{X_1,X_2}(b, c) = \\operatorname{P}(a < X_1 \\leq b, c < X_2 \\leq d) = \\int ..."
},
{
"math_id": 102,
"text": " P(Z \\leq 1+2i) "
},
{
"math_id": 103,
"text": " P(\\Re{(Z)} \\leq 1, \\Im{(Z)} \\leq 3) "
},
{
"math_id": 104,
"text": " F_Z(z) = F_{\\Re{(Z)},\\Im{(Z)}}(\\Re{(z)},\\Im{(z)}) = P(\\Re{(Z)} \\leq \\Re{(z)} , \\Im{(Z)} \\leq \\Im{(z)}). "
},
{
"math_id": 105,
"text": "F_{\\mathbf{Z}}(\\mathbf{z}) = F_{\\Re{(Z_1)},\\Im{(Z_1)}, \\ldots, \\Re{(Z_n)},\\Im{(Z_n)}}(\\Re{(z_1)}, \\Im{(z_1)},\\ldots,\\Re{(z_n)}, \\Im{(z_n)}) = \\operatorname{P}(\\Re{(Z_1)} \\leq \\Re{(z_1)},\\Im{(Z_1)} \\leq \\Im{(z_1)},\\ldots,\\Re{(Z_n)} \\leq \\Re{(z_n)},\\Im{(Z_n)} \\leq \\Im{(z_n)})"
},
{
"math_id": 106,
"text": "\\mathbf{Z} = (Z_1,\\ldots,Z_N)^T"
},
{
"math_id": 107,
"text": "(0, \\infty)"
},
{
"math_id": 108,
"text": " f(x)= \\frac{2\\beta^{\\frac{\\alpha}{2}} x^{\\alpha-1} \\exp(-\\beta x^2+ \\gamma x )}{\\Psi{\\left(\\frac{\\alpha}{2}, \\frac{ \\gamma}{\\sqrt{\\beta}}\\right)}}"
},
{
"math_id": 109,
"text": "\\Psi(\\alpha,z)={}_1\\Psi_1\\left(\\begin{matrix}\\left(\\alpha,\\frac{1}{2}\\right)\\\\(1,0)\\end{matrix};z \\right)"
}
] |
https://en.wikipedia.org/wiki?curid=5793
|
579311
|
Image (mathematics)
|
Set of the values of a function
In mathematics, for a function formula_0, the image of an input value formula_1 is the single output value produced by formula_2 when passed formula_1. The preimage of an output value formula_3 is the set of input values that produce formula_3.
More generally, evaluating formula_2 at each element of a given subset formula_4 of its domain formula_5 produces a set, called the "image of formula_4 under (or through) formula_2". Similarly, the inverse image (or preimage) of a given subset formula_6 of the codomain formula_7 is the set of all elements of formula_5 that map to a member of formula_8
The image of the function formula_2 is the set of all output values it may produce, that is, the image of formula_5. The preimage of formula_2, that is, the preimage of formula_7 under formula_2, always equals formula_5 (the domain of formula_2); therefore, the former notion is rarely used.
Image and inverse image may also be defined for general binary relations, not just functions.
Definition.
The word "image" is used in three related ways. In these definitions, formula_10 is a function from the set formula_5 to the set formula_9
Image of an element.
If formula_1 is a member of formula_11 then the image of formula_1 under formula_12 denoted formula_13 is the value of formula_2 when applied to formula_14 formula_15 is alternatively known as the output of formula_2 for argument formula_14
Given formula_16 the function formula_2 is said to take the value formula_3 or take formula_3 as a value if there exists some formula_1 in the function's domain such that formula_17
Similarly, given a set formula_18 formula_2 is said to take a value in formula_19 if there exists some formula_1 in the function's domain such that formula_20
However, "formula_2 takes [all] values in formula_19" and "formula_2 is valued in formula_19" mean that formula_21 for every point formula_1 in the domain of formula_2.
Image of a subset.
Throughout, let formula_10 be a function.
The image under formula_2 of a subset formula_4 of formula_5 is the set of all formula_22 for formula_23 It is denoted by formula_24 or by formula_25 when there is no risk of confusion. Using set-builder notation, this definition can be written as
formula_26
This induces a function formula_27 where formula_28 denotes the power set of a set formula_29 that is the set of all subsets of formula_30 See below for more.
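For finite sets this induced image map can be sketched directly in Python; the function and subset below are arbitrary examples:

```python
def image(f, A):
    """Image of the subset A under f: the set {f(a) : a in A}."""
    return {f(a) for a in A}

def square(n):
    return n * n

print(image(square, {-2, -1, 0, 1, 2}))  # {0, 1, 4}
```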
Image of a function.
The "image" of a function is the image of its entire domain, also known as the range of the function. This last usage should be avoided because the word "range" is also commonly used to mean the codomain of formula_31
Generalization to binary relations.
If formula_32 is an arbitrary binary relation on formula_33 then the set formula_34 is called the image, or the range, of formula_35 Dually, the set formula_36 is called the domain of formula_35
Inverse image.
Let formula_2 be a function from formula_5 to formula_9 The preimage or inverse image of a set formula_37 under formula_12 denoted by formula_38 is the subset of formula_5 defined by
formula_39
Other notations include formula_40 and formula_41
The inverse image of a singleton set, denoted by formula_42 or by formula_43 is also called the fiber or fiber over formula_3 or the level set of formula_44 The set of all the fibers over the elements of formula_7 is a family of sets indexed by formula_9
For example, for the function formula_45 the inverse image of formula_46 would be formula_47 Again, if there is no risk of confusion, formula_48 can be denoted by formula_49 and formula_50 can also be thought of as a function from the power set of formula_7 to the power set of formula_51 The notation formula_50 should not be confused with that for inverse function, although it coincides with the usual one for bijections in that the inverse image of formula_6 under formula_2 is the image of formula_6 under formula_52
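Similarly, a sketch of the preimage over a finite domain, reproducing the squaring example above:

```python
def preimage(f, domain, B):
    """Inverse image of B under f, restricted to a finite domain."""
    return {x for x in domain if f(x) in B}

def square(n):
    return n * n

integers = set(range(-3, 4))
print(preimage(square, integers, {4}))     # the fiber over 4: {-2, 2}
print(preimage(square, integers, {4, 9}))  # {-3, -2, 2, 3}
```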
Notation for image and inverse image.
The traditional notations used in the previous section do not distinguish the original function formula_10 from the image-of-sets function formula_53; likewise they do not distinguish the inverse function (assuming one exists) from the inverse image function (which again relates the powersets). Given the right context, this keeps the notation light and usually does not cause confusion. But if needed, an alternative is to give explicit names for the image and preimage as functions between power sets: for instance, one may write formula_54 with formula_55 for the image map and formula_56 with formula_57 for the preimage map; other notations in use include formula_58 for formula_59, formula_60 for formula_61, and formula_63 for formula_62.
Properties.
General.
For every function formula_10 and all subsets formula_97 and formula_98 the following properties hold:
Also: formula_99.
Multiple functions.
For functions formula_10 and formula_100 with subsets formula_97 and formula_101 the following properties hold: formula_102 and formula_103.
Multiple subsets of domain or codomain.
For function formula_10 and subsets formula_104 and formula_105 the following properties hold:
The results relating images and preimages to the (Boolean) algebra of intersection and union work for any collection of subsets, not just for pairs of subsets: formula_106, formula_107, formula_108, and formula_109.
With respect to the algebra of subsets described above, the inverse image function is a lattice homomorphism, while the image function is only a semilattice homomorphism (that is, it does not always preserve intersections).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
"This article incorporates material from Fibre on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "f: X \\to Y"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "X"
},
{
"math_id": 6,
"text": "B"
},
{
"math_id": 7,
"text": "Y"
},
{
"math_id": 8,
"text": "B."
},
{
"math_id": 9,
"text": "Y."
},
{
"math_id": 10,
"text": "f : X \\to Y"
},
{
"math_id": 11,
"text": "X,"
},
{
"math_id": 12,
"text": "f,"
},
{
"math_id": 13,
"text": "f(x),"
},
{
"math_id": 14,
"text": "x."
},
{
"math_id": 15,
"text": "f(x)"
},
{
"math_id": 16,
"text": "y,"
},
{
"math_id": 17,
"text": "f(x) = y."
},
{
"math_id": 18,
"text": "S,"
},
{
"math_id": 19,
"text": "S"
},
{
"math_id": 20,
"text": "f(x) \\in S."
},
{
"math_id": 21,
"text": "f(x) \\in S"
},
{
"math_id": 22,
"text": "f(a)"
},
{
"math_id": 23,
"text": "a\\in A."
},
{
"math_id": 24,
"text": "f[A],"
},
{
"math_id": 25,
"text": "f(A),"
},
{
"math_id": 26,
"text": "f[A] = \\{f(a) : a \\in A\\}."
},
{
"math_id": 27,
"text": "f[\\,\\cdot\\,] : \\mathcal P(X) \\to \\mathcal P(Y),"
},
{
"math_id": 28,
"text": "\\mathcal P(S)"
},
{
"math_id": 29,
"text": "S;"
},
{
"math_id": 30,
"text": "S."
},
{
"math_id": 31,
"text": "f."
},
{
"math_id": 32,
"text": "R"
},
{
"math_id": 33,
"text": "X \\times Y,"
},
{
"math_id": 34,
"text": "\\{ y \\in Y : x R y \\text{ for some } x \\in X \\}"
},
{
"math_id": 35,
"text": "R."
},
{
"math_id": 36,
"text": "\\{ x \\in X : x R y \\text{ for some } y \\in Y \\}"
},
{
"math_id": 37,
"text": "B \\subseteq Y"
},
{
"math_id": 38,
"text": "f^{-1}[B],"
},
{
"math_id": 39,
"text": "f^{-1}[ B ] = \\{ x \\in X \\,:\\, f(x) \\in B \\}."
},
{
"math_id": 40,
"text": "f^{-1}(B)"
},
{
"math_id": 41,
"text": "f^{-}(B)."
},
{
"math_id": 42,
"text": "f^{-1}[\\{ y \\}]"
},
{
"math_id": 43,
"text": "f^{-1}[y],"
},
{
"math_id": 44,
"text": "y."
},
{
"math_id": 45,
"text": "f(x) = x^2,"
},
{
"math_id": 46,
"text": "\\{ 4 \\}"
},
{
"math_id": 47,
"text": "\\{ -2, 2 \\}."
},
{
"math_id": 48,
"text": "f^{-1}[B]"
},
{
"math_id": 49,
"text": "f^{-1}(B),"
},
{
"math_id": 50,
"text": "f^{-1}"
},
{
"math_id": 51,
"text": "X."
},
{
"math_id": 52,
"text": "f^{-1}."
},
{
"math_id": 53,
"text": "f : \\mathcal{P}(X) \\to \\mathcal{P}(Y)"
},
{
"math_id": 54,
"text": "f^\\rightarrow : \\mathcal{P}(X) \\to \\mathcal{P}(Y)"
},
{
"math_id": 55,
"text": "f^\\rightarrow(A) = \\{ f(a)\\;|\\; a \\in A\\}"
},
{
"math_id": 56,
"text": "f^\\leftarrow : \\mathcal{P}(Y) \\to \\mathcal{P}(X)"
},
{
"math_id": 57,
"text": "f^\\leftarrow(B) = \\{ a \\in X \\;|\\; f(a) \\in B\\}"
},
{
"math_id": 58,
"text": "f_\\star : \\mathcal{P}(X) \\to \\mathcal{P}(Y)"
},
{
"math_id": 59,
"text": "f^\\rightarrow"
},
{
"math_id": 60,
"text": "f^\\star : \\mathcal{P}(Y) \\to \\mathcal{P}(X)"
},
{
"math_id": 61,
"text": "f^\\leftarrow"
},
{
"math_id": 62,
"text": "f[A]"
},
{
"math_id": 63,
"text": "f\\,''A."
},
{
"math_id": 64,
"text": "f : \\{ 1, 2, 3 \\} \\to \\{ a, b, c, d \\}"
},
{
"math_id": 65,
"text": "\n \\left\\{\\begin{matrix}\n 1 \\mapsto a, \\\\\n 2 \\mapsto a, \\\\\n 3 \\mapsto c.\n \\end{matrix}\\right.\n "
},
{
"math_id": 66,
"text": "\\{ 2, 3 \\}"
},
{
"math_id": 67,
"text": "f(\\{ 2, 3 \\}) = \\{ a, c \\}."
},
{
"math_id": 68,
"text": "\\{ a, c \\}."
},
{
"math_id": 69,
"text": "a"
},
{
"math_id": 70,
"text": "f^{-1}(\\{ a \\}) = \\{ 1, 2 \\}."
},
{
"math_id": 71,
"text": "\\{ a, b \\}"
},
{
"math_id": 72,
"text": "f^{-1}(\\{ a, b \\}) = \\{ 1, 2 \\}."
},
{
"math_id": 73,
"text": "\\{ b, d \\}"
},
{
"math_id": 74,
"text": "\\{ \\ \\} = \\emptyset."
},
{
"math_id": 75,
"text": "f : \\R \\to \\R"
},
{
"math_id": 76,
"text": "f(x) = x^2."
},
{
"math_id": 77,
"text": "\\{ -2, 3 \\}"
},
{
"math_id": 78,
"text": "f(\\{ -2, 3 \\}) = \\{ 4, 9 \\},"
},
{
"math_id": 79,
"text": "\\R^+"
},
{
"math_id": 80,
"text": "\\{ 4, 9 \\}"
},
{
"math_id": 81,
"text": "f^{-1}(\\{ 4, 9 \\}) = \\{ -3, -2, 2, 3 \\}."
},
{
"math_id": 82,
"text": "N = \\{ n \\in \\R : n < 0 \\}"
},
{
"math_id": 83,
"text": "f : \\R^2 \\to \\R"
},
{
"math_id": 84,
"text": "f(x, y) = x^2 + y^2."
},
{
"math_id": 85,
"text": "f^{-1}(\\{ a \\})"
},
{
"math_id": 86,
"text": "a > 0, \\ a = 0, \\text{ or } \\ a < 0"
},
{
"math_id": 87,
"text": "a \\ge 0,"
},
{
"math_id": 88,
"text": "(x, y) \\in \\R^2"
},
{
"math_id": 89,
"text": "x^2 + y^2 = a,"
},
{
"math_id": 90,
"text": "\\sqrt{a}."
},
{
"math_id": 91,
"text": "M"
},
{
"math_id": 92,
"text": "\\pi : TM \\to M"
},
{
"math_id": 93,
"text": "TM"
},
{
"math_id": 94,
"text": "M,"
},
{
"math_id": 95,
"text": "\\pi"
},
{
"math_id": 96,
"text": "T_x(M) \\text{ for } x \\in M."
},
{
"math_id": 97,
"text": "A \\subseteq X"
},
{
"math_id": 98,
"text": "B \\subseteq Y,"
},
{
"math_id": 99,
"text": "f(A) \\cap B = \\varnothing \\,\\text{ if and only if }\\, A \\cap f^{-1}(B) = \\varnothing"
},
{
"math_id": 100,
"text": "g : Y \\to Z"
},
{
"math_id": 101,
"text": "C \\subseteq Z,"
},
{
"math_id": 102,
"text": "(g \\circ f)(A) = g(f(A))"
},
{
"math_id": 103,
"text": "(g \\circ f)^{-1}(C) = f^{-1}(g^{-1}(C))"
},
{
"math_id": 104,
"text": "A, B \\subseteq X"
},
{
"math_id": 105,
"text": "S, T \\subseteq Y,"
},
{
"math_id": 106,
"text": "f\\left(\\bigcup_{s\\in S}A_s\\right) = \\bigcup_{s\\in S} f\\left(A_s\\right)"
},
{
"math_id": 107,
"text": "f\\left(\\bigcap_{s\\in S}A_s\\right) \\subseteq \\bigcap_{s\\in S} f\\left(A_s\\right)"
},
{
"math_id": 108,
"text": "f^{-1}\\left(\\bigcup_{s\\in S}B_s\\right) = \\bigcup_{s\\in S} f^{-1}\\left(B_s\\right)"
},
{
"math_id": 109,
"text": "f^{-1}\\left(\\bigcap_{s\\in S}B_s\\right) = \\bigcap_{s\\in S} f^{-1}\\left(B_s\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=579311
|
5793598
|
Chebyshev function
|
In mathematics, the Chebyshev function is either a scalarising function (Tchebycheff function) or one of two related functions. The first Chebyshev function ϑ(x) or θ(x) is given by
formula_0
where formula_1 denotes the natural logarithm, with the sum extending over all prime numbers p that are less than or equal to x.
The second Chebyshev function ψ(x) is defined similarly, with the sum extending over all prime powers not exceeding x
formula_2
where Λ is the von Mangoldt function. The Chebyshev functions, especially the second one ψ(x), are often used in proofs related to prime numbers, because it is typically simpler to work with them than with the prime-counting function, π(x) (see the exact formula below). Both Chebyshev functions are asymptotic to x, a statement equivalent to the prime number theorem.
The Tchebycheff function, Chebyshev utility function, or weighted Tchebycheff scalarizing function is used when one has several functions to be minimized and one wants to "scalarize" them into a single function:
formula_3
By minimizing this function for different values of formula_4, one obtains every point on a Pareto front, even in the nonconvex parts. Often the functions to be minimized are not formula_5 but formula_6 for some scalars formula_7. Then formula_8
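A minimal Python sketch of this scalarization; the two objective functions, the weights, and the reference point are arbitrary illustrative choices:

```python
def tchebycheff(objectives, weights, x, z_star=None):
    """Weighted Tchebycheff scalarization: max_i w_i * |f_i(x) - z_i*|."""
    if z_star is None:
        z_star = [0.0] * len(objectives)  # reference point defaults to the origin
    return max(w * abs(f(x) - z) for f, w, z in zip(objectives, weights, z_star))

def f1(x):
    return (x - 1.0) ** 2

def f2(x):
    return (x + 1.0) ** 2

# Sweeping the weights traces out different compromise points between f1 and f2.
for w1 in (0.2, 0.5, 0.8):
    values = [(tchebycheff([f1, f2], [w1, 1.0 - w1], x), x)
              for x in [i / 100.0 for i in range(-200, 201)]]
    print(w1, min(values)[1])  # the x minimizing the scalarized objective
```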
All three functions are named in honour of Pafnuty Chebyshev.
Relationships.
The second Chebyshev function can be seen to be related to the first by writing it as
formula_9
where k is the unique integer such that p^k ≤ x and x < p^(k+1). The values of k are given in OEIS: . A more direct relationship is given by
formula_10
This last sum has only a finite number of non-vanishing terms, as
formula_11
The second Chebyshev function is the logarithm of the least common multiple of the integers from 1 to n.
formula_12
Values of lcm(1, 2, ..., "n") for the integer variable n are given at OEIS: .
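A short Python check of this identity, computing ψ(n) both as the logarithm of lcm(1, ..., n) and as a sum over prime powers (math.lcm needs Python 3.9 or later; the trial-division primality test is only intended for small n):

```python
import math
from functools import reduce

def psi_via_lcm(n):
    """Second Chebyshev function via psi(n) = log lcm(1, 2, ..., n)."""
    return math.log(reduce(math.lcm, range(1, n + 1), 1))

def psi_via_von_mangoldt(n):
    """Second Chebyshev function as the sum of log p over prime powers p^k <= n."""
    total = 0.0
    for p in range(2, n + 1):
        if all(p % d for d in range(2, math.isqrt(p) + 1)):  # p is prime
            k, power = 0, p
            while power <= n:  # count the prime powers p, p^2, ... that do not exceed n
                k += 1
                power *= p
            total += k * math.log(p)
    return total

print(psi_via_lcm(20), psi_via_von_mangoldt(20))  # both roughly 19.266
```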
Relationships between ψ(x)/x and ϑ(x)/x.
The following theorem relates the two quotients formula_13 and formula_14.
Theorem: For formula_15, we have
formula_16
This inequality implies that
formula_17
In other words, if either of the quotients formula_18 or formula_19 tends to a limit, then so does the other, and the two limits are equal.
Proof: Since formula_20, we find that
formula_21
But from the definition of formula_22 we have the trivial inequality
formula_23
so
formula_24
Lastly, divide by formula_25 to obtain the inequality in the theorem.
Asymptotics and bounds.
The following bounds are known for the Chebyshev functions (in these formulas p_k is the kth prime number; p_1 = 2, p_2 = 3, etc.):
formula_26
Furthermore, under the Riemann hypothesis,
formula_27
for any "ε" > 0.
Upper bounds exist for both ϑ(x) and ψ(x) such that
formula_28
for any "x" > 0.
An explanation of the constant 1.03883 is given at OEIS: .
The exact formula.
In 1895, Hans Carl Friedrich von Mangoldt proved an explicit expression for ψ(x) as a sum over the nontrivial zeros of the Riemann zeta function:
formula_29
(The numerical value of ζ′(0)/ζ(0) is log 2π.) Here ρ runs over the nontrivial zeros of the zeta function, and ψ_0 is the same as ψ, except that at its jump discontinuities (the prime powers) it takes the value halfway between the values to the left and the right:
formula_30
From the Taylor series for the logarithm, the last term in the explicit formula can be understood as a summation of x^ω/ω over the trivial zeros of the zeta function, ω = −2, −4, −6, ..., i.e.
formula_31
Similarly, the first term, x = x^1/1, corresponds to the simple pole of the zeta function at 1. It being a pole rather than a zero accounts for the opposite sign of the term.
Properties.
A theorem due to Erhard Schmidt states that, for some explicit positive constant K, there are infinitely many natural numbers x such that
formula_32
and infinitely many natural numbers x such that
formula_33
In little-o notation, one may write the above as
formula_34
Hardy and Littlewood prove the stronger result, that
formula_35
Relation to primorials.
The first Chebyshev function is the logarithm of the primorial of x, denoted x#:
formula_36
This proves that the primorial x# is asymptotically equal to e^((1 + o(1))x), where o is the little-o notation (see big O notation), and together with the prime number theorem establishes the asymptotic behavior of p_n#.
Relation to the prime-counting function.
The Chebyshev function can be related to the prime-counting function as follows. Define
formula_37
Then
formula_38
The transition from Π to the prime-counting function, π, is made through the equation
formula_39
Certainly "π"&hairsp;("x") ≤ "x", so for the sake of approximation, this last relation can be recast in the form
formula_40
The Riemann hypothesis.
The Riemann hypothesis states that all nontrivial zeros of the zeta function have real part 1/2. In this case, |x^ρ| = √x, and it can be shown that
formula_41
By the above, this implies
formula_42
Smoothing function.
The smoothing function is defined as
formula_43
Obviously formula_44
|
[
{
"math_id": 0,
"text": "\\vartheta(x) = \\sum_{p \\le x} \\log p"
},
{
"math_id": 1,
"text": "\\log"
},
{
"math_id": 2,
"text": "\\psi(x) = \\sum_{k \\in \\mathbb{N}}\\sum_{p^k \\le x}\\log p = \\sum_{n \\leq x} \\Lambda(n) = \\sum_{p \\le x}\\left\\lfloor\\log_p x\\right\\rfloor\\log p,"
},
{
"math_id": 3,
"text": "f_{Tchb}(x,w) = \\max_i w_i f_i(x)."
},
{
"math_id": 4,
"text": "w"
},
{
"math_id": 5,
"text": "f_i"
},
{
"math_id": 6,
"text": "|f_i-z_i^*|"
},
{
"math_id": 7,
"text": "z_i^*"
},
{
"math_id": 8,
"text": "f_{Tchb}(x,w) = \\max_i w_i |f_i(x)-z_i^*|."
},
{
"math_id": 9,
"text": "\\psi(x) = \\sum_{p \\le x}k \\log p"
},
{
"math_id": 10,
"text": "\\psi(x) = \\sum_{n=1}^\\infty \\vartheta\\big(x^{\\frac{1}{n}}\\big)."
},
{
"math_id": 11,
"text": "\\vartheta\\big(x^{\\frac{1}{n}}\\big) = 0\\quad \\text{for}\\quad n>\\log_2 x = \\frac{\\log x}{\\log 2}."
},
{
"math_id": 12,
"text": "\\operatorname{lcm}(1,2,\\dots,n) = e^{\\psi(n)}."
},
{
"math_id": 13,
"text": "\\frac{\\psi(x)}{x}"
},
{
"math_id": 14,
"text": "\\frac{\\vartheta(x)}{x}"
},
{
"math_id": 15,
"text": "x>0"
},
{
"math_id": 16,
"text": "0 \\leq \\frac{\\psi(x)}{x}-\\frac{\\vartheta(x)}{x}\\leq \\frac{(\\log x)^2}{2\\sqrt{x}\\log 2}."
},
{
"math_id": 17,
"text": "\\lim_{x\\to\\infty}\\!\\left(\\frac{\\psi(x)}{x}-\\frac{\\vartheta(x)}{x}\\right)\\! = 0."
},
{
"math_id": 18,
"text": "\\psi(x)/x"
},
{
"math_id": 19,
"text": "\\vartheta(x)/x"
},
{
"math_id": 20,
"text": "\\psi(x)=\\sum_{n \\leq \\log_2 x}\\vartheta(x^{1/n})"
},
{
"math_id": 21,
"text": "0 \\leq \\psi(x)-\\vartheta(x)=\\sum_{2\\leq n \\leq \\log_2 x}\\vartheta(x^{1/n})."
},
{
"math_id": 22,
"text": "\\vartheta(x)"
},
{
"math_id": 23,
"text": "\\vartheta(x)\\leq \\sum_{p\\leq x}\\log x\\leq x\\log x"
},
{
"math_id": 24,
"text": "\\begin{align}\n0\\leq\\psi(x)-\\vartheta(x)&\\leq \\sum_{2\\leq n\\leq \\log_2 x}x^{1/n}\\log(x^{1/n})\\\\\n&\\leq(\\log_2 x)\\sqrt{x}\\log\\sqrt{x}\\\\\n&=\\frac{\\log x}{\\log 2}\\frac{\\sqrt{x}}{2}\\log x\\\\\n&=\\frac{\\sqrt{x}\\,(\\log x)^2}{2\\log 2}.\n\\end{align}"
},
{
"math_id": 25,
"text": "x"
},
{
"math_id": 26,
"text": "\\begin{align}\n\\vartheta(p_k) &\\ge k\\left( \\log k+\\log\\log k-1+\\frac{\\log\\log k-2.050735}{\\log k}\\right)&& \\text{for }k\\ge10^{11}, \\\\[8px]\n\\vartheta(p_k) &\\le k\\left( \\log k+\\log\\log k-1+\\frac{\\log\\log k-2}{\\log k}\\right)&& \\text{for }k \\ge 198, \\\\[8px]\n|\\vartheta(x)-x| &\\le 0.006788\\,\\frac{x}{\\log x}&& \\text{for }x \\ge 10\\,544\\,111, \\\\[8px]\n|\\psi(x)-x|&\\le0.006409\\,\\frac{x}{\\log x}&& \\text{for } x \\ge e^{22},\\\\[8px]\n0.9999\\sqrt{x} &< \\psi(x)-\\vartheta(x)<1.00007\\sqrt{x}+1.78\\sqrt[3]{x}&& \\text{for }x\\ge121.\n\\end{align}"
},
{
"math_id": 27,
"text": "\\begin{align}\n|\\vartheta(x)-x| &= O\\Big(x^{\\frac12+\\varepsilon}\\Big) \\\\\n|\\psi(x)-x| &= O\\Big(x^{\\frac12+\\varepsilon}\\Big)\n\\end{align}"
},
{
"math_id": 28,
"text": "\\begin{align} \\vartheta(x)&<1.000028x \\\\ \\psi(x)&<1.03883x \\end{align}"
},
{
"math_id": 29,
"text": "\\psi_0(x) = x - \\sum_{\\rho} \\frac{x^{\\rho}}{\\rho} - \\frac{\\zeta'(0)}{\\zeta(0)} - \\tfrac{1}{2} \\log (1-x^{-2})."
},
{
"math_id": 30,
"text": "\\psi_0(x) \n= \\frac{1}{2}\\!\\left( \\sum_{n \\leq x} \\Lambda(n)+\\sum_{n < x} \\Lambda(n)\\right)\n=\\begin{cases} \\psi(x) - \\tfrac{1}{2} \\Lambda(x) & x = 2,3,4,5,7,8,9,11,13,16,\\dots \\\\ [5px]\n\\psi(x) & \\mbox{otherwise.} \\end{cases}"
},
{
"math_id": 31,
"text": "\\sum_{k=1}^{\\infty} \\frac{x^{-2k}}{-2k} = \\tfrac{1}{2} \\log \\left( 1 - x^{-2} \\right)."
},
{
"math_id": 32,
"text": "\\psi(x)-x < -K\\sqrt{x}"
},
{
"math_id": 33,
"text": "\\psi(x)-x > K\\sqrt{x}."
},
{
"math_id": 34,
"text": "\\psi(x)-x \\ne o\\left(\\sqrt{x}\\,\\right)."
},
{
"math_id": 35,
"text": "\\psi(x)-x \\ne o\\left(\\sqrt{x}\\,\\log\\log\\log x\\right)."
},
{
"math_id": 36,
"text": "\\vartheta(x) = \\sum_{p \\le x} \\log p = \\log \\prod_{p\\le x} p = \\log\\left(x\\#\\right)."
},
{
"math_id": 37,
"text": "\\Pi(x) = \\sum_{n \\leq x} \\frac{\\Lambda(n)}{\\log n}."
},
{
"math_id": 38,
"text": "\\Pi(x) = \\sum_{n \\leq x} \\Lambda(n) \\int_n^x \\frac{dt}{t \\log^2 t} + \\frac{1}{\\log x} \\sum_{n \\leq x} \\Lambda(n) = \\int_2^x \\frac{\\psi(t)\\, dt}{t \\log^2 t} + \\frac{\\psi(x)}{\\log x}."
},
{
"math_id": 39,
"text": "\\Pi(x) = \\pi(x) + \\tfrac{1}{2} \\pi\\left(\\sqrt{x}\\,\\right) + \\tfrac{1}{3} \\pi\\left(\\sqrt[3]{x}\\,\\right) + \\cdots"
},
{
"math_id": 40,
"text": "\\pi(x) = \\Pi(x) + O\\left(\\sqrt{x}\\,\\right)."
},
{
"math_id": 41,
"text": "\\sum_{\\rho} \\frac{x^{\\rho}}{\\rho} = O\\!\\left(\\sqrt{x}\\, \\log^2 x\\right)."
},
{
"math_id": 42,
"text": "\\pi(x) = \\operatorname{li}(x) + O\\!\\left(\\sqrt{x}\\, \\log x\\right)."
},
{
"math_id": 43,
"text": "\\psi_1(x) = \\int_0^x \\psi(t)\\,dt."
},
{
"math_id": 44,
"text": "\\psi_1(x) \\sim \\frac{x^2}{2}."
}
] |
https://en.wikipedia.org/wiki?curid=5793598
|
57936674
|
Lewis's triviality result
|
In the mathematical theory of probability, David Lewis's triviality result is a theorem about the impossibility of systematically equating the conditional probability formula_0 with the probability of a so-called conditional event, formula_1.
Conditional probability and conditional events.
The statement "The probability that if formula_2, then formula_3, is 20%" means (put intuitively) that event formula_3 may be expected to occur in 20% of the outcomes where event formula_2 occurs. The standard formal expression of this is formula_4, where the conditional probability formula_0 equals, by definition, formula_5.
Beginning in the 1960s, several philosophical logicians—most notably Ernest Adams and Robert Stalnaker—floated the idea that one might also write formula_6, where formula_1 is the conditional event "If formula_2, then formula_3". That is, given events formula_2 and formula_3, one might suppose there is an event, formula_1, such that formula_7 could be counted on to equal formula_0, so long as formula_8.
Part of the appeal of this move would be the possibility of embedding conditional expressions within more complex constructions. One could write, say, formula_9, to express someone's high subjective degree of confidence ("75% sure") that either formula_2, or else if formula_3, then formula_10. Compound constructions containing conditional expressions might also be useful in the programming of automated decision-making systems.
How might such a convention be combined with standard probability theory? The most direct extension of the standard theory would be to treat formula_1 as an event like any other, i.e., as a set of outcomes. Adding formula_1 to the familiar Venn- or Euler diagram of formula_2 and formula_3 would then result in something like Fig. 1, where formula_11 are probabilities allocated to the eight respective regions, such that formula_12.
For formula_7 to equal formula_0 requires that formula_13, i.e., that the probability inside the formula_1 region equal the formula_14 region's proportional share of the probability inside the formula_2 region. In general the equality will of course not be true, so that making it reliably true requires a new constraint on probability functions: in addition to satisfying Kolmogorov's probability axioms, they must also satisfy a new constraint, namely that formula_15 for any events formula_2 and formula_3 such that formula_8.
Lewis's result.
Lewis pointed out a seemingly fatal problem with the above proposal: assuming a nontrivial set of events, the new, restricted class of formula_16-functions will not be closed under conditioning, the operation that turns probability function formula_16 into the new function formula_17, predicated on event formula_10's occurrence. That is, if formula_15, it will not in general be true that formula_18 as long as formula_19. This implies that if rationality requires having a well-behaved probability function, then a fully rational person (or computing system) would become irrational simply in virtue of learning that an arbitrary event formula_10 had occurred. Bas van Fraassen called this result "a veritable bombshell" (1976, p. 273).
Lewis's proof is as follows. Let a set of events be non-trivial if it contains two possible events, formula_2 and formula_3, that are mutually exclusive but do not together exhaust all possibilities, so that formula_8, formula_20, formula_21, and formula_22. The existence of two such events implies the existence of the event formula_23, as well, and, if conditional events are admitted, the event formula_24. The proof derives a contradiction from the assumption that such a minimally non-trivial set of events exists.
Graphical version.
A graphical version of the proof starts with Fig. 2, where the formula_2 and formula_3 from Fig. 1 are now disjoint and formula_1 has been replaced by formula_24. By the assumption that formula_2 and formula_3 are possible, formula_42 and formula_43. By the assumption that formula_2 and formula_3 do not together exhaust all possibilities, formula_44. And by the new constraint on probability functions, formula_45 formula_46, which means that
(1) formula_47
Conditioning on an event involves zeroing out the probabilities outside the event's region and increasing the probabilities inside the region by a common scale factor. Here, conditioning on formula_2 will zero out formula_48 and formula_49 and scale up formula_50 and formula_51, to formula_52 and formula_53, respectively, and so
(2) formula_54 which simplifies to formula_55
Conditioning instead on formula_25 will zero out formula_50 and formula_51 and scale up formula_48 and formula_49, and so
(3) formula_56 which simplifies to formula_57
From (2), it follows that formula_58, and since formula_52 is the scaled-up value of formula_50, it must also be that formula_59. Similarly, from (3), formula_60. But then (1) reduces to formula_61, which implies that formula_62, which contradicts the stipulation that formula_44.
Later developments.
In a follow-up article, Lewis noted that the triviality proof can proceed by conditioning not on formula_2 and formula_25 but instead, by turns, on each of a finite set of mutually exclusive and jointly exhaustive events formula_63 He also gave a variant of the proof that involved not total conditioning, in which the probability of either formula_2 or formula_25 is set to 1, but partial conditioning (i.e., Jeffrey conditioning), by which probability is incrementally shifted from formula_25 to formula_2.
Separately, it has been pointed out that even without conditioning, if the number of outcomes is large but finite, then in general formula_64, being a ratio of two outputs of the formula_16-function, will take on more values than any single output of the function can. So, for instance, if in Fig. 1 formula_65 are all multiples of 0.01 (as would be the case if there were exactly 100 equiprobable outcomes), then formula_7 must be a multiple of 0.01, as well, but formula_5 need not be. That being the case, formula_7 cannot reliably be made to equal formula_66.
It has also been argued that the condition formula_15 causes acceptable formula_16-functions to be implausibly sparse and isolated from one another. One way to put the point: standardly, any weighted average of two probability functions is itself a probability function, so that between any two formula_16-functions there will be a continuum of weighted-average formula_16-functions along which one of the original formula_16-functions gradually transforms into the other. But these continua disappear if the added formula_15 condition is imposed. Now an average of two acceptable formula_16-functions will in general not be an acceptable formula_16-function.
Possible rejoinders.
Assuming that formula_15 holds for a minimally nontrivial set of events and for any formula_16-function leads to a contradiction. Thus formula_15 can hold for any formula_16-function only for trivial sets of events—that is the triviality result. However, the proof relies on background assumptions that may be challenged. It may be proposed, for instance, that the referent event of an expression like “formula_1” is not fixed for a given formula_2 and formula_3, but instead changes as the probability function changes. Or it may be proposed that conditioning on formula_10 should follow a rule other than formula_67.
But the most common response, among proponents of the formula_15 condition, has been to explore ways to model conditional events as something other than subsets of a universe set of outcomes. Even before Lewis published his result, some authors had modeled conditional events as ordered "pairs" of sets of outcomes. With that approach and others in the same spirit, conditional events and their associated combination and complementation operations do not constitute the usual algebra of sets of standard probability theory, but rather a more exotic type of structure, known as a conditional event algebra.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "P(B\\mid A)"
},
{
"math_id": 1,
"text": "A \\rightarrow B"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "B"
},
{
"math_id": 4,
"text": "P(B\\mid A)=0.20"
},
{
"math_id": 5,
"text": "P(A \\cap B)/P(A)"
},
{
"math_id": 6,
"text": "P(A \\rightarrow B) = 0.20"
},
{
"math_id": 7,
"text": "P(A \\rightarrow B)"
},
{
"math_id": 8,
"text": "P(A) > 0"
},
{
"math_id": 9,
"text": "P(A \\cup (B \\rightarrow C)) = 0.75"
},
{
"math_id": 10,
"text": "C"
},
{
"math_id": 11,
"text": "s,t,\\ldots, z"
},
{
"math_id": 12,
"text": "s + t + \\cdots + z = 1"
},
{
"math_id": 13,
"text": "t + v + w + y = (s + t)/(s + t + x + y)"
},
{
"math_id": 14,
"text": "A \\cap B"
},
{
"math_id": 15,
"text": "P(A \\rightarrow B) = P(B\\mid A)"
},
{
"math_id": 16,
"text": "P"
},
{
"math_id": 17,
"text": "P_C (\\cdot) = P(\\cdot\\mid C)"
},
{
"math_id": 18,
"text": "P_C(A \\rightarrow B) = P_C(B\\mid A)"
},
{
"math_id": 19,
"text": "P(C)> 0"
},
{
"math_id": 20,
"text": "P(B) > 0"
},
{
"math_id": 21,
"text": "P(A \\cap B) = 0"
},
{
"math_id": 22,
"text": "P(A \\cup B) < 1"
},
{
"math_id": 23,
"text": "A \\cup B"
},
{
"math_id": 24,
"text": "(A \\cup B) \\rightarrow A"
},
{
"math_id": 25,
"text": "A'"
},
{
"math_id": 26,
"text": "P_A((A \\cup B) \\rightarrow A) = P(((A \\cup B) \\rightarrow A) \\cap A)/P(A)"
},
{
"math_id": 27,
"text": "P_A((A \\cup B) \\rightarrow A) = P_A((A \\cup B) \\cap A)/P_A(A \\cup B) ="
},
{
"math_id": 28,
"text": "P((A \\cup B) \\cap A\\mid A)/P(A \\cup B\\mid A) = 1/1 = 1"
},
{
"math_id": 29,
"text": "P(((A \\cup B) \\rightarrow A) \\cap A) = P(A)"
},
{
"math_id": 30,
"text": "P_{A'}((A \\cup B) \\rightarrow A) = P(((A \\cup B) \\rightarrow A) \\cap A')/P(A')"
},
{
"math_id": 31,
"text": "P_{A'}((A \\cup B) \\rightarrow A) = P_{A'}((A \\cup B) \\cap A)/P_{A'}(A \\cup B) ="
},
{
"math_id": 32,
"text": "P((A \\cup B) \\cap A\\mid A')/P(A \\cup B\\mid A') = 0/P(A \\cup B\\mid A') = 0"
},
{
"math_id": 33,
"text": "P(A \\cup B\\mid A') \\neq 0"
},
{
"math_id": 34,
"text": "P(((A \\cup B) \\rightarrow A) \\cap A') = 0"
},
{
"math_id": 35,
"text": "P(X \\cap Y) + P(X \\cap Y') = P(X)"
},
{
"math_id": 36,
"text": "P(((A \\cup B) \\rightarrow A) \\cap A) + P(((A \\cup B) \\rightarrow A) \\cap A') ="
},
{
"math_id": 37,
"text": "P((A \\cup B) \\rightarrow A)"
},
{
"math_id": 38,
"text": "P(A)"
},
{
"math_id": 39,
"text": "P((A \\cup B) \\cap A)/P(A \\cup B) = P(A)/P(A \\cup B)"
},
{
"math_id": 40,
"text": "P(A) = P(A)/P(A \\cup B)"
},
{
"math_id": 41,
"text": "P(A \\cup B) = 1"
},
{
"math_id": 42,
"text": "x+y>0"
},
{
"math_id": 43,
"text": "u+v>0"
},
{
"math_id": 44,
"text": "u + v + x + y < 1"
},
{
"math_id": 45,
"text": "P((A \\cup B) \\rightarrow A) = P(A\\mid A \\cup B) ="
},
{
"math_id": 46,
"text": "P(A \\cap (A \\cup B))/P(A \\cup B) = P(A)/P(A \\cup B)"
},
{
"math_id": 47,
"text": "y + v + w =\\frac{x + y}{x + y + u + v},"
},
{
"math_id": 48,
"text": "u, v"
},
{
"math_id": 49,
"text": "w"
},
{
"math_id": 50,
"text": "x"
},
{
"math_id": 51,
"text": "y"
},
{
"math_id": 52,
"text": "x_A"
},
{
"math_id": 53,
"text": "y_A"
},
{
"math_id": 54,
"text": "y_A + 0 + 0 = \\frac{x_A + y_A}{x_A + y_A + 0 + 0},"
},
{
"math_id": 55,
"text": "y_A = 1."
},
{
"math_id": 56,
"text": "0 + v_{A'} + w_{A'} = \\frac{0 + 0}{0 + 0 + u_A + v_A},"
},
{
"math_id": 57,
"text": "v_{A'} + w_{A'} = 0."
},
{
"math_id": 58,
"text": "x_A = 0"
},
{
"math_id": 59,
"text": "x = 0"
},
{
"math_id": 60,
"text": "v = w = 0"
},
{
"math_id": 61,
"text": "y = y/(y + u)"
},
{
"math_id": 62,
"text": "y + u = 1"
},
{
"math_id": 63,
"text": "A, C, D, E, \\ldots\\,."
},
{
"math_id": 64,
"text": "P(B\\mid A) = P(A \\cap B)/P(A)"
},
{
"math_id": 65,
"text": "s, t, \\ldots "
},
{
"math_id": 66,
"text": "P(B \\mid A)"
},
{
"math_id": 67,
"text": "P_C(\\cdot) = P(\\cdot\\mid C)"
}
] |
https://en.wikipedia.org/wiki?curid=57936674
|
5794
|
Central tendency
|
Statistical value representing the center or average of a distribution
In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution.
Colloquially, measures of central tendency are often called "averages." The term "central tendency" dates from the late 1920s.
The most common measures of central tendency are the arithmetic mean, the median, and the mode. A middle tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. Occasionally authors use central tendency to denote "the tendency of quantitative data to cluster around some central value."
The central tendency of a distribution is typically contrasted with its "dispersion" or "variability"; dispersion and central tendency are among the most commonly characterized properties of distributions. Analysts may judge whether data has a strong or a weak central tendency based on its dispersion.
Measures.
The following may be applied to one-dimensional data. Depending on the circumstances, it may be appropriate to transform the data before calculating a central tendency. Examples are squaring the values or taking logarithms. Whether a transformation is appropriate and what it should be, depend heavily on the data being analyzed.
Any of the above may be applied to each dimension of multi-dimensional data, but the results may not be invariant to rotations of the multi-dimensional space.
Solutions to variational problems.
Several measures of central tendency can be characterized as solving a variational problem, in the sense of the calculus of variations, namely minimizing variation from the center. That is, given a measure of statistical dispersion, one asks for a measure of central tendency that minimizes variation: such that variation from the center is minimal among all choices of center. In a quip, "dispersion precedes location". These measures are initially defined in one dimension, but can be generalized to multiple dimensions. This center may or may not be unique. In the sense of "L"p spaces, the correspondence is: the mode minimizes the "L"0 "norm" (number of unequal points), the median minimizes the "L"1 norm (average absolute deviation), the mean minimizes the "L"2 norm (root-mean-square deviation), and the midrange minimizes the "L"∞ norm (maximum deviation).
The associated functions are called "p"-norms: respectively the 0-"norm", 1-norm, 2-norm, and ∞-norm. The function corresponding to the "L"0 space is not a norm, and is thus often referred to in quotes: 0-"norm".
In equations, for a given (finite) data set X, thought of as a vector x = (x_1, ..., x_n), the dispersion about a point c is the "distance" from x to the constant vector c = (c, ..., c) in the p-norm (normalized by the number of points n):
formula_0
For p = 0 and p = ∞ these functions are defined by taking limits, respectively as p → 0 and p → ∞. For p = 0 the limiting values are 0^0 = 0 and a^0 = 1 for a ≠ 0, so the difference becomes simply equality, so the 0-norm counts the number of "unequal" points. For p = ∞ the largest number dominates, and thus the ∞-norm is the maximum difference.
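A short Python sketch, using a made-up data set and a coarse grid search rather than exact minimization, showing that the minimizers of the p-norm dispersion land near the median, mean, and midrange for p = 1, 2, and ∞ respectively:

```python
import numpy as np

def dispersion(data, c, p):
    """f_p(c): normalized p-norm "distance" from the data to the constant vector (c, ..., c)."""
    x = np.asarray(data, dtype=float)
    if p == 0:
        return float(np.mean(x != c))        # fraction of points unequal to c
    if p == np.inf:
        return float(np.max(np.abs(x - c)))  # maximum deviation
    return float(np.mean(np.abs(x - c) ** p) ** (1.0 / p))

data = [1.0, 2.0, 2.0, 3.0, 10.0]
candidates = np.linspace(0.0, 12.0, 1201)    # coarse grid of candidate centers
for p, label in [(1, "median"), (2, "mean"), (np.inf, "midrange")]:
    best = min(candidates, key=lambda c: dispersion(data, c, p))
    print(label, round(float(best), 2))
# approximately: median 2.0, mean 3.6, midrange 5.5
```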
Uniqueness.
The mean ("L"2 center) and midrange ("L"∞ center) are unique (when they exist), while the median ("L"1 center) and mode ("L"0 center) are not in general unique. This can be understood in terms of convexity of the associated functions (coercive functions).
The 2-norm and ∞-norm are strictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point.
The 1-norm is not "strictly" convex, whereas strict convexity is needed to ensure uniqueness of the minimizer. Correspondingly, the median (in this sense of minimizing) is not in general unique, and in fact any point between the two central points of a discrete distribution minimizes average absolute deviation.
The 0-"norm" is not convex (hence not a norm). Correspondingly, the mode is not unique – for example, in a uniform distribution "any" point is the mode.
Clustering.
Instead of a single central point, one can ask for multiple points such that the variation from these points is minimized. This leads to cluster analysis, where each point in the data set is clustered with the nearest "center". Most commonly, using the 2-norm generalizes the mean to "k"-means clustering, while using the 1-norm generalizes the (geometric) median to "k"-medians clustering. Using the 0-norm simply generalizes the mode (most common value) to using the "k" most common values as centers.
Unlike the single-center statistics, this multi-center clustering cannot in general be computed in a closed-form expression, and instead must be computed or approximated by an iterative method; one general approach is expectation–maximization algorithms.
Information geometry.
The notion of a "center" as minimizing variation can be generalized in information geometry as a distribution that minimizes divergence (a generalized distance) from a data set. The most common case is maximum likelihood estimation, where the maximum likelihood estimate (MLE) maximizes likelihood (minimizes expected surprisal), which can be interpreted geometrically by using entropy to measure variation: the MLE minimizes cross-entropy (equivalently, relative entropy, Kullback–Leibler divergence).
A simple example of this is for the center of nominal data: instead of using the mode (the only single-valued "center"), one often uses the empirical measure (the frequency distribution divided by the sample size) as a "center". For example, given binary data, say heads or tails, if a data set consists of 2 heads and 1 tails, then the mode is "heads", but the empirical measure is 2/3 heads, 1/3 tails, which minimizes the cross-entropy (total surprisal) from the data set. This perspective is also used in regression analysis, where least squares finds the solution that minimizes the distances from it, and analogously in logistic regression, a maximum likelihood estimate minimizes the surprisal (information distance).
Relationships between the mean, median and mode.
For unimodal distributions the following bounds are known and are sharp:
formula_1
formula_2
formula_3
where "μ" is the mean, "ν" is the median, "θ" is the mode, and "σ" is the standard deviation.
For every distribution,
formula_4
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f_p(c) = \\left\\| \\mathbf{x} - \\mathbf{c} \\right\\|_p := \\bigg( \\frac{1}{n} \\sum_{i=1}^n \\left| x_i - c\\right| ^p \\bigg) ^{1/p}"
},
{
"math_id": 1,
"text": " \\frac{| \\theta - \\mu |}{ \\sigma } \\le \\sqrt{ 3 } ,"
},
{
"math_id": 2,
"text": " \\frac{| \\nu - \\mu |}{ \\sigma } \\le \\sqrt{ 0.6 } ,"
},
{
"math_id": 3,
"text": " \\frac{| \\theta - \\nu |}{ \\sigma } \\le \\sqrt{ 3 } ,"
},
{
"math_id": 4,
"text": " \\frac{| \\nu - \\mu |}{ \\sigma } \\le 1."
}
] |
https://en.wikipedia.org/wiki?curid=5794
|
579414
|
Drug design
|
Invention of new medications based on knowledge of a biological target
Drug design, often referred to as rational drug design or simply rational design, is the inventive process of finding new medications based on the knowledge of a biological target. The drug is most commonly an organic small molecule that activates or inhibits the function of a biomolecule such as a protein, which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves the design of molecules that are complementary in shape and charge to the biomolecular target with which they interact and therefore will bind to it. Drug design frequently but not necessarily relies on computer modeling techniques. This type of modeling is sometimes referred to as computer-aided drug design. Finally, drug design that relies on the knowledge of the three-dimensional structure of the biomolecular target is known as structure-based drug design. In addition to small molecules, biopharmaceuticals including peptides and especially therapeutic antibodies are an increasingly important class of drugs and computational methods for improving the affinity, selectivity, and stability of these protein-based therapeutics have also been developed.
Definition.
The phrase "drug design" is similar to ligand design (i.e., design of a molecule that will bind tightly to its target). Although design techniques for prediction of binding affinity are reasonably successful, there are many other properties, such as bioavailability, metabolic half-life, and side effects, that first must be optimized before a ligand can become a safe and effictive drug. These other characteristics are often difficult to predict with rational design techniques.
Due to high attrition rates, especially during clinical phases of drug development, more attention is being focused early in the drug design process on selecting candidate drugs whose physicochemical properties are predicted to result in fewer complications during development and hence more likely to lead to an approved, marketed drug. Furthermore, in vitro experiments complemented with computation methods are increasingly used in early drug discovery to select compounds with more favorable ADME (absorption, distribution, metabolism, and excretion) and toxicological profiles.
Drug targets.
A biomolecular target (most commonly a protein or a nucleic acid) is a key molecule involved in a particular metabolic or signaling pathway that is associated with a specific disease condition or pathology or with the infectivity or survival of a microbial pathogen. Potential drug targets are not necessarily disease causing but must by definition be disease modifying. In some cases, small molecules will be designed to enhance or inhibit the target function in the specific disease modifying pathway. Small molecules (for example receptor agonists, antagonists, inverse agonists, or modulators; enzyme activators or inhibitors; or ion channel openers or blockers) will be designed that are complementary to the binding site of the target. Small molecules (drugs) can be designed so as not to affect any other important "off-target" molecules (often referred to as antitargets) since drug interactions with off-target molecules may lead to undesirable side effects. Due to similarities in binding sites, closely related targets identified through sequence homology have the highest chance of cross reactivity and hence highest side effect potential.
Most commonly, drugs are organic small molecules produced through chemical synthesis, but biopolymer-based drugs (also known as biopharmaceuticals) produced through biological processes are becoming increasingly more common. In addition, mRNA-based gene silencing technologies may have therapeutic applications. For example, nanomedicines based on mRNA can streamline and expedite the drug development process, enabling transient and localized expression of immunostimulatory molecules. In vitro transcribed (IVT) mRNA allows for delivery to various accessible cell types via the blood or alternative pathways. The use of IVT mRNA serves to convey specific genetic information into a person's cells, with the primary objective of preventing or altering a particular disease.
Drug discovery.
Phenotypic drug discovery.
Phenotypic drug discovery is a traditional drug discovery method, also known as forward pharmacology or classical pharmacology. It uses the process of phenotypic screening on collections of synthetic small molecules, natural products, or extracts within chemical libraries to pinpoint substances exhibiting beneficial therapeutic effects. In this approach, the in vivo or in vitro functional activity of drugs (such as extracts or natural products) is discovered first, and target identification is performed afterwards. Phenotypic discovery uses a practical and target-independent approach to generate initial leads, aiming to discover pharmacologically active compounds and therapeutics that operate through novel drug mechanisms. This method allows the exploration of disease phenotypes to find potential treatments for conditions with unknown, complex, or multifactorial origins, where the understanding of molecular targets is insufficient for effective intervention.
Rational drug discovery.
Rational drug design (also called reverse pharmacology) begins with a hypothesis that modulation of a specific biological target may have therapeutic value. In order for a biomolecule to be selected as a drug target, two essential pieces of information are required. The first is evidence that modulation of the target will be disease modifying. This knowledge may come from, for example, disease linkage studies that show an association between mutations in the biological target and certain disease states. The second is that the target is capable of binding to a small molecule and that its activity can be modulated by the small molecule.
Once a suitable target has been identified, the target is normally cloned, produced, and purified. The purified protein is then used to establish a screening assay. In addition, the three-dimensional structure of the target may be determined.
The search for small molecules that bind to the target is begun by screening libraries of potential drug compounds. This may be done by using the screening assay (a "wet screen"). In addition, if the structure of the target is available, a virtual screen may be performed of candidate drugs. Ideally, the candidate drug compounds should be "drug-like", that is they should possess properties that are predicted to lead to oral bioavailability, adequate chemical and metabolic stability, and minimal toxic effects. Several methods are available to estimate druglikeness such as Lipinski's Rule of Five and a range of scoring methods such as lipophilic efficiency. Several methods for predicting drug metabolism have also been proposed in the scientific literature.
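To illustrate how a simple druglikeness filter such as Lipinski's Rule of Five can be applied, the sketch below checks the commonly cited criteria for a single compound. It is only a minimal illustration: the function name and the way the molecular descriptors are supplied are assumptions made for this example, and in practice the descriptor values would be computed by a cheminformatics toolkit.
<syntaxhighlight lang="python">
def passes_rule_of_five(mol_weight, logp, h_bond_donors, h_bond_acceptors):
    """Count Lipinski violations; one common convention allows at most one."""
    violations = sum([
        mol_weight > 500,        # molecular weight above 500 daltons
        logp > 5,                # octanol-water partition coefficient above 5
        h_bond_donors > 5,       # more than 5 hydrogen bond donors
        h_bond_acceptors > 10,   # more than 10 hydrogen bond acceptors
    ])
    return violations <= 1

# Hypothetical descriptor values for a small, lead-like molecule.
print(passes_rule_of_five(mol_weight=350.4, logp=2.1,
                          h_bond_donors=2, h_bond_acceptors=5))   # True
</syntaxhighlight>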
Due to the large number of drug properties that must be simultaneously optimized during the design process, multi-objective optimization techniques are sometimes employed. Finally, because of the limitations in the current methods for predicting activity, drug design is still very much reliant on serendipity and bounded rationality.
Computer-aided drug design.
The most fundamental goal in drug design is to predict whether a given molecule will bind to a target and if so how strongly. Molecular mechanics or molecular dynamics is most often used to estimate the strength of the intermolecular interaction between the small molecule and its biological target. These methods are also used to predict the conformation of the small molecule and to model conformational changes in the target that may occur when the small molecule binds to it. Semi-empirical, ab initio quantum chemistry methods, or density functional theory are often used to provide optimized parameters for the molecular mechanics calculations and also provide an estimate of the electronic properties (electrostatic potential, polarizability, etc.) of the drug candidate that will influence binding affinity.
Molecular mechanics methods may also be used to provide semi-quantitative prediction of the binding affinity. Also, knowledge-based scoring functions may be used to provide binding affinity estimates. These methods use linear regression, machine learning, neural networks, or other statistical techniques to derive predictive binding affinity equations by fitting experimental affinities to computationally derived interaction energies between the small molecule and the target.
Ideally, the computational method will be able to predict affinity before a compound is synthesized and hence in theory only one compound needs to be synthesized, saving enormous time and cost. The reality is that present computational methods are imperfect and provide, at best, only qualitatively accurate estimates of affinity. In practice, it requires several iterations of design, synthesis, and testing before an optimal drug is discovered. Computational methods have accelerated discovery by reducing the number of iterations required and have often provided novel structures.
Computer-aided drug design may be used at any of the following stages of drug discovery:
In order to overcome the insufficient prediction of binding affinity calculated by recent scoring functions, the protein-ligand interaction and compound 3D structure information are used for analysis. For structure-based drug design, several post-screening analyses focusing on protein-ligand interaction have been developed for improving enrichment and effectively mining potential candidates:
Types.
There are two major types of drug design. The first is referred to as ligand-based drug design and the second, structure-based drug design.
Ligand-based.
Ligand-based drug design (or "indirect drug design") relies on knowledge of other molecules that bind to the biological target of interest. These other molecules may be used to derive a pharmacophore model that defines the minimum necessary structural characteristics a molecule must possess in order to bind to the target. A model of the biological target may be built based on the knowledge of what binds to it, and this model in turn may be used to design new molecular entities that interact with the target. Alternatively, a quantitative structure-activity relationship (QSAR), in which a correlation between calculated properties of molecules and their experimentally determined biological activity, may be derived. These QSAR relationships in turn may be used to predict the activity of new analogs.
Structure-based.
Structure-based drug design (or "direct drug design") relies on knowledge of the three dimensional structure of the biological target obtained through methods such as x-ray crystallography or NMR spectroscopy. If an experimental structure of a target is not available, it may be possible to create a homology model of the target based on the experimental structure of a related protein. Using the structure of the biological target, candidate drugs that are predicted to bind with high affinity and selectivity to the target may be designed using interactive graphics and the intuition of a medicinal chemist. Alternatively, various automated computational procedures may be used to suggest new drug candidates.
Current methods for structure-based drug design can be divided roughly into three main categories. The first method is identification of new ligands for a given receptor by searching large databases of 3D structures of small molecules to find those fitting the binding pocket of the receptor using fast approximate docking programs. This method is known as virtual screening.
A second category is de novo design of new ligands. In this method, ligand molecules are built up within the constraints of the binding pocket by assembling small pieces in a stepwise manner. These pieces can be either individual atoms or molecular fragments. The key advantage of such a method is that novel structures, not contained in any database, can be suggested. A third method is the optimization of known ligands by evaluating proposed analogs within the binding cavity.
Binding site identification.
Binding site identification is the first step in structure-based design. If the structure of the target or a sufficiently similar homolog is determined in the presence of a bound ligand, then the ligand should be observable in the structure, in which case location of the binding site is trivial. However, there may be unoccupied allosteric binding sites that may be of interest. Furthermore, it may be that only apoprotein (protein without ligand) structures are available, and the reliable identification of unoccupied sites that have the potential to bind ligands with high affinity is non-trivial. In brief, binding site identification usually relies on identification of concave surfaces on the protein that can accommodate drug-sized molecules and that also possess appropriate "hot spots" (hydrophobic surfaces, hydrogen bonding sites, etc.) that drive ligand binding.
Scoring functions.
Structure-based drug design attempts to use the structure of proteins as a basis for designing new ligands by applying the principles of molecular recognition. Selective high affinity binding to the target is generally desirable since it leads to more efficacious drugs with fewer side effects. Thus, one of the most important principles for designing or obtaining potential new ligands is to predict the binding affinity of a certain ligand to its target (and known antitargets) and use the predicted affinity as a criterion for selection.
One early general-purpose empirical scoring function to describe the binding energy of ligands to receptors was developed by Böhm. This empirical scoring function took the form:
formula_0
where:
A more general thermodynamic "master" equation is as follows:
formula_1
where:
The basic idea is that the overall binding free energy can be decomposed into independent components that are known to be important for the binding process. Each component reflects a certain kind of free energy alteration during the binding process between a ligand and its target receptor. The master equation is the linear combination of these components. According to the Gibbs free energy equation, the relation between the dissociation equilibrium constant, Kd, and the components of the free energy is established.
Various computational methods are used to estimate each of the components of the master equation. For example, the change in polar surface area upon ligand binding can be used to estimate the desolvation energy. The number of rotatable bonds frozen upon ligand binding is proportional to the motion term. The configurational or strain energy can be estimated using molecular mechanics calculations. Finally, the interaction energy can be estimated using methods such as the change in non-polar surface area, statistically derived potentials of mean force, the number of hydrogen bonds formed, etc. In practice, the components of the master equation are fit to experimental data using multiple linear regression. This can be done with a diverse training set including many types of ligands and receptors to produce a less accurate but more general "global" model, or with a more restricted set of ligands and receptors to produce a more accurate but less general "local" model.
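As a concrete illustration of the multiple linear regression step just described, the following sketch fits the weights of the master-equation components to a small set of experimental affinities. All numbers are invented placeholders, and the component values stand in for quantities that would really be computed by the methods above.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical training data: each row holds computed values of the
# master-equation components (desolvation, motion, configuration, interaction)
# for one protein-ligand complex.
X = np.array([
    [1.2, 0.8, 0.5, -6.3],
    [0.9, 1.1, 0.7, -5.1],
    [1.5, 0.6, 0.4, -7.8],
    [0.7, 1.4, 0.9, -4.2],
    [1.1, 0.9, 0.6, -6.0],
])
# Experimentally measured binding free energies for the same complexes.
y = np.array([-8.1, -6.4, -9.5, -5.2, -7.7])

# Add a constant column so the fit includes an intercept, then solve the
# ordinary least-squares problem for the component weights.
X_design = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print("intercept and component weights:", coeffs)

# Predicted binding free energy for a new ligand with computed components.
new_ligand = np.array([1.0, 1.3, 0.7, 0.5, -5.5])   # leading 1.0 matches the intercept column
print("predicted binding free energy:", new_ligand @ coeffs)
</syntaxhighlight>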
Examples.
A particular example of rational drug design involves the use of three-dimensional information about biomolecules obtained from such techniques as X-ray crystallography and NMR spectroscopy. Computer-aided drug design in particular becomes much more tractable when there is a high-resolution structure of a target protein bound to a potent ligand. This approach to drug discovery is sometimes referred to as structure-based drug design. The first unequivocal example of the application of structure-based drug design leading to an approved drug is the carbonic anhydrase inhibitor dorzolamide, which was approved in 1995.
Another case study in rational drug design is imatinib, a tyrosine kinase inhibitor designed specifically for the "bcr-abl" fusion protein that is characteristic for Philadelphia chromosome-positive leukemias (chronic myelogenous leukemia and occasionally acute lymphocytic leukemia). Imatinib is substantially different from previous drugs for cancer, as most agents of chemotherapy simply target rapidly dividing cells, not differentiating between cancer cells and other tissues.
Additional examples include:
<templatestyles src="Div col/styles.css"/>
Drug screening.
Types of drug screening include phenotypic screening, high-throughput screening, and virtual screening. Phenotypic screening is characterized by the process of screening drugs using cellular or animal disease models to identify compounds that alter the phenotype and produce beneficial disease-related effects. Emerging technologies in high-throughput screening substantially enhance processing speed and decrease the required detection volume. Virtual screening is performed by computer, enabling a large number of molecules to be screened quickly and at low cost. Virtual screening uses a range of computational methods that empower chemists to reduce extensive virtual libraries into more manageable sizes.
Case studies.
<templatestyles src="Div col/styles.css"/>
Criticism.
It has been argued that the highly rigid and focused nature of rational drug design suppresses serendipity in drug discovery.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta G_{\\text{bind}} = \\Delta G_{\\text{0}} + \\Delta G_{\\text{hb}} \\Sigma_{h-bonds} + \\Delta G_{\\text{ionic}} \\Sigma_{ionic-int} + \\Delta G_{\\text{lipophilic}} \\left\\vert A \\right\\vert + \\Delta G_{\\text{rot}} \\mathit{NROT} "
},
{
"math_id": 1,
"text": "\\begin{array}{lll}\\Delta G_{\\text{bind}} = -RT \\ln K_{\\text{d}}\\\\[1.3ex]\n\nK_{\\text{d}} = \\dfrac{[\\text{Ligand}] [\\text{Receptor}]}{[\\text{Complex}]}\\\\[1.3ex]\n\n\\Delta G_{\\text{bind}} = \\Delta G_{\\text{desolvation}} + \\Delta G_{\\text{motion}} + \\Delta G_{\\text{configuration}} + \\Delta G_{\\text{interaction}}\\end{array}"
}
] |
https://en.wikipedia.org/wiki?curid=579414
|
57942226
|
Löschian number
|
Integer sequence
In number theory, the numbers of the form "x"2 + "xy" + "y"2 for integer "x", "y" are called the Löschian numbers (or Loeschian numbers). These numbers are named after August Lösch. They are the norms of the Eisenstein integers. They form a set of whole numbers, including zero, whose prime factorizations contain every prime congruent to 2 mod 3 only to an even power (there is no restriction on primes congruent to 0 or 1 mod 3).
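A brute-force check of the defining form can make the definition concrete. The sketch below searches over nonnegative "x" and "y", which suffices because any integer representation can be converted into one with nonnegative entries; the function name is chosen only for this example.
<syntaxhighlight lang="python">
from math import isqrt

def is_loeschian(n):
    """Check whether n = x^2 + x*y + y^2 for some nonnegative integers x, y."""
    if n < 0:
        return False
    limit = isqrt(n)
    for x in range(limit + 1):
        for y in range(x, limit + 1):   # the form is symmetric in x and y
            if x * x + x * y + y * y == n:
                return True
    return False

# First few Loeschian numbers: 0, 1, 3, 4, 7, 9, 12, 13, 16, 19, ...
print([n for n in range(20) if is_loeschian(n)])
</syntaxhighlight>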
|
[
{
"math_id": 0,
"text": "(m^2+m+1)x^2"
},
{
"math_id": 1,
"text": " (x^2 + xy + y^2) "
}
] |
https://en.wikipedia.org/wiki?curid=57942226
|
57945487
|
World Football Elo Ratings
|
Ranking system for men's national association football teams
The World Football Elo Ratings are a ranking system for men's national association football teams that is published by the website eloratings.net. It is based on the Elo rating system but includes modifications to take various football-specific variables into account, like the margin of victory, importance of a match, and home field advantage. Other implementations of the Elo rating system are possible, and there is no single, official Elo ranking for football teams.
Since being developed, the Elo rankings have been found to have the highest predictive capability for football matches. FIFA's official rankings, both the FIFA World Rankings for men and the FIFA Women's World Rankings are based on a modified version of the Elo formula, the men's rankings having switched away from FIFA's own system for matches played since June 2018.
History and overview.
The Elo system, developed by Hungarian-American mathematician Árpád Élő, is used by FIDE, the international chess federation, to rate chess players, and by the European Go Federation, to rate Go players. In 1997, Bob Runyan adapted the Elo rating system to international football and posted the results on the Internet. He was also the first maintainer of the World Football Elo Ratings web site, currently maintained by Kirill Bulygin. Other implementations of the Elo rating system are possible.
The Elo system was adapted for football by adding a weighting for the kind of match, an adjustment for the home team advantage, and an adjustment for goal difference in the match result.
The ratings consider all official international matches for which results are available. Ratings tend to converge on a team's true strength relative to its competitors after about 30 matches. Ratings for teams with fewer than 30 matches are considered provisional.
Comparison with other systems.
A 2009 comparative study of eight methods found that the implementation of the Elo rating system described below had the highest predictive capability for football matches, while the men's FIFA ranking method (2006–2018 system) performed poorly.
The FIFA World Rankings is the official national teams rating system used by the international governing body of football. The FIFA Women's World Rankings system has used a modified version of the Elo formula since 2003. In June 2018, the FIFA ranking switched to an Elo-based ranking as well, starting from the current FIFA rating points. The major difference between the World Football Elo Rating and the new men's FIFA rating system is that the latter does not consider goal differential and counts a penalty shoot-out as a win/loss rather than a draw (neither method distinguishes a win in extra time from a win in regular time).
Calculation principles.
The ratings are based on the following formula:
formula_0
where
formula_1
where:
"Points Change" is rounded to the nearest integer before updating the team rating.
Status of match.
The status of the match is incorporated by the use of a weight constant. The constant reflects the importance of a match, which, in turn, is determined entirely by which tournament the match is in; the weight constant for each major tournament is:
The FIFA adaptation of the Elo rating features 8 weights, with the knockout stages in the World Cup weighing 12 times more than some friendly matches.
Number of goals.
The number of goals is taken into account by use of a goal difference index.
If the game is a draw or is won by one goal
formula_2
If the game is won by two goals
formula_3
If the game is won by three or more goals:
formula_5
Table of examples:
Result of match.
W is the result of the game (1 for a win, 0.5 for a draw, and 0 for a loss). This also holds when a game is won or lost in extra time. If the match is decided on penalties, however, the result of the game is considered a draw (W = 0.5).
Expected result of match.
We is the expected result (win expectancy with a draw counting as 0.5) from the following formula:
formula_6
where "dr" equals the difference in ratings (add 100 points for the home team). So "dr" of 0 gives 0.5, of 120 gives 0.666 to the higher-ranked team and 0.334 to the lower, and of 800 gives 0.99 to the higher-ranked team and 0.01 to the lower.
The FIFA adaptation of the Elo rating does not incorporate a home team advantage and has a larger divisor in the formula (600 vs 400), making the points exchange less sensitive to the rating difference of two teams.
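The following sketch puts the formulas above together for a single match: it computes the win expectancy, the goal difference index, and the rounded points exchange. The weight constant used in the example is only an assumed value, since the actual weights depend on the tournament.
<syntaxhighlight lang="python">
def win_expectancy(rating_diff):
    """W_e = 1 / (10^(-dr/400) + 1); add 100 points to the home team before differencing."""
    return 1.0 / (10 ** (-rating_diff / 400.0) + 1.0)

def goal_difference_index(goal_diff):
    """Goal difference index G as defined above."""
    n = abs(goal_diff)
    if n <= 1:
        return 1.0
    if n == 2:
        return 1.5
    return (11.0 + n) / 8.0

def points_change(K, goal_diff, result, rating_diff):
    """P = K * G * (W - W_e), rounded to the nearest integer."""
    return round(K * goal_difference_index(goal_diff) * (result - win_expectancy(rating_diff)))

# Example: a home team rated 630 beats a visiting team rated 500 by two goals,
# with an assumed weight constant K = 40.
dr = (630 + 100) - 500
print(points_change(K=40, goal_diff=2, result=1.0, rating_diff=dr))   # 13
</syntaxhighlight>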
Examples for clarification.
The same example of a three-team friendly tournament on neutral territory is used as on the FIFA World Rankings page. Beforehand team A had a rating of 630 points, team B 500 points, and team C 480 points. The first table shows the points allocations based on three possible outcomes of the match between the strongest team A and the somewhat weaker team B:
When the difference in strength between the two teams is less, so also will be the difference in points allocation. The next table illustrates how the points would be divided following the same results as above, but with two roughly equally ranked teams, B and C, being involved:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R_n = R_o + P"
},
{
"math_id": 1,
"text": "P = K G (W - W_e)"
},
{
"math_id": 2,
"text": "G = 1"
},
{
"math_id": 3,
"text": "G = \\frac{3}{2}"
},
{
"math_id": 4,
"text": " \\forall "
},
{
"math_id": 5,
"text": "G = \\frac{11+N}{8}"
},
{
"math_id": 6,
"text": "W_e = \\frac{1}{10^{-dr/400} + 1}"
}
] |
https://en.wikipedia.org/wiki?curid=57945487
|
5795043
|
Implicational propositional calculus
|
In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus that uses only one connective, called implication or conditional. In formulas, this binary operation is indicated by "implies", "if ..., then ...", "→", "formula_0", etc.
Functional (in)completeness.
Implication alone is not functionally complete as a logical operator because one cannot form all other two-valued truth functions from it.
For example, the two-place truth function that always returns "false" is not definable from → and arbitrary propositional variables: any formula constructed from → and propositional variables must receive the value "true" when all of its variables are evaluated to true.
It follows that {→} is not functionally complete.
However, if one adds a nullary connective ⊥ for falsity, then one can define all other truth functions. Formulas over the resulting set of connectives {→, ⊥} are called f-implicational. If "P" and "Q" are propositions, then:
Since the above operators are known to be functionally complete, it follows that any truth function can be expressed in terms of → and ⊥.
Axiom system.
The following statements are considered tautologies (irreducible and intuitively true, by definition).
Where in each case, "P", "Q", and "R" may be replaced by any formulas that contain only "→" as a connective. If Γ is a set of formulas and "A" a formula, then formula_1 means that "A" is derivable using the axioms and rules above and formulas from Γ as additional hypotheses.
Łukasiewicz (1948) found an axiom system for the implicational calculus that replaces the schemas 1–3 above with a single schema
He also argued that there is no shorter axiom system.
Basic properties of derivation.
Since all axioms and rules of the calculus are schemata, derivation is closed under substitution:
If formula_2 then formula_3
where σ is any substitution (of formulas using only implication).
The implicational propositional calculus also satisfies the deduction theorem:
If formula_4, then formula_5
As explained in the deduction theorem article, this holds for any axiomatic extension of the system containing axiom schemas 1 and 2 above and modus ponens.
Completeness.
The implicational propositional calculus is semantically complete with respect to the usual two-valued semantics of classical propositional logic. That is, if Γ is a set of implicational formulas, and "A" is an implicational formula entailed by Γ, then formula_1.
Proof.
A proof of the completeness theorem is outlined below. First, using the compactness theorem and the deduction theorem, we may reduce the completeness theorem to its special case with empty Γ, i.e., we only need to show that every tautology is derivable in the system.
The proof is similar to completeness of full propositional logic, but it also uses the following idea to overcome the functional incompleteness of implication. If "A" and "F" are formulas, then "A" → "F" is equivalent to (¬"A*") ∨ "F", where "A*" is the result of replacing in "A" all, some, or none of the occurrences of "F" by falsity. Similarly, ("A" → "F") → "F" is equivalent to "A*" ∨ "F". So under some conditions, one can use them as substitutes for saying "A*" is false or "A*" is true respectively.
We first observe some basic facts about derivability:
Indeed, we can derive "A" → ("B" → "C") using Axiom 1, and then derive "A" → "C" by modus ponens (twice) from Ax. 2.
This follows from (1) by the deduction theorem.
If we further assume "C" → "B", we can derive "A" → "B" using (1), then we derive "C" by modus ponens. This shows formula_6, and the deduction theorem gives formula_7. We apply Ax. 3 to obtain (3).
Let "F" be an arbitrary fixed formula. For any formula "A", we define "A"0
("A" → "F") and "A"1
(("A" → "F") → "F"). Consider only formulas in propositional variables "p"1, ..., "pn". We claim that for every formula "A" in these variables and every truth assignment "e",
We prove (4) by induction on "A". The base case "A" = "pi" is trivial. Let "A"
("B" → "C"). We distinguish three cases:
Now let "F" be a tautology in variables "p"1, ..., "pn". We will prove by reverse induction on "k" = "n"...,0 that for every assignment "e",
The base case "k" = "n" follows from a special case of (4) using
formula_12
and the fact that "F"→"F" is a theorem by the deduction theorem.
Assume that (5) holds for "k" + 1, we will show it for "k". By applying deduction theorem to the induction hypothesis, we obtain
formula_13
by first setting "e"("p""k"+1) = 0 and second setting "e"("p""k"+1) = 1. From this we derive (5) using modus ponens.
For "k" = 0 we obtain that the tautology "F" is provable without assumptions. This is what was to be proved.
This proof is constructive. That is, given a tautology, one could actually follow the instructions and create a proof of it from the axioms. However, the length of such a proof increases exponentially with the number of propositional variables in the tautology, hence it is not a practical method for any but the very shortest tautologies.
The Bernays–Tarski axiom system.
The Bernays–Tarski axiom system is often used. In particular, Łukasiewicz's paper derives the Bernays–Tarski axioms from Łukasiewicz's sole axiom as a means of showing its completeness.
It differs from the axiom schemas above by replacing axiom schema 2, ("P"→("Q"→"R"))→(("P"→"Q")→("P"→"R")), with
which is called "hypothetical syllogism".
This makes derivation of the deduction meta-theorem a little more difficult, but it can still be done.
We show that from "P"→("Q"→"R") and "P"→"Q" one can derive "P"→"R". This fact can be used in lieu of axiom schema 2 to get the meta-theorem.
Satisfiability and validity.
Satisfiability in the implicational propositional calculus is trivial, because every formula is satisfiable: just set all variables to true.
Falsifiability in the implicational propositional calculus is NP-complete, meaning that validity (tautology) is co-NP-complete.
In this case, a useful technique is to presume that the formula is not a tautology and attempt to find a valuation that makes it false. If one succeeds, then it is indeed not a tautology. If every such attempt leads to a contradiction, then it is a tautology.
Example of a non-tautology:
Suppose [("A"→"B")→(("C"→"A")→"E")]→(["F"→(("C"→"D")→"E")]→[("A"→"F")→("D"→"E")]) is false.
Then ("A"→"B")→(("C"→"A")→"E") is true; "F"→(("C"→"D")→"E") is true; "A"→"F" is true; "D" is true; and "E" is false.
Since "D" is true, "C"→"D" is true. So the truth of "F"→(("C"→"D")→"E") is equivalent to the truth of "F"→"E".
Then since "E" is false and "F"→"E" is true, we get that "F" is false.
Since "A"→"F" is true, "A" is false. Thus "A"→"B" is true and ("C"→"A")→"E" is true.
"C"→"A" is false, so "C" is true.
The value of "B" does not matter, so we can arbitrarily choose it to be true.
Summing up, the valuation that sets "B", "C" and "D" to be true and "A", "E" and "F" to be false will make [("A"→"B")→(("C"→"A")→"E")]→(["F"→(("C"→"D")→"E")]→[("A"→"F")→("D"→"E")]) false. So it is not a tautology.
Example of a tautology:
Suppose (("A"→"B")→"C")→(("C"→"A")→("D"→"A")) is false.
Then ("A"→"B")→"C" is true; "C"→"A" is true; "D" is true; and "A" is false.
Since "A" is false, "A"→"B" is true. So "C" is true. Thus "A" must be true, contradicting the fact that it is false.
Thus there is no valuation that makes (("A"→"B")→"C")→(("C"→"A")→("D"→"A")) false. Consequently, it is a tautology.
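The brute-force approach used in these examples can be mechanized directly. The sketch below represents an implicational formula as either a variable name or a nested pair standing for an implication (a representation chosen only for this example) and tests all valuations.
<syntaxhighlight lang="python">
from itertools import product

def variables(formula):
    """Collect the propositional variables of a formula, where a formula is
    either a variable name (a string) or a pair (p, q) meaning p -> q."""
    if isinstance(formula, str):
        return {formula}
    p, q = formula
    return variables(p) | variables(q)

def evaluate(formula, valuation):
    """Evaluate a formula under a truth assignment given as a dict."""
    if isinstance(formula, str):
        return valuation[formula]
    p, q = formula
    return (not evaluate(p, valuation)) or evaluate(q, valuation)

def is_tautology(formula):
    """Check all 2^n valuations of the n variables."""
    vs = sorted(variables(formula))
    return all(evaluate(formula, dict(zip(vs, values)))
               for values in product([False, True], repeat=len(vs)))

# The tautology from the example above: ((A->B)->C) -> ((C->A)->(D->A)).
print(is_tautology(((("A", "B"), "C"), (("C", "A"), ("D", "A")))))   # True
# A -> B, by contrast, is falsified by A true, B false.
print(is_tautology(("A", "B")))                                      # False
</syntaxhighlight>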
Adding an axiom schema.
What would happen if another axiom schema were added to those listed above? There are two cases: (1) it is a tautology; or (2) it is not a tautology.
If it is a tautology, then the set of theorems remains the set of tautologies as before. However, in some cases it may be possible to find significantly shorter proofs for theorems. Nevertheless, the minimum length of proofs of theorems will remain unbounded, that is, for any natural number "n" there will still be theorems that cannot be proved in "n" or fewer steps.
If the new axiom schema is not a tautology, then every formula becomes a theorem (which makes the concept of a theorem useless in this case). What is more, there is then an upper bound on the minimum length of a proof of every formula, because there is a common method for proving every formula. For example, suppose the new axiom schema were (("B"→"C")→"C")→"B". Then (("A"→("A"→"A"))→("A"→"A"))→"A" is an instance (one of the new axioms) and also not a tautology. But [(("A"→("A"→"A"))→("A"→"A"))→"A"]→"A" is a tautology and thus a theorem due to the old axioms (using the completeness result above). Applying modus ponens, we get that "A" is a theorem of the extended system. Then all one has to do to prove any formula is to replace "A" by the desired formula throughout the proof of "A". This proof will have the same number of steps as the proof of "A".
An alternative axiomatization.
The axioms listed above primarily work through the deduction metatheorem to arrive at completeness. Here is another axiom system that aims directly at completeness without going through the deduction metatheorem.
First we have axiom schemas that are designed to efficiently prove the subset of tautologies that contain only one propositional variable.
The proof of each such tautology would begin with two parts (hypothesis and conclusion) that are the same. Then insert additional hypotheses between them. Then insert additional tautological hypotheses (which are true even when the sole variable is false) into the original hypothesis. Then add more hypotheses outside (on the left). This procedure will quickly give every tautology containing only one variable. (The symbol "ꞈ" in each axiom schema indicates where the conclusion used in the completeness proof begins. It is merely a comment, not a part of the formula.)
Consider any formula Φ that may contain "A", "B", "C"1, ..., "C""n" and ends with "A" as its final conclusion. Then we take
as an axiom schema where Φ− is the result of replacing "B" by "A" throughout Φ and Φ+ is the result of replacing "B" by ("A"→"A") throughout Φ. This is a schema for axiom schemas since there are two levels of substitution: in the first, Φ is substituted (with variations); in the second, any of the variables (including both "A" and "B") may be replaced by arbitrary formulas of the implicational propositional calculus. This schema allows one to prove tautologies with more than one variable by considering the case when "B" is false (Φ−) and the case when "B" is true (Φ+).
If the variable that is the final conclusion of a formula takes the value true, then the whole formula takes the value true regardless of the values of the other variables. Consequently if "A" is true, then Φ, Φ−, Φ+ and Φ−→(Φ+→Φ) are all true. So without loss of generality, we may assume that "A" is false. Notice that Φ is a tautology if and only if both Φ− and Φ+ are tautologies. But while Φ has "n"+2 distinct variables, Φ− and Φ+ both have "n"+1. So the question of whether a formula is a tautology has been reduced to the question of whether certain formulas with one variable each are all tautologies. Also notice that Φ−→(Φ+→Φ) is a tautology regardless of whether Φ is, because if Φ is false then either Φ− or Φ+ will be false depending on whether "B" is false or true.
Examples:
Deriving Peirce's law
Deriving Łukasiewicz' sole axiom
Using a truth table to verify Łukasiewicz' sole axiom would require consideration of 16 = 2⁴ cases since it contains 4 distinct variables. In this derivation, we were able to restrict consideration to merely 3 cases: "R" is false and "Q" is false, "R" is false and "Q" is true, and "R" is true. However because we are working within the formal system of logic (instead of outside it, informally), each case required much more effort.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightarrow "
},
{
"math_id": 1,
"text": "\\Gamma\\vdash A"
},
{
"math_id": 2,
"text": "\\Gamma\\vdash A,"
},
{
"math_id": 3,
"text": "\\sigma(\\Gamma)\\vdash\\sigma(A),"
},
{
"math_id": 4,
"text": "\\Gamma,A\\vdash B"
},
{
"math_id": 5,
"text": "\\Gamma\\vdash A\\to B."
},
{
"math_id": 6,
"text": "A\\to C,(A\\to B)\\to C,C\\to B\\vdash C"
},
{
"math_id": 7,
"text": "A\\to C,(A\\to B)\\to C\\vdash(C\\to B)\\to C"
},
{
"math_id": 8,
"text": "(C\\to F)\\to F\\vdash((B\\to C)\\to F)\\to F"
},
{
"math_id": 9,
"text": "B\\to F\\vdash((B\\to C)\\to F)\\to F."
},
{
"math_id": 10,
"text": "\\begin{align}(B\\to F)\\to F,C\\to F,B\\to C&\\vdash B\\to F&&\\text{by (1)}\\\\&\\vdash F&&\\text{by modus ponens,}\\end{align}"
},
{
"math_id": 11,
"text": "(B\\to F)\\to F,C\\to F\\vdash(B\\to C)\\to F"
},
{
"math_id": 12,
"text": " F^{e(F)} = F^1 = ((F \\to F) \\to F)"
},
{
"math_id": 13,
"text": "\\begin{align}p_1^{e(p_1)},\\dots,p_k^{e(p_k)}&\\vdash(p_{k+1}\\to F)\\to F,\\\\\np_1^{e(p_1)},\\dots,p_k^{e(p_k)}&\\vdash((p_{k+1}\\to F)\\to F)\\to F,\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=5795043
|
57951371
|
BWF World Tour Finals
|
Season ending badminton championships
The BWF World Tour Finals, officially the HSBC BWF World Tour Finals, which succeeded the BWF Super Series Finals, is an annual season-ending badminton tournament held every December, in which the players with the most points from that calendar year's BWF World Tour events compete for total prize money of at least US$2,500,000.
Features.
Prize money.
The tournament offers a minimum total prize money of US$2,500,000. The prize money is distributed via the following formula:
formula_0
The prize money distribution (as of the 2023 edition) is:
World ranking points.
Below is the point distribution for each phase of the tournament based on the BWF points system for the BWF World Tour Final event.
Eligibility.
At the end of the BWF World Tour circuit, the top eight players/pairs in the BWF World Tour standings of each discipline, with a maximum of two players/pairs from the same member association, are required to play in a final tournament known as the BWF World Tour Finals.
If two or more players are tied in the ranking, the selection of players will be based on the following criteria:
Results.
<templatestyles src="Reflist/styles.css" />
"As of the 2023 edition"
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Total\\ prize\\ money\\ \\times \\frac{Percentage}{100}"
}
] |
https://en.wikipedia.org/wiki?curid=57951371
|
5795881
|
Jacobi rotation
|
In numerical linear algebra, a Jacobi rotation is a rotation, "Q""k"ℓ, of a 2-dimensional linear subspace of an "n"-dimensional inner product space, chosen to zero a symmetric pair of off-diagonal entries of an "n"×"n" real symmetric matrix, "A", when applied as a similarity transformation:
formula_0
formula_1
It is the core operation in the Jacobi eigenvalue algorithm, which is numerically stable and well-suited to implementation on parallel processors.
Only rows "k" and ℓ and columns "k" and ℓ of "A" will be affected, and that "A"′ will remain symmetric. Also, an explicit matrix for "Q""k"ℓ is rarely computed; instead, auxiliary values are computed and "A" is updated in an efficient and numerically stable way. However, for reference, we may write the matrix as
formula_2
That is, "Q""k"ℓ is an identity matrix except for four entries, two on the diagonal ("q""kk" and "q"ℓℓ, both equal to "c") and two symmetrically placed off the diagonal ("q""k"ℓ and "q"ℓ"k", equal to "s" and −"s", respectively). Here "c" = cos θ and "s" = sin θ for some angle θ; but to apply the rotation, the angle itself is not required. Using Kronecker delta notation, the matrix entries can be written:
formula_3
Suppose "h" is an index other than "k" or ℓ (which must themselves be distinct). Then the similarity update produces, algebraically:
formula_4
formula_5
formula_6
formula_7
formula_8
Numerically stable computation.
To determine the quantities needed for the update, we must solve the off-diagonal equation for zero. This implies that:
formula_9
Set β to half of this quantity:
formula_10
If "a""k"ℓ is zero we can stop without performing an update, thus we never divide by zero. Let "t" be tan θ. Then with a few trigonometric identities we reduce the equation to:
formula_11
For stability we choose the solution:
formula_12
From this we may obtain "c" and "s" as:
formula_13
formula_14
Although we now could use the algebraic update equations given previously, it may be preferable to rewrite them. Let:
formula_15
so that ρ = tan(θ/2). Then the revised update equations are:
formula_16
formula_17
formula_18
formula_19
formula_20
As previously remarked, we need never explicitly compute the rotation angle θ. In fact, we can reproduce the symmetric update determined by "Q""k"ℓ by retaining only the three values "k", ℓ, and "t", with "t" set to zero for a null rotation.
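A direct translation of the update above into code might look like the following NumPy sketch, which applies one rotation in place using the numerically stable formulas for "t", "c" and "s"; the function name and interface are chosen only for this illustration. In a full eigenvalue solver one would additionally accumulate the rotations if eigenvectors are required.
<syntaxhighlight lang="python">
import numpy as np

def jacobi_rotation(A, k, l):
    """Apply one Jacobi rotation in place so that A[k, l] and A[l, k] become zero.
    A is assumed to be a real symmetric NumPy array and k, l distinct indices."""
    if A[k, l] == 0.0:
        return A                       # nothing to do, and we avoid dividing by zero
    beta = (A[l, l] - A[k, k]) / (2.0 * A[k, l])
    # numerically stable root of t^2 + 2*beta*t - 1 = 0 (take t = 1 when beta = 0)
    t = (1.0 if beta >= 0.0 else -1.0) / (abs(beta) + np.hypot(beta, 1.0))
    c = 1.0 / np.hypot(t, 1.0)
    s = c * t
    rho = s / (1.0 + c)                # equal to (1 - c)/s, written to avoid cancellation

    a_kk, a_ll, a_kl = A[k, k], A[l, l], A[k, l]
    for h in range(A.shape[0]):
        if h == k or h == l:
            continue
        a_hk, a_hl = A[h, k], A[h, l]
        A[h, k] = A[k, h] = a_hk - s * (a_hl + rho * a_hk)
        A[h, l] = A[l, h] = a_hl + s * (a_hk - rho * a_hl)
    A[k, k] = a_kk - t * a_kl
    A[l, l] = a_ll + t * a_kl
    A[k, l] = A[l, k] = 0.0
    return A
</syntaxhighlight>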
Tridiagonal example.
Some applications may require multiple zero entries in a similar matrix, possibly in the form of a tridiagonal matrix. Since a Jacobi rotation may reintroduce nonzero values in cells that were previously zeroed, it is usually not possible to achieve tridiagonalization by simply zeroing each off-tridiagonal cell individually in a medium to large matrix. However, if Jacobi rotations are repeatedly performed on the above-tridiagonal cell with the highest absolute value, using an adjacent cell just below or to the left of it as the rotation cell, then all of the off-tridiagonal cells are expected to converge to zero after several iterations. In the example below, formula_21 is a 5×5 matrix that is to be tridiagonalized into a similar matrix, formula_22.
formula_23
To tridiagonalize matrix formula_21 into matrix formula_22, the off-tridiagonal cells [1,3], [1,4], [1,5], [2,4], [2,5], and [3,5] must continue to be iteratively zeroed until the maximum absolute value of those cells is below an acceptable convergence threshold. This example will use 1.e-14. The cells below the diagonal will be zeroed automatically, due to the symmetric nature of the matrix. The first Jacobi rotation will be on the off-tridiagonal cell with the highest absolute value, which by inspection is [1,4] with a value of 11. To make this entry zero, the condition specified in the above equations must be met for the cell coordinates to be zeroed (formula_24) and for the selected rotation coordinates of formula_25 (formula_26); the corresponding computation is reproduced below for the first iteration.
formula_27
The first rotation iteration, formula_28, produces a matrix with cells [1,4] and [4,1] zeroed, as expected. Furthermore, the eigenvalues and determinant of formula_28 are identical to those of formula_21, and formula_28 is also symmetric, confirming that the Jacobi rotation was performed correctly. The next iteration for formula_29 will select cell [2,5], which contains the highest absolute value, 4.8001142, of all the cells to be zeroed.
After 10 iterations of zeroing the cell with the maximum absolute value using Jacobi rotations on the cell just below it, the maximum absolute value of all off-tridiagonal cells is 2.6e-15. Assuming this convergence criterion is acceptably low for the application at hand, the similar tridiagonalized matrix formula_22 is shown below.
formula_30
Since formula_21 and formula_22 have identical eigenvalues and determinants and formula_22 is also symmetric, formula_21 and formula_22 are similar matrices with formula_22 being tridiagonalized.
Eigenvalues example.
Jacobi rotations can be used to extract the eigenvalues in a similar manner to the tridiagonalization example above, but zeroing all of the cells above the diagonal rather than only those outside the tridiagonal band, and performing each rotation directly on the cell to be zeroed rather than on an adjacent cell.
Starting with the same matrix formula_21 as the tridiagonal example,
formula_23
The first Jacobi rotation will be on the off-diagonal cell with the highest absolute value, which by inspection is [1,4] with a value of 11, and the rotation cell will also be [1,4], formula_31 in the equations above. The rotation angle is the result of a quadratic solution, but it can be seen in the equation that if the matrix is symmetric, then a real solution is assured.
formula_32
The first rotation iteration, formula_28, produces a matrix with cells [1,4] and [4,1] zeroed, as expected. Furthermore, the eigenvalues and determinant of formula_28 are identical to those of formula_21, and formula_28 is also symmetric, confirming that the Jacobi rotation was performed correctly. The next iteration for formula_29 will select cell [3,4], which contains the highest absolute value, 8.5794421, of all the cells to be zeroed.
After 25 iterations of zeroing the cell with the maximum absolute value using Jacobi rotations performed directly on that cell, the maximum absolute value of all off-diagonal cells is 9.0233029E-11. Assuming this convergence criterion is acceptably low for the application at hand, the similar diagonalized matrix formula_22 is shown below.
formula_33
The eigenvalues are now displayed across the diagonal, and may be directly extracted for use elsewhere.
Since formula_21 and formula_22 have identical eigenvalues and determinants and formula_22 is also symmetric, formula_21 and formula_22 are similar matrices with formula_22 being successfully diagonalized.
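For reference, the classical Jacobi eigenvalue iteration sketched below repeats the rotation of the section on numerically stable computation, each time targeting the off-diagonal entry of largest magnitude, until all off-diagonal entries fall below a tolerance. It builds the rotation matrix explicitly for clarity rather than updating in place, and the input is the 5×5 example matrix used above.
<syntaxhighlight lang="python">
import numpy as np

def jacobi_eigenvalues(A, tol=1e-10, max_rotations=1000):
    """Approximate the eigenvalues of a real symmetric matrix by repeatedly
    zeroing the off-diagonal entry of largest magnitude with a similarity
    transformation Q^T A Q, as described above."""
    A = np.array(A, dtype=float, copy=True)
    n = A.shape[0]
    for _ in range(max_rotations):
        # locate the off-diagonal entry of largest magnitude
        k, l = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(A[ij]))
        if abs(A[k, l]) < tol:
            break
        beta = (A[l, l] - A[k, k]) / (2.0 * A[k, l])
        t = (1.0 if beta >= 0.0 else -1.0) / (abs(beta) + np.hypot(beta, 1.0))
        c = 1.0 / np.hypot(t, 1.0)
        s = c * t
        Q = np.eye(n)                  # explicit rotation matrix, for clarity only
        Q[k, k] = Q[l, l] = c
        Q[k, l] = s
        Q[l, k] = -s
        A = Q.T @ A @ Q
    return np.sort(np.diag(A))

A = [[ 2,  5,  9, 11,  7],
     [ 5,  3,  6, -2,  5],
     [ 9,  6,  7,  3,  1],
     [11, -2,  3,  1,  3],
     [ 7,  5,  1,  3,  4]]
print(jacobi_eigenvalues(A))
# roughly -11.788758, -5.3682585, 4.1714549, 5.9308002, 24.054762, matching the values quoted above
</syntaxhighlight>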
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " A \\mapsto Q_{k\\ell}^T A Q_{k\\ell} = A' . \\,\\! "
},
{
"math_id": 1,
"text": "\n\\begin{bmatrix}\n {*} & & & \\cdots & & & * \\\\\n & \\ddots & & & & & \\\\\n & & a_{kk} & \\cdots & a_{k\\ell} & & \\\\\n \\vdots & & \\vdots & \\ddots & \\vdots & & \\vdots \\\\\n & & a_{\\ell k} & \\cdots & a_{\\ell\\ell} & & \\\\\n & & & & & \\ddots & \\\\\n {*} & & & \\cdots & & & *\n\\end{bmatrix}\n\\to\n\\begin{bmatrix}\n {*} & & & \\cdots & & & * \\\\\n & \\ddots & & & & & \\\\\n & & a'_{kk} & \\cdots & 0 & & \\\\\n \\vdots & & \\vdots & \\ddots & \\vdots & & \\vdots \\\\\n & & 0 & \\cdots & a'_{\\ell\\ell} & & \\\\\n & & & & & \\ddots & \\\\\n {*} & & & \\cdots & & & *\n\\end{bmatrix}.\n"
},
{
"math_id": 2,
"text": "\nQ_{k\\ell} = \n\\begin{bmatrix}\n 1 & & & & & & \\\\\n & \\ddots & & & & 0 & \\\\\n & & c & \\cdots & s & & \\\\\n & & \\vdots & \\ddots & \\vdots & & \\\\\n & & -s & \\cdots & c & & \\\\\n & 0 & & & & \\ddots & \\\\\n & & & & & & 1\n\\end{bmatrix} .\n"
},
{
"math_id": 3,
"text": " q_{ij} = \n\\delta_{ij} + (\\delta_{ik}\\delta_{jk} \n+ \\delta_{i\\ell}\\delta_{j\\ell})(c-1) + (\\delta_{ik}\\delta_{j\\ell} \n- \\delta_{i\\ell}\\delta_{jk})s . \\,\\!\n"
},
{
"math_id": 4,
"text": " a'_{hk} = a'_{kh} = c a_{hk} - s a_{h\\ell} \\,\\! "
},
{
"math_id": 5,
"text": " a'_{h\\ell} = a'_{\\ell h} = c a_{h\\ell} + s a_{hk} \\,\\! "
},
{
"math_id": 6,
"text": " a'_{k\\ell} = a'_{\\ell k} = (c^2-s^2)a_{k\\ell} + sc (a_{kk} - a_{\\ell\\ell}) = 0 \\,\\! "
},
{
"math_id": 7,
"text": " a'_{kk} = c^2 a_{kk} + s^2 a_{\\ell\\ell} - 2 s c a_{k\\ell} \\,\\! "
},
{
"math_id": 8,
"text": " a'_{\\ell\\ell} = s^2 a_{kk} + c^2 a_{\\ell\\ell} + 2 s c a_{k\\ell}. \\,\\! "
},
{
"math_id": 9,
"text": " \\frac{c^2-s^2}{sc} = \\frac{a_{\\ell\\ell} - a_{kk}}{a_{k\\ell}} . "
},
{
"math_id": 10,
"text": " \\beta = \\frac{a_{\\ell\\ell} - a_{kk}}{2 a_{k\\ell}} . "
},
{
"math_id": 11,
"text": " t^2 + 2\\beta t - 1 = 0 . \\,\\! "
},
{
"math_id": 12,
"text": " t = \\frac{\\sgn(\\beta)}{|\\beta|+\\sqrt{\\beta^2+1}} . "
},
{
"math_id": 13,
"text": " c = \\frac{1}{\\sqrt{t^2+1}} \\,\\! "
},
{
"math_id": 14,
"text": " s = c t \\,\\! "
},
{
"math_id": 15,
"text": " \\rho= \\frac{1-c}{s} , "
},
{
"math_id": 16,
"text": " a'_{hk} = a'_{kh} = a_{hk} - s (a_{h\\ell} + \\rho a_{hk}) \\,\\! "
},
{
"math_id": 17,
"text": " a'_{h\\ell} = a'_{\\ell h} = a_{h\\ell} + s (a_{hk} - \\rho a_{h\\ell}) \\,\\! "
},
{
"math_id": 18,
"text": " a'_{k\\ell} = a'_{\\ell k} = 0 \\,\\! "
},
{
"math_id": 19,
"text": " a'_{kk} = a_{kk} - t a_{k \\ell} \\,\\! "
},
{
"math_id": 20,
"text": " a'_{\\ell\\ell} = a_{\\ell\\ell} + t a_{k \\ell} \\,\\! "
},
{
"math_id": 21,
"text": "A"
},
{
"math_id": 22,
"text": "T"
},
{
"math_id": 23,
"text": "\\begin{align}\n&A=\n\\begin{bmatrix} \n2 & 5 &9 &11 &7\\\\ \n5 & 3 &6 &-2 &5\\\\\n9 & 6 &7 &3 &1\\\\\n11 & -2 &3 &1 &3\\\\\n7 & 5 &1 &3 &4\\\\\n\\end{bmatrix} \\\\\n\n&\\text{eigenvalues of }A = \\begin{bmatrix}4.1714549 &5.9308002 &-5.3682585 &24.054762 &-11.788758 \\end{bmatrix} \\\\\n&|A| = 37662\n\\end{align}"
},
{
"math_id": 24,
"text": "h=1, l=4"
},
{
"math_id": 25,
"text": "S"
},
{
"math_id": 26,
"text": "h=1, k=3"
},
{
"math_id": 27,
"text": "\\begin{align} \\\\\n&9 \\sin(\\theta) + 11\\cos(\\theta) = 0 \\\\\n& \\theta = \\tan^{-1}(-11/9) = -0.8850668 \\\\\n& \\sin(\\theta) = -0.77395730 \\\\\n& \\cos(\\theta) = 0.66323890 \\\\\n& \\\\\n&\\text{More conveniountly:} \\\\\n&r = \\sqrt(11^2+9^2) = 14.2126704 \\\\\n&\\sin(\\theta) = -11/r = -0.77395730 \\\\\n&\\cos(\\theta) = 9/r = 0.66323890 \\\\\n&\\\\\n&S=\n\\begin{bmatrix} \n&1 &0 &0 &0 &0 \\\\\n&0 &0.66323890 &0 &-0.77395730 &0 \\\\\n&0 &0 &1 &0 &0 \\\\\n&0 &0.77395730 &0 &0.66323890 &0 \\\\\n&0 &0 &0 &0 &1 \\\\\n\\end{bmatrix} \\\\\n&\\\\\n&T_1=S^TAS = \n\\begin{bmatrix} \n2 &12.083046 &9 &0 &7\\\\ \n12.083046 &-0.16438356 &5.2139171 &0.56164384 &4.8001142\\\\\n9 &5.2139171 &7 &-4.22079 &1\\\\\n0 &0.56164384 &-4.22079 &4.1643836 &-3.3104236\\\\\n7 &4.8001142 &1 &-3.3104236 &4\\\\\n\\end{bmatrix} \\\\\n&\\text{eigenvalues of }T_1 = \\begin{bmatrix}4.1714549 &5.9308002 &-5.3682585 &24.054762 &-11.788758 \\end{bmatrix} \\\\\n&|T_1| = 37662\n\\end{align}"
},
{
"math_id": 28,
"text": "T_1"
},
{
"math_id": 29,
"text": "T_2"
},
{
"math_id": 30,
"text": "\\begin{align}\n&T=\n\\begin{bmatrix} \n2 & 16.613248 &0 &0 &0\\\\ \n16.613248 &10.184783 &5.1109346 &0 &0\\\\\n0 &5.1109346 &4.0304188 &4.5541372 &0\\\\\n0 &0 &4.5541372 &-3.3901672 &0.43515335\\\\\n0 &0 &0 &0.43515335 &4.1749658\\\\\n\\end{bmatrix} \\\\\n\n&\\text{eigenvalues of }T = \\begin{bmatrix}4.1714549 &5.9308002 &-5.3682585 &24.054762 &-11.788758 \\end{bmatrix} \\\\\n&|T| = 37662\n\\end{align}"
},
{
"math_id": 31,
"text": "l = 4, k = 1"
},
{
"math_id": 32,
"text": "\\begin{align} \\\\\n&11(cos^2(\\theta) - sin^2(\\theta)) + (2-1)cos(\\theta)sin(\\theta) = 0 \\\\\n&\\text{dividing by }cos^2(\\theta) \\\\\n&tan^2(\\theta) - tan(\\theta)/11 - 1 = 0 \\\\\n&tan(\\theta) = [1/11 + \\sqrt{1/11^2 + 4}]/2 = 1.0464871 \\\\\n&\\theta = tan^{-1}(1.0464871) = -0.80810980 \\\\\n&sin(\\theta) = 0.72298259 \\\\\n&cos(\\theta) = 0.69086625 \\\\\n& \\\\\n&\\text{More conveniountly:} \\\\\n&r = sqrt(1.0464871^2+1) = 1.4474582 \\\\\n&sin(\\theta) = 1.0464871/r = 0.72298259 \\\\\n&cos(\\theta) = 1/r = 0.69086625 \\\\\n&\\\\\n&S=\n\\begin{bmatrix} \n&0.69086625 &0 &0 &0.72298259 &0 \\\\\n&0 &1 &0 &0 &0 \\\\\n&0 &0 &1 &0 &0 \\\\\n&-0.72298259 &0 &0 &0.69086625 &0 \\\\\n&0 &0 &0 &0 &1 \\\\\n\\end{bmatrix} \\\\\n&\\\\\n&T_1=S^TAS = \n\\begin{bmatrix} \n-9.5113578 &4.9002964 &4.0488484 &0 &2.6671159\\\\ \n4.9002964 &3 &6 &2.2331805 &5\\\\\n4.0488484 &6 &7 &8.5794421 &1\\\\\n0 &2.2331805 &8.5794421 &12.511358 &7.1334769\\\\\n2.6671159 &5 &1 &7.1334769 &4\\\\\n\\end{bmatrix} \\\\\n&\\text{eigenvalues of }T_1 = \\begin{bmatrix}4.1714549 &5.9308002 &-5.3682585 &24.054762 &-11.788758 \\end{bmatrix} \\\\\n&|T_1| = 37662\n\\end{align}"
},
{
"math_id": 33,
"text": "\\begin{align}\n&T=\n\\begin{bmatrix} \n24.054762 &0 &0 &0 &0\\\\ \n0 &5.9308002 &0 &0 &0\\\\\n0 &0 &-11.788758 &0 &0\\\\\n0 &0 &0 &4.1714549 &0\\\\\n0 &0 &0 &0 &-5.3682585\\\\\n\\end{bmatrix} \\\\\n\n&\\text{eigenvalues of }T = \\begin{bmatrix}4.1714549 &5.9308002 &-5.3682585 &24.054762 &-11.788758 \\end{bmatrix} \\\\\n&|T| = 37662\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=5795881
|
57967381
|
William Hamilton Meeks, III
|
American mathematician
William Hamilton Meeks III (born 8 August 1947 in Washington, DC) is an American mathematician, specializing in differential geometry and minimal surfaces.
Meeks studied at the University of California, Berkeley, with a bachelor's degree in 1971, a master's degree in 1974, and a Ph.D. in 1975 under supervisor H. Blaine Lawson with the thesis "The Conformal Structure and Geometry of Triply Periodic Minimal Surfaces in formula_0". He was an assistant professor in 1975–1977 at the University of California, Los Angeles (UCLA), in 1977–1978 at the Instituto de Matemática Pura e Aplicada (IMPA), and in 1978–1979 at Stanford University. From 1979 to 1983 he was a professor at IMPA. He was from 1983 to 1984 a visiting member of the Institute for Advanced Study and from 1984 to 1986 a professor at Rice University, with the academic year 1985–1986 spent as a visiting professor at the University of California, Santa Barbara. From 1986 to 2018 he was the George David Birkhoff Professor of Mathematics at the University of Massachusetts, Amherst. He is currently at the Institute for Advanced Study after assuming professor emeritus status at UMass Amherst.
He is known as an expert on minimal surfaces and their computer graphics visualization; on the latter subject he has collaborated with David Allen Hoffman. For the academic year 2006/07 Meeks was a Guggenheim Fellow.
In 1986 at the International Congress of Mathematicians in Berkeley, he was Invited Speaker with talk "Recent progress on the geometry of surfaces in formula_0 and on the use of computer graphics as a research tool".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " R ^ 3 "
}
] |
https://en.wikipedia.org/wiki?curid=57967381
|
57967986
|
Equivalence number method
|
The equivalence number method is a cost calculation method for co-production in cost and activity accounting. The resulting costs of the input factors are allocated to the individual products according to a weighting key, the so-called equivalence numbers.
Description.
As with the other cost allocation methods, the conservation of the cost sum applies, that is:
formula_0
The main product, usually the product with the highest physical or economic output, receives, for example, the equivalence number 1. On the basis of selected indicators (average market prices, physical properties, etc.), other equivalence numbers are formed using suitable ratios between the different co-products. Multiplying the equivalence numbers by the production or sales figures yields the allocation keys for a specific product type. From these, the cost of a co-product can be calculated, both for main products and for by-products.
Application examples.
An airline can determine the cost of its transportation service by allocating costs between air freight and passengers according to weight. The average passenger weight of the booked seats is compared to the weight of the loaded air cargo containers.
In a refinery, one can take crude oil as the input and gasoline, diesel, and heavy fuel oil, as well as (flare) losses, as the outputs. The equivalence number method can use the energy content of the products as the allocation key; the energy content E of each product is the product of its energy density and its production quantity.
In the cogeneration plants, the Carnot method allocates the fuel to the products useful heat and electrical work. The weighting key is the exergy content of the output energies.
In the alternative generation method, the key is thermal and weighted electrical efficiency, where the weighting factor is the ratio of thermal to electrical reference efficiencies (γ = ηth, ref/ηel,ref).
Criticism.
Criticism of the equivalence number method is justified by the fact that completely arbitrary and random keys can be chosen. For example, in the case of allocating the potable water bill in a house with only one common meter, the water consumption could be divided according to the number of occupants per apartment or the apartment's net dwelling area in m2.
Mathematical background.
From a one-dimensional input I, a two-dimensional output is assumed with "O1 = f1(I) * I" and "O2 = f2(I) * I".
Note: One interpretation for "f" is a conversion efficiency from the input to the respective output. More than 2 co-products are also conceivable.
The costs "k1", "k2" are the variable costs of the two outputs which need to be determined. "kI" represents the known variable costs of the input. "Kvar" denotes the respective sum of the variable costs. "a1" and "a2" are the allocation factors for the respective output, i.e. they describe the proportion of the input that is assigned to a co-product.
The weighting keys are "f1" and "f2":
formula_1
This results in specific variable costs "k1" and "k2":
formula_2
According to the introducing relation of the cost allocation, the following applies:
formula_3
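A small numerical sketch may make the allocation concrete. It follows the allocation rule from the description above (equivalence number times quantity as the key) for two co-products of a refinery-like process; all numbers are purely illustrative.
<syntaxhighlight lang="python">
# Illustrative equivalence number allocation for two co-products.
input_cost = 1000.0                    # total variable cost of the crude oil input

# Output quantities and equivalence numbers (e.g. derived from energy content).
outputs = {
    "gasoline": {"quantity": 40.0, "equivalence": 1.0},    # main product receives 1
    "diesel":   {"quantity": 50.0, "equivalence": 0.95},   # example value for the co-product
}

total_key = sum(p["equivalence"] * p["quantity"] for p in outputs.values())

allocated = {}
for name, p in outputs.items():
    share = p["equivalence"] * p["quantity"] / total_key   # allocation factor
    allocated[name] = share * input_cost
    unit_cost = allocated[name] / p["quantity"]
    print(f"{name}: share {share:.3f}, allocated cost {allocated[name]:.2f}, unit cost {unit_cost:.3f}")

# Conservation of the cost sum: the allocated costs add up to the input cost
# (up to floating-point rounding).
print(sum(allocated.values()))
</syntaxhighlight>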
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{sum of cost input} = \\text{sum of cost output}"
},
{
"math_id": 1,
"text": "K_1^{var} = a_1 \\cdot K_I^{var} = \\frac{f_1}{f_1 + f_2} \\cdot k_I \\cdot I \\quad \\text{respectively} \\quad K_2^{var} = a_2 \\cdot K_I^{var} = \\frac{f_2}{f_1 + f_2} \\cdot k_I \\cdot I"
},
{
"math_id": 2,
"text": "k_1 = \\frac{K^{var}_1}{O_1} = \\frac{K^{var}_1}{f_1 \\cdot I} \\quad \\text{respectively} \\quad k_2 = \\frac{K^{var}_2}{O_2} = \\frac{K^{var}_2}{f_2 \\cdot I}"
},
{
"math_id": 3,
"text": " k_I \\cdot I = k_1 \\cdot O_1 + k_2 \\cdot O_2 \\quad \\text{or} \\quad K_I^{var} = K_1^{var} + K_2^{var}"
}
] |
https://en.wikipedia.org/wiki?curid=57967986
|
5797
|
Cluster sampling
|
Sampling methodology in statistics
In statistics, cluster sampling is a sampling plan used when mutually homogeneous yet internally heterogeneous groupings are evident in a statistical population. It is often used in marketing research.
In this sampling plan, the total population is divided into these groups (known as clusters) and a simple random sample of the groups is selected. The elements in each cluster are then sampled. If all elements in each sampled cluster are sampled, then this is referred to as a "one-stage" cluster sampling plan. If a simple random subsample of elements is selected within each of these groups, this is referred to as a "two-stage" cluster sampling plan. A common motivation for cluster sampling is to reduce the total number of interviews and costs given the desired accuracy. For a fixed sample size, the expected random error is smaller when most of the variation in the population is present internally within the groups, and not between the groups.
Cluster elemental.
The population within a cluster should ideally be as heterogeneous as possible, but there should be homogeneity between clusters. Each cluster should be a small-scale representation of the total population. The clusters should be mutually exclusive and collectively exhaustive. A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are sampled. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters.
The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled. A common motivation for cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the motivation is to increase precision.
There is also multistage cluster sampling, where at least two stages are taken in selecting elements from clusters.
When clusters are of different sizes.
Without modifying the estimated parameter, cluster sampling is unbiased when the clusters are approximately the same size. In this case, the parameter is computed by combining all the selected clusters. When the clusters are of different sizes there are several options:
One method is to sample clusters and then survey all elements in that cluster. Another method is a two-stage method of sampling a fixed proportion of units (be it 5% or 50%, or another number, depending on cost considerations) from within each of the selected clusters. Relying on the sample drawn from these options will yield an unbiased estimator. However, the sample size is no longer fixed upfront. This leads to a more complicated formula for the standard error of the estimator, as well as issues with the optics of the study plan (since the power analysis and the cost estimations often relate to a specific sample size).
A third possible solution is to use probability proportionate to size sampling. In this sampling plan, the probability of selecting a cluster is proportional to its size, so a large cluster has a greater probability of selection than a small cluster. The advantage here is that when clusters are selected with probability proportionate to size, the same number of interviews should be carried out in each sampled cluster so that each unit sampled has the same probability of selection.
Applications of cluster sampling.
An example of cluster sampling is area sampling or geographical cluster sampling. Each cluster is a geographical area in an area sampling frame. Because a geographically dispersed population can be expensive to survey, greater economy than simple random sampling can be achieved by grouping several respondents within a local area into a cluster. It is usually necessary to increase the total sample size to achieve equivalent precision in the estimators, but cost savings may make such an increase in sample size feasible.
For the organization of a population census, the first step is usually dividing the overall geographic area into enumeration areas or census tracts for the field work organization. Enumeration areas may be also useful as first-stage units for cluster sampling in many types of surveys. When a population census is outdated, the list of individuals should not be directly used as sampling frame for a socio-economic survey. Updating the whole census is economically unfeasible. A good alternative may be keeping the old enumeration areas, with some update in highly dynamic areas, such as urban suburbs, selecting a sample of enumeration areas and updating the list of individuals or households only in the selected enumeration areas.
Cluster sampling is used to estimate low mortalities in cases such as wars, famines and natural disasters.
Fisheries science.
It is almost impossible to take a simple random sample of fish from a population, which would require that individuals are captured individually and at random. This is because fishing gears capture fish in groups (or clusters).
In commercial fisheries sampling, the costs of operating at sea are often too large to select hauls individually and at random. Therefore, observations are further clustered by either vessel or fishing trip.
Economics.
The World Bank has applied adaptive cluster sampling to study informal businesses in developing countries in a cost-efficient manner, as the informal sector is not captured by official records and is too expensive to be studied through simple random sampling. The approach follows a two-stage sampling design whereby adaptive cluster sampling is used to generate an estimate of the universe of informal businesses in operation, while the second stage obtains a random sample about the characteristics of those businesses.
Advantages.
Major use: when the sampling frame of all elements is not available, we can resort only to cluster sampling.
More on cluster sampling.
Two-stage cluster sampling.
Two-stage cluster sampling, a simple case of multistage sampling, is obtained by selecting cluster samples in the first stage and then selecting a sample of elements from every sampled cluster. Consider a population of "N" clusters in total. In the first stage, "n" clusters are selected using the ordinary cluster sampling method. In the second stage, simple random sampling is usually used. It is used separately in every cluster and the numbers of elements selected from different clusters are not necessarily equal. The total number of clusters "N", the number of clusters selected "n", and the numbers of elements from selected clusters need to be pre-determined by the survey designer. Two-stage cluster sampling aims at minimizing survey costs and at the same time controlling the uncertainty related to estimates of interest. This method can be used in health and social sciences. For instance, researchers used two-stage cluster sampling to generate a representative sample of the Iraqi population to conduct mortality surveys. Sampling in this method can be quicker and more reliable than other methods, which is why this method is now used frequently.
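As a minimal sketch of the two-stage selection described above (the cluster labels and sizes below are hypothetical and chosen only for illustration), the first stage draws a simple random sample of clusters and the second stage draws a simple random subsample of elements within each selected cluster:

import random

def two_stage_sample(clusters, n_clusters, m_per_cluster, seed=0):
    """Stage 1: simple random sample of clusters; stage 2: simple random subsample within each."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    return {c: rng.sample(list(clusters[c]), min(m_per_cluster, len(clusters[c]))) for c in chosen}

# Hypothetical population of N = 5 clusters of unequal sizes
population = {"A": range(10), "B": range(8), "C": range(12), "D": range(6), "E": range(9)}
print(two_stage_sample(population, n_clusters=2, m_per_cluster=3))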
Inference when the number of clusters is small.
Cluster sampling methods can lead to significant bias when working with a small number of clusters. For instance, it can be necessary to cluster at the state or city-level, units that may be small and fixed in number. Microeconometrics methods for panel data often use short panels, which is analogous to having few observations per cluster and many clusters. The small cluster problem can be viewed as an incidental parameter problem. While the point estimates can be estimated reasonably precisely if the number of observations per cluster is sufficiently high, we need the number of clusters formula_0 for the asymptotics to kick in. If the number of clusters is low the estimated covariance matrix can be downward biased.
Small numbers of clusters are a risk when there is serial correlation or when there is intraclass correlation as in the Moulton context. When having few clusters, we tend to underestimate serial correlation across observations when a random shock occurs, or the intraclass correlation in a Moulton setting. Several studies have highlighted the consequences of serial correlation and highlighted the small-cluster problem.
In the framework of the Moulton factor, an intuitive explanation of the small cluster problem can be derived from the formula for the Moulton factor. Assume for simplicity that the number of observations per cluster is fixed at "n". Below, formula_1 stands for the covariance matrix adjusted for clustering, formula_2 stands for the covariance matrix not adjusted for clustering, and ρ stands for the intraclass correlation:
formula_3
The ratio on the left-hand side indicates how much the unadjusted scenario overestimates the precision. Therefore, a high number means a strong downward bias of the estimated covariance matrix. A small cluster problem can be interpreted as a large n: when the data is fixed and the number of clusters is low, the number of observations within a cluster can be high. It follows that inference, when the number of clusters is small, will not have the correct coverage.
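A minimal sketch of the inflation implied by the formula above, with hypothetical values for the cluster size and the intraclass correlation (the numbers are illustrative only, not taken from any study):

import math

def moulton_factor(n_per_cluster, rho):
    """Variance inflation of the cluster-adjusted over the unadjusted covariance estimate."""
    return 1 + (n_per_cluster - 1) * rho

# Hypothetical values: 50 observations per cluster and an intraclass correlation of 0.05
factor = moulton_factor(50, 0.05)
print(factor)             # 3.45: unadjusted variances understate the true variance roughly 3.5-fold
print(math.sqrt(factor))  # about 1.86: the factor by which standard errors should be inflated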
Several solutions for the small cluster problem have been proposed. One can use a bias-corrected cluster-robust variance matrix, make T-distribution adjustments, or use bootstrap methods with asymptotic refinements, such as the percentile-t or wild bootstrap, that can lead to improved finite sample inference. Cameron, Gelbach and Miller (2008) provide microsimulations for different methods and find that the wild bootstrap performs well in the face of a small number of clusters.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G\\rightarrow \\infty"
},
{
"math_id": 1,
"text": "V_{c}(\\beta)"
},
{
"math_id": 2,
"text": "V(\\beta)"
},
{
"math_id": 3,
"text": "\\frac{V_{c}(\\hat\\beta)}{V(\\hat\\beta)}=1+(n-1)\\rho"
}
] |
https://en.wikipedia.org/wiki?curid=5797
|
579730
|
Data center
|
Building or room used to house computer servers and related equipment
A data center (American English) or data centre (Commonwealth English) is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.
Since IT operations are crucial for business continuity, it generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town. Estimated global data center electricity consumption in 2022 was 240–340 TWh, or roughly 1–1.3% of global electricity demand. This excludes energy used for cryptocurrency mining, which was estimated to be around 110 TWh in 2022, or another 0.4% of global electricity demand. The IEA projects that data center electric use could double between 2022 and 2026. High demand for electricity from data centers, including by cryptomining and artificial intelligence, has also increased strain on local electric grids and increased electricity prices in some markets.
Data centers can vary widely in terms of size, power requirements, redundancy, and overall structure. Four common categories used to segment types of data centers are onsite data centers, colocation facilities, hyperscale data centers, and edge data centers.
History.
Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.
During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The availability of inexpensive networking equipment, coupled with new standards for the network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center", as applied to specially designed computer rooms, started to gain popular recognition about this time.
A boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called internet data centers (IDCs), which provide enhanced capabilities, such as crossover backup: "If a Bell Atlantic line is cut, we can transfer them to ... to minimize the time of outage."
The term cloud data centers (CDCs) has been used. Data centers typically cost a lot to build and maintain. Increasingly, the distinction between these terms has almost disappeared and they are being integrated into the term "data center".
The global data center market saw steady growth in the 2010s, with a notable acceleration in the latter half of the decade. According to Gartner, worldwide data center infrastructure spending reached $200 billion in 2021, representing a 6% increase from 2020 despite the economic challenges posed by the COVID-19 pandemic.
The latter part of the 2010s and early 2020s saw a significant shift towards AI and machine learning applications, generating a global boom for more powerful and efficient data center infrastructure. As of March 2021, global data creation was projected to grow to more than 180 zettabytes by 2025, up from 64.2 zettabytes in 2020.
Amidst the recent boom of development, the United States has established a position as the leader in data center infrastructure, hosting 5,381 data centers as of March 2024, the highest number of any country worldwide. According to global consultancy McKinsey & Co., U.S. market demand is expected to double to 35 gigawatts (GW) by 2030, up from 17 GW in 2022. As of 2023, the U.S. accounts for roughly 40 percent of the global market.
A study published by the Electric Power Research Institute (EPRI) in May 2024 estimates U.S. data center power consumption could range from 4.6% to 9.1% of the country’s generation by 2030. As of 2023, about 80% of U.S. data center load was concentrated in 15 states, led by Virginia and Texas.
Requirements for modern data centers.
Modernization and data center transformation enhances performance and energy efficiency.
Information security is also a concern, and for this reason, a data center has to offer a secure environment that minimizes the chances of a security breach. A data center must, therefore, keep high standards for assuring the integrity and functionality of its hosted computer environment.
Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize.
Focus on modernization is not new: concern about obsolete equipment was decried in 2007, and in 2011 Uptime Institute was concerned about the age of the equipment therein. By 2018 concern had shifted once again, this time to the age of the staff: "data center staff are aging faster than the equipment."
Meeting standards for data centers.
The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.
Telcordia GR-3160, "NEBS Requirements for Telecommunications Data Center Equipment and Spaces", provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:
Data center transformation.
Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.
Raised floor.
A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson.
Although the first raised floor computer room was made by IBM in 1956, and raised floors have "been around since the 1960s", it was in the 1970s that it became more common for computer centers to use them, thereby allowing cool air to circulate more efficiently.
The first purpose of the raised floor was to allow access for wiring.
Lights out.
The "lights-out" data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.
Noise levels.
Generally speaking, local authorities prefer noise levels at data centers to be "10 dB below the existing night-time background noise level at the nearest residence."
OSHA regulations require monitoring of noise levels inside data centers if noise exceeds 85 decibels. The average noise level in server areas of a data center may reach as high as 92–96 dB(A).
Residents living near data centers have described the sound as "a high-pitched whirring noise 24/7", saying "It's like being on a tarmac with an airplane engine running constantly ... Except that the airplane keeps idling and never leaves."
External sources of noise include HVAC equipment and energy generators.
Data center design.
The field of data center design has been growing for decades in various directions, including new construction big and small along with the creative re-use of existing facilities, like abandoned retail space, old salt mines and war-era bunkers.
Local building codes may govern the minimum ceiling heights and other parameters. Some of the considerations in the design of data centers are:
Design criteria and trade-offs.
High availability.
Various metrics exist for measuring the data-availability that results from data-center availability beyond 95% uptime, with the top of the scale counting how many "nines" can be placed after "99%".
Modularity and flexibility.
Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.
A modular data center may consist of data center equipment contained within shipping containers or similar portable containers. Components of the data center can be prefabricated and standardized which facilitates moving if needed.
Environmental control.
Temperature and humidity are controlled via:
It is important that computers do not become humid or overheat. High humidity can lead to dust clogging the fans, which causes overheating, or can cause components to malfunction, ruining the board and creating a fire hazard. Overheating can cause components, usually the silicon or copper of the wires or circuits, to melt, loosening connections and creating fire hazards.
Electrical power.
Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel / gas turbine generators.
To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically given redundant copies, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
Low-voltage cable routing.
Options include:
Air flow.
Air flow management addresses the need to improve data center computer cooling efficiency by preventing the recirculation of hot air exhausted from IT equipment and reducing bypass airflow. There are several methods of separating hot and cold airstreams, such as hot/cold aisle containment and in-row cooling units.
Aisle containment.
Cold aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers. This is similar to how large-scale food companies refrigerate and store their products.
Computer cabinets/Server farms are often organized for containment of hot/cold aisles. Proper air duct placement prevents the cold and hot air from mixing. Rows of cabinets are paired to face each other so that the cool and hot air intakes and exhausts don't mix air, which would severely reduce cooling efficiency.
Alternatively, a range of underfloor panels can create efficient cold air pathways directed to the raised-floor vented tiles. Either the cold aisle or the hot aisle can be contained.
Another option is fitting cabinets with vertical exhaust duct chimneys. Hot exhaust pipes/vents/ducts can direct the air into a Plenum space above a Dropped ceiling and back to the cooling units or to outside vents. With this configuration, traditional hot/cold aisle configuration is not a requirement.
Fire protection.
Data centers feature fire protection systems, including passive and Active Design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage.
Although the main room usually does not allow Wet Pipe-based Systems due to the fragile nature of Circuit-boards, there still exist systems that can be used in the rest of the facility or in cold/hot aisle air circulation systems that are closed systems, such as:
However, there also exist other means to put out fires, especially in Sensitive areas, usually using Gaseous fire suppression, of which Halon gas was the most popular, until the negative effects of producing and using it were discovered.
Security.
Physical access is usually restricted. Layered security often starts with fencing, bollards and mantraps. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information. Fingerprint recognition mantraps are starting to be commonplace.
Logging access is required by some data protection regulations; some organizations tightly link this to access control systems. Multiple log entries can occur at the main entrance, entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent power distribution units, so that locks are networked through the same appliance.
Energy use.
Energy use is a central issue for data centers. Power draw ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building. For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center.
Greenhouse gas emissions.
In 2020, data centers (excluding cryptocurrency mining) and data transmission each used about 1% of world electricity. Although some of this electricity was low carbon, the IEA called for more "government and industry efforts on energy efficiency, renewables procurement and RD&D", as some data centers still use electricity generated by fossil fuels. They also said that lifecycle emissions should be considered, that is including "embodied" emissions, such as in buildings. Data centers are estimated to have been responsible for 0.5% of US greenhouse gas emissions in 2018. Some Chinese companies, such as Tencent, have pledged to be carbon neutral by 2030, while others such as Alibaba have been criticized by Greenpeace for not committing to become carbon neutral. Google and Microsoft now each consume more power than some fairly big countries, surpassing the consumption of more than 100 countries.
Energy efficiency and overhead.
The most commonly used energy efficiency metric for data centers is power usage effectiveness (PUE), calculated as the ratio of total power entering the data center divided by the power used by IT equipment.
formula_0
PUE reflects how much of the power is consumed by overhead devices (cooling, lighting, etc.). The average USA data center has a PUE of 2.0, meaning two watts of total power (overhead + IT equipment) are drawn for every watt delivered to IT equipment. State-of-the-art data centers are estimated to have a PUE of roughly 1.2. Google publishes quarterly efficiency metrics from its data centers in operation. PUEs as low as 1.01 have been achieved with two-phase immersion cooling.
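A minimal sketch of the PUE calculation from the formula above; the kilowatt figures are hypothetical and chosen only to reproduce the PUE of 2.0 mentioned in the text:

def pue(total_facility_power_kw, it_equipment_power_kw):
    """PUE = total facility power / IT equipment power."""
    return total_facility_power_kw / it_equipment_power_kw

# Hypothetical figures: 1200 kW total facility draw, of which 600 kW reaches IT equipment
ratio = pue(total_facility_power_kw=1200, it_equipment_power_kw=600)
overhead_share = 1 - 1 / ratio
print(ratio, overhead_share)  # 2.0 and 0.5: half of the power goes to cooling, lighting, etc.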
The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile in energy efficiency of all reported facilities. The Energy Efficiency Improvement Act of 2015 (United States) requires federal facilities — including data centers — to operate more efficiently. California's Title 24 (2014) of the California Code of Regulations mandates that every newly constructed data center must have some form of airflow containment in place to optimize energy efficiency.
The European Union also has a similar initiative: EU Code of Conduct for Data Centres.
Energy use analysis and projects.
The focus of measuring and analyzing energy use goes beyond what is used by IT equipment; facility support hardware such as chillers and fans also use energy.
In 2011, server racks in data centers were designed for more than 25 kW and the typical server was estimated to waste about 30% of the electricity it consumed. The energy demand for information storage systems is also rising. A high-availability data center is estimated to have a 1 megawatt (MW) demand and consume $20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations show that in two years, the cost of powering and cooling a server could be equal to the cost of purchasing the server hardware. Research in 2018 has shown that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization.
In 2011, Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project, Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, energy consumption due to unnecessary expansion slots on the motherboard and unneeded components, such as a graphics card, was also saved. In 2016, Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. By eliminating the multiple transformers usually deployed in data centers, Google had achieved a 30% increase in energy efficiency. In 2017, sales for data center hardware built to OCP designs topped $1.2 billion and are expected to reach $6 billion by 2021.
Power and cooling analysis.
Power is the largest recurring cost to the user of a data center. Cooling it more than necessary wastes money and energy. Furthermore, overcooling equipment in environments with a high relative humidity can expose equipment to a high amount of moisture that facilitates the growth of salt deposits on conductive filaments in the circuitry.
A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures. A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity. The cooling of data centers is the second largest power consumer after servers. The cooling energy varies from 10% of the total energy consumption in the most efficient data centers and goes up to 45% in standard air-cooled data centers.
Energy efficiency analysis.
An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's Power Use Effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics. However, the limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise.
Computational Fluid Dynamics (CFD) analysis.
This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center—predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling. By predicting the effects of these environmental conditions, CFD analysis of a data center can be used to predict the impact of high-density racks mixed with low-density racks and the onward impact on cooling resources, poor infrastructure management practices, and AC failure or AC shutdown for scheduled maintenance.
Thermal zone mapping.
Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.
This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units.
Green data centers.
Data centers use a lot of power, consumed by two main usages: The power required to run the actual equipment and then the power required to cool the equipment. Power efficiency reduces the first category.
Cooling cost reduction through natural means includes location decisions: when proximity to good fiber connectivity, power grid connections, and concentrations of people to manage the equipment is not required, a data center can be miles away from the users. Mass data centers like Google or Facebook don't need to be near population centers. Arctic locations that can use outside air, which provides cooling, are becoming more popular.
Renewable electricity sources are another plus. Thus countries with favorable conditions, such as Canada, Finland, Sweden, Norway, and Switzerland are trying to attract cloud computing data centers.
Direct current data centers.
Direct current data centers are data centers that produce direct current on site with solar panels and store the electricity on site in a battery storage power station. Computers run on direct current, and the need to convert the AC power from the grid would be eliminated. The data center site could still use AC power as a grid-as-a-backup solution. DC data centers could be 10% more efficient and use less floor space for power conversion components.
Energy reuse.
It is very difficult to reuse the heat which comes from air-cooled data centers. For this reason, data center infrastructures are more often equipped with heat pumps. An alternative to heat pumps is the adoption of liquid cooling throughout a data center. Different liquid cooling techniques are mixed and matched to allow for a fully liquid-cooled infrastructure that captures all heat with water. Different liquid technologies are categorized in 3 main groups, indirect liquid cooling (water-cooled racks), direct liquid cooling (direct-to-chip cooling) and total liquid cooling (complete immersion in liquid, see server immersion cooling). This combination of technologies allows the creation of a thermal cascade as part of temperature chaining scenarios to create high-temperature water outputs from the data center.
Impact on electricity prices.
Cryptomining and the artificial intelligence boom of the 2020s have also led to increased demand for electricity, which the IEA expects could double global overall data center demand for electricity between 2022 and 2026. The US could see its share of the electricity market going to data centers increase from 4% to 6% over those four years. Bitcoin used up 2% of US electricity in 2023. This has led to increased electricity prices, particularly in regions with many data centers, such as Santa Clara, California and upstate New York. Data centers have also generated concerns in Northern Virginia about whether residents will have to foot the bill for future power lines. It has also made it harder to develop housing in London. A Bank of America Institute report in July 2024 found that the increase in demand for electricity, due in part to AI, has been pushing electricity prices higher and is a significant contributor to electricity inflation.
Dynamic infrastructure.
Dynamic infrastructure provides the ability to intelligently, automatically and securely move workloads within a data center anytime, anywhere, for migrations, provisioning, to enhance performance, or building co-location facilities. It also facilitates performing routine maintenance on either physical or virtual systems all while minimizing interruption. A related concept is Composable Infrastructure, which allows for the dynamic reconfiguration of the available resources to suit needs, only when needed.
Side benefits include
Network infrastructure.
Communications in data centers today are most often based on networks running the Internet protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world which are connected according to the data center network architecture. Redundancy of the internet connection is often provided by using two or more upstream service providers (see Multihoming).
Some of the servers at the data center are used for running the basic internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers.
Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center.
Software/data backup.
Non-mutually exclusive options for data backup are:
Onsite is traditional, and one of its major advantages is immediate availability.
Offsite backup storage.
Data backup techniques include having an encrypted copy of the data offsite. Methods used for transporting data are:
Modular data center.
For quick deployment or disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in a very short amount of time.
Micro data center.
Micro data centers (MDCs) are access-level data centers which are smaller in size than traditional data centers but provide the same features. They are typically located near the data source to reduce communication delays, as their small size allows several MDCs to be spread out over a wide area. MDCs are well suited to user-facing, front end applications. They are commonly used in edge computing and other areas where low latency data processing is needed.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathrm{PUE} = {\\mbox{Total Facility Power} \\over \\mbox{IT Equipment Power}} "
}
] |
https://en.wikipedia.org/wiki?curid=579730
|
579755
|
Mach–Zehnder interferometer
|
Device to determine relative phase shift
The Mach–Zehnder interferometer is a device used to determine the relative phase shift variations between two collimated beams derived by splitting light from a single source. The interferometer has been used, among other things, to measure phase shifts between the two beams caused by a sample or a change in length of one of the paths. The apparatus is named after the physicists Ludwig Mach (the son of Ernst Mach) and Ludwig Zehnder; Zehnder's proposal in an 1891 article was refined by Mach in an 1892 article. Mach–Zehnder interferometry with electrons as well as with light has been demonstrated. The versatility of the Mach–Zehnder configuration has led to its use in a range of research topics, especially in fundamental quantum mechanics.
Design.
The Mach–Zehnder interferometer is a highly configurable instrument. In contrast to the well-known Michelson interferometer, each of the well-separated light paths is traversed only once.
If the source has a low coherence length then great care must be taken to equalize the two optical paths. White light in particular requires the optical paths to be simultaneously equalized over all wavelengths, or no fringes will be visible (unless a monochromatic filter is used to isolate a single wavelength). As seen in Fig. 1, a compensating cell made of the same type of glass as the test cell (so as to have equal optical dispersion) would be placed in the path of the reference beam to match the test cell. Note also the precise orientation of the beam splitters. The reflecting surfaces of the beam splitters would be oriented so that the test and reference beams pass through an equal amount of glass. In this orientation, the test and reference beams each experience two front-surface reflections, resulting in the same number of phase inversions. The result is that light travels through an equal optical path length in both the test and reference beams leading to constructive interference.
Collimated sources result in a nonlocalized fringe pattern. Localized fringes result when an extended source is used. In Fig. 2, we see that the fringes can be adjusted so that they are localized in any desired plane. In most cases, the fringes would be adjusted to lie in the same plane as the test object, so that fringes and test object can be photographed together.
Operation.
The collimated beam is split by a half-silvered mirror. The two resulting beams (the "sample beam" and the "reference beam") are each reflected by a mirror. The two beams then pass a second half-silvered mirror and enter two detectors.
The Fresnel equations for reflection and transmission of a wave at a dielectric imply that there is a phase change for a reflection, when a wave propagating in a lower-refractive index medium reflects from a higher-refractive index medium, but not in the opposite case. A 180° phase shift occurs upon reflection from the front of a mirror, since the medium behind the mirror (glass) has a higher refractive index than the medium the light is traveling in (air). No phase shift accompanies a rear-surface reflection, since the medium behind the mirror (air) has a lower refractive index than the medium the light is traveling in (glass).
The speed of light is lower in media with an index of refraction greater than that of a vacuum, which is 1. Specifically, its speed is: "v" = "c"/"n", where "c" is the speed of light in vacuum, and "n" is the index of refraction. This causes a phase shift increase proportional to ("n" − 1) × "length traveled". If "k" is the constant phase shift incurred by passing through a glass plate on which a mirror resides, a total of 2"k" phase shift occurs when reflecting from the rear of a mirror. This is because light traveling toward the rear of a mirror will enter the glass plate, incurring "k" phase shift, and then reflect from the mirror with no additional phase shift, since only air is now behind the mirror, and travel again back through the glass plate, incurring an additional "k" phase shift.
The rule about phase shifts applies to beamsplitters constructed with a dielectric coating and must be modified if a metallic coating is used or when different polarizations are taken into account. Also, in real interferometers, the thicknesses of the beamsplitters may differ, and the path lengths are not necessarily equal. Regardless, in the absence of absorption, conservation of energy guarantees that the two paths must differ by a half-wavelength phase shift. Also beamsplitters that are not 50/50 are frequently employed to improve the interferometer's performance in certain types of measurement.
In Fig. 3, in the absence of a sample, both the sample beam (SB) and the reference beam (RB) will arrive in phase at detector 1, yielding constructive interference. Both SB and RB will have undergone a phase shift of (1 × wavelength + "k") due to two front-surface reflections and one transmission through a glass plate. At detector 2, in the absence of a sample, the sample beam and reference beam will arrive with a phase difference of half a wavelength, yielding complete destructive interference. The RB arriving at detector 2 will have undergone a phase shift of (0.5 × wavelength + 2"k") due to one front-surface reflection and two transmissions. The SB arriving at detector 2 will have undergone a (1 × wavelength + 2"k") phase shift due to two front-surface reflections, one rear-surface reflection. Therefore, when there is no sample, only detector 1 receives light. If a sample is placed in the path of the sample beam, the intensities of the beams entering the two detectors will change, allowing the calculation of the phase shift caused by the sample.
Quantum treatment.
We can model a photon going through the interferometer by assigning a probability amplitude to each of the two possible paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state describing the photon is therefore a vector formula_0 that is a superposition of the "lower" path formula_1 and the "upper" path formula_2, that is, formula_3 for complex formula_4 such that formula_5.
Both beam splitters are modelled as the unitary matrix formula_6, which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of formula_7, or be reflected to the other path with a probability amplitude of formula_8. The phase shifter on the upper arm is modelled as the unitary matrix formula_9, which means that if the photon is on the "upper" path it will gain a relative phase of formula_10, and it will stay unchanged if it is on the lower path.
A photon that enters the interferometer from the left will then end up described by the state
formula_11
and the probabilities that it will be detected at the right or at the top are given respectively by
formula_12
formula_13
One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities.
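The calculation above can be reproduced numerically. The following sketch (using NumPy purely as an illustration, not a reference implementation) builds the beam splitter and phase-shifter matrices defined above and checks the detection probabilities against the closed-form expressions:

import numpy as np

def mz_probabilities(delta_phi):
    """Detection probabilities for a photon entering the interferometer on the 'lower' path."""
    B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)         # 50/50 beam splitter
    P = np.array([[1, 0], [0, np.exp(1j * delta_phi)]])   # phase shifter on the upper arm
    psi_l = np.array([1, 0], dtype=complex)               # "lower" path input state
    out = B @ P @ B @ psi_l
    p_l, p_u = np.abs(out) ** 2                           # Born rule
    return p_l, p_u

p_l, p_u = mz_probabilities(np.pi / 3)
print(round(p_l, 3), round(p_u, 3))  # 0.25 and 0.75, i.e. sin^2(pi/6) and cos^2(pi/6)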
It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases there will no longer be interference between the paths, and the probabilities are given by formula_14, independently of the phase formula_10. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it must be described by a genuine quantum superposition of the two paths.
Uses.
The Mach–Zehnder interferometer's relatively large and freely accessible working space, and its flexibility in locating the fringes has made it the interferometer of choice for visualizing flow in wind tunnels and for flow visualization studies in general. It is frequently used in the fields of aerodynamics, plasma physics and heat transfer to measure pressure, density, and temperature changes in gases.
Mach–Zehnder interferometers are used in electro-optic modulators, electronic devices used in various fiber-optic communication applications. Mach–Zehnder modulators are incorporated in monolithic integrated circuits and offer well-behaved, high-bandwidth electro-optic amplitude and phase responses over a multiple-gigahertz frequency range.
Mach–Zehnder interferometers are also used to study one of the most counterintuitive predictions of quantum mechanics, the phenomenon known as quantum entanglement.
The possibility to easily control the features of the light in the reference channel without disturbing the light in the object channel popularized the Mach–Zehnder configuration in holographic interferometry. In particular, optical heterodyne detection with an off-axis, frequency-shifted reference beam ensures good experimental conditions for shot-noise limited holography with video-rate cameras, vibrometry, and laser Doppler imaging of blood flow.
In optical telecommunications it is used as an electro-optic modulator for phase and amplitude modulation of light. Optical computing researchers have proposed using Mach-Zehnder interferometer configurations in optical neural chips for greatly accelerating complex-valued neural network algorithms.
The versatility of the Mach–Zehnder configuration has led to its being used in a wide range of fundamental research topics in quantum mechanics, including studies on counterfactual definiteness, quantum entanglement, quantum computation, quantum cryptography, quantum logic, Elitzur–Vaidman bomb tester, the quantum eraser experiment, the quantum Zeno effect, and neutron diffraction.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\psi \\in \\mathbb{C}^2"
},
{
"math_id": 1,
"text": "\\psi_l = \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}"
},
{
"math_id": 2,
"text": "\\psi_u = \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix}"
},
{
"math_id": 3,
"text": "\\psi = \\alpha \\psi_l + \\beta \\psi_u"
},
{
"math_id": 4,
"text": "\\alpha,\\beta"
},
{
"math_id": 5,
"text": "|\\alpha|^2+|\\beta|^2 = 1"
},
{
"math_id": 6,
"text": "B = \\frac1{\\sqrt2}\\begin{pmatrix} 1 & i \\\\ i & 1 \\end{pmatrix}"
},
{
"math_id": 7,
"text": "1/\\sqrt{2}"
},
{
"math_id": 8,
"text": "i/\\sqrt{2}"
},
{
"math_id": 9,
"text": "P = \\begin{pmatrix} 1 & 0 \\\\ 0 & e^{i\\Delta\\Phi} \\end{pmatrix}"
},
{
"math_id": 10,
"text": "\\Delta\\Phi"
},
{
"math_id": 11,
"text": "BPB\\psi_l = ie^{i\\Delta\\Phi/2} \\begin{pmatrix} -\\sin(\\Delta\\Phi/2) \\\\ \\cos(\\Delta\\Phi/2) \\end{pmatrix},"
},
{
"math_id": 12,
"text": " p(u) = |\\langle \\psi_u| BPB|\\psi_l \\rangle|^2 = \\cos^2 \\frac{\\Delta \\Phi}{2},"
},
{
"math_id": 13,
"text": " p(l) = |\\langle \\psi_l| BPB|\\psi_l \\rangle|^2 = \\sin^2 \\frac{\\Delta \\Phi}{2}."
},
{
"math_id": 14,
"text": "p(u)=p(l) = 1/2"
}
] |
https://en.wikipedia.org/wiki?curid=579755
|
5798245
|
Gross tonnage
|
Nonlinear measure of a ship's overall internal volume
Gross tonnage (GT, G.T. or gt) is a nonlinear measure of a ship's overall internal volume. Gross tonnage is different from gross register tonnage. Neither gross tonnage nor gross register tonnage should be confused with measures of mass or weight such as deadweight tonnage or displacement.
Gross tonnage, along with net tonnage, was defined by the "International Convention on Tonnage Measurement of Ships, 1969", adopted by the International Maritime Organization (IMO) in 1969, and came into force on 18 July 1982. These two measurements replaced gross register tonnage (GRT) and net register tonnage (NRT). Gross tonnage is calculated based on "the moulded volume of all enclosed spaces of the ship" and is used to determine things such as a ship's manning regulations, safety rules, registration fees, and port dues, whereas the older gross register tonnage is a measure of the volume of only certain enclosed spaces.
History.
The International Convention on Tonnage Measurement of Ships, 1969 was adopted by IMO in 1969. The Convention mandated a transition from the former measurements of gross register tonnage (grt) and net register tonnage (nrt) to gross tonnage (GT) and net tonnage (NT). It was the first successful attempt to introduce a universal tonnage measurement system.
Various methods were previously used to calculate merchant ship tonnage, but they differed significantly and one single international system was needed. Previous methods traced back to George Moorsom of Great Britain's Board of Trade who devised one such method in 1854.
The tonnage determination rules apply to all ships built on or after 18 July 1982. Ships built before that date were given 12 years to migrate from their existing gross register tonnage (GRT) to use of GT and NT. The phase-in period was provided to allow ships time to adjust economically, since tonnage is the basis for satisfying manning regulations and safety rules. Tonnage is also the basis for calculating registration fees and port dues. One of the convention's goals was to ensure that the new calculated tonnages "did not differ too greatly" from the traditional gross and net register tonnages.
Both GT and NT are obtained by measuring ship's volume and then applying a mathematical formula. Gross tonnage is based on "the moulded volume of all enclosed spaces of the ship" whereas net tonnage is based on "the moulded volume of all cargo spaces of the ship". In addition, a ship's net tonnage is constrained to be no less than 30% of her gross tonnage.
Calculation.
The gross tonnage calculation is defined in Regulation 3 of Annex 1 of "The International Convention on Tonnage Measurement of Ships, 1969". It is based on two variables, and is ultimately an increasing one-to-one function of ship volume: "V", the ship's total volume in cubic metres, and "K", a multiplier based on the ship's volume.
The value of the multiplier "K" increases logarithmically with the ship's total volume (in cubic metres) and is applied as an amplification factor in determining the gross tonnage value. "K" is calculated with a formula which uses the common or base-10 logarithm:
formula_0
Once "V" and "K" are known, gross tonnage is calculated using the formula, whereby GT is a function of V:
formula_1
which by substitution is:
formula_2
Thus, gross tonnage exhibits linearithmic growth with volume, increasing faster at larger volumes. The units of gross tonnage, which involve both cubic metres and log-metres, have no physical significance, but were rather chosen for historical convenience.
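A minimal sketch of the calculation, assuming only the formulas above (the 10,000 m³ example volume is illustrative):

import math

def gross_tonnage(volume_m3):
    """GT = K * V with K = 0.2 + 0.02 * log10(V)."""
    k = 0.2 + 0.02 * math.log10(volume_m3)
    return k * volume_m3

print(gross_tonnage(10_000))  # K = 0.28, so GT = 2800.0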
Volume from gross tonnage.
Since gross tonnage is a bijective function of ship volume, it has an inverse function, namely ship volume from gross tonnage, but the inverse cannot be expressed in terms of elementary functions. A root-finding algorithm may be used for obtaining an approximation to a ship's volume given its gross tonnage. The formula for exact conversion of gross tonnage to volume is:
formula_3
where formula_4 is the natural logarithm and formula_5 is the Lambert W function.
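As a sketch, the inverse formula can be evaluated with a Lambert W implementation such as the one in SciPy; the round trip against the forward formula serves as a check (the example figure of GT = 2800 corresponds to the illustrative 10,000 m³ above):

import math
from scipy.special import lambertw

def volume_from_gt(gt):
    """Invert GT = V * (0.2 + 0.02 * log10(V)) via the Lambert W function."""
    z = 5e11 * math.log(10) * gt
    return (50 * math.log(10) * gt) / lambertw(z).real

print(volume_from_gt(2800))  # approximately 10000, recovering the volume from the forward example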
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " K = 0.2 + 0.02 \\times \\log_{10}(V)\\,"
},
{
"math_id": 1,
"text": " GT = V \\times K\\,"
},
{
"math_id": 2,
"text": "GT = V\\times(0.2 + 0.02\\times\\log_{10}(V))"
},
{
"math_id": 3,
"text": " V = \\frac{50 \\times \\ln 10 \\times GT}{W(5 \\times 10^{11} \\times \\ln 10 \\times GT)}"
},
{
"math_id": 4,
"text": "\\ln"
},
{
"math_id": 5,
"text": "W"
}
] |
https://en.wikipedia.org/wiki?curid=5798245
|
5798355
|
Inner measure
|
In mathematics, in particular in measure theory, an inner measure is a function on the power set of a given set, with values in the extended real numbers, satisfying some technical conditions. Intuitively, the inner measure of a set is a lower bound of the size of that set.
Definition.
An inner measure is a set function
formula_0
defined on all subsets of a set formula_1 that satisfies the following conditions:
Null empty set: formula_2
Superadditivity: for disjoint sets formula_3 and formula_4 formula_5
Limits of decreasing sequences: for every decreasing sequence formula_6 of sets with formula_7 for each formula_8 and formula_9, one has formula_10
Infinity regularity: for every set formula_3 with formula_11 and every positive real number formula_12 there is a subset formula_13 with formula_14
The inner measure induced by a measure.
Let formula_15 be a σ-algebra over a set formula_16 and formula_17 be a measure on formula_18
Then the inner measure formula_19 induced by formula_17 is defined by
formula_20
Essentially formula_19 gives a lower bound of the size of any set by ensuring it is at least as big as the formula_17-measure of any of its formula_15-measurable subsets. Even though the set function formula_19 is usually not a measure, formula_19 shares the following properties with measures: formula_21 and formula_19 is monotone, meaning that if formula_22 then formula_23
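On a finite set the supremum in the definition can be computed directly. The following sketch uses a small hypothetical σ-algebra and measure chosen purely for illustration:

X = {1, 2, 3, 4}
# A hypothetical sigma-algebra generated by the partition {1, 2}, {3, 4}, with a measure mu on it
sigma = [frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset(X)]
mu = {frozenset(): 0.0, frozenset({1, 2}): 1.0, frozenset({3, 4}): 2.0, frozenset(X): 3.0}

def inner_measure(T):
    """mu_*(T): the largest mu-measure of a measurable set contained in T."""
    T = frozenset(T)
    return max(mu[S] for S in sigma if S <= T)

print(inner_measure({1, 2, 3}))  # 1.0: the largest measurable subset of {1, 2, 3} is {1, 2}
print(inner_measure({1, 3}))     # 0.0: only the empty set is a measurable subset of {1, 3}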
Measure completion.
Induced inner measures are often used in combination with outer measures to extend a measure to a larger σ-algebra. If formula_17 is a finite measure defined on a σ-algebra formula_15 over formula_16 and formula_24 and formula_19 are corresponding induced outer and inner measures, then the sets formula_25 such that formula_26 form a σ-algebra formula_27 with formula_28.
The set function formula_29 defined by
formula_30
for all formula_31 is a measure on formula_27 known as the completion of formula_32
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\varphi : 2^X \\to [0, \\infty],"
},
{
"math_id": 1,
"text": "X,"
},
{
"math_id": 2,
"text": "\\varphi(\\varnothing) = 0"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "B,"
},
{
"math_id": 5,
"text": "\\varphi(A \\cup B) \\geq \\varphi(A) + \\varphi(B)."
},
{
"math_id": 6,
"text": "A_1, A_2, \\ldots"
},
{
"math_id": 7,
"text": " A_j \\supseteq A_{j+1}"
},
{
"math_id": 8,
"text": "j"
},
{
"math_id": 9,
"text": "\\varphi(A_1) < \\infty"
},
{
"math_id": 10,
"text": "\\varphi \\left(\\bigcap_{j=1}^\\infty A_j\\right) = \\lim_{j \\to \\infty} \\varphi(A_j)"
},
{
"math_id": 11,
"text": "\\varphi(A) = \\infty"
},
{
"math_id": 12,
"text": "r,"
},
{
"math_id": 13,
"text": "B \\subseteq A"
},
{
"math_id": 14,
"text": "r \\leq \\varphi(B) < \\infty."
},
{
"math_id": 15,
"text": "\\Sigma"
},
{
"math_id": 16,
"text": "X"
},
{
"math_id": 17,
"text": "\\mu"
},
{
"math_id": 18,
"text": "\\Sigma."
},
{
"math_id": 19,
"text": "\\mu_*"
},
{
"math_id": 20,
"text": "\\mu_*(T) = \\sup\\{\\mu(S) : S \\in \\Sigma \\text{ and } S \\subseteq T\\}."
},
{
"math_id": 21,
"text": "\\mu_*(\\varnothing) = 0,"
},
{
"math_id": 22,
"text": "E \\subseteq F"
},
{
"math_id": 23,
"text": "\\mu_*(E) \\leq \\mu_*(F)."
},
{
"math_id": 24,
"text": "\\mu^*"
},
{
"math_id": 25,
"text": "T \\in 2^X"
},
{
"math_id": 26,
"text": "\\mu_*(T) = \\mu^*(T)"
},
{
"math_id": 27,
"text": "\\hat \\Sigma"
},
{
"math_id": 28,
"text": "\\Sigma\\subseteq\\hat\\Sigma"
},
{
"math_id": 29,
"text": "\\hat\\mu"
},
{
"math_id": 30,
"text": "\\hat\\mu(T) = \\mu^*(T) = \\mu_*(T)"
},
{
"math_id": 31,
"text": "T \\in \\hat \\Sigma"
},
{
"math_id": 32,
"text": "\\mu."
}
] |
https://en.wikipedia.org/wiki?curid=5798355
|
57997388
|
Power loss factor
|
The power loss factor β describes the loss of electrical power in CHP systems with a variable power-to-heat ratio when an increasing heat flow is extracted from the main thermodynamic electricity generating process in order to provide useful heat. Usually, the power loss factor refers to extraction steam turbines in thermal power stations, which direct a part of the steam into a heating condenser for the production of useful heat, instead of the low-pressure part of the steam turbine where it could perform mechanical work.
formula_0
The left part of the picture on the right shows the principle of steam extraction. After the intermediate-pressure section of the turbine, i.e. before the low-pressure section, steam is diverted and flows into the heating condenser, where it transfers heat to the heating circuit (temperature level TH of about 100 °C) and liquefies. The remaining steam does work in the low-pressure section of the turbine and is then liquefied in the condenser at approx. 30 °C. It is then fed via the condensate pump to the feedwater circuit. The partial steam flow that goes into the heating condenser at high temperature can no longer do work in the low-pressure section and is responsible for the loss of power.
The right-hand side of the picture shows the associated T-s diagram (see Rankine cycle) for an operating state in which half of the waste heat is used for heating purposes. To the left of the red square, the white area below the red line corresponds to the waste heat (qout), which is released via the condenser to the environment (ambient temperature level TA). The entire red area corresponds to the useful heat (qheat); the upper hatched part of this area corresponds to the power loss in the low-pressure stage.
Modern cogeneration plants have power loss ratios of about 1/5 to 1/9 when delivering heat in the range of 80 °C to 120 °C. That means that in exchange for one kWh of electrical energy, about 5 to 9 kWh of useful heat are obtained.
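A minimal sketch of the relation implied by the defining formula, with an illustrative β of 1/6 (inside the 1/5 to 1/9 range quoted above) and a hypothetical heat demand:

def power_loss(beta, useful_heat_kwh):
    """Electrical energy lost when a given amount of useful heat is extracted (beta = dP_el / Q_useful)."""
    return beta * useful_heat_kwh

# Illustrative values: beta = 1/6, 60 kWh of useful heat extracted from the turbine
print(power_loss(1 / 6, 60))  # 10.0 kWh of electricity given up in exchange for 60 kWh of heat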
Based on the equivalence of power loss and gain of heat, the power loss method assigns CO2 emissions and primary energy from the fuel to the useful heat and the electrical energy.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\beta = \\frac{\\Delta P_\\text{el}}{\\dot Q_\\text{utile}} "
}
] |
https://en.wikipedia.org/wiki?curid=57997388
|
57998655
|
Drug accumulation ratio
|
In pharmacokinetics, the drug accumulation ratio (Rac) is the ratio of accumulation of a drug under steady state conditions (i.e., after repeated administration) as compared to a single dose. The higher the value, the more the drug accumulates in the body. An "Rac" of 1 means no accumulation.
Studies.
The accumulation ratio of a specific drug in humans is determined by clinical studies. According to a 2013 analysis, such studies are typically done with 10 to 20 subjects who are given one single dose followed by a washout phase of seven days (median), and then seven to 14 repeated doses to reach steady state conditions. Blood samples are drawn 11 times (median) per subject to determine the blood concentration of the studied drug.
Calculation.
There are various competing calculation methods for the drug accumulation ratio, yielding somewhat different results. A commonly used formula defines "Rac" as the ratio of the area under the curve (AUC) during a single dosing interval under steady state conditions to the AUC during a dosing interval after one single dose:
formula_0
where formula_1 is the dosing interval, "ss" means "steady state" and 1 stands for a single-dose application.
Another definition sets "Rac" to the ratio of the average drug concentration during one day under steady state conditions to the concentration after a single dose.
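As an illustration of the AUC-based formula, the following sketch computes "Rac" with the trapezoidal rule; the sampling times and concentrations are made-up values, not data from an actual study:

```python
import numpy as np

def auc_trapezoid(times_h, conc):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return np.trapz(conc, times_h)

# Made-up concentrations over one 12-hour dosing interval (tau = 12 h),
# after the first dose and at steady state.
t        = np.array([0, 1, 2, 4, 8, 12], dtype=float)   # hours post-dose
c_single = np.array([0.0, 4.0, 3.5, 2.4, 1.1, 0.5])      # mg/L, single dose
c_ss     = np.array([1.0, 5.2, 4.7, 3.5, 2.0, 1.3])      # mg/L, steady state

r_ac = auc_trapezoid(t, c_ss) / auc_trapezoid(t, c_single)
print(f"Rac = {r_ac:.2f}")   # a value above 1 indicates accumulation
```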
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R_{ac} = \\frac {\\operatorname{AUC}(\\tau,ss)} {\\operatorname{AUC}(\\tau,1)}"
},
{
"math_id": 1,
"text": "\\tau"
}
] |
https://en.wikipedia.org/wiki?curid=57998655
|
58006785
|
Polynomial solutions of P-recursive equations
|
In mathematics a P-recursive equation can be solved for polynomial solutions. Sergei A. Abramov in 1989 and Marko Petkovšek in 1992 described an algorithm which finds all polynomial solutions of those recurrence equations with polynomial coefficients. The algorithm computes a "degree bound" for the solution in a first step. In a second step an ansatz for a polynomial of this degree is used and the unknown coefficients are computed by a system of linear equations. This article describes this algorithm.
In 1995 Abramov, Bronstein and Petkovšek showed that the polynomial case can be solved more efficiently by considering power series solution of the recurrence equation in a specific power basis (i.e. not the ordinary basis formula_0).
Other algorithms which compute rational or hypergeometric solutions of a linear recurrence equation with polynomial coefficients also use algorithms which compute polynomial solutions.
Degree bound.
Let formula_1 be a field of characteristic zero and formula_2 a recurrence equation of order formula_3 with polynomial coefficients formula_4, polynomial right-hand side formula_5 and unknown polynomial sequence formula_6. Furthermore, formula_7 denotes the degree of a polynomial formula_8 (with formula_9 for the zero polynomial) and formula_10 denotes the leading coefficient of the polynomial. Moreover, let formula_11 for formula_12, where formula_13 denotes the falling factorial and formula_14 the set of nonnegative integers. Then formula_15. This is called a degree bound for the polynomial solution formula_16. This bound was shown by Abramov and Petkovšek.
Algorithm.
The algorithm consists of two steps. In a first step the "degree bound" is computed. In a second step an "ansatz" with a polynomial formula_16 of that degree with arbitrary coefficients in formula_1 is made and plugged into the recurrence equation. Then the different powers are compared and a system of linear equations for the coefficients of formula_16 is set up and solved. This is called the "method of undetermined coefficients". The algorithm returns the general polynomial solution of a recurrence equation.
algorithm polynomial_solutions is
input: Linear recurrence equation formula_17.
output: The general polynomial solution formula_16 if there are any solutions, otherwise false.
for formula_12 do
formula_18
end for
formula_19
formula_20
formula_21
formula_22
formula_23 with unknown coefficients formula_24 for formula_25
Compare coefficients of polynomials formula_26 and formula_27 to get possible values for formula_28
if there are possible values for formula_29 then
return general solution formula_16
else
return false
end if
Example.
Applying the formula for the degree bound to the recurrence equation formula_30 over formula_31 yields formula_32. Hence one can use an ansatz with a quadratic polynomial formula_33 with formula_34. Plugging this ansatz into the original recurrence equation leads to formula_35 This is equivalent to the following system of linear equations formula_36 with the solution formula_37. Therefore, the only polynomial solution is formula_38.
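The worked example can be reproduced with a short computer algebra sketch (using SymPy); this only illustrates the method of undetermined coefficients with the degree bound taken as given, not the full algorithm:

```python
import sympy as sp

n, y0, y1, y2 = sp.symbols('n y0 y1 y2')
y = lambda m: y2*m**2 + y1*m + y0            # quadratic ansatz from the degree bound

lhs = (n**2 - 2)*y(n) + (-n**2 + 2*n)*y(n + 1)
rhs = 2*n

# Compare coefficients of the powers of n and solve the resulting linear system.
eqs = sp.Poly(sp.expand(lhs - rhs), n).all_coeffs()
sol = sp.solve(eqs, [y0, y1, y2], dict=True)
print(sol)                                   # [{y0: 0, y1: -1, y2: 1}]
print(sp.expand(y(n).subs(sol[0])))          # n**2 - n
```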
|
[
{
"math_id": 0,
"text": "(x^n)_{n \\in \\N}"
},
{
"math_id": 1,
"text": "\\mathbb{K}"
},
{
"math_id": 2,
"text": "\\sum_{k=0}^r p_k(n) \\, y (n+k) = f(n)"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "p_k \\in \\mathbb{K} [n]"
},
{
"math_id": 5,
"text": "f \\in \\mathbb{K}[n]"
},
{
"math_id": 6,
"text": "y(n) \\in \\mathbb{K}[n]"
},
{
"math_id": 7,
"text": "\\deg (p)"
},
{
"math_id": 8,
"text": "p \\in \\mathbb{K}[n]"
},
{
"math_id": 9,
"text": "\\deg (0) = - \\infty"
},
{
"math_id": 10,
"text": "\\text{lc}(p)"
},
{
"math_id": 11,
"text": "\\begin{align}\n q_i &= \\sum_{k=i}^r \\binom{k}{i} p_k, & b &= \\max_{i=0,\\dots,r}(\\deg (q_i)-i), \\\\\n \\alpha(n) &= \\sum_{i=0,\\dots,r \\atop \\deg (q_i) - i = b} \\text{lc} (q_i) n^{\\underline{i}}, & d_\\alpha &= \\max \\{n \\in \\N \\, : \\, \\alpha(n) = 0 \\} \\cup \\{ - \\infty\\}\n\\end{align}"
},
{
"math_id": 12,
"text": "i=0,\\dots,r"
},
{
"math_id": 13,
"text": "n^{\\underline{i}} = n (n-1) \\cdots (n-i+1)"
},
{
"math_id": 14,
"text": "\\N"
},
{
"math_id": 15,
"text": "\\deg (y) \\leq \\max \\{ \\deg(f) - b, -b-1, d_\\alpha \\}"
},
{
"math_id": 16,
"text": "y"
},
{
"math_id": 17,
"text": "\\sum_{k=0}^r p_k(n) \\, y (n+k) = f(n), p_k, f \\in \\mathbb{K}[n], p_0, p_r \\neq 0"
},
{
"math_id": 18,
"text": "q_i = \\sum_{k=i}^r \\binom{k}{i} p_k"
},
{
"math_id": 19,
"text": "b=\\max_{i=0,\\dots,r} (\\deg (q_i) - i)"
},
{
"math_id": 20,
"text": "\\alpha(n) = \\sum_{i=0,\\dots,r \\atop \\deg (q_i) - i = b} \\text{lc} (q_i) n^{\\underline{i}}"
},
{
"math_id": 21,
"text": "d_\\alpha = \\max \\{n \\in \\N \\, : \\, \\alpha(n) = 0 \\} \\cup \\{ - \\infty\\}"
},
{
"math_id": 22,
"text": "d = \\max \\{ \\deg (f) - b, -b-1, d_\\alpha\\}"
},
{
"math_id": 23,
"text": "y(n) = \\sum_{j=0}^d y_j n^j"
},
{
"math_id": 24,
"text": "y_j \\in \\mathbb{K}"
},
{
"math_id": 25,
"text": "j=0,\\dots,d"
},
{
"math_id": 26,
"text": "\\sum_{k=0}^r p_k(n) \\, y (n+k)"
},
{
"math_id": 27,
"text": "f(n)"
},
{
"math_id": 28,
"text": "y_j, j=0,\\dots,d"
},
{
"math_id": 29,
"text": "y_j"
},
{
"math_id": 30,
"text": "(n^2-2) \\, y (n) + (-n^2+2n) \\, y (n+1)=2n,"
},
{
"math_id": 31,
"text": "\\Q"
},
{
"math_id": 32,
"text": "\\deg (y) \\leq 2"
},
{
"math_id": 33,
"text": "y(n) =y_2 n^2 + y_1 n + y_0"
},
{
"math_id": 34,
"text": "y_0, y_1, y_2 \\in \\Q"
},
{
"math_id": 35,
"text": "2n = (n^2-2) \\, y(n) + (-n^2+2n) \\, y (n+1) = (y_1 + y_2) \\, n^2 + (2 y_0 + 2 y_2 ) \\, n - 2 y_0."
},
{
"math_id": 36,
"text": "\\begin{align}\n\\begin{pmatrix}\n0 & 1 & 1 \\\\ 2 & 0 & 2 \\\\ -2 & 0 & 0 \n\\end{pmatrix}\n\\begin{pmatrix}\ny_0 \\\\ y_1 \\\\ y_2\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n0 \\\\ 2 \\\\ 0\n\\end{pmatrix}\n\\end{align}"
},
{
"math_id": 37,
"text": "y_0 = 0, y_1 = -1, y_2 = 1"
},
{
"math_id": 38,
"text": "y (n) = n^2-n"
}
] |
https://en.wikipedia.org/wiki?curid=58006785
|
58006810
|
Abramov's algorithm
|
In mathematics, particularly in computer algebra, Abramov's algorithm computes all rational solutions of a linear recurrence equation with polynomial coefficients. The algorithm was published by Sergei A. Abramov in 1989.
Universal denominator.
The main concept in Abramov's algorithm is a universal denominator. Let formula_0 be a field of characteristic zero. The "dispersion" formula_1 of two polynomials formula_2 is defined as formula_3 where formula_4 denotes the set of non-negative integers. Therefore, the dispersion is the maximum formula_5 such that the polynomial formula_6 and the formula_7-times shifted polynomial formula_8 have a common factor. It is formula_9 if such a formula_7 does not exist. The dispersion can be computed as the largest non-negative integer root of the resultant formula_10. Let formula_11 be a recurrence equation of order formula_12 with polynomial coefficients formula_13, polynomial right-hand side formula_14 and rational sequence solution formula_15. It is possible to write formula_16 for two relatively prime polynomials formula_2. Let formula_17 and formula_18 where formula_19 denotes the falling factorial of a function. Then formula_20 divides formula_21. So the polynomial formula_21 can be used as a denominator for all rational solutions formula_22 and hence it is called a universal denominator.
Algorithm.
Let again formula_11 be a recurrence equation with polynomial coefficients and formula_21 a universal denominator. After substituting formula_23 for an unknown polynomial formula_24 and setting formula_25, the recurrence equation is equivalent to formula_26 As the formula_27 cancel, this is a linear recurrence equation with polynomial coefficients which can be solved for an unknown polynomial solution formula_28. There are algorithms to find polynomial solutions. The solutions for formula_28 can then be used again to compute the rational solutions formula_29.
algorithm rational_solutions is
input: Linear recurrence equation formula_30.
output: The general rational solution formula_31 if there are any solutions, otherwise false.
formula_32
formula_33
formula_25
Solve formula_34 for general polynomial solution formula_28
if solution formula_28 exists then
return general solution formula_23
else
return false
end if
Example.
The homogeneous recurrence equation of order formula_35 formula_36 over formula_37 has a rational solution. It can be computed by considering the dispersion formula_38 This yields the following universal denominator: formula_39 and formula_40 Multiplying the original recurrence equation by formula_41 and substituting formula_42 leads to formula_43 This equation has the polynomial solution formula_44 for an arbitrary constant formula_45. Using formula_46 the general rational solution is formula_47 for arbitrary formula_45.
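The dispersion and the universal denominator of this example can be checked with a short SymPy sketch; this only illustrates the definitions above and is not a full implementation of Abramov's algorithm:

```python
import sympy as sp

n, k = sp.symbols('n k')

def dispersion(p, q):
    """Largest non-negative integer k with deg(gcd(p(n), q(n+k))) >= 1, else -1."""
    res = sp.resultant(p, q.subs(n, n + k), n)
    cands = [r for r in sp.roots(sp.Poly(res, k)) if r.is_integer and r >= 0]
    return max(cands) if cands else -1

def falling(p, j):
    """[p(n)] to the falling power j, i.e. p(n)*p(n-1)*...*p(n-j+1)."""
    return sp.Mul(*[p.subs(n, n - i) for i in range(int(j))])

p0, p1 = n - 1, -n - 1                     # coefficients of the order-1 example
D = dispersion(p1.subs(n, n - 1), p0)      # dis(p_r(n-r), p_0(n)) with r = 1
u = sp.gcd(falling(p0.subs(n, n + D), D + 1), falling(p1.subs(n, n - 1), D + 1))
print(D, sp.factor(u))                     # 1  n*(n - 1)
```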
|
[
{
"math_id": 0,
"text": "\\mathbb{K}"
},
{
"math_id": 1,
"text": "\\operatorname{dis} (p,q)"
},
{
"math_id": 2,
"text": "p, q \\in \\mathbb{K}[n]"
},
{
"math_id": 3,
"text": "\\operatorname{dis} (p,q) =\\max \\{ k \\in \\N \\, : \\, \\deg (\\gcd (p(n), q(n+k) )) \\geq 1 \\} \\cup \\{ -1 \\},"
},
{
"math_id": 4,
"text": "\\N"
},
{
"math_id": 5,
"text": "k \\in \\N"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "-1"
},
{
"math_id": 10,
"text": "\\operatorname{res}_n (p(n), q(n+k) ) \\in \\mathbb{K}[k]"
},
{
"math_id": 11,
"text": "\\sum_{k=0}^r p_k(n) \\, y (n+k) = f(n)"
},
{
"math_id": 12,
"text": "r"
},
{
"math_id": 13,
"text": "p_k \\in \\mathbb{K} [n]"
},
{
"math_id": 14,
"text": "f \\in \\mathbb{K}[n]"
},
{
"math_id": 15,
"text": "y (n) \\in \\mathbb{K}(n)"
},
{
"math_id": 16,
"text": "y (n) = p(n)/q(n)"
},
{
"math_id": 17,
"text": "D=\\operatorname{dis} (p_r(n-r), p_0 (n) )"
},
{
"math_id": 18,
"text": "u(n) = \\gcd ([p_0 (n+D)]^{\\underline{D+1}}, [p_r (n-r)]^{\\underline{D+1}})"
},
{
"math_id": 19,
"text": "[p(n)]^{\\underline{k}}=p(n)p(n-1)\\cdots p(n-k+1)"
},
{
"math_id": 20,
"text": "q(n)"
},
{
"math_id": 21,
"text": "u(n)"
},
{
"math_id": 22,
"text": "y(n)"
},
{
"math_id": 23,
"text": "y (n) = z(n)/u(n)"
},
{
"math_id": 24,
"text": "z(n) \\in \\mathbb{K} [n]"
},
{
"math_id": 25,
"text": "\\ell(n) = \\operatorname{lcm}(u(n), \\dots, u(n+r))"
},
{
"math_id": 26,
"text": "\\sum_{k=0}^r p_k (n) \\frac{z(n+k)}{u(n+k)} \\ell(n) = f(n) \\ell(n)."
},
{
"math_id": 27,
"text": "u(n+k)"
},
{
"math_id": 28,
"text": "z(n)"
},
{
"math_id": 29,
"text": "y(n) = z(n)/u(n)"
},
{
"math_id": 30,
"text": "\\sum_{k=0}^r p_k(n) \\, y (n+k) = f(n), p_k, f \\in \\mathbb{K}[n], p_0, p_r \\neq 0"
},
{
"math_id": 31,
"text": "y"
},
{
"math_id": 32,
"text": "D = \\operatorname{disp} (p_r(n-r), p_0 (n) )"
},
{
"math_id": 33,
"text": "u(n) = \\gcd ([p_0 (n+D)]^{\\underline{D+1}}, [p_r (n-r)]^{\\underline{D+1}})"
},
{
"math_id": 34,
"text": "\\sum_{k=0}^r p_k (n) \\frac{z(n+k)}{u(n+k)} \\ell(n) = f(n) \\ell(n)"
},
{
"math_id": 35,
"text": "1"
},
{
"math_id": 36,
"text": "(n-1) \\, y(n) + (-n-1) \\, y(n+1) = 0"
},
{
"math_id": 37,
"text": "\\Q"
},
{
"math_id": 38,
"text": "D = \\operatorname{dis} (p_1 (n-1), p_0 (n) ) = \\operatorname{disp} (-n,n-1) = 1."
},
{
"math_id": 39,
"text": " u(n) = \\gcd ([p_0 (n+1)]^{\\underline{2}}, [p_r (n-1)]^{\\underline{2}}) = (n-1)n"
},
{
"math_id": 40,
"text": " \\ell(n) = \\operatorname{lcm} (u(n), u(n+1) ) = (n-1)n(n+1)."
},
{
"math_id": 41,
"text": " \\ell(n)"
},
{
"math_id": 42,
"text": " y(n) = z(n)/u(n)"
},
{
"math_id": 43,
"text": " (n-1)(n+1)\\, z(n) + (-n-1) (n-1) \\, z(n+1) = 0."
},
{
"math_id": 44,
"text": " z(n) = c"
},
{
"math_id": 45,
"text": " c \\in \\Q"
},
{
"math_id": 46,
"text": " y(n) = z(n) / u(n)"
},
{
"math_id": 47,
"text": " y(n) = \\frac{c}{(n-1)n}"
}
] |
https://en.wikipedia.org/wiki?curid=58006810
|
58006825
|
P-recursive equation
|
Linear recurrence equation
In mathematics a P-recursive equation is a linear equation of sequences where the coefficient sequences can be represented as polynomials. P-recursive equations are linear recurrence equations (or linear recurrence relations or linear difference equations) with polynomial coefficients. These equations play an important role in different areas of mathematics, specifically in combinatorics. The sequences which are solutions of these equations are called holonomic, P-recursive or D-finite.
From the late 1980s, the first algorithms were developed to find solutions for these equations. Sergei A. Abramov, Marko Petkovšek and Mark van Hoeij described algorithms to find polynomial, rational, hypergeometric and d'Alembertian solutions.
Definition.
Let formula_0 be a field of characteristic zero (for example formula_1), formula_2 polynomials for formula_3, formula_4 a sequence and formula_5 an unknown sequence. The equation formula_6 is called a linear recurrence equation with polynomial coefficients (all recurrence equations in this article are of this form). If formula_7 and formula_8 are both nonzero, then formula_9 is called the order of the equation. If formula_10 is zero the equation is called homogeneous, otherwise it is called inhomogeneous.
This can also be written as formula_11 where formula_12 is a linear recurrence operator with polynomial coefficients and formula_13 is the shift operator, i.e. formula_14.
Closed form solutions.
Let formula_15 or equivalently formula_16 be a recurrence equation with polynomial coefficients. There exist several algorithms which compute solutions of this equation. These algorithms can compute polynomial, rational, hypergeometric and d'Alembertian solutions. The solution of a homogeneous equation is given by the kernel of the linear recurrence operator: formula_17. As a subspace of the space of sequences this kernel has a basis. Let formula_18 be a basis of formula_19, then the formal sum formula_20 for arbitrary constants formula_21 is called the general solution of the homogeneous problem formula_22. If formula_23 is a particular solution of formula_16, i.e. formula_24, then formula_25 is also a solution of the inhomogeneous problem and it is called the general solution of the inhomogeneous problem.
Polynomial solutions.
In the late 1980s Sergei A. Abramov described an algorithm which finds the general polynomial solution of a recurrence equation, i.e. formula_26, with a polynomial right-hand side formula_27. He (and a few years later Marko Petkovšek) gave a degree bound for polynomial solutions. This way the problem can simply be solved by considering a system of linear equations. In 1995 Abramov, Bronstein and Petkovšek showed that the polynomial case can be solved more efficiently by considering power series solution of the recurrence equation in a specific power basis (i.e. not the ordinary basis formula_28).
The other algorithms for finding more general solutions (e.g. rational or hypergeometric solutions) also rely on algorithms which compute polynomial solutions.
Rational solutions.
In 1989 Sergei A. Abramov showed that a general rational solution, i.e. formula_29, with polynomial right-hand side formula_30, can be found by using the notion of a universal denominator. A universal denominator is a polynomial formula_31 such that the denominator of every rational solution divides formula_31. Abramov showed how this universal denominator can be computed by only using the first and the last coefficient polynomials formula_7 and formula_8. Substituting this universal denominator for the unknown denominator of formula_32, all rational solutions can be found by computing all polynomial solutions of a transformed equation.
Hypergeometric solution.
A sequence formula_33 is called hypergeometric if the ratio of two consecutive terms is a rational function in formula_34, i.e. formula_35. This is the case if and only if the sequence is the solution of a first-order recurrence equation with polynomial coefficients. The set of hypergeometric sequences is not a subspace of the space of sequences as it is not closed under addition.
In 1992 Marko Petkovšek gave an algorithm to get the general hypergeometric solution of a recurrence equation where the right-hand side formula_36 is the sum of hypergeometric sequences. The algorithm makes use of the Gosper-Petkovšek normal-form of a rational function. With this specific representation it is again sufficient to consider polynomial solutions of a transformed equation.
A different and more efficient approach is due to Mark van Hoeij. Considering the roots of the first and the last coefficient polynomial formula_7 and formula_8 – called singularities – one can build a solution step by step making use of the fact that every hypergeometric sequence formula_37 has a representation of the form formula_38 for some formula_39 with formula_40 for formula_41 and formula_42. Here formula_43 denotes the Gamma function and formula_44 the algebraic closure of the field formula_0. Then the formula_45 have to be singularities of the equation (i.e. roots of formula_7 or formula_8). Furthermore, one can compute bounds for the exponents formula_46. For fixed values formula_47 it is possible to make an ansatz which gives candidates for formula_48. For a specific formula_48 one can again make an ansatz to get the rational function formula_49 by Abramov's algorithm. Considering all possibilities one gets the general solution of the recurrence equation.
D'Alembertian solutions.
A sequence formula_32 is called d'Alembertian if formula_50 for some hypergeometric sequences formula_51; here formula_52 means that formula_53, where formula_54 denotes the difference operator, i.e. formula_55. This is the case if and only if there are first-order linear recurrence operators formula_56 with rational coefficients such that formula_57.
In 1994, Abramov and Petkovšek described an algorithm which computes the general d'Alembertian solution of a recurrence equation. This algorithm computes hypergeometric solutions and reduces the order of the recurrence equation recursively.
Examples.
Signed permutation matrices.
The number of signed permutation matrices of size formula_58 can be described by the sequence formula_59. A signed permutation matrix is a square matrix which has exactly one nonzero entry in every row and in every column. The nonzero entries can be formula_60. The sequence is determined by the linear recurrence equation with polynomial coefficients formula_61 and the initial values formula_62. Applying an algorithm to find hypergeometric solutions, one can find the general hypergeometric solution formula_63 for some constant formula_64. Also considering the initial values, the sequence formula_65 describes the number of signed permutation matrices.
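The closed form can be checked numerically against the recurrence; a small sketch using the recurrence and initial values given above:

```python
from math import factorial

def signed_perm_counts(n_max):
    """Terms of y(n) = 4*(n-1)**2 * y(n-2) + 2*y(n-1) with y(0) = 1, y(1) = 2."""
    y = [1, 2]
    for n in range(2, n_max + 1):
        y.append(4 * (n - 1) ** 2 * y[n - 2] + 2 * y[n - 1])
    return y

# Hypergeometric closed form found above: y(n) = 2**n * n!
closed = [2 ** n * factorial(n) for n in range(9)]
print(signed_perm_counts(8) == closed)   # True
```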
Involutions.
The number of involutions formula_33 of a set with formula_66 elements is given by the recurrence equation formula_67 Applying, for example, Petkovšek's algorithm, it is possible to see that there is no polynomial, rational or hypergeometric solution for this recurrence equation.
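Even without a closed form, the recurrence itself is easy to iterate; the sketch below assumes the standard initial values y(0) = y(1) = 1, which are not stated in the text:

```python
def involution_counts(n_max):
    """Terms of y(n) = (n-1)*y(n-2) + y(n-1); the initial values y(0) = y(1) = 1
    are assumed (standard for involutions, but not given in the text)."""
    y = [1, 1]
    for n in range(2, n_max + 1):
        y.append((n - 1) * y[n - 2] + y[n - 1])
    return y

print(involution_counts(6))   # [1, 1, 2, 4, 10, 26, 76]
```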
Applications.
A function formula_68 is called hypergeometric if formula_69 where formula_70 denotes the rational functions in formula_66 and formula_71. A hypergeometric sum is a finite sum of the form formula_72 where formula_68 is hypergeometric. Zeilberger's creative telescoping algorithm can transform such a hypergeometric sum into a recurrence equation with polynomial coefficients. This equation can then be solved to get, for example, a linear combination of hypergeometric solutions, which is called a closed-form solution of formula_10.
|
[
{
"math_id": 0,
"text": "\\mathbb{K}"
},
{
"math_id": 1,
"text": "\\mathbb{K} = \\mathbb{Q}"
},
{
"math_id": 2,
"text": "p_k(n) \\in \\mathbb{K} [n]"
},
{
"math_id": 3,
"text": "k = 0,\\dots,r"
},
{
"math_id": 4,
"text": "f \\in \\mathbb{K}^{\\N}"
},
{
"math_id": 5,
"text": "y \\in \\mathbb{K}^{\\N}"
},
{
"math_id": 6,
"text": "\\sum_{k=0}^r p_k(n) \\, y (n+k) = f(n)"
},
{
"math_id": 7,
"text": "p_0"
},
{
"math_id": 8,
"text": "p_r"
},
{
"math_id": 9,
"text": "r"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "L y = f"
},
{
"math_id": 12,
"text": "L=\\sum_{k=0}^r p_k N^k"
},
{
"math_id": 13,
"text": "N"
},
{
"math_id": 14,
"text": "N \\, y (n) = y (n+1)"
},
{
"math_id": 15,
"text": "\\sum_{k=0}^r p_k(n) \\, y (n+k) = f(n)"
},
{
"math_id": 16,
"text": "Ly=f"
},
{
"math_id": 17,
"text": "\\ker L = \\{ y \\in \\mathbb{K}^\\N \\, : \\, L y = 0\\}"
},
{
"math_id": 18,
"text": "\\{ y^{(1)}, y^{(2)}, \\dots, y^{(m)} \\}"
},
{
"math_id": 19,
"text": "\\ker L"
},
{
"math_id": 20,
"text": "c_1 y^{(1)} + \\dots + c_m y^{(m)}"
},
{
"math_id": 21,
"text": "c_1,\\dots,c_m \\in \\mathbb{K} "
},
{
"math_id": 22,
"text": "Ly=0"
},
{
"math_id": 23,
"text": "\\tilde{y}"
},
{
"math_id": 24,
"text": "L \\tilde{y}=f"
},
{
"math_id": 25,
"text": "c_1 y^{(1)} + \\dots + c_m y^{(m)} + \\tilde{y}"
},
{
"math_id": 26,
"text": "y (n) \\in \\mathbb{K} [n]"
},
{
"math_id": 27,
"text": "f(n) \\in \\mathbb{K} [n]"
},
{
"math_id": 28,
"text": "(x^n)_{n \\in \\N}"
},
{
"math_id": 29,
"text": "y(n) \\in \\mathbb{K} (n)"
},
{
"math_id": 30,
"text": "f(n) \\in \\mathbb{K}[n]"
},
{
"math_id": 31,
"text": "u"
},
{
"math_id": 32,
"text": "y"
},
{
"math_id": 33,
"text": "y(n)"
},
{
"math_id": 34,
"text": "n"
},
{
"math_id": 35,
"text": "y (n+1) / y(n) \\in \\mathbb{K} (n)"
},
{
"math_id": 36,
"text": "f"
},
{
"math_id": 37,
"text": "y (n)"
},
{
"math_id": 38,
"text": "y (n) = c \\, r(n)\\, z^n \\, \\Gamma(n-\\xi_1)^{e_1} \\Gamma(n-\\xi_2)^{e_2} \\cdots \\Gamma(n-\\xi_s)^{e_s}"
},
{
"math_id": 39,
"text": "c \\in \\mathbb{K}, z \\in \\overline{\\mathbb{K}}, s \\in \\N, r(n) \\in \\overline\\mathbb{K}(n), \\xi_1, \\dots, \\xi_s \\in \\overline{\\mathbb{K}}"
},
{
"math_id": 40,
"text": "\\xi_i-\\xi_j \\notin \\Z"
},
{
"math_id": 41,
"text": "i \\neq j"
},
{
"math_id": 42,
"text": "e_1, \\dots, e_s \\in \\Z"
},
{
"math_id": 43,
"text": "\\Gamma (n)"
},
{
"math_id": 44,
"text": "\\overline{\\mathbb{K}}"
},
{
"math_id": 45,
"text": "\\xi_1, \\dots, \\xi_s "
},
{
"math_id": 46,
"text": "e_i"
},
{
"math_id": 47,
"text": "\\xi_1, \\dots, \\xi_s, e_1, \\dots, e_s "
},
{
"math_id": 48,
"text": "z"
},
{
"math_id": 49,
"text": "r(n)"
},
{
"math_id": 50,
"text": "y = h_1 \\sum h_2 \\sum \\cdots \\sum h_k"
},
{
"math_id": 51,
"text": "h_1,\\dots,h_k"
},
{
"math_id": 52,
"text": "y=\\sum x"
},
{
"math_id": 53,
"text": "\\Delta y = x"
},
{
"math_id": 54,
"text": "\\Delta "
},
{
"math_id": 55,
"text": "\\Delta y = N y - y = y (n+1) - y(n)"
},
{
"math_id": 56,
"text": "L_1, \\dots, L_k"
},
{
"math_id": 57,
"text": "L_k \\cdots L_1 y = 0"
},
{
"math_id": 58,
"text": "n \\times n"
},
{
"math_id": 59,
"text": "y(n) \\in \\Q^{\\N}"
},
{
"math_id": 60,
"text": "\\pm 1"
},
{
"math_id": 61,
"text": "y (n) = 4(n-1)^2 \\, y (n-2) + 2 \\, y (n-1)"
},
{
"math_id": 62,
"text": "y(0) = 1, y(1) = 2"
},
{
"math_id": 63,
"text": "y (n) = c \\, 2^n n!"
},
{
"math_id": 64,
"text": "c"
},
{
"math_id": 65,
"text": "y (n) = 2^n n!"
},
{
"math_id": 66,
"text": "n"
},
{
"math_id": 67,
"text": "y (n) = (n-1) \\, y (n-2) + y (n-1)."
},
{
"math_id": 68,
"text": "F(n,k)"
},
{
"math_id": 69,
"text": "F(n,k+1)/F(n,k), F(n+1,k)/F(n,k) \\in \\mathbb{K}(n,k)"
},
{
"math_id": 70,
"text": "\\mathbb{K}(n,k)"
},
{
"math_id": 71,
"text": "k"
},
{
"math_id": 72,
"text": "f(n)=\\sum_k F(n,k)"
}
] |
https://en.wikipedia.org/wiki?curid=58006825
|
580076
|
Skyglow
|
Diffuse luminance of the night sky
Skyglow (or sky glow) is the diffuse luminance of the night sky, apart from discrete light sources such as the Moon and visible individual stars. It is a commonly noticed aspect of light pollution. While usually referring to luminance arising from artificial lighting, skyglow may also involve any scattered light seen at night, including natural ones like starlight, zodiacal light, and airglow.
In the context of light pollution, skyglow arises from the use of artificial light sources, including electrical (or rarely gas) lighting used for illumination and advertisement and from gas flares. Light propagating into the atmosphere directly from upward-directed or incompletely shielded sources, or after reflection from the ground or other surfaces, is partially scattered back toward the ground, producing a diffuse glow that is visible from great distances. Skyglow from artificial lights is most often noticed as a glowing dome of light over cities and towns, yet is pervasive throughout the developed world.
Causes.
Light used for all purposes in the outdoor environment contributes to skyglow, by sometimes avoidable aspects such as poor shielding of fixtures, and through at least partially unavoidable aspects such as unshielded signage and reflection from intentionally illuminated surfaces. Some of this light is then scattered in the atmosphere back toward the ground by molecules and aerosols (see ), and (if present) clouds, causing skyglow.
Research indicates that when viewed from nearby, about half of skyglow arises from direct upward emissions, and half from reflected light, though the ratio varies depending on details of lighting fixtures and usage, and distance of the observation point from the light source. In most communities, direct upward emission averages about 10–15%. Fully shielded lighting (with no light emitted directly upward) decreases skyglow by about half when viewed nearby, but by much greater factors when viewed from a distance.
Skyglow is significantly amplified by the presence of snow, and within and near urban areas when clouds are present. In remote areas, snow brightens the sky, but clouds make the sky darker.
Mechanism.
There are two kinds of light scattering that lead to sky glow: scattering from molecules such as N2 and O2 (called Rayleigh scattering), and that from aerosols, described by Mie theory. Rayleigh scattering is much stronger for short-wavelength (blue) light, while scattering from aerosols is less affected by wavelength. Rayleigh scattering makes the sky appear blue in the daytime; the more aerosols there are, the less blue or whiter the sky appears. In many areas, most particularly in urban areas, aerosol scattering dominates, due to the heavy aerosol loading caused by modern industrial activity, power generation, farming and transportation.
Despite the strong wavelength dependence of Rayleigh scattering, its effect on sky glow for real light sources is small. Though the shorter wavelengths suffer increased scattering, this increased scattering also gives rise to increased extinction: the effects approximately balance when the observation point is near the light source.
For human visual perception of sky glow, generally the assumed context under discussions of sky glow, sources rich in shorter wavelengths produce brighter sky glow, but for a different reason (see ).
Measurement.
Professional astronomers and light pollution researchers use various measures of luminous or radiant intensity per unit area, such as magnitudes per square arcsecond, watts per square meter per steradian, (nano-)Lamberts, or (micro-)candela per square meter. All-sky maps of skyglow brightness are produced with professional-grade imaging cameras with CCD detectors and using stars as calibration sources. Amateur astronomers have used the Bortle Dark-Sky Scale to approximately quantify skyglow ever since it was published in "Sky & Telescope" magazine in February 2001. The scale rates the darkness of the night sky inhibited by skyglow with nine classes and provides a detailed description of each position on the scale. Amateurs also increasingly use Sky Quality Meters (SQM) that nominally measure in astronomical photometric units of visual (Johnson V) magnitudes per square arcsecond.
Dependence on distance from source.
Sky glow brightness arising from artificial light sources falls steeply with distance from the light source, due to the geometric effects characterized by an inverse square law in combination with atmospheric absorption. An approximate relation is given by
formula_0
which is known as "Walker's Law."
Walker's Law has been verified by observation to describe both measurements of sky brightness at any given point or direction in the sky caused by a light source (such as a city), and integrated measures such as the brightness of the "light dome" over a city or the integrated brightness of the entire night sky. At very large distances (over about 50 km) the brightness falls more rapidly, largely due to extinction and geometric effects caused by the curvature of the Earth.
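As a back-of-the-envelope sketch of the relation (the 2.5 exponent is Walker's Law as stated above; the distances are arbitrary examples):

```python
# Walker's Law sketch: skyglow from a single source falls off roughly as
# distance**-2.5, valid out to roughly 50 km per the text above.
def relative_skyglow(d_near_km, d_far_km):
    """Factor by which skyglow dims when moving from d_near_km to d_far_km."""
    return (d_near_km / d_far_km) ** 2.5

print(relative_skyglow(10, 20))   # ~0.18: doubling the distance cuts the glow ~5.7x
```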
Dependence on light source.
Different light sources produce differing amounts of visual sky glow. The dominant effect arises from the Purkinje shift, and not as commonly claimed from Rayleigh scattering of short wavelengths (see ). When observing the night sky, even from moderately light polluted areas, the eye becomes nearly or completely dark-adapted or scotopic. The scotopic eye is much more sensitive to blue and green light, and much less sensitive to yellow and red light, than the light-adapted or photopic eye. Predominantly because of this effect, white light sources such as metal halide, fluorescent, or white LED can produce as much as 3.3 times the visual sky glow brightness of the currently most-common high-pressure sodium lamp, and up to eight times the brightness of low-pressure sodium or amber Aluminium gallium indium phosphide LED.
In detail, the effects are complex, depending both on the distance from the source as well as the viewing direction in the night sky. But the basic results of recent research are unambiguous: assuming equal luminous flux (that is, equal amounts of visible light), and matched optical characteristics of the fixtures (particularly the amount of light allowed to radiate directly upward), white sources rich in shorter (blue and green) wavelengths produce dramatically greater sky glow than sources with little blue and green. The effect of Rayleigh scattering on skyglow impacts of differing light source spectra is small.
Much discussion in the lighting industry and even by some dark-sky advocacy organizations (e.g. International Dark-Sky Association) of the sky glow consequences of replacing the currently prevalent high-pressure sodium roadway lighting systems with white LEDs neglects critical issues of human visual spectral sensitivity, or focuses exclusively on white LED light sources, or focuses concerns narrowly on the blue portion (<500 nm) of the spectrum. All of these deficiencies lead to the incorrect conclusion that increases in sky glow brightness arising from the change in light source spectrum are minimal, or that light-pollution regulations that limit the CCT of white LEDs to so-called "warm white" (i.e. CCT <4000K or 3500K) will prevent sky glow increases. Improved efficiency (efficiency in distributing light onto the target area – such as the roadway – with diminished "waste" falling outside of the target area and more uniform distribution patterns) can allow designers to lower lighting amounts. But efficiency improvement sufficient to overcome sky glow doubling or tripling arising from a switch to even warm-white LED from high-pressure sodium (or a 4–8x increase compared to low-pressure sodium) has not been demonstrated.
Negative effects.
Skyglow, and more generally light pollution, has various negative effects: from aesthetic diminishment of the beauty of a star-filled sky, through energy and resources wasted in the production of excessive or uncontrolled lighting, to impacts on birds and other biological systems, including humans. Skyglow is a prime problem for astronomers, because it reduces contrast in the night sky to the extent where it may become impossible to see all but the brightest stars.
Many nocturnal organisms are believed to navigate using the polarization signal of scattered moonlight. Because skyglow is mostly unpolarized, it can swamp the weaker signal from the moon, making this type of navigation impossible. Close to global coastal megacities (e.g. Tokyo, Shanghai), the natural illumination cycles provided by the moon in the marine environment are considerably disrupted by light pollution, with only nights around the full moon providing greater radiances, and over a given month lunar dosages may be a factor of 6 less than light pollution dosage.
Due to skyglow, people who live in or near urban areas see thousands fewer stars than in an unpolluted sky, and commonly cannot see the Milky Way. Fainter sights like the zodiacal light and Andromeda Galaxy are nearly impossible to discern even with telescopes.
Effects on the ecosystem.
The effects of sky glow in relation to the ecosystem have been observed to be detrimental to a variety of organisms. The lives of plants and animals (especially those which are nocturnal) are affected as their natural environment becomes subjected to unnatural change. The pace of human technological development can be assumed to exceed the rate at which other organisms can adapt to their changing environment; plants and animals are therefore unable to keep up and can suffer as a consequence.
Although sky glow can be the result of a natural occurrence, the presence of artificial sky glow has become a detrimental problem as urbanization continues to flourish. The effects of urbanization, commercialization, and consumerism are the result of human development; these developments in turn have ecological consequences. For example, lighted fishing fleets, offshore oil platforms, and cruise ships all bring the disruption of artificial night lighting to the world's oceans.
As a whole, these effects derive from changes in orientation, disorientation, or misorientation, and attraction or repulsion from the altered light environment, which in turn may affect foraging, predator-prey dynamics, reproduction, migration, and communication. These changes can result in the death of some species such as certain migratory birds, sea creatures, and nocturnal predators.
Besides the effect on animals, crops and trees are also susceptible to damage. Constant exposure to light affects the photosynthesis of a plant, as a plant needs a balance of light and darkness to survive. In turn, sky glow can affect agricultural production rates, especially in farming areas that are close to large city centers.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{intensity} \\ \\propto \\ \\frac{1}{\\text{distance}^{2.5}} \\, "
}
] |
https://en.wikipedia.org/wiki?curid=580076
|
58014671
|
Zhihong Xia
|
Chinese-American mathematician
Zhihong "Jeff" Xia (; born 20 September 1962, in Dongtai, Jiangsu, China) is a Chinese-American mathematician.
Education and career.
Xia received a bachelor's degree in astronomy from Nanjing University in 1982 and a PhD in mathematics from Northwestern University in 1988, under thesis advisor Donald G. Saari, with the thesis "The Existence of the Non-Collision Singularities". From 1988 to 1990, Xia was an assistant professor at Harvard University and from 1990 to 1994, an associate professor at Georgia Institute of Technology (and Institute Fellow). In 1994, he became a full professor at Northwestern University and since 2000, he has been the "Arthur and Gladys Pancoe Professor of Mathematics".
His research deals with celestial mechanics, dynamical systems, Hamiltonian dynamics, and ergodic theory. In his dissertation, he solved the Painlevé conjecture, a long-standing problem posed in 1895 by Paul Painlevé. The problem concerns the existence of singularities of non-collision character in the formula_0-body problem in three-dimensional space; Xia proved the existence for formula_1. For the existence proof, he constructed an example of five masses, of which four are separated into two pairs which revolve around each other in eccentric elliptical orbits about the z-axis of symmetry, and a fifth mass moves along the z-axis. For selected initial conditions, the fifth mass can be accelerated to an infinite velocity in a finite time interval (without any collision between the bodies involved in the example). The case formula_2 was open until 2014, when it was solved by Jinxin Xue. For formula_3, Painlevé had proven that the singularities (points of the orbit in which accelerations become infinite in a finite time interval) must be of the collision type. However, Painlevé's proof did not extend to the case formula_4.
In 1993, Xia was the inaugural winner of the Blumenthal Award of the American Mathematical Society. From 1989 to 1991, he was a Sloan Fellow. From 1993 to 1998, he received the National Young Investigator Award from the National Science Foundation. In 1995, he received the Monroe H. Martin Prize in Applied Mathematics from the University of Maryland. In 1998, he was an Invited Speaker of the International Congress of Mathematicians in Berlin.
|
[
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": " N \\geq 5 "
},
{
"math_id": 2,
"text": "N = 4"
},
{
"math_id": 3,
"text": "N = 3"
},
{
"math_id": 4,
"text": " N > 3 "
}
] |
https://en.wikipedia.org/wiki?curid=58014671
|
580167
|
Link grammar
|
Theory of syntax
Link grammar (LG) is a theory of syntax by Davy Temperley and Daniel Sleator which builds relations between pairs of words, rather than constructing constituents in a phrase structure hierarchy. Link grammar is similar to dependency grammar, but dependency grammar includes a head-dependent relationship, whereas link grammar makes the head-dependent relationship optional (links need not indicate direction). Colored Multiplanar Link Grammar (CMLG) is an extension of LG allowing crossing relations between pairs of words. The relationship between words is indicated with link types, thus making the Link grammar closely related to certain categorial grammars.
For example, in a subject–verb–object language like English, the verb would look left to form a subject link, and right to form an object link. Nouns would look right to complete the subject link, or left to complete the object link.
In a subject–object–verb language like Persian, the verb would look left to form an object link, and a more distant left to form a subject link. Nouns would look to the right for both subject and object links.
Overview.
Link grammar connects the words in a sentence with links, similar in form to a catena. Unlike the catena or a traditional dependency grammar, the marking of the head-dependent relationship is optional for most languages, becoming mandatory only in free-word-order languages (such as Turkish, Finnish, Hungarian). That is, in English, the subject-verb relationship is "obvious", in that the subject is almost always to the left of the verb, and thus no specific indication of dependency needs to be made. In the case of subject-verb inversion, a distinct link type is employed. For free word-order languages, this can no longer hold, and a link between the subject and verb must contain an explicit directional arrow to indicate which of the two words is which.
Link grammar also differs from traditional dependency grammars by allowing cyclic relations between words. Thus, for example, there can be links indicating both the head verb of a sentence, the head subject of the sentence, as well as a link between the subject and the verb. These three links thus form a cycle (a triangle, in this case). Cycles are useful in constraining what might otherwise be ambiguous parses; cycles help "tighten up" the set of allowable parses of a sentence.
For example, in the parse
+---->WV--->+
+--Wd--+-Ss-+--Pa--+
LEFT-WALL he runs fast
the LEFT-WALL indicates the start of the sentence, or the root node. The directional WV link (with arrows) points at the head verb of the sentence; it is the Wall-Verb link. The Wd link (drawn here without arrows) indicates the head noun (the subject) of the sentence. The link type Wd indicates both that it connects to the wall (W) and that the sentence is a declarative sentence (the lower-case "d" subtype). The Ss link indicates the subject-verb relationship; the lower-case "s" indicating that the subject is singular. Note that the WV, Wd and Ss links form a cycle. The Pa link connects the verb to a complement; the lower-case "a" indicating that it is a predicative adjective in this case.
Parsing algorithm.
Parsing is performed in analogy to assembling a jigsaw puzzle (representing the parsed sentence) from puzzle pieces (representing individual words). A language is represented by means of a dictionary or lexis, which consists of words and the set of allowed "jigsaw puzzle shapes" that each word can have. The shape is indicated by a "connector", which is a link-type, and a direction indicator + or - indicating right or left. Thus for example, a transitive verb may have the connectors S- & O+ indicating that the verb may form a Subject ("S") connection to its left ("-") and an object connection ("O") to its right ("+"). Similarly, a common noun may have the connectors D- & S+ indicating that it may connect to a determiner on the left ("D-") and act as a subject, when connecting to a verb on the right ("S+"). The act of parsing is then to identify that the S+ connector can attach to the S- connector, forming an "S" link between the two words. Parsing completes when all connectors have been connected.
A given word may have dozens or even hundreds of allowed puzzle-shapes (termed "disjuncts"): for example, many verbs may be optionally transitive, thus making the O+ connector optional; such verbs might also take adverbial modifiers (E connectors) which are inherently optional. More complex verbs may have additional connectors for indirect objects, or for particles or prepositions. Thus, a part of parsing also involves picking one single unique disjunct for a word; the final parse must satisfy (connect) "all" connectors for that disjunct.
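The connector-matching idea can be illustrated with a toy sketch in plain Python (this is not the actual link-grammar library; the dictionary below is a hypothetical simplification with exactly one disjunct per word, anticipating the SVO rule file of Example 1 below):

```python
# Each word gets one disjunct: a list of (link_type, direction) connectors,
# where '+' looks right and '-' looks left.
DICT = {
    "the":     [("D", "+")],
    "a":       [("D", "+")],
    "boy":     [("D", "-"), ("S", "+")],
    "picture": [("D", "-"), ("O", "-")],
    "painted": [("S", "-"), ("O", "+")],
}

def check_linkage(words, links):
    """links: set of (left_index, right_index, link_type).  Every connector of every
    word's disjunct must be satisfied by a link in the stated direction.
    (Planarity -- no crossing links -- is not checked in this toy version.)"""
    used = {i: [] for i in range(len(words))}
    for l, r, t in links:
        if not l < r:
            return False
        if (t, "+") not in DICT[words[l]] or (t, "-") not in DICT[words[r]]:
            return False
        used[l].append((t, "+"))
        used[r].append((t, "-"))
    return all(sorted(used[i]) == sorted(DICT[words[i]]) for i in range(len(words)))

sentence = ["the", "boy", "painted", "a", "picture"]
linkage  = {(0, 1, "D"), (1, 2, "S"), (3, 4, "D"), (2, 4, "O")}
print(check_linkage(sentence, linkage))   # True: all connectors are satisfied
```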
Dependency.
Connectors may also include head-dependent indicators h and d. In this case, a connector containing a head indicator is only allowed to connect to a connector containing the dependent indicator (or to a connector without any h-d indicators on it). When these indicators are used, the link is decorated with arrows to indicate the link direction.
A recent extension simplifies the specification of connectors for languages that have little or no restrictions on word-order, such as Lithuanian. There are also extensions to make it easier to support languages with concatenative morphologies.
Planarity.
The parsing algorithm also requires that the final graph is a planar graph, i.e. that no links cross. This constraint is based on empirical psycho-linguistic evidence that, indeed, for most languages, in nearly all situations, dependency links really do not cross. There are rare exceptions, e.g. in Finnish, and even in English; they can be parsed by link-grammar only by introducing more complex and selective connector types to capture these situations.
Costs and selection.
Connectors can have an optional floating-point cost markup, so that some are "cheaper" to use than others, thus giving preference to certain parses over others. That is, the total cost of a parse is the sum of the individual costs of the connectors that were used; the cheapest parse indicates the most likely parse. This is used for ranking multiple ambiguous parses. The fact that the costs are local to the connectors, and are not a global property of the algorithm, makes them essentially Markovian in nature.
The assignment of a log-likelihood to linkages allows link grammar to implement the semantic selection of predicate-argument relationships. That is, certain constructions, although syntactically valid, are extremely unlikely. In this way, link grammar embodies some of the ideas present in operator grammar.
Because the costs are additive, they behave like the logarithm of the probability (since log-likelihoods are additive), or equivalently, somewhat like the entropy (since entropies are additive). This makes link grammar compatible with machine learning techniques such as hidden Markov models and the Viterbi algorithm, because the link costs correspond to the link weights in Markov networks or Bayesian networks.
Type theory.
The link grammar link types can be understood to be the types in the sense of type theory. In effect, the link grammar can be used to model the internal language of certain (non-symmetric) compact closed categories, such as pregroup grammars. In this sense, link grammar appears to be isomorphic or homomorphic to some categorial grammars. Thus, for example, in a categorial grammar the noun phrase "the bad boy" may be written as
formula_0
whereas the corresponding disjuncts in link grammar would be
the: D+;
bad: A+;
boy: D- & A-;
The contraction rules (inference rules) of the Lambek calculus can be mapped to the connecting of connectors in link grammar. The + and - directional indicators correspond to the forward and backward slashes of the categorial grammar. Finally, the single-letter names A and D can be understood as labels or "easy-to-read" mnemonic names for the rather more verbose types "NP/N", etc.
The primary distinction here is then that categorial grammars have two type constructors, the forward and backward slashes, that can be used to create new types (such as "NP/N") from base types (such as "NP" and "N"). Link-grammar omits the use of type constructors, opting instead to define a much larger set of base types having compact, easy-to-remember mnemonics.
Examples.
Example 1.
A basic rule file for an SVO language might look like:
<determiner> D+;
<noun-subject> {D−} & S+;
<noun-object> {D−} & O−;
<verb> S− & {O+};
Thus the English sentence, "The boy painted a picture" would appear as:
+-----O-----+
+-D-+--S--+ +--D--+
The boy painted a picture
Similar parses apply for Chinese.
Example 2.
Conversely, a rule file for a null subject SOV language might consist of the following links:
<noun-subject> S+;
<noun-object> O+;
<verb> {O−} & {S−};
And a simple Persian sentence, "man nAn xordam" (من نان خوردم) 'I ate bread' would look like:
+-----S-----+
| +--O--+
man nAn xordam
VSO order can be likewise accommodated, such as for Arabic.
Example 3 (morphology).
In many languages with a concatenative morphology, the stem plays no grammatical role; the grammar is determined by the suffixes. Thus, in Russian, the sentence 'вверху плыли редкие облачка' might have the parse:
+------------Wd-----------+---------------SIp---------------+
| +-------EI------+ +--------Api-------+
| | +--LLCZD-+ +-LLAQZ+ +--LLCAO-+
LEFT-WALL вверху.e плы.= =ли.vnndpp ре.= =дкие.api облачк.= =а.ndnpi
The subscripts, such as '.vnndpp', are used to indicate the grammatical category. The primary links: Wd, EI, SIp and Api connect together the suffixes, as, in principle, other stems could appear here, without altering the structure of the sentence. The Api link indicates the adjective; SIp denotes subject-verb inversion; EI is a modifier. The Wd link is used to indicate the head noun; the head verb is not indicated in this sentence. The LLXXX links serve only to attach stems to suffixes.
Example 4 (phonology).
The link-grammar can also indicate phonological agreement between neighboring words. For example:
+---------Ost--------+
+------>WV------>+ +------Ds**x-----+
+----Wd---+-Ss*b-+ +--PHv-+----A----+
LEFT-WALL that.j-p is.v an abstract.a concept.n
Here, the connector 'PH' is used to constrain the determiners that can appear before the word 'abstract'. It effectively blocks (makes it costly) to use the determiner 'a' in this sentence, while the link to 'an' becomes cheap. The other links are roughly as in previous examples: S denoting subject, O denoting object, D denoting determiner. The 'WV' link indicates the head verb, and the 'W' link indicates the head noun. The lower-case letters following the upper-case link types serve to refine the type; so for example, Ds can only connect to a singular noun; Ss only to a singular subject, Os to a singular object. The lower-case v in PHv denotes 'vowel'; the lower-case d in Wd denotes a declarative sentence.
Example 5 (Vietnamese).
The Vietnamese language sentence "Bữa tiệc hôm qua là một thành công lớn" - "The party yesterday was a great success" may be parsed as follows:
Implementations.
The link grammar syntax parser is a library for natural language processing written in C. It is available under the LGPL license. The parser is an ongoing project. Recent versions include improved sentence coverage, Russian, Persian and Arabic language support, prototypes for German, Hebrew, Lithuanian, Vietnamese and Turkish, and programming APIs for Python, Java, Common LISP, AutoIt and OCaml, with third-party bindings for Perl, Ruby and JavaScript node.js.
A current major undertaking is a project to learn the grammar and morphology of new languages, using unsupervised learning algorithms.
The "link-parser" program along with rules and word lists for English may be found in standard Linux distributions, e.g., as a Debian package, although many of these are years out of date.
Applications.
AbiWord, a free word processor, uses link grammar for on-the-fly grammar checking. Words that cannot be linked anywhere are underlined in green.
The semantic relationship extractor RelEx, layered on top of the link grammar library, generates a dependency grammar output by making explicit the semantic relationships between words in a sentence. Its output can be classified as being at a level between that of SSyntR and DSyntR of Meaning-Text Theory. It also provides framing/grounding, anaphora resolution, head-word identification, lexical chunking, part-of-speech identification, and tagging, including entity, date, money, gender, etc. tagging. It includes a compatibility mode to generate dependency output compatible with the Stanford parser, and Penn Treebank-compatible POS tagging.
Link grammar has also been employed for information extraction of
biomedical texts and
events described in news articles, as well as experimental machine translation systems from English to German, Turkish, Indonesian. and Persian.
The link grammar link dictionary is used to generate and verify the syntactic correctness of three different natural language generation systems: NLGen, NLGen2 and microplanner/surreal. It is also used as a part of the NLP pipeline in the OpenCog AI project.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n{\\text{the} \\atop \\text{NP/N,}}\n{\\text{bad} \\atop \\text{N/N,}}\n{\\text{boy} \\atop \\text{N}}\n"
}
] |
https://en.wikipedia.org/wiki?curid=580167
|
58019333
|
Aline Bonami
|
French mathematician
Aline Bonami (née Nivat) is a French mathematician known for her expertise in mathematical analysis. She is a professor emeritus at the University of Orléans, and was president of the Société mathématique de France for 2012–2013.
Education and career.
Bonami was a student at the École normale supérieure de jeunes filles from 1963 to 1967, when she became a researcher at the Centre national de la recherche scientifique (CNRS). In 1970, she completed a doctorate at the University of Paris-Sud, under the supervision of Yves Meyer; her dissertation was "Etude des coefficients de Fourier des fonctions de formula_0". She joined the University of Orléans in 1973 and retired as a professor emeritus in 2006.
Awards and honors.
The French Academy of Sciences gave Bonami their Prix Petit d'Ormoy, Carrière, Thébault in 2001, for her results on
Bergman and Szegő projections, on Hankel operators with several complex variables, and on inequalities for hypercontractivity.
The University of Gothenburg gave her an honorary doctorate in 2002. A conference on harmonic analysis was held in her honor in Orléans in 2014. She was awarded the 2020 Stefan Bergman Prize by the American Mathematical Society "for her highly influential contributions to several complex variables and analytic spaces. She is being especially recognized for her fundamental work on the Bergman and Szegő projections and their corresponding spaces of holomorphic functions."
Personal.
Bonami is the sister of a specialist of Russian literature and history, and of French computer scientist Maurice Nivat.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L^p(G)"
}
] |
https://en.wikipedia.org/wiki?curid=58019333
|
58019382
|
96 equal temperament
|
Musical scale with a 96-step octave
In music, 96 equal temperament, called 96-TET, 96-EDO ("Equal Division of the Octave"), or 96-ET, is the tempered scale derived by dividing the octave into 96 equal steps (equal frequency ratios). Each step represents a frequency ratio of formula_0, or 12.5 cents. Since 96 factors into 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, and 96, it contains all of those temperaments. Most humans can only hear differences of 6 cents on notes that are played sequentially, and this amount varies according to the pitch, so the use of finer divisions of the octave can be considered unnecessary. Smaller differences in pitch may be considered vibrato or stylistic devices.
History and use.
96-EDO was first advocated by Julián Carrillo in 1924, with a 16th-tone piano. It was also advocated more recently by Pascale Criton and Vincent-Olivier Gagnon.
Notation.
Since 96 = 24 × 4, quarter-tone notation can be used and split into four parts.
One can split it into four parts like this:
C, C↑, C↑↑/C↓↓, C↓, C, ..., C↓, C
As it can become confusing with so many accidentals, Julián Carrillo proposed referring to notes by step number from C (e.g. 0, 1, 2, 3, 4, ..., 95, 0)
Since the 16th-tone piano has a 97-key layout arranged in 8 conventional piano "octaves", music for it is usually notated according to the key the player has to strike. While the entire range of the instrument is only C4–C5, the notation ranges from C0 to C8. Thus, written D0 corresponds to sounding C↑↑4 or note 2, and written A♭/G♯2 corresponds to sounding E4 or note 32.
Interval size.
Below are some intervals in 96-EDO and how well they approximate just intonation.
Moving from 12-EDO to 96-EDO allows the better approximation of a number of intervals, such as the minor third and major sixth.
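A quick numerical check of this claim, assuming the just ratios 6/5 for the minor third and 5/3 for the major sixth:

```python
from math import log2

def edo_error_cents(ratio, divisions):
    """Signed error (in cents) of the closest EDO step to a just-intonation ratio."""
    cents = 1200 * log2(ratio)
    step = 1200 / divisions
    return round(cents / step) * step - cents

for name, ratio in [("minor third 6/5", 6 / 5), ("major sixth 5/3", 5 / 3)]:
    print(f"{name}: 12-EDO {edo_error_cents(ratio, 12):+.2f} c, "
          f"96-EDO {edo_error_cents(ratio, 96):+.2f} c")
# 96-EDO lands within about 3 cents of both intervals, versus about 16 cents in 12-EDO
```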
Scale diagram.
Modes.
96-EDO contains all of the 12-EDO modes. However, it contains better approximations to some intervals (such as the minor third).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sqrt[96]{2}"
}
] |
https://en.wikipedia.org/wiki?curid=58019382
|
580238
|
Curve of constant width
|
Shape with width independent of orientation
In geometry, a curve of constant width is a simple closed curve in the plane whose width (the distance between parallel supporting lines) is the same in all directions. The shape bounded by a curve of constant width is a body of constant width or an orbiform, the name given to these shapes by Leonhard Euler. Standard examples are the circle and the Reuleaux triangle. These curves can also be constructed using circular arcs centered at crossings of an arrangement of lines, as the involutes of certain curves, or by intersecting circles centered on a partial curve.
Every body of constant width is a convex set, its boundary crossed at most twice by any line, and if the line crosses perpendicularly it does so at both crossings, separated by the width. By Barbier's theorem, the body's perimeter is exactly π times its width, but its area depends on its shape, with the Reuleaux triangle having the smallest possible area for its width and the circle the largest. Every superset of a body of constant width includes pairs of points that are farther apart than the width, and every curve of constant width includes at least six points of extreme curvature. Although the Reuleaux triangle is not smooth, curves of constant width can always be approximated arbitrarily closely by smooth curves of the same constant width.
Cylinders with constant-width cross-section can be used as rollers to support a level surface. Another application of curves of constant width is for coinage shapes, where regular Reuleaux polygons are a common choice. The possibility that curves other than circles can have constant width makes it more complicated to check the roundness of an object.
Curves of constant width have been generalized in several ways to higher dimensions and to non-Euclidean geometry.
Definitions.
Width, and constant width, are defined in terms of the supporting lines of curves; these are lines that touch a curve without crossing it.
Every compact curve in the plane has two supporting lines in any given direction, with the curve sandwiched between them. The Euclidean distance between these two lines is the "width" of the curve in that direction, and a curve has constant width if this distance is the same for all directions of lines. The width of a bounded convex set can be defined in the same way as for curves, by the distance between pairs of parallel lines that touch the set without crossing it, and a convex set is a body of constant width when this distance is nonzero and does not depend on the direction of the lines. Every body of constant width has a curve of constant width as its boundary, and every curve of constant width has a body of constant width as its convex hull.
Another equivalent way to define the width of a compact curve or of a convex set is by looking at its orthogonal projection onto a line. In both cases, the projection is a line segment, whose length equals the distance between support lines that are perpendicular to the line. So, a curve or a convex set has constant width when all of its orthogonal projections have the same length.
Examples.
Circles have constant width, equal to their diameter. On the other hand, squares do not: supporting lines parallel to two opposite sides of the square are closer together than supporting lines parallel to a diagonal. More generally, no polygon can have constant width. However, there are other shapes of constant width. A standard example is the Reuleaux triangle, the intersection of three circles, each centered where the other two circles cross. Its boundary curve consists of three arcs of these circles, meeting at 120° angles, so it is not smooth, and in fact these angles are the sharpest possible for any curve of constant width.
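A minimal numerical check of the square example: for a convex polygon the supporting-line separation in a given direction is just the spread of the vertex projections onto that direction, so the width of the unit square can be computed directly. The function name is illustrative.

```python
import math

def polygon_width(vertices, theta):
    """Distance between the two supporting lines of a convex polygon that are
    perpendicular to the direction (cos theta, sin theta)."""
    ux, uy = math.cos(theta), math.sin(theta)
    projections = [x * ux + y * uy for x, y in vertices]
    return max(projections) - min(projections)

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polygon_width(unit_square, 0.0))           # 1.0 (supporting lines parallel to two opposite sides)
print(polygon_width(unit_square, math.pi / 4))   # ~1.414 (supporting lines parallel to a diagonal)
```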
Other curves of constant width can be smooth but non-circular, not even having any circular arcs in their boundary.
For instance, the zero set of the polynomial below forms a non-circular smooth algebraic curve of constant width:
formula_0
Its degree, eight, is the minimum possible degree for a polynomial that defines a non-circular curve of constant width.
Constructions.
Every regular polygon with an odd number of sides gives rise to a curve of constant width, a Reuleaux polygon, formed from circular arcs centered at its vertices that pass through the two vertices farthest from the center. For instance, this construction generates a Reuleaux triangle from an equilateral triangle. Some irregular polygons also generate Reuleaux polygons. In a closely related construction, called by Martin Gardner the "crossed-lines method", an arrangement of lines in the plane (no two parallel but otherwise arbitrary) is sorted into cyclic order by the slopes of the lines. The lines are then connected by a curve formed from a sequence of circular arcs; each arc connects two consecutive lines in the sorted order, and is centered at their crossing. The radius of the first arc must be chosen large enough to cause all successive arcs to end on the correct side of the next crossing point; however, all sufficiently-large radii work. For two lines, this forms a circle; for three lines on the sides of an equilateral triangle, with the minimum possible radius, it forms a Reuleaux triangle, and for the lines of a regular star polygon it can form a Reuleaux polygon.
Leonhard Euler constructed curves of constant width from involutes of curves with an odd number of cusp singularities, having only one tangent line in each direction (that is, projective hedgehogs). An intuitive way to describe the involute construction is to roll a line segment around such a curve, keeping it tangent to the curve without sliding along it, until it returns to its starting point of tangency. The line segment must be long enough to reach the cusp points of the curve, so that it can roll past each cusp to the next part of the curve, and its starting position should be carefully chosen so that at the end of the rolling process it is in the same position it started from. When that happens, the curve traced out by the endpoints of the line segment is an involute that encloses the given curve without crossing it, with constant width equal to the length of the line segment. If the starting curve is smooth (except at the cusps), the resulting curve of constant width will also be smooth. An example of a starting curve with the correct properties for this construction is the deltoid curve, and the involutes of the deltoid that enclose it form smooth curves of constant width, not containing any circular arcs.
Another construction chooses half of the curve of constant width, meeting certain requirements, and forms from it a body of constant width having the given curve as part of its boundary. The construction begins with a convex curved arc, whose endpoints are the intended width formula_1 apart. The two endpoints must touch parallel supporting lines at distance formula_1 from each other. Additionally, each supporting line that touches another point of the arc must be tangent at that point to a circle of radius formula_1 containing the entire arc; this requirement prevents the curvature of the arc from being less than that of the circle. The completed body of constant width is then the intersection of the interiors of an infinite family of circles, of two types: the ones tangent to the supporting lines, and more circles of the same radius centered at each point of the given arc. This construction is universal: all curves of constant width may be constructed in this way. Victor Puiseux, a 19th-century French mathematician, found curves of constant width containing elliptical arcs that can be constructed in this way from a semi-ellipse. To meet the curvature condition, the semi-ellipse should be bounded by the semi-major axis of its ellipse, and the ellipse should have eccentricity at most formula_2. Equivalently, the semi-major axis should be at most twice the semi-minor axis.
Given any two bodies of constant width, their Minkowski sum forms another body of constant width. A generalization of Minkowski sums to the sums of support functions of hedgehogs produces a curve of constant width from the sum of a projective hedgehog and a circle, whenever the result is a convex curve. All curves of constant width can be decomposed into a sum of hedgehogs in this way.
Properties.
A curve of constant width can rotate between two parallel lines separated by its width, while at all times touching those lines, which act as supporting lines for the rotated curve. In the same way, a curve of constant width can rotate within a rhombus or square, whose pairs of opposite sides are separated by the width and lie on parallel support lines. Not every curve of constant width can rotate within a regular hexagon in the same way, because its supporting lines may form different irregular hexagons for different rotations rather than always forming a regular one. However, every curve of constant width can be enclosed by at least one regular hexagon with opposite sides on parallel supporting lines.
A curve has constant width if and only if, for every pair of parallel supporting lines, it touches those two lines at points whose distance equals the separation between the lines. In particular, this implies that it can only touch each supporting line at a single point. Equivalently, every line that crosses the curve perpendicularly crosses it at exactly two points of distance equal to the width. Therefore, a curve of constant width must be convex, since every non-convex simple closed curve has a supporting line that touches it at two or more points. Curves of constant width are examples of self-parallel or auto-parallel curves, curves traced by both endpoints of a line segment that moves in such a way that both endpoints move perpendicularly to the line segment. However, there exist other self-parallel curves, such as the infinite spiral formed by the involute of a circle, that do not have constant width.
Barbier's theorem asserts that the perimeter of any curve of constant width is equal to the width multiplied by formula_3. As a special case, this formula agrees with the standard formula formula_4 for the perimeter of a circle given its diameter. By the isoperimetric inequality and Barbier's theorem, the circle has the maximum area of any curve of given constant width. The Blaschke–Lebesgue theorem says that the Reuleaux triangle has the least area of any convex curve of given constant width. Every proper superset of a body of constant width has strictly greater diameter, and every Euclidean set with this property is a body of constant width. In particular, it is not possible for one body of constant width to be a subset of a different body with the same constant width. Every curve of constant width can be approximated arbitrarily closely by a piecewise circular curve or by an analytic curve of the same constant width.
A vertex of a smooth curve is a point where its curvature is a local maximum or minimum; for a circular arc, all points are vertices, but non-circular curves may have a finite discrete set of vertices. For a curve that is not smooth, the points where it is not smooth can also be considered as vertices, of infinite curvature. For a curve of constant width, each vertex of locally minimum curvature is paired with a vertex of locally maximum curvature, opposite it on a diameter of the curve, and there must be at least six vertices. This stands in contrast to the four-vertex theorem, according to which every simple closed smooth curve in the plane has at least four vertices. Some curves, such as ellipses, have exactly four vertices, but this is not possible for a curve of constant width. Because local minima of curvature are opposite local maxima of curvature, the only curves of constant width with central symmetry are the circles, for which the curvature is the same at all points. For every curve of constant width, the minimum enclosing circle of the curve and the largest circle that it contains are concentric, and the average of their diameters is the width of the curve. These two circles together touch the curve in at least three pairs of opposite points, but these points are not necessarily vertices.
A convex body has constant width if and only if the Minkowski sum of the body and its 180° rotation is a circular disk; if so, the width of the body is the radius of the disk.
Applications.
Because of the ability of curves of constant width to roll between parallel lines, any cylinder with a curve of constant width as its cross-section can act as a "roller", supporting a level plane and keeping it flat as it rolls along any level surface. However, the center of the roller moves up and down as it rolls, so this construction would not work for wheels in this shape attached to fixed axles.
Some coinage shapes are non-circular bodies of constant width. For instance the British 20p and 50p coins are Reuleaux heptagons, and the Canadian loonie is a Reuleaux 11-gon. These shapes allow automated coin machines to recognize these coins from their widths, regardless of the orientation of the coin in the machine. On the other hand, testing the width is inadequate to determine the roundness of an object, because such tests cannot distinguish circles from other curves of constant width. Overlooking this fact may have played a role in the Space Shuttle Challenger disaster, as the roundness of sections of the rocket in that launch was tested only by measuring widths, and off-round shapes may cause unusually high stresses that could have been one of the factors causing the disaster.
Generalizations.
The curves of constant width can be generalized to certain non-convex curves, the curves that have two tangent lines in each direction, with the same separation between these two lines regardless of their direction. As a limiting case, the projective hedgehogs (curves with one tangent line in each direction) have also been called "curves of zero width".
One way to generalize these concepts to three dimensions is through the surfaces of constant width. The three-dimensional analog of a Reuleaux triangle, the Reuleaux tetrahedron, does not have constant width, but minor changes to it produce the Meissner bodies, which do. The curves of constant width may also be generalized to the bodies of constant brightness, three-dimensional shapes whose two-dimensional projections all have equal area; these shapes obey a generalization of Barbier's theorem. A different class of three-dimensional generalizations, the space curves of constant width, are defined by the properties that each plane that crosses the curve perpendicularly intersects it at exactly one other point, where it is also perpendicular, and that all pairs of points intersected by perpendicular planes are the same distance apart.
Curves and bodies of constant width have also been studied in non-Euclidean geometry and for non-Euclidean normed vector spaces.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\nf(x,y)={}&(x^2 + y^2)^4 - 45(x^2 + y^2)^3 - 41283(x^2 + y^2)^2\\\\\n& + 7950960(x^2 + y^2) + 16(x^2 - 3y^2)^3 +48(x^2 + y^2)(x^2 - 3y^2)^2\\\\\n&+ x(x^2 - 3y^2)\\left(16(x^2 + y^2)^2 - 5544(x^2 + y^2) + 266382\\right) - 720^3.\n\\end{align}"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": "\\tfrac{1}{2}\\sqrt{3}"
},
{
"math_id": 3,
"text": "\\pi"
},
{
"math_id": 4,
"text": "\\pi d"
}
] |
https://en.wikipedia.org/wiki?curid=580238
|
580252
|
Reuleaux triangle
|
Curved triangle with constant width
A Reuleaux triangle is a curved triangle with constant width, the simplest and best known curve of constant width other than the circle. It is formed from the intersection of three circular disks, each having its center on the boundary of the other two. Constant width means that the separation of every two parallel supporting lines is the same, independent of their orientation. Because its width is constant, the Reuleaux triangle is one answer to the question "Other than a circle, what shape can a manhole cover be made so that it cannot fall down through the hole?"
They are named after Franz Reuleaux, a 19th-century German engineer who pioneered the study of machines for translating one type of motion into another, and who used Reuleaux triangles in his designs. However, these shapes were known before his time, for instance by the designers of Gothic church windows, by Leonardo da Vinci, who used it for a map projection, and by Leonhard Euler in his study of constant-width shapes. Other applications of the Reuleaux triangle include giving the shape to guitar picks, fire hydrant nuts, pencils, and drill bits for drilling filleted square holes, as well as in graphic design in the shapes of some signs and corporate logos.
Among constant-width shapes with a given width, the Reuleaux triangle has the minimum area and the sharpest (smallest) possible angle (120°) at its corners. By several numerical measures it is the farthest from being centrally symmetric. It provides the largest constant-width shape avoiding the points of an integer lattice, and is closely related to the shape of the quadrilateral maximizing the ratio of perimeter to diameter. It can perform a complete rotation within a square while at all times touching all four sides of the square, and has the smallest possible area of shapes with this property. However, although it covers most of the square in this rotation process, it fails to cover a small fraction of the square's area, near its corners. Because of this property of rotating within a square, the Reuleaux triangle is also sometimes known as the Reuleaux rotor.
The Reuleaux triangle is the first of a sequence of Reuleaux polygons whose boundaries are curves of constant width formed from regular polygons with an odd number of sides. Some of these curves have been used as the shapes of coins. The Reuleaux triangle can also be generalized into three dimensions in multiple ways: the Reuleaux tetrahedron (the intersection of four balls whose centers lie on a regular tetrahedron) does not have constant width, but can be modified by rounding its edges to form the Meissner tetrahedron, which does. Alternatively, the surface of revolution of the Reuleaux triangle also has constant width.
Construction.
The Reuleaux triangle may be constructed either directly from three circles, or by rounding the sides of an equilateral triangle.
The three-circle construction may be performed with a compass alone, not even needing a straightedge. By the Mohr–Mascheroni theorem
the same is true more generally of any compass-and-straightedge construction, but the construction for the Reuleaux triangle is particularly simple.
The first step is to mark two arbitrary points of the plane (which will eventually become vertices of the triangle), and use the compass to draw a circle centered at one of the marked points, through the other marked point. Next, one draws a second circle, of the same radius, centered at the other marked point and passing through the first marked point.
Finally, one draws a third circle, again of the same radius, with its center at one of the two crossing points of the two previous circles, passing through both marked points. The central region in the resulting arrangement of three circles will be a Reuleaux triangle.
Alternatively, a Reuleaux triangle may be constructed from an equilateral triangle "T" by drawing three arcs of circles, each centered at one vertex of "T" and connecting the other two vertices.
Or, equivalently, it may be constructed as the intersection of three disks centered at the vertices of "T", with radius equal to the side length of "T".
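The construction above can also be checked numerically. The sketch below samples the three arcs of a Reuleaux triangle of width 1 (the arc parametrisation and sampling density are assumptions of this sketch) and measures the supporting-line separation in many directions; every measured width comes out very close to 1.

```python
import math

s = 1.0                                   # width (= side of the equilateral triangle)
r = s / math.sqrt(3)                      # circumradius of the triangle
V = [(r * math.cos(a), r * math.sin(a))   # the three vertices
     for a in (math.pi / 2, math.pi / 2 + 2 * math.pi / 3, math.pi / 2 + 4 * math.pi / 3)]

def boundary_points(n=600):
    """Sample the three circular arcs; the arc opposite vertex i is centered at V[i]."""
    pts = []
    for i in range(3):
        cx, cy = V[i]
        a1 = math.atan2(V[(i + 1) % 3][1] - cy, V[(i + 1) % 3][0] - cx)
        a2 = math.atan2(V[(i + 2) % 3][1] - cy, V[(i + 2) % 3][0] - cx)
        da = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi   # the short (60 degree) sweep
        for k in range(n):
            a = a1 + da * k / (n - 1)
            pts.append((cx + s * math.cos(a), cy + s * math.sin(a)))
    return pts

pts = boundary_points()
widths = []
for j in range(360):
    ux, uy = math.cos(math.radians(j)), math.sin(math.radians(j))
    proj = [x * ux + y * uy for x, y in pts]
    widths.append(max(proj) - min(proj))
print(min(widths), max(widths))   # both very close to 1.0
```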
Mathematical properties.
The most basic property of the Reuleaux triangle is that it has constant width, meaning that for every pair of parallel supporting lines (two lines of the same slope that both touch the shape without crossing through it) the two lines have the same Euclidean distance from each other, regardless of the orientation of these lines. In any pair of parallel supporting lines, one of the two lines will necessarily touch the triangle at one of its vertices. The other supporting line may touch the triangle at any point on the opposite arc, and their distance (the width of the Reuleaux triangle) equals the radius of this arc.
The first mathematician to discover the existence of curves of constant width, and to observe that the Reuleaux triangle has constant width, may have been Leonhard Euler. In a paper that he presented in 1771 and published in 1781 entitled "De curvis triangularibus", Euler studied curvilinear triangles as well as the curves of constant width, which he called orbiforms.
Extremal measures.
By many different measures, the Reuleaux triangle is one of the most extreme curves of constant width.
By the Blaschke–Lebesgue theorem, the Reuleaux triangle has the smallest possible area of any curve of given constant width. This area is
formula_0
where "s" is the constant width. One method for deriving this area formula is to partition the Reuleaux triangle into an inner equilateral triangle and three curvilinear regions between this inner triangle and the arcs forming the Reuleaux triangle, and then add the areas of these four sets. At the other extreme, the curve of constant width that has the maximum possible area is a circular disk, which has area formula_1.
The angles made by each pair of arcs at the corners of a Reuleaux triangle are all equal to 120°. This is the sharpest possible angle at any vertex of any curve of constant width. Additionally, among the curves of constant width, the Reuleaux triangle is the one with both the largest and the smallest inscribed equilateral triangles. The largest equilateral triangle inscribed in a Reuleaux triangle is the one connecting its three corners, and the smallest one is the one connecting the three midpoints of its sides. The subset of the Reuleaux triangle consisting of points belonging to three or more diameters is the interior of the larger of these two triangles; it has a larger area than the set of three-diameter points of any other curve of constant width.
Although the Reuleaux triangle has sixfold dihedral symmetry, the same as an equilateral triangle, it does not have central symmetry.
The Reuleaux triangle is the least symmetric curve of constant width according to two different measures of central asymmetry, the Kovner–Besicovitch measure (ratio of area to the largest centrally symmetric shape enclosed by the curve) and the Estermann measure (ratio of area to the smallest centrally symmetric shape enclosing the curve). For the Reuleaux triangle, the two centrally symmetric shapes that determine the measures of asymmetry are both hexagonal, although the inner one has curved sides. The Reuleaux triangle has diameters that split its area more unevenly than any other curve of constant width. That is, the maximum ratio of areas on either side of a diameter, another measure of asymmetry, is bigger for the Reuleaux triangle than for other curves of constant width.
Among all shapes of constant width that avoid all points of an integer lattice, the one with the largest width is a Reuleaux triangle. It has one of its axes of symmetry parallel to the coordinate axes on a half-integer line. Its width, approximately 1.54, is the root of a degree-6 polynomial with integer coefficients.
Just as it is possible for a circle to be surrounded by six congruent circles that touch it, it is also possible to arrange seven congruent Reuleaux triangles so that they all make contact with a central Reuleaux triangle of the same size. This is the maximum number possible for any curve of constant width.
Among all quadrilaterals, the shape that has the greatest ratio of its perimeter to its diameter is an equidiagonal kite that can be inscribed into a Reuleaux triangle.
Other measures.
By Barbier's theorem all curves of the same constant width including the Reuleaux triangle have equal perimeters. In particular this perimeter equals the perimeter of the circle with the same width, which is formula_2.
The radii of the largest inscribed circle of a Reuleaux triangle with width "s", and of the circumscribed circle of the same triangle, are
formula_3
respectively; the sum of these radii equals the width of the Reuleaux triangle. More generally, for every curve of constant width, the largest inscribed circle and the smallest circumscribed circle are concentric, and their radii sum to the constant width of the curve.
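Numerically, for width s = 1 these radii evaluate as follows; as stated above, they sum to the width.

```python
import math

s = 1.0
inradius = (1 - 1 / math.sqrt(3)) * s    # ~0.423 s
circumradius = s / math.sqrt(3)          # ~0.577 s
print(inradius, circumradius, inradius + circumradius)   # the radii sum to the width s
```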
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
How densely can Reuleaux triangles be packed in the plane?
The optimal packing density of the Reuleaux triangle in the plane remains unproven, but is conjectured to be
formula_4
which is the density of one possible double lattice packing for these shapes. The best proven upper bound on the packing density is approximately 0.947. It has also been conjectured, but not proven, that the Reuleaux triangles have the highest packing density of any curve of constant width.
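The conjectured density can be evaluated directly:

```python
import math

conjectured_density = (2 * (math.pi - math.sqrt(3))
                       / (math.sqrt(15) + math.sqrt(7) - math.sqrt(12)))
print(conjectured_density)   # ~0.923
```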
Rotation within a square.
Any curve of constant width can form a rotor within a square, a shape that can perform a complete rotation while staying within the square and at all times touching all four sides of the square. However, the Reuleaux triangle is the rotor with the minimum possible area. As it rotates, its axis does not stay fixed at a single point, but instead follows a curve formed by the pieces of four ellipses. Because of its 120° angles, the rotating Reuleaux triangle cannot reach some points near the sharper angles at the square's vertices, but rather covers a shape with slightly rounded corners, also formed by elliptical arcs.
At any point during this rotation, two of the corners of the Reuleaux triangle touch two adjacent sides of the square, while the third corner of the triangle traces out a curve near the opposite vertex of the square. The shape traced out by the rotating Reuleaux triangle covers approximately 98.8% of the area of the square.
As a counterexample.
Reuleaux's original motivation for studying the Reuleaux triangle was as a counterexample, showing that three single-point contacts may not be enough to fix a planar object into a single position. The existence of Reuleaux triangles and other curves of constant width shows that diameter measurements alone cannot verify that an object has a circular cross-section.
In connection with the inscribed square problem, it has been observed that the Reuleaux triangle provides an example of a constant-width shape in which no regular polygon with more than four sides can be inscribed, except the regular hexagon, and a small modification to this shape has been described that preserves its constant width but prevents regular hexagons from being inscribed in it. This result has been generalized to three dimensions using a cylinder with the same shape as its cross section.
Applications.
Reaching into corners.
Several types of machinery take the shape of the Reuleaux triangle, based on its property of being able to rotate within a square.
The Watts Brothers Tool Works square drill bit has the shape of a Reuleaux triangle, modified with concavities to form cutting surfaces. When mounted in a special chuck which allows for the bit not having a fixed centre of rotation, it can drill a hole that is nearly square. Although patented by Henry Watts in 1914, similar drills invented by others were used earlier. Other Reuleaux polygons are used to drill pentagonal, hexagonal, and octagonal holes.
Panasonic's RULO robotic vacuum cleaner has its shape based on the Reuleaux triangle in order to ease cleaning up dust in the corners of rooms.
Rolling cylinders.
Another class of applications of the Reuleaux triangle involves cylindrical objects with a Reuleaux triangle cross section. Several pencils are manufactured in this shape, rather than the more traditional round or hexagonal barrels. They are usually promoted as being more comfortable or encouraging proper grip, as well as being less likely to roll off tables (since the center of gravity moves up and down more than it does for a rolling hexagon).
A Reuleaux triangle (along with all other curves of constant width) can roll but makes a poor wheel because it does not roll about a fixed center of rotation. An object on top of rollers that have Reuleaux triangle cross-sections would roll smoothly and flatly, but an axle attached to Reuleaux triangle wheels would bounce up and down three times per revolution. This concept was used in a science fiction short story by Poul Anderson titled "The Three-Cornered Wheel". A bicycle with floating axles and a frame supported by the rim of its Reuleaux triangle shaped wheel was built and demonstrated in 2009 by Chinese inventor Guan Baihua, who was inspired by pencils with the same shape.
Mechanism design.
Another class of applications of the Reuleaux triangle involves using it as a part of a mechanical linkage that can convert rotation around a fixed axis
into reciprocating motion. These mechanisms were studied by Franz Reuleaux. With the assistance of the Gustav Voigt company, Reuleaux built approximately 800 models of mechanisms, several of which involved the Reuleaux triangle. Reuleaux used these models in his pioneering scientific investigations of their motion. Although most of the Reuleaux–Voigt models have been lost, 219 of them have been collected at Cornell University, including nine based on the Reuleaux triangle. However, the use of Reuleaux triangles in mechanism design predates the work of Reuleaux; for instance, some steam engines from as early as 1830 had a cam in the shape of a Reuleaux triangle.
One application of this principle arises in a film projector. In this application, it is necessary to advance the film in a jerky, stepwise motion, in which each frame of film stops for a fraction of a second in front of the projector lens, and then much more quickly the film is moved to the next frame. This can be done using a mechanism in which the rotation of a Reuleaux triangle within a square is used to create a motion pattern for an actuator that pulls the film quickly to each new frame and then pauses the film's motion while the frame is projected.
The rotor of the Wankel engine is shaped as a curvilinear triangle that is often cited as an example of a Reuleaux triangle. However, its curved sides are somewhat flatter than those of a Reuleaux triangle and so it does not have constant width.
Architecture.
In Gothic architecture, beginning in the late 13th century or early 14th century, the Reuleaux triangle became one of several curvilinear forms frequently used for windows, window tracery, and other architectural decorations. For instance, in English Gothic architecture, this shape was associated with the decorated period, both in its geometric style of 1250–1290 and continuing into its curvilinear style of 1290–1350. It also appears in some of the windows of the Milan Cathedral. In this context, the shape is sometimes called a "spherical triangle", which should not be confused with spherical triangle meaning a triangle on the surface of a sphere. In its use in Gothic church architecture, the three-cornered shape of the Reuleaux triangle may be seen both as a symbol of the Trinity, and as "an act of opposition to the form of the circle".
The Reuleaux triangle has also been used in other styles of architecture. For instance, Leonardo da Vinci sketched this shape as the plan for a fortification. Modern buildings that have been claimed to use a Reuleaux triangle shaped floorplan include the MIT Kresge Auditorium, the Kölntriangle, the Donauturm, the Torre de Collserola, and the Mercedes-Benz Museum. However in many cases these are merely rounded triangles, with different geometry than the Reuleaux triangle.
Mapmaking.
Another early application of the Reuleaux triangle was da Vinci's world map from circa 1514, in which the spherical surface of the earth was divided into eight octants, each flattened into the shape of a Reuleaux triangle.
Similar maps also based on the Reuleaux triangle were published by Oronce Finé in 1551 and by John Dee in 1580.
Other objects.
Many guitar picks employ the Reuleaux triangle, as its shape combines a sharp point to provide strong articulation, with a wide tip to produce a warm timbre. Because all three points of the shape are usable, it is easier to orient and wears less quickly compared to a pick with a single tip.
The Reuleaux triangle has been used as the shape for the cross section of a fire hydrant valve nut. The constant width of this shape makes it difficult to open the fire hydrant using standard parallel-jawed wrenches; instead, a wrench with a special shape is needed. This property allows the fire hydrants to be opened only by firefighters (who have the special wrench) and not by other people trying to use the hydrant as a source of water for other activities.
Following an earlier suggestion, the antennae of the Submillimeter Array, a radio-wave astronomical observatory on Mauna Kea in Hawaii, are arranged on four nested Reuleaux triangles. Placing the antennae on a curve of constant width causes the observatory to have the same spatial resolution in all directions, and provides a circular observation beam. As the most asymmetric curve of constant width, the Reuleaux triangle leads to the most uniform coverage of the plane for the Fourier transform of the signal from the array. The antennae may be moved from one Reuleaux triangle to another for different observations, according to the desired angular resolution of each observation. The precise placement of the antennae on these Reuleaux triangles was optimized using a neural network. In some places the constructed observatory departs from the preferred Reuleaux triangle shape because that shape was not possible within the given site.
Signs and logos.
The shield shapes used for many signs and corporate logos feature rounded triangles. However, only some of these are Reuleaux triangles.
The corporate logo of Petrofina (Fina), a Belgian oil company with major operations in Europe, North America and Africa, used a Reuleaux triangle with the Fina name from 1950 until Petrofina's merger with "Total S.A." (today TotalEnergies) in 2000.
Another corporate logo framed in the Reuleaux triangle, the south-pointing compass of Bavaria Brewery, was part of a makeover by design company Total Identity that won the SAN 2010 Advertiser of the Year award. The Reuleaux triangle is also used in the logo of Colorado School of Mines.
In the United States, the National Trails System and United States Bicycle Route System both mark routes with Reuleaux triangles on signage.
In nature.
According to Plateau's laws, the circular arcs in two-dimensional soap bubble clusters meet at 120° angles, the same angle found at the corners of a Reuleaux triangle. Based on this fact, it is possible to construct clusters in which some of the bubbles take the form of a Reuleaux triangle.
The shape was first isolated in crystal form in 2014 as Reuleaux triangle disks. Basic bismuth nitrate disks with the Reuleaux triangle shape were formed from the hydrolysis and precipitation of bismuth nitrate in an ethanol–water system in the presence of 2,3-bis(2-pyridyl)pyrazine.
Generalizations.
Triangular curves of constant width with smooth rather than sharp corners may be obtained as the locus of points at a fixed distance from the Reuleaux triangle. Other generalizations of the Reuleaux triangle include surfaces in three dimensions, curves of constant width with more than three sides, and the Yanmouti sets which provide extreme examples of an inequality between width, diameter, and inradius.
Three-dimensional version.
The intersection of four balls of radius "s" centered at the vertices of a regular tetrahedron with side length "s" is called the Reuleaux tetrahedron, but its surface is not a surface of constant width. It can, however, be made into a surface of constant width, called Meissner's tetrahedron, by replacing three of its edge arcs by curved surfaces, the surfaces of rotation of a circular arc. Alternatively, the surface of revolution of a Reuleaux triangle through one of its symmetry axes forms a surface of constant width, with minimum volume among all known surfaces of revolution of given constant width.
Reuleaux polygons.
The Reuleaux triangle can be generalized to regular or irregular polygons with an odd number of sides, yielding a Reuleaux polygon, a curve of constant width formed from circular arcs of constant radius. The constant width of these shapes allows their use as coins that can be used in coin-operated machines. Although coins of this type in general circulation usually have more than three sides, a Reuleaux triangle has been used for a commemorative coin from Bermuda.
Similar methods can be used to enclose an arbitrary simple polygon within a curve of constant width, whose width equals the diameter of the given polygon. The resulting shape consists of circular arcs (at most as many as sides of the polygon), can be constructed algorithmically in linear time, and can be drawn with compass and straightedge. Although the Reuleaux polygons all have an odd number of circular-arc sides, it is possible to construct constant-width shapes with an even number of circular-arc sides of varying radii.
Yanmouti sets.
The Yanmouti sets are defined as the convex hulls of an equilateral triangle together with three circular arcs, centered at the triangle vertices and spanning the same angle as the triangle, with equal radii that are at most equal to the side length of the triangle. Thus, when the radius is small enough, these sets degenerate to the equilateral triangle itself, but when the radius is as large as possible they equal the corresponding Reuleaux triangle. Every shape with width "w", diameter "d", and inradius "r" (the radius of the largest possible circle contained in the shape) obeys the inequality
formula_5
and this inequality becomes an equality for the Yanmouti sets, showing that it cannot be improved.
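The Reuleaux triangle itself is the extreme Yanmouti set, and a quick check, using the width, diameter, and inradius values quoted earlier in this article, shows that it attains the inequality with equality:

```python
import math

s = 1.0                                # width of the Reuleaux triangle
d = s                                  # its diameter equals its width
r = (1 - 1 / math.sqrt(3)) * s         # its inradius
print(s - r, d / math.sqrt(3))         # both ~0.577, so w - r = d / sqrt(3) exactly
```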
Related figures.
In the classical presentation of a three-set Venn diagram as three overlapping circles, the central region (representing elements belonging to all three sets) takes the shape of a Reuleaux triangle. The same three circles form one of the standard drawings of the Borromean rings, three mutually linked rings that cannot, however, be realized as geometric circles. Parts of these same circles are used to form the triquetra, a figure of three overlapping semicircles (each two of which form a vesica piscis symbol) that again has a Reuleaux triangle at its center; just as the three circles of the Venn diagram may be interlaced to form the Borromean rings, the three circular arcs of the triquetra may be interlaced to form a trefoil knot.
Relatives of the Reuleaux triangle arise in the problem of finding the minimum perimeter shape that encloses a fixed amount of area and includes three specified points in the plane. For a wide range of choices of the area parameter, the optimal solution to this problem will be a curved triangle whose three sides are circular arcs with equal radii. In particular, when the three points are equidistant from each other and the area is that of the Reuleaux triangle, the Reuleaux triangle is the optimal enclosure.
Circular triangles are triangles with circular-arc edges, including the Reuleaux triangle as well as other shapes.
The deltoid curve is another type of curvilinear triangle, but one in which the curves replacing each side of an equilateral triangle are concave rather than convex. It is not composed of circular arcs, but may be formed by rolling one circle within another of three times the radius. Other planar shapes with three curved sides include the arbelos, which is formed from three semicircles with collinear endpoints, and the Bézier triangle.
The Reuleaux triangle may also be interpreted as the stereographic projection of one triangular face of a spherical tetrahedron, the Schwarz triangle of parameters formula_6 with spherical angles of measure formula_7 and sides of spherical length
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{1}{2}(\\pi - \\sqrt3)s^2 \\approx 0.705s^2,"
},
{
"math_id": 1,
"text": "\\pi s^2 / 4\\approx 0.785s^2"
},
{
"math_id": 2,
"text": "\\pi s"
},
{
"math_id": 3,
"text": "\\displaystyle\\left(1-\\frac{1}{\\sqrt 3}\\right)s\\approx 0.423s \\quad \\text{and} \\quad \\displaystyle\\frac{s}{\\sqrt 3}\\approx 0.577s"
},
{
"math_id": 4,
"text": "\\frac{2(\\pi-\\sqrt 3)}{\\sqrt{15}+\\sqrt{7}-\\sqrt{12}} \\approx 0.923, "
},
{
"math_id": 5,
"text": "w - r \\le \\frac{d}{\\sqrt 3},"
},
{
"math_id": 6,
"text": "\\tfrac32, \\tfrac32, \\tfrac32"
},
{
"math_id": 7,
"text": "120^\\circ"
}
] |
https://en.wikipedia.org/wiki?curid=580252
|
580264
|
Barbier's theorem
|
In geometry, Barbier's theorem states that every curve of constant width has perimeter π times its width, regardless of its precise shape. This theorem was first published by Joseph-Émile Barbier in 1860.
Examples.
The most familiar examples of curves of constant width are the circle and the Reuleaux triangle. For a circle, the width is the same as the diameter; a circle of width "w" has perimeter π"w". A Reuleaux triangle of width "w" consists of three arcs of circles of radius "w". Each of these arcs has central angle π/3, so the perimeter of the Reuleaux triangle of width "w" is equal to half the perimeter of a circle of radius "w" and therefore is equal to π"w". A similar analysis of other simple examples such as Reuleaux polygons gives the same answer.
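The Reuleaux triangle computation in the paragraph above can be written out numerically; each of the three arcs has radius w and central angle π/3, so:

```python
import math

w = 1.0
arc_length = w * math.pi / 3     # one arc: radius w, central angle pi/3
perimeter = 3 * arc_length
print(perimeter, math.pi * w)    # both equal pi * w, as Barbier's theorem states
```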
Proofs.
One proof of the theorem uses the properties of Minkowski sums. If "K" is a body of constant width "w", then the Minkowski sum of "K" and its 180° rotation is a disk with radius "w" and perimeter 2π"w". The Minkowski sum acts linearly on the perimeters of convex bodies, so the perimeter of "K" must be half the perimeter of this disk, which is π"w" as the theorem states.
Alternatively, the theorem follows immediately from the Crofton formula in integral geometry according to which the length of any curve equals the measure of the set of lines that cross the curve, multiplied by their numbers of crossings. Any two curves that have the same constant width are crossed by sets of lines with the same measure, and therefore they have the same length. Historically, Crofton derived his formula later than, and independently of, Barbier's theorem.
An elementary probabilistic proof of the theorem can be found at Buffon's noodle.
Higher dimensions.
The analogue of Barbier's theorem for surfaces of constant width is false. In particular, the unit sphere has surface area formula_0, while the surface of revolution of a Reuleaux triangle with the same constant width has surface area formula_1.
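Evaluating the two surface areas confirms that they differ, so no single constant relates surface area to width in three dimensions:

```python
import math

sphere_area = 4 * math.pi                                       # unit sphere, ~12.566
reuleaux_revolution_area = 8 * math.pi - 4 * math.pi ** 2 / 3   # same constant width, ~11.973
print(sphere_area, reuleaux_revolution_area)
```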
Instead, Barbier's theorem generalizes to bodies of constant brightness, three-dimensional convex sets for which every two-dimensional projection has the same area. These all have the same surface area as a sphere of the same projected area.
And in general, if formula_2 is a convex subset of formula_3, for which every ("n"−1)-dimensional projection has area of the unit ball in formula_4, then the surface area of formula_2 is equal to that of the unit sphere in formula_3. This follows from the general form of Crofton formula.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "4\\pi\\approx 12.566"
},
{
"math_id": 1,
"text": "8\\pi-\\tfrac{4}{3}\\pi^2\\approx 11.973"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "\\R^{n}"
},
{
"math_id": 4,
"text": "\\R^{n-1}"
}
] |
https://en.wikipedia.org/wiki?curid=580264
|
5802667
|
Ribonucleoside-triphosphate reductase
|
Ribonucleoside-triphosphate reductase (EC 1.17.4.2, "ribonucleotide reductase", "2'-deoxyribonucleoside-triphosphate:oxidized-thioredoxin 2'-oxidoreductase") is an enzyme with systematic name "2'-deoxyribonucleoside-triphosphate:thioredoxin-disulfide 2'-oxidoreductase". This enzyme catalyses the following chemical reaction
2'-deoxyribonucleoside triphosphate + thioredoxin disulfide + H2O formula_0 ribonucleoside triphosphate + thioredoxin
Ribonucleoside-triphosphate reductase requires a cobamide coenzyme and ATP.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=5802667
|
58030745
|
Ionic Coulomb blockade
|
Electrostatic phenomenon
Ionic Coulomb blockade (ICB) is an electrostatic phenomenon predicted by M. Krems and Massimiliano Di Ventra (UC San Diego) that appears in ionic transport through mesoscopic electro-diffusive systems (artificial nanopores and biological ion channels) and manifests itself as oscillatory dependences of the conductance on the fixed charge formula_0 in the pore (or on the external voltage formula_1, or on the bulk concentration formula_2).
ICB represents an ion-related counterpart of the better-known electronic Coulomb blockade (ECB) that is observed in quantum dots. Both ICB and ECB arise from quantisation of the electric charge and from an electrostatic exclusion principle and they share in common a number of effects and underlying physical mechanisms. ICB provides some specific effects related to the existence of ions of different charge formula_3 (different in both sign and value) where integer formula_4 is ion valence and formula_5 is the elementary charge, in contrast to the single-valence electrons of ECB (formula_6).
ICB effects appear in tiny pores whose self-capacitance formula_7 is so small that the charging energy of a single ion formula_8 becomes large compared to the thermal energy per particle (formula_9). In such cases there is strong quantisation of the energy spectrum inside the pore, and the system may either be “blockaded” against the transportation of ions or, in the opposite extreme, it may show resonant barrier-less conduction, depending on the free energy bias coming from formula_0, formula_1, or formula_10.
The ICB model claims that formula_0 is a primary determinant of conduction and selectivity for particular ions, and the predicted oscillations in conductance and an associated Coulomb staircase of channel occupancy "vs" formula_0 are expected to be strong effects in the cases of divalent ions (formula_11) or trivalent ions (formula_12).
Some effects, now recognised as belonging to ICB, were discovered and considered earlier in precursor papers on electrostatics-governed conduction mechanisms in channels and nanopores.
Manifestations of ICB have been observed experimentally in water-filled sub-nanometre pores through a 2D <chem>MoS2</chem> monolayer, have been revealed by Brownian dynamics (BD) simulations of calcium conductance bands in narrow channels, and account for a diversity of effects seen in biological ion channels. ICB predictions have also been confirmed by a mutation study of divalent blockade in the NaChBac bacterial channel.
Model.
Generic electrostatic model of channel/nanopore.
ICB effects may be derived on the basis of a simplified electrostatics/Brownian dynamics model of a nanopore or of the selectivity filter of an ion channel. The model represents the channel/pore as a charged hole through a water-filled protein hub embedded in the membrane. Its fixed charge formula_0 is considered as a uniform, centrally placed, rigid ring (Fig.1). The channel is assumed to have geometrical parameters length formula_13 nm and radius formula_14 nm, allowing for the single-file movement of partially hydrated ions.
The model represents the water and protein as continuous media with dielectric constants formula_15 and formula_16 respectively. The mobile ions are described as discrete entities with valence formula_4 and of radius formula_17, moving stochastically through the pore, governed by the self-consistently coupled Poisson's electrostatic equation and Langevin stochastic equation.
The model is applicable to both cationic and anionic biological ion channels and to artificial nanopores.
Electrostatics.
The mobile ion is assumed to be partially hydrated (typically retaining its first hydration shell) and carrying charge formula_3 where formula_5 is the elementary charge (e.g. the formula_18 ion with formula_11). The model allows one to derive the pore and ion parameters satisfying the barrier-less permeation conditions, and to do so from basic electrostatics taking account of charge quantisation.
The potential energy formula_19 of a channel/pore containing formula_20 ions can be decomposed into electrostatic energy formula_21, dehydration energy formula_22, and ion-ion local interaction energy formula_23: formula_24
The basic ICB model makes the simplifying approximation that formula_25, whence: formula_26 where formula_27 is the net charge of the pore when it contains formula_28 identical ions of valence formula_4 (the sign of the moving ions being opposite to that of formula_0), formula_29 represents the electrostatic self-capacitance of the pore, and formula_30 is the electric permittivity of the vacuum.
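As a rough order-of-magnitude sketch (the specific numbers are illustrative assumptions, using the typical geometry quoted above), the self-capacitance and the charging energy of a divalent ion can be estimated as follows; the result is a charging energy of a few tens of kT, consistent with the strong-blockade regime discussed below.

```python
import math

eps0 = 8.854e-12          # vacuum permittivity, F/m
eps_w = 80                # relative permittivity of water
R, L = 0.3e-9, 1.0e-9     # pore radius and length, m (typical values from the text)
z, e = 2, 1.602e-19       # divalent ion, elementary charge in coulombs
kT = 1.381e-23 * 300      # thermal energy at about 300 K, in joules

C_s = 4 * math.pi * eps0 * eps_w * R ** 2 / L   # self-capacitance of the pore
dE = (z * e) ** 2 / (2 * C_s)                   # single-ion charging energy
print(C_s, dE / kT)   # dE comes out at roughly 15-20 kT, i.e. the strong-blockade regime
```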
Resonant barrier-less conduction.
Thermodynamics and statistical mechanics describe systems that have variable numbers of particles via the chemical potential formula_33, defined as Gibbs free energy formula_34 per particle: formula_35, where formula_36 is the Gibbs free energy for the system of formula_28 particles. In thermal and particle equilibrium with bulk reservoirs, the entire system has a common value of chemical potential formula_37 (the Fermi level in other contexts). The free energy needed for the entry of a new ion to the channel is defined by the excess chemical potential formula_38 which (ignoring an entropy term) can be written as formula_39 where formula_32 is the charging energy (self-energy barrier) of an incoming ion and formula_40 is its affinity (i.e. energy of attraction to the binding site formula_0). The difference in energy between formula_32 and formula_41 (Fig.2) defines the ionic energy level separation (Coulomb gap) and gives rise to most of the observed ICB effects.
In selective ion channels, the favoured ionic species passes through the channel almost at the rate of free diffusion, despite the strong affinity to the binding site. This conductivity-selectivity paradox has been explained as being a consequence of selective barrier-less conduction. In the ICB model, this occurs when formula_42 is almost exactly balanced by formula_40 (formula_43), which happens for a particular value of formula_0 (Fig.2). This resonant value of formula_0 depends on the ionic properties formula_4 and formula_17 (implicitly, via the formula_17-dependent dehydration energy), thereby providing a basis for selectivity.
Oscillations of conductance.
The ICB model explicitly predicts an oscillatory dependence of conduction on formula_0, with two interlaced sets of singularities associated with a sequentially increasing number of ions formula_44 in the channel (Fig.3A).
Electrostatic blockade points formula_45 correspond to minima in the ground state energy of the pore (Fig.3C): formula_46 The formula_45 points (formula_47) are equivalent to neutralisation points where formula_48.
Resonant conduction points formula_49 correspond to the barrier-less condition: formula_50, or formula_51.
The values of formula_52 are given by the simple formulae formula_53 i.e. the period of the conductance oscillations in formula_0 is formula_54.
For formula_11, in a typical ion channel geometry, formula_55, and ICB becomes strong. Consequently, plots of the BD-simulated <chem>Ca^2+</chem> current formula_56 "vs" formula_0 exhibit multi-ion conduction bands (strong Coulomb blockade oscillations) between minima formula_57 and maxima formula_58 (Fig.3A).
The point formula_59 corresponds to an uncharged pore with formula_60. Such pores are blockaded for ions of either sign.
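A tiny sketch of these two interlaced sets of points for a divalent ion, working in units of the elementary charge (the helper names are illustrative):

```python
# Illustrative helpers; charges are in units of the elementary charge e.
def Z(n, z):
    """Electrostatic blockade (neutralisation) points Z_n = -z e n."""
    return -z * n

def M(n, z):
    """Resonant barrier-less conduction points M_n = -z e (n + 1/2)."""
    return -z * (n + 0.5)

print([Z(n, 2) for n in range(3)])   # [0, -2, -4] for Ca2+ (z = 2)
print([M(n, 2) for n in range(3)])   # [-1.0, -3.0, -5.0]; the period is |z e| = 2 e
```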
Coulomb staircase.
The ICB oscillations in conductance correspond to a "Coulomb staircase" in the pore occupancy formula_61, with transition regions corresponding to formula_58 and saturation regions corresponding to formula_57 (Fig.3B). The shape of the staircase is described by the Fermi-Dirac (FD) distribution, similarly to the Coulomb staircases of quantum dots. Thus, for the formula_62 transition, the FD function is: formula_63 Here formula_31 is the excess chemical potential for the particular ion and formula_64 is an equivalent bulk occupancy related to pore volume. The saturated FD statistics of occupancy is equivalent to the Langmuir isotherm or to Michaelis–Menten kinetics.
It is the factor formula_65 that gives rise to the concentration-related shift in the staircase seen in Fig.3B.
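A minimal sketch of this Fermi-Dirac step in reduced units (all parameter values here, including the bulk occupancy, are illustrative assumptions): the occupancy rises from near 0 to near 1 as the fixed charge passes the resonant point, with the midpoint of the step displaced from formula_58 by the concentration-dependent factor described above.

```python
import math

def occupancy(Q_f, z=2, M0=-1.0, C_s=1.0, kT=0.1, P_b=1e-2):
    """Fermi-Dirac shape of the 0 -> 1 step of the Coulomb staircase,
    in reduced units (charges in units of e); all values are illustrative."""
    mu_ex = z * (Q_f - M0) / C_s
    return 1.0 / (1.0 + math.exp(mu_ex / kT) / P_b)

for Q_f in (-0.8, -1.0, -1.2, -1.4, -1.6):
    print(Q_f, round(occupancy(Q_f), 4))
# The occupancy rises from ~0 to ~1 as Q_f passes the resonant point, and the
# midpoint of the step is displaced from M0 by the concentration factor 1/P_b.
```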
Shift of singular points.
Adding the partial excess chemical potentials formula_66 coming from different sources formula_67 (including dehydration, local binding, volume exclusion, etc.) to the ICB barrier-less condition formula_68 leads to a shift of the ICB resonant points formula_58, described by a "shift equation": formula_69 i.e. the additional energy contributions formula_70 shift the resonant barrier-less point formula_71.
The more important of these shifts (excess potentials) are:
In artificial nanopores.
Sub-nm MoS2 pores.
Following its prediction based on analytic theory and molecular dynamics simulations, experimental evidence for ICB emerged from experiments on monolayer <chem>MoS2</chem> pierced by a single formula_75 nm nanopore. Highly non-Ohmic conduction was observed between aqueous ionic solutions on either side of the membrane. In particular, for low voltages across the membrane, the current remained close to zero, but it rose abruptly when a threshold of about formula_76 mV was exceeded. This was interpreted as complete ionic Coulomb blockade of current in the (uncharged) nanopore due to the large potential barrier at low voltages. But the application of larger voltages pulled the barrier down, producing accessible states into which transitions could occur, thus leading to conduction.
In biological ion channels.
The realisation that ICB could occur in biological ion channels accounted for several experimentally observed features of selectivity, including:
Valence selectivity.
Valence selectivity is the channel's ability to discriminate between ions of different valence formula_4, wherein e.g. a calcium channel favours formula_18 ions over formula_77 ions by a factor of up to 1000×. Valence selectivity has been attributed variously to pure electrostatics, to a charge space competition mechanism, to a snug fit of the ion to ligands, or to quantised dehydration.
In the ICB model, valence selectivity arises from electrostatics, namely from the formula_4-dependence of the value of formula_78 needed to provide for barrier-less conduction.
Correspondingly, the ICB model provides explanations of why site-directed mutations that alter formula_0 can destroy the channel by blockading it, or can alter its selectivity from favouring formula_18 ions to favouring formula_77 ions, or "vice versa".
Divalent blockade.
Divalent (e.g. formula_18) blockade of monovalent (e.g. formula_77) currents is observed in some types of ion channels. Namely, formula_77 ions in a pure sodium solution pass unimpeded through a calcium channel, but are blocked by tiny (nM) extracellular concentrations of formula_18 ions. ICB provides a transparent explanation of both the phenomenon itself and of the Langmuir-isotherm shape of the current "vs." formula_79 attenuation curve, deriving them from the strong affinity and an FD distribution of <chem>Ca^2+</chem> ions. Conversely, the appearance of divalent blockade presents strong evidence in favour of ICB.
Similarly, ICB can account for the divalent (Iodide <chem>I^2-</chem>) blockade that has been observed in biological chloride (<chem>Cl-</chem>)-selective channels.
Special features.
Comparisons between ICB and ECB.
ICB and ECB should be considered as two versions of the same fundamental electrostatic phenomenon. Both ICB and ECB are based on charge quantisation and on the finite single-particle charging energy formula_80, resulting in close similarity of the governing equations and manifestations of these closely related phenomena. Nonetheless, there are important distinctions between ICB and ECB: their similarities and differences are summarised in Table 1.
Particular cases.
Coulomb blockade can also appear in superconductors; in such a case the free charge carriers are Cooper pairs (formula_81).
In addition, Pauli spin blockade represents a special kind of Coulomb blockade, connected with Pauli exclusion principle.
Quantum analogies.
Despite appearing in completely classical systems, ICB exhibits some phenomena reminiscent of quantum mechanics (QM). They arise because the charge/entity discreteness of the ions leads to quantisation of the energy spectrum formula_32 and hence to the QM analogies:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Q_{\\rm f}"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "c_{\\rm b}"
},
{
"math_id": 3,
"text": "q=ze"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "e"
},
{
"math_id": 6,
"text": "z=-1"
},
{
"math_id": 7,
"text": "C_{\\rm s}"
},
{
"math_id": 8,
"text": "\\Delta E=z^2e^2/(2 C_s)"
},
{
"math_id": 9,
"text": "\\Delta E \\gg k_{\\rm B}T"
},
{
"math_id": 10,
"text": "\\log{c_{\\rm b}}"
},
{
"math_id": 11,
"text": "z=2"
},
{
"math_id": 12,
"text": "z=3"
},
{
"math_id": 13,
"text": "L\\approx1"
},
{
"math_id": 14,
"text": "R\\approx 0.3-0.5"
},
{
"math_id": 15,
"text": "\\varepsilon_{\\rm w}=80"
},
{
"math_id": 16,
"text": "\\varepsilon_{\\rm p}=2-10"
},
{
"math_id": 17,
"text": "R_{\\rm ion}"
},
{
"math_id": 18,
"text": "\\text{Ca}^{2+}"
},
{
"math_id": 19,
"text": "E_n"
},
{
"math_id": 20,
"text": "n "
},
{
"math_id": 21,
"text": "E_{n}^{\\rm ES}"
},
{
"math_id": 22,
"text": "E_n^{\\rm DH}"
},
{
"math_id": 23,
"text": "E_n^{\\rm INT}"
},
{
"math_id": 24,
"text": "E_n=E_n^{\\rm ES}+E_n^{\\rm DH}+E_n^{\\rm INT}...(E_n \\text{ Decomposition})"
},
{
"math_id": 25,
"text": "E_n=E_n^{ES}"
},
{
"math_id": 26,
"text": "\\begin{align} Q_n & = z e n+Q_{\\rm f} &\\text{(Excess charge)}\\\\ E_n&=\\dfrac{Q_n^2}{2C_s} &\\text{(Electrostatic energy)} \\\\\nC_s& = 4\\pi \\epsilon_0 \\epsilon_w \\dfrac{R^2}{L} &\\text{(Self-capacitance)}\n\\end{align}\n "
},
{
"math_id": 27,
"text": "Q_{n}"
},
{
"math_id": 28,
"text": "n"
},
{
"math_id": 29,
"text": "C_{\\rm s}\n "
},
{
"math_id": 30,
"text": "\\epsilon_0"
},
{
"math_id": 31,
"text": "\\mu_{\\rm ex}"
},
{
"math_id": 32,
"text": "\\Delta E"
},
{
"math_id": 33,
"text": "\\mu "
},
{
"math_id": 34,
"text": "G"
},
{
"math_id": 35,
"text": "\\begin{align}\nG_n&=E_n-TS_n & \\text{(Gibbs free energy)}\\\\\n\\mu_n&=G_{n+1}-G_n &\\text{(Chemical potential)}\n\\end{align} "
},
{
"math_id": 36,
"text": "G_n "
},
{
"math_id": 37,
"text": "\\mu=\\mu_F "
},
{
"math_id": 38,
"text": "\\mu_{\\rm ex}=\\mu_n-\\mu_F"
},
{
"math_id": 39,
"text": "\\begin{align}\n\\mu_{\\rm ex}&=E_{n+1}-E_n=\\Delta E+E_{\\rm AFF} &\\text{(Coulomb gap)}\\\\\n\\Delta E&=\\frac{z^2 e^2}{2 C_s}; &\\text{(Charging energy)}\\\\\nE_{\\rm AFF}&=\\frac{z e}{C_s}(zen+Q_{\\rm f}) &\\text{(Affinity energy)}\n\\end{align}\n "
},
{
"math_id": 40,
"text": " E_{\\rm AFF} "
},
{
"math_id": 41,
"text": "\\Delta E_{\\rm AFF}"
},
{
"math_id": 42,
"text": " \\Delta E "
},
{
"math_id": 43,
"text": "\\mu_{\\rm ex}\\approx0"
},
{
"math_id": 44,
"text": "n=1,2,3,... "
},
{
"math_id": 45,
"text": "Z_n "
},
{
"math_id": 46,
"text": "E_{\\rm G}(Q_{\\rm f}) =\\min_n{E_n(Q_{\\rm f})} \\quad \\quad \\text{(Ground state})"
},
{
"math_id": 47,
"text": "\\partial E_n/\\partial Q_{\\rm f}=0"
},
{
"math_id": 48,
"text": "Q_{n}=0 "
},
{
"math_id": 49,
"text": "M_n "
},
{
"math_id": 50,
"text": "\\mu_{\\rm ex} =0 "
},
{
"math_id": 51,
"text": "\\Delta E\\approx-E_{\\rm AFF}"
},
{
"math_id": 52,
"text": "Z_n \\text{ and } M_n"
},
{
"math_id": 53,
"text": "\\begin{align}Z_n&=-z e n &\\text{(Electrostatic blockade)} \\\\\nM_n&=-z e (n+1/2) &\\text{(Resonant conduction)},\n\\end{align} "
},
{
"math_id": 54,
"text": "\\Delta=|M_{n+1}-M_n|=|Z_{n+1}-Z_n|=|ze| "
},
{
"math_id": 55,
"text": "\\Delta E/(k_{\\rm B}T) \\approx 20 \\gg 1"
},
{
"math_id": 56,
"text": "J"
},
{
"math_id": 57,
"text": "Z_n"
},
{
"math_id": 58,
"text": "M_n"
},
{
"math_id": 59,
"text": "Z_0=0 "
},
{
"math_id": 60,
"text": "Q_{\\rm f}=0 "
},
{
"math_id": 61,
"text": "P_{\\rm c}"
},
{
"math_id": 62,
"text": "0 \\rightarrow 1"
},
{
"math_id": 63,
"text": "\\begin{align}\nP_{\\rm c}&=\\left[1+\\dfrac{1}{P_{\\rm b}}\\exp\\left(\\dfrac{\\mu_{\\rm ex}}{k_{\\rm B}T}\\right)\\right]^{-1}; &\\text{(Fermi-Dirac distribution)}\\\\\n\\mu_{\\rm ex}&=\\frac{z e}{C_s}\\left( Q_{\\rm f}-M_0\\right).\n\\end{align}"
},
{
"math_id": 64,
"text": "P_{\\rm b}"
},
{
"math_id": 65,
"text": "1/P_{\\rm b} "
},
{
"math_id": 66,
"text": "\\mu_{\\rm ex}^Y"
},
{
"math_id": 67,
"text": "Y "
},
{
"math_id": 68,
"text": "\\mu_{\\rm ex}=0"
},
{
"math_id": 69,
"text": "\\Delta M_n= -\\dfrac{C_s}{z e} \\sum_Y{\\mu_{\\rm ex}^{Y}} \\quad \\quad \\text{(Shift equation)} "
},
{
"math_id": 70,
"text": "\\mu_{\\rm ex}^Y"
},
{
"math_id": 71,
"text": "M_0"
},
{
"math_id": 72,
"text": "\\mu_{\\rm ex}^{\\rm ES}=-k_{\\rm B}T\\log(P_{\\rm b})"
},
{
"math_id": 73,
"text": "\\mu_{\\rm ex}^{\\rm DH}"
},
{
"math_id": 74,
"text": "\\mu_{\\rm ex}^{\\rm INT}"
},
{
"math_id": 75,
"text": "0.6"
},
{
"math_id": 76,
"text": "400"
},
{
"math_id": 77,
"text": "\\text{Na}^{+}"
},
{
"math_id": 78,
"text": "Q_{\\rm f}=M_n=-ze(n+1/2)"
},
{
"math_id": 79,
"text": "\\log[\\text{Ca}^{2+}]"
},
{
"math_id": 80,
"text": "\\Delta E "
},
{
"math_id": 81,
"text": "z=-2"
},
{
"math_id": 82,
"text": "\\text {Ca}^{2+}"
},
{
"math_id": 83,
"text": "\\log{[\\text{Ca}^{2+}]}"
}
] |
https://en.wikipedia.org/wiki?curid=58030745
|
58031307
|
ZX-calculus
|
Graphical language for quantum processes
The ZX-calculus is a rigorous graphical language for reasoning about linear maps between qubits, which are represented as string diagrams called "ZX-diagrams". A ZX-diagram consists of a set of generators called "spiders" that represent specific tensors. These are connected together to form a tensor network similar to Penrose graphical notation. Due to the symmetries of the spiders and the properties of the underlying category, topologically deforming a ZX-diagram (i.e. moving the generators without changing their connections) does not affect the linear map it represents. In addition to the equalities between ZX-diagrams that are generated by topological deformations, the calculus also has a set of graphical rewrite rules for transforming diagrams into one another. The ZX-calculus is "universal" in the sense that any linear map between qubits can be represented as a diagram, and different sets of graphical rewrite rules are complete for different families of linear maps. ZX-diagrams can be seen as a generalisation of quantum circuit notation, and they form a strict subset of tensor networks which represent general fusion categories and wavefunctions of quantum spin systems.
History.
The ZX-calculus was first introduced by Bob Coecke and Ross Duncan in 2008 as an extension of the categorical quantum mechanics school of reasoning. They introduced the fundamental concepts of spiders, strong complementarity and most of the standard rewrite rules.
In 2009 Duncan and Perdrix found the additional Euler decomposition rule for the Hadamard gate, which was used by Backens in 2013 to establish the first completeness result for the ZX-calculus: namely, that there exists a set of rewrite rules sufficient to prove all equalities between stabilizer ZX-diagrams, where phases are multiples of formula_0, up to global scalars. This result was later refined to completeness including scalar factors.
Following an incompleteness result, in 2017, a completion of the ZX-calculus for the approximately universal formula_1 fragment was found, in addition to two different completeness results for the universal ZX-calculus (where phases are allowed to take any real value).
Also in 2017, the book "Picturing Quantum Processes" was released, which builds quantum theory from the ground up using the ZX-calculus. See also the 2019 book "Categories for Quantum Theory".
Informal introduction.
ZX-diagrams consist of green and red nodes called "spiders", which are connected by "wires". Wires may curve and cross, arbitrarily many wires may connect to the same spider, and multiple wires can go between the same pair of nodes. There are also Hadamard nodes, usually denoted by a yellow box, which always connect to exactly two wires.
ZX-diagrams represent linear maps between qubits, similar to the way in which quantum circuits represent unitary maps between qubits. ZX-diagrams differ from quantum circuits in two main ways. The first is that ZX-diagrams do not have to conform to the rigid topological structure of circuits, and hence can be deformed arbitrarily. The second is that ZX-diagrams come equipped with a set of rewrite rules, collectively referred to as the "ZX-calculus". Using these rules, calculations can be performed in the graphical language itself.
Generators.
The building blocks or generators of the ZX-calculus are graphical representations of specific states, unitary operators, linear isometries, and projections in the computational basis formula_2 and the Hadamard-transformed basis formula_3 and formula_4. The colour green (or sometimes white) is used to represent the computational basis and the colour red (or sometimes grey) is used to represent the Hadamard-transformed basis. Each of these generators can furthermore be labelled by a phase, which is a real number from the interval formula_5. If the phase is zero it is usually not written.
The generators are the green (Z) and red (X) spiders, which may have any number of inputs and outputs and may carry a phase, together with the Hadamard node; their standard interpretations as linear maps are given in the table in the "Formal definition" section below.
Composition.
The generators can be composed in two ways: sequentially, by connecting the output wires of one diagram to the input wires of another; and in parallel, by stacking two diagrams vertically. These laws correspond to the composition and tensor product of linear maps.
Any diagram written by composing generators in this way is called a ZX-diagram. ZX-diagrams are closed under both composition laws: connecting an output of one ZX-diagram to an input of another creates a valid ZX-diagram, and vertically stacking two ZX-diagrams creates a valid ZX-diagram.
Only topology matters.
Two diagrams represent the same linear operator if they consist of the same generators connected in the same ways. In other words, whenever two ZX-diagrams can be transformed into one another by topological deformation, then they represent the same linear map. Thus, the controlled-NOT gate can be represented as follows:
Diagram rewriting.
The following example of a quantum circuit constructs a GHZ-state. By translating it into a ZX-diagram, using the rules that "adjacent spiders of the same color merge", "Hadamard changes the color of spiders", and "parity-2 spiders are identities", it can be graphically reduced to a GHZ-state:
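These graphical rewrites can also be checked against the matrices that the spiders stand for. The following toy calculation (a sketch, not part of any ZX software package) represents a Z-spider with one input, two outputs and phase "a" by its 4×2 matrix, whose columns are |00⟩ and e^{i"a"}|11⟩, and verifies numerically that composing it with a one-input, one-output Z-spider of phase "b" yields the Z-spider of phase "a" + "b", which is the simplest instance of the spider-fusion rule:
#include <complex>
#include <cstdio>

// Toy numerical check (not part of any ZX software): a Z-spider with n inputs,
// m outputs and phase a stands for the tensor |0..0><0..0| + e^{ia}|1..1><1..1|.
// Below, the spider with 1 input and 2 outputs is stored as its 4x2 matrix and
// the fusion rule is checked: composing it with a 1-input/1-output spider of
// phase b yields the 1-input/2-output spider of phase a+b.
using cd = std::complex<double>;

int main()
{
    const double a = 0.7, b = 1.9;              // arbitrary phases

    cd Z12a[4][2] = {};                         // Z(1 -> 2, a): |0> -> |00>, |1> -> e^{ia}|11>
    Z12a[0][0] = 1.0;
    Z12a[3][1] = std::polar(1.0, a);

    cd Z11b[2][2] = {};                         // Z(1 -> 1, b) = diag(1, e^{ib})
    Z11b[0][0] = 1.0;
    Z11b[1][1] = std::polar(1.0, b);

    cd fused[4][2] = {};                        // expected result: Z(1 -> 2, a+b)
    fused[0][0] = 1.0;
    fused[3][1] = std::polar(1.0, a + b);

    double err = 0.0;                           // compare Z12a * Z11b with the fused spider
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 2; ++j)
        {
            cd s = 0.0;
            for (int k = 0; k < 2; ++k)
                s += Z12a[i][k] * Z11b[k][j];
            err += std::abs(s - fused[i][j]);
        }
    std::printf("spider fusion residual = %.3e\n", err);   // ~0 up to rounding
    return 0;
}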
Any linear map between qubits can be represented as a ZX-diagram, i.e. ZX-diagrams are "universal". A given ZX-diagram can be transformed into another ZX-diagram using the rewrite rules of the ZX-calculus if and only if the two diagrams represent the same linear map, i.e. the ZX-calculus is sound and complete.
Formal definition.
The category of ZX-diagrams is a dagger compact category, which means that it has symmetric monoidal structure (a tensor product), is compact closed (it has "cups" and "caps") and comes equipped with a dagger, such that all these structures suitably interact. The objects of the category are the natural numbers, with the tensor product given by addition (the category is a PROP). The morphisms of this category are ZX-diagrams. Two ZX-diagrams compose by juxtaposing them horizontally and connecting the outputs of the left-hand diagram to the inputs of the right-hand diagram. The monoidal product of two diagrams is represented by placing one diagram above the other.
Indeed, all ZX-diagrams are built freely from a set of generators via composition and monoidal product, modulo the equalities induced by the compact structure and the rules of the ZX-calculus given below. For instance, the identity of the object formula_6 is depicted as formula_6 parallel wires from left to right, with the special case formula_7 being the empty diagram.
The following table gives the generators together with their standard interpretations as linear maps, expressed in Dirac notation. The computational basis states are denoted by formula_8 and the Hadamard-transformed basis states are formula_9.
The formula_6-fold tensor-product of the vector formula_10 is denoted by formula_11.
There are many different versions of the ZX-calculus, using different systems of rewrite rules as axioms. All share the meta rule "only the topology matters", which means that two diagrams are equal if they consist of the same generators connected in the same way, no matter how these generators are arranged in the diagram.
The following are some of the core set of rewrite rules, here given "up to scalar factor": i.e. two diagrams are considered to be equal if their interpretations as linear maps differ by a non-zero complex factor.
Applications.
The ZX-calculus has been used in a variety of quantum information and computation tasks.
Tools.
The rewrite rules of the ZX-calculus can be implemented formally as an instance of double-pushout rewriting. This has been used in the software "Quantomatic" to allow automated rewriting of ZX-diagrams (or more general string diagrams). In order to formalise the usage of the "dots" to denote any number of wires, such as used in the spider fusion rule, this software uses "bang-box" notation to implement rewrite rules where the spiders can have any number of inputs or outputs.
A more recent project to handle ZX-diagrams is PyZX, which is primarily focused on circuit optimisation.
A LaTeX package zx-calculus can be used to typeset ZX-diagrams. Many authors also use the software TikZiT as a GUI to help typeset diagrams.
Related graphical languages.
The ZX-calculus is only one of several graphical languages for describing linear maps between qubits. The "ZW-calculus" was developed alongside the ZX-calculus, and can naturally describe the W-state and Fermionic quantum computing. It was the first graphical language which had a complete rule-set for an approximately universal set of linear maps between qubits, and the early completeness results of the ZX-calculus use a reduction to the ZW-calculus.
A more recent language is the "ZH-calculus". This adds the "H-box" as a generator, that generalizes the Hadamard gate from the ZX-calculus. It can naturally describe quantum circuits involving Toffoli gates.
Related algebraic concepts.
Up to scalars, the phase-free ZX-calculus, generated by formula_12-labelled spiders is equivalent to the dagger compact closed category of linear relations over the finite field formula_13. In other words, given a diagram with formula_6 inputs and formula_14 outputs in the phase-free ZX-calculus, its X stabilizers form a linear subspace of formula_15, and the composition of phase-free ZX diagrams corresponds to relational composition of these subspaces. In particular, the Z comonoid (given by the Z spider with one input and two outputs, and the Z spider with one input and no outputs) and X monoid (given by the X spider with one output and two inputs, and the X spider with one output and no inputs) generate the symmetric monoidal category of matrices over formula_13 with respect to the direct sum as the monoidal product.
|
[
{
"math_id": 0,
"text": "\\pi/2"
},
{
"math_id": 1,
"text": "\\pi/4"
},
{
"math_id": 2,
"text": "| 0 \\rangle, | 1 \\rangle"
},
{
"math_id": 3,
"text": " | + \\rangle = \\frac{| 0 \\rangle + | 1 \\rangle}{\\sqrt{2}}"
},
{
"math_id": 4,
"text": " | - \\rangle = \\frac{| 0 \\rangle - | 1 \\rangle}{\\sqrt{2}}"
},
{
"math_id": 5,
"text": "[0,2\\pi)"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "n=0"
},
{
"math_id": 8,
"text": "\\mid 0 \\rangle, \\vert 1 \\rangle"
},
{
"math_id": 9,
"text": "\\mid \\pm \\rangle = \\frac{1}{\\sqrt{2}} (\\vert 0 \\rangle \\pm \\vert 1 \\rangle)"
},
{
"math_id": 10,
"text": "\\mid \\psi \\rangle"
},
{
"math_id": 11,
"text": "\\mid \\psi \\rangle^{\\otimes n}"
},
{
"math_id": 12,
"text": "0"
},
{
"math_id": 13,
"text": "\\mathbb{F}_2"
},
{
"math_id": 14,
"text": "m"
},
{
"math_id": 15,
"text": "\\mathbb{F}_2^n\\oplus \\mathbb{F}_2^m"
}
] |
https://en.wikipedia.org/wiki?curid=58031307
|
5803388
|
Tractor bundle
|
In conformal geometry, the tractor bundle is a particular vector bundle constructed on a conformal manifold whose fibres form an effective representation of the conformal group (see associated bundle).
The term "tractor" is a portmanteau of "Tracy Thomas" and "twistor", the bundle having been introduced first by T. Y. Thomas as an alternative formulation of the Cartan conformal connection, and later rediscovered within the formalism of local twistors and generalized to projective connections by Michael Eastwood "et al." in Tractor bundles can be defined for arbitrary parabolic geometries.
Conformal manifolds.
The tractor bundle for a formula_0-dimensional conformal manifold formula_1 of signature formula_2 is a rank formula_3 vector bundle formula_4 equipped with the following data: a metric formula_5 of signature formula_6 on its fibres; a connection formula_8 on formula_4 preserving this metric; and a distinguished line subbundle formula_7; such that, for any non-vanishing local section formula_10 of formula_11, the map
formula_12
is a linear isomorphism at each point from the tangent bundle of formula_1 (formula_13) to the quotient bundle formula_14, where formula_15 denotes the orthogonal complement of formula_11 in formula_16 relative to the metric formula_9.
Given a tractor bundle, the metrics in the conformal class are given by fixing a local section formula_10 of formula_11, and defining for formula_17,
formula_18
To go the other way and construct a tractor bundle from a conformal structure requires more work. The tractor bundle is then an associated bundle of the Cartan geometry determined by the conformal structure. The conformal group for a manifold of signature formula_2 is formula_19, and one obtains the tractor bundle as the bundle associated to the standard representation of the conformal group, with its connection induced by the Cartan conformal connection. Because the fibre of the Cartan conformal bundle is the stabilizer of a null ray, this singles out the line bundle formula_11.
More explicitly, suppose that formula_20 is a metric on formula_1, with Levi-Civita connection formula_8. The tractor bundle is the space of 2-jets of solutions formula_21 to the eigenvalue equation
formula_22
where formula_23 is the Schouten tensor. A little work then shows that the sections of the tractor bundle (in a fixed Weyl gauge) can be represented by formula_24-vectors
formula_25
The connection is
formula_26
The metric, on formula_27 and formula_28 is:
formula_29
The preferred line bundle formula_11 is the span of
formula_30
Given a change in Weyl gauge formula_31, the components of the tractor bundle change according to the rule
formula_32
where formula_33, and the inverse metric formula_34 has been used in one place to raise the index. Clearly the bundle formula_11 is invariant under the change in gauge, and the connection can be shown to be invariant using the conformal change in the Levi-Civita connection and Schouten tensor.
Projective manifolds.
Let formula_1 be a projective manifold of dimension formula_0. Then the tractor bundle is a rank formula_35 vector bundle formula_16, with connection formula_8, on formula_1 equipped with the additional data of a line subbundle formula_11 such that, for any non-vanishing local section formula_10 of formula_11, the linear operator
formula_36
is a linear isomorphism of the tangent space to formula_37.
One recovers an affine connection in the projective class from a section formula_10 of formula_11 by defining
formula_38
and using the aforementioned isomorphism.
Explicitly, the tractor bundle can be represented in a given affine chart by pairs formula_39, where the connection is
formula_40
where formula_23 is the projective Schouten tensor. The preferred subbundle formula_11 is that spanned by formula_41.
Here the projective Schouten tensor of an affine connection is defined as follows. Define the Riemann tensor in the usual way (indices are abstract)
formula_42
Then
formula_43
where the Weyl tensor formula_44 is trace-free, and formula_45 (by Bianchi).
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "(p,q)"
},
{
"math_id": 3,
"text": "n+2"
},
{
"math_id": 4,
"text": "\\mathcal T\\to M"
},
{
"math_id": 5,
"text": "G:\\mathcal T\\otimes\\mathcal T\\to\\mathbb R"
},
{
"math_id": 6,
"text": "(p+1,q+1)"
},
{
"math_id": 7,
"text": "\\mathcal X\\subset\\mathcal T"
},
{
"math_id": 8,
"text": "\\nabla"
},
{
"math_id": 9,
"text": "G"
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "\\mathcal X"
},
{
"math_id": 12,
"text": "v\\mapsto \\nabla_vX\\pmod{\\mathcal X}"
},
{
"math_id": 13,
"text": "v\\in TM"
},
{
"math_id": 14,
"text": "\\mathcal X^\\perp/\\mathcal X"
},
{
"math_id": 15,
"text": "\\mathcal X^\\perp"
},
{
"math_id": 16,
"text": "\\mathcal T"
},
{
"math_id": 17,
"text": "v,w\\in TM"
},
{
"math_id": 18,
"text": "g_X(v,w) = G(\\nabla_vX,\\nabla_wX)."
},
{
"math_id": 19,
"text": "SO(p+1,q+1)"
},
{
"math_id": 20,
"text": "g"
},
{
"math_id": 21,
"text": "\\sigma"
},
{
"math_id": 22,
"text": "(\\nabla_i\\nabla_j + P_{ij})\\sigma = \\lambda g_{ij}"
},
{
"math_id": 23,
"text": "P_{ij}"
},
{
"math_id": 24,
"text": "(n+2)"
},
{
"math_id": 25,
"text": "U^I=\\begin{bmatrix}\\sigma\\\\ \\mu^i\\\\ \\rho\\end{bmatrix}."
},
{
"math_id": 26,
"text": "\\nabla_jU^I=\\nabla_j\\begin{bmatrix}\\sigma\\\\ \\mu^i\\\\ \\rho\\end{bmatrix}=\\begin{bmatrix}\\nabla_j\\sigma-\\mu_j\\\\ \\nabla_j\\mu^i + \\delta_j^i\\rho + P_j^i\\sigma\\\\ \\nabla_j\\rho - P_{ji}\\mu^i\\end{bmatrix}."
},
{
"math_id": 27,
"text": "U^I=(\\sigma\\ \\mu^i\\ \\rho)"
},
{
"math_id": 28,
"text": "V^J=(\\tau\\ \\nu^j\\ \\alpha)"
},
{
"math_id": 29,
"text": "G_{IJ}U^IV^J = \\mu^i\\nu_i + \\sigma\\tau + \\rho\\alpha"
},
{
"math_id": 30,
"text": "X^I = \\begin{bmatrix}0\\\\0\\\\1\\end{bmatrix}."
},
{
"math_id": 31,
"text": "\\widehat g_{ij} = e^{2\\gamma}g_{ij}"
},
{
"math_id": 32,
"text": "\\begin{bmatrix}\\widehat\\sigma\\\\\\widehat \\mu^i\\\\\\widehat\\rho\\end{bmatrix} = \\begin{bmatrix}\\sigma\\\\ \\mu^i+\\gamma^i\\sigma\\\\ \\rho-\\gamma_j\\mu^j - \\gamma^2\\sigma/2\\end{bmatrix}"
},
{
"math_id": 33,
"text": "\\gamma_i=\\nabla_i\\gamma"
},
{
"math_id": 34,
"text": "g^{ij}"
},
{
"math_id": 35,
"text": "n+1"
},
{
"math_id": 36,
"text": "v\\mapsto \\nabla_v X\\pmod{\\mathcal X}"
},
{
"math_id": 37,
"text": "\\mathcal T/\\mathcal X"
},
{
"math_id": 38,
"text": "\\nabla_{\\nabla_vw}X = \\nabla_v\\nabla_wX \\pmod{\\mathcal X}"
},
{
"math_id": 39,
"text": "(\\mu^i\\ \\rho)"
},
{
"math_id": 40,
"text": "\\nabla_j\\begin{bmatrix}\\mu^i\\\\ \\rho\\end{bmatrix} = \\begin{bmatrix}\\nabla_j\\mu^i + \\delta_j^i\\rho\\\\ \\nabla_j\\rho - P_{ij}\\mu^i\\end{bmatrix}"
},
{
"math_id": 41,
"text": "X=(0\\ 1)"
},
{
"math_id": 42,
"text": "(\\nabla_i\\nabla_j-\\nabla_j\\nabla_i)U^\\ell = {R_{ijk}}^\\ell U^k."
},
{
"math_id": 43,
"text": "{R_{ijk}}^\\ell = {C_{ijk}}^\\ell + 2\\delta^\\ell_{[i}P_{j]k} + \\beta_{ij}\\delta_k^\\ell"
},
{
"math_id": 44,
"text": "{C_{ijk}}^\\ell"
},
{
"math_id": 45,
"text": "2P_{[ij]} = -\\beta_{ij}"
}
] |
https://en.wikipedia.org/wiki?curid=5803388
|
58037076
|
Kernel-phase
|
Kernel-phases are observable quantities used in high-resolution astronomical imaging for superresolution image creation. They can be seen as a generalization of closure phases to redundant arrays. For this reason, when the wavefront quality requirements are met, kernel-phase analysis is an alternative to aperture masking interferometry that can be carried out without a mask while retaining the phase-error rejection properties. The observables are computed through linear algebra from the Fourier transform of direct images. They can then be used for statistical testing, model fitting, or image reconstruction.
Prerequisites.
In order to extract kernel-phases from an image, some requirements must be met:
Deviations from these requirements are known to be acceptable, but lead to observational bias that should be corrected by the observation of calibrators.
Definition.
The method relies on a discrete model of the instrument's pupil plane and on the corresponding list of baselines, which provide a vector formula_1 of pupil-plane phase errors and a vector formula_2 of image-plane Fourier phases. When the wavefront error in the pupil plane is small enough (i.e. when the Strehl ratio of the imaging system is sufficiently high), the complex amplitude associated with the instrumental phase formula_3 at one point of the pupil can be approximated by formula_4. This permits the pupil-plane phase aberrations formula_1 to be related to the image-plane Fourier phase by a linear transformation described by the matrix formula_5:
formula_6
where formula_7 is the theoretical Fourier phase vector of the object. In this formalism, singular value decomposition can be used to find a matrix formula_8 satisfying formula_9. The rows of formula_8 constitute a basis of the kernel of formula_10.
formula_11
The vector formula_12 is called the kernel-phase vector of observables. This equation can be used for model fitting, as it isolates a subspace of the Fourier phase that is immune to instrumental phase errors to first order.
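In the simplest case of three sub-apertures forming a non-redundant array, the single kernel-phase reduces to the familiar closure phase, which makes the construction easy to check by hand. The sketch below is purely illustrative and is not taken from the xara package: the matrix formula_5 mapping the three pupil phases onto the three baseline phases and a one-row kernel matrix formula_8 satisfying formula_9 are written out explicitly, and the code verifies that the observable formula_12 is unchanged when arbitrary pupil-plane errors formula_1 are added:
#include <cstdio>

// Illustrative three-aperture example (hypothetical numbers, not from xara).
// Baselines (1,2), (2,3), (1,3) give Fourier phases Phi_ij = phi_j - phi_i,
// so the baseline-mapping matrix acting on the pupil phases (phi1,phi2,phi3) is
//     A = [ -1  1  0 ]
//         [  0 -1  1 ]
//         [ -1  0  1 ]
// and K = [ 1  1 -1 ] satisfies K*A = 0: K*Phi is the closure phase,
// immune (to first order) to the pupil-plane errors phi.
int main()
{
    const double A[3][3] = { {-1, 1, 0}, {0, -1, 1}, {-1, 0, 1} };
    const double K[3]    = { 1, 1, -1 };

    // Check K*A = 0 (the row K spans the kernel of the transposed matrix).
    for (int j = 0; j < 3; ++j)
    {
        double s = 0.0;
        for (int i = 0; i < 3; ++i) s += K[i] * A[i][j];
        std::printf("(K*A)[%d] = %g\n", j, s);
    }

    // Object phases Phi0 plus an instrumental term A*phi: the kernel-phase
    // K*Phi does not depend on the pupil errors phi.
    const double Phi0[3] = { 0.30, -0.10, 0.25 };   // hypothetical object phases
    const double phi[3]  = { 0.80, -0.40, 0.15 };   // hypothetical pupil errors

    double Phi[3];
    for (int i = 0; i < 3; ++i)
    {
        Phi[i] = Phi0[i];
        for (int j = 0; j < 3; ++j) Phi[i] += A[i][j] * phi[j];
    }

    double k_obs = 0.0, k_true = 0.0;
    for (int i = 0; i < 3; ++i) { k_obs += K[i] * Phi[i]; k_true += K[i] * Phi0[i]; }
    std::printf("kernel-phase with errors = %g, without errors = %g\n", k_obs, k_true);
    return 0;
}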
Applications.
The technique was first used in the re-analysis of archival images from the Hubble Space Telescope, where it enabled the discovery of a number of brown dwarfs in close binary systems.
The technique is used as an alternative to aperture masking interferometry, especially for fainter stars because it does not require the use of masks that typically block 90% of the light, and therefore allows higher throughput. It is also considered to be an alternative to coronagraphy for direct detection of exoplanets at very small separations (below formula_13) where coronagraphs are limited by the wavefront errors of adaptive optics.
The same framework can be used for wavefront sensing. In the case of an asymmetric aperture, a pseudo-inverse of formula_5 can be used to reconstruct the wavefront errors directly from the image.
A Python library called xara is available on GitHub and maintained by Frantz Martinache to facilitate the extraction and interpretation of kernel-phases.
The KERNEL project has received funding from the European Research Council to explore the potential of these observables for a number of use-cases, including direct detection of exoplanets, image reconstruction, and image plane wavefront sensing for adaptive optics.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\frac{\\lambda}{D} "
},
{
"math_id": 1,
"text": "\\varphi"
},
{
"math_id": 2,
"text": "\\Phi"
},
{
"math_id": 3,
"text": "\\varphi_k"
},
{
"math_id": 4,
"text": " e^{i\\varphi_k} \\approx 1 + \\mathit{i}\\varphi_{k}"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "\\Phi = \\Phi_0 + A \\cdot \\varphi"
},
{
"math_id": 7,
"text": "\\Phi_0"
},
{
"math_id": 8,
"text": "K"
},
{
"math_id": 9,
"text": "K \\cdot A=0"
},
{
"math_id": 10,
"text": "A^{T}"
},
{
"math_id": 11,
"text": "K \\cdot \\Phi = K \\cdot \\Phi_0 + \\cancel{K \\cdot A \\cdot \\varphi}"
},
{
"math_id": 12,
"text": "K.\\Phi"
},
{
"math_id": 13,
"text": " 2\\frac{\\lambda}{D} "
}
] |
https://en.wikipedia.org/wiki?curid=58037076
|
58037613
|
Standard Uptake Fraction
|
The Standard Uptake Fraction (SUF) is the relative distribution of water uptake of a plant in a soil with a uniform water potential. The SUF gives the weighting coefficients used to obtain the equivalent soil water potential sensed by the plant. It is one of the macroscopic parameters describing the hydraulic properties of the root system.
formula_0
where formula_1 = the radial flow entering each root segment (formula_2), and
formula_3 = the actual transpiration (formula_2).
The easiest way to obtain this parameter is to use Functional-Structural Plant Models, which compute the radial water flow for each root segment and then divide it by the actual transpiration.
MARSHAL is a set of online tools developed to visualise the root system and to examine the SUF.
Standard Uptake Density.
The standard uptake density (SUD) (formula_4) is the distribution of the water uptake flow rate in the soil where the water potential is uniform. In other words:
formula_5
where formula_6 = the segment length (formula_7).
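As a toy illustration of these quantities (with made-up numbers rather than the output of a Functional-Structural Plant Model such as MARSHAL), the following sketch normalises the radial flows of a few root segments by the actual transpiration to obtain the SUF of each segment, and divides by the segment lengths to obtain the SUD:
#include <cstdio>

// Toy example with made-up numbers: radial inflow J_r of each root segment
// (cm^3/day) under a uniform soil water potential, and segment lengths (cm).
// SUF_i = J_r,i / T_act (dimensionless, sums to 1); SUD_i = SUF_i / L_i (1/cm).
int main()
{
    const double Jr[4] = { 0.020, 0.050, 0.010, 0.020 };  // hypothetical radial flows
    const double L[4]  = { 1.0,   2.5,   0.5,   1.0   };  // hypothetical segment lengths

    double Tact = 0.0;                    // actual transpiration = sum of radial flows
    for (double j : Jr) Tact += j;

    for (int i = 0; i < 4; ++i)
    {
        double suf = Jr[i] / Tact;        // Standard Uptake Fraction of segment i
        double sud = suf / L[i];          // Standard Uptake Density of segment i
        std::printf("segment %d: SUF = %.3f, SUD = %.3f 1/cm\n", i, suf, sud);
    }
    return 0;
}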
Standard Sink Fraction.
The standard sink fraction (SSF) is very similar to the SUF, but instead of being a function of the root segment, it is related to the soil voxel. It is the normalised distribution of the sink term in a uniform water potential soil.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "SUF = \\dfrac{J_r}{T_{act}}"
},
{
"math_id": 1,
"text": "J_r"
},
{
"math_id": 2,
"text": "L^{3}T^{-1}"
},
{
"math_id": 3,
"text": "T_{act}"
},
{
"math_id": 4,
"text": "L^{-1}"
},
{
"math_id": 5,
"text": "SUD = \\dfrac{SUF}{L_{segment}}"
},
{
"math_id": 6,
"text": "L_{segment}"
},
{
"math_id": 7,
"text": "L"
}
] |
https://en.wikipedia.org/wiki?curid=58037613
|
58037802
|
Compensatory conductance
|
The compensatory root water uptake conductance (Kcomp) (formula_0) characterizes how a plant compensates its water uptake under heterogeneous water potential.
It controls the root water uptake in a soil where the water potential is not uniform.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L^3 P^{-1} T^{-1}"
}
] |
https://en.wikipedia.org/wiki?curid=58037802
|
580384
|
Hopf fibration
|
Fiber bundle of the 3-sphere over the 2-sphere, with 1-spheres as fibers
In differential topology, the Hopf fibration (also known as the Hopf bundle or Hopf map) describes a 3-sphere (a hypersphere in four-dimensional space) in terms of circles and an ordinary sphere. Discovered by Heinz Hopf in 1931, it is an influential early example of a fiber bundle. Technically, Hopf found a many-to-one continuous function (or "map") from the 3-sphere onto the 2-sphere such that each distinct "point" of the 2-sphere is mapped from a distinct great circle of the 3-sphere. Thus the 3-sphere is composed of fibers, where each fiber is a circle, one for each point of the 2-sphere.
This fiber bundle structure is denoted
formula_0
meaning that the fiber space "S"1 (a circle) is embedded in the total space "S"3 (the 3-sphere), and "p" : "S"3 → "S"2 (Hopf's map) projects "S"3 onto the base space "S"2 (the ordinary 2-sphere). The Hopf fibration, like any fiber bundle, has the important property that it is locally a product space. However it is not a "trivial" fiber bundle, i.e., "S"3 is not "globally" a product of "S"2 and "S"1 although locally it is indistinguishable from it.
This has many implications: for example the existence of this bundle shows that the higher homotopy groups of spheres are not trivial in general. It also provides a basic example of a principal bundle, by identifying the fiber with the circle group.
Stereographic projection of the Hopf fibration induces a remarkable structure on R3, in which all of 3-dimensional space, except for the z-axis, is filled with nested tori made of linking Villarceau circles. Here each fiber projects to a circle in space (one of which is a line, thought of as a "circle through infinity"). Each torus is the stereographic projection of the inverse image of a circle of latitude of the 2-sphere. (Topologically, a torus is the product of two circles.) These tori are illustrated in the images at right. When R3 is compressed to the boundary of a ball, some geometric structure is lost although the topological structure is retained (see Topology and geometry). The loops are homeomorphic to circles, although they are not geometric circles.
There are numerous generalizations of the Hopf fibration. The unit sphere in complex coordinate space C"n"+1 fibers naturally over the complex projective space CP"n" with circles as fibers, and there are also real, quaternionic, and octonionic versions of these fibrations. In particular, the Hopf fibration belongs to a family of four fiber bundles in which the total space, base space, and fiber space are all spheres:
formula_1
formula_2
formula_3
formula_4
By Adams's theorem such fibrations can occur only in these dimensions.
Definition and construction.
For any natural number "n", an "n"-dimensional sphere, or n-sphere, can be defined as the set of points in an formula_5-dimensional space which are a fixed distance from a central point. For concreteness, the central point can be taken to be the origin, and the distance of the points on the sphere from this origin can be assumed to be a unit length. With this convention, the "n"-sphere, formula_6, consists of the points formula_7 in formula_8 with "x"12 + "x"22 + ⋯+ "x""n" + 12 = 1. For example, the 3-sphere consists of the points ("x"1, "x"2, "x"3, "x"4) in R4 with "x"12 + "x"22 + "x"32 + "x"42 = 1.
The Hopf fibration "p": "S"3 → "S"2 of the 3-sphere over the 2-sphere can be defined in several ways.
Direct construction.
Identify R4 with C2 and R3 with C × R (where C denotes the complex numbers) by writing:
formula_9
and
formula_10.
Thus "S"3 is identified with the subset of all ("z"0, "z"1) in C2 such that |"z"0|2 + |"z"1|2
1, and "S"2 is identified with the subset of all ("z", "x") in C×R such that |"z"|2 + "x"2
1. (Here, for a complex number "z" = "x" + i"y", |"z"|2 = "z" "z"∗ = "x"2 + "y"2, where the star denotes the complex conjugate.) Then the Hopf fibration "p" is defined by
formula_11
The first component is a complex number, whereas the second component is real. Any point on the 3-sphere must have the property that |"z"0|² + |"z"1|² = 1. If that is so, then "p"("z"0, "z"1) lies on the unit 2-sphere in C × R, as may be shown by adding the squares of the absolute values of the complex and real components of "p"
formula_12
Furthermore, if two points on the 3-sphere map to the same point on the 2-sphere, i.e., if "p"("z"0, "z"1) = "p"("w"0, "w"1), then ("w"0, "w"1) must equal ("λ" "z"0, "λ" "z"1) for some complex number "λ" with |"λ"|² = 1. The converse is also true; any two points on the 3-sphere that differ by a common complex factor "λ" map to the same point on the 2-sphere. These conclusions follow, because the complex factor "λ" cancels with its complex conjugate "λ"∗ in both parts of "p": in the complex 2"z"0"z"1∗ component and in the real component |"z"0|² − |"z"1|².
Since the set of complex numbers "λ" with |"λ"|² = 1 forms the unit circle in the complex plane, it follows that for each point "m" in "S"2, the inverse image "p"−1("m") is a circle, i.e., "p"−1("m") ≅ "S"1. Thus the 3-sphere is realized as a disjoint union of these circular fibers.
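These properties of the map "p" are easy to check numerically. The following sketch (purely illustrative) normalises an arbitrary point ("z"0, "z"1) of C2 onto the 3-sphere, applies "p", verifies that the image lies on the unit 2-sphere, and confirms that multiplying both coordinates by a unit complex number "λ" leaves the image unchanged, so that the whole circle of such multiples belongs to a single fiber:
#include <complex>
#include <cmath>
#include <cstdio>

using cd = std::complex<double>;

// The Hopf map p(z0, z1) = (2*z0*conj(z1), |z0|^2 - |z1|^2), viewed as a
// point (complex, real) of C x R.
static void hopf(cd z0, cd z1, cd &w, double &x)
{
    w = 2.0 * z0 * std::conj(z1);
    x = std::norm(z0) - std::norm(z1);
}

int main()
{
    // An arbitrary point of C^2, normalised onto the 3-sphere.
    cd z0(0.3, 0.5), z1(-0.7, 0.2);
    double r = std::sqrt(std::norm(z0) + std::norm(z1));
    z0 /= r; z1 /= r;

    cd w; double x;
    hopf(z0, z1, w, x);
    std::printf("|p|^2 = %.12f (should be 1)\n", std::norm(w) + x * x);

    // Multiplying by a unit complex number lambda moves the point along its
    // fiber: the image under p is unchanged.
    cd lambda = std::polar(1.0, 1.234);
    cd w2; double x2;
    hopf(lambda * z0, lambda * z1, w2, x2);
    std::printf("fiber invariance residual = %.3e\n", std::abs(w - w2) + std::fabs(x - x2));
    return 0;
}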
A direct parametrization of the 3-sphere employing the Hopf map is as follows.
formula_13
formula_14
or in Euclidean R4
formula_15
formula_16
formula_17
formula_18
Where "η" runs over the range from 0 to "π"/2, "ξ"1 runs over the range from 0 to 2"π", and "ξ"2 can take any value from 0 to 4"π". Every value of "η", except 0 and "π"/2 which specify circles, specifies a separate flat torus in the 3-sphere, and one round trip (0 to 4"π") of either "ξ"1 or "ξ"2 causes you to make one full circle of both limbs of the torus.
A mapping of the above parametrization to the 2-sphere is as follows, with points on the circles parametrized by "ξ"2.
formula_19
formula_20
formula_21
Geometric interpretation using the complex projective line.
A geometric interpretation of the fibration may be obtained using the complex projective line, CP1, which is defined to be the set of all complex one-dimensional subspaces of C2. Equivalently, CP1 is the quotient of C2\{0} by the equivalence relation which identifies ("z"0, "z"1) with ("λ" "z"0, "λ" "z"1) for any nonzero complex number "λ". On any complex line in C2 there is a circle of unit norm, and so the restriction of the quotient map to the points of unit norm is a fibration of "S"3 over CP1.
CP1 is diffeomorphic to a 2-sphere: indeed it can be identified with the Riemann sphere C∞ = C ∪ {∞}, which is the one point compactification of C (obtained by adding a point at infinity). The formula given for "p" above defines an explicit diffeomorphism between the complex projective line and the ordinary 2-sphere in 3-dimensional space. Alternatively, the point ("z"0, "z"1) can be mapped to the ratio "z"1/"z"0 in the Riemann sphere C∞.
Fiber bundle structure.
The Hopf fibration defines a fiber bundle, with bundle projection "p". This means that it has a "local product structure", in the sense that every point of the 2-sphere has some neighborhood "U" whose inverse image in the 3-sphere can be identified with the product of "U" and a circle: "p"−1("U") ≅ "U" × "S"1. Such a fibration is said to be locally trivial.
For the Hopf fibration, it is enough to remove a single point "m" from "S"2 and the corresponding circle "p"−1("m") from "S"3; thus one can take "U" = "S"2\{"m"}, and any point in "S"2 has a neighborhood of this form.
Geometric interpretation using rotations.
Another geometric interpretation of the Hopf fibration can be obtained by considering rotations of the 2-sphere in ordinary 3-dimensional space. The rotation group SO(3) has a double cover, the spin group Spin(3), diffeomorphic to the 3-sphere. The spin group acts transitively on "S"2 by rotations. The stabilizer of a point is isomorphic to the circle group; its elements are the rotations leaving the given point unmoved, all sharing the axis connecting that point to the sphere's center. It follows easily that the 3-sphere is a principal circle bundle over the 2-sphere, and this is the Hopf fibration.
To make this more explicit, there are two approaches: the group Spin(3) can either be identified with the group Sp(1) of unit quaternions, or with the special unitary group SU(2).
In the first approach, a vector ("x"1, "x"2, "x"3, "x"4) in R4 is interpreted as a quaternion "q" ∈ H by writing
formula_22
The 3-sphere is then identified with the versors, the quaternions of unit norm, those "q" ∈ H for which |"q"|² = 1, where |"q"|² = "q q"∗, which is equal to "x"1² + "x"2² + "x"3² + "x"4² for "q" as above.
On the other hand, a vector ("y"1, "y"2, "y"3) in R3 can be interpreted as a pure quaternion
formula_23
Then, as is well-known since , the mapping
formula_24
is a rotation in R3: indeed it is clearly an isometry, since |"q p q"∗|² = "q p q"∗ "q p"∗ "q"∗ = "q p p"∗ "q"∗ = |"p"|², and it is not hard to check that it preserves orientation.
In fact, this identifies the group of versors with the group of rotations of R3, modulo the fact that the versors "q" and −"q" determine the same rotation. As noted above, the rotations act transitively on "S"2, and the set of versors "q" which fix a given right versor "p" have the form "q" = "u" + "v" "p", where "u" and "v" are real numbers with "u"² + "v"² = 1. This is a circle subgroup. For concreteness, one can take "p" = k, and then the Hopf fibration can be defined as the map sending a versor "ω" to "ω" k "ω"∗. All the quaternions "ωq", where "q" is one of the circle of versors that fix "k", get mapped to the same thing (which happens to be one of the two 180° rotations rotating "k" to the same place as "ω" does).
Another way to look at this fibration is that every versor ω moves the plane spanned by {1, "k"} to a new plane spanned by {"ω", "ωk"}. Any quaternion "ωq", where "q" is one of the circle of versors that fix "k", will have the same effect. We put all these into one fibre, and the fibres can be mapped one-to-one to the 2-sphere of 180° rotations which is the range of "ωkω"*.
This approach is related to the direct construction by identifying a quaternion "q" = "x"1 + i "x"2 + j "x"3 + k "x"4 with the 2×2 matrix:
formula_25
This identifies the group of versors with SU(2), and the imaginary quaternions with the skew-hermitian 2×2 matrices (isomorphic to C × R).
Explicit formulae.
The rotation induced by a unit quaternion "q" = "w" + i "x" + j "y" + k "z" is given explicitly by the orthogonal matrix
formula_26
Here we find an explicit real formula for the bundle projection by noting that the fixed unit vector along the "z" axis, (0,0,1), rotates to another unit vector,
formula_27
which is a continuous function of ("w", "x", "y", "z"). That is, the image of "q" is the point on the 2-sphere where it sends the unit vector along the "z" axis. The fiber for a given point on "S"2 consists of all those unit quaternions that send the unit vector there.
We can also write an explicit formula for the fiber over a point ("a", "b", "c") in "S"2. Multiplication of unit quaternions produces composition of rotations, and
formula_28
is a rotation by 2"θ" around the "z" axis. As "θ" varies, this sweeps out a great circle of "S"3, our prototypical fiber. So long as the base point, ("a", "b", "c"), is not the antipode, (0, 0, −1), the quaternion
formula_29
will send (0, 0, 1) to ("a", "b", "c"). Thus the fiber of ("a", "b", "c") is given by quaternions of the form "q"("a", "b", "c")"q""θ", which are the "S"3 points
formula_30
Since multiplication by "q"("a","b","c") acts as a rotation of quaternion space, the fiber is not merely a topological circle, it is a geometric circle.
The final fiber, for (0, 0, −1), can be given by defining "q"(0,0,−1) to equal i, producing
formula_31
which completes the bundle. But note that this one-to-one mapping between "S"3 and "S"2×"S"1 is not continuous on this circle, reflecting the fact that "S"3 is not topologically equivalent to "S"2×"S"1.
Thus, a simple way of visualizing the Hopf fibration is as follows. Any point on the 3-sphere is equivalent to a quaternion, which in turn is equivalent to a particular rotation of a Cartesian coordinate frame in three dimensions. The set of all possible quaternions produces the set of all possible rotations, which moves the tip of one unit vector of such a coordinate frame (say, the z vector) to all possible points on a unit 2-sphere. However, fixing the tip of the z vector does not specify the rotation fully; a further rotation is possible about the z-axis. Thus, the 3-sphere is mapped onto the 2-sphere, plus a single rotation.
The rotation can be represented using the Euler angles "θ", "φ", and "ψ". The Hopf mapping maps the rotation to the point on the 2-sphere given by θ and φ, and the associated circle is parametrized by ψ. Note that when θ = π the Euler angles φ and ψ are not well defined individually, so we do not have a one-to-one mapping (or a one-to-two mapping) between the 3-torus of ("θ", "φ", "ψ") and "S"3.
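The quaternionic description can likewise be checked numerically. The following sketch (illustrative only) normalises an arbitrary quaternion to a versor, projects it to the 2-sphere using the rotated unit vector formula above, and verifies that right multiplication by the fiber element formula_28 does not change the projected point, so that the whole circle of such products lies in a single fiber:
#include <cmath>
#include <cstdio>

// A quaternion w + i x + j y + k z stored as {w, x, y, z}.
struct Quat { double w, x, y, z; };

// Hamilton product of two quaternions.
static Quat mul(const Quat &a, const Quat &b)
{
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Bundle projection: image of the unit vector along the z axis under the
// rotation induced by q, i.e. (2(xz+wy), 2(yz-wx), 1-2(x^2+y^2)).
static void project(const Quat &q, double out[3])
{
    out[0] = 2.0 * (q.x * q.z + q.w * q.y);
    out[1] = 2.0 * (q.y * q.z - q.w * q.x);
    out[2] = 1.0 - 2.0 * (q.x * q.x + q.y * q.y);
}

int main()
{
    // An arbitrary quaternion, normalised to a versor.
    Quat q = { 0.3, -0.5, 0.7, 0.2 };
    double n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    q = { q.w/n, q.x/n, q.y/n, q.z/n };

    // q_theta = cos(theta) + k sin(theta) fixes the z axis, so q and q*q_theta
    // must project to the same point of the 2-sphere.
    double theta = 0.83;
    Quat qt = { std::cos(theta), 0.0, 0.0, std::sin(theta) };
    Quat q2 = mul(q, qt);

    double a[3], b[3];
    project(q, a);
    project(q2, b);
    std::printf("same fiber point: (%.6f, %.6f, %.6f) vs (%.6f, %.6f, %.6f)\n",
                a[0], a[1], a[2], b[0], b[1], b[2]);
    return 0;
}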
Fluid mechanics.
If the Hopf fibration is treated as a vector field in 3 dimensional space then there is a solution to the (compressible, non-viscous) Navier–Stokes equations of fluid dynamics in which the fluid flows along the circles of the projection of the Hopf fibration in 3 dimensional space. The size of the velocities, the density and the pressure can be chosen at each point to satisfy the equations. All these quantities fall to zero going away from the centre. If a is the distance to the inner ring, the velocities, pressure and density fields are given by:
formula_32
formula_33
formula_34
for arbitrary constants "A" and "B". Similar patterns of fields are found as soliton solutions of magnetohydrodynamics:
Generalizations.
The Hopf construction, viewed as a fiber bundle "p": "S"3 → CP"1", admits several generalizations, which are also often known as Hopf fibrations. First, one can replace the projective line by an "n"-dimensional projective space. Second, one can replace the complex numbers by any (real) division algebra, including (for "n" = 1) the octonions.
Real Hopf fibrations.
A real version of the Hopf fibration is obtained by regarding the circle "S"1 as a subset of R2 in the usual way and by
identifying antipodal points. This gives a fiber bundle "S"1 → RP1 over the real projective line with fiber "S"0 = {1, −1}. Just as CP1 is diffeomorphic to a sphere, RP1 is diffeomorphic to a circle.
More generally, the "n"-sphere "S""n" fibers over real projective space RP"n" with fiber "S"0.
Complex Hopf fibrations.
The Hopf construction gives circle bundles "p" : "S"2"n"+1 → CP"n" over complex projective space. This is actually the restriction of the tautological line bundle over CP"n" to the unit sphere in C"n"+1.
Quaternionic Hopf fibrations.
Similarly, one can regard "S"4"n+3" as lying in H"n+1" (quaternionic "n"-space) and factor out by unit quaternion (= "S"3) multiplication to get the quaternionic projective space HP"n". In particular, since "S"4 = HP1, there is a bundle "S"7 → "S"4 with fiber "S"3.
Octonionic Hopf fibrations.
A similar construction with the octonions yields a bundle "S"15 → "S"8 with fiber "S"7. But the sphere "S"31 does not fiber over "S"16 with fiber "S"15. One can regard "S"8 as the octonionic projective line OP1. Although one can also define an octonionic projective plane OP2, the sphere "S"23 does not fiber over OP2 with fiber "S"7.
Fibrations between spheres.
Sometimes the term "Hopf fibration" is restricted to the fibrations between spheres obtained above, which are
As a consequence of Adams's theorem, fiber bundles with spheres as total space, base space, and fiber can occur only in these dimensions.
Fiber bundles with similar properties, but different from the Hopf fibrations, were used by John Milnor to construct exotic spheres.
Geometry and applications.
The Hopf fibration has many implications, some purely attractive, others deeper. For example, stereographic projection "S"3 → R3 induces a remarkable structure in R3, which in turn illuminates the topology of the bundle . Stereographic projection preserves circles and maps the Hopf fibers to geometrically perfect circles in R3 which fill space. Here there is one exception: the Hopf circle containing the projection point maps to a straight line in R3 — a "circle through infinity".
The fibers over a circle of latitude on "S"2 form a torus in "S"3 (topologically, a torus is the product of two circles) and these project to nested tori in R3 which also fill space. The individual fibers map to linking Villarceau circles on these tori, with the exception of the circle through the projection point and the one through its opposite point: the former maps to a straight line, the latter to a unit circle perpendicular to, and centered on, this line, which may be viewed as a degenerate torus whose minor radius has shrunk to zero. Every other fiber image encircles the line as well, and so, by symmetry, each circle is linked through "every" circle, both in R3 and in "S"3. Two such linking circles form a Hopf link in R3.
Hopf proved that the Hopf map has Hopf invariant 1, and therefore is not null-homotopic. In fact it generates the homotopy group π3("S"2) and has infinite order.
In quantum mechanics, the Riemann sphere is known as the Bloch sphere, and the Hopf fibration describes the topological structure of a quantum mechanical two-level system or qubit. Similarly, the topology of a pair of entangled two-level systems is given by the Hopf fibration
formula_35
Moreover, the Hopf fibration is equivalent to the fiber bundle structure of the Dirac monopole.
The Hopf fibration has also found applications in robotics, where it was used to generate uniform samples on SO(3) for the probabilistic roadmap algorithm in motion planning. It has also found application in the automatic control of quadrotors.
|
[
{
"math_id": 0,
"text": "S^1 \\hookrightarrow S^3 \\xrightarrow{\\ p \\, } S^2, "
},
{
"math_id": 1,
"text": "S^0\\hookrightarrow S^1 \\to S^1,"
},
{
"math_id": 2,
"text": "S^1\\hookrightarrow S^3 \\to S^2,"
},
{
"math_id": 3,
"text": "S^3\\hookrightarrow S^7 \\to S^4,"
},
{
"math_id": 4,
"text": "S^7\\hookrightarrow S^{15}\\to S^8."
},
{
"math_id": 5,
"text": "(n+1)"
},
{
"math_id": 6,
"text": "S^n"
},
{
"math_id": 7,
"text": "(x_1, x_2,\\ldots , x_{n+ 1})"
},
{
"math_id": 8,
"text": "\\R^{n+1}"
},
{
"math_id": 9,
"text": "(x_1, x_2, x_3, x_4) \\leftrightarrow (z_0, z_1) = (x_1 + ix_2, x_3+ix_4)"
},
{
"math_id": 10,
"text": "(x_1, x_2, x_3) \\leftrightarrow (z, x) = (x_1 + ix_2, x_3)"
},
{
"math_id": 11,
"text": "p(z_0,z_1) = (2z_0z_1^{\\ast}, \\left|z_0 \\right|^2-\\left|z_1 \\right|^2)."
},
{
"math_id": 12,
"text": "2 z_{0} z_{1}^{\\ast} \\cdot 2 z_{0}^{\\ast} z_{1} + \n\\left( \\left| z_{0} \\right|^{2} - \\left| z_{1} \\right|^{2} \\right)^{2} = \n4 \\left| z_{0} \\right|^{2} \\left| z_{1} \\right|^{2} + \n\\left| z_{0} \\right|^{4} - 2 \\left| z_{0} \\right|^{2} \\left| z_{1} \\right|^{2} + \\left| z_{1} \\right|^{4} = \n\\left( \\left| z_{0} \\right|^{2} + \\left| z_{1} \\right|^{2} \\right)^{2} = 1"
},
{
"math_id": 13,
"text": "z_0 = e^{i\\,\\frac{\\xi_1+\\xi_2}{2}}\\sin\\eta "
},
{
"math_id": 14,
"text": "z_1 = e^{i\\,\\frac{\\xi_2-\\xi_1}{2}}\\cos\\eta. "
},
{
"math_id": 15,
"text": "x_1 = \\cos\\left(\\frac{\\xi_1+\\xi_2}{2}\\right)\\sin\\eta"
},
{
"math_id": 16,
"text": "x_2 = \\sin\\left(\\frac{\\xi_1+\\xi_2}{2}\\right)\\sin\\eta "
},
{
"math_id": 17,
"text": "x_3 = \\cos\\left(\\frac{\\xi_2-\\xi_1}{2}\\right)\\cos\\eta "
},
{
"math_id": 18,
"text": "x_4 = \\sin\\left(\\frac{\\xi_2-\\xi_1}{2}\\right)\\cos\\eta "
},
{
"math_id": 19,
"text": "z = \\cos(2\\eta)"
},
{
"math_id": 20,
"text": "x = \\sin(2\\eta)\\cos\\xi_1"
},
{
"math_id": 21,
"text": "y = \\sin(2\\eta)\\sin\\xi_1"
},
{
"math_id": 22,
"text": " q = x_1+\\mathbf{i}x_2+\\mathbf{j}x_3+\\mathbf{k}x_4.\\,\\!"
},
{
"math_id": 23,
"text": " p = \\mathbf{i}y_1+\\mathbf{j}y_2+\\mathbf{k}y_3. \\,\\!"
},
{
"math_id": 24,
"text": " p \\mapsto q p q^* \\,\\!"
},
{
"math_id": 25,
"text": "\\begin{bmatrix} x_1+\\mathbf i x_2 & x_3+\\mathbf i x_4 \\\\ -x_3+\\mathbf i x_4 & x_1-\\mathbf i x_2 \\end{bmatrix}.\\,\\!"
},
{
"math_id": 26,
"text": "\\begin{bmatrix}\n1-2(y^2+z^2) & 2(xy - wz) & 2(xz+wy)\\\\\n2(xy + wz) & 1-2(x^2+z^2) & 2(yz-wx)\\\\\n2(xz-wy) & 2(yz+wx) & 1-2(x^2+y^2)\n\\end{bmatrix} . "
},
{
"math_id": 27,
"text": " \\Big(2(xz+wy) , 2(yz-wx) , 1-2(x^2+y^2)\\Big) , \\,\\!"
},
{
"math_id": 28,
"text": "q_{\\theta} = \\cos \\theta + \\mathbf{k} \\sin \\theta"
},
{
"math_id": 29,
"text": " q_{(a,b,c)} = \\frac{1}{\\sqrt{2(1+c)}}(1+c-\\mathbf{i}b+\\mathbf{j}a) "
},
{
"math_id": 30,
"text": " \\frac{1}{\\sqrt{2(1+c)}}\n \\Big((1+c) \\cos (\\theta ),\n a \\sin (\\theta )-b \\cos (\\theta ),\n a \\cos (\\theta )+b \\sin (\\theta ),\n (1+c) \\sin (\\theta )\\Big) . \\,\\!"
},
{
"math_id": 31,
"text": " \\Big(0,\\cos (\\theta ),-\\sin (\\theta ),0\\Big),"
},
{
"math_id": 32,
"text": "\\mathbf{v}(x,y,z) = A \\left(a^2+x^2+y^2+z^2\\right)^{-2} \\left( 2(-ay+xz), 2(ax+yz) , a^2-x^2-y^2+z^2 \\right)"
},
{
"math_id": 33,
"text": "p(x,y,z) = -A^2B \\left(a^2+x^2+y^2+z^2\\right)^{-3},"
},
{
"math_id": 34,
"text": "\\rho(x,y,z) = 3B\\left(a^2+x^2+y^2+z^2\\right)^{-1}"
},
{
"math_id": 35,
"text": "S^3 \\hookrightarrow S^7\\to S^4."
}
] |
https://en.wikipedia.org/wiki?curid=580384
|
58039179
|
Aluthge transform
|
In mathematics and more precisely in functional analysis, the Aluthge transformation is an operation defined on the set of bounded operators of a Hilbert space. It was introduced by Ariyadasa Aluthge to study p-hyponormal linear operators.
Definition.
Let formula_0 be a Hilbert space and let formula_1 be the algebra of linear operators from formula_0 to formula_0. By the polar decomposition theorem, there exists a unique partial isometry formula_2 such that formula_3 and formula_4, where formula_5 is the square root of the operator formula_6. If formula_7 and formula_8 is its polar decomposition, the Aluthge transform of formula_9 is the operator formula_10 defined as:
formula_11
More generally, for any real number formula_12, the formula_13-Aluthge transformation is defined as
formula_14
Example.
For vectors formula_15, let formula_16 denote the operator defined as
formula_17
An elementary calculation shows that if formula_18, then formula_19
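The closed form above can be checked numerically. The following sketch works over real 2-vectors and exploits the rank-one structure of formula_16: the modulus formula_5, its square root, and the partial isometry formula_2 are written down in closed form for this special case (so no general polar-decomposition routine is needed), the Aluthge transform formula_10 is computed as a product of 2×2 matrices, and the result is compared with the closed-form expression above:
#include <cmath>
#include <cstdio>

// Rank-one illustration over R^2 (not a general polar-decomposition routine):
// for T = x (tensor) y, i.e. T z = <z,y> x, the rank-one structure gives
//   |T|       = (|x|/|y|) y y^T,
//   |T|^(1/2) = sqrt(|x||y|) / |y|^2 * y y^T,
//   U         = x y^T / (|x| |y|)     (the partial isometry with T = U|T|),
// and the Aluthge transform |T|^(1/2) U |T|^(1/2) should equal
// (<x,y>/|y|^2) y y^T, as stated in the example above.
static void matmul(const double A[2][2], const double B[2][2], double C[2][2])
{
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
        {
            C[i][j] = 0.0;
            for (int k = 0; k < 2; ++k) C[i][j] += A[i][k] * B[k][j];
        }
}

int main()
{
    const double x[2] = { 1.0, 2.0 }, y[2] = { 3.0, -1.0 };
    const double nx = std::sqrt(x[0]*x[0] + x[1]*x[1]);
    const double ny = std::sqrt(y[0]*y[0] + y[1]*y[1]);
    const double xy = x[0]*y[0] + x[1]*y[1];               // <x, y>

    double U[2][2], S[2][2], D[2][2], tmp[2][2], closed[2][2];
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
        {
            U[i][j]      = x[i] * y[j] / (nx * ny);
            S[i][j]      = std::sqrt(nx * ny) / (ny * ny) * y[i] * y[j];  // |T|^(1/2)
            closed[i][j] = xy / (ny * ny) * y[i] * y[j];                  // claimed Delta(T)
        }

    matmul(S, U, tmp);
    matmul(tmp, S, D);                                      // Delta(T) = |T|^(1/2) U |T|^(1/2)

    double err = 0.0;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j) err += std::fabs(D[i][j] - closed[i][j]);
    std::printf("difference between computed and closed-form Aluthge transform: %.3e\n", err);
    return 0;
}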
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "B(H)"
},
{
"math_id": 2,
"text": "U"
},
{
"math_id": 3,
"text": "T=U|T|"
},
{
"math_id": 4,
"text": "\\ker(U)\\supset\\ker(T)"
},
{
"math_id": 5,
"text": "|T|"
},
{
"math_id": 6,
"text": " T^*T"
},
{
"math_id": 7,
"text": "T\\in B(H)"
},
{
"math_id": 8,
"text": " T=U|T|"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "\\Delta(T)"
},
{
"math_id": 11,
"text": "\\Delta(T)=|T|^{\\frac12}U|T|^{\\frac12}."
},
{
"math_id": 12,
"text": "\\lambda\\in [0,1]"
},
{
"math_id": 13,
"text": "\\lambda"
},
{
"math_id": 14,
"text": "\\Delta_\\lambda(T):=|T|^{\\lambda}U|T|^{1-\\lambda}\\in B(H)."
},
{
"math_id": 15,
"text": "x,y \\in H"
},
{
"math_id": 16,
"text": "x\\otimes y"
},
{
"math_id": 17,
"text": "\\forall z\\in H\\quad x\\otimes y(z)=\\langle z,y\\rangle x."
},
{
"math_id": 18,
"text": "y\\ne0"
},
{
"math_id": 19,
"text": "\\Delta_\\lambda(x\\otimes y)=\\Delta(x\\otimes y)=\\frac{\\langle x,y\\rangle}{\\lVert y \\rVert^2} y\\otimes y."
}
] |
https://en.wikipedia.org/wiki?curid=58039179
|
58045
|
LL parser
|
Top-down parser that parses input from left to right
In computer science, an LL parser (Left-to-right, leftmost derivation) is a top-down parser for a restricted context-free language. It parses the input from Left to right, performing Leftmost derivation of the sentence.
An LL parser is called an LL("k") parser if it uses "k" tokens of lookahead when parsing a sentence. A grammar is called an LL("k") grammar if an LL("k") parser can be constructed from it. A formal language is called an LL("k") language if it has an LL("k") grammar. The set of LL("k") languages is properly contained in that of LL("k"+1) languages, for each "k" ≥ 0. A corollary of this is that not all context-free languages can be recognized by an LL("k") parser.
An LL parser is called LL-regular (LLR) if it parses an LL-regular language. The class of LLR grammars contains every LL(k) grammar for every k. For every LLR grammar there exists an LLR parser that parses the grammar in linear time.
Two nomenclative outlier parser types are LL(*) and LL(finite). A parser is called LL(*)/LL(finite) if it uses the LL(*)/LL(finite) parsing strategy. LL(*) and LL(finite) parsers are functionally closer to PEG parsers. An LL(finite) parser can parse an arbitrary LL(k) grammar optimally in the amount of lookahead and lookahead comparisons. The class of grammars parsable by the LL(*) strategy encompasses some context-sensitive languages due to the use of syntactic and semantic predicates and has not been identified. It has been suggested that LL(*) parsers are better thought of as TDPL parsers.
Contrary to a popular misconception, LL(*) parsers are not LLR in general, and are guaranteed by construction to perform worse on average (super-linear versus linear time) and far worse in the worst case (exponential versus linear time).
LL grammars, particularly LL(1) grammars, are of great practical interest, as parsers for these grammars are easy to construct, and many computer languages are designed to be LL(1) for this reason. LL parsers may be table-based, i.e. similar to LR parsers, but LL grammars can also be parsed by recursive descent parsers. According to Waite and Goos (1984), LL("k") grammars were introduced by Stearns and Lewis (1969).
Overview.
For a given context-free grammar, the parser attempts to find the leftmost derivation.
Given an example grammar formula_0:
the leftmost derivation for formula_4 is:
formula_5
Generally, there are multiple possibilities when selecting a rule to expand the leftmost non-terminal. In step 2 of the previous example, the parser must choose whether to apply rule 2 or rule 3:
formula_6
To be efficient, the parser must be able to make this choice deterministically when possible, without backtracking. For some grammars, it can do this by peeking at the unread input (without reading). In our example, if the parser knows that the next unread symbol is formula_7, the only correct rule that can be used is 2.
Generally, an formula_8 parser can look ahead at formula_9 symbols. However, given a grammar, the problem of determining if there exists a formula_8 parser for some formula_9 that recognizes it is undecidable. For each formula_9, there is a language that cannot be recognized by an formula_8 parser, but can be by an formula_10.
We can use the above analysis to give the following formal definition:
Let formula_0 be a context-free grammar and formula_11. We say that formula_0 is formula_8, if and only if for any two leftmost derivations:
the following condition holds: if the prefix of the string formula_14 of length formula_9 equals the prefix of the string formula_15 of length formula_9, then formula_16.
In this definition, formula_17 is the start symbol and formula_18 any non-terminal. The already derived input formula_19, and the yet unread formula_14 and formula_20, are strings of terminals. The Greek letters formula_21, formula_22 and formula_23 represent any string of both terminals and non-terminals (possibly empty). The prefix length corresponds to the lookahead buffer size, and the definition says that this buffer is enough to distinguish between any two derivations of different words.
Parser.
The formula_8 parser is a deterministic pushdown automaton with the ability to peek at the next formula_9 input symbols without reading. This peek capability can be emulated by storing the lookahead buffer contents in the finite state space, since both buffer and input alphabet are finite in size. As a result, this does not make the automaton more powerful, but it is a convenient abstraction.
The stack alphabet is formula_24, where:
The parser stack initially contains the starting symbol above the EOI: formula_28. During operation, the parser repeatedly replaces the symbol formula_29 on top of the stack:
If the last symbol to be removed from the stack is the EOI, the parsing is successful; the automaton accepts via an empty stack.
The states and the transition function are not explicitly given; they are specified (generated) using a more convenient "parse table" instead. The table provides the following mapping:
If the parser cannot perform a valid transition, the input is rejected (empty cells). To make the table more compact, only the non-terminal rows are commonly displayed, since the action is the same for terminals.
Concrete example.
Set up.
To explain an LL(1) parser's workings we will consider the following small LL(1) grammar:
1. S → F
2. S → ( S + F )
3. F → a
and parse the following input:
( a + a )
An LL(1) parsing table for a grammar has a row for each of the non-terminals and a column for each terminal (including the special terminal, represented here as $, that is used to indicate the end of the input stream).
Each cell of the table may point to at most one rule of the grammar (identified by its number). For example, in the parsing table for the above grammar, the cell for the non-terminal 'S' and terminal '(' points to the rule number 2:
The algorithm to construct a parsing table is described in a later section, but first let's see how the parser uses the parsing table to process its input.
Parsing procedure.
In each step, the parser reads the next-available symbol from the input stream, and the top-most symbol from the stack. If the input symbol and the stack-top symbol match, the parser discards them both, leaving only the unmatched symbols in the input stream and on the stack.
Thus, in its first step, the parser reads the input symbol '(' and the stack-top symbol 'S'. The parsing table instruction comes from the column headed by the input symbol '(' and the row headed by the stack-top symbol 'S'; this cell contains '2', which instructs the parser to apply rule (2). The parser rewrites 'S' to '( S + F )' on the stack by removing 'S' from the stack and pushing ')', 'F', '+', 'S', '(' onto it, and writes the rule number 2 to the output. The stack then becomes:
[ (, S, +, F, ), $ ]
In the second step, the parser removes the '(' from its input stream and from its stack, since they now match. The stack now becomes:
[ S, +, F, ), $ ]
Now the parser has an 'a' on its input stream and an 'S' as its stack top. The parsing table instructs it to apply rule (1) from the grammar and write the rule number 1 to the output stream. The stack becomes:
[ F, +, F, ), $ ]
The parser now has an 'a' on its input stream and an 'F' as its stack top. The parsing table instructs it to apply rule (3) from the grammar and write the rule number 3 to the output stream. The stack becomes:
[ a, +, F, ), $ ]
The parser now has an 'a' on the input stream and an 'a' at its stack top. Because they are the same, it removes the 'a' from the input stream and pops it from the top of the stack. The parser then has a '+' on the input stream and a '+' at the top of the stack; as with 'a', the symbol is popped from the stack and removed from the input stream. This results in:
[ F, ), $ ]
In the next three steps the parser will replace 'F' on the stack by 'a', write the rule number 3 to the output stream and remove the 'a' and ')' from both the stack and the input stream. The parser thus ends with '$' on both its stack and its input stream.
In this case the parser will report that it has accepted the input string and write the following list of rule numbers to the output stream:
[ 2, 1, 3, 3 ]
This is indeed a list of rules for a leftmost derivation of the input string, which is:
S → ( S + F ) → ( F + F ) → ( a + F ) → ( a + a )
Parser implementation in C++.
Below follows a C++ implementation of a table-based LL parser for the example language:
#include <iostream>
#include <map>
#include <stack>

enum Symbols {
    // the symbols:
    // Terminal symbols:
    TS_L_PARENS,    // (
    TS_R_PARENS,    // )
    TS_A,           // a
    TS_PLUS,        // +
    TS_EOS,         // $, in this case corresponds to '\0'
    TS_INVALID,     // invalid token

    // Non-terminal symbols:
    NTS_S,          // S
    NTS_F           // F
};

// Converts a valid token to the corresponding terminal symbol
Symbols lexer(char c)
{
    switch (c)
    {
        case '(':  return TS_L_PARENS;
        case ')':  return TS_R_PARENS;
        case 'a':  return TS_A;
        case '+':  return TS_PLUS;
        case '\0': return TS_EOS;      // end of stack: the $ terminal symbol
        default:   return TS_INVALID;
    }
}

int main(int argc, char **argv)
{
    using namespace std;

    if (argc < 2)
    {
        cout << "usage:\n\tll '(a+a)'" << endl;
        return 0;
    }

    // LL parser table, maps < non-terminal, terminal> pair to action
    map< Symbols, map<Symbols, int> > table;
    stack<Symbols> ss;  // symbol stack
    char *p;            // input buffer

    // initialize the symbols stack
    ss.push(TS_EOS);    // terminal, $
    ss.push(NTS_S);     // non-terminal, S

    // initialize the symbol stream cursor
    p = &argv[1][0];

    // set up the parsing table
    table[NTS_S][TS_L_PARENS] = 2;
    table[NTS_S][TS_A] = 1;
    table[NTS_F][TS_A] = 3;

    while (ss.size() > 0)
    {
        if (lexer(*p) == ss.top())
        {
            cout << "Matched symbols: " << lexer(*p) << endl;
            p++;
            ss.pop();
        }
        else
        {
            cout << "Rule " << table[ss.top()][lexer(*p)] << endl;
            switch (table[ss.top()][lexer(*p)])
            {
                case 1: // 1. S → F
                    ss.pop();
                    ss.push(NTS_F);         // F
                    break;

                case 2: // 2. S → ( S + F )
                    ss.pop();
                    ss.push(TS_R_PARENS);   // )
                    ss.push(NTS_F);         // F
                    ss.push(TS_PLUS);       // +
                    ss.push(NTS_S);         // S
                    ss.push(TS_L_PARENS);   // (
                    break;

                case 3: // 3. F → a
                    ss.pop();
                    ss.push(TS_A);          // a
                    break;

                default:
                    cout << "parsing table defaulted" << endl;
                    return 0;
            }
        }
    }

    cout << "finished parsing" << endl;

    return 0;
}
Parser implementation in Python.
# Stack entry kinds
TERM = 0
RULE = 1

# Terminal symbols
T_LPAR = 0
T_RPAR = 1
T_A = 2
T_PLUS = 3
T_END = 4
T_INVALID = 5

# Non-terminal symbols
N_S = 0
N_F = 1

# Parsing table: rows are non-terminals, columns are terminals,
# entries are rule indices into RULES (-1 means error)
table = [[ 1, -1, 0, -1, -1, -1],
         [-1, -1, 2, -1, -1, -1]]

RULES = [[(RULE, N_F)],
         [(TERM, T_LPAR), (RULE, N_S), (TERM, T_PLUS), (RULE, N_F), (TERM, T_RPAR)],
         [(TERM, T_A)]]

stack = [(TERM, T_END), (RULE, N_S)]

def lexical_analysis(inputstring: str) -> list:
    print("Lexical analysis")
    tokens = []
    for c in inputstring:
        if c == "+": tokens.append(T_PLUS)
        elif c == "(": tokens.append(T_LPAR)
        elif c == ")": tokens.append(T_RPAR)
        elif c == "a": tokens.append(T_A)
        else: tokens.append(T_INVALID)
    tokens.append(T_END)
    print(tokens)
    return tokens

def syntactic_analysis(tokens: list) -> None:
    print("Syntactic analysis")
    position = 0
    while len(stack) > 0:
        (stype, svalue) = stack.pop()
        token = tokens[position]
        if stype == TERM:
            if svalue == token:
                position += 1
                print("pop", svalue)
                if token == T_END:
                    print("input accepted")
            else:
                print("bad term on input:", token)
                break
        elif stype == RULE:
            print("svalue", svalue, "token", token)
            rule = table[svalue][token]
            print("rule", rule)
            for r in reversed(RULES[rule]):
                stack.append(r)
        print("stack", stack)

inputstring = "(a+a)"
syntactic_analysis(lexical_analysis(inputstring))
Remarks.
As can be seen from the example, the parser performs three types of steps depending on whether the top of the stack is a nonterminal, a terminal or the special symbol $:
If the top is a nonterminal, the parser looks up in the parsing table, on the basis of this nonterminal and the symbol on the input stream, which rule of the grammar to use to replace the nonterminal on the stack, and writes the number of that rule to the output stream; if the table indicates no such rule, it reports an error and stops.
If the top is a terminal, the parser compares it to the symbol on the input stream; if they are equal, both are removed, otherwise it reports an error and stops.
If the top is $, the parser reports success if the input stream is also at $, and an error otherwise; in either case it stops.
These steps are repeated until the parser stops, and then it will have either completely parsed the input and written a leftmost derivation to the output stream or it will have reported an error.
Constructing an LL(1) parsing table.
In order to fill the parsing table, we have to establish what grammar rule the parser should choose if it sees a nonterminal "A" on the top of its stack and a symbol "a" on its input stream.
It is easy to see that such a rule should be of the form "A" → "w" and that the language corresponding to "w" should have at least one string starting with "a".
For this purpose we define the "First-set" of "w", written here as Fi("w"), as the set of terminals that can be found at the start of some string in "w", plus ε if the empty string also belongs to "w".
Given a grammar with the rules "A"1 → "w"1, …, "A""n" → "w""n", we can compute the Fi("w""i") and Fi("A""i") for every rule as follows:
The result is the least fixed point solution to the following system:
where, for sets of words U and V, the truncated product is defined by formula_38, and w:1 denotes the initial length-1 prefix of words w of length 2 or more, or w itself if w has length 0 or 1.
Unfortunately, the First-sets are not sufficient to compute the parsing table.
This is because a right-hand side "w" of a rule might ultimately be rewritten to the empty string.
So the parser should also use the rule "A" → "w" if ε is in Fi("w") and it sees on the input stream a symbol that could follow "A". Therefore, we also need the "Follow-set" of "A", written as Fo("A") here, which is defined as the set of terminals "a" such that there is a string of symbols "αAaβ" that can be derived from the start symbol. We use $ as a special terminal indicating end of input stream, and "S" as start symbol.
Computing the Follow-sets for the nonterminals in a grammar can be done as follows:
This provides the least fixed point solution to the following system:
Now we can define exactly which rules will appear where in the parsing table.
If "T"["A", "a"] denotes the entry in the table for nonterminal "A" and terminal "a", then
"T"["A","a"] contains the rule "A" → "w" if and only if
"a" is in Fi("w") or
ε is in Fi("w") and "a" is in Fo("A").
Equivalently: "T"["A", "a"] contains the rule "A" → "w" for each "a" ∈ Fi("w")·Fo("A").
If the table contains at most one rule in every one of its cells, then the parser will always know which rule it has to use and can therefore parse strings without backtracking.
It is in precisely this case that the grammar is called an "LL"(1) "grammar".
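For illustration, the following Python sketch (not part of the construction above; names such as first_of_seq and the dictionary layout are arbitrary) computes the Fi and Fo sets by fixed-point iteration and derives the parsing table for the example grammar used earlier:

# Rules are numbered 1..3 as in the example: 1: S → F, 2: S → ( S + F ), 3: F → a
RULES = {1: ('S', ['F']),
         2: ('S', ['(', 'S', '+', 'F', ')']),
         3: ('F', ['a'])}
NONTERMINALS = {'S', 'F'}
EPS = 'eps'   # stands for the empty string ε

def first_of_seq(seq, first):
    """Fi of a sequence of grammar symbols, given the current Fi sets."""
    out = set()
    for sym in seq:
        f = first[sym] if sym in NONTERMINALS else {sym}
        out |= f - {EPS}
        if EPS not in f:
            return out
    out.add(EPS)   # every symbol of the sequence can derive ε
    return out

# least fixed point of the Fi sets
first = {n: set() for n in NONTERMINALS}
changed = True
while changed:
    changed = False
    for lhs, rhs in RULES.values():
        f = first_of_seq(rhs, first)
        if not f <= first[lhs]:
            first[lhs] |= f
            changed = True

# least fixed point of the Fo sets ($ marks the end of input)
follow = {n: set() for n in NONTERMINALS}
follow['S'].add('$')
changed = True
while changed:
    changed = False
    for lhs, rhs in RULES.values():
        for i, sym in enumerate(rhs):
            if sym in NONTERMINALS:
                f = first_of_seq(rhs[i + 1:], first)
                new = (f - {EPS}) | (follow[lhs] if EPS in f else set())
                if not new <= follow[sym]:
                    follow[sym] |= new
                    changed = True

# T[A, a] contains rule A → w iff a is in Fi(w), or ε is in Fi(w) and a is in Fo(A)
table = {}
for num, (lhs, rhs) in RULES.items():
    f = first_of_seq(rhs, first)
    for a in (f - {EPS}) | (follow[lhs] if EPS in f else set()):
        table[(lhs, a)] = num

print(table)   # {('S', 'a'): 1, ('S', '('): 2, ('F', 'a'): 3}

Running the sketch reproduces the three filled cells of the parsing table used in the concrete example above.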
Constructing an LL("k") parsing table.
The construction for LL(1) parsers can be adapted to LL("k") for "k" > 1 with the following modifications:
where an input is suffixed by k end-markers $, to fully account for the k lookahead context. This approach eliminates special cases for ε, and can be applied equally well in the LL(1) case.
Until the mid-1990s, it was widely believed that LL("k") parsing (for "k" > 1) was impractical, since the parser table would have exponential size in "k" in the worst case. This perception changed gradually after the release of the Purdue Compiler Construction Tool Set around 1992, when it was demonstrated that many programming languages can be parsed efficiently by an LL("k") parser without triggering the worst-case behavior of the parser. Moreover, in certain cases LL parsing is feasible even with unlimited lookahead. By contrast, traditional parser generators like yacc use LALR(1) parser tables to construct a restricted LR parser with a fixed one-token lookahead.
Conflicts.
As described in the introduction, LL(1) parsers recognize languages that have LL(1) grammars, which are a special case of context-free grammars; LL(1) parsers cannot recognize all context-free languages. The LL(1) languages are a proper subset of the LR(1) languages, which in turn are a proper subset of all context-free languages. In order for a context-free grammar to be an LL(1) grammar, certain conflicts must not arise, which we describe in this section.
Terminology.
Let "A" be a non-terminal. FIRST("A") is (defined to be) the set of terminals that can appear in the first position of any string derived from "A". FOLLOW("A") is the union over:
LL(1) conflicts.
There are two main types of LL(1) conflicts:
FIRST/FIRST conflict.
The FIRST sets of two different grammar rules for the same non-terminal intersect.
An example of an LL(1) FIRST/FIRST conflict:
S -> E | E 'a'
E -> 'b' | ε
FIRST("E") = {"b", ε} and FIRST("E" "a") = {"b", "a"}, so when the table is drawn, there is conflict under terminal "b" of production rule "S".
Special case: left recursion.
Left recursion will cause a FIRST/FIRST conflict with all alternatives.
E -> E '+' term | alt1 | alt2
FIRST/FOLLOW conflict.
The FIRST and FOLLOW set of a grammar rule overlap. With an empty string (ε) in the FIRST set, it is unknown which alternative to select.
An example of an LL(1) conflict:
S -> A 'a' 'b'
A -> 'a' | ε
The FIRST set of "A" is {"a", ε}, and the FOLLOW set is {"a"}.
Solutions to LL(1) conflicts.
Left factoring.
A common left-factor is "factored out".
A -> X | X Y Z
becomes
A -> X B
B -> Y Z | ε
Can be applied when two alternatives start with the same symbol like a FIRST/FIRST conflict.
Another example (more complex) using above FIRST/FIRST conflict example:
S -> E | E 'a'
E -> 'b' | ε
becomes (merging into a single non-terminal)
S -> 'b' | ε | 'b' 'a' | 'a'
then through left-factoring, becomes
S -> 'b' E | E
E -> 'a' | ε
Substitution.
Substituting a rule into another rule to remove indirect or FIRST/FOLLOW conflicts.
Note that this may cause a FIRST/FIRST conflict.
Left recursion removal.
For a general method, see removing left recursion.
A simple example for left recursion removal:
The following production rule has left recursion on E
E -> E '+' T
E -> T
This rule is nothing but a list of Ts separated by '+'. In regular-expression form: T ('+' T)*.
So the rule could be rewritten as
E -> T Z
Z -> '+' T Z
Z -> ε
Now there is no left recursion and no conflicts on either of the rules.
However, not all context-free grammars have an equivalent LL(k)-grammar, e.g.:
S -> A | B
A -> 'a' A 'b' | ε
B -> 'a' B 'b' 'b' | ε
It can be shown that there does not exist any LL(k)-grammar accepting the language generated by this grammar.
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "S \\to E"
},
{
"math_id": 2,
"text": "E \\to ( E + E )"
},
{
"math_id": 3,
"text": "E \\to i"
},
{
"math_id": 4,
"text": "w = ((i+i)+i)"
},
{
"math_id": 5,
"text": "S\\ \\overset{(1)}{\\Rightarrow}\\ E\\ \\overset{(2)}{\\Rightarrow}\\ (E+E)\\ \\overset{(2)}{\\Rightarrow}\\ ((E+E)+E)\\ \\overset{(3)}{\\Rightarrow}\\ ((i+E)+E)\\ \\overset{(3)}{\\Rightarrow}\\ ((i+i)+E)\\ \\overset{(3)}{\\Rightarrow}\\ ((i+i)+i)"
},
{
"math_id": 6,
"text": "S\\ \\overset{(1)}{\\Rightarrow}\\ E\\ \\overset{(?)}{\\Rightarrow}\\ ?"
},
{
"math_id": 7,
"text": "("
},
{
"math_id": 8,
"text": "LL(k)"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "LL(k+1)"
},
{
"math_id": 11,
"text": "k \\ge 1"
},
{
"math_id": 12,
"text": "S\\ \\Rightarrow\\ \\cdots\\ \\Rightarrow\\ wA\\alpha\\ \\Rightarrow\\ \\cdots\\ \\Rightarrow\\ w\\beta\\alpha\\ \\Rightarrow\\ \\cdots\\ \\Rightarrow\\ wu"
},
{
"math_id": 13,
"text": "S\\ \\Rightarrow\\ \\cdots\\ \\Rightarrow\\ wA\\alpha\\ \\Rightarrow\\ \\cdots\\ \\Rightarrow\\ w\\gamma\\alpha\\ \\Rightarrow\\ \\cdots\\ \\Rightarrow\\ wv"
},
{
"math_id": 14,
"text": "u"
},
{
"math_id": 15,
"text": "v "
},
{
"math_id": 16,
"text": "\\beta\\ =\\ \\gamma"
},
{
"math_id": 17,
"text": "S"
},
{
"math_id": 18,
"text": "A"
},
{
"math_id": 19,
"text": "w"
},
{
"math_id": 20,
"text": "v"
},
{
"math_id": 21,
"text": "\\alpha"
},
{
"math_id": 22,
"text": "\\beta"
},
{
"math_id": 23,
"text": "\\gamma"
},
{
"math_id": 24,
"text": "\\Gamma = N \\cup \\Sigma"
},
{
"math_id": 25,
"text": "N"
},
{
"math_id": 26,
"text": "\\Sigma"
},
{
"math_id": 27,
"text": "\\$"
},
{
"math_id": 28,
"text": "[\\ S\\ \\$\\ ]"
},
{
"math_id": 29,
"text": "X"
},
{
"math_id": 30,
"text": "X \\in N"
},
{
"math_id": 31,
"text": "X \\to \\alpha"
},
{
"math_id": 32,
"text": "\\epsilon"
},
{
"math_id": 33,
"text": "\\lambda"
},
{
"math_id": 34,
"text": "X \\in \\Sigma"
},
{
"math_id": 35,
"text": "x"
},
{
"math_id": 36,
"text": "x \\neq X"
},
{
"math_id": 37,
"text": "|w| \\le k"
},
{
"math_id": 38,
"text": "U \\cdot V = \\{ (uv):1 \\mid u \\in U, v \\in V \\}"
},
{
"math_id": 39,
"text": "U \\cdot V = \\{ (uv):k \\mid u \\in U, v \\in V \\}"
},
{
"math_id": 40,
"text": "\\cdot"
}
] |
https://en.wikipedia.org/wiki?curid=58045
|
58048512
|
Active reflection coefficient
|
The active reflection coefficient (ARC) is the reflection coefficient for a single antenna element in an array antenna, in the presence of mutual coupling. The active reflection coefficient is a function of frequency in addition to the excitation of the neighboring cells. In computational electromagnetics, the active reflection coefficient is usually determined from unit cell analysis in the frequency domain, where the phase shift over the unit cell (progressive phase shift used to steer the beam) is applied as a boundary condition.
It has been suggested that the name "scan reflection coefficient" is more appropriate than "active reflection coefficient"; however, the latter remains the most commonly used name.
Mathematical description.
General case.
The ARC for antenna element formula_0 in an array of formula_1 elements is calculated by:
formula_2
where formula_3 are the excitation coefficients and formula_4 are the coupling coefficients.
Linear array with specified scan angle.
In a linear array with inter element spacing formula_5, uniform amplitude tapering and scan angle formula_6, the following excitation coefficients are used: formula_7. By inserting this expression into the general equation above, we obtain:
formula_8
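As a numerical illustration (not taken from the references; the coupling matrix below is invented purely for demonstration), the expression above can be evaluated directly with Python/NumPy:

import numpy as np

def active_reflection_coefficient(S, m, k, a, theta0):
    """ARC of element m (0-based) in a uniformly excited linear array
    with element spacing a and scan angle theta0 (radians)."""
    N = S.shape[0]
    n = np.arange(N)
    excitation = np.exp(-1j * k * n * a * np.sin(theta0))   # a_n
    return np.sum(S[m, :] * excitation / excitation[m])

# made-up 3x3 coupling (S-parameter) matrix, for demonstration only
S = np.array([[0.10, 0.05, 0.02],
              [0.05, 0.10, 0.05],
              [0.02, 0.05, 0.10]], dtype=complex)
wavelength = 1.0
k = 2 * np.pi / wavelength
spacing = 0.5 * wavelength

gamma = active_reflection_coefficient(S, m=1, k=k, a=spacing, theta0=np.radians(30))
print(abs(gamma))   # magnitude of the active reflection coefficient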
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " m "
},
{
"math_id": 1,
"text": " N "
},
{
"math_id": 2,
"text": " \\Gamma_S = \\sum_{n = 1}^N S_{mn} \\frac{a_{n}}{a_{m}}, "
},
{
"math_id": 3,
"text": " a_n "
},
{
"math_id": 4,
"text": " S_{mn} "
},
{
"math_id": 5,
"text": " a "
},
{
"math_id": 6,
"text": " \\theta_0 "
},
{
"math_id": 7,
"text": " a_n = e^{-jkna \\sin\\theta_0}"
},
{
"math_id": 8,
"text": " \\Gamma_S(\\theta_0) = e^{jkma \\sin\\theta_0} \\sum_{n = 1}^N S_{mn} e^{-jkna \\sin\\theta_0}. "
}
] |
https://en.wikipedia.org/wiki?curid=58048512
|
58056511
|
Umberto Zannier
|
Italian mathematician
Umberto Zannier (born 25 May 1957, in Spilimbergo, Italy) is an Italian mathematician, specializing in number theory and Diophantine geometry.
Education.
Zannier earned a Laurea degree from the University of Pisa and studied at the Scuola Normale Superiore di Pisa, with a Ph.D. supervised by Enrico Bombieri.
Career.
Zannier was from 1983 to 1987 a researcher at the University of Padua, from 1987 to 1991 an associate professor at the University of Salerno, and from 1991 to 2003 a full professor at the Università IUAV di Venezia. From 2003 to the present he has been a Professor in Geometry at the Scuola Normale Superiore di Pisa.
In 2010 he gave the Hermann Weyl Lectures at the Institute for Advanced Study. He was a visiting professor at several institutions, including the Institut Henri Poincaré in Paris, the ETH Zurich, and the Erwin Schrödinger Institute in Vienna.
With Jonathan Pila he developed a method (now known as the Pila-Zannier method) of applying O-minimality to number-theoretical and algebro-geometric problems. Thus they gave a new proof of the Manin–Mumford conjecture (which was first proved by Michel Raynaud and Ehud Hrushovski). Zannier and Pietro Corvaja in 2002 gave a new proof of Siegel's theorem on integral points by using a new method based upon the subspace theorem.
Awards & Service.
Zannier was an Invited Speaker at the 4th European Mathematical Congress in Stockholm in 2004. Zannier was elected a corresponding member of the Istituto Veneto in 2004, a member of the Accademia dei Lincei in 2006, and a member of Academia Europaea in 2012. In 2014 he was an Invited Speaker of the International Congress of Mathematicians in Seoul.
In 2005 Zannier received the Mathematics Prize of the Accademia dei XL and in 2011 an Advanced Grant from the European Research Council (ERC). He is chief editor of the "Annali di Scuola Normale Superiore" and a co-editor of "Acta Arithmetica".
|
[
{
"math_id": 0,
"text": "d^{th}"
}
] |
https://en.wikipedia.org/wiki?curid=58056511
|
58058216
|
Dimension of a scheme
|
In algebraic geometry, the dimension of a scheme is a generalization of a dimension of an algebraic variety. Scheme theory emphasizes the relative point of view and, accordingly, the relative dimension of a morphism of schemes is also important.
Definition.
By definition, the dimension of a scheme "X" is the dimension of the underlying topological space: the supremum of the lengths "ℓ" of chains of irreducible closed subsets:
formula_0
In particular, if formula_1 is an affine scheme, then such chains correspond to chains of prime ideals (inclusion reversed) and so the dimension of "X" is precisely the Krull dimension of "A".
If "Y" is an irreducible closed subset of a scheme "X", then the codimension of "Y" in "X" is the supremum of the lengths "ℓ" of chains of irreducible closed subsets:
formula_2
An irreducible subset of "X" is an irreducible component of "X" if and only if the codimension of it in "X" is zero. If formula_1 is affine, then the codimension of "Y" in "X" is precisely the height of the prime ideal defining "Y" in "X".
formula_30
while "X" is irreducible.
Equidimensional scheme.
An equidimensional scheme (or, pure dimensional scheme) is a scheme all of whose irreducible components are of the same dimension (implicitly assuming the dimensions are all well-defined).
Examples.
All irreducible schemes are equidimensional.
In affine space, the union of a line and a point not on the line is "not" equidimensional. In general, if two closed subschemes of some scheme, neither containing the other, have unequal dimensions, then their union is not equidimensional.
If a scheme is smooth (for instance, étale) over Spec "k" for some field "k", then every "connected" component (which is then in fact an irreducible component), is equidimensional.
Relative dimension.
Let formula_31 be a morphism locally of finite type between two schemes formula_7 and formula_32. The relative dimension of formula_33 at a point formula_34 is the dimension of the fiber formula_35. If all the nonempty fibers are purely of the same dimension formula_36, then one says that formula_33 is of relative dimension formula_36.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\emptyset \\ne V_0 \\subsetneq V_1 \\subsetneq \\cdots \\subsetneq V_\\ell \\subset X."
},
{
"math_id": 1,
"text": "X = \\operatorname{Spec} A"
},
{
"math_id": 2,
"text": "Y = V_0 \\subsetneq V_1 \\subsetneq \\cdots \\subsetneq V_\\ell \\subset X."
},
{
"math_id": 3,
"text": "X = \\operatorname{Spec} k[x, y, z]/(xy, xz)"
},
{
"math_id": 4,
"text": "H =\n\\{ x = 0 \\} \\subset \\mathbb{A}^3"
},
{
"math_id": 5,
"text": "\\operatorname{codim}(x, X)"
},
{
"math_id": 6,
"text": "X - H"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "k(X)"
},
{
"math_id": 10,
"text": "U"
},
{
"math_id": 11,
"text": "\\dim U = \\dim X"
},
{
"math_id": 12,
"text": "X = \\mathbb{A}^1_R = \\operatorname{Spec}(R[t])"
},
{
"math_id": 13,
"text": "\\pi: X \\to \\operatorname{Spec}R"
},
{
"math_id": 14,
"text": "\\operatorname{Spec}(R) = \\{ s, \\eta \\}"
},
{
"math_id": 15,
"text": "s"
},
{
"math_id": 16,
"text": "\\eta"
},
{
"math_id": 17,
"text": "\\pi^{-1}(s), \\pi^{-1}(\\eta)"
},
{
"math_id": 18,
"text": "\\pi^{-1}(\\eta)"
},
{
"math_id": 19,
"text": "2 = 1 + \\dim R"
},
{
"math_id": 20,
"text": "\\mathfrak{m}_R"
},
{
"math_id": 21,
"text": "\\omega_R"
},
{
"math_id": 22,
"text": "R[t]"
},
{
"math_id": 23,
"text": "\\mathfrak{p}_1 = (\\omega_R t - 1)"
},
{
"math_id": 24,
"text": "\\mathfrak{p}_2 = "
},
{
"math_id": 25,
"text": "R[t] \\to R/\\mathfrak{m}_R, f \\mapsto f(0) \\bmod\\mathfrak{m}_R"
},
{
"math_id": 26,
"text": "\\mathfrak{p}_1"
},
{
"math_id": 27,
"text": "R[t]/(\\omega_R t - 1) = R[\\omega_R^{-1}] = "
},
{
"math_id": 28,
"text": "\\mathfrak{p}_2"
},
{
"math_id": 29,
"text": "\\mathfrak{m}_R[t] \\subsetneq \\mathfrak{p}_2"
},
{
"math_id": 30,
"text": "\\operatorname{codim}(\\mathfrak{p}_1, X) = 1, \\, \\operatorname{codim}(\\mathfrak{p}_2, X) = 2,"
},
{
"math_id": 31,
"text": "f: X\\rightarrow Y"
},
{
"math_id": 32,
"text": "Y"
},
{
"math_id": 33,
"text": "f"
},
{
"math_id": 34,
"text": "y \\in Y"
},
{
"math_id": 35,
"text": "f^{-1} (y)"
},
{
"math_id": 36,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=58058216
|
58062775
|
CPLEAR experiment
|
The CPLEAR experiment used the antiproton beam of the LEAR facility – Low-Energy Antiproton Ring which operated at CERN from 1982 to 1996 – to produce neutral kaons through proton-antiproton annihilation in order to study "CP", "T" and "CPT" violation in the neutral kaon system.
Background.
According to the theory of the Big Bang, matter and antimatter would have existed in the same amount at the beginning of the Universe. If this were true, particles and antiparticles would have annihilated each other, creating photons, and thus the Universe would have been composed of light only (about one particle of matter for 10^18 photons). However, only matter has remained, and at a rate of one billion times more particles than expected. What happened, then, for the antimatter to disappear in favor of matter? A possible answer to this question is baryogenesis, the hypothetical physical process that took place during the early universe that produced baryonic asymmetry, i.e. the imbalance of matter (baryons) and antimatter (antibaryons) in the observed universe. However, baryogenesis is only possible under the following conditions proposed by Andrei Sakharov in 1967: baryon number violation, C-symmetry and CP-symmetry violation, and interactions out of thermal equilibrium.
The first experimental test of CP violation came in 1964 with the Fitch-Cronin experiment. The experiment involved particles called neutral K-mesons, which fortuitously have the properties needed to test CP. First, as mesons, they are a combination of a quark and an anti-quark, in this case, down and antistrange, or anti-down and strange. Second, the two different particles have different CP values and different decay modes: K1 has CP = +1 and decays into two pions; K2 has CP = −1 and decays into three. Because decays with larger changes in mass occur more readily, the K1 decay happens 100 times faster than the K2 decay. This means that a sufficiently long beam of neutral kaons will become arbitrarily pure K2 after a sufficient amount of time. The Fitch-Cronin experiment exploits this. If all the K1s are allowed to decay out of a beam of mixed kaons, only K2 decays should be observed. If any K1 decays are found, it means that a K2 flipped to a K1, the CP for the particle flipped from −1 to +1, and CP was not conserved. The experiment resulted in an excess of 45±9 events around cos(θ) = 1 in the correct mass range for 2-pion decays. This means that for every decay of K2 into three pions, there are (2.0±0.4)×10^−3 decays into two pions. Because of this, neutral K mesons violate CP. The study of the ratio of neutral kaon and neutral anti-kaon production is thus an efficient tool to understand what happened in the early Universe that promoted the production of matter.
The experiment.
CPLEAR is a collaboration of about 100 scientists, coming from 17 institutions from 9 different countries. Accepted in 1985, the experiment took data from 1990 until 1996. Its main aim was to study CP, "T" and "CPT" symmetries in the neutral kaon system.
In addition, CPLEAR performed measurements about quantum coherence of wave functions, Bose-Einstein correlations in multi-pion states, regeneration of the short-lived kaon component in matter, the Einstein-Rosen-Podolsky paradox using entangled neutral-kaon pair states and the equivalence principle of general relativity.
Facility description.
The CPLEAR detector was able to determine the locations, the momenta and the charges of the tracks at the production of the neutral kaon and at its decay, thus visualizing the complete event.
Strangeness is not conserved under weak interactions, meaning that under the weak interaction a neutral kaon can transform into a neutral anti-kaon and vice versa. To study the asymmetries between neutral-kaon and anti-kaon decay rates in the various final states f (f = π+π−, π0π0, π+π−π0, π0π0π0, π"l"ν), the CPLEAR collaboration used the fact that the strangeness of the neutral kaon is tagged by the charge of the accompanying charged kaon. Time-reversal invariance would imply that all details of one of the transformations could be deduced from the other one, i.e. the probability for a kaon to oscillate into an anti-kaon would be equal to the one for the reverse process. The measurement of these probabilities required the knowledge of the strangeness of the neutral kaon at two different times of its life. Since the strangeness of the neutral kaon is given by the charge of the accompanying charged kaon, and is thus known for each event, it was observed that this symmetry was not respected, thereby proving "T" violation in the neutral kaon system under the weak interaction.
The neutral kaons are initially produced in the annihilation channels
which happen when the 10^6 antiprotons per second beam coming from the LEAR facility is stopped by a highly pressurized hydrogen gas target. The low momentum of the antiprotons and the high pressure made it possible to keep the size of the stopping region in the detector small. Since the proton-antiproton reaction happens at rest, the particles are produced isotropically, and as a consequence the detector has to have a near-4π symmetry. The whole detector was embedded in a 3.6 m long and 2 m diameter warm solenoidal magnet providing a 0.44 T uniform magnetic field.
The antiprotons were stopped using a pressurized hydrogen gas target. A hydrogen gas target was used instead of liquid hydrogen to minimize the amount of matter in the decay volume. The target initially had a radius of 7 cm and was held at a pressure of 16 bar. After it was changed in 1994, its radius was 1.1 cm at a pressure of 27 bar.
Layout of the detector.
The detector had to fulfill the specific requirements of the experiment and thus had to be able to:
Cylindrical tracking detectors together with a solenoid field were used to determine the charge signs, momenta and positions of the charged particles. They were followed by the particle identification detector (PID), whose role was to identify the charged kaon. It consisted of a Cherenkov detector, which carried out the kaon-pion separation, and scintillators, measuring the energy loss and the time of flight of the charged particles. It was also used for the electron-pion separation. The detection of photons produced in π0 decays was performed by ECAL, an outermost lead/gas sampling calorimeter, complementary to the PID by separating pions and electrons at higher momenta. Finally, hardwired processors (HWK) were used to analyze and select the events in a few microseconds, deleting the unwanted ones, by providing a full event reconstruction with sufficient precision.
|
[
{
"math_id": 0,
"text": "B"
}
] |
https://en.wikipedia.org/wiki?curid=58062775
|
580668
|
Pollard's rho algorithm
|
Integer factorization algorithm
Pollard's rho algorithm is an algorithm for integer factorization. It was invented by John Pollard in 1975. It uses only a small amount of space, and its expected running time is proportional to the square root of the smallest prime factor of the composite number being factorized.
Core ideas.
The algorithm is used to factorize a number formula_0, where formula_1 is a non-trivial factor. A polynomial modulo formula_2, called formula_3 (e.g., formula_4), is used to generate a pseudorandom sequence. It is important to note that formula_3 must be a polynomial. A starting value, say 2, is chosen, and the sequence continues as formula_5, formula_6, formula_7, etc. The sequence is related to another sequence formula_8. Since formula_1 is not known beforehand, this sequence cannot be explicitly computed in the algorithm. Yet in it lies the core idea of the algorithm.
Because the number of possible values for these sequences is finite, both the formula_9 sequence, which is mod formula_2, and formula_8 sequence will eventually repeat, even though these values are unknown. If the sequences were to behave like random numbers, the birthday paradox implies that the number of formula_10 before a repetition occurs would be expected to be formula_11, where formula_12 is the number of possible values. So the sequence formula_8 will likely repeat much earlier than the sequence formula_9. When one has found a formula_13 such that formula_14 but formula_15, the number formula_16 is a multiple of formula_1, so formula_1 has been found.
Once a sequence has a repeated value, the sequence will cycle, because each value depends only on the one before it. This structure of eventual cycling gives rise to the name "rho algorithm", owing to similarity to the shape of the Greek letter "ρ" when the values formula_17, formula_18, etc. are represented as nodes in a directed graph.
This is detected by Floyd's cycle-finding algorithm: two nodes formula_19 and formula_20 (i.e., formula_21 and formula_22) are kept. In each step, one moves to the next node in the sequence and the other moves forward by two nodes. After that, it is checked whether formula_23. If it is not 1, then this implies that there is a repetition in the formula_8 sequence (i.e. formula_24. This works because if the formula_21 is the same as formula_22, the difference between formula_21 and formula_22 is necessarily a multiple of formula_1. Although this always happens eventually, the resulting greatest common divisor (GCD) is a divisor of formula_2 other than 1. This may be formula_2 itself, since the two sequences might repeat at the same time. In this (uncommon) case the algorithm fails, and can be repeated with a different parameter.
Algorithm.
The algorithm takes as its inputs n, the integer to be factored; and g(x), a polynomial in x computed modulo n. In the original algorithm, formula_25, but nowadays it is more common to use formula_4. The output is either a non-trivial factor of n, or failure.
It performs the following steps:
Pseudocode for Pollard's rho algorithm
x ← 2 // starting value
y ← x
d ← 1
while d = 1:
x ← g(x)
y ← g(g(y))
d ← gcd(|x - y|, n)
if d = n:
return failure
else:
return d
Here x and y correspond to formula_21 and formula_22 in the previous section. Note that this algorithm may fail to find a nontrivial factor even when n is composite. In that case, the method can be tried again, using a starting value of "x" other than 2 (formula_26) or a different g(x), formula_27, with formula_28.
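For illustration, a direct Python transcription of the pseudocode above might read as follows (a sketch, not part of the original description; the function name pollard_rho and its keyword arguments are arbitrary):

from math import gcd

def pollard_rho(n, x0=2, c=1):
    """Return a non-trivial factor of n, or None on failure.
    Uses g(x) = (x*x + c) mod n and Floyd's cycle finding,
    mirroring the pseudocode above."""
    def g(v):
        return (v * v + c) % n
    x = y = x0
    d = 1
    while d == 1:
        x = g(x)
        y = g(g(y))
        d = gcd(abs(x - y), n)
    return None if d == n else d

print(pollard_rho(8051))    # 97, as in the worked example below
print(pollard_rho(10403))   # 101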
Example factorization.
Let formula_29 and formula_30.
Now 97 is a non-trivial factor of 8051. Starting values other than "x" = "y" = 2 may give the cofactor (83) instead of 97. One extra iteration is shown above to make it clear that y moves twice as fast as x. Note that even after a repetition, the GCD can return to 1.
Variants.
In 1980, Richard Brent published a faster variant of the rho algorithm. He used the same core ideas as Pollard but a different method of cycle detection, replacing Floyd's cycle-finding algorithm with the related Brent's cycle finding method.
A further improvement was made by Pollard and Brent. They observed that if formula_31, then also formula_32 for any positive integer b. In particular, instead of computing formula_33 at every step, it suffices to define z as the product of 100 consecutive formula_34 terms modulo n, and then compute a single formula_35. A major speed up results as 100 gcd steps are replaced with 99 multiplications modulo n and a single gcd. Occasionally it may cause the algorithm to fail by introducing a repeated factor, for instance when n is a square. But it then suffices to go back to the previous gcd term, where formula_36, and use the regular "ρ" algorithm from there.
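A minimal sketch of this batching idea (illustrative only; instead of stepping back to the previous gcd-free term, it simply redoes the last batch one step at a time whenever the batched gcd collapses to n) could look like:

from math import gcd

def pollard_rho_batched(n, x0=2, c=1, batch=100):
    """Pollard's rho with gcd's taken over products of `batch` differences."""
    def g(v):
        return (v * v + c) % n
    x = y = x0
    while True:
        xs, ys = x, y          # remember where this batch started
        q = 1
        for _ in range(batch):
            x = g(x)
            y = g(g(y))
            q = q * abs(x - y) % n
        d = gcd(q, n)
        if d == 1:
            continue
        if d != n:
            return d
        # batched gcd hit n: redo this batch one step at a time
        x, y = xs, ys
        for _ in range(batch):
            x = g(x)
            y = g(g(y))
            d = gcd(abs(x - y), n)
            if d > 1:
                return None if d == n else d

print(pollard_rho_batched(10403))   # 101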
Application.
The algorithm is very fast for numbers with small factors, but slower in cases where all factors are large. The "ρ" algorithm's most remarkable success was the 1980 factorization of the Fermat number "F"8 = 1238926361552897 × 93461639715357977769163558199606896584051237541638188580280321. The "ρ" algorithm was a good choice for "F"8 because the prime factor p = 1238926361552897 is much smaller than the other factor. The factorization took 2 hours on a UNIVAC 1100/42.
Example: factoring n = 10403 = 101 · 103.
The following table shows numbers produced by the algorithm, starting with formula_37 and using the polynomial formula_38.
The third and fourth columns of the table contain additional information not known by the algorithm.
They are included to show how the algorithm works.
The first repetition modulo 101 is 97 which occurs in step 17. The repetition is not detected until step 23, when formula_39. This causes formula_40 to be formula_41, and a factor is found.
Complexity.
If the pseudorandom number formula_42 occurring in the Pollard "ρ" algorithm were an actual random number, it would follow that success would be achieved half the time, by the birthday paradox in formula_43 iterations. It is believed that the same analysis applies as well to the actual rho algorithm, but this is a heuristic claim, and rigorous analysis of the algorithm remains open.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n = pq"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "g(x)"
},
{
"math_id": 4,
"text": "g(x) = (x^2 + 1) \\bmod n"
},
{
"math_id": 5,
"text": "x_1 = g(2)"
},
{
"math_id": 6,
"text": "x_2 = g(g(2))"
},
{
"math_id": 7,
"text": "x_3 = g(g(g(2)))"
},
{
"math_id": 8,
"text": "\\{x_k \\bmod p\\}"
},
{
"math_id": 9,
"text": "\\{x_k\\}"
},
{
"math_id": 10,
"text": "x_k"
},
{
"math_id": 11,
"text": "O(\\sqrt N)"
},
{
"math_id": 12,
"text": "N"
},
{
"math_id": 13,
"text": "k_1,k_2"
},
{
"math_id": 14,
"text": "x_{k_1}\\neq x_{k_2}"
},
{
"math_id": 15,
"text": "x_{k_1}\\equiv x_{k_2}\\bmod p"
},
{
"math_id": 16,
"text": "|x_{k_1}-x_{k_2}|"
},
{
"math_id": 17,
"text": "x_1 \\bmod p"
},
{
"math_id": 18,
"text": "x_2 \\bmod p"
},
{
"math_id": 19,
"text": "i"
},
{
"math_id": 20,
"text": "j"
},
{
"math_id": 21,
"text": "x_i"
},
{
"math_id": 22,
"text": "x_j"
},
{
"math_id": 23,
"text": "\\gcd(x_i - x_j, n) \\ne 1"
},
{
"math_id": 24,
"text": "x_i \\bmod p = x_j \\bmod p)"
},
{
"math_id": 25,
"text": "g(x) = (x^2 - 1) \\bmod n"
},
{
"math_id": 26,
"text": "0 \\leq x < n"
},
{
"math_id": 27,
"text": "g(x) = (x^2 + b) \\bmod n"
},
{
"math_id": 28,
"text": "1 \\leq b < n-2"
},
{
"math_id": 29,
"text": "n = 8051"
},
{
"math_id": 30,
"text": "g(x) = (x^2 + 1) \\bmod 8051"
},
{
"math_id": 31,
"text": "\\gcd(a,n) > 1"
},
{
"math_id": 32,
"text": "\\gcd(ab,n) > 1"
},
{
"math_id": 33,
"text": "\\gcd (|x-y|,n)"
},
{
"math_id": 34,
"text": "|x-y|"
},
{
"math_id": 35,
"text": "\\gcd(z,n)"
},
{
"math_id": 36,
"text": "\\gcd(z,n)=1"
},
{
"math_id": 37,
"text": "x=2"
},
{
"math_id": 38,
"text": "g(x) = (x^2 + 1) \\bmod 10403"
},
{
"math_id": 39,
"text": "x \\equiv y \\pmod{101}"
},
{
"math_id": 40,
"text": "\\gcd (x - y, n) = \\gcd (2799 - 9970, n)"
},
{
"math_id": 41,
"text": "p = 101"
},
{
"math_id": 42,
"text": "x = g(x)"
},
{
"math_id": 43,
"text": "O(\\sqrt p)\\le O(n^{1/4})"
}
] |
https://en.wikipedia.org/wiki?curid=580668
|
58071309
|
Unitary transformation (quantum mechanics)
|
Important mathematical operations in quantum mechanics
In quantum mechanics, the Schrödinger equation describes how a system changes with time. It does this by relating changes in the state of the system to the energy in the system (given by an operator called the Hamiltonian). Therefore, once the Hamiltonian is known, the time dynamics are in principle known. All that remains is to plug the Hamiltonian into the Schrödinger equation and solve for the system state as a function of time.
Often, however, the Schrödinger equation is difficult to solve (even with a computer). Therefore, physicists have developed mathematical techniques to simplify these problems and clarify what is happening physically. One such technique is to apply a unitary transformation to the Hamiltonian. Doing so can result in a simplified version of the Schrödinger equation which nonetheless has the same solution as the original.
Transformation.
A unitary transformation (or frame change) can be expressed in terms of a time-dependent Hamiltonian formula_0 and unitary operator formula_1. Under this change, the Hamiltonian transforms as:
formula_2.
The Schrödinger equation applies to the new Hamiltonian. Solutions to the untransformed and transformed equations are also related by formula_3. Specifically, if the wave function formula_4 satisfies the original equation, then formula_5 will satisfy the new equation.
Derivation.
Recall that by the definition of a unitary matrix, formula_6. Beginning with the Schrödinger equation,
formula_7,
we can therefore insert formula_8 at will. In particular, inserting it after formula_9 and also premultiplying both sides by formula_3, we get
formula_10.
Next, note that by the product rule,
formula_11.
Inserting another formula_8 and rearranging, we get
formula_12.
Finally, combining (1) and (2) above results in the desired transformation:
formula_13.
If we adopt the notation formula_14 to describe the transformed wave function, the equations can be written in a clearer form. For instance, formula_15 can be rewritten as
formula_16,
which can be rewritten in the form of the original Schrödinger equation,
formula_17
The original wave function can be recovered as formula_18.
Relation to the interaction picture.
Unitary transformations can be seen as a generalization of the interaction (Dirac) picture. In the latter approach, a Hamiltonian is broken into a time-independent part and a time-dependent part,
formula_19.
In this case, the Schrödinger equation becomes
formula_20, with formula_21.
The correspondence to a unitary transformation can be shown by choosing formula_22. As a result, formula_23
Using the notation from formula_24 above, our transformed Hamiltonian becomes
formula_25
First note that since formula_3 is a function of formula_26, the two must commute. Then
formula_27,
which takes care of the first term in the transformation in formula_28, i.e. formula_29. Next use the chain rule to calculate
formula_30
which cancels with the other formula_26. Evidently we are left with formula_31, yielding formula_32 as shown above.
When applying a general unitary transformation, however, it is not necessary that formula_0 be broken into parts, or even that formula_1 be a function of any part of the Hamiltonian.
Examples.
Rotating frame.
Consider an atom with two states, ground formula_33 and excited formula_34. The atom has a Hamiltonian formula_35, where formula_36 is the frequency of light associated with the g-e transition. Now suppose we illuminate the atom with a drive at frequency formula_37 which couples the two states, and that the time-dependent driven Hamiltonian is
formula_38
for some complex drive strength formula_39. Because of the competing frequency scales (formula_36, formula_37, and formula_39), it is difficult to anticipate the effect of the drive (see driven harmonic motion).
Without a drive, the phase of formula_34 would oscillate relative to formula_33. In the Bloch sphere representation of a two-state system, this corresponds to rotation around the z-axis. Conceptually, we can remove this component of the dynamics by entering a rotating frame of reference defined by the unitary transformation formula_40. Under this transformation, the Hamiltonian becomes
formula_41.
If the driving frequency is equal to the g-e transition's frequency, formula_42, resonance will occur and then the equation above reduces to
formula_43.
From this it is apparent, even without getting into details, that the dynamics will involve an oscillation between the ground and excited states at frequency formula_39.
As another limiting case, suppose the drive is far off-resonant, formula_44. We can figure out the dynamics in that case without solving the Schrödinger equation directly. Suppose the system starts in the ground state formula_33. Initially, the Hamiltonian will populate some component of formula_34. A small time later, however, it will populate roughly the same amount of formula_34 but with completely different phase. Thus the effect of an off-resonant drive will tend to cancel itself out. This can also be expressed by saying that an off-resonant drive is "rapidly rotating" in the frame of the atom.
These concepts are illustrated in the table below, where the sphere represents the Bloch sphere, the arrow represents the state of the atom, and the hand represents the drive.
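The frame change can also be checked symbolically. The following SymPy sketch (illustrative only, with formula_39 taken real and ħ set to 1) applies the transformation formula_2 to the driven two-level Hamiltonian above and reproduces the rotating-frame result:

import sympy as sp

# transition frequency ω, drive frequency ω_d, drive strength Ω (taken real), time t
w, wd, Om, t = sp.symbols('omega omega_d Omega t', real=True)

# basis ordering: |g>, |e>
H = sp.Matrix([[0, Om * sp.exp(sp.I * wd * t)],
               [Om * sp.exp(-sp.I * wd * t), w]])

# frame change U = exp(i ω t |e><e|)
U = sp.Matrix([[1, 0],
               [0, sp.exp(sp.I * w * t)]])

# transformed Hamiltonian: U H U^† + i (dU/dt) U^†, with ħ = 1
H_breve = sp.simplify(U * H * U.H + sp.I * U.diff(t) * U.H)
sp.pprint(H_breve)
# expected: [[0, Omega*exp(I*t*(omega_d - omega))],
#            [Omega*exp(-I*t*(omega_d - omega)), 0]]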
Displaced frame.
The example above could also have been analyzed in the interaction picture. The following example, however, is more difficult to analyze without the general formulation of unitary transformations. Consider two harmonic oscillators, between which we would like to engineer a beam splitter interaction,
formula_45.
This was achieved experimentally with two microwave cavity resonators serving as formula_46 and formula_47. Below, we sketch the analysis of a simplified version of this experiment.
In addition to the microwave cavities, the experiment also involved a transmon qubit, formula_48, coupled to both modes. The qubit is driven simultaneously at two frequencies, formula_49 and formula_50, for which formula_51.
formula_52
In addition, there are many fourth-order terms coupling the modes, but most of them can be neglected. In this experiment, two such terms which will become important are
formula_53.
(H.c. is shorthand for the Hermitian conjugate.) We can apply a displacement transformation, formula_54, to mode formula_48. For carefully chosen amplitudes, this transformation will cancel formula_55 while also displacing the ladder operator, formula_56. This leaves us with
formula_57.
Expanding this expression and dropping the rapidly rotating terms, we are left with the desired Hamiltonian,
formula_58.
Relation to the Baker–Campbell–Hausdorff formula.
It is common for the operators involved in unitary transformations to be written as exponentials of operators, formula_59, as seen above. Further, the operators in the exponentials commonly obey the relation formula_60, so that the transform of an operator formula_61 is formula_62. By now introducing the iterated commutator,
formula_63
we can use a special result of the Baker-Campbell-Hausdorff formula to write this transformation compactly as,
formula_64
or, in long form for completeness,
formula_65
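As a quick numerical check of this expansion (an illustrative sketch using NumPy/SciPy, not part of the article), one can compare formula_64 with a truncated partial sum of the commutator series for small random matrices:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
X = 0.05 * (A - A.conj().T)            # small anti-Hermitian X, so expm(X) is unitary
Y = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

exact = expm(X) @ Y @ expm(-X)

# partial sum of  sum_n [(X)^n, Y] / n!  built with the recursion
# [(X)^n, Y]/n! = [X, [(X)^(n-1), Y]/(n-1)!] / n
term = Y.copy()
series = Y.copy()
for n in range(1, 15):
    term = (X @ term - term @ X) / n
    series = series + term

print(np.max(np.abs(exact - series)))  # very small: agreement up to truncation and rounding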
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H(t)"
},
{
"math_id": 1,
"text": "U(t)"
},
{
"math_id": 2,
"text": "H \\to UH{U^\\dagger} + i\\hbar\\,{\\dot{U}U^\\dagger}\n=: \\breve{H} \\quad \\quad (0)"
},
{
"math_id": 3,
"text": "U"
},
{
"math_id": 4,
"text": "\\psi(t)"
},
{
"math_id": 5,
"text": "U\\psi(t)"
},
{
"math_id": 6,
"text": "U^\\dagger U = 1"
},
{
"math_id": 7,
"text": "\\dot{\\psi}=-\\frac{i}{\\hbar}H\\psi"
},
{
"math_id": 8,
"text": "U^\\dagger U"
},
{
"math_id": 9,
"text": "H/\\hbar"
},
{
"math_id": 10,
"text": "U\\dot{\\psi}=-\\frac{i}{\\hbar} \\left(UHU^\\dagger \\right) U\\psi\\quad\\quad (1)"
},
{
"math_id": 11,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\left(U\\psi\\right)=\\dot{U}\\psi+U\\dot{\\psi}"
},
{
"math_id": 12,
"text": "U\\dot{\\psi} =\n\\frac{\\mathrm d}{\\mathrm d t}\\Big(U\\psi\\Big)\n- \\dot{U}U^\\dagger U\\psi \\quad\\quad(2)"
},
{
"math_id": 13,
"text": "\\frac{\\mathrm d}{\\mathrm d t}\\Big(U \\psi\\Big) = \n-\\frac{i}{\\hbar}\\Big(UH{U^\\dagger} + i\\hbar\\, \\dot{U}{U^\\dagger}\\Big)\n\\Big(U\\psi\\Big) \\quad\\quad \\left(3\\right)"
},
{
"math_id": 14,
"text": "\\breve{\\psi} := U\\psi"
},
{
"math_id": 15,
"text": "(3)"
},
{
"math_id": 16,
"text": "\\frac{\\mathrm d}{\\mathrm d t}\\breve{\\psi} = \n-\\frac{i}{\\hbar} \\breve{H}\\breve{\\psi}\n\\quad\\quad \\left(4\\right)"
},
{
"math_id": 17,
"text": "\\breve{H}\\breve{\\psi} =\ni\\hbar{\\operatorname{d}\\!\\breve{\\psi}\\over\\operatorname{d}\\!t}."
},
{
"math_id": 18,
"text": "\\psi = U^{\\dagger} \\breve{\\psi}"
},
{
"math_id": 19,
"text": "H(t)=H_0 + V(t) \\quad \\quad (a)"
},
{
"math_id": 20,
"text": "\\dot{\\psi_I} =\n-\\frac{i}{\\hbar} \\left(e^{iH_0 t/\\hbar} V e^{-iH_0 t/\\hbar}\\right) \\psi_I"
},
{
"math_id": 21,
"text": "\\psi_I = e^{i H_0 t/\\hbar} \\psi"
},
{
"math_id": 22,
"text": "U(t)=\\exp\\left[{+i H_0 t / \\hbar}\\right]"
},
{
"math_id": 23,
"text": "{U^\\dagger}(t) = \\exp \\left[{-i H_{0} t}/\\hbar\\right]."
},
{
"math_id": 24,
"text": "(0)"
},
{
"math_id": 25,
"text": "\\breve{H} = U\\left[H_0 + V(t)\\right]U^{\\dagger} + i\\hbar \\dot{U}U^{\\dagger}\n\\quad \\quad (b)"
},
{
"math_id": 26,
"text": "H_0"
},
{
"math_id": 27,
"text": "UH_0U^\\dagger=H_0"
},
{
"math_id": 28,
"text": "(b)"
},
{
"math_id": 29,
"text": "\\breve{H} = H_0 + UV(t)U^{\\dagger} + i \\hbar \\dot{U}U^{\\dagger}"
},
{
"math_id": 30,
"text": "\\begin{align} i\\hbar \\dot{U}U^\\dagger & =\ni\\hbar \\left({\\operatorname{d}\\!U\\over\\operatorname{d}\\!t}\\right) e^{-iH_{0} t/\\hbar} \\\\\n& = i\\hbar \\Big(iH_{0}/\\hbar\\Big) e^{+iH_{0} t/\\hbar} e^{-iH_{0} t/\\hbar} \\\\\n& = i \\hbar \\left({i H_0}/\\hbar \\right) \\\\\n& = -H_{0}, \\\\ \\end{align}"
},
{
"math_id": 31,
"text": "\\breve{H} = UVU^\\dagger"
},
{
"math_id": 32,
"text": "\\dot{\\psi_{I}} = -\\frac{i}{\\hbar} U V U^{\\dagger} \\psi_I "
},
{
"math_id": 33,
"text": "|g\\rangle"
},
{
"math_id": 34,
"text": "|e\\rangle"
},
{
"math_id": 35,
"text": "H = \\hbar\\omega {|{e}\\rangle \\langle {e}|}"
},
{
"math_id": 36,
"text": "\\omega"
},
{
"math_id": 37,
"text": "\\omega_d"
},
{
"math_id": 38,
"text": "H/\\hbar=\\omega |e\\rangle\\langle e| + \\Omega\\ e^{i\\omega_d t}|g\\rangle\\langle e| + \\Omega^*\\ e^{-i\\omega_d t}|e\\rangle\\langle g|"
},
{
"math_id": 39,
"text": "\\Omega"
},
{
"math_id": 40,
"text": "U=e^{i\\omega t|e\\rangle\\langle e|}"
},
{
"math_id": 41,
"text": "H/\\hbar\\to \\Omega\\, e^{i(\\omega_d-\\omega)t} |g\\rangle \\langle e| \n+ \\Omega^*\\, e^{i(\\omega-\\omega_d)t} |e\\rangle \\langle g|"
},
{
"math_id": 42,
"text": "\\omega_d=\\omega"
},
{
"math_id": 43,
"text": "\\breve{H} / \\hbar =\n\\Omega\\ |g\\rangle\\langle e| + \\Omega^*\\ |e\\rangle\\langle g|"
},
{
"math_id": 44,
"text": "|\\omega_d-\\omega|\\gg 0"
},
{
"math_id": 45,
"text": "g\\, ab^\\dagger + g^*\\, a^\\dagger b"
},
{
"math_id": 46,
"text": "a"
},
{
"math_id": 47,
"text": "b"
},
{
"math_id": 48,
"text": "c"
},
{
"math_id": 49,
"text": "\\omega_1"
},
{
"math_id": 50,
"text": "\\omega_2"
},
{
"math_id": 51,
"text": "\\omega_1-\\omega_2=\\omega_a-\\omega_b"
},
{
"math_id": 52,
"text": "H_\\mathrm{drive}/\\hbar=\\Re\\left[\\epsilon_1e^{i\\omega_1 t}+\\epsilon_2 e^{i\\omega_2 t}\\right](c+c^\\dagger)."
},
{
"math_id": 53,
"text": "H_4/\\hbar=g_4\\Big(e^{i(\\omega_b-\\omega_a)t}ab^\\dagger + \\text{h.c.}\\Big)c^\\dagger c"
},
{
"math_id": 54,
"text": "U=D(-\\xi_1 e^{-i\\omega_1 t}-\\xi_2 e^{-i\\omega_2 t})"
},
{
"math_id": 55,
"text": "H_\\textrm{drive}"
},
{
"math_id": 56,
"text": "c\\to c+\\xi_1 e^{-i\\omega_1 t}+\\xi_2 e^{-i\\omega_2 t}"
},
{
"math_id": 57,
"text": "H/\\hbar = g_4\\Big(e^{i(\\omega_b-\\omega_a)t}ab^\\dagger + e^{i(\\omega_a-\\omega_b)t}a^\\dagger b\\big)(c^\\dagger +\\xi_1^* e^{i\\omega_1t}+\\xi_2^* e^{i\\omega_2 t})(c +\\xi_1 e^{-i\\omega_1t}+\\xi_2 e^{-i\\omega_2 t})"
},
{
"math_id": 58,
"text": "H/\\hbar=g_4 \\xi_1^*\\xi_2 e^{i(\\omega_b-\\omega_a+\\omega_1-\\omega_2)t}\\ ab^\\dagger+\\text{h.c.} = g\\, ab^\\dagger + g^*\\, a^\\dagger b"
},
{
"math_id": 59,
"text": "U = e^X"
},
{
"math_id": 60,
"text": "X^\\dagger = -X"
},
{
"math_id": 61,
"text": "Y"
},
{
"math_id": 62,
"text": "UYU^\\dagger = e^XYe^{-X}"
},
{
"math_id": 63,
"text": "[(X)^n,Y] \\equiv \\underbrace{[X,\\dotsb[X,[X}_{n \\text { times }}, Y]] \\dotsb],\\quad [(X)^0,Y] \\equiv Y,"
},
{
"math_id": 64,
"text": "e^X Y e^{-X} = \\sum_{n=0}^{\\infty} \\frac{[(X)^n,Y]}{n!},"
},
{
"math_id": 65,
"text": "e^{X}Y e^{-X} = Y+\\left[X,Y\\right]+\\frac{1}{2!}[X,[X,Y]]+\\frac{1}{3!}[X,[X,[X,Y]]]+\\cdots."
}
] |
https://en.wikipedia.org/wiki?curid=58071309
|
58083234
|
Exterior calculus identities
|
This article summarizes several identities in exterior calculus, a mathematical notation used in differential geometry.
Notation.
The following summarizes short definitions and notations that are used in this article.
Manifold.
formula_0, formula_1 are formula_2-dimensional smooth manifolds, where formula_3. That is, differentiable manifolds that can be differentiated enough times for the purposes on this page.
formula_4, formula_5 denote one point on each of the manifolds.
The boundary of a manifold formula_6 is a manifold formula_7, which has dimension formula_8. An orientation on formula_6 induces an orientation on formula_7.
We usually denote a submanifold by formula_9.
Tangent and cotangent bundles.
formula_10, formula_11 denote the tangent bundle and cotangent bundle, respectively, of the smooth manifold formula_0.
formula_12, formula_13 denote the tangent spaces of formula_0, formula_1 at the points formula_14, formula_15, respectively. formula_16 denotes the cotangent space of formula_0 at the point formula_14.
Sections of the tangent bundles, also known as vector fields, are typically denoted as formula_17 such that at a point formula_4 we have formula_18. Sections of the cotangent bundle, also known as differential 1-forms (or covector fields), are typically denoted as formula_19 such that at a point formula_4 we have formula_20. An alternative notation for formula_21 is formula_22.
Differential "k"-forms.
Differential formula_23-forms, which we refer to simply as formula_23-forms here, are differential forms defined on formula_10. We denote the set of all formula_23-forms as formula_24. For formula_25 we usually write formula_26, formula_27, formula_28.
formula_29-forms formula_30 are just scalar functions formula_31 on formula_0. formula_32 denotes the constant formula_29-form equal to formula_33 everywhere.
Omitted elements of a sequence.
When we are given formula_34 inputs formula_35 and a formula_23-form formula_26 we denote omission of the formula_36th entry by writing
formula_37
Exterior product.
The exterior product is also known as the "wedge product". It is denoted by formula_38. The exterior product of a formula_23-form formula_26 and an formula_39-form formula_27 produce a formula_40-form formula_41. It can be written using the set formula_42 of all permutations formula_43 of formula_44 such that formula_45 as
formula_46
Directional derivative.
The directional derivative of a 0-form formula_30 along a section formula_47 is a 0-form denoted formula_48
Exterior derivative.
The exterior derivative formula_49 is defined for all formula_50. We generally omit the subscript when it is clear from the context.
For a formula_29-form formula_30 we have formula_51 as the formula_33-form that gives the directional derivative, i.e., for the section formula_52 we have formula_53, the directional derivative of formula_54 along formula_55.
For formula_56,
formula_57
Lie bracket.
The Lie bracket of sections formula_58 is defined as the unique section formula_59 that satisfies
formula_60
Tangent maps.
If formula_61 is a smooth map, then formula_62 defines a tangent map from formula_0 to formula_1. It is defined through curves formula_63 on formula_0 with derivative formula_64 such that
formula_65
Note that formula_66 is a formula_29-form with values in formula_1.
Pull-back.
If formula_61 is a smooth map, then the pull-back of a formula_23-form formula_67 is defined such that for any formula_23-dimensional submanifold formula_68
formula_69
The pull-back can also be expressed as
formula_70
Interior product.
Also known as the interior derivative, the interior product given a section formula_71 is a map formula_72 that effectively substitutes the first input of a formula_34-form with formula_73. If formula_74 and formula_75 then
formula_76
Metric tensor.
Given a nondegenerate bilinear form formula_77 on each formula_12 that is continuous on formula_0, the manifold becomes a pseudo-Riemannian manifold. We denote the metric tensor formula_78, defined pointwise by formula_79. We call formula_80 the signature of the metric. A Riemannian manifold has formula_81, whereas Minkowski space has formula_82.
Musical isomorphisms.
The metric tensor formula_83 induces duality mappings between vector fields and one-forms: these are the musical isomorphisms flat formula_84 and sharp formula_85. A section formula_86 corresponds to the unique one-form formula_87 such that for all sections formula_88, we have:
formula_89
A one-form formula_90 corresponds to the unique vector field formula_91 such that for all formula_88, we have:
formula_92
These mappings extend via multilinearity to mappings from formula_23-vector fields to formula_23-forms and formula_23-forms to formula_23-vector fields through
formula_93
formula_94
Hodge star.
For an "n"-manifold "M", the Hodge star operator formula_95 is a duality mapping taking a formula_23-form formula_96 to an formula_97-form formula_98.
It can be defined in terms of an oriented frame formula_99 for formula_10, orthonormal with respect to the given metric tensor formula_78:
formula_100
Co-differential operator.
The co-differential operator formula_101 on an formula_2 dimensional manifold formula_0 is defined by
formula_102
The Hodge–Dirac operator, formula_103, is a Dirac operator studied in Clifford analysis.
Oriented manifold.
An formula_2-dimensional orientable manifold M is a manifold that can be equipped with a choice of an n-form formula_104 that is continuous and nonzero everywhere on M.
Volume form.
On an orientable manifold formula_0 the canonical choice of a volume form given a metric tensor formula_78 and an orientation is formula_105 for any basis formula_106 ordered to match the orientation.
Area form.
Given a volume form formula_107 and a unit normal vector formula_1 we can also define an area form formula_108 on the boundary formula_109
Bilinear form on "k"-forms.
A generalization of the metric tensor, the symmetric bilinear form between two formula_23-forms formula_110, is defined pointwise on formula_0 by
formula_111
The formula_112-bilinear form for the space of formula_23-forms formula_24 is defined by
formula_113
In the case of a Riemannian manifold, each is an inner product (i.e. is positive-definite).
Lie derivative.
We define the Lie derivative formula_114 through Cartan's magic formula for a given section formula_52 as
formula_115
It describes the change of a formula_23-form along a flow formula_116 associated to the section formula_55.
Laplace–Beltrami operator.
The Laplacian formula_117 is defined as formula_118.
Important definitions.
Definitions on Ω"k"("M").
formula_26 is called...
Cohomology.
The formula_23-th cohomology of a manifold formula_0 and its exterior derivative operators formula_125 is given by
formula_126
Two closed formula_23-forms formula_110 are in the same cohomology class if their difference is an exact form i.e.
formula_127
A closed surface of genus formula_78 has formula_128 generators of its first cohomology group, each of which can be represented by a harmonic formula_33-form.
Dirichlet energy.
Given formula_26, its Dirichlet energy is
formula_129
Properties.
formula_130 ( "Stokes' theorem" )
formula_131 ( "cochain complex"; a symbolic check is sketched after this list )
formula_132 for formula_133 ( "Leibniz rule" )
formula_134 for formula_135 ( "directional derivative" )
formula_136 for formula_137
formula_138 for formula_133 ( "alternating" )
formula_139 ( "associativity" )
formula_140 for formula_141 ( "compatibility of scalar multiplication" )
formula_142 ( "distributivity over addition" )
formula_143 for formula_144 when formula_23 is odd or formula_145. The rank of a formula_23-form formula_146 means the minimum number of monomial terms (exterior products of one-forms) that must be summed to produce formula_146.
formula_147 ( "commutative with formula_148" )
formula_149 ( "distributes over formula_150" )
formula_151 ( "contravariant" )
formula_152 for formula_153 ( "function composition" )
formula_154
formula_155
formula_156 ( "nilpotent" )
formula_157
formula_158 for formula_159 ( "Leibniz rule" )
formula_160 for formula_90
formula_161 for formula_162
formula_163 for formula_162
formula_164 for formula_165 ( "linearity" )
formula_166 for formula_167, formula_168, and formula_169 the sign of the metric
formula_170 ( "inversion" )
formula_171 for formula_30 ( "commutative with formula_29-forms" )
formula_172 for formula_90 ( "Hodge star preserves formula_33-form norm" )
formula_173 ( "Hodge dual of constant function 1 is the volume form" )
formula_174 ( "nilpotent" )
formula_175 and formula_176 ( "Hodge adjoint to formula_148" )
formula_177 if formula_178 ( "formula_179 adjoint to formula_148" )
In general, formula_180
formula_181 for formula_162
formula_182 ( "commutative with formula_148" )
formula_183 ( "commutative with formula_184" )
formula_185
formula_186 ( "Leibniz rule" )
formula_187
formula_188 if formula_26
formula_189
formula_190
formula_191 ( "bilinear form" )
formula_192 ( "Jacobi identity" )
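A minimal symbolic check of the cochain-complex identity from the list above: applying the exterior derivative twice to an arbitrary function of three variables gives zero because mixed partial derivatives commute. The function f is a hypothetical placeholder.
```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
f = sp.Function('f')(x, y, z)            # an arbitrary (hypothetical) 0-form

df = [sp.diff(f, c) for c in coords]     # components of the 1-form df

# d(df) has components d_i (df)_j - d_j (df)_i, all of which vanish identically.
ddf = [[sp.simplify(sp.diff(df[j], coords[i]) - sp.diff(df[i], coords[j]))
        for j in range(3)] for i in range(3)]
assert all(c == 0 for row in ddf for c in row)
```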
Exterior calculus identities.
Dimensions.
If formula_193
formula_194 for formula_195
formula_196 for formula_197
If formula_198 is a basis, then a basis of formula_24 is
formula_199
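The dimension count and the increasing index sets indexing the basis above can be enumerated directly; the choice n = 4 below is arbitrary.
```python
from itertools import combinations
from math import comb

n = 4                                     # hypothetical manifold dimension
for k in range(n + 1):
    index_sets = list(combinations(range(1, n + 1), k))
    assert len(index_sets) == comb(n, k)  # dim Omega^k(M) = C(n, k)
    print(k, comb(n, k), index_sets)
```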
Exterior products.
Let formula_200 and formula_201 be vector fields.
formula_202
formula_203
formula_204
formula_205
formula_206 ( "interior product formula_207 dual to wedge formula_208" )
formula_209 for formula_210
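The determinant formula for the wedge of two one-forms evaluated on two vectors can be checked numerically with arbitrary components; everything below is a hypothetical example at a single point.
```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = rng.normal(size=3), rng.normal(size=3)    # two 1-forms (component arrays)
X, Y = rng.normal(size=3), rng.normal(size=3)           # two vectors

wedge_XY = (alpha @ X) * (beta @ Y) - (alpha @ Y) * (beta @ X)   # (alpha ^ beta)(X, Y)
det_form = np.linalg.det(np.array([[alpha @ X, alpha @ Y],
                                   [beta @ X,  beta @ Y]]))
assert np.isclose(wedge_XY, det_form)
```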
Projection and rejection.
If formula_211, then formula_212 and formula_213 are complementary projection operators, satisfying formula_214 ( "projection–rejection decomposition" ).
Given the boundary formula_215 with unit normal vector formula_1, a form can be split into its tangential component, obtained by the projection formula_216, and its normal component, given by formula_217.
Sum expressions.
formula_218
formula_219
formula_220 given a positively oriented orthonormal frame formula_221.
formula_222
Hodge decomposition.
If formula_223, formula_224 such that
formula_225
Poincaré lemma.
If a boundaryless manifold formula_0 has trivial cohomology formula_226, then any closed formula_227 is exact. This is the case if "M" is contractible.
Relations to vector calculus.
Identities in Euclidean 3-space.
Let the Euclidean metric be formula_228.
We use the differential operator formula_229 on formula_230.
formula_231 for formula_90.
formula_232 ( "scalar triple product" )
formula_233 ( "cross product"; a numerical check is sketched after this list )
formula_234 if formula_235
formula_236 ( "scalar product" )
formula_237 ( "gradient" )
formula_238 ( "directional derivative" )
formula_239 ( "divergence" )
formula_240 ( "curl" )
formula_241 where formula_1 is the unit normal vector of formula_215 and formula_242 is the area form on formula_215.
formula_243 ( "divergence theorem" )
formula_244 ( "formula_29-forms" )
formula_245 ( "formula_33-forms" )
formula_246 if formula_247 ( "formula_248-forms on formula_249-manifolds" )
formula_250 if formula_251 ( "formula_2-forms" )
formula_252
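The cross-product identity from the list above can be checked numerically in an orthonormal frame, where flat and sharp leave components unchanged and the Hodge star of a two-form is a contraction with the Levi-Civita symbol; the two vectors are hypothetical.
```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

X = np.array([1.0, 2.0, -0.5])
Y = np.array([0.3, -1.0, 2.0])

# X^flat ^ Y^flat has components X_i Y_j - X_j Y_i; its Hodge dual is a 1-form,
# identified with a vector field in an orthonormal frame.
wedge = np.outer(X, Y) - np.outer(Y, X)
star_wedge = 0.5 * np.einsum('ijk,jk->i', eps, wedge)

assert np.allclose(star_wedge, np.cross(X, Y))
```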
References.
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": " n\\in \\mathbb{N} "
},
{
"math_id": 4,
"text": " p \\in M "
},
{
"math_id": 5,
"text": " q \\in N "
},
{
"math_id": 6,
"text": " M "
},
{
"math_id": 7,
"text": " \\partial M "
},
{
"math_id": 8,
"text": " n - 1 "
},
{
"math_id": 9,
"text": "\\Sigma \\subset M"
},
{
"math_id": 10,
"text": "TM"
},
{
"math_id": 11,
"text": "T^{*}M"
},
{
"math_id": 12,
"text": " T_p M "
},
{
"math_id": 13,
"text": " T_q N "
},
{
"math_id": 14,
"text": "p"
},
{
"math_id": 15,
"text": "q"
},
{
"math_id": 16,
"text": " T^{*}_p M "
},
{
"math_id": 17,
"text": "X, Y, Z \\in \\Gamma(TM)"
},
{
"math_id": 18,
"text": " X|_p, Y|_p, Z|_p \\in T_p M "
},
{
"math_id": 19,
"text": "\\alpha, \\beta \\in \\Gamma(T^{*}M)"
},
{
"math_id": 20,
"text": " \\alpha|_p, \\beta|_p \\in T^{*}_p M "
},
{
"math_id": 21,
"text": "\\Gamma(T^{*}M)"
},
{
"math_id": 22,
"text": "\\Omega^1(M)"
},
{
"math_id": 23,
"text": "k"
},
{
"math_id": 24,
"text": "\\Omega^k(M)"
},
{
"math_id": 25,
"text": " 0\\leq k,\\ l,\\ m\\leq n "
},
{
"math_id": 26,
"text": "\\alpha\\in\\Omega^k(M)"
},
{
"math_id": 27,
"text": "\\beta\\in\\Omega^l(M)"
},
{
"math_id": 28,
"text": "\\gamma\\in\\Omega^m(M)"
},
{
"math_id": 29,
"text": "0"
},
{
"math_id": 30,
"text": "f\\in\\Omega^0(M)"
},
{
"math_id": 31,
"text": "C^{\\infty}(M)"
},
{
"math_id": 32,
"text": "\\mathbf{1}\\in\\Omega^0(M)"
},
{
"math_id": 33,
"text": "1"
},
{
"math_id": 34,
"text": "(k+1)"
},
{
"math_id": 35,
"text": "X_0,\\ldots,X_k"
},
{
"math_id": 36,
"text": "i"
},
{
"math_id": 37,
"text": "\\alpha(X_0,\\ldots,\\hat{X}_i,\\ldots,X_k):=\\alpha(X_0,\\ldots,X_{i-1},X_{i+1},\\ldots,X_k) ."
},
{
"math_id": 38,
"text": " \\wedge : \\Omega^k(M) \\times \\Omega^l(M) \\rightarrow \\Omega^{k+l}(M)"
},
{
"math_id": 39,
"text": "l"
},
{
"math_id": 40,
"text": "(k+l)"
},
{
"math_id": 41,
"text": "\\alpha\\wedge\\beta \\in\\Omega^{k+l}(M)"
},
{
"math_id": 42,
"text": "S(k,k+l)"
},
{
"math_id": 43,
"text": "\\sigma"
},
{
"math_id": 44,
"text": "\\{1,\\ldots,n\\}"
},
{
"math_id": 45,
"text": "\\sigma(1)<\\ldots <\\sigma(k), \\ \\sigma(k+1)<\\ldots <\\sigma(k+l) "
},
{
"math_id": 46,
"text": "(\\alpha\\wedge\\beta)(X_1,\\ldots,X_{k+l})=\\sum_{\\sigma\\in S(k,k+l)}\\text{sign}(\\sigma)\\alpha(X_{\\sigma(1)},\\ldots,X_{\\sigma(k)})\\otimes\\beta(X_{\\sigma(k+1)},\\ldots,X_{\\sigma(k+l)}) ."
},
{
"math_id": 47,
"text": "X\\in\\Gamma(TM)"
},
{
"math_id": 48,
"text": "\\partial_X f ."
},
{
"math_id": 49,
"text": "d_k : \\Omega^k(M) \\rightarrow \\Omega^{k+1}(M) "
},
{
"math_id": 50,
"text": " 0 \\leq k\\leq n"
},
{
"math_id": 51,
"text": "d_0f\\in\\Omega^1(M)"
},
{
"math_id": 52,
"text": "X\\in \\Gamma(TM)"
},
{
"math_id": 53,
"text": "(d_0f)(X) = \\partial_X f"
},
{
"math_id": 54,
"text": "f"
},
{
"math_id": 55,
"text": "X"
},
{
"math_id": 56,
"text": " 0 < k\\leq n"
},
{
"math_id": 57,
"text": " (d_k\\omega)(X_0,\\ldots,X_k)=\\sum_{0\\leq j\\leq k}(-1)^jd_{0}(\\omega(X_0,\\ldots,\\hat{X}_j,\\ldots,X_k))(X_j) + \\sum_{0\\leq i < j\\leq k}(-1)^{i+j}\\omega([X_i,X_j],X_0,\\ldots,\\hat{X}_i,\\ldots,\\hat{X}_j,\\ldots,X_k) ."
},
{
"math_id": 58,
"text": "X,Y \\in \\Gamma(TM)"
},
{
"math_id": 59,
"text": "[X,Y] \\in \\Gamma(TM)"
},
{
"math_id": 60,
"text": "\n\\forall f\\in\\Omega^0(M) \\Rightarrow \\partial_{[X,Y]}f = \\partial_X \\partial_Y f - \\partial_Y \\partial_X f .\n"
},
{
"math_id": 61,
"text": " \\phi : M \\rightarrow N "
},
{
"math_id": 62,
"text": "d\\phi|_p:T_pM\\rightarrow T_{\\phi(p)}N"
},
{
"math_id": 63,
"text": "\\gamma"
},
{
"math_id": 64,
"text": "\\gamma'(0)=X\\in T_pM"
},
{
"math_id": 65,
"text": "d\\phi(X):=(\\phi\\circ\\gamma)' ."
},
{
"math_id": 66,
"text": "\\phi"
},
{
"math_id": 67,
"text": " \\alpha\\in \\Omega^k(N) "
},
{
"math_id": 68,
"text": "\\Sigma\\subset M"
},
{
"math_id": 69,
"text": " \\int_{\\Sigma} \\phi^*\\alpha = \\int_{\\phi(\\Sigma)} \\alpha ."
},
{
"math_id": 70,
"text": "(\\phi^*\\alpha)(X_1,\\ldots,X_k)=\\alpha(d\\phi(X_1),\\ldots,d\\phi(X_k)) ."
},
{
"math_id": 71,
"text": " Y\\in \\Gamma(TM) "
},
{
"math_id": 72,
"text": "\\iota_Y:\\Omega^{k+1}(M) \\rightarrow \\Omega^k(M)"
},
{
"math_id": 73,
"text": "Y"
},
{
"math_id": 74,
"text": "\\alpha\\in\\Omega^{k+1}(M)"
},
{
"math_id": 75,
"text": "X_i\\in \\Gamma(TM)"
},
{
"math_id": 76,
"text": " (\\iota_Y\\alpha)(X_1,\\ldots,X_k) = \\alpha(Y,X_1,\\ldots,X_k) ."
},
{
"math_id": 77,
"text": " g_p( \\cdot , \\cdot ) "
},
{
"math_id": 78,
"text": "g"
},
{
"math_id": 79,
"text": " g( X , Y )|_p = g_p( X|_p , Y|_p ) "
},
{
"math_id": 80,
"text": "s=\\operatorname{sign}(g)"
},
{
"math_id": 81,
"text": "s=1"
},
{
"math_id": 82,
"text": "s=-1"
},
{
"math_id": 83,
"text": "g(\\cdot,\\cdot)"
},
{
"math_id": 84,
"text": "\\flat"
},
{
"math_id": 85,
"text": "\\sharp"
},
{
"math_id": 86,
"text": " A \\in \\Gamma(TM)"
},
{
"math_id": 87,
"text": "A^{\\flat}\\in\\Omega^1(M)"
},
{
"math_id": 88,
"text": "X \\in \\Gamma(TM)"
},
{
"math_id": 89,
"text": " A^{\\flat}(X) = g(A,X) ."
},
{
"math_id": 90,
"text": "\\alpha\\in\\Omega^1(M)"
},
{
"math_id": 91,
"text": " \\alpha^{\\sharp}\\in \\Gamma(TM)"
},
{
"math_id": 92,
"text": " \\alpha(X) = g(\\alpha^\\sharp,X) ."
},
{
"math_id": 93,
"text": " (A_1 \\wedge A_2 \\wedge \\cdots \\wedge A_k)^{\\flat} = A_1^{\\flat} \\wedge A_2^{\\flat} \\wedge \\cdots \\wedge A_k^{\\flat}"
},
{
"math_id": 94,
"text": " (\\alpha_1 \\wedge \\alpha_2 \\wedge \\cdots \\wedge \\alpha_k)^{\\sharp} = \\alpha_1^{\\sharp} \\wedge \\alpha_2^{\\sharp} \\wedge \\cdots \\wedge \\alpha_k^{\\sharp}."
},
{
"math_id": 95,
"text": "{\\star}:\\Omega^k(M)\\rightarrow\\Omega^{n-k}(M)"
},
{
"math_id": 96,
"text": "\\alpha \\in \\Omega^k(M)"
},
{
"math_id": 97,
"text": "(n{-}k)"
},
{
"math_id": 98,
"text": "({\\star}\\alpha) \\in \\Omega^{n-k}(M)"
},
{
"math_id": 99,
"text": "(X_1,\\ldots,X_n)"
},
{
"math_id": 100,
"text": "\n({\\star}\\alpha)(X_1,\\ldots,X_{n-k})=\\alpha(X_{n-k+1},\\ldots,X_n) .\n"
},
{
"math_id": 101,
"text": "\\delta:\\Omega^k(M)\\rightarrow\\Omega^{k-1}(M)"
},
{
"math_id": 102,
"text": "\\delta := (-1)^{k} {\\star}^{-1} d {\\star} = (-1)^{nk+n+1}{\\star} d {\\star} ."
},
{
"math_id": 103,
"text": "d+\\delta"
},
{
"math_id": 104,
"text": "\\mu\\in\\Omega^n(M)"
},
{
"math_id": 105,
"text": "\\mathbf{det}:=\\sqrt{|\\det g|}\\;dX_1^{\\flat}\\wedge\\ldots\\wedge dX_n^{\\flat}"
},
{
"math_id": 106,
"text": "dX_1,\\ldots, dX_n"
},
{
"math_id": 107,
"text": "\\mathbf{det}"
},
{
"math_id": 108,
"text": "\\sigma:=\\iota_N\\textbf{det}"
},
{
"math_id": 109,
"text": "\\partial M."
},
{
"math_id": 110,
"text": "\\alpha,\\beta\\in\\Omega^k(M)"
},
{
"math_id": 111,
"text": "\n\\langle\\alpha,\\beta\\rangle|_p := {\\star}(\\alpha\\wedge {\\star}\\beta )|_p .\n"
},
{
"math_id": 112,
"text": "L^2"
},
{
"math_id": 113,
"text": "\n\\langle\\!\\langle\\alpha,\\beta\\rangle\\!\\rangle:= \\int_M\\alpha\\wedge {\\star}\\beta .\n"
},
{
"math_id": 114,
"text": "\\mathcal{L}:\\Omega^k(M)\\rightarrow\\Omega^k(M)"
},
{
"math_id": 115,
"text": "\n\\mathcal{L}_X = d \\circ \\iota_X + \\iota_X \\circ d .\n"
},
{
"math_id": 116,
"text": "\\phi_t"
},
{
"math_id": 117,
"text": "\\Delta:\\Omega^k(M) \\rightarrow \\Omega^k(M)"
},
{
"math_id": 118,
"text": "\\Delta = -(d\\delta + \\delta d)"
},
{
"math_id": 119,
"text": "d\\alpha=0"
},
{
"math_id": 120,
"text": " \\alpha = d\\beta"
},
{
"math_id": 121,
"text": "\\beta\\in\\Omega^{k-1}"
},
{
"math_id": 122,
"text": "\\delta\\alpha=0"
},
{
"math_id": 123,
"text": " \\alpha = \\delta\\beta"
},
{
"math_id": 124,
"text": "\\beta\\in\\Omega^{k+1}"
},
{
"math_id": 125,
"text": "d_0,\\ldots,d_{n-1}"
},
{
"math_id": 126,
"text": "\nH^k(M):=\\frac{\\text{ker}(d_{k})}{\\text{im}(d_{k-1})}\n"
},
{
"math_id": 127,
"text": "\n[\\alpha]=[\\beta] \\ \\ \\Longleftrightarrow\\ \\ \\alpha{-}\\beta = d\\eta \\ \\text{ for some } \\eta\\in\\Omega^{k-1}(M) \n"
},
{
"math_id": 128,
"text": "2g"
},
{
"math_id": 129,
"text": "\n\\mathcal{E}_\\text{D}(\\alpha):= \\dfrac{1}{2}\\langle\\!\\langle d\\alpha,d\\alpha\\rangle\\!\\rangle + \\dfrac{1}{2}\\langle\\!\\langle \\delta\\alpha,\\delta\\alpha\\rangle\\!\\rangle\n"
},
{
"math_id": 130,
"text": "\n\\int_{\\Sigma} d\\alpha = \\int_{\\partial\\Sigma} \\alpha "
},
{
"math_id": 131,
"text": "\nd \\circ d = 0\n"
},
{
"math_id": 132,
"text": "\nd(\\alpha \\wedge \\beta ) = d\\alpha\\wedge \\beta +(-1)^k\\alpha\\wedge d\\beta\n"
},
{
"math_id": 133,
"text": " \\alpha\\in\\Omega^k(M), \\ \\beta\\in\\Omega^l(M) "
},
{
"math_id": 134,
"text": "\ndf(X) = \\partial_X f\n"
},
{
"math_id": 135,
"text": " f\\in\\Omega^0(M), \\ X\\in \\Gamma(TM) "
},
{
"math_id": 136,
"text": "\nd\\alpha = 0\n"
},
{
"math_id": 137,
"text": "\\alpha \\in \\Omega^n(M), \\ \\text{dim}(M)=n "
},
{
"math_id": 138,
"text": "\n\\alpha \\wedge \\beta = (-1)^{kl}\\beta \\wedge \\alpha\n"
},
{
"math_id": 139,
"text": "\n(\\alpha \\wedge \\beta)\\wedge\\gamma = \\alpha \\wedge (\\beta\\wedge\\gamma)\n"
},
{
"math_id": 140,
"text": "\n(\\lambda\\alpha) \\wedge \\beta = \\lambda (\\alpha \\wedge \\beta)\n"
},
{
"math_id": 141,
"text": "\\lambda\\in\\mathbb{R}"
},
{
"math_id": 142,
"text": "\n\\alpha \\wedge ( \\beta_1 + \\beta_2 ) = \\alpha \\wedge \\beta_1 + \\alpha \\wedge \\beta_2\n"
},
{
"math_id": 143,
"text": "\n\\alpha \\wedge \\alpha = 0\n"
},
{
"math_id": 144,
"text": " \\alpha\\in\\Omega^k(M) "
},
{
"math_id": 145,
"text": "\\operatorname{rank} \\alpha \\le 1 "
},
{
"math_id": 146,
"text": "\\alpha"
},
{
"math_id": 147,
"text": "\nd(\\phi^*\\alpha) = \\phi^*(d\\alpha)\n"
},
{
"math_id": 148,
"text": "d"
},
{
"math_id": 149,
"text": "\n\\phi^*(\\alpha\\wedge\\beta) = (\\phi^*\\alpha)\\wedge(\\phi^*\\beta)\n"
},
{
"math_id": 150,
"text": "\\wedge"
},
{
"math_id": 151,
"text": "\n(\\phi_1\\circ\\phi_2)^* = \\phi_2^*\\phi_1^*\n"
},
{
"math_id": 152,
"text": "\n\\phi^*f=f\\circ\\phi\n"
},
{
"math_id": 153,
"text": "f\\in\\Omega^0(N)"
},
{
"math_id": 154,
"text": "\n(X^{\\flat})^{\\sharp}=X\n"
},
{
"math_id": 155,
"text": "\n(\\alpha^{\\sharp})^{\\flat}=\\alpha\n"
},
{
"math_id": 156,
"text": "\n\\iota_X \\circ \\iota_X = 0\n"
},
{
"math_id": 157,
"text": "\n\\iota_X \\circ \\iota_Y = - \\iota_Y \\circ \\iota_X\n"
},
{
"math_id": 158,
"text": "\n\\iota_X (\\alpha \\wedge \\beta ) = (\\iota_X\\alpha)\\wedge\\beta + (-1)^k\\alpha\\wedge(\\iota_X \\beta ) \n"
},
{
"math_id": 159,
"text": "\\alpha\\in\\Omega^k(M), \\ \\beta\\in\\Omega^l(M)"
},
{
"math_id": 160,
"text": "\n\\iota_X\\alpha = \\alpha(X)\n"
},
{
"math_id": 161,
"text": "\n\\iota_X f = 0\n"
},
{
"math_id": 162,
"text": "f \\in \\Omega^0(M)"
},
{
"math_id": 163,
"text": "\n\\iota_X(f\\alpha) = f \\iota_X\\alpha\n"
},
{
"math_id": 164,
"text": "\n{\\star}(\\lambda_1\\alpha + \\lambda_2\\beta) = \\lambda_1({\\star}\\alpha) + \\lambda_2({\\star}\\beta)\n"
},
{
"math_id": 165,
"text": "\\lambda_1,\\lambda_2\\in\\mathbb{R}"
},
{
"math_id": 166,
"text": "\n{\\star}{\\star}\\alpha = s(-1)^{k(n-k)}\\alpha\n"
},
{
"math_id": 167,
"text": "\\alpha\\in \\Omega^k(M)"
},
{
"math_id": 168,
"text": "n=\\dim(M)"
},
{
"math_id": 169,
"text": "s = \\operatorname{sign}(g)"
},
{
"math_id": 170,
"text": "\n{\\star}^{(-1)} = s(-1)^{k(n-k)}{\\star}\n"
},
{
"math_id": 171,
"text": "\n{\\star}(f\\alpha)=f({\\star}\\alpha)\n"
},
{
"math_id": 172,
"text": "\n\\langle\\!\\langle\\alpha,\\alpha\\rangle\\!\\rangle = \\langle\\!\\langle{\\star}\\alpha,{\\star}\\alpha\\rangle\\!\\rangle\n"
},
{
"math_id": 173,
"text": "\n{\\star} \\mathbf{1} = \\mathbf{det}\n"
},
{
"math_id": 174,
"text": "\n\\delta\\circ\\delta = 0\n"
},
{
"math_id": 175,
"text": "\n{\\star}\\delta=(-1)^kd{\\star}\n"
},
{
"math_id": 176,
"text": "{\\star} d = (-1)^{k+1}\\delta{\\star}"
},
{
"math_id": 177,
"text": "\n\\langle\\!\\langle d\\alpha,\\beta\\rangle\\!\\rangle = \\langle\\!\\langle \\alpha,\\delta\\beta\\rangle\\!\\rangle\n"
},
{
"math_id": 178,
"text": "\\partial M=0"
},
{
"math_id": 179,
"text": "\\delta"
},
{
"math_id": 180,
"text": "\\int_M d\\alpha \\wedge \\star \\beta = \\int_{\\partial M} \\alpha \\wedge \\star \\beta + \\int_M \\alpha\\wedge\\star\\delta\\beta "
},
{
"math_id": 181,
"text": "\n\\delta f = 0\n"
},
{
"math_id": 182,
"text": "\nd\\circ\\mathcal{L}_X = \\mathcal{L}_X\\circ d\n"
},
{
"math_id": 183,
"text": "\n\\iota_X \\circ\\mathcal{L}_X = \\mathcal{L}_X\\circ \\iota_X\n"
},
{
"math_id": 184,
"text": "\\iota_X"
},
{
"math_id": 185,
"text": "\n\\mathcal{L}_X(\\iota_Y\\alpha) = \\iota_{[X,Y]}\\alpha + \\iota_Y\\mathcal{L}_X\\alpha\n"
},
{
"math_id": 186,
"text": "\n\\mathcal{L}_X(\\alpha\\wedge\\beta) = (\\mathcal{L}_X\\alpha)\\wedge\\beta + \\alpha\\wedge(\\mathcal{L}_X\\beta)\n"
},
{
"math_id": 187,
"text": "\n\\iota_X({\\star}\\mathbf{1}) = {\\star} X^{\\flat}\n"
},
{
"math_id": 188,
"text": "\n\\iota_X({\\star}\\alpha) = (-1)^k{\\star}(X^{\\flat}\\wedge\\alpha)\n"
},
{
"math_id": 189,
"text": "\n\\iota_X(\\phi^*\\alpha)=\\phi^*(\\iota_{d\\phi(X)}\\alpha)\n"
},
{
"math_id": 190,
"text": "\n\\nu,\\mu\\in\\Omega^n(M), \\mu \\text{ non-zero } \\ \\Rightarrow \\ \\exist \\ f\\in\\Omega^0(M): \\ \\nu=f\\mu\n"
},
{
"math_id": 191,
"text": "\nX^{\\flat}\\wedge{\\star} Y^{\\flat} = g(X,Y)( {\\star} \\mathbf{1})\n"
},
{
"math_id": 192,
"text": "\n[X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]] = 0\n"
},
{
"math_id": 193,
"text": "n=\\dim M"
},
{
"math_id": 194,
"text": "\n\\dim\\Omega^k(M) = \\binom{n}{k}\n"
},
{
"math_id": 195,
"text": "0\\leq k\\leq n"
},
{
"math_id": 196,
"text": "\n\\dim\\Omega^k(M) = 0\n"
},
{
"math_id": 197,
"text": "k < 0, \\ k > n"
},
{
"math_id": 198,
"text": "X_1,\\ldots,X_n\\in \\Gamma(TM)"
},
{
"math_id": 199,
"text": "\n\\{X_{\\sigma(1)}^{\\flat}\\wedge\\ldots\\wedge X_{\\sigma(k)}^{\\flat} \\ : \\ \\sigma\\in S(k,n)\\}\n"
},
{
"math_id": 200,
"text": "\\alpha, \\beta, \\gamma,\\alpha_i\\in \\Omega^1(M)"
},
{
"math_id": 201,
"text": "X,Y,Z,X_i"
},
{
"math_id": 202,
"text": "\n\\alpha(X) = \\det\n\\begin{bmatrix}\n \\alpha(X) \\\\\n \\end{bmatrix}\n"
},
{
"math_id": 203,
"text": "\n(\\alpha\\wedge\\beta)(X,Y) = \\det\n\\begin{bmatrix}\n \\alpha(X) & \\alpha(Y) \\\\\n \\beta(X) & \\beta(Y) \\\\\n \\end{bmatrix}\n"
},
{
"math_id": 204,
"text": "\n(\\alpha\\wedge\\beta\\wedge\\gamma)(X,Y,Z) = \\det\n\\begin{bmatrix}\n \\alpha(X) & \\alpha(Y) & \\alpha(Z) \\\\\n \\beta(X) & \\beta(Y) & \\beta(Z) \\\\\n \\gamma(X) & \\gamma(Y) & \\gamma(Z)\n \\end{bmatrix}\n"
},
{
"math_id": 205,
"text": "\n(\\alpha_1\\wedge\\ldots\\wedge\\alpha_l)(X_1,\\ldots,X_l) = \\det\n\\begin{bmatrix}\n \\alpha_1(X_1) & \\alpha_1(X_2) & \\dots & \\alpha_1(X_l) \\\\\n \\alpha_2(X_1) & \\alpha_2(X_2) & \\dots & \\alpha_2(X_l) \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\alpha_l(X_1) & \\alpha_l(X_2) & \\dots & \\alpha_l(X_l) \n \\end{bmatrix}\n"
},
{
"math_id": 206,
"text": "\n(-1)^k\\iota_X{\\star}\\alpha = {\\star}(X^{\\flat}\\wedge\\alpha)\n"
},
{
"math_id": 207,
"text": "\\iota_X{\\star}"
},
{
"math_id": 208,
"text": "X^{\\flat}\\wedge"
},
{
"math_id": 209,
"text": "\n(\\iota_X\\alpha)\\wedge{\\star}\\beta =\\alpha\\wedge{\\star}(X^{\\flat}\\wedge\\beta)\n"
},
{
"math_id": 210,
"text": "\\alpha\\in\\Omega^{k+1}(M),\\beta\\in\\Omega^k(M)"
},
{
"math_id": 211,
"text": "|X|=1, \\ \\alpha\\in\\Omega^k(M)"
},
{
"math_id": 212,
"text": "\\iota_X\\circ (X^{\\flat}\\wedge ):\\Omega^k(M)\\rightarrow\\Omega^k(M)"
},
{
"math_id": 213,
"text": "(X^{\\flat}\\wedge )\\circ \\iota_X:\\Omega^k(M)\\rightarrow\\Omega^k(M)"
},
{
"math_id": 214,
"text": " \\iota_X \\circ (X^{\\flat}\\wedge ) + (X^{\\flat}\\wedge)\\circ\\iota_X = \\text{id} "
},
{
"math_id": 215,
"text": "\\partial M"
},
{
"math_id": 216,
"text": "\\mathbf{t}:=\\iota_N\\circ (N^{\\flat}\\wedge )"
},
{
"math_id": 217,
"text": "\\mathbf{n}:=(\\text{id}-\\mathbf{t})"
},
{
"math_id": 218,
"text": "\n(d\\alpha)(X_0,\\ldots,X_k)=\\sum_{0\\leq j\\leq k}(-1)^jd(\\alpha(X_0,\\ldots,\\hat{X}_j,\\ldots,X_k))(X_j) + \\sum_{0\\leq i < j\\leq k}(-1)^{i+j}\\alpha([X_i,X_j],X_0,\\ldots,\\hat{X}_i,\\ldots,\\hat{X}_j,\\ldots,X_k)\n"
},
{
"math_id": 219,
"text": "\n(d\\alpha)(X_1,\\ldots,X_k) =\\sum_{i=1}^k(-1)^{i+1}(\\nabla_{X_i}\\alpha)(X_1,\\ldots,\\hat{X}_i,\\ldots,X_k)\n"
},
{
"math_id": 220,
"text": "\n(\\delta\\alpha)(X_1,\\ldots,X_{k-1})=-\\sum_{i=1}^n(\\iota_{E_i}(\\nabla_{E_i}\\alpha))(X_1,\\ldots,\\hat{X}_i,\\ldots,X_k)\n"
},
{
"math_id": 221,
"text": "E_1,\\ldots,E_n"
},
{
"math_id": 222,
"text": "\n(\\mathcal{L}_Y\\alpha)(X_1,\\ldots,X_k) =(\\nabla_Y\\alpha)(X_1,\\ldots,X_k) - \\sum_{i=1}^k\\alpha(X_1,\\ldots,\\nabla_{X_i}Y,\\ldots,X_k)\n"
},
{
"math_id": 223,
"text": "\\partial M =\\empty"
},
{
"math_id": 224,
"text": "\\omega\\in\\Omega^k(M) \\Rightarrow \\exists \\alpha\\in\\Omega^{k-1}, \\ \\beta\\in\\Omega^{k+1}, \\ \\gamma\\in\\Omega^k(M), \\ d\\gamma=0, \\ \\delta\\gamma = 0"
},
{
"math_id": 225,
"text": "\n\\omega = d\\alpha + \\delta\\beta + \\gamma\n"
},
{
"math_id": 226,
"text": "H^k(M)=\\{0\\}"
},
{
"math_id": 227,
"text": "\\omega\\in\\Omega^k(M)"
},
{
"math_id": 228,
"text": "g(X,Y):=\\langle X,Y\\rangle = X\\cdot Y"
},
{
"math_id": 229,
"text": "\n\\nabla = \\left( {\\partial \\over \\partial x}, {\\partial \\over \\partial y}, {\\partial \\over \\partial z} \\right)\n"
},
{
"math_id": 230,
"text": "\\mathbb{R}^3"
},
{
"math_id": 231,
"text": "\n\\iota_X\\alpha = g(X,\\alpha^{\\sharp}) = X\\cdot \\alpha^{\\sharp}\n"
},
{
"math_id": 232,
"text": "\n\\mathbf{det}(X,Y,Z)=\\langle X,Y\\times Z\\rangle = \\langle X\\times Y,Z\\rangle\n"
},
{
"math_id": 233,
"text": "\nX\\times Y = ({\\star}(X^{\\flat}\\wedge Y^{\\flat}))^{\\sharp}\n"
},
{
"math_id": 234,
"text": "\n\\iota_X\\alpha=-(X\\times A)^{\\flat}\n"
},
{
"math_id": 235,
"text": "\\alpha\\in\\Omega^2(M),\\ A=({\\star}\\alpha)^{\\sharp}"
},
{
"math_id": 236,
"text": "\nX\\cdot Y = {\\star}(X^{\\flat}\\wedge {\\star} Y^{\\flat})\n"
},
{
"math_id": 237,
"text": "\n\\nabla f=(df)^{\\sharp}\n"
},
{
"math_id": 238,
"text": "\nX\\cdot\\nabla f=df(X)\n"
},
{
"math_id": 239,
"text": "\n\\nabla\\cdot X = {\\star} d {\\star} X^{\\flat} = -\\delta X^{\\flat}\n"
},
{
"math_id": 240,
"text": "\n\\nabla\\times X = ({\\star} d X^{\\flat})^{\\sharp}\n"
},
{
"math_id": 241,
"text": "\n\\langle X,N\\rangle\\sigma = {\\star} X^\\flat\n"
},
{
"math_id": 242,
"text": "\\sigma=\\iota_{N}\\mathbf{det}"
},
{
"math_id": 243,
"text": "\n\\int_{\\Sigma} d{\\star} X^{\\flat} = \\int_{\\partial\\Sigma}{\\star} X^{\\flat} = \\int_{\\partial\\Sigma}\\langle X,N\\rangle\\sigma\n"
},
{
"math_id": 244,
"text": "\n\\mathcal{L}_X f =X\\cdot \\nabla f\n"
},
{
"math_id": 245,
"text": "\n\\mathcal{L}_X \\alpha = (\\nabla_X\\alpha^{\\sharp})^{\\flat} +g(\\alpha^{\\sharp},\\nabla X)\n"
},
{
"math_id": 246,
"text": "\n{\\star}\\mathcal{L}_X\\beta = \\left( \\nabla_XB - \\nabla_BX + (\\text{div}X)B \\right)^{\\flat}\n"
},
{
"math_id": 247,
"text": "B=({\\star}\\beta)^{\\sharp}"
},
{
"math_id": 248,
"text": "2"
},
{
"math_id": 249,
"text": "3"
},
{
"math_id": 250,
"text": "\n{\\star}\\mathcal{L}_X\\rho = dq(X)+(\\text{div}X)q\n"
},
{
"math_id": 251,
"text": "\\rho={\\star} q \\in \\Omega^0(M)"
},
{
"math_id": 252,
"text": "\n\\mathcal{L}_X(\\mathbf{det})=(\\text{div}(X))\\mathbf{det}\n"
}
] |
https://en.wikipedia.org/wiki?curid=58083234
|
58092933
|
Theorem of the highest weight
|
In representation theory, a branch of mathematics, the theorem of the highest weight classifies the irreducible representations of a complex semisimple Lie algebra formula_0. There is a closely related theorem classifying the irreducible representations of a connected compact Lie group formula_1. The theorem states that there is a bijection
formula_2
from the set of "dominant integral elements" to the set of equivalence classes of irreducible representations of formula_0 or formula_1. The difference between the two results is in the precise notion of "integral" in the definition of a dominant integral element. If formula_1 is simply connected, this distinction disappears.
The theorem was originally proved by Élie Cartan in his 1913 paper. The version of the theorem for a compact Lie group is due to Hermann Weyl. The theorem is one of the key pieces of representation theory of semisimple Lie algebras.
Statement.
Lie algebra case.
Let formula_3 be a finite-dimensional semisimple complex Lie algebra with Cartan subalgebra formula_4. Let formula_5 be the associated root system. We then say that an element formula_6 is integral if
formula_7
is an integer for each root formula_8. Next, we choose a set formula_9 of positive roots and we say that an element formula_6 is dominant if formula_10 for all formula_11. An element formula_6 is dominant integral if it is both dominant and integral. Finally, if formula_12 and formula_13 are in formula_14, we say that formula_12 is higher than formula_13 if formula_15 is expressible as a linear combination of positive roots with non-negative real coefficients.
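For a concrete root system these conditions are easy to test numerically. The sketch below realises the rank-two root system A2 by two explicit simple roots in the plane and checks a candidate weight against them (for dominance and integrality it suffices to test the simple roots); the candidate weight is a hypothetical choice.
```python
import numpy as np

# Simple roots of the A2 root system, realised in the Euclidean plane.
alpha1 = np.array([1.0, 0.0])
alpha2 = np.array([-0.5, np.sqrt(3) / 2])
simple_roots = [alpha1, alpha2]

lam = alpha1 + alpha2        # hypothetical candidate weight (the sum of the simple roots)

def coefficient(lam, alpha):
    """The quantity 2<lambda, alpha> / <alpha, alpha> used in the definitions above."""
    return 2 * (lam @ alpha) / (alpha @ alpha)

coeffs = [coefficient(lam, a) for a in simple_roots]
integral = all(abs(c - round(c)) < 1e-9 for c in coeffs)
dominant = all(c >= -1e-9 for c in coeffs)
print(coeffs, integral and dominant)   # values close to [1, 1] and True: lambda is dominant integral
```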
A weight formula_12 of a representation formula_16 of formula_0 is then called a highest weight if formula_12 is higher than every other weight formula_13 of formula_16.
The theorem of the highest weight then states:
Every finite-dimensional irreducible representation of formula_0 has a highest weight; this highest weight is dominant integral; two finite-dimensional irreducible representations with the same highest weight are isomorphic; and, conversely, every dominant integral element arises as the highest weight of some finite-dimensional irreducible representation.
The most difficult part is the last one: the construction of a finite-dimensional irreducible representation with a prescribed highest weight.
The compact group case.
Let formula_1 be a connected compact Lie group with Lie algebra formula_17 and let formula_18 be the complexification of formula_0. Let formula_19 be a maximal torus in formula_1 with Lie algebra formula_20. Then formula_21 is a Cartan subalgebra of formula_0, and we may form the associated root system formula_5. The theory then proceeds in much the same way as in the Lie algebra case, with one crucial difference: the notion of integrality is different. Specifically, we say that an element formula_22 is analytically integral if
formula_23
is an integer whenever
formula_24
where formula_25 is the identity element of formula_1. Every analytically integral element is integral in the Lie algebra sense, but there may be integral elements in the Lie algebra sense that are not analytically integral. This distinction reflects the fact that if formula_1 is not simply connected, there may be representations of formula_0 that do not come from representations of formula_1. On the other hand, if formula_1 is simply connected, the notions of "integral" and "analytically integral" coincide.
The theorem of the highest weight for representations of formula_1 is then the same as in the Lie algebra case, except that "integral" is replaced by "analytically integral."
Proofs.
There are at least four proofs:
Notes.
|
[
{
"math_id": 0,
"text": "\\mathfrak g"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "\\lambda \\mapsto [V^\\lambda]"
},
{
"math_id": 3,
"text": "\\mathfrak{g}"
},
{
"math_id": 4,
"text": "\\mathfrak{h}"
},
{
"math_id": 5,
"text": "R"
},
{
"math_id": 6,
"text": "\\lambda\\in\\mathfrak h^*"
},
{
"math_id": 7,
"text": "2\\frac{\\langle\\lambda,\\alpha\\rangle}{\\langle\\alpha,\\alpha\\rangle}"
},
{
"math_id": 8,
"text": "\\alpha"
},
{
"math_id": 9,
"text": "R^+"
},
{
"math_id": 10,
"text": "\\langle\\lambda,\\alpha\\rangle\\geq 0"
},
{
"math_id": 11,
"text": "\\alpha\\in R^+"
},
{
"math_id": 12,
"text": "\\lambda"
},
{
"math_id": 13,
"text": "\\mu"
},
{
"math_id": 14,
"text": "\\mathfrak h^*"
},
{
"math_id": 15,
"text": "\\lambda-\\mu"
},
{
"math_id": 16,
"text": "V"
},
{
"math_id": 17,
"text": "\\mathfrak k"
},
{
"math_id": 18,
"text": "\\mathfrak g:=\\mathfrak k+i\\mathfrak k"
},
{
"math_id": 19,
"text": "T"
},
{
"math_id": 20,
"text": "\\mathfrak t"
},
{
"math_id": 21,
"text": "\\mathfrak h:=\\mathfrak t+i\\mathfrak t"
},
{
"math_id": 22,
"text": "\\lambda\\in\\mathfrak h"
},
{
"math_id": 23,
"text": "\\langle\\lambda,H\\rangle"
},
{
"math_id": 24,
"text": "e^{2\\pi H}=I"
},
{
"math_id": 25,
"text": "I"
}
] |
https://en.wikipedia.org/wiki?curid=58092933
|
5809298
|
Neighborhood semantics
|
Neighborhood semantics, also known as Scott–Montague semantics, is a formal semantics for modal logics. It is a generalization, developed independently by Dana Scott and Richard Montague, of the more widely known relational semantics for modal logic. Whereas a relational frame formula_0 consists of a set "W" of worlds (or states) and an accessibility relation "R" intended to indicate which worlds are alternatives to (or, accessible from) others, a neighborhood frame formula_1 still has a set "W" of worlds, but has instead of an accessibility relation a "neighborhood function"
formula_2
that assigns to each element of "W" a set of subsets of "W". Intuitively, the family of subsets assigned to a world consists of the propositions necessary at that world, where a 'proposition' is defined as a subset of "W" (i.e. the set of worlds at which the proposition is true). Specifically, if "M" is a model on the frame, then
formula_3
where
formula_4
is the "truth set" of formula_5.
Neighborhood semantics is used for the classical modal logics that are strictly weaker than the normal modal logic K.
Correspondence between relational and neighborhood models.
To every relational model "M" = ("W", "R", "V") there corresponds an equivalent (in the sense of having pointwise-identical modal theories) neighborhood model "M'" = ("W", "N", "V") defined by
formula_6
The fact that the converse fails gives a precise sense to the remark that neighborhood models are a generalization of relational ones. Another (perhaps more natural) generalization of relational structures are general frames.
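The clauses above are simple to implement. The sketch below evaluates truth sets in a small neighborhood model obtained from a relational model; the worlds, relation and valuation are hypothetical, and building N(w) from all supersets of the set of R-successors of w is one standard way of realising the correspondence (an assumption here, not a quotation from the article).
```python
from itertools import chain, combinations

W = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 2)}                 # hypothetical accessibility relation
V = {'p': frozenset({1, 2})}                 # valuation: worlds where p holds

def successors(w):
    return frozenset(v for (u, v) in R if u == w)

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Neighborhood function induced by R: every superset of the successor set is a neighborhood.
N = {w: {S for S in powerset(W) if successors(w) <= S} for w in W}

def truth_set(phi):
    """Truth set of a formula built from atoms, 'not', 'and' and 'box'."""
    op = phi[0]
    if op == 'atom':
        return V[phi[1]]
    if op == 'not':
        return frozenset(W) - truth_set(phi[1])
    if op == 'and':
        return truth_set(phi[1]) & truth_set(phi[2])
    if op == 'box':                          # M, w |= box phi  iff  (phi)^M is in N(w)
        inner = truth_set(phi[1])
        return frozenset(w for w in W if inner in N[w])
    raise ValueError(op)

print(sorted(truth_set(('box', ('atom', 'p')))))   # worlds at which box p holds
```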
|
[
{
"math_id": 0,
"text": "\\langle W,R\\rangle"
},
{
"math_id": 1,
"text": "\\langle W,N\\rangle"
},
{
"math_id": 2,
"text": " N : W \\to 2^{2^W} "
},
{
"math_id": 3,
"text": " M,w\\models\\square \\varphi \\Longleftrightarrow (\\varphi)^M \\in N(w), "
},
{
"math_id": 4,
"text": "(\\varphi)^M = \\{u\\in W \\mid M,u\\models \\varphi \\}"
},
{
"math_id": 5,
"text": "\\varphi"
},
{
"math_id": 6,
"text": " N(w) = \\{(\\varphi)^M \\mid M,w\\models\\Box \\varphi\\}. "
}
] |
https://en.wikipedia.org/wiki?curid=5809298
|
58094662
|
Optically detected magnetic resonance
|
In physics, optically detected magnetic resonance (ODMR) is a double resonance technique by which the electron spin state of a crystal defect may be optically pumped for spin initialisation and readout.
Like electron paramagnetic resonance (EPR), ODMR makes use of the Zeeman effect in unpaired electrons. The negatively charged nitrogen vacancy centre (NV−) has been the target of considerable interest with regards to performing experiments using ODMR.
ODMR of NV−s in diamond has applications in magnetometry and sensing, biomedical imaging, quantum information and the exploration of fundamental physics.
NV ODMR.
The nitrogen vacancy defect in diamond consists of a single substitutional nitrogen atom (replacing one carbon atom) and an adjacent gap, or vacancy, in the lattice where normally a carbon atom would be located.
The nitrogen vacancy occurs in three possible charge states: positive (NV+), neutral (NV0) and negative (NV−). As NV− is the only one of these charge states which has been shown to be ODMR active, it is often referred to simply as the NV.
The energy level structure of the NV− consists of a triplet ground state, a triplet excited state and two singlet states. Under resonant optical excitation, the NV may be raised from the triplet ground state to the triplet excited state. The centre may then return to the ground state via two routes: by the emission of a photon of 637 nm in the zero phonon line (ZPL) (or longer wavelength from the phonon sideband), or alternatively via the aforementioned singlet states through intersystem crossing and the emission of a 1042 nm photon. A return to the ground state via the latter route will preferentially result in the formula_0 state.
Relaxation to the formula_0 state necessarily results in a decrease in visible wavelength fluorescence (as the emitted photon is in the infrared range). Microwave pumping at a resonant frequency of formula_1 places the centre in the degenerate formula_2 state. The application of a magnetic field lifts this degeneracy, causing Zeeman splitting and the decrease of fluorescence at two resonant frequencies, given by formula_3, where formula_4 is the Planck constant, formula_5 is the electron g-factor and formula_6 is the Bohr magneton. Sweeping the microwave field through these frequencies results in two characteristic dips in the observed fluorescence, the separation between which enables determination of the strength of the magnetic field formula_7.
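The relation above translates directly between field strength and frequency shift. A minimal numerical sketch, assuming the field lies along the NV axis and using standard values for the constants (the field value itself is hypothetical):
```python
# Physical constants (SI units).
h = 6.62607015e-34        # Planck constant, J s
mu_B = 9.2740100783e-24   # Bohr magneton, J / T
g_e = 2.003               # electron g-factor of the NV centre (approximate)
D = 2.87e9                # zero-field resonance frequency, Hz

B0 = 1.0e-3               # hypothetical applied field of 1 mT along the NV axis

shift = g_e * mu_B * B0 / h               # frequency from h * nu = g_e * mu_B * B0
dips = (D - shift, D + shift)             # the two fluorescence dips sit on either side of D
print(f"shift = {shift / 1e6:.1f} MHz, dips near {dips[0] / 1e9:.4f} and {dips[1] / 1e9:.4f} GHz")
```
At 1 mT this gives a shift of roughly 28 MHz, so the two dips appear near 2.842 GHz and 2.898 GHz.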
Hyperfine splitting.
Further splitting in the fluorescence spectrum may occur due to the hyperfine interaction, which leads to further resonance conditions and corresponding spectral lines. In NV ODMR, this detailed structure usually originates from nitrogen and carbon-13 nuclei near the defect. These nuclei have small magnetic moments which interact with the electron spin of the NV, splitting the resonance lines further.
References.
|
[
{
"math_id": 0,
"text": "m_s = 0"
},
{
"math_id": 1,
"text": "\\nu = 2.87\\text{ }GHz"
},
{
"math_id": 2,
"text": "m_s = \\pm 1"
},
{
"math_id": 3,
"text": "h\\nu = g_e\\mu_{B} B_0"
},
{
"math_id": 4,
"text": "h"
},
{
"math_id": 5,
"text": "g_e"
},
{
"math_id": 6,
"text": "\\mu_B"
},
{
"math_id": 7,
"text": "B_0"
}
] |
https://en.wikipedia.org/wiki?curid=58094662
|
5809688
|
Structural dynamics
|
Structural dynamics is a type of structural analysis which covers the behavior of a structure subjected to dynamic (actions having high acceleration) loading. Dynamic loads include people, wind, waves, traffic, earthquakes, and blasts. Any structure can be subjected to dynamic loading. Dynamic analysis can be used to find dynamic displacements, time history, and modal analysis.
Structural analysis is mainly concerned with finding out the behavior of a physical structure when subjected to force. This action can be in the form of load due to the weight of things such as people, furniture, wind, snow, etc. or some other kind of excitation such as an earthquake, shaking of the ground due to a blast nearby, etc. In essence all these loads are dynamic, including the self-weight of the structure because at some point in time these loads were not there. The distinction is made between the dynamic and the static analysis on the basis of whether the applied action has enough acceleration in comparison to the structure's natural frequency. If a load is applied sufficiently slowly, the inertia forces (Newton's first law of motion) can be ignored and the analysis can be simplified as static analysis.
A static load is one which varies very slowly. A dynamic load is one which changes with time fairly quickly in comparison to the structure's natural frequency. If it changes slowly, the structure's response may be determined with static analysis, but if it varies quickly (relative to the structure's ability to respond), the response must be determined with a dynamic analysis.
Dynamic analysis for simple structures can be carried out manually, but for complex structures finite element analysis can be used to calculate the mode shapes and frequencies.
Displacements.
A dynamic load can have a significantly larger effect than a static load of the same magnitude due to the structure's inability to respond quickly to the loading (by deflecting). The increase in the effect of a dynamic load is given by the dynamic amplification factor (DAF) or dynamic load factor (DLF):
formula_0
where "u" is the deflection of the structure due to the applied load.
Graphs of dynamic amplification factors vs non-dimensional rise time ("t""r"/"T") exist for standard loading functions (for an explanation of rise time, see time history analysis below). Hence the DAF for a given loading can be read from the graph, the static deflection can be easily calculated for simple structures and the dynamic deflection found.
Time history analysis.
A full time history will give the response of a structure over time during and after the application of a load. To find the full time history of a structure's response, you must solve the structure's equation of motion.
Example.
A simple single degree of freedom system (a mass, "M", on a spring of stiffness "k", for example) has the following equation of motion:
formula_1
where formula_2 is the acceleration (the second derivative of the displacement with respect to time) and x is the displacement.
If the loading "F"("t") is a Heaviside step function (the sudden application of a constant load), the solution to the equation of motion is:
formula_3
where formula_4 and the fundamental natural frequency, formula_5.
The static deflection of a single degree of freedom system is:
formula_6
so we can write, by combining the above formulae:
formula_7
This gives the (theoretical) time history of the structure due to a load F(t), under the simplifying assumption that there is no damping.
Although this is too simplistic to apply to a real structure, the Heaviside step function is a reasonable model for the application of many real loads, such as the sudden addition of a piece of furniture, or the removal of a prop to a newly cast concrete floor. However, in reality loads are never applied instantaneously – they build up over a period of time (this may be very short indeed). This time is called the rise time.
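A short numerical sketch of the step-load example above, assuming SciPy is available: it integrates the equation of motion directly and compares the result with the closed-form solution; the mass, stiffness and load level are hypothetical.
```python
import numpy as np
from scipy.integrate import solve_ivp

M, k, F0 = 10.0, 4000.0, 100.0            # hypothetical mass (kg), stiffness (N/m), step load (N)
omega = np.sqrt(k / M)                    # natural circular frequency
x_static = F0 / k

def rhs(t, y):
    x, v = y
    return [v, (F0 - k * x) / M]          # M x'' + k x = F0 (suddenly applied, no damping)

t = np.linspace(0.0, 2.0, 2001)
sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)

x_exact = x_static * (1.0 - np.cos(omega * t))
assert np.allclose(sol.y[0], x_exact, atol=1e-6)
print("maximum dynamic amplification:", sol.y[0].max() / x_static)   # close to 2 for a step load
```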
As the number of degrees of freedom of a structure increases it very quickly becomes too difficult to calculate the time history manually – real structures are analysed using non-linear finite element analysis software.
Damping.
Any real structure will dissipate energy (mainly through friction). This can be modelled by modifying the DAF
formula_8
where formula_9 and is typically 2–10% depending on the type of construction:
Methods to increase damping
One of the widely used methods to increase damping is to attach a layer of material with a high Damping Coefficient, for example rubber, to a vibrating structure.
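The modified amplification factor given above falls off quickly with damping. A small sketch, using damping ratios from the quoted range:
```python
import math

for c in (0.02, 0.05, 0.10):                    # damping ratios of 2%, 5% and 10%
    daf = 1.0 + math.exp(-c * math.pi)          # DAF = 1 + e^(-c*pi)
    print(f"c = {c:.2f}  ->  DAF = {daf:.3f}")  # 1.939, 1.855 and 1.730 respectively
```
For zero damping this expression recovers the undamped value of 2 for a suddenly applied load.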
Modal analysis.
A modal analysis calculates the frequency modes or natural frequencies of a given system, but not necessarily its full time-history response to a given input. The natural frequency of a system is dependent only on the stiffness of the structure and the mass which participates with the structure (including self-weight). It is not dependent on the load function.
It is useful to know the modal frequencies of a structure as it allows you to ensure that the frequency of any applied periodic loading will not coincide with a modal frequency and hence cause resonance, which leads to large oscillations.
The method is:
Energy method.
It is possible to calculate the frequency of different mode shape of system manually by the energy method. For a given mode shape of a multiple degree of freedom system you can find an "equivalent" mass, stiffness and applied force for a single degree of freedom system. For simple structures the basic mode shapes can be found by inspection, but it is not a conservative method. Rayleigh's principle states:
"The frequency ω of an arbitrary mode of vibration, calculated by the energy method, is always greater than – or equal to – the fundamental frequency "ω""n"."
For an assumed mode shape formula_10 of a structural system with mass "M"; bending stiffness, "EI" (Young's modulus, "E", multiplied by the second moment of area, "I"); and applied force, "F"("x"):
formula_11
formula_12
formula_13
then, as above:
formula_14
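A symbolic sketch of the energy method for a hypothetical case: a simply supported beam of length L with uniform mass per unit length m and assumed mode shape sin(pi x / L). These choices are illustrative and not from the article.
```python
import sympy as sp

x, L, m, E, I = sp.symbols('x L m E I', positive=True)
u = sp.sin(sp.pi * x / L)                                    # assumed mode shape

M_eq = sp.integrate(m * u**2, (x, 0, L))                     # equivalent mass
k_eq = sp.integrate(E * I * sp.diff(u, x, 2)**2, (x, 0, L))  # equivalent stiffness

omega = sp.sqrt(k_eq / M_eq)
print(sp.simplify(omega))   # pi**2 * sqrt(E*I/m) / L**2, the classical first frequency of this beam
```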
Modal response.
The complete modal response to a given load "F"("x","t") is formula_15. The summation can be carried out by one of three common methods:
To superpose the individual modal responses manually, having calculated them by the energy method:
Assuming that the rise time tr is known ("T" = 2π/"ω"), it is possible to read the DAF from a standard graph. The static displacement can be calculated with formula_16. The dynamic displacement for the chosen mode and applied force can then be found from:
formula_17
Modal participation factor.
For real systems there is often mass participating in the forcing function (such as the mass of ground in an earthquake) and mass participating in inertia effects (the mass of the structure itself, "M"eq). The modal participation factor Γ is a comparison of these two masses. For a single degree of freedom system Γ = 1.
formula_18
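A tiny numerical sketch of the participation factor for a hypothetical two-degree-of-freedom system:
```python
import numpy as np

M_n = np.array([2.0, 1.0])     # lumped masses (hypothetical)
u_n = np.array([0.5, 1.0])     # mode-shape ordinates at those masses

gamma = (M_n * u_n).sum() / (M_n * u_n**2).sum()
print(gamma)                   # about 1.33, compared with gamma = 1 for a single degree of freedom
```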
|
[
{
"math_id": 0,
"text": " \\text{DAF} = \\text{DLF} = \\frac{u_{\\max}}{u_\\text{static}}"
},
{
"math_id": 1,
"text": "M \\ddot{x} + kx = F(t)"
},
{
"math_id": 2,
"text": "\\ddot{x}"
},
{
"math_id": 3,
"text": "x = \\frac{F_0} k [1 - \\cos(\\omega t)]"
},
{
"math_id": 4,
"text": "\\omega = \\sqrt{\\frac{k}{M}}"
},
{
"math_id": 5,
"text": "f = \\frac {\\omega}{2\\pi}"
},
{
"math_id": 6,
"text": "x_\\text{static} = \\frac{F_0}{k}"
},
{
"math_id": 7,
"text": "x = x_\\text{static}[1 - \\cos(\\omega t)]"
},
{
"math_id": 8,
"text": " \\text{DAF} = 1 + e^{-c\\pi}"
},
{
"math_id": 9,
"text": "c=\\frac{\\text {damping coefficient}}{\\text{critical damping coefficient}}"
},
{
"math_id": 10,
"text": "\\bar{u}(x)"
},
{
"math_id": 11,
"text": "\\text{Equivalent mass, } M_\\text{eq} = \\int M \\bar{u}^2 \\, du "
},
{
"math_id": 12,
"text": "\\text{Equivalent stiffness, } k_\\text{eq} = \\int EI \\left(\\frac{d^2\\bar{u}}{dx^2} \\right)^2 \\, dx"
},
{
"math_id": 13,
"text": "\\text{Equivalent force, } F_\\text{eq} = \\int F\\bar{u} \\, dx"
},
{
"math_id": 14,
"text": "\\omega = \\sqrt{\\frac{k_\\text{eq}}{M_\\text{eq}}}"
},
{
"math_id": 15,
"text": "v(x,t)=\\sum u_n(x,t) "
},
{
"math_id": 16,
"text": "u_\\text{static}=\\frac{F_{1,\\text{eq}}}{k_{1,\\text{eq}}}"
},
{
"math_id": 17,
"text": "u_{\\max} = u_\\text{static} \\text{DAF}"
},
{
"math_id": 18,
"text": " \\Gamma = \\frac{\\sum M_n\\bar{u}_n }{\\sum M_n\\bar{u}_n^2 }"
}
] |
https://en.wikipedia.org/wiki?curid=5809688
|