Reciprocity (optoelectronic)
Relation between properties of diodes

Optoelectronic reciprocity relations relate the properties of a diode under illumination to the photon emission of the same diode under applied voltage. The relations are useful for the interpretation of luminescence-based measurements of solar cells and modules and for the analysis of recombination losses in solar cells. Basics. Solar cells and light-emitting diodes are both semiconducting diodes that are operated in different voltage and illumination regimes and that serve different purposes. A solar cell is operated under illumination (usually solar radiation) and is typically kept at the maximum power point, where the product of current and voltage is maximized. A light-emitting diode is operated at an applied forward bias (without external illumination). While a solar cell converts the energy contained in the electromagnetic waves of the incoming solar radiation into electric power (voltage × current), a light-emitting diode does the inverse, converting electrical power into electromagnetic radiation. A solar cell and a light-emitting diode are typically made from different materials and optimized for different purposes; however, conceptually every solar cell could be operated as a light-emitting diode and vice versa. Given this high symmetry between the two operation principles, it is fair to assume that the key figures of merit used to characterize the photovoltaic and luminescent operation of diodes are related to each other. These relations become particularly simple in situations where recombination rates scale linearly with minority-carrier density, and they are explained below. Reciprocity between the photovoltaic quantum efficiency and the electroluminescence spectrum of a pn-junction diode. The photovoltaic quantum efficiency formula_0 is a spectral quantity that is generally measured as a function of photon energy (or wavelength). 
The same is true for the electroluminescence spectrum formula_1 of a light-emitting diode under applied forward voltage formula_2. Under certain conditions specified below, these two properties measured on the same diode are connected via the equation formula_3 (1) where formula_4 is the black-body spectrum emitted by a surface (the diode) into the hemisphere above the diode, in units of photons per area, time and energy interval. In this case the black-body spectrum is given by formula_5 where formula_6 is the Boltzmann constant, formula_7 is Planck's constant, formula_8 is the speed of light in vacuum, and formula_9 is the temperature of the diode. This simple relation is useful for the analysis of solar cells using luminescence-based characterization methods. Luminescence is useful for characterizing solar cells because the luminescence of solar cells and modules can be imaged in short periods of time, whereas spatially resolved measurements of photovoltaic properties (such as photocurrent or photovoltage) would be very time-consuming and technically difficult. Equation (1) is valid for the practically relevant situation where the neutral base region of a pn-junction makes up most of the volume of the diode. Typically, the thickness of a crystalline Si solar cell is ~200 μm, while the thickness of the emitter and space-charge region is only on the order of hundreds of nanometers, i.e. three orders of magnitude thinner. In the base of a pn-junction, recombination is typically linear in the minority-carrier concentration over a large range of injection conditions, and charge-carrier transport is by diffusion. In this situation the Donolato theorem holds, which states that the collection efficiency formula_10 is related to the normalized minority-carrier concentration formula_11 via formula_12 where formula_13 is a spatial coordinate and formula_14 defines the position of the edge of the space-charge region (where the neutral zone and the space-charge region connect). Thus, if formula_15, the collection efficiency is one. Further away from the edge of the space-charge region, the collection efficiency becomes smaller than one, depending on the distance and the amount of recombination happening in the neutral zone. The same holds for the electron concentration in the dark under applied bias: here, the electron concentration also decreases from the edge of the space-charge region towards the back contact. This decrease, like the collection efficiency, is approximately exponential (with the diffusion length controlling the decay). The Donolato theorem is based on the principle of detailed balance and connects the processes of charge-carrier injection (relevant in the luminescent mode of operation) and charge-carrier extraction (relevant in the photovoltaic mode of operation). In addition, the detailed balance between absorption of photons and radiative recombination can be expressed mathematically using the van Roosbroeck–Shockley equation as formula_16 Here, formula_17 is the absorption coefficient, formula_18 is the radiative recombination coefficient, formula_19 is the refractive index, and formula_20 is the intrinsic charge-carrier concentration. A derivation of equation (1) can be found in ref. The reciprocity relation (eq. (1)) is only valid if absorption and emission are dominated by the neutral region of the pn-junction shown in the adjacent figure. This is a good approximation for crystalline silicon solar cells, and the method can also be used for copper indium gallium selenide solar cells. 
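As an illustration, the black-body spectrum and the reciprocity relation of equation (1) can be evaluated numerically. The following is a minimal Python sketch (hypothetical function names; SI units throughout, diode temperature assumed to be 300 K):

```python
import numpy as np

# Physical constants (SI units)
q = 1.602176634e-19   # elementary charge, C
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light in vacuum, m/s

def phi_bb(E, T=300.0):
    """Black-body photon flux emitted into the hemisphere above a surface,
    per unit area, time and energy interval (photon energy E in joules)."""
    return (2.0 * np.pi / (h**3 * c**2)) * E**2 / np.expm1(E / (k * T))

def phi_el(E, Q_e_pv, V, T=300.0):
    """Electroluminescence spectrum predicted by the reciprocity relation
    (eq. 1) from the photovoltaic quantum efficiency Q_e_pv at voltage V."""
    return Q_e_pv * phi_bb(E, T) * np.expm1(q * V / (k * T))
```

For forward voltages well above kT/q the bracket in equation (1) is dominated by the exponential, so raising the voltage by 0.1 V boosts the predicted emission by a factor of roughly exp(0.1q/kT) ≈ 48 at room temperature.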
However, the equation has limitations when applied to solar cells where the space-charge region is of comparable size to the total absorber volume. This is the case, for instance, for organic solar cells or amorphous Si solar cells. The reciprocity relation is also invalid if the emission of the solar cell originates not from delocalized conduction- and valence-band states (as is the case for most mono- and polycrystalline semiconductors) but from localized (defect) states. This limitation is relevant for microcrystalline and amorphous silicon solar cells. Reciprocity between the open-circuit voltage of a solar cell and the external luminescence quantum efficiency. The open-circuit voltage formula_21 of a solar cell is the voltage created by a certain amount of illumination when the contacts of the solar cell are not connected, i.e. in open circuit. The voltage that can build up in such a situation is directly connected to the density of electrons and holes in the device. These densities in turn depend on the rate of photogeneration (determined by the amount of illumination) and the rates of recombination. The rate of photogeneration is usually set by the typically used illumination with white light at a power density of 100 mW/cm2 (called one sun) and by the band gap of the solar cell, and it does not change much between different devices of the same type. The rate of recombination, however, may vary over orders of magnitude depending on the quality of the material and the interfaces. Thus, the open-circuit voltage depends quite drastically on the rates of recombination at a given concentration of charge carriers. The highest possible open-circuit voltage, the radiative open-circuit voltage formula_22, is obtained if all recombination is radiative and non-radiative recombination is negligible. This is the ideal situation, because radiative recombination cannot be avoided other than by avoiding light absorption (principle of detailed balance). 
However, since absorption is a key requirement for a solar cell and is also necessary to achieve a high concentration of electrons and holes, radiative recombination is a necessity (see the van Roosbroeck–Shockley equation). If non-radiative recombination is substantial, the open-circuit voltage is reduced depending on the ratio between the radiative and non-radiative recombination currents (where the recombination currents are the integrals of the recombination rates over volume). This leads to a second reciprocity relation between the photovoltaic and the luminescent operation modes of a solar cell, because the ratio of radiative to total (radiative and non-radiative) recombination current is the external luminescence quantum efficiency formula_23 of a (light-emitting) diode. Mathematically, this relation is expressed as formula_24 Thus, any reduction of the external luminescence quantum efficiency by one order of magnitude leads to a reduction of the open-circuit voltage (relative to formula_22) by formula_25. Equation (2) is frequently used in the literature on solar cells, for instance for an improved understanding of the open-circuit voltage in organic solar cells and for comparing voltage losses between different photovoltaic technologies. References.
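The roughly 60 mV of open-circuit voltage lost per decade of external luminescence quantum efficiency is easy to check numerically. Below is a minimal Python sketch of equation (2) (hypothetical function name; a cell temperature of 300 K is assumed):

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C

def voc_deficit(Q_e_lum, T=300.0):
    """Open-circuit voltage deficit relative to the radiative limit,
    V_oc,rad - V_oc = -(kT/q) * ln(Q_e_lum), in volts (eq. 2)."""
    return -(k * T / q) * math.log(Q_e_lum)
```

A cell with Q_e,lum = 10^-3 would give up about 0.18 V relative to its radiative limit, while a perfect emitter (Q_e,lum = 1) loses nothing.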
[ { "math_id": 0, "text": "Q_{e,PV}" }, { "math_id": 1, "text": "\\phi_{EL}" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "\\phi_{EL}=Q_{e,PV}\\phi_{bb}[\\exp{\\frac{qV}{kT}}-1]" }, { "math_id": 4, "text": "\\phi_{bb}" }, { "math_id": 5, "text": "\\phi_{bb}=\\frac{2\\pi}{h^3c^2}\\frac{E^2}{\\exp{E/kT}-1}" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "h" }, { "math_id": 8, "text": "c" }, { "math_id": 9, "text": "T" }, { "math_id": 10, "text": "f_c" }, { "math_id": 11, "text": "\\delta n(x)/\\delta n(x=x_j)" }, { "math_id": 12, "text": "f_c(x)=\\frac{\\delta n(x)}{\\delta n(x=x_j)}" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "x_j" }, { "math_id": 15, "text": "x=x_j" }, { "math_id": 16, "text": "k_{rad}n_i^2=\\int\\alpha4n_r^2\\phi_{bb}dE" }, { "math_id": 17, "text": "\\alpha" }, { "math_id": 18, "text": "k_{rad}" }, { "math_id": 19, "text": "n_r" }, { "math_id": 20, "text": "n_i" }, { "math_id": 21, "text": "V_{oc}" }, { "math_id": 22, "text": "V_{oc,rad}" }, { "math_id": 23, "text": "Q_{e,lum}" }, { "math_id": 24, "text": "qV_{oc,rad}-qV_{oc}=-kT\\ln{Q_{e,lum}} (2) " }, { "math_id": 25, "text": "kT/q\\times \\ln(10)\\approx 60 mV" } ]
https://en.wikipedia.org/wiki?curid=58681816
Kron reduction
In power engineering, Kron reduction is a method used to eliminate undesired nodes from a network's admittance matrix without repeating the full elimination steps of Gaussian elimination. It is named after the American electrical engineer Gabriel Kron. Description. Kron reduction is a useful tool for eliminating unused nodes in a Y-parameter matrix. For example, three linear elements linked in series with a port at each end may easily be modeled as a 4×4 nodal admittance matrix of Y-parameters, but only the two port nodes normally need to be considered for modeling and simulation. Kron reduction may be used to eliminate the internal nodes, thereby reducing the 4th-order Y-parameter matrix to a 2nd-order Y-parameter matrix. The 2nd-order Y-parameter matrix is then more easily converted to a Z-parameter matrix or S-parameter matrix when needed. Matrix operations. Consider a general Y-parameter matrix that may be created from a combination of linear elements constructed such that two internal nodes exist. formula_0 While it is possible to use the 4×4 matrix in simulations or to construct a 4×4 S-parameter matrix, it may be simpler to reduce the Y-parameter matrix to a 2×2 by eliminating the two internal nodes through Kron reduction, and then to simulate with the 2×2 matrix and/or convert it to a 2×2 S-parameter or Z-parameter matrix. formula_1 The process for executing a Kron reduction is as follows: select the kth row/column, which models the undesired internal node to be eliminated; apply the formula below to all matrix entries that do not reside on the kth row and column; then simply remove the kth row and column of the matrix, which reduces the size of the matrix by one. Kron reduction for the kth row/column of an N×N matrix: formula_2 Linear elements that are also passive always form a symmetric Y-parameter matrix, that is, formula_3 in all cases. The number of computations of a Kron reduction may be reduced by taking advantage of this symmetry, as shown in the equation below. 
Kron reduction for symmetric N×N matrices: formula_4 Once all the matrix entries have been modified by the Kron reduction equation, the kth row/column may be eliminated, and the matrix order is reduced by one. Repeat for all internal nodes that are to be eliminated. Simplified theory and derivation. The concept behind Kron reduction is quite simple. Y-parameters are measured with the other nodes shorted to ground, but unused nodes, that is, nodes without ports, are not necessarily grounded, and their state is not directly known to the outside. Therefore, the Y-parameter matrix of the full network does not adequately describe the network being modeled, and it contains extraneous entries if some nodes do not have ports. Consider the case of two lumped elements of equal value in series, for example two resistors of equal resistance. If each resistor has an admittance of formula_5, the series network has an admittance of formula_6. The full admittance matrix that accounts for all three nodes in the network would look like the matrix below, using standard Y-parameter matrix construction techniques: formula_7 However, it is easily observed that the two resistors in series, each with an assigned admittance of formula_5, have a net admittance of formula_6, and, since resistors do not leak current to ground, that the off-diagonal entries of the port matrix are equal and opposite to its diagonal entries. The two-port network without the middle node can therefore be created by inspection and is shown below: formula_8 Since row and column 2 of the formula_9 matrix are to be eliminated, we can rewrite formula_9 without row 2 and column 2. We will call this rewritten matrix formula_10. formula_11 Now we have a basis for creating the translation equation by finding an equation that translates each entry in formula_10 to the corresponding entry in formula_12: formula_13 For each of the four entries, it can be observed that subtracting formula_14 from the left-of-arrow value successfully makes the translation. 
Since formula_14 is identical to formula_15, each entry of formula_10 meets the condition formula_16 shown in the general translation equations. The same process may be used for elements of arbitrary admittance (formula_17, etc.) and networks of arbitrary size, but the algebra becomes more complex. The trick is to deduce and/or calculate an expression that translates the original matrix entries to the reduced matrix entries. References.
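The reduction step described above can be sketched in a few lines of Python with NumPy (hypothetical function name; 0-based indexing, and assuming the general replacement rule Y'_ij = Y_ij - Y_ik * Y_kj / Y_kk, which reduces to the symmetric form in the text when Y_jk = Y_kj):

```python
import numpy as np

def kron_reduce(Y, k):
    """Eliminate node k from an N x N admittance matrix:
    Y'_ij = Y_ij - Y_ik * Y_kj / Y_kk, then drop row and column k."""
    Y = np.asarray(Y, dtype=float)
    Yr = Y - np.outer(Y[:, k], Y[k, :]) / Y[k, k]
    return np.delete(np.delete(Yr, k, axis=0), k, axis=1)

# The two-resistor example from the text, with Y_R = 1:
Y_full = [[ 0.0, -1.0,  1.0],
          [-1.0,  2.0, -1.0],
          [ 1.0, -1.0,  0.0]]
Y_ports = kron_reduce(Y_full, 1)  # eliminate the middle node (index 1)
# Y_ports == [[-0.5, 0.5], [0.5, -0.5]], the port matrix built by inspection
```

Repeating the call removes further internal nodes one at a time, matching the "repeat for all internal nodes" step above.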
[ { "math_id": 0, "text": "Y {_4}{_X}{_4} = \n\\begin{bmatrix} \nY {_1}{_1} & Y {_1}{_2} & Y {_1}{_3} & Y {_1}{_4} \\\\\nY {_2}{_1} & Y {_2}{_2} & Y {_2}{_3} & Y {_2}{_4} \\\\\nY {_3}{_1} & Y {_3}{_2} & Y {_3}{_3} & Y {_3}{_4} \\\\\nY {_4}{_1} & Y {_4}{_2} & Y {_4}{_3} & Y {_4}{_4} \\\\\n\\end{bmatrix}" }, { "math_id": 1, "text": "Y' {_2}{_X}{_2} = \n\\begin{bmatrix} Y' {_1}{_1} & Y' {_1}{_2} \\\\ Y' {_2}{_1} & Y' {_2}{_2} \\end{bmatrix} \\qquad \\mathrm{Where\\ Y' \\ is \\ the \\ Kron \\ Reduced \\ Matrix}" }, { "math_id": 2, "text": "\\begin{align} \n\\sum_{i=1}^N\\sum_{j=1}^N Y' {_i}{_j} = Y{_i}{_j} - \\frac{Y{_i}{_k}Y{_j}{_k}}{Y{_k}{_k}} ,&\\qquad for\\ i\\neq k, ,j\\neq k \\\\\n&\\qquad \\mathrm{Where \\ Y' \\ is \\ The\\ Replacement \\ Matrix \\ Entry} \\\\\n\\end{align}" }, { "math_id": 3, "text": "Y {_i}{_j}=Y {_j}{_i}" }, { "math_id": 4, "text": "\\begin{align} \n&\\sum_{i=1}^N\\sum_{j=i}^N Y'{_i}{_j} = Y{_i}{_j} - \\frac{Y{_i}{_k}Y{_j}{_k}}{Y{_k}{_k}},\\qquad for\\ i\\neq k,j\\neq k \\\\ \n&Y{_j}{_i} = Y{_i}{_j}, \\qquad for\\ i\\neq j \\\\ \n\\end{align}" }, { "math_id": 5, "text": "Y_R" }, { "math_id": 6, "text": "Y_R/2" }, { "math_id": 7, "text": "Y_{FULL} = \n\\begin{bmatrix} 0 & -Y_R & Y_R \\\\ -Y_R & 2Y_R & -Y_R \\\\ Y_R & -Y_R & 0 \\end{bmatrix}" }, { "math_id": 8, "text": "Y_{PORTS} = \n\\begin{bmatrix} -Y_R/2 & Y_R/2 \\\\ Y_R/2 & -Y_R/2 \\end{bmatrix}" }, { "math_id": 9, "text": "Y_{FULL}" }, { "math_id": 10, "text": "Y'_{FULL}" }, { "math_id": 11, "text": "Y'_{FULL} = \n\\begin{bmatrix} 0 & Y_R \\\\ Y_R & 0 \\end{bmatrix}" }, { "math_id": 12, "text": "Y_{PORTS}" }, { "math_id": 13, "text": "\\begin{bmatrix} 0 \\Rightarrow -Y_R/2 & Y_R \\Rightarrow Y_R/2 \\\\ Y_R \\Rightarrow Y_R/2 & 0 \\Rightarrow -Y_R/2 \\end{bmatrix}" }, { "math_id": 14, "text": " Y_R/2" }, { "math_id": 15, "text": "{Y_R}^2 /(2Y_R)" }, { "math_id": 16, "text": " Y'_{ij} = Y_{ij} - Y_{ij}Y_{ji} /Y_{kk} " }, { "math_id": 17, "text": " Y_{11} \\neq -Y_{12} , Y_{ij} \\neq Y_{ji}" 
} ]
https://en.wikipedia.org/wiki?curid=58685207
Introduction to electromagnetism
Non-technical introduction to topics in electromagnetism

Electromagnetism is one of the fundamental forces of nature. Early on, electricity and magnetism were studied separately and regarded as separate phenomena. Hans Christian Ørsted discovered that the two were related: electric currents give rise to magnetism. Michael Faraday discovered the converse, that magnetism could induce electric currents, and James Clerk Maxwell put the whole thing together in a unified theory of electromagnetism. Maxwell's equations further indicated that electromagnetic waves existed, and the experiments of Heinrich Hertz confirmed this, making radio possible. Maxwell also postulated, correctly, that light was a form of electromagnetic wave, thus making all of optics a branch of electromagnetism. Radio waves differ from light only in that the wavelength of the former is much longer than that of the latter. Albert Einstein showed that the magnetic field arises through the relativistic motion of the electric field and thus magnetism is merely a side effect of electricity. The modern theoretical treatment of electromagnetism is as a quantum field in quantum electrodynamics. In many situations of interest to electrical engineering, it is not necessary to apply quantum theory to get correct results. Classical physics is still an accurate approximation in most situations involving macroscopic objects. With few exceptions, quantum theory is only necessary at the atomic scale, and a simpler classical treatment can be applied elsewhere. Further simplifications of treatment are possible in limited situations. Electrostatics deals only with stationary electric charges, so magnetic fields do not arise and are not considered. Permanent magnets can be described without reference to electricity or electromagnetism. Circuit theory deals with electrical networks where the fields are largely confined around current-carrying conductors. 
In such circuits, even Maxwell's equations can be dispensed with and simpler formulations used. On the other hand, a quantum treatment of electromagnetism is important in chemistry. Chemical reactions and chemical bonding are the result of quantum mechanical interactions of electrons around atoms. Quantum considerations are also necessary to explain the behaviour of many electronic devices, for instance the tunnel diode. Electric charge. Electromagnetism is one of the fundamental forces of nature alongside gravity, the strong force and the weak force. Whereas gravity acts on all things that have mass, electromagnetism acts on all things that have electric charge. Furthermore, as there is the conservation of mass according to which mass cannot be created or destroyed, there is also the conservation of charge which means that the charge in a closed system (where no charges are leaving or entering) must remain constant. The fundamental law that describes the gravitational force on a massive object in classical physics is Newton's law of gravity. Analogously, Coulomb's law is the fundamental law that describes the force that charged objects exert on one another. It is given by the formula formula_0 where "F" is the force, "k"e is the Coulomb constant, "q"1 and "q"2 are the magnitudes of the two charges, and "r"2 is the square of the distance between them. It describes the fact that like charges repel one another whereas opposite charges attract one another and that the stronger the charges of the particles, the stronger the force they exert on one another. The law is also an inverse square law which means that as the distance between two particles is doubled, the force on them is reduced by a factor of four. Electric and magnetic fields. In physics, fields are entities that interact with matter and can be described mathematically by assigning a value to each point in space and time. 
Vector fields are fields which are assigned both a numerical value and a direction at each point in space and time. Electric charges produce a vector field called the electric field. The numerical value of the electric field, also called the electric field strength, determines the strength of the electric force that a charged particle will feel in the field and the direction of the field determines which direction the force will be in. By convention, the direction of the electric field is the same as the direction of the force on positive charges and opposite to the direction of the force on negative charges. Because positive charges are repelled by other positive charges and are attracted to negative charges, this means the electric fields point away from positive charges and towards negative charges. These properties of the electric field are encapsulated in the equation for the electric force on a charge written in terms of the electric field: formula_1 where "F" is the force on a charge "q" in an electric field "E". As well as producing an electric field, charged particles will produce a magnetic field when they are in a state of motion that will be felt by other charges that are in motion (as well as permanent magnets). The direction of the force on a moving charge from a magnetic field is perpendicular to both the direction of motion and the direction of the magnetic field lines and can be found using the right-hand rule. The strength of the force is given by the equation formula_2 where "F" is the force on a charge "q" with speed "v" in a magnetic field "B" which is pointing in a direction of angle "θ" from the direction of motion of the charge. The combination of the electric and magnetic forces on a charged particle is called the Lorentz force. Classical electromagnetism is fully described by the Lorentz force alongside a set of equations called Maxwell's equations. The first of these equations is known as Gauss's law. 
It describes the electric field produced by charged particles and by charge distributions. According to Gauss's law, the flux (or flow) of electric field through any closed surface is proportional to the amount of charge that is enclosed by that surface. This means that the greater the charge, the greater the electric field that is produced. It also has other important implications. For example, this law means that if there is no charge enclosed by the surface, then either there is no electric field at all or, if there is a charge near to but outside of the closed surface, the flow of electric field into the surface must exactly cancel with the flow out of the surface. The second of Maxwell's equations is known as Gauss's law for magnetism and, similarly to the first Gauss's law, it describes flux, but instead of electric flux, it describes magnetic flux. According to Gauss's law for magnetism, the flow of magnetic field through a closed surface is always zero. This means that if there is a magnetic field, the flow into the closed surface will always cancel out with the flow out of the closed surface. This law has also been called "no magnetic monopoles" because it means that any magnetic flux flowing out of a closed surface must flow back into it, meaning that positive and negative magnetic poles must come together as a magnetic dipole and can never be separated into magnetic monopoles. This is in contrast to electric charges which can exist as separate positive and negative charges. The third of Maxwell's equations is called the Ampère–Maxwell law. It states that a magnetic field can be generated by an electric current. The direction of the magnetic field is given by Ampère's right-hand grip rule. If the wire is straight, then the magnetic field is curled around it like the gripped fingers in the right-hand rule. 
If the wire is wrapped into coils, then the magnetic field inside the coils points in a straight line like the outstretched thumb in the right-hand grip rule. When electric currents are used to produce a magnet in this way, it is called an electromagnet. Electromagnets often use a wire curled up into a solenoid around an iron core, which strengthens the magnetic field produced because the iron core becomes magnetised. Maxwell's extension to the law states that a time-varying electric field can also generate a magnetic field. Similarly, Faraday's law of induction states that a magnetic field can produce an electric current. For example, a magnet pushed in and out of a coil of wire can produce an electric current in the coils which is proportional to the strength of the magnet as well as the number of coils and the speed at which the magnet is inserted and extracted from the coils. This principle is essential for transformers, which are used to transform currents from high voltage to low voltage, and vice versa. They are needed to convert high-voltage mains electricity into low-voltage electricity which can be safely used in homes. Maxwell's formulation of the law is given in the Maxwell–Faraday equation (the fourth and final of Maxwell's equations), which states that a time-varying magnetic field produces an electric field. Together, Maxwell's equations provide a single uniform theory of the electric and magnetic fields, and Maxwell's work in creating this theory has been called "the second great unification in physics" after the first great unification of Newton's law of universal gravitation. The solution to Maxwell's equations in free space (where there are no charges or currents) produces wave equations corresponding to electromagnetic waves (with both electric and magnetic components) travelling at the speed of light. 
The observation that these wave solutions had a wave speed exactly equal to the speed of light led Maxwell to hypothesise that light is a form of electromagnetic radiation and to posit that other electromagnetic radiation could exist with different wavelengths. The existence of electromagnetic radiation was proved by Heinrich Hertz in a series of experiments between 1886 and 1889, in which he discovered the existence of radio waves. The full electromagnetic spectrum (in order of increasing frequency) consists of radio waves, microwaves, infrared radiation, visible light, ultraviolet light, X-rays and gamma rays. A further unification of electromagnetism came with Einstein's special theory of relativity. According to special relativity, observers moving at different speeds relative to one another occupy different observational frames of reference. If one observer is in motion relative to another observer then they experience length contraction: unmoving objects appear closer together to the observer in motion than to the observer at rest. Therefore, if an electron is moving at the same speed as the current in a neutral wire, then it experiences the flowing electrons in the wire as standing still relative to itself, and the positive charges as contracted together. In the lab frame, the electron is moving and so feels a magnetic force from the current in the wire, but because the wire is neutral it feels no electric force. But in the electron's rest frame, the positive charges seem closer together compared to the flowing electrons and so the wire seems positively charged. Therefore, in the electron's rest frame it feels no magnetic force (because it is not moving in its own frame) but it does feel an electric force due to the positively charged wire. This result from relativity proves that magnetic fields are just electric fields in a different reference frame (and vice versa), and so the two are different manifestations of the same underlying electromagnetic field. 
Conductors, insulators and circuits. Conductors. A conductor is a material that allows electrons to flow easily. The most effective conductors are usually metals because they can be described fairly accurately by the free electron model, in which electrons delocalize from the atomic nuclei, leaving positive ions surrounded by a cloud of free electrons. Examples of good conductors include copper, aluminum, and silver. Wires in electronics are often made of copper. In some materials, the electrons are bound to the atomic nuclei and so are not free to move around, but the energy required to set them free is low. In these materials, called semiconductors, the conductivity is low at low temperatures, but as the temperature is increased the electrons gain more thermal energy and the conductivity increases. Silicon is an example of a semiconductor and can be used to create solar cells, which become more conductive the more energy they receive from photons from the sun. Superconductors are materials that exhibit little to no resistance to the flow of electrons when cooled below a certain critical temperature. Superconductivity can only be explained quantum mechanically, in terms of the Pauli exclusion principle, which states that no two fermions (an electron is a type of fermion) can occupy exactly the same quantum state. In superconductors, below a certain temperature the electrons form bound pairs which behave as bosons and so do not follow this principle; this means that all the electrons can fall to the same energy level and move together uniformly in a current. Insulators. Insulators are materials which are highly resistive to the flow of electrons and so are often used to cover conducting wires for safety. In insulators, electrons are tightly bound to atomic nuclei, and the energy required to free them is very high, so they are not free to move and resist induced movement by an external electric field. 
However, some insulators, called dielectrics, can be polarised under the influence of an external electric field so that the charges are minutely displaced, forming dipoles with a positive and a negative side. Dielectrics are used in capacitors to allow them to store more electric potential energy in the electric field between the capacitor plates. Capacitors. A capacitor is an electronic component that stores electrical potential energy in an electric field between two oppositely charged conducting plates. If one of the conducting plates has a charge density of +"Q/A" and the other has a charge density of -"Q/A", where "A" is the area of the plates, then there will be an electric field between them. The potential difference "V" between two parallel plates can be derived mathematically as formula_3 where "d" is the plate separation and formula_4 is the permittivity of free space. The ability of the capacitor to store electrical potential energy is measured by the capacitance, which is defined as formula_5, and for a parallel plate capacitor this is formula_6 If a dielectric is placed between the plates, then the permittivity of free space is multiplied by the relative permittivity of the dielectric and the capacitance increases. The maximum energy that can be stored by a capacitor is proportional to the capacitance and to the square of the potential difference between the plates: formula_7 Inductors. An inductor is an electronic component that stores energy in a magnetic field inside a coil of wire. A current-carrying coil of wire induces a magnetic field according to Ampère's circuital law: the greater the current "I", the greater the energy stored in the magnetic field. The inductance is defined as formula_8 where formula_9 is the magnetic flux produced by the coil of wire. The inductance is a measure of the circuit's opposition to a change in current, and so inductors with high inductances can be used to oppose alternating current. Circuit laws. 
Circuit theory deals with electrical networks in which the fields are largely confined around current-carrying conductors. In such circuits, simple circuit laws can be used instead of deriving all the behaviour of the circuit directly from electromagnetic laws. Ohm's law states the relationship between the current "I" and the voltage "V" of a circuit by introducing the quantity known as resistance "R" Ohm's law: formula_10 Power is defined as formula_11 so Ohm's law can be used to tell us the power of the circuit in terms of other quantities formula_12 Kirchhoff's junction rule states that the current going into a junction (or node) must equal the current that leaves the node. This follows from charge conservation, as current is defined as the flow of charge over time. If a current splits as it exits a junction, the sum of the resultant split currents is equal to the incoming current. Kirchhoff's loop rule states that the sum of the voltage changes around a closed loop of a circuit equals zero. This follows from the fact that the electric field is conservative, which means that no matter the path taken, the potential at a point does not change when you return to it. These rules also tell us how to add up quantities such as current and voltage in series and parallel circuits. For series circuits, the current remains the same for each component and the voltages and resistances add up: formula_13 For parallel circuits, the voltage remains the same for each component and the currents and resistances are related as shown: formula_14
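The circuit rules above lend themselves to a short numeric check. The following Python sketch (the helper names are illustrative, not part of any standard library) encodes Ohm's law and the series/parallel combination rules:

```python
# Illustrative helpers for the circuit laws described above.

def series_resistance(resistances):
    # In series, resistances simply add: R_tot = R1 + R2 + ...
    return sum(resistances)

def parallel_resistance(resistances):
    # In parallel, the reciprocals add: 1/R_tot = 1/R1 + 1/R2 + ...
    return 1.0 / sum(1.0 / r for r in resistances)

def power_dissipated(voltage, resistance):
    # From P = IV and Ohm's law I = V/R, it follows that P = V**2 / R.
    return voltage ** 2 / resistance
```

For example, 100 Ω, 200 Ω and 300 Ω resistors in series give 600 Ω, while two 100 Ω resistors in parallel give 50 Ω.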
[ { "math_id": 0, "text": "F=k_\\text{e}{q_1q_2\\over r^2}" }, { "math_id": 1, "text": "F = qE" }, { "math_id": 2, "text": "F = qvB \\sin\\theta" }, { "math_id": 3, "text": "V = {Qd \\over \\varepsilon_0 A}" }, { "math_id": 4, "text": "\\varepsilon_0" }, { "math_id": 5, "text": "C=Q/V" }, { "math_id": 6, "text": "C = {\\varepsilon_0 A \\over d}" }, { "math_id": 7, "text": "E = \\frac 1 2 CV^2" }, { "math_id": 8, "text": "L= \\Phi_B/I" }, { "math_id": 9, "text": "\\Phi_B" }, { "math_id": 10, "text": "I = V/R" }, { "math_id": 11, "text": "P = IV" }, { "math_id": 12, "text": "P = IV = V^2/R = I^2R" }, { "math_id": 13, "text": "V_{tot} = V_1 + V_2 + V_3 + \\ldots \\qquad R_{tot} = R_1 + R_2 + R_3 + \\ldots \\qquad I = I_1 = I_2 = I_3 = \\ldots" }, { "math_id": 14, "text": "V_{tot} = V_1 = V_2 = V_3 = \\ldots \\qquad {1 \\over R_{tot}} = {1 \\over R_1} + {1 \\over R_2} + {1 \\over R_3} + \\ldots \\qquad I_{tot} = I_1 + I_2 + I_3 + \\ldots" } ]
https://en.wikipedia.org/wiki?curid=58686423
5869
Category theory
General theory of mathematical structures Category theory is a general theory of mathematical structures and their relations. It was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Category theory is used in almost all areas of mathematics. In particular, many constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality. Many areas of computer science also rely on category theory, such as functional programming and semantics. A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the "source" and the "target" of the morphism. Metaphorically, a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one. Morphism composition has properties similar to those of function composition (associativity and the existence of an identity morphism for each object). Morphisms are often some sort of function, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid. The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories formula_0 and formula_1: it maps objects of formula_0 to objects of formula_1 and morphisms of formula_0 to morphisms of formula_1 in such a way that sources are mapped to sources, and targets are mapped to targets (or, in the case of a contravariant functor, sources are mapped to targets and "vice-versa"). A third fundamental concept is a natural transformation that may be viewed as a morphism of functors. Categories, objects, and morphisms. Categories. 
A "category" formula_2 consists of the following three mathematical entities: The expression formula_8, would be verbally stated as "formula_5 is a morphism from a to b". The expression formula_9 – alternatively expressed as formula_10, formula_11, or formula_12 – denotes the "hom-class" of all morphisms from formula_6 to formula_7. for any three objects "a", "b", and "c", we have formula_14 The composition of formula_15 and formula_16 is written as formula_17 or formula_18, governed by two axioms: 1. Associativity: If formula_19, formula_16, and formula_20 then formula_21 2. Identity: For every object x, there exists a morphism formula_22 (also denoted as formula_23) called the "identity morphism for x", such that for every morphism formula_19, we have formula_24 From the axioms, it can be proved that there is exactly one identity morphism for every object. Morphisms. Relations among morphisms (such as "fg" = "h") are often depicted using commutative diagrams, with "points" (corners) representing objects and "arrows" representing morphisms. Morphisms can have any of the following properties. A morphism "f" : "a" → "b" is a: Every retraction is an epimorphism, and every section is a monomorphism. Furthermore, the following three statements are equivalent: Functors. Functors are structure-preserving maps between categories. They can be thought of as morphisms in the category of all (small) categories. A (covariant) functor "F" from a category "C" to a category "D", written "F" : "C" → "D", consists of: such that the following two properties hold: A contravariant functor "F": "C" → "D" is like a covariant functor, except that it "turns morphisms around" ("reverses all the arrows"). More specifically, every morphism "f" : "x" → "y" in "C" must be assigned to a morphism "F"("f") : "F"("y") → "F"("x") in "D". In other words, a contravariant functor acts as a covariant functor from the opposite category "C"op to "D". Natural transformations. 
A "natural transformation" is a relation between two functors. Functors often describe "natural constructions" and natural transformations then describe "natural homomorphisms" between two such constructions. Sometimes two quite different constructions yield "the same" result; this is expressed by a natural isomorphism between the two functors. If "F" and "G" are (covariant) functors between the categories "C" and "D", then a natural transformation "η" from "F" to "G" associates to every object "X" in "C" a morphism "η""X" : "F"("X") → "G"("X") in "D" such that for every morphism "f" : "X" → "Y" in "C", we have "η""Y" ∘ "F"("f") = "G"("f") ∘ "η""X"; this means that the following diagram is commutative: The two functors "F" and "G" are called "naturally isomorphic" if there exists a natural transformation from "F" to "G" such that "η""X" is an isomorphism for every object "X" in "C". Other concepts. Universal constructions, limits, and colimits. Using the language of category theory, many areas of mathematical study can be categorized. Categories include sets, groups and topologies. Each category is distinguished by properties that all its objects have in common, such as the empty set or the product of two topologies, yet in the definition of a category, objects are considered atomic, i.e., we "do not know" whether an object "A" is a set, a topology, or any other abstract concept. Hence, the challenge is to define special objects without referring to the internal structure of those objects. To define the empty set without referring to elements, or the product topology without referring to open sets, one can characterize these objects in terms of their relations to other objects, as given by the morphisms of the respective categories. Thus, the task is to find "universal properties" that uniquely determine the objects of interest. 
Numerous important constructions can be described in a purely categorical way if the "category limit" can be developed and dualized to yield the notion of a "colimit". Equivalent categories. It is a natural question to ask: under which conditions can two categories be considered "essentially the same", in the sense that theorems about one category can readily be transformed into theorems about the other category? The major tool one employs to describe such a situation is called "equivalence of categories", which is given by appropriate functors between two categories. Categorical equivalence has found numerous applications in mathematics. Further concepts and results. The definitions of categories and functors provide only the very basics of categorical algebra; additional important topics are listed below. Although there are strong interrelations between all of these topics, the given order can be considered as a guideline for further reading. Higher-dimensional categories. Many of the above concepts, especially equivalence of categories, adjoint functor pairs, and functor categories, can be situated into the context of "higher-dimensional categories". Briefly, if we consider a morphism between two objects as a "process taking us from one object to another", then higher-dimensional categories allow us to profitably generalize this by considering "higher-dimensional processes". For example, a (strict) 2-category is a category together with "morphisms between morphisms", i.e., processes which allow us to transform one morphism into another. We can then "compose" these "bimorphisms" both horizontally and vertically, and we require a 2-dimensional "exchange law" to hold, relating the two composition laws. In this context, the standard example is Cat, the 2-category of all (small) categories, and in this example, bimorphisms of morphisms are simply natural transformations of morphisms in the usual sense. 
Another basic example is to consider a 2-category with a single object; these are essentially monoidal categories. Bicategories are a weaker notion of 2-dimensional categories in which the composition of morphisms is not strictly associative, but only associative "up to" an isomorphism. This process can be extended for all natural numbers "n", and these are called "n"-categories. There is even a notion of "ω-category" corresponding to the ordinal number ω. Higher-dimensional categories are part of the broader mathematical field of higher-dimensional algebra, a concept introduced by Ronald Brown. For a conversational introduction to these ideas, see John Baez, 'A Tale of "n"-categories' (1996). Historical notes. Whilst specific examples of functors and natural transformations had been given by Samuel Eilenberg and Saunders Mac Lane in a 1942 paper on group theory, these concepts were introduced in a more general sense, together with the additional notion of categories, in a 1945 paper by the same authors (who discussed applications of category theory to the field of algebraic topology). Their work was an important part of the transition from intuitive and geometric homology to homological algebra, Eilenberg and Mac Lane later writing that their goal was to understand natural transformations, which first required the definition of functors, then categories. Stanislaw Ulam, and some writing on his behalf, have claimed that related ideas were current in the late 1930s in Poland. Eilenberg was Polish, and studied mathematics in Poland in the 1930s. 
Category theory is also, in some sense, a continuation of the work of Emmy Noether (one of Mac Lane's teachers) in formalizing abstract processes; Noether realized that understanding a type of mathematical structure requires understanding the processes that preserve that structure (homomorphisms). Eilenberg and Mac Lane introduced categories for understanding and formalizing the processes (functors) that relate topological structures to algebraic structures (topological invariants) that characterize them. Category theory was originally introduced for the needs of homological algebra, and widely extended for the needs of modern algebraic geometry (scheme theory). Category theory may be viewed as an extension of universal algebra, as the latter studies algebraic structures, while the former applies to any kind of mathematical structure and also studies the relationships between structures of different nature. For this reason, it is used throughout mathematics. Applications to mathematical logic and semantics (categorical abstract machine) came later. Certain categories called topoi (singular "topos") can even serve as an alternative to axiomatic set theory as a foundation of mathematics. A topos can also be considered as a specific type of category with two additional topos axioms. These foundational applications of category theory have been worked out in fair detail as a basis for, and justification of, constructive mathematics. Topos theory is a form of abstract sheaf theory, with geometric origins, and leads to ideas such as pointless topology. Categorical logic is now a well-defined field based on type theory for intuitionistic logics, with applications in functional programming and domain theory, where a cartesian closed category is taken as a non-syntactic description of a lambda calculus. At the very least, category theoretic language clarifies what exactly these related areas have in common (in some abstract sense). 
Category theory has been applied in other fields as well; see applied category theory. For example, John Baez has shown a link between Feynman diagrams in physics and monoidal categories. Another application of category theory, more specifically topos theory, has been made in mathematical music theory; see for example the book "The Topos of Music, Geometric Logic of Concepts, Theory, and Performance" by Guerino Mazzola. More recent efforts to introduce undergraduates to categories as a foundation for mathematics include those of William Lawvere and Rosebrugh (2003), Lawvere and Stephen Schanuel (1997), and Mirroslav Yotov (2012).
[ { "math_id": 0, "text": "\\mathcal{C}_1" }, { "math_id": 1, "text": "\\mathcal{C}_2" }, { "math_id": 2, "text": "\\mathcal{C}" }, { "math_id": 3, "text": "\\text{ob}(\\mathcal{C})" }, { "math_id": 4, "text": "\\text{hom}(\\mathcal{C})" }, { "math_id": 5, "text": "f" }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "b" }, { "math_id": 8, "text": "f:a \\mapsto b" }, { "math_id": 9, "text": "\\text{hom}(a, b)" }, { "math_id": 10, "text": "\\text{hom}_\\mathcal{C}(a, b)" }, { "math_id": 11, "text": "\\text{mor}(a, b)" }, { "math_id": 12, "text": "\\mathcal{C}(a, b)" }, { "math_id": 13, "text": "\\circ" }, { "math_id": 14, "text": "\\circ : \\text{hom}(b, c) \\times \\text{hom}(a, b) \\mapsto \\text{hom}(a, c)" }, { "math_id": 15, "text": "f : a \\mapsto b" }, { "math_id": 16, "text": "g: b \\mapsto c" }, { "math_id": 17, "text": "g \\circ f" }, { "math_id": 18, "text": "gf" }, { "math_id": 19, "text": "f: a \\mapsto b" }, { "math_id": 20, "text": "h: c \\mapsto d" }, { "math_id": 21, "text": "h \\circ (g \\circ f) = (h \\circ g) \\circ f" }, { "math_id": 22, "text": "1_x : x \\mapsto x" }, { "math_id": 23, "text": "\\text{id}_x" }, { "math_id": 24, "text": "1_b \\circ f = f = f \\circ 1_a" }, { "math_id": 25, "text": "\\text{ob} (\\text{Set})" }, { "math_id": 26, "text": "\\text{hom} (\\text{Set})" }, { "math_id": 27, "text": "\\text{hom} (A,B)" }, { "math_id": 28, "text": "(g \\circ f)(x) = g(f(x))" }, { "math_id": 29, "text": "\\text{id}_A" }, { "math_id": 30, "text": "\\text{id}_A : A \\mapsto A" }, { "math_id": 31, "text": "\\text{id}_A (x) = x" } ]
https://en.wikipedia.org/wiki?curid=5869
58690
Crystal structure
Ordered arrangement of atoms, ions, or molecules in a crystalline material In crystallography, crystal structure is a description of the ordered arrangement of atoms, ions, or molecules in a crystalline material. Ordered structures arise from the intrinsic nature of the constituent particles, which form symmetric patterns that repeat along the principal directions of three-dimensional space in matter. The smallest group of particles in the material that constitutes this repeating pattern is the unit cell of the structure. The unit cell completely reflects the symmetry and structure of the entire crystal, which is built up by repetitive translation of the unit cell along its principal axes. The translation vectors define the nodes of the Bravais lattice. The lengths of the principal axes/edges of the unit cell and the angles between them are the lattice constants, also called "lattice parameters" or "cell parameters". The symmetry properties of the crystal are described by the concept of space groups. All possible symmetric arrangements of particles in three-dimensional space may be described by the 230 space groups. The crystal structure and symmetry play a critical role in determining many physical properties, such as cleavage, electronic band structure, and optical transparency. Unit cell. Crystal structure is described in terms of the geometry of the arrangement of particles in the unit cell. The unit cell is defined as the smallest repeating unit having the full symmetry of the crystal structure. The geometry of the unit cell is defined as a parallelepiped, providing six lattice parameters taken as the lengths of the cell edges ("a", "b", "c") and the angles between them (α, β, γ). The positions of particles inside the unit cell are described by the fractional coordinates ("xi", "yi", "zi") along the cell edges, measured from a reference point. It is thus only necessary to report the coordinates of a smallest asymmetric subset of particles, called the crystallographic asymmetric unit. 
The asymmetric unit may be chosen so that it occupies the smallest physical space, which means that not all particles need to be physically located inside the boundaries given by the lattice parameters. All other particles of the unit cell are generated by the symmetry operations that characterize the symmetry of the unit cell. The collection of symmetry operations of the unit cell is expressed formally as the space group of the crystal structure. Miller indices. Vectors and planes in a crystal lattice are described by the three-value Miller index notation. This syntax uses the indices "h", "k", and "ℓ" as directional parameters. By definition, the syntax ("hkℓ") denotes a plane that intercepts the three points "a"1/"h", "a"2/"k", and "a"3/"ℓ", or some multiple thereof. That is, the Miller indices are proportional to the inverses of the intercepts of the plane with the unit cell (in the basis of the lattice vectors). If one or more of the indices is zero, it means that the planes do not intersect that axis (i.e., the intercept is "at infinity"). A plane containing a coordinate axis is translated so that it no longer contains that axis before its Miller indices are determined. The Miller indices for a plane are integers with no common factors. Negative indices are indicated with horizontal bars, as in (123). In an orthogonal coordinate system for a cubic cell, the Miller indices of a plane are the Cartesian components of a vector normal to the plane. Considering only ("hkℓ") planes intersecting one or more lattice points (the "lattice planes"), the distance "d" between adjacent lattice planes is related to the (shortest) reciprocal lattice vector orthogonal to the planes by the formula formula_0 Planes and directions. The crystallographic directions are geometric lines linking nodes (atoms, ions or molecules) of a crystal. Likewise, the crystallographic planes are geometric "planes" linking nodes. Some directions and planes have a higher density of nodes. 
These high density planes have an influence on the behavior of the crystal as follows: Some directions and planes are defined by symmetry of the crystal system. In monoclinic, trigonal, tetragonal, and hexagonal systems there is one unique axis (sometimes called the principal axis) which has higher rotational symmetry than the other two axes. The basal plane is the plane perpendicular to the principal axis in these crystal systems. For triclinic, orthorhombic, and cubic crystal systems the axis designation is arbitrary and there is no principal axis. Cubic structures. For the special case of simple cubic crystals, the lattice vectors are orthogonal and of equal length (usually denoted "a"); similarly for the reciprocal lattice. So, in this common case, the Miller indices ("ℓmn") and ["ℓmn"] both simply denote normals/directions in Cartesian coordinates. For cubic crystals with lattice constant "a", the spacing "d" between adjacent (ℓmn) lattice planes is (from above): formula_1 Because of the symmetry of cubic crystals, it is possible to change the place and sign of the integers and have equivalent directions and planes: For face-centered cubic (fcc) and body-centered cubic (bcc) lattices, the primitive lattice vectors are not orthogonal. However, in these cases the Miller indices are conventionally defined relative to the lattice vectors of the cubic supercell and hence are again simply the Cartesian directions. Interplanar spacing. The spacing d between adjacent ("hkℓ") lattice planes is given by: Classification by symmetry. The defining property of a crystal is its inherent symmetry. Performing certain symmetry operations on the crystal lattice leaves it unchanged. All crystals have translational symmetry in three directions, but some have other symmetry elements as well. 
For example, rotating the crystal 180° about a certain axis may result in an atomic configuration that is identical to the original configuration; the crystal has twofold rotational symmetry about this axis. In addition to rotational symmetry, a crystal may have symmetry in the form of mirror planes, and also the so-called compound symmetries, which are a combination of translation and rotation or mirror symmetries. A full classification of a crystal is achieved when all inherent symmetries of the crystal are identified. Lattice systems. Lattice systems are a grouping of crystal structures according to the point groups of their lattice. All crystals fall into one of seven lattice systems. They are related to, but not the same as, the seven crystal systems. The most symmetric, the cubic or isometric system, has the symmetry of a cube, that is, it exhibits four threefold rotational axes oriented at 109.5° (the tetrahedral angle) with respect to each other. These threefold axes lie along the body diagonals of the cube. The other six lattice systems are hexagonal, tetragonal, rhombohedral (often confused with the trigonal crystal system), orthorhombic, monoclinic and triclinic. Bravais lattices. Bravais lattices, also referred to as "space lattices", describe the geometric arrangement of the lattice points, and therefore the translational symmetry of the crystal. The three dimensions of space afford 14 distinct Bravais lattices describing the translational symmetry. All crystalline materials recognized today, not including quasicrystals, fit in one of these arrangements. The fourteen three-dimensional lattices, classified by lattice system, are shown above. The crystal structure consists of the same group of atoms, the "basis", positioned around each and every lattice point. This group of atoms therefore repeats indefinitely in three dimensions according to the arrangement of one of the Bravais lattices. 
The characteristic rotation and mirror symmetries of the unit cell are described by its crystallographic point group. Crystal systems. A crystal system is a set of point groups in which the point groups themselves and their corresponding space groups are assigned to a lattice system. Of the 32 point groups that exist in three dimensions, most are assigned to only one lattice system, in which case the crystal system and lattice system both have the same name. However, five point groups are assigned to two lattice systems, rhombohedral and hexagonal, because both lattice systems exhibit threefold rotational symmetry. These point groups are assigned to the trigonal crystal system. In total there are seven crystal systems: triclinic, monoclinic, orthorhombic, tetragonal, trigonal, hexagonal, and cubic. Point groups. The crystallographic point group or "crystal class" is the mathematical group comprising the symmetry operations that leave at least one point unmoved and that leave the appearance of the crystal structure unchanged. These symmetry operations include rotations, reflections, and inversions; the corresponding rotation axes (proper and improper), reflection planes, and centers of symmetry are collectively called "symmetry elements". There are 32 possible crystal classes. Each one can be classified into one of the seven crystal systems. Space groups. In addition to the operations of the point group, the space group of the crystal structure contains translational symmetry operations. These include pure translations, which move a point along a vector; screw axes, which rotate a point around an axis while translating parallel to the axis; and glide planes, which reflect a point through a plane while translating it parallel to the plane. There are 230 distinct space groups. Atomic coordination. By considering the arrangement of atoms relative to each other, their coordination numbers, interatomic distances, types of bonding, etc., it is possible to form a general view of the structures and alternative ways of visualizing them. Close packing. The principles involved can be understood by considering the most efficient way of packing together equal-sized spheres and stacking close-packed atomic planes in three dimensions. 
For example, if plane A lies beneath plane B, there are two possible ways of placing an additional atom on top of layer B. If an additional layer were placed directly over plane A, this would give rise to the following series: ...ABABABAB... This arrangement of atoms in a crystal structure is known as hexagonal close packing (hcp). If, however, all three planes are staggered relative to each other and it is not until the fourth layer is positioned directly over plane A that the sequence is repeated, then the following sequence arises: ...ABCABCABC... This type of structural arrangement is known as cubic close packing (ccp). The unit cell of a ccp arrangement of atoms is the face-centered cubic (fcc) unit cell. This is not immediately obvious as the closely packed layers are parallel to the {111} planes of the fcc unit cell. There are four different orientations of the close-packed layers. APF and CN. One important characteristic of a crystalline structure is its atomic packing factor (APF). This is calculated by assuming that all the atoms are identical spheres, with a radius large enough that each sphere abuts on the next. The atomic packing factor is the proportion of space filled by these spheres which can be worked out by calculating the total volume of the spheres and dividing by the volume of the cell as follows: formula_9 Another important characteristic of a crystalline structure is its coordination number (CN). This is the number of nearest neighbours of a central atom in the structure. The APFs and CNs of the most common crystal structures are shown below: The 74% packing efficiency of the FCC and HCP is the maximum density possible in unit cells constructed of spheres of only one size. Interstitial sites. Interstitial sites refer to the empty spaces in between the atoms in the crystal lattice. These spaces can be filled by oppositely charged ions to form multi-element structures. 
They can also be filled by impurity atoms or self-interstitials to form interstitial defects. Defects and impurities. Real crystals feature defects or irregularities in the ideal arrangements described above, and it is these defects that critically determine many of the electrical and mechanical properties of real materials. Impurities. When one atom substitutes for one of the principal atomic components within the crystal structure, alteration in the electrical and thermal properties of the material may ensue. Impurities may also manifest as electron spin impurities in certain materials. Research on magnetic impurities demonstrates that small concentrations of an impurity can substantially alter certain properties such as specific heat; for example, impurities in semiconducting ferromagnetic alloys may lead to different properties, as first predicted in the late 1960s. Dislocations. Dislocations in a crystal lattice are line defects that are associated with local stress fields. Dislocations allow shear at lower stress than that needed for a perfect crystal structure. The local stress fields result in interactions between the dislocations which then result in strain hardening or cold working. Grain boundaries. Grain boundaries are interfaces where crystals of different orientations meet. A grain boundary is a single-phase interface, with crystals on each side of the boundary being identical except in orientation. The term "crystallite boundary" is sometimes, though rarely, used. Grain boundary areas contain those atoms that have been perturbed from their original lattice sites, dislocations, and impurities that have migrated to the lower energy grain boundary. Treating a grain boundary geometrically as an interface of a single crystal cut into two parts, one of which is rotated, we see that five variables are required to define a grain boundary. The first two numbers come from the unit vector that specifies a rotation axis. 
The third number designates the angle of rotation of the grain. The final two numbers specify the plane of the grain boundary (or a unit vector that is normal to this plane). Grain boundaries disrupt the motion of dislocations through a material, so reducing crystallite size is a common way to improve strength, as described by the Hall–Petch relationship. Since grain boundaries are defects in the crystal structure they tend to decrease the electrical and thermal conductivity of the material. The high interfacial energy and relatively weak bonding in most grain boundaries often makes them preferred sites for the onset of corrosion and for the precipitation of new phases from the solid. They are also important to many of the mechanisms of creep. Grain boundaries are in general only a few nanometers wide. In common materials, crystallites are large enough that grain boundaries account for a small fraction of the material. However, very small grain sizes are achievable. In nanocrystalline solids, grain boundaries become a significant volume fraction of the material, with profound effects on such properties as diffusion and plasticity. In the limit of small crystallites, as the volume fraction of grain boundaries approaches 100%, the material ceases to have any crystalline character, and thus becomes an amorphous solid. Prediction of structure. The difficulty of predicting stable crystal structures based on the knowledge of only the chemical composition has long been a stumbling block on the way to fully computational materials design. Now, with more powerful algorithms and high-performance computing, structures of medium complexity can be predicted using such approaches as evolutionary algorithms, random sampling, or metadynamics. The crystal structures of simple ionic solids (e.g., NaCl or table salt) have long been rationalized in terms of Pauling's rules, first set out in 1929 by Linus Pauling, referred to by many since as the "father of the chemical bond". 
Pauling also considered the nature of the interatomic forces in metals, and concluded that about half of the five d-orbitals in the transition metals are involved in bonding, with the remaining nonbonding d-orbitals being responsible for the magnetic properties. Pauling was therefore able to correlate the number of d-orbitals in bond formation with the bond length, as well as with many of the physical properties of the substance. He subsequently introduced the metallic orbital, an extra orbital necessary to permit uninhibited resonance of valence bonds among various electronic structures. In the resonating valence bond theory, the factors that determine the choice of one from among alternative crystal structures of a metal or intermetallic compound revolve around the energy of resonance of bonds among interatomic positions. It is clear that some modes of resonance would make larger contributions (be more mechanically stable than others), and that in particular a simple ratio of number of bonds to number of positions would be exceptional. The resulting principle is that a special stability is associated with the simplest ratios or "bond numbers": 1⁄2, 1⁄3, 2⁄3, 1⁄4, 3⁄4, etc. The choice of structure and the value of the axial ratio (which determines the relative bond lengths) are thus a result of the effort of an atom to use its valency in the formation of stable bonds with simple fractional bond numbers. After postulating a direct correlation between electron concentration and crystal structure in beta-phase alloys, Hume-Rothery analyzed the trends in melting points, compressibilities and bond lengths as a function of group number in the periodic table in order to establish a system of valencies of the transition elements in the metallic state. 
This treatment thus emphasized the increasing bond strength as a function of group number. The operation of directional forces was emphasized in one article on the relation between bond hybrids and the metallic structures. The resulting correlation between electronic and crystalline structures is summarized by a single parameter, the weight of the d-electrons per hybridized metallic orbital. The "d-weight" calculates out to 0.5, 0.7 and 0.9 for the fcc, hcp and bcc structures respectively. The relationship between d-electrons and crystal structure thus becomes apparent. In crystal structure predictions and simulations, periodicity is usually applied, since the system is imagined as being unlimited in all directions. Starting from a triclinic structure with no further symmetry property assumed, the system may be driven to show some additional symmetry properties by applying Newton's Second Law on particles in the unit cell and a recently developed dynamical equation for the system period vectors (lattice parameters including angles), even if the system is subject to external stress. Polymorphism. Polymorphism is the occurrence of multiple crystalline forms of a material. It is found in many crystalline materials including polymers, minerals, and metals. According to Gibbs' rules of phase equilibria, these unique crystalline phases are dependent on intensive variables such as pressure and temperature. Polymorphism is related to allotropy, which refers to elemental solids. The complete morphology of a material is described by polymorphism and other variables such as crystal habit, amorphous fraction or crystallographic defects. Polymorphs have different stabilities and may spontaneously and irreversibly transform from a metastable form (or thermodynamically unstable form) to the stable form at a particular temperature. They also exhibit different melting points, solubilities, and X-ray diffraction patterns. 
One good example of this is the quartz form of silicon dioxide, or SiO2. In the vast majority of silicates, the Si atom shows tetrahedral coordination by 4 oxygens. All but one of the crystalline forms involve tetrahedral {SiO4} units linked together by shared vertices in different arrangements. In different minerals the tetrahedra show different degrees of networking and polymerization. For example, they occur singly, joined in pairs, in larger finite clusters including rings, in chains, double chains, sheets, and three-dimensional frameworks. The minerals are classified into groups based on these structures. In each of the 7 thermodynamically stable crystalline forms or polymorphs of crystalline quartz, only 2 out of 4 of the edges of each {SiO4} tetrahedron are shared with others, yielding the net chemical formula for silica: SiO2. Another example is elemental tin (Sn), which is malleable near ambient temperatures but is brittle when cooled. This change in mechanical properties is due to the existence of its two major allotropes, α- and β-tin. The two allotropes that are encountered at normal pressure and temperature, α-tin and β-tin, are more commonly known as "gray tin" and "white tin" respectively. Two more allotropes, γ and σ, exist at temperatures above 161 °C and pressures above several GPa. White tin is metallic, and is the stable crystalline form at or above room temperature. Below 13.2 °C, tin exists in the gray form, which has a diamond cubic crystal structure, similar to diamond, silicon or germanium. Gray tin has no metallic properties at all, is a dull gray powdery material, and has few uses, other than a few specialized semiconductor applications. Although the α–β transformation temperature of tin is nominally 13.2 °C, impurities (e.g. Al, Zn, etc.) lower the transition temperature well below 0 °C, and upon addition of Sb or Bi the transformation may not occur at all. Physical properties. 
Twenty of the 32 crystal classes are piezoelectric, and crystals belonging to one of these classes (point groups) display piezoelectricity. All piezoelectric classes lack inversion symmetry. Any material develops a dielectric polarization when an electric field is applied, but a substance that has such a natural charge separation even in the absence of a field is called a polar material. Whether or not a material is polar is determined solely by its crystal structure. Only 10 of the 32 point groups are polar. All polar crystals are pyroelectric, so the 10 polar crystal classes are sometimes referred to as the pyroelectric classes. There are a few crystal structures, notably the perovskite structure, which exhibit ferroelectric behavior. This is analogous to ferromagnetism, in that, in the absence of an electric field during production, the ferroelectric crystal does not exhibit a polarization. Upon the application of an electric field of sufficient magnitude, the crystal becomes permanently polarized. This polarization can be reversed by a sufficiently large counter-charge, in the same way that a ferromagnet can be reversed. However, although they are called ferroelectrics, the effect is due to the crystal structure (not the presence of a ferrous metal). See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "d = \\frac{2\\pi} {|\\mathbf{g}_{h k \\ell}|}" }, { "math_id": 1, "text": "d_{\\ell mn}= \\frac {a} { \\sqrt{\\ell ^2 + m^2 + n^2} }" }, { "math_id": 2, "text": "\\frac {1} {d^{2}}= \\frac {h^2+k^2+\\ell^2} {a^2}" }, { "math_id": 3, "text": "\\frac {1} {d^{2}}= \\frac {h^2+k^2} {a^2}+\\frac{\\ell^2}{c^2}" }, { "math_id": 4, "text": "\\frac {1} {d^{2}}= \\frac{4}{3}\\left(\\frac{h^2+hk+k^2}{a^2}\\right)+\\frac{\\ell^2}{c^2}" }, { "math_id": 5, "text": "\\frac {1} {d^{2}}= \\frac{(h^2+k^2+\\ell^2)\\sin^2\\alpha+2(hk+k\\ell+h\\ell)(\\cos^2\\alpha-\\cos\\alpha)}{a^2(1-3\\cos^2\\alpha+2\\cos^3\\alpha)}" }, { "math_id": 6, "text": "\\frac {1} {d^{2}}= \\frac{h^2}{a^2}+\\frac{k^2}{b^2}+\\frac{\\ell^2}{c^2}" }, { "math_id": 7, "text": "\\frac {1} {d^{2}}=\\left(\\frac{h^2}{a^2}+\\frac{k^2\\sin^2\\beta}{b^2}+\\frac{\\ell^2}{c^2}-\\frac{2h\\ell\\cos\\beta}{ac}\\right) \\csc^2\\beta" }, { "math_id": 8, "text": "\\frac {1} {d^{2}}= \\frac{\\frac{h^2}{a^2}\\sin^2\\alpha+\\frac{k^2}{b^2}\\sin^2\\beta+\\frac{\\ell^2}{c^2}\\sin^2\\gamma+\\frac{2k\\ell}{bc}(\\cos\\beta\\cos\\gamma-\\cos\\alpha)+\\frac{2h\\ell}{ac}(\\cos\\gamma\\cos\\alpha-\\cos\\beta)+\\frac{2hk}{ab}(\\cos\\alpha\\cos\\beta-\\cos\\gamma)}{1-\\cos^2\\alpha-\\cos^2\\beta-\\cos^2\\gamma+2\\cos\\alpha\\cos\\beta\\cos\\gamma}" }, { "math_id": 9, "text": "\\mathrm{APF} = \\frac{N_\\mathrm{particle} V_\\mathrm{particle}}{V_\\text{unit cell}}" } ]
https://en.wikipedia.org/wiki?curid=58690
58701508
Anne M. Leggett
American mathematical logician Anne Marie Leggett (born May 28, 1947) is an American mathematical logician. She is an associate professor emerita of mathematics at Loyola University Chicago. Leggett was the editor-in-chief of the bi-monthly newsletter of the Association for Women in Mathematics (AWM), a position she held continuously from 1977 until the January-February 2024 issue. Leggett described her tenure as AWM Newsletter Editor in the article "This and That: My Time as AWM Newsletter Editor" which appeared in the volume "Fifty Years of Women in Mathematics: Reminiscences, History, and Visions for the Future of AWM". She has served on the Executive Committee of the AWM since 1977 and the AWM Policy and Advocacy Committee (2008-2015). With Bettye Anne Case, she is the editor of the book "" (Princeton University Press, 2005). Leggett received an Alpha Sigma Nu Book Award for "Complexities" in 2006. Education and career. Leggett did her undergraduate studies at Ohio State University, and completed her Ph.D. in 1973 at Yale University. Her dissertation, "Maximal formula_0-r.e. sets and their complements", was supervised by Manuel Lerman. She became a C. L. E. Moore instructor at the Massachusetts Institute of Technology in 1973, and was also on the faculties of Western Illinois University and the University of Texas at Austin. In 1982, she married another mathematician, Gerard McDonald (1946–2012), and in 1983, they both joined the Loyola University Chicago faculty. Recognition. Leggett was chosen to be part of the 2019 class of fellows of the Association for Women in Mathematics, "for extraordinary contributions in promoting opportunities for women in the mathematical sciences through AWM and as a teacher and scholar; for her amazing and steady work as editor of the AWM Newsletter since 1977; and for her invaluable leadership and guidance." References. <templatestyles src="Reflist/styles.css" /> External links. Anne M. 
Leggett's Author Profile Page on MathSciNet
[ { "math_id": 0, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=58701508
5870300
Potential method
Method of analyzing the amortized complexity of a data structure In computational complexity theory, the potential method is a method used to analyze the amortized time and space complexity of a data structure, a measure of its performance over sequences of operations that smooths out the cost of infrequent but expensive operations. Definition of amortized time. In the potential method, a function Φ is chosen that maps states of the data structure to non-negative numbers. If "S" is a state of the data structure, Φ("S") represents work that has been accounted for ("paid for") in the amortized analysis but not yet performed. Thus, Φ("S") may be thought of as calculating the amount of potential energy stored in that state. The potential value prior to the operation of initializing a data structure is defined to be zero. Alternatively, Φ("S") may be thought of as representing the amount of disorder in state "S" or its distance from an ideal state. Let "o" be any individual operation within a sequence of operations on some data structure, with "S"before denoting the state of the data structure prior to operation "o" and "S"after denoting its state after operation "o" has completed. Once Φ has been chosen, the amortized time for operation "o" is defined to be formula_0 where "C" is a non-negative constant of proportionality (in units of time) that must remain fixed throughout the analysis. That is, the amortized time is defined to be the actual time taken by the operation plus "C" times the difference in potential caused by the operation. When studying asymptotic computational complexity using big O notation, constant factors are irrelevant and so the constant "C" is usually omitted. Relation between amortized and actual time. Despite its artificial appearance, the total amortized time of a sequence of operations provides a valid upper bound on the actual time for the same sequence of operations. 
For any sequence of operations formula_1, define the total amortized time formula_2 and the total actual time formula_3. Then: formula_4 where the sequence of potential function values forms a telescoping series in which all terms other than the initial and final potential function values cancel in pairs. Rearranging this, we obtain: formula_5 Since formula_6 and formula_7, formula_8, so the amortized time can be used to provide an accurate upper bound on the actual time of a sequence of operations, even though the amortized time for an individual operation may vary widely from its actual time. Amortized analysis of worst-case inputs. Typically, amortized analysis is used in combination with a worst case assumption about the input sequence. With this assumption, if "X" is a type of operation that may be performed by the data structure, and "n" is an integer defining the size of the given data structure (for instance, the number of items that it contains), then the amortized time for operations of type "X" is defined to be the maximum, among all possible sequences of operations on data structures of size "n" and all operations "oi" of type "X" within the sequence, of the amortized time for operation "oi". With this definition, the time to perform a sequence of operations may be estimated by multiplying the amortized time for each type of operation in the sequence by the number of operations of that type. Examples. Dynamic array. A dynamic array is a data structure for maintaining an array of items, allowing both random access to positions within the array and the ability to increase the array size by one. It is available in Java as the "ArrayList" type and in Python as the "list" type. A dynamic array may be implemented by a data structure consisting of an array "A" of items, of some length "N", together with a number "n" ≤ "N" representing the positions within the array that have been used so far. 
With this structure, random accesses to the dynamic array may be implemented by accessing the same cell of the internal array "A", and when "n" < "N" an operation that increases the dynamic array size may be implemented simply by incrementing "n". However, when "n" = "N", it is necessary to resize "A", and a common strategy for doing so is to double its size, replacing "A" by a new array of length 2"n". This structure may be analyzed using the potential function: Φ = 2"n" − "N" Since the resizing strategy always causes "A" to be at least half-full, this potential function is always non-negative, as desired. When an increase-size operation does not lead to a resize operation, Φ increases by 2, a constant. Therefore, the constant actual time of the operation and the constant increase in potential combine to give a constant amortized time for an operation of this type. However, when an increase-size operation causes a resize, the potential value of Φ decreases to zero after the resize. Allocating a new internal array "A" and copying all of the values from the old internal array to the new one takes O("n") actual time, but (with an appropriate choice of the constant of proportionality "C") this is entirely cancelled by the decrease in the potential function, leaving again a constant total amortized time for the operation. The other operations of the data structure (reading and writing array cells without changing the array size) do not cause the potential function to change and have the same constant amortized time as their actual time. Therefore, with this choice of resizing strategy and potential function, the potential method shows that all dynamic array operations take constant amortized time. 
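The doubling analysis above can be checked with a short simulation (a sketch; the class and the unit-cost accounting are illustrative choices, not part of the article). It tracks Φ = 2"n" − "N" across appends and reports the largest amortized cost observed:

```python
# Sketch of the analysis above: a doubling dynamic array instrumented with
# the potential function Phi = 2*n - N, where n is the number of used slots
# and N the capacity.  The unit-cost accounting (1 per write or copy) is an
# illustrative choice, not prescribed by the article.

class DynamicArray:
    def __init__(self):
        self.N = 1                   # capacity of the internal array
        self.n = 0                   # number of positions used so far
        self.data = [None] * self.N

    def phi(self):
        # Phi = 2n - N; it is -1 only in the initial capacity-1 state and
        # non-negative thereafter, since the array stays at least half full.
        return 2 * self.n - self.N

    def append(self, value):
        """Append one item and return the 'actual' cost in element writes."""
        cost = 1
        if self.n == self.N:         # full: resize by doubling
            self.N *= 2
            new_data = [None] * self.N
            for i in range(self.n):  # copying is the expensive O(n) part
                new_data[i] = self.data[i]
                cost += 1
            self.data = new_data
        self.data[self.n] = value
        self.n += 1
        return cost

arr = DynamicArray()
amortized_costs = []
for i in range(1000):
    phi_before = arr.phi()
    actual = arr.append(i)
    amortized_costs.append(actual + (arr.phi() - phi_before))

# Every amortized cost is the same small constant, even though the actual
# cost of a resizing append is linear in n.
print(max(amortized_costs))  # → 3
```

With this accounting every append has amortized cost exactly 3: a non-resizing append costs 1 and raises Φ by 2, while a resizing append costs "n" + 1 and lowers Φ by "n" − 2.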
Combining this with the inequality relating amortized time and actual time over sequences of operations, this shows that any sequence of "n" dynamic array operations takes O("n") actual time in the worst case, despite the fact that some of the individual operations may themselves take a linear amount of time. When the dynamic array includes operations that decrease the array size as well as increasing it, the potential function must be modified to prevent it from becoming negative. One way to do this is to replace the formula above for Φ by its absolute value. Multi-Pop Stack. Consider a stack which supports two operations: Push, which adds a single element to the top of the stack, and Pop("k"), which removes the top "k" elements at once. Pop("k") requires O("k") time, but we wish to show that all operations take O(1) amortized time. This structure may be analyzed using the potential function: Φ = number-of-elements-in-stack This number is always non-negative, as required. A Push operation takes constant time and increases Φ by 1, so its amortized time is constant. A Pop operation takes time O("k") but also reduces Φ by "k", so its amortized time is also constant. This proves that any sequence of "m" operations takes O("m") actual time in the worst case. Binary counter. Consider a counter represented as a binary number and supporting an Inc operation, which adds one to the counter. For this example, we are "not" using the transdichotomous machine model, but instead require one unit of time per bit operation in the increment. We wish to show that Inc takes O(1) amortized time. This structure may be analyzed using the potential function: Φ = number-of-bits-equal-to-1 = hammingweight(counter) This number is always non-negative and starts with 0, as required. An Inc operation flips the least significant bit. If that bit was flipped from 1 to 0, then the next bit is also flipped. This goes on until finally a bit is flipped from 0 to 1, at which point the flipping stops. 
If the counter initially ends in "k" 1 bits, we flip a total of "k"+1 bits, taking actual time "k"+1 and reducing the potential by "k"−1, so the amortized time is 2. Hence, the actual time for running "m" Inc operations is O("m"). Applications. The potential function method is commonly used to analyze Fibonacci heaps, a form of priority queue in which removing an item takes logarithmic amortized time, and all other operations take constant amortized time. It may also be used to analyze splay trees, a self-adjusting form of binary search tree with logarithmic amortized time per operation. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "T_\\mathrm{amortized}(o) = T_\\mathrm{actual}(o) + C\\cdot(\\Phi(S_\\mathrm{after}) - \\Phi(S_\\mathrm{before}))," }, { "math_id": 1, "text": "O = o_1, o_2, \\dots,o_n " }, { "math_id": 2, "text": "T_\\mathrm{amortized}(O) = \\sum_{i=1}^n T_\\mathrm{amortized}(o_i)," }, { "math_id": 3, "text": "T_\\mathrm{actual}(O) = \\sum_{i=1}^n T_\\mathrm{actual}(o_i)." }, { "math_id": 4, "text": "T_\\mathrm{amortized}(O) = \\sum_{i=1}^n \\left(T_\\mathrm{actual}(o_i) + C\\cdot(\\Phi(S_i) - \\Phi(S_{i-1}))\\right) = T_\\mathrm{actual}(O) + C\\cdot(\\Phi(S_n) - \\Phi(S_0))," }, { "math_id": 5, "text": "T_\\mathrm{actual}(O) = T_\\mathrm{amortized}(O) - C\\cdot(\\Phi(S_n) - \\Phi(S_0))." }, { "math_id": 6, "text": "\\Phi(S_0) = 0" }, { "math_id": 7, "text": "\\Phi(S_n)\\ge 0" }, { "math_id": 8, "text": "T_\\mathrm{actual}(O) \\leq T_\\mathrm{amortized}(O)" } ]
https://en.wikipedia.org/wiki?curid=5870300
58705620
Spillover (experiment)
In experiments, a spillover is an indirect effect on a subject not directly treated by the experiment. These effects are useful for policy analysis but complicate the statistical analysis of experiments. Analysis of spillover effects involves relaxing the non-interference assumption, or SUTVA (Stable Unit Treatment Value Assumption). This assumption requires that subject "i"'s revelation of its potential outcomes depends only on subject "i"'s own treatment status, and is unaffected by another subject "j"'s treatment status. In ordinary settings where the researcher seeks to estimate the average treatment effect (formula_0), violation of the non-interference assumption means that traditional estimators for the ATE, such as difference-in-means, may be biased. However, there are many real-world instances where a unit's revelation of potential outcomes depends on another unit's treatment assignment, and analyzing these effects may be just as important as analyzing the direct effect of treatment. One solution to this problem is to redefine the causal estimand of interest by redefining a subject's potential outcomes in terms of one's own treatment status and related subjects' treatment status. The researcher can then analyze various estimands of interest separately. One important assumption here is that this process captures all patterns of spillovers, and that there are no unmodeled spillovers remaining (e.g., spillovers occur within a two-person household but not beyond). Once the potential outcomes are redefined, the rest of the statistical analysis involves modeling the probabilities of being exposed to treatment given some schedule of treatment assignment, and using inverse probability weighting (IPW) to produce unbiased (or asymptotically unbiased) estimates of the estimand of interest. Examples of spillover effects. Spillover effects can occur in a variety of different ways. 
Common applications include the analysis of social network spillovers and geographic spillovers. In such settings, treatment in a randomized-control trial can have a direct effect on those who receive the intervention and also a spillover effect on those who were not directly treated. Statistical issues. Estimating spillover effects in experiments introduces three statistical issues that researchers must take into account. Relaxing the non-interference assumption. One key assumption for unbiased inference is the non-interference assumption, which posits that an individual's potential outcomes are only revealed by their own treatment assignment and not the treatment assignment of others. This assumption has also been called the Individualistic Treatment Response or the stable unit treatment value assumption. Non-interference is violated when subjects can communicate with each other about their treatments, decisions, or experiences, thereby influencing each other's potential outcomes. If the non-interference assumption does not hold, units no longer have just two potential outcomes (treated and control), but a variety of other potential outcomes that depend on other units' treatment assignments, which complicates the estimation of the average treatment effect. Estimating spillover effects requires relaxing the non-interference assumption. This is because a unit's outcomes depend not only on its treatment assignment but also on the treatment assignment of its neighbors. The researcher must posit a set of potential outcomes that limit the type of interference. As an example, consider an experiment that sends out political information to undergraduate students to increase their political participation. 
If the study population consists of all students living with a roommate in a college dormitory, one can imagine four potential outcomes Y"i","j", where "i" indicates whether the student received the information and "j" whether their roommate did (assume no spillover outside of each two-person room). Now an individual's outcomes are influenced by both whether they received the treatment and whether their roommate received the treatment. We can estimate one type of spillover effect by looking at how one's outcomes change depending on whether their roommate received the treatment or not, given the individual did not receive treatment directly. This would be captured by the difference Y0,1 − Y0,0. Similarly, we can measure how one's outcomes change depending on their roommate's treatment status, when the individual is treated. This amounts to taking the difference Y1,1 − Y1,0. While researchers typically embrace experiments because they require less demanding assumptions, spillovers can be "unlimited in extent and impossible to specify in form." The researcher must make specific assumptions about which types of spillovers are operative. One can relax the non-interference assumption in various ways depending on how spillovers are thought to occur in a given setting. One way to model spillover effects is a binary indicator for whether an immediate neighbor was also treated, as in the example above. One can also posit spillover effects that depend on the number of immediate neighbors that were also treated, also known as k-level effects. Exposure mappings. The next step after redefining the causal estimand of interest is to characterize the probability of spillover exposure for each subject in the analysis, given some vector of treatment assignment. Aronow and Samii (2017) present a method for obtaining a matrix of exposure probabilities for each unit in the analysis. 
First, define a diagonal matrix with a vector of treatment assignment probabilities formula_1 Second, define an indicator matrix formula_2 of whether the unit is exposed to spillover or not. This is done by using an adjacency matrix as shown on the right, where information regarding a network can be transformed into an indicator matrix. This resulting indicator matrix will contain values of formula_3, the realized values of a random binary variable formula_4, indicating whether that unit has been exposed to spillover or not. Third, obtain the sandwich product formula_5, an "N" × "N" matrix which contains two elements: the individual probability of exposure formula_6on the diagonal, and the joint exposure probabilities formula_7on the off diagonals: formula_8In a similar fashion, the joint probability of exposure of "i" being in exposure condition formula_3 and "j" being in a different exposure condition formula_9can be obtained by calculating formula_10: formula_11Notice that the diagonals on the second matrix are 0 because a subject cannot be simultaneously exposed to two different exposure conditions at once, in the same way that a subject cannot reveal two different potential outcomes at once. The obtained exposure probabilities formula_12then can be used for inverse probability weighting (IPW, described below), in an estimator such as the Horvitz–Thompson estimator. One important caveat is that this procedure excludes all units whose probability of exposure is zero (ex. a unit that is not connected to any other units), since these numbers end up in the denominator of the IPW regression. Need for inverse probability weights. Estimating spillover effects requires additional care: although treatment is directly assigned, spillover status is indirectly assigned and can lead to differential probabilities of spillover assignment for units. 
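A minimal sketch of the three-step procedure above, for a toy four-node network under complete randomization of two treated units (the network, design, and exposure condition are invented for illustration). It enumerates every treatment assignment, builds the indicator matrix of exposure, and forms the sandwich product whose diagonal contains the individual exposure probabilities:

```python
# Sketch of the exposure-probability computation above for a toy network:
# 4 units on a path 0-1-2-3, with exactly 2 of the 4 treated and each of the
# C(4,2) = 6 assignments equally likely.  The exposure condition d_k used
# here -- "untreated but with at least one treated neighbor" -- is an
# illustrative choice.
from itertools import combinations

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
n = 4

def exposed(unit, z):
    """Indicator for condition d_k: unit untreated, >= 1 treated neighbor."""
    return z[unit] == 0 and any(z[v] == 1 for v in neighbors[unit])

assignments = list(combinations(range(n), 2))
p_z = 1.0 / len(assignments)       # uniform probability of each assignment z

# Indicator matrix I_k: one row per unit, one column per assignment.
I_k = [[1 if exposed(unit, [1 if u in treated else 0 for u in range(n)]) else 0
        for treated in assignments]
       for unit in range(n)]

# Sandwich product I_k P I_k' with P = p_z * identity: entry (i, j) is the
# joint probability that units i and j are both in exposure condition d_k,
# and the diagonal holds the individual exposure probabilities pi_i(d_k).
joint = [[p_z * sum(I_k[i][c] * I_k[j][c] for c in range(len(assignments)))
          for j in range(n)]
         for i in range(n)]
pi = [joint[i][i] for i in range(n)]

print(pi)  # interior units are exposed more often than the endpoints
```

Here pi works out to [1/3, 1/2, 1/2, 1/3]: the interior nodes have two neighbors who might be treated, the endpoints only one. These unequal probabilities are exactly what the inverse probability weights described below are meant to neutralize.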
For example, a subject with 10 friend connections is more likely to be indirectly exposed to treatment as opposed to a subject with just one friend connection. Not accounting for varying probabilities of spillover exposure can bias estimates of the average spillover effect. Figure 1 displays an example where units have varying probabilities of being assigned to the spillover condition. Subfigure A displays a network of 25 nodes where the units in green are eligible to receive treatment. Spillovers are defined as sharing at least one edge with a treated unit. For example, if node 16 is treated, nodes 11, 17, and 21 would be classified as spillover units. Suppose three of these six green units are selected randomly to be treated, so that formula_13 different sets of treatment assignments are possible. In this case, subfigure B displays each node's probability of being assigned to the spillover condition. Node 3 is assigned to spillover in 95% of the randomizations because it shares edges with three units that are treated. This node will only be a control node in 5% of randomizations: that is, when the three treated nodes are 14, 16, and 18. Meanwhile, node 15 is assigned to spillover only 50% of the time—if node 14 is not directly treated, node 15 will not be assigned to spillover. Using inverse probability weights. When analyzing experiments with varying probabilities of assignment, special precautions should be taken. These differences in assignment probabilities may be neutralized by inverse-probability-weighted (IPW) regression, where each observation is weighted by the inverse of its likelihood of being assigned to the treatment condition observed using the Horvitz-Thompson estimator. This approach addresses the bias that might arise if potential outcomes were systematically related to assignment probabilities. The downside of this estimator is that it may be fraught with sampling variability if some observations are accorded a high amount of weight (i.e. 
a unit with a low probability of being spillover is assigned to the spillover condition by chance). Using randomization inference for hypothesis testing. In some settings, estimating the variability of a spillover effect creates additional difficulty. When the research study has a fixed unit of clustering, such as a school or household, researchers can use traditional standard error adjustment tools like cluster-robust standard errors, which allow for correlations in error terms within clusters but not across them. In other settings, however, there is no fixed unit of clustering. In order to conduct hypothesis testing in these settings, the use of randomization inference is recommended. This technique allows one to generate p-values and confidence intervals even when spillovers do not adhere to a fixed unit of clustering but nearby units tend to be assigned to similar spillover conditions, as in the case of fuzzy clustering. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\widehat{ATE}" }, { "math_id": 1, "text": "\\mathbf { P } = \\operatorname{diag} \\left( p_{\\mathbf z_1} , p_{\\mathbf z_2} , \\dots , p_{\\mathbf{z}_{|\\Omega|}} \\right). " }, { "math_id": 2, "text": "\\mathbf{I}" }, { "math_id": 3, "text": "d_k" }, { "math_id": 4, "text": "D_i = f \\left( \\mathbf { Z } , \\theta_i \\right)" }, { "math_id": 5, "text": "\\mathbf {I}_k \\mathbf { P } \\mathbf {I}_k^{\\prime}" }, { "math_id": 6, "text": "\\pi _ { i } \\left( d _ { k } \\right)" }, { "math_id": 7, "text": "\\pi _ { i j } \\left( d _ { k } \\right)" }, { "math_id": 8, "text": "\\mathbf {I}_k \\mathbf { P } \\mathbf {I}_k^\\prime = \\left[ \\begin{array} {cccc} { \\pi_1(d_k) } & \\pi_{12} (d_k) & \\cdots & \\pi_{1N} (d_k) \\\\ \\pi_{21}(d_k) & \\pi_2(d_k) & \\cdots & \\pi_{2N}(d_k) \\\\ \\vdots & \\vdots & \\ddots & \\\\ \\pi_{N1}(d_k) & \\pi_{N2}(d_k) & { } & \\pi_N ( d_k) \\end{array} \\right]" }, { "math_id": 9, "text": "d_l" }, { "math_id": 10, "text": "\\mathbf { I } _ { k } \\mathbf { P } \\mathbf { I } _ { l } ^ { \\prime }" }, { "math_id": 11, "text": "\\mathbf { I } _ { k } \\mathbf { P } \\mathbf { I } _ { l } ^ { \\prime } = \\left[ \\begin{array} { c c c c } { 0 } & { \\pi _ { 12 } \\left( d _ { k } , d _ { l } \\right) } & { \\dots } & { \\pi _ { 1 N } \\left( d _ { k } , d _ { l } \\right) } \\\\ { \\pi _ { 21 } \\left( d _ { k } , d _ { l } \\right) } & { 0 } & { \\ldots } & { \\pi _ { 2 N } \\left( d _ { k } , d _ { l } \\right) } \\\\ { \\vdots } & { \\vdots } & { \\ddots } & { } \\\\ \\pi_{N1} (d_k, d_l ) & \\pi_{N2} (d_k, d_l) & & 0 \\end{array} \\right]" }, { "math_id": 12, "text": "\\pi" }, { "math_id": 13, "text": "\\binom{6}{3}=20" } ]
https://en.wikipedia.org/wiki?curid=58705620
5871034
Schottky group
In mathematics, a Schottky group is a special sort of Kleinian group, first studied by Friedrich Schottky (1877). Definition. Fix some point "p" on the Riemann sphere. Each Jordan curve not passing through "p" divides the Riemann sphere into two pieces, and we call the piece containing "p" the "exterior" of the curve, and the other piece its "interior". Suppose there are 2"g" disjoint Jordan curves "A"1, "B"1..., "A""g", "B""g" in the Riemann sphere with disjoint interiors. If there are Möbius transformations "T""i" taking the exterior of "A""i" onto the interior of "B""i", then the group generated by these transformations is a Kleinian group. A Schottky group is any Kleinian group that can be constructed like this. Properties. By work of , a finitely generated Kleinian group is Schottky if and only if it is finitely generated, free, has nonempty domain of discontinuity, and all non-trivial elements are loxodromic. A fundamental domain for the action of a Schottky group "G" on its regular points Ω("G") in the Riemann sphere is given by the exterior of the Jordan curves defining it. The corresponding quotient space Ω("G")/"G" is given by joining up the Jordan curves in pairs, so is a compact Riemann surface of genus "g". This is the boundary of the 3-manifold given by taking the quotient ("H"∪Ω("G"))/"G" of 3-dimensional hyperbolic space "H" plus the regular set Ω("G") by the Schottky group "G", which is a handlebody of genus "g". Conversely, any compact Riemann surface of genus "g" can be obtained from some Schottky group of genus "g". Classical and non-classical Schottky groups. A Schottky group is called classical if all the disjoint Jordan curves corresponding to some set of generators can be chosen to be circles. Marden (1974, 1977) gave an indirect and non-constructive proof of the existence of non-classical Schottky groups; an explicit example was later constructed. 
It has been shown by that all finitely generated classical Schottky groups have limit sets of Hausdorff dimension bounded above strictly by a universal constant less than 2. Conversely, has proved that there exists a universal lower bound on the Hausdorff dimension of limit sets of all non-classical Schottky groups. Limit sets of Schottky groups. The limit set of a Schottky group, the complement of Ω("G"), always has Lebesgue measure zero, but can have positive "d"-dimensional Hausdorff measure for "d" < 2. It is perfect and nowhere dense with positive logarithmic capacity. The statement on Lebesgue measures follows for classical Schottky groups from the existence of the Poincaré series formula_0 Poincaré showed that the series of terms | "c""i" |^−4 is summable over the non-identity elements of the group. In fact, taking a closed disk in the interior of the fundamental domain, its images under different group elements are disjoint and contained in a fixed disk about 0. So the sum of the areas is finite. By the change of variables formula, the area is greater than a constant times | "c""i" |^−4. A similar argument implies that the limit set has Lebesgue measure zero. For it is contained in the complement of the union of the images of the fundamental region by group elements with word length bounded by "n". This is a finite union of disks, so it has finite area. That area is bounded above by a constant times the contribution to the Poincaré sum of elements of word length "n", so it decreases to 0. Schottky space. Schottky space (of some genus "g" ≥ 2) is the space of marked Schottky groups of genus "g", in other words the space of sets of "g" elements of PSL2(C) that generate a Schottky group, up to equivalence under Möbius transformations. It is a complex manifold of complex dimension 3"g"−3. It contains classical Schottky space as the subset corresponding to classical Schottky groups. 
Schottky space of genus "g" is not simply connected in general, but its universal covering space can be identified with Teichmüller space of compact genus "g" Riemann surfaces. Notes. <templatestyles src="Reflist/styles.css" />
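Poincaré's summability argument can be illustrated numerically. The sketch below (an illustrative construction, not taken from the article) builds a classical genus-2 Schottky group from two Möbius transformations pairing the disjoint circles |"z" ± 3| = 1 and |"z" ± 3i| = 1, and shows that the contributions to the Poincaré sum by word length fall off rapidly:

```python
import cmath
from math import isclose

# Two Möbius generators pairing disjoint circles (an illustrative classical
# genus-2 Schottky group):
#   T(z) = 3 + 1/(z+3)  = (3z+10)/(z+3)    maps the exterior of |z+3|=1
#                                           onto the interior of |z-3|=1,
#   S(z) = 3i + 1/(z+3i) = (3iz-8)/(z+3i)  maps the exterior of |z+3i|=1
#                                           onto the interior of |z-3i|=1.
T = ((3 + 0j, 10 + 0j), (1 + 0j, 3 + 0j))
S = ((3j, -8 + 0j), (1 + 0j, 3j))

def minv(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def mmul(m, n):
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

GENS = [T, minv(T), S, minv(S)]
INVERSE_OF = {0: 1, 1: 0, 2: 3, 3: 2}

def poincare_contributions(max_len):
    """Sum |c|^-4 over reduced words of each length.  All matrices here have
    |det| = 1, so |c| agrees with the SL(2,C)-normalised lower-left entry."""
    contrib = {}
    words = [((i,), GENS[i]) for i in range(4)]
    for n in range(1, max_len + 1):
        contrib[n] = sum(abs(m[1][0]) ** -4 for _, m in words)
        words = [(w + (i,), mmul(m, GENS[i]))
                 for w, m in words for i in range(4) if i != INVERSE_OF[w[-1]]]
    return contrib

# Sanity check: T really maps the circle |z+3|=1 onto the circle |z-3|=1.
for k in range(8):
    z = -3 + cmath.exp(2j * cmath.pi * k / 8)
    (a, b), (c, d) = T
    assert isclose(abs((a * z + b) / (c * z + d) - 3), 1.0, abs_tol=1e-9)

contributions = poincare_contributions(3)
```

For this group the four words of length 1 each contribute 1, and the per-length contributions then shrink geometrically, consistent with the convergence of the Poincaré series.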
[ { "math_id": 0, "text": "\\displaystyle{P(z)=\\sum (c_iz+d_i)^{-4}.}" } ]
https://en.wikipedia.org/wiki?curid=5871034
58714870
Accessibility (transport)
Measure of ease of reaching a destination In transport planning, accessibility refers to a measure of the ease of reaching (and interacting with) destinations or activities distributed in space, e.g. around a city or country. Accessibility is generally associated with a place (or places) of origin. A place with "high accessibility" is one from which many destinations can be reached or destinations can be reached with relative ease. "Low accessibility" implies that relatively few destinations can be reached for a given amount of time/effort/cost or that reaching destinations is more difficult or costly from that place. The concept can also be defined in the other direction, and we can speak of a place having accessibility "from" some set of surrounding places. For example, one could measure the accessibility of a store to customers as well as the accessibility of a potential customer to some set of stores. In time geography, accessibility has also been defined as "person based" rather than "place based", where one would consider a person's access to some type of amenity through the course of their day as they move through space. For example, a person might live in a food desert but have easy access to a grocery store from their place of work. Accessibility is often calculated separately for different modes of transport. Mathematical definition. In general, accessibility formula_0 is defined as: formula_1 where formula_4 is the weight or number of opportunities at destination formula_3, and formula_5 is the generalized cost of travelling from origin formula_2 to destination formula_3. Cost metrics. Travel cost metrics (formula_5 in the equation above) can take a variety of forms, and may also be defined using any combination of such metrics. For a non-motorized mode of transport, such as walking or cycling, the generalized travel cost may include additional factors such as safety or gradient. The essential idea is to define a function that describes the ease of travelling from any origin formula_2 to any destination formula_3. 
A large compendium of such cost metrics used in practice was developed in 2012, under the framework of Cost Action TU1002, and is available online. Impedance functions. The function on the travel cost formula_6 determines how accessible a destination is based on the travel cost associated with reaching that destination. Two common impedance functions are "cumulative opportunities" and a negative exponential function. Cumulative opportunities is a binary function yielding 1 if an opportunity can be reached within some threshold and 0 otherwise. It is defined as: formula_7 where formula_8 is the threshold parameter. A negative exponential impedance function can be defined as: formula_9 where formula_10 is a parameter defining how quickly the function decays with distance. Relation to land use. Accessibility has long been associated with land use; as accessibility increases in a given place, the utility of developing the land increases. This association is often used in integrated transport and land-use forecasting models. At the same time, the accessibility of a place can be changed not only through a modification of the transport infrastructure, but also as a consequence of a changed spatial structure or distribution of destinations. In practice. Transport agencies. Transport for London utilizes a calculated approach known as Public Transport Accessibility Level (PTAL) that uses the distance from any point to the nearest public transport stops, and service frequency at those stops, to assess the accessibility of a site to public transport services. Destination-based accessibility measures are an alternative approach that can be more sophisticated to calculate. These measures consider not just access to public transport services (or any other form of travel), but the resulting access to opportunities that arises from it. For example, using origin-based accessibility (PTAL) we can understand how many buses one may be able to access. 
Using destination-based measures we can calculate how many schools, hospitals, jobs, restaurants (etc.) can be accessed. In urban planning. Accessibility-based planning is a spatial planning methodology that centralises the goals of people and businesses and defines accessibility policy as enhancing opportunities for people and businesses. Traditionally, urban transportation planning has mainly focused on the efficiency of the transport system itself and often responds to plans made by spatial planners. Such an approach neglects the influence of interventions in the transport system on broader and often conflicting economic, social and environmental goals. Accessibility-based planning defines accessibility as the amount of services and jobs people can access within a certain travel time, considering one or more modes of transport such as walking, cycling, driving or public transport. Under this definition, accessibility relates not only to the qualities of the transport system (e.g. travel speed, time or costs), but also to the qualities of the land use system (e.g. densities and mixes of opportunities). It thus provides planners with the means to understand interdependencies between transport and land use development. Accessibility planning opens the floor to a more normative approach to transportation planning involving different actors. For politicians, citizens and firms it might be easier to discuss the quality of access to education, services and markets than it is to discuss the inefficiencies of the transport system itself. Accessibility is also defined as "the potential for interaction". Despite the high potential of accessibility in integrating the different components of urban planning, such as land use and transportation, and the large number of accessibility instruments available in the research literature, the latter are not yet widely used to support urban planning practice. 
While accessibility language is kept out of the practice level, older paradigms resist the more informed and people-centred approaches. The existence of accessibility instruments is generally acknowledged, but practitioners do not appear to have found them useful or usable enough for addressing the tasks of sustainable urban management. References. <templatestyles src="Reflist/styles.css" />
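As a minimal sketch (the data and parameter values below are illustrative, not from any agency), the accessibility measure formula_1 can be computed with either impedance function defined above:

```python
from math import exp

def cumulative_opportunities(cost, threshold):
    """f(C_ij) = 1 if C_ij <= threshold, else 0."""
    return 1.0 if cost <= threshold else 0.0

def negative_exponential(cost, beta):
    """f(C_ij) = exp(-beta * C_ij)."""
    return exp(-beta * cost)

def accessibility(weights, costs, impedance):
    """A_i = sum_j W_j * f(C_ij) for a single origin i."""
    return sum(w * impedance(c) for w, c in zip(weights, costs))

# Illustrative data: three destinations with opportunity weights W_j
# (e.g. number of jobs) and travel costs C_ij in minutes from origin i.
W = [100, 250, 50]
C = [10, 25, 45]

a_cum = accessibility(W, C, lambda c: cumulative_opportunities(c, threshold=30))
a_exp = accessibility(W, C, lambda c: negative_exponential(c, beta=0.1))
```

With the 30-minute threshold the third destination is excluded, so the cumulative-opportunities measure is 350; the negative-exponential measure instead discounts every destination smoothly with cost.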
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\nA_i = \\sum_j {W_j } \\times f\\left( {C_{ij} } \\right)\n" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "j" }, { "math_id": 4, "text": "W_j" }, { "math_id": 5, "text": "C_{ij}" }, { "math_id": 6, "text": "f\\left( {C_{ij} } \\right)" }, { "math_id": 7, "text": "\nf(C_{ij}) = \n\\begin{cases} \n1~~\\text{if} & C_{ij} \\leq \\theta \\\\\n0~~\\text{if} & C_{ij} > \\theta\n\\end{cases}\n" }, { "math_id": 8, "text": "\\theta" }, { "math_id": 9, "text": "\nf(C_{ij}) = e^{ -\\beta C_{ij} }\n" }, { "math_id": 10, "text": "\\beta" } ]
https://en.wikipedia.org/wiki?curid=58714870
58716496
Quantum volume
Metric for a quantum computer's capabilities Quantum volume is a metric that measures the capabilities and error rates of a quantum computer. It expresses the maximum size of square quantum circuits that can be implemented successfully by the computer. The form of the circuits is independent of the quantum computer's architecture, but a compiler can transform and optimize them to take advantage of the computer's features. Thus, quantum volumes for different architectures can be compared. The current world record for highest quantum volume as of 2024 is 2^20, accomplished by Quantinuum's H1-1 20-qubit ion trap quantum computer. Introduction. Quantum computers are difficult to compare. Quantum volume is a single number designed to show all-around performance. It is a measurement and not a calculation, and takes into account several features of a quantum computer, starting with its number of qubits; other measures used are gate and measurement errors, crosstalk and connectivity. IBM defined its Quantum Volume metric because a classical computer's transistor count and a quantum computer's qubit count are not comparable measures of capability. Qubits decohere with a resulting loss of performance, so a few fault-tolerant qubits are more valuable as a performance measure than a larger number of noisy, error-prone qubits. Generally, the larger the quantum volume, the more complex the problems a quantum computer can solve. Alternative benchmarks, such as Cross-entropy benchmarking, reliable Quantum Operations per Second (rQOPS) proposed by Microsoft, Circuit Layer Operations Per Second (CLOPS) proposed by IBM and IonQ's Algorithmic Qubits, have also been proposed. Definition. Original definition. The quantum volume of a quantum computer was originally defined in 2018 by Nikolaj Moll "et al." However, since around 2021 that definition has been supplanted by IBM's 2019 redefinition. 
The original definition depends on the number of qubits N as well as the number of steps that can be executed, the circuit depth d: formula_0 The circuit depth depends on the effective error rate as formula_1 The effective error rate is defined as the average error rate of a two-qubit gate. If the physical two-qubit gates do not have all-to-all connectivity, additional SWAP gates may be needed to implement an arbitrary two-qubit gate and ε_eff > ε, where ε is the error rate of the physical two-qubit gates. If more complex hardware gates are available, such as the three-qubit Toffoli gate, it is possible that ε_eff < ε. The allowable circuit depth decreases when more qubits with the same effective error rate are added. So with these definitions, as soon as d < N, the quantum volume goes down if more qubits are added. To run an algorithm that only requires n < N qubits on an N-qubit machine, it could be beneficial to select a subset of qubits with good connectivity. For this case, Moll "et al." give a refined definition of quantum volume: formula_2 where the maximum is taken over an arbitrary choice of n qubits. IBM's redefinition. In 2019, IBM's researchers modified the quantum volume definition to be an exponential of the circuit size, stating that it corresponds to the complexity of simulating the circuit on a classical computer: formula_3 Volumetric benchmarks. The quantum volume benchmark defines a family of "square" circuits, whose number of qubits N and depth d are the same. Therefore, the output of this benchmark is a single number. However, a proposed generalization is the volumetric benchmark framework, which defines a family of "rectangular" quantum circuits, for which N and d are uncoupled to allow the study of time/space performance trade-offs, thereby sacrificing the simplicity of a single-figure benchmark. Volumetric benchmarks can be generalized not only to account for uncoupled N and d dimensions, but also to test different types of quantum circuits. 
While quantum volume benchmarks the quantum computer's ability to implement a specific type of "randomized circuits", these can, in principle, be substituted by other families of random circuits, periodic circuits, or algorithm-inspired circuits. Each benchmark must have a success criterion that defines whether a processor has "passed" a given test circuit. While these data can be analyzed in many ways, a simple method of visualization is illustrating the Pareto front of the N versus d trade-off for the processor being benchmarked. This Pareto front provides information on the largest depth d a patch of a given number of qubits N can withstand, or, alternatively, the biggest patch of N qubits that can withstand executing a circuit of given depth d. Notes. <templatestyles src="Reflist/styles.css" />
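The original and redefined metrics can be compared in a short numerical sketch (hypothetical constant effective error rate, with the achievable depth treated as a continuous quantity):

```python
def original_quantum_volume(n_qubits, eps_eff):
    """Moll et al. (2018): V = min(N, d)^2, with achievable depth
    d = 1/(N * eps_eff)."""
    d = 1.0 / (n_qubits * eps_eff)
    return min(n_qubits, d) ** 2

def refined_quantum_volume(n_qubits, eps_eff_of_n):
    """Refined form: maximise min(n, 1/(n*eps_eff(n)))^2 over subsets of
    n <= N qubits (eps_eff may depend on the chosen subset size)."""
    return max(min(n, 1.0 / (n * eps_eff_of_n(n))) ** 2
               for n in range(1, n_qubits + 1))

def ibm_log2_quantum_volume(n_qubits, depth_of_n):
    """IBM (2019): log2 V_Q = max over n <= N of min(n, d(n))."""
    return max(min(n, depth_of_n(n)) for n in range(1, n_qubits + 1))

EPS = 1e-3  # hypothetical effective two-qubit error rate

# With 20 qubits the achievable depth (50) exceeds N, so V = N^2 = 400 and
# log2 V_Q = 20; with 100 qubits the depth drops to 10 and V falls to 100,
# illustrating that adding noisy qubits can reduce the quantum volume.
v20 = original_quantum_volume(20, EPS)
v100 = original_quantum_volume(100, EPS)
log2_vq = ibm_log2_quantum_volume(20, lambda n: 1.0 / (n * EPS))
```

Note that IBM's metric reports the exponent, so the hypothetical 20-qubit machine here would be quoted as a quantum volume of 2^20.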
[ { "math_id": 0, "text": "\n\\tilde{V}_Q = \\min[N, d(N)]^2.\n" }, { "math_id": 1, "text": "\nd \\simeq \\frac{1}{N\\varepsilon_\\mathrm{eff}}.\n" }, { "math_id": 2, "text": "\n V_Q = \\max_{n<N} \\left\\{ \\min\\left[n,\\frac{1}{n\\varepsilon_\\mathrm{eff}(n)}\\right]^2 \\right\\},\n" }, { "math_id": 3, "text": "\\log_2 V_Q = \\underset{n \\le N}{\\operatorname{arg\\,max}}\\left\\{\\min\\left[n, d(n)\\right]\\right\\}" } ]
https://en.wikipedia.org/wiki?curid=58716496
58726047
Partial inverse of a matrix
In linear algebra and statistics, the partial inverse of a matrix is an operation related to Gaussian elimination which has applications in numerical analysis and statistics. It is also known by various authors as the principal pivot transform, or as the sweep, gyration, or exchange operator. Given an formula_0 matrix formula_1 over a vector space formula_2 partitioned into blocks: formula_3 If formula_4 is invertible, then the partial inverse of formula_5 around the pivot block formula_4 is created by inverting formula_4, putting the Schur complement formula_6 in place of formula_7, and adjusting the off-diagonal elements accordingly: formula_8 Conceptually, partial inversion corresponds to a rotation of the graph of the matrix formula_9, such that, for conformally-partitioned column matrices formula_10 and formula_11: formula_12 As defined this way, this operator is its own inverse: formula_13, and if the pivot block formula_4 is chosen to be the entire matrix, then the transform simply gives the matrix inverse formula_14. Note that some authors define a related operation (under one of the other names) which is not an inverse per se; particularly, one common definition instead has formula_15. The transform is often presented as a pivot around a single non-zero element formula_16, in which case one has formula_17 Partial inverses obey a number of nice properties. Use of the partial inverse in numerical analysis is due to the fact that there is some flexibility in the choices of pivots, allowing for non-invertible elements to be avoided, and because the operation of "rotation" (of the graph of the pivoted matrix) has better numerical stability than the "shearing" operation which is implicitly performed by Gaussian elimination. Use in statistics is due to the fact that the resulting matrix nicely decomposes into blocks which have useful meanings in the context of linear regression. References. <templatestyles src="Reflist/styles.css" />
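The single-element pivot formula above translates directly into code. The sketch below (a minimal illustration using plain Python lists) implements the pivot around formula_16 and demonstrates two properties stated in the article: the operation is an involution, and sweeping every pivot in turn produces the full matrix inverse:

```python
def partial_inverse(A, k):
    """Principal pivot transform (sweep) of a square matrix A, given as a
    list of rows, around the single non-zero diagonal element A[k][k]."""
    n = len(A)
    p = A[k][k]
    if p == 0:
        raise ValueError("pivot element must be non-zero")
    B = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k and j == k:
                B[i][j] = 1.0 / p                        # 1/a_kk
            elif i == k:
                B[i][j] = -A[k][j] / p                   # -a_kj/a_kk
            elif j == k:
                B[i][j] = A[i][k] / p                    # a_ik/a_kk
            else:
                B[i][j] = A[i][j] - A[i][k] * A[k][j] / p
    return B

A = [[4.0, 3.0], [6.0, 3.0]]

# Involution: pivoting twice around the same element restores A.
restored = partial_inverse(partial_inverse(A, 0), 0)

# Sweeping both pivots in turn gives the ordinary inverse of A.
Ainv = partial_inverse(partial_inverse(A, 0), 1)
```

For this A (determinant −6) the double sweep returns [[-0.5, 0.5], [1.0, -2/3]], which is exactly A⁻¹.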
[ { "math_id": 0, "text": " n \\times n " }, { "math_id": 1, "text": " A" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": " A = \\begin{pmatrix} A_{11} & A_{12} \\\\ A_{21} & A_{22} \\end{pmatrix} " }, { "math_id": 4, "text": "A_{11}" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "A / A_{11}" }, { "math_id": 7, "text": "A_{22}" }, { "math_id": 8, "text": " \\operatorname{inv}_1 A = \\begin{pmatrix} (A_{11})^{-1} & - (A_{11})^{-1} A_{12} \\\\ A_{21} (A_{11})^{-1} & A_{22} - A_{21} (A_{11})^{-1}A_{12} \\end{pmatrix} " }, { "math_id": 9, "text": " (X, AX) \\in V \\times V" }, { "math_id": 10, "text": "(x_1, x_2)^T" }, { "math_id": 11, "text": "(y_1, y_2)^T" }, { "math_id": 12, "text": " A \\begin{pmatrix} x_1 \\\\ x_2 \\end{pmatrix} = \\begin{pmatrix} y_1 \\\\ y_2 \\end{pmatrix} \\Leftrightarrow \n\\operatorname{inv}_1(A) \\begin{pmatrix} y_1 \\\\ x_2 \\end{pmatrix} = \\begin{pmatrix} x_1 \\\\ y_2 \\end{pmatrix} " }, { "math_id": 13, "text": " \\operatorname{inv}_k(\\operatorname{inv}_k(A)) = A " }, { "math_id": 14, "text": "A^{-1}" }, { "math_id": 15, "text": "(\\operatorname{inv}_k)^2 (A) = -A" }, { "math_id": 16, "text": "a_{kk}" }, { "math_id": 17, "text": "\n\\left[ \\operatorname{inv}_k (A) \\right]_{ij} = \\begin{cases} \n\\frac{1}{a_{kk}} & i = j = k \\\\\n-\\frac{a_{kj}}{a_{kk}} & i = k, j \\neq k \\\\\n\\frac{a_{ik}}{a_{kk}} & i \\neq k, j = k \\\\\na_{ij} - \\frac{a_{ik} a_{kj}}{a_{kk}} & i \\neq k, j \\neq k\n\\end{cases}\n" } ]
https://en.wikipedia.org/wiki?curid=58726047
587271
Torsion spring
Type of spring A torsion spring is a spring that works by twisting its end along its axis; that is, a flexible elastic object that stores mechanical energy when it is twisted. When it is twisted, it exerts a torque in the opposite direction, proportional to the amount (angle) it is twisted. There are various types. Torsion, bending. Torsion bars and torsion fibers do work by torsion. However, the terminology can be confusing because in a helical torsion spring (including clock springs), the forces acting on the wire are actually bending stresses, not torsional (shear) stresses. A helical torsion spring actually works by bending of the wire when the spring is twisted. We will use the word "torsion" in the following for a torsion spring according to the definition given above, whether the material it is made of actually works by torsion or by bending. Torsion coefficient. As long as they are not twisted beyond their elastic limit, torsion springs obey an angular form of Hooke's law: formula_0 where formula_1 is the torque exerted by the spring (in newton-metres), formula_2 is the angle of twist from its equilibrium position (in radians), and formula_3 is the torsion coefficient (in newton-metres per radian). The torsion constant may be calculated from the geometry and various material properties. It is analogous to the spring constant of a linear spring. The negative sign indicates that the direction of the torque is opposite to the direction of twist. The energy "U", in joules, stored in a torsion spring is: formula_4 Uses. Some familiar examples of uses are the strong, helical torsion springs that operate clothespins and traditional spring-loaded-bar type mousetraps. Other uses are in the large, coiled torsion springs used to counterbalance the weight of garage doors, and a similar system is used to assist in opening the trunk (boot) cover on some sedans. Small, coiled torsion springs are often used to operate pop-up doors found on small consumer goods like digital cameras and compact disc players. There are also other, more specific uses. Torsion balance. 
The torsion balance, also called torsion pendulum, is a scientific apparatus for measuring very weak forces, usually credited to Charles-Augustin de Coulomb, who invented it in 1777, but independently invented by John Michell sometime before 1783. Its most well-known uses were by Coulomb to measure the electrostatic force between charges to establish Coulomb's Law, and by Henry Cavendish in 1798 in the Cavendish experiment to measure the gravitational force between two masses to calculate the density of the Earth, leading later to a value for the gravitational constant. The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. If an unknown force is applied at right angles to the ends of the bar, the bar will rotate, twisting the fiber, until it reaches an equilibrium where the twisting force or torque of the fiber balances the applied force. Then the magnitude of the force is proportional to the angle of the bar. The sensitivity of the instrument comes from the weak spring constant of the fiber, so a very weak force causes a large rotation of the bar. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls. Determining the force for different charges and different separations between the balls, he showed that it followed an inverse-square proportionality law, now known as Coulomb's law. To measure the unknown force, the spring constant of the torsion fiber must first be known. 
This is difficult to measure directly because of the smallness of the force. Cavendish accomplished this by a method widely used since: measuring the resonant vibration period of the balance. If the free balance is twisted and released, it will oscillate slowly clockwise and counterclockwise as a harmonic oscillator, at a frequency that depends on the moment of inertia of the beam and the elasticity of the fiber. Since the inertia of the beam can be found from its mass, the spring constant can be calculated. Coulomb first developed the theory of torsion fibers and the torsion balance in his 1785 memoir, "Recherches theoriques et experimentales sur la force de torsion et sur l'elasticite des fils de metal &c". This led to its use in other scientific instruments, such as galvanometers, and the Nichols radiometer which measured the radiation pressure of light. In the early 1900s gravitational torsion balances were used in petroleum prospecting. Today torsion balances are still used in physics experiments. In 1987, gravity researcher A. H. Cook wrote: The most important advance in experiments on gravitation and other delicate measurements was the introduction of the torsion balance by Michell and its use by Cavendish. It has been the basis of all the most significant experiments on gravitation ever since. In the Eötvös experiment, a torsion balance was used to prove the "equivalence principle": the idea that inertial mass and gravitational mass are one and the same. 
The general differential equation of motion is: formula_8 If the damping is small, formula_9, as is the case with torsion pendulums and balance wheels, the frequency of vibration is very near the natural resonant frequency of the system: formula_10 Therefore, the period is represented by: formula_11 The general solution in the case of no drive force (formula_12), called the transient solution, is: formula_13 where: formula_14 formula_15 Applications. The balance wheel of a mechanical watch is a harmonic oscillator whose resonant frequency formula_6 sets the rate of the watch. The resonant frequency is regulated, first coarsely by adjusting formula_5 with weight screws set radially into the rim of the wheel, and then more finely by adjusting formula_3 with a regulating lever that changes the length of the balance spring. In a torsion balance the drive torque is constant and equal to the unknown force to be measured formula_16, times the moment arm of the balance beam formula_7, so formula_17. When the oscillatory motion of the balance dies out, the deflection will be proportional to the force: formula_18 To determine formula_16 it is necessary to find the torsion spring constant formula_3. If the damping is low, this can be obtained by measuring the natural resonant frequency of the balance, since the moment of inertia of the balance can usually be calculated from its geometry, so: formula_19 In measuring instruments, such as the D'Arsonval ammeter movement, it is often desired that the oscillatory motion die out quickly so the steady state result can be read off. This is accomplished by adding damping to the system, often by attaching a vane that rotates in a fluid such as air or water (this is why magnetic compasses are filled with fluid). The value of damping that causes the oscillatory motion to settle quickest is called the critical damping formula_20: formula_21 Bibliography. <templatestyles src="Refbegin/styles.css" />
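The relations above (resonant frequency, period, the calibration formula formula_19 and critical damping formula_21) can be sketched numerically; the balance parameters below are hypothetical:

```python
from math import pi, sqrt

def natural_frequency(kappa, inertia):
    """f_n = (1/2*pi) * sqrt(kappa/I) for a lightly damped torsional oscillator."""
    return sqrt(kappa / inertia) / (2 * pi)

def period(kappa, inertia):
    """T_n = 2*pi * sqrt(I/kappa)."""
    return 2 * pi * sqrt(inertia / kappa)

def spring_constant_from_frequency(f_n, inertia):
    """kappa = (2*pi*f_n)^2 * I -- Cavendish's route to calibrating a balance."""
    return (2 * pi * f_n) ** 2 * inertia

def critical_damping(kappa, inertia):
    """C_c = 2 * sqrt(kappa * I)."""
    return 2 * sqrt(kappa * inertia)

# Hypothetical torsion balance: I = 2e-4 kg*m^2, kappa = 5e-6 N*m/rad.
I = 2e-4
kappa = 5e-6
f = natural_frequency(kappa, I)   # ~0.025 Hz
T = period(kappa, I)              # ~40 s
```

The slow ~40-second period for a weak fiber illustrates why measuring the oscillation period is a practical way to recover the spring constant.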
[ { "math_id": 0, "text": " \\tau = -\\kappa\\theta\\," }, { "math_id": 1, "text": "\\tau\\," }, { "math_id": 2, "text": "\\theta\\," }, { "math_id": 3, "text": "\\kappa\\," }, { "math_id": 4, "text": " U = \\frac{1}{2}\\kappa\\theta^2" }, { "math_id": 5, "text": "I\\," }, { "math_id": 6, "text": "f_n\\," }, { "math_id": 7, "text": "L\\," }, { "math_id": 8, "text": "I\\frac{d^2\\theta}{dt^2} + C\\frac{d\\theta}{dt} + \\kappa\\theta = \\tau(t)" }, { "math_id": 9, "text": "C \\ll \\sqrt{\\kappa I}\\," }, { "math_id": 10, "text": "f_n = \\frac{\\omega_n}{2\\pi} = \\frac{1}{2\\pi}\\sqrt{\\frac{\\kappa}{I}}\\," }, { "math_id": 11, "text": "T_n = \\frac{1}{f_n} = \\frac{2\\pi}{\\omega_n} = 2\\pi \\sqrt{\\frac{I}{\\kappa}}\\," }, { "math_id": 12, "text": "\\tau = 0\\," }, { "math_id": 13, "text": "\\theta = Ae^{-\\alpha t} \\cos{(\\omega t + \\phi)}\\," }, { "math_id": 14, "text": "\\alpha = C/2I\\," }, { "math_id": 15, "text": "\\omega = \\sqrt{\\omega_n^2 - \\alpha^2} = \\sqrt{\\kappa/I - (C/2I)^2}\\," }, { "math_id": 16, "text": "F\\," }, { "math_id": 17, "text": "\\tau(t) = FL\\," }, { "math_id": 18, "text": "\\theta = FL/\\kappa\\," }, { "math_id": 19, "text": "\\kappa = (2\\pi f_n)^2 I\\," }, { "math_id": 20, "text": "C_c\\," }, { "math_id": 21, "text": "C_c = 2 \\sqrt{\\kappa I}\\," } ]
https://en.wikipedia.org/wiki?curid=587271
58735072
Real radical
Largest ideal with the same vanishing locus In algebra, the real radical of an ideal "I" in a polynomial ring with real coefficients is the largest ideal containing "I" with the same (real) vanishing locus. It plays a role in real algebraic geometry similar to the one that the radical of an ideal plays in algebraic geometry over an algebraically closed field. More specifically, Hilbert's Nullstellensatz says that when "I" is an ideal in a polynomial ring with coefficients coming from an algebraically closed field, the radical of "I" is the set of polynomials vanishing on the vanishing locus of "I". In real algebraic geometry, the Nullstellensatz fails as the real numbers are not algebraically closed. However, one can recover a similar theorem, the "real Nullstellensatz", by using the real radical in place of the (ordinary) radical. Definition. The real radical of an ideal "I" in a polynomial ring formula_0 over the real numbers, denoted by formula_1, is defined as formula_2 The Positivstellensatz then implies that formula_1 is the set of all polynomials that vanish on the real variety defined by the vanishing of formula_3. Notes. <templatestyles src="Reflist/styles.css" />
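As a worked example (illustrative, not from the article), take the ideal "I" = ("x"² + "y"²) in ℝ["x", "y"]. Its real vanishing locus is the single point (0, 0), so its real radical should consist of all polynomials vanishing at the origin, i.e. the ideal ("x", "y"). Membership of "x" follows directly from the definition with "m" = 1:

```latex
% Take f = x, m = 1, h_1 = y and g = -(x^2 + y^2) \in I in the definition:
-f^{2m} = -x^{2} = y^{2} - (x^{2} + y^{2}) = h_1^{2} + g,
\qquad\text{so } x \in \sqrt[\mathbb{R}]{(x^{2} + y^{2})}.
```

By symmetry "y" is also a member, so ("x", "y") is contained in the real radical; the reverse inclusion holds because every element of the real radical vanishes on the real variety, here the origin. By contrast, the ordinary radical of "I" is "I" itself, since "x"² + "y"² is irreducible over the reals.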
[ { "math_id": 0, "text": "\\mathbb{R}[x_1,\\dots,x_n]" }, { "math_id": 1, "text": "\\sqrt[\\mathbb{R}]{I}" }, { "math_id": 2, "text": "\\sqrt[\\mathbb{R}]{I} = \\Big\\{ f \\in \\mathbb{R}[x_1,\\dots,x_n] \\left|\\, -f^{2m} = \\textstyle{\\sum_i} h_i^2 + g \\right.\\text{ where }\\ m \\in \\mathbb{Z}_+,\\, h_i \\in \\mathbb{R}[x_1,\\dots,x_n], \\,\\text{and } g \\in I\\Big\\}." }, { "math_id": 3, "text": "I" } ]
https://en.wikipedia.org/wiki?curid=58735072
58740
Low-temperature technology timeline
The following is a timeline of low-temperature technology and cryogenic technology (refrigeration down to close to absolute zero, i.e. –273.15 °C, –459.67 °F or 0 K). It also lists important milestones in thermometry, thermodynamics, statistical physics and calorimetry that were crucial in the development of low-temperature systems. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "10^{-12}" } ]
https://en.wikipedia.org/wiki?curid=58740
5874390
Holding period return
In finance, holding period return (HPR) is the return on an asset or portfolio over the whole period during which it was held. It is one of the simplest and most important measures of investment performance. HPR is the change in value of an investment, asset or portfolio over a particular period. It is the entire gain or loss, which is the sum of income and capital gains, divided by the value at the beginning of the period. HPR = (End Value - Initial Value) / Initial Value where the End Value includes income, such as dividends, earned on the investment: formula_0 where formula_1 is the value at the start of the holding period and formula_2 is the total value at the end of the holding period. Annualizing the holding period return. Over multiple years. To "annualize" a holding period return means to find the equivalent rate of return per year. Assuming income and capital gains and losses are reinvested, i.e. retained in the portfolio, then: formula_3 formula_4 "t" being the length of the holding period, measured in years. For example, if you have held the item for half a year, "t" would equal 1/2, so 1/"t" would equal 2. (However, investment performance professionals generally advise against quoting annualized return over a holding period of less than a year). From quarterly holding period returns. To calculate an annual HPR from four quarterly HPRs, it is necessary to know whether income is reinvested within each quarter or not. If HPR1 through HPR4 are the holding period returns for four consecutive periods, assuming that income is reinvested, the annual HPR obeys the relation: formula_5 Example with income not reinvested. Consider an example of a stock investment of one share purchased at the beginning of the year for $100. Assume dividends are not reinvested. At the end of the first quarter the stock price is $98. The stock share bought for $100 can only be sold for $98, which is the value of the investment at the end of the first quarter. 
This is less than the purchase price, so the investment has suffered a capital loss. The first quarter holding period return is: ($98 – $100 + $1) / $100 = -1% Since the final stock price at the end of the year is $99, the annual holding period return is: ($99 ending price - $100 beginning price + $4 dividends) / $100 beginning price = 3% If the final stock price had been $95, the annual HPR would be: ($95 ending price - $100 beginning price + $4 dividends) / $100 beginning price = -1%. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
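The definitions above can be written out in a few lines of code as a concrete check (a sketch; the function names are illustrative, and the numbers follow the worked example in this article):

```python
# Holding period return: HPR = (income + end value - initial value) / initial value
def holding_period_return(initial_value, end_value, income=0.0):
    return (income + end_value - initial_value) / initial_value

# Annualized rate of return over a holding period of t years:
# (1 + HPR)^(1/t) - 1
def annualized(hpr, t):
    return (1 + hpr) ** (1 / t) - 1

# Linking consecutive period returns when income is reinvested:
# 1 + HPR = (1 + HPR_1)(1 + HPR_2)...(1 + HPR_n)
def link(period_returns):
    total = 1.0
    for r in period_returns:
        total *= 1 + r
    return total - 1

# The article's example: one share bought for $100, $4 of dividends,
# year-end price $99 gives an annual HPR of 3%.
annual_hpr = holding_period_return(100, 99, income=4)   # 0.03
```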
[ { "math_id": 0, "text": "HPR_n \\ = \\ \\frac{Income + P_{n+1} - P_n}{P_n}" }, { "math_id": 1, "text": "P_n" }, { "math_id": 2, "text": "Income + P_{n+1}" }, { "math_id": 3, "text": "\\text {Annualized rate of return} = \\left( \\frac {\\text{end value}} {\\text {initial value}} \\right) ^ \\frac {1}{t} - 1" }, { "math_id": 4, "text": "=\\left(\\text {holding-period return} + 1 \\right)^{\\frac{1}{t}} - 1" }, { "math_id": 5, "text": "1+HPR=\\left(1+HPR_{1}\\right)\\left(1+HPR_{2}\\right)\\left(1+HPR_{3}\\right)\\left(1+HPR_{4}\\right)" } ]
https://en.wikipedia.org/wiki?curid=5874390
58753418
Uruguay and the World Bank
Uruguay and the World Bank have a long-standing working relationship based on mutual benefit. From the WBG, Uruguay seeks the development of financial services and innovative knowledge, the use of integrated services with the participation of the World Bank, the International Finance Corporation (IFC) and the Multilateral Investment Guarantee Agency (MIGA), and the publication of Uruguayan development experiences on websites where the WBG can serve as a platform for the dissemination of successful reforms. In turn, working with Uruguay is attractive for the World Bank because the country is interested in increasing its productivity and its insertion in the international sphere, and both parties are interested in finding innovative development solutions that assist the country and create positive externalities. Current state of Uruguay. Uruguay had an annual growth rate of 4.54% between 2003 and 2016, was classified as a high-income country by the World Bank in 2013, and by 2016 extreme poverty had almost disappeared, falling from 2.5% of the population to 0.2%. As of 2017, Uruguay's GDP per capita is US$16,245.6. WBG projects in Uruguay. The WBG is formed by the IBRD, the IFC and MIGA. As of October 2017, the World Bank has invested in 12 projects worth US$1.347 billion, in areas such as infrastructure, transport, agriculture, natural resources, education, sanitation and health. IBRD projects. The IBRD has several projects under way, some of them for road construction. There are also loans for policy development, which have resulted in the improvement of the country's credit rating, which obtained investment grade in 2012. IFC projects. As of October 2017, the IFC is focusing on infrastructure, the financial sector and the agro-industry; its projects are valued at US$101 million. MIGA projects. In 2016, the Uruguayan subsidiary of Banco Santander received a guarantee of US$439 million from MIGA. 
Specific projects. FMD Emergency Recovery Project. In May 2001, Uruguay suffered a major crisis caused by an outbreak of foot-and-mouth disease (FMD), which affected about 8.3% of the cattle. The livestock sector makes up 6% of total GDP, so the outbreak affected the economy drastically. The IBRD contributed US$18.5 million to the FMD Emergency Recovery Project. The project consisted of treating the livestock with vaccines, training workers and running awareness campaigns. This resulted in the improvement of the country's sanitary status, and FMD was eradicated from the country. Furthermore, the country's economy was reactivated, and it is now globally recognized for effective food safety and surveillance. Since 1996 livestock production has grown 124%, which attests to the success of the project. Uruguay Water. Although Uruguay's economy is one of the largest in Latin America, its population is small (3 million people), so its public utilities are monopolies: once one provider is established, there is no competition to push for reform. Although the country has tried to make progress on infrastructure, piped sewerage coverage is still relatively low compared to Chile, Colombia, and Mexico. This problem has been addressed by the IBRD for the past two decades with three infrastructure investment loans, and it also extended Uruguay a Technical Assistance Loan. Thanks to this, Uruguay's OSE (the country's public water and sanitation utility) has slowly evolved; the supply of treated water has increased from 440,000 formula_0 per day in 1988 to 630,000 formula_0 per day in 2006. Moreover, the IBRD has financed 12,300 sewerage connections in 12 cities, covering 60,000 people. Thanks to the IBRD, OSE is now part of a system that includes an autonomous water and electricity regulator and a separate policy-making agency. Energy efficiency. 
Two decades ago Uruguayans paid little attention to energy efficiency, but in 2004 the Energy Efficiency Uruguay Project promoted using energy in a more efficient way. The World Bank loaned US$6.8 million for this project. The project changed the way Uruguayans use energy, since part of it involved education about energy efficiency. Results. Uruguay suffered a crisis in 2001 and 2002, which it overcame with the help of the World Bank. The bank provided Uruguay with lending and non-lending services that included measures in the areas of tax reform, the financial sector and capital market development. The bank made this lending easier by providing part of the aid in Uruguayan pesos from 2007. Thanks to the collaboration of the bank, Uruguay overcame its crisis and achieved macroeconomic stability. Poverty was reduced by nearly 39% between 2003 and 2009, and the share of the population living in extreme poverty fell from 3% to 1.3%. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "m^3" } ]
https://en.wikipedia.org/wiki?curid=58753418
58753466
Bendixson's inequality
In mathematics, Bendixson's inequality is a quantitative result in the field of matrices derived by Ivar Bendixson in 1902. The inequality puts limits on the imaginary and real parts of characteristic roots (eigenvalues) of real matrices. A special case of this inequality leads to the result that characteristic roots of a real symmetric matrix are always real. The inequality relating to the imaginary parts of characteristic roots of real matrices (Theorem I in ) is stated as: Let formula_0 be a real formula_1 matrix and formula_2. If formula_3 is any characteristic root of formula_4, then formula_5 If formula_4 is symmetric then formula_6 and consequently the inequality implies that formula_3 must be real. The inequality relating to the real parts of characteristic roots of real matrices (Theorem II in ) is stated as: Let formula_7 and formula_8 be the smallest and largest characteristic roots of formula_9, then formula_10. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
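Both bounds are easy to verify numerically. The sketch below (the matrix and its dimension are illustrative) checks Theorem I and Theorem II for an arbitrary real matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))            # an arbitrary real n x n matrix

# alpha = max over i, j of |a_ij - a_ji| / 2
alpha = np.max(np.abs(A - A.T)) / 2
eigenvalues = np.linalg.eigvals(A)

# Theorem I: |Im(lambda)| <= alpha * sqrt(n(n-1)/2) for every eigenvalue
bound = alpha * np.sqrt(n * (n - 1) / 2)
assert np.all(np.abs(eigenvalues.imag) <= bound)

# Theorem II: m <= Re(lambda) <= M, with m, M the extreme eigenvalues
# of the symmetric part (A + A.T)/2 (equal to (A + A^H)/2 for a real matrix)
sym_eigs = np.linalg.eigvalsh((A + A.T) / 2)
assert np.all(eigenvalues.real >= sym_eigs.min() - 1e-12)
assert np.all(eigenvalues.real <= sym_eigs.max() + 1e-12)
```

For a symmetric matrix, alpha is zero and the first bound forces every eigenvalue to be real, as stated in the text.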
[ { "math_id": 0, "text": "A = \\left ( a_{ij} \\right )" }, { "math_id": 1, "text": "n \\times n" }, { "math_id": 2, "text": "\\alpha = \\max_{{1\\leq i,j \\leq n}} \\frac{1}{2} \\left | a_{ij} - a_{ji} \\right |" }, { "math_id": 3, "text": "\\lambda" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "\\left | \\operatorname{Im} (\\lambda) \\right | \\le \\alpha \\sqrt{\\frac{n(n-1)} 2 }.\\,{} " }, { "math_id": 6, "text": "\\alpha = 0" }, { "math_id": 7, "text": "m" }, { "math_id": 8, "text": "M\n" }, { "math_id": 9, "text": "\\tfrac{A+A^H}{2}" }, { "math_id": 10, "text": "m \\leq\\operatorname{Re}(\\lambda) \\leq M" } ]
https://en.wikipedia.org/wiki?curid=58753466
58756817
Tanja Eisner
Ukrainian-born German mathematician Tatjana (Tanja) Eisner (née Lobova, born 1980) is a German and Ukrainian mathematician specializing in functional analysis and operator theory, as well as ergodic theory and its connection to number theory. She is a professor of mathematics at Leipzig University. Education and career. Eisner was born on 1 July 1980 in Kharkiv, but has German citizenship. She earned a diploma in applied mathematics in 2002 from the National University of Kharkiv, with a diploma thesis supervised by Anna Vishnyakova. She then earned a diploma in mathematics at the University of Tübingen in 2004, followed by a Ph.D. in 2007. Her dissertation, "Stability of Operators and formula_0-Semigroups", was supervised by Rainer Nagel. From 2007 to 2010, Eisner worked as a scientific assistant at the University of Tübingen. After her habilitation in 2010 in Tübingen she was an assistant professor at the University of Amsterdam from 2011 to 2013 before joining Leipzig University as a full professor in 2013. Books. Eisner is the author of the book "Stability of Operators and Operator Semigroups" (Operator Theory: Advances and Applications, Vol. 209, Birkhäuser, 2010). She is a coauthor of "Operator Theoretic Aspects of Ergodic Theory" (with Bálint Farkas, Markus Haase, Rainer Nagel, Graduate Texts in Mathematics 272, Springer, 2015). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C_0" } ]
https://en.wikipedia.org/wiki?curid=58756817
58756838
Wigner surmise
Scientific hypothesis in mathematical physics In mathematical physics, the Wigner surmise is a statement about the probability distribution of the spaces between points in the spectra of nuclei of heavy atoms, which have many degrees of freedom, or quantum systems with few degrees of freedom but chaotic classical dynamics. It was proposed by Eugene Wigner in probability theory. The surmise was a result of Wigner's introduction of random matrices in the field of nuclear physics. The surmise states that the probability density function for a normalized spacing is given by: formula_0 Here, formula_1 where "S" is a particular spacing and "D" is the mean distance between neighboring intervals. The above result is exact for formula_2 real symmetric matrices formula_3, with elements that are independent standard gaussian random variables, with joint distribution proportional to formula_4 In practice, it is a good approximation for the actual distribution for real symmetric matrices of any dimension. The corresponding result for complex hermitian matrices (which is also exact in the formula_2 case and a good approximation in general) with distribution proportional to formula_5, is given by formula_6 History. During the conference on Neutron Physics by Time-of-Flight, held at Gatlinburg, Tennessee, November 1 and 2, 1956, Wigner delivered a presentation on the theoretical arrangement of neighboring neutron resonances (with matching spin and parity) in heavy nuclei. In the presentation he gave the following guess:&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Perhaps I am now too courageous when I try to guess the distribution of the distances between successive levels (of energies of heavy nuclei). Theoretically, the situation is quite simple if one attacks the problem in a simpleminded fashion. The question is simply what are the distances of the characteristic values of a symmetric matrix with random coefficients. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
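The formula_2 case can be checked by direct simulation: sampling real symmetric matrices with the joint density shown above and comparing the empirical spacing distribution with the surmise (a sketch; the sample size and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000

# Joint density proportional to exp(-a^2/2 - c^2/2 - b^2):
# diagonal entries a, c ~ N(0, 1), off-diagonal entry b ~ N(0, 1/2).
a = rng.standard_normal(n_samples)
c = rng.standard_normal(n_samples)
b = rng.standard_normal(n_samples) / np.sqrt(2)

# The eigenvalue spacing of [[a, b], [b, c]] is sqrt((a - c)^2 + 4 b^2).
spacing = np.sqrt((a - c) ** 2 + 4 * b ** 2)
s = spacing / spacing.mean()               # normalized spacing s = S / D

# The surmise predicts P(s <= 1) = 1 - exp(-pi/4), about 0.544.
empirical = np.mean(s <= 1.0)
predicted = 1 - np.exp(-np.pi / 4)
# empirical and predicted agree to within sampling error
```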
[ { "math_id": 0, "text": "p_w(s) = \\frac{\\pi s}{2} e^{-\\pi s^2/4}." }, { "math_id": 1, "text": "s = \\frac{S}{D}" }, { "math_id": 2, "text": "2\\times 2" }, { "math_id": 3, "text": " M" }, { "math_id": 4, "text": "e^{-\\frac{1}{2}{\\rm Tr}(M^2)}=e^{-\\frac{1}{2}{\\rm Tr}\\left(\\begin{array}{cc}a & b \\\\b & c\n\\\\\\end{array}\\right)^2}=e^{-\\frac{1}{2}a^2-\\frac{1}{2}c^2-b^2}." }, { "math_id": 5, "text": "e^{-\\frac{1}{2}{\\rm Tr}(MM^\\dagger)}" }, { "math_id": 6, "text": "p_w(s) = \\frac{32 s^2}{\\pi^2} e^{-4s^2/\\pi}." } ]
https://en.wikipedia.org/wiki?curid=58756838
58759432
Edray Herber Goins
American mathematician Edray Herber Goins (born June 29, 1972, Los Angeles) is an American mathematician. He specializes in number theory and algebraic geometry. His interests include Selmer groups for elliptic curves using class groups of number fields, Belyi maps and Dessin d'enfants. Early life. Goins was born in Los Angeles in 1972. His mother, Eddi Beatrice Brown, was a teacher. He attended public schools in South Los Angeles and got his BSc in mathematics and physics in 1994 from California Institute of Technology, where he also received two prizes for mathematics. He completed his PhD in 1999 on “Elliptic Curves and Icosahedral Galois Representations” from Stanford University, under Daniel Bump and Karl Rubin. Career. He served for many years on the faculty of Purdue University. He has also served as visiting scholar at both the Institute for Advanced Study in Princeton, and Harvard. Goins took a position at Pomona College in 2018. His summers have focused on engaging underrepresented students in research in the mathematical sciences. He currently runs the NSF-funded Research Experience for Undergraduates (REU) "Pomona Research in Mathematics Experience (PRiME)", a program that Goins started in 2016 at Purdue University under the title "Purdue Research in Mathematics Experience (PRiME)". He is noted for his 2018 essay, "Three Questions: The Journey of One Black Mathematician". He was elected to the 2019 Class of Fellows of the Association for Women in Mathematics. From 2015 to 2020, Goins served as president of the National Association of Mathematicians (NAM). Mathematicians of the African Diaspora. In 1997 Scott W. Williams of the University at Buffalo, SUNY created the website Mathematicians of the African Diaspora (MAD) dedicated to promoting and highlighting the contributions of members of the African diaspora to mathematics, especially contributions to current mathematical research. 
Williams retired in 2008 and it was left to others to continue the website he had spent 11 years building. After an initial town hall meeting about the future of the MAD Pages which took place at a Conference for African American Researchers in the Mathematical Sciences (CAARMS), an informal group of mathematicians decided to work together to preserve Williams’ work. In 2015, the National Association of Mathematicians (NAM) formed an ad hoc committee to update the MAD Pages, consisting of Edray Goins as NAM President, Committee Co-Chairs Don King (Northeastern University) and Asamoah Nkwanta (Morgan State University), and web developer John Weaver (Varsity Software). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x^2 + 2^\\alpha5^\\alpha13^\\alpha = y^n" } ]
https://en.wikipedia.org/wiki?curid=58759432
587650
Pursuit curve
Class of curves traced by a point which follows another moving point In geometry, a curve of pursuit is a curve constructed by analogy to having a point or points representing pursuers and pursuees; the curve of pursuit is the curve traced by the pursuers. With the paths of the pursuer and pursuee parameterized in time, the pursuee is always on the pursuer's tangent. That is, given "F"("t"), the pursuer (follower), and "L"("t"), the pursued (leader), for every "t" with "F′"&hairsp;("t") ≠ 0 there is an "x" such that formula_0 History. The pursuit curve was first studied by Pierre Bouguer in 1732. In an article on navigation, Bouguer defined a curve of pursuit to explore the way in which one ship might maneuver while pursuing another. Leonardo da Vinci has occasionally been credited with first exploring curves of pursuit. However Paul J. Nahin, having traced such accounts as far back as the late 19th century, indicates that these anecdotes are unfounded. Single pursuer. The path followed by a single pursuer, following a pursuee that moves at constant speed on a line, is a radiodrome. It is a solution of the differential equation 1 + ("y′")² = "k"²("a" − "x")²("y′′")². Multiple pursuers. Typical drawings of curves of pursuit have each point acting as both pursuer and pursuee, inside a polygon, and having each pursuer pursue the adjacent point on the polygon. An example of this is the mice problem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
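The defining condition — the pursuer always heads straight at the pursuee's current position — translates directly into a small numerical integration. The sketch below (step size, speeds, and starting points are all illustrative) traces a pursuer chasing a leader that moves at constant speed along a line:

```python
import math

def pursue(leader, pursuer_speed, dt=1e-3, t_max=10.0, start=(0.0, 0.0)):
    """Integrate a pursuit curve with Euler steps: at each instant the
    pursuer moves straight toward the leader's current position.
    `leader` maps a time t to a point (x, y)."""
    px, py = start
    path = [(px, py)]
    steps = int(t_max / dt)
    for i in range(steps):
        lx, ly = leader(i * dt)
        dx, dy = lx - px, ly - py
        dist = math.hypot(dx, dy)
        if dist < pursuer_speed * dt:      # within one step: caught
            return path, i * dt
        px += pursuer_speed * dt * dx / dist
        py += pursuer_speed * dt * dy / dist
        path.append((px, py))
    return path, None

# Leader moves up the line x = 1 at unit speed; a faster pursuer
# starting at the origin eventually catches it.
path, t_caught = pursue(lambda t: (1.0, t), pursuer_speed=1.5)
```

With a speed ratio greater than one the pursuer closes the gap and the integration reports a finite capture time; with a ratio of one or less it returns `None`.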
[ { "math_id": 0, "text": "L(t) = F(t) + xF'\\!(t)." } ]
https://en.wikipedia.org/wiki?curid=587650
58765822
Hydrogen Intensity and Real-time Analysis eXperiment
The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) is an interferometric array of 1024 6-meter (20 ft) diameter radio telescopes, operating at 400–800 MHz, that will be deployed at the Square Kilometre Array site in the Karoo region of South Africa. The array is designed to measure redshifted 21-cm hydrogen line emission on large angular scales, in order to map out the baryon acoustic oscillations, and constrain models of dark energy and dark matter. The HIRAX collaboration is made up of over a dozen institutions, mainly from South Africa, the United States, and Canada, including the University of KwaZulu-Natal, the Durban University of Technology, the African Institute for Mathematical Sciences, the Botswana International University of Science and Technology, the University of the Western Cape, Rhodes University, the University of Cape Town, McGill University, the University of Toronto, the University of British Columbia, the Inter-University Centre for Astronomy and Astrophysics, Yale University, Caltech, Carnegie Mellon, the University of Wisconsin, West Virginia University, Oxford University, the Astroparticle and Cosmology Laboratory, the Nelson Mandela University, EPFL, the ETH Zurich, and the NASA Jet Propulsion Laboratory. It is funded by the National Research Foundation of South Africa, and by the partner institutions. The HIRAX array is named in reference to the hyrax, a local mammal, and in parallel to the neighboring MeerKAT radio telescope and its eponymous animal. Science goals. The nature of dark energy and dark matter is among the greatest unsolved mysteries in modern cosmology. It has been known since the late 1920s, with the discovery of Hubble's law, that the universe is expanding, but for most of the 20th century it was assumed that this was a decelerating expansion, following a hot Big Bang. 
However, in the late 1990s it was discovered that the expansion of the universe is in fact accelerating. Dark energy is the hypothesized form of energy which causes this acceleration; however, little is known about it beyond the fact that it must currently comprise approximately 70% of the energy density of the universe. Dark matter also plays a significant role in the growth of structures within the universe. It is believed to be a form of matter that interacts with the gravitational force, but not the electromagnetic force, and it is known to make up approximately 25% of the energy density of the universe, but its exact nature is not understood. The remaining 5% of the energy density of the universe is the baryonic matter which we can see; the stars, gas and dust that make up galaxies and galaxy clusters. HIRAX is designed to measure the effects of dark energy and dark matter on the dynamics of the universe over a long period of time (~4 billion years) to learn more about their nature. This is accomplished by looking at the 21-cm line emission produced by hot diffuse neutral hydrogen from distant galaxy clusters and from the intracluster medium. This neutral hydrogen traces out the large scale structures in the universe, and so can be used to map out the large scale Baryon Acoustic Oscillation (BAO) structure of the universe. The BAO are a fixed comoving size, and so they act as a standard ruler, marking the expansion of the universe over time, and therefore giving information about dark energy and dark matter. For example, if dark energy is not a cosmological constant, as the standard ΛCDM theory of cosmology predicts, then the rate of acceleration of the universe may not be constant over time. Due to the expansion of the universe, the 400–800 MHz operating band of the HIRAX instrument corresponds to redshifted 21-cm emission from formula_0 (7–11 billion years ago, when the universe was between 2.5 and 6.5 billion years old). 
This range encompasses the period when the standard ΛCDM cosmological model predicts that dark energy is beginning to affect the dynamics of the universe, causing it to transition from decelerating expansion to accelerating expansion. The HIRAX array will survey most of the southern sky to map out BAO, and its large field of view and large survey area will additionally make it a very powerful tool for detecting radio transient events. In particular, HIRAX will be extremely efficient at detecting Fast Radio Bursts (FRBs) and pulsars. FRBs are short (~1 ms) bright (~1 Jy) radio bursts, whose origins are completely unknown. Only approximately 612 had been detected as of 2021, but the HIRAX array expects to detect tens of FRBs per day. Pulsars are rapidly rotating neutron stars, whose rotation causes them to appear to emit radio frequency pulses at very regular rates. Precise measurements of the rates of their pulses could be used to detect gravitational waves, because the gravitational waves would distort the size of the space the pulses travel through, and thus their arrival times at Earth. The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a sister experiment to HIRAX. It has similar science objectives, but observes in the northern hemisphere, and has different instrumental systematics. The Canadian Hydrogen Observatory and Radio-transient Detector (CHORD) is a next-generation radio telescope, proposed for construction to start immediately. CHORD is a pan-Canadian project, designed to work with and build on the success of CHIME. It will act as a sister experiment to HIRAX. CHORD will incorporate CHIME’s best innovations alongside new Canadian technology. Small cylinders derived from the CHIME design and operating from 400–800 MHz will be deployed at remote outrigger sites and provide milli-arcsecond-level localization of radio transients. 
These will be complemented by focused arrays of 6-meter composite dishes at each site, instrumented with novel ultra-wideband (UWB) feeds, covering a 5:1 radio band from 300–1500 MHz. Instrument. The HIRAX array will consist of 1024 6-meter diameter parabolic dish reflectors with a field of view of 5–10°. The dishes will "not" be steered, but fixed in position, sweeping the sky as the Earth rotates. Every few months, they will be manually re-pointed in elevation to survey a new strip of the sky. The dishes are extremely deep, with an f-number of 0.23, to shield the feeds from ground pickup, and crosstalk from neighboring dishes in the array. The antennas have been optimized to have low loss and high reflectivity across the 400–800 MHz observing band of the telescope. Each dish is coupled to a single dual-polarization clover-leaf dipole antenna. The signal is amplified by a pair of low-noise amplifiers (LNAs), and transmitted to a centralized computation structure (the "back end") by means of fibre-optic links. At the back end the signal is amplified further by analog amplifier chains, then digitized and correlated with the signals from all other dishes to produce a single coherent image from the whole array. The digitization and frequency channelization operations will be performed by custom field programmable gate array (FPGA) boards, and the correlation will be run on a custom graphics processing unit (GPU) based high performance computing cluster. This correlation operation is extremely computationally expensive, and is the primary reason why such large interferometric arrays have not previously been fielded. In full array operation, HIRAX will be required to process 6.5 Tb of data per second, which is comparable to the total international internet bandwidth for the continent of Africa. 
This problem is made feasible by recent advances in GPU based computing, and by the regular spacing between the array elements, which lowers the computational difficulty from formula_1 to formula_2, where "n" is the number of elements in the array. Status. The HIRAX collaboration fielded an 8-element prototype array at the Hartebeesthoek Radio Astronomy Observatory (HartRAO) in 2017, which is used as a test bed for hardware and software development leading up to the construction of the full array at the South African Radio Astronomy Observatory (SARAO) site in the Karoo. Construction of a 128-element pathfinder array is slated to begin in 2024. The pathfinder array will then be expanded out to the full 1024-element array over the course of the following three years. The HartRAO 8-element array will be incorporated into the full array as an "outrigger" array, along with several others throughout southern Africa. These outriggers will dramatically improve the angular resolution of the HIRAX array, allowing it to localize FRB detections with sub-arcsecond precision. The University of KwaZulu-Natal, and the South African Department of Science and Technology and National Research Foundation announced the official launch of the HIRAX experiment in August 2018. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
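The gain from a regular grid can be illustrated in one dimension: for equally spaced antennas, summing the correlations over each redundant baseline is equivalent to an FFT-based autocorrelation, which replaces the formula_1 pairwise products with an formula_2 transform. The sketch below (antenna count and signals are illustrative; this is not the actual HIRAX pipeline) verifies that equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# One time sample of complex voltages from n antennas on a regular 1-D grid.
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Direct correlator: every antenna pair, O(n^2) products. On a regular
# grid, pairs with the same separation d ("redundant baselines") measure
# the same sky mode, so we sum the products per separation.
direct = np.array([sum(v[i + d] * np.conj(v[i]) for i in range(n - d))
                   for d in range(n)])

# Same quantities in O(n log n): zero-pad and apply the correlation
# theorem (inverse FFT of the padded power spectrum).
vp = np.concatenate([v, np.zeros(n)])
spectrum = np.fft.fft(vp)
fft_corr = np.fft.ifft(spectrum * np.conj(spectrum))[:n]

assert np.allclose(direct, fft_corr)
```

The d = 0 entry is the total power across the array; the remaining entries are the redundant-baseline visibilities.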
[ { "math_id": 0, "text": "0.8 < z < 2.5" }, { "math_id": 1, "text": "O(n^2)" }, { "math_id": 2, "text": "O(n \\log n)" } ]
https://en.wikipedia.org/wiki?curid=58765822
587678
Nerve complex
In topology, the nerve complex of a set family is an abstract complex that records the pattern of intersections between the sets in the family. It was introduced by Pavel Alexandrov and now has many variants and generalisations, among them the Čech nerve of a cover, which in turn is generalised by hypercoverings. It captures many of the interesting topological properties in an algorithmic or combinatorial way. Basic definition. Let formula_0 be a set of indices and formula_1 be a family of sets formula_2. The nerve of formula_1 is a set of finite subsets of the index set "formula_0". It contains all finite subsets formula_3 such that the intersection of the formula_4 whose subindices are in formula_5 is non-empty:"81" formula_6 In Alexandrov's original definition, the sets formula_2 are open subsets of some topological space formula_7. The set formula_8 may contain singletons (elements formula_9 such that formula_4 is non-empty), pairs (pairs of elements formula_10 such that formula_11), triplets, and so on. If formula_12, then any subset of formula_5 is also in formula_8, making formula_8 an abstract simplicial complex. Hence N(C) is often called the nerve complex of formula_1. The Čech nerve. Given an open cover formula_20 of a topological space formula_7, or more generally a cover in a site, we can consider the pairwise fibre products formula_21, which in the case of a topological space are precisely the intersections formula_22. The collection of all such intersections can be referred to as formula_23 and the triple intersections as formula_24. By considering the natural maps formula_25 and formula_26, we can construct a simplicial object formula_27 defined by formula_28, n-fold fibre product. This is the Čech nerve. By taking connected components we get a simplicial set, which we can realise topologically: formula_29. Nerve theorems. The nerve complex formula_8 is a simple combinatorial object. 
Often, it is much simpler than the underlying topological space (the union of the sets in formula_1). Therefore, a natural question is whether the topology of formula_8 is equivalent to the topology of formula_30. In general, this need not be the case. For example, one can cover any "n"-sphere with two contractible sets formula_15 and formula_16 that have a non-empty intersection, as in example 1 above. In this case, formula_8 is an abstract 1-simplex, which is similar to a line but not to a sphere. However, in some cases formula_8 does reflect the topology of "X". For example, if a circle is covered by three open arcs, intersecting in pairs as in Example 2 above, then formula_8 is a 2-simplex (without its interior) and it is homotopy-equivalent to the original circle. A nerve theorem (or nerve lemma) is a theorem that gives sufficient conditions on "C" guaranteeing that formula_8 reflects, in some sense, the topology of "formula_30". A functorial nerve theorem is a nerve theorem that is functorial in an appropriate sense, which is, for example, crucial in topological data analysis. Leray's nerve theorem. The basic nerve theorem of Jean Leray says that, if any intersection of sets in formula_1 is contractible (equivalently: for each finite formula_31 the set formula_32 is either empty or contractible; equivalently: "C" is a good open cover), then formula_8 is homotopy-equivalent to "formula_30". Borsuk's nerve theorem. There is a discrete version, which is attributed to Borsuk."81,&hairsp;Thm.4.4.4" Let "K1", ..., "Kn" be abstract simplicial complexes, and denote their union by "K". Let "Ui" = ||"Ki"|| be the geometric realization of "Ki", and denote the nerve of {"U1", ... , "Un" } by "N". If, for each nonempty formula_31, the intersection formula_32 is either empty or contractible, then "N" is homotopy-equivalent to "K". A stronger theorem was proved by Anders Bjorner. 
If, for each nonempty formula_31, the intersection formula_32 is either empty or ("k" − |"J"| + 1)-connected, then for every "j" ≤ "k", the "j"-th homotopy group of "N" is isomorphic to the "j"-th homotopy group of "K". In particular, "N" is "k"-connected if and only if "K" is "k"-connected. Čech nerve theorem. Another nerve theorem relates to the Čech nerve above: if formula_7 is compact and all intersections of sets in "C" are contractible or empty, then the space formula_29 is homotopy-equivalent to formula_7. Homological nerve theorem. The following nerve theorem uses the homology groups of intersections of sets in the cover. For each finite formula_31, denote formula_33 the "j"-th reduced homology group of formula_32. If "HJ,j" is the trivial group for all "J" in the "k"-skeleton of N("C") and for all "j" in {0, ..., "k"-dim("J")}, then N("C") is "homology-equivalent" to "X" in the following sense:
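For a finite family of sets, the basic definition of the nerve can be computed directly by enumerating index subsets. A minimal sketch (the cover below is an illustrative stand-in for three pairwise-intersecting arcs covering a circle, as in the three-arc example discussed above):

```python
from itertools import combinations

def nerve(cover):
    """Nerve of a finite cover: all index subsets J whose sets have a
    common point. `cover` maps an index to a set of points."""
    idx = list(cover)
    simplices = set()
    for r in range(1, len(idx) + 1):
        for J in combinations(idx, r):
            # J is a simplex iff the intersection over J is non-empty.
            if set.intersection(*(cover[j] for j in J)):
                simplices.add(frozenset(J))
    return simplices

# Three sets intersecting pairwise but with no triple intersection.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "a"}}
N = nerve(cover)
# N has the three vertices and three edges, but not {1, 2, 3}.
```

The result is the boundary of a triangle — homotopy-equivalent to the circle, consistent with the nerve theorems above.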
[ { "math_id": 0, "text": "I" }, { "math_id": 1, "text": "C" }, { "math_id": 2, "text": "(U_i)_{i\\in I}" }, { "math_id": 3, "text": "J\\subseteq I" }, { "math_id": 4, "text": "U_i" }, { "math_id": 5, "text": "J" }, { "math_id": 6, "text": "N(C) := \\bigg\\{J\\subseteq I: \\bigcap_{j\\in J}U_j \\neq \\varnothing, J \\text{ finite set} \\bigg\\}." }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "N(C)" }, { "math_id": 9, "text": "i \\in I" }, { "math_id": 10, "text": "i,j \\in I" }, { "math_id": 11, "text": "U_i \\cap U_j \\neq \\emptyset" }, { "math_id": 12, "text": "J \\in N(C)" }, { "math_id": 13, "text": "S^1" }, { "math_id": 14, "text": "C = \\{U_1, U_2\\}" }, { "math_id": 15, "text": "U_1" }, { "math_id": 16, "text": "U_2" }, { "math_id": 17, "text": "N(C) = \\{ \\{1\\}, \\{2\\}, \\{1,2\\} \\}" }, { "math_id": 18, "text": "C = \\{U_1, U_2, U_3\\}" }, { "math_id": 19, "text": "N(C) = \\{ \\{1\\}, \\{2\\}, \\{3\\}, \\{1,2\\}, \\{2,3\\}, \\{3,1\\} \\}" }, { "math_id": 20, "text": "C=\\{U_i: i\\in I\\}" }, { "math_id": 21, "text": "U_{ij}=U_i\\times_XU_j" }, { "math_id": 22, "text": "U_i\\cap U_j" }, { "math_id": 23, "text": "C\\times_X C" }, { "math_id": 24, "text": "C\\times_X C\\times_X C" }, { "math_id": 25, "text": "U_{ij}\\to U_i" }, { "math_id": 26, "text": "U_i\\to U_{ii}" }, { "math_id": 27, "text": "S(C)_\\bullet" }, { "math_id": 28, "text": "S(C)_n=C\\times_X\\cdots\\times_XC" }, { "math_id": 29, "text": "|S(\\pi_0(C))|" }, { "math_id": 30, "text": "\\bigcup C" }, { "math_id": 31, "text": "J\\subset I" }, { "math_id": 32, "text": "\\bigcap_{i\\in J} U_i" }, { "math_id": 33, "text": "H_{J,j} := \\tilde{H}_j(\\bigcap_{i\\in J} U_i)=" }, { "math_id": 34, "text": "\\tilde{H}_j(N(C)) \\cong \\tilde{H}_j(X)" }, { "math_id": 35, "text": "\\tilde{H}_{k+1}(N(C))\\not\\cong 0" }, { "math_id": 36, "text": "\\tilde{H}_{k+1}(X)\\not\\cong 0" } ]
https://en.wikipedia.org/wiki?curid=587678
5877457
Inviscid flow
Flow of fluids with zero viscosity (superfluids) In fluid dynamics, inviscid flow is the flow of an "inviscid fluid", which is a fluid with zero viscosity. The Reynolds number of inviscid flow approaches infinity as the viscosity approaches zero. When viscous forces are neglected, as in the case of inviscid flow, the Navier–Stokes equation can be simplified to a form known as the Euler equation. This simplified equation is applicable to inviscid flow as well as flow with low viscosity and a Reynolds number much greater than one. Using the Euler equation, many fluid dynamics problems involving low viscosity are easily solved. However, the assumption of negligible viscosity is no longer valid in the region of fluid near a solid boundary (the boundary layer) or, more generally, in regions with large velocity gradients, which are evidently accompanied by viscous forces. The flow of a superfluid is inviscid. Inviscid flows are broadly classified into potential flows (or, irrotational flows) and rotational inviscid flows. Prandtl hypothesis. Ludwig Prandtl developed the modern concept of the boundary layer. His hypothesis establishes that for fluids of low viscosity, shear forces due to viscosity are evident only in thin regions at the boundary of the fluid, adjacent to solid surfaces. Outside these regions, and in regions of favorable pressure gradient, viscous shear forces are absent, so the fluid flow field can be assumed to be the same as the flow of an inviscid fluid. By employing the Prandtl hypothesis it is possible to estimate the flow of a real fluid in regions of favorable pressure gradient by assuming inviscid flow and investigating the irrotational flow pattern around the solid body. Real fluids experience separation of the boundary layer and resulting turbulent wakes, but these phenomena cannot be modelled using inviscid flow. 
Separation of the boundary layer usually occurs where the pressure gradient reverses from favorable to adverse, so it is inaccurate to use inviscid flow to estimate the flow of a real fluid in regions of unfavorable pressure gradient. Reynolds number. The Reynolds number (Re) is a dimensionless quantity that is commonly used in fluid dynamics and engineering. Originally described by George Gabriel Stokes in 1850, it was popularized by Osborne Reynolds, after whom the concept was named by Arnold Sommerfeld in 1908. The Reynolds number is calculated as: formula_0 The value represents the ratio of inertial forces to viscous forces in a fluid, and is useful in determining the relative importance of viscosity. In inviscid flow, since the viscous forces are zero, the Reynolds number approaches infinity. When viscous forces are negligible, the Reynolds number is much greater than one. In such cases (Re ≫ 1), assuming inviscid flow can be useful in simplifying many fluid dynamics problems. Euler equations. In a 1757 publication, Leonhard Euler described a set of equations governing inviscid flow: formula_1 Assuming inviscid flow allows the Euler equation to be applied to flows in which viscous forces are insignificant. Some examples include flow around an airplane wing, upstream flow around bridge supports in a river, and ocean currents. Navier–Stokes equations. In 1845, George Gabriel Stokes published another important set of equations, today known as the Navier–Stokes equations. Claude-Louis Navier developed the equations first using molecular theory, which was further confirmed by Stokes using continuum theory. The Navier–Stokes equations describe the motion of fluids: formula_2 When the fluid is inviscid, or the viscosity can be assumed to be negligible, the Navier–Stokes equation simplifies to the Euler equation. This simplified equation is much easier to solve, and applies to many types of flow in which viscosity is negligible. 
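The Reynolds-number formula above is straightforward to evaluate numerically. A minimal sketch (the function name and the sample values, roughly water at room temperature, are illustrative assumptions, not from the source):

```python
def reynolds_number(l_c, v, rho, mu):
    """Re = l_c * v * rho / mu: characteristic length (m), flow speed (m/s),
    fluid density (kg/m^3), and dynamic viscosity (Pa*s)."""
    return l_c * v * rho / mu

# Water (rho ~ 998 kg/m^3, mu ~ 1.0e-3 Pa*s) flowing at 1 m/s past a 0.1 m body.
re = reynolds_number(0.1, 1.0, 998.0, 1.0e-3)
# Re >> 1, so away from solid boundaries an inviscid (Euler) model is reasonable.
```

As the viscosity argument tends to zero the returned value grows without bound, matching the statement that Re approaches infinity for inviscid flow.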
Some examples include flow around an airplane wing, upstream flow around bridge supports in a river, and ocean currents. The Navier–Stokes equation reduces to the Euler equation when formula_3. Another condition that leads to the elimination of viscous force is formula_4, and this results in an "inviscid flow arrangement". Such flows are found to be vortex-like. Solid boundaries. It is important to note that negligible viscosity can no longer be assumed near solid boundaries, as in the case of the airplane wing. In turbulent flow regimes (Re ≫ 1), viscosity can typically be neglected; however, this is only valid at distances far from solid interfaces. When considering flow in the vicinity of a solid surface, such as flow through a pipe or around a wing, it is convenient to categorize four distinct regions of flow near the surface: Although these distinctions can be a useful tool in illustrating the significance of viscous forces near solid interfaces, it is important to note that these regions are fairly arbitrary. Assuming inviscid flow can be a useful tool in solving many fluid dynamics problems; however, this assumption requires careful consideration of the fluid sublayers when solid boundaries are involved. Superfluids. A superfluid is a state of matter that exhibits frictionless flow with zero viscosity, also known as inviscid flow. To date, helium is the only fluid that has been discovered to exhibit superfluidity. Helium-4 becomes a superfluid once it is cooled to below 2.2 K, a point known as the lambda point. At temperatures above the lambda point, helium exists as a liquid exhibiting normal fluid dynamic behavior. Once it is cooled below 2.2 K, it begins to exhibit quantum behavior. For example, at the lambda point there is a sharp increase in heat capacity; as cooling continues, the heat capacity decreases with temperature. 
In addition, the thermal conductivity is very large, contributing to the excellent coolant properties of superfluid helium. Similarly, Helium-3 is found to become a superfluid at 2.491 mK. Applications. Spectrometers are kept at a very low temperature using helium as the coolant. This allows for minimal background flux in far-infrared readings. Some of the designs for the spectrometers may be simple, but even the frame is, at its warmest, below 20 kelvin. These devices are not commonly used as it is very expensive to use superfluid helium over other coolants. Superfluid helium has a very high thermal conductivity, which makes it very useful for cooling superconductors. Superconductors such as the ones used at the LHC (Large Hadron Collider) are cooled to temperatures of approximately 1.9 kelvin. This temperature allows the niobium-titanium magnets to reach a superconducting state. Without the use of superfluid helium, this temperature would not be possible. Using helium to cool to these temperatures is very expensive, and cooling systems that use alternative fluids are more numerous. Another application of superfluid helium is its use in understanding quantum mechanics. Using lasers to look at small droplets allows scientists to view behaviors that may not normally be viewable. This is due to all the helium in each droplet being in the same quantum state. This application does not have any practical uses by itself, but it helps us better understand quantum mechanics, which has its own applications.
[ { "math_id": 0, "text": "Re = {l_c v \\rho \\over \\mu}" }, { "math_id": 1, "text": "\\rho{D\\mathbf{v} \\over Dt} = -\\nabla p +\\rho \\mathbf{g}" }, { "math_id": 2, "text": "\\rho{D\\mathbf{v} \\over Dt} = -\\nabla p + \\mu \\nabla^2 \\mathbf{v} +\\rho \\mathbf{g}" }, { "math_id": 3, "text": "\\mu=0" }, { "math_id": 4, "text": "\\nabla^2\\mathbf{v}=0" } ]
https://en.wikipedia.org/wiki?curid=5877457
5878203
Schwartz–Bruhat function
In mathematics, a Schwartz–Bruhat function, named after Laurent Schwartz and François Bruhat, is a complex-valued function on a locally compact abelian group, such as the adeles, that generalizes a Schwartz function on a real vector space. A tempered distribution is defined as a continuous linear functional on the space of Schwartz–Bruhat functions. formula_34. The function formula_29 must also be locally constant, so formula_35 for some formula_27. (As for formula_29 evaluated at zero, formula_36 is always included as a term.) Properties. The Fourier transform of a Schwartz–Bruhat function on a locally compact abelian group is a Schwartz–Bruhat function on the Pontryagin dual group. Consequently, the Fourier transform takes tempered distributions on such a group to tempered distributions on the dual group. Given the (additive) Haar measure on formula_7, the Schwartz–Bruhat space formula_45 is dense in the space formula_46 Applications. In algebraic number theory, the Schwartz–Bruhat functions on the adeles can be used to give an adelic version of the Poisson summation formula from analysis, i.e., for every formula_47 one has formula_48, where formula_49. John Tate developed this formula in his doctoral thesis to prove a more general version of the functional equation for the Riemann zeta function. This involves giving the zeta function of a number field an integral representation in which a Schwartz–Bruhat function, chosen as a test function, is twisted by a certain character and integrated over formula_50 with respect to the multiplicative Haar measure of this group. This allows one to apply analytic methods to study zeta functions through these zeta integrals. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
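On formula_28, for example, a Schwartz–Bruhat function is a finite linear combination of indicator functions of balls a + p^k Z_p and can be evaluated at rational arguments through the p-adic valuation. A minimal sketch (the helper names are illustrative, not a standard API):

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """v_p(x) for a rational x; by convention v_p(0) = +infinity."""
    x = Fraction(x)
    if x == 0:
        return float("inf")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def schwartz_bruhat(terms, p):
    """f = sum_i c_i * 1_{a_i + p^{k_i} Z_p}: the point x lies in the ball
    a + p^k Z_p exactly when v_p(x - a) >= k."""
    def f(x):
        return sum(c for a, k, c in terms
                   if p_adic_valuation(Fraction(x) - Fraction(a), p) >= k)
    return f

# f = 1_{Z_3} + 2 * 1_{1/3 + 9 Z_3}: locally constant with compact support.
f = schwartz_bruhat([(0, 0, 1), (Fraction(1, 3), 2, 2)], p=3)
```

Each such function is constant on cosets of some p^k Z_p, mirroring the local constancy required of formula_29.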
[ { "math_id": 0, "text": "\\mathbb{R}^n" }, { "math_id": 1, "text": "\\mathcal{S}(\\mathbb{R}^n)" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "B" }, { "math_id": 5, "text": "A/B" }, { "math_id": 6, "text": "K" }, { "math_id": 7, "text": "\\mathbb{A}_K" }, { "math_id": 8, "text": "f" }, { "math_id": 9, "text": "\\prod_v f_v" }, { "math_id": 10, "text": "v" }, { "math_id": 11, "text": "f_v" }, { "math_id": 12, "text": "K_v" }, { "math_id": 13, "text": "f_v = \\mathbf{1}_{\\mathcal{O}_v}" }, { "math_id": 14, "text": "\\mathcal{O}_v" }, { "math_id": 15, "text": "\\bigotimes_v'\\mathcal{S}(K_v) := \\varinjlim_{E}\\left(\\bigotimes_{v \\in E}\\mathcal{S}(K_v) \\right) " }, { "math_id": 16, "text": "\\mathcal{S}(K_v)" }, { "math_id": 17, "text": "E" }, { "math_id": 18, "text": "f = \\otimes_vf_v" }, { "math_id": 19, "text": "f_v \\in \\mathcal{S}(K_v)" }, { "math_id": 20, "text": "f_v|_{\\mathcal{O}_v}=1" }, { "math_id": 21, "text": "x = (x_v)_v \\in \\mathbb{A}_K" }, { "math_id": 22, "text": "f(x) = \\prod_vf_v(x_v)" }, { "math_id": 23, "text": "f \\in \\mathcal{S}(\\mathbb{Q}_p)" }, { "math_id": 24, "text": " f = \\sum_{i = 1}^n c_i \\mathbf{1}_{a_i + p^{k_i}\\mathbb{Z}_p} " }, { "math_id": 25, "text": " a_i \\in \\mathbb{Q}_p " }, { "math_id": 26, "text": "k_i \\in \\mathbb{Z} " }, { "math_id": 27, "text": " c_i \\in \\mathbb{C} " }, { "math_id": 28, "text": " \\mathbb{Q}_p " }, { "math_id": 29, "text": " f " }, { "math_id": 30, "text": " \\operatorname{supp}(f) " }, { "math_id": 31, "text": " a + p^k \\mathbb{Z}_p " }, { "math_id": 32, "text": " a \\in \\mathbb{Q}_p " }, { "math_id": 33, "text": " k \\in \\mathbb{Z} " }, { "math_id": 34, "text": " \\operatorname{supp}(f) = \\coprod_{i = 1}^n (a_i + p^{k_i}\\mathbb{Z}_p) " }, { "math_id": 35, "text": " f |_{a_i + p^{k_i}\\mathbb{Z}_p} = c_i \\mathbf{1}_{a_i + p^{k_i}\\mathbb{Z}_p} " }, { "math_id": 36, "text": " f(0)\\mathbf{1}_{\\mathbb{Z}_p} " }, { "math_id": 37, "text": 
" \\mathbb{A}_{\\mathbb{Q}} " }, { "math_id": 38, "text": "\\mathcal{S}(\\mathbb{A}_{\\mathbb{Q}}) " }, { "math_id": 39, "text": " \\prod_{p \\le \\infty} f_p = f_\\infty \\times \\prod_{p < \\infty } f_p " }, { "math_id": 40, "text": " p " }, { "math_id": 41, "text": " f_\\infty \\in \\mathcal{S}(\\mathbb{R}) " }, { "math_id": 42, "text": " f_p \\in \\mathcal{S}(\\mathbb{Q}_p) " }, { "math_id": 43, "text": " f_p = \\mathbf{1}_{\\mathbb{Z}_p} " }, { "math_id": 44, "text": " \\mathbb{Z}_p " }, { "math_id": 45, "text": "\\mathcal{S}(\\mathbb{A}_K)" }, { "math_id": 46, "text": "L^2(\\mathbb{A}_K, dx)." }, { "math_id": 47, "text": " f \\in \\mathcal{S}(\\mathbb{A}_K) " }, { "math_id": 48, "text": " \\sum_{x \\in K} f(ax) = \\frac{1}{|a|}\\sum_{x \\in K} \\hat{f}(a^{-1}x) " }, { "math_id": 49, "text": " a \\in \\mathbb{A}_K^{\\times} " }, { "math_id": 50, "text": "\\mathbb{A}_K^{\\times}" } ]
https://en.wikipedia.org/wiki?curid=5878203
58785
Timeline of gravitational physics and relativity
The following is a timeline of gravitational physics and general relativity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E = mc^2" }, { "math_id": 1, "text": "m_{\\gamma} \\leq 4 \\times 10^{-51} \\text{kg}" }, { "math_id": 2, "text": "\\eta" }, { "math_id": 3, "text": "G" } ]
https://en.wikipedia.org/wiki?curid=58785
58786399
Distance set
Set of distances defined from a set of points In geometry, the distance set of a collection of points is the set of distances between distinct pairs of points. Thus, it can be seen as the generalization of a difference set, the set of distances (and their negations) in collections of numbers. Several problems and results in geometry concern distance sets, usually based on the principle that a large collection of points must have a large distance set (for varying definitions of "large"): Distance sets have also been used as a shape descriptor in computer vision. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
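For a finite point collection the distance set can be computed directly from the definition; a minimal sketch (the function name is illustrative):

```python
from itertools import combinations
from math import dist

def distance_set(points):
    """Set of distances between distinct pairs of points."""
    return {dist(p, q) for p, q in combinations(points, 2)}

# The four vertices of a unit square give 6 pairs of points but only
# 2 distinct distances: the side length 1 and the diagonal sqrt(2).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

This illustrates the principle in reverse: a highly structured point set can have a much smaller distance set than the number of pairs.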
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "d/2" }, { "math_id": 2, "text": "2d" } ]
https://en.wikipedia.org/wiki?curid=58786399
5880700
Repeat-accumulate code
Class of error correction codes In computer science, repeat-accumulate codes (RA codes) are a low-complexity class of error-correcting codes. They were devised so that their ensemble weight distributions are easy to derive. RA codes were introduced by Divsalar "et al." In an RA code, an information block of length formula_0 is repeated formula_1 times, scrambled by an interleaver of size formula_2, and then encoded by a rate 1 accumulator. The accumulator can be viewed as a truncated rate 1 recursive convolutional encoder with transfer function formula_3, but Divsalar "et al." prefer to think of it as a block code whose input block formula_4 and output block formula_5 are related by the formula formula_6 and formula_7 for formula_8. The encoding time for RA codes is linear and their rate is formula_9. They are nonsystematic. Irregular repeat accumulate codes. Irregular repeat accumulate (IRA) codes build on top of the ideas of RA codes. IRA codes replace the outer repetition code of an RA code with a low-density generator matrix code. An IRA code first repeats information bits a varying number of times, and then accumulates subsets of these repeated bits to generate parity bits. The irregular degree profile on the information nodes, together with the degree profile on the check nodes, can be designed using density evolution. Systematic IRA codes are considered a form of LDPC code. Litigation over whether the DVB-S2 LDPC code is a form of IRA code is ongoing. US patents 7,116,710; 7,421,032; 7,916,781; and 8,284,833 are at issue. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
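The encoding chain just described — repeat formula_1 times, permute with an interleaver of size formula_2, then accumulate modulo 2 — can be sketched as follows (the function name and the seeded random interleaver are illustrative):

```python
from itertools import accumulate
import random

def ra_encode(info_bits, q, interleaver):
    """Rate-1/q repeat-accumulate encoding of an information block."""
    repeated = [b for b in info_bits for _ in range(q)]   # length q*N
    scrambled = [repeated[i] for i in interleaver]        # permutation of {0, ..., q*N - 1}
    # Accumulator: x_1 = z_1 and x_i = x_{i-1} + z_i (mod 2), i.e. a running XOR.
    return [s % 2 for s in accumulate(scrambled)]

N, q = 4, 3
interleaver = list(range(N * q))
random.Random(0).shuffle(interleaver)                     # fixed seed, reproducible
codeword = ra_encode([1, 0, 1, 1], q, interleaver)        # 12 output bits, rate 1/3
```

Each step is a linear pass over the block, matching the statement that encoding time is linear.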
[ { "math_id": 0, "text": "{N}" }, { "math_id": 1, "text": "{q}" }, { "math_id": 2, "text": "{qN}" }, { "math_id": 3, "text": "{1/(1 + D)}" }, { "math_id": 4, "text": "{(z_1, \\ldots , z_n)}" }, { "math_id": 5, "text": "{(x_1, \\ldots , x_n)}" }, { "math_id": 6, "text": "{x_1 = z_1}" }, { "math_id": 7, "text": "x_i = x_{i-1}+z_i" }, { "math_id": 8, "text": "i > 1" }, { "math_id": 9, "text": "1/q" } ]
https://en.wikipedia.org/wiki?curid=5880700
5880890
Plasma parameter
The plasma parameter is a dimensionless number, denoted by capital Lambda, Λ. The plasma parameter is usually interpreted to be the argument of the Coulomb logarithm, which is the ratio of the maximum impact parameter to the classical distance of closest approach in Coulomb scattering. In this case, the plasma parameter is given by: formula_0 where This expression is typically valid for a plasma in which ion thermal velocities are much less than electron thermal velocities. A detailed discussion of the Coulomb logarithm is available in the "NRL Plasma Formulary", pages 34–35. Note that the word parameter is usually used in plasma physics to refer to bulk plasma properties in general: see plasma parameters. An alternative definition of this parameter is given by the average number of electrons in a plasma contained within a Debye sphere (a sphere whose radius is the Debye length). This definition of the plasma parameter is more frequently (and appropriately) called the Debye number, and is denoted formula_1. In this context, the plasma parameter is defined as formula_2 Since these two definitions differ only by a factor of three, they are frequently used interchangeably. Often the factor of formula_3 is dropped. When the Debye length is given by formula_4, the plasma parameter is given by formula_5 where Confusingly, some authors define the plasma parameter as: formula_6 Coupling parameter. A closely related parameter is the plasma coupling formula_7, defined as the ratio of the Coulomb energy to the thermal one: formula_8 The Coulomb energy (per particle) is formula_9 where the typical inter-particle distance formula_10 is usually taken to be the Wigner–Seitz radius. Therefore, formula_11 Clearly, up to a numeric factor of the order of unity, formula_12 In general, for multicomponent plasmas one defines the coupling parameter for each species "s" separately: formula_13 Here, "s" stands for either electrons or (a type of) ions. The ideal plasma approximation. 
One of the criteria which determine whether a collection of charged particles can rigorously be termed an ideal plasma is that Λ ≫ 1. When this is the case, collective electrostatic interactions dominate over binary collisions, and the plasma particles can be treated as if they only interact with a smooth background field, rather than through pairwise interactions (collisions). The equation of state of each species in an ideal plasma is that of an ideal gas. Plasma properties and Λ. Depending on the magnitude of Λ, plasma properties can be characterized as follows: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
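In SI units the quantities above are straightforward to compute. A minimal sketch (the function names and the sample values — a hot, dilute laboratory plasma — are illustrative assumptions):

```python
from math import pi, sqrt

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
K_B = 1.380649e-23       # Boltzmann constant (J/K)
Q_E = 1.602176634e-19    # elementary charge (C)

def debye_length(n_e, T_e):
    """lambda_D = sqrt(eps0 * k * T_e / (n_e * q_e^2))."""
    return sqrt(EPS0 * K_B * T_e / (n_e * Q_E ** 2))

def plasma_parameter(n_e, T_e):
    """Lambda = 4 * pi * n_e * lambda_D^3; the Debye number is Lambda / 3."""
    return 4 * pi * n_e * debye_length(n_e, T_e) ** 3

def coupling_parameter(n_e, T_e):
    """Gamma: Coulomb energy at the Wigner-Seitz radius over the thermal energy."""
    r = (3 / (4 * pi * n_e)) ** (1 / 3)  # Wigner-Seitz radius
    return Q_E ** 2 / (4 * pi * EPS0 * r * K_B * T_e)

# Illustrative values: n_e = 1e20 m^-3, T_e ~ 1 keV (about 1.16e7 K).
Lam = plasma_parameter(1e20, 1.16e7)
Gam = coupling_parameter(1e20, 1.16e7)
# Lam >> 1 (ideal plasma) while Gam << 1 (weak coupling), and
# Gam is of order Lam**(-2/3) as stated above.
```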
[ { "math_id": 0, "text": "\\Lambda = 4\\pi n_\\text{e}\\lambda_\\text{D}^3" }, { "math_id": 1, "text": "N_\\text{D}" }, { "math_id": 2, "text": "N_\\text{D} = \\frac{4\\pi}{3} n_\\text{e}\\lambda_\\text{D}^3 = \\frac{1}{3}\\Lambda" }, { "math_id": 3, "text": "\\frac{4\\pi}{3}" }, { "math_id": 4, "text": "\\lambda_\\text{D} = \\sqrt{\\frac{\\epsilon_0 kT_\\text{e}}{n_\\text{e}q_\\text{e}^2}}" }, { "math_id": 5, "text": "N_\\text{D} = \\frac{(\\epsilon_0 kT_\\text{e})^\\frac{3}{2}}{q_\\text{e}^3 {n_\\text{e}}^\\frac{1}{2}}" }, { "math_id": 6, "text": "\\epsilon_p = \\Lambda^{-1}\\ ." }, { "math_id": 7, "text": "\\Gamma" }, { "math_id": 8, "text": "\\Gamma = \\frac{E_\\text{C}}{kT_\\text{e}}." }, { "math_id": 9, "text": "E_\\text{C} = \\frac{q_\\text{e}^2}{4\\pi\\epsilon_0\\langle r \\rangle}," }, { "math_id": 10, "text": "\\langle r \\rangle" }, { "math_id": 11, "text": "\\Gamma = \\frac{q_\\text{e}^2}{4\\pi\\epsilon_0 kT_\\text{e}}\\sqrt[3]{\\frac{4\\pi n_\\text{e}}{3}}." }, { "math_id": 12, "text": "\\Gamma \\sim \\Lambda^{-\\frac{2}{3}}\\ ." }, { "math_id": 13, "text": "\\Gamma_s = \\frac{q_s^2}{4\\pi\\epsilon_0 kT_s}\\sqrt[3]{\\frac{4\\pi n_s}{3}}." } ]
https://en.wikipedia.org/wiki?curid=5880890
588260
Kakeya set
Shape containing unit line segments in all directions In mathematics, a Kakeya set, or Besicovitch set, is a set of points in Euclidean space which contains a unit line segment in every direction. For instance, a disk of radius 1/2 in the Euclidean plane, or a ball of radius 1/2 in three-dimensional space, forms a Kakeya set. Much of the research in this area has studied the problem of how small such sets can be. Besicovitch showed that there are Besicovitch sets of measure zero. A Kakeya needle set (sometimes also known as a Kakeya set) is a (Besicovitch) set in the plane with a stronger property, that a unit line segment can be rotated continuously through 180 degrees within it, returning to its original position with reversed orientation. Again, the disk of radius 1/2 is an example of a Kakeya needle set. Kakeya needle problem. The Kakeya needle problem asks whether there is a minimum area of a region formula_0 in the plane, in which a needle of unit length can be turned through 360°. This question was first posed, for convex regions, by Sōichi Kakeya (1917). The minimum area for convex sets is achieved by an equilateral triangle of height 1 and area 1/√3, as Pál showed. Kakeya seems to have suggested that the Kakeya set formula_0 of minimum area, without the convexity restriction, would be a three-pointed deltoid shape. However, this is false; there are smaller non-convex Kakeya sets. Besicovitch needle sets. Besicovitch was able to show that there is no lower bound &gt; 0 for the area of such a region formula_0, in which a needle of unit length can be turned around. That is, for every formula_1, there is a region of area formula_2 within which the needle can move through a continuous motion that rotates it a full 360 degrees. This built on earlier work of his, on plane sets which contain a unit segment in each orientation. Such a set is now called a Besicovitch set. Besicovitch's work showing such a set could have arbitrarily small measure was from 1919. 
The problem may have been considered by analysts before that. One method of constructing a Besicovitch set (see figure for corresponding illustrations) is known as a "Perron tree" after Oskar Perron, who was able to simplify Besicovitch's original construction. The precise construction and numerical bounds are given in Besicovitch's popularization. The first observation to make is that the needle can move in a straight line as far as it wants without sweeping any area. This is because the needle is a zero-width line segment. The second trick of Pál, known as Pál joins, describes how to move the needle between any two locations that are parallel while sweeping negligible area. The needle will follow the shape of an "N". It moves from the first location some distance formula_3 up the left of the "N", sweeps out the angle to the middle diagonal, moves down the diagonal, sweeps out the second angle, and then moves up the parallel right side of the "N" until it reaches the required second location. The only non-zero-area regions swept are the two triangles of height one and the angle at the top of the "N". The swept area is proportional to this angle, which is proportional to formula_4. The construction starts with any triangle with height 1 and some substantial angle at the top through which the needle can easily sweep. The goal is to do many operations on this triangle to make its area smaller while keeping the directions through which the needle can sweep the same. First consider dividing the triangle in two and translating the pieces over each other so that their bases overlap in a way that minimizes the total area. The needle is able to sweep out the same directions by sweeping out those given by the first triangle, jumping over to the second, and then sweeping out the directions given by the second. The needle can jump triangles using the "N" technique because the two lines at which the original triangle was cut are parallel. 
Now, suppose we divide our triangle into 2^n subtriangles. The figure shows eight. For each consecutive pair of triangles, perform the same overlapping operation we described before to get half as many new shapes, each consisting of two overlapping triangles. Next, overlap consecutive pairs of these new shapes by shifting them so that their bases overlap in a way that minimizes the total area. Repeat this n times until there is only one shape. Again, the needle is able to sweep out the same directions by sweeping those out in each of the 2^n subtriangles in order of their direction. The needle can jump consecutive triangles using the "N" technique because the two lines at which these triangles were cut are parallel. What remains is to compute the area of the final shape. The proof is too hard to present here. Instead, we will just argue how the numbers might go. Looking at the figure, one sees that the 2^n subtriangles overlap a lot. All of them overlap at the bottom, half of them at the bottom of the left branch, a quarter of them at the bottom of the left left branch, and so on. Suppose that the area of each shape created with i merging operations from 2^i subtriangles is bounded by A_i. Before merging two of these shapes, they have area bounded by 2A_i. Then we move the two shapes together in the way that overlaps them as much as possible. In the worst case, these two regions are two 1 by ε rectangles perpendicular to each other, so that they overlap in an area of only ε^2. But the two shapes that we have constructed, if long and skinny, point in much the same direction because they are made from consecutive groups of subtriangles. The hand-waving step asserts that they overlap by at least 1% of their area, so the merged area would be bounded by A_{i+1} = 1.99·A_i. The area of the original triangle is bounded by 1. Hence, the area of each subtriangle is bounded by A_0 = 2^−n and the final shape has area bounded by A_n = 1.99^n × 2^−n. 
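This final bound, 1.99^n × 2^−n = 0.995^n, shrinks geometrically to zero as n grows. A quick numerical check (purely illustrative):

```python
def area_bound(n):
    """Upper bound on the final Perron-tree area after n merging rounds:
    A_n = 1.99**n * 2**(-n) = 0.995**n, starting from A_0 = 2**(-n) per
    subtriangle and growing by a factor of 1.99 at each of the n merges."""
    return 0.995 ** n

# area_bound(0) is the whole triangle's bound; the bound decays with n.
```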
In actuality, a careful summing of all the areas that do not overlap gives that the area of the final region is much bigger, namely 1/"n". As "n" grows, this area shrinks to zero. A Besicovitch set can be created by combining six rotations of a Perron tree created from an equilateral triangle. A similar construction can be made with parallelograms. There are other methods for constructing Besicovitch sets of measure zero aside from the 'sprouting' method. For example, Kahane uses Cantor sets to construct a Besicovitch set of measure zero in the two-dimensional plane. In 1941, H. J. Van Alphen showed that there are arbitrarily small Kakeya needle sets inside a circle with radius 2 + ε (arbitrary ε &gt; 0). Simply connected Kakeya needle sets with smaller area than the deltoid were found in 1965. Melvin Bloom and I. J. Schoenberg independently presented Kakeya needle sets with areas approaching formula_5, the Bloom–Schoenberg number. Schoenberg conjectured that this number is the lower bound for the area of simply connected Kakeya needle sets. However, in 1971, F. Cunningham showed that, given ε &gt; 0, there is a simply connected Kakeya needle set of area less than ε contained in a circle of radius 1. Although there are Kakeya needle sets of arbitrarily small positive measure and Besicovitch sets of measure 0, there are no Kakeya needle sets of measure 0. Kakeya conjecture. Statement. The same question of how small these Besicovitch sets could be was then posed in higher dimensions, giving rise to a number of conjectures known collectively as the "Kakeya conjectures", which have helped initiate the field of mathematics known as geometric measure theory. In particular, if there exist Besicovitch sets of measure zero, could they also have s-dimensional Hausdorff measure zero for some dimension s less than the dimension of the space in which they lie? 
This question gives rise to the following conjecture: Kakeya set conjecture: Define a "Besicovitch set" in R"n" to be a set which contains a unit line segment in every direction. Is it true that such sets necessarily have Hausdorff dimension and Minkowski dimension equal to "n"? This is known to be true for "n" = 1, 2, but only partial results are known in higher dimensions. Kakeya maximal function. A modern way of approaching this problem is to consider a particular type of maximal function, which we construct as follows: Let S"n"−1 ⊂ R"n" denote the unit sphere in "n"-dimensional space. Define formula_6 to be the cylinder of length 1, radius δ &gt; 0, centered at the point "a" ∈ R"n", and whose long side is parallel to the direction of the unit vector "e" ∈ S"n"−1. Then for a locally integrable function "f", we define the Kakeya maximal function of "f" to be formula_7 where "m" denotes the "n"-dimensional Lebesgue measure. Notice that formula_8 is defined for vectors "e" in the sphere S"n"−1. Then there is a conjecture for these functions that, if true, will imply the Kakeya set conjecture for higher dimensions: Kakeya maximal function conjecture: For all ε &gt; 0, there exists a constant "Cε" &gt; 0 such that for any function "f" and all δ &gt; 0, (see "L""p" space for notation) formula_9 Results. Some results toward proving the Kakeya conjecture are the following: Applications to analysis. Somewhat surprisingly, these conjectures have been shown to be connected to a number of questions in other fields, notably in harmonic analysis. For instance, in 1971, Charles Fefferman was able to use the Besicovitch set construction to show that in dimensions greater than 1, truncated Fourier integrals taken over balls centered at the origin with radii tending to infinity need not converge in "L""p" norm when "p" ≠ 2 (this is in contrast to the one-dimensional case where such truncated integrals do converge). Analogues and generalizations of the Kakeya problem. 
Sets containing circles and spheres. Analogues of the Kakeya problem include considering sets containing more general shapes than lines, such as circles. Sets containing "k"-dimensional disks. A generalization of the Kakeya conjecture is to consider sets that contain, instead of segments of lines in every direction, portions, say, of "k"-dimensional subspaces. Define an ("n", "k")-Besicovitch set "K" to be a compact set in R"n" of Lebesgue measure zero containing a translate of every "k"-dimensional unit disk. That is, if "B" denotes the unit ball centered at zero, for every "k"-dimensional subspace "P", there exists "x" ∈ R"n" such that ("P" ∩ "B") + "x" ⊆ "K". Hence, an ("n", 1)-Besicovitch set is the standard Besicovitch set described earlier. The ("n", "k")-Besicovitch conjecture: There are no ("n", "k")-Besicovitch sets for "k" &gt; 1. In 1979, Marstrand proved that there were no (3, 2)-Besicovitch sets. At around the same time, however, Falconer proved that there were no ("n", "k")-Besicovitch sets for 2"k" &gt; "n". The best bound to date is by Bourgain, who proved that no such sets exist when 2^("k"−1) + "k" &gt; "n". Kakeya sets in vector spaces over finite fields. In 1999, Wolff posed the finite field analogue to the Kakeya problem, in hopes that the techniques for solving this conjecture could be carried over to the Euclidean case. Finite Field Kakeya Conjecture: Let F be a finite field, let "K" ⊆ F"n" be a Kakeya set, i.e. for each vector "y" ∈ F"n" there exists "x" ∈ F"n" such that "K" contains a line {"x" + "ty" : "t" ∈ F}. Then the set "K" has size at least "cn"|F|"n" where "cn" &gt; 0 is a constant that only depends on "n". Zeev Dvir proved this conjecture in 2008, showing that the statement holds for "cn" = 1/"n"!. In his proof, he observed that any polynomial in "n" variables of degree less than |F| vanishing on a Kakeya set must be identically zero. 
On the other hand, the polynomials in "n" variables of degree less than |F| form a vector space of dimension formula_13 Therefore, there is at least one non-trivial polynomial of degree less than |F| that vanishes on any given set with less than this number of points. Combining these two observations shows that Kakeya sets must have at least |F|"n"/"n"! points. It is not clear whether the techniques will extend to proving the original Kakeya conjecture but this proof does lend credence to the original conjecture by making essentially algebraic counterexamples unlikely. Dvir has written a survey article on progress on the finite field Kakeya problem and its relationship to randomness extractors. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
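Dvir's bound can be checked against an explicit small example. The classical parabola-based construction in a plane over F_p contains a line in every direction while using only about half the points of the plane; the sketch below (illustrative code, not from the source; p = 7 is an arbitrary small odd prime) verifies the Kakeya property and compares the size with the bound:

```python
p = 7                                       # a small odd prime; the field is Z/pZ
F = range(p)
squares = {(x * x) % p for x in F}          # quadratic residues mod p (including 0)
inv4 = pow(4, p - 2, p)                     # inverse of 4 mod p (Fermat's little theorem)

# Parabola construction: points (x, y) with x^2 - y a square, plus one vertical line.
K = {(x, y) for x in F for y in F if (x * x - y) % p in squares}
K |= {(0, y) for y in F}                    # covers the vertical direction (0, 1)

def contains_line(S, base, direction):
    """Does S contain the full line {base + t*direction : t in F}?"""
    (bx, by), (dx, dy) = base, direction
    return all(((bx + t * dx) % p, (by + t * dy) % p) in S for t in F)

# For direction (1, t) the line through (0, -t^2/4) lies in K, because
# s^2 - (s*t - t^2/4) = (s - t/2)^2 is always a square mod p.
is_kakeya = (all(contains_line(K, (0, (-t * t * inv4) % p), (1, t)) for t in F)
             and contains_line(K, (0, 0), (0, 1)))
```

Here the set has roughly p^2/2 points: well above Dvir's |F|^2/2! lower bound for n = 2, yet a proper subset of the plane.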
[ { "math_id": 0, "text": "D" }, { "math_id": 1, "text": "\\varepsilon>0" }, { "math_id": 2, "text": "\\varepsilon" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "1/r" }, { "math_id": 5, "text": "\\tfrac{\\pi}{24}(5 - 2\\sqrt{2})" }, { "math_id": 6, "text": "T_{e}^{\\delta}(a)" }, { "math_id": 7, "text": " f_{*}^{\\delta}(e)=\\sup_{a\\in\\mathbf{R}^{n}}\\frac{1}{m(T_{e}^{\\delta}(a))}\\int_{T_{e}^{\\delta}(a)}|f(y)|dm(y)" }, { "math_id": 8, "text": "f_{*}^{\\delta}" }, { "math_id": 9, "text": " \\left \\|f_{*}^{\\delta} \\right \\|_{L^n(\\mathbf{S}^{n-1})} \\leqslant C_{\\epsilon} \\delta^{-\\epsilon}\\|f\\|_{L^n(\\mathbf{R}^{n})}. " }, { "math_id": 10, "text": "(2-\\sqrt{2})(n-4)+3" }, { "math_id": 11, "text": "5/2+\\epsilon" }, { "math_id": 12, "text": "\\epsilon>0" }, { "math_id": 13, "text": "{|\\mathbf{F}|+n-1\\choose n}\\ge \\frac{|\\mathbf{F}|^n}{n!}." } ]
https://en.wikipedia.org/wiki?curid=588260
58829
Market capitalization
Total value of a public company's outstanding shares Market capitalization, sometimes referred to as market cap, is the total value of a publicly traded company's outstanding common shares owned by stockholders. Market capitalization is equal to the market price per common share multiplied by the number of common shares outstanding. Description. Market capitalization is sometimes used to rank the size of companies. It measures only the equity component of a company's capital structure, and does not reflect management's decision as to how much debt (or leverage) is used to finance the firm. A more comprehensive measure of a firm's size is enterprise value (EV), which gives effect to outstanding debt, preferred stock, and other factors. For insurance firms, a value called the embedded value (EV) has been used. It is also used in ranking the relative size of stock exchanges, being a measure of the sum of the market capitalizations of all companies listed on each stock exchange. The total capitalization of stock markets or economic regions may be compared with other economic indicators (e.g. the Buffett indicator). The total market capitalization of all publicly traded companies in 2020 was approximately US$93 trillion. Historical estimates of world market cap. Total market capitalization of all publicly traded companies in the world from 1975 to 2020. Calculation. Market cap is given by the formula formula_0, where "MC" is the market capitalization, "N" is the number of common shares outstanding, and "P" is the market price per common share. For example, if a company has 4 million common shares outstanding and the closing price per share is $20, its market capitalization is then $80 million. If the closing price per share rises to $21, the market cap becomes $84 million. If it drops to $19 per share, the market cap falls to $76 million. This is in contrast to mercantile pricing where purchase price, average price and sale price may differ due to transaction costs. 
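The calculation is a single multiplication. A minimal sketch (the function name is illustrative, not from the article) reproducing the example above:

```python
def market_cap(shares_outstanding: int, price_per_share: float) -> float:
    """Market capitalization: MC = N x P."""
    return shares_outstanding * price_per_share

# 4 million common shares outstanding at $20, $21 and $19 per share:
for price in (20, 21, 19):
    print(f"${market_cap(4_000_000, price):,.0f}")  # $80,000,000 / $84,000,000 / $76,000,000
```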
Not all of the outstanding shares trade on the open market. The number of shares trading on the open market is called the float. It is equal to or less than "N" because "N" includes shares that are restricted from trading. The free-float market cap uses just the floating number of shares in the calculation, generally resulting in a smaller number. Market cap terms. Traditionally, companies were divided into large-cap, mid-cap, and small-cap. The terms mega-cap and micro-cap have since come into common use, and nano-cap is sometimes heard. Large caps have a slow growth rate as compared to small caps. Different numbers are used by different indexes; there is no official definition of, or full consensus agreement about, the exact cutoff values. The cutoffs may be defined as percentiles rather than in nominal dollars. The definitions expressed in nominal dollars need to be adjusted over decades due to inflation, population change, and overall market valuation (for example, $1 billion was a large market cap in 1950, but it is not very large now), and market caps are likely to be different country to country. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\text{MC} = N \\times P " } ]
https://en.wikipedia.org/wiki?curid=58829
588356
Borel subgroup
In the theory of algebraic groups, a Borel subgroup of an algebraic group "G" is a maximal Zariski closed and connected solvable algebraic subgroup. For example, in the general linear group "GLn" ("n x n" invertible matrices), the subgroup of invertible upper triangular matrices is a Borel subgroup. For groups realized over algebraically closed fields, there is a single conjugacy class of Borel subgroups. Borel subgroups are one of the two key ingredients in understanding the structure of simple (more generally, reductive) algebraic groups, in Jacques Tits' theory of groups with a ("B", "N") pair. Here the group "B" is a Borel subgroup and "N" is the normalizer of a maximal torus contained in "B". The notion was introduced by Armand Borel, who played a leading role in the development of the theory of algebraic groups. Parabolic subgroups. Subgroups between a Borel subgroup "B" and the ambient group "G" are called parabolic subgroups. Parabolic subgroups "P" are also characterized, among algebraic subgroups, by the condition that "G"/"P" is a complete variety. Working over algebraically closed fields, the Borel subgroups turn out to be the minimal parabolic subgroups in this sense. Thus "B" is a Borel subgroup when the homogeneous space "G/B" is a complete variety which is "as large as possible". For a simple algebraic group "G", the set of conjugacy classes of parabolic subgroups is in bijection with the set of all subsets of nodes of the corresponding Dynkin diagram; the Borel subgroup corresponds to the empty set and "G" itself to the set of all nodes. (In general, each node of the Dynkin diagram determines a simple negative root and thus a one-dimensional 'root group' of "G". A subset of the nodes thus yields a parabolic subgroup, generated by "B" and the corresponding negative root groups. Moreover, any parabolic subgroup is conjugate to such a parabolic subgroup.) 
The corresponding subgroups of the Weyl group of "G" are also called parabolic subgroups, see Parabolic subgroup of a reflection group. Example. Let formula_0. A Borel subgroup formula_1 of formula_2 is the set of upper triangular matrices formula_3 and the maximal proper parabolic subgroups of formula_2 containing formula_1 are formula_4 Also, a maximal torus in formula_1 is formula_5 This is isomorphic to the algebraic torus formula_6. Lie algebra. For the special case of a Lie algebra formula_7 with a Cartan subalgebra formula_8, given an ordering of formula_8, the Borel subalgebra is the direct sum of formula_8 and the weight spaces of formula_7 with positive weight. A Lie subalgebra of formula_7 containing a Borel subalgebra is called a parabolic Lie algebra.
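For the GL_n example, the facts that make the upper triangular matrices a subgroup — closure under products and inverses — can be checked directly with exact rational arithmetic. A small sketch (not from the article; helper names are illustrative):

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def upper_inverse(A):
    """Invert an invertible upper-triangular matrix by back substitution."""
    n = len(A)
    X = [[F(0)] * n for _ in range(n)]
    for col in range(n):
        for row in reversed(range(n)):
            rhs = F(1) if row == col else F(0)
            s = sum(A[row][k] * X[k][col] for k in range(row + 1, n))
            X[row][col] = (rhs - s) / A[row][row]
    return X

def is_upper_triangular(A):
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(i))

A = [[F(2), F(1), F(3)], [F(0), F(1), F(4)], [F(0), F(0), F(5)]]
B = [[F(1), F(2), F(0)], [F(0), F(3), F(1)], [F(0), F(0), F(2)]]
I3 = [[F(i == j) for j in range(3)] for i in range(3)]

print(is_upper_triangular(matmul(A, B)))      # product stays upper triangular
print(is_upper_triangular(upper_inverse(A)))  # so does the inverse
print(matmul(A, upper_inverse(A)) == I3)      # and it really is the inverse
```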
[ { "math_id": 0, "text": "G = GL_4(\\mathbb{C})" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "\\left\\{\nA = \\begin{bmatrix}\na_{11} & a_{12} & a_{13} & a_{14} \\\\\n0 & a_{22} & a_{23} & a_{24} \\\\\n0 & 0 & a_{33} & a_{34} \\\\\n0 & 0 & 0 & a_{44}\n\\end{bmatrix} : \\det(A) \\neq 0 \\right\\}" }, { "math_id": 4, "text": "\\left\\{\n\\begin{bmatrix}\na_{11} & a_{12} & a_{13} & a_{14} \\\\\n0 & a_{22} & a_{23} & a_{24} \\\\\n0 & a_{32} & a_{33} & a_{34} \\\\\n0 & a_{42} & a_{43} & a_{44}\n\\end{bmatrix}\\right\\}, \\text{ } \\left\\{\n\\begin{bmatrix}\na_{11} & a_{12} & a_{13} & a_{14} \\\\\na_{21} & a_{22} & a_{23} & a_{24} \\\\\n0 & 0 & a_{33} & a_{34} \\\\\n0 & 0 & a_{43} & a_{44}\n\\end{bmatrix}\\right\\}, \\text{ } \\left\\{\n\\begin{bmatrix}\na_{11} & a_{12} & a_{13} & a_{14} \\\\\na_{21} & a_{22} & a_{23} & a_{24} \\\\\na_{31} & a_{32} & a_{33} & a_{34} \\\\\n0 & 0 & 0 & a_{44}\n\\end{bmatrix}\\right\\}" }, { "math_id": 5, "text": "\\left\\{\n\\begin{bmatrix}\na_{11} & 0 & 0 & 0 \\\\\n0 & a_{22} & 0 & 0 \\\\\n0 & 0 & a_{33} & 0 \\\\\n0 & 0 & 0 & a_{44}\n\\end{bmatrix}: a_{11}\\cdot a_{22} \\cdot a_{33}\\cdot a_{44} \\neq 0\\right\\}" }, { "math_id": 6, "text": "(\\mathbb{C}^*)^4 = \\text{Spec}(\\mathbb{C}[x^{\\pm 1},y^{\\pm 1},z^{\\pm 1},w^{\\pm 1}])" }, { "math_id": 7, "text": "\\mathfrak{g}" }, { "math_id": 8, "text": "\\mathfrak{h}" } ]
https://en.wikipedia.org/wiki?curid=588356
5883740
Wheeling (electric power transmission)
In electric power transmission, wheeling is the transportation of electric energy (megawatt-hours) from within an electrical grid to an electrical load outside the grid boundaries. In a simpler sense, it refers to the process of transmission of electricity through the transmission lines. Two types of wheeling are 1) a wheel-through, where the electrical power generation and the load are both outside the boundaries of the transmission system and 2) a wheel-out, where the generation resource is inside the boundaries of the transmission system but the load is outside. Wheeling often refers to the scheduling of the energy transfer from one balancing authority (cf. Balancing Authority, Tie Facility and Interconnection) to another. Since the wheeling of electric energy requires use of a transmission system, there is often an associated fee which goes to the transmission owners. Transmission ownership. Under deregulation, many vertically integrated utilities were separated into generation owners, transmission and distribution owners, and retail providers. To recover capital and operating costs and earn a return on investment, a transmission revenue requirement (TRR) is established and approved by a national agency (such as the Federal Energy Regulatory Commission in the United States) for each transmission owner. The TRR is paid through transmission access charges (TACs), load-weighted fees charged to internal load and energy exports for use of the transmission facilities. The energy export fee is often referred to as a wheeling charge. When wheeling-through, the transmission access charge only applies to the exported amount. Wheeling charge. A wheeling charge is a currency per megawatt-hour amount that a transmission owner receives for the use of its system to export energy. The total amount due in TAC fees is determined by the following equation: formula_0 where "Wc" is the wheeling charge per unit and "Pw" is the power in MW. 
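As a sketch of the arithmetic (names are illustrative, not from the article): since "Wc" is quoted in $/MWh, the product with a constant flow "Pw" in MW gives a fee per hour; the duration factor below is an assumption added here so the units come out in dollars.

```python
def wheeling_fee(wc_per_mwh: float, pw_mw: float, hours: float = 1.0) -> float:
    """Total fee: Wc ($/MWh) x Pw (MW) x duration (h).

    With hours=1 this is the article's Wc x Pw; the duration factor is an
    assumption added here so the result is a dollar amount.
    """
    return wc_per_mwh * pw_mw * hours

print(wheeling_fee(10, 100))      # 1000.0 dollars for 100 MW wheeled over one hour
print(wheeling_fee(10, 100, 24))  # 24000.0 dollars over a day
```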
The fee associated with wheeling is referred to as a "wheeling charge." This is an amount in $/MWh which the transmission owner recovers for the use of its system. If the resource entity must go through multiple transmission owners, it may be charged a wheeling charge for each one. There are many reasons for a wheeling charge. It may be to recover some costs of transmission facilities or congestion. Another motivation would be to keep prices low. For instance, if the electricity prices in Arizona are 30 $/MWh and prices in California are 50 $/MWh, resources in Arizona would want to sell to the California market to make more money. The utilities in Arizona would then be forced to pay 50 $/MWh if they needed these resources. If Arizona charged a wheeling charge of 10 $/MWh, Arizona would only have to pay 40 $/MWh to compete with California. However, Arizona would not want to charge too much, as this could impact the advantages of trading electric energy between systems. In this way, it works similarly to tariffs. In Tamil Nadu, wheeling charges are applicable for consumers who use third-party power; the charge is ₹0.2105 per MW. In Assam, wheeling charges are applicable for consumers who use third-party power; the charge is ₹0.26 per MW. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Total\\ wheeling\\ fee = Wc\\ (\\$/MWh) \\times Pw\\ (MW) " } ]
https://en.wikipedia.org/wiki?curid=5883740
5884024
Ramanujan's sum
Arithmetic function studied by Srinivasa Ramanujan In number theory, Ramanujan's sum, usually denoted "cq"("n"), is a function of two positive integer variables "q" and "n" defined by the formula formula_0 where ("a", "q") = 1 means that "a" only takes on values coprime to "q". Srinivasa Ramanujan mentioned the sums in a 1918 paper. In addition to the expansions discussed in this article, Ramanujan's sums are used in the proof of Vinogradov's theorem that every sufficiently large odd number is the sum of three primes. Notation. For integers "a" and "b", formula_1 is read ""a" divides "b"" and means that there is an integer "c" such that formula_2 Similarly, formula_3 is read ""a" does not divide "b"". The summation symbol formula_4 means that "d" goes through all the positive divisors of "m", e.g. formula_5 formula_6 is the greatest common divisor, formula_7 is Euler's totient function, formula_8 is the Möbius function, and formula_9 is the Riemann zeta function. Formulas for "c""q"("n"). Trigonometry. These formulas come from the definition, Euler's formula formula_10 and elementary trigonometric identities. formula_11 and so on (OEIS: , OEIS: , OEIS: , OEIS: .., OEIS: ...). "cq"("n") is always an integer. Kluyver. Let formula_12 Then "ζq" is a root of the equation "x""q" − 1 = 0. Each of its powers, formula_13 is also a root. Therefore, since there are "q" of them, they are all of the roots. The numbers formula_14 where 1 ≤ "n" ≤ "q" are called the "q"-th roots of unity. "ζq" is called a primitive "q"-th root of unity because the smallest value of "n" that makes formula_15 is "q". The other primitive "q"-th roots of unity are the numbers formula_16 where ("a", "q") = 1. Therefore, there are φ("q") primitive "q"-th roots of unity. Thus, the Ramanujan sum "cq"("n") is the sum of the "n"-th powers of the primitive "q"-th roots of unity. It is a fact that the powers of "ζq" are precisely the primitive roots for all the divisors of "q". Example. Let "q" = 12. 
Then formula_17 and formula_18 are the primitive twelfth roots of unity, formula_19 and formula_20 are the primitive sixth roots of unity, formula_21 and formula_22 are the primitive fourth roots of unity, formula_23 and formula_24 are the primitive third roots of unity, formula_25 is the primitive second root of unity, and formula_26 is the primitive first root of unity. Therefore, if formula_27 is the sum of the "n"-th powers of all the roots, primitive and imprimitive, formula_28 and by Möbius inversion, formula_29 It follows from the identity "x""q" − 1 = ("x" − 1)("x""q"−1 + "x""q"−2 + ... + "x" + 1) that formula_30 and this leads to the formula formula_31 published by Kluyver in 1906. This shows that "c""q"("n") is always an integer. Compare it with the formula formula_32 von Sterneck. It is easily shown from the definition that "c""q"("n") is multiplicative when considered as a function of "q" for a fixed value of "n": i.e. formula_33 From the definition (or Kluyver's formula) it is straightforward to prove that, if "p" is a prime number, formula_34 and if "p""k" is a prime power where "k" &gt; 1, formula_35 This result and the multiplicative property can be used to prove formula_36 This is called von Sterneck's arithmetic function. The equivalence of it and Ramanujan's sum is due to Hölder. Other properties of "c""q"("n"). For all positive integers "q", formula_37 For a fixed value of "q" the absolute value of the sequence formula_38 is bounded by φ("q"), and for a fixed value of "n" the absolute value of the sequence formula_39 is bounded by "n". If "q" &gt; 1 formula_40 Let "m"1, "m"2 &gt; 0, "m" = lcm("m"1, "m"2). Then Ramanujan's sums satisfy an orthogonality property: formula_41 Let "n", "k" &gt; 0. Then formula_42 known as the Brauer - Rademacher identity. If "n" &gt; 0 and "a" is any integer, we also have formula_43 due to Cohen. Ramanujan expansions. If "f"("n") is an arithmetic function (i.e. 
a complex-valued function of the integers or natural numbers), then a convergent infinite series of the form: formula_44 or of the form: formula_45 where the "ak" ∈ C, is called a Ramanujan expansion of "f"("n"). Ramanujan found expansions of some of the well-known functions of number theory. All of these results are proved in an "elementary" manner (i.e. only using formal manipulations of series and the simplest results about convergence). The expansion of the zero function depends on a result from the analytic theory of prime numbers, namely that the series formula_46 converges to 0, and the results for "r"("n") and "r"′("n") depend on theorems in an earlier paper. All the formulas in this section are from Ramanujan's 1918 paper. Generating functions. The generating functions of the Ramanujan sums are Dirichlet series: formula_47 is a generating function for the sequence "cq"(1), "cq"(2), ... where "q" is kept constant, and formula_48 is a generating function for the sequence "c"1("n"), "c"2("n"), ... where "n" is kept constant. There is also the double Dirichlet series formula_49 The polynomial with Ramanujan sums as coefficients can be expressed in terms of the cyclotomic polynomial formula_50. σ"k"("n"). σ"k"("n") is the divisor function (i.e. the sum of the "k"-th powers of the divisors of "n", including 1 and "n"). σ0("n"), the number of divisors of "n", is usually written "d"("n") and σ1("n"), the sum of the divisors of "n", is usually written σ("n"). If "s" &gt; 0, formula_51 Setting "s" = 1 gives formula_52 If the Riemann hypothesis is true, and formula_53 formula_54 "d"("n"). "d"("n") = σ0("n") is the number of divisors of "n", including 1 and "n" itself. formula_55 where γ = 0.5772... is the Euler–Mascheroni constant. "φ"("n"). Euler's totient function φ("n") is the number of positive integers less than "n" and coprime to "n". 
Ramanujan defines a generalization of it: if formula_56 is the prime factorization of "n", and "s" is a complex number, let formula_57 so that "φ"1("n") = "φ"("n") is Euler's function. He proves that formula_58 and uses this to show that formula_59 Letting "s" = 1, formula_60 Note that the constant is the inverse of the one in the formula for σ("n"). Λ("n"). Von Mangoldt's function Λ("n") = 0 unless "n" = "pk" is a power of a prime number, in which case it is the natural logarithm log "p". formula_61 Zero. For all "n" &gt; 0, formula_62 This is equivalent to the prime number theorem. "r"2"s"("n") (sums of squares). "r"2"s"("n") is the number of ways of representing "n" as the sum of 2"s" squares, counting different orders and signs as different (e.g., "r"2(13) = 8, as 13 = (±2)2 + (±3)2 = (±3)2 + (±2)2.) Ramanujan defines a function δ2"s"("n") and references a paper in which he proved that "r"2"s"("n") = δ2"s"("n") for "s" = 1, 2, 3, and 4. For "s" &gt; 4 he shows that δ2"s"("n") is a good approximation to "r"2"s"("n"). "s" = 1 has a special formula: formula_63 In the following formulas the signs repeat with a period of 4. formula_64 and therefore, formula_65 "r"′2s(n) (sums of triangles). formula_66 is the number of ways "n" can be represented as the sum of 2"s" triangular numbers (i.e. the numbers 1, 3 = 1 + 2, 6 = 1 + 2 + 3, 10 = 1 + 2 + 3 + 4, 15, ...; the "n"-th triangular number is given by the formula "n"("n" + 1)/2.) The analysis here is similar to that for squares. Ramanujan refers to the same paper as he did for the squares, where he showed that there is a function formula_67 such that formula_68 for "s" = 1, 2, 3, and 4, and that for "s" &gt; 4, formula_67 is a good approximation to formula_69 Again, "s" = 1 requires a special formula: formula_70 For "s" &gt; 1, formula_71 Therefore, formula_72 Sums. Let formula_73 Then for "s" &gt; 1, formula_74 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
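The closed forms above are easy to check numerically. The sketch below (not part of the original article; names are illustrative) computes "cq"("n") three ways — from the definition as a sum over primitive roots of unity, from Kluyver's divisor sum, and from von Sterneck's function — and verifies the orthogonality property:

```python
from math import gcd, cos, pi

def c_def(q, n):
    """Definition: real part of sum of exp(2*pi*i*a*n/q) over a coprime to q."""
    return round(sum(cos(2 * pi * a * n / q)
                     for a in range(1, q + 1) if gcd(a, q) == 1))

def mobius(m):
    result, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0          # squarefull: mu = 0
            result = -result
        d += 1
    return -result if m > 1 else result

def phi(m):
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def c_kluyver(q, n):
    g = gcd(q, n)
    return sum(mobius(q // d) * d for d in range(1, g + 1) if g % d == 0)

def c_sterneck(q, n):
    g = gcd(q, n)
    return mobius(q // g) * (phi(q) // phi(q // g))

# Hoelder's result: all three expressions agree.
assert all(c_def(q, n) == c_kluyver(q, n) == c_sterneck(q, n)
           for q in range(1, 13) for n in range(1, 13))

# Orthogonality: (1/m) * sum_{k=1}^{m} c_{m1}(k) c_{m2}(k), m = lcm(m1, m2),
# equals phi(m) when m1 == m2 and 0 otherwise.
def lcm(a, b):
    return a * b // gcd(a, b)

for m1, m2 in [(3, 3), (2, 3), (4, 6)]:
    m = lcm(m1, m2)
    total = sum(c_kluyver(m1, k) * c_kluyver(m2, k) for k in range(1, m + 1))
    print(m1, m2, total // m)   # 2, 0 and 0 respectively
```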
[ { "math_id": 0, "text": "c_q(n) = \\sum_{1 \\le a \\leq q \\atop (a,q)=1} e^{2 \\pi i \\tfrac{a}{q} n}," }, { "math_id": 1, "text": "a\\mid b" }, { "math_id": 2, "text": "\\frac b a = c." }, { "math_id": 3, "text": "a\\nmid b" }, { "math_id": 4, "text": "\\sum_{d\\,\\mid\\,m}f(d)" }, { "math_id": 5, "text": "\\sum_{d\\,\\mid\\,12}f(d) = f(1) + f(2) + f(3) + f(4) + f(6) + f(12). " }, { "math_id": 6, "text": "(a,\\,b)" }, { "math_id": 7, "text": "\\phi(n)" }, { "math_id": 8, "text": "\\mu(n)" }, { "math_id": 9, "text": "\\zeta(s)" }, { "math_id": 10, "text": "e^{ix}= \\cos x + i \\sin x," }, { "math_id": 11, "text": "\\begin{align}\nc_1(n) &= 1 \\\\\nc_2(n) &= \\cos n\\pi \\\\\nc_3(n) &= 2\\cos \\tfrac23 n\\pi \\\\\nc_4(n) &= 2\\cos \\tfrac12 n\\pi \\\\\nc_5(n) &= 2\\cos \\tfrac25 n\\pi + 2\\cos \\tfrac45 n\\pi \\\\\nc_6(n) &= 2\\cos \\tfrac13 n\\pi \\\\\nc_7(n) &= 2\\cos \\tfrac27 n\\pi + 2\\cos \\tfrac47 n\\pi + 2\\cos \\tfrac67 n\\pi \\\\\nc_8(n) &= 2\\cos \\tfrac14 n\\pi + 2\\cos \\tfrac34 n\\pi \\\\\nc_9(n) &= 2\\cos \\tfrac29 n\\pi + 2\\cos \\tfrac49 n\\pi + 2\\cos \\tfrac89 n\\pi \\\\\nc_{10}(n)&= 2\\cos \\tfrac15 n\\pi + 2\\cos \\tfrac35 n\\pi \\\\\n\\end{align}" }, { "math_id": 12, "text": "\\zeta_q=e^{\\frac{2\\pi i}{q}}." 
}, { "math_id": 13, "text": "\\zeta_q, \\zeta_q^2, \\ldots, \\zeta_q^{q-1}, \\zeta_q^q = \\zeta_q^0 =1" }, { "math_id": 14, "text": "\\zeta_q^n" }, { "math_id": 15, "text": "\\zeta_q^n =1" }, { "math_id": 16, "text": "\\zeta_q^a" }, { "math_id": 17, "text": "\\zeta_{12}, \\zeta_{12}^5, \\zeta_{12}^7," }, { "math_id": 18, "text": "\\zeta_{12}^{11}" }, { "math_id": 19, "text": "\\zeta_{12}^2" }, { "math_id": 20, "text": "\\zeta_{12}^{10}" }, { "math_id": 21, "text": "\\zeta_{12}^3 = i" }, { "math_id": 22, "text": "\\zeta_{12}^9 = -i" }, { "math_id": 23, "text": "\\zeta_{12}^4" }, { "math_id": 24, "text": "\\zeta_{12}^8" }, { "math_id": 25, "text": "\\zeta_{12}^6 = -1" }, { "math_id": 26, "text": "\\zeta_{12}^{12} = 1" }, { "math_id": 27, "text": "\\eta_q(n) = \\sum_{k=1}^q \\zeta_q^{kn}" }, { "math_id": 28, "text": "\\eta_q(n) = \\sum_{d\\mid q} c_d(n)," }, { "math_id": 29, "text": "c_q(n) = \\sum_{d\\mid q} \\mu\\left(\\frac{q}d\\right)\\eta_d(n)." }, { "math_id": 30, "text": "\\eta_q(n) = \\begin{cases} 0 & q\\nmid n\\\\ q & q\\mid n\\\\ \\end{cases} " }, { "math_id": 31, "text": "c_q(n)=\\sum_{d\\mid (q,n)} \\mu\\left(\\frac{q}{d}\\right) d," }, { "math_id": 32, "text": "\\phi(q)=\\sum_{d \\mid q}\\mu\\left(\\frac{q}{d}\\right) d." }, { "math_id": 33, "text": "\\mbox{If } \\;(q,r) = 1 \\;\\mbox{ then }\\; c_q(n)c_r(n)=c_{qr}(n)." }, { "math_id": 34, "text": "\nc_p(n) = \n\\begin{cases}\n-1 &\\mbox{ if }p\\nmid n\\\\\n\\phi(p)&\\mbox{ if }p\\mid n\\\\\n\\end{cases}\n," }, { "math_id": 35, "text": "\nc_{p^k}(n) = \n\\begin{cases}\n0 &\\mbox{ if }p^{k-1}\\nmid n\\\\\n-p^{k-1} &\\mbox{ if }p^{k-1}\\mid n \\mbox{ and }p^k\\nmid n\\\\\n\\phi(p^k) &\\mbox{ if }p^k\\mid n\\\\\n\\end{cases}\n." }, { "math_id": 36, "text": "c_q(n)= \\mu\\left(\\frac{q}{(q, n)}\\right)\\frac{\\phi(q)}{\\phi\\left(\\frac{q}{(q, n)}\\right)}." 
}, { "math_id": 37, "text": "\\begin{align}\nc_1(q) &= 1 \\\\\nc_q(1) &= \\mu(q) \\\\\nc_q(q) &= \\phi(q) \\\\\nc_q(m) &= c_q(n) && \\text{for } m \\equiv n \\pmod q \\\\\n\\end{align}" }, { "math_id": 38, "text": "\\{c_q(1), c_q(2), \\ldots\\}" }, { "math_id": 39, "text": "\\{c_1(n), c_2(n), \\ldots\\}" }, { "math_id": 40, "text": "\\sum_{n=a}^{a+q-1} c_q(n)=0. " }, { "math_id": 41, "text": "\\frac{1}{m}\\sum_{k=1}^m c_{m_1}(k) c_{m_2}(k) = \\begin{cases} \\phi(m) & m_1=m_2=m,\\\\ 0 & \\text{otherwise} \\end{cases} " }, { "math_id": 42, "text": "\\sum_\\stackrel{d\\mid n}{\\gcd(d,k)=1} d\\;\\frac{\\mu(\\tfrac{n}{d})}{\\phi(d)} =\\frac{\\mu(n) c_n(k)}{\\phi(n)}," }, { "math_id": 43, "text": "\\sum_\\stackrel{1\\le k\\le n}{\\gcd(k,n)=1} c_n(k-a) = \\mu(n)c_n(a), " }, { "math_id": 44, "text": "f(n)=\\sum_{q=1}^\\infty a_q c_q(n)" }, { "math_id": 45, "text": "f(q)=\\sum_{n=1}^\\infty a_n c_q(n)" }, { "math_id": 46, "text": "\\sum_{n=1}^\\infty\\frac{\\mu(n)}{n}" }, { "math_id": 47, "text": " \\zeta(s) \\sum_{\\delta\\,\\mid\\,q} \\mu\\left(\\frac{q}{\\delta}\\right) \\delta^{1-s} = \\sum_{n=1}^\\infty \\frac{c_q(n)}{n^s} " }, { "math_id": 48, "text": "\\frac{\\sigma_{r-1}(n)}{n^{r-1}\\zeta(r)}= \\sum_{q=1}^\\infty \\frac{c_q(n)}{q^{r}} " }, { "math_id": 49, "text": "\\frac{\\zeta(s) \\zeta(r+s-1)}{\\zeta(r)}= \\sum_{q=1}^\\infty \\sum_{n=1}^\\infty \\frac{c_q(n)}{q^r n^s}." 
}, { "math_id": 50, "text": "\\sum_{n=1}^q c_q(n) x^{n-1} = (x^q - 1) \\frac{\\Phi_q'(x)}{\\Phi_q(x)} = \\Phi_q'(x) \\prod_{\\begin{array}{c} d \\mid q \\\\[-4pt] d \\neq q\\end{array}} \\Phi_d(x)" }, { "math_id": 51, "text": "\\begin{align}\n\\sigma_s(n) &= n^s \\zeta(s+1) \\left(\\frac{c_1(n)}{1^{s+1}}+ \\frac{c_2(n)}{2^{s+1}}+ \\frac{c_3(n)}{3^{s+1}}+\\cdots\\right) \\\\\n\\sigma_{-s}(n) &=\\zeta(s+1)\\left(\\frac{c_1(n)}{1^{s+1}}+\\frac{c_2(n)}{2^{s+1}}+\\frac{c_3(n)}{3^{s+1}}+\\cdots\\right)\n\\end{align}" }, { "math_id": 52, "text": "\\sigma(n)= \\frac{\\pi^2}{6}n \\left(\\frac{c_1(n)}{1}+ \\frac{c_2(n)}{4}+ \\frac{c_3(n)}{9}+ \\cdots \\right)." }, { "math_id": 53, "text": "-\\tfrac12<s<\\tfrac12," }, { "math_id": 54, "text": "\\sigma_s(n) = \\zeta(1-s) \\left(\\frac{c_1(n)}{1^{1-s}}+ \\frac{c_2(n)}{2^{1-s}}+ \\frac{c_3(n)}{3^{1-s}}+ \\cdots \\right) = n^s \\zeta(1+s) \\left( \\frac{c_1(n)}{1^{1+s}}+ \\frac{c_2(n)}{2^{1+s}}+ \\frac{c_3(n)}{3^{1+s}}+ \\cdots \\right)." }, { "math_id": 55, "text": "\\begin{align}\n-d(n) &= \\frac{\\log 1}{1}c_1(n)+ \\frac{\\log 2}{2}c_2(n)+ \\frac{\\log 3}{3}c_3(n)+ \\cdots \\\\\n-d(n)(2\\gamma+\\log n) &= \\frac{\\log^2 1}{1}c_1(n)+ \\frac{\\log^2 2}{2}c_2(n)+ \\frac{\\log^2 3}{3}c_3(n)+ \\cdots \n\\end{align}" }, { "math_id": 56, "text": "n=p_1^{a_1}p_2^{a_2}p_3^{a_3}\\cdots" }, { "math_id": 57, "text": "\\varphi_s(n)=n^s(1-p_1^{-s})(1-p_2^{-s})(1-p_3^{-s})\\cdots," }, { "math_id": 58, "text": " \\frac{\\mu(n)n^s}{\\varphi_s(n)\\zeta(s)}= \\sum_{\\nu=1}^\\infty \\frac{\\mu(n\\nu)}{\\nu^s} " }, { "math_id": 59, "text": "\\frac{\\varphi_s(n)\\zeta(s+1)}{n^s}=\\frac{\\mu(1)c_1(n)}{\\varphi_{s+1}(1)}+\\frac{\\mu(2)c_2(n)}{\\varphi_{s+1}(2)}+\\frac{\\mu(3)c_3(n)}{\\varphi_{s+1}(3)}+\\cdots.\n" }, { "math_id": 60, "text": "\\varphi(n) = \\frac{6}{\\pi^2}n \\left(c_1(n) -\\frac{c_2(n)}{2^2-1} -\\frac{c_3(n)}{3^2-1} -\\frac{c_5(n)}{5^2-1}+\\frac{c_6(n)}{(2^2-1)(3^2-1)} - \\frac{c_7(n)}{7^2-1} +\\frac{c_{10}(n)}{(2^2-1)(5^2-1)} 
-\\cdots \\right)." }, { "math_id": 61, "text": " -\\Lambda(m) = c_m(1)+ \\frac{1}{2} c_m(2)+ \\frac13c_m(3)+\\cdots" }, { "math_id": 62, "text": "0= c_1(n)+ \\frac12c_2(n)+ \\frac13c_3(n)+ \\cdots." }, { "math_id": 63, "text": " \\delta_2(n)= \\pi \\left(\\frac{c_1(n)}{1}- \\frac{c_3(n)}{3}+ \\frac{c_5(n)}{5}- \\cdots \\right). " }, { "math_id": 64, "text": "\\begin{align}\n\\delta_{2s}(n) &= \\frac{\\pi^s n^{s-1}}{(s-1)!} \\left( \\frac{c_1(n)}{1^s}+ \\frac{c_4(n)}{2^s}+ \\frac{c_3(n)}{3^s}+\\frac{c_8(n)}{4^s}+ \\frac{c_5(n)}{5^s}+ \\frac{c_{12}(n)}{6^s}+ \\frac{c_7(n)}{7^s}+ \\frac{c_{16}(n)}{8^s}+ \\cdots \\right) && s \\equiv 0 \\pmod 4 \\\\[6pt]\n\\delta_{2s}(n) &= \\frac{\\pi^s n^{s-1}}{(s-1)!} \\left( \\frac{c_1(n)}{1^s}- \\frac{c_4(n)}{2^s}+ \\frac{c_3(n)}{3^s}- \\frac{c_8(n)}{4^s}+ \\frac{c_5(n)}{5^s}- \\frac{c_{12}(n)}{6^s}+ \\frac{c_7(n)}{7^s}- \\frac{c_{16}(n)}{8^s}+ \\cdots \\right) && s \\equiv 2 \\pmod 4 \\\\[6pt]\n\\delta_{2s}(n) &= \\frac{\\pi^s n^{s-1}}{(s-1)!} \\left( \\frac{c_1(n)}{1^s}+ \\frac{c_4(n)}{2^s}- \\frac{c_3(n)}{3^s}+ \\frac{c_8(n)}{4^s}+ \\frac{c_5(n)}{5^s}+ \\frac{c_{12}(n)}{6^s}- \\frac{c_7(n)}{7^s}+ \\frac{c_{16}(n)}{8^s}+ \\cdots \\right) && s \\equiv 1 \\pmod 4 \\text{ and } s > 1 \\\\[6pt]\n\\delta_{2s}(n) &= \\frac{\\pi^s n^{s-1}}{(s-1)!} \\left(\\frac{c_1(n)}{1^s}- \\frac{c_4(n)}{2^s}- \\frac{c_3(n)}{3^s}- \\frac{c_8(n)}{4^s}+ \\frac{c_5(n)}{5^s}-\\frac{c_{12}(n)}{6^s}-\\frac{c_7(n)}{7^s}-\\frac{c_{16}(n)}{8^s}+ \\cdots \\right) && s \\equiv 3 \\pmod 4 \\\\\n\\end{align}" }, { "math_id": 65, "text": "\\begin{align}\nr_2(n) &= \\pi \\left(\\frac{c_1(n)}{1}- \\frac{c_3(n)}{3}+ \\frac{c_5(n)}{5}- \\frac{c_7(n)}{7}+ \\frac{c_{11}(n)}{11}-\\frac{c_{13}(n)}{13}+ \\frac{c_{15}(n)}{15} - \\frac{c_{17}(n)}{17} + \\cdots \\right) \\\\[6pt]\nr_4(n) &= \\pi^2 n \\left( \\frac{c_1(n)}{1}- \\frac{c_4(n)}{4}+ \\frac{c_3(n)}{9}- \\frac{c_8(n)}{16}+ \\frac{c_5(n)}{25}- \\frac{c_{12}(n)}{36}+ \\frac{c_7(n)}{49}- \\frac{c_{16}(n)}{64}+ \\cdots 
\\right) \\\\[6pt]\nr_6(n) &= \\frac{\\pi^3 n^2}{2} \\left( \\frac{c_1(n)}{1}- \\frac{c_4(n)}{8}- \\frac{c_3(n)}{27}- \\frac{c_8(n)}{64}+ \\frac{c_5(n)}{125}- \\frac{c_{12}(n)}{216}- \\frac{c_7(n)}{343} - \\frac{c_{16}(n)}{512}+ \\cdots \\right) \\\\[6pt]\nr_8(n) &= \\frac{\\pi^4 n^3}{6} \\left(\\frac{c_1(n)}{1}+ \\frac{c_4(n)}{16}+ \\frac{c_3(n)}{81}+ \\frac{c_8(n)}{256}+ \\frac{c_5(n)}{625}+ \\frac{c_{12}(n)}{1296}+ \\frac{c_7(n)}{2401}+ \\frac{c_{16}(n)}{4096}+ \\cdots \\right)\n\\end{align}" }, { "math_id": 66, "text": "r'_{2s}(n)" }, { "math_id": 67, "text": "\\delta'_{2s}(n)" }, { "math_id": 68, "text": "r'_{2s}(n) = \\delta'_{2s}(n)" }, { "math_id": 69, "text": "r'_{2s}(n)." }, { "math_id": 70, "text": "\\delta'_2(n)= \\frac{\\pi}{4} \\left(\\frac{c_1(4n+1)}{1}-\\frac{c_3(4n+1)}{3}+ \\frac{c_5(4n+1)}{5}- \\frac{c_7(4n+1)}{7}+ \\cdots \\right)." }, { "math_id": 71, "text": "\\begin{align}\n\\delta'_{2s}(n) &= \\frac{(\\frac{\\pi}{2})^s}{(s-1)!}\\left(n+\\frac{s}4\\right)^{s-1} \\left( \\frac{c_1(n+\\frac{s}4)}{1^s}+ \\frac{c_3(n+\\frac{s}4)}{3^s}+ \\frac{c_5(n+\\frac{s}4)}{5^s}+ \\cdots \\right) && s \\equiv 0 \\pmod 4 \\\\[6pt]\n\\delta'_{2s}(n) &= \\frac{(\\frac{\\pi}{2})^s}{(s-1)!}\\left(n+\\frac{s}4\\right)^{s-1} \\left( \\frac{c_1(2n+\\frac{s}2)}{1^s}+ \\frac{c_3(2n+\\frac{s}2)}{3^s}+ \\frac{c_5(2n+\\frac{s}2)}{5^s}+ \\cdots \\right) && s \\equiv 2 \\pmod 4 \\\\[6pt]\n\\delta'_{2s}(n) &= \\frac{(\\frac{\\pi}{2})^s}{(s-1)!}\\left(n+\\frac{s}4\\right)^{s-1} \\left(\\frac{c_1(4n+s)}{1^s}- \\frac{c_3(4n+s)}{3^s}+\\frac{c_5(4n+s)}{5^s}- \\cdots \\right) && s \\equiv 1 \\pmod 2 \\text{ and } s >1\n\\end{align}" }, { "math_id": 72, "text": "\\begin{align}\nr'_2(n) &= \\frac{\\pi}{4} \\left(\\frac{c_1(4n+1)}{1}- \\frac{c_3(4n+1)}{3}+ \\frac{c_5(4n+1)}{5}- \\frac{c_7(4n+1)}{7}+ \\cdots \\right) \\\\[6pt]\nr'_4(n) &= \\left(\\frac{\\pi}{2}\\right)^2\\left(n+\\frac12\\right) \\left(\\frac{c_1(2n+1)}{1}+\\frac{c_3(2n+1)}{9}+ \\frac{c_5(2n+1)}{25}+ \\cdots \\right) 
\\\\[6pt]\nr'_6(n) &= \\frac{(\\frac{\\pi}{2})^3}{2}\\left(n+\\frac34\\right)^2 \\left(\\frac{c_1(4n+3)}{1}-\\frac{c_3(4n+3)}{27}+ \\frac{c_5(4n+3)}{125}-\\cdots \\right)\\\\[6pt]\nr'_8(n) &= \\frac{(\\frac{\\pi}{2})^4}{6}(n+1)^3 \\left(\\frac{c_1(n+1)}{1}+ \\frac{c_3(n+1)}{81}+ \\frac{c_5(n+1)}{625}+ \\cdots \\right) \n\\end{align}" }, { "math_id": 73, "text": "\\begin{align}\nT_q(n) &= c_q(1) + c_q(2) + \\cdots + c_q(n) \\\\\nU_q(n) &= T_q(n) + \\tfrac12\\phi(q)\n\\end{align}" }, { "math_id": 74, "text": "\\begin{align}\n\\sigma_{-s}(1) + \\cdots + \\sigma_{-s}(n) &= \\zeta(s+1) \\left(n+ \\frac{T_2(n)}{2^{s+1}}+ \\frac{T_3(n)}{3^{s+1}}+\\frac{T_4(n)}{4^{s+1}} +\\cdots \\right) \\\\\n&= \\zeta(s+1) \\left(n+\\tfrac12+ \\frac{U_2(n)}{2^{s+1}}+ \\frac{U_3(n)}{3^{s+1}}+ \\frac{U_4(n)}{4^{s+1}} +\\cdots \\right)- \\tfrac12\\zeta(s) \\\\\nd(1)+ \\cdots+ d(n) &= - \\frac{T_2(n)\\log2}{2} - \\frac{T_3(n)\\log3}{3} - \\frac{T_4(n)\\log4}{4} - \\cdots \\\\\nd(1)\\log 1 + \\cdots + d(n)\\log n &= -\\frac{T_2(n)(2\\gamma\\log2-\\log^22)}{2} -\\frac{T_3(n)(2\\gamma\\log3-\\log^23)}{3} -\\frac{T_4(n)(2\\gamma\\log4-\\log^24)}{4} -\\cdots \\\\\nr_2(1)+ \\cdots+ r_2(n) &= \\pi \\left(n -\\frac{T_3(n)}{3} +\\frac{T_5(n)}{5} -\\frac{T_7(n)}{7} +\\cdots \\right)\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=5884024
58841228
Hamilton Walk
Event commemorating the Irish mathematician Hamilton's discovery of quaternions The Hamilton Walk from Dunsink Observatory to Broom Bridge on the Royal Canal in Dublin takes place on 16 October each year. This is the anniversary of the day in 1843 when William Rowan Hamilton discovered the non-commutative algebraic system known as quaternions, while walking with his wife along the banks of the Royal Canal. History. The walk was launched in 1990 by Prof Tony O'Farrell of the Department of Mathematics at St Patrick's College, Maynooth. It starts at DIAS Dunsink Observatory, where Hamilton lived and was the Director from 1827 to 1865, and ends at the spot where he recorded his discovery by carving the following equation on Broom Bridge: formula_0 These are the basic relations which define the quaternions. The original inscription by Hamilton is no longer there, but a plaque erected by the Dublin Institute for Advanced Studies (DIAS) and unveiled by the Taoiseach Éamon de Valera in 1958 marks the spot where he recorded his discovery. Many prominent mathematicians have attended the event; they include Wolf Prize winner Roger Penrose (2013), Abel Prize and Copley Medal winner Andrew Wiles (2003), Fields Medallists Timothy Gowers (2004) and Efim Zelmanov (2009), and Nobel Prize winners Murray Gell-Mann (2002), Steven Weinberg (2005) and Frank Wilczek (2007). At the end of the 1990s, O'Farrell's younger colleague Fiacre Ó Cairbre took over the organisation of the walk, but O'Farrell always gives a speech at Broom Bridge. O'Farrell and Ó Cairbre received the 2018 Maths Week Ireland Award for "outstanding work in raising public awareness of mathematics" resulting from the founding and nurturing of the Hamilton walk. It has been argued that the discovery of the quaternions, by revealing deep mathematical structures that did not obey the commutative law, allowed mathematicians to create new systems unbound by the rules of ordinary arithmetic. 
It follows that the climax of the Hamilton Walk at Broom Bridge marks the exact spot where modern algebra was born. The Hamilton Way is a proposed foot and cycle path that follows the route of the Hamilton Walk, linking DIAS Dunsink Observatory to the Royal Canal. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i^2=j^2=k^2=ijk=-1\\," } ]
https://en.wikipedia.org/wiki?curid=58841228
58854243
Quota rule
Rule in math and political science In mathematics and political science, the quota rule describes a desired property of proportional apportionment methods. It says that the number of seats allocated to a party should equal its ideal entitlement rounded either down or up. The ideal number of seats for a party, called its seat entitlement, is calculated by multiplying each party's share of the vote by the total number of seats. Equivalently, it is equal to the number of votes divided by the Hare quota. For example, if a party receives 10.56% of the vote, and there are 100 seats in a parliament, the quota rule says that when all seats are allotted, the party may get either 10 or 11 seats. The most common apportionment methods (the highest averages methods) violate the quota rule in situations where upholding it would cause a population paradox, although unbiased apportionment rules like Webster's method do so only rarely. Mathematics. The entitlement for a party (the number of seats the party would ideally get) is: formula_0 The lower frame is then the entitlement rounded down to the nearest integer, while the upper frame is the entitlement rounded up. The frame rule states that the only two allocations that a party can receive should be either the lower or upper frame. If at any time an allocation gives a party a greater or lesser number of seats than the upper or lower frame, that allocation (and by extension, the method used to allocate it) is said to be in violation of the quota rule. Example. If there are 5 available seats in the council of a club with 300 members, and party "A" has 106 members, then the entitlement for party "A" is formula_1. The lower frame for party "A" is 1, because 1.8 rounded down equals 1. The upper frame, 1.8 rounded up, is 2. Therefore, the quota rule states that the only two allocations allowed for party "A" are 1 or 2 seats on the council. 
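The entitlement and the two frames of the club example can be computed directly; a small Python sketch (the function name is illustrative):

```python
from math import floor, ceil

def entitlement(members, total_members, seats):
    """Ideal (fractional) number of seats for a party."""
    return members / total_members * seats

# party "A": 106 of 300 members, 5 seats on the council
q_a = entitlement(106, 300, 5)        # about 1.77, i.e. roughly 1.8
lower, upper = floor(q_a), ceil(q_a)  # lower frame 1, upper frame 2
```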
If there is a second party, "B", that has 137 members, then the quota rule states that party "B", with an entitlement of formula_2, should receive either 2 or 3 seats. Finally, a party "C" with the remaining 57 members of the club has an entitlement of formula_3, which means its allocated seats should be either 0 or 1. In all cases, the method for actually allocating the seats determines whether an allocation violates the quota rule, which in this case would mean giving party "A" any seats other than 1 or 2, giving party "B" any other than 2 or 3, or giving party "C" any other than 0 or 1 seat. Relation to apportionment paradoxes. The Balinski–Young theorem, proved in 1980, shows that if an apportionment method satisfies the quota rule, it must be subject to some apportionment paradox. For instance, although the largest remainder method satisfies the quota rule, it is subject to the Alabama paradox and the population paradox. The theorem itself is broken up into several different proofs that cover a wide number of circumstances. Specifically, there are two main statements that apply to the quota rule: Use in apportionment methods. Different methods for allocating seats may or may not satisfy the quota rule. While many methods do violate the quota rule, it is sometimes preferable to violate the rule very rarely than to exhibit some other apportionment paradox; some sophisticated methods violate the rule so rarely that it has never happened in a real apportionment, while some methods that never violate the quota rule exhibit other paradoxes in much more serious fashions. The largest remainder method does satisfy the quota rule. The method works by assigning each party its seat quota, rounded down. Then, the surplus seats are given to the parties with the largest fractional parts, until there are no more surplus seats. Because it is impossible to give more than one surplus seat to a party, every party will always be equal to its lower or upper frame. 
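The largest remainder procedure just described can be sketched in a few lines of Python (the function name is illustrative; the member counts reuse the club example):

```python
from math import floor

def largest_remainder(votes, seats):
    """Allocate floors of the entitlements first, then hand out the
    surplus seats in order of descending fractional part."""
    total = sum(votes.values())
    quotas = {p: v / total * seats for p, v in votes.items()}
    alloc = {p: floor(q) for p, q in quotas.items()}
    surplus = seats - sum(alloc.values())
    for p in sorted(quotas, key=lambda p: quotas[p] - alloc[p],
                    reverse=True)[:surplus]:
        alloc[p] += 1
    return alloc

largest_remainder({"A": 106, "B": 137, "C": 57}, 5)
# {"A": 2, "B": 2, "C": 1} -- each party lands on its lower or upper frame
```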
The D'Hondt method, also known as the Jefferson method, sometimes violates the quota rule by allocating more seats than the upper frame allows. Since Jefferson's was the first method used for Congressional apportionment in the United States, this violation led to a substantial problem where larger states often received more representatives than smaller states, a bias that was not corrected until Webster's method was implemented in 1842. Although Webster's method can in theory violate the quota rule, such occurrences are extremely rare. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\frac{\\text{Votes}_\\text{party}}{\\#\\text{Votes}} \\cdot \\#\\text{Seats} " }, { "math_id": 1, "text": " \\frac {106} {300} \\cdot 5 \\approx 1.8" }, { "math_id": 2, "text": " \\frac {137} {300} \\cdot 5 \\approx 2.3" }, { "math_id": 3, "text": " \\frac {57} {300} \\cdot 5 \\approx 0.95" } ]
https://en.wikipedia.org/wiki?curid=58854243
58855287
Helen Popova Alderson
Mathematician and translator Helen Popova Alderson (1924–1972) was a Soviet and British mathematician and mathematics translator known for her research on quasigroups and on higher reciprocity laws. Life. Alderson was born on 14 May 1924 in Baku, then part of the Soviet Union, to a family of two academics from Moscow. Her father, a neurophysiologist, had been a student of Ivan Pavlov. She began studying mathematics at Moscow University in 1937, when she was only 13. She had to break off her studies because of World War II, moving to Paris as a refugee with her family. After the war, she returned to study at the University of Edinburgh. She completed a Ph.D. there in 1951; her dissertation was "Logarithmetics of Non-Associative Algebras". After leaving mathematical research to raise two children in Cambridge, she was funded by the Calouste Gulbenkian Foundation with a Fellowship at Lucy Cavendish College, Cambridge, beginning in the late 1960s. At Cambridge, she worked with J. W. S. Cassels. She died on 5 November 1972, from complications of kidney disease. Research. In the theory of higher reciprocity laws, Alderson published necessary and sufficient conditions for 2 and 3 to be seventh powers, in modular arithmetic modulo a given prime number formula_0.[7X] According to , "plain quasigroups were first studied by Helen Popova-Alderson, in a series of papers dating back to the early fifties". Smith cites in particular a posthumous paper [FPQ] and its references. In this context, a quasigroup is a mathematical structure consisting of a set of elements and a binary operation that does not necessarily obey the associative law, but where (like a group) this operation can be inverted. Being plain involves having only a finite number of elements and no non-trivial subalgebras. Translation. As well as Russian, English, and French, Alderson spoke Polish, Czech, and some German. 
She became the English translator of "Elementary Number Theory", a textbook originally published in Russian in 1937 by B. A. Venkov. Her translation was published by Wolters-Noordhoff of Groningen in 1970. As well as the original text, it includes footnotes by Alderson updating the material with new developments in number theory.[ENT] References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p" } ]
https://en.wikipedia.org/wiki?curid=58855287
58856910
DN factor
Value used to determine base oil viscosity DN factor, also called "DN Value", is a number that is used to determine the correct base oil viscosity for the lubrication of various types of bearings. It can also be used to determine if a bearing is the correct choice for use in a given application. It is the product of bearing diameter (D) and speed (N). D = diameter (in millimeters) of the bearing in question. For most types of bearings, there are actually two required measurements: the inner diameter and outer diameter. In such cases, D = (A+B)/2, where A = inner diameter and B = outer diameter. The sum of these two values is divided by 2 to obtain the median diameter, sometimes also called the pitch diameter. N = bearing speed. This is the maximum number of revolutions per minute (RPM) at which the bearing will turn. The DN factor of a bearing is obtained by multiplying the median diameter (A + B)/2 by RPM, and sometimes by a correction factor. This correction factor may vary from manufacturer to manufacturer. No consensus exists among tribologists as to a constant correction factor across manufacturers. Example formula. For a single or double row cylindrical bearing, the following formula would be used to obtain the DN factor. It includes a correction factor of 2: formula_0 where A and B are the inner and outer diameters in millimeters and RPM is the bearing speed. Usage. Once the DN factor of a bearing has been obtained, it can be used to consult grease selection charts in order to determine the correct lubricant. Viscosity must be matched to the needs of the bearing in order to obtain maximum efficiency, and to avoid lubricant runout due to overheating, which is a consequence of metal-on-metal contact as well as of the failure of the grease to extract heat from the bearing system. Grease consistency is quantified according to the National Lubricating Grease Institute (NLGI) consistency number, which is regarded as the standard measure of grease thickness. 
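A minimal Python sketch of the calculation (the function name and the bearing dimensions are illustrative; the correction factor of 2 is the one quoted above for a single or double row cylindrical bearing):

```python
def dn_factor(inner_d_mm, outer_d_mm, rpm, correction=2):
    """DN factor = pitch (median) diameter in mm x bearing speed in RPM
    x a correction factor that varies by bearing type and manufacturer."""
    pitch_diameter = (inner_d_mm + outer_d_mm) / 2
    return pitch_diameter * rpm * correction

# hypothetical bearing: 40 mm bore, 80 mm outer diameter, 3000 RPM
dn_factor(40, 80, 3000)  # 60 * 3000 * 2 = 360000
```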
Knowing the DN factor of a bearing is critical to preventing lubricant starvation, which is characterized by decreasing lubricant film thickness with increasing bearing speed. Starvation occurs when the bearing speed (N) exceeds the ability of the lubricant to flow back into the bearing track. This phenomenon can be the cause of metal-on-metal contact, which causes rapid wear and necessitates early replacement. Jauhari shows that the degree of starvation is a function of relative lubricant layer thickness for given operating conditions. He also states that "the rolling fatigue life of [a] bearing depends greatly upon the viscosity and film thickness between the rolling contact-surface [sic]." Online calculators exist to determine the DN factor and the correct grease viscosity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{DN factor} = \\frac{A+B}{2} \\times \\text{RPM} \\times 2" } ]
https://en.wikipedia.org/wiki?curid=58856910
58858795
Gas turbine engine thrust
The familiar study of jet aircraft treats jet thrust with a "black box" description which only looks at what goes into the jet engine, air and fuel, and what comes out, exhaust gas and an unbalanced force. This force, called thrust, is the sum of the momentum difference between entry and exit and any unbalanced pressure force between entry and exit, as explained in "Thrust calculation". As an example, an early turbojet, the Bristol Olympus Mk. 101, had a momentum thrust of 9,300 lb and a pressure thrust of 1,800 lb, giving a total of 11,100 lb. Looking inside the "black box" shows that the thrust results from all the unbalanced momentum and pressure forces created within the engine itself. These forces, some forwards and some rearwards, act on all the internal parts, both stationary and rotating, such as ducts, compressors, etc., which are in the primary gas flow through the engine from front to rear. The algebraic sum of all these forces is delivered to the airframe for propulsion. "Flight" gives examples of these internal forces for two early jet engines, the Rolls-Royce Avon Ra.14 and the de Havilland Goblin. Transferring thrust to the aircraft. The engine thrust acts along the engine centreline. The aircraft "holds" the engine on the outer casing of the engine at some distance from the engine centreline (at the engine mounts). This arrangement causes the engine casing to bend (known as backbone bending) and the round rotor casings to distort (ovalization). Distortion of the engine structure has to be controlled with suitable mount locations to maintain acceptable rotor and seal clearances and prevent rubbing. A well-publicized example of excessive structural deformation occurred with the original Pratt &amp; Whitney JT9D engine installation in the Boeing 747 aircraft. The engine mounting arrangement had to be revised with the addition of an extra thrust frame to reduce the casing deflections to an acceptable amount. Rotor thrust. 
The rotor thrust on a thrust bearing is not related to the engine thrust. It may even change direction at some RPM. The bearing load is determined by bearing life considerations. Although the aerodynamic loads on the compressor and turbine blades contribute to the rotor thrust, they are small compared to cavity loads inside the rotor which result from the secondary air system pressures and sealing diameters on discs, etc. To keep the load within the bearing specification, seal diameters are chosen accordingly, as was done many years ago on the backface of the impeller in the de Havilland Ghost engine. Sometimes an extra disc known as a balance piston has to be added inside the rotor. An early turbojet example with a balance piston was the Rolls-Royce Avon. Thrust calculation. The net thrust (FN) of an engine is given by: formula_0 Most types of jet engine have an air intake, which provides the bulk of the fluid exiting the exhaust. Conventional rocket engines, however, do not have an intake, so ṁ air is zero. Therefore, rocket engines do not have ram drag and the gross thrust of the rocket engine nozzle is the net thrust of the engine. Consequently, the thrust characteristics of a rocket motor are different from those of an air-breathing jet engine, and thrust is independent of velocity. If the velocity of the jet from a jet engine is equal to sonic velocity, the jet engine's nozzle is said to be choked. If the nozzle is choked, the pressure at the nozzle exit plane is greater than atmospheric pressure, and extra terms must be added to the above equation to account for the pressure thrust. However, ve is the "effective" exhaust velocity. If a turbojet engine has a purely convergent exhaust nozzle and the actual exhaust velocity reaches the speed of sound in air at the exhaust temperature and pressure, the exhaust gas cannot be further accelerated by the nozzle. In such a case, the exhaust gas retains a pressure which is higher than that of the ambient air. 
This is the source of 'pressure thrust'. The rate of flow of fuel entering the engine is often very small compared with the rate of flow of air. When the contribution of fuel to the nozzle gross thrust can be ignored, the net thrust is: formula_1 The velocity of the jet (ve) must exceed the true airspeed of the aircraft (v) if there is to be a net forward thrust on the aircraft. The velocity (ve) can be calculated thermodynamically based on adiabatic expansion. Thrust augmentation. Thrust augmentation has taken many forms, most commonly to supplement inadequate take-off thrust. Some early jet aircraft needed rocket assistance to take off from high-altitude airfields or when the day temperature was high. A more recent aircraft, the Tupolev Tu-22 supersonic bomber, was fitted with four SPRD-63 boosters for take-off. Possibly the most extreme requirement needing rocket assistance, and which was short-lived, was zero-length launching. Almost as extreme, but very common, is catapult assistance from aircraft carriers. Rocket assistance has also been used during flight. The SEPR 841 booster engine was used on the Dassault Mirage for high-altitude interception. Early aft-fan arrangements which added bypass airflow to a turbojet were known as thrust augmentors. The aft-fan fitted to the General Electric CJ805-3 turbojet augmented the take-off thrust from 11,650 lb to 16,100 lb. Water, or other coolant, injection into the compressor or combustion chamber and fuel injection into the jetpipe (afterburning/reheat) became standard ways to increase thrust, known as 'wet' thrust to differentiate it from the unaugmented 'dry' thrust. Coolant injection (pre-compressor cooling) has been used, together with afterburning, to increase thrust at supersonic speeds. The 'Skyburner' McDonnell Douglas F-4 Phantom II set a world speed record using water injection in front of the engine. 
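The two net-thrust expressions above (the full momentum form and the approximation that neglects fuel flow) can be sketched in Python; the mass flows and velocities below are illustrative values, not data for any particular engine:

```python
def net_thrust(mdot_air, mdot_fuel, v_exhaust, v_flight):
    """F_N = (mdot_air + mdot_fuel) * v_e - mdot_air * v
    (momentum terms only; no pressure-thrust term for a choked nozzle)."""
    return (mdot_air + mdot_fuel) * v_exhaust - mdot_air * v_flight

def net_thrust_approx(mdot_air, v_exhaust, v_flight):
    """F_N = mdot_air * (v_e - v), valid when fuel flow is negligible."""
    return mdot_air * (v_exhaust - v_flight)

# illustrative: 70 kg/s of air, 1.4 kg/s of fuel, 600 m/s jet, 250 m/s flight
net_thrust(70, 1.4, 600, 250)      # about 25340 N
net_thrust_approx(70, 600, 250)    # 24500 N, close to the full result
```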
At high Mach numbers afterburners supply progressively more of the engine thrust, as the thrust from the turbomachine drops off towards zero; at that speed the engine pressure ratio (EPR) has fallen to 1.0 and all the engine thrust comes from the afterburner. The afterburner also has to make up for the pressure loss across the turbomachine, which is a drag item at higher speeds where the EPR is less than 1.0. Thrust augmentation of existing afterburning engine installations for special short-duration tasks has been the subject of studies for launching small payloads into low Earth orbit using aircraft such as the McDonnell Douglas F-4 Phantom II, McDonnell Douglas F-15 Eagle, Dassault Rafale and Mikoyan MiG-31, and also for carrying experimental packages to high altitudes using a Lockheed SR-71. In the first case an increase in the existing maximum speed capability is required for orbital launches. In the second case an increase in thrust within the existing speed capability is required. Compressor inlet cooling is used in the first case. A compressor map shows that the airflow reduces with increasing compressor inlet temperature, although the compressor is still running at maximum RPM (but reduced aerodynamic speed). Compressor inlet cooling increases the aerodynamic speed, flow and thrust. In the second case a small increase in the maximum mechanical speed and turbine temperature was allowed, together with nitrous oxide injection into the afterburner and a simultaneous increase in afterburner fuel flow. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F_N =( \\dot{m}_{air} + \\dot{m}_{fuel}) v_e - \\dot{m}_{air} v" }, { "math_id": 1, "text": "F_N = \\dot{m}_{air} (v_e - v)" } ]
https://en.wikipedia.org/wiki?curid=58858795
588606
Law of demand
Fundamental principle in microeconomics In microeconomics, the law of demand is a fundamental principle which states that there is an inverse relationship between price and quantity demanded. In other words, "conditional on all else being equal, as the price of a good increases (↑), quantity demanded will decrease (↓); conversely, as the price of a good decreases (↓), quantity demanded will increase (↑)". Alfred Marshall worded this as: "When we say that a person's demand for anything increases, we mean that he will buy more of it than he would before at the same price, and that he will buy as much of it as before at a higher price". The law of demand, however, only makes a qualitative statement in the sense that it describes the direction of change in the amount of quantity demanded but not the magnitude of change. The law of demand is represented by a graph called the demand curve, with quantity demanded on the x-axis and price on the y-axis. Demand curves are downward sloping by definition of the law of demand. The law of demand also works together with the law of supply to determine the efficient allocation of resources in an economy through the equilibrium price and quantity. It is important to note that the relationship between price and quantity demanded holds only under the "ceteris paribus" condition ("all else being equal"): quantity demanded varies inversely with price when income and the prices of other goods remain constant. If all else is not held equal, the law of demand may not necessarily hold. In the real world, there are many determinants of demand other than price, such as the prices of other goods, the consumer's income, preferences etc. There are also exceptions to the law of demand, such as Giffen goods and perfectly inelastic goods. Overview. Economist Alfred Marshall provided the graphical illustration of the law of demand. 
This graphical illustration is still used today to define and explain a variety of other concepts and theories in economics. A simple explanation of the law of demand is that, all else equal, at a higher price consumers will demand a smaller quantity of a good, and vice versa. The law of demand applies to a variety of organisational and business situations; price determination and government policy formation are examples. Together with the law of supply, the law of demand gives us the equilibrium price and quantity. Moreover, the laws of demand and supply explain why goods are priced at the level that they are. They also help us identify opportunities to buy what are perceived to be underpriced (or sell overpriced) goods or assets. The law of demand is relied upon heavily by managerial economics, a branch of economics that applies microeconomic analysis to managerial decision-making, to make informed decisions on pricing, production, and marketing strategies. In this context, understanding the alternative factors that influence the law of demand becomes crucial for managers and decision-makers. An important concept to apprehend from the law of demand is the difference between demand and quantity demanded. Demand refers to the demand curve. A change in demand is indicated by a shift in the demand curve. Quantity demanded, on the other hand, refers to a specific point on the demand curve which corresponds to a specific price. A change in quantity demanded therefore refers to a movement along the existing demand curve. However, there are some exceptions to the law of demand. For instance, if the price of cigarettes goes up, their demand does not decrease. The exceptions to the law of demand typically involve Giffen goods and Veblen goods, which are further explained below. The four main types of elasticity of demand are price elasticity of demand, cross elasticity of demand, income elasticity of demand, and advertising elasticity of demand. History. 
The law of demand was first stated by Charles Davenant (1656-1714) in his essay "Probable Methods of Making People Gainers in the Balance of Trade" (1699). However, there were instances of its understanding and use much earlier, as when Gregory King (1648-1712) demonstrated the law of demand. He represented a relationship between the price of wheat and the harvest, where the results suggested that if the harvest falls by 50%, the price would rise by 500%. This demonstration illustrated the law of demand as well as its elasticity. In 1890, economist Alfred Marshall documented the graphical illustration of the law of demand. In "Principles of Economics" (1890), Marshall reconciled demand and supply into a single analytical framework. The formulation of the demand curve was provided by utility theory, while the supply curve was determined by cost. This idea of demand and supply curves is what we still use today to develop the market equilibrium and to support a variety of other economic theories and concepts. Due to general agreement with the observation, economists have come to accept the validity of the law under most situations. Economists also see Alfred Marshall as the pioneer of the standard demand and supply diagrams and their use in economic analysis, including welfare applications and consumer surplus. Mathematical description. Consider the function formula_0, where formula_1 is the quantity demanded of good "formula_2", formula_3 is the demand function, formula_4 is the price of the good and formula_5 is the list of parameters other than the price. The law of demand states that formula_6. Here formula_7 is the partial derivative operator. The above equation, when plotted with quantity demanded (formula_1) on the "formula_2"-axis and price (formula_4) on the formula_8-axis, gives the demand curve, which is also known as the demand schedule. 
The demand curve is downward sloping, illustrating the inverse relationship between quantity demanded and price. Therefore, a downward sloping demand curve embeds the law of demand. In a more specific manner: formula_9 This is a functional relationship in which the quantity demanded by the consumer formula_10 depends on the price of the good formula_4, the monetary income of the consumer formula_11, the prices of other goods formula_12, and the taste of the consumer formula_13. Another common way to express the law of demand without imposing a functional form is the following: formula_14 This formula states that, for all possible prices p' and p, and corresponding demands x' and x, prices and demand must move in opposite directions, i.e. as price increases, demand must decrease and vice versa. Note that demands are demand "bundles", not individual demands. Demand for a single good can still increase even though its price also increased, if there is another good whose price increased and which is sufficiently substituted away from. If good i is a Giffen good whose price increases while other goods' prices are held fixed (so that formula_15), the law of demand is clearly violated, as we have both formula_16 (as price increased) and formula_17 (as we consider a Giffen good), so that formula_18. Demand versus quantity demanded. It is very important to apprehend the difference between demand and quantity demanded, as they are used to mean different things in economic jargon. On the one hand, demand refers to the demand curve. Changes in demand are depicted graphically by a shift in the demand curve to the left or right. Changes in the demand curve are usually caused by 5 major factors, namely: number of buyers, consumer income, tastes or preferences, price of related goods and future expectations. On the other hand, quantity demanded refers to a specific point located on the demand curve which corresponds to a specific price. 
Therefore, quantity demanded represents the exact quantity of a good or service demanded by a consumer at a particular price, conditional on the other determinants. A change in quantity demanded is indicated by a movement along the existing demand curve that is caused only by a change in price. For instance, consider the housing market. An increase or decrease in the price of housing will not shift the demand curve; rather, it will cause a movement along the demand curve for housing, i.e. a change in quantity demanded. But if we look at mortgage rates (a factor other than price), even if housing prices remain unchanged, an increased mortgage rate leads to a lower willingness to buy at all prices, shifting the demand curve to the left. Consumers will buy less, even though the price is the same. On the other hand, a lower mortgage rate leads to a higher willingness to buy at all prices, shifting the demand curve to the right. Consumers will now buy more, even though the price has not changed at all. Such variation in demand can be explained by demand elasticity. Demand elasticity. The elasticity of demand refers to the sensitivity of a good's demand to fluctuations in other economic factors, such as price, income, etc. The law of demand states that the relationship between demand and price is inverse. However, the demand for some goods is more responsive to a change in price than that for others. There are four major elasticities of demand: the price elasticity of demand, income elasticity of demand, cross elasticity of demand, and advertising elasticity of demand. Price elasticity of demand. The variation in demand with respect to a change in price is known as the price elasticity of demand. The coefficient of price elasticity of demand is the percentage change in quantity demanded divided by the percentage change in price. 
formula_19 Price elasticity of demand can be classified as elastic, inelastic, or unitary. An elastic demand occurs when the percentage change in the quantity demanded is greater than the percentage change in price, meaning that a small change in price results in a large change in quantity demanded. Inelastic demand occurs when the percentage change in quantity demanded is smaller than the percentage change in price. Unitary elasticity occurs when the percentage change in quantity demanded is equal to the percentage change in price. Factors affecting price elasticity of demand include the availability of substitute goods, the proportion of income spent on the good, the nature of the good (whether it is a necessity or a luxury), and the time horizon under consideration. Cross elasticity of demand. The cross elasticity of demand is an economic concept that measures the relative change in demand for a good when another good varies in price. The coefficient of cross elasticity of demand is calculated by dividing the percentage change in quantity demanded of good A by the percentage change in price of good B. formula_20 The cross elasticity of demand, also commonly referred to as the cross-price elasticity of demand, allows companies to establish competitive prices against substitute goods and complementary goods. The figure produced by the equation thus determines the strength of both the relationship and the competition between the two goods. Income elasticity of demand. Income elasticity of demand is an economic measurement developed to gauge the sensitivity of a good's quantity demanded when there is a change in the real income of a consumer. To calculate the income elasticity of demand, the percentage change in quantity demanded is divided by the percentage change in the consumer's income. formula_21 The income elasticity of demand allows businesses to analyse and predict the impact of business cycles on total sales. 
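Each of these elasticities is the ratio of two percentage changes; a small Python sketch (the function names are illustrative) computes a coefficient and applies the elastic/inelastic/unitary classification described above:

```python
def elasticity(pct_change_quantity, pct_change_driver):
    """Generic elasticity: % change in quantity demanded divided by the
    % change in a driver (price, another good's price, income, ad spend)."""
    return pct_change_quantity / pct_change_driver

def classify_price_elasticity(e):
    """Classify by the magnitude of the coefficient."""
    magnitude = abs(e)
    if magnitude > 1:
        return "elastic"
    if magnitude < 1:
        return "inelastic"
    return "unitary"

# a 10% price rise that cuts quantity demanded by 20%
e_price = elasticity(-20, 10)        # -2.0
classify_price_elasticity(e_price)   # "elastic"
# income elasticity: positive suggests a normal good, negative an inferior one
e_income = elasticity(5, 10)         # 0.5
```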
The income elasticity of demand thus allows goods to be broadly categorised as normal goods and inferior goods. A positive measurement suggests that the good is a normal good, and a negative measurement suggests an inferior good. The income elasticity of demand effectively represents a consumer's view of whether a good is a luxury or a necessity. Advertising elasticity of demand. Advertising elasticity of demand measures the effectiveness of an advertising campaign at generating new sales. To calculate the advertising elasticity of demand, the percentage change in quantity demanded is divided by the percentage change in advertising expenditure. formula_22 A business utilises the advertising elasticity of demand to measure the effectiveness of advertising at generating new sales. A positive elasticity indicates success for the advertisement, as demand for the goods has increased. However, this measurement is also subject to the availability of substitutes, consumer behaviours and the price point of the good being advertised. Exceptions to the law of demand. The elasticity of demand follows the law of demand and its definition. However, there are goods and specific situations that defy the law of demand. Generally, the amount demanded of a good increases with a decrease in the price of the good and vice versa. In some cases this may not be true. Certain goods do not follow the law of demand: these include Giffen goods and Veblen goods, as well as situations involving basic or necessary goods and expectations of future price changes. Further exceptions and details are given in the sections below: Giffen goods. The concept was initially proposed by Sir Robert Giffen, and economists disagree on the existence of Giffen goods in the market. A Giffen good describes an inferior good that, as the price increases, demand for the product increases. As an example, during the Great Famine of Ireland in the 19th century, potatoes were considered a Giffen good. 
Potatoes were the largest staple in the Irish diet, so as the price rose it had a large impact on income. People responded by cutting back on luxury goods such as meat and vegetables, and instead bought more potatoes. Therefore, as the price of potatoes increased, so did the quantity demanded. This results in an upward sloping demand curve, contrary to the fundamental law of demand. Giffen goods violate the law of demand because the income effect dominates the substitution effect. This can be illustrated with the Slutsky equation for a change in a good's own price: formula_23 The first term on the right-hand side is the substitution effect, which is always negative. The second term on the right-hand side is the income effect, which can be positive or negative. For inferior goods, this is negative, so subtracting this means adding its positive absolute value. The non-derivative component of the income effect is a measure of a consumer's existing demand for the good, meaning that if a consumer spends a large amount of his income on an inferior good, then a price increase could cause the income effect to dominate the substitution effect. This leads to a positive partial derivative of the good's demand with respect to its price, which violates the law of demand. Expectation of change in the price of a commodity. If an increase in the price of a commodity causes households to expect the price to increase further, they may start purchasing a greater amount of the commodity even at the presently increased price. Similarly, if a household expects the price of the commodity to decrease, it may postpone its purchases. Thus, some argue that the law of demand is violated in such cases. In this case, the demand curve does not slope down from left to right; instead, it slopes backwards from top right to bottom left. This curve is known as an exceptional demand curve. Basic or necessary goods. 
Basic or necessary goods are those that people need no matter how high the price is. Medicines covered by insurance are a good example. An increase or decrease in the price of such a good does not affect its quantity demanded. Certain scenarios in stock trading. Stock buyers acting in accord with the hot-hand fallacy will increase buying when stock prices are trending upward. Other rationales for buying a high-priced stock are that previous buyers who bid up the price are proof of the issue's quality, or conversely, that an issue's low price may be evidence of viability problems. Likewise, demand among short traders during a short squeeze can increase as price increases. Veblen goods. Unlike Giffen goods, which are inferior items, Veblen goods are generally high-quality goods. The demand for Veblen goods increases with an increase in price. Examples of Veblen goods are mostly luxury items such as diamonds, gold, precious stones, world-famous paintings and antiques. Veblen goods appear to go against the law of demand because of their exclusivity appeal, in the sense that if the price of a luxurious and expensive product is increased, it may attract the status-conscious group more, since it will be further out of reach for an average consumer. Thorstein Veblen referred to this sort of consumption as the purchase of goods that do not exhibit additional utility or functionality but offer status and reveal socioeconomic position. In simple words, these goods are bought not for the satisfaction they provide but for their "snob appeal" or "ostentation". Accordingly, all these factors lead to an upward-sloping demand curve for Veblen goods along a certain price range. Gary S. Becker and Kevin M. Murphy analysed Veblen goods. Their analysis of the demand for paintings by masters and for other objects supports Veblen's thesis by relying heavily on the allocative role of prices in markets with social interactions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ Q_x = f(P_x ; \\mathbf Y)" }, { "math_id": 1, "text": "Q_x" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "P_x" }, { "math_id": 5, "text": "\\mathbf Y" }, { "math_id": 6, "text": "\\frac{\\partial f}{\\partial P_x} < 0" }, { "math_id": 7, "text": "\\partial/\\partial P_x" }, { "math_id": 8, "text": "y" }, { "math_id": 9, "text": "Qdx = f(P_x, I, P_y, T)" }, { "math_id": 10, "text": "Qdx" }, { "math_id": 11, "text": "I" }, { "math_id": 12, "text": "P_y" }, { "math_id": 13, "text": "T" }, { "math_id": 14, "text": "(p'-p)(x'-x)\\leq 0" }, { "math_id": 15, "text": "p_j'-p_j=0 \\; \\forall j\\neq i" }, { "math_id": 16, "text": "p_i'-p_i>0" }, { "math_id": 17, "text": "q_i'-q_i>0" }, { "math_id": 18, "text": "(p'-p)(x'-x)=(p_i'-p_i)(x_i'-x_i)>0" }, { "math_id": 19, "text": "E_{\\langle p \\rangle} = \\frac{\\Delta Q/Q}{\\Delta P/P}" }, { "math_id": 20, "text": "\\text{Cross-price Elasticity Of Demand}\n= \\frac{\\%\\text{ change in quantity demanded of good A}}{\\%\\text{ change in price of good B}}" }, { "math_id": 21, "text": "\\epsilon_d = \\frac{\\%\\ \\mbox{change in quantity demanded}}{\\%\\ \\mbox{change in income}}" }, { "math_id": 22, "text": "AED = \\frac{\\%\\ \\mbox{change in quantity demanded}}{\\%\\ \\mbox{change in spending on advertising}} = \\frac{\\Delta Q_d/Q_d}{\\Delta A/A} " }, { "math_id": 23, "text": "\n\n\\frac{\\partial x_i}{\\partial p_i} = \\frac{\\partial h_i}{\\partial p_i} - \\frac{\\partial x_i}{\\partial m}x_i\n\n" } ]
https://en.wikipedia.org/wiki?curid=588606
588615
Ant colony optimization algorithms
Optimization algorithm In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing. As an example, ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' (e.g. simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee, another social insect. This algorithm is a member of the ant colony algorithms family, within swarm intelligence methods, and it constitutes a metaheuristic optimization. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to find an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result, several algorithms have emerged, drawing on various aspects of the behavior of ants. From a broader perspective, ACO performs a model-based search and shares some similarities with estimation of distribution algorithms. Overview.
In the natural world, ants of some species (initially) wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely not to keep travelling at random, but instead to follow the trail, returning and reinforcing it if they eventually find food (see Ant communication). Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over more frequently, and thus the pheromone density becomes higher on shorter paths than longer ones. Pheromone evaporation also has the advantage of avoiding the convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones. In that case, the exploration of the solution space would be constrained. The influence of pheromone evaporation in real ant systems is unclear, but it is very important in artificial systems. The overall result is that when one ant finds a good (i.e., short) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads to many ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve. Ambient networks of intelligent objects. New concepts are required since “intelligence” is no longer centralized but can be found throughout all minuscule objects. Anthropocentric concepts have been known to lead to the production of IT systems in which data processing, control units and calculating forces are centralized. These centralized units have continually increased their performance and can be compared to the human brain. 
The model of the brain has become the ultimate vision of computers. Ambient networks of intelligent objects and, sooner or later, a new generation of information systems that are even more diffused and based on nanotechnology will profoundly change this concept. Small devices that can be compared to insects do not possess high intelligence on their own. Indeed, their intelligence can be classed as fairly limited. It is, for example, impossible to integrate a high-performance calculator with the power to solve any kind of mathematical problem into a biochip that is implanted into the human body or integrated into an intelligent tag designed to trace commercial articles. However, once those objects are interconnected, they possess a form of intelligence that can be compared to a colony of ants or bees. In the case of certain problems, this type of intelligence can be superior to the reasoning of a centralized system similar to the brain. Nature offers several examples of how minuscule organisms, if they all follow the same basic rule, can create a form of collective intelligence on the macroscopic level. Colonies of social insects perfectly illustrate this model, which greatly differs from human societies. This model is based on the co-operation of independent units with simple and unpredictable behavior. They move through their surrounding area to carry out certain tasks and only possess a very limited amount of information to do so. A colony of ants, for example, exhibits numerous qualities that can also be applied to a network of ambient objects. Colonies of ants have a very high capacity to adapt themselves to changes in the environment, as well as enormous strength in dealing with situations where one individual fails to carry out a given task. This kind of flexibility would also be very useful for mobile networks of objects which are perpetually developing.
Parcels of information that move from a computer to a digital object behave in the same way as ants would do. They move through the network and pass from one node to the next with the objective of arriving at their final destination as quickly as possible. Artificial pheromone system. Pheromone-based communication is one of the most effective ways of communication widely observed in nature. Pheromones are used by social insects such as bees, ants and termites, both for inter-agent and agent-swarm communication. Due to their feasibility, artificial pheromones have been adopted in multi-robot and swarm robotic systems. Pheromone-based communication has been implemented by different means, such as chemical or physical (RFID tags, light, sound) ways. However, those implementations were not able to replicate all the aspects of pheromones as seen in nature. Using projected light was presented in a 2007 IEEE paper by Garnier, Simon, et al. as an experimental setup to study pheromone-based communication with micro autonomous robots. Another study presented a system in which pheromones were implemented via a horizontal LCD screen on which the robots moved, with the robots having downward-facing light sensors to register the patterns beneath them. Algorithm and formula. In the ant colony optimization algorithms, an artificial ant is a simple computational agent that searches for good solutions to a given optimization problem. To apply an ant colony algorithm, the optimization problem needs to be converted into the problem of finding the shortest path on a weighted graph. In the first step of each iteration, each ant stochastically constructs a solution, i.e. the order in which the edges in the graph should be followed. In the second step, the paths found by the different ants are compared. The last step consists of updating the pheromone levels on each edge.
procedure ACO_MetaHeuristic is
    while not terminated do
        generateSolutions()
        daemonActions()
        pheromoneUpdate()
    end while
end procedure

Edge selection. Each ant needs to construct a solution to move through the graph. To select the next edge in its tour, an ant will consider the length of each edge available from its current position, as well as the corresponding pheromone level. At each step of the algorithm, each ant moves from a state formula_0 to state formula_1, corresponding to a more complete intermediate solution. Thus, each ant formula_2 computes a set formula_3 of feasible expansions to its current state in each iteration, and moves to one of these with a certain probability. For ant formula_2, the probability formula_4 of moving from state formula_0 to state formula_1 depends on the combination of two values: the "attractiveness" formula_5 of the move, as computed by some heuristic indicating the "a priori" desirability of that move, and the "trail level" formula_6 of the move, indicating how proficient it has been in the past to make that particular move. The "trail level" represents an a posteriori indication of the desirability of that move. In general, the formula_2th ant moves from state formula_0 to state formula_1 with probability formula_7 where formula_6 is the amount of pheromone deposited for transition from state formula_0 to formula_1, formula_8 ≥ 0 is a parameter to control the influence of formula_6, formula_5 is the desirability of state transition formula_9 ("a priori" knowledge, typically formula_10, where formula_11 is the distance) and formula_12 ≥ 1 is a parameter to control the influence of formula_5. formula_13 and formula_14 represent the trail level and attractiveness for the other possible state transitions. Pheromone update. Trails are usually updated when all ants have completed their solution, increasing or decreasing the level of trails corresponding to moves that were part of "good" or "bad" solutions, respectively.
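The edge-selection rule above amounts to a roulette-wheel choice weighted by τ^α·η^β. A minimal Python sketch follows; the toy graph, distances, and parameter values are illustrative assumptions, not taken from any particular ACO implementation:

```python
import random

def select_next(current, allowed, tau, eta, alpha=1.0, beta=2.0):
    """Pick the next state with probability proportional to
    (tau^alpha) * (eta^beta), as in the edge-selection rule."""
    weights = [(tau[(current, y)] ** alpha) * (eta[(current, y)] ** beta)
               for y in allowed]
    total = sum(weights)
    r = random.random() * total
    for y, w in zip(allowed, weights):
        r -= w
        if r <= 0:
            return y
    return allowed[-1]  # guard against floating-point rounding

# Toy example: from city 0, three candidate cities at distances 1, 2 and 4,
# all with equal pheromone, so eta = 1/d drives the choice.
random.seed(42)
tau = {(0, 1): 1.0, (0, 2): 1.0, (0, 3): 1.0}
eta = {(0, 1): 1.0, (0, 2): 0.5, (0, 3): 0.25}
counts = {1: 0, 2: 0, 3: 0}
for _ in range(10000):
    counts[select_next(0, [1, 2, 3], tau, eta)] += 1
# the nearest city (1) is chosen most often, the farthest (3) least often
```

With β = 2 the weights are 1, 0.25 and 0.0625, so roughly three quarters of the moves go to the nearest city; raising β sharpens this greedy bias, while raising α strengthens the influence of the pheromone trail instead.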
An example of a global pheromone updating rule is formula_15 where formula_6 is the amount of pheromone deposited for a state transition formula_9, formula_16 is the "pheromone evaporation coefficient", formula_17 is the number of ants and formula_18 is the amount of pheromone deposited by the formula_2th ant, typically given for a TSP problem (with moves corresponding to arcs of the graph) by formula_19 where formula_20 is the cost of the formula_2th ant's tour (typically length) and formula_21 is a constant. Common extensions. Here are some of the most popular variations of ACO algorithms. Ant system (AS). The ant system is the first ACO algorithm. This algorithm corresponds to the one presented above. It was developed by Dorigo. Ant colony system (ACS). In the ant colony system algorithm, the original ant system was modified in three aspects: Elitist ant system. In this algorithm, the global best solution deposits pheromone on its trail after every iteration (even if this trail has not been revisited), along with all the other ants. The objective of the elitist strategy is to direct the search of all ants toward constructing a solution containing links of the current best route. Max-min ant system (MMAS). This algorithm controls the maximum and minimum pheromone amounts on each trail. Only the global best tour or the iteration best tour is allowed to add pheromone to its trail. To avoid stagnation of the search algorithm, the range of possible pheromone amounts on each trail is limited to an interval [τmax,τmin]. All edges are initialized to τmax to force a higher exploration of solutions. The trails are reinitialized to τmax when nearing stagnation. Rank-based ant system (ASrank). All solutions are ranked according to their length. Only a fixed number of the best ants in this iteration are allowed to update their trails.
The amount of pheromone deposited is weighted for each solution, such that solutions with shorter paths deposit more pheromone than solutions with longer paths. Parallel ant colony optimization (PACO). An ant colony system (ACS) with communication strategies has been developed. The artificial ants are partitioned into several groups. Seven communication methods for updating the pheromone level between groups in ACS have been proposed and applied to the traveling salesman problem. Continuous orthogonal ant colony (COAC). The pheromone deposit mechanism of COAC is to enable ants to search for solutions collaboratively and effectively. By using an orthogonal design method, ants in the feasible domain can explore their chosen regions rapidly and efficiently, with enhanced global search capability and accuracy. The orthogonal design method and the adaptive radius adjustment method can also be extended to other optimization algorithms for delivering wider advantages in solving practical problems. Recursive ant colony optimization. This is a recursive form of ant system which divides the whole search domain into several sub-domains and solves the objective on these subdomains. The results from all the subdomains are compared and the best few of them are promoted to the next level. The subdomains corresponding to the selected results are further subdivided and the process is repeated until an output of desired precision is obtained. This method has been tested on ill-posed geophysical inversion problems and works well. Convergence. For some versions of the algorithm, it is possible to prove that it is convergent (i.e., it is able to find the global optimum in finite time). The first convergence proof for an ant colony algorithm was given in 2000 for the graph-based ant system algorithm, and later for the ACS and MMAS algorithms. Like most metaheuristics, it is very difficult to estimate the theoretical speed of convergence.
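The global pheromone update rule given earlier (evaporation followed by per-ant deposits of Q/L_k on the edges each ant used) can be sketched in a few lines of Python; the tiny graph, tours, and the values of Q and the evaporation coefficient are illustrative assumptions:

```python
def update_pheromone(tau, tours, costs, rho=0.5, Q=1.0):
    """Apply tau <- (1 - rho)*tau + sum over ants of dtau,
    where dtau = Q / L_k on edges used by ant k and 0 elsewhere."""
    # evaporation on every edge
    for edge in tau:
        tau[edge] *= (1.0 - rho)
    # deposit by each ant along its tour, weighted by tour cost
    for tour, cost in zip(tours, costs):
        deposit = Q / cost
        for edge in zip(tour, tour[1:]):
            if edge in tau:
                tau[edge] += deposit

# Two ants on a tiny graph: the cheaper tour deposits more pheromone.
tau = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 1.0}
update_pheromone(tau,
                 tours=[["A", "B", "C"], ["A", "C"]],
                 costs=[4.0, 2.0])
# ("A","C"): 0.5 after evaporation + 0.5 deposit = 1.0
# ("A","B") and ("B","C"): 0.5 + 0.25 = 0.75 each
```

The evaporation step is what keeps early, mediocre tours from locking in: without it every deposit would accumulate forever, which is exactly the stagnation problem discussed in the overview.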
A performance analysis of a continuous ant colony algorithm with respect to its various parameters (edge selection strategy, distance measure metric, and pheromone evaporation rate) showed that its performance and rate of convergence are sensitive to the chosen parameter values, and especially to the value of the pheromone evaporation rate. In 2004, Zlochin and his colleagues showed that ACO-type algorithms are closely related to stochastic gradient descent, the cross-entropy method and estimation of distribution algorithms. They proposed the umbrella term "model-based search" to describe this class of metaheuristics. Applications. Ant colony optimization algorithms have been applied to many combinatorial optimization problems, ranging from quadratic assignment to protein folding and vehicle routing, and many derived methods have been adapted to dynamic problems in real variables, stochastic problems, multi-targets and parallel implementations. It has also been used to produce near-optimal solutions to the travelling salesman problem. They have an advantage over simulated annealing and genetic algorithm approaches to similar problems when the graph may change dynamically; the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems. The first ACO algorithm was called the ant system, and it aimed to solve the travelling salesman problem, in which the goal is to find the shortest round-trip to link a series of cities. The general algorithm is relatively simple and based on a set of ants, each making one of the possible round-trips along the cities. At each stage, the ant chooses to move from one city to another according to some rules: Antennas optimization and synthesis. To optimize the form of antennas, ant colony algorithms can be used.
As an example, consider RFID-tag antennas designed with ant colony algorithms (ACO), using loopback and unloopback vibrators (10×10). Image processing. The ACO algorithm is used in image processing for image edge detection and edge linking. The graph here is the 2-D image, and the ants traverse from one pixel to another, depositing pheromone. The movement of ants from one pixel to another is directed by the local variation of the image's intensity values. This movement causes the highest density of the pheromone to be deposited at the edges. The following are the steps involved in edge detection using ACO: "Step 1: Initialization." Randomly place formula_22 ants on the image formula_23 where formula_24 . The pheromone matrix formula_25 is initialized with a random value. The major challenge in the initialization process is determining the heuristic matrix. There are various methods to determine the heuristic matrix. For the example below, the heuristic matrix was calculated based on the local statistics at the pixel position formula_26: formula_27 where formula_28 is the image of size formula_29, formula_30 is a normalization factor, and formula_31 formula_32 can be calculated using the following functions: formula_33 formula_34 formula_35 formula_36 The parameter formula_37 in each of the above functions adjusts the functions' respective shapes. "Step 2: Construction process." The ant's movement is based on 4-connected pixels or 8-connected pixels. The probability with which the ant moves is given by the probability equation formula_38 "Step 3 and step 5: Update process." The pheromone matrix is updated twice. In step 3, the trail of the ant (given by formula_39 ) is updated, whereas in step 5 the evaporation rate of the trail is updated, which is given by: formula_40, where formula_41 is the pheromone decay coefficient formula_42 "Step 7: Decision process."
Once the K ants have moved a fixed distance L for N iterations, the decision of whether a pixel is an edge or not is based on the threshold T applied to the pheromone matrix τ. The threshold for the example below is calculated based on Otsu's method. Image edge detected using ACO: The images below are generated using the different functions given by equations (1) to (4). Definition difficulty. With an ACO algorithm, the shortest path in a graph, between two points A and B, is built from a combination of several paths. It is not easy to give a precise definition of which algorithms are or are not ant colonies, because the definition may vary according to the authors and uses. Broadly speaking, ant colony algorithms are regarded as populated metaheuristics with each solution represented by an ant moving in the search space. Ants mark the best solutions and take account of previous markings to optimize their search. They can be seen as probabilistic multi-agent algorithms using a probability distribution to make the transition between each iteration. In their versions for combinatorial problems, they use an iterative construction of solutions. According to some authors, the thing which distinguishes ACO algorithms from other relatives (such as algorithms to estimate the distribution or particle swarm optimization) is precisely their constructive aspect. In combinatorial problems, it is possible that the best solution will eventually be found, even though no single ant would prove effective. Thus, in the example of the travelling salesman problem, it is not necessary that an ant actually travels the shortest route: the shortest route can be built from the strongest segments of the best solutions. However, this definition can be problematic in the case of problems in real variables, where no structure of 'neighbours' exists. The collective behaviour of social insects remains a source of inspiration for researchers.
The wide variety of algorithms (for optimization or not) seeking self-organization in biological systems has led to the concept of "swarm intelligence", which is a very general framework in which ant colony algorithms fit. Stigmergy algorithms. There is in practice a large number of algorithms claiming to be "ant colonies", without always sharing the general framework of optimization by canonical ant colonies. In practice, the use of an exchange of information between ants via the environment (a principle called "stigmergy") is deemed enough for an algorithm to belong to the class of ant colony algorithms. This principle has led some authors to create the term "value" to organize methods and behavior based on search of food, sorting larvae, division of labour and cooperative transportation. These maintain a pool of solutions rather than just one. The process of finding superior solutions mimics that of evolution, with solutions being combined or mutated to alter the pool of solutions, with solutions of inferior quality being discarded. An evolutionary algorithm that substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as probabilistic graphical models, from which new solutions can be sampled or generated from guided-crossover. A related global optimization technique which traverses the search space by generating neighboring solutions of the current solution. A superior neighbor is always accepted. An inferior neighbor is accepted probabilistically based on the difference in quality and a temperature parameter. The temperature parameter is modified as the algorithm progresses to alter the nature of the search. 
Focuses on combining machine learning with optimization, by adding an internal feedback loop to self-tune the free parameters of an algorithm to the characteristics of the problem, of the instance, and of the local situation around the current solution. Similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest fitness of those generated. To prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space. Modeled on vertebrate immune systems. A swarm intelligence method. A swarm-based optimization algorithm based on natural water drops flowing in rivers. A swarm intelligence method. A method that makes use of a clustering approach, extending the ACO. An agent-based probabilistic global search and optimization technique best suited to problems where the objective function can be decomposed into multiple independent partial-functions. History. Chronology of ant colony optimization algorithms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "y" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "A_k(x)" }, { "math_id": 4, "text": "p_{xy}^k" }, { "math_id": 5, "text": "\\eta_{xy}" }, { "math_id": 6, "text": "\\tau_{xy}" }, { "math_id": 7, "text": "\np_{xy}^k =\n\\frac\n{ (\\tau_{xy}^{\\alpha}) (\\eta_{xy}^{\\beta}) }\n{ \\sum_{z\\in \\mathrm{allowed}_y} (\\tau_{xz}^{\\alpha}) (\\eta_{xz}^{\\beta}) }\n" }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "xy" }, { "math_id": 10, "text": "1/d_{xy}" }, { "math_id": 11, "text": "d" }, { "math_id": 12, "text": "\\beta" }, { "math_id": 13, "text": "\\tau_{xz}" }, { "math_id": 14, "text": "\\eta_{xz}" }, { "math_id": 15, "text": "\n\\tau_{xy} \\leftarrow\n(1-\\rho)\\tau_{xy} + \\sum_{k}^{m}\\Delta \\tau^{k}_{xy}\n" }, { "math_id": 16, "text": "\\rho" }, { "math_id": 17, "text": "m" }, { "math_id": 18, "text": "\\Delta \\tau^{k}_{xy}" }, { "math_id": 19, "text": "\n\\Delta \\tau^{k}_{xy} =\n\\begin{cases}\nQ/L_k & \\mbox{if ant }k\\mbox{ uses curve }xy\\mbox{ in its tour} \\\\\n0 & \\mbox{otherwise}\n\\end{cases}\n" }, { "math_id": 20, "text": "L_k" }, { "math_id": 21, "text": "Q" }, { "math_id": 22, "text": "K" }, { "math_id": 23, "text": "I_{M_1 M_2}" }, { "math_id": 24, "text": "K= (M_1*M_2)^\\tfrac{1}{2}" }, { "math_id": 25, "text": "\\tau_{(i,j)}" }, { "math_id": 26, "text": "(i,j)" }, { "math_id": 27, "text": "\\eta_{(i,j)}= \\tfrac{1}{Z}*Vc*I_{(i,j)}," }, { "math_id": 28, "text": "I" }, { "math_id": 29, "text": "M_1*M_2" }, { "math_id": 30, "text": "Z =\\sum_{i=1:M_1} \\sum_{j=1:M_2} Vc(I_{i,j})" }, { "math_id": 31, "text": "\\begin{align}Vc(I_{i,j}) = &f \\left( \\left\\vert I_{(i-2,j-1)} - I_{(i+2,j+1)} \\right\\vert + \\left\\vert I_{(i-2,j+1)} - I_{(i+2,j-1)} \\right\\vert \\right. 
\\\\\n& +\\left\\vert I_{(i-1,j-2)} - I_{(i+1,j+2)} \\right\\vert + \\left\\vert I_{(i-1,j-1)} - I_{(i+1,j+1)} \\right\\vert\\\\\n& +\\left\\vert I_{(i-1,j)} - I_{(i+1,j)} \\right\\vert + \\left\\vert I_{(i-1,j+1)} - I_{(i-1,j-1)} \\right\\vert\\\\\n& + \\left. \\left\\vert I_{(i-1,j+2)} - I_{(i-1,j-2)} \\right\\vert + \\left\\vert I_{(i,j-1)} - I_{(i,j+1)} \\right\\vert \\right) \\end{align}" }, { "math_id": 32, "text": "f(\\cdot)" }, { "math_id": 33, "text": "f(x) = \\lambda x, \\quad \\text{for x ≥ 0; (1)} " }, { "math_id": 34, "text": "f(x) = \\lambda x^2, \\quad \\text{for x ≥ 0; (2)} " }, { "math_id": 35, "text": "f(x) =\n\\begin{cases}\n\\sin(\\frac{\\pi x}{2 \\lambda}), & \\text{for 0 ≤ x ≤} \\lambda \\text{; (3)} \\\\\n0, & \\text{else}\n\\end{cases}" }, { "math_id": 36, "text": "f(x) =\n\\begin{cases}\n\\pi x \\sin(\\frac{\\pi x}{2 \\lambda}), & \\text{for 0 ≤ x ≤} \\lambda \\text{; (4)} \\\\\n0, & \\text{else}\n\\end{cases}" }, { "math_id": 37, "text": "\\lambda" }, { "math_id": 38, "text": "P_{x,y}" }, { "math_id": 39, "text": "\\tau_{(x,y)}" }, { "math_id": 40, "text": "\n\\tau_{new} \\leftarrow\n(1-\\psi)\\tau_{old} + \\psi \\tau_{0}\n" }, { "math_id": 41, "text": "\\psi" }, { "math_id": 42, "text": "0< \\tau <1" } ]
https://en.wikipedia.org/wiki?curid=588615
58862
Three utilities problem
Mathematical puzzle of avoiding crossings The classical mathematical puzzle known as the three utilities problem or sometimes water, gas and electricity asks for non-crossing connections to be drawn between three houses and three utility companies in the plane. When posing it in the early 20th century, Henry Dudeney wrote that it was already an old problem. It is an impossible puzzle: it is not possible to connect all nine lines without crossing. Versions of the problem on nonplanar surfaces such as a torus or Möbius strip, or that allow connections to pass through other houses or utilities, can be solved. This puzzle can be formalized as a problem in topological graph theory by asking whether the complete bipartite graph formula_0, with vertices representing the houses and utilities and edges representing their connections, has a graph embedding in the plane. The impossibility of the puzzle corresponds to the fact that formula_0 is not a planar graph. Multiple proofs of this impossibility are known, and form part of the proof of Kuratowski's theorem characterizing planar graphs by two forbidden subgraphs, one of which is formula_0. The question of minimizing the number of crossings in drawings of complete bipartite graphs is known as Turán's brick factory problem, and for formula_0 the minimum number of crossings is one. formula_0 is a graph with six vertices and nine edges, often referred to as the utility graph in reference to the problem. It has also been called the Thomsen graph after 19th-century chemist Julius Thomsen. It is a well-covered graph, the smallest triangle-free cubic graph, and the smallest non-planar minimally rigid graph. History. A review of the history of the three utilities problem is given by Kullman. He states that most published references to the problem characterize it as "very ancient". In the earliest publication found by Kullman, Henry Dudeney (1917) names it "water, gas, and electricity".
However, Dudeney states that the problem is "as old as the hills...much older than electric lighting, or even gas". Dudeney also published the same puzzle previously, in "The Strand Magazine" in 1913. A competing claim of priority goes to Sam Loyd, who was quoted by his son in a posthumous biography as having published the problem in 1900. Another early version of the problem involves connecting three houses to three wells. It is stated similarly to a different (and solvable) puzzle that also involves three houses and three fountains, with all three fountains and one house touching a rectangular wall; the puzzle again involves making non-crossing connections, but only between three designated pairs of houses and wells or fountains, as in modern numberlink puzzles. Loyd's puzzle "The Quarrelsome Neighbors" similarly involves connecting three houses to three gates by three non-crossing paths (rather than nine as in the utilities problem); one house and the three gates are on the wall of a rectangular yard, which contains the other two houses within it. As well as in the three utilities problem, the graph formula_0 appears in late 19th-century and early 20th-century publications both in early studies of structural rigidity and in chemical graph theory, where Julius Thomsen proposed it in 1886 for the then-uncertain structure of benzene. In honor of Thomsen's work, formula_0 is sometimes called the Thomsen graph. Statement. The three utilities problem can be stated as follows: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Suppose three houses each need to be connected to the water, gas, and electricity companies, with a separate line from each house to each company. Is there a way to make all nine connections without any of the lines crossing each other? The problem is an abstract mathematical puzzle which imposes constraints that would not exist in a practical engineering situation. 
Its mathematical formalization is part of the field of topological graph theory which studies the embedding of graphs on surfaces. An important part of the puzzle, but one that is often not stated explicitly in informal wordings of the puzzle, is that the houses, companies, and lines must all be placed on a two-dimensional surface with the topology of a plane, and that the lines are not allowed to pass through other buildings; sometimes this is enforced by showing a drawing of the houses and companies, and asking for the connections to be drawn as lines on the same drawing. In more formal graph-theoretic terms, the problem asks whether the complete bipartite graph formula_0 is a planar graph. This graph has six vertices in two subsets of three: one vertex for each house, and one for each utility. It has nine edges, one edge for each of the pairings of a house with a utility, or more abstractly one edge for each pair of a vertex in one subset and a vertex in the other subset. Planar graphs are the graphs that can be drawn without crossings in the plane, and if such a drawing could be found, it would solve the three utilities puzzle. Puzzle solutions. Unsolvability. As it is usually presented (on a flat two-dimensional plane), the solution to the utility puzzle is "no": there is no way to make all nine connections without any of the lines crossing each other. In other words, the graph formula_0 is not planar. Kazimierz Kuratowski stated in 1930 that formula_0 is nonplanar, from which it follows that the problem has no solution. Kullman, however, states that "Interestingly enough, Kuratowski did not publish a detailed proof that [ formula_0 ] is non-planar". One proof of the impossibility of finding a planar embedding of formula_0 uses a case analysis involving the Jordan curve theorem. 
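The impossibility can also be checked numerically through the edge-counting bound for bridgeless bipartite planar graphs (every such graph satisfies E ≤ 2V − 4, as derived below); a minimal Python sketch, with house and utility labels chosen for illustration:

```python
from itertools import product

# Model K_{3,3}: every house connected to every utility.
houses = ["house1", "house2", "house3"]
utilities = ["water", "gas", "electricity"]
edges = list(product(houses, utilities))

V = len(houses) + len(utilities)  # 6 vertices
E = len(edges)                    # 9 edges

# A bridgeless bipartite planar graph must satisfy E <= 2V - 4,
# because every face of a planar embedding is bounded by at least 4 edges.
satisfies_planar_bound = E <= 2 * V - 4
print(E, 2 * V - 4, satisfies_planar_bound)  # 9 8 False
```

Since 9 > 8, the bound fails, which is the counting form of the impossibility argument.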
In this solution, one examines different possibilities for the locations of the vertices with respect to the 4-cycles of the graph and shows that they are all inconsistent with a planar embedding. Alternatively, it is possible to show that any bridgeless bipartite planar graph with formula_1 vertices and formula_2 edges has formula_3 by combining the Euler formula formula_4 (where formula_5 is the number of faces of a planar embedding) with the observation that the number of faces is at most half the number of edges (the vertices around each face must alternate between houses and utilities, so each face has at least four edges, and each edge belongs to exactly two faces). In the utility graph, formula_6 and formula_7, so the inequality formula_3 does not hold, and therefore the utility graph cannot be planar. Changing the rules. formula_0 is a toroidal graph, which means that it can be embedded without crossings on a torus, a surface of genus one. These embeddings solve versions of the puzzle in which the houses and companies are drawn on a coffee mug or other such surface instead of a flat plane. There is even enough additional freedom on the torus to solve a version of the puzzle with four houses and four utilities. Similarly, if the three utilities puzzle is presented on a sheet of a transparent material, it may be solved after twisting and gluing the sheet to form a Möbius strip. Another way of changing the rules of the puzzle that would make it solvable, suggested by Henry Dudeney, is to allow utility lines to pass through other houses or utilities than the ones they connect. Properties of the utility graph. Beyond the utility puzzle, the same graph formula_0 comes up in several other mathematical contexts, including rigidity theory, the classification of cages and well-covered graphs, the study of graph crossing numbers, and the theory of graph minors. Rigidity. 
The utility graph formula_0 is a Laman graph, meaning that for almost all placements of its vertices in the plane, there is no way to continuously move its vertices while preserving all edge lengths, other than by a rigid motion of the whole plane, and that none of its spanning subgraphs have the same rigidity property. It is the smallest example of a nonplanar Laman graph. Despite being a minimally rigid graph, it has non-rigid embeddings with special placements for its vertices. For general-position embeddings, a polynomial equation describing all possible placements with the same edge lengths has degree 16, meaning that in general there can be at most 16 placements with the same lengths. It is possible to find systems of edge lengths for which up to eight of the solutions to this equation describe realizable placements. Other graph-theoretic properties. formula_0 is a triangle-free graph, in which every vertex has exactly three neighbors (a cubic graph). Among all such graphs, it is the smallest. Therefore, it is the (3,4)-cage, the smallest graph that has three neighbors per vertex and in which the shortest cycle has length four. Like all other complete bipartite graphs, it is a well-covered graph, meaning that every maximal independent set has the same size. In this graph, the only two maximal independent sets are the two sides of the bipartition, and are of equal sizes. formula_0 is one of only seven 3-regular 3-connected well-covered graphs. Generalizations. Two important characterizations of planar graphs, Kuratowski's theorem that the planar graphs are exactly the graphs that contain neither formula_0 nor the complete graph formula_8 as a subdivision, and Wagner's theorem that the planar graphs are exactly the graphs that contain neither formula_0 nor formula_8 as a minor, make use of and generalize the non-planarity of formula_0. 
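The elementary properties listed under "Other graph-theoretic properties" above (cubic and triangle-free) can be verified directly by exhaustive checking; a small Python sketch, where the vertex labels 0–5 are an illustrative assumption:

```python
from itertools import combinations

# Adjacency of K_{3,3}: vertices 0-2 form one side, 3-5 the other.
adj = {v: set() for v in range(6)}
for a in (0, 1, 2):
    for b in (3, 4, 5):
        adj[a].add(b)
        adj[b].add(a)

# Cubic: every vertex has exactly three neighbors.
is_cubic = all(len(neighbors) == 3 for neighbors in adj.values())

# Triangle-free: no three mutually adjacent vertices exist.
has_triangle = any(
    b in adj[a] and c in adj[a] and c in adj[b]
    for a, b, c in combinations(adj, 3)
)
print(is_cubic, not has_triangle)  # True True
```

Both checks succeed because every edge joins the two sides of the bipartition, so no vertex can have more than three neighbors and no odd cycle (in particular no triangle) can occur.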
Pál Turán's "brick factory problem" asks more generally for a formula for the minimum number of crossings in a drawing of the complete bipartite graph formula_9 in terms of the numbers of vertices formula_10 and formula_11 on the two sides of the bipartition. The utility graph formula_0 may be drawn with only one crossing, but not with zero crossings, so its crossing number is one. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K_{3,3}" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "E\\le 2V-4" }, { "math_id": 4, "text": "V-E+F=2" }, { "math_id": 5, "text": "F" }, { "math_id": 6, "text": "E=9" }, { "math_id": 7, "text": "2V-4=8" }, { "math_id": 8, "text": "K_5" }, { "math_id": 9, "text": "K_{a,b}" }, { "math_id": 10, "text": "a" }, { "math_id": 11, "text": "b" } ]
https://en.wikipedia.org/wiki?curid=58862
58862435
Vertex k-center problem
The vertex "k"-center problem is a classical NP-hard problem in computer science. It has applications in facility location and clustering. Basically, the vertex "k"-center problem models the following real-world problem: given a city with formula_0 facilities, find the best formula_1 facilities at which to build fire stations. Since firefighters must respond to any emergency as quickly as possible, the distance from the farthest facility to its nearest fire station has to be as small as possible. In other words, the positions of the fire stations must be such that every possible fire is attended to as quickly as possible. Formal definition. The vertex "k"-center problem is a classical NP-hard problem in computer science. It was first proposed by Hakimi in 1964. Formally, the vertex "k"-center problem is defined as follows: given a complete undirected graph formula_2 in a metric space, and a positive integer formula_1, find a subset formula_3 such that formula_4 and the objective function formula_5 is minimized. The distance formula_6 is defined as the distance from the vertex formula_7 to its nearest center in formula_8. Approximation algorithms. If formula_9, the vertex "k"-center problem cannot be solved optimally in polynomial time. However, there are polynomial-time approximation algorithms that produce near-optimal, specifically 2-approximated, solutions. In fact, if formula_9, a 2-approximated solution is the best that can be achieved by a polynomial-time algorithm. In the context of a minimization problem, such as the vertex "k"-center problem, a 2-approximated solution is any solution formula_10 such that formula_11, where formula_12 is the size of an optimal solution. An algorithm that is guaranteed to generate 2-approximated solutions is known as a 2-approximation algorithm. The main 2-approximation algorithms for the vertex "k"-center problem reported in the literature are the Sh algorithm, the HS algorithm, and the Gon algorithm. 
Even though these algorithms are the best possible polynomial-time ones, their performance on most benchmark datasets is very poor. Because of this, many heuristics and metaheuristics have been developed over time. Contrary to intuition, one of the most practical (polynomial) heuristics for the vertex "k"-center problem is based on the CDS algorithm, which is a 3-approximation algorithm. The Sh algorithm. Formally characterized by David Shmoys in 1995, the Sh algorithm takes as input a complete undirected graph formula_2, a positive integer formula_1, and an assumption formula_13 on what the optimal solution size is. The Sh algorithm works as follows: it selects the first center formula_14 at random. So far, the solution consists of only one vertex, formula_15. Next, it selects center formula_16 at random from the set containing all the vertices whose distance from formula_8 is greater than formula_17. At this point, formula_18. Finally, it selects the remaining formula_19 centers the same way formula_16 was selected. The complexity of the Sh algorithm is formula_20, where formula_0 is the number of vertices. The HS algorithm. Proposed by Dorit Hochbaum and David Shmoys in 1985, the HS algorithm takes the Sh algorithm as its basis. Noticing that the value of formula_12 must equal the cost of some edge in formula_21, and that there are formula_22 edges in formula_21, the HS algorithm basically repeats the Sh algorithm with every edge cost. The complexity of the HS algorithm is formula_23. However, by running a binary search over the ordered set of edge costs, its complexity is reduced to formula_24. The Gon algorithm. Proposed independently by Teofilo Gonzalez, and by Martin Dyer and Alan Frieze in 1985, the Gon algorithm is basically a more powerful version of the Sh algorithm. 
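The farthest-point strategy shared by the Sh and Gon algorithms can be sketched as follows. This is an illustrative Python sketch of the Gon-style selection (it also computes the covering radius r(C) from the formal definition); it uses Euclidean points rather than a general metric graph and is not the authors' implementation:

```python
import math

def gon_k_centers(points, k):
    """Farthest-point heuristic for the k-center problem.
    Returns centers whose covering radius is at most twice the optimum."""
    centers = [points[0]]  # first center: an arbitrary vertex
    while len(centers) < k:
        # select the point farthest from its nearest current center
        farthest = max(
            points,
            key=lambda v: min(math.dist(v, c) for c in centers),
        )
        centers.append(farthest)
    # objective value r(C): distance from the worst-served point
    radius = max(min(math.dist(v, c) for c in centers) for v in points)
    return centers, radius

points = [(0, 0), (1, 0), (10, 0), (11, 0)]
centers, radius = gon_k_centers(points, k=2)
print(centers, radius)  # [(0, 0), (11, 0)] 1.0
```

Each of the k − 1 farthest-point selections scans all n points against the current centers, which gives the O(kn) behavior quoted for the Gon algorithm when nearest-center distances are maintained incrementally.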
While the Sh algorithm requires a guess formula_13 on formula_12, the Gon algorithm dispenses with such a guess by noticing that if any set of vertices at distance greater than formula_25 exists, then the farthest vertex must be inside that set. Therefore, instead of computing at each iteration the set of vertices at distance greater than formula_17 and then selecting a random vertex, the Gon algorithm simply selects the farthest vertex from every partial solution formula_10. The complexity of the Gon algorithm is formula_20, where formula_0 is the number of vertices. The CDS algorithm. Proposed by García Díaz et al. in 2017, the CDS algorithm is a 3-approximation algorithm that takes ideas from the Gon algorithm (farthest point heuristic), the HS algorithm (parametric pruning), and the relationship between the vertex "k"-center problem and the Dominating Set problem. The CDS algorithm has a complexity of formula_23. However, by performing a binary search over the ordered set of edge costs, a more efficient heuristic named CDSh is obtained. The CDSh algorithm complexity is formula_24. Despite the suboptimal performance of the CDS algorithm, and the heuristic performance of CDSh, both perform much better than the Sh, HS, and Gon algorithms. Parameterized approximations. It can be shown that the "k"-Center problem is W[2]-hard to approximate within a factor of 2 − ε for any ε &gt; 0, when using "k" as the parameter. This is also true when parameterizing by the doubling dimension (in fact the dimension of a Manhattan metric), unless P=NP. When considering the combined parameter given by "k" and the doubling dimension, "k"-Center is still W[1]-hard but it is possible to obtain a parameterized approximation scheme. This is even possible for the variant with vertex capacities, which bound how many vertices can be assigned to an opened center of the solution. Experimental comparison. 
Some of the most widely used benchmark datasets for the vertex "k"-center problem are the pmed instances from OR-Lib, and some instances from TSP-Lib. Table 1 shows the mean and standard deviation of the experimental approximation factors of the solutions generated by each algorithm over the 40 pmed instances from OR-Lib. Polynomial heuristics. Greedy pure algorithm. The greedy pure algorithm (or Gr) follows the core idea of greedy algorithms: to take optimal local decisions. In the case of the vertex "k"-center problem, the optimal local decision consists of selecting each center in such a way that the size of the solution (covering radius) is minimum at each iteration. In other words, the first center selected is the one that solves the 1-center problem. The second center selected is the one that, along with the previous center, generates a solution with minimum covering radius. The remaining centers are selected the same way. The complexity of the Gr algorithm is formula_26. The empirical performance of the Gr algorithm is poor on most benchmark instances. Scoring algorithm. The Scoring algorithm (or Scr) was introduced by Jurij Mihelič and Borut Robič in 2005. This algorithm takes advantage of the reduction from the vertex "k"-center problem to the minimum dominating set problem. The problem is solved by pruning the input graph with every possible value of the optimal solution size and then solving the minimum dominating set problem heuristically. This heuristic follows the "lazy principle," which makes every decision as late as possible (as opposed to the greedy strategy). The complexity of the Scr algorithm is formula_23. The empirical performance of the Scr algorithm is very good on most benchmark instances. However, its running time rapidly becomes impractical as the input grows. Thus, it seems to be a good algorithm only for small instances. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "G=(V,E)" }, { "math_id": 3, "text": "C \\subseteq V" }, { "math_id": 4, "text": "|C|\\le k" }, { "math_id": 5, "text": "r(C)=\\max_{v \\in V}\\{d(v,C)\\}" }, { "math_id": 6, "text": "d(v,C)" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "C" }, { "math_id": 9, "text": "P \\neq NP" }, { "math_id": 10, "text": "C'" }, { "math_id": 11, "text": "r(C') \\le 2 \\times r(\\text{OPT})" }, { "math_id": 12, "text": "r(\\text{OPT})" }, { "math_id": 13, "text": "r" }, { "math_id": 14, "text": "c_1" }, { "math_id": 15, "text": "C=\\{c_1\\}" }, { "math_id": 16, "text": "c_2" }, { "math_id": 17, "text": "2 \\times r" }, { "math_id": 18, "text": "C=\\{c_1,c_2\\}" }, { "math_id": 19, "text": "k-2" }, { "math_id": 20, "text": "O(kn)" }, { "math_id": 21, "text": "E" }, { "math_id": 22, "text": "O(n^2)" }, { "math_id": 23, "text": "O(n^4)" }, { "math_id": 24, "text": "O(n^2 \\log n)" }, { "math_id": 25, "text": "2 \\times r(\\text{OPT})" }, { "math_id": 26, "text": "O(kn^2)" } ]
https://en.wikipedia.org/wiki?curid=58862435
5886266
Sankar Das Sarma
Sankar Das Sarma is an India-born American theoretical condensed matter physicist. He has been a member of the department of physics at the University of Maryland, College Park since 1980. Das Sarma is the Richard E. Prange Chair in Physics, a distinguished university professor, a Fellow of the Joint Quantum Institute (JQI), and the director of the Condensed Matter Theory Center at the University of Maryland, College Park. Career. Das Sarma came to the United States from India as a physics graduate student in 1974 after finishing his secondary school (Hare School in Kolkata) and undergraduate education at Presidency College in Calcutta, India (now Presidency University in Kolkata), where he was born. He received his PhD in theoretical physics from Brown University in 1979 as a doctoral student of John Quinn. In collaboration with Chetan Nayak and Michael Freedman of Microsoft Research, Das Sarma introduced the formula_0 topological qubit in 2005, which has led to experiments in building a fault-tolerant quantum computer based on two-dimensional semiconductor structures. Das Sarma's work on graphene has led to the theoretical understanding of graphene carrier transport properties at low densities where the inhomogeneous electron-hole puddles dominate the graphene landscape. In 2006 Das Sarma with Euyheon Hwang provided the basic theory for collective modes and dielectric response in graphene and related chiral two-dimensional materials. In 2011 Das Sarma and collaborators introduced a new class of lattice tight-binding flat-band systems with nontrivial Chern numbers which belongs to the universality class of continuum quantum Hall and fractional quantum Hall systems without any external magnetic fields. Such flat-band tight-binding systems with non-trivial Chern numbers have substantially enhanced the types of possible physical systems for the realization of topological matter. 
In 2010, Das Sarma and collaborators, made a prediction that Majorana fermions will be found in condensed matter, in particular, in semiconductor nanowires. This has led to considerable experimental activity, led by Microsoft Corporation, to produce a topological quantum computer. He has been a visiting professor at many institutions during his professional career, including Technical University of Munich, IBM Thomas J. Watson Research Center, University of Hamburg, Cambridge University, University of California, Santa Barbara, University of New South Wales, Sandia National Laboratories, University of Melbourne, Kavli Institute for Theoretical Physics in Santa Barbara, Institute for Theoretical Physics in Beijing, and Microsoft Station Q Research Center. External links. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\nu=5/2 " } ]
https://en.wikipedia.org/wiki?curid=5886266
58863
Gödel's incompleteness theorems
Limitative results in mathematical logic Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible. The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency. Employing a diagonal argument, Gödel's incompleteness theorems were the first of several closely related theorems on the limitations of formal systems. They were followed by Tarski's undefinability theorem on the formal undefinability of truth, Church's proof that Hilbert's "Entscheidungsproblem" is unsolvable, and Turing's theorem that there is no algorithm to solve the halting problem. Formal systems: completeness, consistency, and effective axiomatization. The incompleteness theorems apply to formal systems that are of sufficient complexity to express the basic arithmetic of the natural numbers and which are consistent and effectively axiomatized. Particularly in the context of first-order logic, formal systems are also called "formal theories". In general, a formal system is a deductive apparatus that consists of a particular set of axioms along with rules of symbolic manipulation (or rules of inference) that allow for the derivation of new theorems from the axioms. 
One example of such a system is first-order Peano arithmetic, a system in which all variables are intended to denote natural numbers. In other systems, such as set theory, only some sentences of the formal system express statements about the natural numbers. The incompleteness theorems are about formal provability "within" these systems, rather than about "provability" in an informal sense. There are several properties that a formal system may have, including completeness, consistency, and the existence of an effective axiomatization. The incompleteness theorems show that systems which contain a sufficient amount of arithmetic cannot possess all three of these properties. Effective axiomatization. A formal system is said to be "effectively axiomatized" (also called "effectively generated") if its set of theorems is recursively enumerable. This means that there is a computer program that, in principle, could enumerate all the theorems of the system without listing any statements that are not theorems. Examples of effectively generated theories include Peano arithmetic and Zermelo–Fraenkel set theory (ZFC). The theory known as true arithmetic consists of all true statements about the standard integers in the language of Peano arithmetic. This theory is consistent and complete, and contains a sufficient amount of arithmetic. However, it does not have a recursively enumerable set of axioms, and thus does not satisfy the hypotheses of the incompleteness theorems. Completeness. A set of axioms is ("syntactically", or "negation"-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms. This is the notion relevant for Gödel's first Incompleteness theorem. It is not to be confused with "semantic" completeness, which means that the set of axioms proves all the semantic tautologies of the given language. 
In his completeness theorem (not to be confused with the incompleteness theorems described here), Gödel proved that first-order logic is "semantically" complete. But it is not syntactically complete, since there are sentences expressible in the language of first-order logic that can be neither proved nor disproved from the axioms of logic alone. For systems of mathematics, thinkers such as Hilbert believed that it was just a matter of time until such an axiomatization was found that would allow one to either prove or disprove (by proving its negation) every mathematical formula. A formal system might be syntactically incomplete by design, as logics generally are. Or it may be incomplete simply because not all the necessary axioms have been discovered or included. For example, Euclidean geometry without the parallel postulate is incomplete, because some statements in the language (such as the parallel postulate itself) cannot be proved from the remaining axioms. Similarly, the theory of dense linear orders is not complete, but becomes complete with an extra axiom stating that there are no endpoints in the order. The continuum hypothesis is a statement in the language of ZFC that is not provable within ZFC, so ZFC is not complete. In this case, there is no obvious candidate for a new axiom that resolves the issue. The theory of first-order Peano arithmetic seems consistent. Assuming this is indeed the case, note that it has an infinite but recursively enumerable set of axioms, and can encode enough arithmetic for the hypotheses of the incompleteness theorem. Thus by the first incompleteness theorem, Peano Arithmetic is not complete. The theorem gives an explicit example of a statement of arithmetic that is neither provable nor disprovable in Peano's arithmetic. Moreover, this statement is true in the usual model. In addition, no effectively axiomatized, consistent extension of Peano arithmetic can be complete. Consistency. 
A set of axioms is (simply) consistent if there is no statement such that both the statement and its negation are provable from the axioms, and "inconsistent" otherwise. That is to say, a consistent axiomatic system is one that is free from contradiction. Peano arithmetic is provably consistent from ZFC, but not from within itself. Similarly, ZFC is not provably consistent from within itself, but ZFC + "there exists an inaccessible cardinal" proves ZFC is consistent because if κ is the least such cardinal, then "V"κ sitting inside the von Neumann universe is a model of ZFC, and a theory is consistent if and only if it has a model. If one takes all statements in the language of Peano arithmetic as axioms, then this theory is complete, has a recursively enumerable set of axioms, and can describe addition and multiplication. However, it is not consistent. Additional examples of inconsistent theories arise from the paradoxes that result when the axiom schema of unrestricted comprehension is assumed in set theory. Systems which contain arithmetic. The incompleteness theorems apply only to formal systems which are able to prove a sufficient collection of facts about the natural numbers. One sufficient collection is the set of theorems of Robinson arithmetic Q. Some systems, such as Peano arithmetic, can directly express statements about natural numbers. Others, such as ZFC set theory, are able to interpret statements about natural numbers into their language. Either of these options is appropriate for the incompleteness theorems. The theory of algebraically closed fields of a given characteristic is complete, consistent, and has an infinite but recursively enumerable set of axioms. However it is not possible to encode the integers into this theory, and the theory cannot describe arithmetic of integers. A similar example is the theory of real closed fields, which is essentially equivalent to Tarski's axioms for Euclidean geometry. 
So Euclidean geometry itself (in Tarski's formulation) is an example of a complete, consistent, effectively axiomatized theory. The system of Presburger arithmetic consists of a set of axioms for the natural numbers with just the addition operation (multiplication is omitted). Presburger arithmetic is complete, consistent, and recursively enumerable and can encode addition but not multiplication of natural numbers, showing that for Gödel's theorems one needs the theory to encode not just addition but also multiplication. Dan Willard (2001) has studied some weak families of arithmetic systems which allow enough arithmetic as relations to formalise Gödel numbering, but which are not strong enough to have multiplication as a function, and so fail to prove the second incompleteness theorem; that is to say, these systems are consistent and capable of proving their own consistency (see self-verifying theories). Conflicting goals. In choosing a set of axioms, one goal is to be able to prove as many correct results as possible, without proving any incorrect results. For example, we could imagine a set of true axioms which allow us to prove every true arithmetical claim about the natural numbers. In the standard system of first-order logic, an inconsistent set of axioms will prove every statement in its language (this is sometimes called the principle of explosion), and is thus automatically complete. A set of axioms that is both complete and consistent, however, proves a maximal set of non-contradictory theorems. The pattern illustrated in the previous sections with Peano arithmetic, ZFC, and ZFC + "there exists an inaccessible cardinal" cannot generally be broken. Here ZFC + "there exists an inaccessible cardinal" cannot, from itself, be proved consistent. It is also not complete, as illustrated by the continuum hypothesis, which is unresolvable in ZFC + "there exists an inaccessible cardinal". 
The first incompleteness theorem shows that, in formal systems that can express basic arithmetic, a complete and consistent finite list of axioms can never be created: each time an additional, consistent statement is added as an axiom, there are other true statements that still cannot be proved, even with the new axiom. If an axiom is ever added that makes the system complete, it does so at the cost of making the system inconsistent. It is not even possible for an infinite list of axioms to be complete, consistent, and effectively axiomatized. First incompleteness theorem. Gödel's first incompleteness theorem first appeared as "Theorem VI" in Gödel's 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I". The hypotheses of the theorem were improved shortly thereafter by J. Barkley Rosser (1936) using Rosser's trick. The resulting theorem (incorporating Rosser's improvement) may be paraphrased in English as follows, where "formal system" includes the assumption that the system is effectively generated. First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e. there are statements of the language of F which can neither be proved nor disproved in F." (Raatikainen 2020) The unprovable statement "G""F" referred to by the theorem is often referred to as "the Gödel sentence" for the system F. The proof constructs a particular Gödel sentence for the system F, but there are infinitely many statements in the language of the system that share the same properties, such as the conjunction of the Gödel sentence and any logically valid sentence. Each effectively generated system has its own Gödel sentence. It is possible to define a larger system F' that contains the whole of F plus "G""F" as an additional axiom. This will not result in a complete system, because Gödel's theorem will also apply to F', and thus F' also cannot be complete. 
In this case, "G""F" is indeed a theorem in F', because it is an axiom. Because "G""F" states only that it is not provable in F, no contradiction is presented by its provability within F'. However, because the incompleteness theorem applies to F', there will be a new Gödel statement "G""F"' for F', showing that F' is also incomplete. "G""F"' will differ from "G""F" in that "G""F"' will refer to F', rather than F. Syntactic form of the Gödel sentence. The Gödel sentence is designed to refer, indirectly, to itself. The sentence states that, when a particular sequence of steps is used to construct another sentence, that constructed sentence will not be provable in F. However, the sequence of steps is such that the constructed sentence turns out to be "G""F" itself. In this way, the Gödel sentence "G""F" indirectly states its own unprovability within F. To prove the first incompleteness theorem, Gödel demonstrated that the notion of provability within a system could be expressed purely in terms of arithmetical functions that operate on Gödel numbers of sentences of the system. Therefore, the system, which can prove certain facts about numbers, can also indirectly prove facts about its own statements, provided that it is effectively generated. Questions about the provability of statements within the system are represented as questions about the arithmetical properties of numbers themselves, which would be decidable by the system if it were complete. Thus, although the Gödel sentence refers indirectly to sentences of the system F, when read as an arithmetical statement the Gödel sentence directly refers only to natural numbers. It asserts that no natural number has a particular property, where that property is given by a primitive recursive relation. As such, the Gödel sentence can be written in the language of arithmetic with a simple syntactic form. 
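As a toy illustration of the Gödel numbering mentioned above, a string of symbols can be encoded as a single natural number via prime-power encoding; the alphabet and symbol codes below are illustrative assumptions, not Gödel's exact coding table:

```python
def first_primes(n):
    """Return the first n primes by trial division (fine for a toy example)."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

# Illustrative symbol codes for a tiny fragment of arithmetic.
codes = {"0": 1, "S": 2, "=": 3, "+": 4}

def godel_number(formula):
    """Encode symbol s_i (with code c_i) as the exponent of the i-th prime.
    The product p1^c1 * p2^c2 * ... is uniquely decodable by factorization."""
    g = 1
    for p, symbol in zip(first_primes(len(formula)), formula):
        g *= p ** codes[symbol]
    return g

print(godel_number("S0=S0"))  # 2^2 * 3^1 * 5^3 * 7^2 * 11^1 = 808500
```

Because prime factorization is unique, the original symbol string can be recovered from the number, which is what lets statements about formulas be re-expressed as arithmetical statements about numbers.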
In particular, it can be expressed as a formula in the language of arithmetic consisting of a number of leading universal quantifiers followed by a quantifier-free body (these formulas are at level formula_0 of the arithmetical hierarchy). Via the MRDP theorem, the Gödel sentence can be re-written as a statement that a particular polynomial in many variables with integer coefficients never takes the value zero when integers are substituted for its variables. Truth of the Gödel sentence. The first incompleteness theorem shows that the Gödel sentence "G""F" of an appropriate formal theory F is unprovable in F. Because, when interpreted as a statement about arithmetic, this unprovability is exactly what the sentence (indirectly) asserts, the Gödel sentence is, in fact, true. For this reason, the sentence "G""F" is often said to be "true but unprovable". However, since the Gödel sentence cannot itself formally specify its intended interpretation, the truth of the sentence "G""F" may only be arrived at via a meta-analysis from outside the system. In general, this meta-analysis can be carried out within the weak formal system known as primitive recursive arithmetic, which proves the implication "Con"("F")→"G"F, where "Con"("F") is a canonical sentence asserting the consistency of F. Although the Gödel sentence of a consistent theory is true as a statement about the intended interpretation of arithmetic, the Gödel sentence will be false in some nonstandard models of arithmetic, as a consequence of Gödel's completeness theorem. That theorem shows that, when a sentence is independent of a theory, the theory will have models in which the sentence is true and models in which the sentence is false. As described earlier, the Gödel sentence of a system F is an arithmetical statement which claims that no number exists with a particular property. 
The incompleteness theorem shows that this claim will be independent of the system F, and the truth of the Gödel sentence follows from the fact that no standard natural number has the property in question. Any model in which the Gödel sentence is false must contain some element which satisfies the property within that model. Such a model must be "nonstandard" – it must contain elements that do not correspond to any standard natural number. Relationship with the liar paradox. Gödel specifically cites Richard's paradox and the liar paradox as semantical analogues to his syntactical incompleteness result in the introductory section of "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". The liar paradox is the sentence "This sentence is false." An analysis of the liar sentence shows that it cannot be true (for then, as it asserts, it is false), nor can it be false (for then, it is true). A Gödel sentence G for a system F makes a similar assertion to the liar sentence, but with truth replaced by provability: G says "G is not provable in the system F." The analysis of the truth and provability of G is a formalized version of the analysis of the truth of the liar sentence. It is not possible to replace "not provable" with "false" in a Gödel sentence because the predicate "Q is the Gödel number of a false formula" cannot be represented as a formula of arithmetic. This result, known as Tarski's undefinability theorem, was discovered independently both by Gödel, when he was working on the proof of the incompleteness theorem, and by the theorem's namesake, Alfred Tarski. Extensions of Gödel's original result. Compared to the theorems stated in Gödel's 1931 paper, many contemporary statements of the incompleteness theorems are more general in two ways. These generalized statements are phrased to apply to a broader class of systems, and they are phrased to incorporate weaker consistency assumptions. 
Gödel demonstrated the incompleteness of the system of "Principia Mathematica", a particular system of arithmetic, but a parallel demonstration could be given for any effective system of a certain expressiveness. Gödel commented on this fact in the introduction to his paper, but restricted the proof to one system for concreteness. In modern statements of the theorem, it is common to state the effectiveness and expressiveness conditions as hypotheses for the incompleteness theorem, so that it is not limited to any particular formal system. The terminology used to state these conditions was not yet developed in 1931 when Gödel published his results. Gödel's original statement and proof of the incompleteness theorem requires the assumption that the system is not just consistent but "ω-consistent". A system is ω-consistent if it is not ω-inconsistent, and is ω-inconsistent if there is a predicate P such that for every specific natural number m the system proves ~"P"("m"), and yet the system also proves that there exists a natural number n such that P(n). That is, the system says that a number with property P exists while denying that it has any specific value. The ω-consistency of a system implies its consistency, but consistency does not imply ω-consistency. J. Barkley Rosser (1936) strengthened the incompleteness theorem by finding a variation of the proof (Rosser's trick) that only requires the system to be consistent, rather than ω-consistent. This is mostly of technical interest, because all true formal theories of arithmetic (theories whose axioms are all true statements about natural numbers) are ω-consistent, and thus Gödel's theorem as originally stated applies to them. The stronger version of the incompleteness theorem that only assumes consistency, rather than ω-consistency, is now commonly known as Gödel's incompleteness theorem and as the Gödel–Rosser theorem. Second incompleteness theorem. 
For each formal system F containing basic arithmetic, it is possible to canonically define a formula Cons(F) expressing the consistency of F. This formula expresses the property that "there does not exist a natural number coding a formal derivation within the system F whose conclusion is a syntactic contradiction." The syntactic contradiction is often taken to be "0=1", in which case Cons(F) states "there is no natural number that codes a derivation of '0=1' from the axioms of F." Gödel's second incompleteness theorem shows that, under general assumptions, this canonical consistency statement Cons(F) will not be provable in F. The theorem first appeared as "Theorem XI" in Gödel's 1931 paper "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I". In the following statement, the term "formalized system" also includes an assumption that F is effectively axiomatized. This theorem states that for any consistent system "F" within which a certain amount of elementary arithmetic can be carried out, the consistency of "F" cannot be proved in "F" itself. This theorem is stronger than the first incompleteness theorem because the statement constructed in the first incompleteness theorem does not directly express the consistency of the system. The proof of the second incompleteness theorem is obtained by formalizing the proof of the first incompleteness theorem within the system F itself. Expressing consistency. There is a technical subtlety in the second incompleteness theorem regarding the method of expressing the consistency of F as a formula in the language of F. There are many ways to express the consistency of a system, and not all of them lead to the same result. The formula Cons(F) from the second incompleteness theorem is a particular expression of consistency. Other formalizations of the claim that F is consistent may be inequivalent in F, and some may even be provable. 
For example, first-order Peano arithmetic (PA) can prove that "the largest consistent subset of PA" is consistent. But, because PA is consistent, the largest consistent subset of PA is just PA, so in this sense PA "proves that it is consistent". What PA does not prove is that the largest consistent subset of PA is, in fact, the whole of PA. (The term "largest consistent subset of PA" is meant here to be the largest consistent initial segment of the axioms of PA under some particular effective enumeration.) The Hilbert–Bernays conditions. The standard proof of the second incompleteness theorem assumes that the provability predicate "Prov"A("P") satisfies the Hilbert–Bernays provability conditions. Letting #("P") represent the Gödel number of a formula P, the provability conditions say: * If F proves P, then F proves "Prov"A(#("P")). * F proves that "Prov"A(#("P")) implies "Prov"A(#("Prov"A(#("P")))). * F proves that "Prov"A(#("P" → "Q")) and "Prov"A(#("P")) together imply "Prov"A(#("Q")) (an internalized form of modus ponens). There are systems, such as Robinson arithmetic, which are strong enough to meet the assumptions of the first incompleteness theorem, but which do not prove the Hilbert–Bernays conditions. Peano arithmetic, however, is strong enough to verify these conditions, as are all theories stronger than Peano arithmetic. Implications for consistency proofs. Gödel's second incompleteness theorem also implies that a system "F"1 satisfying the technical conditions outlined above cannot prove the consistency of any system "F"2 that proves the consistency of "F"1. This is because such a system "F"1 can prove that if "F"2 proves the consistency of "F"1, then "F"1 is in fact consistent. For the claim that "F"1 is consistent has form "for all numbers n, n has the decidable property of not being a code for a proof of contradiction in "F"1". If "F"1 were in fact inconsistent, then "F"2 would prove for some n that n is the code of a contradiction in "F"1. But if "F"2 also proved that "F"1 is consistent (that is, that there is no such n), then it would itself be inconsistent. This reasoning can be formalized in "F"1 to show that if "F"2 is consistent, then "F"1 is consistent. 
Since, by the second incompleteness theorem, "F"1 does not prove its consistency, it cannot prove the consistency of "F"2 either. This corollary of the second incompleteness theorem shows that there is no hope of proving, for example, the consistency of Peano arithmetic using any finitistic means that can be formalized in a system the consistency of which is provable in Peano arithmetic (PA). For example, the system of primitive recursive arithmetic (PRA), which is widely accepted as an accurate formalization of finitistic mathematics, is provably consistent in PA. Thus PRA cannot prove the consistency of PA. This fact is generally seen to imply that Hilbert's program, which aimed to justify the use of "ideal" (infinitistic) mathematical principles in the proofs of "real" (finitistic) mathematical statements by giving a finitistic proof that the ideal principles are consistent, cannot be carried out. The corollary also indicates the epistemological relevance of the second incompleteness theorem. It would provide no interesting information if a system F proved its consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of F in F would give us no clue as to whether F is consistent; no doubts about the consistency of F would be resolved by such a consistency proof. The interest in consistency proofs lies in the possibility of proving the consistency of a system F in some system F' that is in some sense less doubtful than F itself, for example, weaker than F. For many naturally occurring theories F and F', such as F = Zermelo–Fraenkel set theory and F' = primitive recursive arithmetic, the consistency of F' is provable in F, and thus F' cannot prove the consistency of F by the above corollary of the second incompleteness theorem. 
The second incompleteness theorem does not rule out altogether the possibility of proving the consistency of some theory T; it only rules out doing so in a theory that T itself can prove to be consistent. For example, Gerhard Gentzen proved the consistency of Peano arithmetic in a different system that includes an axiom asserting that the ordinal called "ε"0 is well-founded; see Gentzen's consistency proof. Gentzen's theorem spurred the development of ordinal analysis in proof theory. Examples of undecidable statements. There are two distinct senses of the word "undecidable" in mathematics and computer science. The first of these is the proof-theoretic sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense, which will not be discussed here, is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set (see undecidable problem). Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense. Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point in the philosophy of mathematics. 
The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms "except" the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proved from ZFC. Saharon Shelah showed that the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory. Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's incompleteness theorem states that for any system that can represent enough arithmetic, there is an upper bound c such that no specific number can be proved in that system to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox. Undecidable statements provable in larger systems. These are natural mathematical equivalents of the Gödel "true but undecidable" sentence. They can be proved in a larger system which is generally accepted as a valid form of reasoning, but are undecidable in a more limited system such as Peano Arithmetic. In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of the infinite Ramsey theorem, is undecidable in (first-order) Peano arithmetic, but can be proved in the stronger system of second-order arithmetic. Kirby and Paris later showed that Goodstein's theorem, a statement about sequences of natural numbers somewhat simpler than the Paris–Harrington principle, is also undecidable in Peano arithmetic. 
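Goodstein's theorem has a simple computational statement: write n in hereditary base-b notation, replace every occurrence of the base b by b+1, subtract one, and repeat with the next base; the theorem asserts that every such sequence eventually reaches 0, yet Peano arithmetic cannot prove this. The following sketch (the function names are my own, not from any source) computes the first few terms:

```python
def bump(n, b):
    """Rewrite n in hereditary base-b notation (exponents themselves
    rewritten recursively), then replace every base b by b + 1."""
    if n == 0:
        return 0
    result, e = 0, 0
    while n > 0:
        d = n % b                       # digit at exponent e
        result += d * (b + 1) ** bump(e, b)
        n //= b
        e += 1
    return result

def goodstein(n, steps):
    """First `steps` terms of the Goodstein sequence starting at n."""
    seq, b = [n], 2
    for _ in range(steps - 1):
        if n == 0:
            break
        n = bump(n, b) - 1              # bump the base, then subtract one
        b += 1
        seq.append(n)
    return seq

print(goodstein(3, 6))   # [3, 3, 3, 2, 1, 0] -- terminates quickly
print(goodstein(4, 6))   # [4, 26, 41, 60, 83, 109] -- grows for a very long time
```

The sequence starting at 4 does eventually reach 0, but only after roughly 3·10^120000000 steps; this enormous but finite behavior is what Peano arithmetic cannot capture in general.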
Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system ATR0 codifying the principles acceptable based on a philosophy of mathematics called predicativism. The related but more general graph minor theorem (2003) has consequences for computational complexity theory. Relationship with computability. The incompleteness theorem is closely related to several results about undecidable sets in recursion theory. Stephen Cole Kleene presented a proof of Gödel's incompleteness theorem using basic results of computability theory. One such result shows that the halting problem is undecidable: no computer program can correctly determine, given any program P as input, whether P eventually halts when run with a particular given input. Kleene showed that the existence of a complete effective system of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. This method of proof has also been presented by several other authors. Matiyasevich's solution to Hilbert's 10th problem can likewise be used to obtain a proof of Gödel's first incompleteness theorem. Matiyasevich proved that there is no algorithm that, given a multivariate polynomial "p"("x"1, "x"2, ..., "x"k) with integer coefficients, determines whether there is an integer solution to the equation p = 0. Because polynomials with integer coefficients, and integers themselves, are directly expressible in the language of arithmetic, if a multivariate integer polynomial equation p = 0 does have a solution in the integers then any sufficiently strong system of arithmetic T will prove this. Moreover, suppose the system T is ω-consistent. In that case, it will never prove that a particular polynomial equation has a solution when there is no solution in the integers. 
Thus, if T were complete and ω-consistent, it would be possible to determine algorithmically whether a polynomial equation has a solution by merely enumerating proofs of T until either "p has a solution" or "p has no solution" is found, in contradiction to Matiyasevich's theorem. Hence it follows that T cannot be ω-consistent and complete. Moreover, for each consistent effectively generated system T, it is possible to effectively generate a multivariate polynomial p over the integers such that the equation p = 0 has no solutions over the integers, but the lack of solutions cannot be proved in T. The existence of recursively inseparable sets can also be used to prove the first incompleteness theorem. This proof is often extended to show that systems such as Peano arithmetic are essentially undecidable. Chaitin's incompleteness theorem gives a different method of producing independent sentences, based on Kolmogorov complexity. Like the proof presented by Kleene that was mentioned above, Chaitin's theorem only applies to theories with the additional property that all their axioms are true in the standard model of the natural numbers. Gödel's incompleteness theorem is distinguished by its applicability to consistent theories that nonetheless include false statements in the standard model; these theories are known as ω-inconsistent. Proof sketch for the first theorem. The proof by contradiction has three essential parts. To begin, choose a formal system that meets the proposed criteria: it must be effectively generated and its language must be able to express elementary arithmetic. Arithmetization of syntax. The main problem in fleshing out the proof described above is that it seems at first that to construct a statement p that is equivalent to "p cannot be proved", p would somehow have to contain a reference to p, which could easily give rise to an infinite regress. 
Gödel's technique is to show that statements can be matched with numbers (often called the arithmetization of syntax) in such a way that "proving a statement" can be replaced with "testing whether a number has a given property". This allows a self-referential formula to be constructed in a way that avoids any infinite regress of definitions. The same technique was later used by Alan Turing in his work on the "Entscheidungsproblem". In simple terms, a method can be devised so that every formula or statement that can be formulated in the system gets a unique number, called its Gödel number, in such a way that it is possible to mechanically convert back and forth between formulas and Gödel numbers. The numbers involved might be very long indeed (in terms of number of digits), but this is not a barrier; all that matters is that such numbers can be constructed. A simple example is how English can be stored as a sequence of numbers for each letter and then combined into a single larger number: * The word "hello" is encoded as 104-101-108-108-111 in ASCII, which can be converted into the number 104101108108111. * The logical statement "x=y =&gt; y=x" is encoded as 120-061-121-032-061-062-032-121-061-120 in ASCII, which can be converted into the number 120061121032061062032121061120. In principle, proving a statement true or false can be shown to be equivalent to proving that the number matching the statement does or does not have a given property. Because the formal system is strong enough to support reasoning about "numbers in general", it can support reasoning about "numbers that represent formulae and statements" as well. Crucially, because the system can support reasoning about "properties of numbers", the results are equivalent to reasoning about "provability of their equivalent statements". Construction of a statement about "provability". 
Having shown that in principle the system can indirectly make statements about provability, by analyzing properties of those numbers representing statements it is now possible to show how to create a statement that actually does this. A formula "F"("x") that contains exactly one free variable x is called a "statement form" or "class-sign". As soon as x is replaced by a specific number, the statement form turns into a "bona fide" statement, and it is then either provable in the system, or not. For certain formulas one can show that for every natural number n, "F"("n") is true if and only if it can be proved (the precise requirement in the original proof is weaker, but for the proof sketch this will suffice). In particular, this is true for every specific arithmetic operation between a finite number of natural numbers, such as "2×3 = 6". Statement forms themselves are not statements and therefore cannot be proved or disproved. But every statement form "F"("x") can be assigned a Gödel number denoted by G("F"). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G("F"). The notion of provability itself can also be encoded by Gödel numbers, in the following way: since a proof is a list of statements which obey certain rules, the Gödel number of a proof can be defined. Now, for every statement p, one may ask whether a number x is the Gödel number of its proof. The relation between the Gödel number of p and x, the potential Gödel number of its proof, is an arithmetical relation between two numbers. Therefore, there is a statement form "Bew"("y") that uses this arithmetical relation to state that a Gödel number of a proof of y exists: "Bew"("y") = ∃ "x" (y is the Gödel number of a formula and x is the Gödel number of a proof of the formula encoded by y). 
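The construction can be made concrete with a toy model. The sketch below is entirely my own illustration, not Gödel's actual encoding: it uses the ASCII-concatenation numbering described earlier and an invented miniature formal system with one axiom ("0=0") and one inference rule (from a=b infer s(a)=s(b)). The relation "x codes a proof of the formula coded by y" then becomes an ordinary computable relation between two integers, exactly the kind of arithmetical relation that "Bew" quantifies over:

```python
def godel_number(s: str) -> int:
    """The toy encoding from the text: concatenate zero-padded ASCII codes."""
    return int("".join(f"{ord(c):03d}" for c in s))

def decode(n: int) -> str:
    """Invert the encoding (re-pad digits lost to a leading zero)."""
    digits = str(n)
    digits = digits.zfill(-(-len(digits) // 3) * 3)
    return "".join(chr(int(digits[i:i + 3])) for i in range(0, len(digits), 3))

AXIOM = "0=0"   # the single axiom of the invented miniature system

def follows(prev: str, nxt: str) -> bool:
    """The single inference rule: from a=b infer s(a)=s(b)."""
    left, right = prev.split("=")
    return nxt == f"s({left})=s({right})"

def is_proof_of(lines, statement) -> bool:
    """A proof is a sequence of lines, each the axiom or derived from
    the previous line; it proves its last line."""
    ok = all(line == AXIOM or (i > 0 and follows(lines[i - 1], line))
             for i, line in enumerate(lines))
    return ok and bool(lines) and lines[-1] == statement

def proves(x: int, y: int) -> bool:
    """The arithmetical proof relation: x codes a proof (lines joined
    by ';') of the statement coded by y.  'Bew'(y) would assert that
    some x with proves(x, y) exists."""
    return is_proof_of(decode(x).split(";"), decode(y))

stmt = "s(0)=s(0)"
proof_code = godel_number(";".join([AXIOM, stmt]))
print(proves(proof_code, godel_number(stmt)))  # True
```

The point of the toy is only that checking a candidate proof is mechanical; "Bew" wraps this decidable relation in an unbounded existential quantifier, which is where undecidability enters.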
The name Bew is short for "beweisbar", the German word for "provable"; this name was originally used by Gödel to denote the provability formula just described. Note that ""Bew"("y")" is merely an abbreviation that represents a particular, very long, formula in the original language of T; the string "Bew" itself is not claimed to be part of this language. An important feature of the formula "Bew"("y") is that if a statement p is provable in the system then "Bew"(G("p")) is also provable. This is because any proof of p would have a corresponding Gödel number, the existence of which causes Bew(G("p")) to be satisfied. Diagonalization. The next step in the proof is to obtain a statement which, indirectly, asserts its own unprovability. Although Gödel constructed this statement directly, the existence of at least one such statement follows from the diagonal lemma, which says that for any sufficiently strong formal system and any statement form F there is a statement p such that the system proves "p" ↔ "F"(G("p")). By letting F be the negation of "Bew"("x"), we obtain the theorem "p" ↔ ~"Bew"(G("p")) and the p defined by this roughly states that its own Gödel number is the Gödel number of an unprovable formula. The statement p is not literally equal to ~"Bew"(G("p")); rather, p states that if a certain calculation is performed, the resulting Gödel number will be that of an unprovable statement. But when this calculation is performed, the resulting Gödel number turns out to be the Gödel number of p itself. This is similar to the following sentence in English: ", when preceded by itself in quotes, is unprovable.", when preceded by itself in quotes, is unprovable. This sentence does not directly refer to itself, but when the stated transformation is made the original sentence is obtained as a result, and thus this sentence indirectly asserts its own unprovability. The proof of the diagonal lemma employs a similar method. 
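The self-reference in the English sentence above can be checked mechanically. In this small sketch (the helper name is my own), applying the described transformation to the material quoted inside the constructed sentence returns the sentence itself, mirroring how the diagonal lemma's p turns out to refer to its own Gödel number:

```python
def preceded_by_itself_in_quotes(fragment: str) -> str:
    """The transformation the sentence describes: put the fragment in
    quotes and prepend that quotation to the fragment itself."""
    return '"' + fragment + '"' + fragment

fragment = ', when preceded by itself in quotes, is unprovable.'
sentence = preceded_by_itself_in_quotes(fragment)
print(sentence)
# ", when preceded by itself in quotes, is unprovable.", when preceded by itself in quotes, is unprovable.

# The sentence describes the result of applying the transformation to
# the material it quotes -- and that result is the sentence itself:
quoted = sentence[1:sentence.index('"', 1)]
assert preceded_by_itself_in_quotes(quoted) == sentence
```

No line of the construction mentions the sentence directly; the self-reference appears only when the transformation is carried out, just as p never literally contains its own Gödel number.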
Now, assume that the axiomatic system is ω-consistent, and let p be the statement obtained in the previous section. If p were provable, then "Bew"(G("p")) would be provable, as argued above. But p asserts the negation of "Bew"(G("p")). Thus the system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable. If the negation of p were provable, then "Bew"(G("p")) would be provable (because p was constructed to be equivalent to the negation of "Bew"(G("p"))). However, for each specific number x, x cannot be the Gödel number of the proof of p, because p is not provable (from the previous paragraph). Thus on one hand the system proves there is a number with a certain property (that it is the Gödel number of the proof of p), but on the other hand, for every specific number x, we can prove that it does not have this property. This is impossible in an ω-consistent system. Thus the negation of p is not provable. Thus the statement p is undecidable in our axiomatic system: it can neither be proved nor disproved within the system. In fact, to show that p is not provable only requires the assumption that the system is consistent. The stronger assumption of ω-consistency is required to show that the negation of p is not provable. Thus, if p is constructed for a particular system: if the system is ω-consistent, it can prove neither p nor its negation; if the system is merely consistent, then p is still unprovable, though the negation of p might be provable. If one tries to "add the missing axioms" to avoid the incompleteness of the system, then one has to add either p or "not p" as axioms. But then the definition of "being a Gödel number of a proof" of a statement changes, which means that the formula "Bew"("x") is now different. Thus when we apply the diagonal lemma to this new Bew, we obtain a new statement p, different from the previous one, which will be undecidable in the new system if it is ω-consistent. Proof via Berry's paradox. 
George Boolos sketches an alternative proof of the first incompleteness theorem that uses Berry's paradox rather than the liar paradox to construct a true but unprovable formula. A similar proof method was independently discovered by Saul Kripke. Boolos's proof proceeds by constructing, for any computably enumerable set S of true sentences of arithmetic, another sentence which is true but not contained in S. This gives the first incompleteness theorem as a corollary. According to Boolos, this proof is interesting because it provides a "different sort of reason" for the incompleteness of effective, consistent theories of arithmetic. Computer verified proofs. The incompleteness theorems are among a relatively small number of nontrivial theorems that have been transformed into formalized theorems that can be completely verified by proof assistant software. Gödel's original proofs of the incompleteness theorems, like most mathematical proofs, were written in natural language intended for human readers. Computer-verified proofs of versions of the first incompleteness theorem were announced by Natarajan Shankar in 1986 using Nqthm, by Russell O'Connor in 2003 using Coq, and by John Harrison in 2009 using HOL Light. A computer-verified proof of both incompleteness theorems was announced by Lawrence Paulson in 2013 using Isabelle. Proof sketch for the second theorem. The main difficulty in proving the second incompleteness theorem is to show that various facts about provability used in the proof of the first incompleteness theorem can be formalized within a system S using a formal predicate "P" for provability. Once this is done, the second incompleteness theorem follows by formalizing the entire proof of the first incompleteness theorem within the system S itself. Let p stand for the undecidable sentence constructed above, and assume for purposes of obtaining a contradiction that the consistency of the system S can be proved from within the system S itself. 
This is equivalent to proving the statement "System S is consistent". Now consider the statement c, where c = "If the system S is consistent, then p is not provable". The proof of sentence c can be formalized within the system S, and therefore the statement c, "if S is consistent then p is not provable" (or identically, "not "P"("p")" follows from the consistency of S), can be proved in the system S. Observe then that if we can prove that the system S is consistent (i.e. the statement in the hypothesis of c), then we have proved that p is not provable. But this is a contradiction, since by the first incompleteness theorem this sentence (i.e. what is implied in the sentence c, ""p" is not provable") is exactly what we constructed to be unprovable. Notice that this is why we require formalizing the first incompleteness theorem in S: to prove the second incompleteness theorem, we obtain a contradiction with the first incompleteness theorem, which we can do only by showing that the theorem holds in S. So we cannot prove that the system S is consistent, and the statement of the second incompleteness theorem follows. Discussion and implications. The incompleteness results affect the philosophy of mathematics, particularly versions of formalism, which use a single system of formal logic to define their principles. Consequences for logicism and Hilbert's second problem. The incompleteness theorem is sometimes thought to have severe consequences for the program of logicism proposed by Gottlob Frege and Bertrand Russell, which aimed to define the natural numbers in terms of logic. Bob Hale and Crispin Wright argue that it is not a problem for logicism because the incompleteness theorems apply equally to first-order logic as they do to arithmetic. They argue that only those who believe that the natural numbers are to be defined in terms of first order logic have this problem. Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's second problem, which asked for a finitary consistency proof for mathematics. 
The second incompleteness theorem, in particular, is often viewed as making the problem impossible. Not all mathematicians agree with this analysis, however, and the status of Hilbert's second problem is not yet decided (see "Modern viewpoints on the status of the problem"). Minds and machines. Authors including the philosopher J. R. Lucas and physicist Roger Penrose have debated what, if anything, Gödel's incompleteness theorems imply about human intelligence. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church–Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel's incompleteness theorems would apply to it. Hilary Putnam suggested that while Gödel's theorems cannot be applied to humans, since they make mistakes and are therefore inconsistent, they may be applied to the human faculty of science or mathematics in general. Assuming that it is consistent, either its consistency cannot be proved or it cannot be represented by a Turing machine. Avi Wigderson has proposed that the concept of mathematical "knowability" should be based on computational complexity rather than logical decidability. He writes that "when "knowability" is interpreted by modern standards, namely via computational complexity, the Gödel phenomena are very much with us." Douglas Hofstadter, in his books "Gödel, Escher, Bach" and "I Am a Strange Loop", cites Gödel's theorems as an example of what he calls a "strange loop", a hierarchical, self-referential structure existing within an axiomatic formal system. He argues that this is the same kind of structure that gives rise to consciousness, the sense of "I", in the human mind. 
While the self-reference in Gödel's theorem comes from the Gödel sentence asserting its unprovability within the formal system of Principia Mathematica, the self-reference in the human mind comes from how the brain abstracts and categorises stimuli into "symbols", or groups of neurons which respond to concepts, in what is effectively also a formal system, eventually giving rise to symbols modeling the concept of the very entity doing the perception. Hofstadter argues that a strange loop in a sufficiently complex formal system can give rise to a "downward" or "upside-down" causality, a situation in which the normal hierarchy of cause-and-effect is flipped upside-down. In the case of Gödel's theorem, this manifests, in short, as the following: Merely from knowing the formula's meaning, one can infer its truth or falsity without any effort to derive it in the old-fashioned way, which requires one to trudge methodically "upwards" from the axioms. This is not just peculiar; it is astonishing. Normally, one cannot merely look at what a mathematical conjecture says and simply appeal to the content of that statement on its own to deduce whether the statement is true or false. In the case of the mind, a far more complex formal system, this "downward causality" manifests, in Hofstadter's view, as the ineffable human instinct that the causality of our minds lies on the high level of desires, concepts, personalities, thoughts, and ideas, rather than on the low level of interactions between neurons or even fundamental particles, even though according to physics the latter seems to possess the causal power. There is thus a curious upside-downness to our normal human way of perceiving the world: we are built to perceive “big stuff” rather than “small stuff”, even though the domain of the tiny seems to be where the actual motors driving reality reside. Paraconsistent logic. 
Although Gödel's theorems are usually studied in the context of classical logic, they also have a role in the study of paraconsistent logic and of inherently contradictory statements ("dialetheia"). Priest (1984, 2006) argues that replacing the notion of formal proof in Gödel's theorem with the usual notion of informal proof can be used to show that naive mathematics is inconsistent, and uses this as evidence for dialetheism. The cause of this inconsistency is the inclusion of a truth predicate for a system within the language of the system. gives a more mixed appraisal of the applications of Gödel's theorems to dialetheism. Appeals to the incompleteness theorems in other fields. Appeals and analogies are sometimes made to the incompleteness theorems in support of arguments that go beyond mathematics and logic. Several authors have commented negatively on such extensions and interpretations, including , , ; and . and , for example, quote from Rebecca Goldstein's comments on the disparity between Gödel's avowed Platonism and the anti-realist uses to which his ideas are sometimes put. criticize Régis Debray's invocation of the theorem in the context of sociology; Debray has defended this use as metaphorical (ibid.). History. After Gödel published his proof of the completeness theorem as his doctoral thesis in 1929, he turned to a second problem for his habilitation. His original goal was to obtain a positive solution to Hilbert's second problem. At the time, theories of natural numbers and real numbers similar to second-order arithmetic were known as "analysis", while theories of natural numbers alone were known as "arithmetic". Gödel was not the only person working on the consistency problem. Ackermann had published a flawed consistency proof for analysis in 1925, in which he attempted to use the method of ε-substitution originally developed by Hilbert. 
Later that year, von Neumann was able to correct the proof for a system of arithmetic without any axioms of induction. By 1928, Ackermann had communicated a modified proof to Bernays; this modified proof led Hilbert to announce his belief in 1929 that the consistency of arithmetic had been demonstrated and that a consistency proof of analysis would likely soon follow. After the publication of the incompleteness theorems showed that Ackermann's modified proof must be erroneous, von Neumann produced a concrete example showing that its main technique was unsound. In the course of his research, Gödel discovered that although a sentence asserting its own falsehood leads to paradox, a sentence that asserts its own non-provability does not. In particular, Gödel was aware of the result now called Tarski's indefinability theorem, although he never published it. Gödel announced his first incompleteness theorem to Carnap, Feigl, and Waismann on August 26, 1930; all four would attend the Second Conference on the Epistemology of the Exact Sciences, a key conference in Königsberg the following week. Announcement. The 1930 Königsberg conference was a joint meeting of three academic societies, with many of the key logicians of the time in attendance. Carnap, Heyting, and von Neumann delivered one-hour addresses on the mathematical philosophies of logicism, intuitionism, and formalism, respectively. The conference also included Hilbert's retirement address, as he was leaving his position at the University of Göttingen. Hilbert used the speech to argue his belief that all mathematical problems can be solved. He ended his address by saying: For the mathematician there is no "Ignorabimus", and, in my opinion, not at all for natural science either. ... The true reason why [no one] has succeeded in finding an unsolvable problem is, in my opinion, that there is no unsolvable problem. 
In contrast to the foolish "Ignorabimus", our credo avers: We must know. We shall know! This speech quickly became known as a summary of Hilbert's beliefs on mathematics (its final six words, "Wir müssen wissen. Wir werden wissen!", were used as Hilbert's epitaph in 1943). Although Gödel was likely in attendance for Hilbert's address, the two never met face to face. Gödel announced his first incompleteness theorem at a roundtable discussion session on the third day of the conference. The announcement drew little attention apart from that of von Neumann, who pulled Gödel aside for a conversation. Later that year, working independently with knowledge of the first incompleteness theorem, von Neumann obtained a proof of the second incompleteness theorem, which he announced to Gödel in a letter dated November 20, 1930. Gödel had independently obtained the second incompleteness theorem and included it in his submitted manuscript, which was received by "Monatshefte für Mathematik" on November 17, 1930. Gödel's paper was published in the "Monatshefte" in 1931 under the title "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions in Principia Mathematica and Related Systems I"). As the title implies, Gödel originally planned to publish a second part of the paper in the next volume of the "Monatshefte"; the prompt acceptance of the first paper was one reason he changed his plans. Generalization and acceptance. Gödel gave a series of lectures on his theorems at Princeton in 1933–1934 to an audience that included Church, Kleene, and Rosser. By this time, Gödel had grasped that the key property his theorems required is that the system must be effective (at the time, the term "general recursive" was used). Rosser proved in 1936 that the hypothesis of ω-consistency, which was an integral part of Gödel's original proof, could be replaced by simple consistency if the Gödel sentence was changed appropriately. 
These developments left the incompleteness theorems in essentially their modern form. Gentzen published his consistency proof for first-order arithmetic in 1936. Hilbert accepted this proof as "finitary" although (as Gödel's theorem had already shown) it cannot be formalized within the system of arithmetic that is being proved consistent. The impact of the incompleteness theorems on Hilbert's program was quickly realized. Bernays included a full proof of the incompleteness theorems in the second volume of "Grundlagen der Mathematik" (1939), along with additional results of Ackermann on the ε-substitution method and Gentzen's consistency proof of arithmetic. This was the first full published proof of the second incompleteness theorem. Criticisms. Finsler. used a version of Richard's paradox to construct an expression that was false but unprovable in a particular, informal framework he had developed. Gödel was unaware of this paper when he proved the incompleteness theorems (Collected Works Vol. IV., p. 9). Finsler wrote to Gödel in 1931 to inform him about this paper, which Finsler felt had priority for an incompleteness theorem. Finsler's methods did not rely on formalized provability and had only a superficial resemblance to Gödel's work. Gödel read the paper but found it deeply flawed, and his response to Finsler laid out concerns about the lack of formalization. Finsler continued to argue for his philosophy of mathematics, which eschewed formalization, for the remainder of his career. Zermelo. In September 1931, Ernst Zermelo wrote to Gödel to announce what he described as an "essential gap" in Gödel's argument. In October, Gödel replied with a 10-page letter, where he pointed out that Zermelo mistakenly assumed that the notion of truth in a system is definable in that system; it is not true in general by Tarski's undefinability theorem. 
However, Zermelo did not relent and published his criticisms in print with "a rather scathing paragraph on his young competitor". Gödel decided that pursuing the matter further was pointless, and Carnap agreed. Much of Zermelo's subsequent work was related to logic stronger than first-order logic, with which he hoped to show both the consistency and categoricity of mathematical theories. Wittgenstein. Ludwig Wittgenstein wrote several passages about the incompleteness theorems that were published posthumously in his 1953 "Remarks on the Foundations of Mathematics", particularly, one section sometimes called the "notorious paragraph" where he seems to confuse the notions of "true" and "provable" in Russell's system. Gödel was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and Tractatus Logico-Philosophicus dominated the circle's thinking. There has been some controversy about whether Wittgenstein misunderstood the incompleteness theorem or just expressed himself unclearly. Writings in Gödel's Nachlass express the belief that Wittgenstein misread his ideas. Multiple commentators have read Wittgenstein as misunderstanding Gödel, although as well as have provided textual readings arguing that most commentary misunderstands Wittgenstein. On their release, Bernays, Dummett, and Kreisel wrote separate reviews on Wittgenstein's remarks, all of which were extremely negative. The unanimity of this criticism caused Wittgenstein's remarks on the incompleteness theorems to have little impact on the logic community. In 1972, Gödel stated: "Has Wittgenstein lost his mind? Does he mean it seriously? 
He intentionally utters trivially nonsensical statements", and wrote to Karl Menger that Wittgenstein's comments demonstrate a misunderstanding of the incompleteness theorems, writing: It is clear from the passages you cite that Wittgenstein did "not" understand [the first incompleteness theorem] (or pretended not to understand it). He interpreted it as a kind of logical paradox, while in fact it is just the opposite, namely a mathematical theorem within an absolutely uncontroversial part of mathematics (finitary number theory or combinatorics). Since the publication of Wittgenstein's "Nachlass" in 2000, a series of papers in philosophy have sought to evaluate whether the original criticism of Wittgenstein's remarks was justified. argue that Wittgenstein had a more complete understanding of the incompleteness theorem than was previously assumed. They are particularly concerned with the interpretation of a Gödel sentence for an ω-inconsistent system as saying "I am not provable", since the system has no models in which the provability predicate corresponds to actual provability. argues that their interpretation of Wittgenstein is not historically justified. explores the relationship between Wittgenstein's writing and theories of paraconsistent logic. See also. References. Citations. Translations, during his lifetime, of Gödel's paper into English. None of the following agree in all translated words and in typography. The typography is a serious matter, because Gödel expressly wished to emphasize "those metamathematical notions that had been defined in their usual sense before . . ." Three translations exist. Of the first John Dawson states that: "The Meltzer translation was seriously deficient and received a devastating review in the "Journal of Symbolic Logic""; Gödel also complained about Braithwaite's commentary. 
"Fortunately, the Meltzer translation was soon supplanted by a better one prepared by Elliott Mendelson for Martin Davis's anthology "The Undecidable" . . . he found the translation "not quite so good" as he had expected . . . [but because of time constraints he] agreed to its publication" (ibid). (In a footnote Dawson states that "he would regret his compliance, for the published volume was marred throughout by sloppy typography and numerous misprints" (ibid)). Dawson states that "The translation that Gödel favored was that by Jean van Heijenoort" (ibid). For the serious student another version exists as a set of lecture notes recorded by Stephen Kleene and J. B. Rosser "during lectures given by Gödel at the Institute for Advanced Study during the spring of 1934" (cf commentary by and beginning on p. 41); this version is titled "On Undecidable Propositions of Formal Mathematical Systems". In their order of publication: * Stephen Hawking editor, 2005. "God Created the Integers: The Mathematical Breakthroughs That Changed History", Running Press, Philadelphia. Gödel's paper appears starting on p. 1097, with Hawking's commentary starting on p. 1089.
[ { "math_id": 0, "text": "\\Pi^0_1" } ]
https://en.wikipedia.org/wiki?curid=58863
58863156
Garsia–Wachs algorithm
The Garsia–Wachs algorithm is an efficient method for computers to construct optimal binary search trees and alphabetic Huffman codes, in linearithmic time. It is named after Adriano Garsia and Michelle L. Wachs. Problem description. The input to the problem, for an integer formula_0, consists of a sequence of formula_1 non-negative weights formula_2. The output is a rooted binary tree with formula_0 internal nodes, each having exactly two children. Such a tree has exactly formula_1 leaf nodes, which can be identified (in the order given by the binary tree) with the formula_1 input weights. The goal of the problem is to find a tree, among all of the possible trees with formula_0 internal nodes, that minimizes the weighted sum of the "external path lengths". These path lengths are the numbers of steps from the root to each leaf. They are multiplied by the weight of the leaf and then summed to give the quality of the overall tree. This problem can be interpreted as a problem of constructing a binary search tree for formula_0 ordered keys, with the assumption that the tree will be used only to search for values that are not already in the tree. In this case, the formula_0 keys partition the space of search values into formula_1 intervals, and the weight of one of these intervals can be taken as the probability of searching for a value that lands in that interval. The weighted sum of external path lengths controls the expected time for searching the tree. Alternatively, the output of the problem can be used as a Huffman code, a method for encoding formula_1 given values unambiguously by using variable-length sequences of binary values. In this interpretation, the code for a value is given by the sequence of left and right steps from a parent to the child on the path from the root to a leaf in the tree (e.g. with 0 for left and 1 for right). 
Unlike standard Huffman codes, the ones constructed in this way are "alphabetical", meaning that the sorted order of these binary codes is the same as the input ordering of the values. If the weight of a value is its frequency in a message to be encoded, then the output of the Garsia–Wachs algorithm is the alphabetical Huffman code that compresses the message to the shortest possible length. Algorithm. Overall, the algorithm consists of three phases: The first phase of the algorithm is easier to describe if the input is augmented with two sentinel values, formula_3 (or any sufficiently large finite value) at the start and end of the sequence. The first phase maintains a forest of trees, initially a single-node tree for each non-sentinel input weight, which will eventually become the binary tree that it constructs. Each tree is associated with a value, the sum of the weights of its leaves. The algorithm maintains a sequence of these values, with the two sentinel values at each end. The initial sequence is just the order in which the leaf weights were given as input. It then repeatedly performs the following steps, each of which reduces the length of the input sequence, until there is only one tree containing all the leaves: To implement this phase efficiently, the algorithm can maintain its current sequence of values in any self-balancing binary search tree structure. Such a structure allows the removal of formula_4 and formula_5, and the reinsertion of their new parent, in logarithmic time. In each step, the weights up to formula_5 in the even positions of the array form a decreasing sequence, and the weights in the odd positions form another decreasing sequence. Therefore, the position to reinsert formula_8 may be found in logarithmic time by using the balanced tree to perform two binary searches, one for each of these two decreasing sequences. 
The search for the first position for which formula_7 can be performed in linear total time by using a sequential search that begins at the formula_6 from the previous triple. It is nontrivial to prove that, in the third phase of the algorithm, another tree with the same distances exists and that this tree provides the optimal solution to the problem. But assuming this to be true, the second and third phases of the algorithm are straightforward to implement in linear time. Therefore, the total time for the algorithm, on an input of length formula_0, is formula_9. History. The Garsia–Wachs algorithm is named after Adriano Garsia and Michelle L. Wachs, who published it in 1977. Their algorithm simplified an earlier method of T. C. Hu and Alan Tucker, and (although it is different in internal details) it ends up making the same comparisons in the same order as the Hu–Tucker algorithm. The original proof of correctness of the Garsia–Wachs algorithm was complicated, and was later simplified by and . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
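The weighted external path length that the algorithm minimizes can be cross-checked on small inputs with a straightforward interval dynamic program. The sketch below is not the Garsia–Wachs algorithm itself (which runs in formula_9 time); it is a brute-force O(n³) evaluation of the same optimum, useful only as a correctness oracle, and the function name is illustrative:

```python
def optimal_alphabetic_cost(weights):
    """Minimum weighted external path length over all binary trees whose
    leaves carry `weights` in the given left-to-right order.

    Interval DP: cost[i][j] is the best cost for leaves i..j; merging two
    subtrees adds one level, i.e. the total weight of the interval."""
    n = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            total = prefix[j + 1] - prefix[i]  # weight of leaves i..j
            cost[i][j] = total + min(
                cost[i][k] + cost[k + 1][j] for k in range(i, j)
            )
    return cost[0][n - 1]
```

For example, `optimal_alphabetic_cost([1, 2, 3])` returns 9, achieved by pairing the two lightest leaves at depth 2 and keeping the weight-3 leaf at depth 1.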
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "n+1" }, { "math_id": 2, "text": "w_0,w_1,\\dots, w_n" }, { "math_id": 3, "text": "\\infty" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "y" }, { "math_id": 6, "text": "z" }, { "math_id": 7, "text": "x\\le z" }, { "math_id": 8, "text": "x+y" }, { "math_id": 9, "text": "O(n\\log n)" } ]
https://en.wikipedia.org/wiki?curid=58863156
58864621
Christine O'Keefe
Australian mathematician Christine Margaret O'Keefe is an Australian mathematician and computer scientist whose research has included work in finite geometry, information security, and data privacy. She is a researcher at CSIRO, and was the lead author of a 2017 report from the Office of the Australian Information Commissioner on best practices for de-identification of personally identifying data. Education and career. O'Keefe has a bachelor's degree from the University of Adelaide, initially intending to study medicine but earning first-class honours in mathematics there in 1982. She returned to Adelaide for doctoral study in 1985, and completed her Ph.D. in 1988. Her dissertation, "Concerning formula_0-spreads of formula_1", was supervised by Rey Casse. She was a lecturer and research fellow at the University of Western Australia from 1989 to 1991, when she returned to the University of Adelaide. At Adelaide, she worked as a lecturer, senior lecturer, Queen Elizabeth II Fellow, and senior research fellow. Her research interests shifted from finite geometry to information security, and to effect that shift she moved in 2000 from Adelaide to CSIRO. At CSIRO, she founded the Information Security and Privacy Group in 2002, became head of the Health Informatics Group in 2004, became Theme Leader for Health Data and Information in 2006, and Strategic Operations Director for Preventative Health National Research in 2008. While doing this, she studied for an MBA at Australian National University, finishing in 2008. She became Director of the Population Health Research Network Centre and Professor of Health Sciences at Curtin University from 2009 to 2010 before returning to CSIRO as Science Leader for Privacy and Confidentiality in the CSIRO Department of Mathematics, Informatics and Statistics. Recognition. O'Keefe has been a Fellow of the Institute of Combinatorics and its Applications since 1991. 
In 1996, O'Keefe won the Hall Medal of the Institute of Combinatorics and its Applications for her work in finite geometry. She won the Australian Mathematical Society Medal in 2000, the first woman to win the medal, and in the same year became a Fellow of the Australian Mathematical Society. Although the Medal citation primarily discussed O'Keefe's work in finite geometry, such as the discovery of new hyperovals, it included a paragraph on her research using geometry in secret sharing, a precursor to her later work on information security. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "t" }, { "math_id": 1, "text": "PG((s+1)(t+1)-1,q)" } ]
https://en.wikipedia.org/wiki?curid=58864621
5886692
Flyby anomaly
Unexplained observed excessive energy during Earth flybys of spacecraft Unsolved problem in physics: What causes the unexpected change in acceleration for flybys of spacecraft? The flyby anomaly is a discrepancy between current scientific models and the actual increase in speed (i.e. increase in "kinetic energy") observed during a planetary flyby (usually of Earth) by a spacecraft. In multiple cases, spacecraft have been observed to gain greater speed than scientists had predicted, but thus far no convincing explanation has been found. This anomaly has been observed as shifts in the S-band and X-band Doppler and ranging telemetry. The largest discrepancy noticed during a flyby is tiny, 13.46 mm/s. Observations. Gravitational assists are valuable techniques for Solar System exploration. Because the success of such flyby maneuvers depends on the exact geometry of the trajectory, the position and velocity of a spacecraft during its encounter with a planet are continually tracked with great precision by Earth-based telemetry, e.g. via the Deep Space Network (DSN). The flyby anomaly was first noticed during a careful inspection of DSN Doppler data shortly after the Earth flyby of the "Galileo" spacecraft on 8 December 1990. While the Doppler residuals (observed minus computed data) were expected to remain flat, the analysis revealed an unexpected 66 mHz shift, which corresponds to a velocity increase of 3.92 mm/s at perigee. Investigations of this effect at the Jet Propulsion Laboratory (JPL), the Goddard Space Flight Center (GSFC) and the University of Texas have not yielded a satisfactory explanation. No such anomaly was detected after the second Earth flyby of "Galileo" in December 1992, where the measured velocity decrease matched that expected from atmospheric drag at the lower altitude of 303 km. However, the drag estimates had large error bars, and so an anomalous acceleration could not be ruled out. 
On 23 January 1998 the Near Earth Asteroid Rendezvous (NEAR) spacecraft experienced an anomalous velocity increase of 13.46 mm/s after its Earth encounter. "Cassini–Huygens" gained around 0.11 mm/s in August 1999, and "Rosetta" gained 1.82 mm/s after its Earth flyby in March 2005. An analysis of the "MESSENGER" spacecraft (studying Mercury) did not reveal any significant unexpected velocity increase. This may be because "MESSENGER" both approached and departed Earth symmetrically about the equator (see data and proposed equation below). This suggests that the anomaly may be related to Earth's rotation. In November 2009, ESA's "Rosetta" spacecraft was tracked closely during flyby in order to precisely measure its velocity, in an effort to gather further data about the anomaly, but no significant anomaly was found. The 2013 flyby of Juno on the way to Jupiter yielded no anomalous acceleration. In 2018, a careful analysis of the trajectory of the presumed interstellar asteroid ʻOumuamua revealed a small excess velocity as it receded from the Sun. Initial speculation suggested that the anomaly was due to outgassing, though none had been detected. A summary of some Earth-flyby spacecraft is provided in the table below. Anderson's empirical relation. An empirical equation for the anomalous flyby velocity change was proposed in 2008 by J. D. Anderson et al.: formula_0 where ω_E is the angular frequency of the Earth, R_E is the Earth radius, c is the speed of light, and φ_i and φ_o are the inbound and outbound equatorial angles of the spacecraft. This formula was derived later by Jean Paul Mbelek from special relativity, leading to one of the possible explanations of the effect. This does not, however, consider the SSN residuals – see "Possible explanations" below. Possible explanations. There have been a number of proposed explanations of the flyby anomaly, including: Related research. 
Some missions designed to study gravity, such as MICROSCOPE and STEP, are designed to make extremely accurate gravity measurements and may shed some light on the anomaly. However, MICROSCOPE has completed its mission, finding nothing anomalous, and STEP is yet to fly. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\frac{dV}{V} = \\frac{2 \\omega_\\text{E} R_\\text{E} (\\cos \\varphi_\\text{i} - \\cos \\varphi_\\text{o})}{c}, " } ]
https://en.wikipedia.org/wiki?curid=5886692
58873681
Consani–Scholten quintic
Algebraic hypersurface In the mathematical fields of algebraic geometry and arithmetic geometry, the Consani–Scholten quintic is an algebraic hypersurface (the set of solutions to a single polynomial equation in multiple variables) studied in 2001 by Caterina Consani and Jasper Scholten. It has been used as a test case for the Langlands program. Definition. Consani and Scholten define their hypersurface from the (projectivized) set of solutions to the equation formula_0 in four complex variables, where formula_1 In this form the resulting hypersurface is singular: it has 120 double points. Its Hodge diamond is The Consani–Scholten quintic itself is the non-singular hypersurface obtained by blowing up these singularities. As a non-singular quintic threefold, it is a Calabi–Yau manifold. Modularity. According to the Langlands program, for any Calabi–Yau threefold formula_2 over formula_3, the Galois representations giving the action of the absolute Galois group on the formula_4-adic étale cohomology formula_5 (for prime numbers formula_4 of good reduction, which for this curve means any prime other than 2, 3, or 5) should have the same L-series as an automorphic form. This was known for "rigid" Calabi–Yau threefolds, for which the family of Galois representations has dimension two, by the proof of Serre's modularity conjecture. The Consani–Scholten quintic provides a non-rigid example, where the dimension is four. Consani and Scholten constructed a Hilbert modular form and conjectured that its L-series agreed with the Galois representations for their curve; this was proven by . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P(x,y)=P(z,w)" }, { "math_id": 1, "text": "P(x,y)=x^5+y^5-(5xy-5)(x^2+y^2-x-y)." }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "\\mathbb{Q}" }, { "math_id": 4, "text": "\\ell" }, { "math_id": 5, "text": "H^3(X_{\\bar\\mathbb{Q}},\\mathbb{Q}_{\\ell})" } ]
https://en.wikipedia.org/wiki?curid=58873681
58874832
Row polymorphism
Kind of polymorphism In programming language type theory, row polymorphism is a kind of polymorphism that allows one to write programs that are polymorphic on row types such as record types and polymorphic variants. A row-polymorphic type system and proof of type inference was introduced by Mitchell Wand. Row-polymorphic record type definition. The row-polymorphic record type defines a list of fields with their corresponding types, a list of missing fields, and a variable indicating the absence or presence of arbitrary additional fields. Both lists are optional, and the variable may be constrained. Specifically, the variable may be 'empty', indicating that no additional fields may be present for the record. It may be written as formula_0. This indicates a record type that has fields formula_1 with respective types of formula_2 (for formula_3), and does not have any of the fields formula_4 (for formula_5), while formula_6 expresses the fact the record may contain other fields than formula_1. Row-polymorphic record types allow us to write programs that operate only on a section of a record. For example, one may define a function that performs some two-dimensional transformation that accepts a record with two or more coordinates, and returns an identical type: formula_7 Thanks to row polymorphism, the function may perform two-dimensional transformation on a three-dimensional (in fact, "n"-dimensional) point, leaving the "z" coordinate (or any other coordinates) intact. In a more general sense, the function can perform on any record that contains the fields formula_8 and formula_9 with type formula_10. There is no loss of information: the type ensures that all the fields represented by the variable formula_6 are present in the return type. In contrast, the type definition formula_11 expresses the fact that a record of that type has exactly the formula_8 and formula_9 fields and nothing else. In this case, a classic record type is obtained. Typing operations on records. 
The record operations of selecting a field formula_12, adding a field formula_13, and removing a field formula_14 can be given row-polymorphic types. formula_15 formula_16 formula_17
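Python has no row-polymorphic static type system, but the runtime behaviour that the types above describe can be sketched with dictionaries: each function touches only the fields named in its type and passes every field in the row variable through unchanged. The type annotations in the docstrings echo the formulas above; the function names themselves are illustrative:

```python
def transform2d(point):
    """{x : Number, y : Number, rho} -> {x : Number, y : Number, rho}
    Rotate (x, y) by 90 degrees; all other fields (the row) pass through."""
    out = dict(point)                      # fields outside {x, y} are preserved
    out["x"], out["y"] = -point["y"], point["x"]
    return out

def select(r, label):
    """select_l : {l : T, rho} -> T"""
    return r[label]

def add(r, label, value):
    """add_l : {absent(l), rho} -> T -> {l : T, rho}"""
    assert label not in r                  # the type demands absent(l)
    return {**r, label: value}

def remove(r, label):
    """remove_l : {l : T, rho} -> {absent(l), rho}"""
    return {k: v for k, v in r.items() if k != label}
```

Calling `transform2d({"x": 1, "y": 2, "z": 5})` returns `{"x": -2, "y": 1, "z": 5}`: the "z" field, standing in for the row variable, survives untouched, which is exactly the guarantee the row-polymorphic type expresses.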
[ { "math_id": 0, "text": "\\{\\ell_1 : T_1, \\dots, \\ell_n : T_n, \\text{absent}(f_1), \\dots, \\text{absent}(f_m), \\rho\\}" }, { "math_id": 1, "text": "\\ell_i" }, { "math_id": 2, "text": "T_i" }, { "math_id": 3, "text": "i = 1 \\dots n" }, { "math_id": 4, "text": "f_j" }, { "math_id": 5, "text": "j = 1 \\dots m" }, { "math_id": 6, "text": "\\rho" }, { "math_id": 7, "text": "\\text{transform2d} : \\{x : \\text{Number}, y : \\text{Number}, \\rho\\} \\to \\{x : \\text{Number}, y : \\text{Number}, \\rho\\}" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "y" }, { "math_id": 10, "text": "\\text{Number}" }, { "math_id": 11, "text": "\\{x : \\text{Number}, y : \\text{Number}, \\mathbf{empty}\\}" }, { "math_id": 12, "text": "r.\\ell" }, { "math_id": 13, "text": "r[\\ell:=e]" }, { "math_id": 14, "text": "r \\backslash \\ell" }, { "math_id": 15, "text": "\n\\mathrm{select_\\ell} = \\lambda r. (r.\\ell) \\;:\\; \\{ \\ell : T, \\rho \\} \\rightarrow T\n" }, { "math_id": 16, "text": "\n\\mathrm{add_\\ell} = \\lambda r. \\lambda e. r[\\ell := e] \\;:\\; \\{\\mathrm{absent}(\\ell), \\rho\\} \\rightarrow T \\rightarrow \\{\\ell : T, \\rho\\} \n" }, { "math_id": 17, "text": "\n\\mathrm{remove_\\ell} = \\lambda r. r \\backslash \\ell \\;:\\; \\{\\ell : T, \\rho\\} \\rightarrow \\{\\mathrm{absent}(\\ell), \\rho \\} \n" } ]
https://en.wikipedia.org/wiki?curid=58874832
58876827
Many-body localization
Phenomenon of isolated many-body quantum systems not reaching thermal equilibrium Many-body localization (MBL) is a dynamical phenomenon occurring in isolated many-body quantum systems. It is characterized by the system failing to reach thermal equilibrium, and retaining a memory of its initial condition in local observables for infinite times. Thermalization and localization. Textbook quantum statistical mechanics assumes that systems go to thermal equilibrium (thermalization). The process of thermalization erases local memory of the initial conditions. In textbooks, thermalization is ensured by coupling the system to an external environment or "reservoir," with which the system can exchange energy. What happens if the system is isolated from the environment, and evolves according to its own Schrödinger equation? Does the system still thermalize? Quantum mechanical time evolution is unitary and formally preserves all information about the initial condition in the quantum state at all times. However, a quantum system generically contains a macroscopic number of degrees of freedom, but can only be probed through few-body measurements which are local in real space. The meaningful question then becomes whether accessible local measurements display thermalization. This question can be formalized by considering the quantum mechanical density matrix ρ of the system. If the system is divided into a subregion A (the region being probed) and its complement B (everything else), then all information that can be extracted by measurements made on A alone is encoded in the reduced density matrix formula_0. If, in the long time limit, formula_1 approaches a thermal density matrix at a temperature set by the energy density in the state, then the system has "thermalized," and no local information about the initial condition can be extracted from local measurements. This process of "quantum thermalization" may be understood in terms of B acting as a reservoir for A. 
In this perspective, the entanglement entropy formula_2 of a thermalizing system in a pure state plays the role of thermal entropy. Thermalizing systems therefore generically have extensive or "volume law" entanglement entropy at any non-zero temperature. They also generically obey the eigenstate thermalization hypothesis (ETH). In contrast, if formula_3 fails to approach a thermal density matrix even in the long time limit, and remains instead close to its initial condition formula_4, then the system retains forever a memory of its initial condition in local observables. This latter possibility is referred to as "many-body localization," and involves B failing to act as a reservoir for A. A system in a many-body localized phase exhibits MBL, and continues to exhibit MBL even when subject to arbitrary local perturbations. Eigenstates of systems exhibiting MBL do not obey the ETH, and generically follow an "area law" for entanglement entropy (i.e. the entanglement entropy scales with the surface area of subregion A). A brief list of properties differentiating thermalizing and MBL systems is provided below. History. MBL was first proposed by P.W. Anderson in 1958 as a possibility that could arise in strongly disordered quantum systems. The basic idea was that if particles all live in a random energy landscape, then any rearrangement of particles would change the energy of the system. Since energy is a conserved quantity in quantum mechanics, such a process can only be virtual and cannot lead to any transport of particle number or energy. While localization for single-particle systems was demonstrated already in Anderson's original paper (coming to be known as Anderson localization), the existence of the phenomenon for many-particle systems remained a conjecture for decades. In 1980 Fleishman and Anderson demonstrated that the phenomenon survived the addition of interactions to lowest order in perturbation theory. 
In a 1998 study, the analysis was extended to all orders in perturbation theory, in a zero-dimensional system, and the MBL phenomenon was shown to survive. In 2005 and 2006, this was extended to high orders in perturbation theory in high-dimensional systems. MBL was argued to survive at least at low energy density. A series of numerical works provided further evidence for the phenomenon in one-dimensional systems, at all energy densities (“infinite temperature”). Finally, in 2014 Imbrie presented a proof of MBL for certain one-dimensional spin chains with strong disorder, with the localization being stable to arbitrary local perturbations – i.e. the systems were shown to be in a many-body localized phase. It is now believed that MBL can arise also in periodically driven "Floquet" systems where energy is conserved only modulo the drive frequency. Emergent integrability. Many-body localized systems exhibit a phenomenon known as emergent integrability. In a non-interacting Anderson insulator, the occupation number of each localized single-particle orbital is separately a local integral of motion. It was conjectured (and proven by Imbrie) that a similar extensive set of local integrals of motion should also exist in the MBL phase. Consider for specificity a one-dimensional spin-1/2 chain with Hamiltonian formula_5 where X, Y and Z are Pauli operators, and h_i are random variables drawn from a distribution of some width W. When the disorder is strong enough (W > W_c) that all eigenstates are localized, then there exists a local unitary transformation to new variables τ such that formula_6 where τ are Pauli operators that are related to the physical Pauli operators by a local unitary transformation, the ... indicates additional terms which only involve τ^z operators, and the coefficients fall off exponentially with distance. 
This Hamiltonian manifestly contains an extensive number of localized integrals of motion or "l-bits" (the operators τ^z_i, which all commute with the Hamiltonian). If the original Hamiltonian is perturbed, the l-bits get redefined, but the integrable structure survives. Exotic orders. MBL enables the formation of exotic forms of quantum order that could not arise in thermal equilibrium, through the phenomenon of localization-protected quantum order. A form of localization-protected quantum order, arising only in periodically driven systems, is the Floquet time crystal. Experimental realizations. A number of experiments have been reported observing the MBL phenomenon. Most of these experiments involve synthetic quantum systems, such as assemblies of ultracold atoms or trapped ions. Experimental explorations of the phenomenon in solid-state systems are still in their infancy. 
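The disordered spin chain above can be diagonalized exactly for small sizes; the following sketch is illustrative only — drawing the fields h_i uniformly from [-W, W] is an assumed convention, and the system size and couplings are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-site Pauli operators.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(op, i, L):
    """Embed a single-site operator at site i of an L-site chain via Kronecker products."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == i else I2)
    return out

def hamiltonian(L, J=1.0, Jp=1.0, W=8.0):
    """H = sum_i [ J (X_i X_{i+1} + Y_i Y_{i+1}) + J' Z_i Z_{i+1} ] + sum_i h_i Z_i,
    with h_i drawn uniformly from [-W, W] (an assumption, one common convention)."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        H += J * (op_at(X, i, L) @ op_at(X, i + 1, L)
                  + op_at(Y, i, L) @ op_at(Y, i + 1, L))
        H += Jp * op_at(Z, i, L) @ op_at(Z, i + 1, L)
    for i in range(L):
        H += rng.uniform(-W, W) * op_at(Z, i, L)
    return H

L = 6
H = hamiltonian(L)
energies = np.linalg.eigvalsh(H)  # all 2^L levels by exact diagonalization
```

Studies of this kind typically compare eigenstate properties (level statistics, entanglement) across disorder strengths W to locate the crossover from thermalizing to MBL behaviour.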
[ { "math_id": 0, "text": "\\rho_A=\\operatorname{Tr}_B\\rho(t)" }, { "math_id": 1, "text": "\\rho_A(t)" }, { "math_id": 2, "text": "S=-\\operatorname{Tr}(\\rho_A \\log \\rho_A)" }, { "math_id": 3, "text": "\\rho_A(t)" }, { "math_id": 4, "text": "\\rho_A(0)" }, { "math_id": 5, "text": "H=\\sum_i \\left [ J \\left ( X_i X_{i+1} + Y_i Y_{i+1} \\right ) + J^\\prime Z_i Z_{i+1} + h_i Z_i \\right ]," }, { "math_id": 6, "text": "H=\\sum_i h^\\prime_i \\tau^z_i + \\sum_{ij} J_{ij} \\tau^z_i \\tau^z_j + \\sum_{ijk} K_{ijk} \\tau^z_i \\tau^z_j \\tau^z_k + \\cdots," } ]
https://en.wikipedia.org/wiki?curid=58876827
58878004
Stochastic gradient Langevin dynamics
Optimization and sampling technique Stochastic gradient Langevin dynamics (SGLD) is an optimization and sampling technique composed of characteristics from stochastic gradient descent, a Robbins–Monro optimization algorithm, and Langevin dynamics, a mathematical extension of molecular dynamics models. Like stochastic gradient descent, SGLD is an iterative optimization algorithm which uses minibatching to create a stochastic gradient estimator, as used in SGD to optimize a differentiable objective function. Unlike traditional SGD, SGLD can be used for Bayesian learning as a sampling method. SGLD may be viewed as Langevin dynamics applied to posterior distributions, but the key difference is that the likelihood gradient terms are minibatched, like in SGD. SGLD, like Langevin dynamics, produces samples from a posterior distribution of parameters based on available data. First described by Welling and Teh in 2011, the method has applications in many contexts which require optimization, and is most notably applied in machine learning problems. Formal definition. Given some parameter vector formula_0, its prior distribution formula_1, and a set of data points formula_2, Langevin dynamics samples from the posterior distribution formula_3 by updating the chain: formula_4 Stochastic gradient Langevin dynamics uses a modified update procedure with minibatched likelihood terms: formula_5 where formula_6 is a positive integer, formula_7 is Gaussian noise, formula_8 is the likelihood of the data given the parameter vector formula_0, and our step sizes formula_9 satisfy the following conditions: formula_10 For early iterations of the algorithm, each parameter update mimics stochastic gradient descent; however, as the algorithm approaches a local minimum or maximum, the gradient shrinks to zero and the chain produces samples surrounding the maximum a posteriori mode, allowing for posterior inference. 
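The update rule above can be sketched on a toy conjugate model; all numerical choices below (prior width, step-size schedule, minibatch size) are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: x_i ~ N(theta, 1) with prior theta ~ N(0, 10^2),
# so the exact posterior is Gaussian and easy to compare against.
N, theta_true = 1000, 2.0
x = rng.normal(theta_true, 1.0, size=N)

def grad_log_prior(theta):
    return -theta / 10.0**2              # d/dtheta log N(theta | 0, 10^2)

def grad_log_lik(theta, batch):
    return np.sum(batch - theta)         # d/dtheta sum log N(x_i | theta, 1)

n = 50                                   # minibatch size, n < N
theta, samples = 0.0, []
for t in range(1, 10001):
    eps = 1e-3 * (10 + t) ** -0.55       # sum eps_t diverges, sum eps_t^2 converges
    batch = x[rng.integers(0, N, size=n)]
    eta = rng.normal(0.0, np.sqrt(eps))  # eta_t ~ N(0, eps_t)
    theta += eps / 2 * (grad_log_prior(theta) + (N / n) * grad_log_lik(theta, batch)) + eta
    if t > 2000:                         # discard burn-in
        samples.append(theta)

post_mean = x.sum() / (N + 1 / 10.0**2)  # exact posterior mean, for comparison
```

With settings like these the chain mean should land close to the exact posterior mean, whereas plain SGD would collapse to a point estimate without quantifying spread.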
This process generates approximate samples from the posterior by balancing variance from the injected Gaussian noise and the stochastic gradient computation. Application. SGLD is applicable in any optimization context for which it is desirable to quickly obtain posterior samples instead of a maximum a posteriori mode. In doing so, the method maintains the computational efficiency of stochastic gradient descent when compared to traditional gradient descent while providing additional information regarding the landscape around the critical point of the objective function. In practice, SGLD can be applied to the training of Bayesian neural networks in deep learning, a task in which the method provides a distribution over model parameters. By introducing information about the variance of these parameters, SGLD characterizes the generalizability of these models at certain points in training. Additionally, obtaining samples from a posterior distribution permits uncertainty quantification by means of confidence intervals, a feature which is not possible using traditional stochastic gradient descent. Variants and associated algorithms. If gradient computations are exact, SGLD reduces to the "Langevin Monte Carlo" algorithm, first coined in the literature of lattice field theory. This algorithm is also a reduction of Hamiltonian Monte Carlo, consisting of a single leapfrog step proposal rather than a series of steps. Since SGLD can be formulated as a modification of both stochastic gradient descent and MCMC methods, the method lies at the intersection between optimization and sampling algorithms; the method maintains SGD's ability to quickly converge to regions of low cost while providing samples to facilitate posterior inference. 
Considering relaxed constraints on the step sizes formula_9 such that they do not approach zero asymptotically, SGLD fails to produce samples for which the Metropolis–Hastings rejection rate is zero, and thus an MH rejection step becomes necessary. The resulting algorithm, dubbed the Metropolis-adjusted Langevin algorithm, requires the step: formula_11 where formula_12 is a normal distribution centered one gradient descent step from formula_13 and formula_14 is our target distribution. Mixing rates and algorithmic convergence. Recent contributions have proven upper bounds on mixing times for both the traditional Langevin algorithm and the Metropolis-adjusted Langevin algorithm. Presented in Ma et al., 2018, these bounds characterize the rate at which the algorithms converge to the true posterior distribution, defined formally as: formula_15 where formula_16 is an arbitrary error tolerance, formula_17 is some initial distribution, formula_18 is the posterior distribution, and formula_19 is the total variation norm. Under some regularity conditions of an L-Lipschitz smooth objective function formula_20 which is m-strongly convex outside of a region of radius formula_21 with condition number formula_22, we have mixing rate bounds: formula_23 formula_24 where formula_25 and formula_26 refer to the mixing rates of the unadjusted Langevin algorithm and the Metropolis-adjusted Langevin algorithm, respectively. These bounds are important because they show that the computational complexity is polynomial in dimension formula_27 conditional on formula_28 being formula_29.
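The Metropolis-adjusted step can be illustrated on a simple one-dimensional target; the standard normal target and the step size below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

def log_p(theta):          # log of the target density (standard normal), up to a constant
    return -0.5 * theta**2

def grad_log_p(theta):
    return -theta

def log_q(to, frm, eps):
    """Log-density (up to a constant) of the Langevin proposal
    N(frm + eps/2 * grad log p(frm), eps); the constants cancel in the ratio."""
    mean = frm + eps / 2 * grad_log_p(frm)
    return -0.5 * (to - mean) ** 2 / eps

eps, theta, samples = 0.5, 0.0, []
for _ in range(20000):
    prop = theta + eps / 2 * grad_log_p(theta) + rng.normal(0.0, np.sqrt(eps))
    # Metropolis–Hastings correction: accept with probability min(1, ratio).
    log_ratio = (log_p(prop) + log_q(theta, prop, eps)
                 - log_p(theta) - log_q(prop, theta, eps))
    if np.log(rng.random()) < log_ratio:
        theta = prop
    samples.append(theta)
```

Because the rejection step exactly corrects the discretization error of the Langevin proposal, the chain targets the stationary distribution even at a fixed, non-vanishing step size.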
[ { "math_id": 0, "text": " \\theta " }, { "math_id": 1, "text": " p(\\theta) " }, { "math_id": 2, "text": " X = \\{x_i\\}_{i = 1}^N " }, { "math_id": 3, "text": " p ( \\theta \\mid X ) \\propto p (\\theta) \\prod_{i = 1}^N p(x_i \\mid \\theta) " }, { "math_id": 4, "text": "\\Delta \\theta_t = \\frac{\\varepsilon_t} 2 \\left( \\nabla \\log p(\\theta_t) + \\sum_{i=1}^N \\nabla \\log p(x_{t_i} \\mid \\theta_t) \\right) + \\eta_t " }, { "math_id": 5, "text": "\\Delta \\theta_t = \\frac{\\varepsilon_t} 2 \\left( \\nabla \\log p(\\theta_t) + \\frac{N}{n} \\sum_{i=1}^n \\nabla \\log p(x_{t_i} \\mid \\theta_t) \\right) + \\eta_t " }, { "math_id": 6, "text": "n < N" }, { "math_id": 7, "text": " \\eta_t \\sim \\mathcal{N}(0,\\varepsilon_t) " }, { "math_id": 8, "text": " p(x \\mid \\theta) " }, { "math_id": 9, "text": " \\varepsilon_t " }, { "math_id": 10, "text": "\\sum_{t = 1}^\\infty \\varepsilon_t = \\infty \\quad \\sum_{t=1}^\\infty \\varepsilon_t^2 < \\infty" }, { "math_id": 11, "text": "\\frac { p( \\mathbf{\\theta}^t \\mid \\mathbf{\\theta}^{t+1}) p^* \\left( \\mathbf {\\theta}^t \\right) } { p \\left( \\mathbf{\\theta}^{t+1} \\mid \\mathbf {\\theta}^t \\right) p^* (\\mathbf{\\theta}^{t+1})} < u, \\ u \\sim \\mathcal{U} [0,1] " }, { "math_id": 12, "text": "p(\\theta^t \\mid \\theta^{t + 1})" }, { "math_id": 13, "text": "\\theta^{t}" }, { "math_id": 14, "text": "p(\\theta)" }, { "math_id": 15, "text": "\\tau(\\varepsilon ; p^0) = \\min \\left\\{ k \\mid \\left\\| p^k - p^* \\right\\|_{\\mathrm{TV}} \\leq \\varepsilon \\right\\}" }, { "math_id": 16, "text": "\\varepsilon \\in (0,1)" }, { "math_id": 17, "text": "p^0" }, { "math_id": 18, "text": "p^*" }, { "math_id": 19, "text": "\\|\\cdot\\|_{\\mathrm{TV}}" }, { "math_id": 20, "text": "U(x)" }, { "math_id": 21, "text": "R" }, { "math_id": 22, "text": "\\kappa = \\frac{L}{m}" }, { "math_id": 23, "text": "\\tau_{ULA}(\\varepsilon,p^0) \\leq \\mathcal{O} \\left( e^{32LR^2} \\kappa^2 \\frac d {\\varepsilon^2} \\ln \\left( \\frac d {\\varepsilon^2} \\right) \\right)" }, { "math_id": 24, "text": "\\tau_{MALA} (\\varepsilon,p^0) \\leq \\mathcal{O} \\left( e^{16LR^2} \\kappa^{3/2} d^{1/2} \\left( d \\ln \\kappa + \\ln \\left( \\frac 1 \\varepsilon \\right) \\right)^{3/2} \\right)" }, { "math_id": 25, "text": "\\tau_{ULA}" }, { "math_id": 26, "text": "\\tau_{MALA}" }, { "math_id": 27, "text": "d" }, { "math_id": 28, "text": "LR^2" }, { "math_id": 29, "text": "\\mathcal{O}(\\log d)" } ]
https://en.wikipedia.org/wiki?curid=58878004
58881336
CICE (sea ice model)
Computer model that simulates sea ice CICE is a computer model that simulates the growth, melt and movement of sea ice. It has been integrated into many coupled climate system models as well as global ocean and weather forecasting models and is often used as a tool in Arctic and Southern Ocean research. CICE development began in the mid-1990s at the United States Department of Energy (DOE), and it is currently maintained and developed by a group of institutions in North America and Europe known as the CICE Consortium. Its widespread use in Earth system science in part owes to the importance of sea ice in determining Earth's planetary albedo, the strength of the global thermohaline circulation in the world's oceans, and in providing surface boundary conditions for atmospheric circulation models, since sea ice occupies a significant proportion (4-6%) of Earth's surface. CICE is a type of cryospheric model. Development. Development of CICE was begun in 1994 by Elizabeth Hunke at Los Alamos National Laboratory (LANL). Since its initial release in 1998 following development of the Elastic-Viscous-Plastic (EVP) sea ice rheology within the model, it has been substantially developed by an international community of model users and developers. Enthalpy-conserving thermodynamics and improvements to the sea ice thickness distribution were added to the model between 1998 and 2005. The first institutional user outside of LANL was the Naval Postgraduate School in the late 1990s, where it was subsequently incorporated into the Regional Arctic System Model (RASM) in 2011. The National Center for Atmospheric Research (NCAR) was the first to incorporate CICE into a global climate model in 2002, and developers of the NCAR Community Earth System Model (CESM) have continued to contribute to CICE innovations and have used it to investigate polar variability in Earth's climate system. 
The United States Navy began using CICE shortly after 2000 for polar research and sea ice forecasting, and it continues to do so today. Since 2000, CICE development or coupling to oceanic and atmospheric models for weather and climate prediction has occurred at the University of Reading, University College London, the U.K. Met Office Hadley Centre, Environment and Climate Change Canada, the Danish Meteorological Institute, the Commonwealth Scientific and Industrial Research Organisation, and Beijing Normal University, among other institutions. As a result of model development in the global community of CICE users, the model's computer code now includes a comprehensive saline ice physics and biogeochemistry library that incorporates mushy-layer thermodynamics, anisotropic continuum mechanics, Delta-Eddington radiative transfer, melt-pond physics and land-fast ice. CICE version 6 is open-source software and was released in 2018 on GitHub. Keystone Equations. There are two main physics equations solved using numerical methods in CICE that underpin the model's predictions of sea ice thickness, concentration and velocity, as well as predictions made with many equations not shown here giving, for example, surface albedo, ice salinity, snow cover, divergence, and biogeochemical cycles. The first keystone equation is Newton's second law for sea ice: formula_0 where formula_1 is the mass per unit area of saline ice on the sea surface, formula_2 is the drift velocity of the ice, formula_3 is the Coriolis parameter, formula_4 is the upward unit vector normal to the sea surface, formula_5 and formula_6 are the wind and water stress on the ice, respectively, formula_7 is acceleration due to gravity, formula_8 is sea surface height and formula_9 is the two-dimensional internal stress tensor within the ice. Each of the terms requires information about the ice thickness, roughness, and concentration, as well as the state of the atmospheric and oceanic boundary layers. 
Ice mass per unit area formula_1 is determined using the second keystone equation in CICE, which describes the evolution of the sea ice thickness distribution formula_10 for different thicknesses formula_11 spread over the area for which sea ice velocity is calculated above: formula_12 where formula_13 is the change in the thickness distribution due to thermodynamic growth and melt, formula_14 is the redistribution function due to sea ice mechanics and is associated with internal ice stress formula_9, and formula_15 describes advection of sea ice in a Lagrangian reference frame. From this, ice mass is given by: formula_16 for density formula_17 of sea ice. Code Design. CICE version 6 is coded in Fortran 90. It is organized into a dynamical core (dycore) and a separate column physics package called "Icepack", which is maintained as a CICE submodule on GitHub. The momentum equation and thickness advection described above are time-stepped on a quadrilateral Arakawa B-grid within the dynamical core, while Icepack solves diagnostic and prognostic equations necessary for calculating radiation physics, hydrology, thermodynamics, and vertical biogeochemistry, including terms necessary to calculate formula_5, formula_6, formula_9, formula_13, and formula_14 defined above. CICE can be run independently, as in the first figure on this page, but is frequently coupled with Earth system models through an external flux coupler, such as the CESM Flux Coupler from NCAR, for which results are shown in the second figure for the CESM Large Ensemble. The column physics were separated into Icepack for the version 6 release to permit insertion into Earth system models that use their own sea ice dynamical core, including the new DOE Energy Exascale Earth System Model (E3SM), which uses an unstructured grid in the sea ice component of the Model for Prediction Across Scales (MPAS), as demonstrated in the final figure. 
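The mass integral formula_16 discretizes straightforwardly; in the sketch below the thickness grid, the illustrative distribution g(h), and the ice density are example values, not CICE's internal choices:

```python
import numpy as np

rho = 917.0                       # ice density, kg/m^3 (a typical value, not CICE's exact constant)
h = np.linspace(0.0, 5.0, 501)    # thickness grid, m
dh = h[1] - h[0]

# Illustrative thickness distribution g(h): uniform over [0, 2] m, zero above,
# normalized so that the integral of g(h) dh is 1.
g = np.where(h <= 2.0, 0.5, 0.0)

# m = rho * int h g(h) dh, here as a simple Riemann sum; the mean thickness of
# this g is 1 m, so m comes out close to rho itself (in kg/m^2).
m = rho * np.sum(h * g) * dh
```

In CICE the distribution g(h) is carried in a small number of thickness categories, so the integral is in practice a short sum over category means.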
[ { "math_id": 0, "text": "m\\frac{d \\mathbf{u}}{d t}=-mf\\mathrm{k}\\times\\mathbf{u}+\\tau_a+\\tau_w-m\\mathrm{\\hat{g}}\\nabla\\mu+\\nabla\\cdot\\mathbf{\\sigma}" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "\\mathbf{u}" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "\\mathrm{k}" }, { "math_id": 5, "text": "\\tau_a" }, { "math_id": 6, "text": "\\tau_w" }, { "math_id": 7, "text": "\\mathrm{\\hat{g}}" }, { "math_id": 8, "text": "\\mu" }, { "math_id": 9, "text": "\\mathbf{\\sigma}" }, { "math_id": 10, "text": "g(h)" }, { "math_id": 11, "text": "h" }, { "math_id": 12, "text": "\\frac{dg}{dt}=\\theta + \\psi - g(\\nabla \\cdot \\mathbf{u})" }, { "math_id": 13, "text": "\\theta" }, { "math_id": 14, "text": "\\psi" }, { "math_id": 15, "text": "-g(\\nabla \\cdot \\mathbf{u})" }, { "math_id": 16, "text": "m=\\rho\\int_0^\\infty h \\, g(h) \\, dh" }, { "math_id": 17, "text": "\\rho" } ]
https://en.wikipedia.org/wiki?curid=58881336
58884863
Gan–Gross–Prasad conjecture
In mathematics, the Gan–Gross–Prasad conjecture is a restriction problem in the representation theory of real or p-adic Lie groups posed by Gan Wee Teck, Benedict Gross, and Dipendra Prasad. The problem originated from a conjecture of Gross and Prasad for special orthogonal groups but was later generalized to include all four classical groups. In the cases considered, it is known that the multiplicity of the restrictions is at most one and the conjecture describes when the multiplicity is precisely one. Motivation. A motivating example is the following classical branching problem in the theory of compact Lie groups. Let formula_0 be an irreducible finite dimensional representation of the compact unitary group formula_1, and consider its restriction to the naturally embedded subgroup formula_2. It is known that this restriction is multiplicity-free, but one may ask precisely which irreducible representations of formula_2 occur in the restriction. By the Cartan–Weyl theory of highest weights, there is a classification of the irreducible representations of formula_1 via their highest weights which are in natural bijection with sequences of integers formula_3. Now suppose that formula_0 has highest weight formula_4. Then an irreducible representation formula_5 of formula_2 with highest weight formula_6 occurs in the restriction of formula_0 to formula_2 (viewed as a subgroup of formula_1) if and only if formula_4 and formula_6 are interlacing, i.e. formula_7. The Gan–Gross–Prasad conjecture then considers the analogous restriction problem for other classical groups. Statement. The conjecture has slightly different forms for the different classical groups. The formulation for unitary groups is as follows. Setup. Let formula_8 be a finite-dimensional vector space over a field formula_9 not of characteristic formula_10 equipped with a non-degenerate sesquilinear form that is formula_11-Hermitian (i.e. 
formula_12 if the form is Hermitian and formula_13 if the form is skew-Hermitian). Let formula_14 be a non-degenerate subspace of formula_8 such that formula_15 and formula_16 is of dimension formula_17. Then let formula_18, where formula_19 is the unitary group preserving the form on formula_8, and let formula_20 be the diagonal subgroup of formula_21. Let formula_22 be an irreducible smooth representation of formula_21 and let formula_23 be either the trivial representation (the "Bessel case") or the Weil representation (the "Fourier–Jacobi case"). Let formula_24 be a generic L-parameter for formula_18, and let formula_25 be the associated Vogan L-packet. Local Gan–Gross–Prasad conjecture. If formula_26 is a local L-parameter for formula_21, then formula_27 Letting formula_28 be the "distinguished character" defined in terms of the Langlands–Deligne local constant, then furthermore formula_29 Global Gan–Gross–Prasad conjecture. For a quadratic field extension formula_30, let formula_31 where formula_32 is the global L-function obtained as the product of local L-factors given by the local Langlands conjectures. The conjecture states that the following are equivalent: (1) the period integral formula_33 is non-zero on formula_22; (2) for all places formula_34, formula_35, and formula_36. Current status. Local Gan–Gross–Prasad conjecture. In a series of four papers between 2010 and 2012, Jean-Loup Waldspurger proved the local Gan–Gross–Prasad conjecture for tempered representations of special orthogonal groups over p-adic fields. In 2012, Colette Moeglin and Waldspurger then proved the local Gan–Gross–Prasad conjecture for generic non-tempered representations of special orthogonal groups over p-adic fields. In his 2013 thesis, Raphaël Beuzart-Plessis proved the local Gan–Gross–Prasad conjecture for the tempered representations of unitary groups in the p-adic Hermitian case under the same hypotheses needed to establish the local Langlands conjecture. Hongyu He proved the Gan–Gross–Prasad conjectures for discrete series representations of the real unitary group U(p,q). 
Global Gan–Gross–Prasad conjecture. In a series of papers between 2004 and 2009, David Ginzburg, Dihua Jiang, and Stephen Rallis showed the (1) implies (2) direction of the global Gan–Gross–Prasad conjecture for all quasisplit classical groups. In the Bessel case of the global Gan–Gross–Prasad conjecture for unitary groups, Wei Zhang used the theory of the relative trace formula by Hervé Jacquet and the work on the fundamental lemma by Zhiwei Yun to prove in 2014 that the conjecture is true subject to certain local conditions. In the Fourier–Jacobi case of the global Gan–Gross–Prasad conjecture for unitary groups, Yifeng Liu and Hang Xue showed that the conjecture holds in the skew-Hermitian case, subject to certain local conditions. In the Bessel case of the global Gan–Gross–Prasad conjecture for special orthogonal groups and unitary groups, Dihua Jiang and Lei Zhang used the theory of twisted automorphic descents to prove that (1) implies (2) in its full generality, i.e. for any irreducible cuspidal automorphic representation with a generic global Arthur parameter, and that (2) implies (1) subject to a certain global assumption. 
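The interlacing criterion from the motivating branching example can be checked mechanically; a small illustrative script (the helper name `interlaces` is ours):

```python
def interlaces(a, b):
    """True if highest weights a (for U(n)) and b (for U(n-1)) interlace:
    a_1 >= b_1 >= a_2 >= b_2 >= ... >= b_{n-1} >= a_n."""
    return len(b) == len(a) - 1 and all(
        a[i] >= b[i] >= a[i + 1] for i in range(len(b)))

# Restriction of the U(3) irrep with highest weight (2, 1, 0) to U(2):
a = (2, 1, 0)
occurs = [(b1, b2)
          for b1 in range(a[0], a[-1] - 1, -1)
          for b2 in range(b1, a[-1] - 1, -1)
          if interlaces(a, (b1, b2))]
# occurs == [(2, 1), (2, 0), (1, 1), (1, 0)]; their dimensions b1 - b2 + 1
# sum to 8, the dimension of the U(3) irrep, reflecting multiplicity one.
```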
[ { "math_id": 0, "text": "\\pi" }, { "math_id": 1, "text": "U(n)" }, { "math_id": 2, "text": "U(n-1)" }, { "math_id": 3, "text": "\\underline{a} = (a_1 \\geq a_2 \\geq \\cdots \\geq a_n)" }, { "math_id": 4, "text": "\\underline{a}" }, { "math_id": 5, "text": "\\tau" }, { "math_id": 6, "text": "\\underline{b}" }, { "math_id": 7, "text": "a_1 \\geq b_1 \\geq a_2 \\geq b_2 \\geq \\cdots \\geq b_{n-1} \\geq a_n" }, { "math_id": 8, "text": "V" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": "2" }, { "math_id": 11, "text": "\\varepsilon" }, { "math_id": 12, "text": "\\varepsilon = 1" }, { "math_id": 13, "text": "\\varepsilon = -1" }, { "math_id": 14, "text": "W" }, { "math_id": 15, "text": "V = W \\oplus W^\\perp" }, { "math_id": 16, "text": "W^\\perp" }, { "math_id": 17, "text": "(\\varepsilon + 1)/2" }, { "math_id": 18, "text": "G = G(V) \\times G(W)" }, { "math_id": 19, "text": "G(V)" }, { "math_id": 20, "text": "H = \\Delta G(W)" }, { "math_id": 21, "text": "G" }, { "math_id": 22, "text": "\\pi = \\pi_1 \\boxtimes \\pi_2" }, { "math_id": 23, "text": "\\nu" }, { "math_id": 24, "text": "\\varphi = \\varphi_1 \\times \\varphi_2" }, { "math_id": 25, "text": "\\Pi_\\varphi" }, { "math_id": 26, "text": "\\varphi" }, { "math_id": 27, "text": "\\sum_{\\text{relevant } \\pi \\in \\Pi_\\varphi} \\dim \\operatorname{Hom}_H (\\pi \\otimes \\overline{\\nu}, \\mathbb{C}) = 1." }, { "math_id": 28, "text": "\\eta_{\\mathrm{GP}}" }, { "math_id": 29, "text": "\\operatorname{Hom}_H (\\pi(\\varphi, \\eta) \\otimes \\overline{\\nu}, \\mathbb{C}) \\neq 0 \\text{ if and only if } \\eta = \\eta_{\\mathrm{GP}}." 
}, { "math_id": 30, "text": "E/F" }, { "math_id": 31, "text": "L_E(s, \\pi_1 \\times \\pi_2) := L_E(s, \\pi_1 \\boxtimes \\pi_2, \\mathrm{std}_n \\boxtimes \\mathrm{std}_{n-1})" }, { "math_id": 32, "text": "L_E" }, { "math_id": 33, "text": "P_H" }, { "math_id": 34, "text": "v" }, { "math_id": 35, "text": "\\operatorname{Hom}_{H(F_v)}(\\pi_v, \\nu_v) \\neq 0" }, { "math_id": 36, "text": "L_E(1/2, \\pi_1 \\times \\pi_2) \\neq 0" } ]
https://en.wikipedia.org/wiki?curid=58884863
5888998
Lenoir cycle
The Lenoir cycle is an idealized thermodynamic cycle often used to model a pulse jet engine. It is based on the operation of an engine patented by Jean Joseph Etienne Lenoir in 1860. This engine is often thought of as the first commercially produced internal combustion engine. The absence of any compression process in the design leads to lower thermal efficiency than the better-known Otto cycle and Diesel cycle. The cycle. In the cycle, an ideal gas undergoes 1–2: Constant volume (isochoric) heat addition; 2–3: Isentropic expansion; 3–1: Constant pressure (isobaric) heat rejection. The expansion process is isentropic and hence involves no heat interaction. Energy is absorbed as heat during the isochoric heating and released as work during the isentropic expansion. Waste heat is rejected during the isobaric cooling, which consumes some work. Constant volume heat addition (1–2). In the ideal gas version of the traditional Lenoir cycle, the first stage (1–2) involves the addition of heat in a constant volume manner. This results in the following for the first law of thermodynamics: formula_0 There is no work during the process because the volume is held constant: formula_1 and from the definition of constant volume specific heats for an ideal gas: formula_2 where "R" is the specific gas constant and "γ" is the ratio of specific heats (approximately 287 J/(kg·K) and 1.4 for air respectively). The pressure after the heat addition can be calculated from the ideal gas law: formula_3 Isentropic expansion (2–3). The second stage (2–3) involves a reversible adiabatic expansion of the fluid back to its original pressure. It can be determined for an isentropic process that the second law of thermodynamics results in the following: formula_4 where formula_5 for this specific cycle. The first law of thermodynamics results in the following for this expansion process: formula_6 because for an adiabatic process: formula_7 Constant pressure heat rejection (3–1). 
The final stage (3–1) involves a constant pressure heat rejection back to the original state. From the first law of thermodynamics we find: formula_8. From the definition of work: formula_9, we recover the following for the heat rejected during this process: formula_10. As a result, we can determine the heat rejected as follows: formula_11. For an ideal gas, formula_12. Efficiency. The overall efficiency of the cycle is determined by the total work over the heat input, which for a Lenoir cycle equals formula_13 Note that we gain work during the expansion process but lose some during the heat rejection process. Alternatively, the first law of thermodynamics can be used to put the efficiency in terms of the heat absorbed and heat rejected, formula_14 Utilizing that, for the isobaric process, "T"3/"T"1 = "V"3/"V"1, and for the adiabatic process, "T"2/"T"3 = ("V"3/"V"1)"γ"−1, the efficiency can be put in terms of the compression ratio, formula_15 where "r" = "V"3/"V"1 is defined to be > 1. Comparing this to the Otto cycle's efficiency graphically, it can be seen that the Otto cycle is more efficient at a given compression ratio. Alternatively, using the relationship given by process 2–3, the efficiency can be put in terms of "rp" = "p"2/"p"3, the pressure ratio, formula_16
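The two efficiency expressions are consistent, since the adiabatic leg gives rp = r^γ; this and the comparison with the Otto cycle can be checked numerically (an illustrative script, with γ = 1.4 for air):

```python
gamma = 1.4  # ratio of specific heats for air

def eta_lenoir_r(r, g=gamma):
    """Lenoir efficiency in terms of the compression ratio r = V3/V1 > 1."""
    return 1 - g * (r - 1) / (r**g - 1)

def eta_lenoir_rp(rp, g=gamma):
    """Lenoir efficiency in terms of the pressure ratio rp = p2/p3."""
    return 1 - g * (rp**(1 / g) - 1) / (rp - 1)

def eta_otto(r, g=gamma):
    """Otto cycle efficiency at the same compression ratio, for comparison."""
    return 1 - r**(1 - g)

r = 6.0
# The adiabatic leg 2-3 gives rp = r^gamma, so the two Lenoir forms agree,
# and the Otto cycle is more efficient at the same compression ratio.
```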
[ { "math_id": 0, "text": "{}_1Q_2 = mc_v \\left( {T_2 - T_1 } \\right)" }, { "math_id": 1, "text": "{}_1W_2 = \\int_1^2 {p\\,dV} = 0" }, { "math_id": 2, "text": "c_v = \\frac{R}{{\\gamma - 1}}" }, { "math_id": 3, "text": "p_2 v_2 = RT_2 " }, { "math_id": 4, "text": "\\frac{{T_2 }}{{T_3 }} = \\left( {\\frac{{p_2 }}{{p_3 }}} \\right)^{{\\textstyle{{\\gamma - 1} \\over \\gamma }}} = \\left( {\\frac{{V_3 }}{{V_2 }}} \\right)^{\\gamma - 1} " }, { "math_id": 5, "text": "p_3 = p_1" }, { "math_id": 6, "text": " {}_2W_3 = \\int_2^3 {p\\,dV} " }, { "math_id": 7, "text": "{}_2 Q_3 = 0\n" }, { "math_id": 8, "text": "{}_3 Q_1 - {}_3W_1 = U_1 - U_3 " }, { "math_id": 9, "text": "{}_3W_1 = \\int_3^1 {p\\,dV} = p_1 \\left( {V_1 - V_3 } \\right)" }, { "math_id": 10, "text": "{}_3Q_1 = \\left( {U_1 + p_1 V_1 } \\right) - \\left( {U_3 + p_3 V_3 } \\right) = H_1 - H_3 " }, { "math_id": 11, "text": "\n{}_3 Q_1 = mc_p \\left( {T_1 - T_3 } \\right)\n" }, { "math_id": 12, "text": "c_p = \\frac{{\\gamma R}}\n{{\\gamma - 1}}" }, { "math_id": 13, "text": "\\eta _{\\rm th} = \\frac{{{}_2W_3 + {}_3W_1 }}\n{{{}_1Q_2 }}." }, { "math_id": 14, "text": "\\eta_{\\rm th} = 1 - \\frac{{}_3Q_1}{{}_1Q_2} = 1 - \\gamma\\left( \\frac{T_3 - T_1}{T_2 - T_1} \\right)." }, { "math_id": 15, "text": "\\eta_{\\rm th} = 1 - \\gamma\\left( \\frac{r-1}{r^\\gamma - 1}\\right)," }, { "math_id": 16, "text": "\\eta_{\\rm th} = 1 - \\gamma\\left( \\frac{r_p^{1/\\gamma}-1}{r_p - 1}\\right)." } ]
https://en.wikipedia.org/wiki?curid=5888998
58891411
Push of the past
The push of the past is a type of survivorship bias associated with evolutionary diversification when extinction is possible. Groups that survive a long time are likely to have “got off to a flying start”, and this statistical bias creates an illusion of a true slow-down of diversification rate through time. Birth–Death modelling in evolutionary studies. The evolutionary processes of speciation and extinction can be modelled with a stochastic “birth–death model” (BDM), which is an important component in the study of macroevolution. A BDM assigns each species a certain probability of splitting (formula_0) or going extinct (formula_1) per interval of time. This gives rise to an exponential distribution, with the number of species in a particular clade "N" at any time "t" given by formula_2, although this expression only gives the expected value when formula_3 and formula_4 are large (see below). In the special case of there being no extinction, this simplifies to the so-called "Yule process". Lineage-through-time plots. A different type of plot of diversity through time, called a “lineage through time” (LTT) plot, "retrospectively" reconstructs the number of lineages that led to the living species of a group. This is equivalent to constructing a dated phylogeny and then counting how many branches are present at each time interval. As we know retrospectively that all such lineages survived until the present, it follows that no extinction is possible along them. It can be shown that the rate of production of new lineages through time is given by formula_5. Survivorship bias in diversification. Rather than considering the distribution of all possible stochastic outcomes for given values of formula_6 and formula_1 it is also possible to consider what happens when certain conditions of survivorship are imposed on the possible outcomes. Push of the past. If a BDM is forward-modelled, i.e. 
if the fate of an original single species is modelled through time, then a wide range of possible outcomes can occur, as the process is stochastic. With significant extinction rates, any particular clade is likely to be short-lived. However, we know that relatively long-lived clades such as the plants or animals by definition did "not" go extinct. As a result, their patterns of diversification will be a sub-set of all the possible outcomes for diversifications with their particular values of formula_0 and formula_1: all patterns with early extinction will be excluded. Imposing the condition of survival on a clade implies that rates of early diversification will be higher than expected. It can be shown that for a long-lived clade, the expected initial short-term rate of diversification is approximately formula_7, as opposed to the long-term rate of formula_5. However, the wide confidence intervals on this value mean that values of initial diversification of up to formula_8 fall within the 95% range. Long-lived clades should thus show a characteristic early burst of diversification that quickly declines to the long-term rate, an effect called the "push of the past". Pull of the present. For a normal-sized clade, the push of the past is only observed in the raw count of species through time (e.g. that reconstructed from the fossil record), but the rate of lineage increase is affected as the present is approached. This is because recently created sub-clades within a particular group have an expected lifetime, and as the present is approached, these sub-clades will not have had time to go extinct. Thus, the rate of creation of reconstructed lineages should increase in the near past from formula_5 to formula_0 in the present, since living species by definition have an observed zero extinction rate. This theoretical apparent increase in the rate of lineage production has been termed the "pull of the present".
In reality, the “pull of the present” has proven difficult to demonstrate: rates of lineage production in reconstructed phylogenies often show a slow-down or even decrease as the present is approached. This conundrum has been much discussed, and two major solutions have been proposed: first, that diversification is diversity dependent, so that as the carrying capacity of the environment is reached the rate of lineage production slows; secondly, that our modern species concept does not properly capture the “lineages” of BDM, and that speciation as we recognize it is only the end point of a drawn-out process of splitting of subpopulations through time, each of which could be considered to be a lineage in itself. Turnover and survivorship bias. For a given diversification rate of formula_5, it is possible to consider high turnover (λ and μ high) and low turnover (λ and μ low) scenarios. As the push of the past and pull of the present depend on the stochastic absence of extinction, it follows that both these effects are greatest when μ is high, i.e. in high turnover situations. For example, if λ is 0.6 and μ 0.55 (both measured in rates per species per million years), the initial rate of species production would be 1.2 (2λ); but if they were 0.15 and 0.1 respectively, the initial rate would only be 0.3, even though the overall diversification rate (formula_5) is the same in both cases, 0.05. It can be seen that the initial rate of diversification in the push of the past can be much greater than the background rate; in the first case here, 24 times higher. Such high rates have often been observed at the origin of major groups such as the animals and angiosperms. It is possible that such striking diversifications are thus simply an effect of survivorship bias, and that if overall rates could be measured at their time of origin (including those of groups that quickly went extinct) no unusual rates would be observed.
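The arithmetic in this section follows directly from the rates; a minimal sketch reproducing it (rates in events per species per million years, with the expected clade size from the birth–death model included for reference):

```python
import math

def initial_rate(lam):
    # expected short-term diversification rate of a surviving clade: 2 * lambda
    return 2.0 * lam

def net_rate(lam, mu):
    # long-term (background) diversification rate: lambda - mu
    return lam - mu

def expected_clade_size(n0, lam, mu, dt):
    # E[N(t)] = N(t0) * exp((lambda - mu) * (t - t0)) under the birth-death model
    return n0 * math.exp(net_rate(lam, mu) * dt)
```

With λ = 0.6 and μ = 0.55 the initial rate is 1.2 against a net rate of 0.05, a 24-fold elevation; with λ = 0.15 and μ = 0.1 the initial rate is only 0.3 at the same net rate.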
Consideration of the null hypothesis of survivorship bias is thus important when assigning causes to apparent cases of early rapid diversification. Crown group origins. The effect of the push of the past appears to be the reason that crown groups tend to emerge early within the history of a group as a whole: groups that diversify readily tend to create early new lineages. Mass extinctions and the push of the past. The push of the past is an expected effect whenever a small group is diversifying and its future survival is known to have occurred. It should thus also be seen in groups that were heavily affected by mass extinctions and went on to rediversify. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\lambda" }, { "math_id": 1, "text": "\\mu" }, { "math_id": 2, "text": "N_{(t)}= N_{(t0)}e^{(\\lambda-\\mu)(t-t0)}" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "t" }, { "math_id": 5, "text": "\\lambda-\\mu" }, { "math_id": 6, "text": "t, \\lambda" }, { "math_id": 7, "text": "2\\lambda" }, { "math_id": 8, "text": "3\\lambda" } ]
https://en.wikipedia.org/wiki?curid=58891411
58892481
Ulam–Warburton automaton
The Ulam–Warburton cellular automaton (UWCA) is a 2-dimensional fractal pattern that grows on a regular grid of cells consisting of squares. Starting with one square initially ON and all others OFF, successive iterations are generated by turning ON all squares that share precisely one edge with an ON square. This is the von Neumann neighborhood. The automaton is named after the Polish-American mathematician and scientist Stanislaw Ulam and the Scottish engineer, inventor and amateur mathematician Mike Warburton. Properties and relations. The UWCA is a 2D 5-neighbor outer totalistic cellular automaton using rule 686. The number of cells turned ON in each iteration is denoted formula_0 with an explicit formula: formula_1 and for formula_2 formula_3 where formula_4 is the Hamming weight function, which counts the number of 1's in the binary expansion of formula_5 formula_6 The upper limit of the summation, formula_7, is the smallest integer such that formula_8 The total number of cells turned ON is denoted formula_9 formula_10 Table of "wt(n)", "u(n)" and "U(n)". The table shows that different inputs to formula_4 can lead to the same output. This many-to-one behaviour emerges from the simple rule of growth: a new cell is born if it shares only one edge with an existing ON cell. The process appears disorderly and is modeled by functions involving formula_4, but within the chaos there is regularity. formula_12 is OEIS sequence A147562 and formula_11 is OEIS sequence A147582. Counting cells with quadratics. For all integer sequences of the form formula_13 where formula_14 and formula_15, let formula_16 Then the total number of ON cells in the integer sequence formula_18 is given by formula_19 Or in terms of formula_7 we have formula_20 Upper and lower bounds. formula_12 has fractal-like behavior with a sharp upper bound for formula_21 given by formula_22 The upper bound only contacts formula_12 at 'high-water' points, when formula_23.
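The closed-form count "u"("n"), the running total "U"("n"), and the sharp upper bound can be cross-checked against a direct simulation of the growth rule; a minimal sketch in pure Python (generation 1 being the single initial ON cell):

```python
def wt(n):
    # Hamming weight: number of 1s in the binary expansion of n
    return bin(n).count("1")

def u(n):
    # cells turned ON at generation n: u(0) = 0, u(1) = 1, else 4 * 3^(wt(n-1) - 1)
    if n <= 1:
        return n
    return 4 * 3 ** (wt(n - 1) - 1)

def U(n):
    # total cells turned ON through generation n
    return sum(u(i) for i in range(n + 1))

def simulate(generations):
    # grow the automaton directly: an OFF cell turns ON when it has exactly
    # one ON von Neumann neighbour; returns the number of cells born per generation
    on = {(0, 0)}
    born = [1]
    for _ in range(generations - 1):
        counts = {}
        for (x, y) in on:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                c = (x + dx, y + dy)
                if c not in on:
                    counts[c] = counts.get(c, 0) + 1
        new = {c for c, k in counts.items() if k == 1}
        on |= new
        born.append(len(new))
    return born
```

The simulation reproduces "u"("n") exactly, and "U"("n") stays below (4"n"² − 1)/3 with equality at the high-water generations "n" = 2"k".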
These are also the generations at which the UWCA based on squares, the Hex-UWCA based on hexagons and the Sierpinski triangle return to their base shape. Limit superior and limit inferior. We have formula_24 The lower limit was obtained by Robert Price (OEIS sequence A261313), took several weeks to compute, and is believed to be twice the lower limit of formula_25, where formula_26 is the total number of toothpicks in the toothpick sequence up to generation formula_27 Relationship to. Hexagonal UWCA. The Hexagonal-Ulam–Warburton cellular automaton (Hex-UWCA) is a 2-dimensional fractal pattern that grows on a regular grid of cells consisting of hexagons. The same growth rule as for the UWCA applies, and the pattern returns to a hexagon in generations formula_23, when the first hexagon is considered as generation formula_28. The UWCA has two reflection lines that pass through the corners of the initial cell, dividing the square into four quadrants; similarly, the Hex-UWCA has three reflection lines dividing the hexagon into six sections, and the growth rule follows the symmetries. Cells whose centers lie on a line of reflection symmetry are never born. Sierpinski triangle. The Sierpinski triangle appears in 13th century Italian floor mosaics. Wacław Sierpiński described the triangle in 1915. If we consider the growth of the triangle, with each row corresponding to a generation and the top row (generation formula_28) a single triangle, then, like the UWCA and the Hex-UWCA, it returns to its starting shape in generations formula_29 Toothpick sequence. The toothpick pattern is constructed by placing a single toothpick of unit length on a square grid, aligned with the vertical axis. At each subsequent stage, for every exposed toothpick end, a perpendicular toothpick is placed centred at that end. The resulting structure has a fractal-like appearance.
The toothpick and UWCA structures are examples of cellular automata defined on a graph, and when considered as a subgraph of the infinite square grid the structure is a tree. The toothpick sequence returns to its base rotated ‘H’ shape in generations formula_30 where formula_31 Combinatorial game theory. A subtraction game called LIM, in which two players alternately modify three piles of tokens by taking an equal amount of tokens from two of the piles and adding the same amount to the third pile, has a set of winning positions that can be described using the Ulam–Warburton automaton. History. The beginnings of automata go back to a conversation Ulam had with Stanislaw Mazur in a coffee house in Lwów, Poland, in 1929, when Ulam was twenty. Ulam worked with John von Neumann during the war years, when they became good friends and discussed cellular automata. Von Neumann used these ideas in his concept of a universal constructor and the digital computer. Ulam focussed on biological and ‘crystal like’ patterns, publishing a sketch of the growth of a square-based cell structure using a simple rule in 1962. Mike Warburton is an amateur mathematician working in probabilistic number theory who was educated at George Heriot's School in Edinburgh. His son's mathematics GCSE coursework involved investigating the growth of equilateral triangles or squares in the Euclidean plane with the rule that a new generation is born if and only if it is connected to the last by only one edge. That coursework concluded with a recursive formula for the number of ON cells born in each generation. Later, Warburton found the sharp upper bound formula, which he wrote up as a note in the Open University’s M500 magazine in 2002. David Singmaster read the article, analysed the structure and named the object the Ulam–Warburton cellular automaton in his 2003 article. Since then it has given rise to numerous integer sequences.
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " u(n)," }, { "math_id": 1, "text": "u(0)=0, u(1)=1," }, { "math_id": 2, "text": " n \\ge 2" }, { "math_id": 3, "text": " u(n) = 4\\cdot 3^{wt(n-1)-1}" }, { "math_id": 4, "text": "wt(n)" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "wt(n)=n-\\sum_{k=1}^{\\infty} \\left\\lfloor\\frac{n}{2^k}\\right\\rfloor" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": " 2^k \\geq n " }, { "math_id": 9, "text": " U(n)" }, { "math_id": 10, "text": "U(n)=\\sum_{i \\mathop =0}^n u(i) = \\frac{4}{3}\\sum_{i \\mathop =0}^{n-1} 3^{wt(i)}-\\frac{1}{3}" }, { "math_id": 11, "text": "u(n)" }, { "math_id": 12, "text": "U(n)" }, { "math_id": 13, "text": "n_m=m\\cdot2^k" }, { "math_id": 14, "text": " m \\ge 1" }, { "math_id": 15, "text": " k \\ge 0" }, { "math_id": 16, "text": "a_m=\\sum_{i \\mathop =0}^{m-1} 3^{wt(i)}" }, { "math_id": 17, "text": "a_m" }, { "math_id": 18, "text": "n_m" }, { "math_id": 19, "text": "U_m(n_m)=\\frac{a_m}{m^2}\\frac{4}{3}n_m^2 - \\frac{1}{3}" }, { "math_id": 20, "text": "U_m(k)=a_m\\frac{4}{3}2^{2k} - \\frac{1}{3}" }, { "math_id": 21, "text": " n\\ge 1" }, { "math_id": 22, "text": "U_\\text{sub}(n)=\\frac{4}{3}n^2-\\frac{1}{3}" }, { "math_id": 23, "text": "n=2^k" }, { "math_id": 24, "text": "0.9026116569...=\\liminf_{n\\to\\infty}\\frac{U(n)}{n^2} < \\limsup_{n\\to\\infty}\\frac{U(n)}{n^2} = \\frac{4}{3} " }, { "math_id": 25, "text": " \\frac{T(n)}{n^2} " }, { "math_id": 26, "text": "T(n)" }, { "math_id": 27, "text": " n" }, { "math_id": 28, "text": "1" }, { "math_id": 29, "text": "n=2^k." }, { "math_id": 30, "text": "n=2^k " }, { "math_id": 31, "text": "k \\ge 1" } ]
https://en.wikipedia.org/wiki?curid=58892481
58898994
Non-orthogonal frequency-division multiplexing
Method of encoding digital data on multiple carrier frequencies Non-orthogonal frequency-division multiplexing (N-OFDM) is a method of encoding digital data on multiple carrier frequencies with non-orthogonal intervals between the frequencies of the sub-carriers. N-OFDM signals can be used in communication and radar systems. Subcarriers system. The low-pass equivalent N-OFDM signal is expressed as: formula_0 where formula_1 are the data symbols, formula_2 is the number of sub-carriers, and formula_3 is the N-OFDM symbol time. The sub-carrier spacing formula_4 for formula_5 makes them non-orthogonal over each symbol period. History. The theory of N-OFDM signals started in 1992 with the Patent of Russian Federation No. 2054684. In this patent, Vadym Slyusar proposed the first method of optimal processing for N-OFDM signals after the Fast Fourier transform (FFT). In this regard, it should be noted that W. Kozek and A. F. Molisch wrote in 1998, regarding N-OFDM signals with formula_5, that "it is not possible to recover the information from the received signal, even in the case of an ideal channel." In 2001, V. Slyusar proposed non-orthogonal frequency digital modulation (N-OFDM) as an alternative to OFDM for communications systems. The next publication about this method, in July 2002, has priority over the conference paper regarding SEFDM by I. Darwazeh and M.R.D. Rodrigues (September 2003). Advantages of N-OFDM. Despite the increased complexity of demodulating N-OFDM signals compared to OFDM, the transition to a non-orthogonal subcarrier frequency arrangement provides several advantages. Idealized system model. This section describes a simple idealized N-OFDM system model suitable for a time-invariant AWGN channel. Transmitter N-OFDM signals. An N-OFDM carrier signal is the sum of a number of non-orthogonal subcarriers, with baseband data on each subcarrier being independently modulated, commonly using some type of quadrature amplitude modulation (QAM) or phase-shift keying (PSK).
This composite baseband signal is typically used to modulate a main RF carrier. formula_6 is a serial stream of binary digits. By inverse multiplexing, these are first demultiplexed into formula_7 parallel streams, and each one mapped to a (possibly complex) symbol stream using some modulation constellation (QAM, PSK, etc.). Note that the constellations may be different, so some streams may carry a higher bit-rate than others. A digital signal processor (DSP) transforms each set of symbols into a set of complex time-domain samples. These samples are then quadrature-mixed to passband in the standard way. The real and imaginary components are first converted to the analogue domain using digital-to-analogue converters (DACs); the analogue signals are then used to modulate cosine and sine waves at the carrier frequency, formula_8, respectively. These signals are then summed to give the transmission signal, formula_9. Demodulation. Receiver. The receiver picks up the signal formula_10, which is then quadrature-mixed down to baseband using cosine and sine waves at the carrier frequency. This also creates signals centered on formula_11, so low-pass filters are used to reject these. The baseband signals are then sampled and digitised using analog-to-digital converters (ADCs), and a forward FFT is used to convert back to the frequency domain. This returns formula_2 parallel streams, each of which is passed to an appropriate symbol detector. Demodulation after FFT. The first method of optimal processing for N-OFDM signals after FFT was proposed in 1992. Demodulation without FFT. Demodulation by using ADC samples. The method of optimal processing for N-OFDM signals without FFT was proposed in October 2003. In this case, ADC samples can be used directly. N-OFDM+MIMO. The combination of N-OFDM and MIMO technology is similar to that of OFDM. A digital antenna array can be used as the transmitter and receiver of N-OFDM signals to build a MIMO system. Fast-OFDM. The Fast-OFDM method was proposed in 2002.
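The loss of orthogonality that distinguishes these methods from OFDM can be illustrated by computing the inner product of two subcarriers of the low-pass equivalent signal over one symbol period; a minimal sketch in pure Python (symbol period normalised to 1, with α = 0.8 chosen arbitrarily for illustration):

```python
import cmath

def subcarrier_inner_product(k, m, alpha, n_samples=4096):
    # discrete approximation of the normalised inner product over one symbol:
    # (1/T) * integral_0^T exp(j*2*pi*alpha*(k - m)*t/T) dt
    acc = 0j
    for n in range(n_samples):
        t = n / n_samples
        acc += cmath.exp(2j * cmath.pi * alpha * (k - m) * t)
    return acc / n_samples
```

For α = 1 (plain OFDM) the inner product of distinct subcarriers vanishes; for α &lt; 1 it does not, which is why a plain FFT no longer separates the subcarriers and the optimal-processing methods described in this article are needed.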
Filter-bank multi-carrier modulation (FBMC). Filter-bank multi-carrier modulation (FBMC) generates each subcarrier with a dedicated filter from a bank of filters rather than with the rectangular pulse implicit in an FFT. Wavelet N-OFDM can be considered an example of FBMC. Wavelet N-OFDM. N-OFDM has become a technique for power-line communications (PLC). In this area of research, a wavelet transform is introduced to replace the DFT as the method of creating non-orthogonal frequencies. This is due to the advantages wavelets offer, which are particularly useful on noisy power lines. To create the transmitted signal, wavelet N-OFDM uses a synthesis bank consisting of an formula_2-band transmultiplexer followed by the transform function formula_12 On the receiver side, an analysis bank is used to demodulate the signal again. This bank contains an inverse transform formula_13 followed by another formula_2-band transmultiplexer. The relationship between the two transform functions is formula_14 Spectrally-efficient FDM (SEFDM). N-OFDM is a spectrally efficient method, and all SEFDM methods are similar to N-OFDM. Generalized frequency division multiplexing (GFDM). "Generalized frequency division multiplexing" ("GFDM") is a non-orthogonal multi-carrier modulation scheme. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n \\nu(t) = \\sum_{k=0}^{N-1}X_k e^{j2\\pi\\alpha kt/T},\\quad 0 \\le t < T,\n" }, { "math_id": 1, "text": "X_k" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "\\alpha/T" }, { "math_id": 5, "text": "\\alpha < 1" }, { "math_id": 6, "text": "s[n]" }, { "math_id": 7, "text": "\\scriptstyle N" }, { "math_id": 8, "text": "f_\\text{c}" }, { "math_id": 9, "text": "s(t)" }, { "math_id": 10, "text": "r(t)" }, { "math_id": 11, "text": "2 f_\\text{c}" }, { "math_id": 12, "text": " F_n(z) = \\sum_{k=0}^{L-1} f_n(k) z^{-k},\\quad 0 \\leq n < N " }, { "math_id": 13, "text": " G_n(z) = \\sum_{k=0}^{L-1} g_n(k) z^{-k},\\quad 0 \\leq n < N " }, { "math_id": 14, "text": "\\begin{align}\n f_n(k) &= g_n(L - 1 - k) \\\\\n F_n(z) &= z^{-(L-1)} G_n(z^{-1})\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=58898994
58899
Direct sum of modules
Operation in abstract algebra In abstract algebra, the direct sum is a construction which combines several modules into a new, larger module. The direct sum of modules is the smallest module which contains the given modules as submodules with no "unnecessary" constraints, making it an example of a coproduct. Contrast with the direct product, which is the dual notion. The most familiar examples of this construction occur when considering vector spaces (modules over a field) and abelian groups (modules over the ring Z of integers). The construction may also be extended to cover Banach spaces and Hilbert spaces. See the article decomposition of a module for a way to write a module as a direct sum of submodules. Construction for vector spaces and abelian groups. We give the construction first in these two cases, under the assumption that we have only two objects. Then we generalize to an arbitrary family of arbitrary modules. The key elements of the general construction are more clearly identified by considering these two cases in depth. Construction for two vector spaces. Suppose "V" and "W" are vector spaces over the field "K". The cartesian product "V" × "W" can be given the structure of a vector space over "K" by defining the operations componentwise: for "v", "v"1, "v"2 ∈ "V", "w", "w"1, "w"2 ∈ "W", and "α" ∈ "K". The resulting vector space is called the "direct sum" of "V" and "W" and is usually denoted by a plus symbol inside a circle: formula_0 It is customary to write the elements of an ordered sum not as ordered pairs ("v", "w"), but as a sum "v" + "w". The subspace "V" × {0} of "V" ⊕ "W" is isomorphic to "V" and is often identified with "V"; similarly for {0} × "W" and "W". (See "internal direct sum" below.) With this identification, every element of "V" ⊕ "W" can be written in one and only one way as the sum of an element of "V" and an element of "W". The dimension of "V" ⊕ "W" is equal to the sum of the dimensions of "V" and "W". 
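The componentwise operations can be sketched concretely by representing an element of "V" ⊕ "W" as an ordered pair of coordinate tuples (a minimal illustration, not tied to any particular library):

```python
def ds_add(p, q):
    # (v1, w1) + (v2, w2) = (v1 + v2, w1 + w2), componentwise in each factor
    (v1, w1), (v2, w2) = p, q
    return (tuple(a + b for a, b in zip(v1, v2)),
            tuple(a + b for a, b in zip(w1, w2)))

def ds_scale(alpha, p):
    # alpha * (v, w) = (alpha * v, alpha * w)
    v, w = p
    return (tuple(alpha * a for a in v), tuple(alpha * a for a in w))

def ds_dim(p):
    # dim(V ⊕ W) = dim V + dim W
    v, w = p
    return len(v) + len(w)
```

The dimension of a pair is the sum of the dimensions of the two factors, as stated above.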
One elementary use is the reconstruction of a finite-dimensional vector space from any subspace "W" and its orthogonal complement: formula_1 This construction readily generalizes to any finite number of vector spaces. Construction for two abelian groups. For abelian groups "G" and "H" which are written additively, the direct product of "G" and "H" is also called a direct sum. Thus the Cartesian product "G" × "H" is equipped with the structure of an abelian group by defining the operations componentwise: ("g"1, "h"1) + ("g"2, "h"2) = ("g"1 + "g"2, "h"1 + "h"2) for "g"1, "g"2 in "G", and "h"1, "h"2 in "H". Integral multiples are similarly defined componentwise by "n"("g", "h") = ("ng", "nh") for "g" in "G", "h" in "H", and "n" an integer. This parallels the extension of the scalar product of vector spaces to the direct sum above. The resulting abelian group is called the "direct sum" of "G" and "H" and is usually denoted by a plus symbol inside a circle: formula_2 It is customary to write the elements of an ordered sum not as ordered pairs ("g", "h"), but as a sum "g" + "h". The subgroup "G" × {0} of "G" ⊕ "H" is isomorphic to "G" and is often identified with "G"; similarly for {0} × "H" and "H". (See "internal direct sum" below.) With this identification, it is true that every element of "G" ⊕ "H" can be written in one and only one way as the sum of an element of "G" and an element of "H". The rank of "G" ⊕ "H" is equal to the sum of the ranks of "G" and "H". This construction readily generalizes to any finite number of abelian groups. Construction for an arbitrary family of modules. One should notice a clear similarity between the definitions of the direct sum of two vector spaces and of two abelian groups. In fact, each is a special case of the construction of the direct sum of two modules. Additionally, by modifying the definition one can accommodate the direct sum of an infinite family of modules. The precise definition is as follows.
Let "R" be a ring, and {"M""i" : "i" ∈ "I"} a family of left "R"-modules indexed by the set "I". The "direct sum" of {"M""i"} is then defined to be the set of all sequences formula_3 where formula_4 and formula_5 for cofinitely many indices "i". (The direct product is analogous but the indices do not need to cofinitely vanish.) It can also be defined as functions α from "I" to the disjoint union of the modules "M""i" such that α("i") ∈ "M""i" for all "i" ∈ "I" and α("i") = 0 for cofinitely many indices "i". These functions can equivalently be regarded as finitely supported sections of the fiber bundle over the index set "I", with the fiber over formula_6 being formula_7. This set inherits the module structure via component-wise addition and scalar multiplication. Explicitly, two such sequences (or functions) α and β can be added by writing formula_8 for all "i" (note that this is again zero for all but finitely many indices), and such a function can be multiplied with an element "r" from "R" by defining formula_9 for all "i". In this way, the direct sum becomes a left "R"-module, and it is denoted formula_10 It is customary to write the sequence formula_3 as a sum formula_11. Sometimes a primed summation formula_12 is used to indicate that cofinitely many of the terms are zero. Internal direct sum. Suppose "M" is an "R"-module and "M""i" is a submodule of "M" for each "i" in "I". If every "x" in "M" can be written in exactly one way as a sum of finitely many elements of the "M""i", then we say that "M" is the internal direct sum of the submodules "M""i". In this case, "M" is naturally isomorphic to the (external) direct sum of the "M""i" as defined above. A submodule "N" of "M" is a direct summand of "M" if there exists some other submodule "N′" of "M" such that "M" is the "internal" direct sum of "N" and "N′". In this case, "N" and "N′" are called complementary submodules. Universal property.
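Both the construction by finitely supported functions and the universal property stated below can be illustrated for modules over "R" = Z, storing an element of the direct sum as a dict of its nonzero components (a minimal sketch; the names are illustrative only):

```python
def ds_add(a, b):
    # componentwise addition of finitely supported functions, dropping zeros
    total = {i: a.get(i, 0) + b.get(i, 0) for i in set(a) | set(b)}
    return {i: v for i, v in total.items() if v != 0}

def embed(i, x):
    # natural embedding j_i of M_i into the direct sum
    return {i: x} if x != 0 else {}

def universal_map(f_maps, alpha):
    # the unique linear f with f(j_i(x)) = f_i(x): sum the f_i over the support
    return sum(f_maps[i](v) for i, v in alpha.items())
```

Because each element has only finitely many nonzero components, the sum in universal_map is always finite, even over an infinite index set.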
In the language of category theory, the direct sum is a coproduct and hence a colimit in the category of left "R"-modules, which means that it is characterized by the following universal property. For every "i" in "I", consider the "natural embedding" formula_19 which sends the elements of "M""i" to those functions which are zero for all arguments but "i". Now let "M" be an arbitrary "R"-module and "f""i" : "M""i" → "M" be arbitrary "R"-linear maps for every "i", then there exists precisely one "R"-linear map formula_20 such that "f" ∘ "j""i" = "f""i" for all "i". Grothendieck group. The direct sum gives a collection of objects the structure of a commutative monoid, in that the addition of objects is defined, but not subtraction. In fact, subtraction can be defined, and every commutative monoid can be extended to an abelian group. This extension is known as the Grothendieck group. The extension is done by defining equivalence classes of pairs of objects, which allows certain pairs to be treated as inverses. The construction, detailed in the article on the Grothendieck group, is "universal", in that it has the universal property of being unique, and homomorphic to any other embedding of a commutative monoid in an abelian group. Direct sum of modules with additional structure. If the modules we are considering carry some additional structure (for example, a norm or an inner product), then the direct sum of the modules can often be made to carry this additional structure, as well. In this case, we obtain the coproduct in the appropriate category of all objects carrying the additional structure. Two prominent examples occur for Banach spaces and Hilbert spaces. In some classical texts, the phrase "direct sum of algebras over a field" is also introduced for denoting the algebraic structure that is presently more commonly called a direct product of algebras; that is, the Cartesian product of the underlying sets with the componentwise operations.
This construction, however, does not provide a coproduct in the category of algebras, but a direct product ("see note below" and the remark on direct sums of rings). Direct sum of algebras. A direct sum of algebras formula_21 and formula_22 is the direct sum as vector spaces, with product formula_23 Consider these classical examples: formula_24 is ring isomorphic to split-complex numbers, also used in interval analysis. formula_25 is the algebra of tessarines introduced by James Cockle in 1848. formula_26 called the split-biquaternions, was introduced by William Kingdon Clifford in 1873. Joseph Wedderburn exploited the concept of a direct sum of algebras in his classification of hypercomplex numbers. See his "Lectures on Matrices" (1934), page 151. Wedderburn makes clear the distinction between a direct sum and a direct product of algebras: For the direct sum the field of scalars acts jointly on both parts: formula_27 while for the direct product a scalar factor may be collected alternately with the parts, but not both: formula_28 Ian R. Porteous uses the three direct sums above, denoting them formula_29 as rings of scalars in his analysis of "Clifford Algebras and the Classical Groups" (1995). The construction described above, as well as Wedderburn's use of the terms direct sum and direct product follow a different convention than the one in category theory. In categorical terms, Wedderburn's direct sum is a categorical product, whilst Wedderburn's direct product is a coproduct (or categorical sum), which (for commutative algebras) actually corresponds to the tensor product of algebras. Direct sum of Banach spaces. 
The direct sum of two Banach spaces formula_21 and formula_22 is the direct sum of formula_21 and formula_22 considered as vector spaces, with the norm formula_30 for all formula_31 and formula_32 Generally, if formula_33 is a collection of Banach spaces, where formula_34 traverses the index set formula_35 then the direct sum formula_36 is a module consisting of all functions formula_37 defined over formula_38 such that formula_39 for all formula_6 and formula_40 The norm is given by the sum above. The direct sum with this norm is again a Banach space. For example, if we take the index set formula_41 and formula_42 then the direct sum formula_43 is the space formula_44 which consists of all the sequences formula_45 of reals with finite norm formula_46 A closed subspace formula_47 of a Banach space formula_21 is complemented if there is another closed subspace formula_48 of formula_21 such that formula_21 is equal to the internal direct sum formula_49 Note that not every closed subspace is complemented; e.g. formula_50 is not complemented in formula_51 Direct sum of modules with bilinear forms. Let formula_52 be a family indexed by formula_38 of modules equipped with bilinear forms. The orthogonal direct sum is the module direct sum with bilinear form formula_48 defined by formula_53 in which the summation makes sense even for infinite index sets formula_38 because only finitely many of the terms are non-zero. Direct sum of Hilbert spaces. If finitely many Hilbert spaces formula_54 are given, one can construct their orthogonal direct sum as above (since they are vector spaces), defining the inner product as: formula_55 The resulting direct sum is a Hilbert space which contains the given Hilbert spaces as mutually orthogonal subspaces. If infinitely many Hilbert spaces formula_56 for formula_6 are given, we can carry out the same construction; notice that when defining the inner product, only finitely many summands will be non-zero. 
However, the result will only be an inner product space and it will not necessarily be complete. We then define the direct sum of the Hilbert spaces formula_56 to be the completion of this inner product space. Alternatively and equivalently, one can define the direct sum of the Hilbert spaces formula_56 as the space of all functions α with domain formula_35 such that formula_57 is an element of formula_56 for every formula_6 and: formula_58 The inner product of two such functions α and β is then defined as: formula_59 This space is complete and we get a Hilbert space. For example, if we take the index set formula_41 and formula_42 then the direct sum formula_60 is the space formula_61 which consists of all the sequences formula_45 of reals with finite norm formula_62 Comparing this with the example for Banach spaces, we see that the Banach space direct sum and the Hilbert space direct sum are not necessarily the same. But if there are only finitely many summands, then the Banach space direct sum is isomorphic to the Hilbert space direct sum, although the norm will be different. Every Hilbert space is isomorphic to a direct sum of sufficiently many copies of the base field, which is either formula_63 This is equivalent to the assertion that every Hilbert space has an orthonormal basis. More generally, every closed subspace of a Hilbert space is complemented because it admits an orthogonal complement. Conversely, the Lindenstrauss–Tzafriri theorem asserts that if every closed subspace of a Banach space is complemented, then the Banach space is isomorphic (topologically) to a Hilbert space. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
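To make the contrast between the two direct-sum norms concrete, here is a minimal sketch (not from the article) computing the ℓ1 (Banach direct sum) and ℓ2 (Hilbert direct sum) norms of the same finitely supported real sequence:

```python
import math

# The finitely supported sequence a = (3, -4, 0, 0, ...).
a = [3.0, -4.0, 0.0]

l1_norm = sum(abs(x) for x in a)            # Banach norm: sum of |a_i|
l2_norm = math.sqrt(sum(x * x for x in a))  # Hilbert norm: sqrt of sum of a_i^2

print(l1_norm, l2_norm)  # 7.0 5.0 — the two direct sums carry different norms
```

With finitely many summands the two direct sums are isomorphic as topological vector spaces, as the text notes, but the example shows the norms themselves differ.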
[ { "math_id": 0, "text": "V \\oplus W" }, { "math_id": 1, "text": "\\mathbb{R}^n = W \\oplus W^{\\perp}" }, { "math_id": 2, "text": "G \\oplus H" }, { "math_id": 3, "text": "(\\alpha_i)" }, { "math_id": 4, "text": "\\alpha_i \\in M_i" }, { "math_id": 5, "text": "\\alpha_i = 0" }, { "math_id": 6, "text": "i \\in I" }, { "math_id": 7, "text": "M_i" }, { "math_id": 8, "text": "(\\alpha + \\beta)_i = \\alpha_i + \\beta_i" }, { "math_id": 9, "text": "r(\\alpha)_i = (r\\alpha)_i" }, { "math_id": 10, "text": "\\bigoplus_{i \\in I} M_i." }, { "math_id": 11, "text": " \\sum \\alpha_i" }, { "math_id": 12, "text": " \\sum ' \\alpha_i" }, { "math_id": 13, "text": "\\operatorname{Hom}_R\\biggl( \\bigoplus_{i \\in I} M_i,L\\biggr) \\cong \\prod_{i \\in I}\\operatorname{Hom}_R\\left(M_i,L\\right)." }, { "math_id": 14, "text": " \\tau^{-1}(\\beta)(\\alpha) = \\sum_{i\\in I} \\beta(i)(\\alpha(i))" }, { "math_id": 15, "text": "p_k: A_1 \\oplus \\cdots \\oplus A_n \\to A_k" }, { "math_id": 16, "text": "i_k: A_k \\mapsto A_1 \\oplus \\cdots \\oplus A_n " }, { "math_id": 17, "text": "i_1 \\circ p_1 + \\cdots + i_n \\circ p_n" }, { "math_id": 18, "text": "p_k \\circ i_l" }, { "math_id": 19, "text": "j_i : M_i \\rightarrow \\bigoplus_{i \\in I} M_i" }, { "math_id": 20, "text": "f : \\bigoplus_{i \\in I} M_i \\rightarrow M" }, { "math_id": 21, "text": "X" }, { "math_id": 22, "text": "Y" }, { "math_id": 23, "text": "(x_1 + y_1) (x_2 + y_2) = (x_1 x_2 + y_1 y_2)." }, { "math_id": 24, "text": "\\mathbf{R} \\oplus \\mathbf{R}" }, { "math_id": 25, "text": "\\mathbf{C} \\oplus \\mathbf{C}" }, { "math_id": 26, "text": "\\mathbf{H} \\oplus \\mathbf{H}," }, { "math_id": 27, "text": "\\lambda (x \\oplus y) = \\lambda x \\oplus \\lambda y" }, { "math_id": 28, "text": "\\lambda (x,y) = (\\lambda x, y) = (x, \\lambda y)." 
}, { "math_id": 29, "text": "^2 R,\\ ^2 C,\\ ^2 H," }, { "math_id": 30, "text": "\\|(x, y)\\| = \\|x\\|_X + \\|y\\|_Y" }, { "math_id": 31, "text": "x \\in X" }, { "math_id": 32, "text": "y \\in Y." }, { "math_id": 33, "text": "X_i" }, { "math_id": 34, "text": "i" }, { "math_id": 35, "text": "I," }, { "math_id": 36, "text": "\\bigoplus_{i \\in I} X_i" }, { "math_id": 37, "text": "x" }, { "math_id": 38, "text": "I" }, { "math_id": 39, "text": "x(i) \\in X_i" }, { "math_id": 40, "text": "\\sum_{i \\in I} \\|x(i)\\|_{X_i} < \\infty." }, { "math_id": 41, "text": "I = \\N" }, { "math_id": 42, "text": "X_i = \\R," }, { "math_id": 43, "text": "\\bigoplus_{i \\in \\N} X_i" }, { "math_id": 44, "text": "\\ell_1," }, { "math_id": 45, "text": "\\left(a_i\\right)" }, { "math_id": 46, "text": "\\|a\\| = \\sum_i \\left|a_i\\right|." }, { "math_id": 47, "text": "A" }, { "math_id": 48, "text": "B" }, { "math_id": 49, "text": "A \\oplus B." }, { "math_id": 50, "text": "c_0" }, { "math_id": 51, "text": "\\ell^\\infty." }, { "math_id": 52, "text": "\\left\\{ \\left(M_i, b_i\\right) : i \\in I \\right\\}" }, { "math_id": 53, "text": "B\\left({\\left({x_i}\\right),\\left({y_i}\\right)}\\right) = \\sum_{i\\in I} b_i\\left({x_i,y_i}\\right)" }, { "math_id": 54, "text": "H_1, \\ldots, H_n" }, { "math_id": 55, "text": "\\left\\langle \\left(x_1, \\ldots, x_n\\right), \\left(y_1, \\ldots, y_n\\right) \\right\\rangle = \\langle x_1, y_1 \\rangle + \\cdots + \\langle x_n, y_n \\rangle." }, { "math_id": 56, "text": "H_i" }, { "math_id": 57, "text": "\\alpha(i)" }, { "math_id": 58, "text": "\\sum_i \\left\\|\\alpha_{(i)}\\right\\|^2 < \\infty." }, { "math_id": 59, "text": "\\langle\\alpha,\\beta\\rangle=\\sum_i \\langle \\alpha_i,\\beta_i \\rangle." }, { "math_id": 60, "text": "\\oplus_{i \\in \\N} X_i" }, { "math_id": 61, "text": "\\ell_2," }, { "math_id": 62, "text": "\\|a\\| = \\sqrt{\\sum_i \\left\\|a_i\\right\\|^2}." }, { "math_id": 63, "text": "\\R \\text{ or } \\Complex." } ]
https://en.wikipedia.org/wiki?curid=58899
58899080
Discrepancy game
A discrepancy game is a kind of positional game. Like most positional games, it is described by its set of "positions/points/elements" (formula_0) and a family of "sets" (formula_1- a family of subsets of formula_0). It is played by two players, called "Balancer" and "Unbalancer". Each player in turn picks an element. The goal of Balancer is to ensure that every set in formula_1 is balanced, i.e., the elements in each set are distributed roughly equally between the players. The goal of Unbalancer is to ensure that at least one set is unbalanced. Formally, the goal of Balancer is defined by a vector formula_2 where "n" is the number of sets in formula_1. Balancer wins if in every set "i", the difference between the number of elements taken by Balancer and the number of elements taken by Unbalancer is at most "bi". Equivalently, we can think of Balancer as labeling each element with +1 and Unbalancer labeling each element with -1, and Balancer's goal is to ensure that the absolute value of the sum of labels in set "i" is at most "bi". The game was introduced by Frieze, Krivelevich, Pikhurko and Szabo, and generalized by Alon, Krivelevich, Spencer and Szabo. Comparison to other games. In a Maker-Breaker game, Breaker has to take "at least one" element in every set. In an Avoider-Enforcer game, Avoider has to take "at most k-1" elements in every set with "k" vertices. In a discrepancy game, Balancer has to attain both goals simultaneously: he should take at least a certain fraction, and at most a certain fraction, of the elements in each set. Winning conditions. Let "n" be the number of sets, and "ki" be the number of elements in set "i". If formula_3, then Balancer has a winning strategy; in particular, this condition is satisfied by the bounds formula_4. With these bounds, Balancer can ensure that in every set of size "k" he takes at least formula_5 and at most formula_6 elements. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
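As an illustrative sketch (the set sizes below are arbitrary assumptions), the sufficient condition for Balancer noted in the literature — that the sum over all sets of exp(−bi²/(2ki)) is at most 1/2, which holds when bi = √(2 ki ln(2n)) — can be checked numerically:

```python
import math

def balancer_bounds(set_sizes):
    """For n sets of sizes k_i, return the bounds b_i = sqrt(2 * k_i * ln(2n)).
    With these bounds, each term exp(-b_i^2 / (2 k_i)) equals 1/(2n), so the
    sum over all n sets is exactly 1/2 (the sufficient condition for Balancer)."""
    n = len(set_sizes)
    return [math.sqrt(2.0 * k * math.log(2.0 * n)) for k in set_sizes]

sizes = [10, 20, 40]  # an arbitrary example hypergraph
bounds = balancer_bounds(sizes)
total = sum(math.exp(-b * b / (2.0 * k)) for b, k in zip(bounds, sizes))
assert total <= 0.5 + 1e-9  # the winning condition holds (up to rounding)
```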
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathcal{F}" }, { "math_id": 2, "text": "(b_1,\\ldots,b_n)" }, { "math_id": 3, "text": "\\sum_{i=1}^n \\exp\\left({-b_i^2 \\over 2 k_i}\\right) \\leq 1/2" }, { "math_id": 4, "text": "b_i \\geq \\sqrt{2 k_i\\ln(2 n) }" }, { "math_id": 5, "text": "{k\\over 2} - \\sqrt{k \\ln(2 n)/2}" }, { "math_id": 6, "text": "{k\\over 2} +\\sqrt{k \\ln(2 n)/2}" }, { "math_id": 7, "text": "\\sum_{i=1}^n 2^{-k_i} < 1/4" } ]
https://en.wikipedia.org/wiki?curid=58899080
58900190
Edmond de Belamy
Painting created by artificial intelligence Edmond de Belamy, sometimes referred to as Portrait of Edmond de Belamy, is a generative adversarial network (GAN) portrait painting constructed by Paris-based arts collective Obvious in 2018 from WikiArt's artwork database. Printed on canvas, the work belongs to a series of generative images called La Famille de Belamy. The print is known for being sold for during a Christie's auction. The name "Belamy" is a pun on Ian Goodfellow, inventor of GANs: in French, "bel ami" means "good friend", an allusion to Goodfellow's name. The work has been criticized as having been created with another AI artist's uncredited code. Auction. It gained media attention after Christie's announced its intention to auction the piece as the first artwork created using artificial intelligence to be featured in the "Prints &amp; Multiples" sale at the Christie's Images New York auction. The picture was originally hung on the wall to the right of a bronze work by Roy Lichtenstein. Bidding, held both in the room and online, started on 23 October 2018 among five parties. Six minutes into the bidding, the price went up to ; the price surpassed pre-auction estimates, which valued it at to . Seven minutes into the bidding, an anonymous phone bidder won the auction with a bid, and the print was bought for on 25 October 2018, making it the second most expensive artwork in the auction, just behind Andy Warhol’s "Myths" (1981), a 254 cm × 254 cm work that was sold for . Obvious stated that the proceeds "will [be used] to refine the algorithm [and] create works that increasingly seem to have been created by a human being". Method. Obvious's members are Hugo Caselles-Dupré, Pierre Fautrel, and Gauthier Vernier. Caselles-Dupré stated that the algorithm used a "discriminator". Hugo Caselles-Dupré found artist Robbie Barrat’s open-source algorithm, which was forked from Soumith Chintala's code on GitHub. 
He then trained the algorithm on a set of 15,000 portraits from the online art encyclopedia WikiArt, spanning the 14th to the 19th centuries. The print is signed by hand in ink at the bottom-right with formula_0, a part of the loss function of the algorithm that produced it. Description. The piece is a portrait of a somewhat blurry man, placed mainly toward the top-left corner of the canvas and surrounded by lighter color. The dominant colors in the portrait are brown and beige. The painting has been associated with the aesthetic provisionally named "GANism" by François Chollet, characterized by indistinct, blurry imagery. It is generated and printed by Obvious; the canvas print measures 27 &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 in × 27 &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 in (70 cm × 70 cm) and is set within a gold-colored gilded wood frame. The work belongs to a series of eleven generative images called "La Famille de Belamy" (from French, lit. 'Belamy's family') that is meant to resemble "Belamy"'s family tree. "Edmond de Belamy" is the fictional descendant of "Madame de Belamy", a name that was given to another artificial intelligence (AI) artwork made by Obvious. The name "Belamy" is a tribute to Ian Goodfellow, inventor of GANs; in French, "bel ami" means "good friend", a translated pun on "good fellow". The painting is not, however, meant to depict any real person. Reception. The piece has been criticized because it was created using a generative adversarial network (GAN) software package that was implemented by Robbie Barrat, a then-19-year-old AI artist who was not affiliated with Obvious. Although they did not originally publicize that they were using Barrat's code, Caselles-Dupré later admitted that they had used the code from Barrat with little modification. 
"If you’re just talking about the code, then there is not a big percentage that has been modified," Caselles-Dupré said. "But if you talk about working on the computer, making it work, there is a lot of effort there." Posts on the project's issue tracker show Obvious members requesting that Barrat provide them with support and custom features. On the same day that "Edmond de Belamy" was sold, Barrat posted two images on Twitter comparing "Edmond de Belamy" with his "outputs from a neural network [he] trained and put online *over a year ago*", writing that they used his code only to later sell the results. Mario Klingemann wrote that "You could argue that probably 90 percent of the actual 'work' was done by [Barrat]." The piece has also prompted debate over whether it is real "art" at all. Art critic Jonathan Jones did not acknowledge "Edmond de Belamy" as art. The piece has been placed within a tradition of works calling into question the basis of the modern art market. Research has used "Edmond de Belamy" to show how anthropomorphizing AI can affect allocations of responsibility and credit to artists. In an interview, Caselles-Dupré said: "We are in the middle of a storm and lots of false information is released with our name on it. In fact, we are really depressed about it." The "false information" he was referring to was the claim that the painting was the first portrait generated by AI. The head of Christie’s prints and multiples department said he is no expert on AI, having learned about Obvious after reading an article about a collector’s purchase of one of Obvious's previous works for around . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
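The signature formula_0 is the standard minimax objective of a generative adversarial network. As an illustrative sketch (not Obvious's or Barrat's actual code), the value function can be estimated from discriminator outputs on real and generated samples:

```python
import math

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of E_x[log D(x)] + E_z[log(1 - D(G(z)))],
    given discriminator outputs on real samples and on generated samples.
    The discriminator maximizes this value; the generator minimizes it."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake

# When the generator matches the data distribution, the best the
# discriminator can do is output 1/2 everywhere, giving the value -2 ln 2.
v = gan_value([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
```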
[ { "math_id": 0, "text": "\\min_{\\mathcal{G}}\\max_{\\mathcal{D}}E_{x}\\left[\\log(\\mathcal{D}(x))\\right]+E_{z}\\left[\\log(1-\\mathcal{D}(\\mathcal{G}(z)))\\right]" } ]
https://en.wikipedia.org/wiki?curid=58900190
58901856
Steinmetz curve
A Steinmetz curve is the curve of intersection of two right circular cylinders of radii formula_0 and formula_1 whose axes intersect perpendicularly. In the case formula_2, the Steinmetz curves are the edges of a Steinmetz solid. If the cylinder axes are the x- and y-axes and formula_3, then the Steinmetz curves are given by the parametric equations: formula_4 It is named after mathematician Charles Proteus Steinmetz, along with Steinmetz's equation, Steinmetz solids, and Steinmetz equivalent circuit theory. In the case when the two cylinders have equal radii the curve degenerates to two intersecting ellipses. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
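As a quick numerical check (radii chosen arbitrarily), every point produced by the parametric equations lies on both cylinders: x² + z² = a² for the cylinder about the y-axis, and y² + z² = b² for the cylinder about the x-axis. A minimal sketch:

```python
import math

def steinmetz_point(a, b, t, sign=1):
    """Point on the Steinmetz curve for cylinder radii a <= b
    (axes along x and y), following the parametric equations."""
    x = a * math.cos(t)
    y = sign * math.sqrt(b * b - a * a * math.sin(t) ** 2)
    z = a * math.sin(t)
    return x, y, z

a, b = 1.0, 2.0
for t in [0.0, 0.7, 1.9, 3.5]:
    x, y, z = steinmetz_point(a, b, t)
    assert abs(x * x + z * z - a * a) < 1e-12  # on the radius-a cylinder
    assert abs(y * y + z * z - b * b) < 1e-12  # on the radius-b cylinder
```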
[ { "math_id": 0, "text": "a" }, { "math_id": 1, "text": "b," }, { "math_id": 2, "text": " a=b" }, { "math_id": 3, "text": "a\\le b" }, { "math_id": 4, "text": "\n\\begin{align}\nx (t) & = a \\cos t \\\\\ny (t) & = \\pm \\sqrt{b^2 - a^2 \\sin^2 t} \\\\\nz (t) & = a \\sin t\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=58901856
58908272
Strong positional game
Type of sequential game A strong positional game (also called Maker-Maker game) is a kind of positional game. Like most positional games, it is described by its set of "positions" (formula_0) and its family of "winning-sets" (formula_1- a family of subsets of formula_0). It is played by two players, called First and Second, who alternately take previously untaken positions. In a strong positional game, the winner is the first player who holds all the elements of a winning-set. If all positions are taken and no player wins, then it is a draw. Classic Tic-tac-toe is an example of a strong positional game. First player advantage. In a strong positional game, Second cannot have a winning strategy. This can be proved by a strategy-stealing argument: if Second had a winning strategy, then First could have stolen it and win too, but this is impossible since there is only one winner. Therefore, for every strong-positional game there are only two options: either First has a winning strategy, or Second has a drawing strategy. An interesting corollary is that, if a certain game does not have draw positions, then First always has a winning strategy. Comparison to Maker-Breaker game. Every strong positional game has a variant that is a Maker-Breaker game. In that variant, only the first player ("Maker") can win by holding a winning-set. The second player ("Breaker") can win only by preventing Maker from holding a winning-set. For fixed formula_0 and formula_1, the strong-positional variant is strictly harder for the first player, since in it, he needs to both "attack" (try to get a winning-set) and "defend" (prevent the second player from getting one), while in the maker-breaker variant, the first player can focus only on "attack". Hence, "every winning-strategy of First in a strong-positional game is also a winning-strategy of Maker in the corresponding maker-breaker game". The opposite is not true. 
For example, in the maker-breaker variant of Tic-Tac-Toe, Maker has a winning strategy, but in its strong-positional (classic) variant, Second has a drawing strategy. Similarly, the strong-positional variant is strictly easier for the second player: "every winning strategy of Breaker in a maker-breaker game is also a drawing-strategy of Second in the corresponding strong-positional game", but the opposite is not true. The extra-set paradox. Suppose First has a winning strategy. Now, we add a new set to formula_1. Contrary to intuition, it is possible that this new set will now destroy the winning strategy and make the game a draw. Intuitively, the reason is that First might have to spend some moves to prevent Second from owning this extra set. The extra-set paradox does not appear in Maker-Breaker games. Examples. The clique game. The clique game is an example of a strong positional game. It is parametrized by two integers, n and N. In it, the positions are the edges of the complete graph on the vertex set {1, ..., N}, and the winning-sets are the edge-sets of the cliques on n vertices. According to Ramsey's theorem, there exists some number R(n,n) such that, for every N &gt; R(n,n), in every two-coloring of the complete graph on {1, ..., N}, one of the colors must contain a clique of size n. Therefore, by the above corollary, when N &gt; R(n,n), First always has a winning strategy. Multi-dimensional tic-tac-toe. Consider the game of tic-tac-toe played in a "d"-dimensional cube of length "n". By the Hales–Jewett theorem, when "d" is large enough (as a function of "n"), every 2-coloring of the cube-cells contains a monochromatic geometric line. Therefore, by the above corollary, First always has a winning strategy. Open questions. Besides these existential results, there are few constructive results related to strong-positional games. For example, while it is known that the first player has a winning strategy in a sufficiently large clique game, no specific winning strategy is currently known. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
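On small boards the outcome of a strong positional game can be checked exhaustively. Below is a minimal brute-force sketch (not from the article) that solves classic 3×3 tic-tac-toe as a strong positional game; it confirms that perfect play ends in a draw, i.e., Second has a drawing strategy:

```python
from functools import lru_cache

# Board cells 0..8; the winning-sets are the rows, columns and diagonals.
WIN_SETS = [frozenset(s) for s in (
    [(3 * r, 3 * r + 1, 3 * r + 2) for r in range(3)] +  # rows
    [(c, c + 3, c + 6) for c in range(3)] +              # columns
    [(0, 4, 8), (2, 4, 6)]                               # diagonals
)]
CELLS = frozenset(range(9))

def holds_winning_set(cells):
    return any(w <= cells for w in WIN_SETS)

@lru_cache(maxsize=None)
def solve(mover, opponent):
    """Outcome with 'mover' to play: 1 = mover wins, -1 = opponent wins, 0 = draw."""
    free = CELLS - mover - opponent
    if not free:
        return 0
    best = -1
    for cell in free:
        taken = mover | {cell}
        if holds_winning_set(taken):
            return 1  # mover completes a winning set immediately
        # Swap roles for the opponent's turn and negate the outcome.
        best = max(best, -solve(opponent, taken))
        if best == 1:
            break
    return best

outcome = solve(frozenset(), frozenset())  # 0: tic-tac-toe is a draw under perfect play
```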
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathcal{F}" } ]
https://en.wikipedia.org/wiki?curid=58908272
589132
Special number field sieve
In number theory, a branch of mathematics, the special number field sieve (SNFS) is a special-purpose integer factorization algorithm. The general number field sieve (GNFS) was derived from it. The special number field sieve is efficient for integers of the form "r""e" ± "s", where "r" and "s" are small (for instance Mersenne numbers). Heuristically, its complexity for factoring an integer formula_0 is of the form: formula_1 in O and L-notations. The SNFS has been used extensively by NFSNet (a volunteer distributed computing effort), NFS@Home and others to factorise numbers of the Cunningham project; for some time the records for integer factorization have been numbers factored by SNFS. Overview of method. The SNFS is based on an idea similar to the much simpler rational sieve; in particular, readers may find it helpful to read about the rational sieve first, before tackling the SNFS. The SNFS works as follows. Let "n" be the integer we want to factor. As in the rational sieve, the SNFS can be broken into two steps: The second step is identical to the case of the rational sieve, and is a straightforward linear algebra problem. The first step, however, is done in a different, more efficient way than the rational sieve, by utilizing number fields. Details of method. Let "n" be the integer we want to factor. We pick an irreducible polynomial "f" with integer coefficients, and an integer "m" such that "f"("m")≡0 (mod "n") (we will explain how they are chosen in the next section). Let "α" be a root of "f"; we can then form the ring Z[α]. There is a unique ring homomorphism φ from Z["α"] to Z/nZ that maps "α" to "m". For simplicity, we'll assume that Z["α"] is a unique factorization domain; the algorithm can be modified to work when it isn't, but then there are some additional complications. Next, we set up two parallel "factor bases", one in Z["α"] and one in Z. 
The one in Z["α"] consists of all the prime ideals in Z["α"] whose norm is bounded by a chosen value formula_2. The factor base in Z, as in the rational sieve case, consists of all prime integers up to some other bound. We then search for relatively prime pairs of integers ("a","b") such that: the integer "a"+"bm" is smooth with respect to the factor base in Z (i.e., it factors completely over it), and the element "a"+"bα" is smooth with respect to the factor base in Z["α"] (its principal ideal factors into prime ideals of the factor base). These pairs are found through a sieving process, analogous to the Sieve of Eratosthenes; this motivates the name "Number Field Sieve". For each such pair, we can apply the ring homomorphism φ to the factorization of "a"+"bα", and we can apply the canonical ring homomorphism from Z to Z/nZ to the factorization of "a"+"bm". Setting these equal gives a multiplicative relation among elements of a bigger factor base in Z/nZ, and if we find enough pairs we can proceed to combine the relations and factor "n", as described above. Choice of parameters. Not every number is an appropriate choice for the SNFS: one needs to know in advance a polynomial "f" of appropriate degree (the optimal degree is conjectured to be formula_3, which is 4, 5, or 6 for the sizes of N currently feasible to factorise) with small coefficients, and a value "x" such that formula_4 where N is the number to factorise. There is an extra condition: "x" must satisfy formula_5 for a and b no bigger than formula_6. One set of numbers for which such polynomials exist are the formula_7 numbers from the Cunningham tables; for example, when NFSNET factored 3^479 + 1, they used the polynomial "x"^5 + 3 with "x" = 3^96, since (3^96)^5 + 3 = 3^480 + 3, and formula_8. Numbers defined by linear recurrences, such as the Fibonacci and Lucas numbers, also have SNFS polynomials, but these are a little more difficult to construct. For example, formula_9 has polynomial formula_10, and the value of "x" satisfies formula_11. 
If one already knows some factors of a large number compatible with SNFS, then one could do the SNFS calculation modulo the remaining part; for the NFSNET example above, 3^479 + 1 is the product of small factors (found by ECM) and a 197-digit composite number, and the SNFS was performed modulo the 197-digit number. The number of relations required by SNFS still depends on the size of the large number, but the individual calculations are quicker modulo the smaller number. Limitations of algorithm. This algorithm, as mentioned above, is very efficient for numbers of the form "r""e"±"s", for "r" and "s" relatively small. It is also efficient for any integers which can be represented as a polynomial with small coefficients. This includes integers of the more general form "ar""e"±"bs""f", as well as many integers whose binary representation has low Hamming weight. The reason for this is as follows: The Number Field Sieve performs sieving in two different fields. The first field is usually the rationals. The second is a higher degree field. The efficiency of the algorithm strongly depends on the norms of certain elements in these fields. When an integer can be represented as a polynomial with small coefficients, the norms that arise are much smaller than those that arise when an integer is represented by a general polynomial. The reason is that a general polynomial will have much larger coefficients, and the norms will be correspondingly larger. The algorithm attempts to factor these norms over a fixed set of prime numbers. When the norms are smaller, these numbers are more likely to factor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
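The requirement formula_4 is easy to verify for a Cunningham-form example. A minimal sketch, using N = 3^479 + 1 with the small-coefficient polynomial f(x) = x^5 + 3 and root x = 3^96 (so that f(x) = 3^480 + 3 = 3N):

```python
# SNFS polynomial check: f(m) must vanish modulo N.
N = 3**479 + 1          # number to factor (Cunningham form 3^479 + 1)
m = 3**96               # candidate root modulo N
f_of_m = m**5 + 3       # small-coefficient polynomial f(x) = x^5 + 3

assert f_of_m == 3**480 + 3 == 3 * N
assert f_of_m % N == 0  # so f(m) = 0 (mod N), as SNFS requires
```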
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\exp\\left(\\left(1+o(1)\\right)\\left(\\tfrac{32}{9}\\log n\\right)^{1/3}\\left(\\log\\log n\\right)^{2/3}\\right)=L_n\\left[1/3,(32/9)^{1/3}\\right]" }, { "math_id": 2, "text": "N_{\\max}" }, { "math_id": 3, "text": "\\left(3 \\frac{\\log N}{\\log \\log N}\\right) ^\\frac{1}{3}" }, { "math_id": 4, "text": "f(x) \\equiv 0 \\pmod N" }, { "math_id": 5, "text": "ax+b \\equiv 0 \\pmod N" }, { "math_id": 6, "text": "N^{1/d}" }, { "math_id": 7, "text": "a^b \\pm 1" }, { "math_id": 8, "text": "3^{480}+3 \\equiv 0 \\pmod {3^{479}+1}" }, { "math_id": 9, "text": "F_{709}" }, { "math_id": 10, "text": "n^5 + 10n^3 + 10n^2 + 10n + 3" }, { "math_id": 11, "text": "F_{142} x - F_{141} = 0" } ]
https://en.wikipedia.org/wiki?curid=589132
58918674
Division by infinity
In mathematics, division by infinity is division where the divisor (denominator) is ∞. In ordinary arithmetic, this does not have a well-defined meaning, since "∞" is a mathematical concept that does not correspond to a specific number, and moreover, there is no nonzero real number that, when added to itself an infinite number of times, gives a finite number. However, "dividing by ∞" can be given meaning as an informal way of expressing the limit of dividing a number by larger and larger divisors. Using mathematical structures that go beyond the real numbers, it is possible to define numbers that have infinite magnitude yet can still be manipulated in ways much like ordinary arithmetic. For example, on the extended real number line, dividing any real number by infinity yields zero, while in the surreal number system, dividing 1 by the infinite number formula_0 yields the infinitesimal number formula_1. In floating-point arithmetic, a finite number divided by formula_2 is equal to positive or negative zero; if the numerator is itself infinite, the result is NaN. The challenges of providing a rigorous meaning of "division by infinity" are analogous to those of defining division by zero. Use in technology. As infinity is difficult to deal with for most calculators and computers, many do not have a formal way of computing division by infinity. Calculators such as the TI-84 and most household calculators do not have an infinity button, so it is impossible to enter "x divided by infinity" directly; instead, users can type a large number such as "1e99" (formula_3) or "-1e99". By dividing some number by such a sufficiently large number, the output will be 0. In some cases this fails: there may be an overflow error, or, if the numerator is also sufficiently large, the output may be 1 or some other real number. In the Wolfram Language, dividing an integer by infinity gives the result 0. 
Also, in some calculators such as the TI-Nspire, 1 divided by infinity can be evaluated as 0. Use in calculus. Integration. In calculus, taking the integral of a function is defined as finding the area under its curve. This can be done by breaking the area up into rectangular sections and taking the sum of these sections; such sums are called Riemann sums. As the sections get narrower, the Riemann sum becomes an increasingly accurate approximation of the true area. Taking the limit of these Riemann sums, in which the sections can be heuristically regarded as "infinitely thin", gives the definite integral of the function over the prescribed interval. Conceptually, this amounts to dividing the interval into infinitely many, infinitely small pieces. An integral in which one of the bounds is infinity is called an improper integral. To evaluate it, one substitutes a variable "a" for the infinity sign, evaluates the integral, and then takes the limit as "a" approaches infinity. In many cases, evaluating this limit produces a term divided by infinity, which is taken to be zero; under this assumption the integral converges, and a finite answer can be determined. L'Hôpital's rule. When given a ratio between two functions, the limit of this ratio can be evaluated by computing the limit of each function separately. Where the limit of the function in the denominator is infinity, and the numerator does not allow the ratio to be well determined, the limit of the ratio is said to be of indeterminate form. An example of this is: formula_4 Using L'Hôpital's rule to evaluate limits of fractions where the denominator tends towards infinity can produce results other than 0. 
L'Hôpital's rule states that if the limit formula_5 exists, then formula_6. So if formula_7 it can still happen that formula_8 for some finite nonzero value "L". This means that, when using limits to give meaning to division by infinity, the result of "dividing by infinity" does not always equal 0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
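The floating-point behaviour described in the lead (a finite number over infinity gives a signed zero; infinity over infinity gives NaN) can be demonstrated in any IEEE-754 environment. A small sketch:

```python
import math

inf = float('inf')

print(1.0 / inf)                       # 0.0: finite / infinity is zero
print(-7.5 / inf)                      # -0.0: the sign of the zero is kept
print(math.copysign(1.0, -7.5 / inf))  # -1.0: copysign exposes the sign
print(math.isnan(inf / inf))           # True: infinity / infinity is NaN
```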
[ { "math_id": 0, "text": "\\omega" }, { "math_id": 1, "text": "\\epsilon" }, { "math_id": 2, "text": "\\pm\\infty" }, { "math_id": 3, "text": "1 \\times 10^{99}" }, { "math_id": 4, "text": "\\frac{\\infty}{\\infty}" }, { "math_id": 5, "text": "\\lim_{x\\to c}\\frac{f'(x)}{g'(x)}" }, { "math_id": 6, "text": "\\lim_{x\\to c}\\frac{f(x)}{g(x)} = \\lim_{x\\to c}\\frac{f'(x)}{g'(x)}" }, { "math_id": 7, "text": "\\lim_{x\\to c}|f(x)| = \\lim_{x\\to c}|g(x)| = \\infty," }, { "math_id": 8, "text": "\\lim_{x\\to c}\\frac{f(x)}{g(x)}=L" } ]
https://en.wikipedia.org/wiki?curid=58918674
589225
Bimetallic strip
Two-sided strip that coils when heated or cooled A bimetallic strip or bimetal strip is a strip that consists of two strips of different metals which expand at different rates as they are heated. They are used to convert a temperature change into mechanical displacement. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled. The invention of the bimetallic strip is generally credited to John Harrison, an eighteenth-century clockmaker who made it for his third marine chronometer (H3) of 1759 to compensate for temperature-induced changes in the balance spring. Harrison's invention is recognized in the memorial to him in Westminster Abbey, England. Characteristics. The strip consists of two strips of different metals which expand at different rates as they are heated, usually steel and copper, or in some cases steel and brass. The strips are joined together throughout their length by riveting, brazing or welding. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled. The sideways displacement of the strip is much larger than the small lengthways expansion in either of the two metals. In some applications, the bimetal strip is used in the flat form. In others, it is wrapped into a coil for compactness. The greater length of the coiled version gives improved sensitivity. 
The radius of curvature formula_0 of a bimetallic strip depends on the temperature formula_1 according to a formula derived by the French physicist Yvon Villarceau in 1863 in his research on improving the precision of clocks: formula_2, where formula_3 is the total thickness of the bimetal and formula_4 is a dimensionless coefficient. For each metallic strip, formula_5 is the Young's modulus, formula_6 is the coefficient of thermal expansion and formula_7 is the thickness. The formula can also be rewritten as a function of the thermal misfit strain formula_8. If the moduli and thicknesses of the two strips are similar, this simplifies to formula_9. An equivalent formula can be derived from beam theory. History. The earliest surviving bimetallic strip was made by the eighteenth-century clockmaker John Harrison who is generally credited with its invention. He made it for his third marine chronometer (H3) of 1759 to compensate for temperature-induced changes in the balance spring. It should not be confused with the bimetallic mechanism for correcting for thermal expansion in his gridiron pendulum. His earliest examples had two individual metal strips joined by rivets but he also invented the later technique of directly fusing molten brass onto a steel substrate. A strip of this type was fitted to his last timekeeper, H5. Harrison's invention is recognized in the memorial to him in Westminster Abbey, England. 
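The curvature formula can be sketched numerically using Timoshenko's classical two-layer result, a standard equivalent of the expression quoted above; the symbols (moduli E1, E2, layer thicknesses h1, h2, expansion coefficients α1, α2) follow the text, and the material values below are illustrative assumptions, not from the article:

```python
def curvature(E1, E2, h1, h2, alpha1, alpha2, dT):
    """Timoshenko's formula for the curvature (1/radius) of a two-layer
    strip under a temperature change dT; eps is the thermal misfit strain."""
    eps = (alpha2 - alpha1) * dT
    num = 6.0 * E1 * E2 * (h1 + h2) * h1 * h2 * eps
    den = (E1**2 * h1**4 + 4.0 * E1 * E2 * h1**3 * h2
           + 6.0 * E1 * E2 * h1**2 * h2**2
           + 4.0 * E1 * E2 * h1 * h2**3 + E2**2 * h2**4)
    return num / den

# With equal moduli and equal layer thicknesses the formula reduces to
# kappa = 3 * eps / (2 * h), where h is the total thickness of the bimetal.
E, h = 200e9, 1e-3                                    # illustrative values
k_full = curvature(E, E, h / 2, h / 2, 12e-6, 17e-6, 100.0)
k_simple = 3.0 * (17e-6 - 12e-6) * 100.0 / (2.0 * h)  # the simplified form
```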
Material selection has a significant impact on the working temperature range of a bimetallic strip: some combinations have a temperature limit of up to 500°C, while others fail above 150°C. Applications. This effect is used in a range of mechanical and electrical devices. Clocks. Mechanical clock mechanisms are sensitive to temperature changes, since each part has a tight tolerance, and thermal expansion leads to errors in timekeeping. A bimetallic strip is used to compensate for this in the mechanism of some timepieces. The most common method is to use a bimetallic construction for the circular rim of the balance wheel. As the temperature changes, the rim moves weights radially within the plane of the balance wheel, varying its moment of inertia. As the spring controlling the balance becomes weaker with increasing temperature, the balance becomes smaller in diameter to decrease the moment of inertia and keep the period of oscillation (and hence timekeeping) constant. This system is no longer used, having been superseded by low-temperature-coefficient alloys such as Nivarox and Parachrom, among many others that vary by brand. Thermostats. In the regulation of heating and cooling, thermostats that operate over a wide range of temperatures are used. In these, one end of the bimetallic strip is mechanically fixed and attached to an electrical power source, while the other (moving) end carries an electrical contact. In adjustable thermostats another contact is positioned with a regulating knob or lever. The position so set controls the regulated temperature, called the "set point". Some thermostats use a mercury switch connected to both electrical leads. The angle of the entire mechanism is adjustable to control the set point of the thermostat. Depending upon the application, a higher temperature may open a contact (as in a heater control) or it may close a contact (as in a refrigerator or air conditioner). 
The electrical contacts may control the power directly (as in a household iron) or indirectly, switching electrical power through a relay or the supply of natural gas or fuel oil through an electrically operated valve. In some natural gas heaters the power may be provided with a thermocouple that is heated by a pilot light (a small, continuously burning, flame). In devices without pilot lights for ignition (as in most modern gas clothes dryers and some natural gas heaters and decorative fireplaces) the power for the contacts is provided by reduced household electrical power that operates a relay controlling an electronic ignitor, either a resistance heater or an electrically powered spark generating device. Thermometers. A direct indicating dial thermometer, common in household devices (such as a patio thermometer or a meat thermometer), uses a bimetallic strip wrapped into a coil in its most common design. The coil converts the linear expansion of the metal into a circular movement by virtue of its helical shape. One end of the coil is fixed to the housing of the device as a fixed point, and the other drives an indicating needle around a circular dial. A bimetallic strip is also used in a recording thermometer. Breguet's thermometer consists of a tri-metallic helix for greater accuracy. Heat engine. Heat engines driven by bimetallic strips are very inefficient, in part because there is no chamber to contain the heat. Moreover, a bimetallic strip cannot exert much force: to achieve a noticeable bend, both metal layers must be thin enough that the difference in their expansion is appreciable. Bimetallic strips in heat engines are therefore found mostly in simple toys built to demonstrate how the principle can be used to drive a heat engine. Electrical devices. 
Bimetal strips are used in miniature circuit breakers to protect circuits from excess current. A coil of wire is used to heat a bimetal strip, which bends and operates a linkage that unlatches a spring-operated contact. This interrupts the circuit and can be reset when the bimetal strip has cooled down. Bimetal strips are also used in time-delay relays, gas oven safety valves, thermal flashers for older turn signal lamps, and fluorescent lamp starters. In some devices, the current running directly through the bimetal strip is sufficient to heat it and operate contacts directly. It has also been used in mechanical PWM voltage regulators for automotive uses. References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "\\frac{1}{R} - \\frac{1}{R_0} = \\frac{3}{2a} \\frac{(\\alpha_2-\\alpha_1) (T-T_0)}{h} = \\frac{3}{2a} \\frac{\\Delta\\epsilon}{h} " }, { "math_id": 3, "text": "h = h_1 + h_2" }, { "math_id": 4, "text": "a = 1 + (E_1 h_1^2 - E_2 h_2^2)^2 / (4\\ E_1 h_1\\ E_2 h_2\\ h^2)\n" }, { "math_id": 5, "text": "E_i" }, { "math_id": 6, "text": "\\alpha_i\n" }, { "math_id": 7, "text": "h_i\n" }, { "math_id": 8, "text": "\\Delta\\epsilon = (\\alpha_2-\\alpha_1)(T-T_0)\n" }, { "math_id": 9, "text": "a \\simeq 1\n" } ]
https://en.wikipedia.org/wiki?curid=589225
589277
William Huggins
British astronomer Sir William Huggins (7 February 1824 – 12 May 1910) was a British astronomer best known for his pioneering work in astronomical spectroscopy together with his wife, Margaret. Biography. William Huggins was born at Cornhill, Middlesex, in 1824. In 1875, he married Margaret Lindsay, daughter of John Murray of Dublin, who also had an interest in astronomy and scientific research. She encouraged her husband's photography and helped to put their research on a systematic footing. Huggins built a private observatory at 90 Upper Tulse Hill, London, from where he and his wife carried out extensive observations of the spectral emission lines and absorption lines of various celestial objects. On 29 August 1864, Huggins was the first to take the spectrum of a planetary nebula when he analysed NGC 6543. He was also the first to distinguish between nebulae and galaxies by showing that some (like the Orion Nebula) had pure emission spectra characteristic of gas, while others like the Andromeda Galaxy had the spectral characteristics of stars. Huggins was assisted in the analysis of spectra by his neighbor, the chemist William Allen Miller. Huggins was also the first to adopt dry plate photography in imaging astronomical objects. With observations of Sirius showing a redshift in 1868, Huggins hypothesized that a radial velocity of the star could be computed. Huggins won the Gold Medal of the Royal Astronomical Society in 1867, jointly with William Allen Miller. He later served as President of the Royal Astronomical Society from 1876 to 1878, and received the Gold Medal again (this time alone) in 1885. He served as an officer of the Royal Astronomical Society for a total of 37 years, more than any other person. Huggins was elected a Fellow of the Royal Society in June 1865, was awarded their Royal Medal (1866), Rumford Medal (1880) and Copley Medal (1898) and delivered their Bakerian Lecture in 1885. 
He then served as President of the Royal Society from 1900 to 1905. His Presidential Address in 1904, for example, praised the fallen Fellows and distributed the prizes of that year. He died at his home in Tulse Hill, London, after an operation for a hernia in 1910 and was buried at Golders Green Crematorium. Telescopes. In 1856 Huggins acquired a 5-inch diameter aperture telescope by Dollond. In 1858 an 8-inch telescope by Clark was added. Both were refracting telescopes with glass objectives. In 1871 Huggins acquired a speculum reflecting telescope from the Grubb Telescope Company. Honours and awards. Honours Awards Named after him Publications. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\lambda" } ]
https://en.wikipedia.org/wiki?curid=589277
58928004
Waiter–Client game
A Waiter-Client game (also called: Picker-Chooser game) is a kind of positional game. Like most positional games, it is described by its set of "positions/points/elements" (formula_0), and its family of "winning-sets" (formula_1- a family of subsets of formula_0). It is played by two players, called Waiter and Client. Each round, Waiter picks two elements, Client chooses one element and Waiter gets the other element (similarly to the Divide and choose protocol). In a Waiter-Client game, Waiter wins if he manages to occupy all the elements of a winning-set, while Client wins if he manages to prevent this, i.e., hold at least one element in each winning-set. So Waiter and Client have, respectively, the same goals as Maker and Breaker in a Maker-Breaker game; only the rules for taking elements are different. In a Client-Waiter game the winning conditions are reversed: Client wins if he manages to hold all the elements of a winning-set, while Waiter wins if he manages to hold at least one element in each winning-set. Comparison to Maker-Breaker games. In some cases, the Waiter is much more powerful than the player with the same goal in the Maker-Breaker variant. For example, consider a variant of tic-tac-toe in which Maker wins by taking "k" squares in a row and Breaker wins by blocking all rows. Then, when the board is infinite, Waiter can win as Maker for any "k &gt;= 1". Moreover, Waiter can win as Breaker for any "k" &gt;= 2: in each round, Waiter picks a pair of squares that are not adjacent to the pairs picked so far (for example, in round "i" he picks the squares (2"i",0) and (2"i",1)). Moreover, even when the board is finite, Waiter always wins as Breaker when "k" &gt;= 8. This leads to the following conjecture by József Beck: If Maker wins the Maker-Breaker game on formula_2 when playing second, then Waiter wins the Waiter-Client game on formula_2. 
Similarly, if Breaker wins the Maker-Breaker game on formula_2 when playing second, then Waiter wins the Client-Waiter game on formula_2. Special cases. k-uniform hypergraphs. Suppose the winning-sets are all of size "k" (i.e., the game-hypergraph is "k"-uniform). In a Maker-Breaker game, the Erdős–Selfridge theorem implies that Breaker wins if the number of winning-sets is less than formula_3. By the above conjecture, we would expect the same to hold in the corresponding Client-Waiter game - Waiter "should" win (as Breaker) whenever the number of winning-sets is less than formula_3. However, currently only weaker bounds are known: Waiter is known to win the Client-Waiter game when the number of winning-sets is less than formula_4, a threshold that has since been improved to formula_5. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathcal{F}" }, { "math_id": 2, "text": "(X,\\mathcal{F})" }, { "math_id": 3, "text": "2^{k-1}" }, { "math_id": 4, "text": "{2^{k-1} \\over 8(k+1)}" }, { "math_id": 5, "text": "{2^{k-1} \\over 3\\sqrt{k+1/2}}" } ]
https://en.wikipedia.org/wiki?curid=58928004
589303
Molecular orbital theory
Method for describing the electronic structure of molecules using quantum mechanics In chemistry, molecular orbital theory (MO theory or MOT) is a method for describing the electronic structure of molecules using quantum mechanics. It was proposed early in the 20th century. In molecular orbital theory, electrons in a molecule are not assigned to individual chemical bonds between atoms, but are treated as moving under the influence of the atomic nuclei in the whole molecule. Quantum mechanics describes the spatial and energetic properties of electrons as molecular orbitals that surround two or more atoms in a molecule and contain valence electrons between atoms. Molecular orbital theory revolutionized the study of chemical bonding by approximating the states of bonded electrons—the molecular orbitals—as linear combinations of atomic orbitals (LCAO). These approximations are made by applying the density functional theory (DFT) or Hartree–Fock (HF) models to the Schrödinger equation. Molecular orbital theory and valence bond theory are the foundational theories of quantum chemistry. Linear combination of atomic orbitals (LCAO) method. In the LCAO method, each molecule has a set of molecular orbitals. It is assumed that the molecular orbital wave function "ψj" can be written as a simple weighted sum of the "n" constituent atomic orbitals "χi", according to the following equation: formula_0 One may determine "cij" coefficients numerically by substituting this equation into the Schrödinger equation and applying the variational principle. The variational principle is a mathematical technique used in quantum mechanics to build up the coefficients of each atomic orbital basis. A larger coefficient means that the orbital basis is composed more of that particular contributing atomic orbital—hence, the molecular orbital is best characterized by that type. 
This method of quantifying orbital contribution as a linear combination of atomic orbitals is used in computational chemistry. An additional unitary transformation can be applied to the system to accelerate the convergence in some computational schemes. Molecular orbital theory was seen as a competitor to valence bond theory in the 1930s, before it was realized that the two methods are closely related and that when extended they become equivalent. Molecular orbital theory is used to interpret ultraviolet-visible spectroscopy (UV-VIS). Changes to the electronic structure of molecules can be seen by the absorbance of light at specific wavelengths. These signals can be assigned to transitions of electrons from a lower-energy orbital to a higher-energy orbital. The molecular orbital diagram for the final state describes the electronic nature of the molecule in an excited state. There are three main requirements for atomic orbital combinations to be suitable as approximate molecular orbitals: the combining atomic orbitals must have similar energies, they must overlap appreciably in space, and they must have the same symmetry with respect to the bond axis. History. Molecular orbital theory was developed in the years after valence bond theory had been established (1927), primarily through the efforts of Friedrich Hund, Robert Mulliken, John C. Slater, and John Lennard-Jones. MO theory was originally called the Hund-Mulliken theory. According to physicist and physical chemist Erich Hückel, the first quantitative use of molecular orbital theory was the 1929 paper of Lennard-Jones. This paper predicted a triplet ground state for the dioxygen molecule which explained its paramagnetism (see ) before valence bond theory, which came up with its own explanation in 1931. The word "orbital" was introduced by Mulliken in 1932. By 1933, the molecular orbital theory had been accepted as a valid and useful theory. 
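The LCAO expansion and variational principle described earlier reduce, in a finite basis, to the generalized eigenvalue (secular) problem H c = E S c. Below is a minimal sketch for two identical atomic orbitals, an H2-like two-level model; the numerical values of the Coulomb integral alpha, resonance integral beta, and overlap S are illustrative assumptions, not data from this article:

```python
import numpy as np

# Two identical atomic orbitals: secular problem H c = E S c.
alpha, beta, S = -13.6, -5.0, 0.25  # illustrative values
H = np.array([[alpha, beta], [beta, alpha]])
Smat = np.array([[1.0, S], [S, 1.0]])

# Solve by Loewdin orthogonalization: diagonalize S^(-1/2) H S^(-1/2).
s_val, s_vec = np.linalg.eigh(Smat)
S_inv_half = s_vec @ np.diag(s_val**-0.5) @ s_vec.T
E, c_orth = np.linalg.eigh(S_inv_half @ H @ S_inv_half)
c = S_inv_half @ c_orth  # MO coefficients c_ij in the original AO basis

# Bonding MO: E = (alpha + beta)/(1 + S); antibonding: (alpha - beta)/(1 - S).
```

The lower (bonding) solution has equal coefficients on both atomic orbitals, while the antibonding one has coefficients of opposite sign, mirroring the qualitative picture in the text.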
Erich Hückel applied molecular orbital theory to unsaturated hydrocarbon molecules starting in 1931 with his Hückel molecular orbital (HMO) method for the determination of MO energies for pi electrons, which he applied to conjugated and aromatic hydrocarbons. This method provided an explanation of the stability of molecules with six pi-electrons such as benzene. The first accurate calculation of a molecular orbital wavefunction was that made by Charles Coulson in 1938 on the hydrogen molecule. By 1950, molecular orbitals were completely defined as eigenfunctions (wave functions) of the self-consistent field Hamiltonian and it was at this point that molecular orbital theory became fully rigorous and consistent. This rigorous approach is known as the Hartree–Fock method for molecules although it had its origins in calculations on atoms. In calculations on molecules, the molecular orbitals are expanded in terms of an atomic orbital basis set, leading to the Roothaan equations. This led to the development of many ab initio quantum chemistry methods. In parallel, molecular orbital theory was applied in a more approximate manner using some empirically derived parameters in methods now known as semi-empirical quantum chemistry methods. The success of Molecular Orbital Theory also spawned ligand field theory, which was developed during the 1930s and 1940s as an alternative to crystal field theory. Types of orbitals. Molecular orbital (MO) theory uses a linear combination of atomic orbitals (LCAO) to represent molecular orbitals resulting from bonds between atoms. These are often divided into three types, bonding, antibonding, and non-bonding. A bonding orbital concentrates electron density in the region "between" a given pair of atoms, so that its electron density will tend to attract each of the two nuclei toward the other and hold the two atoms together. An anti-bonding orbital concentrates electron density "behind" each nucleus (i.e. 
on the side of each atom which is farthest from the other atom), and so tends to pull each of the two nuclei away from the other and actually weaken the bond between the two nuclei. Electrons in non-bonding orbitals tend to be associated with atomic orbitals that do not interact positively or negatively with one another, and electrons in these orbitals neither contribute to nor detract from bond strength. Molecular orbitals are further divided according to the types of atomic orbitals they are formed from. Chemical substances will form bonding interactions if their orbitals become lower in energy when they interact with each other. Different bonding orbitals are distinguished that differ by electron configuration (electron cloud shape) and by energy levels. The molecular orbitals of a molecule can be illustrated in molecular orbital diagrams. Common bonding orbitals are sigma (σ) orbitals, which are symmetric about the bond axis, and pi (π) orbitals, which have a nodal plane along the bond axis. Less common are delta (δ) orbitals and phi (φ) orbitals with two and three nodal planes respectively along the bond axis. Antibonding orbitals are signified by the addition of an asterisk. For example, an antibonding pi orbital may be shown as π*. Bond Order. Bond order is the number of chemical bonds between a pair of atoms. The bond order of a molecule can be calculated by subtracting the number of electrons in anti-bonding orbitals from the number in bonding orbitals and dividing the result by two. A molecule is expected to be stable if its bond order is larger than zero. It is adequate to consider only the valence electrons when determining the bond order, because (for principal quantum number n &gt; 1) the MOs derived from the core 1s AOs hold equal numbers of bonding and anti-bonding electrons, so the core electrons have no net effect on the bond order. 
Bond order = 1/2 [(number of electrons in bonding MOs) − (number of electrons in anti-bonding MOs)]. From the bond order, one can predict whether a bond between two atoms will form. Consider, for example, the existence of the He2 molecule: from the molecular orbital diagram, the bond order is (2 − 2)/2 = 0, meaning that no bond forms between two He atoms, as is seen experimentally. He2 can be detected only in molecular beams at very low temperature and pressure, and it has a binding energy of approximately 0.001 J/mol. The strength of a bond can also be gauged from the bond order (BO). For example: H2: BO = (2 − 0)/2 = 1; bond energy = 436 kJ/mol. H2+: BO = (1 − 0)/2 = 1/2; bond energy = 171 kJ/mol. Since the bond order of H2+ is smaller than that of H2, it should be less stable, which is observed experimentally and can be seen from the bond energies. Overview. MOT provides a global, delocalized perspective on chemical bonding. In MO theory, "any" electron in a molecule may be found "anywhere" in the molecule, since quantum conditions allow electrons to travel under the influence of an arbitrarily large number of nuclei, as long as they are in eigenstates permitted by certain quantum rules. Thus, when excited with the requisite amount of energy through high-frequency light or other means, electrons can transition to higher-energy molecular orbitals. For instance, in the simple case of a hydrogen diatomic molecule, promotion of a single electron from a bonding orbital to an antibonding orbital can occur under UV radiation. This promotion weakens the bond between the two hydrogen atoms and can lead to photodissociation—the breaking of a chemical bond due to the absorption of light. Although in MO theory "some" molecular orbitals may hold electrons that are more localized between specific pairs of molecular atoms, "other" orbitals may hold electrons that are spread more uniformly over the molecule. Thus, overall, bonding is far more delocalized in MO theory, which makes it more applicable to resonant molecules that have equivalent non-integer bond orders than valence bond theory. This makes MO theory more useful for the description of extended systems. Robert S. Mulliken, who actively participated in the advent of molecular orbital theory, considers each molecule to be a self-sufficient unit. He asserts in his article: ...Attempts to regard a molecule as consisting of specific atomic or ionic units held together by discrete numbers of bonding electrons or electron-pairs are considered as more or less meaningless, except as an approximation in special cases, or as a method of calculation […]. A molecule is here regarded as a set of nuclei, around each of which is grouped an electron configuration closely similar to that of a free atom in an external field, except that the outer parts of the electron configurations surrounding each nucleus usually belong, in part, jointly to two or more nuclei... An example is the MO description of benzene, C6H6, which is an aromatic hexagonal ring of six carbon atoms and three double bonds. In this molecule, 24 of the 30 total valence bonding electrons—24 coming from carbon atoms and 6 coming from hydrogen atoms—are located in 12 σ (sigma) bonding orbitals, which are located mostly between pairs of atoms (C-C or C-H), similarly to the electrons in the valence bond description. 
However, in benzene the remaining six bonding electrons are located in three π (pi) molecular bonding orbitals that are delocalized around the ring. Two of these electrons are in an MO that has equal orbital contributions from all six atoms. The other four electrons are in orbitals with vertical nodes at right angles to each other. As in the VB theory, all of these six delocalized π electrons reside in a larger space that exists above and below the ring plane. All carbon-carbon bonds in benzene are chemically equivalent. In MO theory this is a direct consequence of the fact that the three molecular π orbitals combine and evenly spread the extra six electrons over six carbon atoms. In molecules such as methane, CH4, the eight valence electrons are found in four MOs that are spread out over all five atoms. It is possible to transform the MOs into four localized sp3 orbitals. Linus Pauling, in 1931, hybridized the carbon 2s and 2p orbitals so that they pointed directly at the hydrogen 1s basis functions and featured maximal overlap. However, the delocalized MO description is more appropriate for predicting ionization energies and the positions of spectral absorption bands. When methane is ionized, a single electron is taken from the valence MOs, which can come from the s bonding or the triply degenerate p bonding levels, yielding two ionization energies. In comparison, the explanation in valence bond theory is more complicated. When one electron is removed from an sp3 orbital, resonance is invoked between four valence bond structures, each of which has a single one-electron bond and three two-electron bonds. Triply degenerate T2 and A1 ionized states (CH4+) are produced from different linear combinations of these four structures. The difference in energy between the ionized and ground state gives the two ionization energies. 
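The delocalized benzene π orbitals described above can be reproduced with a minimal Hückel-type calculation, diagonalizing the nearest-neighbour coupling matrix of the six-membered carbon ring. This is a sketch: the on-site energy α = 0 and coupling β = −1 are conventional illustrative choices, so the energies come out in units of |β| relative to α:

```python
import numpy as np

# Hückel model of benzene's pi system: six carbon 2p orbitals on a ring,
# with nearest-neighbour resonance integral beta = -1 and on-site alpha = 0.
n = 6
H = np.zeros((n, n))
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = -1.0

energies, orbitals = np.linalg.eigh(H)
# Spectrum: -2, -1, -1, +1, +1, +2 (in units of |beta|). The six pi electrons
# fill the three negative (bonding) levels; the lowest MO has equal
# contributions from all six carbon atoms, as stated in the text.
```

The twofold degeneracies at ±1 correspond to the pairs of orbitals with perpendicular nodal planes mentioned above.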
As in benzene, in substances such as beta carotene, chlorophyll, or heme, some electrons in the π orbitals are spread out in molecular orbitals over long distances in a molecule, resulting in light absorption in lower energies (the visible spectrum), which accounts for the characteristic colours of these substances. This and other spectroscopic data for molecules are well explained in MO theory, with an emphasis on electronic states associated with multicenter orbitals, including mixing of orbitals premised on principles of orbital symmetry matching. The same MO principles also naturally explain some electrical phenomena, such as high electrical conductivity in the planar direction of the hexagonal atomic sheets that exist in graphite. This results from continuous band overlap of half-filled p orbitals and explains electrical conduction. MO theory recognizes that some electrons in the graphite atomic sheets are completely delocalized over arbitrary distances, and reside in very large molecular orbitals that cover an entire graphite sheet, and some electrons are thus as free to move and therefore conduct electricity in the sheet plane, as if they resided in a metal. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\psi_j = \\sum_{i=1}^{n} c_{ij} \\chi_i." } ]
https://en.wikipedia.org/wiki?curid=589303
58935
Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure
The following is a timeline of galaxies, clusters of galaxies, and large-scale structure of the universe. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "D_n" }, { "math_id": 1, "text": "r_g" } ]
https://en.wikipedia.org/wiki?curid=58935
58936850
Serge Cantat
French mathematician Serge Marc Cantat (born 3 June 1973, in Paris) is a French mathematician, specializing in geometry and dynamical systems. Cantat received his PhD under the supervision of Étienne Ghys in 1999 at the École normale supérieure de Lyon. Cantat is a directeur de recherche of CNRS at the Institut de recherches mathématiques de Rennes (University of Rennes 1). He was previously directeur de recherche of CNRS at ENS Paris. His research deals with complex dynamics and dynamics of automorphisms of algebraic surfaces. He examined the algebraic structure of Cremona groups ("i.e." groups of birational automorphisms of formula_0-dimensional projective spaces over a field formula_1) and showed with Stéphane Lamy that for an algebraically closed field formula_1 and for dimension formula_0=2 the Cremona group formula_2 is not a simple group. In particular, if formula_1 is the field of complex numbers and formula_0=2, the Cremona group contains an infinite non-countable family of different normal subgroups. In 2018, Cantat was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro. In 2012 he received the Prix Paul Doistau–Émile Blutet for his work on dynamic systems (and especially holomorphic dynamic systems). In 2012 he was an invited speaker at the European Congress of Mathematics in Kraków. In 2012 he was awarded the Prix La Recherche.
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "\\mathrm{Cr}(\\mathbb{P}^n(k))" } ]
https://en.wikipedia.org/wiki?curid=58936850
5893800
Prism compressor
Optical device A prism compressor is an optical device used to shorten the duration of a positively chirped ultrashort laser pulse by giving different wavelength components a different time delay. It typically consists of two prisms and a mirror. Figure 1 shows the construction of such a compressor. Although the dispersion of the prism material causes different wavelength components to travel along different paths, the compressor is built such that all wavelength components leave the compressor at different times, but in the same direction. If the different wavelength components of a laser pulse were already separated in time, the prism compressor can make them overlap with each other, thus causing a shorter pulse. Prism compressors are typically used to compensate for dispersion inside Ti:sapphire modelocked lasers. Each time the laser pulse inside travels through the optical components inside the laser cavity, it becomes stretched. A prism compressor inside the cavity can be designed such that it exactly compensates this intra-cavity dispersion. It can also be used to compensate for dispersion of ultrashort pulses outside laser cavities. Prismatic pulse compression was first introduced, using a single prism, in 1983 by Dietel et al. and a four-prism pulse compressor was demonstrated in 1984 by Fork et al. Additional experimental developments include a prism-pair pulse compressor and a six-prism pulse compressor for semiconductor lasers. The multiple-prism dispersion theory, for pulse compression, was introduced in 1982 by Duarte and Piper, extended to second derivatives in 1987, and further extended to higher order phase derivatives in 2009. An additional compressor, using a large prism with lateral reflectors to enable a multi-pass arrangement at the prism, was introduced in 2006. Principle of operation. 
Almost all optical materials that are transparent for visible light have a "normal", or positive, dispersion: the refractive index decreases with increasing wavelength. This means that longer wavelengths travel faster through these materials. The same is true for the prisms in a prism compressor. However, the positive dispersion of the prisms is offset by the extra distance that the longer wavelength components have to travel through the second prism. This is a rather delicate balance, since the shorter wavelengths travel a larger distance through air. However, with a careful choice of the geometry, it is possible to create a negative dispersion that can compensate for the positive dispersion of other optical components. This is shown in Figure 3. By shifting prism P2 up and down, the dispersion of the compressor can be made both negative around refractive index "n" = 1.6 (red curve) and positive (blue curve). The range with a negative dispersion is relatively short, since prism P2 can only be moved upwards over a short distance before the light ray misses it altogether. In principle, the α angle can be varied to tune the dispersion properties of a prism compressor. In practice, however, the geometry is chosen such that the incident and refracted beams have the same angle at the central wavelength of the spectrum to be compressed. This configuration is known as the "angle of minimum deviation", and is easier to align than arbitrary angles. The refractive index of typical materials such as BK7 glass changes only a small amount (0.01 – 0.02) within the few tens of nanometers that are covered by an ultrashort pulse. Within a practical size, a prism compressor can only compensate for a few hundred μm of path length differences between the wavelength components. However, by using a high-refractive-index material (such as SF10, SF11, etc.) the compensation distance can be extended to the mm level. 
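The wavelength dependence of the refractive index underlying this balance can be evaluated from a Sellmeier equation. The sketch below uses the widely tabulated Malitson coefficients for fused silica (an assumption: the article itself gives no coefficients) and also reproduces the ≈69° apex angle of a Brewster-cut fused-silica prism used at minimum deviation, as quoted later in the article:

```python
import math

# Sellmeier equation, n^2(l) = 1 + sum_i B_i l^2 / (l^2 - C_i), l in micrometres.
# B and C are the commonly tabulated Malitson coefficients for fused silica.
B = (0.6961663, 0.4079426, 0.8974794)
C = (0.0684043**2, 0.1162414**2, 9.896161**2)

def n_silica(wavelength_um):
    l2 = wavelength_um ** 2
    return math.sqrt(1.0 + sum(b * l2 / (l2 - c) for b, c in zip(B, C)))

# Index change across a ~30 nm bandwidth near 780 nm: small, which is why a
# practically sized prism pair compensates for only a limited path-length difference.
dn = n_silica(0.765) - n_silica(0.795)

# Apex angle of a Brewster-cut prism at minimum deviation: the beam enters at
# the Brewster angle arctan(n), and the internal angle equals half the apex angle.
n = n_silica(0.780)
apex_deg = 2.0 * math.degrees(math.asin(math.sin(math.atan(n)) / n))
```

With these coefficients, n ≈ 1.454 at 780 nm and the apex angle evaluates to about 69.05°, close to the 69.06° quoted for the fused-silica prism pair of Figure 4.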
This technology has been used successfully inside femtosecond laser cavities to compensate for the dispersion of the Ti:sapphire crystal, and outside the cavity to compensate for dispersion introduced by other elements. However, high-order dispersion is introduced by the prism compressor itself, as well as by other optical elements. It can be corrected by carefully measuring the ultrashort pulse and compensating for the phase distortion. MIIPS is one of the pulse-shaping techniques that can measure and compensate for high-order dispersion automatically. As a crude form of pulse shaping, the end mirror is sometimes tilted or even deformed, at the cost of the rays no longer travelling back along the same path, or becoming divergent. In Figure 4, the characteristics of the dispersion orders of a prism-pair compressor made of fused silica are depicted as a function of the insertion depth of the first prism, denoted as formula_0, for laser pulses with a central wavelength of formula_1 and spectral bandwidth formula_2. The assessment employs the Lah-Laguerre optical formalism — a generalized formulation of the high orders of dispersion. The compressor is evaluated near the Brewster angle for a separation of formula_3 between the prisms, an insertion depth for the second prism formula_4 at the minimum wavelength formula_5, and an apex angle of formula_6 for the fused silica prisms. Dispersion theory. The angular dispersion for generalized prismatic arrays, applicable to laser pulse compression, can be calculated exactly using the multiple-prism dispersion theory. In particular, the dispersion, its first derivative, and its second derivative are given by formula_7 formula_8 formula_9 where formula_10 formula_11 formula_12 formula_13 formula_14 formula_15 Angular quantities are defined in the article for the multiple-prism dispersion theory, and higher derivatives are given by Duarte. Comparison with other pulse compressors. 
The most common other pulse compressor is based on gratings (see Chirped pulse amplification), which can easily create a much larger negative dispersion than a prism compressor (centimeters rather than tenths of millimeters). However, a grating compressor has losses of at least 30% due to higher-order diffraction and absorption losses in the metallic coating of the gratings. A prism compressor with an appropriate anti-reflection coating can have less than 2% loss, which makes it a feasible option inside a laser cavity. Moreover, a prism compressor is cheaper than a grating compressor. Another pulse compression technique uses "chirped mirrors", which are dielectric mirrors that are designed such that the reflection has a negative dispersion. Chirped mirrors are difficult to manufacture; moreover the amount of dispersion is rather small, which means that the laser beam must be reflected a number of times in order to achieve the same amount of dispersion as with a single prism compressor. This means that it is hard to tune. On the other hand, the dispersion of a chirped-mirror compressor can be manufactured to have a specific dispersion curve, whereas a prism compressor offers much less freedom. Chirped-mirror compressors are used in applications where pulses with a very large bandwidth have to be compressed. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ell_1" }, { "math_id": 1, "text": "780 \\text{ nm}" }, { "math_id": 2, "text": "\\Delta \\lambda = 30 \\text{ nm}" }, { "math_id": 3, "text": "L = 30 \\text{ cm}" }, { "math_id": 4, "text": "\\ell_2 = \\text{1 mm}" }, { "math_id": 5, "text": "\\lambda_{min}" }, { "math_id": 6, "text": "\\alpha = 69.06^\\circ" }, { "math_id": 7, "text": "\\nabla_{n}\\phi_{2,m} = H_{2,m} + (M^{-1})\\bigg(H_{1,m} \\pm \\nabla_{n}\\phi_{2,(m-1)}\\bigg)" }, { "math_id": 8, "text": "\\nabla_{n}^2\\phi_{2,m} = \\nabla_{n}H_{2,m} + (\\nabla_{n}M^{-1})\\bigg(H_{1,m} \\pm \\nabla_{n}\\phi_{2,(m-1)}\\bigg)+(M^{-1})\\bigg(\\nabla_{n}H_{1,m} \\pm \\nabla_{n}^2\\phi_{2,(m-1)}\\bigg)" }, { "math_id": 9, "text": "\\nabla_{n}^3\\phi_{2,m} = \\nabla_{n}^2H_{2,m} + (\\nabla_{n}^2M^{-1})\\bigg(H_{1,m} \\pm \\nabla_{n}\\phi_{2,(m-1)}\\bigg)+2(\\nabla_{n}M^{-1})\\bigg(\\nabla_{n}H_{1,m} \\pm \\nabla_{n}^2\\phi_{2,(m-1)}\\bigg)+(M^{-1})\\bigg(\\nabla_{n}^2H_{1,m} \\pm \\nabla_{n}^3\\phi_{2,(m-1)}\\bigg)" }, { "math_id": 10, "text": "\\nabla_{n}= \\partial/\\partial n" }, { "math_id": 11, "text": "\\,M=k_{1,m}k_{2,m}" }, { "math_id": 12, "text": "\\,k_{1,m}=\\cos\\psi_{1,m}/\\cos\\phi_{1,m}" }, { "math_id": 13, "text": "\\,k_{2,m}=\\cos\\phi_{2,m}/\\cos\\psi_{2,m}" }, { "math_id": 14, "text": "\\,H_{1,m}=(\\tan\\phi_{1,m})/n_m" }, { "math_id": 15, "text": "\\,H_{2,m}=(\\tan\\phi_{2,m})/n_m" } ]
https://en.wikipedia.org/wiki?curid=5893800
58939052
Avoider-Enforcer game
Game where players avoid making losing-sets An Avoider-Enforcer game (also called Avoider-Forcer game or Antimaker-Antibreaker game) is a kind of positional game. Like most positional games, it is described by a set of "positions/points/elements" (formula_0) and a family of subsets (formula_1), which are called here the "losing-sets". It is played by two players, called Avoider and Enforcer, who take turns picking elements until all elements are taken. Avoider wins if he manages to avoid taking a losing set; Enforcer wins if he manages to make Avoider take a losing set. A classic example of such a game is "Sim". There, the positions are all the edges of the complete graph on 6 vertices. Players take turns to shade a line in their color, and lose when they form a full triangle of their own color: the losing sets are all the triangles. Comparison to Maker-Breaker games. The winning condition of an Avoider-Enforcer game is exactly the opposite of the winning condition of the Maker-Breaker game on the same formula_1. Thus, the Avoider-Enforcer game is the Misère game variant of the Maker-Breaker game. However, there are counter-intuitive differences between these game-types. For example, consider the biased version of the games, in which the first player takes "p" elements each turn and the second player takes "q" elements each turn (in the standard version "p"=1 and "q"=1). Maker-Breaker games are "bias-monotonic": taking more elements is always an advantage. Formally, if Maker wins the ("p":"q") Maker-Breaker game, then he also wins the ("p"+1:"q") game and the (p:q-1) game. Avoider-Enforcer games are not bias-monotonic: taking more elements is not always a "dis"advantage. For example, consider a very simple Avoider-Enforcer game where the losing sets are {w,x} and {y,z}. Then, Avoider wins the (1:1) game, Enforcer wins the (1:2) game and Avoider wins the (2:2) game. 
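The small example above can be verified by brute force. The sketch below (illustrative helper names, not from the source) solves the (p:q) Avoider-Enforcer game on the elements {w, x, y, z} with losing sets {w, x} and {y, z}, with Avoider moving first and a shortened final turn when fewer elements remain:

```python
from functools import lru_cache
from itertools import combinations

LOSING_SETS = (frozenset("wx"), frozenset("yz"))

@lru_cache(maxsize=None)
def avoider_wins(avoider_taken, remaining, p, q, avoider_to_move):
    """True if Avoider wins with optimal play from this position."""
    if not remaining:
        # Avoider wins iff no losing set is entirely among his elements.
        return not any(s <= avoider_taken for s in LOSING_SETS)
    size = min(p if avoider_to_move else q, len(remaining))
    moves = [frozenset(m) for m in combinations(sorted(remaining), size)]
    if avoider_to_move:
        return any(avoider_wins(avoider_taken | m, remaining - m, p, q, False)
                   for m in moves)
    return all(avoider_wins(avoider_taken, remaining - m, p, q, True)
               for m in moves)

def solve(p, q):
    """Solve the (p:q) game with Avoider as the first player."""
    return avoider_wins(frozenset(), frozenset("wxyz"), p, q, True)
```

With these rules the solver reproduces the non-monotonicity claimed above: the (1:1) and (2:2) games are Avoider wins, while the (1:2) game is an Enforcer win.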
There is a "monotone" variant of the ("p":"q") Avoider-Enforcer game-rules, in which Avoider has to pick "at least" "p" elements each turn and Enforcer has to pick at least "q" elements each turn; this variant is bias-monotonic. Partial avoidance. Similarly to Maker-Breaker games, Avoider-Enforcer games also have fractional generalizations. Suppose Avoider needs to avoid taking at least a fraction "t" of the elements in any winning-set (i.e., take at most a fraction 1-"t" of the elements in any set), and Enforcer needs to prevent this, i.e., Enforcer needs to take less than a fraction "t" of the elements in some winning-set. Define the constant: formula_2 (in the standard variant, formula_3). Then, if formula_4, Avoider has a winning strategy. See also. Biased positional game#A winning condition for Avoider References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathcal{F}" }, { "math_id": 2, "text": "c_t := (2t)^t \\cdot (2-2t)^{1-t} = 2 \\cdot t^t \\cdot (1-t)^{1-t}" }, { "math_id": 3, "text": "t=1, c_t\\to 2" }, { "math_id": 4, "text": "\\sum_{E\\in \\mathcal{F}} {c_t}^{-|E|} < 1" } ]
https://en.wikipedia.org/wiki?curid=58939052
58939295
Biased positional game
A biased positional game is a variant of a positional game. Like most positional games, it is described by a set of "positions/points/elements" (formula_0) and a family of subsets (formula_1), which are usually called the "winning-sets". It is played by two players who take turns picking elements until all elements are taken. While in the standard game each player picks one element per turn, in the biased game each player takes a different number of elements. More formally, for every two positive integers "p" and "q", a (p:q)-positional game is a game in which the first player picks "p" elements per turn and the second player picks "q" elements per turn. The main question of interest regarding biased positional games is what is their "threshold bias" - what is the bias at which the winning-power switches from one player to the other player. Example. As an example, consider the "triangle game". In this game, the elements are all edges of a complete graph on "n" vertices, and the winning-sets are all triangles (=cliques on 3 vertices). Suppose we play it as a Maker-Breaker game, i.e., the goal of Maker (the first player) is to take a triangle and the goal of Breaker (the second player) is to prevent Maker from taking a triangle. Using a simple case-analysis, it can be proved that Maker has a winning strategy whenever "n" is at least 6. Therefore, it is interesting to ask whether this advantage can be offset by letting Breaker pick more than 1 element per turn. Indeed, it is possible to prove that: when formula_2, Maker wins the (1:"q") triangle game; when formula_3, Breaker wins the (1:"q") triangle game. A winning condition for Breaker. In an unbiased Maker-Breaker game, the Erdos-Selfridge theorem gives a winning condition for Breaker. This condition can be generalized to biased games as follows: if formula_4, then Breaker wins the (p:q) game when playing first; if formula_5, then Breaker wins even when playing second. The strategy uses a potential function which generalizes the function of Erdos-Selfridge. The potential of a (non-broken) winning-set "E" with |"E"| untaken elements is defined as formula_6. 
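This potential sum is straightforward to evaluate. The sketch below (illustrative function names, not from the source) computes the generalized Erdos-Selfridge potential for a family of winning-sets, given only their sizes, and tests the resulting sufficient conditions for Breaker:

```python
def potential_sum(set_sizes, p, q):
    """Sum of (1+q)^(-|E|/p) over all winning-sets E, given their sizes."""
    return sum((1 + q) ** (-size / p) for size in set_sizes)

def breaker_wins_first(set_sizes, p, q):
    # Sufficient condition for Breaker when Breaker moves first.
    return potential_sum(set_sizes, p, q) < 1

def breaker_wins_second(set_sizes, p, q):
    # Stricter sufficient condition when Breaker moves second.
    return potential_sum(set_sizes, p, q) < 1 / (1 + q)
```

For a "k"-uniform family of "m" winning-sets the sum equals m(1+q)^(-k/p), so these conditions reduce to the closed-form thresholds m &lt; (q+1)^(k/p) and m &lt; (q+1)^(k/p-1) stated below.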
If Maker wins the game then there exists a set "E" with |"E"|=0, so its potential is 1; therefore, to prove that Breaker wins, it is sufficient to prove that the final potential-sum is less than 1. Indeed, by assumption, the potential-sum at Breaker's first turn is less than 1; and if Breaker always picks an element that maximizes the potential-drop, it is possible to show that the potential-sum always weakly decreases. When each winning-set has formula_7 elements, for some fixed "k", Breaker's winning condition simplifies to: formula_8 (when playing first) or formula_9 (when playing second). This condition is tight: there are "k"-uniform set-families with formula_10 sets where Maker wins. A winning condition for Maker. In an unbiased Maker-Breaker game, a theorem by Beck gives a winning condition for Maker. It uses the pair-degree of the hypergraph, denoted by formula_11. This condition can be generalized to biased games as follows: "If formula_12, then Maker has a winning-strategy in the (p:q) game when playing first." A winning condition for Avoider. In a biased Avoider-Enforcer game, the following conditions guarantee that Avoider has a winning strategy: if formula_13, then Avoider wins the (p:q) game, for every "q"; in the unbiased (1:1) game this condition reads formula_14, and when each winning-set has "k" elements it becomes formula_15. Under the monotone game rules, Avoider wins whenever formula_16. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathcal{F}" }, { "math_id": 2, "text": "q \\leq 0.5 \\sqrt{n}" }, { "math_id": 3, "text": "q \\geq 2 \\sqrt{n}" }, { "math_id": 4, "text": "\\sum_{E\\in \\mathcal{F}} (1+q)^{-|E|/p} < 1" }, { "math_id": 5, "text": "\\sum_{E\\in \\mathcal{F}} (1+q)^{-|E|/p} < {1\\over 1+q}" }, { "math_id": 6, "text": "(1+q)^{-|E|/p}" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "|\\mathcal{F}| < (q+1)^{k/p}" }, { "math_id": 9, "text": "|\\mathcal{F}| < (q+1)^{k/p-1}" }, { "math_id": 10, "text": "|\\mathcal{F}| = (q+1)^{k/p-1}" }, { "math_id": 11, "text": "d_2" }, { "math_id": 12, "text": "\\sum_{E\\in \\mathcal{F}} {p+q\\over p}^{-|E|} > {p^2 q^2 \\over (p+q)^3}\\cdot d_2 \\cdot |X|" }, { "math_id": 13, "text": "\\sum_{E\\in \\mathcal{F}} (1+1/p)^{p-|E|} < 1" }, { "math_id": 14, "text": "\\sum_{E\\in \\mathcal{F}} 2^{1-|E|} < 1" }, { "math_id": 15, "text": "|\\mathcal{F}| < (1+1/p)^{k-1}" }, { "math_id": 16, "text": "\\sum_{E\\in \\mathcal{F}} \\left(1+{q\\over p k}\\right)^{p-|E|} < 1" } ]
https://en.wikipedia.org/wiki?curid=58939295
58943723
Clique game
Positional game The clique game is a positional game where two players alternately pick edges, trying to occupy a complete clique of a given size. The game is parameterized by two integers "n" &gt; "k". The game-board is the set of all edges of a complete graph on "n" vertices. The winning-sets are all the cliques on "k" vertices. There are several variants of this game: in the strong-positional variant, the first player who occupies a "k"-clique wins, and if neither does, the game is a draw; in the Maker-Breaker variant, the first player (Maker) wins by occupying a "k"-clique, and the second player (Breaker) wins by preventing this; in the Avoider-Enforcer variant, a player who occupies a "k"-clique loses. The clique game (in its strong-positional variant) was first presented by Paul Erdős and John Selfridge, who attributed it to Simmons. They called it the Ramsey game, since it is closely related to Ramsey's theorem (see below). Winning conditions. Ramsey's theorem implies that, whenever we color the edges of a complete graph with 2 colors, there is at least one monochromatic clique. Moreover, for every integer "k", there exists an integer "R(k,k)" such that, in every graph with formula_0 vertices, any 2-coloring contains a monochromatic clique of size at least "k". This means that, if formula_0, the clique game can never end in a draw. A strategy-stealing argument implies that the first player can always force at least a draw; therefore, if formula_0, Maker wins. By substituting known bounds for the Ramsey number we get that Maker wins whenever formula_1. On the other hand, the Erdos-Selfridge theorem implies that Breaker wins whenever formula_2. Beck improved these bounds as follows: Maker wins whenever formula_3, and Breaker wins whenever formula_4. Ramsey game on higher-order hypergraphs. Instead of playing on complete graphs, the clique game can also be played on complete hypergraphs of higher orders. For example, in the clique game on triplets, the game-board is the set of triplets of integers 1, ..., "n" (so its size is formula_5), and winning-sets are all sets of triplets of "k" integers (so the size of any winning-set in it is formula_6). By Ramsey's theorem on triples, if formula_7, Maker wins. The currently known upper bound on formula_8 is very large, formula_9. 
In contrast, Beck proves that formula_10, where formula_11 is the smallest integer such that Maker has a winning strategy. In particular, if formula_12 then the game is Maker's win. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n \\geq R_2(k,k)" }, { "math_id": 1, "text": "k \\leq {\\log_2 n\\over 2}" }, { "math_id": 2, "text": "k \\geq {2 \\log_2 n}" }, { "math_id": 3, "text": "k \\leq 2 \\log_2 n - 2\\log_2\\log_2 n + 2\\log_2 e - 10/3 + o(1)" }, { "math_id": 4, "text": "k \\geq 2 \\log_2 n - 2\\log_2\\log_2 n + 2\\log_2 e - 1 + o(1)" }, { "math_id": 5, "text": "{n \\choose 3}" }, { "math_id": 6, "text": "{k \\choose 3}" }, { "math_id": 7, "text": "n \\geq R_3(k,k)" }, { "math_id": 8, "text": "R_3(k,k)" }, { "math_id": 9, "text": "2^{k^2/6} < R_3(k,k) < 2^{2^{4k-10}}" }, { "math_id": 10, "text": "2^{k^2/6} < R^*_3(k,k) < k^4 2^{k^3/6}" }, { "math_id": 11, "text": "R^*_3(k,k)" }, { "math_id": 12, "text": "k^4 2^{k^3/6} < n" } ]
https://en.wikipedia.org/wiki?curid=58943723
58943959
Arithmetic progression game
Positional game The arithmetic progression game is a positional game where two players alternately pick numbers, trying to occupy a complete arithmetic progression of a given size. The game is parameterized by two integers "n" &gt; "k". The game-board is the set {1, ..., "n"}. The winning-sets are all the arithmetic progressions of length "k". In a Maker-Breaker game variant, the first player (Maker) wins by occupying a "k"-length arithmetic progression, otherwise the second player (Breaker) wins. The game is also called the van der Waerden game, named after Van der Waerden's theorem. It says that, for any "k", there exists some integer "W"(2,"k") such that, if the integers {1, ..., "W"(2,"k")} are partitioned arbitrarily into two sets, then at least one set contains an arithmetic progression of length "k". This means that, if formula_0, then Maker has a winning strategy. Unfortunately, this claim is not constructive: it does not show a specific strategy for Maker. Moreover, the current upper bound for "W"(2,"k") is extremely large (the currently known bounds are formula_1). Let "W"*(2,"k") be the smallest integer such that Maker has a winning strategy. Beck proves that formula_2. In particular, if formula_3, then the game is Maker's win (even though this bound is much smaller than the number that guarantees no draw). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n \\geq W(2,k)" }, { "math_id": 1, "text": "2^{k}/k^\\varepsilon < W(2,k) < 2^{2^{2^{2^{k+9}}}}" }, { "math_id": 2, "text": "2^{k-7k^{7/8}} < W^*(2,k) < k^3 2^{k-4}" }, { "math_id": 3, "text": "k^3 2^{k-4} < n" } ]
https://en.wikipedia.org/wiki?curid=58943959
58946850
NGC 681
Spiral galaxy in the constellation Cetus Coordinates: 01h 49m 10.829s, −10° 25′ 35.13″ NGC 681 is an intermediate spiral galaxy in the constellation of Cetus, located approximately 66.5 million light-years from Earth. Observation history. NGC 681 was discovered by the German-born British astronomer William Herschel on 28 November 1785 and was later also observed by William's son, John Herschel. John Louis Emil Dreyer, compiler of the first "New General Catalogue of Nebulae and Clusters of Stars", described NGC 681 as being a "pretty faint, considerably large, round, small (faint) star 90 arcsec to [the] west" that becomes "gradually a little brighter [in the] middle". Physical characteristics. NGC 681 shares many structural similarities with the Sombrero Galaxy, M104, although it is smaller, less luminous, and less massive. Its thin, dusty disc is seen almost perfectly edge-on and features a small, very bright nucleus in the center of a very pronounced bulge. Distinctly unlike M104, NGC 681's disc contains many H II regions, where star formation is likely to be occurring. The galaxy has a mass of M☉, a mass-to-light ratio of 3.6 formula_0, and a spiral pattern which is asymmetrical. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Upsilon\\odot" } ]
https://en.wikipedia.org/wiki?curid=58946850
5894781
Standard solar model
Theoretical framework detailing the Sun's structure, composition and energetics The standard solar model (SSM) is a mathematical model of the Sun as a spherical ball of gas (in varying states of ionisation, with the hydrogen in the deep interior being a completely ionised plasma). This stellar model, technically the spherically symmetric quasi-static model of a star, has stellar structure described by several differential equations derived from basic physical principles. The model is constrained by boundary conditions, namely the luminosity, radius, age and composition of the Sun, which are well determined. The age of the Sun cannot be measured directly; one way to estimate it is from the age of the oldest meteorites, and models of the evolution of the Solar System. The composition in the photosphere of the modern-day Sun, by mass, is 74.9% hydrogen and 23.8% helium. All heavier elements, called "metals" in astronomy, account for less than 2 percent of the mass. The SSM is used to test the validity of stellar evolution theory. In fact, the only way to determine the two free parameters of the stellar evolution model, the helium abundance and the mixing length parameter (used to model convection in the Sun), is to adjust the SSM to "fit" the observed Sun. A calibrated solar model. A star is considered to be at zero age (protostellar) when it is assumed to have a homogeneous composition and to be just beginning to derive most of its luminosity from nuclear reactions (so neglecting the period of contraction from a cloud of gas and dust). To obtain the SSM, a one solar mass (M☉) stellar model at zero age is evolved numerically to the age of the Sun. The abundance of elements in the zero age solar model is estimated from primordial meteorites. 
Along with this abundance information, a reasonable guess at the zero-age luminosity (such as the present-day Sun's luminosity) is then converted by an iterative procedure into the correct value for the model, and the temperature, pressure and density throughout the model are calculated by solving the equations of stellar structure numerically, assuming the star to be in a steady state. The model is then evolved numerically up to the age of the Sun. Any discrepancy from the measured values of the Sun's luminosity, surface abundances, etc. can then be used to refine the model. For example, since the Sun formed, some of the helium and heavy elements have settled out of the photosphere by diffusion. As a result, the Solar photosphere now contains about 87% as much helium and heavy elements as the protostellar photosphere had; the protostellar Solar photosphere was 71.1% hydrogen, 27.4% helium, and 1.5% metals. A measure of heavy-element settling by diffusion is required for a more accurate model. Numerical modelling of the stellar structure equations. The differential equations of stellar structure, such as the equation of hydrostatic equilibrium, are integrated numerically. The differential equations are approximated by difference equations. The star is imagined to be made up of spherically symmetric shells and the numerical integration carried out in finite steps making use of the equations of state, giving relationships for the pressure, the opacity and the energy generation rate in terms of the density, temperature and composition. Evolution of the Sun. Nuclear reactions in the core of the Sun change its composition, by converting hydrogen nuclei into helium nuclei by the proton–proton chain and (to a lesser extent in the Sun than in more massive stars) the CNO cycle. This increases the mean molecular weight in the core of the Sun, which should lead to a decrease in pressure. This does not happen; instead, the core contracts. 
By the virial theorem half of the gravitational potential energy released by this contraction goes towards raising the temperature of the core, and the other half is radiated away. This increase in temperature also increases the pressure and restores the balance of hydrostatic equilibrium. The luminosity of the Sun is increased by the temperature rise, increasing the rate of nuclear reactions. The outer layers expand to compensate for the increased temperature and pressure gradients, so the radius also increases. No star is completely static, but stars stay on the main sequence (burning hydrogen in the core) for long periods. In the case of the Sun, it has been on the main sequence for roughly 4.6 billion years, and will become a red giant in roughly 6.5 billion years for a total main sequence lifetime of roughly 11 billion (10^10) years. Thus the assumption of steady state is a very good approximation. For simplicity, the stellar structure equations are written without explicit time dependence, with the exception of the luminosity gradient equation: formula_0 Here "L" is the luminosity, "ε" is the nuclear energy generation rate per unit mass and "εν" is the luminosity due to neutrino emission (see below for the other quantities). The slow evolution of the Sun on the main sequence is then determined by the change in the nuclear species (principally hydrogen being consumed and helium being produced). The rates of the various nuclear reactions are estimated from particle physics experiments at high energies, which are extrapolated back to the lower energies of stellar interiors (the Sun burns hydrogen rather slowly). Historically, errors in the nuclear reaction rates have been one of the biggest sources of error in stellar modelling. Computers are employed to calculate the varying abundances (usually by mass fraction) of the nuclear species. 
A particular species will have a rate of production and a rate of destruction, so both are needed to calculate its abundance over time, at varying conditions of temperature and density. Since there are many nuclear species, a computerised reaction network is needed to keep track of how all the abundances vary together. According to the Vogt–Russell theorem, the mass and the composition structure throughout a star uniquely determine its radius, luminosity, and internal structure, as well as its subsequent evolution (though this "theorem" was only intended to apply to the slow, stable phases of stellar evolution and certainly does not apply to the transitions between stages and rapid evolutionary stages). The information about the varying abundances of nuclear species over time, along with the equations of state, is sufficient for a numerical solution by taking sufficiently small time increments and using iteration to find the unique internal structure of the star at each stage. Purpose of the standard solar model. The SSM serves two purposes: it provides estimates for the helium abundance and the mixing-length parameter by forcing the stellar model to have the correct luminosity and radius at the Sun's age, and it provides a way to evaluate more complex models with additional physics, such as rotation, magnetic fields and diffusion, or improvements to the treatment of convection, such as the modelling of turbulence and convective overshooting. Like the Standard Model of particle physics and the standard cosmology model, the SSM changes over time in response to relevant new theoretical or experimental physics discoveries. Energy transport in the Sun. The Sun has a radiative core and a convective outer envelope. In the core, the luminosity due to nuclear reactions is transmitted to outer layers principally by radiation. However, in the outer layers the temperature gradient is so great that radiation cannot transport enough energy. As a result, thermal convection occurs as thermal columns carry hot material to the surface (photosphere) of the Sun. Once the material cools off at the surface, it plunges back downward to the base of the convection zone, to receive more heat from the top of the radiative zone. 
In a solar model, as described in stellar structure, one considers the density formula_1, temperature "T"("r"), total pressure (matter plus radiation) "P"("r"), luminosity "l"("r") and energy generation rate per unit mass "ε"("r") in a spherical shell of a thickness dr at a distance "r" from the center of the star. Radiative transport of energy is described by the radiative temperature gradient equation: formula_2 where "κ" is the opacity of the matter, "σ" is the Stefan–Boltzmann constant, and the Boltzmann constant is set to one. Convection is described using mixing length theory and the corresponding temperature gradient equation (for adiabatic convection) is: formula_3 where "γ" = "c"p / "c"v is the adiabatic index, the ratio of specific heats in the gas. (For a fully ionized ideal gas, "γ" = 5/3.) Near the base of the Sun's convection zone, the convection is adiabatic, but near the surface of the Sun, convection is not adiabatic. Simulations of near-surface convection. A more realistic description of the uppermost part of the convection zone is possible through detailed three-dimensional and time-dependent hydrodynamical simulations, taking into account radiative transfer in the atmosphere. Such simulations successfully reproduce the observed surface structure of solar granulation, as well as detailed profiles of lines in the solar radiative spectrum, without the use of parametrized models of turbulence. The simulations only cover a very small fraction of the solar radius, and are evidently far too time-consuming to be included in general solar modeling. Extrapolation of an averaged simulation through the adiabatic part of the convection zone by means of a model based on the mixing-length description, demonstrated that the adiabat predicted by the simulation was essentially consistent with the depth of the solar convection zone as determined from helioseismology. 
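The two temperature-gradient expressions above determine which transport mechanism operates locally: convection sets in where radiation alone would require a steeper (more negative) gradient than the adiabat (the Schwarzschild criterion). A minimal sketch of this comparison, with illustrative rather than physical values (all names and numbers here are assumptions, not from the source):

```python
import math

def radiative_gradient(kappa, rho, l, r, T, sigma):
    """dT/dr required to carry luminosity l by radiation alone."""
    return -3.0 * kappa * rho * l / (16.0 * math.pi * r**2 * sigma * T**3)

def adiabatic_gradient(T, P, dP_dr, gamma=5.0 / 3.0):
    """dT/dr for adiabatic convection; gamma = 5/3 for a fully ionized ideal gas."""
    return (1.0 - 1.0 / gamma) * (T / P) * dP_dr

def is_convective(rad_grad, ad_grad):
    # Both gradients are negative (temperature falls outward); convection
    # occurs where the radiative gradient is the more negative of the two.
    return rad_grad < ad_grad
```

In a full stellar-structure code this test is evaluated shell by shell; in the Sun it fails in the deep interior (radiative core) and succeeds in the outer envelope (convection zone).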
An extension of mixing-length theory, including effects of turbulent pressure and kinetic energy, based on numerical simulations of near-surface convection, has been developed. This section is adapted from the Christensen-Dalsgaard review of helioseismology, Chapter IV. Equations of state. The numerical solution of the differential equations of stellar structure requires equations of state for the pressure, opacity and energy generation rate, as described in stellar structure, which relate these variables to the density, temperature and composition. Helioseismology. Helioseismology is the study of the wave oscillations in the Sun. Changes in the propagation of these waves through the Sun reveal inner structures and allow astrophysicists to develop extremely detailed profiles of the interior conditions of the Sun. In particular, the location of the convection zone in the outer layers of the Sun can be measured, and information about the core of the Sun provides a method, using the SSM, to calculate the age of the Sun, independently of the method of inferring the age of the Sun from that of the oldest meteorites. This is another example of how the SSM can be refined. Neutrino production. Hydrogen is fused into helium through several different interactions in the Sun. The vast majority of neutrinos are produced through the pp chain, a process in which four protons are combined to produce two protons, two neutrons, two positrons, and two electron neutrinos. Neutrinos are also produced by the CNO cycle, but that process is considerably less important in the Sun than in other stars. Most of the neutrinos produced in the Sun come from the first step of the pp chain but their energy is so low (&lt;0.425 MeV) they are very difficult to detect. A rare side branch of the pp chain produces the "boron-8" neutrinos with a maximum energy of roughly 15 MeV, and these are the easiest neutrinos to detect. 
A very rare interaction in the pp chain produces the "hep" neutrinos, the highest energy neutrinos predicted to be produced by the Sun. They are predicted to have a maximum energy of about 18 MeV. All of the interactions described above produce neutrinos with a spectrum of energies. The electron capture of 7Be produces neutrinos at either roughly 0.862 MeV (~90%) or 0.384 MeV (~10%). Neutrino detection. The weakness of the neutrino's interactions with other particles means that most neutrinos produced in the core of the Sun can pass all the way through the Sun without being absorbed. It is possible, therefore, to observe the core of the Sun directly by detecting these neutrinos. History. The first experiment to successfully detect cosmic neutrinos was Ray Davis's chlorine experiment, in which neutrinos were detected by observing the conversion of chlorine nuclei to radioactive argon in a large tank of perchloroethylene. This was a reaction channel expected for neutrinos, but since only the number of argon decays was counted, it did not give any directional information, such as where the neutrinos came from. The experiment found about 1/3 as many neutrinos as were predicted by the standard solar model of the time, and this problem became known as the solar neutrino problem. While it is now known that the chlorine experiment detected neutrinos, some physicists at the time were suspicious of the experiment, mainly because they did not trust such radiochemical techniques. Unambiguous detection of solar neutrinos was provided by the Kamiokande-II experiment, a water Cherenkov detector with a low enough energy threshold to detect neutrinos through neutrino-electron elastic scattering. In the elastic scattering interaction the electrons coming out of the point of reaction strongly point in the direction that the neutrino was travelling, away from the Sun. 
This ability to "point back" at the Sun was the first conclusive evidence that the Sun is powered by nuclear interactions in the core. While the neutrinos observed in Kamiokande-II were clearly from the Sun, the rate of neutrino interactions was again suppressed compared to theory at the time. Even worse, the Kamiokande-II experiment measured about 1/2 the predicted flux, rather than the chlorine experiment's 1/3. The solution to the solar neutrino problem was finally experimentally determined by the Sudbury Neutrino Observatory (SNO). The radiochemical experiments were only sensitive to electron neutrinos, and the signal in the water Cherenkov experiments was dominated by the electron neutrino signal. The SNO experiment, by contrast, had sensitivity to all three neutrino flavours. By simultaneously measuring the electron neutrino and total neutrino fluxes the experiment demonstrated that the suppression was due to the MSW effect, the conversion of electron neutrinos from their pure flavour state into the second neutrino mass eigenstate as they passed through a resonance due to the changing density of the Sun. The resonance is energy dependent, and "turns on" near 2 MeV. The water Cherenkov detectors only detect neutrinos above about 5 MeV, while the radiochemical experiments were sensitive to lower energy (0.8 MeV for chlorine, 0.2 MeV for gallium), and this turned out to be the source of the difference in the observed neutrino rates at the two types of experiments. Proton–proton chain. All neutrinos from the proton–proton chain reaction (pp neutrinos) have been detected except the hep neutrinos (see below). Three techniques have been adopted: The radiochemical technique, used by Homestake, GALLEX, GNO and SAGE, made it possible to measure the neutrino flux above a minimum energy. The detector SNO used scattering on deuterium, which made it possible to measure the energy of the events, thereby identifying the individual components of the predicted SSM neutrino emission. 
Finally, Kamiokande, Super-Kamiokande, SNO, Borexino and KamLAND used elastic scattering on electrons, which allows the measurement of the neutrino energy. Boron-8 neutrinos have been seen by Kamiokande, Super-Kamiokande, SNO, Borexino and KamLAND. Beryllium-7, pep, and pp neutrinos have been seen only by Borexino to date. HEP neutrinos. The highest energy neutrinos have not yet been observed due to their small flux compared to the boron-8 neutrinos, so thus far only limits have been placed on the flux. No experiment yet has had enough sensitivity to observe the flux predicted by the SSM. CNO cycle. Neutrinos from the CNO cycle of solar energy generation – i.e., the CNO-neutrinos – are also expected to provide observable events below 1 MeV. They have not yet been observed due to experimental noise (background). Ultra-pure scintillator detectors have the potential to probe the flux predicted by the SSM. This detection may already be possible in Borexino; the next scientific opportunities will be SNO+ and, in the longer term, LENA and JUNO, three detectors that will be larger but will use the same principles as Borexino. The Borexino Collaboration has confirmed that the CNO cycle accounts for 1% of the energy generation within the Sun's core. Future experiments. While radiochemical experiments have in some sense observed the pp and Be7 neutrinos, they have measured only integral fluxes. The "holy grail" of solar neutrino experiments would detect the Be7 neutrinos with a detector that is sensitive to the individual neutrino energies. This experiment would test the MSW hypothesis by searching for the turn-on of the MSW effect. Some exotic models are still capable of explaining the solar neutrino deficit, so the observation of the MSW turn-on would, in effect, finally solve the solar neutrino problem. Core temperature prediction. The flux of boron-8 neutrinos is highly sensitive to the temperature of the core of the Sun, formula_4. 
For this reason, a precise measurement of the boron-8 neutrino flux can be used in the framework of the standard solar model as a measurement of the temperature of the core of the Sun. This estimate was performed by Fiorentini and Ricci after the first SNO results were published, and they obtained a temperature of formula_5 from a determined neutrino flux of 5.2×10⁶/cm²·s. Lithium depletion at the solar surface. Stellar models of the Sun's evolution predict the solar surface chemical abundance fairly well except for lithium (Li). The surface abundance of Li on the Sun is 140 times less than the protosolar value (i.e. the primordial abundance at the Sun's birth), yet the temperature at the base of the surface convective zone is not hot enough to burn – and hence deplete – Li. This is known as the solar lithium problem. A large range of Li abundances is observed in solar-type stars of the same age, mass, and metallicity as the Sun. Observations of an unbiased sample of stars of this type with or without observed planets (exoplanets) showed that the known planet-bearing stars have less than one per cent of the primordial Li abundance, while half of the remainder had ten times as much Li. It is hypothesised that the presence of planets may increase the amount of mixing and deepen the convective zone to such an extent that the Li can be burned. A possible mechanism for this is the idea that the planets affect the angular momentum evolution of the star, thus changing the rotation of the star relative to similar stars without planets; in the case of the Sun, slowing its rotation. More research is needed to discover where and when the fault in the modelling lies. Given the precision of helioseismic probes of the interior of the modern-day Sun, it is likely that the modelling of the protostellar Sun needs to be adjusted. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{dL}{dr} = 4 \\pi r^2 \\rho \\left( \\varepsilon - \\varepsilon_\\nu \\right)" }, { "math_id": 1, "text": "\\rho(r)" }, { "math_id": 2, "text": " {dT \\over dr} = - {3 \\kappa \\rho l \\over 16 \\pi r^2 \\sigma T^3}," }, { "math_id": 3, "text": " {dT \\over dr} = \\left(1 - {1 \\over \\gamma} \\right) {T \\over P } { dP \\over dr}," }, { "math_id": 4, "text": "\\phi(\\ce{^8B}) \\propto T^{25}" }, { "math_id": 5, "text": " T_\\text{sun} = 15.7 \\times 10^6 \\; \\text{K} \\; \\pm 1\\% " } ]
https://en.wikipedia.org/wiki?curid=5894781
589548
Prothrombin time
Blood test that evaluates clotting The prothrombin time (PT) – along with its derived measures of prothrombin ratio (PR) and international normalized ratio (INR) – is an assay for evaluating the extrinsic pathway and common pathway of coagulation. This blood test is also called "protime INR" and "PT/INR". They are used to determine the clotting tendency of blood, for example when setting warfarin dosage or assessing liver damage and vitamin K status. PT measures the following coagulation factors: I (fibrinogen), II (prothrombin), V (proaccelerin), VII (proconvertin), and X (Stuart–Prower factor). PT is often used in conjunction with the activated partial thromboplastin time (aPTT), which measures the "intrinsic" pathway and common pathway of coagulation. Laboratory measurement. The reference range for prothrombin time depends on the analytical method used, but is usually around 12–13 seconds (results should always be interpreted using the reference range from the laboratory that performed the test), and the INR in the absence of anticoagulation therapy is 0.8–1.2. The target range for INR in anticoagulant use (e.g. warfarin) is 2 to 3. In some cases, if more intense anticoagulation is thought to be required, the target range may be as high as 2.5–3.5, depending on the indication for anticoagulation. Methodology. Prothrombin time is typically analyzed by a laboratory technologist on an automated instrument at 37 °C (as a nominal approximation of normal human body temperature). Prothrombin time ratio. The prothrombin time ratio is the ratio of a subject's measured prothrombin time (in seconds) to the normal laboratory reference PT. The PT ratio varies depending on the specific reagents used, and has been replaced by the INR. Elevated INR may be useful as a rapid and inexpensive diagnostic of infection in people with COVID-19. International normalized ratio. 
The result (in seconds) for a prothrombin time performed on a normal individual will vary according to the type of analytical system employed. This is due to the variations between different types and batches of manufacturer's tissue factor used in the reagent to perform the test. The INR was devised to standardize the results. Each manufacturer assigns an ISI value (International Sensitivity Index) for any tissue factor they manufacture. The ISI value indicates how a particular batch of tissue factor compares to an international reference tissue factor. The ISI is usually between 0.94 and 1.4 for more sensitive and 2.0–3.0 for less sensitive thromboplastins. The INR is the ratio of a patient's prothrombin time to a normal (control) sample, raised to the power of the ISI value for the analytical system being used. formula_0 PTnormal is established as the geometric mean of the prothrombin times (PT) of a reference sample group. Interpretation. The prothrombin time is the time it takes plasma to clot after addition of tissue factor (obtained from animals such as rabbits, or recombinant tissue factor, or from brains of autopsy patients). This measures the quality of the "extrinsic pathway" (as well as the "common pathway") of coagulation. The speed of the "extrinsic pathway" is greatly affected by levels of functional factor VII in the body. Factor VII has a short half-life and the carboxylation of its glutamate residues requires vitamin K. The prothrombin time can be prolonged as a result of deficiencies in vitamin K, warfarin therapy, malabsorption, or lack of intestinal colonization by bacteria (such as in newborns). In addition, poor factor VII synthesis (due to liver disease) or increased consumption (in disseminated intravascular coagulation) may prolong the PT. The INR is typically used to monitor patients on warfarin or related oral anticoagulant therapy. 
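The INR calculation can be sketched directly from the formula above. This is a minimal illustration with made-up prothrombin times and ISI, not clinical data:

```python
import math

def inr(pt_test, reference_pts, isi):
    """INR = (PT_test / PT_normal)**ISI, where PT_normal is the geometric
    mean of the prothrombin times (in seconds) of a reference sample group."""
    pt_normal = math.exp(sum(math.log(pt) for pt in reference_pts) / len(reference_pts))
    return (pt_test / pt_normal) ** isi

# hypothetical values: a 26 s PT against a reference group near 12.5 s, ISI 1.0
print(round(inr(26.0, [12.0, 12.5, 13.0], 1.0), 2))  # ≈ 2.08
```

Note how the ISI exponent amplifies any deviation of the PT ratio from 1, which is why each thromboplastin batch must carry its own calibrated ISI.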
The normal range for a healthy person not using warfarin is 0.8–1.2, and for people on warfarin therapy an INR of 2.0–3.0 is usually targeted, although the target INR may be higher in particular situations, such as for those with a mechanical heart valve. If the INR is outside the target range, a high INR indicates a higher risk of bleeding, while a low INR suggests a higher risk of developing a clot. In patients on a vitamin K antagonist such as warfarin with supratherapeutic INR but INR less than 10 and no bleeding, it is enough to lower the dose or omit a dose, monitor the INR and resume the vitamin K antagonist at an adjusted lower dose when the target INR is reached. For people who need rapid reversal of the vitamin K antagonist – such as due to serious bleeding – or who need emergency surgery, the effects of warfarin can be reversed with vitamin K, prothrombin complex concentrate (PCC), or fresh frozen plasma (FFP). Factors determining accuracy. Lupus anticoagulant, a circulating inhibitor predisposing to thrombosis, may skew PT results, depending on the assay used. Variations between various thromboplastin preparations have in the past led to decreased accuracy of INR readings, and a 2005 study suggested that despite international calibration efforts (by INR) there were still statistically significant differences between various kits, casting doubt on the long-term tenability of PT/INR as a measure for anticoagulant therapy. Indeed, a new prothrombin time variant, the Fiix prothrombin time, intended solely for monitoring warfarin and other vitamin K antagonists, has been invented and recently become available as a manufactured test. The Fiix prothrombin time is only affected by reductions in factor II and/or factor X, and this stabilizes the anticoagulant effect and appears to improve clinical outcome according to an investigator-initiated randomized blinded clinical trial, the Fiix trial. 
In this trial, thromboembolism was reduced by 50% during long-term treatment, while bleeding was not increased. Statistics. An estimated 800 million PT/INR assays are performed annually worldwide. Near-patient testing. In addition to the laboratory method outlined above, near-patient testing (NPT) or home INR monitoring is becoming increasingly common in some countries. In the United Kingdom, for example, near-patient testing is used both by patients at home and by some anticoagulation clinics (often hospital-based) as a fast and convenient alternative to the lab method. After a period of doubt about the accuracy of NPT results, a new generation of machines and reagents seems to be gaining acceptance for its ability to deliver results close in accuracy to those of the lab. In a typical NPT setup, a small table-top device is used. A drop of capillary blood is obtained with an automated finger-prick, which is almost painless. This drop is placed on a disposable test strip with which the machine has been prepared. The resulting INR comes up on the display a few seconds later. A similar form of testing is used by people with diabetes for monitoring blood sugar levels, which is easily taught and routinely practiced. Local policy determines whether the patient or a coagulation specialist (pharmacist, nurse, general practitioner or hospital doctor) interprets the result and determines the dose of medication. In Germany and Austria, patients may adjust the medication dose themselves, while in the UK and the US this remains in the hands of a health care professional. A significant advantage of home testing is the evidence that patient self-testing with medical support and patient self-management (where patients adjust their own anticoagulant dose) improves anticoagulant control. 
A meta-analysis which reviewed 14 trials showed that home testing led to a reduced incidence of complications (bleeding and thrombosis) and improved the time in the therapeutic range, which is an indirect measure of anticoagulant control. In 2022, a smartphone system was introduced by researchers to perform PT/INR testing in an inexpensive and accessible manner. It uses the vibration motor and camera ubiquitous on smartphones to track micro-mechanical movements of a copper particle and compute PT/INR values. Other advantages of the NPT approach are that it is fast and convenient, usually less painful, and offers, in home use, the ability for patients to measure their own INRs when required. Among its problems are that quite a steady hand is needed to deliver the blood to the exact spot, that some patients find the finger-pricking difficult, and that the cost of the test strips must also be taken into account. In the UK these are available on prescription, so that elderly and unwaged people will not pay for them and others will pay only a standard prescription charge, which at the moment represents only about 20% of the retail price of the strips. In the US, NPT in the home is currently reimbursed by Medicare for patients with mechanical heart valves, while private insurers may cover other indications. Medicare is now covering home testing for patients with chronic atrial fibrillation. Home testing requires a doctor's prescription and that the meter and supplies are obtained from a Medicare-approved Independent Diagnostic Testing Facility (IDTF). There is some evidence to suggest that NPT may be less accurate for certain patients, for example those who have the lupus anticoagulant. Guidelines. International guidelines were published in 2005 to govern home monitoring of oral anticoagulation by the International Self-Monitoring Association for Oral Anticoagulation. 
The international guidelines study stated, "The consensus agrees that patient self-testing and patient self-management are effective methods of monitoring oral anticoagulation therapy, providing outcomes at least as good as, and possibly better than, those achieved with an anticoagulation clinic. All patients must be appropriately selected and trained. Currently, available self-testing/self-management devices give INR results which are comparable with those obtained in laboratory testing." Medicare coverage for home testing of INR has been expanded in order to allow more people access to home testing of INR in the US. The release on 19 March 2008 said, "[t]he Centers for Medicare &amp; Medicaid Services (CMS) expanded Medicare coverage for home blood testing of prothrombin time (PT) International Normalized Ratio (INR) to include beneficiaries who are using the drug warfarin, an anticoagulant (blood thinner) medication, for chronic atrial fibrillation or venous thromboembolism." In addition, "those Medicare beneficiaries and their physicians managing conditions related to chronic atrial fibrillation or venous thromboembolism will benefit greatly through the use of the home test." History. The prothrombin time was developed by Armand J. Quick and colleagues in 1935, and a second method was published by Paul Owren, also called the "p and p" or "prothrombin and proconvertin" method. It aided in the identification of the anticoagulants dicumarol and warfarin, and was used subsequently as a measure of activity for warfarin when used therapeutically. The INR was invented in the early 1980s by Tom Kirkwood working at the UK National Institute for Biological Standards and Control (and subsequently at the UK National Institute for Medical Research) to provide a consistent way of expressing the prothrombin time ratio, which had previously suffered from a large degree of variation between centres using different reagents. 
The INR was coupled to Dr Kirkwood's simultaneous invention of the International Sensitivity Index (ISI), which provided the means to calibrate different batches of thromboplastins to an international standard. The INR became widely accepted worldwide, especially after endorsement by the World Health Organization. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\text{INR}= \\left(\\frac{\\text{PT}_\\text{test}}{\\text{PT}_\\text{normal}}\\right)^\\text{ISI}\n" } ]
https://en.wikipedia.org/wiki?curid=589548
5895822
Sensitivity index
The sensitivity index or discriminability index or detectability index is a dimensionless statistic used in signal detection theory. A higher index indicates that the signal can be more readily detected. Definition. The discriminability index is the separation between the means of two distributions (typically the signal and the noise distributions), in units of the standard deviation. Equal variances/covariances. For two univariate distributions formula_2 and formula_3 with the same standard deviation, it is denoted by formula_4 ('dee-prime'): formula_5. In higher dimensions, i.e. with two multivariate distributions with the same variance-covariance matrix formula_6 (whose symmetric square-root, the standard deviation matrix, is formula_7), this generalizes to the Mahalanobis distance between the two distributions: formula_8, where formula_9 is the 1d slice of the sd along the unit vector formula_10 through the means, i.e. the formula_4 equals the formula_4 computed along the 1d slice through the means. For two bivariate distributions with equal variance-covariance, this is given by: formula_11, where formula_12 is the correlation coefficient, and here formula_13 and formula_14, i.e. including the signs of the mean differences instead of their absolute values. formula_4 is also estimated as formula_15. Unequal variances/covariances. When the two distributions have different standard deviations (or in general dimensions, different covariance matrices), there exist several contending indices, all of which reduce to formula_4 for equal variance/covariance. Bayes discriminability index. This is the maximum (Bayes-optimal) discriminability index for two distributions, based on the amount of their overlap, i.e. the optimal (Bayes) error of classification formula_0 by an ideal observer, or its complement, the optimal accuracy formula_16: formula_17, where formula_18 is the inverse cumulative distribution function of the standard normal. 
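For univariate normal distributions these indices are easy to evaluate numerically. A minimal sketch using only the Python standard library (an illustration, not part of the article): it computes the equal-variance d′, the RMS-sd index, and the Bayes index via the optimal accuracy, which reduces to d′ when the variances (and priors) are equal because the optimal criterion is then the midpoint between the means.

```python
from math import sqrt
from statistics import NormalDist

def d_prime(mu_a, mu_b, sigma):
    """Equal-variance discriminability: separation of the means in sd units."""
    return abs(mu_a - mu_b) / sigma

def d_a(mu_a, sigma_a, mu_b, sigma_b):
    """RMS-sd index: divide by the rms of the two standard deviations."""
    return abs(mu_a - mu_b) / sqrt((sigma_a**2 + sigma_b**2) / 2)

def d_b_equal_variance(mu_a, mu_b, sigma):
    """Bayes index d'_b = 2 Z(best accuracy). With equal variances and equal
    priors the optimal criterion is the midpoint between the means, so the
    best accuracy is Phi(d'/2) and d'_b reduces to d'."""
    accuracy = NormalDist().cdf(abs(mu_a - mu_b) / (2 * sigma))
    return 2 * NormalDist().inv_cdf(accuracy)

print(d_prime(0.0, 2.0, 1.0))             # 2.0
print(d_b_equal_variance(0.0, 2.0, 1.0))  # ≈ 2.0: the indices agree here
print(d_a(0.0, 1.0, 2.0, 3.0))            # ≈ 0.894 for unequal sds
```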
The Bayes discriminability between univariate or multivariate normal distributions can be numerically computed, and may also be used as an approximation when the distributions are close to normal. formula_1 is a positive-definite statistical distance measure that is free of assumptions about the distributions, like the Kullback-Leibler divergence formula_19. formula_20 is asymmetric, whereas formula_21 is symmetric for the two distributions. However, formula_1 does not satisfy the triangle inequality, so it is not a full metric. In particular, for a yes/no task between two univariate normal distributions with means formula_22 and variances formula_23, the Bayes-optimal classification accuracies are: formula_24, where formula_25 denotes the non-central chi-squared distribution, formula_26, and formula_27. The Bayes discriminability is then formula_28 formula_1 can also be computed from the ROC curve of a yes/no task between two univariate normal distributions with a single shifting criterion. It can also be computed from the ROC curve of any two distributions (in any number of variables) with a shifting likelihood-ratio, by locating the point on the ROC curve that is farthest from the diagonal. For a two-interval task between these distributions, the optimal accuracy is formula_29 (formula_30 denotes the generalized chi-squared distribution), where formula_31. The Bayes discriminability is formula_32. RMS sd discriminability index. A common approximate (i.e. sub-optimal) discriminability index that has a closed-form is to take the average of the variances, i.e. the rms of the two standard deviations: formula_33 (also denoted by formula_34). It is formula_35 times the formula_36-score of the area under the receiver operating characteristic curve (AUC) of a single-criterion observer. This index is extended to general dimensions as the Mahalanobis distance using the pooled covariance, i.e. with formula_37 as the common sd matrix. Average sd discriminability index. 
Another index is formula_38, extended to general dimensions using formula_39 as the common sd matrix. Comparison of the indices. It has been shown that for two univariate normal distributions, formula_40, and for multivariate normal distributions, formula_41 still. Thus, formula_42 and formula_43 underestimate the maximum discriminability formula_1 of univariate normal distributions. formula_42 can underestimate formula_1 by a maximum of approximately 30%. At the limit of high discriminability for univariate normal distributions, formula_43 converges to formula_1. These results often hold true in higher dimensions, but not always. Simpson and Fitter promoted formula_42 as the best index, particularly for two-interval tasks, but Das and Geisler have shown that formula_1 is the optimal discriminability in all cases, and formula_43 is often a better closed-form approximation than formula_42, even for two-interval tasks. The approximate index formula_44, which uses the geometric mean of the sd's, is less than formula_1 at small discriminability, but greater at large discriminability. Contribution to discriminability by each dimension. In general, the contribution to the total discriminability by each dimension or feature may be measured using the amount by which the discriminability drops when that dimension is removed. If the total Bayes discriminability is formula_4 and the Bayes discriminability with dimension formula_45 removed is formula_46, we can define the contribution of dimension formula_45 as formula_47. This is the same as the individual discriminability of dimension formula_45 when the covariance matrices are equal and diagonal, but in the other cases, this measure more accurately reflects the contribution of a dimension than its individual discriminability. Scaling the discriminability of two distributions. We may sometimes want to scale the discriminability of two data distributions by moving them closer or farther apart. 
One such case is when we are modeling a detection or classification task, and the model performance exceeds that of the subject or observed data. In that case, we can move the model variable distributions closer together so that the model matches the observed performance, while also predicting which specific data points should start overlapping and be misclassified. There are several ways of doing this. One is to compute the mean vector and covariance matrix of the two distributions, then effect a linear transformation to interpolate the mean and sd matrix (square root of the covariance matrix) of one of the distributions towards the other. Another way is to compute the decision variables of the data points (the log likelihood ratio that a point belongs to one distribution vs the other) under a multinormal model, and then move these decision variables closer together or farther apart. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e_b" }, { "math_id": 1, "text": "d'_b" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "b" }, { "math_id": 4, "text": "d'" }, { "math_id": 5, "text": "d' = \\frac{\\left\\vert \\mu_a - \\mu_b \\right\\vert}{\\sigma}" }, { "math_id": 6, "text": "\\mathbf{\\Sigma}" }, { "math_id": 7, "text": "\\mathbf{S}" }, { "math_id": 8, "text": "d'=\\sqrt{(\\boldsymbol{\\mu}_a-\\boldsymbol{\\mu}_b)'\\mathbf{\\Sigma}^{-1}(\\boldsymbol{\\mu}_a-\\boldsymbol{\\mu}_b)} = \\lVert \\mathbf{S}^{-1}(\\boldsymbol{\\mu}_a-\\boldsymbol{\\mu}_b) \\rVert = \\lVert \\boldsymbol{\\mu}_a-\\boldsymbol{\\mu}_b \\rVert /\\sigma_{\\boldsymbol{\\mu}}" }, { "math_id": 9, "text": "\\sigma_{\\boldsymbol{\\mu}} = 1/ \\lVert\\mathbf{S}^{-1}\\boldsymbol{\\mu}\\rVert" }, { "math_id": 10, "text": "\\boldsymbol{\\mu}" }, { "math_id": 11, "text": "{d'}^2 =\\frac{1}{1-\\rho^2} \\left({d'}^2_x+{d'}^2_y-2\\rho {d'}_x {d'}_y \\right)" }, { "math_id": 12, "text": "\\rho" }, { "math_id": 13, "text": "d'_x=\\frac{{\\mu_b}_x-{\\mu_a}_x}{\\sigma_x}" }, { "math_id": 14, "text": "d'_y=\\frac{{\\mu_b}_y-{\\mu_a}_y}{\\sigma_y}" }, { "math_id": 15, "text": "Z(\\text{hit rate})-Z(\\text{false alarm rate})" }, { "math_id": 16, "text": "a_b" }, { "math_id": 17, "text": "d'_b=-2Z\\left(\\text{Bayes error rate } e_b\\right)=2Z\\left(\\text{best accuracy rate } a_b\\right)" }, { "math_id": 18, "text": "Z" }, { "math_id": 19, "text": "D_\\text{KL}" }, { "math_id": 20, "text": "D_\\text{KL}(a,b)" }, { "math_id": 21, "text": "d'_b(a,b)" }, { "math_id": 22, "text": "\\mu_a,\\mu_b" }, { "math_id": 23, "text": "v_a>v_b" }, { "math_id": 24, "text": " p(A|a)=p({\\chi'}^2_{1,v_a \\lambda} > v_b c), \\; \\; p(B|b)=p({\\chi'}^2_{1,v_b \\lambda} < v_a c)" }, { "math_id": 25, "text": "\\chi'^2" }, { "math_id": 26, "text": "\\lambda=\\left(\\frac{\\mu_a-\\mu_b}{v_a-v_b}\\right)^2" }, { "math_id": 27, "text": "c=\\lambda+\\frac{\\ln v_a -\\ln v_b}{v_a-v_b}" }, { "math_id": 28, "text": 
"d'_b=2Z\\left(\\frac{p\\left(A|a\\right)+p\\left(B|b\\right)}{2} \\right)." }, { "math_id": 29, "text": "a_b=p \\left( \\tilde{\\chi}^2_{\\boldsymbol{w}, \\boldsymbol{k}, \\boldsymbol{\\lambda},0,0}>0 \\right)" }, { "math_id": 30, "text": "\\tilde{\\chi}^2" }, { "math_id": 31, "text": " \\boldsymbol{w}=\\begin{bmatrix} \\sigma_s^2 & -\\sigma_n^2 \\end{bmatrix}, \\; \\boldsymbol{k}=\\begin{bmatrix} 1 & 1 \\end{bmatrix}, \\; \\boldsymbol{\\lambda}=\\frac{\\mu_s-\\mu_n}{\\sigma_s^2-\\sigma_n^2} \\begin{bmatrix} \\sigma_s^2 & \\sigma_n^2 \\end{bmatrix}" }, { "math_id": 32, "text": "d'_b=2Z\\left(a_b\\right)" }, { "math_id": 33, "text": "d'_a=\\left\\vert \\mu_a -\\mu_b \\right\\vert/\\sigma_\\text{rms}" }, { "math_id": 34, "text": "d_a" }, { "math_id": 35, "text": "\\sqrt{2}" }, { "math_id": 36, "text": "z" }, { "math_id": 37, "text": "\\mathbf{S}_\\text{rms}=\\left[\\left(\\mathbf{\\Sigma}_a+\\mathbf{\\Sigma}_b\\right)/2 \\right]^\\frac{1}{2}" }, { "math_id": 38, "text": "d'_e=\\left\\vert \\mu_a -\\mu_b \\right\\vert/\\sigma_\\text{avg}" }, { "math_id": 39, "text": "\\mathbf{S}_\\text{avg}=\\left(\\mathbf{S}_a+\\mathbf{S}_b\\right)/2" }, { "math_id": 40, "text": " d'_a \\leq d'_e \\leq d'_b" }, { "math_id": 41, "text": " d'_a \\leq d'_e" }, { "math_id": 42, "text": "d'_a" }, { "math_id": 43, "text": "d'_e" }, { "math_id": 44, "text": "d'_{gm}" }, { "math_id": 45, "text": "i" }, { "math_id": 46, "text": "d'_{-i}" }, { "math_id": 47, "text": "\\sqrt{d'^2-{d'_{-i}}^2}" } ]
https://en.wikipedia.org/wiki?curid=5895822
58958793
Beraha constants
Mathematical constants The Beraha constants are a series of mathematical constants; the formula_0 Beraha constant is given by formula_1 Notable examples of Beraha constants include formula_2, which is formula_3, where formula_4 is the golden ratio; formula_5, the silver constant (also known as the silver root); and formula_6. The following table summarizes the first ten Beraha constants. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
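The defining formula can be evaluated directly; a minimal sketch (illustration only) that recovers the examples named above:

```python
from math import cos, pi

def beraha(n):
    """B(n) = 2 + 2*cos(2*pi/n)."""
    return 2 + 2 * cos(2 * pi / n)

golden = (1 + 5 ** 0.5) / 2
print(round(beraha(5), 6), round(golden + 1, 6))   # B(5) equals the golden ratio + 1
print(round(beraha(10), 6), round(golden + 2, 6))  # B(10) equals the golden ratio + 2
print(round(beraha(7), 6))                         # B(7), the silver constant
```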
[ { "math_id": 0, "text": "n\\text{th}" }, { "math_id": 1, "text": "B (n) = 2 + 2 \\cos \\left ( \\frac{2\\pi}{n} \\right )." }, { "math_id": 2, "text": "B (5)" }, { "math_id": 3, "text": "\\varphi + 1" }, { "math_id": 4, "text": "\\varphi" }, { "math_id": 5, "text": "B (7)" }, { "math_id": 6, "text": "B (10) = \\varphi + 2" }, { "math_id": 7, "text": "\\lambda = 1" }, { "math_id": 8, "text": "\\lambda = \\infty" } ]
https://en.wikipedia.org/wiki?curid=58958793
58962
Timeline of geology
Chronological list of notable events in the history of the science of geology Timeline of geology References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "M_L" }, { "math_id": 1, "text": "M_0" }, { "math_id": 2, "text": "M_W" } ]
https://en.wikipedia.org/wiki?curid=58962
58962919
Box-making game
A box-making game (often called just a box game) is a biased positional game where two players alternately pick elements from a family of pairwise-disjoint sets ("boxes"). The first player – called "BoxMaker" – tries to pick all elements of a single box. The second player – called "BoxBreaker" – tries to pick at least one element of all boxes. The box game was first presented by Paul Erdős and Václav Chvátal. It was later solved by Hamidoune and Las-Vergnas. Definition. A box game is defined by a family of pairwise-disjoint boxes formula_0, the number of balls in each box, and two integer parameters "p" and "q". The first player, "BoxMaker", picks "p" balls (from the same or different boxes). Then the second player, "BoxBreaker", breaks "q" boxes. And so on. BoxMaker wins if he has managed to pick all balls in at least one box, before BoxBreaker has managed to break this box. BoxBreaker wins if he has managed to break all the boxes. Strategies. In general, the optimal strategy for BoxBreaker is to break the boxes with the smallest number of remaining elements. The optimal strategy for BoxMaker is to try to balance the sizes of all boxes. By simulating these strategies, Hamidoune and Las-Vergnas found a necessary and sufficient condition for each player in the ("p":"q") box game. For the special case where "q"=1, each of the following conditions is sufficient: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
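The two greedy strategies described above are easy to simulate. The following sketch (a simplified round-based illustration with positive box sizes, not the analysis of Hamidoune and Las-Vergnas) plays one game out:

```python
def play_box_game(box_sizes, p, q):
    """One play-through of the (p:q) box game with the greedy strategies:
    BoxMaker draws from the fullest surviving box (balancing the sizes),
    BoxBreaker then destroys the q boxes with the fewest remaining balls."""
    remaining = list(box_sizes)    # balls still unpicked in each surviving box
    while remaining:
        for _ in range(p):         # BoxMaker picks p balls
            i = max(range(len(remaining)), key=lambda k: remaining[k])
            remaining[i] -= 1
            if remaining[i] == 0:
                return "BoxMaker"  # every ball of one box has been picked
        remaining.sort()
        del remaining[:q]          # BoxBreaker breaks the q smallest boxes
    return "BoxBreaker"            # every box was broken

print(play_box_game([3, 3], p=2, q=1))     # BoxMaker
print(play_box_game([2, 2, 2], p=1, q=1))  # BoxBreaker
```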
[ { "math_id": 0, "text": "A_1,\\ldots,A_n" }, { "math_id": 1, "text": "p\\cdot \\sum_{i=1}^n {1\\over i} < k" }, { "math_id": 2, "text": "n < (q+1)^{k/p}" }, { "math_id": 3, "text": "p\\cdot \\log_2{n} < k" }, { "math_id": 4, "text": "{1\\over n-j+1}\\sum_{i=j}^n c_i" }, { "math_id": 5, "text": "c_i" }, { "math_id": 6, "text": "k_1,\\ldots,k_n" }, { "math_id": 7, "text": "\\sum_{i=1}^n e^{-k_i/p} < {1/e}" }, { "math_id": 8, "text": "\\sum_{i=1}^n (1+q)^{-k_1/p} < {1\\over 1+q}" }, { "math_id": 9, "text": "\\sum_{i=1}^n 2^{-k_i/p} < {1\\over 2}" } ]
https://en.wikipedia.org/wiki?curid=58962919
5896529
Active load
An active load or dynamic load is a component or a circuit that functions as a current-stable nonlinear resistor. Circuit design. In circuit design, an active load is a circuit component made up of "active devices", such as transistors, intended to present a high small-signal impedance yet not requiring a large DC voltage drop, as would occur if a large resistor were used instead. Such large AC load impedances may be desirable, for example, to increase the AC gain of some types of amplifier. Most commonly the active load is the output part of a current mirror and is represented in an idealized manner as a current source. Usually, it is only a "constant-current resistor" that is a part of the whole current source including a "constant voltage source" as well (the power supply "VCC" on the figures below). Common base example. In Figure 1 the load is a resistor, and the current through the resistor is determined by Ohm's law as: formula_0. As a consequence of this relation, the voltage drop across the resistor is tied to the current at the Q-point. If the bias current is fixed for some performance reason, any increase in load resistance automatically leads to a lower voltage for "V"out, which in turn lowers the voltage drop "VCB" between collector and base, limiting the signal swing at the amplifier output (if the output swing is larger than "VCB", the transistor is driven out of active mode during part of the signal cycle). In contrast, using the active load of Figure 2, the AC impedance of the ideal current source is infinite regardless of the voltage drop "VCC" − "V"out, which allows even a large value of "VCB" and consequently a large output signal swing. Differential amplifiers. Active loads are frequently used in op-amp differential input stages, in order to enormously increase the gain. Practical limitations. In practice the ideal current source is replaced by a current mirror, which is less ideal in two ways. 
First, its AC resistance is large, but not infinite. Second, the mirror requires a small voltage drop to maintain operation (to keep the output transistors of the mirror in active mode). As a result, the current mirror does limit the allowable output voltage swing, but this limitation is much less than for a resistor, and also does not depend upon the choice of bias current, leaving more flexibility than a resistor in designing the circuit. Test equipment. In the area of electronic test equipment, an active load is used for automatic testing of power supplies and other sources of electrical power to ensure that their output voltage and current are within their specifications over a range of load conditions, from no load to maximum load. One approach to test loads uses a set of resistors of different values, and manual intervention. In contrast, an active load presents to the source a resistance value varied by electronic control, either by an analogue adjusting device such as a multi-turn potentiometer or, in automated test setups, by a digital computer. The load resistance can often be varied rapidly in order to test the power supply's transient response. Just like a resistor, an active load converts the power supply's electrical energy to heat. The heat-dissipating devices (usually transistors) in an active load therefore have to be designed to withstand the resulting temperature rise, and are usually cooled by means of heatsinks. For added convenience, active loads often include circuitry to measure the current and voltage delivered to the inputs, and may display these measurements on numeric readouts.
[ { "math_id": 0, "text": "I_C = \\frac {V_{CC} - V_\\text{out}} {R_C}" } ]
https://en.wikipedia.org/wiki?curid=5896529
5896724
Row equivalence
Equivalence of matrices under row operations In linear algebra, two matrices are row equivalent if one can be changed to the other by a sequence of elementary row operations. Alternatively, two "m" × "n" matrices are row equivalent if and only if they have the same row space. The concept is most commonly applied to matrices that represent systems of linear equations, in which case two matrices of the same size are row equivalent if and only if the corresponding homogeneous systems have the same set of solutions, or equivalently the matrices have the same null space. Because elementary row operations are reversible, row equivalence is an equivalence relation. It is commonly denoted by a tilde (~). There is a similar notion of column equivalence, defined by elementary column operations; two matrices are column equivalent if and only if their transpose matrices are row equivalent. Two rectangular matrices that can be converted into one another allowing both elementary row and column operations are called simply equivalent. Elementary row operations. An elementary row operation is any one of the following moves: interchange two rows (row switching); multiply a row by a nonzero constant (row multiplication); or add a multiple of one row to another row (row addition). Two matrices "A" and "B" are row equivalent if it is possible to transform "A" into "B" by a sequence of elementary row operations. Row space. The row space of a matrix is the set of all possible linear combinations of its row vectors. If the rows of the matrix represent a system of linear equations, then the row space consists of all linear equations that can be deduced algebraically from those in the system. Two "m" × "n" matrices are row equivalent if and only if they have the same row space. For example, the matrices formula_0 are row equivalent, the row space being all vectors of the form formula_1. The corresponding systems of homogeneous equations convey the same information: formula_2 In particular, both of these systems imply every equation of the form formula_3 Equivalence of the definitions. 
The fact that two matrices are row equivalent if and only if they have the same row space is an important theorem in linear algebra. The proof is based on the following observations: This line of reasoning also proves that every matrix is row equivalent to a unique matrix with reduced row echelon form. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
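To illustrate the row-space criterion above, one can bring both example matrices to reduced row echelon form and compare; the forms agree exactly when the row spaces coincide. A minimal sketch in Python (the helper `rref` is mine, not from the article):

```python
from fractions import Fraction

def rref(m):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    m = [[Fraction(x) for x in row] for row in m]
    rows, cols = len(m), len(m[0])
    lead = 0
    for r in range(rows):
        if lead >= cols:
            break
        i = r
        while m[i][lead] == 0:          # find a pivot in the current column
            i += 1
            if i == rows:
                i = r
                lead += 1
                if lead == cols:
                    return m
        m[i], m[r] = m[r], m[i]         # move the pivot row into place
        m[r] = [x / m[r][lead] for x in m[r]]          # scale pivot to 1
        for i in range(rows):
            if i != r:                  # eliminate the column elsewhere
                m[i] = [a - m[i][lead] * b for a, b in zip(m[i], m[r])]
        lead += 1
    return m

A = [[1, 0, 0], [0, 1, 1]]
B = [[1, 0, 0], [1, 1, 1]]
print(rref(A) == rref(B))  # True: same RREF, hence the same row space
```

Since the reduced row echelon form of a matrix is unique and determined by its row space, equal forms certify row equivalence.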
[ { "math_id": 0, "text": "\\begin{pmatrix}1 & 0 & 0 \\\\ 0 & 1 & 1\\end{pmatrix}\n\\;\\;\\;\\;\\text{and}\\;\\;\\;\\;\n\\begin{pmatrix}1 & 0 & 0 \\\\ 1 & 1 & 1\\end{pmatrix}" }, { "math_id": 1, "text": "\\begin{pmatrix}a & b & b\\end{pmatrix}" }, { "math_id": 2, "text": "\\begin{matrix}x = 0 \\\\ y+z=0\\end{matrix}\\;\\;\\;\\;\\text{and}\\;\\;\\;\\;\\begin{matrix} x=0 \\\\ x+y+z=0.\\end{matrix}" }, { "math_id": 3, "text": "ax+by+bz=0.\\," } ]
https://en.wikipedia.org/wiki?curid=5896724
5897031
Elementary matrix
Matrix which differs from the identity matrix by one elementary row operation In mathematics, an elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation. The elementary matrices generate the general linear group GL"n"(F) when F is a field. Left multiplication (pre-multiplication) by an elementary matrix represents elementary row operations, while right multiplication (post-multiplication) represents elementary column operations. Elementary row operations are used in Gaussian elimination to reduce a matrix to row echelon form. They are also used in Gauss–Jordan elimination to further reduce the matrix to reduced row echelon form. Elementary row operations. There are three types of elementary matrices, which correspond to three types of row operations (respectively, column operations): formula_0 formula_1 formula_2 If E is an elementary matrix, as described below, to apply the elementary row operation to a matrix A, one multiplies A by the elementary matrix on the left, EA. The elementary matrix for any row operation is obtained by executing the operation on the identity matrix. This fact can be understood as an instance of the Yoneda lemma applied to the category of matrices. Row-switching transformations. The first type of row operation on a matrix A switches all matrix elements on row i with their counterparts on a different row j. The corresponding elementary matrix is obtained by swapping row i and row j of the identity matrix. formula_3 So Ti,j A is the matrix produced by exchanging row i and row j of A. Coefficient wise, the matrix Ti,j is defined by: formula_4 This matrix is its own inverse: formula_5 Its determinant is formula_6 Hence, left multiplication by it negates the determinant: formula_7 A row-switching matrix can also be composed from the other two types of elementary matrices: formula_8 Row-multiplying transformations. The next type of row operation on a matrix A multiplies all elements on row i by m where m is a non-zero scalar (usually a real number). The corresponding elementary matrix is a diagonal matrix, with diagonal entries 1 everywhere except in the ith position, where it is m. 
formula_9 So "Di"("m")"A" is the matrix produced from A by multiplying row i by m. Coefficient wise, the "Di"("m") matrix is defined by: formula_10 Its inverse rescales row i by the reciprocal: formula_11 Its determinant is formula_12 and therefore formula_13 Row-addition transformations. The final type of row operation on a matrix A adds row j multiplied by a scalar m to row i. The corresponding elementary matrix is the identity matrix but with an m in the ("i, j") position. formula_14 So "Lij"("m")"A" is the matrix produced from A by adding m times row j to row i. And "A Lij"("m") is the matrix produced from A by adding m times column i to column j. Coefficient wise, the matrix "Li,j"("m") is defined by: formula_15 Its inverse is obtained by negating the scalar: formula_16 Its determinant is formula_17 so row-addition transformations preserve the determinant: formula_18 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
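The three transformation types can be illustrated with a short, self-contained Python sketch (the helper names `row_swap`, `row_scale` and `row_add` are mine, not from the article); left multiplication by each matrix performs the corresponding row operation:

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(P, Q):
    """Plain matrix product of nested-list matrices."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def row_swap(n, i, j):
    """T_{i,j}: the identity with rows i and j exchanged."""
    E = identity(n)
    E[i], E[j] = E[j], E[i]
    return E

def row_scale(n, i, m):
    """D_i(m): the identity with the (i, i) entry replaced by m (m nonzero)."""
    E = identity(n)
    E[i][i] = m
    return E

def row_add(n, i, j, m):
    """L_{i,j}(m): the identity with an extra m in the (i, j) position."""
    E = identity(n)
    E[i][j] = m
    return E

A = [[1, 2], [3, 4]]
print(matmul(row_swap(2, 0, 1), A))     # [[3, 4], [1, 2]]: rows exchanged
print(matmul(row_add(2, 1, 0, -3), A))  # [[1, 2], [0, -2]]: row1 <- row1 - 3*row0
```

The second call is exactly the elimination step of Gaussian elimination: left-multiplying by L<sub>1,0</sub>(−3) clears the entry below the first pivot.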
[ { "math_id": 0, "text": "R_i \\leftrightarrow R_j" }, { "math_id": 1, "text": "kR_i \\rightarrow R_i,\\ \\mbox{where } k \\neq 0" }, { "math_id": 2, "text": "R_i + kR_j \\rightarrow R_i, \\mbox{where } i \\neq j " }, { "math_id": 3, "text": "T_{i,j} = \\begin{bmatrix}\n 1 & & & & & & \\\\\n & \\ddots & & & & & \\\\\n & & 0 & & 1 & & \\\\\n & & & \\ddots & & & \\\\\n & & 1 & & 0 & & \\\\\n & & & & & \\ddots & \\\\\n & & & & & & 1\n\\end{bmatrix}" }, { "math_id": 4, "text": "\n[T_{i,j}]_{k,l} =\n\\begin{cases}\n0 & k \\neq i, k \\neq j ,k \\neq l \\\\\n1 & k \\neq i, k \\neq j, k = l\\\\ \n0 & k = i, l \\neq j\\\\\n1 & k = i, l = j\\\\\n0 & k = j, l \\neq i\\\\\n1 & k = j, l = i\\\\\n\\end{cases}" }, { "math_id": 5, "text": "T_{i,j}^{-1} = T_{i,j}." }, { "math_id": 6, "text": "\\det(T_{i,j}) = -1." }, { "math_id": 7, "text": "\\det(T_{i,j}A) = -\\det(A)." }, { "math_id": 8, "text": "T_{i,j}=D_i(-1)\\,L_{i,j}(-1)\\,L_{j,i}(1)\\,L_{i,j}(-1)." }, { "math_id": 9, "text": "D_i(m) = \\begin{bmatrix}\n 1 & & & & & & \\\\\n & \\ddots & & & & & \\\\\n & & 1 & & & & \\\\\n & & & m & & & \\\\\n & & & & 1 & & \\\\\n & & & & & \\ddots & \\\\\n & & & & & & 1\n\\end{bmatrix}" }, { "math_id": 10, "text": "\n[D_i(m)]_{k,l} = \\begin{cases} \n0 & k \\neq l \\\\\n1 & k = l, k \\neq i \\\\\nm & k = l, k= i\n\\end{cases}" }, { "math_id": 11, "text": "D_i(m)^{-1} = D_i \\left(\\tfrac 1 m \\right)." }, { "math_id": 12, "text": "\\det(D_i(m)) = m." }, { "math_id": 13, "text": "\\det(D_i(m)A) = m\\det(A)." }, { "math_id": 14, "text": "L_{ij}(m) = \\begin{bmatrix}\n 1 & & & & & & \\\\\n & \\ddots & & & & & \\\\\n & & 1 & & & & \\\\\n & & & \\ddots & & & \\\\\n & & m & & 1 & & \\\\\n & & & & & \\ddots & \\\\\n & & & & & & 1\n\\end{bmatrix}" }, { "math_id": 15, "text": "[L_{i,j}(m)]_{k,l} = \\begin{cases}\n0 & k \\neq l, k \\neq i, l \\neq j \\\\\n1 & k = l \\\\\nm & k = i, l = j\n\\end{cases}" }, { "math_id": 16, "text": "L_{ij}(m)^{-1} = L_{ij}(-m)." 
}, { "math_id": 17, "text": "\\det(L_{ij}(m)) = 1." }, { "math_id": 18, "text": "\\det(L_{ij}(m)A) = \\det(A)." } ]
https://en.wikipedia.org/wiki?curid=5897031
5897139
Galois/Counter Mode
Authenticated encryption mode for block ciphers In cryptography, Galois/Counter Mode (GCM) is a mode of operation for symmetric-key cryptographic block ciphers which is widely adopted for its performance. GCM throughput rates for state-of-the-art, high-speed communication channels can be achieved with inexpensive hardware resources. The GCM algorithm provides both data authenticity (integrity) and confidentiality and belongs to the class of authenticated encryption with associated data (AEAD) methods. This means that as input it takes a key K, some plaintext P, and some associated data AD; it then encrypts the plaintext using the key to produce ciphertext C, and computes an authentication tag T from the ciphertext and the associated data (which remains unencrypted). A recipient with knowledge of K, upon reception of AD, C and T, can decrypt the ciphertext to recover the plaintext P and can check the tag T to ensure that neither ciphertext nor associated data were tampered with. GCM uses a block cipher with block size 128 bits (commonly AES-128) operated in counter mode for encryption, and uses arithmetic in the Galois field GF(2^128) to compute the authentication tag; hence the name. Galois Message Authentication Code (GMAC) is an authentication-only variant of the GCM which can form an incremental message authentication code. Both GCM and GMAC can accept initialization vectors of arbitrary length. Different block cipher modes of operation can have significantly different performance and efficiency characteristics, even when used with the same block cipher. GCM can take full advantage of parallel processing and implementing GCM can make efficient use of an instruction pipeline or a hardware pipeline. By contrast, the cipher block chaining (CBC) mode of operation incurs pipeline stalls that hamper its efficiency and performance. Basic operation. 
Like in normal counter mode, blocks are numbered sequentially, and then this block number is combined with an initialization vector (IV) and encrypted with a block cipher "E", usually AES. The result of this encryption is then XORed with the plaintext to produce the ciphertext. Like all counter modes, this is essentially a stream cipher, and so it is essential that a different IV is used for each stream that is encrypted. The ciphertext blocks are considered coefficients of a polynomial which is then evaluated at a key-dependent point "H", using finite field arithmetic. The result is then encrypted, producing an authentication tag that can be used to verify the integrity of the data. The encrypted text then contains the IV, ciphertext, and authentication tag. Mathematical basis. GCM combines the well-known counter mode of encryption with the new Galois mode of authentication. The key feature is the ease of parallel computation of the Galois field multiplication used for authentication. This feature permits higher throughput than encryption algorithms, like CBC, which use chaining modes. The GF(2^128) field used is defined by the polynomial formula_0 The authentication tag is constructed by feeding blocks of data into the GHASH function and encrypting the result. This GHASH function is defined by formula_1 where "H" = "Ek"(0^128) is the "hash key", a string of 128 zero bits encrypted using the block cipher, "A" is data which is only authenticated (not encrypted), "C" is the ciphertext, "m" is the number of 128-bit blocks in "A" (rounded up), "n" is the number of 128-bit blocks in "C" (rounded up), and the variable "Xi" for "i" = 0, ..., "m" + "n" + 1 is defined below. 
First, the authenticated text and the cipher text are separately zero-padded to multiples of 128 bits and combined into a single message "Si": formula_2 where len("A") and len("C") are the 64-bit representations of the bit lengths of "A" and "C", respectively, "v" = len("A") mod 128 is the bit length of the final block of "A", "u" = len("C") mod 128 is the bit length of the final block of "C", and formula_3 denotes concatenation of bit strings. Then "Xi" is defined as: formula_4 The second form is an efficient iterative algorithm (each "Xi" depends on "X""i"−1) produced by applying Horner's method to the first. Only the final "X""m"+"n"+1 remains an output. If it is necessary to parallelize the hash computation, this can be done by interleaving "k" times: formula_5 If the length of the IV is not 96, the GHASH function is used to calculate "Counter 0": formula_6 GCM was designed by John Viega and David A. McGrew to be an improvement to Carter–Wegman counter mode (CWC mode). In November 2007, NIST announced the release of NIST Special Publication 800-38D "Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC" making GCM and GMAC official standards. Use. GCM mode is used in the IEEE 802.1AE (MACsec) Ethernet security, WPA3-Enterprise Wifi security protocol, IEEE 802.11ad (also dubbed WiGig), ANSI (INCITS) Fibre Channel Security Protocols (FC-SP), IEEE P1619.1 tape storage, IETF IPsec standards, SSH, TLS 1.2 and TLS 1.3. AES-GCM is included in the NSA Suite B Cryptography and its latest replacement in 2018 Commercial National Security Algorithm (CNSA) suite. GCM mode is used in the SoftEther VPN server and client, as well as OpenVPN since version 2.4. Performance. GCM requires one block cipher operation and one 128-bit multiplication in the Galois field per each block (128 bit) of encrypted and authenticated data. 
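The Galois-field multiplication and the GHASH chaining defined above can be sketched in a few lines of Python. This is an illustrative toy, not a complete or secure implementation (the function names are mine, and it omits the separate zero-padding of "A" and "C" and the final length block); it follows the bit ordering of the GCM specification, in which "bit 0" of a block is the most significant bit:

```python
def gf128_mul(x, y):
    """Multiply two elements of GF(2^128) in GCM's bit ordering.
    The reduction polynomial x^128 + x^7 + x^2 + x + 1 appears as the
    constant R = 11100001 || 0^120."""
    R = 0xE1000000000000000000000000000000
    z, v = 0, x
    for i in range(128):
        if (y >> (127 - i)) & 1:   # if bit i of y is set,
            z ^= v                 #   accumulate the current multiple of x
        if v & 1:                  # multiply v by the field element "x",
            v = (v >> 1) ^ R       #   reducing modulo the polynomial
        else:
            v >>= 1
    return z

def ghash_blocks(h, data):
    """Chain 16-byte blocks through X_i = (X_{i-1} xor S_i) * H."""
    x = 0
    for i in range(0, len(data), 16):
        block = int.from_bytes(data[i:i + 16].ljust(16, b"\x00"), "big")
        x = gf128_mul(x ^ block, h)
    return x
```

In this representation the block with only its most significant bit set acts as the multiplicative identity, and the chaining is linear in the data blocks, which is what makes the interleaved parallel evaluation described earlier possible.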
The block cipher operations are easily pipelined or parallelized; the multiplication operations are easily pipelined and can be parallelized with some modest effort (either by parallelizing the actual operation, by adapting Horner's method per the original NIST submission, or both). Intel has added the PCLMULQDQ instruction, highlighting its use for GCM. In 2011, SPARC added the XMULX and XMULXHI instructions, which also perform 64 × 64 bit carry-less multiplication. In 2015, SPARC added the XMPMUL instruction, which performs XOR multiplication of much larger values, up to 2048 × 2048 bit input values producing a 4096-bit result. These instructions enable fast multiplication over GF(2"n"), and can be used with any field representation. Impressive performance results are published for GCM on a number of platforms. Käsper and Schwabe described a "Faster and Timing-Attack Resistant AES-GCM" that achieves 10.68 cycles per byte AES-GCM authenticated encryption on 64-bit Intel processors. Dai et al. report 3.5 cycles per byte for the same algorithm when using Intel's AES-NI and PCLMULQDQ instructions. Shay Gueron and Vlad Krasnov achieved 2.47 cycles per byte on the 3rd generation Intel processors. Appropriate patches were prepared for the OpenSSL and NSS libraries. When both authentication and encryption need to be performed on a message, a software implementation can achieve speed gains by overlapping the execution of those operations. Performance is increased by exploiting instruction-level parallelism by interleaving operations. This process is called function stitching, and while in principle it can be applied to any combination of cryptographic algorithms, GCM is especially suitable. Manley and Gregg show the ease of optimizing when using function stitching with GCM. They present a program generator that takes an annotated C version of a cryptographic algorithm and generates code that runs well on the target processor. 
GCM has been criticized in the embedded world (for example by Silicon Labs) because the parallel processing is not suited for performant use of cryptographic hardware engines. As a result, GCM reduces the performance of encryption for some of the most performance-sensitive devices. Specialized hardware accelerators for ChaCha20-Poly1305 are less complex compared to AES accelerators. Patents. According to the authors' statement, GCM is unencumbered by patents. Security. GCM is proven secure in the concrete security model. It is secure when it is used with a block cipher that is indistinguishable from a random permutation; however, security depends on choosing a unique initialization vector for every encryption performed with the same key ("see" stream cipher attack). For any given key, GCM is limited to encrypting 2^39 − 256 bits of plain text (64 GiB). NIST Special Publication 800-38D includes guidelines for initialization vector selection. The authentication strength depends on the length of the authentication tag, like with all symmetric message authentication codes. The use of shorter authentication tags with GCM is discouraged. The bit-length of the tag, denoted "t", is a security parameter. In general, "t" may be any one of the following five values: 128, 120, 112, 104, or 96. For certain applications, "t" may be 64 or 32, but the use of these two tag lengths constrains the length of the input data and the lifetime of the key. Appendix C in NIST SP 800-38D provides guidance for these constraints (for example, if "t" = 32 and the maximal packet size is 2^10 bytes, the authentication decryption function should be invoked no more than 2^11 times; if "t" = 64 and the maximal packet size is 2^15 bytes, the authentication decryption function should be invoked no more than 2^32 times). Like with any message authentication code, if the adversary chooses a "t"-bit tag at random, it is expected to be correct for given data with probability measure 2^−"t". 
With GCM, however, an adversary attacking a message of "n" blocks – the total length of the ciphertext plus any additional authenticated data (AAD) – can choose the tags so as to raise the success probability from 2^−"t" to roughly "n"⋅2^−"t". One must bear in mind that these optimal tags are still dominated by the algorithm's survival measure 1 − "n"⋅2^−"t" for arbitrarily large "t". Moreover, GCM is well-suited neither for very short tag lengths nor for very long messages. Ferguson and Saarinen independently described how an attacker can perform optimal attacks against GCM authentication, which meet the lower bound on its security. Ferguson showed that, if "n" denotes the total number of blocks in the encoding (the input to the GHASH function), then there is a method of constructing a targeted ciphertext forgery that is expected to succeed with a probability of approximately "n"⋅2^−"t". If the tag length "t" is shorter than 128, then each successful forgery in this attack increases the probability that subsequent targeted forgeries will succeed, and leaks information about the hash subkey, "H". Eventually, "H" may be compromised entirely and the authentication assurance is completely lost. Independent of this attack, an adversary may attempt to systematically guess many different tags for a given input to authenticated decryption and thereby increase the probability that one (or more) of them, eventually, will be considered valid. For this reason, the system or protocol that implements GCM should monitor and, if necessary, limit the number of unsuccessful verification attempts for each key. Saarinen described GCM weak keys. This work gives some valuable insights into how polynomial hash-based authentication works. More precisely, this work describes a particular way of forging a GCM message, given a valid GCM message, that works with probability of about "n"⋅2^−128 for messages that are "n" × 128 bits long. 
However, this work does not show a more effective attack than was previously known; the success probability in observation 1 of this paper matches that of lemma 2 from the INDOCRYPT 2004 analysis (setting "w" = 128 and "l" = "n" × 128). Saarinen also described a GCM variant Sophie Germain Counter Mode (SGCM) based on Sophie Germain primes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x^{128} + x^7 + x^2 + x + 1" }, { "math_id": 1, "text": "\\operatorname{GHASH}(H, A, C) = X_{m+n+1}" }, { "math_id": 2, "text": "S_i = \\begin{cases}\n A_i & \\text{for }i = 1, \\ldots, m - 1 \\\\\n A^*_m \\parallel 0^{128-v} & \\text{for }i = m \\\\\n C_{i-m} & \\text{for }i = m + 1, \\ldots, m + n - 1 \\\\\n C^*_n \\parallel 0^{128-u} & \\text{for }i = m + n \\\\\n \\operatorname{len}(A) \\parallel \\operatorname{len}(C) & \\text{for }i = m + n + 1\n\\end{cases}" }, { "math_id": 3, "text": "\\parallel" }, { "math_id": 4, "text": "X_i = \\sum_{j=1}^i S_j \\cdot H^{i-j+1} = \\begin{cases}\n 0 & \\text{for } i = 0 \\\\\n \\left(X_{i-1} \\oplus S_i\\right) \\cdot H & \\text{for } i = 1, \\ldots, m + n + 1\n\\end{cases}" }, { "math_id": 5, "text": "\\begin{align}\n X^'_i &= \\begin{cases}\n 0 & \\text{for } i \\leq 0 \\\\\n \\left(X^'_{i-k} \\oplus S_i \\right) \\cdot H^k & \\text{for } i = 1, \\ldots, m + n + 1 - k \\\\\n \\end{cases} \\\\[6pt]\n X_i & = \\sum_{j=1}^k \\left( X^'_{i+j-2k} \\oplus S_{i+j-k} \\right) \\cdot H^{k-j+1}\n\\end{align}" }, { "math_id": 6, "text": "\\mathrm{Counter 0} = \\begin{cases}\n IV \\parallel 0^{31} \\parallel 1 & \\text{for } \\operatorname{len}(IV) = 96 \\\\\n \\operatorname{GHASH}\\left(IV \\parallel 0^{s} \\parallel 0^{64} \\parallel \\operatorname{len}_{64}(IV) \\right) \\text{ with } s = 128 - \\operatorname{len}(IV) \\mod 128 & \\text{otherwise}\n\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=5897139
58977300
A. Brooks Harris
American physicist Arthur Brooks Harris, called Brooks Harris (born 25 March 1935), is an American physicist. Biography. Harris was born in Boston, Massachusetts, and studied at Harvard University, receiving a bachelor's degree in 1956, a master's degree in 1959, and a PhD in experimental solid state physics under Horst Meyer in 1962. Harris spent 1961/62 at Duke University completing his doctoral thesis with Meyer and was then an instructor there from 1962 to 1964. During his 1961–1964 stay at Duke University, Harris retrained himself as a theorist in condensed matter physics, and he then spent the academic year 1964/65 as a researcher working with John Hubbard at the Atomic Energy Research Establishment (Harwell Laboratory) near Harwell, Oxfordshire, in the UK. Harris became an assistant professor at the University of Pennsylvania in 1965 and a full professor there in 1977, continuing until his retirement as professor emeritus. He was a visiting professor at the University of British Columbia in 1976, at the University of Oxford in 1973, 1986, and 1994, at Tel Aviv University in 1987 and 1995, and at McMaster University in 2005. He was a visiting scientist at Sandia National Laboratories in 1974 and at the National Institute of Standards and Technology (NIST) in 2002. In 2007 he received the Lars Onsager Prize for his contributions to the statistical physics of disordered systems, especially for the development of the Harris criterion. From 1967 to 1969 he was a Sloan Fellow and in 1972/73 a Guggenheim Fellow. In 1989 he was elected a Fellow of the American Physical Society. Harris has been married to Peggy since 1958 and has three children, eight grandchildren, and two great-grandchildren. Research. Upon receiving the Lars Onsager Prize, Harris wrote in 2007: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; He has also collaborated in theoretical condensed matter physics with R. J. Birgeneau (MIT), J. Yeomans (Oxford), R. D. Kamien (Penn), C. Broholm (Johns Hopkins), and A. 
Ramirez (Bell Labs). In 1973, at Oxford, he developed the Harris criterion, which indicates the extent to which the critical exponents of a phase transition are modified by a small amount of randomness ("e.g.", defects, dislocations, or impurities). Such impurities "smear" the phase transition and lead to local variations in the transition temperature. Let formula_0 denote the spatial dimension of the system and let formula_1 denote the critical exponent of the correlation length. The Harris criterion states that if formula_2 the impurities do not affect the critical behavior (so that the critical behavior is then stable against the random interference). For example, in the classical three-dimensional Heisenberg model formula_3 and thus the Harris criterion is satisfied, while the three-dimensional Ising model has formula_4 and thus does not satisfy the criterion (formula_5).
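The two examples can be checked with one line of arithmetic per model; a quick sketch (using the exponent values quoted above):

```python
# Harris criterion: randomness is irrelevant to the critical behavior
# when the correlation-length exponent satisfies nu >= 2/d.
d = 3                      # spatial dimension
threshold = 2 / d          # ~0.667 in three dimensions
exponents = {"3D Heisenberg": 0.698, "3D Ising": 0.627}
for model, nu in exponents.items():
    print(model, "stable against weak disorder:", nu >= threshold)
```

The Heisenberg value 0.698 exceeds 2/3, while the Ising value 0.627 falls short, matching the conclusion in the text.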
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": " \\nu " }, { "math_id": 2, "text": " \\nu \\geq \\frac {2} {d} " }, { "math_id": 3, "text": " \\nu = 0 {.} 698 " }, { "math_id": 4, "text": " \\nu = 0 {. } 627 " }, { "math_id": 5, "text": " d = 3 " } ]
https://en.wikipedia.org/wiki?curid=58977300
58977312
Blasius theorem
In fluid dynamics, Blasius theorem states that "the force experienced by a two-dimensional fixed body in a steady irrotational flow is given by" formula_0 "and the moment about the origin experienced by the body is given by" formula_1 Here, formula_2 is the force acting on the body, formula_3 is the density of the fluid, formula_4 is a closed contour flanking the body, formula_5 is the complex potential (with formula_6 the velocity potential and formula_7 the stream function), formula_8 is the complex velocity built from the flow velocity formula_9, formula_10 is the complex coordinate of a point formula_11 in the plane, formula_12 denotes the real part, and formula_13 is the moment about the origin. The first formula is sometimes called "Blasius–Chaplygin formula". The theorem is named after Paul Richard Heinrich Blasius, who derived it in 1911. The Kutta–Joukowski theorem directly follows from this theorem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
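As a numerical sketch of the first formula (illustrative only: the values of U, a and Γ below are mine, not from the article), one can evaluate the Blasius integral for the standard complex potential of uniform flow past a circular cylinder with circulation. The result recovers the Kutta–Joukowski force F_x − iF_y = iρUΓ:

```python
import cmath
import math

U, a, Gamma, rho = 1.0, 1.0, 2.0, 1.0   # illustrative flow parameters

def dw_dz(z):
    """Complex velocity dw/dz for uniform flow U past a cylinder of radius a
    with circulation Gamma (counterclockwise positive)."""
    return U * (1 - a**2 / z**2) - 1j * Gamma / (2 * math.pi * z)

# Blasius contour integral over the circle |z| = 2, using the periodic
# trapezoidal rule (spectrally accurate for this analytic integrand).
N, R = 4096, 2.0
zs = [R * cmath.exp(2j * math.pi * k / N) for k in range(N)]
integral = sum(dw_dz(z) ** 2 * 1j * z for z in zs) * 2 * math.pi / N

F = 1j * rho / 2 * integral   # F = F_x - i F_y; expect i*rho*U*Gamma = 2i here
print(F)
```

Only the 1/z term of (dw/dz)² survives the contour integration (its coefficient is −iUΓ/π), so the integral equals 2UΓ, giving F_x = 0 and the lift F_y = −ρUΓ.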
[ { "math_id": 0, "text": "F_x-iF_y = \\frac{i\\rho}{2} \\oint_C \\left(\\frac{\\mathrm{d}w}{\\mathrm{d}z}\\right)^2\\mathrm{d}z" }, { "math_id": 1, "text": "M=\\Re\\left\\{-\\frac{\\rho}{2}\\oint_C z \\left(\\frac{\\mathrm{d}w}{\\mathrm{d}z}\\right)^2\\mathrm{d}z\\right\\}." }, { "math_id": 2, "text": "(F_x,F_y)" }, { "math_id": 3, "text": "\\rho" }, { "math_id": 4, "text": "C" }, { "math_id": 5, "text": "w=\\phi+ i\\psi" }, { "math_id": 6, "text": "\\phi" }, { "math_id": 7, "text": "\\psi" }, { "math_id": 8, "text": "{\\mathrm{d}w}/{\\mathrm{d}z} = u_x-i u_y" }, { "math_id": 9, "text": "(u_x,u_y)" }, { "math_id": 10, "text": "z=x+iy" }, { "math_id": 11, "text": "(x,y)" }, { "math_id": 12, "text": "\\Re" }, { "math_id": 13, "text": "M" } ]
https://en.wikipedia.org/wiki?curid=58977312
58992
Linear-feedback shift register
Type of shift register in computing In computing, a linear-feedback shift register (LFSR) is a shift register whose input bit is a linear function of its previous state. The most commonly used linear function of single bits is exclusive-or (XOR). Thus, an LFSR is most often a shift register whose input bit is driven by the XOR of some bits of the overall shift register value. The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Likewise, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can produce a sequence of bits that appears random and has a very long cycle. Applications of LFSRs include generating pseudo-random numbers, pseudo-noise sequences, fast digital counters, and whitening sequences. Both hardware and software implementations of LFSRs are common. The mathematics of a cyclic redundancy check, used to provide a quick check against transmission errors, are closely related to those of an LFSR. In general, the arithmetic behind LFSRs makes them elegant objects to study and implement: one can produce relatively complex logic from simple building blocks. However, other methods that are less elegant but perform better should be considered as well. Fibonacci LFSRs. The bit positions that affect the next state are called the "taps". In the diagram the taps are [16,14,13,11]. The rightmost bit of the LFSR is called the output bit, which is always also a tap. To obtain the next state, the tap bits are XOR-ed sequentially; then, all bits are shifted one place to the right, with the rightmost bit being discarded, and that result of XOR-ing the tap bits is fed back into the now-vacant leftmost bit. 
To obtain the pseudorandom output stream, read the rightmost bit after each state transition. The sequence of numbers generated by an LFSR or its XNOR counterpart can be considered a binary numeral system just as valid as Gray code or the natural binary code. The arrangement of taps for feedback in an LFSR can be expressed in finite field arithmetic as a polynomial mod 2. This means that the coefficients of the polynomial must be 1s or 0s. This is called the feedback polynomial or reciprocal characteristic polynomial. For example, if the taps are at the 16th, 14th, 13th and 11th bits (as shown), the feedback polynomial is formula_0 The "one" in the polynomial does not correspond to a tap – it corresponds to the input to the first bit (i.e. "x"0, which is equivalent to 1). The powers of the terms represent the tapped bits, counting from the left. The first and last bits are always connected as an input and output tap respectively. The LFSR is maximal-length if and only if the corresponding feedback polynomial is primitive over the Galois field GF(2). This means that the following conditions are necessary (but not sufficient): the number of taps must be even, and the set of taps must be setwise co-prime, i.e. there must be no divisor other than 1 common to all taps. Tables of primitive polynomials from which maximum-length LFSRs can be constructed are given below and in the references. There can be more than one maximum-length tap sequence for a given LFSR length. Also, once one maximum-length tap sequence has been found, another automatically follows. If the tap sequence in an "n"-bit LFSR is ["n", "A", "B", "C", 0], where the 0 corresponds to the "x"0 = 1 term, then the corresponding "mirror" sequence is ["n", "n" − "C", "n" − "B", "n" − "A", 0]. So the tap sequence [32, 22, 2, 1, 0] has as its counterpart [32, 31, 30, 10, 0]. Both give a maximum-length sequence. An example in C is below:

#include <stdint.h>

unsigned lfsr_fib(void)
{
    uint16_t start_state = 0xACE1u;  /* Any nonzero start state will work. */
    uint16_t lfsr = start_state;
    uint16_t bit;                    /* Must be 16-bit to allow bit<<15 later in the code */
    unsigned period = 0;

    do
    {
        /* taps: 16 14 13 11; feedback polynomial: x^16 + x^14 + x^13 + x^11 + 1 */
        bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
        lfsr = (lfsr >> 1) | (bit << 15);
        ++period;
    }
    while (lfsr != start_state);

    return period;
}

If a fast parity or popcount operation is available, the feedback bit can be computed more efficiently as the dot product of the register with the characteristic polynomial. If a rotation operation is available, the new state can be computed in a single rotation of the register with the feedback bit substituted for the output bit. This LFSR configuration is also known as standard, many-to-one or external XOR gates. The alternative Galois configuration is described in the next section. Example in Python. A sample Python implementation of a similar Fibonacci LFSR (16 bits, taps at [16,15,13,4]) would be:

start_state = 1 << 15 | 1
lfsr = start_state
period = 0

while True:
    # taps: 16 15 13 4; feedback polynomial: x^16 + x^15 + x^13 + x^4 + 1
    bit = (lfsr ^ (lfsr >> 1) ^ (lfsr >> 3) ^ (lfsr >> 12)) & 1
    lfsr = (lfsr >> 1) | (bit << 15)
    period += 1
    if lfsr == start_state:
        print(period)
        break

Here a 16-bit register is used, and the XOR taps at the 4th, 13th, 15th and 16th bits establish a maximum-length sequence. Galois LFSRs. Named after the French mathematician Évariste Galois, an LFSR in Galois configuration, which is also known as modular, internal XORs, or one-to-many LFSR, is an alternate structure that can generate the same output stream as a conventional LFSR (but offset in time). In the Galois configuration, when the system is clocked, bits that are not taps are shifted one position to the right unchanged. The taps, on the other hand, are XORed with the output bit before they are stored in the next position. The new output bit is the next input bit. The effect of this is that when the output bit is zero, all the bits in the register shift to the right unchanged, and the input bit becomes zero. 
When the output bit is one, the bits in the tap positions all flip (if they are 0, they become 1, and if they are 1, they become 0), and then the entire register is shifted to the right and the input bit becomes 1. To generate the same output stream, the order of the taps is the "counterpart" (see above) of the order for the conventional LFSR, otherwise the stream will be in reverse. Note that the internal state of the LFSR is not necessarily the same. The Galois register shown has the same output stream as the Fibonacci register in the first section. A time offset exists between the streams, so a different startpoint will be needed to get the same output each cycle. Below is a C code example for the 16-bit maximal-period Galois LFSR example in the figure:

#include <stdint.h>

unsigned lfsr_galois(void)
{
    uint16_t start_state = 0xACE1u;  /* Any nonzero start state will work. */
    uint16_t lfsr = start_state;
    unsigned period = 0;

    do
    {
        unsigned lsb = lfsr & 1u;  /* Get LSB (i.e., the output bit). */
        lfsr >>= 1;                /* Shift register */
        if (lsb)                   /* If the output bit is 1, */
            lfsr ^= 0xB400u;       /*  apply toggle mask. */
        ++period;
    }
    while (lfsr != start_state);

    return period;
}

The equivalent left-shifting variant instead reads the MSB and toggles with the mirrored mask:

        unsigned msb = (int16_t) lfsr < 0;  /* Get MSB (i.e., the output bit). */
        lfsr <<= 1;                         /* Shift register */
        if (msb)                            /* If the output bit is 1, */
            lfsr ^= 0x002Du;                /*  apply toggle mask. */

The branch if (lsb) lfsr ^= 0xB400u; can also be written as lfsr ^= (-lsb) & 0xB400u; which may produce more efficient code on some compilers. In addition, the left-shifting variant may produce even better code, as the msb is the carry from the addition of lfsr to itself. Galois LFSR parallel computation. State and resulting bits can also be combined and computed in parallel. The following function calculates the next 64 bits using the 63-bit polynomial x⁶³ + x⁶² + 1:

#include <stdint.h>

uint64_t prsg63(uint64_t lfsr)
{
    lfsr = lfsr << 32 | (lfsr << 1 ^ lfsr << 2) >> 32;
    lfsr = lfsr << 32 | (lfsr << 1 ^ lfsr << 2) >> 32;
    return lfsr;
}

Non-binary Galois LFSR. 
Binary Galois LFSRs like the ones shown above can be generalized to any "q"-ary alphabet {0, 1, ..., "q" − 1} (e.g., for binary, "q" = 2, and the alphabet is simply {0, 1}). In this case, the exclusive-or component is generalized to addition modulo-"q" (note that XOR is addition modulo 2), and the feedback bit (output bit) is multiplied (modulo-"q") by a "q"-ary value, which is constant for each specific tap point. Note that this is also a generalization of the binary case, where the feedback is multiplied by either 0 (no feedback, i.e., no tap) or 1 (feedback is present). Given an appropriate tap configuration, such LFSRs can be used to generate Galois fields for arbitrary prime values of "q". Xorshift LFSRs. As shown by George Marsaglia and further analysed by Richard P. Brent, linear feedback shift registers can be implemented using XOR and shift operations. This approach lends itself to fast execution in software because these operations typically map efficiently into modern processor instructions. Below is a C code example for a 16-bit maximal-period Xorshift LFSR using the 7,9,13 triplet from John Metcalf:

#include <stdint.h>

unsigned lfsr_xorshift(void)
{
    uint16_t start_state = 0xACE1u;  /* Any nonzero start state will work. */
    uint16_t lfsr = start_state;
    unsigned period = 0;

    do
    {
        /* 7,9,13 triplet from http://www.retroprogramming.com/2017/07/xorshift-pseudorandom-numbers-in-z80.html */
        lfsr ^= lfsr >> 7;
        lfsr ^= lfsr << 9;
        lfsr ^= lfsr >> 13;
        ++period;
    }
    while (lfsr != start_state);

    return period;
}

Matrix forms. Binary LFSRs of both Fibonacci and Galois configurations can be expressed as linear functions using matrices in formula_1 (see GF(2)). 
Using the companion matrix of the characteristic polynomial of the LFSR and denoting the seed as a column vector formula_2, the state of the register in Fibonacci configuration after formula_3 steps is given by formula_4 The matrix for the corresponding Galois form is: formula_5 For a suitable initialisation, formula_6 the top coefficient of the column vector: formula_7 gives the term "a""k" of the original sequence. These forms generalize naturally to arbitrary fields. Example polynomials for maximal LFSRs. The following table lists examples of maximal-length feedback polynomials (primitive polynomials) for shift-register lengths up to 24. The formalism for maximum-length LFSRs was developed by Solomon W. Golomb in his 1967 book. The number of different primitive polynomials grows exponentially with shift-register length and can be calculated exactly using Euler's totient function (sequence in the OEIS). Xilinx published an extended list of tap counters up to 168 bits. Tables of maximum length polynomials are available from http://users.ece.cmu.edu/~koopman/lfsr/ and can be generated by the https://github.com/hayguen/mlpolygen project. Applications. LFSRs can be implemented in hardware, and this makes them useful in applications that require very fast generation of a pseudo-random sequence, such as direct-sequence spread spectrum radio. LFSRs have also been used for generating an approximation of white noise in various programmable sound generators. Uses as counters. The repeating sequence of states of an LFSR allows it to be used as a clock divider or as a counter when a non-binary sequence is acceptable, as is often the case where computer index or framing locations need to be machine-readable. LFSR counters have simpler feedback logic than natural binary counters or Gray-code counters, and therefore can operate at higher clock rates.
However, it is necessary to ensure that the LFSR never enters an all-zeros state, for example by presetting it at start-up to any other state in the sequence. The table of primitive polynomials shows how LFSRs can be arranged in Fibonacci or Galois form to give maximal periods. Any other period can be obtained from an LFSR with a longer period by adding logic that shortens the sequence, skipping some states. Uses in cryptography. LFSRs have long been used as pseudo-random number generators for use in stream ciphers, due to the ease of construction from simple electromechanical or electronic circuits, long periods, and very uniformly distributed output streams. However, an LFSR is a linear system, leading to fairly easy cryptanalysis. For example, given a stretch of known plaintext and corresponding ciphertext, an attacker can intercept and recover a stretch of LFSR output stream used in the system described, and from that stretch of the output stream can construct an LFSR of minimal size that simulates the intended receiver by using the Berlekamp-Massey algorithm. This LFSR can then be fed the intercepted stretch of output stream to recover the remaining plaintext. Three general methods are employed to reduce this problem in LFSR-based stream ciphers: Important LFSR-based stream ciphers include A5/1 and A5/2, used in GSM cell phones, E0, used in Bluetooth, and the shrinking generator. The A5/2 cipher has been broken and both A5/1 and E0 have serious weaknesses. The linear feedback shift register has a strong relationship to linear congruential generators. Uses in circuit testing. LFSRs are used in circuit testing for test-pattern generation (for exhaustive testing, pseudo-random testing or pseudo-exhaustive testing) and for signature analysis. Test-pattern generation. Complete LFSRs are commonly used as pattern generators for exhaustive testing, since they cover all possible inputs for an "n"-input circuit.
Maximal-length LFSRs and weighted LFSRs are widely used as pseudo-random test-pattern generators for pseudo-random test applications. Signature analysis. In built-in self-test (BIST) techniques, storing all the circuit outputs on chip is not possible, but the circuit output can be compressed to form a signature that will later be compared to the golden signature (of the good circuit) to detect faults. Since this compression is lossy, there is always a possibility that a faulty output also generates the same signature as the golden signature and the faults cannot be detected. This condition is called error masking or aliasing. BIST is accomplished with a multiple-input signature register (MISR or MSR), which is a type of LFSR. A standard LFSR has a single XOR or XNOR gate, where the input of the gate is connected to several "taps" and the output is connected to the input of the first flip-flop. A MISR has the same structure, but the input to every flip-flop is fed through an XOR/XNOR gate. For example, a 4-bit MISR has a 4-bit parallel output and a 4-bit parallel input. The input of the first flip-flop is XOR/XNORd with parallel input bit zero and the "taps". Every other flip-flop input is XOR/XNORd with the preceding flip-flop output and the corresponding parallel input bit. Consequently, the next state of the MISR depends on the last several states, as opposed to just the current state. Therefore, a MISR will always generate the same golden signature given that the input sequence is the same every time. Recent applications are proposing set-reset flip-flops as "taps" of the LFSR. This allows the BIST system to optimise storage, since set-reset flip-flops can save the initial seed to generate the whole stream of bits from the LFSR. Nevertheless, this requires changes in the architecture of BIST, so it is an option for specific applications. Uses in digital broadcasting and communications. Scrambling.
To prevent short repeating sequences (e.g., runs of 0s or 1s) from forming spectral lines that may complicate symbol tracking at the receiver or interfere with other transmissions, the data bit sequence is combined with the output of a linear-feedback register before modulation and transmission. This scrambling is removed at the receiver after demodulation. When the LFSR runs at the same bit rate as the transmitted symbol stream, this technique is referred to as scrambling. When the LFSR runs considerably faster than the symbol stream, the LFSR-generated bit sequence is called "chipping code". The chipping code is combined with the data using exclusive or before transmitting using binary phase-shift keying or a similar modulation method. The resulting signal has a higher bandwidth than the data, and therefore this is a method of spread-spectrum communication. When used only for the spread-spectrum property, this technique is called direct-sequence spread spectrum; when used to distinguish several signals transmitted in the same channel at the same time and frequency, it is called code-division multiple access. Neither scheme should be confused with encryption or encipherment; scrambling and spreading with LFSRs do "not" protect the information from eavesdropping. They are instead used to produce equivalent streams that possess convenient engineering properties to allow robust and efficient modulation and demodulation. Digital broadcasting systems that use linear-feedback registers: Other digital communications systems using LFSRs: Other uses. LFSRs are also used in radio jamming systems to generate pseudo-random noise to raise the noise floor of a target communication system. The German time signal DCF77, in addition to amplitude keying, employs phase-shift keying driven by a 9-stage LFSR to increase the accuracy of received time and the robustness of the data stream in the presence of noise. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt; External links. &lt;templatestyles src="Div col/styles.css"/&gt;
[ { "math_id": 0, "text": "x^{16} + x^{14} + x^{13} + x^{11} + 1." }, { "math_id": 1, "text": "\\mathbb{F}_2" }, { "math_id": 2, "text": "(a_0, a_1, \\dots, a_{n-1})^\\mathrm{T}" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "\\begin{pmatrix} a_{k} \\\\ a_{k+1} \\\\ a_{k+2} \\\\ \\vdots \\\\ a_{k+n-1} \\end{pmatrix} =\n\\begin{pmatrix} 0 & 1 & 0 & \\cdots & 0 \\\\ 0 & 0 & 1 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & \\ddots & 0\\\\ 0 & 0 & \\cdots & 0& 1\\\\ c_{0} & c_{1} & \\cdots & \\cdots & c_{n-1} \\end{pmatrix}\n\\begin{pmatrix} a_{k-1} \\\\ a_{k} \\\\ a_{k+1} \\\\ \\vdots \\\\ a_{k+n-2} \\end{pmatrix} =\n\\begin{pmatrix} 0 & 1 & 0 & \\cdots & 0 \\\\ 0 & 0 & 1 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & \\ddots & 0\\\\ 0 & 0 & \\cdots & 0& 1\\\\ c_{0} & c_{1} & \\cdots & \\cdots & c_{n-1} \\end{pmatrix}^k\n\\begin{pmatrix} a_0 \\\\ a_1 \\\\ a_2 \\\\ \\vdots \\\\ a_{n-1} \\end{pmatrix}" }, { "math_id": 5, "text": "\n\\begin{pmatrix} c_0 & 1 & 0 & \\cdots & 0 \\\\ c_1 & 0 & 1 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & \\ddots & 0\\\\ c_{n-2} & 0 & \\cdots & 0& 1\\\\ c_{n-1} & 0 & \\cdots & \\cdots & 0 \\end{pmatrix}" }, { "math_id": 6, "text": "a'_i=\\sum_{i=0}^ja_{i-j}c_{n-j},\\ 0\\leq i < n" }, { "math_id": 7, "text": "\n\\begin{pmatrix} c_0 & 1 & 0 & \\cdots & 0 \\\\ c_1 & 0 & 1 & \\ddots & \\vdots \\\\ \\vdots & \\vdots & \\ddots & \\ddots & 0\\\\ c_{n-2} & 0 & \\cdots & 0& 1\\\\ c_{n-1} & 0 & \\cdots & \\cdots & 0 \\end{pmatrix}^k\n\\begin{pmatrix} a'_0 \\\\ a'_1 \\\\ a'_2 \\\\ \\vdots \\\\ a'_{n-1} \\end{pmatrix}" } ]
https://en.wikipedia.org/wiki?curid=58992
58997040
Dirichlet average
Averages of functions under the Dirichlet distribution Dirichlet averages are averages of functions under the Dirichlet distribution. An important class are Dirichlet averages that have a certain argument structure, namely formula_0 where formula_1 and formula_2 is the Dirichlet measure with dimension "N". They were introduced by the mathematician Bille C. Carlson in the 1970s, who noticed that the simple notion of this type of averaging generalizes and unifies many special functions, among them generalized hypergeometric functions and various orthogonal polynomials. They also play an important role for the solution of elliptic integrals (see Carlson symmetric form) and are connected to statistical applications in various ways, for example in Bayesian analysis. Notable Dirichlet averages. Some Dirichlet averages are so fundamental that they are named. A few are listed below. R-function. The (Carlson) R-function is the Dirichlet average of formula_3, formula_4 with formula_5. Sometimes formula_6 is also denoted by formula_7. Exact solutions: For formula_8 it is possible to write an exact solution in the form of an iterative sum formula_9 where formula_10, formula_11 is the dimension of formula_12 or formula_13, and formula_14. S-function. The (Carlson) S-function is the Dirichlet average of formula_15, formula_16 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " F(\\mathbf{b};\\mathbf{z})=\\int f( \\mathbf{u} \\cdot \\mathbf{z}) \\, d \\mu_b(\\mathbf{u}), " }, { "math_id": 1, "text": "\\mathbf{u}\\cdot\\mathbf{z}=\\sum_i^N u_i \\cdot z_i" }, { "math_id": 2, "text": "d \\mu_b(\\mathbf{u})=u_1^{b_1-1} \\cdots u_N^{b_N-1} d\\mathbf{u}" }, { "math_id": 3, "text": "x^n" }, { "math_id": 4, "text": "R_n(\\mathbf{b}, \\mathbf{z})=\\int (\\mathbf{u} \\cdot \\mathbf{z})^n \\, d \\mu_b(\\mathbf{u})" }, { "math_id": 5, "text": "n " }, { "math_id": 6, "text": "R_n(\\mathbf{b}, \\mathbf{z})" }, { "math_id": 7, "text": "R(-n;\\mathbf{b}, \\mathbf{z})" }, { "math_id": 8, "text": "n \\geq 0, n \\in \\mathbb{N}" }, { "math_id": 9, "text": "R_n(\\mathbf{b},\\mathbf{z})=\\frac{\\Gamma(n+1)\\Gamma(b)}{\\Gamma(b+n)} \\cdot D_n \\text{ with } D_n=\\frac{1}{n}\\sum_{k=1}^n \\left(\\sum_{i=1}^N b_i \\cdot z_i^k\\right) \\cdot D_{n-k}" }, { "math_id": 10, "text": "D_0=1" }, { "math_id": 11, "text": "N" }, { "math_id": 12, "text": "\\mathbf{b}" }, { "math_id": 13, "text": "\\mathbf{z}" }, { "math_id": 14, "text": "b=\\sum b_i" }, { "math_id": 15, "text": "e^x" }, { "math_id": 16, "text": "S(\\mathbf{b}, \\mathbf{z})=\\int \\exp(\\mathbf{u} \\cdot \\mathbf{z}) \\, d \\mu_b(\\mathbf{u}). " } ]
https://en.wikipedia.org/wiki?curid=58997040
58998360
Anne Schilling
American mathematician Anne Schilling is an American mathematician specializing in algebraic combinatorics, representation theory, and mathematical physics. She is a professor of mathematics at the University of California, Davis. Education. Schilling completed her Ph.D. in 1997 at Stony Brook University. Her dissertation, "Bose-Fermi Identities and Bailey Flows in Statistical Mechanics and Conformal Field Theory", was supervised by Barry M. McCoy. From 1997 until 1999, she was a postdoctoral fellow at the Institute for Theoretical Physics at the University of Amsterdam and from 1999 until 2001, she was a C.L.E. Moore Instructor at the Mathematics Department at M.I.T. After that she joined the faculty at the Department of Mathematics at UC Davis. Books. With Thomas Lam, Luc Lapointe, Jennifer Morse, Mark Shimozono, and Mike Zabrocki, Schilling is the author of the research monograph "formula_0-Schur Functions and Affine Schubert Calculus" (Fields Institute Monographs 33, Springer, 2014). With Isaiah Lankham and Bruno Nachtergaele, Schilling is the author of the textbook on linear algebra, "Linear Algebra as an Introduction to Abstract Mathematics" (World Scientific, 2016). With Daniel Bump, she is the author of a more advanced book on crystal bases in representation theory, "Crystal Bases: Representations and Combinatorics" (World Scientific, 2017). Recognition. Schilling was a Fulbright Scholar from 1992 to 1993 as a doctoral student. In 2002 she received a Humboldt Research Fellowship. She was awarded a Simons Fellowship for the academic year 2012–2013. She was included in the 2019 class of fellows of the American Mathematical Society "for contributions to algebraic combinatorics, combinatorial representation theory, and mathematical physics and for service to the profession". Schilling was selected as the 43rd Emmy Noether Lecturer at the Joint Mathematics Meetings in San Francisco on January 3–6, 2024. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=58998360
59006
Number sign
Typographic symbol (#) The symbol # is known variously in English-speaking regions as the number sign, hash, or pound sign. The symbol has historically been used for a wide range of purposes including the designation of an ordinal number and as a ligatured abbreviation for pounds avoirdupois – having been derived from the now-rare ℔. Since 2007, widespread usage of the symbol to introduce metadata tags on social media platforms has led to such tags being known as "hashtags", and from that, the symbol itself is sometimes called a hashtag. The symbol is distinguished from similar symbols by its combination of level horizontal strokes and right-tilting vertical strokes. History. It is believed that the symbol traces its origins to the symbol ℔, an abbreviation of the Roman term "libra pondo", which translates as "pound weight". The abbreviation "lb" was printed as a dedicated ligature including a horizontal line across (which indicated abbreviation). Ultimately, the symbol was reduced for clarity as an overlay of two horizontal strokes "=" across two slash-like strokes "//". The symbol is described as the "number" character in an 1853 treatise on bookkeeping, and its double meaning is described in a bookkeeping text from 1880. The instruction manual of the Blickensderfer model 5 typewriter (c. 1896) appears to refer to the symbol as the "number mark". Some early-20th-century U.S. sources refer to it as the "number sign", although this could also refer to the numero sign (№). A 1917 manual distinguishes between two uses of the sign: "number (written before a figure)" and "pounds (written after a figure)". The use of the phrase "pound sign" to refer to this symbol is found from 1932 in U.S. usage. The term "hash sign" is found in South African writings from the late 1960s and from other non-North-American sources in the 1970s. For mechanical devices, the symbol appeared on the keyboard of the Remington Standard typewriter (c. 1886). 
It appeared in many of the early teleprinter codes and from there was copied to ASCII, which made it available on computers and thus caused many more uses to be found for the character. The symbol was introduced on the bottom right button of touch-tone keypads in 1968, but that button was not extensively used until the advent of large-scale voicemail (PBX systems, etc.) in the early 1980s. One of the uses in computers was to label the following text as having a different interpretation (such as a command or a comment) from the rest of the text. It was adopted for use within internet relay chat (IRC) networks circa 1988 to label groups and topics. This usage inspired Chris Messina to propose a similar system to be used on Twitter to tag topics of interest on the microblogging network; this became known as a hashtag. Although used initially and most popularly on Twitter, hashtag use has extended to other social media sites. Names. Number sign "Number sign" is the name chosen by the Unicode consortium. Most common in Canada and the northeastern United States. American telephone equipment companies which serve Canadian callers often have an option in their programming to denote Canadian English, which in turn instructs the system to say "number sign" to callers instead of "pound". Pound sign or pound In the United States, the "#" key on a phone is commonly referred to as the pound sign, "pound key", or simply "pound". Dialing instructions to an extension such as #77, for example, can be read as "pound seven seven". This name is rarely used outside the United States, where the term "pound sign" is understood to mean the currency symbol £. Hash, hash mark, hashmark In the United Kingdom, Australia, and some other countries, it is generally called a "hash" (probably from "hatch", referring to cross-hatching). Programmers also use this term; for instance "#!" is "hash, bang" or "shebang".
Hashtag Derived from the previous, the word "hashtag" is often used when reading social media messages aloud, indicating the start of a hashtag. For instance, the text "#foo" is often read out loud as "hashtag foo" (as opposed to "hash foo"). This leads to the common belief that the symbol itself is called "hashtag". Twitter documentation refers to it as "the hashtag symbol". Hex "Hex" is commonly used in Singapore and Malaysia, as spoken by many recorded telephone directory-assistance menus: "Please enter your phone number followed by the 'hex' key". The term "hex" is discouraged in Singapore in favour of "hash". In Singapore, a hash is also called "hex" in apartment addresses, where it precedes the floor number. &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Octothorp, octothorpe, octathorp, octatherp Most scholars believe the word was invented by workers at the Bell Telephone Laboratories by 1968, who needed a word for the symbol on the telephone keypad. Don MacPherson is said to have created the word by combining "octo" and the last name of Jim Thorpe, an Olympic medalist. Howard Eby and Lauren Asplund claim to have invented the word as a joke in 1964, combining "octo" with the syllable "therp" which, because of the "th" digraph, was hard to pronounce in different languages. "The Merriam-Webster New Book of Word Histories", 1991, has a long article that is consistent with Doug Kerr's essay, which says "octotherp" was the original spelling, and that the word arose in the 1960s among telephone engineers as a joke. Other hypotheses for the origin of the word include the last name of James Oglethorpe or using the Old English word for village, "thorp", because the symbol looks like a village surrounded by eight fields. The word was popularized within and outside Bell Labs. The first appearance of "octothorp" in a US patent is in a 1973 filing. This patent also refers to the six-pointed asterisk (✻) used on telephone buttons as a "sextile". 
Sharp Use of the name "sharp" is due to the symbol's resemblance to the sharp sign, ♯. The same derivation is seen in the name of the Microsoft programming languages C#, J# and F#. Microsoft says that the name "C#" is pronounced "see sharp". According to the ECMA-334 C# Language Specification, the name of the language is written "C#" ("LATIN CAPITAL LETTER C (U+0043) followed by the NUMBER SIGN # (U+0023)") and pronounced "C Sharp". Square On telephones, the International Telecommunication Union specification ITU-T E.161 3.2.2 states: "The symbol may be referred to as the square or the most commonly used equivalent term in other languages." Formally, this is not a number sign but rather another character, the square symbol. The real or virtual keypads on almost all modern telephones use the simple number sign instead, as does most documentation. Other Names that may be seen include: crosshatch, crunch, fence, flash, garden fence, garden gate, gate, grid, hak, mesh, oof, pig-pen, punch mark, rake, scratch, scratch mark, tic-tac-toe, and unequal. Usage. When ⟨#⟩ prefixes a number, it is read as "number". "A #2 pencil", for example, indicates "a number-two pencil". The abbreviations 'No.' and '№' are used commonly and interchangeably. The use of ⟨#⟩ as an abbreviation for "number" is common in informal writing, but use in print is rare. Where Americans might write "Symphony #5", British and Irish people usually write "Symphony No. 5". When ⟨#⟩ is after a number, it is read as "pound" or "pounds", meaning the unit of weight. The text "5# bag of flour" would mean "five-pound bag of flour". The abbreviations "lb." and "℔" are used commonly and interchangeably. This usage is rare outside North America, where "lb" or "lbs" is used. ⟨#⟩ is "not" a replacement for the pound sign ⟨£⟩, but British typewriters and keyboards have a ⟨£⟩ key where American keyboards have a ⟨#⟩ key.
Many early computer and teleprinter codes, such as BS 4730 (the UK national variant of the ISO/IEC 646 character set), substituted "£" for "#" to make the British versions, thus it was common for the same binary code to display as "#" on US equipment and as "£" on British equipment ("$" was not substituted to avoid confusing dollars and pounds in financial communications). Unicode. The number sign was assigned code 35 (hex 0x23) in ASCII where it was inherited by many character sets. In EBCDIC it is often at 0x7B or 0xEC. Unicode characters with "number sign" in their names: Additionally, a Unicode named sequence KEYCAP NUMBER SIGN is defined for the grapheme cluster (#️⃣). On keyboards. On the standard US keyboard layout, the # symbol is ⇧ Shift+3. On standard UK and some other European keyboards, the same keystrokes produce the pound (sterling) sign, £ symbol, and may be moved to a separate key above the right shift key. If there is no # key, the symbol can be produced on Windows with Alt+35 (typed on the numeric keypad), on Mac OS with ⌥ Opt+3, and on Linux with Ctrl+⇧ Shift+U 23. Explanatory notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|S|" }, { "math_id": 1, "text": "S = \\{s_1,s_2,s_3, \\dots , s_n\\}" }, { "math_id": 2, "text": "s_i" }, { "math_id": 3, "text": "\\#S = n = |S|." }, { "math_id": 4, "text": "a \\mid b" } ]
https://en.wikipedia.org/wiki?curid=59006
59020460
Leontovich boundary condition
The Shchukin-Leontovich boundary condition is a boundary condition in classical electrodynamics that relates the tangential components of the electric E"t" and magnetic H"t" fields on the surface of well-conducting bodies. Definition. As originally formulated by Soviet physicists Alexander Shchukin and Mikhail Leontovich, the boundary condition is given as formula_0 where formula_1 and formula_2 represent the tangential components of the electric and magnetic fields, formula_3 is the effective surface impedance, and formula_4 is a unit normal pointing into the conducting material. This condition is accurate when the conductivity of the conductor is large, which is the case for most metals. More generally, for cases when the radii of curvature of the conducting surface are large with respect to the skin depth, the resulting fields on the interior can be well approximated by plane waves, thus giving rise to the Shchukin-Leontovich condition. A generalization of the Shchukin-Leontovich impedance boundary condition for a flat surface of a uniform half-space with an arbitrary dielectric constant has been formulated as a one-sided non-local relation. Applications. The Shchukin-Leontovich boundary condition is useful in many scattering problems where one material is a metal with large (but finite) conductivity. As the condition provides a relationship between the electric and magnetic fields at the surface of the conductor, without knowledge of the fields within, the task of finding the total fields is considerably simplified. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{E_t} = \\zeta_s \\mathbf{H_t}\\times \\hat{n}," }, { "math_id": 1, "text": "\\mathbf{E_t}" }, { "math_id": 2, "text": "\\mathbf{H_t}" }, { "math_id": 3, "text": "\\zeta_s = \\sqrt{\\mu/\\epsilon}" }, { "math_id": 4, "text": "\\hat{n}" } ]
https://en.wikipedia.org/wiki?curid=59020460
5902168
Elasticity of substitution
Economic metric Elasticity of substitution is the ratio of the percentage change in the capital-labour ratio to the percentage change in the marginal rate of technical substitution. In a competitive market, it measures the percentage change in the ratio of the two inputs used in response to a percentage change in their prices. It gives a measure of the curvature of an isoquant, and thus, the substitutability between inputs (or goods), i.e. how easy it is to substitute one input (or good) for the other. History of the concept. John Hicks introduced the concept in 1932. Joan Robinson independently discovered it in 1933 using a mathematical formulation that was equivalent to Hicks's, though the equivalence was not recognized at the time. Definition. The general definition of the elasticity of X with respect to Y is formula_0, which reduces to formula_1 for infinitesimal changes and differentiable variables. The elasticity of substitution is the change in the ratio of the use of two goods with respect to the ratio of their marginal values or prices. The most common application is to the ratio of capital (K) and labor (L) used with respect to the ratio of their marginal products formula_2 and formula_3 or of the rental price (r) and the wage (w). Another application is to the ratio of consumption goods 1 and 2 with respect to the ratio of their marginal utilities or their prices. We will start with the consumption application. Let the utility over consumption be given by formula_4 and let formula_5. Then the elasticity of substitution is: formula_6 where formula_7 is the marginal rate of substitution. (These differentials are taken along the isoquant that passes through the base point. That is, the inputs formula_8 and formula_9 are not varied independently, but instead one input is varied freely while the other input is constrained to lie on the isoquant that passes through the base point.
Because of this constraint, the MRS and the ratio of inputs are one-to-one functions of each other under suitable convexity assumptions.) The last equality uses formula_10, where formula_11 are the prices of goods 1 and 2. This is a relationship from the first-order condition for a consumer utility maximization problem in Arrow–Debreu interior equilibrium, where the marginal utilities of two goods are proportional to prices. Intuitively we are looking at how a consumer's choices over consumption items change as their relative prices change. Note also that formula_12: formula_13 An equivalent characterization of the elasticity of substitution is: formula_14 In discrete-time models, the elasticity of substitution of consumption in periods formula_15 and formula_16 is known as elasticity of intertemporal substitution. Similarly, if the production function is formula_17 then the elasticity of substitution is: formula_18 where formula_19 is the marginal rate of technical substitution. The inverse of elasticity of substitution is elasticity of complementarity. Example. Consider the Cobb–Douglas production function formula_20. The marginal rate of technical substitution is formula_21 It is convenient to change the notation. Denote formula_22 Rewriting this we have formula_23 Then the elasticity of substitution is formula_24 Economic interpretation. Given an original allocation and a specific substitute allocation for it, a larger magnitude of the elasticity of substitution (the elasticity of the relative allocation with respect to the marginal rate of substitution) means the substitution is more likely. There are always two sides to the market; here we are talking about the receiver, since the elasticity of preference is that of the receiver. The elasticity of substitution also governs how the relative expenditure on goods or factor inputs changes as relative prices change.
Let formula_25 denote expenditure on formula_9 relative to that on formula_8. That is: formula_26 As the relative price formula_27 changes, relative expenditure changes according to: formula_28 Thus, whether or not an increase in the relative price of formula_9 leads to an increase or decrease in the relative "expenditure" on formula_9 depends on whether the elasticity of substitution is less than or greater than one. Intuitively, the direct effect of a rise in the relative price of formula_9 is to increase expenditure on formula_9, since a given quantity of formula_9 is more costly. On the other hand, assuming the goods in question are not Giffen goods, a rise in the relative price of formula_9 leads to a fall in relative demand for formula_9, so that the quantity of formula_9 purchased falls, which reduces expenditure on formula_9. Which of these effects dominates depends on the magnitude of the elasticity of substitution. When the elasticity of substitution is less than one, the first effect dominates: relative demand for formula_9 falls, but by proportionally less than the rise in its relative price, so that relative expenditure rises. In this case, the goods are gross complements. Conversely, when the elasticity of substitution is greater than one, the second effect dominates: the reduction in relative quantity exceeds the increase in relative price, so that relative expenditure on formula_9 falls. In this case, the goods are gross substitutes. Note that when the elasticity of substitution is exactly one (as in the Cobb–Douglas case), expenditure on formula_9 relative to formula_8 is independent of the relative prices. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E^X_Y = \\frac{\\%\\ \\mbox{change in X}}{\\%\\ \\mbox{change in Y}}" }, { "math_id": 1, "text": "E^X_Y = \\frac{dX}{dY} \\frac{Y}{X}" }, { "math_id": 2, "text": "MP_K" }, { "math_id": 3, "text": "MP_L" }, { "math_id": 4, "text": "U(c_1,c_2)" }, { "math_id": 5, "text": "U_{c_i}= \\partial U(c_1,c_2)/\\partial {c_i}" }, { "math_id": 6, "text": " E_{21} =\\frac{d \\ln (c_2/c_1) }{d \\ln (MRS_{12})}\n =\\frac{d \\ln (c_2/c_1) }{d \\ln (U_{c_1}/U_{c_2})}\n =\\frac{\\frac{d (c_2/c_1) }{c_2/c_1}}{\\frac{d (U_{c_1}/U_{c_2})}{U_{c_1}/U_{c_2}}}\n =\\frac{\\frac{d (c_2/c_1) }{c_2/c_1}}{\\frac{d (p_1/p_2)}{p_1/p_2}}\n" }, { "math_id": 7, "text": "MRS" }, { "math_id": 8, "text": "c_1" }, { "math_id": 9, "text": "c_2" }, { "math_id": 10, "text": "MRS_{12} = p_1/p_2 " }, { "math_id": 11, "text": "p_1, p_2" }, { "math_id": 12, "text": " E_{21} = E_{12}" }, { "math_id": 13, "text": " E_{21} =\\frac{d \\ln (c_2/c_1) }{d \\ln (U_{c_1}/U_{c_2})}\n =\\frac{d \\left(-\\ln (c_2/c_1)\\right) }{d \\left(-\\ln (U_{c_1}/U_{c_2})\\right)}\n =\\frac{d \\ln (c_1/c_2) }{d \\ln (U_{c_2}/U_{c_1})}\n = E_{12}\n" }, { "math_id": 14, "text": " E_{21} =\\frac{d \\ln (c_2/c_1) }{d \\ln (MRS_{12})}\n =-\\frac{d \\ln (c_2/c_1) }{d \\ln (MRS_{21})}\n =-\\frac{d \\ln (c_2/c_1) }{d \\ln (U_{c_2}/U_{c_1})}\n =-\\frac{\\frac{d (c_2/c_1) }{c_2/c_1}}{\\frac{d (U_{c_2}/U_{c_1})}{U_{c_2}/U_{c_1}}}\n =-\\frac{\\frac{d (c_2/c_1) }{c_2/c_1}}{\\frac{d (p_2/p_1)}{p_2/p_1}}\n" }, { "math_id": 15, "text": "t" }, { "math_id": 16, "text": "t+1" }, { "math_id": 17, "text": "f(x_1,x_2)" }, { "math_id": 18, "text": " \\sigma_{21} =\\frac{d \\ln (x_2/x_1) }{d \\ln MRTS_{12}}\n =\\frac{d \\ln (x_2/x_1) }{d \\ln (\\frac{df}{dx_1}/\\frac{df}{dx_2})}\n =\\frac{\\frac{d (x_2/x_1) }{x_2/x_1}}{\\frac{d (\\frac{df}{dx_1}/\\frac{df}{dx_2})}{\\frac{df}{dx_1}/\\frac{df}{dx_2}}}\n =-\\frac{\\frac{d (x_2/x_1) }{x_2/x_1}}{\\frac{d (\\frac{df}{dx_2}/\\frac{df}{dx_1})}{\\frac{df}{dx_2}/\\frac{df}{dx_1}}}\n" }, { 
"math_id": 19, "text": "MRTS" }, { "math_id": 20, "text": "f(x_1,x_2)=x_1^a x_2^{1-a}" }, { "math_id": 21, "text": "MRTS_{21} = \\frac{1-a}{a} \\frac{x_1}{x_2}" }, { "math_id": 22, "text": "\\frac{1-a}{a} \\frac{x_1}{x_2}=\\theta" }, { "math_id": 23, "text": "\\frac{x_1}{x_2} = \\frac{a}{1-a}\\theta " }, { "math_id": 24, "text": "\\sigma_{21} = \\frac{d \\ln (\\frac{x_1}{x_2})}{d \\ln (MRTS_{21})} = \\frac{d \\ln (\\frac{x_1}{x_2})}{d \\ln (\\theta)} = \\frac{d \\frac{x_1}{x_2}}{\\frac{x_1}{x_2}} \\frac{\\theta}{d \\theta} = \\frac{d \\frac{x_1}{x_2}}{d \\theta} \\frac{\\theta}{\\frac{x_1}{x_2}} = \\frac{a}{1-a} \\frac{1-a}{a} \\frac{x_1}{x_2} \\frac{x_2}{x_1} = 1\n\n" }, { "math_id": 25, "text": "S_{21}" }, { "math_id": 26, "text": " S_{21} \\equiv \\frac{p_2 c_2}{p_1 c_1}\n" }, { "math_id": 27, "text": "p_2/p_1" }, { "math_id": 28, "text": " \\frac{dS_{21}}{d\\left(p_2/p_1\\right)} = \\frac{c_2}{c_1} + \\frac{p_2}{p_1}\\cdot\\frac{d\\left(c_2/c_1\\right)}{d\\left(p_2/p_1\\right)}\n = \\frac{c_2}{c_1}\\left[1 + \\frac{d\\left(c_2/c_1\\right)}{d\\left(p_2/p_1\\right)}\\cdot\\frac{p_2/p_1}{c_2/c_1} \\right]\n = \\frac{c_2}{c_1}\\left(1 - E_{21} \\right)\n" } ]
https://en.wikipedia.org/wiki?curid=5902168
5902964
Elasticity of a function
In mathematics, the elasticity or point elasticity of a positive differentiable function "f" of a positive variable (positive input, positive output) at point "a" is defined as formula_0 formula_1 or equivalently formula_2 It is thus the ratio of the relative (percentage) change in the function's output formula_3 with respect to the relative change in its input formula_4, for infinitesimal changes from a point formula_5. Equivalently, it is the ratio of the infinitesimal change of the logarithm of a function with respect to the infinitesimal change of the logarithm of the argument. Generalizations to multi-input–multi-output cases also exist in the literature. The elasticity of a function is a constant formula_6 if and only if the function has the form formula_7 for a constant formula_8. The elasticity at a point is the limit of the arc elasticity between two points as the separation between those two points approaches zero. The concept of elasticity is widely used in economics and metabolic control analysis (MCA); see elasticity (economics) and elasticity coefficient respectively for details. Rules. Rules for finding the elasticity of products and quotients are simpler than those for derivatives. Let "f, g" be differentiable. Then formula_9 formula_10 formula_11 formula_12 The derivative can be expressed in terms of elasticity as formula_13 Let "a" and "b" be constants. Then formula_14 formula_15, formula_16. Estimating point elasticities. In economics, the price elasticity of demand refers to the elasticity of a demand function "Q"("P"), and can be expressed as (dQ/dP)/(Q(P)/P) or the ratio of the value of the marginal function (dQ/dP) to the value of the average function (Q(P)/P). This relationship provides an easy way of determining whether a demand curve is elastic or inelastic at a particular point. First, suppose one follows the usual convention in mathematics of plotting the independent variable (P) horizontally and the dependent variable (Q) vertically. 
Then the slope of a line tangent to the curve at that point is the value of the marginal function at that point. The slope of a ray drawn from the origin through the point is the value of the average function. If the absolute value of the slope of the tangent is greater than the slope of the ray then the function is elastic at the point; if the slope of the ray is greater than the absolute value of the slope of the tangent then the curve is inelastic at the point. If the tangent line is extended to the horizontal axis, the problem is simply a matter of comparing angles created by the lines and the horizontal axis. If the marginal angle is greater than the average angle then the function is elastic at the point; if the marginal angle is less than the average angle then the function is inelastic at that point. If, however, one follows the convention adopted by economists and plots the independent variable "P" on the vertical axis and the dependent variable "Q" on the horizontal axis, then the opposite rules would apply. The same graphical procedure can also be applied to a supply function or other functions. Semi-elasticity. A semi-elasticity (or semielasticity) gives the percentage change in "f(x)" in terms of a change (not percentage-wise) in "x". Algebraically, the semi-elasticity S of a function "f" at point "x" is formula_17 The semi-elasticity will be constant for exponential functions of the form formula_18 since formula_19 An example of semi-elasticity is modified duration in bond trading. The opposite definition is sometimes used in the literature. That is, the term "semi-elasticity" is also sometimes used for the change (not percentage-wise) in "f(x)" in terms of a percentage change in "x", which would be formula_20 References.
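The constant-elasticity property of power laws and the product rule above can be verified numerically. A minimal sketch using a central finite-difference approximation of the derivative (the function names and sample values are illustrative):

```python
def elasticity(f, x, h=1e-6):
    """Numerical point elasticity x * f'(x) / f(x), via central difference."""
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return x * dfdx / f(x)

f = lambda x: 3.0 * x ** 2.5   # power law C*x**a: elasticity should equal a = 2.5
g = lambda x: x + 1.0          # a non-power-law factor

x0 = 2.0
assert abs(elasticity(f, x0) - 2.5) < 1e-5

# Product rule: E(f*g) = E(f) + E(g)
prod = lambda x: f(x) * g(x)
lhs = elasticity(prod, x0)
rhs = elasticity(f, x0) + elasticity(g, x0)
assert abs(lhs - rhs) < 1e-5
print("elasticity rules verified at x =", x0)
```

The same helper can be pointed at any positive differentiable function; for the power law the result is the exponent regardless of the point chosen, illustrating the "constant elasticity if and only if power law" statement above.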
[ { "math_id": 0, "text": "Ef(a) = \\frac{a}{f(a)}f'(a)" }, { "math_id": 1, "text": "=\\lim_{x\\to a}\\frac{f(x)-f(a)}{x-a}\\frac{a}{f(a)}=\\lim_{x\\to a}\\frac{f(x)-f(a)}{f(a)}\\frac{a}{x-a}=\\lim_{x\\to a}\\frac{\\frac{f(x)}{f(a)}-1}{\\frac{x}{a}-1}\\approx \\frac{\\%\\Delta f(a)}{\\%\\Delta a} " }, { "math_id": 2, "text": "Ef(x) = \\frac{d \\log f(x)}{d \\log x}." }, { "math_id": 3, "text": "f(x)" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "(a, f(a))" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "f(x) = C x ^ \\alpha" }, { "math_id": 8, "text": "C>0" }, { "math_id": 9, "text": "E ( f(x) \\cdot g(x) ) = E f(x) + E g(x)" }, { "math_id": 10, "text": "E \\frac{f(x)}{g(x)} = E f(x) - E g(x)" }, { "math_id": 11, "text": "E ( f(x) + g(x) ) = \\frac{f(x) \\cdot E(f(x)) + g(x) \\cdot E(g(x))}{f(x) + g(x)} " }, { "math_id": 12, "text": "E ( f(x) - g(x) ) = \\frac{f(x) \\cdot E(f(x)) - g(x) \\cdot E(g(x))}{f(x) - g(x)} " }, { "math_id": 13, "text": "D f(x) = \\frac{E f(x) \\cdot f(x)}{x}" }, { "math_id": 14, "text": "E ( a ) = 0 \\ " }, { "math_id": 15, "text": " E ( a \\cdot f(x) ) = E f(x) " }, { "math_id": 16, "text": " E (b x^a) = a \\ " }, { "math_id": 17, "text": "Sf(x) = \\frac{1}{f(x)}f'(x) = \\frac{d \\ln f(x)}{d x}" }, { "math_id": 18, "text": "f(x) = C \\alpha^x" }, { "math_id": 19, "text": " \\ln{f} = \\ln{C\\alpha^x} = \\ln{C} + x \\ln{\\alpha} \\implies \\frac{d \\ln{f}}{d x} = \\ln{\\alpha}. " }, { "math_id": 20, "text": "\\frac{d f(x)}{d\\ln(x)}=\\frac{d f(x)}{dx}x" } ]
https://en.wikipedia.org/wiki?curid=5902964
59031392
Quantum speed limit
Limitation on the minimum time for a quantum system to evolve between two states In quantum mechanics, a quantum speed limit (QSL) is a limitation on the minimum time for a quantum system to evolve between two distinguishable (orthogonal) states. QSL theorems are closely related to time-energy uncertainty relations. In 1945, Leonid Mandelstam and Igor Tamm derived a time-energy uncertainty relation that bounds the speed of evolution in terms of the energy dispersion. Over half a century later, Norman Margolus and Lev Levitin showed that the speed of evolution cannot exceed the mean energy, a result known as the Margolus–Levitin theorem. Realistic physical systems in contact with an environment are known as open quantum systems, and their evolution is also subject to QSLs. Quite remarkably, it was shown that environmental effects such as non-Markovian dynamics can speed up quantum processes, a prediction verified in a cavity QED experiment. QSLs have been used to explore the limits of computation and complexity. In 2017, QSLs were studied in a quantum oscillator at high temperature. In 2018, it was shown that QSLs are not restricted to the quantum domain and that similar bounds hold in classical systems. In 2021, both the Mandelstam–Tamm and the Margolus–Levitin QSL bounds were concurrently tested in a single experiment, which indicated there are "two different regimes: one where the Mandelstam-Tamm limit constrains the evolution at all times, and a second where a crossover to the Margolus-Levitin limit occurs at longer times." Preliminary definitions. The speed limit theorems can be stated for pure states and for mixed states; they take a simpler form for pure states. An arbitrary pure state can be written as a linear combination of energy eigenstates: formula_0 The task is to provide a lower bound for the time interval formula_1 required for the initial state formula_2 to evolve into a state orthogonal to formula_2.
The time evolution of a pure state is given by the Schrödinger equation: formula_3 Orthogonality is obtained when formula_4, and the minimum time interval formula_5 required to achieve this condition is called the orthogonalization interval or orthogonalization time. Mandelstam–Tamm limit. For pure states, the Mandelstam–Tamm theorem states that the minimum time formula_6 required for a state to evolve into an orthogonal state is bounded below: formula_7, where formula_8 is the variance of the system's energy and formula_9 is the Hamiltonian operator. The quantum evolution is independent of the particular Hamiltonian used to transport the quantum system along a given curve in the projective Hilbert space; the distance along this curve is measured by the Fubini–Study metric. This is sometimes called the quantum angle, as it can be understood as the arccos of the inner product of the initial and final states. For mixed states. The Mandelstam–Tamm limit can also be stated for mixed states and for time-varying Hamiltonians. In this case, the Bures metric must be employed in place of the Fubini–Study metric. A mixed state can be understood as a sum over pure states, weighted by classical probabilities; likewise, the Bures metric is a weighted sum of the Fubini–Study metric. For a time-varying Hamiltonian formula_10 and time-varying density matrix formula_11 the variance of the energy is given by formula_12 The Mandelstam–Tamm limit then takes the form formula_13, where formula_14 is the Bures distance between the starting and ending states. The Bures distance is geodesic, giving the shortest possible distance of any continuous curve connecting two points, with formula_15 understood as an infinitesimal path length along a curve parametrized by formula_16 Equivalently, the time formula_17 taken to evolve from formula_18 to formula_19 is bounded as formula_20 where formula_21 is the time-averaged uncertainty in energy.
For a pure state evolving under a time-varying Hamiltonian, the time formula_17 taken to evolve from one pure state to another pure state orthogonal to it is bounded as formula_22 This follows, as for a pure state, one has the density matrix formula_23 The quantum angle (Fubini–Study distance) is then formula_24 and so one concludes formula_25 when the initial and final states are orthogonal. Margolus–Levitin limit. For the case of a pure state, Margolus and Levitin obtain a different limit, that formula_26 where formula_27 is the average energy, formula_28 This form applies when the Hamiltonian is not time-dependent, and the ground-state energy is defined to be zero. For time-varying states. The Margolus–Levitin theorem can also be generalized to the case where the Hamiltonian varies with time, and the system is described by a mixed state. In this form, it is given by formula_29 with the ground-state defined so that it has energy zero at all times. This provides a result for time varying states. Although it also provides a bound for mixed states, the bound (for mixed states) can be so loose as to be uninformative. The Margolus–Levitin theorem has not yet been established in time-dependent quantum systems, whose Hamiltonians formula_10 are driven by arbitrary time-dependent parameters, except for the adiabatic case. Levitin–Toffoli limit. A 2009 result by Lev B. Levitin and Tommaso Toffoli states that the precise bound for the Mandelstam–Tamm theorem is attained only for a qubit state. This is a two-level state in an equal superposition formula_30 for energy eigenstates formula_31 and formula_32. 
The states formula_33 and formula_34 are unique up to degeneracy of the energy level formula_35 and an arbitrary phase factor formula_36 This result is sharp, in that this state also satisfies the Margolus–Levitin bound, in that formula_37 and so formula_38 This result establishes that the combined limits are strict: formula_39 Levitin and Toffoli also provide a bound for the average energy in terms of the maximum. For any pure state formula_40 the average energy is bounded as formula_41 where formula_42 is the maximum energy eigenvalue appearing in formula_43 (This is the quarter-pinched sphere theorem in disguise, transported to complex projective space.) Thus, one has the bound formula_44 The strict lower bound formula_45 is again attained for the qubit state formula_46 with formula_47. Bremermann's limit. The quantum speed limit bounds establish an upper bound at which computation can be performed. Computational machinery is constructed out of physical matter that follows quantum mechanics, and each operation, if it is to be unambiguous, must be a transition of the system from one state to an orthogonal state. Suppose the computing machinery is a physical system evolving under Hamiltonian that does not change with time. Then, according to the Margolus–Levitin theorem, the number of operations per unit time per unit energy is bounded above by formula_48 This establishes a strict upper limit on the number of calculations that can be performed by physical matter. The processing rate of "all" forms of computation cannot be higher than about 6 × 1033 operations per second per joule of energy. This is including "classical" computers, since even classical computers are still made of matter that follows quantum mechanics. This bound is not merely a fanciful limit: it has practical ramifications for quantum-resistant cryptography. Imagining a computer operating at this limit, a brute-force search to break a 128-bit encryption key requires only modest resources. 
Brute-forcing a 256-bit key requires planetary-scale computers, while a brute-force search of 512-bit keys is effectively unattainable within the lifetime of the universe, even if galactic-sized computers were applied to the problem. The Bekenstein bound limits the amount of information that can be stored within a volume of space. The maximal rate of change of information within that volume of space is given by the quantum speed limit. This product of limits is sometimes called the Bremermann–Bekenstein limit; it is saturated by Hawking radiation. That is, Hawking radiation is emitted at the maximal allowed rate set by these bounds. References.
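Both the saturation of the two bounds by the qubit state and the computation-rate figure quoted above can be verified numerically. A minimal sketch, working in natural units (ħ = 1) for the qubit part; the energy value E1 is illustrative:

```python
import cmath
import math

# --- Qubit that saturates both QSL bounds (natural units, hbar = 1) ---
hbar = 1.0
E1 = 2.0               # excited-state energy; the ground state has E0 = 0
E_avg = E1 / 2         # mean energy of the equal superposition
dE = E1 / 2            # energy spread of the equal superposition

def overlap(t):
    """<psi_0|psi_t> for the state (|E0> + |E1>)/sqrt(2) under Schrodinger evolution."""
    return 0.5 * (1 + cmath.exp(1j * E1 * t / hbar))

t_perp = math.pi * hbar / E1           # first time at which the overlap vanishes
assert abs(overlap(t_perp)) < 1e-12
# Mandelstam-Tamm and Margolus-Levitin bounds are both met with equality:
assert math.isclose(t_perp, math.pi * hbar / (2 * dE))
assert math.isclose(t_perp, math.pi * hbar / (2 * E_avg))

# --- Margolus-Levitin computation rate, now in SI units ---
hbar_SI = 1.054571817e-34              # reduced Planck constant, J*s
ops_per_joule_second = 2 / (math.pi * hbar_SI)
print(f"max rate: {ops_per_joule_second:.2e} ops per second per joule")
```

The printed rate is about 6.0 × 10^33 operations per second per joule, matching the figure quoted above, and the equal-superposition qubit reaches orthogonality exactly at the time both bounds allow.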
[ { "math_id": 0, "text": "|\\psi\\rangle = \\sum_n c_n |E_n\\rangle." }, { "math_id": 1, "text": "t_\\perp" }, { "math_id": 2, "text": "|\\psi\\rangle" }, { "math_id": 3, "text": "|\\psi_t\\rangle = \\sum_n c_n e^{itE_n/\\hbar}|E_n\\rangle." }, { "math_id": 4, "text": "\\langle\\psi_0|\\psi_t\\rangle=0" }, { "math_id": 5, "text": "t=t_\\perp" }, { "math_id": 6, "text": "t_{\\perp}" }, { "math_id": 7, "text": "t_{\\perp} \\ge \\frac{\\pi\\hbar}{2\\,\\delta E}= \\frac{h}{4\\,\\delta E}" }, { "math_id": 8, "text": "(\\delta E)^2 = \\left\\langle \\psi|H^2|\\psi\\right\\rangle - (\\left\\langle \\psi|H|\\psi\\right\\rangle)^2\n=\\frac{1}{2}\\sum_{n,m} |c_n|^2 |c_m|^2 (E_n-E_m)^2\n" }, { "math_id": 9, "text": "H" }, { "math_id": 10, "text": "H_t" }, { "math_id": 11, "text": "\\rho_t," }, { "math_id": 12, "text": "\\sigma^2_H(t)=|\\text{tr}(\\rho_t H^2_{t})|-|\\text{tr}(\\rho_t H_{t})|^2" }, { "math_id": 13, "text": "\\int_0^{\\tau} \\sigma_H(t) dt \\geq \\hbar D_B(\\rho_0, \\rho_{\\tau})" }, { "math_id": 14, "text": "D_B" }, { "math_id": 15, "text": "\\sigma_H(t)" }, { "math_id": 16, "text": "t." }, { "math_id": 17, "text": "\\tau" }, { "math_id": 18, "text": "\\rho" }, { "math_id": 19, "text": "\\rho'" }, { "math_id": 20, "text": "\\tau \\geq \\frac{\\hbar}{\\overline\\sigma_H}D_B(\\rho, \\rho')" }, { "math_id": 21, "text": "\\overline \\sigma_H = \\frac{1}{\\tau}\\int_0^\\tau \\sigma_H(t)dt" }, { "math_id": 22, "text": "\\tau \\geq \\frac{\\hbar}{\\overline\\sigma_H} \\frac{\\pi}{2}" }, { "math_id": 23, "text": "\\rho_t=|\\psi_t\\rangle\\langle\\psi_t|." }, { "math_id": 24, "text": "D_B(\\rho_0,\\rho_t)=\\arccos| \\langle\\psi_0|\\psi_t\\rangle|" }, { "math_id": 25, "text": "D_B=\\arccos 0=\\pi/2" }, { "math_id": 26, "text": "\\tau_\\perp \\geq \\frac{h}{4\\langle E\\rangle}," }, { "math_id": 27, "text": "\\langle E\\rangle" }, { "math_id": 28, "text": "\\langle E \\rangle = E_\\text{avg} = \\langle \\psi |H | \\psi \\rangle =\\sum_n |c_n|^2 E_n." 
}, { "math_id": 29, "text": "\\int_0^{\\tau}|\\text{tr}(\\rho_t H_{t})| dt \\geq \\hbar D_B(\\rho_0, \\rho_{\\tau})" }, { "math_id": 30, "text": "\\left|\\psi_q\\right\\rangle = \\frac{1}{\\sqrt{2}}\\left(\\left|E_0\\right\\rangle + e^{i \\varphi}\\left|E_1\\right\\rangle \\right)" }, { "math_id": 31, "text": "E_0=0" }, { "math_id": 32, "text": "E_1=\\pm \\pi\\hbar /\\Delta t" }, { "math_id": 33, "text": "\\left|E_0\\right\\rangle" }, { "math_id": 34, "text": "\\left|E_1\\right\\rangle" }, { "math_id": 35, "text": "E_1" }, { "math_id": 36, "text": "\\varphi." }, { "math_id": 37, "text": "E_\\text{avg}=\\delta E" }, { "math_id": 38, "text": "t_{\\perp}=\\hbar\\pi/2E_\\text{avg}=\\hbar\\pi/2\\delta E." }, { "math_id": 39, "text": "t_\\perp\\ge\\max\\left(\\frac{\\pi\\hbar}{2\\,\\delta E}\\;,\\; \\frac{\\pi\\hbar}{2\\,E_\\text{avg}}\\right)" }, { "math_id": 40, "text": "\\left|\\psi\\right\\rangle," }, { "math_id": 41, "text": "\\frac{E_\\text{max}}{4} \\le E_\\text{avg} \\le \\frac{E_\\text{max}}{2}" }, { "math_id": 42, "text": "E_\\text{max}" }, { "math_id": 43, "text": "\\left|\\psi\\right\\rangle." }, { "math_id": 44, "text": "\\frac{\\pi \\hbar}{E_\\text{max}} \\le t_{\\perp} \\le \\frac{2 \\pi \\hbar}{E_\\text{max}}" }, { "math_id": 45, "text": "E_\\text{max} t_{\\perp} = \\pi \\hbar" }, { "math_id": 46, "text": "\\left|\\psi_q\\right\\rangle" }, { "math_id": 47, "text": "E_\\text{max} = E_1" }, { "math_id": 48, "text": "\\frac{2}{\\hbar \\pi} = 6 \\times 10^{33} \\mathrm{s}^{-1}\\cdot \\mathrm{J}^{-1} " } ]
https://en.wikipedia.org/wiki?curid=59031392
5903656
Tidal heating
Heating of a planet's or moon's ocean or interior through tidal friction Tidal heating (also known as tidal working or tidal flexing) occurs through tidal friction processes: orbital and rotational energy is dissipated as heat in the surface ocean or interior (or both) of a planet or satellite. When an object is in an elliptical orbit, the tidal forces acting on it are stronger near periapsis than near apoapsis. Thus the deformation of the body due to tidal forces (i.e. the tidal bulge) varies over the course of its orbit, generating internal friction which heats its interior. This energy gained by the object comes from its orbital energy and/or rotational energy, so over time in a two-body system, the initial elliptical orbit decays into a circular orbit (tidal circularization) and the rotational periods of the two bodies adjust towards matching the orbital period (tidal locking). Sustained tidal heating occurs when the elliptical orbit is prevented from circularizing by additional gravitational forces from other bodies that keep tugging the object back into an elliptical orbit. In this more complex system, orbital and rotational energy is still converted to thermal energy; however, now it is the orbit's semimajor axis that shrinks, rather than its eccentricity. Moons of gas giants. Tidal heating is responsible for the geologic activity of the most volcanically active body in the Solar System: Io, a moon of Jupiter. Io's eccentricity persists as the result of its orbital resonances with the Galilean moons Europa and Ganymede. The same mechanism has provided the energy to melt the lower layers of the ice surrounding the rocky mantle of Jupiter's next-closest large moon, Europa.
However, the heating of the latter is weaker, because of reduced flexing—Europa has half Io's orbital frequency and a 14% smaller radius; also, while Europa's orbit is about twice as eccentric as Io's, tidal force falls off with the cube of distance and is only a quarter as strong at Europa. Jupiter maintains the moons' orbits via tides they raise on it, and thus its rotational energy ultimately powers the system. Saturn's moon Enceladus is similarly thought to have a liquid water ocean beneath its icy crust, due to tidal heating related to its resonance with Dione. The water vapor geysers which eject material from Enceladus are thought to be powered by friction generated within its interior. Earth. Munk & Wunsch (1998) estimated that Earth experiences 3.7 TW (0.0073 W/m2) of tidal heating, of which 95% (3.5 TW or 0.0069 W/m2) is associated with ocean tides and 5% (0.2 TW or 0.0004 W/m2) is associated with Earth tides, with 3.2 TW being due to tidal interactions with the Moon and 0.5 TW being due to tidal interactions with the Sun. Egbert & Ray (2001) confirmed that overall estimate, writing "The total amount of tidal energy dissipated in the Earth-Moon-Sun system is now well-determined. The methods of space geodesy—altimetry, satellite laser ranging, lunar laser ranging—have converged to 3.7 TW..." Heller et al. (2021) estimated that shortly after the Moon was formed, when the Moon orbited 10–15 times closer to Earth than it does now, tidal heating might have contributed ~10 W/m2 of heating over perhaps 100 million years, and that this could have accounted for a temperature increase of up to 5°C on the early Earth. Moon. Harada et al. (2014) proposed that tidal heating may have created a molten layer at the core-mantle boundary within Earth's Moon. Io. Io, the innermost of Jupiter's Galilean moons, experiences considerable tidal heating. Formula.
The tidal heating rate, formula_0, in a satellite that is spin-synchronous, coplanar (formula_1), and has an eccentric orbit is given by: formula_2 where formula_3, formula_4, formula_5, and formula_6 are respectively the satellite's mean radius, mean orbital motion, orbital distance, and eccentricity. formula_7 is the host (or central) body's mass, and formula_8 represents the imaginary portion of the second-order Love number, which measures the efficiency with which the satellite dissipates tidal energy into frictional heat. This imaginary portion is defined by the interplay of the body's rheology and self-gravitation. It is therefore a function of the body's radius, density, and rheological parameters (the shear modulus, viscosity, and others, dependent upon the rheological model). The rheological parameters' values, in turn, depend upon the temperature and the concentration of partial melt in the body's interior. The tidally dissipated power in a non-synchronous rotator is given by a more complex expression. References.
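As a worked example, the formula can be evaluated for Io-like parameters. A minimal sketch: the orbital and physical figures below are nominal published values, while Im(k2) is an assumed illustrative value (of the order of the k2/Q inferred for Io), so the result should be read only as an order-of-magnitude check:

```python
import math

# Evaluate the spin-synchronous tidal heating formula for Io-like parameters.
G = 6.674e-11                         # gravitational constant, m^3 kg^-1 s^-2
M_h = 1.898e27                        # Jupiter's mass, kg
R = 1.822e6                           # Io's mean radius, m
a = 4.217e8                           # orbital distance, m
e = 0.0041                            # orbital eccentricity
n = 2 * math.pi / (1.769 * 86400)     # mean orbital motion, rad/s (1.769-day period)
im_k2 = -0.015                        # assumed Im(k2); negative for dissipation

E_dot = -im_k2 * (21 / 2) * G * M_h**2 * R**5 * n * e**2 / a**6
print(f"tidal heating rate ~ {E_dot:.1e} W")
```

With these inputs the result is of order 10^14 W, broadly consistent with Io's observed heat flow, illustrating the strong dependence of the heating rate on radius (fifth power) and orbital distance (inverse sixth power).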
[ { "math_id": 0, "text": "\\dot{E}_\\text{Tidal}" }, { "math_id": 1, "text": "I=0" }, { "math_id": 2, "text": "\\dot{E}_\\text{Tidal} = -\\operatorname{Im}(k_2)\\frac{21}{2} \\frac{G M_h^2 R^5 n e^2}{a^6}" }, { "math_id": 3, "text": "R" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "e" }, { "math_id": 7, "text": "M_{h}" }, { "math_id": 8, "text": "\\operatorname{Im}(k_{2})" } ]
https://en.wikipedia.org/wiki?curid=5903656
59038
Fourier series
Decomposition of periodic functions into sums of simpler sinusoidal forms A Fourier series is an expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series, but not all trigonometric series are Fourier series. By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions, described in Common forms of the Fourier series below. The study of the convergence of Fourier series focuses on the behavior of the "partial sums", which means studying the behavior of the sum as more and more terms from the series are summed. The figures below illustrate some partial Fourier series results for the components of a square wave. Fourier series are closely related to the Fourier transform, which can be used to find the frequency information for functions that are not periodic. Periodic functions can be identified with functions on a circle; for this reason Fourier series are the subject of Fourier analysis on a circle, usually denoted as formula_1 or formula_2. The Fourier transform is also part of Fourier analysis, but is defined for functions on formula_3.
Since Fourier's time, many different approaches to defining and understanding the concept of Fourier series have been discovered, all of which are consistent with one another, but each of which emphasizes different aspects of the topic. Some of the more powerful and elegant approaches are based on mathematical ideas and tools that were not available in Fourier's time. Fourier originally defined the Fourier series for real-valued functions of real arguments, and used the sine and cosine functions in the decomposition. Many other Fourier-related transforms have since been defined, extending his initial idea to many applications and giving rise to an area of mathematics called Fourier analysis. Common forms of the Fourier series. A Fourier series is a continuous, periodic function created by a summation of harmonically related sinusoidal functions. It has several different, but equivalent, forms, shown here as partial sums. But in theory formula_4 The subscripted symbols, called "coefficients", and the period, formula_5 determine the function formula_6 as follows: Fourier series, amplitude-phase form Fourier series, sine-cosine form Fourier series, exponential form The harmonics are indexed by an integer, formula_7 which is also the number of cycles the corresponding sinusoids make in interval formula_8. Therefore, the sinusoids have: Clearly these series can represent functions that are just a sum of one or more of the harmonic frequencies. The remarkable thing is that they can also represent the intermediate frequencies and/or non-sinusoidal functions because of the infinite number of terms. The amplitude-phase form is particularly useful for its insight into the rationale for the series coefficients. The exponential form is most easily generalized for complex-valued functions. The equivalence of these forms requires certain relationships among the coefficients.
For instance, the trigonometric identity: Equivalence of polar and rectangular forms means that: Therefore, formula_12 and formula_13 are the rectangular coordinates of a vector with polar coordinates formula_14 and formula_15 The coefficients can be given or assumed, as with a music synthesizer or time samples of a waveform. In the latter case, the exponential form of Fourier series synthesizes a discrete-time Fourier transform where variable formula_10 represents frequency instead of time. But typically the coefficients are determined by frequency/harmonic analysis of a given real-valued function formula_16 and formula_10 represents time: Fourier series analysis The objective is for formula_17 to converge to formula_18 at most or all values of formula_10 in an interval of length formula_19 For the well-behaved functions typical of physical processes, equality is customarily assumed, and the Dirichlet conditions provide sufficient conditions. The notation formula_20 represents integration over the chosen interval. Typical choices are formula_21 and formula_22. Some authors define formula_23 because it simplifies the arguments of the sinusoid functions, at the expense of generality. And some authors assume that formula_18 is also formula_24-periodic, in which case formula_17 approximates the entire function. The formula_25 scaling factor is explained by taking a simple case: formula_26 Only the formula_27 term of Eq.2 is needed for convergence, with formula_28 and formula_29 Accordingly Eq.5 provides: formula_30 as required. Exponential form coefficients. Another applicable identity is Euler's formula: formula_31 Substituting this into Eq.1 and comparing with Eq.3 ultimately reveals: Exponential form coefficients Conversely: Inverse relationships formula_32 Substituting Eq.5 into Eq.6 also reveals: Fourier series analysis Complex-valued functions. Eq.7 and Eq.3 also apply when formula_18 is a complex-valued function.
This follows by expressing formula_33 and formula_34 as separate real-valued Fourier series, and formula_35 Derivation. The coefficients formula_14 and formula_36 can be understood and derived in terms of the cross-correlation between formula_18 and a sinusoid at frequency formula_11. For a general frequency formula_37 and an analysis interval formula_38 the cross-correlation function: Derivation of Eq.1 is essentially a matched filter, with "template" formula_39. The maximum of formula_40 is a measure of the amplitude formula_41 of frequency formula_42 in the function formula_18, and the value of formula_43 at the maximum determines the phase formula_44 of that frequency. Figure 2 is an example, where formula_18 is a square wave (not shown), and frequency formula_42 is the formula_45 harmonic. It is also an example of deriving the maximum from just two samples, instead of searching the entire function. Combining Eq.8 with Eq.4 gives: formula_46 The derivative of formula_47 is zero at the phase of maximum correlation. formula_48 Therefore, computing formula_12 and formula_13 according to Eq.5 creates the component's phase formula_36 of maximum correlation. And the component's amplitude is: formula_49 Other common notations. The notation formula_50 is inadequate for discussing the Fourier coefficients of several different functions. Therefore, it is customarily replaced by a modified form of the function (formula_51 in this case), such as formula_52 or formula_53, and functional notation often replaces subscripting: formula_54 In engineering, particularly when the variable formula_10 represents time, the coefficient sequence is called a frequency domain representation. Square brackets are often used to emphasize that the domain of this function is a discrete set of frequencies. Another commonly used frequency domain representation uses the Fourier series coefficients to modulate a Dirac comb: formula_55 where formula_42 represents a continuous frequency domain. 
When variable formula_10 has units of seconds, formula_42 has units of hertz. The "teeth" of the comb are spaced at multiples (i.e. harmonics) of formula_56, which is called the fundamental frequency. formula_57 can be recovered from this representation by an inverse Fourier transform: formula_58 The constructed function formula_0 is therefore commonly referred to as a Fourier transform, even though the Fourier integral of a periodic function is not convergent at the harmonic frequencies. Analysis example. Consider a sawtooth function: formula_59 formula_60 In this case, the Fourier coefficients are given by formula_61 It can be shown that the Fourier series converges to formula_18 at every point formula_10 where formula_62 is differentiable, and therefore: When formula_63, the Fourier series converges to 0, which is the half-sum of the left- and right-limit of "s" at formula_63. This is a particular instance of the Dirichlet theorem for Fourier series. This example leads to a solution of the Basel problem. Convergence. A proof that a Fourier series is a valid representation of any periodic function (that satisfies the Dirichlet conditions) is overviewed elsewhere in the article. In engineering applications, the Fourier series is generally assumed to converge except at jump discontinuities since the functions encountered in engineering are better-behaved than functions encountered in other disciplines. In particular, if formula_62 is continuous and the derivative of formula_18 (which may not exist everywhere) is square-integrable, then the Fourier series of formula_62 converges absolutely and uniformly to formula_18. If a function is square-integrable on the interval formula_64, then the Fourier series converges to the function almost everywhere. It is possible to define Fourier coefficients for more general functions or distributions, in which case pointwise convergence often fails, and convergence in norm or weak convergence is usually studied.
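The pointwise convergence at points of differentiability can be seen numerically. A minimal sketch that assumes the concrete sawtooth s(x) = x/π on (−π, π), extended 2π-periodically; its sine coefficients b_n = 2(−1)^(n+1)/(πn), used below, follow from the standard analysis formula for that choice:

```python
import math

def sawtooth(x):
    """s(x) = x/pi on (-pi, pi), extended 2*pi-periodically."""
    x = (x + math.pi) % (2 * math.pi) - math.pi
    return x / math.pi

def partial_sum(x, N):
    """Partial Fourier series of the sawtooth: a_n = 0, b_n = 2*(-1)**(n+1)/(pi*n)."""
    return sum(2 * (-1) ** (n + 1) / (math.pi * n) * math.sin(n * x)
               for n in range(1, N + 1))

x = math.pi / 2                       # a point where s is differentiable
for N in (10, 100, 1000):
    err = abs(partial_sum(x, N) - sawtooth(x))
    print(f"N={N}: error {err:.2e}")  # error shrinks as N grows
```

Away from the jump discontinuity at odd multiples of π the partial sums approach the function; at the jump itself they converge to 0, the half-sum of the one-sided limits, as the Dirichlet theorem states.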
It is possible to give explicit examples of a continuous function whose Fourier series diverges at 0: for instance, the even and 2π-periodic function "f" defined for all "x" in [0,π] by formula_65 Because the function is even, the Fourier series contains only cosines: formula_66 The coefficients are: formula_67 As m increases, the coefficients will be positive and increasing until they reach a value of about formula_68 at formula_69 for some n and then become negative (starting with a value around formula_70) and get smaller, before starting a new such wave. At formula_71 the Fourier series is simply the running sum of formula_72 and this builds up to around formula_73 in the nth wave before returning to around zero, showing that the series does not converge at zero but reaches higher and higher peaks. Note that though the function is continuous, it is not differentiable. History. The Fourier series is named in honor of Jean-Baptiste Joseph Fourier (1768–1830), who made important contributions to the study of trigonometric series, after preliminary investigations by Leonhard Euler, Jean le Rond d'Alembert, and Daniel Bernoulli. Fourier introduced the series for the purpose of solving the heat equation in a metal plate, publishing his initial results in his 1807 "Mémoire sur la propagation de la chaleur dans les corps solides" ("Treatise on the propagation of heat in solid bodies"), and publishing his "Théorie analytique de la chaleur" ("Analytical theory of heat") in 1822. The "Mémoire" introduced Fourier analysis, specifically Fourier series. Through Fourier's research the fact was established that an arbitrary (at first, continuous and later generalized to any piecewise-smooth) function can be represented by a trigonometric series. The first announcement of this great discovery was made by Fourier in 1807, before the French Academy. 
Early ideas of decomposing a periodic function into the sum of simple oscillating functions date back to the 3rd century BC, when ancient astronomers proposed an empirical model of planetary motions, based on deferents and epicycles. The heat equation is a partial differential equation. Prior to Fourier's work, no solution to the heat equation was known in the general case, although particular solutions were known if the heat source behaved in a simple way, in particular, if the heat source was a sine or cosine wave. These simple solutions are now sometimes called eigensolutions. Fourier's idea was to model a complicated heat source as a superposition (or linear combination) of simple sine and cosine waves, and to write the solution as a superposition of the corresponding eigensolutions. This superposition or linear combination is called the Fourier series. From a modern point of view, Fourier's results are somewhat informal, due to the lack of a precise notion of function and integral in the early nineteenth century. Later, Peter Gustav Lejeune Dirichlet and Bernhard Riemann expressed Fourier's results with greater precision and formality. Although the original motivation was to solve the heat equation, it later became obvious that the same techniques could be applied to a wide array of mathematical and physical problems, and especially those involving linear differential equations with constant coefficients, for which the eigensolutions are sinusoids. The Fourier series has many such applications in electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, quantum mechanics, econometrics, shell theory, etc. Beginnings. 
Joseph Fourier wrote: formula_74 Multiplying both sides by formula_75, and then integrating from formula_76 to formula_77 yields: formula_78 This immediately gives any coefficient "ak" of the trigonometrical series for φ("y") for any function which has such an expansion. It works because if φ has such an expansion, then (under suitable convergence assumptions) the integral formula_79 can be carried out term-by-term. But all terms involving formula_80 for "j" ≠ "k" vanish when integrated from −1 to 1, leaving only the formula_81 term. In these few lines, which are close to the modern formalism used in Fourier series, Fourier revolutionized both mathematics and physics. Although similar trigonometric series were previously used by Euler, d'Alembert, Daniel Bernoulli and Gauss, Fourier believed that such trigonometric series could represent any arbitrary function. In what sense that is actually true is a somewhat subtle issue and the attempts over many years to clarify this idea have led to important discoveries in the theories of convergence, function spaces, and harmonic analysis. When Fourier submitted a later competition essay in 1811, the committee (which included Lagrange, Laplace, Malus and Legendre, among others) concluded: "...the manner in which the author arrives at these equations is not exempt of difficulties and...his analysis to integrate them still leaves something to be desired on the score of generality and even rigour". Fourier's motivation. The Fourier series expansion of the sawtooth function (above) looks more complicated than the simple formula formula_82, so it is not immediately apparent why one would need the Fourier series. While there are many applications, Fourier's motivation was in solving the heat equation. For example, consider a metal plate in the shape of a square whose sides measure formula_83 meters, with coordinates formula_84. 
If there is no heat source within the plate, and if three of the four sides are held at 0 degrees Celsius, while the fourth side, given by formula_85, is maintained at the temperature gradient formula_86 degrees Celsius, for formula_10 in formula_87, then one can show that the stationary heat distribution (or the heat distribution after a long period of time has elapsed) is given by formula_88 Here, sinh is the hyperbolic sine function. This solution of the heat equation is obtained by multiplying each term of Eq.9 by formula_89. While our example function formula_18 seems to have a needlessly complicated Fourier series, the heat distribution formula_90 is nontrivial. The function formula_91 cannot be written as a closed-form expression. This method of solving the heat problem was made possible by Fourier's work. Other applications. Another application is to solve the Basel problem by using Parseval's theorem. The example generalizes and one may compute ζ(2"n"), for any positive integer "n". Table of common Fourier series. Some common pairs of periodic functions and their Fourier series coefficients are shown in the table below. Table of basic properties. This table shows some mathematical operations in the time domain and the corresponding effect in the Fourier series coefficients. Notation: Symmetry properties. When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform: formula_97 From this, various relationships are apparent, for example: Other properties. Riemann–Lebesgue lemma. If formula_98 is integrable, then formula_99, formula_100 and formula_101 This result is known as the Riemann–Lebesgue lemma. Parseval's theorem. 
If formula_62 belongs to formula_102 (periodic over an interval of length formula_24) then: formula_103 Plancherel's theorem. If formula_104 are coefficients and formula_105 then there is a unique function formula_106 such that formula_107 for every formula_108. Convolution theorems. Given formula_24-periodic functions, formula_109 and formula_110 with Fourier series coefficients formula_53 and formula_111 formula_112 Derivative property. We say that formula_62 belongs to formula_122 if formula_62 is a 2π-periodic function on formula_123 which is formula_124 times differentiable, and its formula_81 derivative is continuous. Compact groups. One of the interesting properties of the Fourier transform mentioned above is that it carries convolutions to pointwise products. If that is the property which we seek to preserve, one can produce Fourier series on any compact group. Typical examples include those classical groups that are compact. This generalizes the Fourier transform to all spaces of the form "L"2("G"), where "G" is a compact group, in such a way that the Fourier transform carries convolutions to pointwise products. The Fourier series exists and converges in similar ways to the [−"π","π"] case. An alternative extension to compact groups is the Peter–Weyl theorem, which proves results about representations of compact groups analogous to those about finite groups. Riemannian manifolds. If the domain is not a group, then there is no intrinsically defined convolution. However, if formula_136 is a compact Riemannian manifold, it has a Laplace–Beltrami operator. The Laplace–Beltrami operator is the differential operator that corresponds to the Laplace operator for the Riemannian manifold formula_136. Then, by analogy, one can consider heat equations on formula_136. Since Fourier arrived at his basis by attempting to solve the heat equation, the natural generalization is to use the eigensolutions of the Laplace–Beltrami operator as a basis. 
This generalizes Fourier series to spaces of the type formula_137, where formula_136 is a Riemannian manifold. The Fourier series converges in ways similar to the formula_138 case. A typical example is to take formula_136 to be the sphere with the usual metric, in which case the Fourier basis consists of spherical harmonics. Locally compact Abelian groups. The generalization to compact groups discussed above does not generalize to noncompact, nonabelian groups. However, there is a straightforward generalization to Locally Compact Abelian (LCA) groups. This generalizes the Fourier transform to formula_139 or formula_140, where formula_141 is an LCA group. If formula_141 is compact, one also obtains a Fourier series, which converges similarly to the formula_138 case, but if formula_141 is noncompact, one obtains instead a Fourier integral. This generalization yields the usual Fourier transform when the underlying locally compact Abelian group is formula_123. Extensions. Fourier series on a square. We can also define the Fourier series for functions of two variables formula_10 and formula_142 in the square formula_143: formula_144 Aside from being useful for solving partial differential equations such as the heat equation, one notable application of Fourier series on the square is in image compression. In particular, the JPEG image compression standard uses the two-dimensional discrete cosine transform, a discrete form of the Fourier cosine transform, which uses only cosine as the basis function. For two-dimensional arrays with a staggered appearance, half of the Fourier series coefficients disappear, due to additional symmetry. Fourier series of Bravais-lattice-periodic-function. A three-dimensional Bravais lattice is defined as the set of vectors of the form: formula_145 where formula_146 are integers and formula_147 are three linearly independent vectors. 
Assuming we have some function, formula_148, such that it obeys the condition of periodicity for any Bravais lattice vector formula_149, formula_150, we could make a Fourier series of it. This kind of function can be, for example, the effective potential that one electron "feels" inside a periodic crystal. It is useful to make the Fourier series of the potential when applying Bloch's theorem. First, we may write any arbitrary position vector formula_151 in the coordinate-system of the lattice: formula_152 where formula_153 meaning that formula_154 is defined to be the magnitude of formula_147, so formula_155 is the unit vector directed along formula_147. Thus we can define a new function, formula_156 This new function, formula_157, is now a function of three variables, each of which has periodicity formula_158, formula_159, and formula_160 respectively: formula_161 This enables us to build up a set of Fourier coefficients, each being indexed by three independent integers formula_162. In what follows, we use function notation to denote these coefficients, where previously we used subscripts. If we write a series for formula_163 on the interval formula_164 for formula_165, we can define the following: formula_166 And then we can write: formula_167 Further defining: formula_168 We can write formula_163 once again as: formula_169 Finally applying the same for the third coordinate, we define: formula_170 We write formula_163 as: formula_171 Re-arranging: formula_172 Now, every "reciprocal" lattice vector can be written (though not necessarily in a unique way) as formula_173, where formula_174 are integers and formula_175 are reciprocal lattice vectors to satisfy formula_176 (formula_177 for formula_178, and formula_179 for formula_180). 
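The defining relation formula_176 determines the vectors formula_175 explicitly: the standard construction takes cross products of pairs of primitive vectors, normalized by the cell volume formula_201. A short numerical sketch (the primitive vectors below are an arbitrary example of our own, not taken from the text):

```python
import numpy as np

# Example primitive vectors a_1, a_2, a_3 (an arbitrary, non-orthogonal choice).
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2, 0.0])
a3 = np.array([0.0, 0.0, 2.0])

# Standard construction: g_1 = 2*pi (a_2 x a_3) / (a_1 . (a_2 x a_3)), cyclically.
vol = np.dot(a1, np.cross(a2, a3))      # primitive-cell volume a_1 . (a_2 x a_3)
g1 = 2 * np.pi * np.cross(a2, a3) / vol
g2 = 2 * np.pi * np.cross(a3, a1) / vol
g3 = 2 * np.pi * np.cross(a1, a2) / vol

# The matrix of scalar products g_i . a_j should equal 2*pi times the identity.
G = np.array([g1, g2, g3]) @ np.array([a1, a2, a3]).T
print(np.round(G / (2 * np.pi), 12))
```

Every reciprocal lattice vector is then an integer combination of these three, which is the index set the Fourier sum below runs over.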
Then for any arbitrary reciprocal lattice vector formula_181 and arbitrary position vector formula_151 in the original Bravais lattice space, their scalar product is: formula_182 So it is clear that in our expansion of formula_183, the sum is actually over reciprocal lattice vectors: formula_184 where formula_185 Assuming formula_186, we can solve this system of three linear equations for formula_10, formula_142, and formula_187 in terms of formula_165, formula_188 and formula_189 in order to calculate the volume element in the original rectangular coordinate system. Once we have formula_10, formula_142, and formula_187 in terms of formula_165, formula_188 and formula_189, we can calculate the Jacobian determinant: formula_190 which after some calculation and applying some non-trivial cross-product identities can be shown to be equal to: formula_191 (it may be advantageous, for the sake of simplifying calculations, to work in a rectangular coordinate system in which formula_192 happens to be parallel to the "x" axis, formula_193 lies in the "xy"-plane, and formula_194 has components along all three axes). The denominator is exactly the volume of the primitive unit cell which is enclosed by the three primitive-vectors formula_192, formula_193 and formula_194. In particular, we now know that formula_195 We can now write formula_196 as an integral with the traditional coordinate system over the volume of the primitive cell, instead of with the formula_165, formula_188 and formula_189 variables: formula_197 writing formula_198 for the volume element formula_199, where formula_200 is the primitive unit cell and formula_201 is its volume. Hilbert space interpretation. In the language of Hilbert spaces, the set of functions formula_202 is an orthonormal basis for the space formula_203 of square-integrable functions on formula_138. 
This space is actually a Hilbert space with an inner product given for any two elements formula_42 and formula_163 by: formula_204 where formula_205 is the complex conjugate of formula_206 The basic Fourier series result for Hilbert spaces can be written as formula_207 This corresponds exactly to the complex exponential formulation given above. The version with sines and cosines is also justified with the Hilbert space interpretation. Indeed, the sines and cosines form an orthogonal set: formula_208 formula_209 (where "δ""mn" is the Kronecker delta), and formula_210 furthermore, the sines and cosines are orthogonal to the constant function formula_211. An "orthonormal basis" for formula_203 consisting of real functions is formed by the functions formula_211 and formula_212, formula_213 with "n"= 1,2... The density of their span is a consequence of the Stone–Weierstrass theorem, but also follows from the properties of classical kernels like the Fejér kernel. Fourier theorem proving convergence of Fourier series. These theorems, and informal variations of them that don't specify the convergence conditions, are sometimes referred to generically as "Fourier's theorem" or "the Fourier theorem". The earlier Eq.3: formula_214 is a trigonometric polynomial of degree formula_215 that can be generally expressed as: formula_216 Least squares property. Parseval's theorem implies that: Theorem — The trigonometric polynomial formula_217 is the unique best trigonometric polynomial of degree formula_215 approximating formula_18, in the sense that, for any trigonometric polynomial formula_218 of degree formula_215, we have: formula_219 where the Hilbert space norm is defined as: formula_220 Convergence theorems. Because of the least squares property, and because of the completeness of the Fourier basis, we obtain an elementary convergence result. 
Theorem — If formula_62 belongs to formula_221 (an interval of length formula_24), then formula_222 converges to formula_62 in formula_221, that is, formula_223 converges to 0 as formula_224. We have already mentioned that if formula_62 is continuously differentiable, then formula_225 is the formula_226 Fourier coefficient of the derivative formula_127. It follows, essentially from the Cauchy–Schwarz inequality, that formula_222 is absolutely summable. The sum of this series is a continuous function, equal to formula_62, since the Fourier series converges in the mean to formula_62: Theorem — If formula_125, then formula_222 converges to formula_62 uniformly (and hence also pointwise.) This result can be proven easily if formula_62 is further assumed to be formula_227, since in that case formula_228 tends to zero as formula_229. More generally, the Fourier series is absolutely summable, thus converges uniformly to formula_62, provided that formula_62 satisfies a Hölder condition of order formula_230. In the absolutely summable case, the inequality: formula_231 proves uniform convergence. Many other results concerning the convergence of Fourier series are known, ranging from the moderately simple result that the series converges at formula_10 if formula_62 is differentiable at formula_10, to Lennart Carleson's much more sophisticated result that the Fourier series of an formula_232 function actually converges almost everywhere. Divergence. Since Fourier series have such good convergence properties, many are often surprised by some of the negative results. For example, the Fourier series of a continuous "T"-periodic function need not converge pointwise. The uniform boundedness principle yields a simple non-constructive proof of this fact. 
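The role of smoothness in these convergence results shows up directly in the decay of the coefficients: the discontinuous sawtooth has |"S"["n"]| decaying only like 1/"n", too slowly to be absolutely summable, while the continuous triangle wave |"x"| decays like 1/"n"² (odd "n"), which places it in the absolutely and uniformly convergent regime. A numerical sketch (the sample count and the tested harmonics are our own arbitrary choices):

```python
import numpy as np

# Sample one period of two functions on a uniform grid over [-pi, pi).
N = 1 << 14
x = -np.pi + np.arange(N) * (2 * np.pi / N)

def coeff_mag(s, n):
    """|S[n]| approximated by a Riemann sum for the coefficient integral."""
    return abs(np.sum(s * np.exp(-1j * n * x)) / N)

saw = x / np.pi    # jump discontinuity at +-pi:  |S[n]| = 1/(pi n)
tri = np.abs(x)    # continuous, corner at 0:     |S[n]| = 2/(pi n^2) for odd n

for n in (5, 25, 125):                 # odd harmonics, so neither column vanishes
    print(n, coeff_mag(saw, n) * n, coeff_mag(tri, n) * n ** 2)
    # First column stays near 1/pi, second near 2/pi: the normalized
    # magnitudes are flat, confirming the 1/n and 1/n^2 decay rates.
```

The 1/"n"² coefficients of the triangle wave are summable, so the inequality cited above yields uniform convergence for it; the sawtooth's 1/"n" coefficients are not, consistent with its failure to converge uniformly across the jump.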
In 1922, Andrey Kolmogorov published an article titled "Une série de Fourier-Lebesgue divergente presque partout" in which he gave an example of a Lebesgue-integrable function whose Fourier series diverges almost everywhere. He later constructed an example of an integrable function whose Fourier series diverges everywhere. "This article incorporates material from example of Fourier series on PlanetMath, which is licensed under the ."
[ { "math_id": 0, "text": "S(f)" }, { "math_id": 1, "text": "\\mathbb{T}" }, { "math_id": 2, "text": "S_1" }, { "math_id": 3, "text": "\\mathbb{R}^n" }, { "math_id": 4, "text": "N \\rightarrow \\infty." }, { "math_id": 5, "text": "P," }, { "math_id": 6, "text": "s_{\\scriptscriptstyle N}(x)" }, { "math_id": 7, "text": "n," }, { "math_id": 8, "text": " P" }, { "math_id": 9, "text": "\\tfrac{P}{n}" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "\\tfrac{n}{P}" }, { "math_id": 12, "text": "A_n" }, { "math_id": 13, "text": "B_n" }, { "math_id": 14, "text": "D_n" }, { "math_id": 15, "text": "\\varphi_n." }, { "math_id": 16, "text": "s(x)," }, { "math_id": 17, "text": " s_{\\scriptstyle{\\infty}}" }, { "math_id": 18, "text": "s(x)" }, { "math_id": 19, "text": "P." }, { "math_id": 20, "text": "\\int_P" }, { "math_id": 21, "text": "[-P/2, P/2]" }, { "math_id": 22, "text": "[0, P]" }, { "math_id": 23, "text": "P \\triangleq 2 \\pi" }, { "math_id": 24, "text": "P" }, { "math_id": 25, "text": "\\tfrac{2}{P}" }, { "math_id": 26, "text": "s(x) = \\cos \\left( 2\\pi \\tfrac{k}{P} x \\right)." }, { "math_id": 27, "text": "n=k" }, { "math_id": 28, "text": "A_k =1" }, { "math_id": 29, "text": "B_k = 0." 
}, { "math_id": 30, "text": "A_k = \\frac{2}{P} \\underbrace{\\int_P \\cos^2 \\left( 2\\pi \\tfrac{k}{P} x \\right) \\,dx}_{P/2} = 1," }, { "math_id": 31, "text": "\n\\begin{align}\n\\cos\\left(2\\pi \\tfrac{n}{P} x - \\varphi_n \\right) &{}\\equiv \\tfrac{1}{2} e^{ i \\left(2\\pi \\tfrac{n}{P}x - \\varphi_n \\right)} + \\tfrac{1}{2} e^{-i \\left(2\\pi \\tfrac{n}{P}x - \\varphi_n \\right)} \\\\[6pt]\n& = \\left(\\tfrac{1}{2} e^{-i \\varphi_n}\\right) \\cdot e^{i 2\\pi \\tfrac{+n}{P}x} \n+\\left(\\tfrac{1}{2} e^{-i \\varphi_n}\\right)^* \\cdot e^{i 2\\pi \\tfrac{-n}{P}x}\n\\end{align}\n" }, { "math_id": 32, "text": "\\begin{aligned} A_0 &= C_0 &\\\\\n A_n &= C_n+C_{-n} \\qquad &\\textrm{for}~ n > 0 \\\\\n B_n &= i(C_n-C_{-n}) \\qquad &\\textrm{for}~ n > 0 \\end{aligned}" }, { "math_id": 33, "text": "\\operatorname{Re}(s_N (x))" }, { "math_id": 34, "text": "\\operatorname{Im}(s_N(x))" }, { "math_id": 35, "text": "s_N(x) = \\operatorname{Re}(s_N(x)) + i\\ \\operatorname{Im}(s_N(x))." }, { "math_id": 36, "text": "\\varphi_n" }, { "math_id": 37, "text": "f," }, { "math_id": 38, "text": "[x_0,x_0+P]," }, { "math_id": 39, "text": "\\cos(2\\pi f x)" }, { "math_id": 40, "text": "\\Chi_f(\\tau)" }, { "math_id": 41, "text": "(D)" }, { "math_id": 42, "text": "f" }, { "math_id": 43, "text": "\\tau" }, { "math_id": 44, "text": "(\\varphi)" }, { "math_id": 45, "text": "4^{\\text{th}}" }, { "math_id": 46, "text": "\\begin{align}\n\\Chi_n(\\varphi) &= \\tfrac{2}{P} \\int_P s(x) \\cdot \\cos\\left(2\\pi \\tfrac{n}{P} x-\\varphi \\right)\\, dx\n; \\quad \\varphi \\in [0, 2\\pi]\\\\\n&=\\cos(\\varphi)\\cdot \\underbrace{\\tfrac{2}{P}\\int_P s(x) \\cdot \\cos\\left(2\\pi \\tfrac{n}{P} x\\right)\\, dx}_{A}\n+\\sin(\\varphi)\\cdot \\underbrace{\\tfrac{2}{P}\\int_P s(x) \\cdot \\sin\\left(2\\pi \\tfrac{n}{P} x\\right)\\, dx}_{B}\\\\\n&=\\cos(\\varphi)\\cdot A + \\sin(\\varphi)\\cdot B\n\\end{align}" }, { "math_id": 47, "text": "\\Chi_n(\\varphi)" }, { "math_id": 48, "text": 
"\\Chi'_n(\\varphi)=\\sin(\\varphi)\\cdot A - \\cos(\\varphi)\\cdot B = 0\n\\quad \\longrightarrow\\quad \\tan(\\varphi) = \\frac{B}{A} \\quad \\longrightarrow\\quad \\varphi = \\arctan(B, A)" }, { "math_id": 49, "text": "\n\\begin{align}\nD_n \\triangleq \\Chi_n(\\varphi_n)\\ &= \\cos(\\varphi_n)\\cdot A_n + \\sin(\\varphi_n)\\cdot B_n \\\\\n&=\\frac{A_n}{\\sqrt{A_n^2+B_n^2}}\\cdot A_n + \\frac{B _n}{\\sqrt{A_n^2+B_n^2}}\\cdot B_n =\\frac{A_n^2+B_n^2}{\\sqrt{A_n^2+B_n^2}}\n&= \\sqrt{A_n^2+B_n^2}.\n\\end{align}\n" }, { "math_id": 50, "text": "C_n" }, { "math_id": 51, "text": "s," }, { "math_id": 52, "text": "\\widehat{s}(n)" }, { "math_id": 53, "text": "S[n]" }, { "math_id": 54, "text": "\\begin{align}\ns(x) &= \\sum_{n=-\\infty}^\\infty \\widehat{s}(n)\\cdot e^{i 2\\pi \\tfrac{n}{P} x} && \\scriptstyle \\text{common mathematics notation} \\\\\n&= \\sum_{n=-\\infty}^\\infty S[n]\\cdot e^{i 2\\pi \\tfrac{n}{P} x} && \\scriptstyle \\text{common engineering notation}\n\\end{align}" }, { "math_id": 55, "text": "S(f) \\ \\triangleq \\ \\sum_{n=-\\infty}^\\infty S[n]\\cdot \\delta \\left(f-\\frac{n}{P}\\right)," }, { "math_id": 56, "text": "\\tfrac{1}{P}" }, { "math_id": 57, "text": "s_{\\infty}(x)" }, { "math_id": 58, "text": "\\begin{align}\n\\mathcal{F}^{-1}\\{S(f)\\} &= \\int_{-\\infty}^\\infty \\left( \\sum_{n=-\\infty}^\\infty S[n]\\cdot \\delta \\left(f-\\frac{n}{P}\\right)\\right) e^{i 2 \\pi f x}\\,df, \\\\[6pt]\n&= \\sum_{n=-\\infty}^\\infty S[n]\\cdot \\int_{-\\infty}^\\infty \\delta\\left(f-\\frac{n}{P}\\right) e^{i 2 \\pi f x}\\,df, \\\\[6pt]\n&= \\sum_{n=-\\infty}^\\infty S[n]\\cdot e^{i 2\\pi \\tfrac{n}{P} x} \\ \\ \\triangleq \\ s_\\infty(x).\n\\end{align}" }, { "math_id": 59, "text": "s(x) = \\frac{x}{\\pi}, \\quad \\mathrm{for } -\\pi < x < \\pi," }, { "math_id": 60, "text": "s(x + 2\\pi k) = s(x), \\quad \\mathrm{for } -\\pi < x < \\pi \\text{ and } k \\in \\mathbb{Z}." 
}, { "math_id": 61, "text": "\\begin{align}\nA_n & = \\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}s(x) \\cos(nx)\\,dx = 0, \\quad n \\ge 0. \\\\[4pt]\nB_n & = \\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}s(x) \\sin(nx)\\, dx\\\\[4pt]\n&= -\\frac{2}{\\pi n}\\cos(n\\pi) + \\frac{2}{\\pi^2 n^2}\\sin(n\\pi)\\\\[4pt]\n&= \\frac{2\\,(-1)^{n+1}}{\\pi n}, \\quad n \\ge 1.\\end{align}" }, { "math_id": 62, "text": "s" }, { "math_id": 63, "text": "x=\\pi" }, { "math_id": 64, "text": "[x_0,x_0+P]" }, { "math_id": 65, "text": "f(x) = \\sum_{n=1}^{\\infty} \\frac{1}{n^2} \\sin\\left[ \\left( 2^{n^3} +1 \\right) \\frac{x}{2}\\right]." }, { "math_id": 66, "text": "\\sum_{m=0}^\\infty C_m \\cos(mx)." }, { "math_id": 67, "text": "C_m=\\frac 1\\pi\\sum_{n=1}^{\\infty} \\frac{1}{n^2} \\left\\{\\frac 2{2^{n^3} +1-2m}+\\frac 2{2^{n^3} +1+2m}\\right\\}" }, { "math_id": 68, "text": "C_m\\approx 2/(n^2\\pi)" }, { "math_id": 69, "text": "m=2^{n^3}/2" }, { "math_id": 70, "text": "-2/(n^2\\pi)" }, { "math_id": 71, "text": "x=0" }, { "math_id": 72, "text": "C_m," }, { "math_id": 73, "text": "\\frac 1{n^2\\pi}\\sum_{k=0}^{2^{n^3}/2}\\frac 2{2k+1}\\sim\\frac 1{n^2\\pi}\\ln 2^{n^3}=\\frac n\\pi\\ln 2" }, { "math_id": 74, "text": "\\varphi(y)=a_0\\cos\\frac{\\pi y}{2}+a_1\\cos 3\\frac{\\pi y}{2}+a_2\\cos5\\frac{\\pi y}{2}+\\cdots." }, { "math_id": 75, "text": "\\cos(2k+1)\\frac{\\pi y}{2}" }, { "math_id": 76, "text": "y=-1" }, { "math_id": 77, "text": "y=+1" }, { "math_id": 78, "text": "a_k=\\int_{-1}^1\\varphi(y)\\cos(2k+1)\\frac{\\pi y}{2}\\,dy." 
}, { "math_id": 79, "text": "\\begin{align}\na_k&=\\int_{-1}^1\\varphi(y)\\cos(2k+1)\\frac{\\pi y}{2}\\,dy \\\\\n&= \\int_{-1}^1\\left(a\\cos\\frac{\\pi y}{2}\\cos(2k+1)\\frac{\\pi y}{2}+a'\\cos 3\\frac{\\pi y}{2}\\cos(2k+1)\\frac{\\pi y}{2}+\\cdots\\right)\\,dy\n\\end{align}" }, { "math_id": 80, "text": "\\cos(2j+1)\\frac{\\pi y}{2} \\cos(2k+1)\\frac{\\pi y}{2}" }, { "math_id": 81, "text": "k^{\\text{th}}" }, { "math_id": 82, "text": "s(x)=\\tfrac{x}{\\pi}" }, { "math_id": 83, "text": "\\pi" }, { "math_id": 84, "text": "(x,y) \\in [0,\\pi] \\times [0,\\pi]" }, { "math_id": 85, "text": "y=\\pi" }, { "math_id": 86, "text": "T(x,\\pi)=x" }, { "math_id": 87, "text": "(0,\\pi)" }, { "math_id": 88, "text": "T(x,y) = 2\\sum_{n=1}^\\infty \\frac{(-1)^{n+1}}{n} \\sin(nx) {\\sinh(ny) \\over \\sinh(n\\pi)}." }, { "math_id": 89, "text": "\\sinh(ny)/\\sinh(n\\pi)" }, { "math_id": 90, "text": "T(x,y)" }, { "math_id": 91, "text": "T" }, { "math_id": 92, "text": "A_0, A_n, B_n" }, { "math_id": 93, "text": "s(x),r(x)" }, { "math_id": 94, "text": "x \\in [0,P]. " }, { "math_id": 95, "text": "S[n], R[n]" }, { "math_id": 96, "text": "r." }, { "math_id": 97, "text": "\n\\begin{array}{rccccccccc}\n\\text{Time domain} & s & = & s_{_{\\text{RE}}} & + & s_{_{\\text{RO}}} & + & i s_{_{\\text{IE}}} & + & \\underbrace{i\\ s_{_{\\text{IO}}}} \\\\\n&\\Bigg\\Updownarrow\\mathcal{F} & &\\Bigg\\Updownarrow\\mathcal{F} & &\\ \\ \\Bigg\\Updownarrow\\mathcal{F} & &\\ \\ \\Bigg\\Updownarrow\\mathcal{F} & &\\ \\ \\Bigg\\Updownarrow\\mathcal{F}\\\\\n\\text{Frequency domain} & S & = & S_\\text{RE} & + & \\overbrace{\\,i\\ S_\\text{IO}\\,} & + & i S_\\text{IE} & + & S_\\text{RO}\n\\end{array}\n" }, { "math_id": 98, "text": "S" }, { "math_id": 99, "text": "\\lim_{|n| \\to \\infty} S[n]=0" }, { "math_id": 100, "text": "\\lim_{n \\to +\\infty} a_n=0" }, { "math_id": 101, "text": " \\lim_{n \\to +\\infty} b_n=0." 
}, { "math_id": 102, "text": "L^2(P)" }, { "math_id": 103, "text": "\\frac{1}{P}\\int_{P} |s(x)|^2 \\, dx = \\sum_{n=-\\infty}^\\infty \\Bigl|S[n]\\Bigr|^2." }, { "math_id": 104, "text": "c_0,\\, c_{\\pm 1},\\, c_{\\pm 2}, \\ldots" }, { "math_id": 105, "text": "\\sum_{n=-\\infty}^\\infty |c_n|^2 < \\infty" }, { "math_id": 106, "text": "s\\in L^2(P)" }, { "math_id": 107, "text": "S[n] = c_n" }, { "math_id": 108, "text": "n" }, { "math_id": 109, "text": "s_{_P}" }, { "math_id": 110, "text": "r_{_P}" }, { "math_id": 111, "text": "R[n]," }, { "math_id": 112, "text": "n \\in \\mathbb{Z}," }, { "math_id": 113, "text": "h_{_P}(x) \\triangleq s_{_P}(x)\\cdot r_{_P}(x)" }, { "math_id": 114, "text": "R" }, { "math_id": 115, "text": "H[n] = \\{S*R\\}[n]." }, { "math_id": 116, "text": "h_{_P}(x) \\triangleq \\int_{P} s_{_P}(\\tau)\\cdot r_{_P}(x-\\tau)\\, d\\tau" }, { "math_id": 117, "text": "H[n] = P \\cdot S[n]\\cdot R[n]." }, { "math_id": 118, "text": "\\left \\{c_n \\right \\}_{n \\in Z}" }, { "math_id": 119, "text": "c_0(\\mathbb{Z})" }, { "math_id": 120, "text": "L^1([0,2\\pi])" }, { "math_id": 121, "text": "\\ell^2(\\mathbb{Z})" }, { "math_id": 122, "text": "C^k(\\mathbb{T})" }, { "math_id": 123, "text": "\\mathbb{R}" }, { "math_id": 124, "text": "k" }, { "math_id": 125, "text": "s \\in C^1(\\mathbb{T})" }, { "math_id": 126, "text": "\\widehat{s'}[n]" }, { "math_id": 127, "text": "s'" }, { "math_id": 128, "text": "\\widehat{s}[n]" }, { "math_id": 129, "text": "\\widehat{s'}[n] = in \\widehat{s}[n]" }, { "math_id": 130, "text": "s \\in C^k(\\mathbb{T})" }, { "math_id": 131, "text": "\\widehat{s^{(k)}}[n] = (in)^k \\widehat{s}[n]" }, { "math_id": 132, "text": "k\\geq 1" }, { "math_id": 133, "text": "\\widehat{s^{(k)}}[n]\\to 0" }, { "math_id": 134, "text": "n\\to\\infty" }, { "math_id": 135, "text": "|n|^k\\widehat{s}[n]" }, { "math_id": 136, "text": "X" }, { "math_id": 137, "text": "L^2(X)" }, { "math_id": 138, "text": "[-\\pi,\\pi]" }, { "math_id": 139, "text": "L^1(G)" 
}, { "math_id": 140, "text": "L^2(G)" }, { "math_id": 141, "text": "G" }, { "math_id": 142, "text": "y" }, { "math_id": 143, "text": "[-\\pi,\\pi]\\times[-\\pi,\\pi]" }, { "math_id": 144, "text": "\\begin{align}\nf(x,y) & = \\sum_{j,k \\in \\Z} c_{j,k}e^{ijx}e^{iky},\\\\[5pt]\nc_{j,k} & = \\frac{1}{4 \\pi^2} \\int_{-\\pi}^\\pi \\int_{-\\pi}^\\pi f(x,y) e^{-ijx}e^{-iky}\\, dx \\, dy.\n\\end{align}" }, { "math_id": 145, "text": "\\mathbf{R} = n_1\\mathbf{a}_1 + n_2\\mathbf{a}_2 + n_3\\mathbf{a}_3" }, { "math_id": 146, "text": "n_i" }, { "math_id": 147, "text": "\\mathbf{a}_i" }, { "math_id": 148, "text": "f(\\mathbf{r})" }, { "math_id": 149, "text": "\\mathbf{R}" }, { "math_id": 150, "text": "f(\\mathbf{r}) = f(\\mathbf{R}+\\mathbf{r})" }, { "math_id": 151, "text": "\\mathbf{r}" }, { "math_id": 152, "text": "\\mathbf{r} = x_1\\frac{\\mathbf{a}_{1}}{a_1}+ x_2\\frac{\\mathbf{a}_{2}}{a_2}+ x_3\\frac{\\mathbf{a}_{3}}{a_3}," }, { "math_id": 153, "text": "a_i \\triangleq |\\mathbf{a}_i|," }, { "math_id": 154, "text": "a_i" }, { "math_id": 155, "text": "\\hat{\\mathbf{a}_{i}} = \\frac {\\mathbf{a}_{i}}{a_i}" }, { "math_id": 156, "text": "g(x_1,x_2,x_3) \\triangleq f(\\mathbf{r}) = f \\left (x_1\\frac{\\mathbf{a}_{1}}{a_1}+x_2\\frac{\\mathbf{a}_{2}}{a_2}+x_3\\frac{\\mathbf{a}_{3}}{a_3} \\right )." }, { "math_id": 157, "text": "g(x_1,x_2,x_3)" }, { "math_id": 158, "text": "a_1" }, { "math_id": 159, "text": "a_2" }, { "math_id": 160, "text": "a_3" }, { "math_id": 161, "text": "g(x_1,x_2,x_3) = g(x_1+a_1,x_2,x_3) = g(x_1,x_2+a_2,x_3) = g(x_1,x_2,x_3+a_3)." 
}, { "math_id": 162, "text": "m_1,m_2,m_3" }, { "math_id": 163, "text": "g" }, { "math_id": 164, "text": "\\left [ 0, a_1\\right ]" }, { "math_id": 165, "text": "x_1" }, { "math_id": 166, "text": "h^\\mathrm{one}(m_1, x_2, x_3) \\triangleq \\frac{1}{a_1}\\int_0^{a_1} g(x_1, x_2, x_3)\\cdot e^{-i 2\\pi \\tfrac{m_1}{a_1} x_1}\\, dx_1" }, { "math_id": 167, "text": "g(x_1, x_2, x_3)=\\sum_{m_1=-\\infty}^\\infty h^\\mathrm{one}(m_1, x_2, x_3) \\cdot e^{i 2\\pi \\tfrac{m_1}{a_1} x_1}" }, { "math_id": 168, "text": "\\begin{align}\nh^\\mathrm{two}(m_1, m_2, x_3) & \\triangleq \\frac{1}{a_2}\\int_0^{a_2} h^\\mathrm{one}(m_1, x_2, x_3)\\cdot e^{-i 2\\pi \\tfrac{m_2}{a_2} x_2}\\, dx_2 \\\\[12pt]\n& = \\frac{1}{a_2}\\int_0^{a_2} dx_2 \\frac{1}{a_1}\\int_0^{a_1} dx_1 g(x_1, x_2, x_3)\\cdot e^{-i 2\\pi \\left(\\tfrac{m_1}{a_1} x_1+\\tfrac{m_2}{a_2} x_2\\right)}\n\\end{align}" }, { "math_id": 169, "text": "g(x_1, x_2, x_3)=\\sum_{m_1=-\\infty}^\\infty \\sum_{m_2=-\\infty}^\\infty h^\\mathrm{two}(m_1, m_2, x_3) \\cdot e^{i 2\\pi \\tfrac{m_1}{a_1} x_1} \\cdot e^{i 2\\pi \\tfrac{m_2}{a_2} x_2}" }, { "math_id": 170, "text": "\\begin{align}\nh^\\mathrm{three}(m_1, m_2, m_3) & \\triangleq \\frac{1}{a_3}\\int_0^{a_3} h^\\mathrm{two}(m_1, m_2, x_3)\\cdot e^{-i 2\\pi \\tfrac{m_3}{a_3} x_3}\\, dx_3 \\\\[12pt]\n& = \\frac{1}{a_3}\\int_0^{a_3} dx_3 \\frac{1}{a_2}\\int_0^{a_2} dx_2 \\frac{1}{a_1}\\int_0^{a_1} dx_1 g(x_1, x_2, x_3)\\cdot e^{-i 2\\pi \\left(\\tfrac{m_1}{a_1} x_1+\\tfrac{m_2}{a_2} x_2 + \\tfrac{m_3}{a_3} x_3\\right)}\n\\end{align}" }, { "math_id": 171, "text": "g(x_1, x_2, x_3)=\\sum_{m_1=-\\infty}^\\infty \\sum_{m_2=-\\infty}^\\infty \\sum_{m_3=-\\infty}^\\infty h^\\mathrm{three}(m_1, m_2, m_3) \\cdot e^{i 2\\pi \\tfrac{m_1}{a_1} x_1} \\cdot e^{i 2\\pi \\tfrac{m_2}{a_2} x_2}\\cdot e^{i 2\\pi \\tfrac{m_3}{a_3} x_3}" }, { "math_id": 172, "text": "g(x_1, x_2, x_3)=\\sum_{m_1, m_2, m_3 \\in \\Z } h^\\mathrm{three}(m_1, m_2, m_3) \\cdot e^{i 2\\pi \\left( \\tfrac{m_1}{a_1} x_1+ 
\\tfrac{m_2}{a_2} x_2 + \\tfrac{m_3}{a_3} x_3\\right)}. " }, { "math_id": 173, "text": "\\mathbf{G} = m_1\\mathbf{g}_1 + m_2\\mathbf{g}_2 + m_3\\mathbf{g}_3" }, { "math_id": 174, "text": "m_i" }, { "math_id": 175, "text": "\\mathbf{g}_i" }, { "math_id": 176, "text": "\\mathbf{g_i} \\cdot \\mathbf{a_j}=2\\pi\\delta_{ij}" }, { "math_id": 177, "text": "\\delta_{ij} = 1" }, { "math_id": 178, "text": "i = j" }, { "math_id": 179, "text": "\\delta_{ij} = 0" }, { "math_id": 180, "text": "i \\neq j" }, { "math_id": 181, "text": "\\mathbf{G}" }, { "math_id": 182, "text": "\\mathbf{G} \\cdot \\mathbf{r} = \\left ( m_1\\mathbf{g}_1 + m_2\\mathbf{g}_2 + m_3\\mathbf{g}_3 \\right ) \\cdot \\left (x_1\\frac{\\mathbf{a}_1}{a_1}+ x_2\\frac{\\mathbf{a}_2}{a_2} +x_3\\frac{\\mathbf{a}_3}{a_3} \\right ) = 2\\pi \\left( x_1\\frac{m_1}{a_1}+x_2\\frac{m_2}{a_2}+x_3\\frac{m_3}{a_3} \\right )." }, { "math_id": 183, "text": "g(x_1,x_2,x_3) = f(\\mathbf{r})" }, { "math_id": 184, "text": "f(\\mathbf{r})=\\sum_{\\mathbf{G}} h(\\mathbf{G}) \\cdot e^{i \\mathbf{G} \\cdot \\mathbf{r}}, " }, { "math_id": 185, "text": "h(\\mathbf{G}) = \\frac{1}{a_3} \\int_0^{a_3} dx_3 \\, \\frac{1}{a_2}\\int_0^{a_2} dx_2 \\, \\frac{1}{a_1}\\int_0^{a_1} dx_1 \\, f\\left(x_1\\frac{\\mathbf{a}_1}{a_1} + x_2\\frac{\\mathbf{a}_2}{a_2} + x_3\\frac{\\mathbf{a}_3}{a_3} \\right)\\cdot e^{-i \\mathbf{G} \\cdot \\mathbf{r}}. 
" }, { "math_id": 186, "text": "\\mathbf{r} = (x,y,z) = x_1\\frac{\\mathbf{a}_1}{a_1}+x_2\\frac{\\mathbf{a}_2}{a_2}+x_3\\frac{\\mathbf{a}_3}{a_3}," }, { "math_id": 187, "text": "z" }, { "math_id": 188, "text": "x_2" }, { "math_id": 189, "text": "x_3" }, { "math_id": 190, "text": "\\begin{vmatrix}\n\\dfrac{\\partial x_1}{\\partial x} & \\dfrac{\\partial x_1}{\\partial y} & \\dfrac{\\partial x_1}{\\partial z} \\\\[12pt]\n\\dfrac{\\partial x_2}{\\partial x} & \\dfrac{\\partial x_2}{\\partial y} & \\dfrac{\\partial x_2}{\\partial z} \\\\[12pt]\n\\dfrac{\\partial x_3}{\\partial x} & \\dfrac{\\partial x_3}{\\partial y} & \\dfrac{\\partial x_3}{\\partial z}\n\\end{vmatrix}" }, { "math_id": 191, "text": "\\frac{a_1 a_2 a_3}{\\mathbf{a}_1\\cdot(\\mathbf{a}_2 \\times \\mathbf{a}_3)}" }, { "math_id": 192, "text": "\\mathbf{a}_1" }, { "math_id": 193, "text": "\\mathbf{a}_2" }, { "math_id": 194, "text": "\\mathbf{a}_3" }, { "math_id": 195, "text": "dx_1 \\, dx_2 \\, dx_3 = \\frac{a_1 a_2 a_3}{\\mathbf{a}_1\\cdot(\\mathbf{a}_2 \\times \\mathbf{a}_3)} \\cdot dx \\, dy \\, dz. " }, { "math_id": 196, "text": "h(\\mathbf{G})" }, { "math_id": 197, "text": "h(\\mathbf{G}) = \\frac{1}{\\mathbf{a}_1\\cdot(\\mathbf{a}_2 \\times \\mathbf{a}_3)}\\int_{C} d\\mathbf{r} f(\\mathbf{r})\\cdot e^{-i \\mathbf{G} \\cdot \\mathbf{r}} " }, { "math_id": 198, "text": "d\\mathbf{r}" }, { "math_id": 199, "text": "dx \\, dy \\, dz" }, { "math_id": 200, "text": "C" }, { "math_id": 201, "text": "\\mathbf{a}_1\\cdot(\\mathbf{a}_2 \\times \\mathbf{a}_3)" }, { "math_id": 202, "text": "\\left\\{e_n=e^{inx}: n \\in \\Z\\right\\}" }, { "math_id": 203, "text": "L^2([-\\pi,\\pi])" }, { "math_id": 204, "text": "\\langle f,\\, g \\rangle \\;\\triangleq \\; \\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi} f(x)g^*(x)\\,dx," }, { "math_id": 205, "text": "g^{*}(x)" }, { "math_id": 206, "text": "g(x)." }, { "math_id": 207, "text": "f=\\sum_{n=-\\infty}^\\infty \\langle f,e_n \\rangle \\, e_n." 
}, { "math_id": 208, "text": "\\int_{-\\pi}^{\\pi} \\cos(mx)\\, \\cos(nx)\\, dx = \\frac{1}{2}\\int_{-\\pi}^{\\pi} \\cos((n-m)x)+\\cos((n+m)x)\\, dx = \\pi \\delta_{mn}, \\quad m, n \\ge 1, " }, { "math_id": 209, "text": "\\int_{-\\pi}^{\\pi} \\sin(mx)\\, \\sin(nx)\\, dx = \\frac{1}{2}\\int_{-\\pi}^{\\pi} \\cos((n-m)x)-\\cos((n+m)x)\\, dx = \\pi \\delta_{mn}, \\quad m, n \\ge 1" }, { "math_id": 210, "text": "\\int_{-\\pi}^{\\pi} \\cos(mx)\\, \\sin(nx)\\, dx = \\frac{1}{2}\\int_{-\\pi}^{\\pi} \\sin((n+m)x)+\\sin((n-m)x)\\, dx = 0;" }, { "math_id": 211, "text": "1" }, { "math_id": 212, "text": "\\sqrt{2} \\cos (nx)" }, { "math_id": 213, "text": "\\sqrt{2} \\sin (nx)" }, { "math_id": 214, "text": "s_{_N}(x) = \\sum_{n=-N}^N S[n]\\ e^{i 2\\pi\\tfrac{n}{P} x}," }, { "math_id": 215, "text": "N" }, { "math_id": 216, "text": "p_{_N}(x)=\\sum_{n=-N}^N p[n]\\ e^{i 2\\pi\\tfrac{n}{P}x}." }, { "math_id": 217, "text": "s_{_N}" }, { "math_id": 218, "text": "p_{_N} \\neq s_{_N}" }, { "math_id": 219, "text": "\\|s_{_N} - s\\|_2 < \\|p_{_N} - s\\|_2," }, { "math_id": 220, "text": "\\| g \\|_2 = \\sqrt{{1 \\over P} \\int_P |g(x)|^2 \\, dx}." }, { "math_id": 221, "text": "L^2 (P)" }, { "math_id": 222, "text": "s_{\\infty}" }, { "math_id": 223, "text": "\\|s_{_N} - s\\|_2" }, { "math_id": 224, "text": "N \\to \\infty" }, { "math_id": 225, "text": "(i\\cdot n) S[n]" }, { "math_id": 226, "text": "n^{\\text{th}}" }, { "math_id": 227, "text": "C^2" }, { "math_id": 228, "text": "n^2S[n]" }, { "math_id": 229, "text": "n \\rightarrow \\infty" }, { "math_id": 230, "text": "\\alpha > 1/2" }, { "math_id": 231, "text": "\\sup_x |s(x) - s_{_N}(x)| \\le \\sum_{|n| > N} |S[n]|" }, { "math_id": 232, "text": "L^2" } ]
]
https://en.wikipedia.org/wiki?curid=59038