Baskakov operator
In functional analysis, a branch of mathematics, the Baskakov operators are generalizations of Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators. They are defined by

$$[\mathcal{L}_n(f)](x) = \sum_{k=0}^\infty (-1)^k \frac{x^k}{k!} \phi_n^{(k)}(x) f\left(\frac{k}{n}\right)$$

where $x\in[0,b)\subset\mathbb{R}$ ($b$ can be $\infty$), $n\in\mathbb{N}$, and $(\phi_n)_{n\in\mathbb{N}}$ is a sequence of functions defined on $[0,b]$ that have the following properties for all $n,k\in\mathbb{N}$:

- $\phi_n\in\mathcal{C}^\infty[0,b]$;
- $\phi_n(0) = 1$;
- $\phi_n$ is completely monotone on $[0,b)$, i.e. $(-1)^k\phi_n^{(k)}\geq 0$;
- there is an integer $c$ such that $\phi_n^{(k+1)} = -n\phi_{n+c}^{(k)}$ whenever $n>\max\{0,-c\}$.

They are named after V. A. Baskakov, who studied their convergence to bounded, continuous functions. Basic results. The Baskakov operators are linear and positive.
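A concrete instance may help: choosing $\phi_n(x) = (1+x)^{-n}$, which satisfies the properties above with $c = 1$ and yields the classical Baskakov operators, the defining sum reduces to nonnegative weights $\binom{n+k-1}{k} x^k (1+x)^{-n-k}$. A minimal numerical sketch, truncating the infinite sum (the function name is illustrative):

```python
from math import comb

def baskakov(f, n, x, terms=500):
    """Approximate [L_n(f)](x) for phi_n(x) = (1 + x)**(-n).

    For this phi_n, (-1)**k * x**k / k! * phi_n^{(k)}(x) simplifies to the
    nonnegative weight comb(n + k - 1, k) * x**k / (1 + x)**(n + k), so the
    operator is visibly positive (and it is linear in f).
    """
    return sum(
        comb(n + k - 1, k) * x**k / (1 + x) ** (n + k) * f(k / n)
        for k in range(terms)
    )

# The operator reproduces constants and the identity function exactly:
print(baskakov(lambda t: 1.0, 10, 0.5))  # ~1.0
print(baskakov(lambda t: t, 10, 0.5))    # ~0.5
```

Increasing n drives [L_n(f)](x) toward f(x) for bounded continuous f, matching the convergence result credited to Baskakov.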
https://en.wikipedia.org/wiki?curid=15019690
Infrared
Infrared (IR; sometimes called infrared light) is electromagnetic radiation (EMR) with wavelengths longer than that of visible light but shorter than microwaves. The infrared spectral band begins with waves that are just longer than those of red light (the longest waves in the visible spectrum), so IR is invisible to the human eye. IR is generally understood to include wavelengths from around 700 nm to 1 mm. IR is commonly divided between longer-wavelength thermal IR, emitted from terrestrial sources, and shorter-wavelength IR or near-IR, part of the solar spectrum. Longer IR wavelengths (30–100 μm) are sometimes included as part of the terahertz radiation band. Almost all black-body radiation from objects near room temperature is in the IR band. As a form of electromagnetic radiation, IR carries energy and momentum, exerts radiation pressure, and has properties corresponding to both those of a wave and of a particle, the photon. It was long known that fires emit invisible heat; in 1681 the pioneering experimenter Edme Mariotte showed that glass, though transparent to sunlight, obstructed radiant heat. In 1800 the astronomer Sir William Herschel discovered that infrared radiation is a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer. Slightly more than half of the energy from the Sun was eventually found, through Herschel's studies, to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has an important effect on Earth's climate. Infrared radiation is emitted or absorbed by molecules when they change their rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for the study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range. 
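The commonly quoted band edges (about 700 nm at the red limit and 1 mm toward the microwaves) can be converted to the frequencies, and to the spectroscopic wavenumbers used later in this article, via ν = c/λ and ν̃ = 1/λ. A small sketch (function names are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_hz(wavelength_m):
    """Frequency corresponding to a vacuum wavelength (nu = c / lambda)."""
    return C / wavelength_m

def wavenumber_per_cm(wavelength_m):
    """Spectroscopic wavenumber in cm^-1 (the reciprocal wavelength)."""
    return 1.0 / (wavelength_m * 100.0)

print(frequency_hz(700e-9) / 1e12)  # ~428 THz: the ~430 THz red edge
print(frequency_hz(1e-3) / 1e9)     # ~300 GHz: the 1 mm microwave edge
print(wavenumber_per_cm(2.5e-6))    # ~4000 cm^-1: start of the mid-IR spectroscopy range
```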
Infrared radiation is used in industrial, scientific, military, commercial, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, to detect objects such as planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, to assist firefighting, and to detect the overheating of electrical components. Military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm. Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting. Definition and relationship to the electromagnetic spectrum. There is no universally accepted definition of the range of infrared radiation. Typically, it is taken to extend from the nominal red edge of the visible spectrum at 700 nm to 1 mm. This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Beyond infrared is the microwave portion of the electromagnetic spectrum. Increasingly, terahertz radiation is counted as part of the microwave band, not infrared, moving the band edge of infrared to 0.1 mm (3 THz). Nature. Sunlight, at an effective temperature of 5,780 K (5,510 °C, 9,940 °F), is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kW per square meter at sea level. 
Of this energy, 527 W is infrared radiation, 445 W is visible light, and 32 W is ultraviolet radiation. Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 μm. On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer than in sunlight. Black-body, or thermal, radiation is continuous: it radiates at all wavelengths. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy. Regions. In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of the object, in accordance with Wien's displacement law. The infrared band is often subdivided into smaller sections, although how the IR spectrum is thereby divided varies between the different fields in which IR is employed. Visible limit. Infrared radiation is generally considered to begin with wavelengths longer than those visible to the human eye. There is no hard wavelength limit to what is visible, as the eye's sensitivity decreases rapidly but smoothly for wavelengths exceeding about 700 nm. Therefore, wavelengths just longer than that can be seen if they are sufficiently bright, though they may still be classified as infrared according to usual definitions. Light from a near-IR laser may thus appear dim red and can present a hazard since it may actually be quite bright. Even IR at wavelengths up to 1,050 nm from pulsed lasers can be seen by humans under certain conditions. Commonly used subdivision scheme. 
A commonly used subdivision scheme is: NIR and SWIR together are sometimes called "reflected infrared", whereas MWIR and LWIR are sometimes referred to as "thermal infrared". CIE division scheme. The International Commission on Illumination (CIE) recommended the division of infrared radiation into the following three bands: ISO 20473 scheme. ISO 20473 specifies the following scheme: Astronomy division scheme. Astronomers typically divide the infrared spectrum as follows: These divisions are not precise and can vary depending on the publication. The three regions are used for observation of different temperature ranges, and hence different environments in space. The most common photometric system used in astronomy allocates capital letters to different spectral regions according to the filters used; I, J, H, and K cover the near-infrared wavelengths; L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers. Sensor response division scheme. A third scheme divides up the band based on the response of various detectors: Near-infrared is the region closest in wavelength to the radiation detectable by the human eye; mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs. bands, water absorption), and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available. The onset of infrared is defined (according to different standards) at various values typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. 
The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. Particularly intense near-IR light (e.g., from lasers, LEDs, or bright daylight with the visible light filtered out) can be detected up to approximately 780 nm, and will be perceived as red light. Intense light sources providing wavelengths as long as 1,050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all visible-light leaks from around an IR filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect, the characteristic IR glow of foliage. Telecommunication bands. In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on the availability of light sources, transmitting/absorbing materials (fibers), and detectors: The C-band is the dominant band for long-distance telecommunications networks. The S and L bands are based on less well-established technology, and are not as widely deployed. Heat. Infrared radiation is popularly known as "heat radiation", but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49% of the heating of Earth, with the rest being caused by visible light that is absorbed and then re-radiated at longer wavelengths. Visible-light or ultraviolet-emitting lasers can char paper, and incandescently hot objects emit visible radiation. 
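Wien's displacement law, invoked above for the location of thermal emission peaks, makes the room-temperature and solar figures in this article easy to check numerically. A quick sketch using the CODATA value of the displacement constant:

```python
WIEN_B = 2.897771955e-3  # Wien displacement-law constant, m*K

def peak_wavelength_m(temperature_k):
    """Wavelength of maximum black-body spectral emission per Wien's law."""
    return WIEN_B / temperature_k

print(peak_wavelength_m(300) * 1e6)   # ~9.66 um: room-temperature objects peak in thermal IR
print(peak_wavelength_m(5780) * 1e9)  # ~501 nm: the ~5,780 K Sun peaks in visible light
```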
Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 μm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law). Heat is energy in transit that flows due to a temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths that are associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiation is associated with spectra far above the infrared, extending into visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth. The concept of emissivity is important in understanding the infrared emissions of objects. This is a property of a surface that describes how its thermal emissions deviate from the ideal of a black body. To further explain, two objects at the same physical temperature may not show the same infrared image if they have differing emissivity. For example, when viewed with a camera set to a single pre-set emissivity value, objects with higher actual emissivity will appear hotter, and those with a lower emissivity will appear cooler (assuming, as is often the case, that the surrounding environment is cooler than the objects being viewed). When an object has less than perfect emissivity, it exhibits partial reflectivity and/or transparency, and so the temperature of the surrounding environment is partially reflected by and/or transmitted through the object. 
If the object were in a hotter environment, then a lower emissivity object at the same temperature would likely appear to be hotter than a more emissive one. For that reason, incorrect selection of emissivity and not accounting for environmental temperatures will give inaccurate results when using infrared cameras and pyrometers. Applications. Night vision. Infrared is used in night vision equipment when there is insufficient visible light to see. Night vision devices operate through a process involving the conversion of ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted back into visible light. Infrared light sources can be used to augment the available ambient light for conversion by night vision devices, increasing in-the-dark visibility without actually using a visible light source. The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment. Thermography. Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography, or in the case of very hot objects in the NIR or visible it is termed pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications but the technology is reaching the public market in the form of infrared cameras on cars due to greatly reduced production costs. Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nm or 9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. 
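The emissivity dependence described above can be sketched with a simplified single-band gray-body model (an assumption for illustration; real thermographic cameras also correct for atmospheric transmission and band-limited detector response): the detector sees the surface's own emission ε·σT⁴ plus the reflected ambient term (1−ε)·σT_amb⁴, and the model can be inverted to recover the true surface temperature.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def measured_radiance(t_obj_k, t_amb_k, emissivity):
    """Signal from an opaque gray surface: own emission plus reflected ambient."""
    return emissivity * SIGMA * t_obj_k**4 + (1.0 - emissivity) * SIGMA * t_amb_k**4

def object_temperature(signal, t_amb_k, emissivity):
    """Invert the gray-body model to recover the true surface temperature."""
    emitted = signal - (1.0 - emissivity) * SIGMA * t_amb_k**4
    return (emitted / (emissivity * SIGMA)) ** 0.25

# In cooler surroundings, a low-emissivity object reads cooler than a
# high-emissivity one at the same physical temperature:
print(measured_radiance(320.0, 295.0, 0.95) > measured_radiance(320.0, 295.0, 0.60))  # True
```

This is why entering the wrong emissivity, or ignoring the ambient temperature, produces the inaccurate camera and pyrometer readings noted above.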
The amount of radiation emitted by an object increases with temperature; therefore, thermography allows one to see variations in temperature (hence the name). Hyperspectral imaging. A hyperspectral image is a "picture" containing a continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy, particularly with the NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements. Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the Sun or the Moon. Such cameras are typically applied for geological measurements, outdoor surveillance, and UAV applications. Other imaging. In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can view intense near-infrared, which appears as a bright purple-white color. This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy. Tracking. 
Infrared tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles that use infrared seeking are often referred to as "heat-seekers" since infrared (IR) is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate and retain heat, and as such, are especially visible in the infrared wavelengths of light compared to objects in the background. Heating. Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as to remove ice from the wings of aircraft (de-icing). Infrared radiation is used in cooking, known as broiling or grilling. One energy advantage is that the IR energy heats only opaque objects, such as food, rather than the air around them. Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating. Cooling. A variety of technologies or proposed technologies take advantage of infrared emissions to cool buildings or other systems. The LWIR (8–15 μm) region is especially useful since some radiation at these wavelengths can escape into space through the atmosphere's infrared window. This is how passive daytime radiative cooling (PDRC) surfaces are able to achieve sub-ambient cooling temperatures under direct solar intensity, enhancing terrestrial heat flow to outer space with zero energy consumption or pollution. PDRC surfaces maximize shortwave solar reflectance to lessen heat gain while maintaining strong longwave infrared (LWIR) thermal radiation heat transfer. 
When imagined on a worldwide scale, this cooling method has been proposed as a way to slow and even reverse global warming, with some estimates proposing a global surface area coverage of 1–2% to balance global heat fluxes. Communications. IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that may be concentrated by a lens into a beam that the user aims at the detector. The beam is modulated, i.e. switched on and off, according to a code which the receiver interprets. Usually very near-IR is used (below 800 nm) for practical reasons. This wavelength is efficiently detected by inexpensive silicon photodiodes, which the receiver uses to convert the detected radiation to an electric current. That electrical signal is passed through a high-pass filter which retains the rapid pulsations due to the IR transmitter but filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density. IR does not penetrate walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances; infrared remote-control protocols such as RC-5 and SIRC are used for this communication. Free-space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared to the cost of burying fiber-optic cable; one drawback is the potential for radiation damage to the eye, since the eye cannot detect IR, and blinking or closing the eyes to help prevent or reduce damage may not happen. Infrared lasers are used to provide the light for optical fiber communications systems. 
Infrared light with a wavelength around 1,330 nm (least dispersion) or 1,550 nm (best transmission) is the best choice for standard silica fibers. IR data transmission of encoded audio versions of printed signs is being researched as an aid for visually impaired people through the RIAS (Remote Infrared Audible Signage) project. Transmitting IR data from one device to another is sometimes referred to as beaming. Spectroscopy. Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in the dipole moment of the molecule, then it will absorb a photon that has the same frequency. The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light radiation from the mid-infrared, 4,000–400 cm−1. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3,200 cm−1). The unit for expressing radiation in this application, cm−1, is the spectroscopic wavenumber. It is the frequency divided by the speed of light in vacuum. Thin film metrology. In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction coefficient (k) can be determined via the Forouhi–Bloomer dispersion equations. 
The reflectance from the infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high aspect ratio trench structures. Meteorology. Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface water temperatures, and to locate ocean surface features. The scanning is typically in the range 10.3–12.5 μm (IR4 and IR5 channels). Clouds with high and cold tops, such as cyclones or cumulonimbus clouds, are often displayed as red or black, lower warmer clouds such as stratus or stratocumulus are displayed as blue or grey, with intermediate clouds shaded accordingly. Hot land surfaces are shown as dark-grey or black. One disadvantage of infrared imagery is that low cloud such as stratus or fog can have a temperature similar to the surrounding land or sea surface and does not show up. However, using the difference in brightness of the IR4 channel (10.3–11.5 μm) and the near-infrared channel (1.58–1.64 μm), low cloud can be distinguished, producing a "fog" satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied. These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the gray-shaded thermal images can be converted to color for easier identification of desired information. The main water vapour channel at 6.40 to 7.08 μm can be imaged by some weather satellites and shows the amount of moisture in the atmosphere. Climatology. 
In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange between the Earth and the atmosphere. These trends provide information on long-term changes in Earth's climate. It is one of the primary parameters studied in research into global warming, together with solar radiation. A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. This is a broadband infrared radiometer with sensitivity for infrared radiation between approximately 4.5 μm and 50 μm. Astronomy. Astronomers observe objects in the infrared portion of the electromagnetic spectrum using optical components, including mirrors, lenses, and solid-state digital detectors. For this reason it is classified as part of optical astronomy. To form an image, the components of an infrared telescope need to be carefully shielded from heat sources, and the detectors are chilled using liquid helium. The sensitivity of Earth-based infrared telescopes is significantly limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer from this handicap, and so outer space is considered the ideal location for infrared astronomy. The infrared portion of the spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy will glow with radiated heat as they are irradiated by embedded stars. Infrared can also be used to detect protostars before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so nearby cool objects such as planets can be more readily detected. 
(In the visible light spectrum, the glare from the star will drown out the reflected light from a planet.) Infrared light is also useful for observing the cores of active galaxies, which are often cloaked in gas and dust. Distant galaxies with a high redshift will have the peak portion of their spectrum shifted toward longer wavelengths, so they are more readily observed in the infrared. Cleaning. Infrared cleaning is a technique used by some motion picture film scanners, film scanners and flatbed scanners to reduce or remove the effect of dust and scratches upon the finished scan. It works by collecting an additional infrared channel from the scan at the same position and resolution as the three visible color channels (red, green, and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches and dust. Once located, those defects can be corrected by scaling or replaced by inpainting. Art conservation and analysis. Infrared reflectography can be applied to paintings to reveal underlying layers in a non-destructive manner, in particular the artist's underdrawing or outline drawn as a guide. Art conservators use the technique to examine how the visible layers of paint differ from the underdrawing or layers in between (such alterations are called pentimenti when made by the original artist). This is very useful information in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered by over-enthusiastic restoration work. In general, the more pentimenti, the more likely a painting is to be the prime version. It also gives useful insights into working practices. Reflectography often reveals the artist's use of carbon black, which shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting. 
Recent progress in the design of infrared-sensitive cameras makes it possible to discover and depict not only underpaintings and pentimenti, but entire paintings that were later overpainted by the artist. Notable examples are Picasso's "Woman Ironing" and "Blue Room", where in both cases a portrait of a man has been made visible under the painting as it is known today. Similar uses of infrared are made by conservators and scientists on various types of objects, especially very old written documents such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the Dunhuang Caves. Carbon black used in ink can show up extremely well. Biological systems. The pit viper has a pair of infrared sensory pits on its head. There is uncertainty regarding the exact thermal sensitivity of this biological infrared detection system. Other organisms that have thermoreceptive organs are pythons (family Pythonidae), some boas (family Boidae), the common vampire bat ("Desmodus rotundus"), a variety of jewel beetles ("Melanophila acuminata"), darkly pigmented butterflies ("Pachliopta aristolochiae" and "Troides rhadamantus plateni"), and possibly blood-sucking bugs ("Triatoma infestans"). By detecting the heat that their prey emits, crotaline and boid snakes identify and capture their prey using their IR-sensitive pit organs. Similarly, IR-sensitive pits on the common vampire bat ("Desmodus rotundus") aid in the identification of blood-rich regions on its warm-blooded victims. The jewel beetle "Melanophila acuminata" locates forest fires via infrared pit organs and deposits its eggs on recently burnt trees. Thermoreceptors on the wings and antennae of darkly pigmented butterflies such as "Pachliopta aristolochiae" and "Troides rhadamantus plateni" shield them from heat damage as they bask in the sun. 
Additionally, it is hypothesised that thermoreceptors let bloodsucking bugs ("Triatoma infestans") locate their warm-blooded victims by sensing their body heat. Some fungi, such as "Venturia inaequalis", require near-infrared light for spore ejection. Although near-infrared vision (780–1,000 nm) has long been deemed impossible due to noise in visual pigments, sensation of near-infrared light has been reported in the common carp and in three cichlid species. Fish use NIR to capture prey and for phototactic swimming orientation. NIR sensation in fish may be relevant under poor lighting conditions during twilight and in turbid surface waters. Photobiomodulation. Near-infrared light, or photobiomodulation, is used for treatment of chemotherapy-induced oral ulceration as well as wound healing. There is some work relating to anti-herpes-virus treatment. Research projects include work on central nervous system healing effects via cytochrome c oxidase upregulation and other possible mechanisms. Health hazards. Strong infrared radiation in certain industrial high-heat settings may be hazardous to the eyes, resulting in damage or blindness to the user. Since the radiation is invisible, special IR-proof goggles must be worn in such places. Scientific history. The discovery of infrared radiation is ascribed to the astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the Sun and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result and called the new rays "Calorific Rays". The term "infrared" did not appear until the late 19th century. An earlier experiment in 1790 by Marc-Auguste Pictet demonstrated the reflection and focusing of radiant heat via mirrors in the absence of visible light. Other important dates include: 
<templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "E = J(T, n)" } ]
https://en.wikipedia.org/wiki?curid=15022
15023
Icosidodecahedron
Archimedean solid with 32 faces In geometry, an icosidodecahedron or pentagonal gyrobirotunda is a polyhedron with twenty ("icosi") triangular faces and twelve ("dodeca") pentagonal faces. An icosidodecahedron has 30 identical vertices, with two triangles and two pentagons meeting at each, and 60 identical edges, each separating a triangle from a pentagon. As such, it is one of the Archimedean solids and, more particularly, a quasiregular polyhedron. Construction. One way to construct the icosidodecahedron is to start with two pentagonal rotundas and attach them at their decagonal bases. These bases become interior to the solid, so that the resulting polyhedron has 32 faces, 30 vertices, and 60 edges. This construction is similar to one of the Johnson solids, the pentagonal orthobirotunda. The difference is that the icosidodecahedron is constructed by twisting its rotundas by 36°, a process known as gyration, resulting in the pentagonal face connecting to the triangular one. The icosidodecahedron has an alternative name, "pentagonal gyrobirotunda". Convenient Cartesian coordinates for the vertices of an icosidodecahedron with unit edges are given by the even permutations of: formula_0 where formula_1 denotes the golden ratio. Properties. The surface area of an icosidodecahedron formula_2 can be determined by summing the areas of all of its triangular and pentagonal faces. The volume of an icosidodecahedron formula_3 can be determined by slicing it into two pentagonal rotundas and summing their volumes. Therefore, its surface area and volume can be formulated as: formula_4 The dihedral angle of an icosidodecahedron between a pentagon and a triangle is formula_5 as determined by calculating the angle of a pentagonal rotunda. An icosidodecahedron has icosahedral symmetry, and its first stellation is the compound of a dodecahedron and its dual icosahedron, with the vertices of the icosidodecahedron located at the midpoints of the edges of either.
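As a numerical check on the closed forms above, a short Python sketch (illustrative, with unit edge length as the default) evaluates the surface area and volume; the rounded values match the approximations 29.306 and 13.836 given in the text.

```python
import math

def icosidodecahedron_area(a=1.0):
    """Surface area: 20 equilateral triangles plus 12 regular pentagons."""
    return (5 * math.sqrt(3) + 3 * math.sqrt(25 + 10 * math.sqrt(5))) * a**2

def icosidodecahedron_volume(a=1.0):
    """Volume, equivalently the sum of two pentagonal rotundas."""
    return (45 + 17 * math.sqrt(5)) / 6 * a**3

print(round(icosidodecahedron_area(), 3))    # 29.306
print(round(icosidodecahedron_volume(), 3))  # 13.836
```

Both functions scale as expected with edge length, quadratically for area and cubically for volume.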
The icosidodecahedron is an Archimedean solid, meaning it is a highly symmetric, semi-regular polyhedron in which two or more different regular polygons meet at each vertex. The polygonal faces that meet at every vertex are two equilateral triangles and two regular pentagons, and the vertex figure of an icosidodecahedron is formula_6. Its dual polyhedron is the rhombic triacontahedron, a Catalan solid. The icosidodecahedron has 6 central decagons. Projected onto a sphere, they define 6 great circles. These 6 great circles, along with 15 and 10 others arising from two other polyhedra, define the 31 great circles of the spherical icosahedron. The long radius (center to vertex) of the icosidodecahedron is in the golden ratio to its edge length; thus its radius is formula_1 if its edge length is 1, and its edge length is formula_7 if its radius is 1. Only a few uniform polytopes have this property, including the four-dimensional 600-cell, the three-dimensional icosidodecahedron, and the two-dimensional decagon. (The icosidodecahedron is the equatorial cross-section of the 600-cell, and the decagon is the equatorial cross-section of the icosidodecahedron.) These "radially golden" polytopes can be constructed, with their radii, from golden triangles which meet at the center, each contributing two radii and an edge. Related polytopes. The icosidodecahedron is a rectified dodecahedron and also a rectified icosahedron, existing as the full-edge truncation between these regular solids. The icosidodecahedron contains 12 pentagons of the dodecahedron and 20 triangles of the icosahedron: The icosidodecahedron exists in a sequence of symmetries of quasiregular polyhedra and tilings with vertex configurations (3."n")2, progressing from tilings of the sphere to the Euclidean plane and into the hyperbolic plane.
With orbifold notation symmetry of *"n"32, all of these tilings are Wythoff constructions within a fundamental domain of symmetry, with generator points at the right-angle corner of the domain. Related polyhedra. The truncated cube can be turned into an icosidodecahedron by dividing the octagons into two pentagons and two triangles. It has pyritohedral symmetry. Eight uniform star polyhedra share the same vertex arrangement. Of these, two also share the same edge arrangement: the small icosihemidodecahedron (having the triangular faces in common), and the small dodecahemidodecahedron (having the pentagonal faces in common). The vertex arrangement is also shared with the compounds of five octahedra and of five tetrahemihexahedra. Related polychora. In four-dimensional geometry, the icosidodecahedron appears in the regular 600-cell as the equatorial slice that belongs to the vertex-first passage of the 600-cell through 3D space. In other words: the 30 vertices of the 600-cell which lie at arc distances of 90 degrees on its circumscribed hypersphere from a pair of opposite vertices are the vertices of an icosidodecahedron. The wireframe figure of the 600-cell consists of 72 flat regular decagons. Six of these are the equatorial decagons to a pair of opposite vertices, and these six form the wireframe figure of an icosidodecahedron. If a 600-cell is stereographically projected to 3-space about any vertex and all points are normalised, the geodesics upon which edges fall comprise the icosidodecahedron's barycentric subdivision. Graph. The skeleton of an icosidodecahedron can be represented as the graph with 30 vertices and 60 edges, one of the Archimedean graphs. It is quartic, meaning that each of its vertices is connected to four other vertices. Appearance. The icosidodecahedron appears in structural applications, as in the geodesic dome of the Hoberman sphere. Icosidodecahedra can be found in all eukaryotic cells, including human cells, as Sec13/31 COPII coat-protein formations.
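The graph description above can be confirmed with a minimal sketch: building the vertex set from the even permutations of the Cartesian coordinates given in the Construction section and joining pairs at unit distance yields a 4-regular (quartic) graph on 30 vertices with 60 edges, with every vertex at the golden-ratio distance from the center.

```python
import itertools
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio

def even_perms(v):
    """The three cyclic (even) permutations of a 3-tuple."""
    x, y, z = v
    return [(x, y, z), (y, z, x), (z, x, y)]

# Vertices: even permutations of (0, 0, ±phi) and (±1/2, ±phi/2, ±phi^2/2)
vertices = set()
for sx, sy, sz in itertools.product((1, -1), repeat=3):
    for bx, by, bz in [(0, 0, phi), (0.5, phi / 2, phi**2 / 2)]:
        for p in even_perms((sx * bx, sy * by, sz * bz)):
            vertices.add(tuple(round(c, 9) for c in p))
vertices = sorted(vertices)

# Edges join vertex pairs at unit distance (the edge length for these coords)
edges = [(a, b) for a, b in itertools.combinations(vertices, 2)
         if abs(math.dist(a, b) - 1.0) < 1e-6]

degree = {v: 0 for v in vertices}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(len(vertices))         # 30 vertices
print(len(edges))            # 60 edges
print(set(degree.values()))  # {4}: the skeleton is quartic
# Every vertex lies at distance phi from the center (long radius = phi)
assert all(abs(math.dist(v, (0, 0, 0)) - phi) < 1e-6 for v in vertices)
```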
The icosidodecahedron is also found in popular culture. In the Star Trek universe, the Vulcan game of logic Kal-Toh has the goal of creating a shape with two nested holographic icosidodecahedra joined at the midpoints of their segments. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " (0, 0, \\pm \\varphi), \\qquad \\left(\\pm \\frac{1}{2}, \\pm \\frac{\\varphi}{2}, \\pm \\frac{\\varphi^2}{2} \\right), " }, { "math_id": 1, "text": " \\varphi " }, { "math_id": 2, "text": " A " }, { "math_id": 3, "text": " V " }, { "math_id": 4, "text": "\\begin{align}\nA &= \\left(5\\sqrt{3}+3\\sqrt{25+10\\sqrt{5}}\\right) a^2 &\\approx 29.306a^2 \\\\\nV &= \\frac{45+17\\sqrt{5}}{6}a^3 &\\approx 13.836a^3.\n\\end{align}" }, { "math_id": 5, "text": " \\arccos \\left(-\\sqrt{\\frac{5 + 2\\sqrt{5}}{15}} \\right) \\approx 142.62^\\circ, " }, { "math_id": 6, "text": " (3 \\cdot 5)^2 = 3^2 \\cdot 5^2 " }, { "math_id": 7, "text": " 1/\\varphi " } ]
https://en.wikipedia.org/wiki?curid=15023
150248
Water wheel
Machine for converting the energy of flowing or falling water into useful forms of power A water wheel is a machine for converting the energy of flowing or falling water into useful forms of power, often in a watermill. A water wheel consists of a wheel (usually constructed from wood or metal), with a number of blades or buckets arranged on the outside rim forming the driving car. Water wheels were still in commercial use well into the 20th century, but they are no longer in common use today. Uses included milling flour in gristmills, grinding wood into pulp for papermaking, hammering wrought iron, machining, ore crushing and pounding fibre for use in the manufacture of cloth. Some water wheels are fed by water from a mill pond, which is formed when a flowing stream is dammed. A channel for the water flowing to or from a water wheel is called a mill race. The race bringing water from the mill pond to the water wheel is a headrace; the one carrying water after it has left the wheel is commonly referred to as a tailrace. Waterwheels were used for various purposes, from agriculture to metallurgy, in ancient civilizations spanning the Hellenistic Greek world, Rome, China and India. Waterwheels saw continued use in the post-classical age, as in medieval Europe and the Islamic Golden Age, but also elsewhere. In the mid- to late 18th century John Smeaton's scientific investigation of the water wheel led to significant increases in efficiency, supplying much-needed power for the Industrial Revolution. Water wheels began being displaced by the smaller, less expensive and more efficient turbine, developed by Benoît Fourneyron, beginning with his first model in 1827. Turbines are capable of handling high "heads", or elevations, that exceed the capability of practical-sized waterwheels. The main difficulty of water wheels is their dependence on flowing water, which limits where they can be located.
Modern hydroelectric dams can be viewed as the descendants of the water wheel, as they too take advantage of the movement of water downhill. Types. Water wheels come in two basic designs: a horizontal wheel with a vertical axle, and a vertical wheel with a horizontal axle. The latter can be subdivided according to where the water hits the wheel into backshot (pitch-back), overshot, breastshot, undershot, and stream-wheels. The term "undershot" can refer to any wheel where the water passes under the wheel but it usually implies that the water entry is low on the wheel. Overshot and backshot water wheels are typically used where the available height difference is more than a couple of meters. Breastshot wheels are more suited to large flows with a moderate head. Undershot and stream wheels use large flows at little or no head. There is often an associated millpond, a reservoir for storing water and hence energy until it is needed. Larger heads store more gravitational potential energy for the same amount of water so the reservoirs for overshot and backshot wheels tend to be smaller than for breastshot wheels. Overshot and pitchback water wheels are suitable where there is a small stream with a height difference of more than , often in association with a small reservoir. Breastshot and undershot wheels can be used on rivers or high volume flows with large reservoirs. Vertical axis. A horizontal wheel with a vertical axle. Commonly called a tub wheel, Norse mill or Greek mill, the horizontal wheel is a primitive and inefficient form of the modern turbine. However, if it delivers the required power then the efficiency is of secondary importance. It is usually mounted inside a mill building below the working floor. A jet of water is directed on to the paddles of the water wheel, causing them to turn. This is a simple system usually without gearing so that the vertical axle of the water wheel becomes the drive spindle of the mill. Stream.
A stream wheel is a vertically mounted water wheel that is rotated by the water in a water course striking paddles or blades at the bottom of the wheel. This type of water wheel is the oldest type of horizontal axis wheel. They are also known as free surface wheels because the water is not constrained by millraces or wheel pits. Stream wheels are cheaper and simpler to build and have less of an environmental impact than other types of wheels. They do not constitute a major change to the river. Their disadvantages are their low efficiency, which means that they generate less power and can only be used where the flow rate is sufficient. A typical flat board undershot wheel uses about 20 percent of the energy in the flow of water striking the wheel as measured by English civil engineer John Smeaton in the 18th century. More modern wheels have higher efficiencies. Stream wheels gain little or no advantage from the head, a difference in water level. Stream wheels mounted on floating platforms are often referred to as ship wheels and the mill as a ship mill. They were sometimes mounted immediately downstream from bridges where the flow restriction of the bridge piers increased the speed of the current. Historically they were very inefficient but major advances were made in the eighteenth century. Undershot wheel. An undershot wheel is a vertically mounted water wheel with a horizontal axle that is rotated by the water from a low weir striking the wheel in the bottom quarter. Most of the energy gain is from the movement of the water and comparatively little from the head. They are similar in operation and design to stream wheels. The term undershot is sometimes used with related but different meanings: This is the oldest type of vertical water wheel. Breastshot wheel. The word breastshot is used in a variety of ways. Some authors restrict the term to wheels where the water enters at about the 10 o’clock position, others 9 o’clock, and others for a range of heights.
In this article it is used for wheels where the water entry is significantly above the bottom and significantly below the top, typically the middle half. They are characterized by: Both kinetic (movement) and potential (height and weight) energy are utilised. The small clearance between the wheel and the masonry requires that a breastshot wheel have a good trash rack ('screen' in British English) to prevent debris from jamming between the wheel and the apron and potentially causing serious damage. Breastshot wheels are less efficient than overshot and backshot wheels but they can handle high flow rates and consequently high power. They are preferred for steady, high-volume flows such as are found on the Fall Line of the North American East Coast. Breastshot wheels are the most common type in the United States of America and are said to have powered the Industrial Revolution. Overshot wheel. A vertically mounted water wheel that is rotated by water entering buckets just past the top of the wheel is said to be overshot. The term is sometimes, erroneously, applied to backshot wheels, where the water goes down behind the wheel. A typical overshot wheel has the water channeled to the wheel at the top and slightly beyond the axle. The water collects in the buckets on that side of the wheel, making it heavier than the other "empty" side. The weight turns the wheel, and the water flows out into the tail-water when the wheel rotates enough to invert the buckets. The overshot design is very efficient, achieving up to 90%, and does not require rapid flow. Nearly all of the energy is gained from the weight of water lowered to the tailrace although a small contribution may be made by the kinetic energy of the water entering the wheel. They are suited to larger heads than the other type of wheel so they are ideally suited to hilly countries. However even the largest water wheel, the Laxey Wheel in the Isle of Man, only utilises a head of around .
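The efficiency figures in this section can be put in context with the standard hydraulic power relation P = ρgQH, scaled by the wheel's efficiency. The flow rates, heads, and efficiencies below are illustrative assumptions for a small installation, not measurements of any particular wheel.

```python
# Hedged sketch of the power available to a water wheel: P = rho * g * Q * H,
# multiplied by efficiency. All numeric inputs below are illustrative
# assumptions, not figures from the text.

RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def wheel_power(flow_m3_s, head_m, efficiency):
    """Mechanical power in watts delivered by a wheel of given efficiency."""
    return RHO * G * flow_m3_s * head_m * efficiency

# A small overshot wheel: 0.1 m^3/s over a 3 m head at 90% efficiency
overshot = wheel_power(0.1, 3.0, 0.90)

# A flat-board undershot wheel: same flow, 1 m effective head, 20% efficiency
undershot = wheel_power(0.1, 1.0, 0.20)

print(f"overshot:  {overshot:.0f} W")   # about 2.6 kW
print(f"undershot: {undershot:.0f} W")  # about 0.2 kW
```

The comparison illustrates why a large head with a modest flow favours an overshot wheel, while low-head sites must rely on volume and accept lower efficiency.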
The world's largest head turbines, Bieudron Hydroelectric Power Station in Switzerland, utilise about . Overshot wheels require a large head compared to other types of wheel which usually means significant investment in constructing the headrace. Sometimes the final approach of the water to the wheel is along a flume or penstock, which can be lengthy. Backshot wheel. A backshot wheel (also called pitchback) is a variety of overshot wheel where the water is introduced just before the summit of the wheel. In many situations, it has the advantage that the bottom of the wheel is moving in the same direction as the water in the tailrace which makes it more efficient. It also performs better than an overshot wheel in flood conditions when the water level may submerge the bottom of the wheel. It will continue to rotate until the water in the wheel pit rises quite high on the wheel. This makes the technique particularly suitable for streams that experience significant variations in flow and reduces the size, complexity, and hence cost of the tailrace. The direction of rotation of a backshot wheel is the same as that of a breastshot wheel but in other respects, it is very similar to the overshot wheel. See below. Hybrid. Overshot and backshot. Some wheels are overshot at the top and backshot at the bottom thereby potentially combining the best features of both types. The photograph shows an example at Finch Foundry in Devon, UK. The head race is the overhead timber structure and a branch to the left supplies water to the wheel. The water exits from under the wheel back into the stream. Reversible. A special type of overshot/backshot wheel is the reversible water wheel. This has two sets of blades or buckets running in opposite directions so that it can turn in either direction depending on which side the water is directed. Reversible wheels were used in the mining industry in order to power various means of ore conveyance. 
By changing the direction of the wheel, barrels or baskets of ore could be lifted up or lowered down a shaft or inclined plane. There was usually a cable drum or a chain basket on the axle of the wheel. It is essential that the wheel have braking equipment to be able to stop the wheel (known as a braking wheel). The oldest known drawing of a reversible water wheel was by Georgius Agricola and dates to 1556. History. As in all machinery, rotary motion is more efficient in water-raising devices than oscillating motion. In terms of power source, waterwheels can be turned by either human or animal force or by the water current itself. Waterwheels come in two basic designs, either equipped with a vertical or a horizontal axle. The latter type can be subdivided, depending on where the water hits the wheel paddles, into overshot, breastshot and undershot wheels. The two main functions of waterwheels were historically water-lifting for irrigation purposes and milling, particularly of grain. In the case of horizontal-axle mills, a system of gears is required for power transmission, which vertical-axle mills do not need. China. The earliest waterwheel working like a lever was described by Zhuangzi in the late Warring States period (476-221 BC). It says that the waterwheel was invented by Zigong, a disciple of Confucius in the 5th century BC. By at least the 1st century AD, the Chinese of the Eastern Han Dynasty were using water wheels to crush grain in mills and to power the piston-bellows in forging iron ore into cast iron. In the text known as the "Xin Lun" written by Huan Tan about 20 AD (during the usurpation of Wang Mang), it states that the legendary mythological king known as Fu Xi was the one responsible for the pestle and mortar, which evolved into the tilt-hammer and then trip hammer device (see trip hammer).
Although the author speaks of the mythological Fu Xi, a passage of his writing hints that the water wheel was in widespread use by the 1st century AD in China (Wade-Giles spelling): Fu Hsi invented the pestle and mortar, which is so useful, and later on it was cleverly improved in such a way that the whole weight of the body could be used for treading on the tilt-hammer ("tui"), thus increasing the efficiency ten times. Afterwards the power of animals—donkeys, mules, oxen, and horses—was applied by means of machinery, and water-power too used for pounding, so that the benefit was increased a hundredfold. In the year 31 AD, the engineer and Prefect of Nanyang, Du Shi (d. 38), applied a complex use of the water wheel and machinery to power the bellows of the blast furnace to create cast iron. Du Shi is mentioned briefly in the "Book of Later Han" ("Hou Han Shu") as follows (in Wade-Giles spelling): In the seventh year of the Chien-Wu reign period (31 AD) Tu Shih was posted to be Prefect of Nanyang. He was a generous man and his policies were peaceful; he destroyed evil-doers and established the dignity (of his office). Good at planning, he loved the common people and wished to save their labor. He invented a water-power reciprocator ("shui phai") for the casting of (iron) agricultural implements. Those who smelted and cast already had the push-bellows to blow up their charcoal fires, and now they were instructed to use the rushing of the water ("chi shui") to operate it ... Thus the people got great benefit for little labor. They found the 'water(-powered) bellows' convenient and adopted it widely. Water wheels in China found practical uses such as this, as well as extraordinary use. The Chinese inventor Zhang Heng (78–139) was the first in history to apply motive power in rotating the astronomical instrument of an armillary sphere, by use of a water wheel. The mechanical engineer Ma Jun (c.
200–265) from Cao Wei once used a water wheel to power and operate a large mechanical puppet theater for the Emperor Ming of Wei (r. 226–239). Western world. Greco-Roman world. The breakthrough occurred in the technologically advanced Hellenistic period between the 3rd and 1st century BC. Water-lifting. The compartmented water wheel comes in two basic forms, the wheel with compartmented body (Latin "tympanum") and the wheel with compartmented rim or a rim with separate, attached containers. The wheels could be turned either by men treading on the outside or by animals by means of a sakia gear. While the tympanum had a large discharge capacity, it could lift the water only to less than the height of its own radius and required a large torque for rotating. These constructional deficiencies were overcome by the wheel with a compartmented rim, which was a lighter design with a higher lift. The earliest literary reference to a water-driven, compartmented wheel appears in the technical treatise "Pneumatica" (chap. 61) of the Greek engineer Philo of Byzantium (c. 280 – c. 220 BC). In his "Parasceuastica" (91.43−44), Philo advises the use of such wheels for submerging siege mines as a defensive measure against enemy sapping. Compartmented wheels appear to have been the means of choice for draining dry docks in Alexandria under the reign of Ptolemy IV (221−205 BC). Several Greek papyri of the 3rd to 2nd century BC mention the use of these wheels, but do not give further details. The non-existence of the device in the Ancient Near East before Alexander's conquest can be deduced from its pronounced absence from the otherwise rich oriental iconography on irrigation practices. Unlike other water-lifting devices and pumps of the period though, the invention of the compartmented wheel cannot be traced to any particular Hellenistic engineer and may have been made in the late 4th century BC in a rural context away from the metropolis of Alexandria.
The earliest depiction of a compartmented wheel is from a tomb painting in Ptolemaic Egypt which dates to the 2nd century BC. It shows a pair of yoked oxen driving the wheel via a sakia gear, which is here for the first time attested, too. The Greek sakia gear system is already shown fully developed to the point that "modern Egyptian devices are virtually identical". It is assumed that the scientists of the Museum of Alexandria, at the time the most active Greek research center, may have been involved in its invention. An episode from the Alexandrian War in 48 BC tells of how Caesar's enemies employed geared waterwheels to pour sea water from elevated places on the position of the trapped Romans. Around 300 AD, the noria was finally introduced when the wooden compartments were replaced with inexpensive ceramic pots that were tied to the outside of an open-framed wheel. The Romans used waterwheels extensively in mining projects, with enormous Roman-era waterwheels found in places like modern-day Spain. They were reverse overshot water-wheels designed for dewatering deep underground mines. Several such devices are described by Vitruvius, including the reverse overshot water-wheel and the Archimedean screw. Many were found during modern mining at the copper mines at Rio Tinto in Spain, one system involving 16 such wheels stacked above one another so as to lift water about 80 feet from the mine sump. Part of such a wheel was found at Dolaucothi, a Roman gold mine in south Wales in the 1930s when the mine was briefly re-opened. It was found about 160 feet below the surface, so must have been part of a similar sequence as that discovered at Rio Tinto. It has recently been carbon dated to about 90 AD, and since the wood from which it was made is much older than the deep mine, it is likely that the deep workings were in operation perhaps 30–50 years after. 
It is clear from these examples of drainage wheels found in sealed underground galleries in widely separated locations that building water wheels was well within the Romans' capabilities, and that such vertical water wheels were commonly used for industrial purposes. Watermilling. Taking indirect evidence into account from the work of the Greek technician Apollonius of Perge, the British historian of technology M.J.T. Lewis dates the appearance of the vertical-axle watermill to the early 3rd century BC, and the horizontal-axle watermill to around 240 BC, with Byzantium and Alexandria as the assigned places of invention. A watermill is reported by the Greek geographer Strabon (c. 64 BC – c. AD 24) to have existed sometime before 71 BC in the palace of the Pontian king Mithradates VI Eupator, but its exact construction cannot be gleaned from the text (XII, 3, 30 C 556). The first clear description of a geared watermill is offered by the late 1st century BC Roman architect Vitruvius, who tells of the sakia gearing system as being applied to a watermill. Vitruvius's account is particularly valuable in that it shows how the watermill came about, namely by the combination of the separate Greek inventions of the toothed gear and the waterwheel into one effective mechanical system for harnessing water power. Vitruvius' waterwheel is described as being immersed with its lower end in the watercourse so that its paddles could be driven by the velocity of the running water (X, 5.2). About the same time, the overshot wheel appears for the first time in a poem by Antipater of Thessalonica, which praises it as a labour-saving device (IX, 418.4–6). The motif is also taken up by Lucretius (ca. 99–55 BC) who likens the rotation of the waterwheel to the motion of the stars on the firmament (V 516). The third horizontal-axled type, the breastshot waterwheel, comes into archaeological evidence in a late 2nd century AD context in central Gaul.
Most excavated Roman watermills were equipped with one of these wheels which, although more complex to construct, were much more efficient than the vertical-axle waterwheel. In the 2nd century AD Barbegal watermill complex, a series of sixteen overshot wheels was fed by an artificial aqueduct; this proto-industrial grain factory has been referred to as "the greatest known concentration of mechanical power in the ancient world". In Roman North Africa, several installations from around 300 AD were found where vertical-axle waterwheels fitted with angled blades were installed at the bottom of a water-filled, circular shaft. The water from the mill-race, which entered the pit tangentially, created a swirling water column that made the fully submerged wheels act like true water turbines, the earliest known to date. Navigation. Apart from its use in milling and water-raising, ancient engineers applied the paddled waterwheel for automatons and in navigation. Vitruvius (X 9.5–7) describes multi-geared paddle wheels working as a ship odometer, the earliest of its kind. The first mention of paddle wheels as a means of propulsion comes from the 4th–5th century military treatise "De Rebus Bellicis" (chapter XVII), where the anonymous Roman author describes an ox-driven paddle-wheel warship. Early Medieval Europe. Ancient water-wheel technology continued unabated in the early medieval period, where the appearance of new documentary genres such as legal codes, monastic charters, but also hagiography was accompanied by a sharp increase in references to watermills and wheels. The earliest vertical-wheel in a tide mill is from 6th-century Killoteran near Waterford, Ireland, while the first known horizontal-wheel in such a type of mill is from the Irish Little Island (c. 630). As for the use in a common Norse or Greek mill, the oldest known horizontal-wheels were excavated in the Irish Ballykilleen, dating to c. 636.
The earliest excavated water wheel driven by tidal power was the Nendrum Monastery mill in Northern Ireland which has been dated to 787, although a possible earlier mill dates to 619. Tide mills became common in estuaries with a good tidal range in both Europe and America, generally using undershot wheels. Cistercian monasteries, in particular, made extensive use of water wheels to power watermills of many kinds. An early example of a very large water wheel is the still extant wheel at the early 13th century Real Monasterio de Nuestra Senora de Rueda, a Cistercian monastery in the Aragon region of Spain. Grist mills (for grain) were undoubtedly the most common, but there were also sawmills, fulling mills and mills to fulfil many other labour-intensive tasks. The water wheel remained competitive with the steam engine well into the Industrial Revolution. At around the 8th to 10th century, a number of irrigation technologies were brought into Spain and thus introduced to Europe. One of those technologies is the Noria, which is basically a wheel fitted with buckets on the periphery for lifting water. It is similar to the undershot water wheel mentioned earlier in this article. It allowed peasants to power watermills more efficiently. According to Thomas Glick's book, "Irrigation and Society in Medieval Valencia", the Noria probably originated from somewhere in Persia. It had been used for centuries before the technology was brought into Spain by Arabs who had adopted it from the Romans. Thus the distribution of the Noria in the Iberian peninsula "conforms to the area of stabilized Islamic settlement". This technology had a profound effect on the life of peasants. The Noria is relatively cheap to build. Thus it allowed peasants to cultivate land more efficiently in Europe. The technology spread with the Spaniards to the New World, in Mexico and South America, following Spanish expansion. Domesday inventory of English mills c. 1086.
The assembly convened by William of Normandy, commonly referred to as the "Domesday" or Doomsday survey, took an inventory of all potentially taxable property in England, which included over six thousand mills spread across three thousand different locations, up from less than a hundred in the previous century. Locations. The type of water wheel selected was dependent upon the location. Generally, if only small volumes of water and high waterfalls were available, a millwright would choose to use an overshot wheel. The decision was influenced by the fact that the buckets could catch and use even a small volume of water. For large volumes of water with small waterfalls the undershot wheel would have been used, since it was more adapted to such conditions and cheaper to construct. So long as these water supplies were abundant the question of efficiency remained irrelevant. By the 18th century, with increased demand for power coupled with limited water locales, an emphasis was placed on efficiency. Economic influence. By the 11th century there were parts of Europe where the exploitation of water was commonplace. The water wheel is understood to have actively shaped and forever changed the outlook of Westerners. Europe began to transition from human and animal muscle labor towards mechanical labor with the advent of the water wheel. Medievalist Lynn White Jr. contended that the spread of inanimate power sources was eloquent testimony to the emergence in the West of a new attitude toward power, work, nature, and, above all else, technology. Harnessing water-power enabled gains in agricultural productivity, food surpluses, and large-scale urbanization starting in the 11th century. The usefulness of water power motivated European experiments with other power sources, such as wind and tidal mills. Waterwheels influenced the construction of cities, more specifically canals.
The techniques that developed during this early period, such as stream jamming and the building of canals, put Europe on a hydraulically focused path; for instance, water supply and irrigation technology were combined to modify the supply of power to the wheel, illustrating the great degree of technological innovation that met the growing needs of the feudal state. Applications of the water wheel. The water mill was used for grinding grain, producing flour for bread, malt for beer, or coarse meal for porridge. Hammermills used the wheel to operate hammers. One type was the fulling mill, which was used for cloth making. The trip hammer was also used for making wrought iron and for working iron into useful shapes, an activity that was otherwise labour-intensive. The water wheel was also used in papermaking, beating material to a pulp. In the 13th century water mills used for hammering throughout Europe improved the productivity of early steel manufacturing. Along with the mastery of gunpowder, waterpower provided European countries worldwide military leadership from the 15th century. 17th- and 18th-century Europe. Millwrights distinguished between the two forces, impulse and weight, at work in water wheels long before 18th-century Europe. Fitzherbert, a 16th-century agricultural writer, wrote "druieth the wheel as well as with the weight of the water as with strengthe [impulse]". Leonardo da Vinci also discussed water power, noting "the blow [of the water] is not weight, but excites a power of weight, almost equal to its own power". However, even with the realisation of the two forces, weight and impulse, confusion remained over the advantages and disadvantages of the two, and there was no clear understanding of the superior efficiency of weight. Prior to 1750 it was unclear which force was dominant, and it was widely understood that both forces operated with equal effect. 
The waterwheel sparked questions about the laws of nature, specifically the laws of force. Evangelista Torricelli's work on water wheels drew on an analysis of Galileo's work on falling bodies: the velocity of water spouting from an orifice under its head was exactly equivalent to the velocity a drop of water would acquire in falling freely from the same height. Industrial Europe. The water wheel was a driving force behind the earliest stages of industrialization in Britain. Water-powered reciprocating devices were used in trip hammers and blast furnace bellows. Richard Arkwright's water frame was powered by a water wheel. The most powerful water wheel built in the United Kingdom was the 100 hp Quarry Bank Mill water wheel near Manchester. A high breastshot design, it was retired in 1904 and replaced with several turbines. It has now been restored and is a museum open to the public. The biggest working water wheel in mainland Britain has a diameter of and was built by the De Winton company of Caernarfon. It is located within the Dinorwic workshops of the National Slate Museum in Llanberis, North Wales. The largest working water wheel in the world is the Laxey Wheel (also known as "Lady Isabella") in the village of Laxey, Isle of Man. It is in diameter and wide and is maintained by Manx National Heritage. During the Industrial Revolution, in the first half of the 19th century, engineers started to design better wheels. In 1823 Jean-Victor Poncelet invented a very efficient undershot wheel design that could work on very low heads; it was commercialized and became popular by the late 1830s. Other designs, such as the Sagebien wheel, followed later. At the same time Claude Burdin was working on a radically different machine which he called a "turbine", and his pupil Benoît Fourneyron designed the first commercial one in the 1830s. The development of water turbines led to a decline in the popularity of water wheels. 
The main advantage of turbines is that they can harness a head much greater than the diameter of the turbine, whereas a water wheel cannot effectively harness a head greater than its diameter. The migration from water wheels to modern turbines took about one hundred years. North America. Water wheels were used to power sawmills, grist mills and for other purposes during the development of the United States. The diameter water wheel at McCoy, Colorado, built in 1922, is a surviving one of the many which lifted water for irrigation out of the Colorado River. Two early improvements were suspension wheels and rim gearing. Suspension wheels are constructed in the same manner as a bicycle wheel, the rim being supported under tension from the hub; this led to larger, lighter wheels than the earlier design in which the heavy spokes were under compression. Rim gearing entailed adding a notched wheel to the rim or shroud of the wheel. A stub gear engaged the rim gear and took the power into the mill using an independent line shaft. This removed the rotative stress from the axle, which could thus be lighter, and also allowed more flexibility in the location of the power train. The shaft rotation was geared up from that of the wheel, which led to less power loss. An example of this design, pioneered by Thomas Hewes and refined by William Fairbairn, can be seen at the 1849 restored wheel at the Portland Basin Canal Warehouse. Somewhat related were the fish wheels used in the American Northwest and Alaska, which lifted salmon out of the flow of rivers. Australia. Australia has a relatively dry climate; nonetheless, where suitable water resources were available, water wheels were constructed in 19th-century Australia. These were used to power sawmills, flour mills, and stamper batteries used to crush gold-bearing ore. 
Notable examples of water wheels used in gold recovery operations were the large Garfield water wheel near Chewton—one of at least seven water wheels in the surrounding area—and the two water wheels at Adelong Falls; some remnants exist at both sites. The mining area at Walhalla once had at least two water wheels, one of which was rolled to its site from Port Albert, on its rim using a novel trolley arrangement, taking nearly 90 days. A water wheel at Jindabyne, constructed in 1847, was the first machine used to extract energy—for flour milling—from the Snowy River. Compact water wheels, known as Dethridge wheels, were used not as sources of power but to measure water flows to irrigated land. New Zealand. Water wheels were used extensively in New Zealand. The well-preserved remains of the Young Australian mine's overshot water wheel exist near the ghost town of Carricktown, and those of the Phoenix flour mill's water wheel are near Oamaru. India. The early history of the watermill in India is obscure. Ancient Indian texts dating back to the 4th century BC refer to the term "cakkavattaka" (turning wheel), which commentaries explain as "arahatta-ghati-yanta" (machine with wheel-pots attached). On this basis, Joseph Needham suggested that the machine was a noria. Terry S. Reynolds, however, argues that the "term used in Indian texts is ambiguous and does not clearly indicate a water-powered device." Thorkild Schiøler argued that it is "more likely that these passages refer to some type of tread- or hand-operated water-lifting device, instead of a water-powered water-lifting wheel." According to Greek historical tradition, India received water-mills from the Roman Empire in the early 4th century AD when a certain Metrodoros introduced "water-mills and baths, unknown among them [the Brahmans] till then". Irrigation water for crops was provided by using water raising wheels, some driven by the force of the current in the river from which the water was being raised. 
This kind of water raising device was used in ancient India, predating, according to Pacey, its use in the later Roman Empire or China, even though the first literary, archaeological and pictorial evidence of the water wheel appeared in the Hellenistic world. Around 1150, the astronomer Bhaskara Achārya observed water-raising wheels and imagined such a wheel lifting enough water to replenish the stream driving it, effectively a perpetual motion machine. The construction of water works and aspects of water technology in India is described in Arabic and Persian works. During medieval times, the diffusion of Indian and Persian irrigation technologies gave rise to an advanced irrigation system which brought about economic growth and also helped in the growth of material culture. Islamic world. After the spread of Islam, engineers of the Islamic world continued the water technologies of the ancient Near East, as evident in the excavation of a canal in the Basra region with remains of a water wheel dating from the 7th century. Hama in Syria still preserves some of its large wheels, on the river Orontes, although they are no longer in use. One of the largest had a diameter of about and its rim was divided into 120 compartments. Another wheel that is still in operation is found at Murcia in Spain, La Nora, and although the original wheel has been replaced by a steel one, the Moorish system during al-Andalus is otherwise virtually unchanged. Some medieval Islamic compartmented water wheels could lift water as high as . Muhammad ibn Zakariya al-Razi's "Kitab al-Hawi" in the 10th century described a noria in Iraq that could lift as much as , or . This is comparable to the output of modern norias in East Asia, which can lift up to , or . The industrial uses of watermills in the Islamic world date back to the 7th century, while horizontal-wheeled and vertical-wheeled water mills were both in widespread use by the 9th century. 
A variety of industrial watermills were used in the Islamic world, including gristmills, hullers, sawmills, shipmills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic world had these industrial watermills in operation, from al-Andalus and North Africa to the Middle East and Central Asia. Muslim and Christian engineers also used crankshafts and water turbines, gears in watermills and water-raising machines, and dams as a source of water, used to provide additional power to watermills and water-raising machines. Fulling mills and steel mills may have spread from Islamic Spain to Christian Spain in the 12th century. Industrial water mills were also employed in large factory complexes built in al-Andalus between the 11th and 13th centuries. The engineers of the Islamic world developed several solutions to achieve the maximum output from a water wheel. One solution was to mount them to piers of bridges to take advantage of the increased flow. Another solution was the shipmill, a type of water mill powered by water wheels mounted on the sides of ships moored in midstream. This technique was employed along the Tigris and Euphrates rivers in 10th-century Iraq, where large shipmills made of teak and iron could produce 10 tons of flour from grain every day for the granary in Baghdad. The flywheel mechanism, which is used to smooth out the delivery of power from a driving device to a driven machine, was invented by Ibn Bassal (fl. 1038–1075) of Al-Andalus; he pioneered the use of the flywheel in the saqiya (chain pump) and noria. The engineers Al-Jazari in the 13th century and Taqi al-Din in the 16th century described many inventive water-raising machines in their technological treatises. They also employed water wheels to power a variety of devices, including various water clocks and automata. Modern developments. Hydraulic wheel. 
A recent development of the breastshot wheel is a hydraulic wheel which effectively incorporates automatic regulation systems. The Aqualienne is one example. It generates between 37 kW and 200 kW of electricity from a waterflow with a head of . It is designed to produce electricity at the sites of former watermills. Efficiency. Overshot (and particularly backshot) wheels are the most efficient type; a backshot steel wheel can be more efficient (about 60%) than all but the most advanced and well-constructed turbines. In some situations an overshot wheel is preferable to a turbine. The development of hydraulic turbine wheels with their improved efficiency (>67%) opened up an alternative path for the installation of water wheels in existing mills, or the redevelopment of abandoned mills. The power of a wheel. The energy available to the wheel has two components: the potential energy due to the head of water, and the kinetic energy of the moving water. The kinetic energy can be accounted for by converting it into an equivalent head, the velocity head, and adding it to the actual head. For still water the velocity head is zero, and to a good approximation it is negligible for slowly moving water, and can be ignored. The velocity in the tail race is not taken into account because for a perfect wheel the water would leave with zero energy, which requires zero velocity. That is impossible; the water has to move away from the wheel, and this represents an unavoidable cause of inefficiency. The power is how fast that energy is delivered, which is determined by the flow rate. It has been estimated that the ancient donkey or slave-powered quern of Rome made about one-half of a horsepower, the horizontal waterwheel created slightly more than one-half of a horsepower, the undershot vertical waterwheel produced about three horsepower, and the medieval overshot waterwheel produced up to forty to sixty horsepower. Quantities and units. Measurements. The pressure head formula_14 is the difference in height between the head race and tail race water surfaces. 
The velocity head formula_15 is calculated from the velocity of the water in the head race at the same place as the pressure head is measured from. The velocity (speed) formula_16 can be measured by the pooh sticks method, timing a floating object over a measured distance. The water at the surface moves faster than water nearer to the bottom and sides, so a correction factor should be applied as in the formula below. There are many ways to measure the volume flow rate. Two of the simplest are: Hydraulic wheel part reaction turbine. A parallel development is the hydraulic wheel/part reaction turbine that also incorporates a weir into the centre of the wheel but uses blades angled to the water flow. The WICON-Stem Pressure Machine (SPM) exploits this flow. Estimated efficiency 67%. The University of Southampton School of Civil Engineering and the Environment in the UK has investigated both types of hydraulic wheel machines and has estimated their hydraulic efficiency and suggested improvements, i.e. the Rotary Hydraulic Pressure Machine (estimated maximum efficiency 85%). These types of water wheels have high efficiency at part loads / variable flows and can operate at very low heads, < . Combined with direct-drive axial-flux permanent-magnet alternators and power electronics they offer a viable alternative for low-head hydroelectric power generation. Explanatory notes. <templatestyles src="Citation/styles.css"/>^ Dotted notation. A dot above the quantity indicates that it is a rate: in other words, how much per second. In this article q is a volume of water and formula_17 is a volume of water per second. q, as in quantity of water, is used to avoid confusion with v for velocity. Citations. <templatestyles src="Reflist/styles.css" />
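Putting the measured quantities together, the available shaft power follows from the head and the flow rate. The sketch below is not from the article; every numeric value and the helper name are illustrative assumptions.

```python
# Sketch: shaft power of a water wheel from head and flow rate.
# P = efficiency * rho * g * q_dot * (pressure head + velocity head)
# All numbers are illustrative assumptions, not values from the article.

g = 9.81      # gravitational acceleration, m/s^2
rho = 1000.0  # density of water, kg/m^3

def wheel_power(q_dot, pressure_head, surface_velocity, k=0.8, efficiency=0.6):
    """Shaft power in watts.

    q_dot            -- volume flow rate, m^3/s
    pressure_head    -- head race height minus tail race height, m
    surface_velocity -- float-timed surface speed of the head race, m/s
    k                -- correction: mean flow is slower than the surface flow
    efficiency       -- overall wheel efficiency (~0.6 for a good backshot wheel)
    """
    v = k * surface_velocity           # corrected mean velocity
    velocity_head = v ** 2 / (2 * g)   # kinetic energy as an equivalent head
    return efficiency * rho * g * q_dot * (pressure_head + velocity_head)

# Hypothetical mill: 0.5 m^3/s over a 3 m head, 0.5 m/s surface speed.
# Note how small the velocity-head term is for slowly moving water.
print(round(wheel_power(0.5, 3.0, 0.5)))  # 8853 (watts), roughly 12 horsepower
```

As the text notes, for a slow head race the velocity head contributes almost nothing here (about 8 mm of equivalent head against 3 m of pressure head), which is why it can usually be ignored.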
[ { "math_id": 0, "text": "\\eta=" }, { "math_id": 1, "text": "\\rho=" }, { "math_id": 2, "text": "A=" }, { "math_id": 3, "text": "D=" }, { "math_id": 4, "text": "P=" }, { "math_id": 5, "text": "d=" }, { "math_id": 6, "text": "g=" }, { "math_id": 7, "text": "h=" }, { "math_id": 8, "text": "h_p=" }, { "math_id": 9, "text": "h_v=" }, { "math_id": 10, "text": "k=" }, { "math_id": 11, "text": "v=" }, { "math_id": 12, "text": "\\dot q =" }, { "math_id": 13, "text": "t=" }, { "math_id": 14, "text": "h_p" }, { "math_id": 15, "text": "h_v" }, { "math_id": 16, "text": "v" }, { "math_id": 17, "text": "\\dot q" } ]
https://en.wikipedia.org/wiki?curid=150248
1502669
Rendering equation
Integral equation In computer graphics, the rendering equation is an integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation. It was simultaneously introduced into computer graphics by David Immel et al. and James Kajiya in 1986. The various realistic rendering techniques in computer graphics attempt to solve this equation. The physical basis for the rendering equation is the law of conservation of energy. Assuming that "L" denotes radiance, we have that at each particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light (Lr). The reflected light itself is the sum from all directions of the incoming light (Li) multiplied by the surface reflection and cosine of the incident angle. Equation form. The rendering equation may be written in the form formula_0 formula_1 where Two noteworthy features are: its linearity—it is composed only of multiplications and additions, and its spatial homogeneity—it is the same in all positions and orientations. These mean a wide range of factorings and rearrangements of the equation are possible. It is a Fredholm integral equation of the second kind, similar to those that arise in quantum field theory. Note this equation's spectral and time dependence — formula_18 may be sampled at or integrated over sections of the visible spectrum to obtain, for example, a trichromatic color sample. A pixel value for a single frame in an animation may be obtained by fixing formula_19 motion blur can be produced by averaging formula_18 over some given time interval (by integrating over the time interval and dividing by the length of the interval). Note that a solution to the rendering equation is the function formula_18. 
The function formula_20 is related to formula_18 via a ray-tracing operation: The incoming radiance from some direction at one point is the outgoing radiance at some other point in the opposite direction. Applications. Solving the rendering equation for any given scene is the primary challenge in realistic rendering. One approach to solving the equation is based on finite element methods, leading to the radiosity algorithm. Another approach using Monte Carlo methods has led to many different algorithms including path tracing, photon mapping, and Metropolis light transport, among others. Limitations. Although the equation is very general, it does not capture every aspect of light reflection. Some missing aspects include the following: For scenes that are either not composed of simple surfaces in a vacuum or for which the travel time for light is an important factor, researchers have generalized the rendering equation to produce a "volume rendering equation" suitable for volume rendering and a "transient rendering equation" for use with data from a time-of-flight camera.
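As a concrete illustration of the Monte Carlo approach mentioned above, the sketch below (not from the article) estimates the reflected-radiance integral for the simplest possible case: a Lambertian BRDF under constant incoming radiance, for which the integral has the exact closed form albedo times the incoming radiance.

```python
# Sketch: Monte Carlo estimate of the reflected-radiance integral
#   L_r = integral over the hemisphere of f_r * L_i * (w_i . n) dw_i,
# the core step of path tracing. Assumptions (not from the article):
# a Lambertian BRDF f_r = albedo/pi and constant incoming radiance L_i,
# for which the exact answer is albedo * L_i.
import math
import random

def estimate_reflected_radiance(albedo, L_i, samples=200_000, seed=0):
    rng = random.Random(seed)
    f_r = albedo / math.pi          # Lambertian BRDF, direction-independent
    pdf = 1.0 / (2.0 * math.pi)     # uniform density over the hemisphere
    total = 0.0
    for _ in range(samples):
        # Uniform hemisphere sampling: cos(theta) ~ U[0,1]; the azimuth is
        # irrelevant here because the integrand depends only on cos(theta).
        cos_theta = rng.random()
        # Standard importance-sampling estimator: integrand / pdf
        total += f_r * L_i * cos_theta / pdf
    return total / samples

# Exact value for albedo = 0.5, L_i = 1.0 is 0.5; the estimate converges to it.
print(estimate_reflected_radiance(0.5, 1.0))  # close to 0.5, up to Monte Carlo noise
```

Practical path tracers refine this in two ways: the incoming radiance is itself obtained recursively by tracing a ray (the ray-tracing relationship described above), and directions are importance-sampled proportionally to the integrand to reduce variance.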
[ { "math_id": 0, "text": "L_{\\text{o}}(\\mathbf x, \\omega_{\\text{o}}, \\lambda, t) = L_{\\text{e}}(\\mathbf x, \\omega_{\\text{o}}, \\lambda, t) + L_{\\text{r}}(\\mathbf x, \\omega_{\\text{o}}, \\lambda, t)" }, { "math_id": 1, "text": "L_{\\text{r}}(\\mathbf x, \\omega_{\\text{o}}, \\lambda, t) = \\int_\\Omega f_{\\text{r}}(\\mathbf x, \\omega_{\\text{i}}, \\omega_{\\text{o}}, \\lambda, t) L_{\\text{i}}(\\mathbf x, \\omega_{\\text{i}}, \\lambda, t) (\\omega_{\\text{i}}\\cdot\\mathbf n) \\operatorname d \\omega_{\\text{i}}" }, { "math_id": 2, "text": "L_{\\text{o}}(\\mathbf x, \\omega_{\\text{o}}, \\lambda, t)" }, { "math_id": 3, "text": "\\lambda" }, { "math_id": 4, "text": "\\omega_{\\text{o}}" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "\\mathbf x" }, { "math_id": 7, "text": "L_{\\text{e}}(\\mathbf x, \\omega_{\\text{o}}, \\lambda, t)" }, { "math_id": 8, "text": "L_{\\text{r}}(\\mathbf x, \\omega_{\\text{o}}, \\lambda, t)" }, { "math_id": 9, "text": "\\int_\\Omega \\dots \\operatorname d\\omega_{\\text{i}}" }, { "math_id": 10, "text": "\\Omega" }, { "math_id": 11, "text": "\\mathbf n" }, { "math_id": 12, "text": "\\omega_{\\text{i}}" }, { "math_id": 13, "text": "\\omega_{\\text{i}}\\cdot\\mathbf n > 0" }, { "math_id": 14, "text": "f_{\\text{r}}(\\mathbf x, \\omega_{\\text{i}}, \\omega_{\\text{o}}, \\lambda, t)" }, { "math_id": 15, "text": "L_{\\text{i}}(\\mathbf x, \\omega_{\\text{i}}, \\lambda, t)" }, { "math_id": 16, "text": "\\omega_{\\text{i}} \\cdot \\mathbf n" }, { "math_id": 17, "text": "\\cos \\theta_i" }, { "math_id": 18, "text": "L_{\\text{o}}" }, { "math_id": 19, "text": "t;" }, { "math_id": 20, "text": "L_{\\text{i}}" } ]
https://en.wikipedia.org/wiki?curid=1502669
15027727
Universal Soil Loss Equation
The Universal Soil Loss Equation (USLE) is a widely used mathematical model that describes soil erosion processes. Erosion models play critical roles in soil and water resource conservation and nonpoint source pollution assessments, including: sediment load assessment and inventory, conservation planning and design for sediment control, and the advancement of scientific understanding. The USLE or one of its derivatives is the main model used by United States government agencies to measure water erosion. The USLE was developed in the U.S., based on soil erosion data collected beginning in the 1930s by the U.S. Department of Agriculture (USDA) Soil Conservation Service (now the USDA Natural Resources Conservation Service). The model has been used for decades for purposes of conservation planning both in the United States where it originated and around the world, and has been used to help implement the United States' multibillion-dollar conservation program. The Revised Universal Soil Loss Equation (RUSLE) and the Modified Universal Soil Loss Equation (MUSLE) continue to be used for similar purposes. Overview of erosion models. The two primary types of erosion models are process-based models and empirically based models. Process-based (physically based) models mathematically describe the erosion processes of detachment, transport, and deposition and, through the solutions of the equations describing those processes, provide estimates of soil loss and sediment yields from specified land surface areas. Erosion science is not sufficiently advanced for there to exist completely process-based models which do not include empirical aspects. The primary indicator, perhaps, for differentiating process-based from other types of erosion models is the use of the sediment continuity equation discussed below. Empirical models relate management and environmental factors directly to soil loss and/or sediment yields through statistical relationships. Lane et al. 
provided a detailed discussion regarding the nature of process-based and empirical erosion models, as well as a discussion of what they termed conceptual models, which lie somewhere between the process-based and purely empirical models. Current research effort involving erosion modeling is weighted toward the development of process-based erosion models. On the other hand, the standard model for most erosion assessment and conservation planning is the empirically based USLE, and there continues to be active research and development of USLE-based erosion prediction technology. Description of USLE. The USLE was developed from erosion plot and rainfall simulator experiments. The USLE is composed of six factors to predict the long-term average annual soil loss (A). The equation includes the rainfall erosivity factor (R), the soil erodibility factor (K), the topographic factors (L and S) and the cropping management factors (C and P). The equation takes the simple product form: formula_0 The USLE has another concept of experimental importance, the unit plot concept. The unit plot is defined as the standard plot condition used to determine the soil's erodibility. These conditions are met when the LS factor = 1 (slope = 9% and length = 22.1 m (72.6 ft)), the plot is fallow, tillage is up and down the slope, and no conservation practices are applied (CP = 1). In this state: formula_1 A simpler method to predict K was presented by Wischmeier et al., which includes the particle size of the soil, organic matter content, soil structure and profile permeability. The soil erodibility factor K can be approximated from a nomograph if this information is known. The LS factors can easily be determined from a slope effect chart by knowing the length and gradient of the slope. The cropping management factor (C) and conservation practices factor (P) are more difficult to obtain and must be determined empirically from plot data. They are described in soil loss ratios (C or P with / C or P without). 
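The product form above is trivial to evaluate once the six factors are known. In the sketch below (not from the article), all factor values are illustrative placeholders, not calibrated data.

```python
# Sketch: evaluating the USLE product A = R * K * L * S * C * P.
# All factor values below are illustrative placeholders; in practice R, K and
# LS come from rainfall records, soil nomographs and slope-effect charts,
# and C, P from empirical soil-loss-ratio tables.

def usle_soil_loss(R, K, L, S, C, P):
    """Long-term average annual soil loss A (e.g. tons per acre per year)."""
    return R * K * L * S * C * P

# A hypothetical field:
A = usle_soil_loss(R=125.0, K=0.28, L=1.2, S=0.9, C=0.25, P=0.6)
print(round(A, 2))  # 5.67

# Unit-plot condition: LS = 1 and C = P = 1, so A = R * K, which is why the
# soil erodibility can be measured there as K = A / R.
A_unit = usle_soil_loss(R=125.0, K=0.28, L=1.0, S=1.0, C=1.0, P=1.0)
print(round(A_unit / 125.0, 2))  # 0.28, i.e. K = A/R
```

The second computation mirrors the unit-plot definition in the text: with the topographic and management factors fixed at one, the equation collapses to A = R*K, so measuring A and R on a unit plot yields K directly.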
Over the last few decades, various techniques have emerged to compute the five RUSLE factors. However, determining the P factor has proven to be challenging as there is usually a lack of geospatial information on the specific soil conservation practices in a given region. Thus, to estimate the P factor value in the RUSLE formula, a combination of land use type and slope gradient is often used, where a lower value indicates more effective control of soil erosion. The practice of creating field boundaries, such as stone walls, hedgerows, earth banks, and lynchets, was effective in preventing or reducing soil erosion in pre-industrial agriculture. Recently a novel P-factor model for Europe has been developed from the data retrieved during a statistical survey that recorded the occurrence of stone walls and grass margins in EU countries. While this is one of the first efforts to incorporate cultural landscape features into a soil erosion model on a continental scale, the authors of the study pointed out several limitations, such as the small number of surveyed points and the chosen interpolation technique. It has been demonstrated that landscape archaeology has the potential to fill this gap in the data about soil conservation practices using a GIS-based tool called Historic Landscape Characterisation (HLC). Starting from the assumptions that the construction of field boundaries has always represented an effective method to limit soil erosion and that the efficiency of any conservation measures to mitigate soil erosion increases with the increasing of the slope, a new P factor equation has been developed integrating the HLC within the RUSLE model. In a recent study, modelling landscape archaeological data in a soil loss estimation equation enables deeper reflection on how historic strategies for soil management might relate to current environmental and climate conditions. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "A = R K L S C P" }, { "math_id": 1, "text": "K = A/R" } ]
https://en.wikipedia.org/wiki?curid=15027727
1502835
Magnetohydrodynamic generator
Magnetohydrodynamic converter that transforms thermal and kinetic energy into electricity A magnetohydrodynamic generator (MHD generator) is a magnetohydrodynamic converter that transforms thermal energy and kinetic energy directly into electricity. An MHD generator, like a conventional generator, relies on moving a conductor through a magnetic field to generate electric current. The MHD generator uses hot conductive ionized gas (a plasma) as the moving conductor. The mechanical dynamo, in contrast, uses the motion of mechanical devices to accomplish this. MHD generators are different from traditional electric generators in that they operate without moving parts (e.g. no turbine) to limit the upper temperature. They therefore have the highest known theoretical thermodynamic efficiency of any electrical generation method. MHD has been extensively developed as a topping cycle to increase the efficiency of electric generation, especially when burning coal or natural gas. The hot exhaust gas from an MHD generator can heat the boilers of a steam power plant, increasing overall efficiency. Practical MHD generators have been developed for fossil fuels, but these were overtaken by less expensive combined cycles in which the exhaust of a gas turbine or molten carbonate fuel cell heats steam to power a steam turbine. MHD dynamos are the complement of MHD accelerators, which have been applied to pump liquid metals, seawater, and plasmas. Natural MHD dynamos are an active area of research in plasma physics and are of great interest to the geophysics and astrophysics communities, since the magnetic fields of the Earth and Sun are produced by these natural dynamos. Principle. The Lorentz force law describes the effects of a charged particle moving in a constant magnetic field. The simplest form of this law is given by the vector equation formula_0 where F is the force acting on the particle, q is the charge of the particle, v is the velocity of the particle, and B is the magnetic field. The vector F is perpendicular to both v and B according to the right-hand rule. Power generation. 
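As a rough numerical illustration of how the Lorentz force principle above drives power generation, the sketch below (not from the article) evaluates the force on a charge carried by the moving gas and the open-circuit voltage across a simple channel; every numeric value is a hypothetical assumption.

```python
# Sketch: the Lorentz force F = q (v x B) on a charge carried by the moving
# gas, and the open-circuit voltage ~ u*B*d across a simple Faraday channel.
# All numeric values are illustrative assumptions.

def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def lorentz_force(q, v, B):
    """Force (N) on charge q (C) moving with velocity v (m/s) in field B (T)."""
    fx, fy, fz = cross(v, B)
    return (q * fx, q * fy, q * fz)

# An electron (q < 0) moving along +x through a field along +z: v x B points
# along -y, and the negative charge reverses this, pushing the electron to +y,
# perpendicular to both v and B as the text describes.
q_e = -1.602e-19
F = lorentz_force(q_e, (1000.0, 0.0, 0.0), (0.0, 0.0, 4.0))
print(F[1] > 0.0)  # True: charges of opposite sign drift to opposite walls

# Faraday-channel estimate: gas at u = 1000 m/s in B = 4 T across a d = 0.5 m
# duct gives an open-circuit voltage of roughly u * B * d:
u, B_mag, d = 1000.0, 4.0, 0.5
print(u * B_mag * d, "volts")  # 2000.0 volts
```

The charge separation computed here is exactly what the electrodes of the generator designs below collect as current.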
Typically, for a large power station to approach the operational efficiency of computer models, steps must be taken to increase the electrical conductivity of the conductive substance. The heating of a gas to its plasma state or the addition of other easily ionizable substances like the salts of alkali metals can accomplish this increase. In practice, a number of issues must be considered in the implementation of an MHD generator: generator efficiency, economics, and toxic byproducts. These issues are affected by the choice of one of the three MHD generator designs: the Faraday generator, the Hall generator, and the disc generator. Faraday generator. The Faraday generator is named for Michael Faraday's experiments on moving charged particles in the Thames River. A simple Faraday generator would consist of a wedge-shaped pipe or tube of some non-conductive material. When an electrically conductive fluid flows through the tube, in the presence of a significant perpendicular magnetic field, a voltage is induced in the fluid, which can be drawn off as electrical power by placing electrodes on the sides at 90-degree angles to the magnetic field. There are limitations on the density and type of field used. The amount of power that can be extracted is proportional to the cross-sectional area of the tube and the speed of the conductive flow. The conductive substance is also cooled and slowed by this process. MHD generators typically reduce the temperature of the conductive substance from plasma temperatures to just over 1000 °C. The main practical problem of a Faraday generator is that differential voltages and currents in the fluid short through the electrodes on the sides of the duct. The most significant loss comes from the Hall effect current. This makes the Faraday duct very inefficient. Most further refinements of MHD generators have tried to solve this problem. The optimal magnetic field on duct-shaped MHD generators is a sort of saddle shape. 
To get this field, a large generator requires an extremely powerful magnet. Many research groups have tried to adapt superconducting magnets to this purpose, with varying success. (For references, please see the discussion of generator efficiency, below.) Hall generator. The typical solution, historically, has been to use the Hall effect to create a current that flows with the fluid. (See illustration.) This design has arrays of short, segmented electrodes on the sides of the duct. The first and last electrodes in the duct power the load. Each other electrode is shorted to an electrode on the opposite side of the duct. These shorts of the Faraday current induce a powerful magnetic field within the fluid, but in a chord of a circle at right angles to the Faraday current. This secondary, induced field makes the current flow in a rainbow shape between the first and last electrodes. Losses are less than in a Faraday generator, and voltages are higher because there is less shorting of the final induced current. However, this design has problems because the speed of the material flow requires the middle electrodes to be offset to "catch" the Faraday currents. As the load varies, the fluid flow speed varies, misaligning the Faraday current with its intended electrodes, and making the generator's efficiency very sensitive to its load. Disc generator. The third and, currently, the most efficient design is the Hall effect disc generator. This design currently holds the efficiency and energy density records for MHD generation. A disc generator has fluid flowing between the center of a disc, and a duct wrapped around the edge. (The ducts are not shown.) The magnetic excitation field is made by a pair of circular Helmholtz coils above and below the disk. (The coils are not shown.) The Faraday currents flow in a perfect dead short around the periphery of the disk. 
The Hall effect currents flow between ring electrodes near the center duct and ring electrodes near the periphery duct. The wide, flat gas flow reduces the path length, and hence the resistance, of the moving fluid. This increases efficiency. Another significant advantage of this design is that the magnets are more efficient. First, they cause simple parallel field lines. Second, because the fluid is processed in a disk, the magnet can be closer to the fluid, and in this magnetic geometry, magnetic field strengths increase as the 7th power of distance. Finally, the generator is compact for its power, so the magnet is also smaller. The resulting magnet uses a much smaller percentage of the generated power. Generator efficiency. The efficiency of the direct energy conversion in MHD power generation increases with the magnetic field strength and the plasma conductivity, which depends directly on the plasma temperature, and more precisely on the electron temperature. As very hot plasmas can only be used in pulsed MHD generators (for example using shock tubes) due to the fast thermal material erosion, it was envisaged to use nonthermal plasmas as working fluids in steady MHD generators, where only the free electrons are strongly heated (10,000–20,000 kelvins) while the main gas (neutral atoms and ions) remains at a much lower temperature, typically 2500 kelvins. The goal was to preserve the materials of the generator (walls and electrodes) while improving the limited conductivity of such poor conductors to the same level as a plasma in thermodynamic equilibrium; i.e. completely heated to more than 10,000 kelvins, a temperature that no material could stand.
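The dependence of extracted power on conductivity, flow speed, and field strength noted above can be sketched with the standard first-order model of an ideal Faraday channel, p = σ·v²·B²·k·(1−k), where k is the load factor. This relation is a textbook illustration; all numbers below are assumptions for illustration, not values taken from this article.

```python
def mhd_power_density(sigma, v, B, k):
    """Electrical power density (W/m^3) extracted from an ideal Faraday channel.

    sigma: plasma conductivity (S/m), v: flow speed (m/s),
    B: magnetic flux density (T), k: load factor in (0, 1).
    """
    return sigma * v**2 * B**2 * k * (1.0 - k)

# Illustrative seeded combustion plasma: sigma ~ 10 S/m, v ~ 1000 m/s, B ~ 5 T.
p = mhd_power_density(10.0, 1000.0, 5.0, 0.5)
print(p / 1e6)  # 62.5, i.e. ~62.5 MW per cubic metre at matched load
```

Extraction per unit volume peaks at k = 0.5; the quadratic dependence on B is why superconducting magnets matter so much to the economics discussed below.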
But Evgeny Velikhov first discovered theoretically in 1962 and experimentally in 1963 that an ionization instability, later called the Velikhov instability or electrothermal instability, quickly arises in any MHD converter using magnetized nonthermal plasmas with hot electrons, once a critical Hall parameter is reached, which depends on the degree of ionization and the magnetic field. Such an instability greatly degrades the performance of nonequilibrium MHD generators. This crippled the prospects of the technology, which had initially promised very high efficiencies, and stalled MHD programs all over the world, as no solution to mitigate the instability was found at the time. Consequently, without implementing solutions to master the electrothermal instability, practical MHD generators had to limit the Hall parameter or use moderately heated thermal plasmas instead of cold plasmas with hot electrons, which severely lowers efficiency. As of 1994, the 22% efficiency record for closed-cycle disc MHD generators was held by the Tokyo Institute of Technology. The peak enthalpy extraction in these experiments reached 30.2%. Typical open-cycle Hall & duct coal MHD generators are lower, near 17%. These efficiencies make MHD unattractive, by itself, for utility power generation, since conventional Rankine cycle power plants easily reach 40%. However, the exhaust of an MHD generator burning fossil fuel is almost as hot as a flame. By routing its exhaust gases into a heat exchanger for a turbine Brayton cycle or steam generator Rankine cycle, MHD can convert fossil fuels into electricity with an estimated efficiency of up to 60 percent, compared to the 40 percent of a typical coal plant. A magnetohydrodynamic generator might also be the first stage of a gas core reactor. Material and design issues. MHD generators have difficult problems in regard to materials, both for the walls and the electrodes. Materials must not melt or corrode at very high temperatures.
Exotic ceramics were developed for this purpose and must be selected to be compatible with the fuel and ionization seed. The exotic materials and the difficult fabrication methods contribute to the high cost of MHD generators. Also, MHDs work better with stronger magnetic fields. The most successful magnets have been superconducting, and very close to the channel. A major difficulty was refrigerating these magnets while insulating them from the channel. The problem is worse because the magnets work better when they are closer to the channel. There are also severe risks of damage to the hot, brittle ceramics from differential thermal cracking. The magnets are usually near absolute zero, while the channel is several thousand degrees. For MHDs, both alumina (Al2O3) and magnesium oxide (MgO) were reported to work for the insulating walls. Magnesium oxide degrades near moisture. Alumina is water-resistant and can be fabricated to be quite strong, so in practice, most MHDs have used alumina for the insulating walls. For the electrodes of clean MHDs (i.e. burning natural gas), one good material was a mix of 80% CeO2, 18% ZrO2, and 2% Ta2O5. Coal-burning MHDs have intensely corrosive environments with slag. The slag both protects and corrodes MHD materials. In particular, migration of oxygen through the slag accelerates the corrosion of metallic anodes. Nonetheless, very good results have been reported with stainless steel electrodes at 900 K. Another, perhaps superior option is a spinel ceramic, FeAl2O4–Fe3O4. The spinel was reported to have electronic conductivity and no resistive reaction layer, though with some diffusion of iron into the alumina. The diffusion of iron could be controlled with a thin layer of very dense alumina, and water cooling in both the electrodes and alumina insulators. Attaching the high-temperature electrodes to conventional copper bus bars is also challenging.
The usual methods establish a chemical passivation layer, and cool the busbar with water. Economics. MHD generators have not been employed for large-scale mass energy conversion because other techniques with comparable efficiency have a lower lifecycle investment cost. Advances in natural gas turbines achieved similar thermal efficiencies at lower costs, by having the turbine's exhaust drive a Rankine cycle steam plant. To get more electricity from coal, it is cheaper to simply add more low-temperature steam-generating capacity. A coal-fueled MHD generator is a type of Brayton power cycle, similar to the power cycle of a combustion turbine. However, unlike the combustion turbine, there are no moving mechanical parts; the electrically conducting plasma provides the moving electrical conductor. The side walls and electrodes merely withstand the pressure within, while the anode and cathode conductors collect the electricity that is generated. All Brayton cycles are heat engines. Ideal Brayton cycles also have an ideal efficiency equal to ideal Carnot cycle efficiency. Thus, the potential for high energy efficiency from an MHD generator. All Brayton cycles have higher potential for efficiency the higher the firing temperature. While a combustion turbine is limited in maximum temperature by the strength of its air/water or steam-cooled rotating airfoils; there are no rotating parts in an open-cycle MHD generator. This upper bound in temperature limits the energy efficiency in combustion turbines. The upper bound on Brayton cycle temperature for an MHD generator is not limited, so inherently an MHD generator has a higher potential capability for energy efficiency. 
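The Carnot argument above can be made concrete with a rough numeric sketch. The temperatures and stage efficiencies below are illustrative assumptions, not figures from this article:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Ideal Carnot bound (shared by the ideal Brayton cycle) between two temperatures in kelvins."""
    return 1.0 - t_cold_k / t_hot_k

def combined_cycle(eta_topping, eta_bottoming):
    """Overall efficiency of a topping stage whose exhaust heat feeds a bottoming cycle."""
    return eta_topping + (1.0 - eta_topping) * eta_bottoming

# Assumed firing temperatures: blade cooling caps a combustion turbine near
# 1800 K, while an MHD channel with no moving parts might fire near 3000 K.
print(carnot_efficiency(1800.0, 300.0))  # ideal bound for the turbine, ~0.83
print(carnot_efficiency(3000.0, 300.0))  # ideal bound for MHD, 0.90

# A 22% MHD topping stage over a 40% steam plant:
print(combined_cycle(0.22, 0.40))  # 0.532 overall
```

The second function shows why even a modest MHD topping stage is attractive: its flame-hot exhaust is not wasted but drives the bottoming steam cycle, which is the combined-cycle arrangement described above.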
The temperatures at which linear coal-fueled MHD generators can operate are limited by factors that include: (a) the combustion fuel, oxidizer, and oxidizer preheat temperature, which limit the maximum temperature of the cycle; (b) the ability to protect the sidewalls and electrodes from melting; (c) the ability to protect the electrodes from electrochemical attack from the hot slag coating the walls, combined with the high current or arcs that impinge on the electrodes as they carry off the direct current from the plasma; and (d) the capability of the electrical insulators between each electrode. Coal-fired MHD plants with oxygen/air and high oxidant preheats would probably provide potassium-seeded plasmas of about 4200 °F, 10 atmospheres pressure, and begin expansion at Mach 1.2. These plants would recover MHD exhaust heat for oxidant preheat, and for combined cycle steam generation. With aggressive assumptions, one DOE-funded feasibility study of where the technology could go, "1000 MWe Advanced Coal-Fired MHD/Steam Binary Cycle Power Plant Conceptual Design", published in June 1989, showed that a large coal-fired MHD combined cycle plant could attain a HHV energy efficiency approaching 60 percent—well in excess of other coal-fueled technologies, so the potential for low operating costs exists. However, no testing at those aggressive conditions or size has yet occurred, and there are no large MHD generators now under test. There is simply an inadequate reliability track record to provide confidence in a commercial coal-fueled MHD design. U25B MHD testing in Russia using natural gas as fuel used a superconducting magnet, and had an output of 1.4 megawatts. A coal-fired MHD generator series of tests funded by the U.S. Department of Energy (DOE) in 1992 produced MHD power from a larger superconducting magnet at the Component Development and Integration Facility (CDIF) in Butte, Montana.
None of these tests were conducted for long enough durations to verify the commercial durability of the technology. Neither of the test facilities was at a large enough scale for a commercial unit. Superconducting magnets are used in the larger MHD generators to eliminate one of the large parasitic losses: the power needed to energize the electromagnet. Superconducting magnets, once charged, consume no power and can develop intense magnetic fields of 4 teslas and higher. The only parasitic loads for the magnets are maintaining the refrigeration and making up the small losses in the non-superconducting connections. Because of the high temperatures, the non-conducting walls of the channel must be constructed from an exceedingly heat-resistant substance such as yttrium oxide or zirconium dioxide to retard oxidation. Similarly, the electrodes must be both conductive and heat-resistant at high temperatures. The AVCO coal-fueled MHD generator at the CDIF was tested with water-cooled copper electrodes capped with platinum, tungsten, stainless steel, and electrically conducting ceramics. Toxic byproducts. MHD reduces the overall production of hazardous fossil fuel wastes because it increases plant efficiency. In MHD coal plants, the patented commercial "Econoseed" process developed by the U.S. (see below) recycles potassium ionization seed from the fly ash captured by the stack-gas scrubber. However, this equipment is an additional expense. If molten metal is the armature fluid of an MHD generator, care must be taken with the coolant of the electromagnetics and channel. The alkali metals commonly used as MHD fluids react violently with water. Also, the chemical byproducts of heated, electrified alkali metals and channel ceramics may be poisonous and environmentally persistent. History. The first practical MHD power research was funded in 1938 in the U.S. by Westinghouse in its Pittsburgh, Pennsylvania laboratories, headed by Hungarian Bela Karlovitz.
The initial patent on MHD is by B. Karlovitz, U.S. Patent No. 2,210,918, "Process for the Conversion of Energy", August 13, 1940. World War II interrupted development. In 1962, the First International Conference on MHD Power was held in Newcastle upon Tyne, UK by Dr. Brian C. Lindley of the International Research and Development Company Ltd. The group set up a steering committee to organize further conferences and disseminate ideas. In 1964, the group set up a second conference in Paris, France, in consultation with the European Nuclear Energy Agency. Since membership in the ENEA was limited, the group persuaded the International Atomic Energy Agency to sponsor a third conference, in Salzburg, Austria, in July 1966. Negotiations at this meeting converted the steering committee into a periodic reporting group, the ILG-MHD (international liaison group, MHD), under the ENEA, and later in 1967, also under the International Atomic Energy Agency. Further research in the 1960s by R. Rosa established the practicality of MHD for fossil-fueled systems. In the 1960s, AVCO Everett Aeronautical Research began a series of experiments, ending with the Mk. V generator of 1965. This generated 35 MW, but used about 8 MW to drive its magnet. In 1966, the ILG-MHD had its first formal meeting in Paris, France. It began issuing a periodic status report in 1967. This pattern persisted, in this institutional form, up until 1976. Toward the end of the 1960s, interest in MHD declined because nuclear power was becoming more widely available. In the late 1970s, as interest in nuclear power declined, interest in MHD increased. In 1975, UNESCO became persuaded that MHD might be the most efficient way to utilise world coal reserves, and in 1976, sponsored the ILG-MHD.
In 1976, it became clear that no nuclear reactor in the next 25 years would use MHD, so the International Atomic Energy Agency and ENEA (both nuclear agencies) withdrew support from the ILG-MHD, leaving UNESCO as the primary sponsor of the ILG-MHD. Former Yugoslavia development. Over a span of more than ten years, engineers at the Institute of Thermal and Nuclear Technology (ITEN) of Energoinvest Co., Sarajevo, in the former Yugoslavia, built the first experimental magnetohydrodynamic facility power generator in 1989. It was there that it was first patented. U.S. development. In the 1980s, the U.S. Department of Energy began a vigorous multiyear program, culminating in a 1992 50 MW demonstration coal combustor at the Component Development and Integration Facility (CDIF) in Butte, Montana. This program also had significant work at the Coal-Fired-In-Flow-Facility (CFIFF) at the University of Tennessee Space Institute. This program combined four parts: Initial prototypes at the CDIF were operated for short durations, with various coals: Montana Rosebud, and a high-sulphur corrosive coal, Illinois No. 6. A great deal of engineering, chemistry, and material science was completed. After the final components were developed, operational testing was completed with 4,000 hours of continuous operation, 2,000 on Montana Rosebud and 2,000 on Illinois No. 6. The testing ended in 1993. Japanese development. The Japanese program in the late 1980s concentrated on closed-cycle MHD. The belief was that it would have higher efficiencies and smaller equipment, especially in the clean, small, economical plant capacities near 100 megawatts (electrical) which are suited to Japanese conditions. Open-cycle coal-powered plants are generally thought to become economical above 200 megawatts. The first major series of experiments was FUJI-1, a blow-down system powered from a shock tube at the Tokyo Institute of Technology.
These experiments extracted up to 30.2% of the enthalpy, and achieved power densities near 100 megawatts per cubic meter. This facility was funded by Tokyo Electric Power, other Japanese utilities, and the Department of Education. Some authorities believe this system was a disc generator with a helium and argon carrier gas and potassium ionization seed. In 1994, there were detailed plans for FUJI-2, a 5 MWe continuous closed-cycle facility, powered by natural gas, to be built using the experience of FUJI-1. The basic MHD design was to be a system with inert gases using a disk generator. The aim was an enthalpy extraction of 30% and an MHD thermal efficiency of 60%. FUJI-2 was to be followed by a retrofit to a 300 MWe natural gas plant. Australian development. In 1986, Professor Hugo Karl Messerle at the University of Sydney researched coal-fueled MHD. This resulted in a 28 MWe topping facility that was operated outside Sydney. Messerle also wrote one of the most recent reference works (see below), as part of a UNESCO education program. A detailed obituary for Hugo is located on the Australian Academy of Technological Sciences and Engineering (ATSE) website. Italian development. The Italian program began in 1989 with a budget of about 20 million US$, and had three main development areas: Chinese development. A joint U.S.-China national programme ended in 1992 by retrofitting the coal-fired No. 3 plant in Asbach. A further eleven-year program was approved in March 1994. This established centres of research in: The 1994 study proposed a 10 MW (electrical, 108 MW thermal) generator with the MHD and bottoming cycle plants connected by steam piping, so either could operate independently. Russian developments. In 1971, the natural-gas-fired U-25 plant was completed near Moscow, with a designed capacity of 25 megawatts. By 1974 it delivered 6 megawatts of power.
By 1994, Russia had developed and operated the coal-fired facility U-25, at the High-Temperature Institute of the Russian Academy of Science in Moscow. U-25's bottoming plant was actually operated under contract with the Moscow utility, and fed power into Moscow's grid. There was substantial interest in Russia in developing a coal-powered disc generator. In 1986, the first industrial power plant with an MHD generator was built, but in 1989 the project was cancelled before the MHD launch, and this power plant was later incorporated into the Ryazan Power Station as a seventh unit of conventional construction. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathbf{F} = Q \\left(\\mathbf{v}\\times\\mathbf{B}\\right)" } ]
https://en.wikipedia.org/wiki?curid=1502835
150287
Fixed-point combinator
Higher-order function Y for which Y f = f (Y f) In combinatory logic for computer science, a fixed-point combinator (or fixpoint combinator) is a higher-order function (i.e. a function which takes a function as argument) that returns some "fixed point" (a value that is mapped to itself) of its argument function, if one exists. Formally, if formula_0 is a fixed-point combinator and the function formula_1 has one or more fixed points, then formula_2 is one of these fixed points, i.e. formula_3 Fixed-point combinators can be defined in the lambda calculus and in functional programming languages and provide a means to allow for recursive definitions. Y combinator in lambda calculus. In the classical untyped lambda calculus, every function has a fixed point. A particular implementation of formula_0 is Haskell Curry's paradoxical combinator Y, given by formula_4 Verification. The following calculation verifies that formula_8 is indeed a fixed point of the function formula_9: The lambda term formula_11 may not, in general, β-reduce to the term formula_12. However, both terms β-reduce to the same term, as shown. Uses. Applied to a function with one variable, the Y combinator usually does not terminate. More interesting results are obtained by applying the Y combinator to functions of two or more variables. The additional variables may be used as a counter, or index. The resulting function behaves like a "while" or a "for" loop in an imperative language. Used in this way, the Y combinator implements simple recursion. The lambda calculus does not allow a function to appear as a term in its own definition as is possible in many programming languages, but a function can be passed as an argument to a higher-order function that applies it in a recursive manner. The Y combinator may also be used in implementing Curry's paradox.
The heart of Curry's paradox is that untyped lambda calculus is unsound as a deductive system, and the Y combinator demonstrates this by allowing an anonymous expression to represent zero, or even many, values. This is inconsistent in mathematical logic. Example implementations. An example implementation of the Y combinator in two languages is presented below.

# Y combinator in Python
Y = lambda f: (lambda x: f(x(x)))(lambda x: f(x(x)))
Y(Y)

// Y Combinator in C++, using C++14 extensions
int main() {
    auto Y = [](auto f) {
        auto f1 = [f](auto x) -> decltype(f) {
            return f(x(x));
        };
        return f1(f1);
    };
    Y(Y);
}

Note that both of these programs, while formally correct, are useless in practice; they both loop indefinitely until they terminate through stack overflow. More generally, as both Python and C++ use strict evaluation, the Y combinator is generally useless in those languages; see below for the Z combinator, which can be used in strict programming languages. Fixed-point combinator. The Y combinator is an implementation of a fixed-point combinator in lambda calculus. Fixed-point combinators may also be easily defined in other functional and imperative languages. The implementation in lambda calculus is more difficult due to limitations in lambda calculus. The fixed-point combinator may be used in a number of different areas: Fixed-point combinators may be applied to a range of different functions, but normally will not terminate unless there is an extra parameter. When the function to be fixed refers to its parameter, another call to the function is invoked, so the calculation never gets started. Instead, the extra parameter is used to trigger the start of the calculation. The type of the fixed point is the return type of the function being fixed. This may be a real number or a function or any other type. In the untyped lambda calculus, the function to apply the fixed-point combinator to may be expressed using an encoding, like Church encoding.
In this case particular lambda terms (which define functions) are considered as values. "Running" (beta reducing) the fixed-point combinator on the encoding gives a lambda term for the result, which may then be interpreted as a fixed-point value. Alternately, a function may be considered as a lambda term defined purely in lambda calculus. These different approaches affect how a mathematician and a programmer may regard a fixed-point combinator. A mathematician may see the Y combinator applied to a function as being an expression satisfying the fixed-point equation, and therefore a solution. In contrast, a person only wanting to apply a fixed-point combinator to some general programming task may see it only as a means of implementing recursion. Values and domains. Many functions do not have any fixed points, for instance formula_13 with formula_14. Using Church encoding, natural numbers can be represented in lambda calculus, and this function "f" can be defined in lambda calculus. However, its domain will now contain "all" lambda expressions, not just those representing natural numbers. The Y combinator, applied to "f", will yield a fixed point for "f", but this fixed point won't represent a natural number. If trying to compute "Y f" in an actual programming language, an infinite loop will occur. Function versus implementation. The fixed-point combinator may be defined in mathematics and then implemented in other languages. General mathematics defines a function based on its extensional properties. That is, two functions are equal if they perform the same mapping. Lambda calculus and programming languages regard function identity as an intensional property. A function's identity is based on its implementation. A lambda calculus function (or term) is an implementation of a mathematical function. In the lambda calculus there are a number of combinators (implementations) that satisfy the mathematical definition of a fixed-point combinator.
Definition of the term "combinator". Combinatory logic is a theory of higher-order functions. A combinator is a "closed" lambda expression, meaning that it has no free variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables. Recursive definitions and fixed-point combinators. Fixed-point combinators can be used to implement recursive definitions of functions. However, they are rarely used in practical programming. Strongly normalizing type systems such as the simply typed lambda calculus disallow non-termination, and hence fixed-point combinators often cannot be assigned a type or require complex type system features. Furthermore, fixed-point combinators are often inefficient compared to other strategies for implementing recursion, as they require more function reductions and construct and take apart a tuple for each group of mutually recursive definitions. The factorial function. The factorial function provides a good example of how a fixed-point combinator may be used to define recursive functions. The standard recursive definition of the factorial function in mathematics can be written as formula_15 where "n" is a non-negative integer. If we want to implement this in lambda calculus, where integers are represented using Church encoding, we run into the problem that the lambda calculus does not allow the name of a function ('fact') to be used in the function's definition. This can be circumvented using a fixed-point combinator formula_16 as follows. Define a function "F" of two arguments "f" and "n": formula_17 Now define formula_20. Then formula_21 is a fixed point of "F", which gives formula_22 as desired. Fixed-point combinators in lambda calculus. The Y combinator, discovered by Haskell B. Curry, is defined as formula_23 Other fixed-point combinators. In untyped lambda calculus fixed-point combinators are not especially rare. In fact there are infinitely many of them.
In 2005, Mayer Goldberg showed that the set of fixed-point combinators of untyped lambda calculus is recursively enumerable. The Y combinator can be expressed in the SKI-calculus as formula_24 Additional combinators (B, C, K, W system) allow for a much shorter definition. With formula_25 the self-application combinator, since formula_26 and formula_27, the above becomes formula_28 The simplest fixed-point combinator in the SK-calculus, found by John Tromp, is formula_29 although note that it is not in normal form; its normal form is longer. This combinator corresponds to the lambda expression formula_30 The following fixed-point combinator is simpler than the Y combinator, and β-reduces into the Y combinator; it is sometimes cited as the Y combinator itself: formula_31 Another common fixed-point combinator is the Turing fixed-point combinator (named after its discoverer, Alan Turing): formula_32 Its advantage over formula_10 is that formula_33 beta-reduces to formula_34, whereas formula_35 and formula_36 only beta-reduce to a common term. formula_37 also has a simple call-by-value form: formula_38 The analog for mutual recursion is a "polyvariadic fix-point combinator", which may be denoted Y*. Strict fixed-point combinator. In a strict programming language the Y combinator will expand until stack overflow, or never halt in the case of tail call optimization. The Z combinator will work in strict languages (also called eager languages, where applicative evaluation order is applied). The Z combinator has the next argument defined explicitly, preventing the expansion of formula_39 in the right-hand side of the definition: formula_40 and in lambda calculus it is an eta-expansion of the "Y" combinator: formula_41 Non-standard fixed-point combinators. If F is a fixed-point combinator in untyped lambda calculus, then we have formula_42 Terms that have the same Böhm tree as a fixed-point combinator, i.e.
have the same infinite extension formula_43, are called "non-standard fixed-point combinators". Any fixed-point combinator is also a non-standard one, but not all non-standard fixed-point combinators are fixed-point combinators because some of them fail to satisfy the fixed-point equation that defines the "standard" ones. These combinators are called "strictly non-standard fixed-point combinators"; an example is the following combinator: formula_44 where formula_45 formula_46 The set of non-standard fixed-point combinators is not recursively enumerable. Implementation in other languages. The Y combinator is a particular implementation of a fixed-point combinator in lambda calculus. Its structure is determined by the limitations of lambda calculus. It is not necessary or helpful to use this structure in implementing the fixed-point combinator in other languages. Simple examples of fixed-point combinators implemented in some programming paradigms are given below. Lazy functional implementation. In a language that supports lazy evaluation, like in Haskell, it is possible to define a fixed-point combinator using the defining equation of the fixed-point combinator which is conventionally named codice_0. Since Haskell has lazy datatypes, this combinator can also be used to define fixed points of data constructors (and not only to implement recursive functions). The definition is given here, followed by some usage examples. In Hackage, the original sample is:

fix, fix' :: (a -> a) -> a
fix  f = let x = f x in x  -- Lambda dropped. Sharing.
                           -- Original definition in Data.Function.
-- alternative:
fix' f = f (fix' f)        -- Lambda lifted. Non-sharing.

fix (\x -> 9)              -- this evaluates to 9

fix (\x -> 3:x)            -- evaluates to the lazy infinite list [3,3,3...]

fact = fix fac             -- evaluates to the factorial function
  where fac f 0 = 1
        fac f x = x * f (x-1)

fact 5                     -- evaluates to 120

Strict functional implementation.
In a strict functional language, as illustrated below with OCaml, the argument to "f" is expanded beforehand, yielding an infinite call sequence, formula_47. This may be resolved by defining fix with an extra parameter.

let rec fix f x = f (fix f) x (* note the extra x; here fix f = \x -> f (fix f) x *)

let factabs fact = function   (* factabs has extra level of lambda abstraction *)
    0 -> 1
  | x -> x * fact (x-1)

let _ = (fix factabs) 5       (* evaluates to "120" *)

In a multi-paradigm functional language (one decorated with imperative features), such as Lisp, Peter Landin suggested the use of a variable assignment to create a fixed-point combinator, as in the below example using Scheme:

(define Y!
  (lambda (f)
    ((lambda (i)
       (set! i (f (lambda (x) (i x)))) ;; (set! i expr) assigns i the value of expr
       i)                              ;; replacing it in the present lexical scope
     #f)))

Using a lambda calculus with axioms for assignment statements, it can be shown that codice_1 satisfies the same fixed-point law as the call-by-value Y combinator: formula_48 In more idiomatic modern Lisp usage, this would typically be handled via a lexically scoped label (a codice_2 expression), as lexical scope was not introduced to Lisp until the 1970s:

(define Y*
  (lambda (f)
    ((lambda (i)
       (let ((i (f (lambda (x) (i x))))) ;; (let ((i expr)) i) locally defines i as expr
         i))                             ;; non-recursively: thus i in expr is not expr
     #f)))

Or without the internal label:

(define Y
  (lambda (f)
    ((lambda (i) (i i))
     (lambda (i) (f (lambda (x) (apply (i i) x)))))))

Imperative language implementation. This example is a slightly interpretive implementation of a fixed-point combinator. A class is used to contain the "fix" function, called "fixer". The function to be fixed is contained in a class that inherits from fixer. The "fix" function accesses the function to be fixed as a virtual function. As for the strict functional definition, "fix" is explicitly given an extra parameter "x", which means that lazy evaluation is not needed.
template <typename R, typename D>
class fixer
{
public:
    R fix(D x)
    {
        return f(x);
    }
private:
    virtual R f(D) = 0;
};

class fact : public fixer<long, long>
{
    virtual long f(long x)
    {
        if (x == 0)
            return 1;
        return x * fix(x-1);
    }
};

long result = fact().fix(5);

Typing. In System F (polymorphic lambda calculus) a polymorphic fixed-point combinator has type ∀a.(a → a) → a where "a" is a type variable. That is, "fix" takes a function, which maps a → a and uses it to return a value of type a. In the simply typed lambda calculus extended with recursive data types, fixed-point operators can be written, but the type of a "useful" fixed-point operator (one whose application always returns) may be restricted. In the simply typed lambda calculus, the fixed-point combinator Y cannot be assigned a type because at some point it would deal with the self-application sub-term formula_49 by the application rule: formula_50 where formula_51 has the infinite type formula_52. No fixed-point combinator can in fact be typed; in those systems, any support for recursion must be explicitly added to the language. Type for the Y combinator. In programming languages that support recursive data types, it is possible to type the Y combinator by appropriately accounting for the recursion at the type level. The need to self-apply the variable x can be managed using a type (codice_3), which is defined so as to be isomorphic to (codice_4). For example, in the following Haskell code, we have codice_5 and codice_6 being the names of the two directions of the isomorphism, with types:

In  :: (Rec a -> a) -> Rec a
out :: Rec a -> (Rec a -> a)

which lets us write:

y :: (a -> a) -> a
y = \f -> (\x -> f (out x x)) (In (\x -> f (out x x)))

Or equivalently in OCaml:

type 'a recc = In of ('a recc -> 'a)
let out (In x) = x
let y f = (fun x a -> f (out x x) a) (In (fun x a -> f (out x x) a))

Alternatively:

let y f = (fun x -> f (fun z -> out x x z)) (In (fun x -> f (fun z -> out x x z)))

General information.
Because fixed-point combinators can be used to implement recursion, it is possible to use them to describe specific types of recursive computations, such as those in fixed-point iteration, iterative methods, recursive join in relational databases, data-flow analysis, FIRST and FOLLOW sets of non-terminals in a context-free grammar, transitive closure, and other types of closure operations. A function for which "every" input is a fixed point is called an identity function. Formally: formula_53 In contrast to universal quantification over all formula_51, a fixed-point combinator constructs "one" value that is a fixed point of formula_1. The remarkable property of a fixed-point combinator is that it constructs a fixed point for an "arbitrary given" function formula_1. Other functions have the special property that, after being applied once, further applications don't have any effect. More formally: formula_54 Such functions are called idempotent (see also Projection (mathematics)). An example of such a function is the function that returns "0" for all even integers, and "1" for all odd integers. In lambda calculus, from a computational point of view, applying a fixed-point combinator to an identity function or an idempotent function typically results in non-terminating computation. For example, we obtain formula_55 where the resulting term can only reduce to itself and represents an infinite loop. Fixed-point combinators do not necessarily exist in more restrictive models of computation. For instance, they do not exist in simply typed lambda calculus. The Y combinator allows recursion to be defined as a set of rewrite rules, without requiring native recursion support in the language. In programming languages that support anonymous functions, fixed-point combinators allow the definition and use of anonymous recursive functions, i.e. without having to bind such functions to identifiers. 
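As a sketch of such anonymous recursion in Python, a strict language, the call-by-value Z combinator discussed earlier in this article can define factorial without ever binding it recursively; the names Z, fact_abs, and factorial below are purely illustrative:

```python
# Call-by-value (Z) fixed-point combinator: the inner "lambda v:" eta-expansion
# delays the self-application so a strict language does not loop while defining it.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Open-recursive factorial: receives "itself" as its first argument.
fact_abs = lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1)

factorial = Z(fact_abs)   # an anonymous recursive function, with no recursive binding
print(factorial(5))       # 120
```

Note that applying the plain Y combinator here instead of Z would diverge, for the same reason as in the strict OCaml example earlier.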
In this setting, the use of fixed-point combinators is sometimes called "anonymous recursion".

Notes.

References.
[ { "math_id": 0, "text": "\\textrm{fix}" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "\\textrm{fix}\\ f" }, { "math_id": 3, "text": "f\\ (\\textrm{fix}\\ f) = \\textrm{fix}\\ f\\ ." }, { "math_id": 4, "text": "Y = \\lambda f. \\ (\\lambda x.f\\ (x\\ x))\\ (\\lambda x.f\\ (x\\ x))" }, { "math_id": 5, "text": "\\lambda x.f\\ (x\\ x)" }, { "math_id": 6, "text": "f\\ (x\\ x) " }, { "math_id": 7, "text": "(x\\ x)" }, { "math_id": 8, "text": "Y g" }, { "math_id": 9, "text": "g" }, { "math_id": 10, "text": "Y" }, { "math_id": 11, "text": "g\\ (Y\\ g)" }, { "math_id": 12, "text": "Y\\ g" }, { "math_id": 13, "text": "f : \\N \\to \\N" }, { "math_id": 14, "text": "f(n)=n+1" }, { "math_id": 15, "text": "\\operatorname{fact}\\ n = \\begin{cases}\n1 & \\text{if} ~ n = 0 \\\\\nn \\times \\operatorname{fact}(n - 1) & \\text{otherwise.}\n\\end{cases}" }, { "math_id": 16, "text": "\\textsf{fix}" }, { "math_id": 17, "text": "F\\ f\\ n = (\\operatorname{IsZero}\\ n)\\ 1\\ (\\operatorname{multiply}\\ n\\ (f\\ (\\operatorname{pred}\\ n)))" }, { "math_id": 18, "text": "(\\operatorname{IsZero}\\ n)" }, { "math_id": 19, "text": "\\operatorname{pred}\\ n" }, { "math_id": 20, "text": "\\operatorname{fact}=\\textsf{fix}\\ F" }, { "math_id": 21, "text": "\\operatorname{fact}" }, { "math_id": 22, "text": "\\begin{align} \\operatorname{fact} n \n &= F\\ \\operatorname{fact}\\ n \\\\\n &= (\\operatorname{IsZero}\\ n)\\ 1\\ (\\operatorname{multiply}\\ n\\ (\\operatorname{fact}\\ (\\operatorname{pred}\\ n)))\\ \n\\end{align}" }, { "math_id": 23, "text": "Y = \\lambda f.(\\lambda x.f\\ (x\\ x)) \\ (\\lambda x.f\\ (x\\ x))" }, { "math_id": 24, "text": "Y = S (K (S I I)) (S (S (K S) K) (K (S I I)))" }, { "math_id": 25, "text": "U = SII" }, { "math_id": 26, "text": "S(Kx)yz = x(yz) = Bxyz" }, { "math_id": 27, "text": "Sx(Ky)z = xzy = Cxyz" }, { "math_id": 28, "text": "Y = S (K U) (S B (K U)) = B U (C B U)" }, { "math_id": 29, "text": "Y' = S S K (S (K (S S (S (S S K)))) K)" }, { 
"math_id": 30, "text": "Y' = (\\lambda x y. x y x) (\\lambda y x. y (x y x))" }, { "math_id": 31, "text": "X = \\lambda f.(\\lambda x.x x) (\\lambda x.f (x x))" }, { "math_id": 32, "text": "\\Theta = (\\lambda x y. y (x x y))\\ (\\lambda x y. y (x x y))" }, { "math_id": 33, "text": "\\Theta\\ f" }, { "math_id": 34, "text": "f\\ (\\Theta f)" }, { "math_id": 35, "text": "Y\\ f" }, { "math_id": 36, "text": "f\\ (Y f)" }, { "math_id": 37, "text": "\\Theta" }, { "math_id": 38, "text": "\\Theta_{v} = (\\lambda x y. y (\\lambda z. x x y z))\\ (\\lambda x y. y (\\lambda z. x x y z))" }, { "math_id": 39, "text": "Z g" }, { "math_id": 40, "text": "Z g v = g (Z g) v\\ ." }, { "math_id": 41, "text": "Z = \\lambda f.(\\lambda x.f (\\lambda v.x x v)) \\ (\\lambda x.f (\\lambda v.x x v))\\ ." }, { "math_id": 42, "text": "F=\\lambda x. F x = \\lambda x. x (F x)= \\lambda x. x (x (F x)) = \\cdots " }, { "math_id": 43, "text": "\\lambda x.x (x (x \\cdots ))" }, { "math_id": 44, "text": "N = B M (B (B M) B)" }, { "math_id": 45, "text": "B = \\lambda x y z.x (y z)" }, { "math_id": 46, "text": "M = \\lambda x.x x\\ ." }, { "math_id": 47, "text": "f\\ (f ... (f\\ (\\mathsf{fix}\\ f))... )\\ x" }, { "math_id": 48, "text": "(Y_!\\ \\lambda x.e) e' = (\\lambda x.e)\\ (Y_!\\ \\lambda x.e) e'" }, { "math_id": 49, "text": "x~x" }, { "math_id": 50, "text": "{\\Gamma\\vdash x\\!:\\!t_1 \\to t_2\\quad\\Gamma\\vdash x\\!:\\!t_1}\\over{\\Gamma\\vdash x~x\\!:\\!t_2}" }, { "math_id": 51, "text": "x" }, { "math_id": 52, "text": "t_1 = t_1\\to t_2" }, { "math_id": 53, "text": "\\forall x (f\\ x = x)" }, { "math_id": 54, "text": "\\forall x (f\\ (f\\ x) = f\\ x)" }, { "math_id": 55, "text": "(Y \\ \\lambda x.x) = (\\lambda x.(xx) \\ \\lambda x.(xx))" } ]
https://en.wikipedia.org/wiki?curid=150287
1502985
Two-sided Laplace transform
Mathematical operation In mathematics, the two-sided Laplace transform or bilateral Laplace transform is an integral transform equivalent to probability's moment-generating function. Two-sided Laplace transforms are closely related to the Fourier transform, the Mellin transform, the Z-transform and the ordinary or one-sided Laplace transform. If "f"("t") is a real- or complex-valued function of the real variable "t" defined for all real numbers, then the two-sided Laplace transform is defined by the integral formula_0 The integral is most commonly understood as an improper integral, which converges if and only if both integrals formula_1 exist. There seems to be no generally accepted notation for the two-sided transform; the formula_2 used here recalls "bilateral". The two-sided transform used by some authors is formula_3 In pure mathematics the argument "t" can be any variable, and Laplace transforms are used to study how differential operators transform the function. In science and engineering applications, the argument "t" often represents time (in seconds), and the function "f"("t") often represents a signal or waveform that varies with time. In these cases, the signals are transformed by filters that work like mathematical operators, but with a restriction: they have to be causal, which means that the output at a given time "t" cannot depend on an input at any later time. In population ecology, the argument "t" often represents spatial displacement in a dispersal kernel. When working with functions of time, "f"("t") is called the time domain representation of the signal, while "F"("s") is called the s-domain (or "Laplace domain") representation. The inverse transformation then represents a "synthesis" of the signal as the sum of its frequency components taken over all frequencies, whereas the forward transformation represents the "analysis" of the signal into its frequency components. Relationship to the Fourier transform.
The Fourier transform can be defined in terms of the two-sided Laplace transform: formula_4 Note that definitions of the Fourier transform differ, and in particular formula_5 is often used instead. In terms of the Fourier transform, we may also obtain the two-sided Laplace transform, as formula_6 The Fourier transform is normally defined so that it exists for real values; the above definition defines the image in a strip formula_7 which may not include the real axis where the Fourier transform is supposed to converge. This is why Laplace transforms retain their value in control theory and signal processing: the convergence of a Fourier transform integral within its domain only means that a linear, shift-invariant system described by it is stable or critical. The Laplace transform, on the other hand, will converge somewhere for every impulse response which is at most exponentially growing, because it involves an extra term which can be taken as an exponential regulator. Since there are no superexponentially growing linear feedback networks, Laplace-transform-based analysis and solution of linear, shift-invariant systems takes its most general form in the context of Laplace, not Fourier, transforms. At the same time, nowadays Laplace transform theory falls within the ambit of more general integral transforms, or even general harmonic analysis. In that framework and nomenclature, Laplace transforms are simply another form of Fourier analysis, even if more general in hindsight. Relationship to other integral transforms.
If "u" is the Heaviside step function, equal to zero when its argument is less than zero, to one-half when its argument equals zero, and to one when its argument is greater than zero, then the Laplace transform formula_8 may be defined in terms of the two-sided Laplace transform by formula_9 On the other hand, we also have formula_10 where formula_11 is the function that multiplies by minus one (formula_12), so either version of the Laplace transform can be defined in terms of the other. The Mellin transform may be defined in terms of the two-sided Laplace transform by formula_13 with formula_14 as above, and conversely we can get the two-sided transform from the Mellin transform by formula_15 The moment-generating function of a continuous probability density function "f"("x") can be expressed as formula_16.

Properties.

The following properties can be found in the references. Most properties of the bilateral Laplace transform are very similar to properties of the unilateral Laplace transform, but there are some important differences.

Parseval's theorem and Plancherel's theorem.

Let formula_17 and formula_18 be functions with bilateral Laplace transforms formula_19 and formula_20 in the strips of convergence formula_21. Let formula_22 with formula_23. Then Parseval's theorem holds: formula_24 This theorem is proved by applying the inverse Laplace transform on the convolution theorem in form of the cross-correlation. Let formula_25 be a function with bilateral Laplace transform formula_26 in the strip of convergence formula_27. Let formula_22 with formula_28. Then the Plancherel theorem holds: formula_29

Uniqueness.

For any two functions formula_30 for which the two-sided Laplace transforms formula_31 exist, if formula_32 i.e. formula_33 for every value of formula_34 then formula_35 almost everywhere.

Region of convergence.

Convergence requirements for the bilateral transform are stricter than for unilateral transforms, and the region of convergence will normally be smaller.
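As a hedged numerical sketch (not part of the article's own examples), the two-sided transform of the standard test function f(t) = e^(−|t|) can be approximated by truncated trapezoidal quadrature and compared against its closed form 2/(1 − s²), which holds only inside the strip of convergence −1 < Re(s) < 1:

```python
import math

def bilateral_laplace(f, s, t_max=40.0, n=200_000):
    """Trapezoidal approximation of F(s) = integral of exp(-s*t) * f(t) dt
    over [-t_max, t_max]; adequate when the integrand decays fast inside
    the truncation window (s is taken real here)."""
    h = 2 * t_max / n
    g = lambda t: math.exp(-s * t) * f(t)
    total = 0.5 * (g(-t_max) + g(t_max))
    for k in range(1, n):
        total += g(-t_max + k * h)
    return h * total

f = lambda t: math.exp(-abs(t))   # two-sided exponential decay
s = 0.5                           # inside the strip -1 < Re(s) < 1
approx = bilateral_laplace(f, s)
exact = 2 / (1 - s**2)            # 1/(1+s) + 1/(1-s), valid only in the strip
print(approx, exact)              # the two values agree to several decimals
```

Outside the strip (for example s = 1.5) the integrand grows as t → −∞ and the quadrature value diverges with t_max, illustrating why the region of convergence matters.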
If "f" is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform "F"("s") of "f" converges provided that the limit formula_36 exists. The Laplace transform converges absolutely if the integral formula_37 exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former instead of the latter sense. The set of values for which "F"("s") converges absolutely is either of the form Re("s") > "a" or else Re("s") ≥ "a", where "a" is an extended real constant, −∞ ≤ "a" ≤ ∞. (This follows from the dominated convergence theorem.) The constant "a" is known as the abscissa of absolute convergence, and depends on the growth behavior of "f"("t"). Analogously, the two-sided transform converges absolutely in a strip of the form "a" < Re("s") < "b", and possibly including the lines Re("s") = "a" or Re("s") = "b". The subset of values of "s" for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence. Similarly, the set of values for which "F"("s") converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at "s" = "s"0, then it automatically converges for all "s" with Re("s") > Re("s"0). Therefore, the region of convergence is a half-plane of the form Re("s") > "a", possibly including some points of the boundary line Re("s") = "a". 
In the region of convergence Re("s") > Re("s"0), the Laplace transform of "f" can be expressed by integrating by parts as the integral formula_38 That is, in the region of convergence "F"("s") can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic. There are several Paley–Wiener theorems concerning the relationship between the decay properties of "f" and the properties of the Laplace transform within the region of convergence. In engineering applications, a function corresponding to a linear time-invariant (LTI) system is "stable" if every bounded input produces a bounded output.

Causality.

Bilateral transforms do not respect causality. They make sense when applied to generic functions, but when working with functions of time (signals) unilateral transforms are preferred.

Table of selected bilateral Laplace transforms.

The following list of interesting examples for the bilateral Laplace transform can be deduced from the corresponding Fourier or unilateral Laplace transformations (see also ):

References.
[ { "math_id": 0, "text": "\\mathcal{B}\\{f\\}(s) = F(s) = \\int_{-\\infty}^\\infty e^{-st} f(t)\\, dt." }, { "math_id": 1, "text": "\\int_0^\\infty e^{-st} f(t) \\, dt,\\quad \\int_{-\\infty}^0 e^{-st} f(t)\\, dt" }, { "math_id": 2, "text": "\\mathcal{B}" }, { "math_id": 3, "text": "\\mathcal{T}\\{f\\}(s) = s\\mathcal{B}\\{f\\}(s) = sF(s) = s \\int_{-\\infty}^\\infty e^{-st} f(t)\\, dt." }, { "math_id": 4, "text": "\\mathcal{F}\\{f(t)\\} = F(s = i\\omega) = F(\\omega)." }, { "math_id": 5, "text": "\\mathcal{F}\\{f(t)\\} = F(s = i\\omega) = \\frac{1}{\\sqrt{2\\pi}} \\mathcal{B}\\{f(t)\\}(s)" }, { "math_id": 6, "text": "\\mathcal{B}\\{f(t)\\}(s) = \\mathcal{F}\\{f(t)\\}(-is)." }, { "math_id": 7, "text": "a < \\Im(s) < b" }, { "math_id": 8, "text": "\\mathcal{L}" }, { "math_id": 9, "text": "\\mathcal{L}\\{f\\} = \\mathcal{B}\\{f u\\}." }, { "math_id": 10, "text": "\\mathcal{B}\\{f\\} = \\mathcal{L}\\{f\\} + \\mathcal{L}\\{f\\circ m\\}\\circ m," }, { "math_id": 11, "text": "m:\\mathbb{R}\\to\\mathbb{R}" }, { "math_id": 12, "text": "m(x) = -x" }, { "math_id": 13, "text": "\\mathcal{M}\\{f\\} = \\mathcal{B}\\{f \\circ {\\exp} \\circ m\\}," }, { "math_id": 14, "text": "m" }, { "math_id": 15, "text": "\\mathcal{B}\\{f\\} = \\mathcal{M}\\{f\\circ m \\circ \\log \\}." 
}, { "math_id": 16, "text": "\\mathcal{B}\\{f\\}(-s)" }, { "math_id": 17, "text": "f_1(t)" }, { "math_id": 18, "text": "f_2(t)" }, { "math_id": 19, "text": "F_1(s)" }, { "math_id": 20, "text": "F_2(s)" }, { "math_id": 21, "text": "\\alpha_{1,2}<\\real s<\\beta_{1,2}" }, { "math_id": 22, "text": "c\\in\\mathbb{R}" }, { "math_id": 23, "text": "\\max(-\\beta_1,\\alpha_2)<c<\\min(-\\alpha_1,\\beta_2)" }, { "math_id": 24, "text": "\n\\int_{-\\infty}^{\\infty} \\overline{f_1(t)}\\,f_2(t)\\,dt = \\frac{1}{2\\pi i} \\int_{c-i\\infty}^{c+i\\infty} \\overline{F_1(-\\overline{s})}\\,F_2(s)\\,ds\n" }, { "math_id": 25, "text": "f(t)" }, { "math_id": 26, "text": "F(s)" }, { "math_id": 27, "text": "\\alpha<\\Re s<\\beta" }, { "math_id": 28, "text": " \\alpha<c<\\beta " }, { "math_id": 29, "text": "\n\\int_{-\\infty}^{\\infty} e^{-2c\\,t} \\, |f(t)|^2 \\,dt = \\frac{1}{2\\pi} \\int_{-\\infty}^{\\infty} |F(c+ir)|^2 \\, dr\n" }, { "math_id": 30, "text": " f,g " }, { "math_id": 31, "text": " \\mathcal{T} \\{f\\}, \\mathcal{T} \\{g\\} " }, { "math_id": 32, "text": " \\mathcal{T}\\{f\\} = \\mathcal{T} \\{g\\}, " }, { "math_id": 33, "text": " \\mathcal{T}\\{f\\}(s) = \\mathcal{T}\\{g\\}(s) " }, { "math_id": 34, "text": " s\\in\\mathbb R, " }, { "math_id": 35, "text": " f=g " }, { "math_id": 36, "text": "\\lim_{R\\to\\infty}\\int_0^R f(t)e^{-st}\\, dt" }, { "math_id": 37, "text": "\\int_0^\\infty \\left|f(t)e^{-st}\\right|\\, dt" }, { "math_id": 38, "text": "F(s) = (s-s_0)\\int_0^\\infty e^{-(s-s_0)t}\\beta(t)\\, dt,\\quad \\beta(u) = \\int_0^u e^{-s_0t}f(t)\\, dt." } ]
https://en.wikipedia.org/wiki?curid=1502985
15031252
ETHE1
Protein-coding gene in the species Homo sapiens Protein ETHE1, mitochondrial, also known as "ethylmalonic encephalopathy 1 protein" and "persulfide dioxygenase", is a protein that in humans is encoded by the "ETHE1" gene located on chromosome 19.

Structure.

The human ETHE1 gene consists of 7 exons and encodes a protein that is approximately 27 kDa in size.

Function.

This gene encodes a protein that is expressed mainly in the gastrointestinal tract, but also in several other tissues such as the liver and the thyroid. The ETHE1 protein is thought to localize primarily to the mitochondrial matrix and functions as a sulfur dioxygenase. Sulfur dioxygenases are proteins that function in sulfur metabolism. The ETHE1 protein is thought to catalyze the following reaction:

 sulfur + O2 + H2O formula_0 sulfite + 2 H+ (overall reaction)
 (1a) glutathione + sulfur formula_0 S-sulfanylglutathione (glutathione persulfide, spontaneous reaction)
 (1b) S-sulfanylglutathione + O2 + H2O formula_0 glutathione + sulfite + 2 H+

and requires iron and possibly glutathione as cofactors. The physiological substrate of ETHE1 is thought to be glutathione persulfide, an intermediate metabolite involved in hydrogen sulfide degradation.

Clinical significance.

Mutations in the ETHE1 gene are thought to cause ethylmalonic encephalopathy, a rare inborn error of metabolism. Patients carrying ETHE1 mutations have been found to exhibit lower activity of ETHE1 and lower affinity for the ETHE1 substrate. Mouse models of Ethe1 genetic ablation likewise exhibited reduced sulfide dioxygenase catabolism and cranial features of ethylmalonic encephalopathy. A decrease in sulfide dioxygenase activity results in abnormal catabolism of hydrogen sulfide, a gas-phase signaling molecule in the central nervous system, whose accumulation is thought to inhibit cytochrome c oxidase activity in the respiratory chain of the mitochondrion.
However, other metabolic pathways may also be involved that could exert a modulatory effect on hydrogen sulfide toxicity.

Interactions.

ETHE1 has been shown to interact with RELA.

References.

Further reading.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=15031252
15031355
ALOX12B
Protein-coding gene in the species Homo sapiens Arachidonate 12-lipoxygenase, 12R type, also known as ALOX12B, 12"R"-LOX, and arachidonate lipoxygenase 3, is a lipoxygenase-type enzyme composed of 701 amino acids and encoded by the "ALOX12B" gene. The gene is located on chromosome 17 at position 13.1, where it forms a cluster with two other lipoxygenases, ALOXE3 and ALOX15B. Among the human lipoxygenases, ALOX12B is most closely (54% identity) related in amino acid sequence to ALOXE3.

Activity.

ALOX12B oxygenates arachidonic acid by adding molecular oxygen (O2) in the form of a hydroperoxyl (HO2) residue to its 12th carbon, thereby forming 12("R")-hydroperoxy-5"Z",8"Z",10"E",14"Z"-icosatetraenoic acid (also termed 12("R")-HpETE or 12"R"-HpETE). When formed in cells, 12"R"-HpETE may be quickly reduced to its hydroxyl analog (OH), 12("R")-hydroxy-5"Z",8"Z",10"E",14"Z"-eicosatetraenoic acid (also termed 12("R")-HETE or 12"R"-HETE), by ubiquitous peroxidase-type enzymes. These sequential metabolic reactions are:

 arachidonic acid + O2 formula_0 12"R"-HpETE → 12"R"-HETE

ALOX12B is also capable of metabolizing free linoleic acid to 9("R")-hydroperoxy-10(E),12(Z)-octadecadienoic acid (9"R"-HpODE), which is also rapidly converted to its hydroxyl derivative, 9-Hydroxyoctadecadienoic acid (9"R"-HODE).

 Linoleic acid + O2 formula_0 9"R"-HpODE → 9"R"-HODE

The "S" stereoisomer of 9"R"-HODE, 9"S"-HODE, has a range of biological activities related to oxidative stress and pain perception (see 9-Hydroxyoctadecadienoic acid). It is known or likely that 9"R"-HODE possesses at least some of these activities. For example, 9"R"-HODE, similar to 9"S"-HODE, mediates the perception of acute and chronic pain induced by heat, UV light, and inflammation in the skin of rodents (see 9-Hydroxyoctadecadienoic acid#9-HODEs as mediators of pain perception).
However, production of these LA metabolites does not appear to be the primary function of ALOX12B; ALOX12B's primary function appears to be to metabolize linoleic acid that is not free but rather esterified to certain ceramides, as described below.

Proposed principal activity of ALOX12B.

ALOX12B targets linoleic acid (LA). LA is the most abundant fatty acid in the skin epidermis, being present mainly esterified to the omega-hydroxyl residue of amide-linked omega-hydroxylated very long chain fatty acids (VLCFAs) in a unique class of ceramides termed esterified omega-hydroxyacyl-sphingosine (EOS). EOS is an intermediate component in a proposed multi-step metabolic pathway which delivers VLCFAs to the cornified lipid envelope in the skin's Stratum corneum; the presence of these wax-like, hydrophobic VLCFAs is needed to maintain the skin's integrity and functionality as a water barrier (see Lung microbiome#Role of the epithelial barrier). ALOX12B metabolizes the LA in EOS to its 9-hydroperoxy derivative; ALOXE3 then converts this derivative to three products: a) 9"R",10"R"-trans-epoxide,13"R"-hydroxy-10"E"-octadecenoic acid, b) 9-keto-10"E",12"Z"-octadecadienoic acid, and c) 9"R",10"R"-trans-epoxy-13-keto-11"E"-octadecenoic acid. These ALOX12B-oxidized products signal for the hydrolysis (i.e. removal) of the oxidized products from EOS; this allows the multi-step metabolic pathway to proceed in delivering the VLCFAs to the cornified lipid envelope in the skin's Stratum corneum.

Tissue distribution.

ALOX12B protein has been detected in humans in the same tissues that express ALOXE3 and ALOX15B, viz., the upper layers of the human skin and tongue and in tonsils. mRNA for it has been detected in additional tissues such as the lung, testis, adrenal gland, ovary, prostate, and skin, with lower abundance levels detected in salivary and thyroid glands, pancreas, brain, and peripheral blood leukocytes.

Clinical significance.

Congenital ichthyosiform erythroderma.
Deletions of "Alox12b" or "Aloxe3" genes in mice cause a congenital scaly skin disease which is characterized by a greatly reduced skin water barrier function and is similar in other ways to the autosomal recessive nonbullous Congenital ichthyosiform erythroderma (ARCI) disease of humans. Mutations in many of the genes that encode proteins, including ALOX12B and ALOXE3, which conduct the steps that bring and then bind VLCFA to the stratum corneum are associated with ARCI. ARCI refers to nonsyndromic (i.e. not associated with other signs or symptoms) congenital Ichthyosis including Harlequin-type ichthyosis, Lamellar ichthyosis, and Congenital ichthyosiform erythroderma. ARCI has an incidence of about 1/200,000 in European and North American populations; 40 different mutations in "ALOX12B" and 13 different mutations in "ALOXE3" genes account for a total of about 10% of ARCI cases; these mutations uniformly cause a total loss of ALOX12B or ALOXE3 function (see mutations).

Proliferative skin diseases.

In psoriasis and other proliferative skin diseases such as the erythrodermas underlying lung cancer, cutaneous T cell lymphoma, and drug reactions, and in discoid lupus, seborrheic dermatitis, subacute cutaneous lupus erythematosus, and pemphigus foliaceus, cutaneous levels of ALOX12B mRNA and 12"R"-HETE are greatly increased. It is not clear if these increases contribute to the disease by, for example, 12"R"-HETE induction of inflammation, or are primarily a consequence of skin proliferation.

Embryogenesis.

The expression of Alox12b and Aloxe3 mRNA in mice parallels, and is proposed to be instrumental for, skin development in mouse embryogenesis; the human orthologs of these genes, i.e. ALOX12B and ALOXE3, may have a similar role in humans.

Essential fatty acid deficiency.
Severe dietary deficiency of polyunsaturated omega 6 fatty acids leads to the essential fatty acid deficiency syndrome, which is characterized by scaly skin and excessive water loss; in humans and animal models the syndrome is fully reversed by dietary omega 6 fatty acids, particularly linoleic acid. It is proposed that this deficiency disease resembles and has a similar basis to Congenital ichthyosiform erythroderma; that is, it is at least in part due to a deficiency of linoleic acid and thereby in the EOS-based delivery of VLCFA to the stratum corneum.

References.

Further reading.
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=15031355
1503224
Local boundedness
In mathematics, a function is locally bounded if it is bounded around every point. A family of functions is locally bounded if for any point in their domain all the functions are bounded around that point and by the same number.

Locally bounded function.

A real-valued or complex-valued function formula_0 defined on some topological space formula_1 is called a locally bounded function if for any formula_2 there exists a neighborhood formula_3 of formula_4 such that formula_5 is a bounded set. That is, for some number formula_6 one has formula_7 In other words, for each formula_8 one can find a constant, depending on formula_9 which is larger than all the values of the function in the neighborhood of formula_10 Compare this with a bounded function, for which the constant does not depend on formula_10 Obviously, if a function is bounded then it is locally bounded. The converse is not true in general (see below).

This definition can be extended to the case when formula_11 takes values in some metric space formula_12 Then the inequality above needs to be replaced with formula_13 where formula_14 is some point in the metric space. The choice of formula_15 does not affect the definition; choosing a different formula_15 will at most increase the constant formula_16 for which this inequality is true.

Locally bounded family.

A set (also called a family) "U" of real-valued or complex-valued functions defined on some topological space formula_1 is called locally bounded if for any formula_2 there exists a neighborhood formula_3 of formula_4 and a positive number formula_6 such that formula_42 for all formula_43 and formula_44 In other words, all the functions in the family must be locally bounded, and around each point they need to be bounded by the same constant.
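As a numerical illustration of the family definition (the families fₙ(x) = x/n and gₙ(x) = x + n used here are standard examples, not taken from this article's own text):

```python
# Around x0, sample the neighborhood A = (x0 - 1, x0 + 1) and test a single
# bound M = 1 + |x0| against every member of each family.
x0 = 3.0
xs = [x0 - 1 + 2 * k / 1000 for k in range(1001)]
M = 1 + abs(x0)

# f_n(x) = x/n: one constant M works for every n, so the family is locally bounded.
family_ok = all(abs(x / n) <= M for n in range(1, 100) for x in xs)
print(family_ok)   # True

# g_n(x) = x + n: each g_n alone is locally bounded, but no single constant
# works for all n at once, so the family is not locally bounded.
sup_over_family = max(abs(x + n) for n in range(1, 100) for x in xs)
print(sup_over_family > M)   # True: the family exceeds the fixed bound as n grows
```

The check only samples finitely many points and members, so it is a sketch of the definition rather than a proof.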
This definition can also be extended to the case when the functions in the family "U" take values in some metric space, by again replacing the absolute value with the distance function.

Topological vector spaces.

Local boundedness may also refer to a property of topological vector spaces, or of functions from a topological space into a topological vector space (TVS).

Locally bounded topological vector spaces.

A subset formula_61 of a topological vector space (TVS) formula_1 is called bounded if for each neighborhood formula_62 of the origin in formula_1 there exists a real number formula_63 such that formula_64 A locally bounded TVS is a TVS that possesses a bounded neighborhood of the origin. By Kolmogorov's normability criterion, this is true of a locally convex space if and only if the topology of the TVS is induced by some seminorm. In particular, every locally bounded TVS is pseudometrizable.

Locally bounded functions.

A function formula_11 between topological vector spaces is said to be locally bounded if every point of formula_1 has a neighborhood whose image under formula_0 is bounded. The following theorem relates local boundedness of functions with the local boundedness of topological vector spaces:

Theorem. A topological vector space formula_1 is locally bounded if and only if the identity map formula_65 is locally bounded.
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "x_0 \\in X" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "x_0" }, { "math_id": 5, "text": "f(A)" }, { "math_id": 6, "text": "M > 0" }, { "math_id": 7, "text": "|f(x)| \\leq M \\quad \\text{ for all } x \\in A." }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "x," }, { "math_id": 10, "text": "x." }, { "math_id": 11, "text": "f : X \\to Y" }, { "math_id": 12, "text": "(Y, d)." }, { "math_id": 13, "text": "d(f(x), y) \\leq M \\quad \\text{ for all } x \\in A," }, { "math_id": 14, "text": "y \\in Y" }, { "math_id": 15, "text": "y" }, { "math_id": 16, "text": "r" }, { "math_id": 17, "text": "f : \\R \\to \\R" }, { "math_id": 18, "text": "f(x) = \\frac{1}{x^2+1}" }, { "math_id": 19, "text": "0 \\leq f(x) \\leq 1" }, { "math_id": 20, "text": "f(x) = 2x+3" }, { "math_id": 21, "text": "a," }, { "math_id": 22, "text": "|f(x)| \\leq M" }, { "math_id": 23, "text": "(a - 1, a + 1)," }, { "math_id": 24, "text": "M = 2|a| + 5." }, { "math_id": 25, "text": "f(x) = \\begin{cases}\n \\frac{1}{x}, & \\mbox{if } x \\neq 0, \\\\\n 0, & \\mbox{if } x = 0 \n\\end{cases}\n" }, { "math_id": 26, "text": "f : U \\to \\R" }, { "math_id": 27, "text": "U \\subseteq \\R," }, { "math_id": 28, "text": "a" }, { "math_id": 29, "text": "a \\in U" }, { "math_id": 30, "text": "\\delta > 0" }, { "math_id": 31, "text": "|f(x) - f(a)| < 1" }, { "math_id": 32, "text": "x \\in U" }, { "math_id": 33, "text": "|x - a| < \\delta" }, { "math_id": 34, "text": "|f(x)| = |f(x) - f(a) + f(a)| \\leq |f(x) - f(a)| + |f(a)| < 1 + |f(a)|," }, { "math_id": 35, "text": "M = 1 + |f(a)|" }, { "math_id": 36, "text": "(a - \\delta, a + \\delta)" }, { "math_id": 37, "text": "f(0) = 1" }, { "math_id": 38, "text": "f(x) = 0" }, { "math_id": 39, "text": "x \\neq 0." 
}, { "math_id": 40, "text": "M = 1" }, { "math_id": 41, "text": "(-1, 1)," }, { "math_id": 42, "text": "|f(x)| \\leq M" }, { "math_id": 43, "text": "x \\in A" }, { "math_id": 44, "text": "f \\in U." }, { "math_id": 45, "text": "f_n : \\R \\to \\R" }, { "math_id": 46, "text": "f_n(x) = \\frac{x}{n}" }, { "math_id": 47, "text": "n = 1, 2, \\ldots" }, { "math_id": 48, "text": "\\left(x_0 - a, x_0 + 1\\right)." }, { "math_id": 49, "text": "n \\geq 1" }, { "math_id": 50, "text": "|f_n(x)| \\leq M" }, { "math_id": 51, "text": "M = 1 + |x_0|." }, { "math_id": 52, "text": "M" }, { "math_id": 53, "text": "n." }, { "math_id": 54, "text": "f_n(x) = \\frac{1}{x^2+n^2}" }, { "math_id": 55, "text": "n" }, { "math_id": 56, "text": "\\R" }, { "math_id": 57, "text": "M = 1." }, { "math_id": 58, "text": "A." }, { "math_id": 59, "text": "f_n(x) = x+n" }, { "math_id": 60, "text": "f_n(x)" }, { "math_id": 61, "text": "B \\subseteq X" }, { "math_id": 62, "text": "U" }, { "math_id": 63, "text": "s > 0" }, { "math_id": 64, "text": "B \\subseteq t U \\quad \\text{ for all } t > s." }, { "math_id": 65, "text": "\\operatorname{id}_X : X \\to X" } ]
https://en.wikipedia.org/wiki?curid=1503224
15034041
Landing footprint
A landing footprint, also called a landing ellipse, is the area of uncertainty of a spacecraft's landing zone on an astronomical body. After atmospheric entry, the landing point of a spacecraft will depend upon the degree of control (if any), entry angle, entry mass, atmospheric conditions, and drag. (Note that the Moon and the asteroids have no atmosphere, so atmospheric factors do not apply.) By aggregating these variables it is possible to model a spacecraft's landing zone to a certain degree of precision. By simulating entry under varying conditions, a probable ellipse can be calculated; the size of the ellipse represents the degree of uncertainty for a given confidence interval. Mathematical explanation. To create a landing footprint for a spacecraft, the standard approach is to use the Monte Carlo method to generate distributions of initial entry conditions and atmospheric parameters, solve the reentry equations of motion, and catalog the final longitude/latitude pair formula_0 at touchdown. It is commonly assumed that the resulting distribution of landing sites follows a bivariate Gaussian distribution: formula_1 where formula_2 is the vector of longitude and latitude, formula_3 is the mean vector, formula_4 is the covariance matrix, and formula_5 is its determinant. Once the parameters formula_6 are estimated from the numerical simulations, an ellipse can be calculated for a percentile formula_7. It is known that for a real-valued vector formula_8 with a multivariate Gaussian joint distribution, the square of the Mahalanobis distance has a chi-squared distribution with formula_9 degrees of freedom: formula_10 This can be seen by defining the vector formula_11, which leads to formula_12, the definition of the chi-squared statistic used to construct the resulting distribution. So for the bivariate Gaussian distribution, the boundary of the ellipse at a given percentile is formula_13. This is the equation of a circle centered at the origin with radius formula_14, leading to the equations: formula_15 where formula_16 is the angle.
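The recipe above can be sketched in a few lines. The following is a minimal illustration assuming `numpy` is available; for two degrees of freedom the chi-squared quantile has the closed form -2 ln(1 - p), which replaces a statistics-library lookup, and the matrix square root is taken via the eigendecomposition of the covariance matrix.

```python
import math
import numpy as np

def landing_ellipse(samples, p=0.99, n_points=100):
    """Boundary points of the p-percentile landing ellipse estimated from
    Monte Carlo touchdown samples (an (N, 2) array of longitude/latitude).

    For 2 degrees of freedom the chi-squared quantile is -2*ln(1 - p).
    """
    mu = samples.mean(axis=0)
    sigma = np.cov(samples.T)
    r = math.sqrt(-2.0 * math.log(1.0 - p))  # sqrt of the chi-squared quantile
    # matrix square root via eigendecomposition: Sigma^(1/2) = V sqrt(Lambda) V^T
    lam, V = np.linalg.eigh(sigma)
    sqrt_sigma = V @ np.diag(np.sqrt(lam)) @ V.T
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    circle = np.vstack([np.cos(theta), np.sin(theta)])  # unit circle, shape (2, n_points)
    return (mu[:, None] + r * (sqrt_sigma @ circle)).T  # shape (n_points, 2)
```

Every returned point lies at squared Mahalanobis distance exactly -2 ln(1 - p) from the estimated mean, since the unit-circle vectors are mapped through the symmetric square root of the covariance.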
The matrix square root formula_17 can be found from the eigenvalue decomposition of the covariance matrix, from which formula_18 can be written as: formula_19 where the eigenvalues lie on the diagonal of formula_20. The values of formula_21 then define the landing footprint for a given level of confidence, which is expressed through the choice of percentile. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(\\lambda,\\phi) " }, { "math_id": 1, "text": "f(x) = \\frac{1}{2\\pi\\sqrt{|\\Sigma|}} \\exp\\left[ -\\frac{1}{2}(x-\\mu)^{T}\\Sigma^{-1}(x-\\mu) \\right] " }, { "math_id": 2, "text": "x = (\\lambda,\\phi)" }, { "math_id": 3, "text": "\\mu" }, { "math_id": 4, "text": "\\Sigma" }, { "math_id": 5, "text": "|\\Sigma|" }, { "math_id": 6, "text": "(\\mu,\\Sigma)" }, { "math_id": 7, "text": "p " }, { "math_id": 8, "text": "x\\in\\mathbb{R}^{n} " }, { "math_id": 9, "text": "n " }, { "math_id": 10, "text": "(x-\\mu)^{T}\\Sigma^{-1}(x-\\mu) \\sim \\chi_{n}^{2} " }, { "math_id": 11, "text": "z = \\Sigma^{-1/2}(x-\\mu) " }, { "math_id": 12, "text": "Q = z_{1}^{2}+\\cdots+z_{n}^{2} " }, { "math_id": 13, "text": "z^{T}z = \\chi_{2}^{2}(p) " }, { "math_id": 14, "text": "\\sqrt{\\chi_{2}^{2}(p)} " }, { "math_id": 15, "text": "z = \\sqrt{\\chi_{2}^{2}(p)} \\begin{pmatrix} \\cos \\theta \\\\ \\sin \\theta \\end{pmatrix} \\Rightarrow x(\\theta) = \\mu + \\sqrt{\\chi_{2}^{2}(p)} \\Sigma^{1/2} \\begin{pmatrix} \\cos \\theta \\\\ \\sin \\theta \\end{pmatrix} " }, { "math_id": 16, "text": "\\theta\\in[0,2\\pi) " }, { "math_id": 17, "text": "\\Sigma^{1/2} " }, { "math_id": 18, "text": "\\Sigma " }, { "math_id": 19, "text": "\\Sigma = V \\Lambda V^{T} \\implies \\Sigma^{1/2} = V \\Lambda^{1/2} V^{T} " }, { "math_id": 20, "text": "\\Lambda " }, { "math_id": 21, "text": "x " } ]
https://en.wikipedia.org/wiki?curid=15034041
1503566
Logarithmically convex function
Function whose composition with the logarithm is convex In mathematics, a function "f" is logarithmically convex or superconvex if formula_0, the composition of the logarithm with "f", is itself a convex function. Definition. Let "X" be a convex subset of a real vector space, and let "f" : "X" → R be a function taking non-negative values. Then "f" is: logarithmically convex if formula_1 is convex, and strictly logarithmically convex if formula_1 is strictly convex. Here we interpret formula_2 as formula_3. Explicitly, "f" is logarithmically convex if and only if, for all "x"1, "x"2 ∈ "X" and all "t" ∈ [0, 1], the two following equivalent conditions hold: formula_4 Similarly, "f" is strictly logarithmically convex if and only if, in the above two expressions, strict inequality holds for all "t" ∈ (0, 1). The above definition permits "f" to be zero, but if "f" is logarithmically convex and vanishes anywhere in "X", then it vanishes everywhere in the interior of "X". Equivalent conditions. If "f" is a differentiable function defined on an interval "I" ⊆ R, then "f" is logarithmically convex if and only if the following condition holds for all "x" and "y" in "I": formula_5 This is equivalent to the condition that, whenever "x" and "y" are in "I" and "x" &gt; "y", formula_6 Moreover, "f" is strictly logarithmically convex if and only if these inequalities are always strict. If "f" is twice differentiable, then it is logarithmically convex if and only if, for all "x" in "I", formula_7 If the inequality is always strict, then "f" is strictly logarithmically convex. However, the converse is false: It is possible that "f" is strictly logarithmically convex and that, for some "x", we have formula_8. For example, if formula_9, then "f" is strictly logarithmically convex, but formula_10. Furthermore, formula_11 is logarithmically convex if and only if formula_12 is convex for all formula_13. Sufficient conditions. If formula_14 are logarithmically convex, and if formula_15 are non-negative real numbers, then formula_16 is logarithmically convex.
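The twice-differentiable criterion f''(x) f(x) ≥ f'(x)² lends itself to a quick numerical check. The sketch below (plain Python; the finite-difference derivatives and the tolerance are illustrative choices, not part of the theory) tests the criterion on sample points for exp(x²), which is log-convex, and for x² + 1, which is convex but not log-convex.

```python
import math

def log_convex_check(f, xs, h=1e-4):
    """Numerically test the criterion f''(x)*f(x) >= f'(x)**2 on sample points.

    Central finite differences approximate the derivatives; a small negative
    tolerance absorbs floating-point rounding error.
    """
    ok = True
    for x in xs:
        f1 = (f(x + h) - f(x - h)) / (2 * h)            # ~ f'(x)
        f2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)  # ~ f''(x)
        ok = ok and (f2 * f(x) - f1 * f1 >= -1e-6)
    return ok

xs = [i / 10 for i in range(-20, 21)]
log_convex_check(lambda x: math.exp(x * x), xs)  # log-convex: passes
log_convex_check(lambda x: x * x + 1.0, xs)      # convex only: fails for |x| > 1
```

For x² + 1 the quantity f''f - f'² equals 2 - 2x², which turns negative for |x| > 1, matching the claim that the squaring function is convex but not logarithmically convex.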
If formula_17 is any family of logarithmically convex functions, then formula_18 is logarithmically convex. If formula_19 is convex and formula_20 is logarithmically convex and non-decreasing, then formula_21 is logarithmically convex. Properties. A logarithmically convex function "f" is a convex function since it is the composite of the increasing convex function formula_22 and the function formula_23, which is by definition convex. However, being logarithmically convex is a strictly stronger property than being convex. For example, the squaring function formula_24 is convex, but its logarithm formula_25 is not. Therefore the squaring function is not logarithmically convex. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. "This article incorporates material from logarithmically convex function on PlanetMath, which is licensed under the ."
[ { "math_id": 0, "text": "{\\log}\\circ f" }, { "math_id": 1, "text": "{\\log} \\circ f" }, { "math_id": 2, "text": "\\log 0" }, { "math_id": 3, "text": "-\\infty" }, { "math_id": 4, "text": "\\begin{align}\n\\log f(tx_1 + (1 - t)x_2) &\\le t\\log f(x_1) + (1 - t)\\log f(x_2), \\\\\nf(tx_1 + (1 - t)x_2) &\\le f(x_1)^tf(x_2)^{1-t}.\n\\end{align}" }, { "math_id": 5, "text": "\\log f(x) \\ge \\log f(y) + \\frac{f'(y)}{f(y)}(x - y)." }, { "math_id": 6, "text": "\\left(\\frac{f(x)}{f(y)}\\right)^{\\frac{1}{x - y}} \\ge \\exp\\left(\\frac{f'(y)}{f(y)}\\right)." }, { "math_id": 7, "text": "f''(x)f(x) \\ge f'(x)^2." }, { "math_id": 8, "text": "f''(x)f(x) = f'(x)^2" }, { "math_id": 9, "text": "f(x) = \\exp(x^4)" }, { "math_id": 10, "text": "f''(0)f(0) = 0 = f'(0)^2" }, { "math_id": 11, "text": "f\\colon I \\to (0, \\infty)" }, { "math_id": 12, "text": "e^{\\alpha x}f(x)" }, { "math_id": 13, "text": "\\alpha\\in\\mathbb R" }, { "math_id": 14, "text": "f_1, \\ldots, f_n" }, { "math_id": 15, "text": "w_1, \\ldots, w_n" }, { "math_id": 16, "text": "f_1^{w_1} \\cdots f_n^{w_n}" }, { "math_id": 17, "text": "\\{f_i\\}_{i \\in I}" }, { "math_id": 18, "text": "g = \\sup_{i \\in I} f_i" }, { "math_id": 19, "text": "f \\colon X \\to I \\subseteq \\mathbf{R}" }, { "math_id": 20, "text": "g \\colon I \\to \\mathbf{R}_{\\ge 0}" }, { "math_id": 21, "text": "g \\circ f" }, { "math_id": 22, "text": "\\exp" }, { "math_id": 23, "text": "\\log\\circ f" }, { "math_id": 24, "text": "f(x) = x^2" }, { "math_id": 25, "text": "\\log f(x) = 2\\log |x|" }, { "math_id": 26, "text": "f(x) = \\exp(|x|^p)" }, { "math_id": 27, "text": "p \\ge 1" }, { "math_id": 28, "text": "p > 1" }, { "math_id": 29, "text": "f(x) = \\frac{1}{x^p}" }, { "math_id": 30, "text": "(0,\\infty)" }, { "math_id": 31, "text": "p>0." } ]
https://en.wikipedia.org/wiki?curid=1503566
15036487
Subcountability
In constructive mathematics, a collection formula_0 is subcountable if there exists a partial surjection from the natural numbers onto it. This may be expressed as formula_1 where formula_2 denotes that formula_3 is a surjective function from a set formula_4 onto formula_0. The surjection is a member of formula_5 and here the subclass formula_4 of formula_6 is required to be a set. In other words, all elements of a subcountable collection formula_0 are functionally in the image of an indexing set of counting numbers formula_7 and thus the set formula_0 can be understood as being dominated by the countable set formula_6. Discussion. Nomenclature. Note that the nomenclature of countability and finiteness properties varies substantially - in part because many of them coincide when assuming excluded middle. To reiterate, the discussion here concerns the property defined in terms of surjections onto the set formula_0 being characterized. The language here is common in constructive set theory texts, but the name "subcountable" has otherwise also been given to properties in terms of injections out of the set being characterized. The set formula_6 in the definition can also be abstracted away, and in terms of the more general notion formula_0 may be called a "subquotient of formula_6". Example. Important cases are those where the set in question is some subclass of a bigger class of functions as studied in computability theory. For context, recall that being total is famously not a decidable property of functions. Indeed, by Rice's theorem on index sets, most domains of indices are, in fact, not computable sets. There cannot be a "computable" surjection formula_8 from formula_6 onto the set of total computable functions formula_0, as demonstrated via the function formula_9 from the diagonal construction, which could never be in such a surjection's image.
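The diagonal construction mentioned above can be made concrete in a short sketch (Python; the listing passed in is a toy stand-in, since no computable enumeration of exactly the total functions exists):

```python
def diagonal(listing):
    """Given any claimed listing n -> f_n of total functions on the naturals,
    return the total function n -> f_n(n) + 1, which differs from every f_n
    at input n, so no such listing can be surjective."""
    return lambda n: listing(n)(n) + 1

# toy stand-in for an enumeration: f_n(k) = n * k
listing = lambda n: (lambda k: n * k)
g = diagonal(listing)
all(g(n) != listing(n)(n) for n in range(1000))  # True: n*n + 1 != n*n
```

The escape is trivial by construction: g(n) = f_n(n) + 1 can never equal f_n(n), so g is absent from the listing's image regardless of which total functions the listing produces.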
However, via the codes of all possible partial computable functions, which also allow for non-terminating programs, such subsets of functions, such as the total functions, are seen to be subcountable sets: The total functions are the range of some strict subset formula_4 of the natural numbers. Since formula_0 is dominated by an uncomputable set of natural numbers, the name "subcountable" conveys that the set formula_0 is no bigger than formula_6. At the same time, for some particular restrictive constructive semantics of function spaces, in cases when formula_4 is provably not computably enumerable, such formula_4 is then also not countable, and the same holds for formula_0. Note that no effective map between all counting numbers formula_6 and the unbounded and non-finite indexing set formula_4 is asserted in the definition of subcountability - merely the subset relation formula_7. A demonstration that formula_0 is subcountable at the same time implies that it is classically (non-constructively) formally countable, but this does not reflect any effective countability. In other words, the fact that an algorithm listing all total functions in sequence cannot be coded up is not captured by classical axioms regarding set and function existence. We see that, depending on the axioms of a theory, subcountability may be more likely to be provable than countability. Relation to excluded middle. Constructive logics and set theories tie the existence of a function between infinite (non-finite) sets to questions of decidability and possibly of effectivity. There, the subcountability property splits from countability and is thus not a redundant notion. The indexing set formula_4 of natural numbers may be posited to exist, e.g. as a subset via set theoretical axioms like the separation axiom schema. Then by definition of formula_7, formula_10 But this set may then still fail to be detachable, in the sense that formula_11 may not be provable without assuming it as an axiom.
For this reason, one may fail to effectively count the subcountable set formula_0 if one fails to map the counting numbers formula_6 onto the indexing set formula_4. Being countable implies being subcountable. In the appropriate context with Markov's principle, the converse is equivalent to the law of excluded middle, i.e. that for every proposition formula_12, formula_13 holds. In particular, constructively this converse direction does not generally hold. In classical mathematics. Asserting all laws of classical logic, the disjunctive property of formula_4 discussed above indeed does hold for all sets. Then, for nonempty formula_0, the properties numerable (which here shall mean that formula_0 injects into formula_6), countable (formula_6 has formula_0 as its range), subcountable (a subset of formula_6 surjects onto formula_0) and also not formula_14-productive (a countability property essentially defined in terms of subsets of formula_0) are all equivalent and express that a set is finite or countably infinite. Non-classical assertions. Without the law of excluded middle, it can be consistent to assert the subcountability of sets that classically (i.e. non-constructively) exceed the cardinality of the natural numbers. Note that in a constructive setting, a countability claim about the function space formula_15 out of the full set formula_6, as in formula_16, may be disproven. But subcountability formula_17 of an uncountable set formula_15 by a set formula_7 that is not effectively detachable from formula_6 may be permitted. A constructive proof is also classically valid. If a set is proven uncountable constructively, then in a classical context it is provably not subcountable. As this applies to formula_15, the classical framework with its large function space is incompatible with the constructive Church's thesis, an axiom of Russian constructivism. Subcountable and ω-productive are mutually exclusive.
A set formula_0 shall be called formula_14-productive if, whenever any of its subsets formula_18 is the range of some partial function on formula_6, there always exists an element formula_19 that remains in the complement of that range. If there exists any surjection onto some formula_0, then its corresponding complement as described would equal the empty set formula_20, and so a subcountable set is never formula_14-productive. As defined above, the property of being formula_14-productive associates the range formula_21 of any partial function with a particular value formula_22 not in the function's range, formula_23. In this way, a set formula_0 being formula_14-productive speaks to how hard it is to generate all of its elements: They cannot be generated from the naturals using a single function. The formula_14-productivity property constitutes an obstruction to subcountability. As this also implies uncountability, diagonal arguments often involve this notion, explicitly since the late seventies. One may establish the impossibility of "computable" enumerability of formula_0 by considering only the computably enumerable subsets formula_21 and one may require the set of all obstructing formula_24's to be the image of a total recursive so-called production function. formula_5 denotes the space that holds exactly the partial functions on formula_6 that have, as their range, only subsets formula_21 of formula_0. In set theory, functions are modeled as collections of pairs. Whenever formula_25 is a set, the set of sets of pairs formula_26 may be used to characterize the space of partial functions on formula_6. Then for an formula_14-productive set formula_0 one finds formula_27 Read constructively, this associates any partial function formula_28 with an element formula_24 not in that function's range. This property emphasizes the incompatibility of an formula_14-productive set formula_0 with any surjective (possibly partial) function.
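The production of an obstructing element can be illustrated on finite data (Python sketch; the listing `w` is a toy stand-in for a partial enumeration, restricted here to a finite sample of indices). The diagonal set d = {k : k not in w(k)} plays the role of the obstruction for the power set of the naturals:

```python
def obstructing_subset(indices, w):
    """Diagonal witness: for a listing w of subsets of N over the given
    indices, the set d = {k : k not in w(k)} differs from every w(k).
    If d equalled some w(k), then k in d would hold iff k not in w(k) = d,
    a contradiction."""
    return {k for k in indices if k not in w(k)}

w = lambda k: set(range(k))          # toy listing: w(k) = {0, ..., k-1}
d = obstructing_subset(range(6), w)  # here d = {0, 1, 2, 3, 4, 5}
all(d != w(k) for k in range(6))     # d escapes the listing: True
```

No matter which listing is supplied, the returned set disagrees with w(k) at the index k itself, which is the sense in which the power set cannot be generated from the naturals by a single function.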
Below this is applied in the study of subcountability assumptions. Set theories. Cantorian arguments on subsets of the naturals. As reference theory we look at the constructive set theory CZF, which has Replacement, Bounded Separation, strong Infinity, is agnostic towards the existence of power sets, but includes the axiom that asserts that any function space formula_29 is a set, given formula_30 are also sets. In this theory, it is moreover consistent to assert that "every" set is subcountable. The compatibility of various further axioms is discussed in this section by means of possible surjections on an infinite set of counting numbers formula_31. Here formula_6 shall denote a model of the standard natural numbers. Recall that for functions formula_32, by definition of total functionality, there exists a unique return value for all values formula_33 in the domain, formula_34 and for a subcountable set, the surjection is still total on a subset of formula_6. Constructively, fewer such existential claims will be provable than classically. The situations discussed below (onto power classes versus onto function spaces) are different from one another: Opposed to general subclass defining predicates and their truth values (not necessarily provably just true and false), a function (which in programming terms is terminating) does make accessible information about data for all its subdomains (subsets of the formula_0). As characteristic functions for their subsets, functions decide subset membership through their return values. As membership in a generally defined set is not necessarily decidable, the (total) functions formula_35 are not automatically in bijection with all the subsets of formula_0. So constructively, subsets are a more elaborate concept than characteristic functions. In fact, in the context of some non-classical axioms on top of CZF, even the power class of a singleton, e.g.
the class formula_36 of all subsets of formula_37, is shown to be a proper class. Onto power classes. Below, the fact is used that the special case formula_38 of the negation introduction law implies that formula_39 is contradictory. For simplicity of the argument, assume formula_25 is a set. Then consider a subset formula_7 and a function formula_40. Further, as in Cantor's theorem about power sets, define formula_41 where, formula_42 This is a subclass of formula_6 defined in dependency of formula_28 and it can also be written formula_43 It exists as a subset via Separation. Now assuming there exists a number formula_44 with formula_45 implies the contradiction formula_46 So, as a set, one finds that formula_25 is formula_14-productive in that we can define an obstructing formula_24 for any given surjection. Also note that the existence of a surjection formula_47 would automatically make formula_25 into a set, via Replacement in CZF, and so this function existence is unconditionally impossible. We conclude that the subcountability axiom, asserting all "sets" are subcountable, is incompatible with formula_25 being a set, as implied e.g. by the power set axiom. Following the above proof, it is clear that we cannot map formula_4 onto just formula_48 either. Bounded Separation indeed implies that no set formula_0 whatsoever maps onto formula_49. Relatedly, for any function formula_50, a similar analysis using the subset of its range formula_51 shows that formula_52 cannot be an injection. The situation is more complicated for function spaces. In classical ZFC without Powerset or any of its equivalents, it is also consistent that all subclasses of the reals which are sets are subcountable. In that context, this translates to the statement that all sets of real numbers are countable. Of course, that theory does not have the function space set formula_15. Onto function spaces.
By definition of function spaces, the set formula_15 holds those subsets of the set formula_53 which are provably total and functional. Asserting the permitted subcountability of all sets turns, in particular, formula_15 into a subcountable set. So here we consider a surjective function formula_54 and the subset of formula_53 separated as formula_55 with the diagonalizing predicate defined as formula_56 which we can also phrase without the negations as formula_57 This set is classically provably a function in formula_15, designed to take the value formula_58 for particular inputs formula_59. And it can classically be used to prove that the existence of formula_3 as a surjection is actually contradictory. However, constructively, unless the proposition formula_44 in its definition is decidable so that the set actually defines a functional assignment, we cannot prove this set to be a member of the function space. And so we just cannot draw the classical conclusion. In this fashion, subcountability of formula_15 is permitted, and indeed models of the theory exist. Nevertheless, also in the case of CZF, the existence of a full surjection formula_16, with domain formula_6, is indeed contradictory. The decidable membership of formula_60 makes the set also not countable, i.e. uncountable. Beyond these observations, also note that for any non-zero number formula_61, the functions formula_62 in formula_63 involving the surjection formula_3 cannot be extended to all of formula_6 by a similar contradiction argument. This can be expressed as saying that there are then partial functions that cannot be extended to full functions in formula_64. Note that when given a formula_65, one cannot necessarily decide whether formula_44, and so one cannot even decide whether the value of a potential function extension on formula_59 is already determined for the previously characterized surjection formula_3.
The subcountability axiom, asserting all sets are subcountable, is incompatible with any new axiom making formula_4 countable, including LEM. Models. The above analysis affects formal properties of codings of formula_66. Models for the non-classical extension of CZF theory by subcountability postulates have been constructed. Such non-constructive axioms can be seen as choice principles which, however, do not tend to increase the proof-theoretical strengths of the theories much. The notion of size. Subcountability as a judgement of small size shall not be conflated with the standard mathematical definition of cardinality relations as defined by Cantor, with smaller cardinality being defined in terms of injections and equality of cardinalities being defined in terms of bijections. Constructively, the preorder "formula_68" on the class of sets fails to be decidable and anti-symmetric. The function space formula_15 (and also formula_69) in a moderately rich set theory is always found to be neither finite nor in bijection with formula_70, by Cantor's diagonal argument. This is what it means to be uncountable. But the argument that the cardinality of that set would thus in some sense exceed that of the natural numbers relies on a restriction to just the classical size conception and its induced ordering of sets by cardinality. As seen in the example of the function space considered in computability theory, not every infinite subset of formula_6 necessarily is in constructive bijection with formula_6, thus making room for a more refined distinction between uncountable sets in constructive contexts. Motivated by the above sections, the infinite set formula_15 may be considered "smaller" than the class formula_25. Related properties. A subcountable set has alternatively also been called "subcountably indexed". The analogous notion exists in which "formula_71" in the definition is replaced by the existence of a set that is a subset of some finite set.
This property is variously called "subfinitely indexed". In category theory all these notions are subquotients.
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\exists (I\\subseteq{\\mathbb N}).\\, \\exists f.\\, (f\\colon I\\twoheadrightarrow X)," }, { "math_id": 2, "text": "f\\colon I\\twoheadrightarrow X" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "I" }, { "math_id": 5, "text": "{\\mathbb N}\\rightharpoonup X" }, { "math_id": 6, "text": "{\\mathbb N}" }, { "math_id": 7, "text": "I\\subseteq{\\mathbb N}" }, { "math_id": 8, "text": "n\\mapsto f_n" }, { "math_id": 9, "text": "n\\mapsto f_n(n)+1" }, { "math_id": 10, "text": "\\forall (i\\in I). (i\\in{\\mathbb N})." }, { "math_id": 11, "text": "\\forall (n\\in {\\mathbb N}). \\big((n\\in I) \\lor \\neg(n\\in I)\\big)" }, { "math_id": 12, "text": "\\phi" }, { "math_id": 13, "text": "\\phi\\lor \\neg \\phi" }, { "math_id": 14, "text": "\\omega" }, { "math_id": 15, "text": "{\\mathbb N}^{\\mathbb N}" }, { "math_id": 16, "text": "{\\mathbb N}\\twoheadrightarrow{\\mathbb N}^{\\mathbb N}" }, { "math_id": 17, "text": "I\\twoheadrightarrow{\\mathbb N}^{\\mathbb N}" }, { "math_id": 18, "text": "W\\subset X" }, { "math_id": 19, "text": "d\\in X\\setminus W" }, { "math_id": 20, "text": "X\\setminus X" }, { "math_id": 21, "text": "W" }, { "math_id": 22, "text": "d\\in X" }, { "math_id": 23, "text": "d\\notin W" }, { "math_id": 24, "text": "d" }, { "math_id": 25, "text": "{\\mathcal P}{\\mathbb N}" }, { "math_id": 26, "text": "\\cup_{I\\subseteq{\\mathbb N}} X^I" }, { "math_id": 27, "text": "\\forall (w\\in({\\mathbb N}\\rightharpoonup X)). \\exists (d\\in X). \\forall(n\\in{\\mathbb N}). w(n) \\neq d." }, { "math_id": 28, "text": "w" }, { "math_id": 29, "text": "Y^X" }, { "math_id": 30, "text": "X, Y" }, { "math_id": 31, "text": "I\\subseteq {\\mathbb N}" }, { "math_id": 32, "text": "g\\colon X\\to Y" }, { "math_id": 33, "text": "x\\in X" }, { "math_id": 34, "text": "\\exists!(y\\in Y). 
g(x)=y," }, { "math_id": 35, "text": "X\\to\\{0,1\\}" }, { "math_id": 36, "text": "{\\mathcal P}\\{0\\}" }, { "math_id": 37, "text": "\\{0\\}" }, { "math_id": 38, "text": "(P\\to \\neg P)\\to\\neg P" }, { "math_id": 39, "text": "P\\leftrightarrow \\neg P" }, { "math_id": 40, "text": "w\\colon I\\to{\\mathcal P}{\\mathbb N}" }, { "math_id": 41, "text": "d=\\{k \\in {\\mathbb N}\\mid k\\in I \\land D(k)\\}" }, { "math_id": 42, "text": "D(k)=\\neg (k\\in w(k))." }, { "math_id": 43, "text": "d=\\{k \\in I\\mid \\neg (k\\in w(k))\\}." }, { "math_id": 44, "text": "n\\in I" }, { "math_id": 45, "text": "w(n)=d" }, { "math_id": 46, "text": "n\\in d\\,\\leftrightarrow\\,\\neg(n\\in d)." }, { "math_id": 47, "text": "f\\colon I\\twoheadrightarrow{\\mathcal P}{\\mathbb N}" }, { "math_id": 48, "text": "{\\mathcal P}I" }, { "math_id": 49, "text": "{\\mathcal P}X" }, { "math_id": 50, "text": "h\\colon{\\mathcal P}Y\\to Y" }, { "math_id": 51, "text": "\\{y\\in Y\\mid \\exists(S\\in{\\mathcal P}Y). y=h(S)\\land y\\notin S\\}" }, { "math_id": 52, "text": "h" }, { "math_id": 53, "text": "{\\mathbb N}\\times{\\mathbb N}" }, { "math_id": 54, "text": "f\\colon I\\twoheadrightarrow{\\mathbb N}^{\\mathbb N}" }, { "math_id": 55, "text": "\\Big\\{\\langle n, y\\rangle \\in {\\mathbb N}\\times{\\mathbb N} \\mid \\big(n\\in I\\land D(n, y)\\big) \\lor \\big(\\neg(n\\in I)\\land y=1\\big)\\Big\\}" }, { "math_id": 56, "text": "D(n, y) = \\big(\\neg(f(n)(n)\\ge 1)\\land y=1\\big) \\lor \\big(\\neg(f(n)(n)=0)\\land y=0\\big)" }, { "math_id": 57, "text": "D(n, y) = \\big(f(n)(n)=0\\land y=1\\big) \\lor \\big(f(n)(n)\\ge 1\\land y=0\\big)." 
}, { "math_id": 58, "text": "y=0" }, { "math_id": 59, "text": "n" }, { "math_id": 60, "text": "I={\\mathbb N}" }, { "math_id": 61, "text": "a" }, { "math_id": 62, "text": "i\\mapsto f(i)(i)+a" }, { "math_id": 63, "text": "I\\to{\\mathbb N}" }, { "math_id": 64, "text": "{\\mathbb N}\\to{\\mathbb N}" }, { "math_id": 65, "text": "n\\in{\\mathbb N}" }, { "math_id": 66, "text": "\\mathbb R" }, { "math_id": 67, "text": "{\\mathsf {ML_1V}}" }, { "math_id": 68, "text": "\\le" }, { "math_id": 69, "text": " \\{0,1\\}^{\\mathbb N} " }, { "math_id": 70, "text": " {\\mathbb N} " }, { "math_id": 71, "text": "\\exists(I\\subseteq{\\mathbb N})" } ]
https://en.wikipedia.org/wiki?curid=15036487
15037
Income
Wealth gained by a person or company within a given time period Income is the consumption and saving opportunity gained by an entity within a specified timeframe, which is generally expressed in monetary terms. Income is difficult to define conceptually and the definition may be different across fields. For example, a person's income in an economic sense may be different from their income as defined by law. An extremely important definition of income is Haig–Simons income, which defines income as "Consumption + Change in net worth" and is widely used in economics. For households and individuals in the United States, income is defined by tax law as a sum that includes any wage, salary, profit, interest payment, rent, or other form of earnings received in a calendar year. Discretionary income is often defined as gross income minus taxes and other deductions (e.g., mandatory pension contributions), and is widely used as a basis to compare the welfare of taxpayers. In the field of public economics, the concept may comprise the accumulation of both monetary and non-monetary consumption ability, with the former (monetary) being used as a proxy for total income. For a firm, gross income can be defined as sum of all revenue minus the cost of goods sold. Net income nets out expenses: net income equals revenue minus cost of goods sold, expenses, depreciation, interest, and taxes. Economic definitions. Full and Haig–Simons income. "Full income" refers to the accumulation of both the monetary and the non-monetary consumption-ability of any given entity, such as a person or a household. According to what the economist Nicholas Barr describes as the "classical definition of income" (the 1938 Haig–Simons definition): "income may be defined as the... sum of (1) the market value of rights exercised in consumption and (2) the change in the value of the store of property rights..." 
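The two bookkeeping identities above, Haig-Simons income and firm net income, can be written out directly (a toy sketch with hypothetical figures):

```python
def haig_simons_income(consumption, net_worth_change):
    """Haig-Simons income: market value of consumption plus the change in
    net worth over the period."""
    return consumption + net_worth_change

def firm_net_income(revenue, cogs, expenses, depreciation, interest, taxes):
    """Net income as described above: revenue minus cost of goods sold,
    expenses, depreciation, interest, and taxes."""
    return revenue - cogs - expenses - depreciation - interest - taxes

# hypothetical household: 40,000 consumed while net worth rose by 15,000
haig_simons_income(40_000, 15_000)  # 55000
# hypothetical firm
firm_net_income(500_000, 200_000, 150_000, 30_000, 10_000, 25_000)  # 85000
```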
Since the consumption potential of non-monetary goods, such as leisure, cannot be measured, monetary income may be thought of as a proxy for full income. As such, however, it is criticized for being unreliable, "i.e." failing to accurately reflect affluence (and thus the consumption opportunities) of any given agent. It omits the utility a person may derive from non-monetary income and, on a macroeconomic level, fails to accurately chart social welfare. According to Barr, "in practice money income as a proportion of total income varies widely and unsystematically. Non-observability of full income prevents a complete characterization of the individual opportunity set, forcing us to use the unreliable yardstick of money income." Factor income. In economics, "factor income" is the return accruing for a person, or a nation, derived from the "factors of production": rental income, wages generated by labor, the interest created by capital, and profits from entrepreneurial ventures. In consumer theory 'income' is another name for the "budget constraint", an amount formula_0 to be spent on different goods x and y in quantities formula_1 and formula_2 at prices formula_3 and formula_4. The basic equation for this is formula_5 This equation implies two things. First, buying one more unit of good x implies buying formula_6 fewer units of good y. So, formula_6 is the "relative" price of a unit of x as to the number of units given up in y. Second, if the price of x falls for a fixed formula_0 and fixed formula_7, then its relative price falls. The usual hypothesis, the law of demand, is that the quantity demanded of x would increase at the lower price. The analysis can be generalized to more than two goods. The theoretical generalization to more than one period is a multi-period wealth and income constraint. For example, the same person can gain more productive skills or acquire more productive income-earning assets to earn a higher income.
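The single-period, two-good budget constraint described above can be sketched as follows (plain Python; the standard form income = px·qx + py·qy and all variable names are illustrative assumptions):

```python
def affordable_y(income, px, py, qx):
    """Units of good y affordable after buying qx units of good x, under
    the two-good budget constraint income = px*qx + py*qy."""
    return (income - px * qx) / py

income, px, py = 100.0, 5.0, 2.0
affordable_y(income, px, py, 0)  # 50.0 units of y with no x purchased
affordable_y(income, px, py, 1)  # 47.5: one unit of x forgoes px/py = 2.5 units of y
```

The drop of 2.5 units of y per extra unit of x is exactly the relative price px/py discussed above; lowering px with income and py fixed shrinks that trade-off.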
In the multi-period case, something might also happen to the economy beyond the control of the individual to reduce (or increase) the flow of income. Changing measured income and its relation to consumption over time might be modeled accordingly, such as in the permanent income hypothesis. Legal definitions. Definitions under the Internal Revenue Code. Except as otherwise provided in this subtitle, gross income means all income from whatever source derived, including (but not limited to) the following items: (1) Compensation for services, including fees, commissions, fringe benefits, and similar items; (2) Gross income derived from business; (3) Gains derived from dealings in property; (4) Interest; (5) Rents; (6) Royalties; (7) Dividends; (8) Annuities; (9) Income from life insurance and endowment contracts; (10) Pensions; (11) Income from discharge of indebtedness; (12) Distributive share of partnership gross income; (13) Income in respect of a decedent; and (14) Income from an interest in an estate or trust. 26 U.S. Code § 61 - Gross income defined. There are also some statutory exclusions from income. Definition under US case law. Income is "undeniable accessions to wealth, clearly realized, and over which the taxpayer has complete dominion." Commentators generally regard this as a sound working definition of income. Taxable income is usually lower than Haig–Simons income. This is because unrealized appreciation (e.g., the increase in the value of stock over the course of a year) is economic income but not taxable income, and because there are many statutory exclusions from taxable income, including workman's compensation, SSI, gifts, child support, and in-kind government transfers. Accounting definitions. 
The International Accounting Standards Board (IASB) uses the following definition: "Income is increases in economic benefits during the accounting period in the form of inflows or enhancements of assets or decreases of liabilities that result in increases in equity, other than those relating to contributions from equity participants." [F.70] (IFRS Framework). Previously, the IFRS conceptual framework (4.29) stated: "The definition of income encompasses both revenue and gains. Revenue arises in the course of the ordinary activities of an entity and is referred to by a variety of different names including sales, fees, interest, dividends, royalties and rent. 4.30: Gains represent other items that meet the definition of income and may, or may not, arise in the course of the ordinary activities of an entity. Gains represent increases in economic benefits and as such are no different in nature from revenue. Hence, they are not regarded as constituting a separate element in this Conceptual Framework." The current IFRS conceptual framework (4.68) no longer draws a distinction between revenue and gains. Nevertheless, the distinction continues to be drawn at the standard and reporting levels. For example, IFRS 9.5.7.1 states: "A gain or loss on a financial asset or financial liability that is measured at fair value shall be recognised in profit or loss ..." while the IASB-defined IFRS XBRL taxonomy includes OtherGainsLosses, GainsLossesOnNetMonetaryPosition and similar items. US GAAP does not define income but does define comprehensive income (CON 8.4.E75): Comprehensive income is the change in equity of a business entity during a period from transactions and other events and circumstances from nonowner sources. It includes all changes in equity during a period except those resulting from investments by owners and distributions to owners. 
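The US GAAP definition just quoted amounts to an identity on the change in equity: comprehensive income is the period's change in equity with transactions with owners backed out. A hedged sketch of that identity (the figures are invented for illustration):

```python
def comprehensive_income(equity_start, equity_end,
                         owner_investments, owner_distributions):
    """Change in equity during the period, excluding transactions with
    owners: investments by owners are removed, and distributions to
    owners (which reduced equity without being a loss) are added back."""
    return (equity_end - equity_start) - owner_investments + owner_distributions

# Illustrative: equity rose from 500 to 580 while owners invested 30
# and received 10 in distributions, so comprehensive income is 60.
print(comprehensive_income(500, 580, 30, 10))  # 60
```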
According to John Hicks' definitions, income "is the maximum amount which can be spent during a period if there is to be an expectation of maintaining intact the capital value of prospective receipts (in money terms)". "Nonincome". Debt. Borrowing or repaying money is not income under any definition, for either the borrower or the lender. Interest and forgiveness of debt are income. Psychic income. "Non-monetary joy," such as watching a sunset or having sex, simply is not income. Similarly, nonmonetary suffering, such as heartbreak or labor, is not negative income. This may seem trivial, but the non-inclusion of psychic income has important effects on economics and tax policy. It encourages people to find happiness in nonmonetary, nontaxable ways and means that reported income may overstate or understate the well-being of a given individual. Income growth. Income per capita has been increasing steadily in most countries. Many factors contribute to people having a higher income, including education, globalisation and favorable political circumstances such as economic freedom and peace. Increases in income also tend to lead to people choosing to work fewer hours. Developed countries (defined as countries with a "developed economy") have higher incomes, whereas developing countries tend to have lower incomes. Factors contributing to higher income. Education has a positive effect on the level of income: it increases the skills of the workforce, which in turn increases its productivity (and thus wages). Gary Becker developed human capital theory, which emphasizes that investment in education and training leads to efficiency gains, and by extension to economic growth. Globalization can increase incomes by integrating markets and giving individuals greater opportunities for income increases through the efficient allocation of resources and the expansion of existing wealth. Generally, countries more open to trade have higher incomes. 
And while globalization tends to increase average income in a country, it does so unequally. Sachs and Warner claim that “countries with open economies will converge to the same level of income, although admittedly it will take a long time.” Income inequality. Income inequality is the extent to which income is distributed in an uneven manner. It can be measured by various methods, including the Lorenz curve and the Gini coefficient. Many economists argue that certain amounts of inequality are necessary and desirable but that excessive inequality leads to efficiency problems and social injustice, necessitating initiatives such as United Nations Sustainable Development Goal 10, which aims at reducing inequality. National Income. National income, measured by statistics such as net national income (NNI), measures the total income of individuals, corporations, and government in the economy. For more information see Measures of national income and output. The total output of an economy equals its total income. From this viewpoint, GDP can be an indicator and measurement of national income since it measures a nation’s total production of goods and services produced within the borders of one country and its total income simultaneously. GDP is measured through factors of production (inputs) and the production function (the ability to turn inputs into outputs). An important question here is how national income is divided among the factors of production, a distribution worked out through the factor market. For this examination, the neoclassical theory of distribution and factor prices is the relevant modern theory. Basic income. Basic income models advocate for a regular, and usually unconditional, receipt of money from a public institution. There are many basic income models, with the most famous being Universal Basic Income. Universal Basic Income. Universal Basic Income is a periodic receipt of cash given to individuals on a universal and unconditional basis. 
Unlike other programs like the Food Stamp Program, UBI provides eligible recipients with cash instead of coupons. Instead of households, it is paid to all individuals without requiring a means test and regardless of employment status. Proponents of UBI argue that basic income is needed for social protection and for mitigating labour market disruptions caused by automation. Opponents argue that UBI, in addition to being costly, will distort incentives for individuals to work. They also argue that other, more cost-effective policies can tackle the problems raised by proponents of UBI. These include, for example, a negative income tax. Income in philosophy and ethics. Throughout history, many have written about the impact of income on morality and society. Saint Paul wrote 'For the love of money is a root of all kinds of evil:' (1 Timothy 6:10, ASV). Some scholars have come to the conclusion that material progress and prosperity, as manifested in continuous income growth at both the individual and the national level, provide the indispensable foundation for sustaining any kind of morality. This argument was explicitly given by Adam Smith in his "Theory of Moral Sentiments", and has more recently been developed by Harvard economist Benjamin Friedman in his book "The Moral Consequences of Economic Growth". Income and health. A landmark systematic review from Harvard University researchers in the Cochrane Collaboration found that income given in the form of unconditional cash transfers leads to reductions in disease, improvements in food security and dietary diversity, increases in children's school attendance, decreases in extreme poverty, and higher health care spending.&lt;ref name="doi10.1002/14651858.CD011135.pub2"&gt;&lt;/ref&gt;&lt;ref name="doi10.1002/14651858.CD011135.pub3"&gt;&lt;/ref&gt; The Health Foundation published an analysis finding that people on the lower end of the income spectrum were more likely to describe their health negatively. 
Higher income was associated with self-reported better health. Another study found that “an increase in household income of £1,000 is associated with a 3.6 month increase in life expectancy for both men and women.” Michael G. Marmot, a professor of epidemiology, argues that there are two mechanisms that could explain a positive correlation between income and health: the ability to afford goods and services necessary for biological survival, and the ability to influence life circumstances. Russell Ecob and George Davey Smith found that there is a relationship between income and a number of health measures. Greater household equivalised income is associated with better scores on health indicators such as height, waist–hip ratio, respiratory function, malaise, and limiting long-term illness. History. Income is conventionally denoted by "Y" in economics. John Hicks used "I" for income, but Keynes wrote to him in 1937, "after trying both, I believe it is easier to use Y for income and I for investment." Some consider Y as an alternative letter for the phoneme I in languages like Spanish, although Y as the "Greek I" was actually pronounced like the modern German ü or the phonetic /y/. References. 
[ { "math_id": 0, "text": "Y" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "P_x" }, { "math_id": 4, "text": "P_y" }, { "math_id": 5, "text": "Y=P_x \\cdot x + P_y \\cdot y" }, { "math_id": 6, "text": "\\frac{P_x}{P_y}" }, { "math_id": 7, "text": "P_y," } ]
https://en.wikipedia.org/wiki?curid=15037
1503750
Rolling resistance
Force resisting the motion when a body rolls on a surface Rolling resistance, sometimes called rolling friction or rolling drag, is the force resisting the motion when a body (such as a ball, tire, or wheel) rolls on a surface. It is mainly caused by non-elastic effects; that is, not all the energy needed for deformation (or movement) of the wheel, roadbed, etc., is recovered when the pressure is removed. Two forms of this are hysteresis losses (see below), and permanent (plastic) deformation of the object or the surface (e.g. soil). Note that the slippage between the wheel and the surface also results in energy dissipation. Although some researchers have included this term in rolling resistance, others suggest that this dissipation term should be treated separately from rolling resistance because it is due to the applied torque to the wheel and the resultant slip between the wheel and ground, which is called slip loss or slip resistance. In addition, only the so-called slip resistance involves friction, therefore the name "rolling friction" is to an extent a misnomer. Analogous with sliding friction, rolling resistance is often expressed as a coefficient times the normal force. This coefficient of rolling resistance is generally much smaller than the coefficient of sliding friction. Any coasting wheeled vehicle will gradually slow down due to rolling resistance, including that of the bearings, but a train car with steel wheels running on steel rails will roll farther than a bus of the same mass with rubber tires running on tarmac/asphalt. Factors that contribute to rolling resistance are the (amount of) deformation of the wheels, the deformation of the roadbed surface, and movement below the surface. Additional contributing factors include wheel diameter, load on wheel, surface adhesion, sliding, and relative micro-sliding between the surfaces of contact. 
The losses due to hysteresis also depend strongly on the material properties of the wheel or tire and the surface. For example, a rubber tire will have higher rolling resistance on a paved road than a steel railroad wheel on a steel rail. Also, sand on the ground will give more rolling resistance than concrete. Unlike most of the other resistive forces acting on a vehicle, rolling resistance is largely independent of speed. Primary cause. The primary cause of pneumatic tire rolling resistance is hysteresis: A characteristic of a deformable material such that the energy of deformation is greater than the energy of recovery. The rubber compound in a tire exhibits hysteresis. As the tire rotates under the weight of the vehicle, it experiences repeated cycles of deformation and recovery, and it dissipates the hysteresis energy loss as heat. Hysteresis is the main cause of energy loss associated with rolling resistance and is attributed to the viscoelastic characteristics of the rubber. — National Academy of Sciences This main principle is illustrated in the figure of the rolling cylinders. If two equal cylinders are pressed together, then the contact surface is flat. In the absence of surface friction, contact stresses are normal (i.e. perpendicular) to the contact surface. Consider a particle that enters the contact area at the right side, travels through the contact patch and leaves at the left side. Initially its vertical deformation is increasing, which is resisted by the hysteresis effect. Therefore, an additional pressure is generated to avoid interpenetration of the two surfaces. Later its vertical deformation is decreasing. This is again resisted by the hysteresis effect. In this case this decreases the pressure that is needed to keep the two bodies separate. The resulting pressure distribution is asymmetrical and is shifted to the right. The line of action of the (aggregate) vertical force no longer passes through the centers of the cylinders. This means that a moment occurs that tends to retard the rolling motion. 
Materials that have a large hysteresis effect, such as rubber, which bounce back slowly, exhibit more rolling resistance than materials with a small hysteresis effect that bounce back more quickly and more completely, such as steel or silica. Low rolling resistance tires typically incorporate silica in place of carbon black in their tread compounds to reduce low-frequency hysteresis without compromising traction. Note that railroads also have hysteresis in the roadbed structure. Definitions. In the broad sense, specific "rolling resistance" (for vehicles) is the force per unit vehicle weight required to move the vehicle on level ground at a constant slow speed where aerodynamic drag (air resistance) is insignificant and also where there are no traction (motor) forces or brakes applied. In other words, the vehicle would be coasting if it were not for the force to maintain constant speed. This broad sense includes wheel bearing resistance, the energy dissipated by vibration and oscillation of both the roadbed and the vehicle, and sliding of the wheel on the roadbed surface (pavement or a rail). But there is an even broader sense that would include energy wasted by wheel slippage due to the torque applied from the engine. This includes the increased power required due to the increased velocity of the wheels where the tangential velocity of the driving wheel(s) becomes greater than the vehicle speed due to slippage. Since power is equal to force times velocity and the wheel velocity has increased, the power required has increased accordingly. The pure "rolling resistance" for a train is that which happens due to deformation and possible minor sliding at the wheel-road contact. For a rubber tire, an analogous energy loss happens over the entire tire, but it is still called "rolling resistance". 
In the broad sense, "rolling resistance" includes wheel bearing resistance, energy loss by shaking both the roadbed (and the earth underneath) and the vehicle itself, and by sliding of the wheel, road/rail contact. Railroad textbooks seem to cover all these resistance forces but do not call their sum "rolling resistance" (broad sense) as is done in this article. They just sum up all the resistance forces (including aerodynamic drag) and call the sum basic train resistance (or the like). Since railroad rolling resistance in the broad sense may be a few times larger than just the pure rolling resistance, reported values may be in serious conflict since they may be based on different definitions of "rolling resistance". The train's engines must, of course, provide the energy to overcome this broad-sense rolling resistance. For tires, rolling resistance is defined as the energy consumed by a tire per unit distance covered. It is also called rolling friction or rolling drag. It is one of the forces that act to oppose the motion of a vehicle. The main reason for this is that, as the tire rolls over the surface, the tire deforms at the contact patch. For highway motor vehicles, there is some energy dissipated in shaking the roadway (and the earth beneath it), the shaking of the vehicle itself, and the sliding of the tires. But, other than the additional power required due to torque and wheel bearing friction, non-pure rolling resistance doesn't seem to have been investigated, possibly because the "pure" rolling resistance of a rubber tire is several times higher than the neglected resistances. Rolling resistance coefficient. The "rolling resistance coefficient" is defined by the following equation: formula_0 where formula_3 is the force needed to push (or tow) a wheeled vehicle forward (at constant speed on a level surface, or zero grade, with zero air resistance) per unit force of weight. 
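The defining equation formula_0 can be applied directly. A small sketch using the Crr ≈ 0.01 tire-on-asphalt figure that the article gives later:

```python
G = 9.81  # standard gravity, m/s^2

def rolling_force(crr, mass_kg):
    """F = Crr * N, with the normal force N = m * g on level ground."""
    return crr * mass_kg * G

# 1000 kg car on asphalt (Crr = 0.01):
print(round(rolling_force(0.01, 1000), 1))  # 98.1 (newtons)
```

This matches the worked example later in the article (1000 kg × 9.81 m/s2 × 0.01 = 98.1 N).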
It is assumed that all wheels are the same and bear identical weight. Thus: formula_5 means that it would only take 0.01 pounds to tow a vehicle weighing one pound. For a 1000-pound vehicle, it would take 1000 times more tow force, i.e. 10 pounds. One could say that formula_3 is in lb(tow-force)/lb(vehicle weight). Since this lb/lb is force divided by force, formula_3 is dimensionless. Multiply it by 100 and you get the percent (%) of the weight of the vehicle required to maintain slow steady speed. formula_3 is often multiplied by 1000 to get the parts per thousand, which is the same as kilograms (kg force) per metric ton (tonne = 1000 kg), which is the same as pounds of resistance per 1000 pounds of load or Newtons/kilo-Newton, etc. For the US railroads, lb/ton has been traditionally used; this is just formula_6. Thus, they are all just measures of resistance per unit vehicle weight. While they are all "specific resistances", sometimes they are just called "resistance" although they are really a coefficient (ratio) or a multiple thereof. If using pounds or kilograms as force units, mass is equal to weight (in Earth's gravity, a kilogram of mass weighs a kilogram and exerts a kilogram of force), so one could claim that formula_3 is also the force per unit mass in such units. The SI system would use N/tonne (N/T, N/t), which is formula_7 and is force per unit mass, where "g" is the acceleration of gravity in SI units (meters per second squared). The above shows resistance proportional to formula_8 but does not explicitly show any variation with speed, loads, torque, surface roughness, diameter, tire inflation/wear, etc., because formula_3 itself varies with those factors. It might seem from the above definition of formula_8 that the rolling resistance is directly proportional to vehicle weight, but it is not. Measurement. There are at least two popular models for calculating rolling resistance. 
The results of these tests can be hard for the general public to obtain, as manufacturers prefer to publicize "comfort" and "performance". Physical formulae. The coefficient of rolling resistance for a slow rigid wheel on a perfectly elastic surface, not adjusted for velocity, can be calculated by formula_10 where formula_11 is the sinkage depth and formula_12 is the diameter of the rigid wheel. The empirical formula for formula_13 for cast iron mine car wheels on steel rails is: formula_14 where formula_15 is the wheel diameter in inches and formula_16 is the load on the wheel in pounds. As an alternative to using formula_8 one can use formula_17, which is a different rolling resistance coefficient or coefficient of rolling friction with dimension of length. It is defined by the following formula: formula_18 where formula_1 is the rolling resistance force, formula_4 is the normal force, and formula_19 is the wheel radius. The above equation, where resistance is inversely proportional to radius formula_19, seems to be based on the discredited "Coulomb's law" (Neither Coulomb's inverse square law nor Coulomb's law of friction). See dependence on diameter. Equating this equation with the force per the rolling resistance coefficient, and solving for formula_20, gives formula_20 = formula_21. Therefore, if a source gives rolling resistance coefficient (formula_3) as a dimensionless coefficient, it can be converted to formula_20, having units of length, by multiplying formula_3 by wheel radius formula_19. Rolling resistance coefficient examples. Table of rolling resistance coefficient examples: For example, in earth gravity, a car of 1000 kg on asphalt will need a force of around 100 newtons for rolling (1000 kg × 9.81 m/s2 × 0.01 = 98.1 N). Dependence on diameter. Stagecoaches and railroads. According to Dupuit (1837), rolling resistance (of wheeled carriages with wooden wheels with iron tires) is approximately inversely proportional to the square root of wheel diameter. This rule has been experimentally verified for cast iron wheels (8″ - 24″ diameter) on steel rail and for 19th century carriage wheels. But there are other tests on carriage wheels that do not agree. 
Theory of a cylinder rolling on an elastic roadway also gives this same rule. These contradict earlier (1785) tests by Coulomb of rolling wooden cylinders where Coulomb reported that rolling resistance was inversely proportional to the diameter of the wheel (known as "Coulomb's law"). This disputed (or wrongly applied) "Coulomb's law" is still found in handbooks, however. Pneumatic tires. For pneumatic tires on hard pavement, it is reported that the effect of diameter on rolling resistance is negligible (within a practical range of diameters). Dependence on applied torque. The driving torque formula_22 to overcome rolling resistance formula_23 and maintain steady speed on level ground (with no air resistance) can be calculated by: formula_24 where formula_25 is the linear speed of the body and formula_26 is its rotational speed. It is noteworthy that formula_27 is usually not equal to the radius of the rolling body as a result of wheel slip. The slip between wheel and ground inevitably occurs whenever a driving or braking torque is applied to the wheel. Consequently, the linear speed of the vehicle differs from the wheel's circumferential speed. It is notable that slip does not occur in driven (non-powered) wheels, which are not subjected to driving torque, except under braking. Therefore, rolling resistance, namely hysteresis loss, is the main source of energy dissipation in driven wheels or axles, whereas in the drive (powered) wheels and axles slip resistance, namely loss due to wheel slip, plays a role as well as rolling resistance. Significance of rolling or slip resistance is largely dependent on the tractive force, coefficient of friction, normal load, etc. All wheels. "Applied torque" may either be driving torque applied by a motor (often through a transmission) or a braking torque applied by brakes (including regenerative braking). Such torques result in energy dissipation above that due to the basic rolling resistance of a freely rolling wheel, i.e. slip resistance. 
This additional loss is in part due to the fact that there is some slipping of the wheel, and for pneumatic tires, there is more flexing of the sidewalls due to the torque. Slip is defined such that a 2% slip means that the circumferential speed of the driving wheel exceeds the speed of the vehicle by 2%. A small percentage slip can result in a slip resistance which is much larger than the basic rolling resistance. For example, for pneumatic tires, a 5% slip can translate into a 200% increase in rolling resistance. This is partly because the tractive force applied during this slip is many times greater than the rolling resistance force, and thus much more power per unit velocity is being applied (recall power = force x velocity so that power per unit of velocity is just force). So just a small percentage increase in circumferential velocity due to slip can translate into a loss of traction power which may even exceed the power loss due to basic (ordinary) rolling resistance. For railroads, this effect may be even more pronounced due to the low rolling resistance of steel wheels. It is shown that for a passenger car, when the tractive force is about 40% of the maximum traction, the slip resistance is almost equal to the basic rolling resistance (hysteresis loss). But in the case of a tractive force equal to 70% of the maximum traction, slip resistance becomes 10 times larger than the basic rolling resistance. Railroad steel wheels. In order to apply any traction to the wheels, some slippage of the wheel is required. For trains climbing up a grade, this slip is normally 1.5% to 2.5%. Slip (also known as creep) is normally roughly directly proportional to tractive effort. An exception is if the tractive effort is so high that the wheel is close to substantial slipping (more than just a few percent as discussed above), then slip rapidly increases with tractive effort and is no longer linear. 
With slightly higher applied tractive effort, the wheel spins out of control and the adhesion drops, resulting in the wheel spinning even faster. This is the type of slipping that is observable by eye—the slip of, say, 2% for traction is only observed by instruments. Such rapid slip may result in excessive wear or damage. Pneumatic tires. Rolling resistance greatly increases with applied torque. At high torques, which apply a tangential force to the road of about half the weight of the vehicle, the rolling resistance may triple (a 200% increase). This is in part due to a slip of about 5%. The rolling resistance increase with applied torque is not linear, but increases at a faster rate as the torque becomes higher. Dependence on wheel load. Railroad steel wheels. The rolling resistance coefficient, Crr, significantly decreases as the weight of the rail car per wheel increases. For example, an empty freight car had about twice the Crr of a loaded car (Crr=0.002 vs. Crr=0.001). This same "economy of scale" shows up in testing of mine rail cars. The theoretical Crr for a rigid wheel rolling on an elastic roadbed shows Crr inversely proportional to the square root of the load. If Crr is itself dependent on wheel load per an inverse square-root rule, then for an increase in load of 2% only a 1% increase in rolling resistance occurs. Pneumatic tires. For pneumatic tires, the direction of change in Crr (rolling resistance coefficient) depends on whether or not tire inflation is increased with increasing load. It is reported that, if inflation pressure is increased with load according to an (undefined) "schedule", then a 20% increase in load decreases Crr by 3%. But, if the inflation pressure is not changed, then a 20% increase in load results in a 4% increase in Crr. Of course, this will increase the rolling resistance by 20% due to the increase in load plus 1.2 x 4% due to the increase in Crr, resulting in a 24.8% increase in rolling resistance. 
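The 24.8% figure above is simply the two effects compounding: formula_0 makes the rolling force proportional to both the load and Crr, so the multiplicative factors combine. A quick check:

```python
def force_change(load_factor, crr_factor):
    """Fractional change in rolling force F = Crr * N when the load
    and the coefficient each change by a multiplicative factor."""
    return load_factor * crr_factor - 1

# Constant inflation pressure: load up 20%, Crr up 4%.
print(f"{force_change(1.20, 1.04):.1%}")  # 24.8%

# Inflation raised with load per the (undefined) schedule:
# load up 20%, Crr down 3%.
print(f"{force_change(1.20, 0.97):.1%}")  # 16.4%
```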
Dependence on curvature of roadway. General. When a vehicle (motor vehicle or railroad train) goes around a curve, rolling resistance usually increases. If the curve is not banked so as to exactly counter the centrifugal force with an equal and opposing centripetal force due to the banking, then there will be a net unbalanced sideways force on the vehicle. This will result in increased rolling resistance. Banking is also known as "superelevation" or "cant" (not to be confused with the cant of a rail). For railroads, this is called curve resistance but for roads it has (at least once) been called rolling resistance due to cornering. Sound. Rolling friction generates sound (vibrational) energy, as mechanical energy is converted to this form of energy due to the friction. One of the most common examples of rolling friction is the movement of motor vehicle tires on a roadway, a process which generates sound as a by-product. The sound generated by automobile and truck tires as they roll (especially noticeable at highway speeds) is mostly due to the percussion of the tire treads, and compression (and subsequent decompression) of air temporarily captured within the treads. Factors that contribute in tires. Several factors affect the magnitude of rolling resistance a tire generates: Railroads: Components of rolling resistance. In a broad sense, rolling resistance can be defined as the sum of several components: Wheel bearing torque losses can be measured as a rolling resistance at the wheel rim, Crr. Railroads normally use roller bearings, which are either cylindrical (Russia) or tapered (United States). The specific rolling resistance in bearings varies with both wheel loading and speed. Wheel bearing rolling resistance is lowest with high axle loads and intermediate speeds of 60–80 km/h with a Crr of 0.00013 (axle load of 21 tonnes). 
For empty freight cars with axle loads of 5.5 tonnes, Crr goes up to 0.00020 at 60 km/h, but at a low speed of 20 km/h it increases to 0.00024, and at a high speed (for freight trains) of 120 km/h it is 0.00028. The Crr obtained above is added to the Crr of the other components to obtain the total Crr for the wheels. Comparing rolling resistance of highway vehicles and trains. The rolling resistance of steel wheels on steel rail of a train is far less than that of the rubber-tired wheels of an automobile or truck. The weight of trains varies greatly; in some cases they may be much heavier per passenger or per net ton of freight than an automobile or truck, but in other cases they may be much lighter. As an example of a very heavy passenger train, in 1975, Amtrak passenger trains weighed a little over 7 tonnes per passenger, which is much heavier than an average of a little over one ton per passenger for an automobile. This means that for an Amtrak passenger train in 1975, much of the energy savings of the lower rolling resistance was lost to its greater weight. An example of a very light high-speed passenger train is the N700 Series Shinkansen, which weighs 715 tonnes and carries 1323 passengers, resulting in a per-passenger weight of about half a tonne. This lighter weight per passenger, combined with the lower rolling resistance of steel wheels on steel rail, means that an N700 Shinkansen is much more energy efficient than a typical automobile. In the case of freight, CSX ran an advertisement campaign in 2013 claiming that their freight trains move "a ton of freight 436 miles on a gallon of fuel", whereas some sources claim trucks move a ton of freight about 130 miles per gallon of fuel, indicating trains are more efficient overall. References. 
[ { "math_id": 0, "text": "\\ F = C_{rr} N " }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "C_{rr}" }, { "math_id": 4, "text": "N" }, { "math_id": 5, "text": "\\ C_{rr} = 0.01 " }, { "math_id": 6, "text": "2000 C_{rr}" }, { "math_id": 7, "text": "1000 g C_{rr}" }, { "math_id": 8, "text": " C_{rr}" }, { "math_id": 9, "text": "\\cos(\\theta) = 1" }, { "math_id": 10, "text": " C_{rr} = \\sqrt {z/d} " }, { "math_id": 11, "text": "z" }, { "math_id": 12, "text": "d" }, { "math_id": 13, "text": " C_{rr} " }, { "math_id": 14, "text": " C_{rr} = 0.0048 (18/D)^{\\frac{1}{2}}(100/W)^{\\frac{1}{4}} = \\frac{0.0643988}{\\sqrt[4]{WD^{2}}}" }, { "math_id": 15, "text": "D" }, { "math_id": 16, "text": "W" }, { "math_id": 17, "text": " b" }, { "math_id": 18, "text": " F = \\frac{N b}{r} " }, { "math_id": 19, "text": "r" }, { "math_id": 20, "text": "b" }, { "math_id": 21, "text": "C_{rr}r" }, { "math_id": 22, "text": "T" }, { "math_id": 23, "text": "R_{r}" }, { "math_id": 24, "text": " T = \\frac{V_{s}}{\\Omega} R_r " }, { "math_id": 25, "text": "V_s" }, { "math_id": 26, "text": "\\Omega" }, { "math_id": 27, "text": "V_{s} / \\Omega" } ]
https://en.wikipedia.org/wiki?curid=1503750
1503963
Chebyshev distance
Mathematical metric In mathematics, Chebyshev distance (or Tchebychev distance), maximum metric, or L∞ metric is a metric defined on a real coordinate space where the distance between two points is the greatest of their differences along any coordinate dimension. It is named after Pafnuty Chebyshev. It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board. For example, the Chebyshev distance between f6 and e2 equals 4. Definition. The Chebyshev distance between two vectors or points "x" and "y", with standard coordinates formula_0 and formula_1, respectively, is formula_2 This equals the limit of the L"p" metrics: formula_3 hence it is also known as the L∞ metric. Mathematically, the Chebyshev distance is a metric induced by the supremum norm or uniform norm. It is an example of an injective metric. In two dimensions, i.e. plane geometry, if the points "p" and "q" have Cartesian coordinates formula_4 and formula_5, their Chebyshev distance is formula_6 Under this metric, a circle of radius "r", which is the set of points with Chebyshev distance "r" from a center point, is a square whose sides have the length 2"r" and are parallel to the coordinate axes. On a chessboard, where one is using a "discrete" Chebyshev distance, rather than a continuous one, the circle of radius "r" is a square of side lengths 2"r," measuring from the centers of squares, and thus each side contains 2"r"+1 squares; for example, the circle of radius 1 on a chess board is a 3×3 square. Properties. In one dimension, all L"p" metrics are equal – they are just the absolute value of the difference. The two dimensional Manhattan distance has "circles" i.e. 
level sets in the form of squares, with sides of length √2·"r", oriented at an angle of π/4 (45°) to the coordinate axes, so the planar Chebyshev distance can be viewed as equivalent by rotation and scaling to (i.e. a linear transformation of) the planar Manhattan distance. However, this geometric equivalence between L1 and L∞ metrics does not generalize to higher dimensions. A sphere formed using the Chebyshev distance as a metric is a cube with each face perpendicular to one of the coordinate axes, but a sphere formed using Manhattan distance is an octahedron: these are dual polyhedra, but among cubes, only the square (and 1-dimensional line segment) are self-dual polytopes. Nevertheless, it is true that in all finite-dimensional spaces the L1 and L∞ metrics are mathematically dual to each other. On a grid (such as a chessboard), the points at a Chebyshev distance of 1 from a point form the Moore neighborhood of that point. The Chebyshev distance is the limiting case of the order-formula_7 Minkowski distance, when formula_7 reaches infinity. Applications. The Chebyshev distance is sometimes used in warehouse logistics, as it effectively measures the time an overhead crane takes to move an object (as the crane can move on the x and y axes at the same time but at the same speed along each axis). It is also widely used in electronic Computer-Aided Manufacturing (CAM) applications, in particular in optimization algorithms for these. Many tools, such as plotting or drilling machines and photoplotters, operating in the plane are usually controlled by two motors in the x and y directions, similar to the overhead cranes. Generalizations. For the sequence space of infinite-length sequences of real or complex numbers, the Chebyshev distance generalizes to the formula_8-norm; this norm is sometimes called the Chebyshev norm. For the space of (real or complex-valued) functions, the Chebyshev distance generalizes to the uniform norm. References.
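The definitions above are straightforward to express in code. The sketch below (plain Python, with illustrative function names of our choosing) implements the Chebyshev distance, checks the chessboard example, approaches the L^p limit numerically, and counts the Moore neighborhood.

```python
def chebyshev(x, y):
    """Greatest difference along any coordinate dimension."""
    return max(abs(a - b) for a, b in zip(x, y))

def lp_distance(x, y, p):
    """Ordinary L^p distance, which tends to chebyshev() as p grows."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

# The article's chessboard example: f6 and e2 as (file, rank) pairs.
f6, e2 = (6, 6), (5, 2)
print(chebyshev(f6, e2))                # 4, the number of king moves

# For large p, the L^p metric approaches the Chebyshev distance.
print(round(lp_distance(f6, e2, 200), 3))   # 4.0

# The discrete "circle" of radius 1 around a point is its Moore neighborhood.
neighbors = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if chebyshev((dx, dy), (0, 0)) == 1]
print(len(neighbors))                   # 8
```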
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x_i" }, { "math_id": 1, "text": "y_i" }, { "math_id": 2, "text": "D_{\\rm Chebyshev}(x,y) := \\max_i(|x_i -y_i|).\\ " }, { "math_id": 3, "text": "\\lim_{p \\to \\infty} \\bigg( \\sum_{i=1}^n \\left| x_i - y_i \\right|^p \\bigg)^{1/p}," }, { "math_id": 4, "text": "(x_1,y_1)" }, { "math_id": 5, "text": "(x_2,y_2)" }, { "math_id": 6, "text": "D_{\\rm Chebyshev} = \\max \\left ( \\left | x_2 - x_1 \\right | , \\left | y_2 - y_1 \\right | \\right ) ." }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "\\ell^\\infty" } ]
https://en.wikipedia.org/wiki?curid=1503963
15040455
Inverse curve
Curve created by a geometric operation In inversive geometry, an inverse curve of a given curve C is the result of applying an inverse operation to C. Specifically, with respect to a fixed circle with center O and radius k, the inverse of a point Q is the point P for which P lies on the ray OQ and "OP"·"OQ" = "k"2. The inverse of the curve C is then the locus of P as Q runs over C. The point O in this construction is called the center of inversion, the circle the circle of inversion, and k the radius of inversion. An inversion applied twice is the identity transformation, so the inverse of an inverse curve with respect to the same circle is the original curve. Points on the circle of inversion are fixed by the inversion, so its inverse is itself. Equations. The inverse of the point ("x", "y") with respect to the unit circle is ("X", "Y") where formula_0 or equivalently formula_1 So the inverse of the curve determined by "f"("x", "y") = 0 with respect to the unit circle is formula_2 It is clear from this that inverting an algebraic curve of degree n with respect to a circle produces an algebraic curve of degree at most 2"n". Similarly, the inverse of the curve defined parametrically by the equations formula_3 with respect to the unit circle is given parametrically as formula_4 This implies that the circular inverse of a rational curve is also rational. More generally, the inverse of the curve determined by "f"("x", "y") = 0 with respect to the circle with center ("a", "b") and radius k is formula_5 The inverse of the curve defined parametrically by formula_3 with respect to the same circle is given parametrically as formula_6 In polar coordinates, the equations are simpler when the circle of inversion is the unit circle. The inverse of the point ("r", "θ") with respect to the unit circle is ("R", "Θ") where formula_7 So the inverse of the curve "f"("r", "θ") = 0 is determined by "f"(1/"R", "Θ") = 0 and the inverse of the curve "r" = "g"("θ") is "r" = 1/"g"("θ"). Degrees.
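The inversion formulas above translate directly into code. The following sketch (illustrative Python; the function name is ours) inverts a point with respect to the circle with center (a, b) and radius k, and checks two properties stated above: inversion applied twice is the identity, and OP·OQ = k².

```python
import math

def invert(x, y, a=0.0, b=0.0, k=1.0):
    """Inverse of (x, y) with respect to the circle of center (a, b), radius k."""
    dx, dy = x - a, y - b
    d2 = dx * dx + dy * dy          # squared distance to the center of inversion
    return a + k * k * dx / d2, b + k * k * dy / d2

center, k = (1.0, -2.0), 2.0
p = (3.0, 4.0)
q = invert(*p, *center, k)
r = invert(*q, *center, k)          # inversion applied twice is the identity
print(r)                            # (3.0, 4.0)

# OP * OQ = k^2, with O the center of inversion:
op = math.dist(p, center)
oq = math.dist(q, center)
print(round(op * oq, 9))            # 4.0
```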
As noted above, the inverse with respect to a circle of a curve of degree n has degree at most 2"n". The degree is exactly 2"n" unless the original curve passes through the point of inversion or it is circular, meaning that it contains the circular points, (1, ±"i", 0), when considered as a curve in the complex projective plane. In general, inversion with respect to an arbitrary curve may produce an algebraic curve with proportionally larger degree. Specifically, if C is p-circular of degree n, and if the center of inversion is a singularity of order q on C, then the inverse curve will be an ("n" − "p" − "q")-circular curve of degree 2"n" − 2"p" − "q" and the center of inversion is a singularity of order "n" − 2"p" on the inverse curve. Here "q" = 0 if the curve does not contain the center of inversion and "q" = 1 if the center of inversion is a nonsingular point on it; similarly the circular points, (1, ±"i", 0), are singularities of order p on C. The value k can be eliminated from these relations to show that the set of p-circular curves of degree "p" + "k", where p may vary but k is a fixed positive integer, is invariant under inversion. Examples. Applying the above transformation to the lemniscate of Bernoulli formula_8 gives formula_9, the equation of a hyperbola; since inversion is a birational transformation and the hyperbola is a rational curve, this shows the lemniscate is also a rational curve, which is to say a curve of genus zero. If we apply the transformation to the Fermat curve "x"n + "y"n = 1, where n is odd, we obtain formula_10 Any rational point on the Fermat curve has a corresponding rational point on this curve, giving an equivalent formulation of Fermat's Last Theorem. As an example involving transcendental curves, the Archimedean spiral and hyperbolic spiral are inverse curves. Similarly, the Fermat spiral and the lituus are inverse curves. The logarithmic spiral is its own inverse. Particular cases.
For simplicity, the circle of inversion in the following cases will be the unit circle. Results for other circles of inversion can be found by translation and magnification of the original curve. Lines. For a line passing through the origin, the polar equation is "θ" = "θ"0 where "θ"0 is fixed. This remains unchanged under the inversion. The polar equation for a line not passing through the origin is formula_11 and the equation of the inverse curve is formula_12 which defines a circle passing through the origin. Applying the inversion again shows that the inverse of a circle passing through the origin is a line. Circles. In polar coordinates, the general equation for a circle that does not pass through the origin (the other cases having been covered) is formula_13 where a is the radius and ("r"0, "θ"0) are the polar coordinates of the center. The equation of the inverse curve is then formula_14 or formula_15 This is the equation of a circle with radius formula_16 and center whose polar coordinates are formula_17 Note that "R"0 may be negative. If the original circle intersects with the unit circle, then the centers of the two circles and a point of intersection form a triangle with sides 1, "a", and "r"0; this is a right triangle, i.e. the radii are at right angles, exactly when formula_18 But from the equations above, the original circle is the same as the inverse circle exactly when formula_19 So the inverse of a circle is the same circle if and only if it intersects the unit circle at right angles. To summarize and generalize this and the previous section: the inverse of a line or a circle is a line or a circle, and the image is a line precisely when the original curve passes through the center of inversion. Parabolas with center of inversion at the vertex. The equation of a parabola is, up to similarity, translating so that the vertex is at the origin and rotating so that the axis is horizontal, "x" = "y"2. In polar coordinates this becomes formula_20 The inverse curve then has equation formula_21 which is the cissoid of Diocles. Conic sections with center of inversion at a focus.
The polar equation of a conic section with one focus at the origin is, up to similarity, formula_22 where e is the eccentricity. The inverse of this curve will then be formula_23 which is the equation of a limaçon of Pascal. When "e" = 0 this is the circle of inversion. When 0 &lt; "e" &lt; 1 the original curve is an ellipse and the inverse is a simple closed curve with an acnode at the origin. When "e" = 1 the original curve is a parabola and the inverse is the cardioid, which has a cusp at the origin. When "e" &gt; 1 the original curve is a hyperbola and the inverse forms two loops with a crunode at the origin. Ellipses and hyperbolas with center of inversion at a vertex. The general equation of an ellipse or hyperbola is formula_24 Translating this so that the origin is one of the vertices gives formula_25 and rearranging gives formula_26 or, changing constants, formula_27 Note that the parabola above now fits into this scheme by putting "c" = 0 and "d" = 1. The equation of the inverse is formula_28 or formula_29 This equation describes a family of curves called the conchoids of de Sluze. This family includes, in addition to the cissoid of Diocles listed above, the trisectrix of Maclaurin ("d" = −"c"/3) and the right strophoid ("d" = −"c"). Ellipses and hyperbolas with center of inversion at the center. Inverting the equation of an ellipse or hyperbola formula_30 gives formula_31 which is the hippopede. When "d" = −"c" this is the lemniscate of Bernoulli. Conics with arbitrary center of inversion. Applying the degree formula above, the inverse of a conic (other than a circle) is a circular cubic if the center of inversion is on the curve, and a bicircular quartic otherwise. Conics are rational so the inverse curves are rational as well. Conversely, any rational circular cubic or rational bicircular quartic is the inverse of a conic.
In fact, any such curve must have a real singularity and taking this point as a center of inversion, the inverse curve will be a conic by the degree formula. Anallagmatic curves. An anallagmatic curve is one which inverts into itself. Examples include the circle, cardioid, oval of Cassini, strophoid, and trisectrix of Maclaurin. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
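The lemniscate example from the Examples section can be verified numerically: points on the lemniscate of Bernoulli (using its polar form r² = a² cos 2θ, assumed here), inverted in the unit circle, satisfy the hyperbola equation a²(u² − v²) = 1. A small Python check:

```python
import math

def invert_unit(x, y):
    """Inversion of (x, y) in the unit circle."""
    d2 = x * x + y * y
    return x / d2, y / d2

a = 1.0
vals = []
for theta in (0.1, 0.4, 0.7):                # angles where cos(2θ) > 0
    r = a * math.sqrt(math.cos(2 * theta))   # lemniscate: r^2 = a^2 cos(2θ)
    u, v = invert_unit(r * math.cos(theta), r * math.sin(theta))
    vals.append(a * a * (u * u - v * v))     # should equal 1 on the hyperbola
print([round(w, 9) for w in vals])
```

Each value comes out as 1 to floating-point accuracy, in agreement with the inverse-curve equation a²(u² − v²) = 1.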
[ { "math_id": 0, "text": "X = \\frac{x}{x^2+y^2},\\qquad Y=\\frac{y}{x^2+y^2}," }, { "math_id": 1, "text": "x = \\frac{X}{X^2+Y^2},\\qquad y=\\frac{Y}{X^2+Y^2}." }, { "math_id": 2, "text": "f\\left(\\frac{X}{X^2+Y^2}, \\frac{Y}{X^2+Y^2}\\right)=0." }, { "math_id": 3, "text": "x = x(t),\\qquad y = y(t)" }, { "math_id": 4, "text": "\\begin{align}\nX=X(t)&=\\frac{x(t)}{x(t)^2 + y(t)^2}, \\\\\nY=Y(t)&=\\frac{y(t)}{x(t)^2 + y(t)^2}.\n\\end{align}" }, { "math_id": 5, "text": "f\\left(a+\\frac{k^2(X-a)}{(X-a)^2+(Y-b)^2}, b+\\frac{k^2(Y-b)}{(X-a)^2+(Y-b)^2}\\right)=0." }, { "math_id": 6, "text": "\\begin{align}\nX=X(t)&=a+\\frac{k^2\\bigl(x(t)-a\\bigr)}{\\bigl(x(t)-a\\bigr)^2 + \\bigl(y(t)-b\\bigr)^2}, \\\\\nY=Y(t)&=b+\\frac{k^2\\bigl(y(t)-b\\bigr)}{\\bigl(x(t)-a\\bigr)^2 + \\bigl(y(t)-b\\bigr)^2}.\n\\end{align}" }, { "math_id": 7, "text": "R = \\frac{1}{r},\\qquad \\Theta=\\theta." }, { "math_id": 8, "text": "\\left(x^2 + y^2\\right)^2 = a^2 \\left(x^2 - y^2\\right)" }, { "math_id": 9, "text": "a^2\\left(u^2-v^2\\right) = 1," }, { "math_id": 10, "text": "\\left(u^2+v^2\\right)^n = u^n+v^n." }, { "math_id": 11, "text": "r\\cos\\left(\\theta-\\theta_0\\right) = a" }, { "math_id": 12, "text": "r = a\\cos\\left(\\theta-\\theta_0\\right)" }, { "math_id": 13, "text": "r^2 - 2r_0 r\\cos\\left(\\theta-\\theta_0\\right) + r_0^2 - a^2 = 0,\\qquad(a>0,\\ r>0,\\ a \\ne r_0)" }, { "math_id": 14, "text": "1 - 2r_0 r\\cos\\left(\\theta-\\theta_0\\right) + \\left(r_0^2 - a^2\\right)r^2 = 0," }, { "math_id": 15, "text": "r^2 - \\frac{2r_0}{r_0^2 - a^2} r\\cos\\left(\\theta-\\theta_0\\right) + \\frac{1}{r_0^2 - a^2} = 0." }, { "math_id": 16, "text": "A = \\frac{a}{\\left|r_0^2 - a^2\\right|}" }, { "math_id": 17, "text": "\\left(R_0, \\Theta_0\\right) = \\left(\\frac{r_0}{r_0^2 - a^2}, \\theta_0\\right)." }, { "math_id": 18, "text": "r_0^2 = a^2 + 1." }, { "math_id": 19, "text": "r_0^2 - a^2 = 1. " }, { "math_id": 20, "text": "r=\\frac{\\cos\\theta}{\\sin^2\\theta}." 
}, { "math_id": 21, "text": "r=\\frac{\\sin^2\\theta}{\\cos\\theta} = \\sin\\theta \\tan\\theta" }, { "math_id": 22, "text": "r = \\frac{1}{1 + e \\cos \\theta}," }, { "math_id": 23, "text": "r = 1 + e \\cos \\theta," }, { "math_id": 24, "text": "\\frac{x^2}{a^2}\\pm\\frac{y^2}{b^2}=1." }, { "math_id": 25, "text": "\\frac{(x-a)^2}{a^2}\\pm\\frac{y^2}{b^2}=1" }, { "math_id": 26, "text": "\\frac{x^2}{2a}\\pm\\frac{ay^2}{2b^2}=x" }, { "math_id": 27, "text": "cx^2+dy^2=x. " }, { "math_id": 28, "text": "\\frac{cx^2}{\\left(x^2+y^2\\right)^2}+\\frac{dy^2}{\\left(x^2+y^2\\right)^2}=\\frac{x}{x^2+y^2}" }, { "math_id": 29, "text": "x\\left(x^2+y^2\\right) = cx^2+dy^2. " }, { "math_id": 30, "text": "cx^2+dy^2=1 " }, { "math_id": 31, "text": "\\left(x^2+y^2\\right)^2=cx^2+dy^2 " } ]
https://en.wikipedia.org/wiki?curid=15040455
15043853
Yurii Shirokov
Yurii Shirokov (Широков, Юрий Михайлович; 21 June 1921 – 5 July 1980) was a physicist, writer, and professor. He graduated from Moscow State University (Moscow) in 1948. He worked at the same university and later at the Steklov Mathematical Institute (Moscow). He wrote more than 100 scientific papers and several monographs, among which the textbook "Nuclear Physics" is particularly notable. Algebra of generalized functions. Shirokov constructed an algebra of generalized functions, which was then applied to various systems of classical and quantum mechanics. Shirokov was, perhaps, not the first to observe that quantum mechanics has many classical limits. The Planck constant formula_0 appears in many relations, and there are many options to keep some of the parameters fixed (or even to vary them all) as formula_1. The best-known limiting cases of quantum mechanics are classical waves and Newtonian mechanics. Shirokov systematised the construction of classical limits of quantum mechanics. The most important ideas about quantum mechanics and theoretical physics in general formulated by Shirokov have not yet been developed. In particular, a field theory in terms of wave packets (i.e., without divergent terms and without divergent series) has not yet been constructed.
[ { "math_id": 0, "text": "~\\hbar~" }, { "math_id": 1, "text": "~\\hbar \\rightarrow 0~" } ]
https://en.wikipedia.org/wiki?curid=15043853
1504425
Possibility theory
Mathematical theory for handling uncertainty Possibility theory is a mathematical theory for dealing with certain types of uncertainty and is an alternative to probability theory. It uses measures of possibility and necessity between 0 and 1, ranging from impossible to possible and unnecessary to necessary, respectively. Professor Lotfi Zadeh first introduced possibility theory in 1978 as an extension of his theory of fuzzy sets and fuzzy logic. Didier Dubois and Henri Prade further contributed to its development. Earlier, in the 1950s, economist G. L. S. Shackle proposed the min/max algebra to describe degrees of potential surprise. Formalization of possibility. For simplicity, assume that the universe of discourse Ω is a finite set. A possibility measure is a function formula_0 from formula_1 to [0, 1] such that: Axiom 1: formula_2 Axiom 2: formula_3 Axiom 3: formula_4 for any disjoint subsets formula_5 and formula_6. It follows that, like probability on finite probability spaces, the possibility measure is determined by its behavior on singletons: formula_7 Axiom 1 can be interpreted as the assumption that Ω is an exhaustive description of future states of the world, because it means that no belief weight is given to elements outside Ω. Axiom 2 could be interpreted as the assumption that the evidence from which formula_0 was constructed is free of any contradiction. Technically, it implies that there is at least one element in Ω with possibility 1. Axiom 3 corresponds to the additivity axiom in probabilities. However, there is an important practical difference. Possibility theory is computationally more convenient because Axioms 1–3 imply that: formula_4 for "any" subsets formula_5 and formula_6. Because one can know the possibility of the union from the possibility of each component, it can be said that possibility is "compositional" with respect to the union operator. Note however that it is not compositional with respect to the intersection operator. 
Generally: formula_8 When Ω is not finite, Axiom 3 can be replaced by: For all index sets formula_9, if the subsets formula_10 are pairwise disjoint, formula_11 Necessity. Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, possibility theory uses two concepts, the "possibility" and the "necessity" of the event. For any set formula_5, the necessity measure is defined by formula_12. In the above formula, formula_13 denotes the complement of formula_5, that is the elements of formula_14 that do not belong to formula_5. It is straightforward to show that: formula_15 for any formula_5 and that: formula_16. Note that, contrary to probability theory, possibility is not self-dual. That is, for any event formula_5, we only have the inequality: formula_17 However, the following duality rule holds: For any event formula_5, either formula_18 or formula_19 Accordingly, beliefs about an event can be represented by a number and a bit. Interpretation. There are four cases that can be interpreted as follows: formula_20 means that formula_5 is necessary. formula_5 is certainly true. It implies that formula_18. formula_21 means that formula_5 is impossible. formula_5 is certainly false. It implies that formula_19. formula_18 means that formula_5 is possible. I would not be surprised at all if formula_5 occurs. It leaves formula_22 unconstrained. formula_19 means that formula_5 is unnecessary. I would not be surprised at all if formula_5 does not occur. It leaves formula_23 unconstrained. The intersection of the last two cases is formula_19 and formula_18 meaning that I believe nothing at all about formula_5. Because it allows for indeterminacy like this, possibility theory relates to the graduation of a many-valued logic, such as intuitionistic logic, rather than the classical two-valued logic. Note that unlike possibility, fuzzy logic is compositional with respect to both the union and the intersection operator.
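On a finite universe, the axioms above pin down a possibility measure from its values on singletons, with the necessity measure as its dual. A minimal Python sketch (illustrative names) checking max-decomposability over unions, N(U) ≤ Π(U), and the duality rule:

```python
def make_measures(poss):
    """Possibility and necessity measures built from singleton possibilities."""
    assert max(poss.values()) == 1.0          # Axiom 2 (normalization)
    omega = set(poss)
    def Pi(U):                                # possibility: max over ω in U
        return max((poss[w] for w in U), default=0.0)   # Axiom 1: Π({}) = 0
    def N(U):                                 # necessity: 1 − Π(complement of U)
        return 1.0 - Pi(omega - set(U))
    return Pi, N

Pi, N = make_measures({"a": 1.0, "b": 0.7, "c": 0.2})
U, V = {"a", "b"}, {"c"}
print(Pi(U | V), max(Pi(U), Pi(V)))   # possibility is max-decomposable on unions
print(N(U), Pi(U))                    # necessity never exceeds possibility
print(Pi(V), N(V))                    # duality: Π(V) = 1 or N(V) = 0
```

Here N(U) = 1 − Π({c}) = 0.8 while Π(U) = 1, and for the event V the duality rule holds with N(V) = 0.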
The relationship with fuzzy theory can be explained with the following classic example. Possibility theory as an imprecise probability theory. There is an extensive formal correspondence between probability and possibility theories, where the addition operator corresponds to the maximum operator. A possibility measure can be seen as a consonant plausibility measure in the Dempster–Shafer theory of evidence. The operators of possibility theory can be seen as a hyper-cautious version of the operators of the transferable belief model, a modern development of the theory of evidence. Possibility can be seen as an upper probability: any possibility distribution defines a unique credal set (the set of admissible probability distributions) by formula_24 This allows one to study possibility theory using the tools of imprecise probabilities. Necessity logic. We call "generalized possibility" every function satisfying Axiom 1 and Axiom 3. We call "generalized necessity" the dual of a generalized possibility. The generalized necessities are related to a very simple and interesting fuzzy logic called "necessity logic". In the deduction apparatus of necessity logic the logical axioms are the usual classical tautologies, and the only inference rule is a fuzzy rule extending the usual modus ponens. Such a rule says that if "α" and "α" → "β" are proved at degree "λ" and "μ", respectively, then we can assert "β" at degree min{"λ","μ"}. It is easy to see that the theories of such a logic are the generalized necessities and that the completely consistent theories coincide with the necessities (see for example Gerla 2001). References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Pi" }, { "math_id": 1, "text": "2^\\Omega" }, { "math_id": 2, "text": "\\Pi(\\varnothing) = 0" }, { "math_id": 3, "text": "\\Pi(\\Omega) = 1" }, { "math_id": 4, "text": "\\Pi(U \\cup V) = \\max \\left( \\Pi(U), \\Pi(V) \\right)" }, { "math_id": 5, "text": "U" }, { "math_id": 6, "text": "V" }, { "math_id": 7, "text": "\\Pi(U) = \\max_{\\omega \\in U} \\Pi (\\{\\omega\\})." }, { "math_id": 8, "text": "\\Pi(U \\cap V) \\leq \\min \\left( \\Pi(U), \\Pi(V) \\right) \\leq \\max \\left( \\Pi(U), \\Pi(V) \\right)." }, { "math_id": 9, "text": "I" }, { "math_id": 10, "text": "U_{i,\\, i \\in I}" }, { "math_id": 11, "text": "\\Pi\\left(\\bigcup_{i \\in I} U_i\\right) = \\sup_{i \\in I}\\Pi(U_i)." }, { "math_id": 12, "text": "N(U) = 1 - \\Pi(\\overline U)" }, { "math_id": 13, "text": "\\overline U" }, { "math_id": 14, "text": "\\Omega" }, { "math_id": 15, "text": "N(U) \\leq \\Pi(U)" }, { "math_id": 16, "text": "N(U \\cap V) = \\min ( N(U), N(V))" }, { "math_id": 17, "text": "\\Pi(U) + \\Pi(\\overline U) \\geq 1" }, { "math_id": 18, "text": "\\Pi(U) = 1" }, { "math_id": 19, "text": "N(U) = 0" }, { "math_id": 20, "text": "N(U) = 1" }, { "math_id": 21, "text": "\\Pi(U) = 0" }, { "math_id": 22, "text": "N(U)" }, { "math_id": 23, "text": "\\Pi(U)" }, { "math_id": 24, "text": "K = \\{\\, P \\mid \\forall S\\ P(S)\\leq \\Pi(S)\\,\\}." } ]
https://en.wikipedia.org/wiki?curid=1504425
1504593
Euler's three-body problem
Solve for a particle's motion that is acted on by the gravitational field of two other point masses In physics and astronomy, Euler's three-body problem is to solve for the motion of a particle that is acted upon by the gravitational field of two other point masses that are fixed in space. This problem is exactly solvable, and yields an approximate solution for particles moving in the gravitational fields of prolate and oblate spheroids. This problem is named after Leonhard Euler, who discussed it in memoirs published in 1760. Important extensions and analyses were contributed subsequently by Lagrange, Liouville, Laplace, Jacobi, Le Verrier, Hamilton, Poincaré, and Birkhoff, among others. Euler's problem also covers the case when the particle is acted upon by other inverse-square central forces, such as the electrostatic interaction described by Coulomb's law. The classical solutions of the Euler problem have been used to study chemical bonding, using a semiclassical approximation of the energy levels of a single electron moving in the field of two atomic nuclei, such as the diatomic ion HeH2+. This was first done by Wolfgang Pauli in his doctoral dissertation under Arnold Sommerfeld, a study of the first ion of molecular hydrogen, namely the hydrogen molecule-ion H2+. These energy levels can be calculated with reasonable accuracy using the Einstein–Brillouin–Keller method, which is also the basis of the Bohr model of atomic hydrogen. More recently, as explained further in the quantum-mechanical version, analytical solutions to the eigenvalues (energies) have been obtained: these are a "generalization" of the Lambert W function. The exact solution, in the full three-dimensional case, can be expressed in terms of Weierstrass's elliptic functions. For convenience, the problem may also be solved by numerical methods, such as Runge–Kutta integration of the equations of motion.
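As a concrete illustration of the numerical route just mentioned, the following sketch integrates the planar equations of motion with classical fourth-order Runge–Kutta for a unit-mass particle attracted by two fixed inverse-square centers at (±a, 0), and checks that the total energy stays constant to numerical accuracy. The strengths μ1, μ2, the half-separation, and the initial condition are arbitrary choices for the demonstration, not values from the article.

```python
import math

MU1, MU2, A = 1.0, 0.5, 1.0     # assumed strengths and half-separation

def accel(x, y):
    """Acceleration from the fixed inverse-square centers at (+A, 0) and (-A, 0)."""
    ax = ay = 0.0
    for mu, cx in ((MU1, A), (MU2, -A)):
        dx, dy = x - cx, y
        r3 = (dx * dx + dy * dy) ** 1.5
        ax -= mu * dx / r3
        ay -= mu * dy / r3
    return ax, ay

def energy(s):
    """Total energy of the state s = (x, y, vx, vy) for a unit-mass particle."""
    x, y, vx, vy = s
    return (0.5 * (vx * vx + vy * vy)
            - MU1 / math.hypot(x - A, y) - MU2 / math.hypot(x + A, y))

def deriv(s):
    x, y, vx, vy = s
    ax, ay = accel(x, y)
    return (vx, vy, ax, ay)

def rk4_step(s, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (0.0, 2.0, 1.0, 0.0)    # arbitrary bound initial condition
e0 = energy(state)
for _ in range(20_000):         # integrate 20 time units with h = 1e-3
    state = rk4_step(state, 1e-3)
print(abs(energy(state) - e0))  # energy drift remains tiny
```

Energy conservation is a useful sanity check here, since (as discussed below) linear and angular momentum are not conserved in this problem.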
The total energy of the moving particle is conserved, but its linear and angular momentum are not, since the two fixed centers can apply a net force and torque. Nevertheless, the particle has a second conserved quantity that corresponds to the angular momentum or to the Laplace–Runge–Lenz vector as limiting cases. The Euler three-body problem is known by a variety of names, such as the problem of two fixed centers, the Euler–Jacobi problem, and the two-center Kepler problem. Various generalizations of Euler's problem are known; these generalizations add linear and inverse cubic forces and up to five centers of force. Special cases of these generalized problems include "Darboux's problem" and "Velde's problem". Overview and history. Euler's three-body problem is to describe the motion of a particle under the influence of two centers that attract the particle with central forces that decrease with distance as an inverse-square law, such as Newtonian gravity or Coulomb's law. Examples of Euler's problem include an electron moving in the electric field of two nuclei, such as the hydrogen molecule-ion H2+. The strength of the two inverse-square forces need not be equal; for illustration, the two nuclei may have different charges, as in the molecular ion HeH2+. In Euler's three-body problem we assume that the two centers of attraction are stationary. This is not strictly true in a case like H2+, but the protons experience much less acceleration than the electron. However, the Euler three-body problem does not apply to a planet moving in the gravitational field of two stars, because in that case at least one of the stars experiences acceleration similar to that experienced by the planet. This problem was first considered by Leonhard Euler, who showed that it had an exact solution in 1760. Joseph Louis Lagrange solved a generalized problem in which the centers exert both linear and inverse-square forces.
Carl Gustav Jacob Jacobi showed that the rotation of the particle about the axis of the two fixed centers could be separated out, reducing the general three-dimensional problem to the planar problem. In 2008, Birkhäuser published a book entitled "Integrable Systems in Celestial Mechanics". In this book an Irish mathematician, Diarmuid Ó Mathúna, gives closed-form solutions for both the planar two fixed centers problem and the three-dimensional problem. Constants of motion. The problem of two fixed centers conserves energy; in other words, the total energy formula_0 is a constant of motion. The potential energy is given by formula_1 where formula_2 represents the particle's position, and formula_3 and formula_4 are the distances between the particle and the centers of force; formula_5 and formula_6 are constants that measure the strength of the first and second forces, respectively. The total energy equals the sum of this potential energy and the particle's kinetic energy formula_7 where formula_8 and formula_9 are the particle's mass and linear momentum, respectively. The particle's linear and angular momentum are not conserved in Euler's problem, since the two centers of force act like external forces upon the particle, which may yield a net force and torque on the particle. Nevertheless, Euler's problem has a second constant of motion formula_10 where formula_11 is the separation of the two centers of force, formula_12 and formula_13 are the angles of the lines connecting the particle to the centers of force, with respect to the line connecting the centers. This second constant of motion was identified by E. T. Whittaker in his work on analytical mechanics, and generalized to formula_14 dimensions by Coulson and Joseph in 1967. In the Coulson–Joseph form, the constant of motion is written formula_15 where formula_16 denotes the momentum component along the formula_17 axis on which the attracting centers are located.
This constant of motion corresponds to the total angular momentum squared formula_18 in the limit when the two centers of force converge to a single point (formula_19), and is proportional to the Laplace–Runge–Lenz vector formula_20 in the limit when one of the centers goes to infinity (formula_21 while formula_22 remains finite). Quantum mechanical version. A special case of the quantum mechanical three-body problem is the hydrogen molecule ion, H2+. Two of the three bodies are nuclei and the third is a fast-moving electron. The two nuclei are about 1800 times heavier than the electron and thus modeled as fixed centers. It is well known that the Schrödinger wave equation is separable in prolate spheroidal coordinates and can be decoupled into two ordinary differential equations coupled by the energy eigenvalue and a separation constant. However, solutions required series expansions from basis sets. Nonetheless, through experimental mathematics, it was found that the energy eigenvalue was mathematically a "generalization" of the Lambert W function (see Lambert W function and references therein for more details). The hydrogen molecular ion in the case of clamped nuclei can be completely worked out within a computer algebra system. The fact that its solution is an implicit function is revealing in itself. One of the successes of theoretical physics is not simply that a problem is amenable to mathematical treatment, but that the algebraic equations involved can be symbolically manipulated until an analytical solution, preferably a closed-form solution, is isolated. This type of solution for a special case of the three-body problem shows what may be possible as an analytical solution for the quantum three-body and many-body problem. Generalizations. An exhaustive analysis of the soluble generalizations of Euler's three-body problem was carried out by Adam Hiltebeitel in 1911.
The simplest generalization of Euler's three-body problem is to add a third center of force midway between the original two centers, that exerts only a linear Hooke force. The next generalization is to augment the inverse-square force laws with a force that increases linearly with distance. The final set of generalizations is to add two fixed centers of force at positions that are imaginary numbers, with forces that are both linear and inverse-square laws, together with a force parallel to the axis of imaginary centers and varying as the inverse cube of the distance to that axis. The solution to the original Euler problem is an approximate solution for the motion of a particle in the gravitational field of a prolate body, i.e., a sphere that has been elongated in one direction, such as a cigar shape. The corresponding approximate solution for a particle moving in the field of an oblate spheroid (a sphere squashed in one direction) is obtained by making the positions of the two centers of force into imaginary numbers. The oblate spheroid solution is astronomically more important, since most planets, stars and galaxies are approximately oblate spheroids; prolate spheroids are very rare. The analogue of the oblate case in general relativity is a Kerr black hole. The geodesics around this object are known to be integrable, owing to the existence of a fourth constant of motion (in addition to energy, angular momentum, and the magnitude of four-momentum), known as the Carter constant. Euler's oblate three body problem and a Kerr black hole share the same mass moments, and this is most apparent if the metric for the latter is written in Kerr–Schild coordinates. The analogue of the oblate case augmented with a linear Hooke term is a Kerr–de Sitter black hole. As in Hooke's law, the cosmological constant term depends linearly on distance from the origin, and the Kerr–de Sitter spacetime also admits a Carter-type constant quadratic in the momenta. Mathematical solutions. 
Original Euler problem. In the original Euler problem, the two centers of force acting on the particle are assumed to be fixed in space; let these centers be located along the "x"-axis at ±"a". The particle is likewise assumed to be confined to a fixed plane containing the two centers of force. The potential energy of the particle in the field of these centers is given by formula_23 where the proportionality constants μ1 and μ2 may be positive or negative. The two centers of attraction can be considered as the foci of a set of ellipses. If either center were absent, the particle would move on one of these ellipses, as a solution of the Kepler problem. Therefore, according to Bonnet's theorem, the same ellipses are the solutions for the Euler problem. Introducing elliptic coordinates, formula_24 formula_25 the potential energy can be written as formula_26 and the kinetic energy as formula_27 This is a Liouville dynamical system if ξ and η are taken as φ1 and φ2, respectively; thus, the function "Y" equals formula_28 and the function "W" equals formula_29 Using the general solution for a Liouville dynamical system, one obtains formula_30 formula_31 Introducing a parameter "u" by the formula formula_32 gives the parametric solution formula_33 Since these are elliptic integrals, the coordinates ξ and η can be expressed as elliptic functions of "u". See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
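The elliptic-coordinate substitution above can be verified numerically: with x = a cosh ξ cos η and y = a sinh ξ sin η, the distances to the two centers at ±a reduce to a(cosh ξ ∓ cos η), which is what makes the potential energy take the separable form quoted. A short check with illustrative values (not from the source):

```python
import math
import random

# Check the elliptic-coordinate identities: the distances from the point
# (x, y) to the foci at (+a, 0) and (-a, 0) equal a(cosh xi - cos eta)
# and a(cosh xi + cos eta), respectively.
a = 1.3
random.seed(0)
for _ in range(100):
    xi = random.uniform(0.1, 3.0)
    eta = random.uniform(0.0, 2 * math.pi)
    x = a * math.cosh(xi) * math.cos(eta)
    y = a * math.sinh(xi) * math.sin(eta)
    r_plus = math.hypot(x - a, y)    # distance to the focus at (+a, 0)
    r_minus = math.hypot(x + a, y)   # distance to the focus at (-a, 0)
    assert math.isclose(r_plus, a * (math.cosh(xi) - math.cos(eta)),
                        rel_tol=1e-12, abs_tol=1e-12)
    assert math.isclose(r_minus, a * (math.cosh(xi) + math.cos(eta)),
                        rel_tol=1e-12, abs_tol=1e-12)
print("identities hold")
```

Since cosh ξ ≥ 1 ≥ |cos η|, both expressions are non-negative, as distances must be.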
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "\nV(\\mathbf{r}) = - \\frac{\\mu_1}{r_1} - \\frac{\\mu_2}{r_2}\n" }, { "math_id": 2, "text": "\\mathbf{r}" }, { "math_id": 3, "text": "r_1" }, { "math_id": 4, "text": "r_2" }, { "math_id": 5, "text": "\\mu_1" }, { "math_id": 6, "text": "\\mu_2" }, { "math_id": 7, "text": "\nE = \\frac{\\mathbf{p}^2}{2 m} + V(\\mathbf{r})\n" }, { "math_id": 8, "text": "m" }, { "math_id": 9, "text": "\\mathbf{p}" }, { "math_id": 10, "text": "\nC = r_{1}^{2}\\,r_{2}^{2}\\,\\frac{d\\theta_{1}}{dt} \\frac{d\\theta_{2}}{dt} + 2\\,a \\left( \\mu_{1} \\cos \\theta_{1} - \\mu_{2} \\cos \\theta_{2} \\right),\n" }, { "math_id": 11, "text": "2\\,a" }, { "math_id": 12, "text": "\\theta_1" }, { "math_id": 13, "text": "\\theta_2" }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "\nB = \\mathbf{L}^2 + a^2 p_n^2 + 2\\,a\\,x_n \\left(\\frac{\\mu_1 }{r_1} - \\frac{\\mu_2}{r_2} \\right),\n" }, { "math_id": 16, "text": "p_n" }, { "math_id": 17, "text": "x_n" }, { "math_id": 18, "text": "\\mathbf{L}^2" }, { "math_id": 19, "text": "a\\rightarrow 0" }, { "math_id": 20, "text": "\\mathbf{A}" }, { "math_id": 21, "text": "a\\rightarrow\\infty" }, { "math_id": 22, "text": "|x_n - a|" }, { "math_id": 23, "text": "\nV(x, y) = \\frac{-\\mu_1}{\\sqrt{\\left( x - a \\right)^2 + y^2}} - \\frac{\\mu_2}{\\sqrt{\\left( x + a \\right)^2 + y^2}} .\n" }, { "math_id": 24, "text": "\n\\,x = \\,a \\cosh \\xi \\cos \\eta,\n" }, { "math_id": 25, "text": "\n\\,y = \\,a \\sinh \\xi \\sin \\eta,\n" }, { "math_id": 26, "text": "\n\\begin{align}\nV(\\xi, \\eta) & = \\frac{-\\mu_{1}}{a\\left( \\cosh \\xi - \\cos \\eta \\right)} - \\frac{\\mu_{2}}{a\\left( \\cosh \\xi + \\cos \\eta \\right)} \\\\[8pt]\n& = \\frac{-\\mu_{1} \\left( \\cosh \\xi + \\cos \\eta \\right) - \\mu_{2} \\left( \\cosh \\xi - \\cos \\eta \\right)}{a\\left( \\cosh^{2} \\xi - \\cos^{2} \\eta \\right)},\n\\end{align}\n" }, { "math_id": 27, "text": "\nT = \\frac{ma^{2}}{2} \\left( \\cosh^{2} \\xi 
- \\cos^{2} \\eta \\right) \\left( \\dot{\\xi}^{2} + \\dot{\\eta}^{2} \\right).\n" }, { "math_id": 28, "text": "\n\\,Y = \\cosh^{2} \\xi - \\cos^{2} \\eta\n" }, { "math_id": 29, "text": "\nW = -\\mu_{1} \\left( \\cosh \\xi + \\cos \\eta \\right) - \\mu_{2} \\left( \\cosh \\xi - \\cos \\eta \\right).\n" }, { "math_id": 30, "text": "\n\\frac{ma^{2}}{2} \\left( \\cosh^{2} \\xi - \\cos^{2} \\eta \\right)^{2} \\dot{\\xi}^{2} = E \\cosh^{2} \\xi + \\left( \\frac{\\mu_{1} + \\mu_{2}}{a} \\right) \\cosh \\xi - \\gamma\n" }, { "math_id": 31, "text": "\n\\frac{ma^{2}}{2} \\left( \\cosh^{2} \\xi - \\cos^{2} \\eta \\right)^{2} \\dot{\\eta}^{2} = -E \\cos^{2} \\eta + \\left( \\frac{\\mu_{1} - \\mu_{2}}{a} \\right) \\cos \\eta + \\gamma\n" }, { "math_id": 32, "text": "\ndu = \\frac{d\\xi}{\\sqrt{E \\cosh^2 \\xi + \\left( \\frac{\\mu_1 + \\mu_2}{a} \\right) \\cosh \\xi - \\gamma}} = \n\\frac{d\\eta}{\\sqrt{-E \\cos^2 \\eta + \\left( \\frac{\\mu_1 - \\mu_2}{a} \\right) \\cos \\eta + \\gamma}},\n" }, { "math_id": 33, "text": "\nu = \\int \\frac{d\\xi}{\\sqrt{E \\cosh^{2} \\xi + \\left( \\frac{\\mu_{1} + \\mu_{2}}{a} \\right) \\cosh \\xi - \\gamma}} = \n\\int \\frac{d\\eta}{\\sqrt{-E \\cos^{2} \\eta + \\left( \\frac{\\mu_{1} - \\mu_{2}}{a} \\right) \\cos \\eta + \\gamma}}.\n" } ]
https://en.wikipedia.org/wiki?curid=1504593
1504893
Balance wheel
Time measuring device &lt;templatestyles src="Stack/styles.css"/&gt; A balance wheel, or balance, is the timekeeping device used in mechanical watches and small clocks, analogous to the pendulum in a pendulum clock. It is a weighted wheel that rotates back and forth, being returned toward its center position by a spiral torsion spring, known as the balance spring or "hairspring". It is driven by the escapement, which transforms the rotating motion of the watch gear train into impulses delivered to the balance wheel. Each swing of the wheel (called a "tick" or "beat") allows the gear train to advance a set amount, moving the hands forward. The balance wheel and hairspring together form a harmonic oscillator, which due to resonance oscillates preferentially at a certain rate, its resonant frequency or "beat", and resists oscillating at other rates. The combination of the mass of the balance wheel and the elasticity of the spring keeps the time between each oscillation or "tick" very constant, accounting for its nearly universal use as the timekeeper in mechanical watches to the present. From its invention in the 14th century until tuning fork and quartz movements became available in the 1960s, virtually every portable timekeeping device used some form of balance wheel. Overview. Until the 1980s balance wheels were the timekeeping technology used in chronometers, bank vault time locks, time fuzes for munitions, alarm clocks, kitchen timers and stopwatches, but quartz technology has taken over these applications, and the main remaining use is in quality mechanical watches. Modern (2007) watch balance wheels are usually made of Glucydur, a low thermal expansion alloy of beryllium, copper and iron, with springs of a low thermal coefficient of elasticity alloy such as Nivarox. The two alloys are matched so their residual temperature responses cancel out, resulting in even lower temperature error.
The wheels are smooth, to reduce air friction, and the pivots are supported on precision jewel bearings. Older balance wheels used weight screws around the rim to adjust the poise (balance), but modern wheels are computer-poised at the factory, using a laser to burn a precise pit in the rim to make them balanced. Balance wheels rotate about &lt;templatestyles src="Fraction/styles.css" /&gt;1+1⁄2 turns with each swing, that is, about 270° to each side of their center equilibrium position. The rate of the balance wheel is adjusted with the regulator, a lever with a narrow slit on the end through which the balance spring passes. This holds the part of the spring behind the slit stationary. Moving the lever slides the slit up and down the balance spring, changing its effective length, and thus the resonant vibration rate of the balance. Since the regulator interferes with the spring's action, chronometers and some precision watches have "free sprung" balances with no regulator, such as the Gyromax. Their rate is adjusted by weight screws on the balance rim. A balance's vibration rate is traditionally measured in beats (ticks) per hour, or BPH, although beats per second and Hz are also used. The length of a beat is one swing of the balance wheel, between reversals of direction, so there are two beats in a complete cycle. Balances in precision watches are designed with faster beats, because they are less affected by motions of the wrist. Alarm clocks and kitchen timers often have a rate of 4 beats per second (14,400 BPH). Watches made prior to the 1970s usually had a rate of 5 beats per second (18,000 BPH). Current watches have rates of 6 (21,600 BPH), 8 (28,800 BPH) and a few have 10 beats per second (36,000 BPH). Audemars Piguet currently produces a watch with a very high balance vibration rate of 12 beats/s (43,200 BPH). 
During World War II, Elgin produced a very precise stopwatch for US Air Force bomber crews that ran at 40 beats per second (144,000 BPH), earning it the nickname 'Jitterbug'. The precision of the best balance wheel watches on the wrist is around a few seconds per day. The most accurate balance wheel timepieces made were marine chronometers, which were used on ships for celestial navigation, as a precise time source to determine longitude. By World War II they had achieved accuracies of 0.1 second per day. Period of oscillation. A balance wheel's period of oscillation "T" in seconds, the time required for one complete cycle (two beats), is determined by the wheel's moment of inertia "I" in kilogram-meter² and the stiffness (spring constant) of its balance spring "κ" in newton-meters per radian: formula_0 History. The balance wheel appeared with the first mechanical clocks, in 14th century Europe, but it is not known exactly when or where it was first used. It is an improved version of the foliot, an early inertial timekeeper consisting of a straight bar pivoted in the center with weights on the ends, which oscillates back and forth. The foliot weights could be slid in or out on the bar, to adjust the rate of the clock. The first clocks in northern Europe used foliots, while those in southern Europe used balance wheels. As clocks were made smaller, first as bracket clocks and lantern clocks and then as the first large watches after 1500, balance wheels began to be used in place of foliots. Since more of its weight is located on the rim away from the axis, a balance wheel could have a larger moment of inertia than a foliot of the same size, and keep better time. The wheel shape also had less air resistance, and its geometry partly compensated for thermal expansion error due to temperature changes. Addition of balance spring. These early balance wheels were crude timekeepers because they lacked the other essential element: the balance spring.
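Returning to the period relation given above, T = 2π√(I/κ), a short numerical sketch. The moment of inertia and spring stiffness below are illustrative values chosen to land near a modern 28,800 BPH rate, not figures from any manufacturer:

```python
import math

# Illustrative (not manufacturer) values for a wristwatch balance.
I = 1.2e-8      # moment of inertia, kg*m^2
kappa = 7.6e-6  # balance-spring stiffness, N*m per radian

T = 2 * math.pi * math.sqrt(I / kappa)   # seconds per full cycle (two beats)
beats_per_second = 2 / T                 # each cycle contains two beats
bph = 3600 * beats_per_second            # beats per hour

print(round(T, 4), round(bph))
```

With these values the cycle takes about a quarter of a second, i.e. roughly 8 beats per second, close to the 28,800 BPH rate common in current movements.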
Early balance wheels were pushed in one direction by the escapement until the verge flag that was in contact with a tooth on the escape wheel slipped past the tip of the tooth ("escaped") and the action of the escapement reversed, pushing the wheel back the other way. In such an "inertial" wheel, the acceleration is proportional to the drive force. In a clock or watch without balance spring, the drive force provides both the force that accelerates the wheel and also the force that slows it down and reverses it. If the drive force is increased, both acceleration and deceleration are increased; this results in the wheel being pushed back and forth faster. This made the timekeeping strongly dependent on the force applied by the escapement. In a watch the drive force provided by the mainspring, applied to the escapement through the timepiece's gear train, declined during the watch's running period as the mainspring unwound. Without some means of equalizing the drive force, the watch slowed down during the running period between windings as the spring lost force, causing it to lose time. This is why all pre-balance spring watches required fusees (or in a few cases stackfreeds) to equalize the force from the mainspring reaching the escapement, to achieve even minimal accuracy. Even with these devices, watches prior to the balance spring were very inaccurate. The idea of the balance spring was inspired by observations that springy hog bristle curbs, added to limit the rotation of the wheel, increased its accuracy. Robert Hooke first applied a metal spring to the balance in 1658 and Jean de Hautefeuille and Christiaan Huygens improved it to its present spiral form in 1674. The addition of the spring made the balance wheel a harmonic oscillator, the basis of every modern clock. This means the wheel vibrated at a natural resonant frequency or "beat" and resisted changes in its vibration rate caused by friction or changing drive force.
This crucial innovation greatly increased the accuracy of watches, from several hours per day to perhaps 10 minutes per day, changing them from expensive novelties into useful timekeepers. Temperature error. After the balance spring was added, a major remaining source of inaccuracy was the effect of temperature changes. Early watches had balance springs made of plain steel and balances of brass or steel, and the influence of temperature on these noticeably affected the rate. An increase in temperature increases the dimensions of the balance spring and the balance due to thermal expansion. The strength of a spring, the restoring force it produces in response to a deflection, is proportional to its breadth and the cube of its thickness, and inversely proportional to its length. An increase in temperature would actually make a spring stronger if it affected only its physical dimensions. However, a much larger effect in a balance spring made of plain steel is that the elasticity of the spring's metal decreases significantly as the temperature increases, the net effect being that a plain steel spring becomes weaker with increasing temperature. An increase in temperature also increases the diameter of a steel or brass balance wheel, increasing its rotational inertia (its moment of inertia), making it harder for the balance spring to accelerate. The two effects of increasing temperature on the physical dimensions of the spring and the balance, the strengthening of the balance spring and the increase in the rotational inertia of the balance, oppose each other and to an extent cancel out. The major effect of temperature which affects the rate of a watch is the weakening of the balance spring with increasing temperature. In a watch that is not compensated for the effects of temperature, the weaker spring takes longer to return the balance wheel back toward the center, so the "beat" gets slower and the watch loses time.
Ferdinand Berthoud found in 1773 that an ordinary brass balance and steel hairspring, subjected to a 60 °F (33 °C) temperature increase, loses 393 seconds (&lt;templatestyles src="Fraction/styles.css" /&gt;6+1⁄2 minutes) per day, of which 312 seconds is due to spring elasticity decrease. Temperature-compensated balance wheel. The need for an accurate clock for celestial navigation during sea voyages drove many advances in balance technology in 18th century Britain and France. Even a 1-second per day error in a marine chronometer could result in a significant error in the ship's position after a 2-month voyage. John Harrison was the first to apply temperature compensation to a balance wheel in 1753, using a bimetallic "compensation curb" on the spring, in the first successful marine chronometers, H4 and H5. These achieved an accuracy of a fraction of a second per day, but the compensation curb was not further used because of its complexity. A simpler solution was devised around 1765 by Pierre Le Roy, and improved by John Arnold and Thomas Earnshaw: the "Earnshaw" or "compensating" balance wheel. The key was to make the balance wheel change size with temperature. If the balance could be made to shrink in diameter as it got warmer, the smaller moment of inertia would compensate for the weakening of the balance spring, keeping the period of oscillation the same. To accomplish this, the outer rim of the balance was made of a "sandwich" of two metals; a layer of steel on the inside fused to a layer of brass on the outside. Strips of this bimetallic construction bend toward the steel side when they are warmed, because the thermal expansion of brass is greater than that of steel. The rim was cut open at two points next to the spokes of the wheel, so it resembled an S-shape (see figure) with two circular bimetallic "arms". These wheels are sometimes referred to as "Z-balances".
A temperature increase makes the arms bend inward toward the center of the wheel, and the shift of mass inward reduces the moment of inertia of the balance, similar to the way a spinning ice skater can reduce her moment of inertia by pulling in her arms. This reduction in the moment of inertia compensated for the reduced torque produced by the weaker balance spring. The amount of compensation is adjusted by moveable weights on the arms. Marine chronometers with this type of balance had errors of only 3–4 seconds per day over a wide temperature range. By the 1870s compensated balances began to be used in watches. Middle temperature error. The standard Earnshaw compensation balance dramatically reduced error due to temperature variations, but it didn't eliminate it. As first described by J. G. Ulrich, a compensated balance adjusted to keep correct time at a given low and high temperature will be a few seconds per day fast at intermediate temperatures. The reason is that the moment of inertia of the balance varies as the square of the radius of the compensation arms, and thus of the temperature. But the elasticity of the spring varies linearly with temperature. To mitigate this problem, chronometer makers adopted various 'auxiliary compensation' schemes, which reduced error below 1 second per day. Such schemes consisted for example of small bimetallic arms attached to the inside of the balance wheel. Such compensators could only bend in one direction toward the center of the balance wheel, but bending outward would be blocked by the wheel itself. The blocked movement causes a non-linear temperature response that could slightly better compensate the elasticity changes in the spring. Most of the chronometers that came in first in the annual Greenwich Observatory trials between 1850 and 1914 were auxiliary compensation designs. Auxiliary compensation was never used in watches because of its complexity. Better materials. 
The bimetallic compensated balance wheel was made obsolete in the early 20th century by advances in metallurgy. Charles Édouard Guillaume won a Nobel prize for the 1896 invention of Invar, a nickel steel alloy with very low thermal expansion, and Elinvar (from French "élasticité invariable", 'invariable elasticity'), an alloy whose elasticity is unchanged over a wide temperature range, for balance springs. A solid Invar balance with a spring of Elinvar was largely unaffected by temperature, so it replaced the difficult-to-adjust bimetallic balance. This led to a series of improved low temperature coefficient alloys for balances and springs. Before developing Elinvar, Guillaume also invented an alloy to compensate for middle temperature error in bimetallic balances by endowing it with a negative quadratic temperature coefficient. This alloy, named anibal, is a slight variation of invar. It almost completely negated the temperature effect of the steel hairspring, but still required a bimetal compensated balance wheel, known as a Guillaume balance wheel. This design was mostly fitted in high precision chronometers destined for competition in observatories. The quadratic coefficient is defined by its place in the equation of expansion of a material; formula_1 where: formula_2 is the length of the sample at some reference temperature; formula_3 is the temperature above the reference; formula_4 is the length of the sample at temperature formula_3; formula_5 is the linear coefficient of expansion; and formula_6 is the quadratic coefficient of expansion. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
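The expansion law just given can be evaluated directly. The coefficients below are illustrative placeholders (not Guillaume's published values); the point is only how a negative quadratic coefficient bends the response:

```python
# Evaluate l_theta = l_0 * (1 + alpha*theta + beta*theta^2), the expansion
# law quoted above, with illustrative coefficients. A negative beta makes
# the expansion fall off at higher temperatures, the behavior Guillaume
# exploited to cancel middle temperature error.
def length(l0, alpha, beta, theta):
    """Length at theta degrees above the reference temperature."""
    return l0 * (1 + alpha * theta + beta * theta**2)

l0 = 10.0e-3                    # 10 mm sample at the reference temperature
alpha, beta = 1.2e-6, -2.0e-9   # illustrative linear and quadratic coefficients

for theta in (0.0, 15.0, 30.0):
    print(theta, length(l0, alpha, beta, theta))
```

At the reference temperature (theta = 0) the expression reduces to l_0, as expected.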
[ { "math_id": 0, "text": "T = 2 \\pi \\sqrt{ \\frac {I}{\\kappa} } \\," }, { "math_id": 1, "text": " \\ell_\\theta = \\ell_0 \\left(1 + \\alpha \\theta + \\beta \\theta^2\\right) \\," }, { "math_id": 2, "text": "\\scriptstyle \\ell_0" }, { "math_id": 3, "text": "\\scriptstyle \\theta" }, { "math_id": 4, "text": "\\scriptstyle \\ell_\\theta" }, { "math_id": 5, "text": "\\scriptstyle \\alpha" }, { "math_id": 6, "text": "\\scriptstyle \\beta" } ]
https://en.wikipedia.org/wiki?curid=1504893
1505381
Numerical weather prediction
Weather prediction using mathematical models of the atmosphere and oceans Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs. Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires. Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions. A more fundamental problem lies in the chaotic nature of the partial differential equations that describe the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). 
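This sensitivity to initial errors can be illustrated with the Lorenz (1963) system, a three-variable toy model of atmospheric convection that is standard for demonstrating chaos. The sketch below is not an actual forecast model; step sizes and the perturbation are illustrative:

```python
# Two "forecasts" of the Lorenz-63 system starting from almost identical
# initial conditions diverge rapidly, illustrating the growth of small
# errors described above.
def lorenz_step(s, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz-63 equations."""
    def f(x, y, z):
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    x, y, z = s
    k1 = f(x, y, z)
    k2 = f(x + dt/2*k1[0], y + dt/2*k1[1], z + dt/2*k1[2])
    k3 = f(x + dt/2*k2[0], y + dt/2*k2[1], z + dt/2*k2[2])
    k4 = f(x + dt*k3[0], y + dt*k3[1], z + dt*k3[2])
    return tuple(v + dt/6*(a + 2*b + 2*c + d)
                 for v, a, b, c, d in zip((x, y, z), k1, k2, k3, k4))

dt = 0.01
state = (1.0, 1.0, 1.0)
for _ in range(500):               # spin up onto the attractor
    state = lorenz_step(state, dt)

# A tiny "analysis error" of 1e-6 in one variable, then run both forecasts.
a, b = state, (state[0] + 1e-6, state[1], state[2])
for _ in range(1500):              # integrate both for 15 time units
    a, b = lorenz_step(a, dt), lorenz_step(b, dt)

sep = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
print(sep)                         # orders of magnitude above the initial 1e-6
```

The final separation is many orders of magnitude larger than the initial perturbation, which is why ensemble forecasts perturb the initial state deliberately to sample this spread.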
Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast, and to obtain useful results farther into the future than otherwise possible. This approach analyzes multiple forecasts created with an individual forecast model or multiple models. History. The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who used procedures originally developed by Vilhelm Bjerknes to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950, based on a highly simplified approximation to the atmospheric governing equations. In 1954, Carl-Gustav Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast (i.e., a routine prediction for practical use). Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy and Weather Bureau. In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. 
Following Phillips' work, several groups began working to create general circulation models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts. The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s, known as model output statistics (MOS). Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. Initialization. The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. 
The process of entering observation data into the model to generate initial conditions is called "initialization". On land, terrain maps available at fine resolutions globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. The data are then used in the model as the starting point for a forecast. A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes and ship reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent.
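The objective-analysis step described above can be sketched with a simple Cressman-style distance weighting. Operational centers use far more sophisticated data assimilation (variational and ensemble methods), so the function names and values here are purely illustrative:

```python
import math

# Spread irregularly spaced observations onto a regular model grid using
# Cressman-style weights that fall from 1 at the observation site to 0 at
# the radius of influence.
def cressman_weight(dist, radius):
    if dist >= radius:
        return 0.0
    return (radius**2 - dist**2) / (radius**2 + dist**2)

def analyze(obs, grid_points, radius, background):
    """Return an analysis value at each grid point.

    obs: list of (x, y, value) station reports; background is the
    first-guess value used where no observation is within range.
    """
    analysis = []
    for gx, gy in grid_points:
        wsum, vsum = 0.0, 0.0
        for ox, oy, val in obs:
            w = cressman_weight(math.hypot(gx - ox, gy - oy), radius)
            wsum += w
            vsum += w * val
        analysis.append(vsum / wsum if wsum > 0 else background)
    return analysis

# Three hypothetical station temperatures on a 5x5 grid (units arbitrary).
obs = [(0.0, 0.0, 15.0), (3.0, 1.0, 17.0), (1.0, 4.0, 12.0)]
grid = [(x, y) for x in range(5) for y in range(5)]
field = analyze(obs, grid, radius=3.5, background=14.0)
print(min(field), max(field))
```

Because each grid value is a weighted average of nearby reports (or the background), the analysis stays within the range of the input values, a basic sanity property of such schemes.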
Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific. Computation. An atmospheric model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations—along with the ideal gas law—are used to evolve the density, pressure, and potential temperature scalar fields and the air velocity (wind) vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well. The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical. These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future; the time increment for this prediction is called a "time step". This future atmospheric state is then used as the starting point for another application of the predictive equations to find new rates of change, and these new rates of change predict the atmosphere at a yet further time step into the future. This time stepping is repeated until the solution reaches the desired forecast time. 
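The time-stepping loop described above can be sketched on a toy problem: one-dimensional linear advection solved with a first-order upwind finite difference. The grid spacing, wind speed and time step are illustrative, not from any operational model:

```python
# Toy finite-difference integration of du/dt + c du/dx = 0 with an upwind
# scheme and periodic boundaries: each pass of the loop advances the state
# by one time step, exactly the repeated-stepping procedure described above.
c = 10.0        # wind speed, m/s (illustrative)
dx = 1000.0     # grid spacing, m
dt = 50.0       # time step, s; c*dt/dx = 0.5 satisfies the stability limit <= 1
nx, nsteps = 100, 400

u = [1.0 if 40 <= i < 60 else 0.0 for i in range(nx)]   # initial "blob"

for _ in range(nsteps):
    # Upwind update; u[i - 1] with i = 0 wraps to u[-1], giving periodicity.
    u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(nx)]

# The blob advects downstream (smeared by the scheme's numerical diffusion),
# while the total amount is conserved and values stay within [0, 1].
print(max(u), min(u), sum(u))
```

The chosen ratio c·dt/dx = 0.5 respects the stability constraint linking time step to grid spacing that the text mentions; doubling dt here would push the ratio to 1, the edge of stability for this scheme.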
The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The UKMET Unified Model is run six days into the future, while the European Centre for Medium-Range Weather Forecasts' Integrated Forecast System and Environment Canada's Global Environmental Multiscale Model both run out to ten days into the future, and the Global Forecast System model run by the Environmental Modeling Center is run sixteen days into the future. The visual output produced by a model solution is known as a prognostic chart, or "prog". Parameterization. Some meteorological processes are too small-scale or too complex to be explicitly included in numerical weather prediction models. "Parameterization" is a procedure for representing these processes by relating them to variables on the scales that the model resolves. For example, the gridboxes in weather and climate models have sides that are between and in length. A typical cumulus cloud has a scale of less than , and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized, by processes of various sophistication. In the earliest models, if a column of air within a model gridbox was conditionally unstable (essentially, the bottom was warmer and moister than the top) and the water vapor content at any point within the column became saturated then it would be overturned (the warm, moist air would begin rising), and the air in that vertical column mixed. More sophisticated schemes recognize that only some portions of the box might convect and that entrainment and other processes occur. 
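The early convective-adjustment idea can be made concrete in a few lines; the critical lapse rate and the column temperatures below are invented numbers rather than values from any operational scheme:

```python
# Moist convective adjustment in the spirit of early parameterizations:
# if a grid column is unstable (temperature falls off too quickly with
# height) and saturated, overturn it toward a neutral profile.
# The critical lapse rate and the column itself are illustrative numbers.

CRITICAL_LAPSE = 6.5   # K per layer, a stand-in threshold

def is_unstable(column):
    """Unstable if any layer-to-layer temperature drop exceeds the threshold."""
    return any(column[i] - column[i + 1] > CRITICAL_LAPSE
               for i in range(len(column) - 1))

def adjust(column, saturated):
    """If the column is conditionally unstable and saturated, mix it:
    here crudely, by replacing it with the neutral profile (exactly the
    critical lapse rate) that conserves the column-mean temperature."""
    if not (saturated and is_unstable(column)):
        return column[:]                       # no convection triggered
    n = len(column)
    mean = sum(column) / n
    offsets = [CRITICAL_LAPSE * (n - 1) / 2 - CRITICAL_LAPSE * i
               for i in range(n)]
    return [mean + off for off in offsets]

column = [300.0, 290.0, 282.0, 270.0]   # bottom warm, top cold: unstable
print(adjust(column, saturated=True))
print(adjust(column, saturated=False))  # unchanged: no saturation, no convection
```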
Weather models that have gridboxes with sizes between can explicitly represent convective clouds, although they need to parameterize cloud microphysics which occur at a smaller scale. The formation of large-scale (stratus-type) clouds is more physically based; they form when the relative humidity reaches some prescribed value. The cloud fraction can be related to this critical value of relative humidity. The amount of solar radiation reaching the ground, as well as the formation of cloud droplets occur on the molecular scale, and so they must be parameterized before they can be included in the model. Atmospheric drag produced by mountains must also be parameterized, as the limitations in the resolution of elevation contours produce significant underestimates of the drag. This method of parameterization is also done for the surface flux of energy between the ocean and the atmosphere, in order to determine realistic sea surface temperatures and type of sea ice found near the ocean's surface. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, and thus it is important to parameterize their contribution to these processes. Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes. Domains. The horizontal domain of a model is either "global", covering the entire Earth, or "regional", covering only part of the Earth. Regional models (also known as "limited-area" models, or LAMs) allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area instead of being spread over the globe. 
This allows regional models to resolve explicitly smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to specify conditions at the edge of their domain (boundary conditions) in order to allow systems from outside the regional model domain to move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as errors attributable to the regional model itself. The vertical coordinate is handled in various ways. Lewis Fry Richardson's 1922 model used geometric height (formula_0) as the vertical coordinate. Later models substituted the geometric formula_0 coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This correlation between coordinate systems can be made since pressure decreases with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (about ) level, and thus was essentially two-dimensional. High-resolution models—also called "mesoscale models"—such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates. This coordinate system receives its name from the independent variable formula_1 used to scale atmospheric pressures with respect to the pressure at the surface, and in some cases also with the pressure at the top of the domain. Model output statistics. Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions, statistical methods have been developed to attempt to correct the forecasts. 
Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models in the late 1960s. Model output statistics differ from the "perfect prog" technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Because MOS is run after its respective global or regional model, its production is known as post-processing. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds. Ensembles. In 1963, Edward Lorenz discovered the chaotic nature of the fluid dynamics equations involved in weather forecasting. Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days, making it impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of forecast skill. Furthermore, existing observation networks have poor coverage in some regions (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. 
These uncertainties limit forecast model accuracy to about five or six days into the future. Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere. Although this early example of an ensemble showed skill, in 1974 Cecil Leith showed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere. Since the 1990s, "ensemble forecasts" have been used operationally (as routine forecasts) to account for the stochastic nature of weather processes – that is, to resolve their inherent uncertainty. This method involves analyzing multiple forecasts created with an individual forecast model by using different physical parametrizations or varying initial conditions. Starting in 1992 with ensemble forecasts prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. The ECMWF model, the Ensemble Prediction System, uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding. The UK Met Office runs global and regional ensemble forecasts where perturbations to initial conditions are used by 24 ensemble members in the Met Office Global and Regional Ensemble Prediction System (MOGREPS) to produce 24 different forecasts. 
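The sensitivity that ensemble systems are built to sample can be reproduced with the three-variable system from Lorenz's 1963 paper; the ensemble size, perturbation magnitude, and simple forward-Euler integration below are arbitrary illustrative choices:

```python
import random

# Lorenz (1963) system integrated for a small ensemble of slightly
# perturbed initial conditions; ensemble mean and spread are diagnosed.
# Member count, perturbation size, and step settings are arbitrary choices.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

random.seed(0)
base = (1.0, 1.0, 1.0)
members = [(base[0] + random.gauss(0, 1e-4), base[1], base[2])
           for _ in range(10)]

for steps in (0, 500, 1500):                 # ~0, 5, 15 model time units
    xs = [run(m, steps)[0] for m in members]
    mean = sum(xs) / len(xs)
    spread = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    print(steps, round(mean, 2), round(spread, 4))
```

The spread starts at the size of the initial perturbations and grows until it saturates at the scale of the attractor, which is exactly the loss of predictability that limits deterministic forecasts.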
In a single model-based approach, the ensemble forecast is usually evaluated in terms of an average of the individual forecasts concerning one forecast variable, as well as the degree of agreement between various forecasts within the ensemble system, as represented by their overall spread. Ensemble spread is diagnosed through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small to include the weather that actually occurs, which can lead to forecasters misdiagnosing model uncertainty; this problem becomes particularly severe for forecasts of the weather about ten days in advance. When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the ensemble mean, and the forecast in general. Despite this perception, a "spread-skill relationship" is often weak or not found, as spread-error correlations are normally less than 0.6, and only under special circumstances range between 0.6–0.7. In the same way that many forecasts from a single model can be used to form an ensemble, multiple models may also be combined to produce an ensemble forecast. This approach is called "multi-model ensemble forecasting", and it has been shown to improve forecasts when compared to a single model-based approach. Models within a multi-model ensemble can be adjusted for their various biases, which is a process known as "superensemble forecasting". This type of forecast significantly reduces errors in model output. Applications. Air quality modeling. Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. 
The concentration of pollutants in the atmosphere is determined by their "transport", or mean velocity of movement through the atmosphere, their diffusion, chemical transformation, and ground deposition. In addition to pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. Meteorological conditions such as thermal inversions can prevent surface air from rising, trapping pollutants near the surface, which makes accurate forecasts of such events crucial for air quality modeling. Urban air quality models require a very fine computational mesh, requiring the use of high-resolution mesoscale weather models; in spite of this, the quality of numerical weather guidance is the main uncertainty in air quality forecasts. Climate modeling. A General Circulation Model (GCM) is a mathematical model that can be used in computer simulations of the global circulation of a planetary atmosphere or ocean. An atmospheric general circulation model (AGCM) is essentially the same as a global numerical weather prediction model, and some (such as the one used in the UK Unified Model) can be configured for both short-term weather forecasts and longer-term climate predictions. Along with sea ice and land-surface components, AGCMs and oceanic GCMs (OGCM) are key components of global climate models, and are widely applied for understanding the climate and projecting climate change. For aspects of climate change, a range of man-made chemical emission scenarios can be fed into the climate models to see how an enhanced greenhouse effect would modify the Earth's climate. Versions designed for climate applications with time scales of decades to centuries were originally created in 1969 by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. 
When run for multiple decades, computational limitations mean that the models must use a coarse grid that leaves smaller-scale interactions unresolved. Ocean surface modeling. The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics. The spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation. Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface. Tropical cyclone forecasting. Tropical cyclone forecasting also relies on data provided by numerical weather models. Three main classes of tropical cyclone guidance models exist: Statistical models are based on an analysis of storm behavior using climatology, and correlate a storm's position and date to produce a forecast that is not based on the physics of the atmosphere at the time. Dynamical models are numerical models that solve the governing equations of fluid flow in the atmosphere; they are based on the same principles as other limited-area numerical weather prediction models but may include special computational techniques such as refined spatial domains that move along with the cyclone. Models that use elements of both approaches are called statistical-dynamical models. In 1978, the first hurricane-tracking model based on atmospheric dynamics—the movable fine-mesh (MFM) model—began operating. 
Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the 1980s that numerical weather prediction showed skill, and not until the 1990s that it consistently outperformed statistical or simple dynamical models. Predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill than dynamical guidance. Wildfire modeling. On a molecular scale, there are two main competing reaction processes involved in the degradation of cellulose, or wood fuels, in wildfires. When there is a low amount of moisture in a cellulose fiber, volatilization of the fuel occurs; this process will generate intermediate gaseous products that will ultimately be the source of combustion. When moisture is present—or when enough heat is being carried away from the fiber—charring occurs. The chemical kinetics of both reactions indicate that there is a point at which the level of moisture is low enough—and/or heating rates high enough—for combustion processes to become self-sufficient. Consequently, changes in wind speed, direction, moisture, temperature, or lapse rate at different levels of the atmosphere can have a significant impact on the behavior and growth of a wildfire. Since the wildfire acts as a heat source to the atmospheric flow, the wildfire can modify local advection patterns, introducing a feedback loop between the fire and the atmosphere. A simplified two-dimensional model for the spread of wildfires that used convection to represent the effects of wind and terrain, as well as radiative heat transfer as the dominant method of heat transport, led to reaction–diffusion systems of partial differential equations.
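A one-dimensional caricature of such a reaction–diffusion fire model makes the mechanism explicit: heat diffuses along a line of fuel, and a threshold reaction term consumes fuel while releasing heat, so an ignition at one end propagates as a front. All coefficients below are invented for illustration:

```python
# 1-D reaction-diffusion toy for fire-front spread:
#   dT/dt = D * d2T/dx2 + HEAT * R(T) * fuel
# Temperature diffuses along the line while a threshold reaction term
# consumes fuel and releases heat. All coefficients are illustrative.

D, DX, DT = 0.5, 1.0, 0.2           # diffusion coeff, grid spacing, time step
IGNITE, RATE, HEAT = 0.5, 0.4, 2.0  # ignition threshold, burn rate, heat yield

def step(T, fuel):
    n = len(T)
    newT, newF = T[:], fuel[:]
    for i in range(1, n - 1):
        lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / (DX * DX)
        burn = RATE * fuel[i] if T[i] > IGNITE else 0.0   # reaction switch
        newT[i] = T[i] + DT * (D * lap + HEAT * burn)
        newF[i] = fuel[i] - DT * burn
    return newT, newF

n = 60
T = [0.0] * n
T[1] = T[2] = 2.0                   # ignition at the left edge
fuel = [1.0] * n

for _ in range(200):
    T, fuel = step(T, fuel)

front = max(i for i in range(n) if T[i] > IGNITE)   # rightmost burning cell
print(front)
```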
More complex models join numerical weather models or computational fluid dynamics models with a wildfire component, which allows the feedback effects between the fire and the atmosphere to be estimated. The additional complexity in the latter class of models translates to a corresponding increase in their computer power requirements. In fact, a full three-dimensional treatment of combustion via direct numerical simulation at scales relevant for atmospheric modeling is not currently practical because of the excessive computational cost such a simulation would require. Numerical weather models have limited forecast skill at spatial resolutions under , forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire, and to use those modified winds to determine the rate at which the fire will spread locally. References. Further reading.
[ { "math_id": 0, "text": "z" }, { "math_id": 1, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=1505381
15054570
Semi-s-cobordism
In mathematics, a cobordism ("W", "M", "M"−) of an ("n" + 1)-dimensional manifold (with boundary) "W" between its boundary components, two "n"-manifolds "M" and "M"−, is called a semi-"s"-cobordism if (and only if) the inclusion formula_0 is a simple homotopy equivalence (as in an "s"-cobordism), with no further requirement on the inclusion formula_1 (not even being a homotopy equivalence). Other notations. The original creator of this topic, Jean-Claude Hausmann, used the notation "M"− for the right-hand boundary of the cobordism. Properties. A consequence of ("W", "M", "M"−) being a semi-"s"-cobordism is that the kernel of the derived homomorphism on fundamental groups formula_2 is perfect. A corollary of this is that formula_3 solves the group extension problem formula_4. The solutions to the group extension problem for prescribed quotient group formula_5 and kernel group K are classified up to congruence by group cohomology (see Mac Lane's "Homology" pp. 124-129), so there are restrictions on which n-manifolds can be the right-hand boundary of a semi-"s"-cobordism with prescribed left-hand boundary M and superperfect kernel group K. Relationship with Plus cobordisms. Note that if ("W", "M", "M"−) is a semi-"s"-cobordism, then ("W", "M"−, "M") is a plus cobordism. (This justifies the use of "M"− for the right-hand boundary of a semi-"s"-cobordism, a play on the traditional use of "M"+ for the right-hand boundary of a plus cobordism.) Thus, a semi-"s"-cobordism may be thought of as an inverse to Quillen's Plus construction in the manifold category. Note that ("M"−)+ must be diffeomorphic (respectively, piecewise-linearly (PL) homeomorphic) to "M" but there may be a variety of choices for ("M"+)− for a given closed smooth (respectively, PL) manifold "M".
[ { "math_id": 0, "text": "M \\hookrightarrow W" }, { "math_id": 1, "text": "M^- \\hookrightarrow W" }, { "math_id": 2, "text": "K = \\ker(\\pi_1(M^{-}) \\twoheadrightarrow \\pi_1(W))" }, { "math_id": 3, "text": "\\pi_1(M^{-})" }, { "math_id": 4, "text": "1 \\rightarrow K \\rightarrow \\pi_1(M^{-}) \\rightarrow \\pi_1(M) \\rightarrow 1" }, { "math_id": 5, "text": "\\pi_1(M)" } ]
https://en.wikipedia.org/wiki?curid=15054570
15054768
Degasperis–Procesi equation
Used in hydrology In mathematical physics, the Degasperis–Procesi equation formula_0 is one of only two exactly solvable equations in the following family of third-order, non-linear, dispersive PDEs: formula_1 where formula_2 and "b" are real parameters ("b"=3 for the Degasperis–Procesi equation). It was discovered by Antonio Degasperis and Michela Procesi in a search for integrable equations similar in form to the Camassa–Holm equation, which is the other integrable equation in this family (corresponding to "b"=2); that those two equations are the only integrable cases has been verified using a variety of different integrability tests. Although discovered solely because of its mathematical properties, the Degasperis–Procesi equation (with formula_3) has later been found to play a similar role in water wave theory as the Camassa–Holm equation. Soliton solutions. Among the solutions of the Degasperis–Procesi equation (in the special case formula_4) are the so-called multipeakon solutions, which are functions of the form formula_5 where the functions formula_6 and formula_7 satisfy formula_8 These ODEs can be solved explicitly in terms of elementary functions, using inverse spectral methods. When formula_3 the soliton solutions of the Degasperis–Procesi equation are smooth; they converge to peakons in the limit as formula_2 tends to zero. Discontinuous solutions. The Degasperis–Procesi equation (with formula_4) is formally equivalent to the (nonlocal) hyperbolic conservation law formula_9 where formula_10, and where the star denotes convolution with respect to "x". In this formulation, it admits weak solutions with a very low degree of regularity, even discontinuous ones (shock waves). In contrast, the corresponding formulation of the Camassa–Holm equation contains a convolution involving both formula_11 and formula_12, which only makes sense if "u" lies in the Sobolev space formula_13 with respect to "x". 
By the Sobolev embedding theorem, this means in particular that the weak solutions of the Camassa–Holm equation must be continuous with respect to "x". Notes. References. Further reading.
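The two-peakon case of the ODE system above can be integrated with a standard Runge–Kutta step; the initial positions and momenta below are arbitrary choices, not values from the literature. The sum m1 + m2 is an exact invariant of these ODEs (the sgn terms cancel pairwise), which gives a convenient check on the integration:

```python
import math

# Two-peakon dynamics for the Degasperis-Procesi equation (kappa = 0):
#   x_i' = sum_j m_j exp(-|x_i - x_j|)
#   m_i' = 2 m_i sum_j m_j sgn(x_i - x_j) exp(-|x_i - x_j|)
# integrated with a plain RK4 step. Initial data are arbitrary.

def sgn(a):
    return (a > 0) - (a < 0)

def rhs(state):
    x, m = state[:2], state[2:]
    dx = [sum(m[j] * math.exp(-abs(x[i] - x[j])) for j in range(2))
          for i in range(2)]
    dm = [2 * m[i] * sum(m[j] * sgn(x[i] - x[j]) * math.exp(-abs(x[i] - x[j]))
                         for j in range(2))
          for i in range(2)]
    return dx + dm

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs([s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = rhs([s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = rhs([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def u(x, state):
    """Reconstruct u(x) = sum_i m_i exp(-|x - x_i|) from the peakon data."""
    return sum(state[2 + i] * math.exp(-abs(x - state[i])) for i in range(2))

state = [-5.0, 0.0, 2.0, 1.0]   # x1, x2, m1, m2: faster peakon behind slower
for _ in range(2000):
    state = rk4(state, 0.005)

print([round(s, 3) for s in state])        # positions and momenta after t = 10
print(round(u(state[1], state), 3))        # wave height at the leading peak
```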
[ { "math_id": 0, "text": "\\displaystyle u_t - u_{xxt} + 2\\kappa u_x + 4u u_x = 3 u_x u_{xx} + u u_{xxx}" }, { "math_id": 1, "text": "\\displaystyle u_t - u_{xxt} + 2\\kappa u_x + (b+1)u u_x = b u_x u_{xx} + u u_{xxx}," }, { "math_id": 2, "text": "\\kappa" }, { "math_id": 3, "text": "\\kappa > 0" }, { "math_id": 4, "text": "\\kappa=0" }, { "math_id": 5, "text": "\\displaystyle u(x,t)=\\sum_{i=1}^n m_i(t) e^{-|x-x_i(t)|}" }, { "math_id": 6, "text": "m_i" }, { "math_id": 7, "text": "x_i" }, { "math_id": 8, "text": "\\dot{x}_i = \\sum_{j=1}^n m_j e^{-|x_i-x_j|},\\qquad \\dot{m}_i = 2 m_i \\sum_{j=1}^n m_j\\, \\sgn{(x_i-x_j)} e^{-|x_i-x_j|}." }, { "math_id": 9, "text": "\n\\partial_t u + \\partial_x \\left[\\frac{u^2}{2} + \\frac{G}{2} * \\frac{3 u^2}{2} \\right] = 0,\n" }, { "math_id": 10, "text": "G(x) = \\exp(-|x|)" }, { "math_id": 11, "text": "u^2" }, { "math_id": 12, "text": "u_x^2" }, { "math_id": 13, "text": "H^1 = W^{1,2}" } ]
https://en.wikipedia.org/wiki?curid=15054768
15056
Isoelectric point
Concept in molecular biology The isoelectric point (pI, pH(I), IEP), is the pH at which a molecule carries no net electrical charge or is electrically neutral in the statistical mean. The standard nomenclature to represent the isoelectric point is pH(I). However, pI is also used. For brevity, this article uses pI. The net charge on the molecule is affected by pH of its surrounding environment and can become more positively or negatively charged due to the gain or loss, respectively, of protons (H+). Surfaces naturally charge to form a double layer. In the common case when the surface charge-determining ions are H+/HO−, the net surface charge is affected by the pH of the liquid in which the solid is submerged. The pI value can affect the solubility of a molecule at a given pH. Such molecules have minimum solubility in water or salt solutions at the pH that corresponds to their pI and often precipitate out of solution. Biological amphoteric molecules such as proteins contain both acidic and basic functional groups. Amino acids that make up proteins may be positive, negative, neutral, or polar in nature, and together give a protein its overall charge. At a pH below their pI, proteins carry a net positive charge; above their pI they carry a net negative charge. Proteins can, thus, be separated by net charge in a polyacrylamide gel using either preparative native PAGE, which uses a constant pH to separate proteins, or isoelectric focusing, which uses a pH gradient to separate proteins. Isoelectric focusing is also the first step in 2-D gel polyacrylamide gel electrophoresis. In biomolecules, proteins can be separated by ion exchange chromatography. Biological proteins are made up of zwitterionic amino acid compounds; the net charge of these proteins can be positive or negative depending on the pH of the environment. The specific pI of the target protein can be used to model the process around and the compound can then be purified from the rest of the mixture. 
Buffers of various pH can be used for this purification process to change the pH of the environment. When a mixture containing a target protein is loaded into an ion exchanger, the stationary matrix can be either positively-charged (for mobile anions) or negatively-charged (for mobile cations). At low pH values, the net charge of most proteins in the mixture is positive – in cation exchangers, these positively-charged proteins bind to the negatively-charged matrix. At high pH values, the net charge of most proteins is negative, where they bind to the positively-charged matrix in anion exchangers. When the environment is at a pH value equal to the protein's pI, the net charge is zero, and the protein is not bound to any exchanger, and therefore, can be eluted out. Calculating pI values. For an amino acid with only one amine and one carboxyl group, the pI can be calculated from the mean of the pKas of this molecule. formula_0 The pH of an electrophoretic gel is determined by the buffer used for that gel. If the pH of the buffer is above the pI of the protein being run, the protein will migrate to the positive pole (negative charge is attracted to a positive pole). If the pH of the buffer is below the pI of the protein being run, the protein will migrate to the negative pole of the gel (positive charge is attracted to the negative pole). If the protein is run with a buffer pH that is equal to the pI, it will not migrate at all. This is also true for individual amino acids. Examples. In the two examples (on the right) the isoelectric point is shown by the green vertical line. In glycine the pK values are separated by nearly 7 units. Thus in the gas phase, the concentration of the neutral species, glycine (GlyH), is effectively 100% of the analytical glycine concentration. 
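For glycine, the two pK values are about 2.34 (carboxyl) and 9.60 (amino); these textbook numbers vary slightly between sources, but they make the average, and the vanishing of the net charge there, easy to verify:

```python
# Isoelectric point of a simple amino acid as the mean of its two pKa values.
# Glycine's commonly quoted values are used: pKa1 ~ 2.34 (carboxyl),
# pKa2 ~ 9.60 (amino); exact figures differ slightly between references.

def simple_pI(pKa1, pKa2):
    return (pKa1 + pKa2) / 2

def net_charge(pH, pKa_acid, pKa_base):
    """Net charge of a molecule with one carboxyl and one amino group,
    from the Henderson-Hasselbalch equation for each group."""
    neg = -1 / (1 + 10 ** (pKa_acid - pH))   # deprotonated -COO- fraction
    pos = 1 / (1 + 10 ** (pH - pKa_base))    # protonated -NH3+ fraction
    return neg + pos

pI = simple_pI(2.34, 9.60)
print(round(pI, 2))                  # 5.97
print(net_charge(pI, 2.34, 9.60))    # ~0: no net charge at the pI
print(net_charge(1.0, 2.34, 9.60))   # positive well below the pI
```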
Glycine may exist as a zwitterion at the isoelectric point, but the equilibrium constant for the isomerization reaction in solution, H2NCH2CO2H ⇌ H3N+CH2CO2−, is not known. The other example, adenosine monophosphate, is shown to illustrate the fact that a third species may, in principle, be involved. In fact, the concentration of this third species is negligible at the isoelectric point in this case. If the pI is greater than the pH, the molecule will have a positive charge. Peptides and proteins. A number of algorithms for estimating isoelectric points of peptides and proteins have been developed. Most of them use the Henderson–Hasselbalch equation with different pK values. For instance, within the model proposed by Bjellqvist and co-workers, the pKs were determined between closely related immobilines by focusing the same sample in overlapping pH gradients. Some improvements in the methodology (especially in the determination of the pK values for modified amino acids) have also been proposed. More advanced methods take into account the effect of adjacent amino acids ±3 residues away from a charged aspartic or glutamic acid, the effect of a free C-terminus, and apply a correction term to the corresponding pK values using a genetic algorithm. Other recent approaches are based on a support vector machine algorithm and pKa optimization against experimentally known protein/peptide isoelectric points. Moreover, experimentally measured isoelectric points of proteins have been aggregated into databases. Recently, a database of isoelectric points for all proteins predicted using most of the available methods has also been developed. In practice, a protein with an excess of basic amino acids (arginine, lysine and/or histidine) will have an isoelectric point roughly greater than 7 (basic), while a protein with an excess of acidic amino acids (aspartic acid and/or glutamic acid) will often have an isoelectric point lower than 7 (acidic).
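A bare-bones version of such a Henderson–Hasselbalch-based estimator is sketched below; the pK set is one commonly used (EMBOSS-like) choice, and since published pK scales differ, both the constants and the example sequences should be read as illustrative:

```python
# Estimate a peptide's isoelectric point: sum Henderson-Hasselbalch charges
# over the ionizable groups, then bisect for the pH of zero net charge.
# The pK set below is one commonly used (EMBOSS-like) choice; published
# scales differ, so treat these numbers as illustrative.

PK_POS = {'Nterm': 8.6, 'K': 10.8, 'R': 12.5, 'H': 6.5}
PK_NEG = {'Cterm': 3.6, 'D': 3.9, 'E': 4.1, 'C': 8.5, 'Y': 10.1}

def net_charge(seq, pH):
    charge = 1 / (1 + 10 ** (pH - PK_POS['Nterm']))    # free N-terminus
    charge -= 1 / (1 + 10 ** (PK_NEG['Cterm'] - pH))   # free C-terminus
    for aa in seq:
        if aa in PK_POS:
            charge += 1 / (1 + 10 ** (pH - PK_POS[aa]))
        elif aa in PK_NEG:
            charge -= 1 / (1 + 10 ** (PK_NEG[aa] - pH))
    return charge

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    while hi - lo > tol:             # net charge decreases with pH,
        mid = (lo + hi) / 2          # so plain bisection converges
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(isoelectric_point('ACDEFGHIK'), 2))   # mixed test peptide
print(round(isoelectric_point('KRKRKR'), 2))      # lysine/arginine-rich: basic
```

Because every term in the net charge is monotonically decreasing in pH, simple bisection is guaranteed to find the unique zero crossing.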
The electrophoretic linear (horizontal) separation of proteins by Ip along a pH gradient in a polyacrylamide gel (also known as isoelectric focusing), followed by a standard molecular weight linear (vertical) separation in a second polyacrylamide gel (SDS-PAGE), constitutes the so called two-dimensional gel electrophoresis or PAGE 2D. This technique allows a thorough separation of proteins as distinct "spots", with proteins of high molecular weight and low Ip migrating to the upper-left part of the bidimensional gel, while proteins with low molecular weight and high Ip locate to the bottom-right region of the same gel. Ceramic materials. The isoelectric points (IEP) of metal oxide ceramics are used extensively in material science in various aqueous processing steps (synthesis, modification, etc.). In the absence of chemisorbed or physisorbed species particle surfaces in aqueous suspension are generally assumed to be covered with surface hydroxyl species, M-OH (where M is a metal such as Al, Si, etc.). At pH values above the IEP, the predominant surface species is M-O−, while at pH values below the IEP, M-OH2+ species predominate. Some approximate values of common ceramics are listed below: "Note: The following list gives the isoelectric point at 25 °C for selected materials in water. The exact value can vary widely, depending on material factors such as purity and phase as well as physical parameters such as temperature. Moreover, the precise measurement of isoelectric points can be difficult, thus many sources often cite differing values for isoelectric points of these materials." Mixed oxides may exhibit isoelectric point values that are intermediate to those of the corresponding pure oxides. For example, a synthetically prepared amorphous aluminosilicate (Al2O3-SiO2) was initially measured as having IEP of 4.5 (the electrokinetic behavior of the surface was dominated by surface Si-OH species, thus explaining the relatively low IEP value). 
Significantly higher IEP values (pH 6 to 8) have been reported for 3Al2O3-2SiO2 by others. Similarly, the IEP of barium titanate, BaTiO3, has been reported in the range 5–6, while others have reported a value of 3. Mixtures of titania (TiO2) and zirconia (ZrO2) were studied and found to have an isoelectric point between 5.3 and 6.9, varying non-linearly with %(ZrO2). The surface charge of the mixed oxides was correlated with acidity. Greater titania content led to increased Lewis acidity, whereas zirconia-rich oxides displayed Brønsted acidity. The different types of acidities produced differences in ion adsorption rates and capacities. Versus point of zero charge. The terms isoelectric point (IEP) and point of zero charge (PZC) are often used interchangeably, although under certain circumstances, it may be productive to make the distinction. In systems in which H+/OH− are the interface potential-determining ions, the point of zero charge is given in terms of pH. The pH at which the surface exhibits a neutral net electrical charge is the point of zero charge at the surface. Electrokinetic phenomena generally measure zeta potential, and a zero zeta potential is interpreted as the point of zero net charge at the shear plane. This is termed the isoelectric point. Thus, the isoelectric point is the value of pH at which the colloidal particle remains stationary in an electrical field. The isoelectric point is expected to be somewhat different from the point of zero charge at the particle surface, but this difference is often ignored in practice for so-called pristine surfaces, i.e., surfaces with no specifically adsorbed positive or negative charges. In this context, specific adsorption is understood as adsorption occurring in a Stern layer or chemisorption. Thus, the point of zero charge at the surface is taken as equal to the isoelectric point in the absence of specific adsorption on that surface.
According to Jolivet, in the absence of positive or negative charges, the surface is best described by the point of zero charge. If positive and negative charges are both present in equal amounts, then this is the isoelectric point. Thus, the PZC refers to the absence of any type of surface charge, while the IEP refers to a state of neutral net surface charge. The difference between the two, therefore, is the quantity of charged sites at the point of net zero charge. Jolivet uses the intrinsic surface equilibrium constants, p"K"− and p"K"+ to define the two conditions in terms of the relative number of charged sites: formula_1 For large Δp"K" (&gt;4 according to Jolivet), the predominant species is MOH while there are relatively few charged species – so the PZC is relevant. For small values of Δp"K", there are many charged species in approximately equal numbers, so one speaks of the IEP. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
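Jolivet's criterion can be sketched numerically. The helper below is an illustrative computation of Δp"K" from assumed surface-species concentrations; the function name and the example concentrations are hypothetical, chosen only to show the two regimes.

```python
import math

def delta_pK(moh, moh2_plus, mo_minus):
    """Jolivet's criterion: Delta pK = log10([MOH]^2 / ([MOH2+][MO-])).

    Arguments are illustrative surface-species concentrations
    (site fractions); they are assumptions, not measured values.
    """
    return math.log10(moh ** 2 / (moh2_plus * mo_minus))

# Mostly neutral MOH sites, few charged sites -> large Delta pK (> 4),
# so the PZC is the relevant description:
print(delta_pK(1e-2, 1e-5, 1e-5))  # 6.0

# Many charged sites in roughly equal numbers -> small Delta pK,
# so one speaks of the IEP:
print(delta_pK(1e-2, 1e-3, 1e-3))  # 2.0
```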
[ { "math_id": 0, "text": " \\mathrm{pI} = \\frac{\\mathrm{p}K_\\mathrm{a1} + \\mathrm{p}K_\\mathrm{a2}}{2} " }, { "math_id": 1, "text": " \\mathrm{p}K^- - \\mathrm{p}K^+ = \\Delta \\mathrm{p}K = \\log {\\frac{\\left[\\mathrm{MOH}\\right]^2}{\\left[\\mathrm{MOH}{_2^+}\\right]\\left[\\mathrm{MO}^-\\right]}} " } ]
https://en.wikipedia.org/wiki?curid=15056
15060881
Poisson sampling
In survey methodology, Poisson sampling (sometimes denoted as "PO sampling") is a sampling process where each element of the population is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample. Each element of the population may have a different probability of being included in the sample (formula_0). The probability of being included in a sample during the drawing of a single sample is denoted as the "first-order inclusion probability" of that element (formula_1). If all first-order inclusion probabilities are equal, Poisson sampling becomes equivalent to Bernoulli sampling, which can therefore be considered to be a special case of Poisson sampling. A mathematical consequence of Poisson sampling. Mathematically, the first-order inclusion probability of the "i"th element of the population is denoted by the symbol formula_0, and the probability that the pair consisting of the "i"th and "j"th elements of the population is included in a sample during the drawing of a single sample (the "second-order inclusion probability") is denoted by formula_2. The following relation is valid during Poisson sampling when formula_3: formula_4 formula_5 is defined to be formula_0. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
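The process can be sketched in a few lines of Python (the function name is an assumption for illustration): each element undergoes its own independent Bernoulli trial with its own first-order inclusion probability, and independence is what gives the product rule for second-order inclusion probabilities.

```python
import random

def poisson_sample(population, inclusion_probs):
    """Poisson sampling: one independent Bernoulli trial per element,
    each with its own first-order inclusion probability pi_i."""
    return [x for x, pi in zip(population, inclusion_probs)
            if random.random() < pi]

# Elements can have different inclusion probabilities:
sample = poisson_sample(["a", "b", "c", "d"], [0.9, 0.1, 0.5, 0.5])
```

Because the trials are independent, the probability that elements "i" and "j" are both included is simply the product of their individual inclusion probabilities.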
[ { "math_id": 0, "text": "\\pi_i" }, { "math_id": 1, "text": "p_i" }, { "math_id": 2, "text": "\\pi_{ij}" }, { "math_id": 3, "text": "i\\neq j" }, { "math_id": 4, "text": " \\pi_{ij} = \\pi_{i} \\times \\pi_{j}." }, { "math_id": 5, "text": "\\pi_{ii}" } ]
https://en.wikipedia.org/wiki?curid=15060881
15061058
Bernoulli sampling
In the theory of finite population sampling, Bernoulli sampling is a sampling process where each element of the population is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample. An essential property of Bernoulli sampling is that all elements of the population have equal probability of being included in the sample. Bernoulli sampling is therefore a special case of Poisson sampling. In Poisson sampling each element of the population may have a different probability of being included in the sample. In Bernoulli sampling, the probability is equal for all the elements. Because each element of the population is considered separately for the sample, the sample size is not fixed but rather follows a binomial distribution. Example. The most basic Bernoulli method generates "n" random variates to extract a sample from a population of "n" items. Suppose you want to extract a given percentage "pct" of the population. The algorithm can be described as follows: 

for each item in the set
    generate a random non-negative integer R
    if (R mod 100) &lt; pct then select item

A percentage of 20%, say, is usually expressed as a probability "p"=0.2. In that case, random variates are generated in the unit interval. After running the algorithm, a sample of size "k" will have been selected. One would expect to have formula_0, which becomes increasingly likely as "n" grows. In fact, it is possible to calculate the probability of obtaining a sample size of "k" by the binomial distribution: formula_1 On the left this function is shown for four values of formula_2 and formula_3. In order to compare the values for different values of formula_2, the formula_4's in abscissa are scaled from formula_5 to the unit interval, while the value of the function, in ordinate, is multiplied by the inverse, so that the area under the graph maintains the same value; that area is related to the corresponding cumulative distribution function. 
The values are shown in logarithmic scale. On the right, the minimum values of formula_2 that satisfy given error bounds with 95% probability are shown. Given an error, the set of formula_4's within bounds can be described as follows: formula_6 The probability of ending up within formula_7 is given again by the binomial distribution as: formula_8 The picture shows the lowest values of formula_2 such that the sum is at least 0.95. For formula_9 and formula_10 the algorithm delivers exact results for all formula_2's. The formula_11's in between are obtained by bisection. Note that, if formula_12 is an integer percentage, formula_13 guarantees that formula_14. Values as high as formula_15 can be required for such an exact match. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
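The selection algorithm and the binomial sample-size distribution described above can be sketched in Python (function names are illustrative, not from any standard library):

```python
import random
from math import comb

def bernoulli_sample(population, p):
    """One independent trial per element; each is kept with probability p."""
    return [item for item in population if random.random() < p]

def sample_size_pmf(k, n, p):
    """Probability that the sample contains exactly k of n elements:
    f(k, n, p) = C(n, k) * p**k * (1-p)**(n-k)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

random.seed(0)  # arbitrary seed, for reproducibility only
k = len(bernoulli_sample(range(1000), 0.2))  # expected size n*p = 200
```

The sample size varies from run to run; only its distribution is fixed, which is exactly why the error-bound analysis above is needed.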
[ { "math_id": 0, "text": "k \\approx n \\cdot p" }, { "math_id": 1, "text": "f(k,n,p) = \\binom{n}{k}p^k(1-p)^{n-k}" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "p=0.2" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "\\left[0, n\\right]" }, { "math_id": 6, "text": "K_{n, p} = \\left\\{ k \\in \\N: \\left\\vert \\frac k n - p \\right\\vert < \\mathrm{error} \\right\\}" }, { "math_id": 7, "text": "K" }, { "math_id": 8, "text": "\\sum_{k \\in K} f(k, n, p)." }, { "math_id": 9, "text": "p = 0.0" }, { "math_id": 10, "text": "p = 1.00" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "100 \\cdot p" }, { "math_id": 13, "text": "\\mathrm{error} = 0.005" }, { "math_id": 14, "text": "100 \\cdot k/n = 100 \\cdot p" }, { "math_id": 15, "text": "n = 38400" } ]
https://en.wikipedia.org/wiki?curid=15061058
15061620
Sampling design
In the theory of finite population sampling, a sampling design specifies for every possible sample its probability of being drawn. Mathematical formulation. Mathematically, a sampling design is denoted by the function formula_0 which gives the probability of drawing a sample formula_1 An example of a sampling design. During Bernoulli sampling, formula_0 is given by formula_2 where formula_3 is the probability of each element being included in the sample, formula_4 is the total number of elements in the sample formula_5, and formula_6 is the total number of elements in the population (before sampling commenced). Sample design for managerial research. In business research, companies must often generate samples of customers, clients, employees, and so forth to gather their opinions. Sample design is also a critical component of marketing research and employee research for many organizations. During sample design, firms must answer questions such as: These issues require very careful consideration, and good commentaries are provided in several sources. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
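The Bernoulli design above can be checked numerically with a short sketch (the function name is an assumption for illustration). Since every sample of a given size has the same probability, grouping the probabilities by sample size and summing over all possible samples must yield 1.

```python
from math import comb

def bernoulli_design_prob(q, n_sample, n_pop):
    """P(S) for one specific sample S of size n_sample, drawn by
    Bernoulli sampling with common inclusion probability q."""
    return q ** n_sample * (1 - q) ** (n_pop - n_sample)

# There are C(n_pop, k) distinct samples of size k; the design
# probabilities over all 2**n_pop possible samples sum to 1:
total = sum(comb(4, k) * bernoulli_design_prob(0.3, k, 4) for k in range(5))
```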
[ { "math_id": 0, "text": "P(S)" }, { "math_id": 1, "text": "S." }, { "math_id": 2, "text": " P(S) = q^{N_\\text{sample}(S)} \\times (1-q)^{(N_\\text{pop} - N_\\text{sample}(S))} " }, { "math_id": 3, "text": "q" }, { "math_id": 4, "text": "N_\\text{sample}(S)" }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "N_\\text{pop}" } ]
https://en.wikipedia.org/wiki?curid=15061620
15062158
Schröder number
Mathematical integer sequence In mathematics, the Schröder number formula_0 also called a "large Schröder number" or "big Schröder number", describes the number of lattice paths from the southwest corner formula_1 of an formula_2 grid to the northeast corner formula_3 using only single steps north, formula_4 northeast, formula_5 or east, formula_6 that do not rise above the SW–NE diagonal. The first few Schröder numbers are 1, 2, 6, 22, 90, 394, 1806, 8558, ... (sequence in the OEIS). where formula_7 and formula_8 They were named after the German mathematician Ernst Schröder. Examples. The following figure shows the 6 such paths through a formula_9 grid: Related constructions. A Schröder path of length formula_10 is a lattice path from formula_1 to formula_11 with steps northeast, formula_5 east, formula_12 and southeast, formula_13 that do not go below the formula_14-axis. The formula_10th Schröder number is the number of Schröder paths of length formula_10. The following figure shows the 6 Schröder paths of length 2. Similarly, the Schröder numbers count the number of ways to divide a rectangle into formula_15 smaller rectangles using formula_10 cuts through formula_10 points given inside the rectangle in general position, each cut intersecting one of the points and dividing only a single rectangle in two (i.e., the number of structurally-different guillotine partitions). This is similar to the process of triangulation, in which a shape is divided into nonoverlapping triangles instead of rectangles. The following figure shows the 6 such dissections of a rectangle into 3 rectangles using two cuts: Pictured below are the 22 dissections of a rectangle into 4 rectangles using three cuts: The Schröder number formula_16 also counts the separable permutations of length formula_17 Related sequences. 
Schröder numbers are sometimes called "large" or "big" Schröder numbers because there is another Schröder sequence: the "little Schröder numbers", also known as the Schröder-Hipparchus numbers or the "super-Catalan numbers". The connections between these paths can be seen in a few ways: Schröder paths are similar to Dyck paths but allow the horizontal step instead of just diagonal steps. Another similar path is the type of path that the Motzkin numbers count; the Motzkin paths allow the same diagonal paths but allow only a single horizontal step, (1,0), and count such paths from formula_1 to formula_26. There is also a triangular array associated with the Schröder numbers that provides a recurrence relation (though not just with the Schröder numbers). The first few terms are 1, 1, 2, 1, 4, 6, 1, 6, 16, 22, ... (sequence in the OEIS). It is easier to see the connection with the Schröder numbers when the sequence is in its triangular form: Then the Schröder numbers are the diagonal entries, i.e. formula_27 where formula_28 is the entry in row formula_10 and column formula_29. The recurrence relation given by this arrangement is formula_30 with formula_31 and formula_32 for formula_33. Another interesting observation to make is that the sum of the formula_10th row is the formula_34st little Schröder number; that is, formula_35. Recurrence relations. With formula_36, formula_37, formula_38 for formula_39 and also formula_40 for formula_39 Generating function. The generating function formula_41 of the sequence formula_42 is formula_43. It can be expressed in terms of the generating function for Catalan numbers formula_44 as formula_45 Uses. 
One topic of combinatorics is tiling shapes, and one particular instance of this is domino tilings; the question in this instance is, "In how many ways can we arrange dominoes (that is, formula_46 or formula_47 rectangles) on some shape such that none of the dominoes overlap, the entire shape is covered, and none of the dominoes stick out of the shape?" The shape that the Schröder numbers have a connection with is the Aztec diamond. Shown below for reference is an Aztec diamond of order 4 with a possible domino tiling. It turns out that the determinant of the formula_48 Hankel matrix of the Schröder numbers, that is, the square matrix whose formula_49th entry is formula_50, is the number of domino tilings of the order formula_10 Aztec diamond, which is formula_51 That is, formula_52 For example: *formula_53 *formula_54 *formula_55 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
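The linear recurrence given earlier can be checked with a short computation (the function name is illustrative):

```python
def schroder_numbers(n_max):
    """Large Schröder numbers S_0..S_n_max via the linear recurrence
    S_n = ((6n - 3) S_{n-1} - (n - 2) S_{n-2}) / (n + 1)."""
    S = [1, 2]
    for n in range(2, n_max + 1):
        # The division is always exact, since each S_n is an integer.
        S.append(((6 * n - 3) * S[n - 1] - (n - 2) * S[n - 2]) // (n + 1))
    return S[: n_max + 1]

print(schroder_numbers(7))  # [1, 2, 6, 22, 90, 394, 1806, 8558]
```

The same values also satisfy the quadratic recurrence S_n = 3 S_{n-1} + Σ S_k S_{n-k-1}, and the 2×2 Hankel determinant S_1 S_3 − S_2² = 2·22 − 36 = 8 = 2^{2·3/2}, matching the Aztec-diamond identity above.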
[ { "math_id": 0, "text": "S_n," }, { "math_id": 1, "text": "(0,0)" }, { "math_id": 2, "text": "n \\times n" }, { "math_id": 3, "text": "(n,n)," }, { "math_id": 4, "text": "(0,1);" }, { "math_id": 5, "text": "(1,1);" }, { "math_id": 6, "text": "(1,0)," }, { "math_id": 7, "text": "S_0=1" }, { "math_id": 8, "text": "S_1=2." }, { "math_id": 9, "text": "2 \\times 2" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "(2n,0)" }, { "math_id": 12, "text": "(2,0);" }, { "math_id": 13, "text": "(1,-1)," }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "n+1" }, { "math_id": 16, "text": "S_n" }, { "math_id": 17, "text": "n-1." }, { "math_id": 18, "text": "(n,n)" }, { "math_id": 19, "text": "(1,1)," }, { "math_id": 20, "text": "(2,0)," }, { "math_id": 21, "text": "(1,-1)" }, { "math_id": 22, "text": "s_n" }, { "math_id": 23, "text": "S_n = 2s_n" }, { "math_id": 24, "text": "n>0" }, { "math_id": 25, "text": "(S_0 = s_0 = 1)." }, { "math_id": 26, "text": "(n,0)" }, { "math_id": 27, "text": "S_n = T(n,n)" }, { "math_id": 28, "text": "T(n,k)" }, { "math_id": 29, "text": "k" }, { "math_id": 30, "text": "T(n,k) = T(n,k-1) + T(n-1,k-1) + T(n-1,k) " }, { "math_id": 31, "text": "T(1,k)=1" }, { "math_id": 32, "text": "T(n,k)=0" }, { "math_id": 33, "text": "k>n" }, { "math_id": 34, "text": "(n+1)" }, { "math_id": 35, "text": "\\sum_{k=0}^n T(n,k) = s_{n+1}" }, { "math_id": 36, "text": "S_0 = 1" }, { "math_id": 37, "text": "S_1 = 2" }, { "math_id": 38, "text": "S_{n} = 3S_{n-1} + \\sum_{k=1}^{n-2}S_{k} S_{n-k-1} " }, { "math_id": 39, "text": "n \\geq 2" }, { "math_id": 40, "text": "S_{n} =\\frac{6n-3}{n+1}S_{n-1} - \\frac{n-2}{n+1}S_{n-2} " }, { "math_id": 41, "text": "G(x)" }, { "math_id": 42, "text": "(S_n)_{n\\geq0}" }, { "math_id": 43, "text": "G(x) = \\frac{1 - x - \\sqrt{1 - 6x + x^2}}{2x} = \\sum_{n=0}^\\infty S_n x^n" }, { "math_id": 44, "text": "C(x) = \\frac{1 - \\sqrt{1 - 4x}}{2x}" }, { "math_id": 45, "text": "G(x) = \\frac1{1-x} 
C\\big(\\frac{x}{(1-x)^2}\\big)." }, { "math_id": 46, "text": "1 \\times 2 " }, { "math_id": 47, "text": "2 \\times 1 " }, { "math_id": 48, "text": "(2n-1)\\times(2n-1)" }, { "math_id": 49, "text": "(i,j)" }, { "math_id": 50, "text": "S_{i+j-1}," }, { "math_id": 51, "text": "2^{n(n+1)/2}." }, { "math_id": 52, "text": "\n\\begin{vmatrix}\nS_1 & S_2 & \\cdots & S_n \\\\\nS_2 & S_3 & \\cdots & S_{n+1} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\nS_n & S_{n+1} & \\cdots & S_{2n-1}\n\\end{vmatrix}\n= 2^{n(n+1)/2}.\n" }, { "math_id": 53, "text": "\n\\begin{vmatrix}\n2\n\\end{vmatrix}\n= 2 = 2^{1(2)/2}\n" }, { "math_id": 54, "text": "\n\\begin{vmatrix}\n2 & 6 \\\\\n6 & 22\n\\end{vmatrix}\n= 8 = 2^{2(3)/2}\n" }, { "math_id": 55, "text": "\n\\begin{vmatrix}\n2 & 6 & 22 \\\\\n6 & 22 & 90 \\\\\n22 & 90 & 394\n\\end{vmatrix}\n= 64 = 2^{3(4)/2}\n" } ]
https://en.wikipedia.org/wiki?curid=15062158
1506351
Magnetoreception
Biological ability to perceive magnetic fields Magnetoreception is a sense which allows an organism to detect the Earth's magnetic field. Animals with this sense include some arthropods, molluscs, and vertebrates (fish, amphibians, reptiles, birds, and mammals). The sense is mainly used for orientation and navigation, but it may help some animals to form regional maps. Experiments on migratory birds provide evidence that they make use of a cryptochrome protein in the eye, relying on the quantum radical pair mechanism to perceive magnetic fields. This effect is extremely sensitive to weak magnetic fields, and readily disturbed by radio-frequency interference, unlike a conventional iron compass. Birds have iron-containing materials in their upper beaks. There is some evidence that this provides a magnetic sense, mediated by the trigeminal nerve, but the mechanism is unknown. Cartilaginous fish including sharks and stingrays can detect small variations in electric potential with their electroreceptive organs, the ampullae of Lorenzini. These appear to be able to detect magnetic fields by induction. There is some evidence that these fish use magnetic fields in navigation. History. Biologists have long wondered whether migrating animals such as birds and sea turtles have an inbuilt magnetic compass, enabling them to navigate using the Earth's magnetic field. Until late in the 20th century, evidence for this was essentially only behavioural: many experiments demonstrated that animals could indeed derive information from the magnetic field around them, but gave no indication of the mechanism. In 1972, Roswitha and Wolfgang Wiltschko showed that migratory birds responded to the direction and inclination (dip) of the magnetic field. In 1997, M. M. Walker and colleagues identified iron-based (magnetite) magnetoreceptors in the snouts of rainbow trout. In 2003, G. 
Fleissner and colleagues found iron-based receptors in the upper beaks of homing pigeons, both seemingly connected to the animal's trigeminal nerve. Research took a different direction in 2000, however, when Thorsten Ritz and colleagues suggested that a photoreceptor protein in the eye, cryptochrome, was a magnetoreceptor, working at a molecular scale by quantum entanglement. Proposed mechanisms. In animals. In animals, the mechanism for magnetoreception is still under investigation. Two main hypotheses are currently being discussed: one proposing a quantum compass based on a radical pair mechanism, the other postulating a more conventional iron-based magnetic compass with magnetite particles. Cryptochrome. According to the first model, magnetoreception is possible via the radical pair mechanism, which is well-established in spin chemistry. The mechanism requires two molecules, each with unpaired electrons, at a suitable distance from each other. When these can exist in states either with their spin axes in the same direction, or in opposite directions, the molecules oscillate rapidly between the two states. That oscillation is extremely sensitive to magnetic fields. Because the Earth's magnetic field is extremely weak, at 0.5 gauss, the radical pair mechanism is currently the only credible way that the Earth's magnetic field could cause chemical changes (as opposed to the mechanical forces which would be detected via magnetic crystals acting like a compass needle). In 1978, Schulten and colleagues proposed that this was the mechanism of magnetoreception. In 2000, scientists proposed that cryptochrome – a flavoprotein in the rod cells in the eyes of birds – was the "magnetic molecule" behind this effect. It is the only protein known to form photoinduced radical-pairs in animals. 
The function of cryptochrome varies by species, but its mechanism is always the same: exposure to blue light excites an electron in a chromophore, which causes the formation of a radical-pair whose electrons are quantum entangled, enabling the precision needed for magnetoreception. Many lines of evidence point to cryptochrome and radical pairs as the mechanism of magnetoreception in birds: These findings together suggest that the Cry4a of migratory birds has been selected for its magnetic sensitivity. Behavioral experiments on migratory birds also support this theory. Caged migratory birds such as robins display migratory restlessness, known by ethologists as "Zugunruhe", in spring and autumn: they often orient themselves in the direction in which they would migrate. In 2004, Thorsten Ritz showed that a weak radio-frequency electromagnetic field, chosen to be at the same frequency as the singlet-triplet oscillation of cryptochrome radical pairs, effectively interfered with the birds' orientation. The field would not have interfered with an iron-based compass. Further, birds are unable to detect a 180 degree reversal of the magnetic field, something they would straightforwardly detect with an iron-based compass. From 2007 onwards, Henrik Mouritsen attempted to replicate this experiment. Instead, he found that robins were unable to orient themselves in the wooden huts he used. Suspecting extremely weak radio-frequency interference from other electrical equipment on the campus, he tried shielding the huts with aluminium sheeting, which blocks electrical noise but not magnetic fields. When he earthed the sheeting, the robins oriented correctly; when the earthing was removed, the robins oriented at random. Finally, when the robins were tested in a hut far from electrical equipment, the birds oriented correctly. These effects imply a radical-pair compass, not an iron one. 
In 2016, Wiltschko and colleagues showed that cryptochrome can be activated in the dark, removing the objection that the blue light-activated mechanism would not work when birds were migrating at night. A different radical pair is formed by re-oxidation of fully-reduced FADH−. Experiments with European robins, using flickering light and a magnetic field switched off when the light was on, showed that the birds detected the field without light. The birds were unaffected by local anaesthesia of the upper beak, showing that in these test conditions orientation was not from iron-based receptors in the beak. In their view, cryptochrome and its radical pairs provide the only model that can explain the avian magnetic compass. A scheme with three radicals rather than two has been proposed as more resistant to spin relaxation and explaining the observed behaviour better. Iron-based. The second proposed model for magnetoreception relies on clusters of magnetite, a naturally occurring iron mineral with strong magnetism, used by magnetotactic bacteria. Iron clusters have been observed in the upper beak of homing pigeons and other taxa. Iron-based systems could form a magnetoreceptive basis for many species including turtles. Both the exact location and ultrastructure of birds' iron-containing magnetoreceptors remain unknown; they are believed to be in the upper beak, and to be connected to the brain by the trigeminal nerve. This system is in addition to the cryptochrome system in the retina of birds. Iron-based systems of unknown function might also exist in other vertebrates. Electromagnetic induction. Another possible mechanism of magnetoreception in animals is electromagnetic induction in cartilaginous fish, namely sharks, stingrays, and chimaeras. These fish have electroreceptive organs, the ampullae of Lorenzini, which can detect small variations in electric potential. 
The organs are mucus-filled and consist of canals that connect pores in the skin of the mouth and nose to small sacs within the animal's flesh. They are used to sense the weak electric fields of prey and predators. These organs have been predicted to sense magnetic fields, by means of Faraday's law of induction: as a conductor moves through a magnetic field an electric potential is generated. In this case the conductor is the animal moving through a magnetic field, and the potential induced (Vind) depends on the time (t)-varying rate of magnetic flux (Φ) through the conductor according to formula_0 The ampullae of Lorenzini detect very small fluctuations in the potential difference between the pore and the base of the electroreceptor sac. An increase in potential results in a decrease in the rate of nerve activity. This is analogous to the behavior of a current-carrying conductor. Sandbar sharks, "Carcharinus plumbeus", have been shown to be able to detect magnetic fields; the experiments provided non-definitive evidence that the animals had a magnetoreceptor, rather than relying on induction and electroreceptors. Electromagnetic induction has not been studied in non-aquatic animals. The yellow stingray, "Urobatis jamaicensis", is able to distinguish between the intensity and inclination angle of a magnetic field in the laboratory. This suggests that cartilaginous fishes may use the Earth's magnetic field for navigation. Passive alignment in bacteria. Magnetotactic bacteria of multiple taxa contain sufficient magnetic material in the form of magnetosomes, nanometer-sized particles of magnetite, that the Earth's magnetic field passively aligns them, just as it does with a compass needle. The bacteria are thus not actually sensing the magnetic field. A possible but unexplored mechanism of magnetoreception in animals is through endosymbiosis with magnetotactic bacteria, whose DNA is widespread in animals. 
This would involve having these bacteria living inside an animal, and their magnetic alignment being used as part of a magnetoreceptive system. Unanswered questions. It remains likely that two or more complementary mechanisms play a role in magnetic field detection in animals. This potential dual-mechanism theory raises the questions of to what degree each mechanism is responsible for the stimulus, and how each produces a signal in response to the weak magnetic field of the Earth. In addition, it is possible that magnetic senses may be different for different species. Some species may only be able to detect north and south, while others may only be able to differentiate between the equator and the poles. Although the ability to sense direction is important in migratory navigation, many animals have the ability to sense small fluctuations in Earth's magnetic field to map their position to within a few kilometers. Taxonomic range. Magnetoreception is widely distributed taxonomically. It is present in many of the animals so far investigated. These include arthropods, molluscs, and among vertebrates in fish, amphibians, reptiles, birds, and mammals. Its status in other groups remains unknown. The ability to detect and respond to magnetic fields may exist in plants, possibly as in animals mediated by cryptochrome. Experiments by different scientists have identified multiple effects, including changes to growth rate, seed germination, mitochondrial structure, and responses to gravity (geotropism). The results have sometimes been controversial, and no mechanism has been definitely identified. The ability may be widely distributed, but its taxonomic range in plants is unknown. In molluscs. The giant sea slug "Tochuina gigantea" (formerly "T. tetraquetra"), a mollusc, orients its body between north and east prior to a full moon. A 1991 experiment offered a right turn to geomagnetic south and a left turn to geomagnetic east (a Y-shaped maze). 
80% of "Tochuina" made a turn to magnetic east. When the field was reversed, the animals displayed no preference for either turn. "Tochuina"'s nervous system is composed of individually identifiable neurons, four of which are stimulated by changes in the applied magnetic field, and two which are inhibited by such changes. The tracks of the similar species "Tritonia exsulans" become more variable in direction when close to strong rare-earth magnets placed in their natural habitat, suggesting that the animal uses its magnetic sense continuously to help it travel in a straight line. In insects. The fruit fly "Drosophila melanogaster" may be able to orient to magnetic fields. In one choice test, flies were loaded into an apparatus with two arms that were surrounded by electric coils. Current was run through each of the coils, but only one was configured to produce a 5-Gauss magnetic field (about ten times stronger than the Earth's magnetic field) at a time. The flies were trained to associate the magnetic field with a sucrose reward. Flies with an altered cryptochrome, such as with an antisense mutation, were not sensitive to magnetic fields. Magnetoreception has been studied in detail in insects including honey bees, ants and termites. Ants and bees navigate using their magnetic sense both locally (near their nests) and when migrating. In particular, the Brazilian stingless bee "Schwarziana quadripunctata" is able to detect magnetic fields using the thousands of hair-like sensilla on its antennae. In vertebrates. In fish. Studies of magnetoreception in bony fish have been conducted mainly with salmon. Both sockeye salmon ("Oncorhynchus nerka") and Chinook salmon ("Oncorhynchus tschawytscha") have a compass sense. This was demonstrated in experiments in the 1980s by changing the axis of a magnetic field around a circular tank of young fish; they reoriented themselves in line with the field. In amphibians. 
Some of the earliest studies of amphibian magnetoreception were conducted with cave salamanders ("Eurycea lucifuga"). Researchers housed groups of cave salamanders in corridors aligned with either magnetic north–south or magnetic east–west. In tests, the magnetic field was experimentally rotated by 90°, and salamanders were placed in cross-shaped structures (one corridor along the new north–south axis, one along the new east–west axis). The salamanders responded to the field's rotation. Red-spotted newts ("Notophthalmus viridescens") respond to drastic increases in water temperature by heading for land. The behaviour is disrupted if the magnetic field is experimentally altered, showing that the newts use the field for orientation. Both European toads ("Bufo bufo") and natterjack toads ("Epidalea calamita") rely on vision and olfaction when migrating to breeding sites, but magnetic fields may also play a role. When randomly displaced from their breeding sites, these toads can navigate their way back, but this ability can be disrupted by fitting them with small magnets. In reptiles. The majority of study on magnetoreception in reptiles involves turtles. Early support for magnetoreception in turtles was provided in a 1991 study on hatchling loggerhead turtles which demonstrated that loggerheads can use the magnetic field as a compass to determine direction. Subsequent studies have demonstrated that loggerhead and green turtles can also use the magnetic field of the Earth as a map, because different parameters of the Earth's magnetic field vary with geographic location. The map sense in sea turtles was the first ever described, though similar abilities have now been reported in lobsters, fish, and birds. Magnetoreception by land turtles was shown in a 2010 experiment on "Terrapene carolina", a box turtle. After teaching a group of these box turtles to swim to either the east or west end of an experimental tank, a strong magnet disrupted the learned routes. 
Orientation toward the sea, as seen in turtle hatchlings, may rely partly on magnetoreception. In loggerhead and leatherback turtles, breeding takes place on beaches, and, after hatching, offspring crawl rapidly to the sea. Although differences in light intensity seem to drive this behaviour, magnetic alignment appears to play a part. For instance, the natural directional preferences held by these hatchlings (which lead them from beaches to the sea) reverse upon experimental inversion of the magnetic poles. In birds. Homing pigeons use magnetic fields as part of their complex navigation system. William Keeton showed that time-shifted homing pigeons (acclimatised in the laboratory to a different time-zone) are unable to orient themselves correctly on a clear, sunny day; this is attributed to time-shifted pigeons being unable to compensate accurately for the movement of the sun during the day. Conversely, time-shifted pigeons released on overcast days navigate correctly, suggesting that pigeons can use magnetic fields to orient themselves; this ability can be disrupted with magnets attached to the birds' backs. Pigeons can detect magnetic anomalies as weak as 1.86 gauss. For a long time the trigeminal system was the suggested location for a magnetite-based magnetoreceptor in the pigeon. This was based on two findings: First, magnetite-containing cells were reported in specific locations in the upper beak. However, the cells proved to be immune system macrophages, not neurons able to detect magnetic fields. Second, pigeon magnetic field detection is impaired by sectioning the trigeminal nerve and by application of lidocaine, an anaesthetic, to the olfactory mucosa. However, lidocaine treatment might lead to unspecific effects and not represent a direct interference with potential magnetoreceptors. As a result, an involvement of the trigeminal system is still debated. 
In the search for magnetite receptors, a large iron-containing organelle (the cuticulosome) of unknown function was found in the inner ear of pigeons. Areas of the pigeon brain that respond with increased activity to magnetic fields are the posterior vestibular nuclei, dorsal thalamus, hippocampus, and visual hyperpallium. Domestic hens have iron mineral deposits in the sensory dendrites in the upper beak and are capable of magnetoreception. Beak trimming causes loss of the magnetic sense. In mammals. Some mammals are capable of magnetoreception. When woodmice are removed from their home area and deprived of visual and olfactory cues, they orient towards their homes until an inverted magnetic field is applied to their cage. When the same mice are allowed access to visual cues, they are able to orient themselves towards home despite the presence of inverted magnetic fields. This indicates that woodmice use magnetic fields to orient themselves when no other cues are available. The magnetic sense of woodmice is likely based on a radical-pair mechanism. The Zambian mole-rat, a subterranean mammal, uses magnetic fields to aid in nest orientation. In contrast to woodmice, Zambian mole-rats do not rely on radical-pair-based magnetoreception, perhaps due to their subterranean lifestyle. Experimental exposure to magnetic fields leads to an increase in neural activity within the superior colliculus, as measured by immediate early gene expression. The activity level of neurons within two levels of the superior colliculus, the outer sublayer of the intermediate gray layer and the deep gray layer, was elevated in a non-specific manner when exposed to various magnetic fields. However, within the inner sublayer of the intermediate gray layer (InGi) there were two or three clusters of cells that respond in a more specific manner. The more time the mole-rats were exposed to a magnetic field, the greater the immediate early gene expression within the InGi.
Magnetic fields appear to play a role in bat orientation. They use echolocation to orient themselves over short distances, typically ranging from a few centimetres up to 50 metres. When non-migratory big brown bats ("Eptesicus fuscus") are taken from their home roosts and exposed to magnetic fields rotated 90 degrees from magnetic north, they become disoriented; it is unclear whether they use the magnetic sense as a map, a compass, or a compass calibrator. Another bat species, the greater mouse-eared bat ("Myotis myotis"), appears to use the Earth's magnetic field in its home range as a compass, but needs to calibrate this at sunset or dusk. In migratory soprano pipistrelles ("Pipistrellus pygmaeus"), experiments using mirrors and Helmholtz coils show that they calibrate the magnetic field using the position of the solar disk at sunset. Red foxes ("Vulpes vulpes") may be influenced by the Earth's magnetic field when predating small rodents like mice and voles. They attack these prey using a specific high-jump, preferring a north-eastern compass direction. Successful attacks are tightly clustered to the north. It is unknown whether humans can sense magnetic fields. The ethmoid bone in the nose contains magnetic materials. Magnetosensitive cryptochrome 2 (cry2) is present in the human retina. Human alpha brain waves are affected by magnetic fields, but it is not known whether behaviour is affected. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V_{ind}=-\\frac{d\\phi}{dt}" } ]
https://en.wikipedia.org/wiki?curid=1506351
15063922
Direct limit of groups
The direct limit of a direct system of groups In mathematics, a direct limit of groups is the direct limit of a direct system of groups. These are central objects of study in algebraic topology, especially stable homotopy theory and homological algebra. They are sometimes called stable groups, though this term normally means something quite different in model theory. Certain examples of stable groups are easier to study than "unstable" groups, the groups occurring in the limit. This is a priori surprising, given that they are generally infinite-dimensional, constructed as limits of groups with finite-dimensional representations. Examples. Each family of classical groups forms a direct system, via inclusion of matrices in the upper left corner, such as formula_0. The stable groups are denoted formula_1 or formula_2. Bott periodicity computes the homotopy of the stable unitary group and stable orthogonal group. The Whitehead group of a ring (the first K-group) can be defined in terms of formula_1. Stable homotopy groups of spheres are the stable groups associated with the suspension functor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
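The inclusion maps in such a direct system can be illustrated concretely. The sketch below (an illustration of ours, using NumPy) implements the map formula_0 by placing a matrix in the upper-left corner of a larger identity matrix, and checks that the inclusion is a group homomorphism:

```python
import numpy as np

def embed_upper_left(m):
    """Include GL(n, A) into GL(n+1, A): put m in the upper-left
    corner and extend by the identity on the new coordinate."""
    n = m.shape[0]
    out = np.eye(n + 1, dtype=m.dtype)
    out[:n, :n] = m
    return out

g1 = np.array([[1, 2], [3, 4]])
g2 = np.array([[0, 1], [1, 1]])

# The inclusion is a homomorphism: embed(g1 g2) = embed(g1) embed(g2),
# which is what makes the family of groups a direct system.
assert np.array_equal(embed_upper_left(g1 @ g2),
                      embed_upper_left(g1) @ embed_upper_left(g2))
```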
[ { "math_id": 0, "text": "\\operatorname{GL}(n,A) \\to \\operatorname{GL}(n+1,A)" }, { "math_id": 1, "text": "\\operatorname{GL}(A)" }, { "math_id": 2, "text": "\\operatorname{GL}(\\infty,A)" } ]
https://en.wikipedia.org/wiki?curid=15063922
15064678
Envy ratio
Envy ratio, in finance, is the ratio of the price paid by investors to that paid by the management team for their respective shares of the equity. Overview. The ratio is used to consider an opportunity for a management buyout. Managers are often allowed to invest at a lower valuation to make their ownership possible and to create a personal financial incentive for them to approve the buyout and to work diligently towards the success of the investment. The envy ratio is somewhat similar to the concept of financial leverage; managers can increase returns on their investments by using other investors' money. formula_0 Example. If private equity investors paid $500M for 80% of a company's equity, and a management team paid $60M for 20%, then ER = (500/0.8)/(60/0.2) = 2.08x. This means that the investors paid 2.08 times more per share of equity than the managers did. The ratio demonstrates how generous institutional investors are to a management team—the higher the ratio, the better the deal for management. As a rule of thumb, management should be expected to invest anywhere from six months to one year's gross salary to demonstrate commitment and have some personal financial risk. In any transaction, the envy ratio is affected by how keen the investors are to do the deal; the competition they are facing; and economic factors. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
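The worked example above can be reproduced in a couple of lines; the sketch below (function name ours) uses the figures from the example:

```python
def envy_ratio(inv_investors, pct_investors, inv_managers, pct_managers):
    """Price per unit of equity paid by investors, divided by the
    price per unit of equity paid by managers."""
    return (inv_investors / pct_investors) / (inv_managers / pct_managers)

# $500M for 80% of the equity vs. $60M for 20%, as in the example.
ratio = envy_ratio(500, 0.80, 60, 0.20)
print(round(ratio, 2))  # 2.08
```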
[ { "math_id": 0, "text": "\\mbox{Envy ratio} = {\\mbox{Investment by investors / Percent of equity} \\over \\mbox{Investment by managers / Percent of equity}}" } ]
https://en.wikipedia.org/wiki?curid=15064678
15066189
Leapfrog integration
Mathematics concept In numerical analysis, leapfrog integration is a method for numerically integrating differential equations of the form formula_0 or equivalently of the form formula_1 particularly in the case of a dynamical system of classical mechanics. The method is known by different names in different disciplines. In particular, it is similar to the velocity Verlet method, which is a variant of Verlet integration. Leapfrog integration is equivalent to updating positions formula_2 and velocities formula_3 at different interleaved time points, staggered in such a way that they "leapfrog" over each other. Leapfrog integration is a second-order method, in contrast to Euler integration, which is only first-order, yet requires the same number of function evaluations per step. Unlike Euler integration, it is stable for oscillatory motion, as long as the time-step formula_4 is constant and formula_5. By using Yoshida coefficients and applying the leapfrog integrator multiple times with the correct timesteps, a much higher-order integrator can be generated. Algorithm. In leapfrog integration, the equations for updating position and velocity are formula_6 where formula_7 is position at step formula_8, formula_9 is the velocity, or first derivative of formula_10, at step formula_11, formula_12 is the acceleration, or second derivative of formula_10, at step formula_8, and formula_4 is the size of each time step. These equations can be expressed in a form that gives velocity at integer steps as well: formula_13 However, in this synchronized form, the time-step formula_4 must be constant to maintain stability. The synchronized form can be rearranged to the 'kick-drift-kick' form formula_14 which is primarily used where variable time-steps are required.
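A minimal sketch of the kick-drift-kick form above (the harmonic-oscillator test problem and parameter values are ours): with acceleration a(x) = -x the time-step satisfies formula_5, and the energy stays bounded rather than drifting:

```python
def leapfrog_kdk(acc, x, v, dt, n_steps):
    """Kick-drift-kick leapfrog: position and velocity are returned
    synchronised at the same time after n_steps of size dt."""
    for _ in range(n_steps):
        v += 0.5 * dt * acc(x)   # half kick
        x += dt * v              # drift
        v += 0.5 * dt * acc(x)   # half kick
    return x, v

# Harmonic oscillator a(x) = -x (omega = 1), so dt = 0.05 < 2/omega.
x, v = leapfrog_kdk(lambda x: -x, 1.0, 0.0, 0.05, 2000)

# The exact energy is 0.5; leapfrog keeps it bounded near that value.
energy = 0.5 * v * v + 0.5 * x * x
assert abs(energy - 0.5) < 1e-3
```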
The separation of the acceleration calculation onto the beginning and end of a step means that if time resolution is increased by a factor of two (formula_15), then only one extra (computationally expensive) acceleration calculation is required. One use of this equation is in gravity simulations, since in that case the acceleration depends only on the positions of the gravitating masses (and not on their velocities), although higher-order integrators (such as Runge–Kutta methods) are more frequently used. There are two primary strengths to leapfrog integration when applied to mechanics problems. The first is the time-reversibility of the Leapfrog method. One can integrate forward "n" steps, and then reverse the direction of integration and integrate backwards "n" steps to arrive at the same starting position. The second strength is its symplectic nature, which sometimes allows for the conservation of a (slightly modified) energy of a dynamical system (only true for certain simple systems). This is especially useful when computing orbital dynamics, as many other integration schemes, such as the (order-4) Runge–Kutta method, do not conserve energy and allow the system to drift substantially over time. Because of its time-reversibility, and because it is a symplectic integrator, leapfrog integration is also used in Hamiltonian Monte Carlo, a method for drawing random samples from a probability distribution whose overall normalization is unknown. Yoshida algorithms. The leapfrog integrator can be converted into higher order integrators using techniques due to Haruo Yoshida. In this approach, the leapfrog is applied over a number of different timesteps. It turns out that when the correct timesteps are used in sequence, the errors cancel and far higher order integrators can be easily produced. 4th order Yoshida integrator. One step under the 4th order Yoshida integrator requires four intermediary steps. The position and velocity are computed at different times. 
Only three (computationally expensive) acceleration calculations are required. The equations for the 4th order integrator to update position and velocity are formula_16 where formula_17 are the starting position and velocity, formula_18 are intermediary position and velocity at intermediary step formula_19, formula_20 is the acceleration at the position formula_21, and formula_22 are the final position and velocity under one 4th order Yoshida step. Coefficients formula_23 and formula_24 are derived in (see equation (4.6)) formula_25 All intermediary steps together form one formula_4 step, which implies that the coefficients sum to one: formula_26 and formula_27. Note that position and velocity are computed at different times, and some intermediary steps are backwards in time. To illustrate this, we give the numerical values of the formula_28 coefficients: formula_29, formula_30, formula_31, formula_32 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
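A quick numerical check of these coefficients (a sketch; variable names ours) reproduces the values quoted above and confirms that both coefficient sets sum to one:

```python
# w0 and w1 as defined above, built from the cube root of 2.
cbrt2 = 2.0 ** (1.0 / 3.0)
w0 = -cbrt2 / (2.0 - cbrt2)
w1 = 1.0 / (2.0 - cbrt2)

c = [w1 / 2, (w0 + w1) / 2, (w0 + w1) / 2, w1 / 2]
d = [w1, w0, w1]

assert abs(sum(c) - 1.0) < 1e-12
assert abs(sum(d) - 1.0) < 1e-12
print([round(ci, 4) for ci in c])  # [0.6756, -0.1756, -0.1756, 0.6756]
```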
[ { "math_id": 0, "text": "\\ddot x = \\frac{d^2 x}{dt^2} = A(x)," }, { "math_id": 1, "text": "\\dot v = \\frac{dv}{dt} = A(x), \\;\\dot x = \\frac{dx}{dt} = v," }, { "math_id": 2, "text": "x(t)" }, { "math_id": 3, "text": "v(t)=\\dot x(t)" }, { "math_id": 4, "text": "\\Delta t" }, { "math_id": 5, "text": "\\Delta t < 2/\\omega" }, { "math_id": 6, "text": "\\begin{align}\n a_i &= A(x_i), \\\\\n v_{i+1/2} &= v_{i-1/2} + a_{i}\\, \\Delta t, \\\\\n x_{i+1} &= x_{i} + v_{i+1/2}\\, \\Delta t,\n\\end{align}" }, { "math_id": 7, "text": "x_i" }, { "math_id": 8, "text": "i" }, { "math_id": 9, "text": "v_{i+1/2\\,}" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "i+1/2\\," }, { "math_id": 12, "text": "a_{i}=A(x_i)" }, { "math_id": 13, "text": "\\begin{align}\n x_{i+1} &= x_i + v_i\\, \\Delta t + \\tfrac{1}{2}\\,a_i\\, \\Delta t^{\\,2}, \\\\\n v_{i+1} &= v_i + \\tfrac{1}{2}(a_i + a_{i+1})\\,\\Delta t.\n\\end{align}" }, { "math_id": 14, "text": "\\begin{align}\n v_{i+1/2} &= v_i + a_i \\frac{\\Delta t}{2}, \\\\\n x_{i+1} &= x_i +v_{i+1/2}\\Delta t,\\\\\n v_{i+1} &= v_{i+1/2} + a_{i+1} \\frac{\\Delta t}{2},\n\\end{align}" }, { "math_id": 15, "text": "\\Delta t \\rightarrow \\Delta t/2" }, { "math_id": 16, "text": "\\begin{align}\n x_i^1 &= x_i + c_1\\, v_i\\, \\Delta t, \\\\\n v_i^1 &= v_i + d_1\\, a(x_i^1)\\, \\Delta t, \\\\\n x_i^2 &= x_i^1 + c_2\\, v_i^1\\, \\Delta t, \\\\\n v_i^2 &= v_i^1 + d_2\\, a(x_i^2)\\, \\Delta t, \\\\\n x_i^3 &= x_i^2 + c_3\\, v_i^2\\, \\Delta t, \\\\\n v_i^3 &= v_i^2 + d_3\\, a(x_i^3)\\, \\Delta t, \\\\\n x_{i+1} &\\equiv x_i^4 = x_i^3 + c_4\\, v_i^3\\, \\Delta t, \\\\\n v_{i+1} &\\equiv v_i^4 = v_i^3 \\\\\n\\end{align}" }, { "math_id": 17, "text": "x_i, v_i" }, { "math_id": 18, "text": "x_i^n, v_i^n" }, { "math_id": 19, "text": "n" }, { "math_id": 20, "text": "a(x_i^n)" }, { "math_id": 21, "text": "x_i^n" }, { "math_id": 22, "text": "x_{i+1},v_{i+1}" }, { "math_id": 23, "text": "(c_1, c_2, c_3, c_4)" }, { "math_id": 24, "text": "(d_1, 
d_2, d_3)" }, { "math_id": 25, "text": "\\begin{align}\n w_0 &\\equiv - \\frac{\\sqrt[3]{2}}{2-\\sqrt[3]{2}}, \\\\\n w_1 &\\equiv \\frac{1}{2-\\sqrt[3]{2}}, \\\\\n c_1 &= c_4 \\equiv \\frac{w_1}{2}, c_2 = c_3 \\equiv \\frac{w_0+w_1}{2}, \\\\\n d_1 &= d_3 \\equiv w_1, d_2 \\equiv w_0 \\\\\n\\end{align}" }, { "math_id": 26, "text": "\\sum_{i=1}^{4} c_i = 1" }, { "math_id": 27, "text": "\\sum_{i=1}^{3} d_i = 1" }, { "math_id": 28, "text": "c_{n}" }, { "math_id": 29, "text": "c_1=0.6756" }, { "math_id": 30, "text": "c_2=-0.1756" }, { "math_id": 31, "text": "c_3=-0.1756" }, { "math_id": 32, "text": "c_4=0.6756." } ]
https://en.wikipedia.org/wiki?curid=15066189
15069829
Photo-Carnot engine
A photo-Carnot engine is a Carnot cycle engine in which the working medium is a photon inside a cavity with perfectly reflecting walls. Radiation is the working fluid, and the piston is driven by radiation pressure. A quantum Carnot engine is one in which the atoms in the heat bath are given a small bit of quantum coherence. The phase of the atomic coherence provides a new control parameter. The second law of thermodynamics is not violated; nevertheless, the quantum Carnot engine has certain features that are not possible in a classical engine. Derivation. The internal energy of the photo-Carnot engine is proportional to the volume (unlike the ideal-gas equivalent) as well as to the 4th power of the temperature (see Stefan–Boltzmann law), using formula_0: formula_1 The radiation pressure is proportional to this 4th power of the temperature but to no other variables, meaning that for this photo-Carnot engine an isotherm is equivalent to an isobar: formula_2 Using the first law of thermodynamics (formula_3), we can determine the work done through an adiabatic (formula_4) expansion by taking the total differential of the internal energy (formula_5) and setting it equal to formula_6 Combining these (formula_7) gives us formula_8 which we can solve to find formula_9, or equivalently formula_10 Since the photo-Carnot engine requires quantum coherence in the gas, which is lost during the process, rebuilding the coherence takes more energy than the machine produces. The efficiency of this reversible engine, including the coherence, must be at most the Carnot efficiency, regardless of the mechanism, and so formula_11 Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
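The adiabatic invariant formula_9 can be checked numerically. The sketch below (initial values and step size are ours, not from the article) integrates the adiabatic condition dT/dV = -T/(3V), which follows from the derivation above, and confirms that T³V stays constant:

```python
def dT_dV(V, T):
    # Adiabatic condition from the derivation: -(1/3) T dV = V dT.
    return -T / (3.0 * V)

V, T, h = 1.0, 1.0, 1e-4
invariant0 = T ** 3 * V          # T^3 V at the start of the adiabat

while V < 2.0:                   # expand from V = 1 to V = 2
    k1 = dT_dV(V, T)
    k2 = dT_dV(V + h / 2, T + h * k1 / 2)
    k3 = dT_dV(V + h / 2, T + h * k2 / 2)
    k4 = dT_dV(V + h, T + h * k3)
    T += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)   # classic RK4 step
    V += h

assert abs(T ** 3 * V - invariant0) < 1e-9       # T^3 V is conserved
```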
[ { "math_id": 0, "text": "a = \\frac {4\\sigma}{c}" }, { "math_id": 1, "text": "U = V\\varepsilon aT^{4} \\,." }, { "math_id": 2, "text": "P = \\frac{U}{3V} = \\frac{\\varepsilon aT^{4}}{3} \\,." }, { "math_id": 3, "text": "dU = dW + dQ" }, { "math_id": 4, "text": "dQ = 0" }, { "math_id": 5, "text": "dU = \\varepsilon aT^{4} dV + 4\\varepsilon aVT^{3} dT" }, { "math_id": 6, "text": "dW_V = -PdV = -\\frac{1}{3} \\varepsilon aT^{4} dV \\,." }, { "math_id": 7, "text": "dW_V = dU" }, { "math_id": 8, "text": "-\\frac{1}{3} T dV = V dT" }, { "math_id": 9, "text": "T^{3} V= \\text{const} \\," }, { "math_id": 10, "text": " PV^{4/3}=\\text{const}\\,." }, { "math_id": 11, "text": "\\eta \\le \\frac{T_H - T_C}{T_H} =1-\\frac{T_C}{T_H}\\,." } ]
https://en.wikipedia.org/wiki?curid=15069829
1507685
Darcy (unit)
The darcy (or darcy unit) and millidarcy (md or mD) are units of permeability, named after Henry Darcy. They are not SI units, but they are widely used in petroleum engineering and geology. The unit has also been used in biophysics and biomechanics, where the flow of fluids such as blood through capillary beds and cerebrospinal fluid through the brain interstitial space is being examined. A darcy has dimensional units of length². Definition. Permeability measures the ability of fluids to flow through rock (or other porous media). The darcy is defined using Darcy's law, which can be written as: formula_0 where: The darcy is referenced to a mixture of unit systems. A medium with a permeability of 1 darcy permits a flow of 1 cm³/s of a fluid with viscosity 1 cP (1 mPa·s) under a pressure gradient of 1 atm/cm acting across an area of 1 cm². Typical values of permeability range from as high as 100,000 darcys for gravel to less than 0.01 microdarcy for granite. Sand has a permeability of approximately 1 darcy. Tissue permeability, whose measurement "in vivo" is still in its infancy, is somewhere in the range of 0.01 to 100 darcy. Origin. The darcy is named after Henry Darcy. Rock permeability is usually expressed in millidarcys (md) because rocks hosting hydrocarbon or water accumulations typically exhibit permeability ranging from 5 to 500 md. The odd combination of units comes from Darcy's original studies of water flow through columns of sand. Water has a viscosity of 1.0019 cP at about room temperature. The unit abbreviation "d" is not capitalized (contrary to industry use). The American Association of Petroleum Geologists uses the following unit abbreviations and grammar in their publications: Conversions. Converted to SI units, 1 darcy is equivalent to 9.869233×10⁻¹³ m² or 0.9869233 μm². This conversion is usually approximated as 1 μm². This is the reciprocal of 1.013250—the conversion factor from atmospheres to bars.
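The SI value quoted above follows directly from the definition; the sketch below (variable names ours) plugs the defining unit values into Darcy's law rearranged for "k":

```python
# One darcy from the definition: k = Q * mu * dx / (A * dP), with
# Q = 1 cm^3/s, mu = 1 cP, A = 1 cm^2, and dP/dx = 1 atm/cm,
# all converted to SI units first.
Q = 1e-6               # m^3/s  (1 cm^3/s)
mu = 1e-3              # Pa*s   (1 cP)
A = 1e-4               # m^2    (1 cm^2)
dP_dx = 101325 / 0.01  # Pa/m   (1 atm per cm)

darcy_in_m2 = Q * mu / (A * dP_dx)
print(f"{darcy_in_m2:.6e}")  # 9.869233e-13, i.e. 0.9869233 um^2
```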
Specifically in the hydrology domain, permeability of soil or rock may also be defined as the flux of water under hydrostatic pressure (~ 0.1 bar/m) at a temperature of 20 °C. In this specific setup, 1 darcy is equivalent to 0.831 m/day. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Q = \\frac{A k\\,\\Delta P}{\\mu\\,\\Delta x}" } ]
https://en.wikipedia.org/wiki?curid=1507685
1507717
Mark and recapture
Animal population estimation method Mark and recapture is a method commonly used in ecology to estimate an animal population's size where it is impractical to count every individual. A portion of the population is captured, marked, and released. Later, another portion will be captured and the number of marked individuals within the sample is counted. Since the number of marked individuals within the second sample should be proportional to the number of marked individuals in the whole population, an estimate of the total population size can be obtained by dividing the number of marked individuals by the proportion of marked individuals in the second sample. The method assumes, rightly or wrongly, that the probability of capture is the same for all individuals. Other names for this method, or closely related methods, include capture-recapture, capture-mark-recapture, mark-recapture, sight-resight, mark-release-recapture, multiple systems estimation, band recovery, the Petersen method, and the Lincoln method. Another major application for these methods is in epidemiology, where they are used to estimate the completeness of ascertainment of disease registers. Typical applications include estimating the number of people needing particular services (e.g. services for children with learning disabilities, services for medically frail elderly living in the community), or with particular conditions (e.g. illegal drug addicts, people infected with HIV, etc.). Field work related to mark-recapture. Typically a researcher visits a study area and uses traps to capture a group of individuals alive. Each of these individuals is marked with a unique identifier (e.g., a numbered tag or band), and then is released unharmed back into the environment. A mark-recapture method was first used for ecological study in 1896 by C.G. Johannes Petersen to estimate plaice, "Pleuronectes platessa", populations. 
Sufficient time should be allowed to pass for the marked individuals to redistribute themselves among the unmarked population. Next, the researcher returns and captures another sample of individuals. Some individuals in this second sample will have been marked during the initial visit and are now known as recaptures. Other organisms captured during the second visit will not have been captured during the first visit to the study area. These unmarked animals are usually given a tag or band during the second visit and then are released. Population size can be estimated from as few as two visits to the study area. Commonly, more than two visits are made, particularly if estimates of survival or movement are desired. Regardless of the total number of visits, the researcher simply records the date of each capture of each individual. The "capture histories" generated are analyzed mathematically to estimate population size, survival, or movement. When capturing and marking organisms, ecologists need to consider the welfare of the organisms. If the chosen identifier harms the organism, then its behavior might become irregular. Notation. Let "N" = number of animals in the population, "n" = number of animals marked on the first visit, "K" = number of animals captured on the second visit, and "k" = number of recaptured animals that were marked. A biologist wants to estimate the size of a population of turtles in a lake. She captures 10 turtles on her first visit to the lake, and marks their backs with paint. A week later she returns to the lake and captures 15 turtles. Five of these 15 turtles have paint on their backs, indicating that they are recaptured animals. This example is (n, K, k) = (10, 15, 5). The problem is to estimate "N". Lincoln–Petersen estimator. The Lincoln–Petersen method (also known as the Petersen–Lincoln index or Lincoln index) can be used to estimate population size if only two visits are made to the study area.
This method assumes that the study population is "closed". In other words, the two visits to the study area are close enough in time so that no individuals die, are born, or move into or out of the study area between visits. The model also assumes that no marks fall off animals between visits to the field site by the researcher, and that the researcher correctly records all marks. Given those conditions, estimated population size is: formula_0 Derivation. It is assumed that all individuals have the same probability of being captured in the second sample, regardless of whether they were previously captured in the first sample (with only two samples, this assumption cannot be tested directly). This implies that, in the second sample, the proportion of marked individuals that are caught (formula_1) should equal the proportion of the total population that is marked (formula_2). For example, if half of the marked individuals were recaptured, it would be assumed that half of the total population was included in the second sample. In symbols, formula_3 A rearrangement of this gives formula_4 the formula used for the Lincoln–Petersen method. Sample calculation. In the example (n, K, k) = (10, 15, 5) the Lincoln–Petersen method estimates that there are 30 turtles in the lake. formula_5 Chapman estimator. The Lincoln–Petersen estimator is asymptotically unbiased as sample size approaches infinity, but is biased at small sample sizes. An alternative less biased estimator of population size is given by the Chapman estimator: formula_6 Sample calculation. The example (n, K, k) = (10, 15, 5) gives formula_7 Note that the answer provided by this equation must be truncated not rounded. Thus, the Chapman method estimates 28 turtles in the lake. Surprisingly, Chapman's estimate was one conjecture from a range of possible estimators: "In practice, the whole number immediately less than ("K"+1)("n"+1)/("k"+1) or even "Kn"/("k"+1) will be the estimate. 
The above form is more convenient for mathematical purposes." (see footnote, page 144). Chapman also found the estimator could have considerable negative bias for small "Kn"/"N" (page 146), but was unconcerned because the estimated standard deviations were large for these cases. Confidence interval. An approximate formula_8 confidence interval for the population size "N" can be obtained as: formula_9 where formula_10 corresponds to the formula_11 quantile of a standard normal random variable, and formula_12 The example ("n, K, k") = (10, 15, 5) gives the estimate "N" ≈ 30 with a 95% confidence interval of 22 to 65. It has been shown that this confidence interval has actual coverage probabilities that are close to the nominal formula_8 level even for small populations and extreme capture probabilities (near to 0 or 1), in which cases other confidence intervals fail to achieve the nominal coverage levels. Bayesian estimate. The mean value ± standard deviation is formula_13 where formula_14 for formula_15 formula_16 for formula_17 The example ("n, K, k") = (10, 15, 5) gives the estimate "N" ≈ 42 ± 21.5. Capture probability. The capture probability refers to the probability of detecting an individual animal or person of interest, and has been used in both ecology and epidemiology for detecting animal or human diseases, respectively. The capture probability is often defined as a two-variable model, in which "f" is defined as the fraction of a finite resource devoted to detecting the animal or person of interest from a high-risk sector of an animal or human population, and "q" is the frequency of time that the problem (e.g., an animal disease) occurs in the high-risk versus the low-risk sector.
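The estimators and the confidence interval above can be sketched in a few lines (function names ours); running them on the worked example ("n", "K", "k") = (10, 15, 5) reproduces the numbers quoted:

```python
import math

def lincoln_petersen(n, K, k):
    return n * K / k

def chapman(n, K, k):
    return (n + 1) * (K + 1) / (k + 1) - 1

def approx_ci(n, K, k, z=1.96):
    """Approximate 95% CI for N, using the normal approximation above."""
    sigma = math.sqrt(1 / (k + 0.5) + 1 / (K - k + 0.5)
                      + 1 / (n - k + 0.5)
                      + (k + 0.5) / ((n - k + 0.5) * (K - k + 0.5)))
    base = K + n - k - 0.5
    spread = (K - k + 0.5) * (n - k + 0.5) / (k + 0.5)
    return (base + spread * math.exp(-z * sigma),
            base + spread * math.exp(z * sigma))

n, K, k = 10, 15, 5
print(lincoln_petersen(n, K, k))   # 30.0
print(chapman(n, K, k))            # 28.33..., truncated to 28 turtles
lo, hi = approx_ci(n, K, k)
print(round(lo), round(hi))        # 22 65
```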
For example, an application of the model in the 1920s was to detect typhoid carriers in London, who were either arriving from zones with high rates of typhoid (probability "q" that a passenger with the disease came from such an area, where "q"&gt;0.5), or low rates (probability 1−"q"). It was posited that only 5 out of 100 of the travelers could be detected, and 10 out of 100 were from the high-risk area. Then the capture probability "P" was defined as: formula_18 where the first term refers to the probability of detection (capture probability) in a high-risk zone, and the latter term refers to the probability of detection in a low-risk zone. Importantly, the formula can be rewritten as a linear equation in terms of "f": formula_19 Because this is a linear function, it follows that for certain values of "q" for which the slope of this line (the first term multiplied by "f") is positive, all of the detection resource should be devoted to the high-risk population ("f" should be set to 1 to maximize the capture probability), whereas for other values of "q", for which the slope of the line is negative, all of the detection should be devoted to the low-risk population ("f" should be set to 0). We can solve the above equation for the values of "q" for which the slope will be positive to determine the values for which "f" should be set to 1 to maximize the capture probability: formula_20 which simplifies to: formula_21 This is an example of linear optimization. In more complex cases, where more than one resource "f" is devoted to more than two areas, multivariate optimization is often used, through the simplex algorithm or its derivatives. More than two visits. The literature on the analysis of capture-recapture studies has blossomed since the early 1990s. There are very elaborate statistical models available for the analysis of these experiments.
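Returning to the capture-probability model above, the threshold formula_21 can be checked by comparing the two endpoints of the linear function in "f" (a brute-force sketch; function names ours):

```python
def capture_prob(f, q):
    # P = (5/10) f q + (5/90) (1 - f)(1 - q), as defined above.
    return (5 / 10) * f * q + (5 / 90) * (1 - f) * (1 - q)

def best_f(q):
    """A linear function in f is maximised at an endpoint of [0, 1]."""
    return 1.0 if capture_prob(1.0, q) >= capture_prob(0.0, q) else 0.0

# All detection goes to the high-risk sector exactly when q > 1/10.
assert best_f(0.05) == 0.0   # below the threshold: f = 0
assert best_f(0.20) == 1.0   # above the threshold: f = 1
```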
A simple model which easily accommodates the three-source, or three-visit, study is to fit a Poisson regression model. Sophisticated mark-recapture models can be fit with several packages for the open-source R programming language. These include "Spatially Explicit Capture-Recapture (secr)", "Loglinear Models for Capture-Recapture Experiments (Rcapture)", and "Mark-Recapture Distance Sampling (mrds)". Such models can also be fit with specialized programs such as MARK or E-SURGE. Other related methods which are often used include the Jolly–Seber model (used in open populations and for multiple census estimates) and Schnabel estimators (an expansion to the Lincoln–Petersen method for closed populations). These are described in detail by Sutherland. Integrated approaches. Modelling mark-recapture data is trending towards a more integrative approach, which combines mark-recapture data with population dynamics models and other types of data. The integrated approach is more computationally demanding, but extracts more information from the data, improving parameter and uncertainty estimates. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\hat{N} = \\frac{nK}{k}," }, { "math_id": 1, "text": "k/K" }, { "math_id": 2, "text": "n/N" }, { "math_id": 3, "text": "\\frac{k}{K} = \\frac{n}{N}." }, { "math_id": 4, "text": "\\hat{N}=\\frac{nK}{k}, " }, { "math_id": 5, "text": "\\hat{N} = \\frac{nK}{k} = \\frac{10\\times 15}{5} = 30" }, { "math_id": 6, "text": "\\hat{N}_C = \\frac{(n+1)(K+1)}{k+1} - 1" }, { "math_id": 7, "text": "\\hat{N}_C = \\frac{(n+1)(K+1)}{k+1} -1= \\frac{11\\times 16}{6}-1 = 28.3" }, { "math_id": 8, "text": "100(1-\\alpha)\\%" }, { "math_id": 9, "text": "K + n - k - 0.5 + \\frac{(K-k+0.5)(n-k+0.5)}{(k+0.5)} \\exp(\\pm z_{\\alpha/2}\\hat{\\sigma}_{0.5}) ," }, { "math_id": 10, "text": "z_{\\alpha/2}" }, { "math_id": 11, "text": "1-\\alpha/2" }, { "math_id": 12, "text": "\\hat{\\sigma}_{0.5} = \\sqrt{\\frac{1}{k+0.5}+\\frac{1}{K-k+0.5}+\\frac{1}{n-k+0.5} + \\frac{k+0.5}{(n-k+0.5)(K-k+0.5)}}." }, { "math_id": 13, "text": "N\\approx \\mu\\pm\\sqrt{\\mu \\epsilon}" }, { "math_id": 14, "text": "\\mu=\\frac{(n-1)(K-1)}{k-2}" }, { "math_id": 15, "text": "k>2" }, { "math_id": 16, "text": "\\epsilon=\\frac{(n-k+1)(K-k+1)}{(k-2)(k-3)} " }, { "math_id": 17, "text": "k>3" }, { "math_id": 18, "text": "P = \\frac{5}{10}fq+\\frac{5}{90}(1-f)(1-q), " }, { "math_id": 19, "text": "P = \\left(\\frac{5}{10}q-\\frac{5}{90}(1-q)\\right)f + \\frac{5}{90}(1-q)." }, { "math_id": 20, "text": "\\left( \\frac{5}{10} q - \\frac{5}{90}(1-q) \\right) > 0, " }, { "math_id": 21, "text": "q > \\frac{1}{10}. " } ]
https://en.wikipedia.org/wiki?curid=1507717
15080345
Coase conjecture
Monopoly pricing model The Coase conjecture, developed first by Ronald Coase, is an argument in monopoly theory. The conjecture sets up a situation in which a monopolist sells a durable good to a market where resale is impossible and faces consumers who have different valuations. The conjecture proposes that a monopolist that does not know individuals' valuations will have to sell its product at a low price if the monopolist tries to separate consumers by offering different prices in different periods. This is because the monopolist is, in effect, in price competition with itself over several periods, and the consumer with the highest valuation, if he is patient enough, can simply wait for the lowest price. Thus the monopolist will have to offer a competitive price in the first period, which will be low. The conjecture holds only when there is an infinite time horizon, as otherwise a possible action for the monopolist would be to announce a very high price until the second-to-last period, and then sell at the static monopoly price in the last period. The monopolist could avoid this problem by committing to a stable linear pricing strategy or adopting other business strategies. Simple two-consumer model. Imagine there are two consumers, called formula_0 and formula_1, with valuations of the good formula_2 and formula_3 respectively. The valuations are such that formula_4. The monopolist cannot directly identify individual consumers but knows that there are two different valuations of the good. The good being sold is durable so that once a consumer buys it, the consumer will still have it in all subsequent periods. This means that after the monopolist has sold to all consumers, there can be no further sales. Also assume that production is such that average cost and marginal cost are both equal to zero. The monopolist could try to charge formula_5 in the first period and then formula_6 in the second period, hence price discriminating.
This will not result in consumer formula_1 buying in the first period because, by waiting, she could get a price equal to formula_2. To make consumer formula_1 indifferent between buying in the first period and buying in the second, the monopolist will have to charge a price of formula_7, where formula_8 is a discount factor between 0 and 1. This price is such that formula_9. Hence, by waiting, formula_1 forces the monopolist to compete on price with its future self. "n" consumers. Imagine there are formula_10 consumers with valuations ranging from formula_3 down to a valuation just above zero. The monopolist will want to sell to the consumer with the lowest valuation, because production is costless and, by charging a price just above zero, it still makes a profit. Hence, to separate the consumers, the monopolist will charge the first consumer formula_11, where formula_10 is the number of consumers. If the discount factor is high enough, this price will be close to zero. Hence the conjecture is proved.
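The two prices derived above are easy to evaluate numerically. The following sketch is illustrative only; the function names and the sample valuations and discount factors are our own, chosen to satisfy the model's assumptions:

```python
def indifference_price(x, y, d):
    """First-period price that leaves the high-valuation consumer (valuation y)
    indifferent between buying now and waiting one period for the price x."""
    return d * x + (1 - d) * y

def opening_price(y, d, n):
    """First-period price with n consumers: (1 - d**n) * y."""
    return (1 - d ** n) * y

# Two consumers with valuations x = 1.0 and y = 1.5 (so x < y < 2x),
# and a discount factor d = 0.9:
p = indifference_price(1.0, 1.5, 0.9)
print(p)   # about 1.05, strictly below y = 1.5

# With n consumers, the more patient the market (d closer to 1),
# the closer the opening price gets to zero -- the Coase conjecture:
for d in (0.5, 0.9, 0.999):
    print(d, opening_price(1.5, d, n=100))
```

Raising the discount factor shrinks the opening price toward zero, which is the conjecture's point: patience turns the monopolist's future self into a competitor.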
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "y" }, { "math_id": 4, "text": "x<y<2x" }, { "math_id": 5, "text": "\\text{price} = y" }, { "math_id": 6, "text": "\\text{price} =x " }, { "math_id": 7, "text": "\\text{price} = dx +(1-d)y" }, { "math_id": 8, "text": "d" }, { "math_id": 9, "text": "dx + (1-d)y < y" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "(1-d^n)y" } ]
https://en.wikipedia.org/wiki?curid=15080345
15081922
Rice–Shapiro theorem
Generalization of Rice's theorem In computability theory, the Rice–Shapiro theorem is a generalization of Rice's theorem, named after Henry Gordon Rice and Norman Shapiro. It states that when a semi-decidable property of partial computable functions is true on a certain partial function, one can extract a "finite" subfunction such that the property is still true. The informal idea of the theorem is that the "only general way" to obtain information on the behavior of a program is to run the program, and because a computation is finite, one can only try the program on a finite number of inputs. A closely related theorem is the Kreisel-Lacombe-Shoenfield-Tseitin theorem, which was obtained independently by Georg Kreisel, Daniel Lacombe and Joseph R. Shoenfield, and by Grigori Tseitin. Formal statement. Rice-Shapiro theorem. Let "P" be a set of partial computable functions such that the index set of "P" (i.e., the set of indices "e" such that "φ""e" ∈ "P", for some fixed admissible numbering "φ") is semi-decidable. Then for any partial computable function "f", it holds that "P" contains "f" if and only if "P" contains a "finite" subfunction of "f" (i.e., a partial function defined in finitely many points, which takes the same values as "f" on those points). Kreisel-Lacombe-Shoenfield-Tseitin theorem. Let "P" be a set of total computable functions such that the index set of "P" is decidable with a promise that the input is the index of a total computable function (i.e., there is a partial computable function "D" which, given an index "e" such that "φ""e" is total, returns 1 if "φ""e" ∈ "P" and 0 otherwise; "D"("e") need not be defined if "φ""e" is not total). We say that two total functions "f" and "g" agree until "n" if for all "k" ≤ "n" it holds that "f"("k") = "g"("k"). Then for any total computable function "f", there exists "n" such that for every total computable function "g" which agrees with "f" until "n", "f" ∈ "P" ⟺ "g" ∈ "P". Discussion.
The two theorems are closely related, and also relate to Rice's theorem. Specifically: Examples. By the Rice-Shapiro theorem, it is neither semi-decidable nor co-semi-decidable whether a given program: By the Kreisel-Lacombe-Shoenfield-Tseitin theorem, it is undecidable whether a given program "which is assumed to always terminate": Proof of the Rice-Shapiro theorem. Let "P" be a set of partial computable functions with semi-decidable index set. Upward closedness. We first prove that "P" is an upward closed set, i.e., if "f" ⊆ "g" and "f" ∈ "P", then "g" ∈ "P" (here, "f" ⊆ "g" means that "f" is a subfunction of "g", i.e., the graph of "f" is contained in the graph of "g"). The proof uses a diagonal argument typical of theorems in computability. Assume for contradiction that there are two functions "f" and "g" such that "f" ∈ "P", "g" ∉ "P" and "f" ⊆ "g". We build a program "p" as follows. This program takes an input "x". Using a standard dovetailing technique, "p" runs two tasks in parallel. We distinguish two cases. Extracting a finite subfunction. Next, we prove that if "P" contains a partial computable function "f", then it contains a finite subfunction of "f". Let us fix a partial computable function "f" in "P". We build a program "p" which takes input "x" and runs the following steps: Suppose that "φ""p" ∉ "P". This implies that the semi-algorithm for semi-deciding "P" used in the first step never returns true. Then, "p" computes "f", and this contradicts the assumption "f" ∈ "P". Thus, we must have "φ""p" ∈ "P", and the algorithm for semi-deciding "P" returns true on "p" after a certain number of steps "n". The partial function "φ""p" can only be defined on inputs "x" such that "x" ≤ "n", and it returns "f"("x") on such inputs, thus it is a finite subfunction of "f" that belongs to "P". Conclusion. It only remains to assemble the two parts of the proof. 
If "P" contains a partial computable function "f", then it contains a finite subfunction of "f" by the second part, and conversely, if it contains a finite subfunction of "f", then it contains "f", because it is upward closed by the first part. Thus, the theorem is proved. Proof of the Kreisel-Lacombe-Shoenfield-Tseitin theorem. Preliminaries. A total function formula_0 is said to be ultimately zero if it always takes the value zero except for a finite number of points, i.e., there exists "N" such that for all "n" ≥ "N", "h"("n") = 0. Note that such a function is always computable (it can be computed by simply checking if the input is in a certain predefined list, and otherwise returning zero). We fix "U", a computable enumeration of all total functions which are ultimately zero; that is, "U" is such that: We can build "U" by standard techniques (e.g., for increasing "N", enumerate ultimately zero functions which are bounded by "N" and zero on inputs larger than "N"). Approximating by ultimately zero functions. Let "P" be as in the statement of the theorem: a set of total computable functions such that there is an algorithm which, given an index "e" and a promise that "φ""e" is total, decides whether "φ""e" ∈ "P". We first prove a lemma: For every total computable function "f" and every integer "N", there exists an ultimately zero function "h" such that "h" agrees with "f" until "N", and "f" ∈ "P" ⟺ "h" ∈ "P". To prove this lemma, fix a total computable function "f" and an integer "N", and let "B" be the boolean "f" ∈ "P". Build a program "p" which takes input "x" and runs the following steps: Clearly, "p" always terminates, i.e., "φ""p" is total. Therefore, the promise to "P" run on "p" is fulfilled. Suppose for contradiction that one of "f" and "φ""p" belongs to "P" and the other does not, i.e., ("φ""p" ∈ "P") ≠ "B". Then we see that "p" computes "f", since "P" does not return "B" on "p" no matter the number of steps.
Thus, we have "f" = "φ""p", contradicting the fact that one of "f" and "φ""p" belongs to "P" and the other does not. This argument proves that "f" ∈ "P" ⟺ "φ""p" ∈ "P". Then, the second step makes "p" return zero for sufficiently large "x", thus "φ""p" is ultimately zero; and by construction (due to the first step), "φ""p" agrees with "f" until "N". Therefore, we can take "h" = "φ""p" and the lemma is proved. Main proof. With the previous lemma, we can now prove the Kreisel-Lacombe-Shoenfield-Tseitin theorem. Again, fix "P" as in the theorem statement, let "f" be a total computable function, and let "B" be the boolean "f" ∈ "P". Build the program "p" which takes input "x" and runs these steps: We first prove that "P" returns "B" on "p". Suppose by contradiction that this is not the case ("P" returns ¬"B", or "P" does not terminate). Then "p" actually computes "f". In particular, "φ""p" is total, so the promise to "P" when run on "p" is fulfilled, and "P" returns the boolean "φ""p" ∈ "P", which is "f" ∈ "P", i.e., "B", contradicting the assumption. Let "n" be the number of steps that "P" takes to return "B" on "p". We claim that "n" satisfies the conclusion of the theorem: for every total computable function "g" which agrees with "f" until "n", it holds that "f" ∈ "P" ⟺ "g" ∈ "P". Assume by contradiction that there exists a total computable function "g" which agrees with "f" until "n" and such that ("g" ∈ "P") ≠ "B". Applying the lemma again, there exists "k" such that "U"("k") agrees with "g" until "n" and "g" ∈ "P" ⟺ "U"("k") ∈ "P". For such "k", "U"("k") agrees with "g" until "n" and "g" agrees with "f" until "n", thus "U"("k") also agrees with "f" until "n", and since ("g" ∈ "P") ≠ "B" and "g" ∈ "P" ⟺ "U"("k") ∈ "P", we have ("U"("k") ∈ "P") ≠ "B". Therefore, "U"("k") satisfies the conditions of the parallel search step in the program "p", namely: "U"("k") agrees with "f" until "n" and ("U"("k") ∈ "P") ≠ "B". This proves that the search in the second step always terminates.
We fix "k" to be the value that it finds. We observe that "φ""p" = "U"("k"). Indeed, either the second step of "p" returns "U"("k")("x"), or the third step returns "f"("x"), but the latter case only happens for "x" ≤ "n", and we know that "U"("k") agrees with "f" until "n". In particular, "φ""p" = "U"("k") is total. This makes the promise to "P" run on "p" fulfilled, therefore "P" returns "φ""p" ∈ "P" on "p". We have found a contradiction: on the one hand, the boolean "φ""p" ∈ "P" is the return value of "P" on "p", which is "B", and on the other hand, we have "φ""p" = "U"("k"), and we know that ("U"("k") ∈ "P") ≠ "B". Perspective from effective topology. For any finite unary function formula_1 on integers, let formula_2 denote the 'frustum' of all partial-recursive functions that are defined, and agree with formula_1, on formula_1's domain. Equip the set of all partial-recursive functions with the topology generated by these frusta as base. Note that for every frustum formula_3, the index set formula_4 is recursively enumerable. More generally, it holds for every set formula_5 of partial-recursive functions: formula_6 is recursively enumerable iff formula_5 is a recursively enumerable union of frusta. Applications. The Kreisel-Lacombe-Shoenfield-Tseitin theorem has been applied to foundational problems in computational social choice (more broadly, algorithmic game theory). For instance, Kumabe and Mihara apply this result to an investigation of the Nakamura numbers for simple games in cooperative game theory and social choice theory.
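The dovetailing technique invoked repeatedly in the proofs above, namely running several possibly non-terminating computations by interleaving their steps, can be sketched in Python. This is a toy model only; representing tasks as generators that yield None while still working is our own device:

```python
from itertools import count

def dovetail(*tasks):
    """Interleave several (possibly non-terminating) computations, given as
    generators that yield None while working and finally yield a result.
    Returns (index, result) for the first task that finishes."""
    gens = list(tasks)
    for _ in count():
        for i, g in enumerate(gens):
            out = next(g)
            if out is not None:
                return i, out

def slow(n, value):
    """A task that 'computes' for n steps before producing value."""
    for _ in range(n):
        yield None
    yield value

def forever():
    """A task that never terminates."""
    while True:
        yield None

print(dovetail(forever(), slow(5, "done")))  # (1, 'done')
```

Because every task is stepped one instruction at a time, a diverging task cannot block the others; this is why the proofs can run a semi-algorithm for "P" alongside a direct computation of "f".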
[ { "math_id": 0, "text": "h : \\mathbb{N} \\rarr \\mathbb{N}" }, { "math_id": 1, "text": "\\theta" }, { "math_id": 2, "text": "C(\\theta)" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "Ix(C)" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "Ix(A)" } ]
https://en.wikipedia.org/wiki?curid=15081922
1508379
White's law
White's law, named after Leslie White and published in 1943, states that, other factors remaining constant, "culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased". Description. White spoke of culture as a general human phenomenon and claimed not to speak of 'cultures' in the plural. His theory, published in 1959 in "The Evolution of Culture: The Development of Civilization to the Fall of Rome", rekindled the interest in social evolutionism and is counted prominently among the neoevolutionists. He believed that culture – meaning the sum total of all human cultural activity on the planet – was evolving. White differentiated between three components of culture: Argument synopsis. White's materialist approach is evident in the following quote: "man as an animal species, and consequently culture as a whole, is dependent upon the material, mechanical means of adjustment to the natural environment". This technological component can be described as material, mechanical, physical and chemical instruments, as well as the way people use these techniques. White's argument on the importance of technology goes as follows: For White "the primary function of culture" and the one that determines its level of advancement is its ability to "harness and control energy". White's law states that the measure by which to judge the relative degree of evolvedness of culture was the amount of energy it could capture (energy consumption). White differentiates between five stages of human development. In the first, people use energy of their own muscles. In the second, they use the energy of domesticated animals. In the third, they use the energy of plants (so White refers to agricultural revolution here). In the fourth, they learn to use the energy of natural resources: coal, oil, gas. In the fifth, they harness nuclear energy. White's energy formula. 
White introduced a formula: formula_0 where "E" is a measure of the energy consumed per capita per year, "T" is a measure of the efficiency of the technical factors utilising the energy, and "C" represents the degree of cultural development. In his own words: "the basic law of cultural evolution" was "culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased." Therefore "we find that progress and development are affected by the improvement of the mechanical means with which energy is harnessed and put to work as well as by increasing the amounts of energy employed". Although White stops short of promising that technology is the panacea for all the problems that affect mankind, like technological utopians do, his theory treats the technological factor as the most important factor in the evolution of society, and is similar to the later works of Gerhard Lenski, to the Kardashev scale of the Russian astronomer Nikolai Kardashev, and to some notions of technological singularity. Earlier research. In 1915, the geographer James Fairgrieve outlined a similar law of history. In its widest sense on its material side, he wrote, history is the story of man's increasing ability to control energy. By energy he meant the capacity for doing work, for causing—not controlling—movement of men and machines. Man's life is taken up by the one endeavor to harness as much energy as possible and to waste as little as possible. Any means by which he can harness more or waste less marks an advance and an important event in world history. Inventions mark stages of progress. The forthcoming League of Nations, he believed in the first year of World War I, would be another stage of progress in saving energy, as it would save the energy wasted in wars. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
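The formula itself is simple arithmetic; as a toy illustration (all numbers below are invented, purely to show the proportionality):

```python
def cultural_development(energy_per_capita, efficiency):
    """White's law: C = E * T."""
    return energy_per_capita * efficiency

# Doubling either the energy harnessed or the efficiency doubles C:
base = cultural_development(10.0, 0.5)
assert cultural_development(20.0, 0.5) == 2 * base
assert cultural_development(10.0, 1.0) == 2 * base
```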
[ { "math_id": 0, "text": " \nC = ET, \n" } ]
https://en.wikipedia.org/wiki?curid=1508379
1508434
Coplanarity
Geometric property of objects being in the same plane In geometry, a set of points in space are coplanar if there exists a geometric plane that contains them all. For example, three points are always coplanar, and if the points are distinct and non-collinear, the plane they determine is unique. However, a set of four or more distinct points will, in general, not lie in a single plane. Two lines in three-dimensional space are coplanar if there is a plane that includes them both. This occurs if the lines are parallel, or if they intersect each other. Two lines that are not coplanar are called skew lines. Distance geometry provides a solution technique for the problem of determining whether a set of points is coplanar, knowing only the distances between them. Properties in three dimensions. In three-dimensional space, two linearly independent vectors with the same initial point determine a plane through that point. Their cross product is a normal vector to that plane, and any vector orthogonal to this cross product through the initial point will lie in the plane. This leads to the following coplanarity test using a scalar triple product: Four distinct points, "x"1, "x"2, "x"3, "x"4, are coplanar if and only if, formula_0 which is also equivalent to formula_1 If three vectors a, b, c are coplanar, then if a ⋅ b = 0 (i.e., a and b are orthogonal) then formula_2 where "â" and "b̂" denote the unit vectors in the directions of a and b. That is, the vector projections of c on a and c on b add to give the original c. Coplanarity of points in "n" dimensions whose coordinates are given. Since three or fewer points are always coplanar, the problem of determining when a set of points are coplanar is generally of interest only when there are at least four points involved.
In the case that there are exactly four points, several "ad hoc" methods can be employed, but a general method that works for any number of points uses vector methods and the property that a plane is determined by two linearly independent vectors. In an n-dimensional space where "n" ≥ 3, a set of k points formula_3 are coplanar if and only if the matrix of their relative differences, that is, the matrix whose columns (or rows) are the vectors formula_4 is of rank 2 or less. For example, given four points formula_5 if the matrix formula_6 is of rank 2 or less, the four points are coplanar. In the special case of a plane that contains the origin, the property can be simplified in the following way: A set of k points and the origin are coplanar if and only if the matrix of the coordinates of the k points is of rank 2 or less. Geometric shapes. A skew polygon is a polygon whose vertices are not coplanar. Such a polygon must have at least four vertices; there are no skew triangles. A polyhedron that has positive volume has vertices that are not all coplanar. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
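Both the scalar-triple-product test (for four points of R^3) and the general rank test can be checked with a short program. This is a sketch; the helper names and the sample points are our own:

```python
def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def coplanar4(x1, x2, x3, x4, tol=1e-9):
    """Four points of R^3 are coplanar iff [(x2-x1) x (x4-x1)] . (x3-x1) = 0."""
    return abs(dot(cross(sub(x2, x1), sub(x4, x1)), sub(x3, x1))) < tol

def rank(rows, tol=1e-9):
    """Rank of a small dense matrix, by Gaussian elimination with partial pivoting."""
    m = [list(map(float, row)) for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        if r == len(m):
            break
        piv = max(range(r, len(m)), key=lambda i: abs(m[i][c]))
        if abs(m[piv][c]) < tol:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def coplanar(points, tol=1e-9):
    """Any number of points in R^n: differences from the first point have rank <= 2."""
    diffs = [sub(p, points[0]) for p in points[1:]]
    return rank(diffs, tol) <= 2

square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]   # lies in the plane z = 0
tetra = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1)]    # vertices of a tetrahedron
print(coplanar4(*square), coplanar(square))  # True True
print(coplanar4(*tetra), coplanar(tetra))    # False False
```

For four points in three dimensions the two tests agree; the rank test additionally covers any number of points and any ambient dimension.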
[ { "math_id": 0, "text": "[(x_2 - x_1) \\times (x_4 - x_1)] \\cdot (x_3 - x_1) = 0." }, { "math_id": 1, "text": "(x_2 - x_1) \\cdot [(x_4 - x_1) \\times (x_3 - x_1)] = 0." }, { "math_id": 2, "text": "(\\mathbf{c}\\cdot\\mathbf{\\hat a})\\mathbf{\\hat a} + (\\mathbf{c}\\cdot\\mathbf{\\hat b})\\mathbf{\\hat b} = \\mathbf{c}, " }, { "math_id": 3, "text": "\\{p_0,\\ p_1,\\ \\dots,\\ p_{k-1}\\}" }, { "math_id": 4, "text": "\\overrightarrow{p_0 p_1},\\ \\overrightarrow{p_0 p_2},\\ \\dots,\\ \\overrightarrow{p_0 p_{k-1}} " }, { "math_id": 5, "text": "\\begin{align}\nX &= (x_1, x_2, \\dots, x_n), \\\\\nY &= (y_1, y_2, \\dots, y_n), \\\\\nZ &= (z_1, z_2, \\dots, z_n), \\\\\nW &= (w_1, w_2, \\dots, w_n), \n\\end{align}" }, { "math_id": 6, "text": "\\begin{bmatrix}\nx_1 - w_1 & x_2 - w_2 & \\dots & x_n - w_n \\\\\ny_1 - w_1 & y_2 - w_2 & \\dots & y_n - w_n \\\\\nz_1 - w_1 & z_2 - w_2 & \\dots & z_n - w_n \\\\\n\\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=1508434
1508442
Logarithmically concave function
Type of mathematical function In convex analysis, a non-negative function "f" : R"n" → R+ is logarithmically concave (or log-concave for short) if its domain is a convex set, and if it satisfies the inequality formula_0 for all "x","y" ∈ dom "f" and 0 &lt; "θ" &lt; 1. If "f" is strictly positive, this is equivalent to saying that the logarithm of the function, log ∘ "f", is concave; that is, formula_1 for all "x","y" ∈ dom "f" and 0 &lt; "θ" &lt; 1. Examples of log-concave functions are the 0-1 indicator functions of convex sets (which requires the more flexible definition), and the Gaussian function. Similarly, a function is "log-convex" if it satisfies the reverse inequality formula_2 for all "x","y" ∈ dom "f" and 0 &lt; "θ" &lt; 1. Properties. Every concave function that is nonnegative on its domain is log-concave. However, the converse does not necessarily hold: the Gaussian function "f"("x") = exp(−"x"²/2) is log-concave, since log "f"("x") = −"x"²/2 is concave, but "f" itself is not concave, since formula_3 A log-concave function is also quasi-concave, so concavity formula_4 log-concavity formula_4 quasi-concavity. A twice differentiable, nonnegative function with a convex domain is log-concave if and only if formula_5, i.e. formula_6 is negative semi-definite. For functions of one variable, this condition simplifies to formula_7 Operations preserving log-concavity. The product of log-concave functions is log-concave: if "f" and "g" are log-concave, then formula_8 is concave, and hence also "f" "g" is log-concave. Marginals preserve log-concavity: if "f"("x","y") is log-concave, then formula_9 is log-concave (see Prékopa–Leindler inequality). As a consequence, convolution preserves log-concavity, since "h"("x","y") = "f"("x" − "y")"g"("y") is log-concave when "f" and "g" are, and therefore formula_10 is log-concave. Log-concave distributions. Log-concave distributions are necessary for a number of algorithms, e.g. adaptive rejection sampling. Every distribution with log-concave density is a maximum entropy probability distribution with specified mean "μ" and deviation risk measure "D". As it happens, many common probability distributions are log-concave. Some examples: Note that all of the parameter restrictions have the same basic source: the exponent of a non-negative quantity must be non-negative in order for the function to be log-concave. The following distributions are non-log-concave for all parameters: Note that the cumulative distribution function (CDF) of all log-concave distributions is also log-concave. However, some non-log-concave distributions also have log-concave CDF's: The following are among the properties of log-concave distributions: formula_11 which is decreasing as it is the derivative of a concave function. Notes.
&lt;templatestyles src="Reflist/styles.css" /&gt;
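The one-variable second-order condition (f·f″ ≤ (f′)²) can be verified directly for the Gaussian example, whose derivatives are available in closed form (a small numerical sketch; the sample points are arbitrary):

```python
import math

def f(x):   return math.exp(-x * x / 2)   # Gaussian
def fp(x):  return -x * f(x)              # first derivative
def fpp(x): return (x * x - 1) * f(x)     # second derivative

# Log-concavity test for one variable: f(x) * f''(x) <= f'(x)**2 everywhere.
# For the Gaussian, f*f'' - (f')**2 = -f**2 < 0, so the test always passes,
# even though f itself is not concave (f'' > 0 for |x| > 1):
for x in (-3, -1.5, 0, 0.5, 2, 4):
    assert f(x) * fpp(x) <= fp(x) ** 2
assert fpp(2) > 0   # f is not concave at x = 2
print("log-concave at all sampled points")
```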
[ { "math_id": 0, "text": "\n f(\\theta x + (1 - \\theta) y) \\geq f(x)^{\\theta} f(y)^{1 - \\theta}\n " }, { "math_id": 1, "text": "\n \\log f(\\theta x + (1 - \\theta) y) \\geq \\theta \\log f(x) + (1-\\theta) \\log f(y)\n " }, { "math_id": 2, "text": "\n f(\\theta x + (1 - \\theta) y) \\leq f(x)^{\\theta} f(y)^{1 - \\theta}\n " }, { "math_id": 3, "text": "f''(x)=e^{-\\frac{x^2}{2}} (x^2-1) \\nleq 0" }, { "math_id": 4, "text": "\\Rightarrow" }, { "math_id": 5, "text": "f(x)\\nabla^2f(x) \\preceq \\nabla f(x)\\nabla f(x)^T" }, { "math_id": 6, "text": "f(x)\\nabla^2f(x) - \\nabla f(x)\\nabla f(x)^T" }, { "math_id": 7, "text": "f(x)f''(x) \\leq (f'(x))^2" }, { "math_id": 8, "text": "\\log\\,f(x) + \\log\\,g(x) = \\log(f(x)g(x))" }, { "math_id": 9, "text": "g(x)=\\int f(x,y) dy" }, { "math_id": 10, "text": "(f*g)(x)=\\int f(x-y)g(y) dy = \\int h(x,y) dy" }, { "math_id": 11, "text": "\\frac{d}{dx}\\log\\left(1-F(x)\\right) = -\\frac{f(x)}{1-F(x)}" } ]
https://en.wikipedia.org/wiki?curid=1508442
1508507
Vector projection
Concept in linear algebra The vector projection (also known as the vector component or vector resolution) of a vector a on (or onto) a nonzero vector b is the orthogonal projection of a onto a straight line parallel to b. The projection of a onto b is often written as formula_0 or a∥b. The vector component or vector resolute of a perpendicular to b, sometimes also called the vector rejection of a "from" b (denoted formula_1 or a⊥b), is the orthogonal projection of a onto the plane (or, in general, hyperplane) that is orthogonal to b. Since both formula_2 and formula_1 are vectors, and their sum is equal to a, the rejection of a from b is given by: formula_3 To simplify notation, this article defines formula_4 and formula_5 Thus, the vector formula_6 is parallel to formula_7 the vector formula_8 is orthogonal to formula_7 and formula_9 The projection of a onto b can be decomposed into a direction and a scalar magnitude by writing it as formula_10 where formula_11 is a scalar, called the "scalar projection" of a onto b, and b̂ is the unit vector in the direction of b. The scalar projection is defined as formula_12 where the operator ⋅ denotes a dot product, ‖a‖ is the length of a, and "θ" is the angle between a and b. The scalar projection is equal in absolute value to the length of the vector projection, with a minus sign if the direction of the projection is opposite to the direction of b, that is, if the angle between the vectors is more than 90 degrees. The vector projection can be calculated using the dot product of formula_13 and formula_14 as: formula_15 Notation. This article uses the convention that vectors are denoted in a bold font (e.g. a1), and scalars are written in normal font (e.g. "a"1). The dot product of vectors a and b is written as formula_16, the norm of a is written ‖a‖, the angle between a and b is denoted "θ". Definitions based on angle "θ". Scalar projection. 
The scalar projection of a on b is a scalar equal to formula_17 where "θ" is the angle between a and b. A scalar projection can be used as a scale factor to compute the corresponding vector projection. Vector projection. The vector projection of a on b is a vector whose magnitude is the scalar projection of a on b, with the same direction as b. Namely, it is defined as formula_18 where formula_11 is the corresponding scalar projection, as defined above, and formula_19 is the unit vector with the same direction as b: formula_20 Vector rejection. By definition, the vector rejection of a on b is: formula_21 Hence, formula_22 Definitions in terms of a and b. When θ is not known, the cosine of θ can be computed in terms of a and b, by the following property of the dot product a ⋅ b formula_23 Scalar projection. By the above-mentioned property of the dot product, the definition of the scalar projection becomes: formula_24 In two dimensions, this becomes formula_25 Vector projection. Similarly, the definition of the vector projection of a onto b becomes: formula_26 which is equivalent to either formula_27 or formula_28 Scalar rejection. In two dimensions, the scalar rejection is equivalent to the projection of a onto formula_29, which is formula_30 rotated 90° to the left. Hence, formula_31 Such a dot product is called the "perp dot product." Vector rejection. By definition, formula_32 Hence, formula_33 Using the scalar rejection expressed with the perp dot product, this gives formula_34 Properties. Scalar projection. The scalar projection of a on b is a scalar which has a negative sign if 90 degrees &lt; "θ" ≤ 180 degrees. It coincides with the length of the vector projection if the angle is smaller than 90°. More exactly: Vector projection. The vector projection of a on b is a vector a1 which is either null or parallel to b. More exactly: Vector rejection. The vector rejection of a on b is a vector a2 which is either null or orthogonal to b.
More exactly: Matrix representation. The orthogonal projection can be represented by a projection matrix. To project a vector onto the unit vector a = ("ax, ay, az"), it would need to be multiplied with this projection matrix: formula_35 Uses. The vector projection is an important operation in the Gram–Schmidt orthonormalization of vector space bases. It is also used in the separating axis theorem to detect whether two convex shapes intersect. Generalizations. Since the notions of vector length and angle between vectors can be generalized to any "n"-dimensional inner product space, this is also true for the notions of orthogonal projection of a vector, projection of a vector onto another, and rejection of a vector from another. In some cases, the inner product coincides with the dot product. Whenever they don't coincide, the inner product is used instead of the dot product in the formal definitions of projection and rejection. For a three-dimensional inner product space, the notions of projection of a vector onto another and rejection of a vector from another can be generalized to the notions of projection of a vector onto a plane, and rejection of a vector from a plane. The projection of a vector on a plane is its orthogonal projection on that plane. The rejection of a vector from a plane is its orthogonal projection on a straight line which is orthogonal to that plane. Both are vectors. The first is parallel to the plane, the second is orthogonal. For a given vector and plane, the sum of projection and rejection is equal to the original vector. Similarly, for inner product spaces with more than three dimensions, the notions of projection onto a vector and rejection from a vector can be generalized to the notions of projection onto a hyperplane, and rejection from a hyperplane. In geometric algebra, they can be further generalized to the notions of projection and rejection of a general multivector onto/from any invertible "k"-blade. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
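The projection and rejection formulas above can be checked with a few lines of code (a sketch; the helper names and sample vectors are our own):

```python
def dot(u, v):   return sum(a * b for a, b in zip(u, v))
def scale(c, v): return tuple(c * a for a in v)
def sub(u, v):   return tuple(a - b for a, b in zip(u, v))

def project(a, b):
    """Vector projection of a onto b: (a . b / b . b) b."""
    return scale(dot(a, b) / dot(b, b), b)

def reject(a, b):
    """Vector rejection of a from b: a - proj_b(a)."""
    return sub(a, project(a, b))

a, b = (2.0, 3.0, 1.0), (1.0, 0.0, 0.0)
p, r = project(a, b), reject(a, b)
print(p)   # (2.0, 0.0, 0.0): the component of a along b
print(r)   # (0.0, 3.0, 1.0): the component of a orthogonal to b
assert dot(r, b) == 0.0                       # rejection is orthogonal to b
assert tuple(x + y for x, y in zip(p, r)) == a  # the two components recompose a
```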
[ { "math_id": 0, "text": "\\operatorname{proj}_\\mathbf{b} \\mathbf{a}" }, { "math_id": 1, "text": "\\operatorname{oproj}_{\\mathbf{b}} \\mathbf{a}" }, { "math_id": 2, "text": "\\operatorname{proj}_{\\mathbf{b}} \\mathbf{a}" }, { "math_id": 3, "text": "\\operatorname{oproj}_{\\mathbf{b}} \\mathbf{a} = \\mathbf{a} - \\operatorname{proj}_{\\mathbf{b}} \\mathbf{a}." }, { "math_id": 4, "text": "\\mathbf{a}_1 := \\operatorname{proj}_{\\mathbf{b}} \\mathbf{a}" }, { "math_id": 5, "text": "\\mathbf{a}_2 := \\operatorname{oproj}_{\\mathbf{b}} \\mathbf{a}." }, { "math_id": 6, "text": "\\mathbf{a}_1" }, { "math_id": 7, "text": "\\mathbf{b}," }, { "math_id": 8, "text": "\\mathbf{a}_2" }, { "math_id": 9, "text": "\\mathbf{a} = \\mathbf{a}_1 + \\mathbf{a}_2." }, { "math_id": 10, "text": "\\mathbf{a}_1 = a_1\\mathbf{\\hat b}" }, { "math_id": 11, "text": "a_1" }, { "math_id": 12, "text": "a_1 = \\left\\|\\mathbf{a}\\right\\|\\cos\\theta = \\mathbf{a}\\cdot\\mathbf{\\hat b}" }, { "math_id": 13, "text": "\\mathbf{a}" }, { "math_id": 14, "text": "\\mathbf{b}" }, { "math_id": 15, "text": "\\operatorname{proj}_{\\mathbf{b}} \\mathbf{a} = \\left(\\mathbf{a} \\cdot \\mathbf{\\hat b}\\right) \\mathbf{\\hat b} = \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\left\\|\\mathbf{b}\\right\\| } \\frac {\\mathbf{b}} {\\left\\|\\mathbf{b}\\right\\|} = \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\left\\|\\mathbf{b}\\right\\|^2}{\\mathbf{b}} = \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\mathbf{b} \\cdot \\mathbf{b}}{\\mathbf{b}} ~ ." 
}, { "math_id": 16, "text": "\\mathbf{a}\\cdot\\mathbf{b}" }, { "math_id": 17, "text": " a_1 = \\left\\|\\mathbf{a}\\right\\| \\cos \\theta , " }, { "math_id": 18, "text": "\\mathbf{a}_1 = a_1 \\mathbf{\\hat b} = (\\left\\|\\mathbf{a}\\right\\| \\cos \\theta) \\mathbf{\\hat b}" }, { "math_id": 19, "text": "\\mathbf{\\hat b}" }, { "math_id": 20, "text": "\\mathbf{\\hat b} = \\frac {\\mathbf{b}} {\\left\\|\\mathbf{b}\\right\\|}" }, { "math_id": 21, "text": "\\mathbf{a}_2 = \\mathbf{a} - \\mathbf{a}_1" }, { "math_id": 22, "text": "\\mathbf{a}_2 = \\mathbf{a} - \\left(\\left\\|\\mathbf{a}\\right\\| \\cos \\theta\\right) \\mathbf{\\hat b}" }, { "math_id": 23, "text": " \\mathbf{a} \\cdot \\mathbf{b} = \\left\\|\\mathbf{a}\\right\\| \\left\\|\\mathbf{b}\\right\\| \\cos \\theta" }, { "math_id": 24, "text": "a_1 = \\left\\|\\mathbf{a}\\right\\| \\cos \\theta = \\frac {\\mathbf{a} \\cdot \\mathbf{b}} { \\left\\|\\mathbf{b}\\right\\|}." }, { "math_id": 25, "text": "a_1 = \\frac {\\mathbf{a}_x \\mathbf{b}_x + \\mathbf{a}_y \\mathbf{b}_y} {\\left\\|\\mathbf{b}\\right\\|}." }, { "math_id": 26, "text": "\\mathbf{a}_1 = a_1 \\mathbf{\\hat b} = \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\left\\|\\mathbf{b}\\right\\| } \\frac {\\mathbf{b}} {\\left\\|\\mathbf{b}\\right\\|}," }, { "math_id": 27, "text": "\\mathbf{a}_1 = \\left(\\mathbf{a} \\cdot \\mathbf{\\hat b}\\right) \\mathbf{\\hat b}," }, { "math_id": 28, "text": "\\mathbf{a}_1 = \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\left\\|\\mathbf{b}\\right\\|^2}{\\mathbf{b}} = \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\mathbf{b} \\cdot \\mathbf{b}}{\\mathbf{b}} ~ ." 
}, { "math_id": 29, "text": "\\mathbf{b}^\\perp = \\begin{pmatrix}-\\mathbf{b}_y & \\mathbf{b}_x\\end{pmatrix}" }, { "math_id": 30, "text": "\\mathbf{b} = \\begin{pmatrix}\\mathbf{b}_x & \\mathbf{b}_y\\end{pmatrix}" }, { "math_id": 31, "text": "a_2 = \\left\\|\\mathbf{a}\\right\\| \\sin \\theta = \\frac {\\mathbf{a} \\cdot \\mathbf{b}^\\perp} {\\left\\|\\mathbf{b}\\right\\|} = \\frac {\\mathbf{a}_y \\mathbf{b}_x - \\mathbf{a}_x \\mathbf{b}_y} {\\left\\|\\mathbf{b}\\right\\| }." }, { "math_id": 32, "text": "\\mathbf{a}_2 = \\mathbf{a} - \\mathbf{a}_1 " }, { "math_id": 33, "text": "\\mathbf{a}_2 = \\mathbf{a} - \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\mathbf{b} \\cdot \\mathbf{b}}{\\mathbf{b}}." }, { "math_id": 34, "text": "\\mathbf{a}_2 = \\frac{\\mathbf{a}\\cdot\\mathbf{b}^\\perp}{\\mathbf{b}\\cdot\\mathbf{b}}\\mathbf{b}^\\perp" }, { "math_id": 35, "text": "P_\\mathbf{a} = \\mathbf{a} \\mathbf{a}^\\textsf{T} =\n \\begin{bmatrix} a_x \\\\ a_y \\\\ a_z \\end{bmatrix}\n \\begin{bmatrix} a_x & a_y & a_z \\end{bmatrix} =\n \\begin{bmatrix}\n a_x^2 & a_x a_y & a_x a_z \\\\\n a_x a_y & a_y^2 & a_y a_z \\\\\n a_x a_z & a_y a_z & a_z^2 \\\\\n \\end{bmatrix}" } ]
https://en.wikipedia.org/wiki?curid=1508507
1508518
Scalar projection
In mathematics, the scalar projection of a vector formula_0 on (or onto) a vector formula_1 also known as the scalar resolute of formula_0 in the direction of formula_1 is given by: formula_2 where the operator formula_3 denotes a dot product, formula_4 is the unit vector in the direction of formula_1 formula_5 is the length of formula_6 and formula_7 is the angle between formula_0 and formula_8. The term scalar component sometimes refers to the scalar projection, since, in Cartesian coordinates, the components of a vector are the scalar projections in the directions of the coordinate axes. The scalar projection is a scalar, equal to the length of the orthogonal projection of formula_0 on formula_8, with a negative sign if the projection has an opposite direction with respect to formula_8. Multiplying the scalar projection of formula_0 on formula_8 by formula_9 converts it into the above-mentioned orthogonal projection, also called the vector projection of formula_0 on formula_8. Definition based on angle "θ". If the angle formula_7 between formula_0 and formula_8 is known, the scalar projection of formula_0 on formula_8 can be computed using formula_10 (formula_11 in the figure). The formula above can be inverted to obtain the angle, "θ". Definition in terms of a and b. When formula_7 is not known, the cosine of formula_7 can be computed in terms of formula_0 and formula_1 by the following property of the dot product formula_12: formula_13 By this property, the definition of the scalar projection formula_14 becomes: formula_15 Properties. The scalar projection has a negative sign if formula_16. It coincides with the length of the corresponding vector projection if the angle is smaller than 90°. More exactly, if the vector projection is denoted formula_17 and its length formula_18: formula_19 if formula_20 formula_21 if formula_22
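The definition in terms of the dot product translates directly into code. The sketch below is illustrative (the function name is ours, not from any particular library) and works for vectors of any dimension given as component tuples:

```python
import math

def scalar_projection(a, b):
    """Scalar projection of a onto b: (a . b) / ||b||.

    Negative when the angle between a and b exceeds 90 degrees.
    """
    dot = sum(ai * bi for ai, bi in zip(a, b))
    norm_b = math.sqrt(sum(bi * bi for bi in b))
    return dot / norm_b
```

For a = (3, 4) and b = (1, 0) this returns 3.0, the x-component of a, matching the remark that Cartesian components are scalar projections onto the coordinate axes.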
[ { "math_id": 0, "text": "\\mathbf{a}" }, { "math_id": 1, "text": "\\mathbf{b}," }, { "math_id": 2, "text": "s = \\left\\|\\mathbf{a}\\right\\|\\cos\\theta = \\mathbf{a}\\cdot\\mathbf{\\hat b}," }, { "math_id": 3, "text": "\\cdot" }, { "math_id": 4, "text": "\\hat{\\mathbf{b}}" }, { "math_id": 5, "text": "\\left\\|\\mathbf{a}\\right\\|" }, { "math_id": 6, "text": "\\mathbf{a}," }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "\\mathbf{b}" }, { "math_id": 9, "text": "\\mathbf{\\hat b}" }, { "math_id": 10, "text": "s = \\left\\|\\mathbf{a}\\right\\| \\cos \\theta ." }, { "math_id": 11, "text": "s = \\left\\|\\mathbf{a}_1\\right\\|" }, { "math_id": 12, "text": " \\mathbf{a} \\cdot \\mathbf{b}" }, { "math_id": 13, "text": " \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\left\\|\\mathbf{a}\\right\\| \\left\\|\\mathbf{b}\\right\\|} = \\cos \\theta" }, { "math_id": 14, "text": "s" }, { "math_id": 15, "text": " s = \\left\\|\\mathbf{a}_1\\right\\| = \\left\\|\\mathbf{a}\\right\\| \\cos \\theta = \\left\\|\\mathbf{a}\\right\\| \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\left\\|\\mathbf{a}\\right\\| \\left\\|\\mathbf{b}\\right\\|} = \\frac {\\mathbf{a} \\cdot \\mathbf{b}} {\\left\\|\\mathbf{b}\\right\\| }\\," }, { "math_id": 16, "text": "90^\\circ < \\theta \\le 180^\\circ" }, { "math_id": 17, "text": "\\mathbf{a}_1" }, { "math_id": 18, "text": "\\left\\|\\mathbf{a}_1\\right\\|" }, { "math_id": 19, "text": "s = \\left\\|\\mathbf{a}_1\\right\\| " }, { "math_id": 20, "text": "0^\\circ \\le \\theta \\le 90^\\circ," }, { "math_id": 21, "text": "s = -\\left\\|\\mathbf{a}_1\\right\\| " }, { "math_id": 22, "text": "90^\\circ < \\theta \\le 180^\\circ." } ]
https://en.wikipedia.org/wiki?curid=1508518
1508682
Mixed-data sampling
Econometric models involving data sampled at different frequencies are of general interest. Mixed-data sampling (MIDAS) is an econometric regression developed by Eric Ghysels with several co-authors. There is now a substantial literature on MIDAS regressions and their applications, including Ghysels, Santa-Clara and Valkanov (2006), Ghysels, Sinko and Valkanov, Andreou, Ghysels and Kourtellos (2010) and Andreou, Ghysels and Kourtellos (2013). MIDAS Regressions. A MIDAS regression is a direct forecasting tool which can relate future low-frequency data with current and lagged high-frequency indicators, and yield different forecasting models for each forecast horizon. It can flexibly deal with data sampled at different frequencies and provide a direct forecast of the low-frequency variable. It incorporates each individual high-frequency observation in the regression, which avoids both the loss of potentially useful information and mis-specification. A simple regression example has the independent variable appearing at a higher frequency than the dependent variable: formula_0 where "y" is the dependent variable, "x" is the regressor, "m" denotes the frequency – for instance, if "y" is yearly, formula_1 is quarterly – formula_2 is the disturbance and formula_3 is a lag distribution, for instance the Beta function or the Almon Lag. For example formula_4. In some cases, MIDAS regressions can be viewed as substitutes for the Kalman filter when applied in the context of mixed frequency data. Bai, Ghysels and Wright (2013) examine the relationship between MIDAS regressions and Kalman filter state space models applied to mixed frequency data. In general, the latter involves a system of equations, whereas, in contrast, MIDAS regressions involve a (reduced form) single equation. As a consequence, MIDAS regressions might be less efficient, but also less prone to specification errors.
In cases where the MIDAS regression is only an approximation, the approximation errors tend to be small. Machine Learning MIDAS Regressions. MIDAS can also be used for machine learning time series and panel data nowcasting. The machine learning MIDAS regressions involve Legendre polynomials. High-dimensional mixed frequency time series regressions involve certain data structures that, once taken into account, should improve the performance of unrestricted estimators in small samples. These structures are represented by groups covering lagged dependent variables and groups of lags for a single (high-frequency) covariate. To that end, the machine learning MIDAS approach exploits the sparse-group LASSO (sg-LASSO) regularization that conveniently accommodates such structures. The attractive feature of the sg-LASSO estimator is that it allows one to effectively combine approximately sparse and dense signals. Software packages. Several software packages feature MIDAS regressions and related econometric methods. These include:
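To make the lag polynomial formula_3 concrete, here is a minimal sketch of the Beta-weight scheme commonly used in MIDAS regressions. It is an illustration only, not any package's API; the parameter names theta1 and theta2 and the midpoint evaluation grid are our assumptions:

```python
def beta_weights(K, theta1, theta2):
    """Normalized Beta-polynomial MIDAS lag weights w_1, ..., w_K."""
    # evaluate at midpoints of (0, 1) so the endpoint weights stay finite
    ks = [(i - 0.5) / K for i in range(1, K + 1)]
    raw = [k ** (theta1 - 1.0) * (1.0 - k) ** (theta2 - 1.0) for k in ks]
    total = sum(raw)
    return [w / total for w in raw]

def midas_regressor(x_high, K, theta1, theta2):
    """Collapse the K most recent high-frequency observations into the single
    weighted regressor B(L^{1/m}; theta) x_t of the MIDAS equation above."""
    weights = beta_weights(K, theta1, theta2)
    recent_first = list(x_high)[-K:][::-1]  # most recent observation first
    return sum(w * x for w, x in zip(weights, recent_first))
```

With theta1 = 1 and theta2 > 1 the weights decline with the lag, so more recent high-frequency observations receive more weight; the two theta parameters would be estimated jointly with the regression coefficients.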
[ { "math_id": 0, "text": "y_t = \\beta_0 + \\beta_1 B(L^{1/m};\\theta)x_t^{(m)} + \\varepsilon_t^{(m)}," }, { "math_id": 1, "text": "x_t^{(4)}" }, { "math_id": 2, "text": "\\varepsilon" }, { "math_id": 3, "text": "B(L^{1/m};\\theta)" }, { "math_id": 4, "text": "B(L^{1/m};\\theta) = \\sum_{k=0}^K B(k; \\theta) L^{k/m}" } ]
https://en.wikipedia.org/wiki?curid=1508682
15088121
Stochastic ordering
In probability theory and statistics, a stochastic order quantifies the concept of one random variable being "bigger" than another. These are usually partial orders, so that one random variable formula_0 may be neither stochastically greater than, less than, nor equal to another random variable formula_1. Many different orders exist, which have different applications. Usual stochastic order. A real random variable formula_0 is less than a random variable formula_1 in the "usual stochastic order" if formula_2 where formula_3 denotes the probability of an event. This is sometimes denoted formula_4 or formula_5. If additionally formula_6 for some formula_7, then formula_0 is stochastically strictly less than formula_1, sometimes denoted formula_8. In decision theory, under this circumstance, "B" is said to be first-order stochastically dominant over "A". Characterizations. The following rules describe situations when one random variable is stochastically less than or equal to another. Strict versions of some of these rules also exist. First, formula_9 if and only if formula_11 for all non-decreasing functions formula_10. If formula_10 is non-decreasing and formula_9, then formula_12. If formula_13 is non-decreasing in each variable and formula_14 and formula_15 are independent random variables with formula_16 for each formula_17, then formula_18 and, in particular, formula_19; moreover, the order statistics satisfy formula_20. Finally, if formula_21 is a random variable with formula_22 such that formula_23 for all formula_24 with formula_25, then formula_9. Other properties. If formula_9 and formula_26 then formula_27 (the random variables are equal in distribution). Stochastic dominance. Stochastic dominance relations are a family of stochastic orderings used in decision theory: zeroth-order stochastic dominance, formula_28, holds if and only if formula_29 with probability 1 and formula_30 with positive probability; first-order stochastic dominance, formula_31, holds if and only if formula_32 for all formula_7, with strict inequality for some formula_7; and second-order stochastic dominance, formula_33, holds if and only if formula_34 for all formula_7, with strict inequality at some formula_7. There also exist higher-order notions of stochastic dominance. With the definitions above, we have formula_35. Multivariate stochastic order. An formula_36-valued random variable formula_0 is less than an formula_36-valued random variable formula_1 in the "usual stochastic order" if formula_37 Other types of multivariate stochastic orders exist. For instance, the upper and lower orthant orders, which are similar to the usual one-dimensional stochastic order.
formula_0 is said to be smaller than formula_1 in upper orthant order if formula_38 and formula_0 is smaller than formula_1 in lower orthant order if formula_39 All three order types also have integral representations, that is for a particular order formula_0 is smaller than formula_1 if and only if formula_40 for all formula_41 in a class of functions formula_42. formula_42 is then called the generator of the respective order. Other dominance orders. The following stochastic orders are useful in the theory of random social choice. They are used to compare the outcomes of random social choice functions, in order to check them for efficiency or other desirable criteria. The dominance orders below are ordered from the most conservative to the least conservative. They are exemplified on random variables over the finite support {30,20,10}. Deterministic dominance, denoted formula_43, means that every possible outcome of formula_0 is at least as good as every possible outcome of formula_1: for all "x" < "y", formula_44. In other words: formula_45. For example, formula_46. Bilinear dominance, denoted formula_47, means that, for every pair of possible outcomes, the probability that formula_0 yields the better one and formula_1 yields the worse one is at least as large as the probability the other way around: for all x < y, formula_48 For example, formula_49. Stochastic dominance (already mentioned above), denoted formula_50, means that, for every possible outcome "x", the probability that formula_0 yields at least "x" is at least as large as the probability that formula_1 yields at least "x": for all x, formula_51. For example, formula_52. Pairwise-comparison dominance, denoted formula_53, means that the probability that formula_0 yields an outcome at least as good as formula_1 is at least as large as the probability the other way around: formula_54. For example, formula_55.
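The numerical examples above can be verified mechanically. The sketch below (our illustration; distributions are represented as value-to-probability dicts and the function names are ours) checks stochastic dominance via upper tail probabilities and pairwise-comparison dominance for independent draws:

```python
from itertools import product

def sd_dominates(A, B):
    """Stochastic dominance: P(A >= x) >= P(B >= x) at every outcome x."""
    xs = set(A) | set(B)
    def upper(d, x):                      # P(X >= x)
        return sum(p for v, p in d.items() if v >= x)
    return all(upper(A, x) >= upper(B, x) - 1e-12 for x in xs)

def pc_dominates(A, B):
    """Pairwise-comparison dominance: P(A >= B) >= P(B >= A), independent draws."""
    p_ab = sum(pa * pb for (a, pa), (b, pb) in product(A.items(), B.items()) if a >= b)
    p_ba = sum(pa * pb for (a, pa), (b, pb) in product(A.items(), B.items()) if b >= a)
    return p_ab >= p_ba - 1e-12

# The examples from the text:
sd_example = sd_dominates({30: 0.5, 10: 0.5}, {20: 0.5, 10: 0.5})
pc_example = pc_dominates({30: 0.67, 10: 0.33}, {20: 1.0})
```

Both example checks return True, reproducing the claimed dominances on the support {30, 20, 10}.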
Downward-lexicographic dominance, denoted formula_56, means that formula_0 has a larger probability than formula_1 of returning the best outcome, or both formula_0 and formula_1 have the same probability to return the best outcome but formula_0 has a larger probability than formula_1 of returning the second-best outcome, etc. Upward-lexicographic dominance is defined analogously based on the probability of returning the "worst" outcomes. See lexicographic dominance. Other stochastic orders. Hazard rate order. The "hazard rate" of a non-negative random variable formula_57 with absolutely continuous distribution function formula_58 and density function formula_59 is defined as formula_60 Given two non-negative variables formula_57 and formula_61 with absolutely continuous distributions formula_58 and formula_62, and with hazard rate functions formula_63 and formula_64, respectively, formula_57 is said to be smaller than formula_61 in the hazard rate order (denoted as formula_65) if formula_66 for all formula_67, or equivalently if formula_68 is decreasing in formula_69. Likelihood ratio order. Let formula_57 and formula_61 be two continuous (or discrete) random variables with densities (or discrete densities) formula_70 and formula_71, respectively, such that formula_72 increases in formula_69 over the union of the supports of formula_57 and formula_61; in this case, formula_57 is smaller than formula_61 in the "likelihood ratio order" (formula_73). Variability orders. If two variables have the same mean, they can still be compared by how "spread out" their distributions are. This is captured to a limited extent by the variance, but more fully by a range of stochastic orders. Convex order. Convex order is a special kind of variability order. Under the convex ordering, formula_0 is less than formula_1 if and only if for all convex formula_10, formula_74. Laplace transform order. Laplace transform order compares both size and variability of two random variables.
Similar to convex order, Laplace transform order is established by comparing the expectation of a function of the random variable where the function is from a special class: formula_75. This makes the Laplace transform order an integral stochastic order with the generator set given by the function set defined above, with formula_76 a positive real number. Realizable monotonicity. Considering a family of probability distributions formula_77 on a partially ordered space formula_78 indexed with formula_79 (where formula_80 is another partially ordered space), the concept of complete or realizable monotonicity may be defined. It means that there exists a family of random variables formula_81 on the same probability space, such that the distribution of formula_82 is formula_83 and formula_84 almost surely whenever formula_85. It means the existence of a monotone coupling.
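The likelihood ratio order defined above can be checked numerically on a grid. As an illustration (the exponential example is ours, not from the article): for exponential densities with rates λ > μ, the ratio g(t)/f(t) = (μ/λ)exp((λ − μ)t) is increasing in t, so Exp(λ) is smaller than Exp(μ) in the likelihood ratio order:

```python
import math

def ratio_nondecreasing(f, g, ts):
    """Check numerically that g(t)/f(t) is nondecreasing on the grid ts,
    the defining property of the likelihood ratio order X <=_lr Y."""
    ratios = [g(t) / f(t) for t in ts]
    return all(r2 >= r1 - 1e-12 for r1, r2 in zip(ratios, ratios[1:]))

f = lambda t: 2.0 * math.exp(-2.0 * t)   # density of X ~ Exp(2)
g = lambda t: 1.0 * math.exp(-1.0 * t)   # density of Y ~ Exp(1)
grid = [0.1 * i for i in range(60)]
```

Here X, the variable with the larger rate, is the likelihood-ratio-smaller one; swapping the arguments makes the ratio decreasing and the check fails, as expected.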
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "\\Pr(A>x) \\le \\Pr(B>x)\\text{ for all }x \\in (-\\infty,\\infty)," }, { "math_id": 3, "text": "\\Pr(\\cdot)" }, { "math_id": 4, "text": "A \\preceq B" }, { "math_id": 5, "text": "A \\le_{st} B" }, { "math_id": 6, "text": "\\Pr(A>x) < \\Pr(B>x)" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "A \\prec B" }, { "math_id": 9, "text": "A\\preceq B" }, { "math_id": 10, "text": "u" }, { "math_id": 11, "text": "{\\rm E}[u(A)] \\le {\\rm E}[u(B)]" }, { "math_id": 12, "text": "u(A) \\preceq u(B)" }, { "math_id": 13, "text": "u:\\mathbb{R}^n\\to\\mathbb{R}" }, { "math_id": 14, "text": "A_i" }, { "math_id": 15, "text": "B_i" }, { "math_id": 16, "text": "A_i \\preceq B_i" }, { "math_id": 17, "text": "i" }, { "math_id": 18, "text": "u(A_1,\\dots,A_n) \\preceq u(B_1,\\dots,B_n)" }, { "math_id": 19, "text": "\\sum_{i=1}^n A_i \\preceq \\sum_{i=1}^n B_i" }, { "math_id": 20, "text": "A_{(i)} \\preceq B_{(i)}" }, { "math_id": 21, "text": "C" }, { "math_id": 22, "text": "\\sum_c\\Pr(C=c)=1" }, { "math_id": 23, "text": "\\Pr(A>u|C=c)\\le \\Pr(B>u|C=c)" }, { "math_id": 24, "text": "c" }, { "math_id": 25, "text": "\\Pr(C=c)>0" }, { "math_id": 26, "text": "{\\rm E}[A]={\\rm E}[B]" }, { "math_id": 27, "text": " A \\mathrel{\\overset{d}{=}} B" }, { "math_id": 28, "text": "A \\prec_{(0)} B" }, { "math_id": 29, "text": "A \\le B" }, { "math_id": 30, "text": "A < B" }, { "math_id": 31, "text": "A \\prec_{(1)} B" }, { "math_id": 32, "text": "\\Pr(A>x) \\le \\Pr(B>x)" }, { "math_id": 33, "text": "A \\prec_{(2)} B" }, { "math_id": 34, "text": "\\int_{-\\infty}^x [\\Pr(B>t) - \\Pr(A>t)] \\, dt \\geq 0" }, { "math_id": 35, "text": "A \\prec_{(i)} B \\implies A \\prec_{(i+1)} B" }, { "math_id": 36, "text": "\\mathbb R^d" }, { "math_id": 37, "text": "{\\rm E}[f(A)] \\le {\\rm E}[f(B)]\\text{ for all bounded, increasing functions } f\\colon\\mathbb R^d\\longrightarrow\\mathbb R " }, { "math_id": 38, 
"text": "\\Pr(A>\\mathbf x) \\le \\Pr(B>\\mathbf x)\\text{ for all } \\mathbf x \\in \\mathbb R^d " }, { "math_id": 39, "text": "\\Pr(A\\le\\mathbf x) \\le \\Pr(B\\le\\mathbf x)\\text{ for all } \\mathbf x \\in \\mathbb R^d " }, { "math_id": 40, "text": "{\\rm E}[f(A)] \\le {\\rm E}[f(B)] " }, { "math_id": 41, "text": "f\\colon\\mathbb R^d\\longrightarrow \\mathbb R" }, { "math_id": 42, "text": "\\mathcal G" }, { "math_id": 43, "text": "A\\succeq_{dd} B" }, { "math_id": 44, "text": "\\Pr[A=x]\\cdot \\Pr[B=y] = 0" }, { "math_id": 45, "text": "\\Pr[A\\geq B] = 1" }, { "math_id": 46, "text": "0.6 * 30 + 0.4 * 20 \\succeq_{dd} 0.5 * 20 + 0.5 * 10" }, { "math_id": 47, "text": "A\\succeq_{bd} B" }, { "math_id": 48, "text": "\\Pr[A=x]\\cdot \\Pr[B=y] \\leq \\Pr[A=y]\\cdot \\Pr[B=x]" }, { "math_id": 49, "text": "0.5 * 30 + 0.5 * 20 \\succeq_{bd} 0.33 * 30 + 0.33 * 20 + 0.34 * 10" }, { "math_id": 50, "text": "A\\succeq_{sd} B" }, { "math_id": 51, "text": "\\Pr[A\\geq x]\\geq \\Pr[B\\geq x]" }, { "math_id": 52, "text": "0.5 * 30 + 0.5 * 10 \\succeq_{sd} 0.5 * 20 + 0.5*10" }, { "math_id": 53, "text": "A\\succeq_{pc} B" }, { "math_id": 54, "text": "\\Pr[A\\geq B]\\geq\\Pr[B\\geq A]" }, { "math_id": 55, "text": "0.67 * 30 + 0.33 * 10 \\succeq_{pc} 1.0 * 20" }, { "math_id": 56, "text": "A\\succeq_{dl} B" }, { "math_id": 57, "text": "X" }, { "math_id": 58, "text": "F" }, { "math_id": 59, "text": "f" }, { "math_id": 60, "text": "r(t) = \\frac{d}{dt}(-\\log(1-F(t))) = \\frac{f(t)}{1-F(t)}." 
}, { "math_id": 61, "text": "Y" }, { "math_id": 62, "text": "G" }, { "math_id": 63, "text": "r" }, { "math_id": 64, "text": "q" }, { "math_id": 65, "text": "X \\preceq_{hr}Y" }, { "math_id": 66, "text": "r(t)\\ge q(t)" }, { "math_id": 67, "text": "t\\ge 0" }, { "math_id": 68, "text": "\\frac{1-F(t)}{1-G(t)}" }, { "math_id": 69, "text": "t" }, { "math_id": 70, "text": "f \\left( t \\right)" }, { "math_id": 71, "text": "g \\left( t \\right)" }, { "math_id": 72, "text": "\\frac{g \\left( t \\right)}{f \\left( t \\right)}" }, { "math_id": 73, "text": "X \\preceq _{lr} Y" }, { "math_id": 74, "text": "{\\rm E}[u(A)] \\leq {\\rm E}[u(B)]" }, { "math_id": 75, "text": "u(x) = -\\exp(-\\alpha x)" }, { "math_id": 76, "text": " \\alpha " }, { "math_id": 77, "text": " ({P}_{\\alpha})_{\\alpha \\in F} " }, { "math_id": 78, "text": " (E,\\preceq) " }, { "math_id": 79, "text": " \\alpha \\in F " }, { "math_id": 80, "text": " (F,\\preceq) " }, { "math_id": 81, "text": " (X_\\alpha)_{\\alpha} " }, { "math_id": 82, "text": " X_\\alpha " }, { "math_id": 83, "text": " {P}_\\alpha " }, { "math_id": 84, "text": " X_\\alpha \\preceq X_\\beta " }, { "math_id": 85, "text": " \\alpha \\preceq \\beta " } ]
https://en.wikipedia.org/wiki?curid=15088121
15089522
Category of topological vector spaces
In mathematics, the category of topological vector spaces is the category whose objects are topological vector spaces and whose morphisms are continuous linear maps between them. This is a category because the composition of two continuous linear maps is again a continuous linear map. The category is often denoted TVect or TVS. Fixing a topological field "K", one can also consider the subcategory TVect"K" of topological vector spaces over "K" with continuous "K"-linear maps as the morphisms. TVect is a concrete category. Like many categories, the category TVect is a concrete category, meaning its objects are sets with additional structure (i.e. a vector space structure and a topology) and its morphisms are functions preserving this structure. There are obvious forgetful functors into the category of topological spaces, the category of vector spaces and the category of sets. TVectformula_0 is a topological category. The category is topological, which means loosely speaking that it relates to its "underlying category", the category of vector spaces, in the same way that Top relates to Set. Formally, for every "K"-vector space formula_1 and every family formula_2 of topological "K"-vector spaces formula_3 and "K"-linear maps formula_4 there exists a vector space topology formula_5 on formula_1 so that the following property is fulfilled: Whenever formula_6 is a "K"-linear map from a topological "K"-vector space formula_7 it holds that formula_8 is continuous formula_9 formula_10 is continuous. The topological vector space formula_11 is called "initial object" or "initial structure" with respect to the given data formula_12. If one replaces "vector space" by "set" and "linear map" by "map", one gets a characterisation of the usual initial topologies in Top. This is the reason why categories with this property are called "topological". There are numerous consequences of this property.
For example: since the diagram of forgetful functors formula_14 commutes and the forgetful functor from formula_15 to Set is right adjoint, the forgetful functor from formula_13 to Top is right adjoint too (and the corresponding left adjoints fit into an analogous commutative diagram). This left adjoint defines "free topological vector spaces". Explicitly, these are free "K"-vector spaces equipped with a certain initial topology.
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "( (V_i,\\tau_i),f_i)_{i\\in I}" }, { "math_id": 3, "text": "(V_i,\\tau_i)" }, { "math_id": 4, "text": "f_i: V\\to V_i," }, { "math_id": 5, "text": "\\tau" }, { "math_id": 6, "text": "g: Z\\to V" }, { "math_id": 7, "text": "(Z,\\sigma)," }, { "math_id": 8, "text": "g: (Z,\\sigma)\\to (V,\\tau)" }, { "math_id": 9, "text": "\\iff" }, { "math_id": 10, "text": "\\forall i \\in I: f_i\\circ g: (Z,\\sigma)\\to(V_i,\\tau_i)" }, { "math_id": 11, "text": "(V,\\tau)" }, { "math_id": 12, "text": "(\\tau_i,f_i)_{i\\in I}" }, { "math_id": 13, "text": "\\textbf{TVect}_K" }, { "math_id": 14, "text": "\\begin{array}{ccc}\n\\textbf{Vect}_K & \\rightarrow & \\textbf{Set} \\\\\n\\uparrow & & \\uparrow \\\\\n\\textbf{TVect}_K & \\rightarrow & \\textbf{Top}\n\\end{array}" }, { "math_id": 15, "text": "\\textbf{Vect}_K" } ]
https://en.wikipedia.org/wiki?curid=15089522
150907
Ideal class group
In mathematics, the ideal class group (or class group) of an algebraic number field "K" is the quotient group "JK"/"PK" where "JK" is the group of fractional ideals of the ring of integers of "K", and "PK" is its subgroup of principal ideals. The class group is a measure of the extent to which unique factorization fails in the ring of integers of "K". The order of the group, which is finite, is called the class number of "K". The theory extends to Dedekind domains and their fields of fractions, for which the multiplicative properties are intimately tied to the structure of the class group. For example, the class group of a Dedekind domain is trivial if and only if the ring is a unique factorization domain. History and origin of the ideal class group. Ideal class groups (or, rather, what were effectively ideal class groups) were studied some time before the idea of an ideal was formulated. These groups appeared in the theory of quadratic forms: in the case of binary integral quadratic forms, as put into something like a final form by Carl Friedrich Gauss, a composition law was defined on certain equivalence classes of forms. This gave a finite abelian group, as was recognised at the time. Later Ernst Kummer was working towards a theory of cyclotomic fields. It had been realised (probably by several people) that failure to complete proofs in the general case of Fermat's Last Theorem by factorisation using the roots of unity was for a very good reason: a failure of unique factorization – i.e., the fundamental theorem of arithmetic – to hold in the rings generated by those roots of unity was a major obstacle. Out of Kummer's work for the first time came a study of the obstruction to the factorisation.
We now recognise this as part of the ideal class group: in fact Kummer had isolated the "p"-torsion in that group for the field of "p"th roots of unity, for any prime number "p", as the reason for the failure of the standard method of attack on the Fermat problem (see regular prime). Somewhat later again Richard Dedekind formulated the concept of an ideal, Kummer having worked in a different way. At this point the existing examples could be unified. It was shown that while rings of algebraic integers do not always have unique factorization into primes (because they need not be principal ideal domains), they do have the property that every proper ideal admits a unique factorization as a product of prime ideals (that is, every ring of algebraic integers is a Dedekind domain). The size of the ideal class group can be considered as a measure for the deviation of a ring from being a principal ideal domain; a ring is a principal ideal domain if and only if it has a trivial ideal class group.
However, if "R" is the ring of algebraic integers in an algebraic number field, or more generally a Dedekind domain, the multiplication defined above turns the set of fractional ideal classes into an abelian group, the ideal class group of "R". The group property of existence of inverse elements follows easily from the fact that, in a Dedekind domain, every non-zero ideal (except "R") is a product of prime ideals. Properties. The ideal class group is trivial (i.e. has only one element) if and only if all ideals of "R" are principal. In this sense, the ideal class group measures how far "R" is from being a principal ideal domain, and hence from satisfying unique prime factorization (Dedekind domains are unique factorization domains if and only if they are principal ideal domains). The number of ideal classes (the class number of "R") may be infinite in general. In fact, every abelian group is isomorphic to the ideal class group of some Dedekind domain. But if "R" is a ring of algebraic integers, then the class number is always "finite". This is one of the main results of classical algebraic number theory. Computation of the class group is hard, in general; it can be done by hand for the ring of integers in an algebraic number field of small discriminant, using Minkowski's bound. This result gives a bound, depending on the ring, such that every ideal class contains an ideal of norm less than the bound. In general the bound is not sharp enough to make the calculation practical for fields with large discriminant, but computers are well suited to the task. The mapping from rings of integers "R" to their corresponding class groups is functorial, and the class group can be subsumed under the heading of algebraic K-theory, with "K"0("R") being the functor assigning to "R" its ideal class group; more precisely, "K"0("R") = Z×"C"("R"), where "C"("R") is the class group.
Higher K groups can also be employed and interpreted arithmetically in connection to rings of integers. Relation with the group of units. It was remarked above that the ideal class group provides part of the answer to the question of how much ideals in a Dedekind domain behave like elements. The other part of the answer is provided by the group of units of the Dedekind domain, since passage from principal ideals to their generators requires the use of units (and this is the rest of the reason for introducing the concept of fractional ideal, as well): Define a map from "R"× to the set of all nonzero fractional ideals of "R" by sending every element to the principal (fractional) ideal it generates. This is a group homomorphism; its kernel is the group of units of "R", and its cokernel is the ideal class group of "R". The failure of these groups to be trivial is a measure of the failure of the map to be an isomorphism: that is the failure of ideals to act like ring elements, that is to say, like numbers. Examples of ideal class groups. Class numbers of quadratic fields. If formula_0 is a square-free integer (a product of distinct primes) other than 1, then formula_1 is a quadratic extension of Q. If formula_2, then the class number of the ring formula_3 of algebraic integers of formula_1 is equal to 1 for precisely the following values of formula_0: formula_4. This result was first conjectured by Gauss and proven by Kurt Heegner, although Heegner's proof was not believed until Harold Stark gave a later proof in 1967. (See Stark–Heegner theorem.) This is a special case of the famous class number problem. If, on the other hand, "d" &gt; 0, then it is unknown whether there are infinitely many fields formula_1 with class number 1. Computational results indicate that there are a great many such fields. However, it is not even known if there are infinitely many number fields with class number 1. 
For "d" &lt; 0, the ideal class group of formula_1 is isomorphic to the class group of integral binary quadratic forms of discriminant equal to the discriminant of formula_1. For "d" &gt; 0, the ideal class group may be half the size since the class group of integral binary quadratic forms is isomorphic to the narrow class group of formula_1. For real quadratic integer rings, the class number is given in OEIS A003649; for the imaginary case, they are given in OEIS A000924. Example of a non-trivial class group. The quadratic integer ring "R" = Z[√−5] is the ring of integers of Q(√−5). It does "not" possess unique factorization; in fact the class group of "R" is cyclic of order 2. Indeed, the ideal "J" = (2, 1 + √−5) is not principal, which can be proved by contradiction as follows. formula_3 has a norm function formula_5, which satisfies formula_6, and formula_7 if and only if formula_8 is a unit in formula_3. First of all, formula_9, because the quotient ring of formula_3 modulo the ideal formula_10 is isomorphic to formula_11, so that the quotient ring of formula_3 modulo formula_12 is isomorphic to formula_13. If "J" were generated by an element "x" of "R", then "x" would divide both 2 and 1 + √−5. Then the norm formula_14 would divide both formula_15 and formula_16, so "N"(x) would divide 2. If formula_17 then formula_18 is a unit and formula_19, a contradiction. But formula_14 cannot be 2 either, because "R" has no elements of norm 2, because the Diophantine equation formula_20 has no solutions in integers, as it has no solutions modulo 5. One also computes that "J" 2 = (2), which is principal, so the class of "J" in the ideal class group has order two. Showing that there aren't any "other" ideal classes requires more effort. The fact that this "J" is not principal is also related to the fact that the element 6 has two distinct factorisations into irreducibles: 6 = 2 × 3 = (1 + √−5) × (1 − √−5). Connections to class field theory. 
Class field theory is a branch of algebraic number theory which seeks to classify all the abelian extensions of a given algebraic number field, meaning Galois extensions with abelian Galois group. A particularly beautiful example is found in the Hilbert class field of a number field, which can be defined as the maximal unramified abelian extension of such a field. The Hilbert class field "L" of a number field "K" is unique and has the following properties: every ideal of the ring of integers of "K" becomes principal in "L", and "L" is a Galois extension of "K" with Galois group isomorphic to the ideal class group of "K". Neither property is particularly easy to prove.
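The norm computations in the Z[√−5] example above can be verified directly. This sketch is our illustration: elements are represented as pairs (a, b) standing for a + b√−5, and the checks reproduce the two factorizations of 6 and the absence of elements of norm 2:

```python
def norm(x):
    """Norm of a + b*sqrt(-5): N(a + b*sqrt(-5)) = a^2 + 5*b^2."""
    a, b = x
    return a * a + 5 * b * b

def mul(x, y):
    """Product in Z[sqrt(-5)]: (a + b*s)(c + d*s) with s^2 = -5."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

# Norm 2 would need a^2 + 5*b^2 = 2, which forces b = 0 and a^2 = 2:
# impossible, so no element of Z[sqrt(-5)] has norm 2.
no_norm_two = all(norm((a, b)) != 2 for a in range(-2, 3) for b in range(-1, 2))
```

Both factorizations 6 = 2 × 3 = (1 + √−5)(1 − √−5) multiply out to the element 6, while the multiplicativity of the norm and the absence of norm-2 elements rule out a common divisor of norm 2, the step used in showing that J = (2, 1 + √−5) is not principal.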
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "\\mathbf{Q}(\\sqrt{d}\\,)" }, { "math_id": 2, "text": "d < 0" }, { "math_id": 3, "text": "R" }, { "math_id": 4, "text": "d = -1,-2,-3,-7,-11, -19, -43, -67, -163" }, { "math_id": 5, "text": "N(a + b \\sqrt{-5}) = a^2 + 5 b^2 " }, { "math_id": 6, "text": "N(uv) = N(u)N(v)" }, { "math_id": 7, "text": "N(u) = 1" }, { "math_id": 8, "text": "u" }, { "math_id": 9, "text": " J \\ne R" }, { "math_id": 10, "text": "(1 + \\sqrt{-5})" }, { "math_id": 11, "text": "\\mathbf{Z} / 6 \\mathbf{Z}" }, { "math_id": 12, "text": "J" }, { "math_id": 13, "text": "\\mathbf{Z} / 3 \\mathbf{Z}" }, { "math_id": 14, "text": "N(x)" }, { "math_id": 15, "text": "N(2) = 4" }, { "math_id": 16, "text": "N(1 + \\sqrt{-5}) = 6" }, { "math_id": 17, "text": "N(x) = 1" }, { "math_id": 18, "text": "x" }, { "math_id": 19, "text": "J = R" }, { "math_id": 20, "text": "b^2 + 5 c^2 = 2" } ]
https://en.wikipedia.org/wiki?curid=150907
1509289
Magnetostatics
Branch of physics about magnetism in systems with steady electric currents Magnetostatics is the study of magnetic fields in systems where the currents are steady (not changing with time). It is the magnetic analogue of electrostatics, where the charges are stationary. The magnetization need not be static; the equations of magnetostatics can be used to predict fast magnetic switching events that occur on time scales of nanoseconds or less. Magnetostatics is even a good approximation when the currents are not static – as long as the currents do not alternate rapidly. Magnetostatics is widely used in applications of micromagnetics such as models of magnetic storage devices as in computer memory. Applications. Magnetostatics as a special case of Maxwell's equations. Starting from Maxwell's equations and assuming that charges are either fixed or move as a steady current formula_0, the equations separate into two equations for the electric field (see electrostatics) and two for the magnetic field. The fields are independent of time and each other. The magnetostatic equations, in both differential and integral forms, are shown in the table below. Where ∇ with the dot denotes divergence, and B is the magnetic flux density, the first integral is over a surface formula_1 with oriented surface element formula_2. Where ∇ with the cross denotes curl, J is the current density and H is the magnetic field intensity, the second integral is a line integral around a closed loop formula_3 with line element formula_4. The current going through the loop is formula_5. The quality of this approximation may be guessed by comparing the above equations with the full version of Maxwell's equations and considering the importance of the terms that have been removed. Of particular significance is the comparison of the formula_6 term against the formula_7 term. If the formula_0 term is substantially larger, then the smaller term may be ignored without significant loss of accuracy. 
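As a rough illustration of this comparison (with values of our own choosing, not from the article): for a sinusoidal field in a copper conductor at mains frequency, J = σE while |∂D/∂t| = ε₀ω|E|, so the ratio of the two terms is σ/(ε₀ω):

```python
import math

sigma = 5.96e7               # conductivity of copper, S/m
eps0 = 8.854e-12             # vacuum permittivity, F/m
omega = 2 * math.pi * 60.0   # mains angular frequency, rad/s

# For J = sigma*E and D = eps0*E, |J| / |dD/dt| = sigma / (eps0 * omega):
ratio = sigma / (eps0 * omega)
print(f"|J| / |dD/dt| ~ {ratio:.1e}")   # ~2e16: displacement current is negligible
assert ratio > 1e15
```

With the conduction term sixteen orders of magnitude larger, dropping ∂D/∂t costs essentially nothing in accuracy for this case.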
Re-introducing Faraday's law. A common technique is to solve a series of magnetostatic problems at incremental time steps and then use these solutions to approximate the term formula_8. Plugging this result into Faraday's Law finds a value for formula_9 (which had previously been ignored). This method is not a true solution of Maxwell's equations but can provide a good approximation for slowly changing fields. Solving for the magnetic field. Current sources. If all currents in a system are known (i.e., if a complete description of the current density formula_10 is available) then the magnetic field can be determined, at a position r, from the currents by the Biot–Savart equation: formula_11 This technique works well for problems where the medium is a vacuum or air or some similar material with a relative permeability of 1. This includes air-core inductors and air-core transformers. One advantage of this technique is that, if a coil has a complex geometry, it can be divided into sections and the integral evaluated for each section. Since this equation is primarily used to solve linear problems, the contributions can be added. For a very difficult geometry, numerical integration may be used. For problems where the dominant magnetic material is a highly permeable magnetic core with relatively small air gaps, a magnetic circuit approach is useful. When the air gaps are large in comparison to the magnetic circuit length, fringing becomes significant and usually requires a finite element calculation. The finite element calculation uses a modified form of the magnetostatic equations above in order to calculate magnetic potential. The value of formula_12 can be found from the magnetic potential. The magnetic field can be derived from the vector potential. Since the divergence of the magnetic flux density is always zero, formula_13 and the relation of the vector potential to current is: formula_14 Magnetization. 
Strongly magnetic materials (i.e., ferromagnetic, ferrimagnetic or paramagnetic) have a magnetization that is primarily due to electron spin. In such materials the magnetization must be explicitly included using the relation formula_15 Except in the case of conductors, electric currents can be ignored. Then Ampère's law is simply formula_16 This has the general solution formula_17 where formula_18 is a scalar potential. Substituting this in Gauss's law gives formula_19 Thus, the divergence of the magnetization, formula_20 has a role analogous to the electric charge in electrostatics and is often referred to as an effective charge density formula_21. The vector potential method can also be employed with an effective current density formula_22 References.
[ { "math_id": 0, "text": "\\mathbf{J}" }, { "math_id": 1, "text": " S" }, { "math_id": 2, "text": " d\\mathbf{S}" }, { "math_id": 3, "text": " C" }, { "math_id": 4, "text": "\\mathbf{l}" }, { "math_id": 5, "text": " I_\\text{enc}" }, { "math_id": 6, "text": " \\mathbf{J}" }, { "math_id": 7, "text": " \\partial \\mathbf{D} / \\partial t" }, { "math_id": 8, "text": " \\partial \\mathbf{B} / \\partial t" }, { "math_id": 9, "text": " \\mathbf{E}" }, { "math_id": 10, "text": " \\mathbf{J}(\\mathbf{r})" }, { "math_id": 11, "text": "\\mathbf{B}(\\mathbf{r}) = \\frac{\\mu_0}{4\\pi} \\int{\\frac{\\mathbf{J}(\\mathbf{r}') \\times \\left(\\mathbf{r} - \\mathbf{r}'\\right)}{|\\mathbf{r} - \\mathbf{r}'|^3} \\mathrm{d}^3\\mathbf{r}'}" }, { "math_id": 12, "text": " \\mathbf{B}" }, { "math_id": 13, "text": " \\mathbf{B} = \\nabla \\times \\mathbf{A}, " }, { "math_id": 14, "text": " \\mathbf{A}(\\mathbf{r}) = \\frac{\\mu_{0}}{4\\pi} \\int{ \\frac{\\mathbf{J(\\mathbf{r}')} } {|\\mathbf{r}-\\mathbf{r}'|} \\mathrm{d}^3\\mathbf{r}'}. " }, { "math_id": 15, "text": " \\mathbf{B} = \\mu_0(\\mathbf{M}+\\mathbf{H})." }, { "math_id": 16, "text": " \\nabla\\times\\mathbf{H} = 0." }, { "math_id": 17, "text": " \\mathbf{H} = -\\nabla \\Phi_M, " }, { "math_id": 18, "text": "\\Phi_M" }, { "math_id": 19, "text": " \\nabla^2 \\Phi_M = \\nabla\\cdot\\mathbf{M}." }, { "math_id": 20, "text": " \\nabla\\cdot\\mathbf{M}," }, { "math_id": 21, "text": "\\rho_M" }, { "math_id": 22, "text": " \\mathbf{J_M} = \\nabla \\times \\mathbf{M}. " } ]
https://en.wikipedia.org/wiki?curid=1509289
15093839
Color triangle
Arrangement of colors within a triangle A color triangle is an arrangement of colors within a triangle, based on the additive or subtractive combination of three primary colors at its corners. An additive color space defined by three primary colors has a chromaticity gamut that is a color triangle, when the amounts of the primaries are constrained to be nonnegative. Before the theory of additive color was proposed by Thomas Young and further developed by James Clerk Maxwell and Hermann von Helmholtz, triangles were also used to organize colors, for example around a system of red, yellow, and blue primary colors. After the development of the CIE system, color triangles were used as chromaticity diagrams, including briefly with the trilinear coordinates representing the chromaticity values. Since the sum of the three chromaticity values has a fixed value, it suffices to depict only two of the three values, using Cartesian co-ordinates. In the modern "x,y" diagram, the large triangle bounded by the imaginary primaries X, Y, and Z has corners (1,0), (0,1), and (0,0), respectively; color triangles with real primaries are often shown within this space. Maxwell's disc. Maxwell was intrigued by James David Forbes's use of color tops. By rapidly spinning the top, Forbes created the illusion of a single color that was a mixture of the primaries: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[The] experiments of Professor J. D. Forbes, which I witnessed in 1849… [established] that blue and yellow do not make green, but a pinkish tint, when neither prevails in the combination…[and the] result of mixing yellow and blue was, I believe, not previously known. Maxwell took this a step further by using a circular scale around the rim with which to measure the ratios of the primaries, choosing vermilion (V), emerald (EG), and ultramarine (U). Initially, he compared the color he observed on the spinning top with a paper of different color, in order to find a match. 
Later, he mounted a pair of papers, snow white (SW) and ivory black (BK), in an inner circle, thereby creating shades of gray. By adjusting the ratio of primaries, he matched the observed gray of the inner wheel, for example: formula_0 To determine the chromaticity of an arbitrary color, he replaced one of the primaries with a sample of the test color and adjusted the ratios until he found a match. For pale chrome (PC) he found formula_1. Next, he rearranged the equation to express the test color (PC, in this example) in terms of the primaries. This would be the precursor to the color matching functions of the CIE 1931 color space, whose chromaticity diagram is shown above. References.
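Assuming black contributes no light — so the BK terms can be dropped — the two disc equations can be combined to express pale chrome in the three primaries. The arithmetic below is our own illustration of the rearrangement step, not Maxwell's published figures:

```python
# From the first equation (treating BK as the absence of light):
#     0.28 SW = 0.37 V + 0.27 U + 0.36 EG
# so one unit of snow white is:
sw = {"V": 0.37 / 0.28, "U": 0.27 / 0.28, "EG": 0.36 / 0.28}

# Second equation: 0.33 PC + 0.55 U + 0.12 EG = 0.37 SW.  Solve for PC:
subtract = {"U": 0.55, "EG": 0.12}
pc = {c: (0.37 * sw[c] - subtract.get(c, 0.0)) / 0.33 for c in ("V", "U", "EG")}

print({c: round(v, 2) for c, v in pc.items()})
# PC ~ 1.48 V - 0.59 U + 1.08 EG: the negative ultramarine coefficient
# means pale chrome lies outside the triangle spanned by the three primaries.
assert pc["U"] < 0 < pc["V"]
```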
[ { "math_id": 0, "text": "0.37V+0.27U+0.36EG=0.28SW+0.72BK" }, { "math_id": 1, "text": "0.33PC+0.55U+0.12EG=0.37SW+0.63BK" } ]
https://en.wikipedia.org/wiki?curid=15093839
15094152
Gradient method
In optimization, a gradient method is an algorithm to solve problems of the form formula_0 with the search directions defined by the gradient of the function at the current point. Examples of gradient methods are gradient descent and the conjugate gradient method. See also.
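A minimal sketch of the prototypical gradient method — gradient descent with a fixed step size — is shown below; the objective and step size are illustrative choices of our own:

```python
# Gradient descent: at each iterate, step along the negative gradient.

def gradient_descent(grad, x0, step=0.1, tol=1e-10, max_iter=10_000):
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol:   # stop when the gradient is tiny
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = (x - 3)^2 + 2*(y + 1)^2 has its minimum at (3, -1).
grad_f = lambda p: [2 * (p[0] - 3), 4 * (p[1] + 1)]
x_min = gradient_descent(grad_f, [0.0, 0.0])
assert abs(x_min[0] - 3) < 1e-4 and abs(x_min[1] + 1) < 1e-4
```

Conjugate gradient differs only in the choice of search direction: instead of the raw negative gradient, each direction is made conjugate to the previous ones.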
[ { "math_id": 0, "text": "\\min_{x\\in\\mathbb R^n}\\; f(x)" } ]
https://en.wikipedia.org/wiki?curid=15094152
15094186
Graph automorphism
Mapping a graph onto itself without changing edge-vertex connectivity In the mathematical field of graph theory, an automorphism of a graph is a form of symmetry in which the graph is mapped onto itself while preserving the edge–vertex connectivity. Formally, an automorphism of a graph "G" = ("V", "E") is a permutation σ of the vertex set V, such that the pair of vertices ("u", "v") form an edge if and only if the pair ("σ"("u"), "σ"("v")) also form an edge. That is, it is a graph isomorphism from G to itself. Automorphisms may be defined in this way both for directed graphs and for undirected graphs. The composition of two automorphisms is another automorphism, and the set of automorphisms of a given graph, under the composition operation, forms a group, the automorphism group of the graph. In the opposite direction, by Frucht's theorem, all groups can be represented as the automorphism group of a connected graph – indeed, of a cubic graph. Computational complexity. Constructing the automorphism group of a graph, in the form of a list of generators, is polynomial-time equivalent to the graph isomorphism problem, and therefore solvable in quasi-polynomial time, that is with running time formula_0 for some fixed formula_1. Consequently, like the graph isomorphism problem, the problem of finding a graph's automorphism group is known to belong to the complexity class NP, but not known to be in P nor to be NP-complete, and therefore may be NP-intermediate. The easier problem of testing whether a graph has any symmetries (nontrivial automorphisms), known as the graph automorphism problem, also has no known polynomial time solution. There is a polynomial time algorithm for solving the graph automorphism problem for graphs where vertex degrees are bounded by a constant. The graph automorphism problem is polynomial-time many-one reducible to the graph isomorphism problem, but the converse reduction is unknown. 
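For tiny graphs the definition above can be applied by brute force, checking every permutation of the vertex set. The sketch below (our own illustration) enumerates the automorphisms of the 4-cycle C4:

```python
# Brute-force enumeration of the automorphisms of the 4-cycle C4, directly
# from the definition: a permutation sigma of the vertices such that (u, v)
# is an edge if and only if (sigma(u), sigma(v)) is.  Feasible only for
# very small graphs.
from itertools import permutations

vertices = range(4)
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

def is_automorphism(sigma):
    return all(
        (frozenset({sigma[u], sigma[v]}) in edges) == (frozenset({u, v}) in edges)
        for u in vertices for v in vertices if u < v
    )

autos = [s for s in permutations(vertices) if is_automorphism(s)]
# The automorphism group of C4 is the dihedral group of order 8.
assert len(autos) == 8
```

Checking all n! permutations is hopeless beyond a handful of vertices, which is why the practical tools discussed below (NAUTY, BLISS, SAUCY) use much more sophisticated search strategies.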
By contrast, hardness is known when the automorphisms are constrained in a certain fashion; for instance, determining the existence of a fixed-point-free automorphism (an automorphism that fixes no vertex) is NP-complete, and the problem of counting such automorphisms is ♯P-complete. Algorithms, software and applications. While no "worst-case" polynomial-time algorithms are known for the general Graph Automorphism problem, finding the automorphism group (and printing out an irredundant set of generators) for many large graphs arising in applications is rather easy. Several open-source software tools are available for this task, including NAUTY, BLISS and SAUCY. SAUCY and BLISS are particularly efficient for sparse graphs, e.g., SAUCY processes some graphs with millions of vertices in mere seconds. However, BLISS and NAUTY can also produce Canonical Labeling, whereas SAUCY is currently optimized for solving Graph Automorphism. An important observation is that for a graph on n vertices, the automorphism group can be specified by no more than formula_2 generators, and the above software packages are guaranteed to satisfy this bound as a side-effect of their algorithms (minimal sets of generators are harder to find and are not particularly useful in practice). It also appears that the total support (i.e., the number of vertices moved) of all generators is limited by a linear function of n, which is important in runtime analysis of these algorithms. However, this has not been established for a fact, as of March 2012. Practical applications of Graph Automorphism include graph drawing and other visualization tasks, solving structured instances of Boolean Satisfiability arising in the context of Formal verification and Logistics. Molecular symmetry can predict or explain chemical properties. Symmetry display. 
Several graph drawing researchers have investigated algorithms for drawing graphs in such a way that the automorphisms of the graph become visible as symmetries of the drawing. This may be done either by using a method that is not designed around symmetries, but that automatically generates symmetric drawings when possible, or by explicitly identifying symmetries and using them to guide vertex placement in the drawing. It is not always possible to display all symmetries of the graph simultaneously, so it may be necessary to choose which symmetries to display and which to leave unvisualized. Graph families defined by their automorphisms. Several families of graphs are defined by having certain types of automorphisms: Inclusion relationships between these families are indicated by the following table: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "2^{O((\\log n)^c)}" }, { "math_id": 1, "text": "c > 0" }, { "math_id": 2, "text": "n - 1" } ]
https://en.wikipedia.org/wiki?curid=15094186
15097
Ionosphere
Ionized part of Earth's upper atmosphere The ionosphere () is the ionized part of the upper atmosphere of Earth, from about to above sea level, a region that includes the thermosphere and parts of the mesosphere and exosphere. The ionosphere is ionized by solar radiation. It plays an important role in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on Earth. It also affects GPS signals that travel through this layer. History of discovery. As early as 1839, the German mathematician and physicist Carl Friedrich Gauss postulated that an electrically conducting region of the atmosphere could account for observed variations of Earth's magnetic field. Sixty years later, Guglielmo Marconi received the first trans-Atlantic radio signal on December 12, 1901, in St. John's, Newfoundland (now in Canada) using a kite-supported antenna for reception. The transmitting station in Poldhu, Cornwall, used a spark-gap transmitter to produce a signal with a frequency of approximately 500 kHz and a power of 100 times more than any radio signal previously produced. The message received was three dits, the Morse code for the letter S. To reach Newfoundland the signal would have to bounce off the ionosphere twice. Dr. Jack Belrose has contested this, however, based on theoretical and experimental work. However, Marconi did achieve transatlantic wireless communications in Glace Bay, Nova Scotia, one year later. In 1902, Oliver Heaviside proposed the existence of the Kennelly–Heaviside layer of the ionosphere which bears his name. Heaviside's proposal included means by which radio signals are transmitted around the Earth's curvature. Also in 1902, Arthur Edwin Kennelly discovered some of the ionosphere's radio-electrical properties. In 1912, the U.S. 
Congress imposed the Radio Act of 1912 on amateur radio operators, limiting their operations to frequencies above 1.5 MHz (wavelength 200 meters or smaller). The government thought those frequencies were useless. This led to the discovery of HF radio propagation via the ionosphere in 1923. In 1925, observations during a solar eclipse in New York by Dr. Alfred N. Goldsmith and his team demonstrated the influence of sunlight on radio wave propagation, revealing that short waves became weak or inaudible while long waves steadied during the eclipse, thus contributing to the understanding of the ionosphere's role in radio transmission. In 1926, Scottish physicist Robert Watson-Watt introduced the term "ionosphere" in a letter published only in 1969 in "Nature": &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;We have in quite recent years seen the universal adoption of the term 'stratosphere'..and..the companion term 'troposphere'... The term 'ionosphere', for the region in which the main characteristic is large scale ionisation with considerable mean free paths, appears appropriate as an addition to this series. In the early 1930s, test transmissions of Radio Luxembourg inadvertently provided evidence of the first radio modification of the ionosphere; HAARP ran a series of experiments in 2017 using the eponymous Luxembourg Effect. Edward V. Appleton was awarded a Nobel Prize in 1947 for his confirmation in 1927 of the existence of the ionosphere. Lloyd Berkner first measured the height and density of the ionosphere. This permitted the first complete theory of short-wave radio propagation. Maurice V. Wilkes and J. A. Ratcliffe researched the topic of radio propagation of very long radio waves in the ionosphere. Vitaly Ginzburg has developed a theory of electromagnetic wave propagation in plasmas such as the ionosphere. In 1962, the Canadian satellite Alouette 1 was launched to study the ionosphere. 
Following its success were Alouette 2 in 1965 and the two ISIS satellites in 1969 and 1971, further AEROS-A and -B in 1972 and 1975, all for measuring the ionosphere. On July 26, 1963, the first operational geosynchronous satellite Syncom 2 was launched. On board radio beacons on this satellite (and its successors) enabled – for the first time – the measurement of total electron content (TEC) variation along a radio beam from geostationary orbit to an earth receiver. (The rotation of the plane of polarization directly measures TEC along the path.) Australian geophysicist Elizabeth Essex-Cohen from 1969 onwards was using this technique to monitor the atmosphere above Australia and Antarctica. Geophysics. The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth, stretching from a height of about to more than . It exists primarily due to ultraviolet radiation from the Sun. The lowest part of the Earth's atmosphere, the troposphere, extends from the surface to about . Above that is the stratosphere, followed by the mesosphere. In the stratosphere incoming solar radiation creates the ozone layer. At heights of above , in the thermosphere, the atmosphere is so thin that free electrons can exist for short periods of time before they are captured by a nearby positive ion. The number of these free electrons is sufficient to affect radio propagation. This portion of the atmosphere is partially "ionized" and contains a plasma which is referred to as the ionosphere. Ultraviolet (UV), X-ray and shorter wavelengths of solar radiation are "ionizing," since photons at these frequencies contain sufficient energy to dislodge an electron from a neutral gas atom or molecule upon absorption. In this process the light electron obtains a high velocity so that the temperature of the created electronic gas is much higher (of the order of thousand K) than the one of ions and neutrals. 
The reverse process to ionization is recombination, in which a free electron is "captured" by a positive ion. Recombination occurs spontaneously, and causes the emission of a photon carrying away the energy produced upon recombination. As gas density increases at lower altitudes, the recombination process prevails, since the gas molecules and ions are closer together. The balance between these two processes determines the quantity of ionization present. Ionization depends primarily on the Sun and its Extreme Ultraviolet (EUV) and X-ray irradiance which varies strongly with solar activity. The more magnetically active the Sun is, the more sunspot active regions there are on the Sun at any one time. Sunspot active regions are the source of increased coronal heating and accompanying increases in EUV and X-ray irradiance, particularly during episodic magnetic eruptions that include solar flares that increase ionization on the sunlit side of the Earth and solar energetic particle events that can increase ionization in the polar regions. Thus the degree of ionization in the ionosphere follows both a diurnal (time of day) cycle and the 11-year solar cycle. There is also a seasonal dependence in ionization degree since the local winter hemisphere is tipped away from the Sun, thus there is less received solar radiation. Radiation received also varies with geographical location (polar, auroral zones, mid-latitudes, and equatorial regions). There are also mechanisms that disturb the ionosphere and decrease the ionization. Sydney Chapman proposed that the region below the ionosphere be called neutrosphere (the neutral atmosphere). Layers of ionization. At night the F layer is the only layer of significant ionization present, while the ionization in the E and D layers is extremely low. During the day, the D and E layers become much more heavily ionized, as does the F layer, which develops an additional, weaker region of ionisation known as the F1 layer. 
The F2 layer persists by day and night and is the main region responsible for the refraction and reflection of radio waves. D layer. The D layer is the innermost layer, above the surface of the Earth. Ionization here is due to Lyman series-alpha hydrogen radiation at a wavelength of 121.6 nanometre (nm) ionizing nitric oxide (NO). In addition, solar flares can generate hard X-rays (wavelength &lt; 1 nm) that ionize N2 and O2. Recombination rates are high in the D layer, so there are many more neutral air molecules than ions. Medium frequency (MF) and lower high frequency (HF) radio waves are significantly attenuated within the D layer, as the passing radio waves cause electrons to move, which then collide with the neutral molecules, giving up their energy. Lower frequencies experience greater absorption because they move the electrons farther, leading to greater chance of collisions. This is the main reason for absorption of HF radio waves, particularly at 10 MHz and below, with progressively less absorption at higher frequencies. This effect peaks around noon and is reduced at night due to a decrease in the D layer's thickness; only a small part remains due to cosmic rays. A common example of the D layer in action is the disappearance of distant AM broadcast band stations in the daytime. During solar proton events, ionization can reach unusually high levels in the D-region over high and polar latitudes. Such very rare events are known as Polar Cap Absorption (or PCA) events, because the increased ionization significantly enhances the absorption of radio signals passing through the region. In fact, absorption levels can increase by many tens of dB during intense events, which is enough to absorb most (if not all) transpolar HF radio signal transmissions. Such events typically last less than 24 to 48 hours. E layer. The E layer is the middle layer, above the surface of the Earth. 
Ionization is due to soft X-ray (1–10 nm) and far ultraviolet (UV) solar radiation ionization of molecular oxygen (O2). Normally, at oblique incidence, this layer can only reflect radio waves having frequencies lower than about 10 MHz and may contribute a bit to absorption on frequencies above. However, during intense sporadic E events, the Es layer can reflect frequencies up to 50 MHz and higher. The vertical structure of the E layer is primarily determined by the competing effects of ionization and recombination. At night the E layer weakens because the primary source of ionization is no longer present. After sunset an increase in the height of the E layer maximum increases the range to which radio waves can travel by reflection from the layer. This region is also known as the Kennelly–Heaviside layer or simply the Heaviside layer. Its existence was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British physicist Oliver Heaviside (1850–1925). In 1924 its existence was detected by Edward V. Appleton and Miles Barnett. Es layer. The Es layer (sporadic E-layer) is characterized by small, thin clouds of intense ionization, which can support reflection of radio waves, frequently up to 50 MHz and rarely up to 450 MHz. Sporadic-E events may last for just a few minutes to many hours. Sporadic E propagation makes VHF-operating by radio amateurs very exciting when long-distance propagation paths that are generally unreachable "open up" to two-way communication. There are multiple causes of sporadic-E that are still being pursued by researchers. This propagation occurs every day during June and July in northern hemisphere mid-latitudes when high signal levels are often reached. The skip distances are generally around . Distances for one hop propagation can be anywhere from . Multi-hop propagation over is also common, sometimes to distances of or more. F layer. 
The F layer or region, also known as the Appleton–Barnett layer, extends from about to more than above the surface of Earth. It is the layer with the highest electron density, which implies signals penetrating this layer will escape into space. Electron production is dominated by extreme ultraviolet (UV, 10–100 nm) radiation ionizing atomic oxygen. The F layer consists of one layer (F2) at night, but during the day, a secondary peak (labelled F1) often forms in the electron density profile. Because the F2 layer remains by day and night, it is responsible for most skywave propagation of radio waves and long distance high frequency (HF, or shortwave) radio communications. Above the F layer, the number of oxygen ions decreases and lighter ions such as hydrogen and helium become dominant. This region above the F layer peak and below the plasmasphere is called the topside ionosphere. From 1972 to 1975 NASA launched the AEROS and AEROS B satellites to study the F region. Ionospheric model. An ionospheric model is a mathematical description of the ionosphere as a function of location, altitude, day of year, phase of the sunspot cycle and geomagnetic activity. Geophysically, the state of the ionospheric plasma may be described by four parameters: "electron density, electron and ion temperature" and, since several species of ions are present, "ionic composition". Radio propagation depends uniquely on electron density. Models are usually expressed as computer programs. The model may be based on basic physics of the interactions of the ions and electrons with the neutral atmosphere and sunlight, or it may be a statistical description based on a large number of observations or a combination of physics and observations. One of the most widely used models is the International Reference Ionosphere (IRI), which is based on data and specifies the four parameters just mentioned. 
The IRI is an international project sponsored by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI). The major data sources are the worldwide network of ionosondes, the powerful incoherent scatter radars (Jicamarca, Arecibo, Millstone Hill, Malvern, St Santin), the ISIS and Alouette topside sounders, and in situ instruments on several satellites and rockets. IRI is updated yearly. IRI is more accurate in describing the variation of the electron density from bottom of the ionosphere to the altitude of maximum density than in describing the total electron content (TEC). Since 1999 this model is "International Standard" for the terrestrial ionosphere (standard TS16457). Persistent anomalies to the idealized model. Ionograms allow deducing, via computation, the true shape of the different layers. Nonhomogeneous structure of the electron/ion-plasma produces rough echo traces, seen predominantly at night and at higher latitudes, and during disturbed conditions. Winter anomaly. At mid-latitudes, the F2 layer daytime ion production is higher in the summer, as expected, since the Sun shines more directly on the Earth. However, there are seasonal changes in the molecular-to-atomic ratio of the neutral atmosphere that cause the summer ion loss rate to be even higher. The result is that the increase in the summertime loss overwhelms the increase in summertime production, and total F2 ionization is actually lower in the local summer months. This effect is known as the winter anomaly. The anomaly is always present in the northern hemisphere, but is usually absent in the southern hemisphere during periods of low solar activity. Equatorial anomaly. Within approximately ± 20 degrees of the "magnetic equator", is the "equatorial anomaly." It is the occurrence of a trough in the ionization in the F2 layer at the equator and crests at about 17 degrees in magnetic latitude. 
The Earth's magnetic field lines are horizontal at the magnetic equator. Solar heating and tidal oscillations in the lower ionosphere move plasma up and across the magnetic field lines. This sets up a sheet of electric current in the E region which, with the horizontal magnetic field, forces ionization up into the F layer, concentrating at ± 20 degrees from the magnetic equator. This phenomenon is known as the "equatorial fountain". Equatorial electrojet. The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (ionospheric dynamo region) ( altitude). Resulting from this current is an electrostatic field directed west–east (dawn–dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current flow within ± 3 degrees of the magnetic equator, known as the equatorial electrojet. Ephemeral ionospheric perturbations. X-rays: sudden ionospheric disturbances (SID). When the Sun is active, strong solar flares can occur that hit the sunlit side of Earth with hard X-rays. The X-rays penetrate to the D-region, releasing electrons that rapidly increase absorption, causing a high frequency (3–30 MHz) radio blackout that can persist for many hours after strong flares. During this time very low frequency (3–30 kHz) signals will be reflected by the D layer instead of the E layer, where the increased atmospheric density will usually increase the absorption of the wave and thus dampen it. As soon as the X-rays end, the sudden ionospheric disturbance (SID) or radio black-out steadily declines as the electrons in the D-region recombine rapidly and propagation gradually returns to pre-flare conditions over minutes to hours depending on the solar flare strength and frequency. Protons: polar cap absorption (PCA). Associated with solar flares is a release of high-energy protons. 
These particles can hit the Earth within 15 minutes to 2 hours of the solar flare. The protons spiral around and down the magnetic field lines of the Earth and penetrate into the atmosphere near the magnetic poles, increasing the ionization of the D and E layers. PCAs typically last anywhere from about an hour to several days, with an average of around 24 to 36 hours. Coronal mass ejections can also release energetic protons that enhance D-region absorption in the polar regions. Storms. Geomagnetic storms and ionospheric storms are temporary and intense disturbances of the Earth's magnetosphere and ionosphere. During a geomagnetic storm the F2 layer will become unstable, fragment, and may even disappear completely. In the Northern and Southern polar regions of the Earth aurorae will be observable in the night sky. Lightning. Lightning can cause ionospheric perturbations in the D-region in one of two ways. The first is through VLF (very low frequency) radio waves launched into the magnetosphere. These so-called "whistler" mode waves can interact with radiation belt particles and cause them to precipitate onto the ionosphere, adding ionization to the D-region. These disturbances are called "lightning-induced electron precipitation" (LEP) events. The second is direct heating/ionization resulting from the huge motions of charge in lightning strikes; these events are called early/fast. In 1925, C. T. R. Wilson proposed a mechanism by which electrical discharge from lightning storms could propagate upwards from clouds to the ionosphere. Around the same time, Robert Watson-Watt, working at the Radio Research Station in Slough, UK, suggested that the ionospheric sporadic E layer (Es) appeared to be enhanced as a result of lightning but that more work was needed. In 2005, C. Davis and C. Johnson, working at the Rutherford Appleton Laboratory in Oxfordshire, UK, demonstrated that the Es layer was indeed enhanced as a result of lightning activity.
Their subsequent research has focused on the mechanism by which this process can occur. Applications. Radio communication. Due to the ability of ionized atmospheric gases to refract high frequency (HF, or shortwave) radio waves, the ionosphere can reflect radio waves directed into the sky back toward the Earth. Radio waves directed at an angle into the sky can return to Earth beyond the horizon. This technique, called "skip" or "skywave" propagation, has been used since the 1920s to communicate at international or intercontinental distances. The returning radio waves can reflect off the Earth's surface into the sky again, allowing greater ranges to be achieved with multiple hops. This communication method is variable and unreliable, with reception over a given path depending on time of day or night, the seasons, weather, and the 11-year sunspot cycle. During the first half of the 20th century it was widely used for transoceanic telephone and telegraph service, and business and diplomatic communication. Due to its relative unreliability, shortwave radio communication has been mostly abandoned by the telecommunications industry, though it remains important for high-latitude communication where satellite-based radio communication is not possible. Shortwave broadcasting is useful in crossing international boundaries and covering large areas at low cost. Automated services still use shortwave radio frequencies, as do radio amateur hobbyists for private recreational contacts and to assist with emergency communications during natural disasters. Armed forces use shortwave so as to be independent of vulnerable infrastructure, including satellites, and the low latency of shortwave communications makes it attractive to stock traders, where milliseconds count. Mechanism of refraction. When a radio wave reaches the ionosphere, the electric field in the wave forces the electrons in the ionosphere into oscillation at the same frequency as the radio wave.
Some of the radio-frequency energy is given up to this resonant oscillation. The oscillating electrons will then either be lost to recombination or will re-radiate the original wave energy. Total refraction can occur when the collision frequency of the ionosphere is less than the radio frequency, and if the electron density in the ionosphere is great enough. A qualitative understanding of how an electromagnetic wave propagates through the ionosphere can be obtained by recalling geometric optics. Since the ionosphere is a plasma, it can be shown that the refractive index is less than unity. Hence, the electromagnetic "ray" is bent away from the normal rather than toward the normal as would be indicated when the refractive index is greater than unity. It can also be shown that the refractive index of a plasma, and hence the ionosphere, is frequency-dependent, see Dispersion (optics). The critical frequency is the limiting frequency at or below which a radio wave is reflected by an ionospheric layer at vertical incidence. If the transmitted frequency is higher than the plasma frequency of the ionosphere, then the electrons cannot respond fast enough, and they are not able to re-radiate the signal. It is calculated as shown below: formula_0 where N = electron density per m3 and fcritical is in Hz. The Maximum Usable Frequency (MUF) is defined as the upper frequency limit that can be used for transmission between two points at a specified time. formula_1 where formula_2 = angle of arrival, the angle of the wave relative to the horizon, and sin is the sine function. The cutoff frequency is the frequency below which a radio wave fails to penetrate a layer of the ionosphere at the incidence angle required for transmission between two specified points by refraction from the layer. GPS/GNSS ionospheric correction. There are a number of models used to understand the effects of the ionosphere on global navigation satellite systems. 
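The critical-frequency and MUF relations above can be checked with a short calculation (the function names are illustrative, not from the original):

```python
import math

def critical_frequency(n_e: float) -> float:
    """Critical frequency in Hz for a peak electron density n_e
    (electrons per m^3), using f_critical = 9 * sqrt(N)."""
    return 9.0 * math.sqrt(n_e)

def maximum_usable_frequency(f_critical: float, alpha_deg: float) -> float:
    """MUF in Hz for a wave arriving at angle alpha (degrees above
    the horizon): f_muf = f_critical / sin(alpha)."""
    return f_critical / math.sin(math.radians(alpha_deg))

# A daytime F2-layer peak density of 1e12 electrons/m^3 gives a
# critical frequency of 9 MHz; at a 30-degree arrival angle the
# MUF doubles to 18 MHz.
f_c = critical_frequency(1e12)                 # 9.0e6 Hz
f_muf = maximum_usable_frequency(f_c, 30.0)    # ~1.8e7 Hz
```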
The Klobuchar model is currently used to compensate for ionospheric effects in GPS. This model was developed at the US Air Force Geophysical Research Laboratory circa 1974 by John (Jack) Klobuchar. The Galileo navigation system uses the NeQuick model. Galileo broadcasts three coefficients to compute the effective ionization level, which is then used by the NeQuick model to compute a range delay along the line-of-sight. Other applications. The open system electrodynamic tether, which uses the ionosphere, is being researched. The space tether uses plasma contactors and the ionosphere as parts of a circuit to extract energy from the Earth's magnetic field by electromagnetic induction. Measurements. Overview. Scientists explore the structure of the ionosphere by a wide variety of methods, including the ionosondes, incoherent scatter radars, and GNSS radio occultation techniques described below. A variety of experiments, such as HAARP (High Frequency Active Auroral Research Program), involve high power radio transmitters to modify the properties of the ionosphere. These investigations focus on studying the properties and behavior of ionospheric plasma, with particular emphasis on being able to understand and use it to enhance communications and surveillance systems for both civilian and military purposes. HAARP was started in 1993 as a proposed twenty-year experiment, and is currently active near Gakona, Alaska. The SuperDARN radar project researches the high- and mid-latitudes using coherent backscatter of radio waves in the 8 to 20 MHz range. Coherent backscatter is similar to Bragg scattering in crystals and involves the constructive interference of scattering from ionospheric density irregularities. The project involves more than 11 countries and multiple radars in both hemispheres. Scientists also examine the ionosphere by observing changes to radio waves from satellites and stars passing through it. The Arecibo Telescope, located in Puerto Rico, was originally intended to study Earth's ionosphere. Ionograms.
Ionograms show the virtual heights and critical frequencies of the ionospheric layers, which are measured by an ionosonde. An ionosonde sweeps a range of frequencies, usually from 0.1 to 30 MHz, transmitting at vertical incidence to the ionosphere. As the frequency increases, each wave is refracted less by the ionization in the layer, and so each penetrates further before it is reflected. Eventually, a frequency is reached that enables the wave to penetrate the layer without being reflected. For ordinary mode waves, this occurs when the transmitted frequency just exceeds the peak plasma, or critical, frequency of the layer. Tracings of the reflected high frequency radio pulses are known as ionograms. Reduction rules are given in: "URSI Handbook of Ionogram Interpretation and Reduction", edited by William Roy Piggott and Karl Rawer, Elsevier Amsterdam, 1961 (translations into Chinese, French, Japanese and Russian are available). Incoherent scatter radars. Incoherent scatter radars operate above the critical frequencies. Unlike ionosondes, the technique can therefore also probe the ionosphere above the electron density peaks. The thermal fluctuations of the electron density scattering the transmitted signals lack coherence, which gave the technique its name. Their power spectrum contains information not only on the density, but also on the ion and electron temperatures, ion masses and drift velocities. Incoherent scatter radars can also measure neutral atmosphere movements, such as atmospheric tides, after making assumptions about the ion–neutral collision frequency across the ionospheric dynamo region. GNSS radio occultation. Radio occultation is a remote sensing technique where a GNSS signal tangentially scrapes the Earth, passing through the atmosphere, and is received by a Low Earth Orbit (LEO) satellite. As the signal passes through the atmosphere, it is refracted, bent and delayed.
An LEO satellite samples the total electron content and bending angle of many such signal paths as it watches the GNSS satellite rise or set behind the Earth. Using an inverse Abel transform, a radial profile of refractivity at that tangent point on Earth can be reconstructed. Major GNSS radio occultation missions include GRACE, CHAMP, and COSMIC. Indices of the ionosphere. In empirical models of the ionosphere such as NeQuick, the following indices are used as indirect indicators of the state of the ionosphere. Solar intensity. F10.7 and R12 are two indices commonly used in ionospheric modelling. Both are valuable for their long historical records covering multiple solar cycles. F10.7 is a measurement of the intensity of solar radio emissions at a frequency of 2800 MHz made using a ground radio telescope. R12 is a 12-month average of daily sunspot numbers. The two indices have been shown to be correlated with each other. However, both indices are only indirect indicators of solar ultraviolet and X-ray emissions, which are primarily responsible for causing ionization in the Earth's upper atmosphere. Data from the GOES spacecraft, which measures the background X-ray flux from the Sun, provide a parameter more closely related to the ionization levels in the ionosphere. Ionospheres of other planets and natural satellites. Objects in the Solar System that have appreciable atmospheres (i.e., all of the major planets and many of the larger natural satellites) generally produce ionospheres. Planets known to have ionospheres include Venus, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. The atmosphere of Titan includes an ionosphere that ranges from about in altitude and contains carbon compounds. Ionospheres have also been observed at Io, Europa, Ganymede, and Triton.
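The inverse Abel transform used in occultation retrievals can be sanity-checked on a synthetic profile for which the answer is known in closed form (a generic numerical sketch; the grids and function names are illustrative and do not come from any particular mission's processing chain). The pair f(r) = exp(−r²) ↔ F(y) = √π·exp(−y²) serves as ground truth:

```python
import numpy as np

# Known Abel-transform pair used as ground truth:
#   f(r) = exp(-r^2)   <-->   F(y) = sqrt(pi) * exp(-y^2),
# where F plays the role of the quantity observed along tangent heights y.
y = np.linspace(0.0, 6.0, 2001)
F = np.sqrt(np.pi) * np.exp(-y**2)

def inverse_abel(F_samples, y_grid, r):
    """Evaluate f(r) = -(1/pi) * int_r^inf F'(y) / sqrt(y^2 - r^2) dy.

    The substitution y = sqrt(r^2 + s^2) turns this into
    -(1/pi) * int_0^inf F'(sqrt(r^2 + s^2)) / sqrt(r^2 + s^2) ds,
    which has no endpoint singularity and can be done by the
    trapezoid rule."""
    dFdy = np.gradient(F_samples, y_grid)      # derivative of the sampled data
    s = np.linspace(0.0, 6.0, 4001)
    yy = np.sqrt(r**2 + s**2)
    integrand = np.interp(yy, y_grid, dFdy, right=0.0) / yy
    ds = s[1] - s[0]
    trap = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * ds
    return -trap / np.pi

recovered = inverse_abel(F, y, 1.0)   # should be close to exp(-1) ~ 0.3679
```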
[ { "math_id": 0, "text": "f_{\\text{critical}} = 9 \\times\\sqrt{N}" }, { "math_id": 1, "text": "f_\\text{muf} = \\frac{f_\\text{critical}}{ \\sin \\alpha} " }, { "math_id": 2, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=15097
1509729
Order of a kernel
In statistics, the order of a kernel is the degree of its first non-zero moment. Definitions. Two major definitions of the "order of a kernel" appear in the literature: Definition 1. Let formula_0 be an integer. Then formula_1 is a "kernel of order formula_2" if the functions formula_3 are integrable and satisfy formula_4
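As an illustration (an example added here, not from the original), the moment conditions can be checked numerically for the Epanechnikov kernel K(u) = (3/4)(1 − u²) on [−1, 1]: it integrates to one, its first moment vanishes, and its second moment equals 1/5, so its first non-zero moment has degree 2.

```python
import numpy as np

def kernel_moment(K, j, a=-1.0, b=1.0, n=200001):
    """j-th moment  int u^j K(u) du  of a kernel supported on [a, b],
    approximated with the trapezoid rule on a fine grid."""
    u = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return float(np.sum(w * u**j * K(u)))

epanechnikov = lambda u: 0.75 * (1.0 - u**2)

m0 = kernel_moment(epanechnikov, 0)   # ~1.0       (integrates to one)
m1 = kernel_moment(epanechnikov, 1)   # ~0.0       (symmetric kernel)
m2 = kernel_moment(epanechnikov, 2)   # ~0.2 = 1/5 (first non-zero moment)
```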
[ { "math_id": 0, "text": " \\ell \\geq 1 " }, { "math_id": 1, "text": " K: \\mathbb{R} \\rightarrow \\mathbb{R} " }, { "math_id": 2, "text": " \\ell " }, { "math_id": 3, "text": " u\\mapsto u^{j}K(u), ~ j=0,1,...,\\ell " }, { "math_id": 4, "text": " \\int K(u)du=1, ~ \\int u^{j}K(u)du=0,~ ~j=1,...,\\ell. " } ]
https://en.wikipedia.org/wiki?curid=1509729
1509837
Reduced form
In statistics, and particularly in econometrics, the reduced form of a system of equations is the result of solving the system for the endogenous variables. This gives the latter as functions of the exogenous variables, if any. In econometrics, the equations of a structural form model are estimated in their theoretically given form, while an alternative approach to estimation is to first solve the theoretical equations for the endogenous variables to obtain reduced form equations, and then to estimate the reduced form equations. Let "Y" be the vector of the variables to be explained (endogenous variables) by a statistical model and "X" be the vector of explanatory (exogenous) variables. In addition let formula_0 be a vector of error terms. Then the general expression of a structural form is formula_1, where "f" is a function, possibly from vectors to vectors in the case of a multiple-equation model. The reduced form of this model is given by formula_2, with "g" a function. Structural and reduced forms. Exogenous variables are variables which are not determined by the system. If we assume that demand is influenced not only by price, but also by an exogenous variable, "Z", we can consider the structural supply and demand model supply:    formula_3 demand:   formula_4 where the terms formula_5 are random errors (deviations of the quantities supplied and demanded from those implied by the rest of each equation). By solving for the unknowns (endogenous variables) "P" and "Q", this structural model can be rewritten in the reduced form: formula_6 formula_7 where the parameters formula_8 depend on the parameters formula_9 of the structural model, and where the reduced form errors formula_10 each depend on the structural parameters and on both structural errors. Note that both endogenous variables depend on the exogenous variable "Z".
If the reduced form model is estimated using empirical data, obtaining estimated values for the coefficients formula_11 some of the structural parameters can be recovered: By combining the two reduced form equations to eliminate "Z", the structural coefficients of the supply side model (formula_12 and formula_13) can be derived: formula_14 formula_15 Note, however, that this still does not allow us to identify the structural parameters of the demand equation. For that, we would need an exogenous variable which is included in the supply equation of the structural model, but not in the demand equation. The general linear case. Let "y" be a column vector of "M" endogenous variables. In the case above with "Q" and "P", we had "M" = 2. Let "z" be a column vector of "K" exogenous variables; in the case above "z" consisted only of "Z". The structural linear model is formula_16 where formula_17 is a vector of structural shocks, and "A" and "B" are matrices; "A" is a square "M" × "M" matrix, while "B" is "M" × "K". The reduced form of the system is: formula_18 with vector formula_19 of reduced form errors that each depends on all structural errors, where the matrix "A" must be nonsingular for the reduced form to exist and be unique. Again, each endogenous variable depends on potentially each exogenous variable. Without restrictions on "A" and "B", the coefficients of "A" and "B" cannot be identified from data on "y" and "z": each row of the structural model is just a linear relation between "y" and "z" with unknown coefficients. (This is again the parameter identification problem.) The "M" reduced form equations (the rows of the matrix equation "y" = Π "z" above) can be identified from the data because each of them contains only one endogenous variable.
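The recovery of the supply-side coefficients described above can be checked by simulation (a self-contained sketch; the parameter values, noise levels, and sample size are arbitrary choices, not from the original): fit the two reduced-form regressions of Q and P on Z by ordinary least squares, then apply the recovery formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True structural parameters: supply Q = a_S + b_S*P + u_S,
# demand Q = a_D + b_D*P + c*Z + u_D.
a_S, b_S = 1.0, 0.5
a_D, b_D, c = 4.0, -1.0, 2.0

Z = rng.normal(size=n)
u_S = 0.1 * rng.normal(size=n)
u_D = 0.1 * rng.normal(size=n)

# Solve the structural system for the endogenous variables P and Q.
P = (a_D - a_S + c * Z + u_D - u_S) / (b_S - b_D)
Q = a_S + b_S * P + u_S

# Estimate the two reduced-form equations by OLS (slope, intercept).
pi11, pi10 = np.polyfit(Z, Q, 1)   # Q = pi10 + pi11*Z + e_Q
pi21, pi20 = np.polyfit(Z, P, 1)   # P = pi20 + pi21*Z + e_P

# Recover the supply-side structural coefficients.
b_S_hat = pi11 / pi21                               # ~0.5
a_S_hat = (pi10 * pi21 - pi11 * pi20) / pi21        # ~1.0
```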
[ { "math_id": 0, "text": " \\varepsilon " }, { "math_id": 1, "text": " f(Y, X, \\varepsilon) = 0 " }, { "math_id": 2, "text": " Y = g(X, \\varepsilon) " }, { "math_id": 3, "text": " Q = a_S + b_S P + u_S, " }, { "math_id": 4, "text": " Q = a_D + b_D P + c Z + u_D, " }, { "math_id": 5, "text": "u_i" }, { "math_id": 6, "text": " Q = \\pi_{10} + \\pi_{11} Z + e_Q, " }, { "math_id": 7, "text": " P = \\pi_{20} + \\pi_{21} Z + e_P, " }, { "math_id": 8, "text": "\\pi_{ij}" }, { "math_id": 9, "text": "a_i , b_i, c" }, { "math_id": 10, "text": "e_i" }, { "math_id": 11, "text": "\\pi_{ij}," }, { "math_id": 12, "text": "a_S" }, { "math_id": 13, "text": "b_S" }, { "math_id": 14, "text": " a_S = (\\pi_{10}\\pi_{21} - \\pi_{11}\\pi_{20}) / \\pi_{21} ," }, { "math_id": 15, "text": " b_S = \\pi_{11} / \\pi_{21} ." }, { "math_id": 16, "text": " A y = B z + v, " }, { "math_id": 17, "text": "v" }, { "math_id": 18, "text": " y = A^{-1}Bz+ A^{-1}v = \\Pi z + w, " }, { "math_id": 19, "text": "w" } ]
https://en.wikipedia.org/wiki?curid=1509837
15098681
Cone of curves
In mathematics, the cone of curves (sometimes the Kleiman-Mori cone) of an algebraic variety formula_0 is a combinatorial invariant of importance to the birational geometry of formula_0. Definition. Let formula_0 be a proper variety. By definition, a (real) "1-cycle" on formula_0 is a formal linear combination formula_1 of irreducible, reduced and proper curves formula_2, with coefficients formula_3. "Numerical equivalence" of 1-cycles is defined by intersections: two 1-cycles formula_4 and formula_5 are numerically equivalent if formula_6 for every Cartier divisor formula_7 on formula_0. Denote the real vector space of 1-cycles modulo numerical equivalence by formula_8. We define the "cone of curves" of formula_0 to be formula_9 where the formula_2 are irreducible, reduced, proper curves on formula_0, and formula_10 their classes in formula_8. It is not difficult to see that formula_11 is indeed a convex cone in the sense of convex geometry. Applications. One useful application of the notion of the cone of curves is the Kleiman condition, which says that a (Cartier) divisor formula_7 on a complete variety formula_0 is ample if and only if formula_12 for any nonzero element formula_13 in formula_14, the closure of the cone of curves in the usual real topology. (In general, formula_11 need not be closed, so taking the closure here is important.) A more involved example is the role played by the cone of curves in the theory of minimal models of algebraic varieties. Briefly, the goal of that theory is as follows: given a (mildly singular) projective variety formula_0, find a (mildly singular) variety formula_15 which is birational to formula_0, and whose canonical divisor formula_16 is nef. 
The great breakthrough of the early 1980s (due to Mori and others) was to construct (at least morally) the necessary birational map from formula_0 to formula_15 as a sequence of steps, each of which can be thought of as contraction of a formula_17-negative extremal ray of formula_11. This process encounters difficulties, however, whose resolution necessitates the introduction of the flip. A structure theorem. The above process of contractions could not proceed without the fundamental result on the structure of the cone of curves known as the Cone Theorem. The first version of this theorem, for smooth varieties, is due to Mori; it was later generalised to a larger class of varieties by Kawamata, Kollár, Reid, Shokurov, and others. Mori's version of the theorem is as follows: Cone Theorem. Let formula_0 be a smooth projective variety. Then 1. There are countably many rational curves formula_2 on formula_0, satisfying formula_18, and formula_19 2. For any positive real number formula_20 and any ample divisor formula_21, formula_22 where the sum in the last term is finite. The first assertion says that, in the closed half-space of formula_8 where intersection with formula_17 is nonnegative, we know nothing, but in the complementary half-space, the cone is spanned by some countable collection of curves which are quite special: they are rational, and their 'degree' is bounded very tightly by the dimension of formula_0. The second assertion then tells us more: it says that, away from the hyperplane formula_23, extremal rays of the cone cannot accumulate. When formula_0 is a Fano variety, formula_24 because formula_25 is ample. So the cone theorem shows that the cone of curves of a Fano variety is generated by rational curves. If in addition the variety formula_0 is defined over a field of characteristic 0, we have the following assertion, sometimes referred to as the Contraction Theorem: 3. 
Let formula_26 be an extremal face of the cone of curves on which formula_17 is negative. Then there is a unique morphism formula_27 to a projective variety "Z", such that formula_28 and an irreducible curve formula_4 in formula_0 is mapped to a point by formula_29 if and only if formula_30. (See also: contraction morphism).
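A standard worked example, added here for illustration (it is not part of the original text), shows the Cone Theorem, the Kleiman condition, and the contraction morphism all at once on the quadric surface:

```latex
% Example: X = \mathbb{P}^1 \times \mathbb{P}^1, a smooth projective surface.
% Let F_1, F_2 be the classes of the two rulings, with intersection numbers
%   F_1\cdot F_1 = F_2\cdot F_2 = 0, \qquad F_1\cdot F_2 = 1.
% Then
%   N_1(X) \cong \mathbb{R}^2, \qquad
%   \overline{NE(X)} = \mathbb{R}_{\ge 0}[F_1] + \mathbb{R}_{\ge 0}[F_2],
% a closed cone with exactly two extremal rays.
% A divisor D = a F_1 + b F_2 satisfies D\cdot F_1 = b and D\cdot F_2 = a,
% so by the Kleiman condition D is ample if and only if a > 0 and b > 0.
% Since K_X = -2F_1 - 2F_2, we have K_X\cdot F_i = -2 < 0: both extremal
% rays are K_X-negative and spanned by rational curves, as the Cone Theorem
% predicts, and contracting either ray gives one of the two projections
%   X \longrightarrow \mathbb{P}^1.
```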
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "C=\\sum a_iC_i" }, { "math_id": 2, "text": "C_i" }, { "math_id": 3, "text": "a_i \\in \\mathbb{R}" }, { "math_id": 4, "text": "C" }, { "math_id": 5, "text": "C'" }, { "math_id": 6, "text": "C \\cdot D = C' \\cdot D" }, { "math_id": 7, "text": "D" }, { "math_id": 8, "text": "N_1(X)" }, { "math_id": 9, "text": "NE(X) = \\left\\{\\sum a_i[C_i], \\ 0 \\leq a_i \\in \\mathbb{R} \\right\\} " }, { "math_id": 10, "text": "[C_i]" }, { "math_id": 11, "text": "NE(X)" }, { "math_id": 12, "text": "D \\cdot x > 0" }, { "math_id": 13, "text": "x" }, { "math_id": 14, "text": "\\overline{NE(X)}" }, { "math_id": 15, "text": "X'" }, { "math_id": 16, "text": "K_{X'}" }, { "math_id": 17, "text": "K_X" }, { "math_id": 18, "text": "0< -K_X \\cdot C_i \\leq \\operatorname{dim} X +1 " }, { "math_id": 19, "text": "\\overline{NE(X)} = \\overline{NE(X)}_{K_X\\geq 0} + \\sum_i \\mathbf{R}_{\\geq0} [C_i]." }, { "math_id": 20, "text": "\\epsilon" }, { "math_id": 21, "text": "H" }, { "math_id": 22, "text": "\\overline{NE(X)} = \\overline{NE(X)}_{K_X+\\epsilon H\\geq0} + \\sum \\mathbf{R}_{\\geq0} [C_i]," }, { "math_id": 23, "text": "\\{C : K_X \\cdot C = 0\\}" }, { "math_id": 24, "text": "\\overline{NE(X)}_{K_X\\geq 0} = 0 " }, { "math_id": 25, "text": " -K_X " }, { "math_id": 26, "text": "F \\subset \\overline{NE(X)}" }, { "math_id": 27, "text": "\\operatorname{cont}_F : X \\rightarrow Z" }, { "math_id": 28, "text": "(\\operatorname{cont}_F)_* \\mathcal{O}_X = \\mathcal{O}_Z" }, { "math_id": 29, "text": "\\operatorname{cont}_F" }, { "math_id": 30, "text": "[C] \\in F" } ]
https://en.wikipedia.org/wiki?curid=15098681
151001
C-symmetry
Symmetry of physical laws under a charge-conjugation transformation In physics, charge conjugation is a transformation that switches all particles with their corresponding antiparticles, thus changing the sign of all charges: not only electric charge but also the charges relevant to other forces. The term C-symmetry is an abbreviation of the phrase "charge conjugation symmetry", and is used in discussions of the symmetry of physical laws under charge-conjugation. Other important discrete symmetries are P-symmetry (parity) and T-symmetry (time reversal). These discrete symmetries, C, P and T, are symmetries of the equations that describe the known fundamental forces of nature: electromagnetism, gravity, the strong and the weak interactions. Verifying whether some given mathematical equation correctly models nature requires giving physical interpretation not only to continuous symmetries, such as motion in time, but also to its discrete symmetries, and then determining whether nature adheres to these symmetries. Unlike the continuous symmetries, the interpretation of the discrete symmetries is a bit more intellectually demanding and confusing. An early surprise appeared in the 1950s, when Chien-Shiung Wu demonstrated that the weak interaction violated P-symmetry. For several decades, it appeared that the combined symmetry CP was preserved, until CP-violating interactions were discovered. Both discoveries led to Nobel prizes. The C-symmetry is particularly troublesome, physically, as the universe is primarily filled with matter, not anti-matter, whereas the naive C-symmetry of the physical laws suggests that there should be equal amounts of both. It is currently believed that CP-violation during the early universe can account for the "excess" matter, although the debate is not settled.
Earlier textbooks on cosmology, predating the 1970s, routinely suggested that perhaps distant galaxies were made entirely of anti-matter, thus maintaining a net balance of zero in the universe. This article focuses on exposing and articulating the C-symmetry of various important equations and theoretical systems, including the Dirac equation and the structure of quantum field theory. The various fundamental particles can be classified according to behavior under charge conjugation; this is described in the article on C-parity. Informal overview. Charge conjugation occurs as a symmetry in three different but closely related settings: a symmetry of the (classical, non-quantized) solutions of several notable differential equations, including the Klein–Gordon equation and the Dirac equation, a symmetry of the corresponding quantum fields, and in a general setting, a symmetry in (pseudo-)Riemannian geometry. In all three cases, the symmetry is ultimately revealed to be a symmetry under complex conjugation, although exactly what is being conjugated where can be at times obfuscated, depending on notation, coordinate choices and other factors. In classical fields. The charge conjugation symmetry is interpreted as that of electrical charge, because in all three cases (classical, quantum and geometry), one can construct Noether currents that resemble those of classical electrodynamics. This arises because electrodynamics itself, via Maxwell's equations, can be interpreted as a structure on a U(1) fiber bundle, the so-called circle bundle. This provides a geometric interpretation of electromagnetism: the electromagnetic potential formula_0 is interpreted as the gauge connection (the Ehresmann connection) on the circle bundle. This geometric interpretation then allows (literally almost) anything possessing a complex-number-valued structure to be coupled to the electromagnetic field, provided that this coupling is done in a gauge-invariant way. 
Gauge symmetry, in this geometric setting, is a statement that, as one moves around on the circle, the coupled object must also transform in a "circular way", tracking in a corresponding fashion. More formally, one says that the equations must be gauge invariant under a change of local coordinate frames on the circle. For U(1), this is just the statement that the system is invariant under multiplication by a phase factor formula_1 that depends on the (space-time) coordinate formula_2 In this geometric setting, charge conjugation can be understood as the discrete symmetry formula_3 that performs complex conjugation, reversing the sense of direction around the circle. In quantum theory. In quantum field theory, charge conjugation can be understood as the exchange of particles with anti-particles. To understand this statement, one must have a minimal understanding of what quantum field theory is. In (vastly) simplified terms, it is a technique for performing calculations to obtain solutions for a system of coupled differential equations via perturbation theory. A key ingredient to this process is the quantum field, one for each of the (free, uncoupled) differential equations in the system. A quantum field is conventionally written as formula_4 where formula_5 is the momentum, formula_6 is a spin label, formula_7 is an auxiliary label for other states in the system. The formula_8 and formula_9 are creation and annihilation operators (ladder operators) and formula_10 are solutions to the (free, non-interacting, uncoupled) differential equation in question. The quantum field plays a central role because, in general, it is not known how to obtain exact solutions to the system of coupled differential equations. However, via perturbation theory, approximate solutions can be constructed as combinations of the free-field solutions. To perform this construction, one has to be able to extract and work with any one given free-field solution, on-demand, when required.
The quantum field provides exactly this: it enumerates all possible free-field solutions in a vector space such that any one of them can be singled out at any given time, via the creation and annihilation operators. The creation and annihilation operators obey the canonical commutation relations, in that the one operator "undoes" what the other "creates". This implies that any given solution formula_11 must be paired with its "anti-solution" formula_12 so that one undoes or cancels out the other. The pairing is to be performed so that all symmetries are preserved. As one is generally interested in Lorentz invariance, the quantum field contains an integral over all possible Lorentz coordinate frames, written above as an integral over all possible momenta (it is an integral over the fiber of the frame bundle). The pairing requires that a given formula_13 is associated with a formula_14 of the opposite momentum and energy. The quantum field is also a sum over all possible spin states; the dual pairing again matching opposite spins. Likewise for any other quantum numbers, these are also paired as opposites. There is a technical difficulty in carrying out this dual pairing: one must describe what it means for some given solution formula_15 to be "dual to" some other solution formula_16 and to describe it in such a way that it remains consistently dual when integrating over the fiber of the frame bundle, when integrating (summing) over the fiber that describes the spin, and when integrating (summing) over any other fibers that occur in the theory. When the fiber to be integrated over is the U(1) fiber of electromagnetism, the dual pairing is such that the direction (orientation) on the fiber is reversed. When the fiber to be integrated over is the SU(3) fiber of the color charge, the dual pairing again reverses orientation. This "just works" for SU(3) because it has two dual fundamental representations formula_17 and formula_18 which can be naturally paired. 
This prescription for a quantum field naturally generalizes to any situation where one can enumerate the continuous symmetries of the system, and define duals in a coherent, consistent fashion. The pairing ties together opposite charges in the fully abstract sense. In physics, a charge is associated with a generator of a continuous symmetry. Different charges are associated with different eigenspaces of the Casimir invariants of the universal enveloping algebra for those symmetries. This is the case for "both" the Lorentz symmetry of the underlying spacetime manifold, "as well as" the symmetries of any fibers in the fiber bundle posed above the spacetime manifold. Duality replaces the generator of the symmetry with minus the generator. Charge conjugation is thus associated with reflection along the line bundle or determinant bundle of the space of symmetries. The above then is a sketch of the general idea of a quantum field in quantum field theory. The physical interpretation is that solutions formula_11 correspond to particles, and solutions formula_12 correspond to antiparticles, and so charge conjugation is a pairing of the two. This sketch also provides enough hints to indicate what charge conjugation might look like in a general geometric setting. There is no particular forced requirement to use perturbation theory, to construct quantum fields that will act as middle-men in a perturbative expansion. Charge conjugation can be given a general setting. In geometry. For general Riemannian and pseudo-Riemannian manifolds, one has a tangent bundle, a cotangent bundle and a metric that ties the two together. There are several interesting things one can do, when presented with this situation. One is that the smooth structure allows differential equations to be posed on the manifold; the tangent and cotangent spaces provide enough structure to perform calculus on manifolds. 
Of key interest is the Laplacian, and, with a constant term, what amounts to the Klein–Gordon operator. Cotangent bundles, by their basic construction, are always symplectic manifolds. Symplectic manifolds have canonical coordinates formula_19 interpreted as position and momentum, obeying canonical commutation relations. This provides the core infrastructure to extend duality, and thus charge conjugation, to this general setting. A second interesting thing one can do is to construct a spin structure. Perhaps the most remarkable thing about this is that it is a very recognizable generalization to a formula_20-dimensional pseudo-Riemannian manifold of the conventional physics concept of spinors living on a (1,3)-dimensional Minkowski spacetime. The construction passes through a complexified Clifford algebra to build a Clifford bundle and a spin manifold. At the end of this construction, one obtains a system that is remarkably familiar, if one is already acquainted with Dirac spinors and the Dirac equation. Several analogies pass through to this general case. First, the spinors are the Weyl spinors, and they come in complex-conjugate pairs. They are naturally anti-commuting (this follows from the Clifford algebra), which is exactly what one wants to make contact with the Pauli exclusion principle. Another is the existence of a chiral element, analogous to the gamma matrix formula_21 which sorts these spinors into left and right-handed subspaces. The complexification is a key ingredient, and it provides "electromagnetism" in this generalized setting. The spinor bundle doesn't "just" transform under the pseudo-orthogonal group formula_22, the generalization of the Lorentz group formula_23, but under a bigger group, the complexified spin group formula_24 It is bigger in that it has a double covering by formula_25 The formula_26 piece can be identified with electromagnetism in several different ways. 
One way is that the Dirac operators on the spin manifold, when squared, contain a piece formula_27 with formula_28 arising from that part of the connection associated with the formula_26 piece. This is entirely analogous to what happens when one squares the ordinary Dirac equation in ordinary Minkowski spacetime. A second hint is that this formula_26 piece is associated with the determinant bundle of the spin structure, effectively tying together the left and right-handed spinors through complex conjugation. What remains is to work through the discrete symmetries of the above construction. There are several that appear to generalize P-symmetry and T-symmetry. Identifying the formula_29 dimensions with time, and the formula_30 dimensions with space, one can reverse the tangent vectors in the formula_29 dimensional subspace to get time reversal, and flipping the direction of the formula_30 dimensions corresponds to parity. The C-symmetry can be identified with the reflection on the line bundle. To tie all of these together into a knot, one finally has the concept of transposition, in that elements of the Clifford algebra can be written in reversed (transposed) order. The net result is that not only do the conventional physics ideas of fields pass over to the general Riemannian setting, but also the ideas of the discrete symmetries. There are two ways to react to this. One is to treat it as an interesting curiosity. The other is to realize that, in low dimensions (in low-dimensional spacetime) there are many "accidental" isomorphisms between various Lie groups and other assorted structures. Being able to examine them in a general setting disentangles these relationships, exposing more clearly "where things come from". Charge conjugation for Dirac fields. The laws of electromagnetism (both classical and quantum) are invariant under the exchange of electrical charges with their negatives. 
For the case of electrons and quarks, both of which are fundamental particle fermion fields, the single-particle field excitations are described by the Dirac equation formula_31 One wishes to find a charge-conjugate solution formula_32 A handful of algebraic manipulations are sufficient to obtain the second from the first. Standard expositions of the Dirac equation demonstrate a conjugate field formula_33 interpreted as an anti-particle field, satisfying the complex-transposed Dirac equation formula_34 Note that some but not all of the signs have flipped. Transposing this back again gives almost the desired form, provided that one can find a 4×4 matrix formula_35 that transposes the gamma matrices to insert the required sign-change: formula_36 The charge conjugate solution is then given by the involution formula_37 The 4×4 matrix formula_38 called the charge conjugation matrix, has an explicit form given in the article on gamma matrices. Curiously, this form is not representation-independent, but depends on the specific matrix representation chosen for the gamma group (the subgroup of the Clifford algebra capturing the algebraic properties of the gamma matrices). This matrix is representation dependent due to a subtle interplay involving the complexification of the spin group describing the Lorentz covariance of charged particles. The complex number formula_39 is an arbitrary phase factor formula_40 generally taken to be formula_41 Charge conjugation, chirality, helicity. The interplay between chirality and charge conjugation is a bit subtle, and requires articulation. It is often said that charge conjugation does not alter the chirality of particles. This is not the case for "fields", the difference arising in the "hole theory" interpretation of particles, where an anti-particle is interpreted as the absence of a particle. This is articulated below. Conventionally, formula_21 is used as the chirality operator. 
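The defining relation formula_36 can be checked numerically. The sketch below is a NumPy illustration assuming the Dirac representation and the common convention C = iγ²γ⁰ (conventions for C differ by a phase, which does not affect the relation); it verifies the sign-flipping transposition for all four gamma matrices:

```python
import numpy as np

# Pauli matrices and 2x2 blocks.
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Gamma matrices in the Dirac representation.
gamma = [np.block([[I2, Z2], [Z2, -I2]])]
gamma += [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]

# One common convention for the charge conjugation matrix in this basis.
C = 1j * gamma[2] @ gamma[0]

Cinv = np.linalg.inv(C)
for g in gamma:
    # The defining property: C^{-1} gamma_mu C = -gamma_mu^T.
    assert np.allclose(Cinv @ g @ C, -g.T)
```

In this representation C also squares to minus the identity, so the inverse could equally have been written as −C.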
Under charge conjugation, it transforms as formula_42 and whether or not formula_43 equals formula_21 depends on the chosen representation for the gamma matrices. In the Dirac and chiral bases, one does have that formula_44, while formula_45 is obtained in the Majorana basis. A worked example follows. Weyl spinors. For the case of massless Dirac spinor fields, chirality is equal to helicity for the positive energy solutions (and minus the helicity for negative energy solutions) (§ 2-4-3, page 87 ff). One obtains this by writing the massless Dirac equation as formula_46 Multiplying by formula_47 one obtains formula_48 where formula_49 is the angular momentum operator and formula_50 is the totally antisymmetric tensor. This can be brought to a slightly more recognizable form by defining the 3D spin operator formula_51 taking a plane-wave state formula_52, applying the on-shell constraint that formula_53 and normalizing the momentum to be a 3D unit vector: formula_54 to write formula_55 Examining the above, one concludes that angular momentum eigenstates (helicity eigenstates) correspond to eigenstates of the chiral operator. This allows the massless Dirac field to be cleanly split into a pair of Weyl spinors formula_56 and formula_57 each individually satisfying the Weyl equation, but with opposite energy: formula_58 and formula_59 Note the freedom one has to equate negative helicity with negative energy, and thus the anti-particle with the particle of opposite helicity. To be clear, the formula_6 here are the Pauli matrices, and formula_60 is the momentum operator. Charge conjugation in the chiral basis. Taking the Weyl representation of the gamma matrices, one may write a (now taken to be massive) Dirac spinor as formula_61 The corresponding dual (anti-particle) field is formula_62 The charge-conjugate spinors are formula_63 where, as before, formula_39 is a phase factor that can be taken to be formula_41 Note that the left and right states are inter-changed.
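The inter-change of the left and right components in formula_63 can be confirmed numerically. In this NumPy sketch (the helper name `conjugate` is illustrative, and formula_41 is assumed), a purely left-handed spinor is conjugated and only its right-handed block survives:

```python
import numpy as np

rng = np.random.default_rng(0)
s2 = np.array([[0, -1j], [1j, 0]])
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g0 = np.block([[Z2, I2], [I2, Z2]])            # gamma^0, Weyl basis
C = np.block([[-1j * s2, Z2], [Z2, 1j * s2]])  # as in the text

def conjugate(psi, eta_c=1.0):
    # psi^c = eta_c C (psi^dagger gamma^0)^T = eta_c C g0 psi*
    # (g0 is symmetric in this basis); anti-linear via .conj().
    return eta_c * C @ (g0 @ psi.conj())

# A purely left-handed spinor: only the upper two components populated.
psi_L = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = np.concatenate([psi_L, np.zeros(2)])

psi_c = conjugate(psi)
assert np.allclose(psi_c[:2], 0)                       # left block empty
assert np.allclose(psi_c[2:], 1j * s2 @ psi_L.conj())  # right block filled
```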
This can be restored with a parity transformation. Under parity, the Dirac spinor transforms as formula_64 Under combined charge and parity, one then has formula_65 Conventionally, one takes formula_66 globally. See however, the note below. Majorana condition. The Majorana condition imposes a constraint between the field and its charge conjugate, namely that they must be equal: formula_67 This is perhaps best stated as the requirement that the Majorana spinor must be an eigenstate of the charge conjugation involution. Doing so requires some notational care. In many texts discussing charge conjugation, the involution formula_68 is not given an explicit symbolic name, when applied to "single-particle solutions" of the Dirac equation. This is in contrast to the case when the "quantized field" is discussed, where a unitary operator formula_69 is defined (as done in a later section, below). For the present section, let the involution be named as formula_70 so that formula_71 Taking this to be a linear operator, one may consider its eigenstates. The Majorana condition singles out one such: formula_72 There are, however, two such eigenstates: formula_73 Continuing in the Weyl basis, as above, these eigenstates are formula_74 and formula_75 The Majorana spinor is conventionally taken as just the positive eigenstate, namely formula_76 The chiral operator formula_21 exchanges these two, in that formula_77 This is readily verified by direct substitution. Bear in mind that formula_78 "does not have" a 4×4 matrix representation! More precisely, there is no complex 4×4 matrix that can take a complex number to its complex conjugate; this inversion would require an 8×8 real matrix. The physical interpretation of complex conjugation as charge conjugation becomes clear when considering the complex conjugation of scalar fields, described in a subsequent section below. 
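The eigenstate structure above can be verified directly in the Weyl basis. The following NumPy sketch (an illustration; η_c = 1 and the basis conventions above are assumed) checks that formula_74 and formula_75 are the ±1 eigenstates of the involution, and that formula_21 exchanges the two eigenspaces:

```python
import numpy as np

rng = np.random.default_rng(1)
s2 = np.array([[0, -1j], [1j, 0]])
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g0 = np.block([[Z2, I2], [I2, Z2]])             # Weyl basis
g5 = np.block([[-I2, Z2], [Z2, I2]])            # chirality operator
C = np.block([[-1j * s2, Z2], [Z2, 1j * s2]])

def involution(psi):
    # The anti-linear map psi -> psi^c (eta_c = 1 assumed).
    return C @ g0 @ psi.conj()

chi = rng.normal(size=2) + 1j * rng.normal(size=2)

psi_plus = np.concatenate([chi, 1j * s2 @ chi.conj()])   # Majorana
psi_minus = np.concatenate([1j * s2 @ chi.conj(), chi])

assert np.allclose(involution(psi_plus), psi_plus)     # eigenvalue +1
assert np.allclose(involution(psi_minus), -psi_minus)  # eigenvalue -1

# The chiral operator exchanges the two eigenspaces: applied to a
# +1 eigenstate it produces a -1 eigenstate.
assert np.allclose(involution(g5 @ psi_plus), -(g5 @ psi_plus))
```

Note that the involution is implemented as a function rather than a matrix: the `.conj()` step is anti-linear, which is precisely why no complex 4×4 matrix can represent it.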
The projectors onto the chiral eigenstates can be written as formula_79 and formula_80 and so the above translates to formula_81 This directly demonstrates that charge conjugation, applied to single-particle complex-number-valued solutions of the Dirac equation, flips the chirality of the solution. The projectors onto the charge conjugation eigenspaces are formula_82 and formula_83 Geometric interpretation. The phase factor formula_84 can be given a geometric interpretation. It has been noted that, for massive Dirac spinors, the "arbitrary" phase factor formula_84 may depend on both the momentum, and the helicity (but not the chirality). This can be interpreted as saying that this phase may vary along the fiber of the spinor bundle, depending on the local choice of a coordinate frame. Put another way, a spinor field is a local section of the spinor bundle, and Lorentz boosts and rotations correspond to movements along the fibers of the corresponding frame bundle (again, just a choice of local coordinate frame). Examined in this way, this extra phase freedom can be interpreted as the phase arising from the electromagnetic field. For the Majorana spinors, the phase would be constrained to not vary under boosts and rotations. Charge conjugation for quantized fields. The above describes charge conjugation for the single-particle solutions only. When the Dirac field is second-quantized, as in quantum field theory, the spinor and electromagnetic fields are described by operators. The charge conjugation involution then manifests as a unitary operator formula_69 (in calligraphic font) acting on the particle fields, expressed as formula_85 formula_86 formula_87 where the non-calligraphic formula_88 is the same 4×4 matrix given before. Charge reversal in electroweak theory. Charge conjugation does not alter the chirality of particles. A left-handed neutrino would be taken by charge conjugation into a left-handed antineutrino, which does not interact in the Standard Model.
This property is what is meant by the "maximal violation" of C-symmetry in the weak interaction. Some postulated extensions of the Standard Model, like left-right models, restore this C-symmetry. Scalar fields. The Dirac field has a "hidden" formula_26 gauge freedom, allowing it to couple directly to the electromagnetic field without any further modifications to the Dirac equation or the field itself. This is not the case for scalar fields, which must be explicitly "complexified" to couple to electromagnetism. This is done by "tensoring in" an additional factor of the complex plane formula_89 into the field, or constructing a Cartesian product with formula_26. One very conventional technique is simply to start with two real scalar fields, formula_90 and formula_91 and create a linear combination formula_92 The charge conjugation involution is then the mapping formula_93 since this is sufficient to reverse the sign on the electromagnetic potential (since this complex number is being used to couple to it). For real scalar fields, charge conjugation is just the identity map: formula_94 and formula_95 and so, for the complexified field, charge conjugation is just formula_96 The "mapsto" arrow formula_97 is convenient for tracking "what goes where"; the equivalent older notation is simply to write formula_98 and formula_99 and formula_100 The above describes the conventional construction of a charged scalar field. It is also possible to introduce additional algebraic structure into the fields in other ways. In particular, one may define a "real" field behaving as formula_101. As it is real, it cannot couple to electromagnetism by itself, but, when complexified, would result in a charged field that transforms as formula_102 Because C-symmetry is a discrete symmetry, one has some freedom to play these kinds of algebraic games in the search for a theory that correctly models some given physical reality. 
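That the mapping formula_96 reverses the sign of the electromagnetic coupling can be seen on the conserved U(1) current. In this NumPy sketch (a single-mode illustration in natural units; the `charge_density` helper is hypothetical), complex conjugation of a positive-frequency mode flips the sign of its charge density:

```python
import numpy as np

omega = 2.0     # mode frequency, natural units
h = 1e-6        # step for the central-difference time derivative

def charge_density(psi, t):
    # rho = i (psi* d_t psi - psi d_t psi*), evaluated numerically.
    d_psi = (psi(t + h) - psi(t - h)) / (2 * h)
    return (1j * (np.conj(psi(t)) * d_psi - psi(t) * np.conj(d_psi))).real

psi = lambda t: np.exp(-1j * omega * t)   # positive-frequency mode
psi_c = lambda t: np.conj(psi(t))         # its charge conjugate

rho = charge_density(psi, 0.7)
rho_c = charge_density(psi_c, 0.7)

assert np.isclose(rho, 2 * omega, rtol=1e-4)  # charge density +2*omega
assert np.isclose(rho_c, -rho)                # conjugation flips the sign
```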
In physics literature, a transformation such as formula_103 might be written without any further explanation. The formal mathematical interpretation of this is that the field formula_90 is an element of formula_104 where formula_105 Thus, properly speaking, the field should be written as formula_106 which behaves under charge conjugation as formula_107 It is very tempting, but not quite formally correct, to just multiply these out, to move around the location of this minus sign; this mostly "just works", but a failure to track it properly will lead to confusion. Combination of charge and parity reversal. It was believed for some time that C-symmetry could be combined with the parity-inversion transformation (see P-symmetry) to preserve a combined CP-symmetry. However, violations of this symmetry have been identified in the weak interactions (particularly in the kaons and B mesons). In the Standard Model, this CP violation is due to a single phase in the CKM matrix. If CP is combined with time reversal (T-symmetry), the resulting CPT-symmetry can be shown using only the Wightman axioms to be universally obeyed. In general settings. The analog of charge conjugation can be defined for higher-dimensional gamma matrices, with an explicit construction for Weyl spinors given in the article on Weyl–Brauer matrices. Note, however, that spinors as defined abstractly in the representation theory of Clifford algebras are not fields; rather, they should be thought of as existing on a zero-dimensional spacetime. The analog of T-symmetry follows from formula_108 as the T-conjugation operator for Dirac spinors. Spinors also have an inherent P-symmetry, obtained by reversing the direction of all of the basis vectors of the Clifford algebra from which the spinors are constructed. The relationship to the P and T symmetries for a fermion field on a spacetime manifold is a bit subtle, but can be roughly characterized as follows.
When a spinor is constructed via a Clifford algebra, the construction requires a vector space on which to build. By convention, this vector space is the tangent space of the spacetime manifold at a given, fixed spacetime point (a single fiber in the tangent manifold). P and T operations applied to the spacetime manifold can then be understood as also flipping the coordinates of the tangent space; thus, the two are glued together. Flipping the parity or the direction of time in one also flips it in the other. This is a convention. One can become unglued by failing to propagate this connection. This is done by taking the tangent space as a vector space, extending it to a tensor algebra, and then using an inner product on the vector space to define a Clifford algebra. Treating each such algebra as a fiber, one obtains a fiber bundle called the Clifford bundle. Under a change of basis of the tangent space, elements of the Clifford algebra transform according to the spin group. Building a principal fiber bundle with the spin group as the fiber results in a spin structure. All that is missing in the above paragraphs are the spinors themselves. These require the "complexification" of the tangent manifold: tensoring it with the complex plane. Once this is done, the Weyl spinors can be constructed. These have the form formula_109 where the formula_110 are the basis vectors for the vector space formula_111, the tangent space at point formula_112 in the spacetime manifold formula_113 The Weyl spinors, together with their complex conjugates span the tangent space, in the sense that formula_114 The alternating algebra formula_115 is called the spinor space; it is where the spinors live, as well as products of spinors (thus, objects with higher spin values, including vectors and tensors).
[ { "math_id": 0, "text": "A_\\mu" }, { "math_id": 1, "text": "e^{i\\phi(x)}" }, { "math_id": 2, "text": "x." }, { "math_id": 3, "text": "z = (x + iy) \\mapsto \\overline z = (x - iy)" }, { "math_id": 4, "text": "\\psi(x) = \\int d^3p \\sum_{\\sigma,n} \n e^{-ip\\cdot x} a\\left(\\vec p, \\sigma, n\\right) u\\left(\\vec p, \\sigma, n\\right) + \n e^{ip\\cdot x} a^\\dagger\\left(\\vec p, \\sigma, n\\right) v\\left(\\vec p, \\sigma, n\\right)\n" }, { "math_id": 5, "text": "\\vec p" }, { "math_id": 6, "text": "\\sigma" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "a" }, { "math_id": 9, "text": "a^\\dagger" }, { "math_id": 10, "text": "u, v" }, { "math_id": 11, "text": "u\\left(\\vec p, \\sigma, n\\right)" }, { "math_id": 12, "text": "v\\left(\\vec p, \\sigma, n\\right)" }, { "math_id": 13, "text": "u\\left(\\vec p\\right)" }, { "math_id": 14, "text": "v\\left(\\vec p\\right)" }, { "math_id": 15, "text": "u" }, { "math_id": 16, "text": "v," }, { "math_id": 17, "text": "\\mathbf{3}" }, { "math_id": 18, "text": "\\overline\\mathbf{3}" }, { "math_id": 19, "text": "x,p" }, { "math_id": 20, "text": "(p,q)" }, { "math_id": 21, "text": "\\gamma_5" }, { "math_id": 22, "text": "SO(p,q)" }, { "math_id": 23, "text": "SO(1,3)" }, { "math_id": 24, "text": "\\mathrm{Spin}^\\mathbb{C}(p,q)." }, { "math_id": 25, "text": "SO(p,q)\\times U(1)." 
}, { "math_id": 26, "text": "U(1)" }, { "math_id": 27, "text": "F=dA" }, { "math_id": 28, "text": "A" }, { "math_id": 29, "text": "p" }, { "math_id": 30, "text": "q" }, { "math_id": 31, "text": "(i{\\partial\\!\\!\\!\\big /} - q{A\\!\\!\\!\\big /} - m) \\psi = 0" }, { "math_id": 32, "text": "(i{\\partial\\!\\!\\!\\big /} + q{A\\!\\!\\!\\big /} - m) \\psi^c = 0" }, { "math_id": 33, "text": "\\overline\\psi = \\psi^\\dagger\\gamma^0," }, { "math_id": 34, "text": "\\overline\\psi(-i{\\partial\\!\\!\\!\\big /} - q{A\\!\\!\\!\\big /} - m) = 0 " }, { "math_id": 35, "text": "C" }, { "math_id": 36, "text": "C^{-1}\\gamma_\\mu C = -\\gamma_\\mu^\\textsf{T}" }, { "math_id": 37, "text": "\\psi \\mapsto \\psi^c=\\eta_c\\, C\\overline\\psi^\\textsf{T}" }, { "math_id": 38, "text": "C," }, { "math_id": 39, "text": "\\eta_c" }, { "math_id": 40, "text": "|\\eta_c|=1," }, { "math_id": 41, "text": "\\eta_c=1." }, { "math_id": 42, "text": "C\\gamma_5 C^{-1} = \\gamma_5^\\textsf{T}" }, { "math_id": 43, "text": "\\gamma_5^\\textsf{T}" }, { "math_id": 44, "text": "\\gamma_5^\\textsf{T} = \\gamma_5" }, { "math_id": 45, "text": "\\gamma_5^\\textsf{T} = -\\gamma_5" }, { "math_id": 46, "text": "i\\partial\\!\\!\\!\\big /\\psi = 0 " }, { "math_id": 47, "text": "\\gamma^5\\gamma^0 = -i\\gamma^1\\gamma^2\\gamma^3" }, { "math_id": 48, "text": "{\\epsilon_{ij}}^m\\sigma^{ij}\\partial_m \\psi = \\gamma_5 \\partial_t \\psi" }, { "math_id": 49, "text": "\\sigma^{\\mu\\nu} = i\\left[\\gamma^\\mu, \\gamma^\\nu\\right]/2" }, { "math_id": 50, "text": "\\epsilon_{ijk}" }, { "math_id": 51, "text": "\\Sigma^m\\equiv {\\epsilon_{ij}}^m\\sigma^{ij}," }, { "math_id": 52, "text": "\\psi(x) = e^{-ik\\cdot x}\\psi(k)" }, { "math_id": 53, "text": "k \\cdot k = 0" }, { "math_id": 54, "text": "{\\hat k}_i = k_i/k_0" }, { "math_id": 55, "text": "\\left(\\Sigma \\cdot \\hat k\\right) \\psi = \\gamma_5 \\psi~." 
}, { "math_id": 56, "text": "\\psi_\\text{L}" }, { "math_id": 57, "text": "\\psi_\\text{R}," }, { "math_id": 58, "text": "\\left(-p_0 + \\sigma\\cdot\\vec p\\right)\\psi_\\text{R} = 0" }, { "math_id": 59, "text": "\\left(p_0 + \\sigma\\cdot\\vec p\\right)\\psi_\\text{L} = 0" }, { "math_id": 60, "text": "p_\\mu = i\\partial_\\mu" }, { "math_id": 61, "text": "\\psi = \\begin{pmatrix} \\psi_\\text{L}\\\\ \\psi_\\text{R} \\end{pmatrix}" }, { "math_id": 62, "text": "\\overline{\\psi}^\\textsf{T}\n = \\left( \\psi^\\dagger \\gamma^0 \\right)^\\textsf{T}\n = \\begin{pmatrix} 0 & I \\\\ I & 0\\end{pmatrix} \\begin{pmatrix} \\psi_\\text{L}^* \\\\ \\psi_\\text{R}^* \\end{pmatrix}\n = \\begin{pmatrix} \\psi_\\text{R}^* \\\\ \\psi_\\text{L}^* \\end{pmatrix}\n" }, { "math_id": 63, "text": "\\psi^c\n = \\begin{pmatrix} \\psi_\\text{L}^c\\\\ \\psi_\\text{R}^c \\end{pmatrix} \n = \\eta_c C \\overline\\psi^\\textsf{T}\n = \\eta_c \\begin{pmatrix} -i\\sigma^2 & 0 \\\\ 0 & i\\sigma^2\\end{pmatrix} \\begin{pmatrix} \\psi_\\text{R}^* \\\\ \\psi_\\text{L}^* \\end{pmatrix}\n = \\eta_c \\begin{pmatrix} -i\\sigma^2\\psi_\\text{R}^* \\\\ i\\sigma^2\\psi_\\text{L}^* \\end{pmatrix}\n" }, { "math_id": 64, "text": "\\psi\\left(t, \\vec x\\right) \\mapsto \\psi^p\\left(t, \\vec x\\right) = \\gamma^0 \\psi\\left(t, -\\vec x\\right)" }, { "math_id": 65, "text": "\\psi\\left(t, \\vec x\\right) \\mapsto \\psi^{cp}\\left(t, \\vec x\\right) \n = \\begin{pmatrix} \\psi_\\text{L}^{cp} \\left(t, \\vec x\\right)\\\\ \\psi_\\text{R}^{cp}\\left(t,\\vec x\\right) \\end{pmatrix} \n = \\eta_c \\begin{pmatrix} -i\\sigma^2\\psi_\\text{L}^*\\left(t, -\\vec x\\right) \\\\ i\\sigma^2\\psi_\\text{R}^*\\left(t, -\\vec x\\right) \\end{pmatrix}" }, { "math_id": 66, "text": "\\eta_c = 1" }, { "math_id": 67, "text": "\\psi = \\psi^c." 
}, { "math_id": 68, "text": "\\psi\\mapsto\\psi^c" }, { "math_id": 69, "text": "\\mathcal{C}" }, { "math_id": 70, "text": "\\mathsf{C}:\\psi\\mapsto\\psi^c" }, { "math_id": 71, "text": "\\mathsf{C}\\psi = \\psi^c." }, { "math_id": 72, "text": "\\mathsf{C}\\psi = \\psi." }, { "math_id": 73, "text": "\\mathsf{C}\\psi^{(\\pm)} = \\pm \\psi^{(\\pm)}." }, { "math_id": 74, "text": "\\psi^{(+)} = \\begin{pmatrix} \\psi_\\text{L}\\\\ i\\sigma^2\\psi_\\text{L}^* \\end{pmatrix}" }, { "math_id": 75, "text": "\\psi^{(-)} = \\begin{pmatrix} i\\sigma^2\\psi_\\text{R}^*\\\\ \\psi_\\text{R} \\end{pmatrix}" }, { "math_id": 76, "text": "\\psi^{(+)}." }, { "math_id": 77, "text": "\\gamma_5\\mathsf{C} = - \\mathsf{C}\\gamma_5" }, { "math_id": 78, "text": "\\mathsf{C}" }, { "math_id": 79, "text": "P_\\text{L} = \\left(1 - \\gamma_5\\right)/2" }, { "math_id": 80, "text": "P_\\text{R} = \\left(1 + \\gamma_5\\right)/2," }, { "math_id": 81, "text": "P_\\text{L}\\mathsf{C} = \\mathsf{C}P_\\text{R}~." }, { "math_id": 82, "text": "P^{(+)} = (1 + \\mathsf{C})P_\\text{L}" }, { "math_id": 83, "text": "P^{(-)} = (1 - \\mathsf{C})P_\\text{R}." 
}, { "math_id": 84, "text": "\\ \\eta_c\\ " }, { "math_id": 85, "text": "\\psi \\mapsto \\psi^c = \\mathcal{C}\\ \\psi\\ \\mathcal{C}^\\dagger = \\eta_c\\ C\\ \\overline\\psi^\\textsf{T}" }, { "math_id": 86, "text": "\\overline\\psi \\mapsto \\overline\\psi^c = \\mathcal{C}\\ \\overline\\psi\\ \\mathcal{C}^\\dagger = \\eta^*_c\\ \\psi^\\textsf{T}\\ C^{-1}" }, { "math_id": 87, "text": "A_\\mu \\mapsto A^c_\\mu = \\mathcal{C}\\ A_\\mu\\ \\mathcal{C}^\\dagger = -A_\\mu\\ " }, { "math_id": 88, "text": "\\ C\\ " }, { "math_id": 89, "text": "\\mathbb{C}" }, { "math_id": 90, "text": "\\phi" }, { "math_id": 91, "text": "\\chi" }, { "math_id": 92, "text": "\\psi \\mathrel\\stackrel{\\mathrm{def}}{=} {\\phi + i\\chi \\over \\sqrt{2}}" }, { "math_id": 93, "text": "\\mathsf{C}:i\\mapsto -i" }, { "math_id": 94, "text": "\\mathsf{C}:\\phi\\mapsto \\phi" }, { "math_id": 95, "text": "\\mathsf{C}:\\chi\\mapsto \\chi" }, { "math_id": 96, "text": "\\mathsf{C}:\\psi\\mapsto \\psi^*." }, { "math_id": 97, "text": "\\mapsto" }, { "math_id": 98, "text": "\\mathsf{C}\\phi=\\phi" }, { "math_id": 99, "text": "\\mathsf{C}\\chi = \\chi" }, { "math_id": 100, "text": "\\mathsf{C}\\psi = \\psi^*." }, { "math_id": 101, "text": "\\mathsf{C}:\\phi\\mapsto -\\phi" }, { "math_id": 102, "text": "\\mathsf{C}:\\psi\\mapsto -\\psi^*." }, { "math_id": 103, "text": "\\mathsf{C}:\\phi \\mapsto \\phi^c = -\\phi" }, { "math_id": 104, "text": "\\mathbb{R}\\times\\mathbb{Z}_2" }, { "math_id": 105, "text": "\\mathbb{Z}_2 = \\{+1, -1\\}." }, { "math_id": 106, "text": "\\phi = (r, c)" }, { "math_id": 107, "text": "\\mathsf{C}: (r, c) \\mapsto (r, -c)." }, { "math_id": 108, "text": "\\gamma^1\\gamma^3" }, { "math_id": 109, "text": "w_j = \\frac{1}{\\sqrt{2}}\\left(e_{2j} - ie_{2j+1}\\right)" }, { "math_id": 110, "text": "e_j" }, { "math_id": 111, "text": "V=T_pM" }, { "math_id": 112, "text": "p\\in M" }, { "math_id": 113, "text": "M." 
}, { "math_id": 114, "text": "V \\otimes \\mathbb{C} = W\\oplus \\overline W" }, { "math_id": 115, "text": "\\wedge W" } ]
https://en.wikipedia.org/wiki?curid=151001
151013
T-symmetry
Time reversal symmetry in physics T-symmetry or time reversal symmetry is the theoretical symmetry of physical laws under the transformation of time reversal, formula_0 Since the second law of thermodynamics states that entropy increases as time flows toward the future, in general, the macroscopic universe does not show symmetry under time reversal. In other words, time is said to be non-symmetric, or asymmetric, except for special equilibrium states when the second law of thermodynamics predicts the time symmetry to hold. However, quantum noninvasive measurements are predicted to violate time symmetry even in equilibrium, contrary to their classical counterparts, although this has not yet been experimentally confirmed. Time "asymmetries" (see Arrow of time) generally are caused by one of three categories: those intrinsic to the dynamic physical law (e.g., for the weak force), those due to the initial conditions of the universe (e.g., for the second law of thermodynamics), and those due to measurements (e.g., for the noninvasive measurements). Macroscopic phenomena. The second law of thermodynamics. Daily experience shows that T-symmetry does not hold for the behavior of bulk materials. Of these macroscopic laws, most notable is the second law of thermodynamics. Many other phenomena, such as the relative motion of bodies with friction, or viscous motion of fluids, reduce to this, because the underlying mechanism is the dissipation of usable energy (for example, kinetic energy) into heat. The question of whether this time-asymmetric dissipation is really inevitable has been considered by many physicists, often in the context of Maxwell's demon. The name comes from a thought experiment described by James Clerk Maxwell in which a microscopic demon guards a gate between two halves of a room. It only lets slow molecules into one half, only fast ones into the other. By eventually making one side of the room cooler than before and the other hotter, it seems to reduce the entropy of the room, and reverse the arrow of time. Many analyses have been made of this; all show that when the entropy of room and demon are taken together, this total entropy does increase.
Modern analyses of this problem have taken into account Claude E. Shannon's relation between entropy and information. Many interesting results in modern computing are closely related to this problem—reversible computing, quantum computing and physical limits to computing, are examples. These seemingly metaphysical questions are today, in these ways, slowly being converted into hypotheses of the physical sciences. The current consensus hinges upon the Boltzmann–Shannon identification of the logarithm of phase space volume with the negative of Shannon information, and hence to entropy. In this notion, a fixed initial state of a macroscopic system corresponds to relatively low entropy because the coordinates of the molecules of the body are constrained. As the system evolves in the presence of dissipation, the molecular coordinates can move into larger volumes of phase space, becoming more uncertain, and thus leading to increase in entropy. Big Bang. One resolution to irreversibility is to say that the constant increase of entropy we observe happens "only" because of the initial state of our universe. Other possible states of the universe (for example, a universe at heat death equilibrium) would actually result in no increase of entropy. In this view, the apparent T-asymmetry of our universe is a problem in cosmology: why did the universe start with a low entropy? This view, supported by cosmological observations (such as the isotropy of the cosmic microwave background) connects this problem to the question of "initial conditions" of the universe. Black holes. The laws of gravity seem to be time reversal invariant in classical mechanics; however, specific solutions need not be. An object can cross through the event horizon of a black hole from the outside, and then fall rapidly to the central region where our understanding of physics breaks down. 
Since within a black hole the forward light-cone is directed towards the center and the backward light-cone is directed outward, it is not even possible to define time-reversal in the usual manner. The only way anything can escape from a black hole is as Hawking radiation. The time reversal of a black hole would be a hypothetical object known as a white hole. From the outside they appear similar. While a black hole has a beginning and is inescapable, a white hole has an ending and cannot be entered. The forward light-cones of a white hole are directed outward; and its backward light-cones are directed towards the center. The event horizon of a black hole may be thought of as a surface moving outward at the local speed of light and is just on the edge between escaping and falling back. The event horizon of a white hole is a surface moving inward at the local speed of light and is just on the edge between being swept outward and succeeding in reaching the center. They are two different kinds of horizons—the horizon of a white hole is like the horizon of a black hole turned inside-out. The modern view of black hole irreversibility is to relate it to the second law of thermodynamics, since black holes are viewed as thermodynamic objects. For example, according to the gauge–gravity duality conjecture, all microscopic processes in a black hole are reversible, and only the collective behavior is irreversible, as in any other macroscopic, thermal system. Kinetic consequences: detailed balance and Onsager reciprocal relations. In physical and chemical kinetics, T-symmetry of the mechanical microscopic equations implies two important laws: the principle of detailed balance and the Onsager reciprocal relations. T-symmetry of the microscopic description together with its kinetic consequences are called microscopic reversibility. Effect of time reversal on some variables of classical physics. Even. 
Classical variables that do not change upon time reversal include:
formula_1, position of a particle in three-space
formula_2, acceleration of the particle
formula_3, force on the particle
formula_4, energy of the particle
formula_5, electric potential (voltage)
formula_6, electric field
formula_7, electric displacement
formula_8, density of electric charge
formula_9, electric polarization
Energy density of the electromagnetic field
formula_10, Maxwell stress tensor
All masses, charges, coupling constants, and other physical constants, except those associated with the weak force.
Odd.
Classical variables that time reversal negates include:
formula_11, the time when an event occurs
formula_12, velocity of a particle
formula_13, linear momentum of a particle
formula_14, angular momentum of a particle (both orbital and spin)
formula_15, electromagnetic vector potential
formula_16, magnetic field
formula_17, magnetic auxiliary field
formula_18, density of electric current
formula_19, magnetization
formula_20, Poynting vector
formula_21, power (rate of work done).
Example: Magnetic Field and Onsager reciprocal relations.
Let us consider the example of a system of charged particles subject to a constant external magnetic field: in this case the canonical time reversal operation that reverses the velocities and the time formula_11 and keeps the coordinates untouched is no longer a symmetry for the system. Under this consideration, it seems that only Onsager–Casimir reciprocal relations could hold; these equalities relate two different systems, one subject to formula_16 and another to formula_22, and so their utility is limited. However, it has been proved that it is possible to find other time reversal operations which preserve the dynamics and hence the Onsager reciprocal relations; in conclusion, one cannot state that the presence of a magnetic field always breaks T-symmetry.
Microscopic phenomena: time reversal invariance.
Most systems are asymmetric under time reversal, but there may be phenomena with symmetry. In classical mechanics, a velocity "v" reverses under the operation of "T", but an acceleration does not. Therefore, one models dissipative phenomena through terms that are odd in "v". However, delicate experiments in which known sources of dissipation are removed reveal that the laws of mechanics are time reversal invariant. Dissipation itself originates in the second law of thermodynamics. The motion of a charged body in a magnetic field, "B" involves the velocity through the Lorentz force term "v"×"B", and might seem at first to be asymmetric under "T". A closer look assures us that "B" also changes sign under time reversal. This happens because a magnetic field is produced by an electric current, "J", which reverses sign under "T". Explicitly, the Lorentz force is unchanged: "q" "v"×"B" → "q"(−"v")×(−"B") = "q" "v"×"B". Thus, the motion of classical charged particles in electromagnetic fields is also time reversal invariant. (Despite this, it is still useful to consider the time-reversal non-invariance in a "local" sense when the external field is held fixed, as when the magneto-optic effect is analyzed. This allows one to analyze the conditions under which optical phenomena that locally break time-reversal, such as Faraday isolators and directional dichroism, can occur.) In physics one separates the laws of motion, called kinematics, from the laws of force, called dynamics. Following the classical kinematics of Newton's laws of motion, the kinematics of quantum mechanics is built in such a way that it presupposes nothing about the time reversal symmetry of the dynamics. In other words, if the dynamics are invariant, then the kinematics will allow it to remain invariant; if the dynamics is not, then the kinematics will also show this. The structure of the quantum laws of motion is richer, and we examine these next. Time reversal in quantum mechanics.
This section contains a discussion of the three most important properties of time reversal in quantum mechanics; chiefly, that it must be represented as an anti-unitary operator, that it protects non-degenerate quantum states from having an electric dipole moment, and that it has two-dimensional representations with the property "T"2 = −1 (for fermions). The strangeness of this result is clear if one compares it with parity. If parity transforms a pair of quantum states into each other, then the sum and difference of these two basis states are states of good parity. Time reversal does not behave like this. It seems to violate the theorem that all abelian groups be represented by one-dimensional irreducible representations. The reason it does this is that it is represented by an anti-unitary operator. It thus opens the way to spinors in quantum mechanics. On the other hand, the notion of quantum-mechanical time reversal turns out to be a useful tool for the development of physically motivated quantum computing and simulation settings, providing, at the same time, relatively simple tools to assess their complexity. For instance, quantum-mechanical time reversal was used to develop novel boson sampling schemes and to prove the duality between two fundamental optical operations, beam splitter and squeezing transformations. Formal notation. In formal mathematical presentations of T-symmetry, three different kinds of notation for T need to be carefully distinguished: the T that is an involution, capturing the actual reversal of the time coordinate, the T that is an ordinary finite dimensional matrix, acting on spinors and vectors, and the T that is an operator on an infinite-dimensional Hilbert space. For a real (not complex) classical (unquantized) scalar field formula_23, the time reversal involution can simply be written as formula_24 as time reversal leaves the scalar value at a fixed spacetime point unchanged, up to an overall sign formula_25.
A slightly more formal way to write this is formula_26 which has the advantage of emphasizing that formula_27 is a map, and thus the "mapsto" notation formula_28 whereas formula_29 is a factual statement relating the old and new fields to one another. Unlike scalar fields, spinor and vector fields formula_30 might have a non-trivial behavior under time reversal. In this case, one has to write formula_31 where formula_32 is just an ordinary matrix. For complex fields, complex conjugation may be required, for which the mapping formula_33 can be thought of as a 2x2 matrix. For a Dirac spinor, formula_32 cannot be written as a 4x4 matrix, because, in fact, complex conjugation is indeed required; however, it can be written as an 8x8 matrix, acting on the 8 real components of a Dirac spinor. In the general setting, there is no "ab initio" value to be given for formula_32; its actual form depends on the specific equation or equations which are being examined. In general, one simply states that the equations must be time-reversal invariant, and then solves for the explicit value of formula_32 that achieves this goal. In some cases, generic arguments can be made. Thus, for example, for spinors in three-dimensional Euclidean space, or four-dimensional Minkowski space, an explicit transformation can be given. It is conventionally given as formula_34 where formula_35 is the y-component of the angular momentum operator and formula_36 is complex conjugation, as before. This form follows whenever the spinor can be described with a linear differential equation that is first-order in the time derivative, which is generally the case in order for something to be validly called "a spinor". The formal notation now makes it clear how to extend time-reversal to an arbitrary tensor field formula_37 In this case, formula_38 Covariant tensor indices will transform as formula_39 and so on.
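For a spin-1/2 particle, the conventional form formula_34 can be evaluated explicitly; the following worked sketch (not part of the original text) assumes units with ħ = 1, so that the angular momentum operator is J_y = σ_y/2:

```latex
% Exponentiating J_y = \sigma_y / 2 gives a rotation by \pi about the y-axis:
e^{i\pi J_y} = e^{i\pi \sigma_y / 2}
  = \cos\tfrac{\pi}{2}\,\mathbb{1} + i \sin\tfrac{\pi}{2}\,\sigma_y
  = i\sigma_y
  = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},
\qquad T = i\sigma_y K .
% Applying T twice, K complex-conjugates the matrix it passes through;
% since \sigma_y^{*} = -\sigma_y, one has (i\sigma_y)^{*} = i\sigma_y, so:
T^2 = (i\sigma_y)\,(i\sigma_y)^{*} = (i\sigma_y)(i\sigma_y) = -\sigma_y^2 = -\mathbb{1}.
```

This recovers the property "T"2 = −1 for half-integer spin that underlies Kramers' theorem, discussed below.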
For quantum fields, there is also a third T, written as formula_40 which is actually an infinite dimensional operator acting on a Hilbert space. It acts on quantized fields formula_41 as formula_42 This can be thought of as a special case of a tensor with one covariant and one contravariant index, and thus two formula_43's are required. All three of these symbols capture the idea of time-reversal; they differ with respect to the specific space that is being acted on: functions, vectors/spinors, or infinite-dimensional operators. The remainder of this article does not carefully distinguish these three; the "T" that appears below is meant to be either formula_27 or formula_32 or formula_40 depending on context, left for the reader to infer. Anti-unitary representation of time reversal. Eugene Wigner showed that a symmetry operation "S" of a Hamiltonian is represented, in quantum mechanics, either by a unitary operator, "S" = "U", or an antiunitary one, "S" = "UK", where "U" is unitary, and "K" denotes complex conjugation. These are the only operations that act on Hilbert space so as to preserve the "length" of the projection of any one state-vector onto another state-vector. Consider the parity operator. Acting on the position, it reverses the directions of space, so that "PxP"−1 = −"x". Similarly, it reverses the direction of "momentum", so that "PpP"−1 = −"p", where "x" and "p" are the position and momentum operators. This preserves the canonical commutator ["x", "p"] = "iħ", where "ħ" is the reduced Planck constant, only if "P" is chosen to be unitary, "PiP"−1 = "i". On the other hand, the "time reversal" operator "T" does nothing to the x-operator, "TxT"−1 = "x", but it reverses the direction of p, so that "TpT"−1 = −"p". The canonical commutator is invariant only if "T" is chosen to be anti-unitary, i.e., "TiT"−1 = −"i". Another argument involves energy, the time-component of the four-momentum.
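The commutator argument just given can be restated in two lines (a sketch of the reasoning, using only relations stated above):

```latex
% If T were unitary, it would leave i\hbar untouched; but with
% T x T^{-1} = x and T p T^{-1} = -p the commutator flips sign:
T\,[x,p]\,T^{-1} = [x,\,-p] = -i\hbar,
  \quad\text{while unitary } T \text{ gives } T\,(i\hbar)\,T^{-1} = +i\hbar
  \quad\text{(contradiction)} .
% With T anti-unitary, T i T^{-1} = -i, and both sides agree:
T\,[x,p]\,T^{-1} = [x,\,-p] = -i\hbar = T\,(i\hbar)\,T^{-1} .
```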
If time reversal were implemented as a unitary operator, it would reverse the sign of the energy just as space-reversal reverses the sign of the momentum. This is not possible, because, unlike momentum, energy is always positive. Since energy in quantum mechanics is defined as the phase factor exp(−"iEt") that one gets when one moves forward in time, the way to reverse time while preserving the sign of the energy is to also reverse the sense of "i", so that the sense of phases is reversed. Similarly, any operation that reverses the sense of phase, which changes the sign of "i", will turn positive energies into negative energies unless it also changes the direction of time. So every antiunitary symmetry in a theory with positive energy must reverse the direction of time. Every antiunitary operator can be written as the product of the time reversal operator and a unitary operator that does not reverse time. For a particle with spin "J", one can use the representation formula_44 where "J""y" is the "y"-component of the spin, and use of "TJT"−1 = −"J" has been made. Electric dipole moments. This has an interesting consequence for the electric dipole moment (EDM) of any particle. The EDM is defined through the shift in the energy of a state when it is put in an external electric field: Δ"e" = "d"·"E" + "E"·δ·"E", where "d" is called the EDM and δ, the induced dipole moment. One important property of an EDM is that the energy shift due to it changes sign under a parity transformation. However, since "d" is a vector, its expectation value in a state |ψ⟩ must be proportional to ⟨ψ| "J" |ψ⟩, that is, the expected spin. Thus, under time reversal, an invariant state must have vanishing EDM. In other words, a non-vanishing EDM signals both "P" and "T" symmetry-breaking. Some molecules, such as water, must have an EDM irrespective of whether "T" is a symmetry.
This is correct; if a quantum system has degenerate ground states that transform into each other under parity, then time reversal need not be broken to give an EDM. Experimentally observed bounds on the electric dipole moment of the nucleon currently set stringent limits on the violation of time reversal symmetry in the strong interactions, and their modern theory: quantum chromodynamics. Then, using the CPT invariance of a relativistic quantum field theory, this puts strong bounds on strong CP violation. Experimental bounds on the electron electric dipole moment also place limits on theories of particle physics and their parameters. Kramers' theorem. For "T", which is an anti-unitary "Z"2 symmetry generator: "T"2 = "UKUK" = "UU"* = "U" ("U"T)−1 = Φ, where Φ is a diagonal matrix of phases. As a result, "U" = Φ"U"T and "U"T = "U"Φ, showing that "U" = Φ "U" Φ. This means that the entries in Φ are ±1, as a result of which one may have either "T"2 = ±1. This is specific to the anti-unitarity of "T". For a unitary operator, such as the parity, any phase is allowed. Next, take a Hamiltonian invariant under "T". Let |"a"⟩ and "T"|"a"⟩ be two quantum states of the same energy. Now, if "T"2 = −1, then one finds that the states are orthogonal: anti-unitarity gives ⟨"T"φ|"T"ψ⟩ = ⟨ψ|φ⟩, so ⟨"T""a"|"T"2"a"⟩ = ⟨"T""a"|"a"⟩, while "T"2 = −1 gives ⟨"T""a"|"T"2"a"⟩ = −⟨"T""a"|"a"⟩, so that ⟨"T""a"|"a"⟩ = 0. This result is called Kramers' theorem. It implies that if "T"2 = −1, then there is a twofold degeneracy in the state. This result in non-relativistic quantum mechanics presages the spin statistics theorem of quantum field theory. Quantum states that give unitary representations of time reversal, i.e., have "T"2 = 1, are characterized by a multiplicative quantum number, sometimes called the T-parity. Time reversal of the known dynamical laws. Particle physics codified the basic laws of dynamics into the standard model. This is formulated as a quantum field theory that has CPT symmetry, i.e., the laws are invariant under simultaneous operation of time reversal, parity and charge conjugation.
However, time reversal itself is seen not to be a symmetry (this is usually called CP violation). There are two possible origins of this asymmetry, one through the mixing of different flavours of quarks in their weak decays, the second through a direct CP violation in strong interactions. The first is seen in experiments, while the second is strongly constrained by the non-observation of the EDM of a neutron. Time reversal violation is unrelated to the second law of thermodynamics, because due to the conservation of the CPT symmetry, the effect of time reversal is to rename particles as antiparticles and "vice versa". Thus the second law of thermodynamics is thought to originate in the initial conditions in the universe. Time reversal of noninvasive measurements. Strong measurements (both classical and quantum) are certainly disturbing, causing asymmetry due to the second law of thermodynamics. However, noninvasive measurements should not disturb the evolution, so they are expected to be time-symmetric. Surprisingly, this is true only in classical physics but not in quantum physics, even in a thermodynamically invariant equilibrium state. This type of asymmetry is independent of CPT symmetry but has not yet been confirmed experimentally, owing to the extreme conditions required by the proposed test. References. Inline citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " T: t \\mapsto -t." }, { "math_id": 1, "text": "\\vec x" }, { "math_id": 2, "text": "\\vec a" }, { "math_id": 3, "text": "\\vec F" }, { "math_id": 4, "text": "E" }, { "math_id": 5, "text": "V" }, { "math_id": 6, "text": "\\vec E" }, { "math_id": 7, "text": "\\vec D" }, { "math_id": 8, "text": "\\rho" }, { "math_id": 9, "text": "\\vec P" }, { "math_id": 10, "text": "T_{ij}" }, { "math_id": 11, "text": "t" }, { "math_id": 12, "text": "\\vec v" }, { "math_id": 13, "text": "\\vec p" }, { "math_id": 14, "text": "\\vec l" }, { "math_id": 15, "text": "\\vec A" }, { "math_id": 16, "text": "\\vec B" }, { "math_id": 17, "text": "\\vec H" }, { "math_id": 18, "text": "\\vec j" }, { "math_id": 19, "text": "\\vec M" }, { "math_id": 20, "text": "\\vec S" }, { "math_id": 21, "text": "\\mathcal{P}" }, { "math_id": 22, "text": "-\\vec B" }, { "math_id": 23, "text": "\\phi" }, { "math_id": 24, "text": "\\mathsf{T} \\phi(t,\\vec{x}) = \\phi^\\prime(-t,\\vec{x}) = s\\phi(t,\\vec{x})" }, { "math_id": 25, "text": "s=\\pm 1" }, { "math_id": 26, "text": "\\mathsf{T}: \\phi(t,\\vec{x}) \\mapsto \\phi^\\prime(-t,\\vec{x}) = s\\phi(t,\\vec{x})" }, { "math_id": 27, "text": "\\mathsf{T}" }, { "math_id": 28, "text": "\\mapsto ~," }, { "math_id": 29, "text": "\\phi^\\prime(-t,\\vec{x}) = s\\phi(t,\\vec{x})" }, { "math_id": 30, "text": "\\psi" }, { "math_id": 31, "text": "\\mathsf{T}: \\psi(t,\\vec{x}) \\mapsto \\psi^\\prime(-t,\\vec{x}) = T\\psi(t,\\vec{x})" }, { "math_id": 32, "text": "T" }, { "math_id": 33, "text": "K: (x+iy) \\mapsto (x-iy)" }, { "math_id": 34, "text": "T=e^{i\\pi J_y}K" }, { "math_id": 35, "text": "J_y" }, { "math_id": 36, "text": "K" }, { "math_id": 37, "text": "\\psi_{abc\\cdots}" }, { "math_id": 38, "text": "\\mathsf{T}: \\psi_{abc\\cdots}(t,\\vec{x}) \\mapsto \\psi_{abc\\cdots}^\\prime(-t,\\vec{x}) = {T_a}^d \\,{T_b}^e \\,{T_c}^f \\cdots \\psi_{def\\cdots}(t,\\vec{x})" }, { "math_id": 39, "text": "{T_a}^b = {(T^{-1})_b}^a" }, { "math_id": 40, 
"text": "\\mathcal{T}," }, { "math_id": 41, "text": "\\Psi" }, { "math_id": 42, "text": "\\mathsf{T}: \\Psi(t,\\vec{x}) \\mapsto \\Psi^\\prime(-t,\\vec{x}) = \\mathcal{T} \\Psi(t,\\vec{x}) \\mathcal{T}^{-1}" }, { "math_id": 43, "text": "\\mathcal{T}" }, { "math_id": 44, "text": "T = e^{-i\\pi J_y/\\hbar} K," } ]
https://en.wikipedia.org/wiki?curid=151013
1510134
Autoregressive integrated moving average
Statistical model used in time series analysis In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. To better comprehend the data or to forecast upcoming series points, both of these models are fitted to time series data. ARIMA models are applied in some cases where data show evidence of non-stationarity in the sense of expected value (but not variance/autocovariance), where an initial differencing step (corresponding to the "integrated" part of the model) can be applied one or more times to eliminate the non-stationarity of the mean function (i.e., the trend). When seasonality appears in a time series, seasonal differencing can be applied to eliminate the seasonal component. Since the ARMA model, according to Wold's decomposition theorem, is theoretically sufficient to describe a regular (a.k.a. purely nondeterministic) wide-sense stationary time series, we are motivated to make a non-stationary time series stationary, e.g., by differencing, before we can use the ARMA model. Note that if the time series contains a predictable sub-process (a.k.a. pure sine or complex-valued exponential process), the predictable component is treated as a non-zero-mean but periodic (i.e., seasonal) component in the ARIMA framework so that it is eliminated by the seasonal differencing. The autoregressive (AR) part of ARIMA indicates that the evolving variable of interest is regressed on its own lagged (i.e., prior) values. The moving average (MA) part indicates that the regression error is actually a linear combination of error terms whose values occurred contemporaneously and at various times in the past. The I (for "integrated") indicates that the data values have been replaced with the difference between their values and the previous values (and this differencing process may have been performed more than once).
The purpose of each of these features is to make the model fit the data as well as possible. Non-seasonal ARIMA models are generally denoted ARIMA("p","d","q") where parameters "p", "d", and "q" are non-negative integers, "p" is the order (number of time lags) of the autoregressive model, "d" is the degree of differencing (the number of times the data have had past values subtracted), and "q" is the order of the moving-average model. Seasonal ARIMA models are usually denoted ARIMA("p","d","q")("P","D","Q")"m", where "m" refers to the number of periods in each season, and the uppercase "P","D","Q" refer to the autoregressive, differencing, and moving average terms for the seasonal part of the ARIMA model. When two out of the three terms are zeros, the model may be referred to based on the non-zero parameter, dropping "AR", "I" or "MA" from the acronym describing the model. For example, ARIMA(1,0,0) is AR(1), ARIMA(0,1,0) is I(1), and ARIMA(0,0,1) is MA(1). ARIMA models can be estimated following the Box–Jenkins approach. Definition. Given time series data "X""t" where "t" is an integer index and the "X""t" are real numbers, an formula_0 model is given by formula_1 or equivalently by formula_2 where formula_3 is the lag operator, the formula_4 are the parameters of the autoregressive part of the model, the formula_5 are the parameters of the moving average part and the formula_6 are error terms. The error terms formula_6 are generally assumed to be independent, identically distributed variables sampled from a normal distribution with zero mean. Assume now that the polynomial formula_7 has a unit root (a factor formula_8) of multiplicity "d".
Then it can be rewritten as: formula_9 An ARIMA("p","d","q") process expresses this polynomial factorisation property with "p"="p'−d", and is given by: formula_10 and thus can be thought of as a particular case of an ARMA("p+d","q") process having the autoregressive polynomial with "d" unit roots. (For this reason, no process that is accurately described by an ARIMA model with "d" &gt; 0 is wide-sense stationary.) The above can be generalized as follows. formula_11 This defines an ARIMA("p","d","q") process with drift formula_12. Other special forms. The explicit identification of the factorization of the autoregression polynomial into factors as above can be extended to other cases, firstly to apply to the moving average polynomial and secondly to include other special factors. For example, having a factor formula_13 in a model is one way of including a non-stationary seasonality of period "s" into the model; this factor has the effect of re-expressing the data as changes from "s" periods ago. Another example is the factor formula_14, which includes a (non-stationary) seasonality of period 12. The effect of the first type of factor is to allow each season's value to drift separately over time, whereas with the second type values for adjacent seasons move together. Identification and specification of appropriate factors in an ARIMA model can be an important step in modeling as it can allow a reduction in the overall number of parameters to be estimated while allowing the imposition on the model of types of behavior that logic and experience suggest should be there. Differencing. A stationary time series's properties do not depend on the time at which the series is observed. Specifically, for a wide-sense stationary time series, the mean and the variance/autocovariance remain constant over time.
Differencing in statistics is a transformation applied to a non-stationary time-series in order to make it stationary "in the mean sense" (viz., to remove the non-constant trend), but having nothing to do with the non-stationarity of the variance or autocovariance. Likewise, seasonal differencing is applied to a seasonal time-series to remove the seasonal component. From the perspective of signal processing, especially the Fourier spectral analysis theory, the trend is the low-frequency part in the spectrum of a non-stationary time series, while the season is the periodic-frequency part in the spectrum of it. Therefore, the differencing works as a high-pass (i.e., low-stop) filter and the seasonal-differencing as a comb filter to suppress the low-frequency trend and the periodic-frequency season in the spectrum domain (rather than directly in the time domain), respectively. To difference the data, the difference between consecutive observations is computed. Mathematically, this is shown as formula_15 Differencing removes the changes in the level of a time series, eliminating trend and seasonality and consequently stabilizing the mean of the time series. Sometimes it may be necessary to difference the data a second time to obtain a stationary time series, which is referred to as second-order differencing: formula_16 Another method of differencing data is seasonal differencing, which involves computing the difference between an observation and the corresponding observation in the previous season, e.g. a year. This is shown as: formula_17 The differenced data are then used for the estimation of an ARMA model. Examples. Some well-known special cases arise naturally or are mathematically equivalent to other popular forecasting models. For example: an ARIMA(0,1,0) model, formula_18, is a random walk; an ARIMA(0,1,0) model with a constant, formula_19, is a random walk with drift; and an ARIMA(0,2,2) model, formula_20, is equivalent to double exponential smoothing (Holt's linear method). Choosing the order. The orders p and q can be determined using the sample autocorrelation function (ACF), partial autocorrelation function (PACF), and/or extended autocorrelation function (EACF) method.
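The first-order, second-order, and seasonal differences described above each amount to a single subtraction per observation; a minimal pure-Python sketch (function and variable names are illustrative, not from the article):

```python
def difference(series, lag=1):
    """Lag-difference a series: y'_t = y_t - y_{t-lag}."""
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

# A series with a linear trend: first differencing removes the trend,
# leaving a constant series; differencing again gives zeros.
y = [2 * t + 1 for t in range(10)]   # 1, 3, 5, ..., 19
d1 = difference(y)                   # [2, 2, ..., 2]
d2 = difference(d1)                  # [0, 0, ..., 0]  (second-order difference)
# Seasonal differencing with season length m = 4 compares y_t with y_{t-4}.
d_seasonal = difference(y, lag=4)    # [8, 8, ..., 8]
```

The differenced series (d1, or d2 if one pass is not enough) is what would then be fed into an ARMA estimation step, as the text describes.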
Other alternative methods include AIC, BIC, etc. To determine the order of a non-seasonal ARIMA model, a useful criterion is the Akaike information criterion (AIC). It is written as formula_21 where "L" is the likelihood of the data, "p" is the order of the autoregressive part and "q" is the order of the moving average part. The "k" indicates whether the ARIMA model includes an intercept. For AIC, if "k" = 1 then there is an intercept in the ARIMA model ("c" ≠ 0) and if "k" = 0 then there is no intercept in the ARIMA model ("c" = 0). The corrected AIC for ARIMA models can be written as formula_22 The Bayesian Information Criterion (BIC) can be written as formula_23 The objective is to minimize the AIC, AICc or BIC values for a good model. The lower the value of one of these criteria for a range of models being investigated, the better the model will suit the data. The AIC and the BIC are used for two completely different purposes. While the AIC tries to approximate models towards the reality of the situation, the BIC attempts to find the perfect fit. The BIC approach is often criticized as there never is a perfect fit to real-life complex data; however, it is still a useful method for selection as it penalizes models more heavily for having more parameters than the AIC would. AICc can only be used to compare ARIMA models with the same orders of differencing. For ARIMAs with different orders of differencing, RMSE can be used for model comparison. Forecasts using ARIMA models. The ARIMA model can be viewed as a "cascade" of two models. The first is non-stationary: formula_24 while the second is wide-sense stationary: formula_25 Now forecasts can be made for the process formula_26, using a generalization of the method of autoregressive forecasting. Forecast intervals. The forecast intervals (confidence intervals for forecasts) for ARIMA models are based on the assumptions that the residuals are uncorrelated and normally distributed.
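The three information criteria above differ only in their penalty terms, so they are direct to compute from a fitted model's log-likelihood; a sketch transcribing the formulas (function and argument names are illustrative):

```python
import math

def arima_criteria(log_lik, p, q, k, n_obs):
    """AIC, AICc and BIC for an ARIMA fit, per the formulas in the text.

    log_lik : maximized log-likelihood log(L)
    p, q    : AR and MA orders
    k       : 1 if the model has an intercept (c != 0), else 0
    n_obs   : number of observations T
    """
    m = p + q + k
    aic = -2.0 * log_lik + 2.0 * m
    aicc = aic + (2.0 * m * (m + 1)) / (n_obs - m - 1)
    bic = aic + (math.log(n_obs) - 2.0) * m
    return aic, aicc, bic

# Lower is better: compare candidate (p, q) orders fitted to the same data.
aic, aicc, bic = arima_criteria(log_lik=-120.0, p=1, q=1, k=1, n_obs=100)
```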
If either of these assumptions does not hold, then the forecast intervals may be incorrect. For this reason, researchers plot the ACF and histogram of the residuals to check the assumptions before producing forecast intervals. 95% forecast interval: formula_27, where formula_28 is the variance of formula_29. For formula_30, formula_31 for all ARIMA models regardless of parameters and orders. For ARIMA(0,0,q), formula_32 formula_33 In general, forecast intervals from ARIMA models will increase as the forecast horizon increases. Variations and extensions. A number of variations on the ARIMA model are commonly employed. If multiple time series are used then the formula_34 can be thought of as vectors and a VARIMA model may be appropriate. Sometimes a seasonal effect is suspected in the model; in that case, it is generally considered better to use a SARIMA (seasonal ARIMA) model than to increase the order of the AR or MA parts of the model. If the time-series is suspected to exhibit long-range dependence, then the "d" parameter may be allowed to have non-integer values in an autoregressive fractionally integrated moving average model, which is also called a Fractional ARIMA (FARIMA or ARFIMA) model. Software implementations. Various packages that apply methodology like Box–Jenkins parameter optimization are available to find the right parameters for the ARIMA model. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{ARMA}(p',q)" }, { "math_id": 1, "text": "X_t-\\alpha_1X_{t-1}- \\dots -\\alpha_{p'}X_{t-p'} = \\varepsilon_t + \\theta_1 \\varepsilon_{t-1} + \\cdots +\\theta_q \\varepsilon_{t-q}," }, { "math_id": 2, "text": "\n\\left(\n 1 - \\sum_{i=1}^{p'} \\alpha_i L^i\n\\right) X_t\n=\n\\left(\n 1 + \\sum_{i=1}^q \\theta_i L^i\n\\right) \\varepsilon_t \\,\n" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "\\alpha_i" }, { "math_id": 5, "text": "\\theta_i" }, { "math_id": 6, "text": "\\varepsilon_t" }, { "math_id": 7, "text": "\\textstyle \\left( 1 - \\sum_{i=1}^{p'} \\alpha_i L^i \\right)" }, { "math_id": 8, "text": "(1-L)" }, { "math_id": 9, "text": "\n\\left(\n 1 - \\sum_{i=1}^{p'} \\alpha_i L^i\n\\right)\n=\n\\left(\n 1 - \\sum_{i=1}^{p'-d} \\varphi_i L^i\n\\right)\n\\left(\n 1 - L\n\\right)^d.\n" }, { "math_id": 10, "text": "\n\\left( 1 - \\sum_{i=1}^p \\varphi_i L^i \\right)\n(1-L)^d X_t\n= \\left( 1 + \\sum_{i=1}^q \\theta_i L^i \\right) \\varepsilon_t \\,\n" }, { "math_id": 11, "text": "\n\\left( 1 - \\sum_{i=1}^p \\varphi_i L^i\\right)\n(1-L)^d X_t = \\delta + \\left( 1 + \\sum_{i=1}^q \\theta_i L^i \\right) \\varepsilon_t . 
\\,\n" }, { "math_id": 12, "text": " \\frac{\\delta}{1 - \\sum \\varphi_i} " }, { "math_id": 13, "text": "( 1 - L^s)" }, { "math_id": 14, "text": "\\left( 1 -\\sqrt{3} L + L^2 \\right)" }, { "math_id": 15, "text": "\ny_t'= y_t - y_{t-1} \\,\n" }, { "math_id": 16, "text": "\n\\begin{align}\ny_t^* & = y_t' - y_{t-1}' \\\\\n& =(y_t - y_{t-1})-(y_{t-1} - y_{t-2}) \\\\\n& =y_ t - 2y_{t-1} + y_{t-2}\n\\end{align}\n" }, { "math_id": 17, "text": "\ny_t'= y_t - y_{t-m} \\quad \\text{where } m=\\text{duration of season}.\n" }, { "math_id": 18, "text": "X_t = X_{t-1} + \\varepsilon_t" }, { "math_id": 19, "text": "X_t = c + X_{t-1} + \\varepsilon_t" }, { "math_id": 20, "text": "X_t = 2X_{t-1} - X_{t-2} +(\\alpha + \\beta - 2) \\varepsilon_{t-1} + (1-\\alpha)\\varepsilon_{t-2} + \\varepsilon_{t}" }, { "math_id": 21, "text": " \\text{AIC} = -2\\log(L)+2(p+q+k), " }, { "math_id": 22, "text": "\\text{AICc}= \\text{AIC}+ \\frac{2(p+q+k)(p+q+k+1)}{T-p-q-k-1}." }, { "math_id": 23, "text": "\\text{BIC}= \\text{AIC}+((\\log T)-2)(p+q+k)." }, { "math_id": 24, "text": "\nY_t = (1-L)^d X_t\n" }, { "math_id": 25, "text": "\n\\left( 1 - \\sum_{i=1}^p \\varphi_i L^i \\right) Y_t =\n\\left( 1 + \\sum_{i=1}^q \\theta_i L^i \\right) \\varepsilon_t \\, .\n" }, { "math_id": 26, "text": "Y_t" }, { "math_id": 27, "text": "\\hat{y}_{T+h\\,\\mid\\, T}\\pm1.96\\sqrt{v_{T+h\\,\\mid\\, T}}" }, { "math_id": 28, "text": "v_{T+h\\mid T}" }, { "math_id": 29, "text": "y_{T+h} \\mid y_1,\\dots,y_T" }, { "math_id": 30, "text": "h=1" }, { "math_id": 31, "text": "v_{T+h\\,\\mid\\, T}=\\hat{\\sigma}^2" }, { "math_id": 32, "text": "y_t=e_t+\\sum_{i=1}^q\\theta_ie_{t-i}." }, { "math_id": 33, "text": "v_{T+h\\,\\mid\\, T} = \\hat{\\sigma}^2 \\left[1+\\sum_{i=1}^{h-1}\\theta_ie_{t-i}\\right], \\text{ for } h=2,3,\\ldots " }, { "math_id": 34, "text": "X_t" } ]
https://en.wikipedia.org/wiki?curid=1510134
15101979
Term discrimination
Term discrimination is a way to rank keywords by how useful they are for information retrieval. Overview. This is a method similar to tf-idf, but it deals with finding keywords that are suitable for information retrieval and ones that are not. Please refer to Vector Space Model first. This method uses the concept of "vector space density": the less dense an occurrence matrix is, the better an information retrieval query will be. An optimal index term is one that can distinguish two different documents from each other and relate two similar documents. On the other hand, a sub-optimal index term cannot distinguish two different documents from two similar documents. The discrimination value is the difference between the density of the occurrence matrix's vector space and the density of the same vector space with the index term removed. Let formula_0 be the occurrence matrix, formula_1 be the occurrence matrix without the index term formula_2, and formula_3 be the density of formula_0. Then the discrimination value of the index term formula_2 is: formula_4 How to compute. Given an occurrence matrix formula_0 and one keyword formula_2, a higher value is better because including the keyword will result in better information retrieval. Qualitative Observations. Keywords that are "sparse" should be poor discriminators because they have poor "recall," whereas keywords that are "frequent" should be poor discriminators because they have poor "precision."
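The article does not pin down the density function formula_3; one common choice, assumed here, is the average cosine similarity of each document to the collection centroid, which makes computing formula_4 a few lines of Python (all names are illustrative):

```python
import math

def density(matrix):
    """Q(A): average cosine similarity of each document (row) to the centroid."""
    n = len(matrix)
    centroid = [sum(col) / n for col in zip(*matrix)]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    return sum(cosine(doc, centroid) for doc in matrix) / n

def discrimination_value(matrix, k):
    """DV_k = Q(A) - Q(A_k), where A_k drops the column for term k."""
    a_k = [[w for j, w in enumerate(doc) if j != k] for doc in matrix]
    return density(matrix) - density(a_k)

# Toy occurrence matrix: rows are documents, columns are index terms.
A = [[3, 0, 1],
     [0, 4, 1],
     [1, 1, 1]]
dvs = [discrimination_value(A, k) for k in range(len(A[0]))]
```

The sign and scale of each DV depend on the density measure chosen; the article's convention is simply DV_k = Q(A) − Q(A_k), with higher values marking better keywords.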
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "A_k" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "Q(A)" }, { "math_id": 4, "text": "DV_k = Q(A) - Q(A_k)" }, { "math_id": 5, "text": "C" }, { "math_id": 6, "text": "D_i" }, { "math_id": 7, "text": "K" } ]
https://en.wikipedia.org/wiki?curid=15101979
151042
Viz.
Latin phrase and abbreviation The abbreviation viz. (or viz without a full stop) is short for the Latin "videlicet", which itself is a contraction of the Latin phrase videre licet, meaning "it is permitted to see". It is used as a synonym for "namely", "that is to say", "to wit", "which is", or "as follows". It is typically used to introduce examples or further details to illustrate a point: for example, "all types of data "viz." text, audio, video, pictures, graphics, can be transmitted through networking". Etymology. "Viz." is shorthand for the Latin adverb "videlicet", written using scribal abbreviation, a system of medieval Latin shorthand. It consists of the first two letters, "vi", followed by the last two, "et", written using a scribal abbreviation symbol. With the adoption of movable type printing, the (then current) blackletter form of the letter ⟨z⟩, formula_0, was substituted for this symbol since few typefaces included it. Usage. In contrast to "i.e." and "e.g.", "viz." is used to indicate a detailed description of something stated before, and when it precedes a list of group members, it implies (near) completeness.
[ { "math_id": 0, "text": "\\mathfrak{z}" } ]
https://en.wikipedia.org/wiki?curid=151042
1510587
Bayesian search theory
Method for finding lost objects Bayesian search theory is the application of Bayesian statistics to the search for lost objects. It has been used several times to find lost sea vessels, for example USS "Scorpion", and has played a key role in the recovery of the flight recorders in the Air France Flight 447 disaster of 2009. It has also been used in the attempts to locate the remains of Malaysia Airlines Flight 370. Procedure. The usual procedure is as follows: formulate as many reasonable hypotheses as possible about what may have happened to the object; for each hypothesis, construct a probability density map of the object's location; construct a map of the probability of actually finding the object at each location if it is there; combine these maps into an overall probability of finding the object at each location; begin the search at the location of highest probability and sweep outward through locations of progressively lower probability; and revise all the probabilities continuously during the search, by Bayes' theorem, as areas are searched without success. In other words, first search where it most probably will be found, then search where finding it is less probable, then search where the probability is even less (but still possible due to limitations on fuel, range, water currents, etc.), until insufficient hope of locating the object at acceptable cost remains. The advantages of the Bayesian method are that all information available is used coherently (i.e., in a "leak-proof" manner) and the method automatically produces estimates of the cost for a given success probability. That is, even before the start of searching, one can say, hypothetically, "there is a 65% chance of finding it in a 5-day search. That probability will rise to 90% after a 10-day search and 97% after 15 days" or a similar statement. Thus the economic viability of the search can be estimated before committing resources to a search. Apart from the USS "Scorpion", other vessels located by Bayesian search theory include the MV "Derbyshire", the largest British vessel ever lost at sea, and the SS "Central America". It also proved successful in the search for a lost hydrogen bomb following the 1966 Palomares B-52 crash in Spain, and the recovery in the Atlantic Ocean of the crashed Air France Flight 447. Bayesian search theory is incorporated into the CASP (Computer Assisted Search Program) mission planning software used by the United States Coast Guard for search and rescue. 
This program was later adapted for inland search by adding terrain and ground cover factors for use by the United States Air Force and Civil Air Patrol. Mathematics. Suppose a grid square has a probability "p" of containing the wreck and that the probability of successfully detecting the wreck if it is there is "q". If the square is searched and no wreck is found, then, by Bayes' theorem, the revised probability of the wreck being in the square is given by formula_0 For every other grid square, if its prior probability is "r", its posterior probability is given by formula_1 USS "Scorpion". In May 1968, the U.S. Navy's nuclear submarine USS "Scorpion" (SSN-589) failed to arrive as expected at her home port of Norfolk, Virginia. The command officers of the U.S. Navy were nearly certain that the vessel had been lost off the Eastern Seaboard, but an extensive search there failed to discover the remains of "Scorpion". Then, a Navy deep-water expert, John P. Craven, suggested that "Scorpion" had sunk elsewhere. Craven organised a search southwest of the Azores based on a controversial approximate triangulation by hydrophones. He was allocated only a single ship, "Mizar", and he took advice from Metron Inc., a firm of consultant mathematicians in order to maximise his resources. A Bayesian search methodology was adopted. Experienced submarine commanders were interviewed to construct hypotheses about what could have caused the loss of "Scorpion". The sea area was divided up into grid squares and a probability assigned to each square, under each of the hypotheses, to give a number of probability grids, one for each hypothesis. These were then added together to produce an overall probability grid. The probability attached to each square was then the probability that the wreck was in that square. 
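A minimal numerical sketch of the square-by-square Bayes update from the Mathematics section; the grid, priors, and detection probability below are invented for illustration:

```python
def update_after_failure(priors, searched, q):
    """Bayes update of grid-square probabilities after one unsuccessful search.

    priors   -- dict mapping square id -> prior probability p that the wreck is there
    searched -- id of the square that was searched without success
    q        -- probability of detecting the wreck in that square if it is there
    """
    p = priors[searched]
    denom = 1.0 - p * q  # total probability that the search fails
    posterior = {}
    for square, r in priors.items():
        if square == searched:
            posterior[square] = p * (1.0 - q) / denom  # p' = p(1-q)/(1-pq) < p
        else:
            posterior[square] = r / denom              # r' = r/(1-pq) > r
    return posterior

# Three squares; square "A" is searched with an 80% chance of detection.
priors = {"A": 0.5, "B": 0.3, "C": 0.2}
post = update_after_failure(priors, "A", q=0.8)
# post["A"] falls to about 0.167 while post["B"] and post["C"] rise to
# 0.5 and about 0.333; the revised probabilities still sum to 1.
```

Note how probability mass flows from the searched square to all the others, which is exactly why the next search may target a different square.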
A second grid was constructed with probabilities that represented the probability of successfully finding the wreck if that square were to be searched and the wreck were actually there. This was a known function of water depth. The result of combining this grid with the previous grid is a grid which gives the probability of finding the wreck in each grid square of the sea if it were to be searched. At the end of October 1968, the Navy's oceanographic research ship, "Mizar", located sections of the hull of "Scorpion" on the seabed, about southwest of the Azores, under more than of water. This was after the Navy had released sound tapes from its underwater "SOSUS" listening system, which contained the sounds of the destruction of "Scorpion". The court of inquiry was subsequently reconvened and other vessels, including the bathyscaphe "Trieste II", were dispatched to the scene, collecting many pictures and other data. Although Craven received much credit for locating the wreckage of "Scorpion", Gordon Hamilton, an acoustics expert who pioneered the use of hydroacoustics to pinpoint Polaris missile splashdown locations, was instrumental in defining a compact "search box" wherein the wreck was ultimately found. Hamilton had established a listening station in the Canary Islands that obtained a clear signal of what some scientists believe was the noise of the vessel's pressure hull imploding as she passed crush depth. A Naval Research Laboratory scientist named Chester "Buck" Buchanan, using a towed camera sled of his own design aboard "Mizar", finally located "Scorpion". The towed camera sled, which was fabricated by J. L. "Jac" Hamm of Naval Research Laboratory's Engineering Services Division, is housed in the National Museum of the United States Navy. Buchanan had located the wrecked hull of "Thresher" in 1964 using this technique. Optimal distribution of search effort. 
The classical book on this subject, "The Theory of Optimal Search" (Operations Research Society of America, 1975) by Lawrence D. Stone of Metron Inc., won the 1975 Lanchester Prize, awarded by the Operations Research Society of America. Searching in boxes. Assume that a stationary object is hidden in one of n boxes (locations). For each location formula_2 there are three known parameters: the cost formula_3 of a single search, the probability formula_4 of finding the object by a single search if the object is there, and the probability formula_5 that the object is there. A searcher looks for the object, knowing the a priori probabilities at the outset and updating them by Bayes' law after each unsuccessful attempt. The problem of finding the object at minimal expected cost is a classical problem solved by David Blackwell (David Assaf later extended the result to the search for more than one object). Surprisingly, the optimal policy is easy to describe: at each stage, look in the location which maximizes formula_6. This is actually a special case of the Gittins index. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
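The box-search policy just described (repeatedly look in the box maximizing p_i·a_i/c_i, then update by Bayes' law after each failure) can be sketched as a short simulation; the costs, detection probabilities, and priors below are invented for illustration and do not come from the cited literature:

```python
import random

def search(costs, detect, priors, hidden, rng, max_looks=1000):
    """Greedy index policy: always look in the box maximizing p_i * a_i / c_i.

    costs  -- c_i, cost of a single look in box i
    detect -- a_i, probability one look finds the object if it is in box i
    priors -- p_i, prior probability the object is in box i
    hidden -- index of the box that actually holds the object
    Returns (total cost spent, number of looks) when the object is found.
    """
    p = list(priors)
    total = 0.0
    for looks in range(1, max_looks + 1):
        i = max(range(len(p)), key=lambda j: p[j] * detect[j] / costs[j])
        total += costs[i]
        if i == hidden and rng.random() < detect[i]:
            return total, looks
        # Bayes update after the unsuccessful look in box i.
        denom = 1.0 - p[i] * detect[i]
        p = [q / denom for q in p]
        p[i] *= 1.0 - detect[i]
    return total, max_looks

# Invented example: three boxes with differing costs and detection odds.
costs, detect, priors = [1.0, 2.0, 1.5], [0.5, 0.9, 0.7], [0.5, 0.3, 0.2]
cost, looks = search(costs, detect, priors, hidden=1, rng=random.Random(0))
```

After each failed look, the searched box's probability shrinks and the others grow, so the index ranking shifts and the policy naturally alternates between boxes in the cost-effective order.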
[ { "math_id": 0, "text": " p' = \\frac{p(1-q)}{(1-p)+p(1-q)} = p \\frac{1-q}{1-pq} < p." }, { "math_id": 1, "text": " r' = r \\frac{1}{1- pq} > r. " }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "c_i" }, { "math_id": 4, "text": "a_i" }, { "math_id": 5, "text": "p_i" }, { "math_id": 6, "text": "\\frac{p_i a_i}{c_i}" } ]
https://en.wikipedia.org/wiki?curid=1510587
15105978
Well
Excavation or structure to provide access to groundwater A well is an excavation or structure created in the earth by digging, driving, or drilling to access liquid resources, usually water. The oldest and most common kind of well is a water well, to access groundwater in underground aquifers. The well water is drawn up by a pump, or using containers, such as buckets or large water bags that are raised mechanically or by hand. Water can also be injected back into the aquifer through the well. Wells were first constructed at least eight thousand years ago and historically vary in construction from a simple scoop in the sediment of a dry watercourse to the qanats of Iran, and the stepwells and sakiehs of India. Placing a lining in the well shaft helps create stability, and linings of wood or wickerwork date back at least as far as the Iron Age. Wells have traditionally been sunk by hand digging, as is still the case in rural areas of the developing world. These wells are inexpensive and low-tech as they use mostly manual labour, and the structure can be lined with brick or stone as the excavation proceeds. A more modern method called caissoning uses pre-cast reinforced concrete well rings that are lowered into the hole. Driven wells can be created in unconsolidated material with a well hole structure, which consists of a hardened drive point and a screen of perforated pipe, after which a pump is installed to collect the water. Deeper wells can be excavated by hand drilling methods or machine drilling, using a bit in a borehole. Drilled wells are usually cased with a factory-made pipe composed of steel or plastic. Drilled wells can access water at much greater depths than dug wells. Two broad classes of well are shallow or unconfined wells completed within the uppermost saturated aquifer at that location, and deep or confined wells, sunk through an impermeable stratum into an aquifer beneath. 
A collector well can be constructed adjacent to a freshwater lake or stream with water percolating through the intervening material. The site of a well can be selected by a hydrogeologist, or groundwater surveyor. Water may be pumped or hand drawn. Impurities from the surface can easily reach shallow sources and contamination of the supply by pathogens or chemical contaminants needs to be avoided. Well water typically contains more minerals in solution than surface water and may require treatment before being potable. Soil salination can occur as the water table falls and the surrounding soil begins to dry out. Another environmental problem is the potential for methane to seep into the water. History. Very early neolithic wells are known from the Eastern Mediterranean: The oldest reliably dated well is from the pre-pottery neolithic (PPN) site of Kissonerga-Mylouthkia on Cyprus. At around 8400 BC a shaft (well 116) of circular diameter was driven through limestone to reach an aquifer at a depth of . Well 2070 from Kissonerga-Mylouthkia, dating to the late PPN, reaches a depth of . Other slightly younger wells are known from this site and from neighbouring Parekklisha-Shillourokambos. A first stone lined well of depth is documented from a drowned final PPN (c. 7000 BC) site at ‘Atlit-Yam off the coast near modern Haifa in Israel. Wood-lined wells are known from the early Neolithic Linear Pottery culture, for example in Ostrov, Czech Republic, dated 5265 BC, Kückhoven (an outlying centre of Erkelenz), dated 5300 BC, and Eythra in Schletz (an outlying centre of Asparn an der Zaya) in Austria, dated 5200 BC. The neolithic Chinese discovered and made extensive use of deep drilled groundwater for drinking. The Chinese text "The Book of Changes", originally a divination text of the Western Zhou dynasty (1046–771 BC), contains an entry describing how the ancient Chinese maintained their wells and protected their sources of water. 
A well excavated at the Hemudu excavation site is believed to have been built during the neolithic era. The well was cased by four rows of logs with a square frame attached to them at the top of the well. Sixty additional tile wells southwest of Beijing are also believed to have been built around 600 BC for drinking and irrigation. In Egypt, shadoofs and sakias are used. The sakia is much more efficient, as it can bring up water from a depth of 10 metres (versus the 3 metres of the shadoof). The sakia is the Egyptian version of the noria. Some of the world's oldest known wells, located in Cyprus, date to 7000–8500 BC. Two wells from the Neolithic period, around 6500 BC, have been discovered in Israel. One is in Atlit, on the northern coast of Israel, and the other is in the Jezreel Valley. Wells for other purposes came along much later, historically. The first recorded salt well was dug in the Sichuan province of China around 2,250 years ago. This was the first time that ancient water well technology was applied successfully for the exploitation of salt, and marked the beginning of Sichuan's salt drilling industry. The earliest known oil wells were also drilled in China, in 347 CE. These wells had depths of up to about and were drilled using bits attached to bamboo poles. The oil was burned to evaporate brine and produce salt. By the 10th century, extensive bamboo pipelines connected oil wells with salt springs. The ancient records of China and Japan are said to contain many allusions to the use of natural gas for lighting and heating. Petroleum was known as "Burning water" in Japan in the 7th century. Types. Dug wells. Until recent centuries, all artificial wells were pumpless hand-dug wells of varying degrees of sophistication, and they remain a very important source of potable water in some rural developing areas, where they are routinely dug and used today. 
Their indispensability has produced a number of literary references, literal and figurative, including the reference to the incident of Jesus meeting a woman at Jacob's well (John 4:6) in the Bible and the "Ding Dong Bell" nursery rhyme about a cat in a well. Hand-dug wells are excavations with diameters large enough to accommodate one or more people with shovels digging down to below the water table. The excavation is braced horizontally to avoid landslide or erosion endangering the people digging. They can be lined with stone or brick; extending this lining upwards above the ground surface to form a wall around the well serves to reduce both contamination and accidental falls into the well. A more modern method called caissoning uses reinforced concrete or plain concrete pre-cast well rings that are lowered into the hole. A well-digging team digs under a cutting ring and the well column slowly sinks into the aquifer, whilst protecting the team from collapse of the well bore. Hand-dug wells are inexpensive and low tech (compared to drilling) and they use mostly manual labour to access groundwater in rural locations of developing countries. They may be built with a high degree of community participation, or by local entrepreneurs who specialize in hand-dug wells. They have been successfully excavated to . They have low operational and maintenance costs, in part because water can be extracted by hand, without a pump. The water often comes from an aquifer or groundwater, and can be easily deepened, which may be necessary if the ground water level drops, by telescoping the lining further down into the aquifer. The yield of existing hand dug wells may be improved by deepening or introducing vertical tunnels or perforated pipes. Drawbacks to hand-dug wells are numerous. It can be impractical to hand dig wells in areas where hard rock is present, and they can be time-consuming to dig and line even in favourable areas. 
Because they exploit shallow aquifers, the well may be susceptible to yield fluctuations and possible contamination from surface water, including sewage. Hand dug well construction generally requires the use of a well trained construction team, and the capital investment for equipment such as concrete ring moulds, heavy lifting equipment, well shaft formwork, motorized de-watering pumps, and fuel can be large for people in developing countries. Construction of hand dug wells can be dangerous due to collapse of the well bore, falling objects and asphyxiation, including from dewatering pump exhaust fumes. The Woodingdean Water Well, hand-dug between 1858 and 1862, is the deepest hand-dug well at . The Big Well in Greensburg, Kansas, is billed as the world's largest hand-dug well, at deep and in diameter. However, the "Well of Joseph" in the Cairo Citadel at deep and the Pozzo di San Patrizio (St. Patrick's Well) built in 1527 in Orvieto, Italy, at deep by wide are both larger by volume. Driven wells. Driven wells may be very simply created in unconsolidated material with a "well hole structure", which consists of a hardened drive point and a screen (perforated pipe). The point is simply hammered into the ground, usually with a tripod and "driver", with pipe sections added as needed. A driver is a weighted pipe that slides over the pipe being driven and is repeatedly dropped on it. When groundwater is encountered, the well is washed of sediment and a pump installed. Drilled wells. Drilled wells are constructed using various types of drilling machines, such as top-head rotary, table rotary, or cable tool, which all use drilling stems that rotate to cut into the formation, thus the term "drilling." Drilled wells can be excavated by simple hand drilling methods (augering, sludging, jetting, driving, hand percussion) or machine drilling (auger, rotary, percussion, down the hole hammer). Deep rock rotary drilling method is most common. 
Rotary can be used in 90% of formation types (consolidated). Drilled wells can get water from a much deeper level than dug wells can – often down to several hundred metres. Drilled wells with electric pumps are used throughout the world, typically in rural or sparsely populated areas, though many urban areas are supplied partly by municipal wells. Most shallow well drilling machines are mounted on large trucks, trailers, or tracked vehicle carriages. Water wells typically range from deep, but in some areas they can go deeper than . Rotary drilling machines use a segmented steel drilling string, typically made up of 3 m (10 ft) to 8 m (26 ft) sections of steel tubing that are threaded together, with a bit or other drilling device at the bottom end. Some rotary drilling machines are designed to install (by driving or drilling) a steel casing into the well in conjunction with the drilling of the actual bore hole. Air and/or water is used as a circulation fluid to displace cuttings and cool bits during the drilling. Another form of rotary-style drilling, termed "mud rotary", makes use of a specially made mud, or drilling fluid, which is constantly being altered during the drill so that it can consistently create enough hydraulic pressure to hold the side walls of the bore hole open, regardless of the presence of a casing in the well. Typically, boreholes drilled into solid rock are not cased until after the drilling process is completed, regardless of the machinery used. The oldest form of drilling machinery is the cable tool, still used today. Specifically designed to raise and lower a bit into the bore hole, the "spudding" of the drill causes the bit to be raised and dropped onto the bottom of the hole, and the design of the cable causes the bit to twist at approximately 1⁄4 revolution per drop, thereby creating a drilling action. 
Unlike rotary drilling, cable tool drilling requires the drilling action to be stopped so that the bore hole can be bailed or emptied of drilled cuttings. Cable tool drilling rigs are rare as they tend to be 10x slower to drill through materials compared to similar diameter rotary air or rotary mud equipped rigs. Drilled wells are usually cased with a factory-made pipe, typically steel (in air rotary or cable tool drilling) or plastic/PVC (in mud rotary wells, also present in wells drilled into solid rock). The casing is constructed by welding, either chemically or thermally, segments of casing together. If the casing is installed during the drilling, most drills will drive the casing into the ground as the bore hole advances, while some newer machines will actually allow for the casing to be rotated and drilled into the formation in a similar manner as the bit advancing just below. PVC or plastic is typically solvent welded and then lowered into the drilled well, vertically stacked with their ends nested and either glued or splined together. The sections of casing are usually or more in length, and in diameter, depending on the intended use of the well and local groundwater conditions. Surface contamination of wells in the United States is typically controlled by the use of a "surface seal". A large hole is drilled to a predetermined depth or to a confining formation (clay or bedrock, for example), and then a smaller hole for the well is completed from that point forward. The well is typically cased from the surface down into the smaller hole with a casing that is the same diameter as that hole. The annular space between the large bore hole and the smaller casing is filled with bentonite clay, concrete, or other sealant material. This creates an impermeable seal from the surface to the next confining layer that keeps contaminants from traveling down the outer sidewalls of the casing or borehole and into the aquifer. 
In addition, wells are typically capped with either an engineered well cap or seal that vents air through a screen into the well, but keeps insects, small animals, and unauthorized persons from accessing the well. At the bottom of wells, based on formation, a screening device, filter pack, slotted casing, or open bore hole is left to allow the flow of water into the well. Constructed screens are typically used in unconsolidated formations (sands, gravels, etc.), allowing water and a percentage of the formation to pass through the screen. Allowing some material to pass through creates a large-area filter out of the rest of the formation, as the amount of material present to pass into the well slowly decreases and is removed from the well. Rock wells are typically cased with a PVC liner/casing and screen or slotted casing at the bottom; this is mostly present just to keep rocks from entering the pump assembly. Some wells utilize a "filter pack" method, where an undersized screen or slotted casing is placed inside the well and a filter medium is packed around the screen, between the screen and the borehole or casing. This allows the water to be filtered of unwanted materials before entering the well and pumping zone. Classification. There are two broad classes of drilled-well types, based on the type of aquifer the well is in: shallow (unconfined) wells, completed within the uppermost saturated aquifer, and deep (confined) wells, sunk through an impermeable stratum into an aquifer beneath. A special type of water well may be constructed adjacent to freshwater lakes or streams. Commonly called a collector well but sometimes referred to by the trade name Ranney well or Ranney collector, this type of well involves sinking a caisson vertically below the top of the aquifer and then advancing lateral collectors out of the caisson and beneath the surface water body. Pumping from within the caisson induces infiltration of water from the surface water body into the aquifer, where it is collected by the collector well laterals and conveyed into the caisson where it can be pumped to the ground surface. 
Two additional broad classes of well types may be distinguished, based on the use of the well: production (pumping) wells and monitoring wells (piezometers). A water well constructed for pumping groundwater can be used passively as a monitoring well and a small diameter well can be pumped, but this distinction by use is common. Siting. Before excavation, information about the geology, water table depth, seasonal fluctuations, recharge area and rate should be found if possible. This work can be done by a hydrogeologist, or a groundwater surveyor using a variety of tools including electro-seismic surveying, any available information from nearby wells, geologic maps, and sometimes geophysical imaging. These professionals provide advice that is almost as accurate as that of a driller who has experience and knowledge of nearby wells/bores and of the most suitable drilling technique based on the expected target depth. Contamination. Shallow pumping wells can often supply drinking water at a very low cost. However, impurities from the surface easily reach shallow sources, which leads to a greater risk of contamination for these wells compared to deeper wells. Contaminated wells can lead to the spread of various waterborne diseases. Dug and driven wells are relatively easy to contaminate; for instance, most dug wells are unreliable in the majority of the United States. Some research has found that, in cold regions, changes in river flow and flooding caused by extreme rainfall or snowmelt can degrade well water quality. Pathogens. Most of the bacteria, viruses, parasites, and fungi that contaminate well water come from fecal material from humans and other animals. Common bacterial contaminants include "E. coli", "Salmonella", "Shigella", and "Campylobacter jejuni". Common viral contaminants include "norovirus", "sapovirus", "rotavirus", enteroviruses, and hepatitis A and E. Parasites include "Giardia lamblia", "Cryptosporidium", "Cyclospora cayetanensis", and microsporidia. Chemical contamination. 
Chemical contamination is a common problem with groundwater. Nitrates from sewage, sewage sludge or fertilizer are a particular problem for babies and young children. Pollutant chemicals include pesticides and volatile organic compounds from gasoline, dry-cleaning, the fuel additive methyl tert-butyl ether (MTBE), and perchlorate from rocket fuel, airbag inflators, and other artificial and natural sources. Several minerals are also contaminants, including lead leached from brass fittings or old lead pipes, chromium VI from electroplating and other sources, naturally occurring arsenic, radon, and uranium—all of which can cause cancer—and naturally occurring fluoride, which is desirable in low quantities to prevent tooth decay, but can cause dental fluorosis in higher concentrations. Some chemicals are commonly present in water wells at levels that are not toxic, but can cause other problems. Calcium and magnesium cause what is known as hard water, which can precipitate and clog pipes or burn out water heaters. Iron and manganese can appear as dark flecks that stain clothing and plumbing, and can promote the growth of iron and manganese bacteria that can form slimy black colonies that clog pipes. Prevention. The quality of the well water can be significantly increased by lining the well, sealing the well head, fitting a self-priming hand pump, constructing an apron, ensuring the area is kept clean and free from stagnant water and animals, moving sources of contamination (pit latrines, garbage pits, on-site sewer systems) and carrying out hygiene education. The well should be cleaned with 1% chlorine solution after construction and periodically every 6 months. Well holes should be covered to prevent loose debris, animals, animal excrement, and wind-blown foreign matter from falling into the hole and decomposing. The cover should be able to be in place at all times, including when drawing water from the well. 
A suspended roof over an open hole helps to some degree, but ideally the cover should be tight fitting and fully enclosing, with only a screened air vent. Minimum distances and soil percolation requirements between sewage disposal sites and water wells need to be observed. Rules regarding the design and installation of private and municipal septic systems take all these factors into account so that nearby drinking water sources are protected. Education of the general population in society also plays an important role in protecting drinking water. Mitigation. Cleanup of contaminated groundwater tends to be very costly. Effective remediation of groundwater is generally very difficult. Contamination of groundwater from surface and subsurface sources can usually be dramatically reduced by correctly centering the casing during construction and filling the casing annulus with an appropriate sealing material. The sealing material (grout) should be placed from immediately above the production zone back to surface, because, in the absence of a correctly constructed casing seal, contaminated fluid can travel into the well through the casing annulus. Centering devices are important (usually one per length of casing or at maximum intervals of 9 m) to ensure that the grouted annular space is of even thickness. Upon the construction of a new test well, it is considered best practice to invest in a complete battery of chemical and biological tests on the well water in question. Point-of-use treatment is available for individual properties and treatment plants are often constructed for municipal water supplies that suffer from contamination. Most of these treatment methods involve the filtration of the contaminants of concern, and additional protection may be garnered by installing well-casing screens only at depths where contamination is not present. Wellwater for personal use is often filtered with reverse osmosis water processors; this process can remove very small particles. 
A simple, effective way of killing microorganisms is to bring the water to a full boil for one to three minutes, depending on location. A household well contaminated by microorganisms can initially be treated by shock chlorination using bleach, generating concentrations hundreds of times greater than found in community water systems; however, this will not fix any structural problems that led to the contamination and generally requires some expertise and testing for effective application. After the filtration process, it is common to implement an ultraviolet (UV) system to kill pathogens in the water. UV-C photons penetrate the cell and damage the pathogen's DNA. UV disinfection has been gaining popularity in recent decades as it is a chemical-free method of water treatment. Environmental problems. A risk with the placement of water wells is soil salination, which occurs when the water table of the soil begins to drop and salt begins to accumulate as the soil begins to dry out. Another environmental problem that is very prevalent in water well drilling is the potential for methane to seep through. Soil salination. The potential for soil salination is a large risk when choosing the placement of water wells. Soil salination is caused when the water table of the soil drops over time and salt begins to accumulate. In turn, the increased amount of salt begins to dry the soil out. The increased level of salt in the soil can result in the degradation of soil and can be very harmful to vegetation. Methane. Methane, an asphyxiant, is a chemical compound that is the main component of natural gas. When methane is introduced into a confined space, it displaces oxygen, reducing oxygen concentration to a level low enough to pose a threat to humans and other aerobic organisms but still high enough for a risk of spontaneous or externally caused explosion. 
This potential for explosion is what poses such a danger with regard to the drilling and placement of water wells. Low levels of methane in drinking water are not considered toxic. When methane seeps into a water supply, it is commonly referred to as "methane migration". This can be caused by old natural gas wells near water well systems becoming abandoned and no longer monitored. Lately, however, the wells/pumps described above are no longer very efficient and can be replaced by either handpumps or treadle pumps. Another alternative is the use of self-dug wells, or electrical deep-well pumps (for higher depths). Appropriate technology organizations such as Practical Action are now supplying information on how to build/set-up (DIY) handpumps and treadle pumps in practice. PFAS/PFOS Fire fighting foam. Per- and polyfluoroalkyl substances (PFAS or PFASs) are a group of synthetic organofluorine chemical compounds that have multiple fluorine atoms attached to an alkyl chain. PFAS are a group of "forever chemicals" that spread very quickly and very far in groundwater, polluting it persistently. Water wells near certain airports where fire fighting or fire-training activities occurred up to 2010 are likely to be contaminated by PFAS. Water security. A study concluded that 6–20% of ~39 million groundwater wells are at high risk of running dry if local groundwater levels decline by less than five meters, or – as with many areas and possibly more than half of major aquifers – continue to decline. Society and culture. Springs and wells have had cultural significance since prehistoric times, leading to the foundation of towns such as Wells and Bath in Somerset. Interest in health benefits led to the growth of spa towns including many with "wells" in their name, examples being Llandrindod Wells and Royal Tunbridge Wells. 
Eratosthenes is sometimes claimed to have used a well in his calculation of the Earth's circumference; however, this is just a simplification used in a shorter explanation of Cleomedes, since Eratosthenes had used a more elaborate and precise method. Many incidents in the Bible take place around wells, such as the finding of a wife for Isaac in Genesis and Jesus's talk with the Samaritan woman in the Gospels. A simple model for water well recovery. For a well with impermeable walls, the water in the well is resupplied from the bottom of the well. The rate at which water flows into the well will depend on the pressure difference between the ground water and the well water at the bottom of the well. The pressure of a column of water of height "z" will be equal to the weight of the water in the column divided by the cross-sectional area of the column, so the pressure of the ground water a distance "zT" below the top of the water table will be: formula_0 where "ρ" is the mass density of the water and "g" is the acceleration due to gravity. When the water in the well is below the water table level, the pressure at the bottom of the well due to the water in the well will be less than "Pg" and water will be forced into the well. Referring to the diagram, if "z" is the distance from the bottom of the well to the well water level and "zT" is the distance from the bottom of the well to the top of the water table, the pressure difference will be: formula_1 Applying Darcy's Law, the volume rate ("F") at which water is forced into the well will be proportional to this pressure difference: formula_2 where "R" is the resistance to the flow, which depends on the well cross section, the pressure gradient at the bottom of the well, and the characteristics of the substrate at the well bottom (e.g., porosity). 
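These relations can be put together in a short numerical sketch. All numbers below (ρ, g, "z_T", "R", and the well cross-sectional area "A") are illustrative assumptions, not values from the text; the Euler loop is compared against the exponential solution that the model yields.

```python
import math

# Illustrative values only (assumptions, not from the text).
rho, g = 1000.0, 9.81        # water density (kg/m^3), gravity (m/s^2)
z_T = 10.0                   # m, well bottom to top of the water table
R = 2.0e7                    # Pa*s/m^3, flow resistance (made up)
A = 0.5                      # m^2, well cross-sectional area (made up)

def inflow(z):
    """Darcy's-law volume rate F = Delta_P / R = rho*g*(z_T - z) / R."""
    return rho * g * (z_T - z) / R

# Forward-Euler recovery of an initially empty well, using A*dz/dt = F(z):
z, dt = 0.0, 1.0             # start empty, 1-second steps
for _ in range(36000):       # ten hours
    z += inflow(z) / A * dt

tau = R * A / (rho * g)                          # time constant of the recovery
z_exact = z_T * (1.0 - math.exp(-36000.0 / tau)) # exponential solution
print(round(z, 3), round(z_exact, 3))
```

With these numbers τ ≈ 1019 s, so after ten hours the well is effectively fully recovered, and the Euler result tracks the exponential solution closely.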
The volume flow rate into the well can be written as a function of the rate of change of the well water level: formula_3 Combining the above three equations yields a simple differential equation in "z": formula_4 which may be solved: formula_5 where "z0" is the well water level at time "t=0" and "τ" is the well time constant: formula_6 Note that if "dz/dt" for a depleted well can be measured, it will be equal to formula_7 and the time constant τ can be calculated. According to the above model, it will take an infinite amount of time for a well to fully recover, but if we consider a well that is 99% recovered to be "practically" recovered, the time for a well to practically recover from a level at "z" will be: formula_8 For a well that is fully depleted ("z=0") it would take a time of about "4.6 τ" to practically recover. The above model does not take into account the depletion of the aquifer due to the pumping which lowered the well water level (See aquifer test and groundwater flow equation). Also, practical wells may have impermeable walls only up to, but not including the bedrock, which will give a larger surface area for water to enter the well. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P_g=\\rho \\,g \\,z_T" }, { "math_id": 1, "text": "\\Delta P=\\rho\\, g\\, (z_T-z)" }, { "math_id": 2, "text": "\\Delta P=R\\, F" }, { "math_id": 3, "text": "F=A \\frac{dz}{dt}" }, { "math_id": 4, "text": "R A \\frac{dz}{dt} = \\rho g(z_T-z)" }, { "math_id": 5, "text": "z = z_T -(z_T-z_0)e^{-t/\\tau}" }, { "math_id": 6, "text": "\\tau = \\frac{R A}{\\rho g}" }, { "math_id": 7, "text": "z_T/\\tau" }, { "math_id": 8, "text": "T_r = \\tau \\ln\\left(\\frac{1-z/z_T}{1-0.99}\\right)" } ]
https://en.wikipedia.org/wiki?curid=15105978
1510738
Desorption
Release of atoms or molecules from a surface Desorption is the physical process where adsorbed atoms or molecules are released from a surface into the surrounding vacuum or fluid. This occurs when a molecule gains enough energy to overcome the activation barrier and the binding energy that keep it attached to the surface. Desorption is the reverse of the process of adsorption, which differs from absorption in that adsorption refers to substances bound to the surface, rather than being absorbed into the bulk. Desorption can occur from any of several processes, or a combination of them: it may result from heat (thermal energy); incident light such as infrared, visible, or ultraviolet photons; or an incident beam of energetic particles such as electrons. It may also occur following chemical reactions such as oxidation or reduction in an electrochemical cell, or after a chemical reaction of adsorbed compounds in which the surface may act as a catalyst. Mechanisms. Depending on the nature of the adsorbate-to-surface bond, there are a multitude of mechanisms for desorption. The surface bond of a sorbate can be cleaved thermally, through chemical reactions or by radiation, all of which may result in desorption of the species. Thermal desorption. Thermal desorption is the process by which an adsorbate is heated and this induces desorption of atoms or molecules from the surface. The first use of thermal desorption was by LeRoy Apker in 1948. It is one of the most frequently used modes of desorption, and can be used to determine surface coverages of adsorbates and to evaluate the activation energy of desorption. Thermal desorption is typically described by the Polanyi-Wigner equation: formula_0 where "r" is the rate of desorption, formula_1 is the adsorbate coverage, "t" the time, "n" is the order of desorption, formula_2 the pre-exponential factor, "E" is the activation energy, "R" is the gas constant and "T" is the absolute temperature. 
The adsorbate coverage is defined as the ratio between occupied and available adsorption sites. The order of desorption, also known as the kinetic order, describes the relationship between the adsorbate coverage and the rate of desorption. In first order desorption, n = 1, the rate of desorbing particles is directly proportional to adsorbate coverage. Atomic or simple molecular desorption tends to be of the first order, and in this case the temperature at which maximum desorption occurs is independent of initial adsorbate coverage. In second order desorption, by contrast, the temperature of the maximum rate of desorption decreases with increased initial adsorbate coverage. This is because second order is re-combinative desorption, and with a larger initial coverage there is a higher probability that two particles will find each other and recombine into the desorption product. An example of second order desorption, n = 2, is when two hydrogen atoms on the surface desorb and form a gaseous H2 molecule. There is also zeroth order desorption, which commonly occurs on thick molecular layers; in this case the desorption rate does not depend on the particle concentration. In the case of zeroth order, n = 0, the desorption rate will continue to increase with temperature until a sudden drop once all the molecules have been desorbed. In a typical thermal desorption experiment, one would often assume a constant heating of the sample, and so temperature will increase linearly with time. The rate of heating can be represented by formula_3 Therefore, the temperature can be represented by: formula_4 where formula_5 is the starting time and formula_6 is the initial temperature. At the "desorption temperature", there is sufficient thermal energy for the molecules to escape the surface. One can use thermal desorption as a technique to investigate the binding energy of a metal. There are several different procedures for performing analysis of thermal desorption. 
For example, Redhead's peak maximum method is one of the ways to determine the activation energy in desorption experiments. For first order desorption, the activation energy is estimated from the temperature ("T""p") at which the desorption rate is a maximum. Using the equation for rate of desorption (the Polanyi-Wigner equation), one can find "T""p", and Redhead shows that the relationship between "T""p" and "E" can be approximated to be linear, given that the ratio of the rate constant to the heating rate is within the range 10^8 – 10^13. By varying the heating rate, and then plotting a graph of formula_7 against formula_8, one can find the activation energy using the following equation: formula_9 This method is straightforward, routinely applied and can give a value for activation energy within an error of 30%. However, a drawback of this method is that the rate constant in the Polanyi-Wigner equation and the activation energy are assumed to be independent of the surface coverage. Due to improvements in computational power, there are now several ways to perform thermal desorption analysis without assuming independence of the rate constant and activation energy. For example, the "complete analysis" method uses a family of desorption curves for several different surface coverages and integrates to obtain coverage as a function of temperature. Next, the desorption rate for a particular coverage is determined from each curve and an Arrhenius plot of the logarithm of the rate of desorption against 1/T is made. An example of an Arrhenius plot can be seen in the figure on the right. The activation energy can be found from the gradient of this Arrhenius plot. It also became possible to account for the effect of disorder on the value of the activation energy "E", which leads to non-Debye desorption kinetics at large times and makes it possible to explain both desorption from close-to-perfect silicon surfaces and desorption from microporous adsorbents like "NaX" zeolites. 
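The Polanyi-Wigner rate law and Redhead's peak-maximum estimate can be illustrated numerically. In the sketch below the prefactor ν = 10^13 s^-1, activation energy E = 120 kJ/mol and heating rate β = 2 K/s are assumptions chosen for illustration, not values from the text; the constant 3.64 is Redhead's usual linear approximation for ν/β in the range quoted above.

```python
import math

R_GAS = 8.314                     # gas constant, J/(mol*K)
NU, E, BETA = 1e13, 120e3, 2.0    # assumed prefactor (1/s), E (J/mol), ramp (K/s)

def rate(theta, T, n=1):
    """Polanyi-Wigner rate r = nu * theta^n * exp(-E / (R*T))."""
    return NU * theta**n * math.exp(-E / (R_GAS * T))

# Integrate d(theta)/dt = -r along the linear ramp T(t) = T0 + beta*t.
theta, T, dt = 1.0, 100.0, 1e-3
Ts, rates = [], []
while theta > 1e-6 and T < 1500.0:
    r = rate(theta, T)
    Ts.append(T); rates.append(r)
    theta -= r * dt
    T += BETA * dt

Tp = Ts[rates.index(max(rates))]  # peak temperature of the simulated spectrum
# Redhead's peak-maximum estimate for first-order desorption:
E_est = R_GAS * Tp * (math.log(NU * Tp / BETA) - 3.64)
print(round(Tp, 1), round(E_est / 1e3, 1))
```

For these parameters the simulated peak sits near 450 K and the Redhead estimate recovers the assumed activation energy to within a few percent.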
Another analysis technique involves simulating thermal desorption spectra and comparing to experimental data. This technique relies on kinetic Monte Carlo simulations and requires an understanding of the lattice interactions of the adsorbed atoms. These interactions are described from first principles by the Lattice Gas Hamiltonian, which varies depending on the arrangement of the atoms. An example of this method used to investigate the desorption of oxygen from rhodium can be found in the following paper: "Kinetic Monte Carlo simulations of temperature programed desorption of O/Rh(111)". Reductive or oxidative desorption. In some cases, the adsorbed molecule is chemically bonded to the surface/material, providing a strong adhesion and limiting desorption. If this is the case, desorption requires a chemical reaction which cleaves the chemical bonds. One way to accomplish this is to apply a voltage to the surface, resulting in either reduction or oxidation of the adsorbed molecule (depending on the bias and the adsorbed molecules). In a typical example of reductive desorption, a self-assembled monolayer of alkyl thiols on a gold surface can be removed by applying a negative bias to the surface, resulting in reduction of the sulfur head-group. The chemical reaction for this process would be: formula_10 where R is an alkyl chain (e.g. CH3), S is the sulfur atom of the thiol group, Au is a gold surface atom and e− is an electron supplied by an external voltage source. Another application for reductive/oxidative desorption is to clean active carbon material through electrochemical regeneration. Electron-stimulated desorption. Electron-stimulated desorption occurs as a result of an electron beam incident upon a surface in vacuum, as is common in particle physics and industrial processes such as scanning electron microscopy (SEM). At atmospheric pressure, molecules may weakly bond to surfaces in what is known as adsorption. 
These molecules may form monolayers at a density of 10^15 atoms/cm^2 for a perfectly smooth surface. One monolayer or several may form, depending on the bonding capabilities of the molecules. If an electron beam is incident upon the surface, it provides energy to break the bonds of the surface with molecules in the adsorbed monolayer(s), causing pressure to increase in the system. Once a molecule is desorbed into the vacuum volume, it is removed via the vacuum's pumping mechanism (re-adsorption is negligible). Hence, fewer molecules are available for desorption, and an increasing number of electrons are required to maintain constant desorption. One of the leading models on electron stimulated desorption is described by Peter Antoniewicz. In short, his theory is that the adsorbate becomes ionized by the incident electrons and then the ion experiences an image charge potential which attracts it towards the surface. As the ion moves closer to the surface, the possibility of electron tunnelling from the substrate increases and through this process ion neutralisation can occur. The neutralised ion still has kinetic energy from before, and if this energy plus the gained potential energy is greater than the binding energy then the ion can desorb from the surface. As ionisation is required for this process, this suggests the atom cannot desorb at low excitation energies, which agrees with experimental data on electron stimulated desorption. Understanding electron stimulated desorption is crucial for accelerators such as the Large Hadron Collider, where surfaces are subjected to an intense bombardment of energetic electrons. In particular, in the beam vacuum systems the desorption of gases can strongly impact the accelerator's performance by modifying the secondary electron yield of the surfaces. IR photodesorption. 
IR photodesorption is a type of desorption that occurs when infrared light hits a surface and activates processes involving the excitation of an internal vibrational mode of the previously adsorbed molecules, followed by the desorption of the species into the gas phase. One can selectively excite electrons or vibrations of the adsorbate or of the adsorbate-substrate coupled system. This relaxation of the bonds together with a sufficient energy exchange from the incident light to the system will eventually lead to desorption. Generally, the phenomenon is more effective for weaker-bound physisorbed species, which have a smaller adsorption potential depth compared to that of the chemisorbed ones. In fact, a shallower potential requires lower laser intensities to set a molecule free from the surface and make IR-photodesorption experiments feasible, because the measured desorption times are usually longer than the inverse of the other relaxation rates in the problem. Phonon activated desorption. In 2005, a mode of desorption was discovered by John Weaver et al. that has elements of both thermal and electron stimulated desorption. This mode is of particular interest as desorption can occur in a closed system without external stimulus. The mode was discovered whilst investigating bromine adsorbed on silicon using scanning tunnelling microscopy. In the experiment, the Si-Br wafers were heated to temperatures ranging from 620 to 775 K. However, it was not simple thermal desorption bond breaking that was observed, as the activation energies calculated from Arrhenius plots were found to be lower than the Si-Br bond strength. Instead, the optical phonons of the silicon weaken the surface bond through vibrations and also provide the energy for electrons to be excited to the antibonding state. Applications. Desorption is a physical process that can be very useful for several applications. In this section two applications of thermal desorption are explained. 
One of them is actually a technique of thermal desorption, temperature programmed desorption, rather than an application itself, but it has plenty of very important applications. The other one is the application of thermal desorption with the aim of reducing pollution. Temperature programmed desorption (TPD). Temperature programmed desorption (TPD) is one of the most widely used surface analysis techniques available for materials research science. It has several applications such as knowing the desorption rates and binding energies of chemical compounds and elements, evaluation of active sites on catalyst surfaces and the understanding of the mechanisms of catalytic reactions including adsorption, surface reaction and desorption, analysing material compositions, surface interactions and surface contaminants. Therefore, TPD is increasingly important in many industries including, but not limited to, quality control and industrial research on products such as polymers, pharmaceuticals, clays and minerals, food packaging, and metals and alloys. When TPD is used with the aim of knowing desorption rates of products that were previously adsorbed on a surface, it consists of heating a cold crystal surface that adsorbed a gas or a mixture of gases at a controlled rate. Then, the adsorbates will react as they are heated and then they will desorb from the surface. The results of applying TPD are the desorption rates of each of the product species that have been desorbed as a function of the temperature of the surface; this is called the TPD spectrum of the product. Also, as the temperature at which each of the surface compounds has been desorbed is known, it is possible to compute the energy that bound the desorbed compound to the surface, the activation energy. 
This physical process is designed to remove contaminants at relatively low temperatures, ranging from 90 to 560 °C, from the solid matrix. The contaminated media is heated to volatilize water and organic contaminants, followed by treatment in a gas treatment system in which, after removal, the contaminants are collected or thermally destroyed. They are transported using a carrier gas or vacuum to a vapor treatment system for removal/transformation into less toxic compounds. Thermal desorption systems operate at a lower design temperature, which is sufficiently high to achieve adequate volatilization of organic contaminants. Temperatures and residence times are designed to volatilize selected contaminants but typically will not oxidize them. It is applicable at sites where high direct waste burial is present, and a short timeframe is necessary to allow for continued use or redevelopment of the site. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r(\\theta) = - \\frac{\\text{d}\\theta}{\\text{d}t} = \\upsilon(\\theta) \\theta^n \\exp\\left(\\frac{-E(\\theta)}{RT}\\right)" }, { "math_id": 1, "text": "\\theta" }, { "math_id": 2, "text": "\\upsilon" }, { "math_id": 3, "text": "\\beta = \\frac{\\mathrm{d}T}{\\mathrm{d}t}" }, { "math_id": 4, "text": "T(t) = \\beta(t - t_0) + T_0" }, { "math_id": 5, "text": " t_0 " }, { "math_id": 6, "text": " T_0 " }, { "math_id": 7, "text": "\\log(\\beta)" }, { "math_id": 8, "text": "\\log(T_p)" }, { "math_id": 9, "text": "\\frac{\\mathrm{d}\\log(\\beta)}{\\mathrm{d}\\log(T_p)} = \\frac{E}{RT_p} + 2 " }, { "math_id": 10, "text": "R - S - Au + e^- \\longrightarrow R - S^- + Au " } ]
https://en.wikipedia.org/wiki?curid=1510738
1510857
Witt algebra
Algebra of meromorphic vector fields on the Riemann sphere In mathematics, the complex Witt algebra, named after Ernst Witt, is the Lie algebra of meromorphic vector fields defined on the Riemann sphere that are holomorphic except at two fixed points. It is also the complexification of the Lie algebra of polynomial vector fields on a circle, and the Lie algebra of derivations of the ring C["z","z"−1]. There are some related Lie algebras defined over finite fields that are also called Witt algebras. The complex Witt algebra was first defined by Élie Cartan (1909), and its analogues over finite fields were studied by Witt in the 1930s. Basis. A basis for the Witt algebra is given by the vector fields formula_0, for "n" in "formula_1". The Lie bracket of two basis vector fields is given by formula_2 This algebra has a central extension called the Virasoro algebra that is important in two-dimensional conformal field theory and string theory. Note that by restricting "n" to −1, 0, 1, one gets a subalgebra. Taken over the field of complex numbers, this is just the Lie algebra formula_3 of the Lorentz group formula_4. Over the reals, it is the algebra sl(2,R) = su(1,1). Conversely, su(1,1) suffices to reconstruct the original algebra in a presentation. Over finite fields. Over a field "k" of characteristic "p"&gt;0, the Witt algebra is defined to be the Lie algebra of derivations of the ring "k"["z"]/"z""p". The Witt algebra is spanned by "L""m" for −1≤ "m" ≤ "p"−2. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
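The bracket relation above can be checked directly on monomials. A minimal pure-Python sketch, representing Laurent polynomials as power-to-coefficient dicts and applying L_n f = −z^(n+1) df/dz:

```python
# Represent a function as a Laurent polynomial {power: coefficient},
# and L_n as the operator  L_n f = -z^(n+1) * df/dz .
def L(n, f):
    out = {}
    for k, c in f.items():
        if k != 0:                       # d/dz z^k = k z^(k-1)
            p = n + k                    # -z^(n+1) * k z^(k-1) = -k z^(n+k)
            out[p] = out.get(p, 0) - k * c
    return {p: c for p, c in out.items() if c != 0}

def bracket(m, n, f):
    """[L_m, L_n] f = L_m(L_n f) - L_n(L_m f)."""
    a, b = L(m, L(n, f)), L(n, L(m, f))
    keys = set(a) | set(b)
    return {p: a.get(p, 0) - b.get(p, 0) for p in keys if a.get(p, 0) != b.get(p, 0)}

# Check [L_m, L_n] = (m - n) L_{m+n} on a test monomial f = z^5.
f = {5: 1}
for m in range(-3, 4):
    for n in range(-3, 4):
        lhs = bracket(m, n, f)
        rhs = {p: (m - n) * c for p, c in L(m + n, f).items() if (m - n) * c != 0}
        assert lhs == rhs, (m, n)
print("Witt relation verified")
```

The check on a single monomial suffices because all the operators involved are linear and map monomials to monomials.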
[ { "math_id": 0, "text": "L_n=-z^{n+1} \\frac{\\partial}{\\partial z}" }, { "math_id": 1, "text": "\\mathbb Z" }, { "math_id": 2, "text": "[L_m,L_n]=(m-n)L_{m+n}." }, { "math_id": 3, "text": "\\mathfrak{sl}(2,\\mathbb{C})" }, { "math_id": 4, "text": "\\mathrm{SO}(3,1)" } ]
https://en.wikipedia.org/wiki?curid=1510857
15109
Inverse limit
Construction in category theory In mathematics, the inverse limit (also called the projective limit) is a construction that allows one to "glue together" several related objects, the precise gluing process being specified by morphisms between the objects. Thus, inverse limits can be defined in any category although their existence depends on the category that is considered. They are a special case of the concept of limit in category theory. By working in the dual category, that is by reversing the arrows, an inverse limit becomes a direct limit or "inductive limit", and a "limit" becomes a colimit. Formal definition. Algebraic objects. We start with the definition of an inverse system (or projective system) of groups and homomorphisms. Let formula_0 be a directed poset (not all authors require "I" to be directed). Let ("A""i")"i"∈"I" be a family of groups and suppose we have a family of homomorphisms formula_1 for all formula_2 (note the order) with the following properties: formula_3 is the identity on formula_4, and formula_5 Then the pair formula_6 is called an inverse system of groups and morphisms over formula_7, and the morphisms formula_8 are called the transition morphisms of the system. We define the inverse limit of the inverse system formula_6 as a particular subgroup of the direct product of the "formula_4"'s: formula_9 The inverse limit formula_10 comes equipped with "natural projections" which pick out the "i"th component of the direct product for each formula_11 in formula_7. The inverse limit and the natural projections satisfy a universal property described in the next section. This same construction may be carried out if the formula_4's are sets, semigroups, topological spaces, rings, modules (over a fixed ring), algebras (over a fixed ring), etc., and the homomorphisms are morphisms in the corresponding category. The inverse limit will also belong to that category. General definition. The inverse limit can be defined abstractly in an arbitrary category by means of a universal property. 
Let formula_12 be an inverse system of objects and morphisms in a category "C" (same definition as above). The inverse limit of this system is an object "X" in "C" together with morphisms π"i": "X" → "X""i" (called "projections") satisfying π"i" = formula_8 ∘ π"j" for all "i" ≤ "j". The pair ("X", π"i") must be universal in the sense that for any other such pair ("Y", ψ"i") there exists a unique morphism "u": "Y" → "X" such that the diagram commutes for all "i" ≤ "j". The inverse limit is often denoted formula_13 with the inverse system formula_14 being understood. In some categories, the inverse limit of certain inverse systems does not exist. If it does, however, it is unique in a strong sense: given any two inverse limits "X" and "X"' of an inverse system, there exists a "unique" isomorphism "X"′ → "X" commuting with the projection maps. Inverse systems and inverse limits in a category "C" admit an alternative description in terms of functors. Any partially ordered set "I" can be considered as a small category where the morphisms consist of arrows "i" → "j" if and only if "i" ≤ "j". An inverse system is then just a contravariant functor "I" → "C". Let formula_15 be the category of these functors (with natural transformations as morphisms). An object "X" of "C" can be considered a trivial inverse system, where all objects are equal to "X" and all arrows are the identity of "X". This defines a "trivial functor" from "C" to formula_16 The inverse limit, if it exists, is defined as a right adjoint of this trivial functor. Derived functors of the inverse limit. For an abelian category "C", the inverse limit functor formula_30 is left exact. If "I" is ordered (not simply partially ordered) and countable, and "C" is the category Ab of abelian groups, the Mittag-Leffler condition is a condition on the transition morphisms "f""ij" that ensures the exactness of formula_31. 
Specifically, Eilenberg constructed a functor formula_32 (pronounced "lim one") such that if ("A""i", "f""ij"), ("B""i", "g""ij"), and ("C""i", "h""ij") are three inverse systems of abelian groups, and formula_33 is a short exact sequence of inverse systems, then formula_34 is an exact sequence in Ab. Mittag-Leffler condition. If the ranges of the morphisms of an inverse system of abelian groups ("A""i", "f""ij") are "stationary", that is, for every "k" there exists "j" ≥ "k" such that for all "i" ≥ "j": formula_35 one says that the system satisfies the Mittag-Leffler condition. The name "Mittag-Leffler" for this condition was given by Bourbaki in their chapter on uniform structures for a similar result about inverse limits of complete Hausdorff uniform spaces. Mittag-Leffler used a similar argument in the proof of Mittag-Leffler's theorem. The following situations are examples where the Mittag-Leffler condition is satisfied: An example where formula_36 is non-zero is obtained by taking "I" to be the non-negative integers, letting "A""i" = "p""i"Z, "B""i" = Z, and "C""i" = "B""i" / "A""i" = Z/"p""i"Z. Then formula_37 where Z"p" denotes the p-adic integers. 
However, in 2002, Amnon Neeman and Pierre Deligne constructed an example of such a system in a category satisfying (AB4) (in addition to (AB4*)) with lim1 "A""i" ≠ 0. Roos has since shown (in "Derived functors of inverse limits revisited") that his result is correct if "C" has a set of generators (in addition to satisfying (AB3) and (AB4*)). Barry Mitchell has shown (in "The cohomological dimension of a directed set") that if "I" has cardinality formula_40 (the "d"th infinite cardinal), then "R""n"lim is zero for all "n" ≥ "d" + 2. This applies to the "I"-indexed diagrams in the category of "R"-modules, with "R" a commutative ring; it is not necessarily true in an arbitrary abelian category (see Roos' "Derived functors of inverse limits revisited" for examples of abelian categories in which lim"n", on diagrams indexed by a countable set, is nonzero for "n" &gt; 1). Related concepts and generalizations. The categorical dual of an inverse limit is a direct limit (or inductive limit). More general concepts are the limits and colimits of category theory. The terminology is somewhat confusing: inverse limits are a class of limits, while direct limits are a class of colimits.
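The concrete subgroup definition given earlier can be illustrated with the p-adic integers, the inverse limit of the rings Z/"p""n"Z with reduction maps as transition morphisms. The sketch below builds a compatible sequence representing a square root of 2 in Z_7 by Hensel lifting; the choice p = 7 and the helper names are illustrative, not from the text.

```python
# The ring of p-adic integers as the inverse limit of Z/p^n Z:
# an element is a compatible sequence (a_1, a_2, ...) with
# a_i = f_ij(a_j) for i <= j, the transition maps being reduction mod p^i.
p = 7

def transition(a_j, i):
    """f_ij : Z/p^j Z -> Z/p^i Z, reduction mod p^i."""
    return a_j % p**i

def hensel_sqrt2(levels):
    """Build a compatible sequence squaring to 2 in Z_7 by Hensel lifting
    (2 is a quadratic residue mod 7: 3^2 = 9 = 2 mod 7)."""
    seq = [3]
    for n in range(2, levels + 1):
        a, mod = seq[-1], p**n
        for lift in range(a, mod, p**(n - 1)):   # the p lifts of a mod p^n
            if (lift * lift - 2) % mod == 0:
                seq.append(lift)
                break
    return seq

seq = hensel_sqrt2(6)
# Compatibility: each term reduces to the earlier ones (the defining
# condition of the inverse-limit subgroup).
for i in range(1, len(seq) + 1):
    for j in range(i, len(seq) + 1):
        assert transition(seq[j - 1], i) == seq[i - 1]
# And each term squares to 2 modulo p^n:
for n, a in enumerate(seq, start=1):
    assert (a * a - 2) % p**n == 0
print(seq)
```

Because 2·3 is invertible mod 7, Hensel's lemma guarantees a unique lift at every level, so the inner search always succeeds and the resulting sequence is an honest element of the inverse limit.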
[ { "math_id": 0, "text": "(I, \\leq)" }, { "math_id": 1, "text": "f_{ij}: A_j \\to A_i" }, { "math_id": 2, "text": "i \\leq j" }, { "math_id": 3, "text": "f_{ii}" }, { "math_id": 4, "text": "A_i" }, { "math_id": 5, "text": "f_{ik} = f_{ij} \\circ f_{jk} \\quad \\text{for all } i \\leq j \\leq k." }, { "math_id": 6, "text": "((A_i)_{i\\in I}, (f_{ij})_{i\\leq j\\in I})" }, { "math_id": 7, "text": "I" }, { "math_id": 8, "text": "f_{ij}" }, { "math_id": 9, "text": "A = \\varprojlim_{i\\in I}{A_i} = \\left\\{\\left.\\vec a \\in \\prod_{i\\in I}A_i \\;\\right|\\; a_i = f_{ij}(a_j) \\text{ for all } i \\leq j \\text{ in } I\\right\\}." }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "i" }, { "math_id": 12, "text": " (X_i, f_{ij})" }, { "math_id": 13, "text": "X = \\varprojlim X_i" }, { "math_id": 14, "text": "(X_i, f_{ij})" }, { "math_id": 15, "text": "C^{I^\\mathrm{op}}" }, { "math_id": 16, "text": "C^{I^\\mathrm{op}}." }, { "math_id": 17, "text": "\\mathbb{Z}/p^n\\mathbb{Z}" }, { "math_id": 18, "text": "(n_1, n_2, \\dots)" }, { "math_id": 19, "text": "n_i\\equiv n_j \\mbox{ mod } p^{i}" }, { "math_id": 20, "text": "i<j." 
}, { "math_id": 21, "text": "\\mathbb{R}/p^n\\mathbb{Z}" }, { "math_id": 22, "text": "(x_1, x_2, \\dots)" }, { "math_id": 23, "text": "x_i\\equiv x_j \\mbox{ mod } p^{i}" }, { "math_id": 24, "text": "n + r" }, { "math_id": 25, "text": "n" }, { "math_id": 26, "text": "r\\in [0, 1)" }, { "math_id": 27, "text": "\\textstyle R[[t]]" }, { "math_id": 28, "text": "\\textstyle R[t]/t^nR[t]" }, { "math_id": 29, "text": "\\textstyle R[t]/t^{n+j}R[t]" }, { "math_id": 30, "text": "\\varprojlim:C^I\\rightarrow C" }, { "math_id": 31, "text": "\\varprojlim" }, { "math_id": 32, "text": "\\varprojlim{}^1:\\operatorname{Ab}^I\\rightarrow\\operatorname{Ab}" }, { "math_id": 33, "text": "0\\rightarrow A_i\\rightarrow B_i\\rightarrow C_i\\rightarrow0" }, { "math_id": 34, "text": "0\\rightarrow\\varprojlim A_i\\rightarrow\\varprojlim B_i\\rightarrow\\varprojlim C_i\\rightarrow\\varprojlim{}^1A_i" }, { "math_id": 35, "text": " f_{kj}(A_j)=f_{ki}(A_i)" }, { "math_id": 36, "text": "\\varprojlim{}^1" }, { "math_id": 37, "text": "\\varprojlim{}^1A_i=\\mathbf{Z}_p/\\mathbf{Z}" }, { "math_id": 38, "text": "R^n\\varprojlim:C^I\\rightarrow C." }, { "math_id": 39, "text": "\\varprojlim{}^n\\cong R^n\\varprojlim." }, { "math_id": 40, "text": "\\aleph_d" } ]
https://en.wikipedia.org/wiki?curid=15109
15112
Wave interference
Phenomenon resulting from the superposition of two waves In physics, interference is a phenomenon in which two coherent waves are combined by adding their intensities or displacements with due consideration for their "phase difference". The resultant wave may have greater intensity (constructive interference) or lower amplitude (destructive interference) if the two waves are in phase or out of phase, respectively. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves as well as in loudspeakers as electrical waves. Etymology. The word "interference" is derived from the Latin words "inter" which means "between" and "fere" which means "hit or strike", and was used in the context of wave superposition by Thomas Young in 1801. Mechanisms. The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, then the amplitude is the sum of the individual amplitudes—this is constructive interference. If a crest of one wave meets a trough of another wave, then the amplitude is equal to the difference in the individual amplitudes—this is known as destructive interference. In ideal media (water and air are almost ideal) energy is always conserved; at points of destructive interference, the wave amplitudes cancel each other out, and the energy is redistributed to other areas. For example, when two pebbles are dropped in a pond, a pattern is observable; but eventually waves continue, and only when they reach the shore is the energy absorbed away from the medium. 
Constructive interference occurs when the phase difference between the waves is an even multiple of π (180°), whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, then the magnitude of the displacement of the summed waves lies between the minimum and maximum values. Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points, these will be in phase, and will produce a maximum displacement. In other places, the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary—these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre. Interference of light is a unique phenomenon in that we can never observe superposition of the EM field directly as we can, for example, in water. Superposition in the EM field is an assumed phenomenon and necessary to explain how two light beams pass through each other and continue on their respective paths. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers. In addition to the classical wave model for understanding optical interference, quantum matter waves also demonstrate interference. Real-valued wave functions. The above can be demonstrated in one dimension by deriving the formula for the sum of two waves.
The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is formula_0 where formula_1 is the peak amplitude, formula_2 is the wavenumber and formula_3 is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right formula_4 where formula_5 is the phase difference between the waves in radians. The two waves will superpose and add: the sum of the two waves is formula_6 Using the trigonometric identity for the sum of two cosines: formula_7 this can be written formula_8 This represents a wave at the original frequency, traveling to the right like its components, whose amplitude is proportional to the cosine of formula_9. If the phase difference is an even multiple of π, formula_10 then formula_11 and the two waves reinforce: formula_12 If the phase difference is an odd multiple of π, formula_13 then formula_14 and the two waves cancel: formula_15 Between two plane waves. A simple form of interference pattern is obtained if two plane waves of the same frequency intersect at an angle. One wave is travelling horizontally, and the other is travelling downwards at an angle θ to the first wave. Assuming that the two waves are in phase at the point B, then the relative phase changes along the "x"-axis. The phase difference at the point A is given by formula_16 It can be seen that the two waves are in phase when formula_17 and are half a cycle out of phase when formula_18 Constructive interference occurs when the waves are in phase, and destructive interference when they are half a cycle out of phase. Thus, an interference fringe pattern is produced, where the separation of the maxima is formula_19 and "df" is known as the fringe spacing. The fringe spacing increases with increasing wavelength and with decreasing angle θ. The fringes are observed wherever the two waves overlap and the fringe spacing is uniform throughout. Between two spherical waves. A point source produces a spherical wave. If the light from two point sources overlaps, the interference pattern maps out the way in which the phase difference between the two waves varies in space.
This depends on the wavelength and on the separation of the point sources. The figure to the right shows interference between two spherical waves. The wavelength increases from top to bottom, and the distance between the sources increases from left to right. When the plane of observation is far enough away, the fringe pattern will be a series of almost straight lines, since the waves will then be almost planar. Multiple beams. Interference occurs when several waves are added together provided that the phase differences between them remain constant over the observation time. It is sometimes desirable for several waves of the same frequency and amplitude to sum to zero (that is, to interfere destructively and cancel). This is the principle behind, for example, 3-phase power and the diffraction grating. In both of these cases, the result is achieved by uniform spacing of the phases. It is easy to see that a set of waves will cancel if they have the same amplitude and their phases are spaced equally in angle. Using phasors, each wave can be represented as formula_20 for formula_21 waves from formula_22 to formula_23, where formula_24 To show that formula_25 one merely assumes the converse, then multiplies both sides by formula_26 The Fabry–Pérot interferometer uses interference between multiple reflections. A diffraction grating can be considered to be a multiple-beam interferometer, since the peaks it produces are generated by interference between the light transmitted by each of the elements in the grating; see interference vs. diffraction for further discussion. Complex-valued wave functions. Mechanical and gravity waves can be directly observed: they are real-valued wave functions; optical and matter waves cannot be directly observed: they are complex-valued wave functions. There are several differences between real-valued and complex-valued wave interference. Optical wave interference.
Because the frequency of light waves (~10^14 Hz) is too high for currently available detectors to detect the variation of the electric field of the light, it is possible to observe only the intensity of an optical interference pattern. The intensity of the light at a given point is proportional to the square of the average amplitude of the wave. This can be expressed mathematically as follows. The displacement of the two waves at a point r is: formula_27 formula_28 where A represents the magnitude of the displacement, φ represents the phase and ω represents the angular frequency. The displacement of the summed waves is formula_29 The intensity of the light at r is given by formula_30 This can be expressed in terms of the intensities of the individual waves as formula_31 Thus, the interference pattern maps out the difference in phase between the two waves, with maxima occurring when the phase difference is a multiple of 2π. If the two beams are of equal intensity, the maxima are four times as bright as the individual beams, and the minima have zero intensity. Classically, the two waves must have the same polarization to give rise to interference fringes, since it is not possible for waves of different polarizations to cancel one another out or add together. Instead, when waves of different polarization are added together, they give rise to a wave of a different polarization state. Quantum mechanically, the theories of Paul Dirac and Richard Feynman offer a more modern approach. Dirac showed that every quantum or photon of light acts on its own, which he famously stated as "every photon interferes with itself". Richard Feynman showed that, by evaluating a path integral in which all possible paths are considered, a number of higher-probability paths will emerge. In thin films, for example, a film thickness that is not a multiple of the light's wavelength will not allow the quanta to traverse; only reflection is possible. Light source requirements.
The discussion above assumes that the waves which interfere with one another are monochromatic, i.e. have a single frequency—this requires that they are infinite in time. This is not, however, either practical or necessary. Two identical waves of finite duration whose frequency is fixed over that period will give rise to an interference pattern while they overlap. Two identical waves which consist of a narrow spectrum of frequency waves of finite duration (but shorter than their coherence time) will give a series of fringe patterns of slightly differing spacings, and provided the spread of spacings is significantly less than the average fringe spacing, a fringe pattern will again be observed during the time when the two waves overlap. Conventional light sources emit waves of differing frequencies and at different times from different points in the source. If the light is split into two waves and then re-combined, each individual light wave may generate an interference pattern with its other half, but the individual fringe patterns generated will have different phases and spacings, and normally no overall fringe pattern will be observable. However, single-element light sources, such as sodium- or mercury-vapor lamps, have emission lines with quite narrow frequency spectra. When these are spatially and colour filtered, and then split into two waves, they can be superimposed to generate interference fringes. All interferometry prior to the invention of the laser was done using such sources and had a wide range of successful applications. A laser beam generally approximates much more closely to a monochromatic source, and thus it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems in that stray reflections may give spurious interference fringes which can result in errors.
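The washing-out of fringes by a broad spectrum, and their persistence for a narrow one, can be sketched numerically. All wavelengths, bandwidths, and path differences below are illustrative values; each spectral component is modelled as an ideal two-beam fringe pattern and the patterns are averaged:

```python
import math

MEAN = 550e-9  # illustrative centre wavelength, metres

def intensity(delta, spread, n=201):
    """Average ideal two-beam fringe patterns over a flat band of n
    wavelengths centred on MEAN with full width `spread` (metres),
    evaluated at path difference `delta`."""
    ws = [MEAN + spread * (k / (n - 1) - 0.5) for k in range(n)]
    return sum(1 + math.cos(2 * math.pi * delta / w) for w in ws) / n

def visibility(spread, delta):
    # Scan roughly one fringe period around `delta` and form the usual
    # contrast measure (Imax - Imin) / (Imax + Imin).
    vals = [intensity(delta + MEAN * k / 200, spread) for k in range(200)]
    return (max(vals) - min(vals)) / (max(vals) + min(vals))

print(visibility(50e-9, 0.0))        # near zero path difference: crisp fringes
print(visibility(1e-9, 100 * MEAN))  # narrow band: fringes survive
print(visibility(50e-9, 100 * MEAN)) # broad band: fringes washed out
```

With a 50 nm band the fringes are crisp near zero path difference but wash out at 100 wavelengths of path difference, while a 1 nm band still shows high contrast there, matching the behaviour described above for narrow-spectrum sources.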
Normally, a single laser beam is used in interferometry, though interference has been observed using two independent lasers whose frequencies were sufficiently matched to satisfy the phase requirements. This has also been observed for widefield interference between two incoherent laser sources. It is also possible to observe interference fringes using white light. A white light fringe pattern can be considered to be made up of a 'spectrum' of fringe patterns, each of slightly different spacing. If all the fringe patterns are in phase in the centre, then the fringes will increase in size as the wavelength increases and the summed intensity will show three to four fringes of varying colour. Young describes this very elegantly in his discussion of two-slit interference. Since white light fringes are obtained only when the two waves have travelled equal distances from the light source, they can be very useful in interferometry, as they allow the zero path difference fringe to be identified. Optical arrangements. To generate interference fringes, light from the source has to be divided into two waves which then have to be re-combined. Traditionally, interferometers have been classified as either amplitude-division or wavefront-division systems. In an amplitude-division system, a beam splitter is used to divide the light into two beams travelling in different directions, which are then superimposed to produce the interference pattern. The Michelson interferometer and the Mach–Zehnder interferometer are examples of amplitude-division systems. In wavefront-division systems, the wave is divided in space—examples are Young's double slit interferometer and Lloyd's mirror. Interference can also be seen in everyday phenomena such as iridescence and structural coloration. For example, the colours seen in a soap bubble arise from interference of light reflecting off the front and back surfaces of the thin soap film.
Depending on the thickness of the film, different colours interfere constructively and destructively. Quantum interference. Quantum interference – the observed wave-behavior of matter – resembles optical interference. Let formula_32 be a wavefunction solution of the Schrödinger equation for a quantum mechanical object. Then the probability formula_33 of observing the object at position formula_34 is formula_35 where * indicates complex conjugation. Quantum interference concerns the issue of this probability when the wavefunction is expressed as a sum or linear superposition of two terms formula_36: formula_37 Usually, formula_38 and formula_39 correspond to distinct situations A and B. When this is the case, the equation formula_36 indicates that the object can be in situation A or situation B. The above equation can then be interpreted as: The probability of finding the object at formula_34 is the probability of finding the object at formula_34 when it is in situation A plus the probability of finding the object at formula_34 when it is in situation B plus an extra term. This extra term, which is called the "quantum interference term", is formula_40 in the above equation. As in the classical wave case above, the quantum interference term can add to (constructive interference) or subtract from (destructive interference) formula_41 in the above equation, depending on whether the interference term is positive or negative. If this term is absent for all formula_34, then there is no quantum mechanical interference associated with situations A and B. The best-known example of quantum interference is the double-slit experiment. In this experiment, matter waves from electrons, atoms or molecules approach a barrier with two slits in it. One slit becomes formula_38 and the other becomes formula_39. The interference pattern occurs on the far side, observed by detectors suitable to the particles originating the matter wave.
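The decomposition of the probability into the two classical terms plus the quantum interference term can be verified numerically; the equal-weight amplitudes and relative phases below are arbitrary illustrative choices:

```python
import cmath

def probability(psi_a, psi_b):
    # P(x) = |psi_A + psi_B|^2 for the superposed wavefunction.
    return abs(psi_a + psi_b) ** 2

def interference_term(psi_a, psi_b):
    # psi_A* psi_B + psi_A psi_B*, which equals 2 Re(psi_A* psi_B).
    return 2 * (psi_a.conjugate() * psi_b).real

results = []
for phi in (0.0, cmath.pi / 2, cmath.pi):
    psi_a = 1 / cmath.sqrt(2)                     # equal-weight contributions
    psi_b = cmath.exp(1j * phi) / cmath.sqrt(2)   # with relative phase phi
    p = probability(psi_a, psi_b)
    classical = abs(psi_a) ** 2 + abs(psi_b) ** 2
    # The full probability is the classical sum plus the interference term.
    assert abs(p - (classical + interference_term(psi_a, psi_b))) < 1e-12
    results.append(round(p, 12))

print(results)  # [2.0, 1.0, 0.0]: constructive, no interference, destructive
```

For equal-weight contributions the total probability runs from twice the classical sum (constructive) through the classical sum (interference term zero) down to zero (destructive), exactly as the sign of the interference term dictates.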
The pattern matches the optical double slit pattern. Applications. Beat. In acoustics, a beat is an interference pattern between two sounds of slightly different frequencies, "perceived" as a periodic variation in volume whose rate is the difference of the two frequencies. With tuning instruments that can produce sustained tones, beats can be readily recognized. Tuning two tones to a unison will present a peculiar effect: when the two tones are close in pitch but not identical, the difference in frequency generates the beating. The volume varies like in a tremolo as the sounds alternately interfere constructively and destructively. As the two tones gradually approach unison, the beating slows down and may become so slow as to be imperceptible. As the two tones get further apart, their beat frequency starts to approach the range of human pitch perception, the beating starts to sound like a note, and a combination tone is produced. This combination tone can also be referred to as a missing fundamental, as the beat frequency of any two tones is equivalent to the frequency of their implied fundamental frequency. Optical interferometry. Interferometry has played an important role in the advancement of physics, and also has a wide range of applications in physical and engineering measurement. Thomas Young's double slit interferometer in 1803 demonstrated interference fringes when two small holes were illuminated by light from another small hole which was illuminated by sunlight. Young was able to estimate the wavelength of different colours in the spectrum from the spacing of the fringes. The experiment played a major role in the general acceptance of the wave theory of light. In quantum mechanics, this experiment is considered to demonstrate the inseparability of the wave and particle natures of light and other quantum particles (wave–particle duality). 
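Young's inference of wavelength from fringe spacing can be sketched with the small-angle double-slit relation, fringe spacing = wavelength × screen distance / slit separation (a rearrangement of the fringe-spacing formula given earlier). The slit separation, screen distance, and measured spacing below are hypothetical values, not Young's data:

```python
slit_separation = 0.20e-3   # metres (hypothetical)
screen_distance = 1.0       # metres (hypothetical)
fringe_spacing = 2.95e-3    # metres, spacing of adjacent bright fringes

# Invert the small-angle relation to recover the wavelength.
wavelength = fringe_spacing * slit_separation / screen_distance
print(wavelength * 1e9)     # wavelength in nanometres, ~590
```

Measuring the fringe spacing for different colours in this way gives each colour's wavelength, which is how the spacing of the fringes let Young estimate the wavelengths of the spectrum.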
Richard Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment. The results of the Michelson–Morley experiment are generally considered to be the first strong evidence against the theory of a luminiferous aether and in favor of special relativity. Interferometry has been used in defining and calibrating length standards. When the metre was defined as the distance between two marks on a platinum-iridium bar, Michelson and Benoît used interferometry to measure the wavelength of the red cadmium line in the new standard, and also showed that it could be used as a length standard. Sixty years later, in 1960, the metre in the new SI system was defined to be equal to 1,650,763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. This definition was replaced in 1983 by defining the metre as the distance travelled by light in vacuum during a specific time interval. Interferometry is still fundamental in establishing the calibration chain in length measurement. Interferometry is used in the calibration of slip gauges (called gauge blocks in the US) and in coordinate-measuring machines. It is also used in the testing of optical components. Radio interferometry. In 1946, a technique called astronomical interferometry was developed. Astronomical radio interferometers usually consist either of arrays of parabolic dishes or two-dimensional arrays of omni-directional antennas. All of the telescopes in the array are widely separated and are usually connected together using coaxial cable, waveguide, optical fiber, or other types of transmission line. Interferometry increases the total signal collected, but its primary purpose is to vastly increase the resolution through a process called aperture synthesis.
This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas farthest apart in the array. Acoustic interferometry. An acoustic interferometer is an instrument for measuring the physical characteristics of sound waves in a gas or liquid, such as velocity, wavelength, absorption, or impedance. A vibrating crystal creates ultrasonic waves that are radiated into the medium. The waves strike a reflector placed parallel to the crystal, are reflected back to the source, and are measured. See also. References.
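The beat relation described under Applications can be checked numerically; the two tone frequencies are illustrative, and the identity in the comment is the standard sum-to-product form:

```python
import math

f1, f2 = 440.0, 444.0   # Hz, two slightly detuned tones (illustrative)
beat = abs(f1 - f2)     # perceived rate of the volume variation: 4 Hz

# The sum of the tones is a carrier at the mean frequency modulated by a
# slow envelope:
#   sin(2*pi*f1*t) + sin(2*pi*f2*t)
#     == 2*cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t)
for t in (0.0, 0.0375, 0.1, 0.23):
    lhs = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    rhs = 2 * math.cos(math.pi * (f1 - f2) * t) * math.sin(math.pi * (f1 + f2) * t)
    assert abs(lhs - rhs) < 1e-9

print(beat)  # 4.0
```

The envelope factor cos(π(f1−f2)t) passes through zero twice per cycle, so the loudness rises and falls |f1−f2| times per second, which is the beat frequency heard when tuning.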
[ { "math_id": 0, "text": "W_1(x,t) = A\\cos(kx - \\omega t)" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "k = 2\\pi/\\lambda" }, { "math_id": 3, "text": "\\omega = 2\\pi f" }, { "math_id": 4, "text": "W_2(x,t) = A\\cos(kx - \\omega t + \\varphi)" }, { "math_id": 5, "text": "\\varphi" }, { "math_id": 6, "text": "W_1 + W_2 = A[\\cos(kx - \\omega t) + \\cos(kx - \\omega t + \\varphi)]." }, { "math_id": 7, "text": "\\cos a + \\cos b = 2\\cos\\left({a-b \\over 2}\\right)\\cos\\left({a+b \\over 2}\\right)," }, { "math_id": 8, "text": "W_1 + W_2 = 2A\\cos\\left({\\varphi \\over 2}\\right)\\cos\\left(kx - \\omega t + {\\varphi \\over 2}\\right)." }, { "math_id": 9, "text": "\\varphi/2" }, { "math_id": 10, "text": "\\varphi = \\ldots,-4\\pi, -2\\pi, 0, 2\\pi, 4\\pi,\\ldots" }, { "math_id": 11, "text": "\\left|\\cos(\\varphi/2)\\right| = 1" }, { "math_id": 12, "text": "W_1 + W_2 = 2A\\cos(kx - \\omega t)" }, { "math_id": 13, "text": "\\varphi = \\ldots,-3\\pi,\\, -\\pi,\\, \\pi,\\, 3\\pi,\\, 5\\pi,\\ldots" }, { "math_id": 14, "text": "\\cos(\\varphi/2) = 0\\," }, { "math_id": 15, "text": "W_1 + W_2 = 0" }, { "math_id": 16, "text": " \\Delta \\varphi = \\frac {2 \\pi d} {\\lambda} = \\frac {2 \\pi x \\sin \\theta} {\\lambda}." }, { "math_id": 17, "text": " \\frac {x \\sin \\theta} {\\lambda} = 0, \\pm 1, \\pm 2, \\ldots ," }, { "math_id": 18, "text": " \\frac {x \\sin \\theta} {\\lambda} = \\pm \\frac {1}{2}, \\pm \\frac {3}{2}, \\ldots " }, { "math_id": 19, "text": " d_f = \\frac {\\lambda} {\\sin \\theta}" }, { "math_id": 20, "text": "A e^{i \\varphi_n}" }, { "math_id": 21, "text": "N" }, { "math_id": 22, "text": "n=0" }, { "math_id": 23, "text": "n = N-1" }, { "math_id": 24, "text": "\\varphi_n - \\varphi_{n-1} = \\frac{2\\pi}{N}." }, { "math_id": 25, "text": "\\sum_{n=0}^{N-1} A e^{i \\varphi_n} = 0" }, { "math_id": 26, "text": " e^{i \\frac{2\\pi}{N}}." 
}, { "math_id": 27, "text": "U_1 (\\mathbf r,t) = A_1(\\mathbf r) e^{i [\\varphi_1 (\\mathbf r) - \\omega t]}" }, { "math_id": 28, "text": "U_2 (\\mathbf r,t) = A_2(\\mathbf r) e^{i [\\varphi_2 (\\mathbf r) - \\omega t]}" }, { "math_id": 29, "text": "U (\\mathbf r,t) = A_1(\\mathbf r) e^{i [\\varphi_1 (\\mathbf r) - \\omega t]}+A_2(\\mathbf r) e^{i [\\varphi_2 (\\mathbf r) - \\omega t]}." }, { "math_id": 30, "text": " I(\\mathbf r) = \\int U (\\mathbf r,t) U^* (\\mathbf r,t) \\, dt \\propto A_1^2 (\\mathbf r)+ A_2^2 (\\mathbf r) + 2 A_1 (\\mathbf r) A_2 (\\mathbf r) \\cos [\\varphi_1 (\\mathbf r)-\\varphi_2 (\\mathbf r)]. " }, { "math_id": 31, "text": " I(\\mathbf r) = I_1 (\\mathbf r)+ I_2 (\\mathbf r) + 2 \\sqrt{ I_1 (\\mathbf r) I_2 (\\mathbf r)} \\cos [\\varphi_1 (\\mathbf r)-\\varphi_2 (\\mathbf r)]." }, { "math_id": 32, "text": "\\Psi (x, t)" }, { "math_id": 33, "text": "P(x)" }, { "math_id": 34, "text": "x" }, { "math_id": 35, "text": "P(x) = |\\Psi (x, t)|^2 = \\Psi^* (x, t) \\Psi (x, t)" }, { "math_id": 36, "text": "\\Psi (x, t) = \\Psi_A (x, t) + \\Psi_B (x, t)" }, { "math_id": 37, "text": "P(x) = |\\Psi (x, t)|^2 = |\\Psi_A (x, t)|^2 + |\\Psi_B (x, t)|^2 + (\\Psi_A^* (x, t) \\Psi_B (x, t) + \\Psi_A (x, t) \\Psi_B^* (x, t))" }, { "math_id": 38, "text": "\\Psi_A (x, t)" }, { "math_id": 39, "text": "\\Psi_B (x, t)" }, { "math_id": 40, "text": "\\Psi_A^* (x, t) \\Psi_B (x, t) + \\Psi_A (x, t) \\Psi_B^* (x, t)" }, { "math_id": 41, "text": "|\\Psi_A (x, t)|^2 + |\\Psi_B (x, t)|^2" } ]
https://en.wikipedia.org/wiki?curid=15112
15112730
Laughlin wavefunction
In condensed matter physics, the Laughlin wavefunction is an ansatz, proposed by Robert Laughlin, for the ground state of a two-dimensional electron gas placed in a uniform background magnetic field in the presence of a uniform jellium background when the filling factor of the lowest Landau level is formula_0 where formula_1 is an odd positive integer. It was constructed to explain the observation of the formula_2 fractional quantum Hall effect (FQHE), and predicted the existence of additional formula_3 states as well as quasiparticle excitations with fractional electric charge formula_4, both of which were later experimentally observed. Laughlin received one third of the Nobel Prize in Physics in 1998 for this discovery. Context and analytical expression. If we ignore the jellium and mutual Coulomb repulsion between the electrons as a zeroth-order approximation, we have an infinitely degenerate lowest Landau level (LLL), and with a filling factor of 1/"n" we would expect that all of the electrons would lie in the LLL. Turning on the interactions, we can make the approximation that all of the electrons lie in the LLL. If formula_5 is the single-particle wavefunction of the LLL state with the lowest orbital angular momentum, then the Laughlin ansatz for the multiparticle wavefunction is formula_6 where position is denoted by formula_7 in terms of the magnetic length (in Gaussian units) formula_8 and formula_9 and formula_10 are coordinates in the x–y plane. Here formula_11 is the reduced Planck constant, formula_12 is the electron charge, formula_13 is the total number of particles, and formula_14 is the magnetic field, which is perpendicular to the xy plane. The subscripts on z identify the particle. In order for the wavefunction to describe fermions, n must be an odd integer. This forces the wavefunction to be antisymmetric under particle interchange. The angular momentum for this state is formula_15. True ground state in FQHE at "ν" = 1/3.
Consider formula_16 above: the resultant formula_17 is a trial wavefunction; it is not exact, but qualitatively it reproduces many features of the exact solution and quantitatively it has very high overlaps with the exact ground state for small systems. Assuming Coulomb repulsion between any two electrons, the ground state formula_18 can be determined using exact diagonalisation, and the overlaps have been calculated to be close to one. Moreover, with a short-range interaction (Haldane pseudopotentials for formula_19 set to zero), the Laughlin wavefunction becomes exact, i.e. formula_20. Energy of interaction for two particles. The Laughlin wavefunction is the multiparticle wavefunction for quasiparticles. The expectation value of the interaction energy for a pair of quasiparticles is formula_21 where the screened potential is formula_22 where formula_23 is a confluent hypergeometric function and formula_24 is a Bessel function of the first kind. Here, formula_25 is the distance between the centers of two current loops, formula_26 is the magnitude of the electron charge, formula_27 is the quantum version of the Larmor radius, and formula_28 is the thickness of the electron gas in the direction of the magnetic field. The angular momenta of the two individual current loops are formula_29 and formula_30 where formula_31. The inverse screening length is given by (Gaussian units) formula_32 where formula_33 is the cyclotron frequency, and formula_34 is the area of the electron gas in the xy plane. The interaction energy evaluates to: To obtain this result we have made the change of integration variables formula_35 and formula_36 and noted (see Common integrals in quantum field theory) formula_37 formula_38 formula_39 The interaction energy has minima for (Figure 1) formula_40 and formula_41 For these values of the ratio of angular momenta, the energy is plotted in Figure 2 as a function of formula_42. References.
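The antisymmetry of the ansatz under particle interchange, required above for odd "n", can be checked numerically for a hypothetical two-particle configuration (the coordinates and the unnormalised form below are illustrative):

```python
import cmath

def laughlin(zs, n):
    """Unnormalised Laughlin ansatz: the Jastrow factor
    prod_{i>j} (z_i - z_j)^n times exp(-sum_k |z_k|^2),
    with positions in the dimensionless units used above."""
    jastrow = 1.0 + 0.0j
    for i in range(len(zs)):
        for j in range(i):
            jastrow *= (zs[i] - zs[j]) ** n
    return jastrow * cmath.exp(-sum(abs(z) ** 2 for z in zs))

z1, z2 = 0.3 + 0.1j, -0.2 + 0.4j   # arbitrary sample coordinates
for n in (1, 3, 5):                # odd powers, as required for fermions
    swapped_sum = laughlin([z1, z2], n) + laughlin([z2, z1], n)
    assert abs(swapped_sum) < 1e-12  # exchanging particles flips the sign

print("antisymmetric under exchange for odd n")
```

For even powers the Jastrow factor is symmetric instead, which is why n must be odd for the wavefunction to describe fermions.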
[ { "math_id": 0, "text": "\\nu=1/n" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\nu=1/3" }, { "math_id": 3, "text": "\\nu = 1/n" }, { "math_id": 4, "text": "e/n" }, { "math_id": 5, "text": "\\psi_0" }, { "math_id": 6, "text": "\n\\langle z_1,z_2,z_3,\\ldots , z_N \\mid n,N\\rangle\n=\n\\psi_{n,N}(z_1,z_2, z_3, \\ldots, z_N ) \n=\nD \\left[ \\prod_{N \\geqslant i > j \\geqslant 1}\\left( z_i-z_j \\right)^n \\right] \\prod^N_{k=1}\\exp\\left( - \\mid z_k \\mid^2 \\right)\n" }, { "math_id": 7, "text": "z={1 \\over 2 \\mathit l_B} \\left( x + iy\\right) " }, { "math_id": 8, "text": " \\mathit l_B = \\sqrt{\\hbar c\\over e B} " }, { "math_id": 9, "text": " x " }, { "math_id": 10, "text": " y " }, { "math_id": 11, "text": " \\hbar " }, { "math_id": 12, "text": " e " }, { "math_id": 13, "text": " N " }, { "math_id": 14, "text": " B " }, { "math_id": 15, "text": " n\\hbar " }, { "math_id": 16, "text": "n=3" }, { "math_id": 17, "text": "\\Psi_L(z_1,z_2, z_3, \\ldots, z_N)\\propto \\Pi_{i<j} (z_i-z_j)^3" }, { "math_id": 18, "text": "\\Psi_{ED}" }, { "math_id": 19, "text": "m>3" }, { "math_id": 20, "text": "\\langle \\Psi_{ED}|\\Psi_L\\rangle=1" }, { "math_id": 21, "text": "\n\\langle V \\rangle\n=\n\\langle n, N \\mid V \\mid n, N\\rangle, \\; \\; \\; N=2\n" }, { "math_id": 22, "text": "\n V\\left( r_{12}\\right)\n=\n\\left( { 2 e^2 \\over L_B}\\right) \\int_0^{\\infty} {{k\\;dk \\;} \\over \n k^2 + k_B^2 r_{B}^2 }\n\\; M \\left ( \\mathit l + 1, 1, -{k^2 \\over 4} \\right) \\;M \\left ( \\mathit l^{\\prime} + 1, 1, -{k^2 \\over 4} \\right) \\;\\mathcal J_0 \\left ( k{r_{12}\\over r_{B}} \\right)\n" }, { "math_id": 23, "text": "M" }, { "math_id": 24, "text": "\\mathcal J_0" }, { "math_id": 25, "text": "r_{12}" }, { "math_id": 26, "text": "e" }, { "math_id": 27, "text": "r_{B}= \\sqrt{2} \\mathit l_B" }, { "math_id": 28, "text": "L_B" }, { "math_id": 29, "text": "\\mathit l \\hbar" }, { "math_id": 30, "text": "\\mathit l^{\\prime} \\hbar" }, { "math_id": 31, 
"text": "\\mathit l + \\mathit l^{\\prime} = n" }, { "math_id": 32, "text": "\n k_B^2 = {4 \\pi e^2 \\over \\hbar \\omega_c A L_B}\n" }, { "math_id": 33, "text": "\\omega_c " }, { "math_id": 34, "text": "A " }, { "math_id": 35, "text": "\nu_{12} = {z_1 - z_2 \\over \\sqrt{2} }\n" }, { "math_id": 36, "text": "\nv_{12} = {z_1 + z_2 \\over \\sqrt{2} }\n" }, { "math_id": 37, "text": "\n{1 \\over \\left( 2 \\pi\\right)^2\\; 2^{2n} \\; n! }\n\\int d^2z_1 \\; d^2z_2 \\; \\mid z_1 - z_2 \\mid^{2n} \\; \\exp \\left[ - 2 \\left( \\mid z_1 \\mid^2 + \\mid z_2\\mid^2 \\right) \\right] \\;\\mathcal J_0 \\left ( \\sqrt{2}\\; { k\\mid z_1 - z_2 \\mid } \\right)\n=\n" }, { "math_id": 38, "text": "\n{1 \\over \\left( 2 \\pi\\right)^2\\; 2^{n} \\; n! }\n\\int d^2u_{12} \\; d^2v_{12} \\; \\mid u_{12}\\mid^{2n} \\; \\exp \\left[ - 2 \\left( \\mid u_{12}\\mid^2 + \\mid v_{12}\\mid^2 \\right) \\right] \\;\\mathcal J_0 \\left ( {2} k\\mid u_{12} \\mid \\right)\n=\n" }, { "math_id": 39, "text": "\nM \\left ( n + 1, 1, -{k^2 \\over 2 } \\right)\n." }, { "math_id": 40, "text": "{\\mathit l \\over n} ={1\\over 3}, {2\\over 5}, {3\\over 7}, \\mbox{etc.,} " }, { "math_id": 41, "text": "{\\mathit l \\over n} ={2\\over3}, {3\\over 5}, {4\\over 7}, \\mbox{etc.} " }, { "math_id": 42, "text": "n " } ]
https://en.wikipedia.org/wiki?curid=15112730
15114520
Six exponentials theorem
Condition on transcendence of numbers In mathematics, specifically transcendental number theory, the six exponentials theorem is a result that, given the right conditions on the exponents, guarantees the transcendence of at least one of a set of exponentials. Statement. If "x"1, "x"2, ..., "x""d" are "d" complex numbers that are linearly independent over the rational numbers, and "y"1, "y"2, ..., "y""l" are "l" complex numbers that are also linearly independent over the rational numbers, and if "dl" &gt; "d" + "l", then at least one of the following "dl" numbers is transcendental: formula_0 The most interesting case is when "d" = 3 and "l" = 2, in which case there are six exponentials, hence the name of the result. The theorem is weaker than the related but thus far unproved four exponentials conjecture, whereby the strict inequality "dl" &gt; "d" + "l" is replaced with "dl" ≥ "d" + "l", thus allowing "d" = "l" = 2. The theorem can be stated in terms of logarithms by introducing the set "L" of logarithms of algebraic numbers: formula_1 The theorem then says that if λ"ij" are elements of "L" for "i" = 1, 2 and "j" = 1, 2, 3, such that λ11, λ12, and λ13 are linearly independent over the rational numbers, and λ11 and λ21 are also linearly independent over the rational numbers, then the matrix formula_2 has rank 2. History. A special case of the result where "x"1, "x"2, and "x"3 are logarithms of positive integers, "y"1 = 1, and "y"2 is real, was first mentioned in a paper by Leonidas Alaoglu and Paul Erdős from 1944 in which they try to prove that the ratio of consecutive colossally abundant numbers is always prime. They claimed that Carl Ludwig Siegel knew of a proof of this special case, but it is not recorded. Using the special case they manage to prove that the ratio of consecutive colossally abundant numbers is always either a prime or a semiprime. 
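The exponent-count condition "dl" > "d" + "l" can be tabulated for small "d" and "l" (a trivial arithmetic sketch; the transcendence content of the theorem is of course not captured by this):

```python
# Pairs (d, l) with d >= l >= 1 satisfying d*l > d + l fall under the six
# exponentials theorem; equality d*l == d + l is exactly the still-open
# four exponentials case.
theorem_cases = [(d, l) for d in range(1, 6) for l in range(1, d + 1)
                 if d * l > d + l]
boundary_cases = [(d, l) for d in range(1, 6) for l in range(1, d + 1)
                  if d * l == d + l]

print(theorem_cases[0])   # (3, 2): six numbers x_i * y_j, the smallest case
print(boundary_cases)     # [(2, 2)]: the four exponentials conjecture
```

This makes it visible why (d, l) = (3, 2) is the most interesting case: it is the smallest pair of exponent counts for which the theorem applies at all, while (2, 2) sits exactly on the boundary that the four exponentials conjecture would cover.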
The theorem was first explicitly stated and proved in its complete form independently by Serge Lang and Kanakanahalli Ramachandra in the 1960s. Five exponentials theorem. A stronger, related result is the five exponentials theorem, which is as follows. Let "x"1, "x"2 and "y"1, "y"2 be two pairs of complex numbers, with each pair being linearly independent over the rational numbers, and let γ be a non-zero algebraic number. Then at least one of the following five numbers is transcendental: formula_3 This theorem implies the six exponentials theorem and in turn is implied by the as yet unproven four exponentials conjecture, which says that in fact one of the first four numbers on this list must be transcendental. Sharp six exponentials theorem. Another related result that implies both the six exponentials theorem and the five exponentials theorem is the sharp six exponentials theorem. This theorem is as follows. Let "x"1, "x"2, and "x"3 be complex numbers that are linearly independent over the rational numbers, and let "y"1 and "y"2 be a pair of complex numbers that are linearly independent over the rational numbers, and suppose that β"ij" are six algebraic numbers for 1 ≤ "i" ≤ 3 and 1 ≤ "j" ≤ 2 such that the following six numbers are algebraic: formula_4 Then "x""i" "y""j" = β"ij" for 1 ≤ "i" ≤ 3 and 1 ≤ "j" ≤ 2. The six exponentials theorem then follows by setting β"ij" = 0 for every "i" and "j", while the five exponentials theorem follows by setting "x"3 = γ/"x"1 and using Baker's theorem to ensure that the "x""i" are linearly independent. There is a sharp version of the five exponentials theorem as well, although it is as yet unproven and so is known as the sharp five exponentials conjecture. This conjecture implies both the sharp six exponentials theorem and the five exponentials theorem, and is stated as follows.
Let "x"1, "x"2 and "y"1, "y"2 be two pairs of complex numbers, with each pair being linearly independent over the rational numbers, and let α, β11, β12, β21, β22, and γ be six algebraic numbers with γ ≠ 0 such that the following five numbers are algebraic: formula_5 Then "x""i" "y""j" = β"ij" for 1 ≤ "i", "j" ≤ 2 and γ"x"2 = α"x"1. A consequence of this conjecture that isn't currently known would be the transcendence of "e"π², by setting "x"1 = "y"1 = β11 = 1, "x"2 = "y"2 = "i"π, and all the other values in the statement to be zero. Strong six exponentials theorem. A further strengthening of the theorems and conjectures in this area are the strong versions. The strong six exponentials theorem is a result proved by Damien Roy that implies the sharp six exponentials theorem. This result concerns the vector space over the algebraic numbers generated by 1 and all logarithms of algebraic numbers, denoted here as "L"∗. So "L"∗ is the set of all complex numbers of the form formula_6 for some "n" ≥ 0, where all the β"i" and α"i" are algebraic and every branch of the logarithm is considered. The strong six exponentials theorem then says that if "x"1, "x"2, and "x"3 are complex numbers that are linearly independent over the algebraic numbers, and if "y"1 and "y"2 are a pair of complex numbers that are also linearly independent over the algebraic numbers then at least one of the six numbers "x""i" "y""j" for 1 ≤ "i" ≤ 3 and 1 ≤ "j" ≤ 2 is not in "L"∗. This is stronger than the standard six exponentials theorem which says that one of these six numbers is not simply the logarithm of an algebraic number. There is also a strong five exponentials conjecture formulated by Michel Waldschmidt. It would imply both the strong six exponentials theorem and the sharp five exponentials conjecture. 
This conjecture claims that if "x"1, "x"2 and "y"1, "y"2 are two pairs of complex numbers, with each pair being linearly independent over the algebraic numbers, then at least one of the following five numbers is not in "L"∗: formula_7 All the above conjectures and theorems are consequences of the unproven extension of Baker's theorem, that logarithms of algebraic numbers that are linearly independent over the rational numbers are automatically algebraically independent too. The diagram on the right shows the logical implications between all these results. Generalization to commutative group varieties. The exponential function ez uniformizes the exponential map of the multiplicative group G"m". Therefore, we can reformulate the six exponential theorem more abstractly as follows: Let "G" = G"m" × G"m" and take "u" : C → "G"(C) to be a non-zero complex-analytic group homomorphism. Define L to be the set of complex numbers l for which "u"("l") is an algebraic point of G. If a minimal generating set of L over Q has more than two elements then the image "u"(C) is an algebraic subgroup of "G"(C). (In order to derive the classical statement, set "u"("z") = (e"y"1"z"; e"y"2"z") and note that Q"x"1 + Q"x"2 + Q"x"3 is a subset of "L"). In this way, the statement of the six exponentials theorem can be generalized to an arbitrary commutative group variety G over the field of algebraic numbers. This generalized six exponential conjecture, however, seems out of scope at the current state of transcendental number theory. For the special but interesting cases "G" = G"m" × "E" and "G" = "E" × "E′", where "E", "E′" are elliptic curves over the field of algebraic numbers, results towards the generalized six exponential conjecture were proven by Aleksander Momot. These results involve the exponential function ez and a Weierstrass function formula_8 resp.
two Weierstrass functions formula_9 with algebraic invariants formula_10, instead of the two exponential functions formula_11 in the classical statement. Let "G" = G"m" × "E" and suppose E is not isogenous to a curve over a real field and that "u"(C) is not an algebraic subgroup of "G"(C). Then L is generated over Q either by two elements "x"1, "x"2, or three elements "x"1, "x"2, "x"3 which are not all contained in a real line R"c", where c is a non-zero complex number. A similar result is shown for "G" = "E" × "E′".
[ { "math_id": 0, "text": "\\exp(x_i y_j),\\quad (1 \\leq i \\leq d,\\ 1 \\leq j \\leq l)." }, { "math_id": 1, "text": "\\mathcal{L}=\\{\\lambda\\in\\mathbb{C}\\,:\\,e^\\lambda\\in\\overline{\\mathbb{Q}}\\}." }, { "math_id": 2, "text": "M=\\begin{pmatrix}\\lambda_{11}&\\lambda_{12}&\\lambda_{13} \\\\ \\lambda_{21}&\\lambda_{22}&\\lambda_{23}\\end{pmatrix}" }, { "math_id": 3, "text": "e^{x_1 y_1}, e^{x_1 y_2}, e^{x_2 y_1}, e^{x_2 y_2}, e^{\\gamma x_2/x_1}." }, { "math_id": 4, "text": "e^{x_1 y_1-\\beta_{11}}, e^{x_1 y_2-\\beta_{12}}, e^{x_2 y_1-\\beta_{21}}, e^{x_2 y_2-\\beta_{22}}, e^{x_3 y_1-\\beta_{31}}, e^{x_3 y_2-\\beta_{32}}." }, { "math_id": 5, "text": "e^{x_1 y_1-\\beta_{11}}, e^{x_1 y_2-\\beta_{12}}, e^{x_2 y_1-\\beta_{21}}, e^{x_2 y_2-\\beta_{22}}, e^{(\\gamma x_2/x_1)-\\alpha}." }, { "math_id": 6, "text": "\\beta_0+\\sum_{i=1}^n \\beta_i\\log\\alpha_i," }, { "math_id": 7, "text": "x_1y_1,\\,x_1y_2,\\,x_2y_1,\\,x_2y_2,\\,x_1/x_2." }, { "math_id": 8, "text": "\\wp" }, { "math_id": 9, "text": "\\wp, \\wp'" }, { "math_id": 10, "text": "g_2, g_3, g_2', g_3'" }, { "math_id": 11, "text": "e^{y_1z}, e^{y_2z}" } ]
https://en.wikipedia.org/wiki?curid=15114520
15114628
Four exponentials conjecture
In mathematics, specifically the field of transcendental number theory, the four exponentials conjecture is a conjecture which, given the right conditions on the exponents, would guarantee the transcendence of at least one of four exponentials. The conjecture, along with two related, stronger conjectures, is at the top of a hierarchy of conjectures and theorems concerning the arithmetic nature of a certain number of values of the exponential function. Statement. If "x"1, "x"2 and "y"1, "y"2 are two pairs of complex numbers, with each pair being linearly independent over the rational numbers, then at least one of the following four numbers is transcendental: formula_0 An alternative way of stating the conjecture in terms of logarithms is the following. For 1 ≤ "i", "j" ≤ 2 let λ"ij" be complex numbers such that exp(λ"ij") are all algebraic. Suppose λ11 and λ12 are linearly independent over the rational numbers, and λ11 and λ21 are also linearly independent over the rational numbers, then formula_1 An equivalent formulation in terms of linear algebra is the following. Let "M" be the 2×2 matrix formula_2 where exp(λ"ij") is algebraic for 1 ≤ "i", "j" ≤ 2. Suppose the two rows of "M" are linearly independent over the rational numbers, and the two columns of "M" are linearly independent over the rational numbers. Then the rank of "M" is 2. While a 2×2 matrix having linearly independent rows and columns usually means it has rank 2, in this case we require linear independence over a smaller field so the rank isn't forced to be 2. For example, the matrix formula_3 has rows and columns that are linearly independent over the rational numbers, since "π" is irrational. But the rank of the matrix is 1. So in this case the conjecture would imply that at least one of "e", "e""π", and "e""π"2 is transcendental (which in this case is already known since "e" is transcendental). History. 
The conjecture was considered in the early 1940s by Atle Selberg who never formally stated the conjecture. A special case of the conjecture is mentioned in a 1944 paper of Leonidas Alaoglu and Paul Erdős who suggest that it had been considered by Carl Ludwig Siegel. An equivalent statement was first mentioned in print by Theodor Schneider who set it as the first of eight important, open problems in transcendental number theory in 1957. The related six exponentials theorem was first explicitly mentioned in the 1960s by Serge Lang and Kanakanahalli Ramachandra, and both also explicitly conjecture the above result. Indeed, after proving the six exponentials theorem Lang mentions the difficulty in dropping the number of exponents from six to four — the proof used for six exponentials "just misses" when one tries to apply it to four. Corollaries. Using Euler's identity this conjecture implies the transcendence of many numbers involving "e" and π. For example, taking "x"1 = 1, "x"2 = √2, "y"1 = "iπ", and "y"2 = "iπ"√2, the conjecture—if true—implies that one of the following four numbers is transcendental: formula_4 The first of these is just −1, and the fourth is 1, so the conjecture implies that "e""iπ"√2 is transcendental (which is already known, by consequence of the Gelfond–Schneider theorem). An open problem in number theory settled by the conjecture is the question of whether there exists a non-integer real number "t" such that both 2"t" and 3"t" are integers, or indeed such that "a""t" and "b""t" are both integers for some pair of integers "a" and "b" that are multiplicatively independent over the integers. Values of "t" such that 2"t" is an integer are all of the form "t" = log2"m" for some integer "m", while for 3"t" to be an integer, "t" must be of the form "t" = log3"n" for some integer "n". 
By setting "x"1 = 1, "x"2 = "t", "y"1 = log(2), and "y"2 = log(3), the four exponentials conjecture implies that if "t" is irrational then one of the following four numbers is transcendental: formula_5 So if 2"t" and 3"t" are both integers then the conjecture implies that "t" must be a rational number. Since the only rational numbers "t" for which 2"t" is also rational are the integers, this implies that there are no non-integer real numbers "t" such that both 2"t" and 3"t" are integers. It is this consequence, for any two primes (not just 2 and 3), that Alaoglu and Erdős desired in their paper as it would imply the conjecture that the quotient of two consecutive colossally abundant numbers is prime, extending Ramanujan's results on the quotients of consecutive superior highly composite numbers. Sharp four exponentials conjecture. The four exponentials conjecture reduces the pair and triplet of complex numbers in the hypotheses of the six exponentials theorem to two pairs. It is conjectured that this is also possible with the sharp six exponentials theorem, and this is the sharp four exponentials conjecture. Specifically, this conjecture claims that if "x"1, "x"2, and "y"1, "y"2 are two pairs of complex numbers with each pair being linearly independent over the rational numbers, and if β"ij" are four algebraic numbers for 1 ≤ "i", "j" ≤ 2 such that the following four numbers are algebraic: formula_6 then "x""i" "y""j" = β"ij" for 1 ≤ "i", "j" ≤ 2. So all four exponentials are in fact 1. This conjecture implies both the sharp six exponentials theorem, which requires a third "x" value, and the as yet unproven sharp five exponentials conjecture that requires a further exponential to be algebraic in its hypotheses. Strong four exponentials conjecture. The strongest result that has been conjectured in this circle of problems is the strong four exponentials conjecture.
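The observation above about 2^"t" and 3^"t" can be explored numerically. The sketch below (illustrative, not from the source) scans "t" = log2("m") and checks when 3^"t" lands on an integer; floating point only gives evidence consistent with the conjecture's consequence, not a proof, and the tolerance is an assumption.

```python
import math

def is_power_of_two(m: int) -> bool:
    # m > 0 is a power of two iff it has a single set bit
    return m > 0 and (m & (m - 1)) == 0

def near_integer(x: float, tol: float = 1e-9) -> bool:
    return abs(x - round(x)) < tol

# For t = log2(m), 2^t = m is an integer by construction; ask when
# 3^t is (numerically) an integer as well.
hits = [m for m in range(2, 5000) if near_integer(3.0 ** math.log2(m))]

# Only powers of 2 appear, i.e. only the trivial cases where t is an integer.
assert all(is_power_of_two(m) for m in hits)
print(hits[:5])  # [2, 4, 8, 16, 32]
```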
This result would imply both aforementioned conjectures concerning four exponentials as well as all the five and six exponentials conjectures and theorems, as illustrated to the right, and all the three exponentials conjectures detailed below. The statement of this conjecture deals with the vector space over the algebraic numbers generated by 1 and all logarithms of non-zero algebraic numbers, denoted here as "L"∗. So "L"∗ is the set of all complex numbers of the form formula_7 for some "n" ≥ 0, where all the β"i" and α"i" are algebraic and every branch of the logarithm is considered. The statement of the strong four exponentials conjecture is then as follows. Let "x"1, "x"2, and "y"1, "y"2 be two pairs of complex numbers with each pair being linearly independent over the algebraic numbers, then at least one of the four numbers "x""i" "y""j" for 1 ≤ "i", "j" ≤ 2 is not in "L"∗. Three exponentials conjecture. The four exponentials conjecture rules out a special case of non-trivial, homogeneous, quadratic relations between logarithms of algebraic numbers. But a conjectural extension of Baker's theorem implies that there should be no non-trivial algebraic relations between logarithms of algebraic numbers at all, homogeneous or not. One case of non-homogeneous quadratic relations is covered by the still open three exponentials conjecture. In its logarithmic form it is the following conjecture. Let λ1, λ2, and λ3 be any three logarithms of algebraic numbers and γ be a non-zero algebraic number, and suppose that λ1λ2 = γλ3. Then λ1λ2 = γλ3 = 0. The exponential form of this conjecture is the following. Let "x"1, "x"2, and "y" be non-zero complex numbers and let γ be a non-zero algebraic number. 
Then at least one of the following three numbers is transcendental: formula_8 There is also a sharp three exponentials conjecture which claims that if "x"1, "x"2, and "y" are non-zero complex numbers and α, β1, β2, and γ are algebraic numbers such that the following three numbers are algebraic formula_9 then either "x"2"y" = β2 or γ"x"1 = α"x"2. The strong three exponentials conjecture meanwhile states that if "x"1, "x"2, and "y" are non-zero complex numbers with "x"1"y", "x"2"y", and "x"1/"x"2 all transcendental, then at least one of the three numbers "x"1"y", "x"2"y", "x"1/"x"2 is not in "L"∗. As with the other results in this family, the strong three exponentials conjecture implies the sharp three exponentials conjecture which implies the three exponentials conjecture. However, the strong and sharp three exponentials conjectures are implied by their four exponentials counterparts, bucking the usual trend. And the three exponentials conjecture is neither implied by nor implies the four exponentials conjecture. The three exponentials conjecture, like the sharp five exponentials conjecture, would imply the transcendence of "e"π2 by letting (in the logarithmic version) λ1 = "i"π, λ2 = −"i"π, and γ = 1. Bertrand's conjecture. Many of the theorems and results in transcendental number theory concerning the exponential function have analogues involving the modular function "j". Writing "q" = "e"2π"i"τ for the nome and "j"(τ) = "J"("q"), Daniel Bertrand conjectured that if "q"1 and "q"2 are non-zero algebraic numbers in the complex unit disc that are multiplicatively independent, then "J"("q"1) and "J"("q"2) are algebraically independent over the rational numbers. Although not obviously related to the four exponentials conjecture, Bertrand's conjecture in fact implies a special case known as the weak four exponentials conjecture. 
This conjecture states that if "x"1 and "x"2 are two positive real algebraic numbers, neither of them equal to 1, then π2 and the product log("x"1)log("x"2) are linearly independent over the rational numbers. This corresponds to the special case of the four exponentials conjecture whereby "y"1 = "i"π, "y"2 = −"i"π, and "x"1 and "x"2 are real. Perhaps surprisingly, though, it is also a corollary of Bertrand's conjecture, suggesting there may be an approach to the full four exponentials conjecture via the modular function "j".
[ { "math_id": 0, "text": "e^{x_1y_1}, e^{x_1y_2}, e^{x_2y_1}, e^{x_2y_2}." }, { "math_id": 1, "text": "\\lambda_{11}\\lambda_{22}\\neq\\lambda_{12}\\lambda_{21}.\\," }, { "math_id": 2, "text": "M=\\begin{pmatrix}\\lambda_{11}&\\lambda_{12} \\\\ \\lambda_{21}&\\lambda_{22}\\end{pmatrix}," }, { "math_id": 3, "text": "\\begin{pmatrix}1&\\pi \\\\ \\pi&\\pi^2\\end{pmatrix}" }, { "math_id": 4, "text": "e^{i\\pi}, e^{i\\pi\\sqrt{2}}, e^{i\\pi\\sqrt{2}}, e^{2i\\pi}." }, { "math_id": 5, "text": "2, 3, 2^t, 3^t.\\," }, { "math_id": 6, "text": "e^{x_1 y_1-\\beta_{11}}, e^{x_1 y_2-\\beta_{12}}, e^{x_2 y_1-\\beta_{21}}, e^{x_2 y_2-\\beta_{22}}," }, { "math_id": 7, "text": "\\beta_0+\\sum_{i=1}^n \\beta_i\\log\\alpha_i," }, { "math_id": 8, "text": "e^{x_1y}, e^{x_2y}, e^{\\gamma x_1/x_2}." }, { "math_id": 9, "text": "e^{x_1 y-\\beta_1}, e^{x_2 y-\\beta_2}, e^{(\\gamma x_1/x_2)-\\alpha}," } ]
https://en.wikipedia.org/wiki?curid=15114628
1512013
Wald's equation
Theorem in probability theory In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, it relates the expectation of a sum of randomly many finite-mean, independent and identically distributed random variables to the expected number of terms in the sum and the random variables' common expectation under the condition that the number of terms in the sum is independent of the summands. The equation is named after the mathematician Abraham Wald. An identity for the second moment is given by the Blackwell–Girshick equation. Basic version. Let ("Xn")"n"∈formula_0 be a sequence of real-valued, independent and identically distributed random variables and let "N ≥ 0" be an integer-valued random variable that is independent of the sequence ("Xn")"n"∈formula_0. Suppose that "N" and the "Xn" have finite expectations. Then formula_1 Example. Roll a six-sided die. Take the number on the die (call it "N") and roll that number of six-sided dice to get the numbers "X"1, . . . , "XN", and add up their values. By Wald's equation, the resulting value on average is formula_2 General version. Let ("Xn")"n"∈formula_0 be an infinite sequence of real-valued random variables and let "N" be a nonnegative integer-valued random variable. Assume that: 1. ("Xn")"n"∈formula_0 are all integrable (finite-mean) random variables, 2. E["Xn"1{"N" ≥ "n"}] = E["Xn"] P("N" ≥ "n") for every natural number "n", and 3. the infinite series satisfies formula_3 Then the random sums formula_4 are integrable and formula_5 If, in addition, 4. ("Xn")"n"∈formula_0 all have the same expectation, and 5. "N" has finite expectation, then formula_6 Remark: Usually, the name "Wald's equation" refers to this last equality. Discussion of assumptions. Clearly, assumption (1) is needed to formulate assumption (2) and Wald's equation.
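The dice example above is easy to check by simulation. The sketch below (illustrative; the names, seed and trial count are choices, not from the source) draws "N" from one roll of a die and then sums "N" further rolls, comparing the empirical mean to E["N"] E["X"1] = 3.5 · 3.5 = 12.25.

```python
import random

def wald_dice_trial(rng: random.Random) -> int:
    # One experiment: roll a die to get N, then sum N further die rolls.
    n = rng.randint(1, 6)
    return sum(rng.randint(1, 6) for _ in range(n))

rng = random.Random(0)  # fixed seed for reproducibility
trials = 200_000
mean = sum(wald_dice_trial(rng) for _ in range(trials)) / trials

# Wald's equation predicts E[S_N] = E[N] * E[X_1] = 3.5 * 3.5 = 12.25.
print(round(mean, 2))
```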
Assumption (2) controls the amount of dependence allowed between the sequence ("Xn")"n"∈formula_0 and the number "N" of terms; see the counterexample below for the necessity. Note that assumption (2) is satisfied when "N" is a stopping time for a sequence of independent random variables ("Xn")"n"∈formula_0. Assumption (3) is of more technical nature, implying absolute convergence and therefore allowing arbitrary rearrangement of an infinite series in the proof. If assumption (5) is satisfied, then assumption (3) can be strengthened to the simpler condition 6. there exists a real constant "C" such that E[|"Xn"| 1{"N" ≥ "n"}] ≤ "C" P("N" ≥ "n") for all natural numbers "n". Indeed, using assumption (6), formula_7 and the last series equals the expectation of "N" [Proof], which is finite by assumption (5). Therefore, (5) and (6) imply assumption (3). Assume in addition to (1) and (5) that 7. "N" is independent of the sequence ("Xn")"n"∈formula_0 and 8. there exists a constant "C" such that E[|"Xn"|] ≤ "C" for all natural numbers "n". Then all the assumptions (1), (2), (5) and (6), hence also (3) are satisfied. In particular, the conditions (4) and (8) are satisfied if 9. the random variables ("Xn")"n"∈formula_0 all have the same distribution. Note that the random variables of the sequence ("Xn")"n"∈formula_0 don't need to be independent. The interesting point is to admit some dependence between the random number "N" of terms and the sequence ("Xn")"n"∈formula_0. A standard version is to assume (1), (5), (8) and the existence of a filtration (ℱ"n")"n"∈formula_00 such that 10. "N" is a stopping time with respect to the filtration, and 11. "Xn" and ℱ"n"–1 are independent for every "n" ∈ formula_0. Then (10) implies that the event {"N" ≥ "n"} = {"N" ≤ "n" – 1}c is in ℱ"n"–1, hence by (11) independent of "Xn". This implies (2), and together with (8) it implies (6).
For convenience (see the proof below using the optional stopping theorem) and to specify the relation of the sequence ("Xn")"n"∈formula_0 and the filtration (ℱ"n")"n"∈formula_00, the following additional assumption is often imposed: 12. the sequence ("Xn")"n"∈formula_0 is adapted to the filtration (ℱ"n")"n"∈formula_00, meaning the "Xn" is ℱ"n"-measurable for every "n" ∈ formula_0. Note that (11) and (12) together imply that the random variables ("Xn")"n"∈formula_0 are independent. Application. An application is in actuarial science when considering the total claim amount, which follows a compound Poisson process formula_8 within a certain time period, say one year, arising from a random number "N" of individual insurance claims, whose sizes are described by the random variables ("Xn")"n"∈formula_0. Under the above assumptions, Wald's equation can be used to calculate the expected total claim amount when information about the average claim number per year and the average claim size is available. Under stronger assumptions and with more information about the underlying distributions, Panjer's recursion can be used to calculate the distribution of "SN". Examples. Example with dependent terms. Let "N" be an integrable, formula_00-valued random variable, which is independent of the integrable, real-valued random variable "Z" with E["Z"] = 0. Define "Xn" = (–1)"n" "Z" for all "n" ∈ formula_0. Then assumptions (1), (5), (7), and (8) with "C" := E[|"Z"|] are satisfied, hence also (2) and (6), and Wald's equation applies. If the distribution of "Z" is not symmetric, then (9) does not hold. Note that, when "Z" is not almost surely equal to the zero random variable, then (11) and (12) cannot hold simultaneously for any filtration (ℱ"n")"n"∈formula_00, because "Z" cannot be independent of itself as E["Z"2] = (E["Z"])2 = 0 is impossible. Example where the number of terms depends on the sequence. Let ("Xn")"n"∈formula_0 be a sequence of independent, symmetric, and {–1, +1}-valued random variables.
For every "n" ∈ formula_0 let ℱ"n" be the σ-algebra generated by "X"1, . . . , "Xn" and define "N" = "n" when "Xn" is the first random variable taking the value +1. Note that P("N" = "n") = 1/2"n", hence E["N"] < ∞ by the ratio test. The assumptions (1), (5) and (9), hence (4) and (8) with "C" = 1, (10), (11), and (12) hold, hence also (2), and (6) and Wald's equation applies. However, (7) does not hold, because "N" is defined in terms of the sequence ("Xn")"n"∈formula_0. Intuitively, one might expect to have E["SN"] > 0 in this example, because the summation stops right after a one, thereby apparently creating a positive bias. However, Wald's equation shows that this intuition is misleading. Counterexamples. A counterexample illustrating the necessity of assumption (2). Consider a sequence ("Xn")"n"∈formula_0 of i.i.d. (independent and identically distributed) random variables, taking each of the two values 0 and 1 with probability 1/2 (actually, only "X"1 is needed in the following). Define "N" = 1 – "X"1. Then "SN" is identically equal to zero, hence E["SN"] = 0, but E["X"1] = 1/2 and E["N"] = 1/2 and therefore Wald's equation does not hold. Indeed, the assumptions (1), (3), (4) and (5) are satisfied, however, the equation in assumption (2) holds for all "n" ∈ formula_0 except for "n" = 1. A counterexample illustrating the necessity of assumption (3). Very similar to the second example above, let ("Xn")"n"∈formula_0 be a sequence of independent, symmetric random variables, where "Xn" takes each of the values 2"n" and –2"n" with probability 1/2. Let "N" be the first "n" ∈ formula_0 such that "Xn" = 2"n". Then, as above, "N" has finite expectation, hence assumption (5) holds. Since E["Xn"] = 0 for all "n" ∈ formula_0, assumptions (1) and (4) hold. However, since "SN" = 2 almost surely, Wald's equation cannot hold. Since "N" is a stopping time with respect to the filtration generated by ("Xn")"n"∈formula_0, assumption (2) holds, see above.
Therefore, only assumption (3) can fail, and indeed, since formula_9 and therefore P("N" ≥ "n") = 1/2"n"–1 for every "n" ∈ formula_0, it follows that formula_10 A proof using the optional stopping theorem. Assume (1), (5), (8), (10), (11) and (12). Using assumption (1), define the sequence of random variables formula_11 Assumption (11) implies that the conditional expectation of "Xn" given ℱ"n"–1 equals E["Xn"] almost surely for every "n" ∈ formula_0, hence ("Mn")"n"∈formula_00 is a martingale with respect to the filtration by assumption (12). Assumptions (5), (8) and (10) make sure that we can apply the optional stopping theorem, hence "MN" = "SN" – "TN" is integrable and E["MN"] = E["M"0] = 0. (13) Due to assumption (8), formula_12 and due to assumption (5) this upper bound is integrable. Hence we can add the expectation of "TN" to both sides of Equation (13) and obtain by linearity formula_13 Remark: Note that this proof does not cover the above example with dependent terms. General proof. This proof uses only Lebesgue's monotone and dominated convergence theorems. We prove the statement as given above in three steps. Step 1: Integrability of the random sum "SN". We first show that the random sum "SN" is integrable. Define the partial sums "Si" = "X"1 + . . . + "Xi" for "i" ∈ formula_00. (14) Since "N" takes its values in formula_00 and since "S"0 = 0, it follows that formula_14 The Lebesgue monotone convergence theorem implies that formula_15 By the triangle inequality, formula_16 Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain E[|"SN"|] ≤ Σ"i"≥1 Σ"n"≤"i" E[|"Xn"| 1{"N" = "i"}] = Σ"n"≥1 Σ"i"≥"n" E[|"Xn"| 1{"N" = "i"}] ≤ Σ"n"≥1 E[|"Xn"| 1{"N" ≥ "n"}], (15) where the second inequality follows using the monotone convergence theorem. By assumption (3), the infinite sequence on the right-hand side of (15) converges, hence "SN" is integrable. Step 2: Integrability of the random sum "TN". We now show that the random sum "TN" is integrable. Define the partial sums of real numbers "Ti" = E["X"1] + . . . + E["Xi"] for "i" ∈ formula_00. (16)
Since "N" takes its values in formula_00 and since "T"0 = 0, it follows that formula_17 As in step 1, the Lebesgue monotone convergence theorem implies that formula_18 By the triangle inequality, formula_19 Using this upper estimate and changing the order of summation (which is permitted because all terms are non-negative), we obtain E[|"TN"|] ≤ Σ"n"≥1 |E["Xn"]| P("N" ≥ "n"). (17) By assumption (2), formula_20 Substituting this into (17) yields formula_21 which is finite by assumption (3), hence "TN" is integrable. Step 3: Proof of the identity. To prove Wald's equation, we essentially go through the same steps again without the absolute value, making use of the integrability of the random sums "SN" and "TN" in order to show that they have the same expectation. Using the dominated convergence theorem with dominating random variable |"SN"| and the definition of the partial sum "Si" given in (14), it follows that formula_22 Due to the absolute convergence proved in (15) above using assumption (3), we may rearrange the summation and obtain that formula_23 where we used assumption (1) and the dominated convergence theorem with dominating random variable |"Xn"| for the second equality. Due to assumption (2) and the σ-additivity of the probability measure, formula_24 Substituting this result into the previous equation, rearranging the summation (which is permitted due to absolute convergence, see (15) above), using linearity of expectation and the definition of the partial sum "Ti" of expectations given in (16), formula_25 By using dominated convergence again with dominating random variable |"TN"|, formula_26 If assumptions (4) and (5) are satisfied, then by linearity of expectation, formula_27 This completes the proof.
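As a sanity check of the example above in which summation stops right after the first +1 (a sketch, not from the source): both an exact computation over the geometric distribution of "N" and a direct simulation confirm that E["SN"] = 0, as Wald's equation predicts.

```python
import random
from fractions import Fraction

# Exactly: P(N = n) = 2^{-n} and the stopped sum is S_N = 2 - N
# (N - 1 terms equal to -1, then a final +1), so
# E[S_N] = sum over n of (2 - n) * 2^{-n} = 0. Truncate at n = 60.
exact = sum(Fraction(2 - n, 2**n) for n in range(1, 61))
print(float(exact))  # essentially 0

# Simulation of the same stopping rule.
rng = random.Random(1)
def stopped_sum() -> int:
    total = 0
    while True:
        x = rng.choice((-1, 1))
        total += x
        if x == 1:
            return total

trials = 100_000
mean = sum(stopped_sum() for _ in range(trials)) / trials
print(round(mean, 3))  # near 0, despite stopping right after a +1
```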
[ { "math_id": 0, "text": "\\mathbb{N}" }, { "math_id": 1, "text": "\\operatorname{E}[X_1+\\dots+X_N]=\\operatorname{E}[N] \\operatorname{E}[X_1]\\,." }, { "math_id": 2, "text": "\\operatorname{E}[N] \\operatorname{E}[X] = \\frac{1+2+3+4+5+6}6\\cdot\\frac{1+2+3+4+5+6}6 = \\frac{441}{36} = \\frac{49}{4} = 12.25\\,." }, { "math_id": 3, "text": "\\sum_{n=1}^\\infty\\operatorname{E}\\!\\bigl[|X_n| 1_{\\{N\\ge n\\}}\\bigr]<\\infty." }, { "math_id": 4, "text": "S_N:=\\sum_{n=1}^NX_n,\\qquad T_N:=\\sum_{n=1}^N\\operatorname{E}[X_n]" }, { "math_id": 5, "text": "\\operatorname{E}[S_N]=\\operatorname{E}[T_N]." }, { "math_id": 6, "text": "\\operatorname{E}[S_N]=\\operatorname{E}[N]\\, \\operatorname{E}[X_1]." }, { "math_id": 7, "text": "\\sum_{n=1}^\\infty\\operatorname{E}\\!\\bigl[|X_n|1_{\\{N\\ge n\\}}\\bigr]\\le\nC\\sum_{n=1}^\\infty\\operatorname{P}(N\\ge n)," }, { "math_id": 8, "text": "S_N=\\sum_{n=1}^NX_n" }, { "math_id": 9, "text": "\\{N\\ge n\\}=\\{X_i=-2^{i} \\text{ for } i=1,\\ldots,n-1\\}" }, { "math_id": 10, "text": "\\sum_{n=1}^\\infty\\operatorname{E}\\!\\bigl[|X_n|1_{\\{N\\ge n\\}}\\bigr]\n=\\sum_{n=1}^\\infty 2^n\\,\\operatorname{P}(N\\ge n)\n=\\sum_{n=1}^\\infty 2=\\infty." }, { "math_id": 11, "text": "M_n = \\sum_{i=1}^n (X_i - \\operatorname{E}[X_i]),\\quad n\\in{\\mathbb N}_0." }, { "math_id": 12, "text": "|T_N|=\\biggl|\\sum_{i=1}^N\\operatorname{E}[X_i]\\biggr| \\le \\sum_{i=1}^N\\operatorname{E}[|X_i|]\\le CN," }, { "math_id": 13, "text": "\\operatorname{E}[S_N]\n=\\operatorname{E}[T_N]." }, { "math_id": 14, "text": "|S_N|=\\sum_{i=1}^\\infty|S_i|\\,1_{\\{N=i\\}}." }, { "math_id": 15, "text": "\\operatorname{E}[|S_N|]=\\sum_{i=1}^\\infty\\operatorname{E}[|S_i|\\,1_{\\{N=i\\}}]." }, { "math_id": 16, "text": "|S_i|\\le\\sum_{n=1}^i|X_n|,\\quad i\\in{\\mathbb N}." }, { "math_id": 17, "text": "|T_N|=\\sum_{i=1}^\\infty|T_i|\\,1_{\\{N=i\\}}." }, { "math_id": 18, "text": "\\operatorname{E}[|T_N|]=\\sum_{i=1}^\\infty |T_i|\\operatorname{P}(N=i)." 
}, { "math_id": 19, "text": "|T_i|\\le\\sum_{n=1}^i\\bigl|\\!\\operatorname{E}[X_n]\\bigr|,\\quad i\\in{\\mathbb N}." }, { "math_id": 20, "text": "\\bigl|\\!\\operatorname{E}[X_n]\\bigr|\\operatorname{P}(N\\ge n)\n=\\bigl|\\!\\operatorname{E}[X_n1_{\\{N\\ge n\\}}]\\bigr|\n\\le\\operatorname{E}[|X_n|1_{\\{N\\ge n\\}}],\\quad n\\in{\\mathbb N}." }, { "math_id": 21, "text": "\\operatorname{E}[|T_N|]\\le\\sum_{n=1}^\\infty\\operatorname{E}[|X_n|1_{\\{N\\ge n\\}}]," }, { "math_id": 22, "text": "\\operatorname{E}[S_N]=\\sum_{i=1}^\\infty\\operatorname{E}[S_i1_{\\{N=i\\}}]\n=\\sum_{i=1}^\\infty\\sum_{n=1}^i\\operatorname{E}[X_n1_{\\{N=i\\}}]." }, { "math_id": 23, "text": "\\operatorname{E}[S_N]=\\sum_{n=1}^\\infty\\sum_{i=n}^\\infty\\operatorname{E}[X_n1_{\\{N=i\\}}]=\\sum_{n=1}^\\infty\\operatorname{E}[X_n1_{\\{N\\ge n\\}}]," }, { "math_id": 24, "text": "\\begin{align}\\operatorname{E}[X_n1_{\\{N\\ge n\\}}] &=\\operatorname{E}[X_n]\\operatorname{P}(N\\ge n)\\\\\n&=\\operatorname{E}[X_n]\\sum_{i=n}^\\infty\\operatorname{P}(N=i)\n=\\sum_{i=n}^\\infty\\operatorname{E}\\!\\bigl[\\operatorname{E}[X_n]1_{\\{N=i\\}}\\bigr].\\end{align}" }, { "math_id": 25, "text": "\\operatorname{E}[S_N]=\\sum_{i=1}^\\infty\\sum_{n=1}^i\\operatorname{E}\\!\\bigl[\\operatorname{E}[X_n]1_{\\{N=i\\}}\\bigr]=\\sum_{i=1}^\\infty\\operatorname{E}[\\underbrace{T_i1_{\\{N=i\\}}}_{=\\,T_N1_{\\{N=i\\}}}]." }, { "math_id": 26, "text": "\\operatorname{E}[S_N]=\\operatorname{E}\\!\\biggl[T_N\\underbrace{\\sum_{i=1}^\\infty1_{\\{N=i\\}}}_{=\\,1_{\\{N\\ge1\\}}}\\biggr]=\\operatorname{E}[T_N]." }, { "math_id": 27, "text": "\\operatorname{E}[T_N]=\\operatorname{E}\\!\\biggl[\\sum_{n=1}^N \\operatorname{E}[X_n]\\biggr]=\\operatorname{E}[X_1]\\operatorname{E}\\!\\biggl[\\underbrace{\\sum_{n=1}^N 1}_{=\\,N}\\biggr]=\\operatorname{E}[N]\\operatorname{E}[X_1]." } ]
https://en.wikipedia.org/wiki?curid=1512013
1512119
Prime constant
Real number whose nth binary digit is 1 if n is prime and 0 if n is composite or 1 The prime constant is the real number formula_0 whose formula_1th binary digit is 1 if formula_1 is prime and 0 if formula_1 is composite or 1. In other words, formula_0 is the number whose binary expansion corresponds to the indicator function of the set of prime numbers. That is, formula_2 where formula_3 indicates a prime and formula_4 is the characteristic function of the set formula_5 of prime numbers. The beginning of the decimal expansion of "ρ" is: formula_6 (sequence in the OEIS) The beginning of the binary expansion is: formula_7 (sequence in the OEIS) Irrationality. The number formula_0 can be shown to be irrational. To see why, suppose it were rational. Denote the formula_8th digit of the binary expansion of formula_0 by formula_9. Then since formula_0 is assumed rational, its binary expansion is eventually periodic, and so there exist positive integers formula_10 and formula_8 such that formula_11 for all formula_12 and all formula_13. Since there are an infinite number of primes, we may choose a prime formula_14. By definition we see that formula_15. As noted, we have formula_16 for all formula_13. Now consider the case formula_17. We have formula_18, since formula_19 is composite because formula_20. Since formula_21 we see that formula_0 is irrational.
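The definition is easy to check directly. The sketch below (illustrative) builds ρ from the indicator of the primes and reproduces the binary expansion quoted above; truncating at 100 binary digits is an arbitrary choice that fixes far more decimal digits than are printed.

```python
from fractions import Fraction

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

# rho = sum over primes p of 2^{-p}, truncated at 100 binary digits.
bits = 100
rho = sum(Fraction(1, 2**p) for p in range(2, bits + 1) if is_prime(p))

binary = "0." + "".join("1" if is_prime(n) else "0" for n in range(1, 28))
print(binary)      # 0.011010100010100010100010000
print(float(rho))  # agrees with 0.41468250985111166... above
```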
[ { "math_id": 0, "text": "\\rho" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": " \\rho = \\sum_{p} \\frac{1}{2^p} = \\sum_{n=1}^\\infty \\frac{\\chi_{\\mathbb{P}}(n)}{2^n}" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "\\chi_{\\mathbb{P}}" }, { "math_id": 5, "text": "\\mathbb{P}" }, { "math_id": 6, "text": " \\rho = 0.414682509851111660248109622\\ldots" }, { "math_id": 7, "text": " \\rho = 0.011010100010100010100010000\\ldots_2 " }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "r_k" }, { "math_id": 10, "text": "N" }, { "math_id": 11, "text": "r_n = r_{n+ik}" }, { "math_id": 12, "text": "n > N" }, { "math_id": 13, "text": "i \\in \\mathbb{N}" }, { "math_id": 14, "text": "p > N" }, { "math_id": 15, "text": "r_p=1" }, { "math_id": 16, "text": "r_p=r_{p+ik}" }, { "math_id": 17, "text": "i=p" }, { "math_id": 18, "text": "r_{p+i \\cdot k}=r_{p+p \\cdot k}=r_{p(k+1)}=0" }, { "math_id": 19, "text": "p(k+1)" }, { "math_id": 20, "text": "k+1 \\geq 2" }, { "math_id": 21, "text": "r_p \\neq r_{p(k+1)}" } ]
https://en.wikipedia.org/wiki?curid=1512119
15126504
Parachor
Physical quantity related to surface tension Parachor is a quantity related to surface tension that was proposed by S. Sugden in 1924. It is defined according to the formula: formula_0, where formula_1 is the surface tension, formula_2 is the molar mass, formula_3 is the liquid density, and formula_4 is the vapor density in equilibrium with the liquid. Parachor has a volume multiplier and is therefore extensible from components to mixtures. Parachor "has been used in solving various structural problems." The etymology of "parachor" is a combination of the Greek prefix "para," meaning "aside," and Greek "chor," meaning "space." Sugden in other publications showed that each compound had a characteristic parachor value. Since the work of Sugden, parachor has been used to "correlate" the surface tension data of a variety of pure liquids and liquid mixtures. Boudh-Hir and Mansoori (1990) presented a general molecular theory for parachor valid for all ranges of temperature. Using the molecular theory of Boudh-Hir and Mansoori, Escobedo and Mansoori (1996) produced an analytical solution for parachor as a function of temperature, valid at all temperatures ranging from the melting point to the critical point. They also used the resulting analytic equation to predict surface tensions of a variety of liquids in all ranges of temperature from melting point to critical point. It is shown to represent the experimental surface tension data of 94 different organic compounds within 1.05 AAD%. This analytic equation represents an accurate and generalized expression to predict surface tensions of pure liquids of practical interest. Escobedo and Mansoori (1998) extended applications of the same theory to the case of mixtures of organic liquids. Using the proposed equation, surface tensions of 55 binary mixtures are predicted within an overall 0.50 AAD%, which is better than all the available prediction and correlation methods. 
When the resulting equations are made compound-insensitive using a corresponding states principle, the surface tension of all the same 55 binary mixtures are predicted within an overall 2.10 AAD%. It is shown that the proposed model is also applicable to multicomponent liquid mixtures. Surface Tension of Binary Mixtures. The surface tension of binary carbon dioxide mixtures was predicted using a modified parachor approach that took temperature-dependent characteristics into account. Individual solvent parachors rise almost linearly with decreasing temperature. The exponent of the parachor equation drops consistently as the temperature is decreased for all binary mixtures. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
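Sugden's defining formula can be sketched in a few lines. This is an illustrative helper (the function name and the rounded water values used below are assumptions, not data from the article); with the conventional units of mN/m for surface tension, g/mol for molar mass and g/cm³ for the densities, the result is in the customary parachor units:

```python
def parachor(gamma, molar_mass, rho_liquid, rho_vapor):
    """Sugden's parachor P = gamma**(1/4) * M / (rho_L - rho_V).

    gamma      : surface tension (mN/m)
    molar_mass : M (g/mol)
    rho_liquid : liquid density (g/cm^3)
    rho_vapor  : density of the vapor in equilibrium with the liquid
    """
    return gamma ** 0.25 * molar_mass / (rho_liquid - rho_vapor)
```

Because the molar mass enters linearly, parachor scales like a molar volume, which is why it is extensible from components to mixtures.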
[ { "math_id": 0, "text": "P = \\gamma^{1/4} M / (\\rho_L - \\rho_V)" }, { "math_id": 1, "text": "\\gamma" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "\\rho_L" }, { "math_id": 4, "text": "\\rho_V" } ]
https://en.wikipedia.org/wiki?curid=15126504
1512655
Hagen number
The Hagen number (Hg) is a dimensionless number used in forced flow calculations. It is the forced flow equivalent of the Grashof number and was named after the German hydraulic engineer G. H. L. Hagen. It is defined as: formula_0 where formula_1 is the pressure gradient along the flow, "L" is a characteristic length, "ρ" is the fluid density and "ν" is the kinematic viscosity. For natural convection formula_2 and so the Hagen number then coincides with the Grashof number. Awad presented a comparison of the Hagen number with the Bejan number. Although their physical meanings are not the same, because the former represents the dimensionless pressure gradient while the latter represents the dimensionless pressure drop, the Hagen number coincides with the Bejan number in cases where the characteristic length (l) is equal to the flow length (L). He also introduced a new expression of the Bejan number for Hagen–Poiseuille flow and extended the Hagen number to a general form. For the case of the Reynolds analogy (Pr = Sc = 1), all three definitions of the Hagen number are the same. The general form of the Hagen number is formula_3 where formula_4 is the corresponding diffusivity of the process under consideration. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
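The defining formula is a one-liner; this sketch (function and parameter names are illustrative) follows formula_0 literally, so a favourable pressure gradient (dp/dx &lt; 0) gives a positive Hagen number:

```python
def hagen_number(dp_dx, length, density, kinematic_viscosity):
    """Hagen number Hg = -(1/rho) * (dp/dx) * L**3 / nu**2 (dimensionless).

    dp_dx               : pressure gradient along the flow (Pa/m)
    length              : characteristic length L (m)
    density             : fluid density rho (kg/m^3)
    kinematic_viscosity : nu (m^2/s)
    """
    return -dp_dx * length ** 3 / (density * kinematic_viscosity ** 2)
```

For water-like properties (ρ = 1000 kg/m³, ν = 10⁻⁶ m²/s) and L = 1 cm, a gradient of −100 Pa/m gives Hg = 10⁵.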
[ { "math_id": 0, "text": "\n\\mathrm{Hg} = -\\frac{1}{\\rho}\\frac{\\mathrm{d} p}{\\mathrm{d} x}\\frac{L^3}{\\nu^2}\n" }, { "math_id": 1, "text": "\\frac{\\mathrm{d}p}{\\mathrm{d}x}" }, { "math_id": 2, "text": "\n\\frac{\\mathrm{d} p}{\\mathrm{d} x} = \\rho g \\beta \\Delta T,\n" }, { "math_id": 3, "text": "\n\\mathrm{Hg} = -\\frac{1}{\\rho}\\frac{\\mathrm{d} p}{\\mathrm{d} x}\\frac{L^3}{\\delta^2}\n" }, { "math_id": 4, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=1512655
15127042
Thermodynamic integration
Method for computing free energy differences between thermodynamic states Thermodynamic integration is a method used to compare the difference in free energy between two given states (e.g., A and B) whose potential energies formula_0 and formula_1 have different dependences on the spatial coordinates. Because the free energy of a system is not simply a function of the phase space coordinates of the system, but is instead a function of the Boltzmann-weighted integral over phase space (i.e. partition function), the free energy difference between two states cannot be calculated directly from the potential energy of just two coordinate sets (for states A and B respectively). In thermodynamic integration, the free energy difference is calculated by defining a thermodynamic path between the states and integrating over ensemble-averaged enthalpy changes along the path. Such paths can either be real chemical processes or alchemical processes. An example alchemical process is Kirkwood's coupling parameter method. Derivation. Consider two systems, A and B, with potential energies formula_2 and formula_3. The potential energy in either system can be calculated as an ensemble average over configurations sampled from a molecular dynamics or Monte Carlo simulation with proper Boltzmann weighting. Now consider a new potential energy function defined as: formula_4 Here, formula_5 is defined as a coupling parameter with a value between 0 and 1, and thus the potential energy as a function of formula_5 varies from the energy of system A for formula_6 to that of system B for formula_7. In the canonical ensemble, the partition function of the system can be written as: formula_8 In this notation, formula_9 is the potential energy of state formula_10 in the ensemble with potential energy function formula_11 as defined above. 
The free energy of this system is defined as: formula_12 Taking the derivative of "F" with respect to "λ" shows that it equals the ensemble average of the derivative of the potential energy with respect to "λ": formula_13 The change in free energy between states A and B can thus be computed from the integral of the ensemble-averaged derivatives of the potential energy over the coupling parameter formula_5. In practice, this is performed by defining a potential energy function formula_11, sampling the ensemble of equilibrium configurations at a series of formula_5 values, calculating the ensemble-averaged derivative of formula_11 with respect to formula_5 at each formula_5 value, and finally computing the integral over the ensemble-averaged derivatives. Umbrella sampling is a related free energy method. It adds a bias to the potential energy. In the limit of an infinitely strong bias it is equivalent to thermodynamic integration.
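The procedure just described can be checked on a toy system where everything is known analytically. In the sketch below (the function names and the harmonic-well parameters are illustrative assumptions) the two states are harmonic potentials U_A = k_A x²/2 and U_B = k_B x²/2 with kT = 1, so ⟨dU/dλ⟩ is available in closed form and ΔF = (kT/2) ln(k_B/k_A); in a real application the ensemble average at each λ would come from an MD or MC simulation:

```python
import math

def thermodynamic_integration(mean_dudlambda, n_points=101):
    """Trapezoidal integration of <dU/dlambda> over lambda in [0, 1],
    giving Delta F = F_B - F_A."""
    lams = [i / (n_points - 1) for i in range(n_points)]
    vals = [mean_dudlambda(lam) for lam in lams]
    h = 1.0 / (n_points - 1)
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

# Toy system: U(lambda) = U_A + lambda*(U_B - U_A) with harmonic wells,
# so dU/dlambda = (kB - kA)*x**2/2 and, for the Gaussian ensemble at
# coupling lambda, <x**2> = kT/(kA + lambda*(kB - kA)) exactly.
kA, kB, kT = 1.0, 4.0, 1.0

def mean_dudl(lam):
    return 0.5 * (kB - kA) * kT / (kA + lam * (kB - kA))

delta_f = thermodynamic_integration(mean_dudl)
exact = 0.5 * kT * math.log(kB / kA)  # analytic harmonic-well result
```

With 101 λ points the trapezoidal estimate agrees with the exact ΔF = ln 2 to about 10⁻⁵, confirming the derivation above.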
[ { "math_id": 0, "text": "U_A" }, { "math_id": 1, "text": " U_B " }, { "math_id": 2, "text": "U_A " }, { "math_id": 3, "text": "U_B" }, { "math_id": 4, "text": "U(\\lambda) = U_A + \\lambda(U_B - U_A)" }, { "math_id": 5, "text": "\\lambda" }, { "math_id": 6, "text": "\\lambda = 0" }, { "math_id": 7, "text": "\\lambda = 1" }, { "math_id": 8, "text": "Q(N, V, T, \\lambda) = \\sum_{s} \\exp [-U_s(\\lambda)/k_{B}T]" }, { "math_id": 9, "text": "U_s(\\lambda)" }, { "math_id": 10, "text": "s" }, { "math_id": 11, "text": "U(\\lambda)" }, { "math_id": 12, "text": "F(N,V,T,\\lambda)=-k_{B}T \\ln Q(N,V,T,\\lambda)" }, { "math_id": 13, "text": "\\begin{align}\n\\Delta F(A \\rightarrow B)\n &= \\int_0^1 \\frac{\\partial F(\\lambda)}{\\partial\\lambda} d\\lambda\n\\\\\n &= -\\int_0^1 \\frac{k_{B}T}{Q} \\frac{\\partial Q}{\\partial\\lambda} d\\lambda\n\\\\\n &= \\int_0^1 \\frac{k_{B}T}{Q} \\sum_{s} \\frac{1}{k_{B}T} \\exp[- U_s(\\lambda)/k_{B}T ] \\frac{\\partial U_s(\\lambda)}{\\partial \\lambda} d\\lambda\n\\\\\n &= \\int_0^1 \\left\\langle\\frac{\\partial U(\\lambda)}{\\partial\\lambda}\\right\\rangle_{\\lambda} d\\lambda\n\\\\\n &= \\int_0^1 \\left\\langle U_B(\\lambda) - U_A(\\lambda) \\right\\rangle_{\\lambda} d\\lambda\n\\end{align}\n " } ]
https://en.wikipedia.org/wiki?curid=15127042
15127771
Id (programming language)
Irvine Dataflow (Id) is a general-purpose parallel programming language, started at the University of California at Irvine in 1975 by Arvind and K. P. Gostelow. Arvind continued work with Id at MIT into the 1990s. The major subset of Id is a purely functional programming language with non-strict semantics. Features include: higher-order functions, a Milner-style statically type-checked polymorphic type system with overloading, user-defined types and pattern matching, and prefix and infix operators. It led to the development of pH, a parallel dialect of Haskell. Id programs are fine-grained, implicitly parallel. The MVar synchronisation variable abstraction in Haskell is based on Id's M-structures. Examples. Id supports algebraic datatypes, similar to ML, Haskell, or Miranda: type bool = False | True; Types are inferred by default, but may be annotated with a codice_0 declaration. Type variables use the syntax codice_1, codice_2, etc. typeof id = *0 -&gt; *0; def id x = x; A function which uses an array comprehension to compute the first formula_0 Fibonacci numbers: typeof fib_array = int -&gt; (array int); def fib_array n = {
A = { array (0,n) of
| [0] = 0
| [1] = 1
| [i] = A[i-1] + A[i-2] || i &lt;- 2 to n }
In A };
Note the use of non-strict evaluation in the recursive definition of the array codice_3. Id's lenient evaluation strategy allows cyclic datastructures by default. The following code makes a cyclic list, using the cons operator codice_4. def cycle x = { A = x : A In A }; However, to avoid nonterminating construction of truly infinite structures, explicit delays must be annotated using codice_5: def count_up_from x = x :# count_up_from (x + 1); The pHluid system was a research implementation of the Id programming language, with future plans for a front-end for pH, a parallel dialect of the Haskell programming language, implemented at Digital's Cambridge Research Laboratory. It is freely available for research and non-profit use. It is targeted at standard Unix workstation hardware.
[ { "math_id": 0, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=15127771
15127988
Excess chemical potential
Difference in chemical potential between a given species and an ideal gas In thermodynamics, the excess chemical potential is defined as the difference between the chemical potential of a given species and that of an ideal gas under the same conditions (in particular, at the same pressure, temperature, and composition). The chemical potential of a particle species formula_0 is therefore given by an ideal part and an excess part. formula_1 The chemical potential of a pure fluid can be estimated by the Widom insertion method. Derivation and Measurement. For a system of diameter "L" and volume "V", at constant temperature "T", the classical canonical partition function is formula_2 with "s" a scaled coordinate, and the free energy is given by: formula_3 formula_4 Combining the above equation with the definition of chemical potential, formula_5 and using the fact that the smallest allowed change in the particle number is formula_6, we get the chemical potential of a sufficiently large system from formula_7 wherein the chemical potential of an ideal gas can be evaluated analytically. Now let's focus on the excess chemical potential. Since the potential energy of an ("N"+1)-particle system can be separated into the potential energy of an "N"-particle system and the potential of the excess particle interacting with the system, that is, formula_8 the excess chemical potential can be written as an ensemble average, formula_9 Thus far we have converted the excess chemical potential into an ensemble average, and the integral in the above equation can be sampled by the brute force Monte Carlo method. The calculation of the excess chemical potential is not limited to homogeneous systems, but has also been extended to inhomogeneous systems by the Widom insertion method, or to other ensembles such as NPT and NVE. See also. Apparent molar property References. &lt;templatestyles src="Reflist/styles.css" /&gt;
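The Widom insertion average formula_9 can be sampled directly for a toy model. The sketch below (a 1D hard-rod fluid; all names and parameters are illustrative choices, not from the article) exploits the fact that for hard cores exp(−βΔU) is 1 when a trial insertion overlaps nothing and 0 otherwise:

```python
import math
import random

def widom_excess_mu(n_rods=3, sigma=1.0, box=10.0, kT=1.0,
                    n_configs=400, n_insertions=50, seed=0):
    """Estimate mu_ex = -kT * ln <exp(-beta*dU)> for 1D hard rods by
    Widom test-particle insertion."""
    rng = random.Random(seed)

    def sample_config():
        # Rejection sampling of an equilibrium hard-rod configuration:
        # uniform placements, kept only when no two centres are closer
        # than sigma (the uniform measure on non-overlapping states).
        while True:
            xs = sorted(rng.uniform(0.0, box) for _ in range(n_rods))
            if all(b - a >= sigma for a, b in zip(xs, xs[1:])):
                return xs

    total, trials = 0.0, 0
    for _ in range(n_configs):
        xs = sample_config()
        for _ in range(n_insertions):
            x_new = rng.uniform(0.0, box)
            ok = all(abs(x_new - x) >= sigma for x in xs)
            total += 1.0 if ok else 0.0   # exp(-beta*dU) for hard cores
            trials += 1
    mean_boltzmann = total / trials
    return -kT * math.log(mean_boltzmann), mean_boltzmann
```

With the default parameters the mean Boltzmann factor is close to its exact value (L−Nσ)^(N+1)/(L·(L−(N−1)σ)^N) ≈ 0.47, giving a positive excess chemical potential as expected for repulsive cores.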
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": "\\mu_i=\\mu_i^\\text{ideal}+\\mu_i^\\text{excess}" }, { "math_id": 2, "text": "Q(N,V,T)=\\frac{V^{N}}{\\Lambda^{dN}N!}\\int_{0}^{1}\\ldots\\int_{0}^{1}ds^{N}\\exp[-\\beta U(s^{N};L)]" }, { "math_id": 3, "text": "F(N,V,T)= -k_{B}T\\ln Q=-k_{B}T\\ln\\left(\\frac{V^{N}}{\\Lambda^{dN}N!}\\right)-k_{B}T \\ln{\\int ds^{N}\\exp[-\\beta U(s^{N};L)]}=" }, { "math_id": 4, "text": "=F_{id}(N,V,T)+F_{ex}(N,V,T)" }, { "math_id": 5, "text": "\\mu_{a}= \\left(\\frac{\\partial F}{\\partial N_{a}}\\right)_{VT}=\\left(\\frac{\\partial G}{\\partial N_{a}}\\right)_{PT}," }, { "math_id": 6, "text": "\\Delta N=1" }, { "math_id": 7, "text": "\\mu= \\frac{-k_{B}T\\ln(Q_{N+1}/Q_{N})}{\\Delta N}\\overset{\\Delta N=1}{=}-k_{B}T\\ln\\left(\\frac{V/\\Lambda^{d}}{N+1}\\right) - k_{B}T \\ln{\\frac{\\int ds^{N+1}\\exp[-\\beta U(s^{N+1})]}{\\int ds^{N}\\exp[-\\beta U(s^{N})]}}=\\mu_{id}(\\rho) + \\mu_{ex}" }, { "math_id": 8, "text": "\\Delta U\\equiv U(s^{N+1}) - U(s^{N})" }, { "math_id": 9, "text": "\\mu_{ex}= -k_{B}T \\ln \\int ds_{N+1} \\langle \\exp(-\\beta\\Delta U)\\rangle_{N}." } ]
https://en.wikipedia.org/wiki?curid=15127988
15129272
Window operator
Operator in modal logic In modal logic, the window operator formula_0 is a modal operator with the following semantic definition: formula_1 for formula_2 a Kripke model and formula_3. Informally, it says that "w" "sees" every φ-world (or every φ-world is seen by "w"). This operator is not definable in the basic modal logic (i.e. some propositional non-modal language together with a single primitive "necessity" (universal) operator, often denoted by 'formula_4', or its existential dual, often denoted by 'formula_5'). Notice that its truth condition is the converse of the truth condition for the standard "necessity" operator. For references to some of its applications, see the References section.
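For a finite Kripke model the semantic clause can be evaluated directly. This sketch (function names and the example model are illustrative) implements the truth condition above alongside the standard necessity operator, and the example model shows the two conditions coming apart:

```python
def window(worlds, R, phi, w):
    """M,w |= triangle(phi) iff every phi-world u satisfies R(w, u).

    worlds : iterable of worlds
    R      : set of accessibility pairs (w, u)
    phi    : set of worlds where phi holds
    """
    return all((w, u) in R for u in worlds if u in phi)

def box(worlds, R, phi, w):
    """Standard necessity: every world seen from w is a phi-world."""
    return all(u in phi for u in worlds if (w, u) in R)
```

In the model W = {1, 2, 3}, R = {(1,2), (1,3), (2,2)} with φ true at {2, 3}: world 1 sees every φ-world, so △φ holds there, while world 2 fails to see the φ-world 3, so △φ fails at 2 even though □φ holds at 2.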
[ { "math_id": 0, "text": "\\triangle" }, { "math_id": 1, "text": "M,w\\models\\triangle\\phi \\iff \\forall u, M,u\\models\\phi\\Rightarrow Rwu" }, { "math_id": 2, "text": "M=(W,R,f)" }, { "math_id": 3, "text": "w,u\\in W" }, { "math_id": 4, "text": "\\square" }, { "math_id": 5, "text": "\\Diamond" } ]
https://en.wikipedia.org/wiki?curid=15129272
1512988
Trend-stationary process
Stochastic process in time series analysis In the statistical analysis of time series, a trend-stationary process is a stochastic process from which an underlying trend (function solely of time) can be removed, leaving a stationary process. The trend does not have to be linear. Conversely, if the process requires differencing to be made stationary, then it is called difference stationary and possesses one or more unit roots. Those two concepts may sometimes be confused, but while they share many properties, they are different in many aspects. It is possible for a time series to be non-stationary, yet have no unit root and be trend-stationary. In both unit root and trend-stationary processes, the mean can be growing or decreasing over time; however, in the presence of a shock, trend-stationary processes are mean-reverting (i.e. transitory, the time series will converge again towards the growing mean, which was not affected by the shock) while unit-root processes have a permanent impact on the mean (i.e. no convergence over time). Formal definition. A process {"Y"} is said to be trend-stationary if formula_0 where "t" is time, "f" is any function mapping from the reals to the reals, and {"e"} is a stationary process. The value formula_1 is said to be the trend value of the process at time "t". Simplest example: stationarity around a linear trend. Suppose the variable "Y" evolves according to formula_2 where "t" is time and "e""t" is the error term, which is hypothesized to be white noise or more generally to have been generated by any stationary process. Then one can use linear regression to obtain an estimate formula_3 of the true underlying trend slope formula_4 and an estimate formula_5 of the underlying intercept term "b"; if the estimate formula_3 is significantly different from zero, this is sufficient to show with high confidence that the variable "Y" is non-stationary. 
The residuals from this regression are given by formula_6 If these estimated residuals can be statistically shown to be stationary (more precisely, if one can reject the hypothesis that the true underlying errors are non-stationary), then the residuals are referred to as the detrended data, and the original series {"Y""t"} is said to be trend-stationary even though it is not stationary. Stationarity around other types of trend. Exponential growth trend. Many economic time series are characterized by exponential growth. For example, suppose that one hypothesizes that gross domestic product is characterized by stationary deviations from a trend involving a constant growth rate. Then it could be modeled as formula_7 with Ut being hypothesized to be a stationary error process. To estimate the parameters formula_4 and "B", one first takes the natural logarithm (ln) of both sides of this equation: formula_8 This log-linear equation is in the same form as the previous linear trend equation and can be detrended in the same way, giving the estimated formula_9 as the detrended value of formula_10, and hence the implied formula_11 as the detrended value of formula_12, assuming one can reject the hypothesis that formula_9 is non-stationary. Quadratic trend. Trends do not have to be linear or log-linear. For example, a variable could have a quadratic trend: formula_13 This can be regressed linearly in the coefficients using "t" and "t"2 as regressors; again, if the residuals are shown to be stationary then they are the detrended values of formula_14. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
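The linear-trend case above is easy to reproduce numerically. This sketch (function names, the seed, and the parameter values a = 0.5, b = 2 are illustrative choices) simulates a trend-stationary series with white-noise errors, fits the trend by OLS, and recovers the detrended residuals:

```python
import random

def detrend_linear(y):
    """OLS fit of y_t = a*t + b + e_t; returns (a_hat, b_hat, residuals).

    If the residuals can be shown to be stationary, the series is
    trend-stationary and the residuals are the detrended data.
    """
    n = len(y)
    t = list(range(n))
    t_bar = sum(t) / n
    y_bar = sum(y) / n
    sxy = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
    sxx = sum((ti - t_bar) ** 2 for ti in t)
    a_hat = sxy / sxx
    b_hat = y_bar - a_hat * t_bar
    resid = [yi - a_hat * ti - b_hat for ti, yi in zip(t, y)]
    return a_hat, b_hat, resid

# Simulate a trend-stationary series: a = 0.5, b = 2, white-noise e_t
rng = random.Random(42)
y = [0.5 * t + 2.0 + rng.gauss(0.0, 1.0) for t in range(500)]
a_hat, b_hat, resid = detrend_linear(y)
```

For the log-linear (exponential growth) case, the same function is applied to ln(y) instead of y, exactly as described below.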
[ { "math_id": 0, "text": "Y_t = f(t) + e_t," }, { "math_id": 1, "text": "f(t)" }, { "math_id": 2, "text": "Y_t = a \\cdot t + b + e_t" }, { "math_id": 3, "text": "\\hat{a}" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "\\hat{b}" }, { "math_id": 6, "text": "\\hat{e}_t = Y_t - \\hat{a} \\cdot t - \\hat{b}." }, { "math_id": 7, "text": "\\text{GDP}_t = Be^{at}U_t" }, { "math_id": 8, "text": " \\ln (\\text{GDP}_t) = \\ln B + at + \\ln (U_t)." }, { "math_id": 9, "text": "(\\ln U)_t" }, { "math_id": 10, "text": " (\\ln \\text{GDP})_t " }, { "math_id": 11, "text": "U_t" }, { "math_id": 12, "text": "\\text{GDP}_t" }, { "math_id": 13, "text": "Y_t = a \\cdot t + c \\cdot t^2 + b + e_t." }, { "math_id": 14, "text": "Y_t" } ]
https://en.wikipedia.org/wiki?curid=1512988
15130142
Five-term exact sequence
A sequence of terms related to the first step of a spectral sequence In mathematics, the five-term exact sequence or exact sequence of low-degree terms is a sequence of terms related to the first step of a spectral sequence. More precisely, let formula_0 be a first quadrant spectral sequence, meaning that formula_1 vanishes except when "p" and "q" are both non-negative. Then there is an exact sequence 0 → "E"21,0 → "H" 1("A") → "E"20,1 → "E"22,0 → "H" 2("A"). Here, the map formula_2 is the differential of the formula_3-term of the spectral sequence. Example. The inflation-restriction exact sequence 0 → "H" 1("G"/"N", "A""N") → "H" 1("G", "A") → "H" 1("N", "A")"G"/"N" → "H" 2("G"/"N", "A""N") → "H" 2("G", "A") in group cohomology arises as the five-term exact sequence associated to the Lyndon–Hochschild–Serre spectral sequence "H" "p"("G"/"N", "H" "q"("N", "A")) ⇒ "H" "p"+"q"("G", "A") where "G" is a profinite group, "N" is a closed normal subgroup, and "A" is a discrete "G"-module.
However, the differential landing at "E"20,1 begins at "E"2−2,2, which is zero, and therefore "E"30,1 is the kernel of the differential "E"20,1 → "E"22,0. At the third page, the (0, 1) term of the spectral sequence has converged, because all the differentials into and out of "E"r0,1 either begin or end outside the first quadrant when "r" ≥ 3. Consequently "E"30,1 is the degree zero graded piece of "H" 1("A"). This graded piece is the quotient of "H" 1("A") by the first subgroup in the filtration, and hence it is the cokernel of the edge map from "E"21,0. This yields a short exact sequence 0 → "E"21,0 → "H" 1("A") → "E"30,1 → 0. Because "E"30,1 is the kernel of the differential "E"20,1 → "E"22,0, the last term in the short exact sequence can be replaced with the differential. This produces a four-term exact sequence. The map "H" 1("A") → "E"20,1 is also called an edge map. The outgoing differential of "E"22,0 is zero, so "E"32,0 is the cokernel of the differential "E"20,1 → "E"22,0. The incoming and outgoing differentials of "E"r2,0 are zero if "r" ≥ 3, again because the spectral sequence lies in the first quadrant, and hence the spectral sequence has converged. Consequently "E"32,0 is isomorphic to the degree two graded piece of "H" 2("A"). In particular, it is a subgroup of "H" 2("A"). The composite "E"22,0 → "E"32,0 → "H"2("A"), which is another edge map, therefore has kernel equal to the differential landing at "E"22,0. This completes the construction of the sequence. Variations. The five-term exact sequence can be extended at the cost of making one of the terms less explicit. The seven-term exact sequence is 0 → "E"21,0 → "H" 1("A") → "E"20,1 → "E"22,0 → Ker("H" 2("A") → "E"20,2) → "E"21,1 → "E"23,0. This sequence does not immediately extend with a map to "H"3("A"). While there is an edge map "E"23,0 → "H"3("A"), its kernel is not the previous term in the seven-term exact sequence. 
For spectral sequences whose first interesting page is "E"1, there is a three-term exact sequence analogous to the five-term exact sequence: formula_4 Similarly for a homological spectral sequence formula_5 we get an exact sequence: formula_6 In both homological and cohomological case there are also low degree exact sequences for spectral sequences in the third quadrant. When additional terms of the spectral sequence are known to vanish, the exact sequences can sometimes be extended further. For example, the long exact sequence associated to a short exact sequence of complexes can be derived in this manner.
[ { "math_id": 0, "text": "E_2^{p,q} \\Rightarrow H^n(A)" }, { "math_id": 1, "text": "E_2^{p,q}" }, { "math_id": 2, "text": "E_2^{0,1} \\to E_2^{2,0}" }, { "math_id": 3, "text": "E_2" }, { "math_id": 4, "text": "0 \\to H^0(A) \\to E_1^{0,0} \\to E_1^{0,1}." }, { "math_id": 5, "text": "E_{p,q}^2 \\Rightarrow H_n(A)" }, { "math_id": 6, "text": "H_2(A)\\to E^2_{2,0}\\xrightarrow{d_2}E^2_{0,1}\\to H_1(A)\\to E^2_{1,0}\\to 0" } ]
https://en.wikipedia.org/wiki?curid=15130142
1513065
Palladium-hydrogen electrode
The palladium-hydrogen electrode (abbreviation: Pd/H2) is one of the common reference electrodes used in electrochemical studies. Most of its characteristics are similar to those of the standard hydrogen electrode (with platinum). But palladium has one significant feature: the capability to absorb (dissolve into itself) molecular hydrogen. Electrode operation. Two phases can coexist in palladium when hydrogen is absorbed: the dilute α-phase and the hydrogen-rich β-phase (palladium hydride). The electrochemical behaviour of a palladium electrode in equilibrium with H3O+ ions in solution parallels the behaviour of palladium with molecular hydrogen formula_0 Thus the equilibrium is controlled in one case by the partial pressure or fugacity of molecular hydrogen and in the other case by the activity of H+ ions in solution. formula_1 When palladium is electrochemically charged with hydrogen, the existence of two phases is manifested by a constant potential of approximately +50 mV compared to the reversible hydrogen electrode. This potential is independent of the amount of hydrogen absorbed over a wide range. This property has been utilized in the construction of a palladium/hydrogen reference electrode. The main feature of such an electrode is the absence of the continuous bubbling of molecular hydrogen through the solution that is absolutely necessary for the standard hydrogen electrode.
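The Nernst relation formula_1 can be evaluated numerically. In this sketch (function and parameter names are illustrative) the standard pressure p0 is 1 bar, and the ~+50 mV two-phase plateau of the Pd/H electrode is passed in as a given offset rather than derived:

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def electrode_potential(a_h_plus, p_h2_bar, T=298.15, E0=0.0):
    """Nernst potential E = E0 + (RT/F) * ln(a_H+ / (p_H2/p0)**0.5),
    with p0 = 1 bar.  E0 is any fixed offset versus the reference
    (e.g. the ~+0.05 V plateau of the two-phase Pd/H electrode)."""
    return E0 + (R * T / F) * math.log(a_h_plus / p_h2_bar ** 0.5)
```

At 298.15 K this reproduces the familiar ~59 mV shift per decade of H+ activity, and halving the hydrogen pressure raises E by (RT/2F) ln 2 ≈ 8.9 mV.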
[ { "math_id": 0, "text": "\\tfrac{1}{2} \\mathrm{H}_2 = \\mathrm{H}_{ads} = \\mathrm{H}_{abs}" }, { "math_id": 1, "text": "E=E^0 + {RT \\over F}\\ln {a_{\\mathrm{H}^+} \\over (\\frac{p_{\\mathrm{H}2}}{p^0})^{1/2}}" } ]
https://en.wikipedia.org/wiki?curid=1513065
1513195
Fast Kalman filter
The fast Kalman filter (FKF), devised by Antti Lange (born 1941), is an extension of the Helmert–Wolf blocking (HWB) method from geodesy to safety-critical real-time applications of Kalman filtering (KF) such as GNSS navigation up to the centimeter-level of accuracy and satellite imaging of the Earth including atmospheric tomography. Motivation. Kalman filters are an important filtering technique for building fault-tolerance into a wide range of systems, including real-time imaging. The ordinary Kalman filter is an optimal filtering algorithm for linear systems. However, an optimal Kalman filter is not stable (i.e. reliable) if Kalman's observability and controllability conditions are not continuously satisfied. These conditions are very challenging to maintain for any larger system. This means that even optimal Kalman filters may start diverging towards false solutions. Fortunately, the stability of an optimal Kalman filter can be controlled by monitoring its error variances if only these can be reliably estimated (e.g. by MINQUE). Their precise computation is, however, much more demanding than the optimal Kalman filtering itself. The FKF computing method often provides the required speed-up also in this respect. Optimum calibration. Calibration parameters are a typical example of those state parameters that may create serious observability problems if a narrow window of data (i.e. too few measurements) is continuously used by a Kalman filter. Observing instruments onboard orbiting satellites gives an example of optimal Kalman filtering where their calibration is done indirectly on ground. There may also exist other state parameters that are hardly or not at all observable if too small samples of data are processed at a time by any sort of a Kalman filter. Inverse problem. The computing load of the inverse problem of an ordinary Kalman recursion is roughly proportional to the cube of the number of the measurements processed simultaneously. 
This number can always be set to 1 by processing each scalar measurement independently and (if necessary) performing a simple pre-filtering algorithm to de-correlate these measurements. However, for any large and complex system this pre-filtering may require HWB computing. Continued use of too narrow a window of input data weakens observability of the calibration parameters and, in the long run, this may lead to serious controllability problems totally unacceptable in safety-critical applications. Even when many measurements are processed simultaneously, it is not unusual that the linearized equation system becomes sparse, because some measurements turn out to be independent of some state or calibration parameters. In problems of Satellite Geodesy, the computing load of the HWB (and FKF) method is roughly proportional to the square of the total number of state and calibration parameters only, and not to the number of measurements, which may run into the billions. Reliable solution. Reliable operational Kalman filtering requires continuous fusion of data in real time. Its optimality depends essentially on the use of exact variances and covariances between all measurements and the estimated state and calibration parameters. This large error covariance matrix is obtained by matrix inversion from the respective system of Normal Equations. Its coefficient matrix is usually sparse, and the exact solution of all the estimated parameters can be computed by using the HWB (and FKF) method. The optimal solution may also be obtained by Gauss elimination using other sparse-matrix techniques or some iterative methods based e.g. on Variational Calculus. However, these latter methods may solve the large matrix of all the error variances and covariances only approximately, and the data fusion would not be performed in a strictly optimal fashion. 
Consequently, the long-term stability of Kalman filtering becomes uncertain even if Kalman's observability and controllability conditions were permanently satisfied. Description. The Fast Kalman filter applies only to systems with sparse matrices, since HWB is an inversion method to solve sparse linear equations (Wolf, 1978). The sparse coefficient matrix to be inverted may often have either a bordered block- or band-diagonal (BBD) structure. If it is band-diagonal it can be transformed into a block-diagonal form e.g. by means of a generalized Canonical Correlation Analysis (gCCA). Such a large matrix can thus be most effectively inverted in a blockwise manner by using the following analytic inversion formula: formula_0 of Frobenius where formula_1 a large block- or band-diagonal (BD) matrix to be easily inverted, and, formula_2 a much smaller matrix called the Schur complement of formula_3. This is the FKF method that may make it computationally possible to estimate a much larger number of state and calibration parameters than an ordinary Kalman recursion can do. Their operational accuracies may also be reliably estimated from the theory of Minimum-Norm Quadratic Unbiased Estimation (MINQUE) of C. R. Rao and used for controlling the stability of this optimal fast Kalman filtering. Applications. The FKF method extends the very high accuracies of Satellite Geodesy to Virtual Reference Station (VRS) Real Time Kinematic (RTK) surveying, mobile positioning and ultra-reliable navigation. First important applications will be real-time optimum calibration of global observing systems in Meteorology, Geophysics, Astronomy etc. For example, a Numerical Weather Prediction (NWP) system can now forecast observations with confidence intervals and their operational quality control can thus be improved. 
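The Frobenius formula above is a direct recipe: only A and the Schur complement S = D − CA⁻¹B are inverted. The sketch below (a pure-Python transcription with 2×2 blocks; all helper names are illustrative, and a real HWB/FKF implementation would exploit the block-diagonal structure of A instead of inverting blocks densely) assembles the full inverse from the four block expressions:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_neg(A):
    return [[-x for x in row] for row in A]

def inv2(M):
    # direct inverse of a 2x2 block; in HWB/FKF the A block is block-
    # or band-diagonal, so its inverse is cheap to obtain
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def block_inverse(A, B, C, D, inv=inv2):
    """Frobenius formula: inverse of [[A, B], [C, D]] via the Schur
    complement S = D - C A^{-1} B; only A and S are inverted."""
    Ai = inv(A)
    Si = inv(mat_sub(D, mat_mul(C, mat_mul(Ai, B))))
    AiB, CAi = mat_mul(Ai, B), mat_mul(C, Ai)
    top_left = mat_add(Ai, mat_mul(AiB, mat_mul(Si, CAi)))
    top_right = mat_neg(mat_mul(AiB, Si))
    bottom_left = mat_neg(mat_mul(Si, CAi))
    return [tl + tr for tl, tr in zip(top_left, top_right)] + \
           [bl + br for bl, br in zip(bottom_left, Si)]
```

Multiplying the assembled block matrix by the returned inverse recovers the identity, which is a quick way to check the transcription.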
A sudden increase of uncertainty in predicting observations would indicate that important observations are missing (observability problem) or an unpredictable change of weather is taking place (controllability problem). Remote sensing and imaging from satellites are partly based on forecasted information. Controlling the stability of the feedback between these forecasts and the satellite images requires a sensor fusion technique that is both fast and robust, which the FKF fulfills. The computational advantage of FKF is marginal for applications using only small amounts of data in real time. Therefore, improved built-in calibration and data communication infrastructures need to be developed first and introduced to public use before personal gadgets and machine-to-machine devices can make the most of FKF. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
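The blockwise inversion at the core of the FKF method (the Frobenius formula given in the Description above) can be illustrated with a small numerical sketch. This is not operational FKF code; the matrix sizes and values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small partitioned matrix [[A, B], [C, D]]. In FKF, A stands for a very
# large block- or band-diagonal matrix that is cheap to invert, and
# (D - C A^-1 B) is the much smaller Schur complement of A.
A = np.diag(rng.uniform(1.0, 2.0, size=4))   # toy stand-in for the easy BD block
B = rng.normal(size=(4, 2))
C = rng.normal(size=(2, 4))
D = 10.0 * np.eye(2)

A_inv = np.diag(1.0 / np.diag(A))            # trivial inversion of the BD block
S = D - C @ A_inv @ B                        # Schur complement of A
S_inv = np.linalg.inv(S)                     # only this small matrix is inverted densely

# Assemble the full inverse blockwise, following the Frobenius formula.
inverse = np.block([
    [A_inv + A_inv @ B @ S_inv @ C @ A_inv, -A_inv @ B @ S_inv],
    [-S_inv @ C @ A_inv,                     S_inv],
])

# Check against direct inversion of the assembled matrix.
M = np.block([[A, B], [C, D]])
assert np.allclose(inverse, np.linalg.inv(M))
```

In a real FKF application the block-diagonal part would be inverted block by block (and in parallel), so that only the small Schur complement ever requires a dense inversion.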
[ { "math_id": 0, "text": "\\begin{bmatrix} A & B \\\\ C & D \\end{bmatrix}^{-1} = \\begin{bmatrix} A^{-1}+A^{-1}B(D-CA^{-1}B)^{-1}CA^{-1} & -A^{-1}B(D-CA^{-1}B)^{-1} \\\\ -(D-CA^{-1}B)^{-1}CA^{-1} & (D-CA^{-1}B)^{-1} \\end{bmatrix}" }, { "math_id": 1, "text": " A = " }, { "math_id": 2, "text": " (D-CA^{-1}B) = " }, { "math_id": 3, "text": "A" } ]
https://en.wikipedia.org/wiki?curid=1513195
15132999
Gy's sampling theory
Gy's sampling theory is a theory about the sampling of materials, developed by Pierre Gy from the 1950s to the early 2000s in a series of articles and books. The abbreviation "TOS" (Theory of Sampling) is also used to denote Gy's sampling theory. Gy's sampling theory uses a model in which the sample taking is represented by independent Bernoulli trials for every particle in the parent population from which the sample is drawn. The two possible outcomes of each Bernoulli trial are: (1) the particle is selected and (2) the particle is not selected. The probability of selecting a particle may be different during each Bernoulli trial. The model used by Gy is mathematically equivalent to Poisson sampling. Using this model, the following equation for the variance of the sampling error in the mass concentration in a sample was derived by Gy: formula_0 in which "V" is the variance of the sampling error, "N" is the number of particles in the population (before the sample was taken), "q""i" is the probability of including the "i"th particle of the population in the sample (i.e. the first-order inclusion probability of the "i"th particle), "m""i" is the mass of the "i"th particle of the population and "a""i" is the mass concentration of the property of interest in the "i"th particle of the population. It is noted that the above equation for the variance of the sampling error is an approximation based on a linearization of the mass concentration in a sample. In the theory of Gy, correct sampling is defined as a sampling scenario in which all particles have the same probability of being included in the sample. This implies that "q""i" no longer depends on "i", and can therefore be replaced by the symbol "q". Gy's equation for the variance of the sampling error becomes: formula_1 where "a"batch is the concentration of the property of interest in the population from which the sample is to be drawn and "M"batch is the mass of the population from which the sample is to be drawn.
It has been noted that a similar equation had already been derived in 1935 by Kassel and Guy. Two books covering the theory and practice of sampling are available; one is the Third Edition of a high-level monograph and the other an introductory text. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
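As a numerical check (using a made-up particle population, not data from the sources above), the correct-sampling variance formula can be compared against a direct Monte Carlo simulation of the underlying Poisson-sampling model:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical particle population: masses m_i and concentrations a_i.
N = 200
m = rng.uniform(0.5, 2.0, N)
a = rng.uniform(0.0, 0.1, N)
q = 0.3                                   # equal inclusion probability: "correct sampling"

M_batch = m.sum()
a_batch = (a * m).sum() / M_batch         # mass concentration of the whole batch

# Gy's (linearized) variance of the sampling error under correct sampling.
V_gy = (1 - q) / (q * M_batch**2) * np.sum(m**2 * (a - a_batch) ** 2)

# Monte Carlo: Poisson sampling, i.e. one independent Bernoulli trial per particle.
trials = 20_000
errors = []
for _ in range(trials):
    keep = rng.random(N) < q
    if m[keep].sum() == 0:
        continue
    a_sample = (a[keep] * m[keep]).sum() / m[keep].sum()
    errors.append(a_sample - a_batch)
V_mc = np.var(errors)

# The two agree to within Monte Carlo noise (the formula is a linearization).
assert abs(V_mc - V_gy) < 0.3 * V_gy
```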
[ { "math_id": 0, "text": "V = \\frac{1}{(\\sum_{i=1}^N q_i m_i)^2} \\sum_{i=1}^N q_i(1-q_i) m_{i}^{2} \\left(a_i - \\frac{\\sum_{j=1}^N q_j a_j m_j}{\\sum_{j=1}^N q_j m_j}\\right)^2 ." }, { "math_id": 1, "text": "V = \\frac{1-q}{q M_\\text{batch}^2} \\sum_{i=1}^N m_{i}^{2} \\left(a_i - a_\\text{batch} \\right)^2 ." } ]
https://en.wikipedia.org/wiki?curid=15132999
15135484
Burton Wold Wind Farm
Wind farm near Burton Latimer in Northamptonshire, UK Burton Wold Wind Farm is a wind farm located near Burton Latimer in the English county of Northamptonshire, UK. The farm was developed by Your Energy Ltd, is owned by Mistral Windfarms and operated by Engineering Renewables Ltd. E.ON UK is buying the electricity output of the project under a long-term power purchase agreement. The farm is spread over three hectares. It has an installed capacity of 20 MW and generates on average around 40,000,000 units (kilowatt-hours) of electricity annually. Burton Wold was the first wind farm to be erected in Northamptonshire. Construction work on the site began in December 2005, and the wind farm became operational in May 2006. The wind farm received a mention in the 2008 Civic Trust Awards and was shortlisted in the MKSM Excellence Awards in 2009. Output. Burton Wold Wind Farm comprises 10 turbines with a total nameplate capacity of 20 megawatts. These 10 turbines have the ability to produce electricity for an average of 10,000 homes each year, the equivalent of 25% of all the homes in the borough of Kettering. The E70-E4 turbines were supplied by Enercon GmbH of Germany, one of the leading manufacturers of wind turbines in the world. Each turbine is 64 metres tall to the hub, and the rotor blades have a diameter of 71 metres, giving the turbines an overall height of 99.5 metres from the ground to the vertical tip of a blade. Figures on the Ofgem website show that in 2008 (a leap year) the wind farm produced 43,416 megawatt-hours of electricity, equivalent to the average needs of 9,237 domestic households. This corresponds to a capacity factor of: formula_0 Funding. The wind farm will contribute £280,000 to a community fund during its 25-year life, which is directed towards energy efficiency and education projects within Burton Latimer. The fund is administered by the Kettering Borough Council.
The fund has paid for solar hot water heaters at a local sheltered housing development, sun pipes at a local school, and contributed £10,000 towards energy efficient renovations of the Guide Hall. The fund is cited as a case study in two reports produced by the Renewables Advisory Board. In 2008 Kettering Borough Council approved an application for a further seven turbines, submitted by Burton Wold Wind Farm Extension Limited, a company owned by the Beaty family. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
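The capacity-factor calculation quoted in the Output section can be reproduced in a few lines:

```python
# Capacity factor = actual energy generated / energy at continuous full output.
# 2008 was a leap year (366 days); the farm's nameplate capacity is 20 MW.
energy_mwh = 43_416
hours_in_2008 = 366 * 24          # 8,784 hours
nameplate_mw = 20

capacity_factor = energy_mwh / (hours_in_2008 * nameplate_mw)
print(f"{capacity_factor:.4f}")   # 0.2471, i.e. roughly 25%
```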
[ { "math_id": 0, "text": "\\frac{43,416 \\mbox{MWh}}{(366 \\mbox{days}) \\times (24 \\mbox{hours/day}) \\times (20 \\mbox{MW})}=0.2471 \\approx{25\\%}" } ]
https://en.wikipedia.org/wiki?curid=15135484
15143012
Overlapping distribution method
The overlapping distribution method was introduced by Charles H. Bennett for estimating the chemical potential. Theory. For two "N"-particle systems 0 and 1 with partition functions formula_0 and formula_1, it follows from formula_2 that the thermodynamic free energy difference is formula_3 For every configuration visited during a sampling of system 1 we can compute the potential energy "U" as a function of the configuration, and the potential energy difference is formula_4 Now construct a probability density of the potential energy difference from the above equation: formula_5 where formula_6 is the configurational part of a partition function: formula_7 formula_8 Since formula_9 it follows that formula_10 Now define the two functions formula_11 so that formula_12 and formula_13 can be obtained by fitting formula_14 and formula_15. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
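A minimal numerical sketch of the method, using an artificial Gaussian model for the two distributions rather than a real molecular simulation (the means, widths, and sample counts below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0

# Toy model: p0(dU) is N(mu0, 1). The consistency relation
# ln p1 = beta*(dF - dU) + ln p0 then forces p1 = N(mu0 - beta, 1)
# and dF = mu0 - beta/2 (values specific to this Gaussian illustration).
mu0 = 2.0
dF_exact = mu0 - beta / 2.0                  # = 1.5 here

dU0 = rng.normal(mu0, 1.0, 200_000)          # dU sampled in system 0
dU1 = rng.normal(mu0 - beta, 1.0, 200_000)   # dU sampled in system 1

# Histograms of dU on a common grid restricted to the overlap region.
bins = np.linspace(0.0, 3.0, 31)
centers = 0.5 * (bins[:-1] + bins[1:])
p0, _ = np.histogram(dU0, bins=bins, density=True)
p1, _ = np.histogram(dU1, bins=bins, density=True)

mask = (p0 > 0) & (p1 > 0)
f0 = np.log(p0[mask]) - beta * centers[mask] / 2.0
f1 = np.log(p1[mask]) + beta * centers[mask] / 2.0

# f1 - f0 should be flat and equal to beta*dF; average over the overlap.
dF_est = np.mean(f1 - f0) / beta
assert abs(dF_est - dF_exact) < 0.05
```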
[ { "math_id": 0, "text": "Q_{0}" }, { "math_id": 1, "text": "Q_{1}" }, { "math_id": 2, "text": " F(N,V,T) = - k_{B}T \\ln Q " }, { "math_id": 3, "text": "\\Delta F = -k_{B}T \\ln (Q_{1}/Q_{0}) = - k_{B} T \\ln (\\frac{\\int ds^{N}\\exp[-\\beta U_{1}(s^{N})]}{\\int ds^{N}\\exp[-\\beta U_{0}(s^{N})]})" }, { "math_id": 4, "text": "\\Delta U = U_{1}(s^{N}) - U_{0}(s^{N})" }, { "math_id": 5, "text": "p_{1}(\\Delta U) = \\frac{\\int ds^{N}\\exp(-\\beta U_{1})\\delta(U_{1}-U_{0}-\\Delta U)}{Q_{1}}\n" }, { "math_id": 6, "text": "p_{1}" }, { "math_id": 7, "text": "\np_{1}(\\Delta U) = \\frac{\\int ds^{N}\\exp(-\\beta U_{1})\\delta(U_{1}-U_{0}-\\Delta U)}{Q_{1}} = \\frac{\\int ds^{N}\\exp[-\\beta(U_{0}+\\Delta U)]\\delta(U_{1}-U_{0}-\\Delta U)}{Q_{1}}" }, { "math_id": 8, "text": "= \\frac{Q_{0}}{Q_{1}} \\exp (-\\beta \\Delta U) \\frac{\\int ds^{N}\\exp(-\\beta U_{0})\\delta(U_{1}-U_{0}-\\Delta U)}{Q_{0}} = \\frac{Q_{0}}{Q_{1}} \\exp (- \\beta \\Delta U) p_{0}(\\Delta U)\n" }, { "math_id": 9, "text": "\\Delta F = -k_{B}T \\ln (Q_{1}/Q_{0})" }, { "math_id": 10, "text": "\\ln p_{1}(\\Delta U) = \\beta(\\Delta F -\\Delta U) + \\ln p_{0}(\\Delta U)" }, { "math_id": 11, "text": "f_{0}(\\Delta U) = \\ln p_{0}(\\Delta U) - \\frac{\\beta\\Delta U}{2}\n\n\n\nf_{1}(\\Delta U) = \\ln p_{1}(\\Delta U) + \\frac{\\beta\\Delta U}{2}\n" }, { "math_id": 12, "text": "f_{1}(\\Delta U) = f_{0}(\\Delta U) + \\beta\\Delta F" }, { "math_id": 13, "text": " \\Delta F" }, { "math_id": 14, "text": "f_{1}" }, { "math_id": 15, "text": "f_{0}" } ]
https://en.wikipedia.org/wiki?curid=15143012
1514405
Instrumental variables estimation
Technique in statistics In statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to every unit in a randomized experiment. Intuitively, IVs are used when an explanatory variable of interest is correlated with the error term (endogenous), in which case ordinary least squares and ANOVA give biased results. A valid instrument induces changes in the explanatory variable (is correlated with the endogenous variable) but has no independent effect on the dependent variable and is not correlated with the error term, allowing a researcher to uncover the causal effect of the explanatory variable on the dependent variable. Instrumental variable methods allow for consistent estimation when the explanatory variables (covariates) are correlated with the error terms in a regression model. Such correlation may occur when changes in the dependent variable change the value of at least one of the covariates ("reverse" causation), when there are omitted variables that affect both the dependent and explanatory variables, or when the covariates are subject to measurement error. Explanatory variables that suffer from one or more of these issues in the context of a regression are sometimes referred to as endogenous. In this situation, ordinary least squares produces biased and inconsistent estimates. However, if an "instrument" is available, consistent estimates may still be obtained. An instrument is a variable that does not itself belong in the explanatory equation but is correlated with the endogenous explanatory variables, conditionally on the value of other covariates. In linear models, there are two main requirements for using IVs: the instrument must be correlated with the endogenous explanatory variables, conditionally on the other covariates, and the instrument cannot be correlated with the error term in the explanatory equation, conditionally on the other covariates. Example. Informally, in attempting to estimate the causal effect of some variable "X" ("covariate" or "explanatory variable") on another "Y" ("dependent variable"), an "instrument" is a third variable "Z" which affects "Y" only through its effect on "X". For example, suppose a researcher wishes to estimate the causal effect of smoking ("X") on general health ("Y").
Correlation between smoking and health does not imply that smoking causes poor health because other variables, such as depression, may affect both health and smoking, or because health may affect smoking. It is not possible to conduct controlled experiments on smoking status in the general population. The researcher may attempt to estimate the causal effect of smoking on health from observational data by using the tax rate for tobacco products ("Z") as an instrument for smoking. The tax rate for tobacco products is a reasonable choice for an instrument because the researcher assumes that it can only be correlated with health through its effect on smoking. If the researcher then finds tobacco taxes and state of health to be correlated, this may be viewed as evidence that smoking causes changes in health. History. First use of an instrument variable occurred in a 1928 book by Philip G. Wright, best known for his excellent description of the production, transport and sale of vegetable and animal oils in the early 1900s in the United States, while in 1945, Olav Reiersøl applied the same approach in the context of errors-in-variables models in his dissertation, giving the method its name. Wright attempted to determine the supply and demand for butter using panel data on prices and quantities sold in the United States. The idea was that a regression analysis could produce a demand or supply curve because they are formed by the path between prices and quantities demanded or supplied. The problem was that the observational data did not form a demand or supply curve as such, but rather a cloud of point observations that took different shapes under varying market conditions. It seemed that making deductions from the data remained elusive. The problem was that price affected both supply and demand so that a function describing only one of the two could not be constructed directly from the observational data. 
Wright correctly concluded that he needed a variable that correlated with either demand or supply but not both – that is, an instrumental variable. After much deliberation, Wright decided to use regional rainfall as his instrumental variable: he concluded that rainfall affected grass production and hence milk production and ultimately butter supply, but not butter demand. In this way he was able to construct a regression equation with only the instrumental variable of price and supply. Formal definitions of instrumental variables, using counterfactuals and graphical criteria, were given by Judea Pearl in 2000. Angrist and Krueger (2001) present a survey of the history and uses of instrumental variable techniques. Notions of causality in econometrics, and their relationship with instrumental variables and other methods, are discussed by Heckman (2008). Theory. While the ideas behind IV extend to a broad class of models, a very common context for IV is in linear regression. Traditionally, an instrumental variable is defined as a variable "Z" that is correlated with the independent variable "X" and uncorrelated with the "error term" U in the linear equation formula_0 formula_1 is a vector. formula_2 is a matrix, usually with a column of ones and perhaps with additional columns for other covariates. Consider how an instrument allows formula_3 to be recovered. Recall that OLS solves for formula_4 such that formula_5 (when we minimize the sum of squared errors, formula_6, the first-order condition is exactly formula_7.) If the true model is believed to have formula_8 due to any of the reasons listed above—for example, if there is an omitted variable which affects both formula_2 and formula_1 separately—then this OLS procedure will "not" yield the causal impact of formula_2 on formula_1. OLS will simply pick the parameter that makes the resulting errors appear uncorrelated with formula_2. Consider for simplicity the single-variable case. 
Suppose we are considering a regression with one variable and a constant (perhaps no other covariates are necessary, or perhaps we have partialed out any other relevant covariates): formula_9 In this case, the coefficient on the regressor of interest is given by formula_10. Substituting for formula_11 gives formula_12 where formula_13 is what the estimated coefficient vector would be if "x" were not correlated with "u". In this case, it can be shown that formula_13 is an unbiased estimator of formula_14 If formula_15 in the underlying model that we believe, then OLS gives a coefficient which does "not" reflect the underlying causal effect of interest. IV helps to fix this problem by identifying the parameters formula_16 not based on whether formula_17 is uncorrelated with formula_18, but based on whether another variable formula_19 is uncorrelated with formula_18. If theory suggests that formula_19 is related to formula_17 (the first stage) but uncorrelated with formula_18 (the exclusion restriction), then IV may identify the causal parameter of interest where OLS fails. Because there are multiple specific ways of using and deriving IV estimators even in just the linear case (IV, 2SLS, GMM), we save further discussion for the Estimation section below. Graphical definition. IV techniques have been developed for a much broader class of non-linear models. General definitions of instrumental variables, using counterfactual and graphical formalism, were given by Pearl (2000; p. 248). The graphical definition requires that "Z" satisfy the following conditions: formula_20 where formula_21 stands for "d"-separation and formula_22 stands for the graph in which all arrows entering "X" are cut off. The counterfactual definition requires that "Z" satisfies formula_23 where "Y""x" stands for the value that "Y" would attain had "X" been "x" and formula_21 stands for independence.
If there are additional covariates "W" then the above definitions are modified so that "Z" qualifies as an instrument if the given criteria hold conditional on "W". The essence of Pearl's definition is that the instrument "Z" is independent of the unobserved factors "U" and can affect "Y" only through "X". These conditions do not rely on a specific functional form of the equations and are therefore applicable to nonlinear equations, where "U" can be non-additive (see Non-parametric analysis). They are also applicable to a system of multiple equations, in which "X" (and other factors) affect "Y" through several intermediate variables. An instrumental variable need not be a cause of "X"; a proxy of such a cause may also be used, if it satisfies conditions 1–5. The exclusion restriction (condition 4) is redundant; it follows from conditions 2 and 3. Selecting suitable instruments. Since "U" is unobserved, the requirement that "Z" be independent of "U" cannot be inferred from data and must instead be determined from the model structure, i.e., the data-generating process. Causal graphs are a representation of this structure, and the graphical definition given above can be used to quickly determine whether a variable "Z" qualifies as an instrumental variable given a set of covariates "W". To see how, consider the following example. Suppose that we wish to estimate the effect of a university tutoring program on grade point average (GPA). The relationship between attending the tutoring program and GPA may be confounded by a number of factors. Students who attend the tutoring program may care more about their grades or may be struggling with their work. This confounding is depicted in Figures 1–3 on the right through the bidirected arc between Tutoring Program and GPA. If students are assigned to dormitories at random, the proximity of the student's dorm to the tutoring program is a natural candidate for being an instrumental variable. However, what if the tutoring program is located in the college library?
In that case, Proximity may also cause students to spend more time at the library, which in turn improves their GPA (see Figure 1). Using the causal graph depicted in Figure 2, we see that Proximity does not qualify as an instrumental variable because it is connected to GPA through the path Proximity formula_24 Library Hours formula_24 GPA in formula_22. However, if we control for Library Hours by adding it as a covariate then Proximity becomes an instrumental variable, since Proximity is separated from GPA given Library Hours in formula_22. Now, suppose that we notice that a student's "natural ability" affects his or her number of hours in the library as well as his or her GPA, as in Figure 3. Using the causal graph, we see that Library Hours is a collider and conditioning on it opens the path Proximity formula_25 Library Hours formula_26 GPA. As a result, Proximity cannot be used as an instrumental variable. Finally, suppose that Library Hours does not actually affect GPA because students who do not study in the library simply study elsewhere, as in Figure 4. In this case, controlling for Library Hours still opens a spurious path from Proximity to GPA. However, if we do not control for Library Hours and remove it as a covariate then Proximity can again be used as an instrumental variable. Estimation. We now revisit and expand upon the mechanics of IV in greater detail. Suppose the data are generated by a process of the form formula_27 where "y" is a vector of outcome variables, "X" is a matrix of covariates, and "e" is an unobserved error term representing all causes of "y" other than "X". The parameter vector formula_3 is the causal effect on formula_28 of a one unit change in each element of formula_29, holding all other causes of formula_28 constant. The econometric goal is to estimate formula_3. For simplicity's sake assume the draws of "e" are uncorrelated and that they are drawn from distributions with the same variance (that is, that the errors are serially uncorrelated and homoskedastic). Suppose also that a regression model of nominally the same form is proposed.
Given a random sample of "T" observations from this process, the ordinary least squares estimator is formula_31 where "X", "y" and "e" denote column vectors of length "T". This equation is similar to the equation involving formula_32 in the introduction (this is the matrix version of that equation). When "X" and "e" are uncorrelated, under certain regularity conditions the second term has an expected value conditional on "X" of zero and converges to zero in the limit, so the estimator is unbiased and consistent. When "X" and the other unmeasured, causal variables collapsed into the "e" term are correlated, however, the OLS estimator is generally biased and inconsistent for "β". In this case, it is valid to use the estimates to predict values of "y" given values of "X", but the estimate does not recover the causal effect of "X" on "y". To recover the underlying parameter formula_33, we introduce a set of variables "Z" that is highly correlated with each endogenous component of "X" but (in our underlying model) is not correlated with "e". For simplicity, one might consider "X" to be a "T" × 2 matrix composed of a column of constants and one endogenous variable, and "Z" to be a "T" × 2 matrix consisting of a column of constants and one instrumental variable. However, this technique generalizes to "X" being a matrix of a constant and, say, 5 endogenous variables, with "Z" being a matrix composed of a constant and 5 instruments. In the discussion that follows, we will assume that "X" is a "T" × "K" matrix and leave this value "K" unspecified. An estimator in which "X" and "Z" are both "T" × "K" matrices is referred to as just-identified.
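The biased-OLS/valid-IV contrast described above can be sketched in a small simulation (the coefficients and distributions here are made up for illustration, not taken from the sources): an unobserved confounder drives both the regressor and the error, so OLS is inconsistent, while the instrument recovers the true coefficient by solving Z′(y − Xβ) = 0.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100_000                              # sample size (illustrative)
alpha, beta = 1.0, 2.0                   # true intercept and causal slope (made up)

u = rng.normal(size=T)                   # unobserved confounder
z = rng.normal(size=T)                   # instrument: moves x, excluded from y
x = 0.8 * z + u + rng.normal(size=T)     # endogenous regressor: cov(x, e) != 0
e = u + rng.normal(size=T)               # structural error, shares u with x
y = alpha + beta * x + e

X = np.column_stack([np.ones(T), x])     # T x 2: constant + endogenous regressor
Z = np.column_stack([np.ones(T), z])     # T x 2: constant + instrument

# OLS solves X'(y - Xb) = 0 and is inconsistent here; IV instead solves
# Z'(y - Xb) = 0, which identifies beta because Z is uncorrelated with e.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)

print(beta_ols[1], beta_iv[1])           # OLS slope is pushed well above 2; IV is close to 2
```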
Suppose that the relationship between each endogenous component "x""i" and the instruments is given by formula_34 The most common IV specification uses the following estimator: formula_35 This specification approaches the true parameter as the sample gets large, so long as formula_36 in the true model: formula_37 As long as formula_36 in the underlying process which generates the data, the appropriate use of the IV estimator will identify this parameter. This works because IV solves for the unique parameter that satisfies formula_36, and therefore homes in on the true underlying parameter as the sample size grows. Now an extension: suppose that there are more instruments than there are covariates in the equation of interest, so that "Z" is a "T × M" matrix with "M &gt; K". This is often called the over-identified case. In this case, the generalized method of moments (GMM) can be used. The GMM IV estimator is formula_38 where formula_39 refers to the projection matrix formula_40. This expression collapses to the first when the number of instruments is equal to the number of covariates in the equation of interest. The over-identified IV is therefore a generalization of the just-identified IV. &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;Proof that βGMM collapses to βIV in the just-identified case Developing the formula_41 expression: formula_42 In the just-identified case, we have as many instruments as covariates, so that the dimension of "X" is the same as that of "Z". Hence, formula_43 and formula_44 are all square matrices of the same dimension. We can expand the inverse, using the fact that, for any invertible "n"-by-"n" matrices A and B, (AB)−1 = B−1A−1 (see Invertible matrix#Properties): formula_45 Reference: see Davidson and MacKinnon (1993) There is an equivalent under-identified estimator for the case where "M &lt; K".
Since the parameters are the solutions to a set of linear equations, an under-identified model using the set of equations formula_46 does not have a unique solution. Interpretation as two-stage least squares. One computational method which can be used to calculate IV estimates is two-stage least squares (2SLS or TSLS). In the first stage, each explanatory variable that is an endogenous covariate in the equation of interest is regressed on all of the exogenous variables in the model, including both exogenous covariates in the equation of interest and the excluded instruments. The predicted values from these regressions are obtained: Stage 1: Regress each column of X on Z (formula_47): formula_48 and save the predicted values: formula_49 In the second stage, the regression of interest is estimated as usual, except that in this stage each endogenous covariate is replaced with the predicted values from the first stage: Stage 2: Regress Y on the predicted values from the first stage: formula_50 which gives formula_51 This method is only valid in linear models. For categorical endogenous covariates, one might be tempted to use a different first stage than ordinary least squares, such as a probit model for the first stage followed by OLS for the second. This is commonly known in the econometric literature as the "forbidden regression", because second-stage IV parameter estimates are consistent only in special cases. &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;Proof: computation of the 2SLS estimator The usual OLS estimator is: formula_52. Replacing formula_53 and noting that formula_54 is a symmetric and idempotent matrix, so that formula_55 formula_56 The resulting estimator of formula_3 is numerically identical to the expression displayed above. A small correction must be made to the sum-of-squared residuals in the second-stage fitted model in order that the covariance matrix of formula_3 is calculated correctly. Non-parametric analysis.
When the form of the structural equations is unknown, an instrumental variable formula_57 can still be defined through the equations: formula_58 formula_59 where formula_60 and formula_61 are two arbitrary functions and formula_57 is independent of formula_62. Unlike linear models, however, measurements of formula_63 and formula_1 do not allow for the identification of the average causal effect of formula_2 on formula_1, denoted ACE formula_64 Balke and Pearl [1997] derived tight bounds on ACE and showed that these can provide valuable information on the sign and size of ACE. In linear analysis, there is no test to falsify the assumption that formula_57 is instrumental relative to the pair formula_65. This is not the case when formula_2 is discrete. Pearl (2000) has shown that, for all formula_60 and formula_61, the following constraint, called the "Instrumental Inequality", must hold whenever formula_57 satisfies the two equations above: formula_66 Interpretation under treatment effect heterogeneity. The exposition above assumes that the causal effect of interest does not vary across observations, that is, that formula_3 is a constant. Generally, different subjects will respond in different ways to changes in the "treatment" "x". When this possibility is recognized, the average effect in the population of a change in "x" on "y" may differ from the effect in a given subpopulation. For example, the average effect of a job training program may substantially differ across the group of people who actually receive the training and the group which chooses not to receive training. For these reasons, IV methods invoke implicit assumptions on behavioral response, or more generally assumptions over the correlation between the response to treatment and propensity to receive treatment. The standard IV estimator can recover local average treatment effects (LATE) rather than average treatment effects (ATE).
Imbens and Angrist (1994) demonstrate that the linear IV estimate can be interpreted under weak conditions as a weighted average of local average treatment effects, where the weights depend on the elasticity of the endogenous regressor to changes in the instrumental variables. Roughly, that means that the effect of a variable is only revealed for the subpopulations affected by the observed changes in the instruments, and that subpopulations which respond most to changes in the instruments will have the largest effects on the magnitude of the IV estimate. For example, if a researcher uses presence of a land-grant college as an instrument for college education in an earnings regression, she identifies the effect of college on earnings in the subpopulation which would obtain a college degree if a college is present but which would not obtain a degree if a college is not present. This empirical approach does not, without further assumptions, tell the researcher anything about the effect of college among people who would either always or never get a college degree regardless of whether a local college exists. Weak instruments problem. As Bound, Jaeger, and Baker (1995) note, a problem is caused by the selection of "weak" instruments, instruments that are poor predictors of the endogenous question predictor in the first-stage equation. In this case, the prediction of the question predictor by the instrument will be poor and the predicted values will have very little variation. Consequently, they are unlikely to have much success in predicting the ultimate outcome when they are used to replace the question predictor in the second-stage equation. In the context of the smoking and health example discussed above, tobacco taxes are weak instruments for smoking if smoking status is largely unresponsive to changes in taxes. 
If higher taxes do not induce people to quit smoking (or not start smoking), then variation in tax rates tells us nothing about the effect of smoking on health. If taxes affect health through channels other than through their effect on smoking, then the instruments are invalid and the instrumental variables approach may yield misleading results. For example, places and times with relatively health-conscious populations may both implement high tobacco taxes and exhibit better health even holding smoking rates constant, so we would observe a correlation between health and tobacco taxes even if it were the case that smoking has no effect on health. In this case, we would be mistaken to infer a causal effect of smoking on health from the observed correlation between tobacco taxes and health. Testing for weak instruments. The strength of the instruments can be directly assessed because both the endogenous covariates and the instruments are observable. A common rule of thumb for models with one endogenous regressor is: the F-statistic against the null that the excluded instruments are irrelevant in the first-stage regression should be larger than 10. Statistical inference and hypothesis testing. When the covariates are exogenous, the small-sample properties of the OLS estimator can be derived in a straightforward manner by calculating moments of the estimator conditional on "X". When some of the covariates are endogenous so that instrumental variables estimation is implemented, simple expressions for the moments of the estimator cannot be so obtained. Generally, instrumental variables estimators only have desirable asymptotic, not finite sample, properties, and inference is based on asymptotic approximations to the sampling distribution of the estimator. Even when the instruments are uncorrelated with the error in the equation of interest and when the instruments are not weak, the finite sample properties of the instrumental variables estimator may be poor. 
For example, exactly identified models produce finite sample estimators with no moments, so the estimator can be said to be neither biased nor unbiased, the nominal size of test statistics may be substantially distorted, and the estimates may commonly be far away from the true value of the parameter. Testing the exclusion restriction. The assumption that the instruments are not correlated with the error term in the equation of interest is not testable in exactly identified models. If the model is overidentified, there is information available which may be used to test this assumption. The most common test of these "overidentifying restrictions", called the Sargan–Hansen test, is based on the observation that the residuals should be uncorrelated with the set of exogenous variables if the instruments are truly exogenous. The Sargan–Hansen test statistic can be calculated as formula_67 (the number of observations multiplied by the coefficient of determination) from the OLS regression of the residuals onto the set of exogenous variables. This statistic will be asymptotically chi-squared with "m" − "k" degrees of freedom under the null that the error term is uncorrelated with the instruments. References.
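The two diagnostics described above — the first-stage F rule of thumb and the Sargan–Hansen formula_67 statistic — can be sketched on simulated data. The following is an illustrative sketch only: all variable names, coefficients, and sample sizes are hypothetical and are not taken from the cited sources.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical simulated system: two excluded instruments z1, z2 and one
# endogenous regressor x (x is correlated with the structural error e).
z = rng.normal(size=(n, 2))
e = rng.normal(size=n)                                       # structural error
x = 0.5 * z[:, 0] + 0.4 * z[:, 1] + e + rng.normal(size=n)   # first stage
y = 1.0 + 2.0 * x + e                                        # true beta = 2

Z = np.column_stack([np.ones(n), z])   # instruments, incl. constant (m = 3)
X = np.column_stack([np.ones(n), x])   # regressors, incl. constant (k = 2)

# First-stage F: null that the excluded instruments are jointly irrelevant
# in the regression of x on Z (rule of thumb: F > 10).
g = np.linalg.lstsq(Z, x, rcond=None)[0]
rss_u = np.sum((x - Z @ g) ** 2)       # unrestricted RSS
rss_r = np.sum((x - x.mean()) ** 2)    # restricted RSS (intercept only)
F = ((rss_r - rss_u) / 2) / (rss_u / (n - Z.shape[1]))

# 2SLS estimate via the projection matrix P_Z, then the Sargan-Hansen
# statistic n * R^2 from regressing the residuals on Z (m - k = 1 df here).
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
beta = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)
resid = y - X @ beta
d = np.linalg.lstsq(Z, resid, rcond=None)[0]
r2 = 1 - np.sum((resid - Z @ d) ** 2) / np.sum((resid - resid.mean()) ** 2)
sargan = n * r2

print(F, beta[1], sargan)
```

With these hypothetical parameters the instruments are strong (F far above 10), the 2SLS slope lands near the true value of 2, and the Sargan statistic stays in the range of a chi-squared variable with one degree of freedom, as expected when the instruments are valid.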
[ { "math_id": 0, "text": "Y = X \\beta + U " }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "\\beta" }, { "math_id": 4, "text": " \\widehat{\\beta }" }, { "math_id": 5, "text": "\\operatorname{cov}(X,\\widehat U) = 0" }, { "math_id": 6, "text": "\\min_\\beta (Y- X\\beta)'(Y- X\\beta) " }, { "math_id": 7, "text": " X' (Y- X\\widehat{\\beta}) = X' \\widehat{U} = 0 " }, { "math_id": 8, "text": "\\operatorname{cov}(X,U) \\neq 0" }, { "math_id": 9, "text": "y=\\alpha + \\beta x + u" }, { "math_id": 10, "text": " \\widehat{\\beta }= \\frac{\\operatorname{cov}(x,y)}{\\operatorname{var}(x)} " }, { "math_id": 11, "text": "y" }, { "math_id": 12, "text": "\n\\begin{align}\n\\widehat{\\beta} & = \\frac{\\operatorname{cov}(x,y)}{\\operatorname{var}(x)} = \\frac{\\operatorname{cov}(x,\\alpha + \\beta x + u)}{\\operatorname{var}(x)} \\\\[6pt]\n& =\\frac{\\operatorname{cov}(x, \\alpha +\\beta x)}{\\operatorname{var}(x)} +\\frac{\\operatorname{cov}(x,u)}{\\operatorname{var}(x)}= \\beta^* + \\frac{\\operatorname{cov}(x,u)}{\\operatorname{var}(x)},\n\\end{align}\n" }, { "math_id": 13, "text": "\\beta^*" }, { "math_id": 14, "text": "\\beta ." 
}, { "math_id": 15, "text": "\\operatorname{cov}(x,u) \\neq 0" }, { "math_id": 16, "text": "{\\beta}" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "u" }, { "math_id": 19, "text": "z" }, { "math_id": 20, "text": "(Z \\perp\\!\\!\\!\\perp Y)_{G_{\\overline{X}}} \\qquad(Z \\not\\!\\!{\\perp\\!\\!\\!\\perp} X)_G " }, { "math_id": 21, "text": "\\perp\\!\\!\\!\\perp" }, { "math_id": 22, "text": "G_{\\overline{X}}" }, { "math_id": 23, "text": "(Z \\perp\\!\\!\\!\\perp Y_x)\\qquad (Z \\not\\!\\!{\\perp\\!\\!\\!\\perp} X)" }, { "math_id": 24, "text": " \\rightarrow " }, { "math_id": 25, "text": "\\rightarrow" }, { "math_id": 26, "text": "\\leftrightarrow" }, { "math_id": 27, "text": " y_i = X_i \\beta + e_i, " }, { "math_id": 28, "text": "y_i" }, { "math_id": 29, "text": "X_i" }, { "math_id": 30, "text": "e_i" }, { "math_id": 31, "text": " \\widehat{\\beta}_\\mathrm{OLS} = (X^\\mathrm T X)^{-1} X^\\mathrm T y = (X^\\mathrm T X)^{-1} X^\\mathrm T (X \\beta + e) = \\beta + (X^\\mathrm T X)^{-1} X^\\mathrm T e" }, { "math_id": 32, "text": " \\operatorname{cov}(X,y) " }, { "math_id": 33, "text": " \\beta " }, { "math_id": 34, "text": " x_i = Z_i \\gamma + v_i, " }, { "math_id": 35, "text": " \\widehat{\\beta}_\\mathrm{IV} = (Z^\\mathrm T X)^{-1} Z^\\mathrm T y " }, { "math_id": 36, "text": " Z^\\mathrm T e = 0 " }, { "math_id": 37, "text": " \\widehat{\\beta}_\\mathrm{IV} = (Z^\\mathrm T X)^{-1} Z^\\mathrm T y = (Z^\\mathrm T X)^{-1} Z^\\mathrm T X \\beta + (Z^\\mathrm T X)^{-1} Z^\\mathrm T e \\rightarrow \\beta " }, { "math_id": 38, "text": " \\widehat{\\beta}_\\mathrm{GMM} = (X^\\mathrm T P_Z X)^{-1}X^\\mathrm T P_Z y," }, { "math_id": 39, "text": "P_Z" }, { "math_id": 40, "text": "P_Z=Z(Z^\\mathrm T Z)^{-1}Z^\\mathrm T" }, { "math_id": 41, "text": "\\beta_\\text{GMM}" }, { "math_id": 42, "text": " \\widehat{\\beta}_\\mathrm{GMM} = (X^\\mathrm{T} Z(Z^\\mathrm{T} Z)^{-1}Z^\\mathrm{T} X)^{-1}X^\\mathrm{T} Z(Z^\\mathrm{T} Z)^{-1}Z^\\mathrm{T} y" }, { "math_id": 
43, "text": "X^\\mathrm{T} Z, Z^\\mathrm{T} Z" }, { "math_id": 44, "text": "Z^\\mathrm{T}X" }, { "math_id": 45, "text": "\n\\begin{align}\n\\widehat{\\beta}_\\mathrm{GMM} &= (Z^\\mathrm{T} X)^{-1}(Z^\\mathrm{T} Z)(X^\\mathrm{T} Z)^{-1}X^\\mathrm{T} Z(Z^\\mathrm{T} Z)^{-1}Z^\\mathrm{T} y\\\\\n&= (Z^\\mathrm{T} X)^{-1}(Z^\\mathrm{T} Z)(Z^\\mathrm{T} Z)^{-1}Z^\\mathrm{T} y\\\\\n&=(Z^\\mathrm{T} X)^{-1}Z^\\mathrm{T}y \\\\\n&=\\widehat{\\beta}_\\mathrm{IV}\n\\end{align}\n" }, { "math_id": 46, "text": " Z'v = 0 " }, { "math_id": 47, "text": " X = Z \\delta + \\text{errors} " }, { "math_id": 48, "text": "\\widehat{\\delta}=(Z^\\mathrm{T} Z)^{-1}Z^\\mathrm{T}X, \\," }, { "math_id": 49, "text": "\\widehat{X}= Z\\widehat{\\delta} = {\\color{ProcessBlue}Z(Z^\\mathrm{T} Z)^{-1}Z^\\mathrm{T}}X = {\\color{ProcessBlue}P_Z} X.\\, " }, { "math_id": 50, "text": " Y = \\widehat X \\beta + \\mathrm{noise},\\," }, { "math_id": 51, "text": " \\beta_\\text{2SLS} = \\left(X^\\mathrm{T}{\\color{ProcessBlue}P_Z} X\\right)^{-1} X^\\mathrm{T}{\\color{ProcessBlue}P_Z}Y." }, { "math_id": 52, "text": " (\\widehat X^\\mathrm{T}\\widehat X)^{-1}\\widehat X^\\mathrm{T}Y" }, { "math_id": 53, "text": " \\widehat X = P_Z X" }, { "math_id": 54, "text": "P_Z " }, { "math_id": 55, "text": " P_Z^\\mathrm{T}P_Z=P_Z P_Z = P_Z" }, { "math_id": 56, "text": " \\beta_\\text{2SLS} = (\\widehat X^\\mathrm{T}\\widehat X)^{-1}\\widehat X^\\mathrm{T} Y = \\left(X^\\mathrm{T}P_Z^\\mathrm{T}P_Z X\\right)^{-1} X^\\mathrm{T}P_Z^\\mathrm{T}Y=\\left(X^\\mathrm{T}P_Z X\\right)^{-1} X^\\mathrm{T}P_ZY." }, { "math_id": 57, "text": "Z" }, { "math_id": 58, "text": "x = g(z,u) \\, " }, { "math_id": 59, "text": "y = f(x,u) \\, " }, { "math_id": 60, "text": "f" }, { "math_id": 61, "text": "g" }, { "math_id": 62, "text": "U" }, { "math_id": 63, "text": "Z, X" }, { "math_id": 64, "text": "\\text{ACE} = \\Pr(y\\mid \\text{do}(x)) = \\operatorname{E}_u[f(x,u)]." 
}, { "math_id": 65, "text": "(X,Y)" }, { "math_id": 66, "text": "\\max_x \\sum_y [\\max_z \\Pr(y,x\\mid z)]\\leq 1." }, { "math_id": 67, "text": "TR^2" } ]
https://en.wikipedia.org/wiki?curid=1514405
15144250
Propagule pressure
Measure of the quantity of an introduced species Propagule pressure (also termed introduction effort) is a composite measure of the number of individuals of a species released into a region to which they are not native. It incorporates estimates of the absolute number of individuals involved in any one release event (propagule size) and the number of discrete release events (propagule number). As the number of releases or the number of individuals released increases, propagule pressure also increases. Propagule pressure can be defined as the quality, quantity, and frequency of invading organisms (Groom, 2006). Propagule pressure is a key element in why some introduced species persist while others do not (Lockwood, 2005). Species introduced in large and consistent quantities prove more likely to survive, whereas species introduced in small numbers with only a few release events are more likely to go extinct (Lockwood, 2005). Propagule pressure is a composite measure of the number of individuals released into a non-native region (Lockwood, 2005). Three approaches are used to study and measure propagule pressure. One approach introduces a specific number of propagules into controlled plots. A second approach allows introduced species to mature and colonize naturally while observing native and non-native species during the colonization. The final approach used to study and measure propagule pressure utilizes records of the numbers of individuals introduced, including natural introductions and intentional introductions (Colautti et al., 2003). History. Propagule pressure plays an important role in species invasions (Groom, 2006). Charles Darwin was the first to study specific factors related to invasions of non-native species. In his research he identified that few members of the same genus were present in habitats containing naturalized non-indigenous species (Colautti et al., 2006).
His research showed that the number of nonnative species varied from habitat to habitat. Later, it was suggested that the niche theory and biotic resistance help explain the variation in success or failure of nonnative invasion (Colautti et al., 2006). More recent studies have shown that particular invasive species characteristics, such as ability to compete for resources, aid in their proliferation in habitats. A study noted by Colautti showed the correlation between propagule pressure and invasion success (2006). Without propagule pressure the number of invasive species incorporated would be unpredictable. It has been shown that species successfulness is frequently attributed to propagules (Colautti et al., 2006). Concepts. One important concept of propagule pressure is how it can be used to predict and or prevent invasions of non-native species in high risk locations. As invasion rates increase and biodiversity decreases, the probability of non-native establishment needs to be more accurately measured (Leung et al., 2004). Once estimation rates of species invasion success are better known, prevention efforts can be better implemented (Leung et al., 2004). To properly understand propagule pressure it is also important to realize that it is actually in flux within nature. In general, the probability of establishment will always be higher whenever propagule pressure is higher (Leung et al., 2004). If pressure is extremely low it is likely that the species' population will be too small to detect. When this is the case detailed information on rates of population introduction and size are difficult to obtain (Leung et al., 2004). In most studies where a direct relationship is observed, the higher the propagule pressure, the higher the success of the invasion. 
It is worth noting, though, that in Britton-Simmons and Abbott's study (2008) of the success of seaweed establishment in marine algal communities, propagule pressure was not sufficient to maximize invasion success. They found that resource availability had to coincide with invasion time and was a limiting factor in seaweed success (Britton-Simmons et al., 2008). Influencing factors. There are several factors that influence propagule pressure. They include size and frequency of the initial invasion, the pathway of invasion, characteristics of the species involved and the rate of immigration (Groom, 2006). When studying propagule pressure one must ask whether these factors affect persistence independently or interact with one another (Leung et al., 2004). The dynamics of the invasion are influenced by propagule pressure even after establishment has taken place. The carrying capacity of the nonnative species remains variable while adapting to their new environment. Case studies. One particular study conducted by Robert I. Colautti et al. (2006) proposes that propagule pressure should act as a null model for studies that consider/compare processes of invasions to patterns of invasions. This study was a meta-analysis studying the characteristics of invasiveness and invasibility. The group researched the impact of thirteen invasiveness characteristics and seven invasibility characteristics. Of the invasiveness characteristics studied, most did not significantly correlate to establishment, spread, abundance or impact of nonindigenous species. Propagule pressure, however, was shown to be a key contributor to both invasiveness and invasibility. This study found there was a positive association between establishment and propagule pressure. In regard to predicting invasions, it was found that propagule pressure was significantly associated with both invasion success and the inclination of a habitat to be invaded.
Of the thirteen other invasiveness characteristics studied, only three were significantly (positively or negatively) associated with invasiveness. Likewise, only two of the other six invasibility characteristics were shown to be significant. The communities that experienced more disturbance and greater resource availability achieved higher establishment and abundance of invaders. Impacts. Species characteristics, environmental characteristics, and human involvement affect invasion pathways and thus have an effect on propagule pressure and the success or failure of attempted invasions. For native species of conservation concern such as endangered species, propagule pressure can be used to ensure successful introductions of populations in the wild. On a similar note, propagule pressure also plays a role in unintentional invasions of nonnative species to particular habitats. Once propagule pressure is considered, more suitable measures can be taken to reverse the unwanted effects of nonnative invasions. This tool can be used to produce positive effects for desired species (Groom et al. 2006). Measurement. The total probability of establishment (E) can be determined using this equation. It considers that each propagule has an independent chance of establishment: formula_0 (E) is the total probability of establishment, (p) is the probability of a single propagule establishing, (N) is the number of propagules arriving at a specific location at a certain time, (l) is the location, (t) is the time. formula_1 Here alpha is a shape coefficient equivalent to −ln(1 − p). Measures of propagule pressure have been shown to have specific relationships with the probability of establishment (Leung et al. 2004).
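The two forms of the establishment equation above can be sketched numerically. This is an illustrative computation only; the values of p and N are hypothetical.

```python
import math

def establishment_prob(p, N):
    """Probability that at least one of N independent propagules establishes:
    E = 1 - (1 - p)**N."""
    return 1.0 - (1.0 - p) ** N

def establishment_prob_exp(p, N):
    """Equivalent exponential form with shape coefficient alpha = -ln(1 - p)."""
    alpha = -math.log(1.0 - p)
    return 1.0 - math.exp(-alpha * N)

p = 0.01  # hypothetical single-propagule establishment probability
for N in (1, 10, 100, 500):  # increasing propagule pressure
    print(N, round(establishment_prob(p, N), 3))
```

The probability of establishment rises toward 1 as propagule pressure grows, and the two forms agree for any p and N, since (1 − p)^N = e^(−αN) when α = −ln(1 − p).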
[ { "math_id": 0, "text": "E(N_{l,t}) = 1 - (1 - p)^{N_{l,t}}" }, { "math_id": 1, "text": "E(N_{l,t}) = 1 - e^{-\\alpha N_{l,t}}" } ]
https://en.wikipedia.org/wiki?curid=15144250
151474
Aliasing
Signal processing effect In signal processing and related disciplines, aliasing is the overlapping of frequency components resulting from a sample rate below the Nyquist rate. This overlap results in distortion or artifacts when the signal is reconstructed from samples which causes the reconstructed signal to differ from the original continuous signal. Aliasing that occurs in signals sampled in time, for instance in digital audio or the stroboscopic effect, is referred to as temporal aliasing. Aliasing in spatially sampled signals (e.g., moiré patterns in digital images) is referred to as spatial aliasing. Aliasing is generally avoided by applying low-pass filters or anti-aliasing filters (AAF) to the input signal before sampling and when converting a signal from a higher to a lower sampling rate. Suitable reconstruction filtering should then be used when restoring the sampled signal to the continuous domain or converting a signal from a lower to a higher sampling rate. For spatial anti-aliasing, the types of anti-aliasing include fast approximate anti-aliasing (FXAA), multisample anti-aliasing, and supersampling. Description. When a digital image is viewed, a reconstruction is performed by a display or printer device, and by the eyes and the brain. If the image data is processed incorrectly during sampling or reconstruction, the reconstructed image will differ from the original image, and an alias is seen. An example of spatial aliasing is the moiré pattern observed in a poorly pixelized image of a brick wall. Spatial anti-aliasing techniques avoid such poor pixelizations. Aliasing can be caused either by the sampling stage or the reconstruction stage; these may be distinguished by calling sampling aliasing "prealiasing" and reconstruction aliasing "postaliasing." Temporal aliasing is a major concern in the sampling of video and audio signals. Music, for instance, may contain high-frequency components that are inaudible to humans. 
If a piece of music is sampled at 32,000 samples per second (Hz), any frequency components at or above 16,000 Hz (the Nyquist frequency for this sampling rate) will cause aliasing when the music is reproduced by a digital-to-analog converter (DAC). The high frequencies in the analog signal will appear as lower frequencies (wrong alias) in the recorded digital sample and, hence, cannot be reproduced by the DAC. To prevent this, an anti-aliasing filter is used to remove components above the Nyquist frequency prior to sampling. In video or cinematography, temporal aliasing results from the limited frame rate, and causes the wagon-wheel effect, whereby a spoked wheel appears to rotate too slowly or even backwards. Aliasing has changed its apparent frequency of rotation. A reversal of direction can be described as a negative frequency. Temporal aliasing frequencies in video and cinematography are determined by the frame rate of the camera, but the relative intensity of the aliased frequencies is determined by the shutter timing (exposure time) or the use of a temporal aliasing reduction filter during filming. Like the video camera, most sampling schemes are periodic; that is, they have a characteristic sampling frequency in time or in space. Digital cameras provide a certain number of samples (pixels) per degree or per radian, or samples per mm in the focal plane of the camera. Audio signals are sampled (digitized) with an analog-to-digital converter, which produces a constant number of samples per second. Some of the most dramatic and subtle examples of aliasing occur when the signal being sampled also has periodic content. Bandlimited functions. Actual signals have a finite duration and their frequency content, as defined by the Fourier transform, has no upper bound. Some amount of aliasing always occurs when such functions are sampled. Functions whose frequency content is bounded ("bandlimited") have an infinite duration in the time domain. 
If sampled at a high enough rate, determined by the "bandwidth", the original function can, in theory, be perfectly reconstructed from the infinite set of samples. Bandpass signals. Sometimes aliasing is used intentionally on signals with no low-frequency content, called "bandpass" signals. Undersampling, which creates low-frequency aliases, can produce the same result, with less effort, as frequency-shifting the signal to lower frequencies before sampling at the lower rate. Some digital channelizers exploit aliasing in this way for computational efficiency. Sampling sinusoidal functions. Sinusoids are an important type of periodic function, because realistic signals are often modeled as the summation of many sinusoids of different frequencies and different amplitudes (for example, with a Fourier series or transform). Understanding what aliasing does to the individual sinusoids is useful in understanding what happens to their sum. When sampling a function at frequency "f"s (intervals 1/"f"s), the following functions of time ("t") yield identical sets of samples: {sin(2π( "f+Nf"s) "t" + φ), "N" = 0, ±1, ±2, ±3...}. A frequency spectrum of the samples produces equally strong responses at all those frequencies. Without collateral information, the frequency of the original function is ambiguous. So the functions and their frequencies are said to be "aliases" of each other. Noting the trigonometric identity: formula_0 we can write all the alias frequencies as positive values:  formula_1. For example, a snapshot of the lower right frame of Fig.2 shows a component at the actual frequency formula_2 and another component at alias formula_3. As formula_2 increases during the animation, formula_3 decreases. The point at which they are equal formula_4 is an axis of symmetry called the folding frequency, also known as Nyquist frequency. Aliasing matters when one attempts to reconstruct the original waveform from its samples.
The most common reconstruction technique produces the smallest of the formula_5 frequencies. So it is usually important that formula_6 be the unique minimum. A necessary and sufficient condition for that is formula_7, called the Nyquist condition. The lower left frame of Fig.2 depicts the typical reconstruction result of the available samples. Until formula_2 exceeds the Nyquist frequency, the reconstruction matches the actual waveform (upper left frame). After that, it is the low frequency alias of the upper frame. Folding. The figures below offer additional depictions of aliasing, due to sampling. A graph of amplitude vs frequency (not time) for a single sinusoid at frequency 0.6 "f"s and some of its aliases at 0.4 "f"s, 1.4 "f"s, and 1.6 "f"s would look like the 4 black dots in Fig.3. The red lines depict the paths (loci) of the 4 dots if we were to adjust the frequency and amplitude of the sinusoid along the solid red segment (between "f"s/2 and "f"s). No matter what function we choose to change the amplitude vs frequency, the graph will exhibit symmetry between 0 and "f"s. Folding is often observed in practice when viewing the frequency spectrum of real-valued samples, such as Fig.4.
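The folding behavior described above can be sketched with a small helper function; the numbers mirror the 0.6 "f"s example (an illustrative sketch, not code from any cited source).

```python
def folded_alias(f, fs):
    """Apparent frequency of a real sinusoid of frequency f sampled at fs:
    the smallest member of the alias set |f + N*fs|, folded into [0, fs/2]."""
    r = f % fs               # wrap into [0, fs)
    return min(r, fs - r)    # fold into [0, fs/2]

fs = 1.0
for f in (0.4, 0.6, 1.4, 1.6):   # 0.6, 1.4 and 1.6 all alias to 0.4 fs
    print(f, folded_alias(f, fs))
```

Frequencies at or below the Nyquist frequency fs/2 are returned unchanged, while anything above folds back into the [0, fs/2] band, reproducing the symmetry of the black dots in Fig.3.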
These attenuated high frequency components still generate low-frequency aliases, but typically at low enough amplitudes that they do not cause problems. A filter chosen in anticipation of a certain sample frequency is called an anti-aliasing filter. The filtered signal can subsequently be reconstructed, by interpolation algorithms, without significant additional distortion. Most sampled signals are not simply stored and reconstructed. But the fidelity of a theoretical reconstruction (via the Whittaker–Shannon interpolation formula) is a customary measure of the effectiveness of sampling. Historical usage. Historically the term "aliasing" evolved from radio engineering because of the action of superheterodyne receivers. When the receiver shifts multiple signals down to lower frequencies, from RF to IF by heterodyning, an unwanted signal, from an RF frequency equally far from the local oscillator (LO) frequency as the desired signal, but on the wrong side of the LO, can end up at the same IF frequency as the wanted one. If it is strong enough it can interfere with reception of the desired signal. This unwanted signal is known as an "image" or "alias" of the desired signal. The first written use of the terms "alias" and "aliasing" in signal processing appears to be in a 1949 unpublished Bell Laboratories technical memorandum by John Tukey and Richard Hamming. That paper includes an example of frequency aliasing dating back to 1922. The first "published" use of the term "aliasing" in this context is due to Blackman and Tukey in 1958. In their preface to the Dover reprint of this paper, they point out that the idea of aliasing had been illustrated graphically by Stumpf ten years prior. The 1949 Bell technical report refers to aliasing as though it is a well-known concept, but does not offer a source for the term. 
Gwilym Jenkins and Maurice Priestley credit Tukey with introducing it in this context, though an analogous concept of aliasing had been introduced a few years earlier in fractional factorial designs. While Tukey did significant work in factorial experiments and was certainly aware of aliasing in fractional designs, it cannot be determined whether his use of "aliasing" in signal processing was consciously inspired by such designs. Angular aliasing. Aliasing occurs whenever the use of discrete elements to capture or produce a continuous signal causes frequency ambiguity. Spatial aliasing, particular of angular frequency, can occur when reproducing a light field or sound field with discrete elements, as in 3D displays or wave field synthesis of sound. This aliasing is visible in images such as posters with lenticular printing: if they have low angular resolution, then as one moves past them, say from left-to-right, the 2D image does not initially change (so it appears to move left), then as one moves to the next angular image, the image suddenly changes (so it jumps right) – and the frequency and amplitude of this side-to-side movement corresponds to the angular resolution of the image (and, for frequency, the speed of the viewer's lateral movement), which is the angular aliasing of the 4D light field. The lack of parallax on viewer movement in 2D images and in 3-D film produced by stereoscopic glasses (in 3D films the effect is called "yawing", as the image appears to rotate on its axis) can similarly be seen as loss of angular resolution, all angular frequencies being aliased to 0 (constant). More examples. Audio example. The qualitative effects of aliasing can be heard in the following audio demonstration. Six sawtooth waves are played in succession, with the first two sawtooths having a fundamental frequency of 440 Hz (A4), the second two having fundamental frequency of 880 Hz (A5), and the final two at 1760 Hz (A6). 
The sawtooths alternate between bandlimited (non-aliased) sawtooths and aliased sawtooths and the sampling rate is 22050 Hz. The bandlimited sawtooths are synthesized from the sawtooth waveform's Fourier series such that no harmonics above the Nyquist frequency are present. The aliasing distortion in the lower frequencies is increasingly obvious with higher fundamental frequencies, and while the bandlimited sawtooth is still clear at 1760 Hz, the aliased sawtooth is degraded and harsh with a buzzing audible at frequencies lower than the fundamental. Direction finding. A form of spatial aliasing can also occur in antenna arrays or microphone arrays used to estimate the direction of arrival of a wave signal, as in geophysical exploration by seismic waves. Waves must be sampled more densely than two points per wavelength, or the wave arrival direction becomes ambiguous. References.
[ { "math_id": 0, "text": "\n\\sin(2\\pi (f+Nf_{\\rm s})t + \\phi) = \\left\\{ \\begin{array}{ll}\n +\\sin(2\\pi (f+Nf_{\\rm s})t + \\phi),\n & f+Nf_{\\rm s} \\ge 0 \\\\ \n -\\sin(2\\pi |f+Nf_{\\rm s}|t - \\phi),\n & f+Nf_{\\rm s} < 0 \\\\ \n\\end{array} \\right.\n" }, { "math_id": 1, "text": "f_{_N}(f) \\triangleq \\left|f+Nf_{\\rm s}\\right|" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "f_{_{-1}}(f)" }, { "math_id": 4, "text": "(f=f_s/2)" }, { "math_id": 5, "text": "f_{_N}(f)" }, { "math_id": 6, "text": "f_0(f)" }, { "math_id": 7, "text": "f_s/2 > |f|," } ]
https://en.wikipedia.org/wiki?curid=151474
15148924
Minimal model program
Effort to birationally classify algebraic varieties In algebraic geometry, the minimal model program is part of the birational classification of algebraic varieties. Its goal is to construct a birational model of any complex projective variety which is as simple as possible. The subject has its origins in the classical birational geometry of surfaces studied by the Italian school, and is currently an active research area within algebraic geometry. Outline. The basic idea of the theory is to simplify the birational classification of varieties by finding, in each birational equivalence class, a variety which is "as simple as possible". The precise meaning of this phrase has evolved with the development of the subject; originally for surfaces, it meant finding a smooth variety formula_0 for which any birational morphism formula_1 with a smooth surface formula_2 is an isomorphism. In the modern formulation, the goal of the theory is as follows. Suppose we are given a projective variety formula_0, which for simplicity is assumed non-singular. There are two cases based on its Kodaira dimension, formula_3: The question of whether the varieties formula_2 and formula_0 appearing above are non-singular is an important one. It seems natural to hope that if we start with smooth formula_0, then we can always find a minimal model or Fano fibre space inside the category of smooth varieties. However, this is not true, and so it becomes necessary to consider singular varieties also. The singularities that appear are called terminal singularities. Minimal models of surfaces. Every irreducible complex algebraic curve is birational to a unique smooth projective curve, so the theory for curves is trivial. The case of surfaces was first investigated by the geometers of the Italian school around 1900; the contraction theorem of Guido Castelnuovo essentially describes the process of constructing a minimal model of any surface. 
The theorem states that any nontrivial birational morphism formula_12 must contract a −1-curve to a smooth point, and conversely any such curve can be smoothly contracted. Here a −1-curve is a smooth rational curve "C" with self-intersection formula_13 Any such curve must have formula_14 which shows that if the canonical class is nef then the surface has no −1-curves. Castelnuovo's theorem implies that to construct a minimal model for a smooth surface, we simply contract all the −1-curves on the surface, and the resulting variety "Y" is either a (unique) minimal model with "K" nef, or a ruled surface (which is the same as a 2-dimensional Fano fiber space, and is either a projective plane or a ruled surface over a curve). In the second case, the ruled surface birational to "X" is not unique, though there is a unique one isomorphic to the product of the projective line and a curve. A somewhat subtle point is that even though a surface might have infinitely many -1-curves, one need only contract finitely many of them to obtain a surface with no -1-curves. Higher-dimensional minimal models. In dimensions greater than 2, the theory becomes far more involved. In particular, there exist smooth varieties formula_0 which are not birational to any smooth variety formula_2 with nef canonical class. The major conceptual advance of the 1970s and early 1980s was that the construction of minimal models is still feasible, provided one is careful about the types of singularities which occur. (For example, we want to decide if formula_15 is nef, so intersection numbers formula_16 must be defined. Hence, at the very least, our varieties must have formula_17 to be a Cartier divisor for some positive integer formula_18.) The first key result is the cone theorem of Shigefumi Mori, describing the structure of the cone of curves of formula_0. 
Briefly, the theorem shows that starting with formula_0, one can inductively construct a sequence of varieties formula_19, each of which is "closer" than the previous one to having formula_20 nef. However, the process may encounter difficulties: at some point the variety formula_19 may become "too singular". The conjectural solution to this problem is the flip, a kind of codimension-2 surgery operation on formula_19. It is not clear that the required flips exist, nor that they always terminate (that is, that one reaches a minimal model formula_2 in finitely many steps). Mori showed that flips exist in the 3-dimensional case. The existence of the more general log flips was established by Vyacheslav Shokurov in dimensions three and four. This was subsequently generalized to higher dimensions by Caucher Birkar, Paolo Cascini, Christopher Hacon, and James McKernan, relying on earlier work of Shokurov and of Hacon and McKernan. They also settled several other problems, including finite generation of log canonical rings and existence of minimal models for varieties of log general type. The problem of termination of log flips in higher dimensions remains the subject of active research. References.
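Returning to the surface case above: the claim that a −1-curve C satisfies K·C = −1 is a short consequence of the adjunction formula. The following is a sketch of standard background material, not taken from the cited references.

```latex
% Adjunction on a smooth projective surface: for a smooth curve C of genus g,
%   2g - 2 = C \cdot (C + K).
% A -1-curve is a smooth rational curve (g = 0) with C \cdot C = -1, so:
\begin{align*}
  2g - 2 &= C \cdot C + K \cdot C && \text{(adjunction)} \\
  -2 &= -1 + K \cdot C && (g = 0,\ C \cdot C = -1) \\
  K \cdot C &= -1 .
\end{align*}
% Hence if K is nef (K \cdot C' \ge 0 for every curve C'), the surface
% contains no -1-curves, as used in the discussion of Castelnuovo's theorem.
```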
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "f\\colon X \\to X'" }, { "math_id": 2, "text": "X'" }, { "math_id": 3, "text": "\\kappa(X)" }, { "math_id": 4, "text": "\\kappa(X)=-\\infty." }, { "math_id": 5, "text": "f\\colon X' \\to Y" }, { "math_id": 6, "text": "Y" }, { "math_id": 7, "text": "\\dim Y < \\dim X'," }, { "math_id": 8, "text": "-K_F" }, { "math_id": 9, "text": "F" }, { "math_id": 10, "text": "\\kappa(X) \\geqslant 0." }, { "math_id": 11, "text": "K_{X^\\prime}" }, { "math_id": 12, "text": "f\\colon X\\to Y" }, { "math_id": 13, "text": "C\\cdot C = -1." }, { "math_id": 14, "text": "K\\cdot C = -1" }, { "math_id": 15, "text": "K_{X'}" }, { "math_id": 16, "text": "K_{X'} \\cdot C" }, { "math_id": 17, "text": "nK_{X'}" }, { "math_id": 18, "text": "n" }, { "math_id": 19, "text": "X_i" }, { "math_id": 20, "text": "K_{X_i}" } ]
https://en.wikipedia.org/wiki?curid=15148924
1514907
Unary function
In mathematics, a unary function is a function that takes one argument. A unary operator is a unary function whose codomain coincides with its domain; in general, a unary function's domain need not coincide with its codomain. Examples. The successor function, denoted formula_0, is a unary operator. Its domain and codomain are the natural numbers; its definition is as follows: formula_1 In some programming languages such as C, executing this operation is denoted by postfixing ++ to the operand, i.e. the use of n++ is equivalent to executing the assignment formula_2. Many of the elementary functions are unary functions, including the trigonometric functions, logarithm with a specified base, exponentiation to a particular power or base, and hyperbolic functions.
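A minimal Python sketch of the successor operator described above (the names are illustrative only):

```python
def succ(n: int) -> int:
    """Successor: a unary operator, since its domain and codomain
    are both the natural numbers."""
    return n + 1

n = 5
n = succ(n)   # the assignment n := succ(n), i.e. the effect of C's n++
print(n)      # 6
```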
[ { "math_id": 0, "text": "\\operatorname{succ}" }, { "math_id": 1, "text": "\n\\begin{align}\n \\operatorname{succ} : \\quad & \\mathbb{N} \\rightarrow \\mathbb{N} \\\\\n & n \\mapsto (n + 1)\n\\end{align}\n" }, { "math_id": 2, "text": " n:= \\operatorname{succ}(n)" } ]
https://en.wikipedia.org/wiki?curid=1514907
15149776
Configuration entropy
Measure of particle positions within a system In statistical mechanics, configuration entropy is the portion of a system's entropy that is related to discrete representative positions of its constituent particles. For example, it may refer to the number of ways that atoms or molecules pack together in a mixture, alloy or glass, the number of conformations of a molecule, or the number of spin configurations in a magnet. The name might suggest that it relates to all possible configurations or particle positions of a system, excluding the entropy of their velocity or momentum, but that usage rarely occurs. Calculation. If the configurations all have the same weighting, or energy, the configurational entropy is given by Boltzmann's entropy formula formula_0 where "k""B" is the Boltzmann constant and "W" is the number of possible configurations. In a more general formulation, if a system can be in states "n" with probabilities "P""n", the configurational entropy of the system is given by formula_1 which in the perfect disorder limit (all "P""n" = 1/"W") leads to Boltzmann's formula, while in the opposite limit (one configuration with probability 1), the entropy vanishes. This formulation is called the Gibbs entropy formula and is analogous to that of Shannon's information entropy. The mathematical field of combinatorics, and in particular the mathematics of combinations and permutations is highly important in the calculation of configurational entropy. In particular, this field of mathematics offers formalized approaches for calculating the number of ways of choosing or arranging discrete objects; in this case, atoms or molecules. However, it is important to note that the positions of molecules are not strictly speaking "discrete" above the quantum level. Thus a variety of approximations may be used in discretizing a system to allow for a purely combinatorial approach. 
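The two limiting cases of the Gibbs formula above — perfect disorder recovering Boltzmann's S = kB ln W, and a single certain configuration giving zero entropy — can be checked with a short numerical sketch (function name illustrative only):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def configurational_entropy(probs, k=K_B):
    """Gibbs formula S = -k * sum(P_n ln P_n); configurations with
    zero probability contribute nothing to the sum."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

W = 6  # e.g. six equally likely packings
# Perfect disorder (all P_n = 1/W) reduces to Boltzmann's S = k ln W ...
assert math.isclose(configurational_entropy([1.0 / W] * W),
                    K_B * math.log(W))
# ... and one configuration with probability 1 gives zero entropy.
assert configurational_entropy([1.0]) == 0.0
```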
Alternatively, integral methods may be used in some cases to work directly with continuous position functions, usually denoted as a configurational integral. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "S = k_B \\, \\ln W," }, { "math_id": 1, "text": "S = - k_B \\, \\sum_{n=1}^W P_n \\ln P_n, " } ]
https://en.wikipedia.org/wiki?curid=15149776
1515472
Stokes parameters
Set of values that describe the polarization state of electromagnetic radiation The Stokes parameters are a set of values that describe the polarization state of electromagnetic radiation. They were defined by George Gabriel Stokes in 1852, as a mathematically convenient alternative to the more common description of incoherent or partially polarized radiation in terms of its total intensity ("I"), (fractional) degree of polarization ("p"), and the shape parameters of the polarization ellipse. The effect of an optical system on the polarization of light can be determined by constructing the Stokes vector for the input light and applying Mueller calculus, to obtain the Stokes vector of the light leaving the system. They can be determined from directly observable phenomena. The original Stokes paper was discovered independently by Francis Perrin in 1942 and by Subrahmanyan Chandrasekhar in 1947, who named them the Stokes parameters. Definitions. The relationship of the Stokes parameters "S"0, "S"1, "S"2, "S"3 to intensity and polarization ellipse parameters is shown in the equations below and the figure on the right. formula_0 Here formula_1, formula_2 and formula_3 are the spherical coordinates of the three-dimensional vector of cartesian coordinates formula_4. formula_5 is the total intensity of the beam, and formula_6 is the degree of polarization, constrained by formula_7. The factor of two before formula_8 represents the fact that any polarization ellipse is indistinguishable from one rotated by 180°, while the factor of two before formula_9 indicates that an ellipse is indistinguishable from one with the semi-axis lengths swapped accompanied by a 90° rotation. The phase information of the polarized light is not recorded in the Stokes parameters. The four Stokes parameters are sometimes denoted "I", "Q", "U" and "V", respectively. Given the Stokes parameters, one can solve for the spherical coordinates with the following equations: formula_10 Stokes vectors. 
The Stokes parameters are often combined into a vector, known as the Stokes vector: formula_11 The Stokes vector spans the space of unpolarized, partially polarized, and fully polarized light. For comparison, the Jones vector only spans the space of fully polarized light, but is more useful for problems involving coherent light. The four Stokes parameters are not a preferred coordinate system of the space, but rather were chosen because they can be easily measured or calculated. Note that there is an ambiguous sign for the formula_12 component depending on the physical convention used. In practice, there are two separate conventions used, either defining the Stokes parameters when looking down the beam towards the source (opposite the direction of light propagation) or looking down the beam away from the source (coincident with the direction of light propagation). These two conventions result in different signs for formula_12, and a convention must be chosen and adhered to. Examples. Below are shown some Stokes vectors for common states of polarization of light. Alternative explanation. A monochromatic plane wave is specified by its propagation vector, formula_13, and the complex amplitudes of the electric field, formula_14 and formula_15, in a basis formula_16. The pair formula_17 is called a Jones vector. Alternatively, one may specify the propagation vector, the phase, formula_18, and the polarization state, formula_19, where formula_19 is the curve traced out by the electric field as a function of time in a fixed plane. The most familiar polarization states are linear and circular, which are degenerate cases of the most general state, an ellipse. One way to describe polarization is by giving the semi-major and semi-minor axes of the polarization ellipse, its orientation, and the direction of rotation (See the above figure). 
The Stokes parameters formula_5, formula_20, formula_21, and formula_12, provide an alternative description of the polarization state which is experimentally convenient because each parameter corresponds to a sum or difference of measurable intensities. The next figure shows examples of the Stokes parameters in degenerate states. Definitions. The Stokes parameters are defined by formula_22 where the subscripts refer to three different bases of the space of Jones vectors: the standard Cartesian basis (formula_23), a Cartesian basis rotated by 45° (formula_24), and a circular basis (formula_25). The circular basis is defined so that formula_26, formula_27. The symbols ⟨⋅⟩ represent expectation values. The light can be viewed as a random variable taking values in the space "C"2 of Jones vectors formula_17. Any given measurement yields a specific wave (with a specific phase, polarization ellipse, and magnitude), but it keeps flickering and wobbling between different outcomes. The expectation values are various averages of these outcomes. Intense, but unpolarized light will have "I" &gt; 0 but "Q" = "U" = "V" = 0, reflecting that no polarization type predominates. A convincing waveform is depicted at the article on coherence. The opposite would be perfectly polarized light which, in addition, has a fixed, nonvarying amplitude—a pure sine curve. This is represented by a random variable with only a single possible value, say formula_17. In this case one may replace the brackets by absolute value bars, obtaining a well-defined quadratic map formula_28 from the Jones vectors to the corresponding Stokes vectors; more convenient forms are given below. The map takes its image in the cone defined by |"I" |2 = |"Q" |2 + |"U" |2 + |"V" |2, where the purity of the state satisfies "p" = 1 (see below). The next figure shows how the signs of the Stokes parameters are determined by the helicity and the orientation of the semi-major axis of the polarization ellipse. 
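As an illustration (a sketch, not from the source), the four intensity differences defining the Stokes parameters can be evaluated for a fully polarized Jones vector; the circular-basis projection below is the one that reproduces the increasing-phase sign convention V = −2 Im(Ex Ey*):

```python
import math

def stokes_pure(Ex, Ey):
    """Stokes parameters of a fully polarized wave with Jones vector
    (Ex, Ey), computed as intensity differences in the three bases of
    the definitions above: Cartesian, 45°-rotated, and circular."""
    rt2 = math.sqrt(2)
    Ea, Eb = (Ex + Ey) / rt2, (Ey - Ex) / rt2        # basis rotated by 45°
    El, Er = (Ex + 1j * Ey) / rt2, (Ex - 1j * Ey) / rt2  # circular projections
    I = abs(Ex) ** 2 + abs(Ey) ** 2
    Q = abs(Ex) ** 2 - abs(Ey) ** 2
    U = abs(Ea) ** 2 - abs(Eb) ** 2
    V = abs(Er) ** 2 - abs(El) ** 2
    return I, Q, U, V

I, Q, U, V = stokes_pure(2, 1 + 1j)
print([round(v, 6) for v in (I, Q, U, V)])   # [6.0, 2.0, 4.0, 4.0]
# A pure (fully polarized) state lies on the cone Q**2 + U**2 + V**2 = I**2.
assert math.isclose(I ** 2, Q ** 2 + U ** 2 + V ** 2)
```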
Representations in fixed bases. In a fixed (formula_23) basis, the Stokes parameters when using an "increasing phase convention" are formula_29 while for formula_30, they are formula_31 and for formula_32, they are formula_33 Properties. For purely monochromatic coherent radiation, it follows from the above equations that formula_34 whereas for the whole (non-coherent) beam radiation, the Stokes parameters are defined as averaged quantities, and the previous equation becomes an inequality: formula_35 However, we can define a total polarization intensity formula_36, so that formula_37 where formula_38 is the total polarization fraction. Let us define the complex intensity of linear polarization to be formula_39 Under a rotation formula_40 of the polarization ellipse, it can be shown that formula_5 and formula_12 are invariant, but formula_41 With these properties, the Stokes parameters may be thought of as constituting three generalized intensities: formula_42 where formula_5 is the total intensity, formula_43 is the intensity of circular polarization, and formula_44 is the intensity of linear polarization. The total intensity of polarization is formula_45, and the orientation and sense of rotation are given by formula_46 Since formula_47 and formula_48, we have formula_49 Relation to the polarization ellipse. In terms of the parameters of the polarization ellipse, the Stokes parameters are formula_50 Inverting the previous equation gives formula_51 Measurement. The Stokes parameters (and thus the polarization of some electromagnetic radiation) can be directly determined from observation. 
Using a linear polarizer and a quarter-wave plate, the following system of equations relating the Stokes parameters to measured intensity can be obtained: formula_52 where formula_53 is the irradiance of the radiation at a point when the linear polarizer is rotated at an angle of formula_54, and similarly formula_55 is the irradiance at a point when the quarter-wave plate is rotated at an angle of formula_54. A system can be implemented using both plates at once at different angles to measure the parameters. This can give a more accurate measure of the relative magnitudes of the parameters (which is often the main result desired) due to all parameters being affected by the same losses. Relationship to Hermitian operators and quantum mixed states. From a geometric and algebraic point of view, the Stokes parameters stand in one-to-one correspondence with the closed, convex, 4-real-dimensional cone of nonnegative Hermitian operators on the Hilbert space C2. The parameter "I" serves as the trace of the operator, whereas the entries of the matrix of the operator are simple linear functions of the four parameters "I", "Q", "U", "V", serving as coefficients in a linear combination of the Stokes operators. The eigenvalues and eigenvectors of the operator can be calculated from the polarization ellipse parameters "I", "p", "ψ", "χ". The Stokes parameters with "I" set equal to 1 (i.e. the trace 1 operators) are in one-to-one correspondence with the closed unit 3-dimensional ball of mixed states (or density operators) of the quantum space C2, whose boundary is the Bloch sphere. The Jones vectors correspond to the underlying space C2, that is, the (unnormalized) pure states of the same system. Note that the overall phase (i.e. 
the common phase factor between the two component waves on the two perpendicular polarization axes) is lost when passing from a pure state |φ⟩ to the corresponding mixed state |φ⟩⟨φ|, just as it is lost when passing from a Jones vector to the corresponding Stokes vector. In the basis of horizontal polarization state formula_56 and vertical polarization state formula_57, the +45° linear polarization state is formula_58, the -45° linear polarization state is formula_59, the left hand circular polarization state is formula_60, and the right hand circular polarization state is formula_61. It's easy to see that these states are the eigenvectors of Pauli matrices, and that the normalized Stokes parameters ("U/I", "V/I", "Q/I") correspond to the coordinates of the Bloch vector (formula_62, formula_63, formula_64). Equivalently, we have formula_65, formula_66, formula_67, where formula_68 is the density matrix of the mixed state. Generally, a linear polarization at angle θ has a pure quantum state formula_69; therefore, the transmittance of a linear polarizer/analyzer at angle θ for a mixed state light source with density matrix formula_70 is formula_71, with a maximum transmittance of formula_72 at formula_73 if formula_74, or at formula_75 if formula_76; the minimum transmittance of formula_77 is reached at the perpendicular to the maximum transmittance direction. Here, the ratio of maximum transmittance to minimum transmittance is defined as the extinction ratio formula_78, where the degree of linear polarization is formula_79. Equivalently, the formula for the transmittance can be rewritten as formula_80, which is an extended form of Malus's law; here, formula_81 are both non-negative, and is related to the extinction ratio by formula_82. Two of the normalized Stokes parameters can also be calculated by formula_83. It's also worth noting that a rotation of polarization axis by angle θ corresponds to the Bloch sphere rotation operator formula_84. 
For example, the horizontal polarization state formula_56 would rotate to formula_69. The effect of a quarter-wave plate aligned to the horizontal axis is described by formula_85, or equivalently the Phase gate S, and the resulting Bloch vector becomes formula_86. With this configuration, if we perform the rotating analyzer method to measure the extinction ratio, we will be able to calculate formula_63 and also verify formula_64. For this method to work, the fast axis and the slow axis of the waveplate must be aligned with the reference directions for the basis states. The effect of a quarter-wave plate rotated by angle θ can be determined by Rodrigues' rotation formula as formula_87, with formula_88. The transmittance of the resulting light through a linear polarizer (analyzer plate) along the horizontal axis can be calculated using the same Rodrigues' rotation formula and focusing on its components on formula_5 and formula_89: formula_90 The above expression is the theoretical basis of many polarimeters. For unpolarized light, T=1/2 is a constant. For purely circularly polarized light, T has a sinusoidal dependence on angle θ with a period of 180 degrees, and can reach absolute extinction where T=0. For purely linearly polarized light, T has a sinusoidal dependence on angle θ with a period of 90 degrees, and absolute extinction is only reachable when the original light's polarization is at 90 degrees from the polarizer (i.e. formula_91). In this configuration, formula_92 and formula_93, with a maximum of 1/2 at θ=45°, and an extinction point at θ=0°. This result can be used to precisely determine the fast or slow axis of a quarter-wave plate, for example, by using a polarizing beam splitter to obtain a linearly polarized light aligned to the analyzer plate and rotating the quarter-wave plate in between. 
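As a numerical cross-check (a sketch using Jones matrices rather than the density-matrix formalism of the text, with the axis-aligned plate taken as diag(e^(−iπ/4), e^(+iπ/4)) as above), vertically polarized light through a quarter-wave plate at angle θ and then a horizontal analyzer reproduces the closed form T = (1 − cos 4θ)/4:

```python
import cmath, math

def qwp(theta):
    """Jones matrix R(theta) · diag(e^{-iπ/4}, e^{+iπ/4}) · R(-theta)
    of a quarter-wave plate with its fast axis at angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    d = (cmath.exp(-1j * math.pi / 4), cmath.exp(1j * math.pi / 4))
    return [[c * c * d[0] + s * s * d[1], c * s * (d[0] - d[1])],
            [s * c * (d[0] - d[1]), s * s * d[0] + c * c * d[1]]]

def transmittance(theta):
    """Intensity after: vertical input -> rotated QWP -> horizontal analyzer."""
    Ex_out = qwp(theta)[0][1]      # input Jones vector is (0, 1)
    return abs(Ex_out) ** 2

for deg in (0, 15, 45, 60):
    th = math.radians(deg)
    assert math.isclose(transmittance(th), (1 - math.cos(4 * th)) / 4,
                        abs_tol=1e-12)
print(round(transmittance(math.radians(45)), 3))   # 0.5, the maximum
```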
Similarly, the effect of a half-wave plate rotated by angle θ is described by formula_94, which transforms the density matrix to: formula_95 The above expression demonstrates that if the original light is of pure linear polarization (i.e. formula_96), the resulting light after the half-wave plate is still of pure linear polarization (i.e. without formula_97 component) with a rotated major axis. Such rotation of the linear polarization has a sinusoidal dependence on angle θ with a period of 90 degrees. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\begin{align}\nS_0 &= I \\\\\nS_1 &= I p \\cos 2\\psi \\cos 2\\chi \\\\\nS_2 &= I p \\sin 2\\psi \\cos 2\\chi \\\\\nS_3 &= I p \\sin 2\\chi\n\\end{align}\n" }, { "math_id": 1, "text": "I p" }, { "math_id": 2, "text": "2\\psi" }, { "math_id": 3, "text": "2\\chi" }, { "math_id": 4, "text": "(S_1, S_2, S_3)" }, { "math_id": 5, "text": "I" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "0 \\le p \\le 1" }, { "math_id": 8, "text": "\\psi" }, { "math_id": 9, "text": "\\chi" }, { "math_id": 10, "text": " \\begin{align}\nI &= S_0 \\\\\np &= \\frac{\\sqrt{S_1^2 + S_2^2 + S_3^2}}{S_0} \\\\\n2\\psi &= \\mathrm{arctan} \\frac{S_2}{S_1}\\\\\n2\\chi &= \\mathrm{arctan} \\frac{S_3}{\\sqrt{S_1^2+S_2^2}}\\\\\n\\end{align} " }, { "math_id": 11, "text": "\n\\vec S \\ =\n\\begin{pmatrix} S_0 \\\\ S_1 \\\\ S_2 \\\\ S_3\\end{pmatrix}\n=\n\\begin{pmatrix} I \\\\ Q \\\\ U \\\\ V\\end{pmatrix}\n" }, { "math_id": 12, "text": "V" }, { "math_id": 13, "text": "\\vec{k}" }, { "math_id": 14, "text": "E_1" }, { "math_id": 15, "text": "E_2" }, { "math_id": 16, "text": "(\\hat{\\epsilon}_1,\\hat{\\epsilon}_2)" }, { "math_id": 17, "text": "(E_1, E_2)" }, { "math_id": 18, "text": "\\phi" }, { "math_id": 19, "text": "\\Psi" }, { "math_id": 20, "text": "Q" }, { "math_id": 21, "text": "U" }, { "math_id": 22, "text": " \\begin{align}\nI & \\equiv \\langle E_x^{2} \\rangle + \\langle E_y^{2} \\rangle \\\\\n & = \\langle E_a^{2} \\rangle + \\langle E_b^{2} \\rangle \\\\\n & = \\langle E_r^{2} \\rangle + \\langle E_l^{2} \\rangle, \\\\\nQ & \\equiv \\langle E_x^{2} \\rangle - \\langle E_y^{2} \\rangle, \\\\\nU & \\equiv \\langle E_a^{2} \\rangle - \\langle E_b^{2} \\rangle, \\\\\nV & \\equiv \\langle E_r^{2} \\rangle - \\langle E_l^{2} \\rangle.\n\\end{align} " }, { "math_id": 23, "text": "\\hat{x},\\hat{y}" }, { "math_id": 24, "text": "\\hat{a},\\hat{b}" }, { "math_id": 25, "text": "\\hat{l},\\hat{r}" }, { "math_id": 26, "text": "\\hat{l} = 
(\\hat{x}+i\\hat{y})/\\sqrt{2}" }, { "math_id": 27, "text": "\\hat{r} = (\\hat{x}-i\\hat{y})/\\sqrt{2}" }, { "math_id": 28, "text": " \\begin{matrix}\nI \\equiv |E_x|^{2} + |E_y|^{2} = |E_a|^{2} + |E_b|^{2} = |E_r|^{2} + |E_l|^{2} \\\\\nQ \\equiv |E_x|^{2} - |E_y|^{2}, \\\\\nU \\equiv |E_a|^{2} - |E_b|^{2}, \\\\\nV \\equiv |E_r|^{2} - |E_l|^{2}.\n\\end{matrix} " }, { "math_id": 29, "text": " \\begin{align}\nI&=|E_x|^2+|E_y|^2, \\\\\nQ&=|E_x|^2-|E_y|^2, \\\\\nU&=2\\mathrm{Re}(E_xE_y^*), \\\\\nV&=-2\\mathrm{Im}(E_xE_y^*), \\\\\n\\end{align}\n" }, { "math_id": 30, "text": "(\\hat{a},\\hat{b})" }, { "math_id": 31, "text": " \\begin{align}\nI&=|E_a|^2+|E_b|^2, \\\\\nQ&=-2\\mathrm{Re}(E_a^{*}E_b), \\\\\nU&=|E_a|^{2}-|E_b|^{2}, \\\\\nV&=2\\mathrm{Im}(E_a^{*}E_b). \\\\\n\\end{align}\n" }, { "math_id": 32, "text": "(\\hat{l},\\hat{r})" }, { "math_id": 33, "text": " \\begin{align}\nI &=|E_l|^2+|E_r|^2, \\\\\nQ &=2\\mathrm{Re}(E_l^*E_r), \\\\\nU & = -2\\mathrm{Im}(E_l^*E_r), \\\\\nV & =|E_r|^2-|E_l|^2. \\\\\n\\end{align} " }, { "math_id": 34, "text": "\nQ^2+U^2+V^2 = I^2,\n" }, { "math_id": 35, "text": "\nQ^2+U^2+V^2 \\le I^2.\n" }, { "math_id": 36, "text": "I_p" }, { "math_id": 37, "text": "\nQ^{2} + U^2 +V^2 = I_p^2,\n" }, { "math_id": 38, "text": "I_p/I" }, { "math_id": 39, "text": "\n\\begin{align}\nL & \\equiv |L|e^{i2\\theta} \\\\\n & \\equiv Q +iU. 
\\\\\n\\end{align}\n" }, { "math_id": 40, "text": "\\theta \\rightarrow \\theta+\\theta'" }, { "math_id": 41, "text": "\n\\begin{align}\nL & \\rightarrow e^{i2\\theta'}L, \\\\\nQ & \\rightarrow \\mbox{Re}\\left(e^{i2\\theta'}L\\right), \\\\\nU & \\rightarrow \\mbox{Im}\\left(e^{i2\\theta'}L\\right).\\\\\n\\end{align}\n" }, { "math_id": 42, "text": "\n\\begin{align}\nI & \\ge 0, \\\\\nV & \\in \\mathbb{R}, \\\\\nL & \\in \\mathbb{C}, \\\\\n\\end{align}\n" }, { "math_id": 43, "text": "|V|" }, { "math_id": 44, "text": "|L|" }, { "math_id": 45, "text": "I_p=\\sqrt{|L|^2+|V|^2}" }, { "math_id": 46, "text": "\n\\begin{align}\n\\theta &= \\frac{1}{2}\\arg(L), \\\\\nh &= \\sgn(V). \\\\\n\\end{align}\n" }, { "math_id": 47, "text": "Q=\\mbox{Re}(L)" }, { "math_id": 48, "text": "U=\\mbox{Im}(L)" }, { "math_id": 49, "text": "\n\\begin{align}\n|L| &= \\sqrt{Q^2+U^2}, \\\\\n\\theta &= \\frac{1}{2}\\tan^{-1}(U/Q). \\\\\n\\end{align}\n" }, { "math_id": 50, "text": "\n\\begin{align}\nI_p & = A^2 + B^2, \\\\\nQ & = (A^2-B^2)\\cos(2\\theta), \\\\\nU & = (A^2-B^2)\\sin(2\\theta), \\\\\nV & = 2ABh. \\\\\n\\end{align}\n" }, { "math_id": 51, "text": "\n\\begin{align}\nA & = \\sqrt{\\frac{1}{2}(I_p+|L|)} \\\\\nB & = \\sqrt{\\frac{1}{2}(I_p-|L|)} \\\\\n\\theta & = \\frac{1}{2}\\arg(L)\\\\\nh & = \\sgn(V). 
\\\\\n\\end{align}\n" }, { "math_id": 52, "text": "\n\\begin{align}\nI_l(0) &= \\frac12 (I + Q)\\\\\nI_l(\\frac{\\pi}4) &= \\frac12 (I + U)\\\\\nI_l(\\frac{\\pi}2) &= \\frac12 (I - Q)\\\\\nI_q(\\frac{\\pi}4) &= \\frac12 (I + V),\\\\\n\\end{align}\n" }, { "math_id": 53, "text": "I_l(\\theta)" }, { "math_id": 54, "text": "\\theta" }, { "math_id": 55, "text": "I_q(\\theta)" }, { "math_id": 56, "text": "|H\\rangle" }, { "math_id": 57, "text": "|V\\rangle" }, { "math_id": 58, "text": "|+\\rangle =\\frac{1}{\\sqrt2}(|H\\rangle+|V\\rangle) " }, { "math_id": 59, "text": "|-\\rangle =\\frac{1}{\\sqrt2}(|H\\rangle-|V\\rangle) " }, { "math_id": 60, "text": "|L\\rangle =\\frac{1}{\\sqrt2}(|H\\rangle+i|V\\rangle) " }, { "math_id": 61, "text": "|R\\rangle =\\frac{1}{\\sqrt2}(|H\\rangle-i|V\\rangle) " }, { "math_id": 62, "text": "a_x" }, { "math_id": 63, "text": "a_y" }, { "math_id": 64, "text": "a_z" }, { "math_id": 65, "text": "U/I=tr\\left(\\rho \\sigma_x \\right)" }, { "math_id": 66, "text": "V/I=tr\\left(\\rho \\sigma_y \\right)" }, { "math_id": 67, "text": "Q/I=tr\\left(\\rho \\sigma_z \\right)" }, { "math_id": 68, "text": "\\rho" }, { "math_id": 69, "text": "|\\theta\\rangle =\\cos{\\theta}|H\\rangle+\\sin{\\theta}|V\\rangle " }, { "math_id": 70, "text": "\\rho = \\frac{1}{2}\\left(I + a_x \\sigma_x + a_y \\sigma_y + a_z \\sigma_z\\right)" }, { "math_id": 71, "text": "tr(\\rho|\\theta\\rangle\\langle\\theta|) \n= \\frac{1}{2}\\left(1 + a_x \\sin{2\\theta} + a_z \\cos{2\\theta}\\right) " }, { "math_id": 72, "text": " \\frac{1}{2} (1+ \\sqrt{ a_x^2 + a_z^2 }) " }, { "math_id": 73, "text": "\\theta_0 = \\frac{1}{2}\\arctan{ (a_x/a_z) } " }, { "math_id": 74, "text": "a_z > 0" }, { "math_id": 75, "text": "\\theta_0 = \\frac{1}{2}\\arctan{ (a_x/a_z) }+\\frac{\\pi}{2} " }, { "math_id": 76, "text": " a_z < 0" }, { "math_id": 77, "text": " \\frac{1}{2} ( 1- \\sqrt{ a_x^2 + a_z^2 }) " }, { "math_id": 78, "text": "ER = (1 + DOLP) / (1 - DOLP) " }, { "math_id": 79, "text": "DOLP = 
\\sqrt{ a_x^2 + a_z^2 } " }, { "math_id": 80, "text": "A\\cos^2{(\\theta- \\theta_0)} + B " }, { "math_id": 81, "text": " A, B " }, { "math_id": 82, "text": "ER = (A+B)/B " }, { "math_id": 83, "text": "a_x=DOLP\\sin{2\\theta_0}, \\, a_z=DOLP\\cos{ 2\\theta_0}, \\, DOLP=(ER-1)/(ER+1) " }, { "math_id": 84, "text": "R_y (2\\theta) = \n\\begin{bmatrix}\n \\cos \\theta & -\\sin \\theta \\\\\n \\sin \\theta & \\cos \\theta \n \\end{bmatrix}" }, { "math_id": 85, "text": "R_z (\\pi /2)= \n\\begin{bmatrix} \n e^{ -i\\pi/4 } & 0 \\\\\n 0 & e^{ +i\\pi/4 }\n \\end{bmatrix}" }, { "math_id": 86, "text": "(-a_y,a_x,a_z)" }, { "math_id": 87, "text": "R_n (\\pi/2)=\\frac{1}{\\sqrt2}I-i\\frac{1}{\\sqrt2} (\\hat{n} \\cdot \\vec{\\sigma} ) " }, { "math_id": 88, "text": "\\hat{n}=\\hat{z}\\cos{2\\theta}+\\hat{x} \\sin{2\\theta}" }, { "math_id": 89, "text": "\\sigma_z" }, { "math_id": 90, "text": "\\begin{align}\nT&= tr[R_n(\\pi/2) \\rho R_n (- \\pi/2)|H\\rangle\\langle H|] \\\\\n\n&= \\frac{1}{2}\\left[ 1 + a_y \\sin{2\\theta} + (\\hat{n}\\cdot \\vec{a}) \\cos{2\\theta}\\right] \\\\\n&= \\frac{1}{2}\\left[ 1 + a_y \\sin{2\\theta} + (a_x \\sin{2\\theta} + a_z \\cos{2\\theta}) \\cos{2\\theta}\\right] \\\\\n&= \\frac{1}{2}\\left( 1 + a_y \\sin{2\\theta} +DOLP\\times \\frac{\\cos{(4\\theta-2\\theta_0) }+\\cos{(2\\theta_0) }}{2 }\\right)\n\\end{align} " }, { "math_id": 91, "text": "a_z =-1" }, { "math_id": 92, "text": "\\theta_0=\\frac{\\pi}{2}" }, { "math_id": 93, "text": "T=\\frac{1- \n \\cos{(4\\theta)}}{4} " }, { "math_id": 94, "text": "R_n (\\pi)=-i(\\hat{n} \\cdot \\vec{\\sigma} ) " }, { "math_id": 95, "text": "\\begin{align}\nR_n(\\pi) \\rho R_n (-\\pi) &= \\frac{1}{2}\\left(I+\\vec{a}\\cdot[-\\vec{\\sigma}+2\\hat{n} (\\hat{n}\\cdot\\vec{\\sigma} )]\\right) \\\\\n&= \\frac{1}{2}\\left[I- \\vec{a} \\cdot \\vec{\\sigma}+2(\\hat{n}\\cdot\\vec{a} ) (\\hat{n}\\cdot\\vec{\\sigma} )\\right]\n\\end{align} " }, { "math_id": 96, "text": "a_y= 0 " }, { "math_id": 97, "text": "\\sigma_y " } ]
https://en.wikipedia.org/wiki?curid=1515472
1515511
Hopf link
Simplest nontrivial link In mathematical knot theory, the Hopf link is the simplest nontrivial link with more than one component. It consists of two circles linked together exactly once, and is named after Heinz Hopf. Geometric realization. A concrete model consists of two unit circles in perpendicular planes, each passing through the center of the other. This model minimizes the ropelength of the link, and until 2002 the Hopf link was the only link whose ropelength was known. The convex hull of these two circles forms a shape called an oloid. Properties. Depending on the relative orientations of the two components, the linking number of the Hopf link is ±1. The Hopf link is a (2,2)-torus link with the braid word formula_0 The knot complement of the Hopf link is R × "S"1 × "S"1, the cylinder over a torus. This space has a locally Euclidean geometry, so the Hopf link is not a hyperbolic link. The knot group of the Hopf link (the fundamental group of its complement) is Z2 (the free abelian group on two generators), distinguishing it from an unlinked pair of loops, which has the free group on two generators as its group. The Hopf link is not tricolorable: it is not possible to color the strands of its diagram with three colors so that at least two of the colors are used and so that every crossing has one or three colors present. Each component has only one strand, and if both strands are given the same color then only one color is used, while if they are given different colors then the crossings will have two colors present. Hopf bundle. The Hopf fibration is a continuous function from the 3-sphere (a three-dimensional surface in four-dimensional Euclidean space) into the more familiar 2-sphere, with the property that the inverse image of each point on the 2-sphere is a circle. Thus, these images decompose the 3-sphere into a continuous family of circles, and each two distinct circles form a Hopf link. 
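The linking number ±1 of the two-perpendicular-circles model above can be verified numerically with the Gauss linking integral (a sketch, not part of the source; the sign depends on the chosen orientations):

```python
import math

def gauss_linking_number(c1, c2, n=200):
    """Trapezoidal evaluation of the Gauss linking integral
    Lk = (1/4π) ∮∮ (r1 - r2)·(r1' × r2') / |r1 - r2|³ ds dt,
    highly accurate here because the integrand is smooth and
    doubly periodic."""
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        r1, d1 = c1(i * h)
        for j in range(n):
            r2, d2 = c2(j * h)
            dx = [r1[k] - r2[k] for k in range(3)]
            cross = [d1[1] * d2[2] - d1[2] * d2[1],
                     d1[2] * d2[0] - d1[0] * d2[2],
                     d1[0] * d2[1] - d1[1] * d2[0]]
            dist3 = sum(v * v for v in dx) ** 1.5
            total += sum(dx[k] * cross[k] for k in range(3)) / dist3
    return total * h * h / (4.0 * math.pi)

# Two unit circles in perpendicular planes, each passing through the
# other's center -- the ropelength-minimizing model described above.
# Each curve returns (position, tangent) at the given parameter.
circle1 = lambda s: ([math.cos(s), math.sin(s), 0.0],
                     [-math.sin(s), math.cos(s), 0.0])
circle2 = lambda t: ([1.0 + math.cos(t), 0.0, math.sin(t)],
                     [-math.sin(t), 0.0, math.cos(t)])

lk = gauss_linking_number(circle1, circle2)
print(round(abs(lk), 6))   # 1.0 -- the sign of lk depends on orientations
```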
This was Hopf's motivation for studying the Hopf link: because each two fibers are linked, the Hopf fibration is a nontrivial fibration. This example began the study of homotopy groups of spheres. Biology. The Hopf link is also present in some proteins. It consists of two covalent loops, formed by pieces of protein backbone, closed with disulfide bonds. The Hopf link topology is highly conserved in proteins and adds to their stability. History. The Hopf link is named after topologist Heinz Hopf, who considered it in 1931 as part of his research on the Hopf fibration. However, in mathematics, it was known to Carl Friedrich Gauss before the work of Hopf. It has also long been used outside mathematics, for instance as the crest of Buzan-ha, a Japanese Buddhist sect founded in the 16th century. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sigma_1^2.\\," } ]
https://en.wikipedia.org/wiki?curid=1515511
151569
Toeplitz matrix
Matrix with shifting rows In linear algebra, a Toeplitz matrix or diagonal-constant matrix, named after Otto Toeplitz, is a matrix in which each descending diagonal from left to right is constant. For instance, the following matrix is a Toeplitz matrix: formula_0 Any formula_1 matrix formula_2 of the form formula_3 is a Toeplitz matrix. If the formula_4 element of formula_2 is denoted formula_5, then we have formula_6 A Toeplitz matrix is not necessarily square. Solving a Toeplitz system. A matrix equation of the form formula_7 is called a Toeplitz system if formula_2 is a Toeplitz matrix. If formula_2 is an formula_1 Toeplitz matrix, then the system has at most formula_8 unique values, rather than formula_9. We might therefore expect that the solution of a Toeplitz system would be easier, and indeed that is the case. Toeplitz systems can be solved by algorithms such as the Schur algorithm or the Levinson algorithm in formula_10 time. Variants of the latter have been shown to be weakly stable (i.e. they exhibit numerical stability for well-conditioned linear systems). The algorithms can also be used to find the determinant of a Toeplitz matrix in formula_10 time. A Toeplitz matrix can also be decomposed (i.e. factored) in formula_10 time. The Bareiss algorithm for an LU decomposition is stable. An LU decomposition gives a quick method for solving a Toeplitz system, and also for computing the determinant. A symmetric Toeplitz matrix admits the decomposition formula_15 where formula_16 is the lower triangular part of formula_17. The inverse of a nonsingular symmetric Toeplitz matrix can be represented as formula_18 where formula_19 and formula_20 are lower triangular Toeplitz matrices and formula_20 is a strictly lower triangular matrix. Discrete convolution. The convolution operation can be constructed as a matrix multiplication, where one of the inputs is converted into a Toeplitz matrix. For example, the convolution of formula_21 and formula_22 can be formulated as: formula_23 formula_24 This approach can be extended to compute autocorrelation, cross-correlation, moving average, etc. 
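The Toeplitz-matrix form of full convolution shown above can be checked against direct convolution in a short sketch (illustrative helper names, plain Python lists rather than an optimized library):

```python
def convolve_toeplitz(h, x):
    """Full convolution y = h * x as multiplication by the
    (m+n-1) x n Toeplitz matrix whose first column is h padded with
    zeros, matching the construction in the text."""
    m, n = len(h), len(x)
    # Diagonal-constant: T[i][j] = h[i-j] when 0 <= i-j < m, else 0.
    T = [[h[i - j] if 0 <= i - j < m else 0 for j in range(n)]
         for i in range(m + n - 1)]
    return [sum(T[i][j] * x[j] for j in range(n)) for i in range(m + n - 1)]

def convolve_direct(h, x):
    """Reference O(mn) convolution for comparison."""
    y = [0] * (len(h) + len(x) - 1)
    for i, hi in enumerate(h):
        for j, xj in enumerate(x):
            y[i + j] += hi * xj
    return y

h, x = [1, 2, 3], [4, 5, 6, 7]
print(convolve_toeplitz(h, x))   # [4, 13, 28, 34, 32, 21]
assert convolve_toeplitz(h, x) == convolve_direct(h, x)
```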
Infinite Toeplitz matrix. A bi-infinite Toeplitz matrix (i.e. entries indexed by formula_25) formula_2 induces a linear operator on formula_26. formula_27 The induced operator is bounded if and only if the coefficients of the Toeplitz matrix formula_2 are the Fourier coefficients of some essentially bounded function formula_28. In such cases, formula_28 is called the symbol of the Toeplitz matrix formula_2, and the spectral norm of the Toeplitz matrix formula_2 coincides with the formula_29 norm of its symbol. The proof is easy to establish and can be found as Theorem 1.1 in the references below. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\qquad\\begin{bmatrix}\na & b & c & d & e \\\\\nf & a & b & c & d \\\\\ng & f & a & b & c \\\\\nh & g & f & a & b \\\\\ni & h & g & f & a \n\\end{bmatrix}." }, { "math_id": 1, "text": "n \\times n" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "A = \\begin{bmatrix}\n a_0 & a_{-1} & a_{-2} & \\cdots & \\cdots & a_{-(n-1)} \\\\\n a_1 & a_0 & a_{-1} & \\ddots & & \\vdots \\\\\n a_2 & a_1 & \\ddots & \\ddots & \\ddots & \\vdots \\\\ \n \\vdots & \\ddots & \\ddots & \\ddots & a_{-1} & a_{-2} \\\\\n \\vdots & & \\ddots & a_1 & a_0 & a_{-1} \\\\\na_{n-1} & \\cdots & \\cdots & a_2 & a_1 & a_0\n\\end{bmatrix}" }, { "math_id": 4, "text": "i,j" }, { "math_id": 5, "text": "A_{i,j}" }, { "math_id": 6, "text": "A_{i,j} = A_{i+1,j+1} = a_{i-j}." }, { "math_id": 7, "text": "Ax = b" }, { "math_id": 8, "text": "2n-1" }, { "math_id": 9, "text": "n^2" }, { "math_id": 10, "text": "O(n^2)" }, { "math_id": 11, "text": "n\\times n" }, { "math_id": 12, "text": "A_{i,j}=c_{i-j}" }, { "math_id": 13, "text": "c_{1-n},\\ldots,c_{n-1}" }, { "math_id": 14, "text": "O(n)" }, { "math_id": 15, "text": "\\frac{1}{a_0} A = G G^\\operatorname{T} - (G - I)(G - I)^\\operatorname{T}" }, { "math_id": 16, "text": "G" }, { "math_id": 17, "text": "\\frac{1}{a_0} A" }, { "math_id": 18, "text": "A^{-1} = \\frac{1}{\\alpha_0} (B B^\\operatorname{T} - C C^\\operatorname{T})" }, { "math_id": 19, "text": "B" }, { "math_id": 20, "text": "C" }, { "math_id": 21, "text": " h " }, { "math_id": 22, "text": " x " }, { "math_id": 23, "text": "\n y = h \\ast x =\n \\begin{bmatrix}\n h_1 & 0 & \\cdots & 0 & 0 \\\\\n h_2 & h_1 & & \\vdots & \\vdots \\\\\n h_3 & h_2 & \\cdots & 0 & 0 \\\\\n \\vdots & h_3 & \\cdots & h_1 & 0 \\\\\n h_{m-1} & \\vdots & \\ddots & h_2 & h_1 \\\\\n h_m & h_{m-1} & & \\vdots & h_2 \\\\\n 0 & h_m & \\ddots & h_{m-2} & \\vdots \\\\\n 0 & 0 & \\cdots & h_{m-1} & h_{m-2} \\\\\n \\vdots & \\vdots & & h_m & h_{m-1} \\\\\n 0 & 0 & 0 & \\cdots & h_m\n 
\\end{bmatrix}\n \\begin{bmatrix}\n x_1 \\\\\n x_2 \\\\\n x_3 \\\\\n \\vdots \\\\\n x_n\n \\end{bmatrix}\n" }, { "math_id": 24, "text": " y^T =\n \\begin{bmatrix}\n h_1 & h_2 & h_3 & \\cdots & h_{m-1} & h_m\n \\end{bmatrix}\n \\begin{bmatrix}\n x_1 & x_2 & x_3 & \\cdots & x_n & 0 & 0 & 0& \\cdots & 0 \\\\\n 0 & x_1 & x_2 & x_3 & \\cdots & x_n & 0 & 0 & \\cdots & 0 \\\\\n 0 & 0 & x_1 & x_2 & x_3 & \\ldots & x_n & 0 & \\cdots & 0 \\\\\n \\vdots & & \\vdots & \\vdots & \\vdots & & \\vdots & \\vdots & & \\vdots \\\\\n 0 & \\cdots & 0 & 0 & x_1 & \\cdots & x_{n-2} & x_{n-1} & x_n & 0 \\\\\n 0 & \\cdots & 0 & 0 & 0 & x_1 & \\cdots & x_{n-2} & x_{n-1} & x_n\n \\end{bmatrix}.\n" }, { "math_id": 25, "text": "\\mathbb Z\\times\\mathbb Z" }, { "math_id": 26, "text": "\\ell^2" }, { "math_id": 27, "text": "\nA=\\begin{bmatrix}\n & \\vdots & \\vdots & \\vdots & \\vdots \\\\\n\\cdots & a_0 & a_{-1} & a_{-2} & a_{-3} & \\cdots \\\\\n\\cdots & a_1 & a_0 & a_{-1} & a_{-2} & \\cdots \\\\\n\\cdots & a_2 & a_1 & a_0 & a_{-1} & \\cdots \\\\\n\\cdots & a_3 & a_2 & a_1 & a_0 & \\cdots \\\\\n & \\vdots & \\vdots & \\vdots & \\vdots\n\\end{bmatrix}.\n \n" }, { "math_id": 28, "text": "f" }, { "math_id": 29, "text": "L^\\infty" }, { "math_id": 30, "text": "a_i=a_{i+n}" } ]
https://en.wikipedia.org/wiki?curid=151569
151577
Causality (physics)
Physics of the cause–effect relation Causality is the relationship between causes and effects. While causality is also a topic studied from the perspectives of philosophy and physics, it is operationalized so that causes of an event must be in the past light cone of the event and ultimately reducible to fundamental interactions. Similarly, a cause cannot have an effect outside its future light cone. Macroscopic vs microscopic causality. Causality can be defined macroscopically, at the level of human observers, or microscopically, for fundamental events at the atomic level. The strong causality principle forbids information transfer faster than the speed of light; the weak causality principle operates at the microscopic level and need not lead to information transfer. Physical models can obey the weak principle without obeying the strong version. Macroscopic causality. In classical physics, an effect cannot occur "before" its cause which is why solutions such as the advanced time solutions of the Liénard–Wiechert potential are discarded as physically meaningless. In both Einstein's theory of special and general relativity, causality means that an effect cannot occur from a cause that is not in the back (past) light cone of that event. Similarly, a cause cannot have an effect outside its front (future) light cone. These restrictions are consistent with the constraint that mass and energy that act as causal influences cannot travel faster than the speed of light and/or backwards in time. In quantum field theory, observables of events with a spacelike relationship, "elsewhere", have to commute, so the order of observations or measurements of such observables do not impact each other. Another requirement of causality is that cause and effect be mediated across space and time (requirement of "contiguity"). 
This requirement has been very influential in the past, in the first place as a result of direct observation of causal processes (like pushing a cart), in the second place as a problematic aspect of Newton's theory of gravitation (attraction of the earth by the sun by means of action at a distance) replacing mechanistic proposals like Descartes' vortex theory; in the third place as an incentive to develop dynamic field theories (e.g., Maxwell's electrodynamics and Einstein's general theory of relativity) restoring contiguity in the transmission of influences in a more successful way than in Descartes' theory. Simultaneity. In modern physics, the notion of causality had to be clarified. The word "simultaneous" is observer-dependent in special relativity. The principle is relativity of simultaneity. Consequently, the relativistic principle of causality says that the cause must precede its effect "according to all inertial observers". This is equivalent to the statement that the cause and its effect are separated by a timelike interval, and the effect belongs to the future of its cause. If a timelike interval separates the two events, this means that a signal could be sent between them at less than the speed of light. On the other hand, if signals could move faster than the speed of light, this would violate causality because it would allow a signal to be sent across spacelike intervals, which means that at least to some inertial observers the signal would travel "backward in time". For this reason, special relativity does not allow communication faster than the speed of light. In the theory of general relativity, the concept of causality is generalized in the most straightforward way: the effect must belong to the future light cone of its cause, even if the spacetime is curved. New subtleties must be taken into account when we investigate causality in quantum mechanics and relativistic quantum field theory in particular. 
In those two theories, causality is closely related to the principle of locality. Bell's theorem shows that the conditions of "local causality" cannot account for the non-classical correlations predicted by quantum mechanics and observed in experiments involving quantum entanglement. Despite these subtleties, causality remains an important and valid concept in physical theories. For example, the notion that events can be ordered into causes and effects is necessary to prevent (or at least outline) causality paradoxes such as the grandfather paradox, which asks what happens if a time-traveler kills his own grandfather before he ever meets the time-traveler's grandmother. See also Chronology protection conjecture. Determinism (or, what causality is "not"). The word "causality" in this context means that all effects must have specific physical causes due to fundamental interactions. Causality in this context is not associated with definitional principles such as Newton's second law. As such, in the context of "causality," a force does not "cause" a mass to accelerate nor vice versa. Rather, Newton's second law can be derived from the conservation of momentum, which itself is a consequence of the spatial homogeneity of physical laws. The empiricists' aversion to metaphysical explanations (like Descartes' vortex theory) meant that scholastic arguments about what caused phenomena were either rejected for being untestable or were just ignored. The complaint that physics does not explain the "cause" of phenomena has accordingly been dismissed as a problem that is philosophical or metaphysical rather than empirical (e.g., Newton's "Hypotheses non fingo"). According to Ernst Mach, the notion of force in Newton's second law was pleonastic, tautological and superfluous and, as indicated above, is not considered a consequence of any principle of causality.
Indeed, it is possible to consider the Newtonian equations of motion of the gravitational interaction of two bodies, formula_0 as two coupled equations describing the positions formula_1 and formula_2 of the two bodies, "without interpreting the right hand sides of these equations as forces"; the equations just describe a process of interaction, without any necessity to interpret one body as the cause of the motion of the other, and allow one to predict the states of the system at later (as well as earlier) times. The ordinary situations in which humans singled out some factors in a physical interaction as being prior and therefore supplying the "because" of the interaction were often ones in which humans decided to bring about some state of affairs and directed their energies to producing that state of affairs, a process that took time to establish and left a new state of affairs that persisted beyond the time of activity of the actor. It would be difficult and pointless, however, to explain the motions of binary stars with respect to each other in that way; those motions are, indeed, time-reversible and agnostic to the arrow of time, but once such a direction of time is established, the entire evolution of the system is completely determined. The possibility of such a time-independent view is at the basis of the deductive-nomological (D-N) view of scientific explanation, which considers an event to be explained if it can be subsumed under a scientific law. In the D-N view, a physical state is considered to be explained if, applying the (deterministic) law, it can be derived from given initial conditions. (Such initial conditions could include the momenta and distance from each other of binary stars at any given moment.) Such 'explanation by determinism' is sometimes referred to as causal determinism. A disadvantage of the D-N view is that causality and determinism are more or less identified.
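The point that the two-body equations above simply describe an interaction, predict earlier as well as later states, and are time-reversible can be demonstrated numerically (a schematic sketch in arbitrary units with G = 1; the velocity-Verlet integrator is chosen because it is itself time-reversible up to round-off):

```python
import numpy as np

G = 1.0  # gravitational constant in arbitrary units

def accelerations(r1, r2, m1, m2):
    """Right-hand sides of the coupled two-body equations above."""
    d = r1 - r2
    f = G * d / np.linalg.norm(d) ** 3
    return -m2 * f, m1 * f

def verlet(r1, r2, v1, v2, m1, m2, dt, steps):
    """Velocity-Verlet integration, itself time-reversible up to round-off."""
    a1, a2 = accelerations(r1, r2, m1, m2)
    for _ in range(steps):
        r1 = r1 + v1 * dt + 0.5 * a1 * dt ** 2
        r2 = r2 + v2 * dt + 0.5 * a2 * dt ** 2
        b1, b2 = accelerations(r1, r2, m1, m2)
        v1 = v1 + 0.5 * (a1 + b1) * dt
        v2 = v2 + 0.5 * (a2 + b2) * dt
        a1, a2 = b1, b2
    return r1, r2, v1, v2

# Two equal masses on a bound elliptical orbit.
m1 = m2 = 1.0
r1, r2 = np.array([0.5, 0.0]), np.array([-0.5, 0.0])
v1, v2 = np.array([0.0, 0.5]), np.array([0.0, -0.5])

# Integrate forward, then flip the velocities and integrate forward again:
# the system retraces its own history back to the initial positions.
R1, R2, V1, V2 = verlet(r1, r2, v1, v2, m1, m2, dt=0.01, steps=1000)
B1, B2, _, _ = verlet(R1, R2, -V1, -V2, m1, m2, dt=0.01, steps=1000)
assert np.allclose(B1, r1, atol=1e-5) and np.allclose(B2, r2, atol=1e-5)
```

Nothing in the integration singles out one body as the cause of the other's motion, and the same code run with a negated time step predicts the earlier states of the system.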
Thus, in classical physics, it was assumed that all events are caused by earlier ones according to the known laws of nature, culminating in Pierre-Simon Laplace's claim that if the current state of the world were known with precision, it could be computed for any time in the future or the past (see Laplace's demon). However, this is usually referred to as Laplace "determinism" (rather than 'Laplace causality') because it hinges on determinism in mathematical models as dealt with in the mathematical Cauchy problem. Confusion between causality and determinism is particularly acute in quantum mechanics, this theory being acausal in the sense that it is unable in many cases to identify the causes of actually observed effects or to predict the effects of identical causes, but arguably deterministic in some interpretations (e.g. if the wave function is presumed not to actually collapse as in the many-worlds interpretation, or if its collapse is due to hidden variables, or simply redefining determinism as meaning that probabilities rather than specific effects are determined). Distributed causality. Theories in physics like the butterfly effect from chaos theory open up the possibility of a type of distributed parameter systems in causality. The butterfly effect theory proposes: "Small variations of the initial condition of a nonlinear dynamical system may produce large variations in the long term behavior of the system." This opens up the opportunity to understand a distributed causality. A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. In classical (Newtonian) physics, in general, only those conditions are (explicitly) taken into account, that are both necessary and sufficient. 
For instance, when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, then its velocity is assumed to be caused by the force of gravity accelerating it; the small push that was needed to set it into motion is not explicitly dealt with as a cause. In order to be a physical cause there must be a certain proportionality with the ensuing effect. A distinction is drawn between triggering and causation of the ball's motion. By the same token the butterfly can be seen as triggering a tornado, its cause being assumed to be seated in the atmospherical energies already present beforehand, rather than in the movements of a butterfly. Causal sets. In causal set theory, causality takes an even more prominent place. The basis for this approach to quantum gravity is in a theorem by David Malament. This theorem states that the causal structure of a spacetime suffices to reconstruct its conformal class, so knowing the conformal factor and the causal structure is enough to know the spacetime. Based on this, Rafael Sorkin proposed the idea of Causal Set Theory, which is a fundamentally discrete approach to quantum gravity. The causal structure of the spacetime is represented as a poset, while the conformal factor can be reconstructed by identifying each poset element with a unit volume. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
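The sensitive dependence on initial conditions invoked in the butterfly-effect discussion above can be shown with a standard chaotic toy model, the logistic map at r = 4 (an illustrative example, not from the article):

```python
# Logistic map x -> r*x*(1-x) with r = 4: a textbook chaotic system.
def orbit(x, steps, r=4.0):
    """Iterate the map `steps` times from initial condition x."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a, b = 0.3, 0.3 + 1e-12         # two nearly identical initial conditions
assert abs(a - b) < 1e-11

# Within a few dozen iterations the tiny difference is amplified to
# order one: the two "weather histories" have completely decorrelated.
divergence = max(abs(orbit(a, n) - orbit(b, n)) for n in range(50, 80))
assert divergence > 0.1
```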
[ { "math_id": 0, "text": " m_1 \\frac{d^2 {\\mathbf r}_1 }{ dt^2} = -\\frac{m_1 m_2 G ({\\mathbf r}_1 - {\\mathbf r}_2)}{ |{\\mathbf r}_1 - {\\mathbf r}_2|^3};\\; m_2 \\frac{d^2 {\\mathbf r}_2 }{dt^2} = -\\frac{m_1 m_2 G ({\\mathbf r}_2 - {\\mathbf r}_1) }{ |{\\mathbf r}_2 - {\\mathbf r}_1|^3}, " }, { "math_id": 1, "text": " \\scriptstyle {\\mathbf r}_1(t) " }, { "math_id": 2, "text": " \\scriptstyle {\\mathbf r}_2(t) " } ]
https://en.wikipedia.org/wiki?curid=151577
151584
John Smeaton
British engineer John Smeaton (8 June 1724 – 28 October 1792) was an English civil engineer responsible for the design of bridges, canals, harbours and lighthouses. He was also a capable mechanical engineer and an eminent physicist. Smeaton was the first self-proclaimed "civil engineer", and is often regarded as the "father of civil engineering". He pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate. Smeaton was associated with the Lunar Society. Law and physics. Smeaton was born in Austhorpe, Leeds, England. After studying at Leeds Grammar School he joined his father's law firm, but left to become a mathematical instrument maker (working with Henry Hindley), developing, among other instruments, a pyrometer to study material expansion. In 1750, his premises were in the Great Turnstile in Holborn. He was elected a Fellow of the Royal Society in 1753 and in 1759 won the Copley Medal for his research into the mechanics of waterwheels and windmills. His 1759 paper "An Experimental Enquiry Concerning the Natural Powers of Water and Wind to Turn Mills and Other Machines Depending on Circular Motion" (Phil. Trans. 1759, 51:100–174; doi:10.1098/rstl.1759.0019) addressed the relationship between pressure and velocity for objects moving in air (Smeaton noted that the table doing so was actually contributed by "my friend Mr Rouse", "an ingenious gentleman of Harborough, Leicestershire", and calculated on the basis of Rouse's experiments), and his concepts were subsequently developed to devise the 'Smeaton Coefficient'. Smeaton's water wheel experiments were conducted on a small-scale model with which he tested various configurations over a period of seven years. The resultant increase in efficiency in water power contributed to the Industrial Revolution.
Over the period 1759–1782 he performed a series of further experiments and measurements on water wheels that led him to support and champion the "vis viva" theory of German Gottfried Leibniz, an early formulation of conservation of energy. This led him into conflict with members of the academic establishment who rejected Leibniz's theory, believing it inconsistent with Sir Isaac Newton's conservation of momentum. Smeaton coefficient. In his 1759 paper "An Experimental Enquiry Concerning the Natural Powers of Water and Wind to Turn Mills and Other Machines Depending on Circular Motion" Smeaton developed the concepts and data which became the basis for the "Smeaton coefficient", the lift equation used by the Wright brothers. It has the form: formula_0 where: formula_1 is the lift formula_2 is the Smeaton coefficient (see note below) formula_3 is the velocity formula_4 is the area in square feet formula_5 is the lift coefficient (the lift relative to the drag of a plate of the same area) The Wright brothers determined with wind tunnels that the Smeaton coefficient value of 0.005 was incorrect and should have been 0.0033. In modern analysis, the lift coefficient is normalised by the dynamic pressure instead of the Smeaton coefficient. Civil engineering. Smeaton is important in the history, rediscovery of, and development of modern cement, identifying the compositional requirements needed to obtain "hydraulicity" in lime; work which led ultimately to the invention of Portland cement. Portland cement led to the re-emergence of concrete as a modern building material, largely due to Smeaton's influence. Recommended by the Royal Society, Smeaton designed the third Eddystone Lighthouse (1755–59). He pioneered the use of 'hydraulic lime' (a form of mortar that will set under water) and developed a technique involving dovetailed blocks of granite in the building of the lighthouse. 
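The lift equation from the Smeaton coefficient section above is straightforward to evaluate; the sketch below uses illustrative, non-historical numbers (velocity in miles per hour, area in square feet, lift in pounds, as in the Wrights' usage):

```python
# Smeaton's lift equation L = k * V**2 * A * C_l, with illustrative
# (not historical) inputs: a 1 ft^2 flat surface at 25 mph.
def lift(k, v_mph, area_sqft, c_l):
    return k * v_mph ** 2 * area_sqft * c_l

V, A, CL = 25.0, 1.0, 1.0
L_smeaton = lift(0.005, V, A, CL)   # traditional Smeaton coefficient
L_wright = lift(0.0033, V, A, CL)   # value the Wrights later measured

# The corrected coefficient predicts only 66% of the traditional lift,
# one reason early gliders built from the old tables underperformed.
assert abs(L_wright / L_smeaton - 0.66) < 1e-9
```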
His lighthouse remained in use until 1877 when the rock underlying the structure's foundations had begun to erode; it was dismantled and partially rebuilt at Plymouth Hoe where it is known as Smeaton's Tower. In 2020 a Cornish granite bust of Smeaton by Philip Chatfield, commissioned by The Box, Plymouth and funded by Trinity House, was installed in the tower's lantern chamber before its reopening. The bust is based on a plaster one donated by the Institution of Civil Engineers in about 1980, but later removed for safety reasons. Deciding that he wanted to focus on the lucrative field of civil engineering, he commenced an extensive series of commissions, including: Smeaton is considered to be the first expert witness to appear in an English court. Because of his expertise in engineering, he was called to testify in court for a case related to the silting-up of the harbour at Wells-next-the-Sea in Norfolk in 1782. He also acted as a consultant on the disastrous 63-year-long New Harbour at Rye, designed to combat the silting of the port of Winchelsea. The project is now known informally as "Smeaton's Harbour", but despite the name his involvement was limited and occurred more than 30 years after work on the harbour commenced. It closed in 1839. Mechanical engineer. Employing his skills as a mechanical engineer, he devised a water engine for the Royal Botanic Gardens at Kew in 1761 and a watermill at Alston, Cumbria in 1767 (he is credited by some with inventing the cast-iron axle shaft for water wheels). In 1782 he built the Chimney Mill at Spital Tongues in Newcastle upon Tyne, the first 5-sailed smock mill in Britain. He also improved Thomas Newcomen's atmospheric engine, erecting one at Chacewater mine, Wheal Busy, in Cornwall in 1775 which was both highly efficient and the most powerful at the time. In 1789 Smeaton applied an idea by Denis Papin, by using a force pump to maintain the pressure and fresh air inside a diving bell. 
This bell, built for the Hexham Bridge project, was not intended for underwater work, but in 1790 the design was updated to enable it to be used underwater on the breakwater at Ramsgate Harbour. Smeaton is also credited with explaining the fundamental differences and benefits of overshot versus undershot water wheels. Smeaton experimented with the Newcomen steam engine and made marked improvements around the time James Watt was building his first engines (c. late 1770s). Legacy. Smeaton died after suffering a stroke while walking in the garden of his family home at Austhorpe, and was buried in the parish church at Whitkirk, West Yorkshire. His surviving daughters erected a memorial to him and his wife which is on the chancel wall of the church. Due to the decay of the rock beneath the Eddystone Lighthouse the structure needed to be replaced. When the upper section of Smeaton's lighthouse (which included the lantern, store and living and watch room) was about to be removed, it was suggested that some of it be brought to Whitkirk and set up as a memorial to him. Unfortunately, the project was deemed too expensive as it was estimated that it would cost around £1800. He is highly regarded by other engineers, having contributed to the Lunar Society and founded the Society of Civil Engineers in 1771. He coined the term "civil engineers" to distinguish them from military engineers graduating from the Royal Military Academy at Woolwich. The Society was a forerunner of the Institution of Civil Engineers, established in 1818, and was renamed the Smeatonian Society of Civil Engineers in 1830. His pupils included canal engineer William Jessop and architect and engineer Benjamin Latrobe. The pioneering constant of proportionality describing pressure varying inversely as the square of the velocity when applied to objects moving in air was named "Smeaton's coefficient" in his honour. 
Based on his concepts and data, it was used by the Wright brothers in their pursuit of the first successful heavier-than-air aircraft. Between 1860 and 1894 the design of the reverse side of the old penny coin showed (behind Britannia) a depiction of Smeaton’s Eddystone lighthouse. Smeaton is one of six civil engineers depicted in the Stephenson stained glass window, designed by William Wailes and unveiled in Westminster Abbey in 1862. A memorial stone commemorating Smeaton himself was unveiled in the Abbey on 7 November 1994, by Noel Ordman, President of the Smeatonian Society of Civil Engineers. John Smeaton Academy, a secondary school in the suburbs of Leeds adjacent to the Pendas Fields estate near Austhorpe, is named after Smeaton. He is also commemorated at the University of Plymouth, where the Mathematics and Technology Department is housed in a building named after him. A viaduct in the final stage of the Leeds Inner Ring Road, opened in 2008, was named after him. Smeaton is mentioned in the song "I Predict a Riot" (as a symbol of a more dignified and peaceful epoch in Leeds history; and in reference to a Junior School House at Leeds Grammar School, which lead singer Ricky Wilson attended) by the indie rock band Kaiser Chiefs, who are natives of Leeds. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L = k V^2 A C_l \\," }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "V" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "C_l" } ]
https://en.wikipedia.org/wiki?curid=151584
1515898
Thermodynamic equations
Equations in thermodynamics Thermodynamics is expressed by a mathematical framework of "thermodynamic equations" which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates that became the laws of thermodynamics. Introduction. One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot. Carnot used the phrase motive power for work. In the footnotes to his famous "On the Motive Power of Fire", he states: “We use here the expression "motive power" to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.” With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power: formula_0 During the latter half of the 19th century, physicists such as Rudolf Clausius, Peter Guthrie Tait, and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state". The state of a thermodynamic system is specified by a number of extensive quantities, the most familiar of which are volume, internal energy, and the amount of each constituent particle (particle numbers). Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point, such as temperature and pressure. The extensive parameters (except entropy) are generally conserved in some way as long as the system is "insulated" to changes to that parameter from the outside.
The truth of this statement for volume is trivial; for particles, one might say that the total particle number of each atomic element is conserved. In the case of energy, the statement of the conservation of energy is known as the first law of thermodynamics. A thermodynamic system is in equilibrium when it is no longer changing in time. This may happen in a very short time, or it may happen with glacial slowness. A thermodynamic system may be composed of many subsystems which may or may not be "insulated" from each other with respect to the various extensive quantities. If we have a thermodynamic system in equilibrium in which we relax some of its constraints, it will move to a new equilibrium state. The thermodynamic parameters may now be thought of as variables and the state may be thought of as a particular point in a space of thermodynamic parameters. The change in the state of the system can be seen as a path in this state space. This change is called a thermodynamic process. Thermodynamic equations are now used to express the relationships between the state parameters at these different equilibrium states. The concept which governs the path that a thermodynamic system traces in state space as it goes from one equilibrium state to another is that of entropy. The entropy is first viewed as an extensive function of all of the extensive thermodynamic parameters. If we have a thermodynamic system in equilibrium, and we release some of the extensive constraints on the system, there are many equilibrium states that it could move to consistent with the conservation of energy, volume, etc. The second law of thermodynamics specifies that the equilibrium state that it moves to is in fact the one with the greatest entropy. Once we know the entropy as a function of the extensive variables of the system, we will be able to predict the final equilibrium state. Notation.
Some of the most common thermodynamic quantities are: The "conjugate variable pairs" are the fundamental state variables used to formulate the thermodynamic functions. &lt;templatestyles src="Block indent/styles.css"/&gt; The most important thermodynamic potentials are the following functions: &lt;templatestyles src="Block indent/styles.css"/&gt; Thermodynamic systems are typically affected by the following types of system interactions. The types under consideration are used to classify systems as open systems, closed systems, and isolated systems. &lt;templatestyles src="Block indent/styles.css"/&gt; Common material properties determined from the thermodynamic functions are the following: &lt;templatestyles src="Block indent/styles.css"/&gt; The following are constants that occur in many relationships due to the application of a standard system of units. &lt;templatestyles src="Block indent/styles.css"/&gt; Laws of thermodynamics. The behavior of a thermodynamic system is summarized in the laws of thermodynamics, which concisely are: If "A", "B", "C" are thermodynamic systems such that "A" is in thermal equilibrium with "B" and "B" is in thermal equilibrium with "C", then "A" is in thermal equilibrium with "C". The zeroth law is of importance in thermometry, because it implies the existence of temperature scales. In practice, "C" is a thermometer, and the zeroth law says that systems that are in thermodynamic equilibrium with each other have the same temperature. The law was actually the last of the laws to be formulated. formula_1 where formula_2 is the infinitesimal increase in internal energy of the system, formula_3 is the infinitesimal heat flow into the system, and formula_4 is the infinitesimal work done by the system. The first law is the law of conservation of energy.
The symbol formula_5, instead of the plain d, originated in the work of German mathematician Carl Gottfried Neumann and is used to denote an inexact differential and to indicate that "Q" and "W" are path-dependent (i.e., they are not state functions). In some fields such as physical chemistry, positive work is conventionally considered work done on the system rather than by the system, and the law is expressed as formula_6. The entropy of an isolated system never decreases: formula_7 for an isolated system. A concept related to the second law which is important in thermodynamics is that of reversibility. A process within a given isolated system is said to be reversible if throughout the process the entropy never increases (i.e. the entropy remains unchanged). formula_8 when formula_9. The third law of thermodynamics states that at the absolute zero of temperature, the entropy is zero for a perfect crystalline structure. formula_10 formula_11 The fourth law of thermodynamics is not yet an agreed-upon law (many supposed variations exist); historically, however, the Onsager reciprocal relations have been frequently referred to as the fourth law. The fundamental equation. The first and second laws of thermodynamics are the most fundamental equations of thermodynamics. They may be combined into what is known as the fundamental thermodynamic relation, which describes all of the changes of thermodynamic state functions of a system of uniform temperature and pressure. As a simple example, consider a system composed of "k" different types of particles that has the volume as its only external variable. The fundamental thermodynamic relation may then be expressed in terms of the internal energy as: formula_12 Some important aspects of this equation should be noted. Thermodynamic potentials.
By the principle of minimum energy, the second law can be restated by saying that for a fixed entropy, when the constraints on the system are relaxed, the internal energy assumes a minimum value. This will require that the system be connected to its surroundings, since otherwise the energy would remain constant. By the principle of minimum energy, there are a number of other state functions which may be defined which have the dimensions of energy and which are minimized according to the second law under certain conditions other than constant entropy. These are called thermodynamic potentials. For each such potential, the relevant fundamental equation results from the same Second-Law principle that gives rise to energy minimization under restricted conditions: that the total entropy of the system and its environment is maximized in equilibrium. The intensive parameters give the derivatives of the environment entropy with respect to the extensive properties of the system. The four most common thermodynamic potentials are: After each potential is shown its "natural variables". These variables are important because if the thermodynamic potential is expressed in terms of its natural variables, then it will contain all of the thermodynamic relationships necessary to derive any other relationship. In other words, it too will be a fundamental equation. For the above four potentials, the fundamental equations are expressed as: formula_19 formula_20 formula_21 formula_22 The thermodynamic square can be used as a tool to recall and derive these potentials. First order equations. Just as with the internal energy version of the fundamental equation, the chain rule can be used on the above equations to find "k"+2 equations of state with respect to the particular potential. If Φ is a thermodynamic potential, then the fundamental equation may be expressed as: formula_23 where the formula_24 are the natural variables of the potential. 
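These first-order equations of state can be checked numerically for a concrete potential. The sketch below uses a schematic monatomic-ideal-gas internal energy written in its natural variables (natural units with kB = 1, with Planck's constant, the particle mass and all numerical factors absorbed into an arbitrary constant C; the function and units are illustrative choices, not from the article):

```python
import math

def U(S, V, N, C=1.0):
    """Schematic internal energy U(S, V, N) of a monatomic ideal gas
    in natural units (k_B = 1); C absorbs h, m and numerical factors."""
    return C * N ** (5.0 / 3.0) * V ** (-2.0 / 3.0) * math.exp(2.0 * S / (3.0 * N))

S, V, N = 1.0, 1.0, 1.0
h = 1e-6

# Equations of state as first derivatives of the potential:
T = (U(S + h, V, N) - U(S - h, V, N)) / (2 * h)    # T  =  (dU/dS)_{V,N}
P = -(U(S, V + h, N) - U(S, V - h, N)) / (2 * h)   # P  = -(dU/dV)_{S,N}
mu = (U(S, V, N + h) - U(S, V, N - h)) / (2 * h)   # mu =  (dU/dN)_{S,V}

# Together they reproduce the mechanical equation of state P V = N k_B T:
assert abs(P * V - N * T) < 1e-6
# And since U is homogeneous of degree one (extensive), the Euler
# integral U = T S - P V + mu N holds as well:
assert abs(T * S - P * V + mu * N - U(S, V, N)) < 1e-6
```

Because this U is homogeneous of degree one in (S, V, N), the same derivatives also satisfy the Euler integral U = TS - PV + μN, as the final assertion checks.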
If formula_25 is conjugate to formula_24 then we have the equations of state for that potential, one for each set of conjugate variables. formula_26 One equation of state alone will not be sufficient to reconstitute the fundamental equation. All equations of state will be needed to fully characterize the thermodynamic system. Note that what is commonly called "the equation of state" is just the "mechanical" equation of state involving the Helmholtz potential and the volume: formula_27 For an ideal gas, this becomes the familiar "PV"="NkBT". Euler integrals. Because all of the natural variables of the internal energy "U" are extensive quantities, it follows from Euler's homogeneous function theorem that formula_28 Substituting into the expressions for the other main potentials we have the following expressions for the thermodynamic potentials: formula_29 formula_30 formula_31 Note that the Euler integrals are sometimes also referred to as fundamental equations. Gibbs–Duhem relationship. Differentiating the Euler equation for the internal energy and combining with the fundamental equation for internal energy, it follows that: formula_32 which is known as the Gibbs–Duhem relationship. The Gibbs–Duhem relation is a relationship among the intensive parameters of the system. It follows that for a simple system with "r" components, there will be "r"+1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume. The law is named after Willard Gibbs and Pierre Duhem. Second order equations. There are many relationships that follow mathematically from the above basic equations. See Exact differential for a list of mathematical relationships. Many equations are expressed as second derivatives of the thermodynamic potentials (see Bridgman equations). Maxwell relations.
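For a one-component system the Gibbs–Duhem relationship reduces to dμ = −(S/N) dT + (V/N) dp, which can be verified symbolically. The Gibbs free energy below is an assumed monatomic-ideal-gas form, chosen to be extensive (first-order homogeneous) in N so that μ = G/N:

```python
import sympy as sp

T, p, N, k, c = sp.symbols('T p N k c', positive=True)
# Gibbs free energy of a monatomic ideal gas in (T, p, N); extensive in N.
# (illustrative assumed form; the constant c absorbs reference-state terms)
G = N*k*T*(1 - sp.log(k*T/p) - sp.Rational(3, 2)*sp.log(T) - c)

S = -sp.diff(G, T)
V = sp.diff(G, p)
mu = sp.diff(G, N)   # equals G/N, since G is first-order homogeneous in N

# Gibbs–Duhem for one component: d(mu) = -(S/N) dT + (V/N) dp
assert sp.simplify(sp.diff(mu, T) + S/N) == 0
assert sp.simplify(sp.diff(mu, p) - V/N) == 0
```

With two intensive parameters (T, p) fixing μ, the single-component system indeed has r+1 = 2 degrees of freedom.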
Maxwell relations are equalities involving the second derivatives of thermodynamic potentials with respect to their natural variables. They follow directly from the fact that the order of differentiation does not matter when taking the second derivative. The four most common Maxwell relations, obtained from the potentials "U", "H", "F", and "G" respectively, are (∂"T"/∂"V")"S" = −(∂"p"/∂"S")"V", (∂"T"/∂"p")"S" = (∂"V"/∂"S")"p", (∂"S"/∂"V")"T" = (∂"p"/∂"T")"V", and (∂"S"/∂"p")"T" = −(∂"V"/∂"T")"p". The thermodynamic square can be used as a tool to recall and derive these relations. Material properties. Second derivatives of thermodynamic potentials generally describe the response of the system to small changes. The number of second derivatives which are independent of each other is relatively small, which means that most material properties can be described in terms of just a few "standard" properties. For the case of a single component system, there are three properties generally considered "standard" from which all others may be derived: the compressibility formula_33 the specific heat (at constant pressure or constant volume) formula_34 and the coefficient of thermal expansion formula_35 These properties are seen to be the three possible second derivatives of the Gibbs free energy with respect to temperature and pressure. Thermodynamic property relations. Properties such as pressure, volume, temperature, unit cell volume, bulk modulus and mass are easily measured. Other properties are measured through simple relations, such as density, specific volume, specific weight. Properties such as internal energy, entropy, enthalpy, and heat transfer are not so easily measured or determined through simple relations. Thus, we use more complex relations such as the Maxwell relations, the Clapeyron equation, and the Mayer relation. Maxwell relations in thermodynamics are critical because they provide a means of simply measuring the change in properties of pressure, temperature, and specific volume, to determine a change in entropy. Entropy cannot be measured directly. The change in entropy with respect to pressure at a constant temperature is the same as the negative change in specific volume with respect to temperature at a constant pressure, for a simple compressible system.
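Because a Maxwell relation is just the symmetry of mixed second derivatives of a potential, it can be confirmed for any explicit model. The sketch below checks (∂S/∂p)_T = −(∂V/∂T)_p, the relation described in the last sentence above, for an assumed ideal-gas Gibbs free energy (the explicit form is an illustrative assumption, not from the text):

```python
import sympy as sp

T, p, n, R, c = sp.symbols('T p n R c', positive=True)
# Ideal-gas Gibbs free energy, up to terms linear in T (assumed form):
G = n*R*T*(sp.log(p) - sp.Rational(5, 2)*sp.log(T) + c)

S = -sp.diff(G, T)
V = sp.diff(G, p)   # = n R T / p

# Maxwell relation from the symmetry of d^2 G: (dS/dp)_T = -(dV/dT)_p
assert sp.simplify(sp.diff(S, p) + sp.diff(V, T)) == 0
assert sp.simplify(V - n*R*T/p) == 0
```

This is exactly how the relation lets entropy changes be inferred from p–V–T data: the right-hand side (∂V/∂T)_p is directly measurable.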
Maxwell relations in thermodynamics are often used to derive thermodynamic relations. The Clapeyron equation allows us to use pressure, temperature, and specific volume to determine an enthalpy change that is connected to a phase change. It is significant to any phase-change process that happens at a constant pressure and temperature. For example, it yields the enthalpy of vaporization at a given temperature from the slope of the saturation curve on a pressure vs. temperature graph. It also allows us to determine the specific volume of a saturated vapor and liquid at that temperature. In the equation below, formula_36 represents the specific latent heat, formula_37 represents temperature, and formula_38 represents the change in specific volume. formula_39 The Mayer relation states that the specific heat capacity of a gas at constant volume is slightly less than at constant pressure. This relation was built on the reasoning that energy must be supplied both to raise the temperature of the gas and for the gas to do work as its volume changes. According to this relation, the difference between the molar heat capacities equals the universal gas constant. This relation is represented by the difference between Cp and Cv: Cp – Cv = R
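As a rough numerical illustration of the Clapeyron slope formula_39, the sketch below uses approximate textbook values for water at its normal boiling point (assumed data, not from the text):

```python
# Approximate values for water at 1 atm (assumed, for illustration only):
L = 2.26e6          # J/kg, specific latent heat of vaporization
T = 373.15          # K, normal boiling point
v_vapor = 1.673     # m^3/kg, specific volume of saturated vapor
v_liquid = 1.04e-3  # m^3/kg, specific volume of saturated liquid

# Clapeyron slope dP/dT = L / (T * delta_v)
dP_dT = L / (T * (v_vapor - v_liquid))
print(dP_dT)   # roughly 3.6 kPa per kelvin
```

Inverting the same formula is how the latent heat is extracted from a measured saturation-curve slope.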
[ { "math_id": 0, "text": "P = \\frac{W}{t} = \\frac{(mg)h}{t} " }, { "math_id": 1, "text": "dU = \\delta Q - \\delta W " }, { "math_id": 2, "text": "dU " }, { "math_id": 3, "text": "\\delta Q " }, { "math_id": 4, "text": "\\delta W " }, { "math_id": 5, "text": "\\delta" }, { "math_id": 6, "text": "dU = \\delta Q + \\delta W" }, { "math_id": 7, "text": " dS \\ge 0" }, { "math_id": 8, "text": " S = 0 " }, { "math_id": 9, "text": " T = 0 " }, { "math_id": 10, "text": " \\mathbf{J}_{u} = L_{uu}\\, \\nabla(1/T) - L_{ur}\\, \\nabla(m/T) " }, { "math_id": 11, "text": " \\mathbf{J}_{r} = L_{ru}\\, \\nabla(1/T) - L_{rr}\\, \\nabla(m/T) " }, { "math_id": 12, "text": "dU = TdS-pdV+\\sum_{i=1}^k\\mu_idN_i" }, { "math_id": 13, "text": "dU=\n\\left(\\frac{\\partial U}{\\partial S}\\right)_{V,\\{N_i\\}}dS+\n\\left(\\frac{\\partial U}{\\partial V}\\right)_{S,\\{N_i\\}}dV+\n\\sum_i\\left(\\frac{\\partial U}{\\partial N_i}\\right)_{S,V,\\{N_{j \\ne i}\\}}dN_i\n" }, { "math_id": 14, "text": "\\left(\\frac{\\partial U}{\\partial S}\\right)_{V,\\{N_i\\}}=T" }, { "math_id": 15, "text": "\\left(\\frac{\\partial U}{\\partial V}\\right)_{S,\\{N_i\\}}=-p" }, { "math_id": 16, "text": "\\left(\\frac{\\partial U}{\\partial N_i}\\right)_{S,V,\\{N_{j \\ne i}\\}}=\\mu_i" }, { "math_id": 17, "text": "dS" }, { "math_id": 18, "text": "\\left(\\frac{\\partial S}{\\partial V}\\right)_{U,\\{N_i\\}} = \\frac{p}{T}" }, { "math_id": 19, "text": "dU\\left(S,V,{N_{i}}\\right) = TdS - pdV + \\sum_{i} \\mu_{i} dN_i" }, { "math_id": 20, "text": "dF\\left(T,V,N_{i}\\right) = -SdT - pdV + \\sum_{i} \\mu_{i} dN_{i}" }, { "math_id": 21, "text": "dH\\left(S,p,N_{i}\\right) = TdS + Vdp + \\sum_{i} \\mu_{i} dN_{i}" }, { "math_id": 22, "text": "dG\\left(T,p,N_{i}\\right) = -SdT + Vdp + \\sum_{i} \\mu_{i} dN_{i}" }, { "math_id": 23, "text": "d\\Phi = \\sum_i \\frac{\\partial \\Phi}{\\partial X_i} dX_i" }, { "math_id": 24, "text": "X_i" }, { "math_id": 25, "text": "\\gamma_i" }, { "math_id": 26, "text": "\\gamma_i = 
\\frac{\\partial \\Phi}{\\partial X_i}" }, { "math_id": 27, "text": "\\left(\\frac{\\partial F}{\\partial V}\\right)_{T,\\{N_i\\}}=-p" }, { "math_id": 28, "text": "U=TS-pV+\\sum_i \\mu_i N_i" }, { "math_id": 29, "text": "F= -pV+\\sum_i \\mu_i N_i" }, { "math_id": 30, "text": "H=TS +\\sum_i \\mu_i N_i" }, { "math_id": 31, "text": "G= \\sum_i \\mu_i N_i" }, { "math_id": 32, "text": "0=SdT-Vdp+\\sum_iN_id\\mu_i" }, { "math_id": 33, "text": " \\beta_{T \\text{ or } S} = -{ 1\\over V } \\left ( {\\partial V\\over \\partial p} \\right )_{T,N \\text{ or } S,N}" }, { "math_id": 34, "text": " c_{p \\text{ or } V}= \\frac{T}{N}\\left ( {\\partial S\\over \\partial T} \\right )_{p \\text{ or } V} ~" }, { "math_id": 35, "text": "\\alpha_{p} = \\frac{1}{V}\\left(\\frac{\\partial V}{\\partial T}\\right)_p" }, { "math_id": 36, "text": "L" }, { "math_id": 37, "text": "T" }, { "math_id": 38, "text": "\\Delta v " }, { "math_id": 39, "text": "\\frac{\\mathrm{d} P}{\\mathrm{d} T} = \\frac {L}{T \\Delta v}" } ]
https://en.wikipedia.org/wiki?curid=1515898
1516033
Kummer's function
Mathematical function In mathematics, there are several functions known as Kummer's function. One is known as the confluent hypergeometric function of Kummer. Another one, defined below, is related to the polylogarithm. Both are named for Ernst Kummer. Kummer's function is defined by formula_0 The duplication formula is formula_1. Compare this to the duplication formula for the polylogarithm: formula_2 An explicit link to the polylogarithm is given by formula_3
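The duplication formula can be checked numerically from the integral definition. The sketch below evaluates formula_0 with mpmath's adaptive quadrature (which handles the logarithmic endpoint singularity at t = 0); the values n = 2 and z = 0.5 are arbitrary test choices:

```python
import mpmath as mp

mp.mp.dps = 25

def kummer_lambda(n, z):
    """Lambda_n(z) = integral_0^z log^(n-1)|t| / (1+t) dt (numerical sketch)."""
    f = lambda t: mp.log(abs(t))**(n - 1) / (1 + t)
    return mp.quad(f, [0, z])   # tanh-sinh quadrature tolerates the log singularity

n, z = 2, mp.mpf('0.5')
# Duplication formula: Lambda_n(z) + Lambda_n(-z) = 2^(1-n) Lambda_n(-z^2)
lhs = kummer_lambda(n, z) + kummer_lambda(n, -z)
rhs = 2**(1 - n) * kummer_lambda(n, -z**2)
assert mp.almosteq(lhs, rhs, abs_eps=mp.mpf('1e-12'))
```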
[ { "math_id": 0, "text": "\\Lambda_n(z)=\\int_0^z \\frac{\\log^{n-1}|t|}{1+t}\\;dt." }, { "math_id": 1, "text": "\\Lambda_n(z)+\\Lambda_n(-z)= 2^{1-n}\\Lambda_n(-z^2)" }, { "math_id": 2, "text": "\\operatorname{Li}_n(z)+\\operatorname{Li}_n(-z)= 2^{1-n}\\operatorname{Li}_n(z^2)." }, { "math_id": 3, "text": "\\operatorname{Li}_n(z)=\\operatorname{Li}_n(1)\\;\\;+\\;\\;\n\\sum_{k=1}^{n-1} (-1)^{k-1} \\;\\frac{\\log^k |z|} {k!} \\;\\operatorname{Li}_{n-k} (z) \\;\\;+\\;\\;\n\\frac{(-1)^{n-1}}{(n-1)!} \\;\\left[ \\Lambda_n(-1) - \\Lambda_n(-z) \\right]." } ]
https://en.wikipedia.org/wiki?curid=1516033
1516095
Confluent hypergeometric function
Solution of a confluent hypergeometric equation In mathematics, a confluent hypergeometric function is a solution of a confluent hypergeometric equation, which is a degenerate form of a hypergeometric differential equation where two of the three regular singularities merge into an irregular singularity. The term "confluent" refers to the merging of singular points of families of differential equations; "confluere" is Latin for "to flow together". There are several common standard forms of confluent hypergeometric functions: Kummer's (confluent hypergeometric) function "M"("a", "b", "z"), Tricomi's (confluent hypergeometric) function "U"("a", "b", "z"), the Whittaker functions, and the Coulomb wave functions. The Kummer functions, Whittaker functions, and Coulomb wave functions are essentially the same, and differ from each other only by elementary functions and change of variables. Kummer's equation. Kummer's equation may be written as: formula_0 with a regular singular point at "z" = 0 and an irregular singular point at "z" = ∞. It has two (usually) linearly independent solutions "M"("a", "b", "z") and "U"("a", "b", "z"). Kummer's function of the first kind M is a generalized hypergeometric series introduced by Kummer, given by: formula_1 where: formula_2 formula_3 is the rising factorial. Another common notation for this solution is Φ("a", "b", "z"). Considered as a function of a, b, or z with the other two held constant, this defines an entire function of a or z, except when "b" = 0, −1, −2, ... As a function of b it is analytic except for poles at the non-positive integers. Some values of a and b yield solutions that can be expressed in terms of other known functions. See #Special cases. When a is a non-positive integer, then Kummer's function (if it is defined) is a generalized Laguerre polynomial.
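The defining series formula_1 is easy to evaluate term by term, since consecutive terms differ by the factor (a + n) z / ((b + n)(n + 1)). The sketch below checks a truncated series against the special case M(b, b, z) = e^z listed later in the article (the parameter values are arbitrary):

```python
import math

def kummer_M(a, b, z, terms=60):
    """Truncated series for Kummer's function M(a, b, z) (a sketch;
    adequate for moderate |z| and b not a non-positive integer)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * z / ((b + n) * (n + 1))   # ratio of consecutive terms
    return total

# Special case M(b, b, z) = e^z, which the series must reproduce:
assert abs(kummer_M(2.5, 2.5, 1.0) - math.exp(1.0)) < 1e-10
```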
Just as the confluent differential equation is a limit of the hypergeometric differential equation as the singular point at 1 is moved towards the singular point at ∞, the confluent hypergeometric function can be given as a limit of the hypergeometric function formula_4 and many of the properties of the confluent hypergeometric function are limiting cases of properties of the hypergeometric function. Since Kummer's equation is second order there must be another, independent, solution. The indicial equation of the method of Frobenius tells us that the lowest power of a power series solution to the Kummer equation is either 0 or 1 − "b". If we let "w"("z") be formula_5 then the differential equation gives formula_6 which, upon dividing out "z"1−"b" and simplifying, becomes formula_7 This means that "z"1−"b""M"("a" + 1 − "b", 2 − "b", "z") is a solution so long as b is not an integer greater than 1, just as "M"("a", "b", "z") is a solution so long as b is not an integer less than 1. We can also use the Tricomi confluent hypergeometric function "U"("a", "b", "z") introduced by Francesco Tricomi (1947), and sometimes denoted by Ψ("a"; "b"; "z"). It is a combination of the above two solutions, defined by formula_8 Although this expression is undefined for integer b, it has the advantage that it can be extended to any integer b by continuity. Unlike Kummer's function which is an entire function of z, "U"("z") usually has a singularity at zero. For example, if "b" = 0 and "a" ≠ 0 then Γ("a"+1)"U"("a", "b", "z") − 1 is asymptotic to "az" ln "z" as z goes to zero. But see #Special cases for some examples where it is an entire function (polynomial). Note that the solution "z"1−"b""U"("a" + 1 − "b", 2 − "b", "z") to Kummer's equation is the same as the solution "U"("a", "b", "z"), see #Kummer's transformation.
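The defining combination formula_8 can be spot-checked numerically against an independent implementation. The sketch below uses the third-party mpmath library (`mp.hyperu`, `mp.hyp1f1`); the non-integer parameter values are arbitrary:

```python
import mpmath as mp

mp.mp.dps = 30
a, b, z = mp.mpf('0.7'), mp.mpf('0.4'), mp.mpf('1.3')   # b deliberately non-integer

lhs = mp.hyperu(a, b, z)
# Tricomi's U as a combination of the two Frobenius solutions:
rhs = (mp.gamma(1 - b) / mp.gamma(a + 1 - b) * mp.hyp1f1(a, b, z)
       + mp.gamma(b - 1) / mp.gamma(a) * z**(1 - b) * mp.hyp1f1(a + 1 - b, 2 - b, z))

assert mp.almosteq(lhs, rhs, rel_eps=mp.mpf('1e-20'))
```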
For most combinations of real or complex a and b, the functions "M"("a", "b", "z") and "U"("a", "b", "z") are independent, and if b is a non-positive integer, so "M"("a", "b", "z") doesn't exist, then we may be able to use "z"1−"b""M"("a"+1−"b", 2−"b", "z") as a second solution. But if a is a non-positive integer and b is not a non-positive integer, then "U"("z") is a multiple of "M"("z"). In that case as well, "z"1−"b""M"("a"+1−"b", 2−"b", "z") can be used as a second solution if it exists and is different. But when b is an integer greater than 1, this solution doesn't exist, and if "b" = 1 then it exists but is a multiple of "U"("a", "b", "z") and of "M"("a", "b", "z"). In those cases a second solution exists of the following form and is valid for any real or complex a and any positive integer b except when a is a positive integer less than b: formula_9 When "a" = 0 we can alternatively use: formula_10 When "b" = 1 this is the exponential integral "E"1("−z"). A similar problem occurs when "a"−"b" is a negative integer and b is an integer less than 1. In this case "M"("a", "b", "z") doesn't exist, and "U"("a", "b", "z") is a multiple of "z"1−"b""M"("a"+1−"b", 2−"b", "z"). A second solution is then of the form: formula_11 Other equations. Confluent Hypergeometric Functions can be used to solve the Extended Confluent Hypergeometric Equation whose general form is given as: formula_12 Note that for "M" = 0 or when the summation involves just one term, it reduces to the conventional Confluent Hypergeometric Equation. Thus Confluent Hypergeometric Functions can be used to solve "most" second-order ordinary differential equations whose variable coefficients are all linear functions of z, because they can be transformed to the Extended Confluent Hypergeometric Equation. Consider the equation: formula_13 First we move the regular singular point to 0 by using the substitution of "A" + "Bz" ↦ "z", which converts the equation to: formula_14 with new values of C, D, E, and F.
Next we use the substitution: formula_15 and multiply the equation by the same factor, obtaining: formula_16 whose solution is formula_17 where "w"("z") is a solution to Kummer's equation with formula_18 Note that the square root may give an imaginary or complex number. If it is zero, another solution must be used, namely formula_19 where "w"("z") is a confluent hypergeometric limit function satisfying formula_20 As noted below, even the Bessel equation can be solved using confluent hypergeometric functions. Integral representations. If Re "b" &gt; Re "a" &gt; 0, "M"("a", "b", "z") can be represented as an integral formula_21 thus "M"("a", "a"+"b", "it") is the characteristic function of the beta distribution. For a with positive real part U can be obtained by the Laplace integral formula_22 The integral defines a solution in the right half-plane Re "z" &gt; 0. They can also be represented as Barnes integrals formula_23 where the contour passes to one side of the poles of Γ(−"s") and to the other side of the poles of Γ("a" + "s"). Asymptotic behavior. If a solution to Kummer's equation is asymptotic to a power of z as "z" → ∞, then the power must be −"a". This is in fact the case for Tricomi's solution "U"("a", "b", "z"). Its asymptotic behavior as "z" → ∞ can be deduced from the integral representations. If "z" = "x" ∈ R, then making a change of variables in the integral followed by expanding the binomial series and integrating it formally term by term gives rise to an asymptotic series expansion, valid as "x" → ∞: formula_24 where formula_25 is a generalized hypergeometric series with 1 as leading term, which generally converges nowhere, but exists as a formal power series in 1/"x". This asymptotic expansion is also valid for complex z instead of real x. The asymptotic behavior of Kummer's solution for large |"z"| is: formula_26 The powers of z are taken using −3"π"/2 &lt; arg "z" ≤ "π"/2.
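The leading asymptotic behavior U(a, b, x) ~ x^(−a) can be observed numerically. The sketch below (using mpmath, with arbitrary parameter values) checks that x^a U(a, b, x) approaches the leading coefficient 1 as x grows, at a rate consistent with the next term of the expansion:

```python
import mpmath as mp

mp.mp.dps = 30
a, b = mp.mpf('1.5'), mp.mpf('0.5')
for x in (1e3, 1e4, 1e5):
    ratio = mp.hyperu(a, b, mp.mpf(x)) * mp.mpf(x)**a
    # leading term of the expansion is 1; the first correction is O(1/x),
    # so the discrepancy should shrink proportionally with x
    assert abs(ratio - 1) < 10 / x
```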
The first term is not needed when Γ("b" − "a") is finite, that is when "b" − "a" is not a non-positive integer and the real part of z goes to negative infinity, whereas the second term is not needed when Γ("a") is finite, that is, when "a" is not a non-positive integer and the real part of z goes to positive infinity. There is always some solution to Kummer's equation asymptotic to "ezz""a"−"b" as "z" → −∞. Usually this will be a combination of both "M"("a", "b", "z") and "U"("a", "b", "z") but can also be expressed as "ez" (−1)"a"-"b" "U"("b" − "a", "b", −"z"). Relations. There are many relations between Kummer functions for various arguments and their derivatives. This section gives a few typical examples. Contiguous relations. Given "M"("a", "b", "z"), the four functions "M"("a" ± 1, "b", "z"), "M"("a", "b" ± 1, "z") are called contiguous to "M"("a", "b", "z"). The function "M"("a", "b", "z") can be written as a linear combination of any two of its contiguous functions, with rational coefficients in terms of a, b, and z. This gives six relations, given by identifying any two lines on the right-hand side of formula_27 In the notation above, "M" = "M"("a", "b", "z"), "M"("a"+) = "M"("a" + 1, "b", "z"), and so on. Repeatedly applying these relations gives a linear relation between any three functions of the form "M"("a" + "m", "b" + "n", "z") (and their higher derivatives), where m, n are integers. There are similar relations for U. Kummer's transformation. Kummer's functions are also related by Kummer's transformations: formula_28 formula_29. Multiplication theorem. The following multiplication theorems hold true: formula_30 Connection with Laguerre polynomials and similar representations. In terms of Laguerre polynomials, Kummer's functions have several expansions, for example formula_31 or formula_32 Special cases.
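Kummer's transformation formula_28 can be verified numerically with a direct series evaluation (standard library only; the parameter values are arbitrary):

```python
import math

def M(a, b, z, terms=80):
    """Truncated Kummer series for M(a, b, z) (a sketch for moderate |z|)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * z / ((b + n) * (n + 1))
    return total

a, b, z = 0.6, 1.7, 0.9
# Kummer's transformation: M(a, b, z) = e^z * M(b - a, b, -z)
assert abs(M(a, b, z) - math.exp(z) * M(b - a, b, -z)) < 1e-10
```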
Functions that can be expressed as special cases of the confluent hypergeometric function include: formula_33 formula_34 formula_35 formula_36 (a polynomial if a is a non-positive integer) formula_37 formula_38 for non-positive integer n is a generalized Laguerre polynomial. formula_39 for non-positive integer n is a multiple of a generalized Laguerre polynomial, equal to formula_40 when the latter exists. formula_41 when n is a positive integer is a closed form with powers of z, equal to formula_42 when the latter exists. formula_43 formula_44 for non-negative integer n is a Bessel polynomial (see lower down). formula_45 etc. Using the contiguous relation formula_46 we get, for example, formula_47 When "b" = 2"a" the function reduces to a Bessel function: formula_48 This identity is sometimes also referred to as Kummer's second transformation. Similarly formula_49 When a is a non-positive integer, this equals 2−"a""θ"−"a"("x"/2) where θ is a Bessel polynomial. formula_50 formula_51 formula_52 formula_53 In the second formula the function's second branch cut can be chosen by multiplying with (−1)"p". Application to continued fractions. By applying a limiting argument to Gauss's continued fraction it can be shown that formula_54 and that this continued fraction converges uniformly to a meromorphic function of z in every bounded domain that does not include a pole.
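The continued fraction formula_54 can be evaluated bottom-up and compared against the series ratio directly (a sketch; the truncation depth and parameter values are arbitrary):

```python
import math

def M(a, b, z, terms=80):
    """Truncated Kummer series for M(a, b, z) (a sketch for moderate |z|)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * z / ((b + n) * (n + 1))
    return total

def cf_ratio(a, b, z, depth=30):
    """Continued fraction for M(a+1, b+1, z) / M(a, b, z), evaluated bottom-up.
    Partial numerators alternate (b-a+m)z and (a+m)z over (b+n-1)(b+n),
    with signs -, +, -, +, ... as in the displayed fraction."""
    D = 1.0                                   # innermost denominator
    for n in range(depth, 0, -1):
        if n % 2 == 1:                        # odd level: numerator (b - a + m) z
            m = (n - 1) // 2
            t = (b - a + m) * z / ((b + n - 1) * (b + n))
            D = 1.0 - t / D
        else:                                 # even level: numerator (a + m) z
            m = n // 2
            t = (a + m) * z / ((b + n - 1) * (b + n))
            D = 1.0 + t / D
    return 1.0 / D

a, b, z = 1.0, 3.0, 0.5
assert abs(cf_ratio(a, b, z) - M(a + 1, b + 1, z) / M(a, b, z)) < 1e-10
```

The partial numerators shrink roughly like z/(2n), so convergence is rapid in any bounded region away from poles of the ratio.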
[ { "math_id": 0, "text": "z\\frac{d^2w}{dz^2} + (b-z)\\frac{dw}{dz} - aw = 0," }, { "math_id": 1, "text": "M(a,b,z)=\\sum_{n=0}^\\infty \\frac {a^{(n)} z^n} {b^{(n)} n!}={}_1F_1(a;b;z)," }, { "math_id": 2, "text": "a^{(0)}=1," }, { "math_id": 3, "text": "a^{(n)}=a(a+1)(a+2)\\cdots(a+n-1)\\, ," }, { "math_id": 4, "text": "M(a,c,z) = \\lim_{b\\to\\infty}{}_2F_1(a,b;c;z/b)" }, { "math_id": 5, "text": "w(z)=z^{1-b}v(z)" }, { "math_id": 6, "text": "z^{2-b}\\frac{d^2v}{dz^2}+2(1-b)z^{1-b}\\frac{dv}{dz}-b(1-b)z^{-b}v + (b-z)\\left[z^{1-b}\\frac{dv}{dz}+(1-b)z^{-b}v\\right] - az^{1-b}v = 0" }, { "math_id": 7, "text": "z\\frac{d^2v}{dz^2}+(2-b-z)\\frac{dv}{dz} - (a+1-b)v = 0." }, { "math_id": 8, "text": "U(a,b,z)=\\frac{\\Gamma(1-b)}{\\Gamma(a+1-b)}M(a,b,z)+\\frac{\\Gamma(b-1)}{\\Gamma(a)}z^{1-b}M(a+1-b,2-b,z)." }, { "math_id": 9, "text": "M(a,b,z)\\ln z+z^{1-b}\\sum_{k=0}^\\infty C_kz^k" }, { "math_id": 10, "text": "\\int_{-\\infty}^z(-u)^{-b}e^u\\mathrm{d}u." }, { "math_id": 11, "text": "z^{1-b}M(a+1-b,2-b,z)\\ln z+\\sum_{k=0}^\\infty C_kz^k" }, { "math_id": 12, "text": "z\\frac{d^2w}{dz^2} +(b-z)\\frac{dw}{dz} -\\left(\\sum_{m=0}^M a_m z^m\\right)w = 0" }, { "math_id": 13, "text": "(A+Bz)\\frac{d^2w}{dz^2} + (C+Dz)\\frac{dw}{dz} +(E+Fz)w = 0" }, { "math_id": 14, "text": "z\\frac{d^2w}{dz^2} + (C+Dz)\\frac{dw}{dz} +(E+Fz)w = 0" }, { "math_id": 15, "text": " z \\mapsto \\frac{1}{\\sqrt{D^2-4F}} z" }, { "math_id": 16, "text": "z\\frac{d^2w}{dz^2}+\\left(C+\\frac{D}{\\sqrt{D^2-4F}}z\\right)\\frac{dw}{dz}+\\left(\\frac{E}{\\sqrt{D^2-4F}}+\\frac{F}{D^2-4F}z\\right)w=0" }, { "math_id": 17, "text": "\\exp \\left ( - \\left (1+ \\frac{D}{\\sqrt{D^2-4F}} \\right) \\frac{z}{2} \\right )w(z)," }, { "math_id": 18, "text": "a=\\left (1+ \\frac{D}{\\sqrt{D^2-4F}} \\right)\\frac{C}{2}-\\frac{E}{\\sqrt{D^2-4F}}, \\qquad b = C." }, { "math_id": 19, "text": "\\exp \\left(-\\tfrac{1}{2} Dz \\right )w(z)," }, { "math_id": 20, "text": "zw''(z)+Cw'(z)+\\left(E-\\tfrac{1}{2}CD \\right)w(z)=0." 
}, { "math_id": 21, "text": "M(a,b,z)= \\frac{\\Gamma(b)}{\\Gamma(a)\\Gamma(b-a)}\\int_0^1 e^{zu}u^{a-1}(1-u)^{b-a-1}\\,du." }, { "math_id": 22, "text": "U(a,b,z) = \\frac{1}{\\Gamma(a)}\\int_0^\\infty e^{-zt}t^{a-1}(1+t)^{b-a-1}\\,dt, \\quad (\\operatorname{Re}\\ a>0) " }, { "math_id": 23, "text": "M(a,b,z) = \\frac{1}{2\\pi i}\\frac{\\Gamma(b)}{\\Gamma(a)}\\int_{-i\\infty}^{i\\infty} \\frac{\\Gamma(-s)\\Gamma(a+s)}{\\Gamma(b+s)}(-z)^sds" }, { "math_id": 24, "text": "U(a,b,x)\\sim x^{-a} \\, _2F_0\\left(a,a-b+1;\\, ;-\\frac 1 x\\right)," }, { "math_id": 25, "text": "_2F_0(\\cdot, \\cdot; ;-1/x)" }, { "math_id": 26, "text": "M(a,b,z)\\sim\\Gamma(b)\\left(\\frac{e^zz^{a-b}}{\\Gamma(a)}+\\frac{(-z)^{-a}}{\\Gamma(b-a)}\\right)" }, { "math_id": 27, "text": "\\begin{align}\nz\\frac{dM}{dz} = z\\frac{a}{b}M(a+,b+)\n&=a(M(a+)-M)\\\\\n&=(b-1)(M(b-)-M)\\\\\n&=(b-a)M(a-)+(a-b+z)M\\\\\n&=z(a-b)M(b+)/b +zM\\\\\n\\end{align}" }, { "math_id": 28, "text": "M(a,b,z) = e^z\\,M(b-a,b,-z)" }, { "math_id": 29, "text": "U(a,b,z)=z^{1-b} U\\left(1+a-b,2-b,z\\right)" }, { "math_id": 30, "text": "\\begin{align}\nU(a,b,z) &= e^{(1-t)z} \\sum_{i=0} \\frac{(t-1)^i z^i}{i!} U(a,b+i,z t)\\\\\n &= e^{(1-t)z} t^{b-1} \\sum_{i=0} \\frac{\\left(1-\\frac 1 t\\right)^i}{i!} U(a-i,b-i,z t).\n\\end{align}" }, { "math_id": 31, "text": "M\\left(a,b,\\frac{x y}{x-1}\\right) = (1-x)^a \\cdot \\sum_n\\frac{a^{(n)}}{b^{(n)}}L_n^{(b-1)}(y)x^n" }, { "math_id": 32, "text": "\\operatorname{M}\\left( a;\\, b;\\, z \\right) = \\frac{\\Gamma\\left( 1 - a \\right) \\cdot \\Gamma\\left( b \\right)}{\\Gamma\\left( b - a \\right)} \\cdot \\operatorname{L_{-a}^{b - 1}}\\left( z \\right)" }, { "math_id": 33, "text": "M(0,b,z)=1" }, { "math_id": 34, "text": "U(0,c,z)=1" }, { "math_id": 35, "text": "M(b,b,z)=e^z" }, { "math_id": 36, "text": "U(a,a,z)=e^z\\int_z^\\infty u^{-a}e^{-u}du" }, { "math_id": 37, "text": "\\frac{U(1,b,z)}{\\Gamma(b-1)}+\\frac{M(1,b,z)}{\\Gamma(b)}=z^{1-b}e^z" }, { "math_id": 38, "text": "M(n,b,z)" 
}, { "math_id": 39, "text": "U(n,c,z)" }, { "math_id": 40, "text": "\\tfrac{\\Gamma(1-c)}{\\Gamma(n+1-c)}M(n,c,z)" }, { "math_id": 41, "text": "U(c-n,c,z)" }, { "math_id": 42, "text": "\\tfrac{\\Gamma(c-1)}{\\Gamma(c-n)}z^{1-c}M(1-n,2-c,z)" }, { "math_id": 43, "text": "U(a,a+1,z)= z^{-a}" }, { "math_id": 44, "text": "U(-n,-2n,z)" }, { "math_id": 45, "text": "M(1,2,z)=(e^z-1)/z,\\ \\ M(1,3,z)=2!(e^z-1-z)/z^2" }, { "math_id": 46, "text": "aM(a+)=(a+z)M+z(a-b)M(b+)/b" }, { "math_id": 47, "text": "M(2,1,z)=(1+z)e^z." }, { "math_id": 48, "text": "{}_1F_1(a,2a,x)= e^{x/2}\\, {}_0F_1 \\left(; a+\\tfrac{1}{2}; \\tfrac{x^2}{16} \\right) = e^{x/2} \\left(\\tfrac{x}{4}\\right)^{1/2-a}\\Gamma\\left(a+\\tfrac{1}{2}\\right)I_{a-1/2}\\left(\\tfrac{x}{2}\\right)." }, { "math_id": 49, "text": "U(a,2a,x)= \\frac{e^{x/2}}{\\sqrt \\pi} x^{1/2-a} K_{a-1/2} (x/2)," }, { "math_id": 50, "text": "\\mathrm{erf}(x)= \\frac{2}{\\sqrt{\\pi}}\\int_0^x e^{-t^2} dt= \\frac{2x}{\\sqrt{\\pi}}\\ {}_1F_1\\left(\\tfrac{1}{2},\\tfrac{3}{2},-x^2\\right)." 
}, { "math_id": 51, "text": "M_{\\kappa,\\mu}(z) = e^{-\\tfrac{z}{2}} z^{\\mu+\\tfrac{1}{2}}M\\left(\\mu-\\kappa+\\tfrac{1}{2}, 1+2\\mu; z\\right)" }, { "math_id": 52, "text": "W_{\\kappa,\\mu}(z) = e^{-\\tfrac{z}{2}} z^{\\mu+\\tfrac{1}{2}}U\\left(\\mu-\\kappa+\\tfrac{1}{2}, 1+2\\mu; z\\right)" }, { "math_id": 53, "text": "\\begin{align}\n\\operatorname{E} \\left[\\left|N\\left(\\mu, \\sigma^2 \\right)\\right|^p \\right] &= \\frac{\\left(2 \\sigma^2\\right)^{p/2} \\Gamma\\left(\\tfrac{1+p}{2}\\right)}{\\sqrt \\pi} \\ {}_1F_1\\left(-\\tfrac p 2, \\tfrac 1 2, -\\tfrac{\\mu^2}{2 \\sigma^2}\\right)\\\\\n\\operatorname{E} \\left[N \\left(\\mu, \\sigma^2 \\right)^p \\right] &= \\left (-2 \\sigma^2\\right)^{p/2} U\\left(-\\tfrac p 2, \\tfrac 1 2, -\\tfrac{\\mu^2}{2 \\sigma^2} \\right)\n\\end{align}" }, { "math_id": 54, "text": "\\frac{M(a+1,b+1,z)}{M(a,b,z)} = \\cfrac{1}{1 - \\cfrac{{\\displaystyle\\frac{b-a}{b(b+1)}z}}\n{1 + \\cfrac{{\\displaystyle\\frac{a+1}{(b+1)(b+2)}z}}\n{1 - \\cfrac{{\\displaystyle\\frac{b-a+1}{(b+2)(b+3)}z}}\n{1 + \\cfrac{{\\displaystyle\\frac{a+2}{(b+3)(b+4)}z}}{1 - \\ddots}}}}}\n" } ]
https://en.wikipedia.org/wiki?curid=1516095