id | title | text | formulas | url
---|---|---|---|---|
62408819
|
Bacillus submarinus
|
Species of bacterium
Bacillus submarinus is a species in the genus "Bacillus", meaning it is rod-shaped and capable of producing endospores. "B. submarinus" is Gram-positive: its cell wall contains a thick layer of peptidoglycan.
Description.
"Bacillus submarinus" is a gram positive, aerobic meaning that it requires oxygen for metabolism. "B. submarinus" is a sporulating bacteria which is when the cell puts it genetic information in a spore during a cell's dormant phase, rod-shaped, bacterium of the genus "Bacillus" that is commonly found in the ocean at extreme depths and pressures. As with other members of the genus "Bacillus", it can form an endospore a bud that contains genetic information in the chance the bacteria cell dies, later when conditions become more hospitable the bacteria returns, surviving extreme conditions.
Habitat.
This species is commonly found in ocean waters, primarily in the Atlantic Ocean. "Bacillus submarinus" is able to live at depths of more than 5000 m, withstanding hydrostatic pressures above formula_0 Pa (around 15,954 psi). In contrast, the human femur can only withstand a maximum of about 1,700 psi before shattering.
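As a quick sanity check of the pressure figure above, the pascal value can be converted to pounds per square inch with the standard factor 1 psi ≈ 6894.76 Pa. The short Python sketch below (not part of the original article) performs the conversion.
# Convert the hydrostatic pressure quoted above from pascals to psi.
PA_PER_PSI = 6894.76  # standard conversion factor: 1 psi expressed in Pa
pressure_pa = 1.1e8   # roughly 1.1 x 10^8 Pa at depths beyond 5000 m (from the article)
pressure_psi = pressure_pa / PA_PER_PSI
print(f"{pressure_pa:.3g} Pa = {pressure_psi:.0f} psi")  # about 15954 psi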
Reproduction.
"Bacillus submarinus" divide symmetrically to make two daughter cells, producing a single endospore that can remain viable for decades and is resistant to unfavourable environmental conditions such as ocean acidification. They do not reproduce like eukaryotic cells by mitosis but, a process known as binary fission. In binary fission the DNA in the prokaryote is not condensed in structures similar to chromosomes, but make a copy of the DNA and the cell divides in half.
Uses.
"Bacillus submarinus" is proven to decompose oil that is found in the ocean such as after an oil spill. As "B. submarinus" begins the process of decomposing oil in the ocean they form tarballs. In these tarballs the "B. submarinus" works with other organisms such as "Chromobacterium violaceum" and "Candida marina" to change the chemical structure of the oil by decomposing it and causing the molecules in oil to bond to other materials around organism.
|
[
{
"math_id": 0,
"text": "1.1x10^8"
}
] |
https://en.wikipedia.org/wiki?curid=62408819
|
62412427
|
Two-way string-matching algorithm
|
String-searching algorithm
In computer science, the two-way string-matching algorithm is a string-searching algorithm, discovered by Maxime Crochemore and Dominique Perrin in 1991. It takes a pattern of size "m", called a “needle”, preprocesses it in linear time O("m"), producing information that can then be used to search for the needle in any “haystack” string, taking only linear time O("n") with "n" being the haystack's length.
The two-way algorithm can be viewed as a combination of the forward-going Knuth–Morris–Pratt algorithm (KMP) and the backward-running Boyer–Moore string-search algorithm (BM).
Like those two, the 2-way algorithm preprocesses the pattern to find partially repeating periods and computes “shifts” based on them, indicating what offset to “jump” to in the haystack when a given character is encountered.
Unlike BM and KMP, it uses only O(log "m") additional space to store information about those partial repeats: the search pattern is split into two parts (its critical factorization), represented only by the position of that split. Being a number less than "m", it can be represented in ⌈log₂ "m"⌉ bits. This is sometimes treated as "close enough to O(1) in practice", as the needle's size is limited by the size of addressable memory; the overhead is a number that can be stored in a single register, and treating it as O(1) is like treating the size of a loop counter as O(1) rather than log of the number of iterations.
The actual matching operation performs at most 2"n" − "m" comparisons.
Breslauer later published two improved variants performing fewer comparisons, at the cost of storing additional data about the preprocessed needle:
The algorithm is considered fairly efficient in practice, being cache-friendly and using several operations that can be implemented in well-optimized subroutines. It is used by the C standard libraries glibc, newlib, and musl, to implement the "memmem" and "strstr" family of substring functions. As with most advanced string-search algorithms, the naïve implementation may be more efficient on small-enough instances; this is especially so if the needle isn't searched in multiple haystacks, which would amortize the preprocessing cost.
Critical factorization.
Before describing the algorithm, we need the notion of a critical factorization. A factorization of a string "w" is a pair of strings ("u", "v") with "w" = "uv". The local period at the position between "u" and "v" is the length of the shortest non-empty string that is consistent with both sides of the cut, i.e. one that matches a suffix of "u" (or has "u" as a suffix) and a prefix of "v" (or has "v" as a prefix). The local period is at least 1 and at most the period of "w"; a factorization is critical when its local period equals the period of "w". The critical factorization theorem states that a critical factorization with |"u"| smaller than the period of "w" always exists. For example, "aabb" has period 4, and the factorization ("aa", "bb") is critical: the shortest string that ends in "aa" and starts with "bb" is "bbaa", of length 4.
The algorithm.
The algorithm starts with the critical factorization of the needle as the preprocessing step. This step produces the index (starting point) of the periodic right half and the period of this stretch. The suffix computation here follows the authors' formulation; it can alternatively be computed using Duval's algorithm, which is simpler and still linear-time, but slower in practice.
"Shorthand for inversion."
function cmp(a, b)
if a > b return 1
if a = b return 0
if a < b return -1
function maxsuf(n, rev)
l ← len(n)
p ← 1 "currently known period."
k ← 1 "index for period testing, 0 < k <= p."
j ← 0 "index for maxsuf testing. greater than maxs."
i ← -1 "the proposed starting index of maxsuf"
while j + k < l
cmpv ← cmp(n[j + k], n[i + k])
if rev
cmpv ← -cmpv "invert the comparison"
if cmpv < 0
"Suffix (j+k) is smaller. Period is the entire prefix so far."
j ← j + k
k ← 1
p ← j - i
else if cmpv = 0
"They are the same - we should go on."
if k = p
"We are done checking this stretch of p. reset k."
j ← j + p
k ← 1
else
k ← k + 1
else
"Suffix is larger. Start over from here."
i ← j
j ← j + 1
p ← 1
k ← 1
return [i, p]
function crit_fact(n)
[idx1, per1] ← maxsuf(n, false)
[idx2, per2] ← maxsuf(n, true)
if idx1 > idx2
return [idx1, per1]
else
return [idx2, per2]
The matching phase first compares the right-hand side of the needle, left to right, and then the left-hand side, right to left, only if the right side matches. Linear-time skipping is done using the period: the preprocessing result distinguishes a periodic needle, whose left part recurs one period later and whose exact period is then used for the skip, from an aperiodic one, where a larger skip is safe.
function match(n, h)
nl ← len(n)
hl ← len(h)
[l, p] ← crit_fact(n)
P ← {} "set of matches."
"Match the suffix."
"Use a library function like memcmp, or write your own loop."
if n[0] ... n[l] = n[l+1] ... n[l+p]
pos ← 0
s ← 0
"TODO. At least put the skip in."
|
[
{
"math_id": 0,
"text": "\\varphi"
}
] |
https://en.wikipedia.org/wiki?curid=62412427
|
6241465
|
Floating-gate MOSFET
|
Type of MOSFET where the gate is electrically isolated
The floating-gate MOSFET (FGMOS), also known as a floating-gate MOS transistor or floating-gate transistor, is a type of metal–oxide–semiconductor field-effect transistor (MOSFET) where the gate is electrically isolated, creating a floating node in direct current, and a number of secondary gates or inputs are deposited above the floating gate (FG) and are electrically isolated from it. These inputs are only capacitively connected to the FG. Since the FG is surrounded by highly resistive material, the charge contained in it remains unchanged for long periods of time, typically longer than 10 years in modern devices. Usually Fowler-Nordheim tunneling and hot-carrier injection mechanisms are used to modify the amount of charge stored in the FG.
The FGMOS is commonly used as a floating-gate memory cell, the digital storage element in EPROM, EEPROM and flash memory technologies. Other uses of the FGMOS include a neuronal computational element in neural networks, analog storage element, digital potentiometers and single-transistor DACs.
History.
The first MOSFET was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959, and presented in 1960. The first report of a FGMOS was later made by Dawon Kahng and Simon Min Sze at Bell Labs, and dates from 1967. The earliest practical application of FGMOS was floating-gate memory cells, which Kahng and Sze proposed could be used to produce reprogrammable ROM (read-only memory). The initial applications of the FGMOS were in digital semiconductor memory, storing nonvolatile data in EPROM, EEPROM and flash memory.
In 1989, Intel employed the FGMOS as an analog nonvolatile memory element in its electrically trainable artificial neural network (ETANN) chip, demonstrating the potential of using FGMOS devices for applications other than digital memory.
Three research accomplishments laid the groundwork for much of the current FGMOS circuit development:
Structure.
An FGMOS can be fabricated by electrically isolating the gate of a standard MOS transistor, so that there are no resistive connections to its gate. A number of secondary gates or inputs are then deposited above the floating gate (FG) and are electrically isolated from it. These inputs are only capacitively connected to the FG, since the FG is completely surrounded by highly resistive material. So, in terms of its DC operating point, the FG is a floating node.
For applications where the charge of the FG needs to be modified, a pair of small extra transistors are added to each FGMOS transistor to conduct the injection and tunneling operations. The gates of every transistor are connected together; the tunneling transistor has its source, drain and bulk terminals interconnected to create a capacitive tunneling structure. The injection transistor is connected normally and specific voltages are applied to create hot carriers that are then injected via an electric field into the floating gate.
FGMOS transistors for purely capacitive use can be fabricated in N- or P-channel versions.
For charge modification applications, the tunneling transistor (and therefore the operating FGMOS) needs to be embedded into a well, hence the technology dictates the type of FGMOS that can be fabricated.
Modeling.
Large signal DC.
The equations modeling the DC operation of the FGMOS can be derived from the equations that describe the operation of the MOS transistor used to build the FGMOS. If it is possible to determine the voltage at the FG of an FGMOS device, it is then possible to express its drain to source current using standard MOS transistor models. Therefore, to derive a set of equations that model the large signal operation of an FGMOS device, it is necessary to find the relationship between its effective input voltages and the voltage at its FG.
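A commonly used large-signal model expresses the floating-gate voltage as a capacitive voltage divider: the charge stored on the FG plus the capacitively coupled contributions of the inputs and parasitics, divided by the total capacitance seen by the FG. The sketch below evaluates this relation for assumed, illustrative component values; it is not taken from this article.
# Minimal sketch: floating-gate voltage as a capacitive voltage divider.
# All component values below are assumed for illustration.
def floating_gate_voltage(inputs, c_gd, v_d, c_gs, v_s, c_gb, v_b, q_fg=0.0):
    """inputs: list of (C_i, V_i) pairs for the N capacitively coupled inputs."""
    c_total = sum(c for c, _ in inputs) + c_gd + c_gs + c_gb
    charge = (q_fg + sum(c * v for c, v in inputs)
              + c_gd * v_d + c_gs * v_s + c_gb * v_b)
    return charge / c_total

# Two inputs of 100 fF at 1.2 V and 0.3 V; a few fF of parasitics (assumed values).
v_fg = floating_gate_voltage(
    inputs=[(100e-15, 1.2), (100e-15, 0.3)],
    c_gd=5e-15, v_d=1.0,
    c_gs=5e-15, v_s=0.0,
    c_gb=2e-15, v_b=0.0,
)
print(f"V_FG = {v_fg:.3f} V")  # about 0.73 V for these values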
Small signal.
An "N"-input FGMOS device has "N"−1 more terminals than a MOS transistor, and therefore, "N"+2 small signal parameters can be defined: "N" effective input transconductances, an output transconductance and a bulk transconductance. Respectively:
formula_0
formula_1
formula_2
where formula_3 is the total capacitance seen by the floating gate. These equations show two drawbacks of the FGMOS compared with the MOS transistor: each effective input transconductance is only a fraction (Ci/CT) of the transistor's gm, and the output conductance is increased by the additional (CGD/CT)·gm term, which reduces the attainable gain.
Simulation.
Under normal conditions, a floating node in a circuit represents an error because its initial condition is unknown unless it is somehow fixed. This generates two problems:
Among the many solutions proposed for the computer simulation, one of the most promising methods is an Initial Transient Analysis (ITA) proposed by Rodriguez-Villegas, where the FGs are set to zero volts or a previously known voltage based on the measurement of the charge trapped in the FG after the fabrication process. A transient analysis is then run with the supply voltages set to their final values, letting the outputs evolve normally. The values of the FGs can then be extracted and used for posterior small-signal simulations, connecting a voltage supply with the initial FG value to the floating gate using a very-high-value inductor.
Applications.
The usage and applications of the FGMOS can be broadly classified into two cases: those in which the charge in the floating gate is not modified during circuit operation, so that the device is used in a capacitively coupled regime, and those in which the stored charge is deliberately programmed.
In the capacitively coupled regime of operation, the net charge in the floating gate is not modified. Examples of application for this regime are single transistor adders, DACs, multipliers and logic functions, and variable threshold inverters.
As a programmable charge element, the FGMOS is commonly used for non-volatile storage such as flash, EPROM and EEPROM memory. In this context, floating-gate MOSFETs are useful because of their ability to store an electrical charge for extended periods of time without a connection to a power supply. Other applications of the FGMOS are as a neuronal computational element in neural networks, as an analog storage element, and in e-pots.
|
[
{
"math_id": 0,
"text": "g_{mi}=\\frac{C_i}{C_T}g_m\\quad\\mbox{for}\\quad i=[1,N]"
},
{
"math_id": 1,
"text": "g_{dsF}=g_{ds}+\\frac{C_{GD}}{C_T}g_m"
},
{
"math_id": 2,
"text": "g_{mbF}=g_{mb}+\\frac{C_{GB}}{C_T}g_m"
},
{
"math_id": 3,
"text": "C_T"
}
] |
https://en.wikipedia.org/wiki?curid=6241465
|
62417498
|
Truthful resource allocation
|
Truthful resource allocation is the problem of allocating resources among agents with different valuations over the resources, such that agents are incentivized to reveal their true valuations over the resources.
Model.
There are "m" resources that are assumed to be "homogeneous" and "divisible". Examples are:
There are "n" agents. Each agent has a function that attributes a numeric value to each "bundle" (combination of resources).
It is often assumed that the agents' value functions are "linear", so that if the agent receives a fraction "rj" of each resource "j", then his/her value is the sum of "rj"·"vj" over all resources.
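For concreteness, a linear valuation can be evaluated as in the short sketch below; the numbers are illustrative only.
# Linear utility: the value of a fractional bundle is sum_j r_j * v_j (illustrative values).
def linear_utility(fractions, values):
    return sum(r * v for r, v in zip(fractions, values))

values = [10.0, 6.0, 4.0]        # the agent's value for the whole of each resource
equal_split = [1/3, 1/3, 1/3]    # the "equal split" allocation with 3 agents
print(linear_utility(equal_split, values))  # 20/3 ~ 6.67, i.e. 1/3 of the total value 20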
Design goals.
The goal is to design a truthful mechanism, that will induce the agents to reveal their true value functions, and then calculate an allocation that satisfies some fairness and efficiency objectives. The common efficiency objectives are:
The most common fairness objectives are:
Trivial algorithms.
Two trivial truthful algorithms are:
It is possible to mix these two mechanisms, and get a truthful mechanism that is partly-fair and partly-efficient. But the ideal mechanism would satisfy all three properties simultaneously: truthfulness, efficiency and fairness.
At most one object per agent.
In a variant of the resource allocation problem, sometimes called one-sided matching or assignment, the total amount of objects allocated to each agent must be at most 1.
When there are 2 agents and 2 objects, the following mechanism satisfies all three properties: if each agent prefers a different object, give each agent their preferred object; if both agents prefer the same object, give each agent 1/2 of each object (this is PE due to the capacity constraints). However, when there are 3 or more agents, it may be impossible to attain all three properties.
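A minimal sketch of this two-agent, two-object rule, assuming each agent simply reports which of the two objects they prefer (the representation is illustrative):
# Two agents (0 and 1), two objects ("A" and "B"). Each agent reports a preferred object.
# Returns allocation[agent][object] = fraction of that object given to the agent.
def two_agent_two_object(pref0, pref1, objects=("A", "B")):
    if pref0 != pref1:
        # Different favourites: each agent receives all of their preferred object.
        return {0: {pref0: 1.0}, 1: {pref1: 1.0}}
    # Same favourite: split both objects equally.
    return {agent: {obj: 0.5 for obj in objects} for agent in (0, 1)}

print(two_agent_two_object("A", "B"))  # {0: {'A': 1.0}, 1: {'B': 1.0}}
print(two_agent_two_object("A", "A"))  # each agent gets half of A and half of B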
Zhou proved that, when there are 3 or more agents, and each agent must get at most 1 object and each object may be given to at most 1 agent, no truthful mechanism satisfies both PE and ETE (equal treatment of equals).
There are analogous impossibility results for agents with ordinal utilities:
See also: Truthful one-sided matching.
Approximation Algorithms.
There are several truthful algorithms that find a constant-factor approximation of the maximum utilitarian or Nash welfare.
Guo and Conitzer studied the special case of "n"=2 agents. For the case of "m"=2 resources, they showed a truthful mechanism attaining 0.828 of the maximum utilitarian welfare, and showed an upper bound of 0.841. For the case of many resources, they showed that all truthful mechanisms of the same kind approach 0.5 of the maximum utilitarian welfare. Their mechanisms are complete - they allocate all the resources.
Cole, Gkatzelis and Goel studied mechanisms of a different kind - based on the max-product allocation. For "many agents", with valuations that are homogeneous functions, they show a truthful mechanism called Partial Allocation that guarantees to each agent at least 1/"e" ≈ 0.368 of his/her utility in the max-product allocation. Their mechanism is envy-free when the valuations are additive linear functions. They show that no truthful mechanism can guarantee to all agents more than 0.5 of their max-product utility.
For the special case of "n=2 agents", they show a truthful mechanism that attains at least 0.622 of the utilitarian welfare. They also show that the mechanism that runs both the "equal-split" mechanism and the "partial-allocation" mechanism, and chooses the outcome with the higher social welfare, is still truthful, since both agents always prefer the "same" outcome. Moreover, it attains at least 2/3 of the optimal welfare. They also show an formula_0 algorithm for computing the max-product allocation, and show that the Nash-optimal allocation itself attains at least 0.933 of the utilitarian welfare.
They also show a mechanism called Strong Demand Matching, which is tailored for a setting with many agents and few resources (such as the privatization auction in the Czech Republic). The mechanism guarantees to each agent at least "p"/("p"+1) of the max-product utility, where "p" is the smallest equilibrium price of a resource when each agent has a unit budget. When there are many more agents than resources, the price of each resource is usually high, so the approximation factor approaches 1. In particular, when there are two resources, this fraction is at least "n"/("n"+1). This mechanism assigns to each agent a fraction of a single resource.
Cheung improved the competitive ratios of previous works:
|
[
{
"math_id": 0,
"text": "O(m \\log m)"
}
] |
https://en.wikipedia.org/wiki?curid=62417498
|
624209
|
Induction heating
|
Process of heating an electrically conducting object by electromagnetic induction
Induction heating is the process of heating electrically conductive materials, namely metals or semiconductors, by electromagnetic induction: an alternating current in an inductor (work coil) creates an electromagnetic field that heats, and can melt, materials such as steel, copper, brass, graphite, gold, silver, aluminium, or carbide.
An important feature of the induction heating process is that the heat is generated inside the object itself, instead of by an external heat source via heat conduction. Thus objects can be heated very rapidly. In addition, there need not be any external contact, which can be important where contamination is an issue. Induction heating is used in many industrial processes, such as heat treatment in metallurgy, Czochralski crystal growth and zone refining used in the semiconductor industry, and to melt refractory metals that require very high temperatures. It is also used in induction cooktops.
An induction heater consists of an electromagnet and an electronic oscillator that passes a high-frequency alternating current (AC) through the electromagnet. The rapidly alternating magnetic field penetrates the object, generating electric currents inside the conductor called eddy currents. The eddy currents flow through the resistance of the material, and heat it by Joule heating. In ferromagnetic and ferrimagnetic materials, such as iron, heat also is generated by magnetic hysteresis losses. The frequency of the electric current used for induction heating depends on the object size, material type, coupling (between the work coil and the object to be heated), and the penetration depth.
Applications.
Induction heating allows the targeted heating of an applicable item for applications including surface hardening, melting, brazing and soldering, and heating to fit. Due to their ferromagnetic nature, iron and its alloys respond best to induction heating. Eddy currents can, however, be generated in any conductor, and magnetic hysteresis can occur in any magnetic material. Induction heating has been used to heat liquid conductors (such as molten metals) and also gaseous conductors (such as a gas plasma—see Induction plasma technology). Induction heating is often used to heat graphite crucibles (containing other materials) and is used extensively in the semiconductor industry for the heating of silicon and other semiconductors. Utility frequency (50/60 Hz) induction heating is used for many lower-cost industrial applications as inverters are not required.
Furnace.
An induction furnace uses induction to heat metal to its melting point. Once molten, the high-frequency magnetic field can also be used to stir the hot metal, which is useful in ensuring that alloying additions are fully mixed into the melt. Most induction furnaces consist of a tube of water-cooled copper rings surrounding a container of refractory material. Induction furnaces are used in most modern foundries as a cleaner method of melting metals than a reverberatory furnace or a cupola. Sizes range from a kilogram of capacity to a hundred tonnes. Induction furnaces often emit a high-pitched whine or hum when they are running, depending on their operating frequency. Metals melted include iron and steel, copper, aluminium, and precious metals. Because it is a clean and non-contact process, it can be used in a vacuum or inert atmosphere. Vacuum furnaces use induction heating to produce specialty steels and other alloys that would oxidize if heated in the presence of air.
Welding.
A similar, smaller-scale process is used for induction welding. Plastics may also be welded by induction, if they are either doped with ferromagnetic ceramics (where magnetic hysteresis of the particles provides the heat required) or by metallic particles.
Seams of tubes can be welded this way. Currents induced in a tube run along the open seam and heat the edges resulting in a temperature high enough for welding. At this point, the seam edges are forced together and the seam is welded. The RF current can also be conveyed to the tube by brushes, but the result is still the same—the current flows along the open seam, heating it.
Manufacturing.
In the Rapid Induction Printing metal additive manufacturing process, a conductive wire feedstock and shielding gas are fed through a coiled nozzle; the feedstock is heated by induction and ejected from the nozzle as a liquid, fusing under the shielding gas to form three-dimensional metal structures. The core benefit of using induction heating in this process is significantly greater energy and material efficiency, as well as a higher degree of safety, compared with other additive manufacturing methods, such as selective laser sintering, which deliver heat to the material using a powerful laser or electron beam.
Cooking.
In induction cooking, an induction coil inside the cooktop heats the iron base of cookware by magnetic induction. Induction cookers offer safety, efficiency (the cooktop itself is not heated) and speed. Non-ferrous pans such as copper-bottomed and aluminium pans are generally unsuitable. The heat induced in the base is transferred to the food by thermal conduction.
Brazing.
Induction brazing is often used in higher production runs. It produces uniform results and is very repeatable. There are many types of industrial equipment where induction brazing is used. For instance, induction is used for brazing carbide to a shaft.
Sealing.
Induction heating is used in "cap sealing" of containers in the food and pharmaceutical industries. A layer of aluminum foil is placed over the bottle or jar opening and heated by induction to fuse it to the container. This provides a tamper-resistant seal since altering the contents requires breaking the foil.
Heating to fit.
Induction heating is often used to heat an item causing it to expand before fitting or assembly. Bearings are routinely heated in this way using utility frequency (50/60 Hz) and a laminated steel transformer-type core passing through the centre of the bearing.
Heat treatment.
Induction heating is often used in the heat treatment of metal items. The most common applications are induction hardening of steel parts, induction soldering/brazing as a means of joining metal components, and induction annealing to selectively soften an area of a steel part.
Induction heating can produce high-power densities which allow short interaction times to reach the required temperature. This gives tight control of the heating pattern with the pattern following the applied magnetic field quite closely and allows reduced thermal distortion and damage.
This ability can be used in hardening to produce parts with varying properties. The most common hardening process is to produce a localised surface hardening of an area that needs wear resistance while retaining the toughness of the original structure as needed elsewhere. The depth of induction hardened patterns can be controlled through the choice of induction frequency, power density, and interaction time.
Limits to the flexibility of the process arise from the need to produce dedicated inductors for many applications. This is quite expensive and requires the marshalling of high current densities in small copper inductors, which can require specialized engineering and "copper-fitting."
Plastic processing.
Induction heating is used in plastic injection molding machines. Induction heating improves energy efficiency for injection and extrusion processes. Heat is directly generated in the barrel of the machine, reducing warm-up time and energy consumption. The induction coil can be placed outside thermal insulation, so it operates at low temperatures and has a long life. The frequency used ranges from 30 kHz down to 5 kHz, decreasing for thicker barrels. The reduction in the cost of inverter equipment has made induction heating increasingly popular. Induction heating can also be applied to molds, offering more even mold temperature and improved product quality.
Pyrolysis.
Induction heating is used to obtain biochar in the pyrolysis of biomass. Heat is generated directly in the shaker reactor walls, enabling pyrolysis of the biomass with good mixing and temperature control.
Bolt heating.
Induction heating is used by mechanics to remove rusted bolts. The heat helps relieve the rust-induced tension between the threads.
Details.
The basic setup is an AC power supply that provides electricity with low voltage but very high current and high frequency. The workpiece to heat is placed inside an air coil driven by the power supply, usually in combination with a resonant tank capacitor to increase the reactive power. The alternating magnetic field induces eddy currents in the workpiece.
The frequency of the inductive current determines the depth to which the induced eddy currents penetrate the workpiece. In the simplest case of a solid round bar, the induced current decreases exponentially from the surface. The penetration depth formula_0, within which about 86% of the power is concentrated, can be derived as formula_1, where formula_0 is the depth in metres, formula_2 is the resistivity of the workpiece in ohm-metres, formula_3 is the dimensionless relative magnetic permeability of the workpiece, and formula_4 is the frequency of the AC field in Hz, equal to formula_5 for a field of period "T". The equivalent resistance of the workpiece, and thus the efficiency, is a function of the workpiece diameter formula_6 over the reference depth formula_7, increasing rapidly up to about formula_8. Since the workpiece diameter is fixed by the application, the value of formula_9 is determined by the reference depth. Decreasing the reference depth requires increasing the frequency. Since the cost of induction power supplies increases with frequency, supplies are often optimized to achieve a critical frequency at which formula_8. If operated below the critical frequency, heating efficiency is reduced because eddy currents from either side of the workpiece impinge upon one another and cancel out. Increasing the frequency beyond the critical frequency creates minimal further improvement in heating efficiency, although it is used in applications that seek to heat treat only the surface of the workpiece.
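As an illustration of the penetration-depth formula, the sketch below evaluates it for two typical materials; the resistivity and permeability values are assumed handbook figures, not data from this article.
import math

# delta = 503 * sqrt(rho / (mu_r * f)) -- penetration depth in metres.
def penetration_depth(rho_ohm_m, mu_r, freq_hz):
    return 503 * math.sqrt(rho_ohm_m / (mu_r * freq_hz))

# Copper at room temperature (rho ~ 1.7e-8 ohm*m, mu_r ~ 1) at 10 kHz:
print(penetration_depth(1.7e-8, 1, 10e3))   # ~6.6e-4 m, i.e. about 0.66 mm

# Magnetic steel below the Curie point (rho ~ 1e-7 ohm*m, mu_r ~ 100) at 10 kHz:
print(penetration_depth(1e-7, 100, 10e3))   # ~1.6e-4 m, i.e. about 0.16 mm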
Relative depth varies with temperature because resistivities and permeability vary with temperature. For steel, the relative permeability drops to 1 above the Curie temperature. Thus the reference depth can vary with temperature by a factor of 2–3 for nonmagnetic conductors and by as much as 20 for magnetic steels.
Magnetic materials improve the induction heat process because of hysteresis. Materials with high permeability (100–500) are easier to heat with induction heating. Hysteresis heating occurs below the Curie temperature, where materials retain their magnetic properties. High permeability below the Curie temperature in the workpiece is useful. Temperature difference, mass, and specific heat influence the workpiece heating.
The energy transfer of induction heating is affected by the distance between the coil and the workpiece. Energy losses occur through heat conduction from workpiece to fixture, natural convection, and thermal radiation.
The induction coil is usually made of copper tubing and fluid coolant. Diameter, shape, and number of turns influence the efficiency and field pattern.
Core type furnace.
The furnace consists of a circular hearth that contains the charge to be melted in the form of a ring. The metal ring is large in diameter and is magnetically interlinked with an electrical winding energized by an AC source. It is essentially a transformer in which the charge to be heated forms a single-turn short-circuited secondary and is magnetically coupled to the primary by an iron core.
|
[
{
"math_id": 0,
"text": "\\delta"
},
{
"math_id": 1,
"text": "\\delta = 503 \\sqrt{\\frac{\\rho}{\\mu f}}"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "{\\frac{1}{T}}"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "d"
},
{
"math_id": 8,
"text": "a/d=4"
},
{
"math_id": 9,
"text": "a/d"
}
] |
https://en.wikipedia.org/wiki?curid=624209
|
62421802
|
Perturbed angular correlation
|
The perturbed γ-γ angular correlation, PAC for short or PAC-Spectroscopy, is a method of nuclear solid-state physics with which magnetic and electric fields in crystal structures can be measured. In doing so, electrical field gradients and the Larmor frequency in magnetic fields as well as dynamic effects are determined. With this very sensitive method, which requires only about 10–1000 billion atoms of a radioactive isotope per measurement, material properties in the local structure, phase transitions, magnetism and diffusion can be investigated. The PAC method is related to nuclear magnetic resonance and the Mössbauer effect, but shows no signal attenuation at very high temperatures.
Today only the time-differential perturbed angular correlation (TDPAC) is used.
History and development.
PAC goes back to a theoretical work by Donald R. Hamilton from 1940. The first successful experiment was carried out by Brady and Deutsch in 1947. These first PAC experiments essentially investigated the spin and parity of nuclear states. However, it was recognized early on that electric and magnetic fields interact with the nuclear moment, providing the basis for a new form of material investigation: nuclear solid-state spectroscopy.
Step by step the theory was developed.
After Abragam and Pound published their work on the theory of PAC in 1953, including extranuclear fields, many studies with PAC were carried out. In the 1960s and 1970s, interest in PAC experiments increased sharply, focusing mainly on magnetic and electric fields in crystals into which the probe nuclei were introduced. In the mid-1960s, ion implantation was discovered, providing new opportunities for sample preparation. The rapid electronic development of the 1970s brought significant improvements in signal processing. From the 1980s to the present, PAC has emerged as an important method for the study and characterization of materials, e.g. for the study of semiconductor materials, intermetallic compounds, surfaces and interfaces, and a number of applications have also appeared in biochemistry.
While until about 2008 PAC instruments used conventional high-frequency electronics of the 1970s, in 2008 Christian Herden and Jens Röder et al. developed the first fully digitized PAC instrument that enables extensive data analysis and parallel use of multiple probes. Replicas and further developments followed.
Measuring principle.
PAC uses radioactive probes which have an intermediate state with decay times of 2 ns to approximately 10 μs; a common example is 111In. After electron capture (EC), indium transmutes to cadmium. Immediately thereafter, the 111cadmium nucleus is predominantly in the excited 7/2+ nuclear spin state and only to a very small extent in the 11/2- state; the latter is not considered further here. The 7/2+ excited state transitions to the 5/2+ intermediate state by emitting a 171 keV γ-quantum. The intermediate state has a lifetime of 84.5 ns and is the sensitive state for PAC. This state in turn decays into the 1/2+ ground state by emitting a γ-quantum of 245 keV. PAC now detects both γ-quanta and evaluates the first as a start signal and the second as a stop signal.
Now one measures the time between start and stop for each event; a start-stop pair is called a coincidence. Since the intermediate state decays according to the laws of radioactive decay, plotting the number of coincidences against time yields an exponential curve with the lifetime of this intermediate state. Because the second γ-quantum is not emitted spherically symmetrically (the so-called anisotropy, an intrinsic property of the nucleus in this transition), the surrounding electric and/or magnetic fields impose a periodic perturbation (hyperfine interaction) on the angular correlation. This perturbation appears as a wave pattern superimposed on the exponential decay recorded by two detector pairs, one at 90° and one at 180° to each other; the waveforms of the two detector pairs are shifted relative to each other. Very simply, one can imagine a fixed observer looking at a lighthouse whose light periodically becomes brighter and darker. Correspondingly, a detector arrangement, usually four detectors in a planar 90° arrangement or six detectors in an octahedral arrangement, "sees" the precession of the nucleus at frequencies on the order of MHz to GHz.
For n detectors, the number of individual spectra is z = n²−n, i.e. 12 for n = 4 and 30 for n = 6. To obtain a PAC spectrum, the 90° and 180° single spectra are combined in such a way that the exponential functions cancel out and the different detector efficiencies also cancel. What remains is the pure perturbation function, as shown in the example of a complex PAC spectrum. Its Fourier transform gives the transition frequencies as peaks.
formula_0, the count rate ratio, is obtained from the single spectra by using:
formula_1
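As a small illustration of how this combination removes the exponential decay, the sketch below builds synthetic 90° and 180° count rates from an assumed lifetime, frequency and perturbed anisotropy and forms the ratio; the numbers are purely illustrative.
import numpy as np

# R(t) = 2 * (W180 - W90) / (W180 + 2 * W90), applied to synthetic count rates.
def count_rate_ratio(w180, w90):
    return 2 * (w180 - w90) / (w180 + 2 * w90)

t = np.linspace(0, 500e-9, 6)                        # time after the start quantum, in seconds
tau, omega, a22g22 = 84.5e-9, 2 * np.pi * 30e6, 0.1  # assumed lifetime, frequency, perturbed anisotropy
decay = np.exp(-t / tau)
w180 = decay * (1 + a22g22 * np.cos(omega * t))       # P2(cos 180 deg) = 1
w90 = decay * (1 - 0.5 * a22g22 * np.cos(omega * t))  # P2(cos 90 deg) = -1/2
print(count_rate_ratio(w180, w90))  # the exponential cancels, leaving a22g22 * cos(omega * t)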
Depending on the spin of the intermediate state, a different number of transition frequencies show up. For 5/2 spin, 3 transition frequencies can be observed with the ratio ω1+ω2=ω3. As a rule, a different combination of 3 frequencies can be observed for each associated site in the unit cell.
PAC is a statistical method: each radioactive probe atom sits in its own environment. In crystals, due to the high regularity of the arrangement of the atoms or ions, the environments are identical or very similar, so that probes on identical lattice sites experience the same hyperfine field or magnetic field, which then becomes measurable in a PAC spectrum. For probes in very different environments, such as in amorphous materials, a broad frequency distribution or none at all is usually observed, and the PAC spectrum appears flat, without frequency response. With single crystals, depending on the orientation of the crystal relative to the detectors, certain transition frequencies can be reduced or extinguished, as can be seen in the example of the PAC spectrum of zinc oxide (ZnO).
Instrumental setup.
In a typical PAC spectrometer, a setup of four detectors in a planar 90°/180° arrangement or six detectors in an octahedral arrangement is placed around the radioactive sample. The detectors used are scintillation crystals of BaF2 or NaI; modern instruments mainly use LaBr3:Ce or CeBr3. Photomultipliers convert the weak flashes of light generated in the scintillator by gamma radiation into electrical signals. In classical instruments these signals are amplified and processed in logical AND/OR circuits together with time windows, assigned to the various detector combinations (for 4 detectors: 12, 13, 14, 21, 23, 24, 31, 32, 34, 41, 42, 43) and counted. Modern digital spectrometers use digitizer cards that directly sample the signal, convert it into energy and time values and store them on hard drives. These are then searched by software for coincidences. Whereas in classical instruments "windows" limiting the respective γ-energies must be set before processing, this is not necessary for digital PAC during the recording of the measurement; the analysis only takes place in the second step. In the case of probes with complex cascades, this makes it possible to perform a data optimization or to evaluate several cascades in parallel, as well as to measure different probes simultaneously. The resulting data volumes can be between 60 and 300 GB per measurement.
Sample materials.
In principle, all solid and liquid materials can be investigated as samples. Depending on the question and the purpose of the investigation, certain framework conditions arise. For the observation of clear perturbation frequencies it is necessary, due to the statistical method, that a certain proportion of the probe atoms be in a similar environment and, for example, experience the same electric field gradient. Furthermore, during the time window between the start and stop quanta, i.e. approximately 5 half-lives of the intermediate state, the direction of the electric field gradient must not change. In liquids, therefore, no perturbation frequency can be measured because of the frequent collisions, unless the probe is complexed in large molecules, such as proteins. Samples with proteins or peptides are usually frozen to improve the measurement.
The most studied materials with PAC are solids such as semiconductors, metals, insulators, and various types of functional materials. For the investigations these are usually crystalline. Amorphous materials do not have highly ordered structures; however, they do have short-range order, which shows up in PAC spectroscopy as a broad distribution of frequencies. Nano-materials have a crystalline core and a shell with a rather amorphous structure (the core-shell model). The smaller the nanoparticle, the larger the volume fraction of this amorphous portion. In PAC measurements, this is seen as a decrease of the crystalline frequency component, i.e. a reduction of the amplitude (attenuation).
Sample preparation.
The amount of a suitable PAC isotope required for a measurement is between about 10 billion and 1000 billion atoms (10^10-10^12). The right amount depends on the particular properties of the isotope. 10 billion atoms are a very small amount of substance: for comparison, one mole contains about 6.022x10^23 particles. 10^12 atoms in one cubic centimeter of beryllium correspond to a concentration of only about 2 nmol/L (nanomol = 10^-9 mol). The radioactive samples each have an activity of 0.1-5 MBq, which is of the order of the exemption limit for the respective isotope.
How the PAC isotopes are brought into the sample to be examined is up to the experimenter and the technical possibilities. The following methods are usual:
Implantation.
During implantation, a radioactive ion beam is generated and directed onto the sample material. Due to the kinetic energy of the ions (1-500 keV) they fly into the crystal lattice and are slowed down by collisions. They either come to rest at interstitial sites or push a lattice atom out of its place and replace it. This leads to a disruption of the crystal structure. These disorders can be investigated with PAC. By annealing, these disturbances can be healed. If, on the other hand, radiation defects in the crystal and their healing are to be examined, unannealed samples are measured and then annealed step by step.
The implantation is usually the method of choice, because it can be used to produce very well-defined samples.
Evaporation.
In a vacuum, the PAC probe can be evaporated onto the sample. The radioactive probe is applied to a hot plate or filament, where it is brought to the evaporation temperature and condenses on the opposite sample material. With this method, for example, surfaces are examined. Furthermore, by vapor deposition of other materials, interfaces can be produced, which can be studied during annealing with PAC to observe their changes. Similarly, the PAC probe can be deposited by sputtering using a plasma.
Diffusion.
In the diffusion method, the radioactive probe is usually diluted in a solvent, applied to the sample, dried, and diffused into the material by annealing. The solution with the radioactive probe should be as pure as possible, since any other substances can diffuse into the sample and thereby affect the measurement results. The probe should be sufficiently diluted in the sample, and the diffusion process should be planned so that a uniform distribution or sufficient penetration depth is achieved.
Added during synthesis.
PAC probes may also be added during the synthesis of sample materials to achieve the most uniform distribution in the sample. This method is particularly well suited if, for example, the PAC probe diffuses only poorly in the material and a higher concentration in grain boundaries is to be expected. Since only very small samples are necessary with PAC (about 5 mm), micro-reactors can be used. Ideally, the probe is added to the liquid phase of the sol-gel process or one of the later precursor phases.
Neutron activation.
In neutron activation, the probe is prepared directly from the sample material by converting a very small part of one of its elements into the desired PAC probe or its parent isotope by neutron capture. As with implantation, radiation damage must be healed. This method is limited to sample materials containing elements from which neutron-capture PAC probes can be made. Furthermore, samples can be intentionally doped with those elements that are to be activated. For example, hafnium is excellently suited for activation because of its large capture cross section for neutrons.
Nuclear reaction.
Rarely used are direct nuclear reactions, in which nuclei are converted into PAC probes by bombardment with high-energy elementary particles or protons. This causes major radiation damage, which must be healed. This method is used with PAD, which belongs to the PAC methods.
Laboratories.
The currently largest PAC laboratory in the world is located at ISOLDE at CERN, with about 10 PAC instruments, and receives major funding from the BMBF. Radioactive ion beams are produced at ISOLDE by bombarding target materials (uranium carbide, liquid tin, etc.) with protons from the booster, evaporating the spallation products at high temperatures (up to 2000 °C), then ionizing and accelerating them. With the subsequent mass separation, usually very pure isotope beams can be produced, which can be implanted in PAC samples. Of particular interest to PAC are short-lived isomeric probes such as 111mCd, 199mHg, 204mPb, and various rare-earth probes.
Theory.
The first formula_2-quantum (formula_3) is emitted isotropically. Detecting this quantum in a detector selects, out of the many possible directions, a subset with a given orientation. The second formula_2-quantum (formula_4) has an anisotropic emission and shows the effect of the angular correlation. The goal is to measure the relative probability formula_5 of detecting formula_6 at the fixed angle formula_7 with respect to formula_8. The probability is given by the angular correlation (perturbation theory):
formula_9
For a formula_2-formula_2 cascade, formula_10 is even due to the conservation of parity, and:
formula_11
where formula_12 is the spin of the intermediate state and formula_13 with formula_14 denotes the multipolarity of the two transitions. For pure multipole transitions, formula_15.
formula_16 is the anisotropy coefficient that depends on the angular momentum of the intermediate state and the multipolarities of the transition.
The radioactive nucleus is built into the sample material and emits two formula_2-quanta upon decay. During the lifetime of the intermediate state, i.e. the time between formula_8 and formula_6, the nucleus experiences a perturbation due to the hyperfine interaction with its electric and magnetic environment. This perturbation changes the angular correlation to:
formula_17
formula_18 is the perturbation factor. Due to the electric and magnetic interaction, the angular momentum of the intermediate state formula_13 experiences a torque about its axis of symmetry. Quantum-mechanically, this means that the interaction leads to transitions between the M states. The second formula_2-quantum (formula_6) is then emitted from the intermediate level. This change of population is the reason for the attenuation of the correlation.
The interaction occurs between the magnetic dipole moment formula_19 of the intermediate state formula_12 and an external magnetic field formula_20, and between the nuclear quadrupole moment and the extranuclear electric field gradient formula_21.
Magnetic dipole interaction.
For the magnetic dipole interaction, the frequency of the precession of the nuclear spin around the axis of the magnetic field formula_20 is given by:
formula_22
formula_23
formula_24 is the Landé g-factor and formula_25 is the nuclear magneton.
With formula_26 follows:
formula_27
From the general theory we get:
formula_28
For the magnetic interaction follows:
formula_29
Static electric quadrupole interaction.
The energy of the hyperfine electrical interaction between the charge distribution of the core and the extranuclear static electric field can be extended to multipoles. The monopole term only causes an energy shift and the dipole term disappears, so that the first relevant expansion term is the quadrupole term:
formula_30 with i, j = 1, 2, 3
This can be written as a product of the quadrupole moment formula_31 and the electric field gradient formula_32. Both tensors are of second rank. Higher orders have too small an effect to be measured with PAC.
The electric field gradient is the second derivative of the electric potential formula_33 at the core:
formula_34
formula_32 is diagonalized, so that:
formula_35
The matrix is traceless in the principal axis system (Laplace equation):
formula_36
Typically, the electric field gradient is described by its largest component formula_21 and the asymmetry parameter formula_37:
formula_38, formula_39
In cubic crystals, the axis parameters of the unit cell x, y, z are of the same length. Therefore:
formula_40 and formula_41
In axially symmetric systems, formula_41.
For axially symmetric electric field gradients, the energy of the substates has the values:
formula_42
The energy difference between two substates, formula_43 and formula_44, is given by:
formula_45
The quadrupole frequency formula_46 is introduced.
The following formulas are important for the evaluation:
formula_47
formula_48
Publications mostly list formula_49. The elementary charge formula_50 and the Planck constant formula_51 are well known or well defined. The nuclear quadrupole moment formula_52, however, is often determined only rather inaccurately (often only to 2-3 significant digits). Because formula_49 can be determined much more accurately than formula_52, it is not useful to specify only formula_21, because of error propagation. In addition, formula_49 is independent of spin! This means that measurements on two different isotopes of the same element can be compared, such as 199mHg(5/2−), 197mHg(5/2−) and 201mHg(9/2−). Further, formula_49 can be used as a fingerprint method.
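As a rough worked example of the relation formula_48, the sketch below evaluates the quadrupole coupling constant for assumed, order-of-magnitude values of the quadrupole moment and field gradient; these are illustrative inputs, not data from this article.
# nu_Q = e * Q * V_zz / h, evaluated for assumed, order-of-magnitude inputs.
E_CHARGE = 1.602176634e-19   # C
H_PLANCK = 6.62607015e-34    # J*s

def quadrupole_coupling_hz(q_barn, v_zz):
    """q_barn: quadrupole moment in barn (1 b = 1e-28 m^2); v_zz in V/m^2."""
    return E_CHARGE * (q_barn * 1e-28) * v_zz / H_PLANCK

# Assumed values of a typical order for a probe nucleus in a non-cubic lattice:
# Q ~ 0.8 b, V_zz ~ 1e21 V/m^2.
nu_q = quadrupole_coupling_hz(0.8, 1e21)
print(f"nu_Q ~ {nu_q / 1e6:.0f} MHz")   # roughly 19 MHz, in the usual PAC range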
For the energy difference then follows:
formula_53
If formula_41, then:
formula_54
with:
formula_55
For integer spin:
formula_56 and formula_57
For half-integer spin:
formula_58 and formula_59
The perturbation factor is given by:
formula_60
With the factor for the probabilities of the observed frequencies:
formula_61
As with the magnetic dipole interaction, the electric quadrupole interaction also induces a precession of the angular correlation in time, modulated by the quadrupole interaction frequencies. These frequencies are superpositions of the different transition frequencies formula_62. The relative amplitudes of the various components depend on the orientation of the electric field gradient relative to the detectors (symmetry axis) and on the asymmetry parameter formula_37. In order to compare measurements made with different probe nuclei, one needs a parameter that allows a direct comparison; therefore, the quadrupole coupling constant formula_49, which is independent of the nuclear spin formula_63, is introduced.
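To make the frequency relations above concrete, the short sketch below evaluates the axially symmetric case (formula_41) for a spin-5/2 intermediate state such as that of the 111Cd probe; it merely restates the formulas above with illustrative numbers.
# Axially symmetric EFG (eta = 0), half-integer intermediate spin I = 5/2 (e.g. 111Cd).
# Sublevel energies are proportional to 3*m^2 - I*(I+1); the observable frequencies are
# omega_n = n * omega0_Q with n = |m^2 - m'^2| / 2 and omega0_Q = 6 * omega_Q.
I = 2.5
m_levels = [0.5, 1.5, 2.5]                    # |m| sublevels (+-m are degenerate)
energies = [3 * m * m - I * (I + 1) for m in m_levels]
print(energies)                               # [-8.0, -2.0, 10.0] in units of the quadrupole energy scale
pairs = [(0.5, 1.5), (1.5, 2.5), (0.5, 2.5)]  # the three allowed splittings
n = [abs(m1**2 - m2**2) / 2 for m1, m2 in pairs]
print(n)                                      # [1.0, 2.0, 3.0] -> omega1 : omega2 : omega3 = 1 : 2 : 3
print(n[0] + n[1] == n[2])                    # True: omega1 + omega2 = omega3, as noted earlier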
Combined interactions.
If there is a magnetic and electrical interaction at the same time on the radioactive nucleus as described above, combined interactions result. This leads to the splitting of the respectively observed frequencies. The analysis may not be trivial due to the higher number of frequencies that must be allocated. These then depend in each case on the direction of the electric and magnetic field to each other in the crystal. PAC is one of the few ways in which these directions can be determined.
Dynamic interactions.
If the hyperfine field fluctuates during the lifetime formula_64 of the intermediate level, due to jumps of the probe into another lattice position or jumps of a nearby atom into another lattice position, the correlation is lost. For the simple case of an undistorted lattice of cubic symmetry, for a jump rate of formula_65 between formula_66 equivalent sites, an exponential damping of the static formula_67-terms is observed:
formula_68 formula_69
Here formula_70 is a constant to be determined, which should not be confused with the decay constant formula_71. For large values of formula_72, only pure exponential decay can be observed:
formula_73
The limiting case according to Abragam and Pound for formula_70, if formula_74, is:
formula_75
After effects.
Nuclei that transmute before the formula_2-formula_2 cascade usually cause a change in charge state in ionic crystals (e.g. from In3+ to Cd2+). As a result, the lattice must respond to these changes. Defects or neighboring ions can also migrate. Likewise, the high-energy transition process may cause the Auger effect, which can bring the nucleus into higher ionization states. The normalization of the charge state then depends on the conductivity of the material. In metals, this process takes place very quickly; in semiconductors and insulators it takes considerably longer. In all these processes, the hyperfine field changes. If this change falls within the formula_2-formula_2 cascade, it may be observed as an after effect.
The number of nuclei in state (a) is depopulated both by decay to state (b) and by decay to state (c):
formula_76
with: formula_77
From this one obtains the exponential decay:
formula_78
For the total number of nuclei in the static state (c) follows:
formula_79
The initial occupation probabilities formula_80 are for static and dynamic environments:
formula_81
formula_82
General theory.
In the general theory, for a transition formula_83, one has:
formula_84
formula_85
formula_86
formula_87 the minimum of formula_88
formula_89
formula_90
formula_91
formula_92
with:
formula_93
|
[
{
"math_id": 0,
"text": "R(t)"
},
{
"math_id": 1,
"text": "R(t)=2\\frac{W(180^\\circ,t)-W(90^\\circ,t)}{W(180^\\circ,t)+2W(90^\\circ,t)}\n"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "\\gamma_1, k_1"
},
{
"math_id": 4,
"text": "\\gamma_2, k_2"
},
{
"math_id": 5,
"text": "W(\\Theta)\\textrm{d}(\\Omega)"
},
{
"math_id": 6,
"text": "\\gamma_2"
},
{
"math_id": 7,
"text": "\\Theta"
},
{
"math_id": 8,
"text": "\\gamma_1"
},
{
"math_id": 9,
"text": "W(\\Theta)=\\sum^{k_{max}}_{k}A_{kk}P_{k}cos(\\Theta)\n"
},
{
"math_id": 10,
"text": "k"
},
{
"math_id": 11,
"text": "0<k<\\textrm{min}(2I_S, I_i+I'_i)\n"
},
{
"math_id": 12,
"text": "I_S"
},
{
"math_id": 13,
"text": "I_i"
},
{
"math_id": 14,
"text": "i=1;2"
},
{
"math_id": 15,
"text": "I_i=I'_i"
},
{
"math_id": 16,
"text": "A_ {kk}"
},
{
"math_id": 17,
"text": "W(\\Theta)=\\sum^{k_{max}}_{k}A_{kk}G_{kk}\n"
},
{
"math_id": 18,
"text": "G_{kk}"
},
{
"math_id": 19,
"text": "\\vec{\\nu}"
},
{
"math_id": 20,
"text": "\\vec{B}"
},
{
"math_id": 21,
"text": "V_{zz}"
},
{
"math_id": 22,
"text": " \\omega_L=\\frac{g\\cdot u_N\\cdot B}{\\hbar}\n"
},
{
"math_id": 23,
"text": "\\Delta E=\\hbar\\cdot\\omega_L=-g\\cdot u_N\\cdot B\n"
},
{
"math_id": 24,
"text": "g"
},
{
"math_id": 25,
"text": "u_N"
},
{
"math_id": 26,
"text": "N=M-M'"
},
{
"math_id": 27,
"text": "E_{magn}(M)-E_{magn}(M')=-(M-M')g\\mu_NB_z=N\\hbar\\omega_L\n"
},
{
"math_id": 28,
"text": "G_{k_1k_2}^{NN}=\\sqrt{(2k_1+1)(2k_2+1)}\\cdot e^{-iN\\omega_L\n t}\\times\\sum_M\\begin{pmatrix}\nI&I&k_1\\\\\nM'&-M&N\\\\\n\\end{pmatrix}\\begin{pmatrix}\nI&I&k_2\\\\\nM'&-M&N\\\\\n\\end{pmatrix}\n"
},
{
"math_id": 29,
"text": "G_{k_1k_2}^{NN}=e^{\\left({-iN\\omega_Lt}\\right)}\n"
},
{
"math_id": 30,
"text": "E_Q=\\sum_{ij}Q_{ij}V_{ij}\n"
},
{
"math_id": 31,
"text": "Q_{ij}"
},
{
"math_id": 32,
"text": "V_{ij}"
},
{
"math_id": 33,
"text": "\\Phi(\\vec{r})"
},
{
"math_id": 34,
"text": "V_{ij}=\\frac{\\partial^2\\Phi(\\vec{r})}{\\partial x_i\\partial\n x_j}=\\begin{pmatrix}\nV_{xx}&0&0\\\\\n0&V_{yy}&0\\\\\n0&0&V_{zz}\\\\\n\\end{pmatrix}\n"
},
{
"math_id": 35,
"text": "|V_{zz}|\\ge|V_{yy}|\\ge|V_{xx}|\n"
},
{
"math_id": 36,
"text": "V_{xx}+V_{yy}+V_{zz}=0\n"
},
{
"math_id": 37,
"text": "\\eta"
},
{
"math_id": 38,
"text": "\\eta=\\frac{V_{yy}-V_{xx}}{V_{zz}}\n"
},
{
"math_id": 39,
"text": "0\\le\\eta\\le 1"
},
{
"math_id": 40,
"text": "V_{zz}=V_{yy}=V_{xx}"
},
{
"math_id": 41,
"text": "\\eta=0"
},
{
"math_id": 42,
"text": "E_Q=\\frac{eQV_{zz}}{4I(2I-1)}\\cdot (3m^2-I(I+1))\n"
},
{
"math_id": 43,
"text": "M"
},
{
"math_id": 44,
"text": "M'"
},
{
"math_id": 45,
"text": "\\Delta E_Q=E_m-E_{m'}=\\frac{eQV_{zz}}{4I(2I-1)}\\cdot 3|M^2-M'^2|\n"
},
{
"math_id": 46,
"text": "\\omega_Q"
},
{
"math_id": 47,
"text": "\\omega_Q=\\frac{eQV_{zz}}{4I(2I-1)\\hbar}=\\frac{2\\pi eQV_{zz}}{4I(2I-1)h}=\\frac{2\\pi\\nu_Q}{4I(2I-1)}\n"
},
{
"math_id": 48,
"text": "\\nu_Q=\\frac{eQ}{h}V_{zz}=\\frac{4I(2I-1)\\omega_Q}{2\\pi}\n"
},
{
"math_id": 49,
"text": "\\nu_Q"
},
{
"math_id": 50,
"text": "e"
},
{
"math_id": 51,
"text": "h"
},
{
"math_id": 52,
"text": "Q"
},
{
"math_id": 53,
"text": "\\Delta E_Q=\\hbar\\omega_Q\\cdot 3|m^2-m'^2|\n"
},
{
"math_id": 54,
"text": "\\omega^n=n\\cdot\\omega^{0}_{Q}\n"
},
{
"math_id": 55,
"text": "\\omega^{0}_{Q}=\\textrm{min}\\left(\\frac{\\Delta E_Q}{\\hbar}\\right)\n"
},
{
"math_id": 56,
"text": "\\omega^{0}_{Q}=3\\cdot\\omega_Q"
},
{
"math_id": 57,
"text": "n=|M^2-M'^2|"
},
{
"math_id": 58,
"text": "\\omega^{0}_{Q}=6\\cdot\\omega_Q"
},
{
"math_id": 59,
"text": "n=\\frac{1}{2}|M^2-M'^2|"
},
{
"math_id": 60,
"text": "G_{k_1k_2}^{NN}=\\sum_ns_{nN}^{k_1k_2}\\cos{(n\\omega_Q^0t)}\n"
},
{
"math_id": 61,
"text": "s_{nN}^{k_1k_2}=\\sqrt{(2k_1+1)(2k_2+1)}\\cdot\\sum_{M,M'}\\begin{pmatrix}\nI&I&k_1\\\\\nM'&-M&N\\\\\n\\end{pmatrix}\\begin{pmatrix}\nI&I&k_2\\\\\nM'&-M&N\\\\\n\\end{pmatrix}\n"
},
{
"math_id": 62,
"text": "\\omega_n"
},
{
"math_id": 63,
"text": "\\vec{I}"
},
{
"math_id": 64,
"text": "\\tau_n"
},
{
"math_id": 65,
"text": "\\omega_s<0.2\\cdot \\nu_Q"
},
{
"math_id": 66,
"text": "N_s"
},
{
"math_id": 67,
"text": "G_{22}(t)"
},
{
"math_id": 68,
"text": "G_{22}^{dyn}(t)=e^{-\\lambda_d t}G_{22}(t)"
},
{
"math_id": 69,
"text": "\\lambda_d=(N_s-1)\\omega_s\n"
},
{
"math_id": 70,
"text": "\\lambda_d"
},
{
"math_id": 71,
"text": "\\lambda=\\frac{1}{\\tau}"
},
{
"math_id": 72,
"text": "\\omega_s"
},
{
"math_id": 73,
"text": "G_{22}^{dyn}(t)=e^{-\\lambda_d t}\n"
},
{
"math_id": 74,
"text": "\\omega_s>3\\cdot\\nu_Q"
},
{
"math_id": 75,
"text": "\\lambda_d\\approx\\frac{2,5\\nu_Q^2}{N_s\\omega_s}\n"
},
{
"math_id": 76,
"text": "\\mathrm{d}N_a=-N_a\\left(\\Gamma_r+\\frac{1}{\\tau_{7/2}}\\right)\\mathrm{d}t"
},
{
"math_id": 77,
"text": "\\tau_{7/2}=\\frac{120\\textrm{ps}}{\\ln{2}}"
},
{
"math_id": 78,
"text": "N_a(t)=N_{a_0}\\cdot e^\\left({-(\\Gamma_r +\\frac{1}{\\tau_{7/2}})t}\\right)\n"
},
{
"math_id": 79,
"text": "N_{c}(t)=\\Gamma_r\\int\\limits_0^tN_a(t)\\mathrm{d}t=N_0\\frac{\\Gamma_r\\tau_{7/2}}{\\Gamma_r\\tau_{7/2}+1}\\left(1-e^{-(\\Gamma_r+\\frac{1}{\\tau_{7/2}})t}\\right)\n"
},
{
"math_id": 80,
"text": "\\rho"
},
{
"math_id": 81,
"text": "\\rho_{stat}=\\frac{\\Gamma_r\\tau_{7/2}}{\\Gamma_r\\tau_{7/2}+1}\n"
},
{
"math_id": 82,
"text": "\\rho_{dyn}=\\frac{1}{\\Gamma_r\\tau_{7/2}+1}\n"
},
{
"math_id": 83,
"text": "M_i\\rightarrow M_f"
},
{
"math_id": 84,
"text": "W(M_i\\rightarrow M_f)=\\left|\\sum_M\\langle\nM_f|\\mathcal{H}_2|M\\rangle\\langle M|\\mathcal{H}_1|M_i\\rangle\\right|^2\n"
},
{
"math_id": 85,
"text": "W(\\vec{k}_1,\\vec{k}_2)=\\sum_{M_i,M_f,\\sigma_1,\\sigma_2}\\left|\\sum_M\\langle\nM_f|\\mathcal{H}_2|M\\rangle\\langle M|\\mathcal{H}_1|M_i\\rangle\\right|^2\n"
},
{
"math_id": 86,
"text": "W(\\vec{k}_1,\\vec{k}_2)=W(\\Theta)=\\sum_{k_{gerade}}^{k_{max}}A_k(1)A_k(2)P_k(\\cos{\\Theta})\n"
},
{
"math_id": 87,
"text": "0\\leq k\\leq"
},
{
"math_id": 88,
"text": "(2I,l_1+l_1',l_2+l_2')"
},
{
"math_id": 89,
"text": "W(\\Theta,t)=\\sum_{k=2,4}A_{kk}P_k(\\cos{\\Theta})"
},
{
"math_id": 90,
"text": "|M_a\\rangle\\rightarrow\\Lambda(t)|M_a=\\sum_{M_b}|M_b\\rangle\\langle M_b|\\Lambda(t)\n|M_a\\rangle "
},
{
"math_id": 91,
"text": "W(\\vec{k}_1,\\vec{k}_2,t)=\\sum_{M_i,M_f,\\sigma_1,\\sigma_2}\\left|\\sum_{M_a}\\langle\nM_f|\\mathcal{H}_2\\Lambda(t)|M_a\\rangle\\langle\nM_a|\\mathcal{H}_1|M_i\\rangle\\right|^2=\\langle\\rho(\\vec{k}_2)\\rangle_t\n"
},
{
"math_id": 92,
"text": "W(\\vec{k}_1,\\vec{k}_2,t)=\\sum_{k_1,k_2,N_1,N_2} A_{k_1}(1)A_{k_2}(2)\\frac{1}{\\sqrt{(2k_1+1)(2k_2+1)}}\\times\nY_{k_1}^{N_1}(\\Theta_1,\\Phi_1)\\cdot\nY_{k_2}^{N_2}(\\Theta_2,\\Phi_2)G_{k_1k_2}^{N_1N_2}(t)\n"
},
{
"math_id": 93,
"text": "\nG_{k_1k_2}^{N_1N_2}=\\sum_{M_a,M_b} (-1)^{2I+M_a+M_b}\\sqrt{(\n2k_1+1)(2k_2+)}\\times\\langle\nM_b|\\Lambda(t)|M_a\\rangle\\langle\nM_b'|\\Lambda(t)|M_a'\\rangle^{*}\\times\\begin{pmatrix}\nI&I& k_1\\\\\nM_a'&-M_a&N_1\n\\end{pmatrix}\n\\begin{pmatrix}\nI&I&k_2\\\\\nM_b'&-M_b& N_2\n\\end{pmatrix}\n"
}
] |
https://en.wikipedia.org/wiki?curid=62421802
|
624231
|
Voltage regulator
|
System designed to maintain a constant voltage
A voltage regulator is a system designed to automatically maintain a constant voltage. It may use a simple feed-forward design or may include negative feedback. It may use an electromechanical mechanism, or electronic components. Depending on the design, it may be used to regulate one or more AC or DC voltages.
Electronic voltage regulators are found in devices such as computer power supplies where they stabilize the DC voltages used by the processor and other elements. In automobile alternators and central power station generator plants, voltage regulators control the output of the plant. In an electric power distribution system, voltage regulators may be installed at a substation or along distribution lines so that all customers receive steady voltage independent of how much power is drawn from the line.
Electronic voltage regulators.
A simple voltage/current regulator can be made from a resistor in series with a diode (or series of diodes). Due to the logarithmic shape of diode V-I curves, the voltage across the diode changes only slightly due to changes in current drawn or changes in the input. When precise voltage control and efficiency are not important, this design may be adequate. Since the forward voltage of a diode is small, this kind of voltage regulator is only suitable for low-voltage regulated output. When a higher regulated output voltage is needed, a zener diode or series of zener diodes may be employed. Zener diode regulators make use of the zener diode's fixed reverse voltage, which can be quite large.
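As a rough illustration of how such a shunt zener regulator might be sized, the sketch below (Python, with illustrative component values that are not taken from any particular design) picks a series resistor that keeps an assumed zener diode biased at the worst-case load current and checks the resistor's dissipation.
<syntaxhighlight lang="python">
# Minimal sizing sketch for a shunt zener regulator (illustrative values only).
# The series resistor must pass the worst-case load current plus a minimum
# zener bias current while dropping (Vin_min - Vz).

def zener_series_resistor(vin_min, vz, i_load_max, i_zener_min):
    """Largest series resistance that still keeps the zener biased."""
    return (vin_min - vz) / (i_load_max + i_zener_min)

def resistor_power(vin_max, vz, r_series):
    """Worst-case dissipation in the series resistor at maximum input voltage."""
    return (vin_max - vz) ** 2 / r_series

r = zener_series_resistor(vin_min=9.0, vz=5.1, i_load_max=0.02, i_zener_min=0.005)
print(f"R_series <= {r:.0f} ohm, dissipating {resistor_power(12.0, 5.1, r):.2f} W worst case")
</syntaxhighlight>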
Feedback voltage regulators operate by comparing the actual output voltage to some fixed reference voltage. Any difference is amplified and used to control the regulation element in such a way as to reduce the voltage error. This forms a negative feedback control loop; increasing the open-loop gain tends to increase regulation accuracy but reduce stability. (Stability is the avoidance of oscillation, or ringing, during step changes.) There will also be a trade-off between stability and the speed of the response to changes. If the output voltage is too low (perhaps due to input voltage reducing or load current increasing), the regulation element is commanded, "up to a point", to produce a higher output voltage–by dropping less of the input voltage (for linear series regulators and buck switching regulators), or to draw input current for longer periods (boost-type switching regulators); if the output voltage is too high, the regulation element will normally be commanded to produce a lower voltage. However, many regulators have over-current protection, so that they will entirely stop sourcing current (or limit the current in some way) if the output current is too high, and some regulators may also shut down if the input voltage is outside a given range (see also: crowbar circuits).
Electromechanical regulators.
In electromechanical regulators, voltage regulation is easily accomplished by coiling the sensing wire to make an electromagnet. The magnetic field produced by the current attracts a moving ferrous core held back under spring tension or gravitational pull. As voltage increases, so does the current, strengthening the magnetic field produced by the coil and pulling the core towards the field. The magnet is physically connected to a mechanical power switch, which opens as the magnet moves into the field. As voltage decreases, so does the current, releasing spring tension or the weight of the core and causing it to retract. This closes the switch and allows the power to flow once more.
If the mechanical regulator design is sensitive to small voltage fluctuations, the motion of the solenoid core can be used to move a selector switch across a range of resistances or transformer windings to gradually step the output voltage up or down, or to rotate the position of a moving-coil AC regulator.
Early automobile generators and alternators had a mechanical voltage regulator using one, two, or three relays and various resistors to stabilize the generator's output at slightly more than 6.7 or 13.4 V to maintain the battery as independently of the engine's rpm or the varying load on the vehicle's electrical system as possible. The relay(s) modulated the width of a current pulse to regulate the voltage output of the generator by controlling the average field current in the rotating machine which determines strength of the magnetic field produced which determines the unloaded output voltage per rpm. Capacitors are not used to smooth the pulsed voltage as described earlier. The large inductance of the field coil stores the energy delivered to the magnetic field in an iron core so the pulsed field current does not result in as strongly pulsed a field. Both types of rotating machine produce a rotating magnetic field that induces an alternating current in the coils in the stator. A generator uses a mechanical commutator, graphite brushes running on copper segments, to convert the AC produced into DC by switching the external connections at the shaft angle when the voltage would reverse. An alternator accomplishes the same goal using rectifiers that do not wear down and require replacement.
Modern designs now use "solid state" technology (transistors) to perform the same function that the relays perform in electromechanical regulators.
Electromechanical regulators are used for mains voltage stabilisation—see AC voltage stabilizers below.
Automatic voltage regulator.
Generators, as used in power stations, ship electrical power production, or standby power systems, will have automatic voltage regulators (AVR) to stabilize their voltages as the load on the generators changes. The first AVRs for generators were electromechanical systems, but a modern AVR uses solid-state devices. An AVR is a feedback control system that measures the output voltage of the generator, compares that output to a set point, and generates an error signal that is used to adjust the excitation of the generator. As the excitation current in the field winding of the generator increases, its terminal voltage will increase. The AVR will control current by using power electronic devices; generally a small part of the generator's output is used to provide current for the field winding. Where a generator is connected in parallel with other sources such as an electrical transmission grid, changing the excitation has more of an effect on the reactive power produced by the generator than on its terminal voltage, which is mostly set by the connected power system. Where multiple generators are connected in parallel, the AVR system will have circuits to ensure all generators operate at the same power factor. AVRs on grid-connected power station generators may have additional control features to help stabilize the electrical grid against upsets due to sudden load loss or faults.
AC voltage stabilizers.
Coil-rotation AC voltage regulator.
This is an older type of regulator used in the 1920s that uses the principle of a fixed-position field coil and a second field coil that can be rotated on an axis in parallel with the fixed coil, similar to a variocoupler.
When the movable coil is positioned perpendicular to the fixed coil, the magnetic forces acting on the movable coil balance each other out and voltage output is unchanged. Rotating the coil in one direction or the other away from the center position will increase or decrease voltage in the secondary movable coil.
This type of regulator can be automated via a servo control mechanism to advance the movable coil position in order to provide voltage increase or decrease. A braking mechanism or high-ratio gearing is used to hold the rotating coil in place against the powerful magnetic forces acting on the moving coil.
Electromechanical.
Electromechanical regulators, called "voltage stabilizers" or "tap-changers", have also been used to regulate the voltage on AC power distribution lines. These regulators operate by using a servomechanism to select the appropriate tap on an autotransformer with multiple taps, or by moving the wiper on a continuously variable autotransformer. If the output voltage is not in the acceptable range, the servomechanism switches the tap, changing the turns ratio of the transformer, to move the secondary voltage into the acceptable region. The controls provide a dead band wherein the controller will not act, preventing the controller from constantly adjusting the voltage ("hunting") as it varies by an acceptably small amount.
Constant-voltage transformer.
The ferroresonant transformer, ferroresonant regulator or constant-voltage transformer is a type of saturating transformer used as a voltage regulator. These transformers use a tank circuit composed of a high-voltage resonant winding and a capacitor to produce a nearly constant average output voltage with a varying input current or varying load. The circuit has a primary on one side of a magnetic shunt and the tuned circuit coil and secondary on the other side. The regulation is due to magnetic saturation in the section around the secondary.
The ferroresonant approach is attractive due to its lack of active components, relying on the square loop saturation characteristics of the tank circuit to absorb variations in average input voltage. Saturating transformers provide a simple rugged method to stabilize an AC power supply.
Older designs of ferroresonant transformers had an output with high harmonic content, leading to a distorted output waveform. Modern devices are used to construct a perfect sine wave. The ferroresonant action is a flux limiter rather than a voltage regulator, but with a fixed supply frequency it can maintain an almost constant average output voltage even as the input voltage varies widely.
The ferroresonant transformers, which are also known as "constant-voltage transformers" (CVTs) or "ferros", are also good surge suppressors, as they provide high isolation and inherent short-circuit protection.
A ferroresonant transformer can operate with an input voltage range ±40% or more of the nominal voltage.
Output power factor remains in the range of 0.96 or higher from half to full load.
Because it regenerates an output voltage waveform, output distortion, which is typically less than 4%, is independent of any input voltage distortion, including notching.
Efficiency at full load is typically in the range of 89% to 93%. However, at low loads, efficiency can drop below 60%. The current-limiting capability also becomes a handicap when a CVT is used in an application with moderate to high inrush current, like motors, transformers or magnets. In this case, the CVT has to be sized to accommodate the peak current, thus forcing it to run at low loads and poor efficiency.
Minimum maintenance is required, as transformers and capacitors can be very reliable. Some units have included redundant capacitors to allow several capacitors to fail between inspections without any noticeable effect on the device's performance.
Output voltage varies about 1.2% for every 1% change in supply frequency. For example, a 2 Hz change in generator frequency, which is very large, results in an output voltage change of only 4%, which has little effect for most loads.
It accepts 100% single-phase switch-mode power-supply loading without any requirement for derating, including all neutral components.
Input current distortion remains less than 8% THD even when supplying nonlinear loads with more than 100% current THD.
Drawbacks of CVTs are their larger size, audible humming sound, and the high heat generation caused by saturation.
Power distribution.
Voltage regulators or stabilizers are used to compensate for voltage fluctuations in mains power. Large regulators may be permanently installed on distribution lines. Small portable regulators may be plugged in between sensitive equipment and a wall outlet. Automatic voltage regulators are used on generator sets to maintain a constant voltage as the load changes; the voltage regulator compensates for the change in load. Power distribution voltage regulators normally operate on a range of voltages, for example 150–240 V or 90–280 V.
DC voltage stabilizers.
Many simple DC power supplies regulate the voltage using either series or shunt regulators, but most apply a voltage reference using a "shunt regulator" such as a Zener diode, avalanche breakdown diode, or voltage regulator tube. Each of these devices begins conducting at a specified voltage and will conduct as much current as required to hold its terminal voltage to that specified voltage by diverting excess current from a non-ideal power source to ground, often through a relatively low-value resistor to dissipate the excess energy. The power supply is designed to only supply a maximum amount of current that is within the safe operating capability of the shunt regulating device.
If more power must be provided, the shunt regulator output is used only as the voltage reference for an electronic device known as the voltage stabilizer, which is able to deliver much larger currents on demand.
Active regulators.
Active regulators employ at least one active (amplifying) component such as a transistor or operational amplifier. Shunt regulators are often (but not always) passive and simple, but always inefficient because they (essentially) dump the excess current which is not available to the load. When more power must be supplied, more sophisticated circuits are used. In general, these active regulators can be divided into several classes:
Linear regulators.
"Linear regulators" are based on devices that operate in their linear region (in contrast, a switching regulator is based on a device forced to act as an on/off switch). Linear regulators are also classified in two types:
In the past, one or more vacuum tubes were commonly used as the variable resistance. Modern designs use one or more transistors instead, perhaps within an integrated circuit. Linear designs have the advantage of very "clean" output with little noise introduced into their DC output, but are most often much less efficient and unable to step up or invert the input voltage like switched supplies. All linear regulators require an input voltage higher than the output voltage. If the input voltage approaches the desired output voltage, the regulator will "drop out". The input to output voltage differential at which this occurs is known as the regulator's drop-out voltage. Low-dropout regulators (LDOs) allow an input voltage only slightly higher than the desired output voltage (i.e., they waste less energy than conventional linear regulators).
Entire linear regulators are available as integrated circuits. These chips come in either fixed or adjustable voltage types. Examples of such integrated circuits are the 723 general-purpose regulator and the 78xx/79xx series.
Switching regulators.
Switching regulators rapidly switch a series device on and off. The duty cycle of the switch sets how much charge is transferred to the load. This is controlled by a similar feedback mechanism as in a linear regulator. Because the series element is either fully conducting or switched off, it dissipates almost no power; this is what gives the switching design its efficiency. Switching regulators are also able to generate output voltages which are higher than the input, or of opposite polarity—something not possible with a linear design. In switched regulators, the pass transistor is used as a "controlled switch" and is operated in either the cutoff or the saturated state. Hence the power transmitted across the pass device is in discrete pulses rather than a steady current flow. Greater efficiency is achieved since the pass device is operated as a low-impedance switch. When the pass device is at cutoff, there is no current and it dissipates no power. When the pass device is in saturation, a negligible voltage drop appears across it, so it dissipates only a small amount of average power while providing maximum current to the load. In either case, the power wasted in the pass device is very little and almost all the power is transmitted to the load. Thus the efficiency of a switched-mode power supply is remarkably high, in the range of 70–90%.
Switched mode regulators rely on pulse-width modulation to control the average value of the output voltage. The average value of a repetitive pulse waveform depends on the area under the waveform. If the duty cycle is varied, the average value of the voltage changes proportionally.
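As a minimal illustration of this proportionality, the sketch below assumes an idealized buck-type switching stage in continuous conduction, where the average output voltage is simply the duty cycle times the input voltage; real converters deviate from this because of switch and diode drops.
<syntaxhighlight lang="python">
# Illustrative only: for an ideal buck-type switching stage in continuous
# conduction, the average output voltage is the input voltage scaled by the
# duty cycle (the fraction of each period the pass switch is on).

def buck_average_output(v_in, duty_cycle):
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return v_in * duty_cycle

for d in (0.25, 0.5, 0.75):
    print(f"Vin=12 V, D={d:.2f} -> Vout ~ {buck_average_output(12.0, d):.1f} V")
</syntaxhighlight>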
Like linear regulators, nearly complete switching regulators are also available as integrated circuits. Unlike linear regulators, these usually require an inductor that acts as the energy storage element. The IC regulators combine the reference voltage source, error op-amp, pass transistor with short circuit current limiting and thermal overload protection.
Switching regulators are more prone to output noise and instability than linear regulators. However, they provide much better power efficiency than linear regulators.
SCR regulators.
Regulators powered from AC power circuits can use silicon controlled rectifiers (SCRs) as the series device. Whenever the output voltage is below the desired value, the SCR is triggered, allowing electricity to flow into the load until the AC mains voltage passes through zero (ending the half cycle). SCR regulators have the advantages of being both very efficient and very simple, but because they cannot terminate an ongoing half cycle of conduction, they are not capable of very accurate voltage regulation in response to rapidly changing loads. An alternative is the SCR shunt regulator which uses the regulator output as a trigger. Both series and shunt designs are noisy, but powerful, as the device has a low on-resistance.
Combination or hybrid regulators.
Many power supplies use more than one regulating method in series. For example, the output from a switching regulator can be further regulated by a linear regulator. The switching regulator accepts a wide range of input voltages and efficiently generates a (somewhat noisy) voltage slightly above the ultimately desired output. That is followed by a linear regulator that generates exactly the desired voltage and eliminates nearly all the noise generated by the switching regulator. Other designs may use an SCR regulator as the "pre-regulator", followed by another type of regulator. An efficient way of creating a variable-voltage, accurate output power supply is to combine a multi-tapped transformer with an adjustable linear post-regulator.
Example of linear regulators.
Transistor regulator.
In the simplest case a common base amplifier is used with the base of the regulating transistor connected directly to the voltage reference:
A simple transistor regulator will provide a relatively constant output voltage "U"out for changes in the voltage "U"in of the power source and for changes in load "R"L, provided that "U"in exceeds "U"out by a sufficient margin and that the power handling capacity of the transistor is not exceeded.
The output voltage of the stabilizer is equal to the Zener diode voltage minus the base–emitter voltage of the transistor, "U"Z − "U"BE, where "U"BE is usually about 0.7 V for a silicon transistor, depending on the load current. If the output voltage drops for any external reason, such as an increase in the current drawn by the load (causing an increase in the collector–emitter voltage to observe KVL), the transistor's base–emitter voltage ("U"BE) increases, turning the transistor on further and delivering more current to increase the load voltage again.
"R"v provides a bias current for both the Zener diode and the transistor. The current in the diode is minimal when the load current is maximal. The circuit designer must choose a minimum voltage that can be tolerated across "R"v, bearing in mind that the higher this voltage requirement is, the higher the required input voltage "U"in, and hence the lower the efficiency of the regulator. On the other hand, lower values of "R"v lead to higher power dissipation in the diode and to inferior regulator characteristics.
"R"v is given by
formula_0
where
min "V""R" is the minimum voltage to be maintained across "R"v,
min "I"D is the minimum current to be maintained through the Zener diode,
max "I"L is the maximum design load current,
"h"FE is the forward current gain of the transistor ("I"C/"I"B).
Regulator with a differential amplifier.
The stability of the output voltage can be significantly increased by using a differential amplifier, possibly implemented as an operational amplifier:
In this case, the operational amplifier drives the transistor with more current if the voltage at its inverting input drops below the output of the voltage reference at the non-inverting input. Using the voltage divider (R1, R2 and R3) allows the choice of an arbitrary output voltage between Uz and Uin.
Regulator specification.
The output voltage can only be held constant within specified limits. The regulation is specified by two measurements:
Other important parameters are:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R_\\text{v} = \\frac{\\min V_R}{\\min I_\\text{D} + \\max I_\\text{L} / (h_\\text{FE} + 1)},"
}
] |
https://en.wikipedia.org/wiki?curid=624231
|
62423927
|
Seafloor depth versus age
|
Term in geology
The depth of the seafloor on the flanks of a mid-ocean ridge is determined mainly by the "age" of the oceanic lithosphere; older seafloor is deeper. During seafloor spreading, lithosphere and mantle cooling, contraction, and isostatic adjustment with age cause seafloor deepening. This relationship has come to be better understood since around 1969, with significant updates in 1974 and 1977. Two main theories have been put forward to explain this observation: one in which the mantle, including the lithosphere, is cooling (the cooling mantle model), and a second in which a lithosphere plate cools above a mantle held at constant temperature (the cooling plate model). The cooling mantle model explains the age-depth observations for seafloor younger than 80 million years. The cooling plate model explains the age-depth observations best for seafloor older than 20 million years. In addition, the cooling plate model explains the almost constant depth and heat flow observed in very old seafloor and lithosphere. In practice it is convenient to use the solution for the cooling mantle model for seafloor younger than 20 million years; for older seafloor the cooling plate model fits the data as well, and beyond 80 million years the plate model fits better than the mantle model.
Background.
The first theories for seafloor spreading in the early and mid twentieth century explained the elevations of the mid-ocean ridges as upwellings above convection currents in Earth's mantle.
The next idea connected seafloor spreading and continental drift in a model of plate tectonics. In 1969, the elevation of ridges was explained as thermal expansion of a lithospheric plate at the spreading center. This 'cooling plate model' was followed in 1974 by the observation that elevations of ridges could be modeled by cooling of the whole upper mantle, including any plate. This was followed in 1977 by a more refined plate model which explained data showing that both the ocean depths and the ocean crust heat flow approach a constant value for very old seafloor. These observations could not be explained by the earlier 'cooling mantle model', which predicted increasing depth and decreasing heat flow at very old ages.
Seafloor topography: cooling mantle and lithosphere models.
The depth of the seafloor (or the height of a location on a mid-ocean ridge above a base-level) is closely correlated with its age (i.e. the age of the lithosphere at the point where depth is measured). Depth is measured to the top of the ocean crust, below any overlying sediment. The age-depth relation can be modeled by the cooling of a lithosphere plate or mantle half-space in areas without significant subduction. The distinction between the two approaches is that the plate model requires the base of the lithosphere to maintain a constant temperature over time and the cooling is of the plate above this lower boundary. The cooling mantle model, which was developed after the plate model, does not require that the lithosphere base is maintained at a constant and limiting temperature. The result of the cooling mantle model is that seafloor depth is predicted to be proportional to the square root of its age.
Cooling mantle model (1974).
In the cooling mantle half-space model developed in 1974, the seabed (top of crust) height is determined by the oceanic lithosphere and mantle temperature, due to thermal expansion. The simple result is that the ridge height or seabed depth is proportional to the square root of its age. In all models, oceanic lithosphere is continuously formed at a constant rate at the mid-ocean ridges. The source of the lithosphere has a half-plane shape ("x" = 0, "z" < 0) and a constant temperature "T"1. Due to its continuous creation, the lithosphere at "x" > 0 is moving away from the ridge at a constant velocity formula_0, which is assumed large compared to other typical scales in the problem. The temperature at the upper boundary of the lithosphere ("z" = 0) is a constant "T"0 = 0. Thus at "x" = 0 the temperature is the Heaviside step function formula_1. The system is assumed to be at a quasi-steady state, so that the temperature distribution is constant in time, i.e. formula_2
By substituting the parameters by their rough estimates into the solution for the height of the ocean floor formula_3:
formula_4
we have:
formula_5
where the height is in meters and time is in millions of years. To get the dependence on "x", one must substitute "t" = "x"/formula_0 ~ "Ax"/"L", where "L" is the distance from the ridge to the continental shelf (roughly half the ocean width), and "A" is the ocean basin age.
Rather than the height of the ocean floor formula_3 above a base or reference level formula_6, the depth of the seabed formula_7 is of interest. Because formula_8 (with formula_6 measured from the ocean surface) we can find that:
formula_9; for the eastern Pacific for example, where formula_10 is the depth at the ridge crest, typically 2500 m.
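As a small illustration, the sketch below evaluates this eastern-Pacific form of the age-depth relation at a few ages; the 2500 m ridge-crest depth is the typical value quoted above, ages are in millions of years, and depths are in metres.
<syntaxhighlight lang="python">
import math

# Sketch of the cooling-mantle (half-space) age-depth relation quoted above
# for the eastern Pacific: d(t) ~ 2500 + 350*sqrt(t), with t in millions of
# years and depth in metres.

def halfspace_depth(age_myr, ridge_depth_m=2500.0, coeff=350.0):
    return ridge_depth_m + coeff * math.sqrt(age_myr)

for age in (1, 10, 40, 80):
    print(f"{age:3d} Myr old seafloor -> ~{halfspace_depth(age):.0f} m deep")
</syntaxhighlight>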
Cooling plate model (1977).
The depth predicted by the square root of seafloor age found by the 1974 cooling mantle derivation is too deep for seafloor older than 80 million years. Depth is better explained by a cooling lithosphere plate model rather than the cooling mantle half-space. The plate has a constant temperature at its base and spreading edge. Derivation of the cooling plate model also starts with the heat flow equation in one dimension as does the cooling mantle model. The difference is in requiring a thermal boundary at the base of a cooling plate. Analysis of depth versus age and depth versus square root of age data allowed Parsons and Sclater to estimate model parameters (for the North Pacific):
~125 km for lithosphere thickness
formula_11 at base and young edge of plate
formula_12
Assuming isostatic equilibrium everywhere beneath the cooling plate yields a revised age-depth relationship for older sea floor that is approximately correct for ages as young as 20 million years:
formula_13 meters
Thus older seafloor deepens more slowly than younger and in fact can be assumed almost constant at ~6400 m depth. Their plate model also allowed an expression for conductive heat flow, "q(t)" from the ocean floor, which is approximately constant at formula_14 beyond 120 million years:
formula_15
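For comparison, the sketch below evaluates the plate-model depth expression and the conductive heat-flow expression quoted above at a few ages; ages are in millions of years, depth is in metres, and heat flow is in the same units as formula_14.
<syntaxhighlight lang="python">
import math

# Sketch of the 1977 plate-model expressions quoted above:
# depth d(t) = 6400 - 3200*exp(-t/62.8) metres (approximately valid for ages above ~20 Myr)
# and conductive heat flow q(t) = 11.3 / sqrt(t), in units of 1e-6 cal cm^-2 s^-1,
# which approaches the roughly constant value noted in the text for the oldest seafloor.

def plate_depth(age_myr):
    return 6400.0 - 3200.0 * math.exp(-age_myr / 62.8)

def plate_heat_flow(age_myr):
    return 11.3 / math.sqrt(age_myr)

for age in (20, 80, 120, 160):
    print(f"{age:3d} Myr: depth ~{plate_depth(age):.0f} m, "
          f"heat flow ~{plate_heat_flow(age):.1f} x 1e-6 cal/cm^2/s")
</syntaxhighlight>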
Parsons and Sclater concluded that some style of mantle convection must apply heat to the base of the plate everywhere to prevent cooling down below 125 km and lithosphere contraction (seafloor deepening) at older ages. Morgan and Smith showed that the flattening of the older seafloor depth can be explained by flow in the asthenosphere below the lithosphere.
The age-depth-heat flow relationship continued to be studied with refinements in the physical parameters that define ocean lithospheric plates.
Impacts.
The usual method for estimating the age of the seafloor is from marine magnetic anomaly data and applying the Vine-Matthews-Morley hypothesis. Other ways include expensive deep sea drilling and dating of core material. If the depth is known at a location where magnetic anomalies are not mapped or are absent, and seabed samples are not available, the age-depth relationships can still yield an age estimate from the seabed depth alone.
Along with this, if the seafloor spreading rate in an ocean basin increases, then the average depth in that ocean basin decreases and therefore its volume decreases (and vice versa). This results in global eustatic sea level rise (fall) because the Earth is not expanding. Two main drivers of sea level variation over geologic time are then changes in the volume of continental ice on the land, and the changes over time in ocean basin average depth (basin volume) depending on its average age.
|
[
{
"math_id": 0,
"text": "v"
},
{
"math_id": 1,
"text": "T_1\\cdot\\Theta(-z)"
},
{
"math_id": 2,
"text": "T=T(x,z)."
},
{
"math_id": 3,
"text": "h(t)"
},
{
"math_id": 4,
"text": "\\begin{align}\n\\kappa &\\sim 8\\cdot 10^{-7} \\ \\mathrm{m}^2\\cdot \\mathrm{s}^{-1}&& \\text{for the thermal diffusivity} \\\\ \n\\alpha &\\sim 4\\cdot 10^{-5} \\ {}^{\\circ}\\mathrm{C}^{-1}&& \\text{for the thermal expansion coefficient} \\\\\nT_1 &\\sim 1220 \\ {}^{\\circ}\\mathrm{C} && \\text{for the Atlantic and Indian oceans} \\\\\nT_1 &\\sim 1120 \\ {}^{\\circ}\\mathrm{C} && \\text{for the eastern Pacific} \n\\end{align}"
},
{
"math_id": 5,
"text": "h(t) \\sim \\begin{cases} h_0 - 390 \\sqrt{t} & \\text{for the Atlantic and Indian oceans} \\\\ h_0 - 350 \\sqrt{t} & \\text{for the eastern Pacific} \\end{cases}"
},
{
"math_id": 6,
"text": "h_b"
},
{
"math_id": 7,
"text": "d(t)"
},
{
"math_id": 8,
"text": "d(t)+h(t)=h_b"
},
{
"math_id": 9,
"text": "d(t)=h_b-h_0+350\\sqrt{t}"
},
{
"math_id": 10,
"text": "h_b-h_0"
},
{
"math_id": 11,
"text": "T_1\\thicksim1350\\ {}^{\\circ}\\mathrm{C}"
},
{
"math_id": 12,
"text": "\\alpha\\thicksim3.2\\cdot 10^{-5} \\ {}^{\\circ}\\mathrm{C}^{-1}"
},
{
"math_id": 13,
"text": "d(t)=6400-3200\\exp\\bigl(-t/62.8\\bigr)"
},
{
"math_id": 14,
"text": "1\\cdot 10^{-6}\\mathrm{cal}\\, \\mathrm{cm}^{-2} \\mathrm{sec}^{-1}"
},
{
"math_id": 15,
"text": "q(t)=11.3/\\sqrt{t}"
}
] |
https://en.wikipedia.org/wiki?curid=62423927
|
62427038
|
Lattice of stable matchings
|
Distributive lattice whose elements are stable matchings
In mathematics, economics, and computer science, the lattice of stable matchings is a distributive lattice whose elements are stable matchings. For a given instance of the stable matching problem, this lattice provides an algebraic description of the family of all solutions to the problem. It was originally described in the 1970s by John Horton Conway and Donald Knuth.
By Birkhoff's representation theorem, this lattice can be represented as the lower sets of an underlying partially ordered set. The elements of this set can be given a concrete structure as rotations, with cycle graphs describing the changes between adjacent stable matchings in the lattice. The family of all rotations and their partial order can be constructed in polynomial time, leading to polynomial time solutions for other problems on stable matching including the minimum or maximum weight stable matching. The Gale–Shapley algorithm can be used to construct two special lattice elements, its top and bottom element.
Every finite distributive lattice can be represented as a lattice of stable matchings.
The number of elements in the lattice can vary from an average case of formula_0 to a worst case that is exponential in the number of participants.
Computing the number of elements is #P-complete.
Background.
In its simplest form, an instance of the stable matching problem consists of two sets of the same number of elements to be matched to each other, for instance doctors and positions at hospitals. Each element has a preference ordering on the elements of the other type: the doctors each have different preferences for which hospital they would like to work at (for instance based on which cities they would prefer to live in), and the hospitals each have preferences for which doctors they would like to work for them (for instance based on specialization or recommendations). The goal is to find a matching that is "stable": no pair of a doctor and a hospital prefer each other to their assigned match. Versions of this problem are used, for instance, by the National Resident Matching Program to match American medical students to hospitals.
In general, there may be many different stable matchings. For example, suppose there are three doctors (A,B,C) and three hospitals (X,Y,Z) which have preferences of:
A: YXZ B: ZYX C: XZY
X: BAC Y: CBA Z: ACB
There are three stable solutions to this matching arrangement:
A-Y, B-Z, C-X (each doctor gets their first-choice hospital, and each hospital gets its last choice)
A-X, B-Y, C-Z (every participant gets their second choice)
A-Z, B-X, C-Y (each hospital gets its first-choice doctor, and each doctor gets their last choice)
The lattice of stable matchings organizes this collection of solutions, for any instance of stable matching, giving it the structure of a distributive lattice.
Structure.
Partial order on matchings.
The lattice of stable matchings is based on the following weaker structure, a partially ordered set whose elements are the stable matchings. Define a comparison operation formula_1 on the stable matchings,
where formula_2 if and only if all doctors prefer matching formula_3 to matching formula_4: either they have the same assigned hospital in both matchings, or they are assigned a better hospital in formula_3 than they are in formula_4. If the doctors disagree on which matching they prefer, then formula_4 and formula_3 are incomparable: neither one is formula_1 the other.
The same comparison operation can be defined in the same way for any two sets of elements, not just doctors and hospitals. The choice of which of the two sets of elements to use in the role of the doctors is arbitrary. Swapping the roles of the doctors and hospitals reverses the ordering of every pair of elements, but does not otherwise change the structure of the partial order.
Then this ordering gives the matchings the structure of a partially ordered set. To do so, it must obey the following three properties:
Reflexivity: formula_5 for every stable matching formula_4.
Antisymmetry: if formula_2 and formula_6, then formula_4 and formula_3 are the same matching.
Transitivity: if formula_2 and formula_8 for a third stable matching formula_7, then formula_9.
For stable matchings, all three properties follow directly from the definition of the comparison operation.
Top and bottom elements.
Define the best match of an element formula_10 of a stable matching instance to be the element formula_11 that formula_10 most prefers, among all the elements that can be matched to formula_10 in a stable matching, and define the worst match analogously. Then no two elements can have the same best match.
For, suppose to the contrary that doctors formula_10 and formula_12 both have formula_11 as their best match, and that formula_11 prefers formula_10 to formula_12. Then, in the stable matching that matches formula_12 to formula_11 (which must exist by the definition of the best match of formula_12), formula_10 and formula_11 would be an unstable pair, because formula_11 prefers formula_10 to formula_12 and formula_10 prefers formula_11 to any other partner in any stable matching. This contradiction shows that assigning all doctors to their best matches gives a matching. It is a stable matching, because any unstable pair would also be unstable for one of the matchings used to define best matches. As well as assigning all doctors to their best matches, it assigns all hospitals to their worst matches. In the partial ordering on the matchings, it is greater than all other stable matchings.
Symmetrically, assigning all doctors to their worst matches and assigning all hospitals to their best matches gives another stable matching. In the partial order on the matchings, it is less than all other stable matchings.
The Gale–Shapley algorithm gives a process for constructing stable matchings, which can be described as follows: until a matching is reached, the algorithm chooses an arbitrary hospital with an unfilled position, and that hospital makes a job offer to the doctor it most prefers among the ones it has not already made offers to. If the doctor is unemployed or has a less-preferred assignment, the doctor accepts the offer (and resigns from their other assignment if it exists). The process always terminates, because each doctor and hospital interact only once. When it terminates, the result is a stable matching, the one that assigns each hospital to its best match and that assigns all doctors to their worst matches. An algorithm that swaps the roles of the doctors and hospitals (in which unemployed doctors send job applications to their next preference among the hospitals, and hospitals accept applications either when they have an unfilled position or when they prefer the new applicant, firing the doctor they had previously accepted) instead produces the stable matching that assigns all doctors to their best matches and each hospital to its worst match.
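A minimal sketch of this proposal-and-acceptance process is given below, run on the three-doctor example from earlier in this article; the function and variable names are illustrative and not part of any standard library.
<syntaxhighlight lang="python">
# Minimal sketch of the Gale-Shapley process described above. Whichever side
# proposes obtains its best stable partner; the other side obtains its worst.

def gale_shapley(proposer_prefs, responder_prefs):
    # rank[r][p] = position of proposer p in responder r's preference list
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in responder_prefs.items()}
    match = {}                            # responder -> currently held proposer
    next_choice = {p: 0 for p in proposer_prefs}
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # p's next-most-preferred responder
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:    # r prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)
    return {p: r for r, p in match.items()}     # proposer -> responder

doctors   = {'A': 'YXZ', 'B': 'ZYX', 'C': 'XZY'}
hospitals = {'X': 'BAC', 'Y': 'CBA', 'Z': 'ACB'}
print(sorted(gale_shapley(hospitals, doctors).items()))  # hospital-optimal: X-B, Y-C, Z-A
print(sorted(gale_shapley(doctors, hospitals).items()))  # doctor-optimal:   A-Y, B-Z, C-X
</syntaxhighlight>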
Lattice operations.
Given any two stable matchings formula_4 and formula_3 for the same input, one can form two more matchings formula_13 and formula_14 in the following way:
In formula_13, each doctor gets their best choice among the two hospitals they are matched to in formula_4 and formula_3 (if these differ) and each hospital gets its worst choice.
In formula_14, each doctor gets their worst choice among the two hospitals they are matched to in formula_4 and formula_3 (if these differ) and each hospital gets its best choice.
Then both formula_13 and formula_14 are matchings.
It is not possible, for instance, for two doctors to have the same best choice and be matched to the same hospital in formula_13, for regardless of which of the two doctors is preferred by the hospital, that doctor and hospital would form an unstable pair in whichever of formula_4 and formula_3 they are not already matched in. Because the doctors are matched in formula_13, the hospitals must also be matched. The same reasoning applies symmetrically to formula_14.
Additionally, both formula_13 and formula_14 are stable.
There cannot be a pair of a doctor and hospital who prefer each other to their match, because the same pair would necessarily also be an unstable pair for at least one of formula_4 and formula_3.
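The following sketch shows these two operations on matchings represented as doctor-to-hospital maps; it assumes the two inputs are stable matchings of the same instance (as the argument above requires), and uses the doctor-optimal and hospital-optimal matchings of the earlier example.
<syntaxhighlight lang="python">
# Sketch of the lattice operations described above: given two stable matchings
# (as doctor -> hospital maps) and the doctors' preference lists, the join gives
# every doctor the better of their two hospitals and the meet gives the worse.

def join(m1, m2, doctor_prefs):
    return {d: min(m1[d], m2[d], key=doctor_prefs[d].index) for d in m1}

def meet(m1, m2, doctor_prefs):
    return {d: max(m1[d], m2[d], key=doctor_prefs[d].index) for d in m1}

doctor_prefs = {'A': 'YXZ', 'B': 'ZYX', 'C': 'XZY'}
top    = {'A': 'Y', 'B': 'Z', 'C': 'X'}   # doctor-optimal matching from the example
bottom = {'A': 'Z', 'B': 'X', 'C': 'Y'}   # hospital-optimal matching
print(join(top, bottom, doctor_prefs))    # recovers the doctor-optimal matching
print(meet(top, bottom, doctor_prefs))    # recovers the hospital-optimal matching
</syntaxhighlight>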
Lattice properties.
The two operations formula_13 and formula_14 form the join and meet operations of a finite distributive lattice.
In this context, a finite lattice is defined as a partially ordered finite set in which there is a unique minimum element and a unique maximum element, in which every two elements have a unique least element greater than or equal to both of them (their join) and every two elements have a unique greatest element less than or equal to both of them (their meet).
In the case of the operations formula_13 and formula_14 defined above, the join formula_13 is greater than or equal to both formula_4 and formula_3 because it was defined to give each doctor their preferred choice, and because these preferences of the doctors are how the ordering on matchings is defined. It is below any other matching that is also above both formula_4 and formula_3, because any such matching would have to give each doctor an assigned match that is at least as good. Therefore, it fits the requirements for the join operation of a lattice.
Symmetrically, the operation formula_14 fits the requirements for the meet operation.
Because they are defined using an element-wise minimum or element-wise maximum in the preference ordering, these two operations obey the same distributive laws obeyed by the minimum and maximum operations on linear orderings: for every three different matchings formula_4, formula_3, and formula_7,
formula_15
and
formula_16
Therefore, the lattice of stable matchings is a distributive lattice.
Representation by rotations.
Birkhoff's representation theorem states that any finite distributive lattice can be represented by a family of finite sets, with intersection and union as the meet and join operations, and with the relation of being a subset as the comparison operation for the associated partial order. More specifically, these sets can be taken to be the lower sets of an associated partial order.
In the general form of Birkhoff's theorem, this partial order can be taken as the induced order on a subset of the elements of the lattice, the join-irreducible elements (elements that cannot be formed as joins of two other elements). For the lattice of stable matchings, the elements of the partial order can instead be described in terms of structures called "rotations".
Suppose that two different stable matchings formula_4 and formula_3 are comparable and have no third stable matching between them in the partial order. (That is, formula_4 and formula_3 form a pair of the covering relation of the partial order of stable matchings.) Then the set of pairs of elements that are matched in one but not both of formula_4 and formula_3 (the symmetric difference of their sets of matched pairs) is called a rotation. It forms a cycle graph whose edges alternate between the two matchings. Equivalently, the rotation can be described as the set of changes that would need to be performed to change the lower of the two matchings into the higher one (with lower and higher determined using the partial order). If two different stable matchings are separately the higher matching for the same rotation, then so is their meet. It follows that for any rotation, the set of stable matchings that can be the higher of a pair connected by the rotation has a unique lowest element. This lowest matching is join irreducible, and this gives a one-to-one correspondence between rotations and join-irreducible stable matchings.
If the rotations are given the same partial ordering as their corresponding join-irreducible stable matchings, then Birkhoff's representation theorem gives a one-to-one correspondence between lower sets of rotations and all stable matchings. The set of rotations associated with any given stable matching can be obtained by changing the given matching by rotations downward in the partial ordering, choosing arbitrarily which rotation to perform at each step, until reaching the bottom element, and listing the rotations used in this sequence of changes. The stable matching associated with any lower set of rotations can be obtained by applying the rotations to the bottom element of the lattice of stable matchings, choosing arbitrarily which rotation to apply when more than one can apply.
Every pair formula_17 of elements of a given stable matching instance belongs to at most two rotations: one rotation that, when applied to the lower of two matchings, removes other assignments to formula_10 and formula_11 and instead assigns them to each other, and a second rotation that, when applied to the lower of two matchings, removes pair formula_17 from the matching and finds other assignments for those two elements. Because there are formula_18 pairs of elements, there are formula_19 rotations.
Mathematical properties.
Universality.
Beyond being a finite distributive lattice, there are no other constraints on the lattice structure of stable matchings. This is because, for every finite distributive lattice formula_20, there exists a stable matching instance whose lattice of stable matchings is isomorphic to formula_20.
More strongly, if a finite distributive lattice has formula_21 elements, then it can be realized using a stable matching instance with at most formula_22 doctors and hospitals.
Number of lattice elements.
The lattice of stable matchings can be used to study the computational complexity of counting the number of stable matchings of a given instance. From the equivalence between lattices of stable matchings and arbitrary finite distributive lattices, it follows that this problem has equivalent computational complexity to counting the number of elements in an arbitrary finite distributive lattice, or to counting the antichains in an arbitrary partially ordered set. Computing the number of stable matchings is #P-complete.
In a uniformly-random instance of the stable marriage problem with formula_23 doctors and formula_23 hospitals, the average number of stable matchings is asymptotically formula_0. In a stable marriage instance chosen to maximize the number of different stable matchings, this number can be at least formula_24, and is also upper-bounded by an exponential function of formula_23 (significantly smaller than the naive factorial bound on the number of matchings).
Algorithmic consequences.
The family of rotations and their partial ordering can be constructed in polynomial time from a given instance of stable matching, and provides a concise representation of the family of all stable matchings, which can for some instances be exponentially larger when listed explicitly. This allows several other computations on stable matching instances to be performed efficiently.
Weighted stable matching and closure.
If each pair of elements in a stable matching instance is assigned a real-valued weight, it is possible to find the minimum or maximum weight stable matching in polynomial time. One possible method for this is to apply linear programming to the order polytope of the partial order of rotations, or to the stable matching polytope. An alternative, combinatorial algorithm is possible, based on the same partial order.
From the weights on pairs of elements, one can assign weights to each rotation, where a rotation that changes a given stable matching to another one higher in the partial ordering of stable matchings is assigned the change in weight that it causes: the total weight of the higher matching minus the total weight of the lower matching. By the correspondence between stable matchings and lower sets of rotations, the total weight of any matching is then equal to the total weight of its corresponding lower set, plus the weight of the bottom element of the lattice of matchings. The problem of finding the minimum or maximum weight stable matching becomes in this way equivalent to the problem of finding the minimum or maximum weight lower set in a partially ordered set of polynomial size, the partially ordered set of rotations.
This optimal lower set problem is equivalent to an instance of the closure problem, a problem on vertex-weighted directed graphs in which the goal is to find a subset of vertices of optimal weight with no outgoing edges. The optimal lower set is an optimal closure of a directed acyclic graph that has the elements of the partial order as its vertices, with an edge from formula_25 to formula_26 whenever formula_27 in the partial order. The closure problem can, in turn, be solved in polynomial time by transforming it into an instance of the maximum flow problem.
Minimum regret.
The regret of a participant in a stable matching is defined as the distance of their assigned match from the top of their preference list, and the regret of a stable matching is the maximum regret of any participant. One can then find the minimum-regret stable matching by a simple greedy algorithm that starts at the bottom element of the lattice of matchings and then repeatedly applies any rotation that reduces the regret of a participant with maximum regret, until this would cause some other participant to have greater regret.
Median stable matching.
The elements of any distributive lattice form a median graph, a structure in which any three elements formula_4, formula_3, and formula_7 (here, stable matchings) have a unique median element formula_28 that lies on a shortest path between any two of them. It can be defined as:
formula_29
For the lattice of stable matchings, this median can instead be taken element-wise, by assigning each doctor the median in the doctor's preferences of the three hospitals matched to that doctor in formula_4, formula_3, and formula_7 and similarly by assigning each hospital the median of the three doctors matched to it. More generally, any set of an odd number of elements of any distributive lattice (or median graph) has a median, a unique element minimizing its sum of distances to the given set. For the median of an odd number of stable matchings, each participant is matched to the median element of the multiset of their matches from the given matchings. For an even set of stable matchings, this can be disambiguated by choosing the assignment that matches each doctor to the higher of the two median elements, and each hospital to the lower of the two median elements. In particular, this leads to a definition for the median matching in the set of all stable matchings. However, for some instances of the stable matching problem, finding this median of all stable matchings is NP-hard.
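A sketch of this element-wise construction for three matchings is given below, using the three stable matchings of the earlier example; the names and the dictionary representation are illustrative only.
<syntaxhighlight lang="python">
# Sketch of the element-wise median described above: each doctor is assigned the
# median, in that doctor's own preference order, of their three partners in the
# stable matchings P, Q and R.

def median_matching(p, q, r, doctor_prefs):
    result = {}
    for d in p:
        partners = sorted([p[d], q[d], r[d]], key=doctor_prefs[d].index)
        result[d] = partners[1]          # middle element of the three
    return result

doctor_prefs = {'A': 'YXZ', 'B': 'ZYX', 'C': 'XZY'}
P = {'A': 'Y', 'B': 'Z', 'C': 'X'}
Q = {'A': 'X', 'B': 'Y', 'C': 'Z'}
R = {'A': 'Z', 'B': 'X', 'C': 'Y'}
print(median_matching(P, Q, R, doctor_prefs))   # {'A': 'X', 'B': 'Y', 'C': 'Z'}
</syntaxhighlight>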
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "e^{-1}n\\ln n"
},
{
"math_id": 1,
"text": "\\le"
},
{
"math_id": 2,
"text": "P\\le Q"
},
{
"math_id": 3,
"text": "Q"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "P\\le P"
},
{
"math_id": 6,
"text": "Q\\le P"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "Q\\le R"
},
{
"math_id": 9,
"text": "P\\le R"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "y"
},
{
"math_id": 12,
"text": "x'"
},
{
"math_id": 13,
"text": "P\\vee Q"
},
{
"math_id": 14,
"text": "P\\wedge Q"
},
{
"math_id": 15,
"text": "P\\wedge(Q\\vee R)=(P\\wedge Q)\\vee (P\\wedge R)"
},
{
"math_id": 16,
"text": "P\\vee(Q\\wedge R)=(P\\vee Q)\\wedge (P\\vee R)"
},
{
"math_id": 17,
"text": "(x,y)"
},
{
"math_id": 18,
"text": "n^2"
},
{
"math_id": 19,
"text": "O(n^2)"
},
{
"math_id": 20,
"text": "L"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "k^2-k+4"
},
{
"math_id": 23,
"text": "n"
},
{
"math_id": 24,
"text": "2^{n-1}"
},
{
"math_id": 25,
"text": "\\alpha"
},
{
"math_id": 26,
"text": "\\beta"
},
{
"math_id": 27,
"text": "\\alpha\\le\\beta"
},
{
"math_id": 28,
"text": "m(P,Q,R)"
},
{
"math_id": 29,
"text": "m(P,Q,R)=(P\\wedge Q)\\vee(P\\wedge R)\\vee(Q\\wedge R)=(P\\vee Q)\\wedge(P\\vee R)\\wedge(Q\\vee R)."
}
] |
https://en.wikipedia.org/wiki?curid=62427038
|
62428
|
Partial evaluation
|
Technique for program optimization
In computing, partial evaluation is a technique for several different types of program optimization by specialization. The most straightforward application is to produce new programs that run faster than the originals while being guaranteed to behave in the same way.
A computer program "prog" is seen as a mapping of input data into output data:
formula_0
where formula_1, the "static data", is the part of the input data known at compile time.
The partial evaluator transforms formula_2 into formula_3 by precomputing all static input at compile time. formula_4 is called the "residual program" and should run more efficiently than the original program. The act of partial evaluation is said to "residualize" formula_5 to formula_4.
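As a toy illustration of residualization (independent of any particular partial evaluator), the sketch below specializes a two-argument power function on a statically known exponent, producing a residual one-argument function with the loop unrolled; here the exponent plays the role of the static input.
<syntaxhighlight lang="python">
# Illustrative sketch of partial evaluation: a two-argument program is
# specialized on its static input, yielding a residual one-argument program
# that should run faster.

def power(base, exponent):          # prog : I_dynamic x I_static -> O
    result = 1
    for _ in range(exponent):
        result *= base
    return result

def specialize_power(exponent):     # toy "partial evaluator" for a known exponent
    # Unroll the loop at specialization time, leaving only multiplications.
    body = " * ".join(["base"] * exponent) or "1"
    namespace = {}
    exec(f"def power_residual(base): return {body}", namespace)
    return namespace["power_residual"]

cube = specialize_power(3)          # residual program prog* : I_dynamic -> O
print(power(5, 3), cube(5))         # both print 125
</syntaxhighlight>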
Futamura projections.
A particularly interesting example of the use of partial evaluation, first described in the 1970s by Yoshihiko Futamura, is when "prog" is an interpreter for a programming language.
If "I"static is source code designed to run inside that interpreter, then partial evaluation of the interpreter with respect to this data/program produces "prog"*, a version of the interpreter that only runs that source code, is written in the implementation language of the interpreter, does not require the source code to be resupplied, and runs faster than the original combination of the interpreter and the source. In this case "prog"* is effectively a compiled version of "I"static.
This technique is known as the first Futamura projection, of which there are three:
Specializing an interpreter for given source code, yielding an executable.
Specializing the specializer for the interpreter (as applied in the first projection), yielding a compiler.
Specializing the specializer for itself (as applied in the second projection), yielding a tool that can convert any interpreter to an equivalent compiler.
They were described by Futamura in Japanese in 1971 and in English in 1983.
References.
<templatestyles src="Reflist/styles.css" />
General references.
<templatestyles src="Refbegin/styles.css" />
External links.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "prog : I_\\text{static} \\times I_\\text{dynamic} \\to O,"
},
{
"math_id": 1,
"text": "I_\\text{static}"
},
{
"math_id": 2,
"text": "\\langle prog, I_\\text{static}\\rangle"
},
{
"math_id": 3,
"text": "prog^* : I_\\text{dynamic} \\to O"
},
{
"math_id": 4,
"text": "prog^*"
},
{
"math_id": 5,
"text": "prog"
}
] |
https://en.wikipedia.org/wiki?curid=62428
|
62428696
|
Centrifugal pump selection and characteristics
|
The basic function of a pump is to do work on a liquid. It can be used to transport and compress a liquid. In industries heavy-duty pumps are used to move water, chemicals, slurry, food, oil and so on. Depending on their action, pumps are classified into two types — Centrifugal Pumps and Positive Displacement Pumps. While centrifugal pumps impart momentum to the fluid by motion of blades, positive displacement pumps transfer fluid by variation in the size of the pump’s chamber. Centrifugal pumps can be of rotor or propeller types, whereas positive displacement pumps may be gear-based, piston-based, diaphragm-based, etc.
As a general rule, centrifugal pumps are used with low viscosity fluids and positive displacement pumps are used with high viscosity fluids.
Parameters and Definitions.
Volume flow rate (Q), specifies the volume of fluid flowing through the pump per unit time. Thus, it gives the rate at which fluid travels through the pump. Given the density of the operating fluid, mass flow rate (ṁ) can also be used to obtain the volume flow rate. The relationship between the mass flow rate and volume flow rate (also known as the capacity) is given by:
formula_0
Where ρ is the operating fluid density.
One of the most important considerations, as a consequence, is to match the rated capacity of the pump with the required flow rate in the system that we are designing.
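A small worked example of this capacity relation (mass flow rate equal to density times volume flow rate) is sketched below; the mass flow rate and the water density used are assumed values for illustration only.
<syntaxhighlight lang="python">
# Small worked example of the capacity relation above (mdot = rho * Q),
# using an assumed mass flow rate and an assumed density for water.

def volume_flow_rate(mass_flow_kg_s, density_kg_m3):
    """Volume flow rate Q in m^3/s from mass flow rate and fluid density."""
    return mass_flow_kg_s / density_kg_m3

q = volume_flow_rate(mass_flow_kg_s=5.0, density_kg_m3=998.0)   # water near 20 C
print(f"Q ~ {q * 3600:.1f} m^3/h")    # roughly 18 m^3/h
</syntaxhighlight>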
Discharge Head is the net head obtained at the outlet of a pump. For a centrifugal pump, the discharge pressure depends on the suction or inlet pressure as well as the fluid's density. Thus, for the same flow rate of fluid, different values of discharge pressure may be obtained depending on the inlet pressure, and the discharge head (the height to which the fluid can rise after being pumped) varies according to the operating conditions.
Total Head is the difference between the height to which the fluid can rise at the outlet and the height to which it can rise at the inlet of a centrifugal pump. This is a crucial parameter for pump selection and is a popularly used parameter for ascertaining industrial requirements. By eliminating the inlet head, we remove the effect of the pressure supplied to the pump and are left with only the pump’s energy (head) contribution to the fluid flow.
Factors Affecting Pump Selection.
Flow Rate – The flow rate is necessary for selecting a pump because the head characteristics of a pump will be affected by the flow rate of the system. It is important to measure or ascertain this parameter, since the flow rate is critical in many industrial processes, especially in the chemical industries.
Static Head – The difference between the inlet tank fluid surface elevation and the discharge tank fluid surface elevation.
Friction Head – The friction head accounts for the frictional losses in the pumping system. The value of the friction head can be found from available data-tables depending on the flow parameters such as fluid viscosity, pipe dimensions, flow rate, etc.
Total Head – It is obtained by adding the friction and static heads. It gives a measure of the amount of energy imparted by the pump to the fluid. Using the total head and the flow rate, the appropriate dynamic pump (centrifugal pump) can be selected.
Selection Using Pump Characteristics.
Whenever there is a need to select a pump for any industrial or personal requirement, it is important to determine the required total head and the required flow rate for the operation. These data matter because each pump made by a manufacturer has a characteristic head and flow at which it operates at maximum efficiency. For example, if a process industry needs to transport chemical liquids at a specific flow rate for a particular chemical reaction to take place, then both the dynamic head (which is related to the flow rate) and the static head must be ascertained. After calculating both the head and the flow rate, the pump curves given by the manufacturer are consulted and the pump giving the maximum efficiency at the operating condition is selected. It should however be noted that the "best efficiency point is not the best operating point in practice", because the pump curve describes how a centrifugal pump performs in isolation from plant equipment. How it operates in practice is determined by the resistance of the system it is installed in.
Characteristic Pump Curves.
Pump curves are quite useful in pump selection, testing, operation and maintenance. A pump performance curve is a graph of differential head against the operating flow rate; such curves specify performance and efficiency characteristics. Performance tests are done on pumps to verify the claims made by the pump maker. It is quite possible that, with time, the requirements of the process along with the plant infrastructure and conditions change considerably. In that case pump curves are used to verify whether the pumps are still the best fit for the modified requirements.
Selecting Using Pump Curves.
Pump performance curves are important indicators of pump characteristics provided by the manufacturer. These curves are fundamental in predicting the variation in the differential head across the pump, as the flow changes. However, such curves are not limited to the head, and variation in other parameters such as power, efficiency or NPSH with flow can also be shown on similar plots by the manufacturer.
Due to mechanical and power constraints, the head provided by the pump drops as it pushes a larger quantity of fluid. In other words, when the flow rate increases (for the same impeller diameter), there is a drop in the differential head that the pump is capable of providing. The two are related as follows:
formula_1 Here formula_2 and formula_3 depend on the geometric parameters and the rotational speed of the pump and are assumed to be constant for the purpose of comparison.
However, this simple linear relationship undergoes modification on account of various losses and a non-linear, decreasing formula_4 relationship is seen in the pump characteristic curve.
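To make the operating point concrete, the following sketch intersects the linear pump curve formula_1 given above with an assumed quadratic system curve (static head plus friction). The system-curve form and all coefficients are illustrative assumptions, not data from any real pump or plant.

import math

# Illustrative coefficients only (consistent units assumed throughout).
A, B = 40.0, 2.0           # pump curve from the text: H_pump = A - B*Q
H_static, K = 10.0, 1.5    # assumed system curve: H_sys = H_static + K*Q**2

# Operating point: H_pump(Q) = H_sys(Q)  =>  K*Q**2 + B*Q + (H_static - A) = 0
disc = B**2 - 4 * K * (H_static - A)
Q_op = (-B + math.sqrt(disc)) / (2 * K)   # physically meaningful (positive) root
H_op = A - B * Q_op

print(f"Operating flow rate Q = {Q_op:.2f}")   # about 3.85
print(f"Operating head      H = {H_op:.2f}")   # about 32.3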
From the curve, it is observed that even as the differential head drops off, the output obtained initially increases, because the product of flow rate and head increases (recall that the net pump output is given by formula_5 and the efficiency is formula_6). However, the reduction in discharge head means that the pump consumes more power to push the additional fluid demanded by the increased flow rate. After a specific point, known as the best efficiency point, the effect of the reduction in head outweighs the increase in flow rate; beyond it the output power starts reducing and the efficiency starts falling. Mathematically, the effect of flow rate on the efficiency is given by:
formula_7 where formula_8 is called the capacity constant, and formula_9 and formula_10 are constants that depend on the pump design and rotation speed.
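A short worked sketch of this efficiency relation (the constants k1 and k2 below are illustrative assumptions; real values depend on the pump design and speed). The best efficiency point follows from setting the derivative of the efficiency with respect to the capacity constant to zero.

import math

# Illustrative constants only.
k1, k2 = 5.0, 25.0

def efficiency(C_Q):
    # eta = k1*C_Q - k2*C_Q**3, with C_Q = Q/(N*D**3) the capacity constant
    return k1 * C_Q - k2 * C_Q**3

# Best efficiency point: d(eta)/d(C_Q) = k1 - 3*k2*C_Q**2 = 0
C_Q_bep = math.sqrt(k1 / (3 * k2))
print(f"C_Q at best efficiency = {C_Q_bep:.3f}")        # about 0.258
print(f"eta at best efficiency = {efficiency(C_Q_bep):.3f}")   # about 0.861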
Because of these two opposing effects, a point of optimal efficiency exists for the pump. The target should be to select a pump which operates close to its maximum efficiency point under the required operating conditions. This is the best efficiency point of the pump and is plotted on the pump efficiency curve.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\dot m = \\rho \\cdot Q"
},
{
"math_id": 1,
"text": "H=A-BQ"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "B"
},
{
"math_id": 4,
"text": "H-Q"
},
{
"math_id": 5,
"text": " P_o=\\rho gHQ"
},
{
"math_id": 6,
"text": " \\eta =\\rho gHQ /P_i"
},
{
"math_id": 7,
"text": "\\eta=k_1 C_Q-k_2 C_Q^3,"
},
{
"math_id": 8,
"text": "C_Q=Q/ND^3"
},
{
"math_id": 9,
"text": "k_1"
},
{
"math_id": 10,
"text": "k_2"
}
] |
https://en.wikipedia.org/wiki?curid=62428696
|
62431974
|
Nehemiah 3
|
Chapter from Nehemiah in the Old Testament
Nehemiah 3 is the third chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 13th chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. This chapter records in detail the rebuilding of the walls and gates of Jerusalem, starting from the north to west sections (verses 1–15), continued to south and east sections until reaching the Sheep Gate again, the initial starting point (verses 16–32).
Text.
This chapter is divided into 32 verses. The original text of this chapter is in Hebrew language.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
The northern wall (3:1-5).
In this section, Nehemiah lists the process of rebuilding the wall of Jerusalem, starting with the people working on the north wall and its gates. The north side of the wall would have suffered 'the brunt of most attacks on Jerusalem, for those arriving from Mesopotamia' (cf. ).
"Then Eliashib the high priest rose up with his brethren the priests, and they builded the sheep gate; they sanctified it, and set up the doors of it; even unto the tower of Meah they sanctified it, unto the tower of Hananeel."
"Also the sons of Hassenaah built the Fish Gate; they laid its beams and hung its doors with its bolts and bars."
Verse 3.
The workers on the Fish Gate 'built' rather than 'repaired' the wall.
"And next to them Meremoth the son of Uriah, son of Hakkoz repaired. And next to them Meshullam the son of Berechiah, son of Meshezabel repaired. And next to them Zadok the son of Baana repaired."
The western wall (3:6-14).
The rebuilding process of the wall around Jerusalem, as reported in sections, actually happened simultaneously. While the priests worked on the north wall, others built along the western extension.
"And next unto him repaired Shallum the son of Halohesh, the ruler of the half part of Jerusalem, he and his daughters.
The eastern wall (3:15-32).
The last section describes the building of the east wall, which needed more workers, 'probably because it was more extensively damaged'. Twenty-one work details were reported on this side of the wall.
"But the gate of the fountain repaired Shallun the son of Colhozeh, the ruler of part of Mizpah; he built it, and covered it, and set up the doors thereof, the locks thereof, and the bars thereof, and the wall of the pool of Siloah by the king's garden, and unto the stairs that go down from the city of David."
"After him Meremoth the son of Uriah, son of Hakkoz repaired another section from the door of the house of Eliashib to the end of the house of Eliashib."
"And between the going up of the corner unto the sheep gate repaired the goldsmiths and the merchants."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62431974
|
6243282
|
Meshfree methods
|
Methods in numerical analysis not requiring knowledge of neighboring points
In the field of numerical analysis, meshfree methods are those that do not require connection between nodes of the simulation domain, i.e. a mesh, but are rather based on interaction of each node with all its neighbors. As a consequence, original extensive properties such as mass or kinetic energy are no longer assigned to mesh elements but rather to the single nodes. Meshfree methods enable the simulation of some otherwise difficult types of problems, at the cost of extra computing time and programming effort. The absence of a mesh allows Lagrangian simulations, in which the nodes can move according to the velocity field.
Motivation.
Numerical methods such as the finite difference method, finite-volume method, and finite element method were originally defined on meshes of data points. In such a mesh, each point has a fixed number of predefined neighbors, and this connectivity between neighbors can be used to define mathematical operators like the derivative. These operators are then used to construct the equations to simulate—such as the Euler equations or the Navier–Stokes equations.
But in simulations where the material being simulated can move around (as in computational fluid dynamics) or where large deformations of the material can occur (as in simulations of plastic materials), the connectivity of the mesh can be difficult to maintain without introducing error into the simulation. If the mesh becomes tangled or degenerate during simulation, the operators defined on it may no longer give correct values. The mesh may be recreated during simulation (a process called remeshing), but this can also introduce error, since all the existing data points must be mapped onto a new and different set of data points. Meshfree methods are intended to remedy these problems. Meshfree methods are also useful for:
Example.
In a traditional finite difference simulation, the domain of a one-dimensional simulation would be some function formula_0, represented as a mesh of data values formula_1 at points formula_2, where
formula_3
formula_4
formula_5
formula_6
We can define the derivatives that occur in the equation being simulated using some finite difference formulae on this domain, for example
formula_7
and
formula_8
Then we can use these definitions of formula_9 and its spatial and temporal derivatives to write the equation being simulated in finite difference form, then simulate the equation with one of many finite difference methods.
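A minimal sketch of these two difference formulae in Python (the mesh size, timestep and data values are illustrative assumptions, chosen only to show how the stencils use the fixed mesh connectivity):

import numpy as np

# Uniform mesh as described above (illustrative numbers).
x = np.linspace(0.0, 1.0, 11)          # mesh points x_i with constant spacing h
h = x[1] - x[0]                        # spatial step
k = 0.01                               # timestep

u_n   = np.sin(2 * np.pi * x)          # data values u_i^n at time level n (illustrative)
u_np1 = 0.99 * u_n                     # data values u_i^{n+1} at time level n+1 (illustrative)

# Central spatial difference, du/dx ~ (u_{i+1}^n - u_{i-1}^n) / (2h), interior points only;
# note how it relies on the fixed mesh neighbours i-1 and i+1.
du_dx = (u_n[2:] - u_n[:-2]) / (2 * h)

# Forward time difference, du/dt ~ (u_i^{n+1} - u_i^n) / k.
du_dt = (u_np1 - u_n) / k

print(du_dx[:3], du_dt[:3])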
In this simple example, the steps (here the spatial step formula_10 and timestep formula_11) are constant along the whole mesh, and the left and right mesh neighbors of the data value at formula_2 are the values at formula_12 and formula_13, respectively. Generally, finite differences can quite simply accommodate steps that vary along the mesh, but all the original nodes must be preserved, and they can move independently only by deforming the original elements. If even just two of the nodes change their order, or even a single node is added to or removed from the simulation, that creates a defect in the original mesh and the simple finite difference approximation can no longer hold.
Smoothed-particle hydrodynamics (SPH), one of the oldest meshfree methods, solves this problem by treating data points as physical particles with mass and density that can move around over time, and carry some value formula_14 with them. SPH then defines the value of formula_9 between the particles by
formula_15
where formula_16 is the mass of particle formula_17, formula_18 is the density of particle formula_17, and formula_19 is a kernel function that operates on nearby data points and is chosen for smoothness and other useful qualities. By linearity, we can write the spatial derivative as
formula_20
Then we can use these definitions of formula_9 and its spatial derivatives to write the equation being simulated as an ordinary differential equation, and simulate the equation with one of many numerical methods. In physical terms, this means calculating the forces between the particles, then integrating these forces over time to determine their motion.
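The SPH interpolation formula above can be sketched in a few lines of Python. The particle data and the Gaussian kernel choice below are illustrative assumptions (practical codes usually prefer compactly supported kernels):

import numpy as np

def W(r, s=0.1):
    # Smooth, bell-shaped kernel with smoothing length s (normalised 1D Gaussian).
    return np.exp(-(r / s) ** 2) / (s * np.sqrt(np.pi))

x_p   = np.random.rand(200)          # particle positions, in arbitrary order
m_p   = np.full(200, 1.0 / 200)      # particle masses (illustrative)
rho_p = np.array([np.sum(m_p * W(np.abs(xi - x_p))) for xi in x_p])  # SPH density estimate
u_p   = np.sin(2 * np.pi * x_p)      # some field value carried by each particle

def u(x):
    # u(x) = sum_i m_i * (u_i / rho_i) * W(|x - x_i|); no mesh connectivity is needed.
    return np.sum(m_p * (u_p / rho_p) * W(np.abs(x - x_p)))

print(u(0.25))   # roughly sin(2*pi*0.25) = 1 for enough particles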
The advantage of SPH in this situation is that the formulae for formula_9 and its derivatives do not depend on any adjacency information about the particles; they can use the particles in any order, so it doesn't matter if the particles move around or even exchange places.
One disadvantage of SPH is that it requires extra programming to determine the nearest neighbors of a particle. Since the kernel function formula_19 only returns nonzero results for nearby particles within twice the "smoothing length" (because we typically choose kernel functions with compact support), it would be a waste of effort to calculate the summations above over every particle in a large simulation. So typically SPH simulators require some extra code to speed up this nearest neighbor calculation.
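A common way to speed up this neighbour search is a cell (bucket) list: bin particles into cells whose width equals the kernel support, so only the particles in a cell and its adjacent cells need to be examined. The following 1D sketch is illustrative only, not a specific library's API; it extends to 2D/3D with tuple-valued cell keys.

from collections import defaultdict

support = 0.2                         # kernel support radius, e.g. 2 * smoothing length (assumed)
positions = [0.05, 0.07, 0.31, 0.33, 0.90]   # illustrative particle positions

cells = defaultdict(list)
for idx, x in enumerate(positions):
    cells[int(x / support)].append(idx)      # bin each particle into a cell of width 'support'

def neighbours(i):
    # Only particles in the same cell or the two adjacent cells can lie within 'support'.
    c = int(positions[i] / support)
    candidates = cells[c - 1] + cells[c] + cells[c + 1]
    return [j for j in candidates if j != i and abs(positions[j] - positions[i]) <= support]

print(neighbours(0))   # [1]: particle 1 is the only one within the support radius of particle 0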
History.
One of the earliest meshfree methods is smoothed particle hydrodynamics, presented in 1977. Libersky "et al." were the first to apply SPH in solid mechanics. The main drawbacks of SPH are inaccurate results near boundaries and tension instability that was first investigated by Swegle.
In the 1990s a new class of meshfree methods emerged based on the Galerkin method. This first method called the diffuse element method (DEM), pioneered by Nayroles et al., utilized the MLS approximation in the Galerkin solution of partial differential equations, with approximate derivatives of the MLS function. Thereafter Belytschko pioneered the Element Free Galerkin (EFG) method, which employed MLS with Lagrange multipliers to enforce boundary conditions, higher order numerical quadrature in the weak form, and full derivatives of the MLS approximation which gave better accuracy. Around the same time, the reproducing kernel particle method (RKPM) emerged, the approximation motivated in part to correct the kernel estimate in SPH: to give accuracy near boundaries, in non-uniform discretizations, and higher-order accuracy in general. Notably, in a parallel development, the Material point methods were developed around the same time which offer similar capabilities. Material point methods are widely used in the movie industry to simulate large deformation solid mechanics, such as snow in the movie Frozen. RKPM and other meshfree methods were extensively developed by Chen, Liu, and Li in the late 1990s for a variety of applications and various classes of problems. During the 1990s and thereafter several other varieties were developed including those listed below.
List of methods and acronyms.
The following numerical methods are generally considered to fall within the general class of "meshfree" methods. Acronyms are provided in parentheses.
Related methods:
Recent development.
The primary areas of advancement in meshfree methods are to address issues with essential boundary enforcement, numerical quadrature, and contact and large deformations. The common weak form requires strong enforcement of the essential boundary conditions, yet meshfree methods in general lack the Kronecker delta property. This makes essential boundary condition enforcement non-trivial, at least more difficult than in the finite element method, where they can be imposed directly. Techniques have been developed to overcome this difficulty and impose conditions strongly. Several methods have been developed to impose the essential boundary conditions weakly, including Lagrange multipliers, Nitsche's method, and the penalty method.
As for quadrature, nodal integration is generally preferred, as it offers simplicity and efficiency, and keeps the meshfree method free of any mesh (as opposed to using Gauss quadrature, which necessitates a mesh to generate quadrature points and weights). Nodal integration, however, suffers from numerical instability due to underestimation of the strain energy associated with short-wavelength modes, and also yields inaccurate and non-convergent results due to under-integration of the weak form. One major advance in numerical integration has been the development of stabilized conforming nodal integration (SCNI), which provides a nodal integration method that does not suffer from either of these problems. The method is based on strain smoothing, which satisfies the first-order patch test. However, it was later realized that low-energy modes were still present in SCNI, and additional stabilization methods have been developed. The method has been applied to a variety of problems, including thin and thick plates, poromechanics and convection-dominated problems, among others. More recently, a framework has been developed to pass arbitrary-order patch tests, based on a Petrov–Galerkin method.
One recent advance in meshfree methods aims at the development of computational tools for automation in modeling and simulations. This is enabled by the so-called weakened weak (W2) formulation based on the G space theory. The W2 formulation offers possibilities to formulate various (uniformly) "soft" models that work well with triangular meshes. Because a triangular mesh can be generated automatically, re-meshing becomes much easier and hence enables automation in modeling and simulation. In addition, W2 models can be made soft enough (in uniform fashion) to produce upper bound solutions (for force-driving problems). Together with stiff models (such as the fully compatible FEM models), one can conveniently bound the solution from both sides. This allows easy error estimation for generally complicated problems, as long as a triangular mesh can be generated. Typical W2 models are the Smoothed Point Interpolation Methods (or S-PIM). The S-PIM can be node-based (known as NS-PIM or LC-PIM), edge-based (ES-PIM), and cell-based (CS-PIM). The NS-PIM was developed using the so-called SCNI technique. It was then discovered that NS-PIM is capable of producing upper bound solutions and is volumetric-locking free. The ES-PIM is found superior in accuracy, and the CS-PIM behaves in between the NS-PIM and ES-PIM. Moreover, W2 formulations allow the use of polynomial and radial basis functions in the creation of shape functions (they accommodate discontinuous displacement functions, as long as they are in the G1 space), which opens further room for future developments. The W2 formulation has also led to the combination of meshfree techniques with the well-developed FEM techniques, and one can now use a triangular mesh with excellent accuracy and desired softness. A typical such formulation is the so-called smoothed finite element method (or S-FEM). The S-FEM is the linear version of S-PIM, but with most of the properties of the S-PIM and much simpler.
It is a general perception that meshfree methods are much more expensive than their FEM counterparts. Recent studies have found, however, that some meshfree methods such as the S-PIM and S-FEM can be much faster than their FEM counterparts.
The S-PIM and S-FEM work well for solid mechanics problems. For CFD problems, the formulation can be simpler, via a strong formulation. A gradient smoothing method (GSM) has also been developed recently for CFD problems, implementing the gradient smoothing idea in strong form. The GSM is similar to the FVM, but uses gradient smoothing operations exclusively in nested fashion, and is a general numerical method for PDEs.
Nodal integration has been proposed as a technique to use finite elements to emulate a meshfree behaviour. However, the obstacle that must be overcome in using nodally integrated elements is that the quantities at nodal points are not continuous, and the nodes are shared among multiple elements.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "u(x, t)"
},
{
"math_id": 1,
"text": "u_i^n"
},
{
"math_id": 2,
"text": "x_i"
},
{
"math_id": 3,
"text": "i=0,1,2..."
},
{
"math_id": 4,
"text": "n=0,1,2..."
},
{
"math_id": 5,
"text": "x_{i+1}-x_i=h\\ \\forall i"
},
{
"math_id": 6,
"text": "t_{n+1}-t_n=k\\ \\forall n"
},
{
"math_id": 7,
"text": "{\\partial u\\over \\partial x}={u_{i+1}^n-u_{i-1}^n\\over 2h}"
},
{
"math_id": 8,
"text": "{\\partial u\\over \\partial t}={u_i^{n+1}-u_i^n\\over k}"
},
{
"math_id": 9,
"text": "u(x,t)"
},
{
"math_id": 10,
"text": "h"
},
{
"math_id": 11,
"text": "k"
},
{
"math_id": 12,
"text": "x_{i-1}"
},
{
"math_id": 13,
"text": "x_{i+1}"
},
{
"math_id": 14,
"text": "u_i"
},
{
"math_id": 15,
"text": "u(x,t_n) = \\sum_i m_i \\frac{u_i^n}{\\rho_i} W(|x-x_i|)"
},
{
"math_id": 16,
"text": "m_i"
},
{
"math_id": 17,
"text": "i"
},
{
"math_id": 18,
"text": "\\rho_i"
},
{
"math_id": 19,
"text": "W"
},
{
"math_id": 20,
"text": "{\\partial u\\over \\partial x} = \\sum_i m_i \\frac{u_i^n}{\\rho_i} {\\partial W(|x-x_i|) \\over \\partial x}"
}
] |
https://en.wikipedia.org/wiki?curid=6243282
|
62434171
|
Nehemiah 4
|
Chapter from Nehemiah in the Old Testament
Nehemiah 4 is the fourth chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 14th chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. This chapter recounts how the Jews had to militarize the building of the wall due to the constant threat from their enemies.
Text.
The original text of this chapter is in Hebrew. In English bibles, this chapter is divided into 23 verses, but in Hebrew texts 4:1-6 is numbered 3:33-38, and 4:7-23 is numbered 4:1-17.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Derision (4:1–3).
When the restoration of the Jerusalem walls was advanced, Sanballat and his allies intensified the attacks beyond the scorn first mentioned in .
"But it came to pass, that when Sanballat heard that we builded the wall, he was wroth, and took great indignation, and mocked the Jews."
Verse 1.
On discovering 'the systematic design of refortifying Jerusalem', the Samaritan faction represented by Sanballat showed its bitter animosity to the Jews, heaping scoffs, insults and all sorts of disparaging words on them as its feelings of hatred and contempt increased.
"Now Tobiah the Ammonite was beside him, and he said, "Whatever they build, if even a fox goes up on it, he will break down their stone wall".
Verse 3.
The language about a fox on the wall is "troublesome". Some writers see the term as a reference to a siege weapon. H. G. M. Williamson sees it as a sarcastic reference to a small animal being able to break apart what the Jews are putting together.
Nehemiah's response to the attack (4:4–6).
Refusing to engage in a war of words or retaliatory actions, Nehemiah prayed to God, then went to work.
"So built we the wall; and all the wall was joined together unto the half thereof: for the people had a mind to work."
Obstacles (4:7-23).
With each step forward, Nehemiah faced obstacles to complete the wall, but he persevered with prayer and hard work. In this section he described the plot (verses 7–), discouragement (), threats and rumors () against him, but then he found his resolve () and executed his contingency plans ().
"7 But it came to pass, that when Sanballat, and Tobiah, and the Arabians, and the Ammonites, and the Ashdodites, heard that the walls of Jerusalem were made up, and that the breaches began to be stopped, then they were very wroth,"
"8 And conspired all of them together to come and to fight against Jerusalem, and to hinder it."
Verses 7–8.
The Jews were completely encircled by the enemies: the Samaritans (Sanballat) in the north, the Ammonites (Tobiah) in the east, the Arabians (Geshem) in the south, and the Ashdodites in the west.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62434171
|
62435420
|
Maximin share
|
Criterion of fair item allocation
Maximin share (MMS) is a criterion of fair item allocation. Given a set of items with different values, the "1-out-of-n maximin-share" is the maximum value that can be gained by partitioning the items into formula_0 parts and taking the part with the minimum value. An allocation of items among formula_0 agents with different valuations is called MMS-fair if each agent gets a bundle that is at least as good as his/her 1-out-of-"n" maximin-share. MMS fairness is a relaxation of the criterion of proportionality - each agent gets a bundle that is at least as good as the equal split (formula_1 of every resource). Proportionality can be guaranteed when the items are divisible, but not when they are indivisible, even if all agents have identical valuations. In contrast, MMS fairness can always be guaranteed to identical agents, so it is a natural alternative to proportionality even when the agents are different.
Motivation and examples.
Identical items. Suppose first that formula_2 identical items have to be allocated fairly among formula_0 people. Ideally, each person should receive formula_3 items, but this may be impossible if formula_2 is not divisible by formula_0, as the items are indivisible. A natural second-best fairness criterion is to round formula_3 down to the nearest integer, and give each person at least formula_4 items. Receiving less than formula_4 items is "too unfair" - it is an unfairness not justified by the indivisibility of the items.
Different items. Suppose now that the items are different, and each item has a different value. For example, suppose formula_5 and formula_6 and the items' values are formula_7, adding up to formula_8. If the items were divisible, we would give each person a value of formula_9 (or, if they were divisible only to integer values as in the preceding paragraph, at least formula_10), but this is not possible. The largest value that can be guaranteed to all three agents is 7, by the partition formula_11. Informally, formula_12 is the total value divided by formula_0 "rounded down to the nearest item".
The set formula_13 attaining this maximin value is called the "1-out-of-3 maximin-share" - it is the best subset of items that can be constructed by partitioning the original set into formula_14 parts and taking the least valuable part. Therefore, in this example, an allocation is MMS-fair iff it gives each agent a value of at least formula_12.
Different valuations. Suppose now that each agent assigns a "different" value to each item; for example, Alice values the five items at formula_7 as before, George values them at formula_15, and Dina values them at formula_16.
Now, each agent has a different MMS: Alice's MMS is formula_12 as above; George's MMS is formula_17, attained by the partition formula_18; and Dina's MMS is formula_14, attained by the partition formula_19.
Here, an allocation is MMS-fair if it gives Alice a value of at least formula_12, George a value of at least formula_17, and Dina a value of at least formula_14. For example, giving George the first two items formula_20, Alice the next two items formula_21, and Dina the last item formula_22, is MMS-fair.
Interpretation. The 1-out-of-formula_0 MMS of an agent can be interpreted as the maximal utility that an agent can hope to get from an allocation if all the other agents have the "same" preferences, when he always receives the worst share. It is the minimal utility that an agent could feel entitled to, based on the following argument: if all the other agents have the same preferences as me, there is at least one allocation that gives me this utility, and makes every other agent (weakly) better off; hence there is no reason to give me less.
An alternative interpretation is: the most preferred bundle the agent could guarantee as divider in divide and choose against adversarial opponents: the agent proposes her best allocation and leaves all the other ones to choose one share before taking the remaining one.
MMS-fairness can also be described as the result of the following negotiation process. A certain allocation is suggested. Each agent can object to it by suggesting an alternative partition of the items. However, in doing so he must let all other agents choose their share before he does. Hence, an agent would object to an allocation only if he can suggest a partition in which "all" bundles are better than his current bundle. An allocation is MMS-fair iff no agent objects to it, i.e., for every agent, in every partition there exists a bundle which is weakly worse than his current share.
History.
Theodore Hill studied the maximin-share in 1987. He presented a lower bound for the maximin-share of an agent as a function of the largest item value, and proved that there always exists an allocation in which each agent receives at least this lower bound. Note that the actual maximin-share might be higher than the lower bound, so the allocation found by Hill's method might not be MMS-fair.
Budish studied MMS-fairness in 2011, in the context of course allocation. He presented the A-CEEI mechanism, which attains an approximately MMS-fair allocation if it is allowed to add some goods. In 2014, Procaccia and Wang proved that an exact MMS-fair allocation among three or more agents may not exist.
Formal definition.
Let formula_23 be a set representing the resource to be allocated. Let formula_24 be any real-valued function on subsets of formula_23, representing their "value". The 1-out-of-"n" maximin-share of formula_24 from formula_23 is defined as: formula_25 Here, the maximum is over all partitions of formula_23 into formula_0 disjoint subsets, and the minimum is over all formula_0 subsets in the partition. In the above examples, formula_23 was a set of integers, and "formula_24" was the sum function, that is, formula_26 was defined as the sum of integers in formula_27. For example, we showed that formula_28, where the maximizing partition is formula_29. In a typical fair allocation problem, there are some formula_0 different agents with different value functions formula_30 over the same resource formula_23. The 1-out-of-formula_0 MMS value of agent formula_31 is denoted by formula_32. An "allocation" is a vector of "n" pairwise-disjoint subsets of formula_23 - one subset per agent. An allocation formula_33 is called "MMS-fair", or simply "an MMS allocation", if for every agent formula_31, formula_34. An allocation is called "an MMS partition of agent" formula_31 if it holds that formula_35 for all formula_36, i.e., the allocation is one of the partitions that maximizes the formula for formula_37's MMS.
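For additive valuations, the definition can be evaluated directly by brute force, as in the following Python sketch (exponential in the number of items, so it is only an illustration of the definition, not a practical algorithm). It reproduces the MMS values of the three agents in the example above.

from itertools import product

def mms(values, n):
    # 1-out-of-n maximin share of an additive valuation given by a list of item values.
    best = float("-inf")
    for assignment in product(range(n), repeat=len(values)):   # each item -> one of n parts
        parts = [0] * n
        for item_value, part in zip(values, assignment):
            parts[part] += item_value
        best = max(best, min(parts))   # maximise the value of the worst part
    return best

print(mms([1, 3, 5, 6, 9], 3))        # 7, matching the partition {1,6},{3,5},{9}
print(mms([1, 7, 2, 6, 8], 3))        # 8  (George's MMS in the example above)
print(mms([1, 1, 1, 4, 17], 3))       # 3  (Dina's MMS in the example above)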
Lower bound.
Hill proved that, if the value of every item for an agent is at most formula_38 times the value of all items, then the 1-out-of-"n" MMS of that agent is at least formula_39, where formula_39 is the following piecewise-linear function: formula_40 for all formula_41, for all formula_42. Note that formula_39 is a continuous and non-increasing function of formula_38, with formula_43 and formula_44 (see the paper for a plot of formula_45 and formula_46).
Hill also proved that, for every "n" and formula_38, and for any "n" agents who value each item at most formula_38 times the total value, there exists a partition in which each agent receives a value of at least formula_39. Moreover, this guarantee is tight: for every "n" and formula_38, there are cases in which it is impossible to guarantee more than formula_39 to everyone, even when all valuations are identical.
Markakis and Psomas strengthened Hill's guarantee, and provided a polynomial-time algorithm for computing an allocation satisfying this stronger guarantee. They also showed that no truthful mechanism can obtain a 2/3-approximation to this guarantee, and present a truthful constant-factor approximation for a bounded number of goods. Gourves, Monnot and Tlilane extended the algorithm of Markakis and Psomas to gain a tighter fairness guarantee, that works for the more general problem of allocating a basis of a matroid.
Li, Moulin, Sun and Zhou have extended Hill's lower bound to bads, and presented a more accurate bound that depends also on the "number" of bads. They also presented a polynomial-time algorithm attaining this bound.
Existence of MMS-fair allocations.
An MMS-fair allocation might not exist. Procaccia and Wang and Kurokawa constructed an instance with formula_5 agents and formula_47 items, in which no allocation guarantees to each agent the 1-out-of-3 MMS. Note that this does not contradict Hill's result, since the MMS of all agents may be strictly larger than Hill's lower bound formula_39. In their instance, there are formula_48 objects, indexed by formula_49 and formula_50. Each agent formula_51 values each object formula_52 by: formula_53 where formula_54 are particular 3-by-4 matrices with values smaller than formula_55. They prove that every agent can partition the objects into formula_14 subsets of formula_56 objects each, such that the sum of values in each subset is 4,055,000, which is therefore the MMS of all agents. They prove that every MMS allocation must give exactly 4 particular objects to every agent, but such an allocation does not exist. Thus, every allocation gives at least one agent a value of at most 4,054,999. They generalized this instance and showed that for every formula_57 there is such an instance with formula_58 items.
Feige, Sapir and Tauber improved the non-existence result, constructing an instance with formula_59 agents and formula_60 items in which there is no MMS allocation. In this instance each agent has an MMS of 40, but it is only possible to guarantee the worst-off agent items with a combined value of 39. They also show that for any formula_57, there is an instance with formula_61 items for which an MMS allocation does not exist. If formula_0 is even, they improve the bound to formula_62 items. For these instances, the worst-off agent can receive at most a formula_63 share of their MMS.
While MMS allocations are not guaranteed to exist, it has been proved that in random instances, MMS allocations exist with high probability. Kurokawa, Procaccia and Wang showed that this holds true for two cases:
Amanatidis, Markakis, Nikzad and Saberi also prove that, in randomly-generated instances, MMS-fair allocations exist with high probability.
For many classes of instances, it has been proven that MMS allocations always exist. When all "n" agents have identical valuations, an MMS allocation always exists by definition (all agents have the same MMS partitions). A slightly more general case in which an MMS allocation exists is when some formula_67 agents have identical valuations. An MMS allocation can then be found by divide and choose: the formula_67 identical agents partition the items into formula_68 bundles each of which is at least as good as their MMS; the formula_0-th agent chooses the bundle with the highest value; and the identical agents take the remaining formula_67 bundles. In particular, with two agents, an MMS allocation always exists.
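A minimal sketch of this divide-and-choose argument in Python; the two valuation vectors are the illustrative ones from the example earlier in the article, and the brute-force partition search is only for exposition (it is exponential in the number of items).

from itertools import product

def mms_partition(values, n):
    # Return a partition (list of item-index lists) maximising the value of the worst part.
    best, best_parts = float("-inf"), None
    for assignment in product(range(n), repeat=len(values)):
        parts = [[] for _ in range(n)]
        for item, part in enumerate(assignment):
            parts[part].append(item)
        worst = min(sum(values[i] for i in p) for p in parts)
        if worst > best:
            best, best_parts = worst, parts
    return best_parts

common  = [1, 3, 5, 6, 9]      # valuation shared by the n-1 identical agents
chooser = [1, 7, 2, 6, 8]      # valuation of the remaining agent
n = 3

bundles = mms_partition(common, n)   # every bundle is worth >= the identical agents' MMS (7)
picked = max(bundles, key=lambda b: sum(chooser[i] for i in b))   # chooser takes their best bundle
rest = [b for b in bundles if b is not picked]                    # identical agents take the rest

print(picked, sum(chooser[i] for i in picked))   # chooser's bundle; its value is >= chooser's MMS
print(rest)                                      # each remaining bundle is worth >= 7 to the identical agents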
Bouveret and Lemaître proved that MMS allocations exist in the following cases:
The latter result was later improved to formula_72 by Kurokawa, Procaccia and Wang and formula_73 by Feige, Sapir and Tauber. Due to the negative example with three agents and nine items, this is the largest constant formula_74 that exists, such that all instances with formula_68 agents and formula_75 items always have MMS allocations, no matter the value of formula_68. Hummel further showed that MMS allocations exist in the following cases:
Amanatidis, Markakis, Nikzad and Saberi showed that MMS allocations exist and can be found in polynomial time for the case of "ternary valuations", in which each item is valued at 0, 1 or 2.
Uriel Feige proved that MMS allocations always exists in "bivalued instances", in which there are some two values "a" and "b", and each agent values every item at either "a" or "b".
Approximations.
Budish introduced an approximation to the 1-of-"n" MMS - the "1-of-(formula_81)" MMS: each agent receives at least as much as he could get by partitioning into "n"+1 bundles and getting the worst one. In general, for any "d" > "n", one can consider the "1-of-d" MMS as an approximation to the 1-of-"n" MMS, and look for an allocation in which, for each agent "i": formula_82 Note that the value of the "1-of-d" MMS is a weakly decreasing function of "d". This is called an "ordinal approximation", since it depends only on the ranking of the bundles, and not on their precise values.
Procaccia and Wang introduced a different kind of approximation - the "multiplicative approximation" to MMS: an allocation is "r-fraction MMS"-fair, for some fraction r in [0,1], if each agent's value is at least a fraction "r" of the value of his/her MMS, that is, for each agent "i": formula_83 Suppose one can choose between two algorithms: the first guarantees a multiplicative approximation (e.g. 3/4-fraction MMS), while the second guarantees an ordinal approximation (e.g. 1-out-of-(3"n"/2) MMS). Which of the two guarantees is higher? The answer depends on the values.
In general, for any integer "k", the 1-of-"n" MMS is at least "k" times the 1-of-"nk" MMS: take an optimal 1-of-"nk" MMS partition, and group the bundles into "n" super-bundles, each of which contains "k" original bundles. Each of these super-bundles is worth at least "k" times the smallest original bundle. Therefore, a 1/"k"-multiplicative approximation is at least as large as a 1-of-"nk" ordinal approximation, but may be smaller than a 1-of-("nk"-1) ordinal approximation, as in the above example. In particular, any "r"-multiplicative approximation for r ≥ 1/2 is at least as good as a 1-of-(2"n") ordinal approximation, but might be worse than a 1-of-(2"n"-1) ordinal approximation.
MMS-fair allocation of goods.
Multiplicative approximations.
Procaccia and Wang presented an algorithm that always finds an "rn"-fraction MMS, where
formula_85
where oddfloor("n") is the largest odd integer smaller than or equal to "n". In particular, "r3" = "r4" = 3/4; it decreases when "n" increases, and it is always larger than 2/3. Their algorithm runs in time polynomial in "m" when "n" is constant, but its runtime might be exponential in "n".
Amanatidis, Markakis, Nikzad and Saberi presented several improved algorithms:
Barman and Krishnamurthy presented:
Ghodsi, Hajiaghayi, Seddighin, Seddighin and Yami presented:
Garg, McGlaughlin and Taki presented a simple algorithm for 2/3-fraction MMS-fairness whose analysis is also simple.
Garg and Taki presented:
Akrami, Garg, Sharma and Taki improve the analysis of the algorithm presented by Garg and Taki, simplifying the analysis and improving the existence guarantee to formula_88.
To date, it is not known what is the largest "r" such that an "r"-fraction MMS allocation always exists. It can be any number between formula_89 and formula_90.
Ordinal approximations.
Budish showed that the Approximate Competitive Equilibrium from Equal Incomes always guarantees the "1-of-(formula_81)" MMS. However, this allocation may have excess supply, and - more importantly - excess demand: the sum of the bundles allocated to all agents might be slightly larger than the set of all items. Such an error is reasonable in course allocation, since a small excess supply can be corrected by adding a small number of seats. But the classic fair division problem assumes that items may not be added.
Without excess supply and demand, the following approximations are known:
To date, it is not known what is the smallest "d" such that a 1-out-of-"d" MMS allocation always exists. It can be any number between "n"+1 and 3"n/"2. The smallest open case is "n"=4.
Additional constraints.
Maximizing the product: Caragiannis, Kurokawa, Moulin, Procaccia, Shah and Wang showed that the "max-Nash-welfare allocation" (the allocation maximizing the product of utilities) is always formula_92-fraction MMS fair, and it is tight.
Truthfulness: Amanatidis, Birmpas and Markakis presented truthful mechanisms for approximate MMS-fair allocations (see also Strategic fair division):
Cardinality constraints: The items are partitioned into categories, and each agent can get at most "kh" items from each category "h". In other words, the bundles must be independent sets of a partition matroid.
Conflict graph: Hummel and Hetland study another setting where there is a conflict graph between items (for example: items represent events, and an agent cannot attend two simultaneous events). They show that, if the degree of the conflict graph is "d" and it is in (2,"n"), then a 1/"d"-fraction MMS allocation can be found in polynomial time, and a 1/3-fraction MMS allocation always exists.
Connectivity: the items are located on a graph, and each part must be a connected subgraph.
MMS-fair allocation of chores.
Aziz, Rauchecker, Schryen and Walsh extended the MMS notion to "chores" (items with negative utilities). Note that, for chores, the multiplicative approximation factors are larger than 1 (since fewer chores have higher utility), and the ordinal approximation factors are smaller than "n". They presented:
Barman and Krishnamurthy presented an algorithm attaining 4/3-fraction MMS (precisely, formula_93). The algorithm can be seen as a generalization of the LPT algorithm for identical-machines scheduling.
Huang and Lu prove that a 11/9-fraction MMS-fair allocation for "chores" always exists, and a 5/4-fraction MMS allocation can be found in polynomial time. Their algorithm can be seen as a generalization of the Multifit algorithm for identical-machines scheduling.
Kulkarni, Mehta and Taki study MMS-fair allocation with "mixed valuations", i.e., when there are both goods and chores. They prove that:
Ebadian, Peters and Shah prove that an MMS allocation always exists in bivalued instances, when each agent "i" partitions the chores to "easy" (valued at 1 for everyone) or "difficult" (valued at some integer "pi" > 1).
Techniques and algorithms.
Various normalizations can be applied to the original problem without changing the solution. Below, "O" is the set of all objects.
Scaling.
If, for each agent i, all valuations are scaled by a factor formula_94 (which can be different for different agents), then the MMS for each agent is scaled by the same factor; therefore, every MMS allocation in the original instance is an MMS allocation in the scaled instance. It is common to scale the valuations such that the MMS of every agent is exactly formula_69. After this scaling, the MMS approximation problems can be stated as:
The above scaling requires computing the MMS of each agent, which is an NP-hard problem (multiway number partitioning). An alternative scaling, that can be done faster, is:
Allocating one object.
If we remove one object formula_97 from formula_96, then for each agent, the formula_69-out-of-(formula_67) MMS w.r.t. the remaining set formula_98 is at least his formula_69-out-of-formula_0 MMS w.r.t. the original set formula_96. This is because, in the original MMS partition, formula_67 parts remain intact. Now, suppose we aim to give each agent a value of formula_95. If some object formula_99 is worth at least formula_95 to at least one agent, then we can give formula_99 to one such agent arbitrarily, and proceed to allocate the remaining objects to the remaining agents. Therefore, we can assume w.l.o.g. that:
This normalization works even with the fast scaling, and with arbitrary monotone valuations (even non-additive).
Bag filling.
Denote an object that is valued at most formula_100 by all agents as an "formula_100-small object". Suppose that all objects are formula_100-small. Take an empty bag and fill it with object after object, until the bag is worth at least formula_95 to at least one agent. Then, give the bag to one such agent arbitrarily. Since all objects are formula_100-small, the remaining agents value the bag at most formula_101; if this value is sufficiently small, then the remaining value is sufficiently large so that we can proceed recursively. In particular, bag-filling gives us the following solutions:
These bag-filling algorithms work even with the fast scaling, so they run in polynomial time - they do not need to know the exact MMS value. In fact, both algorithms can be stated without mentioning the MMS at all:
Modified bag filling: The condition that all objects are formula_100-small can be relaxed as follows. Take some formula_108. Denote an object that is not formula_100-small (i.e., valued at least formula_100 by at least one agent) as an "formula_100-large object". Suppose at most formula_0 objects are formula_100-large. Take one formula_100-large object formula_99, put it in a bag, and fill it with formula_100-small objects until one agent indicates that it is worth for him at least formula_95. There must be at least one such agent, since some agent formula_31 values formula_99 at some formula_109. For this agent, there are at most formula_67 remaining formula_100-large objects. By the previous normalization, these objects are still formula_95-small, so their total value for formula_31 is at most formula_110, so the value of the remaining formula_100-small objects is at least formula_111.
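Returning to the plain bag-filling procedure described above, here is a minimal Python sketch. The valuations and the threshold r are purely illustrative, and no particular approximation guarantee is claimed for these numbers; the point is only the structure of the procedure (fill a bag until some remaining agent values it at least r, then give it to one such agent arbitrarily).

def bag_filling(valuations, r):
    # valuations: per-agent lists of item values; returns a dict agent -> set of item indices.
    n, m = len(valuations), len(valuations[0])
    remaining_agents = set(range(n))
    remaining_items = list(range(m))
    allocation = {i: set() for i in range(n)}
    while remaining_agents and remaining_items:
        bag, satisfied = set(), None
        while remaining_items and satisfied is None:
            bag.add(remaining_items.pop())                        # add one more item to the bag
            for agent in remaining_agents:
                if sum(valuations[agent][o] for o in bag) >= r:   # some agent values the bag enough
                    satisfied = agent
                    break
        if satisfied is not None:
            allocation[satisfied] = bag                           # give the bag away and continue
            remaining_agents.remove(satisfied)
        # if the items run out first, the last unfinished bag is simply left unallocated in this sketch
    return allocation

vals = [[1, 3, 5, 6, 9], [1, 7, 2, 6, 8], [1, 1, 1, 4, 17]]   # the example valuations from earlier
print(bag_filling(vals, 4))                                   # threshold r = 4, illustrative only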
Ordering.
An instance is "ordered" if all agents have the same ordinal ranking on the objects, i.e, the objects can be numbered formula_112 such that, for each agent formula_31, formula_113. Intuitively, ordered instances are hardest, since the conflict between agents is largest. Indeed, the negative instance of is ordered - the order of the objects is determined by the matrix formula_114, which is the same for all agents. This can also be proved formally. Suppose we have an algorithm that finds, for every ordered instance, an formula_95-fraction MMS allocation. Now, we are given a general item-allocation instance formula_115. We solve it in the following way.
So when looking for formula_95-fraction MMS allocations, we can assume w.l.o.g. that:
Allocating two objects.
Suppose we find two objects "o"1 and "o"2, that one agent "i" values at least "r", while the other agents value at most 1. Then these two objects can be allocated to "i". For the other agents, the 1-out-of-("n"-1) MMS w.r.t. the remaining set is at least his 1-out-of-"n" MMS w.r.t. the original set "O". This is because, in the original MMS partition, at least "n"-2 parts remain intact, while the two parts that are not intact can be combined to form a single part with a value of at least 1. This normalization works only with additive valuations.
Moreover, suppose that the instance is ordered, and suppose we remove from "O" the two objects "on", "on"+1 (i.e., the "n"-th and ("n"+1)-th highest-valued items). Then for each agent, the 1-out-of-("n"-1) MMS w.r.t. the remaining set is at least his 1-out-of-"n" MMS w.r.t. the original set "O". This is because, by the pigeonhole principle, at least one MMS part of each agent must contain two or more objects from the set {"o"1, ..., "on"+1}. These items can be used to replace the objects given away, which results in "n"-1 parts with a value of at least 1. This means that, if the objects "on", "on"+1 have a value of at least the MMS for some agent "i", we can give them to "i" and proceed to allocate the remaining objects to the remaining agents. Therefore, we can assume w.l.o.g. that:
This normalization works even with the fast scaling. Combining it with modified bag-filling leads to the following simple algorithm for 2/3-fraction MMS.
The guarantee of this algorithm can be stated even without mentioning MMS:
Algorithmic problems.
Several basic algorithms related to the MMS are:
Relation to other fairness criteria.
An allocation is called "envy-free-except-c-items" (EF"c") for an agent "i" if, for every other agent "j", there exists a set of at most "c" items that, if removed from "j"'s bundle, then "i" does not envy the remainder. An EF0 allocation is simply called "envy-free". EF1 allocations can be found, for example, by round-robin item allocation or by the envy-graph procedure.
An allocation is called "proportional-except-c-items" (PROP*"c") for an agent "i" if there exists a set of at most "c" items outside "i"'s bundle that, if removed from the set of all items, then "i" values his bundle at least 1/"n" of the remainder. A PROP*0 allocation is simply called "proportional".
EF0 implies PROP*0, and EF1 implies PROP*("n"-1). Moreover, for any integer "c" ≥ 0, EF"c" implies PROP*(("n"-1)"c"). The opposite implication is true when "n"=2, but not when "n">2.
The following maximin-share approximations are implied by PROP*("n"-1), hence also by EF1:
The above implications are illustrated below:
An allocation is called "envy-free-except-any-item" (EF"x") for an agent "i" if, for every other agent "j", for "any" single item removed from "j"'s bundle, "i" does not envy the remainder. EFx is strictly stronger than EF1. It implies the following MMS approximations:
MMS for groups.
An allocation is called pairwise-maximin-share-fair (PMMS-fair) if, for every two agents "i" and "j", agent "i" receives at least his 1-out-of-2 maximin-share restricted to the items received by "i" and "j". It is not known whether a PMMS allocation always exists, but a 0.618-approximation always exists.
An allocation is called groupwise-maximin-share-fair (GMMS-fair) if, for every subgroup of agents of size "k", each member of the subgroup receives his/her 1-out-of-"k" maximin-share restricted to the items received by this subgroup.
With additive valuations, the various fairness notions are related as follows:
GMMS allocations are guaranteed to exist when the valuations of the agents are either binary or identical. With general additive valuations, 1/2-GMMS allocations exist and can be found in polynomial time.
MMS for agents with different entitlements.
When agents have "different entitlements" (also termed: "unequal shares" or "asymmetric rights"), MMS fairness should be adapted to guarantee a higher share to agents with higher entitlements. Various adaptations have been suggested. Below, we assume that the entitlements are given by a vector formula_125, where formula_126 represents the entitlement of agent "formula_31".
Weighted-MMS fairness.
Farhadi, Ghodsi, Hajiaghayi, Lahaie, Pennock, Seddighin and Seddigin introduce the Weighted Maximin Share (WMMS), defined by: formula_127 Intuitively, the optimal WMMS is attained by a partition in which the value of part "j" is proportional to the entitlement of agent "j". For example, suppose all value functions are the sums, and the entitlement vector is "t"=(1/6, 11/24, 9/24). Then formula_128 by the partition ({1,3},{5,6},{9}); it is optimal since the value of each part "formula_31" equals formula_129. By the same partition, formula_130 and formula_131. When all "n" entitlements are equal, formula_132.
An allocation of "C" is called "WMMS-fair" for entitlement-vector "t" if the value of each agent i is at least formula_133. When all "n" agents have identical valuations, a WMMS-fair allocation always exists by definition. But with different valuations, the best possible multiplicative approximation is 1/"n". The upper bound is proved by the following example with 2"n"-1 goods and "n" agents, where ε>0 is a very small constant:
All agents have an optimal WMMS partition: for the "small" agents (1, ..., "n"-1) it is the partition ({1}, ..., {"n"-1}, {"n"}) and for the "large" agent ("n") it is ({"n"+1}, ..., {2"n"-1}, {1...,"n"}). Therefore, formula_134 for all agents "i" (for comparison, note that formula_135 for the small agents, but formula_136 for the large agent).
In any multiplicative approximation of WMMS, all agents have to get a positive value. This means that the small agents take at least "n"-1 of the items 1...,"n", so at most one such item remains for the large agent, and his value is approximately 1/"n" rather than almost 1.
A 1/"n"-fraction WMMS-fair allocation always exists and can be found by round-robin item allocation. In a restricted case, in which each agent "i" values each good by at most formula_137, a 1/2-fraction WMMS-fair allocation exists and can be found by an algorithm similar to bag-filling: the valuations of each agent i are multiplied by formula_138; and in each iteration, an item is given to an unsatisfied agent (an agent with value less than formula_139) who values it the most. This algorithm allocates to each agent "i" at least formula_139 and at most formula_137. In practice, a WMMS-fair allocation almost always exists.
Ordinal-MMS fairness.
Babaioff, Nisan and Talgam-Cohen present a natural extension of the ordinal MMS approximation to agents with different entitlements. For any two integers formula_140, set "C" and value function "V", define formula_141 Here, the maximum is over all partitions of "C" into formula_84 disjoint subsets, and the minimum is over all "unions" of formula_142 parts. For example, formula_143 by the partition ({1,6},{3,5},{9}). Now, the Ordinal Maximin Share (OMMS) is defined by: formula_144 For example, if the entitlement of agent "i" is any real number at least as large as 2/3, then he is entitled to at least the 2-out-of-3 MMS of "C". Note that, although there are infinitely many pairs formula_140 satisfying formula_145, only finitely-many of them are not redundant (not implied by others), so it is possible to compute the OMMS in finite time. An allocation "Z"1...,"Zn" is called "OMMS-fair for entitlement-vector w" if the value of each agent i is at least formula_146.
The OMMS can be higher or lower than the WMMS, depending on the values:
AnyPrice-Share fairness.
Babaioff, Ezra and Feige introduced a third criterion for fairness, which they call AnyPrice Share (APS). They define it in two equivalent ways; one of them is clearly a strengthening of the maximin share. Instead of partitioning the items into "d" disjoint bundles, the agent is allowed to choose "any" collection of bundles, which may overlap. But the agent must then assign a weight to each bundle such that the sum of weights is at least 1, and each item belongs to bundles whose total weight is at most the agent's entitlement. The APS is the value of the least valuable positive-weight bundle. Formally: formula_150 where the maximum is over all sets of bundles such that, for some assignment of weights to the bundles, the total weight of all bundles is at least 1, and the total weight of each item is at most "ti". There is a polynomial-time algorithm that guarantees to each agent at least 3/5 of his APS.
The APS is always at least as high as the OMMS: given an optimal "l"-out-of-"d" partition, with l/d ≤ "ti", one can assign a weight of 1/"d" to the union of parts 1...,"l", the union of parts 2...,"l"+1, and so on (in a cyclic way), such that each part is included in exactly "l" unions. Therefore, each item belongs to bundles whose total weight is at most "l"/"d", which is at most "ti". The agent is guaranteed the least valuable such bundle, which is at least the "l"-out-of-"d" MMS.
In some cases, the APS is strictly higher than the OMMS. Here are two examples:
The APS can be higher or lower than the WMMS; the examples are the same as the ones used for OMMS vs WMMS:
Maximin share as a function of the largest item value.
Theodore Hill presented a version of the MMS that depends on the largest item value.
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "1/n"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "m/n"
},
{
"math_id": 4,
"text": "\\lfloor m/n \\rfloor"
},
{
"math_id": 5,
"text": "n = 3"
},
{
"math_id": 6,
"text": "m = 5"
},
{
"math_id": 7,
"text": "1, 3, 5, 6, 9"
},
{
"math_id": 8,
"text": "24"
},
{
"math_id": 9,
"text": "24/3 = 8"
},
{
"math_id": 10,
"text": "\\lfloor 24/3 \\rfloor = 8"
},
{
"math_id": 11,
"text": "\\{1,6\\}, \\{3,5\\}, \\{9\\}"
},
{
"math_id": 12,
"text": "7"
},
{
"math_id": 13,
"text": "\\{1, 6\\}"
},
{
"math_id": 14,
"text": "3"
},
{
"math_id": 15,
"text": "1, 7, 2, 6, 8"
},
{
"math_id": 16,
"text": "1, 1, 1, 4, 17"
},
{
"math_id": 17,
"text": "8"
},
{
"math_id": 18,
"text": "\\{1, 7\\}, \\{2, 6\\}, \\{8\\}"
},
{
"math_id": 19,
"text": "\\{1, 1, 1\\}, \\{4\\}, \\{17\\}"
},
{
"math_id": 20,
"text": "\\{1, 7\\}"
},
{
"math_id": 21,
"text": "\\{5, 6\\}"
},
{
"math_id": 22,
"text": "\\{17\\}"
},
{
"math_id": 23,
"text": "C"
},
{
"math_id": 24,
"text": "v"
},
{
"math_id": 25,
"text": "\\operatorname{MMS}_{v}^{1\\text{-out-of-}n}(C) := ~~~\\max_{(Z_1,\\ldots,Z_n) \\in \\operatorname{Partitions}(C,n)} ~~~ \\min_{j\\in [n]} ~~~ v(Z_j)"
},
{
"math_id": 26,
"text": "v(Z)"
},
{
"math_id": 27,
"text": "Z"
},
{
"math_id": 28,
"text": "\\operatorname{MMS}_{v}^{1\\text{-out-of-}3}(\\{1,3,5,6,9\\}) := 7 "
},
{
"math_id": 29,
"text": "\\{1, 6\\}, \\{3, 5\\}, \\{9\\}"
},
{
"math_id": 30,
"text": "v_1, \\dots ,v_n"
},
{
"math_id": 31,
"text": "i"
},
{
"math_id": 32,
"text": "\\operatorname{MMS}_{i}^{1\\text{-out-of-}n}(C) := \\operatorname{MMS}_{v_i}^{1\\text{-out-of-}n}(C) "
},
{
"math_id": 33,
"text": "Z_1, \\dots, Z_n"
},
{
"math_id": 34,
"text": "v_i(Z_i) \\geq \\operatorname{MMS}_{i}^{1\\text{-out-of-}n}(C) "
},
{
"math_id": 35,
"text": "v_i(Z_j) \\ge \\operatorname{MMS}_{i}^{1\\text{-out-of-}n}(C) "
},
{
"math_id": 36,
"text": "j "
},
{
"math_id": 37,
"text": "i "
},
{
"math_id": 38,
"text": "\\alpha"
},
{
"math_id": 39,
"text": "V_n(\\alpha)"
},
{
"math_id": 40,
"text": "V_n(\\alpha) = 1 - k\\cdot(n-1)\\cdot \\alpha"
},
{
"math_id": 41,
"text": "\\alpha \\in \\left[\\frac{1}{k(n-1/(k+1))}, \\frac{1}{k(n-1/k)}\\right]"
},
{
"math_id": 42,
"text": "k\\geq 1"
},
{
"math_id": 43,
"text": "V_n(0)=1/n"
},
{
"math_id": 44,
"text": "V_n(1)=0"
},
{
"math_id": 45,
"text": "V_2(\\alpha)"
},
{
"math_id": 46,
"text": "V_3(\\alpha)"
},
{
"math_id": 47,
"text": "m = 12"
},
{
"math_id": 48,
"text": "12"
},
{
"math_id": 49,
"text": "i\\in[3]"
},
{
"math_id": 50,
"text": "j\\in[4]"
},
{
"math_id": 51,
"text": "k"
},
{
"math_id": 52,
"text": "(i,j)"
},
{
"math_id": 53,
"text": "v_k(i,j) = 1,000,000 + 1,000\\cdot T_{i,j} + E_{i,j}^{(k)}"
},
{
"math_id": 54,
"text": "T, E^{(1)}, E^{(2)}, E^{(3)}"
},
{
"math_id": 55,
"text": "100"
},
{
"math_id": 56,
"text": "4"
},
{
"math_id": 57,
"text": "n \\ge 3"
},
{
"math_id": 58,
"text": "3n + 4"
},
{
"math_id": 59,
"text": "n = 3 "
},
{
"math_id": 60,
"text": "m = 9 "
},
{
"math_id": 61,
"text": "3n + 3"
},
{
"math_id": 62,
"text": "3n + 1"
},
{
"math_id": 63,
"text": "1 - 1/n^4"
},
{
"math_id": 64,
"text": "m \\geq \\alpha\\cdot n \\ln{n}"
},
{
"math_id": 65,
"text": " \\alpha"
},
{
"math_id": 66,
"text": "m < n^{8/7}"
},
{
"math_id": 67,
"text": "n - 1"
},
{
"math_id": 68,
"text": "n "
},
{
"math_id": 69,
"text": "1"
},
{
"math_id": 70,
"text": "0"
},
{
"math_id": 71,
"text": "m \\le n + 3"
},
{
"math_id": 72,
"text": "m \\le n + 4 "
},
{
"math_id": 73,
"text": "m \\le n + 5 "
},
{
"math_id": 74,
"text": "c "
},
{
"math_id": 75,
"text": "m \\le n + c "
},
{
"math_id": 76,
"text": "m \\le n + 6 "
},
{
"math_id": 77,
"text": "n \\neq 3 "
},
{
"math_id": 78,
"text": "m \\le n + 7 "
},
{
"math_id": 79,
"text": "n \\ge 8 "
},
{
"math_id": 80,
"text": "n \\ge \\lfloor 0.6597c(c!)\\rfloor "
},
{
"math_id": 81,
"text": "n+1"
},
{
"math_id": 82,
"text": "V_i(Z_i) \\geq \\operatorname{MMS}_{i}^{1\\text{-out-of-}d}(C) "
},
{
"math_id": 83,
"text": "V_i(Z_i) \\geq r\\cdot \\operatorname{MMS}_{i}^{1\\text{-out-of-}n}(C) "
},
{
"math_id": 84,
"text": "d"
},
{
"math_id": 85,
"text": "r_n := \\frac{2\\cdot \\text{oddfloor}(n)}{3\\cdot \\text{oddfloor}(n) -1} = \n\\begin{cases}\n\\frac{2n}{3n-1} & n ~ \\text{ odd}\n\\\\\n\\frac{2n-2}{3n-4} & n ~ \\text{ even}\n\\end{cases}"
},
{
"math_id": 86,
"text": "\\frac{2n}{3n-1}"
},
{
"math_id": 87,
"text": "(\\frac{3}{4} + \\frac{1}{12 n})"
},
{
"math_id": 88,
"text": "\\frac{3}{4} + \\min\\left(\\frac{1}{36}, \\frac{3}{16n - 4}\\right)"
},
{
"math_id": 89,
"text": "3/4"
},
{
"math_id": 90,
"text": "39/40"
},
{
"math_id": 91,
"text": "1/2"
},
{
"math_id": 92,
"text": "\\frac{2}{1+\\sqrt{4n-3}}"
},
{
"math_id": 93,
"text": "\\frac{4n-1}{3n}"
},
{
"math_id": 94,
"text": "k_i"
},
{
"math_id": 95,
"text": "r"
},
{
"math_id": 96,
"text": "O"
},
{
"math_id": 97,
"text": "o"
},
{
"math_id": 98,
"text": "O \\setminus o"
},
{
"math_id": 99,
"text": "o_1"
},
{
"math_id": 100,
"text": "s"
},
{
"math_id": 101,
"text": "r + s"
},
{
"math_id": 102,
"text": "s = r = 1/2"
},
{
"math_id": 103,
"text": "n - (r + s) = n - 1"
},
{
"math_id": 104,
"text": "s = r = 1"
},
{
"math_id": 105,
"text": "2n"
},
{
"math_id": 106,
"text": "2n - (r + s) = 2n - 2 = 2(n - 1)"
},
{
"math_id": 107,
"text": "1/(2n)"
},
{
"math_id": 108,
"text": "s < r"
},
{
"math_id": 109,
"text": "x>s"
},
{
"math_id": 110,
"text": "r(n-1)"
},
{
"math_id": 111,
"text": "n - r(n - 1) - x = r(n - 1) + r - x \\ge r-x"
},
{
"math_id": 112,
"text": "o_1, \\dots, o_m"
},
{
"math_id": 113,
"text": "v_i(o_1) \\ge \\dots \\ge v_i(o_m)"
},
{
"math_id": 114,
"text": "T"
},
{
"math_id": 115,
"text": "P"
},
{
"math_id": 116,
"text": "\\mathrm{ord}(P)"
},
{
"math_id": 117,
"text": "v_i(o_j)"
},
{
"math_id": 118,
"text": "j"
},
{
"math_id": 119,
"text": "O(nm\\lg m)"
},
{
"math_id": 120,
"text": "\\mathrm{ord}(A)"
},
{
"math_id": 121,
"text": "o_2"
},
{
"math_id": 122,
"text": "A"
},
{
"math_id": 123,
"text": "o_j"
},
{
"math_id": 124,
"text": "NP^{NP}"
},
{
"math_id": 125,
"text": "t = (t_1,\\ldots,t_n)"
},
{
"math_id": 126,
"text": "t_i"
},
{
"math_id": 127,
"text": "\\operatorname{WMMS}_{i}^{t}(C) := ~~~ \\max_{(Z_1,\\ldots,Z_n) \\in \\operatorname{Partitions}(C,n)} ~~~ \\min_{j\\in [n]} ~~~ \\frac{t_i}{t_j}V(Z_j)\n = ~~~t_i\\cdot \\max_{(Z_1,\\ldots,Z_n) \\in \\operatorname{Partitions}(C,n)} ~~~ \\min_{j\\in [n]} ~~~ \\frac{V(Z_j)}{t_j}"
},
{
"math_id": 128,
"text": "\\operatorname{WMMS}_{1}^{t}(\\{1,3,5,6,9\\}) = 4 "
},
{
"math_id": 129,
"text": "24 t_i"
},
{
"math_id": 130,
"text": "\\operatorname{WMMS}_{2}^{t}= 11 "
},
{
"math_id": 131,
"text": "\\operatorname{WMMS}_{3}^{t} = 9 "
},
{
"math_id": 132,
"text": "\\operatorname{WMMS}_{i}^{t} \\equiv \\operatorname{MMS}_i^{1\\text{-out-of-}n} "
},
{
"math_id": 133,
"text": "\\operatorname{WMMS}_{i}^{t}(C) "
},
{
"math_id": 134,
"text": "\\operatorname{WMMS}_{i}^{t} = t_i "
},
{
"math_id": 135,
"text": "\\operatorname{MMS}_{i}^{1\\text{-out-of-}n} = \\epsilon = t_i "
},
{
"math_id": 136,
"text": "\\operatorname{MMS}_{i}^{1\\text{-out-of-}n} = [1-(n-1)\\epsilon] / n = t_i/n "
},
{
"math_id": 137,
"text": "\\operatorname{WMMS}_{i}^{t} "
},
{
"math_id": 138,
"text": "t_i / \\operatorname{WMMS}_{i}^{t} "
},
{
"math_id": 139,
"text": "\\operatorname{WMMS}_{i}^{t}/2 "
},
{
"math_id": 140,
"text": "l,d"
},
{
"math_id": 141,
"text": "\\operatorname{MMS}_{V}^{l\\text{-out-of-}d}(C) := ~~~\\max_{P \\in \\operatorname{Partitions}(C,d)} ~~~ \\min_{Z\\in \\operatorname{Unions}(P,l)} ~~~ V(Z)"
},
{
"math_id": 142,
"text": "l"
},
{
"math_id": 143,
"text": "\\operatorname{MMS}_{V}^{2\\text{-out-of-}3}(\\{1,3,5,6,9\\}) = 15 "
},
{
"math_id": 144,
"text": "\\operatorname{OMMS}_{i}^{t}(C) := ~~~ \\max_{l,d:~l/d \\leq t_i} \\operatorname{MMS}^{l\\text{-out-of-}d}_i(C)"
},
{
"math_id": 145,
"text": "l/d\\leq t_i"
},
{
"math_id": 146,
"text": "\\operatorname{OMMS}_{i}^{t}(C) "
},
{
"math_id": 147,
"text": "l/d \\leq 0.4"
},
{
"math_id": 148,
"text": "l/d \\leq 0.6"
},
{
"math_id": 149,
"text": "l/d = 1/2"
},
{
"math_id": 150,
"text": "\\operatorname{APS}_{V}^{t}(C) := ~~~\\max_{P \\in \\operatorname{AllowedBundleSets}(C,t_i)} ~~~ \\min_{Z\\in P} ~~~ V(Z)"
}
] |
https://en.wikipedia.org/wiki?curid=62435420
|
6243993
|
LU decomposition
|
Type of matrix factorization
In numerical analysis and linear algebra, lower–upper (LU) decomposition or factorization factors a matrix as the product of a lower triangular matrix and an upper triangular matrix (see matrix decomposition). The product sometimes includes a permutation matrix as well. LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using LU decomposition, and it is also a key step when inverting a matrix or computing the determinant of a matrix. The LU decomposition was introduced by the Polish astronomer Tadeusz Banachiewicz in 1938. To quote: "It appears that Gauss and Doolittle applied the method
[of elimination] only to symmetric equations. More recent authors, for example, Aitken, Banachiewicz, Dwyer, and Crout … have emphasized the use of the method, or variations of it, in connection with non-symmetric problems … Banachiewicz … saw the point … that the basic problem is really one of matrix factorization, or “decomposition” as he called it." It is also sometimes referred to as LR decomposition (factors into left and right triangular matrices).
Definitions.
Let "A" be a square matrix. An LU factorization refers to the factorization of "A", with proper row and/or column orderings or permutations, into two factors – a lower triangular matrix "L" and an upper triangular matrix "U":
formula_0
In the lower triangular matrix all elements above the diagonal are zero; in the upper triangular matrix, all the elements below the diagonal are zero. For example, for a 3 × 3 matrix "A", its LU decomposition looks like this:
formula_1
Without a proper ordering or permutations in the matrix, the factorization may fail to materialize. For example, it is easy to verify (by expanding the matrix multiplication) that formula_2. If formula_3, then at least one of formula_4 and formula_5 has to be zero, which implies that either "L" or "U" is singular. This is impossible if "A" is nonsingular (invertible). This is a procedural problem. It can be removed by simply reordering the rows of "A" so that the first element of the permuted matrix is nonzero. The same problem in subsequent factorization steps can be removed the same way; see the basic procedure below.
LU factorization with partial pivoting.
It turns out that a proper permutation in rows (or columns) is sufficient for LU factorization. LU factorization with partial pivoting (LUP) refers often to LU factorization with row permutations only:
formula_6
where "L" and "U" are again lower and upper triangular matrices, and "P" is a permutation matrix, which, when left-multiplied to "A", reorders the rows of "A". It turns out that all square matrices can be factorized in this form, and the factorization is numerically stable in practice. This makes LUP decomposition a useful technique in practice.
LU factorization with full pivoting.
An LU factorization with full pivoting involves both row and column permutations:
formula_7
where "L", "U" and "P" are defined as before, and "Q" is a permutation matrix that reorders the columns of "A".
Lower-diagonal-upper (LDU) decomposition.
A Lower-diagonal-upper (LDU) decomposition is a decomposition of the form
formula_8
where "D" is a diagonal matrix, and "L" and "U" are unitriangular matrices, meaning that all the entries on the diagonals of "L" and "U" are one.
Rectangular matrices.
Above we required that "A" be a square matrix, but these decompositions can all be generalized to rectangular matrices as well. In that case, "L" and "D" are square matrices both of which have the same number of rows as "A", and "U" has exactly the same dimensions as "A". "Upper triangular" should be interpreted as having only zero entries below the main diagonal, which starts at the upper left corner. Similarly, the more precise term for "U" is that it is the row echelon form of the matrix "A".
Example.
We factor the following 2-by-2 matrix:
formula_9
One way to find the LU decomposition of this simple matrix would be to simply solve the linear equations by inspection. Expanding the matrix multiplication gives
formula_10
This system of equations is underdetermined. In this case any two non-zero elements of "L" and "U" matrices are parameters of the solution and can be set arbitrarily to any non-zero value. Therefore, to find the unique LU decomposition, it is necessary to put some restriction on "L" and "U" matrices. For example, we can conveniently require the lower triangular matrix "L" to be a unit triangular matrix, so that all the entries of its main diagonal are set to one. Then the system of equations has the following solution:
formula_11
Substituting these values into the LU decomposition above yields
formula_12
Existence and uniqueness.
Square matrices.
Any square matrix formula_13 admits "LUP" and "PLU" factorizations. If formula_13 is invertible, then it admits an "LU" (or "LDU") factorization if and only if all its leading principal minors are nonzero (for example formula_14
does not admit an "LU" or "LDU" factorization). If formula_13 is a singular matrix of rank formula_15, then it admits an "LU" factorization if the first formula_15 leading principal minors are nonzero, although the converse is not true.
If a square, invertible matrix has an "LDU" (factorization with all diagonal entries of "L" and "U" equal to 1), then the factorization is unique. In that case, the "LU" factorization is also unique if we require that the diagonal of formula_16 (or formula_17) consists of ones.
In general, any square matrix formula_18 could have one of the following:
1. a unique "LU" factorization;
2. infinitely many "LU" factorizations;
3. no "LU" factorization.
In Case 3, one can approximate an LU factorization by changing a diagonal entry formula_19 to formula_20 to avoid a zero leading principal minor.
Symmetric positive-definite matrices.
If "A" is a symmetric (or Hermitian, if "A" is complex) positive-definite matrix, we can arrange matters so that "U" is the conjugate transpose of "L". That is, we can write "A" as
formula_21
This decomposition is called the Cholesky decomposition. If formula_22 is positive definite, then the Cholesky decomposition exists and is unique. Furthermore, computing the Cholesky decomposition is more efficient and numerically more stable than computing some other LU decompositions.
General matrices.
For a (not necessarily invertible) matrix over any field, the exact necessary and sufficient conditions under which it has an LU factorization are known. The conditions are expressed in terms of the ranks of certain submatrices. The Gaussian elimination algorithm for obtaining LU decomposition has also been extended to this most general case.
Algorithms.
Closed formula.
When an LDU factorization exists and is unique, there is a closed (explicit) formula for the elements of "L", "D", and "U" in terms of ratios of determinants of certain submatrices of the original matrix "A". In particular, formula_23, and for formula_24, formula_25 is the ratio of the determinant of the formula_26-th principal submatrix to the determinant of the formula_27-th principal submatrix. Computation of the determinants is computationally expensive, so this explicit formula is not used in practice.
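As a numerical illustration of this formula (not a practical algorithm, for the reason just stated), the diagonal factor "D" can be recovered from leading principal minors; the matrix below is an arbitrary example whose leading principal minors are all nonzero, and NumPy is assumed to be available.
# Numerical check of the determinant-ratio formula for the diagonal factor D
# (illustration only; not how LU is computed in practice).
import numpy as np

A = np.array([[4.0, 3.0, 1.0],
              [6.0, 3.0, 2.0],
              [2.0, 5.0, 7.0]])

n = A.shape[0]
minors = [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]   # leading principal minors
D = [minors[0]] + [minors[k] / minors[k - 1] for k in range(1, n)]
print(D)   # diagonal entries of D in A = L D U, provided the LDU factorization exists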
Using Gaussian elimination.
The following algorithm is essentially a modified form of Gaussian elimination. Computing an LU decomposition using this algorithm requires formula_28 floating-point operations, ignoring lower-order terms. Partial pivoting adds only a quadratic term; this is not the case for full pivoting.
Generalized explanation.
Notation.
Given an "N" × "N" matrix formula_29, define formula_30 as the original, unmodified version of the matrix formula_22. The parenthetical superscript (e.g., formula_31) of the matrix formula_22 is the version of the matrix. The matrix formula_32 is the formula_22 matrix in which the elements below the main diagonal have already been eliminated to 0 through Gaussian elimination for the first formula_33 columns.
Below is a matrix to observe to help us remember the notation (where each formula_34 represents any real number in the matrix):
formula_35
Procedure.
During this process, we gradually modify the matrix formula_22 using row operations until it becomes the matrix formula_36 in which all the elements below the main diagonal are equal to zero. During this, we will simultaneously create two separate matrices formula_37 and formula_38, such that formula_39.
We define the final permutation matrix formula_37 as the identity matrix which has all the same rows swapped in the same order as the formula_22 matrix while it transforms into the matrix formula_40. For our matrix formula_41, we may start by swapping rows to provide the desired conditions for the n-th column. For example, we might swap rows to perform partial pivoting, or we might do it to set the pivot element formula_42 on the main diagonal to a non-zero number so that we can complete the Gaussian elimination.
For our matrix formula_41, we want to set every element below formula_43 to zero (where formula_43 is the element in the n-th column of the main diagonal). We will denote each element below formula_43 as formula_44 (where formula_45). To set formula_44 to zero, we set formula_46 for each row formula_47. For this operation, formula_48. Once we have performed the row operations for the first formula_49 columns, we have obtained an upper triangular matrix formula_50 which is denoted by formula_40.
We can also create the lower triangular matrix denoted as formula_51, by directly inputting the previously calculated values of formula_52 via the formula below.
formula_53
Example.
If we are given the matrix
formula_54
we will choose to implement partial pivoting and thus swap the first and second row so that our matrix formula_22 and the first iteration of our formula_37 matrix respectively become
formula_55
Once we have swapped the rows, we can eliminate the elements below the main diagonal on the first column by performing
formula_56
such that,
formula_57
Once these rows have been subtracted, we have derived from formula_58 the matrix
formula_59
Because we are implementing partial pivoting, we swap the second and third rows of our derived matrix and the current version of our formula_37 matrix respectively to obtain
formula_60
Now, we eliminate the elements below the main diagonal on the second column by performing formula_61 such that formula_62. Because no non-zero elements exist below the main diagonal in our current iteration of formula_22 after this row subtraction, this row subtraction derives our final formula_22 matrix (denoted as formula_36) and final formula_37 matrix:
formula_63
After also switching the corresponding rows, we obtain our final formula_38 matrix:
formula_64
Now these matrices have a relation such that formula_65.
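The factors obtained in this example can be checked numerically; a short sketch, assuming NumPy is available:
# Quick numerical check of the worked example above: P A = L U.
import numpy as np

A = np.array([[0, 5, 22/3],
              [4, 2, 1],
              [2, 7, 9]])
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
L = np.array([[1,   0,   0],
              [0.5, 1,   0],
              [0,   5/6, 1]])
U = np.array([[4, 2, 1],
              [0, 6, 8.5],
              [0, 0, 0.25]])

print(np.allclose(P @ A, L @ U))   # True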
Relations when no rows are swapped.
If we did not swap rows at all during this process, we can perform the row operations simultaneously for each column formula_33 by setting formula_66 where formula_67 is the "N" × "N" identity matrix with its "n"-th column replaced by the transposed vector formula_68 In other words, the lower triangular matrix
formula_69
Performing all the row operations for the first formula_49 columns using the formula_70 formula is equivalent to finding the decomposition
formula_71
Denote formula_72 so that formula_73.
Now let's compute the sequence of formula_74. We know that formula_75 has the following formula.
formula_76
If there are two lower triangular matrices with 1s on the main diagonal, and neither has a non-zero entry below the main diagonal in the same column as the other, then their product contains all the non-zero entries of both matrices, each in its original location. For example:
formula_77
Finally, multiply formula_75 together and generate the fused matrix denoted as formula_51 (as previously mentioned). Using the matrix formula_51, we obtain formula_78
It is clear that in order for this algorithm to work, one needs to have formula_79 at each step (see the definition of formula_52). If this assumption fails at some point, one needs to interchange "n"-th row with another row below it before continuing. This is why an LU decomposition in general looks like formula_80.
LU Crout decomposition.
Note that the decomposition obtained through this procedure is a "Doolittle decomposition": the main diagonal of "L" is composed solely of "1"s. If one would proceed by removing elements "above" the main diagonal by adding multiples of the "columns" (instead of removing elements "below" the diagonal by adding multiples of the "rows"), we would obtain a "Crout decomposition", where the main diagonal of "U" is of "1"s.
Another (equivalent) way of producing a Crout decomposition of a given matrix "A" is to obtain a Doolittle decomposition of the transpose of "A". Indeed, if formula_81 is the LU-decomposition obtained through the algorithm presented in this section, then by taking formula_82 and formula_83, we have that formula_84 is a Crout decomposition.
Through recursion.
Cormen et al. describe a recursive algorithm for LUP decomposition.
Given a matrix "A", let "P1" be a permutation matrix such that
formula_85,
where formula_86, if there is a nonzero entry in the first column of "A"; or take "P1" as the identity matrix otherwise. Now let formula_87, if formula_86; or formula_88 otherwise. We have
formula_89
Now we can recursively find an LUP decomposition formula_90. Let formula_91. Therefore
formula_92
which is an LUP decomposition of "A".
Randomized algorithm.
It is possible to find a low rank approximation to an LU decomposition using a randomized algorithm. Given an input matrix formula_93 and a desired low rank formula_94, the randomized LU returns permutation matrices formula_95 and lower/upper trapezoidal matrices formula_96 of size formula_97 and formula_98 respectively, such that with high probability formula_99, where formula_100 is a constant that depends on the parameters of the algorithm and formula_101 is the formula_102-th singular value of the input matrix formula_93.
Theoretical complexity.
If two matrices of order "n" can be multiplied in time "M"("n"), where "M"("n") ≥ "n"^"a" for some "a" > 2, then an LU decomposition can be computed in time O("M"("n")). This means, for example, that an O("n"^2.376) algorithm exists based on the Coppersmith–Winograd algorithm.
Sparse-matrix decomposition.
Special algorithms have been developed for factorizing large sparse matrices. These algorithms attempt to find sparse factors "L" and "U". Ideally, the cost of computation is determined by the number of nonzero entries, rather than by the size of the matrix.
These algorithms use the freedom to exchange rows and columns to minimize fill-in (entries that change from an initial zero to a non-zero value during the execution of an algorithm).
General treatment of orderings that minimize fill-in can be addressed using graph theory.
Applications.
Solving linear equations.
Given a system of linear equations in matrix form
formula_103
we want to solve the equation for x, given "A" and b. Suppose we have already obtained the LUP decomposition of "A" such that formula_104, so formula_105.
In this case the solution is done in two logical steps:
1. First, solve the equation formula_106 for y.
2. Second, solve the equation formula_107 for x.
In both cases we are dealing with triangular matrices ("L" and "U"), which can be solved directly by forward and backward substitution without using the Gaussian elimination process (however we do need this process or equivalent to compute the "LU" decomposition itself).
The above procedure can be repeatedly applied to solve the equation multiple times for different b. In this case it is faster (and more convenient) to do an LU decomposition of the matrix "A" once and then solve the triangular matrices for the different b, rather than using Gaussian elimination each time. The matrices "L" and "U" could be thought to have "encoded" the Gaussian elimination process.
The cost of solving a system of linear equations is approximately formula_108 floating-point operations if the matrix formula_93 has size formula_109. This makes it twice as fast as algorithms based on QR decomposition, which costs about formula_110 floating-point operations when Householder reflections are used. For this reason, LU decomposition is usually preferred.
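A short sketch of this reuse pattern, assuming SciPy is available: the factorization is computed once, and each additional right-hand side costs only two triangular solves.
# Reusing one LU factorization for several right-hand sides, as described above
# (sketch assuming SciPy is available).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)             # factor once: PA = LU, stored compactly

for b in (np.array([1.0, 2.0]), np.array([0.0, 1.0])):
    x = lu_solve((lu, piv), b)     # two triangular solves per right-hand side
    print(x, np.allclose(A @ x, b))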
Inverting a matrix.
When solving systems of equations, "b" is usually treated as a vector with a length equal to the height of matrix "A". In matrix inversion however, instead of vector "b", we have matrix "B", where "B" is an "n"-by-"p" matrix, so that we are trying to find a matrix "X" (also an "n"-by-"p" matrix):
formula_111
We can use the same algorithm presented earlier to solve for each column of matrix "X". Now suppose that "B" is the identity matrix of size "n". It would follow that the result "X" must be the inverse of "A".
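A minimal sketch of this idea, assuming SciPy is available: the columns of the identity matrix serve as the right-hand sides, and each solve yields one column of the inverse.
# Inverting A by solving A X = I with one LU factorization
# (sketch assuming SciPy is available).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
X = lu_solve(lu_factor(A), np.eye(2))   # each column of I gives a column of A^{-1}
print(np.allclose(A @ X, np.eye(2)))    # True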
Computing the determinant.
Given the LUP decomposition formula_112 of a square matrix "A", the determinant of "A" can be computed straightforwardly as
formula_113
The second equation follows from the fact that the determinant of a triangular matrix is simply the product of its diagonal entries, and that the determinant of a permutation matrix is equal to (−1)"S" where "S" is the number of row exchanges in the decomposition.
In the case of LU decomposition with full pivoting, formula_114 also equals the right-hand side of the above equation, if we let "S" be the total number of row and column exchanges.
The same method readily applies to LU decomposition by setting "P" equal to the identity matrix.
Code examples.
C code example.
#include <math.h>   /* for fabs() */

/* INPUT: A - array of pointers to rows of a square matrix having dimension N
 * Tol - small tolerance number to detect failure when the matrix is near degenerate
 * OUTPUT: Matrix A is changed, it contains a copy of both matrices L-E and U as A=(L-E)+U such that P*A=L*U.
 * The permutation matrix is not stored as a matrix, but in an integer vector P of size N+1
 * containing column indexes where the permutation matrix has "1". The last element P[N]=S+N,
 * where S is the number of row exchanges needed for determinant computation, det(P)=(-1)^S
 */
int LUPDecompose(double **A, int N, double Tol, int *P) {

    int i, j, k, imax;
    double maxA, *ptr, absA;

    for (i = 0; i <= N; i++)
        P[i] = i; //Unit permutation matrix, P[N] initialized with N

    for (i = 0; i < N; i++) {
        maxA = 0.0;
        imax = i;

        for (k = i; k < N; k++)
            if ((absA = fabs(A[k][i])) > maxA) {
                maxA = absA;
                imax = k;
            }

        if (maxA < Tol) return 0; //failure, matrix is degenerate

        if (imax != i) {
            //pivoting P
            j = P[i];
            P[i] = P[imax];
            P[imax] = j;

            //pivoting rows of A
            ptr = A[i];
            A[i] = A[imax];
            A[imax] = ptr;

            //counting pivots starting from N (for determinant)
            P[N]++;
        }

        for (j = i + 1; j < N; j++) {
            A[j][i] /= A[i][i];

            for (k = i + 1; k < N; k++)
                A[j][k] -= A[j][i] * A[i][k];
        }
    }

    return 1; //decomposition done
}

/* INPUT: A,P filled in LUPDecompose; b - rhs vector; N - dimension
 * OUTPUT: x - solution vector of A*x=b
 */
void LUPSolve(double **A, int *P, double *b, int N, double *x) {

    for (int i = 0; i < N; i++) {
        x[i] = b[P[i]];

        for (int k = 0; k < i; k++)
            x[i] -= A[i][k] * x[k];
    }

    for (int i = N - 1; i >= 0; i--) {
        for (int k = i + 1; k < N; k++)
            x[i] -= A[i][k] * x[k];

        x[i] /= A[i][i];
    }
}

/* INPUT: A,P filled in LUPDecompose; N - dimension
 * OUTPUT: IA is the inverse of the initial matrix
 */
void LUPInvert(double **A, int *P, int N, double **IA) {

    for (int j = 0; j < N; j++) {
        for (int i = 0; i < N; i++) {
            IA[i][j] = P[i] == j ? 1.0 : 0.0;

            for (int k = 0; k < i; k++)
                IA[i][j] -= A[i][k] * IA[k][j];
        }

        for (int i = N - 1; i >= 0; i--) {
            for (int k = i + 1; k < N; k++)
                IA[i][j] -= A[i][k] * IA[k][j];

            IA[i][j] /= A[i][i];
        }
    }
}

/* INPUT: A,P filled in LUPDecompose; N - dimension.
 * OUTPUT: Function returns the determinant of the initial matrix
 */
double LUPDeterminant(double **A, int *P, int N) {

    double det = A[0][0];

    for (int i = 1; i < N; i++)
        det *= A[i][i];

    return (P[N] - N) % 2 == 0 ? det : -det;
}
C# code example.
public class SystemOfLinearEquations
{
    public double[] SolveUsingLU(double[,] matrix, double[] rightPart, int n)
    {
        // decomposition of matrix
        double[,] lu = new double[n, n];
        double sum = 0;
        for (int i = 0; i < n; i++)
        {
            for (int j = i; j < n; j++)
            {
                sum = 0;
                for (int k = 0; k < i; k++)
                    sum += lu[i, k] * lu[k, j];
                lu[i, j] = matrix[i, j] - sum;
            }
            for (int j = i + 1; j < n; j++)
            {
                sum = 0;
                for (int k = 0; k < i; k++)
                    sum += lu[j, k] * lu[k, i];
                lu[j, i] = (1 / lu[i, i]) * (matrix[j, i] - sum);
            }
        }
        // lu = L+U-I
        // find solution of Ly = b
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
        {
            sum = 0;
            for (int k = 0; k < i; k++)
                sum += lu[i, k] * y[k];
            y[i] = rightPart[i] - sum;
        }
        // find solution of Ux = y
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--)
        {
            sum = 0;
            for (int k = i + 1; k < n; k++)
                sum += lu[i, k] * x[k];
            x[i] = (1 / lu[i, i]) * (y[i] - sum);
        }
        return x;
    }
}
MATLAB code example.
function LU = LUDecompDoolittle(A)
n = length(A);
LU = A;
% decomposition of matrix, Doolittle's Method
for i = 1:1:n
for j = 1:(i - 1)
LU(i,j) = (LU(i,j) - LU(i,1:(j - 1))*LU(1:(j - 1),j)) / LU(j,j);
end
j = i:n;
LU(i,j) = LU(i,j) - LU(i,1:(i - 1))*LU(1:(i - 1),j);
end
%LU = L+U-I
end
function x = SolveLinearSystem(LU, B)
n = length(LU);
y = zeros(size(B));
% find solution of Ly = B
for i = 1:n
y(i,:) = B(i,:) - LU(i,1:i)*y(1:i,:);
end
% find solution of Ux = y
x = zeros(size(B));
for i = n:(-1):1
x(i,:) = (y(i,:) - LU(i,(i + 1):n)*x((i + 1):n,:))/LU(i, i);
end
end
A = [ 4 3 3; 6 3 3; 3 4 3 ]
LU = LUDecompDoolittle(A)
B = [ 1 2 3; 4 5 6; 7 8 9; 10 11 12 ]'
x = SolveLinearSystem(LU, B)
A * x
Notes.
<templatestyles src="Reflist/styles.css" />
External links.
References
Computer code
Online resources
|
[
{
"math_id": 0,
"text": " A = LU. "
},
{
"math_id": 1,
"text": "\n \\begin{bmatrix}\n a_{11} & a_{12} & a_{13} \\\\\n a_{21} & a_{22} & a_{23} \\\\\n a_{31} & a_{32} & a_{33}\n \\end{bmatrix} =\n \\begin{bmatrix}\n \\ell_{11} & 0 & 0 \\\\\n \\ell_{21} & \\ell_{22} & 0 \\\\\n \\ell_{31} & \\ell_{32} & \\ell_{33}\n \\end{bmatrix}\n \\begin{bmatrix}\n u_{11} & u_{12} & u_{13} \\\\\n 0 & u_{22} & u_{23} \\\\\n 0 & 0 & u_{33}\n \\end{bmatrix}.\n"
},
{
"math_id": 2,
"text": "a_{11} = \\ell_{11} u_{11}"
},
{
"math_id": 3,
"text": "a_{11} = 0"
},
{
"math_id": 4,
"text": "\\ell_{11}"
},
{
"math_id": 5,
"text": "u_{11}"
},
{
"math_id": 6,
"text": " PA = LU, "
},
{
"math_id": 7,
"text": " PAQ = LU, "
},
{
"math_id": 8,
"text": " A = LDU, "
},
{
"math_id": 9,
"text": "\n \\begin{bmatrix}\n 4 & 3 \\\\\n 6 & 3\n \\end{bmatrix} =\n \\begin{bmatrix}\n \\ell_{11} & 0 \\\\\n \\ell_{21} & \\ell_{22}\n \\end{bmatrix}\n \\begin{bmatrix}\n u_{11} & u_{12} \\\\\n 0 & u_{22}\n \\end{bmatrix}.\n"
},
{
"math_id": 10,
"text": "\\begin{align}\n \\ell_{11} \\cdot u_{11} + 0 \\cdot 0 &= 4 \\\\\n \\ell_{11} \\cdot u_{12} + 0 \\cdot u_{22} &= 3 \\\\\n \\ell_{21} \\cdot u_{11} + \\ell_{22} \\cdot 0 &= 6 \\\\\n \\ell_{21} \\cdot u_{12} + \\ell_{22} \\cdot u_{22} &= 3.\n\\end{align}"
},
{
"math_id": 11,
"text": "\\begin{align}\n\\ell_{11} = \\ell_{22} &= 1 \\\\\n \\ell_{21} &= 1.5 \\\\\n u_{11} &= 4 \\\\\n u_{12} &= 3 \\\\\n u_{22} &= -1.5\n\\end{align}"
},
{
"math_id": 12,
"text": "\n \\begin{bmatrix}\n 4 & 3 \\\\\n 6 & 3\n \\end{bmatrix} =\n \\begin{bmatrix}\n 1 & 0 \\\\\n 1.5 & 1\n \\end{bmatrix}\n \\begin{bmatrix}\n 4 & 3 \\\\\n 0 & -1.5\n \\end{bmatrix}.\n"
},
{
"math_id": 13,
"text": " A "
},
{
"math_id": 14,
"text": "\n \\begin{bmatrix}\n 0 & 1 \\\\\n 1 & 0\n \\end{bmatrix}\n"
},
{
"math_id": 15,
"text": " k "
},
{
"math_id": 16,
"text": " L "
},
{
"math_id": 17,
"text": " U "
},
{
"math_id": 18,
"text": "A_{n \\times n}"
},
{
"math_id": 19,
"text": " a_{jj} "
},
{
"math_id": 20,
"text": " a_{jj} \\pm \\varepsilon"
},
{
"math_id": 21,
"text": " A = LL^*. \\, "
},
{
"math_id": 22,
"text": "A"
},
{
"math_id": 23,
"text": "D_1 = A_{1,1}"
},
{
"math_id": 24,
"text": "i = 2, \\ldots, n"
},
{
"math_id": 25,
"text": "D_i"
},
{
"math_id": 26,
"text": "i"
},
{
"math_id": 27,
"text": "(i - 1)"
},
{
"math_id": 28,
"text": "\\tfrac{2}{3} n^3"
},
{
"math_id": 29,
"text": "A = (a_{i,j})_{1 \\leq i,j \\leq N}"
},
{
"math_id": 30,
"text": " A^{(0)}"
},
{
"math_id": 31,
"text": "(0)"
},
{
"math_id": 32,
"text": "A^{(n)}"
},
{
"math_id": 33,
"text": "n"
},
{
"math_id": 34,
"text": "*"
},
{
"math_id": 35,
"text": "A^{(n-1)} = \\begin{pmatrix}\n * & & & \\cdots & & & * \\\\\n 0 & \\ddots & & & & \\\\\n & \\ddots & * & & & \\\\\n \\vdots & & 0 & a_{n,n}^{(n-1)} & & & \\vdots \\\\\n & & \\vdots & a_{i,n}^{(n-1)} & * \\\\\n & & & \\vdots & \\vdots & \\ddots \\\\\n 0 & \\cdots & 0 & a_{i,n}^{(n-1)} & * & \\cdots & *\n\\end{pmatrix}\n"
},
{
"math_id": 36,
"text": "U"
},
{
"math_id": 37,
"text": "P"
},
{
"math_id": 38,
"text": "L"
},
{
"math_id": 39,
"text": "PA = LU"
},
{
"math_id": 40,
"text": " U"
},
{
"math_id": 41,
"text": "A^{(n-1)}"
},
{
"math_id": 42,
"text": "a_{n,n}"
},
{
"math_id": 43,
"text": "a_{n,n}^{(n-1)}"
},
{
"math_id": 44,
"text": "a_{i,n}^{(n-1)}"
},
{
"math_id": 45,
"text": "i = n+1, \\dotsc, N"
},
{
"math_id": 46,
"text": "row_i=row_i-(\\ell_{i,n})\\cdot row_n"
},
{
"math_id": 47,
"text": "i"
},
{
"math_id": 48,
"text": "\\ell_{i,n} := {a_{i,n}^{(n-1)}}/{a_{n,n}^{(n-1)}}"
},
{
"math_id": 49,
"text": " N-1"
},
{
"math_id": 50,
"text": " A^{(N-1)}"
},
{
"math_id": 51,
"text": "L"
},
{
"math_id": 52,
"text": "\\ell_{i,n}"
},
{
"math_id": 53,
"text": "L = \\begin{pmatrix}\n 1 & 0 & \\cdots & 0 \\\\\n\\ell_{2,1} & \\ddots & \\ddots & \\vdots \\\\\n \\vdots & \\ddots & \\ddots & 0 \\\\\n\\ell_{N,1} & \\cdots & \\ell_{N,N-1} & 1 \n\\end{pmatrix}\n"
},
{
"math_id": 54,
"text": "A = \\begin{pmatrix}\n0 & 5 & \\frac{22}{3} \\\\\n4 & 2 & 1 \\\\\n2 & 7 & 9 \\\\\n\\end{pmatrix},"
},
{
"math_id": 55,
"text": "A^{(0)}=\\begin{pmatrix}\n4 & 2 & 1 \\\\\n0 & 5 & \\frac{22}{3} \\\\\n2 & 7 & 9 \\\\\n\\end{pmatrix},\\quad\nP^{(0)}=\\begin{pmatrix}\n0 & 1 & 0 \\\\\n1 & 0 & 0 \\\\\n0 & 0 & 1 \\\\\n\\end{pmatrix}."
},
{
"math_id": 56,
"text": "\\begin{alignat}{0} \nrow_2=row_2-(\\ell_{2,1})\\cdot row_1 \\\\\nrow_3=row_3-(\\ell_{3,1})\\cdot row_1 \n\\end{alignat}"
},
{
"math_id": 57,
"text": "\\begin{alignat}{0} \n\\ell_{2,1}= \\frac{0}{4}=0 \\\\\n\\ell_{3,1}= \\frac{2}{4}=0.5 \n\\end{alignat}"
},
{
"math_id": 58,
"text": "A^{(1)}"
},
{
"math_id": 59,
"text": "A^{(1)}=\n\\begin{pmatrix}\n4 & 2 & 1 \\\\\n0 & 5 & \\frac{22}{3} \\\\\n0 & 6 & 8.5 \\\\\n\\end{pmatrix}."
},
{
"math_id": 60,
"text": "A^{(1)}=\\begin{pmatrix}\n4 & 2 & 1 \\\\\n0 & 6 & 8.5 \\\\\n0 & 5 & \\frac{22}{3} \\\\\n\\end{pmatrix}, \\quad\nP^{(1)}=\\begin{pmatrix}\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n1 & 0 & 0 \\\\\n\\end{pmatrix}."
},
{
"math_id": 61,
"text": "row_3=row_3-(\\ell_{3,2})\\cdot row_2"
},
{
"math_id": 62,
"text": "\\ell_{3,2}= \\frac{5}{6} "
},
{
"math_id": 63,
"text": "A^{(2)}=A^{(N-1)}=U=\\begin{pmatrix}\n4 & 2 & 1 \\\\\n0 & 6 & 8.5 \\\\\n0 & 0 & 0.25 \\\\\n\\end{pmatrix}, \\quad\nP=\\begin{pmatrix}\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n1 & 0 & 0 \\\\\n\\end{pmatrix}."
},
{
"math_id": 64,
"text": "L = \\begin{pmatrix}\n1 & 0 & 0 \\\\\n\\ell_{3,1} & 1 & 0 \\\\\n\\ell_{2,1} & \\ell_{3,2} & 1 \\\\\n\\end{pmatrix}\n= \\begin{pmatrix}\n1 & 0 & 0 \\\\\n0.5 & 1 & 0 \\\\\n0 & \\frac{5}{6} & 1 \\\\\n\\end{pmatrix}"
},
{
"math_id": 65,
"text": "PA=LU"
},
{
"math_id": 66,
"text": " A^{(n)} := L^{-1}_n A^{(n-1)},"
},
{
"math_id": 67,
"text": "L^{-1}_n"
},
{
"math_id": 68,
"text": "\\begin{pmatrix}0 & \\dotsm & 0 & 1 & \n -\\ell_{n+1,n} & \\dotsm & -\\ell_{N,n} \\end{pmatrix}^\\textsf{T}."
},
{
"math_id": 69,
"text": "L^{-1}_n =\n \\begin{pmatrix}\n 1 & & & & & \\\\\n & \\ddots & & & & \\\\\n & & 1 & & & \\\\\n & & -\\ell_{n+1,n} & & & \\\\\n & & \\vdots & & \\ddots & \\\\\n & & -\\ell_{N,n} & & & 1\n \\end{pmatrix}.\n"
},
{
"math_id": 70,
"text": " A^{(n)} := L^{-1} _n A^{(n-1)}"
},
{
"math_id": 71,
"text": "A = L_1 L_1^{-1} A^{(0)} = L_1 A^{(1)}\n = L_1 L_2 L_2^{-1} A^{(1)}\n = L_1 L_2 A^{(2)}\n = \\dotsm\n = L_1 \\dotsm L_{N-1} A^{(N-1)}.\n"
},
{
"math_id": 72,
"text": "L = L_1 \\dotsm L_{N-1} "
},
{
"math_id": 73,
"text": "A=LA^{(N-1)}=LU"
},
{
"math_id": 74,
"text": "L_1 \\dotsm L_{N-1}"
},
{
"math_id": 75,
"text": "L_{i} "
},
{
"math_id": 76,
"text": "L_n =\n \\begin{pmatrix}\n 1 & & & & & \\\\\n & \\ddots & & & & \\\\\n & & 1 & & & \\\\\n & & \\ell_{n+1,n} & & & \\\\\n & & \\vdots & & \\ddots & \\\\\n & & \\ell_{N,n} & & & 1\n \\end{pmatrix}\n"
},
{
"math_id": 77,
"text": "\n\\left(\\begin{array}{ccccc}\n1 & 0 & 0 & 0 & 0 \\\\\n77 & 1 & 0 & 0 & 0 \\\\\n12 & 0 & 1 & 0 & 0 \\\\\n63 & 0 & 0 & 1 & 0 \\\\\n7 & 0 & 0 & 0 & 1\n\\end{array}\\right)\\left(\\begin{array}{ccccc}\n1 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 \\\\\n0 & 22 & 1 & 0 & 0 \\\\\n0 & 33 & 0 & 1 & 0 \\\\\n0 & 44 & 0 & 0 & 1\n\\end{array}\\right)=\\left(\\begin{array}{ccccc}\n1 & 0 & 0 & 0 & 0 \\\\\n77 & 1 & 0 & 0 & 0 \\\\\n12 & 22 & 1 & 0 & 0 \\\\\n63 & 33 & 0 & 1 & 0 \\\\\n7 & 44 & 0 & 0 & 1\n\\end{array}\\right)\n"
},
{
"math_id": 78,
"text": "A = LU."
},
{
"math_id": 79,
"text": "a_{n,n}^{(n-1)} \\neq 0"
},
{
"math_id": 80,
"text": "P^{-1}A = L U "
},
{
"math_id": 81,
"text": " A^\\textsf{T} = L_0 U_0 "
},
{
"math_id": 82,
"text": "L = U_0^\\textsf{T}"
},
{
"math_id": 83,
"text": "U = L_0^\\textsf{T}"
},
{
"math_id": 84,
"text": "A = LU"
},
{
"math_id": 85,
"text": "\n P_1 A = \\left( \\begin{array}{c|ccc}\n a & & w^\\textsf{T} & \\\\ \\hline\n & & & \\\\\n v & & A' & \\\\\n & & &\n \\end{array} \\right)\n"
},
{
"math_id": 86,
"text": "a \\neq 0"
},
{
"math_id": 87,
"text": "c = 1/a"
},
{
"math_id": 88,
"text": "c = 0"
},
{
"math_id": 89,
"text": "\n P_1 A = \\left( \\begin{array}{c|ccc}\n 1 & & 0 & \\\\ \\hline\n & & & \\\\\n cv & & I_{n-1} & \\\\\n & & &\n \\end{array} \\right)\n \\left( \\begin{array}{c|c}\n a & w^\\textsf{T} \\\\ \\hline\n & \\\\\n 0 & A'-cvw^\\textsf{T} \\\\\n &\n \\end{array} \\right) .\n"
},
{
"math_id": 90,
"text": "P' \\left(A' - cvw^\\textsf{T}\\right) = L' U'"
},
{
"math_id": 91,
"text": "v' = P'v"
},
{
"math_id": 92,
"text": "\n \\left( \\begin{array}{c|ccc}\n 1 & & 0 & \\\\ \\hline\n & & & \\\\\n 0 & & P' & \\\\\n & & &\n \\end{array} \\right) P_1 A\n = \\left( \\begin{array}{c|ccc}\n 1 & & 0 & \\\\ \\hline\n & & & \\\\\n cv' & & L' & \\\\\n & & &\n \\end{array} \\right)\n \\left( \\begin{array}{c|ccc}\n a & & w^\\textsf{T} & \\\\ \\hline\n & & & \\\\\n 0 & & U' & \\\\\n & & &\n \\end{array} \\right) ,\n"
},
{
"math_id": 93,
"text": "A"
},
{
"math_id": 94,
"text": "k"
},
{
"math_id": 95,
"text": "P, Q"
},
{
"math_id": 96,
"text": "L, U"
},
{
"math_id": 97,
"text": "m \\times k "
},
{
"math_id": 98,
"text": "k \\times n"
},
{
"math_id": 99,
"text": "\\left\\| PAQ-LU \\right\\|_2 \\le C\\sigma_{k+1}"
},
{
"math_id": 100,
"text": "C"
},
{
"math_id": 101,
"text": "\\sigma_{k+1}"
},
{
"math_id": 102,
"text": "(k+1)"
},
{
"math_id": 103,
"text": "A\\mathbf x = \\mathbf b,"
},
{
"math_id": 104,
"text": "PA = LU"
},
{
"math_id": 105,
"text": "LU \\mathbf x = P \\mathbf b"
},
{
"math_id": 106,
"text": "L \\mathbf y = P \\mathbf b"
},
{
"math_id": 107,
"text": "U \\mathbf x = \\mathbf y"
},
{
"math_id": 108,
"text": "\\frac{2}{3} n^3"
},
{
"math_id": 109,
"text": "n"
},
{
"math_id": 110,
"text": "\\frac{4}{3} n^3"
},
{
"math_id": 111,
"text": "AX = LUX = B."
},
{
"math_id": 112,
"text": "A = P^{-1} LU"
},
{
"math_id": 113,
"text": "\\det(A) = \\det\\left(P^{-1}\\right) \\det(L) \\det(U) = (-1)^S \\left( \\prod_{i=1}^n l_{ii} \\right) \\left( \\prod_{i=1}^n u_{ii} \\right) ."
},
{
"math_id": 114,
"text": "\\det(A)"
}
] |
https://en.wikipedia.org/wiki?curid=6243993
|
624406
|
Amagat's law
|
Gas law describing volume of a gas mixture
Amagat's law or the law of partial volumes describes the behaviour and properties of mixtures of ideal (as well as some cases of non-ideal) gases. It is of use in chemistry and thermodynamics. It is named after Emile Amagat.
Overview.
Amagat's law states that the extensive volume "V" = "Nv" of a gas mixture is equal to the sum of volumes "Vi" of the "K" component gases, if the temperature "T" and the pressure "p" remain the same:
formula_0
This is the experimental expression of volume as an extensive quantity.
According to Amagat's law of partial volume, the total volume of a non-reacting mixture of gases at constant temperature and pressure should be equal to the sum of the individual partial volumes of the constituent gases. So if formula_1 are considered to be the partial volumes of components in the gaseous mixture, then the total volume formula_2 would be represented as
formula_3
Both Amagat's and Dalton's laws predict the properties of gas mixtures. Their predictions are the same for ideal gases. However, for real (non-ideal) gases, the results differ. Dalton's law of partial pressures assumes that the gases in the mixture are non-interacting (with each other) and each gas independently applies its own "pressure", the sum of which is the total pressure. Amagat's law assumes that the "volumes" of the component gases (again at the same temperature and pressure) are additive; the interactions of the different gases are the same as the average interactions of the components.
The interactions can be interpreted in terms of a second virial coefficient "B"("T") for the mixture. For two components, the second virial coefficient for the mixture can be expressed as
formula_4
where the subscripts refer to components 1 and 2, the "Xi" are the mole fractions, and the "Bi" are the second virial coefficients. The cross term "B"1,2 of the mixture is given by
formula_5 for Dalton's law
and
formula_6 for Amagat's law.
When the "volumes" of each component gas (same temperature and pressure) are very similar, then Amagat's law becomes mathematically equivalent to Vegard's law for solid mixtures.
Ideal gas mixture.
When Amagat's law is valid "and" the gas mixture is made of ideal gases,
formula_7
where:
formula_8 is the pressure of the gas mixture,
formula_9 is the volume of the "i"-th component of the gas mixture,
formula_10 is the total volume of the gas mixture,
formula_11 is the amount of substance of "i"-th component of the gas mixture (in mol),
formula_12 is the total amount of substance of gas mixture (in mol),
formula_13 is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant,
formula_14 is the absolute temperature of the gas mixture (in K),
formula_15 is the mole fraction of the "i"-th component of the gas mixture.
It follows that the mole fraction and the volume fraction are the same. This also holds for other equations of state.
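A small numerical illustration of this identity for an ideal two-component mixture (the numbers are arbitrary):
# Illustration of V_i / V = x_i for an ideal two-component mixture.
R = 8.314          # J/(mol K)
T = 300.0          # K
p = 101325.0       # Pa
n = [2.0, 3.0]     # moles of each component

V_i = [ni * R * T / p for ni in n]    # partial volumes at the mixture T and p
V = sum(V_i)
x = [ni / sum(n) for ni in n]         # mole fractions

print([Vi / V for Vi in V_i])  # equals the mole fractions x
print(x)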
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " N\\, v(T, p) = \\sum_{i=1}^K N_i\\, v_i(T, p)."
},
{
"math_id": 1,
"text": "V_1, V_2, \\dots, V_n"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "V = V_1 + V_2 + V_3 + \\dots + V_n = \\sum_i V_i."
},
{
"math_id": 4,
"text": "B(T) = X_1 B_1 + X_2 B_2 + X_1 X_2 B_{1,2},"
},
{
"math_id": 5,
"text": "B_{1,2} = 0 "
},
{
"math_id": 6,
"text": "B_{1,2} = \\frac{B_1 + B_2}{2} "
},
{
"math_id": 7,
"text": "\\frac{V_i}{V} = \\dfrac{\\dfrac{n_i RT}{p}}{\\dfrac{n RT}{p}} = \\frac{n_i}{n} = x_i,"
},
{
"math_id": 8,
"text": "p"
},
{
"math_id": 9,
"text": "V_i = \\frac{n_i RT}{p}"
},
{
"math_id": 10,
"text": "V = \\sum V_i"
},
{
"math_id": 11,
"text": "n_i"
},
{
"math_id": 12,
"text": "n = \\sum n_i"
},
{
"math_id": 13,
"text": "R"
},
{
"math_id": 14,
"text": "T"
},
{
"math_id": 15,
"text": "x_i = \\frac{n_i}{n}"
}
] |
https://en.wikipedia.org/wiki?curid=624406
|
62441912
|
Nehemiah 5
|
Chapter in the Book of Nehemiah
Nehemiah 5 is the fifth chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 15th chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. This chapter records the reform of Nehemiah in the case of economic oppression among the Jews, and shows how he led by example.
Text.
The original text of this chapter is in Hebrew. This chapter is divided into 19 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Troubles within (5:1–13).
For any organization or nation, internal schisms, inequities, or injustices will bring ruin far quicker than outside attack, so the well-being (and survival) of a particular group or community depends on its internal health. This section deals with the economic oppression among the Jews (verses 1–5), Nehemiah's judgment on the issue (verses 6–11), and the pledge of the people (verses 12–13). Verse 3 refers to a famine as the cause of hunger and price inflation. Methodist writer Joseph Benson suggests this is the famine announced by the prophet Haggai, who spoke in the second year of King Darius: "through want of rain, which God had withheld as a punishment for the people’s taking more care to build their own houses than his temple". Anglican bishop H. E. Ryle notes a connection with the rebuilding of the walls of Jerusalem recorded in the previous chapter: "a general stoppage of trade must have resulted from the national undertaking. The presence of the enemy in the neighbourhood prevented free agricultural labour."
"Now there was a great outcry of the people and their wives against their fellow Jews."
Verse 1.
The Revised Version's opening word is "then", which Ryle argues "rightly" connects this passage with the rebuilding of the walls.
The "outcry of the people", from , "tsa-‘ă-qaṯ hā-‘ām", is a cry of oppression against their own people, their Jewish neighbors; in contrast to the cry against Pharaoh, or the cry against enemies (cf. ; ), also ‘the cry to God for deliverance from injustice and abuse’ (, ).
"And I said to them, "According to our ability, we have redeemed our Jewish brethren who were sold to the nations. Now indeed, will you even sell your brethren? Or should they be sold to us?" Then they were silenced and found nothing to say."
Verse 8.
This verse "apparently refers to what had been the merciful custom of [Nehemiah] and his countrymen when they were in exile, but possibly also to his action in Jerusalem since his arrival. The word for ‘redeemed’ here would be literally rendered 'acquired' or 'bought'." In the Septuagint, the redemption of the enslaved Jews was secured εν ἑκούσίω ημών ("en hekousiō hemōn"), through our freewill offerings. The word ἑκούσιον ("hekousion") appears in St Paul's Letter to Philemon, where Paul seeks to ensure that Philemon's generosity is not secured "by compulsion, as it were, but voluntary".
Leadership by example (5:14–19).
As governor of Yehud Medinata, the province of Judah, Nehemiah led by example, where he demonstrates his integrity and his unbending adherence to God's laws and his moral standard. Unlike the previous governors who took bread, wine, and 'forty shekels of silver', Nehemiah declined to take an income from taxes, and even at his own expense provided ‘the necessities expected of a government official’.
"Moreover from the time that I was appointed to be their governor in the land of Judah (from the twentieth year even until the thirty-second year of King Artaxerxes) twelve years had passed. And my companions and I had not eaten the governor's food allotment."
Verse 14.
Nehemiah's appointment took place in Nisan 444 BC (or 445 BC; the 20th year of Artaxerxes I), as recorded in , and he governed Judah for 12 years. Therefore, the entire first section of the Book of Nehemiah (chapters 1–7) could be written after 432 BC (the 32nd year of Artaxerxes I), the year when Nehemiah returned to the Persian court from Jerusalem (Nehemiah 13:6).
"Indeed, I also continued the work on this wall, and we did not buy any land. All my servants were gathered there for the work."
Verse 16.
Ryle portrays Nehemiah and his friends as "too strenuously occupied (in rebuilding the walls) to interest themselves in the purchase of lands". The Masoretic Text has the plural, "we did not buy ...". In the Septuagint and Vulgate, the text is singular, "et agrum non emi", as it is in the Revised Standard Version, "I also held to the work on this wall, and acquired no land".
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62441912
|
62442607
|
3D printing speed
|
3D printing speed measures the amount of manufactured material over a given time period (formula_0), where time is measured in seconds and the amount of manufactured material is typically measured in kg, mm, or cm³, depending on the type of additive manufacturing technique.
The following table compares the speeds of commercially relevant 3D printing technologies.
3D printing speed refers to only the build stage, a subcomponent of the entire 3D printing process. However, the entire process spans from pre-processing to post-processing stages. The time required for printing a completed part from a data file (.stl or .obj) is calculated as the sum of time for the following stages:
Speed up.
Additive manufacturing technologies usually imply a trade off between the printing speed and quality. Improvements in speed of the entire 3D printing process can be grouped in the following two categories.
Software improvements.
Since the actual printing process is directly influenced by how the model is sliced, oriented, and filled, optimizing these steps results in a shorter print time.
Optimal Orientation. Changing the orientation of a part can be done through either the STL file or on the CAD model. Determining the optimal part orientation is a common software solution for all additive manufacturing processes. This can lead to a significant improvement in many key factors that affect the total print time. The following factors heavily depend on part orientation:
Adaptive Slicing. Error caused by the staircase effect can be measured using several metrics, all of which refer to the difference between the model surface and the actual printed surface. By adaptively computing the height distribution of layers, this error can be minimized: the quality of the surface increases while post-processing time decreases. The benefits of adaptive slicing depend on the surface-to-volume ratio of the part. Efficient computation of adaptive layers is possible by analyzing the model surface over the full layer height. Several implementations are available as open source software.
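One widely used criterion for adaptive slicing bounds the cusp (staircase) height: the local layer thickness is chosen so that the cusp error stays below a tolerance, given the angle between the surface normal and the build direction. The Python sketch below is a generic illustration of that idea, not the specific method of the implementations mentioned above; the function surface_angle_at and all numeric parameters are assumptions made for the example.
# Generic sketch of cusp-height-based adaptive slicing (an illustration of the
# idea, not the algorithm of any particular implementation cited above).
import math

def adaptive_layer_heights(surface_angle_at, z_max, cusp_tol=0.02,
                           h_min=0.05, h_max=0.3):
    """surface_angle_at(z): angle (radians) between surface normal and build
    direction near height z. Returns a list of layer heights covering [0, z_max]."""
    layers, z = [], 0.0
    while z < z_max:
        cos_theta = abs(math.cos(surface_angle_at(z)))
        # Cusp height is roughly h * cos(theta); keep it below the tolerance.
        h = h_max if cos_theta < 1e-9 else min(h_max, cusp_tol / cos_theta)
        h = max(h, h_min)
        layers.append(min(h, z_max - z))
        z += layers[-1]
    return layers

# Example: a surface whose normal turns toward the build direction near the top,
# so the layers there are made thinner.
print(adaptive_layer_heights(lambda z: math.radians(80 - 60 * z / 10.0), z_max=10.0))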
Hardware improvements.
Increasing the speed of printing through hardware can take the following forms, many of which are used by leading 3D printing companies.
Challenges.
Depending on the technology used, there are some challenges that could limit the speed of the 3D printing:
Research.
Acoustic fabrication.
Interesting features of sound waves have encouraged scientists to use them in additive manufacturing. Sound waves can form pressure fields that shape the material in the desired form in a contact-free setup. The fact that they can be applied over a large area at the same time makes them a good candidate for rapid fabrication.
The process starts by designing an acoustic hologram. An acoustic hologram is a mask that will direct the sound field to form the desired pattern. It can be fabricated by additive fabrication combined with etching and nanoimprint methods. The process continues by placing silicone rubber particles in a liquid medium with photo-initiator agents. The acoustic mask is then used to generate the desired pressure sound field, which arranges the particles in the correct order. The next step is applying UV light to solidify the final product.
Improved SLA processes.
The speed of SLA processes is limited by:
Rapid continuous additive manufacturing by inhibition patterning
Due to the mentioned effects, the printing speed with SLA methods is limited to a few millimeters to several centimeters per hour. To address this problem, a system of two light sources is used, one for polymerization and one for inhibiting the polymerization, in order to avoid adhesion and as a result print faster. This method allows the process to be sped up to 200 cm/hr. Moreover, by controlling the intensity of each pixel in the setup, topographical patterning can be created in a single exposure with no stage translation.
A mixture of photo-initiators and photo-inhibitors is used in the setup. The absorbance spectra of the two materials are orthogonal, which allows the process to be controlled with the two orthogonal light sources. As the material is generated layer by layer, the tray is gradually lifted, and the photo-inhibitors prevent adhesion near the window.
Rapid, large-volume, thermally controlled 3D printing, using a mobile liquid interface
Another way to address the adhesion problem is to create a dead layer in which curing is inhibited. One method to create this dead layer is to use a flow of fluorinated oil. This liquid is omniphobic, which means that it repels all the materials and will not stick to anything. The reason to use a flow instead of a static layer is to create a larger force against the adhesion force and also to help with the cooling of the cured layer (curing generates heat).
Fast 3D printing by integrating construction kit building blocks.
Dividing an object into smaller blocks (e.g. Lego-like parts) before printing can lead to a 2.44x increase in speed over the conventional printing method. Moreover, when the object needs to be iterated to find the optimal design, it is not efficient to reprint the whole object over and over again: one solution is to print the main constant structure only once and reprint only the small changing parts with high resolution. These smaller parts are mounted onto the main structure.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{amount} / \\text{time}"
}
] |
https://en.wikipedia.org/wiki?curid=62442607
|
62443864
|
Neural tangent kernel
|
Type of kernel induced by artificial neural networks
In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods.
In general, a kernel is a positive-semidefinite symmetric function of two inputs which represents some notion of similarity between the two inputs. The NTK is a specific kernel derived from a given neural network; in general, when the neural network parameters change during training, the NTK evolves as well. However, in the limit of large layer width the NTK becomes constant, revealing a duality between training the wide neural network and kernel methods: gradient descent in the infinite-width limit is fully equivalent to kernel gradient descent with the NTK. As a result, using gradient descent to minimize least-square loss for neural networks yields the same mean estimator as ridgeless kernel regression with the NTK. This duality enables simple closed form equations describing the training dynamics, generalization, and predictions of wide neural networks.
The NTK was introduced in 2018 by Arthur Jacot, Franck Gabriel and Clément Hongler, who used it to study the convergence and generalization properties of fully connected neural networks. Later works extended the NTK results to other neural network architectures. In fact, the phenomenon behind NTK is not specific to neural networks and can be observed in generic nonlinear models, usually by a suitable scaling.
Main results (informal).
Let formula_0 denote the scalar function computed by a given neural network with parameters formula_1 on input formula_2. Then the neural tangent kernel is defined as
formula_3
Since it is written as a dot product between mapped inputs (with the gradient of the neural network function serving as the feature map), we are guaranteed that the NTK is symmetric and positive semi-definite. The NTK is thus a valid kernel function.
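For a finite network, the kernel in this definition can be evaluated directly; the sketch below does so for a small one-hidden-layer network using finite-difference gradients. It is an illustration of the definition at finite width, not the infinite-width NTK, and the architecture and scaling are arbitrary choices made for the example.
# Empirical NTK of a small finite network, computed directly from the
# definition above with finite-difference gradients (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
d, width = 3, 50
params = {
    "W1": rng.normal(size=(width, d)) / np.sqrt(d),
    "b1": np.zeros(width),
    "w2": rng.normal(size=width) / np.sqrt(width),
}

def f(theta, x):                        # scalar output of a one-hidden-layer network
    return theta["w2"] @ np.tanh(theta["W1"] @ x + theta["b1"])

def grad_theta(theta, x, eps=1e-6):     # numerical gradient w.r.t. all parameters
    g = []
    for val in theta.values():
        flat = val.ravel()              # view: editing flat edits the parameter
        for i in range(flat.size):
            old = flat[i]
            flat[i] = old + eps; up = f(theta, x)
            flat[i] = old - eps; dn = f(theta, x)
            flat[i] = old
            g.append((up - dn) / (2 * eps))
    return np.array(g)

def ntk(theta, x1, x2):                 # Theta(x1, x2) = <grad f(x1), grad f(x2)>
    return grad_theta(theta, x1) @ grad_theta(theta, x2)

x1, x2 = rng.normal(size=d), rng.normal(size=d)
print(ntk(params, x1, x2), ntk(params, x1, x1))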
Consider a fully connected neural network whose parameters are chosen i.i.d. according to any mean-zero distribution. This random initialization of formula_1 induces a distribution over formula_0 whose statistics we will analyze, both at initialization and throughout training (gradient descent on a specified dataset). We can visualize this distribution via a neural network ensemble which is constructed by drawing many times from the initial distribution over formula_0 and training each draw according to the same training procedure.
The number of neurons in each layer is called the layer’s width. Consider taking the width of every hidden layer to infinity and training the neural network with gradient descent (with a suitably small learning rate). In this infinite-width limit, several nice properties emerge: at initialization, the outputs of the network ensemble are distributed as a centered Gaussian process, with formula_4 and formula_5, where formula_6 is a deterministic covariance kernel; the NTK becomes deterministic, i.e. independent of the random initialization, and remains constant during training; and throughout training the network is well described by its first-order Taylor expansion around its initial parameters formula_8, namely formula_7.
Applications.
Ridgeless kernel regression and kernel gradient descent.
Kernel methods are machine learning algorithms which use only pairwise relations between input points. Kernel methods do not depend on the concrete values of the inputs; they only depend on the relations between the inputs and other inputs (such as the training set). These pairwise relations are fully captured by the kernel function: a symmetric, positive-semidefinite function of two inputs which represents some notion of similarity between the two inputs. A fully equivalent condition is that there exists some feature map formula_9 such that the kernel function can be written as a dot product of the mapped inputsformula_10The properties of a kernel method depend on the choice of kernel function. (Note that formula_11 may have higher dimension than formula_12.) As a relevant example, consider linear regression. This is the task of estimating formula_13 given formula_14 samples formula_15 generated from formula_16, where each formula_17 is drawn according to some input data distribution. In this setup, formula_13 is the weight vector which defines the true function formula_18; we wish to use the training samples to develop a model formula_19 which approximates formula_13. We do this by minimizing the mean-square error between our model and the training samples:formula_20There exists an explicit solution for formula_19 which minimizes the squared error: formula_21, where formula_22 is the matrix whose columns are the training inputs, and formula_23 is the vector of training outputs. Then, the model can make predictions on new inputs: formula_24.
However, this result can be rewritten as: formula_25. Note that this dual solution is expressed solely in terms of the inner products between inputs. This motivates extending linear regression to settings in which, instead of directly taking inner products between inputs, we first transform the inputs according to a chosen feature map and then evaluate the inner products between the transformed inputs. As discussed above, this can be captured by a kernel function formula_26, since all kernel functions are inner products of feature-mapped inputs. This yields the ridgeless kernel regression estimator:formula_27If the kernel matrix formula_28 is singular, one uses the Moore-Penrose pseudoinverse. The regression equations are called "ridgeless" because they lack a ridge regularization term.
In this view, linear regression is a special case of kernel regression with the identity feature map: formula_29. Equivalently, kernel regression is simply linear regression in the feature space (i.e. the range of the feature map defined by the chosen kernel). Note that kernel regression is typically a "nonlinear" regression in the input space, which is a major strength of the algorithm.
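The ridgeless estimator above can be made concrete with a short Python sketch; the RBF kernel, the synthetic data, and all names are illustrative assumptions, and any positive-semidefinite kernel (such as an NTK) could be used in place of the kernel function.
```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # K(x, x') = exp(-||x - x'||^2 / (2 l^2)); any positive-semidefinite kernel
    # (for instance an NTK) could be used in its place.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def ridgeless_kernel_regression(K_train, y_train, K_test_train):
    # yhat(x) = K(x, X) K(X, X)^{-1} y; the pseudo-inverse handles a singular K(X, X).
    alpha = np.linalg.pinv(K_train) @ y_train
    return K_test_train @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0])
X_test = np.linspace(-3, 3, 100)[:, None]

y_hat = ridgeless_kernel_regression(rbf_kernel(X, X), y, rbf_kernel(X_test, X))
print(np.max(np.abs(y_hat - np.sin(X_test[:, 0]))))  # error is small wherever training points are nearby
```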
Just as it is possible to perform linear regression using iterative optimization algorithms such as gradient descent, one can perform kernel regression using kernel gradient descent. This is equivalent to performing gradient descent in the feature space. It is known that if the weight vector is initialized close to zero, least-squares gradient descent converges to the minimum-norm solution, i.e., the final weight vector has the minimum Euclidean norm of all the interpolating solutions. In the same way, kernel gradient descent yields the minimum-norm solution with respect to the RKHS norm. This is an example of the implicit regularization of gradient descent.
The NTK gives a rigorous connection between the inference performed by infinite-width ANNs and that performed by kernel methods: when the loss function is the least-squares loss, the inference performed by an ANN is in expectation equal to ridgeless kernel regression with respect to the NTK. This suggests that the performance of large ANNs in the NTK parametrization can be replicated by kernel methods for suitably chosen kernels.
Overparametrization, interpolation, and generalization.
In overparametrized models, the number of tunable parameters exceeds the number of training samples. In this case, the model is able to memorize (perfectly fit) the training data. Therefore, overparametrized models interpolate the training data, achieving essentially zero training error.
Kernel regression is typically viewed as a non-parametric learning algorithm, since there are no explicit parameters to tune once a kernel function has been chosen. An alternate view is to recall that kernel regression is simply linear regression in feature space, so the “effective” number of parameters is the dimension of the feature space. Therefore, studying kernels with high-dimensional feature maps can provide insights about strongly overparametrized models.
As an example, consider the problem of generalization. According to classical statistics, memorization should cause models to fit noisy signals in the training data, harming their performance on unseen data. To mitigate this, machine learning algorithms often introduce regularization to mitigate noise-fitting tendencies. Surprisingly, modern neural networks (which tend to be strongly overparametrized) seem to generalize well, even in the absence of explicit regularization. To study the generalization properties of overparametrized neural networks, one can exploit the infinite-width duality with ridgeless kernel regression. Recent works have derived equations describing the expected generalization error of high-dimensional kernel regression; these results immediately explain the generalization of sufficiently wide neural networks trained to convergence on least-squares.
Convergence to a global minimum.
For a convex loss functional formula_30 with a global minimum, if the NTK remains positive-definite during training, the loss of the ANN formula_31 converges to that minimum as formula_32. This positive-definiteness property has been shown in a number of cases, yielding the first proofs that large-width ANNs converge to global minima during training.
Extensions and limitations.
The NTK can be studied for various ANN architectures, in particular convolutional neural networks (CNNs), recurrent neural networks (RNNs) and transformers. In such settings, the large-width limit corresponds to letting the number of parameters grow, while keeping the number of layers fixed: for CNNs, this involves letting the number of channels grow.
Individual parameters of a wide neural network in the kernel regime change negligibly during training. However, this implies that infinite-width neural networks cannot exhibit feature learning, which is widely considered to be an important property of realistic deep neural networks. This is not a generic feature of infinite-width neural networks and is largely due to a specific choice of the scaling by which the width is taken to the infinite limit; indeed several works have found alternate infinite-width scaling limits of neural networks in which there is no duality with kernel regression and feature learning occurs during training. Others introduce a "neural tangent hierarchy" to describe finite-width effects, which may drive feature learning.
Neural Tangents is a free and open-source Python library used for computing and doing inference with the infinite width NTK and neural network Gaussian process (NNGP) corresponding to various common ANN architectures. In addition, there exists a scikit-learn compatible implementation of the infinite width NTK for Gaussian processes called scikit-ntk.
Details.
When optimizing the parameters formula_33 of an ANN to minimize an empirical loss through gradient descent, the NTK governs the dynamics of the ANN output function formula_34 throughout the training.
Case 1: Scalar output.
An ANN with scalar output consists of a family of functions formula_35 parametrized by a vector of parameters formula_33.
The NTK is a kernel formula_36 defined byformula_37In the language of kernel methods, the NTK formula_38 is the kernel associated with the feature map formula_39. To see how this kernel drives the training dynamics of the ANN, consider a dataset formula_40 with scalar labels formula_41 and a loss function formula_42. Then the associated empirical loss, defined on functions formula_43, is given byformula_44When the ANN formula_45 is trained to fit the dataset (i.e. minimize formula_46) via continuous-time gradient descent, the parameters formula_47 evolve through the ordinary differential equation:
formula_48
During training the ANN output function follows an evolution differential equation given in terms of the NTK:
formula_49
This equation shows how the NTK drives the dynamics of formula_50 in the space of functions formula_51 during training.
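As a hedged numerical illustration of these dynamics, the sketch below runs a discrete-time (Euler) version of the function-space evolution for the squared loss, with a fixed positive-definite matrix standing in for the NTK evaluated on the training inputs; all names and constants are arbitrary choices for the example.
```python
import numpy as np

# Euler discretization of the function-space dynamics for the squared loss
# c(w, z) = (w - z)^2 / 2, whose derivative is (w - z).  A fixed positive-
# definite matrix stands in for the NTK evaluated on the training inputs.
rng = np.random.default_rng(0)
n = 25
A = rng.normal(size=(n, n))
Theta = A @ A.T / n + np.eye(n)      # surrogate NTK Gram matrix
z = rng.normal(size=n)               # training labels
f = np.zeros(n)                      # network outputs at initialization
lr = 0.5 / np.linalg.norm(Theta, 2)  # small step size for stability

for _ in range(2000):
    f = f - lr * Theta @ (f - z)     # discrete step of  df/dt = -Theta (f - z)

print(np.linalg.norm(f - z))         # residual shrinks essentially to zero
```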
Case 2: Vector output.
An ANN with vector output of size formula_52 consists of a family of functions formula_53 parametrized by a vector of parameters formula_33.
In this case, the NTK formula_54 is a matrix-valued kernel, with values in the space of formula_55 matrices, defined byformula_56Empirical risk minimization proceeds as in the scalar case, with the difference being that the loss function takes vector inputs formula_57. The training of formula_58 through continuous-time gradient descent yields the following evolution in function space driven by the NTK:formula_59This generalizes the equation shown in case 1 for scalar outputs.
Interpretation.
Each data point formula_60 influences the evolution of the output formula_61, for each input formula_2, throughout the training. More concretely, for training example formula_62, the NTK value formula_63 determines the influence of the loss gradient formula_64 on the evolution of the ANN output formula_61 through a gradient descent step. In the scalar case, this readsformula_65
Wide fully-connected ANNs have a deterministic NTK, which remains constant throughout training.
Consider an ANN with fully-connected layers formula_66 of widths formula_67, so that formula_68, where formula_69 is the composition of an affine transformation formula_70 with the pointwise application of a nonlinearity formula_71, where formula_1 parametrizes the maps formula_72. The parameters formula_33 are initialized randomly, in an independent, identically distributed way.
As the widths grow, the NTK's scale is affected by the exact parametrization of the formula_70's and by the parameter initialization. This motivates the so-called NTK parametrization formula_73. This parametrization ensures that if the parameters formula_33 are initialized as standard normal variables, the NTK has a finite nontrivial limit. In the large-width limit, the NTK converges to a deterministic (non-random) limit formula_74, which stays constant in time.
The NTK formula_74 is explicitly given by formula_75, where formula_76 is determined by the set of recursive equations:
formula_77
where formula_78 denotes the kernel defined in terms of the Gaussian expectation:
formula_79
In this formula the kernels formula_80 are the ANN's so-called activation kernels.
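The recursion above can be instantiated for the ReLU nonlinearity, for which the Gaussian expectations have standard closed forms (the arc-cosine kernel expressions). The following Python sketch follows the recursive equations exactly as stated, for a pair of inputs; the function names and sizes are illustrative assumptions.
```python
import numpy as np

def relu_gauss_expectations(s_xx, s_xy, s_yy):
    # Closed forms for the Gaussian expectations when sigma = ReLU:
    # E[sigma(u) sigma(v)] and E[sigma'(u) sigma'(v)] with
    # (u, v) ~ N(0, [[s_xx, s_xy], [s_xy, s_yy]])  (standard arc-cosine kernel formulas).
    norm = np.sqrt(s_xx * s_yy)
    cos_t = np.clip(s_xy / norm, -1.0, 1.0)
    theta = np.arccos(cos_t)
    e_sigma = norm * (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)
    e_sigma_dot = (np.pi - theta) / (2 * np.pi)
    return e_sigma, e_sigma_dot

def infinite_width_ntk(x, y, depth):
    # Follows the recursion in the text: Theta^(1) = Sigma^(1) = x.y / n_in + 1,
    # then Theta^(l+1) = Theta^(l) * Sigma_dot^(l+1) + Sigma^(l+1).
    n_in = len(x)
    s_xx = x @ x / n_in + 1.0
    s_xy = x @ y / n_in + 1.0
    s_yy = y @ y / n_in + 1.0
    theta = s_xy
    for _ in range(depth - 1):
        sig_xy, sig_dot_xy = relu_gauss_expectations(s_xx, s_xy, s_yy)
        sig_xx, _ = relu_gauss_expectations(s_xx, s_xx, s_xx)
        sig_yy, _ = relu_gauss_expectations(s_yy, s_yy, s_yy)
        theta = theta * sig_dot_xy + sig_xy
        s_xx, s_xy, s_yy = sig_xx, sig_xy, sig_yy
    return theta

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
print(infinite_width_ntk(x, y, depth=3))
```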
Wide fully connected networks are linear in their parameters throughout training.
The NTK describes the evolution of neural networks under gradient descent in function space. Dual to this perspective is an understanding of how neural networks evolve in parameter space, since the NTK is defined in terms of the gradient of the ANN's outputs with respect to its parameters. In the infinite width limit, the connection between these two perspectives becomes especially interesting. The NTK remaining constant throughout training at large widths co-occurs with the ANN being well described throughout training by its first order Taylor expansion around its parameters at initialization:
formula_81
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(x;\\theta )"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "\\Theta (x,x';\\theta )=\\nabla _{\\theta }f(x;\\theta )\\cdot \\nabla _{\\theta }f(x';\\theta )."
},
{
"math_id": 4,
"text": "\\mathbb {E} _{\\theta }[f(x;\\theta )]=0"
},
{
"math_id": 5,
"text": "\\mathbb {E} _{\\theta }[f(x;\\theta )f(x';\\theta )]=\\Sigma (x,x')"
},
{
"math_id": 6,
"text": "\\Sigma (x,x')"
},
{
"math_id": 7,
"text": "f(x;\\theta_0 +\\Delta \\theta )=f(x;\\theta_0 )+\\Delta \\theta \\cdot \\nabla _{\\theta }f(x;\\theta_0 )"
},
{
"math_id": 8,
"text": "\\theta_0"
},
{
"math_id": 9,
"text": "{\\mathbf {x}}\\mapsto \\psi ({\\mathbf {x}})"
},
{
"math_id": 10,
"text": "K({\\mathbf {x}},{\\mathbf {x}}')=\\psi ({\\mathbf {x}})\\cdot \\psi ({\\mathbf {x}}')."
},
{
"math_id": 11,
"text": "\\psi ({\\mathbf {x}})"
},
{
"math_id": 12,
"text": "\\mathbf{x}"
},
{
"math_id": 13,
"text": "{\\mathbf {w}}^{*}"
},
{
"math_id": 14,
"text": "N"
},
{
"math_id": 15,
"text": "({\\mathbf {x}}_{i},y_{i})"
},
{
"math_id": 16,
"text": "y^{*}({\\mathbf {x}})={\\mathbf {w}}^{*}\\cdot {\\mathbf {x}}"
},
{
"math_id": 17,
"text": "\\mathbf {x}_{i}"
},
{
"math_id": 18,
"text": "y^{*}"
},
{
"math_id": 19,
"text": "\\mathbf {\\hat {w}}"
},
{
"math_id": 20,
"text": "{\\mathbf {\\hat {w}}}=\\arg \\min _{\\mathbf {w}}{\\frac {1}{N}}\\sum_{i=0}^{N}||y^{*}({\\mathbf {x}}_{i})-{\\mathbf {w}}\\cdot {\\mathbf {x}}_{i}||^{2}"
},
{
"math_id": 21,
"text": "{\\mathbf {\\hat {w}}}=({\\mathbf {X}}{\\mathbf {X}}^{T})^{-1}{\\mathbf {X}}{\\mathbf {y}}"
},
{
"math_id": 22,
"text": "{\\mathbf {X}}"
},
{
"math_id": 23,
"text": "{\\mathbf {y}}"
},
{
"math_id": 24,
"text": "{\\hat {y}}({\\mathbf {x}})={\\mathbf {\\hat {w}}}\\cdot {\\mathbf {x}}"
},
{
"math_id": 25,
"text": "{\\hat {y}}({\\mathbf {x}})=({\\mathbf {x}}^{T}{\\mathbf {X}})({\\mathbf {X}}^{T}{\\mathbf {X}})^{-1}{\\mathbf {y}}"
},
{
"math_id": 26,
"text": "K({\\mathbf {x}},{\\mathbf {x}}')"
},
{
"math_id": 27,
"text": "{\\hat {y}}({\\mathbf {x}})=K({\\mathbf {x}},{\\mathbf {X}})\\;K({\\mathbf {X}},{\\mathbf {X}})^{-1}\\;{\\mathbf {y}}."
},
{
"math_id": 28,
"text": "K({\\mathbf {X}},{\\mathbf {X}})"
},
{
"math_id": 29,
"text": "\\psi ({\\mathbf {x}})={\\mathbf {x}}"
},
{
"math_id": 30,
"text": "{\\mathcal {C}}"
},
{
"math_id": 31,
"text": "{\\mathcal {C}}\\left(f\\left(\\cdot;\\theta \\left(t\\right)\\right)\\right)"
},
{
"math_id": 32,
"text": "t\\to \\infty"
},
{
"math_id": 33,
"text": "\\theta\\in\\mathbb{R}^{P}"
},
{
"math_id": 34,
"text": "f_{\\theta}"
},
{
"math_id": 35,
"text": "f\\left(\\cdot,\\theta\\right):\\mathbb{R}^{n_{\\mathrm{in}}}\\to\\mathbb{R}"
},
{
"math_id": 36,
"text": "\\Theta:\\mathbb{R}^{n_{\\mathrm{in}}}\\times\\mathbb{R}^{n_{\\mathrm{in}}}\\to\\mathbb{R}"
},
{
"math_id": 37,
"text": "\\Theta\\left(x,y;\\theta\\right)=\\sum_{p=1}^{P}\\partial_{\\theta_{p}}f\\left(x;\\theta\\right)\\partial_{\\theta_{p}}f\\left(y;\\theta\\right)."
},
{
"math_id": 38,
"text": "\\Theta"
},
{
"math_id": 39,
"text": "\\left(x\\mapsto\\partial_{\\theta_{p}}f\\left(x;\\theta\\right)\\right)_{p=1,\\ldots,P}"
},
{
"math_id": 40,
"text": "\\left(x_{i}\\right)_{i=1,\\ldots,n}\\subset\\mathbb{R}^{n_{\\mathrm{in}}}"
},
{
"math_id": 41,
"text": "\\left(z_{i}\\right)_{i=1,\\ldots,n}\\subset\\mathbb{R}"
},
{
"math_id": 42,
"text": "c:\\mathbb{R}\\times\\mathbb{R}\\to\\mathbb{R}"
},
{
"math_id": 43,
"text": "f:\\mathbb{R}^{n_{\\mathrm{in}}}\\to\\mathbb{R}"
},
{
"math_id": 44,
"text": "\\mathcal{C}\\left(f\\right)=\\sum_{i=1}^{n}c\\left(f\\left(x_{i}\\right),z_{i}\\right)."
},
{
"math_id": 45,
"text": "f\\left(\\cdot;\\theta\\right):\\mathbb{R}^{n_{\\mathrm{in}}}\\to\\mathbb{R}"
},
{
"math_id": 46,
"text": "\\mathcal{C}"
},
{
"math_id": 47,
"text": "\\left(\\theta\\left(t\\right)\\right)_{t\\geq0}"
},
{
"math_id": 48,
"text": "\\partial_{t}\\theta\\left(t\\right)=-\\nabla\\mathcal{C}\\left(f\\left(\\cdot;\\theta\\right)\\right)."
},
{
"math_id": 49,
"text": "\\partial_{t}f\\left(x;\\theta\\left(t\\right)\\right)=-\\sum_{i=1}^{n}\\Theta\\left(x,x_{i};\\theta\\right)\\partial_{w}c\\left(w,z_{i}\\right)\\Big|_{w=f\\left(x_{i};\\theta\\left(t\\right)\\right)}."
},
{
"math_id": 50,
"text": "f\\left(\\cdot;\\theta\\left(t\\right)\\right)"
},
{
"math_id": 51,
"text": "\\mathbb{R}^{n_{\\mathrm{in}}}\\to\\mathbb{R}"
},
{
"math_id": 52,
"text": "n_{\\mathrm{out}}"
},
{
"math_id": 53,
"text": "f\\left(\\cdot;\\theta\\right):\\mathbb{R}^{n_{\\mathrm{in}}}\\to\\mathbb{R}^{n_{\\mathrm{out}}}"
},
{
"math_id": 54,
"text": "\\Theta:\\mathbb{R}^{n_{\\mathrm{in}}}\\times\\mathbb{R}^{n_{\\mathrm{in}}}\\to\\mathcal{M}_{n_{\\mathrm{out}}}\\left(\\mathbb{R}\\right)"
},
{
"math_id": 55,
"text": "n_{\\mathrm{out}}\\times n_{\\mathrm{out}}"
},
{
"math_id": 56,
"text": "\\Theta_{k,l}\\left(x,y;\\theta\\right)=\\sum_{p=1}^{P}\\partial_{\\theta_{p}}f_{k}\\left(x;\\theta\\right)\\partial_{\\theta_{p}}f_{l}\\left(y;\\theta\\right)."
},
{
"math_id": 57,
"text": "c:\\mathbb{R}^{n_{\\mathrm{out}}}\\times\\mathbb{R}^{n_{\\mathrm{out}}}\\to\\mathbb{R}"
},
{
"math_id": 58,
"text": "f_{\\theta\\left(t\\right)}"
},
{
"math_id": 59,
"text": "\\partial_{t}f_{k}\\left(x;\\theta\\left(t\\right)\\right)=-\\sum_{i=1}^{n}\\sum_{l=1}^{n_{\\mathrm{out}}}\\Theta_{k,l}\\left(x,x_{i};\\theta\\right)\\partial_{w_{l}}c\\left(\\left(w_{1},\\ldots,w_{n_{\\mathrm{out}}}\\right),z_{i}\\right)\\Big|_{w=f\\left(x_{i};\\theta\\left(t\\right)\\right)}."
},
{
"math_id": 60,
"text": "x_{i}"
},
{
"math_id": 61,
"text": "f\\left(x;\\theta\\right)"
},
{
"math_id": 62,
"text": "i"
},
{
"math_id": 63,
"text": "\\Theta\\left(x,x_{i};\\theta\\right)"
},
{
"math_id": 64,
"text": "\\partial_{w}c\\left(w,z_{i}\\right)\\big|_{w=f\\left(x_{i};\\theta\\right)}"
},
{
"math_id": 65,
"text": "f\\left(x;\\theta\\left(t+\\epsilon\\right)\\right)-f\\left(x;\\theta\\left(t\\right)\\right)\\approx\\epsilon\\sum_{i=1}^{n}\\Theta\\left(x,x_{i};\\theta\\left(t\\right)\\right)\\partial_{w}c\\left(w,z_{i}\\right)\\big|_{w=f\\left(x_{i};\\theta\\right)}."
},
{
"math_id": 66,
"text": "\\ell=0,\\ldots,L"
},
{
"math_id": 67,
"text": "n_{0}=n_{\\mathrm{in}},n_{1},\\ldots,n_{L}=n_{\\mathrm{out}}"
},
{
"math_id": 68,
"text": "f\\left(\\cdot;\\theta\\right)=R_{L-1}\\circ\\cdots\\circ R_{0}"
},
{
"math_id": 69,
"text": "R_{\\ell}=\\sigma\\circ A_{\\ell}"
},
{
"math_id": 70,
"text": "A_{i}"
},
{
"math_id": 71,
"text": "\\sigma:\\mathbb{R}\\to\\mathbb{R}"
},
{
"math_id": 72,
"text": "A_{0},\\ldots,A_{L-1}"
},
{
"math_id": 73,
"text": "A_{\\ell}\\left(x\\right)=\\frac{1}{\\sqrt{n_{\\ell}}}W^{\\left(\\ell\\right)}x+b^{\\left(\\ell\\right)}"
},
{
"math_id": 74,
"text": "\\Theta_{\\infty}"
},
{
"math_id": 75,
"text": "\\Theta_{\\infty}=\\Theta^{\\left(L\\right)}"
},
{
"math_id": 76,
"text": "\\Theta^{\\left(L\\right)}"
},
{
"math_id": 77,
"text": "\\begin{align}\n\\Theta^{\\left(1\\right)}\\left(x,y\\right) &= \\Sigma^{\\left(1\\right)}\\left(x,y\\right),\\\\\n\\Sigma^{\\left(1\\right)}\\left(x,y\\right) &= \\frac{1}{n_{\\mathrm{in}}}x^{T}y+1,\\\\\n\\Theta^{\\left(\\ell+1\\right)}\\left(x,y\\right) &=\\Theta^{\\left(\\ell\\right)}\\left(x,y\\right)\\dot{\\Sigma}^{\\left(\\ell+1\\right)}\\left(x,y\\right)+\\Sigma^{\\left(\\ell+1\\right)}\\left(x,y\\right),\\\\\n\\Sigma^{\\left(\\ell+1\\right)}\\left(x,y\\right) &= L_{\\Sigma^{\\left(\\ell\\right)}}^{\\sigma}\\left(x,y\\right),\\\\\n\\dot{\\Sigma}^{\\left(\\ell+1\\right)}\\left(x,y\\right) &= L_{\\Sigma^{\\left(\\ell\\right)}}^{\\dot{\\sigma}},\n\\end{align}"
},
{
"math_id": 78,
"text": "L_{K}^{f}"
},
{
"math_id": 79,
"text": "L_{K}^{f}\\left(x,y\\right)=\\mathbb{E}_{\\left(X,Y\\right)\\sim\\mathcal{N}\\left(0,\\begin{pmatrix}K\\left(x,x\\right) & K\\left(x,y\\right)\\\\\nK\\left(y,x\\right) & K\\left(y,y\\right)\n\\end{pmatrix}\\right)}\\left[f\\left(X\\right)f\\left(Y\\right)\\right]."
},
{
"math_id": 80,
"text": "\\Sigma^{\\left(\\ell\\right)}"
},
{
"math_id": 81,
"text": "f\\left(x;\\theta(t)\\right) = f\\left(x;\\theta(0)\\right) + \\nabla_{\\theta}f\\left(x;\\theta(0)\\right) \\left(\\theta(t) - \\theta(0)\\right) + \\mathcal{O}\\left(\\min\\left(n_1 \\dots n_{L-1}\\right)^{-\\frac{1}{2}}\\right)\n."
}
] |
https://en.wikipedia.org/wiki?curid=62443864
|
62448004
|
Disappearing polymorph
|
Phenomenon in materials science
In materials science, a disappearing polymorph is a form of a crystal structure that is suddenly unable to be produced, instead transforming into a different crystal structure with the same chemical composition (a polymorph) during nucleation. Sometimes the resulting transformation is extremely hard or impractical to reverse, because the new polymorph may be more stable. It is hypothesized that contact with a single microscopic seed crystal of the new polymorph can be enough to start a chain reaction causing the transformation of a much larger mass of material. Widespread contamination with such microscopic seed crystals may lead to the impression that the original polymorph has "disappeared". In a few cases such as progesterone and paroxetine hydrochloride, the disappearance is global, and it is suspected that this is because Earth's atmosphere is permeated with tiny seed crystals. It is believed that seeds as small as a few million molecules (about formula_0 grams) are sufficient to convert one morph to another, making the unwanted disappearance of morphs particularly difficult to prevent.
This is of concern to the pharmaceutical industry, where disappearing polymorphs can ruin the effectiveness of their products and make it impossible to manufacture the original product if there is any contamination. There have been cases in which a laboratory that attempted to reproduce crystals of a particular structure instead grew not the original but a new crystal structure. The drug paroxetine was subject to a lawsuit that hinged on such a pair of polymorphs, and multiple life-saving drugs, such as ritonavir, have been recalled due to unexpected polymorphism. Although it may seem like a so-called disappearing polymorph has disappeared for good, it is believed that it is always possible in principle to reconstruct the original polymorph, though doing so may be impractically difficult. Disappearing polymorphs are generally metastable forms that are replaced by more stable forms.
It is hypothesized that "unintentional seeding" may also be responsible for the phenomenon in which it often becomes easier to crystallize synthetic compounds over time.
Thermodynamics.
Disappearing polymorphs occur when a substance has two morphs, one of which has a lower Gibbs free energy but is kinetically slower to form. Thus, when the crystal is first formed, the kinetically faster morph appears first. Eventually, by accident or catalysis, the more stable morph appears and can then serve as a seed crystal. More abstractly stated, disappearing polymorphs are morphs that are kinetically stable but not thermodynamically stable.
Pharmaceutical and legal impact.
In the United States, the first company to develop a drug (the "pioneer") must demonstrate that the drug is safe and effective through extensive and expensive trials. After that, it enjoys a period of exclusive rights to sell the drug, after which other companies ("generics") can market the same drug as a generic under the Abbreviated New Drug Application. Pioneer companies often attempt to evergreen the patented drug by many methods. Since the appearance of generics can decrease the revenue of a patented drug by as much as 80%, evergreening is very profitable.
When disappearing polymorphs are involved, the pioneer company may have first discovered and patented polymorph A, and later polymorph B, where polymorph A inevitably converts to polymorph B when seeded with microscopic amounts of B. This means that later companies, even if they follow all the steps specified in the pioneering patent, end up with polymorph B. Since with disappearing polymorphism it is practically impossible for anyone to produce the original drug without it turning into the new one, producers are effectively barred from selling generics until the patent for the new polymorph has run out. Alternatively, the pioneer may argue that the new polymorph needs to undergo the same trials as a new drug, potentially delaying the release of a generic for years.
Case studies.
Paroxetine hydrochloride.
Paroxetine hydrochloride was developed in the 1970s by scientists at Ferrosan and patented as US4007196A in 1976. Ferrosan licensed this patent to the Beecham Group, which later merged into GSK (GlaxoSmithKline at the time).
The paroxetine developed at that time was paroxetine anhydrate, a hygroscopic, chalky powder that was difficult to handle. In late 1984, while production of paroxetine was being scaled up, a new crystal form (the hemihydrate) suddenly appeared at two Beecham sites in the UK within a few weeks of each other. In the presence of water or humidity, mere contact with the hemihydrate converts the anhydrate into the hemihydrate.
Alan Curzons, working for GSK, wrote the "Paroxetine Polymorphism" memorandum on May 29, 1985, a document that proved vital in later litigation.
When the patent for paroxetine anhydrate (the "original" polymorph) ran out, other companies wanted to make generic antidepressants using the chemical. The only problem was that by the time other companies began manufacturing, Earth's atmosphere was already seeded with microscopic quantities of paroxetine hemihydrate from GSK's manufacturing plants, which meant that anyone trying to manufacture the original polymorph would find it transformed into the still-patented version, which GSK refused to give manufacturing rights for. Thus, GSK sued the Canadian generic pharmaceutical company Apotex ("SmithKline Beecham Corp. v Apotex Corp") for patent infringement by producing quantities of the newer paroxetine polymorph in their generic pills, asking for their products to be blocked from entering the market.
GSK claimed that the anhydrate "inevitably" converts to hemihydrate due to the presence of seeds. Apotex rejected the seeding theory as "junk science", and "alchemy". Both the District Court and the Federal Circuit Court accepted the seeding theory of GSK, but nevertheless both judged in favor of Apotex. The District Court judged that Apotex was not responsible for unintentional presence of seeding in facility. The Federal Circuit Court invalidated the newer patent concerning the hemihydrates, on the argument of prior public use from the clinical trials.
Later research showed that the "anhydrate" was in fact a nonstoichiometric hydrate that rapidly dehydrates and rehydrates. The hemihydrate form is more stable due to a higher number of hydrogen bonds.
Paroxetine mesylate.
In order to avoid patents on paroxetine hydrochloride, some companies developed alternative salts of paroxetine. In the mid-1990s SmithKline Beecham (now a part of GSK) and Synthon independently developed paroxetine mesylate. They obtained two separate patents.
Subsequently, all attempts to produce Synthon's version of paroxetine mesylate ended up with Beecham's version. There were two possibilities: either Synthon's version is a disappearing polymorph, or Synthon's patent application contained erroneous data. Many litigations later, there was no legal consensus on which possibility was correct.
Ritonavir.
Released to the public in 1996, ritonavir is an antiretroviral medication used to help treat HIV/AIDS. It has been listed on the World Health Organization's List of Essential Medicines. The original medication was manufactured in the form of semisolid gel capsules, based on the only known crystal form of the drug ("Form I"). In 1998, however, a second crystal form ("Form II") was unexpectedly discovered. It had significantly lower solubility and was not medically effective.
Form II was of sufficiently lower energy that it became impossible to produce Form I in any laboratory where Form II was introduced, even indirectly. Scientists who had been exposed to Form II in the past seemingly contaminated entire manufacturing plants by their presence, probably because they carried over microscopic seed crystals of the new polymorph. The drug was temporarily recalled from the market. Tens of thousands of AIDS patients went without medication for their condition until ritonavir was reformulated, approved, and re-released to the market in 1999. It is estimated that Abbott, the company which produced ritonavir under the brand name Norvir, lost over $250 million USD as a result of the incident.
A later study found 3 additional morphs: a metastable polymorph, a trihydrate, and a formamide solvate.
Rotigotine.
Rotigotine (sold under the brand name Neupro among others) is a dopamine agonist indicated for the treatment of Parkinson's disease (PD) and restless legs syndrome (RLS). In 2007, the Neupro patch was approved by the Food and Drug Administration (FDA) as the first transdermal patch treatment of Parkinson's disease in the United States. The drug had been established in 1980, and no prior polymorphism had been observed. In 2008, a more stable polymorph unexpectedly emerged, which was described as resembling "snow-like crystals". The new polymorph did not display any observable reduction in efficacy, but nonetheless, Schwarz Pharma recalled all Neupro patches in the United States and some in Europe. Those with remaining patches in Europe were told to refrigerate their stock, since refrigeration seemed to reduce crystallization rates. The patch was reformulated in 2012, as per FDA recommendations, and was reintroduced in the United States without requiring refrigeration.
Progesterone.
Progesterone is a naturally occurring steroid hormone and is used in hormone therapy and birth control pills, among other applications. There are two known forms of naturally-occurring progesterone (or "nat"‐progesterone), and other synthetic polymorphs of the hormone have also been created and studied.
Early scientists reported being able to crystallize both forms of "nat"‐progesterone, and they could convert form 2 into form 1 (which is more thermodynamically stable and melts at a different temperature). When later scientists tried to crystallize form 2 from pure materials, they could not. Attempts to replicate older instructions (and variations on those instructions) for crystallization of form 2 invariably produced form 1 instead, sometimes even leading to crystals of exceptional purity but still of form 1. Researchers have tentatively suggested that form 2 became gradually harder to produce around 1975, based on a review of production difficulties documented or alluded to in existing literature.
Form 2 was eventually successfully synthesized by using pregnenolone, a structurally similar compound, as an additive in the crystallization process. The additive seemed to reverse the order of stability of the polymorphs. Multiple theories were proposed for why earlier research was able to produce form 2 from "pure" ingredients, ranging from the possibility that the early researchers were unintentionally working with impure materials to the possibility that seed crystals of form 1 had become more common in the atmosphere of laboratories since the 1970s.
Beta-melibiose.
Pfanstiehl Chemical Company in Waukegan, Illinois, was known for isolating and purifying natural substances, including melibiose. The final step of purifying melibiose was to crystallize it. However, one day, all new melibiose crystals appeared in a different morph. The old morph was called beta-melibiose and the new morph, alpha-melibiose. The chemists theorized that tiny traces of the alpha morph in the air or on the lab equipment could be causing this change, but they never found out where the contamination was coming from. Ultimately, the company gave up. However, they suggested that if the process were attempted in a different location, where there was absolutely no trace of alpha morph, it might still be possible to successfully crystallize the beta morph.
As of 1995, this issue may have persisted: a survey of catalogs from various chemical companies, including Merck, Fluka, BDH, Aldrich, and Sigma, found that only alpha-melibiose was available.
Beta-melibiose is in fact an epimer of alpha-melibiose. However, since alpha- and beta-melibiose rapidly interconvert in solution, this may still be productively considered a case of crystal polymorphism.
Xylitol.
Xylitol, a sugar alcohol, was first synthesized from beech wood chips in September 1890 in the form of syrups, but no one reported its crystal forms for 50 years. It has two different crystal morphs. One is a metastable, moisture-absorbing form that melts at 61 °C, and the other is a more stable form that melts at 94 °C. Notably, its metastable morph was prepared before the stable form, conforming to Ostwald's rule.
When a sample of xylitol in the metastable form was brought into a lab where the stable form had previously been made, the sample would change into the stable form after a few days in the open air. The structure of only the stable crystal was determined by X-ray diffraction in a 1969 publication. The researchers failed to obtain the metastable form from a solution in alcohol, either at room temperature or near freezing; they invariably grew only the stable form. This seems to be because once the stable form has been made in a lab, its "seeds" or nuclei can disperse in air, influencing new crystals to grow the same way.
Cephadroxil.
Cefadroxil is an antibiotic. Bristol-Myers Squibb (BMS) patented the "Bouzard form" under US Patent No. 4,504,657 ('657) in 1985. The patenting took 6 years due to disputes about polymorphs. An earlier patent (US Patent No. 3,781,282) covered a different form, the "Micetich form". Attempts to replicate the Micetich form according to Example 19 in the '282 patent consistently yielded the Bouzard form, leading to challenges that the '657 patent was already "inherent" in the '282 patent, thus invalidated by prior art. BMS argued that the prevalence of the Bouzard form in manufacturing facilities led to unintentional seeding. Experimental tests of the seeding theory were ambiguous, but eventually the patent was granted.
Later, Zenith Laboratories marketed a cefadroxil hemihydrate. BMS sued for "gastrointestinal infringement", claiming it converted to the patented Bouzard form in the stomach. The case hinged on the interpretation of X-ray diffraction data, with BMS arguing it demonstrated the presence of the Bouzard form in patients who ingested Zenith's product. However, the court sided with Zenith.
Ranitidine.
Ranitidine, a medicine for peptic ulcers sold under the name Zantac, was developed by Allen & Hanburys (then a part of Glaxo Group Research, now GSK) and patented in 1978 (US4128658A, Example 32). Originally, its crystals were all in Form 1, but the batch prepared on April 15, 1980 exhibited a new infrared spectrogram peak at 1045 formula_1, demonstrating that a new crystal form had appeared, designated Form 2. Subsequent batches produced more and more Form 2 despite using the same procedure, until Form 1 completely disappeared. The group patented Form 2 in 1985 (US4521431A) and 1987 (US4672133A).
Though it is very difficult to crystalize Form 1 in the presence of seeds of Form 2, once Form 1 crystals are obtained, they can coexist indefinitely with Form 2 crystals when mixed together.
As the 1978 patent was nearing its 1995 expiration, many generics companies attempted to develop generics using the procedure described in the 1978 patent, but they all ended up with Form 2. Some generics companies (such as Novopharm) claimed that Glaxo had never produced Form 1, and that the 1978 patent therefore "inherently anticipated" Form 2, invalidating the 1985 and 1987 patents (since double patenting is invalid). If the argument held, then Form 2 could be marketed as a generic in 1995 at the expiration of the 1978 patent. Since an additional seven years of exclusive marketing is highly profitable, Glaxo fought back.
In order to win the first "Glaxo, Inc. v. Novopharm, Ltd" case, Glaxo argued successfully that Form 1 could be produced according to the 1978 patent procedure in a carefully quarantined environment, and that Novopharm had been producing Form 2 due to disappearing polymorphism. The organic chemist Jack Baldwin, acting as a witness for Glaxo, had two of his postdoctoral researchers produce Form 1 three times according to the 1978 patent procedure. Consequently, the court ruled that Form 2 was covered by the 1985 patent.
Subsequent to losing the case, Novopharm attempted to bring Form 1 to market, so Glaxo sued them again in the second "Glaxo, Inc. v. Novopharm, Ltd" case. Glaxo argued that Novopharm could not market generics containing even trace amounts of Form 2. In particular, that means any generic Zantac containing an infrared spectrogram peak at 1045 formula_1 infringes their 1985 patent. However, during the prosecution of the first case, Glaxo had already accepted that the 1985 patent covered only products containing chemicals with a specific, 29-peak infrared (IR) spectrum. This was intended to avoid double patenting—Glaxo had to emphasize the unique aspects of Form 2 to distinguish it from the invention described in the 1978 patent. Since Glaxo could not establish the presence of the 29-peak IR spectrogram in Novopharm's product, the court ruled in favor of Novopharm.
<templatestyles src="Template:Blockquote/styles.css" />... the claims at issue all identify Form 2 RHCl by reference to a 29-peak IR spectrum... proof of infringement requires proof that the drug alleged to infringe would exhibit all of those peaks, not a single, potentially meaningless peak.
In fiction.
<templatestyles src="Template:Blockquote/styles.css" />The atoms had begun to stack and lock—to freeze—in a different fashion. The liquid that was crystallizing hadn't changed, but the crystals it was forming were, as far as industrial applications went, pure junk... The seed, which had come from God-only-knows where, taught the atoms the novel way in which to stack and lock, to crystallize, to freeze.
In the 1963 novel "Cat's Cradle," by Kurt Vonnegut, the narrator learns about Ice-nine, an alternative structure of water that is solid at room temperature and acts as a seed crystal upon contact with ordinary liquid water, causing that liquid water to instantly freeze and transform into more Ice-nine. Later in the book, a character frozen in Ice-nine falls into the sea. Instantly, all the water in the world's seas, rivers, and groundwater transforms into solid Ice-nine, leading to a climactic doomsday scenario.
Ice-nine has been described as a fictional parallel—a seed crystal triggering a chain reaction akin to the disappearing polymorph phenomenon.
In an indirect homage to "Cat's Cradle", Ice-nine and its doomsday scenario are also mentioned in the 2009 video game "". A character additionally describes a rumor that glycerin was not observed to crystallize until 1920, when a batch spontaneously crystallized independently of a seed crystal. From that incident forward, all glycerin globally was observed to crystallize when cooled to under 64 degrees Fahrenheit, regardless of whether it had come into contact with a seed crystal or not.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "10^{-15}"
},
{
"math_id": 1,
"text": "cm^{-1}"
}
] |
https://en.wikipedia.org/wiki?curid=62448004
|
6245046
|
Spectral index
|
In astronomy, the spectral index of a source is a measure of the dependence of radiative flux density (that is, radiative flux per unit of frequency) on frequency. Given frequency formula_0 in Hz and radiative flux density formula_1 in Jy, the spectral index formula_2 is given implicitly by
formula_3
Note that if flux does not follow a power law in frequency, the spectral index itself is a function of frequency. Rearranging the above, we see that the spectral index is given by
formula_4
Clearly the power law can only apply over a certain range of frequency because otherwise the integral over all frequencies would be infinite.
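As a worked example with made-up numbers, a two-point spectral index between two measured frequencies follows directly from the power law: the index equals the logarithm of the flux-density ratio divided by the logarithm of the frequency ratio.
```python
import numpy as np

# Two-point estimate of the spectral index between two frequencies:
# since S_nu is proportional to nu**alpha, alpha = log(S2/S1) / log(nu2/nu1).
def two_point_spectral_index(nu1, s1, nu2, s2):
    return np.log(s2 / s1) / np.log(nu2 / nu1)

# Example with made-up flux densities: 10 Jy at 1.4 GHz and 5 Jy at 4.8 GHz
# give a steep negative index of the kind typical of synchrotron sources.
alpha = two_point_spectral_index(1.4e9, 10.0, 4.8e9, 5.0)
print(round(alpha, 2))  # about -0.56
```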
Spectral index is also sometimes defined in terms of wavelength formula_5. In this case, the spectral index formula_2 is given implicitly by
formula_6
and at a given wavelength, the spectral index may be calculated by taking the derivative
formula_7
The spectral index defined using formula_1, which we may call formula_8 differs from the index formula_9 defined using formula_10 The total flux between two frequencies or wavelengths is
formula_11
which implies that
formula_12
The opposite sign convention is sometimes employed, in which the spectral index is given by
formula_13
The spectral index of a source can hint at its properties. For example, using the positive sign convention, the spectral index of the emission from an optically thin thermal plasma is -0.1, whereas for an optically thick plasma it is 2. Therefore, a spectral index of -0.1 to 2 at radio frequencies often indicates thermal emission, while a steep negative spectral index typically indicates synchrotron emission. It is worth noting that the observed emission can be affected by several absorption processes that affect the low-frequency emission the most; the reduction in the observed emission at low frequencies might result in a positive spectral index even if the intrinsic emission has a negative index. Therefore, it is not straightforward to associate positive spectral indices with thermal emission.
Spectral index of thermal emission.
At radio frequencies (i.e. in the low-frequency, long-wavelength limit), where the Rayleigh–Jeans law is a good approximation to the spectrum of thermal radiation, intensity is given by
formula_14
Taking the logarithm of each side and taking the partial derivative with respect to formula_15 yields
formula_16
Using the positive sign convention, the spectral index of thermal radiation is thus formula_17 in the Rayleigh–Jeans regime. The spectral index departs from this value at shorter wavelengths, for which the Rayleigh–Jeans law becomes an increasingly inaccurate approximation, tending towards zero as intensity reaches a peak at a frequency given by Wien's displacement law. Because of the simple temperature-dependence of radiative flux in the Rayleigh–Jeans regime, the "radio spectral index" is defined implicitly by
formula_18
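As a hedged numerical check of this result, the sketch below evaluates the full Planck function at two radio frequencies for an arbitrarily chosen 10,000 K source and confirms that the logarithmic slope is very close to 2.
```python
import numpy as np

# Numerical check: in the Rayleigh-Jeans regime the logarithmic slope of the
# Planck function with respect to frequency is close to 2.
h, k_B, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8

def planck(nu, T):
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

T = 1e4                      # 10,000 K thermal source (arbitrary choice)
nu1, nu2 = 1.0e9, 2.0e9      # radio frequencies, far below the spectral peak
slope = np.log(planck(nu2, T) / planck(nu1, T)) / np.log(nu2 / nu1)
print(round(slope, 4))       # very close to 2, the Rayleigh-Jeans value
```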
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\nu"
},
{
"math_id": 1,
"text": "S_\\nu"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "S_\\nu\\propto\\nu^\\alpha."
},
{
"math_id": 4,
"text": "\\alpha \\! \\left( \\nu \\right) = \\frac{\\partial \\log S_\\nu \\! \\left( \\nu \\right)}{\\partial \\log \\nu}."
},
{
"math_id": 5,
"text": "\\lambda"
},
{
"math_id": 6,
"text": "S_\\lambda\\propto\\lambda^\\alpha,"
},
{
"math_id": 7,
"text": "\\alpha \\! \\left( \\lambda \\right) =\\frac{\\partial \\log S_\\lambda \\! \\left( \\lambda \\right)}{\\partial \\log \\lambda}."
},
{
"math_id": 8,
"text": "\\alpha_\\nu,"
},
{
"math_id": 9,
"text": "\\alpha_\\lambda"
},
{
"math_id": 10,
"text": "S_\\lambda."
},
{
"math_id": 11,
"text": "S = C_1\\left(\\nu_2^{\\alpha_\\nu+1}-\\nu_1^{\\alpha_\\nu+1}\\right) = C_2\\left(\\lambda_2^{\\alpha_\\lambda+1} - \\lambda_1^{\\alpha_\\lambda+1}\\right) = c^{\\alpha_\\lambda+1} C_2\\left(\\nu_2^{-\\alpha_\\lambda-1}-\\nu_1^{-\\alpha_\\lambda-1}\\right)"
},
{
"math_id": 12,
"text": "\\alpha_\\lambda=-\\alpha_\\nu-2."
},
{
"math_id": 13,
"text": "S_\\nu\\propto\\nu^{-\\alpha}."
},
{
"math_id": 14,
"text": "B_\\nu(T) \\simeq \\frac{2 \\nu^2 k T}{c^2}."
},
{
"math_id": 15,
"text": "\\log \\, \\nu"
},
{
"math_id": 16,
"text": "\\frac{\\partial \\log B_\\nu(T)}{\\partial \\log \\nu} \\simeq 2."
},
{
"math_id": 17,
"text": "\\alpha \\simeq 2"
},
{
"math_id": 18,
"text": "S \\propto \\nu^{\\alpha} T."
}
] |
https://en.wikipedia.org/wiki?curid=6245046
|
6245532
|
Expander walk sampling
|
In the mathematical discipline of graph theory, the expander walk sampling theorem intuitively states that sampling vertices in an expander graph by doing a relatively short random walk can simulate sampling the vertices independently from a uniform distribution.
The earliest version of this theorem is due to , and the more general version is typically attributed to .
Statement.
Let formula_0 be an n-vertex expander graph with positively weighted edges, and let formula_1. Let formula_2 denote the stochastic matrix of the graph, and let formula_3 be the second largest eigenvalue of formula_4. Let formula_5 denote the vertices encountered in a formula_6-step random walk on formula_7 starting at vertex formula_8, and let formula_9 formula_10, where formula_11 In other words, formula_12 is the limiting fraction of time the walk spends in formula_1 as formula_13 formula_14.
The theorem states that for a weighted graph formula_0 and a random walk formula_5 where formula_8 is chosen by an initial distribution formula_15, for all formula_16, we have the following bound:
formula_17
where formula_18 is a constant depending on formula_19 and formula_20.
The theorem gives a bound on the rate of convergence to formula_21 with respect to the length of the random walk, hence giving a more efficient method to estimate formula_21 compared to sampling the vertices of formula_7 independently.
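The following Python sketch illustrates the statement by estimating formula_21 from a single random walk on a small, heuristically expanding circulant graph and comparing the estimate with the exact value; the graph, the vertex set, and all parameters are arbitrary choices for the example.
```python
import numpy as np

# Estimate pi(A) with one short random walk on a 6-regular circulant graph
# (used here as a stand-in for an expander); since the graph is regular, the
# stationary distribution is uniform and pi(A) = |A| / n exactly.
rng = np.random.default_rng(0)
n = 10_007
offsets = [1, -1, 34, -34, 2_000, -2_000]   # symmetric step set => 6-regular graph
A = set(range(n // 10))                     # target vertex set, pi(A) ~ 0.1

k = 20_000                                  # length of the random walk
v = int(rng.integers(n))                    # starting vertex
hits = 0
for _ in range(k):
    hits += v in A
    v = (v + offsets[rng.integers(len(offsets))]) % n   # move to a random neighbour

print(hits / k, len(A) / n)                 # walk estimate vs exact value
```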
Proof.
In order to prove the theorem, we provide a few definitions followed by three lemmas.
Let formula_22 be the weight of the edge formula_23 and let formula_24 Define formula_25. Let formula_26 be the matrix with entries formula_27, and let formula_28.
Let formula_29 and formula_30. Let formula_31 where formula_4 is the stochastic matrix, formula_32 and formula_33. Then:
formula_34
where formula_35. As formula_36 and formula_37 are symmetric, they have real eigenvalues. Therefore, as the eigenvalues of formula_37 and formula_38 are equal, the eigenvalues of formula_39 are real. Let formula_40 and formula_41 be the first and second largest eigenvalues of formula_39 respectively.
For convenience of notation, let formula_42, formula_43, formula_44, and let formula_45 be the all-1 vector.
Lemma 1
formula_46
Proof:
By Markov's inequality,
formula_47
where formula_48 denotes the expectation with formula_49 chosen according to the probability distribution formula_50. Since this can be computed by summing over all possible trajectories formula_51, we have:
formula_52
Combining the two results proves the lemma.
Lemma 2
For formula_53,
formula_54
Proof:
As eigenvalues of formula_38 and formula_37 are equal,
formula_55
Lemma 3
If formula_56 is a real number such that formula_57,
formula_58
Proof summary:
We Taylor expand formula_59 about point formula_60 to get:
formula_61
where formula_62 are the first and second derivatives of formula_63 at formula_64. We show that formula_65 We then prove that (i) formula_66 by matrix manipulation, and then prove that (ii) formula_67 using (i) and Cauchy's estimate from complex analysis.
The results combine to show that
formula_68
A line-by-line proof can be found in Gilman (1998).
Proof of theorem
Combining Lemma 2 and Lemma 3, we get that
formula_69
Interpreting the exponent on the right hand side of the inequality as a quadratic in formula_56 and minimising the expression, we see that
formula_70
A similar bound
formula_71
holds, hence setting formula_72 gives the desired result.
Uses.
This theorem is useful in randomness reduction in the study of derandomization. Sampling from an expander walk is an example of a randomness-efficient sampler. Note that the number of bits used in sampling formula_73 independent samples from formula_74 is formula_75, whereas if we sample from an infinite family of constant-degree expanders this costs only formula_76. Such families exist and are efficiently constructible, e.g. the Ramanujan graphs of Lubotzky-Phillips-Sarnak.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "G=(V,E)"
},
{
"math_id": 1,
"text": "A\\subset V"
},
{
"math_id": 2,
"text": "P\n"
},
{
"math_id": 3,
"text": "\\lambda_2"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "y_0, y_1, \\ldots, y_{k-1}"
},
{
"math_id": 6,
"text": "(k-1)"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "y_0"
},
{
"math_id": 9,
"text": "\\pi (A):="
},
{
"math_id": 10,
"text": "\\lim_{k\\rightarrow\\infty} \\frac{1}{k} \\sum_{i = 0}^{k-1} \\mathbf{1}_A(y_i)"
},
{
"math_id": 11,
"text": "\\mathbf{1}_A(y)\\begin{cases} 1, & \\text{if }y \\in A \\\\ 0, & \\text{otherwise }\\end{cases}"
},
{
"math_id": 12,
"text": "\\pi (A)"
},
{
"math_id": 13,
"text": "k \\rightarrow\n"
},
{
"math_id": 14,
"text": "\\infty\n\n"
},
{
"math_id": 15,
"text": "\\mathbf{q}"
},
{
"math_id": 16,
"text": "\\gamma > 0"
},
{
"math_id": 17,
"text": "\\Pr\\left[\\bigg| \\frac{1}{k} \\sum_{i=0}^{k-1} \\mathbf{1}_A(y_i) - \\pi(A)\\bigg| \\geq \\gamma\\right] \\leq C e^{-\\frac{1}{20} (\\gamma^2 (1-\\lambda_2) k)}."
},
{
"math_id": 18,
"text": "C"
},
{
"math_id": 19,
"text": "\\mathbf{q}, G "
},
{
"math_id": 20,
"text": "A\n"
},
{
"math_id": 21,
"text": "\\pi(A)"
},
{
"math_id": 22,
"text": "\\it{w}_{xy}"
},
{
"math_id": 23,
"text": "xy\\in E(G)"
},
{
"math_id": 24,
"text": "\\it{w}_x=\\sum_{y:xy\\in E(G)}\\it{w}_{xy}."
},
{
"math_id": 25,
"text": "\\pi(x):=\\it{w}_x/\\sum_{y\\in V} \\it{w}_y"
},
{
"math_id": 26,
"text": "\\frac{\\mathbf{q}}{\\sqrt\\pi}"
},
{
"math_id": 27,
"text": "\\frac{\\mathbf{q}(x)}{\\sqrt{\\pi(x)}}"
},
{
"math_id": 28,
"text": "N_{\\pi,\\mathbf{q}}=||\\frac{\\mathbf{q}}{\\sqrt\\pi}||_{2}"
},
{
"math_id": 29,
"text": "D=\\text{diag}(1/\\it{w}_i )"
},
{
"math_id": 30,
"text": "M=(\\it{w}_{ij})"
},
{
"math_id": 31,
"text": "P(r)=PE_r"
},
{
"math_id": 32,
"text": "E_r=\\text{diag}(e^{r\\mathbf{1}_A})"
},
{
"math_id": 33,
"text": "r \\ge 0\n"
},
{
"math_id": 34,
"text": "P = \\sqrt{D}S\\sqrt{D^{-1}} \\qquad \\text{and} \\qquad P(r) = \\sqrt{DE_r^{-1}}S(r)\\sqrt{E_rD^{-1}}"
},
{
"math_id": 35,
"text": "S:=\\sqrt{D}M\\sqrt{D} \\text{ and } S(r) := \\sqrt{DE_r}M\\sqrt{DE_r}"
},
{
"math_id": 36,
"text": "S"
},
{
"math_id": 37,
"text": "S(r)"
},
{
"math_id": 38,
"text": "P(r)"
},
{
"math_id": 39,
"text": "P(r)"
},
{
"math_id": 40,
"text": "\\lambda(r)"
},
{
"math_id": 41,
"text": "\\lambda_2(r)"
},
{
"math_id": 42,
"text": "t_k=\\frac{1}{k} \\sum_{i=0}^{k-1} \\mathbf{1}_A(y_i)"
},
{
"math_id": 43,
"text": "\\epsilon=\\lambda-\\lambda_2\n"
},
{
"math_id": 44,
"text": "\\epsilon_r=\\lambda(r)-\\lambda_2(r)\n"
},
{
"math_id": 45,
"text": "\\mathbf{1}"
},
{
"math_id": 46,
"text": "\\Pr\\left[t_k- \\pi(A) \\ge \\gamma\\right] \\leq e^{-rk(\\pi(A)+\\gamma)+k\\log\\lambda(r)}(\\mathbf{q}P(r)^k\\mathbf{1})/\\lambda(r)^k"
},
{
"math_id": 47,
"text": "\\begin{alignat}{2}\n\\Pr\\left[t_k \\ge \\pi(A) +\\gamma\\right] =\\Pr[e^{rt_k}\\ge e^{rk(\\pi(A)+\\gamma)}]\\leq e^{-rk(\\pi(A)+\\gamma)}E_\\mathbf{q}e^{rt_k}\n\\end{alignat}"
},
{
"math_id": 48,
"text": "E_\\mathbf{q}"
},
{
"math_id": 49,
"text": "x_0"
},
{
"math_id": 50,
"text": "\\mathbf{q}"
},
{
"math_id": 51,
"text": "x_0,x_1,.. .,x_k"
},
{
"math_id": 52,
"text": "E_{\\mathbf{q}}e^{rt}=\\sum_{x_1,x_2,...,x_k}e^{rt}\\mathbb{q}(x_0)\\Pi_{i=1}^kp_{x_{i-1}x_i}=\\mathbf{q}P(r)^k\\mathbf{1}"
},
{
"math_id": 53,
"text": " 0\\le r \\le 1"
},
{
"math_id": 54,
"text": "(\\mathbf{q}P(r)^k\\mathbf{1})/\\lambda(r)^k\\le (1+r)N_{\\pi,\\mathbf{q}}"
},
{
"math_id": 55,
"text": "\\begin{align}\n(\\mathbf{q}P(r)^k\\mathbf{1})/\\lambda(r)^k&= (\\mathbf{q}P\\sqrt{DE_r^{-1}}S(r)^k \n\\sqrt{D^{-1}E_r}\\mathbf{1})/\\lambda(r)^k\\\\ &\\le e^{r/2}||\\frac{\\mathbf{q}}{\\sqrt{\\pi}}||_2||S(r)^k||_2||\\sqrt{\\pi}||_2/\\lambda(r)^k\\\\\n&\\le e^{r/2}N_{\\pi,\\mathbf{q}}\\\\\n&\\le (1+r)N_{\\pi,\\mathbf{q}}\\qquad \\square\n\\end{align}"
},
{
"math_id": 56,
"text": "r"
},
{
"math_id": 57,
"text": "0\\le e^r-1\\le \\epsilon/4"
},
{
"math_id": 58,
"text": "\\log\\lambda(r)\\le r\\pi(A)+5r^2/\\epsilon"
},
{
"math_id": 59,
"text": "\\log \\lambda(y)\n"
},
{
"math_id": 60,
"text": "r=z\n"
},
{
"math_id": 61,
"text": "\\log\\lambda(r)= \\log\\lambda(z)+m_z(r-z)+(r-z)^2\\int_0^1 (1-t)V_{z+(r-z)t}dt"
},
{
"math_id": 62,
"text": "m_x \\text{ and } V_x"
},
{
"math_id": 63,
"text": "\\log \\lambda(r)"
},
{
"math_id": 64,
"text": "r=x"
},
{
"math_id": 65,
"text": "m_0=\\lim_{k \\to \\infty}t_k=\\pi(A)."
},
{
"math_id": 66,
"text": "\\epsilon_r\\ge 3\\epsilon/4"
},
{
"math_id": 67,
"text": "V_r\\le 10/\\epsilon"
},
{
"math_id": 68,
"text": "\\begin{align}\n\\log\\lambda(r)= \\log\\lambda(0)+m_0r+r^2\\int_0^1 (1-t)V_{rt}dt\n\\le r\\pi(A)+5r^2/\\epsilon\n\\end{align}"
},
{
"math_id": 69,
"text": "\\Pr[t_k-\\pi(A)\\ge \\gamma]\\le(1+r)N_{\\pi,\\mathbf{q}}e^{-k(r\\gamma-5r^2/\\epsilon)}"
},
{
"math_id": 70,
"text": "\\Pr[t_k-\\pi(A)\\ge \\gamma]\\le(1+\\gamma\\epsilon/10)N_{\\pi,\\mathbf{q}}e^{-k\\gamma^2\\epsilon/20}"
},
{
"math_id": 71,
"text": "\\Pr[t_k-\\pi(A)\\le - \\gamma]\\le (1+\\gamma\\epsilon/10)N_{\\pi,\\mathbf{q}}e^{-k\\gamma^2\\epsilon/20}"
},
{
"math_id": 72,
"text": "C=2(1+\\gamma\\epsilon/10)N_{\\pi,\\mathbf{q}}"
},
{
"math_id": 73,
"text": "k"
},
{
"math_id": 74,
"text": "f"
},
{
"math_id": 75,
"text": "k \\log n"
},
{
"math_id": 76,
"text": "\\log n + O(k)"
}
] |
https://en.wikipedia.org/wiki?curid=6245532
|
62455443
|
Roth's theorem on arithmetic progressions
|
On the existence of arithmetic progressions in subsets of the natural numbers
Roth's theorem on arithmetic progressions is a result in additive combinatorics concerning the existence of arithmetic progressions in subsets of the natural numbers. It was first proven by Klaus Roth in 1953. Roth's theorem is a special case of Szemerédi's theorem for the case formula_0.
Statement.
A subset "A" of the natural numbers is said to have positive upper density if
formula_1.
Roth's theorem on arithmetic progressions (infinite version): A subset of the natural numbers with positive upper density contains a 3-term arithmetic progression.
An alternate, more quantitative formulation of the theorem is concerned with the maximum size of a Salem–Spencer set, which is a subset of formula_2. Let formula_3 be the size of the largest subset of formula_4 which contains no 3-term arithmetic progression.
Roth's theorem on arithmetic progressions (finitary version): formula_5.
Improving upper and lower bounds on formula_3 is still an open research problem.
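As a hedged illustration of the quantity formula_3, the following Python sketch computes it by brute force for small formula_4; all names are arbitrary, and this approach is only feasible for very small inputs.
```python
from itertools import combinations

def has_3ap(s):
    # True if the set s contains a non-trivial 3-term arithmetic progression:
    # for each pair a < b, check whether the third term 2b - a is also present.
    s = set(s)
    return any(2 * b - a in s for a, b in combinations(sorted(s), 2))

def r3(n):
    # Size of the largest 3-AP-free subset of {1, ..., n}, by brute force.
    elems = list(range(1, n + 1))
    for size in range(n, 0, -1):
        if any(not has_3ap(c) for c in combinations(elems, size)):
            return size
    return 0

print([r3(n) for n in range(1, 13)])   # 1, 2, 2, 3, 4, 4, 4, 4, 5, 5, 6, 6
```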
History.
The first result in this direction was Van der Waerden's theorem in 1927, which states that for sufficiently large N, coloring the integers formula_6 with formula_7 colors will result in a monochromatic formula_8-term arithmetic progression.
Later on in 1936 Erdős and Turán conjectured a much stronger result that any subset of the integers with positive density contains arbitrarily long arithmetic progressions. In 1942, Raphaël Salem and Donald C. Spencer provided a construction of a 3-AP-free set (i.e. a set with no 3-term arithmetic progressions) of size formula_9, disproving an additional conjecture of Erdős and Turán that formula_10 for some formula_11.
In 1953, Roth partially resolved the initial conjecture by proving they must contain an arithmetic progression of length 3 using Fourier analytic methods. Eventually, in 1975, Szemerédi proved Szemerédi's theorem using combinatorial techniques, resolving the original conjecture in full.
Proof techniques.
The original proof given by Roth used Fourier analytic methods. Later on another proof was given using Szemerédi's regularity lemma.
Proof sketch via Fourier analysis.
In 1953, Roth used Fourier analysis to prove an upper bound of formula_12. Below is a sketch of this proof.
Define the Fourier transform of a function formula_13 to be the function formula_14 satisfying
formula_15,
where formula_16.
Let formula_17 be a 3-AP-free subset of formula_18. The proof proceeds in 3 steps.
Step 1.
For functions, formula_20 define
formula_21
Counting Lemma: Let formula_22 satisfy formula_23. Define formula_24. Then formula_25.
The counting lemma tells us that if the Fourier transforms of formula_26 and formula_27 are "close", then the number of 3-term arithmetic progressions between the two should also be "close". Let formula_28 be the density of formula_17. Define the functions formula_29 (i.e. the indicator function of formula_17) and formula_30. Step 1 can then be deduced by applying the Counting Lemma to formula_26 and formula_27, which tells us that there exists some formula_31 such that
formula_32.
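To make the counting quantity concrete: for the indicator function of a finite set, formula_24 simply counts the pairs (x, y) with x, x + y, x + 2y all in the set, including the trivial progressions with y = 0 and both orientations of each nontrivial progression. A small Python sketch of this count (an illustrative addition only):

def lambda3(A):
    # Lambda_3(1_A) = number of pairs (x, y) with x, x + y, x + 2y all in A;
    # every nonzero term has x and z = x + 2y in A with z - x even
    A = set(A)
    total = 0
    for x in A:
        for z in A:            # z stands for x + 2y
            if (z - x) % 2 == 0 and x + (z - x) // 2 in A:
                total += 1
    return total

print(lambda3({1, 2, 4, 5}))   # 4: only the trivial progressions (y = 0)
print(lambda3({1, 2, 3}))      # 5: three trivial ones plus (1,2,3) and (3,2,1)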
Step 2.
Given the formula_31 from step 1, we first show that it's possible to split up formula_33 into relatively large subprogressions such that the character formula_34 is roughly constant on each subprogression.
Lemma 1: Let formula_35. Assume that formula_36 for a universal constant formula_37. Then it is possible to partition formula_33 into arithmetic progressions formula_38 with length formula_39 such that formula_40 for all formula_41.
Next, we apply Lemma 1 to obtain a partition into subprogressions. We then use the fact that formula_31 produced a large coefficient in step 1 to show that one of these subprogressions must have a density increment:
Lemma 2: Let formula_17 be a 3-AP-free subset of formula_33, with formula_42 and formula_43. Then, there exists a sub progression formula_44 such that formula_45 and formula_46.
Step 3.
We now iterate step 2. Let formula_47 be the density of formula_17 after the formula_48th iteration. We have that formula_49 and formula_50 First, observe that formula_51 doubles (i.e. we reach formula_52 such that formula_53) after at most formula_54 steps. We double formula_51 again (i.e. we reach formula_55) after at most formula_56 steps. Since formula_57, this process must terminate after at most formula_58 steps.
Let formula_59 be the size of our current progression after formula_48 iterations. By Lemma 2, we can always continue the process whenever formula_60 and thus when the process terminates we have that formula_61 Also, note that when we pass to a subprogression, the size of our set decreases by a cube root. Therefore
formula_62
Therefore formula_63 so formula_64 as desired. formula_65
Unfortunately, this technique does not generalize directly to larger arithmetic progressions to prove Szemerédi's theorem. An extension of this proof eluded mathematicians for decades until 1998, when Timothy Gowers developed the field of higher-order Fourier analysis specifically to generalize the above proof to prove Szemerédi's theorem.
Proof sketch via graph regularity.
Below is an outline of a proof using the Szemerédi regularity lemma.
Let formula_66 be a graph and formula_67. We call formula_68 an formula_69-regular pair if for all formula_70 with formula_71, one has formula_72.
A partition formula_73 of formula_74 is an formula_69-regular partition if
formula_75.
Then the Szemerédi regularity lemma says that for every formula_76, there exists a constant formula_77 such that every graph has an formula_69-regular partition into at most formula_77 parts.
We can also prove that triangles between formula_69-regular sets of vertices must come along with many other triangles. This is known as the triangle counting lemma.
Triangle Counting Lemma: Let formula_66 be a graph and formula_78 be subsets of the vertices of formula_66 such that formula_79 are all formula_69-regular pairs for some formula_80. Let formula_81 denote the edge densities formula_82 respectively. If formula_83, then the number of triples formula_84 such that formula_85 form a triangle in formula_66 is at least
formula_86.
Using the triangle counting lemma and the Szemerédi regularity lemma, we can prove the triangle removal lemma, a special case of the graph removal lemma.
Triangle Removal Lemma: For all formula_80, there exists formula_11 such that any graph on formula_87 vertices with less than or equal to formula_88 triangles can be made triangle-free by removing at most formula_89 edges.
This has an interesting corollary pertaining to graphs formula_66 on formula_90 vertices where every edge of formula_66 lies in a unique triangle. In particular, all such graphs must have formula_91 edges.
Take a set formula_17 with no 3-term arithmetic progressions. Now, construct a tripartite graph formula_66 whose parts formula_78 are all copies of formula_92. Connect a vertex formula_93 to a vertex formula_94 if formula_95. Similarly, connect formula_96 with formula_94 if formula_97. Finally, connect formula_93 with formula_96 if formula_98.
This construction is set up so that if formula_85 form a triangle, then we get elements formula_99 that all belong to formula_17. These numbers form an arithmetic progression in the listed order. The assumption on formula_17 then tells us this progression must be trivial: the elements listed above are all equal. But this condition is equivalent to the assertion that formula_85 is an arithmetic progression in formula_92. Consequently, every edge of formula_66 lies in exactly one triangle. The desired conclusion follows. formula_65
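The key property of this construction — when formula_17 is 3-AP-free, every edge lies in exactly one triangle — can be checked directly for small examples. The following Python sketch (an illustrative addition; it checks the X–Y edges, the other two cases being symmetric):

from itertools import product

def xy_edge_triangle_counts(A, N):
    # Tripartite construction on three copies of Z_M, M = 2N + 1, built from A ⊆ {1, ..., N}
    M = 2 * N + 1
    inv2 = (M + 1) // 2                     # inverse of 2 modulo the odd number M
    A = {a % M for a in A}
    counts = {}
    for x, y in product(range(M), repeat=2):
        if (y - x) % M in A:                # x in X is joined to y in Y
            counts[(x, y)] = sum(
                1 for z in range(M)
                if (z - y) % M in A and ((z - x) * inv2) % M in A
            )
    return counts

counts = xy_edge_triangle_counts({1, 2, 4, 5}, N=5)
print(all(c == 1 for c in counts.values()))   # True: each X-Y edge lies in exactly one triangle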
Extensions and generalizations.
Szemerédi's theorem resolved the original conjecture and generalized Roth's theorem to arithmetic progressions of arbitrary length. Since then it has been extended in multiple fashions to create new and interesting results.
Furstenberg and Katznelson used ergodic theory to prove a multidimensional version and Leibman and Bergelson extended it to polynomial progressions as well. Most recently, Green and Tao proved the Green–Tao theorem which says that the prime numbers contain arbitrarily long arithmetic progressions. Since the prime numbers are a subset of density 0, they introduced a "relative" Szemerédi theorem which applies to subsets with density 0 that satisfy certain pseudorandomness conditions. Later on Conlon, Fox, and Zhao strengthened this theorem by weakening the necessary pseudorandomness condition. In 2020, Bloom and Sisask proved that any set formula_17 such that formula_100 diverges must contain arithmetic progressions of length 3; this is the first non-trivial case of another conjecture of Erdős postulating that any such set must in fact contain arbitrarily long arithmetic progressions.
Improving bounds.
There has also been work done on improving the bound in Roth's theorem. The bound from the original proof of Roth's theorem showed that
formula_101
for some constant formula_102. Over the years this bound has been continually lowered by Szemerédi, Heath-Brown, Bourgain, and Sanders. As of July 2020, the best bound was due to Bloom and Sisask, who showed the existence of an absolute constant c>0 such that
formula_103
In February 2023 a preprint (later published) by Kelley and Meka gave a new bound of:
formula_104.
Four days later, Bloom and Sisask published a preprint giving an exposition of the result (later published), simplifying the argument and yielding some additional applications. Several months later, Bloom and Sisask obtained a further improvement to formula_105, and stated (without proof) that their techniques can be used to show formula_106.
There has also been work done on the other end, constructing the largest set with no three-term arithmetic progressions. The best construction has barely been improved since 1946 when Behrend improved on the initial construction by Salem and Spencer and proved
formula_107.
Due to no improvements in over 70 years, it is conjectured that Behrend's set is asymptotically very close in size to the largest possible set with no three-term progressions. If correct, the Kelley-Meka bound will prove this conjecture.
Roth's theorem in finite fields.
As a variation, we can consider the analogous problem over finite fields. Consider the finite field formula_108, and let formula_109 be the size of the largest subset of formula_108 which contains no 3-term arithmetic progression. This problem is actually equivalent to the cap set problem, which asks for the largest subset of formula_108 such that no 3 points lie on a line. The cap set problem can be seen as a generalization of the card game Set.
In 1982, Brown and Buhler were the first to show that formula_110 In 1995, Roy Meshulam used a technique similar to the Fourier-analytic proof of Roth's theorem to show that formula_111 This bound was improved to formula_112 in 2012 by Bateman and Katz.
In 2016, Ernie Croot, Vsevolod Lev, Péter Pál Pach, Jordan Ellenberg and Dion Gijswijt developed a new technique based on the polynomial method to prove that formula_113.
The best known lower bound is formula_114, discovered in December 2023 by Google DeepMind researchers using a large language model (LLM).
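In formula_108, a 3-term arithmetic progression with nonzero common difference is the same thing as three distinct points summing to zero, which makes the cap-set condition easy to test. The following Python sketch (an added illustration) verifies a maximum cap in the case n = 2:

from itertools import combinations

def is_cap_set(S):
    # S is a set of points of F_3^n given as tuples with entries 0, 1, 2;
    # a 3-AP with nonzero difference is exactly a triple of distinct points summing to 0 mod 3
    for a, b, c in combinations(set(S), 3):
        if all((u + v + w) % 3 == 0 for u, v, w in zip(a, b, c)):
            return False
    return True

print(is_cap_set({(0, 0), (0, 1), (1, 0), (1, 1)}))   # True; 4 is the largest cap in F_3^2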
Roth's theorem with popular differences.
Another generalization of Roth's theorem shows that for positive density subsets, there not only exists a 3-term arithmetic progression, but that there exist many 3-APs all with the same common difference.
Roth's theorem with popular differences: For all formula_80, there exists some formula_115 such that for every formula_116 and formula_117 with formula_118 there exists some formula_119 such that formula_120
If formula_17 is chosen randomly from formula_121 then we would expect there to be formula_122 progressions for each value of formula_123. The popular differences theorem thus states that for each formula_19 with positive density, there is some formula_123 such that the number of 3-APs with common difference formula_123 is close to what we would expect.
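This heuristic is easy to test numerically. The sketch below (illustrative Python code with an arbitrary random set) compares the best common difference against the expected count formula_122:

import random
from itertools import product

def ap_count(A, y, n):
    # number of x with x, x + y, x + 2y all in A (arithmetic in F_3^n, componentwise mod 3)
    shift = lambda x, k: tuple((xi + k * yi) % 3 for xi, yi in zip(x, y))
    return sum(1 for x in A if shift(x, 1) in A and shift(x, 2) in A)

n = 4
points = list(product(range(3), repeat=n))
A = {p for p in points if random.random() < 0.5}          # random set of density about 1/2
alpha = len(A) / 3 ** n
best = max(ap_count(A, y, n) for y in points if any(y))   # best nonzero common difference
print(best, alpha ** 3 * 3 ** n)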
This theorem was first proven by Green in 2005, who gave a bound of formula_124 where formula_125 is the tower function. In 2019, Fox and Pham recently improved the bound to formula_126
A corresponding statement is also true in formula_127 for both 3-APs and 4-APs. However, the claim has been shown to be false for 5-APs.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k = 3"
},
{
"math_id": 1,
"text": "\\limsup_{n \\to \\infty}\\frac{|A\\cap \\{1, 2, 3, \\dotsc, n\\}|}{n} > 0"
},
{
"math_id": 2,
"text": " [N] = \\{1, \\dots, N\\}"
},
{
"math_id": 3,
"text": "r_3([N])"
},
{
"math_id": 4,
"text": " [N]"
},
{
"math_id": 5,
"text": "r_3([N]) = o(N)"
},
{
"math_id": 6,
"text": "1, \\dots, n"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "\\frac{N}{e^{O(\\log N/\\log \\log N)}}"
},
{
"math_id": 10,
"text": "r_3([N]) = N^{1 - \\delta}"
},
{
"math_id": 11,
"text": "\\delta > 0"
},
{
"math_id": 12,
"text": "r_3([N]) = O\\left(\\frac{N}{\\log \\log N}\\right)"
},
{
"math_id": 13,
"text": "f : \\mathbb{Z} \\rightarrow \\mathbb{C} "
},
{
"math_id": 14,
"text": "\\widehat{f}"
},
{
"math_id": 15,
"text": "\\widehat{f}(\\theta) = \\sum_{x \\in \\mathbb{Z}}f(x)e(-x\\theta)"
},
{
"math_id": 16,
"text": "e(t) = e^{2\\pi i t}"
},
{
"math_id": 17,
"text": "A"
},
{
"math_id": 18,
"text": "\\{1, \\dots, N\\}"
},
{
"math_id": 19,
"text": "|A|"
},
{
"math_id": 20,
"text": "f, g, h : \\mathbb{Z} \\rightarrow \\mathbb{C},"
},
{
"math_id": 21,
"text": "\\Lambda(f, g, h) = \\sum_{x, y \\in \\mathbb{Z}} f(x)g(x + y)h(x + 2y)"
},
{
"math_id": 22,
"text": "f, g : \\mathbb{Z} \\rightarrow \\mathbb{C}"
},
{
"math_id": 23,
"text": "\\sum_{n \\in \\mathbb{Z}}|f(n)|^2, \\sum_{n \\in \\mathbb{Z}}|g(n)|^2 \\le M"
},
{
"math_id": 24,
"text": "\\Lambda_3(f) = \\Lambda(f, f, f)"
},
{
"math_id": 25,
"text": "|\\Lambda_3(f) - \\Lambda_3(g)| \\le 3M\\|\\widehat{f - g}\\|_\\infty"
},
{
"math_id": 26,
"text": "f"
},
{
"math_id": 27,
"text": "g"
},
{
"math_id": 28,
"text": "\\alpha = |A|/N"
},
{
"math_id": 29,
"text": "f = \\mathbf{1}_{A}"
},
{
"math_id": 30,
"text": "g = \\alpha \\cdot \\mathbf{1}_{[N]}"
},
{
"math_id": 31,
"text": "\\theta"
},
{
"math_id": 32,
"text": "\\left|\\sum_{n=1}^N (1_A - \\alpha)(n)e(\\theta n) \\right| \\ge \\frac{\\alpha^2}{10}N"
},
{
"math_id": 33,
"text": "[N]"
},
{
"math_id": 34,
"text": "x \\mapsto e(x\\theta)"
},
{
"math_id": 35,
"text": "0 < \\eta < 1, \\theta \\in \\mathbb{R}"
},
{
"math_id": 36,
"text": "N > C\\eta^{-6}"
},
{
"math_id": 37,
"text": "C"
},
{
"math_id": 38,
"text": "P_i"
},
{
"math_id": 39,
"text": "N^{1/3} \\le |P_i| \\le 2N^{1/3}"
},
{
"math_id": 40,
"text": "\\sup_{x, y \\in P_i}|e(x\\theta) - e(y\\theta)| < \\eta"
},
{
"math_id": 41,
"text": "i"
},
{
"math_id": 42,
"text": "|A| = \\alpha N"
},
{
"math_id": 43,
"text": "N > C\\alpha^{-12}"
},
{
"math_id": 44,
"text": "P \\subset [N]"
},
{
"math_id": 45,
"text": "|P| \\ge N^{1/3}"
},
{
"math_id": 46,
"text": "|A \\cap P| \\ge (\\alpha + \\alpha^2/40)|P|"
},
{
"math_id": 47,
"text": "a_t"
},
{
"math_id": 48,
"text": "t"
},
{
"math_id": 49,
"text": "\\alpha_0 = \\alpha,"
},
{
"math_id": 50,
"text": "\\alpha_{t + 1} \\ge \\alpha + \\alpha^2/40."
},
{
"math_id": 51,
"text": "\\alpha"
},
{
"math_id": 52,
"text": "T"
},
{
"math_id": 53,
"text": "\\alpha_T \\ge 2\\alpha_0"
},
{
"math_id": 54,
"text": "40/\\alpha + 1"
},
{
"math_id": 55,
"text": "\\alpha_T \\ge 4\\alpha_0"
},
{
"math_id": 56,
"text": "20/\\alpha + 1"
},
{
"math_id": 57,
"text": "\\alpha \\le 1"
},
{
"math_id": 58,
"text": "O(1/\\alpha)"
},
{
"math_id": 59,
"text": "N_t"
},
{
"math_id": 60,
"text": "N_t \\ge C\\alpha_t^{-12},"
},
{
"math_id": 61,
"text": "N_t \\le C\\alpha_t^{-12} \\le C\\alpha^{-12}."
},
{
"math_id": 62,
"text": "N \\le N_t^{3^t} \\le (C\\alpha^{-12})^{3^{O(1/\\alpha)}} = e^{e^{O(1/\\alpha)}}."
},
{
"math_id": 63,
"text": "\\alpha = O(1/\\log \\log N),"
},
{
"math_id": 64,
"text": "|A| = O \\left(\\frac{N}{\\log \\log N}\\right),"
},
{
"math_id": 65,
"text": "\\blacksquare"
},
{
"math_id": 66,
"text": "G"
},
{
"math_id": 67,
"text": "X,Y\\subseteq V(G)"
},
{
"math_id": 68,
"text": "(X,Y)"
},
{
"math_id": 69,
"text": "\\epsilon"
},
{
"math_id": 70,
"text": "A\\subset X,B\\subset Y"
},
{
"math_id": 71,
"text": "|A|\\geq\\epsilon|X|,|B|\\geq\\epsilon|Y|"
},
{
"math_id": 72,
"text": "|d(A,B)-d(X,Y)|\\leq\\epsilon"
},
{
"math_id": 73,
"text": "\\mathcal{P}=\\{V_1,\\ldots,V_k\\}"
},
{
"math_id": 74,
"text": "V(G)"
},
{
"math_id": 75,
"text": "\\sum_{(i,j)\\in[k]^2, (V_i,V_j)\\text{ not }\\epsilon\\text{-regular}} |V_i||V_j|\\leq\\epsilon|V(G)|^2"
},
{
"math_id": 76,
"text": "\\epsilon>0"
},
{
"math_id": 77,
"text": "M"
},
{
"math_id": 78,
"text": "X, Y, Z"
},
{
"math_id": 79,
"text": "(X,Y), (Y,Z), (Z,X)"
},
{
"math_id": 80,
"text": "\\epsilon > 0"
},
{
"math_id": 81,
"text": "d_{XY}, d_{XZ}, d_{YZ}"
},
{
"math_id": 82,
"text": "d(X,Y), d(X,Z), d(Y,Z)"
},
{
"math_id": 83,
"text": "d_{XY}, d_{XZ}, d_{YZ} \\ge 2\\epsilon"
},
{
"math_id": 84,
"text": "(x,y,z)\\in X\\times Y\\times Z"
},
{
"math_id": 85,
"text": "x,y,z"
},
{
"math_id": 86,
"text": "(1-2\\epsilon)(d_{XY} - \\epsilon)(d_{XZ} - \\epsilon)(d_{YZ} - \\epsilon)\\cdot |X||Y||Z|"
},
{
"math_id": 87,
"text": "n"
},
{
"math_id": 88,
"text": "\\delta n^3"
},
{
"math_id": 89,
"text": "\\epsilon n^2"
},
{
"math_id": 90,
"text": "N"
},
{
"math_id": 91,
"text": "o(N^2)"
},
{
"math_id": 92,
"text": "\\mathbb{Z}/(2N+1)\\mathbb{Z}"
},
{
"math_id": 93,
"text": "x\\in X"
},
{
"math_id": 94,
"text": "y\\in Y"
},
{
"math_id": 95,
"text": "y-x\\in A"
},
{
"math_id": 96,
"text": "z\\in Z"
},
{
"math_id": 97,
"text": "z-y\\in A"
},
{
"math_id": 98,
"text": "(z-x)/2\\in A"
},
{
"math_id": 99,
"text": "y-x, \\frac{z-x}{2}, z-y"
},
{
"math_id": 100,
"text": "\\sum_{n \\in A} \\frac{1}{n}"
},
{
"math_id": 101,
"text": "r_3([N]) \\leq c\\cdot\\frac{N}{\\log\\log N}"
},
{
"math_id": 102,
"text": "c"
},
{
"math_id": 103,
"text": "r_3([N]) \\leq \\frac{N}{(\\log N)^{1+c}}. "
},
{
"math_id": 104,
"text": "r_3([N]) \\leq 2^{-\\Omega((\\log N)^{1/12})} \\cdot N"
},
{
"math_id": 105,
"text": "r_3([N]) \\leq \\exp(-c(\\log N)^{1/9})N"
},
{
"math_id": 106,
"text": "r_3([N]) \\leq \\exp(-c(\\log N)^{5/41})N"
},
{
"math_id": 107,
"text": "r_3([N]) \\geq N\\exp(-c\\sqrt{\\log N})"
},
{
"math_id": 108,
"text": " \\mathbb{F}_3^n "
},
{
"math_id": 109,
"text": " r_3(\\mathbb{F}_3^n) "
},
{
"math_id": 110,
"text": "r_3(\\mathbb{F}_3^n) = o(3^n)."
},
{
"math_id": 111,
"text": " r_3(\\mathbb{F}_3^n) = O\\left(\\frac{3^n}{n}\\right). "
},
{
"math_id": 112,
"text": "O(3^n/n^{1 + \\epsilon})"
},
{
"math_id": 113,
"text": "r_3(\\mathbb{F}_3^n) = O(2.756^n)"
},
{
"math_id": 114,
"text": "2.2202^{n}"
},
{
"math_id": 115,
"text": "n_0 = n_0(\\epsilon)"
},
{
"math_id": 116,
"text": "n > n_0"
},
{
"math_id": 117,
"text": "A \\subset \\mathbb{F}_3^n"
},
{
"math_id": 118,
"text": "|A| = \\alpha3^n,"
},
{
"math_id": 119,
"text": "y \\neq 0"
},
{
"math_id": 120,
"text": "|\\{x : x, x + y, x + 2y \\in A\\}| \\ge (\\alpha^3 - \\epsilon)3^n."
},
{
"math_id": 121,
"text": "\\mathbb{F}_3^n,"
},
{
"math_id": 122,
"text": "\\alpha^33^n"
},
{
"math_id": 123,
"text": "y"
},
{
"math_id": 124,
"text": "n_0 = \\text{tow}((1/\\epsilon)^{O(1)}),"
},
{
"math_id": 125,
"text": "\\text{tow}"
},
{
"math_id": 126,
"text": "n_0 = \\text{tow}(O(\\log\\frac{1}{\\epsilon}))."
},
{
"math_id": 127,
"text": "\\mathbb{Z}"
}
] |
https://en.wikipedia.org/wiki?curid=62455443
|
62456017
|
Heyde theorem
|
In the mathematical theory of probability, the Heyde theorem is the characterization theorem concerning the normal distribution (the Gaussian distribution) by the symmetry of one linear form given another. This theorem was proved by C. C. Heyde.
Formulation.
Let formula_0 be independent random variables. Let formula_1 be nonzero constants such that formula_2 for all formula_3. If the conditional distribution of the linear form formula_4 given formula_5 is symmetric then all random variables formula_6 have normal distributions (Gaussian distributions).
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\xi_j, j = 1, 2, \\ldots, n, n \\ge 2"
},
{
"math_id": 1,
"text": "\\alpha_j, \\beta_j"
},
{
"math_id": 2,
"text": "\\frac{\\beta_i}{\\alpha_i} + \\frac{\\beta_j}{\\alpha_j} \\ne 0"
},
{
"math_id": 3,
"text": "i \\ne j"
},
{
"math_id": 4,
"text": "L_2 = \\beta_1\\xi_1 + \\cdots + \\beta_n\\xi_n"
},
{
"math_id": 5,
"text": "L_1 = \\alpha_1\\xi_1 + \\cdots + \\alpha_n\\xi_n"
},
{
"math_id": 6,
"text": "\\xi_j"
}
] |
https://en.wikipedia.org/wiki?curid=62456017
|
62457740
|
Structural Ramsey theory
|
In mathematics, structural Ramsey theory is a categorical generalisation of Ramsey theory, rooted in the idea that many important results of Ramsey theory have "similar" logical structures. The key observation is noting that these Ramsey-type theorems can be expressed as the assertion that a certain category (or class of finite structures) has the Ramsey property (defined below).
Structural Ramsey theory began in the 1970s with the work of Nešetřil and Rödl, and is intimately connected to Fraïssé theory. It received some renewed interest in the mid-2000s due to the discovery of the Kechris–Pestov–Todorčević correspondence, which connected structural Ramsey theory to topological dynamics.
History.
Leeb is given credit for inventing the idea of a Ramsey property in the early 70s. The first publication of this idea appears to be Graham, Leeb and Rothschild's 1972 paper on the subject. Key development of these ideas was done by Nešetřil and Rödl in their series of 1977 and 1983 papers, including the famous Nešetřil–Rödl theorem. This result was reproved independently by Abramson and Harrington, and further generalised by Prömel. More recently, Mašulović and Solecki have done some pioneering work in the field.
Motivation.
This article will use the set theory convention that each natural number formula_0 can be considered as the set of all natural numbers less than it: i.e. formula_1. For any set formula_2, an "formula_3-colouring of formula_2" is an assignment of one of formula_3 labels to each element of formula_2. This can be represented as a function formula_4 mapping each element to its label in formula_5 (which this article will use), or equivalently as a partition of formula_6 into formula_3 pieces.
Here are some of the classic results of Ramsey theory:
These "Ramsey-type" theorems all have a similar idea: we fix two integers formula_9 and formula_18, and a set of colours formula_3. Then, we want to show there is some formula_16 large enough, such that for every formula_3-colouring of the "substructures" of size formula_9 inside formula_16, we can find a suitable "structure" formula_2 inside formula_16, of size formula_18, such that all the "substructures" formula_34 of formula_2 with size formula_9 have the same colour.
What types of structures are allowed depends on the theorem in question, and this turns out to be virtually the only difference between them. This idea of a "Ramsey-type theorem" leads itself to the more precise notion of the Ramsey property (below).
The Ramsey property.
Let formula_35 be a category. formula_35 has the "Ramsey property" if for every natural number formula_3, and all objects formula_36 in formula_35, there exists another object formula_37 in formula_35, such that for every formula_3-colouring formula_38, there exists a morphism formula_39 which is formula_13-monochromatic, i.e. the set
formula_40
is formula_13-monochromatic.
Often, formula_35 is taken to be a class of finite formula_41-structures over some fixed language formula_41, with embeddings as morphisms. In this case, instead of colouring morphisms, one can think of colouring "copies" of formula_2 in formula_37, and then finding a copy of formula_34 in formula_37, such that all copies of formula_2 in this copy of formula_34 are monochromatic. This may lend itself more intuitively to the earlier idea of a "Ramsey-type theorem".
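As a concrete special case (an added illustration, not part of the general theory): in the class of finite linearly ordered sets with order-embeddings, the Ramsey property specializes to the classical finite Ramsey theorem, which can be verified by brute force for very small parameters. A Python sketch, hopelessly slow beyond tiny cases:

from itertools import combinations, product

def witnesses_ramsey(N, k, m, r):
    # Does D = {0, ..., N-1} work for A = k-sets, B = m-sets and r colours?
    # i.e. does every r-colouring of the k-subsets of D contain an m-subset
    # all of whose k-subsets receive the same colour?
    ksubs = list(combinations(range(N), k))
    msubs = list(combinations(range(N), m))
    for colouring in product(range(r), repeat=len(ksubs)):
        colour = dict(zip(ksubs, colouring))
        if not any(len({colour[s] for s in combinations(ms, k)}) == 1 for ms in msubs):
            return False     # this colouring has no monochromatic copy of an m-set
    return True

print(witnesses_ramsey(6, k=2, m=3, r=2))   # True  (R(3,3) = 6); slow but feasible
print(witnesses_ramsey(5, k=2, m=3, r=2))   # False (e.g. 2-colour the edges of a 5-cycle)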
There is also a notion of a dual Ramsey property; formula_35 has the dual Ramsey property if its dual category formula_42 has the Ramsey property as above. More concretely, formula_35 has the "dual Ramsey property" if for every natural number formula_3, and all objects formula_36 in formula_35, there exists another object formula_37 in formula_35, such that for every formula_3-colouring formula_43, there exists a morphism formula_44 for which formula_45 is formula_13-monochromatic.
The Kechris–Pestov–Todorčević correspondence.
In 2005, Kechris, Pestov and Todorčević discovered the following correspondence (hereafter called the KPT correspondence) between structural Ramsey theory, Fraïssé theory, and ideas from topological dynamics.
Let formula_60 be a topological group. For a topological space formula_61, a "formula_60-flow" (denoted formula_62) is a continuous action of formula_60 on formula_61. We say that formula_60 is "extremely amenable" if any formula_60-flow formula_62 on a compact space formula_61 admits a fixed point formula_63, i.e. the stabiliser of formula_64 is formula_60 itself.
For a Fraïssé structure formula_65, its automorphism group formula_66 can be considered a topological group, given the topology of pointwise convergence, or equivalently, the subspace topology induced on formula_66 by the space formula_67 with the product topology. The following theorem illustrates the KPT correspondence:
Theorem (KPT). For a Fraïssé structure formula_65, the following are equivalent:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n \\in \\mathbb{N}"
},
{
"math_id": 1,
"text": "n = \\{ 0, 1, \\ldots, n-1 \\}"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "\\Delta: A \\to r"
},
{
"math_id": 5,
"text": "r = \\{ 0, 1, \\ldots, r-1 \\}"
},
{
"math_id": 6,
"text": "A = A_{0} \\sqcup \\cdots \\sqcup A_{r-1}"
},
{
"math_id": 7,
"text": "k \\leq m, r \\in \\mathbb{N}"
},
{
"math_id": 8,
"text": "\\Delta: [n]^{(k)} \\to r"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "A \\subseteq n"
},
{
"math_id": 11,
"text": "|A|=m"
},
{
"math_id": 12,
"text": "[A]^{(k)}"
},
{
"math_id": 13,
"text": "\\Delta"
},
{
"math_id": 14,
"text": "m, r \\in \\mathbb{N}"
},
{
"math_id": 15,
"text": "\\Delta: n \\to r"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "\\{ a, a+d, a+2d, \\ldots, a+(m-1)d \\} \\subseteq n"
},
{
"math_id": 18,
"text": "m"
},
{
"math_id": 19,
"text": "L = \\{ a_0, a_1, \\ldots, a_{d-1} \\}"
},
{
"math_id": 20,
"text": "L"
},
{
"math_id": 21,
"text": "w \\in (L \\cup \\{ x_0, x_1, \\ldots, x_{k-1} \\})^n"
},
{
"math_id": 22,
"text": "x_i"
},
{
"math_id": 23,
"text": "\\textstyle [L]\\binom{n}{k}"
},
{
"math_id": 24,
"text": "\\textstyle w \\in [L]\\binom{n}{m}"
},
{
"math_id": 25,
"text": "\\textstyle v \\in [L]\\binom{m}{k}"
},
{
"math_id": 26,
"text": "\\textstyle w \\circ v \\in [L]\\binom{n}{k}"
},
{
"math_id": 27,
"text": "w"
},
{
"math_id": 28,
"text": "i"
},
{
"math_id": 29,
"text": "v"
},
{
"math_id": 30,
"text": "\\textstyle \\Delta: [L]\\binom{n}{k} \\to r"
},
{
"math_id": 31,
"text": "\\textstyle w \\circ [L]\\binom{m}{k} = \\{ w \\circ v: v \\in [L]\\binom{m}{k} \\}"
},
{
"math_id": 32,
"text": "\\textstyle \\big( \\sum_{k \\in A} k \\big) < n"
},
{
"math_id": 33,
"text": "\\textstyle \\operatorname{FS}(A) = \\{ \\sum_{k \\in B} k : B \\in \\mathcal{P}(A) \\setminus \\varnothing \\}"
},
{
"math_id": 34,
"text": "B"
},
{
"math_id": 35,
"text": "\\mathbf{C}"
},
{
"math_id": 36,
"text": "A, B"
},
{
"math_id": 37,
"text": "D"
},
{
"math_id": 38,
"text": "\\Delta: \\operatorname{Hom}(A,D) \\to r"
},
{
"math_id": 39,
"text": "f: B \\to D"
},
{
"math_id": 40,
"text": "f \\circ \\operatorname{Hom}(A,B) = \\big\\{ f \\circ g: g \\in \\operatorname{Hom}(A,B) \\big\\}"
},
{
"math_id": 41,
"text": "\\mathcal{L}"
},
{
"math_id": 42,
"text": "\\mathbf{C}^\\mathrm{op}"
},
{
"math_id": 43,
"text": "\\Delta: \\operatorname{Hom}(D,A) \\to r"
},
{
"math_id": 44,
"text": "f: D \\to B"
},
{
"math_id": 45,
"text": "\\operatorname{Hom}(B,A) \\circ f"
},
{
"math_id": 46,
"text": "x \\mapsto a + dx"
},
{
"math_id": 47,
"text": "a,d \\in \\mathbb{N}"
},
{
"math_id": 48,
"text": "d \\neq 0"
},
{
"math_id": 49,
"text": "A=1"
},
{
"math_id": 50,
"text": "k \\in \\mathbb{N}"
},
{
"math_id": 51,
"text": "X_k = \\{ x_0, \\ldots, x_{k-1} \\}"
},
{
"math_id": 52,
"text": "\\mathbf{GR}"
},
{
"math_id": 53,
"text": "A_k = L \\cup X_k"
},
{
"math_id": 54,
"text": "A_n \\to A_k"
},
{
"math_id": 55,
"text": "n \\geq k"
},
{
"math_id": 56,
"text": "f: X_n \\to A_k"
},
{
"math_id": 57,
"text": "X_k \\subseteq A_k = \\operatorname{codom}f"
},
{
"math_id": 58,
"text": "A = A_0"
},
{
"math_id": 59,
"text": "B = A_1"
},
{
"math_id": 60,
"text": "G"
},
{
"math_id": 61,
"text": "X"
},
{
"math_id": 62,
"text": "G \\curvearrowright X"
},
{
"math_id": 63,
"text": "x \\in X"
},
{
"math_id": 64,
"text": "x"
},
{
"math_id": 65,
"text": "\\mathbf{F}"
},
{
"math_id": 66,
"text": "\\operatorname{Aut}(\\mathbf{F})"
},
{
"math_id": 67,
"text": "\\mathbf{F}^\\mathbf{F} = \\{ f: \\mathbf{F} \\to \\mathbf{F} \\}"
},
{
"math_id": 68,
"text": "\\operatorname{Age}(\\mathbf{F})"
}
] |
https://en.wikipedia.org/wiki?curid=62457740
|
62458732
|
Round-robin item allocation
|
Fair item allocation procedure
Round robin is a procedure for fair item allocation. It can be used to allocate several indivisible items among several people, such that the allocation is "almost" envy-free: each agent believes that the bundle they received is at least as good as the bundle of any other agent, when at most one item is removed from the other bundle. In sports, the round-robin procedure is called a draft.
Setting.
There are "m" objects to allocate, and "n" people ("agents") with equal rights to these objects. Each person has different preferences over the objects. The preferences of an agent are given by a vector of values - a value for each object. It is assumed that the value of a bundle for an agent is the sum of the values of the objects in the bundle (in other words, the agents' valuations are an additive set function on the set of objects).
Description.
The protocol proceeds as follows: number the people arbitrarily from 1 to "n"; then, while there are unassigned objects, let each person in turn, in the fixed order 1, 2, ..., "n" (returning to person 1 after person "n"), pick one of the unassigned objects.
It is assumed that each person in their turn picks an unassigned object with a highest value among the remaining objects.
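A minimal Python sketch of the procedure (an illustrative addition; valuations are given as a list of per-agent value lists, and ties are broken arbitrarily):

def round_robin(values):
    # values[i][o] is agent i's value for object o; agents pick in the fixed order 0, 1, ..., n-1
    n, m = len(values), len(values[0])
    remaining, bundles = set(range(m)), [[] for _ in range(n)]
    turn = 0
    while remaining:
        agent = turn % n
        pick = max(remaining, key=lambda o: values[agent][o])   # a highest-value remaining object
        bundles[agent].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

print(round_robin([[10, 8, 6, 1], [9, 2, 8, 7]]))   # [[0, 1], [2, 3]]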
Additivity requirement.
The round-robin protocol requires additivity, since it requires each agent to pick their "best item" without knowing what other items they are going to get; additivity of valuations guarantees that there is always a "best item" (an item with a highest value). In other words, it assumes that the items are independent goods. The additivity requirement can be relaxed to weak additivity.
Properties.
The round-robin protocol is very simple to execute: it requires only "m" steps. Each agent can order the objects in advance by descending value (this takes formula_1 time per agent) and then pick an object in time formula_2.
The final allocation is EF1 - envy-free up to one object. This means that, for every pair of agents formula_3 and formula_4, if at most one object is removed from the bundle of formula_4, then formula_3 does not envy formula_4.
"Proof:" For every agent formula_3, divide the selections made by the agents to sub-sequences: the first subsequence starts at agent 1 and ends at agent formula_5; the latter subsequences start at formula_3 and end at formula_5. In the latter subsequences, agent formula_3 chooses first, so they can choose their best item, so they do not envy any other agent. Agent formula_3 can envy only one of the agents formula_6, and the envy comes only from an item they selected in the first subsequence. If this item is removed, agent formula_3 does not envy.
Additionally, round-robin guarantees that each agent receives the same number of items ("m"/"n", if "m" is divisible by "n"), or almost the same number of items (if "m" is not divisible by "n"). Thus, it is useful in situations with simple cardinality constraints, such as: assigning course-seats to students where each student must receive the same number of courses.
Efficiency considerations.
Round-robin guarantees approximate fairness, but the outcome might be inefficient. As a simple example, suppose the valuations are:
Round-robin, when Alice chooses first, yields the allocation formula_7 with utilities (24,23) and social welfare 47. It is not Pareto efficient, since it is dominated, e.g., by the allocation formula_8, with utilities (25,25).
An alternative algorithm, which may attain a higher social welfare, is the "Iterated maximum-weight matching" algorithm. In each iteration, it finds a maximum-weight matching in the bipartite graph in which the nodes are the agents and the items, and the edge weights are the agents' values for the items. In the above example, the first matching is formula_9, the second is formula_10, and the third is formula_11. The total allocation is formula_12 with utilities (18,32); the social welfare (the sum of utilities) is 50, which is higher than in the round-robin allocation.
Note that even iterated maximum-weight matching does not guarantee Pareto efficiency, as the above allocation is dominated by (xwv, zyu) with utilities (19,36).
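A brute-force Python sketch of the iterated-matching idea (illustrative only; a practical implementation would use a polynomial-time matching algorithm such as the Hungarian method rather than enumerating all assignments, and the example values below are arbitrary):

from itertools import combinations, permutations

def iterated_matching(values):
    # In each round, give one distinct remaining object to each of min(n, #remaining) agents
    # so that the total value of the round is maximized; repeat until no objects remain.
    n, m = len(values), len(values[0])
    remaining, bundles = set(range(m)), [[] for _ in range(n)]
    while remaining:
        k = min(n, len(remaining))
        best_total, best_assignment = None, None
        for agents in combinations(range(n), k):
            for objs in permutations(remaining, k):
                total = sum(values[a][o] for a, o in zip(agents, objs))
                if best_total is None or total > best_total:
                    best_total, best_assignment = total, list(zip(agents, objs))
        for a, o in best_assignment:
            bundles[a].append(o)
            remaining.remove(o)
    return bundles

print(iterated_matching([[10, 8, 6, 1], [9, 2, 8, 7]]))   # [[0, 1], [2, 3]]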
Round-robin for groups.
The round-robin algorithm can be used to fairly allocate items among groups. In this setting, all members in each group consume the same bundle, but different members in each group may have different preferences over the items. This raises the question of how each group should decide which item to choose in its turn. Suppose that the goal of each group is to maximize the fraction of its members that are "happy", that is, feel that the allocation is fair (according to their personal preferences). Suppose also that the agents have binary additive valuations, that is, each agent values each item at either 1 ("approve") or 0 ("disapprove"). Then, each group can decide what item to pick using "weighted approval voting":
The resulting algorithm is called RWAV (round-robin with weighted approval voting). The weight function "w"("r","s") is determined based on an auxiliary function "B"("r","s"), defined by the following recurrence relation:
formula_13
formula_14
formula_15
Intuitively, B("r","s") for an agent represents the probability that the agent is happy with the final allocation. If "s" ≤ 0, then by definition this probability is 1: the agent needs no more goods to be happy. If 0<"s" and "r"<"s", then this probability is 0: the agent cannot be happy, since they need more goods than are available. Otherwise, B("r","s") is the average between B("r"-1,"s") - when the other group takes a good wanted by the agent, and B("r"-1,"s"-1) - when the agent's group takes a good wanted by the agent. The term B("r"-2,"s"-1) represents the situation when both groups take a good wanted by the agent. Once B("r","s") is computed, the weight function "w" is defined as follows: formula_16 When using this weight function and running RWAV with two groups, the fraction of happy members in group 1 is at least B("r", s("r")), and the fraction of happy members in group 2 is at least B("r"-1, s("r")) (Lemma 3.6). The function "s"("r") is determined by the fairness criterion. For example, for 1-out-of-3 maximin-share fairness, "s"("r") = floor("r"/3). The following table shows some values of the function "B", with the values of B(r-1, floor(r/3)) boldfaced:
From this one can conclude that the RWAV algorithm guarantees that, in both groups, at least 75% of the members feel that the allocation is 1-out-of-3 MMS fair.
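Since the table of values is not reproduced here, the recurrence can simply be evaluated directly. A Python sketch (an added illustration; it prints the lower bounds B(r-1, floor(r/3)) that underlie the 75% figure):

from functools import lru_cache

@lru_cache(maxsize=None)
def B(r, s):
    # auxiliary function from the recurrence above
    if s <= 0:
        return 1.0
    if r < s:
        return 0.0
    return min(0.5 * (B(r - 1, s) + B(r - 1, s - 1)), B(r - 2, s - 1))

def w(r, s):
    # weight used in the weighted approval voting step: w(r, s) = B(r, s) - B(r - 1, s)
    return B(r, s) - B(r - 1, s)

for r in range(3, 13):
    print(r, B(r - 1, r // 3))   # each printed value is at least 0.75; the smallest, B(2, 1), equals 0.75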
Extensions.
1. The round-robin protocol guarantees EF1 when the items are "goods" (valued positively by all agents) and when they are "chores" (valued negatively by all agents). However, when there are both goods and chores, it does not guarantee EF1. An adaptation of round-robin called double round-robin guarantees EF1 even with a mixture of goods and chores.
2. When agents have more complex cardinality constraints (i.e., the items are divided into categories, and for each category of items, there is an upper bound on the number of items each agent can get from this category), round-robin might fail. However, combining round-robin with the envy-graph procedure gives an algorithm that finds allocations that are both EF1 and satisfy the cardinality constraints.
3. When agents have different weights (i.e., agents have different entitlements to the total items), a generalized round-robin protocol called weighted round-robin guarantees EF1 when the items are "goods" (valued positively by all agents), and the reversed weighted round-robin guarantees EF1 when the items are "chores" (valued negatively by all agents).
See also.
Round-robin is a special case of a picking sequence.
Round-robin protocols are used in other areas besides fair item allocation. For example, see round-robin scheduling and round-robin tournament.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "O(m \\text{log}m)"
},
{
"math_id": 2,
"text": "O(1)"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "j"
},
{
"math_id": 5,
"text": "i-1"
},
{
"math_id": 6,
"text": "1,...,i-1"
},
{
"math_id": 7,
"text": "(zxv, ywu)"
},
{
"math_id": 8,
"text": "(yxw, zvu)"
},
{
"math_id": 9,
"text": "(y,z)"
},
{
"math_id": 10,
"text": "(w,x)"
},
{
"math_id": 11,
"text": "(u,v)"
},
{
"math_id": 12,
"text": "(ywu,zxv)"
},
{
"math_id": 13,
"text": "B(r,s) := 1 ~~\\text{if}~~ s\\leq 0;\n"
},
{
"math_id": 14,
"text": "B(r,s) := 0 ~~\\text{if}~~ 0<s ~\\text{and}~ r<s;\n"
},
{
"math_id": 15,
"text": "B(r,s) := \n\\min\\bigg[\n\\frac{1}{2}[B(r-1,s)+B(r-1,s-1)]\n,\nB(r-2,s-1)\n\\bigg] ~~\\text{otherwise}"
},
{
"math_id": 16,
"text": "w(r,s) := B(r,s) - B(r-1,s)"
}
] |
https://en.wikipedia.org/wiki?curid=62458732
|
6246
|
Covalent bond
|
Chemical bond by sharing of electron pairs
A covalent bond is a chemical bond that involves the sharing of electrons to form electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs. The stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding. For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full valence shell, corresponding to a stable electronic configuration. In organic chemistry, covalent bonding is much more common than ionic bonding.
Covalent bonding also includes many kinds of interactions, including σ-bonding, π-bonding, metal-to-metal bonding, agostic interactions, bent bonds, three-center two-electron bonds and three-center four-electron bonds. The term "covalent bond" dates from 1939. The prefix "co-" means "jointly, associated in action, partnered to a lesser degree, " etc.; thus a "co-valent bond", in essence, means that the atoms share "valence", such as is discussed in valence bond theory.
In the molecule H2, the hydrogen atoms share the two electrons via covalent bonding. Covalency is greatest between atoms of similar electronegativities. Thus, covalent bonding does not necessarily require that the two atoms be of the same elements, only that they be of comparable electronegativity. Covalent bonding that entails the sharing of electrons over more than two atoms is said to be delocalized.
History.
The term "covalence" in regard to bonding was first used in 1919 by Irving Langmuir in a "Journal of the American Chemical Society" article entitled "The Arrangement of Electrons in Atoms and Molecules". Langmuir wrote that "we shall denote by the term "covalence" the number of pairs of electrons that a given atom shares with its neighbors."
The idea of covalent bonding can be traced several years before 1919 to Gilbert N. Lewis, who in 1916 described the sharing of electron pairs between atoms (and in 1926 he also coined the term "photon" for the smallest unit of radiant energy). He introduced the "Lewis notation" or "electron dot notation" or "Lewis dot structure", in which valence electrons (those in the outer shell) are represented as dots around the atomic symbols. Pairs of electrons located between atoms represent covalent bonds. Multiple pairs represent multiple bonds, such as double bonds and triple bonds. An alternative form of representation, not shown here, has bond-forming electron pairs represented as solid lines.
Lewis proposed that an atom forms enough covalent bonds to form a full (or closed) outer electron shell. In the diagram of methane shown here, the carbon atom has a valence of four and is, therefore, surrounded by eight electrons (the octet rule), four from the carbon itself and four from the hydrogens bonded to it. Each hydrogen has a valence of one and is surrounded by two electrons (a duet rule) – its own one electron plus one from the carbon. The numbers of electrons correspond to full shells in the quantum theory of the atom; the outer shell of a carbon atom is the "n" = 2 shell, which can hold eight electrons, whereas the outer (and only) shell of a hydrogen atom is the "n" = 1 shell, which can hold only two.
While the idea of shared electron pairs provides an effective qualitative picture of covalent bonding, quantum mechanics is needed to understand the nature of these bonds and predict the structures and properties of simple molecules. Walter Heitler and Fritz London are credited with the first successful quantum mechanical explanation of a chemical bond (molecular hydrogen) in 1927. Their work was based on the valence bond model, which assumes that a chemical bond is formed when there is good overlap between the atomic orbitals of participating atoms.
Types of covalent bonds.
Atomic orbitals (except for s orbitals) have specific directional properties leading to different types of covalent bonds. Sigma (σ) bonds are the strongest covalent bonds and are due to head-on overlapping of orbitals on two different atoms. A single bond is usually a σ bond. Pi (π) bonds are weaker and are due to lateral overlap between p (or d) orbitals. A double bond between two given atoms consists of one σ and one π bond, and a triple bond is one σ and two π bonds.
Covalent bonds are also affected by the electronegativity of the connected atoms, which determines the chemical polarity of the bond. Two atoms with equal electronegativity will make nonpolar covalent bonds such as H–H. An unequal relationship creates a polar covalent bond such as with H−Cl. However, polarity also requires geometric asymmetry, or else dipoles may cancel out, resulting in a non-polar molecule.
Covalent structures.
There are several types of structures for covalent substances, including individual molecules, molecular structures, macromolecular structures and giant covalent structures. Individual molecules have strong bonds that hold the atoms together, but generally, there are negligible forces of attraction between molecules. Such covalent substances are usually gases, for example, HCl, SO2, CO2, and CH4. In molecular structures, there are weak forces of attraction. Such covalent substances are low-boiling-temperature liquids (such as ethanol), and low-melting-temperature solids (such as iodine and solid CO2). Macromolecular structures have large numbers of atoms linked by covalent bonds in chains, including synthetic polymers such as polyethylene and nylon, and biopolymers such as proteins and starch. Network covalent structures (or giant covalent structures) contain large numbers of atoms linked in sheets (such as graphite), or 3-dimensional structures (such as diamond and quartz). These substances have high melting and boiling points, are frequently brittle, and tend to have high electrical resistivity. Elements that have high electronegativity, and the ability to form three or four electron pair bonds, often form such large macromolecular structures.
One- and three-electron bonds.
Bonds with one or three electrons can be found in radical species, which have an odd number of electrons. The simplest example of a 1-electron bond is found in the dihydrogen cation, H2+. One-electron bonds often have about half the bond energy of a 2-electron bond, and are therefore called "half bonds". However, there are exceptions: in the case of dilithium, the bond is actually stronger for the 1-electron Li2+ than for the 2-electron Li2. This exception can be explained in terms of hybridization and inner-shell effects.
The simplest example of three-electron bonding can be found in the helium dimer cation, He2+. It is considered a "half bond" because it consists of only one shared electron (rather than two); in molecular orbital terms, the third electron is in an anti-bonding orbital which cancels out half of the bond formed by the other two electrons. Another example of a molecule containing a 3-electron bond, in addition to two 2-electron bonds, is nitric oxide, NO. The oxygen molecule, O2 can also be regarded as having two 3-electron bonds and one 2-electron bond, which accounts for its paramagnetism and its formal bond order of 2. Chlorine dioxide and its heavier analogues bromine dioxide and iodine dioxide also contain three-electron bonds.
Molecules with odd-electron bonds are usually highly reactive. These types of bond are only stable between atoms with similar electronegativities.
Resonance.
There are situations whereby a single Lewis structure is insufficient to explain the electron configuration in a molecule and its resulting experimentally-determined properties, hence a superposition of structures is needed. The same two atoms in such molecules can be bonded differently in different Lewis structures (a single bond in one, a double bond in another, or even none at all), resulting in a non-integer bond order. The nitrate ion is one such example with three equivalent structures. The bond between the nitrogen and each oxygen is a double bond in one structure and a single bond in the other two, so that the average bond order for each N–O interaction is (2 + 1 + 1)/3 = 4/3.
Aromaticity.
In organic chemistry, when a molecule with a planar ring obeys Hückel's rule, where the number of π electrons fit the formula 4"n" + 2 (where "n" is an integer), it attains extra stability and symmetry. In benzene, the prototypical aromatic compound, there are 6 π bonding electrons ("n" = 1, 4"n" + 2 = 6). These occupy three delocalized π molecular orbitals (molecular orbital theory) or form conjugate π bonds in two resonance structures that linearly combine (valence bond theory), creating a regular hexagon exhibiting a greater stabilization than the hypothetical 1,3,5-cyclohexatriene.
In the case of heterocyclic aromatics and substituted benzenes, the electronegativity differences between different parts of the ring may dominate the chemical behavior of aromatic ring bonds, which otherwise are equivalent.
Hypervalence.
Certain molecules such as xenon difluoride and sulfur hexafluoride have higher co-ordination numbers than would be possible due to strictly covalent bonding according to the octet rule. This is explained by the three-center four-electron bond ("3c–4e") model which interprets the molecular wavefunction in terms of non-bonding highest occupied molecular orbitals in molecular orbital theory and resonance of sigma bonds in valence bond theory.
Electron deficiency.
In three-center two-electron bonds ("3c–2e") three atoms share two electrons in bonding. This type of bonding occurs in boron hydrides such as diborane (B2H6), which are often described as electron deficient because there are not enough valence electrons to form localized (2-centre 2-electron) bonds joining all the atoms. However the more modern description using 3c–2e bonds does provide enough bonding orbitals to connect all the atoms, so that the molecules can instead be classified as electron-precise.
Each such bond (2 per molecule in diborane) contains a pair of electrons which connect the boron atoms to each other in a banana shape, with a proton (the nucleus of a hydrogen atom) in the middle of the bond, sharing electrons with both boron atoms. In certain cluster compounds, so-called four-center two-electron bonds also have been postulated.
Quantum mechanical description.
After the development of quantum mechanics, two basic theories were proposed to provide a quantum description of chemical bonding: valence bond (VB) theory and molecular orbital (MO) theory. A more recent quantum description is given in terms of atomic contributions to the electronic density of states.
Comparison of VB and MO theories.
The two theories represent two ways to build up the electron configuration of the molecule. For valence bond theory, the atomic hybrid orbitals are filled with electrons first to produce a fully bonded valence configuration, followed by performing a linear combination of contributing structures (resonance) if there are several of them. In contrast, for molecular orbital theory a linear combination of atomic orbitals is performed first, followed by filling of the resulting molecular orbitals with electrons.
The two approaches are regarded as complementary, and each provides its own insights into the problem of chemical bonding. As valence bond theory builds the molecular wavefunction out of localized bonds, it is more suited for the calculation of bond energies and the understanding of reaction mechanisms. As molecular orbital theory builds the molecular wavefunction out of delocalized orbitals, it is more suited for the calculation of ionization energies and the understanding of spectral absorption bands.
At the qualitative level, both theories contain incorrect predictions. Simple (Heitler–London) valence bond theory correctly predicts the dissociation of homonuclear diatomic molecules into separate atoms, while simple (Hartree–Fock) molecular orbital theory incorrectly predicts dissociation into a mixture of atoms and ions. On the other hand, simple molecular orbital theory correctly predicts Hückel's rule of aromaticity, while simple valence bond theory incorrectly predicts that cyclobutadiene has larger resonance energy than benzene.
Although the wavefunctions generated by both theories at the qualitative level do not agree and do not match the stabilization energy by experiment, they can be corrected by configuration interaction. This is done by combining the valence bond covalent function with the functions describing all possible ionic structures or by combining the molecular orbital ground state function with the functions describing all possible excited states using unoccupied orbitals. It can then be seen that the simple molecular orbital approach overestimates the weight of the ionic structures while the simple valence bond approach neglects them. This can also be described as saying that the simple molecular orbital approach neglects electron correlation while the simple valence bond approach overestimates it.
Modern calculations in quantum chemistry usually start from (but ultimately go far beyond) a molecular orbital rather than a valence bond approach, not because of any intrinsic superiority in the former but rather because the MO approach is more readily adapted to numerical computations. Molecular orbitals are orthogonal, which significantly increases the feasibility and speed of computer calculations compared to nonorthogonal valence bond orbitals.
Covalency from atomic contribution to the electronic density of states.
In COOP, COHP and BCOOP, evaluation of bond covalency is dependent on the basis set. To overcome this issue, an alternative formulation of the bond covalency can be provided in this way.
The mass center cm^A(n, l, m_l, m_s) of an atomic orbital formula_0 with quantum numbers n, l, m_l, m_s for atom A is defined as
formula_1
where formula_2 is the contribution of the atomic orbital formula_3 of the atom A to the total electronic density of states g(E) of the solid
formula_4
where the outer sum runs over all atoms A of the unit cell. The energy window [E_0, E_1] is chosen in such a way that it encompasses all of the relevant bands participating in the bond. If the range to select is unclear, it can be identified in practice by examining the molecular orbitals that describe the electron density along with the considered bond.
The relative position C_{n_A l_A, n_B l_B} of the mass center of formula_5 levels of atom A with respect to the mass center of formula_6 levels of atom B is given as
formula_7
where the contributions of the magnetic and spin quantum numbers are summed. According to this definition, the relative position of the A levels with respect to the B levels is
formula_8
where, for simplicity, we may omit the dependence on the principal quantum number n in the notation referring to C_{n_A l_A, n_B l_B}.
In this formalism, the greater the value of C_{A,B}, the higher the overlap of the selected atomic bands, and thus the electron density described by those orbitals gives a more covalent bond. The quantity C_{A,B} is denoted as the "covalency" of the bond, which is specified in the same units as the energy E.
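As a purely numerical illustration of this definition (an added sketch with made-up Gaussian-shaped projected DOS curves, not real data; a uniform energy grid is assumed so that the grid spacing cancels in the ratio):

import numpy as np

def mass_center(E, g, E0, E1):
    # mass center of the orbital-projected DOS g(E) over the energy window [E0, E1]
    mask = (E >= E0) & (E <= E1)
    return np.sum(E[mask] * g[mask]) / np.sum(g[mask])

def covalency(E, gA, gB, E0, E1):
    # C_AB = -|cm_A - cm_B|: values closer to zero mean better band overlap, i.e. a more covalent bond
    return -abs(mass_center(E, gA, E0, E1) - mass_center(E, gB, E0, E1))

E = np.linspace(-10.0, 5.0, 1501)
gA = np.exp(-((E + 4.0) ** 2) / 2.0)     # toy orbital-projected DOS for atom A
gB = np.exp(-((E + 3.0) ** 2) / 2.0)     # toy orbital-projected DOS for atom B
print(covalency(E, gA, gB, E0=-8.0, E1=0.0))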
Analogous effect in nuclear systems.
An analogous effect to covalent binding is believed to occur in some nuclear systems, with the difference that the shared fermions are quarks rather than electrons. High energy proton-proton scattering cross-section indicates that quark interchange of either u or d quarks is the dominant process of the nuclear force at short distance. In particular, it dominates over the Yukawa interaction where a meson is exchanged. Therefore, covalent binding by quark interchange is expected to be the dominating mechanism of nuclear binding at small distance when the bound hadrons have covalence quarks in common.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "| n,l,m_l,m_s \\rangle ,"
},
{
"math_id": 1,
"text": "cm^\\mathrm{A}(n,l,m_l,m_s)=\\frac{\\int\\limits_{E_0}\\limits^{E_1} E g_{|n,l,m_l,m_s\\rangle}^\\mathrm{A}(E) dE}{\\int\\limits_{E_0}\\limits^{E_1} g_{|n,l,m_l,m_s\\rangle}^\\mathrm{A} (E)dE}"
},
{
"math_id": 2,
"text": "g_{|n,l,m_l,m_s\\rangle}^\\mathrm{A}(E)"
},
{
"math_id": 3,
"text": "|n,l,m_l,m_s \\rangle"
},
{
"math_id": 4,
"text": "g(E)=\\sum_\\mathrm{A}\\sum_{n, l}\\sum_{m_l, m_s}{g_{|n,l,m_l,m_s\\rangle}^\\mathrm{A}(E)}"
},
{
"math_id": 5,
"text": "| n_\\mathrm{A},l_\\mathrm{A}\\rangle"
},
{
"math_id": 6,
"text": "| n_\\mathrm{B},l_\\mathrm{B}\\rangle"
},
{
"math_id": 7,
"text": "C_{n_\\mathrm{A}l_\\mathrm{A},n_\\mathrm{B}l_\\mathrm{B}}=-\\left|cm^\\mathrm{A}(n_\\mathrm{A},l_\\mathrm{A})-cm^\\mathrm{B}(n_\\mathrm{B},l_\\mathrm{B})\\right|"
},
{
"math_id": 8,
"text": "C_\\mathrm{A,B}=-\\left|cm^\\mathrm{A}-cm^\\mathrm{B}\\right|"
}
] |
https://en.wikipedia.org/wiki?curid=6246
|
62461655
|
Polynomial method in combinatorics
|
In mathematics, the polynomial method is an algebraic approach to combinatorics problems that involves capturing some combinatorial structure using polynomials and proceeding to argue about their algebraic properties. Recently, the polynomial method has led to the development of remarkably simple solutions to several long-standing open problems. The polynomial method encompasses a wide range of specific techniques for using polynomials and ideas from areas such as algebraic geometry to solve combinatorics problems. While a few techniques that follow the framework of the polynomial method, such as Alon's Combinatorial Nullstellensatz, have been known since the 1990s, it was not until around 2010 that a broader framework for the polynomial method has been developed.
Mathematical overview.
Many uses of the polynomial method follow the same high-level approach: first, capture the combinatorial object of interest by a nonzero polynomial of low degree (typically found by linear algebra, since a polynomial with more coefficients than imposed constraints can be chosen to vanish on a prescribed set); then use the combinatorial hypotheses to show that the polynomial must vanish at more points than a polynomial of that degree can; finally, derive a contradiction or a quantitative bound from this tension.
Example.
As an example, we outline Dvir's proof of the Finite Field Kakeya Conjecture using the polynomial method.
Finite Field Kakeya Conjecture: Let formula_0 be a finite field with formula_1 elements. Let formula_2 be a Kakeya set, i.e. for each vector formula_3there exists formula_4 such that formula_5 contains a line formula_6. Then the set formula_5 has size at least formula_7where formula_8 is a constant that only depends on formula_9.
Proof: The proof we give will show that formula_5 has size at least formula_10. The bound of formula_7 can be obtained using the same method with a little additional work.
Assume we have a Kakeya set formula_5 with
formula_11
Consider the set of monomials of the form formula_12 of degree exactly formula_13. There are exactly formula_14 such monomials. Thus, there exists a nonzero homogeneous polynomial formula_15 of degree formula_13 that vanishes on all points in formula_5. Note this is because finding such a polynomial reduces to solving a homogeneous system of formula_16 linear equations for the coefficients, and since the number of unknown coefficients exceeds the number of equations, a nonzero solution exists.
Now we will use the property that formula_5 is a Kakeya set to show that formula_17 must vanish on all of formula_18. Clearly formula_19. Next, for formula_20, there is an formula_21 such that the line formula_6 is contained in formula_5. Since formula_17 is homogeneous, if formula_22 for some formula_23 then formula_24 for any formula_25. In particular
formula_26
for all nonzero formula_27. However, formula_28 is a polynomial of degree formula_13 in formula_29 but it has at least formula_30 roots corresponding to the nonzero elements of formula_0 so it must be identically zero. In particular, plugging in formula_31 we deduce formula_32.
We have shown that formula_32 for all formula_3, but formula_17 is nonzero and has degree less than formula_33 in each of the variables, so this is impossible by the Schwartz–Zippel lemma. We deduce that we must actually have
formula_34
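For very small parameters the objects in this proof can be examined directly. The Python sketch below (an added illustration) tests the Kakeya property by brute force, finds the smallest Kakeya set for q = 3, n = 2 by exhaustive search, and prints the lower bound formula_14 used in the proof:

from itertools import combinations, product
from math import comb

def is_kakeya(K, q, n):
    # Kakeya property: for every direction y there is an x with the whole line {x + t*y : t in F_q} inside K
    K = set(K)
    points = list(product(range(q), repeat=n))
    return all(
        any(all(tuple((xi + t * yi) % q for xi, yi in zip(x, y)) in K for t in range(q))
            for x in points)
        for y in points
    )

q, n = 3, 2
points = list(product(range(q), repeat=n))
for size in range(1, len(points) + 1):          # exhaustive search, feasible only for tiny q, n
    if any(is_kakeya(S, q, n) for S in combinations(points, size)):
        print("smallest Kakeya set for q = 3, n = 2 has size", size)
        break
print("lower bound from the proof:", comb(q + n - 3, n - 1))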
Polynomial partitioning.
A variation of the polynomial method, often called polynomial partitioning, was introduced by Guth and Katz in their solution to the Erdős distinct distances problem. Polynomial partitioning involves using polynomials to divide the underlying space into regions and arguing about the geometric structure of the partition. These arguments rely on results from algebraic geometry bounding the number of incidences between various algebraic curves. The technique of polynomial partitioning has been used to give a new proof of the Szemerédi–Trotter theorem via the polynomial ham sandwich theorem and has been applied to a variety of problems in incidence geometry.
Applications.
A few examples of longstanding open problems that have been solved using the polynomial method are:
|
[
{
"math_id": 0,
"text": "\\mathbb{F}_q"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "K \\subseteq \\mathbb{F}_q^n"
},
{
"math_id": 3,
"text": "y \\in \\mathbb{F}_q^n"
},
{
"math_id": 4,
"text": "x \\in \\mathbb{F}_q^n"
},
{
"math_id": 5,
"text": "K"
},
{
"math_id": 6,
"text": "\\{x + ty, t \\in \\mathbb{F}_q \\}"
},
{
"math_id": 7,
"text": "c_nq^n"
},
{
"math_id": 8,
"text": "c_n > 0"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "c_nq^{n-1}"
},
{
"math_id": 11,
"text": "|K| < {q+n-3\\choose n-1}"
},
{
"math_id": 12,
"text": "x_1^{d_1}x_2^{d_2} \\dots x_n^{d_n}"
},
{
"math_id": 13,
"text": "q-2"
},
{
"math_id": 14,
"text": "{q+n-3\\choose n-1}"
},
{
"math_id": 15,
"text": "P(x_1,x_2, \\dots , x_n)"
},
{
"math_id": 16,
"text": "|K|"
},
{
"math_id": 17,
"text": "P"
},
{
"math_id": 18,
"text": "\\mathbb{F}_q^n"
},
{
"math_id": 19,
"text": "P(0,0 \\dots , 0) = 0"
},
{
"math_id": 20,
"text": "y \\neq 0"
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "P(z) = 0"
},
{
"math_id": 23,
"text": "z \\in \\mathbb{F}_q^n"
},
{
"math_id": 24,
"text": "P(cz) = 0"
},
{
"math_id": 25,
"text": "c \\in \\mathbb{F}_q"
},
{
"math_id": 26,
"text": "P(tx + y) = P(t(x+t^{-1}y)) = 0"
},
{
"math_id": 27,
"text": "t \\in \\mathbb{F}_q"
},
{
"math_id": 28,
"text": "P(tx+y) "
},
{
"math_id": 29,
"text": "t"
},
{
"math_id": 30,
"text": "q-1"
},
{
"math_id": 31,
"text": "t = 0"
},
{
"math_id": 32,
"text": "P(y) = 0"
},
{
"math_id": 33,
"text": "q - 1"
},
{
"math_id": 34,
"text": "|K| \\ge {q+n-3\\choose n-1} \\sim \\frac{q^{n-1}}{(n-1)!}"
},
{
"math_id": 35,
"text": "\\mathbb{Z}_4^n"
}
] |
https://en.wikipedia.org/wiki?curid=62461655
|
62464104
|
Nehemiah 6
|
Chapter in the Book of Nehemiah
Nehemiah 6 is the sixth chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 16th chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. This chapter records the continuing opposition to Nehemiah from sources both external (Sanballat, Tobiah, and their allies) and internal (the prophetess Noadiah and the rest of the prophets).
Text.
The original text of this chapter is in Hebrew language. This chapter is divided into 19 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
The pretense of peace (6:1-4).
As a leader, Nehemiah holds his motives and conduct blameless, but at the same time, he must understand and deal wisely with the opposition 'who seek to compromise God's work'.
"1 Now it happened when Sanballat, Tobiah, Geshem the Arab, and the rest of our enemies heard that I had rebuilt the wall, and that there were no breaks left in it (though at that time I had not hung the doors in the gates), 2 that Sanballat and Geshem sent to me, saying, "Come, let us meet together among the villages in the plain of Ono." But they thought to do me harm."
Verses 1–2.
Tobiah is described as an Ammonite in
The trap of intimidation (6:5-9).
Sanballat hoped that Nehemiah would follow the logical action against the rumors of threats, the way he and his allies would do, that is, 'given to ambition, opportunistic maneuvering, and dedicated to self-preservation', but Nehemiah 'refused to become distracted by the ploy of politics' and kept his devotion to God.
"Then Sanballat sent his servant to me as before, the fifth time, with an open letter in his hand."
Verse 5.
Sanballat sent his fifth letter as an "open letter", because he is 'well aware of the possibility that popular sentiment will stand behind a claim to restore an independent Judah', and uses it to launch an accusation that Nehemiah is sponsoring prophetic support (indicating the importance of prophetic authority in Ezra–Nehemiah).
"In it was written, "It is reported among the nations, and Geshem also says it, that you and the Jews intend to rebel; that is why you are building the wall. And according to these reports you wish to become their king.""
The lure of safety (6:10-14).
In this section, Nehemiah remembers that the will of God is eternal and has primacy over any individual.
"Then I went to the house of Shemaiah son of Delaiah, the son of Mehetabel. He was confined to his home. He said, "Let’s set up a time to meet in the house of God, within the temple. Let’s close the doors of the temple, for they are coming to kill you. It will surely be at night that they will come to kill you.""
Continued opposition (6:15-19).
The establishment of fortifications does not provide full security, as continued opposition remains in place; dangers can always threaten the community of faith, but ... the godly character of the people is the greatest defense against the threats.
"So the wall was finished in the twenty and fifth day of the month Elul, in fifty and two days."
"Moreover in those days the nobles of Judah sent many letters unto Tobiah, and the letters of Tobiah came unto them."
"Moreover, they kept reporting to me his good deeds and then telling him what I said. And Tobiah sent letters to intimidate me."
Verse 19.
The nobles of Judah acted as intermediaries: they "endeavoured to convince Nehemiah that Tobiah’s professions of goodwill were sincere ... and on the other hand they communicated to Tobiah all that Nehemiah said and did". Anglican commentator H. E. Ryle suggests that their aim was to supply Tobiah with "material for charges against Nehemiah to be made before the Persian king, or for slanders to the Jewish people".
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62464104
|
62466987
|
Double marginalization
|
Supply chain market situation
Double marginalization is a vertical externality that occurs when two firms with market power (i.e., not in a situation of perfect competition), at different vertical levels in the same supply chain, each apply a mark-up to their prices. This is caused by the fact that each firm faces a downward-sloping demand curve, which prompts it to mark up its price beyond marginal cost. Double marginalization is clearly negative from a welfare point of view, as the double markup induces a deadweight loss: the retail price is higher than the optimal monopoly price a vertically integrated company would set, leading to underproduction. Thus all social groups are negatively affected: the overall profit of the supply chain is lower, the consumer has to pay more, and fewer units are consumed.
Example.
Consider an industry with the following characteristics:
formula_0
formula_1
formula_2
In a monopolistic situation with a single integrated firm, the profit-maximizing firm would set its price at formula_3, resulting in a quantity of formula_4 and a total profit of formula_5.
In a non-integrated scenario, the monopolist retailer and the monopolist manufacturer set their prices independently, respectively formula_6 and formula_7. Taking the wholesale price formula_7 as given, the retailer maximizes its own profit formula_8, which yields the best response formula_9. Anticipating this, the manufacturer maximizes formula_10, which gives formula_11 and therefore formula_12, so that only formula_13 units are sold.
Not only is the total profit lower than in the integrated scenario, but the retail price is higher, thus reducing the consumer surplus.
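A brute-force numerical sketch of this example is given below (illustrative only; it searches over a price grid rather than solving the first-order conditions in closed form, and all variable names are arbitrary). It reproduces the integrated outcome formula_3, formula_4, formula_5 as well as the double-marginalization outcome formula_11, formula_12, formula_13 with a channel profit of 12.

    # Brute-force computation of the example above: linear demand Q = 10 - p and marginal cost c = 2.
    import numpy as np

    prices = np.linspace(0, 10, 100001)
    c = 2.0

    def demand(p):
        return np.maximum(10 - p, 0)

    # Integrated monopolist: choose the retail price p to maximize (p - c) * Q(p).
    integrated_profit = (prices - c) * demand(prices)
    p_int = prices[np.argmax(integrated_profit)]

    # Double marginalization: the manufacturer sets a wholesale price w, anticipating that
    # the retailer will then choose p to maximize its own margin (p - w) * Q(p).
    def retailer_price(w):
        return prices[np.argmax((prices - w) * demand(prices))]

    wholesale = np.linspace(c, 10, 2001)
    w_star = wholesale[int(np.argmax([(w - c) * demand(retailer_price(w)) for w in wholesale]))]
    p_star = retailer_price(w_star)

    print(f"integrated:     p = {p_int:.2f}, Q = {demand(p_int):.2f}, "
          f"profit = {np.max(integrated_profit):.2f}")
    print(f"non-integrated: w = {w_star:.2f}, p = {p_star:.2f}, Q = {demand(p_star):.2f}, "
          f"channel profit = {(p_star - c) * demand(p_star):.2f}")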
Solutions.
There are numerous mechanisms to prevent or at least limit double marginalization. These include, among others, the following.
Note that the above mechanisms only solve the problem of double marginalization; from an overall welfare point of view, the problem of monopoly pricing remains. It should also be noted that while some of the solutions presented above, such as mergers, help to eliminate the double markup within the vertical chain, they may damage horizontal competition.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{Demand:}\\quad \\mathrm{Q}=10-p "
},
{
"math_id": 1,
"text": "\\text{Manufacturer's Marginal Cost:}\\quad c= C'(\\mathrm{Q})=2 "
},
{
"math_id": 2,
"text": "\\text{Total Profit:}\\quad \\pi = p \\cdot \\mathrm{Q} - c \\cdot \\mathrm{Q} "
},
{
"math_id": 3,
"text": " p = 6"
},
{
"math_id": 4,
"text": " \\mathrm{Q} = 4"
},
{
"math_id": 5,
"text": "\\pi = 16"
},
{
"math_id": 6,
"text": "p_r"
},
{
"math_id": 7,
"text": "p_m"
},
{
"math_id": 8,
"text": "(p_r - p_m)(10 - p_r)"
},
{
"math_id": 9,
"text": "p_r = 5 + 0.5p_m"
},
{
"math_id": 10,
"text": "(p_m - 2)(5 - 0.5p_m)"
},
{
"math_id": 11,
"text": "p_m = 6"
},
{
"math_id": 12,
"text": "p_r = 8"
},
{
"math_id": 13,
"text": " \\mathrm{Q} = 2"
}
] |
https://en.wikipedia.org/wiki?curid=62466987
|
62469111
|
Homomorphism density
|
Concept in mathematical graph theory
In the mathematical field of extremal graph theory, homomorphism density with respect to a graph formula_0 is a parameter formula_1 that is associated to each graph formula_2 in the following manner:
formula_3.
Above, formula_4 is the set of graph homomorphisms, or adjacency preserving maps, from formula_0 to formula_2. Density can also be interpreted as the probability that a map from the vertices of formula_0 to the vertices of formula_2 chosen uniformly at random is a graph homomorphism. There is a connection between homomorphism densities and subgraph densities, which is elaborated on below.
Examples.
For example, the edge density of a graph formula_2 is formula_5; the number of walks with formula_6 steps in formula_2 is given by the homomorphism number formula_7; closed walks of length formula_10 are counted by formula_8, where formula_9 is the adjacency matrix of formula_2; and formula_11 gives the probability that a uniformly random coloring of formula_2 with formula_10 colors is proper. Other important properties such as the number of stable sets or the maximum cut can be expressed or estimated in terms of homomorphism numbers or densities.
Subgraph densities.
We define the (labeled) subgraph density of formula_0 in formula_2 to be
formula_12.
Note that it might be slightly dubious to call this a density, as we are not quite dividing through by the total number of labeled subgraphs on formula_13 vertices of formula_2, but our definition is asymptotically equivalent and simpler to analyze for our purposes. Observe that any labeled copy of formula_0 in formula_2 corresponds to a homomorphism of formula_0 into formula_2. However, not every homomorphism corresponds to a labeled copy − there are some degenerate cases, in which multiple vertices of formula_0 are sent to the same vertex of formula_2. That said, the number of such degenerate homomorphisms is only formula_14, so we have formula_15. For instance, we see that for graphs with constant homomorphism density, the labeled subgraph density and homomorphism density are asymptotically equivalent. For formula_0 being a complete graph formula_16, the homomorphism density and subgraph density are in fact equal (for formula_2 without self-loops), as the edges of formula_16 force all images under a graph homomorphism to be distinct.
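For small graphs, the homomorphism density can be evaluated directly from the definition by enumerating all maps between the vertex sets. The following Python sketch (illustrative only; the encoding of graphs as vertex lists and edge lists is an arbitrary choice) computes t(H, G) by brute force and checks the triangle and edge densities of the 4-cycle.

    # Brute-force homomorphism density t(H, G) for small graphs given as vertex and edge lists.
    from itertools import product

    def hom_count(H_vertices, H_edges, G_vertices, G_adj):
        """Count maps V(H) -> V(G) sending every edge of H to an edge of G."""
        count = 0
        for image in product(G_vertices, repeat=len(H_vertices)):
            assignment = dict(zip(H_vertices, image))
            if all((assignment[u], assignment[v]) in G_adj for u, v in H_edges):
                count += 1
        return count

    def hom_density(H_vertices, H_edges, G_vertices, G_edges):
        # Store G's edges symmetrically so that adjacency checks work in both directions.
        G_adj = set(G_edges) | {(v, u) for u, v in G_edges}
        return hom_count(H_vertices, H_edges, G_vertices, G_adj) / len(G_vertices) ** len(H_vertices)

    # H = K_3 (triangle) and G = C_4 (4-cycle): C_4 is bipartite, so its triangle density is 0.
    K3 = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
    K2 = ([0, 1], [(0, 1)])
    C4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
    print(hom_density(*K3, *C4))   # 0.0
    print(hom_density(*K2, *C4))   # 0.5, the edge density 2|E| / |V|^2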
Generalization to graphons.
The notion of homomorphism density can be generalized to the case where instead of a graph formula_2, we have a graphon formula_17,
formula_18
Note that the integrand is a product that runs over the edges in the subgraph formula_0, whereas the differential is a product running over the vertices in formula_0. Intuitively, each vertex formula_19 in formula_0 is represented by the variable formula_20
For example, the triangle density in a graphon is given by
formula_21.
This definition of homomorphism density is indeed a generalization, because for every graph formula_2 and its associated step graphon formula_22, formula_23.
The definition can be further extended to all symmetric, measurable functions formula_17. The following example demonstrates the benefit of this further generalization. Relative to the function formula_24, the density of formula_0 in formula_17 is the number of Eulerian orientations of formula_0.
This notion is helpful in understanding the asymptotic behavior of homomorphism densities of graphs satisfying a given property, since a graphon is a limit of a convergent sequence of graphs.
Inequalities.
Many results in extremal graph theory can be described by inequalities involving homomorphism densities associated to a graph. The following are a sequence of examples relating the density of triangles to the density of edges.
Turán's Theorem.
A classic example is Turán's Theorem, which states that if formula_25, then formula_26. A special case of this is Mantel's Theorem, which states that if formula_27, then formula_28.
Goodman's Theorem.
An extension of Mantel's Theorem provides an explicit lower bound on triangle densities in terms of edge densities.
Theorem (Goodman). formula_29
Kruskal-Katona Theorem.
A converse inequality to Goodman's Theorem is a special case of Kruskal–Katona theorem, which states that formula_30. It turns out that both of these inequalities are tight for specific edge densities.
"Proof." It is sufficient to prove this inequality for any graph formula_2. Say formula_2 is a graph on formula_31 vertices, and formula_32 are the eigenvalues of its adjacency matrix formula_33. By spectral graph theory, we know
formula_34, and formula_35.
The conclusion then comes from the following inequality:
formula_36.
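The spectral identities used in this proof can be illustrated numerically on a small example. The following Python sketch (illustrative only; the random graph, its size, and the seed are arbitrary choices) checks that the eigenvalue power sums agree with the homomorphism counts and that the resulting bound holds.

    # Numerical check of hom(K_2, G) = sum(eigs^2), hom(K_3, G) = sum(eigs^3)
    # and of the bound sum(eigs^3) <= (sum(eigs^2))^(3/2) on a small random graph.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    A = np.triu((rng.random((n, n)) < 0.5).astype(float), k=1)
    A = A + A.T                                # symmetric 0/1 adjacency matrix, zero diagonal

    eigs = np.linalg.eigvalsh(A)
    hom_K2 = np.trace(A @ A)                   # homomorphisms of K_2 = 2 * (number of edges)
    hom_K3 = np.trace(A @ A @ A)               # homomorphisms of K_3 = 6 * (number of triangles)

    assert np.isclose(hom_K2, np.sum(eigs ** 2))
    assert np.isclose(hom_K3, np.sum(eigs ** 3))
    print(hom_K3 <= np.sum(eigs ** 2) ** 1.5)  # True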
Description of triangle vs edge density.
A more complete description of the relationship between formula_37 and formula_38 was proven by Razborov. His work from 2008 completes the understanding of a homomorphism inequality problem, the description of formula_39, which is the region of feasible pairs of edge density and triangle density in a graphon: formula_40. The upper boundary of the region is tight and given by the Kruskal-Katona Theorem. The lower boundary is the main result of the work by Razborov, which provides the complete description.
Useful tools.
Cauchy-Schwarz.
One particularly useful inequality to analyze homomorphism densities is the Cauchy–Schwarz inequality. The effect of applying the Cauchy-Schwarz inequality is "folding" the graph over a line of symmetry to relate it to a smaller graph. This allows for the reduction of densities of large but symmetric graphs to that of smaller graphs. As an example, we prove that the cycle of length 4 is Sidorenko. If the vertices are labelled 1,2,3,4 in that order, the diagonal through vertices 1 and 3 is a line of symmetry. Folding over this line relates formula_41 to the complete bipartite graph formula_42. Mathematically, this is formalized as
formula_43
where we applied Cauchy-Schwarz to "fold" vertex 2 onto vertex 4. The same technique can be used to show formula_44, which combined with the above verifies that formula_41 is a Sidorenko graph.
The generalization Hölder's inequality can also be used in a similar manner to fold graphs multiple times with a single step. It is also possible to apply the more general form of Cauchy-Schwarz to fold graphs in the case that certain edges lie on the line of symmetry.
Lagrangian.
The Lagrangian can be useful in analyzing extremal problems. The quantity is defined to be
formula_45.
One useful fact is that a maximizing vector formula_46 is equally supported on the vertices of a clique in formula_0. The following is an application of analyzing this quantity.
According to Hamed Hatami and Sergei Norine, one can convert any algebraic inequality between homomorphism densities to a linear inequality. In some situations, deciding whether such an inequality is true or not can be simplified, as is the case in the following theorem.
Theorem (Bollobás). Let formula_47 be real constants. Then, the inequality
formula_48
holds for every graph formula_2 if and only if it holds for every complete graph formula_49.
However, we get a much harder problem, in fact an undecidable one, when we have homomorphism density inequalities on a more general set of graphs formula_50:
Theorem (Hatami, Norine). Let formula_47 be real constants, and formula_51 graphs. Then, it is an undecidable problem to determine whether the homomorphism density inequality
formula_52
holds for every graph formula_2. A recent observation proves that any linear homomorphism density inequality is a consequence of the positive semi-definiteness of a certain infinite matrix, or, equivalently, of the positivity of a quantum graph; in other words, any such inequality would follow from applications of the Cauchy-Schwarz inequality.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "t(H,-)"
},
{
"math_id": 2,
"text": "G"
},
{
"math_id": 3,
"text": "t(H,G):=\\frac{\\left|\\operatorname{hom}(H,G)\\right|}{|V(G)|^{|V(H)|}}"
},
{
"math_id": 4,
"text": "\\operatorname{hom}(H,G)"
},
{
"math_id": 5,
"text": "t(K_{2},G)"
},
{
"math_id": 6,
"text": "k-1"
},
{
"math_id": 7,
"text": "\\operatorname{hom}(P_k, G)"
},
{
"math_id": 8,
"text": "\\operatorname{hom}(C_k, G) = \\operatorname{Tr}(A^k)"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "k"
},
{
"math_id": 11,
"text": "t(G, K_k)"
},
{
"math_id": 12,
"text": "d(H,G):=\\frac{\\# \\text{ labeled copies of } H \\text{ in } G}{|V(G)|^{|V(H)|}}"
},
{
"math_id": 13,
"text": "|V(H)|"
},
{
"math_id": 14,
"text": "O(n^{|V(H)|-1})"
},
{
"math_id": 15,
"text": "t(H,G)=d(H,G)+O(1/n)"
},
{
"math_id": 16,
"text": "K_m"
},
{
"math_id": 17,
"text": "W"
},
{
"math_id": 18,
"text": "t(H,W)=\\int_{[0,1]^{|V(H)|}}\\prod_{ij\\in E(H)}W(x_{i},x_{j})\\prod_{i\\in V(H)}dx_{i}"
},
{
"math_id": 19,
"text": "i"
},
{
"math_id": 20,
"text": "x_{i}."
},
{
"math_id": 21,
"text": "t(K_3, W) = \\int\\limits_{[0,1]^3} W(x,y)W(y,z)W(z,x) dx dy dz"
},
{
"math_id": 22,
"text": "W_{G}"
},
{
"math_id": 23,
"text": "t(H,G)=t(H,W_{G})"
},
{
"math_id": 24,
"text": "W(x,y) = 2\\cos(2\\pi(x-y))"
},
{
"math_id": 25,
"text": "t(K_{r},W)=0"
},
{
"math_id": 26,
"text": "t(K_{2},W) \\leq \\left(1-\\frac{1}{r-1}\\right)"
},
{
"math_id": 27,
"text": "t(K_{3},W)=0"
},
{
"math_id": 28,
"text": "t(K_{2},W)\\leq 1/2"
},
{
"math_id": 29,
"text": "t(K_3, G) \\geq t(K_2, G)(2t(K_2, G) - 1)."
},
{
"math_id": 30,
"text": "t(K_3, G) \\leq t(K_2, G)^{3/2}"
},
{
"math_id": 31,
"text": "n"
},
{
"math_id": 32,
"text": "\\{\\lambda_{i}\\}_{i=1}^{n}"
},
{
"math_id": 33,
"text": "A_{G}"
},
{
"math_id": 34,
"text": "\\operatorname{hom}(K_{2},G)=t(K_{2},G)|V(G)|^{2}=\\sum_{i=1}^{n}\\lambda_{i}^{2}"
},
{
"math_id": 35,
"text": "\\operatorname{hom}(K_{3},G)=t(K_{3},G)|V(G)|^{3}=\\sum_{i=1}^{n}\\lambda_{i}^{3}"
},
{
"math_id": 36,
"text": "\\operatorname{hom}(K_{3},G)=\\sum_{i=1}^{n}\\lambda_{i}^{3}\\leq\\left(\\sum_{i=1}^{n}\\lambda_{i}^{2}\\right)^{3/2}=\\operatorname{hom}(K_{2},G)^{3/2}"
},
{
"math_id": 37,
"text": "t(K_3, G)"
},
{
"math_id": 38,
"text": "t(K_2, G)"
},
{
"math_id": 39,
"text": "D_{2,3}"
},
{
"math_id": 40,
"text": "D_{2,3}=\\{(t(K_{2},W),t(K_{3},W))\\;:\\;W\\text{ is a graphon}\\}\\subseteq [0,1]^{2}"
},
{
"math_id": 41,
"text": "C_4"
},
{
"math_id": 42,
"text": "K_{1,2}"
},
{
"math_id": 43,
"text": "\\begin{align}\nt(C_4, G) &= \\int_{1,2,3,4} W(1,2)W(2,3)W(3,4)W(1,4) = \\int_{1,3}\\left(\\int_2 W(1,2)W(2,3)\\right)\\left(\\int_4 W(1,4)W(4,3)\\right) = \\int_{1,3}\\left(\\int_2 W(1,2)W(2,3)\\right)^2 \\\\\n&\\geq \\left(\\int_{1,2,3} W(1,2)W(2,3)\\right)^2 = t(K_{1,2}, G)^2\n\\end{align}"
},
{
"math_id": 44,
"text": "t(K_{1,2}, G) \\geq t(K_2, G)^2"
},
{
"math_id": 45,
"text": "L(H) = \\max_{\\begin{matrix}x_1, \\ldots, x_n \\geq 0 \\\\ x_1 + \\cdots x_n = 1 \\end{matrix} } \\sum_{e \\in E(H)} \\prod_{v \\in e} x_v"
},
{
"math_id": 46,
"text": "x"
},
{
"math_id": 47,
"text": "a_{1},\\cdots,a_{n}"
},
{
"math_id": 48,
"text": "\\sum_{i=1}^{n}a_{i}t(K_{i},G)\\geq 0"
},
{
"math_id": 49,
"text": "K_{m}"
},
{
"math_id": 50,
"text": "H_{i}"
},
{
"math_id": 51,
"text": "\\{H_{i}\\}_{i=1}^{n}"
},
{
"math_id": 52,
"text": "\\sum_{i=1}^{n}a_{r}t(H_{i},G)\\geq 0"
}
] |
https://en.wikipedia.org/wiki?curid=62469111
|
62469606
|
Darmois–Skitovich theorem
|
If 2 linear forms on independent random variables are independent, the variables are normal
In mathematical statistics, the Darmois–Skitovich theorem characterizes the normal distribution (the Gaussian distribution) by the independence of two linear forms from independent random variables. This theorem was proved independently by G. Darmois and V. P. Skitovich in 1953.
Formulation.
Let formula_0 be independent random variables. Let formula_1 be nonzero constants. If the linear forms formula_2 and formula_3 are independent then all random variables formula_4 have normal distributions (Gaussian distributions).
History.
The Darmois–Skitovich theorem is a generalization of the Kac–Bernstein theorem in which the normal distribution (the Gaussian distribution) is characterized by the independence of the sum and the difference of two independent random variables. For a history of proving the theorem by V. P. Skitovich, see the article
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\xi_j, j = 1, 2, \\ldots, n, n \\ge 2"
},
{
"math_id": 1,
"text": "\\alpha_j, \\beta_j"
},
{
"math_id": 2,
"text": "L_1 = \\alpha_1\\xi_1 + \\cdots + \\alpha_n\\xi_n"
},
{
"math_id": 3,
"text": "L_2 = \\beta_1\\xi_1 + \\cdots + \\beta_n\\xi_n "
},
{
"math_id": 4,
"text": "\\xi_j"
}
] |
https://en.wikipedia.org/wiki?curid=62469606
|
62469857
|
Kac–Bernstein theorem
|
The Kac–Bernstein theorem is one of the first characterization theorems of mathematical statistics. It is easy to see that if the random variables formula_0 and formula_1 are independent and normally distributed with the same variance, then their sum and difference are also independent. The Kac–Bernstein theorem states that the independence of the sum and difference of two independent random variables characterizes the normal distribution (the Gauss distribution). This theorem was proved independently by Polish-American mathematician Mark Kac and Soviet mathematician Sergei Bernstein.
Formulation.
Let formula_0 and formula_1 be independent random variables. If formula_2 and formula_3 are independent then formula_0 and formula_1 have normal distributions (the Gaussian distribution).
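The theorem is a characterization result and cannot be proved by simulation, but the phenomenon behind it can be illustrated numerically: when formula_0 and formula_1 are independent normal variables, formula_2 and formula_3 behave as independent quantities, whereas for a non-normal law a dependence appears. The following Python sketch (illustrative only; the sample size, seed, and the choice of the uniform distribution as the non-normal example are arbitrary) compares the correlation between the absolute values of the sum and the difference in the two cases.

    # Monte Carlo illustration: the sum and difference of i.i.d. normals look independent,
    # while for a uniform law the absolute values of the sum and difference are clearly correlated.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 1_000_000

    def abs_corr(pair):
        s, d = pair[0] + pair[1], pair[0] - pair[1]
        return np.corrcoef(np.abs(s), np.abs(d))[0, 1]

    normal_pair = rng.normal(size=(2, N))
    uniform_pair = rng.uniform(-1, 1, size=(2, N))

    print(f"normal:  corr(|sum|, |diff|) = {abs_corr(normal_pair):+.4f}")   # approximately 0
    print(f"uniform: corr(|sum|, |diff|) = {abs_corr(uniform_pair):+.4f}")  # clearly nonzero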
Generalization.
A generalization of the Kac–Bernstein theorem is the Darmois–Skitovich theorem, in which instead of sum and difference linear forms from "n" independent random variables are considered.
|
[
{
"math_id": 0,
"text": "\\xi"
},
{
"math_id": 1,
"text": "\\eta"
},
{
"math_id": 2,
"text": "\\xi+\\eta"
},
{
"math_id": 3,
"text": "\\xi-\\eta"
}
] |
https://en.wikipedia.org/wiki?curid=62469857
|
624708
|
Persistence of a number
|
Property of a number
In mathematics, the persistence of a number is the number of times one must apply a given operation to an integer before reaching a fixed point at which the operation no longer alters the number.
Usually, this involves additive or multiplicative persistence of a non-negative integer, which is how often one has to replace the number by the sum or product of its digits until one reaches a single digit. Because the numbers are broken down into their digits, the additive or multiplicative persistence depends on the radix. In the remainder of this article, base ten is assumed.
The single-digit final state reached in the process of calculating an integer's additive persistence is its digital root. Put another way, a number's additive persistence counts how many times we must sum its digits to arrive at its digital root.
Examples.
The additive persistence of 2718 is 2: first we find that 2 + 7 + 1 + 8 = 18, and then that 1 + 8 = 9. The multiplicative persistence of 39 is 3, because it takes three steps to reduce 39 to a single digit: 39 → 27 → 14 → 4. Also, 39 is the smallest number of multiplicative persistence 3.
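Both kinds of persistence can be computed by transcribing the definitions directly. The following Python sketch (illustrative only; the function names are arbitrary) reproduces the examples above.

    # Direct transcription of the definitions of additive and multiplicative persistence (base ten).
    from math import prod

    def additive_persistence(n: int) -> int:
        steps = 0
        while n >= 10:
            n = sum(int(d) for d in str(n))
            steps += 1
        return steps

    def multiplicative_persistence(n: int) -> int:
        steps = 0
        while n >= 10:
            n = prod(int(d) for d in str(n))
            steps += 1
        return steps

    print(additive_persistence(2718))                   # 2   (2718 -> 18 -> 9)
    print(multiplicative_persistence(39))               # 3   (39 -> 27 -> 14 -> 4)
    print(multiplicative_persistence(277777788888899))  # 11  (largest known multiplicative persistence)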
Smallest numbers of a given multiplicative persistence.
In base 10, there is thought to be no number with a multiplicative persistence greater than 11; this is known to be true for numbers up to 2.67×10^30000. The smallest numbers with persistence 0, 1, 2, ... are:
0, 10, 25, 39, 77, 679, 6788, 68889, 2677889, 26888999, 3778888999, 277777788888899. (sequence in the OEIS)
The search for these numbers can be sped up by using additional properties of the decimal digits of these record-breaking numbers. These digits must be in increasing order (with the exception of the second number, 10), and – except for the first two digits – all digits must be 7, 8, or 9. There are also additional restrictions on the first two digits.
Based on these restrictions, the number of candidates for "n"-digit numbers with record-breaking persistence is only proportional to the square of "n", a tiny fraction of all possible "n"-digit numbers. However, any number that is missing from the sequence above would have multiplicative persistence > 11; such numbers are believed not to exist, and would need to have over 20,000 digits if they do exist.
Properties of additive persistence.
More about the additive persistence of a number can be found here.
Smallest numbers of a given additive persistence.
The additive persistence of a number, however, can become arbitrarily large (proof: for a given number formula_3, the persistence of the number consisting of formula_3 repetitions of the digit 1 is 1 higher than that of formula_3). The smallest numbers of additive persistence 0, 1, 2, ... are:
0, 10, 19, 199, 19999999999999999999999, ... (sequence in the OEIS)
The next number in the sequence (the smallest number of additive persistence 5) is 2 × 10^(2×(10^22 − 1)/9) − 1 (that is, 1 followed by 2222222222222222222222 9's). For any fixed base, the sum of the digits of a number is at most proportional to its logarithm; therefore, the additive persistence is at most proportional to the iterated logarithm, and the smallest number of a given additive persistence grows tetrationally.
Functions with limited persistence.
Some functions only allow persistence up to a certain degree.
For example, the function which takes the minimal digit only allows for persistence 0 or 1, as you either start with or step to a single-digit number.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "b"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "n>9"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "n \\cdot b^k"
}
] |
https://en.wikipedia.org/wiki?curid=624708
|
624714
|
Thomson scattering
|
Low energy photon scattering off charged particles
Thomson scattering is the elastic scattering of electromagnetic radiation by a free charged particle, as described by classical electromagnetism. It is the low-energy limit of Compton scattering: the particle's kinetic energy and photon frequency do not change as a result of the scattering. This limit is valid as long as the photon energy is much smaller than the mass energy of the particle: formula_0, or equivalently, if the wavelength of the light is much greater than the Compton wavelength of the particle (e.g., for electrons, longer wavelengths than hard x-rays).
Description of the phenomenon.
Thomson scattering is a model for the effect of electromagnetic fields on electrons when the field energy formula_1 is much less than the rest energy of the electron formula_2. In the model, the electric field of the incident wave accelerates the charged particle, causing it, in turn, to emit radiation at the same frequency as the incident wave, and thus the wave is scattered. Thomson scattering is an important phenomenon in plasma physics and was first explained by the physicist J. J. Thomson. As long as the motion of the particle is non-relativistic (i.e. its speed is much less than the speed of light), the acceleration of the particle will be due mainly to the electric field component of the incident wave. In a first approximation, the influence of the magnetic field can be neglected. The particle will move in the direction of the oscillating electric field, resulting in electromagnetic dipole radiation. The moving particle radiates most strongly in a direction perpendicular to its acceleration and that radiation will be polarized along the direction of its motion. Therefore, depending on where an observer is located, the light scattered from a small volume element may appear to be more or less polarized.
The electric fields of the incoming and observed wave (i.e. the outgoing wave) can be divided up into those components lying in the plane of observation (formed by the incoming and observed waves) and those components perpendicular to that plane. Those components lying in the plane are referred to as "radial" and those perpendicular to the plane are "tangential". (It is difficult to make these terms seem natural, but it is standard terminology.)
The diagram on the right depicts the plane of observation. It shows the radial component of the incident electric field, which causes the charged particles at the scattering point to exhibit a radial component of acceleration (i.e., a component tangent to the plane of observation). It can be shown that the amplitude of the observed wave will be proportional to the cosine of χ, the angle between the incident and observed waves. The intensity, which is the square of the amplitude, will then be diminished by a factor of cos2(χ). It can be seen that the tangential components (perpendicular to the plane of the diagram) will not be affected in this way.
The scattering is best described by an emission coefficient which is defined as ε where ε dt dV dΩ dλ is the energy scattered by a volume element formula_3 in time dt into solid angle dΩ between wavelengths λ and λ+dλ. From the point of view of an observer, there are two emission coefficients, ε_r corresponding to radially polarized light and ε_t corresponding to tangentially polarized light. For unpolarized incident light, these are given by:
formula_4
where formula_5 is the density of charged particles at the scattering point, formula_6 is the incident flux (i.e. energy/time/area/wavelength), formula_7 is the angle between the incident and scattered photons (see figure above) and formula_8 is the Thomson cross section for the charged particle, defined below. The total energy radiated by a volume element formula_3 in time dt between wavelengths λ and λ+dλ is found by integrating the sum of the emission coefficients over all directions (solid angle):
formula_9
The Thomson differential cross section, related to the sum of the emissivity coefficients, is given by
formula_10
expressed in SI units; q is the charge per particle, m the mass of the particle, and formula_11 a constant, the permittivity of free space. (To obtain an expression in cgs units, drop the factor of 4πε_0.)
formula_12
in SI units.
The important feature is that the cross section is independent of light frequency. The cross section is proportional by a simple numerical factor to the square of the classical radius of a point particle of mass m and charge q, namely
formula_13
Alternatively, this can be expressed in terms of formula_14, the Compton wavelength, and the fine structure constant:
formula_15
For an electron, the Thomson cross-section is numerically given by:
formula_16
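The quoted numerical value can be reproduced from the formula above using CODATA constants. The following Python sketch (illustrative only; it assumes SciPy is available for the physical constants) evaluates the classical electron radius and the Thomson cross section.

    # Evaluate the Thomson cross section for the electron from the formula above.
    from scipy.constants import e, epsilon_0, m_e, c, pi

    r_e = e**2 / (4 * pi * epsilon_0 * m_e * c**2)   # classical electron radius, ~2.818e-15 m
    sigma_t = 8 * pi / 3 * r_e**2

    print(f"r_e     = {r_e:.6e} m")
    print(f"sigma_t = {sigma_t:.6e} m^2")             # ~6.652e-29 m^2, i.e. about 0.665 barn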
Examples of Thomson scattering.
The cosmic microwave background contains a small linearly-polarized component attributed to Thomson scattering. That polarized component mapping out the so-called E-modes was first detected by DASI in 2002.
The solar K-corona is the result of the Thomson scattering of solar radiation from solar coronal electrons. The ESA and NASA SOHO mission and the NASA STEREO mission generate three-dimensional images of the electron density around the Sun by measuring this K-corona from three separate satellites.
In tokamaks, the corona of ICF targets and other experimental fusion devices, the electron temperatures and densities in the plasma can be measured with high accuracy by detecting the effect of Thomson scattering of a high-intensity laser beam. An upgraded Thomson scattering system in the Wendelstein 7-X stellarator emits multiple laser pulses in quick succession. The intervals within each burst can range from 2 ms to 33.3 ms, permitting up to twelve consecutive measurements. Synchronization with plasma events is made possible by a newly added trigger system that facilitates real-time analysis of transient plasma events.
In the Sunyaev–Zeldovich effect, where the photon energy is much less than the electron rest mass, the inverse-Compton scattering can be approximated as Thomson scattering in the rest frame of the electron.
Models for X-ray crystallography are based on Thomson scattering.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\nu\\ll mc^2/h "
},
{
"math_id": 1,
"text": "h\\nu"
},
{
"math_id": 2,
"text": "m_0c^2"
},
{
"math_id": 3,
"text": "dV "
},
{
"math_id": 4,
"text": "\\begin{align}\n\\varepsilon_t &= \\frac{3}{16\\pi} \\sigma_t In \\\\[1ex]\n\\varepsilon_r &= \\frac{3}{16\\pi}\\sigma_t In \\cos^2\\chi\n\\end{align}"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "I"
},
{
"math_id": 7,
"text": "\\chi"
},
{
"math_id": 8,
"text": "\\sigma_t"
},
{
"math_id": 9,
"text": "\n\\int\\varepsilon \\, d\\Omega = \\int_0^{2\\pi} d\\varphi \\int_0^\\pi d\\chi (\\varepsilon_t + \\varepsilon_r) \\sin \\chi = I \\frac{3 \\sigma_t}{16\\pi} n 2 \\pi (2 + 2/3) = \\sigma_t I n.\n"
},
{
"math_id": 10,
"text": "\n\\frac{d\\sigma_t}{d\\Omega} = \\left(\\frac{q^2}{4\\pi\\varepsilon_0 mc^2}\\right)^2 \\frac{1+\\cos^2\\chi} 2\n"
},
{
"math_id": 11,
"text": "\\varepsilon_0"
},
{
"math_id": 12,
"text": "\n\\sigma_t = \\frac{8\\pi} 3 \\left(\\frac{q^2}{4\\pi\\varepsilon_0 mc^2}\\right)^2\n"
},
{
"math_id": 13,
"text": "\\sigma_t = \\frac{8\\pi} 3 r_e^2"
},
{
"math_id": 14,
"text": "\\lambda_c"
},
{
"math_id": 15,
"text": "\n\\sigma_t = \\frac{8 \\pi} 3 \\left(\\frac{\\alpha \\lambda_c}{2\\pi}\\right)^2\n"
},
{
"math_id": 16,
"text": "\n\\sigma_t =\\frac{8 \\pi} 3 \\left(\\frac{\\alpha \\hbar c}{m c^2}\\right)^2 = 6.652 458 7321(60)\\times 10^{-29} \\text{ m}^2 \\approx 66.5 \\text{ fm}^2 = 0.665 \\text{ b}\n"
}
] |
https://en.wikipedia.org/wiki?curid=624714
|
62471938
|
Counting lemma
|
The counting lemmas this article discusses are statements in combinatorics and graph theory. The first one extracts information from formula_0-regular pairs of subsets of vertices in a graph formula_1, in order to guarantee patterns in the entire graph; more explicitly, these patterns correspond to the count of copies of a certain graph formula_2 in formula_1. The second counting lemma provides a similar yet more general notion on the space of graphons, in which the cut distance between two graphons controls the difference between their homomorphism densities with respect to formula_2.
Graph embedding version of counting lemma.
Whenever we have an formula_0-regular pair of subsets of vertices formula_3 in a graph formula_1, we can interpret this in the following way: the bipartite graph formula_4, which has density formula_5, is "close" to being a random bipartite graph in which every edge appears with probability formula_5, up to some formula_0 error.
In a setting where we have several clusters of vertices, some of the pairs between these clusters being formula_6-regular, we would expect the count of small, or local patterns, to be roughly equal to the count of such patterns in a random graph. These small patterns can be, for instance, the number of graph embeddings of some formula_2 in formula_1, or more specifically, the number of copies of formula_2 in formula_1 formed by taking one vertex in each vertex cluster.
The above intuition works, yet there are several important conditions that must be satisfied in order to have a complete statement of the theorem; for instance, the pairwise densities must be at least formula_7, the cluster sizes must be at least formula_8, and formula_9. Being more careful with these details, the statement of the graph counting lemma is as follows:
Statement of the theorem.
If formula_2 is a graph with vertices formula_10 and formula_11 edges, and formula_1 is a graph with (not necessarily disjoint) vertex subsets formula_12, such that formula_13 for all formula_14 and for every edge formula_15 of formula_2 the pair formula_16 is formula_6-regular with density formula_17 and formula_9, then formula_1 contains at least formula_18 many copies of formula_2 with the copy of vertex formula_19 in formula_20.
This theorem is a generalization of the triangle counting lemma, which states the above but with formula_21:
Triangle counting Lemma.
Let formula_1 be a graph on formula_22 vertices, and let formula_23 be subsets of formula_24 which are pairwise formula_0-regular, and suppose the edge densities formula_25 are all at least formula_26. Then the number of triples formula_27 such that formula_28 form a triangle in formula_1 is at least formula_29
"Proof of triangle counting lemma:".
Since formula_30 is a regular pair, less than formula_31 of the vertices in formula_32 have fewer than formula_33 neighbors in formula_34; otherwise, this set of vertices from formula_32 along with its neighbors in formula_34 would witness irregularity of formula_30, a contradiction. Intuitively, we are saying that not too many vertices in formula_32 can have a small degree in formula_34.
By an analogous argument in the pair formula_35, less than formula_31 of the vertices in formula_32 have fewer than formula_36 neighbors in formula_37. Combining these two subsets of formula_32 and taking their complement, we obtain a subset formula_38 of size at least formula_39 such that every vertex formula_40 has at least formula_33 neighbors in formula_34 and at least formula_36 neighbors in formula_37.
We also know that formula_41, so each of these neighborhoods contains at least an formula_7-fraction of formula_34 and of formula_37, respectively; since formula_42 is an formula_7-regular pair, the density between the neighborhood of formula_43 in formula_34 and the neighborhood of formula_43 in formula_37 is at least formula_44, because by regularity it is formula_7-close to the actual density between formula_34 and formula_37.
Summing up, for each of these at least formula_39 vertices formula_40, there are at least formula_45 choices of edges between the neighborhood of formula_43 in formula_34 and the neighborhood of formula_43 in formula_37, and each such edge completes a triangle with formula_43. Multiplying the two counts gives the claimed bound and concludes the proof.
"Idea of proof of graph counting lemma:" The general proof of the graph counting lemma extends this argument through a greedy embedding strategy; namely, the vertices of formula_46 are embedded into the graph one by one, using the regularity condition to maintain a sufficiently large set of candidate vertices into which the next vertex can be embedded.
Graphon version of counting lemma.
The space formula_47 of graphons is given the structure of a metric space where the metric is the cut distance formula_48. The following lemma is an important step in the proof that formula_49 is a compact metric space. Intuitively, it says that for a graph formula_50, the homomorphism densities of two graphons with respect to this graph have to be close (with a bound depending on the number of edges of formula_50) if the graphons are close in cut distance.
Definition (cut norm)..
The cut norm of formula_51 is defined as formula_52, where formula_53 and formula_54 are measurable sets.
Definition (cut distance)..
The cut distance is defined as formula_55, where formula_56 represents formula_57 for a measure-preserving bijection formula_58.
Graphon Counting Lemma.
For graphons formula_59 and graph formula_50, we have formula_60, where formula_61 denotes the number of edges of graph formula_50.
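The lemma can be illustrated numerically for formula_74 using the step graphons of two small labeled graphs on the same vertex set; for step graphons the cut norm of a difference is attained on unions of vertex blocks, so it can be found by brute force over vertex subsets. The following Python sketch (illustrative only; the graph size, seed, and random graphs are arbitrary choices) checks the bound with the cut norm of the labeled difference, which is the inequality established in the proof below.

    # Check |t(K_3, W_A) - t(K_3, W_B)| <= 3 * ||W_A - W_B||_cut for the step graphons of two
    # random labeled graphs on the same vertex set; the cut norm is found by brute force.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 5

    def random_adjacency():
        upper = np.triu(rng.integers(0, 2, size=(n, n)), k=1)
        return upper + upper.T

    def triangle_density(M):
        return np.trace(M @ M @ M) / n**3                  # t(K_3, W_G) for the step graphon of G

    def cut_norm_diff(M1, M2):
        D = (M1 - M2) / n**2
        subsets = [[i for i in range(n) if mask >> i & 1] for mask in range(1 << n)]
        return max(abs(D[np.ix_(S, T)].sum()) for S in subsets for T in subsets if S and T)

    A, B = random_adjacency(), random_adjacency()
    lhs = abs(triangle_density(A) - triangle_density(B))
    rhs = 3 * cut_norm_diff(A, B)
    print(lhs, "<=", rhs, lhs <= rhs + 1e-12)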
"Proof of the graphon counting lemma:".
It suffices to prove formula_62 Indeed, by considering the above with the right hand side having a factor formula_63 instead of formula_64, and taking the infimum over all measure-preserving bijections formula_65, we obtain the desired result.
"Step 1: Reformulation." We prove a reformulation of the cut norm, which is by definition the left hand side of the following equality. The supremum in the right hand side is taken among measurable functions formula_66 and formula_67:formula_68
Here is the reason for the above to hold: by taking formula_69 and formula_70, we note that the left hand side is less than or equal to the right hand side. The right hand side is less than or equal to the left hand side by bilinearity of the integrand in formula_71, and by the fact that the extrema are attained when formula_71 take only the values formula_72 and formula_73.
"Step 2: Proof for formula_74." In the case that formula_74, we observe that
formula_75
By Step 1, we have, for a fixed formula_76, that formula_77 Therefore, integrating over all formula_78, we get that formula_79 Using this bound on each of the three summands, we get that the whole sum is bounded by formula_80.
"Step 3: General case." For a general graph formula_50, we need the following lemma to make everything more convenient:
Lemma..
The following expression holds: formula_81
The above lemma follows from a straightforward expansion of the right hand side. Then, by the triangle inequality for norms, we have the following: formula_82
Here, each absolute value term in the sum is bounded by the cut norm formula_83 once we fix all the variables except formula_84 and formula_85 in the formula_19-th term; altogether this implies that formula_86. This finishes the proof.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\epsilon"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "H"
},
{
"math_id": 3,
"text": "U,V"
},
{
"math_id": 4,
"text": "(U,V)"
},
{
"math_id": 5,
"text": "d(U,V)"
},
{
"math_id": 6,
"text": "\\gamma"
},
{
"math_id": 7,
"text": "\\varepsilon"
},
{
"math_id": 8,
"text": "\\gamma^{-1}"
},
{
"math_id": 9,
"text": "\\gamma\\leq \\varepsilon^{h}/4h"
},
{
"math_id": 10,
"text": "1,\\cdots,h"
},
{
"math_id": 11,
"text": "m"
},
{
"math_id": 12,
"text": "W_{1},\\cdots,W_{h}"
},
{
"math_id": 13,
"text": "|W_{i}|\\geq\\gamma^{-1}"
},
{
"math_id": 14,
"text": "i=1,\\ldots,h"
},
{
"math_id": 15,
"text": "(i,j)"
},
{
"math_id": 16,
"text": "(W_{i},W_{j})"
},
{
"math_id": 17,
"text": "d(W_{i},W_{j})>\\varepsilon"
},
{
"math_id": 18,
"text": "2^{-h}\\varepsilon^{m}|W_{1}|\\cdot\\ldots\\cdot|W_{h}|"
},
{
"math_id": 19,
"text": "i"
},
{
"math_id": 20,
"text": "W_{i}"
},
{
"math_id": 21,
"text": "H=K_{3}"
},
{
"math_id": 22,
"text": "n"
},
{
"math_id": 23,
"text": "X, Y, Z"
},
{
"math_id": 24,
"text": "V(G)"
},
{
"math_id": 25,
"text": "d_{XY}, d_{XZ}, d_{YZ}"
},
{
"math_id": 26,
"text": "2\\epsilon"
},
{
"math_id": 27,
"text": "(x,y,z) \\in X \\times Y \\times Z"
},
{
"math_id": 28,
"text": "x, y, z"
},
{
"math_id": 29,
"text": "(1-2\\epsilon)(d_{XY}-\\epsilon)(d_{XZ}-\\epsilon)(d_{YZ}-\\epsilon)|X||Y||Z|. "
},
{
"math_id": 30,
"text": "(X,Y)"
},
{
"math_id": 31,
"text": "\\varepsilon|X|"
},
{
"math_id": 32,
"text": "X"
},
{
"math_id": 33,
"text": "(d_{XY}-\\varepsilon)|Y|"
},
{
"math_id": 34,
"text": "Y"
},
{
"math_id": 35,
"text": "(X,Z)"
},
{
"math_id": 36,
"text": "(d_{XZ}-\\varepsilon)|Z|"
},
{
"math_id": 37,
"text": "Z"
},
{
"math_id": 38,
"text": "X'\\subseteq X"
},
{
"math_id": 39,
"text": "(1-2\\varepsilon)|X|"
},
{
"math_id": 40,
"text": "x\\in X'"
},
{
"math_id": 41,
"text": "d_{XY},d_{XZ}\\geq 2\\varepsilon"
},
{
"math_id": 42,
"text": "(Y,Z)"
},
{
"math_id": 43,
"text": "x"
},
{
"math_id": 44,
"text": "(d_{YZ}-\\varepsilon)"
},
{
"math_id": 45,
"text": "(d_{XY}-\\epsilon)(d_{XZ}-\\epsilon)(d_{YZ}-\\epsilon)|Y||Z| "
},
{
"math_id": 46,
"text": "H "
},
{
"math_id": 47,
"text": "\\tilde{\\mathcal{W}}_0"
},
{
"math_id": 48,
"text": "\\delta_{\\Box}"
},
{
"math_id": 49,
"text": "(\\tilde{\\mathcal{W}}_0, \\delta_{\\Box})"
},
{
"math_id": 50,
"text": "F"
},
{
"math_id": 51,
"text": "W:[0,1]^2\\to \\mathbb{R}"
},
{
"math_id": 52,
"text": "\\|W\\|_{\\square} = \\sup_{S, T\\subseteq[0, 1]} \\left|\\int_{S\\times T} W\\right|"
},
{
"math_id": 53,
"text": "S"
},
{
"math_id": 54,
"text": "T"
},
{
"math_id": 55,
"text": "\\delta_{\\square}(U, W) = \\inf_{\\phi} ||U - W^{\\phi}||_{\\square} "
},
{
"math_id": 56,
"text": "W^{\\phi}(x, y) "
},
{
"math_id": 57,
"text": "W(\\phi(x), \\phi(y))\n "
},
{
"math_id": 58,
"text": "\\phi\n "
},
{
"math_id": 59,
"text": "W,U"
},
{
"math_id": 60,
"text": "|t(F,W)-t(F,U)|\\le |E(F)|\\delta_{\\square}(W,U)"
},
{
"math_id": 61,
"text": "|E(F)|"
},
{
"math_id": 62,
"text": "|t(F,W)-t(F,U)|\\le|E(F)|\\|W-U\\|_{\\square}."
},
{
"math_id": 63,
"text": "\\|W-U^{\\phi}\\|_{\\Box}"
},
{
"math_id": 64,
"text": "\\|W-U\\|_{\\Box}"
},
{
"math_id": 65,
"text": "\\phi"
},
{
"math_id": 66,
"text": "u"
},
{
"math_id": 67,
"text": "v"
},
{
"math_id": 68,
"text": "\\sup_{S,T\\subseteq [0,1]}\\left|\\int_{S\\times T}W\\right|=\\sup_{u,v:[0,1]\\rightarrow [0,1]}\\left| \\int_{[0,1]^2}W(x,y)u(x)v(y)dxdy\\right|."
},
{
"math_id": 69,
"text": "u=\\mathbb{1}_S"
},
{
"math_id": 70,
"text": "v=\\mathbb{1}_T"
},
{
"math_id": 71,
"text": "u,v"
},
{
"math_id": 72,
"text": "0"
},
{
"math_id": 73,
"text": "1"
},
{
"math_id": 74,
"text": "F=K_3"
},
{
"math_id": 75,
"text": "\n\\begin{aligned}\nt(K_3,W)-t(K_3,U)&=\\int_{[0,1]^{3}}((W(x,y)W(x,z)W(y,z)-U(x,y)U(x,z)U(y,z))dxdydz\\\\&=\\int_{[0,1]^{3}}(W-U)(x,y)W(x,z)W(y,z)dxdydz\\\\&\\qquad+\\int_{[0,1]^{3}} U(x,y)(W-U)(x,z)W(y,z)dxdydz\\\\&\\qquad+\\int_{[0,1]^{3}} U(x,y)U(x,z)(W-U)(y,z)dxdydz.\n\\end{aligned}\n"
},
{
"math_id": 76,
"text": "z"
},
{
"math_id": 77,
"text": "\\left|\\int_{[0,1]^{2}}(W-U)(x,y)W(x,z)W(y,z)dxdy\\right|\\le \\|W-U\\|_{\\square}"
},
{
"math_id": 78,
"text": "z\\in[0,1]"
},
{
"math_id": 79,
"text": "\\left|\\int_{[0,1]^{3}}(W-U)(x,y)W(x,z)W(y,z)dxdydz\\right|\\le \\|W-U\\|_{\\square}"
},
{
"math_id": 80,
"text": "3\\|W-U\\|_{\\square}"
},
{
"math_id": 81,
"text": "a_1a_2\\cdots a_n-b_1b_2\\cdots b_n=(a_1-b_1)a_2\\cdots a_n+b_1(a_2-b_2)\\cdots a_n+\\cdots\nb_1b_2\\cdots (a_n-b_n)."
},
{
"math_id": 82,
"text": "\\begin{aligned}\n\\left| t(F,W)-t(F,U)\\right| &=\\left| \\int\\left(\\prod_{u_iv_i\\in E(F)}W(u_i,v_i)-\\prod_{u_iv_i\\in E(F)}U(u_i,v_i)\\right)\\prod_{v\\in V}dv\\right|\\\\&\\le\\sum_{i=1}^{|E(F)|}\\left| \\int \\left(\\ \\prod_{j=1}^{i-1}U(u_j,v_j)(W(u_i,v_i)-U(u_i,v_i))\\prod_{k=i+1}^{|E(F)|}W(u_k,v_k)\\right)\\prod_{v\\in V}dv\\right|.\\end{aligned}"
},
{
"math_id": 83,
"text": "\\|W-U\\|_{\\square}"
},
{
"math_id": 84,
"text": "u_i"
},
{
"math_id": 85,
"text": "v_i"
},
{
"math_id": 86,
"text": "|t(F,W)-t(F,U)|\\le |E(F)|\\ \\delta_{\\square}(W,U)"
}
] |
https://en.wikipedia.org/wiki?curid=62471938
|
62472235
|
Plünnecke–Ruzsa inequality
|
In additive combinatorics, the Plünnecke–Ruzsa inequality is an inequality that bounds the size of various sumsets of a set formula_0, given that there is another set formula_1 so that formula_2 is not much larger than formula_1. A slightly weaker version of this inequality was originally proven and published by Helmut Plünnecke (1970).
Imre Ruzsa (1989) later published a simpler proof of the current, more general, version of the inequality.
The inequality forms a crucial step in the proof of Freiman's theorem.
Statement.
The following sumset notation is standard in additive combinatorics. For subsets formula_1 and formula_0 of an abelian group and a natural number formula_3, the following are defined:
formula_4
formula_5
formula_6
The set formula_7 is known as the sumset of formula_1 and formula_0.
Plünnecke-Ruzsa inequality.
The most commonly cited version of the statement of the Plünnecke–Ruzsa inequality is the following: if formula_1 and formula_0 are finite subsets of an abelian group such that formula_2 has size at most formula_8 times the size of formula_1, then for all nonnegative integers m and formula_9,
formula_10
This is often used when formula_11, in which case the constant formula_12 is known as the doubling constant of formula_1. In this case, the Plünnecke–Ruzsa inequality states that sumsets formed from a set with small doubling constant must also be small.
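The inequality can be checked by brute force on small sets of integers. The following Python sketch (illustrative only; the sets and the chosen values of m and formula_9 are arbitrary) computes formula_8 from the sets and verifies formula_10 in each case.

    # Brute-force check of the inequality |mB - nB| <= K^(m+n) |A| on a small example in the integers.
    from itertools import product

    def sumset(*sets):
        return {sum(t) for t in product(*sets)}

    def iterated_sumset(B, k):
        return {0} if k == 0 else sumset(*([B] * k))

    A = {0, 1, 2, 3}
    B = {0, 1, 10}
    K = len(sumset(A, B)) / len(A)                    # the constant K with |A + B| = K |A|

    for m, n in [(1, 1), (2, 1), (2, 2)]:
        mB, nB = iterated_sumset(B, m), iterated_sumset(B, n)
        difference = {x - y for x in mB for y in nB}  # the set mB - nB
        assert len(difference) <= K ** (m + n) * len(A)
        print(m, n, len(difference), K ** (m + n) * len(A))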
Plünnecke's inequality.
The version of this inequality that was originally proven by Plünnecke (1970) is slightly weaker.
Proof.
Ruzsa triangle inequality.
The Ruzsa triangle inequality is an important tool which is used to generalize Plünnecke's inequality to the Plünnecke–Ruzsa inequality. Its statement is that, for any finite subsets formula_1, formula_0, and formula_13 of an abelian group, the bound |A| · |B − C| ≤ |A − B| · |A − C| holds.
Proof of Plünnecke-Ruzsa inequality.
The following simple proof of the Plünnecke–Ruzsa inequality is due to Petridis (2014).
Lemma: Let formula_1 and formula_0 be finite subsets of an abelian group formula_14. If formula_15 is a nonempty subset that minimizes the value of formula_16, then for all finite subsets formula_17,
formula_18
Proof: This is demonstrated by induction on the size of formula_19. For the base case of formula_20, note that formula_21 is simply a translation of formula_22 for any formula_23, so
formula_24
For the inductive step, assume the inequality holds for all formula_25 with formula_26 for some positive integer formula_9. Let formula_13 be a subset of formula_14 with formula_27, and let formula_28 for some formula_29. (In particular, the inequality holds for formula_30.) Finally, let formula_31. The definition of formula_32 implies that formula_33. Thus, by the definition of these sets,
formula_34
Hence, considering the sizes of the sets,
formula_35
The definition of formula_32 implies that formula_36, so by the definition of formula_37, formula_38. Thus, applying the inductive hypothesis on formula_30 and using the definition of formula_37,
formula_39
To bound the right side of this inequality, let formula_40. Suppose formula_41 and formula_42, then there exists formula_43 such that formula_44. Thus, by definition, formula_45, so formula_46. Hence, the sets formula_47 and formula_48 are disjoint. The definitions of formula_49 and formula_30 thus imply that
formula_50
Again by definition, formula_51, so formula_52. Hence,
formula_53
Putting the above two inequalities together gives
formula_54
This completes the proof of the lemma.
To prove the Plünnecke–Ruzsa inequality, take formula_37 and formula_55 as in the statement of the lemma. It is first necessary to show that
formula_56
This can be proved by induction. For the base case, the definitions of formula_8 and formula_55 imply that formula_57. Thus, the definition of formula_37 implies that formula_58. For inductive step, suppose this is true for formula_59. Applying the lemma with formula_60 and the inductive hypothesis gives
formula_61
This completes the induction. Finally, the Ruzsa triangle inequality gives
formula_62
Because formula_15, it must be the case that formula_63. Therefore,
formula_10
This completes the proof of the Plünnecke–Ruzsa inequality.
Plünnecke graphs.
Both Plünnecke's proof of Plünnecke's inequality and Ruzsa's original proof of the Plünnecke–Ruzsa inequality use the method of Plünnecke graphs. Plünnecke graphs are a way to capture the additive structure of the sets formula_64 in a graph-theoretic manner.
To define a Plünnecke graph we first define commutative graphs and layered graphs:
Definition. A directed graph formula_14 is called semicommutative if, whenever there exist distinct formula_65 such that formula_66 and formula_67 are edges in formula_14 for each formula_68, then there also exist distinct formula_69 so that formula_70 and formula_71 are edges in formula_14 for each formula_68.
formula_14 is called commutative if it is semicommutative and the graph formed by reversing all its edges is also semicommutative.
Definition. A layered graph is a (directed) graph formula_14 whose vertex set can be partitioned formula_72 so that all edges in formula_14 are from formula_73 to formula_74, for some formula_68.
Definition. A Plünnecke graph is a layered graph which is commutative.
The canonical example of a Plünnecke graph is the following, which shows how the structure of the sets formula_75 form a Plünnecke graph.
Example. Let formula_76 be subsets of an abelian group. Then, let formula_14 be the layered graph so that each layer formula_77 is a copy of formula_78, so that formula_79, formula_80, ..., formula_81. Create the edge formula_66 (where formula_82 and formula_83) whenever there exists formula_84 such that formula_85. (In particular, if formula_82, then formula_86 by definition, so every vertex has outdegree equal to the size of formula_0.)
Then formula_14 is a Plünnecke graph. For example, to check that formula_14 is semicommutative, if formula_66 and formula_67 are edges in formula_14 for each formula_68, then formula_87. Then, let formula_88, so that formula_89 and formula_90. Thus, formula_14 is semicommutative. It can be similarly checked that the graph formed by reversing all edges of formula_14 is also semicommutative, so formula_14 is a Plünnecke graph.
In a Plünnecke graph, the image of a set formula_91 in formula_77, written formula_92, is defined to be the set of vertices in formula_77 which can be reached by a path starting from some vertex in formula_37. In particular, in the aforementioned example, formula_92 is just formula_93.
The magnification ratio between formula_94 and formula_77, denoted formula_95, is then defined as the minimum factor by which the image of a set must exceed the size of the original set. Formally,
formula_96
Plünnecke's theorem is the following statement about Plünnecke graphs.
<templatestyles src="Math_theorem/styles.css" />
Theorem (Plünnecke's theorem) — Let formula_14 be a Plünnecke graph. Then, formula_97 is decreasing in formula_98.
The proof of Plünnecke's theorem involves a technique known as the "tensor product trick", in addition to an application of Menger's theorem.
The Plünnecke–Ruzsa inequality is a fairly direct consequence of Plünnecke's theorem and the Ruzsa triangle inequality. Applying Plünnecke's theorem to the graph given in the example, at formula_99 and formula_100, yields that if formula_101, then there exists formula_102 so that formula_103. Applying this result once again with formula_37 instead of formula_1, there exists formula_104 so that formula_105. Then, by Ruzsa's triangle inequality (on formula_106),
formula_107
thus proving the Plünnecke–Ruzsa inequality.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "B"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "A+B"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "A+B=\\{a+b:a\\in A,b\\in B\\}"
},
{
"math_id": 5,
"text": "A-B=\\{a-b:a\\in A,b\\in B\\}"
},
{
"math_id": 6,
"text": "kA=\\underbrace{A+A+\\cdots+A}_{k\\text{ times}}"
},
{
"math_id": 7,
"text": "A + B"
},
{
"math_id": 8,
"text": "K"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "|mB-nB|\\le K^{m+n}|A|."
},
{
"math_id": 11,
"text": "A = B"
},
{
"math_id": 12,
"text": "K = |2A|/|A|"
},
{
"math_id": 13,
"text": "C"
},
{
"math_id": 14,
"text": "G"
},
{
"math_id": 15,
"text": "X\\subseteq A"
},
{
"math_id": 16,
"text": "K'=|X+B|/|X|"
},
{
"math_id": 17,
"text": "C\\subset G"
},
{
"math_id": 18,
"text": "|X+B+C|\\le K'|X+C|."
},
{
"math_id": 19,
"text": "|C|"
},
{
"math_id": 20,
"text": "|C|=1"
},
{
"math_id": 21,
"text": "S+C"
},
{
"math_id": 22,
"text": "S"
},
{
"math_id": 23,
"text": "S\\subseteq G"
},
{
"math_id": 24,
"text": "|X+B+C|=|X+B|=K'|X|=K'|X+C|."
},
{
"math_id": 25,
"text": "C\\subseteq G"
},
{
"math_id": 26,
"text": "|C|\\le n"
},
{
"math_id": 27,
"text": "|C|=n+1"
},
{
"math_id": 28,
"text": "C=C'\\sqcup\\{\\gamma\\}"
},
{
"math_id": 29,
"text": "\\gamma\\in C"
},
{
"math_id": 30,
"text": "C'"
},
{
"math_id": 31,
"text": "Z=\\{x\\in X: x+B+\\{\\gamma\\}\\subseteq X+B+C'\\}"
},
{
"math_id": 32,
"text": "Z"
},
{
"math_id": 33,
"text": "Z+B+\\{\\gamma\\}\\subseteq X+B+C'"
},
{
"math_id": 34,
"text": "X+B+C=(X+B+C')\\cup((X+B+\\{\\gamma\\})\\backslash(Z+B+\\{\\gamma\\}))."
},
{
"math_id": 35,
"text": "\\begin{align}|X+B+C|&\\le|X+B+C'|+|(X+B+\\{\\gamma\\})\\backslash(Z+B+\\{\\gamma\\})|\\\\&=|X+B+C'|+|X+B+\\{\\gamma\\}|-|Z+B+\\{\\gamma\\}|\\\\&=|X+B+C'|+|X+B|-|Z+B|.\\end{align}"
},
{
"math_id": 36,
"text": "Z\\subseteq X\\subseteq A"
},
{
"math_id": 37,
"text": "X"
},
{
"math_id": 38,
"text": "|Z+B|\\ge K'|Z|"
},
{
"math_id": 39,
"text": "\\begin{align}|X+B+C|&\\le|X+B+C'|+|X+B|-|Z+B|\\\\&\\le K'|X+C'|+|X+B|-|Z+B|\\\\&\\le K'|X+C'|+K'|X|-|Z+B|\\\\&\\le K'|X+C'|+K'|X|-K'|Z|\\\\&=K'(|X+C'|+|X|-|Z|).\\end{align}"
},
{
"math_id": 40,
"text": "W=\\{x\\in X:x+\\gamma\\in X+C'\\}"
},
{
"math_id": 41,
"text": "y\\in X+C'"
},
{
"math_id": 42,
"text": "y\\in X+\\{\\gamma\\}"
},
{
"math_id": 43,
"text": "x\\in X"
},
{
"math_id": 44,
"text": "x+\\gamma=y\\in X+C'"
},
{
"math_id": 45,
"text": "x\\in W"
},
{
"math_id": 46,
"text": "y\\in W+\\{\\gamma\\}"
},
{
"math_id": 47,
"text": "X+C'"
},
{
"math_id": 48,
"text": "(X+\\{\\gamma\\})\\backslash(W+\\{\\gamma\\})"
},
{
"math_id": 49,
"text": "W"
},
{
"math_id": 50,
"text": "X+C=(X+C')\\sqcup((X+\\{\\gamma\\})\\backslash(W+\\{\\gamma\\}))."
},
{
"math_id": 51,
"text": "W\\subseteq Z"
},
{
"math_id": 52,
"text": "|W|\\le|Z|"
},
{
"math_id": 53,
"text": "\\begin{align}|X+C|&=|X+C'|+|(X+\\{\\gamma\\})\\backslash(W+\\{\\gamma\\})|\\\\&=|X+C'|+|X+\\{\\gamma\\}|-|W+\\{\\gamma\\}|\\\\&=|X+C'|+|X|-|W|\\\\&\\ge|X+C'|+|X|-|Z|.\\end{align}"
},
{
"math_id": 54,
"text": "|X+B+C|\\le K'(|X+C'|+|X|-|Z|)\\le K'|X+C|."
},
{
"math_id": 55,
"text": "K'"
},
{
"math_id": 56,
"text": "|X+nB|\\le K^n|X|."
},
{
"math_id": 57,
"text": "K'\\le K"
},
{
"math_id": 58,
"text": "|X+B|\\le K|X|"
},
{
"math_id": 59,
"text": "n=j"
},
{
"math_id": 60,
"text": "C=jB"
},
{
"math_id": 61,
"text": "|X+(j+1)B|\\le K'|X+jB|\\le K|X+jB|\\le K^{j+1}|X|."
},
{
"math_id": 62,
"text": "|mB-nB|\\le\\frac{|X+mB||X+nB|}{|X|}\\le\\frac{K^m|X|K^n|X|}{|X|}=K^{m+n}|X|."
},
{
"math_id": 63,
"text": "|X|\\le |A|"
},
{
"math_id": 64,
"text": "A, A+B, A+2B, \\dots"
},
{
"math_id": 65,
"text": "x, y, z_1, z_2, \\dots, z_k"
},
{
"math_id": 66,
"text": "(x, y)"
},
{
"math_id": 67,
"text": "(y, z_i)"
},
{
"math_id": 68,
"text": "i"
},
{
"math_id": 69,
"text": "y_1, y_2, \\dots, y_k"
},
{
"math_id": 70,
"text": "(x, y_i)"
},
{
"math_id": 71,
"text": "(y_i, z_i)"
},
{
"math_id": 72,
"text": "V_0 \\cup V_1 \\cup \\dots \\cup V_m"
},
{
"math_id": 73,
"text": "V_i"
},
{
"math_id": 74,
"text": "V_{i+1}"
},
{
"math_id": 75,
"text": "A, A+B, A+2B, \\dots, A + mB"
},
{
"math_id": 76,
"text": "A, B"
},
{
"math_id": 77,
"text": "V_j"
},
{
"math_id": 78,
"text": "A + jB"
},
{
"math_id": 79,
"text": "V_0 = A"
},
{
"math_id": 80,
"text": "V_1 = A + B"
},
{
"math_id": 81,
"text": "V_m = A + mB"
},
{
"math_id": 82,
"text": "x \\in V_i"
},
{
"math_id": 83,
"text": "y \\in V_{i+1}"
},
{
"math_id": 84,
"text": "b \\in B"
},
{
"math_id": 85,
"text": "y = x + b"
},
{
"math_id": 86,
"text": "x + b \\in V_{i+1}"
},
{
"math_id": 87,
"text": "y - x, z_i - y \\in B"
},
{
"math_id": 88,
"text": "y_i = x + z_i - y"
},
{
"math_id": 89,
"text": "y_i - x = z_i - y \\in B"
},
{
"math_id": 90,
"text": "z_i - y_i = y - x \\in B"
},
{
"math_id": 91,
"text": "X \\subseteq V_0"
},
{
"math_id": 92,
"text": "\\text{im}(X, V_j)"
},
{
"math_id": 93,
"text": "X + jB"
},
{
"math_id": 94,
"text": "V_0"
},
{
"math_id": 95,
"text": "\\mu_j(G)"
},
{
"math_id": 96,
"text": "\\mu_j(G) = \\min_{X \\subseteq V_0, X \\neq \\emptyset} \\frac{|\\text{im}(X, V_j)|}{|X|}."
},
{
"math_id": 97,
"text": "\\mu_j(G)^{1/j}"
},
{
"math_id": 98,
"text": "j"
},
{
"math_id": 99,
"text": "j = m"
},
{
"math_id": 100,
"text": "j = 1"
},
{
"math_id": 101,
"text": "|A + B| / |A| = K"
},
{
"math_id": 102,
"text": "X \\subseteq A"
},
{
"math_id": 103,
"text": "|X + mB| / |X| \\le K^m"
},
{
"math_id": 104,
"text": "X' \\subseteq X"
},
{
"math_id": 105,
"text": "|X' + nB| / |X'| \\le K^n"
},
{
"math_id": 106,
"text": "-X', mB, nB"
},
{
"math_id": 107,
"text": "|mB - nB| \\le |X' + mB||X' + nB|/|X'| \\le K^{m}|X| K^{n} = K^{m+n}|X|,"
}
] |
https://en.wikipedia.org/wiki?curid=62472235
|
62472742
|
Ruzsa triangle inequality
|
In additive combinatorics, the Ruzsa triangle inequality, also known as the Ruzsa difference triangle inequality to differentiate it from some of its variants, bounds the size of the difference of two sets in terms of the sizes of both their differences with a third set. It was proven by Imre Ruzsa (1996), and is so named for its resemblance to the triangle inequality. It is an important lemma in the proof of the Plünnecke-Ruzsa inequality.
Statement.
If formula_0 and formula_1 are subsets of a group, then the sumset notation formula_2 is used to denote formula_3. Similarly, formula_4 denotes formula_5. Then, the Ruzsa triangle inequality states the following.
An alternate formulation involves the notion of the "Ruzsa distance".
Definition. If formula_0 and formula_1 are finite subsets of a group, then the Ruzsa distance between these two sets, denoted formula_7, is defined to be
formula_8
Then, the Ruzsa triangle inequality has the following equivalent formulation:
This formulation resembles the triangle inequality for a metric space; however, the Ruzsa distance does not define a metric space since formula_9 is not always zero.
Proof.
To prove the statement, it suffices to construct an injection from the set formula_10 to the set formula_11. Define a function formula_12 as follows. For each formula_13, choose a formula_14 and a formula_15 such that formula_16. By the definition of formula_17, this can always be done. Let formula_18 be the function that sends formula_19 to formula_20. For every point formula_21 in the set formula_11, it must be the case that formula_22 and formula_23. Hence, formula_12 maps every point in formula_10 to a distinct point in formula_11 and is thus an injection. In particular, there must be at least as many points in formula_11 as in formula_10. Therefore,
formula_24
completing the proof.
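The injection in this proof is easy to exercise on a small example. The following sketch (plain Python, with arbitrarily chosen finite sets of integers) builds the map formula_12 exactly as described above and checks that it is injective, which yields the stated bound:
import itertools
# Small integer sets chosen only for illustration.
A = {0, 1, 4}
B = {0, 2, 3}
C = {0, 5}
def diff(X, Y):
    return {x - y for x in X for y in Y}
# Fix one representation x = b(x) - c(x) for each x in B - C.
rep = {}
for b in B:
    for c in C:
        rep.setdefault(b - c, (b, c))
# phi(a, x) = (a - b(x), a - c(x)), as in the proof.
phi = {}
for a, x in itertools.product(A, diff(B, C)):
    bx, cx = rep[x]
    phi[(a, x)] = (a - bx, a - cx)
assert len(set(phi.values())) == len(phi)                      # phi is injective
assert set(phi.values()) <= set(itertools.product(diff(A, B), diff(A, C)))
assert len(A) * len(diff(B, C)) <= len(diff(A, B)) * len(diff(A, C))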
Variants of the Ruzsa triangle inequality.
The Ruzsa sum triangle inequality is a corollary of the Plünnecke-Ruzsa inequality (which is in turn proved using the ordinary Ruzsa triangle inequality).
Proof. The proof uses the following lemma from the proof of the Plünnecke-Ruzsa inequality.
Lemma. Let formula_0 and formula_1 be finite subsets of an abelian group formula_25. If formula_26 is a nonempty subset that minimizes the value of formula_27, then for all finite subsets formula_28
formula_29
If formula_0 is the empty set, then the left side of the inequality becomes formula_30, so the inequality is true. Otherwise, let formula_31 be a subset of formula_0 that minimizes formula_27. Let formula_32. The definition of formula_31 implies that formula_33 Because formula_34, applying the above lemma gives
formula_35
Rearranging gives the Ruzsa sum triangle inequality.
By replacing formula_1 and formula_6 in the Ruzsa triangle inequality and the Ruzsa sum triangle inequality with formula_36 and formula_37 as needed, a more general result can be obtained: If formula_0, formula_1, and formula_6 are finite subsets of an abelian group then
formula_38
where all eight possible configurations of signs hold. These results are also sometimes known collectively as the Ruzsa triangle inequalities.
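All eight sign configurations can be spot-checked numerically; the following sketch (random integer sets, illustrative only and not a proof) asserts each instance of the inequality:
import random
def combine(X, Y, s):
    return {x + s * y for x in X for y in Y}
random.seed(0)
for _ in range(200):
    A, B, C = ({random.randrange(-15, 15) for _ in range(random.randint(1, 6))} for _ in range(3))
    for s_left in (1, -1):                 # sign in |B +/- C|
        for s_b in (1, -1):                # sign in |A +/- B|
            for s_c in (1, -1):            # sign in |A +/- C|
                lhs = len(A) * len(combine(B, C, s_left))
                rhs = len(combine(A, B, s_b)) * len(combine(A, C, s_c))
                assert lhs <= rhs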
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "A+B"
},
{
"math_id": 3,
"text": "\\{a+b:a\\in A,b\\in B\\}"
},
{
"math_id": 4,
"text": "A-B"
},
{
"math_id": 5,
"text": "\\{a-b:a\\in A,b\\in B\\}"
},
{
"math_id": 6,
"text": "C"
},
{
"math_id": 7,
"text": "d(A, B)"
},
{
"math_id": 8,
"text": "d(A, B) = \\log \\frac{|A-B|}{\\sqrt{|A||B|}}."
},
{
"math_id": 9,
"text": "d(A, A)"
},
{
"math_id": 10,
"text": "A\\times(B-C)"
},
{
"math_id": 11,
"text": "(A-B)\\times(A-C)"
},
{
"math_id": 12,
"text": "\\phi"
},
{
"math_id": 13,
"text": "x\\in B-C"
},
{
"math_id": 14,
"text": "b(x)\\in B"
},
{
"math_id": 15,
"text": "c(x)\\in C"
},
{
"math_id": 16,
"text": "x=b(x)-c(x)"
},
{
"math_id": 17,
"text": "B-C"
},
{
"math_id": 18,
"text": "\\phi:A\\times(B-C)\\rightarrow(A-B)\\times(A-C)"
},
{
"math_id": 19,
"text": "(a,x)"
},
{
"math_id": 20,
"text": "(a-b(x),a-c(x))"
},
{
"math_id": 21,
"text": "\\phi(a,x)=(y,z)"
},
{
"math_id": 22,
"text": "x=z-y"
},
{
"math_id": 23,
"text": "a=y+b(x)"
},
{
"math_id": 24,
"text": "|A||B-C|=|A\\times(B-C)|\\le|(A-B)\\times(A-C)|=|A-B||A-C|,"
},
{
"math_id": 25,
"text": "G"
},
{
"math_id": 26,
"text": "X\\subseteq A"
},
{
"math_id": 27,
"text": "K'=|X+B|/|X|"
},
{
"math_id": 28,
"text": "C\\subset G,"
},
{
"math_id": 29,
"text": "|X+B+C|\\le K'|X+C|."
},
{
"math_id": 30,
"text": "0"
},
{
"math_id": 31,
"text": "X"
},
{
"math_id": 32,
"text": "K=|A+B|/|A|"
},
{
"math_id": 33,
"text": "K'\\le K."
},
{
"math_id": 34,
"text": "X\\subset A"
},
{
"math_id": 35,
"text": "|B+C|\\le|X+B+C|\\le K'|X+C|\\le K'|A+C|\\le K|A+C|=\\frac{|A+B||A+C|}{|A|}."
},
{
"math_id": 36,
"text": "-B"
},
{
"math_id": 37,
"text": "-C"
},
{
"math_id": 38,
"text": "|A||B\\pm C|\\le|A\\pm B||A\\pm C|,"
}
] |
https://en.wikipedia.org/wiki?curid=62472742
|
62474553
|
Z3 Theorem Prover
|
Software for solving satisfiability problems
Z3, also known as the Z3 Theorem Prover, is a satisfiability modulo theories (SMT) solver developed by Microsoft.
Overview.
Z3 was developed in the "Research in Software Engineering" (RiSE) group at Microsoft Research Redmond and is targeted at solving problems that arise in software verification and program analysis. Z3 supports arithmetic, fixed-size bit-vectors, extensional arrays, datatypes, uninterpreted functions, and quantifiers. Its main applications are extended static checking, test case generation, and predicate abstraction.
Z3 was open-sourced at the beginning of 2015. The source code is licensed under the MIT License and hosted on GitHub.
The solver can be built using Visual Studio, a makefile, or CMake, and runs on Windows, FreeBSD, Linux, and macOS.
The default input format for Z3 is SMTLIB2.
It also has officially supported bindings for several programming languages, including C, C++, Python, .NET, Java, and OCaml.
Examples.
Propositional and predicate logic.
In this example propositional logic assertions are checked using functions to represent the propositions a and b. The following Z3 script checks to see if formula_0:
(declare-fun a () Bool)
(declare-fun b () Bool)
(assert (not (= (not (and a b)) (or (not a)(not b)))))
(check-sat)
Result:
unsat
Note that the script asserts the "negation" of the proposition of interest. The "unsat" result means that the negated proposition is not satisfiable, thus proving the desired result (De Morgan's law).
Solving equations.
The following script solves the two given equations, finding suitable values for the variables a and b:
(declare-const a Int)
(declare-const b Int)
(assert (= (+ a b) 20))
(assert (= (+ a (* 2 b)) 10))
(check-sat)
(get-model)
Result:
sat
(model
(define-fun b () Int
-10)
(define-fun a () Int
30)
)
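The same system of equations can also be posed through the officially supported Python bindings mentioned above (a minimal sketch, assuming the z3-solver package is installed):
from z3 import Int, Solver
a, b = Int('a'), Int('b')
s = Solver()
s.add(a + b == 20)        # first equation
s.add(a + 2 * b == 10)    # second equation
print(s.check())          # sat
print(s.model())          # a model such as [b = -10, a = 30]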
Awards.
In 2015, Z3 received the "Programming Languages Software Award" from ACM SIGPLAN. In 2018, Z3 received the "Test of Time Award" from the European Joint Conferences on Theory and Practice of Software (ETAPS). Microsoft researchers Nikolaj Bjørner and Leonardo de Moura received the 2019 Herbrand Award for Distinguished Contributions to Automated Reasoning in recognition of their work in advancing theorem proving with Z3.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\overline{a \\land b} \\equiv \\overline{a} \\lor \\overline{b}"
}
] |
https://en.wikipedia.org/wiki?curid=62474553
|
62475939
|
Equal-area projection
|
Type of map projection
In cartography, an equivalent, authalic, or equal-area projection is a map projection that preserves relative area measure between any and all map regions. Equivalent projections are widely used for thematic maps showing the distribution of phenomena such as population, farmland, and forested areas, because an equal-area map does not change the apparent density of the phenomenon being mapped.
By Gauss's Theorema Egregium, an equal-area projection cannot be conformal. This implies that an equal-area projection inevitably distorts shapes. Even though a point or points or a path or paths on a map might have no distortion, the greater the area of the region being mapped, the greater and more obvious the distortion of shapes inevitably becomes.
Description.
In order for a map projection of the sphere to be equal-area, its generating formulae must meet this Cauchy-Riemann-like condition:
formula_0
where formula_1 is constant throughout the map. Here, formula_2 represents latitude; formula_3 represents longitude; and formula_4 and formula_5 are the projected (planar) coordinates for a given formula_6 coordinate pair.
For example, the sinusoidal projection is a very simple equal-area projection. Its generating formulae are:
formula_7
where formula_8 is the radius of the globe. Computing the partial derivatives,
formula_9
and so
formula_10
with formula_1 taking the value of the constant formula_11.
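This calculation can be reproduced numerically. The sketch below (Python, with the arbitrary choice formula_8 = 1) approximates the partial derivatives of the sinusoidal projection by central finite differences and checks that the equal-area condition holds at a few sample points:
import math
R = 1.0                     # unit sphere, arbitrary for illustration
h = 1e-6                    # finite-difference step
def x(phi, lam): return R * lam * math.cos(phi)
def y(phi, lam): return R * phi
def area_factor(phi, lam):
    dx_dphi = (x(phi + h, lam) - x(phi - h, lam)) / (2 * h)
    dx_dlam = (x(phi, lam + h) - x(phi, lam - h)) / (2 * h)
    dy_dphi = (y(phi + h, lam) - y(phi - h, lam)) / (2 * h)
    dy_dlam = (y(phi, lam + h) - y(phi, lam - h)) / (2 * h)
    return dy_dphi * dx_dlam - dy_dlam * dx_dphi
for phi in (0.1, 0.7, 1.2):
    for lam in (-2.0, 0.3, 2.5):
        assert abs(area_factor(phi, lam) - R**2 * math.cos(phi)) < 1e-6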
For an equal-area map of the ellipsoid, the corresponding differential condition that must be met is:
formula_12
where formula_13 is the eccentricity of the ellipsoid of revolution.
Statistical grid.
The term "statistical grid" refers to a discrete grid (global or local) of an equal-area surface representation, used for data visualization, geocode and statistical spatial analysis.
List of equal-area projections.
These are some projections that preserve area:
|
[
{
"math_id": 0,
"text": "\\frac{\\partial y}{\\partial \\varphi} \\cdot \\frac{\\partial x}{\\partial \\lambda} - \\frac{\\partial y}{\\partial \\lambda} \\cdot \\frac{\\partial x}{\\partial \\varphi} = s \\cdot \\cos \\varphi"
},
{
"math_id": 1,
"text": "s"
},
{
"math_id": 2,
"text": "\\varphi"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "(\\varphi, \\lambda)"
},
{
"math_id": 7,
"text": "\\begin{align}\nx &= R \\cdot \\lambda \\cos \\varphi \\\\\ny &= R \\cdot \\varphi\n\\end{align}"
},
{
"math_id": 8,
"text": "R"
},
{
"math_id": 9,
"text": "\\frac{\\partial x}{\\partial \\varphi} = -R \\cdot \\lambda \\cdot \\sin \\varphi,\\quad R \\cdot \\frac{\\partial x}{\\partial \\lambda} = R \\cdot \\cos \\varphi,\\quad \\frac{\\partial y}{\\partial \\varphi} = R,\\quad \\frac{\\partial y}{\\partial \\lambda} = 0"
},
{
"math_id": 10,
"text": "\\frac{\\partial y}{\\partial \\varphi} \\cdot \\frac{\\partial x}{\\partial \\lambda} - \\frac{\\partial y}{\\partial \\lambda} \\cdot \\frac{\\partial x}{\\partial \\varphi} = R \\cdot R \\cdot \\cos \\varphi - 0 \\cdot (-R \\cdot \\lambda \\cdot \\sin \\varphi) = R^2 \\cdot \\cos \\varphi = s \\cdot \\cos \\varphi"
},
{
"math_id": 11,
"text": "R^2"
},
{
"math_id": 12,
"text": "\\frac{\\partial y}{\\partial \\varphi} \\cdot \\frac{\\partial x}{\\partial \\lambda} - \\frac{\\partial y}{\\partial \\lambda} \\cdot \\frac{\\partial x}{\\partial \\varphi} = s \\cdot \\cos \\varphi \\cdot \\frac{(1-e^2)}{(1-e^2 \\sin^2 \\varphi)^2}"
},
{
"math_id": 13,
"text": "e"
}
] |
https://en.wikipedia.org/wiki?curid=62475939
|
62478411
|
Nehemiah 7
|
A chapter in the Book of Nehemiah
Nehemiah 7 is the seventh chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 17th chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. This chapter records the joint appointments of Hanani and Hananiah over Jerusalem and the second appearance of the "Golah" ("exiles") list, that is, the list of the first returning group of Jews from Babylon, which was documented earlier in Ezra 2 with few variations.
Text.
The original text of this chapter is in Hebrew language. This chapter is divided into 73 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
An ancient Greek book called 1 Esdras (Greek: ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: ). 1 Esdras 9:37-55 is an equivalent of Nehemiah 7:73-8:12 (The reading of the Law).
Vigilance (7:1–3).
The wall around Jerusalem was not the ultimate security but 'a necessary defense and dynamic distinctive symbol' of the Jews among the surrounding nations, so the inhabitants had to participate in the system to protect the city.
"1 Now when the wall had been built and I had set up the doors, and the gatekeepers, the singers, and the Levites had been appointed, 2 I gave my brother Hanani and Hananiah the governor of the castle charge over Jerusalem, for he was a more faithful and God-fearing man than many."
"Now the city was large and spacious, but the people in it were few, and the houses were not rebuilt."
Verse 4.
The Revised Standard Version reads "... no houses had been built", the Revised Version, "the houses were not builded". H. E. Ryle counsels against a literal interpretation of these words, suggesting that the real meaning was that there were large open spaces within the walls where more houses could be built.
The census (7:4–73).
The defensive measures implemented by Nehemiah, Hanani and Hananiah were only short-term, because the bigger goal was to reestablish Jerusalem as the center of Jewish culture and religious purity, so it had to be repopulated with people who then lived outside the city. Nehemiah was looking for Jews with verifiable heritage to send some family members to populate Jerusalem, but instead of starting a census, he used the original listing of those who had been the first to return, which specified clan origins. This list is almost an exact replication of the one in Ezra 2, with slight variations likely due to transcription and transmission over time.
"Who came with Zerubbabel, Jeshua, Nehemiah, Azariah, Raamiah, Nahamani, Mordecai, Bilshan, Mispereth, Bigvai, Nehum, Baanah. The number, I say, of the men of the people of Israel was this;"
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62478411
|
62478738
|
Hopper (microarchitecture)
|
GPU microarchitecture designed by Nvidia
Hopper is a graphics processing unit (GPU) microarchitecture developed by Nvidia. It is designed for datacenters and was launched in parallel with the consumer-oriented Ada Lovelace microarchitecture. It is the latest generation of the line of products formerly branded as Nvidia Tesla and since rebranded as Nvidia Data Center GPUs.
Named for computer scientist and United States Navy rear admiral Grace Hopper, the Hopper architecture was leaked in November 2019 and officially revealed in March 2022. It improves upon its predecessors, the Turing and Ampere microarchitectures, featuring a new streaming multiprocessor and a faster memory subsystem.
Architecture.
The Nvidia Hopper H100 GPU is implemented using the TSMC N4 process with 80 billion transistors. It consists of up to 144 streaming multiprocessors. In SXM5, the Nvidia Hopper H100 offers better performance than PCIe.
Streaming multiprocessor.
The streaming multiprocessors for Hopper improve upon the Turing and Ampere microarchitectures, although the maximum number of concurrent warps per streaming multiprocessor (SM) remains the same between the Ampere and Hopper architectures, 64. The Hopper architecture provides a Tensor Memory Accelerator (TMA), which supports bidirectional asynchronous memory transfer between shared memory and global memory. Under TMA, applications may transfer up to 5D tensors. When writing from shared memory to global memory, elementwise reduction and bitwise operators may be used, avoiding registers and SM instructions while enabling users to write warp-specialized code. TMA is exposed through codice_0.
When parallelizing applications, developers can use thread block clusters. Thread blocks may perform atomics in the shared memory of other thread blocks within its cluster, otherwise known as distributed shared memory. Distributed shared memory may be used by an SM simultaneously with L2 cache; when used to communicate data between SMs, this can utilize the combined bandwidth of distributed shared memory and L2. The maximum portable cluster size is 8, although the Nvidia Hopper H100 can support a cluster size of 16 by using the codice_1 function, potentially at the cost of reduced number of active blocks. With L2 multicasting and distributed shared memory, the required bandwidth for dynamic random-access memory read and writes is reduced.
Hopper features improved single-precision floating-point format (FP32) throughput, with twice as many FP32 operations per cycle per SM as its predecessor. Additionally, the Hopper architecture adds support for new instructions, including ones that accelerate the Smith–Waterman algorithm. Like Ampere, TensorFloat-32 (TF-32) arithmetic is supported. The mapping pattern for both architectures is identical.
Memory.
The Nvidia Hopper H100 supports HBM3 and HBM2e memory up to 80 GB; the HBM3 memory system supports 3 TB/s, an increase of 50% over the Nvidia Ampere A100's 2 TB/s. Across the architecture, the L2 cache capacity and bandwidth were increased.
Hopper allows CUDA compute kernels to utilize automatic inline compression, including in individual memory allocation, which allows accessing memory at higher bandwidth. This feature does not increase the amount of memory available to the application, because the data (and thus its compressibility) may be changed at any time. The compressor will automatically choose between several compression algorithms.
The Nvidia Hopper H100 increases the capacity of the combined L1 cache, texture cache, and shared memory to 256 KB. Like its predecessors, it combines L1 and texture caches into a unified cache designed to be a coalescing buffer. The attribute codice_2 may be used to define the carveout of the L1 cache. Hopper introduces enhancements to NVLink through a new generation with faster overall communication bandwidth.
Memory synchronization domains.
Some CUDA applications may experience interference when performing fence or flush operations due to memory ordering. Because the GPU cannot know which writes are guaranteed and which are visible by chance timing, it may wait on unnecessary memory operations, thus slowing down fence or flush operations. For example, when a kernel performs computations in GPU memory and a parallel kernel performs communications with a peer, the local kernel will flush its writes, resulting in slower NVLink or PCIe writes. In the Hopper architecture, the GPU can reduce the net cast through a fence operation.
DPX instructions.
The Hopper architecture math application programming interface (API) exposes functions in the SM such as codice_3, which performs the per-halfword formula_0. In the Smith–Waterman algorithm, codice_4 can be used, a three-way min or max followed by a clamp to zero. Similarly, Hopper speeds up implementations of the Needleman–Wunsch algorithm.
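The arithmetic being fused can be modelled in scalar form as follows (an illustrative Python sketch only; the function name is invented and is not the actual CUDA intrinsic):
def fused_add_min_relu(a, b, c):
    # Scalar model of the DPX-style operation max(min(a + b, c), 0).
    return max(min(a + b, c), 0)
# In a Smith-Waterman-style recurrence this clamps a candidate score to an
# upper bound c and to zero from below in a single step.
assert fused_add_min_relu(3, 4, 5) == 5
assert fused_add_min_relu(-3, 1, 5) == 0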
Transformer engine.
The Hopper architecture utilizes a transformer engine.
Power efficiency.
The SXM5 form factor H100 has a thermal design power (TDP) of 700 watts. With regards to its asynchrony, the Hopper architecture may attain high degrees of utilization and thus may have a better performance-per-watt.
Grace Hopper.
The GH200 combines a Hopper-based H200 GPU with a Grace-based 72-core CPU on a single module. The total power draw of the module is up to 1000 W. CPU and GPU are connected via NVLink, which provides memory coherence between CPU and GPU memory.
History.
In November 2019, a well-known Twitter account posted a tweet revealing that the next architecture after Ampere would be called Hopper, named after computer scientist and United States Navy rear admiral Grace Hopper, one of the first programmers of the Harvard Mark I. The account stated that Hopper would be based on a multi-chip module design, which would result in a yield gain with lower wastage.
During the 2022 Nvidia GTC, Nvidia officially announced Hopper. By 2023, during the AI boom, H100s were in great demand. Larry Ellison of Oracle Corporation said that year that at a dinner with Nvidia CEO Jensen Huang, he and Elon Musk of Tesla, Inc. and xAI "were begging" for H100s, "I guess is the best way to describe it. An hour of sushi and begging".
In January 2024, Raymond James Financial analysts estimated that Nvidia was selling the H100 GPU in the price range of $25,000 to $30,000 each, while on eBay, individual H100s cost over $40,000. As of February 2024, Nvidia was reportedly shipping H100 GPUs to data centers in armored cars.
References.
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "max(min(a + b, c), 0)"
}
] |
https://en.wikipedia.org/wiki?curid=62478738
|
62479541
|
Nehemiah 8
|
A chapter in the Book of Nehemiah
Nehemiah 8 is the eighth chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 18th chapter of the book of Ezra–Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. This chapter and the next focus mainly on Ezra, with this chapter recording Ezra's reading and instructing God's law to the people, then together they celebrated the Feast of Tabernacles with great joy.
Nehemiah the governor is mentioned briefly in verse 9 but Smith-Christopher argues that "the presence of Ezra and the virtual absence of Nehemiah support the argument that chapter 8 is among the displaced chapters from the Ezra material", and suggests that "the original place for [this chapter] would logically have been between Ezra 8 and 9".
Text.
The original text of this chapter is in Hebrew language. This chapter is divided into 18 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
An ancient Greek book called 1 Esdras (Greek: ) containing some parts of 2 Chronicles, Ezra and Nehemiah is included in most editions of the Septuagint and is placed before the single book of Ezra–Nehemiah (which is titled in Greek: ). 1 Esdras 9:37-55 is an equivalent of Nehemiah 7:73-8:12 (The reading of the Law).
Ezra reads the law (8:1–12).
The commission given to Ezra was to 'restructure the Jewish community' under God's laws, so he read and instructed the people who gathered around in 'the commands and intentions of God's revelation'.
"And all the people gathered themselves together as one man into the street that was before the water gate; and they spake unto Ezra the scribe to bring the book of the law of Moses, which the Lord had commanded to Israel."
Verse 1.
Ezra is described as "the scribe" in this verse and as "the priest" in verse 2. Repairs to "the place in front of the Water Gate toward the east" were referred to in . Whereas the King James Version refers to the "street" before the gate, other translations refer to the "square" or the "courtyard". In the Vulgate the closing words of Nehemiah 7:73, "When the seventh month came, the children of Israel were in their cities" form the opening words of Nehemiah 8:1: see also in the Douay–Rheims Bible.
"Then he (Ezra) read from it (the Law) in the open square that was in front of the Water Gate from morning until midday, before the men and women and those who could understand; and the ears of all the people were attentive to the Book of the Law."
Verse 3.
Ezra's actions recall those of King Josiah in :
"The king went up to the house of the Lord with all the men of Judah, and with him all the inhabitants of Jerusalem — the priests and the prophets and all the people, both small and great. And he read in their hearing all the words of the Book of the Covenant which had been found in the house of the Lord."
The feast of Tabernacles (8:13–18).
The requirements of God's laws were founded on God's grace and the intention behind the Feast of Tabernacles was to commemorate God's miraculous deliverance of Israel. The celebration closely followed the regulation in .
"So the whole assembly of those who had returned from the captivity made booths and sat under the booths; for since the days of Joshua the son of Nun until that day the children of Israel had not done so. And there was very great gladness."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62479541
|
62480458
|
Sidorenko's conjecture
|
Conjecture in graph theory
Sidorenko's conjecture is a conjecture in the field of graph theory, posed by Alexander Sidorenko in 1986. Roughly speaking, the conjecture states that for any bipartite graph formula_0 and graph formula_1 on formula_2 vertices with average degree formula_3, there are at least formula_4 labeled copies of formula_0 in formula_1, up to a small error term. Formally, it provides an intuitive inequality about graph homomorphism densities in graphons. The conjectured inequality can be interpreted as a statement that the density of copies of formula_0 in a graph is asymptotically minimized by a random graph, as one would expect a formula_5 fraction of possible subgraphs to be a copy of formula_0 if each edge exists with probability formula_6.
Statement.
Let formula_0 be a graph. Then formula_0 is said to have Sidorenko's property if, for all graphons formula_7, the inequality
formula_8
is true, where formula_9 is the homomorphism density of formula_0 in formula_7.
Sidorenko's conjecture (1986) states that every bipartite graph has Sidorenko's property.
If formula_7 is a graph formula_1, this means that the probability of a uniform random mapping from formula_10 to formula_11 being a homomorphism is at least the product over each edge in formula_0 of the probability of that edge being mapped to an edge in formula_1. This roughly means that a randomly chosen graph with fixed number of vertices and average degree has the minimum number of labeled copies of formula_0. This is not a surprising conjecture because the right hand side of the inequality is the probability of the mapping being a homomorphism if each edge map is independent. So one should expect the two sides to be at least of the same order. The natural extension to graphons would follow from the fact that every graphon is the limit point of some sequence of graphs.
The requirement that formula_0 is bipartite to have Sidorenko's property is necessary — if formula_7 is a bipartite graph, then formula_12 since formula_7 is triangle-free. But formula_13 is positive whenever formula_7 has at least one edge, so the inequality fails and Sidorenko's property does not hold for formula_14. A similar argument shows that no graph with an odd cycle has Sidorenko's property. Since a graph is bipartite if and only if it has no odd cycles, this implies that the only possible graphs that can have Sidorenko's property are bipartite graphs.
Equivalent formulation.
Sidorenko's property is equivalent to the following reformulation:
For all graphs formula_1, if formula_1 has formula_2 vertices and an average degree of formula_3, then formula_15.
This is equivalent because the number of homomorphisms from formula_16 to formula_1 is twice the number of edges in formula_1, and the inequality only needs to be checked when formula_7 is a graph as previously mentioned.
In this formulation, since the number of non-injective homomorphisms from formula_0 to formula_1 is at most a constant times formula_17, Sidorenko's property would imply that there are at least formula_18 labeled copies of formula_0 in formula_1.
Examples.
As previously noted, to prove Sidorenko's property it suffices to demonstrate the inequality for all graphs formula_1. Throughout this section, formula_1 is a graph on formula_2 vertices with average degree formula_3. The quantity formula_19 refers to the number of homomorphisms from formula_0 to formula_1. This quantity is the same as formula_20.
Elementary proofs of Sidorenko's property for some graphs follow from the Cauchy–Schwarz inequality or Hölder's inequality. Others can be done by using spectral graph theory, especially noting the observation that the number of closed paths of length formula_21 from vertex formula_22 to vertex formula_23 in formula_1 is the component in the formula_22th row and formula_23th column of the matrix formula_24, where formula_25 is the adjacency matrix of formula_1.
Cauchy–Schwarz: The 4-cycle "C"4.
By fixing two vertices formula_26 and formula_27 of formula_1, each copy of formula_28 that has formula_26 and formula_27 on opposite ends can be identified by choosing two (not necessarily distinct) common neighbors of formula_26 and formula_27. Letting formula_29 denote the "codegree" of formula_26 and formula_27 (i.e. the number of common neighbors), this implies:
formula_30
by the Cauchy–Schwarz inequality. The sum has now become a count of all pairs of vertices and their common neighbors, which is the same as the count of all vertices and pairs of their neighbors. So:
formula_31
by Cauchy–Schwarz again. So:
formula_32
as desired.
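The codegree identity and the resulting bound can be checked numerically. The sketch below (Python with NumPy, arbitrary graph size and edge probability) computes the homomorphism density of formula_28 as the normalized trace of the fourth power of the adjacency matrix and compares it with the fourth power of the edge density:
import numpy as np
rng = np.random.default_rng(0)
n, q = 60, 0.3                                    # arbitrary size and edge probability
A = np.triu(rng.random((n, n)) < q, k=1).astype(int)
A = A + A.T                                       # symmetric 0/1 adjacency matrix, no loops
p = A.sum() / n**2                                # edge density, so the average degree is p*n
hom_c4 = np.trace(np.linalg.matrix_power(A, 4))   # equals the sum of codeg(u,v)^2
t_c4 = hom_c4 / n**4
assert t_c4 >= p**4 - 1e-12                       # the Cauchy-Schwarz bound above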
Spectral graph theory: The 2"k"-cycle "C"2"k".
Although the Cauchy–Schwarz approach for formula_28 is elegant and elementary, it does not immediately generalize to all even cycles. However, one can apply spectral graph theory to prove that all even cycles have Sidorenko's property. Note that odd cycles are not accounted for in Sidorenko's conjecture because they are not bipartite.
Using the observation about closed paths, it follows that formula_33 is the sum of the diagonal entries in formula_34. This is equal to the trace of formula_34, which in turn is equal to the sum of the formula_35th powers of the eigenvalues of formula_25. If formula_36 are the eigenvalues of formula_25, then the min-max theorem implies that:
formula_37
where formula_38 is the vector with formula_2 components, all of which are formula_39. But then:
formula_40
because the eigenvalues of a real symmetric matrix are real. So:
formula_41
as desired.
Entropy: Paths of length 3.
J.L. Xiang Li and Balázs Szegedy (2011) introduced the idea of using entropy to prove some cases of Sidorenko's conjecture. Szegedy (2015) later applied the ideas further to prove that an even wider class of bipartite graphs have Sidorenko's property. While Szegedy's proof wound up being abstract and technical, Tim Gowers and Jason Long reduced the argument to a simpler one for specific cases such as paths of length formula_42. In essence, the proof chooses a nice probability distribution for choosing the vertices in the path and applies Jensen's inequality (i.e. convexity) to deduce the inequality.
Partial results.
Here is a list of some bipartite graphs formula_0 which have been shown to have Sidorenko's property. Let formula_0 have bipartition formula_43.
However, there are graphs for which Sidorenko's conjecture is still open. An example is the "Möbius strip" graph formula_47, formed by removing a formula_48-cycle from the complete bipartite graph with parts of size formula_49.
László Lovász proved a local version of Sidorenko's conjecture, i.e. for graphs that are "close" to random graphs in a sense of cut norm.
Forcing conjecture.
A sequence of graphs formula_50 is called "quasi-random with density formula_6" for some density formula_51 if for every graph formula_0:
formula_52
The sequence of graphs would thus have properties of the Erdős–Rényi random graph formula_53.
If the edge density formula_54 is fixed at formula_55, then the condition implies that the sequence of graphs is near the equality case in Sidorenko's property for every graph formula_0.
From Chung, Graham, and Wilson's 1989 paper about quasi-random graphs, it suffices for the formula_28 count to match what would be expected of a random graph (i.e. the condition holds for formula_56). The paper also asks which graphs formula_0 have this property besides formula_28. Such graphs are called "forcing graphs" as their count controls the quasi-randomness of a sequence of graphs.
The forcing conjecture states the following:
A graph formula_0 is forcing if and only if it is bipartite and not a tree.
It is straightforward to see that if formula_0 is forcing, then it is bipartite and not a tree. Some examples of forcing graphs are even cycles (shown by Chung, Graham, and Wilson). Skokan and Thoma showed that all complete bipartite graphs that are not trees are forcing.
Sidorenko's conjecture for graphs of density formula_6 follows from the forcing conjecture. Furthermore, the forcing conjecture would show that graphs that are close to equality in Sidorenko's property must satisfy quasi-randomness conditions.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "pn"
},
{
"math_id": 4,
"text": "p^{|E(H)|} n^{|V(H)|}"
},
{
"math_id": 5,
"text": "p^{|E(H)|}"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "W"
},
{
"math_id": 8,
"text": "t(H,W)\\geq t(K_2,W)^{|E(H)|}"
},
{
"math_id": 9,
"text": "t(H,W)"
},
{
"math_id": 10,
"text": "V(H)"
},
{
"math_id": 11,
"text": "V(G)"
},
{
"math_id": 12,
"text": "t(K_3,W)=0"
},
{
"math_id": 13,
"text": "t(K_2,W)"
},
{
"math_id": 14,
"text": "K_3"
},
{
"math_id": 15,
"text": "t(H,G)\\geq p^{|E(H)|}"
},
{
"math_id": 16,
"text": "K_2"
},
{
"math_id": 17,
"text": "n^{|V(H)|-1}"
},
{
"math_id": 18,
"text": "(p^{|E(H)|}-o(1))n^{|V(H)|}"
},
{
"math_id": 19,
"text": "\\operatorname{hom}(H,G)"
},
{
"math_id": 20,
"text": "n^{|V(H)|}t(H,G)"
},
{
"math_id": 21,
"text": "\\ell"
},
{
"math_id": 22,
"text": "i"
},
{
"math_id": 23,
"text": "j"
},
{
"math_id": 24,
"text": "A^\\ell"
},
{
"math_id": 25,
"text": "A"
},
{
"math_id": 26,
"text": "u"
},
{
"math_id": 27,
"text": "v"
},
{
"math_id": 28,
"text": "C_4"
},
{
"math_id": 29,
"text": "\\operatorname{codeg}(u,v)"
},
{
"math_id": 30,
"text": "\\operatorname{hom}(C_4,G)=\\sum_{u,v\\in V(G)}\\operatorname{codeg}(u,v)^2\\geq\\frac{1}{n^2}\\left(\\sum_{u,v\\in V(G)}\\operatorname{codeg}(u,v)\\right)^2"
},
{
"math_id": 31,
"text": "\\operatorname{hom}(C_4,G)\\geq\\frac{1}{n^2}\\left(\\sum_{x\\in V(G)}\\deg(x)^2\\right)^2\\geq\\frac{1}{n^2}\\left(\\frac{1}{n}\\left(\\sum_{x\\in V(G)} \\deg(x) \\right)^2\\right)^2=\\frac{1}{n^2}\\left(\\frac{1}{n}(n\\cdot pn)^2\\right)^2=p^4n^4"
},
{
"math_id": 32,
"text": "t(C_4,G)=\\frac{\\operatorname{hom}(C_4,G)}{n^4}\\geq p^4"
},
{
"math_id": 33,
"text": "\\operatorname{hom}(C_{2k},G)"
},
{
"math_id": 34,
"text": "A^{2k}"
},
{
"math_id": 35,
"text": "2k"
},
{
"math_id": 36,
"text": "\\lambda_1\\geq\\lambda_2\\geq\\dots\\geq\\lambda_n"
},
{
"math_id": 37,
"text": "\\lambda_1\\geq\\frac{\\mathbf{1}^\\intercal A\\mathbf{1}}{\\mathbf{1}^\\intercal\\mathbf{1}}=\\frac{1}{n} \\sum_{x\\in V(G)}\\deg(x)=pn,"
},
{
"math_id": 38,
"text": "\\mathbf{1}"
},
{
"math_id": 39,
"text": "1"
},
{
"math_id": 40,
"text": "\\operatorname{hom}(C_{2k},G)=\\sum_{i=1}^n\\lambda_i^{2k}\\geq\\lambda_1^{2k}\\geq p^{2k}n^{2k}"
},
{
"math_id": 41,
"text": "t(C_{2k},G)=\\frac{\\operatorname{hom}(C_{2k},G)}{n^{2k}}\\geq p^{2k}"
},
{
"math_id": 42,
"text": "3"
},
{
"math_id": 43,
"text": "A\\sqcup B"
},
{
"math_id": 44,
"text": "\\min\\{|A|,|B|\\}\\leq4"
},
{
"math_id": 45,
"text": "Q_3"
},
{
"math_id": 46,
"text": "B"
},
{
"math_id": 47,
"text": "K_{5,5}\\setminus C_{10}"
},
{
"math_id": 48,
"text": "10"
},
{
"math_id": 49,
"text": "5"
},
{
"math_id": 50,
"text": "\\{G_n\\}_{n=1}^{\\infty}"
},
{
"math_id": 51,
"text": "0<p<1"
},
{
"math_id": 52,
"text": "t(H,G_n)=(1+o(1))p^{|E(H)|}."
},
{
"math_id": 53,
"text": "G(n,p)"
},
{
"math_id": 54,
"text": "t(K_2,G_n)"
},
{
"math_id": 55,
"text": "(1+o(1))p"
},
{
"math_id": 56,
"text": "H=C_4"
}
] |
https://en.wikipedia.org/wiki?curid=62480458
|
62480574
|
Alon–Boppana bound
|
In spectral graph theory, the Alon–Boppana bound provides a lower bound on the second-largest eigenvalue of the adjacency matrix of a formula_0-regular graph, meaning a graph in which every vertex has degree formula_0. The reason for the interest in the second-largest eigenvalue is that the largest eigenvalue is guaranteed to be formula_0 due to formula_0-regularity, with the all-ones vector being the associated eigenvector. The graphs that come close to meeting this bound are Ramanujan graphs, which are examples of the best possible expander graphs.
Its discoverers are Noga Alon and Ravi Boppana.
Theorem statement.
Let formula_1 be a formula_0-regular graph on formula_2 vertices with diameter formula_3, and let formula_4 be its adjacency matrix. Let formula_5 be its eigenvalues. Then
formula_6
The above statement is the original one proved by Noga Alon. Some slightly weaker variants exist to improve the ease of proof or improve intuition. Two of these are shown in the proofs below.
Intuition.
The intuition for the number formula_7 comes from considering the infinite formula_0-regular tree. This graph is a universal cover of formula_0-regular graphs, and it has spectral radius formula_8
Saturation.
A graph that essentially saturates the Alon–Boppana bound is called a Ramanujan graph. More precisely, a Ramanujan graph is a formula_0-regular graph such that formula_9
A theorem by Friedman shows that, for every formula_0 and formula_10 and for sufficiently large formula_2, a random formula_0-regular graph formula_1 on formula_2 vertices satisfies formula_11 with high probability. This means that a random formula_2-vertex formula_0-regular graph is typically "almost Ramanujan."
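Both bounds are easy to observe numerically. The sketch below (Python with networkx and NumPy, arbitrary choices of formula_0 and formula_2) samples a random formula_0-regular graph and compares its second-largest adjacency eigenvalue with formula_7:
import networkx as nx
import numpy as np
d, n = 4, 1000                          # arbitrary degree and number of vertices
G = nx.random_regular_graph(d, n, seed=0)
A = nx.to_numpy_array(G)
eigenvalues = np.sort(np.linalg.eigvalsh(A))
lambda_2 = eigenvalues[-2]              # second-largest eigenvalue (the largest is d)
print(lambda_2, 2 * np.sqrt(d - 1))     # lambda_2 is typically very close to 2*sqrt(d-1)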
First proof (slightly weaker statement).
We will prove a slightly weaker statement, namely dropping the specificity on the second term and simply asserting formula_12 Here, the formula_13 term refers to the asymptotic behavior as formula_2 grows without bound while formula_0 remains fixed.
Let the vertex set be formula_14 By the min-max theorem, it suffices to construct a nonzero vector formula_15 such that formula_16 and formula_17
Pick some value formula_18 For each vertex in formula_19 define a vector formula_20 as follows. Each component will be indexed by a vertex formula_21 in the graph. For each formula_22 if the distance between formula_21 and formula_23 is formula_24 then the formula_21-component of formula_25 is formula_26 if formula_27 and formula_28 if formula_29 We claim that any such vector formula_30 satisfies
formula_31
To prove this, let formula_32 denote the set of all vertices that have a distance of exactly formula_33 from formula_34 First, note that
formula_35
Second, note that
formula_36
where the last term on the right comes from a possible overcounting of terms in the initial expression. The above then implies
formula_37
which, when combined with the fact that formula_38 for any formula_24 yields
formula_39
The combination of the above results proves the desired inequality.
For convenience, define the formula_40-ball of a vertex formula_23 to be the set of vertices with a distance of at most formula_41 from formula_34 Notice that the entry of formula_25 corresponding to a vertex formula_21 is nonzero if and only if formula_21 lies in the formula_40-ball of formula_34
The number of vertices within distance formula_33 of a given vertex is at most formula_43 Therefore, if formula_44 then there exist vertices formula_45 with distance at least formula_46
Let formula_30 and formula_47 It then follows that formula_48 because there is no vertex that lies in the formula_40-balls of both formula_23 and formula_21. It is also true that formula_51 because no vertex in the formula_40-ball of formula_23 can be adjacent to a vertex in the formula_40-ball of formula_21.
Now, there exists some constant formula_52 such that formula_53 satisfies formula_54 Then, since formula_55
formula_56
Finally, letting formula_57 grow without bound while ensuring that formula_58 (this can be done by letting formula_57 grow sublogarithmically as a function of formula_2) makes the error term formula_13 in formula_59
Second proof (slightly modified statement).
This proof will demonstrate a slightly modified result, but it provides better intuition for the source of the number formula_8 Rather than showing that formula_60 we will show that formula_61
First, pick some value formula_62 Notice that the number of closed walks of length formula_63 is
formula_64
However, it is also true that the number of closed walks of length formula_63 starting at a fixed vertex formula_23 in a formula_0-regular graph is at least the number of such walks in an infinite formula_0-regular tree, because an infinite formula_0-regular tree can be used to cover the graph. By the definition of the Catalan numbers, this number is at least formula_65 where formula_66 is the formula_67 Catalan number.
It follows that
formula_68
formula_69
Letting formula_2 grow without bound and letting formula_33 grow without bound but sublogarithmically in formula_2 yields formula_70
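The counting step can also be checked directly. The following sketch (Python, arbitrary parameters) compares the number of closed walks of length formula_63 in a random formula_0-regular graph, computed as the trace of formula_34 against the tree-based lower bound formula_65 summed over all formula_2 vertices:
import math
import networkx as nx
import numpy as np
d, n, k = 3, 400, 4                                  # arbitrary parameters
G = nx.random_regular_graph(d, n, seed=1)
A = nx.to_numpy_array(G)
closed_walks = np.trace(np.linalg.matrix_power(A, 2 * k))
catalan_k = math.comb(2 * k, k) // (k + 1)           # the k-th Catalan number
tree_bound = n * catalan_k * (d - 1) ** k
assert closed_walks >= tree_bound                    # walks in G dominate walks in the covering tree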
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "\\lambda_1 \\ge \\lambda_2 \\ge \\cdots \\ge \\lambda_n"
},
{
"math_id": 6,
"text": "\\lambda_2 \\ge 2\\sqrt{d-1} - \\frac{2\\sqrt{d-1} - 1}{\\lfloor m/2 \\rfloor}."
},
{
"math_id": 7,
"text": "2\\sqrt{d-1}"
},
{
"math_id": 8,
"text": "2\\sqrt{d-1}."
},
{
"math_id": 9,
"text": "|\\lambda_2|, |\\lambda_n| \\le 2\\sqrt{d-1}."
},
{
"math_id": 10,
"text": "\\epsilon > 0"
},
{
"math_id": 11,
"text": "\\max\\{|\\lambda_2|, |\\lambda_n|\\} < 2\\sqrt{d-1} + \\epsilon"
},
{
"math_id": 12,
"text": "\\lambda_2 \\ge 2\\sqrt{d-1} - o(1)."
},
{
"math_id": 13,
"text": "o(1)"
},
{
"math_id": 14,
"text": "V."
},
{
"math_id": 15,
"text": "z\\in\\mathbb{R}^{|V|}"
},
{
"math_id": 16,
"text": "z^{\\text{T}}\\mathbf{1} = 0"
},
{
"math_id": 17,
"text": "\\frac{z^{\\text{T}}Az}{z^{\\text{T}}z} \\ge 2\\sqrt{d-1} - o(1)."
},
{
"math_id": 18,
"text": "r\\in\\mathbb{N}."
},
{
"math_id": 19,
"text": "V,"
},
{
"math_id": 20,
"text": "f(v)\\in\\mathbb{R}^{|V|}"
},
{
"math_id": 21,
"text": "u"
},
{
"math_id": 22,
"text": "u,"
},
{
"math_id": 23,
"text": "v"
},
{
"math_id": 24,
"text": "k,"
},
{
"math_id": 25,
"text": "f(v)"
},
{
"math_id": 26,
"text": "f(v)_u = w_k = (d-1)^{-k/2}"
},
{
"math_id": 27,
"text": "k\\le r-1"
},
{
"math_id": 28,
"text": "0"
},
{
"math_id": 29,
"text": "k\\ge r."
},
{
"math_id": 30,
"text": "x = f(v)"
},
{
"math_id": 31,
"text": "\\frac{x^{\\text{T}}Ax}{x^{\\text{T}}x} \\ge 2\\sqrt{d-1}\\left(1 - \\frac{1}{2r}\\right)."
},
{
"math_id": 32,
"text": "V_k"
},
{
"math_id": 33,
"text": "k"
},
{
"math_id": 34,
"text": "v."
},
{
"math_id": 35,
"text": "x^{\\text{T}}x = \\sum_{k=0}^{r-1}|V_k|w^2_k."
},
{
"math_id": 36,
"text": "x^{\\text{T}}Ax = \\sum_{u\\in V}x_u \\sum_{u'\\in N(u)}x_{u'} \\ge \\sum_{k=0}^{r-1}|V_k|w_k\\left[w_{k-1} + (d-1)w_{k+1}\\right] - (d-1)|V_{r-1}|w_{r-1}w_r,"
},
{
"math_id": 37,
"text": "x^{\\text{T}}Ax \\ge 2\\sqrt{d-1}\\left(\\sum_{k=0}^{r-1}|V_k|w^2_k - \\frac{1}{2}|V_{r-1}|w^2_{r-1}\\right),"
},
{
"math_id": 38,
"text": "|V_{k+1}| \\le (d-1)|V_k|"
},
{
"math_id": 39,
"text": "x^{\\text{T}}Ax \\ge 2\\sqrt{d-1}\\left(1 - \\frac{1}{2r}\\right)\\sum_{k=0}^{r-1}|V_k|w^2_k."
},
{
"math_id": 40,
"text": "(r-1)"
},
{
"math_id": 41,
"text": "r-1"
},
{
"math_id": 42,
"text": "x."
},
{
"math_id": 43,
"text": "1 + d + d(d-1) + d(d-1)^2 + \\cdots + d(d-1)^{k-1} = d^k + 1."
},
{
"math_id": 44,
"text": "n \\ge d^{2r-1} + 2,"
},
{
"math_id": 45,
"text": "u, v"
},
{
"math_id": 46,
"text": "2r."
},
{
"math_id": 47,
"text": "y = f(u)."
},
{
"math_id": 48,
"text": "x^{\\text{T}}y = 0,"
},
{
"math_id": 49,
"text": "x"
},
{
"math_id": 50,
"text": "y."
},
{
"math_id": 51,
"text": "x^{\\text{T}}Ay = 0,"
},
{
"math_id": 52,
"text": "c"
},
{
"math_id": 53,
"text": "z = x - cy"
},
{
"math_id": 54,
"text": "z^{\\text{T}}\\mathbf{1} = 0."
},
{
"math_id": 55,
"text": "x^{\\text{T}}y = x^{\\text{T}}Ay = 0,"
},
{
"math_id": 56,
"text": "z^{\\text{T}}Az = x^{\\text{T}}Ax + c^2y^{\\text{T}}Ay \\ge 2\\sqrt{d-1}\\left(1 - \\frac{1}{2r}\\right)(x^{\\text{T}}x + c^2y^{\\text{T}}y) = 2\\sqrt{d-1}\\left(1 - \\frac{1}{2r}\\right)z^{\\text{T}}z."
},
{
"math_id": 57,
"text": "r"
},
{
"math_id": 58,
"text": "n \\ge d^{2r-1} + 2"
},
{
"math_id": 59,
"text": "n."
},
{
"math_id": 60,
"text": "\\lambda_2 \\ge 2\\sqrt{d-1} - o(1),"
},
{
"math_id": 61,
"text": "\\lambda = \\max(|\\lambda_2|, |\\lambda_n|) \\ge 2\\sqrt{d-1} - o(1)."
},
{
"math_id": 62,
"text": "k\\in\\mathbb{N}."
},
{
"math_id": 63,
"text": "2k"
},
{
"math_id": 64,
"text": "\\operatorname{tr}A^{2k} = \\sum_{i=1}^n \\lambda^{2k}_i\\le d^{2k} + n\\lambda^{2k}."
},
{
"math_id": 65,
"text": "C_k(d-1)^k,"
},
{
"math_id": 66,
"text": "C_k = \\frac{1}{k+1}\\binom{2k}{k}"
},
{
"math_id": 67,
"text": "k^{\\text{th}}"
},
{
"math_id": 68,
"text": "\\operatorname{tr}A^{2k} \\ge n\\frac{1}{k+1}\\binom{2k}{k}(d-1)^{k}"
},
{
"math_id": 69,
"text": "\\implies \\lambda^{2k} \\ge \\frac{1}{k+1}\\binom{2k}{k}(d-1)^{k} - \\frac{d^{2k}}{n}."
},
{
"math_id": 70,
"text": "\\lambda \\ge 2\\sqrt{d-1} - o(1)."
}
] |
https://en.wikipedia.org/wiki?curid=62480574
|
624839
|
Monte Carlo algorithm
|
Type of randomized algorithm
In computing, a Monte Carlo algorithm is a randomized algorithm whose output may be incorrect with a certain (typically small) probability. Two examples of such algorithms are the Karger–Stein algorithm and the Monte Carlo algorithm for minimum feedback arc set.
The name refers to the Monte Carlo casino in the Principality of Monaco, which is well-known around the world as an icon of gambling. The term "Monte Carlo" was first introduced in 1947 by Nicholas Metropolis.
Las Vegas algorithms are a dual of Monte Carlo algorithms and never return an incorrect answer. However, they may make random choices as part of their work. As a result, the time taken might vary between runs, even with the same input.
If there is a procedure for verifying whether the answer given by a Monte Carlo algorithm is correct, and the probability of a correct answer is bounded away from zero, then with probability one, running the algorithm repeatedly while testing the answers will eventually give a correct answer. Whether this process is a Las Vegas algorithm depends on whether halting with probability one is considered to satisfy the definition.
One-sided vs two-sided error.
While the answer returned by a deterministic algorithm is always expected to be correct, this is not the case for Monte Carlo algorithms. For decision problems, these algorithms are generally classified as either false-biased or true-biased. A false-biased Monte Carlo algorithm is always correct when it returns false; a true-biased algorithm is always correct when it returns true. While this describes algorithms with "one-sided errors", others might have no bias; these are said to have "two-sided errors". The answer they provide (either true or false) will be incorrect, or correct, with some bounded probability.
For instance, the Solovay–Strassen primality test is used to determine whether a given number is a prime number. It always answers true for prime number inputs; for composite inputs, it answers false with probability at least <templatestyles src="Fraction/styles.css" />1⁄2 and true with probability less than <templatestyles src="Fraction/styles.css" />1⁄2. Thus, false answers from the algorithm are certain to be correct, whereas the true answers remain uncertain; this is said to be a "<templatestyles src="Fraction/styles.css" />1⁄2-correct false-biased algorithm".
Amplification.
For a Monte Carlo algorithm with one-sided errors, the failure probability can be reduced (and the success probability amplified) by running the algorithm "k" times. Consider again the Solovay–Strassen algorithm which is "<templatestyles src="Fraction/styles.css" />1⁄2-correct false-biased". One may run this algorithm multiple times returning a false answer if it reaches a false response within "k" iterations, and otherwise returning true. Thus, if the number is prime then the answer is always correct, and if the number is composite then the answer is correct with probability at least 1−(1−<templatestyles src="Fraction/styles.css" />1⁄2)"k" = 1−2"−k".
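The amplification argument can be written as a generic wrapper; the following Python sketch is purely illustrative, and the toy test merely mimics a ½-correct false-biased procedure such as Solovay–Strassen:
import random
def amplify(test, x, k):
    # Run a 1/2-correct false-biased Monte Carlo test k times.
    for _ in range(k):
        if not test(x):
            return False          # a single "false" answer is always correct
    return True                   # wrong with probability at most 2**-k
def toy_test(x):
    # Hypothetical test for "x is odd": never wrong when it answers False,
    # but for even x it wrongly answers True with probability 1/2.
    return x % 2 == 1 or random.random() < 0.5
print(amplify(toy_test, 7, 20))   # True, and certainly correct
print(amplify(toy_test, 8, 20))   # False except with probability 2**-20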
For Monte Carlo decision algorithms with two-sided error, the failure probability may again be reduced by running the algorithm "k" times and returning the majority function of the answers.
Complexity classes.
The complexity class BPP describes decision problems that can be solved by polynomial-time Monte Carlo algorithms with a bounded probability of two-sided errors, and the complexity class RP describes problems that can be solved by a Monte Carlo algorithm with a bounded probability of one-sided error: if the correct answer is false, the algorithm always says so, but it may answer false incorrectly for some instances where the correct answer is true. In contrast, the complexity class ZPP describes problems solvable by polynomial expected time Las Vegas algorithms. ZPP ⊆ RP ⊆ BPP, but it is not known whether any of these complexity classes is distinct from each other; that is, Monte Carlo algorithms may have more computational power than Las Vegas algorithms, but this has not been proven. Another complexity class, PP, describes decision problems with a polynomial-time Monte Carlo algorithm that is more accurate than flipping a coin but where the error probability cannot necessarily be bounded away from <templatestyles src="Fraction/styles.css" />1⁄2.
Classes of Monte Carlo and Las Vegas algorithms.
Randomized algorithms are primarily divided into two main types, Monte Carlo and Las Vegas; however, these represent only the top of the hierarchy and can be further categorized.
"Both Las Vegas and Monte Carlo are dealing with decisions, i.e., problems in their decision version." "This however should not give a wrong impression and confine these algorithms to such problems—both types of randomized algorithms can be used on numerical problems as well, problems where the output is not simple ‘yes’/‘no’, but where one needs to receive a result that is numerical in nature."
The preceding table represents a general framework for Monte Carlo and Las Vegas randomized algorithms. Instead of the mathematical symbol formula_0, one could use formula_1, thus making the worst-case probabilities equal.
Applications in computational number theory and other areas.
Well-known Monte Carlo algorithms include the Solovay–Strassen primality test, the Baillie–PSW primality test, the Miller–Rabin primality test, and certain fast variants of the Schreier–Sims algorithm in computational group theory.
For algorithms that are a part of Stochastic Optimization (SO) group of algorithms, where probability is not known in advance and is empirically determined, it is sometimes possible to merge Monte Carlo and such an algorithm "to have both probability bound calculated in advance and a Stochastic Optimization component." "Example of such an algorithm is Ant Inspired Monte Carlo." In this way, "drawback of SO has been mitigated, and a confidence in a solution has been established."
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "<"
},
{
"math_id": 1,
"text": "\\leq"
}
] |
https://en.wikipedia.org/wiki?curid=624839
|
62485335
|
Doppler parameter
|
Physical parameter commonly used in astrophysics
The Doppler parameter, or Doppler broadening parameter, usually denoted as formula_0, is a parameter commonly used in astrophysics to characterize the width of observed spectral lines of astronomical objects. It is defined as
formula_1,
where formula_2 is the one-dimensional velocity dispersion. Given this parameter, the velocity distribution of the line-emitting/absorbing atoms and ions, approximated by a Gaussian, can be rewritten as
formula_3,
where formula_4 is the probability of the velocity along the line of sight being in the interval formula_5.
The line width is also often specified in terms of the FWHM (full width at half maximum), which is
formula_6.
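These relations are straightforward to apply; the short Python sketch below converts an arbitrary example value of formula_0 (in km s−1) into the corresponding velocity dispersion and FWHM:
import math
def sigma_from_b(b):
    return b / math.sqrt(2.0)
def fwhm_from_b(b):
    return 2.0 * math.sqrt(math.log(2.0)) * b   # approximately 1.665 * b
b = 36.0                                        # km/s, an arbitrary example value
print(sigma_from_b(b))                          # about 25.5 km/s
print(fwhm_from_b(b))                           # about 59.9 km/s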
Distribution.
The Doppler parameters of Lyman-alpha forest absorption lines are in the range 10–100 km s−1, with a median value around formula_7 that decreases with redshift. Analyses of the HST/COS dataset of low-redshift quasars give a median formula_0 parameter of around formula_8.
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "b"
},
{
"math_id": 1,
"text": " b = \\sqrt{2} \\sigma "
},
{
"math_id": 2,
"text": "\\sigma"
},
{
"math_id": 3,
"text": " p = \\frac{1}{\\sqrt{2\\pi}}\\frac{1}{\\sigma}e^{-(v-v_0)^2/2\\sigma^2} = \\frac{1}{\\sqrt{\\pi}}\\frac{1}{b}e^{-(v-v_0)^2/b^2}"
},
{
"math_id": 4,
"text": "p\\mathrm{d}v"
},
{
"math_id": 5,
"text": "[v, v + \\mathrm{d}v]"
},
{
"math_id": 6,
"text": " \\mathrm{FWHM} = 2\\sqrt{2\\ln 2}\\sigma = 2\\sqrt{\\ln 2} b \\approx 1.665b "
},
{
"math_id": 7,
"text": "b_m = 36\\ \\mathrm{km\\ s}^{-1}"
},
{
"math_id": 8,
"text": "33\\ \\mathrm{km\\ s}^{-1}"
}
] |
https://en.wikipedia.org/wiki?curid=62485335
|
62485342
|
Nehemiah 9
|
A chapter in the Book of Nehemiah
Nehemiah 9 is the ninth chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 19th chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. This chapter and the previous one focus mainly on Ezra; with this chapter recording Ezra's prayer of repentance for the sake of the people (parallel to Ezra 9–10).
Text.
The original text of this chapter is in the Hebrew language. In English Bibles this chapter is divided into 38 verses, but only 37 verses in the Hebrew Bible, with verse 9:38 in English texts numbered as 10:1 in Hebrew texts.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
A time of mourning (9:1–5a).
The Jewish community at this time determined to sincerely follow God and to become a holy people, so they gathered in a 'demonstration of mourning, confession and [praising] God'.
"Now in the twenty and fourth day of this month the children of Israel were assembled with fasting, and with sackclothes, and earth upon them."
Verse 1.
The month was Tishrei. The feast of tabernacles began on the fourteenth day of the month, and ended on the twenty-second, "all which time mourning had been forbidden, as contrary to the nature of the feast, which was to be kept with joy". Methodist commentator Joseph Benson reflects that "now, on the twenty-fourth, the next day but one after the feast, their consciences having been fully awakened, and their hearts filled with grief for their sins, which they were not allowed to express in that time of public joy, they resume their former thoughts, and, recalling their sins to mind, set apart a day for solemn fasting and humiliation". "Sackclothes" were made of "dark, coarse material associated with sorrow and repentance".
The prayer (9:5b–37).
This section records the prayer of praise and petition offered by the Levites on behalf of the people to appeal for the grace of God. With the Persians presumably listening, the mentioned historical events are certainly not arbitrarily selected, as the prayer is making some strong statements:
"And testified against them,"
"That You might bring them back to Your law."
"Yet they acted proudly,"
"And did not heed Your commandments,"
"But sinned against Your judgments,"
"‘Which if a man does, he shall live by them.’"
"And they shrugged their shoulders,"
"Stiffened their necks,"
"And would not hear."
The pledge of the people (9:38).
It is a tradition in the ancient Middle-East that a document (covenant, agreement) should always be authenticated by a seal or any number of seals. For example, Babylonian and Assyrian documents were often found ‘stamped with half a dozen seals or more’, which ‘were impressed upon the moist clay, and then the clay was baked’.
"And because of all this we make a sure covenant, and write it; and our princes, Levites, and priests, seal unto it."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62485342
|
62488279
|
Theorem of transition
|
Theorem about commutative rings and subrings
In algebra, the theorem of transition is said to hold between commutative rings formula_0 if
formula_1 dominates formula_2; that is, for each proper ideal I of formula_2, the extension formula_3 is a proper ideal of formula_1, and for each maximal ideal formula_4 of formula_1, the contraction formula_5 is a maximal ideal of formula_2;
for each maximal ideal formula_6 and formula_6-primary ideal formula_7 of formula_2, formula_8 is finite and moreover
formula_9
Given commutative rings formula_0 such that formula_1 dominates formula_2 and, for each maximal ideal formula_6 of formula_2, formula_10 is finite, the natural inclusion formula_11 is a faithfully flat ring homomorphism if and only if the theorem of transition holds between formula_0.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A \\subset B"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "IB"
},
{
"math_id": 4,
"text": "\\mathfrak n"
},
{
"math_id": 5,
"text": "\\mathfrak n \\cap A"
},
{
"math_id": 6,
"text": "\\mathfrak m"
},
{
"math_id": 7,
"text": "Q"
},
{
"math_id": 8,
"text": "\\operatorname{length}_B (B/ Q B)"
},
{
"math_id": 9,
"text": "\\operatorname{length}_B (B/ Q B) = \\operatorname{length}_B (B/ \\mathfrak{m} B) \\operatorname{length}_A(A/Q)."
},
{
"math_id": 10,
"text": "\\operatorname{length}_B (B/ \\mathfrak{m} B)"
},
{
"math_id": 11,
"text": "A \\to B"
}
] |
https://en.wikipedia.org/wiki?curid=62488279
|
6249929
|
Lambert (unit)
|
Non-SI metric unit of luminance
The lambert (symbol L, la or Lb) is a non-SI metric unit of luminance named for Johann Heinrich Lambert (1728–1777), a Swiss mathematician, physicist and astronomer. A related unit of luminance, the foot-lambert, is used in the lighting, cinema and flight simulation industries. The SI unit is the candela per square metre (cd/m2).
Definition.
1 lambert (L) = formula_0 candela per square centimetre (0.3183 cd/cm2) or formula_1 cd m−2
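A minimal Python sketch of the conversion implied by this definition (the constant and function names are illustrative):

import math

CD_PER_M2_PER_LAMBERT = 1.0e4 / math.pi   # 1 L = 10^4/pi cd/m^2 ≈ 3183 cd/m^2

def lamberts_to_cd_per_m2(value_in_lamberts):
    return value_in_lamberts * CD_PER_M2_PER_LAMBERT

def cd_per_m2_to_lamberts(luminance):
    return luminance / CD_PER_M2_PER_LAMBERT

print(lamberts_to_cd_per_m2(1.0))   # ≈ 3183.1 cd/m^2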
See also.
Other units of luminance:
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{1}{\\pi}"
},
{
"math_id": 1,
"text": "\\frac{10^4}{\\pi}"
}
] |
https://en.wikipedia.org/wiki?curid=6249929
|
62501305
|
Pósa's theorem
|
Sufficient condition for a Hamiltonian cycle in a graph, based on its vertex's degrees
Pósa's theorem, in graph theory, is a sufficient condition for the existence of a Hamiltonian cycle based on the degrees of the vertices in an undirected graph. It implies two other degree-based sufficient conditions, Dirac's theorem on Hamiltonian cycles and Ore's theorem. Unlike those conditions, it can be applied to graphs with a small number of low-degree vertices. It is named after Lajos Pósa, a protégé of Paul Erdős born in 1947, who discovered this theorem in 1962.
The Pósa condition for a finite undirected graph formula_0 having formula_1 vertices requires that, if the degrees of the formula_1 vertices are listed in increasing order as
formula_2
then for each index formula_3 the inequality formula_4 is satisfied.
Pósa's theorem states that if a finite undirected graph satisfies the Pósa condition, then that graph has a Hamiltonian cycle in it.
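The condition is easy to check mechanically; the following Python sketch (function name illustrative) tests a list of vertex degrees against the Pósa condition, using the 1-based indexing of the statement above:

def posa_condition(degrees):
    # Sort the degrees in increasing order and require d_k > k
    # for every index k < n/2 (indices counted from 1).
    d = sorted(degrees)
    n = len(d)
    for k in range(1, (n + 1) // 2):   # all integers k with k < n/2
        if d[k - 1] <= k:
            return False
    return True

# Degree sequence of the complete graph K_4, which is Hamiltonian.
print(posa_condition([3, 3, 3, 3]))   # True
# A path on 4 vertices fails the condition (and has no Hamiltonian cycle).
print(posa_condition([1, 1, 2, 2]))   # False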
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "d_{1} \\leq d_{2} \\leq ... \\leq d_{n},"
},
{
"math_id": 3,
"text": "k < n/2"
},
{
"math_id": 4,
"text": "k < d_{k}"
}
] |
https://en.wikipedia.org/wiki?curid=62501305
|
62503788
|
Carleman linearization
|
Mathematical transformation technique
In mathematics, Carleman linearization (or Carleman embedding) is a technique to transform a finite-dimensional nonlinear dynamical system into an infinite-dimensional linear system. It was introduced by the Swedish mathematician Torsten Carleman in 1932. Carleman linearization is related to the composition operator and has been widely used in the study of dynamical systems. It has also been used in many applied fields, such as in control theory and in quantum computing.
Procedure.
Consider the following autonomous nonlinear system:
formula_0
where formula_1 denotes the system state vector. Also, formula_2 and formula_3's are known analytic vector functions, and formula_4 is the formula_5 element of an unknown disturbance to the system.
At the desired nominal point, the nonlinear functions in the above system can be approximated by Taylor expansion
formula_6
where formula_7 is the formula_8 partial derivative of formula_9 with respect to formula_10 at formula_11 and formula_12 denotes the formula_8 Kronecker product.
Without loss of generality, we assume that formula_13 is at the origin.
Applying Taylor approximation to the system, we obtain
formula_14
where formula_15 and formula_16.
Consequently, the following linear system for the higher orders of the original states is obtained:
formula_17
where formula_18, and similarly formula_19.
Employing the Kronecker product operator, the approximated system can be presented in the following form
formula_20
where formula_21, and formula_22 and formula_23 matrices are defined in (Hashemian and Armaou 2015).
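As an illustration only (the example system, truncation order and integration time below are arbitrary choices for this sketch and are not taken from the article or from Hashemian and Armaou 2015), the following Python code builds the truncated Carleman matrix for the scalar system dx/dt = -x + x**2, in which the Kronecker powers reduce to ordinary powers x^[k] = x**k, and compares the linear prediction with the exact solution:

import numpy as np
from scipy.linalg import expm

def carleman_matrix(eta):
    # Carleman state is z = (x, x**2, ..., x**eta).  Using
    # d(x**k)/dt = k*x**(k-1)*(-x + x**2) = -k*x**k + k*x**(k+1),
    # and dropping the order-(eta+1) term, z' = A z with A bidiagonal.
    A = np.zeros((eta, eta))
    for k in range(1, eta + 1):
        A[k - 1, k - 1] = -k
        if k < eta:
            A[k - 1, k] = k
    return A

eta, x0, t = 6, 0.2, 1.0
z0 = np.array([x0**k for k in range(1, eta + 1)])
x_carleman = (expm(carleman_matrix(eta) * t) @ z0)[0]
x_exact = x0 * np.exp(-t) / (1.0 - x0 + x0 * np.exp(-t))
print(x_carleman, x_exact)   # both approximately 0.084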
|
[
{
"math_id": 0,
"text": "\n\\dot{x}=f(x)+\\sum_{j=1}^m g_j(x)d_j(t)\n"
},
{
"math_id": 1,
"text": "x\\in R^n"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "g_i"
},
{
"math_id": 4,
"text": "d_j"
},
{
"math_id": 5,
"text": "j^{th}"
},
{
"math_id": 6,
"text": "\nf(x)\\simeq f(x_0)+\n\\sum _{k=1}^\\eta \\frac{1}{k!}\\partial f_{[k]}\\mid _{x=x_0}(x-x_0)^{[k]}\n"
},
{
"math_id": 7,
"text": "\\partial f_{[k]}\\mid _{x=x_0}"
},
{
"math_id": 8,
"text": "k^{th}"
},
{
"math_id": 9,
"text": "f(x)"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "x=x_0"
},
{
"math_id": 12,
"text": "x^{[k]}"
},
{
"math_id": 13,
"text": "x_{0}"
},
{
"math_id": 14,
"text": "\n\\dot x\\simeq \\sum _{k=0}^\\eta A_k x^{[k]}\n+\\sum_{j=1}^{m}\\sum _{k=0}^\\eta B_{jk} x^{[k]}d_j \n"
},
{
"math_id": 15,
"text": "A_k=\\frac{1}{k!}\\partial f_{[k]}\\mid _{x=0}"
},
{
"math_id": 16,
"text": "B_{jk}=\\frac{1}{k!}\\partial g_{j[k]}\\mid _{x=0}"
},
{
"math_id": 17,
"text": "\n\\frac{d(x^{[i]})}{dt}\\simeq \\sum _{k=0}^{\\eta-i+1} A_{i,k} x^{[k+i-1]}\n+\\sum_{j=1}^m \\sum _{k=0}^{\\eta-i+1} B_{j,i,k} x^{[k+i-1]}d_j\n"
},
{
"math_id": 18,
"text": "A_{i,k}=\\sum _{l=0}^{i-1}I^{[l]}_n \\otimes A_k \\otimes I^{[i-1-l]}_n"
},
{
"math_id": 19,
"text": "B_{j,i,\\kappa}=\\sum _{l=0}^{i-1}I^{[l]}_n \\otimes B_{j,\\kappa} \\otimes I^{[i-1-l]}_n"
},
{
"math_id": 20,
"text": "\n\\dot x_{\\otimes}\\simeq Ax_{\\otimes}\n+\\sum_{j=1}^m [B_jx_{\\otimes}d_j+B_{j0}d_j]+A_r\n"
},
{
"math_id": 21,
"text": "x_{\\otimes}=\\begin{bmatrix}\nx^T &x^{{[2]}^T} & ... & x^{{[\\eta]}^T}\n \\end{bmatrix}^T"
},
{
"math_id": 22,
"text": "A, B_j , A_r"
},
{
"math_id": 23,
"text": "B_{j,0}"
}
] |
https://en.wikipedia.org/wiki?curid=62503788
|
62505649
|
Nehemiah 10
|
A chapter in the Book of Nehemiah
Nehemiah 10 is the tenth chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 20th chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Books of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE known as The Chronicler is the final author of these books. The chapter contains the list of signatories to the people's pledge and the later part deals with intermarriage with the non-Jews among the "people of the land" (parallel to Ezra 10) punctuated with the pledge to separate from "foreigners".
Text.
The original text of this chapter is in the Hebrew language. In English Bible texts this chapter is divided into 39 verses, but into 40 verses in the Hebrew Bible, due to a different verse numbering: verse 9:38 in English texts is numbered as 10:1 in Hebrew texts, so verses 10:1–39 in English texts correspond to 10:2–40 in Hebrew texts.
This article generally follows the common numbering in Christian English Bible versions, with notes to the numbering in Hebrew Bible versions.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
The leaders set their seal to the pledge (10:1–27).
After the first seal from Nehemiah the governor (verse 1a), the record is carefully ordered with three lists of signatories: the priests (10:1b–8), the Levites (10:9–13) and the chiefs of the people (10:14–27). Ezra the priest, who has played a leading part in the narrative of chapters 8 and 9, is not mentioned in this chapter.
"Now those who placed their seal on the document were:"
"Nehemiah the governor, the son of Hacaliah, and
"Zedekiah,"
"Pashhur, Amariah, Malchijah,"
"Harim, Meremoth, Obadiah,"
"The beginning of the se[cond] month is [on the si]xth [day] of the course of Jedaiah. On the second of the month is the Sabbath of the course of Harim...".
"Meshullam, Abijah, Mijamin,"
"Maaziah, Bilgai, Shemaiah: these were the priests."
Stipulations of the pledge (10:28–39).
The pledge contains the general affirmation involving the whole community (verses 28–29; cf. Ezra 9–10) and particular obligations 'which they lay upon themselves' (verses 30–39), in relation to intermarriage (verse 30), to the Sabbath and sabbatical year (verse 31), and to the provision for the upkeep of the Temple and clergy (verses 32–). The wording can be traced to the Book of Deuteronomy, such as "to walk in God's law" (cf. ) and "to observe and do all the commandments" (cf. ).
"These joined with their brethren, their nobles, and entered into a curse and an oath to walk in God's Law, which was given by Moses the servant of God, and to observe and do all the commandments of the Lord our Lord, and His ordinances and His statutes."
Verse 29.
The "curse" is the penalty which they invoked if they were faithless to the covenant, the "oath" is the solemn obligation of a duty which they vowed to perform: the oath recalls the wording of , "enter into covenant with the Lord your God, and into His oath, which the Lord your God makes with you today".
"Also we made ordinances for us, to charge ourselves yearly with the third part of a shekel for the service of the house of our God;"
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62505649
|
6250799
|
Proper forcing axiom
|
In the mathematical field of set theory, the proper forcing axiom ("PFA") is a significant strengthening of Martin's axiom, where forcings with the countable chain condition (ccc) are replaced by proper forcings.
Statement.
A forcing or partially ordered set formula_0 is proper if for all regular uncountable cardinals formula_1, forcing with P preserves stationary subsets of formula_2.
The proper forcing axiom asserts that if formula_0 is proper and formula_3 is a dense subset of formula_0 for each formula_4, then there is a filter formula_5 such that formula_6 is nonempty for all formula_4.
The class of proper forcings, to which PFA can be applied, is rather large. For example, standard arguments show that if formula_0 is ccc or ω-closed, then formula_0 is proper. If formula_0 is a countable support iteration of proper forcings, then formula_0 is proper. Crucially, all proper forcings preserve formula_7.
Consequences.
PFA directly implies its version for ccc forcings, Martin's axiom. In cardinal arithmetic, PFA implies formula_8. PFA implies any two formula_9-dense subsets of R are isomorphic, any two Aronszajn trees are club-isomorphic, and every automorphism of the Boolean algebra formula_10 is trivial. PFA implies that the Singular Cardinals Hypothesis holds. An especially notable consequence proved by John R. Steel is that the axiom of determinacy holds in L(R), the smallest inner model containing the real numbers. Another consequence is the failure of square principles and hence existence of inner models with many Woodin cardinals.
Consistency strength.
If there is a supercompact cardinal, then there is a model of set theory in which PFA holds. The proof uses the fact that proper forcings are preserved under countable support iteration, and the fact that if formula_11 is supercompact, then there exists a Laver function for formula_11.
It is not yet known precisely how much large cardinal strength comes from PFA, and currently the best lower bound is a bit below the existence of a Woodin cardinal that is a limit of Woodin cardinals.
Other forcing axioms.
The bounded proper forcing axiom (BPFA) is a weaker variant of PFA which instead of arbitrary dense subsets applies only to maximal antichains of size formula_12. Martin's maximum is the strongest possible version of a forcing axiom.
Forcing axioms are viable candidates for extending the axioms of set theory as an alternative to large cardinal axioms.
The Fundamental Theorem of Proper Forcing.
The Fundamental Theorem of Proper Forcing, due to Shelah, states that any countable support iteration of proper forcings is itself proper. This follows from the Proper Iteration Lemma, which states that whenever formula_13 is a countable support forcing iteration based on formula_14 and formula_15 is a countable elementary substructure of formula_16 for a sufficiently large regular cardinal formula_17, and formula_18 and formula_19 and formula_20 is formula_21-generic and formula_20 forces formula_22, then there exists formula_23 such that formula_24 is formula_15-generic and the restriction of formula_24 to formula_25 equals formula_20 and formula_20 forces the restriction of formula_24 to formula_26 to be stronger or equal to formula_27.
This version of the Proper Iteration Lemma, in which the name formula_27 is not assumed to be in formula_28, is due to Schlindwein.
The Proper Iteration Lemma is proved by a fairly straightforward induction on formula_11, and the Fundamental Theorem of Proper Forcing follows by taking formula_29.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": " \\lambda "
},
{
"math_id": 2,
"text": "[\\lambda]^\\omega"
},
{
"math_id": 3,
"text": "D_\\alpha"
},
{
"math_id": 4,
"text": "\\alpha < \\omega_1"
},
{
"math_id": 5,
"text": "G \\subseteq P"
},
{
"math_id": 6,
"text": "D_\\alpha \\cap G"
},
{
"math_id": 7,
"text": "\\aleph_1 "
},
{
"math_id": 8,
"text": " 2^{\\aleph_0} = \\aleph_2 "
},
{
"math_id": 9,
"text": "\\aleph_1"
},
{
"math_id": 10,
"text": "P(\\omega)\\text{/fin}"
},
{
"math_id": 11,
"text": "\\kappa"
},
{
"math_id": 12,
"text": "\\omega_1"
},
{
"math_id": 13,
"text": "(P_\\alpha)_{\\alpha\\leq\\kappa}"
},
{
"math_id": 14,
"text": "(Q_\\alpha)_{\\alpha<\\kappa}"
},
{
"math_id": 15,
"text": "N"
},
{
"math_id": 16,
"text": "H_\\lambda"
},
{
"math_id": 17,
"text": "\\lambda"
},
{
"math_id": 18,
"text": "P_\\kappa\\in N"
},
{
"math_id": 19,
"text": "\\alpha\\in \\kappa\\cap N"
},
{
"math_id": 20,
"text": "p"
},
{
"math_id": 21,
"text": "(N,P_\\alpha)"
},
{
"math_id": 22,
"text": "q\\in P_\\kappa/G_{P_\\alpha}\\cap N[G_{P_\\alpha}]"
},
{
"math_id": 23,
"text": "r\\in P_\\kappa"
},
{
"math_id": 24,
"text": "r"
},
{
"math_id": 25,
"text": "P_\\alpha"
},
{
"math_id": 26,
"text": "[\\alpha,\\kappa)"
},
{
"math_id": 27,
"text": "q"
},
{
"math_id": 28,
"text": " N"
},
{
"math_id": 29,
"text": "\\alpha=0"
}
] |
https://en.wikipedia.org/wiki?curid=6250799
|
62508931
|
Proth prime
|
Prime number of the form k*(2^n)+1
A Proth number is a natural number "N" of the form formula_0 where "k" and "n" are positive integers, "k" is odd and formula_1. A Proth prime is a Proth number that is prime. They are named after the French mathematician François Proth. The first few Proth primes are
3, 5, 13, 17, 41, 97, 113, 193, 241, 257, 353, 449, 577, 641, 673, 769, 929, 1153, 1217, 1409, 1601, 2113, 2689, 2753, 3137, 3329, 3457, 4481, 4993, 6529, 7297, 7681, 7937, 9473, 9601, 9857 (OEIS: ).
It is still an open question whether an infinite number of Proth primes exist. It was shown in 2022 that the reciprocal sum of Proth primes converges to a real number near 0.747392479, substantially less than the value of 1.093322456 for the reciprocal sum of Proth numbers.
The primality of Proth numbers can be tested more easily than many other numbers of similar magnitude.
Definition.
A Proth number takes the form formula_2 where "k" and "n" are positive integers, formula_3 is odd and formula_4. A Proth prime is a Proth number that is prime. Without the condition that formula_5, all odd integers larger than 1 would be Proth numbers.
Primality testing.
The primality of a Proth number can be tested with Proth's theorem, which states that a Proth number formula_6 is prime if and only if there exists an integer formula_7 for which
formula_8
This theorem can be used as a probabilistic test of primality, by checking for many random choices of formula_7 whether formula_8 If this fails to hold for several random formula_7, then it is very likely that the number formula_6 is composite.
This test is a Las Vegas algorithm: it never returns a false positive but can return a false negative; in other words, it never reports a composite number as "probably prime" but can report a prime number as "possibly composite".
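A minimal Python sketch of this randomized use of Proth's theorem (the function names are illustrative); it reports "prime" only when a witness formula_7 is actually found, so that answer is never wrong:

import random

def is_proth_number(N):
    # N = k * 2**n + 1 with k odd and 2**n > k
    if N < 3 or N % 2 == 0:
        return False
    k, n = N - 1, 0
    while k % 2 == 0:
        k //= 2
        n += 1
    return 2**n > k

def proth_test(N, rounds=32):
    # Returns "prime" only when a base a with a**((N-1)//2) ≡ -1 (mod N)
    # is found; otherwise "possibly composite".
    assert is_proth_number(N)
    for _ in range(rounds):
        a = random.randrange(2, N)
        if pow(a, (N - 1) // 2, N) == N - 1:
            return "prime"
    return "possibly composite"

print(proth_test(13))     # 13 = 3 * 2**2 + 1, a Proth prime
print(proth_test(9473))   # another Proth prime from the list above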
In 2008, Sze created a deterministic algorithm that runs in at most formula_9 time, where Õ is the soft-O notation. For typical searches for Proth primes, usually formula_3 is either fixed (e.g. 321 Prime Search or Sierpinski Problem) or of order formula_10 (e.g. Cullen prime search). In these cases the algorithm runs in at most formula_11, or formula_12, time for all formula_13. There is also an algorithm that runs in formula_14 time.
Fermat numbers are a special case of Proth numbers, wherein "k" = 1. In such a scenario Pépin's test proves that only the base "a" = 3 needs to be checked to deterministically verify or falsify the primality of a Fermat number.
Large primes.
As of 2022, the largest known Proth prime is formula_15. It is 9,383,761 digits long. It was found by Szabolcs Peter in the PrimeGrid volunteer computing project which announced it on 6 November 2016. It is also the second largest known non-Mersenne prime.
The project Seventeen or Bust, searching for Proth primes with a certain formula_16 to prove that 78557 is the smallest Sierpinski number (Sierpinski problem), had found 11 large Proth primes by 2007. Similar resolutions to the prime Sierpiński problem and extended Sierpiński problem have yielded several more numbers.
Since divisors of Fermat numbers formula_17 are always of the form formula_18, it is customary to determine if a new Proth prime divides a Fermat number.
As of July 2023, PrimeGrid is the leading computing project for searching for Proth primes. Its main projects include:
As of June 2023, the largest Proth primes discovered are:
<templatestyles src="Reflist/styles.css" />
Uses.
Small Proth primes (less than 10^200) have been used in constructing prime ladders, sequences of prime numbers such that each term is "close" (within about 10^11) to the previous one. Such ladders have been used to empirically verify prime-related conjectures. For example, Goldbach's weak conjecture was verified in 2008 up to 8.875 × 10^30 using prime ladders constructed from Proth primes. (The conjecture was later proved by Harald Helfgott.)
Also, Proth primes can optimize den Boer reduction between the Diffie–Hellman problem and the Discrete logarithm problem. The prime number 55 × 2^286 + 1 has been used in this way.
As Proth primes have simple binary representations, they have also been used in fast modular reduction without the need for pre-computation, for example by Microsoft.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N = k \\times 2^n +1"
},
{
"math_id": 1,
"text": "2^n > k"
},
{
"math_id": 2,
"text": "N=k 2^n +1"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "2^n>k"
},
{
"math_id": 5,
"text": " 2^n > k"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "a^{\\frac{p-1}{2}}\\equiv -1 \\pmod{p}."
},
{
"math_id": 9,
"text": "\\tilde{O}((k\\log k+\\log N)(\\log N)^2)"
},
{
"math_id": 10,
"text": "O(\\log N)"
},
{
"math_id": 11,
"text": "\\tilde{O}((\\log N)^3)"
},
{
"math_id": 12,
"text": "O((\\log N)^{3+\\epsilon})"
},
{
"math_id": 13,
"text": "\\epsilon>0"
},
{
"math_id": 14,
"text": "\\tilde{O}((\\log N)^{24/7})"
},
{
"math_id": 15,
"text": "10223 \\times 2^{31172165} + 1"
},
{
"math_id": 16,
"text": "t"
},
{
"math_id": 17,
"text": "F_n = 2^{2^n} + 1"
},
{
"math_id": 18,
"text": "k \\times 2^{n+2} + 1"
},
{
"math_id": 19,
"text": "3\\times2^n+1"
},
{
"math_id": 20,
"text": "27\\times2^n+1"
},
{
"math_id": 21,
"text": "121\\times2^n+1"
},
{
"math_id": 22,
"text": "n\\times2^n+1"
},
{
"math_id": 23,
"text": "k \\times 2^n+1"
}
] |
https://en.wikipedia.org/wiki?curid=62508931
|
62510733
|
Nehemiah 11
|
A chapter in the Book of Nehemiah
Nehemiah 11 is the eleventh chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 21st chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. The chapter describes the repopulation of Jerusalem by Judahites (verses 4-6), Benjamites (7-9), priests (10-14), Levites (15-18), gatekeepers (19) and "the rest of Israel" (20-21). Roles in relation to leadership, maintenance and prayer in the Temple are allocated. The people cast lots so that one in ten would live in the city (while still having military duties), whilst the remainder repopulated the surrounding areas (the "possession of the land" theme).
Text.
The original text of this chapter is in the Hebrew language. This chapter is divided into 36 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Repopulation of Judah (11:1–30).
Jerusalem, as the provincial capital, already had a sizeable population, but mostly of the ruling class, close to leadership positions. Nehemiah was recorded as having 150 officials dining with him. Anglican commentator H. E. Ryle refers to a suggestion that the rulers or princes, before Nehemiah took the matter in hand, had resided in the country.
However, the city needed more general population in order to grow. The people who would move to Jerusalem were determined by casting lots, one each out of groups of ten family representatives. The detailed list (verses 3–24) demonstrates that each group living outside the city was well represented by families living within its walls.
Among the cities resettled by the returning populations are mentioned Qiryat-arba, Zorah, Jarmuth, Zanoah, Adullam, and Lachish.
"And the rulers of the people dwelt at Jerusalem: the rest of the people also cast lots, to bring one of ten to dwell in Jerusalem the holy city, and nine parts to dwell in other cities.".
Verse 1.
Jerusalem is also called "the holy city" in verse 18. Ryle notes that "the occurrence of this title in Scripture may be illustrated by Isaiah 48:2, "For they call themselves of the holy city", and "O Jerusalem, the holy city""; see also Daniel 9:24 and Joel 3:17. In the New Testament it occurs in Matthew 27:53; see also Revelation 22:19. The New English Translation explains that "the word 'hand' is used here in the sense of a part or portion".
"And Shabbethai and Jozabad, of the chief of the Levites, had the oversight of the outward business of the house of God."
"For it was the king's commandment concerning them, that a certain portion should be for the singers, due for every day."
Outside Jerusalem (11:25–36).
This part scans the Jewish habitation outside Jerusalem with enclaves and settlements throughout the Judean countryside, listing the towns of Judah (verses 25–30), the towns of Benjamin (verses 31–35) and a note on the dwellings of the Levites (verse 36).
" Also the children of Benjamin from Geba dwelt in Michmash, Aija, and Bethel, and their villages;"
"And of the Levites were divisions in Judah, and in Benjamin."
Verse 36.
The Levites were not given land as an inheritance, for 'their portion was the Lord and the honor of his service', but they were given a share of specific towns among the various tribes of Israel.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62510733
|
62513129
|
Nehemiah 12
|
A chapter in the Book of Nehemiah
Nehemiah 12 is the twelfth chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 22nd chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. This chapter recounts the lineage of the priests and Levites and describes the dedication of the walls of Jerusalem, whose construction has been a primary concern since the beginning of the book.
Text.
The original text of this chapter is in the Hebrew language. This chapter is divided into 47 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Priests and Levites (12:1–26).
This part records the several lists of priests and Levites to document the genuineness of the Jewish community and its religious authority, in order to give legitimacy in this postexilic community. The list starts with those said to have returned with Zerubbabel in the first wave at the time of the Persian king, Cyrus (verses 1–9), but this list is quite different from the one in Ezra 2. After listing the high priests from the last one at the time of exile, Jozadak, the father of Jeshua, until Jaddua (verses 10–11), it records those returning at the time of Ezra (verses 12–21), with a careful note on its sources (verses 22–23).
"Now these are the priests and the Levites that went up with Zerubbabel the son of Shealtiel, and Jeshua:"
"Seraiah, Jeremiah, Ezra,"
"Shecaniah, Rehum, Meremoth,"
"Iddo, Ginnethoi, Abijah,"
"Mijamin, Maadiah, Bilgah,"
"of Harim, Adna;"
"of Meraioth, Helkai;"
"The beginning of the se[cond] month is [on the si]xth [day] of the course of Jedaiah. On the second of the month is the Sabbath of the course of Harim...".
"of Abijah, Zichri;"
"the son of Minjamin;"
"of Moadiah, Piltai;"
Joyous dedication (12:27–43).
These verses describe the joyous dedication of the completed work orchestrated by Nehemiah, within the frame of a symmetrically ordered structure as follows:
A Preparations for joyous dedication (verses 27–30)
B Two companies appointed (verse 31a)
C One goes to the right upon the wall (verses 31b, 37)
C' One goes to the left upon the wall (verses 38–39)
B' Two companies meet and stand at the house of God (verse 40)
A' Performance of joyous dedication (verse 43)
The exuberant tone of this passage is indicated by the framework of "joy" which brackets this section (verse 27, five times in verse 43), as the final exposition after previous use in some turning points in the narrative:
Two lists of participants are recorded in verses 32–36 and 41–42, and also display a remarkable symmetry:
A. Hoshaiah and half of the princes of Judah (verse 32)
B. Seven priests with trumpets (verses 33–35a)
C. Zechariah and eight Levitical instrumentalists (verses 35b–36a)
X. Ezra, the scribe (verse 36b)
A. Nehemiah and half of people/officials (verses 38–40)
B. Seven priests with trumpets (verse 41)
C. Jezrahaiah and eight Levitical singers (verse 42)
"And his brethren, Shemaiah, and Azarael, Milalai, Gilalai, Maai, Nethaneel, and Judah, Hanani, with the musical instruments of David the man of God,"
"and Ezra the scribe before them."
Verse 36.
The appearance of "Ezra, the scribe" (verse 36b) provides the primary evidence for the contemporaneity of Ezra and Nehemiah.
"And from above the gate of Ephraim, and above the old gate, and above the fish gate, and the tower of Hananeel, and the tower of Meah, even unto the sheep gate: and they stood still in the prison gate."
"And Maaseiah, and Shemaiah, and Eleazar, and Uzzi, and Jehohanan, and Malchijah, and Elam, and Ezer. And the singers sang loud, with Jezrahiah their overseer."
"Also that day they offered great sacrifices, and rejoiced: for God had made them rejoice with great joy: the wives also and the children rejoiced: so that the joy of Jerusalem was heard even afar off."
Verse 43.
The words "joy" and "rejoice" occur five times in this sentence: "this verse is full of joy; but before the rejoicing comes the abundant offering of sacrifices." Methodist commentator Joseph Benson notes that the security of the walls meant that "they could praise the Lord there without disturbance or fear".
The organization of worship (12:44–47).
The last part of this chapter focuses on the priests and Levites who help people worship God in the Temple, as their needs were taken care of by the same people. David is mentioned twice, indicating that the people were emulating the traditions established since the time 'God directed David to establish the Temple'. Verse 47 also confirms that the pattern of bringing food for the Temple workers was already observed from the time of Zerubbabel, when the Temple was rebuilt, and was consistently practiced until the time of Nehemiah. This explains the anger of Nehemiah a few years later when he heard that the people had stopped providing for the needs of the Temple workers (Nehemiah 13:10–13).
"And at that time were some appointed over the chambers for the treasures, for the offerings, for the firstfruits, and for the tithes, to gather into them out of the fields of the cities the portions of the law for the priests and Levites: for Judah rejoiced for the priests and for the Levites that waited."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62513129
|
6251420
|
Non-standard model of arithmetic
|
Model of (first-order) Peano arithmetic that contains non-standard numbers
In mathematical logic, a non-standard model of arithmetic is a model of first-order Peano arithmetic that contains non-standard numbers. The term standard model of arithmetic refers to the standard natural numbers 0, 1, 2, …. The elements of any model of Peano arithmetic are linearly ordered and possess an initial segment isomorphic to the standard natural numbers. A non-standard model is one that has additional elements outside this initial segment. The construction of such models is due to Thoralf Skolem (1934).
Non-standard models of arithmetic exist only for the first-order formulation of the Peano axioms; for the original second-order formulation, there is, up to isomorphism, only one model: the natural numbers themselves.
Existence.
There are several methods that can be used to prove the existence of non-standard models of arithmetic.
From the compactness theorem.
The existence of non-standard models of arithmetic can be demonstrated by an application of the compactness theorem. To do this, a set of axioms P* is defined in a language including the language of Peano arithmetic together with a new constant symbol "x". The axioms consist of the axioms of Peano arithmetic P together with another infinite set of axioms: for each numeral "n", the axiom "x" > "n" is included. Any finite subset of these axioms is satisfied by a model that is the standard model of arithmetic plus the constant "x" interpreted as some number larger than any numeral mentioned in the finite subset of P*. Thus by the compactness theorem there is a model satisfying all the axioms P*. Since any model of P* is a model of P (since a model of a set of axioms is obviously also a model of any subset of that set of axioms), we have that our extended model is also a model of the Peano axioms. The element of this model corresponding to "x" cannot be a standard number, because as indicated it is larger than any standard number.
Using more complex methods, it is possible to build non-standard models that possess more complicated properties. For example, there are models of Peano arithmetic in which Goodstein's theorem fails. It can be proved in Zermelo–Fraenkel set theory that Goodstein's theorem holds in the standard model, so a model where Goodstein's theorem fails must be non-standard.
From the incompleteness theorems.
Gödel's incompleteness theorems also imply the existence of non-standard models of arithmetic.
The incompleteness theorems show that a particular sentence "G", the Gödel sentence of Peano arithmetic, is neither provable nor disprovable in Peano arithmetic. By the completeness theorem, this means that "G" is false in some model of Peano arithmetic. However, "G" is true in the standard model of arithmetic, and therefore any model in which "G" is false must be a non-standard model. Thus satisfying ~"G" is a sufficient condition for a model to be nonstandard. It is not a necessary condition, however; for any Gödel sentence "G" and any infinite cardinality there is a model of arithmetic with "G" true and of that cardinality.
Arithmetic unsoundness for models with ~"G" true.
Assuming that arithmetic is consistent, arithmetic with ~"G" is also consistent. However, since ~"G" states that arithmetic is inconsistent, the result will not be ω-consistent (because ~"G" is false and this violates ω-consistency).
From an ultraproduct.
Another method for constructing a non-standard model of arithmetic is via an ultraproduct. A typical construction uses the set of all sequences of natural numbers, formula_0. Choose an ultrafilter on formula_1, then identify two sequences whenever they have equal values on positions that form a member of the ultrafilter (this requires that they agree on infinitely many terms, but the condition is stronger than this as ultrafilters resemble axiom-of-choice-like maximal extensions of the Fréchet filter). The resulting semiring is a non-standard model of arithmetic. It can be identified with the hypernatural numbers.
Structure of countable non-standard models.
The ultraproduct models are uncountable. One way to see this is to construct an injection of the infinite product of N into the ultraproduct. However, by the Löwenheim–Skolem theorem there must exist countable non-standard models of arithmetic. One way to define such a model is to use Henkin semantics.
Any countable non-standard model of arithmetic has order type ω + (ω* + ω) ⋅ η, where ω is the order type of the standard natural numbers, ω* is the dual order (an infinite decreasing sequence) and η is the order type of the rational numbers. In other words, a countable non-standard model begins with an infinite increasing sequence (the standard elements of the model). This is followed by a collection of "blocks," each of order type ω* + ω, the order type of the integers. These blocks are in turn densely ordered with the order type of the rationals. The result follows fairly easily because it is easy to see that the blocks of non-standard numbers have to be dense and linearly ordered without endpoints, and the order type of the rationals is the only countable dense linear order without endpoints.
So, the order type of the countable non-standard models is known. However, the arithmetical operations are much more complicated.
It is easy to see that the arithmetical structure differs from ω + (ω* + ω) ⋅ η. For instance if a nonstandard (non-finite) element "u" is in the model, then so is "m" ⋅ "u" for any "m" in the initial segment N, yet "u"2 is larger than "m" ⋅ "u" for any standard finite "m".
Also one can define "square roots" such as the least "v" such that "v"2 > 2 ⋅ "u". These cannot be within a standard finite number of any rational multiple of "u". By analogous methods to non-standard analysis one can also use PA to define close approximations to irrational multiples of a non-standard number "u" such as the least "v" with "v" > π ⋅ "u" (these can be defined in PA using non-standard finite rational approximations of π even though π itself cannot be). Once more, "v" − ("m"/"n") ⋅ ("u"/"n") has to be larger than any standard finite number for any standard finite "m", "n".
This shows that the arithmetical structure of a countable non-standard model is more complex than the structure of the rationals. There is more to it than that though: Tennenbaum's theorem shows that for any countable non-standard model of Peano arithmetic there is no way to code the elements of the model as (standard) natural numbers such that either the addition or multiplication operation of the model is computable on the codes. This result was first obtained by Stanley Tennenbaum in 1959.
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{N}^{\\mathbb{N}}"
},
{
"math_id": 1,
"text": "\\mathbb{N}"
}
] |
https://en.wikipedia.org/wiki?curid=6251420
|
62520140
|
Nehemiah 13
|
A chapter in the Book of Nehemiah
Nehemiah 13 is the thirteenth (and the final) chapter of the Book of Nehemiah in the Old Testament of the Christian Bible, or the 23rd chapter of the book of Ezra-Nehemiah in the Hebrew Bible, which treats the book of Ezra and the book of Nehemiah as one book. Jewish tradition states that Ezra is the author of Ezra-Nehemiah as well as the Book of Chronicles, but modern scholars generally accept that a compiler from the 5th century BCE (the so-called "Chronicler") is the final author of these books. This chapter addresses a series of problems handled by Nehemiah himself, which had arisen during his temporary absence from the land, with some similar issues to those related in Ezra 9–10 and Nehemiah 10.
Text.
The original text of this chapter is in the Hebrew language. This chapter is divided into 31 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Purification (13:1–3).
The opening verses record the obedience of the people at that period of time to the words of the Mosaic law: they made an "immediate" response (verse 3), in this case by removing all people of foreign descent (the "mixed multitude").
"On that day they read in the book of Moses in the audience of the people; and therein was found written, that the Ammonite and the Moabite should not come into the congregation of God for ever;"
Verse 1.
The exclusion of the Ammonites and Moabites from the sanctuary is written in , because of two reasons (verse 2):
The reforms of Nehemiah (13:4–31).
After 12 years in Jerusalem, Nehemiah returned to the court of Artaxerxes (verse 6), but during his absence, various abuses sprang up which he had to handle emphatically as recorded in this section. The cause of the offences can be traced to the religious laxity in the community, especially with close relationship of the priests with Tobiah (verse 4) and the family alliance of a grandson of Eliashib, the high priest, with Sanballat the Horonite (verse 28). Nehemiah took drastic measures to eradicate the ill:
Nehemiah also reestablished the previous good conditions in chapters 10 and 12 by putting people under oath once more (verse 25; cf. ) and set up provisions for the regular service of the Temple (verses 30–31; cf. ff, ff).
"But during all this I was not in Jerusalem, for in the thirty-second year of Artaxerxes king of Babylon I had returned to the king. Then after certain days I obtained leave from the king,"
Verse 6.
"The thirty-second year of Artaxerxes" corresponds to 433 BC. Thus, Nehemiah was governor of Judah from 445 to 433 BC, then he stayed in Susa for an unknown period of time before returning to Jerusalem. The text does not specify in what capacity he returned, although it was with authorisation from the king: he probably continued to be the governor until 407 BC, when Bigvai became governor.
"And for the wood offering, at times appointed, and for the firstfruits."
"Remember me, O my God, for good."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62520140
|
6252231
|
Landau levels
|
Quantization of cyclotron orbits
In quantum mechanics, the energies of cyclotron orbits of charged particles in a uniform magnetic field are quantized to discrete values, thus known as Landau levels. These levels are degenerate, with the number of electrons per level directly proportional to the strength of the applied magnetic field. It is named after the Soviet physicist Lev Landau.
Landau quantization contributes towards magnetic susceptibility of metals, known as Landau diamagnetism. Under strong magnetic fields, Landau quantization leads to oscillations in electronic properties of materials as a function of the applied magnetic field known as the De Haas–Van Alphen and Shubnikov–de Haas effects.
Landau quantization is a key ingredient in the explanation of the integer quantum Hall effect.
Derivation.
Consider a system of non-interacting particles with charge q and spin S confined to an area "A" = "LxLy" in the "x-y" plane. Apply a uniform magnetic field formula_0 along the z-axis. In SI units, the Hamiltonian of this system (here, the effects of spin are neglected) is
formula_1
Here, formula_2 is the canonical momentum operator and formula_3 is the operator for the electromagnetic vector potential formula_4 (in position space formula_5).
The vector potential is related to the magnetic field by formula_6
There is some gauge freedom in the choice of vector potential for a given magnetic field. The Hamiltonian is gauge invariant, which means that adding the gradient of a scalar field to A changes the overall phase of the wave function by an amount corresponding to the scalar field. But physical properties are not influenced by the specific choice of gauge.
In the Landau gauge.
From the possible solutions for A, a gauge fixing introduced by Lev Landau is often used for charged particles in a constant magnetic field.
When formula_7 then formula_8 is a possible solution in the Landau gauge.
In this gauge, the Hamiltonian is
formula_9
The operator formula_10 commutes with this Hamiltonian, since the operator "ŷ" is absent by the choice of gauge. Thus the operator formula_10 can be replaced by its eigenvalue "ħky". Since formula_11 does not appear in the Hamiltonian and only the z-momentum appears in the kinetic energy, this motion along the z-direction is a free motion.
The Hamiltonian can also be written more simply by noting that the cyclotron frequency is "ω"c = "qB"/"m", giving
formula_12
This is exactly the Hamiltonian for the quantum harmonic oscillator, except with the minimum of the potential shifted in coordinate space by "x"0 = "ħky"/"mω"c .
To find the energies, note that translating the harmonic oscillator potential does not affect the energies. The energies of this system are thus identical to those of the standard quantum harmonic oscillator,
formula_13
The energy does not depend on the quantum number "ky", so there will be a finite number of degeneracies (If the particle is placed in an unconfined space, this degeneracy will correspond to a continuous sequence of formula_14). The value of formula_15 is continuous if the particle is unconfined in the z-direction and discrete if the particle is bounded in the z-direction also. Each set of wave functions with the same value of n is called a Landau level.
For the wave functions, recall that formula_10 commutes with the Hamiltonian. Then the wave function factors into a product of momentum eigenstates in the y direction and harmonic oscillator eigenstates formula_16 shifted by an amount x0 in the x direction:
formula_17
where formula_18. In sum, the state of the electron is characterized by the quantum numbers, n, "ky" and "kz".
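As a numerical illustration (the 10 T field and the constant names are arbitrary choices of this sketch), the following Python code evaluates the cyclotron frequency and the first few Landau level energies for an electron with zero momentum along the z-direction:

E_CHARGE = 1.602176634e-19    # elementary charge, C
M_E      = 9.1093837015e-31   # electron mass, kg
HBAR     = 1.054571817e-34    # reduced Planck constant, J*s

def cyclotron_frequency(B, q=E_CHARGE, m=M_E):
    # omega_c = qB/m, in rad/s
    return q * B / m

def landau_energy(n, B, pz=0.0, q=E_CHARGE, m=M_E):
    # E_n = hbar*omega_c*(n + 1/2) + pz**2/(2m), in joules
    return HBAR * cyclotron_frequency(B, q, m) * (n + 0.5) + pz**2 / (2.0 * m)

B = 10.0   # tesla
for n in range(3):
    print(n, landau_energy(n, B) / E_CHARGE * 1e3, "meV")
# level spacing hbar*omega_c ≈ 1.16 meV at 10 T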
In the symmetric gauge.
The derivation treated x and "y" as asymmetric. However, by the symmetry of the system, there is no physical quantity which distinguishes these coordinates. The same result could have been obtained with an appropriate interchange of x and y.
A more adequate choice of gauge is the symmetric gauge, which refers to the choice
formula_19
In terms of dimensionless lengths and energies, the Hamiltonian can be expressed as
formula_20
The correct units can be restored by introducing factors of formula_21 and formula_22.
Consider operators
formula_23
These operators follow certain commutation relations
formula_24
In terms of above operators the Hamiltonian can be written as
formula_25
where we reintroduced the units back.
The Landau level index formula_26 is the eigenvalue of the operator formula_27.
The application of formula_28 increases formula_29 by one unit while preserving formula_26, whereas application of formula_30 simultaneously increases formula_26 and decreases formula_29 by one unit. The analogy to the quantum harmonic oscillator provides solutions
formula_31
where
formula_32
and
formula_33
One may verify that the above states correspond to choosing wavefunctions proportional to
formula_34
where formula_35.
In particular, the lowest Landau level formula_36 consists of arbitrary analytic functions multiplying a Gaussian, formula_37.
Degeneracy of the Landau levels.
In the Landau gauge.
The effects of Landau levels may only be observed when the mean thermal energy "kT" is smaller than the energy level separation, "kT" ≪ "ħω"c, meaning low temperatures and strong magnetic fields.
Each Landau level is degenerate because of the second quantum number "ky", which can take the values
formula_38
where N is an integer. The allowed values of N are further restricted by the condition that the center of force of the oscillator, "x0", must physically lie within the system, 0 ≤ "x"0 < "Lx". This gives the following range for N,
formula_39
For particles with charge "q" = "Ze", the upper bound on N can be simply written as a ratio of fluxes,
formula_40
where Φ0 = "h"/"e" is the fundamental magnetic flux quantum and Φ = "BA" is the flux through the system (with area "A" = "LxLy").
Thus, for particles with spin S, the maximum number D of particles per Landau level is
formula_41
which for electrons (where "Z" = 1 and "S" = 1/2) gives "D" = 2Φ/Φ0, two available states for each flux quantum that penetrates the system.
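A short Python sketch of this counting (the sample area and field strength are arbitrary illustrative values):

PLANCK   = 6.62607015e-34     # J*s
E_CHARGE = 1.602176634e-19    # C
PHI_0 = PLANCK / E_CHARGE     # magnetic flux quantum h/e ≈ 4.14e-15 Wb

def landau_degeneracy(B, area, spin_factor=2):
    # D = spin_factor * Phi / Phi_0 with Phi = B * area (electrons: spin_factor = 2)
    return spin_factor * B * area / PHI_0

print(landau_degeneracy(10.0, 1e-4))   # ≈ 4.8e11 states in a 1 cm^2 sample at 10 T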
The above gives only a rough idea of the effects of finite-size geometry. Strictly speaking, using the standard solution of the harmonic oscillator is only valid for systems unbounded in the x-direction (infinite strips). If the size "Lx" is finite, boundary conditions in that direction give rise to non-standard quantization conditions on the magnetic field, involving (in principle) both solutions to the Hermite equation. The filling of these levels with many electrons is still an active area of research.
In general, Landau levels are observed in electronic systems. As the magnetic field is increased, more and more electrons can fit into a given Landau level. The occupation of the highest Landau level ranges from completely full to entirely empty, leading to oscillations in various electronic properties (see De Haas–Van Alphen effect and Shubnikov–de Haas effect).
If Zeeman splitting is included, each Landau level splits into a pair, one for spin up electrons and the other for spin down electrons. Then the occupation of each spin Landau level is just the ratio of fluxes "D" = Φ/Φ0. Zeeman splitting has a significant effect on the Landau levels because their energy scales are the same, 2"μ"B"B" = "ħω"c. However, the Fermi energy and ground state energy stay roughly the same in a system with many filled levels, since pairs of split energy levels cancel each other out when summed.
Moreover, the above derivation in the Landau gauge assumed an electron confined in the z-direction, which is a relevant experimental situation — found in two-dimensional electron gases, for instance. Still, this assumption is not essential for the results. If electrons are free to move along the z direction, the wave function acquires an additional multiplicative term exp("ikzz"); the energy corresponding to this free motion, ("ħ" "kz")2/(2"m"), is added to the E discussed. This term then fills in the separation in energy of the different Landau levels, blurring the effect of the quantization. Nevertheless, the motion in the x-y-plane, perpendicular to the magnetic field, is still quantized.
In the symmetric gauge.
Each Landau level has degenerate orbitals labeled by the quantum numbers formula_29 in symmetric gauge. The degeneracy per unit area is the same in each Landau level.
The "z" component of angular momentum is
formula_42
Exploiting the property formula_43, we choose eigenfunctions which diagonalize formula_44 and formula_45. The eigenvalue of formula_45 is denoted by formula_46, where it is clear that formula_47 in the formula_26th Landau level. However, it may be arbitrarily large, which is necessary to obtain the infinite degeneracy (or finite degeneracy per unit area) exhibited by the system.
Relativistic case.
The Dirac equation for an electron in a constant magnetic field can be solved analytically. The energies are given by
formula_48
where "c" is the speed of light, the sign depends on the particle-antiparticle component and "ν" is a non-negative integer. Due to spin, all levels are degenerate except for the ground state at "ν" = 0.
The massless 2D case can be simulated in single-layer materials like graphene near the Dirac cones, where the eigenenergies are given by
formula_49
where the speed of light has to be replaced with the Fermi speed "v"F of the material and the minus sign corresponds to electron holes.
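Assuming the standard massless Dirac form of the levels, E_nu = ±v_F * sqrt(2*e*hbar*B*nu), and a typical graphene Fermi speed of about 10^6 m/s (both are assumptions of this sketch, not values taken from the text), a short Python illustration of the square-root spacing:

import math

E_CHARGE = 1.602176634e-19    # C
HBAR     = 1.054571817e-34    # J*s
V_FERMI  = 1.0e6              # m/s, assumed typical Fermi speed of graphene

def graphene_landau_energy(nu, B):
    # electron branch of E_nu = ±v_F * sqrt(2 * e * hbar * B * nu), returned in eV
    return V_FERMI * math.sqrt(2.0 * E_CHARGE * HBAR * B * nu) / E_CHARGE

for nu in range(4):
    print(nu, graphene_landau_energy(nu, 10.0) * 1e3, "meV")
# unequal sqrt(B*nu) spacing, unlike the equally spaced non-relativistic levels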
Magnetic susceptibility of a Fermi gas.
The Fermi gas (an ensemble of non-interacting fermions) is part of the basis for understanding the thermodynamic properties of metals. In 1930 Landau derived an estimate for the magnetic susceptibility of a Fermi gas, known as the Landau susceptibility, which is constant for small magnetic fields. Landau also noticed that the susceptibility oscillates with high frequency for large magnetic fields; this physical phenomenon is known as the De Haas–Van Alphen effect.
Two-dimensional lattice.
The tight binding energy spectrum of charged particles in a two dimensional infinite lattice is known to be self-similar and fractal, as demonstrated in Hofstadter's butterfly. For an integer ratio of the magnetic flux quantum and the magnetic flux through a lattice cell, one recovers the Landau levels for large integers.
Integer quantum Hall effect.
The energy spectrum of the semiconductor in a strong magnetic field forms Landau levels that can be labeled by integer indices. The Hall resistivity also exhibits discrete levels labeled by an integer ν. The fact that these two quantities are related can be shown in different ways, but can be seen most easily from the Drude model: the Hall conductivity depends on the electron density n as
formula_50
Since the resistivity plateau is given by
formula_51
the required density is
formula_52
which is exactly the density required to fill the Landau level. The gap between different Landau levels along with large degeneracy of each level renders the resistivity quantized.
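A small Python sketch of this bookkeeping (the 10 T field is an arbitrary illustrative choice):

PLANCK   = 6.62607015e-34     # J*s
E_CHARGE = 1.602176634e-19    # C

def filling_density(nu, B):
    # sheet density n = nu * e * B / h (in m^-2) that fills nu Landau levels
    return nu * E_CHARGE * B / PLANCK

def hall_resistance(nu):
    # quantized Hall resistance h / (nu * e**2), in ohms
    return PLANCK / (nu * E_CHARGE**2)

print(filling_density(1, 10.0))   # ≈ 2.4e15 m^-2 at 10 T
print(hall_resistance(1))         # ≈ 25812.8 ohm, the von Klitzing constant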
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{B} = \\begin{pmatrix}0\\\\0\\\\B\\end{pmatrix}"
},
{
"math_id": 1,
"text": "\\hat{H} = \\frac{1}{2m} \\left(\\hat{\\mathbf{p}} - q\\hat{\\mathbf{A}}\\right)^2."
},
{
"math_id": 2,
"text": "\\hat{\\mathbf{p}}"
},
{
"math_id": 3,
"text": "\\hat{\\mathbf{A}}"
},
{
"math_id": 4,
"text": "\\mathbf{A}"
},
{
"math_id": 5,
"text": "\\hat{\\mathbf{A}} =\\mathbf{A}"
},
{
"math_id": 6,
"text": "\\mathbf{B}=\\mathbf{\\nabla}\\times \\mathbf{A}. "
},
{
"math_id": 7,
"text": "\\mathbf{B} = \\begin{pmatrix} 0 \\\\ 0 \\\\ B \\end{pmatrix}"
},
{
"math_id": 8,
"text": "\\mathbf{A} = \\begin{pmatrix} 0 \\\\ B\\cdot x \\\\ 0 \\end{pmatrix}"
},
{
"math_id": 9,
"text": "\\hat{H} = \\frac{\\hat{p}_x^2}{2m} + \\frac{1}{2m} \\left(\\hat{p}_y - qB\\hat{x}\\right)^2 + \\frac{\\hat{p}_z^2}{2m}."
},
{
"math_id": 10,
"text": "\\hat{p}_y"
},
{
"math_id": 11,
"text": "\\hat{z}"
},
{
"math_id": 12,
"text": "\\hat{H} = \\frac{\\hat{p}_x^2}{2m} + \\frac{1}{2} m \\omega_{\\rm c}^2 \\left( \\hat{x} - \\frac{\\hbar k_y}{m \\omega_{\\rm c}} \\right)^2 + \\frac{\\hat{p}_z^2}{2m}."
},
{
"math_id": 13,
"text": "E_n=\\hbar\\omega_{\\rm c}\\left(n+\\frac{1}{2}\\right) + \\frac{p_z^2}{2m},\\quad n\\geq 0~. "
},
{
"math_id": 14,
"text": "p_y"
},
{
"math_id": 15,
"text": "p_z"
},
{
"math_id": 16,
"text": "|\\phi_n\\rangle"
},
{
"math_id": 17,
"text": "\\Psi(x,y,z) = e^{i(k_y y+k_z z)} \\phi_n(x-x_0) "
},
{
"math_id": 18,
"text": "k_z = p_z / \\hbar"
},
{
"math_id": 19,
"text": "\\hat{\\mathbf{A}} =\\frac{1}{2} \\mathbf{B}\\times \\hat{\\mathbf{r}} = \\frac{1}{2}\\begin{pmatrix} -By\\\\ Bx \\\\0 \\end{pmatrix}."
},
{
"math_id": 20,
"text": "\\hat{H} = \\frac{1}{2} \\left[\\left(-i\\frac{\\partial}{\\partial x} - \\frac{y}{2}\\right)^2 + \\left(-i \\frac{\\partial}{\\partial y} + \\frac{x}{2}\\right)^2 \\right] "
},
{
"math_id": 21,
"text": " q, \\hbar, \\mathbf{B}"
},
{
"math_id": 22,
"text": " m "
},
{
"math_id": 23,
"text": "\\begin{align}\n\\hat{a} &= \\frac{1}{\\sqrt{2}} \\left[\\left(\\frac{x}{2} + \\frac{\\partial}{\\partial x}\\right) -i \\left(\\frac{y}{2} + \\frac{\\partial}{\\partial y}\\right)\\right] \\\\\n\\hat{a}^{\\dagger} &= \\frac{1}{\\sqrt{2}} \\left[\\left(\\frac{x}{2} - \\frac{\\partial}{\\partial x}\\right) +i \\left(\\frac{y}{2} - \\frac{\\partial}{\\partial y}\\right)\\right] \\\\\n\\hat{b} &= \\frac{1}{\\sqrt{2}} \\left[\\left(\\frac{x}{2} + \\frac{\\partial}{\\partial x}\\right) +i \\left(\\frac{y}{2} + \\frac{\\partial}{\\partial y}\\right)\\right] \\\\\n\\hat{b}^{\\dagger} &= \\frac{1}{\\sqrt{2}} \\left[\\left(\\frac{x}{2} - \\frac{\\partial}{\\partial x}\\right) -i \\left(\\frac{y}{2} - \\frac{\\partial}{\\partial y}\\right)\\right]\n\\end{align}"
},
{
"math_id": 24,
"text": "[\\hat{a}, \\hat{a}^{\\dagger}] = [\\hat{b},\\hat{b}^{\\dagger}] = 1."
},
{
"math_id": 25,
"text": " \\hat{H} = \\hbar\\omega_{\\rm c}\\left(\\hat{a}^{\\dagger}\\hat{a} + \\frac{1}{2}\\right),"
},
{
"math_id": 26,
"text": "n"
},
{
"math_id": 27,
"text": "\\hat{N}=\\hat{a}^{\\dagger}\\hat{a}"
},
{
"math_id": 28,
"text": "\\hat{b}^{\\dagger}"
},
{
"math_id": 29,
"text": "m_z"
},
{
"math_id": 30,
"text": "\\hat{a}^{\\dagger}"
},
{
"math_id": 31,
"text": "\\hat{H} |n,m_z\\rangle = E_n |n,m_z\\rangle, "
},
{
"math_id": 32,
"text": "E_n = \\hbar\\omega_{\\rm c}\\left(n + \\frac{1}{2}\\right)"
},
{
"math_id": 33,
"text": "|n,m_z\\rangle = \\frac{(\\hat{b}^{\\dagger})^{m_z+n}}{\\sqrt{(m_z+n)!}} \\frac{(\\hat{a}^{\\dagger})^{n}}{\\sqrt{n!}}|0,0\\rangle. "
},
{
"math_id": 34,
"text": "\\psi_{n,m_z}(x, y) = \\left( \\frac{\\partial}{\\partial w} - \\frac{\\bar{w}}{4} \\right)^n w^{n + m_z} e^{-|w|^2 / 4}"
},
{
"math_id": 35,
"text": "w = x - i y"
},
{
"math_id": 36,
"text": "n = 0"
},
{
"math_id": 37,
"text": "\\psi(x,y) = f(w) e^{-|w|^2/4}"
},
{
"math_id": 38,
"text": "k_y = \\frac{2 \\pi N}{L_y},"
},
{
"math_id": 39,
"text": "0 \\leq N < \\frac{m \\omega_{\\rm c} L_x L_y}{2\\pi\\hbar}."
},
{
"math_id": 40,
"text": "\\frac{Z B L_x L_y}{(h/e)} = Z\\frac{\\Phi}{\\Phi_0},"
},
{
"math_id": 41,
"text": "D = Z (2S+1) \\frac{\\Phi}{\\Phi_0}~,"
},
{
"math_id": 42,
"text": "\\hat{L}_z = -i \\hbar \\frac{\\partial}{\\partial \\theta} = - \\hbar (\\hat{b}^{\\dagger}\\hat{b} - \\hat{a}^{\\dagger}\\hat{a})"
},
{
"math_id": 43,
"text": "[\\hat{H}, \\hat{L}_z] = 0"
},
{
"math_id": 44,
"text": "\\hat{H}"
},
{
"math_id": 45,
"text": "\\hat{L}_z"
},
{
"math_id": 46,
"text": "- m_z \\hbar"
},
{
"math_id": 47,
"text": "m_z \\ge -n"
},
{
"math_id": 48,
"text": "E_{\\rm rel}=\\pm \\sqrt{(mc^2)^2+(c\\hbar k_z)^2+2\\nu \\hbar\\omega_{\\rm c} mc^2}"
},
{
"math_id": 49,
"text": "E_{\\rm graphene}=\\pm \\sqrt{2\\nu\\hbar eBv_{\\rm F}^2 }"
},
{
"math_id": 50,
"text": "\\rho_{xy}=\\frac{B}{n e}."
},
{
"math_id": 51,
"text": "\\rho_{xy}=\\frac{2 \\pi\\hbar }{e^2}\\frac{1}{\\nu},"
},
{
"math_id": 52,
"text": "n=\\frac{B }{\\Phi_0}\\nu,"
}
] |
https://en.wikipedia.org/wiki?curid=6252231
|
625226
|
Reversible reaction
|
Chemical reaction whose products can react together to produce the reactants again
A reversible reaction is a reaction in which the conversion of reactants to products and the conversion of products to reactants occur simultaneously.
<chem> \mathit aA{} + \mathit bB <=> \mathit cC{} + \mathit dD</chem>
A and B can react to form C and D or, in the reverse reaction, C and D can react to form A and B. This is distinct from a reversible process in thermodynamics.
Weak acids and bases undergo reversible reactions. For example, carbonic acid:
H2CO3 (l) + H2O(l) ⇌ HCO3−(aq) + H3O+(aq).
The concentrations of reactants and products in an equilibrium mixture are determined by the analytical concentrations of the reagents (A and B or C and D) and the equilibrium constant, "K". The magnitude of the equilibrium constant depends on the Gibbs free energy change for the reaction. So, when the free energy change is large (more than about 30 kJ mol−1), the equilibrium constant is large (log K > 3) and the concentrations of the reactants at equilibrium are very small. Such a reaction is sometimes considered to be an irreversible reaction, although small amounts of the reactants are still expected to be present in the reacting system. A truly irreversible chemical reaction is usually achieved when one of the products exits the reacting system, for example, as does carbon dioxide (volatile) in the reaction
CaCO3 + 2HCl → CaCl2 + H2O + CO2↑
History.
The concept of a reversible reaction was introduced by Claude Louis Berthollet in 1803, after he had observed the formation of sodium carbonate crystals at the edge of a salt lake (one of the natron lakes in Egypt, in limestone):
2NaCl + CaCO3 → Na2CO3 + CaCl2
He recognized this as the reverse of the familiar reaction
Na2CO3 + CaCl2→ 2NaCl + CaCO3
Until then, chemical reactions were thought to always proceed in one direction. Berthollet reasoned that the excess of salt in the lake helped push the "reverse" reaction towards the formation of sodium carbonate.
In 1864, Peter Waage and Cato Maximilian Guldberg formulated their law of mass action which quantified Berthollet's observation. Between 1884 and 1888, Le Chatelier and Braun formulated Le Chatelier's principle, which extended the same idea to a more general statement on the effects of factors other than concentration on the position of the equilibrium.
Reaction kinetics.
For the reversible reaction A⇌B, the forward step A→B has a rate constant formula_0 and the backward step B→A has a rate constant formula_1. The concentration of A obeys a rate equation with a loss term proportional to [A] (the forward step) and a gain term proportional to [B] (the reverse step).
Since the concentration of product B at any time equals the initial concentration of reactant minus the concentration of reactant remaining at time formula_2, the rate equation can be written in terms of [A] alone.
Combining these two observations, we can write
formula_3.
Separation of variables is possible, and using the initial value formula_4, we obtain:
formula_5
and after some algebra we arrive at the final kinetic expression:
formula_6.
The concentrations of A and B at infinite time behave as follows:
formula_7
formula_8
formula_9
formula_10
Thus, the formula can be linearized in order to determine formula_11:
formula_12
To find the individual constants formula_0 and formula_1, the following formula is required:
formula_13
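As a numerical illustration of these expressions, the following Python sketch evaluates the analytical solution for [A](t) and checks that the ratio of the long-time concentrations equals formula_13. The rate constants and initial concentration are assumed values chosen only for the example.

```python
import math

def concentration_A(A0, k1, km1, t):
    """[A](t) for A <=> B with forward rate constant k1 and reverse rate constant k-1.

    Uses [A](t) = [A]_inf + ([A]_0 - [A]_inf) * exp(-(k1 + k-1) * t),
    where [A]_inf = k-1 * [A]_0 / (k1 + k-1).
    """
    A_inf = km1 * A0 / (k1 + km1)
    return A_inf + (A0 - A_inf) * math.exp(-(k1 + km1) * t)

A0, k1, km1 = 1.0, 0.30, 0.10        # assumed values: mol/L and s^-1
for t in (0.0, 5.0, 50.0):
    A = concentration_A(A0, k1, km1, t)
    print(f"t = {t:5.1f} s   [A] = {A:.4f}   [B] = {A0 - A:.4f}")

A_inf = km1 * A0 / (k1 + km1)
print("K_eq =", k1 / km1, "  [B]_inf/[A]_inf =", (A0 - A_inf) / A_inf)   # both equal 3.0
```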
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k_1"
},
{
"math_id": 1,
"text": "k_{-1}"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "\\frac{d[A]}{dt}=-k_\\text{1}[A]+k_\\text{-1}([A]_\\text{0}-[A])"
},
{
"math_id": 4,
"text": "[A](t=0) = [A]_0"
},
{
"math_id": 5,
"text": "C=\\frac{{-\\ln}(-k_\\text{1}[A]_\\text{0})}{k_\\text{1}+k_\\text{-1}}"
},
{
"math_id": 6,
"text": "[A]=\frac{k_\text{-1}[A]_\text{0}}{k_\text{1}+k_\text{-1}}+\frac{k_\text{1}[A]_\text{0}}{k_\text{1}+k_\text{-1}}\exp(-(k_\text{1}+k_\text{-1})t)"
},
{
"math_id": 7,
"text": "[A]_\\infty=\\frac{k_\\text{-1}[A]_\\text{0}}{k_\\text{1}+k_\\text{-1}}"
},
{
"math_id": 8,
"text": "[B]_\\infty=[A]_\\text{0}-[A]_\\infty=[A]_\\text{0}-\\frac{k_\\text{-1}[A]_\\text{0}}{k_\\text{1} +k_\\text{-1}}"
},
{
"math_id": 9,
"text": "\\frac{[B]_\\infty}{[A]_\\infty}=\\frac{k_\\text{1}}{k_\\text{-1}}=K_\\text{eq}"
},
{
"math_id": 10,
"text": "[A]=[A]_\infty+([A]_\text{0}-[A]_\infty)\exp(-(k_\text{1}+k_\text{-1})t)"
},
{
"math_id": 11,
"text": "k_1+k_{-1}"
},
{
"math_id": 12,
"text": "\\ln([A]-[A]_\\infty)=\\ln([A]_\\text{0}-[A]_\\infty)-(k_\\text{1}+k_\\text{-1})t"
},
{
"math_id": 13,
"text": "K_\\text{eq}=\\frac{k_\\text{1}}{k_\\text{-1}}=\\frac{[B]_\\infty}{[A]_\\infty}"
}
] |
https://en.wikipedia.org/wiki?curid=625226
|
62525480
|
ENUBET
|
The Enhanced NeUtrino BEams from kaon Tagging or ENUBET is an ERC funded project that aims at producing an artificial neutrino beam in which the flavor, flux and energy of the produced neutrinos are known with unprecedented precision.
Interest in these types of high precision neutrino beams has grown significantly in the last ten years, especially after the start of the construction of the DUNE and Hyper-Kamiokande detectors. DUNE and Hyper-Kamiokande aim at discovering CP violation in neutrinos by observing a small difference between the probability of a muon-neutrino to oscillate into an electron-neutrino and the probability of a muon-antineutrino to oscillate into an electron-antineutrino. This effect points toward a difference in the behavior of matter and antimatter. In quantum field theory, this effect is described by a violation of the CP symmetry in particle physics.
The experiments that will measure CP violation need a very precise knowledge of the neutrino cross-sections, i.e. the probability for a neutrino to interact in the detector. This probability is measured by counting the number of interacting neutrinos and dividing by the flux of incoming neutrinos. Current neutrino cross-section experiments are limited by large uncertainties in the neutrino flux. A new generation of cross-section experiments is therefore needed to overcome these limitations with new techniques or high-precision beams, such as ENUBET.
In ENUBET, neutrinos are produced by focusing mesons in a narrow band beam towards an instrumented decay tunnel, where charged leptons produced in association with neutrinos by mesons' decay can be monitored at the single particle level. Beams like ENUBET are called monitored neutrino beams.
Mesons (essentially pions and kaons) are produced in the interactions of accelerated protons with a Beryllium or Graphite target. The proposed facility is being studied taking into account the energies of currently available proton drivers: 400 GeV (CERN SPS), 120 GeV (FNAL Main Injector), 30 GeV (J-PARC Main Ring).
Kaons and pions are momentum and charge selected in a short transfer line by means of dipole and quadrupole magnets and are focused in a collimated beam into an instrumented decay tube. Large angle muons and positrons from kaon decays (formula_0, formula_1, formula_2) are measured by detectors on the tunnel walls, while muons from pion decays (formula_3) are monitored after the hadron dump at the end of the tunnel. The decay region is kept short (40 m) in order to reduce the neutrino contamination from muon decays (formula_4).
In this way, the neutrino flux is assessed in a direct way with a precision of 1%, without relying on complex simulations of the transfer line and on hadro-production data extrapolation that currently limits the knowledge of the flux to 5-10%. The ENUBET facility can be used to perform precision studies of the neutrino cross section and of sterile neutrinos or Non-Standard Interaction models. This method can also be extended to detect other leptons in order to have a complete monitored neutrino beam.
The ENUBET project started in 2016. As of 2024, it involves 17 European institutions in 5 European countries and brings together about 80 scientists.
ENUBET studies all technical and physics challenges to demonstrate the feasibility of a monitored neutrino beam: it has built a full-scale demonstrator of the instrumented decay tunnel (3 m length and partial azimuthal coverage) and has assessed the costs and physics reach of the proposed facility.
The first end-to-end simulation of the ENUBET monitored neutrino beam was published in 2023.
The ENUBET ERC project was completed in 2022. Since March 2019, ENUBET has been part of the CERN Neutrino Platform (NP06/ENUBET) for the development of a new generation of neutrino detectors and facilities.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K^{+} \\rightarrow \\mu^+ \\nu_{\\mu}"
},
{
"math_id": 1,
"text": "K^{+} \\rightarrow \\mu^+ \\pi^0 \\nu_{\\mu}"
},
{
"math_id": 2,
"text": "K^{+} \\rightarrow e^+ \\pi^0 \\nu_{e}"
},
{
"math_id": 3,
"text": "\\pi^+ \\rightarrow \\mu^+ \\nu_{\\mu}"
},
{
"math_id": 4,
"text": "\\mu^+ \\rightarrow e^+ \\nu_{e} \\bar{\\nu}_{\\mu}"
}
] |
https://en.wikipedia.org/wiki?curid=62525480
|
6252669
|
Central composite design
|
In statistics, a central composite design is an experimental design, useful in response surface methodology, for building a second order (quadratic) model for the response variable without needing to use a complete three-level factorial experiment.
After the designed experiment is performed, linear regression is used, sometimes iteratively, to obtain results. Coded variables are often used when constructing this design.
Implementation.
The design consists of three distinct sets of experimental runs: a (possibly fractional) factorial design in the factors studied, each factor taken at two coded levels; a set of center points, that is, runs in which every factor is set to the midpoint of its factorial levels, usually replicated in order to improve the precision of the experiment; and a set of axial (or "star") points, runs identical to the center points except that one factor at a time is set to values below and above its factorial levels.
Design matrix.
The design matrix for a central composite design experiment involving "k" factors is derived from a matrix, d, containing the following three different parts corresponding to the three types of experimental runs: a matrix F of factorial points, whose rows are the combinations of −1 and +1 coded levels of the "k" factors; a matrix C of center points, consisting entirely of zeros; and a matrix E of axial points, in which +α and −α appear in turn along the diagonal:
formula_0
Then d is the vertical concatenation:
formula_1
The design matrix X used in linear regression is the horizontal concatenation of a column of 1s (intercept), d, and all elementwise products of a pair of columns of d:
formula_2
where d("i") represents the "i"th column in d.
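To make the construction concrete, the following Python sketch builds d and X for a small design. The function name, the default number of centre points, and the default (rotatable) choice of α are assumptions made for this example, not part of any standard library.

```python
import numpy as np
from itertools import product

def ccd_design(k, n_center=4, alpha=None):
    """Run matrix d and regression matrix X of a central composite design with k factors."""
    if alpha is None:
        alpha = (2.0 ** k) ** 0.25            # rotatable alpha for a full factorial portion
    F = np.array(list(product([-1.0, 1.0], repeat=k)))   # factorial runs (coded +/-1)
    C = np.zeros((n_center, k))                           # centre runs
    E = np.zeros((2 * k, k))                              # axial runs, +/-alpha per factor
    for i in range(k):
        E[2 * i, i] = alpha
        E[2 * i + 1, i] = -alpha

    d = np.vstack([F, C, E])

    # X = [1 | d | pairwise products d(i)*d(j) | squares d(i)^2]
    blocks = [np.ones((d.shape[0], 1)), d]
    blocks += [d[:, [i]] * d[:, [j]] for i in range(k) for j in range(i + 1, k)]
    blocks += [d[:, [i]] ** 2 for i in range(k)]
    return d, np.hstack(blocks)

d, X = ccd_design(k=2)
print(d.shape, X.shape)   # (12, 2) and (12, 6) for two factors
```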
Choosing α.
There are many different methods to select a useful value of α. Let "F" be the number of points due to the factorial design and "T" = 2"k" + "n", the number of additional points, where "n" is the number of central points in the design. Common values are as follows (Myers, 1971): to make the design orthogonal, take formula_3, where formula_4; to make the design rotatable, take α equal to the fourth root of "F".
Application of central composite designs for optimization.
Statistical approaches such as Response Surface Methodology can be employed to maximize the production of a special substance by optimization of operational factors. In contrast to conventional methods, the interaction among process variables can be determined by statistical techniques. For instance, in a study, a central composite design was employed to investigate the effect of critical parameters of organosolv pretreatment of rice straw including temperature, time, and ethanol concentration. The residual solid, lignin recovery, and hydrogen yield were selected as the response variables.
References.
<templatestyles src="Reflist/styles.css" />
Myers, Raymond H. "Response Surface Methodology". Boston: Allyn and Bacon, Inc., 1971
|
[
{
"math_id": 0,
"text": " \\mathbf E = \\begin{bmatrix}\n \\alpha & 0 & 0 & \\cdots & \\cdots & \\cdots & 0 \\\\\n { - \\alpha } & 0 & 0 & \\cdots & \\cdots & \\cdots & 0 \\\\\n 0 & \\alpha & 0 & \\cdots & \\cdots & \\cdots & 0 \\\\\n 0 & { - \\alpha } & 0 & \\cdots & \\cdots & \\cdots & 0 \\\\\n \\vdots & {} & {} & {} & {} & {} & \\vdots \\\\\n 0 & 0 & 0 & 0 & \\cdots & \\cdots & \\alpha \\\\\n 0 & 0 & 0 & 0 & \\cdots & \\cdots & { - \\alpha } \\\\\n\\end{bmatrix}. "
},
{
"math_id": 1,
"text": " \\mathbf d = \\begin{bmatrix} \\mathbf F \\\\ \\mathbf C \\\\ \\mathbf E \n \\end{bmatrix}. "
},
{
"math_id": 2,
"text": "\\mathbf X = \\begin{bmatrix} \\mathbf 1 & \\mathbf d & \\mathbf d(1)\\times\\mathbf d(2) & \\mathbf d(1)\\times\\mathbf d(3) & \\cdots & \\mathbf d(k-1)\\times\\mathbf d(k) & \\mathbf d(1)^2 &\\mathbf d(2)^2 &\\cdots & \\mathbf d(k)^2 \\end{bmatrix}, "
},
{
"math_id": 3,
"text": "\\alpha = (Q\\times F/4)^{1/4}\\,\\!"
},
{
"math_id": 4,
"text": " Q = (\\sqrt{F + T} -\\sqrt{F})^2 "
}
] |
https://en.wikipedia.org/wiki?curid=6252669
|
62534769
|
Commelec
|
Commelec is a framework that provides distributed and real-time control of electrical grids by using explicit setpoints for active/reactive power absorptions/injections. It is based on the joint-operation of communication and electricity systems. Commelec has been developed by scientists at École Polytechnique Fédérale de Lausanne, a research institute and university in Lausanne, Switzerland. The Commelec project is part of the SNSF’s National Research Programme “Energy Turnaround” (NRP 70).
Motivation.
Due to the penetration of a large amount of distributed generation, modern power systems are facing numerous challenges such as the absence of inertia, stochastic power generation, grid stress and stability issues. This could lead to problems related to power balance, power quality, voltage and frequency control, system economics and load dispatch. The conventional distribution grid was not designed to support the distributed generation of electricity. Therefore, the Commelec framework was developed in order to guarantee proper grid operation under these challenges without major grid reinforcements. It can provide both primary frequency control and secondary voltage control, and is also capable of operating in islanded mode. In contrast to conventional droop control, it keeps the equilibrium point without using the frequency as the main indicator of power imbalance.
Principle of Operation.
Commelec is an agent-based framework. The grid agent (GA) is a piece of software that is running on an embedded computer attached somewhere in the grid. It monitors the state of the grid through the measurement system and orchestrates different resources by speaking to resource agents (RAs) that are usually collocated on the inverters of the resources. While GAs are smart and take part in computing decision actions, RAs are simple-minded, merely requested to send information about their internal state in a specified and universal format.
Device-independent Protocol for Message Exchange.
Every 100 ms, each RA sends a device-independent representation of its internal state to the GA. On receiving this information from the RAs through a communication network (e.g. the internet), the GA solves a robust multi-objective optimization problem (taking into account the constraints of the grid), takes local decisions and implements them. Decisions can be corrected after new advertisements are received from the RAs. The information that the GA receives from an RA has a purely mathematical, abstract description. It consists of a "PQ profile", a set formula_0 of power setpoints formula_1 that the resource can implement; a "cost function" formula_2, expressing how much the resource prefers each admissible setpoint; and a "belief function" formula_3, which maps a requested setpoint formula_1 to the set of setpoints formula_4 that the resource might actually implement, i.e. formula_5.
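A minimal sketch of such an advertisement is given below in Python. The class and field names are hypothetical (the actual Commelec message encoding is not described here); the sketch only illustrates that an RA exposes a region of admissible setpoints, a cost function, and a belief function.

```python
from dataclasses import dataclass
from typing import Callable, Set, Tuple

Setpoint = Tuple[float, float]   # (P, Q): active and reactive power

@dataclass
class Advertisement:
    """Hypothetical representation of the message an RA sends to its GA."""
    in_pq_profile: Callable[[Setpoint], bool]       # membership test for the region A
    cost: Callable[[Setpoint], float]               # cost function CF: A -> R
    belief: Callable[[Setpoint], Set[Setpoint]]     # belief function BF: A -> 2^(R^2)

# Example: a battery-like resource limited to |P| <= 5 kW and |Q| <= 3 kvar
ad = Advertisement(
    in_pq_profile=lambda pq: abs(pq[0]) <= 5e3 and abs(pq[1]) <= 3e3,
    cost=lambda pq: pq[0] ** 2,        # prefers setpoints with small active power
    belief=lambda pq: {pq},            # an ideal device implements exactly what is asked
)
print(ad.in_pq_profile((2e3, 1e3)), ad.cost((2e3, 1e3)))   # True 4000000.0
```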
Composability.
The power network can be organized in a "flat setting", where a single GA controls the whole grid and leads all the RAs, and in a "hierarchical setting", in which a GA can lead not only RAs but also GAs at a lower hierarchy level. The composability property that Commelec provides enables the aggregation of several resources that a GA controls into a single entity (i.e. a virtual resource), which can be further controlled by a GA at a higher hierarchy level. Such a virtual resource uses the same language to advertise its internal state to its leading GA, which makes the control problem scalable.
Experimental Validation.
The performance of the Commelec control framework is evaluated through a case study composed of a replica of CIGRÉ's low-voltage microgrid benchmark TF C6.04.02. This microgrid, built at EPFL, consists of different types of resources such as photovoltaic plants, battery energy storage systems and electric heaters. For real-time monitoring, phasor measurement units (PMUs) are used.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\cal A \\subseteq \\mathbb{R}^2"
},
{
"math_id": 1,
"text": "(P,Q)"
},
{
"math_id": 2,
"text": "CF: \\cal{A}\\rightarrow\\mathbb{R}"
},
{
"math_id": 3,
"text": "BF:\\cal{A}\\rightarrow 2^{\\mathbb{R}^2}"
},
{
"math_id": 4,
"text": "(P',Q')"
},
{
"math_id": 5,
"text": "(P',Q')\\in BF(P,Q)\\subseteq\\mathbb{R}^2"
}
] |
https://en.wikipedia.org/wiki?curid=62534769
|
62539802
|
Rotation distance
|
In discrete mathematics and theoretical computer science, the rotation distance between two binary trees with the same number of nodes is the minimum number of tree rotations needed to reconfigure one tree into another. Because of a combinatorial equivalence between binary trees and triangulations of convex polygons, rotation distance is equivalent to the flip distance for triangulations of convex polygons.
Rotation distance was first defined by Karel Čulík II and Derick Wood in 1982. Every two n-node binary trees have rotation distance at most 2"n" − 6, and some pairs of trees have exactly this distance. The computational complexity of computing the rotation distance is unknown.
Definition.
A binary tree is a structure consisting of a set of nodes, one of which is designated as the root node, in which each remaining node is either the "left child" or "right child" of some other node, its "parent", and in which following the parent links from any node eventually leads to the root node.
For any node x in the tree, there is a "subtree" of the same form, rooted at x and consisting of all the nodes that can reach x by following parent links. Each binary tree has a left-to-right ordering of its nodes, its inorder traversal, obtained by recursively traversing the left subtree (the subtree at the left child of the root, if such a child exists), then listing the root itself, and then recursively traversing the right subtree.
In a binary search tree, each node is associated with a search key, and the left-to-right ordering is required to be consistent with the order of the keys.
A tree rotation is an operation that changes the structure of a binary tree without changing its left-to-right ordering. Several self-balancing binary search tree data structures use these rotations as a primitive operation in their rebalancing algorithms. A rotation operates on two nodes x and y, where x is the parent of y, and restructures the tree by making y be the parent of x and taking the place of x in the tree. To free up one of the child links of y and make room to link x as a child of y, this operation may also need to move one of the children of y to become a child of x.
There are two variations of this operation, a "right rotation" in which y begins as the left child of x and x ends as the right child of y, and a "left rotation" in which y begins as the right child of x and x ends as the left child of y.
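The two rotation variants can be written in a few lines of code. The sketch below is an illustrative Python implementation with names of our own choosing; it checks that a rotation preserves the inorder (left-to-right) sequence of nodes.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(x):
    """y, the left child of x, becomes the parent; x becomes y's right child.

    y's former right subtree is reattached as x's left subtree, so the
    inorder sequence of the nodes is unchanged. Returns the new subtree root.
    """
    y = x.left
    x.left = y.right
    y.right = x
    return y

def rotate_left(y):
    """The inverse operation of rotate_right."""
    x = y.right
    y.right = x.left
    x.left = y
    return x

def inorder(t):
    return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)

t = Node(2, Node(1), Node(3))
print(inorder(t), inorder(rotate_right(t)))   # [1, 2, 3] [1, 2, 3]
```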
Any two trees that have the same left-to-right sequence of nodes may be transformed into each other by a sequence of rotations. The rotation distance between the two trees is the number of rotations in the shortest possible sequence of rotations that performs this transformation. It can also be described as the shortest path distance in a "rotation graph", a graph that has a vertex for each binary tree on a given left-to-right sequence of nodes and an edge for each rotation between two trees. This rotation graph is exactly the graph of vertices and edges of an associahedron.
Equivalence to flip distance.
Given a family of triangulations of some geometric object, a "flip" is an operation that transforms one triangulation to another by removing an edge between two triangles and adding the opposite diagonal to the resulting quadrilateral. The flip distance between two triangulations is the minimum number of flips needed to transform one triangulation into another. It can also be described as the shortest path distance in a "flip graph", a graph that has a vertex for each triangulation and an edge for each flip between two triangulations. Flips and flip distances can be defined in this way for several different kinds of triangulations, including triangulations of sets of points in the Euclidean plane, triangulations of polygons, and triangulations of abstract manifolds.
There is a one-to-one correspondence between triangulations of a given convex polygon, with a designated root edge, and binary trees, taking triangulations of n-sided polygons into binary trees with "n" − 2 nodes. In this correspondence, each triangle of a triangulation corresponds to a node in a binary tree. The root node is the triangle having the designated root edge as one of its sides, and two nodes are linked as parent and child in the tree when the corresponding triangles share a diagonal in the triangulation.
Under this correspondence, rotations in binary trees correspond exactly to flips in the corresponding triangulations. Therefore, the rotation distance on ("n" − 2)-node trees corresponds exactly to flip distance on triangulations of n-sided convex polygons.
Maximum value.
The "right spine" of a binary tree is defined to be the path obtained by starting from the root and following right child links until reaching a node that has no right child. If a tree has the property that not all nodes belong to the right spine, there always exists a right rotation that increases the length of the right spine. For, in this case, there exists at least one node x on the right spine that has a left child y that is not on the right spine. Performing a right rotation on x and y adds y to the right spine without removing any other node from it. By repeatedly increasing the length of the right spine, any n-node tree can be transformed into the unique tree with the same node order in which all nodes belong to the right spine, in at most "n" − 1 steps. Given any two trees with the same node order, one can transform one into the other by transforming the first tree into a tree with all nodes on the right spine, and then reversing the same transformation of the second tree, in a total of at most 2"n" − 2 steps. Therefore, the rotation distance between any two trees is at most 2"n" − 2.
By considering the problem in terms of flips of convex polygons instead of rotations of trees, Sleator, Tarjan, and Thurston were able to show that the rotation distance is at most 2"n" − 6. In terms of triangulations of convex polygons, the right spine is the sequence of triangles incident to the right endpoint of the root edge, and the tree in which all vertices lie on the spine corresponds to a fan triangulation for this vertex. The main idea of their improvement is to try flipping both given triangulations to a fan triangulation for any vertex, rather than only the one for the right endpoint of the root edge. It is not possible for all of these choices to simultaneously give the worst-case distance "n" − 1 from each starting triangulation, giving the improvement.
The same authors also used a geometric argument to show that, for infinitely many values of n, the maximum rotation distance is exactly 2"n" − 6. They again use the interpretation of the problem in terms of flips of triangulations of convex polygons, and they interpret the starting and ending triangulation as the top and bottom faces of a convex polyhedron with the convex polygon itself interpreted as a Hamiltonian circuit in this polyhedron. Under this interpretation, a sequence of flips from one triangulation to the other can be translated into a collection of tetrahedra that triangulate the given three-dimensional polyhedron. They find a family of polyhedra with the property that (in three-dimensional hyperbolic geometry) the polyhedra have large volume, but all tetrahedra inside them have much smaller volume, implying that many tetrahedra are needed in any triangulation. The binary trees obtained from translating the top and bottom sets of faces of these polyhedra back into trees have high rotation distance, at least 2"n" − 6.
Subsequently, Pournin provided a proof that for all "n" ≥ 11, the maximum rotation distance is exactly 2"n" − 6. Pournin's proof is combinatorial, and avoids the use of hyperbolic geometry.
Computational complexity.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
What is the complexity of computing the rotation distance between two trees?
As well as defining rotation distance, Čulík and Wood asked for the computational complexity of computing the rotation distance between two given trees. The existence of short rotation sequences between any two trees implies that testing whether the rotation distance is at most k belongs to the complexity class NP, but it is not known to be NP-complete, nor is it known to be solvable in polynomial time.
The rotation distance between any two trees can be lower bounded, in the equivalent view of polygon triangulations, by the number of diagonals that need to be removed from one triangulation and replaced by other diagonals to produce the other triangulation. It can also be upper bounded by twice this number,
by partitioning the problem into subproblems along any diagonals shared between both triangulations and then applying the method described above to each subproblem. This method provides an approximation algorithm for the problem with an approximation ratio of two. A similar approach of partitioning into subproblems along shared diagonals leads to a fixed-parameter tractable algorithm for computing the rotation distance exactly.
Determining the complexity of computing the rotation distance exactly without parameterization remains unsolved, and the best algorithms currently known for the problem run in exponential time.
Variants.
Though the complexity of rotation distance is unknown, there exist several variants of the problem for which the rotation distance can be computed in polynomial time.
In abstract algebra, each element in Thompson's group F has a presentation using two generators. Finding the minimum length of such a presentation is equivalent to finding the rotation distance between two binary trees with only rotations on the root node and its right child allowed. Fordham's algorithm computes the rotation distance under this restriction in linear time. The algorithm classifies tree nodes into 7 types and uses a lookup table to find the number of rotations required to transform a node of one type into another. The sum of the costs of all transformations is the rotation distance.
In two additional variants, one only allows rotations such that the pivot of the rotation is a non-leaf child of the root and the other child of the root is a leaf, while the other only allows rotations on right-arm nodes (nodes that are on the path from the root to its rightmost leaf). Both variants result in a meet semi-lattice, whose structure is exploited to derive a formula_0 algorithm.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O(n^2)"
}
] |
https://en.wikipedia.org/wiki?curid=62539802
|
62541247
|
Incomplete Bessel functions
|
In mathematics, the incomplete Bessel functions are types of special functions which act as extensions of the complete Bessel functions.
Definition.
The incomplete Bessel functions are defined by the same delay differential equations as the complete-type Bessel functions:
formula_0
formula_1
formula_2
formula_3
formula_4
formula_5
and by the following suitably extended forms of the delay differential equations of the complete-type Bessel functions:
formula_6
formula_7
formula_8
formula_9
formula_10
formula_11
Here the new parameter formula_12 defines the integral bound of the upper-incomplete form and the lower-incomplete form of the modified Bessel function of the second kind:
formula_13
formula_14
formula_15
formula_16
formula_17 for integer formula_18
formula_19
formula_20
formula_21
formula_22
formula_23 for non-integer formula_18
formula_24
formula_25
formula_26
formula_27
formula_28 for non-integer formula_18
formula_29 for non-integer formula_18
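For numerical work, the defining integrals formula_13 and formula_14 can be evaluated directly by quadrature. The Python sketch below (using SciPy, with function names of our own choosing) also checks that the lower- and upper-incomplete pieces add up to the complete modified Bessel function of the second kind, since the two integration ranges together cover the whole interval from 0 to ∞.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

T_MAX = 50.0   # practical stand-in for infinity: exp(-z*cosh t) is utterly
               # negligible beyond this for any z that is not extremely small

def _integrand(t, v, z):
    return np.exp(-z * np.cosh(t)) * np.cosh(v * t)

def K_incomplete(v, z, w):
    """Upper-incomplete form: integral from w to infinity of exp(-z cosh t) cosh(v t) dt."""
    return quad(_integrand, w, T_MAX, args=(v, z))[0]

def J_incomplete(v, z, w):
    """Lower-incomplete companion: integral from 0 to w of exp(-z cosh t) cosh(v t) dt."""
    return quad(_integrand, 0.0, w, args=(v, z))[0]

v, z, w = 0.5, 2.0, 1.0
print(J_incomplete(v, z, w) + K_incomplete(v, z, w), kv(v, z))   # both ~0.1199
```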
Differential equations.
formula_30 satisfies the inhomogeneous Bessel's differential equation
formula_31
All of formula_32, formula_33, formula_34 and formula_35 satisfy the partial differential equation
formula_36
Both formula_37 and formula_30 satisfy the partial differential equation
formula_38
Integral representations.
Based on the preliminary definitions above, one can directly derive the following integral forms of formula_32, formula_33:
formula_39
formula_40
With the Mehler–Sonine integral expressions of formula_41 and formula_42 mentioned in Digital Library of Mathematical Functions,
we can further simplify these expressions to formula_43 and formula_44, but the result is of limited use, since the range of convergence is greatly reduced to formula_45.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "J_{v-1}(z,w)-J_{v+1}(z,w)=2\\dfrac{\\partial}{\\partial z}J_v(z,w)"
},
{
"math_id": 1,
"text": "Y_{v-1}(z,w)-Y_{v+1}(z,w)=2\\dfrac{\\partial}{\\partial z}Y_v(z,w)"
},
{
"math_id": 2,
"text": "I_{v-1}(z,w)+I_{v+1}(z,w)=2\\dfrac{\\partial}{\\partial z}I_v(z,w)"
},
{
"math_id": 3,
"text": "K_{v-1}(z,w)+K_{v+1}(z,w)=-2\\dfrac{\\partial}{\\partial z}K_v(z,w)"
},
{
"math_id": 4,
"text": "H_{v-1}^{(1)}(z,w)-H_{v+1}^{(1)}(z,w)=2\\dfrac{\\partial}{\\partial z}H_v^{(1)}(z,w)"
},
{
"math_id": 5,
"text": "H_{v-1}^{(2)}(z,w)-H_{v+1}^{(2)}(z,w)=2\\dfrac{\\partial}{\\partial z}H_v^{(2)}(z,w)"
},
{
"math_id": 6,
"text": "J_{v-1}(z,w)+J_{v+1}(z,w)=\\dfrac{2v}{z}J_v(z,w)-\\dfrac{2\\tanh vw}{z}\\dfrac{\\partial}{\\partial w}J_v(z,w)"
},
{
"math_id": 7,
"text": "Y_{v-1}(z,w)+Y_{v+1}(z,w)=\\dfrac{2v}{z}Y_v(z,w)-\\dfrac{2\\tanh vw}{z}\\dfrac{\\partial}{\\partial w}Y_v(z,w)"
},
{
"math_id": 8,
"text": "I_{v-1}(z,w)-I_{v+1}(z,w)=\\dfrac{2v}{z}I_v(z,w)-\\dfrac{2\\tanh vw}{z}\\dfrac{\\partial}{\\partial w}I_v(z,w)"
},
{
"math_id": 9,
"text": "K_{v-1}(z,w)-K_{v+1}(z,w)=-\\dfrac{2v}{z}K_v(z,w)+\\dfrac{2\\tanh vw}{z}\\dfrac{\\partial}{\\partial w}K_v(z,w)"
},
{
"math_id": 10,
"text": "H_{v-1}^{(1)}(z,w)+H_{v+1}^{(1)}(z,w)=\\dfrac{2v}{z}H_v^{(1)}(z,w)-\\dfrac{2\\tanh vw}{z}\\dfrac{\\partial}{\\partial w}H_v^{(1)}(z,w)"
},
{
"math_id": 11,
"text": "H_{v-1}^{(2)}(z,w)+H_{v+1}^{(2)}(z,w)=\\dfrac{2v}{z}H_v^{(2)}(z,w)-\\dfrac{2\\tanh vw}{z}\\dfrac{\\partial}{\\partial w}H_v^{(2)}(z,w)"
},
{
"math_id": 12,
"text": "w"
},
{
"math_id": 13,
"text": "K_v(z,w)=\\int_w^\\infty e^{-z\\cosh t}\\cosh vt~dt"
},
{
"math_id": 14,
"text": "J_v(z,w)=\\int_0^we^{-z\\cosh t}\\cosh vt~dt"
},
{
"math_id": 15,
"text": "J_v(z,w)=J_v(z)+\\dfrac{e^\\frac{v\\pi i}{2}J(iz,v,w)-e^{-\\frac{v\\pi i}{2}}J(-iz,v,w)}{i\\pi}"
},
{
"math_id": 16,
"text": "Y_v(z,w)=Y_v(z)+\\dfrac{e^\\frac{v\\pi i}{2}J(iz,v,w)+e^{-\\frac{v\\pi i}{2}}J(-iz,v,w)}{\\pi}"
},
{
"math_id": 17,
"text": "I_{-v}(z,w)=I_v(z,w)"
},
{
"math_id": 18,
"text": "v"
},
{
"math_id": 19,
"text": "I_{-v}(z,w)-I_v(z,w)=I_{-v}(z)-I_v(z)-\\dfrac{2\\sin v\\pi}{\\pi}J(z,v,w)"
},
{
"math_id": 20,
"text": "I_v(z,w)=I_v(z)+\\dfrac{J(-z,v,w)-e^{-v\\pi i}J(z,v,w)}{i\\pi}"
},
{
"math_id": 21,
"text": "I_v(z,w)=e^{-\\frac{v\\pi i}{2}}J_v(iz,w)"
},
{
"math_id": 22,
"text": "K_{-v}(z,w)=K_v(z,w)"
},
{
"math_id": 23,
"text": "K_v(z,w)=\\dfrac{\\pi}{2}\\dfrac{I_{-v}(z,w)-I_v(z,w)}{\\sin v\\pi}"
},
{
"math_id": 24,
"text": "H_v^{(1)}(z,w)=J_v(z,w)+iY_v(z,w)"
},
{
"math_id": 25,
"text": "H_v^{(2)}(z,w)=J_v(z,w)-iY_v(z,w)"
},
{
"math_id": 26,
"text": "H_{-v}^{(1)}(z,w)=e^{v\\pi i}H_v^{(1)}(z,w)"
},
{
"math_id": 27,
"text": "H_{-v}^{(2)}(z,w)=e^{-v\\pi i}H_v^{(2)}(z,w)"
},
{
"math_id": 28,
"text": "H_v^{(1)}(z,w)=\\dfrac{J_{-v}(z,w)-e^{-v\\pi i}J_v(z,w)}{i\\sin v\\pi}=\\dfrac{Y_{-v}(z,w)-e^{-v\\pi i}Y_v(z,w)}{\\sin v\\pi}"
},
{
"math_id": 29,
"text": "H_v^{(2)}(z,w)=\\dfrac{e^{v\\pi i}J_v(z,w)-J_{-v}(z,w)}{i\\sin v\\pi}=\\dfrac{Y_{-v}(z,w)-e^{v\\pi i}Y_v(z,w)}{\\sin v\\pi}"
},
{
"math_id": 30,
"text": "K_v(z,w)"
},
{
"math_id": 31,
"text": "z^2\\dfrac{d^2y}{dz^2}+z\\dfrac{dy}{dz}-(x^2+v^2)y=(v\\sinh vw+z\\cosh vw\\sinh w)e^{-z\\cosh w}"
},
{
"math_id": 32,
"text": "J_v(z,w)"
},
{
"math_id": 33,
"text": "Y_v(z,w)"
},
{
"math_id": 34,
"text": "H_v^{(1)}(z,w)"
},
{
"math_id": 35,
"text": "H_v^{(2)}(z,w)"
},
{
"math_id": 36,
"text": "z^2\\dfrac{\\partial^2y}{\\partial z^2}+z\\dfrac{\\partial y}{\\partial z}+(z^2-v^2)y-\\dfrac{\\partial^2y}{\\partial w^2}+2v\\tanh vw\\dfrac{\\partial y}{\\partial w}=0"
},
{
"math_id": 37,
"text": "I_v(z,w)"
},
{
"math_id": 38,
"text": "z^2\\dfrac{\\partial^2y}{\\partial z^2}+z\\dfrac{\\partial y}{\\partial z}-(z^2+v^2)y-\\dfrac{\\partial^2y}{\\partial w^2}+2v\\tanh vw\\dfrac{\\partial y}{\\partial w}=0"
},
{
"math_id": 39,
"text": "\\begin{align}\nJ_v(z,w)&=J_v(z)+\\dfrac{1}{\\pi i}\\left(\\int_0^we^{\\frac{v\\pi i}{2}-iz\\cosh t}\\cosh vt~dt-\\int_0^we^{iz\\cosh t-\\frac{v\\pi i}{2}}\\cosh vt~dt\\right)\n\\\\&=J_v(z)+\\dfrac{1}{\\pi i}\\left(\\int_0^w\\cos\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt-i\\int_0^w\\sin\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt\\right.\\\\\n&\\quad\\quad\\quad\\quad\\quad\\quad\\left.-\\int_0^w\\cos\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt-i\\int_0^w\\sin\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt\\right)\n\\\\&=J_v(z)+\\dfrac{1}{\\pi i}\\left(-2i\\int_0^w\\sin\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt\\right)\n\\\\&=J_v(z)-\\dfrac{2}{\\pi}\\int_0^w\\sin\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt\\end{align}"
},
{
"math_id": 40,
"text": "\\begin{align}\nY_v(z,w)&=Y_v(z)+\\dfrac{1}{\\pi}\\left(\\int_0^we^{\\frac{v\\pi i}{2}-iz\\cosh t}\\cosh vt~dt+\\int_0^we^{iz\\cosh t-\\frac{v\\pi i}{2}}\\cosh vt~dt\\right)\n\\\\&=Y_v(z)+\\dfrac{1}{\\pi}\\left(\\int_0^w\\cos\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt-i\\int_0^w\\sin\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt\\right.\\\\\n&\\quad\\quad\\quad\\quad\\quad\\quad\\left.+\\int_0^w\\cos\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt+i\\int_0^w\\sin\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt\\right)\n\\\\&=Y_v(z)+\\dfrac{2}{\\pi}\\int_0^w\\cos\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt\\end{align}"
},
{
"math_id": 41,
"text": "J_v(z)=\\dfrac{2}{\\pi}\\int_0^\\infty\\sin\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt"
},
{
"math_id": 42,
"text": "Y_v(z)=-\\dfrac{2}{\\pi}\\int_0^\\infty\\cos\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt"
},
{
"math_id": 43,
"text": "J_v(z,w)=\\dfrac{2}{\\pi}\\int_w^\\infty\\sin\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt"
},
{
"math_id": 44,
"text": "Y_v(z,w)=-\\dfrac{2}{\\pi}\\int_w^\\infty\\cos\\left(z\\cosh t-\\dfrac{v\\pi}{2}\\right)\\cosh vt~dt"
},
{
"math_id": 45,
"text": "|v|<1"
}
] |
https://en.wikipedia.org/wiki?curid=62541247
|
625425
|
Q Score
|
In marketing, a way to measure the familiarity of an item
The Q Score (popularly known as Q-Rating) is a measurement of the familiarity and appeal of a brand, celebrity, company, or entertainment product (e.g., television show) used in the United States. The more highly regarded the item or person is, the higher the Q Score among those who are aware of the subject. Q Scores and other variants are primarily used by the advertising, marketing, media, and public relations industries.
Usage.
The Q Score is a metric that determines a "quotient" ("Q") factor through mail and online panelists who make up representative samples of the population. The score identifies the familiarity of an athlete, brand, celebrity, poet, entertainment offering (e.g., television show), or licensed property, and measures the appeal of each among people familiar with the entity being measured. Other popular synonyms include Q rating, Q factor, and simply Q.
The Q Score was developed in 1963 by Jack Landis and is owned by Marketing Evaluations, Inc, the company he founded in 1964. Q Scores are calculated for the population as a whole as well as by demographic groups such as age, education level, gender, income, or marital status.
Q Score respondents are given choices for each person or item being surveyed: A. One of my favorites. B. Very Good C. Good D. Fair E. Poor F. Never heard of
The "positive" Q Score is calculated by dividing the number of respondents who answered A by the number of respondents answering A through E, and expressing the result as a percentage (that is, multiplying the fraction by 100). Put another way, formula_0
Similarly, the "negative" Q Score is calculated by calculating the percentage of respondents who answered D or E relative to respondents who answered A to E.
formula_1
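The arithmetic can be illustrated with a short Python sketch; the tallies below are made up solely to show the calculation, and the function name is ours.

```python
def q_scores(counts):
    """Positive and negative Q Scores from a tally of answers A-F ('F' = never heard of)."""
    known = sum(counts[k] for k in "ABCDE")          # respondents familiar with the subject
    q_pos = 100.0 * counts["A"] / known              # share answering "one of my favorites"
    q_neg = 100.0 * (counts["D"] + counts["E"]) / known
    return q_pos, q_neg

example = {"A": 120, "B": 200, "C": 180, "D": 60, "E": 40, "F": 400}
print(q_scores(example))   # (20.0, 16.666...)
```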
Other companies have created alternative measures and metrics related to the likability, popularity, and appeal of athletes, brands, celebrities, entertainment offerings, or licensed properties. Marketing Evaluations claims the Q Score is more valuable to marketers than other popularity measurements, such as the Nielsen ratings, because Q Scores indicate not only how many people are "aware of" or "watch a show" but also how those people "feel about" the entity being measured. A well-liked television show, for example, may be worth more as a commercial vehicle to an advertiser than a higher-rated show that people don’t like as much. Emotional bonding with a show means stronger viewer involvement and audience attention, which are very desirable to sponsors. Viewers who regard the show as a "favorite" have higher awareness of the show's commercial content.
Forms.
Marketing Evaluations regularly calculates Q Scores in eight categories. Among these, Cable Q and TVQ scores are calculated for all regularly scheduled broadcast and cable shows.
Other Q Scores are calculated to order for clients who want to research public perception of a brand or celebrity. For example, in 2000, IBM hired Marketing Evaluations to calculate the Q Score for Deep Blue, the supercomputer that defeated chess Grandmaster Garry Kasparov. Deep Blue’s Q Score was 9, meaning the computer was as familiar and appealing at the time as Carmen Electra, Howard Stern, and Bruce Wayne. In contrast, Albert Einstein’s Q Score at the time was 56, while Larry Ellison and Scott McNealy each received a Q Score of 6.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Q_+ = \\frac{\\text{favorites}}{\\text{known}} \\times 100"
},
{
"math_id": 1,
"text": "Q_- = \\frac{\\text{disliked}}{\\text{known}} \\times 100"
}
] |
https://en.wikipedia.org/wiki?curid=625425
|
62544
|
Shannon–Fano coding
|
Data compression algorithms
In the field of data compression, Shannon–Fano coding, named after Claude Shannon and Robert Fano, is one of two related techniques for constructing a prefix code based on a set of symbols and their probabilities (estimated or measured).
Shannon–Fano codes are suboptimal in the sense that they do not always achieve the lowest possible expected codeword length, as Huffman coding does. However, Shannon–Fano codes have an expected codeword length within 1 bit of optimal. Fano's method usually produces encoding with shorter expected lengths than Shannon's method. However, Shannon's method is easier to analyse theoretically.
Shannon–Fano coding should not be confused with Shannon–Fano–Elias coding (also known as Elias coding), the precursor to arithmetic coding.
Naming.
Regarding the confusion in the two different codes being referred to by the same name, Krajči et al. write:
Around 1948, both Claude E. Shannon (1948) and Robert M. Fano (1949) independently proposed two different source coding algorithms for an efficient description of a discrete memoryless source. Unfortunately, in spite of being different, both schemes became known under the same name "Shannon–Fano coding".
There are several reasons for this mixup. For one thing, in the discussion of his coding scheme, Shannon mentions Fano’s scheme and calls it “substantially the same” (Shannon, 1948, p. 17 [reprint]). For another, both Shannon’s and Fano’s coding schemes are similar in the sense that they both are efficient, but "suboptimal" prefix-free coding schemes with a similar performance.
Shannon's (1948) method, using predefined word lengths, is called Shannon–Fano coding by Cover and Thomas, Goldie and Pinch, Jones and Jones, and Han and Kobayashi. It is called Shannon coding by Yeung.
Fano's (1949) method, using binary division of probabilities, is called Shannon–Fano coding by Salomon and Gupta. It is called Fano coding by Krajči et al.
Shannon's code: predefined word lengths.
Shannon's algorithm.
Shannon's method starts by deciding on the lengths of all the codewords, then picks a prefix code with those word lengths.
Given a source with probabilities formula_2 the desired codeword lengths are formula_3. Here, formula_4 is the ceiling function, meaning the smallest integer greater than or equal to formula_5.
Once the codeword lengths have been determined, we must choose the codewords themselves. One method is to pick codewords in order from most probable to least probable symbols, picking each codeword to be the lexicographically first word of the correct length that maintains the prefix-free property.
A second method makes use of cumulative probabilities. First, the probabilities are written in decreasing order formula_6. Then, the cumulative probabilities are defined as
formula_7
so formula_8 and so on.
The codeword for symbol formula_0 is chosen to be the first formula_9 binary digits in the binary expansion of formula_10.
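The cumulative-probability method translates directly into code. The following Python sketch (function name ours) reproduces the construction for a list of probabilities sorted in decreasing order; its output matches the cumulative-probability codewords listed in the example below.

```python
import math

def shannon_code(probs):
    """Shannon codewords for probabilities sorted in decreasing order.

    Symbol i gets the first ceil(-log2 p_i) bits of the binary expansion of
    c_i = p_1 + ... + p_{i-1}.
    """
    codes, c = [], 0.0
    for p in probs:
        length = math.ceil(-math.log2(p))
        bits, frac = [], c
        for _ in range(length):          # first `length` bits of the expansion of c
            frac *= 2
            bits.append(str(int(frac)))
            frac -= int(frac)
        codes.append("".join(bits))
        c += p
    return codes

freqs = [15, 7, 6, 6, 5]                    # frequencies of the example below
print(shannon_code([f / sum(freqs) for f in freqs]))
# ['00', '011', '100', '101', '110']
```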
Example.
This example shows the construction of a Shannon–Fano code for a small alphabet. There are 5 different source symbols. Suppose 39 total symbols have been observed with the following frequencies, from which we can estimate the symbol probabilities: A appears 15 times, B 7 times, C 6 times, D 6 times, and E 5 times.
This source has entropy formula_11 bits.
For the Shannon–Fano code, we need to calculate the desired word lengths formula_3.
We can pick codewords in order, choosing the lexicographically first word of the correct length that maintains the prefix-free property. Clearly A gets the codeword 00. To maintain the prefix-free property, B's codeword may not start 00, so the lexicographically first available word of length 3 is 010. Continuing like this, we get the code A = 00, B = 010, C = 011, D = 100, E = 101.
Alternatively, we can use the cumulative probability method, which yields the codewords 00 for A, 011 for B, 100 for C, 101 for D, and 110 for E.
Note that although the codewords under the two methods are different, the word lengths are the same. We have lengths of 2 bits for A, and 3 bits for B, C, D and E, giving an average length of
formula_12
which is within one bit of the entropy.
Expected word length.
For Shannon's method, the word lengths satisfy
formula_13
Hence the expected word length satisfies
formula_14
Here, formula_15 is the entropy, and Shannon's source coding theorem says that any code must have an average length of at least formula_16. Hence we see that the Shannon–Fano code is always within one bit of the optimal expected word length.
Fano's code: binary splitting.
Outline of Fano's code.
In Fano's method, the symbols are arranged in order from most probable to least probable, and then divided into two sets whose total probabilities are as close as possible to being equal. All symbols then have the first digits of their codes assigned; symbols in the first set receive "0" and symbols in the second set receive "1". As long as any sets with more than one member remain, the same process is repeated on those sets, to determine successive digits of their codes. When a set has been reduced to one symbol this means the symbol's code is complete and will not form the prefix of any other symbol's code.
The algorithm produces fairly efficient variable-length encodings; when the two smaller sets produced by a partitioning are in fact of equal probability, the one bit of information used to distinguish them is used most efficiently. Unfortunately, Shannon–Fano coding does not always produce optimal prefix codes; the set of probabilities {0.35, 0.17, 0.17, 0.16, 0.15} is an example of one that will be assigned non-optimal codes by Shannon–Fano coding.
Fano's version of Shannon–Fano coding is used in the codice_0 compression method, which is part of the codice_1 file format.
The Shannon–Fano tree.
A Shannon–Fano tree is built according to a specification designed to define an effective code table. The actual algorithm is simple: sort the symbols in order of decreasing probability; divide the list into two parts whose total probabilities are as close to equal as possible; assign the digit 0 to the symbols in the first part and the digit 1 to those in the second part; and recursively repeat the division on each part that still contains more than one symbol, appending further digits to their codes.
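A recursive Python sketch of this splitting procedure is shown below (the function name is ours); it reproduces the codes derived in the example that follows.

```python
def fano_code(symbols):
    """Fano coding of a list of (symbol, weight) pairs sorted by decreasing weight."""
    codes = {s: "" for s, _ in symbols}

    def split(group):
        if len(group) < 2:
            return
        total = sum(w for _, w in group)
        running, best_i, best_diff = 0, 1, float("inf")
        for i in range(1, len(group)):         # choose the split with the most equal halves
            running += group[i - 1][1]
            diff = abs(total - 2 * running)    # |left total - right total|
            if diff < best_diff:
                best_diff, best_i = diff, i
        left, right = group[:best_i], group[best_i:]
        for s, _ in left:
            codes[s] += "0"
        for s, _ in right:
            codes[s] += "1"
        split(left)
        split(right)

    split(symbols)
    return codes

print(fano_code([("A", 15), ("B", 7), ("C", 6), ("D", 6), ("E", 5)]))
# {'A': '00', 'B': '01', 'C': '10', 'D': '110', 'E': '111'}
```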
Example.
We continue with the previous example.
All symbols are sorted by frequency, from left to right (shown in Figure a). Putting the dividing line between symbols B and C results in a total of 22 in the left group and a total of 17 in the right group. This minimizes the difference in totals between the two groups.
With this division, A and B will each have a code that starts with a 0 bit, and the C, D, and E codes will all start with a 1, as shown in Figure b. Subsequently, the left half of the tree gets a new division between A and B, which puts A on a leaf with code 00 and B on a leaf with code 01.
After four division procedures, a tree of codes results. In the final tree, the three symbols with the highest frequencies have all been assigned 2-bit codes, and the two symbols with lower counts have 3-bit codes.
This results in lengths of 2 bits for A, B and C, and 3 bits each for D and E, giving an average length of
formula_17
We see that Fano's method, with an average length of 2.28, has outperformed Shannon's method, with an average length of 2.62.
Expected word length.
It is shown by Krajči et al that the expected length of Fano's method has expected length bounded above by formula_18, where formula_19 is the probability of the least common symbol.
Comparison with other coding methods.
Neither Shannon–Fano algorithm is guaranteed to generate an optimal code. For this reason, Shannon–Fano codes are almost never used; Huffman coding is almost as computationally simple and produces prefix codes that always achieve the lowest possible expected code word length, under the constraints that each symbol is represented by a code formed of an integral number of bits. This is a constraint that is often unneeded, since the codes will be packed end-to-end in long sequences. If we consider groups of codes at a time, symbol-by-symbol Huffman coding is only optimal if the probabilities of the symbols are independent and are some power of a half, i.e., formula_20. In most situations, arithmetic coding can produce greater overall compression than either Huffman or Shannon–Fano, since it can encode in fractional numbers of bits which more closely approximate the actual information content of the symbol. However, arithmetic coding has not superseded Huffman the way that Huffman supersedes Shannon–Fano, both because arithmetic coding is more computationally expensive and because it is covered by multiple patents.
Huffman coding.
A few years later, David A. Huffman (1952) gave a different algorithm that always produces an optimal tree for any given symbol probabilities. While Fano's Shannon–Fano tree is created by dividing from the root to the leaves, the Huffman algorithm works in the opposite direction, merging from the leaves to the root.
Example with Huffman coding.
We use the same frequencies as for the Shannon–Fano example above, viz. 15, 7, 6, 6 and 5 for A, B, C, D and E respectively.
In this case D & E have the lowest frequencies and so are allocated 0 and 1 respectively and grouped together with a combined probability of 0.282. The lowest pair now are B and C so they're allocated 0 and 1 and grouped together with a combined probability of 0.333. This leaves BC and DE now with the lowest probabilities so 0 and 1 are prepended to their codes and they are combined. This then leaves just A and BCDE, which have 0 and 1 prepended respectively and are then combined. This leaves us with a single node and our algorithm is complete.
The code lengths for the different characters this time are 1 bit for A and 3 bits for all other characters.
This results in lengths of 1 bit for A and 3 bits each for B, C, D and E, giving an average length of
formula_21
We see that the Huffman code has outperformed both types of Shannon–Fano code, which had expected lengths of 2.62 and 2.28.
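The merging procedure can be sketched in a few lines of Python using a priority queue (the function name is ours; ties may be broken differently from the hand-worked example above, but the resulting code lengths are the same).

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Huffman code from a {symbol: frequency} dict, merging the two lightest subtrees."""
    tie = count()    # tie-breaker so the heap never compares the code dictionaries
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)        # lightest subtree: prefix '0'
        f1, _, c1 = heapq.heappop(heap)        # next lightest:    prefix '1'
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (f0 + f1, next(tie), merged))
    return heap[0][2]

print(huffman_code({"A": 15, "B": 7, "C": 6, "D": 6, "E": 5}))
# codewords may differ in tie-breaking, but the lengths are 1, 3, 3, 3, 3 bits
```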
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "l_i = \\lceil - \\log_2 p_i\\rceil"
},
{
"math_id": 2,
"text": "p_1, p_2, \\dots, p_n"
},
{
"math_id": 3,
"text": "l_i = \\lceil -\\log_2 p_i \\rceil"
},
{
"math_id": 4,
"text": "\\lceil x \\rceil"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "p_1 \\geq p_2 \\geq \\cdots \\geq p_n"
},
{
"math_id": 7,
"text": "c_1 = 0, \\qquad c_i = \\sum_{j=1}^{i-1} p_j \\text{ for }i \\geq 2 , "
},
{
"math_id": 8,
"text": "c_1 = 0, c_2 = p_1, c_3 = p_1 + p_2"
},
{
"math_id": 9,
"text": "l_i"
},
{
"math_id": 10,
"text": "c_i"
},
{
"math_id": 11,
"text": "H(X) = 2.186"
},
{
"math_id": 12,
"text": "\\frac{2\\,\\text{bits}\\cdot(15) + 3\\,\\text{bits} \\cdot (7+6+6+5)}{39\\, \\text{symbols}} \\approx 2.62\\,\\text{bits per symbol,}"
},
{
"math_id": 13,
"text": "l_i = \\lceil -\\log_2 p_i \\rceil \\leq -\\log_2 p_i + 1 ."
},
{
"math_id": 14,
"text": "\\mathbb E L = \\sum_{i=1}^n p_il_i \\leq \\sum_{i=1}^n p_i (-\\log_2 p_i + 1) = -\\sum_{i=1}^n p_i \\log_2 p_i + \\sum_{i=1}^n p_i = H(X) + 1."
},
{
"math_id": 15,
"text": "H(X) = - \\textstyle\\sum_{i=1}^n p_i \\log_2 p_i"
},
{
"math_id": 16,
"text": "H(X)"
},
{
"math_id": 17,
"text": "\\frac{2\\,\\text{bits}\\cdot(15+7+6) + 3\\,\\text{bits} \\cdot (6+5)}{39\\, \\text{symbols}} \\approx 2.28\\,\\text{bits per symbol.}"
},
{
"math_id": 18,
"text": "\\mathbb{E}L \\leq H(X) + 1 - p_\\text{min}"
},
{
"math_id": 19,
"text": "p_\\text{min} = \\textstyle\\min_i p_i"
},
{
"math_id": 20,
"text": "\\textstyle 1 / 2^k"
},
{
"math_id": 21,
"text": "\\frac{1\\,\\text{bit}\\cdot 15 + 3\\,\\text{bits} \\cdot (7+6+6+5)}{39\\, \\text{symbols}} \\approx 2.23\\,\\text{bits per symbol.}"
}
] |
https://en.wikipedia.org/wiki?curid=62544
|
6254418
|
RF power amplifier
|
Type of electronic amplifier
A radio-frequency power amplifier (RF power amplifier) is a type of electronic amplifier that converts a low-power radio-frequency (RF) signal into a higher-power signal. Typically, RF power amplifiers are used in the final stage of a radio transmitter, their output driving the antenna. Design goals often include gain, power output, bandwidth, power efficiency, linearity (low signal compression at rated output), input and output impedance matching, and heat dissipation.
Amplifier classes.
RF amplifier circuits operate in different modes, called "classes", based on how much of the cycle of the sinusoidal radio signal the amplifier (transistor or vacuum tube) is conducting current. Some classes are class A, class AB, class B, which are considered the linear amplifier classes in which the active device is used as a controlled current source, while class C is a nonlinear class in which the active device is used as a switch. The bias at the input of the active device determines the class of the amplifier.
A common trade-off in power amplifier design is the trade-off between efficiency and linearity. The previously named classes become more efficient, but less linear, in the order they are listed. Operating the active device as a switch results in higher efficiency, theoretically up to 100%, but lower linearity. Among the switch-mode classes are class D, class F and class E. The class D amplifier is not often used in RF applications because the finite switching speed of the active devices and possible charge storage in saturation could lead to a large I-V product, which deteriorates efficiency.
Solid state vs. vacuum tube amplifiers.
Modern RF power amplifiers use solid-state devices, predominantly MOSFETs (metal–oxide–semiconductor field-effect transistors). The earliest MOSFET-based RF amplifiers date back to the mid-1960s. Bipolar junction transistors were also commonly used in the past, up until they were replaced by power MOSFETs, particularly LDMOS transistors, as the standard technology for RF power amplifiers by the 1990s, due to the superior RF performance of LDMOS transistors. Generally speaking, solid-state power amplifiers contain four main components: input, output, amplification stage and power supply.
MOSFET transistors and other modern solid-state devices have replaced vacuum tubes in most electronic devices, but tubes are still used in some high-power transmitters (see Valve RF amplifier). Although mechanically robust, transistors are electrically fragile – they are easily damaged by excess voltage or current. Tubes are mechanically fragile but electrically robust – they can handle remarkably high electrical overloads without appreciable damage.
Applications.
The basic applications of the RF power amplifier include driving another high-power source, driving a transmitting antenna, and exciting microwave cavity resonators. Among these applications, driving transmitter antennas is the most well known. The transmitter–receivers are used not only for voice and data communication but also for weather sensing (in the form of a radar).
RF power amplifiers using LDMOS (laterally diffused MOSFET) are the most widely used power semiconductor devices in wireless telecommunication networks, particularly mobile networks. LDMOS-based RF power amplifiers are widely used in digital mobile networks such as 2G, 3G, and 4G, and their good cost/performance ratio makes them the preferred option for amateur radio.
Wideband amplifier design.
Impedance transformations over large bandwidth are difficult to realize, so conventionally, most wideband amplifiers are designed to feed a 50 Ω output load. Transistor output power is then limited to
formula_0
where
formula_1 is defined as the breakdown voltage,
formula_2 is defined as the knee voltage,
formula_3 is chosen so that the rated power can be met.
The external load is, by convention, formula_4 Therefore, there must be some sort of impedance matching that transforms from formula_3 to formula_4
The loadline method is often used in RF power amplifier design.
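As a rough illustration of this limit, the short Python sketch below plugs illustrative device values into the expression above and reports the implied impedance transformation; the breakdown voltage, knee voltage and chosen formula_3 are assumptions, not values for any particular transistor.

```python
# Sketch of the wideband output-power limit quoted above.
# All device numbers here are illustrative assumptions.
V_br = 65.0   # breakdown voltage, volts (assumed)
V_k = 5.0     # knee voltage, volts (assumed)
Z_o = 12.5    # chosen device load impedance, ohms (assumed)
Z_L = 50.0    # conventional external load, ohms

P_out_max = (V_br - V_k) ** 2 / (8 * Z_o)   # upper bound on output power, watts
ratio = Z_L / Z_o                           # impedance transformation required
print(f"P_out <= {P_out_max:.0f} W, requiring a {ratio:.0f}:1 impedance transformation")
# P_out <= 36 W, requiring a 4:1 impedance transformation
```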
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " P_\\text{out} \\le \\frac{(V_\\text{br} - V_\\text{k})^2 }{8 Z_\\text{o}},"
},
{
"math_id": 1,
"text": "V_\\text{br}"
},
{
"math_id": 2,
"text": "V_\\text{k}"
},
{
"math_id": 3,
"text": "Z_\\text{o}"
},
{
"math_id": 4,
"text": "Z_\\text{L} = 50~\\Omega."
}
] |
https://en.wikipedia.org/wiki?curid=6254418
|
62545
|
Arithmetic coding
|
Form of entropy encoding used in data compression
Arithmetic coding (AC) is a form of entropy encoding used in lossless data compression. Normally, a string of characters is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters will be stored with fewer bits and not-so-frequently occurring characters will be stored with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy encoding, such as Huffman coding, in that rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, an arbitrary-precision fraction "q", where 0.0 ≤ "q" < 1.0. It represents the current information as a range, defined by two numbers. A recent family of entropy coders called asymmetric numeral systems allows for faster implementations thanks to directly operating on a single natural number representing the current information.
Implementation details and examples.
Equal probabilities.
In the simplest case, the probability of each symbol occurring is equal. For example, consider a set of three symbols, A, B, and C, each equally likely to occur. Encoding the symbols one by one would require 2 bits per symbol, which is wasteful: one of the bit variations is never used. That is to say, symbols A, B and C might be encoded respectively as 00, 01 and 10, with 11 unused.
A more efficient solution is to represent a sequence of these three symbols as a rational number in base 3, where each digit represents a symbol. For example, the sequence "ABBCAB" could become 0.011201 in base 3, interpreted in arithmetic coding as a value in the interval [0, 1). The next step is to encode this ternary number using a fixed-point binary number of sufficient precision to recover it, such as 0.0010110010 in binary – this is only 10 bits; 2 bits are saved in comparison with naïve block encoding. This is feasible for long sequences because there are efficient, in-place algorithms for converting the base of arbitrarily precise numbers.
To decode the value, knowing the original string had length 6, one can simply convert back to base 3, round to 6 digits, and recover the string.
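The round trip can be checked with a short Python sketch using exact fractions; the 10-bit width follows the example above.

```python
from fractions import Fraction

symbols = "ABC"                          # three equally likely symbols (A=0, B=1, C=2)
message = "ABBCAB"

# Encode: read the message as the base-3 fraction 0.011201...
value = sum(Fraction(symbols.index(s), 3 ** (i + 1)) for i, s in enumerate(message))
print(value)                             # 127/729

# Approximate with a 10-bit fixed-point binary fraction
bits = 10
k = round(value * 2 ** bits)             # 178
print(f"0.{k:0{bits}b}")                 # 0.0010110010

# Decode: convert back to base 3 and round to 6 ternary digits
digits = round(Fraction(k, 2 ** bits) * 3 ** len(message))    # 127
decoded = "".join(symbols[digits // 3 ** p % 3]
                  for p in range(len(message) - 1, -1, -1))
print(decoded)                           # ABBCAB
```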
Defining a model.
In general, arithmetic coders can produce near-optimal output for any given set of symbols and probabilities. (The optimal value is −log2"P" bits for each symbol of probability "P"; see "Source coding theorem".) Compression algorithms that use arithmetic coding start by determining a model of the data – basically a prediction of what patterns will be found in the symbols of the message. The more accurate this prediction is, the closer to optimal the output will be.
Example: a simple, static model for describing the output of a particular monitoring instrument over time might be:
Models can also handle alphabets other than the simple four-symbol set chosen for this example. More sophisticated models are also possible: "higher-order" modelling changes its estimation of the current probability of a symbol based on the symbols that precede it (the "context"), so that in a model for English text, for example, the percentage chance of "u" would be much higher when it follows a "Q" or a "q". Models can even be "adaptive", so that they continually change their prediction of the data based on what the stream actually contains. The decoder must have the same model as the encoder.
Encoding and decoding: overview.
In general, each step of the encoding process, except for the last, is the same; the encoder has basically just three pieces of data to consider:
The encoder divides the current interval into sub-intervals, each representing a fraction of the current interval proportional to the probability of that symbol in the current context. Whichever interval corresponds to the actual symbol that is next to be encoded becomes the interval used in the next step.
Example: for the four-symbol model above:
When all symbols have been encoded, the resulting interval unambiguously identifies the sequence of symbols that produced it. Anyone who has the same final interval and model that is being used can reconstruct the symbol sequence that must have entered the encoder to result in that final interval.
It is not necessary to transmit the final interval, however; it is only necessary to transmit "one fraction" that lies within that interval. In particular, it is only necessary to transmit enough digits (in whatever base) of the fraction so that all fractions that begin with those digits fall into the final interval; this will guarantee that the resulting code is a prefix code.
Encoding and decoding: example.
Consider the process for decoding a message encoded with the given four-symbol model. The message is encoded in the fraction 0.538 (using decimal for clarity, instead of binary; also assuming that there are only as many digits as needed to decode the message.)
The process starts with the same interval used by the encoder: [0,1), and using the same model, dividing it into the same four sub-intervals that the encoder must have. The fraction 0.538 falls into the sub-interval for NEUTRAL, [0, 0.6); this indicates that the first symbol the encoder read must have been NEUTRAL, so this is the first symbol of the message.
Next divide the interval [0, 0.6) into sub-intervals:
Since 0.538 is within the interval [0.48, 0.54), the second symbol of the message must have been NEGATIVE.
Again divide our current interval into sub-intervals:
Now 0.538 falls within the interval of the END-OF-DATA symbol; therefore, this must be the next symbol. Since it is also the internal termination symbol, it means the decoding is complete. If the stream is not internally terminated, there needs to be some other way to indicate where the stream stops. Otherwise, the decoding process could continue forever, mistakenly reading more symbols from the fraction than were in fact encoded into it.
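This walk-through can be reproduced with a small Python sketch. The symbol probabilities used below (60% NEUTRAL, 20% POSITIVE, 10% NEGATIVE, 10% END-OF-DATA) are the assumed model; they are consistent with the sub-intervals quoted above.

```python
from fractions import Fraction

# Assumed four-symbol model, in the order NEUTRAL, POSITIVE, NEGATIVE, END-OF-DATA
model = {"NEUTRAL": Fraction(6, 10), "POSITIVE": Fraction(2, 10),
         "NEGATIVE": Fraction(1, 10), "END-OF-DATA": Fraction(1, 10)}

def decode(value, model):
    """Decode an arithmetic-coded fraction until END-OF-DATA is reached."""
    low, width = Fraction(0), Fraction(1)
    out = []
    while not out or out[-1] != "END-OF-DATA":
        cum = Fraction(0)
        for symbol, p in model.items():          # sub-intervals of [low, low + width)
            sub_low, sub_width = low + width * cum, width * p
            if sub_low <= value < sub_low + sub_width:
                out.append(symbol)
                low, width = sub_low, sub_width
                break
            cum += p
    return out

print(decode(Fraction(538, 1000), model))
# ['NEUTRAL', 'NEGATIVE', 'END-OF-DATA']
```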
Sources of inefficiency.
The message 0.538 in the previous example could have been encoded by the equally short fractions 0.534, 0.535, 0.536, 0.537 or 0.539. This suggests that the use of decimal instead of binary introduced some inefficiency. This is correct; the information content of a three-digit decimal is formula_0 bits; the same message could have been encoded in the binary fraction 0.10001001 (equivalent to 0.53515625 decimal) at a cost of only 8 bits.
This 8 bit output is larger than the information content, or entropy of the message, which is
formula_1
But an integer number of bits must be used in the binary encoding, so an encoder for this message would use at least 8 bits, resulting in a message 8.4% larger than the entropy contents. This inefficiency of at most 1 bit results in relatively less overhead as the message size grows.
Moreover, the claimed symbol probabilities were [0.6, 0.2, 0.1, 0.1], but the actual frequencies in this example are [0.33, 0, 0.33, 0.33]. If the intervals are readjusted for these frequencies, the entropy of the message would be 4.755 bits and the same NEUTRAL NEGATIVE END-OF-DATA message could be encoded as intervals [0, 1/3); [1/9, 2/9); [5/27, 6/27); and a binary interval of [0.00101111011, 0.00111000111). This is also an example of how statistical coding methods like arithmetic encoding can produce an output message that is larger than the input message, especially if the probability model is off.
Adaptive arithmetic coding.
One advantage of arithmetic coding over other similar methods of data compression is the convenience of adaptation. "Adaptation" is the changing of the frequency (or probability) tables while processing the data. The decoded data matches the original data as long as the frequency table in decoding is replaced in the same way and in the same step as in encoding. The synchronization is, usually, based on a combination of symbols occurring during the encoding and decoding process.
Precision and renormalization.
The above explanations of arithmetic coding contain some simplification. In particular, they are written as if the encoder first calculated the fractions representing the endpoints of the interval in full, using infinite precision, and only converted the fraction to its final form at the end of encoding. Rather than try to simulate infinite precision, most arithmetic coders instead operate at a fixed limit of precision which they know the decoder will be able to match, and round the calculated fractions to their nearest equivalents at that precision. An example shows how this would work if the model called for the interval [0,1) to be divided into thirds, and this was approximated with 8-bit precision. Note that since the precision is now known, so are the binary ranges we will be able to use.
A process called "renormalization" keeps the finite precision from becoming a limit on the total number of symbols that can be encoded. Whenever the range is reduced to the point where all values in the range share certain beginning digits, those digits are sent to the output. For however many digits of precision the computer "can" handle, it is now handling fewer than that, so the existing digits are shifted left, and at the right, new digits are added to expand the range as widely as possible. Note that this result occurs in two of the three cases from our previous example.
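A tiny sketch of that bookkeeping, assuming 8-bit precision and the interval [0, 1) split into thirds; the rounding of the boundaries is a convention chosen here for illustration.

```python
# Thirds of [0, 1) at 8-bit precision: which thirds already fix leading output bits?
PREC = 8
full = 2 ** PREC
bounds = [0, round(full / 3), round(2 * full / 3), full]     # 0, 85, 171, 256

for i in range(3):
    lo, hi = bounds[i], bounds[i + 1] - 1
    lo_bits, hi_bits = f"{lo:0{PREC}b}", f"{hi:0{PREC}b}"
    shared = ""
    for a, b in zip(lo_bits, hi_bits):           # common leading digits
        if a != b:
            break
        shared += a
    print(f"third {i}: {lo_bits}..{hi_bits}  emits '{shared}', then rescales")
# The lower and upper thirds emit a bit immediately; the middle third does not --
# the "two of the three cases" mentioned above.
```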
Arithmetic coding as a generalized change of radix.
Recall that in the case where the symbols had equal probabilities, arithmetic coding could be implemented by a simple change of base, or radix. In general, arithmetic (and range) coding may be interpreted as a "generalized" change of radix. For example, we may look at any sequence of symbols:
formula_2
as a number in a certain base presuming that the involved symbols form an ordered set and each symbol in the ordered set denotes a sequential integer A = 0, B = 1, C = 2, D = 3, and so on. This results in the following frequencies and cumulative frequencies:
The "cumulative frequency" for an item is the sum of all frequencies preceding the item. In other words, cumulative frequency is a running total of frequencies.
In a positional numeral system the radix, or base, is numerically equal to the number of different symbols used to express the number. For example, in the decimal system the number of symbols is 10, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The radix is used to express any finite integer in polynomial form, as a sum of digits multiplied by powers of the radix. For example, the number 457 is actually 4×10^2 + 5×10^1 + 7×10^0, where base 10 is presumed but not shown explicitly.
Initially, we will convert DABDDB into a base-6 numeral, because 6 is the length of the string. The string is first mapped into the digit string 301331, which then maps to an integer by the polynomial:
formula_3
The result 23671 has a length of 15 bits, which is not very close to the theoretical limit (the entropy of the message), which is approximately 9 bits.
To encode a message with a length closer to the theoretical limit imposed by information theory we need to slightly generalize the classic formula for changing the radix. We will compute lower and upper bounds "L" and "U" and choose a number between them. For the computation of "L" we multiply each term in the above expression by the product of the frequencies of all previously occurred symbols:
formula_4
The difference between this polynomial and the polynomial above is that each term is multiplied by the product of the frequencies of all previously occurring symbols. More generally, "L" may be computed as:
formula_5
where formula_6 are the cumulative frequencies and formula_7 are the frequencies of occurrences. Indexes denote the position of the symbol in a message. In the special case where all frequencies formula_7 are 1, this is the change-of-base formula.
The upper bound "U" will be "L" plus the product of all frequencies; in this case "U" = "L" + (3 × 1 × 2 × 3 × 3 × 2) = 25002 + 108 = 25110. In general, "U" is given by:
formula_8
Now we can choose any number from the interval ["L", "U") to represent the message; one convenient choice is the value with the longest possible trail of zeroes, 25100, since it allows us to achieve compression by representing the result as 251×10^2. The zeroes can also be truncated, giving 251, if the length of the message is stored separately. Longer messages will tend to have longer trails of zeroes.
To decode the integer 25100, the polynomial computation can be reversed as shown in the table below. At each stage the current symbol is identified, then the corresponding term is subtracted from the result.
During decoding we take the floor after dividing by the corresponding power of 6. The result is then matched against the cumulative intervals and the appropriate symbol is selected from a lookup table. When the symbol is identified the result is corrected. The process is continued for the known length of the message or while the remaining result is positive. The only difference compared to the classical change of base is that there may be a range of values associated with each symbol. In this example, A is always 0, B is either 1 or 2, and D is any of 3, 4, 5. This is in exact accordance with our intervals, which are determined by the frequencies. When all intervals are equal to 1 we have a special case of the classic base change.
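The bounds "L" and "U" and the decoding table can be reproduced with the following Python sketch; exact fractions are used for the correction step so that the floors come out right.

```python
from math import prod
from fractions import Fraction

message, order = "DABDDB", "ABCD"          # symbol values A=0, B=1, C=2, D=3
n = len(message)
freq = {s: message.count(s) for s in order}            # A:1, B:2, C:0, D:3
cum, running = {}, 0
for s in order:                                        # cumulative frequencies
    cum[s], running = running, running + freq[s]       # A:0, B:1, C:3, D:3

# Lower and upper bounds, following the formulas above
L = sum(n ** (n - 1 - i) * cum[s] * prod(freq[t] for t in message[:i])
        for i, s in enumerate(message))
U = L + prod(freq[s] for s in message)
print(L, U)                                            # 25002 25110

# Decode the convenient choice 25100 by reversing the polynomial
value, decoded = Fraction(25100), ""
for i in range(n):
    scale = n ** (n - 1 - i)
    d = value // scale                                 # floor after dividing
    s = next(t for t in order if cum[t] <= d < cum[t] + freq[t])
    decoded += s
    value = (value - scale * cum[s]) / freq[s]         # correct the result
print(decoded)                                         # DABDDB
```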
Theoretical limit of compressed message.
The lower bound "L" never exceeds "n" raised to the power "n", where "n" is the size of the message, and so can be represented in formula_9 bits. After the computation of the upper bound "U" and the reduction of the message by selecting a number from the interval ["L", "U") with the longest trail of zeros we can presume that this length can be reduced by formula_10 bits. Since each frequency in a product occurs exactly the same number of times as the value of this frequency, we can use the size of the alphabet "A" for the computation of the product
formula_11
Applying log2 for the estimated number of bits in the message, the final message (not counting a logarithmic overhead for the message length and frequency tables) will match the number of bits given by entropy, which for long messages is very close to optimal:
formula_12
In other words, the efficiency of arithmetic encoding approaches the theoretical limit of formula_13 bits per symbol, as the message length approaches infinity.
Asymptotic equipartition.
We can understand this intuitively. Suppose the source is ergodic, then it has the asymptotic equipartition property (AEP). By the AEP, after a long stream of formula_14 symbols, the interval of formula_15 is almost partitioned into almost equally-sized intervals.
Technically, for any small formula_16, for all large enough formula_14, there exists formula_17 strings formula_18, such that each string has almost equal probability formula_19, and their total probability is formula_20.
For any such string, it is arithmetically encoded by a binary string of length formula_21, where formula_21 is the smallest formula_21 such that there exists a fraction of form formula_22 in the interval for formula_18. Since the interval for formula_18 has size formula_23, we should expect it to contain one fraction of form formula_22 when formula_24.
Thus, with high probability, formula_18 can be arithmetically encoded with a binary string of length formula_25.
Connections with other compression methods.
Huffman coding.
Because arithmetic coding doesn't compress one datum at a time, it can get arbitrarily close to entropy when compressing IID strings. By contrast, using the extension of Huffman coding (to strings) does not reach entropy unless all probabilities of alphabet symbols are powers of two, in which case both Huffman and arithmetic coding achieve entropy.
When naively Huffman coding binary strings, no compression is possible, even if entropy is low (e.g. the symbol set {0, 1} with probabilities {0.95, 0.05}). Huffman encoding assigns 1 bit to each value, resulting in a code of the same length as the input. By contrast, arithmetic coding compresses such strings well, approaching the optimal compression ratio of
formula_26
One simple way to address Huffman coding's suboptimality is to concatenate symbols ("blocking") to form a new alphabet in which each new symbol represents a sequence of original symbols – in this case bits – from the original alphabet. In the above example, grouping sequences of three symbols before encoding would produce new "super-symbols" with the following frequencies:
With this grouping, Huffman coding averages 1.3 bits for every three symbols, or 0.433 bits per symbol, compared with one bit per symbol in the original encoding, i.e., formula_27 compression. Allowing arbitrarily large sequences gets arbitrarily close to entropy – just like arithmetic coding – but requires huge codes to do so, so is not as practical as arithmetic coding for this purpose.
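The 1.3-bit figure can be checked with a short sketch that builds the eight three-bit "super-symbols" and measures the average Huffman code length; the heap-based length computation below is a standard construction written out for illustration.

```python
import heapq
from itertools import product
from math import log2

p = {"0": 0.95, "1": 0.05}
# The eight super-symbols obtained by blocking three bits at a time
blocks = {"".join(b): p[b[0]] * p[b[1]] * p[b[2]] for b in product("01", repeat=3)}

def huffman_lengths(probs):
    """Return the Huffman code length of each symbol (the codes themselves are not needed)."""
    heap = [(q, [s]) for s, q in probs.items()]
    lengths = {s: 0 for s in probs}
    heapq.heapify(heap)
    while len(heap) > 1:
        q1, s1 = heapq.heappop(heap)
        q2, s2 = heapq.heappop(heap)
        for s in s1 + s2:            # every merge adds one bit to these symbols
            lengths[s] += 1
        heapq.heappush(heap, (q1 + q2, s1 + s2))
    return lengths

lengths = huffman_lengths(blocks)
avg = sum(blocks[s] * lengths[s] for s in blocks) / 3
entropy = -(0.95 * log2(0.95) + 0.05 * log2(0.05))
print(f"{avg:.3f} bits per symbol vs. entropy {entropy:.3f}")   # 0.433 vs. 0.286
```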
An alternative is encoding run lengths via Huffman-based Golomb-Rice codes. Such an approach allows simpler and faster encoding/decoding than arithmetic coding or even Huffman coding, since the latter requires table lookups. In the {0.95, 0.05} example, a Golomb-Rice code with a four-bit remainder achieves a compression ratio of formula_28, far closer to optimum than using three-bit blocks. Golomb-Rice codes only apply to Bernoulli inputs such as the one in this example, however, so they are not a substitute for blocking in all cases.
History and patents.
Basic algorithms for arithmetic coding were developed independently by Jorma J. Rissanen, at IBM Research, and by Richard C. Pasco, a Ph.D. student at Stanford University; both were published in May 1976. Pasco cites a pre-publication draft of Rissanen's article and comments on the relationship between their works:
<templatestyles src="Template:Blockquote/styles.css" />One algorithm of the family was developed independently by Rissanen [1976]. It shifts the code element to the most significant end of the accumulator, using a pointer obtained by addition and exponentiation. We shall now compare the alternatives in the three choices, and see that it is preferable to shift the code element rather than the accumulator, and to add code elements to the least significant end of the accumulator.
Less than a year after publication, IBM filed for a US patent on Rissanen's work. Pasco's work was not patented.
A variety of specific techniques for arithmetic coding have historically been covered by US patents, although various well-known methods have since passed into the public domain as the patents have expired. Techniques covered by patents may be essential for implementing the algorithms for arithmetic coding that are specified in some formal international standards. When this is the case, such patents are generally available for licensing under what is called "reasonable and non-discriminatory" (RAND) licensing terms (at least as a matter of standards-committee policy). In some well-known instances, (including some involving IBM patents that have since expired), such licenses were available for free, and in other instances, licensing fees have been required. The availability of licenses under RAND terms does not necessarily satisfy everyone who might want to use the technology, as what may seem "reasonable" for a company preparing a proprietary commercial software product may seem much less reasonable for a free software or open source project.
At least one significant compression software program, bzip2, deliberately discontinued the use of arithmetic coding in favor of Huffman coding due to the perceived patent situation at the time. Also, encoders and decoders of the JPEG file format, which has options for both Huffman encoding and arithmetic coding, typically only support the Huffman encoding option, originally because of patent concerns; the result is that nearly all JPEG images in use today use Huffman encoding, although JPEG's arithmetic coding patents have expired due to the age of the JPEG standard (the design of which was approximately completed by 1990). JPEG XL, as well as archivers like PackJPG, Brunsli and Lepton, can losslessly convert Huffman-encoded files to ones with arithmetic coding (or asymmetric numeral systems in the case of JPEG XL), showing up to 25% size savings.
The JPEG image compression format's arithmetic coding algorithm is based on the following cited patents (since expired).
Other patents (mostly also expired) related to arithmetic coding include the following.
Note: This list is not exhaustive. See the following links for a list of more US patents. The Dirac codec uses arithmetic coding and is not patent pending.
Patents on arithmetic coding may exist in other jurisdictions; see software patents for a discussion of the patentability of software around the world.
Benchmarks and other technical characteristics.
Every programmatic implementation of arithmetic encoding has a different compression ratio and performance. While compression ratios vary only a little (usually under 1%), the code execution time can vary by a factor of 10. Choosing the right encoder from a list of publicly available encoders is not a simple task because performance and compression ratio depend also on the type of data, particularly on the size of the alphabet (number of different symbols). One of two particular encoders may have better performance for small alphabets while the other may show better performance for large alphabets. Most encoders have limitations on the size of the alphabet and many of them are specialized for alphabets of exactly two symbols (0 and 1).
|
[
{
"math_id": 0,
"text": "3 \\times \\log_2(10) \\approx 9.966"
},
{
"math_id": 1,
"text": " \\sum -\\log_2(p_i) = -\\log_2(0.6) - \\log_2(0.1) - \\log_2(0.1) = 7.381 \\text{ bits}."
},
{
"math_id": 2,
"text": "\\mathrm{DABDDB}"
},
{
"math_id": 3,
"text": "6^5 \\times 3 + 6^4 \\times 0 + 6^3 \\times 1 + 6^2 \\times 3 + 6^1 \\times 3 + 6^0 \\times 1 = 23671"
},
{
"math_id": 4,
"text": "\\begin{align}\n L = {} &(6^5 \\times 3) + {}\\\\\n & 3 \\times (6^4 \\times 0) + {}\\\\\n & (3 \\times 1) \\times (6^3 \\times 1) + {}\\\\\n & (3 \\times 1 \\times 2) \\times (6^2 \\times 3) + {}\\\\\n & (3 \\times 1 \\times 2 \\times 3) \\times (6^1 \\times 3) + {}\\\\\n & (3 \\times 1 \\times 2 \\times 3 \\times 3) \\times (6^0 \\times 1) {}\\\\\n = {} & 25002\n\\end{align}"
},
{
"math_id": 5,
"text": " L = \\sum_{i=1}^n n^{n-i} C_i { \\prod_{k=1}^{i-1} f_k } "
},
{
"math_id": 6,
"text": "\\scriptstyle C_i"
},
{
"math_id": 7,
"text": "\\scriptstyle f_k"
},
{
"math_id": 8,
"text": " U = L + \\prod_{k=1}^{n} f_k "
},
{
"math_id": 9,
"text": "\\log_2(n^n) = n \\log_2(n)"
},
{
"math_id": 10,
"text": "\\textstyle \\log_2\\left(\\prod_{k=1}^n f_k\\right)"
},
{
"math_id": 11,
"text": " \\prod_{k=1}^n f_k = \\prod_{k=1}^A f_k^{f_k}."
},
{
"math_id": 12,
"text": "-\\left[\\sum_{i=1}^A f_i \\log_2(f_i)\\right] n = n H"
},
{
"math_id": 13,
"text": "H"
},
{
"math_id": 14,
"text": "n"
},
{
"math_id": 15,
"text": "(0, 1)"
},
{
"math_id": 16,
"text": "\\epsilon > 0"
},
{
"math_id": 17,
"text": "2^{nH(X)(1+O(\\epsilon))}"
},
{
"math_id": 18,
"text": "x_{1:n}"
},
{
"math_id": 19,
"text": "Pr(x_{1:n}) = 2^{-nH(X)(1+ O(\\epsilon))} "
},
{
"math_id": 20,
"text": "1-O(\\epsilon)"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "\\frac{?}{2^k}"
},
{
"math_id": 23,
"text": "2^{-nH(X)(1+ O(\\epsilon))} "
},
{
"math_id": 24,
"text": "k = nH(X)(1+O(\\epsilon))"
},
{
"math_id": 25,
"text": "nH(X) ( 1 + O(\\epsilon))"
},
{
"math_id": 26,
"text": " 1 - [-0.95 \\log_2(0.95) + -0.05 \\log_2(0.05)] \\approx 71.4\\%."
},
{
"math_id": 27,
"text": "56.7\\%"
},
{
"math_id": 28,
"text": "71.1\\%"
}
] |
https://en.wikipedia.org/wiki?curid=62545
|
62549019
|
Andrea Morello
|
Italian professor of quantum computing (born 1972)
Andrea Morello (born 26 June 1972, in Pinerolo, Italy) is the Scientia Professor of Quantum Engineering in the School of Electrical Engineering and Telecommunications at the University of New South Wales, and a Program Manager at the ARC Centre of Excellence for Quantum Computation and Communication Technology (CQC2T). Morello is the head of the Fundamental Quantum Technologies Laboratory at UNSW.
Education.
Morello completed his undergraduate degree in electrical engineering at the Politecnico di Torino in Italy in 1998. His research career began at the Grenoble High Magnetic Field Laboratory where he investigated the magnetic phase diagram of high formula_0 superconductors. He obtained his PhD in experimental physics from the Kamerlingh Onnes Laboratory in Leiden in 2004, during which he explored the quantum dynamics of molecular nanomagnets at low temperatures. Morello spent two years at the University of British Columbia before joining UNSW Sydney in 2006.
Research.
Morello's research is primarily focused on designing and building the basic components of a quantum computer using the spins of single atoms in silicon. His team were the first in the world to demonstrate the coherent control and readout of the electron and nuclear spin of an individual phosphorus atom in silicon, and for many years they held the record for the longest quantum memory time for a single qubit in the solid state (35.6 seconds). Morello's research also focuses on using highly coherent spin systems to study the foundations of quantum mechanics.
Outreach.
Outside of his research Morello is actively engaged in science outreach and education. He has produced a series of YouTube videos 'The Quantum Around You' and 'Quantum Computing Concepts' to bring the fundamental concepts of quantum physics to a wider audience. Morello also starred in a series of videos produced by YouTuber Derek Muller on his channel Veritasium, explaining the fundamental concepts of quantum computing, with the highest viewed video in this series being watched over 4.4 million times.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T_c"
}
] |
https://en.wikipedia.org/wiki?curid=62549019
|
62556833
|
Impulse vector
|
Mathematical tool
An impulse vector, also known as Kang vector, is a mathematical tool used to graphically design and analyze input shapers that can suppress residual vibration. The impulse vector can be applied to both undamped and underdamped systems, as well as to both positive and negative impulses in a unified manner. The impulse vector makes it easy to obtain impulse time and magnitude of the input shaper graphically.
A vector concept for an input shaper was first introduced by W. Singhose for undamped systems with positive impulses. Building on this idea, C.-G. Kang introduced the impulse vector (or Kang vector) to generalize Singhose's idea to undamped and underdamped systems with positive and negative impulses.
Definition.
For a vibratory second-order system formula_4 with undamped natural frequency formula_5 and damping ratio formula_6, the magnitude formula_7 and angle formula_8 of an impulse vector (or Kang vector) formula_1 corresponding to an impulse function formula_0, formula_9 is defined in a 2-dimensional polar coordinate system as
formula_10
formula_11
where formula_12 implies the magnitude of an impulse function, formula_13 implies the time location of the impulse function, and formula_14 implies damped natural frequency formula_15. For a positive impulse function with formula_2, the initial point of the impulse vector is located at the origin of the polar coordinate system, while for a negative impulse function with formula_3, the terminal point of the impulse vector is located at the origin. □
In this definition, the magnitude formula_7 is the product of formula_12 and a scaling factor for damping during time interval formula_13, which represents the magnitude formula_12 before being damped; the angle formula_8 is the product of the impulse time and damped natural frequency. formula_16 represents the Dirac delta function with impulse time at formula_17.
Note that an impulse function is a purely mathematical quantity, while the impulse vector includes a physical quantity (that is, formula_5 and formula_6 of a second-order system) as well as a mathematical impulse function. Representing more than two impulse vectors in the same polar coordinate system makes an "impulse vector diagram". The impulse vector diagram is a graphical representation of an impulse sequence.
Consider two impulse vectors formula_19 and formula_20 in the figure on the right-hand side, in which formula_19 is an impulse vector with magnitude formula_21 and angle formula_22 corresponding to a positive impulse with formula_23, and formula_20 is an impulse vector with magnitude formula_24 and angle formula_25 corresponding to a negative impulse with formula_26. Since the two time-responses corresponding to formula_19 and formula_20 are exactly the same after the final impulse time formula_18, as shown in the figure, the two impulse vectors formula_19 and formula_20 can be regarded as the same vector for vector addition and subtraction. Impulse vectors satisfy the commutative and associative laws, as well as the distributive law for scalar multiplication.
The magnitude of the impulse vector determines the magnitude of the impulse, and the angle of the impulse vector determines the time location of the impulse. One rotation, formula_27 angle, on an impulse vector diagram corresponds to one (damped) period of the corresponding impulse response.
If it is an undamped system (formula_28), the magnitude and angle of the impulse vector become formula_29 and formula_30.
Properties.
Property 1: Resultant of two impulse vectors..
The impulse response of a second-order system corresponding to the resultant of two impulse vectors is the same as the time response of the system with a two-impulse input corresponding to the two impulse vectors after the final impulse time, regardless of whether the system is undamped or underdamped. □
Property 2: Zero resultant of impulse vectors..
If the resultant of impulse vectors is zero, the time response of a second-order system for the input of the impulse sequence corresponding to the impulse vectors also becomes zero after the final impulse time, regardless of whether the system is undamped or underdamped. □
Consider an underdamped second-order system with the transfer function formula_35. This system has formula_36 and formula_37. For given impulse vectors formula_19 and formula_20 as shown in the figure, the resultant can be represented in two ways, formula_38 and formula_39, in which formula_38 corresponds to a negative impulse with formula_40 and formula_41, and formula_39 corresponds to a positive impulse with formula_42 and formula_43.
The resultants formula_38, formula_39 can be found as follows.
formula_44,
formula_45
formula_46
Note that formula_47. The impulse responses formula_48 and formula_49 corresponding to formula_38 and formula_39 are exactly the same as formula_50 after each impulse time location, as shown by the green lines in figure (b).
Now, place an impulse vector formula_31 on the impulse vector diagram to cancel the resultant formula_51 as shown in the figure. The impulse vector formula_31 is given by
formula_52.
When the impulse sequence corresponding to the three impulse vectors formula_53 and formula_31 is applied to a second-order system as an input, the resulting time response causes no residual vibration after the final impulse time formula_32, as shown by the red line of the bottom figure (b). Of course, another canceling vector formula_54 exists, which is the impulse vector with the same magnitude as formula_31 but with the opposite arrow direction. However, this canceling vector has an impulse time that is longer than that of formula_31 by as much as a half period.
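The cancellation above can be sketched numerically in Python. The two input vectors' magnitudes and angles are illustrative assumptions (the text defines them only through its figure), and atan2 is used in place of the arctangent so that all quadrants are handled.

```python
import math

wn, zeta = 2 * math.pi, 0.1                 # the second-order system of this example
wd = wn * math.sqrt(1 - zeta ** 2)

# Two illustrative impulse vectors (magnitude, angle in radians) -- assumed values
I1, th1 = 1.0, 0.0
I2, th2 = 0.8, math.radians(100)

Rx = I1 * math.cos(th1) + I2 * math.cos(th2)
Ry = I1 * math.sin(th1) + I2 * math.sin(th2)

# Canceling vector: same length as the resultant, opposite direction
I3 = math.hypot(Rx, Ry)
th3 = math.atan2(Ry, Rx) + math.pi
t3 = th3 / wd                               # impulse time of the canceling impulse
A3 = I3 / math.exp(zeta * wn * t3)          # impulse magnitude (positive impulse)
print(f"I3 = {I3:.3f}, theta3 = {th3:.3f} rad, t3 = {t3:.3f} s, A3 = {A3:.3f}")
```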
Applications: Design of input shapers using impulse vectors.
ZVD"n" shaper.
Using impulse vectors, we can redesign known input shapers such as zero vibration (ZV), zero vibration and derivative (ZVD), and ZVD"n" shapers.
The ZV shaper is composed of two impulse vectors, in which the first impulse vector is located at 0°, and the second impulse vector with the same magnitude is located at 180° for formula_55. Then from the impulse vector diagram of the ZV shaper on the right-hand side,
formula_56
formula_57.
Therefore, formula_58.
Since formula_59 (normalization constraint) must hold, and formula_60,
formula_61.
Therefore, formula_62.
Thus, the ZV shaper formula_63 is given by
formula_64.
formula_65
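The ZV shaper above can be evaluated with a few lines of Python; the system used earlier (formula_5 = 2π rad/s, formula_6 = 0.1) serves as the example.

```python
import math

def zv_shaper(wn, zeta):
    """Impulse times and magnitudes of the ZV shaper derived above (a sketch)."""
    wd = wn * math.sqrt(1 - zeta ** 2)                  # damped natural frequency
    K = math.exp(zeta * math.pi / math.sqrt(1 - zeta ** 2))
    times = [0.0, math.pi / wd]
    magnitudes = [K / (K + 1), 1 / (K + 1)]
    return times, magnitudes

print(zv_shaper(2 * math.pi, 0.1))
# times ~ [0.0, 0.503] s, magnitudes ~ [0.578, 0.422]
```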
The ZVD shaper is composed of three impulse vectors, in which the first impulse vector is located at 0 rad, the second vector at formula_66 rad, and the third vector at formula_27 rad, and the magnitude ratio is formula_67. Then formula_34. From the impulse vector diagram,
formula_68.
Therefore, formula_69.
Also from the impulse vector diagram,
formula_70.
Since formula_71 must hold,
formula_72.
Therefore, formula_73.
Thus, the ZVD shaper formula_74 is given by
formula_75.
formula_65
The ZVD2 shaper is composed of four impulse vectors, in which the first impulse vector is located at 0 rad, the second vector at formula_66 rad, the third vector at formula_27 rad, and the fourth vector at formula_76 rad, and the magnitude ratio is formula_77. Then formula_78. From the impulse vector diagram,
formula_79.
Therefore, formula_80.
Also, from the impulse vector diagram,
formula_81.
Since formula_82 must hold,
formula_83.
Therefore, formula_84.
Thus, the ZVD2 shaper formula_85 is given by
formula_86.
formula_65
Similarly, the ZVD3 shaper with five impulse vectors can be obtained, in which the first vector is located at 0 rad, the second vector at formula_66 rad, the third vector at formula_27 rad, the fourth vector at formula_76 rad, and the fifth vector at formula_87 rad, and the magnitude ratio is formula_88. In general, for the ZVD"n" shaper, the "i"-th impulse vector is located at formula_89 rad, and the magnitude ratio is formula_90, where formula_91 denotes a mathematical combination (binomial coefficient).
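A sketch of the general ZVD"n" construction in Python: impulses at multiples of half the damped period, with impulse magnitudes obtained from the binomial magnitude ratio above, scaled by powers of "K" and normalized to sum to one.

```python
import math
from math import comb

def zvdn_shaper(wn, zeta, n):
    """ZVDn shaper sketch: n+2 impulses, magnitude ratio given by binomial coefficients."""
    wd = wn * math.sqrt(1 - zeta ** 2)
    K = math.exp(zeta * math.pi / math.sqrt(1 - zeta ** 2))
    times = [i * math.pi / wd for i in range(n + 2)]
    raw = [comb(n + 1, i) / K ** i for i in range(n + 2)]   # damped binomial weights
    total = sum(raw)                                        # equals ((K + 1) / K)**(n + 1)
    magnitudes = [r / total for r in raw]
    return times, magnitudes

# n = 1 reproduces the ZVD shaper, n = 2 the ZVD2 shaper, and so on
print(zvdn_shaper(2 * math.pi, 0.1, 1))
```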
ETM shaper.
Now, consider "equal shaping-time and magnitudes" (ETM) shapers, with the same magnitude of impulse vectors and with the same angle between impulse vectors. The ETM"n" shaper satisfies the conditions
formula_92
formula_93
formula_94.
Thus, the resultant of the impulse vectors of the ETM"n" shaper is always zero for all formula_95. One merit of the ETM"n" shaper is that, unlike the ZVD"n" or extra insensitive (EI) shapers, the shaping time remains one (damped) period of the time response even as "n" increases.
The ETM4 shaper with four impulse vectors is obtained from the above conditions together with impulse vector definitions as
formula_96.
formula_97.
The ETM5 shaper with five impulse vectors is obtained similarly as
formula_98.
formula_99.
In the same way, the ETM"n" shaper with formula_100 can be obtained easily. In general, ETM shapers are less sensitive to modeling errors than ZVD"n" shapers in a large positive error range. Note that the ZVD shaper is an ETM3 shaper with formula_101.
NMe shaper.
Moreover, impulse vectors can be applied to design input shapers with negative impulses. Consider a "negative equal-magnitude" (NMe) shaper, in which the magnitudes of the three impulse vectors are formula_102, and the angles are formula_103. Then the resultant of the three impulse vectors becomes zero, and thus the residual vibration is suppressed. The impulse times formula_104 of the NMe shaper are obtained as formula_105, and the impulse magnitudes are obtained easily by solving the simultaneous equations
formula_106
formula_71.
The resulting NMe shaper formula_33 is
formula_107.
formula_108.
The NMe shaper has a faster rise time than the ZVD shaper, but it is more sensitive to modeling errors than the ZVD shaper. Note that the NMe shaper is the same as the UM shaper if the system is undamped (formula_28).
Figure (a) on the right side shows a typical block diagram of an input-shaping control system, and figure (b) shows residual vibration suppression in unit-step responses by the ZV, ZVD, ETM4 and NMe shapers.
Refer to the reference for sensitivity curves of the above input shapers, which represent the robustness to modeling errors in formula_5 and formula_6.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A_i \\delta (t-t_i)"
},
{
"math_id": 1,
"text": "\\mathbf{I}_i"
},
{
"math_id": 2,
"text": "A_i > 0"
},
{
"math_id": 3,
"text": "A_i < 0"
},
{
"math_id": 4,
"text": "\\omega_n^2 /(s^2 + 2 \\zeta \\omega_n s + \\omega_n^2 )"
},
{
"math_id": 5,
"text": "\\omega_n"
},
{
"math_id": 6,
"text": "\\zeta"
},
{
"math_id": 7,
"text": "I_i"
},
{
"math_id": 8,
"text": "\\theta_i"
},
{
"math_id": 9,
"text": "i = 1,2,...,n"
},
{
"math_id": 10,
"text": "I_i = A_i e^{\\zeta \\omega_n t_i }"
},
{
"math_id": 11,
"text": "\\theta_i = \\omega_d t_i "
},
{
"math_id": 12,
"text": "A_i"
},
{
"math_id": 13,
"text": "t_i"
},
{
"math_id": 14,
"text": "\\omega_d"
},
{
"math_id": 15,
"text": "\\omega_n \\sqrt{1-\\zeta^2}"
},
{
"math_id": 16,
"text": "\\delta(t-t_i)"
},
{
"math_id": 17,
"text": "t=t_i"
},
{
"math_id": 18,
"text": "t_2"
},
{
"math_id": 19,
"text": "\\mathbf{I}_1"
},
{
"math_id": 20,
"text": "\\mathbf{I}_2"
},
{
"math_id": 21,
"text": "I_1 (>0)"
},
{
"math_id": 22,
"text": "\\theta_1"
},
{
"math_id": 23,
"text": "A_1 > 0"
},
{
"math_id": 24,
"text": "I_2 = -I_1"
},
{
"math_id": 25,
"text": "\\theta_2 = \\pi + \\theta_1"
},
{
"math_id": 26,
"text": "A_2 < 0"
},
{
"math_id": 27,
"text": "2 \\pi"
},
{
"math_id": 28,
"text": "\\zeta = 0"
},
{
"math_id": 29,
"text": "I_i = A_i"
},
{
"math_id": 30,
"text": "\\theta_i = \\omega_n t_i"
},
{
"math_id": 31,
"text": "\\mathbf{I}_3"
},
{
"math_id": 32,
"text": "t_3"
},
{
"math_id": 33,
"text": "A_1 \\delta (t) + A_2 \\delta (t-t_2) + A_3 \\delta (t-t_3)"
},
{
"math_id": 34,
"text": "\\mathbf{I}_1 + \\mathbf{I}_2 + \\mathbf{I}_3 = \\mathbf{0}"
},
{
"math_id": 35,
"text": "4 \\pi^2 / (s^2 + 0.4 \\pi s + 4 \\pi^2 )"
},
{
"math_id": 36,
"text": "\\omega_n = 2 \\pi"
},
{
"math_id": 37,
"text": "\\zeta = 0.1"
},
{
"math_id": 38,
"text": "\\mathbf{I}_{R1}"
},
{
"math_id": 39,
"text": "\\mathbf{I}_{R2}"
},
{
"math_id": 40,
"text": "A_{R1} = I_{R1} / e^{\\zeta \\omega_n t_{R1}}"
},
{
"math_id": 41,
"text": "t_{R1} = \\theta_{R1} / \\omega_d"
},
{
"math_id": 42,
"text": "A_{R2} = I_{R2} / e^{\\zeta \\omega_n t_{R2}}"
},
{
"math_id": 43,
"text": "t_{R2} = \\theta_{R2} / \\omega_d"
},
{
"math_id": 44,
"text": "R_x = I_1 + I_2 \\cos \\theta_2 , \\ \\ R_y = I_2 \\sin \\theta_2"
},
{
"math_id": 45,
"text": "I_{R1} = - \\sqrt{R_x^2 + R_y^2},\\ \\ \\theta_{R1} = \\pi + \\tan^{-1} (R_y / R_x)"
},
{
"math_id": 46,
"text": "I_{R2} = \\sqrt{R_x^2 + R_y^2},\\ \\ \\theta_{R2} = \\tan^{-1} (R_y / R_x)"
},
{
"math_id": 47,
"text": "- \\pi /2 < \\tan^{-1} (a) < \\pi /2"
},
{
"math_id": 48,
"text": "y_{R1}"
},
{
"math_id": 49,
"text": "y_{R2}"
},
{
"math_id": 50,
"text": "y_1 + y_2"
},
{
"math_id": 51,
"text": "\\mathbf{I}_1 + \\mathbf{I}_2"
},
{
"math_id": 52,
"text": "I_3 = \\sqrt{R_x^2 + R_y^2},\\ \\ \\theta_3 = \\pi + \\tan^{-1}(R_y /R_x )"
},
{
"math_id": 53,
"text": "\\mathbf{I}_1, \\mathbf{I}_2"
},
{
"math_id": 54,
"text": " \\mathbf{I}^'_3"
},
{
"math_id": 55,
"text": "\\mathbf{I}_1 + \\mathbf{I}_2 = \\mathbf{0}"
},
{
"math_id": 56,
"text": "\\theta_1 = 0, \\ \\ \\theta_2 = \\pi "
},
{
"math_id": 57,
"text": "I_1 = I_2 = I"
},
{
"math_id": 58,
"text": " t_1 = 0, \\ \\ t_2 = \\pi/ \\omega_d"
},
{
"math_id": 59,
"text": "A_1 + A_2 = 1"
},
{
"math_id": 60,
"text": "A_1 = I_1, \\ \\ A_2 = I_2 / e^{\\zeta \\omega_n t_2}"
},
{
"math_id": 61,
"text": "I_1 + \\frac {I_2}{e^{\\zeta \\omega_n t_2 }} = I + \\frac {I}{K} = 1, \\quad K=e^{\\zeta \\pi / \\sqrt {1-\\zeta^2 }} "
},
{
"math_id": 62,
"text": "I = K/(K+1)"
},
{
"math_id": 63,
"text": "A_1 \\delta (t) + A_2 \\delta (t-t_2)"
},
{
"math_id": 64,
"text": "\n\\begin{bmatrix}\nt_i \\\\\nA_i \n\\end{bmatrix}\n= \\begin{bmatrix}\n0, & \\pi / \\omega_d \\\\\nK/(K+1), & 1/(K+1)\n\\end{bmatrix}\n"
},
{
"math_id": 65,
"text": " \\quad "
},
{
"math_id": 66,
"text": "\\pi"
},
{
"math_id": 67,
"text": "I_1 : I_2 : I_3 = 1:2:1"
},
{
"math_id": 68,
"text": "\\theta_1 = 0, \\ \\ \\theta_2 = \\pi , \\ \\ \\theta_3 = 2 \\pi "
},
{
"math_id": 69,
"text": " t_1 = 0, \\ t_2 = \\pi/ \\omega_d , \\ t_3 = 2 \\pi/ \\omega_d"
},
{
"math_id": 70,
"text": "I_1 = I_3 = I, \\ \\ I_2 = 2I"
},
{
"math_id": 71,
"text": "A_1 + A_2 + A_3 = 1"
},
{
"math_id": 72,
"text": "I_1 + \\frac {I_2}{e^{\\zeta \\omega_n t_3}} + \\frac {I_3}{e^{\\zeta \\omega_n t_3}} = I+ \\frac{2I}{K}+ \\frac{I}{K^2} = 1, \\ \\ K=e^{\\zeta \\pi / \\sqrt {1-\\zeta^2 }} "
},
{
"math_id": 73,
"text": " I=K^2 /(K+1)^2"
},
{
"math_id": 74,
"text": "A_1 \\delta (t) + A_2 \\delta (t-t_2) + A_3 \\delta (t- t_3 )"
},
{
"math_id": 75,
"text": "\n\\begin{bmatrix}\nt_i \\\\\nA_i \n\\end{bmatrix}\n= \\begin{bmatrix}\n0, & \\pi / \\omega_d, & 2 \\pi / \\omega_d\\\\\nK^2/(K+1)^2, & 2K/(K+1)^2, & 1/(K+1)^2\n\\end{bmatrix}\n"
},
{
"math_id": 76,
"text": "3 \\pi"
},
{
"math_id": 77,
"text": "I_1 : I_2 : I_3 : I_4 = 1:3:3:1"
},
{
"math_id": 78,
"text": "\\mathbf{I}_1 + \\mathbf{I}_2 + \\mathbf{I}_3 + \\mathbf{I}_4 = \\mathbf{0}"
},
{
"math_id": 79,
"text": "\\theta_1 = 0, \\ \\ \\theta_2 = \\pi , \\ \\ \\theta_3 = 2 \\pi , \\ \\ \\theta_4 = 3 \\pi"
},
{
"math_id": 80,
"text": "t_1 = 0, \\ \\ t_2 = \\pi / \\omega_d , \\ \\ t_3 = 2 \\pi / \\omega_d , \\ \\ t_4 = 3 \\pi / \\omega_d"
},
{
"math_id": 81,
"text": "I_1 = I_4 = I, \\ \\ I_2 = I_3 = 3I"
},
{
"math_id": 82,
"text": "A_1 + A_2 + A_3 + A_4 = 1"
},
{
"math_id": 83,
"text": "I_1 + \\frac {I_2}{e^{\\zeta \\omega_n t_2}} + \\frac {I_3}{e^{\\zeta \\omega_n t_3}} + \\frac {I_4}{e^{\\zeta \\omega_n t_4}} = I + \\frac {3I}{K} + \\frac {3I}{K^2} + \\frac {I}{K^3} = 1, \\ \\ \\ K=e^{\\zeta \\pi / \\sqrt {1-\\zeta^2 }} "
},
{
"math_id": 84,
"text": "I = K^3 / (K+1)^3"
},
{
"math_id": 85,
"text": "A_1 \\delta (t) + A_2 \\delta (t-t_2) + A_3 \\delta (t-t_3) + A_4 \\delta (t-t_4)"
},
{
"math_id": 86,
"text": "\n\\begin{bmatrix}\nt_i \\\\\nA_i \n\\end{bmatrix}\n= \\begin{bmatrix}\n0, & \\pi / \\omega_d, & 2 \\pi / \\omega_d, & 3 \\pi / \\omega_d \\\\\nK^3/(K+1)^3, & 3K^2/(K+1)^3, & 3K/(K+1)^3, & 1/(K+1)^3\n\\end{bmatrix}\n"
},
{
"math_id": 87,
"text": "4 \\pi"
},
{
"math_id": 88,
"text": "I_1 : I_2 : I_3 : I_4 : I_5 = 1:4:6:4:1"
},
{
"math_id": 89,
"text": "(i-1) \\pi"
},
{
"math_id": 90,
"text": "I_1 : I_2 : I_3 : \\cdots : I_{n+2} = \\tbinom{n+1}{0} : \\tbinom{n+1}{1} : \\tbinom{n+1}{2} : \\cdots : \\tbinom{n+1}{n+1}"
},
{
"math_id": 91,
"text": "\\tbinom{m}{k}"
},
{
"math_id": 92,
"text": "\\theta_1 = 0, \\ \\ \\theta_2 = \\frac{2 \\pi}{n-1}, \\cdots , \\theta_{n-1} = \\frac{(n-2)2 \\pi}{n-1}\\ \\ "
},
{
"math_id": 93,
"text": "I_2 = I_3 = \\cdots = I_{n-1} = I_1 + I_n , \\ \\ I_n = mI_1 \\ (m>0)"
},
{
"math_id": 94,
"text": "\\sum_{i=1}^{n} A_i = 1"
},
{
"math_id": 95,
"text": "n \\ge 2 "
},
{
"math_id": 96,
"text": "\n\\begin{bmatrix}\nt_i \\\\\nA_i \n\\end{bmatrix}\n= \\begin{bmatrix}\n0, & (2 \\pi /3) / \\omega_d, & (4 \\pi /3) / \\omega_d, & 2 \\pi / \\omega_d \\\\\nI/(1+m), & I/K^{2/3}, & I/K^{4/3}, & mI/ [(1+m)K^2]\n\\end{bmatrix}\n"
},
{
"math_id": 97,
"text": "I=\\frac{(1+m)K^2}{K^2 + (1+m)(K^{4/3} + K^{2/3})+m} , \\quad K=e^{\\zeta \\pi / \\sqrt{1-\\zeta^2}}"
},
{
"math_id": 98,
"text": "\n\\begin{bmatrix}\nt_i \\\\\nA_i \n\\end{bmatrix}\n= \\begin{bmatrix}\n0, & 0.5 \\pi / \\omega_d, & \\pi / \\omega_d, & 1.5 \\pi / \\omega_d , & 2 \\pi / \\omega_d\\\\\nI/(1+m), & I/K^{1/2}, & I/K, & I/K^{3/2}, & mI/ [(1+m)K^2]\n\\end{bmatrix}\n"
},
{
"math_id": 99,
"text": "I=\\frac{(1+m)K^2}{K^2 + (1+m)(K^{3/2} + K + K^{1/2})+m} ,\\quad K=e^{\\zeta \\pi / \\sqrt{1-\\zeta^2}}"
},
{
"math_id": 100,
"text": "n \\ge 6"
},
{
"math_id": 101,
"text": "m=1"
},
{
"math_id": 102,
"text": "I_1 = I (>0), \\ I_2 = -I,\\ I_3 = I"
},
{
"math_id": 103,
"text": "\\theta_1 = 0, \\ \\theta_2 = \\pi /3,\\ \\theta_3 = 2 \\pi /3"
},
{
"math_id": 104,
"text": "t_2, t_3"
},
{
"math_id": 105,
"text": "t_2 = (\\pi /3)/ \\omega_d, \\ t_3 = (2 \\pi /3)/ \\omega_d"
},
{
"math_id": 106,
"text": "A_1 = I, A_2 = -I/e^{\\zeta \\omega_n t_2}, \\ \\ A_3 = I/e^{\\zeta \\omega_n t_3}"
},
{
"math_id": 107,
"text": "\n\\begin{bmatrix}\nt_i \\\\\nA_i \n\\end{bmatrix}\n= \\begin{bmatrix}\n0, & ( \\pi /3) / \\omega_d, & (2 \\pi /3)/ \\omega_d \\\\\nI, & -I/K^{1/3}, & I/K^{2/3}\n\\end{bmatrix}\n"
},
{
"math_id": 108,
"text": "I = K/(K-K^{2/3} + K^{1/3}), \\ \\ \\ K=e^{\\zeta \\pi / \\sqrt{1- \\zeta^2}}"
}
] |
https://en.wikipedia.org/wiki?curid=62556833
|
6257581
|
Magnesium/Teflon/Viton
|
Pyrolant commonly used in decoy flares
Magnesium/Teflon/Viton (MTV) is a pyrolant. Teflon and Viton are trademarks of DuPont for polytetrafluoroethylene, (C2F4)"n", and fluoroelastomer, (CH2CF2)"n"(CF(CF3)CF2)"n".
History.
Thermites based on magnesium/Teflon/Viton, aka MTV-compositions, have been in use since the 1950s as payloads in infrared decoy flare applications. See also Countermeasures. Derived from the acronym MTV is the expression "MTV-Flare" for pyrotechnic infrared decoy flares.
Chemistry.
Whereas in conventional visual pyrotechnic illuminants sodium nitrate, NaNO3, is used as an oxidizer, in MTV compositions the polytetrafluoroethylene, (C2F4)"n", acts as fluorine source. The very high reaction enthalpy, formula_0, upon combustion of magnesium with PTFE is based on the formation of magnesium fluoride, having a very high negative enthalpy of formation ( formula_1 = −1124 kJ mol−1):
2"n" Mg + (C2F4)"n" → 2"n" MgF2"(s)" + 2"n" C, formula_2 = −1438 kJ mol−1 (1)
As much carbon and heat are released upon combustion of MTV the combustion flame can be described as a grey body of high emissivity.
Depending on stoichiometry, MTV displays varying burn rates and yields different reaction products. With constant Viton content the burn rate increases exponentially with increasing magnesium content. Nevertheless, the burn rate of MTV, as is the case with many metallized pyrotechnic compositions, is strongly dependent on the specific surface area of the metal fuel, that is, on particle morphology and dimensions. Generally, magnesium powder having a high specific surface area will exhibit a higher burn rate than powder having a smaller specific surface area. The main reaction products for MTV at Mg contents between 30 and 65 wt% are magnesium fluoride, soot and vaporized magnesium.
For aerial decoy flares magnesium rich compositions are used with Mg contents between 55 and 65 wt%. At these stoichiometries only a part of the applied Mg reacts with the PTFE. The surplus Mg is vaporised and reacts with the atmospheric oxygen; likewise the thermally excited soot reacts with the atmospheric oxygen:
"m" Mg + (C2F4)"n" → 2"n" MgF2"(s)" + ("m" − 2"n") Mg"(g)" + 2"n" C, "m" ≥ 2"n" (2)
("m" − 2"n") Mg"(g)" + 2"n" C + ((1/2)"m" + "n") O2"(g)" → ("m" − 2"n") MgO"(s)" + 2"n" CO2"(g)" (3)
Safety.
Pyrotechnic compositions based on magnesium/polytetrafluoroethylene with stoichiometries from 25 wt% to 90 wt% magnesium are, according to German explosive legislation, the Koenen test (steel sleeve test), and the BAM impact test, explosive substances. Due to their sensitivity and their reaction behaviour these substances are categorized as group 1.1.2. MTV compositions explode under minimal confinement (including self-confinement) even in relatively low amounts. MTV compositions are sensitive toward thermal ignition.
In addition MTV compositions in loose and pressed state are extraordinarily sensitive to friction and electrostatic discharges (ESD). Hence, suitable measures have to be taken to avoid ESD while processing and handling of MTV.
Aerial decoy flare applications.
Since aircraft and helicopters could (and still can) counter surface-to-air and air-to-air missiles with the substance, MTV was a classified issue until the mid-1980s. It was not until 1997 that the U.S. government released a formerly classified invention, U.S. patent 5679921 (filing year 1957), that originally described the properties and applications of MTV.
Although missile development has improved seeker countermeasures against MTV flares there are still numerous missile systems fielded worldwide based on 1st generation technology. Hence MTV flares are still not obsolete in fighting unknown threats. Together with advanced spectral flares (see countermeasures) they are part of the so-called "cocktail solution".
Literature.
E.-C. Koch, "Metal-Fluorocarbon Based Energetic Materials", Wiley-VCH, 2012, 360 pages
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\Delta_\\mathrm{R}H "
},
{
"math_id": 1,
"text": " \\Delta_\\mathrm{f}H^o"
},
{
"math_id": 2,
"text": "\\Delta_\\mathrm{R}H"
}
] |
https://en.wikipedia.org/wiki?curid=6257581
|
6258022
|
359 (number)
|
Natural number
359 (three hundred [and] fifty-nine) is the natural number following 358 and preceding 360. 359 is the 72nd prime number. It is a Sophie Germain prime, since formula_0 is also prime.
|
[
{
"math_id": 0,
"text": "2(359)+1=719"
}
] |
https://en.wikipedia.org/wiki?curid=6258022
|
62585948
|
Reconfiguration
|
In discrete mathematics and theoretical computer science, reconfiguration problems are computational problems involving reachability or connectivity of state spaces.
Types of problems.
Here, a state space is a discrete set of configurations of a system or solutions of a combinatorial problem, called states, together with a set of allowed moves linking one state to another. Reconfiguration problems may ask:
Examples.
Examples of problems studied in reconfiguration include:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n\\times n\\times n"
},
{
"math_id": 1,
"text": "\\Theta(n^2/\\log n)"
}
] |
https://en.wikipedia.org/wiki?curid=62585948
|
62590570
|
Claw finding problem
|
The claw finding problem is a classical problem in complexity theory, with several applications in cryptography. In short, given two functions "f", "g", viewed as oracles, the problem is to find "x" and "y" such that "f"("x") = "g"("y"). The pair ("x", "y") is then called a "claw". Some problems, especially in cryptography, are best solved when viewed as a claw finding problem, hence any algorithmic improvement to solving the claw finding problem provides a better attack on cryptographic primitives such as hash functions.
Definition.
Let formula_0 be finite sets, and formula_1, formula_2 two functions. A pair formula_3 is called a "claw" if formula_4. The claw finding problem is defined as to find such a claw, given that one exists.
If we view formula_5 as random functions, we expect a claw to exist iff formula_6. More accurately, there are exactly formula_7 pairs of the form formula_8 with formula_9; the probability that such a pair is a claw is formula_10. So if formula_6, the expected number of claws is at least 1.
Algorithms.
If classical computers are used, the best algorithm is similar to a Meet-in-the-middle attack, first described by Diffie and Hellman. The algorithm works as follows: assume formula_11. For every formula_12, save the pair formula_13 in a hash table indexed by formula_14. Then, for every formula_15, look up the table at formula_16. If such an index exists, we found a claw. This approach takes time formula_17 and memory formula_18.
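A minimal Python sketch of this meet-in-the-middle approach; the two functions below are toy stand-ins, chosen only so that a claw exists.

```python
def find_claw(f, g, A, B):
    """Return (x, y) with f(x) == g(y), or None if no claw exists.
    Runs in O(|A| + |B|) time and O(|A|) memory, as described above."""
    table = {}
    for x in A:                    # index every f-value by one of its preimages
        table.setdefault(f(x), x)
    for y in B:                    # probe the table with every g-value
        if g(y) in table:
            return table[g(y)], y
    return None

A, B = range(100), range(100)
f = lambda x: (7 * x + 3) % 251    # toy functions, for illustration only
g = lambda y: (y * y) % 251
print(find_claw(f, g, A, B))       # (36, 2): f(36) = g(2) = 4
```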
If quantum computers are used, Seiichiro Tani showed that a claw can be found in complexity
formula_19 if formula_20 and
formula_21 if formula_22.
Shengyu Zhang showed that asymptotically these algorithms are the most efficient possible.
Applications.
As noted, most applications of the claw finding problem appear in cryptography. Examples include:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A, B, C"
},
{
"math_id": 1,
"text": "f: A \\to C"
},
{
"math_id": 2,
"text": "g: B \\to C"
},
{
"math_id": 3,
"text": "(x,y) \\in A \\times B"
},
{
"math_id": 4,
"text": "f(x) = g(y)"
},
{
"math_id": 5,
"text": "f, g"
},
{
"math_id": 6,
"text": "|A| \\cdot |B| \\geq |C|"
},
{
"math_id": 7,
"text": "|A| \\cdot |B|"
},
{
"math_id": 8,
"text": "(x,y)"
},
{
"math_id": 9,
"text": "x \\in A, y \\in B"
},
{
"math_id": 10,
"text": "1/|C|"
},
{
"math_id": 11,
"text": "|A| \\leq |B|"
},
{
"math_id": 12,
"text": "x \\in A"
},
{
"math_id": 13,
"text": "(f(x),x)"
},
{
"math_id": 14,
"text": "f(x)"
},
{
"math_id": 15,
"text": "y \\in B"
},
{
"math_id": 16,
"text": "g(y)"
},
{
"math_id": 17,
"text": "O(|A| + |B|)"
},
{
"math_id": 18,
"text": "O(|A|)"
},
{
"math_id": 19,
"text": "\\sqrt[3]{|A|\\cdot|B|}"
},
{
"math_id": 20,
"text": "|A|\\le|B|<|A|^2"
},
{
"math_id": 21,
"text": "\\sqrt{|B|}"
},
{
"math_id": 22,
"text": "|B|\\ge|A|^2"
}
] |
https://en.wikipedia.org/wiki?curid=62590570
|
62591526
|
Hamiltonian complexity
|
Hamiltonian complexity or quantum Hamiltonian complexity is a topic which deals with problems in quantum complexity theory and condensed matter physics. It mostly studies constraint satisfaction problems related to ground states of local Hamiltonians; that is, Hermitian matrices that act locally on a system of interest. The constraint satisfaction problems in quantum Hamiltonian complexity have led to the quantum version of the Cook–Levin theorem. Quantum Hamiltonian complexity has helped physicists understand the difficulty of simulating physical systems.
Local Hamiltonian problem.
Given a Hermitian matrix formula_0, let formula_1 denote the ground state energy of the Hamiltonian formula_0, and let formula_2 and formula_3 be non-negative real numbers with formula_4. If formula_5, output Yes. If formula_6, output No. The "k-local Hamiltonian problem" is similar except the Hamiltonians have formula_7-local interactions. This problem has been shown to be QMA-complete for formula_8.
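For intuition, the decision can be carried out by brute force on very small instances. The NumPy sketch below builds an assumed example of a 2-local Hamiltonian (a four-qubit transverse-field Ising chain), computes the ground state energy exactly, and answers the promise problem; the QMA-completeness result says precisely that this exhaustive approach does not scale.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 4                                     # qubits; H is a 2^n x 2^n matrix
H = np.zeros((2 ** n, 2 ** n))
for i in range(n - 1):                    # 2-local ZZ couplings
    H -= kron_chain([Z if j in (i, i + 1) else I2 for j in range(n)])
for i in range(n):                        # 1-local transverse field
    H -= 0.5 * kron_chain([X if j == i else I2 for j in range(n)])

lam0 = np.linalg.eigvalsh(H).min()        # ground-state energy
a, b = -3.0, -2.0                         # promise thresholds, b >= a + 1
# The all-up product state already has energy -3, so the answer here is "Yes".
print(lam0, "Yes" if lam0 <= a else "No" if lam0 >= b else "outside the promise")
```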
Area law.
The area law explains the structure of entanglement present in ground states of physically relevant systems. It states that the entanglement entropy of a reduced density matrix of a region of a quantum system in its ground state scales with the size of the region's boundary rather than with its volume.
The area law has been useful in finding efficient ways to simulate entangled quantum systems.
Quantum analog of the PCP theorem.
The classical PCP theorem states that it is hard even to approximate the minimum number of violated constraints of a classical constraint satisfaction problem, which corresponds to approximating the ground-state energy of a classical system. The quantum analog of the PCP theorem concerns the analogous approximation problem for local Hamiltonians. Proving the quantum analog of the PCP theorem is an open problem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "\\lambda_0"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "b"
},
{
"math_id": 4,
"text": "b \\geq a + 1"
},
{
"math_id": 5,
"text": "\\lambda_0 \\leq a"
},
{
"math_id": 6,
"text": "\\lambda_0 \\geq b "
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "k \\geq 2 "
}
] |
https://en.wikipedia.org/wiki?curid=62591526
|
62592127
|
Fraïssé limit
|
Method in mathematical logic
In mathematical logic, specifically in the discipline of model theory, the Fraïssé limit (also called the Fraïssé construction or Fraïssé amalgamation) is a method used to construct (infinite) mathematical structures from their (finite) substructures. It is a special example of the more general concept of a direct limit in a category. The technique was developed in the 1950s by its namesake, French logician Roland Fraïssé.
The main point of Fraïssé's construction is to show how one can approximate a (countable) structure by its finitely generated substructures. Given a class formula_0 of finite relational structures, if formula_0 satisfies certain properties (described below), then there exists a unique countable structure formula_1, called the Fraïssé limit of formula_0, which contains all the elements of formula_0 as substructures.
The general study of Fraïssé limits and related notions is sometimes called Fraïssé theory. This field has seen wide applications to other parts of mathematics, including topological dynamics, functional analysis, and Ramsey theory.
Finitely generated substructures and age.
Fix a language formula_2. By an "formula_2-structure", we mean a logical structure having signature formula_2.
Given an formula_2-structure formula_3 with domain formula_4, and a subset formula_5, we use formula_6 to denote the least substructure of formula_3 whose domain contains formula_7 (i.e. the closure of formula_7 under all the function and constant symbols in formula_2).
A substructure formula_8 of formula_3 is then said to be "finitely generated" if formula_9 for some "finite" subset formula_5. The "age of formula_3," denoted formula_10, is the class of all finitely generated substructures of "formula_3."
One can prove that any class formula_0 that is the age of some structure satisfies the following two conditions:
Hereditary property (HP)
If formula_11 and formula_12 is a finitely generated substructure of formula_7, then formula_12 is isomorphic to some structure in formula_0.
Joint embedding property (JEP)
If formula_13, then there exists formula_14 such that both formula_7 and formula_12 are embeddable in formula_15.
Fraïssé's theorem.
As noted above, for any formula_2-structure formula_3, the age formula_10 satisfies the HP and the JEP. Fraïssé proved a converse of sorts: if formula_0 is any non-empty, countable set of finitely generated formula_2-structures that has the above two properties, then it is the age of some countable structure.
Furthermore, suppose that formula_0 happens to satisfy the following additional properties.
Amalgamation property (AP)
For any structures formula_16, such that there exist embeddings "formula_17", "formula_18", there exists a structure formula_19 and embeddings "formula_20", "formula_21" such that formula_22 (i.e. they coincide on the image of A in both structures).
Essential countability (EC)
Up to isomorphism, there are countably many structures in formula_0.
In that case, we say that K is a "Fraïssé class", and there is a unique (up to isomorphism), countable, homogeneous structure formula_1 whose age is exactly formula_0. This structure is called the "Fraïssé limit" of formula_0.
Here, "homogeneous" means that any isomorphism "formula_23" between two finitely generated substructures formula_13 can be extended to an automorphism of the whole structure.
Examples.
The archetypal example is the class formula_24 of all finite linear orderings, for which the Fraïssé limit is a dense linear order without endpoints (i.e. with neither a smallest nor a largest element). By Cantor's isomorphism theorem, up to isomorphism, this is always equivalent to the structure formula_25, i.e. the rational numbers with the usual ordering.
As a non-example, note that neither formula_26 nor formula_27 is the Fraïssé limit of formula_24. This is because, although both of them are countable and have formula_24 as their age, neither one is homogeneous. To see this, consider the substructures formula_28 and formula_29, and the isomorphism formula_30 between them. This cannot be extended to an automorphism of formula_26 or formula_27, since there is no element to which we could map formula_31 while still preserving the order.
Another example is the class formula_32 of all finite graphs, whose Fraïssé limit is the Rado graph.
For any prime "p", the Fraïssé limit of the class of finite fields of characteristic "p" is the algebraic closure formula_33.
The Fraïssé limit of the class of finite abelian "p"-groups is formula_34 (the direct sum of countably many copies of the Prüfer group). The Fraïssé limit of the class of all finite abelian groups is formula_35.
The Fraïssé limit of the class of all finite groups is Hall's universal group.
The Fraïssé limit of the class of nontrivial finite Boolean algebras is the unique countable atomless Boolean algebra.
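One standard way to recognise the Rado graph from the list above is its extension property: for any two disjoint finite sets of vertices there is a further vertex adjacent to every vertex of the first set and to none of the second; homogeneity then follows by a back-and-forth argument. The following Python sketch (an illustration added here; the graph size, edge probability and function names are arbitrary choices) samples a finite random graph, which satisfies the property for small sets with high probability, and checks it by brute force.
import itertools
import random
def random_graph(n, p=0.5, seed=0):
    """Adjacency sets of an Erdos-Renyi random graph G(n, p); as n grows this gives a finite approximation of the Rado graph."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj
def has_extension_property(adj, k):
    """For every pair of disjoint vertex sets U, V with |U| + |V| <= k, look for a witness outside U and V adjacent to all of U and to none of V."""
    vertices = list(adj)
    for size in range(1, k + 1):
        for S in itertools.combinations(vertices, size):
            S_set = set(S)
            for r in range(size + 1):
                for U in itertools.combinations(S, r):
                    U_set, V_set = set(U), S_set - set(U)
                    if not any(U_set <= adj[z] and not (adj[z] & V_set)
                               for z in vertices if z not in S_set):
                        return False
    return True
adj = random_graph(128)
print(has_extension_property(adj, k=2))  # True with high probability for this size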
ω-categoricity and quantifier elimination.
The class formula_0 under consideration is called "uniformly locally finite" if for every formula_36, there is a uniform bound on the size of formula_36-generated (substructures of) structures in formula_0. The Fraïssé limit of formula_0 is ω-categorical if and only if formula_0 is uniformly locally finite. If formula_0 is uniformly locally finite, then the Fraïssé limit of formula_0 has quantifier elimination.
If the language of formula_0 is finite, and consists only of relations and constants, then formula_0 is uniformly locally finite automatically.
For example, the class of finite-dimensional vector spaces over a fixed field is always a Fraïssé class, but it is uniformly locally finite only if the field is finite.
The class of finite Boolean algebras is uniformly locally finite, whereas the classes of finite fields of a given characteristic, or finite groups or abelian groups, are not, as 1-generated structures in these classes may have arbitrarily large finite size.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{K}"
},
{
"math_id": 1,
"text": "\\operatorname{Flim}(\\mathbf{K})"
},
{
"math_id": 2,
"text": "\\mathcal{L}"
},
{
"math_id": 3,
"text": "\\mathcal{M}"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "A \\subseteq M"
},
{
"math_id": 6,
"text": "\\langle A \\rangle^\\mathcal{M}"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "\\mathcal{N}"
},
{
"math_id": 9,
"text": "\\mathcal{N} = \\langle A \\rangle^\\mathcal{M}"
},
{
"math_id": 10,
"text": "\\operatorname{Age}(\\mathcal{M})"
},
{
"math_id": 11,
"text": "A \\in \\mathbf{K}"
},
{
"math_id": 12,
"text": "B"
},
{
"math_id": 13,
"text": "A, B \\in \\mathbf{K}"
},
{
"math_id": 14,
"text": "C \\in \\mathbf{K}"
},
{
"math_id": 15,
"text": "C"
},
{
"math_id": 16,
"text": "A, B, C \\in \\mathbf{K}"
},
{
"math_id": 17,
"text": "f: A \\to B"
},
{
"math_id": 18,
"text": "g: A \\to C"
},
{
"math_id": 19,
"text": "D \\in \\mathbf{K}"
},
{
"math_id": 20,
"text": "f': B \\to D"
},
{
"math_id": 21,
"text": "g': C \\to D"
},
{
"math_id": 22,
"text": "f' \\circ f = g' \\circ g"
},
{
"math_id": 23,
"text": "\\pi: A \\to B"
},
{
"math_id": 24,
"text": "\\mathbf{FCh}"
},
{
"math_id": 25,
"text": "\\langle \\mathbb{Q}, < \\rangle"
},
{
"math_id": 26,
"text": "\\langle \\mathbb{N}, < \\rangle"
},
{
"math_id": 27,
"text": "\\langle \\mathbb{Z}, < \\rangle"
},
{
"math_id": 28,
"text": "\\big\\langle \\{ 1, 3 \\}, < \\big\\rangle"
},
{
"math_id": 29,
"text": "\\big\\langle \\{ 5, 6 \\}, < \\big\\rangle"
},
{
"math_id": 30,
"text": "1 \\mapsto 5,\\ 3 \\mapsto 6"
},
{
"math_id": 31,
"text": "2"
},
{
"math_id": 32,
"text": "\\mathbf{Gph}"
},
{
"math_id": 33,
"text": "\\overline{\\mathbb F}_p"
},
{
"math_id": 34,
"text": "\\mathbb Z(p^\\infty)^{(\\omega)}"
},
{
"math_id": 35,
"text": "\\bigoplus_{p\\text{ prime}}\\mathbb Z(p^\\infty)^{(\\omega)}\\simeq(\\mathbb Q/\\mathbb Z)^{(\\omega)}"
},
{
"math_id": 36,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=62592127
|
62595897
|
Complex Lie algebra
|
In mathematics, a complex Lie algebra is a Lie algebra over the complex numbers.
Given a complex Lie algebra formula_0, its conjugate formula_1 is a complex Lie algebra with the same underlying real vector space but with formula_2 acting as formula_3 instead. As a real Lie algebra, a complex Lie algebra formula_0 is trivially isomorphic to its conjugate. A complex Lie algebra is isomorphic to its conjugate if and only if it admits a real form (and is said to be defined over the real numbers).
Real form.
Given a complex Lie algebra formula_0, a real Lie algebra formula_4 is said to be a real form of formula_0 if the complexification formula_5 is isomorphic to formula_0.
A real form formula_4 is abelian (resp. nilpotent, solvable, semisimple) if and only if formula_0 is abelian (resp. nilpotent, solvable, semisimple). On the other hand, a real form formula_4 is simple if and only if either formula_0 is simple or formula_0 is of the form formula_6 where formula_7 are simple and are the conjugates of each other.
The existence of a real form in a complex Lie algebra formula_8 implies that formula_8 is isomorphic to its conjugate; indeed, if formula_9, then let formula_10 denote the formula_11-linear isomorphism induced by complex conjugation and then
formula_12,
which is to say formula_13 is in fact a formula_14-linear isomorphism.
Conversely, suppose there is a formula_14-linear isomorphism formula_15; without loss of generality, we can assume it is the identity function on the underlying real vector space. Then define formula_16, which is clearly a real Lie algebra. Each element formula_17 in formula_0 can be written uniquely as formula_18. Here, formula_19 and similarly formula_13 fixes formula_20. Hence, formula_21; i.e., formula_4 is a real form.
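The construction can be checked numerically in a concrete case. The following NumPy sketch (an illustration added here; taking formula_0 to be the traceless complex 2×2 matrices and formula_13 to be entrywise complex conjugation are choices made for the example) verifies that a random traceless matrix splits into a part fixed by formula_13 plus i times another fixed part, as in the decomposition above; the fixed points form the real form sl(2, R).
import numpy as np
rng = np.random.default_rng(0)
def random_sl2C():
    """A random traceless complex 2x2 matrix, i.e. an element of sl(2, C)."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return z - np.trace(z) / 2 * np.eye(2)
def tau(z):
    """Entrywise complex conjugation: an antilinear involution whose fixed points form sl(2, R)."""
    return np.conj(z)
z = random_sl2C()
g0_part = (z + tau(z)) / 2                   # lies in the real form (fixed by tau)
ig0_part = 1j * (1j * tau(z) - 1j * z) / 2   # equals i times an element of the real form
assert np.allclose(tau(g0_part), g0_part)                  # fixed by tau
assert np.allclose(tau(-1j * ig0_part), -1j * ig0_part)    # -i * ig0_part is fixed by tau
assert np.allclose(g0_part + ig0_part, z)                  # the decomposition reconstructs z
print("sl(2, C) decomposes as sl(2, R) + i*sl(2, R) for this sample")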
Complex Lie algebra of a complex Lie group.
Let formula_0 be a semisimple complex Lie algebra that is the Lie algebra of a complex Lie group formula_22. Let formula_23 be a Cartan subalgebra of formula_0 and formula_24 the Lie subgroup corresponding to formula_23; the conjugates of formula_24 are called Cartan subgroups.
Suppose there is the decomposition formula_25 given by a choice of positive roots. Then the exponential map defines an isomorphism from formula_26 to a closed subgroup formula_27. The Lie subgroup formula_28 corresponding to the Borel subalgebra formula_29 is closed and is the semidirect product of formula_24 and formula_30; the conjugates of formula_31 are called Borel subgroups.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathfrak{g}"
},
{
"math_id": 1,
"text": "\\overline{\\mathfrak g}"
},
{
"math_id": 2,
"text": "i = \\sqrt{-1}"
},
{
"math_id": 3,
"text": "-i"
},
{
"math_id": 4,
"text": "\\mathfrak{g}_0"
},
{
"math_id": 5,
"text": "\\mathfrak{g}_0 \\otimes_{\\mathbb{R}}\\mathbb{C}"
},
{
"math_id": 6,
"text": "\\mathfrak{s} \\times \\overline{\\mathfrak{s}}"
},
{
"math_id": 7,
"text": "\\mathfrak{s}, \\overline{\\mathfrak{s}}"
},
{
"math_id": 8,
"text": "\\mathfrak g"
},
{
"math_id": 9,
"text": "\\mathfrak{g} = \\mathfrak{g}_0 \\otimes_{\\mathbb{R}} \\mathbb{C} = \\mathfrak{g}_0 \\oplus i\\mathfrak{g}_0"
},
{
"math_id": 10,
"text": "\\tau : \\mathfrak{g} \\to \\overline{\\mathfrak{g}}"
},
{
"math_id": 11,
"text": "\\mathbb{R}"
},
{
"math_id": 12,
"text": "\\tau(i(x + iy)) = \\tau(ix - y) = -ix- y = -i\\tau(x + iy)"
},
{
"math_id": 13,
"text": "\\tau"
},
{
"math_id": 14,
"text": "\\mathbb{C}"
},
{
"math_id": 15,
"text": "\\tau: \\mathfrak{g} \\overset{\\sim}\\to \\overline{\\mathfrak{g}}"
},
{
"math_id": 16,
"text": "\\mathfrak{g}_0 = \\{ z \\in \\mathfrak{g} | \\tau(z) = z \\}"
},
{
"math_id": 17,
"text": "z"
},
{
"math_id": 18,
"text": "z = 2^{-1}(z + \\tau(z)) + i 2^{-1}(i\\tau(z) - iz)"
},
{
"math_id": 19,
"text": "\\tau(i\\tau(z) - iz) = -iz + i\\tau(z)"
},
{
"math_id": 20,
"text": "z + \\tau(z)"
},
{
"math_id": 21,
"text": "\\mathfrak{g} = \\mathfrak{g}_0 \\oplus i \\mathfrak{g}_0"
},
{
"math_id": 22,
"text": "G"
},
{
"math_id": 23,
"text": "\\mathfrak{h}"
},
{
"math_id": 24,
"text": "H"
},
{
"math_id": 25,
"text": "\\mathfrak{g} = \\mathfrak{n}^- \\oplus \\mathfrak{h} \\oplus \\mathfrak{n}^+"
},
{
"math_id": 26,
"text": "\\mathfrak{n}^+"
},
{
"math_id": 27,
"text": "U \\subset G"
},
{
"math_id": 28,
"text": "B \\subset G"
},
{
"math_id": 29,
"text": "\\mathfrak{b} = \\mathfrak{h} \\oplus \\mathfrak{n}^+"
},
{
"math_id": 30,
"text": "U"
},
{
"math_id": 31,
"text": "B"
}
] |
https://en.wikipedia.org/wiki?curid=62595897
|
626035
|
Bed load
|
Particles in a flowing fluid that are transported along the bed
The term bed load or bedload describes particles in a flowing fluid (usually water) that are transported along the stream bed. Bed load is complementary to suspended load and wash load.
Bed load moves by rolling, sliding, and/or saltating (hopping).
Generally, bed load downstream will be smaller and more rounded than bed load upstream (a process known as downstream fining). This is due in part to attrition and abrasion which result from the stones colliding with each other and against the river channel, thus removing the rough texture (rounding) and reducing the size of the particles. However, selective transport of sediments also plays a role in relation to downstream fining: smaller-than-average particles are more easily entrained than larger-than-average particles, since the shear stress required to entrain a grain is linearly proportional to the diameter of the grain. Nevertheless, the degree of size selectivity is restricted by the hiding effect described by Parker and Klingeman (1982), wherein larger particles protrude from the bed whereas small particles are shielded and hidden by larger particles, with the result that nearly all grain sizes become entrained at nearly the same shear stress.
Experimental observations suggest that a uniform free-surface flow over a cohesionless plane bed is unable to entrain sediments below a critical value formula_0 of the ratio between measures of hydrodynamic (destabilizing) and gravitational (stabilizing) forces acting on sediment particles, the so-called Shields stress formula_1. This quantity reads as:
formula_2,
where formula_3 is the friction velocity, s is the relative particle density, d is an effective particle diameter which is entrained by the flow, and g is gravity. The Meyer-Peter-Müller formula for the bed load capacity under equilibrium and uniform flow conditions states that the magnitude of the bed load flux formula_4 per unit width is proportional to the excess of the shear stress with respect to a critical one, formula_5. Specifically, formula_4 is a monotonically increasing nonlinear function of the excess Shields stress formula_6, typically expressed in the form of a power law.
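As a numerical illustration (a Python sketch added here; the critical value 0.047 and the coefficient 8 are the classic Meyer-Peter and Müller calibration, stated as assumptions, as is the example grain), the Shields stress and the resulting bed load flux per unit width can be evaluated as follows.
import math
def shields_stress(u_star, s, d, g=9.81):
    """Shields stress tau_* = u_*^2 / ((s - 1) g d)."""
    return u_star**2 / ((s - 1.0) * g * d)
def mpm_bedload_flux(tau_star, s, d, g=9.81, tau_star_c=0.047, coeff=8.0):
    """Bed load flux per unit width from the power law coeff * (tau_* - tau_*c)^(3/2),
    made dimensional with sqrt((s - 1) g d^3); coeff = 8 and tau_star_c = 0.047 are the
    classic Meyer-Peter and Mueller calibration values."""
    excess = max(tau_star - tau_star_c, 0.0)   # no transport below the threshold
    return coeff * excess**1.5 * math.sqrt((s - 1.0) * g * d**3)
# Example: quartz grains (s = 2.65) of 1 mm diameter and a friction velocity of 5 cm/s
tau_star = shields_stress(u_star=0.05, s=2.65, d=0.001)
print(tau_star, mpm_bedload_flux(tau_star, s=2.65, d=0.001))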
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tau_{*c}"
},
{
"math_id": 1,
"text": "\\tau_*"
},
{
"math_id": 2,
"text": "\\tau_*=\\frac{u^2_*}{(s-1)gd}"
},
{
"math_id": 3,
"text": "u_{*}"
},
{
"math_id": 4,
"text": "q_s"
},
{
"math_id": 5,
"text": "\\tau_{*c} "
},
{
"math_id": 6,
"text": "\\phi(\\tau_{*} -\\tau_{*c} )"
}
] |
https://en.wikipedia.org/wiki?curid=626035
|
62606505
|
KTHNY theory
|
In statistical mechanics, the KTHNY theory describes the process of melting of crystals in two dimensions (2D). The name is derived from the initials of the surnames of John Michael Kosterlitz, David J. Thouless, Bertrand Halperin, David R. Nelson, and A. Peter Young, who developed the theory in the 1970s. It is, besides the Ising model in 2D and the XY model in 2D, one of the few theories that can be solved analytically and that predict a phase transition at a temperature formula_0.
Main idea.
Melting of 2D crystals is mediated by the dissociation of topological defects, which destroy the order of the crystal. In 2016, Michael Kosterlitz and David Thouless were awarded the Nobel Prize in Physics for their idea of how thermally excited pairs of virtual dislocations induce a softening (described by renormalization group theory) of the crystal during heating. The shear elasticity disappears simultaneously with the dissociation of the dislocations, indicating a fluid phase. Based on this work, David Nelson and Bertrand Halperin showed that the resulting hexatic phase is not yet an isotropic fluid. Starting from a hexagonal crystal (which is the densest packed structure in 2D), the hexatic phase has a sixfold director field, similar to liquid crystals. Orientational order only disappears due to the dissociation of a second class of topological defects, named disclinations. Peter Young calculated the critical exponent of the diverging correlation length at the transition between the crystalline and hexatic phases.
KTHNY theory predicts two continuous phase transitions, thus latent heat and phase coexistence are ruled out. The thermodynamic phases can be distinguished based on discrete versus continuous translational and orientational order. One of the transitions separates a solid phase with quasi-long-range translational order and perfect long-range orientational order from the hexatic phase. The hexatic phase shows short-range translational order and quasi-long-range orientational order. The second phase transition separates the hexatic phase from the isotropic fluid, where both translational and orientational order are short-range. The system is dominated by critical fluctuations, since for continuous transitions the difference in energy between the thermodynamic phases disappears in the vicinity of the transition. This implies that ordered and disordered regions fluctuate strongly in space and time. The size of those regions grows strongly near the transitions and diverges at the transition itself. At this point, the pattern of symmetry-broken versus symmetric domains is fractal. Fractals are characterized by scaling invariance – they appear similar on an arbitrary scale or under arbitrary zooming in (this is true on any scale larger than the atomic distance). The scale invariance is the basis for using renormalization group theory to describe the phase transitions. Both transitions are accompanied by spontaneous symmetry breaking. Unlike melting in three dimensions, translational and orientational symmetry breaking need not appear simultaneously in 2D, since two different types of topological defects destroy the two different types of order.
Background.
Michael Kosterlitz and David Thouless tried to resolve a contradiction about 2D crystals: on the one hand, the Mermin-Wagner theorem claims that symmetry breaking of a continuous order parameter cannot exist in two dimensions. This implies that perfect long-range positional order is ruled out in 2D crystals. On the other hand, very early computer simulations by Berni Alder and Thomas E. Wainwright indicated crystallization in 2D. The KTHNY theory shows implicitly that periodicity is not a necessary criterion for a solid (this is already indicated by the existence of amorphous solids like glasses). Following M. Kosterlitz, a finite shear elasticity defines a 2D solid, including quasicrystals in this description.
Structure factor in 2D.
All three thermodynamic phases and their corresponding symmetries can be visualized using the structure factor: formula_1. The double sum runs over all positions of particle pairs i and j, and the brackets denote an average over various configurations. The isotropic phase is characterized by concentric rings at formula_2, where formula_3 is the average particle distance calculated from the 2D particle density formula_4. The (close-packed) crystalline phase is characterized by sixfold symmetry based on the orientational order. Unlike in 3D, where the peaks are arbitrarily sharp (formula_5-peaks), the 2D peaks have a finite width described by a Lorentzian curve. This is due to the fact that the translational order is only quasi-long-range, as predicted by the Mermin-Wagner theorem. The hexatic phase is characterized by six segments, which reflect the quasi-long-range orientational order. The structure factor of Figure 1 is calculated from the positions of a colloidal monolayer (crosses at high intensity are artefacts from the Fourier transformation due to the finite (rectangular) field of view of the ensemble).
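For a finite configuration the structure factor can be evaluated directly from the particle positions, since the double sum in formula_1 equals the squared modulus of a single sum over particles. The following NumPy sketch (an illustration added here; the hexagonal test lattice and the grid of wave vectors are arbitrary choices) computes S(q) on a grid.
import numpy as np
def structure_factor(positions, qx, qy):
    """S(q) = |sum_j exp(-i q . r_j)|^2 / N on a grid of wave vectors; positions is an (N, 2) array."""
    qdotr = (qx[:, None, None] * positions[:, 0][None, None, :]
             + qy[None, :, None] * positions[:, 1][None, None, :])
    amplitudes = np.exp(-1j * qdotr).sum(axis=-1)
    return np.abs(amplitudes) ** 2 / len(positions)
# Test configuration: a small hexagonal (triangular) lattice with spacing a = 1
a = 1.0
i, j = np.meshgrid(np.arange(20), np.arange(20))
x = a * (i + 0.5 * j).ravel()
y = a * (np.sqrt(3) / 2 * j).ravel()
positions = np.column_stack([x, y])
q = np.linspace(-6 * np.pi / a, 6 * np.pi / a, 121)
S = structure_factor(positions, q, q)
# For this crystal S(q) shows sixfold Bragg peaks at |q| = 4*pi/(sqrt(3)*a), broadened by the
# finite system size as discussed above; a fluid configuration would give rings instead.
print(S.shape, S.max())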
Interaction between dislocations.
To analyse melting due to the dissociation of dislocations, one starts with the energy formula_7 as a function of the distance between two dislocations. An isolated dislocation in 2D is a local distortion of the sixfold lattice, where neighbouring particles have five and seven nearest neighbours instead of six. It is important to note that dislocations can only be created in pairs, for topological reasons. A bound pair of dislocations is a local configuration with a 5-7-7-5 neighbourhood.
formula_8
The double sum runs over all positions of defect pairs formula_9 and formula_10; formula_11 measures the distance between the dislocations. formula_12 is the Burgers vector and denotes the orientation of the dislocation at position formula_13. The second term in the brackets causes dislocations to arrange preferentially antiparallel, for energetic reasons. Its contribution is small and can be neglected for large distances between defects. The main contribution stems from the logarithmic term (the first one in the brackets), which describes how the energy of a dislocation pair diverges with increasing distance. Since the shortest distance between two dislocations is given approximately by the average particle distance formula_14, the scaling of distances with formula_14 prevents the logarithm formula_15 from becoming negative. The strength of the interaction is proportional to Young's modulus formula_16, given by the stiffness of the crystal lattice. To create a dislocation from an undisturbed lattice, a small displacement on a scale smaller than the average particle distance formula_14 is needed. The discrete energy associated with this displacement is usually called the core energy formula_17 and has to be counted for each of the formula_18 dislocations individually (last term).
A simple argument for the dominance of the logarithmic term is that the magnitude of the strain induced by an isolated dislocation decays as formula_19 with distance. Within Hooke's approximation, the associated stress is linear in the strain. Integrating the strain ~1/r gives an energy proportional to the logarithm. The logarithmic distance dependence of the energy is the reason why KTHNY theory is one of the few theories of phase transitions that can be solved analytically: in statistical physics one has to calculate partition functions, e.g. the probability distribution for all possible configurations of dislocation pairs, given by the Boltzmann distribution formula_20. Here, formula_21 is the thermal energy, with Boltzmann constant formula_22. For the majority of problems in statistical physics the partition function can hardly be evaluated, due to the enormous number of particles and degrees of freedom. This is different in KTHNY theory, thanks to the logarithmic energy function of dislocations formula_7 and the exponential from the Boltzmann factor as its inverse, which can be integrated easily.
Example.
We want to calculate the mean squared distance between two dislocations, considering only the dominant logarithmic term for simplicity:
formula_23
This mean squared distance tends to zero for low temperatures, formula_24 – dislocations will annihilate and the crystal is free of defects. The expression diverges, formula_25, if the denominator tends to zero. This happens when
formula_26. A diverging distance of dislocations implies that they are dissociated and do not form a bound pair. The crystal is molten if several isolated dislocations are thermally excited, and the melting temperature formula_27 is given by Young's modulus:
formula_28
The dimensionless quantity formula_6 is a universal constant for melting in 2D and is independent of details of the system under investigation.
This example investigated only an isolated pair of dislocations. In general, a multiplicity of dislocations will appear during melting. The strain field of an isolated dislocation will be shielded, and the crystal will become softer in the vicinity of the phase transition; Young's modulus will decrease due to the dislocations. In KTHNY theory, this feedback of the dislocations on the elasticity, and especially on Young's modulus acting as the coupling constant in the energy function, is described within the framework of renormalization group theory.
Renormalization of elasticity.
If a 2D crystal is heated, virtual dislocation pairs will be excited due to thermal fluctuations in the vicinity of the phase transition. Virtual means that the average thermal energy is not large enough to overcome (twice) the core energy and to dissociate (unbind) dislocation pairs. Nonetheless, dislocation pairs can appear locally on very short time scales due to thermal fluctuations, before they annihilate again. Although they annihilate, they have a detectable impact on elasticity: they soften the crystal. The principle is completely analogous to calculating the bare charge of the electron in quantum electrodynamics (QED), where the charge of the electron is shielded by virtual electron-positron pairs arising from quantum fluctuations of the vacuum. Roughly speaking, one can summarize: if the crystal is softened due to the presence of virtual pairs of dislocations, the probability (fugacity) formula_29 for creating additional virtual dislocations is enhanced, proportional to the Boltzmann factor of the core energy of a dislocation, formula_30. If additional (virtual) dislocations are present, the crystal will become softer still; if the crystal is softer, the fugacity will increase further, and so on.
David Nelson, Bertrand Halperin and, independently, Peter Young formulated this in a mathematically precise way, using renormalization group theory for the fugacity and the elasticity: in the vicinity of the continuous phase transition, the system becomes critical – this means that it becomes self-similar on all length scales formula_31. Under a transformation of all length scales by a factor formula_32, the energy formula_33 and the fugacity formula_34 will depend on this factor, but the system has to appear identical at the same time, due to the self-similarity. In particular, the energy function (Hamiltonian) of the dislocations has to be invariant in structure. The softening of the system after a length-scale transformation (zooming out to visualize a larger area implies counting more dislocations) is now absorbed into a renormalized (reduced) elasticity. The recursion relations for the elasticity and the fugacity are:
formula_35
formula_36
Similar recursion relations can be derived for the shear modulus and the bulk modulus. formula_37 and formula_38 are modified Bessel functions. Depending on the starting point, the recursion relations can flow in two directions: formula_39 implies no defects, so the ensemble is crystalline; formula_40 implies arbitrarily many defects, so the ensemble is fluid. The recursion relations have a fixed point at formula_41 with formula_42. Now, formula_43 is the renormalized value instead of the bare one. Figure 2 shows Young's modulus as a function of the dimensionless control parameter formula_44. It measures the ratio of the repulsive energy between two particles to the thermal energy (which was constant in this experiment), and can be interpreted as a pressure or an inverse temperature. The black curve is a thermodynamic calculation for a perfect hexagonal crystal at formula_45. The blue curve is from computer simulations and shows a reduced elasticity due to lattice vibrations at formula_46. The red curve is the renormalization following the recursion relations; Young's modulus drops discontinuously to zero at formula_6. Turquoise symbols are from measurements of elasticity in a colloidal monolayer and confirm the melting point at formula_47.
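The flow defined by the two recursion relations can be integrated numerically. The sketch below (a Python illustration added here; the step size, the run length and the bare fugacity are arbitrary choices) uses a simple Euler step in the flow parameter together with the modified Bessel functions from SciPy, and shows that, for a small bare fugacity, bare values of Young's modulus sufficiently above the fixed point flow to the defect-free crystal (formula_39) while values below it flow to the fluid (formula_40).
import numpy as np
from scipy.special import iv  # modified Bessel functions I_0, I_1
def kthny_flow(Y0, y0, dl=1e-3, l_max=50.0):
    """Euler integration of the recursion relations above for the (dimensionless)
    Young's modulus Y(l) and the fugacity y(l)."""
    Y, y = Y0, y0
    for _ in range(int(l_max / dl)):
        x = Y / (8.0 * np.pi)
        dYinv = (1.5 * np.pi * y**2 * np.exp(x) * iv(0, x)
                 - 0.75 * np.pi * y**2 * np.exp(x) * iv(1, x))
        dy = (2.0 - x) * y + 2.0 * np.pi * y**2 * np.exp(x / 2.0) * iv(0, x)
        Y = 1.0 / (1.0 / Y + dYinv * dl)   # step for d(1/Y)/dl
        y = y + dy * dl
        if y > 1e3:                        # fugacity diverges: the fluid side
            break
    return Y, y
y0 = 1e-3                                  # small bare fugacity
for Y0 in (1.10 * 16 * np.pi, 0.90 * 16 * np.pi):
    Y_R, y_R = kthny_flow(Y0, y0)
    phase = "crystal (y -> 0)" if y_R < y0 else "fluid (y -> infinity)"
    print(f"bare Y / 16 pi = {Y0 / (16 * np.pi):.2f}: renormalized Y = {Y_R:.2f}, {phase}")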
Interaction between disclinations.
The system enters the hexatic phase after the dissociation of dislocations. To reach the isotropic fluid, dislocations (5-7 pairs) have to dissociate into disclinations, consisting of isolated 5-fold and isolated 7-fold coordinated particles. Arguments similar to those for dislocations can be used for the interaction of disclinations. Again, disclinations can only be created in pairs, for topological reasons. Starting with the energy formula_48 as a function of the distance between two disclinations, one finds:
formula_49
The logarithmic term again dominates. The sign of the interaction gives attraction or repulsion for the winding numbers formula_50 and formula_51 of the fivefold and sevenfold disclinations, such that charges of opposite sign attract. The overall strength is given by the stiffness against twist. The coupling constant formula_52 is called Frank's constant, following the theory of liquid crystals.
formula_53 is the discrete energy needed for a dislocation to dissociate into two disclinations. The mean squared distance between two disclinations can be calculated in the same way as for dislocations; only the prefactor, denoting the coupling constant, has to be changed accordingly. It diverges for formula_54. The system melts from the hexatic phase into the isotropic liquid if unbound disclinations are present. This transition temperature formula_55 is given by Frank's constant:
formula_56
formula_57 is again a universal constant. Figure 3 shows measurements of the orientational stiffness of a colloidal monolayer; Frank's constant drops below this universal constant at formula_55.
Critical exponents.
Typically, Kosterlitz–Thouless transitions have a continuum of critical points, which can be characterised by self-similar grains of disordered and ordered regions. In second-order phase transitions, the correlation length measuring the size of those regions diverges algebraically:
formula_58.
Here, formula_59 is the transition temperature and formula_60 is a critical exponent. Another special feature of Kosterlitz–Thouless transitions is that the translational and orientational correlation lengths in 2D diverge exponentially (see also hexatic phase for the definition of those correlation functions):
formula_61
The critical exponent becomes formula_62 for the diverging translational correlation length at the hexatic–crystalline transition. D. Nelson and B. Halperin predicted that Frank's constant diverges exponentially with formula_63 at formula_55, too. The red curve shows a fit of experimental data covering the critical behaviour; the critical exponent is measured to be formula_64. This value is compatible with the prediction of KTHNY theory within the error bars. The orientational correlation length at the hexatic–isotropic transition is predicted to diverge with an exponent formula_65. This rational value is compatible with mean-field theories and implies that a renormalization of Frank's constant is not necessary. The increased shielding of the orientational stiffness due to disclinations does not have to be taken into account – this is already done by the dislocations, which are frequently present at formula_55. Experiments measured a critical exponent of formula_66.
KTHNY theory has been tested in experiments and in computer simulations. For short-range particle interactions (hard discs), simulations found a weakly first-order transition at the hexatic–isotropic transition, slightly beyond the scope of KTHNY theory.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T > 0"
},
{
"math_id": 1,
"text": " S(\\vec{q}) =\\frac{1}{N}\\langle\\sum_{ij}e^{-i\\vec{q}(\\vec{r}_i-\\vec{r}_j)}\\rangle "
},
{
"math_id": 2,
"text": " q = 2\\pi / a "
},
{
"math_id": 3,
"text": " a = 1/\\sqrt{\\rho} "
},
{
"math_id": 4,
"text": " \\rho "
},
{
"math_id": 5,
"text": " \\delta"
},
{
"math_id": 6,
"text": " 16 \\pi "
},
{
"math_id": 7,
"text": " H_{loc} "
},
{
"math_id": 8,
"text": " H_{loc} = - \\frac{a^2 Y}{8 \\pi} \\sum_{k \\neq l} \\Big[ \\vec{b}(\\vec{r}_k)\\cdot\\vec{b}(\\vec{r}_l) \\ln \\frac{\\Delta \\vec{r}_{k,l}}{a} - \\frac{[\\vec{b}(\\vec{r}_k) \\cdot \\Delta\\vec{r}_{k,l}][\\vec{b}(\\vec{r}_l) \\cdot \\Delta\\vec{r}_{k,l}]}{\\Delta r^2_{i,j}}\\Big] + E_c \\cdot N_{loc}. "
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "l"
},
{
"math_id": 11,
"text": " \\Delta\\vec{r}_{k,l} = \\vec{r}_k-\\vec{r}_l "
},
{
"math_id": 12,
"text": " \\vec{b} "
},
{
"math_id": 13,
"text": " \\vec{r}_k "
},
{
"math_id": 14,
"text": " a "
},
{
"math_id": 15,
"text": " \\ln \\frac{\\Delta\\vec{r}_{k,l}}{a} "
},
{
"math_id": 16,
"text": " Y "
},
{
"math_id": 17,
"text": " E_c "
},
{
"math_id": 18,
"text": " N_{loc} "
},
{
"math_id": 19,
"text": " \\propto \\frac{1}{r} "
},
{
"math_id": 20,
"text": " e^{\\frac{H_{loc}}{k_BT}} "
},
{
"math_id": 21,
"text": " k_BT "
},
{
"math_id": 22,
"text": " k_B "
},
{
"math_id": 23,
"text": " \\langle r^2 \\rangle = \\frac{\\int r^2 \\cdot e^{-\\frac{Ya \\ln(r/a)}{4\\pi k_B T}}d^2r}{\\int e^{-\\frac{Ya \\ln(r/a)}{4\\pi k_B T}}d^2r} \\sim \\frac{2-\\frac{Y\\cdot a}{4\\pi k_B T}}{4-\\frac{Y\\cdot a}{4\\pi k_B T}}.\n"
},
{
"math_id": 24,
"text": " \\langle r^2 \\rangle \\to 0 "
},
{
"math_id": 25,
"text": " \\langle r^2 \\rangle \\to \\infty "
},
{
"math_id": 26,
"text": " \\frac{Y\\cdot a}{4\\pi k_B T} = 4 "
},
{
"math_id": 27,
"text": " T_m "
},
{
"math_id": 28,
"text": " \\frac{Y \\cdot a}{k_B T_m} = 16 \\pi.\n"
},
{
"math_id": 29,
"text": " y "
},
{
"math_id": 30,
"text": " y = e^{\\frac{E_C}{k_BT}} "
},
{
"math_id": 31,
"text": " \\gg a "
},
{
"math_id": 32,
"text": " l "
},
{
"math_id": 33,
"text": " E \\to E(l) "
},
{
"math_id": 34,
"text": " y \\to y(l) "
},
{
"math_id": 35,
"text": " \n\\frac{dY^{-1}(l)}{dl} = \\frac{3}{2} \\pi y^2 e^{Y(l)/8\\pi} I_0 \\Big(Y(l)/8 \\pi \\Big) - \\frac{3}{4} \\pi y^2 e^{Y(l)/8\\pi} I_1 \\Big(Y(l)/8 \\pi \\Big),\n"
},
{
"math_id": 36,
"text": "\n\\frac{dy(l)}{dl} = \\Big( 2 - \\frac{Y(l)}{8 \\pi} \\Big) y(l) + 2 \\pi y^2 e^{Y(l)/16\\pi} I_0\\Big(Y(l)/8 \\pi \\Big).\n"
},
{
"math_id": 37,
"text": " I_0 "
},
{
"math_id": 38,
"text": " I_1 "
},
{
"math_id": 39,
"text": " y \\to 0 "
},
{
"math_id": 40,
"text": " y \\to \\infty "
},
{
"math_id": 41,
"text": " y = 0 "
},
{
"math_id": 42,
"text": " E_R/k_BT = 16 \\pi "
},
{
"math_id": 43,
"text": " E_R "
},
{
"math_id": 44,
"text": " \\Gamma "
},
{
"math_id": 45,
"text": " T = 0 "
},
{
"math_id": 46,
"text": " T > 0 "
},
{
"math_id": 47,
"text": " Y_R = 16 \\pi "
},
{
"math_id": 48,
"text": " H_{cli} "
},
{
"math_id": 49,
"text": " H_{cli} = - \\frac{F_A\\cdot \\pi}{36} \\sum_{k \\neq l} s(\\vec{r}_k)\\cdot s(\\vec{r}_l) \\ln \\frac{\\Delta \\vec{r}_{k,l}}{a} + E_s \\cdot N_{cli}. "
},
{
"math_id": 50,
"text": " +\\pi /3 "
},
{
"math_id": 51,
"text": " -\\pi /3 "
},
{
"math_id": 52,
"text": " F_A "
},
{
"math_id": 53,
"text": " E_s "
},
{
"math_id": 54,
"text": " \\frac{F_A \\cdot \\pi}{36} = 4 "
},
{
"math_id": 55,
"text": " T_i "
},
{
"math_id": 56,
"text": " \\frac{F_A}{k_B T_i} = 72 / \\pi.\n"
},
{
"math_id": 57,
"text": " 72 / \\pi "
},
{
"math_id": 58,
"text": " \\xi = \\xi_0 \\Big(\\frac{T - T_c}{T_c} \\Big)^{-\\nu} "
},
{
"math_id": 59,
"text": " T_c "
},
{
"math_id": 60,
"text": " \\nu "
},
{
"math_id": 61,
"text": " \\xi = \\xi_0 \\cdot e^{\\Big(\\frac{T - T_c}{T_c} \\Big)^{-\\nu}}. "
},
{
"math_id": 62,
"text": " \\bar\\nu = 0{,}36963\\dots"
},
{
"math_id": 63,
"text": " \\bar\\nu "
},
{
"math_id": 64,
"text": " \\bar\\nu = 0{,}35 \\pm 0{,}02 "
},
{
"math_id": 65,
"text": " \\nu = 0{,}5 "
},
{
"math_id": 66,
"text": " \\nu = 0{,}5 \\pm 0{,}03 "
}
] |
https://en.wikipedia.org/wiki?curid=62606505
|
6260959
|
Folded normal distribution
|
Probability distribution
The folded normal distribution is a probability distribution related to the normal distribution. Given a normally distributed random variable "X" with mean "μ" and variance "σ"2, the random variable "Y" = |"X"| has a folded normal distribution. Such a case may be encountered if only the magnitude of some variable is recorded, but not its sign. The distribution is called "folded" because probability mass to the left of "x" = 0 is folded over by taking the absolute value. In the physics of heat conduction, the folded normal distribution is a fundamental solution of the heat equation on the half space; it corresponds to having a perfect insulator on a hyperplane through the origin.
Definitions.
Density.
The probability density function (PDF) is given by
formula_0
for "x" ≥ 0, and 0 everywhere else. An alternative formulation is given by
formula_1,
where cosh is the Hyperbolic cosine function. It follows that the cumulative distribution function (CDF) is given by:
formula_2
for "x" ≥ 0, where erf() is the error function. This expression reduces to the CDF of the half-normal distribution when "μ" = 0.
The mean of the folded distribution is then
formula_3
or
formula_4
where formula_5 is the normal cumulative distribution function:
formula_6
The variance then is expressed easily in terms of the mean:
formula_7
Both the mean ("μ") and variance ("σ"2) of "X" in the original normal distribution can be interpreted as the location and scale parameters of "Y" in the folded distribution.
Properties.
Mode.
The mode of the distribution is the value of formula_8 for which the density is maximised. In order to find this value, we take the first derivative of the density with respect to formula_8 and set it equal to zero. Unfortunately, there is no closed form. We can, however, write the derivative in a better way and end up with a non-linear equation
formula_9
formula_10
formula_11
formula_12
formula_13.
Tsagris et al. (2014) saw from numerical investigation that when formula_14, the maximum is attained at formula_15, and when formula_16 becomes greater than formula_17, the maximum approaches formula_16. This is of course to be expected, since in this case the folded normal converges to the normal distribution. In order to avoid any trouble with negative variances, exponentiation of the variance parameter is suggested. Alternatively, one can add a constraint, such as setting the value of the log-likelihood to NA, or to something very small, if the optimiser proposes a negative variance.
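Although the mode has no closed form, it is easy to locate numerically. The following sketch (a Python illustration added here; the parameter values are arbitrary) maximises the density over a bounded interval and checks that the result approximately satisfies the stationarity condition formula_12 derived above (which, as noted, is also satisfied by the trivial root at zero); it also reproduces the observation that the maximum sits at formula_15 when formula_14.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import foldnorm
def folded_mode(mu, sigma):
    """Numerical mode of the folded normal: maximise the density on [0, mu + 5 sigma]."""
    dist = foldnorm(abs(mu) / sigma, scale=sigma)
    res = minimize_scalar(lambda t: -dist.pdf(t), bounds=(0.0, mu + 5 * sigma), method="bounded")
    return res.x
for mu, sigma in [(0.5, 1.0), (1.5, 1.0), (4.0, 1.0)]:
    m = folded_mode(mu, sigma)
    # stationarity condition (mu + x) exp(-2 mu x / sigma^2) = mu - x, also satisfied at x = 0
    residual = (mu + m) * np.exp(-2 * mu * m / sigma**2) - (mu - m)
    print(f"mu={mu}, sigma={sigma}: mode ~ {m:.4f}, stationarity residual {residual:+.1e}")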
Characteristic function and other related functions.
formula_18.
formula_19.
formula_20.
formula_21.
formula_22.
Related distributions.
When "μ" = 0, the distribution of "Y" is a half-normal distribution.
Statistical Inference.
Estimation of parameters.
There are a few ways of estimating the parameters of the folded normal. All of them are essentially the maximum likelihood estimation procedure, but in some cases, a numerical maximization is performed, whereas in other cases, the root of an equation is being searched. The log-likelihood of the folded normal when a sample formula_26 of size formula_27 is available can be written in the following way
formula_28
formula_29
formula_30
In R (programming language), using the package Rfast one can obtain the MLE very quickly (command codice_0). Alternatively, other standard optimisation commands will fit this distribution. The maximisation is easy, since only two parameters (formula_31 and formula_32) are involved. Note that both positive and negative values of formula_31 are acceptable, since formula_31 belongs to the real line; the sign is not important, because the distribution is symmetric with respect to it. The next code is written in R.
folded <- function(y) {
  ## y is a vector with positive data
  n <- length(y)  ## sample size
  sy2 <- sum(y^2)
  ## minus log-likelihood; the second parameter is log(sigma^2), so the variance stays positive
  sam <- function(para, n, sy2) {
    me <- para[1] ; se <- exp( para[2] )
    f <- - n/2 * log(2/pi/se) + n * me^2 / 2 / se +
      sy2 / 2 / se - sum( log( cosh( me * y/se ) ) )
    f
  }
  ## two-stage optimisation: a rough first pass followed by a refinement
  mod <- optim( c( mean(y), log( var(y) ) ), sam, n = n, sy2 = sy2, control = list(maxit = 2000) )
  mod <- optim( mod$par, sam, n = n, sy2 = sy2, control = list(maxit = 20000) )
  result <- c( -mod$value, mod$par[1], exp(mod$par[2]) )
  names(result) <- c("log-likelihood", "mu", "sigma squared")
  result
}
The partial derivatives of the log-likelihood are written as
formula_33
formula_34
formula_35
formula_36.
By equating the first partial derivative of the log-likelihood to zero, we obtain a nice relationship
formula_37.
Note that the above equation has three solutions, one at zero and two more with the opposite sign. By substituting the above equation into the partial derivative of the log-likelihood with respect to formula_38 and equating it to zero, we get the following expression for the variance
formula_39,
which is the same formula as in the normal distribution. A main difference here is that formula_31 and formula_32 are not statistically independent. The above relationships can be used to obtain maximum likelihood estimates in an efficient recursive way. We start with an initial value for formula_32 and find the positive root (formula_31) of the last equation. Then, we obtain an updated value of formula_32. The procedure is repeated until the change in the log-likelihood value is negligible. Another, easier and more efficient, way is to perform a root search. Let us write the last equation in a more elegant way
formula_40
formula_41.
It becomes clear that the optimization of the log-likelihood with respect to the two parameters has turned into a root search of a function. This of course is identical to the previous root search. Tsagris et al. (2014) observed that there are three roots to this equation for formula_31, i.e. three possible values of formula_31 that satisfy this equation: formula_42 and formula_43, which are the maximum likelihood estimates, and 0, which corresponds to the minimum log-likelihood.
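The same maximisation can be carried out in Python. The sketch below (an illustration added here, mirroring the R function above rather than reproducing any published code) minimises the negative log-likelihood written with the cosh form of the density and then verifies numerically the identity formula_39 derived above at the optimum.
import numpy as np
from scipy.optimize import minimize
def folded_mle(y):
    """Maximum likelihood estimates of (mu, sigma^2) for folded normal data y > 0."""
    y = np.asarray(y, dtype=float)
    n, sy2 = y.size, np.sum(y**2)
    def negloglik(par):
        mu, s2 = par[0], np.exp(par[1])                  # log-parametrisation keeps sigma^2 > 0
        z = mu * y / s2
        logcosh = np.logaddexp(z, -z) - np.log(2.0)      # overflow-safe log(cosh(z))
        return (-n / 2.0 * np.log(2.0 / (np.pi * s2))
                + (n * mu**2 + sy2) / (2.0 * s2) - np.sum(logcosh))
    start = np.array([y.mean(), np.log(y.var())])
    res = minimize(negloglik, start, method="Nelder-Mead",
                   options={"maxiter": 20_000, "xatol": 1e-10, "fatol": 1e-10})
    return res.x[0], np.exp(res.x[1]), -res.fun
rng = np.random.default_rng(2)
y = np.abs(rng.normal(2.0, 1.0, size=5_000))             # folded N(mu = 2, sigma = 1) data
mu_hat, s2_hat, loglik = folded_mle(y)
print(mu_hat, s2_hat, loglik)
print(np.isclose(s2_hat, np.mean(y**2) - mu_hat**2, rtol=1e-3))  # the variance identity holds at the optimum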
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f_Y(x;\\mu,\\sigma^2)=\n\\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\, e^{ -\\frac{(x-\\mu)^2}{2\\sigma^2} }\n+ \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\, e^{ -\\frac{(x+\\mu)^2}{2\\sigma^2} } "
},
{
"math_id": 1,
"text": "f\\left(x \\right)=\\sqrt{\\frac{2}{\\pi\\sigma^2}}e^{-\\frac{\\left(x^2+\\mu^2 \\right)}{2\\sigma^2}}\\cosh{\\left(\\frac{\\mu x}{\\sigma^2}\\right)}"
},
{
"math_id": 2,
"text": "\nF_Y(x; \\mu, \\sigma^2) = \\frac{1}{2}\\left[ \\mbox{erf}\\left(\\frac{x+\\mu}{\\sqrt{2\\sigma^2}}\\right) + \\mbox{erf}\\left(\\frac{x-\\mu}{\\sqrt{2\\sigma^2}}\\right)\\right] "
},
{
"math_id": 3,
"text": "\\mu_Y = \\sigma \\sqrt{\\frac{2}{\\pi}} \\,\\, \\exp\\left(\\frac{-\\mu^2}{2\\sigma^2}\\right) + \\mu \\, \\mbox{erf}\\left(\\frac{\\mu}{\\sqrt{2\\sigma^2}}\\right)"
},
{
"math_id": 4,
"text": " \\mu_Y = \\sqrt{\\frac{2}{\\pi}}\\sigma e^{-\\frac{\\mu^2}{2\\sigma^2}}+\\mu\\left[1-2\\Phi\\left(-\\frac{\\mu}{\\sigma}\\right) \\right]"
},
{
"math_id": 5,
"text": "\\Phi"
},
{
"math_id": 6,
"text": " \\Phi(x)\\; =\\; \\frac12\\left[1 + \\operatorname{erf}\\left(\\frac{x}{\\sqrt{2}}\\right)\\right]."
},
{
"math_id": 7,
"text": "\\sigma_Y^2 = \\mu^2 + \\sigma^2 - \\mu_Y^2. "
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "\\frac{df(x)}{dx}=0 \\Rightarrow -\\frac{\\left(x-\\mu\\right)}{\\sigma^2}e^{-\\frac{1}{2}\\frac{\\left(x-\\mu\\right)^2}{\\sigma^2}}-\n\\frac{\\left(x+\\mu\\right)}{\\sigma^2}e^{-\\frac{1}{2}\\frac{\\left(x+\\mu\\right)^2}{\\sigma^2}}=0\n\n"
},
{
"math_id": 10,
"text": " x\\left[e^{-\\frac{1}{2}\\frac{\\left(x-\\mu\\right)^2}{\\sigma^2}}+e^{-\\frac{1}{2}\\frac{\\left(x+\\mu\\right)^2}{\\sigma^2}}\\right]-\n\\mu \\left[e^{-\\frac{1}{2}\\frac{\\left(x-\\mu\\right)^2}{\\sigma^2}}-e^{-\\frac{1}{2}\\frac{\\left(x+\\mu\\right)^2}{\\sigma^2}}\\right]=0 "
},
{
"math_id": 11,
"text": "x\\left(1+e^{-\\frac{2\\mu x}{\\sigma^2}}\\right)-\\mu\\left(1-e^{-\\frac{2\\mu x}{\\sigma^2}}\\right)=0 "
},
{
"math_id": 12,
"text": "\\left(\\mu+x\\right)e^{-\\frac{2\\mu x}{\\sigma^2}}=\\mu-x "
},
{
"math_id": 13,
"text": "x=-\\frac{\\sigma^2}{2\\mu}\\log{\\frac{\\mu-x}{\\mu+x}} "
},
{
"math_id": 14,
"text": "\\mu<\\sigma\n\n"
},
{
"math_id": 15,
"text": "x=0\n\n"
},
{
"math_id": 16,
"text": "\\mu\n\n"
},
{
"math_id": 17,
"text": "3\\sigma\n\n"
},
{
"math_id": 18,
"text": "\\varphi_x\\left(t\\right)=e^{\\frac{-\\sigma^2 t^2}{2}+i\\mu t}\\Phi\\left(\\frac{\\mu}{\\sigma}+i\\sigma t \\right) +\ne^{-\\frac{\\sigma^2 t^2}{2}-i\\mu t}\\Phi\\left(-\\frac{\\mu}{\\sigma}+i\\sigma t \\right) "
},
{
"math_id": 19,
"text": "M_x\\left(t\\right)=\\varphi_x\\left(-it\\right)=e^{\\frac{\\sigma^2 t^2}{2}+\\mu t}\\Phi\\left(\\frac{\\mu}{\\sigma}+\\sigma t \\right) +\ne^{\\frac{\\sigma^2 t^2}{2}-\\mu t}\\Phi\\left(-\\frac{\\mu}{\\sigma}+\\sigma t \\right) "
},
{
"math_id": 20,
"text": "K_x\\left(t\\right)=\\log{M_x\\left(t\\right)}=\n\\left(\\frac{\\sigma^2t^2}{2}+\\mu t\\right) + \\log{\\left\\lbrace 1-\\Phi\\left(-\\frac{\\mu}{\\sigma}-\\sigma t \\right) +\ne^{\\frac{\\sigma^2 t^2}{2}-\\mu t}\\left[1-\\Phi\\left(\\frac{\\mu}{\\sigma}-\\sigma t \\right) \\right] \\right\\rbrace}"
},
{
"math_id": 21,
"text": "E\\left(e^{-tx}\\right)=e^{\\frac{\\sigma^2t^2}{2}-\\mu t}\\left[1-\\Phi\\left(-\\frac{\\mu}{\\sigma}+\\sigma t \\right) \\right]+\ne^{\\frac{\\sigma^2 t^2}{2}+\\mu t}\\left[1-\\Phi\\left(\\frac{\\mu}{\\sigma}+\\sigma t \\right) \\right]"
},
{
"math_id": 22,
"text": "\\hat{f}\\left(t\\right)=\\varphi_x\\left(-2\\pi t\\right)= e^{\\frac{-4\\pi^2\\sigma^2 t^2}{2}- i2\\pi \\mu t}\\left[1-\\Phi\\left(-\\frac{\\mu}{\\sigma}-i2\\pi \\sigma t \\right) \\right]+ e^{-\\frac{4\\pi^2 \\sigma^2 t^2}{2}+i2\\pi\\mu t}\\left[1-\\Phi\\left(\\frac{\\mu}{\\sigma}-i2\\pi \\sigma t \\right) \\right]\n"
},
{
"math_id": 23,
"text": "(0, \\infty)"
},
{
"math_id": 24,
"text": " f(x)= \\frac{2\\beta^{\\frac{\\alpha}{2}} x^{\\alpha-1} \\exp(-\\beta x^2+ \\gamma x )}{\\Psi{\\left(\\frac{\\alpha}{2}, \\frac{ \\gamma}{\\sqrt{\\beta}}\\right)}}"
},
{
"math_id": 25,
"text": "\\Psi(\\alpha,z)={}_1\\Psi_1\\left(\\begin{matrix}\\left(\\alpha,\\frac{1}{2}\\right)\\\\(1,0)\\end{matrix};z \\right)"
},
{
"math_id": 26,
"text": "x_i"
},
{
"math_id": 27,
"text": "n"
},
{
"math_id": 28,
"text": "l = -\\frac{n}{2}\\log{2\\pi\\sigma^2}+\\sum_{i=1}^n\\log{\\left[e^{-\\frac{\\left(x_i-\\mu\\right)^2}{2\\sigma^2}}+\ne^{-\\frac{\\left(x_i+\\mu\\right)^2}{2\\sigma^2}} \\right] } "
},
{
"math_id": 29,
"text": "l = -\\frac{n}{2}\\log{2\\pi\\sigma^2}+\\sum_{i=1}^n\\log{\\left[e^{-\\frac{\\left(x_i-\\mu\\right)^2}{2\\sigma^2}}\n\\left(1+e^{-\\frac{\\left(x_i+\\mu\\right)^2}{2\\sigma^2}}e^{\\frac{\\left(x_i-\\mu\\right)^2}{2\\sigma^2}}\\right)\\right]}"
},
{
"math_id": 30,
"text": "l = -\\frac{n}{2}\\log{2\\pi\\sigma^2}-\\sum_{i=1}^n\\frac{\\left(x_i-\\mu\\right)^2}{2\\sigma^2}+\\sum_{i=1}^n\\log{\\left(1+e^{-\\frac{2\\mu x_i}{\\sigma^2}} \\right)}"
},
{
"math_id": 31,
"text": "\\mu"
},
{
"math_id": 32,
"text": "\\sigma^2"
},
{
"math_id": 33,
"text": "\\frac{\\partial l}{\\partial \\mu} = \\frac{\\sum_{i=1}^n\\left(x_i-\\mu \\right)}{\\sigma^2}-\n\\frac{2}{\\sigma^2}\\sum_{i=1}^n\\frac{x_ie^{\\frac{-2\\mu x_i}{\\sigma^2}}}{1+e^{\\frac{-2\\mu x_i}{\\sigma^2}}}\n"
},
{
"math_id": 34,
"text": "\\frac{\\partial l}{\\partial \\mu} = \\frac{\\sum_{i=1}^n\\left(x_i-\\mu \\right)}{\\sigma^2}-\\frac{2}{\\sigma^2}\\sum_{i=1}^n\\frac{x_i}{1+e^{\\frac{2\\mu x_i}{\\sigma^2}}} \\ \\ \\text{and}"
},
{
"math_id": 35,
"text": "\\frac{\\partial l}{\\partial \\sigma^2} = -\\frac{n}{2\\sigma^2}+\\frac{\\sum_{i=1}^n\\left(x_i-\\mu \\right)^2}{2\\sigma^4}+\n\\frac{2\\mu}{\\sigma^4}\\sum_{i=1}^n\\frac{x_ie^{-\\frac{2\\mu x_i}{\\sigma^2}}}{1+e^{-\\frac{2\\mu x_i}{\\sigma^2}}} "
},
{
"math_id": 36,
"text": "\\frac{\\partial l}{\\partial \\sigma^2} = -\\frac{n}{2\\sigma^2}+\\frac{\\sum_{i=1}^n\\left(x_i-\\mu \\right)^2}{2\\sigma^4}+\n\\frac{2\\mu}{\\sigma^4}\\sum_{i=1}^n\\frac{x_i}{1+e^{\\frac{2\\mu x_i}{\\sigma^2}}}"
},
{
"math_id": 37,
"text": "\n\n\\sum_{i=1}^n\\frac{x_i}{1+e^{\\frac{2\\mu x_i}{\\sigma^2}}}=\\frac{\\sum_{i=1}^n\\left(x_i-\\mu \\right)}{2}\n\n"
},
{
"math_id": 38,
"text": " \\sigma^2"
},
{
"math_id": 39,
"text": "\\sigma^2=\\frac{\\sum_{i=1}^n\\left(x_i-\\mu\\right)^2}{n}+\\frac{2\\mu\\sum_{i=1}^n\\left(x_i-\\mu\\right)}{n}=\\frac{\\sum_{i=1}^n\\left(x_i^2-\\mu^2\\right)}{n}=\\frac{\\sum_{i=1}^nx_i^2}{n}-\\mu^2"
},
{
"math_id": 40,
"text": "2\\sum_{i=1}^n\\frac{x_i}{1+e^{\\frac{2\\mu x_i}{\\sigma^2}}}-\n\\sum_{i=1}^n\\frac{x_i\\left(1+e^{\\frac{2\\mu x_i}{\\sigma^2}}\\right)}{1+e^{\\frac{2\\mu x_i}{\\sigma^2}}}+n\\mu = 0"
},
{
"math_id": 41,
"text": "\\sum_{i=1}^n\\frac{x_i\\left(1-e^{\\frac{2\\mu x_i}{\\sigma^2}}\\right)}{1+e^{\\frac{2\\mu x_i}{\\sigma^2}}}+n\\mu = 0\n"
},
{
"math_id": 42,
"text": "-\\mu"
},
{
"math_id": 43,
"text": "+\\mu"
}
] |
https://en.wikipedia.org/wiki?curid=6260959
|
62610482
|
Kaplansky's theorem on projective modules
|
In abstract algebra, Kaplansky's theorem on projective modules, first proven by Irving Kaplansky, states that a projective module over a local ring is free; where a not-necessarily-commutative ring is called "local" if for each element "x", either "x" or 1 − "x" is a unit element. The theorem can also be formulated so as to characterize a local ring (see § Characterization of a local ring below).
For a finite projective module over a commutative local ring, the theorem is an easy consequence of Nakayama's lemma. For the general case, the proof (both the original one and later ones) consists of the following two steps: first, one shows that a projective module is a direct sum of countably generated submodules (Lemma 1 below); second, one shows that a countably generated projective module over a local ring is free (Lemma 2 below).
The idea of the proof of the theorem was also later used by Hyman Bass to show that big projective modules (under some mild conditions) are free. It has also been said that Kaplansky's theorem "is very likely the inspiration for a major portion of the results" in the theory of semiperfect rings.
Proof.
The proof of the theorem is based on two lemmas, both of which concern decompositions of modules and are of independent general interest.
"Proof": Let "N" be a direct summand; i.e., formula_2. Using the assumption, we write formula_3 where each formula_4 is a countably generated submodule. For each subset formula_5, we write formula_6 the image of formula_7 under the projection formula_8 and formula_9 the same way. Now, consider the set of all triples (formula_10, formula_11, formula_12) consisting of a subset formula_13 and subsets formula_14 such that formula_15 and formula_16 are the direct sums of the modules in formula_17. We give this set a partial ordering such that formula_18 if and only if formula_19, formula_20. By Zorn's lemma, the set contains a maximal element formula_21. We shall show that formula_22; i.e., formula_23. Suppose otherwise. Then we can inductively construct a sequence of at most countable subsets formula_24 such that formula_25 and for each integer formula_26,
formula_27.
Let formula_28 and formula_29. We claim:
formula_30
The inclusion formula_31 is trivial. Conversely, formula_32 is the image of formula_33 and so formula_34. The same is also true for formula_35. Hence, the claim is valid.
Now, formula_36 is a direct summand of formula_1 (since it is a summand of formula_37, which is a summand of formula_1); i.e., formula_38 for some formula_39. Then, by the modular law, formula_40. Set formula_41. Define formula_42 in the same way. Then, using the earlier claim, we have:
formula_43
which implies that
formula_44
is countably generated as formula_45. This contradicts the maximality of formula_21. formula_46
"Proof": Let formula_48 denote the family of modules that are isomorphic to modules of the form formula_49 for some finite subset formula_50. The assertion is then implied by the following claim:
Indeed, assume the claim is valid. Then choose a sequence formula_53 in "N" that is a generating set. Using the claim, write formula_54 where formula_55. Then we write formula_56 where formula_57. We next decompose formula_58 with formula_59. Note formula_60. Repeating this argument, in the end, we have formula_61; i.e., formula_62. Hence, the proof reduces to proving the claim, and the claim is a straightforward consequence of Azumaya's theorem (see the linked article for the argument). formula_46
"Proof of the theorem": Let formula_47 be a projective module over a local ring. Then, by definition, it is a direct summand of some free module formula_63. This formula_63 is in the family formula_0 in Lemma 1; thus, formula_47 is a direct sum of countably generated submodules, each a direct summand of "F" and thus projective. Hence, without loss of generality, we can assume formula_47 is countably generated. Then Lemma 2 gives the theorem. formula_46
Characterization of a local ring.
Kaplansky's theorem can be stated in such a way as to give a characterization of a local ring. A direct summand is said to be "maximal" if it has an indecomposable complement.
The implication formula_64 is exactly (the usual) Kaplansky's theorem together with Azumaya's theorem. The converse formula_65 follows from the following general fact, which is interesting in itself:
formula_70 is by Azumaya's theorem, as in the proof of formula_64. Conversely, suppose formula_71 has the above property and that an element "x" in "R" is given. Consider the linear map formula_72. Set formula_73. Then formula_74, which is to say formula_75 splits and the image formula_1 is a direct summand of formula_71. It then follows easily from the assumption that either "x" or −"y" is a unit element. formula_46
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathfrak{F}"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "M = N \\oplus L"
},
{
"math_id": 3,
"text": "M = \\bigoplus_{i \\in I} M_i"
},
{
"math_id": 4,
"text": "M_i"
},
{
"math_id": 5,
"text": "A \\subset I"
},
{
"math_id": 6,
"text": "M_A = \\bigoplus_{i \\in A} M_i, N_A ="
},
{
"math_id": 7,
"text": "M_A"
},
{
"math_id": 8,
"text": "M \\to N \\hookrightarrow M"
},
{
"math_id": 9,
"text": "L_A"
},
{
"math_id": 10,
"text": "J"
},
{
"math_id": 11,
"text": "B"
},
{
"math_id": 12,
"text": "C"
},
{
"math_id": 13,
"text": "J \\subset I"
},
{
"math_id": 14,
"text": "B, C \\subset \\mathfrak{F}"
},
{
"math_id": 15,
"text": "M_J = N_J \\oplus L_J"
},
{
"math_id": 16,
"text": "N_J, L_J"
},
{
"math_id": 17,
"text": "B, C"
},
{
"math_id": 18,
"text": "(J, B, C) \\le (J', B', C')"
},
{
"math_id": 19,
"text": "J \\subset J'"
},
{
"math_id": 20,
"text": "B \\subset B', C \\subset C'"
},
{
"math_id": 21,
"text": "(J, B, C)"
},
{
"math_id": 22,
"text": "J = I"
},
{
"math_id": 23,
"text": "N = N_J = \\bigoplus_{N' \\in B} N' \\in \\mathfrak{F}"
},
{
"math_id": 24,
"text": "I_1 \\subset I_2 \\subset \\cdots \\subset I"
},
{
"math_id": 25,
"text": "I_1 \\not\\subset J"
},
{
"math_id": 26,
"text": "n \\ge 1"
},
{
"math_id": 27,
"text": "M_{I_n} \\subset N_{I_n} + L_{I_n} \\subset M_{I_{n+1}}"
},
{
"math_id": 28,
"text": "I' = \\bigcup_0^\\infty I_n"
},
{
"math_id": 29,
"text": "J' = J \\cup I'"
},
{
"math_id": 30,
"text": "M_{J'} = N_{J'} \\oplus L_{J'}."
},
{
"math_id": 31,
"text": "\\subset"
},
{
"math_id": 32,
"text": "N_{J'}"
},
{
"math_id": 33,
"text": "N_J + L_J + M_{I'} \\subset N_J + M_{I'}"
},
{
"math_id": 34,
"text": "N_{J'} \\subset M_{J'}"
},
{
"math_id": 35,
"text": "L_{J'}"
},
{
"math_id": 36,
"text": "N_J"
},
{
"math_id": 37,
"text": "M_J"
},
{
"math_id": 38,
"text": "N_J \\oplus M' = M"
},
{
"math_id": 39,
"text": "M'"
},
{
"math_id": 40,
"text": "N_{J'} = N_J \\oplus (M' \\cap N_{J'})"
},
{
"math_id": 41,
"text": "\\widetilde{N_J} = M' \\cap N_{J'}"
},
{
"math_id": 42,
"text": "\\widetilde{L_J}"
},
{
"math_id": 43,
"text": "M_{J'} = M_J \\oplus \\widetilde{N_J} \\oplus \\widetilde{L_J},"
},
{
"math_id": 44,
"text": "\\widetilde{N_J} \\oplus \\widetilde{L_J} \\simeq M_{J'} / M_J \\simeq M_{J' - J}"
},
{
"math_id": 45,
"text": "J' - J \\subset I'"
},
{
"math_id": 46,
"text": "\\square"
},
{
"math_id": 47,
"text": "N"
},
{
"math_id": 48,
"text": "\\mathcal{G}"
},
{
"math_id": 49,
"text": "\\bigoplus_{i \\in F} M_i"
},
{
"math_id": 50,
"text": "F \\subset I"
},
{
"math_id": 51,
"text": "x \\in N"
},
{
"math_id": 52,
"text": "H \\in \\mathcal{G}"
},
{
"math_id": 53,
"text": "x_1, x_2, \\dots"
},
{
"math_id": 54,
"text": "N = H_1 \\oplus N_1"
},
{
"math_id": 55,
"text": "x_1 \\in H_1 \\in \\mathcal{G}"
},
{
"math_id": 56,
"text": "x_2 = y + z"
},
{
"math_id": 57,
"text": "y \\in H_1, z \\in N_1"
},
{
"math_id": 58,
"text": "N_1 = H_2 \\oplus N_2"
},
{
"math_id": 59,
"text": "z \\in H_2 \\in \\mathcal{G}"
},
{
"math_id": 60,
"text": "\\{ x_1, x_2 \\} \\subset H_1 \\oplus H_2"
},
{
"math_id": 61,
"text": " \\{ x_1, x_2, \\dots \\} \\subset \\bigoplus_0^\\infty H_n"
},
{
"math_id": 62,
"text": "N = \\bigoplus_0^\\infty H_n"
},
{
"math_id": 63,
"text": "F"
},
{
"math_id": 64,
"text": "1. \\Rightarrow 2."
},
{
"math_id": 65,
"text": "2. \\Rightarrow 1."
},
{
"math_id": 66,
"text": "\\Leftrightarrow"
},
{
"math_id": 67,
"text": "R^2 = R \\times R"
},
{
"math_id": 68,
"text": "R^2 = (0 \\times R) \\oplus M"
},
{
"math_id": 69,
"text": "R^2 = (R \\times 0) \\oplus M"
},
{
"math_id": 70,
"text": "(\\Rightarrow)"
},
{
"math_id": 71,
"text": "R^2"
},
{
"math_id": 72,
"text": "\\sigma:R^2 \\to R, \\, \\sigma(a, b) = a - b"
},
{
"math_id": 73,
"text": "y = x - 1"
},
{
"math_id": 74,
"text": "\\sigma(x, y) = 1"
},
{
"math_id": 75,
"text": "\\eta: R \\to R^2, a \\mapsto (ax, ay)"
}
] |
https://en.wikipedia.org/wiki?curid=62610482
|
62612416
|
Leimkuhler–Matthews method
|
In mathematics, the Leimkuhler-Matthews method (or LM method in its original paper) is an algorithm for finding discretized solutions to the Brownian dynamics
formula_0
where formula_1 is a constant, formula_2 is an energy function and formula_3 is a Wiener process. This stochastic differential equation has solutions (denoted formula_4 at time formula_5) distributed according to formula_6 in the large-time limit, making solving these dynamics relevant in sampling-focused applications such as classical molecular dynamics and machine learning.
Given a time step formula_7, the Leimkuhler-Matthews update scheme is compactly written as
formula_8
with initial condition formula_9, and where formula_10. The vector formula_11 is a vector of independent normal random numbers redrawn at each step, so formula_12 (where formula_13 denotes expectation). Despite being of equal cost to the Euler-Maruyama scheme (in terms of the number of evaluations of the function formula_14 per update), given some assumptions on formula_15 and formula_16, solutions have been shown to have a superconvergence property
formula_17
for constants formula_18 not depending on formula_19. This means that as formula_19 gets large, we effectively obtain second-order accuracy, with formula_20 error in computed expectations. For small time step formula_21 this can give significant improvements over the Euler-Maruyama scheme, at no extra cost.
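A minimal sketch of the update rule in code may make the scheme concrete. The function name and the quadratic test potential below are illustrative assumptions, not part of the original description:

```python
import numpy as np

def leimkuhler_matthews(grad_V, x0, sigma, dt, n_steps, rng=None):
    """Integrate dX = -grad V(X) dt + sigma dW with the LM update rule.

    The noise term averages the Gaussian draws of two consecutive steps,
    R_t and R_{t+dt}, exactly as in the update formula above.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    R_old = rng.standard_normal(x.shape)          # R_t
    trajectory = [x.copy()]
    for _ in range(n_steps):
        R_new = rng.standard_normal(x.shape)      # R_{t+dt}, redrawn each step
        x = x - grad_V(x) * dt + sigma * np.sqrt(dt) / 2.0 * (R_old + R_new)
        R_old = R_new                             # this draw is reused next step
        trajectory.append(x.copy())
    return np.array(trajectory)

# Illustrative quadratic potential V(x) = |x|^2 / 2, so grad V(x) = x.
traj = leimkuhler_matthews(lambda x: x, x0=[1.0, 0.0],
                           sigma=np.sqrt(2.0), dt=0.05, n_steps=10_000)
```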
Discussion.
Comparison to other schemes.
The obvious method for comparison is the Euler-Maruyama scheme as it has the same cost, requiring one evaluation of formula_22 per step. Its update is of the form
formula_23
with error (given some assumptions) of the form formula_24, for a constant formula_25 independent of formula_19. Compared to the above definition, the only difference between the schemes is the "one-step" averaged noise term, making it simple to implement.
For sufficiently small time step formula_26 and large enough time formula_19, it is clear that the LM scheme gives a smaller error than Euler-Maruyama. While there are many algorithms that can give reduced error compared to the Euler scheme (see e.g. Milstein, Runge-Kutta or Heun's method), these almost always come at an efficiency cost, requiring more computation in exchange for reducing the error. However, the Leimkuhler-Matthews scheme can give significantly reduced error with minimal change to the standard Euler scheme. The trade-off comes from the (relatively) limited scope of the stochastic differential equation it solves: formula_27 must be a scalar constant and the drift function must be of the form formula_22. The LM scheme is also not Markovian, as updates require more than just the state at time formula_19. However, we can recast the scheme as a Markov process by extending the space.
Markovian Form.
We can rewrite the algorithm in a Markovian form by extending the state space with a "momentum vector" formula_28 so that the overall state is formula_29 at time formula_19. Initializing the momentum to be a vector of formula_30 standard normal random numbers, we have
formula_31
formula_32
formula_33
where the middle step completely redraws the momentum so that each component is an independent normal random number. This scheme is Markovian, and has the same properties as the original LM scheme.
Applications.
The algorithm has applications in any area where the weak (i.e. average) properties of solutions to Brownian dynamics are required. This applies to any molecular simulation problem (such as classical molecular dynamics), but it can also apply to statistical sampling problems due to the properties of solutions at large times. In the limit of formula_34, solutions become distributed according to the probability distribution formula_35. Thus we can generate independent samples according to a required distribution by using formula_36 and running the LM algorithm until formula_19 is large. Such strategies can be efficient in (for instance) Bayesian inference problems.
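A sketch of this sampling strategy for a standard Gaussian target follows; all numerical choices, including taking the noise strength as the square root of two so that the stationary density matches the exponential of minus the energy in this convention, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
grad_V = lambda x: x              # V(x) = x**2 / 2, i.e. target pi(x) proportional to exp(-x**2 / 2)
sigma = np.sqrt(2.0)              # chosen so that the stationary density is proportional to exp(-V)
dt, n_steps = 0.1, 100_000

x, R_old, samples = 0.0, rng.standard_normal(), []
for _ in range(n_steps):
    R_new = rng.standard_normal()
    x += -grad_V(x) * dt + sigma * np.sqrt(dt) / 2.0 * (R_old + R_new)
    R_old = R_new
    samples.append(x)

samples = np.array(samples[2000:])        # discard an initial transient ("burn-in")
print(samples.mean(), samples.var())      # close to 0 and 1 for the Gaussian target
```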
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{d} X = -\\nabla V(X ) \\, \\mathrm{d} t + \\sigma \\, \\mathrm{d} W,"
},
{
"math_id": 1,
"text": " \\sigma>0"
},
{
"math_id": 2,
"text": " V(X) "
},
{
"math_id": 3,
"text": " W(t) "
},
{
"math_id": 4,
"text": "X(t) \\in \\mathbb{R}^N "
},
{
"math_id": 5,
"text": " t "
},
{
"math_id": 6,
"text": " \\pi(X) \\propto \\exp(-V(x)) "
},
{
"math_id": 7,
"text": "\\Delta t>0"
},
{
"math_id": 8,
"text": "X_{t+\\Delta t} = X_t -\\nabla V(X_t) \\Delta t + \\sigma\\frac{\\sqrt{\\Delta t}}2 \\, (R_t+R_{t+\\Delta t}),"
},
{
"math_id": 9,
"text": " X_0 := X(0) "
},
{
"math_id": 10,
"text": " X_t \\approx X(t) "
},
{
"math_id": 11,
"text": "R_t"
},
{
"math_id": 12,
"text": "\\text{E}[ R_t \\cdot R_{s}]=N\\delta_{ts}"
},
{
"math_id": 13,
"text": "\\text{E}[\\bullet]"
},
{
"math_id": 14,
"text": " \\nabla V(X) "
},
{
"math_id": 15,
"text": " \\Delta t,\\, V(X)"
},
{
"math_id": 16,
"text": "f(X)"
},
{
"math_id": 17,
"text": " \\text{E}[|f( X_t ) - f(X(t))|] \\leq C_1 e^{-\\lambda t} \\Delta t + C_2 \\Delta t^2 "
},
{
"math_id": 18,
"text": "C_k\\geq0,\\, \\lambda>0 "
},
{
"math_id": 19,
"text": "t"
},
{
"math_id": 20,
"text": "\\Delta t^2"
},
{
"math_id": 21,
"text": " \\Delta t"
},
{
"math_id": 22,
"text": "\\nabla V(X)"
},
{
"math_id": 23,
"text": "\\hat{X}_{t+\\Delta t} = \\hat{X}_t -\\nabla V(\\hat{X}_t) \\Delta t + \\sigma{\\sqrt{\\Delta t}} \\, R_t,"
},
{
"math_id": 24,
"text": " \\text{E}[|f(\\hat{X}_{t}) - f(X(t))|] \\leq C \\Delta t "
},
{
"math_id": 25,
"text": "C>0"
},
{
"math_id": 26,
"text": "\\Delta t"
},
{
"math_id": 27,
"text": "\\sigma"
},
{
"math_id": 28,
"text": "P_t\\in\\mathbb{R}^N"
},
{
"math_id": 29,
"text": "(X_t,P_t)"
},
{
"math_id": 30,
"text": "N"
},
{
"math_id": 31,
"text": "X'_{t+\\Delta t} = X_t -\\nabla V(X_t) \\Delta t + \\sigma\\frac{\\sqrt{\\Delta t}}2 \\, P_t,"
},
{
"math_id": 32,
"text": "P_{t+\\Delta t} \\sim \\text{Normal}(0,I),"
},
{
"math_id": 33,
"text": "X_{t+\\Delta t} = X'_{t+\\Delta t} + \\sigma\\frac{\\sqrt{\\Delta t}}2 \\, P_{t+\\Delta t},"
},
{
"math_id": 34,
"text": "t\\to\\infty"
},
{
"math_id": 35,
"text": "\\pi(X) \\propto \\exp(-V(X))"
},
{
"math_id": 36,
"text": "V(X) = -\\log(\\pi(X))"
}
] |
https://en.wikipedia.org/wiki?curid=62612416
|
62614138
|
Generalized probabilistic theory
|
A generalized probabilistic theory (GPT) is a general framework to describe the operational features of arbitrary physical theories. A GPT must specify what kind of physical systems one can find in the lab, as well as rules to compute the outcome statistics of any experiment involving labeled preparations, transformations and measurements. The framework of GPTs has been used to define hypothetical non-quantum physical theories which nonetheless possess quantum theory's most remarkable features, such as entanglement or teleportation. Notably, a small set of physically motivated axioms is enough to single out the GPT representation of quantum theory.
The mathematical formalism of GPTs has been developed since the 1950s and 1960s by many authors, and rediscovered independently several times. The earliest ideas are due to Segal and Mackey, although the first comprehensive and mathematically rigorous treatment can be traced back to the work of Ludwig, Dähn, and Stolz, all three based at the University of Marburg.
While the formalism in these earlier works is less similar to the modern one, already in the early 1970s the ideas of the Marburg school had matured and the notation had developed towards the modern usage, thanks also to the independent contribution of Davies and Lewis.
The books by Ludwig and the proceedings of a conference held in Marburg in 1973 offer a comprehensive account of these early developments.
The term "generalized probabilistic theory" itself was coined by Jonathan Barrett in 2007, based on the version of the framework introduced by Lucien Hardy.
Note that some authors use the term "operational probabilistic theory" (OPT). OPTs are an alternative way to define hypothetical non-quantum physical theories, based on the language of category theory, in which one specifies the axioms that should be satisfied by observations.
Definition.
A GPT is specified by a number of mathematical structures, namely: a collection of system types, a convex state space for each system type, a set of measurement outcomes (effects) assigning probabilities to states, a set of physical operations mapping states to states, and rules describing how systems compose.
It can be argued that if one can prepare a state formula_0 and a different state formula_1, then one can also toss a (possibly biased) coin which lands on one side with probability formula_2 and on the other with probability formula_3 and prepare either formula_0 or formula_1, depending on the side the coin lands on. The resulting state is a statistical mixture of the states formula_0 and formula_1 and in GPTs such statistical mixtures are described by convex combinations, in this case formula_4. For this reason all state spaces are assumed to be convex sets. Following a similar reasoning, one can argue that also the set of measurement outcomes and set of physical operations must be convex.
Additionally it is always assumed that measurement outcomes and physical operations are affine maps, i.e. that if formula_5 is a physical transformation, then we must have formula_6and similarly for measurement outcomes. This follows from the argument that we should obtain the same outcome if we first prepare a statistical mixture and then apply the physical operation, or if we prepare a statistical mixture of the outcomes of the physical operations.
Note that physical operations are a subset of all affine maps which transform states into states as we must require that a physical operation yields a valid state even when it is applied to a part of a system (the notion of "part" is subtle: it is specified by explaining how different system types compose and how the global parameters of the composite system are affected by local operations).
For practical reasons it is often assumed that a general GPT is embedded in a finite-dimensional vector space, although infinite-dimensional formulations exist.
Classical, quantum, and beyond.
Classical theory is a GPT where states correspond to probability distributions and both measurements and physical operations are stochastic maps. One can see that in this case all state spaces are simplexes.
Standard quantum information theory is a GPT where system types are described by a natural number formula_7 which corresponds to the complex Hilbert space dimension. States of the systems of Hilbert space dimension formula_7 are described by the normalized positive semidefinite matrices, i.e. by the density matrices. Measurements are identified with Positive Operator valued Measures (POVMs), and the physical operations are completely positive maps. Systems compose via the tensor product of the underlying complex Hilbert spaces.
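The affinity of measurement outcomes discussed above can be illustrated concretely in the quantum GPT. The sketch below, with an arbitrarily chosen qubit effect and states, checks that Born-rule probabilities respect convex mixtures:

```python
import numpy as np

# Two qubit states (density matrices) and one effect E of a two-outcome POVM {E, I - E}.
x = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)        # |0><0|
y = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)        # |+><+|
E = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)        # 0 <= E <= I

p = 0.3                                                       # coin bias
mixture = p * x + (1 - p) * y                                 # convex combination of states

prob_of_mixture = np.real(np.trace(E @ mixture))
mixture_of_probs = p * np.real(np.trace(E @ x)) + (1 - p) * np.real(np.trace(E @ y))
print(np.isclose(prob_of_mixture, mixture_of_probs))          # True: outcome statistics are affine
```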
Real quantum theory is the GPT which is obtained from standard quantum information theory by restricting the theory to real Hilbert spaces. It does not satisfy the axiom of local tomography.
The framework of GPTs has provided examples of consistent physical theories which cannot be embedded in quantum theory and indeed exhibit very non-quantum features. One of the first ones was Box-world, the theory with maximal non-local correlations. Other examples are theories with third-order interference and the family of GPTs known as generalized bits.
Many features that were considered purely quantum are actually present in all non-classical GPTs. These include the impossibility of universal broadcasting, i.e., the no-cloning theorem; the existence of incompatible measurements; and the existence of entangled states or entangled measurements.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "1-p"
},
{
"math_id": 4,
"text": "px+(1-p)y"
},
{
"math_id": 5,
"text": "\\Phi"
},
{
"math_id": 6,
"text": "\\Phi(px+(1-p)y) = p\\Phi(x) + (1-p) \\Phi(y)"
},
{
"math_id": 7,
"text": "D"
}
] |
https://en.wikipedia.org/wiki?curid=62614138
|
6261834
|
Half-value layer
|
A material's half-value layer (HVL), or half-value thickness, is the thickness of the material at which the intensity of radiation entering it is reduced by one half. HVL can also be expressed in terms of air kerma rate (AKR), rather than intensity: the half-value layer is the thickness of specified material that "attenuates the beam of radiation to an extent such that the AKR is reduced to one-half of its original value. In this definition the contribution of all scattered radiation, other than any [...] present initially in the beam concerned, is deemed to be excluded." Rather than AKR, measurements of air kerma, exposure, or exposure rate can be used to determine the half-value layer, as long as it is given in the description.
Half-value layer refers to the first half-value layer, where subsequent (i.e. second) half-value layers refer to the amount of specified material that will reduce the air kerma rate by one-half after material has been inserted into the beam that is equal to the sum of all previous half-value layers.
Quarter-value layer is the amount of specified material that reduces the air kerma rate (or exposure rate, exposure, air kerma, etc.) to one fourth of the value obtained without any test filters. The quarter-value layer is equal to the sum of the first and second half-value layers.
The homogeneity factor (HF) describes the polychromatic nature of the beam and is given by:
formula_0
The HF for a narrow beam will always be less than or equal to one (it is only equal to one in the case of a monoenergetic beam). In case of a narrow polychromatic beam, the HF is less than one because of beam hardening.
HVL is related to the mean free path; however, the mean free path is the average distance a unit of radiation travels in the material before being absorbed, whereas the HVL is the average amount of material needed to absorb 50% of all radiation (i.e., to reduce the intensity of the incident radiation by half). For exponential attenuation the two are proportional: one HVL equals ln 2 ≈ 0.693 mean free paths.
In the case of sound waves, HVL is the distance that it takes for the intensity of a sound wave to be reduced to one-half of its original value. The HVL of sound waves is determined by both the medium through which it travels, and the frequency of the beam. A "thin" half-value layer (or a quick drop of -3 dB) results from a high frequency sound wave and a medium with a high rate of attenuation, such as bone. HVL is measured in units of length.
A similar concept is the tenth-value layer or TVL. The TVL is the average amount of material needed to absorb 90% of all radiation, i.e., to reduce it to a tenth of the original intensity. 1 TVL is greater than or equal to log2(10) or approximately 3.32 HVLs, with equality achieved for a monoenergetic beam.
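For a narrow monoenergetic beam with linear attenuation coefficient μ, the transmitted intensity falls as I = I0 · 2^(−x/HVL), with HVL = ln 2 / μ and TVL = ln 10 / μ. A short numerical sketch, where the value of μ is purely illustrative:

```python
import numpy as np

mu = 0.5                          # illustrative linear attenuation coefficient, 1/cm
hvl = np.log(2.0) / mu            # half-value layer, cm
tvl = np.log(10.0) / mu           # tenth-value layer, cm
print(hvl, tvl, tvl / hvl)        # the ratio is log2(10), about 3.32, for a monoenergetic beam

x = 3.0 * hvl                     # three half-value layers of material
print(2.0 ** (-x / hvl))          # transmitted fraction: 1/8
```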
Here are example approximate half-value layers for a variety of materials against a source of gamma rays (Iridium-192):
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "HF=\\frac{1^{st} HVL}{2^{nd} HVL}"
}
] |
https://en.wikipedia.org/wiki?curid=6261834
|
62621991
|
GroundBIRD
|
GroundBIRD is an experiment to observe the cosmic microwave background at 145 and 220 GHz. It aims to observe the B-mode polarisation signal from inflation in the early universe. It is located at Teide Observatory, on the island of Tenerife in the Canary Islands.
Scientific goals.
The telescope was constructed to measure the B-mode signal in the polarisation of the Cosmic Microwave Background (CMB), in order to look for evidence of cosmic inflation in the early universe. It aims to observe the reionization bump at formula_0 and the recombination peak around formula_1. The name 'GroundBIRD' indicates that the telescope is ground-based, while BIRD stands for B-mode Imaging Radiation Detector. It is related to the future, similarly-named, LiteBIRD CMB satellite.
Telescope.
The telescope consists of two mirrors in a Mizuguchi-Dragone configuration, with a diameter of . The telescope is inside the cryostat, which is mounted on a rotation table, with a rotary joint that provides helium gas and electricity to the cryostat. The mirrors are cooled to using a Pulse tube refrigerator to reduce the thermal noise from the mirror surfaces.
The experiment uses microwave kinetic inductance detectors (MKIDs), which are cooled to 250 mK by a helium-3 sorption cooler within the cryostat, manufactured by Chase Research Cryogenics Ltd. The signals from the detectors are multiplexed, and around 100 detectors can be measured in both phase and amplitude with a single digital read-out system with a bandwidth of 200 MHz, recording 1,000 samples per second. The digital system initially used 12-bit ADCs and a Kintex-7 FPGA from Xilinx, and now uses Kintex UltraScale FPGAs. Raspberry Pis are used to monitor and control the telescope.
The cryostat rotates at 20 rpm (120° per second, one rotation every 3 seconds) to minimize 1/f noise. It observes at zenith angles up to 20°, mapping around 40% of the sky. The field of view is 10°, with an angular resolution of 0.5° FWHM at 145 GHz and 0.3° at 220 GHz. It will measure the CMB at multipoles formula_2.
The telescope was constructed at KEK in Japan. Test observations started in Japan in 2014. While it was originally intended that it would observe from the Atacama Desert in Chile, an agreement to install it at Teide Observatory was reached in 2016, at an altitude of . It was shipped to Tenerife in January 2019. In February 2020, the experiment was visited by Kenji Hiramatsu, the Japanese Ambassador to Spain.
Collaboration.
The collaboration includes scientists from:
Funding.
The project is funded by:
with additional support from:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "l<20"
},
{
"math_id": 1,
"text": "l=200"
},
{
"math_id": 2,
"text": "6<l<300"
}
] |
https://en.wikipedia.org/wiki?curid=62621991
|
6262236
|
Secure two-party computation
|
Secure two-party computation (2PC), also known as secure function evaluation, is a sub-problem of secure multi-party computation (MPC) that has received special attention from researchers because of its close relation to many cryptographic tasks. The goal of 2PC is to create a generic protocol that allows two parties to jointly compute an arbitrary function on their inputs without sharing the value of their inputs with the opposing party. One of the best-known examples of 2PC is Yao's Millionaires' problem, in which two parties, Alice and Bob, are millionaires who wish to determine who is wealthier without revealing their wealth. Formally, Alice has wealth formula_0, Bob has wealth formula_1, and they wish to compute formula_2 without revealing the values formula_0 or formula_1.
Yao's garbled circuit protocol for two-party computation only provided security against passive adversaries. One of the first general solutions for achieving security against an active adversary was introduced by Goldreich, Micali and Wigderson, by applying zero-knowledge proofs to enforce semi-honest behavior. This approach was considered impractical for years due to its high complexity overhead. However, significant improvements have been made toward applying this method in 2PC, and Abascal, Faghihi Sereshgi, Hazay, Yuval Ishai and Venkitasubramaniam gave the first efficient protocol based on this approach. Other types of 2PC protocols that are secure against active adversaries were proposed by Yehuda Lindell and Benny Pinkas; Ishai, Manoj Prabhakaran and Amit Sahai; and Jesper Buus Nielsen and Claudio Orlandi. Another solution for this problem, which explicitly works with committed inputs, was proposed by Stanisław Jarecki and Vitaly Shmatikov.
Security.
The security of a two-party computation protocol is usually defined through a comparison with an idealised scenario that is secure by definition. The idealised scenario involves a trusted party that collects the inputs of the two parties (often called the client and the server) over secure channels and returns the result if neither party chooses to abort. The cryptographic two-party computation protocol is secure if it behaves no worse than this ideal protocol, but without the additional trust assumptions. This is usually modeled using a simulator. The task of the simulator is to act as a wrapper around the idealised protocol to make it appear like the cryptographic protocol. The simulation succeeds with respect to an information-theoretic (respectively, computationally bounded) adversary if the output of the simulator is statistically close to (respectively, computationally indistinguishable from) the output of the cryptographic protocol. A two-party computation protocol is secure if for every adversary there exists a successful simulator.
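A toy sketch of the ideal world for Yao's millionaires' problem may clarify what the simulator is compared against; this models only the trusted third party, not a cryptographic protocol, and all names and values are illustrative:

```python
def ideal_millionaires(a: int, b: int) -> bool:
    """Trusted party of the ideal world: it receives both inputs over
    secure channels and returns only whether a >= b.

    A secure 2PC protocol must reveal no more to either party than this
    single output bit (together with that party's own input) already does.
    """
    return a >= b

print(ideal_millionaires(5_000_000, 7_000_000))   # False: Alice is not the wealthier one
```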
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "a \\geq b"
}
] |
https://en.wikipedia.org/wiki?curid=6262236
|
6262575
|
Stretched exponential function
|
The stretched exponential function formula_0 is obtained by inserting a fractional power law into the exponential function. In most applications, it is meaningful only for arguments t between 0 and +∞. With "β" = 1, the usual exponential function is recovered. With a "stretching exponent" "β" between 0 and 1, the graph of log "f" versus "t" is characteristically "stretched", hence the name of the function. The compressed exponential function (with "β" > 1) has less practical importance, with the notable exception of "β" = 2, which gives the normal distribution.
In mathematics, the stretched exponential is also known as the complementary cumulative Weibull distribution. The stretched exponential is also the characteristic function, basically the Fourier transform, of the Lévy symmetric alpha-stable distribution.
In physics, the stretched exponential function is often used as a phenomenological description of relaxation in disordered systems. It was first introduced by Rudolf Kohlrausch in 1854 to describe the discharge of a capacitor; thus it is also known as the Kohlrausch function. In 1970, G. Williams and D.C. Watts used the Fourier transform of the stretched exponential to describe dielectric spectra of polymers; in this context, the stretched exponential or its Fourier transform are also called the Kohlrausch–Williams–Watts (KWW) function. The Kohlrausch–Williams–Watts (KWW) function corresponds to the time domain charge response of the main dielectric models, such as the Cole–Cole equation, the Cole–Davidson equation, and the Havriliak–Negami relaxation, for small time arguments.
In phenomenological applications, it is often not clear whether the stretched exponential function should be used to describe the differential or the integral distribution function—or neither. In each case, one gets the same asymptotic decay, but a different power law prefactor, which makes fits more ambiguous than for simple exponentials. In a few cases, it can be shown that the asymptotic decay is a stretched exponential, but the prefactor is usually an unrelated power.
Mathematical properties.
Moments.
Following the usual physical interpretation, we interpret the function argument "t" as time, and "f"β("t") is the differential distribution. The area under the curve can thus be interpreted as a "mean relaxation time". One finds
formula_1
where Γ is the gamma function. For exponential decay, ⟨"τ"⟩ = "τ""K" is recovered.
The higher moments of the stretched exponential function are
formula_2
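These moment formulas are easy to check numerically; the sketch below, with arbitrarily chosen β and τK, compares the integrals against the gamma-function expressions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

beta, tau_K = 0.5, 2.0            # illustrative parameter values

# Mean relaxation time <tau>: integral of exp(-(t/tau_K)**beta) over t.
numeric, _ = quad(lambda t: np.exp(-(t / tau_K) ** beta), 0.0, np.inf)
analytic = tau_K / beta * gamma(1.0 / beta)
print(numeric, analytic)          # both 4.0 for these parameters

# Higher moment, n = 2.
n = 2
numeric_n, _ = quad(lambda t: t ** (n - 1) * np.exp(-(t / tau_K) ** beta), 0.0, np.inf)
analytic_n = tau_K ** n / beta * gamma(n / beta)
print(numeric_n, analytic_n)      # both 48.0 for these parameters
```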
Distribution function.
In physics, attempts have been made to explain stretched exponential behaviour as a linear superposition of simple exponential decays. This requires a nontrivial distribution of relaxation times, "ρ"("u"), which is implicitly defined by
formula_3
Alternatively, a distribution formula_4 is used.
"ρ" can be computed from the series expansion:
formula_5
For rational values of "β", "ρ"("u") can be calculated in terms of elementary functions. But the expression is in general too complex to be useful except for the case "β" = 1/2 where
formula_6
Figure 2 shows the same results plotted in both a linear and a log representation. The curves converge to a Dirac delta function peaked at "u" = 1 as "β" approaches 1, corresponding to the simple exponential function.
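The β = 1/2 case can also be verified numerically: the sketch below, with an arbitrary value of t, checks that the superposition of simple exponentials weighted by ρ(u) reproduces the stretched exponential:

```python
import numpy as np
from scipy.integrate import quad

# rho(u) = G(u)/u for beta = 1/2, as given above.
rho = lambda u: np.exp(-u / 4.0) / (2.0 * np.sqrt(np.pi) * np.sqrt(u))

t = 3.0                                               # arbitrary test argument
lhs = np.exp(-np.sqrt(t))                             # stretched exponential with beta = 1/2
rhs, _ = quad(lambda u: rho(u) * np.exp(-t / u), 0.0, np.inf)
print(lhs, rhs)                                       # the two values agree
```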
The moments of the original function can be expressed as
formula_7
The first logarithmic moment of the distribution of simple-exponential relaxation times is
formula_8
where Eu is the Euler constant.
Fourier transform.
To describe results from spectroscopy or inelastic scattering, the sine or cosine Fourier transform of the stretched exponential is needed. It must be calculated either by numeric integration, or from a series expansion. The series here as well as the one for the distribution function are special cases of the Fox–Wright function. For practical purposes, the Fourier transform may be approximated by the Havriliak–Negami function, though nowadays the numeric computation can be done so efficiently that there is no longer any reason not to use the Kohlrausch–Williams–Watts function in the frequency domain.
History and further applications.
As said in the introduction, the stretched exponential was introduced by the German physicist Rudolf Kohlrausch in 1854 to describe the discharge of a capacitor (Leyden jar) that used glass as dielectric medium. The next documented usage is by Friedrich Kohlrausch, son of Rudolf, to describe torsional relaxation. A. Werner used it in 1907 to describe complex luminescence decays; Theodor Förster in 1949 as the fluorescence decay law of electronic energy donors.
Outside condensed matter physics, the stretched exponential has been used to describe the removal rates of small, stray bodies in the solar system, the diffusion-weighted MRI signal in the brain, and the production from unconventional gas wells.
In probability.
If the integrated distribution is a stretched exponential, the normalized probability density function is given by
formula_9
Note that confusingly some authors have been known to use the name "stretched exponential" to refer to the Weibull distribution.
Modified functions.
A modified stretched exponential function
formula_10
with a slowly "t"-dependent exponent "β" has been used for biological survival curves.
Wireless Communications.
In wireless communications, a scaled version of the stretched exponential function has been shown to appear in the Laplace Transform for the interference power formula_11 when the transmitters' locations are modeled as a 2D Poisson Point Process with no exclusion region around the receiver.
The Laplace transform can be written for arbitrary fading distribution as follows:
formula_12
where formula_13 is the power of the fading, formula_14 is the path loss exponent, formula_15 is the density of the 2D Poisson Point Process, formula_16 is the Gamma function, and formula_17 is the expectation of the variable formula_18.
The same reference also shows how to obtain the inverse Laplace transform for the stretched exponential formula_19 for a higher-order integer formula_20 from lower-order integers formula_21 and formula_22.
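As a concrete instance, if the fading power is exponentially distributed with unit mean (Rayleigh fading), its fractional moment equals Γ(1 + 2/η) and the Laplace transform is a stretched exponential in "s". The fading model and parameter values in the sketch below are assumptions for illustration, not taken from the cited reference:

```python
import numpy as np
from scipy.special import gamma

lam, eta = 0.01, 4.0                 # transmitter density and path-loss exponent (illustrative)
beta = 2.0 / eta                     # stretching exponent of the Laplace transform
E_g = gamma(1.0 + beta)              # E[g^(2/eta)] for unit-mean exponential (Rayleigh) fading
t_coeff = np.pi * lam * E_g * gamma(1.0 - beta)

L_I = lambda s: np.exp(-t_coeff * s ** beta)
print(L_I(1.0), L_I(10.0))
```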
Internet Streaming.
The stretched exponential has been used to characterize Internet media accessing patterns, such as YouTube and other stable streaming media sites. The commonly agreed power-law accessing patterns of Web workloads mainly reflect text-based content Web workloads, such as daily updated news sites.
|
[
{
"math_id": 0,
"text": "f_\\beta (t) = e^{ -t^\\beta }"
},
{
"math_id": 1,
"text": "\\langle\\tau\\rangle \\equiv \\int_0^\\infty dt\\, e^{-(t/\\tau_K)^\\beta} = {\\tau_K \\over \\beta } \\Gamma {\\left( \\frac 1 \\beta \\right)}"
},
{
"math_id": 2,
"text": "\\langle\\tau^n\\rangle \\equiv \\int_0^\\infty dt\\, t^{n-1}\\, e^{-(t/\\tau_K)^\\beta} = {{\\tau_K}^n \\over \\beta }\\Gamma {\\left(\\frac n \\beta \\right)}."
},
{
"math_id": 3,
"text": "e^{-t^\\beta} = \\int_0^\\infty du\\,\\rho(u)\\, e^{-t/u}."
},
{
"math_id": 4,
"text": "G = u \\rho (u)"
},
{
"math_id": 5,
"text": " \\rho (u ) = -{ 1 \\over \\pi u} \\sum_{k = 0}^\\infty {(-1)^k \\over k!} \\sin (\\pi \\beta k)\\Gamma (\\beta k + 1) u^{\\beta k}"
},
{
"math_id": 6,
"text": "G(u) = u \\rho(u) = { 1 \\over 2\\sqrt{\\pi}} \\sqrt{u} e^{-u/4}\n"
},
{
"math_id": 7,
"text": "\\langle\\tau^n\\rangle = \\Gamma(n) \\int_0^\\infty d\\tau\\, t^n \\, \\rho(\\tau)."
},
{
"math_id": 8,
"text": "\\langle\\ln\\tau\\rangle = \\left( 1 - {1 \\over \\beta} \\right) {\\rm Eu} + \\ln \\tau_K "
},
{
"math_id": 9,
"text": " p(\\tau \\mid \\lambda, \\beta)~d\\tau = \\frac{\\lambda}{\\Gamma(1 + \\beta^{-1})} ~ e^{-(\\tau \\lambda)^\\beta} ~ d\\tau"
},
{
"math_id": 10,
"text": "f_\\beta (t) = e^{ -t^{\\beta(t)} }"
},
{
"math_id": 11,
"text": "I"
},
{
"math_id": 12,
"text": " L_I(s) = \\exp\\left(-\\pi \\lambda \\mathbb{E}{\\left[g^\\frac{2}{\\eta} \\right]} \\Gamma{\\left(1 - \\frac{2}{\\eta} \\right)} s^\\frac{2}{\\eta}\\right) = \\exp\\left(- t s^\\beta \\right)"
},
{
"math_id": 13,
"text": "g"
},
{
"math_id": 14,
"text": "\\eta"
},
{
"math_id": 15,
"text": "\\lambda"
},
{
"math_id": 16,
"text": "\\Gamma(\\cdot)"
},
{
"math_id": 17,
"text": "\\mathbb{E}[x]"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "\\exp\\left(-s^\\beta \\right)"
},
{
"math_id": 20,
"text": "\\beta = \\beta_q \\beta_b "
},
{
"math_id": 21,
"text": "\\beta_a"
},
{
"math_id": 22,
"text": "\\beta_b"
}
] |
https://en.wikipedia.org/wiki?curid=6262575
|
62626897
|
Jean-Baptiste Leblond
|
French physicist and mathematician
Jean-Baptiste Leblond (born 21 May 1957 in Boulogne-Billancourt) is a French materials scientist, member of the Mechanical Modelling Laboratory of the Pierre-et-Marie-Curie University (MISES) and professor at the same university.
Biography.
Leblond attended his scientific preparatory classes, notably in the special M' mathematics class at the Lycée Louis-le-Grand and was admitted to the École normale supérieure de la rue d'Ulm, mathematics option, in 1976. He then joined the Corps des mines and became a doctor of physical sciences.
Since 2005, he has been a member of the French Academy of Sciences and a founding member of the French Academy of Technologies (2000). He is a senior member of the Institut universitaire de France.
Leblond's kinetic theory.
This is an approach established by Leblond in his work on phase transformations.
The theory proposes an evolutionary model to quantify the composition of the different phases of a crystalline material during heat treatment.
The method is based on experimentally established CRT (Continuous Cooling Transformation) diagrams to compose TTT (Time-Temperature-Transformation) diagrams, which are widely used for numerical simulation or for the manufacture of industrial parts.
The theory posits the equivalent volume fraction of a constituent "yeq" as the stationary solution of the evolution equations describing the phase change kinetics:
formula_0 stationary solution
We then suppose, under anisothermal conditions, that the real fraction "y" remains close to "yeq", so that "f" can be approximated by a first-order Taylor expansion:
formula_1
The evolution is then given by:
formula_2
τ is determined on the one hand by the incubation period (critical time) and on the other hand by the cooling rate (the rate of change of "T").
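A minimal numerical sketch of this first-order kinetics: the phase fraction relaxes towards its equilibrium value along a prescribed cooling path, integrated with an explicit Euler step. The functions y_eq(T), tau(T) and the cooling schedule are purely illustrative assumptions:

```python
import numpy as np

def y_eq(T):                 # illustrative equilibrium fraction of the product phase
    return np.clip((800.0 - T) / 400.0, 0.0, 1.0)

def tau(T):                  # illustrative characteristic time, in seconds
    return 5.0 + 0.01 * T

dt, y, T, cooling_rate = 0.1, 0.0, 900.0, 5.0        # cooling rate in degrees per second
history = []
for _ in range(1000):
    T -= cooling_rate * dt                           # prescribed thermal history T(t)
    y += dt * (y_eq(T) - y) / tau(T)                 # relaxation towards y_eq on time scale tau
    history.append((T, y))
print(history[-1])                                   # final temperature and phase fraction
```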
There are also other formalisms, such as the theories of Kirkaldy, Johnson-Mehl-Avrami or Waeckel. One of the most classical, and oldest, is that of Johnson-Mehl-Avrami. The model proposed by Jean-Baptiste Leblond is in fact based on this classical model, generalizing it on two points: 1) it considers any number of phases and transformations between these phases, and not just two phases and a single transformation; 2) the transformations can remain, after an infinitely long time, partial, and not necessarily complete as in the Johnson-Mehl-Avrami model (this is linked to the existence, in the new model, of fractions "at equilibrium" of the phases towards which the system evolves after an infinite time, not necessarily equal to 0 or 1 but able to take any value between these limits).
The Leblond model is designed for applications in the thermometallurgical treatment of steels; this explains its success with the modellers of these treatments.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\dot{y} = f ( y,T ) \\quad et \\quad f ( y_{eq},T )=0 \\, \\rightarrow "
},
{
"math_id": 1,
"text": "f ( y,T )= f ( y_{eq},T )+ \\frac{\\partial f ( y_{eq},T )}{\\partial y}(y - y_{eq}) "
},
{
"math_id": 2,
"text": "\\dot{y} = \\frac{y - y_{eq}}{ \\tau (T)} \\quad et \\quad \\frac{1}{\\tau} = - \\frac{\\partial f ( y_{eq},T )}{\\partial y}"
}
] |
https://en.wikipedia.org/wiki?curid=62626897
|
62627023
|
Interfacial rheology
|
Study of flow of matter at interfaces
Interfacial rheology is a branch of rheology that studies the flow of matter at the interface between a gas and a liquid or at the interface between two immiscible liquids. Measurements are performed with surfactants, nanoparticles or other surface-active compounds present at the interface. Unlike in bulk rheology, the deformation of the bulk phase is not of interest in interfacial rheology and its effect is meant to be minimized. Instead, the flow of the surface-active compounds is of interest.
The deformation of the interface can be done either by changing the size or shape of the interface. Therefore interfacial rheological methods can be divided into two categories: dilational and shear rheology methods.
Interfacial dilational rheology.
In dilatational interfacial rheology, the size of the interface is changing over time. The change in the surface stress or surface tension of the interface is being measured during this deformation. Based on the response, interfacial viscoelasticity is calculated according to well established theories:
formula_0
formula_1
formula_2
where
Most commonly, the measurement of dilational interfacial rheology is conducted with an optical tensiometer combined to a pulsating drop module. A pendant droplet with surface active molecules in it is formed and pulsated sinusoidally. The changes in the interfacial area causes changes in the molecular interactions which then changes the surface tension. Typical measurements include performing a frequency sweep for the solution to study the kinetics of the surfactant.
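A sketch of how the dilatational moduli can be extracted from such an oscillating-drop measurement: the surface-tension response to a sinusoidal relative area change is projected onto its in-phase and out-of-phase components. The data below are synthetic and all parameter values are illustrative:

```python
import numpy as np

# Synthetic oscillating-drop data: relative area perturbation and surface tension.
f = 0.1                                       # oscillation frequency, Hz
t = np.linspace(0.0, 50.0, 5000)              # time, s (five full periods)
area_amp = 0.05
dA_over_A = area_amp * np.sin(2 * np.pi * f * t)
gamma = 50.0 + 0.8 * np.sin(2 * np.pi * f * t + 0.4)      # mN/m, with phase lag delta = 0.4 rad

# Project the surface-tension oscillation onto sine/cosine at the driving frequency.
ref_sin, ref_cos = np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)
a = 2.0 * np.mean((gamma - gamma.mean()) * ref_sin)       # in-phase amplitude
b = 2.0 * np.mean((gamma - gamma.mean()) * ref_cos)       # out-of-phase amplitude

amplitude, delta = np.hypot(a, b), np.arctan2(b, a)
E_abs = amplitude / area_amp                              # |E| = d(gamma) / d(ln A)
E_storage, E_loss = E_abs * np.cos(delta), E_abs * np.sin(delta)
print(E_abs, E_storage, E_loss)                           # about 16, 14.7 and 6.2 mN/m
```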
In another measurement method suitable especially for insoluble surfactants, a Langmuir trough is used in an oscillating barrier mode. In this case, two barriers that limit the interfacial area are being oscillated sinusoidally and the change in surface tension measured.
Interfacial shear rheology.
In interfacial shear rheology, the interfacial area remains the same throughout the measurement. Instead, the interfacial area is sheared in order to be able to measure the surface stress present. The equations are similar to those of dilatational interfacial rheology, but the shear modulus is usually denoted by G rather than E as in the dilatational methods. In general, G and E are not equal.
Since interfacial rheological properties are relatively weak, this poses challenges for the measurement equipment. For high sensitivity, it is essential to maximize the contribution of the interface while minimizing the contribution of the bulk phase. The Boussinesq number, Bo, indicates how sensitive a measurement method is in detecting the interfacial viscoelasticity.
The commercialized measurement techniques for interfacial shear rheology include magnetic needle method, rotating ring method and rotating bicone method. The magnetic needle method, developed by Brooks et al., has the highest Boussinesq number of the commercialized methods. In this method, a thin magnetic needle is oscillated at the interface using a magnetic field. By following the movement of the needle with a camera, the viscoelastic properties of the interface can be detected. This method is often used in combination with a Langmuir trough in order to be able to conduct the experiment as a function of the packing density of the molecules or particles.
Applications.
When surfactants are present in a liquid, they tend to adsorb at the liquid-air or liquid-liquid interface. Interfacial rheology deals with the response of the adsorbed interfacial layer to deformation. The response depends on the layer composition, and thus interfacial rheology is relevant in many applications in which adsorbed layers play a crucial role, for example in the development of surfactants, foams and emulsions. Many biological systems like pulmonary surfactant and meibum depend on interfacial viscoelasticity for their functionality. Interfacial rheology has been employed to understand the structure-function relationship of these physiological interfaces, to study how compositional deviations cause diseases such as infant respiratory distress syndrome or dry eye syndrome, and to help develop therapies like artificial pulmonary surfactant replacements and eye drops.
Interfacial rheology enables the study of surfactant kinetics, and the viscoelastic properties of the adsorbed interfacial layer correlate well with emulsion and foam stability. Surfactants and surface-active polymers are used for stabilising emulsions and foams in the food and cosmetic industries. Proteins are surface active and tend to adsorb at the interface, where they can change conformation and influence the interfacial properties. Natural surfactants like asphaltenes and resins stabilize water-oil emulsions in crude oil applications, and by understanding their behavior the crude oil separation process can be enhanced. The efficiency of enhanced oil recovery can also be optimized.
Specialized setups that allow bulk exchange during interfacial rheology measurements are used to investigate the response of adsorbed proteins or surfactants upon changes in pH or salinity. These setups can also be used to mimic more complex conditions like the gastric environment, to investigate the in vitro displacement or enzymatic hydrolysis of polymers adsorbed at oil-water interfaces and to understand how the respective emulsions are digested in the stomach.
Interfacial rheology allows the probing of bacterial adsorption and biofilm formation at liquid-air or liquid-liquid interfaces.
In food science, interfacial rheology was used to understand the stability of emulsions like mayonnaise, the stability of espresso foam, the film formed on black tea, or the formation of kombucha biofilms.
|
[
{
"math_id": 0,
"text": "\\left\\vert E \\right\\vert = {d\\gamma \\over dlnA}=A{d\\gamma \\over dA}"
},
{
"math_id": 1,
"text": "\\begin{align} E' & = \\left\\vert E \\right\\vert\\cos\\delta \\end{align}"
},
{
"math_id": 2,
"text": "\\begin{align} E'' & = \\left\\vert E \\right\\vert\\sin\\delta \\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=62627023
|
62627139
|
Yves Pomeau
|
French physicist
Yves Pomeau, born in 1942, is a French mathematician and physicist, emeritus research director at the CNRS and corresponding member of the French Academy of sciences. He was one of the founders of the Laboratoire de Physique Statistique, École Normale Supérieure, Paris. He is the son of literature professor René Pomeau.
Career.
Yves Pomeau did his state thesis in plasma physics, almost without any adviser, at the University of Orsay (France) in 1970. After his thesis, he spent a year as a postdoc with Ilya Prigogine in Brussels.
He was a researcher at the CNRS from 1965 to 2006, ending his career as DR0 in the Physics Department of the Ecole Normale Supérieure (ENS) (Statistical Physics Laboratory) in 2006.
He was a lecturer in physics at the École Polytechnique for two years (1982–1984), then a scientific expert with the Direction générale de l'armement until January 2007.
He was Professor, with tenure, part-time at the Department of Mathematics, University of Arizona, from 1990 to 2008.
He was visiting scientist at Schlumberger–Doll Laboratories (Connecticut, USA) from 1983 to 1984.
He was a visiting professor at MIT in Applied Mathematics in 1986 and in Physics at UC San Diego in 1993.
He was Ulam Scholar at CNLS, Los Alamos National Lab, in 2007–2008.
He has written 3 books, and published around 400 scientific articles.
"Yves Pomeau occupies a central and unique place in modern statistical physics. His work has had a profound influence in several areas of physics, and in particular on the mechanics of continuous media. His work, nourished by the history of scientific laws, is imaginative and profound. Yves Pomeau combines a deep understanding of physical phenomena with varied and elegant mathematical descriptions. Yves Pomeau is one of the most recognized French theorists at the interface of physics and mechanics, and his pioneering work has opened up many avenues of research and has been a continuous source of inspiration for several generations of young experimental physicists and theorists worldwide."
Research.
In his thesis he showed that in a dense fluid the interactions are different from what they are at equilibrium and propagate through hydrodynamic modes, which leads to the divergence of transport coefficients in 2 spatial dimensions.
This aroused his interest in fluid mechanics, and in the transition to turbulence. Together with Paul Manneville they discovered a new mode of transition to turbulence, the transition by temporal intermittency, which was confirmed by numerous experimental observations and CFD simulations. This is the so-called Pomeau–Manneville scenario, associated with the Pomeau-Manneville maps.
In papers published in 1973 and 1976, Jean Hardy, Pomeau and Olivier de Pazzis introduced the first lattice Boltzmann model, which is called the HPP model after the authors. Generalizing ideas from his thesis, together with Uriel Frisch and Brosl Hasslacher, they found a very simplified microscopic fluid model (FHP model) which allows simulating very efficiently the complex movements of a real fluid. He was a pioneer of lattice Boltzmann methods and played a historical role in the timeline of computational physics.
Reflecting on the situation of the transition to turbulence in parallel flows, he showed that turbulence is caused by a contagion mechanism, and not by local instability. Fronts can be static or mobile depending on the conditions of the system, and the cause of the motion can be the variation of a free energy, where the most energetically favorable state invades the less favorable one. The consequence is that this transition belongs to the class of directed percolation phenomena in statistical physics, which has also been amply confirmed by experimental and numerical studies.
In dynamical systems theory, the structure and length of the attractors of a network correspond to the dynamic phase of the network. The stability of Boolean networks depends on the connections of their nodes. A Boolean network can exhibit stable, critical or chaotic behavior. This phenomenon is governed by a critical value of the average number of connections of nodes (formula_0), and can be characterized by the Hamming distance as distance measure. If formula_1 for every node, the transition between the stable and chaotic range depends on formula_2. Bernard Derrida and Yves Pomeau proved that the critical value of the average number of connections is formula_3. For example, for unbiased nodes with p = 1/2 this gives a critical connectivity of 2.
A droplet of nonwetting viscous liquid moves on an inclined plane by rolling along it. Together with Lakshminarayanan Mahadevan, he gave a scaling law for the uniform speed of such a droplet. With Christiane Normand and Manuel García Velarde, he studied convective instability. Apart from simple situations, capillarity is still an area where fundamental questions remain. He showed that the discrepancies appearing in the hydrodynamics of the moving contact line on a solid surface could only be eliminated by taking into account the evaporation/condensation near this line. Capillary forces are almost always insignificant in solid mechanics. Nevertheless, with Serge Mora and collaborators they have shown theoretically and experimentally that soft gel filaments are subject to Rayleigh-Plateau instability, an instability never observed before for a solid. In collaboration with his former PhD student Basile Audoly and Henri Berestycki, he studied the speed of the propagation of a reaction front in a fast steady flow with a given structure in space. With Basile Audoly and Martine Ben Amar, Pomeau developed a theory of large deformations of elastic plates which led them to introduce the concept of the ""d"-cone", that is, a geometrical cone preserving the overall developability of the surface, an idea now taken up by the solid mechanics community.
The theory of superconductivity is based on the idea of the formation of pairs of electrons that become more or less bosons undergoing Bose-Einstein condensation. This pair formation would explain the halving of the flux quantum in a superconducting loop. Together with Len Pismen and Sergio Rica they have shown that, going back to Onsager's idea explaining the quantization of the circulation in fundamental quantum states, it is not necessary to use the notion of electron pairs to understand this halving of the circulation quantum. He also analyzed the onset of BEC from the point of view of kinetic theory. Whereas the kinetic equation for a dilute Bose gas had been known for many years, it was not clear how it could describe what happens when the gas is cooled below the transition temperature. At this temperature the gas acquires a macroscopic component in the quantum ground state, as had been predicted by Einstein long ago. Pomeau and collaborators showed that the solution of the kinetic equation becomes singular at zero energy, and they also found how the density of the condensate grows with time after the transition. They also derived the kinetic equation for the Bogoliubov excitations of Bose-Einstein condensates, where they found three collisional processes. Before the surge of interest in super-solids started by Moses Chan's experiments, they had shown in an early simulation that a slightly modified NLS equation yields a fair representation of super-solids. With Alan C. Newell, he studied turbulent crystals in macroscopic systems.
Among his more recent work, one should single out that concerning a phenomenon typically out of equilibrium: the emission of photons by an atom maintained in an excited state by an intense field that creates Rabi oscillations. The theory of this phenomenon requires a precise consideration of the statistical concepts of quantum mechanics in a theory satisfying its fundamental constraints. With Martine Le Berre and Jean Ginibre, he showed that the correct description is a Kolmogorov equation based on the existence of a small parameter, the ratio of the photon emission rate to the atomic frequency itself.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_{c}"
},
{
"math_id": 1,
"text": " p_{i}=p=const. "
},
{
"math_id": 2,
"text": " p "
},
{
"math_id": 3,
"text": " K_{c}=1/[2p(1-p)] "
}
] |
https://en.wikipedia.org/wiki?curid=62627139
|
62639397
|
Volume correction factor
|
In thermodynamics, the Volume Correction Factor (VCF), also known as Correction for the effect of Temperature on Liquid (CTL), is a standardized computed factor used to correct for the thermal expansion of fluids, primarily, liquid hydrocarbons at various temperatures and densities. It is typically a number between 0 and 2, rounded to five decimal places which, when multiplied by the observed volume of a liquid, will return a "corrected" value standardized to a base temperature (usually 60 °Fahrenheit or 15 °Celsius).
Conceptualization.
In general, VCF / CTL values have an inverse relationship with observed temperature relative to the base temperature. That is, observed temperatures above 60 °F (or the base temperature used) typically correlate with a correction factor below "1", while temperatures below 60 °F correlate with a factor above "1". This behaviour has its basis in the kinetic theory of matter and the thermal expansion of matter, which states that as the temperature of a substance rises, so does the average kinetic energy of its molecules. As such, a rise in kinetic energy requires more space between the particles of a given substance, which leads to its physical expansion.
Conceptually, this makes sense when applying the VCF to observed volumes. Observed temperatures below the base temperature generate a factor above "1", indicating the corrected volume must increase to account for the contraction of the substance relative to the base temperature. The opposite is true for observed temperatures above the base temperature, generating factors below "1" to account for the expansion of the substance relative to the base temperature.
Exceptions.
While the VCF is primarily used for liquid hydrocarbons, the theory and principles behind it apply to most liquids, with some exceptions. As a general principle, most liquid substances will contract in volume as temperature drops. However, certain substances, water for example, contain unique angular structures at the molecular level. As such, when these substances reach temperatures just above their freezing point, they begin to expand, since the angle of the bonds prevents the molecules from tightly fitting together, resulting in more empty space between the molecules in a solid state. Other substances which exhibit similar properties include silicon, bismuth, antimony and germanium.
While these are the exceptions to general principles of thermal expansion and contraction, they would seldom, if ever, be used in conjunction with VCF / CTL, as the correction factors are dependent upon specific constants, which are further dependent on liquid hydrocarbon classifications and densities.
Formula and usage.
The formula for Volume Correction Factor is commonly defined as:
formula_0
Where:
Usage.
In standard applications, computing the VCF or CTL requires the observed temperature of the product, and its API gravity at 60 °F. Once calculated, the corrected volume is the product of the VCF and the observed volume.
formula_20
Since API gravity is an inverse measure of a liquid's density relative to that of water, it can be calculated by first dividing the liquid's density by the density of water at a base temperature (usually 60 °F) to compute Specific Gravity (SG), then converting the Specific Gravity to Degrees API as follows: formula_21
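A sketch of how these formulas are applied in practice is given below. The K constants depend on the product class and must be taken from the relevant API tables; the values used here are placeholders for illustration only:

```python
import math

def vcf(rho_60, delta_T, K0, K1, K2=0.0, delta_corr=0.0):
    """Volume correction factor (CTL) for a temperature difference delta_T
    relative to the 60 F base; rho_60 is the density at 60 F in kg/m^3.
    The K constants are product-class dependent and must come from the
    API tables -- the values passed below are placeholders."""
    alpha_60 = K0 / rho_60 ** 2 + K1 / rho_60 + K2
    return math.exp(-alpha_60 * delta_T * (1.0 + 0.8 * alpha_60 * (delta_T + delta_corr)))

def api_gravity(rho_60):
    sg = rho_60 / 999.016                 # specific gravity relative to water at 60 F
    return 141.5 / sg - 131.5

rho_60 = 850.0                            # observed density at 60 F, kg/m^3
factor = vcf(rho_60, delta_T=25.0, K0=341.0957, K1=0.0)    # illustrative K values
print(api_gravity(rho_60), factor, factor * 10_000)        # corrected volume of 10,000 units
```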
Traditionally, VCF / CTL are found by matching the observed temperature and API gravity within standardized books and tables published by the American Petroleum Institute. These methods are often more time-consuming than entering the values into a VCF calculator; however, due to the variance in methodology and computation of constants, the tables published by the American Petroleum Institute are preferred when dealing with the purchase and sale of crude oil and residual fuels.
Formulas for Reference.
Density of pure water at 60 °F formula_22 or formula_23
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "VCF =C_{TL}= \\exp\\{-\\alpha_T\\Delta T[1 + 0.8\\alpha_T(\\Delta T+\\delta_T)]\\}"
},
{
"math_id": 1,
"text": "\\exp"
},
{
"math_id": 2,
"text": "e"
},
{
"math_id": 3,
"text": "\\{-\\alpha_T\\Delta T[1 + 0.8\\alpha_T(\\Delta T+\\delta_T)]\\}"
},
{
"math_id": 4,
"text": "\\Delta T"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "(t - T)"
},
{
"math_id": 8,
"text": "VCF"
},
{
"math_id": 9,
"text": "\\delta_T "
},
{
"math_id": 10,
"text": "\\delta_T=0 "
},
{
"math_id": 11,
"text": "\\alpha_{T}"
},
{
"math_id": 12,
"text": "\\alpha_{60}"
},
{
"math_id": 13,
"text": "\\alpha_{60}=\\frac{K_0}{\\rho*^2}+\\frac{K_1}{\\rho*}+{K_2}\n\n"
},
{
"math_id": 14,
"text": "\\rho^*"
},
{
"math_id": 15,
"text": "\\rightarrow"
},
{
"math_id": 16,
"text": "\\rho^* = \\rho_{60}"
},
{
"math_id": 17,
"text": "K_0"
},
{
"math_id": 18,
"text": "K_1"
},
{
"math_id": 19,
"text": "K_2"
},
{
"math_id": 20,
"text": "V_{Corrected} = VCF * V_{Observed}"
},
{
"math_id": 21,
"text": "SG = \\frac{\\rho_{Substance}}{\\rho_{H2O_T}} \\longrightarrow API_{Gravity}=\\frac{141.5}{SG}-131.5\n\n"
},
{
"math_id": 22,
"text": "=\\ 999.016_{kg/m^3}\n\n"
},
{
"math_id": 23,
"text": "0.999016_{g/cm^3}\n\n"
}
] |
https://en.wikipedia.org/wiki?curid=62639397
|
62641
|
Vector field
|
Assignment of a vector to each point in a subset of Euclidean space
In vector calculus and physics, a vector field is an assignment of a vector to each point in a space, most commonly Euclidean space formula_0. A vector field on a plane can be visualized as a collection of arrows with given magnitudes and directions, each attached to a point on the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout three dimensional space, such as the wind, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another point.
The elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow).
A vector field is a special case of a "vector-valued function", whose domain's dimension has no relation to the dimension of its range; for example, the position vector of a space curve is defined only for a smaller subset of the ambient space.
Likewise, given "n" coordinates, a vector field on a domain in "n"-dimensional Euclidean space formula_0 can be represented as a vector-valued function that associates an "n"-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law ("covariance and contravariance of vectors") in passing from one coordinate system to the other.
Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector).
More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field.
Definition.
Vector fields on subsets of Euclidean space.
Given a subset "S" of R"n", a vector field is represented by a vector-valued function "V": "S" → R"n" in standard Cartesian coordinates ("x"1, …, "x""n"). If each component of "V" is continuous, then "V" is a continuous vector field. It is common to focus on smooth vector fields, meaning that each component is a smooth function (differentiable any number of times). A vector field can be visualized as assigning a vector to individual points within an "n"-dimensional space.
One standard notation is to write formula_1 for the unit vectors in the coordinate directions. In these terms, every smooth vector field formula_2 on an open subset formula_3 of formula_4 can be written as
formula_5
for some smooth functions formula_6 on formula_3. The reason for this notation is that a vector field determines a linear map from the space of smooth functions to itself, formula_7, given by differentiating in the direction of the vector field.
Example: The vector field formula_8 describes a counterclockwise rotation around the origin in formula_9. To show that the function formula_10 is rotationally invariant, compute:
formula_11
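This kind of computation can be reproduced symbolically; the sketch below assumes the rotation field -y d/dx + x d/dy and the rotationally invariant function x^2 + y^2 as concrete choices:

```python
import sympy as sp

x, y = sp.symbols('x y')
V = (-y, x)                               # rotation field V = -y d/dx + x d/dy
f = x**2 + y**2                           # a rotationally invariant function

# The field acts on f as a directional derivative: V(f) = -y*df/dx + x*df/dy.
Vf = V[0] * sp.diff(f, x) + V[1] * sp.diff(f, y)
print(sp.simplify(Vf))                    # 0, so f is constant along the rotation flow
```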
Given vector fields "V", "W" defined on "S" and a smooth function f defined on "S", the operations of scalar multiplication and vector addition,
formula_12
formula_13
make the smooth vector fields into a module over the ring of smooth functions, where multiplication of functions is defined pointwise.
Coordinate transformation law.
In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system. The transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector.
Thus, suppose that ("x"1, ..., "x""n") is a choice of Cartesian coordinates, in terms of which the components of the vector V are
formula_14
and suppose that ("y"1, ..., "y""n") are "n" functions of the "x""i" defining a different coordinate system. Then the components of the vector "V" in the new coordinates are required to satisfy the transformation law
Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: specifically, a vector field is a specification of "n" functions in each coordinate system subject to the transformation law (1) relating the different coordinate systems.
Vector fields are thus contrasted with scalar fields, which associate a number or "scalar" to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes.
Vector fields on manifolds.
Given a differentiable manifold formula_15, a vector field on formula_15 is an assignment of a tangent vector to each point in formula_15. More precisely, a vector field formula_16 is a mapping from formula_15 into the tangent bundle formula_17 so that formula_18 is the identity mapping
where formula_19 denotes the projection from formula_17 to formula_15. In other words, a vector field is a section of the tangent bundle.
An alternative definition: A smooth vector field formula_20 on a manifold formula_15 is a linear map formula_21 such that formula_20 is a derivation: formula_22 for all formula_23.
If the manifold formula_15 is smooth or analytic—that is, the change of coordinates is smooth (analytic)—then one can make sense of the notion of smooth (analytic) vector fields. The collection of all smooth vector fields on a smooth manifold formula_15 is often denoted by formula_24 or formula_25 (especially when thinking of vector fields as sections); the collection of all smooth vector fields is also denoted by formula_26 (a fraktur "X").
Examples.
Gradient field in Euclidean spaces.
Vector fields can be constructed out of scalar fields using the gradient operator (denoted by the del: ∇).
A vector field "V" defined on an open set "S" is called a gradient field or a conservative field if there exists a real-valued function (a scalar field) "f" on "S" such that
formula_27
The associated flow is called the <templatestyles src="Template:Visible anchor/styles.css" />gradient flow, and is used in the method of gradient descent.
The path integral along any closed curve "γ" ("γ"(0) = "γ"(1)) in a conservative field is zero:
formula_28
Central field in Euclidean spaces.
A "C"∞-vector field over R"n" \ {0} is called a central field if
formula_29
where O("n", R) is the orthogonal group. We say central fields are invariant under orthogonal transformations around 0.
The point 0 is called the center of the field.
Since orthogonal transformations are actually rotations and reflections, the invariance conditions mean that vectors of a central field are always directed towards, or away from, 0; this is an alternate (and simpler) definition. A central field is always a gradient field, since defining it on one semiaxis and integrating gives an antigradient.
Operations on vector fields.
Line integral.
A common technique in physics is to integrate a vector field along a curve, also called determining its line integral. Intuitively this is summing up all vector components in line with the tangents to the curve, expressed as their scalar products. For example, given a particle in a force field (e.g. gravitation), where each vector at some point in space represents the force acting there on the particle, the line integral along a certain path is the work done on the particle, when it travels along this path. Intuitively, it is the sum of the scalar products of the force vector and the small tangent vector in each point along the curve.
The line integral is constructed analogously to the Riemann integral and it exists if the curve is rectifiable (has finite length) and the vector field is continuous.
Given a vector field V and a curve γ, parametrized by t in ["a", "b"] (where a and b are real numbers), the line integral is defined as
formula_30
To show vector field topology one can use line integral convolution.
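As a concrete numerical illustration of the definition, the following Python sketch evaluates a line integral with SciPy quadrature; the field V(x, y) = (−y, x) and the unit-circle path are arbitrary choices made here for illustration, and the exact value for them is 2π:

```python
import numpy as np
from scipy.integrate import quad

V      = lambda p: np.array([-p[1], p[0]])            # sample vector field V(x, y) = (-y, x)
gamma  = lambda t: np.array([np.cos(t), np.sin(t)])   # unit circle, t in [0, 2*pi]
dgamma = lambda t: np.array([-np.sin(t), np.cos(t)])  # its tangent vector

integrand = lambda t: V(gamma(t)) @ dgamma(t)         # V(gamma(t)) . gamma'(t)
value, _ = quad(integrand, 0, 2 * np.pi)
print(value, 2 * np.pi)                               # both approximately 6.2832
```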
Divergence.
The divergence of a vector field on Euclidean space is a function (or scalar field). In three dimensions, the divergence is defined by
formula_31
with the obvious generalization to arbitrary dimensions. The divergence at a point represents the degree to which a small volume around the point is a source or a sink for the vector flow, a result which is made precise by the divergence theorem.
The divergence can also be defined on a Riemannian manifold, that is, a manifold with a Riemannian metric that measures the length of vectors.
Curl in three dimensions.
The curl is an operation which takes a vector field and produces another vector field. The curl is defined only in three dimensions, but some properties of the curl can be captured in higher dimensions with the exterior derivative. In three dimensions, it is defined by
formula_32
The curl measures the density of the angular momentum of the vector flow at a point, that is, the amount to which the flow circulates around a fixed axis. This intuitive description is made precise by Stokes' theorem.
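Both the divergence and the curl are straightforward to compute symbolically. A short SymPy sketch, applied to an arbitrary sample field F = (xy, yz, zx) chosen only for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x*y, y*z, z*x])                     # sample vector field

div = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
curl = sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])
print(div)        # x + y + z
print(curl.T)     # Matrix([[-y, -z, -x]])
```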
Index of a vector field.
The index of a vector field is an integer that helps describe its behaviour around an isolated zero (i.e., an isolated singularity of the field). In the plane, the index takes the value −1 at a saddle singularity but +1 at a source or sink singularity.
Let "n be" the dimension of the manifold on which the vector field is defined. Take a closed surface (homeomorphic to the (n-1)-sphere) S around the zero, so that no other zeros lie in the interior of S. A map from this sphere to a unit sphere of dimension "n" − 1 can be constructed by dividing each vector on this sphere by its length to form a unit length vector, which is a point on the unit sphere S"n"−1. This defines a continuous map from S to S"n"−1. The index of the vector field at the point is the degree of this map. It can be shown that this integer does not depend on the choice of S, and therefore depends only on the vector field itself.
The index is not defined at any non-singular point (i.e., a point where the vector is non-zero). It is equal to +1 around a source, and more generally equal to (−1)"k" around a saddle that has "k" contracting dimensions and "n"−"k" expanding dimensions.
The index of the vector field as a whole is defined when it has just finitely many zeroes. In this case, all zeroes are isolated, and the index of the vector field is defined to be the sum of the indices at all zeroes.
For an ordinary (2-dimensional) sphere in three-dimensional space, it can be shown that the index of any vector field on the sphere must be 2. This shows that every such vector field must have a zero. This implies the hairy ball theorem.
For a vector field on a compact manifold with finitely many zeroes, the Poincaré-Hopf theorem states that the vector field’s index is the manifold’s Euler characteristic.
Physical intuition.
Michael Faraday, in his concept of "lines of force," emphasized that the field "itself" should be an object of study, which it has become throughout physics in the form of field theory.
In addition to the magnetic field, other phenomena that were modeled by Faraday include the electrical field and light field.
In recent decades many phenomenological formulations of irreversible dynamics and evolution equations in physics, from the mechanics of complex fluids and solids to chemical kinetics and quantum thermodynamics, have converged towards the geometric idea of "steepest entropy ascent" or "gradient flow" as a consistent universal modeling framework that guarantees compatibility with the second law of thermodynamics and extends well-known near-equilibrium results such as Onsager reciprocity to the far-nonequilibrium realm.
Flow curves.
Consider the flow of a fluid through a region of space. At any given time, any point of the fluid has a particular velocity associated with it; thus there is a vector field associated to any flow. The converse is also true: it is possible to associate a flow to a vector field having that vector field as its velocity.
Given a vector field formula_2 defined on formula_3, one defines curves formula_33 on formula_3 such that for each formula_34 in an interval formula_35,
formula_36
By the Picard–Lindelöf theorem, if formula_2 is Lipschitz continuous there is a "unique" formula_37-curve formula_38 for each point formula_39 in formula_3 so that, for some formula_40,
formula_41
The curves formula_38 are called integral curves or trajectories (or less commonly, flow lines) of the vector field formula_2 and partition formula_3 into equivalence classes. It is not always possible to extend the interval formula_42 to the whole real number line. The flow may for example reach the edge of formula_3 in a finite time.
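Integral curves can be approximated numerically by integrating the defining equation. A small SciPy sketch using the rotation field from the earlier example, whose trajectories should stay on circles about the origin:

```python
import numpy as np
from scipy.integrate import solve_ivp

V = lambda t, p: [-p[1], p[0]]                     # rotation field V(x1, x2) = (-x2, x1)
sol = solve_ivp(V, (0, 2 * np.pi), [1.0, 0.0], rtol=1e-9, atol=1e-12)

r = np.hypot(sol.y[0], sol.y[1])
print(sol.y[:, -1])                                # back near the starting point (1, 0)
print(r.min(), r.max())                            # the radius stays approximately 1
```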
In two or three dimensions one can visualize the vector field as giving rise to a flow on formula_3. If we drop a particle into this flow at a point formula_19 it will move along the curve formula_43 in the flow depending on the initial point formula_19. If formula_19 is a stationary point of formula_2 (i.e., the vector field is equal to the zero vector at the point formula_19), then the particle will remain at formula_19.
Typical applications are pathline in fluid, geodesic flow, and one-parameter subgroups and the exponential map in Lie groups.
Complete vector fields.
By definition, a vector field on formula_15 is called complete if each of its flow curves exists for all time. In particular, compactly supported vector fields on a manifold are complete. If formula_20 is a complete vector field on formula_15, then the one-parameter group of diffeomorphisms generated by the flow along formula_20 exists for all time; it is described by a smooth mapping
formula_44
On a compact manifold without boundary, every smooth vector field is complete. An example of an incomplete vector field formula_2 on the real line formula_45 is given by formula_46. For, the differential equation formula_47, with initial condition formula_48, has as its unique solution formula_49 if formula_50 (and formula_51 for all formula_52 if formula_53). Hence for formula_50, formula_54 is undefined at formula_55 so cannot be defined for all values of formula_34.
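The blow-up can also be seen numerically. A short SciPy sketch compares the numerical solution with the explicit one up to just before the blow-up time (the initial value 1.0 is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

# incomplete field V(x) = x**2 on the real line; solution x(t) = x0 / (1 - t*x0)
x0 = 1.0
blow_up = 1.0 / x0                        # the solution ceases to exist at t = 1/x0
sol = solve_ivp(lambda t, x: x**2, (0, 0.999 * blow_up), [x0], rtol=1e-10)

exact = x0 / (1 - sol.t * x0)
print(np.max(np.abs(sol.y[0] - exact)))   # numerical and exact solutions agree closely
print(sol.y[0][-1])                       # ~ 1000: the solution blows up as t -> 1/x0
```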
The Lie bracket.
The flows associated to two vector fields need not commute with each other. Their failure to commute is described by the Lie bracket of two vector fields, which is again a vector field. The Lie bracket has a simple definition in terms of the action of vector fields on smooth functions formula_56:
formula_57
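The bracket can be computed by applying the two derivations to a generic smooth function. A minimal SymPy sketch with two arbitrarily chosen fields on the plane:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

# X = d/dx and Y = x d/dy, acting on smooth functions
X = lambda g: sp.diff(g, x)
Y = lambda g: x * sp.diff(g, y)

bracket = sp.simplify(X(Y(f)) - Y(X(f)))
print(bracket)    # Derivative(f(x, y), y), i.e. [X, Y] = d/dy
```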
"f"-relatedness.
Given a smooth function between manifolds, formula_58, the derivative is an induced map on tangent bundles, formula_59. Given vector fields formula_60 and formula_61, we say that formula_62 is formula_56-related to formula_2 if the equation formula_63 holds.
If formula_64 is formula_56-related to formula_65, formula_66, then the Lie bracket formula_67 is formula_56-related to formula_68.
Generalizations.
Replacing vectors by "p"-vectors ("p"th exterior power of vectors) yields "p"-vector fields; taking the dual space and exterior powers yields differential "k"-forms, and combining these yields general tensor fields.
Algebraically, vector fields can be characterized as derivations of the algebra of smooth functions on the manifold, which leads to defining a vector field on a commutative algebra as a derivation on the algebra, which is developed in the theory of differential calculus over commutative algebras.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{R}^n"
},
{
"math_id": 1,
"text": "\\frac{\\partial}{\\partial x_1},\\ldots,\\frac{\\partial}{\\partial x_n}"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "{\\mathbf R}^n"
},
{
"math_id": 5,
"text": " \\sum_{i=1}^n V_i(x_1,\\ldots,x_n)\\frac{\\partial}{\\partial x_i}"
},
{
"math_id": 6,
"text": "V_1,\\ldots,V_n"
},
{
"math_id": 7,
"text": "V\\colon C^{\\infty}(S)\\to C^{\\infty}(S)"
},
{
"math_id": 8,
"text": "-x_2\\frac{\\partial}{\\partial x_1}+x_1\\frac{\\partial}{\\partial x_2}"
},
{
"math_id": 9,
"text": "\\mathbf{R}^2"
},
{
"math_id": 10,
"text": "x_1^2+x_2^2"
},
{
"math_id": 11,
"text": "\\bigg(-x_2\\frac{\\partial}{\\partial x_1}+x_1\\frac{\\partial}{\\partial x_2}\\bigg)(x_1^2+x_2^2) = -x_2(2x_1)+x_1(2x_2) = 0."
},
{
"math_id": 12,
"text": " (fV)(p) := f(p)V(p)"
},
{
"math_id": 13,
"text": " (V+W)(p) := V(p) + W(p),"
},
{
"math_id": 14,
"text": "V_x = (V_{1,x}, \\dots, V_{n,x})"
},
{
"math_id": 15,
"text": "M"
},
{
"math_id": 16,
"text": "F"
},
{
"math_id": 17,
"text": "TM"
},
{
"math_id": 18,
"text": " p\\circ F "
},
{
"math_id": 19,
"text": "p"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "X: C^\\infty(M) \\to C^\\infty(M)"
},
{
"math_id": 22,
"text": "X(fg) = fX(g)+X(f)g"
},
{
"math_id": 23,
"text": "f,g \\in C^\\infty(M)"
},
{
"math_id": 24,
"text": "\\Gamma (TM)"
},
{
"math_id": 25,
"text": "C^\\infty (M,TM)"
},
{
"math_id": 26,
"text": " \\mathfrak{X} (M)"
},
{
"math_id": 27,
"text": "V = \\nabla f = \\left(\\frac{\\partial f}{\\partial x_1}, \\frac{\\partial f}{\\partial x_2}, \\frac{\\partial f}{\\partial x_3}, \\dots ,\\frac{\\partial f}{\\partial x_n}\\right)."
},
{
"math_id": 28,
"text": " \\oint_\\gamma V(\\mathbf {x})\\cdot \\mathrm{d}\\mathbf {x} = \\oint_\\gamma \\nabla f(\\mathbf {x}) \\cdot \\mathrm{d}\\mathbf {x} = f(\\gamma(1)) - f(\\gamma(0))."
},
{
"math_id": 29,
"text": "V(T(p)) = T(V(p)) \\qquad (T \\in \\mathrm{O}(n, \\R))"
},
{
"math_id": 30,
"text": "\\int_\\gamma V(\\mathbf {x}) \\cdot \\mathrm{d}\\mathbf {x} = \\int_a^b V(\\gamma(t)) \\cdot \\dot \\gamma(t)\\, \\mathrm{d}t."
},
{
"math_id": 31,
"text": "\\operatorname{div} \\mathbf{F} = \\nabla \\cdot \\mathbf{F} = \\frac{\\partial F_1}{\\partial x} + \\frac{\\partial F_2}{\\partial y} + \\frac{\\partial F_3}{\\partial z},"
},
{
"math_id": 32,
"text": "\\operatorname{curl}\\mathbf{F} = \\nabla \\times \\mathbf{F} = \\left(\\frac{\\partial F_3}{\\partial y} - \\frac{\\partial F_2}{\\partial z}\\right)\\mathbf{e}_1 - \\left(\\frac{\\partial F_3}{\\partial x} - \\frac{\\partial F_1}{\\partial z}\\right)\\mathbf{e}_2 + \\left(\\frac{\\partial F_2}{\\partial x}- \\frac{\\partial F_1}{\\partial y}\\right)\\mathbf{e}_3."
},
{
"math_id": 33,
"text": "\\gamma(t)"
},
{
"math_id": 34,
"text": "t"
},
{
"math_id": 35,
"text": "I"
},
{
"math_id": 36,
"text": "\\gamma'(t) = V(\\gamma(t))\\,."
},
{
"math_id": 37,
"text": "C^1"
},
{
"math_id": 38,
"text": "\\gamma_x"
},
{
"math_id": 39,
"text": "x"
},
{
"math_id": 40,
"text": "\\varepsilon > 0"
},
{
"math_id": 41,
"text": "\\begin{align}\n\\gamma_x(0) &= x\\\\\n\\gamma'_x(t) &= V(\\gamma_x(t)) \\qquad \\forall t \\in (-\\varepsilon, +\\varepsilon) \\subset \\R.\n\\end{align}"
},
{
"math_id": 42,
"text": "(-\\varepsilon,+\\varepsilon)"
},
{
"math_id": 43,
"text": "\\gamma_p"
},
{
"math_id": 44,
"text": "\\mathbf{R}\\times M\\to M."
},
{
"math_id": 45,
"text": "\\mathbb R"
},
{
"math_id": 46,
"text": "V(x) = x^2"
},
{
"math_id": 47,
"text": "x'(t) = x^2"
},
{
"math_id": 48,
"text": "x(0) = x_0 "
},
{
"math_id": 49,
"text": "x(t) = \\frac{x_0}{1 - t x_0}"
},
{
"math_id": 50,
"text": "x_0 \\neq 0"
},
{
"math_id": 51,
"text": "x(t) = 0"
},
{
"math_id": 52,
"text": "t \\in \\R"
},
{
"math_id": 53,
"text": "x_0 = 0"
},
{
"math_id": 54,
"text": "x(t)"
},
{
"math_id": 55,
"text": "t = \\frac{1}{x_0}"
},
{
"math_id": 56,
"text": "f"
},
{
"math_id": 57,
"text": "[X,Y](f):=X(Y(f))-Y(X(f))."
},
{
"math_id": 58,
"text": "f:M\\to N"
},
{
"math_id": 59,
"text": "f_*:TM\\to TN"
},
{
"math_id": 60,
"text": "V:M\\to TM"
},
{
"math_id": 61,
"text": "W:N\\to TN"
},
{
"math_id": 62,
"text": "W"
},
{
"math_id": 63,
"text": "W\\circ f = f_*\\circ V"
},
{
"math_id": 64,
"text": "V_i"
},
{
"math_id": 65,
"text": "W_i"
},
{
"math_id": 66,
"text": "i=1,2"
},
{
"math_id": 67,
"text": "[V_1,V_2]"
},
{
"math_id": 68,
"text": "[W_1,W_2]"
}
] |
https://en.wikipedia.org/wiki?curid=62641
|
62644951
|
Esther 1
|
A chapter in the Book of Esther
Esther 1 is the first chapter of the Book of Esther in the Hebrew Bible or the Old Testament of the Christian Bible. The author of the book is unknown and modern scholars have established that the final stage of the Hebrew text would have been formed by the second century BCE. Chapters 1 and 2 form the exposition of the book. This chapter records the royal banquets of the Persian king Ahasuerus until the deposal of queen Vashti.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 22 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Royal banquet for the officials (1:1–4).
The opening section describes the sumptuous 180-day banquet by the Persian king Ahasuerus for officials from all over the Persian Empire.
"Now it came to pass, in the days of Ahasuerus, (this is Ahasuerus which reigned, from India even unto Ethiopia, over an hundred and seven and twenty provinces:)"
Two other persons are called by this name in the Old Testament:
*(1) the Ahasuerus of , the father of “Darius the Mede;” if this Darius is the same with Astyages, Ahasuerus could be identified with Cyaxares.
*(2) the Ahasuerus of , who is identified with Cambyses, the son of Cyrus.
"I am Xerxes, the great king, the only king, the king of (all) countries (which speak) all kinds of languages, the king of this (entire) big and far-reaching earth… These are the countries — in addition to Persia — over which I am king … which are bringing their tribute to me — whatever is commanded them by me, that they do and they abide by my law(s) — Media, Elam … India … (and) Cush."
The vast territorial claims are also confirmed by Herodotus ("Histories" III.97; VII.9, 65, 69f).
"That in those days, when the king Ahasuerus sat on the throne of his kingdom, which was in Shushan the palace,"
"In the third year of his reign, he made a feast unto all his princes and his servants; the power of Persia and Media, the nobles and princes of the provinces, being before him:"
Verse 3.
The immense size of the banquet, the number of its invited guests, and the length of its duration described here, was not without precedence as C. A. Moore documents a Persian banquet for 15,000 people and an Assyrian celebration with 69,574 guests in ancient times.
Royal banquet for the citizens of Susa (1:5–9).
This section narrows the focus to the subsequent shorter but equally pretentious 7-day banquets, given separately by the king (for males) and the queen (for females) for the citizens of the Persian capital Susa.
"Where were white, green, and blue, hangings, fastened with cords of fine linen and purple to silver rings and pillars of marble: the beds were of gold and silver, upon a pavement of red, and blue, and white, and black, marble."
Vashti's refusal to obey king's command (1:10–22).
On the seventh day of the banquet, the king sent for Queen Vashti to appear before him "to show off her beauty", but she refused to come. This causes histrionic reactions from the king and his seven counselors which resulted in the issuance of punishment for Vashti and a decree involving the 'whole elaborate machinery of Persian law and administration' to spread it in all over Persian lands.
"Then the king said to the wise men, which knew the times, (for so was the king's manner toward all that knew law and judgment:)"
Verse 13.
It has been noted that "It is an irony, that the king who reigns over a vast empire cannot resolve his domestic problem about his own wife without the help of the sharpest minds of Persia." The seven counselors who advise the king (cf. ) are literally "those who see the face of the king" ().
"And when the king's decree which he shall make shall be published throughout all his empire, (for it is great,) all the wives shall give to their husbands honour, both to great and small."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=62644951
|
62647556
|
Incomplete Bessel K function/generalized incomplete gamma function
|
Some mathematicians have defined this type of incomplete version of the Bessel function, or equivalently this type of generalized incomplete gamma function:
formula_0
formula_1
formula_2
formula_3
formula_4
formula_5
formula_6
formula_7
Properties.
One advantage of defining this incomplete version of the Bessel function formula_8 is that even, for example, the associated Anger–Weber function defined in the Digital Library of Mathematical Functions can be related to it:
formula_9
Recurrence relations.
formula_8 satisfies the recurrence relation:
formula_10
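The recurrence can be checked numerically by evaluating the defining integral with quadrature. A small Python/SciPy sketch, with arbitrary test values for the parameters:

```python
import numpy as np
from scipy.integrate import quad

def K(v, x, y):
    """Incomplete Bessel K: integral of exp(-x*t - y/t) / t**(v+1) over t in [1, inf)."""
    val, _ = quad(lambda t: np.exp(-x*t - y/t) / t**(v + 1), 1, np.inf)
    return val

x, y, v = 1.3, 0.7, 2.0
lhs = x * K(v - 1, x, y) + v * K(v, x, y) - y * K(v + 1, x, y)
rhs = np.exp(-x - y)
print(lhs, rhs)    # the two sides agree to quadrature accuracy
```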
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_v(x,y)=\\int_1^\\infty\\frac{e^{-xt-\\frac{y}{t}}}{t^{v+1}}~dt"
},
{
"math_id": 1,
"text": "\\gamma(\\alpha,x;b)=\\int_0^xt^{\\alpha-1}e^{-t-\\frac{b}{t}}~dt"
},
{
"math_id": 2,
"text": "\\Gamma(\\alpha,x;b)=\\int_x^\\infty t^{\\alpha-1}e^{-t-\\frac{b}{t}}~dt"
},
{
"math_id": 3,
"text": "K_v(x,y)=x^v\\Gamma(-v,x;xy)"
},
{
"math_id": 4,
"text": "K_v(x,y)+K_{-v}(y,x)=\\frac{2x^\\frac{v}{2}}{y^\\frac{v}{2}}K_v(2\\sqrt{xy})"
},
{
"math_id": 5,
"text": "\\gamma(\\alpha,x;0)=\\gamma(\\alpha,x)"
},
{
"math_id": 6,
"text": "\\Gamma(\\alpha,x;0)=\\Gamma(\\alpha,x)"
},
{
"math_id": 7,
"text": "\\gamma(\\alpha,x;b)+\\Gamma(\\alpha,x;b)=2b^\\frac{\\alpha}{2}K_\\alpha(2\\sqrt b)"
},
{
"math_id": 8,
"text": "K_v(x,y)"
},
{
"math_id": 9,
"text": "\\mathbf{A}_\\nu(z)=\\frac{1}{\\pi}\\int_0^\\infty e^{-\\nu t-z\\sinh t}~dt=\\frac{1}{\\pi}\\int_0^\\infty e^{-(\\nu+1)t-\\frac{ze^t}{2}+\\frac{z}{2e^t}}~d(e^t)=\\frac{1}{\\pi}\\int_1^\\infty\\frac{e^{-\\frac{zt}{2}+\\frac{z}{2t}}}{t^{\\nu+1}}~dt=\\frac{1}{\\pi}K_\\nu\\left(\\frac{z}{2},-\\frac{z}{2}\\right)"
},
{
"math_id": 10,
"text": "xK_{v-1}(x,y)+vK_v(x,y)-yK_{v+1}(x,y)=e^{-x-y}"
}
] |
https://en.wikipedia.org/wiki?curid=62647556
|
626579
|
Doomsday rule
|
Way of calculating the day of the week of a given date
The Doomsday rule, Doomsday algorithm or Doomsday method is an algorithm for determining the day of the week of a given date. It provides a perpetual calendar because the Gregorian calendar moves in cycles of 400 years. The algorithm for mental calculation was devised by John Conway in 1973, drawing inspiration from Lewis Carroll's perpetual calendar algorithm. It takes advantage of each year having a certain day of the week upon which certain easy-to-remember dates, called the "doomsdays", fall; for example, the last day of February, April 4 (4/4), June 6 (6/6), August 8 (8/8), October 10 (10/10), and December 12 (12/12) all occur on the same day of the week in any year. The doomsday of 2024 is Thursday.
Applying the Doomsday algorithm involves three steps: determination of the anchor day for the century, calculation of the anchor day for the year from the one for the century, and selection of the closest date out of those that always fall on the doomsday, e.g., 4/4 and 6/6, and count of the number of days (modulo 7) between that date and the date in question to arrive at the day of the week. The technique applies to both the Gregorian calendar and the Julian calendar, although their doomsdays are usually different days of the week.
The algorithm is simple enough that it can be computed mentally. Conway could usually give the correct answer in under two seconds. To improve his speed, he practiced his calendrical calculations on his computer, which was programmed to quiz him with random dates every time he logged on.
Anchor days for some contemporary years.
Doomsday's anchor day for the current year in the Gregorian calendar (2024) is Thursday. For some other contemporary years:
The table is filled in horizontally, skipping one column for each leap year. This table cycles every 28 years, except in the Gregorian calendar on years that are a multiple of 100 (such as 1800, 1900, and 2100 which are not leap years) that are not also a multiple of 400 (like 2000 which is still a leap year). The full cycle is 28 years (1,461 weeks) in the Julian calendar and 400 years (20,871 weeks) in the Gregorian calendar.
Memorable dates that always land on Doomsday.
One can find the day of the week of a given calendar date by using a nearby doomsday as a reference point. To help with this, the following is a list of easy-to-remember dates for each month that always land on the doomsday.
The last day of February is always a doomsday. For January, January 3 is a doomsday during common years and January 4 a doomsday during leap years, which can be remembered as "the 3rd during 3 years in 4, and the 4th in the 4th year". For March, one can remember either Pi Day or "March 0", the latter referring to the day before March 1, i.e. the last day of February.
For the months April through December, the even numbered months are covered by the double dates 4/4, 6/6, 8/8, 10/10, and 12/12, all of which fall on the doomsday. The odd numbered months can be remembered with the mnemonic "I work from 9 to 5 at the 7-11", i.e., 9/5, 7/11, and also 5/9 and 11/7, are all doomsdays (this is true for both the Day/Month and Month/Day conventions).
Several well-known dates, such as Independence Day in the United States, Boxing Day, and Valentine's Day in common years, also fall on doomsdays every year. The chart below includes only the mnemonics covered in the sources listed.
Since the doomsday for a particular year is directly related to weekdays of dates in the period from March through February of the next year, common years and leap years have to be distinguished for January and February of the same year.
January and February can be treated as the last two months of the previous year.
Example.
To find which day of the week Christmas Day of 2021 is, proceed as follows: in the year 2021, doomsday is on Sunday. Since December 12 is a doomsday, December 25, being thirteen days afterwards (two weeks less a day), fell on a Saturday. Christmas Day is always the day of the week before doomsday. In addition, July 4 (U.S. Independence Day) is always on the same day of the week as a doomsday, as are Halloween (October 31), Pi Day (March 14), and December 26 (Boxing Day).
Mnemonic weekday names.
Since this algorithm involves treating days of the week like numbers modulo 7, John Conway suggested thinking of the days of the week as "Noneday" or "Sansday" (for Sunday), "Oneday", "Twosday", "Treblesday", "Foursday", "Fiveday", and "Six-a-day" in order to recall the number-weekday relation without needing to count them out in one's head.
There are some languages, such as Slavic languages, Chinese, Estonian, Greek, Portuguese, Galician and Hebrew, that base some of the names of the week days in their positional order. The Slavic, Chinese, and Estonian agree with the table above; the other languages mentioned count from Sunday as day one.
Finding a year's anchor day.
First take the anchor day for the century. For the purposes of the doomsday rule, a century starts with '00 and ends with '99. The following table shows the anchor day of centuries 1600–1699, 1700–1799, 1800–1899, 1900–1999, 2000–2099, 2100–2199 and 2200–2299.
For the Gregorian calendar:
Mathematical formula
5 × ("c" mod 4) mod 7 + Tuesday = anchor.
Algorithmic
Let "r" = "c" mod 4
if "r" = 0 then anchor = Tuesday
if "r" = 1 then anchor = Sunday
if "r" = 2 then anchor = Friday
if "r" = 3 then anchor = Wednesday
For the Julian calendar:
6"c" mod 7 + Sunday = anchor.
Note: formula_0.
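The century rules translate directly into code. A minimal Python sketch, numbering weekdays 0 = Sunday through 6 = Saturday as in Conway's mnemonic names:

```python
def gregorian_century_anchor(year):
    c = year // 100
    return (5 * (c % 4) + 2) % 7        # 2 = Tuesday

def julian_century_anchor(year):
    c = year // 100
    return (6 * c + 0) % 7              # 0 = Sunday

print(gregorian_century_anchor(1985))   # 3 = Wednesday (anchor of 1900-1999)
print(gregorian_century_anchor(2024))   # 2 = Tuesday  (anchor of 2000-2099)
```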
Next, find the year's anchor day. To accomplish that according to Conway:
"a" + "b" + "c"). (It is again possible here to divide by seven and take the remainder. This number is equivalent, as it must be, to "y" plus the floor of "y" divided by four.)
formula_1
For the twentieth-century year 1966, for example:
formula_2
As described in bullet 4, above, this is equivalent to:
formula_3
So doomsday in 1966 fell on Monday.
Similarly, doomsday in 2005 is on a Monday:
formula_4
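Combining the century anchor with Conway's "a", "b", "c" step gives a one-function doomsday calculator. A minimal Python sketch reproducing the two values just computed:

```python
def doomsday(year):
    """Doomsday (anchor day) of a Gregorian year; 0 = Sunday ... 6 = Saturday."""
    c = year // 100
    anchor = (5 * (c % 4) + 2) % 7      # century anchor, 2 = Tuesday
    y = year % 100
    a, b = divmod(y, 12)                # a = floor(y/12), b = y mod 12
    return (anchor + a + b + b // 4) % 7

print(doomsday(1966))   # 1 = Monday
print(doomsday(2005))   # 1 = Monday
```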
Why it works.
The doomsday's anchor day calculation is effectively calculating the number of days between any given date in the base year and the same date in the current year, then taking the remainder modulo 7. When both dates come after the leap day (if any), the difference is just 365"y" + "y"/4 (rounded down). But 365 equals 52 × 7 + 1, so after taking the remainder we get just
formula_5
This gives a simpler formula if one is comfortable dividing large values of "y" by both 4 and 7. For example, we can compute
formula_6
which gives the same answer as in the example above.
Where 12 comes in is that the pattern of formula_7 "almost" repeats every 12 years. After 12 years, we get formula_8. If we replace "y" by "y" mod 12, we are throwing this extra day away; but adding back in formula_9 compensates for this error, giving the final formula.
For calculating the Gregorian anchor day of a century: three “common centuries” (each having 24 leap years) are followed by a “leap century” (having 25 leap years). A common century moves the doomsday forward by
formula_10
days (equivalent to two days back). A leap century moves the doomsday forward by 6 days (equivalent to one day back).
So "c" centuries move the doomsday forward by
formula_11,
but this is equivalent to
formula_12.
Four centuries move the doomsday forward by
formula_13;
so four centuries form a cycle that leaves the doomsday unchanged (and hence the “mod 4” in the century formula).
The "odd + 11" method.
A simpler method for finding the year's anchor day was discovered in 2010 by Chamberlain Fong and Michael K. Walters, and described in their paper submitted to the 7th International Congress on Industrial and Applied Mathematics (2011). Called the "odd + 11" method, it is equivalent to computing
formula_14.
It is well suited to mental calculation, because it requires no division by 4 (or 12), and the procedure is easy to remember because of its repeated use of the "odd + 11" rule. Furthermore, addition by 11 is very easy to perform mentally in base-10 arithmetic.
Extending this to get the anchor day, the procedure is often described as accumulating a running total "T" in six steps, as follows:
7 − ("T" mod 7).
Applying this method to the year 2005, for example, the steps as outlined would be:
"T" = 5
"T" is odd, so add 11: "T" = 5 + 11 = 16
"T" = 16 / 2 = 8
"T" is even, so do nothing
"T" = 7 − (8 mod 7) = 7 − 1 = 6
The explicit formula for the odd+11 method is:
formula_15.
Although this expression looks daunting and complicated, it is actually simple because of a common subexpression that only needs to be calculated once.
Anytime adding 11 is needed, subtracting 17 yields equivalent results. While subtracting 17 may seem more difficult to mentally perform than adding 11, there are cases where subtracting 17 is easier, especially when the number is a two-digit number that ends in 7 (such as 17, 27, 37, ..., 77, 87, and 97).
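The procedure is simple to express in code, and its equivalence with formula_14 can be checked exhaustively over a century. A short Python sketch:

```python
def odd_plus_11(year):
    """Offset to add to the century anchor, by the odd+11 rule."""
    t = year % 100
    if t % 2:             # step 2: add 11 if odd
        t += 11
    t //= 2               # step 3: halve
    if t % 2:             # step 4: add 11 if odd
        t += 11
    return 7 - (t % 7)    # step 5

# agrees (mod 7) with (y + floor(y/4)) mod 7 for every year of a century
assert all((odd_plus_11(y) - (y % 100 + (y % 100) // 4)) % 7 == 0
           for y in range(2000, 2100))
print(odd_plus_11(2005))   # 6, as in the worked example above
```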
Correspondence with dominical letter.
Doomsday is related to the dominical letter of the year as follows.
Look up the table below for the dominical letter (DL).
For the year 2024, the dominical letter is GF.
Computer formula for the anchor day of a year.
For computer use, the following formulas for the anchor day of a year are convenient.
For the Gregorian calendar:
formula_16
For example, the anchor day of 2009 is Saturday under the Gregorian calendar (the currently accepted calendar), since
formula_17
As another example, the anchor day of 1946 is Thursday, since
formula_18
For the Julian calendar:
formula_19
The formulas apply also for the proleptic Gregorian calendar and the proleptic Julian calendar. They use the floor function and astronomical year numbering for years BC.
For comparison, see the calculation of a Julian day number.
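These formulas are straightforward to implement. A minimal Python version, numbering weekdays 0 = Sunday through 6 = Saturday, which reproduces the two examples above:

```python
def anchor_day_gregorian(y):
    return (2 + y + y // 4 - y // 100 + y // 400) % 7   # 2 = Tuesday

def anchor_day_julian(y):
    return (0 + y + y // 4) % 7                         # 0 = Sunday

print(anchor_day_gregorian(2009))   # 6 = Saturday
print(anchor_day_gregorian(1946))   # 4 = Thursday
```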
400-year cycle of anchor days.
Since in the Gregorian calendar there are 146,097 days, or exactly 20,871 seven-day weeks, in 400 years, the anchor day repeats every four centuries. For example, the anchor day of 1700–1799 is the same as the anchor day of 2100–2199, i.e. Sunday.
The full 400-year cycle of doomsdays is given in the adjacent table. The centuries are for the Gregorian and proleptic Gregorian calendar, unless marked with a J for Julian. The Gregorian leap years are highlighted.
Negative years use astronomical year numbering. Year 25BC is −24, shown in the column of −100J (proleptic Julian) or −100 (proleptic Gregorian), at the row 76.
A leap year with Monday as doomsday means that Sunday is one of 97 days skipped in the 400-year sequence. Thus the total number of years with Sunday as doomsday is 71 minus the number of leap years with Monday as doomsday, etc. Since Monday as doomsday is skipped across February 29, 2000, and the pattern of leap days is symmetric about that leap day, the frequencies of doomsdays per weekday (adding common and leap years) are symmetric about Monday. The frequencies of doomsdays of leap years per weekday are symmetric about the doomsday of 2000, Tuesday.
The frequency of a particular date being on a particular weekday can easily be derived from the above (for a date from January 1 – February 28, relate it to the doomsday of the previous year).
For example, February 28 is one day after doomsday of the previous year, so it is 58 times each on Tuesday, Thursday and Sunday, etc. February 29 is doomsday of a leap year, so it is 15 times each on Monday and Wednesday, etc.
28-year cycle.
Regarding the frequency of doomsdays in a Julian 28-year cycle, there are 1 leap year and 3 common years for every weekday, the latter 6, 17 and 23 years after the former (so with intervals of 6, 11, 6, and 5 years; not evenly distributed because after 12 years the day is skipped in the sequence of doomsdays). The same cycle applies for any given date from March 1 falling on a particular weekday.
For any given date up to February 28 falling on a particular weekday, the 3 common years are 5, 11, and 22 years after the leap year, so with intervals of 5, 6, 11, and 6 years. Thus the cycle is the same, but with the 5-year interval after instead of before the leap year.
Thus, for any date except February 29, the intervals between common years falling on a particular weekday are 6, 11, 11. See e.g. at the bottom of the page Common year starting on Monday the years in the range 1906–2091.
For February 29 falling on a particular weekday, there is just one in every 28 years, and it is of course a leap year.
Julian calendar.
The Gregorian calendar is currently accurately lining up with astronomical events such as solstices. In 1582 this modification of the Julian calendar was first instituted. In order to correct for calendar drift, 10 days were skipped, so doomsday moved back 10 days (i.e. 3 days): Thursday, October 4 (Julian, doomsday is Wednesday) was followed by Friday, October 15 (Gregorian, doomsday is Sunday). The table includes Julian calendar years, but the algorithm is for the Gregorian and proleptic Gregorian calendar only.
Note that the Gregorian calendar was not adopted simultaneously in all countries, so for many centuries, different regions used different dates for the same day.
Full examples.
Example 1 (1985).
Suppose we want to know the day of the week of September 18, 1985. We begin with the century's anchor day, Wednesday. To this, add "a", "b", and "c" above: "a" is the floor of 85/12, which is 7; "b" is the remainder 85 mod 12, which is 1; and "c" is the floor of "b"/4, which is 0.
This yields "a" + "b" + "c" = 8. Counting 8 days from Wednesday, we reach Thursday, which is the doomsday in 1985. (Using numbers: In modulo 7 arithmetic, 8 is congruent to 1. Because the century's anchor day is Wednesday (index 3), and 3 + 1 = 4, doomsday in 1985 was Thursday (index 4).) We now compare September 18 to a nearby doomsday, September 5. We see that the 18th is 13 past a doomsday, i.e. one day less than two weeks. Hence, the 18th was a Wednesday (the day preceding Thursday). (Using numbers: In modulo 7 arithmetic, 13 is congruent to 6 or, more succinctly, −1. Thus, we take one away from the doomsday, Thursday, to find that September 18, 1985, was a Wednesday.)
Example 2 (other centuries).
Suppose that we want to find the day of the week on which the American Civil War broke out at Fort Sumter, which was April 12, 1861. The anchor day for the century was 94 days after Tuesday, or, in other words, Friday (calculated as 18 × 5 + ⌊18/4⌋; or just look at the chart, above, which lists the century's anchor days). The digits 61 gave a displacement of six days, so doomsday was Thursday. Therefore, April 4 was a Thursday, so April 12, eight days later, was a Friday.
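The whole method, a year anchor plus a nearby memorable doomsdate, fits in a few lines of code. The following Python sketch uses the doomsdates listed earlier (January 3/4, the last day of February, 3/14, 4/4, 9/5, 6/6, 11/7, 8/8, 5/9, 10/10, 7/11, 12/12) and reproduces both examples:

```python
DOOMSDATES = {1: 3, 2: 28, 3: 14, 4: 4, 5: 9, 6: 6,
              7: 11, 8: 8, 9: 5, 10: 10, 11: 7, 12: 12}

def is_leap(y):
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

def day_of_week(y, m, d):
    """Gregorian day of the week via the Doomsday rule; 0 = Sunday ... 6 = Saturday."""
    doomsday = (2 + y + y // 4 - y // 100 + y // 400) % 7
    ref = DOOMSDATES[m] + (1 if m <= 2 and is_leap(y) else 0)   # Jan 4 / Feb 29 in leap years
    return (doomsday + d - ref) % 7

print(day_of_week(1985, 9, 18))   # 3 = Wednesday
print(day_of_week(1861, 4, 12))   # 5 = Friday
```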
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "c = \\biggl\\lfloor {\\text{year} \\over 100} \\biggr\\rfloor "
},
{
"math_id": 1,
"text": "\\begin{matrix}\\left({\\left\\lfloor{\\frac{y}{12}}\\right\\rfloor+y \\bmod 12+\\left\\lfloor{\\frac{y \\bmod 12}{4}}\\right\\rfloor}\\right) \\bmod 7+\\rm{anchor}=\\rm{Doomsday}\\end{matrix}"
},
{
"math_id": 2,
"text": "\\begin{matrix}\\left({\\left\\lfloor{\\frac{66}{12}}\\right\\rfloor+66 \\bmod 12+\\left\\lfloor{\\frac{66 \\bmod 12}{4}}\\right\\rfloor}\\right) \\bmod 7+\\rm{Wednesday} & = & \\left(5+6+1\\right) \\bmod 7+\\rm{Wednesday} \\\\\n\\ & = & \\rm{Monday}\\end{matrix}"
},
{
"math_id": 3,
"text": "\\begin{matrix}\\left({66 + \\left\\lfloor{\\frac{66}{4}}\\right\\rfloor}\\right) \\bmod 7+\\rm{Wednesday} & = & \\left(66+16\\right) \\bmod 7+\\rm{Wednesday} \\\\\n\\ & = & \\rm{Monday}\\end{matrix}"
},
{
"math_id": 4,
"text": "\\left({\\left\\lfloor{\\frac{5}{12}}\\right\\rfloor+5 \\bmod 12+\\left\\lfloor{\\frac{5 \\bmod 12}{4}}\\right\\rfloor}\\right) \\bmod 7+\\rm{Tuesday}=\\rm{Monday}"
},
{
"math_id": 5,
"text": "\\left(y + \\left\\lfloor \\frac{y}{4} \\right\\rfloor\\right) \\bmod 7."
},
{
"math_id": 6,
"text": "\\left(66 + \\left\\lfloor \\frac{66}{4} \\right\\rfloor\\right) \\bmod 7 = (66 + 16) \\bmod 7 = 82 \\bmod 7 = 5"
},
{
"math_id": 7,
"text": "\\bigl(y + \\bigl\\lfloor \\tfrac{y}{4} \\bigr\\rfloor \\bigr) \\bmod 7"
},
{
"math_id": 8,
"text": "\\bigl(12 + \\tfrac{12}{4}\\bigr) \\bmod 7 = 15 \\bmod 7 = 1"
},
{
"math_id": 9,
"text": "\\bigl\\lfloor \\tfrac{y}{12} \\bigr\\rfloor"
},
{
"math_id": 10,
"text": " (100 + 24) \\bmod 7 = 2 + 3 = 5 "
},
{
"math_id": 11,
"text": " \\left(5c + \\biggl\\lfloor {c \\over 4} \\biggr\\rfloor \\right) \\bmod 7 "
},
{
"math_id": 12,
"text": " (5 (c \\bmod 4)) \\bmod 7"
},
{
"math_id": 13,
"text": " -2 - 2 - 2 - 1 = -7, \\qquad -7 \\equiv 0 \\quad \\pmod{7}"
},
{
"math_id": 14,
"text": "\\left(y + \\left\\lfloor \\frac{y}{4} \\right\\rfloor\\right) \\bmod 7"
},
{
"math_id": 15,
"text": " 7- \\left[\\frac{y+11(y\\,\\bmod 2)}{2} + 11 \\left(\\frac{y+11(y\\,\\bmod 2)}{2}\\bmod 2\\right)\\right] \\bmod 7"
},
{
"math_id": 16,
"text": "\\mbox{anchor day} = \\mbox{Tuesday} + y + \\left\\lfloor\\frac{y}{4}\\right\\rfloor - \\left\\lfloor\\frac{y}{100}\\right\\rfloor + \\left\\lfloor\\frac{y}{400}\\right\\rfloor = \\mbox{Tuesday} + 5\\times (y\\bmod 4) + 4\\times (y\\bmod 100) + 6\\times (y\\bmod 400)"
},
{
"math_id": 17,
"text": "\\mbox{Saturday (6)} \\bmod 7 = \\mbox{Tuesday (2)} + 2009 + \\left\\lfloor\\frac{2009}{4}\\right\\rfloor - \\left\\lfloor\\frac{2009}{100}\\right\\rfloor + \\left\\lfloor\\frac{2009}{400}\\right\\rfloor"
},
{
"math_id": 18,
"text": "\\mbox{Thursday (4)} \\bmod 7 = \\mbox{Tuesday (2)} + 1946 + \\left\\lfloor\\frac{1946}{4}\\right\\rfloor - \\left\\lfloor\\frac{1946}{100}\\right\\rfloor + \\left\\lfloor\\frac{1946}{400}\\right\\rfloor"
},
{
"math_id": 19,
"text": "\\mbox{anchor day} = \\mbox{Sunday} + y + \\left\\lfloor\\frac{y}{4}\\right\\rfloor = \\mbox{Sunday}+ 5\\times (y\\bmod 4) + 3\\times (y\\bmod 7)"
}
] |
https://en.wikipedia.org/wiki?curid=626579
|
6266055
|
Elliptic rational functions
|
In mathematics the elliptic rational functions are a sequence of rational functions with real coefficients. Elliptic rational functions are extensively used in the design of elliptic electronic filters. (These functions are sometimes called Chebyshev rational functions, not to be confused with certain other functions of the same name).
Rational elliptic functions are identified by a positive integer order "n" and include a parameter ξ ≥ 1 called the selectivity factor. A rational elliptic function of degree "n" in "x" with selectivity factor ξ is generally defined as:
formula_0
where
* cd() is the Jacobi elliptic cosine function,
* K() is a complete elliptic integral of the first kind, and
* formula_1 is the discrimination factor, equal to the minimum value of the magnitude of formula_2 for formula_3.
For many cases, in particular for orders of the form "n" = 2"a"3"b" where "a" and "b" are integers, the elliptic rational functions can be expressed using algebraic functions alone. Elliptic rational functions are closely related to the Chebyshev polynomials: Just as the circular trigonometric functions are special cases of the Jacobi elliptic functions, so the Chebyshev polynomials are special cases of the elliptic rational functions.
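The defining expression can be evaluated with standard special-function libraries. The sketch below uses SciPy (whose elliptic routines take the parameter m = k²) together with the relation cd⁻¹(x, k) = K(k) − F(arcsin x, k) for 0 ≤ x ≤ 1, and compares the result for n = 2 against the closed algebraic form of R_2 given later in the article; the value ξ = 1.5 is an arbitrary test choice:

```python
import numpy as np
from scipy.special import ellipj, ellipk, ellipkinc

def cd(u, k):
    sn, cn, dn, _ = ellipj(u, k**2)          # SciPy parameter m = k**2
    return cn / dn

def inv_cd(x, k):
    # cd^{-1}(x, k) = K(k) - F(arcsin x, k), valid for 0 <= x <= 1
    return ellipk(k**2) - ellipkinc(np.arcsin(x), k**2)

def R(n, xi, x, Ln):
    k, kL = 1.0 / xi, 1.0 / Ln
    u = n * ellipk(kL**2) / ellipk(k**2) * inv_cd(x, k)
    return cd(u, kL)

xi = 1.5
t = np.sqrt(1 - 1 / xi**2)
L2 = (xi + np.sqrt(xi**2 - 1))**2            # algebraic L_2(xi)
R2 = lambda x: ((t + 1) * x**2 - 1) / ((t - 1) * x**2 + 1)
for x in (0.0, 0.3, 0.7, 1.0):
    print(x, R(2, xi, x, L2), R2(x))         # Jacobi-based and algebraic values agree
```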
Expression as a ratio of polynomials.
For even orders, the elliptic rational functions may be expressed as a ratio of two polynomials, both of order "n".
formula_4 (for n even)
where formula_5 are the zeroes and formula_6 are the poles, and formula_7 is a normalizing constant chosen such that formula_8. The above form would be valid for odd orders as well, except that for odd orders there will be a pole at x=∞ and a zero at x=0, so that the form must be modified to read:
formula_9 (for n odd)
Properties.
The canonical properties.
These are: formula_10 for formula_11; formula_12 for formula_13; formula_14; and formula_15 for formula_16. The only rational function satisfying the above properties is the elliptic rational function. The following properties are derived:
Normalization.
The elliptic rational function is normalized to unity at x=1:
formula_17
Nesting property.
The nesting property is written:
formula_18
This is a very important property: since formula_20 and formula_21 can be expressed in closed form without explicit use of the Jacobi elliptic functions, the nesting property allows formula_19 to be constructed algebraically for any order of the form formula_22. It also follows that the discrimination factors satisfy
formula_23
Limiting values.
The elliptic rational functions are related to the Chebyshev polynomials of the first kind formula_24 by:
formula_25
formula_26 for n even
formula_27 for n odd
Equiripple.
formula_2 has equal ripple of formula_28 in the interval formula_29. By the inversion relationship (see below), it follows that formula_30 has equiripple in formula_31 of formula_32.
Inversion relationship.
The following inversion relationship holds:
formula_33
This implies that poles and zeroes come in pairs such that
formula_34
Odd order functions will have a zero at "x=0" and a corresponding pole at infinity.
Poles and Zeroes.
The zeroes of the elliptic rational function of order "n" will be written formula_35 or formula_36 when formula_37 is implicitly known. The zeroes of the elliptic rational function will be the zeroes of the polynomial in the numerator of the function.
The following derivation of the zeroes of the elliptic rational function is analogous to that of determining the zeroes of the Chebyshev polynomials . Using the fact that for any "z"
formula_38
the defining equation for the elliptic rational functions implies that
formula_39
so that the zeroes are given by
formula_40
Using the inversion relationship, the poles may then be calculated.
From the nesting property, if the zeroes of formula_41 and formula_19 can be algebraically expressed (i.e. without the need for calculating the Jacobi ellipse functions) then the zeroes of formula_42 can be algebraically expressed. In particular, the zeroes of elliptic rational functions of order formula_43 may be algebraically expressed . For example, we can find the zeroes of formula_44 as follows: Define
formula_45
Then, from the nesting property and knowing that
formula_46
where formula_47 we have:
formula_48
formula_49
These last three equations may be inverted:
formula_50
To calculate the zeroes of formula_44 we set formula_51 in the third equation, calculate the two values of formula_52, then use these values of formula_52 in the second equation to calculate four values of formula_53 and finally, use these values in the first equation to calculate the eight zeroes of formula_44. (The formula_54 are calculated by a similar recursion.) Again, using the inversion relationship, these zeroes can be used to calculate the poles.
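The recursion just described is easy to carry out numerically. A short Python sketch (with the arbitrary test value ξ = 1.2) computes the eight zeroes of formula_44 and confirms, via the nested composition of second-order functions, that the function vanishes there:

```python
import numpy as np

xi = 1.2                                   # arbitrary selectivity factor
t  = np.sqrt(1 - 1 / xi**2)
L2 = (1 + t) / (1 - t)
t2 = np.sqrt(1 - 1 / L2**2)
L4 = (1 + t2) / (1 - t2)
t4 = np.sqrt(1 - 1 / L4**2)

def invert(X, s):
    """Solve X = ((s+1)w**2 - 1) / ((s-1)w**2 + 1) for w (both signs)."""
    w = 1 / np.sqrt(1 + s * (1 - X) / (1 + X))
    return [w, -w]

zeros = []
for X4 in invert(0.0, t4):                 # X8 = 0 at a zero of R_8
    for X2 in invert(X4, t2):
        zeros += invert(X2, t)
zeros = np.array(zeros)                    # the eight zeroes of R_8

R2 = lambda s, w: ((s + 1) * w**2 - 1) / ((s - 1) * w**2 + 1)
print(np.max(np.abs(R2(t4, R2(t2, R2(t, zeros))))))   # ~ 1e-15, so R_8 vanishes there
```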
Particular values.
We may write the first few elliptic rational functions as:
formula_55
formula_46
where
formula_56
formula_57
where
formula_58
formula_59
formula_60
formula_61
formula_62 etc.
See for further explicit expressions of order "n=5" and formula_63.
The corresponding discrimination factors are:
formula_64
formula_65
formula_66
formula_67
formula_68 etc.
The corresponding zeroes are formula_69 where "n" is the order and "j" is the number of the zero. There will be a total of "n" zeroes for each order.
formula_70
formula_71
formula_72
formula_73
formula_74
formula_75
formula_76
formula_77
formula_78
formula_79
From the inversion relationship, the corresponding poles formula_80 may be found by formula_81
|
[
{
"math_id": 0,
"text": "R_n(\\xi,x)\\equiv \\mathrm{cd}\\left(n\\frac{K(1/L_n(\\xi))}{K(1/\\xi)}\\,\\mathrm{cd}^{-1}(x,1/\\xi),1/L_n(\\xi)\\right)"
},
{
"math_id": 1,
"text": "L_n(\\xi)=R_n(\\xi,\\xi)"
},
{
"math_id": 2,
"text": "R_n(\\xi,x)"
},
{
"math_id": 3,
"text": "|x|\\ge\\xi"
},
{
"math_id": 4,
"text": "R_n(\\xi,x)=r_0\\,\\frac{\\prod_{i=1}^n (x-x_i)}{\\prod_{i=1}^n (x-x_{pi})}"
},
{
"math_id": 5,
"text": "x_i"
},
{
"math_id": 6,
"text": "x_{pi}"
},
{
"math_id": 7,
"text": "r_0"
},
{
"math_id": 8,
"text": "R_n(\\xi,1)=1"
},
{
"math_id": 9,
"text": "R_n(\\xi,x)=r_0\\,x\\,\\frac{\\prod_{i=1}^{n-1} (x-x_i)}{\\prod_{i=1}^{n-1} (x-x_{pi})}"
},
{
"math_id": 10,
"text": "R_n^2(\\xi,x)\\le 1"
},
{
"math_id": 11,
"text": "|x|\\le 1\\,"
},
{
"math_id": 12,
"text": "R_n^2(\\xi,x)= 1"
},
{
"math_id": 13,
"text": "|x|= 1\\,"
},
{
"math_id": 14,
"text": "R_n^2(\\xi,-x)=R_n^2(\\xi,x)"
},
{
"math_id": 15,
"text": "R_n^2(\\xi,x)>1"
},
{
"math_id": 16,
"text": "x>1\\,"
},
{
"math_id": 17,
"text": "R_n(\\xi,1)=1\\,"
},
{
"math_id": 18,
"text": "R_m(R_n(\\xi,\\xi),R_n(\\xi,x))=R_{m\\cdot n}(\\xi,x)\\,"
},
{
"math_id": 19,
"text": "R_n"
},
{
"math_id": 20,
"text": "R_2"
},
{
"math_id": 21,
"text": "R_3"
},
{
"math_id": 22,
"text": "n=2^a3^b"
},
{
"math_id": 23,
"text": "L_{m\\cdot n}(\\xi)=L_m(L_n(\\xi))"
},
{
"math_id": 24,
"text": "T_n(x)"
},
{
"math_id": 25,
"text": "\\lim_{\\xi=\\rightarrow\\,\\infty}R_n(\\xi,x)=T_n(x)\\,"
},
{
"math_id": 26,
"text": "R_n(\\xi,-x)=R_n(\\xi,x)\\,"
},
{
"math_id": 27,
"text": "R_n(\\xi,-x)=-R_n(\\xi,x)\\,"
},
{
"math_id": 28,
"text": "\\pm 1"
},
{
"math_id": 29,
"text": "-1\\le x\\le 1"
},
{
"math_id": 30,
"text": "1/R_n(\\xi,x)"
},
{
"math_id": 31,
"text": "-1/\\xi \\le x\\le 1/\\xi"
},
{
"math_id": 32,
"text": "\\pm 1/L_n(\\xi)"
},
{
"math_id": 33,
"text": "R_n(\\xi,\\xi/x)=\\frac{R_n(\\xi,\\xi)}{R_n(\\xi,x)}\\,"
},
{
"math_id": 34,
"text": "x_{pi}x_{zi}=\\xi\\,"
},
{
"math_id": 35,
"text": "x_{ni}(\\xi)"
},
{
"math_id": 36,
"text": "x_{ni}"
},
{
"math_id": 37,
"text": "\\xi"
},
{
"math_id": 38,
"text": "\\mathrm{cd}\\left((2m-1)K\\left(1/z\\right),\\frac{1}{z}\\right)=0\\,"
},
{
"math_id": 39,
"text": "n \\frac{K(1/L_n)}{K(1/\\xi)}\\mathrm{cd}^{-1}(x_m,1/\\xi)=(2m-1)K(1/L_n)"
},
{
"math_id": 40,
"text": "x_m=\\mathrm{cd}\\left(K(1/\\xi)\\,\\frac{2m-1}{n},\\frac{1}{\\xi}\\right)."
},
{
"math_id": 41,
"text": "R_m"
},
{
"math_id": 42,
"text": "R_{m\\cdot n}"
},
{
"math_id": 43,
"text": "2^i3^j"
},
{
"math_id": 44,
"text": "R_8(\\xi,x)"
},
{
"math_id": 45,
"text": "\nX_n\\equiv R_n(\\xi,x)\\qquad \nL_n\\equiv R_n(\\xi,\\xi)\\qquad \nt_n\\equiv \\sqrt{1-1/L_n^2}."
},
{
"math_id": 46,
"text": "R_2(\\xi,x)=\\frac{(t+1)x^2-1}{(t-1)x^2+1}"
},
{
"math_id": 47,
"text": "t\\equiv \\sqrt{1-1/\\xi^2}"
},
{
"math_id": 48,
"text": "\nL_2=\\frac{1+t}{1-t},\\qquad \nL_4=\\frac{1+t_2}{1-t_2},\\qquad \nL_8=\\frac{1+t_4}{1-t_4}\n"
},
{
"math_id": 49,
"text": "\nX_2=\\frac{(t+1)x^2 -1}{(t-1)x^2 +1},\\qquad \nX_4=\\frac{(t_2+1)X_2^2-1}{(t_2-1)X_2^2+1},\\qquad \nX_8=\\frac{(t_4+1)X_4^2-1}{(t_4-1)X_4^2+1}.\n"
},
{
"math_id": 50,
"text": "\nx =\\frac{1}{\\pm\\sqrt{1+t \\,\\left(\\frac{1-X_2}{1+X_2}\\right)}},\\qquad\nX_2=\\frac{1}{\\pm\\sqrt{1+t_2\\,\\left(\\frac{1-X_4}{1+X_4}\\right)}},\\qquad\nX_4=\\frac{1}{\\pm\\sqrt{1+t_4\\,\\left(\\frac{1-X_8}{1+X_8}\\right)}}.\\qquad\n"
},
{
"math_id": 51,
"text": "X_8=0"
},
{
"math_id": 52,
"text": "X_4"
},
{
"math_id": 53,
"text": "X_2"
},
{
"math_id": 54,
"text": "t_n"
},
{
"math_id": 55,
"text": "R_1(\\xi,x)=x\\,"
},
{
"math_id": 56,
"text": "t \\equiv \\sqrt{1-\\frac{1}{\\xi^2}}"
},
{
"math_id": 57,
"text": "R_3(\\xi,x)=x\\,\\frac{(1-x_p^2)(x^2-x_z^2)}{(1-x_z^2)(x^2-x_p^2)}"
},
{
"math_id": 58,
"text": "G\\equiv\\sqrt{4\\xi^2+(4\\xi^2(\\xi^2\\!-\\!1))^{2/3}}"
},
{
"math_id": 59,
"text": "x_p^2\\equiv\\frac{2\\xi^2\\sqrt{G}}{\\sqrt{8\\xi^2(\\xi^2\\!+\\!1)+12G\\xi^2-G^3}-\\sqrt{G^3}}"
},
{
"math_id": 60,
"text": "x_z^2=\\xi^2/x_p^2"
},
{
"math_id": 61,
"text": "R_4(\\xi,x)=R_2(R_2(\\xi,\\xi),R_2(\\xi,x))=\\frac\n{(1+t)(1+\\sqrt{t})^2x^4-2(1+t)(1+\\sqrt{t})x^2+1}\n{(1+t)(1-\\sqrt{t})^2x^4-2(1+t)(1-\\sqrt{t})x^2+1}\n"
},
{
"math_id": 62,
"text": "R_6(\\xi,x)=R_3(R_2(\\xi,\\xi),R_2(\\xi,x))\\,"
},
{
"math_id": 63,
"text": "n=2^i\\,3^j"
},
{
"math_id": 64,
"text": "L_1(\\xi)=\\xi\\,"
},
{
"math_id": 65,
"text": "L_2(\\xi)=\\frac{1+t}{1-t}=\\left(\\xi+\\sqrt{\\xi^2-1}\\right)^2"
},
{
"math_id": 66,
"text": "L_3(\\xi)=\\xi^3\\left(\\frac{1-x_p^2}{\\xi^2-x_p^2}\\right)^2"
},
{
"math_id": 67,
"text": "L_4(\\xi)=\\left(\\sqrt{\\xi}+(\\xi^2-1)^{1/4}\\right)^4\\left(\\xi+\\sqrt{\\xi^2-1}\\right)^2"
},
{
"math_id": 68,
"text": "L_6(\\xi)=L_3(L_2(\\xi))\\,"
},
{
"math_id": 69,
"text": "x_{nj}"
},
{
"math_id": 70,
"text": "x_{11}=0\\,"
},
{
"math_id": 71,
"text": "x_{21}=\\xi\\sqrt{1-t}\\,"
},
{
"math_id": 72,
"text": "x_{22}=-x_{21}\\,"
},
{
"math_id": 73,
"text": "x_{31}=x_z\\,"
},
{
"math_id": 74,
"text": "x_{32}=0\\,"
},
{
"math_id": 75,
"text": "x_{33}=-x_{31}\\,"
},
{
"math_id": 76,
"text": "x_{41}=\\xi\\sqrt{\\left(1-\\sqrt{t}\\right)\\left(1+t-\\sqrt{t(t+1)}\\right)}\\,"
},
{
"math_id": 77,
"text": "x_{42}=\\xi\\sqrt{\\left(1-\\sqrt{t}\\right)\\left(1+t+\\sqrt{t(t+1)}\\right)}\\,"
},
{
"math_id": 78,
"text": "x_{43}=-x_{42}\\,"
},
{
"math_id": 79,
"text": "x_{44}=-x_{41}\\,"
},
{
"math_id": 80,
"text": "x_{p,ni}"
},
{
"math_id": 81,
"text": "x_{p,ni}=\\xi/(x_{ni})"
}
] |
https://en.wikipedia.org/wiki?curid=6266055
|
62662441
|
Schwarz function
|
Mathematics function in complex analysis
The Schwarz function of a curve in the complex plane is an analytic function which maps the points of the curve to their complex conjugates. It can be used to generalize the Schwarz reflection principle to reflection across arbitrary analytic curves, not just across the real axis.
The Schwarz function exists for analytic curves. More precisely, for every non-singular, analytic Jordan arc formula_0 in the complex plane, there is an open neighborhood formula_1 of formula_0 and a unique analytic function formula_2 on formula_1 such that formula_3 for every formula_4.
The "Schwarz function" was named by Philip J. Davis and Henry O. Pollak (1958) in honor of Hermann Schwarz, who introduced the Schwarz reflection principle for analytic curves in 1870. However, the Schwarz function does not explicitly appear in Schwarz's works.
Examples.
The unit circle is described by the equation formula_5, or formula_6. Thus, the Schwarz function of the unit circle is formula_7.
A more complicated example is an ellipse defined by formula_8. The Schwarz function can be found by substituting formula_9 and formula_10 and solving for formula_11. The result is:
formula_12.
This is analytic on the complex plane minus a branch cut along the line segment between the foci formula_13.
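Both examples can be checked numerically, since on the curve itself the Schwarz function must return the complex conjugate. A short Python sketch (the semi-axes a = 2, b = 1 and the sample points are arbitrary; the principal square root is used away from the branch cut):

```python
import numpy as np

a, b = 2.0, 1.0
def S(z):
    # Schwarz function of the ellipse (x/a)^2 + (y/b)^2 = 1, principal branch
    return ((a**2 + b**2) * z - 2 * a * b * np.sqrt(z**2 + b**2 - a**2 + 0j)) / (a**2 - b**2)

theta = np.linspace(0.1, 1.4, 5)                 # sample points on the ellipse
z = a * np.cos(theta) + 1j * b * np.sin(theta)
print(np.max(np.abs(S(z) - np.conj(z))))         # ~ 1e-16

w = np.exp(1j * theta)                           # unit circle: S(w) = 1/w
print(np.max(np.abs(1 / w - np.conj(w))))        # ~ 1e-16
```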
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Gamma"
},
{
"math_id": 1,
"text": "\\Omega"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "S(z) = \\overline{z}"
},
{
"math_id": 4,
"text": "z \\in \\Gamma"
},
{
"math_id": 5,
"text": "|z|^2 = 1"
},
{
"math_id": 6,
"text": "\\overline{z} = 1/z"
},
{
"math_id": 7,
"text": "S(z) = 1/z"
},
{
"math_id": 8,
"text": "(x/a)^2 + (y/b)^2 = 1"
},
{
"math_id": 9,
"text": "\\textstyle x = \\frac{z + \\overline{z}}{2}"
},
{
"math_id": 10,
"text": "\\textstyle y = \\frac{z - \\overline{z}}{2i}"
},
{
"math_id": 11,
"text": "\\overline{z}"
},
{
"math_id": 12,
"text": "S(z) = \\frac{1}{a^2-b^2} \\left( (a^2+b^2)z - 2ab\\sqrt{z^2+b^2-a^2} \\right)"
},
{
"math_id": 13,
"text": "\\pm \\sqrt{a^2-b^2}"
}
] |
https://en.wikipedia.org/wiki?curid=62662441
|