id (string, length 2–8) | title (string, length 1–130) | text (string, length 0–252k) | formulas (list, length 1–823) | url (string, length 38–44)
---|---|---|---|---
68644033 | Session type | In type theory, session types are used to ensure correctness in concurrent programs. They guarantee that messages sent and received between concurrent programs are in the expected order and of the expected type. Session type systems have been adapted for both channel and actor systems.
Session types are used to ensure desirable properties in concurrent and distributed systems, such as the absence of communication errors or deadlocks, and protocol conformance.
Binary versus multiparty session types.
Interaction between two processes can be checked using "binary" session types, while interactions between more than two processes can be checked using "multiparty" session types. In multiparty session types interactions between all participants are described using a "global type", which is then projected into "local types" that describe communication from the local view of each participant. Importantly, the global type encodes the sequencing information of the communication, which would be lost if we were to use binary session types to encode the same communication.
Formal definition of binary session types.
Binary session types can be described using send operations (formula_0), receive operations (formula_1), branches (formula_2), selections (formula_3), recursion (formula_4) and termination (formula_5).
For example, formula_6 represents a session type formula_7 which first sends a boolean (formula_8), then receives an integer (formula_9) before finally terminating (formula_5).
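To illustrate how such a type can be read as a protocol specification, the following minimal Python sketch (not taken from any existing session-type library; all names are chosen here for illustration) checks a trace of communication events against the session type formula_6:

```python
# Hypothetical sketch: the session type S = !bool.?int.end read as a list of
# (direction, payload type) steps, and a checker for traces of events.

SESSION_S = [("send", bool), ("recv", int)]   # !bool . ?int . end

def conforms(trace, session=SESSION_S):
    """Return True if a trace of (direction, value) events follows the session type."""
    if len(trace) != len(session):            # 'end' means no further communication
        return False
    for (direction, value), (expected_dir, expected_type) in zip(trace, session):
        if direction != expected_dir or not isinstance(value, expected_type):
            return False
    return True

print(conforms([("send", True), ("recv", 42)]))   # True: matches !bool.?int.end
print(conforms([("send", 1.5), ("recv", 42)]))    # False: first payload is not a bool
print(conforms([("recv", 42), ("send", True)]))   # False: send and receive are out of order
```

A real session-type system performs this check statically, at the type level, rather than at run time as in this sketch.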
Implementations.
Session types have been adapted for several existing programming languages, including:
References.
| [
{
"math_id": 0,
"text": "!"
},
{
"math_id": 1,
"text": "?"
},
{
"math_id": 2,
"text": "\\&"
},
{
"math_id": 3,
"text": "\\oplus"
},
{
"math_id": 4,
"text": "rec"
},
{
"math_id": 5,
"text": "end"
},
{
"math_id": 6,
"text": "S = \\; !bool.?int.end"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "!bool"
},
{
"math_id": 9,
"text": "?int"
}
]
| https://en.wikipedia.org/wiki?curid=68644033 |
68651099 | Frobenius characteristic map | In mathematics, especially representation theory and combinatorics, a Frobenius characteristic map is an isometric isomorphism between the ring of characters of symmetric groups and the ring of symmetric functions. It builds a bridge between representation theory of the symmetric groups and algebraic combinatorics. This map makes it possible to study representation problems with help of symmetric functions and vice versa. This map is named after German mathematician Ferdinand Georg Frobenius.
Definition.
The ring of characters.
Source:
Let formula_0 be the formula_1-module generated by all irreducible characters of formula_2 over formula_3. In particular formula_4 and therefore formula_5. The ring of characters is defined to be the direct sum formula_6 with the following multiplication to make formula_7 a graded commutative ring. Given formula_8 and formula_9, the product is defined to be formula_10 with the understanding that formula_11 is embedded into formula_12 and formula_13 denotes the induced character.
Frobenius characteristic map.
For formula_8, the value of the Frobenius characteristic map formula_14 at formula_15, which is also called the "Frobenius image" of formula_15, is defined to be the polynomial
formula_16
Remarks.
Here, formula_17 is the integer partition determined by the cycle type of formula_18. For example, when formula_19 and formula_20, formula_21 corresponds to the partition formula_22. Conversely, a partition formula_23 of formula_24 (written as formula_25) determines a conjugacy class formula_26 in formula_2. For example, given formula_27, formula_28 is a conjugacy class. Hence, by abuse of notation, formula_29 can be used to denote the value of formula_15 on the conjugacy class determined by formula_23. Note that this always makes sense because formula_15 is a class function.
Let formula_23 be a partition of formula_24; then formula_30 is the product of the power sum symmetric polynomials determined by formula_23, in formula_24 variables. For example, given formula_31, a partition of formula_32,
formula_33
Finally, formula_34 is defined to be formula_35, where formula_36 is the cardinality of the conjugacy class formula_37. For example, when formula_38, formula_39. The second definition of formula_40 can therefore be justified directly: formula_41
Properties.
Inner product and isometry.
Hall inner product.
Source:
The inner product on the ring of symmetric functions is the Hall inner product. It is required that formula_42. Here, formula_43 is a monomial symmetric function and formula_44 is a product of complete homogeneous symmetric functions. To be precise, let formula_45 be a partition of an integer; then formula_46 In particular, with respect to this inner product, formula_47 form an orthogonal basis: formula_48, and the Schur polynomials formula_49 form an orthonormal basis: formula_50, where formula_51 is the Kronecker delta.
Inner product of characters.
For formula_52, their inner product is defined to be
formula_53 If formula_54, then
formula_55
Frobenius characteristic map as an isometry.
One can prove that the Frobenius characteristic map is an isometry by explicit computation. To show this, it suffices to assume that formula_52: formula_56
Ring isomorphism.
The map formula_14 is an isomorphism between formula_7 and the formula_1-ring formula_57. The fact that this map is a ring homomorphism can be shown by Frobenius reciprocity. For formula_8 and formula_9, formula_58
Defining formula_59 by formula_60, the Frobenius characteristic map can be written in a shorter form:
formula_61
In particular, if formula_15 is an irreducible character, then formula_40 is a Schur polynomial in formula_24 variables. It follows that formula_14 maps an orthonormal basis of formula_7 to an orthonormal basis of formula_57. Therefore, it is an isomorphism.
Example.
Computing the Frobenius image.
Let formula_15 be the alternating representation of formula_62, which is defined by formula_63, where formula_64 is the sign of the permutation formula_65. There are three conjugacy classes of formula_62, which can be represented by formula_66 (the identity, or the product of three 1-cycles), formula_67 (transpositions, or the products of one 2-cycle and one 1-cycle) and formula_68 (3-cycles). These three conjugacy classes therefore correspond to three partitions of formula_69 given by formula_70, formula_71, formula_72. The values of formula_15 on these three classes are formula_73, respectively. Therefore: formula_74 Since formula_15 is an irreducible representation (which can be shown by computing its characters), the computation above gives the Schur polynomial in three variables corresponding to the partition formula_75.
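The final computation above can be verified symbolically; the following short sketch (using SymPy, with variable names chosen only for illustration) expands the weighted sum of power sums and recovers the product of the three variables:

```python
# Symbolic check of the Frobenius image of the alternating character of S_3:
# ch(f) = (1/6) p_{(1,1,1)} - (1/2) p_{(2,1)} + (1/3) p_{(3)} should equal x1*x2*x3.
from sympy import symbols, expand, Rational

x1, x2, x3 = symbols("x1 x2 x3")
p1 = x1 + x2 + x3                     # power sum p_1
p2 = x1**2 + x2**2 + x3**2            # power sum p_2
p3 = x1**3 + x2**3 + x3**3            # power sum p_3

ch_f = Rational(1, 6) * p1**3 - Rational(1, 2) * p2 * p1 + Rational(1, 3) * p3
print(expand(ch_f))                   # prints x1*x2*x3
```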
References.
| [
{
"math_id": 0,
"text": "R^n"
},
{
"math_id": 1,
"text": "\\mathbb{Z}"
},
{
"math_id": 2,
"text": "S_n"
},
{
"math_id": 3,
"text": "\\mathbb{C}"
},
{
"math_id": 4,
"text": "S_0=\\{1\\}"
},
{
"math_id": 5,
"text": "R^0=\\mathbb{Z}"
},
{
"math_id": 6,
"text": "R=\\bigoplus_{n=0}^{\\infty}R^n"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "f \\in R^n"
},
{
"math_id": 9,
"text": "g \\in R^m"
},
{
"math_id": 10,
"text": "f \\cdot g = \\operatorname{ind}_{S_m \\times S_n}^{S_{m+n}}(f \\times g)"
},
{
"math_id": 11,
"text": "S_m \\times S_n"
},
{
"math_id": 12,
"text": "S_{m+n}"
},
{
"math_id": 13,
"text": "\n\\operatorname{ind}"
},
{
"math_id": 14,
"text": "\\operatorname{ch}"
},
{
"math_id": 15,
"text": "f"
},
{
"math_id": 16,
"text": "\\operatorname{ch}(f)=\\frac{1}{n!}\\sum_{w \\in S_n}f(w)p_{\\rho(w)}=\\sum_{\\mu \\vdash n}z_\\mu^{-1}f(\\mu)p_\\mu."
},
{
"math_id": 17,
"text": "\\rho(w)"
},
{
"math_id": 18,
"text": "w"
},
{
"math_id": 19,
"text": "n=3"
},
{
"math_id": 20,
"text": "w=(12)(3)"
},
{
"math_id": 21,
"text": "\\rho(w)=(2,1)"
},
{
"math_id": 22,
"text": "3=2+1"
},
{
"math_id": 23,
"text": "\\mu"
},
{
"math_id": 24,
"text": "n"
},
{
"math_id": 25,
"text": "\\mu \\vdash n"
},
{
"math_id": 26,
"text": "K_\\mu"
},
{
"math_id": 27,
"text": "\\mu=(2,1)\\vdash 3"
},
{
"math_id": 28,
"text": "K_\\mu=\\{(12)(3),(13)(2),(23)(1)\\}"
},
{
"math_id": 29,
"text": "f(\\mu)"
},
{
"math_id": 30,
"text": "p_\\mu"
},
{
"math_id": 31,
"text": "\\mu=(3,2)"
},
{
"math_id": 32,
"text": "5"
},
{
"math_id": 33,
"text": "\\begin{aligned}\np_\\mu(x_1,x_2,x_3,x_4,x_5)&=p_3(x_1,x_2,x_3,x_4,x_5)p_2(x_1,x_2,x_3,x_4,x_5) \\\\\n &=(x_1^3+x_2^3+x_3^3+x_4^3+x_5^3)(x_1^2+x_2^2+x_3^2+x_4^2+x_5^2)\n\\end{aligned}"
},
{
"math_id": 34,
"text": "z_\\lambda"
},
{
"math_id": 35,
"text": "\\frac{n!}{k_\\lambda}"
},
{
"math_id": 36,
"text": "k_\\lambda"
},
{
"math_id": 37,
"text": "K_\\lambda"
},
{
"math_id": 38,
"text": "\\lambda = (2,1)\\vdash 3"
},
{
"math_id": 39,
"text": "z_\\lambda = \\frac{3!}{3}=2"
},
{
"math_id": 40,
"text": "\\operatorname{ch}(f)"
},
{
"math_id": 41,
"text": "\\frac{1}{n!}\\sum_{w \\in S_n}f(w)p_{\\rho(w)} = \\sum_{\\mu \\vdash n}\\frac{k_\\mu}{n!}f(\\mu)p_\\mu \n = \\sum_{\\mu \\vdash n}z_\\mu^{-1}f(\\mu)p_\\mu\n"
},
{
"math_id": 42,
"text": "\\langle h_\\mu,m_\\lambda \\rangle = \\delta_{\\mu\\lambda}"
},
{
"math_id": 43,
"text": "m_\\lambda"
},
{
"math_id": 44,
"text": "h_\\mu"
},
{
"math_id": 45,
"text": "\\mu=(\\mu_1,\\mu_2,\\cdots)"
},
{
"math_id": 46,
"text": "h_\\mu=h_{\\mu_1}h_{\\mu_2}\\cdots."
},
{
"math_id": 47,
"text": "\\{p_\\lambda\\}"
},
{
"math_id": 48,
"text": "\\langle p_\\lambda,p_\\mu \\rangle = \\delta_{\\lambda\\mu}z_\\lambda"
},
{
"math_id": 49,
"text": "\\{s_\\lambda\\}"
},
{
"math_id": 50,
"text": "\\langle s_\\lambda,s_\\mu \\rangle = \\delta_{\\lambda\\mu}"
},
{
"math_id": 51,
"text": "\\delta_{\\lambda\\mu}"
},
{
"math_id": 52,
"text": "f,g \\in R^n"
},
{
"math_id": 53,
"text": "\\langle f, g \\rangle_n = \\frac{1}{n!}\\sum_{w \\in S_n}f(w)g(w) = \\sum_{\\mu \\vdash n}z_\\mu^{-1}f(\\mu)g(\\mu)"
},
{
"math_id": 54,
"text": "f = \\sum_{n}f_n,g = \\sum_{n}g_n"
},
{
"math_id": 55,
"text": "\\langle f,g \\rangle = \\sum_n \\langle f_n, g_n \\rangle_n"
},
{
"math_id": 56,
"text": "\\begin{aligned}\n\\langle \\operatorname{ch}(f),\\operatorname{ch}(g) \\rangle &= \n \\left\\langle \\sum_{\\mu\\vdash n}z_\\mu^{-1}f(\\mu)p_\\mu,\n \\sum_{\\lambda\\vdash n}z_\\lambda^{-1}g(\\lambda)p_\\lambda\\right\\rangle \\\\\n &= \\sum_{\\mu,\\lambda\\vdash n}z_\\mu^{-1}z_\\lambda^{-1}\n f(\\mu)g(\\mu)\\langle p_\\mu,p_\\lambda \\rangle \\\\\n &= \\sum_{\\mu,\\lambda\\vdash n}z_\\mu^{-1}z_\\lambda^{-1}\n f(\\mu)g(\\mu)z_\\mu\\delta_{\\mu\\lambda} \\\\\n &= \\sum_{\\mu\\vdash n}z_{\\mu}^{-1}f(\\mu)g(\\mu) \\\\\n &= \\langle f,g \\rangle\n\\end{aligned}"
},
{
"math_id": 57,
"text": "\\Lambda"
},
{
"math_id": 58,
"text": "\\begin{aligned}\n\\operatorname{ch}(f \\cdot g) &= \\langle \\operatorname{ind}_{S_n \\times S_m}^{S_{m+n}}(f \\times g),\\psi \\rangle_{m+n} \\\\\n &= \\langle f \\times g, \\operatorname{res}_{S_n \\times S_m}^{S_{m+n}}\\psi \\rangle \\\\\n &= \\frac{1}{n!m!}\\sum_{\\pi\\sigma \\in S_n \\times S_m}(f \\times g)(\\pi\\sigma)p_{\\rho(\\pi\\sigma)} \\\\\n &= \\frac{1}{n!m!}\\sum_{\\pi \\in S_n , \\sigma \\in S_m} f(\\pi)g(\\sigma)p_{\\rho(\\pi)} p_{\\rho(\\sigma)} \\\\\n &= \\left[\\frac{1}{n!}\\sum_{\\pi \\in S_n}f(\\pi)p_{\\rho(\\pi)} \\right]\\left[\\frac{1}{m!}\\sum_{\\sigma \\in S_m}g(\\sigma)p_{\\rho(\\sigma)} \\right] \\\\\n &= \\operatorname{ch}(f)\\operatorname{ch}(g)\n\n\\end{aligned}"
},
{
"math_id": 59,
"text": "\\psi:S_n \\to \\Lambda^n"
},
{
"math_id": 60,
"text": "\\psi(w) = p_{\\rho(w)}"
},
{
"math_id": 61,
"text": "\\operatorname{ch}(f)=\\langle f, \\psi \\rangle_n, \\quad f \\in R^n."
},
{
"math_id": 62,
"text": "S_3"
},
{
"math_id": 63,
"text": "f(\\sigma)v=\\sgn(\\sigma)v"
},
{
"math_id": 64,
"text": "\\sgn(\\sigma)"
},
{
"math_id": 65,
"text": "\\sigma"
},
{
"math_id": 66,
"text": "e"
},
{
"math_id": 67,
"text": "(12)"
},
{
"math_id": 68,
"text": "(123)"
},
{
"math_id": 69,
"text": "3"
},
{
"math_id": 70,
"text": "(1,1,1)"
},
{
"math_id": 71,
"text": "(2,1)"
},
{
"math_id": 72,
"text": "(3)"
},
{
"math_id": 73,
"text": "1,-1,1"
},
{
"math_id": 74,
"text": "\\begin{aligned}\n\\operatorname{ch}(f) &= z_{(1,1,1)}^{-1}f((1,1,1))p_{(1,1,1)}+z_{(2,1)}f((2,1))p_{(2,1)}+z_{(3)}^{-1}f((3))p_{(3)} \\\\\n &= \\frac{1}{6}(x_1+x_2+x_3)^3 - \\frac{1}{2}(x_1^2+x_2^2+x_3^2)(x_1+x_2+x_3)+\\frac{1}{3}(x_1^3+x_2^3+x_3^3 ) \\\\\n &= x_1x_2x_3\n\\end{aligned}"
},
{
"math_id": 75,
"text": "3=1+1+1"
}
]
| https://en.wikipedia.org/wiki?curid=68651099 |
68654163 | Thornthwaite climate classification | Climate classification system
The Thornthwaite climate classification is a climate classification system created by American climatologist Charles Warren Thornthwaite in 1931 and modified in 1948.
1931 classification.
Precipitation effectiveness.
Thornthwaite initially divided climates based on five characteristic vegetation types: rainforest, forest, grassland, steppe and desert. According to Thornthwaite, one of the main factors determining the local vegetation is precipitation, and more specifically precipitation effectiveness. Thornthwaite based the effectiveness of precipitation on an index (the P/E index), which is the sum of the 12 monthly P/E ratios. The monthly P/E ratio can be calculated using the formula:
formula_0
Temperature efficiency.
Similarly to precipitation effectiveness, Thornthwaite also developed a T-E index to represent thermal efficiency, featuring six climate provinces: tropical, mesothermal, microthermal, taiga, tundra and frost.
The T-E index is the sum of the 12 monthly T-E ratios, which can be calculated as:
formula_1, where t is the mean monthly temperature in °F.
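A minimal sketch of how the two 1931 indices could be computed from twelve monthly values, following the formulas above (the function names and the input numbers below are placeholders for illustration, not real station data):

```python
# Sketch: Thornthwaite's 1931 indices from 12 monthly values.
# P/E index = sum of monthly (precipitation / evapotranspiration) ratios.
# T-E index = sum of monthly (t - 32) / 4 ratios, with t in degrees Fahrenheit.

def pe_index(monthly_precip, monthly_evap):
    return sum(p / e for p, e in zip(monthly_precip, monthly_evap))

def te_index(monthly_temps_f):
    return sum((t - 32) / 4 for t in monthly_temps_f)

precip = [80, 70, 60, 50, 40, 30, 20, 30, 40, 60, 70, 80]   # placeholder, mm
evap   = [40, 45, 55, 65, 75, 85, 95, 90, 75, 60, 50, 45]   # placeholder, mm
temps  = [35, 38, 45, 55, 65, 75, 80, 78, 70, 58, 46, 38]   # placeholder, degrees F

print("P/E index:", round(pe_index(precip, evap), 1))
print("T-E index:", round(te_index(temps), 1))
```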
1948 modification.
After being criticized for making climatic classification complex, Thornthwaite replaced vegetation with the concept of potential evapotranspiration (PET), which represents both precipitation effectiveness and thermal efficiency. Estimated PET can be calculated using Thornthwaite's own 1948 equation.
Thornthwaite developed four indices: the Moisture Index (Im), the aridity and humidity indexes (Ia/Ih), the Thermal Efficiency Index (TE) and the Summer Concentration of Thermal Efficiency (SCTE). Each of the four indices is represented by a letter, and the letters are arranged exactly in the order shown previously. The first two letters are used to describe the precipitation pattern and the last two are used to describe the thermal regime. As an example, B3s2A’b’4 (Tracuateua) describes a wet (B3), megathermal (A’) climate with a large summer water deficit (s2) and in which more than 48% but less than 52% of the potential evapotranspiration occurs in the summer (b’4).
Moisture Index.
The Moisture Index (Im) expresses the global moisture of the environment and is directly related with the aridity and humidity indexes. The driving factor in this system is the water budget of a region. Humidity classes range from Arid to Perhumid (Thoroughly Humid).
This index can be calculated as formula_2, where "Ih" and "Ia" are the humidity and aridity indexes, respectively.
Seasonal Variation of Effective Moisture.
The Seasonal Variation of Effective Moisture is described by two indexes: The Aridity Index ("Ia"), used in wet climates to identify and quantify the severity of drought conditions, and the Humidity Index ("Ih"), used in dry climates to identify and quantify the severity of wet conditions. These indexes are represented by the equations:
formula_3,
formula_4, where "D" is the annual water deficit, "S" is the annual water surplus, and "PET" is the annual potential evapotranspiration
Furthermore, these indices are represented by four letters, which indicate the seasonal distribution of precipitation: r (constantly rainy), d (constantly dry), s (summer deficit or surplus) and w (winter deficit or surplus) and two numbers to indicate the severity.
Wet climates (A, B, C2) can be classified as:
Dry climates (C1, D, E) can be classified as:
The deficiency of water in the soil is calculated as the difference between the potential evapotranspiration and the actual evapotranspiration.
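The moisture-related indices defined above (Ia, Ih and Im) can be combined in a few lines; the sketch below assumes that the annual water deficit, surplus and potential evapotranspiration are already known, and the numerical values are placeholders:

```python
# Sketch: 1948 moisture indices from an annual water budget (same units throughout).
# Ia = 100 * D / PET, Ih = 100 * S / PET, Im = Ih - 0.6 * Ia

def aridity_index(deficit, pet):
    return 100.0 * deficit / pet

def humidity_index(surplus, pet):
    return 100.0 * surplus / pet

def moisture_index(deficit, surplus, pet):
    return humidity_index(surplus, pet) - 0.6 * aridity_index(deficit, pet)

D, S, PET = 150.0, 300.0, 900.0          # placeholder annual values, e.g. in mm
print("Ia =", round(aridity_index(D, PET), 1))      # 16.7
print("Ih =", round(humidity_index(S, PET), 1))     # 33.3
print("Im =", round(moisture_index(D, S, PET), 1))  # 23.3, i.e. on the humid side
```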
Thermal efficiency.
The thermal efficiency index (TE) is defined as the annual potential evapotranspiration (PET) and has five different classifications: Megathermal, mesothermal, microthermal, tundra and perpetual ice.
Summer Concentration of Thermal Efficiency.
The Summer Concentration of Thermal Efficiency (SCTE) is a measure of the summer's potential evapotranspiration and can be calculated as formula_5, where "PET1", "PET2" and "PET3" are the estimated values of PET for the three hottest consecutive months.
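Following the same pattern, the SCTE can be estimated from twelve monthly PET values (again with placeholder numbers); in this sketch the three hottest consecutive months are approximated by the consecutive three-month window with the largest PET:

```python
# Sketch: Summer Concentration of Thermal Efficiency from monthly PET estimates.
def scte(monthly_pet):
    annual = sum(monthly_pet)
    warmest_window = max(sum(monthly_pet[i:i + 3]) for i in range(len(monthly_pet) - 2))
    return 100.0 * warmest_window / annual

monthly_pet = [20, 25, 40, 60, 90, 120, 140, 130, 95, 60, 35, 25]   # placeholder, mm
print(round(scte(monthly_pet), 1))   # share of annual PET in the warmest three months
```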
References.
| [
{
"math_id": 0,
"text": "P/E \\text{ ratio} = \\frac{\\text{total monthly precipitation}}{\\text{evapotranspiration}}"
},
{
"math_id": 1,
"text": "T-E \\text{ ratio} = \\frac{t-32}{4}"
},
{
"math_id": 2,
"text": "Im = Ih - 0.6 \\cdot Ia "
},
{
"math_id": 3,
"text": "\\mathit{Ia} = \\left( \\frac{D}{\\mathit{PET}} \\right) \\cdot 100 "
},
{
"math_id": 4,
"text": "\\mathit{Ih} = \\left( \\frac{S}{\\mathit{PET}} \\right) \\cdot 100 "
},
{
"math_id": 5,
"text": "\\mathit{SCTE} = \\left( \\frac{\\mathit{PET}1+\\mathit{PET}2+\\mathit{PET}3}{\\text{annual }\\mathit{PET}} \\right) \\cdot 100 "
}
]
| https://en.wikipedia.org/wiki?curid=68654163 |
68657567 | Biohybrid microswimmer | A biohybrid microswimmer also known as biohybrid nanorobot, can be defined as a microswimmer that consist of both biological and artificial constituents, for instance, one or several living microorganisms attached to one or various synthetic parts.
In recent years, nanoscopic and mesoscopic objects have been designed to collectively move through direct inspiration from nature or by harnessing its existing tools. Small mesoscopic to nanoscopic systems typically operate at low Reynolds numbers (Re ≪ 1), and understanding their motion becomes challenging. For locomotion to occur, the symmetry of the system must be broken.
In addition, collective motion requires a coupling mechanism between the entities that make up the collective. To develop mesoscopic to nanoscopic entities capable of swarming behaviour, it has been hypothesised that the entities are characterised by broken symmetry with a well-defined morphology, and are powered with some material capable of harvesting energy. If the harvested energy results in a field surrounding the object, then this field can couple with the field of a neighbouring object and bring some coordination to the collective behaviour. Such robotic swarms have been categorised by an online expert panel as among the 10 great unresolved group challenges in the area of robotics. Although investigation of their underlying mechanism of action is still in its infancy, various systems have been developed that are capable of undergoing controlled and uncontrolled swarming motion by harvesting energy (e.g., light, thermal, etc.).
Over the past decade, biohybrid microrobots, in which living mobile microorganisms are physically integrated with untethered artificial structures, have gained growing interest to enable the active locomotion and cargo delivery to a target destination. In addition to the motility, the intrinsic capabilities of sensing and eliciting an appropriate response to artificial and environmental changes make cell-based biohybrid microrobots appealing for transportation of cargo to the inaccessible cavities of the human body for local active delivery of diagnostic and therapeutic agents.
Background.
Biohybrid microswimmers can be defined as microswimmers that consist of both biological and artificial constituents, for instance, one or several living microorganisms attached to one or various synthetic parts. The pioneers of this field, ahead of their time, were Montemagno and Bachand with a 1999 work regarding specific attachment strategies of biological molecules to nanofabricated substrates enabling the preparation of hybrid inorganic/organic nanoelectromechanical systems, so-called NEMS. They described the production of large amounts of F1-ATPase from the thermophilic bacterium "Bacillus" PS3 for the preparation of F1-ATPase biomolecular motors immobilized on a nanoarray pattern of gold, copper or nickel produced by electron beam lithography. These proteins were attached to one-micron microspheres tagged with a synthetic peptide. Consequently, they accomplished the preparation of a platform with chemically active sites and the development of biohybrid devices capable of converting the energy of biomolecular motors into useful work.
One of the most fundamental questions in science is what defines life. Collective motion is one of the hallmarks of life. This is commonly observed in nature at various dimensional levels as energized entities gather, in a concerted effort, into motile aggregated patterns. These motile aggregated events can be noticed, among many others, as dynamic swarms; e.g., unicellular organisms such as bacteria, locust swarms, or the flocking behaviour of birds.
Ever since Newton established his equations of motion, the mystery of motion on the microscale has emerged frequently in scientific history, as famously demonstrated by a couple of articles that should be discussed briefly. First, an essential concept, popularized by Osborne Reynolds, is that the relative importance of inertia and viscosity for the motion of a fluid depends on certain details of the system under consideration. The Reynolds number "Re", named in his honor, quantifies this comparison as a dimensionless ratio of characteristic inertial and viscous forces:
formula_0
Here, "ρ" represents the density of the fluid; "u" is a characteristic velocity of the system (for instance, the velocity of a swimming particle); "l" is a characteristic length scale (e.g., the swimmer size); and "μ" is the viscosity of the fluid. Taking the suspending fluid to be water, and using experimentally observed values for "u", one can determine that inertia is important for macroscopic swimmers like fish ("Re" = 100), while viscosity dominates the motion of microscale swimmers like bacteria ("Re" = 10−4).
The overwhelming importance of viscosity for swimming at the micrometer scale has profound implications for swimming strategy. This has been discussed memorably by E. M. Purcell, who invited the reader into the world of microorganisms and theoretically studied the conditions of their motion. In the first place, propulsion strategies of large-scale swimmers often involve imparting momentum to the surrounding fluid in periodic events, such as vortex shedding, and coasting between these events through inertia. This cannot be effective for microscale swimmers like bacteria: due to the large viscous damping, the inertial coasting time of a micron-sized object is on the order of 1 μs. The coasting distance of a microorganism moving at a typical speed is about 0.1 angstroms (Å). Purcell concluded that only forces that are exerted in the present moment on a microscale body contribute to its propulsion, so a constant energy conversion method is essential.
Microorganisms have optimized their metabolism for continuous energy production, while purely artificial microswimmers (microrobots) must obtain energy from the environment, since their on-board storage capacity is very limited. As a further consequence of the continuous dissipation of energy, biological and artificial microswimmers do not obey the laws of equilibrium statistical physics, and need to be described by non-equilibrium dynamics. Mathematically, Purcell explored the implications of low Reynolds number by taking the Navier-Stokes equation and eliminating the inertial terms:
formula_1
where formula_2 is the velocity of the fluid and formula_3 is the gradient of the pressure. As Purcell noted, the resulting equation — the Stokes equation — contains no explicit time dependence. This has some important consequences for how a suspended body (e.g., a bacterium) can swim through periodic mechanical motions or deformations (e.g., of a flagellum). First, the rate of motion is practically irrelevant for the motion of the microswimmer and of the surrounding fluid: changing the rate of motion will change the scale of the velocities of the fluid and of the microswimmer, but it will not change the pattern of fluid flow. Secondly, reversing the direction of mechanical motion will simply reverse all velocities in the system. These properties of the Stokes equation severely restrict the range of feasible swimming strategies.
Recent publications of biohybrid microswimmers include the use of sperm cells, contractive muscle cells, and bacteria as biological components, as they can efficiently convert chemical energy into movement, and additionally are capable of performing complicated motion depending on environmental conditions. In this sense, biohybrid microswimmer systems can be described as the combination of different functional components: cargo and carrier. The cargo is an element of interest to be moved (and possibly released) in a customized way. The carrier is the component responsible for the movement of the biohybrid, transporting the desired cargo, which is linked to its surface. The great majority of these systems rely on biological motile propulsion for the transportation of synthetic cargo for targeted drug delivery. There are also examples of the opposite case: artificial microswimmers with biological cargo systems.
Over the past decade, biohybrid microrobots, in which living mobile microorganisms are physically integrated with untethered artificial structures, have gained growing interest to enable the active locomotion and cargo delivery to a target destination. In addition to the motility, the intrinsic capabilities of sensing and eliciting an appropriate response to artificial and environmental changes make cell-based biohybrid microrobots appealing for transportation of cargo to the inaccessible cavities of the human body for local active delivery of diagnostic and therapeutic agents. Active locomotion, targeting and steering of concentrated therapeutic and diagnostic agents embedded in mobile microrobots to the site of action can overcome the existing challenges of conventional therapies. To this end, bacteria have been commonly used with attached beads and ghost cell bodies.
Bacterial biohybrids.
Artificial micro and nanoswimmers are small scale devices that convert energy into movement. Since the first demonstration of their performance in 2002, the field has developed rapidly in terms of new preparation methodologies, propulsion strategies, motion control, and envisioned functionality. The field holds promise for applications such as drug delivery, environmental remediation and sensing. The initial focus of the field was largely on artificial systems, but an increasing number of "biohybrids" are appearing in the literature. Combining artificial and biological components is a promising strategy to obtain new, well-controlled microswimmer functionalities, since essential functions of living organisms are intrinsically related to the capability to move. Living beings of all scales move in response to environmental stimuli (e.g., temperature or pH), to look for food sources, to reproduce, or to escape from predators. One of the more well-known living microsystems are swimming bacteria, but directed motion occurs even at the molecular scale, where enzymes and proteins undergo conformational changes in order to carry out biological tasks.
Swimming bacterial cells have been used in the development of hybrid microswimmers. Cargo attachment to the bacterial cells might influence their swimming behavior. Bacterial cells in the swarming state have also been used in the development of hybrid microswimmers. Swarming "Serratia marcescens" cells were transferred to PDMS-coated coverslips, resulting in a structure referred to as a "bacterial carpet" by the authors. Differently shaped flat fragments of these bacterial carpets, termed "auto-mobile chips", moved above the surface of the microscope slide in two dimensions. Many other works have used "Serratia marcescens" swarming cells, as well as "E. coli" swarming cells, for the development of hybrid microswimmers. Magnetotactic bacteria have been the focus of different studies due to their versatile uses in biohybrid motion systems.
Protist biohybrids.
Algal.
"Chlamydomonas reinhardtii" is a unicellular green microalga. The wild-type "C. reinhardtii" has a spherical shape that averages about 10 μm in diameter. This microorganism can perceive the visible light and be steered by it (i.e., phototaxis) with high swimming speeds in the range of 100–200 μm s−1. It has natural autofluorescence that permits label-free fluorescent imaging. "C. reinhardtii" has been actively explored as the live component of biohybrid microrobots for the active delivery of therapeutics. They are biocompatible with healthy mammalian cells, leave no known toxins, mobile in the physiologically relevant media, and allow for surface modification to carry cargo on the cell wall. Alternative attachment strategies for "C. reinhardtii" have been proposed for the assembly through modifying the interacting surfaces by electrostatic interactions and covalent bonding.
Robocoliths.
Collective motion is one of the hallmarks of life. In contrast to what is accomplished individually, multiple entities enable local interactions between each participant to occur in proximity. If we consider each participant in the collective behaviour as a (bio)physical transducer, then the energy will be converted from one type into another. The proxemics will then favour enhanced communication between neighbouring individuals via transduction of energy, leading to dynamic and complex synergetic behaviours of the composite powered structure.
In recent years, nanoscopic and mesoscopic objects have been designed to collectively move through direct inspiration from nature or by harnessing its existing tools. Such robotic swarms were categorised by an online expert panel as among the 10 great unresolved group challenges in the area of robotics. Although investigation of their underlying mechanism of action is still in its infancy, various systems have been developed that are capable of undergoing controlled and uncontrolled swarming motion by harvesting energy (e.g., light, thermal, etc.). Importantly, this energy should be transformed into a net force for the system to move.
Small mesoscopic to nanoscopic systems typically operate at low Reynolds numbers (Re ≪ 1), and understanding their motion becomes challenging. For locomotion to occur, the symmetry of the system must be broken. In addition, collective motion requires a coupling mechanism between the entities that make up the collective.
To develop mesoscopic to nanoscopic entities capable of swarming behaviour, it has been hypothesised that the entities are characterised by broken symmetry with a well-defined morphology, and are powered with some material capable of harvesting energy. If the harvested energy results in a field surrounding the object, then this field can couple with the field of a neighbouring object and bring some coordination to the collective behaviour.
"Emiliania huxleyi" (EHUX) coccolithophore-derived asymmetric coccoliths stand out as candidates for the choice of a nano/mesoscopic object with broken symmetry and well-defined morphology. Besides the thermodynamical stability because of their calcite composition, the critical advantage of EHUX coccoliths is their distinctive and sophisticated asymmetric morphology. EHUX coccoliths are characterised by several hammer-headed ribs placed to form a proximal and distal disc connected by a central ring. These discs have different sizes but also allow the coccolith to have a curvature, partly resembling a wagon wheel. EHUX coccoliths can be isolated from EHUX coccolithophores, a unique group of unicellular marine algae that are the primary producers of biogenic calcite in the ocean. Coccolithophores can intracellularly produce intricate three-dimensional mineral structures, such as calcium carbonate scales (i.e., coccoliths), in a process that is driven continuously by a specialized vesicle.
After the process is finished, the formed coccoliths are secreted to the cell surface, where they form the exoskeleton (i.e., coccosphere). The broad diversity of coccolith architecture results in further possibilities for specific applications in nanotechnology or biomedicine. Inanimate coccoliths from EHUX live coccolithophores, in particular, can be isolated easily in the laboratory with a low culture cost and fast reproductive rate and have a reasonably moderate surface area (~20 m2/g) exhibiting a mesoporous structure (pore size in the range of 4 nm).
Presumably, if harvesting of energy is done on both sides of the EHUX coccolith, then it will allow generation of a net force, which means movement in a directional manner. Coccoliths have immense potential for a multitude of applications, but to enable harvesting of energy, their surface properties must be finely tuned. Inspired by the composition of adhesive proteins in mussels, dopamine self-polymerization into polydopamine is currently the most versatile functionalization strategy for virtually all types of materials. Because of its surface chemistry and wide range of light absorption properties, polydopamine is an ideal choice for an aided energy harvesting function on inert substrates. Polydopamine coating can therefore be exploited to provide advanced energy-harvesting functionalities to the otherwise inert and inanimate coccoliths. Polydopamine (PDA) has already been shown to induce movement of polystyrene beads because of thermal diffusion effects between the object and the surrounding aqueous solution of up to 2 °C under near-infrared (NIR) light excitation. However, no collective behavior has been reported. It has been shown that polydopamine can act as an active component to induce, under visible light (300–600 nm), collective behavior of a structurally complex, natural, and challenging-to-control architecture such as coccoliths. As a result, the organic-inorganic hybrid combination (coccolith-polydopamine) enables the design of Robocoliths.
Dopamine polymerization proceeds in a solution, where it forms small colloidal aggregates that adsorb on the surface of the coccoliths, forming a confluent film. This film is characterized by high roughness, which translates into a high specific surface area and enhanced harvesting of energy. Because of the conjugated nature of the polymer backbone, polydopamine can absorb light over a broad electromagnetic spectrum, including the visible region.
As a result, the surface of coccoliths is endowed with a photothermal effect, locally heating and creating convection induced by the presence of PDA. This local convection is coupled with another nearby local convection, which allows coupling between individual Robocoliths, enabling their collective motion.
Therefore, when the light encounters the anisometric Robocoliths, they heat locally because of the photothermal conversion induced by the presence of PDA on their surface. The intense local heating produces convection that is different on either side of the Robocolith, causing its observed movement. Such convection can couple with the convection of a neighboring Robocolith, resulting in a "swarming" motion. In addition, the surface of Robocoliths is engineered to accommodate antifouling polymer brushes and potentially prevent their aggregation. Although a primary light-activated convective approach is taken as a first step to understand the motion of Robocoliths, a multitude of mechanistic approaches are currently being developed to pave the way for the next generation of multifunctional Robocoliths as swarming bio-micromachines.
Biomedical applications.
Biohybrid microswimmers, mainly composed of integrated biological actuators and synthetic cargo carriers, have recently shown promise toward minimally invasive theranostic applications. Various microorganisms, including bacteria, microalgae, and spermatozoids, have been utilised to fabricate different biohybrid microswimmers with advanced medical functionalities, such as autonomous control with environmental stimuli for targeting, navigation through narrow gaps, and accumulation to necrotic regions of tumor environments. Steerability of the synthetic cargo carriers with long-range applied external fields, such as acoustic or magnetic fields, and intrinsic taxis behaviours of the biological actuators toward various environmental stimuli, such as chemoattractants, pH, and oxygen, make biohybrid microswimmers a promising candidate for a broad range of medical active cargo delivery applications.
Bacteria have a high swimming speed and efficiency in the low Reynolds (Re) number flow regime, are capable of sensing and responding to external environmental signals, and could be externally detected via fluorescence or ultrasound imaging techniques. Due to their inherent sensing capabilities, various bacteria species have been investigated as potential anti-tumor agents and have been the subject of preclinical and clinical trials. The presence of different bacteria species in the human body, such as on the skin and the gut microenvironment, has promoted their use as potential theranostic agents or carriers in several medical applications.
On the other hand, specialised eukaryotic cells, such as red blood cells (RBCs), are among nature's most efficient passive carriers, with high payload efficiency, deformability, degradability, and biocompatibility, and have also been used in various medical applications. RBCs and RBC-derived carriers, such as nanoerythrosomes, have been successfully adopted as passive cargo carriers to enhance the circulation time of the applied substances in the body, and to deliver different bioactive substances for the treatment of various diseases observed in the liver, spleen and lymph nodes, and also cancer, administered through intravenous, intraperitoneal, subcutaneous, and inhalational routes. For instance, decreased recognition of drug-loaded particles by immune cells was shown when they were attached to the membranes of RBCs prior to intravenous injection into mice. Additionally, an altered bioaccumulation profile of nanocarriers was shown when they were conjugated onto RBCs, boosting the delivery of nanocarriers to the target organs. It was also reported that the half-life of Fasudil, a drug for pulmonary arterial hypertension, inside the body increased approximately sixfold to eightfold when it was loaded into nanoerythrosomes.
Superior cargo-carrying properties of the RBCs have also generated increased interest for their use in biohybrid microswimmer designs. Recently, active navigation and control of drug and superparamagnetic nanoparticle (SPION)-loaded RBCs were presented using sound waves and magnetic fields. RBCs were further utilized in the fabrication of soft biohybrid microswimmers powered by motile bacteria for active cargo delivery applications. RBCs, loaded with drug molecules and SPIONs, were propelled by bacteria and steered via magnetic fields, which were also capable of traveling through gaps smaller than their size due to the inherent high deformability of the RBCs.
References.
| [
{
"math_id": 0,
"text": "\\mathrm{Re} = \\frac{\\rho u l}{\\mu}"
},
{
"math_id": 1,
"text": " \\begin{align} \\mu \\nabla^2 \\mathbf{u} -\\boldsymbol{\\nabla}p &= \\boldsymbol{0} \\\\ \\end{align}"
},
{
"math_id": 2,
"text": "\\mathbf{u}"
},
{
"math_id": 3,
"text": "\\boldsymbol{\\nabla} p"
}
]
| https://en.wikipedia.org/wiki?curid=68657567 |
6865890 | Vertex (curve) | Point of extreme curvature on a curve
In the geometry of plane curves, a vertex is a point where the first derivative of curvature is zero. This is typically a local maximum or minimum of curvature, and some authors define a vertex to be more specifically a local extremum of curvature. However, other special cases may occur, for instance when the second derivative is also zero, or when the curvature is constant. For space curves, on the other hand, a vertex is a point where the torsion vanishes.
Examples.
A hyperbola has two vertices, one on each branch; they are the pair of points, one on each branch, that lie closest to each other, and they lie on the principal axis. On a parabola, the sole vertex lies on the axis of symmetry; for a quadratic of the form:
formula_0
it can be found by completing the square or by differentiation. On an ellipse, two of the four vertices lie on the major axis and two lie on the minor axis.
For a circle, which has constant curvature, every point is a vertex.
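In the parabola case above, locating the vertex by differentiation amounts to solving dy/dx = 0; a short symbolic check (using SymPy, as an illustrative sketch) recovers the familiar x = −b/(2a):

```python
# Locate the vertex of y = a*x**2 + b*x + c by setting dy/dx = 0.
from sympy import symbols, diff, solve

a, b, c, x = symbols("a b c x", real=True)
y = a * x**2 + b * x + c

x_vertex = solve(diff(y, x), x)[0]         # gives -b/(2*a), assuming a != 0
y_vertex = y.subs(x, x_vertex).simplify()  # gives c - b**2/(4*a)
print(x_vertex, y_vertex)
```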
Cusps and osculation.
Vertices are points where the curve has 4-point contact with the osculating circle at that point. In contrast, generic points on a curve typically only have 3-point contact with their osculating circle. The evolute of a curve will generically have a cusp when the curve has a vertex; other, more degenerate and non-stable singularities may occur at higher-order vertices, at which the osculating circle has contact of higher order than four. Although a single generic curve will not have any higher-order vertices, they will generically occur within a one-parameter family of curves, at the curve in the family for which two ordinary vertices coalesce to form a higher vertex and then annihilate.
The symmetry set of a curve has endpoints at the cusps corresponding to the vertices, and the medial axis, a subset of the symmetry set, also has its endpoints in the cusps.
Other properties.
According to the classical four-vertex theorem, every simple closed planar smooth curve must have at least four vertices. A more general fact is that every simple closed space curve which lies on the boundary of a convex body, or even bounds a locally convex disk, must have four vertices. Every curve of constant width must have at least six vertices.
If a planar curve is bilaterally symmetric, it will have a vertex at the point or points where the axis of symmetry crosses the curve. Thus, the notion of a vertex for a curve is closely related to that of an optical vertex, the point where an optical axis crosses a lens surface.
Notes.
| [
{
"math_id": 0,
"text": "ax^2 + bx + c\\,\\!"
}
]
| https://en.wikipedia.org/wiki?curid=6865890 |
6866265 | Affine shape adaptation | Affine shape adaptation is a methodology for iteratively adapting the shape of the smoothing kernels in an affine group of smoothing kernels to the local image structure in neighbourhood region of a specific image point. Equivalently, affine shape adaptation can be accomplished by iteratively warping a local image patch with affine transformations while applying a rotationally symmetric filter to the warped image patches. Provided that this iterative process converges, the resulting fixed point will be "affine invariant". In the area of computer vision, this idea has been used for defining affine invariant interest point operators as well as affine invariant texture analysis methods.
Affine-adapted interest point operators.
The interest points obtained from the scale-adapted Laplacian blob detector or the multi-scale Harris corner detector with automatic scale selection are invariant to translations, rotations and uniform rescalings in the spatial domain. The images that constitute the input to a computer vision system are, however, also subject to perspective distortions. To obtain interest points that are more robust to perspective transformations, a natural approach is to devise a feature detector that is "invariant to affine transformations".
Affine invariance can be accomplished from measurements of the same multi-scale windowed second moment matrix formula_0 as is used in the multi-scale Harris operator provided that we extend the regular scale space concept obtained by convolution with rotationally symmetric Gaussian kernels to an "affine Gaussian scale-space" obtained by shape-adapted Gaussian kernels. For a two-dimensional image formula_1, let formula_2 and let formula_3 be a positive definite 2×2 matrix. Then, a non-uniform Gaussian kernel can be defined as
formula_4
and given any input image formula_5 the affine Gaussian scale-space is the three-parameter scale-space defined as
formula_6
Next, introduce an affine transformation formula_7 where formula_8 is a 2×2-matrix, and define a transformed image formula_9 as
formula_10.
Then, the affine scale-space representations formula_11 and formula_12 of formula_5 and formula_9, respectively, are related according to
formula_13
provided that the affine shape matrices formula_14 and formula_15 are related according to
formula_16.
Disregarding mathematical details, which unfortunately become somewhat technical if one aims at a precise description of what is going on, the important message is that "the affine Gaussian scale-space is closed under affine transformations".
If we, given the notation formula_17 as well as local shape matrix formula_3 and an integration shape matrix formula_18, introduce an "affine-adapted multi-scale second-moment matrix" according to
formula_19
it can be shown that under any affine transformation formula_20 the affine-adapted multi-scale second-moment matrix transforms according to
formula_21.
Again, disregarding somewhat messy technical details, the important message here is that "given a correspondence between the image points formula_22 and formula_23, the affine transformation formula_8 can be estimated from measurements of the multi-scale second-moment matrices formula_24 and formula_25 in the two domains".
An important consequence of this study is that if we can find an affine transformation formula_8 such that formula_25 is a constant times the unit matrix, then we obtain a "fixed-point that is invariant to affine transformations". For the purpose of practical implementation, this property can often be reached in either of two main ways. The first approach is based on "transformations of the smoothing filters" and consists of:
The second approach is based on "warpings in the image domain" and implies:
This overall process is referred to as "affine shape adaptation". In the ideal continuous case, the two approaches are mathematically equivalent. In practical implementations, however, the first filter-based approach is usually more accurate in the presence of noise, while the second warping-based approach is usually faster.
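A rough NumPy/SciPy sketch of the warping-based variant is shown below. It is an illustration under simplifying assumptions (fixed derivative and integration scales, no automatic scale selection, arbitrarily chosen parameter values and function names), not the exact algorithm of the cited papers:

```python
# Sketch: warping-based affine shape adaptation at a single image point.
# Iterate until the windowed second-moment matrix at the point is nearly isotropic.
import numpy as np
from scipy import ndimage

def second_moment_matrix(img, point, sigma_d=1.5, sigma_i=3.0):
    """Windowed second-moment matrix at a pixel, in (row, col) index coordinates."""
    L0 = ndimage.gaussian_filter(img, sigma_d, order=(1, 0))  # derivative along rows
    L1 = ndimage.gaussian_filter(img, sigma_d, order=(0, 1))  # derivative along columns
    J00 = ndimage.gaussian_filter(L0 * L0, sigma_i)           # integration smoothing
    J01 = ndimage.gaussian_filter(L0 * L1, sigma_i)
    J11 = ndimage.gaussian_filter(L1 * L1, sigma_i)
    r, c = point
    return np.array([[J00[r, c], J01[r, c]], [J01[r, c], J11[r, c]]])

def adapt_shape(img, point, n_iter=10, tol=1.05):
    """Return an affine (shape) matrix B that makes the local structure isotropic."""
    img = img.astype(float)
    p = np.array(point, dtype=float)
    B = np.eye(2)                          # accumulated shape transformation
    warped = img
    for _ in range(n_iter):
        mu = second_moment_matrix(warped, point)
        w, V = np.linalg.eigh(mu)
        if w.min() <= 0 or w.max() / w.min() < tol:
            break                          # flat region, or fixed point reached
        C = V @ np.diag(np.sqrt(w)) @ V.T  # mu^(1/2)
        C /= np.sqrt(np.linalg.det(C))     # normalise so that the area is preserved
        B = C @ B
        Binv = np.linalg.inv(B)
        # Resample the original image so that warped(o) = img(B^(-1) o), fixing the point p.
        warped = ndimage.affine_transform(img, Binv, offset=p - Binv @ p)
    return B

# Example: an elongated Gaussian blob should be adapted towards an isotropic one.
yy, xx = np.mgrid[0:64, 0:64]
blob = np.exp(-(((yy - 32) / 4.0) ** 2 + ((xx - 32) / 10.0) ** 2))
print(adapt_shape(blob, (32, 32)))
```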
In practice, the affine shape adaptation process described here is often combined with interest point detection with automatic scale selection, as described in the articles on blob detection and corner detection, to obtain interest points that are invariant to the full affine group, including scale changes. Besides the commonly used multi-scale Harris operator, this affine shape adaptation can also be applied to other types of interest point operators such as the Laplacian/Difference of Gaussian blob operator and the determinant of the Hessian. Affine shape adaptation can also be used for affine invariant texture recognition and affine invariant texture segmentation.
Closely related to the notion of affine shape adaptation is the notion of "affine normalization", which defines an "affine invariant reference frame" as further described in Lindeberg (2013a,b, 2021:Appendix I.3), such that any image measurement performed in the affine invariant reference frame is affine invariant.
References.
| [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "I"
},
{
"math_id": 2,
"text": "\\bar{x} = (x, y)^T"
},
{
"math_id": 3,
"text": "\\Sigma_t"
},
{
"math_id": 4,
"text": "g(\\bar{x}; \\Sigma) = \\frac{1}{2 \\pi \\sqrt{\\operatorname{det} \\Sigma_t}} e^{-\\bar{x} \\Sigma_t^{-1} \\bar{x}/2}"
},
{
"math_id": 5,
"text": "I_L"
},
{
"math_id": 6,
"text": "L(\\bar{x}; \\Sigma_t) = \\int_{\\bar{xi}} I_L(x-\\xi) \\, g(\\bar{\\xi}; \\Sigma_t) \\, d\\bar{\\xi}."
},
{
"math_id": 7,
"text": "\\eta = B \\xi"
},
{
"math_id": 8,
"text": "B"
},
{
"math_id": 9,
"text": "I_R"
},
{
"math_id": 10,
"text": "I_L(\\bar{\\xi}) = I_R(\\bar{\\eta})"
},
{
"math_id": 11,
"text": "L"
},
{
"math_id": 12,
"text": "R"
},
{
"math_id": 13,
"text": "L(\\bar{\\xi}, \\Sigma_L) = R(\\bar{\\eta}, \\Sigma_R)"
},
{
"math_id": 14,
"text": "\\Sigma_L"
},
{
"math_id": 15,
"text": "\\Sigma_R"
},
{
"math_id": 16,
"text": "\\Sigma_R = B \\Sigma_L B^T"
},
{
"math_id": 17,
"text": "\\nabla L = (L_x, L_y)^T"
},
{
"math_id": 18,
"text": "\\Sigma_s"
},
{
"math_id": 19,
"text": "\\mu_L(\\bar{x}; \\Sigma_t, \\Sigma_s) = g(\\bar{x} - \\bar{\\xi}; \\Sigma_s) \\, \\left( \\nabla_L(\\bar{\\xi}; \\Sigma_t) \\nabla_L^T(\\bar{\\xi}; \\Sigma_t) \\right)"
},
{
"math_id": 20,
"text": "\\bar{q} = B \\bar{p}"
},
{
"math_id": 21,
"text": "\\mu_L(\\bar{p}; \\Sigma_t, \\Sigma_s) = B^T \\mu_R(\\bar{q}; B \\Sigma_t B^T, B \\Sigma_s B^T) B"
},
{
"math_id": 22,
"text": "\\bar{p}"
},
{
"math_id": 23,
"text": "\\bar{q} "
},
{
"math_id": 24,
"text": "\\mu_L"
},
{
"math_id": 25,
"text": "\\mu_R"
},
{
"math_id": 26,
"text": "\\mu^{-1}"
},
{
"math_id": 27,
"text": "\\hat{B} = \\mu^{1/2}"
},
{
"math_id": 28,
"text": "\\mu^{1/2}"
},
{
"math_id": 29,
"text": "\\hat{B}^{-1}"
}
]
| https://en.wikipedia.org/wiki?curid=6866265 |
6866642 | Threshold cryptosystem | A threshold cryptosystem, the basis for the field of threshold cryptography, is a cryptosystem that protects information by encrypting it and distributing it among a cluster of fault-tolerant computers. The message is encrypted using a public key, and the corresponding private key is shared among the participating parties. With a threshold cryptosystem, in order to decrypt an encrypted message or to sign a message, several parties (more than some threshold number) must cooperate in the decryption or signature protocol.
History.
Perhaps the first system with complete threshold properties for a trapdoor function (such as RSA) and a proof of security was published in 1994 by Alfredo De Santis, Yvo Desmedt, Yair Frankel, and Moti Yung.
Historically, only organizations with very valuable secrets, such as certificate authorities, the military, and governments made use of this technology. One of the earliest implementations was done in the 1990s by Certco for the planned deployment of the original Secure electronic transaction.
However, in October 2012, after a number of large public website password ciphertext compromises, RSA Security announced that it would release software to make the technology available to the general public.
In March 2019, the National Institute of Standards and Technology (NIST) conducted a workshop on threshold cryptography to establish consensus on applications, and define specifications. In July 2020, NIST published "Roadmap Toward Criteria for Threshold Schemes for Cryptographic Primitives" as NISTIR 8214A.
Methodology.
Let formula_0 be the number of parties. Such a system is called "(t,n)"-threshold, if at least "t" of these parties can efficiently decrypt the ciphertext, while fewer than "t" have no useful information. Similarly it is possible to define a "(t,n)"-threshold signature scheme, where at least "t" parties are required for creating a signature.
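To make the "(t,n)" idea concrete, the sketch below implements Shamir secret sharing over a prime field, the standard building block for splitting secret key material among "n" parties so that any "t" of them can reconstruct it. This is an illustrative sketch only (it omits the actual distributed decryption or signing protocol, and the modulus and function names are arbitrary choices):

```python
# Sketch: (t, n)-threshold sharing of a secret integer via Shamir secret sharing.
# Any t shares reconstruct the secret; fewer than t reveal nothing about it.
import random

P = 2**127 - 1   # a prime modulus larger than the secret (illustrative choice)

def split(secret, t, n):
    """Return n shares (i, f(i)) of a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from at least t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, t=3, n=5)
print(reconstruct(shares[:3]))    # any 3 of the 5 shares recover 123456789
print(reconstruct(shares[1:4]))   # a different subset of 3 shares also works
```

In a full threshold cryptosystem the reconstruction step is avoided: the parties use their shares to produce partial decryptions or partial signatures that are then combined, so the private key is never reassembled in one place.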
Application.
The most common application is in the storage of secrets in multiple locations to prevent the capture of the secret and the subsequent cryptanalysis of that system. Most often the secrets that are "split" are the secret key material of a public-key cryptosystem or of a digital signature scheme. The method primarily enforces that the decryption or signing operation can take place only if a threshold of the secret sharers cooperates (otherwise the operation is not performed). This makes the method a primary trust-sharing mechanism, besides its safe-storage aspects.
Derivatives of asymmetric cryptography.
Threshold versions of encryption or signature schemes can be built for many asymmetric cryptographic schemes. The natural goal of such schemes is to be as secure as the original scheme. Such threshold versions have been defined by the above and by the following:
References.
| [
{
"math_id": 0,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=6866642 |
68668447 | Kelly's ZnS | Kelly's formula_0 is a test statistic that can be used to test a genetic region for deviations from the neutral model, based on the squared correlation of allelic identity between loci.
Details.
Given loci formula_1 and formula_2, the linkage disequilibrium between these loci, formula_3, is defined as
formula_4
where formula_5 is the frequency with which the alternative alleles at formula_1 and formula_2 co-occur, and formula_6 and formula_7 are the frequencies of the alternative allele at formula_1 and formula_2, respectively.
A standardised measure of this is formula_8, the squared correlation of allelic identity between loci formula_1 and formula_2:
formula_9
The statistic formula_0 averages formula_8 over all pairwise combinations of the S loci:
formula_10
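As an illustration of these formulas, the sketch below computes formula_0 from a matrix of 0/1 haplotypes (rows are sampled sequences, columns are the S biallelic loci); the function name and toy data are chosen here only for illustration, and every locus is assumed to be polymorphic:

```python
# Sketch: Kelly's Z_nS from a haplotype matrix of 0/1 alleles.
# Rows are sampled sequences, columns are biallelic loci; 1 marks the alternative allele.
import itertools
import numpy as np

def kelly_zns(haplotypes):
    h = np.asarray(haplotypes, dtype=float)
    n_seq, S = h.shape
    p = h.mean(axis=0)                       # alternative-allele frequency at each locus
    total = 0.0
    for i, j in itertools.combinations(range(S), 2):
        p_ij = np.mean(h[:, i] * h[:, j])    # frequency of joint occurrence
        D = p_ij - p[i] * p[j]               # linkage disequilibrium D_ij
        denom = p[i] * (1 - p[i]) * p[j] * (1 - p[j])
        total += D**2 / denom                # squared correlation delta_ij
    return 2.0 * total / (S * (S - 1))

# Toy example: four sequences, three loci, all pairs in complete linkage disequilibrium.
haps = [[1, 1, 0],
        [1, 1, 0],
        [0, 0, 1],
        [0, 0, 1]]
print(kelly_zns(haps))   # prints 1.0
```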
Usage.
Inflated formula_0 values indicate a deviation from the neutral model and can be used as a potential signature of past selection.
References.
| [
{
"math_id": 0,
"text": "Z_{nS} "
},
{
"math_id": 1,
"text": "i "
},
{
"math_id": 2,
"text": "j "
},
{
"math_id": 3,
"text": "D_{ij} "
},
{
"math_id": 4,
"text": "D_{ij} = p_{ij}-p_ip_j "
},
{
"math_id": 5,
"text": "p_{ij} "
},
{
"math_id": 6,
"text": "p_{i} "
},
{
"math_id": 7,
"text": "p_{j} "
},
{
"math_id": 8,
"text": "\\delta_{ij} "
},
{
"math_id": 9,
"text": "\\delta_{ij} = \\frac{D_{ij}^2}{p_i(1 - p_i)p_j (1-p_j)}"
},
{
"math_id": 10,
"text": "Z_{nS} = \\frac{2}{S(S-1)}\\sum_{i=1}^{S-1} \\sum_{j=i+1}^{S} \\delta_{ij} "
}
]
| https://en.wikipedia.org/wiki?curid=68668447 |
6867 | Context-free language | Formal language generated by context-free grammar
In formal language theory, a context-free language (CFL), also called a Chomsky type-2 language, is a language generated by a context-free grammar (CFG).
Context-free languages have many applications in programming languages, in particular, most arithmetic expressions are generated by context-free grammars.
Background.
Context-free grammar.
Different context-free grammars can generate the same context-free language. Intrinsic properties of the language can be distinguished from extrinsic properties of a particular grammar by comparing multiple grammars that describe the language.
Automata.
The set of all context-free languages is identical to the set of languages accepted by pushdown automata, which makes these languages amenable to parsing. Further, for a given CFG, there is a direct way to produce a pushdown automaton for the grammar (and thereby the corresponding language), though going the other way (producing a grammar given an automaton) is not as direct.
Examples.
An example context-free language is formula_0, the language of all non-empty even-length strings, the entire first halves of which are a's, and the entire second halves of which are b's. L is generated by the grammar formula_1.
This language is not regular.
It is accepted by the pushdown automaton formula_2 where formula_3 is defined as follows:
formula_4
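The transition function formula_3 given above can be simulated directly; the following sketch (an illustrative implementation with names chosen here, not code from the article's sources) accepts exactly the strings of the form a^n b^n with n ≥ 1:

```python
# Sketch: simulate the pushdown automaton for L = { a^n b^n : n >= 1 }.
# States: q0 (reading a's), q1 (reading b's); 'z' is the bottom-of-stack symbol.

def accepts(word):
    state, stack = "q0", ["z"]
    for ch in word:
        top = stack[-1] if stack else None
        if state == "q0" and ch == "a" and top in ("z", "a"):
            stack.append("a")                 # delta(q0, a, z) and delta(q0, a, a): push a
        elif state == "q0" and ch == "b" and top == "a":
            state = "q1"; stack.pop()         # delta(q0, b, a) = (q1, epsilon): pop
        elif state == "q1" and ch == "b" and top == "a":
            stack.pop()                       # delta(q1, b, a) = (q1, epsilon): pop
        else:
            return False
    # epsilon-move delta(q1, epsilon, z) = (qf, epsilon): accept if only 'z' remains.
    return state == "q1" and stack == ["z"]

print([w for w in ["ab", "aabb", "aaabbb", "", "aab", "abab", "ba"] if accepts(w)])
# prints ['ab', 'aabb', 'aaabbb']
```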
Unambiguous CFLs are a proper subset of all CFLs: there are inherently ambiguous CFLs. An example of an inherently ambiguous CFL is the union of formula_5 with formula_6. This set is context-free, since the union of two context-free languages is always context-free. But there is no way to unambiguously parse strings in the (non-context-free) subset formula_7 which is the intersection of these two languages.
Dyck language.
The language of all properly matched parentheses is generated by the grammar formula_8.
Properties.
Context-free parsing.
The context-free nature of the language makes it simple to parse with a pushdown automaton.
Determining an instance of the membership problem (i.e., given a string formula_9, deciding whether formula_10, where formula_11 is the language generated by a given grammar formula_12) is also known as "recognition". Context-free recognition for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to boolean matrix multiplication, thus inheriting its complexity upper bound of "O"("n"^2.3728596).
Conversely, Lillian Lee has shown "O"("n"^(3−ε)) boolean matrix multiplication to be reducible to "O"("n"^(3−3ε)) CFG parsing, thus establishing some kind of lower bound for the latter.
Practical uses of context-free languages require also to produce a derivation tree that exhibits the structure that the grammar associates with the given string. The process of producing this tree is called "parsing". Known parsers have a time complexity that is cubic in the size of the string that is parsed.
Formally, the set of all context-free languages is identical to the set of languages accepted by pushdown automata (PDA). Parser algorithms for context-free languages include the CYK algorithm and Earley's Algorithm.
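As a hedged sketch of one such recognition algorithm, the following compact CYK implementation works on a grammar in Chomsky normal form; the dictionary-based grammar encoding and the CNF conversion of the example grammar formula_1 are illustrative assumptions.

```python
def cyk(word, start, unary, binary):
    """CYK membership test for a CNF grammar.

    unary:  maps a terminal to the set of nonterminals deriving it.
    binary: maps a pair (B, C) to the set of nonterminals A with A -> B C.
    """
    n = len(word)
    if n == 0:
        return False
    # table[i][j] holds the nonterminals deriving word[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = set(unary.get(ch, ()))
    for length in range(2, n + 1):              # length of the span
        for i in range(n - length + 1):         # start of the span
            for split in range(1, length):      # split into two sub-spans
                for B in table[i][split - 1]:
                    for C in table[i + split][length - split - 1]:
                        table[i][length - 1] |= binary.get((B, C), set())
    return start in table[0][n - 1]

# CNF version of S -> aSb | ab:  S -> A X | A B,  X -> S B,  A -> a,  B -> b
unary = {"a": {"A"}, "b": {"B"}}
binary = {("A", "X"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"X"}}
assert cyk("aabb", "S", unary, binary) and not cyk("abab", "S", unary, binary)
```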
A special subclass of context-free languages are the deterministic context-free languages which are defined as the set of languages accepted by a deterministic pushdown automaton and can be parsed by a LR(k) parser.
See also parsing expression grammar as an alternative approach to grammar and parser.
Closure properties.
The class of context-free languages is closed under the following operations. That is, if "L" and "P" are context-free languages, the following languages are context-free as well: the union formula_13, the concatenation formula_14, the Kleene star formula_15, the image formula_16 under a homomorphism formula_17, the inverse homomorphic image formula_18 under a homomorphism formula_19, and the cyclic shift formula_20.
Nonclosure under intersection, complement, and difference.
The context-free languages are not closed under intersection. This can be seen by taking the languages formula_21 and formula_22, which are both context-free. Their intersection is formula_23, which can be shown to be non-context-free by the pumping lemma for context-free languages. As a consequence, context-free languages cannot be closed under complementation, as for any languages "A" and "B", their intersection can be expressed by union and complement: formula_24. In particular, context-free language cannot be closed under difference, since complement can be expressed by difference: formula_25.
However, if "L" is a context-free language and "D" is a regular language then both their intersection formula_26 and their difference formula_27 are context-free languages.
Decidability.
In formal language theory, questions about regular languages are usually decidable, but ones about context-free languages are often not. It is decidable whether such a language is finite, but not whether it contains every possible string, is regular, is unambiguous, or is equivalent to a language with a different grammar.
The following problems are undecidable for arbitrarily given context-free grammars A and B: equivalence (does formula_28 hold?), disjointness (does formula_29 hold?), containment (does formula_30 hold?), and universality (does formula_31 hold?).
The following problems are "decidable" for arbitrary context-free languages:
According to Hopcroft, Motwani, and Ullman (2003), many of the fundamental closure and (un)decidability properties of context-free languages were shown in the 1961 paper of Bar-Hillel, Perles, and Shamir.
Languages that are not context-free.
The set formula_7 is a context-sensitive language, but there does not exist a context-free grammar generating this language. So there exist context-sensitive languages which are not context-free. To prove that a given language is not context-free, one may employ the pumping lemma for context-free languages or a number of other methods, such as Ogden's lemma or Parikh's theorem.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Works cited.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "L = \\{a^nb^n:n\\geq1\\}"
},
{
"math_id": 1,
"text": "S\\to aSb ~|~ ab"
},
{
"math_id": 2,
"text": "M=(\\{q_0,q_1,q_f\\}, \\{a,b\\}, \\{a,z\\}, \\delta, q_0, z, \\{q_f\\})"
},
{
"math_id": 3,
"text": "\\delta"
},
{
"math_id": 4,
"text": "\\begin{align}\n\\delta(q_0, a, z) &= (q_0, az) \\\\\n\\delta(q_0, a, a) &= (q_0, aa) \\\\\n\\delta(q_0, b, a) &= (q_1, \\varepsilon) \\\\\n\\delta(q_1, b, a) &= (q_1, \\varepsilon) \\\\\n\\delta(q_1, \\varepsilon, z) &= (q_f, \\varepsilon)\n\\end{align}"
},
{
"math_id": 5,
"text": "\\{a^n b^m c^m d^n | n, m > 0\\}"
},
{
"math_id": 6,
"text": "\\{a^n b^n c^m d^m | n, m > 0\\}"
},
{
"math_id": 7,
"text": "\\{a^n b^n c^n d^n | n > 0\\}"
},
{
"math_id": 8,
"text": "S\\to SS ~|~ (S) ~|~ \\varepsilon"
},
{
"math_id": 9,
"text": "w"
},
{
"math_id": 10,
"text": "w \\in L(G)"
},
{
"math_id": 11,
"text": "L"
},
{
"math_id": 12,
"text": "G"
},
{
"math_id": 13,
"text": "L \\cup P"
},
{
"math_id": 14,
"text": "L \\cdot P"
},
{
"math_id": 15,
"text": "L^*"
},
{
"math_id": 16,
"text": "\\varphi(L)"
},
{
"math_id": 17,
"text": "\\varphi"
},
{
"math_id": 18,
"text": "\\varphi^{-1}(L)"
},
{
"math_id": 19,
"text": "\\varphi^{-1}"
},
{
"math_id": 20,
"text": "\\{vu : uv \\in L \\}"
},
{
"math_id": 21,
"text": "A = \\{a^n b^n c^m \\mid m, n \\geq 0 \\}"
},
{
"math_id": 22,
"text": "B = \\{a^m b^n c^n \\mid m,n \\geq 0\\}"
},
{
"math_id": 23,
"text": "A \\cap B = \\{ a^n b^n c^n \\mid n \\geq 0\\}"
},
{
"math_id": 24,
"text": "A \\cap B = \\overline{\\overline{A} \\cup \\overline{B}} "
},
{
"math_id": 25,
"text": "\\overline{L} = \\Sigma^* \\setminus L"
},
{
"math_id": 26,
"text": "L\\cap D"
},
{
"math_id": 27,
"text": "L\\setminus D"
},
{
"math_id": 28,
"text": "L(A)=L(B)"
},
{
"math_id": 29,
"text": "L(A) \\cap L(B) = \\emptyset "
},
{
"math_id": 30,
"text": "L(A) \\subseteq L(B)"
},
{
"math_id": 31,
"text": "L(A)=\\Sigma^*"
},
{
"math_id": 32,
"text": "L(A)"
},
{
"math_id": 33,
"text": "L(A) = \\emptyset"
}
]
| https://en.wikipedia.org/wiki?curid=6867 |
6867714 | Zahorski theorem | In mathematics, Zahorski's theorem is a theorem of real analysis. It states that a necessary and sufficient condition for a subset of the real line to be the set of points of non-differentiability of a continuous real-valued function, is that it be the union of a Gδ set and a formula_0 set of zero measure.
This result was proved by Zygmunt Zahorski in 1939 and first published in 1941. | [
{
"math_id": 0,
"text": "{G_\\delta}_\\sigma"
}
]
| https://en.wikipedia.org/wiki?curid=6867714 |
68683512 | Vladimir Bogachev | Russian mathematician (born 1961)
Vladimir Igorevich Bogachev (; born in 1961) is an eminent Russian mathematician and Full Professor of the Department of Mechanics and Mathematics of the Lomonosov Moscow State University. He is an expert in measure theory, probability theory, infinite-dimensional analysis and partial differential equations arising in mathematical physics. His research was distinguished by several awards including the medal and the prize of the Academy of Sciences of the Soviet Union (1990); Award of the Japan Society for the Promotion of Science (2000); the Doob Lecture of the Bernoulli Society (2017); and the Kolmogorov Prize of the Russian Academy of Sciences (2018).
Vladimir Bogachev is one of the most cited Russian mathematicians. He is the author of more than 200 publications and 12 monographs. According to MathSciNet, his total citation count is 2960, with an h-index of 23 (as of September 2021).
Biography.
Bogachev graduated with honours from Moscow State University (1983). In 1986, he received his PhD (Candidate of Sciences in Russia) under the supervision of Prof. O. G. Smolyanov.
Scientific contributions.
In 1984, V. Bogachev resolved three problems of Aronszajn on infinite-dimensional probability distributions and answered a famous question that I. M. Gelfand had posed about 25 years earlier. In 1992, Vladimir Bogachev proved T. Pitcher’s conjecture (stated in 1961) on the differentiability of the distributions of diffusion processes. In 1995, he proved (with Michael Röckner) the famous Shigekawa conjecture on the absolute continuity of invariant measures of diffusion processes. In 1999, in a joint work with Sergio Albeverio and Röckner, Professor Bogachev resolved the well-known problem of S. R. S. Varadhan on the uniqueness of stationary distributions, which had remained open for about 20 years.
A remarkable achievement of Vladimir Bogachev is the recently obtained (2021) answer to the question of Andrey Kolmogorov (posed in 1931) on the uniqueness of the solution to the Cauchy problem: it is shown that the Cauchy problem with a unit diffusion coefficient and locally bounded drift has a unique probabilistic solution on formula_0, and in formula_1 this is not true even for smooth drift.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\R^1"
},
{
"math_id": 1,
"text": "\\R^{>1}"
}
]
| https://en.wikipedia.org/wiki?curid=68683512 |
68686 | Raman spectroscopy | Spectroscopic technique
Raman spectroscopy () (named after physicist C. V. Raman) is a spectroscopic technique typically used to determine vibrational modes of molecules, although rotational and other low-frequency modes of systems may also be observed. Raman spectroscopy is commonly used in chemistry to provide a structural fingerprint by which molecules can be identified.
Raman spectroscopy relies upon inelastic scattering of photons, known as Raman scattering. A source of monochromatic light, usually from a laser in the visible, near infrared, or near ultraviolet range is used, although X-rays can also be used. The laser light interacts with molecular vibrations, phonons or other excitations in the system, resulting in the energy of the laser photons being shifted up or down. The shift in energy gives information about the vibrational modes in the system. Infrared spectroscopy typically yields similar yet complementary information.
Typically, a sample is illuminated with a laser beam. Electromagnetic radiation from the illuminated spot is collected with a lens and sent through a monochromator. Elastic scattered radiation at the wavelength corresponding to the laser line (Rayleigh scattering) is filtered out by either a notch filter, edge pass filter, or a band pass filter, while the rest of the collected light is dispersed onto a detector.
Spontaneous Raman scattering is typically very weak; as a result, for many years the main difficulty in collecting Raman spectra was separating the weak inelastically scattered light from the intense Rayleigh scattered laser light (referred to as "laser rejection"). Historically, Raman spectrometers used holographic gratings and multiple dispersion stages to achieve a high degree of laser rejection. In the past, photomultipliers were the detectors of choice for dispersive Raman setups, which resulted in long acquisition times. However, modern instrumentation almost universally employs notch or edge filters for laser rejection. Dispersive single-stage spectrographs (axial transmissive (AT) or Czerny–Turner (CT) monochromators) paired with CCD detectors are most common although Fourier transform (FT) spectrometers are also common for use with NIR lasers.
The name "Raman spectroscopy" typically refers to vibrational Raman using laser wavelengths which are not absorbed by the sample. There are many other variations of Raman spectroscopy including surface-enhanced Raman, resonance Raman, tip-enhanced Raman, polarized Raman, stimulated Raman, transmission Raman, spatially-offset Raman, and hyper Raman.
History.
Although the inelastic scattering of light was predicted by Adolf Smekal in 1923, it was not observed in practice until 1928. The Raman effect was named after one of its discoverers, the Indian scientist C. V. Raman, who observed the effect in organic liquids in 1928 together with K. S. Krishnan, and independently by Grigory Landsberg and Leonid Mandelstam in inorganic crystals. Raman won the Nobel Prize in Physics in 1930 for this discovery. The first observation of Raman spectra in gases was in 1929 by Franco Rasetti.
Systematic pioneering theory of the Raman effect was developed by Czechoslovak physicist George Placzek between 1930 and 1934. The mercury arc became the principal light source, first with photographic detection and then with spectrophotometric detection.
In the years following its discovery, Raman spectroscopy was used to provide the first catalog of molecular vibrational frequencies. Typically, the sample was held in a long tube and illuminated along its length with a beam of filtered monochromatic light generated by a gas discharge lamp. The photons that were scattered by the sample were collected through an optical flat at the end of the tube. To maximize the sensitivity, the sample was highly concentrated (1 M or more) and relatively large volumes (5 mL or more) were used.
Theory.
The magnitude of the Raman effect correlates with polarizability of the electrons in a molecule. It is a form of inelastic light scattering, where a photon excites the sample. This excitation puts the molecule into a virtual energy state for a short time before the photon is emitted. Inelastic scattering means that the energy of the emitted photon is of either lower or higher energy than the incident photon. After the scattering event, the sample is in a different rotational or vibrational state.
For the total energy of the system to remain constant after the molecule moves to a new rovibronic (rotational–vibrational–electronic) state, the scattered photon shifts to a different energy, and therefore a different frequency. This energy difference is equal to that between the initial and final rovibronic states of the molecule. If the final state is higher in energy than the initial state, the scattered photon will be shifted to a lower frequency (lower energy) so that the total energy remains the same. This shift in frequency is called a Stokes shift, or downshift. If the final state is lower in energy, the scattered photon will be shifted to a higher frequency, which is called an anti-Stokes shift, or upshift.
For a molecule to exhibit a Raman effect, there must be a change in its electric dipole-electric dipole polarizability with respect to the vibrational coordinate corresponding to the rovibronic state. The intensity of the Raman scattering is proportional to this polarizability change. Therefore, the Raman spectrum (scattering intensity as a function of the frequency shifts) depends on the rovibronic states of the molecule.
The Raman effect is based on the interaction between the electron cloud of a sample and the external electric field of the monochromatic light, which can create an induced dipole moment within the molecule based on its polarizability. Because the laser light does not excite the molecule there can be no real transition between energy levels. The Raman effect should not be confused with emission (fluorescence or phosphorescence), where a molecule in an excited electronic state emits a photon and returns to the ground electronic state, in many cases to a vibrationally excited state on the ground electronic state potential energy surface. Raman scattering also contrasts with infrared (IR) absorption, where the energy of the absorbed photon matches the difference in energy between the initial and final rovibronic states. The dependence of Raman on the electric dipole-electric dipole polarizability derivative also differs from IR spectroscopy, which depends on the electric dipole moment derivative, the atomic polar tensor (APT). This contrasting feature allows rovibronic transitions that might not be active in IR to be analyzed using Raman spectroscopy, as exemplified by the rule of mutual exclusion in centrosymmetric molecules. Transitions which have large Raman intensities often have weak IR intensities and vice versa. If a bond is strongly polarized, a small change in its length such as that which occurs during a vibration has only a small resultant effect on polarization. Vibrations involving polar bonds (e.g. C-O , N-O , O-H) are therefore, comparatively weak Raman scatterers. Such polarized bonds, however, carry their electrical charges during the vibrational motion, (unless neutralized by symmetry factors), and this results in a larger net dipole moment change during the vibration, producing a strong IR absorption band. Conversely, relatively neutral bonds (e.g. C-C , C-H , C=C) suffer large changes in polarizability during a vibration. However, the dipole moment is not similarly affected such that while vibrations involving predominantly this type of bond are strong Raman scatterers, they are weak in the IR. A third vibrational spectroscopy technique, inelastic incoherent neutron scattering (IINS), can be used to determine the frequencies of vibrations in highly symmetric molecules that may be both IR and Raman inactive. The IINS selection rules, or allowed transitions, differ from those of IR and Raman, so the three techniques are complementary. They all give the same frequency for a given vibrational transition, but the relative intensities provide different information due to the different types of interaction between the molecule and the incoming particles, photons for IR and Raman, and neutrons for IINS.
Raman shift.
Raman shifts are typically reported in wavenumbers, which have units of inverse length, as this value is directly related to energy. In order to convert between spectral wavelength and wavenumbers of shift in the Raman spectrum, the following formula can be used:
formula_0
where Δν̃ is the Raman shift expressed in wavenumber, λ0 is the excitation wavelength, and λ1 is the Raman spectrum wavelength. Most commonly, the unit chosen for expressing wavenumber in Raman spectra is inverse centimeters (cm−1). Since wavelength is often expressed in units of nanometers (nm), the formula above can scale for this unit conversion explicitly, giving
formula_1
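As a numerical illustration of the conversion above, the short sketch below computes the Raman shift in cm−1 from an excitation and a scattered wavelength given in nanometres; the example wavelengths are arbitrary.

```python
def raman_shift_cm1(lambda0_nm, lambda1_nm):
    """Raman shift in cm^-1 for excitation wavelength lambda0 and scattered
    wavelength lambda1, both in nm; the factor 1e7 converts nm^-1 to cm^-1."""
    return (1.0 / lambda0_nm - 1.0 / lambda1_nm) * 1e7

# e.g. 532 nm excitation scattered at 561 nm gives a Stokes shift of ~972 cm^-1
print(round(raman_shift_cm1(532.0, 561.0)))
```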
Instrumentation.
Modern Raman spectroscopy nearly always involves the use of lasers as excitation light sources. Because lasers were not available until more than three decades after the discovery of the effect, Raman and Krishnan used a mercury lamp and photographic plates to record spectra. Early spectra took hours or even days to acquire due to weak light sources, poor sensitivity of the detectors and the weak Raman scattering cross-sections of most materials. Various colored filters and chemical solutions were used to select certain wavelength regions for excitation and detection but the photographic spectra were still dominated by a broad center line corresponding to Rayleigh scattering of the excitation source.
Technological advances have made Raman spectroscopy much more sensitive, particularly since the 1980s. The most common modern detectors are now charge-coupled devices (CCDs). Photodiode arrays and photomultiplier tubes were common prior to the adoption of CCDs. The advent of reliable, stable, inexpensive lasers with narrow bandwidths has also had an impact.
Lasers.
Raman spectroscopy requires a light source such as a laser. The resolution of the spectrum relies on the bandwidth of the laser source used. Generally shorter wavelength lasers give stronger Raman scattering due to the ν4 increase in Raman scattering cross-sections, but issues with sample degradation or fluorescence may result.
Continuous wave lasers are most common for normal Raman spectroscopy, but pulsed lasers may also be used. These often have wider bandwidths than their CW counterparts but are very useful for other forms of Raman spectroscopy such as transient, time-resolved and resonance Raman.
Detectors.
Raman scattered light is typically collected and either dispersed by a spectrograph or used with an interferometer for detection by Fourier Transform (FT) methods. In many cases commercially available FT-IR spectrometers can be modified to become FT-Raman spectrometers.
Detectors for dispersive Raman.
In most cases, modern Raman spectrometers use array detectors such as CCDs. Various types of CCDs exist which are optimized for different wavelength ranges. Intensified CCDs can be used for very weak signals and/or pulsed lasers.
The spectral range depends on the size of the CCD and the focal length of spectrograph used.
It was once common to use monochromators coupled to photomultiplier tubes. In this case the monochromator would need to be moved in order to scan through a spectral range.
Detectors for FT–Raman.
FT–Raman is almost always used with NIR lasers and appropriate detectors must be used depending on the exciting wavelength. Germanium or Indium gallium arsenide (InGaAs) detectors are commonly used.
Filters.
It is usually necessary to separate the Raman scattered light from the Rayleigh signal and reflected laser signal in order to collect high quality Raman spectra using a laser rejection filter. Notch or long-pass optical filters are typically used for this purpose. Before the advent of holographic filters it was common to use a triple-grating monochromator in subtractive mode to isolate the desired signal. This may still be used to record very small Raman shifts as holographic filters typically reflect some of the low frequency bands in addition to the unshifted laser light. However, Volume hologram filters are becoming more common which allow shifts as low as 5 cm−1 to be observed.
Applications.
Raman spectroscopy is used in chemistry to identify molecules and study chemical bonding and intramolecular bonds. Because vibrational frequencies are specific to a molecule's chemical bonds and symmetry (the fingerprint region of organic molecules is in the wavenumber range 500–1,500 cm−1), Raman provides a fingerprint to identify molecules. For instance, Raman and IR spectra were used to determine the vibrational frequencies of SiO, Si2O2, and Si3O3 on the basis of normal coordinate analyses. Raman is also used to study the addition of a substrate to an enzyme.
In solid-state physics, Raman spectroscopy is used to characterize materials, measure temperature, and find the crystallographic orientation of a sample. As with single molecules, a solid material can be identified by characteristic phonon modes. Information on the population of a phonon mode is given by the ratio of the Stokes and anti-Stokes intensity of the spontaneous Raman signal. Raman spectroscopy can also be used to observe other low frequency excitations of a solid, such as plasmons, magnons, and superconducting gap excitations. Distributed temperature sensing (DTS) uses the Raman-shifted backscatter from laser pulses to determine the temperature along optical fibers. The orientation of an anisotropic crystal can be found from the polarization of Raman-scattered light with respect to the crystal and the polarization of the laser light, if the crystal structure’s point group is known.
In nanotechnology, a Raman microscope can be used to analyze nanowires to better understand their structures, and the radial breathing mode of carbon nanotubes is commonly used to evaluate their diameter.
Raman active fibers, such as aramid and carbon, have vibrational modes that show a shift in Raman frequency with applied stress. Polypropylene fibers exhibit similar shifts.
In solid state chemistry and the bio-pharmaceutical industry, Raman spectroscopy can be used to not only identify active pharmaceutical ingredients (APIs), but to identify their polymorphic forms, if more than one exist. For example, the drug Cayston (aztreonam), marketed by Gilead Sciences for cystic fibrosis, can be identified and characterized by IR and Raman spectroscopy. Using the correct polymorphic form in bio-pharmaceutical formulations is critical, since different forms have different physical properties, like solubility and melting point.
Raman spectroscopy has a wide variety of applications in biology and medicine. It has helped confirm the existence of low-frequency phonons in proteins and DNA, promoting studies of low-frequency collective motion in proteins and DNA and their biological functions. Raman reporter molecules with olefin or alkyne moieties are being developed for tissue imaging with SERS-labeled antibodies. Raman spectroscopy has also been used as a noninvasive technique for real-time, in situ biochemical characterization of wounds. Multivariate analysis of Raman spectra has enabled development of a quantitative measure for wound healing progress. Spatially offset Raman spectroscopy (SORS), which is less sensitive to surface layers than conventional Raman, can be used to discover counterfeit drugs without opening their packaging, and to non-invasively study biological tissue. A reason why Raman spectroscopy is useful in biological applications is that water is only a weak Raman scatterer (its highly polar O-H bonds produce strong infrared absorption but little Raman scattering), so spectra of aqueous samples suffer little interference from water. This is a large advantage, specifically in biological applications. Raman spectroscopy also has a wide usage for studying biominerals. Lastly, Raman gas analyzers have many practical applications, including real-time monitoring of anesthetic and respiratory gas mixtures during surgery.
Raman spectroscopy has been used in several research projects as a means to detect explosives from a safe distance using laser beams.
Raman Spectroscopy is being further developed so it could be used in the clinical setting. Raman4Clinic is a European organization that is working on incorporating Raman Spectroscopy techniques in the medical field. They are currently working on different projects, one of them being monitoring cancer using bodily fluids such as urine and blood samples which are easily accessible. This technique would be less stressful on the patients than constantly having to take biopsies which are not always risk free.
In photovoltaics, Raman spectroscopy has gained more interest in the past few years because it can efficiently determine important properties of such materials, including optoelectronic and physicochemical properties such as open-circuit voltage, efficiency, and crystalline structure. This has been demonstrated with several photovoltaic technologies, including kesterite-based and CIGS devices, monocrystalline silicon cells, and perovskite devices.
Art and cultural heritage.
Raman spectroscopy is an efficient and non-destructive way to investigate works of art and cultural heritage artifacts, in part because it is a non-invasive process which can be applied "in situ". It can be used to analyze the corrosion products on the surfaces of artifacts (statues, pottery, etc.), which can lend insight into the corrosive environments experienced by the artifacts. The resulting spectra can also be compared to the spectra of surfaces that are cleaned or intentionally corroded, which can aid in determining the authenticity of valuable historical artifacts.
It is capable of identifying individual pigments in paintings and their degradation products, which can provide insight into the working method of an artist in addition to aiding in authentication of paintings. It also gives information about the original state of the painting in cases where the pigments have degraded with age. Beyond the identification of pigments, extensive Raman microspectroscopic imaging has been shown to provide access to a plethora of trace compounds in Early Medieval Egyptian blue, which make it possible to reconstruct the individual "biography" of a colourant, including information on the type and provenance of the raw materials, synthesis and application of the pigment, and the ageing of the paint layer.
In addition to paintings and artifacts, Raman spectroscopy can be used to investigate the chemical composition of historical documents (such as the Book of Kells), which can provide insight about the social and economic conditions when they were created. It also offers a noninvasive way to determine the best method of preservation or conservation of such cultural heritage artifacts, by providing insight into the causes behind deterioration.
The IRUG (Infrared and Raman Users Group) Spectral Database is a rigorously peer-reviewed online database of IR and Raman reference spectra for cultural heritage materials such as works of art, architecture, and archaeological artifacts. The database is open for the general public to peruse, and includes interactive spectra for over a hundred different types of pigments and paints.
Microspectroscopy.
Raman spectroscopy offers several advantages for microscopic analysis. Since it is a light scattering technique, specimens do not need to be fixed or sectioned. Raman spectra can be collected from a very small volume (< 1 μm in diameter, < 10 μm in depth); these spectra allow the identification of species present in that volume. Water does not generally interfere with Raman spectral analysis. Thus, Raman spectroscopy is suitable for the microscopic examination of minerals, materials such as polymers and ceramics, cells, proteins and forensic trace evidence. A Raman microscope begins with a standard optical microscope, and adds an excitation laser, a monochromator or polychromator, and a sensitive detector (such as a charge-coupled device (CCD), or photomultiplier tube (PMT)). FT-Raman has also been used with microscopes, typically in combination with near-infrared (NIR) laser excitation. Ultraviolet microscopes and UV enhanced optics must be used when a UV laser source is used for Raman microspectroscopy.
In "direct imaging" (also termed "global imaging" or "wide-field illumination"), the whole field of view is examined for light scattering integrated over a small range of wavenumbers (Raman shifts). For instance, a wavenumber characteristic for cholesterol could be used to record the distribution of cholesterol within a cell culture. This technique is being used for the characterization of large-scale devices, mapping of different compounds and dynamics study. It has already been used for the characterization of graphene layers, J-aggregated dyes inside carbon nanotubes and multiple other 2D materials such as MoS2 and WSe2. Since the excitation beam is dispersed over the whole field of view, those measurements can be done without damaging the sample.
The most common approach is "hyperspectral imaging" or "chemical imaging", in which thousands of Raman spectra are acquired from all over the field of view by, for example, raster scanning of a focused laser beam through a sample. The data can be used to generate images showing the location and amount of different components. Having the full spectroscopic information available in every measurement spot has the advantage that several components can be mapped at the same time, including chemically similar and even polymorphic forms, which cannot be distinguished by detecting only one single wavenumber. Furthermore, material properties such as stress and strain, crystal orientation, crystallinity and incorporation of foreign ions into crystal lattices (e.g., doping, solid solution series) can be determined from hyperspectral maps. Taking the cell culture example, a hyperspectral image could show the distribution of cholesterol, as well as proteins, nucleic acids, and fatty acids. Sophisticated signal- and image-processing techniques can be used to ignore the presence of water, culture media, buffers, and other interferences.
Because a Raman microscope is a diffraction-limited system, its spatial resolution depends on the wavelength of light, the numerical aperture of the focusing element, and — in the case of confocal microscopy — on the diameter of the confocal aperture. When operated in the visible to near-infrared range, a Raman microscope can achieve lateral resolutions of approx. 1 μm down to 250 nm, depending on the wavelength and type of objective lens (e.g., air "vs." water or oil immersion lenses). The depth resolution (if not limited by the optical penetration depth of the sample) can range from 1–6 μm with the smallest confocal pinhole aperture to tens of micrometers when operated without a confocal pinhole. Depending on the sample, the high laser power density due to microscopic focussing can have the benefit of enhanced photobleaching of molecules emitting interfering fluorescence. However, the laser wavelength and laser power have to be carefully selected for each type of sample to avoid its degradation.
Applications of Raman imaging range from materials sciences to biological studies. For each type of sample, the measurement parameters have to be individually optimized. For that reason, modern Raman microscopes are often equipped with several lasers offering different wavelengths, a set of objective lenses, and neutral density filters for tuning of the laser power reaching the sample. Selection of the laser wavelength mainly depends on optical properties of the sample and on the aim of the investigation. For example, Raman microscopy of biological and medical specimens is often performed using red to near-infrared excitation (e.g., 785 nm, or 1,064 nm wavelength). Due to typically low absorbances of biological samples in this spectral range, the risk of damaging the specimen as well as autofluorescence emission are reduced, and high penetration depths into tissues can be achieved. However, the intensity of Raman scattering at long wavelengths is low (owing to the ω4 dependence of Raman scattering intensity), leading to long acquisition times. On the other hand, resonance Raman imaging of single-cell algae at 532 nm (green) can specifically probe the carotenoid distribution within a cell by using a low laser power of ~5 μW and only 100 ms acquisition time.
Raman scattering, specifically tip-enhanced Raman spectroscopy, produces high resolution hyperspectral images of single molecules, atoms, and DNA.
Polarization dependence of Raman scattering.
Raman scattering is polarization sensitive and can provide detailed information on symmetry of Raman active modes. While conventional Raman spectroscopy identifies chemical composition, polarization effects on Raman spectra can reveal information on the orientation of molecules in single crystals and anisotropic materials, e.g. strained plastic sheets, as well as the symmetry of vibrational modes.
Polarization–dependent Raman spectroscopy uses (plane) polarized laser excitation from a polarizer. The Raman scattered light collected is passed through a second polarizer (called the analyzer) before entering the detector. The analyzer is oriented either parallel or perpendicular to the polarization of the laser. Spectra acquired with the analyzer set at both perpendicular and parallel to the excitation plane can be used to calculate the depolarization ratio. Typically a polarization scrambler is also placed between the analyzer and the detector. It is convenient in polarized Raman spectroscopy to describe the propagation and polarization directions using Porto's notation, described by and named after Brazilian physicist Sergio Pereira da Silva Porto.
For isotropic solutions, the Raman scattering from each mode either retains the polarization of the laser or becomes partly or fully depolarized. If the vibrational mode involved in the Raman scattering process is totally symmetric then the polarization of the Raman scattering will be the same as that of the incoming laser beam. In the case that the vibrational mode is not totally symmetric then the polarization will be lost (scrambled) partially or totally, which is referred to as depolarization. Hence polarized Raman spectroscopy can provide detailed information as to the symmetry labels of vibrational modes.
In the solid state, polarized Raman spectroscopy can be useful in the study of oriented samples such as single crystals. The polarizability of a vibrational mode is not equal along and across the bond. Therefore the intensity of the Raman scattering will be different when the laser's polarization is along and orthogonal to a particular bond axis. This effect can provide information on the orientation of molecules with a single crystal or material. The spectral information arising from this analysis is often used to understand macro-molecular orientation in crystal lattices, liquid crystals or polymer samples.
Characterization of the symmetry of a vibrational mode.
The polarization technique is useful in understanding the connections between molecular symmetry, Raman activity, and peaks in the corresponding Raman spectra. Polarized light in one direction only gives access to some Raman–active modes, but rotating the polarization gives access to other modes. Each mode is separated according to its symmetry.
The symmetry of a vibrational mode is deduced from the depolarization ratio ρ, which is the ratio of the Raman scattering with polarization orthogonal to the incident laser and the Raman scattering with the same polarization as the incident laser: formula_2 Here formula_3 is the intensity of Raman scattering when the analyzer is rotated 90 degrees with respect to the incident light's polarization axis, and formula_4 the intensity of Raman scattering when the analyzer is aligned with the polarization of the incident laser. When polarized light interacts with a molecule, it distorts the molecule which induces an equal and opposite effect in the plane-wave, causing it to be rotated by the difference between the orientation of the molecule and the angle of polarization of the light wave. If formula_5, then the vibrations at that frequency are "depolarized"; meaning they are not totally symmetric.
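As a small illustration of how the depolarization ratio is used, the sketch below computes ρ from the two measured intensities and applies the 3/4 criterion from the text; the function names are illustrative.

```python
def depolarization_ratio(i_perpendicular, i_parallel):
    """rho = I_r / I_u, the ratio of the Raman intensity measured with the
    analyzer perpendicular (I_r) and parallel (I_u) to the laser polarization."""
    return i_perpendicular / i_parallel

def is_depolarized(rho):
    """A band with rho >= 3/4 is depolarized, i.e. not totally symmetric."""
    return rho >= 0.75
```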
Raman Excitation Profile Analysis.
Resonance Raman selection rules can be explained by the Kramers-Heisenberg-Dirac (KHD) equation using the Albrecht A and B terms. The KHD expression is conveniently linked to the polarizability of the molecule within its frame of reference.
The transition polarizability is expressed as a matrix element of the polarizability operator connecting the initial and final states, as a function of the incident frequency ω0; the Cartesian tensor indices ρ and σ label the directions x, y, and z in the molecular frame. Analyzing Raman excitation profiles requires the use of this equation, which is a sum-over-states expression for the polarizability. Each profile relates the intensity of a Raman-active vibration to the excitation frequency.
This method takes into account sums over the Franck-Condon-active vibrational states and provides insight into electronic absorption and emission spectra. Nevertheless, the sum-over-states approach has a practical drawback, especially for large molecules such as visible chromophores, which are commonly studied in resonance Raman spectroscopy: the sum runs over a potentially enormous number of intermediate states. Truncating the sum at higher vibrational states works for small molecules, but it becomes impractical for larger molecules, particularly in the condensed phase, where individual eigenstates cannot be resolved spectrally.
To overcome this, two alternative techniques that do not require explicit sums over eigenstates can be used: the transform method and Heller's time-dependent approach. Both aim to obtain the frequency-dependent Raman cross-section σR(ω0) of a particular normal mode.
Variants.
At least 25 variations of Raman spectroscopy have been developed. The usual purpose is to enhance the sensitivity (e.g., surface-enhanced Raman), to improve the spatial resolution (Raman microscopy), or to acquire very specific information (resonance Raman).
Spontaneous (or far-field) Raman spectroscopy.
Terms such as "spontaneous Raman spectroscopy" or "normal Raman spectroscopy" summarize Raman spectroscopy techniques based on Raman scattering by using normal far-field optics as described above. Variants of normal Raman spectroscopy exist with respect to excitation-detection geometries, combination with other techniques, use of special (polarizing) optics and specific choice of excitation wavelengths for resonance enhancement.
Enhanced (or near-field) Raman spectroscopy.
Enhancement of Raman scattering is achieved by local electric-field enhancement by optical near-field effects (e.g. localized surface plasmons).
Non-linear Raman spectroscopy.
Raman signal enhancements are achieved through non-linear optical effects, typically realized by mixing two or more wavelengths emitted by spatially and temporally synchronized pulsed lasers.
Morphologically-Directed Raman spectroscopy.
Morphologically Directed Raman Spectroscopy (MDRS) combines automated particle imaging and Raman microspectroscopy into a singular integrated platform in order to provide particle size, shape, and chemical identification. Automated particle imaging determines the particle size and shape distributions of components within a blended sample from images of individual particles. The information gathered from automated particle imaging is then utilized to direct the Raman spectroscopic analysis. The Raman spectroscopic analytical process is performed on a randomly-selected subset of the particles, allowing chemical identification of the sample’s multiple components. Tens of thousands of particles can be imaged in a matter of minutes using the MDRS method, making the process ideal for forensic analysis and investigating counterfeit pharmaceuticals and subsequent adjudications.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta \\tilde{\\nu} = \\left( \\frac{1}{\\lambda_0} - \\frac{1}{\\lambda_1} \\right) \\ , "
},
{
"math_id": 1,
"text": "\\Delta \\tilde{\\nu } (\\text{cm}^{-1}) = \\left( \\frac{1}{\\lambda_0 (\\text{nm})} - \\frac{1}{\\lambda_1 (\\text{nm})} \\right) \\times \\frac{(10^{7}\\text{nm})}{(\\text{cm})}."
},
{
"math_id": 2,
"text": "\\rho = \\frac{I_r}{I_u}"
},
{
"math_id": 3,
"text": "I_r"
},
{
"math_id": 4,
"text": "I_u"
},
{
"math_id": 5,
"text": "\\rho \\geq \\frac{3}{4}"
}
]
| https://en.wikipedia.org/wiki?curid=68686 |
68689710 | Petal projection | Form of knot diagram
In knot theory, a petal projection of a knot is a knot diagram with a single crossing, at which an odd number of non-nested arcs ("petals") all meet. Because the above-below relation between the branches of a knot at this crossing point is not apparent from the appearance of the diagram, it must be specified separately, as a permutation describing the top-to-bottom ordering of the branches.
Every knot or link has a petal projection; the minimum number of petals in such a projection defines a knot invariant, the petal number of the knot. Petal projections can be used to define the Petaluma model, a family of probability distributions on knots with a given number of petals, defined by choosing a random permutation for the branches of a petal diagram.
Petal projection.
A petal projection is a description of a knot as a special kind of knot diagram, a two-dimensional self-crossing curve formed by projecting the knot from three dimensions down to a plane. In a petal projection, this diagram has only one crossing point, forming a topological rose. Every two branches of the curve that pass through this point cross each other there; branches that meet tangentially without crossing are not allowed. The "petals" formed by arcs of the curve that leave and then return to this crossing point are all non-nested, bounding closed disks that are disjoint except for their common intersection at the crossing point.
Beyond this topological description, the precise shape of the curve is unimportant. For instance, curves of this type could be realized algebraically as certain rose curves. However, it is common instead to draw a petal projection using straight line segments across the crossing point, connected at their endpoints by smooth curves to form the petals.
In order to specify the above-below relation of the branches of the curve at the crossing point, each branch is labeled with an integer, from 1 to the number of branches, giving its position in the top-down ordering of the branches as would be seen from a three-dimensional viewpoint above the projected diagram. The cyclic permutation of these integers, in the radial ordering of the branches around the crossing point, can be used as a purely combinatorial description of the petal projection.
In order to form a single knot, rather than a link, a petal projection must have an odd number of branches at its crossing point. Every knot can be represented as a petal projection, for diagrams with a sufficiently large number of petals. The minimum possible number of petals in a petal projection of a given knot defines a knot invariant called its petal number.
Petaluma model.
The Petaluma model is a random distribution on knots, parameterized by an odd number formula_0 of petals in a petal diagram, and defined by constructing a petal diagram with this number of petals using a uniformly random permutation on its branches.
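A minimal sketch of drawing one sample from the Petaluma model as described above: pick a uniformly random top-to-bottom ordering of the 2n+1 branches. Representing the sample as a bare permutation is an assumption made for illustration; identifying the knot type it encodes would require further processing not shown here.

```python
import random

def sample_petaluma(n):
    """Return a uniformly random permutation of the 2n+1 branch heights of a
    petal diagram, which determines a random knot in the Petaluma model."""
    heights = list(range(1, 2 * n + 2))   # labels 1 .. 2n+1 for the branches
    random.shuffle(heights)               # uniformly random top-to-bottom order
    return heights

# e.g. a random 7-petal diagram (n = 3)
print(sample_petaluma(3))
```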
Generalization to links.
Petal projections, and the petaluma model, can be generalized from knots to links. However, for this generalization, it is no longer possible to guarantee that all petals are non-nested. Instead, the generalized petal projections for links have a different type of standard diagram allowing some nesting of the petals.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2n+1"
}
]
| https://en.wikipedia.org/wiki?curid=68689710 |
68692207 | Inductive miner | Inductive miner belongs to a class of algorithms used in process discovery. Various algorithms proposed previously give process models of slightly different type from the same input. The quality of the output model depends on the soundness of the model. A number of techniques such as alpha miner, genetic miner, work on the basis of converting an event log into a workflow model, however, they do not produce models that are sound all the time. Inductive miner relies on building a directly follows graph from event log and using this graph to detect various process relations.
Definitions.
A directly follows graph is a directed graph that connects an activity A to another activity B if and only if activity B occurs chronologically right after activity A in at least one case of the event log. A directly follows graph is represented mathematically by:
formula_0
Where
formula_1 (activities in the log)
formula_2 (directly follows relation)
formula_3
formula_4
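A minimal sketch (not taken from any particular process-mining library) of how the four components of formula_0 could be computed from an event log given as a list of traces; the data layout is an illustrative assumption.

```python
def directly_follows_graph(log):
    """Build (A_L, ->_L, A_L^s, A_L^e) from an event log.

    log: list of traces, each trace being a list of activity labels.
    """
    activities, edges, starts, ends = set(), set(), set(), set()
    for trace in log:
        if not trace:
            continue
        activities.update(trace)            # A_L: all activities in the log
        starts.add(trace[0])                # A_L^s: first activity of a case
        ends.add(trace[-1])                 # A_L^e: last activity of a case
        for a, b in zip(trace, trace[1:]):
            edges.add((a, b))               # a ->_L b: b directly follows a
    return activities, edges, starts, ends

log = [["a", "b", "c", "d"], ["a", "c", "b", "d"], ["a", "e", "d"]]
A, F, S, E = directly_follows_graph(log)    # the graph on which cuts are detected
```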
The inductive miner technique relies on the detection of various cuts in the directly follows graph created from the event log. The core idea is to recursively discover divisions of the directly follows graph into smaller components and to use those components to represent the execution order of the activities. The algorithm uses the directly follows graph to detect one of the following cuts.
formula_5 is an exclusive OR cut iff: formula_6
formula_7 is a sequence cut iff: formula_8
formula_9 is a parallel cut iff:
- formula_10
- formula_11
formula_12 is a redo loop cut iff:
- formula_13
- formula_14
- formula_15
- formula_16
- formula_17 | [
{
"math_id": 0,
"text": "G(L) = (A_L, \\rightarrow_L, A^s_L, A^e_L)\n"
},
{
"math_id": 1,
"text": "A_L = Nodes\n"
},
{
"math_id": 2,
"text": "\\rightarrow_L = edges\n"
},
{
"math_id": 3,
"text": "A^s_L = Start node\n"
},
{
"math_id": 4,
"text": "A^e_L = End node\n"
},
{
"math_id": 5,
"text": "(\\times, A_1, ... , A_n)\n"
},
{
"math_id": 6,
"text": "\\forall i,j \\in \\{1, .. , n\\} (\\forall a \\in A_i, b \\in A_j (i \\neq j \\Rightarrow \\neg(a \\rightarrow_L b) ))\n\n"
},
{
"math_id": 7,
"text": "(\\rightarrow_L, A_1, .. , A_n)\n\n"
},
{
"math_id": 8,
"text": "\\forall i,j \\in \\{1, .. , n\\} (\\forall a \\in A_i, b \\in A_j \n(a \\rightarrow_L b \\land \\neg(b \\rightarrow_L a))\n)\n"
},
{
"math_id": 9,
"text": "(\\land, A_1, .. , A_n)\n\n"
},
{
"math_id": 10,
"text": "\\forall i \\in \\{1, .., n\\} ( A_i \\cap A^s_L \\neq \\emptyset \\land A_i \\cap A^e_L \\neq \\emptyset )\n\n"
},
{
"math_id": 11,
"text": "\\forall i,j \\in \\{1, .. , n\\} (\\forall a \\in A_i, b \\in A_j (i \\neq j \\Rightarrow a \\rightarrow_L b ))\n\n"
},
{
"math_id": 12,
"text": "(\\circlearrowleft, A_1, .. , A_n)\n\n"
},
{
"math_id": 13,
"text": "n \\geq 2\n\n"
},
{
"math_id": 14,
"text": "A_1 \\supseteq A^s_L \\cup A^e_L\n\n"
},
{
"math_id": 15,
"text": "\\forall i,j \\in \\{2, .. , n\\} (\\forall a \\in A_i, b \\in A_j (i \\neq j \\Rightarrow \\neg(a \\rightarrow_L b) ))\n\n"
},
{
"math_id": 16,
"text": "\\forall i \\in \\{2, .. , n\\} (\\forall b \\in A_i (\\exists a \\in A^e_L (a \\rightarrow_L b)) \\Rightarrow (\\forall a \\in A^e_L (a \\rightarrow_L b) ))\n\n"
},
{
"math_id": 17,
"text": "\\forall i \\in \\{2, .. , n\\} (\\forall b \\in A_i (\\exists a \\in A^s_L (b \\rightarrow_L a)) \\Rightarrow (\\forall a \\in A^s_L (b \\rightarrow_L a) ))\n\n"
}
]
| https://en.wikipedia.org/wiki?curid=68692207 |
68697715 | Martin curve | Mathematical representation of particulate organic carbon export to ocean floor
The Martin curve is a power law used by oceanographers to describe the export of particulate organic carbon (POC) to the ocean floor. The curve is controlled by two parameters: a reference depth in the water column, and a remineralisation parameter that measures the rate at which the vertical flux of POC attenuates with depth. It is named after the American oceanographer John Martin.
The Martin Curve has been used in the study of ocean carbon cycling and has contributed to understanding the role of the ocean in regulating atmospheric CO2 levels.
Background.
The dynamics of the particulate organic carbon (POC) pool in the ocean are central to the marine carbon cycle. POC is the link between surface primary production, the deep ocean, and marine sediments. The rate at which POC is degraded in the dark ocean can impact atmospheric CO2 concentration.
The biological carbon pump (BCP) is a crucial mechanism by which atmospheric CO2 is taken up by the ocean and transported to the ocean interior. Without the BCP, the pre-industrial atmospheric CO2 concentration (~280 ppm) would have risen to ~460 ppm. At present, the particulate organic carbon (POC) flux from the surface layer of the ocean to the ocean interior has been estimated to be 4–13 Pg-C year−1. To evaluate the efficiency of the BCP, it is necessary to quantify the vertical attenuation of the POC flux with depth because the deeper that POC is transported, the longer the CO2 will be isolated from the atmosphere. Thus, an increase in the efficiency of the BCP has the potential to cause an increase of ocean carbon sequestration of atmospheric CO2 that would result in a negative feedback on global warming. Different researchers have investigated the vertical attenuation of the POC flux since the 1980s.
In 1987, Martin "et al". proposed the following power law function to describe the POC flux attenuation:
formula_0 (1)
where "z" is water depth (m), and "F""z" and "F"100 are the POC fluxes at depths of "z" metres and 100 metres respectively. Although other functions, such as an exponential curve, have also been proposed and validated, this power law function, commonly known as the "Martin curve", has been used very frequently in discussions of the BCP. The exponent b in this equation has been used as an index of BCP efficiency: the larger the exponent b, the higher the vertical attenuation rate of the POC flux and the lower the BCP efficiency. Moreover, numerical simulations have shown that a change in the value of b would significantly change the atmospheric CO2 concentration.
Subsequently, other researchers have derived alternative remineralization profiles from assumptions about particle degradability and sinking speed. However, the Martin curve has become ubiquitous as the model that assumes slower-sinking and/or labile organic matter is preferentially depleted near the surface causing increasing sinking speed and/or remineralization timescale with depth.
The Martin curve can be expressed in a slightly more general way as:
formula_1
where "f""p"("z") is the fraction of the flux of particulate organic matter from a productive layer near the surface sinking through the depth horizon "z" [m], "C""p" ["m""b"] is a scaling coefficient, and "b" is a nondimensional exponent controlling how "fp" decreases with depth. The equation is often normalised to a reference depth "zo" but this parameter can be readily absorbed into "Cp".
Vertical attenuation rate.
The vertical attenuation rate of the POC flux is very dependent on the sinking velocity and decomposition rate of POC in the water column. Because POC is labile and has little negative buoyancy, it must be aggregated with relatively heavy materials called ballast to settle gravitationally in the ocean. Materials that may serve as ballast include biogenic opal (hereinafter "opal"), CaCO3, and aluminosilicates. In 1993, Ittekkot hypothesized that the drastic decrease from ~280 to ~200 ppm of atmospheric CO2 that occurred during the last glacial maximum was caused by an increase of the input of aeolian dust (aluminosilicate ballast) to the ocean, which strengthened the BCP. In 2002, Klaas and Archer , as well as Francois "et al." who compiled and analyzed global sediment trap data, suggested that CaCO3, which has the largest density among possible ballast minerals, is globally the most important and effective facilitator of vertical POC transport, because the transfer efficiency (the ratio of the POC flux in the deep sea to that at the bottom of the surface mixed layer) is higher in subtropical and tropical areas where CaCO3 is a major component of marine snow.
Reported sinking velocities of CaCO3-rich particles are high. Numerical simulations that take into account these findings have indicated that future ocean acidification will reduce the efficiency of the BCP by decreasing ocean calcification. In addition, the POC export ratio (the ratio of the POC flux from an upper layer (a fixed depth such as 100 metres, or the euphotic zone or mixed layer) to net primary productivity) in subtropical and tropical areas is low because high temperatures in the upper layer increase POC decomposition rates. The result might be a higher transfer efficiency and a strong positive correlation between POC and CaCO3 in these low-latitude areas: labile POC, which is fresher and easier for microbes to break down, decomposes in the upper layer, and relatively refractory POC is transported to the ocean interior in low-latitude areas.
On the basis of observations that revealed a large increase of POC fluxes in high-latitude areas during diatom blooms and on the fact that diatoms are much bigger than coccolithophores, Honda and Watanabe proposed in 2010 that opal, rather than CaCO3, is crucial as ballast for effective POC vertical transport in subarctic regions. Weber et al. reported in 2016 a strong negative correlation between transfer efficiency and the picoplankton fraction of plankton as well as higher transfer efficiencies in high-latitude areas, where large phytoplankton such as diatoms predominate. They also calculated that the fraction of vertically transported CO2 that has been sequestered in the ocean interior for at least 100 years is higher in high-latitude (polar and subpolar) regions than in low-latitude regions.
In contrast, Bach et al. conducted in 2019 a mesocosm experiment to study how the plankton community structure affected sinking velocities and reported that during more productive periods the sinking velocity of aggregated particles was not necessarily higher, because the aggregated particles produced then were very fluffy; rather, the settling velocity was higher when the phytoplankton were dominated by small cells. In 2012, Henson et al. revisited the global sediment trap data and reported that the POC flux is negatively correlated with the opal export flux and uncorrelated with the CaCO3 export flux.
Key factors affecting the rate of biological decomposition of sinking POC in the water column are water temperature and the dissolved oxygen (DO) concentration: the lower the water temperature and the DO concentration, the slower the biological respiration rate and, consequently, the POC flux decomposition rate. For example, in 2015 Marsay and colleagues analysed POC flux data from neutrally buoyant sediment traps in the upper 500 m of the water column and found a significant positive correlation between the exponent b in equation (1) above and water temperature (i.e., the POC flux was attenuated more rapidly when the water was warmer). In addition, Bach "et al." found that POC decomposition rates are high (low) when diatoms and "Synechococcus" (harmful algae) are the dominant phytoplankton because of increased (decreased) zooplankton abundance and the consequent increase (decrease) in grazing pressure.
Using radiochemical observations (234Th-based POC flux observations), Pavia et al. found in 2019 that the exponent b of the Martin curve was significantly smaller in the low-oxygen (hypoxic) eastern Pacific equatorial zone than in other areas; that is, vertical attenuation of the POC flux was smaller in the hypoxic area. They pointed out that a more hypoxic ocean in the future would lead to a lower attenuation of the POC flux and therefore increased BCP efficiency and could thereby be a negative feedback on global warming. McDonnell et al. reported in 2015 that vertical transport of POC is more effective in the Antarctic, where the sinking velocity is higher and the biological respiration rate is lower than in the subtropical Atlantic. Henson et al. also reported in 2019 a high export ratio during the early bloom period, when primary productivity is low, and a low export ratio during the late bloom period, when primary productivity is high. They attributed the low export ratio during the late bloom to grazing pressure by microzooplankton and bacteria.
Despite these many investigations of the BCP, the factors governing the vertical attenuation of POC flux are still under debate. Observations in subarctic regions have shown that the transfer efficiency between depths of 1000 and 2000 m is relatively low and that between the bottom of the euphotic zone and a depth of 1000 m it is relatively high. Marsay et al. therefore proposed in 2015 that the Martin curve does not appropriately express the vertical attenuation of POC flux in all regions and that a different equation should instead be developed for each region. Gloege et al. discussed in 2017 the parameterisation of the vertical attenuation of POC flux, and reported that vertical attenuation of the POC flux in the twilight zone (from the base of the euphotic zone to 1000 m) can be parameterised well not only by a power law model (Martin curve) but also by an exponential model and a ballast model.
However, the exponential model tends to underestimate the POC flux in the midnight zone (depths greater than 1000 metres). Cael and Bisson reported in 2018 that the exponential model (power law model) tends to underestimate the POC flux in the upper layer, and overestimate it in the deep layer. However, the abilities of both models to describe POC fluxes were comparable statistically when they were applied to the POC flux dataset from the eastern Pacific that was used to propose the "Martin curve". In a long-term study in the northeastern Pacific, Smith et al. observed in 2018 a sudden increase of the POC flux accompanied by an unusually high transfer efficiency; they suggested that because the Martin curve cannot express such a sudden increase, it may sometimes underestimate BCP strength. In addition, contrary to previous findings, some studies have reported a significantly higher transfer efficiency, especially to the deep sea, in subtropical regions than in subarctic regions. This pattern may be attributable to small temperature and DO concentration differences in the deep sea between high-latitude and low-latitude regions, as well as to a higher sinking velocity in subtropical regions, where CaCO3 is a major component of deep-sea marine snow. Moreover, it is also possible that POC is more refractory in low-latitude areas than in high-latitude areas.
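The power-law and exponential parameterisations discussed above can be compared directly. The following Python sketch is purely illustrative: the reference flux, the exponent (set near the commonly cited Martin value of about 0.86) and the e-folding length are placeholder values, not fitted to any of the datasets mentioned here.

```python
# Illustrative comparison of two POC-flux attenuation profiles: the Martin
# power-law curve F_z = F_100 * (z/100)^(-b) and a simple exponential decay.
# All coefficient values below are placeholders.
import numpy as np

z = np.linspace(100, 2000, 20)              # depth in metres
F100, b = 10.0, 0.86                        # reference flux at 100 m and Martin exponent (assumed)
efold = 300.0                               # assumed e-folding length scale in metres

martin = F100 * (z / 100.0) ** (-b)
exponential = F100 * np.exp(-(z - 100.0) / efold)

for depth, fm, fe in zip(z, martin, exponential):
    print(f"{depth:6.0f} m   power law {fm:5.2f}   exponential {fe:5.2f}")
```

With these placeholder numbers the exponential profile falls off much faster below roughly 1000 m, which matches the qualitative behaviour described above (underestimation of the midnight-zone flux).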
Uncertainty in the biological pump.
The ocean's biological pump regulates atmospheric carbon dioxide levels and climate by transferring organic carbon produced at the surface by phytoplankton to the ocean interior via marine snow, where the organic carbon is consumed and respired by marine microorganisms. This surface to deep transport is usually described by a power law relationship of sinking particle concentration with depth. Uncertainty in biological pump strength can be related to different variable values (parametric uncertainty) or the underlying equations (structural uncertainty) that describe organic matter export. In 2021, Lauderdale evaluated structural uncertainty using an ocean biogeochemistry model by systematically substituting six alternative remineralisation profiles fit to a reference power-law curve. Structural uncertainty makes a substantial contribution, about one-third in atmospheric pCO2 terms, to the total uncertainty of the biological pump, highlighting the importance of improving biological pump characterisation from observations and its mechanistic inclusion in climate models.
Carbon and nutrients are consumed by phytoplankton in the surface ocean during primary production, leading to a downward flux of organic matter. This "marine snow" is transformed, respired, and degraded by heterotrophic organisms in deeper waters, ultimately releasing those constituents back into dissolved inorganic form. Oceanic overturning and turbulent mixing return resource-rich deep waters back to the sunlit surface layer, sustaining global ocean productivity. The biological pump maintains this vertical gradient in nutrients through uptake, vertical transport, and remineralisation of organic matter, storing carbon in the deep ocean that is isolated from the atmosphere on centennial and millennial timescales, lowering atmospheric CO2 levels by several hundred microatmospheres. The biological pump resists simple mechanistic characterisation due to the complex suite of biological, chemical, and physical processes involved, so the fate of exported organic carbon is typically described using a depth-dependent profile to evaluate the degradation of sinking particulate matter.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_{z}=F_{100} \\left(\\frac{z}{100}\\right)^{-b}"
},
{
"math_id": 1,
"text": "f_p{(z)}=C_pz^{-b}"
}
]
| https://en.wikipedia.org/wiki?curid=68697715 |
6870523 | Distributed key generation | Multiparty cryptographic process
Distributed key generation (DKG) is a cryptographic process in which multiple parties contribute to the calculation of a shared public and private key set. Unlike most public key encryption models, distributed key generation does not rely on trusted third parties. Instead, the participation of a threshold of honest parties determines whether a key pair can be computed successfully. Distributed key generation prevents single parties from having access to a private key. The involvement of many parties requires distributed key generation to ensure secrecy in the presence of malicious contributions to the key calculation.
Distributed key generation is commonly used to decrypt shared ciphertexts or create group digital signatures.
History.
Distributed key generation protocol was first specified by Torben Pedersen in 1991. This first model depended on the security of the Joint-Feldman Protocol for verifiable secret sharing during the secret sharing process.
In 1999, Rosario Gennaro, Stanislaw Jarecki, Hugo Krawczyk, and Tal Rabin produced a series of security proofs demonstrating that Feldman verifiable secret sharing was vulnerable to malicious contributions to Pedersen's distributed key generator that would leak information about the shared private key. The same group also proposed an updated distributed key generation scheme preventing malicious contributions from impacting the value of the private key.
Methods.
The distributed key generation protocol specified by Gennaro, Jarecki, Krawczyk, and Rabin assumes that a group of players has already been established by an honest party prior to the key generation. It also assumes the communication between parties is synchronous.
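As a concrete illustration of the verifiable secret sharing step on which these protocols build, the following Python sketch shows a single dealer's round of a Feldman-style VSS. It is a toy example, not the Gennaro–Jarecki–Krawczyk–Rabin protocol itself: the group parameters are tiny illustrative values, and a real deployment would use a large prime-order group (or an elliptic-curve group). In the joint protocol every party plays the dealer with its own random secret, and a participant's share of the group private key is the sum of the shares it receives.

```python
# Toy sketch of one dealer's share distribution in a Feldman-style verifiable
# secret sharing scheme. Parameters are illustrative only.
import random

p, q, g = 23, 11, 4          # g generates the order-q subgroup of Z_p* (toy values)
t, n = 2, 5                  # degree-t polynomial, n parties; any t+1 shares recover the secret

secret = random.randrange(q)
coeffs = [secret] + [random.randrange(q) for _ in range(t)]   # f(x) = secret + c1*x + ... + ct*x^t

def f(x):
    return sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q

shares = {i: f(i) for i in range(1, n + 1)}                   # party i privately receives f(i)
commitments = [pow(g, c, p) for c in coeffs]                  # broadcast Feldman commitments g^c_k

def verify(i, share):
    # party i checks that g^share equals the product of C_k^(i^k) mod p
    lhs = pow(g, share, p)
    rhs = 1
    for k, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, k), p) % p
    return lhs == rhs

assert all(verify(i, s) for i, s in shares.items())
```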
Avoiding the synchrony assumption.
In 2009, Aniket Kate and Ian Goldberg presented a distributed key generation protocol suitable for use over the Internet. Unlike earlier constructions, this protocol does not require a broadcast channel or the synchronous communication assumption, and a ready-to-use library is available.
Robustness.
In many circumstances, a robust distributed key generator is necessary. Robust generator protocols can reconstruct public keys in order to remove malicious shares even if malicious parties still remain in the qualified group during the reconstruction phase. For example, robust multi-party digital signatures can tolerate a number of malicious users roughly proportional to the length of the modulus used during key generation.
Sparse-evaluated DKG.
Distributed key generators can implement a sparse evaluation matrix in order to improve efficiency during verification stages. Sparse evaluation can improve run time from formula_0 (where formula_1 is the number of parties and formula_2 is the threshold of malicious users) to formula_3. Instead of robust verification, sparse evaluation requires that a small set of the parties verify a small, randomly picked set of shares. This results in a small probability that the key generation will fail in the case that a large number of malicious shares are not chosen for verification.
Applications.
Distributed key generation and distributed key cryptography are rarely applied over the internet because of the reliance on synchronous communication.
Distributed key cryptography is useful in key escrow services where a company can meet a threshold to decrypt a ciphertext version of a private key. This way a company can require multiple employees to recover a private key without giving the escrow service a plaintext copy.
Distributed key generation is also useful in server-side password authentication. If password hashes are stored on a single server, a breach in the server would result in all the password hashes being available for attackers to analyze offline. Variations of distributed key generation can authenticate user passwords across multiple servers and eliminate single points of failure.
Distributed key generation is more commonly used for group digital signatures. This acts as a form of voting, where a threshold of group members would have to participate in order for the group to digitally sign a document.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(nt)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "O(log^3n)"
}
]
| https://en.wikipedia.org/wiki?curid=6870523 |
6871218 | Generalized complex structure | In the field of mathematics known as differential geometry, a generalized complex structure is a property of a differential manifold that includes as special cases a complex structure and a symplectic structure. Generalized complex structures were introduced by Nigel Hitchin in 2002 and further developed by his students Marco Gualtieri and Gil Cavalcanti.
These structures first arose in Hitchin's program of characterizing geometrical structures via functionals of differential forms, a connection which formed the basis of Robbert Dijkgraaf, Sergei Gukov, Andrew Neitzke and Cumrun Vafa's 2004 proposal that topological string theories are special cases of a topological M-theory. Today generalized complex structures also play a leading role in physical string theory, as supersymmetric flux compactifications, which relate 10-dimensional physics to 4-dimensional worlds like ours, require (possibly twisted) generalized complex structures.
Definition.
The generalized tangent bundle.
Consider an "N"-manifold "M". The tangent bundle of "M", which will be denoted T, is the vector bundle over "M" whose fibers consist of all tangent vectors to "M". A section of T is a vector field on "M". The cotangent bundle of "M", denoted T*, is the vector bundle over "M" whose sections are one-forms on "M".
In complex geometry one considers structures on the tangent bundles of manifolds. In symplectic geometry one is instead interested in exterior powers of the cotangent bundle. Generalized geometry unites these two fields by treating sections of the generalized tangent bundle, the direct sum formula_0 of the tangent and cotangent bundles; such sections are formal sums of a vector field and a one-form.
The fibers are endowed with a natural inner product with signature ("N", "N"). If "X" and "Y" are vector fields and "ξ" and "η" are one-forms then the inner product of "X+ξ" and "Y+η" is defined as
formula_1
A generalized almost complex structure is just an almost complex structure of the generalized tangent bundle which preserves the natural inner product:
formula_2
such that formula_3 and
formula_4
Like in the case of an ordinary almost complex structure, a generalized almost complex structure is uniquely determined by its formula_5-eigenbundle, i.e. a subbundle formula_6 of the complexified generalized tangent bundle formula_7
given by
formula_8
Such a subbundle "L" satisfies the following properties: (i) the intersection of "L" with its complex conjugate is the zero section; (ii) "L" is maximally isotropic with respect to the natural inner product.
Vice versa, any subbundle "L" satisfying (i), (ii) is the formula_5-eigenbundle of a unique generalized almost complex structure, so that the properties (i), (ii) can be considered as an alternative definition of generalized almost complex structure.
Courant bracket.
In ordinary complex geometry, an almost complex structure is integrable to a complex structure if and only if the Lie bracket of two sections of the holomorphic subbundle is another section of the holomorphic subbundle.
In generalized complex geometry one is not interested in vector fields, but rather in the formal sums of vector fields and one-forms. A kind of Lie bracket for such formal sums was introduced in 1990 and is called the Courant bracket which is defined by
formula_9
where formula_10 is the Lie derivative along the vector field "X", "d" is the exterior derivative and "i" is the interior product.
Definition.
A generalized complex structure is a generalized almost complex structure such that the space of smooth sections of "L" is closed under the Courant bracket.
Maximal isotropic subbundles.
Classification.
There is a one-to-one correspondence between maximal isotropic subbundles of formula_0 and pairs formula_11 where E is a subbundle of T and formula_12 is a 2-form. This correspondence extends straightforwardly to the complex case.
Given a pair formula_11 one can construct a maximally isotropic subbundle formula_13 of formula_0 as follows. The elements of the subbundle are the formal sums formula_14 where the vector field "X" is a section of E and the one-form "ξ" restricted to the dual space formula_15 is equal to the one-form formula_16
To see that formula_13 is isotropic, notice that if "Y" is a section of E and formula_17 restricted to formula_15 is formula_18 then formula_19 as the part of formula_17 orthogonal to formula_15 annihilates "Y". Therefore if formula_14 and formula_20 are sections of formula_0 then
formula_21
and so formula_13 is isotropic. Furthermore, formula_13 is maximal because there are formula_22 (complex) dimensions of choices for formula_23 and formula_12 is unrestricted on the complement of formula_24 which is of (complex) dimension formula_25 Thus the total (complex) dimension is "n". Gualtieri has proven that all maximal isotropic subbundles are of the form formula_13 for some formula_26 and formula_27
Type.
The type of a maximal isotropic subbundle formula_13 is the real dimension of the subbundle that annihilates E. Equivalently it is 2"N" minus the real dimension of the projection of formula_13 onto the tangent bundle T. In other words, the type of a maximal isotropic subbundle is the codimension of its projection onto the tangent bundle. In the complex case one uses the complex dimension and the type is sometimes referred to as the complex type. While the type of a subbundle can in principle be any integer between 0 and 2"N", generalized almost complex structures cannot have a type greater than "N" because the sum of the subbundle and its complex conjugate must be all of formula_28
The type of a maximal isotropic subbundle is invariant under diffeomorphisms and also under shifts of the B-field, which are isometries of formula_0 of the form
formula_29
where "B" is an arbitrary closed 2-form called the B-field in the string theory literature.
The type of a generalized almost complex structure is in general not constant, it can jump by any even integer. However it is upper semi-continuous, which means that each point has an open neighborhood in which the type does not increase. In practice this means that subsets of greater type than the ambient type occur on submanifolds with positive codimension.
Real index.
The real index "r" of a maximal isotropic subspace "L" is the complex dimension of the intersection of "L" with its complex conjugate. A maximal isotropic subspace of formula_30 is a generalized almost complex structure if and only if "r" = 0.
Canonical bundle.
As in the case of ordinary complex geometry, there is a correspondence between generalized almost complex structures and complex line bundles. The complex line bundle corresponding to a particular generalized almost complex structure is often referred to as the canonical bundle, as it generalizes the canonical bundle in the ordinary case. It is sometimes also called the pure spinor bundle, as its sections are pure spinors.
Generalized almost complex structures.
The canonical bundle is a one complex dimensional subbundle of the bundle formula_31 of complex differential forms on "M". Recall that the gamma matrices define an isomorphism between differential forms and spinors. In particular even and odd forms map to the two chiralities of Weyl spinors. Vectors have an action on differential forms given by the interior product. One-forms have an action on forms given by the wedge product. Thus sections of the bundle formula_30 act on differential forms. This action is a representation of the action of the Clifford algebra on spinors.
A spinor is said to be a pure spinor if it is annihilated by half of a set of generators of the Clifford algebra. Spinors are sections of our bundle formula_32 and generators of the Clifford algebra are the fibers of our other bundle formula_28 Therefore, a given pure spinor is annihilated by a half-dimensional subbundle E of formula_28 Such subbundles are always isotropic, so to define an almost complex structure one must only impose that the sum of E and its complex conjugate is all of formula_28 This is true whenever the wedge product of the pure spinor and its complex conjugate contains a top-dimensional component. Such pure spinors determine generalized almost complex structures.
Given a generalized almost complex structure, one can also determine a pure spinor up to multiplication by an arbitrary complex function. These choices of pure spinors are defined to be the sections of the canonical bundle.
Integrability and other structures.
If a pure spinor that determines a particular complex structure is closed, or more generally if its exterior derivative is equal to the action of a gamma matrix on itself, then the almost complex structure is integrable and so such pure spinors correspond to generalized complex structures.
If one further imposes that the canonical bundle is holomorphically trivial, meaning that it has a nowhere-vanishing global section which is a closed form, then it defines a generalized Calabi-Yau structure and "M" is said to be a generalized Calabi-Yau manifold.
Local classification.
Canonical bundle.
Locally all pure spinors can be written in the same form, depending on an integer "k", the B-field 2-form "B", a nondegenerate symplectic form ω and a "k"-form Ω. In a local neighborhood of any point a pure spinor Φ which generates the canonical bundle may always be put in the form
formula_33
where Ω is decomposable as the wedge product of one-forms.
Regular point.
Define the subbundle E of the complexified tangent bundle formula_34 to be the projection of the holomorphic subbundle L of formula_30 to formula_35 In the definition of a generalized almost complex structure we have imposed that the intersection of L and its conjugate contains only the origin, otherwise they would be unable to span the entirety of formula_28 However the intersection of their projections need not be trivial. In general this intersection is of the form
formula_36
for some subbundle Δ. A point which has an open neighborhood in which the dimension of the fibers of Δ is constant is said to be a regular point.
Darboux's theorem.
Every regular point in a generalized complex manifold has an open neighborhood which, after a diffeomorphism and shift of the B-field, has the same generalized complex structure as the Cartesian product of the complex vector space formula_37 and the standard symplectic space formula_38 with the standard symplectic form, which is the direct sum of the two by two off-diagonal matrices with entries 1 and −1.
Local holomorphicity.
Near non-regular points, the above classification theorem does not apply. However, about any point, a generalized complex manifold is, up to diffeomorphism and B-field, a product of a symplectic manifold with a generalized complex manifold which is of complex type at the point, much like Weinstein's theorem for the local structure of Poisson manifolds. The remaining question of the local structure is: what does a generalized complex structure look like near a point of complex type? In fact, it will be induced by a holomorphic Poisson structure.
Examples.
Complex manifolds.
The space of complex differential forms formula_31 has a complex conjugation operation given by complex conjugation in formula_39 This allows one to define holomorphic and antiholomorphic one-forms and ("m", "n")-forms, which are homogeneous polynomials in these one-forms with "m" holomorphic factors and "n" antiholomorphic factors. In particular, all ("n", 0)-forms are related locally by multiplication by a complex function and so they form a complex line bundle.
("n", 0)-forms are pure spinors, as they are annihilated by antiholomorphic tangent vectors and by holomorphic one-forms. Thus this line bundle can be used as a canonical bundle to define a generalized complex structure. Restricting the annihilator from formula_30 to the complexified tangent bundle one gets the subspace of antiholomorphic vector fields. Therefore, this generalized complex structure on formula_30 defines an ordinary complex structure on the tangent bundle.
As only half of a basis of vector fields are holomorphic, these complex structures are of type "N". In fact complex manifolds, and the manifolds obtained by multiplying the pure spinor bundle defining a complex manifold by a complex, formula_40-closed (2,0)-form, are the only type "N" generalized complex manifolds.
Symplectic manifolds.
The pure spinor bundle generated by
formula_41
for a nondegenerate two-form "ω" defines a symplectic structure on the tangent space. Thus symplectic manifolds are also generalized complex manifolds.
The above pure spinor is globally defined, and so the canonical bundle is trivial. This means that symplectic manifolds are not only generalized complex manifolds but in fact are generalized Calabi-Yau manifolds.
The pure spinor formula_42 is related to a pure spinor which is just a number by an imaginary shift of the B-field, which is a shift of the Kähler form. Therefore, these generalized complex structures are of the same type as those corresponding to a scalar pure spinor. A scalar is annihilated by the entire tangent space, and so these structures are of type "0".
Up to a shift of the B-field, which corresponds to multiplying the pure spinor by the exponential of a closed, real 2-form, symplectic manifolds are the only type 0 generalized complex manifolds. Manifolds which are symplectic up to a shift of the B-field are sometimes called B-symplectic.
Relation to G-structures.
Some of the almost structures in generalized complex geometry may be rephrased in the language of G-structures. The word "almost" is removed if the structure is integrable.
The bundle formula_30 with the above inner product is an O(2"n", 2"n") structure. A generalized almost complex structure is a reduction of this structure to a U("n", "n") structure. Therefore, the space of generalized complex structures is the coset
formula_43
A generalized almost Kähler structure is a pair of commuting generalized complex structures such that minus the product of the corresponding tensors is a positive definite metric on formula_28 Generalized Kähler structures are reductions of the structure group to formula_44 Generalized Kähler manifolds, and their twisted counterparts, are equivalent to the bihermitian manifolds discovered by Sylvester James Gates, Chris Hull and Martin Roček in the context of 2-dimensional supersymmetric quantum field theories in 1984.
Finally, a generalized almost Calabi-Yau metric structure is a further reduction of the structure group to formula_45
Calabi versus Calabi–Yau metric.
Notice that a generalized Calabi metric structure, which was introduced by Marco Gualtieri, is a stronger condition than a generalized Calabi–Yau structure, which was introduced by Nigel Hitchin. In particular a generalized Calabi–Yau metric structure implies the existence of two commuting generalized almost complex structures. | [
{
"math_id": 0,
"text": "\\mathbf{T} \\oplus \\mathbf{T}^*"
},
{
"math_id": 1,
"text": "\\langle X+\\xi,Y+\\eta\\rangle=\\frac{1}{2}(\\xi(Y)+\\eta(X))."
},
{
"math_id": 2,
"text": "{\\mathcal J}: \\mathbf{T}\\oplus\\mathbf{T}^*\\to \\mathbf{T}\\oplus\\mathbf{T}^*"
},
{
"math_id": 3,
"text": "{\\mathcal J}^2=-{\\rm Id},"
},
{
"math_id": 4,
"text": "\\langle {\\mathcal J}(X+\\xi),{\\mathcal J}(Y+\\eta)\\rangle=\\langle X+\\xi, Y+\\eta \\rangle."
},
{
"math_id": 5,
"text": "\\sqrt{-1}"
},
{
"math_id": 6,
"text": "L"
},
{
"math_id": 7,
"text": "(\\mathbf{T}\\oplus\\mathbf{T}^*)\\otimes\\Complex "
},
{
"math_id": 8,
"text": "L=\\{X+\\xi\\in (\\mathbf{T}\\oplus\\mathbf{T}^*)\\otimes\\Complex \\ :\\ {\\mathcal J}(X+\\xi)=\\sqrt{-1}(X+\\xi)\\}"
},
{
"math_id": 9,
"text": "[X+\\xi,Y+\\eta]=[X,Y] +\\mathcal{L}_X\\eta-\\mathcal{L}_Y\\xi -\\frac{1}{2}d(i(X)\\eta-i(Y)\\xi)"
},
{
"math_id": 10,
"text": "\\mathcal{L}_X"
},
{
"math_id": 11,
"text": "(\\mathbf{E}, \\varepsilon)"
},
{
"math_id": 12,
"text": "\\varepsilon"
},
{
"math_id": 13,
"text": "L(\\mathbf{E}, \\varepsilon)"
},
{
"math_id": 14,
"text": "X+\\xi"
},
{
"math_id": 15,
"text": "\\mathbf{E}^*"
},
{
"math_id": 16,
"text": "\\varepsilon(X)."
},
{
"math_id": 17,
"text": "\\xi"
},
{
"math_id": 18,
"text": "\\varepsilon(X)"
},
{
"math_id": 19,
"text": "\\xi(Y) =\\varepsilon(X,Y),"
},
{
"math_id": 20,
"text": "Y+\\eta"
},
{
"math_id": 21,
"text": "\\langle X+\\xi,Y+\\eta\\rangle=\\frac{1}{2}(\\xi(Y)+\\eta(X))=\\frac{1}{2}(\\varepsilon(Y,X)+\\varepsilon(X,Y))=0"
},
{
"math_id": 22,
"text": "\\dim(\\mathbf{E})"
},
{
"math_id": 23,
"text": "\\mathbf{E},"
},
{
"math_id": 24,
"text": "\\mathbf{E}^*,"
},
{
"math_id": 25,
"text": "n-\\dim(\\mathbf{E})."
},
{
"math_id": 26,
"text": "\\mathbf{E}"
},
{
"math_id": 27,
"text": "\\varepsilon."
},
{
"math_id": 28,
"text": "(\\mathbf{T} \\oplus \\mathbf{T}^*) \\otimes \\Complex."
},
{
"math_id": 29,
"text": "X+\\xi\\longrightarrow X+\\xi+i_XB"
},
{
"math_id": 30,
"text": "(\\mathbf{T} \\oplus \\mathbf{T}^*) \\otimes \\Complex"
},
{
"math_id": 31,
"text": "\\mathbf{\\Lambda}^* \\mathbf{T} \\otimes \\Complex"
},
{
"math_id": 32,
"text": "\\mathbf{\\Lambda}^* \\mathbf{T},"
},
{
"math_id": 33,
"text": "\\Phi=e^{B+i\\omega}\\Omega"
},
{
"math_id": 34,
"text": "\\mathbf{T} \\otimes \\Complex"
},
{
"math_id": 35,
"text": "\\mathbf{T} \\otimes \\Complex."
},
{
"math_id": 36,
"text": "E\\cap\\overline{E}=\\Delta\\otimes\\Complex"
},
{
"math_id": 37,
"text": "\\Complex^k"
},
{
"math_id": 38,
"text": "\\R^{2n-2k}"
},
{
"math_id": 39,
"text": "\\Complex."
},
{
"math_id": 40,
"text": "\\partial"
},
{
"math_id": 41,
"text": "\\phi=e^{i\\omega}"
},
{
"math_id": 42,
"text": "\\phi"
},
{
"math_id": 43,
"text": "\\frac{O(2n,2n)}{U(n,n)}."
},
{
"math_id": 44,
"text": "U(n) \\times U(n)."
},
{
"math_id": 45,
"text": "SU(n) \\times SU(n)."
}
]
| https://en.wikipedia.org/wiki?curid=6871218 |
68714351 | Coherence (fairness) | Criterion for evaluating rules for fair division
Coherence, also called uniformity or consistency, is a criterion for evaluating rules for fair division. Coherence requires that the outcome of a fairness rule is fair not only for the overall problem, but also for each sub-problem. Every part of a fair division should be fair.
The coherence requirement was first studied in the context of apportionment. In this context, failure to satisfy coherence is called the new states paradox: when a new state enters the union, and the house size is enlarged to accommodate the number of seats allocated to this new state, some other unrelated states are affected. Coherence is also relevant to other fair division problems, such as bankruptcy problems.
Definition.
There is a "resource" to allocate, denoted by formula_0. For example, it can be an integer representing the number of seats in a "h"ouse of representatives. The resource should be allocated between some formula_1 "agents". For example, these can be federal states or political parties. The agents have different "entitlements", denoted by a vector formula_2. For example, "ti" can be the fraction of votes won by party "i". An "allocation" is a vector formula_3 with formula_4. An "allocation rule" is a rule that, for any formula_0 and entitlement vector formula_2, returns an allocation vector formula_3.
An allocation rule is called coherent (or uniform) if, for every subset "S" of agents, if the rule is activated on the subset of the resource formula_5, and on the entitlement vector formula_6, then the result is the allocation vector formula_7. That is: when the rule is activated on a subset of the agents, with the subset of resources they received, the result for them is the same.
Handling ties.
In general, an allocation rule may return more than one allocation (in case of a tie). In this case, the definition should be updated. Denote the allocation rule by formula_8, and denote by formula_9 the set of allocation vectors returned by formula_8 on the resource formula_0 and entitlement vector formula_10. The rule formula_8 is called coherent if the following holds for every allocation vector formula_11 and any subset "S" of agents: first, formula_12, that is, the restriction of the chosen allocation to "S" is itself returned by the rule on the sub-problem; and second, whenever formula_13 and formula_14, also formula_15, that is, recombining any tied sub-allocations yields an allocation returned by the rule on the whole problem.
Coherence in apportionment.
In apportionment problems, the resource to allocate is "discrete", for example, the seats in a parliament. Therefore, each agent must receive an integer allocation.
Non-coherent methods: the new state paradox.
One of the most intuitive rules for apportionment of seats in a parliament is the largest remainder method (LRM). This method dictates that the entitlement vector should be normalized such that the sum of entitlements equals formula_0 (the total number of seats to allocate). Then each agent should get his normalized entitlement (often called "quota") rounded down. If there are remaining seats, they should be allocated to the agents with the largest remainder – the largest fraction of the entitlement. Surprisingly, this rule is "not" coherent. As a simple example, suppose formula_16 and the normalized entitlements of Alice, Bob and Chana are 0.4, 1.35, 3.25 respectively. Then the unique allocation returned by LRM is 1, 1, 3 (the initial allocation is 0, 1, 3, and the extra seat goes to Alice, since her remainder 0.4 is largest). Now, suppose that we activate the same rule on Alice and Bob alone, with their combined allocation of 2. The normalized entitlements are now 0.4/1.75 × 2 ≈ 0.45 and 1.35/1.75 × 2 ≈ 1.54. Therefore, the unique allocation returned by LRM is 0, 2 rather than 1, 1. This means that in the grand solution 1, 1, 3, the internal division between Alice and Bob does "not" follow the principle of largest remainders – it is not coherent.
Another way to look at this non-coherence is as follows. Suppose that the house size is 2, and there are two states A, B with quotas 0.4, 1.35. Then the unique allocation given by LRM is 0, 2. Now, a new state C joins the union, with quota 3.25. It is allocated 3 seats, and the house size is increased to 5 to accommodate these new seats. This change should not affect the existing states A and B. In fact, with the LRM, the existing states "are" affected: state A gains a seat, while state B loses a seat. This is called the new state paradox.
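A short Python sketch of the largest remainder method reproduces the example above and makes the coherence failure concrete. The function is a straightforward textbook implementation, not code taken from any of the cited works.

```python
# Largest remainder (Hamilton) apportionment; quotas are already normalised
# so that they sum to the number of seats.
from math import floor

def largest_remainder(quotas, seats):
    lower = [floor(q) for q in quotas]
    remainders = [q - l for q, l in zip(quotas, lower)]
    leftover = seats - sum(lower)
    # give the leftover seats to the agents with the largest remainders
    for i in sorted(range(len(quotas)), key=lambda i: remainders[i], reverse=True)[:leftover]:
        lower[i] += 1
    return lower

quotas = [0.4, 1.35, 3.25]                        # Alice, Bob, Chana
print(largest_remainder(quotas, 5))               # [1, 1, 3]

# restrict the rule to Alice and Bob with their combined 2 seats
sub_quotas = [q / (0.4 + 1.35) * 2 for q in (0.4, 1.35)]
print(largest_remainder(sub_quotas, 2))           # [0, 2] -- not [1, 1], so the rule is not coherent
```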
The new state paradox was actually observed in 1907, when Oklahoma became a state. It was given a fair share of 5 seats, and the total number of seats was increased by that number, from 386 to 391 members. However, recomputing the apportionment changed the seat counts of other states: New York lost a seat, while Maine gained one.
Coherent methods.
Every divisor method is coherent. This follows directly from their description as picking sequences: at each iteration, the next agent to pick an item is the one with the highest ratio (entitlement / divisor). Therefore, the relative priority ordering between agents is the same even if we consider a subset of the agents.
Properties of coherent methods.
When coherency is combined with other natural requirements, it characterizes a structured class of apportionment methods. Such characterizations were proved by various authors. All results assume that the rules are homogeneous (i.e. they depend only on the percentage of votes for each party, not on the total number of votes).
Coherence in bankruptcy problems.
In bankruptcy problems, the resource to allocate is "continuous", for example, the amount of money left by a debtor. Each agent can get any fraction of the resource. However, the sum of entitlements is usually larger than the total remaining resource.
The most intuitive rule for solving such problems is the "proportional rule", in which each agent gets a part of the resource proportional to his entitlement. This rule is definitely coherent. However, it is not the only coherent rule: the Talmudic rule of the contested garment can be extended to a coherent division rule.
Coherence in organ allocation.
In most countries, the number of patients waiting for an organ transplantation is much larger than the number of available organs. Therefore, most countries choose who to allocate an organ to by some priority-ordering. Surprisingly, some priority orderings used in practice are not coherent. For example, one rule used by UNOS in the past was as follows:
Suppose the personal scores of some four patients A, B, C, D are 16, 21, 20, 23. Suppose their waiting times are A > B > C > D. Accordingly, their bonuses are 10, 7.5, 5, 2.5. So their sums are 26, 28.5, 25, 25.5, and the priority order is B > A > D > C. Now, after B receives an organ, the personal scores of A, C, D remain the same, but the bonuses change to 10, 6.67, 3.33, so the sums are 26, 26.67, 26.33, and the priority order is C > D > A. This inverts the order between the three agents.
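The arithmetic in this example can be reproduced with a short Python sketch. The bonus rule assumed below, 10 points times the fraction of waiting patients who have waited at most as long as the patient in question, is inferred from the numbers in the example rather than quoted from UNOS documentation.

```python
# Reproduces the priority computations in the example above. The bonus formula
# is an inference from the example's numbers (see the note preceding this code).
def priorities(scores, waiting_order):
    # waiting_order lists the patients from longest to shortest waiting time
    n = len(waiting_order)
    bonus = {patient: 10 * (n - k) / n for k, patient in enumerate(waiting_order)}
    return sorted(scores, key=lambda patient: scores[patient] + bonus[patient], reverse=True)

scores = {"A": 16, "B": 21, "C": 20, "D": 23}
print(priorities(scores, ["A", "B", "C", "D"]))   # ['B', 'A', 'D', 'C']

del scores["B"]                                    # B receives the organ and leaves the queue
print(priorities(scores, ["A", "C", "D"]))         # ['C', 'D', 'A'] -- the order among A, C, D flips
```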
In order to have a coherent priority ordering, the priority should be determined only by personal traits. For example, the bonus can be computed by the number of months in line, rather than by the fraction of patients.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "t_1,\\ldots,t_n"
},
{
"math_id": 3,
"text": "a_1,\\ldots,a_n"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n a_i = h"
},
{
"math_id": 5,
"text": "h_S := \\sum_{i\\in S} a_i"
},
{
"math_id": 6,
"text": "(t_i)_{i\\in S}"
},
{
"math_id": 7,
"text": "(a_i)_{i\\in S}"
},
{
"math_id": 8,
"text": "M"
},
{
"math_id": 9,
"text": "M\\big(h; (t_i)_{i=1}^n\\big)"
},
{
"math_id": 10,
"text": "t_1, \\ldots, t_n"
},
{
"math_id": 11,
"text": "(a_i)_{i=1}^n \\in M\\big(h; (t_i)_{i=1}^n\\big)"
},
{
"math_id": 12,
"text": "(a_i)_{i\\in S} \\in M\\Big(\\sum_{i\\in S} a_i; (t_i)_{i\\in S}\\Big)"
},
{
"math_id": 13,
"text": "(b_i)_{i\\in S} \\in M\\Big(\\sum_{i\\in S} a_i; (t_i)_{i\\in S}\\Big)"
},
{
"math_id": 14,
"text": "(c_i)_{i\\notin S} \\in M\\Big(\\sum_{i\\notin S} a_i; (t_i)_{i\\notin S}\\Big)"
},
{
"math_id": 15,
"text": "[(b_i)_{i\\in S}, (c_i)_{i\\notin S}] \\in M\\big(h; (t_i)_{i=1}^n\\big)"
},
{
"math_id": 16,
"text": "h = 5"
}
]
| https://en.wikipedia.org/wiki?curid=68714351 |
68716430 | Optimal apportionment | Mathematical optimization of resource allocation
Optimal apportionment is an approach to apportionment that is based on mathematical optimization.
In a problem of apportionment, there is a "resource" to allocate, denoted by formula_0. For example, it can be an integer representing the number of seats in a "h"ouse of representatives. The resource should be allocated between some formula_1 "agents". For example, these can be federal states or political parties. The agents have different "entitlements", denoted by a vector of fractions formula_2 with a sum of 1. For example, "ti" can be the fraction of votes won by party "i". The goal is to find an "allocation" - a vector formula_3 with formula_4.
The ideal share for agent "i" is his/her "quota", defined as formula_5. If it is possible to give each agent his/her quota, then the allocation is maximally fair. However, exact fairness is usually unattainable, since the quotas are not integers and the allocations must be integers. There are various approaches to cope with this difficulty (see mathematics of apportionment). The optimization-based approach aims to attain, for eacn instance, an allocation that is "as fair as possible" for this instance. An allocation is "fair" if formula_6 for all agents "i", that is, each agent's allocation is exactly proportional to his/her entitlement. in this case, we say that the "unfairness" of the allocation is 0. If this equality must be violated, one can define a measure of "total unfairness", and try to minimize it.
Minimizing the sum of unfairness levels.
The most natural measure is the "sum" of unfairness levels for individual agents, as in the utilitarian rule: for example, the sum of absolute deviations formula_7 or the sum of squared deviations formula_8; variants such as formula_9 and formula_10 divide the squared deviations by the quotas or by the allocations, respectively.
Minimizing the largest unfairnesses.
One can minimize the "largest" unfairness, as in the egalitarian rule: for example, the largest relative deviation formula_11 or the largest pairwise difference between the agents' quota-to-seats ratios formula_12; related measures such as formula_13, formula_14, formula_15 and formula_16 focus on the most over- or under-represented agent.
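As an illustration, the following brute-force Python sketch searches all integer allocations and returns the one minimising one such measure, the sum of absolute deviations from the quotas (formula_7 above). It is exponential in the number of agents and is meant only to make the definitions concrete, not to be an efficient apportionment algorithm.

```python
# Brute-force search for the apportionment minimising the sum of absolute
# deviations |a_i - q_i| from the quotas. Entitlements and house size are
# arbitrary example values.
from itertools import product

def optimal_apportionment(entitlements, h):
    quotas = [t * h for t in entitlements]
    best, best_cost = None, float("inf")
    for alloc in product(range(h + 1), repeat=len(entitlements)):
        if sum(alloc) != h:
            continue
        cost = sum(abs(a - q) for a, q in zip(alloc, quotas))
        if cost < best_cost:
            best, best_cost = alloc, cost
    return best

print(optimal_apportionment([0.08, 0.27, 0.65], 5))   # (1, 1, 3)
```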
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "t_1,\\ldots,t_n"
},
{
"math_id": 3,
"text": "a_1,\\ldots,a_n"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n a_i = h"
},
{
"math_id": 5,
"text": "q_i := t_i\\cdot h"
},
{
"math_id": 6,
"text": "a_i = q_i"
},
{
"math_id": 7,
"text": "\\sum_{i=1}^n |a_i - q_i|"
},
{
"math_id": 8,
"text": "\\sum_{i=1}^n (a_i - q_i)^2"
},
{
"math_id": 9,
"text": "\\sum_{i=1}^n q_i (a_i/q_i - 1)^2"
},
{
"math_id": 10,
"text": "\\sum_{i=1}^n a_i (q_i/a_i - 1)^2"
},
{
"math_id": 11,
"text": "\\max_{i=1}^n |q_i / a_i - 1|"
},
{
"math_id": 12,
"text": "\\max_{i,j=1}^n |q_i / a_i - q_j/a_j|"
},
{
"math_id": 13,
"text": "\\max_{i=1}^n q_i/a_i"
},
{
"math_id": 14,
"text": "\\max_{i=1}^n a_i/q_i"
},
{
"math_id": 15,
"text": "\\min_{i=1}^n (a_i - q_i)"
},
{
"math_id": 16,
"text": "\\max_{i=1}^n (q_i - a_i)"
}
]
| https://en.wikipedia.org/wiki?curid=68716430 |
6871860 | MUSIC (algorithm) | Algorithm used for frequency estimation and radio direction finding
MUSIC (MUltiple SIgnal Classification) is an algorithm used for frequency estimation and radio direction finding.
History.
In many practical signal processing problems, the objective is to estimate from measurements a set of constant parameters upon which the received signals depend. There have been several approaches to such problems including the so-called maximum likelihood (ML) method of Capon (1969) and Burg's maximum entropy (ME) method. Although often successful and widely used, these methods have certain fundamental limitations (especially bias and sensitivity in parameter estimates), largely because they use an incorrect model (e.g., AR rather than special ARMA) of the measurements.
Pisarenko (1973) was one of the first to exploit the structure of the data model, doing so in the context of estimation of parameters of complex sinusoids in additive noise using a covariance approach. Schmidt (1977), while working at Northrop Grumman, and independently Bienvenu and Kopp (1979), were the first to correctly exploit the measurement model in the case of sensor arrays of arbitrary form. Schmidt, in particular, accomplished this by first deriving a complete geometric solution in the absence of noise, then cleverly extending the geometric concepts to obtain a reasonable approximate solution in the presence of noise. The resulting algorithm was called MUSIC (MUltiple SIgnal Classification) and has been widely studied.
In a detailed evaluation based on thousands of simulations, the Massachusetts Institute of Technology's Lincoln Laboratory concluded in 1998 that, among currently accepted high-resolution algorithms, MUSIC was the most promising and a leading candidate for further study and actual hardware implementation. However, although the performance advantages of MUSIC are substantial, they are achieved at a cost in computation (searching over parameter space) and storage (of array calibration data).
Theory.
MUSIC method assumes that a signal vector, formula_0, consists of formula_1 complex exponentials, whose frequencies formula_2 are unknown, in the presence of Gaussian white noise, formula_3, as given by the linear model
formula_4
Here formula_5 is an formula_6 Vandermonde matrix of steering vectors formula_7 and formula_8 is the amplitude vector. A crucial assumption is that the number of sources, formula_1, is less than the number of elements in the measurement vector, formula_9, i.e. formula_10.
The formula_11 autocorrelation matrix of formula_0 is then given by
formula_12
where formula_13 is the noise variance, formula_14 is formula_11 identity matrix, and formula_15 is the formula_16 autocorrelation matrix of formula_17.
The autocorrelation matrix formula_18 is traditionally estimated using sample correlation matrix
formula_19
where formula_20 is the number of vector observations and formula_21. Given the estimate of formula_18, MUSIC estimates the frequency content of the signal or autocorrelation matrix using an eigenspace method.
Since formula_18 is a Hermitian matrix, all of its formula_9 eigenvectors formula_22 are orthogonal to each other. If the eigenvalues of formula_18 are sorted in decreasing order, the eigenvectors formula_23 corresponding to the formula_1 largest eigenvalues (i.e. directions of largest variability) span the signal subspace formula_24. The remaining formula_25 eigenvectors correspond to eigenvalue equal to formula_13 and span the noise subspace formula_26, which is orthogonal to the signal subspace, formula_27.
Note that for formula_28, MUSIC is identical to Pisarenko harmonic decomposition. The general idea behind MUSIC method is to use all the eigenvectors that span the noise subspace to improve the performance of the Pisarenko estimator.
Since any signal vector formula_29 that resides in the signal subspace formula_30 must be orthogonal to the noise subspace, formula_31, it must be that formula_32 for all the eigenvectors formula_33 that span the noise subspace. In order to measure the degree of orthogonality of formula_29 with respect to all the formula_34, the MUSIC algorithm defines a squared norm
formula_35
where the matrix formula_36 is the matrix of eigenvectors that span the noise subspace formula_26. If formula_30, then formula_37 as implied by the orthogonality condition. Taking a reciprocal of the squared norm expression creates sharp peaks at the signal frequencies. The frequency estimation function for MUSIC (or the pseudo-spectrum) is
formula_38
where formula_39 are the noise eigenvectors and
formula_40
is the candidate steering vector. The locations of the formula_1 largest peaks of the estimation function give the frequency estimates for the formula_1 signal components
formula_41
MUSIC is a generalization of Pisarenko's method, and it reduces to Pisarenko's method when formula_42. In Pisarenko's method, only a single eigenvector is used to form the denominator of the frequency estimation function; and the eigenvector is interpreted as a set of autoregressive coefficients, whose zeros can be found analytically or with polynomial root finding algorithms. In contrast, MUSIC assumes that several such functions have been added together, so zeros may not be present. Instead, the denominator has local minima, which correspond to peaks of the estimation function and can be located by a computational search.
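The following NumPy sketch is an illustration of the procedure described above, not code from the cited literature: it simulates two complex sinusoids in white noise, forms the sample autocorrelation matrix, projects candidate steering vectors onto the estimated noise subspace, and reads the frequency estimates off the peaks of the pseudo-spectrum. All sizes, frequencies and noise levels are arbitrary choices.

```python
# MUSIC pseudo-spectrum for p complex sinusoids in white noise (illustrative).
import numpy as np

rng = np.random.default_rng(0)
M, p, N = 8, 2, 200                          # snapshot length, number of sinusoids, snapshots
true_freqs = np.array([0.7, 1.9])            # radians per sample

n0 = np.arange(M)
snapshots = []
for _ in range(N):
    amps = rng.standard_normal(p) + 1j * rng.standard_normal(p)
    s = sum(a * np.exp(1j * w * n0) for a, w in zip(amps, true_freqs))
    snapshots.append(s + 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)))
X = np.stack(snapshots, axis=1)              # M x N data matrix

R = X @ X.conj().T / N                       # sample autocorrelation matrix
eigvals, eigvecs = np.linalg.eigh(R)         # eigenvalues in ascending order
Un = eigvecs[:, : M - p]                     # noise subspace: the M - p smallest eigenvalues

omegas = np.linspace(0.0, np.pi, 1024)
E = np.exp(1j * np.outer(n0, omegas))        # candidate steering vectors, one per column
P_music = 1.0 / np.sum(np.abs(Un.conj().T @ E) ** 2, axis=0)

# crude peak picking: local maxima of the pseudo-spectrum, keep the p strongest
peaks = [k for k in range(1, len(omegas) - 1)
         if P_music[k] > P_music[k - 1] and P_music[k] > P_music[k + 1]]
estimates = sorted(omegas[k] for k in sorted(peaks, key=lambda k: P_music[k])[-p:])
print(estimates)                             # close to the true frequencies 0.7 and 1.9
```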
Dimension of signal space.
The fundamental observation MUSIC and other subspace decomposition methods are based on is about the rank of the autocorrelation matrix formula_18 which is related to number of signal sources formula_1 as follows.
If the sources are complex, then formula_43 and the dimension of the signal subspace formula_24 is formula_1.
If the sources are real, then formula_44 and the dimension of the signal subspace is formula_45, i.e. each real sinusoid is generated by two base vectors.
This fundamental result, although often skipped in spectral analysis books, is a reason why the input signal can be distributed into formula_1 signal subspace eigenvectors spanning formula_24 (formula_45 for real valued signals) and noise subspace eigenvectors spanning formula_46. It is based on signal embedding theory and can also be explained by the topological theory of manifolds.
Comparison to other methods.
MUSIC outperforms simple methods such as picking peaks of DFT spectra in the presence of noise, when the number of components is known in advance, because it exploits knowledge of this number to ignore the noise in its final report.
Unlike DFT, it is able to estimate frequencies with accuracy higher than one sample, because its estimation function can be evaluated for any frequency, not just those of DFT bins. This is a form of superresolution.
Its chief disadvantage is that it requires the number of components to be known in advance, so the original method cannot be used in more general cases. Methods exist for estimating the number of source components purely from statistical properties of the autocorrelation matrix. In addition, MUSIC assumes coexistent sources to be uncorrelated, which limits its practical applications.
Recent iterative semi-parametric methods offer robust superresolution despite highly correlated sources, e.g., SAMV.
Other applications.
A modified version of MUSIC, denoted as Time-Reversal MUSIC (TR-MUSIC), has recently been applied to computational time-reversal imaging. The MUSIC algorithm has also been implemented for fast detection of DTMF frequencies (dual-tone multi-frequency signaling) in the form of the C library libmusic (including a MATLAB implementation).
{
"math_id": 0,
"text": "\\mathbf{x}"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "\\mathbf{n}"
},
{
"math_id": 4,
"text": "\\mathbf{x} = \\mathbf{A} \\mathbf{s} + \\mathbf{n}."
},
{
"math_id": 5,
"text": "\\mathbf{A} = [\\mathbf{a}(\\omega_1), \\cdots, \\mathbf{a}(\\omega_p)]"
},
{
"math_id": 6,
"text": "M \\times p"
},
{
"math_id": 7,
"text": "\\mathbf{a}(\\omega) = [1, e^{j\\omega}, e^{j2\\omega}, \\ldots, e^{j(M-1)\\omega}]^T"
},
{
"math_id": 8,
"text": "\\mathbf{s} = [s_1, \\ldots, s_p]^T"
},
{
"math_id": 9,
"text": "M"
},
{
"math_id": 10,
"text": "p < M"
},
{
"math_id": 11,
"text": "M \\times M"
},
{
"math_id": 12,
"text": "\\mathbf{R}_x = \\mathbf{A} \\mathbf{R}_s \\mathbf{A}^H + \\sigma^2 \\mathbf{I},"
},
{
"math_id": 13,
"text": "\\sigma^2"
},
{
"math_id": 14,
"text": "\\mathbf{I}"
},
{
"math_id": 15,
"text": "\\mathbf{R}_s"
},
{
"math_id": 16,
"text": "p \\times p"
},
{
"math_id": 17,
"text": "\\mathbf{s}"
},
{
"math_id": 18,
"text": "\\mathbf{R}_x"
},
{
"math_id": 19,
"text": "\\widehat{\\mathbf{R}}_x = \\frac{1}{N} \\mathbf{X} \\mathbf{X}^H"
},
{
"math_id": 20,
"text": "N > M"
},
{
"math_id": 21,
"text": "\\mathbf{X} = [\\mathbf{x}_1, \\mathbf{x}_2, \\ldots, \\mathbf{x}_N]"
},
{
"math_id": 22,
"text": "\\{\\mathbf{v}_1, \\mathbf{v}_2, \\ldots, \\mathbf{v}_M\\}"
},
{
"math_id": 23,
"text": "\\{\\mathbf{v}_1, \\ldots, \\mathbf{v}_p\\}"
},
{
"math_id": 24,
"text": "\\mathcal{U}_S"
},
{
"math_id": 25,
"text": "M-p"
},
{
"math_id": 26,
"text": "\\mathcal{U}_N"
},
{
"math_id": 27,
"text": "\\mathcal{U}_S \\perp \\mathcal{U}_N "
},
{
"math_id": 28,
"text": "M = p + 1"
},
{
"math_id": 29,
"text": "\\mathbf{e}"
},
{
"math_id": 30,
"text": "\\mathbf{e} \\in \\mathcal{U}_S"
},
{
"math_id": 31,
"text": "\\mathbf{e} \\perp \\mathcal{U}_N"
},
{
"math_id": 32,
"text": "\\mathbf{e} \\perp \\mathbf{v}_i"
},
{
"math_id": 33,
"text": "\\{\\mathbf{v}_i \\}_{i=p+1}^M"
},
{
"math_id": 34,
"text": "\\mathbf{v}_i \\in \\mathcal{U}_N"
},
{
"math_id": 35,
"text": "d^2 = \\| \\mathbf{U}_N^H \\mathbf{e} \\|^2 = \\mathbf{e}^H \\mathbf{U}_N \\mathbf{U}_N^H \\mathbf{e} = \\sum_{i=p+1}^{M} |\\mathbf{e}^{H} \\mathbf{v}_i|^2"
},
{
"math_id": 36,
"text": "\\mathbf{U}_N = [\\mathbf{v}_{p+1}, \\ldots, \\mathbf{v}_{M}]"
},
{
"math_id": 37,
"text": "d^2 = 0"
},
{
"math_id": 38,
"text": "\\hat P_{MU}(e^{j \\omega}) = \\frac{1}{\\mathbf{e}^H \\mathbf{U}_N \\mathbf{U}_N^H \\mathbf{e}} = \\frac{1}{\\sum_{i=p+1}^{M} |\\mathbf{e}^{H} \\mathbf{v}_i|^2},"
},
{
"math_id": 39,
"text": "\\mathbf{v}_i"
},
{
"math_id": 40,
"text": "\\mathbf{e} = \\begin{bmatrix}1 & e^{j \\omega} & e^{j 2 \\omega} & \\cdots & e^{j (M-1) \\omega}\\end{bmatrix}^T"
},
{
"math_id": 41,
"text": " \\hat{\\omega} = \\arg\\max_\\omega \\; \\hat P_{MU}(e^{j \\omega}). "
},
{
"math_id": 42,
"text": "M=p+1"
},
{
"math_id": 43,
"text": "M > p"
},
{
"math_id": 44,
"text": "M > 2p"
},
{
"math_id": 45,
"text": "2p"
},
{
"math_id": 46,
"text": "\\mathcal{U}_N "
}
]
| https://en.wikipedia.org/wiki?curid=6871860 |
6871877 | Pisarenko harmonic decomposition | Pisarenko harmonic decomposition, also referred to as Pisarenko's method, is a method of frequency estimation. This method assumes that a signal, formula_0, consists of formula_1 complex exponentials in the presence of white noise. Because the number of complex exponentials must be known "a priori", it is somewhat limited in its usefulness.
Pisarenko's method also assumes that formula_2 values of the formula_3 autocorrelation matrix are either known or estimated. Hence, given the formula_4 autocorrelation matrix, the dimension of the noise subspace is equal to one and is spanned by the eigenvector corresponding to the minimum eigenvalue. This eigenvector is orthogonal to each of the signal vectors.
The frequency estimates may be determined by setting the frequencies equal to the angles of the roots of the polynomial
formula_5
or the location of the peaks in the frequency estimation function (or the pseudo-spectrum)
formula_6,
where formula_7 is the noise eigenvector and
formula_8.
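The following NumPy sketch is an illustration of the procedure, with arbitrary frequencies, noise level and number of snapshots: it estimates the frequencies of two complex exponentials from the eigenvector of the smallest eigenvalue of a 3 × 3 sample autocorrelation matrix, by reading off the angles of the roots of the associated polynomial.

```python
# Pisarenko harmonic decomposition for p = 2 complex exponentials (illustrative).
import numpy as np

rng = np.random.default_rng(1)
p, N = 2, 400                                # number of exponentials, number of snapshots
M = p + 1                                    # Pisarenko uses a (p + 1) x (p + 1) matrix
true_freqs = np.array([0.6, 1.8])

n0 = np.arange(M)
snapshots = []
for _ in range(N):
    amps = rng.standard_normal(p) + 1j * rng.standard_normal(p)
    s = sum(a * np.exp(1j * w * n0) for a, w in zip(amps, true_freqs))
    snapshots.append(s + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)))
X = np.stack(snapshots, axis=1)

R = X @ X.conj().T / N                       # sample autocorrelation matrix
eigvals, eigvecs = np.linalg.eigh(R)
v_min = eigvecs[:, 0]                        # eigenvector of the minimum eigenvalue

# zeros of V_min(z) = sum_k v_min(k) z^(-k) lie near exp(j*omega_i)
roots = np.roots(v_min)
estimates = np.sort(np.angle(roots))
print(estimates)                             # approximately [0.6, 1.8]
```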
History.
The method was first discovered in 1911 by Constantin Carathéodory, then rediscovered by Vladilen Fedorovich Pisarenko in 1973 while examining the problem of estimating the frequencies of complex signals in white noise. He found that the frequencies could be derived from the eigenvector corresponding to the minimum eigenvalue of the autocorrelation matrix. | [
{
"math_id": 0,
"text": "x(n)"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "p + 1"
},
{
"math_id": 3,
"text": "M \\times M"
},
{
"math_id": 4,
"text": "(p + 1) \\times (p + 1)"
},
{
"math_id": 5,
"text": "V_{\\rm min}(z) = \\sum_{k=0}^p v_{\\rm min}(k) z^{-k}"
},
{
"math_id": 6,
"text": "\\hat P_{\\rm PHD}(e^{j \\omega}) = \\frac{1}{|\\mathbf{e}^{H} \\mathbf{v}_{\\rm min}|^2}"
},
{
"math_id": 7,
"text": "\\mathbf{v}_{\\rm min}"
},
{
"math_id": 8,
"text": "e = \\begin{bmatrix}1 & e^{j \\omega} & e^{j 2 \\omega} & \\cdots & e^{j (M-1) \\omega}\\end{bmatrix}^T"
}
]
| https://en.wikipedia.org/wiki?curid=6871877 |
68720470 | Vote-ratio monotonicity | Property of apportionment methods
Vote-ratio, weight-ratio, or population-ratio monotonicity is a property of some apportionment methods. It says that if the entitlement for formula_0 grows at a faster rate than formula_1 (i.e. formula_0 grows proportionally more than formula_1), formula_0 should not lose a seat to formula_1. More formally, if the ratio of votes or populations formula_2 increases, then formula_0 should not lose a seat while formula_1 gains a seat. Apportionments violating this rule are called population paradoxes.
A particularly severe variant, where voting "for" a party causes it to "lose" seats, is called a no-show paradox. The largest remainders method exhibits both population and no-show paradoxes.
Population-pair monotonicity.
Pairwise monotonicity says that if the "ratio" between the entitlements of two states formula_3 increases, then state formula_4 should not gain seats at the expense of state formula_5. In other words, a shrinking state should not "steal" a seat from a growing state.
Some earlier apportionment rules, such as Hamilton's method, do not satisfy VRM, and thus exhibit the population paradox. For example, after the 1900 census, Virginia lost a seat to Maine, even though Virginia's population was growing more rapidly.
Strong monotonicity.
A stronger variant of population monotonicity, called "strong" monotonicity requires that, if a state's "entitlement" (share of the population) increases, then its apportionment should not decrease, regardless of what happens to any other state's entitlement. This variant is extremely strong, however: whenever there are at least 3 states, and the house size is not exactly equal to the number of states, no apportionment method is strongly monotone for a fixed house size. Strong monotonicity failures in divisor methods happen when one state's entitlement increases, causing it to "steal" a seat from another state whose entitlement is unchanged.
However, it is worth noting that the traditional form of the divisor method, which involves using a "fixed" divisor and allowing the house size to vary, satisfies strong monotonicity in this sense.
Relation to other properties.
Balinski and Young proved that an apportionment method is VRM if and only if it is a divisor method.
Palomares, Pukelsheim and Ramirez proved that every apportionment rule that is anonymous, balanced, concordant, homogeneous, and coherent is vote-ratio monotone.
Vote-ratio monotonicity implies that, if population moves from state formula_5 to state formula_4 while the populations of other states do not change, then both formula_6 and formula_7 must hold. | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "A / B"
},
{
"math_id": 3,
"text": "i, j"
},
{
"math_id": 4,
"text": "j"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "a_i' \\geq a_i"
},
{
"math_id": 7,
"text": "a_j' \\leq a_j"
}
]
| https://en.wikipedia.org/wiki?curid=68720470 |
68720625 | Balance (apportionment) | Balance or balancedness is a property of apportionment methods, which are methods of allocating identical items between among agens, such as dividing seats in a parliament among political parties or federal states. The property says that, if two agents have exactly the same entitlements, then the number of items they receive should differ by at most one. So if two parties win the same number of votes, or two states have the same populations, then the number of seats they receive should differ by at most one.
Ideally, agents with identical entitlements should receive an identical number of items, but this may be impossible due to the indivisibility of the items. Balancedness requires that the difference between identical-entitlement agents should be the smallest difference allowed by the indivisibility, which is 1. For example, if there are 2 equal-entitlement agents and 9 items, then the allocations (4,5) and (5,4) are both allowed, but the allocations (3,6) or (6,3) are not - a difference of 3 is not justified even by indivisibility.
Definitions.
There is a "resource" to allocate, denoted by formula_0. For example, it can be an integer representing the number of seats in a "h"ouse of representatives. The resource should be allocated between some formula_1 "agents", such as states or parties. The agents have different "entitlements", denoted by a vector formula_2. For example, "ti" can be the fraction of votes won by party "i". An "allocation" is a vector formula_3 with formula_4. An "allocation rule" is a rule that, for any formula_0 and entitlement vector formula_2, returns an allocation vector formula_3.
An allocation rule is called balanced if formula_5 implies formula_6 for all "i,j". Equivalently, formula_5 implies formula_7 for all "i,j".
Properties.
All known apportionment methods are balanced. In particular, both Highest averages methods and Largest remainder methods are balanced.
Every apportionment method that is anonymous, exact and coherent, is also balanced. | [
{
"math_id": 0,
"text": "h"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "t_1,\\ldots,t_n"
},
{
"math_id": 3,
"text": "a_1,\\ldots,a_n"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n a_i = h"
},
{
"math_id": 5,
"text": "t_i = t_j"
},
{
"math_id": 6,
"text": "|a_i-a_j|\\leq 1"
},
{
"math_id": 7,
"text": "a_i\\geq a_j-1"
}
]
| https://en.wikipedia.org/wiki?curid=68720625 |
68721135 | Pierre-Louis-Georges du Buat | French hydraulic engineer
Count Pierre-Louis-Georges du Buat (23 April 1734 – 17 October 1809) was a French military engineer who worked on problems in hydraulics and hydrodynamics. He examined the flow of water and came up with a mathematical formulation defining the rate of flow of water through pipes, which he published in "Principes d’hydraulique, vérifiés par un grand nombre d’expériences faites par ordre du gouvernement."
Early life and engineering.
Du Buat came from a noble family and was born in a manor at Buttenval, Tortisambert in Normandy. He was educated at the Royal School of Engineering in Mézières in 1750 and became a military engineer at the age of 17. He began his first work in the construction of canals of the Lys and the Aa. He became a chief engineer in 1773.
Water velocity studies.
In 1786 he established through experiments a relationship between the velocity of flow of water through a pipe of a known radius and inclination, which he then extended to flow in open canals.
formula_0
"u" is the average water velocity,
"g" is the acceleration of gravity,
"m" is a coefficient depending on the roughness of the banks,
"i" is the slope of the channel bottom,
"l" is the width of the bed,
"h" the depth of the channel.
Du Buat also studied the dependence of viscosity of liquids on temperature.
Personal life and business career.
Du Buat married the daughter of Gérard Bosquet in 1758 which made him a shareholder of the Compagnie des mines d'Anzin. He left the corps of engineers in 1788 and became a director of the company in 1802.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " u = 2 i m g \\left ( \\frac{lh}{l+2h} \\right ) "
}
]
| https://en.wikipedia.org/wiki?curid=68721135 |
68726609 | Eilenberg–Niven theorem | Algebraic theorem
The Eilenberg–Niven theorem is a theorem that generalizes the fundamental theorem of algebra to quaternionic polynomials, that is, polynomials with quaternion coefficients and variables. It is due to Samuel Eilenberg and Ivan M. Niven.
Statement.
Let
formula_0
where "x", "a""0", "a""1", ... , "a""n" are non-zero quaternions and "φ"("x") is a finite sum of monomials similar to the first term but with degree less than "n". Then "P"("x") = 0 has at least one solution.
Generalizations.
If multiple monomials of the highest degree are permitted, then the theorem does not hold, and "P"("x") = "x" + ixi + 1 = 0 is a counterexample with no solutions.
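The counterexample can be checked directly: writing "x" = "a" + "b"i + "c"j + "d"k gives ixi = −"a" − "b"i + "c"j + "d"k, so "P"("x") = 1 + 2"c"j + 2"d"k, whose real part is always 1. The following Python sketch (a hand-rolled Hamilton product; nothing here is taken from the original sources) verifies this numerically at random quaternions:

```python
import random

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + b*i + c*j + d*k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0.0, 1.0, 0.0, 0.0)
for _ in range(5):
    x = tuple(random.uniform(-10, 10) for _ in range(4))
    s = [u + v for u, v in zip(x, qmul(qmul(i, x), i))]   # x + i*x*i
    s[0] += 1.0                                           # + 1
    print(round(s[0], 12))                                # real part is always 1.0, so P(x) != 0
```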
The Eilenberg–Niven theorem can also be generalized to octonions: all octonionic polynomials with a unique monomial of highest degree have at least one solution, independently of how the parentheses are placed (the octonions are a non-associative algebra). Unlike in the quaternion case, however, monic and non-monic octonionic polynomials do not always have the same set of zeros.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P(x) = a_0 x a_1 x \\cdots x a_n + \\varphi(x)"
}
]
| https://en.wikipedia.org/wiki?curid=68726609 |
68732742 | Yaiza Canzani | Spanish and Uruguayan mathematician
Yaiza Canzani García is a Spanish and Uruguayan mathematician known for her work in mathematical analysis, and particularly in spectral geometry and microlocal analysis. She is an associate professor of mathematics at the University of North Carolina at Chapel Hill.
Education and career.
Canzani was born in Spain and grew up in Uruguay. She was an undergraduate at the University of the Republic (Uruguay), where she earned a bachelor's degree in mathematics in 2008. She completed a Ph.D. in 2013 at McGill University in Montreal, Canada, with the dissertation "Spectral Geometry of Conformally Covariant Operators" jointly supervised by Dmitry Jakobson and John Toth.
After postdoctoral study at the Institute for Advanced Study and as a Benjamin Peirce Fellow at Harvard University, she became an assistant professor of mathematics at the University of North Carolina at Chapel Hill in 2016. In 2021 she was promoted to associate professor.
Recognition.
Canzani is a recipient of a National Science Foundation CAREER Award and a Sloan Research Fellowship. She is the 2022 winner of the Sadosky Prize in analysis of the Association for Women in Mathematics. The award was given "in recognition of outstanding contributions in spectral geometry and microlocal analysis", citing her "breakthrough results on nodal sets, random waves, Weyl Laws, formula_0-norms, and other problems on eigenfunctions and eigenvalues on Riemannian manifolds".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L^p"
}
]
| https://en.wikipedia.org/wiki?curid=68732742 |
6874521 | Equating coefficients | In mathematics, the method of equating the coefficients is a way of solving a functional equation of two expressions such as polynomials for a number of unknown parameters. It relies on the fact that two expressions are identical precisely when corresponding coefficients are equal for each different type of term. The method is used to bring formulas into a desired form.
Example in real fractions.
Suppose we want to apply partial fraction decomposition to the expression:
formula_0
that is, we want to bring it into the form:
formula_1
in which the unknown parameters are "A", "B" and "C".
Multiplying these formulas by "x"("x" − 1)("x" − 2) turns both into polynomials, which we equate:
formula_2
or, after expansion and collecting terms with equal powers of "x":
formula_3
At this point it is essential to realize that the polynomial 1 is in fact equal to the polynomial 0"x"2 + 0"x" + 1, having zero coefficients for the positive powers of "x". Equating the corresponding coefficients now results in this system of linear equations:
formula_4
formula_5
formula_6
Solving it results in:
formula_7
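For readers who want to reproduce this mechanically, the following SymPy sketch (not part of the original article) builds the same polynomial identity and equates the coefficients of each power of "x":

```python
from sympy import symbols, Eq, solve, expand

x, A, B, C = symbols('x A B C')
lhs = expand(A*(x - 1)*(x - 2) + B*x*(x - 2) + C*x*(x - 1))

# Equate the coefficients of x**2, x**1 and x**0 with those of the constant polynomial 1.
eqs = [Eq(lhs.coeff(x, k), 1 if k == 0 else 0) for k in range(3)]
print(solve(eqs, [A, B, C]))   # {A: 1/2, B: -1, C: 1/2}
```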
Example in nested radicals.
A similar problem, involving equating like terms rather than coefficients of like terms, arises if we wish to de-nest the nested radical formula_8. To obtain an equivalent expression not involving a square root of an expression that itself involves a square root, we can postulate the existence of rational parameters "d", "e" such that
formula_9
Squaring both sides of this equation yields:
formula_10
To find "d" and "e" we equate the terms not involving square roots, so formula_11 and equate the parts involving radicals, so formula_12 which when squared implies formula_13 This gives us two equations, one quadratic and one linear, in the desired parameters "d" and "e", and these can be solved to obtain
formula_14
formula_15
which is a valid solution pair if and only if formula_16 is a rational number.
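A quick numerical check of these formulas, using the illustrative values "a" = 3, "b" = 2, "c" = 2 (chosen here because "a"2 − "b"2"c" = 1 is a perfect square, so the rationality condition above is met):

```python
from math import sqrt

a, b, c = 3, 2, 2
root = sqrt(a**2 - b**2 * c)          # = 1, a rational number, so de-nesting is possible
e = (a + root) / 2                    # = 2
d = (a - root) / 2                    # = 1
print(sqrt(a + b * sqrt(c)))          # 2.414213562...
print(sqrt(d) + sqrt(e))              # 2.414213562...  (i.e. 1 + sqrt(2))
```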
Example of testing for linear dependence of equations.
Consider this overdetermined system of equations (with 3 equations in just 2 unknowns):
formula_17
formula_18
formula_19
To test whether the third equation is linearly dependent on the first two, postulate two parameters "a" and "b" such that "a" times the first equation plus "b" times the second equation equals the third equation. Since this always holds for the right sides, all of which are 0, we merely need to require it to hold for the left sides as well:
formula_20
Equating the coefficients of "x" on both sides, equating the coefficients of "y" on both sides, and equating the constants on both sides gives the following system in the desired parameters "a", "b":
formula_21
formula_22
formula_23
Solving it gives:
formula_24
The unique pair of values "a", "b" satisfying the first two equations is ("a", "b") = (1, 1); since these values also satisfy the third equation, there do in fact exist "a", "b" such that "a" times the original first equation plus "b" times the original second equation equals the original third equation; we conclude that the third equation is linearly dependent on the first two.
Note that if the constant term in the original third equation had been anything other than –7, the values ("a", "b") = (1, 1) that satisfied the first two equations in the parameters would not have satisfied the third one ("a" – 8"b" = constant), so there would exist no "a", "b" satisfying all three equations in the parameters, and therefore the third original equation would be independent of the first two.
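The same conclusion can be reached numerically; the sketch below (variable names chosen here) stacks the coefficient rows of the three equations and checks that the third row is a combination of the first two:

```python
import numpy as np

# Rows hold the coefficients of x, y and the constant term of each equation (all set equal to 0).
rows = np.array([[1, -2,  1],
                 [3,  5, -8],
                 [4,  3, -7]], dtype=float)

print(np.linalg.matrix_rank(rows))    # 2, so the three equations are linearly dependent

# Solve rows[:2].T @ [a, b] = rows[2] for the combination coefficients a, b.
(a, b), *_ = np.linalg.lstsq(rows[:2].T, rows[2], rcond=None)
print(a, b)                           # approximately 1.0 1.0
```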
Example in complex numbers.
The method of equating coefficients is often used when dealing with complex numbers. For example, to divide the complex number "a"+"bi" by the complex number "c"+"di", we postulate that the ratio equals the complex number "e+fi", and we wish to find the values of the parameters "e" and "f" for which this is true. We write
formula_25
and multiply both sides by the denominator to obtain
formula_26
Equating real terms gives
formula_27
and equating coefficients of the imaginary unit "i" gives
formula_28
These are two equations in the unknown parameters "e" and "f", and they can be solved to obtain the desired coefficients of the quotient:
formula_29 | [
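A short check of these quotient formulas against Python's built-in complex arithmetic, with sample values chosen here purely for illustration:

```python
a, b, c, d = 3.0, 4.0, 1.0, 2.0                 # compute (3 + 4i) / (1 + 2i)
e = (a*c + b*d) / (c**2 + d**2)
f = (b*c - a*d) / (c**2 + d**2)
print(e, f)                                     # 2.2 -0.4
print(complex(a, b) / complex(c, d))            # (2.2-0.4j)
```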
{
"math_id": 0,
"text": "\\frac{1}{x(x-1)(x-2)},\\,"
},
{
"math_id": 1,
"text": "\\frac{A}{x}+\\frac{B}{x-1}+\\frac{C}{x-2},\\,"
},
{
"math_id": 2,
"text": "A(x-1)(x-2) + Bx(x-2) + Cx(x-1) = 1,\\,"
},
{
"math_id": 3,
"text": "(A+B+C)x^2 - (3A+2B+C)x + 2A = 1.\\,"
},
{
"math_id": 4,
"text": "A+B+C = 0,\\,"
},
{
"math_id": 5,
"text": "3A+2B+C = 0,\\,"
},
{
"math_id": 6,
"text": "2A = 1.\\,"
},
{
"math_id": 7,
"text": "A = \\frac{1}{2},\\, B = -1,\\, C = \\frac{1}{2}.\\,"
},
{
"math_id": 8,
"text": "\\sqrt{a+b\\sqrt{c}\\ }"
},
{
"math_id": 9,
"text": "\\sqrt{a+b\\sqrt{c}\\ } = \\sqrt{d}+\\sqrt{e}."
},
{
"math_id": 10,
"text": "a+b\\sqrt{c} = d + e + 2\\sqrt{de}."
},
{
"math_id": 11,
"text": "a=d+e,"
},
{
"math_id": 12,
"text": "b\\sqrt{c}=2\\sqrt{de}"
},
{
"math_id": 13,
"text": "b^2c=4de."
},
{
"math_id": 14,
"text": "e = \\frac{a + \\sqrt{a^2-b^2c}}{2},"
},
{
"math_id": 15,
"text": "d = \\frac{a - \\sqrt{a^2-b^2c}}{2},"
},
{
"math_id": 16,
"text": "\\sqrt{a^2-b^2c}"
},
{
"math_id": 17,
"text": "x-2y+1=0,"
},
{
"math_id": 18,
"text": "3x+5y-8=0,"
},
{
"math_id": 19,
"text": "4x+3y-7=0."
},
{
"math_id": 20,
"text": "a(x-2y+1)+b(3x+5y-8) = 4x+3y-7."
},
{
"math_id": 21,
"text": "a+3b=4,"
},
{
"math_id": 22,
"text": "-2a+5b=3,"
},
{
"math_id": 23,
"text": "a-8b=-7."
},
{
"math_id": 24,
"text": "a = 1, \\ b = 1"
},
{
"math_id": 25,
"text": "\\frac{a+bi}{c+di}=e+fi,"
},
{
"math_id": 26,
"text": "(ce-fd)+(ed+cf)i=a+bi."
},
{
"math_id": 27,
"text": "ce-fd=a,"
},
{
"math_id": 28,
"text": "ed+cf=b."
},
{
"math_id": 29,
"text": "e= \\frac{ac+bd}{c^2+d^2} \\quad \\quad \\text{and} \\quad \\quad f=\\frac{bc-ad}{c^2+d^2} ."
}
]
| https://en.wikipedia.org/wiki?curid=6874521 |
6875557 | Complete manifold | Riemannian manifold in which geodesics extend infinitely in all directions
In mathematics, a complete manifold (or geodesically complete manifold) M is a (pseudo-) Riemannian manifold for which, starting at any point "p", there are straight paths extending infinitely in all directions.
Formally, a manifold formula_0 is (geodesically) complete if for any maximal geodesic formula_1, it holds that formula_2. A geodesic is maximal if its domain cannot be extended.
Equivalently, formula_0 is (geodesically) complete if for all points formula_3, the exponential map at formula_4 is defined on formula_5, the entire tangent space at formula_4.
Hopf–Rinow theorem.
The Hopf–Rinow theorem gives alternative characterizations of completeness. Let formula_6 be a "connected" Riemannian manifold and let formula_7 be its Riemannian distance function.
The Hopf–Rinow theorem states that formula_6 is (geodesically) complete if and only if it satisfies one of the following equivalent conditions:
The metric space formula_8 is complete, i.e. every Cauchy sequence with respect to formula_9 converges in formula_6,
Every closed and bounded subset of formula_6 is compact.
Examples and non-examples.
Euclidean space formula_10, the sphere formula_11, and the tori formula_12 (with their natural Riemannian metrics) are all complete manifolds.
All compact Riemannian manifolds and all homogeneous manifolds are geodesically complete. All symmetric spaces are geodesically complete.
Non-examples.
A simple example of a non-complete manifold is given by the punctured plane formula_13 (with its induced metric). Geodesics going to the origin cannot be defined on the entire real line. By the Hopf–Rinow theorem, we can alternatively observe that it is not a complete metric space: any sequence in the plane converging to the origin is a non-converging Cauchy sequence in the punctured plane.
There exist non-geodesically complete compact pseudo-Riemannian (but not Riemannian) manifolds. An example of this is the Clifton–Pohl torus.
In the theory of general relativity, which describes gravity in terms of a pseudo-Riemannian geometry, many important examples of geodesically incomplete spaces arise, e.g. non-rotating uncharged black-holes or cosmologies with a Big Bang. The fact that such incompleteness is fairly generic in general relativity is shown in the Penrose–Hawking singularity theorems.
Extendibility.
If formula_0 is geodesically complete, then it is not isometric to an open proper submanifold of any other Riemannian manifold. The converse does not hold.
References.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "\\ell : I \\to M"
},
{
"math_id": 2,
"text": "I=(-\\infty,\\infty)"
},
{
"math_id": 3,
"text": "p \\in M"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "T_pM"
},
{
"math_id": 6,
"text": "(M,g)"
},
{
"math_id": 7,
"text": "d_g : M \\times M \\to [0,\\infty)"
},
{
"math_id": 8,
"text": "(M,d_g)"
},
{
"math_id": 9,
"text": "d_g"
},
{
"math_id": 10,
"text": "\\mathbb{R}^n"
},
{
"math_id": 11,
"text": "\\mathbb{S}^n"
},
{
"math_id": 12,
"text": "\\mathbb{T}^n"
},
{
"math_id": 13,
"text": "\\mathbb{R}^2 \\smallsetminus \\lbrace 0 \\rbrace"
}
]
| https://en.wikipedia.org/wiki?curid=6875557 |
68756880 | Rathjen's psi function | In mathematics, Rathjen's formula_0 psi function is an ordinal collapsing function developed by Michael Rathjen. It collapses weakly Mahlo cardinals formula_1 to generate large countable ordinals. A weakly Mahlo cardinal is a cardinal such that the set of regular cardinals below formula_1 is stationary in formula_1 (i.e. all normal functions closed in formula_1 are closed under some regular ordinal formula_2). Rathjen uses this to diagonalise over the weakly inaccessible hierarchy.
It admits an associated ordinal notation formula_3 whose limit (i.e. ordinal type) is formula_4, which is strictly greater than both formula_5 and the limit of countable ordinals expressed by Rathjen's formula_0. formula_5, which is called the "Small Rathjen ordinal", is the proof-theoretic ordinal of formula_6, Kripke–Platek set theory augmented by the axiom schema "for any formula_7-formula formula_8 satisfying formula_9, there exists an admissible set formula_10 satisfying formula_11". It is equal to formula_12 in Rathjen's formula_0 function.
Definition.
Restrict formula_13 and formula_14 to uncountable regular cardinals formula_2; for a function formula_15 let formula_16 denote the domain of formula_15; let formula_17 denote formula_18, and let formula_19 denote the enumeration of formula_20. Lastly, an ordinal formula_21 is said to be strongly critical if formula_22.
For formula_23 and formula_24:
formula_25
If formula_26 for some formula_27, define formula_28 using the unique formula_29. Otherwise if formula_30 for some formula_23, then define formula_31 using the unique formula_21, where formula_32 is a set of strongly critical ordinals formula_2 explicitly defined in the original source.
For formula_23:
formula_33
formula_34
Explanation.
Rathjen originally defined the formula_0 function in a more complicated way in order to create an ordinal notation associated to it. Therefore, it is not certain whether the simplified OCF above yields an ordinal notation or not. The formula_46 functions used in Rathjen's original OCF are also not so easy to understand, and differ from the formula_46 functions defined above.
Rathjen's formula_0 and the simplification provided above are not the same OCF. This is partially because the former is known to admit an ordinal notation, while the latter isn't known to admit an ordinal notation. Rathjen's formula_0 is often confounded with another of his OCFs which also uses the symbol formula_0, but they are distinct notions. The former one is a published OCF, while the latter one is just a function symbol in an ordinal notation associated to an unpublished OCF.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\psi"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "< M"
},
{
"math_id": 3,
"text": "T(M)"
},
{
"math_id": 4,
"text": "\\psi_\\Omega(\\chi_{\\varepsilon_M+1}(0))"
},
{
"math_id": 5,
"text": "\\vert KPM\\vert"
},
{
"math_id": 6,
"text": "\\mathsf{KPM}"
},
{
"math_id": 7,
"text": "\\Delta_0"
},
{
"math_id": 8,
"text": "H(x, y)"
},
{
"math_id": 9,
"text": "\\forall x \\, \\exists y\\,(H(x, y))"
},
{
"math_id": 10,
"text": "z"
},
{
"math_id": 11,
"text": "\\forall x \\in z \\, \\exists y\\,(H(x, y))"
},
{
"math_id": 12,
"text": "\\psi_\\Omega(\\psi_{\\chi_{\\varepsilon_M+1}(0)}(0))"
},
{
"math_id": 13,
"text": "\\pi"
},
{
"math_id": 14,
"text": "\\kappa"
},
{
"math_id": 15,
"text": "f"
},
{
"math_id": 16,
"text": "\\operatorname{dom}(f)"
},
{
"math_id": 17,
"text": "\\operatorname{cl}_M(X)"
},
{
"math_id": 18,
"text": "X \\cup \\{\\alpha < M: \\alpha \\text{ is a limit point of } X\\}"
},
{
"math_id": 19,
"text": "\\operatorname{enum}(X)"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "\\alpha"
},
{
"math_id": 22,
"text": "\\varphi_\\alpha(0) = \\alpha"
},
{
"math_id": 23,
"text": "\\alpha \\in \\Gamma_{M+1}"
},
{
"math_id": 24,
"text": "\\beta \\in M"
},
{
"math_id": 25,
"text": "\n\\begin{align}\n& \\beta \\cup \\{0, M\\} \\subseteq B^n(\\alpha, \\beta) \\gamma = \\gamma_1 + \\cdots + \\gamma_k \\text{ and } \\gamma_1, \\ldots, \\gamma_k \\in B^n(\\alpha,\\beta) \\\\[5pt]\n& \\rightarrow \\gamma \\in B^{n+1}(\\alpha, \\beta)\\gamma = \\varphi_{\\gamma_0}(\\gamma_1) \\text{ and } \\gamma_0, \\gamma_1 \\in B^n(\\alpha,\\beta) \\rightarrow \\gamma \\in B^{n+1}(\\alpha, \\beta)\\pi \\in B^n(\\alpha,\\beta) \\\\[5pt]\n& \\text{and } \\gamma < \\pi \\rightarrow \\gamma \\in B^{n+1}(\\alpha, \\beta)\\delta, \\eta \\in B^n(\\alpha,\\beta) \\land \\delta < \\alpha \\land \\eta \\in \\operatorname{dom}(\\chi_\\delta) \\\\[5pt]\n& \\rightarrow \\chi_\\delta(\\eta) \\in B^{n+1}(\\alpha,\\beta)B(\\alpha,\\beta) \\\\[5pt]\n& \\bigcup_{n < \\omega}B^n(\\alpha, \\beta) \\chi_\\alpha \\\\[5pt]\n& = \\operatorname{enum}(\\operatorname{cl}(\\kappa: \\kappa \\notin B(\\alpha, \\kappa) \\land \\alpha \\in B(\\alpha,\\kappa)\\})).\n\\end{align}\n"
},
{
"math_id": 26,
"text": "\\kappa = \\chi_\\alpha(\\beta+1)"
},
{
"math_id": 27,
"text": "(\\alpha,\\beta) \\in \\Gamma_{M+1} \\times M"
},
{
"math_id": 28,
"text": "\\kappa^- := \\chi_\\alpha(\\beta)"
},
{
"math_id": 29,
"text": "(\\alpha,\\beta)"
},
{
"math_id": 30,
"text": "\\kappa = \\chi_\\alpha(0)"
},
{
"math_id": 31,
"text": "\\kappa^- := \\sup(\\operatorname{SC}_M(\\alpha) \\cup \\{0\\})"
},
{
"math_id": 32,
"text": "\\operatorname{SC}_M(\\alpha)"
},
{
"math_id": 33,
"text": "\n\\begin{align}\n& \\kappa^- \\cup \\{\\kappa^-, M\\} \\subset C_\\kappa^n(\\alpha)\\gamma = \\gamma_1 + \\cdots+ \\gamma_k \\text{ and } \\gamma_1, \\ldots, \\gamma_k \\in C^n(\\alpha) \\rightarrow \\gamma \\in C^{n+1}(\\alpha)\\gamma = \\varphi_{\\gamma_0}(\\gamma_1) \\land \\gamma_0, \\gamma_1 \\in C^n(\\alpha,\\beta) \\\\[5pt]\n& \\rightarrow \\gamma \\in C^{n+1}(\\alpha)\\pi \\in C^n_\\kappa(\\alpha) \\cap \\kappa \\land \\gamma < \\pi \\land \\pi \\in \\textrm{R} \\\\[5pt]\n& \\rightarrow \\gamma \\in C^{n+1}_\\kappa(\\alpha)\\gamma = \\chi_\\delta(\\eta) \\land \\delta, \\eta \\in C^n_\\kappa(\\alpha) \\rightarrow \\gamma \\in C^{n+1}_\\kappa(\\alpha) \\\\[5pt]\n& \\gamma = \\Phi_\\delta(\\eta) \\land \\delta, \\eta \\in C^n_\\kappa(\\alpha) \\land 0 < \\delta \\land \\delta, \\eta <M \\rightarrow \\gamma \\in C^{n+1}_\\kappa(\\alpha)\\beta < \\alpha \\land \\pi, \\beta \\in C^n_\\kappa(\\alpha) \\land \\beta \\in C_\\pi(\\beta) \\rightarrow \\psi_\\pi(\\beta) \\in C^{n+1}_\\kappa(\\alpha)C_\\kappa(\\alpha) := \\bigcup_{C^n_\\kappa(\\alpha): n<\\omega}.\n\\end{align}\n"
},
{
"math_id": 34,
"text": "\\psi_\\kappa(\\alpha):=\\min(\\{\\xi:\\xi\\notin C_\\kappa(\\alpha)\\})."
},
{
"math_id": 35,
"text": "\\operatorname{cl}(X)"
},
{
"math_id": 36,
"text": "X \\cup \\{\\beta \\in \\operatorname{Lim} \\mid \\sup(X \\cap \\beta) = \\beta\\}"
},
{
"math_id": 37,
"text": "\\operatorname{Lim}"
},
{
"math_id": 38,
"text": "B_0(\\alpha, \\beta) = \\beta \\cup \\{0, M\\}"
},
{
"math_id": 39,
"text": "B_{n+1}(\\alpha, \\beta) = \\{\\gamma + \\delta, \\varphi_\\gamma(\\delta), \\chi_\\mu(\\delta) | \\gamma, \\delta, \\mu \\in B_n(\\alpha, \\beta) \\land \\mu < \\alpha\\}"
},
{
"math_id": 40,
"text": "B(\\alpha, \\beta) = \\bigcup_{n < \\omega}B_n(\\alpha, \\beta)"
},
{
"math_id": 41,
"text": "\\chi_\\alpha(\\beta) = \\operatorname{enum}(\\operatorname{cl}(\\{\\pi: B(\\alpha, \\pi) \\cap M \\subseteq \\pi \\land \\alpha \\in B(\\alpha, \\pi)\\})) = \\operatorname{enum}(\\{\\beta \\in \\operatorname{Lim} \\mid \\sup \\{\\pi: B(\\alpha, \\pi) \\cap M \\subseteq \\pi \\land \\alpha \\in B(\\alpha, \\pi)\\} \\cap \\beta) = \\beta\\}"
},
{
"math_id": 42,
"text": "C_0(\\alpha, \\beta) = \\beta \\cup \\{0, M\\}"
},
{
"math_id": 43,
"text": "C_{n+1}(\\alpha, \\beta) = \\{\\gamma + \\delta, \\varphi_\\gamma(\\delta), \\chi_\\mu(\\delta), \\psi_\\pi(\\mu) | \\gamma, \\delta, \\mu, \\pi \\in B_n(\\alpha, \\beta) \\land \\mu < \\alpha\\}"
},
{
"math_id": 44,
"text": "C(\\alpha, \\beta) = \\bigcup_{n < \\omega}C_n(\\alpha, \\beta)"
},
{
"math_id": 45,
"text": "\\psi_\\pi(\\alpha) = \\min(\\{\\beta: C(\\alpha, \\beta) \\cap \\pi \\subseteq \\beta \\land \\alpha \\in C(\\alpha, \\beta)\\})"
},
{
"math_id": 46,
"text": "\\chi"
}
]
| https://en.wikipedia.org/wiki?curid=68756880 |
68758670 | Global carbon reward | Proposed climate action and monetary policy
The global carbon reward is a proposed international policy for establishing and funding a new global carbon market for decarbonising all sectors of the world economy, and for establishing and funding a new economic sector dedicated to carbon dioxide removal (CDR). The policy is market-based, and it will offer proportional financial rewards in exchange for verifiable climate mitigation services and co-benefits. The policy approach was first presented in 2017 by Delton Chen, Joël van der Beek, and Jonathan Cloud to address the 2015 Paris Agreement, and it has since been refined.
The policy employs a carbon currency to establish a global reward price for mitigated carbon. The carbon currency will not convey ownership of mitigated carbon, and consequently the carbon currency cannot function as a carbon offset credit. The carbon currency will function as a financial asset and incentive.
A supranational authority is needed to implement the policy and to manage the supply and demand of the carbon currency. This authority is referred to as the carbon exchange authority. One of the authority's key functions is to coordinate the operations of major central banks in order to give the carbon currency a guaranteed floor price. A predictable rising floor price will attract private investment demand for the currency, and it will transfer a significant portion of the mitigation cost into currency markets. The policy will not result in any direct costs for governments, businesses or citizens. Consequently, the policy has scope to create a new socioeconomic pathway to achieve the goals of the Paris Agreement.
<templatestyles src="Template:TOC limit/styles.css" />
Background.
Since the start of the United Nations Framework Convention on Climate Change (UNFCCC) in 1992, the atmospheric concentration of carbon dioxide (CO2)—a dominant anthropogenic greenhouse gas (GHG)—has risen steadily, as shown by the Keeling Curve. Despite numerous Conference of the Parties (COP) meetings and several treaties, CO2 and other GHG emissions have continued at dangerously high levels.
A major hurdle to a rapid clean energy transition and global economic decarbonisation, is the need to mobilise large amounts of investment finance. According to a study of renewable energy systems by ARENA, the financial shortfall for achieving the goals of the Paris Agreement is about US $27 trillion for the 2016-2050 period. The International Energy Agency (IEA) estimate that investments in clean energy will need to increase to about US $5 trillion per year by 2030 in order to achieve net-zero carbon emissions by 2050.
Further complicating the economics of climate change is the possibility that cumulative residual CO2 emissions from fossil fuels could reach 850–1,150 GtCO2 for the period 2016–2100 even if stringent policies and carbon taxes are implemented. For these and other reasons there is an apparent need for new policies that can accelerate the transition to low-carbon energy systems and provide large-scale CDR.
The shortfall in climate finance and lack of political cooperation inspired Delton Chen, a civil engineer, to found a climate policy initiative in 2014 with the goal of combining a market mechanism with monetary policy. Seminal ideas for the policy first appeared at the 2015 Earth System Governance conference, Canberra, and in MIT’s Climate Co-Lab competitions where the policy was awarded two prizes. Between 2017–2019, Chen and his colleagues published two policy papers and in 2018 Guglielmo Zappalà wrote a thesis that compares the new policy with existing central bank policies. In 2019, Chen described the global carbon reward policy in terms of required central bank remits and operations, and the possible application of blockchain technologies. A website for the global carbon reward was launched on World Environment Day 2021.
Policy name.
The generic name of the climate policy is simply 'carbon reward' or 'global carbon reward', written with lower-case letters. When the policy name is written with capital letters—Global Carbon Reward—then the name is referring to a specific policy development project that has adopted 'Global Carbon Reward' as its brand name. This brand name refers to specific policy versions and associated assets and partnerships. The Global Carbon Reward project was originally called the 'Global 4C Risk Mitigation Policy' or simply 'Global 4C', where 4C is an acronym that stands for "complementary currencies for climate change".
Policy type.
The global carbon reward is a market-based climate policy combined with a monetary standard for mitigated carbon. The global carbon reward is justified in terms of a market hypothesis that posits a need to create a positive externality, designed to manage the systemic risks associated with anthropogenic carbon emissions. The term 'reward' is used to distinguish the market incentive from other more conventional incentives, such as carbon taxes, cap and trade, subsidies, and carbon offsets.
The market-based instrument is called a carbon currency. The policy instrument is a type of representative money that will be managed with a new monetary policy that can coordinate the world's major central banks to establish a predictable floor price for the carbon currency. The new monetary policy is called carbon quantitative easing, and it is a supranational policy because it will coordinate the quantitative easing and currency trading by central banks on a global level.
Pricing mitigated carbon.
There are three typical methods for pricing anthropogenic GHGs that have been mitigated.
The global carbon reward employs the first two methods, and it has scope to incentivise the third method by registering the resulting durable goods as a co-benefit.
Not a subsidy.
The term "reward" is used to differentiate the global carbon reward from government subsidies. The global carbon reward is different to a government subsidy because (1) the reward is issued with a representative currency and not with a national currency; (2) the reward is funded with international monetary policy and private currency trading, and not through fiscal spending; and (3) the reward is performance-based whereas subsidies are not necessarily dependent on performance.
The global carbon reward aims to create a price signal with a carbon currency, and as such the reward is not a Pigovian subsidy. Market participants are invited to trade the carbon currency as an investment. If the Coase theorem is applied, then it may be assumed that market participants will discover the reward price in a way that shares the mitigation costs in a Pareto optimal outcome.
The global carbon reward is a new kind of performance-based grant system. It is also a new kind of results-based climate financing (RBCF). According to the World Bank, RBCF is "...a well-established financing modality in the health and education sectors but it is still in an early stage of deployment in the area of climate change".
Not a carbon offset.
The global carbon reward is not a carbon offset credit. A carbon offset is a recorded reduction in CO2 or other greenhouse gas emissions that is used to compensate for emissions made elsewhere. The reward is issued as a currency that does not convey ownership of the mitigated carbon. All of the mitigated carbon that is awarded will be immediately retired from carbon markets and will be held by the authority for the policy, called the carbon exchange authority.
Carbon stock take.
The carbon currency will be directly indexed to the carbon stock take for the policy, meaning that one unit of the carbon currency will directly correspond to a specific mass of CO2e that is mitigated for a specific duration. This indexing relationship is defined by the unit of account of the carbon currency, which is 1 tCO2e mitigated for a 100-year duration. The total supply of the carbon currency will thus remain proportional to the carbon stock take.
The carbon stock take will be owned and managed by the carbon exchange authority. This is analogous to the U.S. Department of the Treasury holding gold for the gold window of the Bretton Woods system except that the carbon currency is not redeemable for mitigated carbon. If the carbon stock take falls as a result of individual enterprises defaulting on their service-level agreements, then the supply of the carbon currency can be reduced proportionally with a negative interest rate charge, otherwise called a demurrage fee.
Causal mechanisms.
The effectiveness of the global carbon reward policy will depend on a set of causal mechanisms that can remove financial bottlenecks and trigger a major shift in market behaviour for the scaling-up of effective climate action. The policy relies on a chain of causal mechanisms that are social, informational, financial and political. These include (1) the provision of globally available performance-based grants for mitigated carbon, (2) the provision of a global database for mitigation technologies and the statistics that describe their effectiveness and profitability, (3) the channeling of the mitigation cost into the foreign exchange currency market to resolve conflicts over cost sharing, and (4) the provision of individual service-level agreements for tracking and managing the carbon stock take over the long-term. The policy can also be integrated with other market and non-market policies in a (5) 'carrot and stick' approach for maximising societal cooperation.
Current status.
The theoretical background to the global carbon reward is presented in several publications and a thesis, but the policy has yet to be reviewed by policy institutions or government officials.
Popular culture.
The American science fiction writer, Kim Stanley Robinson, embraced the idea of a 'carbon coin' in his climate change novel The Ministry for the Future. The novel portrays a series of events that lead to the establishment of a transnational organisation that is mandated to deploy carbon coins to address the Paris Agreement. The author’s inspiration for using carbon coins is attributed in the novel to Delton Chen, via the phrase “Chen’s papers”.
Policy design.
Policy objectives.
The main objective of the global carbon reward is to avoid passing specific levels of average global surface warming. The climate objective needs to be defined in terms of global average temperature changes and associated probabilities of success. For example, the climate objective could be to avoid a maximum of 1.5 °C, 2.0 °C, and 2.5 °C of global warming with confidence levels of 50%, 67%, 90%, respectively. The policy’s main objective is normative, and it may be aligned with the goals of the 2015 Paris Agreement.
Secondary objectives of the global carbon reward are to maximise the co-benefits and to minimise the harms that are directly associated with the actions that are rewarded under the policy. These co-benefits may be divided into (1) energy reliability, (2) community wellbeing, and (3) ecological health, and they may also be categorised using the UN’s sustainable development goals.
Policy instrument.
The carbon currency is the economic instrument of the proposed market policy. The carbon currency will be used to (a) financially reward enterprises for mitigating carbon under the policy rules, (b) create a predictable global reward price for mitigated carbon, and (c) record the carbon stock take for the policy.
The carbon currency will be a type of representative money with a unit of account of 1 tCO2e mitigated for a 100-year duration, or similar. The carbon currency will act primarily as a store of value, and not as a medium of exchange. It will not be used as a medium of exchange in the sense that it will not be accepted for paying taxes or for making regular business transactions. The carbon currency will be readily tradable for other currencies via foreign exchange providers and remittance dealers. The supply and floor price of the carbon currency will be used to monitor progress on global economic decarbonisation and the associated systemic risks.
Pricing mechanism.
The pricing mechanism for the global carbon reward is understood in terms of the supply and demand functions for the carbon currency. The supply function is the rate at which the carbon currency is created and issued in order to reward enterprises that have successfully mitigated carbon under the rules of the policy. The demand function is underpinned by a floor price that is guaranteed by central banks, and by private demand for the currency in response to a rising or falling trend in the floor price.
Floor price.
As indicated above, the demand function for the carbon currency is underpinned by a guaranteed floor price. This floor price will be enforced by central banks through a reflexive monetary policy that triggers currency trades/swaps in open markets when necessary.
It is important to note that the floor price for the carbon currency will be the price signal that incentivises market actors to invest in mitigation projects. The price signal will be communicated as a combination of (a) the spot price for the carbon currency, and (b) the future floor price that will span a rolling 100-year period. The spot price may rise to any level under market forces, but it will never fall below the floor price because the monetary policy, called carbon quantitative easing (CQE), will be enacted to defend the floor price.
The future floor price for a rolling 100-year period will be divided into two parts: a rolling guaranteed period that spans a decade or two, followed by a rolling non-guaranteed period that spans the remainder of the 100-years. The future floor price constitutes the forward guidance that will be communicated to markets so that enterprises can make informed decisions when decarbonising their operations. It is presumed that enterprises that decarbonise will develop their own financial plans that account for the required capital investment, technological innovations, and other design factors. Given that rapid decarbonisation might introduce operational and financial risks, the purpose of the floor price is to de-risk the investments by providing a predictable revenue source that may be designated as debt-free and bankable.
Currency demand.
Private demand for the carbon currency will be generated by a rising floor price. The ideal floor price will be calibrated to achieve the climate objective, and the technical name for this ideal floor price is the risk cost of carbon (RCC).
Private demand for the carbon currency will be highest when the floor price is rising most quickly, and it will be least when the floor price is falling most quickly. If at any time this private demand is not sufficient to maintain the floor price, central banks will make up the shortfall by buying the carbon currency via CQE.
Currency supply.
As indicated above, the supply function for the carbon currency is based on assessing the mass of carbon dioxide equivalent (CO2e) that has been mitigated at the project level. This will involve setting emissions baselines, and applying standardised methods of measurement, reporting and verification. The adopted baselines and standards will be specified in service-level agreements.
The carbon currency may be offered for four kinds of mitigation service: (i) the supply of cleaner energy in specific energy markets; (ii) the consumption of cleaner energy, goods and services by businesses and households; (iii) the removal of carbon from the ambient atmosphere, and (iv) the implementation of ethical population management.
The gross amount of carbon currency that will be offered to enterprises will be proportional to the notional mass of carbon that each enterprise can verifiably mitigate over the long-term. The adjusted amount of carbon currency that will be offered to enterprises will equal the gross reward plus/minus any positive/negative adjustments. The positive adjustments are to reflect socio-ecological co-benefits, and the negative adjustments are to reflect socio-ecological harms.
Funding model.
The funding model for the policy is based on the above mentioned demand function for the carbon currency. This funding model will not result in any direct costs for governments, businesses or citizens because the mitigation cost will be channelled into the foreign exchange currency market when the world's major national currencies are devalued relative to the carbon currency. One advantage of this funding model is that it will allow governments to focus more on national priorities, such as climate adaptation, because global markets will be motivated and coordinated to achieve the agreed climate objective.
The policy is designed in such a way as to create a self-funding administrative system. The cost of the policy's administration and policing will be recovered through fees and commissions that will be charged to enterprises that earn the carbon currency as a reward.
Institutional Framework.
The institutional framework for implementing the global carbon reward will need to have the capacity to establish a supranational institution for managing the carbon currency with carbon quantitative easing (CQE). The proposed supranational institution is notionally called the carbon exchange authority. One option is to establish the carbon exchange authority under the auspices of the UNFCCC and in response to the Paris Agreement. Unlike previous treaties and agreements under the UNFCCC, the carbon exchange authority will interface with central banks via protocols for CQE, and this may require new channels for intergovernmental coordination, new mandates for central banks, and new legal structures for policy governance, international trade, and dispute resolution.
Social principles.
Common But Differentiated Responsibilities (CBDR) is the guiding principle of the UNFCCC. The CBDR principle was formalised at the Earth Summit in Rio de Janeiro, 1992. CBDR acknowledges that all states have a shared obligation to address environmental destruction, and that countries that have produced the most greenhouse gases should contribute proportionally more to climate change mitigation. CBDR is therefore consistent with the polluter pays principle.
The polluter pays principle has limitations when there is insufficient cooperation over cost sharing for the goal of protecting the global commons. Delton Chen and his colleagues propose that a new social principle is needed, called the preventative insurance principle, to explain the social context of the global carbon reward. This principle states that in order to protect the global commons — including the climate system and the planetary ecosystem — it is necessary to maximise societal cooperation by managing the mitigation costs in a way that avoids direct taxation and avoids fiscal spending by governments. The preventative insurance principle acknowledges that effective climate mitigation should be a priority given that future climate damages could be systemic, extreme, and irreversible if not adequately mitigated. The preventative insurance principle is combined with the polluter pays principle to justify a policy toolkit that consists of complementary 'carrot' and 'stick' policies.
Economic theory.
Holistic market hypothesis.
Under standard economic theory, as elaborated by leading economists such as Nicholas Stern and William Nordhaus, the market failure in carbon has resulted in a negative externality, called the social cost of carbon (SCC). According to standard theory, the SCC is a measure of the time-discounted climate-related damages caused by 1 tCO2 emitted in a given year. The SCC is used to estimate the ideal carbon tax under cost-benefit analysis.
Delton Chen, Joël van der Beek and Jonathan Cloud articulate an alternative theory, called the Holistic Market Hypothesis (HMH), that proposes that the standard theory is incomplete because the systemic risks that are structurally linked to the anthropogenic carbon balance are not addressed using the welfare theory of Arthur Cecil Pigou. The HMH states that systemic risks are probabilistic at the first order, and are therefore different to climate damages. The International Organization for Standardization (ISO) defines risk as the “effect of uncertainty on objectives” giving credence to the notion that systemic risks are not the same as social costs.
The HMH expands on the theory of Pigou by further proposing that the systemic risks associated with the anthropogenic carbon balance are unusually large and should be associated with a second externality — a positive externality. Under the HMH, the systemic risks associated with anthropogenic carbon are addressed with a second explicit price — a reward price. The reward price should be managed independently of carbon taxes, cap-and-trade, subsidies and carbon offsets.
Risk cost of carbon.
Under the HMH, the market failure in anthropogenic carbon is revised at the conceptual level to include a positive externality. The HMH thus makes the claim that the market failure in carbon consists of two externalised costs—the social cost of carbon (SCC) and the risk cost of carbon (RCC)—which are opposite and complementary. The RCC is used to quantify the positive externality, and it is evaluated using risk-effectiveness analysis. The RCC is then priced into the marketplace with a carbon currency, which is used to reward enterprises for their positive climate action.
The RCC is conceptualised as the cost of managing systemic risks that are coupled to the anthropogenic carbon balance. The RCC includes the cost of overcoming or bypassing societal systems that act as barriers to the decarbonisation of the world economy. These societal system may include monetary systems, financial systems, political systems, legal systems, etc. The RCC also includes the cost of responding preemptively to Earth systems that can produce positive climate feedbacks on carbon emissions and possible tipping points. These Earth systems include the atmosphere, cryosphere, hydrosphere, biosphere, and pedosphere, and their interactions.
The RCC is assessed as the average marginal cost of mitigating 1 tonne of CO2e for a 100-year duration, such that the global rate of mitigation is sufficient to achieve the agreed climate objective. The value of the RCC is used to establish the ideal floor price of the carbon currency. The carbon currency is an essential tool of the policy because the carbon currency can be used to bypass the existing financial system and avoid financial intermediaries and bottlenecks.
The RCC is created "ex post" to the introduction of the global carbon reward policy. This differs to the SCC, which is generated "ex ante" to the introduction of the carbon tax. The internalisation of the RCC into the economy should produce a positive externality because it will create a new global carbon market that has the qualities of a global public good. The positive externality is a global public good because it will produce benefits that are non-rivalrous, non-excludable, and available worldwide.
Relational diagram for market policies.
The manner in which the SCC and RCC are addressed in the HMH, is explained using a relational diagram that identifies four market-based policies as the principal options for pricing carbon. The relational diagram classifies market-based policies according to two important policy functions: (a) unit of account, and (b) store of value.
The relational diagram is a matrix with two columns and two rows. The two columns refer to 'fiat units' or 'carbon units' as the binary option for the unit of account of the policy tool. The two rows refer to ‘sticks’ or ‘carrots’ as the binary option for the policy tool, whereby a stick has a negative store of value, and a carrot has a positive store of value.
Delton Chen calls this relational diagram the carbon pricing matrix. The matrix denotes four market policies: the (1) carbon tax, (2) carbon subsidy, (3) cap and trade, and (4) global carbon reward. The left side of the carbon pricing matrix is consistent with Arthur C. Pigou’s 1920 treatise on externalised costs and his proposed method of pricing negative externalities with taxes, and pricing positive externalities with subsidies. The objective of the carbon tax is ostensibly to achieve allocative efficiency by internalising the SCC. In standard economics there is no mention of an explicit objective when using the carbon subsidy, although such subsidies have been used, such as the 45Q tax credit in the United States for carbon oxide capture and sequestration.
The right side of the carbon pricing matrix is linked to the Coase theorem for private bargaining because the policies on the right employ tradable permits and tokens with carbon units. The advantage of using tradable permits and tokens is the ability to achieve a Pareto optimal outcome. Cap and trade (e.g. South Korea's emissions trading scheme) aims to internalise the SCC via the trading of emissions permits however the objective of cap and trade policies is not explicit because the stringency of the cap is subject to other considerations besides the SCC.
The right side of the carbon pricing matrix frames the ideation of the global carbon reward. 'Carbon currency' is the name given to the tradable token that is the reward. For this particular policy, the strategy is to assign a floor price to the carbon currency in order to target the required mitigation rate. The currency is then issued to market actors who successfully mitigate carbon. The actual price of the carbon currency is then discovered by allowing the currency to be traded above its assigned floor price. The approach will invite private currency trading—an example of Coasian bargaining—for achieving a Pareto optimal outcome with regards to the distribution of the mitigation cost. The RCC will be fully internalised into the economy when sufficient carbon mitigation is provided to achieve the policy's main objective.
The HMH is a theory that claims that the market failure in carbon is not a classical market failure because of the large systemic risks that are inherent to the fast carbon cycle. The HMH also says that ‘carrot and stick’ carbon pricing is needed to correct the market failure in carbon. The two explicit carbon prices that are recommended, are as follows: (1) a 'stick' to maximise the efficiency of the marketplace according to the marginal social welfare theory of Pigou, and (2) a 'carrot' to manage the systemic risks associated with the anthropogenic carbon balance. The HMH ultimately says that correcting the market failure in carbon requires a trade-off between the two main objectives, such that some economic efficiency could be sacrificed in order to limit the systemic risks.
Resolving temporal paradoxes.
The estimation of the SCC has attracted considerable attention from economists and it is often controversial because of the sensitivity of the SCC to the social discount rate (SDR). A relatively high SDR will result in a lower carbon tax, short-term planning, and less regard for future generations. The narrative surrounding the SDR is often split between two sides, with one side favouring a descriptive SDR and a relatively low carbon tax, and the other side favouring an ethical (i.e. prescriptive) SDR and a relatively high carbon tax. The HMH offers a resolution to this problem by introducing a second policy tool (i.e. the carbon currency) and a second policy objective of internalising the RCC into the economy (i.e. to manage the climate-related systemic risk). The RCC is determined independently of marginal social welfare and the SDR, and so it is not affected by time discounting.
The Tragedy of the Horizon paradoxes are anecdotes presented by Mark Carney in reference to the short planning horizon of central banks in relation to risk management. They also refer to the more general problem that the current generation is weakly incentivised to fix the climate problem for future generations. Delton Chen infers that the rising floor price for the carbon currency — based on the RCC — will produce a secular bull market in the carbon currency for resolving these paradoxes. In other words, Chen’s solution is to “…convert tomorrow’s risk into today’s profits”. This may be restated as follows: the carbon currency will act as a negative feedback on global warming because it is an investment-grade currency that is pro-cyclical with the climate risk.
Policy for net-zero carbon.
Delton Chen and his co-authors propose that the carbon currency can be used to create a new roadmap to net-zero carbon emissions. They propose that the world economy can be reconfigured as a dual-market system, comprising (a) existing markets that use national fiat currencies to price goods and services, and (b) a global market for carbon mitigating services and for receiving the carbon currency as a reward. Existing markets are framed by official national currencies, whereas the new global market will be framed by the carbon currency which does not act as a medium-of-exchange but instead acts as a price signal and store-of-value. Furthermore, they propose that the global annual mitigation rate, formula_0, that earns the carbon currency can be sub-divided into: (1) the portion that is economically coupled to existing markets, formula_1; and (2) the portion that is economically decoupled from existing markets, formula_2. The mitigation rate that is decoupled, formula_2, will be dependent on the economic value of the carbon currency given that this currency will be the primary source of funding for carbon dioxide removal (CDR).
Modified Kaya identity.
The original Kaya identity relates global CO2 emissions to various factors, including gross world product, denoted as "G." If the global carbon reward is fully implemented, then the carbon currency should be available as a reward in every country, but the carbon currency will not have the status of legal tender (i.e. it is not a medium of exchange). Consequently, the trading of goods and services with the carbon currency will not be allowed, and so the carbon currency will not factor into the calculation of gross domestic product (GDP) or "G". However, the carbon currency will influence "G" because the currency will be used to increase the marginal value of the goods and services that have utility for reducing CO2 emissions or for removing CO2 from the ambient atmosphere.
Delton Chen and his co-authors propose that the total mass of anthropogenic CO2 that will be emitted globally can be described using a modified version of the Kaya identity, as shown below. In this modified version of the Kaya identity, that portion of mitigated CO2 that is decoupled from the economy, formula_2, is subtracted from the original Kaya identity:
formula_3
Where:
"F" is the total mass of anthropogenic CO2 emitted globally,
"P" is the global population,
"G" is the gross world product,
"E" is the global consumption of primary energy.
And:
formula_4 is the fraction of the global annual mitigation rate that is economically decoupled from existing markets, so that formula_2 is the decoupled portion of the mitigation defined above.
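A toy numerical sketch of the modified identity is given below; the function and all input values are made-up round numbers for illustration only, not figures from the policy literature:

```python
def modified_kaya(P, gdp_per_capita, energy_intensity, carbon_intensity, omega, delta_Q):
    """Net emissions F = P * (G/P) * (E/G) * (F/E) - omega * delta_Q."""
    return P * gdp_per_capita * energy_intensity * carbon_intensity - omega * delta_Q

# Illustrative inputs: 8e9 people, 12500 $/person, 7.5e6 J/$, 6e-11 tCO2/J,
# with 10 GtCO2/yr of rewarded mitigation, half of it decoupled from existing markets.
print(modified_kaya(8e9, 12_500, 7.5e6, 6e-11, omega=0.5, delta_Q=10e9))   # ~4e10 tCO2
```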
Delton Chen and his co-authors propose that formula_2 will be significant because carbon dioxide removal (CDR) constitutes a new economic sector that is mostly unrelated to previous economic activity. Also, various negative emissions technologies (NETs) can be powered directly by the sun and other kinds of renewable energy, and as such a certain portion of CDR will be self-reliant in terms of energy inputs.
With reference to the above formula, the global carbon reward can be used to reduce "F" in absolute terms by incentivising the following:
The above four activities may be undertaken to increase absolute reductions in "F" until net-zero carbon (i.e. "F" = 0) is achieved. Delton Chen names the economic growth pattern that will result from these activities as optimal growth. By giving the carbon currency a predictable rising floor price, the carbon currency will attract investment demand from institutional investors and households. Furthermore, projects that are effective at mitigating carbon will report higher revenue and higher profits, and as such the carbon currency can act as an index for the profitability and effectiveness of low-carbon investments. By inviting households to invest in the carbon currency, the resulting increase in the average savings rate of the participating households could help reduce household consumption.
All of the above mentioned effects, when combined, might indirectly reduce "G" and "E," but optimal growth does not include incentives for explicitly reducing "G" or "E." The global carbon reward only treats "G" and "E" as a dependent variables, given that economic decarbonisation will influence "G" and "E." Observed changes in the quantity and the quality of "G" and "E" will be used in a feedback loop in the assessment of the carbon currency's floor price, and in the design of the reward rules.
If reducing "G" or "E" in absolute terms were to be adopted as an explicit policy objective, then this would constitute a different policy approach, called economic de-growth. According to a quantitative assessment by Keyßer and Lenzen, de-growth scenarios appear less risky than technology-driven pathways that support more consumption and economic growth. Economic de-growth and solar geoengineering are two additional mitigation strategies that could be considered if the global carbon reward and conventional policies are insufficient for achieving the desired climate objective. The global carbon reward policy does not financially reward economic de-growth or solar geoengineering, and as such implementing these strategies will require additional policies. | [
{
"math_id": 0,
"text": "\\bigtriangleup\\! Q"
},
{
"math_id": 1,
"text": "(1-\\omega) \\bigtriangleup\\! Q"
},
{
"math_id": 2,
"text": "\\omega \\bigtriangleup\\! Q"
},
{
"math_id": 3,
"text": "F = P \\cdot \\frac{G}{P} \\cdot \\frac{E}{G} \\cdot \\frac{F}{E}-\\omega \\bigtriangleup\\! Q"
},
{
"math_id": 4,
"text": "\\omega"
},
{
"math_id": 5,
"text": "\\bigtriangleup"
},
{
"math_id": 6,
"text": "\\omega \\bigtriangleup"
}
]
| https://en.wikipedia.org/wiki?curid=68758670 |
68763568 | Entanglement of formation | Definition in quantum information theory
The entanglement of formation is a quantity that measures the entanglement of a bipartite quantum state.
Definition.
For a pure bipartite quantum state formula_0, using Schmidt decomposition, we see that the reduced density matrices of systems A and B, formula_1 and formula_2, have the same spectrum. The von Neumann entropy formula_3 of the reduced density matrix can be used to measure the entanglement of the state formula_0. We denote this kind of measure as formula_4, and call it the entanglement entropy. This is also known as the entanglement of formation of a pure state.
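As a concrete illustration (not part of the original article), the entanglement entropy of a pure two-qubit state can be computed from its Schmidt coefficients, which are the singular values of the reshaped coefficient matrix:

```python
import numpy as np

def entanglement_entropy(psi):
    """psi: length-4 amplitude vector over |00>, |01>, |10>, |11>."""
    s = np.linalg.svd(np.asarray(psi, dtype=complex).reshape(2, 2), compute_uv=False)
    p = s**2                       # spectrum shared by both reduced density matrices
    p = p[p > 1e-12]               # drop zeros so the logarithm is defined
    return float(-np.sum(p * np.log2(p)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
print(entanglement_entropy(bell))              # 1.0 ebit
print(entanglement_entropy([1, 0, 0, 0]))      # 0.0 for a product state
```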
For a mixed bipartite state formula_5, a natural generalization is to consider all the ensemble realizations of the mixed state.
We define the entanglement of formation for mixed states by minimizing over all these ensemble realizations,
formula_6, where the infimum is taken over all the possible ways in which one can decompose formula_5 into pure states formula_7.
This kind of extension of a quantity defined on some set (here the pure states) to its convex hull (here the mixed states) is called a convex roof construction.
Properties.
Entanglement of formation quantifies how much entanglement (measured in ebits) is necessary, on average, to prepare the state. The measure clearly coincides with entanglement entropy for pure states. It is zero for all separable states and non-zero for all entangled states. By construction, formula_8 is convex.
Entanglement of formation is known to be a "non-additive" measure of entanglement. That is, there are bipartite quantum states formula_9 such that the entanglement of formation of the joint state formula_10 is smaller than the sum of the individual states' entanglement, i.e., formula_11. Note that for other states (for example pure or separable states) equality holds.
Furthermore, it has been shown that the "regularized" entanglement of formation equals the "entanglement cost". That is, for large formula_12 the entanglement of formation of formula_12 copies of a state formula_13 divided by formula_12 converges to the entanglement cost
formula_14
The non-additivity of formula_8 thus implies that there are quantum states for which there is a “bulk discount” when preparing them from pure states by local operations: it is cheaper, on average, to prepare many together than each one separately.
Relation with concurrence.
For states of two qubits, the entanglement of formation has a close relationship with concurrence. For a given state formula_15, its entanglement of formation formula_16 is related to its concurrence formula_17:
formula_18
where formula_19 is the Shannon entropy function,
formula_20
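For two qubits this relation can be evaluated directly. The following Python sketch (an illustration added here, not part of the cited literature; the function names and the Bell-state check are choices of this example) computes the concurrence of a two-qubit density matrix via Wootters' spin-flip construction and then applies the formula above.
<syntaxhighlight lang="python">
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho (4x4, unit trace)."""
    sigma_y = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sigma_y, sigma_y)
    rho_tilde = yy @ rho.conj() @ yy                 # spin-flipped state
    eigs = np.linalg.eigvals(rho @ rho_tilde)        # eigenvalues of rho * rho_tilde
    lam = np.sqrt(np.sort(np.abs(eigs))[::-1])       # their square roots, in decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def binary_entropy(x):
    """Shannon entropy h(x) in bits, with h(0) = h(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def entanglement_of_formation(rho):
    C = concurrence(rho)
    x = (1.0 + np.sqrt(max(0.0, 1.0 - C * C))) / 2.0
    return binary_entropy(x)

# Check on the Bell state |Phi+> = (|00> + |11>) / sqrt(2): C = 1 and E_f = 1 ebit.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho_bell = np.outer(psi, psi)
print(entanglement_of_formation(rho_bell))           # ~1.0
</syntaxhighlight>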
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|\\psi\\rangle_{AB}"
},
{
"math_id": 1,
"text": "\\rho_A"
},
{
"math_id": 2,
"text": "\\rho_B"
},
{
"math_id": 3,
"text": " S(\\rho_A)=S(\\rho_B) "
},
{
"math_id": 4,
"text": "E_{f}(|\\psi\\rangle_{AB})=S(\\rho_A)=S(\\rho_B) "
},
{
"math_id": 5,
"text": "\\rho_{AB}"
},
{
"math_id": 6,
"text": "E_f (\\rho_{AB})= \\inf\\left\\{ \\sum_i p_i E_f(|\\psi_i\\rangle_{AB})\\right\\} "
},
{
"math_id": 7,
"text": "\\rho_{AB}=\\sum_i p_i |\\psi_i \\rangle \\langle \\psi_i|_{AB}"
},
{
"math_id": 8,
"text": "E_f"
},
{
"math_id": 9,
"text": "\\rho_{AB}, \\sigma_{AB}"
},
{
"math_id": 10,
"text": "\\rho_{AB}\\otimes\\sigma_{AB}"
},
{
"math_id": 11,
"text": "E_f(\\rho_{AB}\\otimes\\sigma_{AB}) < E_f(\\rho_{AB})+E_f(\\sigma_{AB})"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "\\rho"
},
{
"math_id": 14,
"text": "\\lim_{n\\to\\infty} E_f(\\rho^{\\otimes n})/n = E_c(\\rho)"
},
{
"math_id": 15,
"text": " \\rho_{AB}"
},
{
"math_id": 16,
"text": "E_f (\\rho_{AB})"
},
{
"math_id": 17,
"text": "C"
},
{
"math_id": 18,
"text": "E_f=h\\left(\\frac{1+\\sqrt{1-C^2}}{2}\\right) "
},
{
"math_id": 19,
"text": " h(x) "
},
{
"math_id": 20,
"text": "h(x) = -x \\log_2 x -(1-x) \\log_2 (1-x)."
}
]
| https://en.wikipedia.org/wiki?curid=68763568 |
68777315 | Classical probability density |
The classical probability density is the probability density function that represents the likelihood of finding a particle in the vicinity of a certain location subject to a potential energy in a classical mechanical system. These probability densities are helpful in gaining insight into the correspondence principle and making connections between the quantum system under study and the classical limit.
Mathematical background.
Consider the example of a simple harmonic oscillator initially at rest with amplitude "A". Suppose that this system was placed inside a light-tight container such that one could only view it using a camera which can only take a snapshot of what's happening inside. Each snapshot has some probability of seeing the oscillator at any possible position "x" along its trajectory. The classical probability density encapsulates which positions are more likely, which are less likely, the average position of the system, and so on. To derive this function, consider the fact that the positions where the oscillator is most likely to be found are those positions at which the oscillator spends most of its time. Indeed, the probability of being at a given "x"-value is proportional to the time spent in the vicinity of that "x"-value. If the oscillator spends an infinitesimal amount of time "dt" in the vicinity "dx" of a given "x"-value, then the probability "P"("x") "dx" of being in that vicinity will be
formula_0
Since the force acting on the oscillator is conservative and the motion occurs over a finite domain, the motion will be cyclic with some period which will be denoted "T". Since the probability of the oscillator being at any possible position between the minimum possible "x"-value and the maximum possible "x"-value must sum to 1, the normalization
formula_1
is used, where "N" is the normalization constant. Since the oscillating mass covers this range of positions in half its period (a full period goes from −"A" to +"A" then back to −"A") the integral over "t" is equal to "T"/2, which sets "N" to be 2/"T".
Using the chain rule, "dt" can be put in terms of the height at which the mass is lingering by noting that "dt" = "dx"/("dx"/"dt"), so our probability density becomes
formula_2
where "v"("x") is the speed of the oscillator as a function of its position. (Note that because speed is a scalar, "v"("x") is the same for both half periods.) At this point, all that is needed is to provide a function "v"("x") to obtain "P"("x"). For systems subject to conservative forces, this is done by relating speed to energy. Since kinetic energy "K" is and the total energy "E"
"K" + "U", where "U"("x") is the potential energy of the system, the speed can be written as
formula_3
Plugging this into our expression for "P"("x") yields
formula_4
Though our starting example was the harmonic oscillator, all the math up to this point has been completely general for a particle subject to a conservative force. This formula can be generalized for any one-dimensional physical system by plugging in the corresponding potential energy function. Once this is done, "P"("x") is readily obtained for any allowed energy "E".
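As a numerical illustration of this last formula (added here as a sketch; the function names, the use of SciPy, and the harmonic-oscillator check are assumptions of this example, not part of the derivation), "P"("x") can be evaluated for an arbitrary one-dimensional potential by first computing the half-period as the integral of 1/"v"("x") between the turning points:
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

def classical_density(U, E, m, x_min, x_max):
    """Return P(x) for a particle of mass m and energy E in the potential U(x).

    x_min and x_max are the classical turning points, where U(x) = E.
    The integrand 1/v(x) has an integrable singularity at the turning points;
    quad copes with it here, though a dedicated quadrature would be more robust.
    """
    v = lambda x: np.sqrt(2.0 * (E - U(x)) / m)               # speed from energy conservation
    half_period, _ = quad(lambda x: 1.0 / v(x), x_min, x_max, limit=200)
    T = 2.0 * half_period
    return lambda x: (1.0 / T) * np.sqrt(2.0 * m / (E - U(x)))

# Harmonic-oscillator check with k = m = 1 and amplitude A = 1, so E = 1/2:
U = lambda x: 0.5 * x**2
P = classical_density(U, E=0.5, m=1.0, x_min=-1.0, x_max=1.0)
print(P(0.0), 1.0 / np.pi)                                    # both ~0.318
</syntaxhighlight>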
Examples.
Simple harmonic oscillator.
Starting with the example used in the derivation above, the simple harmonic oscillator has the potential energy function
formula_5
where "k" is the spring constant of the oscillator and "ω"
2"π"/"T" is the natural angular frequency of the oscillator. The total energy of the oscillator is given by evaluating "U"("x") at the turning points "x"
±"A". Plugging this into the expression for "P"("x") yields
formula_6
This function has two vertical asymptotes at the turning points, which makes physical sense since the turning points are where the oscillator is at rest, and thus will be most likely found in the vicinity of those "x" values. Note that even though the probability density function tends toward infinity, the probability is still finite due to the area under the curve, and not the curve itself, representing probability.
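The "snapshot" picture described above can also be checked by direct simulation: sampling the oscillator's position at uniformly random times reproduces this arcsine-shaped density. A brief Python sketch (illustrative only; unit amplitude and angular frequency are assumed):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A, omega = 1.0, 1.0
t = rng.uniform(0.0, 2.0 * np.pi / omega, size=1_000_000)   # random snapshot times over one period
x = A * np.sin(omega * t)                                    # oscillator position at each snapshot

# Compare the empirical histogram with P(x) = 1 / (pi * sqrt(A^2 - x^2)) at the central bin.
hist, edges = np.histogram(x, bins=50, range=(-A, A), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mid = len(centers) // 2
print(hist[mid], 1.0 / (np.pi * np.sqrt(A**2 - centers[mid]**2)))   # both ~0.32
</syntaxhighlight>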
Bouncing ball.
For the lossless bouncing ball, the potential energy and total energy are
formula_7
formula_8
where "h" is the maximum height reached by the ball. Plugging these into "P"("z") yields
formula_9
where the relation formula_10 was used to simplify the factors out front. The domain of this function is formula_11 (the ball does not fall through the floor at "z" = 0), so the distribution is not symmetric as in the case of the simple harmonic oscillator. Again, there is a vertical asymptote at the turning point "z" = "h".
Momentum-space distribution.
In addition to looking at probability distributions in position space, it is also helpful to characterize a system based on its momentum. Following a similar argument as above, the result is
formula_12
where "F"("x")
−"dU"/"dx" is the force acting on the particle as a function of position. In practice, this function must be put in terms of the momentum "p" by change of variables.
Simple harmonic oscillator.
Taking the example of the simple harmonic oscillator above, the potential energy and force can be written as
formula_13
formula_14
Identifying (2"mE")1/2
"p"0 as the maximum momentum of the system, this simplifies to
formula_15
Note that this has the same functional form as the position-space probability distribution. This is specific to the problem of the simple harmonic oscillator and arises due to the symmetry between "x" and "p" in the equations of motion.
Bouncing ball.
The example of the bouncing ball is more straightforward, since in this case the force is a constant,
formula_16
resulting in the probability density function
formula_17
where "p"0
"m"(2"gh")1/2 is the maximum momentum of the ball. In this system, all momenta are equally probable.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P(x)\\, dx \\propto dt."
},
{
"math_id": 1,
"text": "\\int_{x_{\\rm min}}^{x_{\\rm max}} P(x)\\, dx = 1 = N \\int_{t_i}^{t_f} dt"
},
{
"math_id": 2,
"text": "P(x)\\,dx = \\frac{2}{T}\\, \\frac{dx}{dx/dt} = \\frac{2}{T}\\, \\frac{dx}{v(x)},"
},
{
"math_id": 3,
"text": "v(x) = \\sqrt{\\frac{2K}{m}} = \\sqrt{\\frac{2}{m}[E-U(x)]}."
},
{
"math_id": 4,
"text": "P(x) = \\frac{1}{T} \\sqrt{\\frac{2m}{E-U(x)}}."
},
{
"math_id": 5,
"text": "U(x) = \\frac{1}{2} kx^2 = \\frac{1}{2} m\\omega^2 x^2,"
},
{
"math_id": 6,
"text": "P(x) = \\frac{1}{\\pi}\\frac{1}{\\sqrt{A^2-x^2}}."
},
{
"math_id": 7,
"text": "U(z) = mgz,"
},
{
"math_id": 8,
"text": "E = mgh,"
},
{
"math_id": 9,
"text": "P(z) = \\frac{1}{2\\sqrt{h}}\\frac{1}{\\sqrt{h-z}},"
},
{
"math_id": 10,
"text": "T = \\sqrt{8h/g}"
},
{
"math_id": 11,
"text": "z \\in [0,h]"
},
{
"math_id": 12,
"text": "P(p) = \\frac{2}{T}\\frac{1}{|F(x)|},"
},
{
"math_id": 13,
"text": "U(x) = \\frac{1}{2}kx^2,"
},
{
"math_id": 14,
"text": "|F(x)| = |-kx| = \\sqrt{2kU(x)} = \\sqrt{\\frac{k}{m}(2mE - p^2)}."
},
{
"math_id": 15,
"text": "P(p) = \\frac{1}{\\pi} \\frac{1}{\\sqrt{p_0^2 - p^2}}."
},
{
"math_id": 16,
"text": "F(x) = mg,"
},
{
"math_id": 17,
"text": "P(p) = \\frac{1}{m\\sqrt{8gh}} = \\frac{1}{2p_0} \\text{ for } |p| < p_0,"
}
]
| https://en.wikipedia.org/wiki?curid=68777315 |
68780220 | Gaussen Index | The Gaussen Index (or Bagnouls-Gaussen Index) or xerothermic index is a method of calculating and comparing aridity.
According to Henri Gaussen (a French botanist and biogeographer), a given period is said to be arid when:
formula_0.
The resulting index number indicates the number of biologically dry days in a year for a given location (it therefore ranges between 0 and 365). The data include not only precipitation "P" but also fog, dew and the humidity of the air.
In general, it is accepted that an environment is non-arid when the index is less than 100, semi-arid between 100 and 290, arid between 290 and 350, and hyperarid between 350 and 365.
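In practice the index is often approximated from monthly climate normals, by counting the days of the months in which the condition above holds (precipitation, in mm, below twice the mean temperature, in °C). The Python sketch below only illustrates this monthly approximation; the sample figures are invented, not real climate data.
<syntaxhighlight lang="python">
# Monthly normals as (mean temperature in deg C, total precipitation in mm, days in the month).
# The figures below are placeholders, not measurements for any real station.
months = [
    (6.0, 55.0, 31), (7.5, 45.0, 28), (10.0, 50.0, 31), (13.0, 60.0, 30),
    (17.0, 40.0, 31), (21.0, 25.0, 30), (24.0, 12.0, 31), (24.0, 20.0, 31),
    (20.0, 60.0, 30), (15.0, 90.0, 31), (10.0, 80.0, 30), (7.0, 65.0, 31),
]

def gaussen_dry_days(months):
    """Approximate xerothermic index: total days of the months in which P < 2T."""
    return sum(days for temperature, precipitation, days in months
               if precipitation < 2.0 * temperature)

print(gaussen_dry_days(months))   # number of biologically dry days in the year (0 to 365)
</syntaxhighlight>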
This index is very useful in conjunction with an ombrothermic diagram, which is always constructed on the scale 1 °C = 2 mm of precipitation.
Other indices, such as the Louis Emberger rainfall quotient (which exists in several variants), have also been defined. However, the Gaussen index, which is simple and precise, is often preferred: Gaussen defined the four nuances of the Mediterranean climate directly in terms of this index, whereas Emberger's quotient characterises the humidity level of a region with a Mediterranean climate without precisely delimiting the Mediterranean climate itself.
The calculation does not fully reflect reality because it is based on averages. For example, according to the calculation, there are 0 biologically dry days in Lyon, compared with 60 biologically dry days in Marseille.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " P < {2 \\times T} "
}
]
| https://en.wikipedia.org/wiki?curid=68780220 |
68789736 | Lenglart's inequality | Mathematical Inequality
In the mathematical theory of probability, Lenglart's inequality was proved by Èrik Lenglart in 1977. Later slight modifications are also called Lenglart's inequality.
Statement.
Let "X" be a non-negative right-continuous formula_0-adapted process and let "G" be a non-negative right-continuous non-decreasing predictable process such that formula_1 for any bounded stopping time formula_2. Then
References.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{F}_t"
},
{
"math_id": 1,
"text": "\\mathbb{E}[X(\\tau)\\mid \\mathcal{F}_0]\\leq \\mathbb{E}[G(\\tau)\\mid \\mathcal{F}_0]< \\infty"
},
{
"math_id": 2,
"text": "\\tau"
}
]
| https://en.wikipedia.org/wiki?curid=68789736 |
68789853 | Stochastic Gronwall inequality | Stochastic Gronwall inequality is a generalization of Gronwall's inequality and has been used for proving the well-posedness of path-dependent stochastic differential equations with local monotonicity and coercivity assumption with respect to supremum norm.
Statement.
Let formula_0 be a non-negative right-continuous formula_1-adapted process. Assume that formula_2 is a deterministic non-decreasing càdlàg function with formula_3 and let formula_4 be a non-decreasing and càdlàg adapted process starting from formula_5. Further, let formula_6 be an formula_1-local martingale with formula_7 and càdlàg paths.
Assume that for all formula_8,
formula_9
where formula_10.
and define formula_11. Then the following estimates hold for formula_12 and formula_13:
Proof.
It has been proved using Lenglart's inequality.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X(t),\\, t\\geq 0"
},
{
"math_id": 1,
"text": "(\\mathcal{F}_t)_{t\\ge 0}"
},
{
"math_id": 2,
"text": "A:[0,\\infty)\\to[0,\\infty)"
},
{
"math_id": 3,
"text": "A(0)=0"
},
{
"math_id": 4,
"text": "H(t),\\,t\\geq 0"
},
{
"math_id": 5,
"text": "H(0)\\geq 0"
},
{
"math_id": 6,
"text": "M(t),\\,t\\geq 0"
},
{
"math_id": 7,
"text": "M(0)=0"
},
{
"math_id": 8,
"text": "t\\geq 0"
},
{
"math_id": 9,
"text": " X(t)\\leq \\int_0^t X^*(u^-)\\,d A(u)+M(t)+H(t),"
},
{
"math_id": 10,
"text": "X^*(u):=\\sup_{r\\in[0,u]}X(r)"
},
{
"math_id": 11,
"text": "c_p=\\frac{p^{-p}}{1-p}"
},
{
"math_id": 12,
"text": "p\\in (0,1)"
},
{
"math_id": 13,
"text": "T>0"
},
{
"math_id": 14,
"text": "\\mathbb{E} \\big(H(T)^p\\big)<\\infty"
},
{
"math_id": 15,
"text": "H"
},
{
"math_id": 16,
"text": "\\mathbb{E}\\left[\\left(X^*(T)\\right)^p\\Big\\vert\\mathcal{F}_0\\right]\\leq \\frac{c_p}{p}\\mathbb{E}\\left[(H(T))^p\\big\\vert\\mathcal{F}_0\\right] \\exp \\left\\lbrace c_p^{1/p}A(T)\\right\\rbrace"
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "\\mathbb{E}\\left[\\left(X^*(T)\\right)^p\\Big\\vert\\mathcal{F}_0\\right]\\leq \\frac{c_p+1}{p}\\mathbb{E}\\left[(H(T))^p\\big\\vert\\mathcal{F}_0\\right] \\exp \\left\\lbrace (c_p+1)^{1/p}A(T)\\right\\rbrace"
},
{
"math_id": 19,
"text": "\\mathbb{E} H(T)<\\infty,"
},
{
"math_id": 20,
"text": "\\displaystyle{\\mathbb{E}\\left[\\left(X^*(T)\\right)^p\\Big\\vert\\mathcal{F}_0\\right]\\leq \\frac{c_p}{p}\\left(\\mathbb{E}\\left[ H(T)\\big\\vert\\mathcal{F}_0\\right]\\right)^p \\exp \\left\\lbrace c_p^{1/p} A(T)\\right\\rbrace}"
}
]
| https://en.wikipedia.org/wiki?curid=68789853 |
6879051 | Line chart | Chart type
A line chart or line graph, also known as curve chart, is a type of chart that displays information as a series of data points called 'markers' connected by straight line segments. It is a basic type of chart common in many fields. It is similar to a scatter plot except that the measurement points are ordered (typically by their x-axis value) and joined with straight line segments. A line chart is often used to visualize a trend in data over intervals of time – a time series – thus the line is often drawn chronologically. In these cases they are known as run charts.
History.
Some of the earliest known line charts are generally credited to Francis Hauksbee, Nicolaus Samuel Cruquius, Johann Heinrich Lambert and William Playfair.
Example.
In the experimental sciences, data collected from experiments are often visualized by a graph. For example, if one collects data on the speed of an object at certain points in time, one can visualize the data in a data table such as the following:
Such a table representation of data is a great way to display exact values, but it can prevent the discovery and understanding of patterns in the values. In addition, a table display is often erroneously considered to be an objective, neutral collection or storage of the data (and may in that sense even be erroneously considered to be the data itself) whereas it is in fact just one of various possible visualizations of the data.
Understanding the process described by the data in the table is aided by producing a graph or line chart of "speed versus time". Such a visualisation appears in the figure to the right. This visualization can let the viewer quickly understand the entire process at a glance.
This visualization can however be misunderstood, especially when expressed as showing the mathematical function formula_0 that expresses the speed formula_1 (the dependent variable) as a function of time formula_2. This can be misunderstood as showing speed to be a variable that is dependent only on time. This would however only be true in the case of an object being acted on only by a constant force acting in a vacuum.
Best-fit.
Charts often include an overlaid mathematical function depicting the best-fit trend of the scattered data. This layer is referred to as a best-fit layer and the graph containing this layer is often referred to as a line graph.
It is simple to construct a "best-fit" layer consisting of a set of line segments connecting adjacent data points; however, such a "best-fit" is usually not an ideal representation of the trend of the underlying scatter data for the following reasons: it is highly improbable that the discontinuities in the slope of the best fit would coincide exactly with the positions of the measurement values, and it is highly improbable that the experimental error in the data is negligible.
In either case, the best-fit layer can reveal trends in the data. Further, measurements such as the gradient or the area under the curve can be made visually, leading to more conclusions or results from the data table.
A true best-fit layer should depict a continuous mathematical function whose parameters are determined by using a suitable error-minimization scheme, which appropriately weights the error in the data values. Such curve fitting functionality is often found in graphing software or spreadsheets. Best-fit curves may vary from simple linear equations to more complex quadratic, polynomial, exponential, and periodic curves.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v(t)"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "t"
}
]
| https://en.wikipedia.org/wiki?curid=6879051 |
6880370 | Philosophy of language | In analytic philosophy, philosophy of language investigates the nature of language and the relations between language, language users, and the world. Investigations may include inquiry into the nature of meaning, intentionality, reference, the constitution of sentences, concepts, learning, and thought.
Gottlob Frege and Bertrand Russell were pivotal figures in analytic philosophy's "linguistic turn". These writers were followed by Ludwig Wittgenstein ("Tractatus Logico-Philosophicus"), the Vienna Circle, logical positivists, and Willard Van Orman Quine.
History.
Ancient philosophy.
In the West, inquiry into language stretches back to the 5th century BC with Socrates, Plato, Aristotle, and the Stoics. Linguistic speculation predated systematic descriptions of grammar which emerged c. the 5th century BC in India and c. the 3rd century BC in Greece.
In the dialogue "Cratylus", Plato considered the question of whether the names of things were determined by convention or by nature. He criticized conventionalism because it led to the bizarre consequence that anything can be conventionally denominated by any name. Hence, it cannot account for the correct or incorrect application of a name. He claimed that there was a natural correctness to names. To do this, he pointed out that compound words and phrases have a range of correctness. He also argued that primitive names had a natural correctness, because each phoneme represented basic ideas or sentiments. For example, for Plato the letter "l" and its sound represented the idea of softness. However, by the end of "Cratylus", he had admitted that some social conventions were also involved, and that there were faults in the idea that phonemes had individual meanings. Plato is often considered a proponent of extreme realism.
Aristotle interested himself in issues of logic, categories, and the creation of meaning. He separated all things into categories of species and genus. He thought that the meaning of a predicate was established through an abstraction of the similarities between various individual things. This theory later came to be called "nominalism". However, since Aristotle took these similarities to be constituted by a real commonality of form, he is more often considered a proponent of moderate realism.
The Stoics made important contributions to the analysis of grammar, distinguishing five parts of speech: nouns, verbs, appellatives (names or epithets), conjunctions and articles. They also developed a sophisticated doctrine of the "lektón" associated with each sign of a language, but distinct from both the sign itself and the thing to which it refers. This "lektón" was the meaning or sense of every term. The complete "lektón" of a sentence is what we would now call its proposition. Only propositions were considered truth-bearing—meaning they could be considered true or false—while sentences were simply their vehicles of expression. Different "lektá" could also express things besides propositions, such as commands, questions and exclamations.
Medieval philosophy.
Medieval philosophers were greatly interested in the subtleties of language and its usage. For many scholastics, this interest was provoked by the necessity of translating Greek texts into Latin. There were several noteworthy philosophers of language in the medieval period. According to Peter J. King, (although this has been disputed), Peter Abelard anticipated the modern theories of reference. Also, William of Ockham's "Summa Logicae" brought forward one of the first serious proposals for codifying a mental language.
The scholastics of the high medieval period, such as Ockham and John Duns Scotus, considered logic to be a "scientia sermocinalis" (science of language). The result of their studies was the elaboration of linguistic-philosophical notions whose complexity and subtlety has only recently come to be appreciated. Many of the most interesting problems of modern philosophy of language were anticipated by medieval thinkers. The phenomena of vagueness and ambiguity were analyzed intensely, and this led to an increasing interest in problems related to the use of "syncategorematic" words such as "and", "or", "not", "if", and "every". The study of "categorematic" words (or "terms") and their properties was also developed greatly. One of the major developments of the scholastics in this area was the doctrine of the "suppositio". The "suppositio" of a term is the interpretation that is given of it in a specific context. It can be "proper" or "improper" (as when it is used in metaphor, metonyms and other figures of speech). A proper "suppositio", in turn, can be either formal or material accordingly when it refers to its usual non-linguistic referent (as in "Charles is a man"), or to itself as a linguistic entity (as in ""Charles" has seven letters"). Such a classification scheme is the precursor of modern distinctions between use and mention, and between language and metalanguage.
There is a tradition called speculative grammar which existed from the 11th to the 13th century. Leading scholars included Martin of Dacia and Thomas of Erfurt (see "Modistae").
Modern philosophy.
Linguists of the Renaissance and Baroque periods such as Johannes Goropius Becanus, Athanasius Kircher and John Wilkins were infatuated with the idea of a philosophical language reversing the confusion of tongues, influenced by the gradual discovery of Chinese characters and Egyptian hieroglyphs ("Hieroglyphica"). This thought parallels the idea that there might be a universal language of music.
European scholarship began to absorb the Indian linguistic tradition only from the mid-18th century, pioneered by Jean François Pons and Henry Thomas Colebrooke (the "editio princeps" of Varadarāja, a 17th-century Sanskrit grammarian, dating to 1849).
In the early 19th century, the Danish philosopher Søren Kierkegaard insisted that language ought to play a larger role in Western philosophy. He argued that philosophy has not sufficiently focused on the role language plays in cognition and that future philosophy ought to proceed with a conscious focus on language:
<templatestyles src="Template:Blockquote/styles.css" />If the claim of philosophers to be unbiased were all it pretends to be, it would also have to take account of language and its whole significance in relation to speculative philosophy ... Language is partly something originally given, partly that which develops freely. And just as the individual can never reach the point at which he becomes absolutely independent ... so too with language.
Contemporary philosophy.
The phrase "linguistic turn" was used to describe the noteworthy emphasis that contemporary philosophers put upon language.
Language began to play a central role in Western philosophy in the early 20th century. One of the central figures involved in this development was the German philosopher Gottlob Frege, whose work on philosophical logic and the philosophy of language in the late 19th century influenced the work of 20th-century analytic philosophers Bertrand Russell and Ludwig Wittgenstein. The philosophy of language became so pervasive that for a time, in analytic philosophy circles, philosophy as a whole was understood to be a matter of philosophy of language.
In continental philosophy, the foundational work in the field was Ferdinand de Saussure's "Cours de linguistique générale", published posthumously in 1916.
Major topics and subfields.
Meaning.
The topic that has received the most attention in the philosophy of language has been the "nature" of meaning, to explain what "meaning" is, and what we mean when we talk about meaning. Within this area, issues include: the nature of synonymy, the origins of meaning itself, our apprehension of meaning, and the nature of composition (the question of how meaningful units of language are composed of smaller meaningful parts, and how the meaning of the whole is derived from the meaning of its parts).
There have been several distinctive explanations of what a linguistic "meaning" is. Each has been associated with its own body of literature.
Reference.
Investigations into how language interacts with the world are called theories of reference. Gottlob Frege was an advocate of a mediated reference theory. Frege divided the semantic content of every expression, including sentences, into two components: sense and reference. The sense of a sentence is the thought that it expresses. Such a thought is abstract, universal and objective. The sense of any sub-sentential expression consists in its contribution to the thought that its embedding sentence expresses. Senses determine reference and are also the modes of presentation of the objects to which expressions refer. Referents are the objects in the world that words pick out. The senses of sentences are thoughts, while their referents are truth values (true or false). The referents of sentences embedded in propositional attitude ascriptions and other opaque contexts are their usual senses.
Bertrand Russell, in his later writings and for reasons related to his theory of acquaintance in epistemology, held that the only directly referential expressions are, what he called, "logically proper names". Logically proper names are such terms as "I", "now", "here" and other indexicals. He viewed proper names of the sort described above as "abbreviated definite descriptions" (see "Theory of descriptions"). Hence "Joseph R. Biden" may be an abbreviation for "the current President of the United States and husband of Jill Biden". Definite descriptions are denoting phrases (see "On Denoting") which are analyzed by Russell into existentially quantified logical constructions. Such phrases denote in the sense that there is an object that satisfies the description. However, such objects are not to be considered meaningful on their own, but have meaning only in the proposition expressed by the sentences of which they are a part. Hence, they are not directly referential in the same way as logically proper names, for Russell.
On Frege's account, any referring expression has a sense as well as a referent. Such a "mediated reference" view has certain theoretical advantages over Mill's view. For example, co-referential names, such as "Samuel Clemens" and "Mark Twain", cause problems for a directly referential view because it is possible for someone to hear "Mark Twain is Samuel Clemens" and be surprised – thus, their cognitive content seems different.
Despite the differences between the views of Frege and Russell, they are generally lumped together as descriptivists about proper names. Such descriptivism was criticized in Saul Kripke's "Naming and Necessity".
Kripke put forth what has come to be known as "the modal argument" (or "argument from rigidity"). Consider the name "Aristotle" and the descriptions "the greatest student of Plato", "the founder of logic" and "the teacher of Alexander". Aristotle obviously satisfies all of the descriptions (and many of the others we commonly associate with him), but it is not necessarily true that if Aristotle existed then Aristotle was any one, or all, of these descriptions. Aristotle may well have existed without doing any single one of the things for which he is known to posterity. He may have existed and not have become known to posterity at all or he may have died in infancy. Suppose that Aristotle is associated by Mary with the description "the last great philosopher of antiquity" and (the actual) Aristotle died in infancy. Then Mary's description would seem to refer to Plato. But this is deeply counterintuitive. Hence, names are "rigid designators", according to Kripke. That is, they refer to the same individual in every possible world in which that individual exists. In the same work, Kripke articulated several other arguments against "Frege–Russell" descriptivism (see also Kripke's causal theory of reference).
The whole philosophical enterprise of studying reference has been critiqued by linguist Noam Chomsky in various works.
Composition and parts.
It has long been known that there are different parts of speech. One part of the common sentence is the lexical word, which is composed of nouns, verbs, and adjectives. A major question in the field – perhaps the single most important question for formalist and structuralist thinkers – is how the meaning of a sentence emerges from its parts.
Many aspects of the problem of the composition of sentences are addressed in the field of linguistics of syntax. Philosophical semantics tends to focus on the principle of compositionality to explain the relationship between meaningful parts and whole sentences. The principle of compositionality asserts that a sentence can be understood on the basis of the meaning of the "parts" of the sentence (i.e., words, morphemes) along with an understanding of its "structure" (i.e., syntax, logic). Further, syntactic propositions are arranged into "discourse" or "narrative" structures, which also encode meanings through pragmatics like temporal relations and pronominals.
It is possible to use the concept of "functions" to describe more than just how lexical meanings work: they can also be used to describe the meaning of a sentence. In the sentence "The horse is red", "the horse" can be considered to be the product of a "propositional function". A propositional function is an operation of language that takes an entity (in this case, the horse) as an input and outputs a "semantic fact" (i.e., the proposition that is represented by "The horse is red"). In other words, a propositional function is like an algorithm. The meaning of "red" in this case is whatever takes the entity "the horse" and turns it into the statement, "The horse is red."
Linguists have developed at least two general methods of understanding the relationship between the parts of a linguistic string and how it is put together: syntactic and semantic trees. Syntactic trees draw upon the words of a sentence with the "grammar" of the sentence in mind; semantic trees focus upon the role of the "meaning" of the words and how those meanings combine to provide insight onto the genesis of semantic facts.
Mind and language.
Innateness and learning.
Some of the major issues at the intersection of philosophy of language and philosophy of mind are also dealt with in modern psycholinguistics. Some important questions regard the amount of innate language, if language acquisition is a special faculty in the mind, and what the connection is between thought and language.
There are three general perspectives on the issue of language learning. The first is the behaviorist perspective, which dictates that not only is the solid bulk of language learned, but it is learned via conditioning. The second is the "hypothesis testing perspective", which understands the child's learning of syntactic rules and meanings to involve the postulation and testing of hypotheses, through the use of the general faculty of intelligence. The final candidate for explanation is the innatist perspective, which states that at least some of the syntactic settings are innate and hardwired, based on certain modules of the mind.
There are varying notions of the structure of the brain when it comes to language. Connectionist models emphasize the idea that a person's lexicon and their thoughts operate in a kind of distributed, associative network. Nativist models assert that there are specialized devices in the brain that are dedicated to language acquisition. Computation models emphasize the notion of a representational language of thought and the logic-like, computational processing that the mind performs over them. Emergentist models focus on the notion that natural faculties are a complex system that emerge from simpler biological parts. Reductionist models attempt to explain higher-level mental processes in terms of the basic low-level neurophysiological activity.
Communication.
Firstly, this field of study seeks to better understand what speakers and listeners do with language in communication, and how it is used socially. Specific interests include the topics of language learning, language creation, and speech acts.
Secondly, the question of how language relates to the minds of both the speaker and the interpreter is investigated. Of specific interest is the grounds for successful translation of words and concepts into their equivalents in another language.
Language and thought.
An important problem which touches both philosophy of language and philosophy of mind is to what extent language influences thought and vice versa. There have been a number of different perspectives on this issue, each offering a number of insights and suggestions.
Linguists Sapir and Whorf suggested that language limited the extent to which members of a "linguistic community" can think about certain subjects (a hypothesis paralleled in George Orwell's novel "Nineteen Eighty-Four"). In other words, language was analytically prior to thought. Philosopher Michael Dummett is also a proponent of the "language-first" viewpoint.
The stark opposite to the Sapir–Whorf position is the notion that thought (or, more broadly, mental content) has priority over language. The "knowledge-first" position can be found, for instance, in the work of Paul Grice. Further, this view is closely associated with Jerry Fodor and his language of thought hypothesis. According to his argument, spoken and written language derive their intentionality and meaning from an internal language encoded in the mind. The main argument in favor of such a view is that the structure of thoughts and the structure of language seem to share a compositional, systematic character. Another argument is that it is difficult to explain how signs and symbols on paper can represent anything meaningful unless some sort of meaning is infused into them by the contents of the mind. One of the main arguments against is that such levels of language can lead to an infinite regress. In any case, many philosophers of mind and language, such as Ruth Millikan, Fred Dretske and Fodor, have recently turned their attention to explaining the meanings of mental contents and states directly.
Another tradition of philosophers has attempted to show that language and thought are coextensive – that there is no way of explaining one without the other. Donald Davidson, in his essay "Thought and Talk", argued that the notion of belief could only arise as a product of public linguistic interaction. Daniel Dennett holds a similar "interpretationist" view of propositional attitudes. To an extent, the theoretical underpinnings to cognitive semantics (including the notion of semantic framing) suggest the influence of language upon thought. However, the same tradition views meaning and grammar as a function of conceptualization, making it difficult to assess in any straightforward way.
Some thinkers, like the ancient sophist Gorgias, have questioned whether or not language was capable of capturing thought at all.
There are studies that prove that languages shape how people understand causality. Some of them were performed by Lera Boroditsky. For example, English speakers tend to say things like "John broke the vase" even for accidents. However, Spanish or Japanese speakers would be more likely to say "the vase broke itself". In studies conducted by Caitlin Fausey at Stanford University speakers of English, Spanish and Japanese watched videos of two people popping balloons, breaking eggs and spilling drinks either intentionally or accidentally. Later everyone was asked whether they could remember who did what. Spanish and Japanese speakers did not remember the agents of accidental events as well as did English speakers.
Russian speakers, who make an extra distinction between light and dark blue in their language, are better able to visually discriminate shades of blue. The Piraha, a tribe in Brazil, whose language has only terms like few and many instead of numerals, are not able to keep track of exact quantities.
In one study German and Spanish speakers were asked to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a "key"—a word that is masculine in German and feminine in Spanish—the German speakers were more likely to use words like "hard", "heavy", "jagged", "metal", "serrated" and "useful" whereas Spanish speakers were more likely to say "golden", "intricate", "little", "lovely", "shiny" and "tiny". To describe a "bridge", which is feminine in German and masculine in Spanish, the German speakers said "beautiful", "elegant", "fragile", "peaceful", "pretty" and "slender", and the Spanish speakers said "big", "dangerous", "long", "strong", "sturdy" and "towering". This was the case even though all testing was done in English, a language without grammatical gender.
In a series of studies conducted by Gary Lupyan, people were asked to look at a series of images of imaginary aliens. Whether each alien was friendly or hostile was determined by certain subtle features but participants were not told what these were. They had to guess whether each alien was friendly or hostile, and after each response they were told if they were correct or not, helping them learn the subtle cues that distinguished friend from foe. A quarter of the participants were told in advance that the friendly aliens were called "leebish" and the hostile ones "grecious", while another quarter were told the opposite. For the rest, the aliens remained nameless. It was found that participants who were given names for the aliens learned to categorize the aliens far more quickly, reaching 80 per cent accuracy in less than half the time taken by those not told the names. By the end of the test, those told the names could correctly categorize 88 per cent of aliens, compared to just 80 per cent for the rest. It was concluded that naming objects helps us categorize and memorize them.
In another series of experiments, a group of people was asked to view furniture from an IKEA catalog. Half the time they were asked to label the object – whether it was a chair or lamp, for example – while the rest of the time they had to say whether or not they liked it. It was found that when asked to label items, people were later less likely to recall the specific details of products, such as whether a chair had arms or not. It was concluded that labeling objects helps our minds build a prototype of the typical object in the group at the expense of individual features.
Social interaction and language.
A common claim is that language is governed by social conventions. Questions inevitably arise on surrounding topics. One question regards what a convention exactly is, and how it is studied, and second regards the extent that conventions even matter in the study of language. David Kellogg Lewis proposed a worthy reply to the first question by expounding the view that a convention is a "rationally self-perpetuating regularity in behavior". However, this view seems to compete to some extent with the Gricean view of speaker's meaning, requiring either one (or both) to be weakened if both are to be taken as true.
Some have questioned whether or not conventions are relevant to the study of meaning at all. Noam Chomsky proposed that the study of language could be done in terms of the I-Language, or internal language of persons. If this is so, then it undermines the pursuit of explanations in terms of conventions, and relegates such explanations to the domain of "metasemantics". "Metasemantics" is a term used by philosopher of language Robert Stainton to describe all those fields that attempt to explain how semantic facts arise. One fruitful source of research involves investigation into the social conditions that give rise to, or are associated with, meanings and languages. "Etymology" (the study of the origins of words) and "stylistics" (philosophical argumentation over what makes "good grammar", relative to a particular language) are two other examples of fields that are taken to be metasemantic.
Many separate (but related) fields have investigated the topic of linguistic convention within their own research paradigms. The presumptions that prop up each theoretical view are of interest to the philosopher of language. For instance, one of the major fields of sociology, symbolic interactionism, is based on the insight that human social organization is based almost entirely on the use of meanings. In consequence, any explanation of a social structure (like an institution) would need to account for the shared meanings which create and sustain the structure.
Rhetoric is the study of the particular words that people use to achieve the proper emotional and rational effect in the listener, be it to persuade, provoke, endear, or teach. Some relevant applications of the field include the examination of propaganda and didacticism, the examination of the purposes of swearing and pejoratives (especially how it influences the behaviors of others, and defines relationships), or the effects of gendered language. It can also be used to study linguistic transparency (or speaking in an accessible manner), as well as performative utterances and the various tasks that language can perform (called "speech acts"). It also has applications to the study and interpretation of law, and helps give insight to the logical concept of the domain of discourse.
Literary theory is a discipline that some literary theorists claim overlaps with the philosophy of language. It emphasizes the methods that readers and critics use in understanding a text. This field, an outgrowth of the study of how to properly interpret messages, is closely tied to the ancient discipline of hermeneutics.
Truth.
Finally, philosophers of language investigate how language and meaning relate to truth and the reality being referred to. They tend to be less interested in which sentences are "actually true", and more in "what kinds of meanings can be true or false". A truth-oriented philosopher of language might wonder whether or not a meaningless sentence can be true or false, or whether or not sentences can express propositions about things that do not exist, rather than the way sentences are used.
Problems in the philosophy of language.
Nature of language.
In the philosophical tradition stemming from the Ancient Greeks, such as Plato and Aristotle, language is seen as a tool for making statements about the reality by means of predication; e.g. "Man is a rational animal", where "Man" is the subject and "is a rational animal" is the predicate, which expresses a property of the subject. Such structures also constitute the syntactic basis of syllogism, which remained the standard model of formal logic until the early 20th century, when it was replaced with predicate logic. In linguistics and philosophy of language, the classical model survived in the Middle Ages, and the link between Aristotelian philosophy of science and linguistics was elaborated by Thomas of Erfurt's Modistae grammar (c. 1305), which gives an example of the analysis of the transitive sentence: "Plato strikes Socrates", where "Socrates" is the object and part of the predicate.
The social and evolutionary aspects of language were discussed during the classical and mediaeval periods. Plato's dialogue Cratylus investigates the iconicity of words, arguing that words are made by "wordsmiths" and selected by those who need the words, and that the study of language is external to the philosophical objective of studying ideas. Age-of-Enlightenment thinkers accommodated the classical model with a Christian worldview, arguing that God created Man social and rational, and, out of these properties, Man created his own cultural habits including language. In this tradition, the logic of the subject-predicate structure forms a general, or 'universal' grammar, which governs thinking and underpins all languages. Variation between languages was investigated in the "Port-Royal Grammar" of Arnauld and Lancelot, among others, who described it as accidental and separate from the logical requirements of thought and language.
The classical view was overturned in the early 19th century by the advocates of German romanticism. Humboldt and his contemporaries questioned the existence of a universal inner form of thought. They argued that, since thinking is verbal, language must be the prerequisite for thought. Therefore, every nation has its own unique way of thinking, a worldview, which has evolved with the linguistic history of the nation. Diversity became emphasized with a focus on the uncontrollable sociohistorical construction of language. Influential romantic accounts include Grimm's sound laws of linguistic evolution, Schleicher's "Darwinian" species-language analogy, the Völkerpsychologie accounts of language by Steinthal and Wundt, and Saussure's semiology, a dyadic model of semiotics, i.e., language as a sign system with its own inner logic, separated from physical reality.
In the early 20th century, logical grammar was defended by Frege and Husserl. Husserl's 'pure logical grammar' draws from 17th-century rational universal grammar, proposing a formal semantics that links the structures of physical reality (e.g., "This paper is white") with the structures of the mind, meaning, and the surface form of natural languages. Husserl's treatise was, however, rejected in general linguistics. Instead, linguists opted for Chomsky's theory of universal grammar as an innate biological structure that generates syntax in a formalistic fashion, i.e., irrespective of meaning.
Many philosophers continue to hold the view that language is a logically based tool of expressing the structures of reality by means of predicate-argument structure. Proponents include, with different nuances, Russell, Wittgenstein, Sellars, Davidson, Putnam, and Searle. Attempts to revive logical formal semantics as a basis of linguistics followed, e.g., the Montague grammar. Despite resistance from linguists including Chomsky and Lakoff, formal semantics was established in the late twentieth century. However, its influence has been mostly limited to computational linguistics, with little impact on general linguistics.
The incompatibility with genetics and neuropsychology of Chomsky's innate grammar gave rise to new psychologically and biologically oriented theories of language in the 1980s, and these have gained influence in linguistics and cognitive science in the 21st century. Examples include Lakoff's conceptual metaphor, which argues that language arises automatically from visual and other sensory input, and different models inspired by Dawkins's memetics, a neo-Darwinian model of linguistic units as the units of natural selection. These include cognitive grammar, construction grammar, and usage-based linguistics.
Problem of universals and composition.
One debate that has captured the interest of many philosophers is the debate over the meaning of "universals". It might be asked, for example, why when people say the word "rocks", what it is that the word represents. Two different answers have emerged to this question. Some have said that the expression stands for some real, abstract universal out in the world called "rocks". Others have said that the word stands for some collection of particular, individual rocks that are associated with merely a nomenclature. The former position has been called "philosophical realism", and the latter "nominalism".
The issue here can be explicated in examination of the proposition "Socrates is a man".
From the realist's perspective, the connection between S and M is a connection between two abstract entities. There is an entity, "man", and an entity, "Socrates". These two things connect in some way or overlap.
From a nominalist's perspective, the connection between S and M is the connection between a particular entity (Socrates) and a vast collection of particular things (men). To say that Socrates is a man is to say that Socrates is a part of the class of "men". Another perspective is to consider "man" to be a "property" of the entity, "Socrates".
There is a third way, between nominalism and (extreme) realism, usually called "moderate realism" and attributed to Aristotle and Thomas Aquinas. Moderate realists hold that "man" refers to a real essence or form that is really present and identical in Socrates and all other men, but "man" does not exist as a separate and distinct entity. This is a realist position, because "man" is real, insofar as it really exists in all men; but it is a moderate realism, because "man" is not an entity separate from the men it informs.
Formal versus informal approaches.
Another of the questions that has divided philosophers of language is the extent to which formal logic can be used as an effective tool in the analysis and understanding of natural languages. While most philosophers, including Gottlob Frege, Alfred Tarski and Rudolf Carnap, have been more or less skeptical about formalizing natural languages, many of them developed formal languages for use in the sciences or formalized "parts" of natural language for investigation. Some of the most prominent members of this tradition of formal semantics include Tarski, Carnap, Richard Montague and Donald Davidson.
On the other side of the divide, and especially prominent in the 1950s and '60s, were the so-called "ordinary language philosophers". Philosophers such as P. F. Strawson, John Langshaw Austin and Gilbert Ryle stressed the importance of studying natural language without regard to the truth-conditions of sentences and the references of terms. They did not believe that the social and practical dimensions of linguistic meaning could be captured by any attempts at formalization using the tools of logic. Logic is one thing and language is something entirely different. What is important is not expressions themselves but what people use them to do in communication.
Hence, Austin developed a theory of speech acts, which described the kinds of things which can be done with a sentence (assertion, command, inquiry, exclamation) in different contexts of use on different occasions. Strawson argued that the truth-table semantics of the logical connectives (e.g., formula_0, formula_1 and formula_2) do not capture the meanings of their natural language counterparts ("and", "or" and "if-then"). While the "ordinary language" movement basically died out in the 1970s, its influence was crucial to the development of the fields of speech-act theory and the study of pragmatics. Many of its ideas have been absorbed by theorists such as Kent Bach, Robert Brandom, Paul Horwich and Stephen Neale. In recent work, the division between semantics and pragmatics has become a lively topic of discussion at the interface of philosophy and linguistics, for instance in work by Sperber and Wilson, Carston and Levinson.
While keeping these traditions in mind, the question of whether or not there is any grounds for conflict between the formal and informal approaches is far from being decided. Some theorists, like Paul Grice, have been skeptical of any claims that there is a substantial conflict between logic and natural language.
Game theoretical approach.
Game theory has been suggested as a tool to study the evolution of language. Some researchers that have developed game theoretical approaches to philosophy of language are David K. Lewis, Schuhmacher, and Rubinstein.
Translation and interpretation.
Translation and interpretation are two other problems that philosophers of language have attempted to confront. In the 1950s, W.V. Quine argued for the indeterminacy of meaning and reference based on the principle of "radical translation". In "Word and Object", Quine asks readers to imagine a situation in which they are confronted with a previously undocumented group of indigenous people, where they must attempt to make sense of the utterances and gestures that its members make. This is the situation of radical translation.
He claimed that, in such a situation, it is impossible "in principle" to be absolutely certain of the meaning or reference that a speaker of the indigenous people's language attaches to an utterance. For example, if a speaker sees a rabbit and says "gavagai", is she referring to the whole rabbit, to the rabbit's tail, or to a temporal part of the rabbit? All that can be done is to examine the utterance as a part of the overall linguistic behaviour of the individual, and then use these observations to interpret the meaning of all other utterances. From this basis, one can form a manual of translation. But, since reference is indeterminate, there will be many such manuals, no one of which is more correct than the others. For Quine, as for Wittgenstein and Austin, meaning is not something that is associated with a single word or sentence, but is rather something that, if it can be attributed at all, can only be attributed to a whole language. The resulting view is called "semantic holism".
Inspired by Quine's discussion, Donald Davidson extended the idea of radical translation to the interpretation of utterances and behavior within a single linguistic community. He dubbed this notion "radical interpretation". He suggested that the meaning that any individual ascribed to a sentence could only be determined by attributing meanings to many, perhaps all, of the individual's assertions, as well as their mental states and attitudes.
Vagueness.
One issue that has troubled philosophers of language and logic is the problem of the vagueness of words. The specific instances of vagueness that most interest philosophers of language are those where the existence of "borderline cases" makes it seemingly impossible to say whether a predicate is true or false. Classic examples are "is tall" or "is bald", where it cannot be said that some borderline case (some given person) is tall or not-tall. In consequence, vagueness gives rise to the paradox of the heap. Many theorists have attempted to solve the paradox by way of "n"-valued logics, such as fuzzy logic, which have radically departed from classical two-valued logics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\land "
},
{
"math_id": 1,
"text": " \\lor "
},
{
"math_id": 2,
"text": " \\rightarrow "
}
]
| https://en.wikipedia.org/wiki?curid=6880370 |
6881120 | Prior knowledge for pattern recognition | Pattern recognition is a very active field of research intimately bound to machine learning. Also known as classification or statistical classification, pattern recognition aims at building a classifier that can determine the class of an input pattern. This procedure, known as training, corresponds to learning an unknown decision function based only on a set of input-output pairs formula_0 that form the training data (or training set). Nonetheless, in real world applications such as character recognition, a certain amount of information on the problem is usually known beforehand. The incorporation of this prior knowledge into the training is the key element that will allow an increase of performance in many applications.
Prior Knowledge.
Prior knowledge refers to all information about the problem available in addition to the training data. However, in this most general form, determining a model from a finite set of samples without prior knowledge is an ill-posed problem, in the sense that a unique model may not exist. Many classifiers incorporate the general smoothness assumption that a test pattern similar to one of the training samples tends to be assigned to the same class.
The importance of prior knowledge in machine learning is suggested by its role in search and optimization. Loosely, the no free lunch theorem states that all search algorithms have the same average performance over all problems, and thus implies that to gain in performance on a certain application one must use a specialized algorithm that includes some prior knowledge about the problem.
The different types of prior knowledge encountered in pattern recognition are now regrouped under two main categories: class-invariance and knowledge on the data.
Class-invariance.
A very common type of prior knowledge in pattern recognition is the invariance of the class (or the output of the classifier) to a transformation of the input pattern. This type of knowledge is referred to as transformation-invariance. The mostly used transformations used in image recognition are:
Incorporating the invariance to a transformation formula_1 parametrized in formula_2 into a classifier of output formula_3 for an input pattern formula_4 corresponds to enforcing the equality
formula_5
Local invariance can also be considered for a transformation centered at formula_6, so that formula_7, by using the constraint
formula_8
The function formula_9 in these equations can be either the decision function of the classifier or its real-valued output.
Another approach is to consider class-invariance with respect to a "domain of the input space" instead of a transformation. In this case, the problem becomes finding formula_9 so that
formula_10
where formula_11 is the membership class of the region formula_12 of the input space.
A different type of class-invariance found in pattern recognition is permutation-invariance, i.e. invariance of the class to a permutation of elements in a structured input. A typical application of this type of prior knowledge is a classifier invariant to permutations of rows of the matrix inputs.
Knowledge of the data.
Other forms of prior knowledge than class-invariance concern the data more specifically and are thus of particular interest for real-world applications. The three particular cases that most often occur when gathering data are:
Prior knowledge of these can enhance the quality of the recognition if included in the learning. Moreover, not taking into account the poor quality of some data or a large imbalance between the classes can mislead the decision of a classifier. | [
{
"math_id": 0,
"text": "(\\boldsymbol{x}_i,y_i)"
},
{
"math_id": 1,
"text": "T_{\\theta}: \\boldsymbol{x} \\mapsto T_{\\theta}\\boldsymbol{x}"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "f(\\boldsymbol{x})"
},
{
"math_id": 4,
"text": "\\boldsymbol{x}"
},
{
"math_id": 5,
"text": "\nf(\\boldsymbol{x}) = f(T_{\\theta}\\boldsymbol{x}), \\quad \\forall \\boldsymbol{x}, \\theta ."
},
{
"math_id": 6,
"text": "\\theta=0"
},
{
"math_id": 7,
"text": "T_0\\boldsymbol{x} = \\boldsymbol{x}"
},
{
"math_id": 8,
"text": "\n \\left.\\frac{\\partial}{\\partial \\theta}\\right|_{\\theta=0} f(T_{\\theta} \\boldsymbol{x}) = 0 .\n"
},
{
"math_id": 9,
"text": "f"
},
{
"math_id": 10,
"text": "\n\tf(\\boldsymbol{x}) = y_{\\mathcal{P}},\\ \\forall \\boldsymbol{x}\\in \\mathcal{P} ,\n"
},
{
"math_id": 11,
"text": "y_{\\mathcal{P}}"
},
{
"math_id": 12,
"text": "\\mathcal{P}"
}
]
| https://en.wikipedia.org/wiki?curid=6881120 |
6882629 | Goldman–Hodgkin–Katz flux equation | Expression of the ionic flux across a cell membrane
The Goldman–Hodgkin–Katz flux equation (or GHK flux equation or GHK current density equation) describes the ionic flux across a cell membrane as a function of the transmembrane potential and the concentrations of the ion inside and outside of the cell. Since both the voltage and the concentration gradients influence the movement of ions, this process is a simplified version of "electrodiffusion". Electrodiffusion is most accurately defined by the Nernst–Planck equation and the GHK flux equation is a solution to the Nernst–Planck equation with the assumptions listed below.
Origin.
The American David E. Goldman of Columbia University, and the English Nobel laureates Alan Lloyd Hodgkin and Bernard Katz derived this equation.
Assumptions.
Several assumptions are made in deriving the GHK flux equation (Hille 2001, p. 445) :
Equation.
The GHK flux equation for an ion S (Hille 2001, p. 445):
formula_0
where
Implicit definition of reversal potential.
The reversal potential is shown to be contained in the GHK flux equation (Flax 2008). The proof is replicated from the reference (Flax 2008) here.
We wish to show that when the flux is zero, the transmembrane potential is not zero. Formally it is written formula_2 which is equivalent to writing formula_3, which states that when the transmembrane potential is zero, the flux is not zero.
However, due to the form of the GHK flux equation when formula_4, formula_5. This is a problem as the value of formula_6 is indeterminate.
We turn to l'Hôpital's rule to find the solution for the limit:
formula_7
where formula_8 represents the differential of f and the result is :
formula_9
It is evident from the previous equation that when formula_4, formula_10 if formula_11 and thus
formula_12
which is the definition of the reversal potential.
By setting formula_13 we can also obtain the reversal potential :
formula_14
which reduces to :
formula_15
and produces the Nernst equation :
formula_16
Rectification.
Since one of the assumptions of the GHK flux equation is that the ions move independently of each other, the total flow of ions across the membrane is simply equal to the sum of two oppositely directed fluxes. Each flux approaches an asymptotic value as the membrane potential diverges from zero. These asymptotes are
formula_17
formula_18
and
formula_19
formula_20
where subscripts 'i' and 'o' denote the intra- and extracellular compartments, respectively. Intuitively one may understand these limits as follows: if an ion is only found outside a cell, then the flux is Ohmic (proportional to voltage) when the voltage causes the ion to flow into the cell, but no voltage could cause the ion to flow out of the cell, since there are no ions inside the cell in the first place.
Keeping all terms except "V"m constant, the equation yields a straight line when plotting "formula_1"S against "V"m. It is evident that the ratio between the two asymptotes is merely the ratio between the two concentrations of S, [S]i and [S]o. Thus, if the two concentrations are identical, the slope will be identical (and constant) throughout the voltage range (corresponding to Ohm's law scaled by the surface area). As the ratio between the two concentrations increases, so does the difference between the two slopes, meaning that the current is larger in one direction than the other, given an equal driving force of opposite signs. This is contrary to the result obtained if using Ohm's law scaled by the surface area, and the effect is called rectification.
The GHK flux equation is mostly used by electrophysiologists when the ratio between [S]i and [S]o is large and/or when one or both of the concentrations change considerably during an action potential. The most common example is probably intracellular calcium, [Ca2+]i, which during a cardiac action potential cycle can change 100-fold or more, and the ratio between [Ca2+]o and [Ca2+]i can reach 20,000 or more. | [
{
"math_id": 0,
"text": "\\Phi_{S} = P_{S}z_{S}^2\\frac{V_{m}F^{2}}{RT}\\frac{[\\mbox{S}]_{i} - [\\mbox{S}]_{o}\\exp(-z_{S}V_{m}F/RT)}{1 - \\exp(-z_{S}V_{m}F/RT)} "
},
{
"math_id": 1,
"text": "\\Phi"
},
{
"math_id": 2,
"text": "\\lim_{\\Phi_{S}\\rightarrow0} V_{m}\\ne0"
},
{
"math_id": 3,
"text": "\\lim_{V_{m}\\rightarrow0} \\Phi_{S}\\ne0"
},
{
"math_id": 4,
"text": "V_{m}=0"
},
{
"math_id": 5,
"text": "\\Phi_{S}=\\frac{0}{0}"
},
{
"math_id": 6,
"text": "\\frac{0}{0}"
},
{
"math_id": 7,
"text": "\\lim_{V_{m}\\rightarrow0} \\Phi_{S} = P_{S}\\frac{z_{S}^2F^{2}}{RT}\\frac{[V_{m}([\\mbox{S}]_{i} - [\\mbox{S}]_{o}\\exp(-z_{S}V_{m}F/RT))]'}{[1 - \\exp(-z_{S}V_{m}F/RT)]'} "
},
{
"math_id": 8,
"text": "[f]'"
},
{
"math_id": 9,
"text": "\\lim_{V_{m}\\rightarrow0} \\Phi_{S} = P_{S}z_{S}F([\\mbox{S}]_{i} - [\\mbox{S}]_{o})"
},
{
"math_id": 10,
"text": "\\Phi_{S}\\ne0"
},
{
"math_id": 11,
"text": "([\\mbox{S}]_{i} - [\\mbox{S}]_{o})\\ne0"
},
{
"math_id": 12,
"text": "\\lim_{\\Phi_{S}\\rightarrow0}V_{m}\\ne0"
},
{
"math_id": 13,
"text": "\\Phi_{S}=0"
},
{
"math_id": 14,
"text": "\\Phi_{S}=0=P_{S}\\frac{z_{S}^2F^{2}}{RT}\\frac{V_{m}([\\mbox{S}]_{i} - [\\mbox{S}]_{o}\\exp(-z_{S}V_{m}F/RT))}{1 - \\exp(-z_{S}V_{m}F/RT)}"
},
{
"math_id": 15,
"text": "[\\mbox{S}]_{i} - [\\mbox{S}]_{o}\\exp(-z_{S}V_{m}F/RT)=0"
},
{
"math_id": 16,
"text": "V_{m}=-\\frac{RT}{z_{S}F}\\ln\\left (\\frac{[\\mbox{S}]_{i}}{[\\mbox{S}]_{o}}\\right )"
},
{
"math_id": 17,
"text": "\\Phi_{S|i\\to o} = P_{S}z_{S}^2 \\frac{V_{m}F^{2}}{RT}[\\mbox{S}]_{i}\\ \\mbox{for}\\ V_{m} \\gg \\; 0"
},
{
"math_id": 18,
"text": "\\Phi_{S|i\\to o} = 0 \\; \\mbox{for}\\ V_{m} \\ll \\; 0"
},
{
"math_id": 19,
"text": "\\Phi_{S|o\\to i} = P_{S}z_{S}^2 \\frac{V_{m}F^{2}}{RT}[\\mbox{S}]_{o}\\ \\mbox{for}\\ V_{m} \\ll \\; 0"
},
{
"math_id": 20,
"text": "\\Phi_{S|o\\to i} = 0 \\; \\mbox{for}\\ V_{m} \\gg \\; 0"
}
]
| https://en.wikipedia.org/wiki?curid=6882629 |
68841311 | Syntactic parsing (computational linguistics) | Automatic analysis of syntactic structure of natural language
Syntactic parsing is the automatic analysis of syntactic structure of natural language, especially syntactic relations (in dependency grammar) and labelling spans of constituents (in constituency grammar). It is motivated by the problem of structural ambiguity in natural language: a sentence can be assigned multiple grammatical parses, so some kind of knowledge beyond computational grammar rules is needed to tell which parse is intended. Syntactic parsing is one of the important tasks in computational linguistics and natural language processing, and has been a subject of research since the mid-20th century with the advent of computers.
Different theories of grammar propose different formalisms for describing the syntactic structure of sentences. For computational purposes, these formalisms can be grouped under constituency grammars and dependency grammars. Parsers for either class call for different types of algorithms, and approaches to the two problems have taken different forms. The creation of human-annotated treebanks using various formalisms (e.g. Universal Dependencies) has proceeded alongside the development of new algorithms and methods for parsing.
Part-of-speech tagging (which resolves some semantic ambiguity) is a related problem, and often a prerequisite for or a subproblem of syntactic parsing. Syntactic parses can be used for information extraction (e.g. event parsing, semantic role labelling, entity labelling) and may be further used to extract formal semantic representations.
Constituency parsing.
Constituency parsing involves parsing in accordance with constituency grammar formalisms, such as Minimalism or the formalism of the Penn Treebank. This, at the very least, means telling which spans are constituents (e.g. "[The man] is here.") and what kind of constituent it is (e.g. "[The man]" is a noun phrase) on the basis of a context-free grammar (CFG) which encodes rules for constituent formation and merging.
Algorithms generally require the CFG to be converted to Chomsky Normal Form (with two children per constituent), which can be done without losing any information about the tree or reducing expressivity using the algorithm first described by Hopcroft and Ullman in 1979.
CKY.
The most popular algorithm for constituency parsing is the Cocke–Kasami–Younger algorithm (CKY), which is a dynamic programming algorithm which constructs a parse in worst-case formula_0 time, on a sentence of formula_1 words and formula_2 is the size of a CFG given in Chomsky Normal Form.
Given the issue of ambiguity (e.g. preposition-attachment ambiguity in English) leading to multiple acceptable parses, it is necessary to be able to score the probability of parses to pick the most probable one. One way to do this is by using a probabilistic context-free grammar (PCFG) which has a probability of each constituency rule, and modifying CKY to maximise probabilities when parsing bottom-up.
A further modification is the lexicalized PCFG, which assigns a head to each constituent and encodes rule for each lexeme in that head slot. Thus, where a PCFG may have a rule "NP → DT NN" (a noun phrase is a determiner and a noun) while a lexicalized PCFG will specifically have rules like "NP(dog) → DT NN(dog)" or "NP(person)" etc. In practice this leads to some performance improvements.
More recent work does neural scoring of span probabilities (which can take into account context unlike (P)CFGs) to feed to CKY, such as by using a recurrent neural network or transformer on top of word embeddings.
In 2022, Nikita Kitaev et al. introduced an incremental parser that first learns discrete labels (out of a fixed vocabulary) for each input token given only the left-hand context, which are then the only inputs to a CKY chart parser with probabilities calculated using a learned neural span scorer. This approach is not only linguistically-motivated, but also competitive with previous approaches to constituency parsing. Their work won the best paper award at ACL 2022.
Transition-based.
Following the success of formula_3 transition-based parsing for dependency grammars, work began on adapting the approach to constituency parsing. The first such work was by Kenji Sagae and Alon Lavie in 2005, which relied on a feature-based classifier to greedily make transition decisions. This was followed by the work of Yue Zhang and Stephen Clark in 2009, which added beam search to the decoder to make more globally-optimal parses. The first parser of this family to outperform a chart-based parser was the one by Muhua Zhu et al. in 2013, which took on the problem of length differences of different transition sequences due to unary constituency rules (a non-existent problem for dependency parsing) by adding a padding operation.
Note that transition-based parsing can be purely greedy (i.e. picking the best option at each time-step of building the tree, leading to potentially non-optimal or ill-formed trees) or use beam search to increase performance while not sacrificing efficiency.
Sequence-to-sequence.
A different approach to constituency parsing leveraging neural sequence models was developed by Oriol Vinyals et al. in 2015. In this approach, constituent parsing is modelled like machine translation: the task is sequence-to-sequence conversion from the sentence to a constituency parse, in the original paper using a deep LSTM with an attention mechanism. The gold training trees have to be linearised for this kind of model, but the conversion does not lose any information. This runs in formula_3 with a beam search decoder of width 10 (but they found little benefit from greater beam size and even limiting it to greedy decoding performs well), and achieves competitive performance with traditional algorithms for context-free parsing like CKY.
Dependency parsing.
Dependency parsing is parsing according to a dependency grammar formalism, such as Universal Dependencies (which is also a project that produces multilingual dependency treebanks). This means assigning a head (or multiple heads in some formalisms like Enhanced Dependencies, e.g. in the case of coordination) to every token and a corresponding dependency relation for each edge, eventually constructing a tree or graph over the whole sentence.
There are broadly three modern paradigms for modelling dependency parsing: transition-based, grammar-based, and graph-based.
Transition-based.
Many modern approaches to dependency tree parsing use transition-based parsing (the base form of this is sometimes called arc-standard) as formulated by Joakim Nivre in 2003, which extends on shift-reduce parsing by keeping a running stack of tokens, and deciding from three operations for the next token encountered:
The algorithm can be formulated as comparing the top two tokens of the stack (after adding the next token to the stack) or the top token on the stack and the next token in the sentence.
Training data for such an algorithm is created by using an oracle, which constructs a sequence of transitions from gold trees which are then fed to a classifier. The classifier learns which of the three operations is optimal given the current state of the stack, buffer, and current token. Modern methods use a neural classifier which is trained on word embeddings, beginning with work by Danqi Chen and Christopher Manning in 2014. In the past, feature-based classifiers were also common, with features chosen from part-of-speech tags, sentence position, morphological information, etc.
This is an formula_3 greedy algorithm, so it does not guarantee the best possible parse or even a necessarily valid parse, but it is efficient. It is also not necessarily the case that a particular tree will have only one sequence of valid transitions that can reach it, so a dynamic oracle (which may permit multiple choices of operations) will increase performance.
A modification to this is arc-eager parsing, which adds another operation: Reduce (remove the top token on the stack). Practically, this results in earlier arc-formation.
These all only support projective trees so far, wherein edges do not cross given the token ordering from the sentence. For non-projective trees, Nivre in 2009 modified arc-standard transition-based parsing to add the operation Swap (swap the top two tokens on the stack, assuming the formulation where the next token is always added to the stack first). This increases runtime to formula_4 in the worst-case but practically still near-linear.
Grammar-based.
A chart-based dynamic programming approach to projective dependency parsing was proposed by Michael Collins in 1996 and further optimised by Jason Eisner in the same year. This is an adaptation of CKY (previously mentioned for constituency parsing) to headed dependencies, a benefit being that the only change from constituency parsing is that every constituent is headed by one of its descendant nodes. Thus, one can simply specify which child provides the head for every constituency rule in the grammar (e.g. an NP is headed by its child N) to go from constituency CKY parsing to dependency CKY parsing.
McDonald's original adaptation had a runtime of formula_5, and Eisner's dynamic programming optimisations reduced runtime to formula_6. Eisner suggested three different scoring methods for calculating span probabilities in his paper.
Graph-based.
Exhaustive search of the possible formula_7 edges in the dependency tree, with backtracking in the case an ill-formed tree is created, gives the baseline formula_6 runtime for graph-based dependency parsing. This approach was first formally described by Michael A. Covington in 2001, but he claimed that it was "an algorithm that has been known, in some form, since the 1960s".
The problem of parsing can also be modelled as finding a maximum-probability spanning arborescence over the graph of all possible dependency edges, and then picking dependency labels for the edges in tree we find. Given this, we can use an extension of the Chu–Liu/Edmonds algorithm with an edge scorer and a label scorer. This algorithm was first described by Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajič in 2005. It can handle non-projective trees unlike the arc-standard transition-based parser and CKY. As before, the scorers can be neural (trained on word embeddings) or feature-based. This runs in formula_4 with Tarjan's extension of the algorithm.
Evaluation.
The performance of syntactic parsers is measured using standard evaluation metrics. Both constituency and dependency parsing approaches can be evaluated for the ratio of exact matches (percentage of sentences that were perfectly parsed), and precision, recall, and F1-score calculated based on the correct constituency or dependency assignments in the parse relative to that number in reference and/or hypothesis parses. The latter are also known as the PARSEVAL metrics.
Dependency parsing can also be evaluated using attachment score. Unlabelled attachment score (UAS) is the percentage of tokens with correctly assigned heads, while labelled attachment score (LAS) is the percentage of tokens with correctly assigned heads "and" dependency relation labels.
Conversion between parses.
Given that much work on English syntactic parsing depended on the Penn Treebank, which used a constituency formalism, many works on dependency parsing developed ways to deterministically convert the Penn formalism to a dependency syntax, in order to use it as training data. One of the major conversion algorithms was Penn2Malt, which reimplemented previous work on the problem.
Work in the dependency-to-constituency conversion direction benefits from the faster runtime of dependency parsing algorithms. One approach is using constrained CKY parsing, ignoring spans which obviously violate the dependency parse's structure and thus reducing runtime to formula_4. Another approach is to train a classifier to find an ordering for all the dependents of every token, which results in a structure isomorphic to the constituency parse.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
Dependency parsing | [
{
"math_id": 0,
"text": "\\mathcal{O}\\left( n^3 \\cdot \\left| G \\right| \\right)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\left| G \\right|"
},
{
"math_id": 3,
"text": "O(n)"
},
{
"math_id": 4,
"text": "O(n^2)"
},
{
"math_id": 5,
"text": "O(n^5)"
},
{
"math_id": 6,
"text": "O(n^3)"
},
{
"math_id": 7,
"text": "n^2"
}
]
| https://en.wikipedia.org/wiki?curid=68841311 |
68846045 | Constrained equal awards | Division rule for solving bankruptcy problems
Constrained equal awards (CEA), also called constrained equal gains, is a division rule for solving bankruptcy problems. According to this rule, each claimant should receive an equal amount, except that no claimant should receive more than his/her claim. In the context of taxation, it is known as leveling tax.
Formal definition.
There is a certain amount of money to divide, denoted by "formula_0" (=Estate or Endowment). There are "n" "claimants". Each claimant "i" has a "claim" denoted by "formula_1". Usually, formula_2, that is, the estate is insufficient to satisfy all the claims.
The CEA rule says that each claimant "i" should receive formula_3, where "r" is a constant chosen such that formula_4. The rule can also be described algorithmically as follows:
Examples.
Examples with two claimants:
Examples with three claimants:
Usage.
In the Jewish law, if several creditors have claims to the same bankrupt debtor, all of which have the same precedence (e.g. all loans have the same date), then the debtor's assets are divided according to CEA.
Characterizations.
The CEA rule has several characterizations. It is the only rule satisfying the following sets of axioms:
Dual rule.
The constrained equal losses (CEL) rule is the "dual" of the CEA rule, that is: for each problem formula_20, we have formula_21.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "c_i"
},
{
"math_id": 2,
"text": "\\sum_{i=1}^n c_i > E"
},
{
"math_id": 3,
"text": "\\min(c_i, r)"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n \\min(c_i,r) = E"
},
{
"math_id": 5,
"text": "CEA(60,90; 100) = (50,50)"
},
{
"math_id": 6,
"text": "r=50"
},
{
"math_id": 7,
"text": "E/n"
},
{
"math_id": 8,
"text": "CEA(40,80; 100) = (40,60)"
},
{
"math_id": 9,
"text": "r=60"
},
{
"math_id": 10,
"text": "CEA(50,100,150; 100) = (33.333, 33.333, 33.333)"
},
{
"math_id": 11,
"text": "r=33.333"
},
{
"math_id": 12,
"text": "CEA(50,100,150; 200) = (50, 75, 75)"
},
{
"math_id": 13,
"text": "r=75"
},
{
"math_id": 14,
"text": "CEA(50,100,150; 300) = (50, 100, 150)"
},
{
"math_id": 15,
"text": "r=150"
},
{
"math_id": 16,
"text": "CEA(100,200,300; 300) = (100,100,100)"
},
{
"math_id": 17,
"text": "r=100"
},
{
"math_id": 18,
"text": "CEA(100,200,300; 500) = (100,200,200)"
},
{
"math_id": 19,
"text": "r=200"
},
{
"math_id": 20,
"text": "(c,E)"
},
{
"math_id": 21,
"text": "CEL(c,E) = c - CEA(c, \\sum c - E)"
}
]
| https://en.wikipedia.org/wiki?curid=68846045 |
68846141 | Constrained equal losses | Division rule for solving bankruptcy problems
Constrained equal losses (CEL) is a division rule for solving bankruptcy problems. According to this rule, each claimant should lose an equal amount from his or her claim, except that no claimant should receive a negative amount. In the context of taxation, it is known as poll tax.
Formal definition.
There is a certain amount of money to divide, denoted by "formula_0" (=Estate or Endowment). There are "n" "claimants". Each claimant "i" has a "claim" denoted by "formula_1". Usually, formula_2, that is, the estate is insufficient to satisfy all the claims.
The CEL rule says that each claimant "i" should receive formula_3, where "r" is a constant chosen such that formula_4. The rule can also be described algorithmically as follows:
Examples.
Examples with two claimants:
Examples with three claimants:
Usage.
In the Jewish law, if several bidders participate in an auction and then revoke their bids simultaneously, they have to compensate the seller for the loss. The loss is divided among the bidders according to the CEL rule.
Characterizations.
The CEL rule has several characterizations. It is the only rule satisfying the following sets of axioms:
Dual rule.
The constrained equal awards (CEA) rule is the "dual" of the CEL rule, that is: for each problem formula_16, we have formula_17.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "c_i"
},
{
"math_id": 2,
"text": "\\sum_{i=1}^n c_i > E"
},
{
"math_id": 3,
"text": "\\max(0, c_i-r)"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n \\max(0, c_i-r) = E"
},
{
"math_id": 5,
"text": "CEL(60,90; 100) = (35,65)"
},
{
"math_id": 6,
"text": "r=25"
},
{
"math_id": 7,
"text": "CEL(50,100; 100) =(25,75)"
},
{
"math_id": 8,
"text": "CEL(40,80; 100) = (30,70)"
},
{
"math_id": 9,
"text": "r=10"
},
{
"math_id": 10,
"text": "CEL(50,100,150; 100) = (0, 25, 75)"
},
{
"math_id": 11,
"text": "r=75"
},
{
"math_id": 12,
"text": "CEL(50,100,150; 200) = (16.667, 66.666, 116.667)"
},
{
"math_id": 13,
"text": "r=33.333"
},
{
"math_id": 14,
"text": "CEL(50,100,150; 300) = (50, 100, 150)"
},
{
"math_id": 15,
"text": "r=0"
},
{
"math_id": 16,
"text": "(c,E)"
},
{
"math_id": 17,
"text": "CEA(c,E) = c - CEL(c, \\sum c - E)"
}
]
| https://en.wikipedia.org/wiki?curid=68846141 |
68846197 | Proportional rule (bankruptcy) | The proportional rule is a division rule for solving bankruptcy problems. According to this rule, each claimant should receive an amount proportional to their claim. In the context of taxation, it corresponds to a proportional tax.
Formal definition.
There is a certain amount of money to divide, denoted by "formula_0" (=Estate or Endowment). There are "n" "claimants". Each claimant "i" has a "claim" denoted by "formula_1". Usually, formula_2, that is, the estate is insufficient to satisfy all the claims.
The proportional rule says that each claimant "i" should receive formula_3, where "r" is a constant chosen such that formula_4. In other words, each agent gets formula_5.
Examples.
Examples with two claimants:
Examples with three claimants:
Characterizations.
The proportional rule has several characterizations. It is the only rule satisfying the following sets of axioms:
Truncated-proportional rule.
There is a variant called truncated-claims proportional rule, in which each claim larger than "E" is truncated to "E", and then the proportional rule is activated. That is, it equals formula_13, where formula_14. The results are the same for the two-claimant problems above, but for the three-claimant problems we get:
Adjusted-proportional rule.
The adjusted proportional rule first gives, to each agent "i", their "minimal right", which is the amount not claimed by the other agents. Formally, formula_18. Note that formula_19 implies formula_20.
Then, it revises the claim of agent "i" to formula_21, and the estate to formula_22. Note that that formula_23.
Finally, it activates the truncated-claims proportional rule, that is, it returns formula_24, where formula_25.
With two claimants, the revised claims are always equal, so the remainder is divided equally. Examples:
With three or more claimants, the revised claims may be different. In all the above three-claimant examples, the minimal rights are formula_36 and thus the outcome is equal to TPROP, for example, formula_37.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "c_i"
},
{
"math_id": 2,
"text": "\\sum_{i=1}^n c_i > E"
},
{
"math_id": 3,
"text": "r \\cdot c_i"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n r\\cdot c_i = E"
},
{
"math_id": 5,
"text": "\\frac{c_i}{\\sum_{j=1}^n c_j}\\cdot E"
},
{
"math_id": 6,
"text": "PROP(60,90; 100) = (40,60)"
},
{
"math_id": 7,
"text": "r = 2/3"
},
{
"math_id": 8,
"text": "PROP(50,100; 100) = (33.333,66.667)"
},
{
"math_id": 9,
"text": "PROP(40,80; 100) = (33.333,66.667)"
},
{
"math_id": 10,
"text": "PROP(100,200,300; 100) = (16.667, 33.333, 50)"
},
{
"math_id": 11,
"text": "PROP(100,200,300; 200) = (33.333, 66.667, 100)"
},
{
"math_id": 12,
"text": "PROP(100,200,300; 300) = (50, 100, 150)"
},
{
"math_id": 13,
"text": "PROP(c_1',\\ldots,c_n',E)"
},
{
"math_id": 14,
"text": "c'_i := \\min(c_i, E)"
},
{
"math_id": 15,
"text": "TPROP(100,200,300; 100) = (33.333, 33.333, 33.333)"
},
{
"math_id": 16,
"text": "TPROP(100,200,300; 200) = (40, 80, 80)"
},
{
"math_id": 17,
"text": "TPROP(100,200,300; 300) = (50, 100, 150)"
},
{
"math_id": 18,
"text": "m_i := \\max(0, E-\\sum_{j\\neq i} c_j)"
},
{
"math_id": 19,
"text": "\\sum_{i=1}^n c_i \\geq E"
},
{
"math_id": 20,
"text": "m_i \\leq c_i"
},
{
"math_id": 21,
"text": "c'_i := c_i - m_i"
},
{
"math_id": 22,
"text": "E' := E - \\sum_i m_i"
},
{
"math_id": 23,
"text": "E' \\geq 0"
},
{
"math_id": 24,
"text": "TPROP(c_1,\\ldots,c_n,E') = PROP(c_1'',\\ldots,c_n'',E')"
},
{
"math_id": 25,
"text": "c''_i := \\min(c'_i, E')"
},
{
"math_id": 26,
"text": "APROP(60,90; 100) = (35,65)"
},
{
"math_id": 27,
"text": "(m_1,m_2) = (10,40)"
},
{
"math_id": 28,
"text": "(c_1',c_2') = (50,50)"
},
{
"math_id": 29,
"text": "E'=50"
},
{
"math_id": 30,
"text": "APROP(50,100; 100) = (25,75)"
},
{
"math_id": 31,
"text": "(m_1,m_2) = (0,50)"
},
{
"math_id": 32,
"text": "APROP(40,80; 100) = (30,70)"
},
{
"math_id": 33,
"text": "(m_1,m_2) = (20,60)"
},
{
"math_id": 34,
"text": "(c_1',c_2') = (20,20)"
},
{
"math_id": 35,
"text": "E'=20"
},
{
"math_id": 36,
"text": "(0,0,0)"
},
{
"math_id": 37,
"text": "APROP(100,200,300; 200) = TPROP(100,200,300; 200) = (20, 40, 40)"
}
]
| https://en.wikipedia.org/wiki?curid=68846197 |
68857410 | Jordan–Pólya number | In mathematics, the Jordan–Pólya numbers are the numbers that can be obtained by multiplying together one or more factorials, not required to be distinct from each other. For instance, formula_0 is a Jordan–Pólya number because formula_1. Every tree has a number of symmetries that is a Jordan–Pólya number, and every Jordan–Pólya number arises in this way as the order of an automorphism group of a tree. These numbers are named after Camille Jordan and George Pólya, who both wrote about them in the context of symmetries of trees.
These numbers grow more quickly than polynomials but more slowly than exponentials. As well as in the symmetries of trees, they arise as the numbers of transitive orientations of comparability graphs and in the problem of finding factorials that can be represented as products of smaller factorials.
Sequence and growth rate.
The sequence of Jordan–Pólya numbers begins:
<templatestyles src="Block indent/styles.css"/>
They form the smallest multiplicatively closed set containing all of the factorials.
The formula_2th Jordan–Pólya number grows more quickly than any polynomial of formula_2, but more slowly than any exponential function of formula_2. More precisely, for every formula_3, and every sufficiently large formula_4 (depending on formula_5), the number formula_6 of Jordan–Pólya numbers up to formula_4 obeys the inequalities
formula_7
Factorials that are products of smaller factorials.
Every Jordan–Pólya number formula_2, except 2, has the property that its factorial formula_8 can be written as a product of smaller factorials. This can be done simply by expanding formula_9 and then replacing formula_2 in this product by its representation as a product of factorials. It is conjectured, but unproven, that the only numbers formula_2 whose factorial formula_8 equals a product of smaller factorials are the Jordan–Pólya numbers (except 2) and the two exceptional numbers 9 and 10, for which formula_10 and formula_11. The only other known representation of a factorial as a product of smaller factorials, not obtained by replacing formula_2 in the product expansion of formula_8, is formula_12, but as formula_13 is itself a Jordan–Pólya number, it also has the representation formula_14.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "480"
},
{
"math_id": 1,
"text": "480=2!\\cdot 2!\\cdot5!"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "\\varepsilon>0"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "\\varepsilon"
},
{
"math_id": 6,
"text": "J(x)"
},
{
"math_id": 7,
"text": "\n\\exp\\frac{(2-\\varepsilon)\\sqrt{\\log x}}{\\log\\log x} < J(x) <\n\\exp\\frac{(4+\\varepsilon)\\sqrt{\\log x}\\log\\log\\log x}{\\log\\log x}."
},
{
"math_id": 8,
"text": "n!"
},
{
"math_id": 9,
"text": "n!=n\\cdot(n-1)!"
},
{
"math_id": 10,
"text": "9!=2!\\cdot3!\\cdot3!\\cdot7!"
},
{
"math_id": 11,
"text": "10!=6!\\cdot7!=3!\\cdot5!\\cdot7!"
},
{
"math_id": 12,
"text": "16!=2!\\cdot5!\\cdot14!"
},
{
"math_id": 13,
"text": "16"
},
{
"math_id": 14,
"text": "16!=2!^4\\cdot 15!"
}
]
| https://en.wikipedia.org/wiki?curid=68857410 |
6885770 | Bootstrapping (statistics) | Statistical method
Bootstrapping is a procedure for estimating the distribution of an estimator by resampling (often with replacement) one's data or a model estimated from the data. Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods.
Bootstrapping estimates the properties of an estimand (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution function of the observed data. In the case where a set of observations can be assumed to be from an independent and identically distributed population, this can be implemented by constructing a number of resamples with replacement, of the observed data set (and of equal size to the observed data set). A key result in Efron's seminal paper that introduced the bootstrap is the favorable performance of bootstrap methods using sampling with replacement compared to prior methods like the jackknife that sample without replacement. However, since its introduction, numerous variants on the bootstrap have been proposed, including methods that sample without replacement or that create bootstrap samples larger or smaller than the original data.
The bootstrap may also be used for constructing hypothesis tests. It is often used as an alternative to statistical inference based on the assumption of a parametric model when that assumption is in doubt, or where parametric inference is impossible or requires complicated formulas for the calculation of standard errors.
History.
The bootstrap was first described by Bradley Efron in "Bootstrap methods: another look at the jackknife" (1979), inspired by earlier work on the jackknife. Improved estimates of the variance were developed later. A Bayesian extension was developed in 1981.
The bias-corrected and accelerated (formula_0) bootstrap was developed by Efron in 1987, and the approximate bootstrap confidence interval (ABC, or approximate formula_0) procedure in 1992.
Approach.
The basic idea of bootstrapping is that inference about a population from sample data (sample → population) can be modeled by "resampling" the sample data and performing inference about a sample from resampled data (resampled → sample). As the population is unknown, the true error in a sample statistic against its population value is unknown. In bootstrap-resamples, the 'population' is in fact the sample, and this is known; hence the quality of inference of the 'true' sample from resampled data (resampled → sample) is measurable.
More formally, the bootstrap works by treating inference of the true probability distribution "J", given the original data, as being analogous to an inference of the empirical distribution "Ĵ", given the resampled data. The accuracy of inferences regarding "Ĵ" using the resampled data can be assessed because we know "Ĵ". If "Ĵ" is a reasonable approximation to "J", then the quality of inference on "J" can in turn be inferred.
As an example, assume we are interested in the average (or mean) height of people worldwide. We cannot measure all the people in the global population, so instead, we sample only a tiny part of it, and measure that. Assume the sample is of size "N"; that is, we measure the heights of "N" individuals. From that single sample, only one estimate of the mean can be obtained. In order to reason about the population, we need some sense of the variability of the mean that we have computed. The simplest bootstrap method involves taking the original data set of heights, and, using a computer, sampling from it to form a new sample (called a 'resample' or bootstrap sample) that is also of size "N". The bootstrap sample is taken from the original by using sampling with replacement (e.g. we might 'resample' 5 times from [1,2,3,4,5] and get [2,5,4,4,1]), so, assuming "N" is sufficiently large, for all practical purposes there is virtually zero probability that it will be identical to the original "real" sample. This process is repeated a large number of times (typically 1,000 or 10,000 times), and for each of these bootstrap samples, we compute its mean (each of these is called a "bootstrap estimate"). We now can create a histogram of bootstrap means. This histogram provides an estimate of the shape of the distribution of the sample mean from which we can answer questions about how much the mean varies across samples. (The method here, described for the mean, can be applied to almost any other statistic or estimator.)
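A minimal sketch of this procedure in Python follows; the sample of heights, the number of resamples, and the variable names are illustrative assumptions, not data from any particular study:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical sample of N measured heights, in centimetres.
heights = np.array([172.0, 168.5, 181.2, 165.3, 175.8,
                    159.9, 183.4, 170.1, 177.6, 166.7])
n = len(heights)
n_boot = 10_000

# Each bootstrap sample is drawn with replacement and has the same size N.
boot_means = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(heights, size=n, replace=True)
    boot_means[b] = resample.mean()

# The spread of the bootstrap means estimates the variability of the sample mean.
print("sample mean:", heights.mean())
print("bootstrap standard error of the mean:", boot_means.std(ddof=1))
```

A histogram of `boot_means` gives the empirical shape of the sampling distribution described above.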
Discussion.
Advantages.
A great advantage of the bootstrap is its simplicity. It is a straightforward way to derive estimates of standard errors and confidence intervals for complex estimators of the distribution, such as percentile points, proportions, odds ratios, and correlation coefficients. However, despite its simplicity, bootstrapping can be applied to complex sampling designs (e.g. for a population divided into "s" strata with "n""s" observations per stratum, bootstrapping can be applied within each stratum). The bootstrap is also an appropriate way to control and check the stability of the results. Although for most problems it is impossible to know the true confidence interval, the bootstrap is asymptotically more accurate than the standard intervals obtained using sample variance and assumptions of normality. Bootstrapping is also a convenient method that avoids the cost of repeating the experiment to get other groups of sample data.
Disadvantages.
Bootstrapping depends heavily on the estimator used and, though simple, naive use of bootstrapping will not always yield asymptotically valid results and can lead to inconsistency. Although bootstrapping is (under some conditions) asymptotically consistent, it does not provide general finite-sample guarantees. The result may depend on the representative sample. The apparent simplicity may conceal the fact that important assumptions are being made when undertaking the bootstrap analysis (e.g. independence of samples or a large enough sample size) where these would be more formally stated in other approaches. Also, bootstrapping can be time-consuming, and there is not much software available for bootstrapping, as it is difficult to automate using traditional statistical computer packages.
Recommendations.
Scholars have recommended more bootstrap samples as available computing power has increased. If the results may have substantial real-world consequences, then one should use as many samples as is reasonable, given available computing power and time. Increasing the number of samples cannot increase the amount of information in the original data; it can only reduce the effects of random sampling errors which can arise from a bootstrap procedure itself. Moreover, there is evidence that numbers of samples greater than 100 lead to negligible improvements in the estimation of standard errors. In fact, according to the original developer of the bootstrapping method, even setting the number of samples at 50 is likely to lead to fairly good standard error estimates.
Adèr et al. recommend the bootstrap procedure for the following situations:
*When the theoretical distribution of a statistic of interest is complicated or unknown. Since the bootstrapping procedure is distribution-independent it provides an indirect method to assess the properties of the distribution underlying the sample and the parameters of interest that are derived from this distribution.
*When the sample size is insufficient for straightforward statistical inference. If the underlying distribution is well-known, bootstrapping provides a way to account for the distortions caused by the specific sample that may not be fully representative of the population.
* When power calculations have to be performed, and a small pilot sample is available. Most power and sample size calculations are heavily dependent on the standard deviation of the statistic of interest. If the estimate used is incorrect, the required sample size will also be wrong. One method to get an impression of the variation of the statistic is to use a small pilot sample and perform bootstrapping on it to get impression of the variance.
However, Athreya has shown that if one performs a naive bootstrap on the sample mean when the underlying population lacks a finite variance (for example, a power law distribution), then the bootstrap distribution will not converge to the same limit as the sample mean. As a result, confidence intervals on the basis of a Monte Carlo simulation of the bootstrap could be misleading. Athreya states that "Unless one is reasonably sure that the underlying distribution is not heavy tailed, one should hesitate to use the naive bootstrap".
Types of bootstrap scheme.
In univariate problems, it is usually acceptable to resample the individual observations with replacement ("case resampling" below) unlike subsampling, in which resampling is without replacement and is valid under much weaker conditions compared to the bootstrap. In small samples, a parametric bootstrap approach might be preferred. For other problems, a "smooth bootstrap" will likely be preferred.
For regression problems, various other alternatives are available.
Case resampling.
The bootstrap is generally useful for estimating the distribution of a statistic (e.g. mean, variance) without using normality assumptions (as required, e.g., for a z-statistic or a t-statistic). In particular, the bootstrap is useful when there is no analytical form or an asymptotic theory (e.g., an applicable central limit theorem) to help estimate the distribution of the statistics of interest. This is because bootstrap methods can apply to most random quantities, e.g., the ratio of variance and mean. There are at least two ways of performing case resampling. The first is the Monte Carlo algorithm: resample the data with replacement to form a resample of the same size as the original data set, compute the statistic of interest on the resample, and repeat this routine many times to approximate the bootstrap distribution of the statistic. The second, exact, version exhaustively enumerates every possible resample of the data set; it is feasible only for small samples, since the number of distinct resamples grows very quickly with the sample size.
Estimating the distribution of sample mean.
Consider a coin-flipping experiment. We flip the coin and record whether it lands heads or tails. Let "X" = "x"1, "x"2, …, "x"10 be 10 observations from the experiment, where "x""i" = 1 if the "i"th flip lands heads, and 0 otherwise. By invoking the assumption that the average of the coin flips is normally distributed, we can use the t-statistic to estimate the distribution of the sample mean,
formula_2
Such a normality assumption can be justified either as an approximation of the distribution of each "individual" coin flip or as an approximation of the distribution of the "average" of a large number of coin flips. The former is a poor approximation because the true distribution of the coin flips is Bernoulli instead of normal. The latter is a valid approximation in "infinitely large" samples due to the central limit theorem.
However, if we are not ready to make such a justification, then we can use the bootstrap instead. Using case resampling, we can derive the distribution of formula_3. We first resample the data to obtain a "bootstrap resample". An example of the first resample might look like this: "X"1* = "x"2, "x"1, "x"10, "x"10, "x"3, "x"4, "x"6, "x"7, "x"1, "x"9. There are some duplicates since a bootstrap resample comes from sampling with replacement from the data. Also, the number of data points in a bootstrap resample is equal to the number of data points in our original observations. Then we compute the mean of this resample and obtain the first "bootstrap mean": "μ"1*. We repeat this process to obtain the second resample "X"2* and compute the second bootstrap mean "μ"2*. If we repeat this 100 times, then we have "μ"1*, "μ"2*, ..., "μ"100*. This represents an "empirical bootstrap distribution" of the sample mean. From this empirical distribution, one can derive a "bootstrap confidence interval" for the purpose of hypothesis testing.
Regression.
In regression problems, "case resampling" refers to the simple scheme of resampling individual cases – often rows of a data set. For regression problems, as long as the data set is fairly large, this simple scheme is often acceptable. However, the method is open to criticism.
In regression problems, the explanatory variables are often fixed, or at least observed with more control than the response variable. Also, the range of the explanatory variables defines the information available from them. Therefore, to resample cases means that each bootstrap sample will lose some information. As such, alternative bootstrap procedures should be considered.
Bayesian bootstrap.
Bootstrapping can be interpreted in a Bayesian framework using a scheme that creates new data sets through reweighting the initial data. Given a set of formula_4 data points, the weighting assigned to data point formula_5 in a new data set formula_6 is formula_7, where formula_8 is a low-to-high ordered list of formula_9 uniformly distributed random numbers on formula_10, preceded by 0 and succeeded by 1. The distributions of a parameter inferred from considering many such data sets formula_6 are then interpretable as posterior distributions on that parameter.
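A minimal sketch of this reweighting scheme, generating each weight as a gap between ordered uniform random numbers as described above (equivalently, drawing the weight vector from a flat Dirichlet distribution); the data and the statistic are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.array([3.2, 4.1, 5.6, 2.9, 4.8, 5.0, 3.7])   # hypothetical data
n = len(x)
n_draws = 5_000

posterior_means = np.empty(n_draws)
for b in range(n_draws):
    # Gaps between 0, the sorted uniforms, and 1 give the weights; they sum to 1.
    u = np.sort(rng.uniform(size=n - 1))
    w = np.diff(np.concatenate(([0.0], u, [1.0])))
    posterior_means[b] = np.dot(w, x)

# The draws are interpreted as a posterior distribution for the mean.
print("posterior mean:", posterior_means.mean())
print("95% credible interval:", np.percentile(posterior_means, [2.5, 97.5]))
```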
Smooth bootstrap.
Under this scheme, a small amount of (usually normally distributed) zero-centered random noise is added onto each resampled observation. This is equivalent to sampling from a kernel density estimate of the data. Assume "K" to be a symmetric kernel density function with unit variance. The standard kernel estimator formula_11 of formula_12 is
formula_13
where formula_14 is the smoothing parameter. And the corresponding distribution function estimator formula_15 is
formula_16
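A sketch of a smooth bootstrap with Gaussian noise; the bandwidth choice (a Silverman-style rule of thumb) and the synthetic data are assumptions made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

x = rng.normal(loc=10.0, scale=2.0, size=50)   # hypothetical observations
n = len(x)
h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)       # a common rule-of-thumb bandwidth
n_boot = 2_000

boot_medians = np.empty(n_boot)
for b in range(n_boot):
    # Resample with replacement, then add zero-centred Gaussian noise scaled by h,
    # which is equivalent to sampling from a kernel density estimate of the data.
    resample = rng.choice(x, size=n, replace=True) + h * rng.normal(size=n)
    boot_medians[b] = np.median(resample)

print("smooth bootstrap SE of the median:", boot_medians.std(ddof=1))
```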
Parametric bootstrap.
Based on the assumption that the original data set is a realization of a random sample from a distribution of a specific parametric type, in this case a parametric model is fitted by parameter θ, often by maximum likelihood, and samples of random numbers are drawn from this fitted model. Usually the sample drawn has the same sample size as the original data. Then the estimate of original function F can be written as formula_17. This sampling process is repeated many times as for other bootstrap methods. Considering the centered sample mean in this case, the random sample original distribution function formula_18 is replaced by a bootstrap random sample with function formula_19, and the probability distribution of formula_20 is approximated by that of formula_21, where formula_22, which is the expectation corresponding to formula_19. The use of a parametric model at the sampling stage of the bootstrap methodology leads to procedures which are different from those obtained by applying basic statistical theory to inference for the same model.
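A sketch of a parametric bootstrap in which a normal model is fitted by maximum likelihood and resamples are drawn from the fitted model rather than from the data; the data and the model choice are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

x = rng.normal(loc=5.0, scale=1.5, size=40)      # hypothetical data
n = len(x)

# Maximum-likelihood estimates for a normal model.
mu_hat = x.mean()
sigma_hat = x.std(ddof=0)

n_boot = 5_000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    # Each bootstrap sample is drawn from the fitted parametric model, not the data.
    sim = rng.normal(loc=mu_hat, scale=sigma_hat, size=n)
    boot_means[b] = sim.mean()

print("parametric bootstrap SE of the mean:", boot_means.std(ddof=1))
```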
Resampling residuals.
Another approach to bootstrapping in regression problems is to resample residuals. The method proceeds as follows.
*Fit the model to the original data and retain the fitted values and the residuals.
*For each data point, keeping the explanatory variables at their original values, add a residual drawn at random (with replacement) from the set of residuals to the fitted value, creating a synthetic response variable.
*Refit the model using the synthetic responses and retain the quantities of interest (e.g. the estimated coefficients).
*Repeat the previous two steps a large number of times; the collection of refitted quantities forms their bootstrap distribution.
This scheme has the advantage that it retains the information in the explanatory variables. However, a question arises as to which residuals to resample. Raw residuals are one option; another is studentized residuals (in linear regression). Although there are arguments in favor of using studentized residuals; in practice, it often makes little difference, and it is easy to compare the results of both schemes.
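A minimal sketch of residual resampling for simple linear regression, using ordinary least squares via numpy; the synthetic data and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical regression data.
x = np.linspace(0, 10, 30)
y = 2.0 + 0.7 * x + rng.normal(scale=1.0, size=x.size)

# Fit by least squares; keep fitted values and residuals (the regressors stay fixed).
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
residuals = y - fitted

n_boot = 2_000
boot_slopes = np.empty(n_boot)
for b in range(n_boot):
    # Add resampled residuals to the fitted values to create a synthetic response.
    y_star = fitted + rng.choice(residuals, size=residuals.size, replace=True)
    boot_slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

print("residual bootstrap SE of the slope:", boot_slopes.std(ddof=1))
```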
Gaussian process regression bootstrap.
When data are temporally correlated, straightforward bootstrapping destroys the inherent correlations. This method uses Gaussian process regression (GPR) to fit a probabilistic model from which replicates may then be drawn. GPR is a Bayesian non-linear regression method. A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian (normal) distribution. A GP is defined by a mean function and a covariance function, which specify the mean vectors and covariance matrices for each finite collection of the random variables.
Regression model:
formula_29, where formula_30 is a noise term.
Gaussian process prior:
For any finite collection of variables, "x"1, ..., "x""n", the function outputs formula_31 are jointly distributed according to a multivariate Gaussian with mean formula_32 and covariance matrix formula_33
Assume formula_34 Then formula_35,
where formula_36, and formula_37 is the standard Kronecker delta function.
Gaussian process posterior:
According to the GP prior, we have
formula_38,
where formula_39 and formula_40
Let "x"1*, ..., "x""s"* be another finite collection of variables; it follows that
formula_41,
where formula_42, formula_43, formula_44
According to the equations above, the outputs "y" are also jointly distributed according to a multivariate Gaussian. Thus,
formula_45
where formula_46, formula_47, formula_48, and formula_49 is the formula_50 identity matrix.
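A sketch of drawing replicate data sets from a Gaussian-process posterior built from the relations above, using a squared-exponential kernel; the kernel, its hyperparameters, the assumed noise variance, and the data are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf_kernel(a, b, length=1.0, amp=1.0):
    # Squared-exponential covariance function.
    d = a[:, None] - b[None, :]
    return amp ** 2 * np.exp(-0.5 * (d / length) ** 2)

# Hypothetical noisy, temporally correlated observations.
x = np.linspace(0, 5, 25)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)
sigma2 = 0.1 ** 2                      # assumed noise variance

# Posterior mean and covariance of f at the observed inputs.
K = rbf_kernel(x, x)
K_noisy = K + sigma2 * np.eye(x.size)
post_mean = K @ np.linalg.solve(K_noisy, y)
post_cov = K - K @ np.linalg.solve(K_noisy, K)

# Replicated data sets are drawn from the fitted probabilistic model
# (posterior over f, plus observation noise on the diagonal).
replicates = rng.multivariate_normal(post_mean, post_cov + sigma2 * np.eye(x.size), size=100)
print("shape of replicated data:", replicates.shape)
```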
Wild bootstrap.
The wild bootstrap, proposed originally by Wu (1986), is suited when the model exhibits heteroskedasticity. The idea is, as in the residual bootstrap, to leave the regressors at their sample values, but to resample the response variable based on the residual values. That is, for each replicate, one computes a new formula_51 based on
formula_52
so the residuals are randomly multiplied by a random variable formula_53 with mean 0 and variance 1. For most distributions of formula_54 (but not Mammen's), this method assumes that the 'true' residual distribution is symmetric and can offer advantages over simple residual sampling for smaller sample sizes. Different forms are used for the random variable formula_53, such as
*The standard normal distribution
*A distribution suggested by Mammen (1993).
formula_55
Approximately, Mammen's distribution is:
formula_56
*Or the simpler distribution, linked to the Rademacher distribution:
formula_57
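A sketch of the wild bootstrap for a simple regression, using the Rademacher distribution for the multiplicative weights; the heteroskedastic data-generating step is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical heteroskedastic data: the noise scale grows with x.
x = np.linspace(1, 10, 40)
y = 1.0 + 0.5 * x + rng.normal(scale=0.2 * x)

X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
residuals = y - fitted

n_boot = 2_000
boot_slopes = np.empty(n_boot)
for b in range(n_boot):
    # Rademacher weights: +1 or -1 with equal probability, mean 0 and variance 1.
    v = rng.choice([-1.0, 1.0], size=residuals.size)
    y_star = fitted + residuals * v        # each residual stays with its own x value
    boot_slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

print("wild bootstrap SE of the slope:", boot_slopes.std(ddof=1))
```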
Block bootstrap.
The block bootstrap is used when the data, or the errors in a model, are correlated. In this case, a simple case or residual resampling will fail, as it is not able to replicate the correlation in the data. The block bootstrap tries to replicate the correlation by resampling inside blocks of data (see Blocking (statistics)). The block bootstrap has been used mainly with data correlated in time (i.e. time series) but can also be used with data correlated in space, or among groups (so-called cluster data).
Time series: Simple block bootstrap.
In the (simple) block bootstrap, the variable of interest is split into non-overlapping blocks.
Time series: Moving block bootstrap.
In the moving block bootstrap, introduced by Künsch (1989), data is split into "n" − "b" + 1 overlapping blocks of length "b": observations 1 to "b" form block 1, observations 2 to "b" + 1 form block 2, etc. Then, from these "n" − "b" + 1 blocks, "n"/"b" blocks are drawn at random with replacement. Aligning these "n"/"b" blocks in the order they were picked gives the bootstrap observations.
This bootstrap works with dependent data, however, the bootstrapped observations will not be stationary anymore by construction. But, it was shown that varying randomly the block length can avoid this problem. This method is known as the "stationary bootstrap." Other related modifications of the moving block bootstrap are the "Markovian bootstrap" and a stationary bootstrap method that matches subsequent blocks based on standard deviation matching.
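A sketch of the moving block bootstrap for a univariate time series; the autocorrelated series, the block length, and the statistic are illustrative assumptions, and for simplicity "n" is taken to be a multiple of the block length:

```python
import numpy as np

rng = np.random.default_rng(8)

# A hypothetical autocorrelated series (AR(1)).
n = 120
series = np.empty(n)
series[0] = 0.0
for t in range(1, n):
    series[t] = 0.6 * series[t - 1] + rng.normal()

b = 10                                                            # block length
blocks = np.array([series[i:i + b] for i in range(n - b + 1)])    # n - b + 1 overlapping blocks

n_boot = 2_000
boot_means = np.empty(n_boot)
for r in range(n_boot):
    # Draw n/b blocks with replacement and concatenate them in the order drawn.
    idx = rng.integers(0, blocks.shape[0], size=n // b)
    boot_means[r] = np.concatenate(blocks[idx]).mean()

print("moving block bootstrap SE of the mean:", boot_means.std(ddof=1))
```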
Time series: Maximum entropy bootstrap.
Vinod (2006), presents a method that bootstraps time series data using maximum entropy principles satisfying the Ergodic theorem with mean-preserving and mass-preserving constraints. There is an R package, meboot, that utilizes the method, which has applications in econometrics and computer science.
Cluster data: block bootstrap.
Cluster data describes data where many observations per unit are observed. This could be observing many firms in many states or observing students in many classes. In such cases, the correlation structure is simplified, and one does usually make the assumption that data is correlated within a group/cluster, but independent between groups/clusters. The structure of the block bootstrap is easily obtained (where the block just corresponds to the group), and usually only the groups are resampled, while the observations within the groups are left unchanged. Cameron et al. (2008) discusses this for clustered errors in linear regression.
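A sketch of the cluster bootstrap, in which whole groups are resampled with replacement while the observations inside each group are left unchanged; the grouped data are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical clustered data: a few observations per group, correlated within groups.
n_groups, per_group = 20, 5
group_effects = rng.normal(scale=1.0, size=n_groups)
data = {g: group_effects[g] + rng.normal(scale=0.5, size=per_group)
        for g in range(n_groups)}

n_boot = 2_000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    # Resample group labels with replacement; observations within groups are kept intact.
    chosen = rng.integers(0, n_groups, size=n_groups)
    boot_means[b] = np.concatenate([data[g] for g in chosen]).mean()

print("cluster bootstrap SE of the mean:", boot_means.std(ddof=1))
```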
Methods for improving computational efficiency.
The bootstrap is a powerful technique, although it may require substantial computing resources in both time and memory. Some techniques have been developed to reduce this burden. They can generally be combined with many of the different types of bootstrap schemes and various choices of statistic.
Parallel processing.
Most bootstrap methods are embarrassingly parallel algorithms. That is, the statistic of interest for each bootstrap sample does not depend on other bootstrap samples. Such computations can therefore be performed on separate CPUs or compute nodes with the results from the separate nodes eventually aggregated for final analysis.
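A sketch of distributing bootstrap replicates across worker processes with Python's standard library, exploiting the fact that the replicates are independent; the data, statistic, and worker counts are illustrative assumptions:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def bootstrap_chunk(args):
    # Each worker computes an independent batch of bootstrap means.
    data, n_reps, seed = args
    rng = np.random.default_rng(seed)
    return [rng.choice(data, size=data.size, replace=True).mean() for _ in range(n_reps)]

if __name__ == "__main__":
    data = np.random.default_rng(0).normal(size=200)    # hypothetical sample
    n_workers, reps_per_worker = 4, 2_500
    tasks = [(data, reps_per_worker, seed) for seed in range(n_workers)]

    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(bootstrap_chunk, tasks))

    boot_means = np.concatenate([np.array(r) for r in results])
    print("bootstrap SE from", boot_means.size, "replicates:", boot_means.std(ddof=1))
```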
Poisson bootstrap.
The nonparametric bootstrap samples items from a list of size n with counts drawn from a multinomial distribution. If formula_58 denotes the number of times element i is included in a given bootstrap sample, then each formula_58 is distributed as a binomial distribution with n trials and mean 1, but formula_58 is not independent of formula_59 for formula_60.
The Poisson bootstrap instead draws samples assuming all formula_58's are independently and identically distributed as Poisson variables with mean 1.
The rationale is that the limit of the binomial distribution is Poisson:
formula_61
The Poisson bootstrap had been proposed by Hanley and MacGibbon as potentially useful for non-statisticians using software like SAS and SPSS, which lacked the bootstrap packages of R and S-Plus programming languages. The same authors report that for large enough n, the results are relatively similar to the nonparametric bootstrap estimates but go on to note the Poisson bootstrap has seen minimal use in applications.
Another proposed advantage of the Poisson bootstrap is the independence of the formula_58 makes the method easier to apply for large datasets that must be processed as streams.
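A sketch of the Poisson bootstrap, in which each observation receives an independent Poisson(1) count in place of the multinomial count; the data are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(10)

x = rng.exponential(scale=2.0, size=1_000)     # hypothetical large sample
n_boot = 2_000

boot_means = np.empty(n_boot)
for b in range(n_boot):
    # Independent Poisson(1) counts play the role of the multinomial resampling counts.
    w = rng.poisson(lam=1.0, size=x.size)
    boot_means[b] = np.dot(w, x) / w.sum()

print("Poisson bootstrap SE of the mean:", boot_means.std(ddof=1))
```

Because the counts are independent across observations, the same loop can be applied to data arriving in a stream, processing one element (and its Poisson weight) at a time.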
A way to improve on the Poisson bootstrap, termed "sequential bootstrap", is by taking the first samples so that the proportion of unique values is ≈0.632 of the original sample size n. This provides a distribution with main empirical characteristics being within a distance of formula_62. Empirical investigation has shown this method can yield good results. This is related to the reduced bootstrap method.
Bag of Little Bootstraps.
For massive data sets, it is often computationally prohibitive to hold all the sample data in memory and resample from the sample data. The Bag of Little Bootstraps (BLB) provides a method of pre-aggregating data before bootstrapping to reduce computational constraints. This works by partitioning the data set into formula_63 equal-sized buckets and aggregating the data within each bucket. This pre-aggregated data set becomes the new sample data over which to draw samples with replacement. This method is similar to the Block Bootstrap, but the motivations and definitions of the blocks are very different. Under certain assumptions, the sample distribution should approximate the full bootstrapped scenario. One constraint is the number of buckets formula_64, where formula_65, and the authors recommend usage of formula_66 as a general solution.
Choice of statistic.
The bootstrap distribution of a point estimator of a population parameter has been used to produce a bootstrapped confidence interval for the parameter's true value if the parameter can be written as a function of the population's distribution.
Population parameters are estimated with many point estimators. Popular families of point-estimators include mean-unbiased minimum-variance estimators, median-unbiased estimators, Bayesian estimators (for example, the posterior distribution's mode, median, mean), and maximum-likelihood estimators.
A Bayesian point estimator and a maximum-likelihood estimator have good performance when the sample size is infinite, according to asymptotic theory. For practical problems with finite samples, other estimators may be preferable. Asymptotic theory suggests techniques that often improve the performance of bootstrapped estimators; the bootstrapping of a maximum-likelihood estimator may often be improved using transformations related to pivotal quantities.
Deriving confidence intervals from the bootstrap distribution.
The bootstrap distribution of a parameter-estimator is often used to calculate confidence intervals for its population parameter. A variety of methods for constructing the confidence intervals have been proposed, although there is disagreement about which method is best.
Desirable properties.
The survey of bootstrap confidence interval methods of DiCiccio and Efron and consequent discussion lists several desired properties of confidence intervals, which generally are not all simultaneously met.
Methods for bootstrap confidence intervals.
There are several methods for constructing confidence intervals from the bootstrap distribution of a real parameter:
Basic bootstrap (also called the reverse percentile interval): formula_73 where formula_74 denotes the formula_75 percentile of the bootstrapped coefficients formula_76.
Percentile bootstrap: formula_77 where formula_74 denotes the formula_75 percentile of the bootstrapped coefficients formula_76.
See Davison and Hinkley (1997, equ. 5.18 p. 203) and Efron and Tibshirani (1993, equ 13.5 p. 171).
This method can be applied to any statistic. It will work well in cases where the bootstrap distribution is symmetrical and centered on the observed statistic and where the sample statistic is median-unbiased and has maximum concentration (or minimum risk with respect to an absolute value loss function). When working with small sample sizes (i.e., less than 50), the basic / reversed percentile and percentile confidence intervals for (for example) the variance statistic will be too narrow; with a sample of 20 points, a 90% confidence interval will include the true variance only 78% of the time. The basic / reverse percentile confidence intervals are easier to justify mathematically, but they are less accurate in general than percentile confidence intervals, and some authors discourage their use.
Studentized bootstrap (bootstrap-t): formula_78 where formula_79 denotes the formula_75 percentile of the bootstrapped Student's t-test formula_80, and formula_81 is the estimated standard error of the coefficient in the original model.
The studentized test enjoys optimal properties as the statistic that is bootstrapped is pivotal (i.e. it does not depend on nuisance parameters as the t-test follows asymptotically a N(0,1) distribution), unlike the percentile bootstrap.
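A minimal sketch in Python of the basic and percentile intervals from a set of bootstrap replicates; the 95% level, the data, and the use of the sample mean as the statistic are illustrative assumptions:

```python
import numpy as np

def bootstrap_cis(data, statistic, n_boot=5000, alpha=0.05, rng=None):
    """Return (basic, percentile) bootstrap confidence intervals."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data)
    theta_hat = statistic(data)
    reps = np.array([statistic(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    basic = (2 * theta_hat - hi, 2 * theta_hat - lo)   # reverse percentile interval
    percentile = (lo, hi)
    return basic, percentile

data = np.random.default_rng(3).gamma(shape=2.0, size=100)
print(bootstrap_cis(data, np.mean))
```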
Bootstrap hypothesis testing.
Efron and Tibshirani suggest the following algorithm for comparing the means of two independent samples:
Let formula_82 be a random sample from distribution F with sample mean formula_83 and sample variance formula_84. Let formula_85 be another, independent random sample from distribution G with mean formula_86 and variance formula_87. The algorithm proceeds as follows: first, calculate the test statistic formula_88. Next, create two new data sets whose values are formula_89 and formula_90 where formula_91 is the mean of the combined sample. Then draw a random sample formula_92 of size formula_93 with replacement from formula_94 and another random sample formula_95 of size formula_96 with replacement from formula_97, and calculate the test statistic formula_98. Repeat the resampling step formula_99 times (e.g. formula_100) to collect formula_99 values of the test statistic. Finally, estimate the p-value as formula_101, where formula_102 when the bracketed condition holds and 0 otherwise.
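A sketch of this algorithm in Python; the data and the number of replicates formula_99 are illustrative choices:

```python
import numpy as np

def bootstrap_two_sample_test(x, y, n_boot=1000, rng=None):
    """One-sided bootstrap test of equal means (achieved significance level)."""
    rng = np.random.default_rng(rng)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n, m = len(x), len(y)

    def t_stat(a, b):
        return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

    t_obs = t_stat(x, y)
    z_bar = np.concatenate([x, y]).mean()
    x_null = x - x.mean() + z_bar          # recentre so that the null of equal means holds
    y_null = y - y.mean() + z_bar
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        xs = rng.choice(x_null, size=n, replace=True)
        ys = rng.choice(y_null, size=m, replace=True)
        t_boot[b] = t_stat(xs, ys)
    return np.mean(t_boot >= t_obs)        # one-sided p-value

rng = np.random.default_rng(4)
print(bootstrap_two_sample_test(rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 50)))
```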
Example applications.
Smoothed bootstrap.
In 1878, Simon Newcomb took observations on the speed of light.
The data set contains two outliers, which greatly influence the sample mean. (The sample mean need not be a consistent estimator for any population mean, because no mean needs to exist for a heavy-tailed distribution.) A well-defined and robust statistic for the central tendency is the sample median, which is consistent and median-unbiased for the population median.
The bootstrap distribution for Newcomb's data appears below. We can reduce the discreteness of the bootstrap distribution by adding a small amount of random noise to each bootstrap sample. A conventional choice is to add noise with a standard deviation of formula_103 for a sample size "n"; this noise is often drawn from a Student-t distribution with "n-1" degrees of freedom. This results in an approximately-unbiased estimator for the variance of the sample mean. This means that samples taken from the bootstrap distribution will have a variance which is, on average, equal to the variance of the total population.
Histograms of the bootstrap distribution and the smooth bootstrap distribution appear below. The bootstrap distribution of the sample-median has only a small number of values. The smoothed bootstrap distribution has a richer support. However, note that whether the smoothed or standard bootstrap procedure is favorable is case-by-case and is shown to depend on both the underlying distribution function and on the quantity being estimated.
In this example, the bootstrapped 95% (percentile) confidence interval for the population median is (26, 28.5), which is close to the interval (25.98, 28.46) for the smoothed bootstrap.
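A minimal sketch of a smoothed bootstrap of the median in Python; the data here are synthetic stand-ins rather than Newcomb's measurements, and drawing the noise from a Student-t distribution with "n" − 1 degrees of freedom follows the convention mentioned above:

```python
import numpy as np

def smoothed_bootstrap_median(data, n_boot=5000, rng=None):
    """Smoothed bootstrap: resample, then add small random noise to each value."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    n = len(data)
    scale = data.std(ddof=1) / np.sqrt(n)            # noise standard deviation
    medians = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=n, replace=True)
        noise = rng.standard_t(df=n - 1, size=n) * scale
        medians[b] = np.median(resample + noise)
    return medians

data = np.random.default_rng(5).normal(loc=27.0, scale=5.0, size=66)   # stand-in data
meds = smoothed_bootstrap_median(data)
print(np.quantile(meds, [0.025, 0.975]))             # smoothed percentile interval
```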
Relation to other approaches to inference.
Relationship to other resampling methods.
The bootstrap is distinguished from other resampling procedures such as the jackknife (used mainly to estimate the bias and variance of a statistic) and cross-validation (in which estimates obtained on one subsample are assessed on another).
For more details see resampling.
Bootstrap aggregating (bagging) is a meta-algorithm based on averaging model predictions obtained from models trained on multiple bootstrap samples.
U-statistics.
In situations where an obvious statistic can be devised to measure a required characteristic using only a small number, "r", of data items, a corresponding statistic based on the entire sample can be formulated. Given an "r"-sample statistic, one can create an "n"-sample statistic by something similar to bootstrapping (taking the average of the statistic over all subsamples of size "r"). This procedure is known to have certain good properties and the result is a U-statistic. The sample mean and sample variance are of this form, for "r" = 1 and "r" = 2.
Asymptotic theory.
The bootstrap has under certain conditions desirable asymptotic properties. The asymptotic properties most often described are weak convergence / consistency of the sample paths of the bootstrap empirical process and the validity of confidence intervals derived from the bootstrap. This section describes the convergence of the empirical bootstrap.
Stochastic convergence.
This paragraph summarizes more complete descriptions of stochastic convergence in van der Vaart and Wellner and Kosorok. The bootstrap defines a stochastic process, a collection of random variables indexed by some set formula_104, where formula_104 is typically the real line (formula_105) or a family of functions. Processes of interest are those with bounded sample paths, i.e., sample paths in L-infinity (formula_106), the set of all uniformly bounded functions from formula_104 to formula_105. When equipped with the uniform distance, formula_106 is a metric space, and when formula_107, two subspaces of formula_106 are of particular interest, formula_108, the space of all continuous functions from formula_104 to the unit interval [0,1], and formula_109, the space of all cadlag functions from formula_104 to [0,1]. This is because formula_108 contains the distribution functions for all continuous random variables, and formula_109 contains the distribution functions for all random variables. Statements about the consistency of the bootstrap are statements about the convergence of the sample paths of the bootstrap process as random elements of the metric space formula_106 or some subspace thereof, especially formula_108 or formula_109.
Consistency.
In a recent review, Horowitz defines consistency as follows: the bootstrap estimator formula_110 is consistent [for a statistic formula_111] if, for each formula_112, formula_113 converges in probability to 0 as formula_114, where formula_115 is the empirical distribution function of the original sample, formula_112 is the true but unknown distribution from which the sample was drawn, formula_116 is the asymptotic distribution function of formula_111, and formula_117 is the indexing variable in the distribution function, i.e., formula_118. This is sometimes more specifically called consistency relative to the Kolmogorov–Smirnov distance.
Horowitz goes on to recommend using a theorem from Mammen
that provides easier-to-check necessary and sufficient conditions for consistency for statistics of a certain common form. In particular, let formula_119 be the random sample. If formula_120 for a sequence of numbers formula_121 and formula_122, then the bootstrap estimate of the cumulative distribution function of formula_111 consistently estimates its true cumulative distribution function if and only if formula_111 converges in distribution to the standard normal distribution.
Strong consistency.
Convergence in (outer) probability as described above is also called weak consistency. It can also be shown with slightly stronger assumptions, that the bootstrap is strongly consistent, where convergence in (outer) probability is replaced by convergence (outer) almost surely. When only one type of consistency is described, it is typically weak consistency. This is adequate for most statistical applications since it implies confidence bands derived from the bootstrap are asymptotically valid.
Showing consistency using the central limit theorem.
In simpler cases, it is possible to use the central limit theorem directly to show the consistency of the bootstrap procedure for estimating the distribution of the sample mean.
Specifically, let us consider formula_123 independent identically distributed random variables with formula_124 and formula_125 for each formula_126. Let formula_127. In addition, for each formula_126, conditional on formula_123, let formula_128 be independent random variables with distribution equal to the empirical distribution of formula_123. This is the sequence of bootstrap samples.
Then it can be shown that
formula_129
where formula_130 represents probability conditional on formula_123, formula_126, formula_131, and formula_132.
To see this, note that formula_133 satisfies the Lindeberg condition, so the CLT holds.
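A small simulation sketch of this result in Python, comparing quantiles of the conditional bootstrap distribution of the standardized mean with standard normal quantiles; the sample size, underlying distribution, and number of replicates are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
x = rng.exponential(scale=2.0, size=n)            # one observed sample
x_bar, sigma_hat = x.mean(), x.std(ddof=0)

# Conditional bootstrap distribution of sqrt(n) * (mean(X*) - mean(X)) / sigma_hat
boot = np.array([np.sqrt(n) * (rng.choice(x, n, replace=True).mean() - x_bar) / sigma_hat
                 for _ in range(5000)])

# Standard normal quantiles for comparison; the bootstrap quantiles should be close
normal_quantiles = {0.05: -1.645, 0.5: 0.0, 0.95: 1.645}
for q, z in normal_quantiles.items():
    print(q, round(np.quantile(boot, q), 3), z)
```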
The Glivenko–Cantelli theorem provides theoretical background for the bootstrap method.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "BC_a"
},
{
"math_id": 1,
"text": "\\binom {2n-1}n = \\frac{(2n-1)!}{n! (n-1)!}"
},
{
"math_id": 2,
"text": " \\bar{x} = \\frac{1}{10} (x_1 + x_2 + \\cdots + x_{10})."
},
{
"math_id": 3,
"text": " \\bar{x}"
},
{
"math_id": 4,
"text": "N"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "\\mathcal{D}^J"
},
{
"math_id": 7,
"text": "w^J_i = x^J_i - x^J_{i-1}"
},
{
"math_id": 8,
"text": "\\mathbf{x}^J"
},
{
"math_id": 9,
"text": "N-1"
},
{
"math_id": 10,
"text": "[0,1]"
},
{
"math_id": 11,
"text": "\\hat{f\\,}_h(x)"
},
{
"math_id": 12,
"text": "f(x)"
},
{
"math_id": 13,
"text": "\\hat{f\\,}_h(x)={1\\over nh}\\sum_{i=1}^nK\\left({x-X_i\\over h}\\right),"
},
{
"math_id": 14,
"text": "h"
},
{
"math_id": 15,
"text": "\\hat{F\\,}_h(x)"
},
{
"math_id": 16,
"text": "\\hat{F\\,}_h(x)=\\int_{-\\infty}^x \\hat f_h(t)\\,dt."
},
{
"math_id": 17,
"text": "\\hat{F}=F_{\\hat{\\theta}}"
},
{
"math_id": 18,
"text": "F_{\\theta}"
},
{
"math_id": 19,
"text": "F_{\\hat{\\theta}}"
},
{
"math_id": 20,
"text": "\\bar{X_{n}}-\\mu_{\\theta}"
},
{
"math_id": 21,
"text": "\\bar{X}_n^*-\\mu^*"
},
{
"math_id": 22,
"text": "\\mu^*=\\mu_{\\hat{\\theta}}"
},
{
"math_id": 23,
"text": "\\widehat{y\\,}_i"
},
{
"math_id": 24,
"text": "\\widehat{\\varepsilon\\,}_i = y_i - \\widehat{y\\,}_i, (i = 1,\\dots, n)"
},
{
"math_id": 25,
"text": "\\widehat{\\varepsilon\\,}_j"
},
{
"math_id": 26,
"text": "y^*_i = \\widehat{y\\,}_i + \\widehat{\\varepsilon\\,}_j"
},
{
"math_id": 27,
"text": "y^*_i"
},
{
"math_id": 28,
"text": "\\widehat\\mu^*_i"
},
{
"math_id": 29,
"text": "y(x)=f(x)+\\varepsilon,\\ \\ \\varepsilon\\sim \\mathcal{N}(0,\\sigma^2),"
},
{
"math_id": 30,
"text": "\\varepsilon"
},
{
"math_id": 31,
"text": "f(x_1),\\ldots,f(x_n)"
},
{
"math_id": 32,
"text": "m=[m(x_1),\\ldots,m(x_n)]^\\intercal\n"
},
{
"math_id": 33,
"text": "(K)_{ij}=k(x_i,x_j)."
},
{
"math_id": 34,
"text": "f(x)\\sim \\mathcal{GP}(m,k)."
},
{
"math_id": 35,
"text": "y(x)\\sim \\mathcal{GP}(m,l)"
},
{
"math_id": 36,
"text": "l(x_i,x_j)=k(x_i,x_j)+\\sigma^2\\delta(x_i,x_j)"
},
{
"math_id": 37,
"text": "\\delta(x_i,x_j)"
},
{
"math_id": 38,
"text": "[y(x_1),\\ldots,y(x_r)]\\sim \\mathcal{N}(m_0,K_0)\n"
},
{
"math_id": 39,
"text": "m_0=[m(x_1),\\ldots,m(x_r)]^\\intercal"
},
{
"math_id": 40,
"text": "(K_0)_{ij}=k(x_i,x_j)+\\sigma^2\\delta(x_i,x_j)."
},
{
"math_id": 41,
"text": "[y(x_1),\\ldots,y(x_r),f(x_1^*),\\ldots,f(x_s^*)]^\\intercal\\sim \\mathcal{N}(\\binom{m_0}{m_*}\\begin{pmatrix} K_0 & K_* \\\\ K_*^\\intercal & K_{**} \\end{pmatrix})\n"
},
{
"math_id": 42,
"text": "m_*=[m(x_1^*),\\ldots,m(x_s^*)]^\\intercal"
},
{
"math_id": 43,
"text": "(K_{**})_{ij}=k(x_i^*,x_j^*)"
},
{
"math_id": 44,
"text": "(K_*)_{ij}=k(x_i,x_j^*). "
},
{
"math_id": 45,
"text": "[f(x_1^*),\\ldots,f(x_s^*)]^\\intercal\\mid([y(x)]^\\intercal=y)\\sim \\mathcal{N}(m_\\text{post},K_\\text{post}),"
},
{
"math_id": 46,
"text": "y=[y_1,...,y_r]^\\intercal"
},
{
"math_id": 47,
"text": "m_\\text{post}=m_*+K_*^\\intercal(K_O+\\sigma^2I_r)^{-1}(y-m_0)"
},
{
"math_id": 48,
"text": "K_\\text{post}=K_{**}-K_*^\\intercal(K_O+\\sigma^2I_r)^{-1}K_*"
},
{
"math_id": 49,
"text": "I_r"
},
{
"math_id": 50,
"text": "r\\times r"
},
{
"math_id": 51,
"text": "y"
},
{
"math_id": 52,
"text": "y^*_i = \\widehat{y\\,}_i + \\widehat{\\varepsilon\\,}_i v_i"
},
{
"math_id": 53,
"text": "v_i"
},
{
"math_id": 54,
"text": " v_i "
},
{
"math_id": 55,
"text": " v_i = \\begin{cases}\n-(\\sqrt{5} -1)/2 & \\text{with probability } (\\sqrt{5} +1)/(2\\sqrt{5}), \\\\\n(\\sqrt{5} +1)/2 & \\text{with probability } (\\sqrt{5} -1)/(2\\sqrt{5})\n\\end{cases}"
},
{
"math_id": 56,
"text": " v_i = \\begin{cases}\n-0.6180\\quad\\text{(with a 0 in the units' place)} & \\text{with probability } 0.7236, \\\\\n+1.6180\\quad\\text{(with a 1 in the units' place)} & \\text{with probability } 0.2764.\n\\end{cases}"
},
{
"math_id": 57,
"text": " v_i =\\begin{cases}\n-1 & \\text{with probability } 1/2, \\\\\n+1 & \\text{with probability } 1/2.\n\\end{cases}"
},
{
"math_id": 58,
"text": "W_i"
},
{
"math_id": 59,
"text": "W_j"
},
{
"math_id": 60,
"text": "i \\neq j"
},
{
"math_id": 61,
"text": "\\lim_{n\\to \\infty} \\operatorname{Binomial}(n,1/n) = \\operatorname{Poisson}(1)"
},
{
"math_id": 62,
"text": "O(n^{3/4})"
},
{
"math_id": 63,
"text": "b"
},
{
"math_id": 64,
"text": "b=n^\\gamma "
},
{
"math_id": 65,
"text": "\\gamma \\in [0.5, 1]"
},
{
"math_id": 66,
"text": "b=n^{0.7}"
},
{
"math_id": 67,
"text": "1 - \\alpha"
},
{
"math_id": 68,
"text": "1 - \\alpha"
},
{
"math_id": 69,
"text": "O(1 / \\sqrt{n})"
},
{
"math_id": 70,
"text": "O(n^{-1})"
},
{
"math_id": 71,
"text": "O_p(n^{-3/2})"
},
{
"math_id": 72,
"text": "O(\\cdot)"
},
{
"math_id": 73,
"text": " (2\\widehat{\\theta\\,} -\\theta^{*}_{(1-\\alpha/2)},2\\widehat{\\theta\\,} -\\theta^{*}_{(\\alpha/2)})"
},
{
"math_id": 74,
"text": " \\theta^{*}_{(1-\\alpha/2)} "
},
{
"math_id": 75,
"text": "1-\\alpha/2 "
},
{
"math_id": 76,
"text": " \\theta^{*} "
},
{
"math_id": 77,
"text": " (\\theta^{*}_{(\\alpha/2)},\\theta^{*}_{(1-\\alpha/2)})"
},
{
"math_id": 78,
"text": " (\\widehat{\\theta\\,} - t^{*}_{(1-\\alpha/2)}\\cdot \\widehat{\\text{se}}_\\theta,\\widehat{\\theta\\,} - t^{*}_{(\\alpha/2)}\\cdot \\widehat{\\text{se}}_\\theta) "
},
{
"math_id": 79,
"text": " t^{*}_{(1-\\alpha/2)} "
},
{
"math_id": 80,
"text": " t^{*}=(\\widehat{\\theta\\,}^{*}-\\widehat{\\theta\\,})/\\widehat{\\text{se}}_{\\widehat{\\theta\\,}^*} "
},
{
"math_id": 81,
"text": "\\widehat{\\text{se}}_\\theta "
},
{
"math_id": 82,
"text": " x_1, \\ldots, x_n "
},
{
"math_id": 83,
"text": "\\bar{x}"
},
{
"math_id": 84,
"text": "\\sigma_x^2"
},
{
"math_id": 85,
"text": " y_1, \\ldots, y_m "
},
{
"math_id": 86,
"text": "\\bar{y}"
},
{
"math_id": 87,
"text": "\\sigma_y^2"
},
{
"math_id": 88,
"text": " t = \\frac{\\bar{x}-\\bar{y}}{\\sqrt{\\sigma_x^2/n + \\sigma_y^2/m}} "
},
{
"math_id": 89,
"text": " x_i' = x_i - \\bar{x} + \\bar{z} "
},
{
"math_id": 90,
"text": " y_i' = y_i - \\bar{y} + \\bar{z}, "
},
{
"math_id": 91,
"text": "\\bar{z}"
},
{
"math_id": 92,
"text": " x_i^* "
},
{
"math_id": 93,
"text": "n"
},
{
"math_id": 94,
"text": " x_i' "
},
{
"math_id": 95,
"text": " y_i^* "
},
{
"math_id": 96,
"text": "m"
},
{
"math_id": 97,
"text": " y_i' "
},
{
"math_id": 98,
"text": " t^* = \\frac{\\bar{x^*}-\\bar{y^*}}{\\sqrt{\\sigma_x^{*2}/n + \\sigma_y^{*2}/m}} "
},
{
"math_id": 99,
"text": "B"
},
{
"math_id": 100,
"text": "B=1000"
},
{
"math_id": 101,
"text": " p = \\frac{\\sum_{i=1}^B I\\{t_i^* \\geq t\\}}{B} "
},
{
"math_id": 102,
"text": " I\\{\\text{condition}\\} = 1 "
},
{
"math_id": 103,
"text": "\\sigma/\\sqrt n"
},
{
"math_id": 104,
"text": "T"
},
{
"math_id": 105,
"text": "\\mathbb{R}"
},
{
"math_id": 106,
"text": "\\ell^\\infty(T)"
},
{
"math_id": 107,
"text": "T = \\mathbb{R}"
},
{
"math_id": 108,
"text": "C[0,1]"
},
{
"math_id": 109,
"text": "D[0,1]"
},
{
"math_id": 110,
"text": "G_n(\\cdot, F_n)"
},
{
"math_id": 111,
"text": "T_n"
},
{
"math_id": 112,
"text": "F_0"
},
{
"math_id": 113,
"text": "\\sup_\\tau |G_n(\\tau, F_n) - G_\\infty(\\tau, F_0)|"
},
{
"math_id": 114,
"text": "n \\to \\infty"
},
{
"math_id": 115,
"text": "F_n"
},
{
"math_id": 116,
"text": "G_\\infty(\\tau, F_0)"
},
{
"math_id": 117,
"text": "\\tau"
},
{
"math_id": 118,
"text": "P(T_n \\leq \\tau) = G_n(\\tau, F_0)"
},
{
"math_id": 119,
"text": "\\{X_i : i=1, \\ldots, n\\}"
},
{
"math_id": 120,
"text": "T_n = \\frac{\\sum_{i=1}^n g_n(X_i) - t_n}{\\sigma_n}"
},
{
"math_id": 121,
"text": "t_n"
},
{
"math_id": 122,
"text": "\\sigma_n"
},
{
"math_id": 123,
"text": " X_{n1}, \\ldots, X_{nn} "
},
{
"math_id": 124,
"text": " \\mathbb{E}[X_{n1}] = \\mu "
},
{
"math_id": 125,
"text": " \\text{Var}[X_{n1}] = \\sigma^2 < \\infty "
},
{
"math_id": 126,
"text": " n \\ge 1 "
},
{
"math_id": 127,
"text": " \\bar{X}_n = n^{-1}(X_{n1} + \\cdots + X_{nn}) "
},
{
"math_id": 128,
"text": "X^*_{n1}, \\ldots, X^*_{nn}"
},
{
"math_id": 129,
"text": " \\sup_{\\tau \\in \\mathbb{R}} \\left | P^* \\left(\\frac{\\sqrt{n} (\\bar{X}^*_n - \\bar{X}_n)}{\\hat{\\sigma}_n} \\le \\tau \\right) - P \\left(\\frac{\\sqrt{n} (\\bar{X}_n - \\mu)}{\\sigma} \\le \\tau \\right) \\right | \\to 0 \\text{ in probability as } n \\to \\infty, "
},
{
"math_id": 130,
"text": " P^* "
},
{
"math_id": 131,
"text": " \\bar{X}^*_n = n^{-1}(X^*_{n1} + \\cdots + X^*_{nn}) "
},
{
"math_id": 132,
"text": " \\hat{\\sigma}_n^2 = n^{-1} \\sum_{i=1}^{n}(X_{ni} - \\bar{X}_n)^2 "
},
{
"math_id": 133,
"text": "(X_{ni}^* - \\bar X_n)/\\sqrt n\\hat{\\sigma}_n"
}
]
| https://en.wikipedia.org/wiki?curid=6885770 |
6885778 | Separable partial differential equation | A separable partial differential equation can be broken into a set of equations of lower dimensionality (fewer independent variables) by a method of separation of variables. It generally relies upon the problem having some special form or symmetry. In this way, the partial differential equation (PDE) can be solved by solving a set of simpler PDEs, or even ordinary differential equations (ODEs) if the problem can be broken down into one-dimensional equations.
The most common form of separation of variables is simple separation of variables. A solution is obtained by assuming a solution of the form given by a product of functions of each individual coordinate. There is a special form of separation of variables called formula_0-separation of variables which is accomplished by writing the solution as a particular fixed function of the coordinates multiplied by a product of functions of each individual coordinate. Laplace's equation on formula_1 is an example of a partial differential equation that admits solutions through formula_0-separation of variables; in the three-dimensional case this uses 6-sphere coordinates.
Example.
For example, consider the time-independent Schrödinger equation
formula_2
for the function formula_3 (in dimensionless units, for simplicity). (Equivalently, consider the inhomogeneous Helmholtz equation.) If the function formula_4 in three dimensions is of the form
formula_5
then it turns out that the problem can be separated into three one-dimensional ODEs for functions formula_6, formula_7, and formula_8, and the final solution can be written as formula_9. (More generally, the separable cases of the Schrödinger equation were enumerated by Eisenhart in 1948.) | [
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "{\\mathbb R}^n"
},
{
"math_id": 2,
"text": "[-\\nabla^2 + V(\\mathbf{x})]\\psi(\\mathbf{x}) = E\\psi(\\mathbf{x})"
},
{
"math_id": 3,
"text": "\\psi(\\mathbf{x})"
},
{
"math_id": 4,
"text": "V(\\mathbf{x})"
},
{
"math_id": 5,
"text": "V(x_1,x_2,x_3) = V_1(x_1) + V_2(x_2) + V_3(x_3),"
},
{
"math_id": 6,
"text": "\\psi_1(x_1)"
},
{
"math_id": 7,
"text": "\\psi_2(x_2)"
},
{
"math_id": 8,
"text": "\\psi_3(x_3)"
},
{
"math_id": 9,
"text": "\\psi(\\mathbf{x}) = \\psi_1(x_1) \\cdot \\psi_2(x_2) \\cdot \\psi_3(x_3)"
}
]
| https://en.wikipedia.org/wiki?curid=6885778 |
68867066 | Fibbinary number | Numbers whose binary representation does not contain two consecutive ones
In mathematics, the fibbinary numbers are the numbers whose binary representation does not contain two consecutive ones. That is, they are sums of distinct and non-consecutive powers of two.
Relation to binary and Fibonacci numbers.
The fibbinary numbers were given their name by Marc LeBrun, because they combine certain properties of binary numbers and Fibonacci numbers. In particular, reinterpreting the Zeckendorf representation of a number (its representation as a sum of distinct, non-consecutive Fibonacci numbers) as a binary numeral always produces a fibbinary number. For example, the Zeckendorf representation of 19 is 13 + 5 + 1; reading this as the binary sequence 101001 and interpreting it as a binary number gives 41 = 32 + 8 + 1, and the 19th fibbinary number is 41.
Properties.
Because the property of having no two consecutive ones defines a regular language, the binary representations of fibbinary numbers can be recognized by a finite automaton, which means that the fibbinary numbers form a 2-automatic set.
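A minimal sketch in Python: a number is fibbinary exactly when its binary representation has no two adjacent 1 bits, which can be checked with a single bitwise test (the listing range below is an illustrative choice):

```python
def is_fibbinary(n: int) -> bool:
    """True if the binary representation of n has no two consecutive 1 bits."""
    return n & (n >> 1) == 0

# First few fibbinary numbers: 0, 1, 2, 4, 5, 8, 9, 10, 16, 17, 18, 20, 21, ...
print([n for n in range(64) if is_fibbinary(n)])
print(is_fibbinary(41), is_fibbinary(19))   # True, False (19 is 10011 in binary)
```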
The fibbinary numbers include the Moser–de Bruijn sequence, sums of distinct powers of four. Just as the fibbinary numbers can be formed by reinterpreting Zeckendorff representations as binary, the Moser–de Bruijn sequence can be formed by reinterpreting binary representations as quaternary.
A number formula_0 is a fibbinary number if and only if the binomial coefficient formula_1 is odd. Relatedly, formula_0 is fibbinary if and only if the central Stirling number of the second kind formula_2 is odd.
Every fibbinary number formula_3 takes one of the two forms formula_4 or formula_5, where formula_6 is another fibbinary number.
Correspondingly, the power series whose exponents are fibbinary numbers,
formula_7
obeys the functional equation
formula_8
Asymptotic formulas are known for the number of integer partitions in which all parts are fibbinary numbers.
If a hypercube graph formula_9 of dimension formula_10 is indexed by integers from 0 to formula_11, so that two vertices are adjacent when their indexes have binary representations with Hamming distance one, then the subset of vertices indexed by the fibbinary numbers forms a Fibonacci cube as its induced subgraph.
Every number has a fibbinary multiple. For instance, 15 is not fibbinary, but multiplying it by 11 produces 165 (10100101 in binary), which is fibbinary.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\tbinom{3n}{n}"
},
{
"math_id": 2,
"text": "\\textstyle \\left\\{{2n\\atop n}\\right\\}"
},
{
"math_id": 3,
"text": "f_i"
},
{
"math_id": 4,
"text": "2f_j"
},
{
"math_id": 5,
"text": "4f_j+1"
},
{
"math_id": 6,
"text": "f_j"
},
{
"math_id": 7,
"text": "B(x)=1+x+x^2+x^4+x^5+x^8+\\cdots,"
},
{
"math_id": 8,
"text": "B(x)=xB(x^4)+B(x^2)."
},
{
"math_id": 9,
"text": "Q_d"
},
{
"math_id": 10,
"text": "d"
},
{
"math_id": 11,
"text": "2^d-1"
}
]
| https://en.wikipedia.org/wiki?curid=68867066 |
68870101 | Contested garment rule | Jewish bankruptcy guidance
The contested garment (CG) rule, also called concede-and-divide, is a division rule for solving problems of conflicting claims (also called "bankruptcy problems"). The idea is that, if one claimant's claim is less than 100% of the estate to divide, then he effectively "concedes" the unclaimed estate to the other claimant. Therefore, we first give to each claimant, the amount conceded to him/her by the other claimant. The remaining amount is then divided equally among the two claimants.
The CG rule first appeared in the Mishnah, exemplified by a case of conflict over a garment, hence the name. In the Mishnah, it was described only for two-people problems. But in 1985, Robert Aumann and Michael Maschler have proved that, in every bankruptcy problem, there is a unique division that is consistent with the CG rule for each pair of claimants. They call the rule, that selects this unique division, the CG-consistent rule (it is also called the Talmud rule).
Problem description.
There is a divisible resource, denoted by "formula_0" (=Estate or Endowment). There are "n" people who claim this resource or parts of it; they are called "claimants". The amount claimed by each claimant "i" is denoted by "formula_1". Usually, formula_2, that is, the estate is insufficient to satisfy all the claims. The goal is to allocate to each claimant an amount "formula_3" such that formula_4.
Two claimants.
With two claimants, the CG rule works in the following way. First, each claim is truncated at the estate: formula_5. Claimant 1 is awarded the amount conceded to him by claimant 2, namely formula_6, and claimant 2 is awarded the amount conceded by claimant 1, namely formula_7. The remainder, formula_8, is then divided equally between the two claimants.
Summing the amounts given to each claimant, we can write the following formula: formula_9. For example: if formula_10 and formula_11, then formula_12, i.e., each claimant receives half; if formula_10, formula_13 and formula_14, then formula_15, i.e., claimant 1 receives three quarters and claimant 2 receives one quarter.
These two examples are first mentioned in the first Mishnah of Bava Metzia: "Two are holding a garment. One says, 'I found it,' and the other says, 'I found it'..."
Many claimants.
To extend the CG rule to problems with three or more claimants, we apply the general principle of "consistency" (also called "coherence"), which says that every part of a fair division should be fair. In particular, we seek an allocation that respects the CG rule for each pair of claimants. That is, for every pair of claimants "i" and "j": formula_16. A priori, it is not clear that such an allocation always exists, or that it is unique. However, it can be proved that a unique CG-consistent allocation always exists. It can be described by the following algorithm: if formula_17 (the claims sum to more than twice the estate), the allocation is formula_18, i.e., the constrained equal awards (CEA) rule applied to the half-claims; otherwise (when formula_19), the allocation is formula_20, i.e., each claimant receives half of his claim, and the rest of the estate is divided according to the constrained equal losses (CEL) rule applied to the half-claims.
Note that, with two claimants, once the claims are truncated to be at most the estate, the condition formula_19 always holds. For example: formula_21.
Here are some three-claimant examples: formula_22; formula_23; formula_24; formula_25; formula_26; formula_27.
The first three examples appear in another Mishnah, in Ketubot:"Suppose a man, who was married to three women, died; the marriage contract of one wife was for 100 dinars, and the marriage contract of the second wife was for 200 dinars, and the marriage contract of the third wife was for 300, and all three contracts were issued on the same date so that none of the wives has precedence over any of the others."
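A minimal sketch in Python of the algorithm described above, with simple bisection-based implementations of the constrained equal awards (CEA) and constrained equal losses (CEL) rules; it reproduces the three-claimant examples (the numerical tolerance is an implementation choice):

```python
def _solve(f, lo, hi):
    """Bisection for a monotone function f with a root in [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def cea(claims, estate):
    """Constrained equal awards: each gets min(c_i, r), with r chosen to exhaust the estate."""
    r = _solve(lambda r: sum(min(c, r) for c in claims) - estate, 0, max(claims))
    return [min(c, r) for c in claims]

def cel(claims, estate):
    """Constrained equal losses: each gets max(c_i - r, 0), with r chosen to exhaust the estate."""
    r = _solve(lambda r: sum(max(c - r, 0) for c in claims) - estate, 0, max(claims))
    return [max(c - r, 0) for c in claims]

def talmud(claims, estate):
    halves = [c / 2 for c in claims]
    if sum(claims) >= 2 * estate:
        return cea(halves, estate)
    return [h + x for h, x in zip(halves, cel(halves, estate - sum(halves)))]

for e in (100, 200, 300, 400, 500, 600):
    print(e, [round(v, 2) for v in talmud([100, 200, 300], e)])
```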
Constructive description.
The CG rule can be described in a constructive way. Suppose "E" increases from 0 to the half-sum of the claims: the first units are divided equally, until each claimant receives formula_28. Then, the claimant with the smallest formula_1 is put on hold, and the next units are divided equally among the remaining claimants until each of them has received half of the next-smallest formula_1. Then, the claimant with the second-smallest formula_1 is put on hold too. This goes on until either the estate is fully divided, or each claimant gets exactly formula_29. If some estate remains, then the losses are divided in a symmetric way, starting with an estate equal to the sum of all claims, and decreasing down to half this sum.
Properties.
The CG rule is "self-dual". This means that it treats gains and losses symmetrically: it divides gains in the same way that it divides losses. Formally: formula_30.
Game-theoretic analysis.
The CG rule can be derived independently, as the nucleolus of a certain cooperative game defined based on the claims.
Piniles' rule.
Zvi Menahem Piniles, a 19th-century Jewish scholar, presented a different rule to explain the cases in Ketubot. His rule is similar to the CG rule, but it is not consistent with the CG rule when there are two claimants. The rule works as follows: if the sum of the half-claims is at least the estate, the estate is divided according to the constrained equal awards rule applied to the half-claims, formula_18; otherwise, each claimant first receives half of his claim, and the remainder is divided according to the constrained equal awards rule applied to the half-claims, i.e., the allocation is formula_31.
Examples with two claimants: formula_32; formula_33.
Examples with three claimants: formula_34, since formula_35; formula_36, since formula_37; formula_38, since formula_39.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "c_i"
},
{
"math_id": 2,
"text": "\\sum_{i=1}^n c_i > E"
},
{
"math_id": 3,
"text": "x_i"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n x_i = E"
},
{
"math_id": 5,
"text": "c_i' := \\min(c_i,E) "
},
{
"math_id": 6,
"text": "E-c_2'"
},
{
"math_id": 7,
"text": "E-c_1'"
},
{
"math_id": 8,
"text": "E-(E-c_2')-(E-c_1') = c_1'+c_2'-E"
},
{
"math_id": 9,
"text": "CG(c_1,c_2; E) = \\left( \\frac{E+c_1'-c_2'}{2}~,~\\frac{E+c_2'-c_1'}{2} \\right)"
},
{
"math_id": 10,
"text": "E=1 "
},
{
"math_id": 11,
"text": "c_1=c_2=1 "
},
{
"math_id": 12,
"text": "CG(1,1; 1) = (1/2, 1/2)"
},
{
"math_id": 13,
"text": "c_1=1 "
},
{
"math_id": 14,
"text": "c_2=1/2 "
},
{
"math_id": 15,
"text": "CG(1,1/2; 1) = (3/4, 1/4)"
},
{
"math_id": 16,
"text": "(x_i, x_j) = CG(c_i, c_j; x_i+x_j)"
},
{
"math_id": 17,
"text": "\\sum_{i=1}^n c_i > 2 E"
},
{
"math_id": 18,
"text": "CEA(c_1/2,\\ldots,c_n/2; E)"
},
{
"math_id": 19,
"text": "\\sum_{i=1}^n c_i \\leq 2 E"
},
{
"math_id": 20,
"text": "(c_1/2,\\ldots,c_n/2) + CEL(c_1/2,\\ldots,c_n/2; E-\\sum_j (c_j/2))"
},
{
"math_id": 21,
"text": "CG(1,1/2; 1) = (1/2,1/4) + CEL(1/2,1/4; 1/4) = (1/2,1/4) + (1/4, 0) = (3/4,1/4)"
},
{
"math_id": 22,
"text": "CG(100,200,300; 100) = (33.333, 33.333, 33.333)"
},
{
"math_id": 23,
"text": "CG(100,200,300; 200) = (50, 75, 75)"
},
{
"math_id": 24,
"text": "CG(100,200,300; 300) = (50, 100, 150)"
},
{
"math_id": 25,
"text": "CG(100,200,300; 400) = (50, 125, 225)"
},
{
"math_id": 26,
"text": "CG(100,200,300; 500) = (66.667, 166.667, 266.667)"
},
{
"math_id": 27,
"text": "CG(100,200,300; 600) = (100, 200, 300)"
},
{
"math_id": 28,
"text": "\\min_i(c_i/2)"
},
{
"math_id": 29,
"text": "c_i/2"
},
{
"math_id": 30,
"text": "CG(c,E) = c - CG(c, \\sum c - E)"
},
{
"math_id": 31,
"text": "(c_1/2,\\ldots,c_n/2) + CEA(c_1/2,\\ldots,c_n/2; E-\\sum_{j=1}^n c_j/2)"
},
{
"math_id": 32,
"text": "PINI(60,90; 100) = (42.5, 57.5)"
},
{
"math_id": 33,
"text": "PINI(50,100; 100) = (37.5, 62.5)"
},
{
"math_id": 34,
"text": "PINI(100,200,300; 100) = (33.333, 33.333, 33.333)"
},
{
"math_id": 35,
"text": "CEA(50,100,150; 100) = (33.333, 33.333, 33.333)"
},
{
"math_id": 36,
"text": "PINI(100,200,300; 200) = (50, 75, 75)"
},
{
"math_id": 37,
"text": "CEA(50,100,150; 200) = (50, 75, 75)"
},
{
"math_id": 38,
"text": "PINI(100,200,300; 300) = (50, 100, 150)"
},
{
"math_id": 39,
"text": "CEA(50,100,150; 300) = (50, 100, 150)"
}
]
| https://en.wikipedia.org/wiki?curid=68870101 |
68870278 | Finite promise games and greedy clique sequences | Collection of mathematical games
The finite promise games are a collection of mathematical games developed by American mathematician Harvey Friedman in 2009 which are used to develop a family of fast-growing functions formula_0, formula_1 and formula_2. The greedy clique sequence is a graph theory concept, also developed by Friedman in 2010, which are used to develop fast-growing functions formula_3, formula_4 and formula_5.
formula_6 represents the theory of ZFC plus the infinite family of axioms "there exists a strongly formula_7-Mahlo cardinal", for all positive integers formula_7, and formula_8 represents the theory of ZFC plus "for each formula_7, there is a strongly formula_7-Mahlo cardinal". formula_9 represents the theory of ZFC plus, for each formula_7, "there is a formula_7-stationary Ramsey cardinal", and formula_10 represents the theory of ZFC plus "for each formula_7, there is a strongly formula_7-stationary Ramsey cardinal". formula_11 represents the theory of ZFC plus, for each formula_7, "there is a formula_7-huge cardinal", and formula_12 represents the theory of ZFC plus "for each formula_7, there is a strongly formula_7-huge cardinal".
Finite promise games.
Each of the games is finite, predetermined in length, and has two players (Alice and Bob). At each turn, Alice chooses an integer or a number of integers (an offering) and the Bob has to make one of two kinds of promises restricting his future possible moves. In all games, Bob wins if and only if Bob has kept all of his promises.
Finite piecewise linear copy/invert games.
Here, formula_13 is the set of integers, formula_14 is the set of non-negative integers, and all letters represent integers. We say that a map formula_15 is "piecewise linear" if formula_16 can be defined by various affine functions with integer coefficients on each of finitely many pieces, where each piece is defined by a finite set of linear inequalities with integer coefficients. For a piecewise linear map formula_15, a formula_16-inversion of formula_17 is some formula_18 such that formula_19. We then define the game formula_20 for nonzero formula_21.
formula_20 has formula_22 rounds, and alternates between Alice and Bob. At every stage of the game, Alice is required to play formula_23, called her "offering", which is either of the form formula_24 or formula_25, where formula_26 and formula_27 are integers previously played by Bob. Bob is then required to either:
In RCA0, it can be proven that Bob always has a winning strategy for any given game. The game formula_28 is a modified version in which Bob is forced to accept all factorial offers formula_29 made by Alice. Bob always has a winning strategy for formula_28 for sufficiently large formula_30, although this cannot be proven in any given consistent fragment of formula_6, but can be proven in formula_8. The function formula_31 is the smallest formula_32 such that Bob can win formula_28 for any formula_33 such that formula_34 and formula_35 are greater than or equal to formula_32 and all the following values are less than formula_36:
Finite polynomial copy/invert games.
Let formula_38 be a polynomial with integer coefficients. A special formula_39-inversion at formula_17 in formula_13 consists of formula_40 such that formula_41. We now define the game formula_42 for nonzero formula_21, where formula_43 are polynomials with integer coefficients. formula_42 consists of formula_22 alternating plays by Alice and Bob. At every stage of the game, Alice is required to play formula_44 of the form formula_45, formula_46 or formula_47, where formula_26 is a formula_7-tuple of integers previously played by Bob. Bob is then required to either:
Let formula_43 be polynomials with integer coefficients. In RCA0, it can be proven that Bob always has a winning strategy for any given game. If formula_30 are sufficiently large, then Bob wins formula_49, which is formula_50 in which Bob is forced to accept all double-factorial offers formula_29 made by Alice. However, once again, this cannot be proven in any given consistent fragment of formula_6, but can be proven in formula_8. The function formula_51 is the smallest formula_32 such that Bob can win formula_49 for any formula_52 such that formula_34 and formula_35 are greater than or equal to formula_32 and all the following values are less than formula_36:
Finite linear copy/invert games.
We say that formula_54 are "additively equivalent" if and only if formula_55. For nonzero integers formula_56 and formula_57, we define the game formula_58 which consists of formula_22 alternating rounds between Alice and Bob. At every stage of the game, Alice is required to play an integer formula_23 of the form formula_24 or formula_25, where formula_59 are integers previously played by Bob. Bob is then required to either:
Let formula_57. In RCA0, it can be proven that Bob always has a winning strategy for any given game. If formula_34 is sufficiently large, then Bob wins formula_66, in which Bob accepts all factorial offers formula_29 made by Alice. However, once again, this cannot be proven in any given consistent fragment of formula_6, but can be proven in formula_8. The function formula_67 is the smallest formula_32 such that Bob can win formula_68 for any formula_69 such that formula_34 is greater than or equal to formula_32, formula_21 are positive, and all the following values are less than formula_36:
Functions.
As shown by Friedman, the three functions formula_31, formula_51 and formula_67 are extremely fast-growing, eventually dominating any function that is provably recursive in any consistent fragment of formula_6 (one such fragment is ZFC), but they are computable and provably total in formula_8.
Greedy clique sequences.
formula_72 denotes the set of all tuples of rational numbers. We use subscripts to denote indexes into tuples (starting at 1) and angle brackets to denote concatenation of tuples, e.g. formula_73. Given formula_74, we define the "upper shift" of formula_36, denoted formula_75 to be the result of adding 1 to all its nonnegative components. Given formula_76, we say that formula_77 and formula_78. formula_76 are called "order equivalent" if and only if they have the same length and for all formula_79, formula_80 iff formula_81. A set formula_82 is "order invariant" iff for all order equivalent formula_17 and formula_26, formula_83.
Let formula_84 be a graph with vertices in formula_72. Let formula_85 be the set defined as follows: for every edge formula_86 in formula_84, the concatenation formula_87 is in formula_85. Then if formula_85 is order invariant, we say that formula_84 is order invariant. When formula_84 is order invariant, formula_84 has infinitely many edges. We are given formula_88, formula_89, and a simple graph formula_90 (or a digraph in the case of upper shift greedy down clique sequences) with vertices in formula_91. We define a sequence formula_17 as a nonempty tuple formula_92 where formula_93; note that this is not a tuple of rationals but rather a tuple of tuples. When formula_94, formula_17 is said to be an "upper shift greedy clique sequence" in formula_95 if it satisfies the following:
When formula_94, formula_17 is said to be an "upper shift down greedy clique sequence" in formula_95 if it satisfies the following:
When formula_109, formula_17 is said to be an "extreme" "upper shift down greedy clique sequence" in formula_110 if it satisfies the following:
The thread of formula_17 is a subsequence formula_117 defined inductively as follows: formula_118, and formula_119.
Given a thread formula_120, we say that it is "open" if formula_121. Using this, Harvey Friedman defined three very powerful functions:
formula_122 and formula_123 eventually dominate all functions provably recursive in formula_9, but are themselves provably recursive in formula_10. formula_124 eventually dominates all functions provably recursive in formula_11, but is itself provably total in formula_12. | [
{
"math_id": 0,
"text": "FPLCI(k)"
},
{
"math_id": 1,
"text": "FPCI(k)"
},
{
"math_id": 2,
"text": "FLCI(k)"
},
{
"math_id": 3,
"text": "USGCS(k)"
},
{
"math_id": 4,
"text": "USGDCS(k)"
},
{
"math_id": 5,
"text": "USGDCS_2(k)"
},
{
"math_id": 6,
"text": "\\mathsf{SMAH}"
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "\\mathsf{SMAH^+}"
},
{
"math_id": 9,
"text": "\\mathsf{SRP}"
},
{
"math_id": 10,
"text": "\\mathsf{SRP^+}"
},
{
"math_id": 11,
"text": "\\mathsf{HUGE}"
},
{
"math_id": 12,
"text": "\\mathsf{HUGE^+}"
},
{
"math_id": 13,
"text": "\\Z"
},
{
"math_id": 14,
"text": "\\N"
},
{
"math_id": 15,
"text": "T: \\N^k \\rightarrow \\N"
},
{
"math_id": 16,
"text": "T"
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "y_1, \\ldots, y_k < x"
},
{
"math_id": 19,
"text": "T(y_1, \\ldots, y_k) = x"
},
{
"math_id": 20,
"text": "G(T, n, s)"
},
{
"math_id": 21,
"text": "n, s"
},
{
"math_id": 22,
"text": "n"
},
{
"math_id": 23,
"text": "x \\in [0, s]"
},
{
"math_id": 24,
"text": "y + z"
},
{
"math_id": 25,
"text": "w!"
},
{
"math_id": 26,
"text": "y"
},
{
"math_id": 27,
"text": "z"
},
{
"math_id": 28,
"text": "G_m(T, n, s)"
},
{
"math_id": 29,
"text": "> m"
},
{
"math_id": 30,
"text": "m, s"
},
{
"math_id": 31,
"text": "FPLCI(a)"
},
{
"math_id": 32,
"text": "N"
},
{
"math_id": 33,
"text": "(m, T, n, s)"
},
{
"math_id": 34,
"text": "m"
},
{
"math_id": 35,
"text": "s"
},
{
"math_id": 36,
"text": "a"
},
{
"math_id": 37,
"text": "\\N^k"
},
{
"math_id": 38,
"text": "P: \\Z^k \\rightarrow Z"
},
{
"math_id": 39,
"text": "P"
},
{
"math_id": 40,
"text": "0 < y_1, \\ldots, y_n < \\frac{x}{2}"
},
{
"math_id": 41,
"text": "P(y_1, \\ldots, y_n) = x"
},
{
"math_id": 42,
"text": "G(P, Q, n, s)"
},
{
"math_id": 43,
"text": "P, Q: \\Z^k \\rightarrow \\Z"
},
{
"math_id": 44,
"text": "x \\in [-s, s]"
},
{
"math_id": 45,
"text": "P(y)"
},
{
"math_id": 46,
"text": "Q(y)"
},
{
"math_id": 47,
"text": " (z!)!"
},
{
"math_id": 48,
"text": "Q"
},
{
"math_id": 49,
"text": "G_m(P, Q, n, s)"
},
{
"math_id": 50,
"text": "G(P,Q,n,s)"
},
{
"math_id": 51,
"text": "FPCI(a)"
},
{
"math_id": 52,
"text": "(m, P, Q, n, s)"
},
{
"math_id": 53,
"text": "\\Z^k"
},
{
"math_id": 54,
"text": "x, y \\in \\N^k"
},
{
"math_id": 55,
"text": "\\sum^{i}_{q=1} x_q = \\sum^{j}_{q=1} x_q \\implies \\sum^{i}_{q=1} y_q = \\sum^{j}_{q=1} y_q"
},
{
"math_id": 56,
"text": "p, n, s"
},
{
"math_id": 57,
"text": "v_1, \\ldots, v_p \\in \\N^k"
},
{
"math_id": 58,
"text": "G(v_1, \\ldots, v_p; n, s)"
},
{
"math_id": 59,
"text": "y, z"
},
{
"math_id": 60,
"text": "\\sum^k_{q=1} y_q"
},
{
"math_id": 61,
"text": "(y_1,\\ldots,y_k)"
},
{
"math_id": 62,
"text": "v_j"
},
{
"math_id": 63,
"text": "y_1, \\ldots, y_k"
},
{
"math_id": 64,
"text": "y_1,\\ldots,y_k"
},
{
"math_id": 65,
"text": "x = \\sum^k_{q=1} y_q"
},
{
"math_id": 66,
"text": "G_m(v_1,\\ldots, v_p; n, s)"
},
{
"math_id": 67,
"text": "FLCI(a)"
},
{
"math_id": 68,
"text": "G_m(v_1, \\ldots, v_p; n, s)"
},
{
"math_id": 69,
"text": "(m, v_1, \\ldots, v_p; n, s)"
},
{
"math_id": 70,
"text": "p"
},
{
"math_id": 71,
"text": "v"
},
{
"math_id": 72,
"text": "\\Q^*"
},
{
"math_id": 73,
"text": "\\langle (0, 1), (2, 3) \\rangle = (0, 1, 2, 3)"
},
{
"math_id": 74,
"text": "a \\in \\Q^*"
},
{
"math_id": 75,
"text": "\\textrm{us}(a)"
},
{
"math_id": 76,
"text": "a, b \\in \\Q^*"
},
{
"math_id": 77,
"text": "a \\preceq b \\iff \\textrm{max}(a) \\leq \\textrm{max}(b)"
},
{
"math_id": 78,
"text": "a \\prec b \\iff \\textrm{max}(a) < \\textrm{max}(b)"
},
{
"math_id": 79,
"text": "i, j"
},
{
"math_id": 80,
"text": "a_i < a_j"
},
{
"math_id": 81,
"text": "b_i < b_j"
},
{
"math_id": 82,
"text": "E \\subseteq \\Q^*"
},
{
"math_id": 83,
"text": "x \\in E \\iff y \\in E"
},
{
"math_id": 84,
"text": "H"
},
{
"math_id": 85,
"text": "E"
},
{
"math_id": 86,
"text": "(x, y)"
},
{
"math_id": 87,
"text": "\\langle x, y \\rangle"
},
{
"math_id": 88,
"text": "k \\in \\N"
},
{
"math_id": 89,
"text": "S \\subseteq \\Q^*"
},
{
"math_id": 90,
"text": "G"
},
{
"math_id": 91,
"text": "S"
},
{
"math_id": 92,
"text": "(x_1, \\ldots, x_n)"
},
{
"math_id": 93,
"text": "x_i \\in S"
},
{
"math_id": 94,
"text": "S = \\Q^k"
},
{
"math_id": 95,
"text": "\\Q^k"
},
{
"math_id": 96,
"text": "x_1"
},
{
"math_id": 97,
"text": "2 \\leq 2m \\leq k-1"
},
{
"math_id": 98,
"text": "Y = \\langle x_1, \\ldots, x_{2m-1} \\rangle"
},
{
"math_id": 99,
"text": "y = (Y_m, \\ldots, Y_{m+k-1})"
},
{
"math_id": 100,
"text": "x_{2m} \\preceq y"
},
{
"math_id": 101,
"text": "(y, x_{2m})"
},
{
"math_id": 102,
"text": "x_{2m+1} = \\textrm{us}(x_{2m})"
},
{
"math_id": 103,
"text": "\\{x_2, \\ldots, x_n\\}"
},
{
"math_id": 104,
"text": "x_{2m} = y"
},
{
"math_id": 105,
"text": "x_{2m} \\prec y"
},
{
"math_id": 106,
"text": "f, g \\in \\{x_2, \\ldots, x_n\\}"
},
{
"math_id": 107,
"text": "g \\prec f"
},
{
"math_id": 108,
"text": "(f, g)"
},
{
"math_id": 109,
"text": "S = \\Q^k \\cup \\Q^{k+1}"
},
{
"math_id": 110,
"text": "\\Q^k \\cup \\Q^{k+1}"
},
{
"math_id": 111,
"text": "x_{2m} \\in [0, k]^k"
},
{
"math_id": 112,
"text": "x_{2m+1} = (k + \\frac{1}{2}, \\textrm{us}(x_{2m}))"
},
{
"math_id": 113,
"text": "x_{2m} \\in \\Q^k \\smallsetminus [0, k]^k"
},
{
"math_id": 114,
"text": "x_{2m} = (k + \\frac{1}{2}, z)"
},
{
"math_id": 115,
"text": "x_{2m} \\in \\Q^{k+1}"
},
{
"math_id": 116,
"text": "x_{2m+1} = \\textrm{us}^{-1}(z)"
},
{
"math_id": 117,
"text": "(u_1, \\ldots, u_r) \\in [1, n]"
},
{
"math_id": 118,
"text": "u_1 = 1"
},
{
"math_id": 119,
"text": "u_{i+1} = \\textrm{max}(j \\in [u_i, 2^{u_1}]: x_j \\prec x_{u_i})"
},
{
"math_id": 120,
"text": "u"
},
{
"math_id": 121,
"text": "2^{u_r} \\leq n"
},
{
"math_id": 122,
"text": "USGCS"
},
{
"math_id": 123,
"text": "USGDCS"
},
{
"math_id": 124,
"text": "USGDCS_2"
}
]
| https://en.wikipedia.org/wiki?curid=68870278 |
68870642 | Cross-polarization | Spectroscopy technique
Cross-polarization (CP), originally published as proton-enhanced nuclear induction spectroscopy, is a solid-state nuclear magnetic resonance (ssNMR) technique for transferring nuclear magnetization between different types of nuclei via heteronuclear dipolar interactions. 1H-X cross-polarization dramatically improves the sensitivity of most ssNMR experiments involving spin-1/2 nuclei, capitalizing on the higher 1H polarisation and the shorter T1(1H) relaxation times. It was developed by Michael Gibby, Alexander Pines and John S. Waugh at the Massachusetts Institute of Technology.
In this technique the natural nuclear polarization of an abundant spin (typically 1H) is exploited to increase the polarization of a rare spin (such as 13C, 15N, 31P) by irradiating the sample with radio waves at the frequencies matching the Hartmann–Hahn condition:
formula_0
where formula_1 are the gyromagnetic ratios, formula_2 is the spinning rate, and formula_3 is an integer. This process is sometimes referred to as "spin-locking". The power of one contact pulse is typically ramped to achieve a more broadband and efficient magnetisation transfer.
The evolution of the X NMR signal intensity during the cross polarisation is a build-up and decay process whose time axis is usually referred to as the "contact time". At short CP contact times, a build-up of X magnetisation occurs, during which the transfer of 1H magnetisation from nearby spins (and remote spins through proton spin diffusion) to X occurs. For longer CP contact times, the X magnetisation decreases from T1ρ(X) relaxation, i.e. the decay of the magnetisation during a spin lock.
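As a numerical illustration of the matching condition expressed in nutation-frequency units (ν1 = γB1/2π), the following Python sketch lists X-channel field strengths that satisfy it; the 1H spin-lock strength, the spinning rate, and the choice of 13C as the X nucleus are assumed example values, not taken from the original publication:

```python
# Hartmann-Hahn matching under magic-angle spinning, in nutation-frequency units:
#   nu1_H = nu1_X + n * nu_R   for integer n (positive or negative)
nu1_H = 50.0   # 1H spin-lock nutation frequency in kHz (assumed example value)
nu_R = 10.0    # magic-angle spinning rate in kHz (assumed example value)

for n in (-2, -1, 1, 2):
    nu1_X = nu1_H - n * nu_R   # X-channel (e.g. 13C) nutation frequency satisfying the condition
    print(f"n = {n:+d}: match at nu1_X = {nu1_X:.1f} kHz")
```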
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma_H B_1(^{1}\\text{H}) = \\gamma_X B_1(\\text{X}) \\pm n \\omega_R"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "\\omega_R"
},
{
"math_id": 3,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=68870642 |
68870800 | Strategic bankruptcy problem | Problem in mathematical sociology
A strategic bankruptcy problem is a variant of a bankruptcy problem (also called "claims problem") in which claimants may act strategically, that is, they may manipulate their claims or their behavior. There are various kinds of strategic bankruptcy problems, differing in the assumptions about the possible ways in which claimants may manipulate.
Definitions.
There is a divisible resource, denoted by "formula_0" (=Estate or Endowment). There are "n" people who claim this resource or parts of it; they are called "claimants". The amount claimed by each claimant "i" is denoted by "formula_1". Usually, formula_2, that is, the estate is insufficient to satisfy all the claims. The goal is to allocate to each claimant an amount "formula_3" such that formula_4.
Unit-selection game.
O'Neill describes the following game. The estate is divided into small indivisible units, and each claimant "i" chooses formula_1 specific units; each unit is then divided equally among all the claimants who chose it.
Naturally, the agents would try to choose units such that the overlap between different agents is minimal. This game has a Nash equilibrium. In any Nash equilibrium, there is some integer "k" such that each unit is claimed by either "k" or "k"+1 claimants. When there are two claimants, there is a unique equilibrium payoff vector, and it is identical to the one returned by the contested garment rule.
Rule-proposal games.
Chun's game.
Chun describes the following game: each claimant proposes a division rule, and the claims are then repeatedly revised according to the amounts awarded by the proposed rules.
The process converges. Moreover, it has a unique Nash equilibrium, in which the payoffs are equal to the ones prescribed by the constrained equal awards rule.
Herrero's game.
Herrero describes a dual game, in which, at each round, each claimant's claim is replaced with the "minimum" amount awarded to him by a proposed rule. This process, too, has a unique Nash equilibrium, in which the payoffs are equal to the ones prescribed by the constrained equal "losses" rule.
Amount-proposal game.
Sonn describes a sequential game of amount proposals in which payoffs are discounted over time.
Sonn proves that, when the discount factor approaches 1, the limit of payoff vectors of this game converges to the constrained equal awards payoffs.
Division-proposal games.
Serrano's game.
Serrano describes another sequential game of offers. It is parametrized by a two-claimant rule "R".
If "R" satisfies resource monotonicity and super-modularity, then the above game has a unique subgame perfect equilibrium, at which each agent receives the amount recommended by the consistent extension of "R".
Corchon and Herrero's game.
Corchon and Herrero describe the following game. It is parametrized by a "compromise function" (for example: arithmetic mean).
A two-claimant rule is implementable in dominant strategies (using the arithmetic mean) if and only if it is strictly increasing in each claim and the allocation of agent "i" is a function of formula_1 and formula_5. Rules for more than two claimants are usually not implementable in dominant strategies.
Implementation game for downward-manipulation of claims.
Dagan, Serrano and Volij consider a setting in which the claims are private information. Claimants may report false claims, as long as they are lower than the true ones. This assumption is relevant in taxation, where claimants may report incomes lower than the true ones. For each rule that is "consistent" and "strictly-claims-monotonic" (a person with higher claim gets strictly more), they construct a sequential game that implements this rule in subgame-perfect equilibrium.
Costly manipulations of claims.
Landsburg considers a setting in which claims are private information, and claimants may report false claims, but this manipulation is costly. The cost of manipulation increases with the magnitude of manipulation. In the special case in which the sum of claims equals the estate, there is a single generalized rule that is a truthful mechanism, and it is a generalization of constrained equal losses.
Manipulation by pre-donations.
Sertel considers a two-claimant setting in which a claimant may manipulate by pre-donating some of his claims to the other claimant. The payoff is then calculated using the Nash Bargaining Solution. In equilibrium, both claimants receive the payoffs prescribed by the contested garment rule.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "c_i"
},
{
"math_id": 2,
"text": "\\sum_{i=1}^n c_i > E"
},
{
"math_id": 3,
"text": "x_i"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^n x_i = E"
},
{
"math_id": 5,
"text": "E-c_j"
}
]
| https://en.wikipedia.org/wiki?curid=68870800 |
68876514 | Cornelius Greither | German mathematician
Cornelius Greither (born 1956) is a German mathematician specialising in Iwasawa theory and the structure of Galois modules.
Education and career.
Greither completed his PhD in 1983 at the Ludwig-Maximilians-Universität München under the supervision of Bodo Pareigis: his thesis bears the title "Zum Kürzungsproblem kommutativer Algebren".
He habilitated in 1988 at the same university, with a thesis titled "Cyclic Galois extensions and normal bases".
In 1992, Greither proved the Iwasawa main conjecture for abelian number fields in the formula_0 case.
In 1999, together with D. R. Replogle, K. Rubin, and A. Srivastav, he proved a converse to the Hilbert–Speiser theorem.
Greither was a full professor at the Universität der Bundeswehr München. He retired in 2022 and is now an emeritus professor.
Greither is on the editorial boards of the journals "Archivum mathematicum Brno", "New York Journal of Mathematics", as well as the "Journal de Théorie des Nombres Bordeaux". Until 2014, he was an associate editor of "Annales mathématiques du Québec". | [
{
"math_id": 0,
"text": "p=2"
}
]
| https://en.wikipedia.org/wiki?curid=68876514 |
6888412 | Wheat and chessboard problem | Mathematical problem
The wheat and chessboard problem (sometimes expressed in terms of rice grains) is a mathematical problem expressed in textual form as:
<templatestyles src="Template:Blockquote/styles.css" />If a chessboard were to have wheat placed upon each square such that one grain were placed on the first square, two on the second, four on the third, and so on (doubling the number of grains on each subsequent square), how many grains of wheat would be on the chessboard at the finish?
The problem may be solved using simple addition. With 64 squares on a chessboard, if the number of grains doubles on successive squares, then the sum of grains on all 64 squares is: 1 + 2 + 4 + 8 + ... and so forth for the 64 squares. The total number of grains can be shown to be 2^64 − 1 or 18,446,744,073,709,551,615 (eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred and fifteen), weighing over 1.4 trillion metric tons, which is over 2,000 times the annual world production of wheat.
This exercise can be used to demonstrate how quickly exponential sequences grow, as well as to introduce exponents, zero power, capital-sigma notation, and geometric series. Updated for modern times using pennies and a hypothetical question such as "Would you rather have a million dollars or a penny on day one, doubled every day until day 30?", the formula has been used to explain compound interest. (Doubling would yield 2^30 − 1 = 1,073,741,823 pennies, over one billion seventy-three million pennies, or more than 10 million dollars.)
Origins.
The problem appears in different stories about the invention of chess. One of them includes the geometric progression problem. The story is first known to have been recorded in 1256 by Ibn Khallikan. Another version has the inventor of chess (in some tellings Sessa, an ancient Indian Minister) request that his ruler give him wheat according to the wheat and chessboard problem. The ruler laughs it off as a meager prize for a brilliant invention, only to have court treasurers report that the unexpectedly huge number of wheat grains would outstrip the ruler's resources. Versions differ as to whether the inventor becomes a high-ranking advisor or is executed.
Macdonnell also investigates the earlier development of the theme.
<templatestyles src="Template:Blockquote/styles.css" />[According to al-Masudi's early history of India], shatranj, or chess was invented under an Indian king, who expressed his preference for this game over backgammon. [...] The Indians, he adds, also calculated an arithmetical progression with the squares of the chessboard. [...] The early fondness of the Indians for enormous calculations is well known to students of their mathematics, and is exemplified in the writings of the great astronomer Āryabaṭha (born 476 A.D.). [...] An additional argument for the Indian origin of this calculation is supplied by the Arabic name for the square of the chessboard, (بيت, "beit"), 'house'. [...] For this has doubtless a historical connection with its Indian designation koṣṭhāgāra, 'store-house', 'granary' [...].
Solutions.
The simple, brute-force solution is just to manually double and add each step of the series:
formula_0 = 1 + 2 + 4 + ... + 9,223,372,036,854,775,808 = 18,446,744,073,709,551,615
where formula_0 is the total number of grains.
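The brute-force computation is easy to reproduce; the following is a minimal Python sketch (not part of the original problem statement) that doubles and accumulates square by square and checks the result against the closed form derived below.

total, grains = 0, 1
for square in range(64):          # 64 squares on the board
    total += grains               # add the grains on this square
    grains *= 2                   # double for the next square
print(total)                      # 18446744073709551615
print(total == 2**64 - 1)         # True: matches the closed form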
The series may be expressed using exponents:
formula_1
and, represented with capital-sigma notation as:
formula_2
It can also be solved much more easily using:
formula_3
A proof of which is:
formula_4
Multiply each side by 2:
formula_5
Subtract original series from each side:
formula_6
The solution above is a particular case of the sum of a geometric series, given by
formula_7
where formula_8 is the first term of the series, formula_9 is the common ratio and formula_10 is the number of terms.
In this problem formula_11, formula_12 and formula_13.
Thus,
formula_14
for formula_10 being any positive integer.
The exercise of working through this problem may be used to explain and demonstrate exponents and the quick growth of exponential and geometric sequences. It can also be used to illustrate sigma notation.
When expressed as exponents, the geometric series is: 2^0 + 2^1 + 2^2 + 2^3 + ... and so forth, up to 2^63. The base of each exponentiation, "2", expresses the doubling at each square, while the exponents represent the position of each square (0 for the first square, 1 for the second, and so on).
The number of grains is the 64th Mersenne number.
Second half of the chessboard.
In technology strategy, the "second half of the chessboard" is a phrase, coined by Ray Kurzweil, in reference to the point where an exponentially growing factor begins to have a significant economic impact on an organization's overall business strategy. While the number of grains on the first half of the chessboard is large, the amount on the second half is vastly (2^32 > 4 billion times) larger.
The number of grains of wheat on the first half of the chessboard is 1 + 2 + 4 + 8 + ... + 2,147,483,648, for a total of 4,294,967,295 (2^32 − 1) grains, or about 279 tonnes of wheat (assuming 65 mg as the mass of one grain of wheat).
The number of grains of wheat on the "second" half of the chessboard is 2^32 + 2^33 + 2^34 + ... + 2^63, for a total of 2^64 − 2^32 grains. This is equal to the square of the number of grains on the first half of the board, plus itself. The first square of the second half alone contains one more grain than the entire first half. On the 64th square of the chessboard alone, there would be 2^63 = 9,223,372,036,854,775,808 grains, more than two billion times as many as on the first half of the chessboard.
On the entire chessboard there would be 2^64 − 1 = 18,446,744,073,709,551,615 grains of wheat, weighing about 1,199,000,000,000 metric tons. This is over 1,600 times the global production of wheat (729 million metric tons in 2014 and 780.8 million tonnes in 2019).
Use.
Carl Sagan titled the second chapter of his final book "The Persian Chessboard" and wrote, referring to bacteria, that "Exponentials can't go on forever, because they will gobble up everything." Similarly, "The Limits to Growth" uses the story to present suggested consequences of exponential growth: "Exponential growth never can go on very long in a finite space with finite resources."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_{64}"
},
{
"math_id": 1,
"text": "T_{64} = 2^0 + 2^1 + 2^2 + \\cdots + 2^{63}"
},
{
"math_id": 2,
"text": "\\sum_{k=0}^{63} 2^k.\\, "
},
{
"math_id": 3,
"text": "T_{64} = 2^{64}- 1. \\, "
},
{
"math_id": 4,
"text": "s = 2^0 + 2^1 + 2^2 + \\cdots + 2^{63}."
},
{
"math_id": 5,
"text": "2s = 2^1 + 2^2 + 2^3 + \\cdots + 2^{63} + 2^{64}."
},
{
"math_id": 6,
"text": "\\begin{align}\n2s - s & = \\qquad\\quad \\cancel{2^1} + \\cancel{2^2} + \\cdots + \\cancel{2^{63}} + 2^{64} \\\\\n & \\quad - 2^0 - \\cancel{2^1} - \\cancel{2^2} - \\cdots - \\cancel{2^{63}} \\\\\n & = 2^{64} - 2^0 \\\\\n\\therefore s & = 2^{64}- 1.\n\\end{align}"
},
{
"math_id": 7,
"text": "a + ar + a r^2 + a r^3 + \\cdots + a r^{n-1} = \\sum_{k=0}^{n-1} ar^k= a \\, \\frac{1-r^{n}}{1-r},"
},
{
"math_id": 8,
"text": "a"
},
{
"math_id": 9,
"text": "r"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "a = 1"
},
{
"math_id": 12,
"text": "r = 2"
},
{
"math_id": 13,
"text": "n = 64"
},
{
"math_id": 14,
"text": "\\begin{align} \\sum_{k=0}^{n-1} 2^k & = 2^0 + 2^1 + 2^2 + \\cdots + 2^{n-1}\n\\\\\n& = 2^{n}-1\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=6888412 |
68891550 | Mary Emily Sinclair | American mathematician
Mary Emily Sinclair (September 27, 1878 – June 3, 1955) was an American mathematician whose research concerned algebraic surfaces and the calculus of variations. She was the first woman to earn a doctorate in mathematics at the University of Chicago, and became Clark Professor of Mathematics at Oberlin College.
Early life and education.
Sinclair was born on September 27, 1878, in Worcester, Massachusetts; she was the fourth of five children of John Elbridge Sinclair and Marietta S. Fletcher Sinclair. Her father was a mathematics professor at the Worcester Polytechnic Institute, and also had two daughters from an earlier marriage. Her mother, originally from Worcester, taught English and modern languages at the Worcester Polytechnic Institute but stopped when her children were born. After graduating in 1896 from the Worcester Classical High School, Mary Emily Sinclair became a student at Oberlin, and as a student served as president of the Oberlin branch of the YWCA. She graduated Phi Beta Kappa in 1900 with an A.B.
While working as a teacher at a seminary in Hartford, Connecticut, Sinclair continued her studies through the University of Chicago, earning a master's degree in mathematics in 1903. After briefly teaching at Lake Erie College in 1903, she taught at the University of Nebraska from 1904 through 1907, while continuing her graduate study at the University of Chicago. She completed her Ph.D. in 1908, with the dissertation "On a Compound Discontinuous Solution Connected with the Surface of Revolution of Minimum Area" in the calculus of variations, supervised by Oskar Bolza. She became the first woman to earn a doctorate in mathematics at the University of Chicago, and her doctorate marked the beginning of a period in which the university gave over 50 doctorates to women through 1946, likely the most of any American university.
Career and later life.
Meanwhile, in 1907, Sinclair had taken a position as instructor at Oberlin College, where she would remain for the rest of her career, teaching there for 37 years. The next year, on receiving her doctorate, she was promoted to associate professor. Although unmarried, she adopted two children as infants in 1914 and 1915. Also in 1915, she became one of the founding members of the Mathematical Association of America. She was promoted to professor in 1925, and became head of the mathematics department at Oberlin in 1939. In 1941 she was named Clark Professor of Mathematics. She retired in 1944, but continued to teach mathematics to US Navy students for two more years through Berea College.
She returned to Oberlin in 1947, but was seriously injured in a carjacking incident in 1950. She moved to Belfast, Maine, in 1953, living there with her daughter-in-law, and died there on June 3, 1955.
Sinclair's discriminant surface.
Sinclair's 1903 master's thesis was "Concerning the discriminantal surface for the quintic in the normal form: formula_0".
In it, she uses Tschirnhaus transformations to put quintic functions with real coefficients into the form given in the title, and uses the discriminant, a polynomial of the coefficients formula_1, formula_2, and formula_3, to classify polynomials of this form by their numbers of real roots. Physical models of her discriminant surface were made from her design by a German company, and a stone carving based on it was created in 2003 by sculptor Helaman Ferguson.
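The discriminant Sinclair worked with can be reproduced symbolically; the following is a minimal Python/SymPy sketch, where the variable names are chosen here for convenience and are not taken from her thesis. Setting the resulting polynomial in formula_1, formula_2 and formula_3 to zero defines the discriminant surface.

from sympy import symbols, discriminant, factor

u, x, y, z = symbols('u x y z')
quintic = u**5 + 10*x*u**3 + 5*y*u + z   # the normal form from the thesis title
disc = discriminant(quintic, u)          # a polynomial in x, y and z
print(factor(disc))                      # disc = 0 defines the discriminant surface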
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "u^5+10xu^3+5yu+z=0"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "z"
}
]
| https://en.wikipedia.org/wiki?curid=68891550 |
68897722 | Curium compounds | Curium compounds are compounds containing the element curium (Cm). Curium usually forms compounds in the +3 oxidation state, although compounds with curium in the +4, +5 and +6 oxidation states are also known.
Oxides.
Curium readily reacts with oxygen forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate (Cm2(C2O4)3), nitrate (Cm(NO3)3), or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3:
<chem>4CmO2 ->[\Delta T] 2Cm2O3 + O2</chem>.
Or, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen:
<chem>2CmO2 + H2 -> Cm2O3 + H2O</chem>
Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium.
Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similarly to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; however, new experiments seem to indicate that CmO4 does not exist, and they have cast doubt on the existence of PuO4 as well.
Halides.
The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4) on the other hand is only obtained by reacting curium(III) fluoride with molecular fluorine:
formula_0
A series of ternary fluorides are known of the form A7Cm6F31 (A = alkali metal).
The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further converted into other halides, such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonium salt of the corresponding halide at temperatures of ~400–450°C:
formula_1
Alternatively, one can heat curium oxide to ~600°C with the corresponding acid (such as hydrobromic acid for curium bromide). Vapor phase hydrolysis of curium(III) chloride gives curium oxychloride:
formula_2
Chalcogenides and pnictides.
Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony. They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature.
Organocurium compounds and biological aspects.
Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not been reported experimentally yet.
Formation of the complexes of the type Cm(n-C3H7-BTP)3 (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine), in solutions containing n-C3H7-BTP and Cm3+ ions, has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from lanthanides and other actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to the biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying interactions between the Cm3+ ion and the ligands via changes in the half-life (of the order of ~0.1 ms) and spectrum of the fluorescence.
Curium has no biological significance. There are a few reports on biosorption of Cm3+ by bacteria and archaea, but no evidence for incorporation of curium into them.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{2\\ CmF_3\\ +\\ F_2\\ \\longrightarrow\\ 2\\ CmF_4}"
},
{
"math_id": 1,
"text": "\\mathrm{CmCl_3\\ +\\ 3\\ NH_4I\\ \\longrightarrow \\ CmI_3\\ +\\ 3\\ NH_4Cl}"
},
{
"math_id": 2,
"text": "\\mathrm{CmCl_3\\ +\\ \\ H_2O\\ \\longrightarrow \\ CmOCl\\ +\\ 2\\ HCl}"
}
]
| https://en.wikipedia.org/wiki?curid=68897722 |
68898712 | Bacon–Shor code | The Bacon–Shor code is a subsystem error correcting code. In a subsystem code, information is encoded in a subsystem of a Hilbert space. Subsystem codes lend to simplified error correcting procedures unlike codes which encode information in the subspace of a Hilbert space. This simplicity led to the first claim of fault tolerant circuit demonstration on a quantum computer. It is named after Dave Bacon and Peter Shor.
Given the stabilizer generators of Shor's code, formula_0, four generators can be removed from this set by recognizing gauge symmetries in the code, yielding formula_1. Error correction is now simplified because only 4 stabilizers need to be measured to detect errors instead of 8. A gauge group can be created from the stabilizer generators: formula_2. Since the Bacon–Shor code is defined on a square lattice with the qubits placed on the vertices, laying the qubits out on a grid in a way that corresponds to the gauge group shows that only two-qubit nearest-neighbor measurements are needed to infer the error syndromes, as the sketch below illustrates. The simplicity of deducing the syndromes reduces the overhead for fault-tolerant error correction.
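A minimal Python sketch of the relationship described above, using the binary symplectic representation of Pauli strings (multiplication up to phase is a bitwise XOR of supports); the qubit indexing follows the generators quoted above, and the helper names are chosen here purely for illustration. It checks that products of two-qubit gauge operators reproduce two of the four stabilizer generators, which is why measuring only nearest-neighbor pairs suffices to infer the syndrome.

import numpy as np
from functools import reduce

n = 9  # qubits on a 3x3 lattice, indexed 0..8 row by row

def pauli(kind, qubits):
    # (x-part, z-part) binary vectors of an X- or Z-type Pauli string
    v = np.zeros(n, dtype=int)
    v[list(qubits)] = 1
    return (v, np.zeros(n, dtype=int)) if kind == 'X' else (np.zeros(n, dtype=int), v)

def mul(a, b):
    # product of two Pauli strings, up to a global phase
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

gauge_z = [pauli('Z', q) for q in [(0, 1), (3, 4), (6, 7)]]   # two-qubit Z-type gauge operators
gauge_x = [pauli('X', q) for q in [(0, 6), (1, 7), (2, 8)]]   # two-qubit X-type gauge operators

z_stab = reduce(mul, gauge_z)     # equals the stabilizer Z0 Z1 Z3 Z4 Z6 Z7
x_stab = reduce(mul, gauge_x)     # equals the stabilizer X0 X1 X2 X6 X7 X8

print(np.nonzero(z_stab[1])[0])   # [0 1 3 4 6 7]
print(np.nonzero(x_stab[0])[0])   # [0 1 2 6 7 8]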
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\langle X_{0}X_{1}X_{2}X_{3}X_{4}X_{5}, X_{0}X_{1}X_{2}X_{6}X_{7}X_{8}, Z_{0}Z_{1}, Z_{1}Z_{2}, Z_{3}Z_{4}, Z_{4}Z_{5}, Z_{6}Z_{7}, Z_{7}Z_{8}\\rangle"
},
{
"math_id": 1,
"text": " \\langle X_{0}X_{1}X_{2}X_{3}X_{4}X_{5}, X_{0}X_{1}X_{2}X_{6}X_{7}X_{8}, Z_{0}Z_{1}Z_{3}Z_{4}Z_{6}Z_{7}, Z_{1}Z_{2}Z_{4}Z_{5}Z_{7}Z_{8} \\rangle"
},
{
"math_id": 2,
"text": "\\langle Z_{1}Z_{2}, X_{2}X_{8}, Z_{4}Z_{5}, X_{5}X_{8}, Z_{0}Z_{1}, X_{0}X_{6}, Z_{3}Z_{4}, X_{3}X_{6}, X_{1}X_{7}, X_{4}X_{7}, Z_{6}Z_{7}, Z_{7}Z_{8}\\rangle "
}
]
| https://en.wikipedia.org/wiki?curid=68898712 |
6891551 | Feller process | Stochastic process
In probability theory relating to stochastic processes, a Feller process is a particular kind of Markov process.
Definitions.
Let "X" be a locally compact Hausdorff space with a countable base. Let "C"0("X") denote the space of all real-valued continuous functions on "X" that vanish at infinity, equipped with the sup-norm ||"f" ||. From analysis, we know that "C"0("X") with the sup norm is a Banach space.
A Feller semigroup on "C"0("X") is a collection {"T""t"}"t" ≥ 0 of positive linear maps from "C"0("X") to itself such that ||"T""t""f" || ≤ ||"f" || for all "t" ≥ 0 and "f" in "C"0("X"), the semigroup property "T""t"+"s" = "T""t" ∘ "T""s" holds for all "s", "t" ≥ 0 (with "T"0 the identity map), and ||"T""t""f" − "f" || → 0 as "t" → 0+ for every "f" in "C"0("X") (strong continuity).
Warning: This terminology is not uniform across the literature. In particular, the assumption that "T""t" maps "C"0("X") into itself
is replaced by some authors by the condition that it maps "C"b("X"), the space of bounded continuous functions, into itself.
The reason for this is twofold: first, it allows including processes that enter "from infinity" in finite time. Second, it is more suitable to the treatment of
spaces that are not locally compact and for which the notion of "vanishing at infinity" makes no sense.
A Feller transition function is a probability transition function associated with a Feller semigroup.
A Feller process is a Markov process with a Feller transition function.
Generator.
Feller processes (or transition semigroups) can be described by their infinitesimal generator. A function "f" in "C"0 is said to be in the domain of the generator if the uniform limit
formula_0
exists. The operator "A" is the generator of "Tt", and the space of functions on which it is defined is written as "DA".
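As a concrete illustration (not part of the formal definition), the following Python sketch checks the generator formula numerically for the Feller semigroup of standard Brownian motion, whose generator on suitable smooth functions is "Af" = ½"f"″; the test function, evaluation point and step size are arbitrary choices made here.

import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2)                  # a smooth test function vanishing at infinity

def T(t, x):
    # heat semigroup of Brownian motion: (T_t f)(x) = E[f(x + B_t)]
    s = np.sqrt(t)
    kernel = lambda y: f(y) * np.exp(-(y - x)**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    return quad(kernel, x - 10 * s, x + 10 * s)[0]

x, t = 0.3, 1e-4
print((T(t, x) - f(x)) / t)                  # difference quotient defining (Af)(x)
print(0.5 * (4 * x**2 - 2) * np.exp(-x**2))  # exact value (1/2) f''(x); the two agree closely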
A characterization of operators that can occur as the infinitesimal generator of Feller processes is given by the Hille–Yosida theorem. This uses the resolvent of the Feller semigroup, defined below.
Resolvent.
The resolvent of a Feller process (or semigroup) is a collection of maps ("Rλ")"λ" > 0 from "C"0("X") to itself defined by
formula_1
It can be shown that it satisfies the identity
formula_2
Furthermore, for any fixed "λ" > 0, the image of "Rλ" is equal to the domain "DA" of the generator "A", and
formula_3
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " Af = \\lim_{t\\rightarrow 0} \\frac{T_tf - f}{t},"
},
{
"math_id": 1,
"text": "R_\\lambda f = \\int_0^\\infty e^{-\\lambda t}T_t f\\,dt."
},
{
"math_id": 2,
"text": "R_\\lambda R_\\mu = R_\\mu R_\\lambda = (R_\\mu-R_\\lambda)/(\\lambda-\\mu)."
},
{
"math_id": 3,
"text": "\n\\begin{align}\n& R_\\lambda = (\\lambda - A)^{-1}, \\\\\n& A = \\lambda - R_\\lambda^{-1}.\n\\end{align}\n"
},
{
"math_id": 4,
"text": "(\\Omega, \\mathcal{F}, (\\mathcal{F}_t)_{t\\geq 0})"
},
{
"math_id": 5,
"text": "(\\mathcal{F}_{t^+})_{t\\geq 0}"
},
{
"math_id": 6,
"text": "\\tau"
},
{
"math_id": 7,
"text": "\\{\\tau < \\infty\\}"
},
{
"math_id": 8,
"text": "t\\ge 0"
},
{
"math_id": 9,
"text": "X_{\\tau + t}"
},
{
"math_id": 10,
"text": "\\mathcal{F}_{\\tau^+}"
},
{
"math_id": 11,
"text": "X_\\tau"
}
]
| https://en.wikipedia.org/wiki?curid=6891551 |
689427 | Latent semantic analysis | Technique in natural language processing
Latent semantic analysis (LSA) is a technique in natural language processing, in particular distributional semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur in similar pieces of text (the distributional hypothesis). A matrix containing word counts per document (rows represent unique words and columns represent each document) is constructed from a large piece of text and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of rows while preserving the similarity structure among columns. Documents are then compared by cosine similarity between any two columns. Values close to 1 represent very similar documents while values close to 0 represent very dissimilar documents.
An information retrieval technique using latent semantic structure was patented in 1988 by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).
Overview.
Occurrence matrix.
LSA can use a document-term matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the terms appear in each document, where rare terms are upweighted to reflect their relative importance.
This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.
Rank lowering.
After the construction of the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There could be various reasons for these approximations: the original term-document matrix may be too large for the computing resources, so the low-rank matrix is used as an approximation; the original matrix may be noisy (for example, containing anecdotal instances of terms that should be eliminated), so the approximated matrix is interpreted as a de-noised version; or the original matrix may be overly sparse relative to the "true" term-document matrix, which would list all words related to each document rather than only the words actually appearing in it.
The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:
{(car), (truck), (flower)} → {(1.3452 * car + 0.2828 * truck), (flower)}
This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also partially mitigates the problem with polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.
Derivation.
Let formula_0 be a matrix where element formula_1 describes the occurrence of term formula_2 in document formula_3 (this can be, for example, the frequency). formula_0 will look like this:
formula_4
Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:
formula_5
Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:
formula_6
Now the dot product formula_7 between two term vectors gives the correlation between the terms over the set of documents. The matrix product formula_8 contains all these dot products. Element formula_9 (which is equal to element formula_10) contains the dot product formula_7 (formula_11). Likewise, the matrix formula_12 contains the dot products between all the document vectors, giving their correlation over the terms: formula_13.
Now, from the theory of linear algebra, there exists a decomposition of formula_0 such that formula_14 and formula_15 are orthogonal matrices and formula_16 is a diagonal matrix. This is called a singular value decomposition (SVD):
formula_17
The matrix products giving us the term and document correlations then become
formula_18
Since formula_19 and formula_20 are diagonal we see that formula_14 must contain the eigenvectors of formula_8, while formula_15 must be the eigenvectors of formula_12. Both products have the same non-zero eigenvalues, given by the non-zero entries of formula_19, or equally, by the non-zero entries of formula_21. Now the decomposition looks like this:
formula_22
The values formula_23 are called the singular values, and formula_24 and formula_25 the left and right singular vectors.
Notice the only part of formula_14 that contributes to formula_26 is the formula_27 row.
Let this row vector be called formula_28.
Likewise, the only part of formula_29 that contributes to formula_30 is the formula_31 column, formula_32.
These are "not" the eigenvectors, but "depend" on "all" the eigenvectors.
It turns out that when you select the formula_33 largest singular values, and their corresponding singular vectors from formula_14 and formula_15, you get the rank formula_33 approximation to formula_0 with the smallest error (Frobenius norm). This approximation has a minimal error. But more importantly we can now treat the term and document vectors as a "semantic space". The row "term" vector formula_34 then has formula_33 entries mapping it to a lower-dimensional space. These new dimensions do not relate to any comprehensible concepts. They are a lower-dimensional approximation of the higher-dimensional space. Likewise, the "document" vector formula_35 is an approximation in this lower-dimensional space. We write this approximation as
formula_36
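A minimal NumPy sketch of this construction on a made-up 5-term by 4-document count matrix (the entries are illustrative only): the truncated SVD gives the rank-formula_33 approximation and the low-dimensional term and document vectors.

import numpy as np

# hypothetical term-document count matrix X (rows = terms, columns = documents)
X = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 3., 1., 0.],
              [0., 0., 2., 1.],
              [0., 1., 0., 2.]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2                                   # number of retained dimensions
Uk, Sk, Vtk = U[:, :k], np.diag(s[:k]), Vt[:k, :]
Xk = Uk @ Sk @ Vtk                      # X_k = U_k Sigma_k V_k^T, the best rank-k approximation

term_vectors = Uk                       # rows are the low-dimensional term vectors
doc_vectors = Vtk                       # columns are the low-dimensional document vectors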
You can now do the following: see how related two documents are in the low-dimensional space by comparing the vectors formula_38 and formula_39 (typically by cosine similarity); compare two terms by comparing the vectors formula_41 and formula_42 (note that formula_43 is now a column vector); cluster documents or terms in this space using standard clustering algorithms; or, given a query, view it as a mini document and compare it to your documents in the low-dimensional space.
To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:
formula_44
Note here that the inverse of the diagonal matrix formula_45 may be found by inverting each nonzero value within the matrix.
This means that if you have a query vector formula_37, you must do the translation formula_46 before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:
formula_47
formula_48
formula_49
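Continuing the NumPy sketch above (it reuses Uk, Sk and Vtk from there), a raw query over the same five terms can be translated with formula_46 and then compared to the document vectors by cosine similarity; the query vector itself is an arbitrary example.

q = np.array([1., 0., 1., 0., 0.])                 # hypothetical query term counts
q_hat = np.linalg.inv(Sk) @ Uk.T @ q               # q_hat = Sigma_k^{-1} U_k^T q

d_hat = Vtk                                        # columns are the low-dimensional document vectors
cos = (q_hat @ d_hat) / (np.linalg.norm(q_hat) * np.linalg.norm(d_hat, axis=0))
print(cos)                                         # similarity of the query to each document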
Applications.
The new low-dimensional space typically can be used to: compare documents in the low-dimensional space (data clustering, document classification); find similar documents across languages, after analyzing a base set of translated documents (cross-language information retrieval); find relations between terms (synonymy and polysemy); and, given a query of terms, translate it into the low-dimensional space and find matching documents (information retrieval).
Synonymy and polysemy are fundamental problems in natural language processing: synonymy is the phenomenon where different words describe the same idea, so a query may fail to retrieve a relevant document that does not contain the words that appeared in the query; polysemy is the phenomenon where the same word has multiple meanings, so a search may retrieve irrelevant documents containing the desired word in the wrong sense.
Commercial applications.
LSA has been used to assist in performing prior art searches for patents.
Applications in human memory.
The use of Latent Semantic Analysis has been prevalent in the study of human memory, especially in areas of free recall and memory search. There is a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks using study lists of random common nouns. Researchers also noted that in these situations, the inter-response time between the similar words was much quicker than between dissimilar words. These findings are referred to as the Semantic Proximity Effect.
When participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall.
Another model, termed Word Association Spaces (WAS) is also used in memory studies by collecting free association data from a series of experiments and which includes measures of word relatedness for over 72,000 distinct word pairs.
Implementation.
The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory.
A fast, incremental, low-memory, large-matrix SVD algorithm has been developed. MATLAB and Python implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution.
In recent years progress has been made to reduce the computational complexity of SVD; for instance, by using a parallel ARPACK algorithm to perform parallel eigenvalue decomposition it is possible to speed up the SVD computation cost while providing comparable prediction quality.
Limitations.
Some of LSA's drawbacks include: it can only partially capture polysemy, because each word is represented by a single point in the space; it inherits the limitations of the bag-of-words model, in which word order within a document is ignored; and the resulting dimensions can be difficult to interpret. For instance, in
{(car), (truck), (flower)} ↦ {(1.3452 * car + 0.2828 * truck), (flower)}
the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to
{(car), (bottle), (flower)} ↦ {(1.3452 * car + 0.2828 * bottle), (flower)}
will occur. This leads to results which can be justified on the mathematical level, but have no immediately obvious meaning in natural language. However, the (1.3452 * car + 0.2828 * bottle) component could be justified because both bottles and cars have transparent and opaque parts, are man-made and, with high probability, contain logos/words on their surface; thus, in many ways these two concepts "share semantics." That is, within the language in question, there may not be a readily available word to assign, and explainability becomes an analysis task as opposed to a simple word/class/concept assignment task.
Alternative methods.
Semantic hashing.
In semantic hashing documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. The deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality-sensitive hashing, which is the fastest current method.
Latent semantic indexing.
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.
LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri in the early 1970s, to a contingency table built from word counts in documents.
Called "latent semantic indexing" because of its ability to correlate semantically related terms that are latent in a collection of text, it was first applied to text at Bellcore in the late 1980s. The method, also called latent semantic analysis (LSA), uncovers the underlying latent semantic structure in the usage of words in a body of text and how it can be used to extract the meaning of the text in response to user queries, commonly referred to as concept searches. Queries, or concept searches, against a set of documents that have undergone LSI will return results that are conceptually similar in meaning to the search criteria even if the results don’t share a specific word or words with the search criteria.
Benefits of LSI.
LSI helps overcome synonymy by increasing recall, one of the most problematic constraints of Boolean keyword queries and vector space models. Synonymy is often the cause of mismatches in the vocabulary used by the authors of documents and the users of information retrieval systems. As a result, Boolean or keyword queries often return irrelevant results and miss information that is relevant.
LSI is also used to perform automated document categorization. In fact, several experiments have demonstrated that there are a number of correlations between the way LSI and humans process and categorize text. Document categorization is the assignment of documents to one or more predefined categories based on their similarity to the conceptual content of the categories. LSI uses "example" documents to establish the conceptual basis for each category. During categorization processing, the concepts contained in the documents being categorized are compared to the concepts contained in the example items, and a category (or categories) is assigned to the documents based on the similarities between the concepts they contain and the concepts that are contained in the example documents.
Dynamic clustering based on the conceptual content of documents can also be accomplished using LSI. Clustering is a way to group documents based on their conceptual similarity to each other without using example documents to establish the conceptual basis for each cluster. This is very useful when dealing with an unknown collection of unstructured text.
Because it uses a strictly mathematical approach, LSI is inherently independent of language. This enables LSI to elicit the semantic content of information written in any language without requiring the use of auxiliary structures, such as dictionaries and thesauri. LSI can also perform cross-linguistic concept searching and example-based categorization. For example, queries can be made in one language, such as English, and conceptually similar results will be returned even if they are composed of an entirely different language or of multiple languages.
LSI is not restricted to working only with words. It can also process arbitrary character strings. Any object that can be expressed as text can be represented in an LSI vector space. For example, tests with MEDLINE abstracts have shown that LSI is able to effectively classify genes based on conceptual modeling of the biological information contained in the titles and abstracts of the MEDLINE citations.
LSI automatically adapts to new and changing terminology, and has been shown to be very tolerant of noise (i.e., misspelled words, typographical errors, unreadable characters, etc.). This is especially important for applications using text derived from Optical Character Recognition (OCR) and speech-to-text conversion. LSI also deals effectively with sparse, ambiguous, and contradictory data.
Text does not need to be in sentence form for LSI to be effective. It can work with lists, free-form notes, email, Web-based content, etc. As long as a collection of text contains multiple terms, LSI can be used to identify patterns in the relationships between the important terms and concepts contained in the text.
LSI has proven to be a useful solution to a number of conceptual matching problems. The technique has been shown to capture key relationship information, including causal, goal-oriented, and taxonomic information.
Mathematics of LSI.
LSI uses common linear algebra techniques to learn the conceptual correlations in a collection of text. In general, the process involves constructing a weighted term-document matrix, performing a Singular Value Decomposition on the matrix, and using the matrix to identify the concepts contained in the text.
Term-document matrix.
LSI begins by constructing a term-document matrix, formula_50, to identify the occurrences of the formula_51 unique terms within a collection of formula_52 documents. In a term-document matrix, each term is represented by a row, and each document is represented by a column, with each matrix cell, formula_53, initially representing the number of times the associated term appears in the indicated document, formula_54. This matrix is usually very large and very sparse.
Once a term-document matrix is constructed, local and global weighting functions can be applied to it to condition the data. The weighting functions transform each cell, formula_53 of formula_50, to be the product of a local term weight, formula_55, which describes the relative frequency of a term in a document, and a global weight, formula_56, which describes the relative frequency of the term within the entire collection of documents.
Some common local weighting functions include binary weighting, raw term frequency, log weighting, and augmented normalized term frequency.
Common global weighting functions include binary weighting, normalization, GfIdf, inverse document frequency (Idf), and entropy.
Empirical studies with LSI report that the Log and Entropy weighting functions work well, in practice, with many data sets. In other words, each entry formula_53 of formula_50 is computed as:
formula_57
formula_58
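A minimal NumPy sketch of the log-entropy weighting above, applied to a small made-up term-frequency matrix; it assumes the usual convention that p_ij = tf_ij / gf_i, where gf_i is the total count of term i over the whole collection.

import numpy as np

tf = np.array([[2., 0., 1., 0.],         # hypothetical raw term frequencies
               [1., 1., 0., 0.],         # (rows = terms, columns = documents)
               [0., 3., 1., 0.]])

n_docs = tf.shape[1]
gf = tf.sum(axis=1, keepdims=True)       # global frequency of each term
p = np.divide(tf, gf, out=np.zeros_like(tf), where=gf > 0)

safe_p = np.where(p > 0, p, 1.0)         # avoid log(0); those entries contribute 0 anyway
g = 1 + (p * np.log(safe_p)).sum(axis=1) / np.log(n_docs)   # entropy global weight g_i
A = g[:, None] * np.log(tf + 1)          # a_ij = g_i * log(tf_ij + 1)
print(A)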
Rank-reduced singular value decomposition.
A rank-reduced, singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by approximating the single term-frequency matrix, formula_50, into three other matrices— an m by r term-concept vector matrix formula_59, an r by r singular values matrix formula_60, and a n by r concept-document vector matrix, formula_61, which satisfy the following relations:
formula_62
formula_63
formula_64
In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text where m is the number of unique terms, and n is the number of documents. T is a computed m by r matrix of term vectors where r is the rank of A—a measure of its unique dimensions ≤ min("m,n"). S is a computed r by r diagonal matrix of decreasing singular values, and D is a computed n by r matrix of document vectors.
The SVD is then truncated to reduce the rank by keeping only the largest k ≪ r diagonal entries in the singular value matrix S, where k is typically on the order of 100 to 300 dimensions.
This effectively reduces the term and document vector matrix sizes to m by k and n by k respectively. The SVD operation, along with this reduction, has the effect of preserving the most important semantic information in the text while reducing noise and other undesirable artifacts of the original space of A. This reduced set of matrices is often denoted with a modified formula such as:
A ≈ A_k = T_k S_k D_k^T
Efficient LSI algorithms only compute the first k singular values and term and document vectors as opposed to computing a full SVD and then truncating it.
Note that this rank reduction is essentially the same as doing Principal Component Analysis (PCA) on the matrix A, except that PCA subtracts off the means. PCA loses the sparseness of the A matrix, which can make it infeasible for large lexicons.
Querying and augmenting LSI vector spaces.
The computed T"k" and D"k" matrices define the term and document vector spaces, which with the computed singular values, S"k", embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors.
The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S D^T equation into the equivalent D = A^T T S^−1 equation, a new vector, d, for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S^−1. The new column in A is computed using the originally derived global term weights and applying the same local weighting function to the terms in the query or in the new document.
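Continuing the earlier NumPy sketch (it reuses Uk and Sk from there), folding a new document into the existing space amounts to one matrix-vector product; the new document's term counts are an arbitrary example, and any weighting applied to the original matrix would have to be applied here as well.

d_new = np.array([0., 2., 1., 0., 0.])             # term counts of a new document over the same 5 terms
d_folded = d_new @ Uk @ np.linalg.inv(Sk)          # new row of D_k:  d = a^T T_k S_k^(-1)
print(d_folded)                                    # coordinates of the new document in the k-dim space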
A drawback to computing vectors in this way, when adding new searchable documents, is that terms that were not known during the SVD phase for the original index are ignored. These terms will have no impact on the global weights and learned correlations derived from the original collection of text. However, the computed vectors for the new text are still very relevant for similarity comparisons with all other document vectors.
The process of augmenting the document vector spaces for an LSI index with new documents in this manner is called "folding in". Although the folding-in process does not account for the new semantic content of the new text, adding a substantial number of documents in this way will still provide good results for queries as long as the terms and concepts they contain are well represented within the LSI index to which they are being added. When the terms and concepts of a new set of documents need to be included in an LSI index, either the term-document matrix and the SVD must be recomputed, or an incremental update method (such as the one described in ) is needed.
Additional uses of LSI.
It is generally acknowledged that the ability to work with text on a semantic basis is essential to modern information retrieval systems. As a result, the use of LSI has significantly expanded in recent years as earlier challenges in scalability and performance have been overcome.
LSI is being used in a variety of information retrieval and text processing applications, although its primary application has been for concept searching and automated document categorization. Below are some other ways in which LSI is being used:
LSI is increasingly being used for electronic document discovery (eDiscovery) to help enterprises prepare for litigation. In eDiscovery, the ability to cluster, categorize, and search large collections of unstructured text on a conceptual basis is essential. Concept-based searching using LSI has been applied to the eDiscovery process by leading providers as early as 2003.
Challenges to LSI.
Early challenges to LSI focused on scalability and performance. LSI requires relatively high computational performance and memory in comparison to other information retrieval techniques. However, with the implementation of modern high-speed processors and the availability of inexpensive memory, these considerations have been largely overcome. Real-world applications in which more than 30 million documents were fully processed through the matrix and SVD computations are common. A fully scalable (unlimited number of documents, online training) implementation of LSI is contained in the open source gensim software package.
Another challenge to LSI has been the alleged difficulty in determining the optimal number of dimensions to use for performing the SVD. As a general rule, fewer dimensions allow for broader comparisons of the concepts contained in a collection of text, while a higher number of dimensions enable more specific (or more relevant) comparisons of concepts. The actual number of dimensions that can be used is limited by the number of documents in the collection. Research has demonstrated that around 300 dimensions will usually provide the best results with moderate-sized document collections (hundreds of thousands of documents) and perhaps 400 dimensions for larger document collections (millions of documents). However, recent studies indicate that 50-1000 dimensions are suitable depending on the size and nature of the document collection. Checking the proportion of variance retained, similar to PCA or factor analysis, to determine the optimal dimensionality is not suitable for LSI. Using a synonym test or prediction of missing words are two possible methods to find the correct dimensionality. When LSI topics are used as features in supervised learning methods, one can use prediction error measurements to find the ideal dimensionality.
References.
<templatestyles src="Reflist/styles.css" />
External links.
Implementations.
Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications. | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "(i,j)"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "\n\\begin{matrix} \n & \\textbf{d}_j \\\\\n & \\downarrow \\\\\n\\textbf{t}_i^T \\rightarrow &\n\\begin{bmatrix} \nx_{1,1} & \\dots & x_{1,j} & \\dots & x_{1,n} \\\\\n\\vdots & \\ddots & \\vdots & \\ddots & \\vdots \\\\\nx_{i,1} & \\dots & x_{i,j} & \\dots & x_{i,n} \\\\\n\\vdots & \\ddots & \\vdots & \\ddots & \\vdots \\\\\nx_{m,1} & \\dots & x_{m,j} & \\dots & x_{m,n} \\\\\n\\end{bmatrix}\n\\end{matrix}\n"
},
{
"math_id": 5,
"text": "\\textbf{t}_i^T = \\begin{bmatrix} x_{i,1} & \\dots & x_{i,j} & \\dots & x_{i,n} \\end{bmatrix}"
},
{
"math_id": 6,
"text": "\\textbf{d}_j = \\begin{bmatrix}\nx_{1,j} \\\\\n\\vdots \\\\\nx_{i,j} \\\\\n\\vdots \\\\\nx_{m,j} \\\\\n \\end{bmatrix}"
},
{
"math_id": 7,
"text": "\\textbf{t}_i^T \\textbf{t}_p"
},
{
"math_id": 8,
"text": "X X^T"
},
{
"math_id": 9,
"text": "(i,p)"
},
{
"math_id": 10,
"text": "(p,i)"
},
{
"math_id": 11,
"text": " = \\textbf{t}_p^T \\textbf{t}_i"
},
{
"math_id": 12,
"text": "X^T X"
},
{
"math_id": 13,
"text": "\\textbf{d}_j^T \\textbf{d}_q = \\textbf{d}_q^T \\textbf{d}_j"
},
{
"math_id": 14,
"text": "U"
},
{
"math_id": 15,
"text": "V"
},
{
"math_id": 16,
"text": "\\Sigma"
},
{
"math_id": 17,
"text": "\n\\begin{matrix}\nX = U \\Sigma V^T\n\\end{matrix}\n"
},
{
"math_id": 18,
"text": "\n\\begin{matrix}\nX X^T &=& (U \\Sigma V^T) (U \\Sigma V^T)^T = (U \\Sigma V^T) (V^{T^T} \\Sigma^T U^T) = U \\Sigma V^T V \\Sigma^T U^T = U \\Sigma \\Sigma^T U^T \\\\\nX^T X &=& (U \\Sigma V^T)^T (U \\Sigma V^T) = (V^{T^T} \\Sigma^T U^T) (U \\Sigma V^T) = V \\Sigma^T U^T U \\Sigma V^T = V \\Sigma^T \\Sigma V^T\n\\end{matrix}\n"
},
{
"math_id": 19,
"text": "\\Sigma \\Sigma^T"
},
{
"math_id": 20,
"text": "\\Sigma^T \\Sigma"
},
{
"math_id": 21,
"text": "\\Sigma^T\\Sigma"
},
{
"math_id": 22,
"text": "\n\\begin{matrix} \n & X & & & U & & \\Sigma & & V^T \\\\\n & (\\textbf{d}_j) & & & & & & & (\\hat{\\textbf{d}}_j) \\\\\n & \\downarrow & & & & & & & \\downarrow \\\\\n(\\textbf{t}_i^T) \\rightarrow \n&\n\\begin{bmatrix}\nx_{1,1} & \\dots & x_{1,j} & \\dots & x_{1,n} \\\\\n\\vdots & \\ddots & \\vdots & \\ddots & \\vdots \\\\\nx_{i,1} & \\dots & x_{i,j} & \\dots & x_{i,n} \\\\\n\\vdots & \\ddots & \\vdots & \\ddots & \\vdots \\\\\nx_{m,1} & \\dots & x_{m,j} & \\dots & x_{m,n} \\\\\n\\end{bmatrix}\n&\n=\n&\n(\\hat{\\textbf{t}}_i^T) \\rightarrow\n&\n\\begin{bmatrix} \n\\begin{bmatrix} \\, \\\\ \\, \\\\ \\textbf{u}_1 \\\\ \\, \\\\ \\,\\end{bmatrix} \n\\dots\n\\begin{bmatrix} \\, \\\\ \\, \\\\ \\textbf{u}_l \\\\ \\, \\\\ \\, \\end{bmatrix}\n\\end{bmatrix}\n&\n\\cdot\n&\n\\begin{bmatrix} \n\\sigma_1 & \\dots & 0 \\\\\n\\vdots & \\ddots & \\vdots \\\\\n0 & \\dots & \\sigma_l \\\\\n\\end{bmatrix}\n&\n\\cdot\n&\n\\begin{bmatrix} \n\\begin{bmatrix} & & \\textbf{v}_1 & & \\end{bmatrix} \\\\\n\\vdots \\\\\n\\begin{bmatrix} & & \\textbf{v}_l & & \\end{bmatrix}\n\\end{bmatrix}\n\\end{matrix}\n"
},
{
"math_id": 23,
"text": "\\sigma_1, \\dots, \\sigma_l"
},
{
"math_id": 24,
"text": "u_1, \\dots, u_l"
},
{
"math_id": 25,
"text": "v_1, \\dots, v_l"
},
{
"math_id": 26,
"text": "\\textbf{t}_i"
},
{
"math_id": 27,
"text": "i\\textrm{'th}"
},
{
"math_id": 28,
"text": "\\hat{\\textrm{t}}^T_i"
},
{
"math_id": 29,
"text": "V^T"
},
{
"math_id": 30,
"text": "\\textbf{d}_j"
},
{
"math_id": 31,
"text": "j\\textrm{'th}"
},
{
"math_id": 32,
"text": "\\hat{ \\textrm{d}}_j"
},
{
"math_id": 33,
"text": "k"
},
{
"math_id": 34,
"text": "\\hat{\\textbf{t}}^T_i"
},
{
"math_id": 35,
"text": "\\hat{\\textbf{d}}_j"
},
{
"math_id": 36,
"text": "X_k = U_k \\Sigma_k V_k^T"
},
{
"math_id": 37,
"text": "q"
},
{
"math_id": 38,
"text": "\\Sigma_k \\cdot \\hat{\\textbf{d}}_j "
},
{
"math_id": 39,
"text": "\\Sigma_k \\cdot \\hat{\\textbf{d}}_q "
},
{
"math_id": 40,
"text": "p"
},
{
"math_id": 41,
"text": "\\Sigma_k \\cdot \\hat{\\textbf{t}}_i"
},
{
"math_id": 42,
"text": "\\Sigma_k \\cdot \\hat{\\textbf{t}}_p"
},
{
"math_id": 43,
"text": "\\hat{\\textbf{t}}"
},
{
"math_id": 44,
"text": "\\hat{\\textbf{d}}_j = \\Sigma_k^{-1}U_k^T{\\textbf{d}}_j "
},
{
"math_id": 45,
"text": "\\Sigma_k"
},
{
"math_id": 46,
"text": "\\hat{\\textbf{q}} = \\Sigma_k^{-1} U_k^T \\textbf{q}"
},
{
"math_id": 47,
"text": "\\textbf{t}_i^T = \\hat{\\textbf{t}}_i^T \\Sigma_k V_k^T"
},
{
"math_id": 48,
"text": "\\hat{\\textbf{t}}_i^T = \\textbf{t}_i^T V_k^{-T} \\Sigma_k^{-1} = \\textbf{t}_i^T V_k \\Sigma_k^{-1}"
},
{
"math_id": 49,
"text": "\\hat{\\textbf{t}}_i = \\Sigma_k^{-1} V_k^T \\textbf{t}_i"
},
{
"math_id": 50,
"text": "A"
},
{
"math_id": 51,
"text": "m"
},
{
"math_id": 52,
"text": "n"
},
{
"math_id": 53,
"text": "a_{ij}"
},
{
"math_id": 54,
"text": "\\mathrm{tf_{ij}}"
},
{
"math_id": 55,
"text": "l_{ij}"
},
{
"math_id": 56,
"text": "g_i"
},
{
"math_id": 57,
"text": "g_i = 1 + \\sum_j \\frac{p_{ij} \\log p_{ij}}{\\log n}"
},
{
"math_id": 58,
"text": "a_{ij} = g_i \\ \\log (\\mathrm{tf}_{ij} + 1)"
},
{
"math_id": 59,
"text": "T"
},
{
"math_id": 60,
"text": "S"
},
{
"math_id": 61,
"text": "D"
},
{
"math_id": 62,
"text": "A \\approx TSD^T"
},
{
"math_id": 63,
"text": "T^T T = I_r \\quad D^T D = I_r "
},
{
"math_id": 64,
"text": "S_{1,1} \\geq S_{2,2} \\geq \\ldots \\geq S_{r,r} > 0 \\quad S_{i,j} = 0 \\; \\text{where} \\; i \\neq j"
}
]
| https://en.wikipedia.org/wiki?curid=689427 |
68946 | Born–Oppenheimer approximation | The notion that the motion of atomic nuclei and electrons can be separated
In quantum chemistry and molecular physics, the Born–Oppenheimer (BO) approximation is the best-known mathematical approximation in molecular dynamics. Specifically, it is the assumption that the wave functions of atomic nuclei and electrons in a molecule can be treated separately, based on the fact that the nuclei are much heavier than the electrons. Due to the larger relative mass of a nucleus compared to an electron, the coordinates of the nuclei in a system are approximated as fixed, while the coordinates of the electrons are dynamic. The approach is named after Max Born and his 23-year-old graduate student J. Robert Oppenheimer, the latter of whom proposed it in 1927 during a period of intense ferment in the development of quantum mechanics.
The approximation is widely used in quantum chemistry to speed up the computation of molecular wavefunctions and other properties for large molecules. There are cases where the assumption of separable motion no longer holds, which make the approximation lose validity (it is said to "break down"), but even then the approximation is usually used as a starting point for more refined methods.
In molecular spectroscopy, using the BO approximation means considering molecular energy as a sum of independent terms, e.g.: formula_0 These terms are of different orders of magnitude and the nuclear spin energy is so small that it is often omitted. The electronic energies formula_1 consist of kinetic energies, interelectronic repulsions, internuclear repulsions, and electron–nuclear attractions, which are the terms typically included when computing the electronic structure of molecules.
Example.
The benzene molecule consists of 12 nuclei and 42 electrons. The Schrödinger equation, which must be solved to obtain the energy levels and wavefunction of this molecule, is a partial differential eigenvalue equation in the three-dimensional coordinates of the nuclei and electrons, giving 3 × 12 = 36 nuclear + 3 × 42 = 126 electronic = 162 variables for the wave function. The computational complexity, i.e., the computational power required to solve an eigenvalue equation, increases faster than the square of the number of coordinates.
When applying the BO approximation, two smaller, consecutive steps can be used:
For a given position of the nuclei, the "electronic" Schrödinger equation is solved, while treating the nuclei as stationary (not "coupled" with the dynamics of the electrons). This corresponding eigenvalue problem then consists only of the 126 electronic coordinates. This electronic computation is then repeated for other possible positions of the nuclei, i.e. deformations of the molecule. For benzene, this could be done using a grid of 36 possible nuclear position coordinates. The electronic energies on this grid are then connected to give a potential energy surface for the nuclei. This potential is then used for a second Schrödinger equation containing only the 36 coordinates of the nuclei.
So, taking the most optimistic estimate for the complexity, instead of a large equation requiring at least formula_2 hypothetical calculation steps, a series of smaller calculations requiring formula_3 (with "N" being the number of grid points for the potential) and a very small calculation requiring formula_4 steps can be performed. In practice, the scaling of the problem is larger than formula_5, and more approximations are applied in computational chemistry to further reduce the number of variables and dimensions.
The slope of the potential energy surface can be used to simulate molecular dynamics, using it to express the mean force on the nuclei caused by the electrons and thereby skipping the calculation of the nuclear Schrödinger equation.
Detailed description.
The BO approximation recognizes the large difference between the electron mass and the masses of atomic nuclei, and correspondingly the time scales of their motion. Given the same amount of momentum, the nuclei move much more slowly than the electrons. In mathematical terms, the BO approximation consists of expressing the wavefunction (formula_6) of a molecule as the product of an electronic wavefunction and a nuclear (vibrational, rotational) wavefunction. formula_7. This enables a separation of the Hamiltonian operator into electronic and nuclear terms, where cross-terms between electrons and nuclei are neglected, so that the two smaller and decoupled systems can be solved more efficiently.
In the first step the nuclear kinetic energy is neglected, that is, the corresponding operator "T"n is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian "H"e the nuclear positions are no longer variable, but are constant parameters (they enter the equation "parametrically"). The electron–nucleus interactions are "not" removed, i.e., the electrons still "feel" the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the "clamped-nuclei" approximation.)
The electronic Schrödinger equation
formula_8
where formula_9 is the electronic wavefunction for given positions of nuclei (fixed R), is solved approximately. The quantity r stands for all electronic coordinates and R for all nuclear coordinates. The electronic energy eigenvalue "E"e depends on the chosen positions R of the nuclei. Varying these positions R in small steps and repeatedly solving the electronic Schrödinger equation, one obtains "E"e as a function of R. This is the potential energy surface (PES): formula_10. Because this procedure of recomputing the electronic wave functions as a function of an infinitesimally changing nuclear geometry is reminiscent of the conditions for the adiabatic theorem, this manner of obtaining a PES is often referred to as the "adiabatic approximation" and the PES itself is called an "adiabatic surface".
In the second step of the BO approximation the nuclear kinetic energy "T"n (containing partial derivatives with respect to the components of R) is reintroduced, and the Schrödinger equation for the nuclear motion
formula_11
is solved. This second step of the BO approximation involves separation of vibrational, translational, and rotational motions. This can be achieved by application of the Eckart conditions. The eigenvalue "E" is the total energy of the molecule, including contributions from electrons, nuclear vibrations, and overall rotation and translation of the molecule. In accord with the Hellmann–Feynman theorem, the nuclear potential is taken to be an average over electron configurations of the sum of the electron–nuclear and internuclear electric potentials.
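The two-step procedure can be made concrete with a small numerical sketch in Python (not tied to any particular molecule): a Morse curve stands in for a precomputed one-dimensional PES formula_10, and the nuclear Schrödinger equation on it is solved by finite differences; all parameters are illustrative values in atomic units.

import numpy as np

# Model PES E_e(R): a Morse potential standing in for the computed electronic eigenvalue.
D_e, a, R0, mu = 0.17, 1.0, 1.4, 918.0      # depth, width, minimum position, nuclear reduced mass (a.u.)
R = np.linspace(0.5, 6.0, 800)
h = R[1] - R[0]
V = D_e * (1.0 - np.exp(-a * (R - R0)))**2

# Nuclear Hamiltonian T_n + E_e(R), with T_n = -(1/2 mu) d^2/dR^2 discretized on the grid.
H = (np.diag(1.0 / (mu * h**2) + V)
     + np.diag(-0.5 / (mu * h**2) * np.ones(len(R) - 1), 1)
     + np.diag(-0.5 / (mu * h**2) * np.ones(len(R) - 1), -1))

E_vib = np.linalg.eigvalsh(H)[:4]           # lowest vibrational levels supported by this PES
print(E_vib)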
Derivation.
It will be discussed how the BO approximation may be derived and under which conditions it is applicable. At the same time we will show how the BO approximation may be improved by including vibronic coupling. To that end the second step of the BO approximation is generalized to a set of coupled eigenvalue equations depending on nuclear coordinates only. Off-diagonal elements in these equations are shown to be nuclear kinetic energy terms.
It will be shown that the BO approximation can be trusted whenever the PESs, obtained from the solution of the electronic Schrödinger equation, are well separated:
formula_12.
We start from the "exact" non-relativistic, time-independent molecular Hamiltonian:
formula_13
with
formula_14
The position vectors formula_15 of the electrons and the position vectors formula_16 of the nuclei are with respect to a Cartesian inertial frame. Distances between particles are written as formula_17 (distance between electron "i" and nucleus "A") and similar definitions hold for formula_18 and formula_19.
We assume that the molecule is in a homogeneous (no external force) and isotropic (no external torque) space. The only interactions are the two-body Coulomb interactions among the electrons and nuclei. The Hamiltonian is expressed in atomic units, so that we do not see Planck's constant, the dielectric constant of the vacuum, electronic charge, or electronic mass in this formula. The only constants explicitly entering the formula are "ZA" and "MA" – the atomic number and mass of nucleus "A".
It is useful to introduce the total nuclear momentum and to rewrite the nuclear kinetic energy operator as follows:
formula_20
Suppose we have "K" electronic eigenfunctions formula_21 of formula_22, that is, we have solved
formula_23
The electronic wave functions formula_24 will be taken to be real, which is possible when there are no magnetic or spin interactions. The "parametric dependence" of the functions formula_24 on the nuclear coordinates is indicated by the symbol after the semicolon. This indicates that, although formula_24 is a real-valued function of formula_25, its functional form depends on formula_26.
For example, in the molecular-orbital-linear-combination-of-atomic-orbitals (LCAO-MO) approximation, formula_24 is a molecular orbital (MO) given as a linear expansion of atomic orbitals (AOs). An AO depends visibly on the coordinates of an electron, but the nuclear coordinates are not explicit in the MO. However, upon change of geometry, i.e., change of formula_26, the LCAO coefficients obtain different values and we see corresponding changes in the functional form of the MO formula_24.
We will assume that the parametric dependence is continuous and differentiable, so that it is meaningful to consider
formula_27
which in general will not be zero.
The total wave function formula_28 is expanded in terms of formula_29:
formula_30
with
formula_31
and where the subscript formula_32 indicates that the integration, implied by the bra–ket notation, is over electronic coordinates only. By definition, the matrix with general element
formula_33
is diagonal. After multiplication by the real function formula_34 from the left and integration over the electronic coordinates formula_25 the total Schrödinger equation
formula_35
is turned into a set of "K" coupled eigenvalue equations depending on nuclear coordinates only
formula_36
The column vector formula_37 has elements formula_38. The matrix formula_39 is diagonal, and the nuclear Hamilton matrix is non-diagonal; its off-diagonal ("vibronic coupling") terms formula_40 are further discussed below. The vibronic coupling in this approach is through nuclear kinetic energy terms.
Solution of these coupled equations gives an approximation for energy and wavefunction that goes beyond the Born–Oppenheimer approximation.
Unfortunately, the off-diagonal kinetic energy terms are usually difficult to handle. This is why often a diabatic transformation is applied, which retains part of the nuclear kinetic energy terms on the diagonal, removes the kinetic energy terms from the off-diagonal and creates coupling terms between the adiabatic PESs on the off-diagonal.
If we can neglect the off-diagonal elements the equations will uncouple and simplify drastically. In order to show when this neglect is justified, we suppress the coordinates in the notation and write, by applying the Leibniz rule for differentiation, the matrix elements of formula_41 as
formula_42
The diagonal (formula_43) matrix elements formula_44 of the operator formula_45 vanish because we assume the system to be time-reversal invariant, so that formula_24 can be chosen to be always real. The off-diagonal matrix elements satisfy
formula_46
The matrix element in the numerator is
formula_47
The matrix element of the one-electron operator appearing on the right side is finite.
When the two surfaces come close, formula_48, the nuclear momentum coupling term becomes large and is no longer negligible. This is the case where the BO approximation breaks down, and a coupled set of nuclear motion equations must be considered instead of the one equation appearing in the second step of the BO approximation.
Conversely, if all surfaces are well separated, all off-diagonal terms can be neglected, and hence the whole matrix of formula_49 is effectively zero. The third term on the right side of the expression for the matrix element of "T"n (the "Born–Oppenheimer diagonal correction") can approximately be written as the matrix of formula_49 squared and, accordingly, is then negligible also. Only the first (diagonal) kinetic energy term in this equation survives in the case of well separated surfaces, and a diagonal, uncoupled, set of nuclear motion equations results:
formula_50
which are the normal second step of the BO equations discussed above.
We reiterate that when two or more potential energy surfaces approach each other, or even cross, the Born–Oppenheimer approximation breaks down, and one must fall back on the coupled equations. Usually one then invokes the diabatic approximation.
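As an illustration (not part of the standard derivation), the uncoupled equations of the second step can be solved numerically once a potential energy surface is available from the electronic problem. The sketch below uses a one-dimensional Morse-type surface with arbitrary, roughly hydrogen-like parameters in atomic units and diagonalizes a finite-difference nuclear Hamiltonian; the lowest eigenvalues approximate the vibrational levels supported by that surface, with all coupling between surfaces ignored, exactly as in the uncoupled equations above.
<syntaxhighlight lang="python">
import numpy as np

# Second-step nuclear equation  [T_n + E_k(R)] phi(R) = E phi(R)  on a grid,
# with a made-up one-dimensional Morse-type surface standing in for E_k(R).
# Everything is in atomic units (hartree, bohr, electron masses).
M = 1836.0                   # nuclear (reduced) mass, roughly one proton mass
D, a, R0 = 0.17, 1.0, 1.4    # assumed well depth, width parameter, equilibrium distance

R = np.linspace(0.5, 6.0, 1000)
h = R[1] - R[0]
E_k = D * (1.0 - np.exp(-a * (R - R0))) ** 2          # model potential energy surface

# Three-point finite-difference representation of T_n = -(1/2M) d^2/dR^2
diag = 1.0 / (M * h ** 2) + E_k
off = -1.0 / (2.0 * M * h ** 2) * np.ones(R.size - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)
print("lowest vibrational levels (hartree):", energies[:4])
</syntaxhighlight>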
The Born–Oppenheimer approximation with correct symmetry.
To include the correct symmetry within the Born–Oppenheimer (BO) approximation, a molecular system presented in terms of (mass-dependent) nuclear coordinates formula_51 and formed by the two lowest BO adiabatic potential energy surfaces (PES) formula_52 and formula_53 is considered. To ensure the validity of the BO approximation, the energy "E" of the system is assumed to be low enough so that formula_53 becomes a closed PES in the region of interest, with the exception of sporadic infinitesimal sites surrounding degeneracy points formed by formula_52 and formula_54 (designated as (1, 2) degeneracy points).
The starting point is the nuclear adiabatic BO (matrix) equation written in the form
formula_55
where formula_56 is a column vector containing the unknown nuclear wave functions formula_57, formula_58 is a diagonal matrix containing the corresponding adiabatic potential energy surfaces formula_59, "m" is the reduced mass of the nuclei, "E" is the total energy of the system, formula_60 is the gradient operator with respect to the nuclear coordinates formula_51, and formula_61 is a matrix containing the vectorial non-adiabatic coupling terms (NACT):
formula_62
Here formula_63 are eigenfunctions of the electronic Hamiltonian assumed to form a complete Hilbert space in the given region in configuration space.
To study the scattering process taking place on the two lowest surfaces, one extracts from the above BO equation the two corresponding equations:
formula_64
formula_65
where formula_66 ("k" = 1, 2), and formula_67 is the (vectorial) NACT responsible for the coupling between formula_52 and formula_54.
Next a new function is introduced:
formula_68
and the corresponding rearrangements are made: multiplying the second equation by "i" and combining it with the first equation yields the (complex) equation
formula_69
The last term in this equation can be deleted for the following reasons: at those points where formula_53 is classically closed, formula_70 by virtue of the BO approximation, and at those points where formula_53 is classically allowed (which happens near the (1, 2) degeneracy points), this implies that formula_71, or formula_72. Consequently, the last term is negligibly small at every point in the region of interest, and the equation simplifies to become a decoupled equation:
formula_73
In order for this equation to yield a solution with the correct symmetry, it is suggested to apply a perturbation approach based on an elastic potential formula_74, which coincides with formula_52 at the asymptotic region.
The equation with an elastic potential can be solved, in a straightforward manner, by substitution. Thus, if formula_75 is the solution of this equation, it is presented as
formula_76
where formula_77 is an arbitrary contour, and the exponential function contains the relevant symmetry as created while moving along formula_77.
The function formula_78 can be shown to be a solution of the (unperturbed/elastic) equation
formula_79
Having formula_80, the full solution of the above decoupled equation takes the form
formula_81
where formula_82 satisfies the resulting inhomogeneous equation:
formula_83
In this equation the inhomogeneity ensures the symmetry for the perturbed part of the solution along any contour and therefore for the solution in the required region in configuration space.
The relevance of the present approach was demonstrated while studying a two-arrangement-channel model (containing one inelastic channel and one reactive channel) for which the two adiabatic states were coupled by a Jahn–Teller conical intersection. A nice fit between the symmetry-preserved single-state treatment and the corresponding two-state treatment was obtained. This applies in particular to the reactive state-to-state probabilities (see Table III in Ref. 5a and Table III in Ref. 5b), for which the ordinary BO approximation led to erroneous results, whereas the symmetry-preserving BO approximation produced the accurate results, as they followed from solving the two coupled equations.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
External links.
Resources related to the Born–Oppenheimer approximation: | [
{
"math_id": 0,
"text": "E_\\text{total} = E_\\text{electronic} + E_\\text{vibrational} + E_\\text{rotational} + E_\\text{nuclear spin}."
},
{
"math_id": 1,
"text": "E_\\text{electronic}"
},
{
"math_id": 2,
"text": "162^2 = 26\\,244"
},
{
"math_id": 3,
"text": "126^2 N = 15\\,876 \\,N"
},
{
"math_id": 4,
"text": "36^2 = 1296"
},
{
"math_id": 5,
"text": "n^2"
},
{
"math_id": 6,
"text": "\\Psi_\\mathrm{total}"
},
{
"math_id": 7,
"text": " \\Psi_\\mathrm{total} = \\psi_\\mathrm{electronic} \\psi_\\mathrm{nuclear} "
},
{
"math_id": 8,
"text": " H_\\text{e}(\\mathbf r, \\mathbf R) \\chi(\\mathbf r, \\mathbf R) = E_\\text{e} \\chi(\\mathbf r, \\mathbf R) "
},
{
"math_id": 9,
"text": " \\chi(\\mathbf r, \\mathbf R)\n "
},
{
"math_id": 10,
"text": " E_e(\\mathbf R)\n "
},
{
"math_id": 11,
"text": " [T_\\text{n} + E_\\text{e}(\\mathbf R)] \\phi(\\mathbf R) = E \\phi(\\mathbf R) "
},
{
"math_id": 12,
"text": " E_0(\\mathbf{R}) \\ll E_1(\\mathbf{R}) \\ll E_2(\\mathbf{R}) \\ll \\cdots \\text{ for all }\\mathbf{R}"
},
{
"math_id": 13,
"text": "\nH = H_\\text{e} + T_\\text{n}\n"
},
{
"math_id": 14,
"text": "\nH_\\text{e} =\n-\\sum_{i}{\\frac{1}{2}\\nabla_i^2} -\n\\sum_{i,A}{\\frac{Z_A}{r_{iA}}} + \\sum_{i>j}{\\frac{1}{r_{ij}}} + \\sum_{B > A}{\\frac{Z_A Z_B}{R_{AB}}}\n\\quad\\text{and}\\quad T_\\text{n} = -\\sum_{A}{\\frac{1}{2M_A}\\nabla_A^2}.\n"
},
{
"math_id": 15,
"text": "\\mathbf{r} \\equiv \\{\\mathbf{r}_i\\}"
},
{
"math_id": 16,
"text": "\\mathbf{R} \\equiv \\{\\mathbf{R}_A = (R_{Ax}, R_{Ay}, R_{Az})\\}"
},
{
"math_id": 17,
"text": "r_{iA} \\equiv |\\mathbf{r}_i - \\mathbf{R}_A|"
},
{
"math_id": 18,
"text": "r_{ij}"
},
{
"math_id": 19,
"text": " R_{AB}"
},
{
"math_id": 20,
"text": " T_\\text{n} = \\sum_{A} \\sum_{\\alpha=x,y,z} \\frac{P_{A\\alpha} P_{A\\alpha}}{2M_A}\n\\quad\\text{with}\\quad\nP_{A\\alpha} = -i \\frac{\\partial}{\\partial R_{A\\alpha}}. "
},
{
"math_id": 21,
"text": "\\chi_k (\\mathbf{r}; \\mathbf{R})"
},
{
"math_id": 22,
"text": "H_\\text{e}"
},
{
"math_id": 23,
"text": "\nH_\\text{e} \\chi_k(\\mathbf{r}; \\mathbf{R}) = E_k(\\mathbf{R}) \\chi_k(\\mathbf{r}; \\mathbf{R}) \\quad\\text{for}\\quad k = 1, \\ldots, K.\n"
},
{
"math_id": 24,
"text": "\\chi_k"
},
{
"math_id": 25,
"text": "\\mathbf{r}"
},
{
"math_id": 26,
"text": "\\mathbf{R}"
},
{
"math_id": 27,
"text": "\nP_{A\\alpha}\\chi_k(\\mathbf{r}; \\mathbf{R}) = -i \\frac{\\partial\\chi_k(\\mathbf{r}; \\mathbf{R})}{\\partial R_{A\\alpha}}\n\\quad \\text{for}\\quad\n\\alpha = x,y,z,\n"
},
{
"math_id": 28,
"text": "\\Psi(\\mathbf{R}, \\mathbf{r})"
},
{
"math_id": 29,
"text": "\\chi_k(\\mathbf{r}; \\mathbf{R})"
},
{
"math_id": 30,
"text": "\n\\Psi(\\mathbf{R}, \\mathbf{r}) = \\sum_{k=1}^K \\chi_k(\\mathbf{r}; \\mathbf{R}) \\phi_k(\\mathbf{R}),\n"
},
{
"math_id": 31,
"text": "\n\\langle \\chi_{k'}(\\mathbf{r}; \\mathbf{R}) | \\chi_k(\\mathbf{r}; \\mathbf{R}) \\rangle_{(\\mathbf{r})} = \\delta_{k' k},\n"
},
{
"math_id": 32,
"text": "(\\mathbf{r})"
},
{
"math_id": 33,
"text": " \\big(\\mathbb{H}_\\text{e}(\\mathbf{R})\\big)_{k'k} \\equiv \\langle \\chi_{k'}(\\mathbf{r}; \\mathbf{R})\n | H_\\text{e} |\n \\chi_k(\\mathbf{r}; \\mathbf{R}) \\rangle_{(\\mathbf{r})} = \\delta_{k'k} E_k(\\mathbf{R})\n"
},
{
"math_id": 34,
"text": "\\chi_{k'}(\\mathbf{r}; \\mathbf{R})"
},
{
"math_id": 35,
"text": "\nH \\Psi(\\mathbf{R}, \\mathbf{r}) = E \\Psi(\\mathbf{R}, \\mathbf{r})\n"
},
{
"math_id": 36,
"text": " [\\mathbb{H}_\\text{n}(\\mathbf{R}) + \\mathbb{H}_\\text{e}(\\mathbf{R})] \\boldsymbol{\\phi}(\\mathbf{R}) =\n E \\boldsymbol{\\phi}(\\mathbf{R}).\n"
},
{
"math_id": 37,
"text": "\\boldsymbol{\\phi}(\\mathbf{R})"
},
{
"math_id": 38,
"text": "\\phi_k(\\mathbf{R}),\\ k = 1, \\ldots, K"
},
{
"math_id": 39,
"text": "\\mathbb{H}_\\text{e}(\\mathbf{R})"
},
{
"math_id": 40,
"text": " \\big(\\mathbb{H}_\\text{n}(\\mathbf{R})\\big)_{k'k}"
},
{
"math_id": 41,
"text": "T_\\text{n}"
},
{
"math_id": 42,
"text": "\nT_\\text{n}(\\mathbf{R})_{k'k} \\equiv\n \\big(\\mathbb{H}_\\text{n}(\\mathbf{R})\\big)_{k'k}\n = \\delta_{k'k} T_\\text{n}\n - \\sum_{A,\\alpha}\\frac{1}{M_A} \\langle\\chi_{k'}|P_{A\\alpha}|\\chi_k\\rangle_{(\\mathbf{r})} P_{A\\alpha} + \\langle\\chi_{k'}|T_\\text{n}|\\chi_k\\rangle_{(\\mathbf{r})}.\n"
},
{
"math_id": 43,
"text": "k' = k"
},
{
"math_id": 44,
"text": "\\langle\\chi_{k}|P_{A\\alpha}|\\chi_k\\rangle_{(\\mathbf{r})}"
},
{
"math_id": 45,
"text": "P_{A\\alpha}"
},
{
"math_id": 46,
"text": "\n\\langle\\chi_{k'}|P_{A\\alpha}|\\chi_k\\rangle_{(\\mathbf{r})} =\n \\frac{\\langle\\chi_{k'}| [P_{A\\alpha}, H_\\text{e}] |\\chi_k\\rangle_{(\\mathbf{r})}}\n {E_{k}(\\mathbf{R}) - E_{k'}(\\mathbf{R})}.\n"
},
{
"math_id": 47,
"text": "\n\\langle\\chi_{k'}| [P_{A\\alpha}, H_\\mathrm{e}] |\\chi_k\\rangle_{(\\mathbf{r})} =\niZ_A\\sum_i \\left\\langle\\chi_{k'}\\left|\\frac{(\\mathbf{r}_{iA})_\\alpha}{r_{iA}^3}\\right|\\chi_k\\right\\rangle_{(\\mathbf{r})}\n\\quad\\text{with}\\quad\n\\mathbf{r}_{iA} \\equiv \\mathbf{r}_i - \\mathbf{R}_A.\n"
},
{
"math_id": 48,
"text": "E_{k}(\\mathbf{R}) \\approx E_{k'}(\\mathbf{R})"
},
{
"math_id": 49,
"text": "P^A_\\alpha"
},
{
"math_id": 50,
"text": "\n[T_\\text{n} + E_k(\\mathbf{R})] \\phi_k(\\mathbf{R}) = E \\phi_k(\\mathbf{R})\n\\quad\\text{for}\\quad\nk = 1, \\ldots, K,\n"
},
{
"math_id": 51,
"text": "\\mathbf{q}"
},
{
"math_id": 52,
"text": "u_1(\\mathbf{q})"
},
{
"math_id": 53,
"text": "u_2 (\\mathbf{q})"
},
{
"math_id": 54,
"text": "u_2(\\mathbf{q})"
},
{
"math_id": 55,
"text": "-\\frac{\\hbar^2}{2m} (\\nabla + \\tau)^2 \\Psi + (\\mathbf{u} - E)\\Psi = 0, "
},
{
"math_id": 56,
"text": "\\Psi(\\mathbf{q}) "
},
{
"math_id": 57,
"text": "\\psi_k(\\mathbf{q})"
},
{
"math_id": 58,
"text": "\\mathbf{u}(\\mathbf{q})"
},
{
"math_id": 59,
"text": "u_k(\\mathbf{q})"
},
{
"math_id": 60,
"text": "\\nabla"
},
{
"math_id": 61,
"text": "\\mathbf{\\tau}(\\mathbf{q})"
},
{
"math_id": 62,
"text": "\\mathbf{\\tau}_{jk} = \\langle \\zeta_j | \\nabla\\zeta_k \\rangle."
},
{
"math_id": 63,
"text": "|\\zeta_n\\rangle"
},
{
"math_id": 64,
"text": "-\\frac{\\hbar^2}{2m} \\nabla^2\\psi_1 + (\\tilde{u}_1 - E)\\psi_1 - \\frac{\\hbar^2}{2m} [2\\mathbf{\\tau}_{12}\\nabla + \\nabla\\mathbf{\\tau}_{12}]\\psi_2 = 0,"
},
{
"math_id": 65,
"text": "-\\frac{\\hbar^2}{2m} \\nabla^2\\psi_2 + (\\tilde{u}_2 - E)\\psi_2 + \\frac{\\hbar^2}{2m} [2\\mathbf{\\tau}_{12}\\nabla + \\nabla\\mathbf{\\tau}_{12}]\\psi_1 = 0,"
},
{
"math_id": 66,
"text": "\\tilde{u}_k(\\mathbf{q}) = u_k(\\mathbf{q}) + (\\hbar^{2}/2m)\\tau_{12}^2"
},
{
"math_id": 67,
"text": "\\mathbf\\tau_{12} = \\mathbf\\tau_{12}(\\mathbf{q})"
},
{
"math_id": 68,
"text": " \\chi = \\psi_1 + i\\psi_2, "
},
{
"math_id": 69,
"text": "-\\frac{\\hbar^2}{2m} \\nabla^{2}\\chi + (\\tilde{u}_1 - E)\\chi + i\\frac{\\hbar^2}{2m}[2\\mathbf{\\tau}_{12}\\nabla + \\nabla\\mathbf{\\tau}_{12}]\\chi + i(u_1 - u_2)\\psi_2 = 0."
},
{
"math_id": 70,
"text": "\\psi_{2}(\\mathbf{q}) \\sim 0"
},
{
"math_id": 71,
"text": "u_1(\\mathbf{q}) \\sim u_2(\\mathbf{q})"
},
{
"math_id": 72,
"text": "u_1(\\mathbf{q}) - u_2(\\mathbf{q}) \\sim 0"
},
{
"math_id": 73,
"text": "-\\frac{\\hbar^2}{2m} \\nabla^{2}\\chi + (\\tilde{u}_1 - E)\\chi + i\\frac{\\hbar^2}{2m}[2\\mathbf{\\tau}_{12}\\nabla + \\nabla\\mathbf{\\tau}_{12}]\\chi = 0."
},
{
"math_id": 74,
"text": "u_0(\\mathbf{q})"
},
{
"math_id": 75,
"text": "\\chi_0"
},
{
"math_id": 76,
"text": "\\chi_0(\\mathbf{q}|\\Gamma) = \\xi_{0}(\\mathbf{q}) \\exp\\left[-i \\int_\\Gamma d\\mathbf{q}' \\cdot \\mathbf{\\tau}(\\mathbf{q}'|\\Gamma)\\right],"
},
{
"math_id": 77,
"text": "\\Gamma"
},
{
"math_id": 78,
"text": "\\xi_0(\\mathbf{q})"
},
{
"math_id": 79,
"text": "-\\frac{\\hbar^2}{2m} \\nabla^{2}\\xi_0 + (u_0 - E) \\xi_0 = 0."
},
{
"math_id": 80,
"text": "\\chi_0(\\mathbf{q}|\\Gamma)"
},
{
"math_id": 81,
"text": "\\chi(\\mathbf{q}|\\Gamma) = \\chi_0(\\mathbf{q}|\\Gamma) + \\eta(\\mathbf{q}|\\Gamma),"
},
{
"math_id": 82,
"text": "\\eta(\\mathbf{q}|\\Gamma)"
},
{
"math_id": 83,
"text": "-\\frac{\\hbar^2}{2m} \\nabla^{2}\\eta + (\\tilde{u}_1 - E)\\eta + i\\frac{\\hbar^2}{2m}[2\\mathbf{\\tau}_{12}\\nabla + \\nabla\\mathbf{\\tau}_{12}]\\eta = (u_1 - u_0)\\chi_0."
}
]
| https://en.wikipedia.org/wiki?curid=68946 |
68953726 | Crassostrea rhizophorae | Species of bivalve
<templatestyles src="Template:Taxobox/core/styles.css" />
Crassostrea rhizophorae, also known as the mangrove cupped oyster, is a species of bivalve in the family Ostreidae. "C. rhizophorae" is one of the predominant oyster species in the South Atlantic, specifically in Central and South America. It is often found in the vast mangrove ecosystem along the coast of Brazil.
Environment.
"C. rhizophorae" is typically found in the intertidal or shallow subtidal regions of tropical mangroves and other estuarine regions. The optimal vertical range for "C. rhizophorae" is between 1.0 m and 1.5 m above the 0.0 m level of spring tides. At greater depths, the substrate is too soft for the oysters to settle and the pressure from predators like crabs and fish is too extreme. Above 1.5 m, "C. rhizophorae" will not settle due to extensive exposure time. Due to the narrow vertical band that "C. rhizophorae" inhabits, species survive best when securely fixed on rocks, hard substrates, and on mangrove roots, such as the aerial roots of the red mangrove ("Rhizophora mangle"). Like most oysters, "C. rhizophorae" tend for form clusters of individuals which may develop into oyster reefs.
The optimal salinity range for "C. rhizophorae" is approximately 7.2 to 28.8‰, however, it can tolerate significant salinity fluctuations of short duration, which are experienced in Central and South America during the rainy seasons. "C. rhizophorae" thrives best in temperatures below . While it is able to withstand fluctuations, very few larvae are found at temperatures exceeding 30 °C.
Characteristics.
"C. rhizophorae" is often called the Caribbean or mangrove oyster due to the environment that it is found in. This species of oysters is an oviparous species, which indicates that they are animals that reproduce by laying their eggs without much embryonic development within the mother. "C. rhizophorae", and more generally the genus, "Crassostrea", are cup-like, or cupped, oysters, meaning that the shell itself has a cup shape to it.
"C. rhizophorae" has a promyal chamber and small ostia. The oyster also has a thin, foliaceous, deeply cupped right valve and the upper left valve is small and flat, which enables it to fit into the lower one. The beak is twisted dorsally, and the muscle scar is near the dorsal margin of the shell. The muscle scar is often unpigmented.
Adult "C. rhizophorae" can reach up to 10 cm in height. However, in their natural environment, their growth is stunted, leading to a maximum height of 5 cm.
Diet.
"C. rhizophorae" tend to consume any microscopic particles that are carried in suspension in the water, regardless of their nutritional value. They consume a great range of organisms belonging to the following groups: Cyanobacteria, Xanthophyta, Bacillariophyta, Dinophyta, Euglenophyta, Chlorophyta, Protozoa, Rotifera, Annelida, Arthropoda, and Mollusca. "C. rhizophorae" have also been shown to consume fragments of Phytoplankton, Zooplankton, and Phanerogamae and grains of sediment. A study found that Bacillariophyta was the dominant group of consumption by "C. rhizophorae" at 63% of the food content in the stomach, followed by Chlorophyta at 12% of the food content in the stomach. This study also looked at the percentage of food items in the stomach contents. They categorized certain amounts of food as "full", "almost full", "almost empty", and "empty". 57% of the individuals were categorized as being in the full stage, which suggests the existence of good availability of food for "C. rhizophorae" in the environment that they are in.
Reproduction and growth.
Reproduction.
"C. rhizophorae" have primary bisexual gonads that form associations of cells in the connective tissue anterior to the heart by the time they reach 0.7 cm or 45 days after setting. The gonada has cells for both sexes but this is shown the most with spermatogenesis cells in 90% of animals that are sexually mature before reaching 2.0 cm or 120 days after setting. In older individuals ranging from 6 to 18 months and 4 to 6 cm in size, 83.5% were females so most change happened between 2 and 4 cm in size, yet only 0.5% are hermaphrodictic. The active gonad goes through prematuration and maturation stage before spawning and then after partial spawning, the gonad enters a recuperation stage. During this stage, the gametogenesis starts a new maturation that leads to the complete cytolysis of the gamete and obliteration of the follicles. Most adult oysters ranging from 4 to 6 cm in length become mature without an undifferentiated stage after the spawning or resting stage.
Due to constantly high water temperatures, gametogenesis occurs twice during the year, peaking in March and October. These peaks coincide with drastic changes in salinity during rainy periods, although intense rains of around 150 mm per week depress spawning. In the laboratory, "C. rhizophorae" embryonic development can be completed in 24 hours at densities of formula_0 to formula_1 ovocytes per liter when fertilized at concentrations of 500 to 5000 spermatozoans per ovocyte. From this it was determined that the best range of salinities for embryonic development is 25 to 37‰ and that the best temperatures are around 25 °C but below 30 °C.
Growth.
"C. rhizophorae" can grow in a variety of locations, but grow best in the roots of mangroves. "C. rhizophorae" tend to grow to 4 to 7 cm in length, and it can take up to 18 months for most members of the species to reach their full size. The maximum size of "C. rhizophorae" is approximately 7 to 8 cm. Adult "C. rhizophorae" can reach up to 10 cm in height. However, in their natural environment, their growth is stunted, leading to a maximum height of 5 cm.
"C. rhizophorae" begin their life as floating larvae, which soon settle onto a solid substrate. Once settled onto their substrate, the growing oysters are known as spat. Spat grow 1 cm a month for the first 3 months and then growth rates slow to an approximate growth of 0.78 cm a month. After reaching 6.5 cm, growth rates drop considerably. "C. rhizophorae" grow best during the rainy season due to a higher influx of nutrients into estuarine areas.
The size class between 4.1 and 6.0 cm is of most interest for fishers, as oysters of this size tend to yield the most meat. The best time to harvest "C. rhizophorae" is 2 years after spawning.
Fishing industry.
"C. rhizophorae" is a vital fishery resource for the Caribbean and South Atlantic. In the early 2000s, as many as 5,600 metric tons of "C. rhizophorae" were harvested in the Caribbean and South Atlantic. Due to high consumer demands and declines in "C. rhizophorae" populations due to pollution, "C. rhizophorae" is now most commonly farmed using artificial reefs known as farming platforms. These platforms are typically made of branches of mangrove trees suspended from racks in the inter- and sub-tidal regions. These allow for farmers to maintain populations of "C. rhizophorae" that meet consumer demands while preventing overfishing.
The artificial reefs of "C. rhizophorae" have also acted as nursery environments for many marine and estuarine species in the Caribbean. These artificial reefs also provide a reproductive substrate for fishes and protect them from predation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "10^4"
},
{
"math_id": 1,
"text": "4 \\times10^4"
}
]
| https://en.wikipedia.org/wiki?curid=68953726 |
68954344 | Freightos Baltic Index | The Freightos Baltic Index (FBX) (also sometimes known as the Freightos Baltic Daily Index or Freightos Baltic Global Container Index) is a daily freight container index issued by the Baltic Exchange and Freightos. The index measures global container freight rates by calculating spot rates for 40-foot containers on 12 global tradelanes. It is reported around the world as a proxy for shipping stocks, and is a general shipping market bellwether. The FBX is currently one of the most widely used freight rate indices.
History.
The Freightos International Freight Index was first launched as a weekly freight index in early 2017. The Freightos Baltic Index has been in wide use since 2018. It is currently the only freight rate index that is issued daily, and is also the only IOSCO-compliant freight index that is currently regulated by the EU (in particular, the European Securities and Markets Authority). The index is calculated from real-time anonymized data. As of February 2020, 50 to 70 million price points were collected by the FBX each month. The FBX was converted into a daily index in February 2021.
Calculation.
12 regional tradelane indices are first calculated. The weighted average of these 12 indices is then used to obtain the FBX Global Container Index (FBX). The formulas given below are in use as of October 2020. Note that the FBX calculating formula is updated periodically, with the last update issued in mid-2021.
Tradelane index.
The FBX tradelanes are calculated using the median port to port all-in freight rate. This median price (freight rate) is calculated for standard 40-foot containers that are not refrigerated.
The tradelane index is calculated as follows.
formula_0
Where:
For reference, a list of the 12 tradelanes used in the index calculations is also provided in the section below.
FBX Global Container Index.
The FBX Global Container Index (FBX) is a weighted average of 12 regional tradelane indices. It is calculated as follows.
formula_6
Where formula_7 are the 12 regional tradelane indices and formula_8 are the corresponding weights assigned to each tradelane.
The FBX prices used are rolling short-term Freight All Kind (FAK) spot tariffs and related surcharges between carriers, freight forwarders, and high-volume shippers.
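As a rough illustration of this two-step aggregation, the sketch below treats the formula_2 terms as per-provider price series on a single lane and the formula_3 terms as volume weights; all figures are invented, since the actual inputs to the FBX are anonymized and proprietary, and only three hypothetical lanes stand in for the twelve.
<syntaxhighlight lang="python">
from statistics import median

# Step 1 - one tradelane index: median all-in rate per price source on the lane,
# weighted by that source's volume (all numbers invented for illustration).
source_rates = {                       # USD per 40-foot container
    "source 1": [1450, 1520, 1490],
    "source 2": [1600, 1580],
    "source 3": [1400, 1430, 1445],
}
source_volume = {"source 1": 120, "source 2": 60, "source 3": 200}

lane_index = (sum(median(r) * source_volume[s] for s, r in source_rates.items())
              / sum(source_volume.values()))

# Step 2 - the global index: weighted average of the tradelane indices.
tradelane_index = {"lane A": lane_index, "lane B": 520.0, "lane C": 1980.0}
tradelane_weight = {"lane A": 0.55, "lane B": 0.15, "lane C": 0.30}

fbx_global = (sum(tradelane_index[l] * tradelane_weight[l] for l in tradelane_index)
              / sum(tradelane_weight.values()))
print(f"tradelane index: {lane_index:.2f}, global index: {fbx_global:.2f}")
</syntaxhighlight>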
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\sum_{i=1}^{n} ({median}(L, C_i) \\times V_i)}{\\sum_{i=1}^{n} (V_i)}"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "C_1, C_2,\\cdots, C_n"
},
{
"math_id": 3,
"text": "V_1, V_2,\\cdots, V_n"
},
{
"math_id": 4,
"text": "{median} (L, C_i,\\cdots,C_j)"
},
{
"math_id": 5,
"text": "C_i,\\cdots,C_j"
},
{
"math_id": 6,
"text": "\\frac{\\sum_{i=1}^{12} (I_i \\times V_i)}{\\sum_{i=1}^{12} (V_i)}"
},
{
"math_id": 7,
"text": "I_1, I_2,\\cdots, I_{12}"
},
{
"math_id": 8,
"text": "V_1, V_2,\\cdots, V_{12}"
}
]
| https://en.wikipedia.org/wiki?curid=68954344 |
689743 | Cupola (geometry) | Solid made by joining an n- and 2n-gon with triangles and squares
In geometry, a cupola is a solid formed by joining two polygons, one (the base) with twice as many edges as the other, by an alternating band of isosceles triangles and rectangles. If the triangles are equilateral and the rectangles are squares, while the base and its opposite face are regular polygons, the triangular, square, and pentagonal cupolae all count among the Johnson solids, and can be formed by taking sections of the cuboctahedron, rhombicuboctahedron, and rhombicosidodecahedron, respectively.
A cupola can be seen as a prism where one of the polygons has been collapsed in half by merging alternate vertices.
A cupola can be given an extended Schläfli symbol {"n"} || t{"n"}, representing a regular polygon {"n"} joined by a parallel of its truncation, t{"n"} or {2"n"}.
Cupolae are a subclass of the prismatoids.
Its dual contains a shape that is sort of a weld between half of an n-sided trapezohedron and a 2"n"-sided pyramid.
Examples.
The above-mentioned three polyhedra are the only non-trivial convex cupolae with regular faces: The "hexagonal cupola" is a plane figure, and the triangular prism might be considered a "cupola" of degree 2 (the cupola of a line segment and a square). However, cupolae of higher-degree polygons may be constructed with irregular triangular and rectangular faces.
Coordinates of the vertices.
The definition of the cupola does not require the base (or the side opposite the base, which can be called the top) to be a regular polygon, but it is convenient to consider the case where the cupola has its maximal symmetry, C"n"v. In that case, the top is a regular n-gon, while the base is either a regular 2"n"-gon or a 2"n"-gon which has two different side lengths alternating and the same angles as a regular 2"n"-gon. It is convenient to fix the coordinate system so that the base lies in the xy-plane, with the top in a plane parallel to the xy-plane. The z-axis is the n-fold axis, and the mirror planes pass through the z-axis and bisect the sides of the base. They also either bisect the sides or the angles of the top polygon, or both. (If n is even, half of the mirror planes bisect the sides of the top polygon and half bisect the angles, while if n is odd, each mirror plane bisects one side and one angle of the top polygon.) The vertices of the base can be designated "V"1 through "V"2"n", while the vertices of the top polygon can be designated "V"2"n"+1 through "V"3"n". With these conventions, the coordinates of the vertices can be written as:
formula_0
for "j" = 1, 2, ..., "n".
Since the polygons formed by each base edge (such as "V"1"V"2) and the corresponding top edge are rectangles, this puts a constraint on the values of "r"b, "r"t, and α. The distance formula_1 is equal to
formula_2
while the distance formula_3 is equal to
formula_4
These are to be equal, and if this common edge is denoted by s,
formula_5
These values are to be inserted into the expressions for the coordinates of the vertices given earlier.
Star-cupolae.
Star cupolae exist for any top base {"n"/"d"} where 6/5 < "n"/"d" < 6 and d is odd. At these limits, the cupolae collapse into plane figures. Beyond these limits, the triangles and squares can no longer span the distance between the two base polygons (it can still be made with non-equilateral isosceles triangles and non-square rectangles). If d is even, the bottom base {2"n"/"d"} becomes degenerate; then we can form a "cupoloid" or "semicupola" by withdrawing this degenerate face and letting the triangles and squares connect to each other here (through single edges) rather than to the late bottom base (through its double edges). In particular, the tetrahemihexahedron may be seen as a {3/2}-cupoloid.
The cupolae are all orientable, while the cupoloids are all non-orientable. For a cupoloid, if "n"/"d" > 2, then the triangles and squares do not cover the entire (single) base, and a small membrane is placed in this base {"n"/"d"}-gon that simply covers empty space. Hence the {5/2}- and {7/2}-cupoloids pictured above have membranes (not filled in), while the {5/4}- and {7/4}-cupoloids pictured above do not.
The height h of an {"n"/"d"}-cupola or cupoloid is given by the formula:
formula_6
In particular, "h" = 0 at the limits "n"/"d" = 6 and "n"/"d" = 6/5, and h is maximized at "n"/"d" = 2 (in the digonal cupola: the triangular prism, where the triangles are upright).
In the images above, the star cupolae have been given a consistent colour scheme to aid identifying their faces: the base {"n"/"d"}-gon is red, the base {2"n"/"d"}-gon is yellow, the squares are blue, and the triangles are green. The cupoloids have the base {"n"/"d"}-gon red, the squares yellow, and the triangles blue, as the base {2"n"/"d"}-gon has been withdrawn.
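As a quick numerical check of the height formula above (an illustration added here, taking the edge length as the unit), the following sketch evaluates "h" for a few values of {"n"/"d"} and at the limiting cases:
<syntaxhighlight lang="python">
import math

# Height of an {n/d}-cupola with unit edge length; real-valued for 6/5 < n/d < 6.
def cupola_height(n, d=1):
    value = 1.0 - 1.0 / (4.0 * math.sin(math.pi * d / n) ** 2)
    return math.sqrt(max(value, 0.0))   # clamp tiny negative round-off at the limits

for n, d in [(3, 1), (4, 1), (5, 1), (5, 2), (7, 2)]:
    print(f"{{{n}/{d}}}-cupola: h = {cupola_height(n, d):.6f}")

# The cupola collapses into a plane figure at the limits n/d = 6 and n/d = 6/5,
# and is tallest for n/d = 2 (the triangular prism), where h = sqrt(3)/2.
print(cupola_height(6, 1), cupola_height(6, 5), cupola_height(2, 1))
</syntaxhighlight>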
Hypercupolae.
The hypercupolae or polyhedral cupolae are a family of convex nonuniform polychora (here four-dimensional figures), analogous to the cupolas. Each one's bases are a Platonic solid and its expansion.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{array}{rllcc}\n V_{2j-1} :& \\biggl( r_b \\cos\\left(\\frac{2\\pi(j-1)}{n} + \\alpha\\right), & r_b \\sin\\left(\\frac{2\\pi(j-1)}{n} + \\alpha\\right), & 0 \\biggr) \\\\[2pt]\n V_{2j} :& \\biggl( r_b \\cos\\left(\\frac{2\\pi j}{n} - \\alpha\\right), & r_b \\sin\\left(\\frac{2\\pi j}{n} - \\alpha\\right), & 0 \\biggr) \\\\[2pt]\n V_{2n+j} :& \\biggl( r_t \\cos\\frac{\\pi j}{n}, & r_t \\sin\\frac{\\pi j}{n}, & h \\biggr)\n\\end{array}"
},
{
"math_id": 1,
"text": "\\bigl|V_1 V_2 \\bigr|"
},
{
"math_id": 2,
"text": "\\begin{align}\n & r_b \\sqrt{ \\left[\\cos\\left(\\tfrac{2\\pi}{n} - \\alpha\\right) - \\cos \\alpha\\right]^2 + \\left[\\sin\\left(\\tfrac{2\\pi}{n} - \\alpha\\right) - \\sin\\alpha\\right]^2} \\\\[5pt]\n =\\ & r_b \\sqrt{ \\left[\\cos^2 \\left(\\tfrac{2\\pi}{n} - \\alpha\\right) - 2\\cos\\left(\\tfrac{2pi}{n} - \\alpha\\right)\\cos\\alpha + \\cos^2 \\alpha \\right] + \\left[\\sin^2 \\left(\\tfrac{2\\pi}{n} - \\alpha\\right) - 2\\sin\\left(\\tfrac{2\\pi}{n} - \\alpha\\right) \\sin\\alpha + \\sin^2 \\alpha \\right] } \\\\[5pt]\n =\\ & r_b \\sqrt{ 2\\left[1 - \\cos\\left(\\tfrac{2\\pi}{n} - \\alpha\\right) \\cos\\alpha - \\sin\\left(\\tfrac{2\\pi}{n} - \\alpha\\right)\\sin\\alpha \\right]} \\\\[5pt]\n =\\ & r_b \\sqrt{ 2\\left[1 - \\cos\\left(\\tfrac{2\\pi}{n} - 2\\alpha\\right)\\right]}\n\\end{align}"
},
{
"math_id": 3,
"text": "\\bigl| V_{2n+1}V_{2n+2} \\bigr|"
},
{
"math_id": 4,
"text": "\\begin{align}\n & r_t \\sqrt{ \\left[ \\cos\\tfrac{\\pi}{n} - 1 \\right]^2 + \\sin^2 \\tfrac{\\pi}{n} } \\\\[5pt]\n =\\ & r_t \\sqrt{ \\left[ \\cos^2\\tfrac{\\pi}{n} - 2\\cos\\tfrac{\\pi}{n} + 1 \\right] + \\sin^2\\tfrac{\\pi}{n} } \\\\[5pt]\n =\\ & r_t \\sqrt{2 \\left[1 - \\cos\\tfrac{\\pi}{n} \\right]}\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align}\n r_b &= \\frac{s}{ \\sqrt{2\\left[1 - \\cos\\left(\\tfrac{2\\pi}{n} - 2\\alpha \\right) \\right] }} \\\\[4pt]\n r_t &= \\frac{s}{ \\sqrt{2\\left[1 - \\cos\\tfrac{\\pi}{n} \\right] }}\n\\end{align}"
},
{
"math_id": 6,
"text": "h = \\sqrt{1 - \\frac{1}{4 \\sin^{2} \\left( \\frac{\\pi d}{n} \\right)}}."
}
]
| https://en.wikipedia.org/wiki?curid=689743 |
689767 | Spherical pendulum | In physics, a spherical pendulum is a higher dimensional analogue of the pendulum. It consists of a mass m moving without friction on the surface of a sphere. The only forces acting on the mass are the reaction from the sphere and gravity.
Owing to the spherical geometry of the problem, spherical coordinates are used to describe the position of the mass in terms of formula_0, where r is fixed such that formula_1.
Lagrangian mechanics.
Routinely, in order to write down the kinetic formula_2 and potential formula_3 parts of the Lagrangian formula_4 in arbitrary generalized coordinates the position of the mass is expressed along Cartesian axes. Here, following the conventions shown in the diagram,
formula_5
formula_6
formula_7.
Next, time derivatives of these coordinates are taken, to obtain velocities along the axes
formula_8
formula_9
formula_10.
Thus,
formula_11
and
formula_12
formula_13
The Lagrangian, with constant parts removed, is
formula_14
The Euler–Lagrange equation involving the polar angle formula_15
formula_16
gives
formula_17
and
formula_18
When formula_19 the equation reduces to the differential equation for the motion of a simple gravity pendulum.
Similarly, the Euler–Lagrange equation involving the azimuth formula_20,
formula_21
gives
formula_22.
The last equation shows that angular momentum around the vertical axis, formula_23 is conserved. The factor formula_24 will play a role in the Hamiltonian formulation below.
The second order differential equation determining the evolution of formula_20 is thus
formula_25.
The azimuth formula_20, being absent from the Lagrangian, is a cyclic coordinate, which implies that its conjugate momentum is a constant of motion.
The conical pendulum refers to the special solutions where formula_26 and formula_27 is a constant not depending on time.
Hamiltonian mechanics.
The Hamiltonian is
formula_28
where conjugate momenta are
formula_29
and
formula_30.
In terms of coordinates and momenta it reads
formula_31
Hamilton's equations will give time evolution of coordinates and momenta in four first-order differential equations
formula_32
formula_33
formula_34
formula_35
Momentum formula_36 is a constant of motion. That is a consequence of the rotational symmetry of the system around the vertical axis.
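Hamilton's four equations above lend themselves to direct numerical integration. The following sketch is an illustration only; the mass, length, time step, and initial conditions are arbitrary choices. It advances the equations with a standard fourth-order Runge–Kutta step, and the conjugate momentum formula_36 stays constant along the computed trajectory, as expected.
<syntaxhighlight lang="python">
import math

m, l, g = 1.0, 1.0, 9.81    # arbitrary parameters

def rhs(state):
    theta, phi, p_theta, p_phi = state
    s, c = math.sin(theta), math.cos(theta)
    return (p_theta / (m * l * l),                                   # d(theta)/dt
            p_phi / (m * l * l * s * s),                             # d(phi)/dt
            p_phi ** 2 * c / (m * l * l * s ** 3) - m * g * l * s,   # d(P_theta)/dt
            0.0)                                                     # d(P_phi)/dt

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(state, k1)))
    k3 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(state, k2)))
    k4 = rhs(tuple(x + dt * k for x, k in zip(state, k3)))
    return tuple(x + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 0.0, 0.0, 0.8)   # theta, phi, P_theta, P_phi (started away from the poles)
dt = 1.0e-3
for _ in range(5000):
    state = rk4(state, dt)
print("state after 5 s:", tuple(round(x, 4) for x in state))
</syntaxhighlight>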
Trajectory.
The trajectory of the mass on the sphere can be obtained from the expression for the total energy
formula_37
by noting that the component of angular momentum about the vertical axis, formula_38, is a constant of motion, independent of time. This is true because neither gravity nor the reaction from the sphere exerts a torque about the vertical axis, so this component of angular momentum is unaffected.
Hence
formula_39
formula_40
which leads to an elliptic integral of the first kind for formula_15
formula_41
and an elliptic integral of the third kind for formula_20
formula_42.
The angle formula_15 lies between two circles of latitude, where
formula_43. | [
{
"math_id": 0,
"text": "(r, \\theta, \\phi)"
},
{
"math_id": 1,
"text": "r = l"
},
{
"math_id": 2,
"text": "T=\\tfrac{1}{2}mv^2"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "L=T-V"
},
{
"math_id": 5,
"text": "x=l\\sin\\theta\\cos\\phi"
},
{
"math_id": 6,
"text": "y=l\\sin\\theta\\sin\\phi"
},
{
"math_id": 7,
"text": "z=l(1-\\cos\\theta)"
},
{
"math_id": 8,
"text": "\\dot x=l\\cos\\theta\\cos\\phi\\,\\dot\\theta-l\\sin\\theta\\sin\\phi\\,\\dot\\phi"
},
{
"math_id": 9,
"text": "\\dot y=l\\cos\\theta\\sin\\phi\\,\\dot\\theta+l\\sin\\theta\\cos\\phi\\,\\dot\\phi"
},
{
"math_id": 10,
"text": "\\dot z=l\\sin\\theta\\,\\dot\\theta"
},
{
"math_id": 11,
"text": "\nv^2=\\dot x ^2+\\dot y ^2+\\dot z ^2\n=l^2\\left(\\dot\\theta ^2+\\sin^2\\theta\\,\\dot\\phi ^2\\right)\n"
},
{
"math_id": 12,
"text": "\nT=\\tfrac{1}{2}mv^2\n=\\tfrac{1}{2}ml^2\\left(\\dot\\theta ^2+\\sin^2\\theta\\,\\dot\\phi ^2\\right)\n"
},
{
"math_id": 13,
"text": "\nV=mg\\,z=mg\\,l(1-\\cos\\theta)\n"
},
{
"math_id": 14,
"text": "\nL=\\frac{1}{2}\nml^2\\left(\n \\dot{\\theta}^2+\\sin^2\\theta\\ \\dot{\\phi}^2\n\\right)\n+ mgl\\cos\\theta.\n"
},
{
"math_id": 15,
"text": "\\theta"
},
{
"math_id": 16,
"text": "\n\\frac{d}{dt}\\frac{\\partial}{\\partial\\dot\\theta}L-\\frac{\\partial}{\\partial\\theta}L=0\n"
},
{
"math_id": 17,
"text": "\n\\frac{d}{dt}\n\\left(ml^2\\dot{\\theta}\n\\right)\n-ml^2\\sin\\theta\\cdot\\cos\\theta\\,\\dot{\\phi}^2+\nmgl\\sin\\theta =0\n"
},
{
"math_id": 18,
"text": "\n\\ddot\\theta=\\sin\\theta\\cos\\theta\\dot\\phi ^2-\\frac{g}{l}\\sin\\theta\n"
},
{
"math_id": 19,
"text": "\\dot\\phi=0"
},
{
"math_id": 20,
"text": "\\phi"
},
{
"math_id": 21,
"text": "\n\\frac{d}{dt}\\frac{\\partial}{\\partial\\dot\\phi}L-\\frac{\\partial}{\\partial\\phi}L=0\n"
},
{
"math_id": 22,
"text": "\n\\frac{d}{dt}\n\\left(\n ml^2\\sin^2\\theta\n \\cdot \n \\dot{\\phi}\n\\right)\n=0\n"
},
{
"math_id": 23,
"text": "|\\mathbf L_z| = l\\sin\\theta \\times ml\\sin\\theta\\,\\dot\\phi"
},
{
"math_id": 24,
"text": "ml^2\\sin^2\\theta"
},
{
"math_id": 25,
"text": "\\ddot\\phi\\,\\sin\\theta = -2\\,\\dot\\theta\\,\\dot{\\phi}\\,\\cos\\theta"
},
{
"math_id": 26,
"text": "\\dot\\theta=0"
},
{
"math_id": 27,
"text": "\\dot\\phi"
},
{
"math_id": 28,
"text": "H=P_\\theta\\dot \\theta + P_\\phi\\dot \\phi-L"
},
{
"math_id": 29,
"text": "P_\\theta=\\frac{\\partial L}{\\partial \\dot \\theta}=ml^2\\cdot \\dot \\theta"
},
{
"math_id": 30,
"text": "P_\\phi=\\frac{\\partial L}{\\partial \\dot \\phi} = ml^2 \\sin^2\\! \\theta \\cdot \\dot \\phi"
},
{
"math_id": 31,
"text": "H = \\underbrace{\\left[\\frac{1}{2}ml^2\\dot\\theta^2 + \\frac{1}{2}ml^2\\sin^2\\theta\\dot \\phi^2\\right]}_{T} + \\underbrace{ \\bigg[-mgl\\cos\\theta\\bigg]}_{V}=\n{P_\\theta^2\\over 2ml^2}+{P_\\phi^2\\over 2ml^2\\sin^2\\theta}-mgl\\cos\\theta"
},
{
"math_id": 32,
"text": "\\dot {\\theta}={P_\\theta \\over ml^2}"
},
{
"math_id": 33,
"text": "\\dot {\\phi}={P_\\phi \\over ml^2\\sin^2\\theta}"
},
{
"math_id": 34,
"text": "\\dot {P_\\theta}={P_\\phi^2\\over ml^2\\sin^3\\theta}\\cos\\theta-mgl\\sin\\theta"
},
{
"math_id": 35,
"text": "\\dot {P_\\phi}=0"
},
{
"math_id": 36,
"text": "P_\\phi"
},
{
"math_id": 37,
"text": "E=\\underbrace{\\left[\\frac{1}{2}ml^2\\dot\\theta^2 + \\frac{1}{2}ml^2\\sin^2\\theta \\dot \\phi^2\\right]}_{T}+\\underbrace{ \\bigg[-mgl\\cos\\theta\\bigg]}_{V}"
},
{
"math_id": 38,
"text": "L_z = ml^2\\sin^2\\!\\theta \\,\\dot\\phi"
},
{
"math_id": 39,
"text": "E=\\frac{1}{2}ml^2\\dot\\theta^2 + \\frac{1}{2}\\frac{L_z^2}{ml^2\\sin^2\\theta}-mgl\\cos\\theta"
},
{
"math_id": 40,
"text": "\\left(\\frac{d\\theta}{dt}\\right)^2=\\frac{2}{ml^2}\\left[E-\\frac{1}{2}\\frac{L_z^2}{ml^2\\sin^2\\theta}+mgl\\cos\\theta\\right]"
},
{
"math_id": 41,
"text": "t(\\theta)=\\sqrt{\\tfrac{1}{2}ml^2}\\int\\left[E-\\frac{1}{2}\\frac{L_z^2}{ml^2\\sin^2\\theta}+mgl\\cos\\theta\\right]^{-\\frac{1}{2}}\\,d\\theta"
},
{
"math_id": 42,
"text": "\\phi(\\theta)=\\frac{L_z}{l\\sqrt{2m}}\\int\\sin^{-2}\\theta \\left[E-\\frac{1}{2}\\frac{L_z^2}{ml^2\\sin^2\\theta}+mgl\\cos\\theta\\right]^{-\\frac{1}{2}}\\,d\\theta"
},
{
"math_id": 43,
"text": "E>\\frac{1}{2}\\frac{L_z^2}{ml^2\\sin^2\\theta}-mgl\\cos\\theta"
}
]
| https://en.wikipedia.org/wiki?curid=689767 |
6898473 | X-ray standing waves | The X-ray standing wave (XSW) technique can be used to study the structure of surfaces and interfaces with high spatial resolution and chemical selectivity. Pioneered by B.W. Batterman in the 1960s, the availability of synchrotron light has stimulated the application of this interferometric technique to a wide range of problems in surface science.
Basic principles.
An X-ray standing wave (XSW) field is created by interference between an X-ray beam impinging on a sample and a reflected beam. The reflection may be generated at the Bragg condition for a crystal lattice or an engineered multilayer superlattice; in these cases, the period of the XSW equals the periodicity of the reflecting planes. X-ray reflectivity from a mirror surface at small incidence angles may also be used to generate long-period XSWs.
The spatial modulation of the XSW field, described by the dynamical theory of X-ray diffraction, undergoes a pronounced change when the sample is scanned through the Bragg condition. Due to a relative phase variation between the incoming and reflected beams, the nodal planes of the XSW field shift by half the XSW period. Depending on the position of the atoms within this wave field, the measured element-specific absorption of X-rays varies in a characteristic way. Therefore, measurement of the absorption (via X-ray fluorescence or photoelectron yield) can reveal the position of the atoms relative to the reflecting planes. The absorbing atoms can be thought of as "detecting" the phase of the XSW; thus, this method overcomes the phase problem of X-ray crystallography.
For quantitative analysis, the normalized fluorescence or photoelectron yield formula_0 is described by
formula_1,
where formula_2 is the reflectivity and formula_3 is the relative phase of the interfering beams. The characteristic shape of formula_0 can be used to derive precise structural information about the surface atoms because the two parameters formula_4 (coherent fraction) and formula_5 (coherent position) are directly related to the Fourier representation of the atomic distribution function. Therefore, with a sufficiently large number of Fourier components being measured, XSW data can be used to establish the distribution of the different atoms in the unit cell (XSW imaging).
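To illustrate how the coherent fraction and coherent position shape a yield curve, the sketch below evaluates the expression for formula_0 using an idealized, absorption-free model of the reflectivity formula_2 and relative phase formula_3 across the Bragg condition; this toy model and all parameter values are assumptions made here for illustration, and a real analysis would use the dynamical-theory reflectivity of the actual crystal and reflection.
<syntaxhighlight lang="python">
import numpy as np

# Idealized Darwin curve: total reflection (R = 1) inside |eta| <= 1, with the
# relative phase nu sweeping by pi as the crystal is rocked through the
# reflection - the sweep that shifts the standing-wave nodes by half a period.
def idealized_darwin(eta):
    eta = np.asarray(eta, dtype=float)
    outside = np.abs(eta) > 1.0
    R = np.where(outside, (np.abs(eta) - np.sqrt(np.abs(eta ** 2 - 1.0))) ** 2, 1.0)
    nu = np.where(eta < -1.0, np.pi,
                  np.where(eta > 1.0, 0.0, np.arccos(np.clip(eta, -1.0, 1.0))))
    return R, nu

eta = np.linspace(-3.0, 3.0, 601)        # normalized rocking angle
R, nu = idealized_darwin(eta)

C, f_H, P_H = 1.0, 0.8, 0.25             # arbitrary coherent fraction and position
Y = 1.0 + R + 2.0 * C * np.sqrt(R) * f_H * np.cos(nu - 2.0 * np.pi * P_H)
print("normalized yield ranges from", round(float(Y.min()), 3),
      "to", round(float(Y.max()), 3))
</syntaxhighlight>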
Experimental considerations.
XSW measurements of single crystal surfaces are performed on a diffractometer. The crystal is rocked through a Bragg diffraction condition, and the reflectivity and XSW yield are simultaneously measured. XSW yield is usually detected as X-ray fluorescence (XRF). XRF detection enables "in situ" measurements of interfaces between a surface and gas or liquid environments, since hard X-rays can penetrate these media. While XRF gives an element-specific XSW yield, it is not sensitive to the chemical state of the absorbing atom. Chemical state sensitivity is achieved using photoelectron detection, which requires ultra-high vacuum instrumentation.
Measurements of atomic positions at or near single crystal surfaces require substrates of very high crystal quality. The intrinsic width of a Bragg reflection, as calculated by dynamical diffraction theory, is extremely small (on the order of 0.001° under conventional X-ray diffraction conditions). Crystal defects such as mosaicity can substantially broaden the measured reflectivity, which obscures the modulations in the XSW yield needed to locate the absorbing atom. For defect-rich substrates such as metal single crystals, a normal-incidence or back-reflection geometry is used. In this geometry, the intrinsic width of the Bragg reflection is maximized. Instead of rocking the crystal in space, the energy of the incident beam is tuned through the Bragg condition. Since this geometry requires soft incident X-rays, this geometry typically uses XPS detection of the XSW yield.
Selected applications.
Applications which require ultra-high vacuum conditions:
Applications which do not require ultra-high vacuum conditions:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Y_p"
},
{
"math_id": 1,
"text": "Y_{p}(\\Omega) = 1 + R + 2C \\sqrt{R} f_H \\cos (\\nu - 2\\pi P_H )"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "\\nu"
},
{
"math_id": 4,
"text": "f_H"
},
{
"math_id": 5,
"text": "P_H"
}
]
| https://en.wikipedia.org/wiki?curid=6898473 |
68985499 | Lenia | Continuous generalization of cellular automata
Lenia is a family of cellular automata created by Bert Wang-Chak Chan. It is intended to be a continuous generalization of Conway's Game of Life, with continuous states, space and time. As a consequence of its continuous, high-resolution domain, the complex autonomous patterns ("lifeforms" or "spaceships") generated in Lenia are described as differing from those appearing in other cellular automata, being "geometric, metameric, fuzzy, resilient, adaptive, and rule-generic".
Lenia won the 2018 Virtual Creatures Contest at the Genetic and Evolutionary Computation Conference in Kyoto, an honorable mention for the ALIFE Art Award at ALIFE 2018 in Tokyo, and Outstanding Publication of 2019 by the International Society for Artificial Life (ISAL).
Rules.
Iterative updates.
Let formula_0 be the "lattice" or "grid" containing a set of states formula_1. Like many cellular automata, Lenia is updated iteratively; each output state is a pure function of the previous state, such that
formula_2
where formula_3 is the initial state and formula_4 is the "global rule", representing the application of the "local rule" over every site formula_5. Thus formula_6.
If the simulation is advanced by formula_7 at each timestep, then the time resolution formula_8.
State sets.
Let formula_9 with maximum formula_10. This is the "state set" of the automaton and characterizes the possible states that may be found at each site. Larger formula_11 correspond to higher "state resolutions" in the simulation. Many cellular automata use the lowest possible state resolution, i.e. formula_12. Lenia allows for much higher resolutions. Note that the actual value at each site is not in formula_13 but rather an integer multiple of formula_14; therefore we have formula_15 for all formula_16. For example, given formula_17, formula_18.
Neighborhoods.
Mathematically, neighborhoods like those in Game of Life may be represented using a set of position vectors in formula_19. For the classic Moore neighborhood used by Game of Life, for instance, formula_20; i.e. a square of size 3 centered on every site.
In Lenia's case, the neighborhood is instead a ball of radius formula_21 centered on a site, formula_22, which may include the original site itself.
Note that the neighborhood vectors are not the absolute position of the elements, but rather a set of relative positions (deltas) with respect to any given site.
Local rule.
There are discrete and continuous variants of Lenia. Let formula_23 be a vector in formula_19 within formula_0 representing the position of a given site, and formula_24 be the set of sites neighboring formula_23. Both variations comprise two stages: first, a convolution kernel formula_25 is applied to the state to compute a "potential distribution" formula_26; second, a growth mapping formula_27 is applied to the potential to obtain a "growth distribution" formula_28.
Once formula_29 is computed, it is scaled by the chosen time resolution formula_7 and added to the original state value:
formula_30
Here, the clip function is defined by formula_31.
The local rules are defined as follows for discrete and continuous Lenia:
formula_32
Kernel generation.
There are many ways to generate the convolution kernel formula_33. The final kernel is the composition of a "kernel shell" formula_34 and a "kernel skeleton" formula_35.
For the kernel shell formula_34, Chan gives several functions that are defined radially. Kernel shell functions are unimodal and subject to the constraint formula_36 (and typically formula_37 as well). Example kernel functions include:
formula_38
Here, formula_39 is the indicator function.
Once the kernel shell has been defined, the kernel skeleton formula_35 is used to expand it and compute the actual values of the kernel by transforming the shell into a series of concentric rings. The height of each ring is controlled by a "kernel peak" vector formula_40, where formula_41 is the "rank" of the parameter vector. Then the kernel skeleton formula_35 is defined as
formula_42
The final kernel formula_43 is therefore
formula_44
such that formula_33 is normalized to have an element sum of formula_45 and formula_46 (for conservation of mass). formula_47 in the discrete case, and formula_48 in the continuous case.
Growth mappings.
The growth mapping formula_49, which is analogous to an activation function, may be any function that is unimodal, nonmonotonic, and accepts parameters formula_50. Examples include
formula_51
where formula_52 is a potential value drawn from formula_53.
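A minimal NumPy sketch of a single Lenia update is given below; it is an illustration rather than the reference implementation, using an exponential kernel shell with a single ring, FFT-based convolution on a periodic grid, an exponential growth mapping, and arbitrary parameter values.
<syntaxhighlight lang="python">
import numpy as np

def lenia_step(A, R=13, dt=0.1, mu=0.15, sigma=0.017):
    """One Lenia update on a periodic square grid A with values in [0, 1]."""
    n = A.shape[0]
    # Kernel: exponential shell K_C(r) = exp(4 - 1/(r(1-r))) on a ball of radius R
    # (a single peak, beta = (1,)), normalized to unit sum.
    y, x = np.ogrid[-n // 2:n // 2, -n // 2:n // 2]
    r = np.sqrt(x * x + y * y) / R
    K = np.where((r > 0) & (r < 1),
                 np.exp(4.0 - 1.0 / np.clip(r * (1 - r), 1e-9, None)), 0.0)
    K /= K.sum()
    # Potential U = K * A via FFTs (periodic boundary conditions).
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * np.fft.fft2(np.fft.ifftshift(K))))
    # Exponential growth mapping, rescaled to [-1, 1].
    G = 2.0 * np.exp(-((U - mu) ** 2) / (2.0 * sigma ** 2)) - 1.0
    # Scale by the time resolution and clip back to [0, 1].
    return np.clip(A + dt * G, 0.0, 1.0)

rng = np.random.default_rng(0)
A = rng.random((128, 128))               # random initial state
for _ in range(10):
    A = lenia_step(A)
print("mean state after 10 steps:", float(A.mean()))
</syntaxhighlight>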
Game of Life.
The Game of Life may be regarded as a special case of discrete Lenia with formula_54. In this case, the kernel would be rectangular, with the functionformula_55and the growth rule also rectangular, with formula_56.
Patterns.
By varying the convolutional kernel, the growth mapping and the initial condition, over 400 "species" of "life" have been discovered in Lenia, displaying "self-organization, self-repair, bilateral and radial symmetries, locomotive dynamics, and sometimes chaotic nature". Chan has created a taxonomy for these patterns.
Related work.
Other works have noted the strong similarity between cellular automata update rules and convolutions. Indeed, these works have focused on reproducing cellular automata using simplified convolutional neural networks. Mordvintsev et al. investigated the emergence of self-repairing pattern generation. Gilpin found that any cellular automaton could be represented as a convolutional neural network, and trained neural networks to reproduce existing cellular automata.
In this light, cellular automata may be seen as a special case of recurrent convolutional neural networks. Lenia's update rule may also be seen as a single-layer convolution (the "potential field" formula_33) with an activation function (the "growth mapping" formula_57). However, Lenia uses far larger, fixed, kernels and is not trained via gradient descent.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{L}"
},
{
"math_id": 1,
"text": "S^\\mathcal{L}"
},
{
"math_id": 2,
"text": "\\Phi(A^0) = A^{\\Delta t}, \\Phi(A^{\\Delta t}) = A^{2\\Delta t}, \\ldots, \\Phi(A^t) = A^{t + \\Delta t},\\ldots"
},
{
"math_id": 3,
"text": "A^0"
},
{
"math_id": 4,
"text": "\\Phi : S^\\mathcal{L} \\rightarrow S^\\mathcal{L}"
},
{
"math_id": 5,
"text": "\\mathbf{x}\\in\\cal{L}"
},
{
"math_id": 6,
"text": "\\Phi^N(A^t) = A^{t + N\\Delta t}"
},
{
"math_id": 7,
"text": "\\Delta t"
},
{
"math_id": 8,
"text": "T = \\frac{1}{\\Delta t}"
},
{
"math_id": 9,
"text": "S = \\{0, 1, \\ldots, P-1, P\\}"
},
{
"math_id": 10,
"text": "P \\in \\Z"
},
{
"math_id": 11,
"text": "P"
},
{
"math_id": 12,
"text": "P = 1"
},
{
"math_id": 13,
"text": "[0,P]"
},
{
"math_id": 14,
"text": "\\Delta p = \\frac{1}{P}"
},
{
"math_id": 15,
"text": "A^t(\\mathbf{x}) \\in [0, 1]"
},
{
"math_id": 16,
"text": "\\mathbf{x} \\in \\mathcal{L}"
},
{
"math_id": 17,
"text": "P = 4"
},
{
"math_id": 18,
"text": "\\mathbf{A}^t(\\mathbf{x}) \\in \\{0, 0.25, 0.5, 0.75, 1\\}"
},
{
"math_id": 19,
"text": "\\R^2"
},
{
"math_id": 20,
"text": "\\mathcal{N} = \\{-1, 0, 1\\}^2"
},
{
"math_id": 21,
"text": "R"
},
{
"math_id": 22,
"text": "\\mathcal{N} = \\{\\mathbf{x} \\in \\mathcal{L} : \\lVert \\mathbf{x} \\rVert_2 \\leq R\\}"
},
{
"math_id": 23,
"text": "\\mathbf{x}"
},
{
"math_id": 24,
"text": "\\mathcal{N}"
},
{
"math_id": 25,
"text": "\\mathbf{K} : \\mathcal{N} \\rightarrow S"
},
{
"math_id": 26,
"text": "\\mathbf{U}^t(\\mathbf{x})=\\mathbf{K} * \\mathbf{A}^t(\\mathbf{x})"
},
{
"math_id": 27,
"text": "G : [0, 1] \\rightarrow [-1, 1]"
},
{
"math_id": 28,
"text": "\\mathbf{G}^t(\\mathbf{x})=G(\\mathbf{U}^t(\\mathbf{x}))"
},
{
"math_id": 29,
"text": "\\mathbf{G}^t"
},
{
"math_id": 30,
"text": "\\mathbf{A}^{t+\\Delta t}(\\mathbf{x}) = \\text{clip}(\\mathbf{A}^{t} + \\Delta t \\;\\mathbf{G}^t(\\mathbf{x}),\\; 0,\\; 1)"
},
{
"math_id": 31,
"text": "\\operatorname{clip}(u,a,b):=\\min(\\max(u,a),b)"
},
{
"math_id": 32,
"text": "\\begin{align}\n\\mathbf{U}^t(\\mathbf{x}) &= \\begin{cases}\n \\sum_{\\mathbf{n} \\in \\mathcal{N}} \\mathbf{K(n)}\\mathbf{A}^t(\\mathbf{x}+\\mathbf{n})\\Delta x^2, & \\text{discrete Lenia} \\\\\n \\int_{\\mathbf{n} \\in \\mathcal{N}} \\mathbf{K(n)}\\mathbf{A}^t(\\mathbf{x}+\\mathbf{n})dx^2, & \\text{continuous Lenia}\n\\end{cases} \\\\\n\\mathbf{G}^t(\\mathbf{x}) &= G(\\mathbf{U}^t(\\mathbf{x})) \\\\\n\\mathbf{A}^{t+\\Delta t}(\\mathbf{x}) &= \\text{clip}(\\mathbf{A}^t(\\mathbf{x}) + \\Delta t\\;\\mathbf{G}^t(\\mathbf{x}),\\; 0,\\; 1)\n\\end{align}"
},
{
"math_id": 33,
"text": "\\mathbf{K}"
},
{
"math_id": 34,
"text": "K_C"
},
{
"math_id": 35,
"text": "K_S"
},
{
"math_id": 36,
"text": "K_C(0) = K_C(1) = 0 "
},
{
"math_id": 37,
"text": "K_C\\left(\\frac{1}{2}\\right) = 1"
},
{
"math_id": 38,
"text": "K_C(r) = \\begin{cases} \n \\exp\\left(\\alpha - \\frac{\\alpha}{4r(1-r)}\\right), & \\text{exponential}, \\alpha=4 \\\\ \n (4r(1-r))^\\alpha, & \\text{polynomial}, \\alpha=4 \\\\\n \\mathbf{1}_{\\left[\\frac{1}{4},\\frac{3}{4}\\right]}(r), & \\text{rectangular} \\\\\n \\ldots, & \\text{etc.}\n\\end{cases}"
},
{
"math_id": 39,
"text": "\\mathbf{1}_A(r)"
},
{
"math_id": 40,
"text": "\\beta = (\\beta_1, \\beta_2, \\ldots, \\beta_B) \\in [0,1]^B"
},
{
"math_id": 41,
"text": "B"
},
{
"math_id": 42,
"text": "K_S(r;\\beta)=\\beta_{\\lfloor Br \\rfloor} K_C(Br \\text{ mod } 1)"
},
{
"math_id": 43,
"text": "\\mathbf{K}(\\mathbf{n})"
},
{
"math_id": 44,
"text": "\\mathbf{K}(\\mathbf{n}) = \\frac{K_S(\\lVert \\mathbf{n} \\rVert_2)}{|K_S|}"
},
{
"math_id": 45,
"text": "1"
},
{
"math_id": 46,
"text": "\\mathbf{K} * \\mathbf{A} \\in [0, 1]"
},
{
"math_id": 47,
"text": "|K_S| = \\textstyle \\sum_{\\mathcal{N}} \\displaystyle K_S \\, \\Delta x^2"
},
{
"math_id": 48,
"text": "\\int_{N} K_S \\,dx^2"
},
{
"math_id": 49,
"text": "G : [0, 1] \\rightarrow [-1,1]"
},
{
"math_id": 50,
"text": "\\mu,\\sigma \\in \\R"
},
{
"math_id": 51,
"text": "G(u;\\mu,\\sigma) = \\begin{cases}\n 2\\exp\\left(-\\frac{(u-\\mu)^2}{2\\sigma^2}\\right)-1, & \\text{exponential} \\\\\n 2\\cdot\\mathbf{1}_{[\\mu\\pm3\\sigma]}(u)\\left(1-\\frac{(u-\\mu)^2}{9\\sigma^2}\\right)^\\alpha-1, & \\text{polynomial}, \\alpha=4 \\\\\n 2\\cdot\\mathbf{1}_{[\\mu\\pm\\sigma]}(u)-1, & \\text{rectangular} \\\\\n \\ldots, & \\text{etc.}\n\\end{cases}"
},
{
"math_id": 52,
"text": "u"
},
{
"math_id": 53,
"text": "\\mathbf{U}^t"
},
{
"math_id": 54,
"text": "R = T = P = 1"
},
{
"math_id": 55,
"text": "K_C(r) = \\mathbf{1}_{\\left[\\frac{1}{4},\\frac{3}{4}\\right]}(r) + \\frac{1}{2}\\mathbf{1}_{\\left[0,\\frac{1}{4}\\right)}(r)"
},
{
"math_id": 56,
"text": "\\mu = 0.35, \\sigma = 0.07"
},
{
"math_id": 57,
"text": "G"
}
]
| https://en.wikipedia.org/wiki?curid=68985499 |
689895 | Mechanism design | Field of economics and game theory
Mechanism design, sometimes called implementation theory or institution design, is a branch of economics, social choice, and game theory that deals with designing game forms (or mechanisms) to implement a given social choice function. Because it starts with the end of the game (an optimal result) and then works backwards to find a game that implements it, it is sometimes described as reverse game theory.
Mechanism design has broad applications, including traditional domains of economics such as market design, but also political science (through voting theory) and even networked systems (such as in inter-domain routing).
Mechanism design studies solution concepts for a class of private-information games. Leonid Hurwicz explains that "in a design problem, the goal function is the main given, while the mechanism is the unknown. Therefore, the design problem is the inverse of traditional economic theory, which is typically devoted to the analysis of the performance of a given mechanism."
The 2007 Nobel Memorial Prize in Economic Sciences was awarded to Leonid Hurwicz, Eric Maskin, and Roger Myerson "for having laid the foundations of mechanism design theory." The related works of William Vickrey that established the field earned him the 1996 Nobel prize.
Description.
One person, called the "principal", would like to condition his behavior on information privately known to the players of a game. For example, the principal would like to know the true quality of a used car a salesman is pitching. He cannot learn anything simply by asking the salesman, because it is in the salesman's interest to distort the truth. However, in mechanism design, the principal does have one advantage: He may design a game whose rules influence others to act the way he would like.
Without mechanism design theory, the principal's problem would be difficult to solve. He would have to consider all the possible games and choose the one that best influences other players' tactics. In addition, the principal would have to draw conclusions from agents who may lie to him. Thanks to the revelation principle, the principal only needs to consider games in which agents truthfully report their private information.
Foundations.
Mechanism.
A game of mechanism design is a game of private information in which one of the agents, called the principal, chooses the payoff structure. Following Harsanyi (1967), the agents receive secret "messages" from nature containing information relevant to payoffs. For example, a message may contain information about their preferences or the quality of a good for sale. We call this information the agent's "type" (usually noted formula_2 and accordingly the space of types formula_0). Agents then report a type to the principal (usually noted with a hat formula_3) that can be a strategic lie. After the report, the principal and the agents are paid according to the payoff structure the principal chose.
The timing of the game is:
In order to understand who gets what, it is common to divide the outcome formula_5 into a goods allocation and a money transfer, formula_7 where formula_8 stands for an allocation of goods rendered or received as a function of type, and formula_9 stands for a monetary transfer as a function of type.
As a benchmark the designer often defines what should happen under full information. Define a social choice function formula_1 mapping the (true) type profile directly to the allocation of goods received or rendered,
formula_10
In contrast a mechanism maps the "reported" type profile to an "outcome" (again, both a goods allocation formula_8 and a money transfer formula_9)
formula_11
Revelation principle.
A proposed mechanism constitutes a Bayesian game (a game of private information), and if it is well-behaved the game has a Bayesian Nash equilibrium. At equilibrium agents choose their reports strategically as a function of type
formula_12
It is difficult to solve for Bayesian equilibria in such a setting because it involves solving for agents' best-response strategies and for the best inference from a possible strategic lie. Thanks to a sweeping result called the revelation principle, no matter the mechanism a designer can confine attention to equilibria in which agents truthfully report type. The revelation principle states: "To every Bayesian Nash equilibrium there corresponds a Bayesian game with the same equilibrium outcome but in which players truthfully report type."
This is extremely useful. The principle allows one to solve for a Bayesian equilibrium by assuming all players truthfully report type (subject to an incentive compatibility constraint). In one blow it eliminates the need to consider either strategic behavior or lying.
Its proof is quite direct. Assume a Bayesian game in which the agent's strategy and payoff are functions of its type and what others do, formula_13. By definition agent "i"'s equilibrium strategy formula_14 is Nash in expected utility:
formula_15
Simply define a mechanism that would induce agents to choose the same equilibrium. The easiest one to define is for the mechanism to commit to playing the agents' equilibrium strategies "for" them.
formula_16
Under such a mechanism the agents of course find it optimal to reveal type since the mechanism plays the strategies they found optimal anyway. Formally, choose formula_17 such that
formula_18
Implementability.
The designer of a mechanism generally hopes either
To implement a social choice function formula_1 is to find some transfer function formula_19 that motivates agents to pick formula_1. Formally, if the equilibrium strategy profile under the mechanism maps to the same goods allocation as a social choice function,
formula_20
we say the mechanism implements the social choice function.
Thanks to the revelation principle, the designer can usually find a transfer function formula_19 to implement a social choice by solving an associated truthtelling game. If agents find it optimal to truthfully report type,
formula_21
we say such a mechanism is truthfully implementable. The task is then to solve for a truthfully implementable formula_19 and impute this transfer function to the original game. An allocation formula_22 is truthfully implementable if there exists a transfer function formula_19 such that
formula_23
which is also called the incentive compatibility (IC) constraint.
In applications, the IC condition is the key to describing the shape of formula_19 in any useful way. Under certain conditions it can even isolate the transfer function analytically. Additionally, a participation (individual rationality) constraint is sometimes added if agents have the option of not playing.
Necessity.
Consider a setting in which all agents have a type-contingent utility function formula_24. Consider also a goods allocation formula_22 that is vector-valued and size formula_25 (which permits formula_25 number of goods) and assume it is piecewise continuous with respect to its arguments.
The function formula_22 is implementable only if
formula_26
whenever formula_27 and formula_28 and "x" is continuous at formula_2. This is a necessary condition and is derived from the first- and second-order conditions of the agent's optimization problem assuming truth-telling.
Its meaning can be understood in two pieces. The first piece says the agent's marginal rate of substitution (MRS) increases as a function of the type,
formula_29
In short, agents will not tell the truth if the mechanism does not offer higher agent types a better deal. Otherwise, higher types facing any mechanism that punishes high types for reporting will lie and declare they are lower types, violating the truthtelling incentive-compatibility constraint. The second piece is a monotonicity condition waiting to happen,
formula_30
which, to be positive, means higher types must be given more of the good.
There is potential for the two pieces to interact. If for some type range the contract offered less quantity to higher types formula_31, it is possible the mechanism could compensate by giving higher types a discount. But such a contract already exists for low-type agents, so this solution is pathological. Such a solution sometimes occurs in the process of solving for a mechanism. In these cases it must be "ironed". In a multiple-good environment it is also possible for the designer to reward the agent with more of one good to substitute for less of another (e.g. butter for margarine). Multiple-good mechanisms are an area of continuing research in mechanism design.
Sufficiency.
Mechanism design papers usually make two assumptions to ensure implementability:
formula_32
This is known by several names: the single-crossing condition, the sorting condition and the Spence–Mirrlees condition. It means the utility function is of such a shape that the agent's MRS is increasing in type.
formula_33
This is a technical condition bounding the rate of growth of the MRS.
These assumptions are sufficient to provide that any monotonic formula_22 is implementable (a formula_19 exists that can implement it). In addition, in the single-good setting the single-crossing condition is sufficient to provide that only a monotonic formula_22 is implementable, so the designer can confine his search to a monotonic formula_22.
Highlighted results.
Revenue equivalence theorem.
Vickrey (1961) gives a celebrated result that any member of a large class of auctions assures the seller of the same expected revenue and that the expected revenue is the best the seller can do. This is the case if
The last condition is crucial to the theorem. An implication is that for the seller to achieve higher revenue he must take a chance on giving the item to an agent with a lower valuation. Usually this means he must risk not selling the item at all.
Vickrey–Clarke–Groves mechanisms.
The Vickrey (1961) auction model was later expanded by Clarke (1971) and Groves to treat a public choice problem in which a public project's cost is borne by all agents, e.g. whether to build a municipal bridge. The resulting "Vickrey–Clarke–Groves" mechanism can motivate agents to choose the socially efficient allocation of the public good even if agents have privately known valuations. In other words, it can solve the "tragedy of the commons"—under certain conditions, in particular quasilinear utility or if budget balance is not required.
Consider a setting in which formula_34 number of agents have quasilinear utility with private valuations formula_35 where the currency formula_9 is valued linearly. The VCG designer designs an incentive compatible (hence truthfully implementable) mechanism to obtain the true type profile, from which the designer implements the socially optimal allocation
formula_36
The cleverness of the VCG mechanism is the way it motivates truthful revelation. It eliminates incentives to misreport by penalizing any agent by the cost of the distortion he causes. Among the reports the agent may make, the VCG mechanism permits a "null" report saying he is indifferent to the public good and cares only about the money transfer. This effectively removes the agent from the game. If an agent does choose to report a type, the VCG mechanism charges the agent a fee if his report is pivotal, that is if his report changes the optimal allocation "x" so as to harm other agents. The payment is calculated
formula_37
which sums the distortion in the utilities of the other agents (and not his own) caused by one agent reporting.
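To make the pivot payment concrete, the following sketch computes the Clarke payments for a binary public project in which the efficient choice is to build whenever the sum of reported net valuations is non-negative. It is a minimal illustration; the specific valuation numbers are assumptions made for the example, not taken from the sources above.
```python
# Minimal sketch of the Clarke pivot payment for a binary public project.
# The valuation numbers are illustrative assumptions.

def efficient_choice(values):
    # Build (x = 1) iff the total reported net valuation is non-negative.
    return 1 if sum(values) >= 0 else 0

def clarke_payments(values):
    x_all = efficient_choice(values)
    payments = []
    for i in range(len(values)):
        others = values[:i] + values[i + 1:]
        x_without_i = efficient_choice(others)
        # Others' welfare at the optimum chosen without agent i,
        # minus their welfare at the optimum chosen with agent i's report.
        payments.append(sum(others) * x_without_i - sum(others) * x_all)
    return payments

values = [30, -10, -15]          # net valuations (benefit minus cost share)
print(efficient_choice(values))  # 1: build, since 30 - 10 - 15 = 5 >= 0
print(clarke_payments(values))   # [25, 0, 0]: only agent 0 is pivotal
```
Only the pivotal agent pays: without agent 0 the project would not be built, so agent 0 pays the 25 units of harm its report imposes on the other agents.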
Gibbard–Satterthwaite theorem.
Gibbard (1973) and Satterthwaite (1975) give an impossibility result similar in spirit to Arrow's impossibility theorem. For a very general class of games, only "dictatorial" social choice functions can be implemented.
A social choice function "f"() is dictatorial if one agent always receives his most-favored goods allocation,
formula_38
The theorem states that under general conditions any truthfully implementable social choice function must be dictatorial if,
Myerson–Satterthwaite theorem.
Myerson and Satterthwaite (1983) show there is no efficient way for two parties to trade a good when they each have secret and probabilistically varying valuations for it, without the risk of forcing one party to trade at a loss. It is among the most remarkable negative results in economics—a kind of negative mirror to the fundamental theorems of welfare economics.
Shapley value.
Phillips and Marden (2018) proved that for cost-sharing games with concave cost functions, the optimal cost-sharing rule that firstly optimizes the worst-case inefficiencies in a game (the price of anarchy), and then secondly optimizes the best-case outcomes (the price of stability), is precisely the Shapley value cost-sharing rule. A symmetrical statement is similarly valid for utility-sharing games with convex utility functions.
Price discrimination.
Mirrlees (1971) introduces a setting in which the transfer function "t"() is easy to solve for. Due to its relevance and tractability it is a common setting in the literature. Consider a single-good, single-agent setting in which the agent has quasilinear utility with an unknown type parameter formula_2
formula_40
and in which the principal has a prior CDF over the agent's type formula_41. The principal can produce goods at a convex marginal cost "c"("x") and wants to maximize the expected profit from the transaction
formula_42
subject to IC and IR conditions
formula_43
formula_44
The principal here is a monopolist trying to set a profit-maximizing price scheme in which it cannot identify the type of the customer. A common example is an airline setting fares for business, leisure and student travelers. Due to the IR condition it has to give every type a good enough deal to induce participation. Due to the IC condition it has to give every type a good enough deal that the type prefers its deal to that of any other.
A trick given by Mirrlees (1971) is to use the envelope theorem to eliminate the transfer function from the expectation to be maximized,
formula_45
formula_46
Integrating,
formula_47
where formula_48 is some index type. Replacing the incentive-compatible formula_49 in the maximand,
formula_50
after an integration by parts. This function can be maximized pointwise.
Because formula_51 is already incentive-compatible, the designer can drop the IC constraint. If the utility function satisfies the Spence–Mirrlees condition then a monotonic formula_22 function exists. The IR constraint can be checked at equilibrium and the fee schedule raised or lowered accordingly. Additionally, note the presence of a hazard rate in the expression. If the type distribution bears the monotone hazard ratio property, the FOC is sufficient to solve for "t"(). If not, then it is necessary to check whether the monotonicity constraint (see sufficiency, above) is satisfied everywhere along the allocation and fee schedules. If not, then the designer must use Myerson ironing.
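For a concrete, assumed specification (V(x,θ) = θx, c(x) = x²/2, and θ uniform on [0,1], so that (1-P(θ))/p(θ) = 1-θ), the pointwise maximand reduces to (2θ-1)x - x²/2 and the optimal allocation is x(θ) = max(0, 2θ-1). The sketch below checks this numerically; it illustrates the pointwise maximization only and is not part of the original treatment.
```python
import numpy as np

# Assumed specification: V(x, theta) = theta * x, c(x) = x^2 / 2,
# theta ~ Uniform[0, 1], so the "virtual value" is theta - (1 - theta) = 2*theta - 1.
def pointwise_objective(x, theta):
    return theta * x - (1 - theta) * x - 0.5 * x**2

x_grid = np.linspace(0.0, 1.0, 2001)
for theta in [0.3, 0.5, 0.75, 1.0]:
    x_star = x_grid[np.argmax(pointwise_objective(x_grid, theta))]
    print(theta, round(x_star, 3), round(max(0.0, 2 * theta - 1), 3))
# The numerical maximizer matches the closed form x(theta) = max(0, 2*theta - 1),
# which is monotone, so no ironing is needed for this specification.
```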
Myerson ironing.
In some applications the designer may solve the first-order conditions for the price and allocation schedules yet find they are not monotonic. For example, in the quasilinear setting this often happens when the hazard ratio is itself not monotone. By the Spence–Mirrlees condition the optimal price and allocation schedules must be monotonic, so the designer must eliminate any interval over which the schedule changes direction by flattening it.
Intuitively, what is going on is that the designer finds it optimal to bunch certain types together and give them the same contract. Normally the designer motivates higher types to distinguish themselves by giving them a better deal. If there are too few higher types on the margin, the designer does not find it worthwhile to grant lower types a concession (called their information rent) in order to charge higher types a type-specific contract.
Consider a monopolist principal selling to agents with quasilinear utility, the example above. Suppose the allocation schedule formula_22 satisfying the first-order conditions has a single interior peak at formula_52 and a single interior trough at formula_53, illustrated at right.
Proof.
The proof uses the theory of optimal control. It considers the set of intervals formula_62 in the nonmonotonic region of formula_22 over which it might flatten the schedule. It then writes a Hamiltonian to obtain necessary conditions for a formula_22 within the intervals
Condition two ensures that the formula_22 satisfying the optimal control problem reconnects to the schedule in the original problem at the interval boundaries (no jumps). Any formula_22 satisfying the necessary conditions must be flat because it must be monotonic and yet reconnect at the boundaries.
As before maximize the principal's expected payoff, but this time subject to the monotonicity constraint
formula_63
and use a Hamiltonian to do it, with shadow price formula_64
formula_65
where formula_8 is a state variable and formula_66 the control. As usual in optimal control the costate evolution equation must satisfy
formula_67
Taking advantage of condition 2, note the monotonicity constraint is not binding at the boundaries of the formula_2 interval,
formula_68
meaning the costate variable condition can be integrated and also equals 0
formula_69
The average distortion of the principal's surplus must be 0. To flatten the schedule, find an formula_8 such that its inverse image maps to a formula_2 interval satisfying the condition above.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Theta"
},
{
"math_id": 1,
"text": "f(\\theta)"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "\\hat\\theta"
},
{
"math_id": 4,
"text": "y()"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "y(\\hat\\theta)"
},
{
"math_id": 7,
"text": "y(\\theta) = \\{ x(\\theta), t(\\theta) \\}, \\ x \\in X, t \\in T "
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "t"
},
{
"math_id": 10,
"text": "f(\\theta): \\Theta \\rightarrow Y"
},
{
"math_id": 11,
"text": "y(\\hat\\theta): \\Theta \\rightarrow Y"
},
{
"math_id": 12,
"text": "\\hat\\theta(\\theta)"
},
{
"math_id": 13,
"text": "u_i\\left(s_i(\\theta_i),s_{-i}(\\theta_{-i}), \\theta_{i} \\right)"
},
{
"math_id": 14,
"text": "s(\\theta_i)"
},
{
"math_id": 15,
"text": "s_i(\\theta_i) \\in \\arg\\max_{s'_i \\in S_i} \\sum_{\\theta_{-i}} \\ p(\\theta_{-i} \\mid \\theta_i) \\ u_i\\left(s'_i, s_{-i}(\\theta_{-i}),\\theta_i \\right)"
},
{
"math_id": 16,
"text": "y(\\hat\\theta) : \\Theta \\rightarrow S(\\Theta) \\rightarrow Y "
},
{
"math_id": 17,
"text": "y(\\theta)"
},
{
"math_id": 18,
"text": "\n\\begin{align}\n\\theta_i \\in {} & \\arg\\max_{\\theta'_i \\in \\Theta} \\sum_{\\theta_{-i}} \\ p(\\theta_{-i} \\mid \\theta_i) \\ u_i\\left( y(\\theta'_i, \\theta_{-i}),\\theta_i \\right) \\\\[5pt]\n& = \\sum_{\\theta_{-i}} \\ p(\\theta_{-i} \\mid \\theta_i) \\ u_i\\left(s_i(\\theta), s_{-i}(\\theta_{-i}),\\theta_i \\right)\n\\end{align}\n"
},
{
"math_id": 19,
"text": "t(\\theta)"
},
{
"math_id": 20,
"text": "f(\\theta) = x \\left(\\hat\\theta(\\theta) \\right)"
},
{
"math_id": 21,
"text": "\\hat\\theta(\\theta) = \\theta"
},
{
"math_id": 22,
"text": "x(\\theta)"
},
{
"math_id": 23,
"text": "u(x(\\theta),t(\\theta),\\theta) \\geq u(x(\\hat\\theta),t(\\hat\\theta),\\theta) \\ \\forall \\theta,\\hat\\theta \\in \\Theta"
},
{
"math_id": 24,
"text": "u(x,t,\\theta)"
},
{
"math_id": 25,
"text": "k"
},
{
"math_id": 26,
"text": " \\sum^n_{k=1} \\frac{\\partial}{\\partial \\theta} \\left( \\frac{\\partial u / \\partial x_k}{\\left|\\partial u / \\partial t\\right|} \\right) \\frac{\\partial x}{\\partial \\theta} \\geq 0 "
},
{
"math_id": 27,
"text": "x=x(\\theta)"
},
{
"math_id": 28,
"text": "t=t(\\theta)"
},
{
"math_id": 29,
"text": "\\frac \\partial {\\partial \\theta} \\left( \\frac{\\partial u / \\partial x_k}{\\left|\\partial u / \\partial t\\right|} \\right) = \\frac{\\partial}{\\partial \\theta} \\mathrm{MRS}_{x,t}"
},
{
"math_id": 30,
"text": "\\frac{\\partial x}{\\partial \\theta} "
},
{
"math_id": 31,
"text": "\\partial x / \\partial \\theta < 0"
},
{
"math_id": 32,
"text": "\\frac{\\partial}{\\partial \\theta} \\frac{\\partial u / \\partial x_k}{\\left|\\partial u / \\partial t\\right|} > 0 \\ \\forall k"
},
{
"math_id": 33,
"text": "\\exists K_0, K_1 \\text{ such that } \\left| \\frac{\\partial u / \\partial x_k}{\\partial u / \\partial t} \\right| \\leq K_0 + K_1 |t|"
},
{
"math_id": 34,
"text": "I"
},
{
"math_id": 35,
"text": "v(x,t,\\theta)"
},
{
"math_id": 36,
"text": " x^*_I(\\theta) \\in \\underset{x\\in X}{\\operatorname{argmax}} \\sum_{i \\in I} v(x,\\theta_i) "
},
{
"math_id": 37,
"text": " t_i(\\hat\\theta) = \\sum_{j \\in I-i} v_j(x^*_{I-i}(\\theta_{I-i}),\\theta_j) - \\sum_{j \\in I-i} v_j(x^*_I (\\hat\\theta_i,\\theta_I),\\theta_j) "
},
{
"math_id": 38,
"text": "\\text{for } f(\\Theta)\\text{, } \\exists i \\in I \\text{ such that } u_i(x,\\theta_i) \\geq u_i(x',\\theta_i) \\ \\forall x' \\in X"
},
{
"math_id": 39,
"text": "f(\\Theta) = X"
},
{
"math_id": 40,
"text": "u(x,t,\\theta) = V(x,\\theta) - t"
},
{
"math_id": 41,
"text": "P(\\theta)"
},
{
"math_id": 42,
"text": "\\max_{x(\\theta),t(\\theta)} \\mathbb{E}_\\theta \\left[ t(\\theta) - c\\left(x(\\theta)\\right) \\right]"
},
{
"math_id": 43,
"text": " u(x(\\theta),t(\\theta),\\theta) \\geq u(x(\\theta'),t(\\theta'),\\theta) \\ \\forall \\theta,\\theta' "
},
{
"math_id": 44,
"text": " u(x(\\theta),t(\\theta),\\theta) \\geq \\underline{u}(\\theta) \\ \\forall \\theta "
},
{
"math_id": 45,
"text": "\\text{let } U(\\theta) = \\max_{\\theta'} u\\left(x(\\theta'),t(\\theta'),\\theta \\right)"
},
{
"math_id": 46,
"text": "\\frac{dU}{d\\theta} = \\frac{\\partial u}{\\partial \\theta} = \\frac{\\partial V}{\\partial \\theta}"
},
{
"math_id": 47,
"text": "U(\\theta) = \\underline{u}(\\theta_0) + \\int^\\theta_{\\theta_0} \\frac{\\partial V}{\\partial \\tilde\\theta} d\\tilde\\theta"
},
{
"math_id": 48,
"text": "\\theta_0"
},
{
"math_id": 49,
"text": "t(\\theta) = V(x(\\theta),\\theta) - U(\\theta)"
},
{
"math_id": 50,
"text": "\\begin{align}& \\mathbb{E}_\\theta \\left[ V(x(\\theta),\\theta) - \\underline{u}(\\theta_0) - \\int^\\theta_{\\theta_0} \\frac{\\partial V}{\\partial \\tilde\\theta} d\\tilde\\theta - c\\left(x(\\theta)\\right) \\right] \\\\\n&{} =\\mathbb{E}_\\theta \\left[ V(x(\\theta),\\theta) - \\underline{u}(\\theta_0) - \\frac{1-P(\\theta)}{p(\\theta)} \\frac{\\partial V}{\\partial \\theta} - c\\left(x(\\theta)\\right) \\right]\\end{align}"
},
{
"math_id": 51,
"text": "U(\\theta)"
},
{
"math_id": 52,
"text": "\\theta_1"
},
{
"math_id": 53,
"text": "\\theta_2>\\theta_1"
},
{
"math_id": 54,
"text": " \\int^{\\phi_1(x)}_{\\phi_2(x)} \\left( \\frac{\\partial V}{\\partial x}(x,\\theta) - \\frac{1-P(\\theta)}{p(\\theta)} \\frac{\\partial^2 V}{\\partial \\theta \\, \\partial x}(x,\\theta) - \\frac{\\partial c}{\\partial x}(x) \\right) d\\theta = 0"
},
{
"math_id": 55,
"text": "\\phi_1(x)"
},
{
"math_id": 56,
"text": "\\theta \\leq \\theta_1"
},
{
"math_id": 57,
"text": "\\phi_2(x)"
},
{
"math_id": 58,
"text": "\\theta \\geq \\theta_2"
},
{
"math_id": 59,
"text": "\\phi_1"
},
{
"math_id": 60,
"text": "\\phi_2"
},
{
"math_id": 61,
"text": "\\phi(x)"
},
{
"math_id": 62,
"text": "\\left[\\underline\\theta, \\overline\\theta \\right] "
},
{
"math_id": 63,
"text": "\\frac{\\partial x}{\\partial \\theta} \\geq 0"
},
{
"math_id": 64,
"text": "\\nu(\\theta)"
},
{
"math_id": 65,
"text": "H = \\left( V(x,\\theta) - \\underline{u}(\\theta_0) - \\frac{1-P(\\theta)}{p(\\theta)} \\frac{\\partial V}{\\partial \\theta}(x,\\theta) - c(x) \\right)p(\\theta) + \\nu(\\theta) \\frac{\\partial x}{\\partial \\theta} "
},
{
"math_id": 66,
"text": "\\partial x/\\partial \\theta"
},
{
"math_id": 67,
"text": " \\frac{\\partial \\nu}{\\partial \\theta} = -\\frac{\\partial H}{\\partial x} = -\\left( \\frac{\\partial V}{\\partial x}(x,\\theta) - \\frac{1-P(\\theta)}{p(\\theta)} \\frac{\\partial^2 V}{\\partial \\theta \\, \\partial x}(x,\\theta) - \\frac{\\partial c}{\\partial x}(x) \\right) p(\\theta) "
},
{
"math_id": 68,
"text": "\\nu(\\underline\\theta) = \\nu(\\overline\\theta) = 0"
},
{
"math_id": 69,
"text": "\\int^{\\overline\\theta}_{\\underline\\theta} \\left( \\frac{\\partial V}{\\partial x}(x,\\theta) - \\frac{1-P(\\theta)}{p(\\theta)} \\frac{\\partial^2 V}{\\partial \\theta \\, \\partial x}(x,\\theta) - \\frac{\\partial c}{\\partial x}(x) \\right) p(\\theta) \\, d\\theta = 0 "
}
]
| https://en.wikipedia.org/wiki?curid=689895 |
68991463 | (Q,r) model | The (Q,r) model is a class of models in inventory theory. A general (Q,r) model can be extended from both the EOQ model and the base stock model.
Overview.
Costs.
The number of orders per year can be computed as formula_18, the annual fixed order cost is F(Q,r)A. The fill rate is given by:
formula_19
The annual stockout cost is proportional to D[1 - S(Q,r)], with the fill rate being:
formula_20
Inventory holding cost is formula_21, average inventory being:
formula_22
Backorder cost approach.
The annual backorder cost is proportional to backorder level:
formula_23
Total cost function and optimal reorder point.
The total cost is given by the sum of setup costs, purchase order cost, backorders cost and inventory carrying cost:
formula_24
The optimal reorder quantity and optimal reorder point are given by:
formula_25
formula_26
Normal distribution.
In the case lead-time demand is normally distributed:
formula_27
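As a worked illustration (all parameter values below are assumptions, not taken from the article), the optimal order quantity and the reorder point under the backorder-cost approach with normally distributed lead-time demand can be computed as follows.
```python
from math import sqrt
from scipy.stats import norm

# Assumed parameters: annual demand D, setup cost A, holding cost h,
# backorder cost b, lead-time demand mean theta and standard deviation sigma.
D, A, h, b = 1000.0, 50.0, 10.0, 40.0
theta, sigma = 100.0, 20.0

Q_star = sqrt(2 * A * D / h)          # EOQ-type order quantity
z = norm.ppf(b / (b + h))             # standard normal quantile at b/(b+h) = 0.8
r_star = theta + z * sigma            # reorder point for normal lead-time demand

print(round(Q_star, 1))               # 100.0
print(round(z, 3), round(r_star, 1))  # 0.842  ~116.8
```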
Stockout cost approach.
The total cost is given by the sum of setup costs, purchase order cost, stockout cost and inventory carrying cost:
formula_28
What changes with this approach is the computation of the optimal reorder point:
formula_29
Lead-Time Variability.
X is the random demand during replenishment lead time:
formula_30
In expectation:
formula_31
Variance of demand is given by:
formula_32
Hence standard deviation is:
formula_33
Poisson distribution.
If demand is Poisson distributed:
formula_34
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D"
},
{
"math_id": 1,
"text": "\\ell"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "g(x)"
},
{
"math_id": 4,
"text": "G(x)"
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "c"
},
{
"math_id": 8,
"text": "h"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "b"
},
{
"math_id": 11,
"text": "Q"
},
{
"math_id": 12,
"text": "r"
},
{
"math_id": 13,
"text": "SS=r-\\theta"
},
{
"math_id": 14,
"text": "F(Q,r)"
},
{
"math_id": 15,
"text": "S(Q,r)"
},
{
"math_id": 16,
"text": "B(Q,r)"
},
{
"math_id": 17,
"text": "I(Q,r)"
},
{
"math_id": 18,
"text": "F(Q,r) = \\frac {D}{Q}"
},
{
"math_id": 19,
"text": "S(Q,r)=\\frac{1}{Q} \\int_{r}^{r+Q} G(x)dx"
},
{
"math_id": 20,
"text": "S(Q,r)=\\frac{1}{Q} \\int_{r}^{r+Q} G(x) dx = 1 - \\frac{1}{Q} [B(r))-B(r+Q)]"
},
{
"math_id": 21,
"text": "hI(Q,r)"
},
{
"math_id": 22,
"text": "I(Q,r)=\\frac{Q+1}{2}+r-\\theta+B(Q,r)"
},
{
"math_id": 23,
"text": "B(Q,r) = \\frac{1}{Q} \\int_{r}^{r+Q} B(x+1)dx"
},
{
"math_id": 24,
"text": "Y(Q,r) = \\frac{D}{Q} A + b B(Q,r) +h I(Q,r)"
},
{
"math_id": 25,
"text": "Q^*=\\sqrt{\\frac{2AD}{h}}"
},
{
"math_id": 26,
"text": "G(r^* + 1) = \\frac{b} {b+h}"
},
{
"math_id": 27,
"text": "r^* = \\theta + z \\sigma"
},
{
"math_id": 28,
"text": "Y(Q,r) = \\frac{D} {Q} A + kD[1-S(Q,r)] +h I(Q,r)"
},
{
"math_id": 29,
"text": "G(r^*)=\\frac{kD}{kD+hQ}"
},
{
"math_id": 30,
"text": "X = \\sum_{t=1}^{L} D_{t}"
},
{
"math_id": 31,
"text": "\\operatorname{E}[X] = \\operatorname{E}[L] \\operatorname{E}[D_{t}] =\\ell d = \\theta"
},
{
"math_id": 32,
"text": "\\operatorname{Var}(x) = \\operatorname{E}[L] \\operatorname{Var}(D_{t}) + \\operatorname{E}[D_{t}]^{2}\\operatorname{Var}(L) = \\ell \\sigma^{2}_{D} + d^{2} \\sigma^{2}_{L}"
},
{
"math_id": 33,
"text": "\\sigma = \\sqrt{\\operatorname{Var}(X)} =\\sqrt{ \\ell \\sigma^{2}_{D} + d^{2} \\sigma^{2}_{L} }"
},
{
"math_id": 34,
"text": "\\sigma = \\sqrt{ \\ell \\sigma^{2}_{D} + d^{2} \\sigma^{2}_{L} }= \\sqrt{\\theta + d^{2} \\sigma^{2}_{L}}"
}
]
| https://en.wikipedia.org/wiki?curid=68991463 |
6899907 | Paley construction | In mathematics, the Paley construction is a method for constructing Hadamard matrices using finite fields. The construction was described in 1933 by the English mathematician Raymond Paley.
The Paley construction uses quadratic residues in a finite field GF("q") where "q" is a power of an odd prime number. There are two versions of the construction depending on whether "q" is congruent to 1 or 3 modulo 4.
Quadratic character and Jacobsthal matrix.
Let "q" be a power of an odd prime. In the finite field GF("q") the quadratic character χ("a") indicates whether the element "a" is zero, a non-zero square, or a non-square:
formula_0
For example, in GF(7) the non-zero squares are 1 = 12 = 62, 4 = 22 = 52, and 2 = 32 = 42. Hence χ(0) = 0, χ(1) = χ(2) = χ(4) = 1, and χ(3) = χ(5) = χ(6) = −1.
The Jacobsthal matrix "Q" for GF("q") is the "q" × "q" matrix with rows and columns indexed by elements of GF("q") such that the entry in row "a" and column "b" is χ("a" − "b"). For example, in GF(7), if the rows and columns of the Jacobsthal matrix are indexed by the field elements 0, 1, 2, 3, 4, 5, 6, then
formula_1
The Jacobsthal matrix has the properties "QQ"T = "qI" − "J" and "QJ" = "JQ" = 0 where "I" is the "q" × "q" identity matrix and "J" is the "q" × "q" all 1 matrix. If "q" is congruent to 1 mod 4 then −1 is a square in GF("q")
which implies that "Q" is a symmetric matrix. If "q" is congruent to 3 mod 4 then −1 is not a square, and "Q" is a skew-symmetric matrix. When "q" is a prime number and rows and columns are indexed by field elements in the usual 0, 1, 2, … order, "Q" is a circulant matrix. That is, each row is obtained from the row above by cyclic permutation.
Paley construction I.
If "q" is congruent to 3 mod 4 then
formula_2
is a Hadamard matrix of size "q" + 1. Here "j" is the all-1 column vector of length "q" and "I" is the ("q"+1)×("q"+1) identity matrix. The matrix "H" is a skew Hadamard matrix, which means it satisfies "H" + "H"T = 2"I".
Paley construction II.
If "q" is congruent to 1 mod 4 then the matrix obtained by replacing all 0 entries in
formula_3
with the matrix
formula_4
and all entries ±1 with the matrix
formula_5
is a Hadamard matrix of size 2("q" + 1). It is a symmetric Hadamard matrix.
Examples.
Applying Paley Construction I to the Jacobsthal matrix for GF(7), one produces the 8 × 8 Hadamard matrix,
formula_6
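For a prime "q" ≡ 3 (mod 4) the construction is short enough to verify directly. The following sketch (an illustration that handles prime "q" only, not general prime powers) builds the Jacobsthal matrix for GF(7) from the quadratic residues and checks both the Hadamard identity "HH"T = 8"I" and the skew property "H" + "H"T = 2"I".
```python
import numpy as np

def paley_I(q):
    # Assumes q is a prime with q % 4 == 3; prime powers would need field arithmetic.
    squares = {(b * b) % q for b in range(1, q)}
    chi = lambda a: 0 if a % q == 0 else (1 if a % q in squares else -1)
    Q = np.array([[chi(a - b) for b in range(q)] for a in range(q)])
    S = np.zeros((q + 1, q + 1), dtype=int)
    S[0, 1:] = 1                         # top row: 0 followed by j^T
    S[1:, 0] = -1                        # first column: -j
    S[1:, 1:] = Q
    return S + np.eye(q + 1, dtype=int)  # H = I + [[0, j^T], [-j, Q]]

H = paley_I(7)
assert (H @ H.T == 8 * np.eye(8, dtype=int)).all()   # Hadamard: H H^T = (q+1) I
assert (H + H.T == 2 * np.eye(8, dtype=int)).all()   # skew: H + H^T = 2 I
print(H)
```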
For an example of the Paley II construction when "q" is a prime power rather than a prime number, consider GF(9). This is an extension field of GF(3) obtained
by adjoining a root of an irreducible quadratic. Different irreducible quadratics produce equivalent fields. Choosing "x"2+"x"−1 and letting "a" be a root of this polynomial, the nine elements of GF(9) may be written 0, 1, −1, "a", "a"+1, "a"−1, −"a", −"a"+1, −"a"−1. The non-zero squares are 1 = (±1)2, −"a"+1 = (±"a")2, "a"−1 = (±("a"+1))2, and −1 = (±("a"−1))2. The Jacobsthal matrix is
formula_7
It is a symmetric matrix consisting of nine 3 × 3 circulant blocks. Paley Construction II produces the symmetric 20 × 20 Hadamard matrix,
The Hadamard conjecture.
The size of a Hadamard matrix must be 1, 2, or a multiple of 4. The Kronecker product of two Hadamard matrices of sizes "m" and "n" is an Hadamard matrix of size "mn". By forming Kronecker products of matrices from the Paley construction and the 2 × 2 matrix,
formula_8
Hadamard matrices of every permissible size up to 100 except for 92 are produced. In his 1933 paper, Paley says “It seems probable that, whenever "m" is divisible by 4, it is possible to construct an orthogonal matrix of order "m" composed of ±1, but the general theorem has every appearance of difficulty.” This appears to be the first published statement of the Hadamard conjecture. A matrix of size 92 was eventually constructed by Baumert, Golomb, and Hall, using a construction due to Williamson combined with a computer search. Currently, Hadamard matrices have been shown to exist for all formula_9 for "m" < 668. | [
{
"math_id": 0,
"text": "\\chi(a) = \\begin{cases} 0 & \\text{if }a = 0\\\\\n1 & \\text{if }a = b^2\\text{ for some non-zero }b \\in \\mathrm{GF}(q)\\\\\n-1 & \\text{if }a\\text{ is not the square of any element in }\\mathrm{GF}(q).\\end{cases}"
},
{
"math_id": 1,
"text": "Q = \\begin{bmatrix}\n0 & -1 & -1 & 1 & -1 & 1 & 1\\\\\n1 & 0 & -1 & -1 & 1 & -1 & 1\\\\\n1 & 1 & 0 & -1 & -1 & 1 & -1\\\\\n-1 & 1 & 1 & 0 & -1 & -1 & 1\\\\\n1 & -1 & 1 & 1 & 0 & -1 & -1\\\\\n-1 & 1 & -1 & 1 & 1 & 0 & -1\\\\\n-1 & -1 & 1 & -1 & 1 & 1 & 0\\end{bmatrix}."
},
{
"math_id": 2,
"text": "H = I + \\begin{bmatrix}\n0 & j^T \\\\\n-j & Q \\end{bmatrix}"
},
{
"math_id": 3,
"text": "\\begin{bmatrix}\n0 & j^T \\\\\nj & Q \\end{bmatrix}"
},
{
"math_id": 4,
"text": "\\begin{bmatrix}\n1 & -1 \\\\\n-1 & -1 \\end{bmatrix}"
},
{
"math_id": 5,
"text": "\\pm\\begin{bmatrix}\n1 & 1 \\\\\n1 & -1 \\end{bmatrix}"
},
{
"math_id": 6,
"text": "\\begin{bmatrix}\n1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n-1 & 1 & -1 & -1 & 1 & -1 & 1 & 1 \\\\\n-1 & 1 & 1 & -1 & -1 & 1 & -1 & 1 \\\\\n-1& 1 & 1 & 1 & -1 & -1 & 1 & -1 \\\\\n-1 & -1 & 1 & 1 & 1 & -1 & -1 & 1 \\\\\n-1 & 1 & -1 & 1 & 1 & 1 & -1 & -1 \\\\\n-1 & -1 & 1 & -1 & 1 & 1 & 1 & -1 \\\\\n-1 & -1 & -1 & 1 & -1 & 1 & 1 & 1\n\\end{bmatrix}."
},
{
"math_id": 7,
"text": "Q = \\begin{bmatrix}\n0 & 1 & 1 & -1 & -1 & 1 & -1 & 1 & -1\\\\\n1 & 0 & 1 & 1 & -1 & -1 & -1 & -1 & 1\\\\\n1 & 1 & 0 & -1 & 1 & -1 & 1 & -1 & -1\\\\\n-1 & 1 & -1 & 0 & 1 & 1 & -1 & -1 & 1\\\\\n-1 & -1 & 1 & 1 & 0 & 1 & 1 & -1 & -1\\\\\n1 & -1 & -1 & 1 & 1 & 0 & -1 & 1 & -1\\\\\n-1 & -1 & 1 & -1 & 1 & -1 & 0 & 1 & 1\\\\\n1 & -1 & -1 & -1 & -1 & 1 & 1 & 0 & 1\\\\\n-1 & 1 & -1 & 1 & -1 & -1 & 1 & 1 & 0\\end{bmatrix}."
},
{
"math_id": 8,
"text": "H_2 = \\begin{bmatrix}\n1 & 1 \\\\\n1 & -1 \\end{bmatrix},"
},
{
"math_id": 9,
"text": "m \\,\\equiv\\, 0 \\mod 4"
}
]
| https://en.wikipedia.org/wiki?curid=6899907 |
69004308 | Feuerbach hyperbola | Unique curve associated with every triangle
In geometry, the Feuerbach hyperbola is a rectangular hyperbola passing through important triangle centers such as the Orthocenter, Gergonne point, Nagel point and Schiffler point. The center of the hyperbola is the Feuerbach point, the point of tangency of the incircle and the nine-point circle.
Equation.
It has the trilinear equation formula_0 (here formula_1 are the angles at the respective vertices and formula_2 are the trilinear coordinates of a point).
Properties.
Like other rectangular hyperbolas, the orthocenter of any three points on the curve lies on the hyperbola. So, the orthocenter of the triangle formula_3 lies on the curve.
The line formula_4 is tangent to this hyperbola at formula_5.
Isogonal conjugate of OI.
The hyperbola is the isogonal conjugate of formula_4, the line joining the circumcenter and the incenter. This fact leads to a few interesting properties. Specifically all the points lying on the line formula_4 have their isogonal conjugates lying on the hyperbola. The Nagel point lies on the curve since its isogonal conjugate is the point of concurrency of the lines joining the vertices and the opposite Mixtilinear incircle touchpoints, also the in-similitude of the incircle and the circumcircle. Similarly, the Gergonne point lies on the curve since its isogonal conjugate is the ex-similitude of the incircle and the circumcircle.
The pedal circle of any point on the hyperbola passes through the Feuerbach point, the center of the hyperbola.
Kariya's theorem.
Given a triangle formula_3, let formula_6 be the touchpoints of the incircle formula_7 with the sides of the triangle opposite to vertices formula_1 respectively. Let formula_8 be points lying on the lines formula_9 such that formula_10. Then, the lines formula_11 are concurrent at a point lying on the Feuerbach hyperbola.
Kariya's theorem has a long history. It was proved independently by Auguste Boutin and V. Retali, but it became famous only after Kariya's paper. Around that time, many generalizations of this result were given. Kariya's theorem can be used for the construction of the Feuerbach hyperbola.
Both Lemoine's theorem and Kariya's theorem are a special case of Jacobi's theorem. | [
{
"math_id": 0,
"text": "\\frac{\\cos B - \\cos C}{\\alpha}+ \\frac{\\cos C - \\cos A}{\\beta}+ \\frac{\\cos A - \\cos B}{\\gamma}"
},
{
"math_id": 1,
"text": "A,B,C"
},
{
"math_id": 2,
"text": "(\\alpha,\\beta,\\gamma)"
},
{
"math_id": 3,
"text": "ABC"
},
{
"math_id": 4,
"text": "OI"
},
{
"math_id": 5,
"text": "I"
},
{
"math_id": 6,
"text": "A_1,B_1,C_1"
},
{
"math_id": 7,
"text": "\\odot I"
},
{
"math_id": 8,
"text": "X,Y,Z"
},
{
"math_id": 9,
"text": "IA_1, IB_1,IC_1"
},
{
"math_id": 10,
"text": "IX=IY= IZ"
},
{
"math_id": 11,
"text": "AX,BY,CZ"
}
]
| https://en.wikipedia.org/wiki?curid=69004308 |
69004416 | Longest-processing-time-first scheduling | Algorithm for job scheduling
Longest-processing-time-first (LPT) is a greedy algorithm for job scheduling. The input to the algorithm is a set of "jobs", each of which has a specific processing-time. There is also a number "m" specifying the number of "machines" that can process the jobs. The LPT algorithm works as follows: (1) order the jobs by descending processing time; (2) schedule each job in this order to the machine with the currently smallest total processing time.
Step 2 of the algorithm is essentially the list-scheduling (LS) algorithm. The difference is that LS loops over the jobs in an arbitrary order, while LPT pre-orders them by descending processing time.
LPT was first analyzed by Ronald Graham in the 1960s in the context of the identical-machines scheduling problem. Later, it was applied to many other variants of the problem.
LPT can also be described in a more abstract way, as an algorithm for multiway number partitioning. The input is a set "S" of numbers, and a positive integer "m"; the output is a partition of "S" into "m" subsets. LPT orders the input from largest to smallest, and puts each input in turn into the part with the smallest sum so far.
Examples.
If the input set is S = {4, 5, 6, 7, 8} and "m" = 2, then the resulting partition is {8, 5, 4}, {7, 6}. If "m" = 3, then the resulting 3-way partition is {8}, {7, 4}, {6, 5}.
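A direct transcription of the rule (sort the inputs in decreasing order, then always add the next input to the part with the smallest current sum) reproduces both partitions above. This is a minimal sketch, not a reference implementation.
```python
def lpt(sizes, m):
    # Greedy LPT rule: largest items first, each into the currently smallest part.
    parts = [[] for _ in range(m)]
    sums = [0] * m
    for x in sorted(sizes, reverse=True):
        i = min(range(m), key=lambda j: sums[j])
        parts[i].append(x)
        sums[i] += x
    return parts

print(lpt([4, 5, 6, 7, 8], 2))   # [[8, 5, 4], [7, 6]]
print(lpt([4, 5, 6, 7, 8], 3))   # [[8], [7, 4], [6, 5]]
```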
Properties.
LPT might not find the optimal partition. For example, in the above instance the optimal partition is {8,7}, {6,5,4}, where both sums are equal to 15. However, its suboptimality is bounded both in the worst case and in the average case; see Performance guarantees below.
The running time of LPT is dominated by the sorting, which takes O("n" log "n") time, where "n" is the number of inputs.
LPT is "monotone" in the sense that, if one of the input numbers increases, the objective function (the largest sum or the smallest sum of a subset in the output) weakly increases. This is in contrast to Multifit algorithm.
Performance guarantees: identical machines.
When used for identical-machines scheduling, LPT attains the following approximation ratios.
Worst-case maximum sum.
In the worst case, the largest sum in the greedy partition is at most formula_0 times the optimal (minimum) largest sum.
A more detailed analysis yields a factor of formula_1 times the optimal (minimum) largest sum. (for example, when "m" =2 this ratio is formula_2).
The factor formula_3 is tight. Suppose there are formula_4 inputs (where "m" is even): formula_5. Then the greedy algorithm returns:
with a maximum of formula_10, but the optimal partition is:
with a maximum of formula_15.
Input consideration.
An even more detailed analysis takes into account the number of inputs in the max-sum part.
Worst-case minimum sum.
In the worst case, the "smallest" sum in the returned partition is at least formula_19 times the optimal (maximum) smallest sum.
Proof.
The proof is by contradiction. We consider a "minimal counterexample", that is, a counterexample with a smallest "m" and fewest input numbers. Denote the greedy partition by P1...,P"m", and the optimal partition by Q1...,Q"m". Some properties of a minimal counterexample are:
The proof that a minimal counterexample does not exist uses a "weighting scheme". Each input x is assigned a weight w(x) according to its size and greedy bundle Pi:
This weighting scheme has the following properties:
Upper bound on the ratio.
A more sophisticated analysis shows that the ratio is at most formula_20 (for example, when "m"=2 the ratio is 5/6).
Tightness and example.
The above ratio is tight.
Suppose there are 3"m"-1 inputs (where "m" is even). The first 2"m" inputs are: 2"m"-1, 2"m"-1, 2"m"-2, 2"m"-2, ..., "m", "m". The last "m"-1 inputs are all "m". Then the greedy algorithm returns:
with a minimum of 3"m"-1. But the optimal partition is:
with a minimum of 4"m"-2.
Restricted LPT.
There is a variant of LPT, called Restricted-LPT or RLPT, in which the inputs are partitioned into subsets of size "m" called "ranks" (rank 1 contains the largest "m" inputs, rank 2 the next-largest "m" inputs, etc.). The inputs in each rank must be assigned to "m" different bins: rank 1 first, then rank 2, etc. The minimum sum in RLPT is at most the minimum sum at LPT. The approximation ratio of RLPT for maximizing the minimum sum is at most "m".
Average-case maximum sum.
In the average case, if the input numbers are distributed uniformly in [0,1], then the largest sum in an LPT schedule satisfies the following properties:
General objectives.
Let "Ci" (for "i" between 1 and "m") be the sum of subset "i" in a given partition. Instead of minimizing the objective function max("Ci"), one can minimize the objective function max("f"("Ci")), where "f" is any fixed function. Similarly, one can minimize the objective function sum("f"("Ci")). Alon, Azar, Woeginger and Yadid prove that, if "f" satisfies the following two conditions:
Then the LPT rule has a finite approximation ratio for minimizing sum("f"("Ci")).
Performance with divisible item sizes.
An important special case is that the item sizes form a "divisible sequence" (also called "factored"). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. If the item sizes are divisible, and in addition, the largest item size divides the bin size, then LPT always finds a scheduling that minimizes the maximum size (Thm. 4) and maximizes the minimum size (Thm. 5).
Adaptations to other settings.
Besides the simple case of identical-machines scheduling, LPT has been adapted to more general settings.
Uniform machines.
In uniform-machines scheduling, different machines may have different speeds. The LPT rule assigns each job to the machine on which its "completion time" will be earliest (that is, LPT may assign a job to a machine with a "larger" current load, if this machine is so fast that it would finish that job "earlier" than all other machines).
Cardinality constraints.
In the balanced partition problem, there are constraints on the "number" of jobs that can be assigned to each machine. A simple constraint is that each machine can process at most "c" jobs. The LPT rule assigns each job to the machine with the smallest load from among those with fewer than "c" jobs. This rule is called "modified LPT" or "MLPT".
Another constraint is that the number of jobs on all machines should be formula_33 rounded either up or down. In an adaptation of LPT called "restricted LPT" or "RLPT", inputs are assigned in pairs - one to each machine (for "m"=2 machines). The resulting partition is balanced by design.
Kernel constraints - non-simultaneous availability.
In the "kernel partitioning problem", there are some "m" pre-specified jobs called kernels, and each kernel must be scheduled to a unique machine. An equivalent problem is scheduling when machines are available in different times: each machine "i" becomes available at some time "ti ≥" 0 (the time "ti" can be thought of as the length of the kernel job).
A simple heuristic algorithm, called SLPT, assigns each kernel to a different subset, and then runs the LPT algorithm.
Online settings.
Often, the inputs come online, and their sizes becomes known only when they arrive. In this case, it is not possible to sort them in advance. List scheduling is a similar algorithm that takes a list in any order, not necessarily sorted. Its approximation ratio is formula_38.
A more sophisticated adaptation of LPT to an online setting attains an approximation ratio of 3/2.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{4}{3}"
},
{
"math_id": 1,
"text": "\\frac{4m-1}{3m} = \\frac{4}{3}-\\frac{1}{3 m}"
},
{
"math_id": 2,
"text": "7/6\\approx 1.167"
},
{
"math_id": 3,
"text": "\\frac{4m-1}{3m} "
},
{
"math_id": 4,
"text": "2 m+1"
},
{
"math_id": 5,
"text": "2 m-1,2 m-1, 2 m-2,2 m-2, \\ldots, m+1,m+1, m,m, m"
},
{
"math_id": 6,
"text": "2 m-1,m,m"
},
{
"math_id": 7,
"text": "2 m-1,m"
},
{
"math_id": 8,
"text": "2 m-2,m+1"
},
{
"math_id": 9,
"text": "3 m/2,3 m/2-1"
},
{
"math_id": 10,
"text": "4 m-1"
},
{
"math_id": 11,
"text": "m,m,m"
},
{
"math_id": 12,
"text": "2 m-1,m+1"
},
{
"math_id": 13,
"text": "2 m-2,m+2"
},
{
"math_id": 14,
"text": "3 m/2,3 m/2"
},
{
"math_id": 15,
"text": "3 m"
},
{
"math_id": 16,
"text": "OPT/j"
},
{
"math_id": 17,
"text": "\\frac{L+1}{L}-\\frac{1}{L m} = 1 + \\frac{1}{L} - \\frac{1}{L m} "
},
{
"math_id": 18,
"text": "\\frac{4}{3}-\\frac{1}{3 m}"
},
{
"math_id": 19,
"text": "\\frac{3}{4}"
},
{
"math_id": 20,
"text": "\\frac{3 m-1}{4 m-2}"
},
{
"math_id": 21,
"text": "\\frac{n}{4}+\\frac{1}{4n+4}"
},
{
"math_id": 22,
"text": "\\frac{n}{4}+\\frac{e}{2n+2}"
},
{
"math_id": 23,
"text": "1 + O(\\log{\\log{n}}/n)"
},
{
"math_id": 24,
"text": "1 + O(1/n)"
},
{
"math_id": 25,
"text": "n"
},
{
"math_id": 26,
"text": "O(\\log{n}/n)"
},
{
"math_id": 27,
"text": "O(m^2/n)"
},
{
"math_id": 28,
"text": "2 m/(m+1)"
},
{
"math_id": 29,
"text": "(1+\\sqrt{17})/4\\approx 1.281"
},
{
"math_id": 30,
"text": "\\sqrt{1.5}\\approx 1.2247"
},
{
"math_id": 31,
"text": "(4 m-1)/(3 m)"
},
{
"math_id": 32,
"text": "(3 m-1)/(4 m-2)"
},
{
"math_id": 33,
"text": "n/m"
},
{
"math_id": 34,
"text": "\\frac{n}{4}+\\frac{1}{2n+2}"
},
{
"math_id": 35,
"text": "\\Theta(1/n)"
},
{
"math_id": 36,
"text": "\\frac{3 m-1}{2 m}"
},
{
"math_id": 37,
"text": "\\frac{2 m-1}{3 m-2}"
},
{
"math_id": 38,
"text": "\\frac{2 m-1}{m} = 2-\\frac{1}{m}"
}
]
| https://en.wikipedia.org/wiki?curid=69004416 |
69004808 | Tsachik Gelander | Israeli mathematician
Tsachik Gelander (צחיק גלנדר) is an Israeli mathematician working in the fields of Lie groups, topological groups, symmetric spaces, lattices and discrete subgroups (of Lie groups as well as general locally compact groups). He is a professor at Northwestern University.
Gelander earned his PhD from the Hebrew University of Jerusalem in 2003, under the supervision of Shahar Mozes. His doctoral dissertation, "Counting Manifolds and Tits Alternative", won the Haim Nessyahu Prize in Mathematics, awarded by the Israel Mathematical Union for the best annual doctoral dissertations in mathematics. After holding a Gibbs Assistant Professorship at Yale University, and faculty positions at the Hebrew University of Jerusalem and the Weizmann Institute of Science, Gelander joined Northwestern where he is currently a professor of mathematics. He contributed to the theory of lattices, Fuchsian groups and local rigidity, and the work on Chern's conjecture and the Derivation Problem. Among his well-known results is the solution to the Goldman conjecture, i.e. that the action of formula_0 on the deformation variety of a compact Lie group is ergodic when formula_1 is at least formula_2.
He gave the distinguished Nachdiplom Lectures at ETH Zurich in 2011, and was an invited speaker at the 2018 International Congress of Mathematicians, giving a talk under the title of "Asymptotic Invariants of Locally Symmetric Spaces." He was one of the recipients of the first call of the European Research Council (ERC) Starting Grant (2007), and in 2021 he won the ERC Advanced Grant.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Out(F_n)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "3"
}
]
| https://en.wikipedia.org/wiki?curid=69004808 |
69015860 | Brendel–Bormann oscillator model | The Brendel–Bormann oscillator model is a mathematical formula for the frequency dependence of the complex-valued relative permittivity, sometimes referred to as the dielectric function. The model has been used to fit to the complex refractive index of materials with absorption lineshapes exhibiting non-Lorentzian broadening, such as metals and amorphous insulators, across broad spectral ranges, typically near-ultraviolet, visible, and infrared frequencies. The dispersion relation bears the names of R. Brendel and D. Bormann, who derived the model in 1992, despite first being applied to optical constants in the literature by Andrei M. Efimov and E. G. Makarova in 1983. Around that time, several other researchers also independently discovered the model. The Brendel-Bormann oscillator model is aphysical because it does not satisfy the Kramers–Kronig relations. The model is non-causal, due to a singularity at zero frequency, and non-Hermitian. These drawbacks inspired J. Orosco and C. F. M. Coimbra to develop a similar, causal oscillator model.
Mathematical formulation.
The general form of an oscillator model is given by
formula_0
where
The Brendel-Bormann oscillator is related to the Lorentzian oscillator formula_6 and Gaussian oscillator formula_7, given by
formula_8
formula_9
where
The Brendel-Bormann oscillator formula_14 is obtained from the convolution of the two aforementioned oscillators in the manner of
formula_15,
which yields
formula_16
where
The square root in the definition of formula_19 must be taken such that its imaginary component is positive. This is achieved by:
formula_20
formula_21
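The model can be evaluated numerically via the Faddeeva function, available in SciPy as scipy.special.wofz. The sketch below implements the single-oscillator susceptibility above with the prescribed branch of the square root; the oscillator parameters and the choice formula_2 = 1 are illustrative assumptions.
```python
import numpy as np
from scipy.special import wofz   # Faddeeva function w(z)

def chi_bb(omega, s, omega0, gamma, sigma):
    # Single Brendel-Bormann oscillator; all arguments in the same frequency units.
    a = np.sqrt(omega**2 + 1j * gamma * omega)
    a = np.where(a.imag >= 0, a, -a)   # keep the root with positive imaginary part
    pref = 1j * np.sqrt(np.pi) * s / (2 * np.sqrt(2) * sigma * a)
    return pref * (wofz((a - omega0) / (np.sqrt(2) * sigma))
                   + wofz((a + omega0) / (np.sqrt(2) * sigma)))

# Illustrative (assumed) parameters, arbitrary frequency units; eps_inf assumed = 1.
omega = np.linspace(0.5, 3.0, 6)
eps = 1.0 + chi_bb(omega, s=1.0, omega0=1.5, gamma=0.1, sigma=0.2)
print(eps)                         # complex relative permittivity samples
```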
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varepsilon(\\omega) = \\varepsilon_{\\infty} + \\sum_{j} \\chi_{j}"
},
{
"math_id": 1,
"text": "\\varepsilon"
},
{
"math_id": 2,
"text": "\\varepsilon_{\\infty}"
},
{
"math_id": 3,
"text": "\\omega"
},
{
"math_id": 4,
"text": "\\chi_{j}"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "\\left(\\chi^{L}\\right)"
},
{
"math_id": 7,
"text": "\\left(\\chi^{G}\\right)"
},
{
"math_id": 8,
"text": "\\chi_{j}^{L}(\\omega; \\omega_{0,j}) = \\frac{s_{j}}{\\omega_{0,j}^{2} - \\omega^{2} - i \\Gamma_{j} \\omega} "
},
{
"math_id": 9,
"text": "\\chi_{j}^{G}(\\omega) = \\frac{1}{\\sqrt{2 \\pi} \\sigma_{j}} \\exp{\\left[ -\\left( \\frac{\\omega}{\\sqrt{2} \\sigma_{j}} \\right)^{2} \\right]}"
},
{
"math_id": 10,
"text": "s_{j}"
},
{
"math_id": 11,
"text": "\\omega_{0,j}"
},
{
"math_id": 12,
"text": "\\Gamma_{j}"
},
{
"math_id": 13,
"text": "\\sigma_{j}"
},
{
"math_id": 14,
"text": "\\left(\\chi^{BB}\\right)"
},
{
"math_id": 15,
"text": "\\chi_{j}^{BB}(\\omega) = \\int_{-\\infty}^{\\infty} \\chi_{j}^{G}(x-\\omega_{0,j}) \\chi_{j}^{L}(\\omega; x) dx"
},
{
"math_id": 16,
"text": "\\chi_{j}^{BB}(\\omega) = \\frac{i \\sqrt{\\pi} s_{j}}{2 \\sqrt{2} \\sigma_{j} a_{j}(\\omega)} \\left[ w\\left( \\frac{a_{j}(\\omega) - \\omega_{0,j}}{\\sqrt{2}\\sigma_{j}} \\right) + w\\left( \\frac{a_{j}(\\omega) + \\omega_{0,j}}{\\sqrt{2}\\sigma_{j}} \\right) \\right]"
},
{
"math_id": 17,
"text": "w(z)"
},
{
"math_id": 18,
"text": "a_{j} = \\sqrt{\\omega^{2}+i \\Gamma_{j} \\omega}"
},
{
"math_id": 19,
"text": "a_{j}"
},
{
"math_id": 20,
"text": "\\Re\\left( a_{j} \\right) = \\omega \\sqrt{\\frac{\\sqrt{1+\\left( \\Gamma_{j}/\\omega \\right)^{2}}+1}{2}}"
},
{
"math_id": 21,
"text": "\\Im\\left( a_{j} \\right) = \\omega \\sqrt{\\frac{\\sqrt{1+\\left( \\Gamma_{j}/\\omega \\right)^{2}}-1}{2}}"
}
]
| https://en.wikipedia.org/wiki?curid=69015860 |
69019121 | Meyers–Serrin theorem | In functional analysis the Meyers–Serrin theorem, named after James Serrin and Norman George Meyers, states that smooth functions are dense in the Sobolev space formula_0
for arbitrary domains formula_1.
Historical relevance.
Originally there were two spaces: formula_0 defined as the set of all functions which have weak derivatives of order up to k all of which are in formula_2 and formula_3 defined as the closure of the smooth functions with respect to the corresponding Sobolev norm (obtained by summing over the formula_2 norms of the functions and all derivatives). The theorem establishes the equivalence formula_4 of both definitions. It is quite surprising that, in contradistinction to many other density theorems, this result does not require any smoothness of the domain formula_5. According to the standard reference on Sobolev spaces by Adams and Fournier (p 60): "This result, published in 1964 by Meyers and Serrin ended much confusion about the relationship of these spaces that existed in the literature before that time. It is surprising that this elementary result remained undiscovered for so long." | [
{
"math_id": 0,
"text": "W^{k,p}(\\Omega)"
},
{
"math_id": 1,
"text": "\\Omega \\subseteq \\R^n"
},
{
"math_id": 2,
"text": "L^p"
},
{
"math_id": 3,
"text": "H^{k,p}(\\Omega)"
},
{
"math_id": 4,
"text": "W^{k,p}(\\Omega)=H^{k,p}(\\Omega)"
},
{
"math_id": 5,
"text": "\\Omega"
}
]
| https://en.wikipedia.org/wiki?curid=69019121 |
690246 | Matrix of ones | Matrix where every entry is equal to one
In mathematics, a matrix of ones or all-ones matrix has every entry equal to one. Examples of standard notation are given below:
formula_0
Some sources call the all-ones matrix the unit matrix, but that term may also refer to the identity matrix, a different type of matrix.
A vector of ones or all-ones vector is a matrix of ones having row or column form; it should not be confused with "unit vectors".
Properties.
For an "n" × "n" matrix of ones "J", the following properties hold:
When "J" is considered as a matrix over the real numbers, the following additional properties hold:
Applications.
The all-ones matrix arises in the mathematical field of combinatorics, particularly involving the application of algebraic methods to graph theory. For example, if "A" is the adjacency matrix of an "n"-vertex undirected graph "G", and "J" is the all-ones matrix of the same dimension, then "G" is a regular graph if and only if "AJ" = "JA". As a second example, the matrix appears in some linear-algebraic proofs of Cayley's formula, which gives the number of spanning trees of a complete graph, using the matrix tree theorem.
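Both the power identity "J"2 = "nJ" and the regularity criterion "AJ" = "JA" are easy to check numerically; the sketch below does so for a 4-vertex cycle (2-regular) and a 4-vertex path (not regular), and is purely illustrative.
```python
import numpy as np

n = 4
J = np.ones((n, n), dtype=int)
assert (J @ J == n * J).all()    # J^2 = nJ, hence J^k = n^(k-1) J

A_cycle = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])  # 4-cycle: 2-regular
A_path  = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])  # path: degrees 1,2,2,1
print((A_cycle @ J == J @ A_cycle).all())   # True  -> regular graph
print((A_path  @ J == J @ A_path ).all())   # False -> not regular
```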
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J_2 = \\begin{pmatrix}\n1 & 1 \\\\\n1 & 1 \n\\end{pmatrix};\\quad\nJ_3 = \\begin{pmatrix}\n1 & 1 & 1 \\\\\n1 & 1 & 1 \\\\\n1 & 1 & 1\n\\end{pmatrix};\\quad\nJ_{2,5} = \\begin{pmatrix}\n1 & 1 & 1 & 1 & 1 \\\\\n1 & 1 & 1 & 1 & 1 \n\\end{pmatrix};\\quad\nJ_{1,2} = \\begin{pmatrix}\n1 & 1 \n\\end{pmatrix}.\\quad"
},
{
"math_id": 1,
"text": "(x - n)x^{n-1}"
},
{
"math_id": 2,
"text": "x^2-nx"
},
{
"math_id": 3,
"text": " J^k = n^{k-1} J"
},
{
"math_id": 4,
"text": "k = 1,2,\\ldots ."
},
{
"math_id": 5,
"text": "\\tfrac1n J"
},
{
"math_id": 6,
"text": "\\exp(\\mu J)=I+\\frac{e^{\\mu n}-1}{n}J"
}
]
| https://en.wikipedia.org/wiki?curid=690246 |
69025670 | Balanced number partitioning | Balanced number partitioning is a variant of multiway number partitioning in which there are constraints on the number of items allocated to each set. The input to the problem is a set of "n" items of different sizes, and two integers "m", "k". The output is a partition of the items into "m" subsets, such that the number of items in each subset is at most "k". Subject to this, it is required that the sums of sizes in the "m" subsets are as similar as possible.
An example application is identical-machines scheduling where each machine has a job-queue that can hold at most "k" jobs. The problem has applications also in manufacturing of VLSI chips, and in assigning tools to machines in flexible manufacturing systems.
In the standard three-field notation for optimal job scheduling problems, the problem of minimizing the largest sum is sometimes denoted by "P | # ≤ k | "C"max". The middle field "# ≤ k" denotes that the number of jobs in each machine should be at most "k". This is in contrast to the unconstrained version, which is denoted by "formula_0".
Two-way balanced partitioning.
A common special case called two-way balanced partitioning is when there should be two subsets ("m" = 2). The two subsets should contain floor("n"/2) and ceiling("n"/2) items. It is a variant of the partition problem. It is NP-hard to decide whether there exists a partition in which the sums in the two subsets are equal; see problem [SP12]. There are many algorithms that aim to find a balanced partition in which the sum is as nearly-equal as possible.
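For small inputs, the two-way balanced problem can be solved exactly by brute force over all subsets of size floor("n"/2). The sketch below is an illustration of the problem statement rather than one of the algorithms referred to above; the five-item instance is an assumed example.
```python
from itertools import combinations

def best_balanced_2way(items):
    # Exhaustive search over subsets of size floor(n/2); practical only for small n.
    n, total = len(items), sum(items)
    best = None
    for subset in combinations(range(n), n // 2):
        s = sum(items[i] for i in subset)
        candidate = (abs(total - 2 * s), subset)
        if best is None or candidate < best:
            best = candidate
    return best

print(best_balanced_2way([4, 5, 6, 7, 8]))   # (0, (3, 4)) -> {7, 8} versus {4, 5, 6}
```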
Balanced triplet partitioning.
Another special case called 3-partitioning is when the number of items in each subset should be at most 3 ("k" = 3). Deciding whether there exists such a partition with equal sums is exactly the 3-partition problem, which is known to be strongly NP-hard. There are approximation algorithms that aim to find a partition in which the sum is as nearly-equal as possible.
Balanced partitioning with larger cardinalities.
A more general case, called "k"-partitioning, is when the number of items in each subset should be at most "k", where "k" can be any positive integer.
Relations between balanced and unconstrained problems.
There are some general relations between approximations to the balanced partition problem and the standard (unconstrained) partition problem.
Different cardinality constraints.
The cardinality constraints can be generalized by allowing a different constraint "ki" on each subset "i"; this variant is known as the "ki"-partitioning problem and was introduced in the "open problems" section of an earlier paper. He, Tan, Zhu and Yao present an algorithm called HARMONIC2 for maximizing the smallest sum with different cardinality constraints. They prove that its worst-case ratio is at least formula_24.
Categorized cardinality constraints.
Another generalization of cardinality constraints is as follows. The input items are partitioned into "k" categories. For each category "h", there is a capacity constraint "kh". Each of the "m" subsets may contain at most "kh" items from category "h". In other words: all "m" subsets should be independent set of a particular partition matroid. Two special cases of this problem have been studied.
Kernel partitioning.
In the "kernel balanced-partitioning problem", some "m" pre-specified items are "kernels", and each of the "m" subsets must contain a single kernel (and an unlimited number of non-kernels). Here, there are two categories: the kernel category with capacity 1, and the non-kernel category with unlimited capacity.
One-per-category partitioning.
In another variant of this problem, there are some "k" categories of size "m", and each subset should contain exactly one item from each category. That is, "k""h" = 1 for each category "h".
See also.
Matroid-constrained number partitioning is a generalization in which a fixed matroid is given as a parameter, and each of the "m" subsets should be an independent set or a base of this matroid.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P\\parallel C_\\max"
},
{
"math_id": 1,
"text": "\\frac{n}{4}+\\frac{1}{2n+2}"
},
{
"math_id": 2,
"text": "\\Theta(1/n)"
},
{
"math_id": 3,
"text": "O(\\log n/n^2)"
},
{
"math_id": 4,
"text": "n^{-\\Theta(\\log n)}"
},
{
"math_id": 5,
"text": "\\frac{4 m-1}{3 m}"
},
{
"math_id": 6,
"text": "\\frac{3 m-1}{4 m-2}"
},
{
"math_id": 7,
"text": "7/6"
},
{
"math_id": 8,
"text": "2-\\frac{1}{m}"
},
{
"math_id": 9,
"text": "2-\\frac{2}{m+1}"
},
{
"math_id": 10,
"text": "4/3"
},
{
"math_id": 11,
"text": "\\frac{4 m}{3 m+1}"
},
{
"math_id": 12,
"text": "O(n\\log n)"
},
{
"math_id": 13,
"text": "2-\\sum_{j=0}^{k-1}\\frac{j!}{k!}"
},
{
"math_id": 14,
"text": "2-\\frac{1}{k-1}"
},
{
"math_id": 15,
"text": "\\max((\\sum x_i)/m, x_1)"
},
{
"math_id": 16,
"text": "\\max((\\sum x_i)/m, x_1, x_k+x_{m+1})"
},
{
"math_id": 17,
"text": "\\max\\left(\\frac{2}{k}, \\frac{1}{m}\\right)"
},
{
"math_id": 18,
"text": "\\max\\left(\\frac{1}{k}, \\frac{1}{\\lceil \\sum_{i=1}^m \\frac{1}{i}\\rceil+1}\\right)"
},
{
"math_id": 19,
"text": "O(1/\\ln{m})"
},
{
"math_id": 20,
"text": "\\sum_{i=1}^m \\left\\lfloor\\left\\lfloor \\frac{k+i-1}{i}\\right\\rfloor x \\right\\rfloor = k"
},
{
"math_id": 21,
"text": "2-\\frac{2}{m}"
},
{
"math_id": 22,
"text": "\\max\\left(r, \\frac{k+2}{k+1}-\\frac{1}{m(k+1)}\\right)"
},
{
"math_id": 23,
"text": "\\max\\left(\\frac{7}{6}, \\frac{5}{4}-\\frac{1}{4 m}\\right)"
},
{
"math_id": 24,
"text": "\\max\\left(\\frac{1}{k_m}, \\frac{k_1}{k_m}\\frac{1}{\\left\\lceil \\sum_{i=1}^m \\frac{1}{i}\\right\\rceil+1}\\right)"
},
{
"math_id": 25,
"text": "\\frac{3 m-1}{2 m}"
},
{
"math_id": 26,
"text": "\\frac{2 m-1}{3 m-2}"
},
{
"math_id": 27,
"text": "\\frac{1}{m}"
},
{
"math_id": 28,
"text": "\\frac{m}{2 m-1}"
},
{
"math_id": 29,
"text": "\\frac{m-1}{2 m-3}"
}
]
| https://en.wikipedia.org/wiki?curid=69025670 |
69034 | Bluff (poker) | Tactic in poker and other card games
In the card game of poker, a bluff is a bet or raise made with a hand which is not thought to be the best hand. "To bluff" is to make such a bet. The objective of a bluff is to induce a fold by at least one opponent who holds a better hand. The size and frequency of a bluff determines its profitability to the "bluffer". By extension, the phrase "calling somebody's bluff" is often used outside the context of poker to describe situations where one person demands that another proves a claim, or proves that they are not being deceptive.
Pure bluff.
A pure bluff, or stone-cold bluff, is a bet or raise with an inferior hand that has little or no chance of improving. A player making a pure bluff believes they can win the pot only if all opponents fold. The pot odds for a bluff are the ratio of the size of the bluff to the pot. A pure bluff has a positive expectation (will be profitable in the long run) when the probability of being called by an opponent is lower than the pot odds for the bluff.
For example, suppose that after all the cards are out, a player holding a busted drawing hand decides that the only way to win the pot is to make a pure bluff. If the player bets the size of the pot on a pure bluff, the bluff will have a positive expectation if the probability of being called is less than 50%. Note, however, that the opponent may also consider the pot odds when deciding whether to call. In this example, the opponent will be facing 2-to-1 pot odds for the call. The opponent will have a positive expectation for calling the bluff if the opponent believes the probability the player is bluffing is at least 33%.
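The break-even points in this example can be checked with a short calculation. The Python sketch below is illustrative only; it assumes the bluff always loses when called and ignores the possibility of a raise.

```python
def bluff_ev(pot, bet, p_called):
    # Bluffer's expectation: win the pot when the opponent folds,
    # lose the bet when called (assuming the bluff always loses at showdown).
    return (1 - p_called) * pot - p_called * bet

def caller_breakeven_bluff_freq(pot, bet):
    # Minimum probability the bettor is bluffing for the call to break even:
    # the caller risks `bet` to win `pot + bet`.
    return bet / (pot + 2 * bet)

print(bluff_ev(pot=100, bet=100, p_called=0.50))        # 0.0 -> break-even at 50%
print(caller_breakeven_bluff_freq(pot=100, bet=100))    # 0.333... -> the 33% figure above
```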
Semi-bluff.
In games with multiple betting rounds, to bluff on one round with an inferior or drawing hand that might improve in a later round is called a semi-bluff. A player making a semi-bluff can win the pot two different ways: by all opponents folding immediately or by catching a card to improve the player's hand. In some cases a player may be on a draw but with odds strong enough that they are favored to win the hand. In this case their bet is not classified as a semi-bluff even though their bet may force opponents to fold hands with better current strength.
For example, a player in a stud poker game with four spade-suited cards showing (but none among their downcards) on the penultimate round might raise, hoping that their opponents believe the player already has a flush. If their bluff fails and they are called, the player still might be dealt a spade on the final card and win the showdown (or they might be dealt another non-spade and try to bluff again, in which case it is a "pure bluff" on the final round rather than a semi-bluff).
Bluffing circumstances.
Bluffing may be more effective in some circumstances than others. Bluffs have a higher expectation when the probability of being called decreases. Several game circumstances may decrease the probability of being called (and increase the profitability of the bluff):
The opponent's current state of mind should be taken into consideration when bluffing. Under certain circumstances external pressures or events can significantly impact an opponent's decision making skills.
Optimal bluffing frequency.
If a player bluffs too infrequently, observant opponents will recognize that the player is betting for value and will call with very strong hands or with drawing hands only when they are receiving favorable pot odds. If a player bluffs too frequently, observant opponents "snap off" their bluffs by calling or re-raising. Occasional bluffing disguises not just the hands a player is bluffing with, but also their legitimate hands that opponents may think they may be bluffing with. David Sklansky, in his book "The Theory of Poker", states "Mathematically, the optimal bluffing strategy is to bluff in such a way that the chances against your bluffing are identical to the pot odds your opponent is getting."
Optimal bluffing also requires that the bluffs must be performed in such a manner that opponents cannot tell when a player is bluffing or not. To prevent bluffs from occurring in a predictable pattern, game theory suggests the use of a randomizing agent to determine whether to bluff. For example, a player might use the colors of their hidden cards, the second hand on their watch, or some other unpredictable mechanism to determine whether to bluff.
Example (Texas Hold'em).
Here is an example for the game of Texas Hold'em, from "The Theory of Poker":
when I bet my $100, creating a $300 pot, my opponent was getting 3-to-1 odds
from the pot. Therefore my optimum strategy was ... [to make] the odds against
my bluffing 3-to-1.
Since the dealer will always bet with their nut hands in this situation, they should bluff with their weakest hands/bluffing range 1/3 of the time in order to make the odds 3-to-1 against a bluff.
Ex:
On the last betting round (river), Worm has been betting a "semi-bluff" drawing hand with: A♠ K♠ on the board:
10♠ 9♣ 2♠ 4♣
against Mike's A♣ 10♦ hand.
The river comes out:
2♣
The pot is currently 30 dollars, and Worm is contemplating a 30-dollar bluff on the river. If Worm does bluff in this situation, they are giving Mike 2-to-1 pot odds to call with their two pair (10's and 2's).
In these hypothetical circumstances, Worm will have the nuts 50% of the time, and be on a busted draw 50% of the time. Worm will bet the nuts 100% of the time, and bet with a bluffing hand (using mixed optimal strategies):
formula_0
Where "s" is equal to the percentage of the pot that Worm is bluff betting with and "x" is equal to the percentage of busted draws Worm should be bluffing with to bluff optimally.
Pot = 30 dollars.
Bluff bet = 30 dollars.
"s" = 30(pot) / 30(bluff bet) = 1.
Worm should be bluffing with their busted draws:
formula_1 Where "s" = 1
"Assuming four trials", Worm has the nuts two times, and has a busted draw two times. (EV = expected value)
Under the circumstances of this example: Worm will bet their nut hand two times, for every one time they bluff against Mike's hand (assuming Mike's hand would lose to the nuts and beat a bluff). This means that (if Mike called all three bets) Mike would win one time, and lose two times, and would break even against 2-to-1 pot odds. This also means that Worm's odds against bluffing is also 2-to-1 (since they will value bet twice, and bluff once).
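The arithmetic of this example can be reproduced in a few lines. The sketch below is illustrative only; it computes Worm's optimal bluffing fraction and checks that Mike's expected result from always calling is zero, i.e. that he is indifferent between calling and folding.

```python
pot, bluff_bet = 30.0, 30.0
s = bluff_bet / pot            # bet size as a fraction of the pot ("s" above)
x = s / (1 + s)                # fraction of busted draws to bluff with: 0.5 here

# Over four hypothetical hands Worm holds the nuts twice and a busted draw twice,
# so Worm value-bets 2 times and bluffs x * 2 = 1 time.
value_bets = 2
bluffs = x * 2

# If Mike calls every 30-dollar bet, he wins pot + bet against a bluff
# and loses his 30-dollar call against the nuts.
mike_ev = bluffs * (pot + bluff_bet) - value_bets * bluff_bet
print(x, mike_ev)              # 0.5 0.0 -> Mike is indifferent between calling and folding
```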
Say in this example, Worm decides to use the second hand of their watch to determine when to bluff (50% of the time). If the second hand of the watch is between 1 and 30 seconds, Worm will check their hand down (not bluff). If the second hand of the watch is between 31 and 60 seconds, Worm will bluff their hand. Worm looks down at their watch, and the second hand is at 45 seconds, so Worm decides to bluff. Mike folds his two pair saying, "the way you've been betting your hand, I don't think my two pair on the board will hold up against your hand." Worm takes the pot by using optimal bluffing frequencies.
This example is meant to illustrate how optimal bluffing frequencies work. Because it was an example, we assumed that Worm had the nuts 50% of the time, and a busted draw 50% of the time. In real game situations, this is not usually the case.
The purpose of optimal bluffing frequencies is to make the opponent (mathematically) indifferent between calling and folding. Optimal bluffing frequencies are based upon game theory and the Nash equilibrium, and "assist" the player using these strategies to become unexploitable. By bluffing in optimal frequencies, you will typically end up breaking even on your bluffs (in other words, optimal bluffing frequencies "are not" meant to generate positive expected value from the bluffs alone). Rather, optimal bluffing frequencies allow you to gain "more" value from your value bets, because your opponent is indifferent between calling or folding when you bet (regardless of whether it's a value bet or a bluff bet).
Bluffing in other games.
Although bluffing is most often considered a poker term, similar tactics are useful in other games as well. In these situations, a player makes a play that should not be profitable unless an opponent misjudges it as being made from a position capable of justifying it. Since a successful bluff requires deceiving one's opponent, it occurs only in games in which the players conceal information from each other. In games like chess and backgammon, both players can see the same board and so should simply make the best legal move available. Examples include:
Artificial intelligence.
Evan Hurwitz and Tshilidzi Marwala developed a software agent that bluffed while playing a poker-like game. They used intelligent agents to design agent outlooks. The agent was able to learn to predict its opponents' reactions based on its own cards and the actions of others. By using reinforcement neural networks, the agents were able to learn to bluff without prompting.
Economic theory.
In economics, bluffing has been explained as rational equilibrium behavior in games with information asymmetries. For instance, consider the hold-up problem, a central ingredient of the theory of incomplete contracts. There are two players. Today player A can make an investment; tomorrow player B offers how to divide the returns of the investment. If player A rejects the offer, they can realize only a fraction x<1 of these returns on their own. Suppose player A has private information about x. Goldlücke and Schmitz (2014) have shown that player A might make a large investment even if player A is weak (i.e., when they know that x is small). The reason is that a large investment may lead player B to believe that player A is strong (i.e., x is large), so that player B will make a generous offer. Hence, bluffing can be a profitable strategy for player A.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x = s/(1+s)"
},
{
"math_id": 1,
"text": "x = 1/(1+s) = 50\\%"
}
]
| https://en.wikipedia.org/wiki?curid=69034 |
6903447 | Folded unipole antenna | Antenna used for radio broadcasts
The folded unipole antenna is a type of monopole mast radiator antenna used as a transmitting antenna mainly in the medium wave band for AM radio broadcasting stations. It consists of a vertical metal rod or mast mounted over and connected at its base to a grounding system consisting of buried wires. The mast is surrounded by a "skirt" of vertical wires electrically attached at or near the top of the mast. The skirt wires are connected by a metal ring near the mast base, and the feedline feeding power from the transmitter is connected between the ring and the ground.
It has seen much use for refurbishing medium wave AM broadcasting station towers in the United States and other countries. When an AM radio station shares a tower with other antennas such as FM broadcasting antennas, the folded unipole is often a good choice. Since the base of the tower connects to the ground system, unlike in an ordinary mast radiator tower in which the base is at high voltage, the transmission lines to any antennas mounted on the tower, as well as aircraft lighting power lines, can be run up the side of the tower without requiring isolators.
Invention.
The folded unipole antenna was first devised for broadcast use by John H. Mullaney, an American radio broadcast pioneer, and consulting engineer. It was designed to solve some difficult problems with existing medium wave (MW), frequency modulation (FM), and amplitude modulation (AM) broadcast antenna installations.
Typical installation.
Since folded unipoles are most often used for refurbishing old broadcast antennas, the first subsection below describes a typical monopole antenna used as a starting point. The subsection that follows next describes how surrounding skirt wires are added to convert an ordinary broadcast tower into a folded unipole.
The picture at the right shows a small folded unipole antenna constructed from an existing triangular monopole tower; it has only three vertical wires comprising its "skirt".
Conventional monopole antennas.
A typical monopole transmitting antenna for an AM radio station is a series-fed mast radiator; a vertical steel lattice mast which is energized and radiates radio waves. One side of the feedline which feeds power from the transmitter to the antenna is connected to the mast, the other side to a ground system consisting of buried wires radiating from a terminal next to the base of the mast. US FCC regulations require the ground system to have 120 buried copper or phosphor bronze radial wires at least one-quarter wavelength long; there is usually a ground-screen in the immediate vicinity of the tower. To minimize corrosion, all the ground system components are bonded together, usually by using brazing or coin silver solder.
The mast has diagonal guy cables attached to it, anchored to concrete anchors in the ground, to support it. The guy lines have strain insulators in them to isolate them electrically from the mast, to prevent the high voltage from reaching the ground. To prevent the conductive guy lines from disturbing the radiation pattern of the antenna, additional strain insulators are sometimes inserted in the lines to divide them into a series of short, electrically separate segments, to ensure all segments are too short to resonate at the operating frequency.
In the U.S., the Federal Communications Commission (FCC) requires that the transmitter power for a single series-fed tower be calculated at the feed point as the current squared multiplied by the resistive part of the feed-point impedance:
formula_0
Electrically short monopole antennas have low resistance and high capacitive (negative) reactance. Depending on the desired recipients and the surrounding terrain, and particularly on the locations of spacious expanses of open water, a taller antenna tends to send signals out in more advantageous directions, up to the point that the antenna's electrical height exceeds about 5/8 of a wavelength.
Reactance is zero only for towers slightly shorter than 1/4 wavelength, but the reactance will in any case rise or fall depending on humidity, dust, salty spume, or ice collecting on the tower or its feedline.
Regardless of its height, the antenna feed system has an impedance matching system housed in a small shed at the tower's base (called a "tuning hut" or "coupling hut" or "helix hut"). The matching network is adjusted to join the antenna's impedance to the characteristic impedance of the feedline joining it to the transmitter. If the tower is too short (or too tall) for the frequency, the antenna's capacitive (or inductive) reactance will be counteracted by the opposite reactance in the matching network, which also raises or lowers the feedpoint resistance of the antenna to match the feedline's characteristic impedance. The combined limitations of the matching network, ground wires, and tower can cause the system to have a narrow bandwidth; in extreme cases the effects of narrow bandwidth can be severe enough to detract from the audio fidelity of the radio broadcast.
Electrically short antennas have low radiation resistance, which makes normal loss in other parts of the system relatively more costly in terms of lost broadcast power. The losses in the ground system, matching network(s), feedline wires, and structure of the tower all are in series with the antenna feed current, and each wastes a share of the broadcast power heating the soil or metal in the tower.
Folded unipole antennas.
Heuristically, the unipole's outer skirt wires can be thought of as attached segments of several tall, narrow, single-turn coils, all wired in parallel, with the central mast completing the final side of each turn. Equivalently, each skirt wire makes a parallel wire stub, with the mast being the other parallel "wire"; the closed end at the top of the stub, where the skirt connects to the mast, makes a transmission line stub inductor. Either way of looking at it, the effect of the skirt wires is to add inductive reactance to the antenna mast, which helps neutralize a short mast's capacitive reactance.
For the normal case of a short monopole, the inductive reactance introduced by the skirt wires decreases as the frequency decreases and the bare mast's capacitive reactance increases. With increasing frequency, up to the frequency where the skirt is a quarter wavelength, the inductive reactance rises and the capacitive reactance drops. So for a short antenna, the skirt's inductance and the mast's capacitance can only cancel at a single frequency, since the two reactance magnitudes move in opposite directions with frequency.
With a longer antenna mast, at least a quarter-wave tall, the reactances can be more elaborately configured: The contrary reactances can be made to cancel each other at more than one frequency, at least in part, and to rise and fall by approximately the same amount. Approximate balance between the opposing reactances adds up to reduce the total reactance of the whole antenna at the decreased (and increased) frequencies, thus widening the antenna's low-reactance bandwidth. However, there is nothing particularly remarkable about a longer antenna having a wider low-reactance bandwidth.
If the greater part of the unbalanced radio current can be made to flow in the skirt wires, instead of in the mast, the outer ring of skirt wires will also effectively add electrical width to the mast, which also will improve bandwidth by causing the unbalanced currents in the unipole to function like a "cage antenna".
Usually folded-unipoles are constructed by modifying an existing monopole antenna, and not all possible unipole improvements can be achieved on every monopole.
The resulting skirt enveloping the mast connects only at the tower top, or some midpoint near the top, and to the isolated conducting ring that surrounds the tower base; the skirt wires remain insulated from the mast at every other point along its entire length.
Unipole electrical operation and design.
Balanced and unbalanced currents are important for understanding antennas, because unbalanced current always radiates, and close-spaced balanced current never radiates. The following sketch of how a unipole antenna works separately considers the balanced and unbalanced currents flowing through the antenna. The sum of the two is the actual current seen in any one conductor.
Total current broken into balanced and unbalanced parts.
By the electrical superposition principle, the total currents flowing in the antenna can be considered as split into the sum of independent balanced and unbalanced currents. The balanced and unbalanced parts of the antenna's currents add to make the "true" current profile; equivalently, if we call the "true" current measured flowing through the mast formula_1 and formula_2 the sum of all the "true" currents measured in the skirt wires (by symmetry assumed to all be the same) then the balanced and unbalanced parts of the "true" currents are
formula_3 and
formula_4
Going the other way, the "true" currents in the mast and skirt, from the conceptual balanced and unbalanced currents are
formula_5 and
formula_6
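The algebra of this decomposition is easy to verify numerically. In the sketch below the two phasor currents are made-up values, chosen only to show that the balanced and unbalanced parts recombine into the original mast and skirt currents.

```python
# Hypothetical "true" phasor currents (amperes), for illustration only.
I_mast  = 2.0 + 1.0j
I_skirt = 3.5 - 0.5j

I_balanced   = 0.5 * (I_mast - I_skirt)   # equal and opposite in the two paths; does not radiate
I_unbalanced = 0.5 * (I_mast + I_skirt)   # flows the same way in both paths; this part radiates

# The two parts add back to the measured currents.
assert abs((I_unbalanced + I_balanced) - I_mast)  < 1e-12
assert abs((I_unbalanced - I_balanced) - I_skirt) < 1e-12
```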
As a simplified example, the distinction between an antenna and its feedline is that the balanced current flows anti-parallel in the feedline, which does not radiate, and is rechanneled into unbalanced, vector-parallel paths inside the antenna, which do radiate.
Balanced feed current.
The electrical behavior of the skirt and mast can be thought of as similar to a coaxial feedline, with the skirt corresponding to the coax's outer shield, and the mast serving as the core wire, or center conductor. The connection of the skirt and mast at the top acts as a short at the end of the virtual coax, and because the "coax" is, by design, less than a quarter wave long at the attachment point it is effectively an inductive shorted stub. Regardless of the configured skirt and mast sizes and spacing, which determine the impedance seen by the balanced current, the feed current circulating through the skirt and the mast produces a voltage difference between the top and the skirt feed point, and between the top and the ground plane, which is half of the voltage difference between the feedpoint and the ground (possibly with exceedingly minor variations).
The only current considered so-far is balanced: The same total feed current rises up the skirt wires as flows down through the mast to the ground-level feedpoint (or vice versa), and back through the (balanced) feedline, making an electrically closed circuit. The magnetic fields of the current flowing up are equal and opposite to the current flowing down, so the magnetic fields (very nearly) all cancel, and consequently balanced currents (mostly) do not radiate. So the situation on the antenna after considering just the balanced feed current is that it creates a voltage difference between the antenna top and the ground plane, and nothing in terms of radio waves. That voltage difference serves as an electrical exciter of an unbalanced current.
Unbalanced radiating current.
If one then considers separately the antenna from the "point of view" of any prospective unbalanced current, it sees an unbalanced voltage between the connection point near the top of the mast and the ground plane at the antenna base. (For RF analysis, the backwards path through the feedpoint to the radio is treated as a virtual path to ground, ignoring the balanced feed current.) The self-cancelled balanced currents won't electrically affect the unbalanced currents (other than having created the voltage difference in common to all), although they do add to make the "true" current profile in the antenna.
There are two possible paths that unbalanced current can take in response to the voltage difference between the top and the bottom: Either down (or up) through the mast, or down (or up) through the skirt wires. Because the currents along each path are driven by the same voltage, they will flow in the same direction. The current divides in proportion to the admittance (reciprocal impedance) of each path to ground. The amount of current along each path is determined by the sizes and number of the wire(s) along each path, and to some extent the mutual impedance of the adjacent conductors (mast and skirt wires) and the currents flowing in those wires (parallel currents in adjoining wires crowd out each other's magnetic fields, making it harder to push the current through). All unbalanced current radiates; the radiation from the several vector-parallel current paths all add.
Design choices and results.
Compared to balanced currents through the same two conductors, the electrical impedance countering the flow of unbalanced currents is very high – roughly 500~600 Ω and higher, depending mostly on the wire diameter, but also rising with closer or larger parallel currents in adjacent wires. The impedance against the flow of balanced currents is roughly 300~500 Ω and lower, depending mostly on the spacing between the wires, dropping when wires are more closely spaced. Consequently, the flow of balanced current will tend to be larger in magnitude than its unbalanced counterpart, and the difference becomes greater the closer the conductors are spaced.
The electrical design of a unipole antenna lies in choosing the sizes and number of the skirt wires, their lengths, and (if possible) the size of the central mast, in order to adjust the relative impedances (or admittances) of the balanced and unbalanced current, in order to maximize radiation and to present a reactance-free balanced feedpoint impedance for the feedline. (Other design considerations, like cost of materials and ease of erection, may lead to choices sub-optimal for electrical performance.)
Because of the large number of free design parameters, compared to other antenna types, an exceedingly diverse variety of different unipole antennas can be made, and their performance will all be different. Unlike a commonly used antenna such as the simple doublet, there is no "typical unipole" performance figure. That being said, however, field testing discussed below shows that when just considering antenna efficiency, the power radiated per power fed to a unipole is very nearly the same as for an ordinary monopole antenna of the same height: Other than the advantage of being able to tailor the feedpoint impedance, there appears to be no inherently superior performance for unipoles when compared to a basic monopole. The only unipole design advantage boils down to it having an elaborately configurable built-in feedpoint impedance matching system.
Performance comparisons.
When a well-made folded-unipole replaces a decrepit antenna, or one with a poor original design, there will of course be an improvement in performance; the sudden improvement may be cause for mistakenly inferred superiority in the design.
Experiments show that folded-unipole performance is the same as other monopole designs: Direct comparisons between folded unipoles and more conventional vertical antennas of the same height, all well-made, and with nearly equivalent radiator widths, show essentially no difference in radiation pattern in actual measurements by Rackley, Cox, Moser, & King (1996) and by Cox & Moser (2002).
The expected wider bandwidth was also not found during antenna range tests of several folded unipoles.
Replaced shunt-fed antenna.
Most commonly, folded-unipole designs were used to replace a shunt-fed antenna – a different broadcast antenna design that also has a grounded base. A "shunt-fed" (or "slant-wire") antenna comprises a grounded tower with the top of a sloping single-wire feed-line attached at a point on the mast that results in an approximate match to the impedance desired at the other end of the sloping feed-wire.
If a well-made folded-unipole antenna replaced an aged-out slant-fed antenna, station engineers could notice a marked improvement in performance. Such improvements may have provoked conjectures that folded-unipole antennas had power gains, or other wonderful characteristics, but those suppositions are not borne out by radio engineering calculations.
Ground system maintenance.
Sites of ground-mounted monopole antennas require landscape maintenance: Keeping weeds and grass covering the antenna's ground plane wires as short as possible, since green plants in between the antenna tower and the antenna ground system will dissipate power of the radio waves passing through them, reducing antenna efficiency. Folded-unipole antenna sites were alleged to be less affected by weeds and long grass on top of the ground wires that cause attenuation in other monopole antenna designs, but measurements show no such advantage.
Self-resonant unipole patents.
A possible improvement over the basic folded-unipole antenna is the "self resonant" unipole antenna, described in U.S. patent 6133890.
Another possible improvement to the folded unipole is described in U.S. patent 4658266, which concerns a more carefully designed form of ground plane for use with all monopole types (only incidentally including folded unipoles).
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ P = I^2\\ R\\ "
},
{
"math_id": 1,
"text": "\\ I_\\mathsf{mast}\\ ,"
},
{
"math_id": 2,
"text": "\\ I_\\mathsf{skirt}\\ "
},
{
"math_id": 3,
"text": "\\ I_\\mathsf{balnc} = \\tfrac{1}{2}\\left( I_\\mathsf{mast} - I_\\mathsf{skirt} \\right)\\ ,"
},
{
"math_id": 4,
"text": "\\ I_\\mathsf{unbal} = \\tfrac{1}{2}\\left( I_\\mathsf{mast} + I_\\mathsf{skirt} \\right) ~."
},
{
"math_id": 5,
"text": "\\ I_\\mathsf{mast} = I_\\mathsf{unbal} + I_\\mathsf{balnc} \\ ,"
},
{
"math_id": 6,
"text": "\\ I_\\mathsf{skirt} = I_\\mathsf{unbal} - I_\\mathsf{balnc} ~."
}
]
| https://en.wikipedia.org/wiki?curid=6903447 |
690346 | Signal strength in telecommunications | In telecommunications, particularly in radio frequency engineering, signal strength refers to the transmitter power output as received by a reference antenna at a distance from the transmitting antenna. High-powered transmissions, such as those used in broadcasting, are expressed in dB-millivolts per metre (dBmV/m). For very low-power systems, such as mobile phones, signal strength is usually expressed in dB-microvolts per metre (dBμV/m) or in decibels above a reference level of one milliwatt (dBm). In broadcasting terminology, 1 mV/m is 1000 μV/m or 60 dBμ (often written dBu).
Relationship to average radiated power.
The electric field strength at a specific point can be determined from the power delivered to the transmitting antenna, its geometry and radiation resistance. Consider the case of a center-fed half-wave dipole antenna in free space, where the total length L is equal to one half wavelength (λ/2). If constructed from thin conductors, the current distribution is essentially sinusoidal and the radiating electric field is given by
formula_0
where formula_1 is the angle between the antenna axis and the vector to the observation point, formula_2 is the peak current at the feed-point, formula_3 is the permittivity of free-space, formula_4 is the speed of light in vacuum, and formula_5 is the distance to the antenna in meters. When the antenna is viewed broadside (formula_6) the electric field is maximum and given by
formula_7
Solving this formula for the peak current yields
formula_8
The average power to the antenna is
formula_9
where formula_10 is the center-fed half-wave antenna's radiation resistance. Substituting the formula for formula_11 into the one for formula_12 and solving for the maximum electric field yields
formula_13
Therefore, if the average power to a half-wave dipole antenna is 1 mW, then the maximum electric field at 313 m (1027 ft) is 1 mV/m (60 dBμ).
For a short dipole (formula_14) the current distribution is nearly triangular. In this case, the electric field and radiation resistance are
formula_15
Using a procedure similar to that above, the maximum electric field for a center-fed short dipole is
formula_16
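These closed-form results are easy to evaluate numerically. The short Python sketch below is illustrative only; it uses the two approximate constants derived above and reproduces the figure of about 1 mV/m at 313 m for 1 mW delivered to a half-wave dipole.

```python
import math

def e_max_half_wave(p_avg_watts, r_metres):
    # |E| = 9.91 * sqrt(P_avg) / r for a centre-fed half-wave dipole (broadside)
    return 9.91 * math.sqrt(p_avg_watts) / r_metres

def e_max_short_dipole(p_avg_watts, r_metres):
    # |E| = 9.48 * sqrt(P_avg) / r for a centre-fed short dipole (broadside)
    return 9.48 * math.sqrt(p_avg_watts) / r_metres

print(e_max_half_wave(1e-3, 313))   # about 1.0e-3 V/m, i.e. 1 mV/m (60 dBμ)
```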
RF signals.
Although there are cell phone base station tower networks across many nations globally, there are still many areas within those nations that do not have good reception. Some rural areas are unlikely to ever be covered effectively since the cost of erecting a cell tower is too high for only a few customers. Even in areas with high signal strength, basements and the interiors of large buildings often have poor reception.
Weak signal strength can also be caused by destructive interference of the signals from local towers in urban areas, or by the construction materials used in some buildings causing significant attenuation of signal strength. Large buildings such as warehouses, hospitals and factories often have no usable signal further than a few metres from the outside walls.
This is particularly true for the networks which operate at higher frequency since these are attenuated more by intervening obstacles, although they are able to use reflection and diffraction to circumvent obstacles.
Estimated received signal strength.
The received signal strength in an active RFID tag can be estimated as follows:
formula_17
In general, you can take the path loss exponent into account:
formula_18
The effective path loss depends on frequency, topography, and environmental conditions.
Actually, one could use any known "signal power" dBm0 at any distance r0 as a reference:
formula_19
formula_20 would give an estimate of the number of decades, which coincides with an average path loss of 40 dB/decade.
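For example, the general reference form of the model above can be written as a small function. In the sketch below the reference distance and reference power are placeholders, and the default path-loss exponent of 4 corresponds to the 40 dB/decade figure mentioned above.

```python
import math

def estimated_dbm(r, r0=1.0, dbm0=-43.0, gamma=4.0):
    # dBm_e = dBm_0 - 10 * gamma * log10(r / r0)
    # r0 and dbm0 are a known reference distance and power (hypothetical defaults here).
    return dbm0 - 10.0 * gamma * math.log10(r / r0)

print(estimated_dbm(10.0))    # -83 dBm  (one decade beyond r0 -> 40 dB weaker)
print(estimated_dbm(100.0))   # -123 dBm (two decades beyond r0 -> 80 dB weaker)
```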
Estimate the cell radius.
When we measure cell distance "r" and received power dBmm pairs,
we can estimate the mean cell radius as follows:
formula_21
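Given a list of measured distance/power pairs, this estimator can be implemented directly, as in the sketch below. The sample measurements are made up for illustration.

```python
def estimate_cell_radius(measurements):
    # measurements: iterable of (r, dBm_m) pairs; each pair is inverted through
    # dBm = -43 - 40*log10(r/R) and the resulting radius estimates are averaged.
    estimates = [r * 10 ** ((dbm + 43.0) / 40.0) for r, dbm in measurements]
    return sum(estimates) / len(estimates)

samples = [(100.0, -83.0), (200.0, -95.0), (50.0, -70.0)]   # hypothetical data
print(estimate_cell_radius(samples))                        # roughly 10 m for these values
```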
Specialized calculation models exist to plan the location of a new cell tower, taking into account local conditions and radio equipment parameters, as well as consideration that mobile radio signals have line-of-sight propagation, unless reflection occurs.
References.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\nE_\\theta (r) =\n{-jI_\\circ\\over 2\\pi\\varepsilon_0 c\\, r}\n{\\cos\\left(\\scriptstyle{\\pi\\over 2}\\cos\\theta\\right)\\over\\sin\\theta}\ne^{j\\left(\\omega t-kr\\right)}\n"
},
{
"math_id": 1,
"text": "\\scriptstyle{\\theta}"
},
{
"math_id": 2,
"text": "\\scriptstyle{I_\\circ}"
},
{
"math_id": 3,
"text": "\\scriptstyle{\\varepsilon_0 \\, = \\, 8.85\\times 10^{-12} \\, F/m }"
},
{
"math_id": 4,
"text": "\\scriptstyle{c \\, = \\, 3\\times 10^8 \\, m/s}"
},
{
"math_id": 5,
"text": "\\scriptstyle{r}"
},
{
"math_id": 6,
"text": "\\scriptstyle{\\theta \\, = \\, \\pi/2}"
},
{
"math_id": 7,
"text": "\n\\vert E_{\\pi/2}(r) \\vert = { I_\\circ \\over 2\\pi\\varepsilon_0 c\\, r }\\, . \n"
},
{
"math_id": 8,
"text": "\nI_\\circ = 2\\pi\\varepsilon_0 c \\, r\\vert E_{\\pi/2}(r) \\vert \\, .\n"
},
{
"math_id": 9,
"text": " {P_{avg} = {1 \\over 2} R_a \\, I_\\circ^2 } "
},
{
"math_id": 10,
"text": " \\scriptstyle{R_a = 73.13\\,\\Omega} "
},
{
"math_id": 11,
"text": " \\scriptstyle{I_\\circ} "
},
{
"math_id": 12,
"text": " \\scriptstyle{P_{avg}} "
},
{
"math_id": 13,
"text": "\n\\vert E_{\\pi/2}(r)\\vert \\, = \\, {1 \\over \\pi\\varepsilon_0 c \\, r}\n\\sqrt{{ P_{avg} \\over 2R_a}} \\, = \\, \n{9.91 \\over r} \\sqrt{ P_{avg} } \\quad (L = \\lambda /2) \\, .\n"
},
{
"math_id": 14,
"text": "\\scriptstyle{L \\ll \\lambda /2}"
},
{
"math_id": 15,
"text": "\nE_\\theta (r) =\n{-jI_\\circ \\sin (\\theta) \\over 4 \\varepsilon_0 c\\, r} \\left ( {L \\over \\lambda} \\right )\ne^{j\\left(\\omega t-kr\\right)} \\, , \\quad\nR_a = 20\\pi^2 \\left ( {L \\over \\lambda} \\right )^2 .\n"
},
{
"math_id": 16,
"text": "\n\\vert E_{\\pi/2}(r)\\vert \\, = \\, {1 \\over \\pi\\varepsilon_0 c \\, r}\n\\sqrt{{ P_{avg} \\over 160}} \\, = \\, \n{9.48 \\over r} \\sqrt{ P_{avg} } \\quad (L \\ll \\lambda /2)\\, .\n"
},
{
"math_id": 17,
"text": "\\mathrm{dBm_e} = -43.0 - 40.0\\ \\log_{10}\\left( \\frac{r}{R}\\right)"
},
{
"math_id": 18,
"text": "\\mathrm{dBm_e} = -43.0 - 10.0 \\ \\gamma \\ \\log_{10}\\left( \\frac{r}{R}\\right)"
},
{
"math_id": 19,
"text": "\\mathrm{dBm_e} = \\mathrm{dBm}_0 - 10.0 \\ \\gamma \\ \\log_{10}\\left( \\frac{r}{r_0} \\right)"
},
{
"math_id": 20,
"text": "\\log_{10} ( R / r )"
},
{
"math_id": 21,
"text": "R_e = \\operatorname{avg}[ \\ r \\ 10 ^ { ( \\mathrm{dBm_m} + 43.0 ) / 40.0 } \\ ]"
}
]
| https://en.wikipedia.org/wiki?curid=690346 |
69050331 | Ognev's mole | Species of mammal
<templatestyles src="Template:Taxobox/core/styles.css" />
Ognev's mole (Talpa ognevi) is a species of mammal in the family Talpidae. It occurs in the southeastern coastal area of the Black Sea from northeastern Turkey to Georgia. It inhabits different habitats associated with moist soils in lowland areas. Little information is available about its life history.
Externally, Ognev's mole resembles the Caucasian mole ("T. caucasica"), which occurs further north, but is larger and has more robust teeth. It was scientifically named in 1944, but for a time it was considered a subspecies of "T. caucasica". However, genetic analysis found major differences, and in 2018 Ognev's mole was recognized as an independent species. No data has yet been collected on the status of the population.
Taxonomy.
Ognev's mole is a species of the genus "Talpa", which contains Eurasian moles. The genus includes around a dozen other members, including the European mole ("Talpa europaea") as its most famous representative. The Eurasian moles belong to the tribe of true moles (Talpini) and the mole family (Talpidae). The true moles in turn include the mostly burrowing forms of the moles, while other members of the family only partially live underground, move above ground or have a semi-aquatic way of life.
Description.
The first scientific description of Ognev's mole was made in 1944 by Sergei Uljanowitsch Stroganow under the name "Talpa romana ognevi" and thus as a subspecies of the Roman mole ("T. romana"). The comparatively larger size of the animals and their robust tooth structure compared to the Caucasian mole, which occurs further north, was primarily what motivated Stroganov to incorporate the new species into the Roman mole. As a type locality, he gave Bakuriani in the region around Borjomi in southern Georgia. The holotype is formed by an adult male animal originating from there. In addition, Stroganov examined seven other individuals, some of which had been found in the vicinity of Kutaisi. With the specific epithet, Stroganow honored the Soviet zoologist Sergey Ivanovich Ognev.
In 1989, Ognev's mole was designated as one of three subspecies of the Caucasian mole by Vladimir Sokolov. The distinction was based on size, as the body shape of Ognev's mole stood out as extremely distinctive from the other Caucasian moles. In line with the moles of the Caucasus region and southern Europe, Ognev's mole also has a caecoidal structure of the sacrum (the opening of the foramen on the fourth sacral vertebra is directed backwards). This is a striking difference from the europaeoidal structure (the opening of the foramen on the fourth sacral vertebra is covered by a bone bridge) of the pelvic area in numerous Central and Western European moles.
Molecular genetic studies since the 2010s have shown a relatively basal position of the Caucasian moles together with the Altai mole ("T. altaica") within the Eurasian moles. The separation of this group dates back to the transition from the Miocene to the Pliocene more than 5 million years ago. In 2015, genetic analyses showed a clear separation between the moles of the northern and southern Caucasus region. This was supported by the deep divergence between the two lines, which, according to the results, had been distinct since the end of the Pliocene around 3 to 2.5 million years ago. The authors of the study therefore suspected an independent position of Ognev's mole, but refrained from assigning it species rank, as no genetic material from individuals from the type locality was available to them. Three years later, in the eighth volume of the standard work "Handbook of the Mammals of the World," Ognev's mole was granted species status. This separation from the Caucasian mole is also supported by individual cytogenetic data, since the largest chromosome in Ognev's mole has two arms, whereas the Caucasian mole has an acrocentric structure.
Distribution and habitat.
The distribution of Ognev's mole includes the southeastern coastal areas of the Black Sea. It occurs from the Artvin province in northeastern Turkey to the neighboring areas of Georgia to the north, where the habitat extends inland to the upper reaches of the Kura River. The northern limit of the distribution has not been adequately delineated. The animals prefer lowlands and river valleys near the coast. Higher areas are mostly inhabited by the sympatric Levant mole ("T. levantis transcaucasica"). Ognev's mole can be found in gardens, fields, and wooded landscapes with moist soils.
Characteristics.
Anatomy.
Ognev's mole reaches a head-trunk length of 13.4 to 14.2 cm, a tail length of 2.0 to 2.6 cm and a weight of 62 to 91 g. The sexual dimorphism is only slightly pronounced; males are on average 5% heavier than females. With the specified dimensions, Ognev's mole is larger than the closely related Caucasian mole. Outwardly, both species are similar. Like all moles, they are characterized by a cylindrical and robust body, a short neck and shovel-like front feet. The coat color has a blackish gray to black hue. Occasionally, yellowish spots are formed on the muzzle, throat and chest. Similar to the Caucasian mole, but unlike the European mole ("Talpa europaea"), the eyes are covered with a translucent skin. The rear foot has a length of 1.8 to 2.0 cm.
Features of skull and teeth.
The length of the skull varies between 33.6 and 35.0 mm, the width on the zygomatic arch is 12.1 to 13.7 mm, and the cranium is 15.9 to 17.2 mm wide. It has a robust rostrum that is between 9.0 and 10.1 mm wide. The dentition has 44 teeth with the following tooth formula: formula_0.
Compared to the Caucasian mole, the upper molars are very strong. The upper row of teeth extends over 14.7 to 15.8 mm in length, of which the molars take up 6.0 to 7.4 mm. In proportion, the upper row of teeth takes up around 40% of the length of the skull.
Genetic characteristics.
The diploid chromosome set is 2n = 38. It consists of 8 metacentric, 3 submetacentric, 2 subtelocentric and 5 teloacrocentric pairs of chromosomes. The largest chromosome has two arms. The X chromosome is (sub)metacentric; the Y chromosome is minute and dot-like.
Life history.
There is little information about the life history of Ognev's mole. Presumably it resembles that of the Caucasian mole.
Threats and conservation.
Ognev's mole has not yet been assessed by the IUCN. Information on the status of populations and on protective measures is not available.
{
"math_id": 0,
"text": "\\frac{3.1.4.3}{3.1.4.3}"
}
]
| https://en.wikipedia.org/wiki?curid=69050331 |
69051197 | Talysch mole | Species of mammal
<templatestyles src="Template:Taxobox/core/styles.css" />
The Talysch mole (Talpa talyschensis) is a species of mammal in the family Talpidae. It is a small member of the family, which outwardly resembles the Levant mole ("T. levantis"), but is genetically closer to Père David's mole ("T. davidiana"). It is common on the southwest coast of the Caspian Sea, from southern Azerbaijan through most of northern Iran. The habitat includes temperate rainforests and scrub areas. There is little information about the life history of the Talysch mole. It was described in 1945, but had long been considered a subspecies of various other Eurasian moles, and was only recognized as a distinct species in the mid-2010s. No surveys have been carried out to quantify the status of the species.
Taxonomy.
The Talysch mole is a species of the genus "Talpa", which contains Eurasian moles. The genus includes around a dozen other members, including the European mole ("Talpa europaea") as its most famous representative. The Eurasian moles belong to the tribe of true moles (Talpini) and the mole family (Talpidae). The true moles in turn include the mostly burrowing forms of the moles, while other members of the family only partially live underground, move above ground or have a semi-aquatic way of life.
The first scientific description of the Talysch mole was provided by Nikolai Kusmitsch Vereschtschagin in 1945. He used the name "Talpa orientalis talyschensis", seeing it as a subspecies of "Talpa orientalis", a species that is now counted as synonymous with the Caucasian mole ("T. caucasica"). The lectotype consists of a skull of an adult male. The type locality is in the south of Azerbaijan in the Talysh Mountains around Masallı.
In the 1960s and 1970s, the Talysch mole was sometimes also considered a variant of the blind mole ("Talpa caeca"). However, the close external resemblance to the Levant mole ("Talpa levantis"), along with a matching karyotype, led the Talysch mole to be recognized as the eastern subspecies of the Levant mole by the end of the 20th century, despite both taxa being widely separated in range. The main differences between the two species are the details of the tooth design.
Some scientists, however, questioned the close connection between the Talysch and Levantine moles based on anatomical data. This view was supported by a molecular genetic study from 2015, which found a closer relationship to Père David's mole, a species occurring further south. Both species diverged at the end of the Pliocene, around 2.5 million years ago. The lineage that led to the Levant mole, on the other hand, had already diverged during the transition from the Miocene to the Pliocene. As a result, the Talysch mole was recognized as an independent species, which was also confirmed by the eighth volume of the standard work "Handbook of the Mammals of the World" in 2018.
Distribution.
The distribution area of the Talysch mole includes the southwestern coastal areas of the Caspian Sea. The northern limit of the occurrence is reached in the region around Lankaran in southern Azerbaijan, and the southern limit is found near Chalus in northern Iran. This species is restricted to the Caspian Hyrcanian mixed forests, which stretch across the Talysh and Alborz Mountains and consist of temperate rainforests and boxwood thickets with large amounts of moss. The altitude distribution ranges from sea level to around 300 m.
Characteristics.
Description.
The Talysh mole is a small member of the genus "Talpa". Its head-trunk length is 10.4 to 11.4 cm, the tail length 2.0 to 2.5 cm and the weight 31 to 49 g. The sexual dimorphism is only slightly pronounced. In body dimensions, the Talysh mole is comparable to the Levant mole. Like all Eurasian moles, it is characterized by a cylindrical and sturdy body; the neck is short, and the forefeet are broad and shovel-like, adapted for digging. The fur has a dark gray to blackish color. The eyes remain hidden under the skin. The rear foot length is 1.6 to 1.7 cm.
Features of skull and teeth.
The skull is 31.1 mm long on average, and the cranium is 15.0 mm wide and 8.7 mm high. The rostrum is 8.3 mm wide at the base and narrows to 4.3 mm towards the front. The tooth formula is: formula_0; the dentition consists of 44 teeth. On the anterior upper molar, the mesostyle, a small cusp between the two main cusps on the lip side (paraconus and metaconus), has two small tips, whereas this is only single-pointed in the Levant mole. The upper row of teeth is around 11.3 mm long.
Genetics.
The diploid chromosome set is 2n = 34.
Life history.
Little information is available about the way of life of the Talysch mole; it is likely similar to that of the Levant mole. In the region around Chalus, numerous molehills have been observed in forests and in bush areas on sandy subsoil.
Threats and conservation.
The Talysch mole has not yet been assessed by the IUCN. Information on the status of populations and on protective measures is not available.
{
"math_id": 0,
"text": "\\frac{3.1.4.3}{3.1.4.3}"
}
]
| https://en.wikipedia.org/wiki?curid=69051197 |
690512 | Akaike information criterion | Estimator for quality of a statistical model
The Akaike information criterion (AIC) is an estimator of prediction error and thereby relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection.
AIC is founded on information theory. When a statistical model is used to represent the process that generated the data, the representation will almost never be exact; so some information will be lost by using the model to represent the process. AIC estimates the relative amount of information lost by a given model: the less information a model loses, the higher the quality of that model.
In estimating the amount of information lost by a model, AIC deals with the trade-off between the goodness of fit of the model and the simplicity of the model. In other words, AIC deals with both the risk of overfitting and the risk of underfitting.
The Akaike information criterion is named after the Japanese statistician Hirotsugu Akaike, who formulated it. It now forms the basis of a paradigm for the foundations of statistics and is also widely used for statistical inference.
Definition.
Suppose that we have a statistical model of some data. Let "k" be the number of estimated parameters in the model. Let formula_0 be the maximized value of the likelihood function for the model. Then the AIC value of the model is the following.
formula_1
Given a set of candidate models for the data, the preferred model is the one with the minimum AIC value. Thus, AIC rewards goodness of fit (as assessed by the likelihood function), but it also includes a penalty that is an increasing function of the number of estimated parameters. The penalty discourages overfitting, which is desired because increasing the number of parameters in the model almost always improves the goodness of the fit.
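As an illustration, the following Python sketch computes AIC for a simple model whose maximum likelihood estimates have a closed form, namely a normal distribution fitted to hypothetical data; the data and the random seed are arbitrary.

```python
import numpy as np

def aic(max_log_likelihood, k):
    # AIC = 2k - 2 ln(L-hat)
    return 2 * k - 2 * max_log_likelihood

rng = np.random.default_rng(1)
y = rng.normal(5.0, 2.0, size=100)               # hypothetical data

mu_hat, sigma_hat = y.mean(), y.std()            # ML estimates (sigma with ddof=0)
log_lik = np.sum(-0.5 * np.log(2 * np.pi * sigma_hat**2)
                 - (y - mu_hat)**2 / (2 * sigma_hat**2))

print(aic(log_lik, k=2))                         # two estimated parameters: mu and sigma
```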
AIC is founded in information theory. Suppose that the data is generated by some unknown process "f". We consider two candidate models to represent "f": "g"1 and "g"2. If we knew "f", then we could find the information lost from using "g"1 to represent "f" by calculating the Kullback–Leibler divergence, "D"KL("f" ‖ "g"1); similarly, the information lost from using "g"2 to represent "f" could be found by calculating "D"KL("f" ‖ "g"2). We would then, generally, choose the candidate model that minimized the information loss.
We cannot choose with certainty, because we do not know "f". Akaike showed, however, that we can estimate, via AIC, how much more (or less) information is lost by "g"1 than by "g"2. The estimate, though, is only valid asymptotically; if the number of data points is small, then some correction is often necessary (see AICc, below).
Note that AIC tells nothing about the absolute quality of a model, only the quality relative to other models. Thus, if all the candidate models fit poorly, AIC will not give any warning of that. Hence, after selecting a model via AIC, it is usually good practice to validate the absolute quality of the model. Such validation commonly includes checks of the model's residuals (to determine whether the residuals seem random) and tests of the model's predictions. For more on this topic, see "statistical model validation".
How to use AIC in practice.
To apply AIC in practice, we start with a set of candidate models, and then find the models' corresponding AIC values. There will almost always be information lost due to using a candidate model to represent the "true model," i.e. the process that generated the data. We wish to select, from among the candidate models, the model that minimizes the information loss. We cannot choose with certainty, but we can minimize the estimated information loss.
Suppose that there are "R" candidate models. Denote the AIC values of those models by AIC1, AIC2, AIC3, ..., AIC"R". Let AICmin be the minimum of those values. Then the quantity exp((AICmin − AIC"i")/2) can be interpreted as being proportional to the probability that the "i"th model minimizes the (estimated) information loss.
As an example, suppose that there are three candidate models, whose AIC values are 100, 102, and 110. Then the second model is exp((100 − 102)/2) = 0.368 times as probable as the first model to minimize the information loss. Similarly, the third model is exp((100 − 110)/2) = 0.007 times as probable as the first model to minimize the information loss.
In this example, we would omit the third model from further consideration. We then have three options: (1) gather more data, in the hope that this will allow clearly distinguishing between the first two models; (2) simply conclude that the data is insufficient to support selecting one model from among the first two; (3) take a weighted average of the first two models, with weights proportional to 1 and 0.368, respectively, and then do statistical inference based on the weighted multimodel.
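The numbers in this example can be reproduced directly, as in the sketch below; normalizing the quantities exp((AICmin − AIC"i")/2) gives weights proportional to the probabilities used for multimodel averaging (these normalized values are often called Akaike weights, a term not used in the text above).

```python
import numpy as np

aic_values = np.array([100.0, 102.0, 110.0])
rel_likelihood = np.exp((aic_values.min() - aic_values) / 2)
print(rel_likelihood.round(3))                   # [1.    0.368 0.007]

weights = rel_likelihood / rel_likelihood.sum()  # proportional weights for multimodel inference
print(weights.round(3))
```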
The quantity exp((AICmin − AIC"i")/2) is known as the "relative likelihood" of model "i". It is closely related to the likelihood ratio used in the likelihood-ratio test. Indeed, if all the models in the candidate set have the same number of parameters, then using AIC might at first appear to be very similar to using the likelihood-ratio test. There are, however, important distinctions. In particular, the likelihood-ratio test is valid only for nested models, whereas AIC (and AICc) has no such restriction.
Hypothesis testing.
Every statistical hypothesis test can be formulated as a comparison of statistical models. Hence, every statistical hypothesis test can be replicated via AIC. Two examples are briefly described in the subsections below. Details for those examples, and many more examples, are given in the references.
Replicating Student's "t"-test.
As an example of a hypothesis test, consider the "t"-test to compare the means of two normally-distributed populations. The input to the "t"-test comprises a random sample from each of the two populations.
To formulate the test as a comparison of models, we construct two different models. The first model models the two populations as having potentially different means and standard deviations. The likelihood function for the first model is thus the product of the likelihoods for two distinct normal distributions; so it has four parameters: "μ"1, "σ"1, "μ"2, "σ"2. To be explicit, the likelihood function is as follows (denoting the sample sizes by "n"1 and "n"2).
formula_2
formula_3
The second model models the two populations as having the same means but potentially different standard deviations. The likelihood function for the second model thus sets "μ"1 = "μ"2 in the above equation; so it has three parameters.
We then maximize the likelihood functions for the two models (in practice, we maximize the log-likelihood functions); after that, it is easy to calculate the AIC values of the models. We next calculate the relative likelihood. For instance, if the second model was only 0.01 times as likely as the first model, then we would omit the second model from further consideration: so we would conclude that the two populations have different means.
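A numerical version of this comparison can be written in a few lines. The sketch below is only illustrative: the data are simulated, the sample sizes are arbitrary, and the log-likelihoods are maximized with a general-purpose optimizer rather than any closed form.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
y1 = rng.normal(loc=10.0, scale=2.0, size=30)   # hypothetical sample from population 1
y2 = rng.normal(loc=12.0, scale=3.0, size=35)   # hypothetical sample from population 2

def nll_model1(params):
    # Model 1: different means and standard deviations (4 parameters).
    mu1, log_s1, mu2, log_s2 = params
    return -(norm.logpdf(y1, mu1, np.exp(log_s1)).sum()
             + norm.logpdf(y2, mu2, np.exp(log_s2)).sum())

def nll_model2(params):
    # Model 2: common mean, possibly different standard deviations (3 parameters).
    mu, log_s1, log_s2 = params
    return -(norm.logpdf(y1, mu, np.exp(log_s1)).sum()
             + norm.logpdf(y2, mu, np.exp(log_s2)).sum())

fit1 = minimize(nll_model1, x0=[y1.mean(), np.log(y1.std()), y2.mean(), np.log(y2.std())])
fit2 = minimize(nll_model2, x0=[np.r_[y1, y2].mean(), np.log(y1.std()), np.log(y2.std())])

aic1 = 2 * 4 + 2 * fit1.fun     # AIC = 2k - 2 ln(L-hat); fit.fun is the minimized neg. log-likelihood
aic2 = 2 * 3 + 2 * fit2.fun
rel = np.exp((min(aic1, aic2) - max(aic1, aic2)) / 2)   # relative likelihood of the worse model
print(aic1, aic2, rel)
```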
The "t"-test assumes that the two populations have identical standard deviations; the test tends to be unreliable if the assumption is false and the sizes of the two samples are very different (Welch's "t"-test would be better). Comparing the means of the populations via AIC, as in the example above, has an advantage by not making such assumptions.
Comparing categorical data sets.
For another example of a hypothesis test, suppose that we have two populations, and each member of each population is in one of two categories—category #1 or category #2. Each population is binomially distributed. We want to know whether the distributions of the two populations are the same. We are given a random sample from each of the two populations.
Let "m" be the size of the sample from the first population. Let "m"1 be the number of observations (in the sample) in category #1; so the number of observations in category #2 is "m" − "m"1. Similarly, let "n" be the size of the sample from the second population. Let "n"1 be the number of observations (in the sample) in category #1.
Let p be the probability that a randomly-chosen member of the first population is in category #1. Hence, the probability that a randomly-chosen member of the first population is in category #2 is 1 − "p". Note that the distribution of the first population has one parameter. Let q be the probability that a randomly-chosen member of the second population is in category #1. Note that the distribution of the second population also has one parameter.
To compare the distributions of the two populations, we construct two different models. The first model models the two populations as having potentially different distributions. The likelihood function for the first model is thus the product of the likelihoods for two distinct binomial distributions; so it has two parameters: p, q. To be explicit, the likelihood function is as follows.
formula_4
The second model models the two populations as having the same distribution. The likelihood function for the second model thus sets "p" = "q" in the above equation; so the second model has one parameter.
We then maximize the likelihood functions for the two models (in practice, we maximize the log-likelihood functions); after that, it is easy to calculate the AIC values of the models. We next calculate the relative likelihood. For instance, if the second model was only 0.01 times as likely as the first model, then we would omit the second model from further consideration: so we would conclude that the two populations have different distributions.
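The following sketch (with assumed counts) carries out this comparison; here the maximum-likelihood estimates are available in closed form, so no numerical optimization is needed.

```python
# A sketch with assumed counts; the maximum-likelihood estimates of binomial
# probabilities are the observed (or pooled) proportions.
import numpy as np
from scipy import stats

m, m1 = 120, 43    # sample 1: size, and observations in category #1 (assumed)
n, n1 = 150, 71    # sample 2: size, and observations in category #1 (assumed)

# Model 1: separate probabilities p and q (two parameters).
p_hat, q_hat = m1 / m, n1 / n
loglik1 = stats.binom.logpmf(m1, m, p_hat) + stats.binom.logpmf(n1, n, q_hat)
aic1 = 2 * 2 - 2 * loglik1

# Model 2: a single common probability (one parameter), estimated by pooling.
r_hat = (m1 + n1) / (m + n)
loglik2 = stats.binom.logpmf(m1, m, r_hat) + stats.binom.logpmf(n1, n, r_hat)
aic2 = 2 * 1 - 2 * loglik2

# Relative likelihood of the weaker model: if it is very small, we conclude
# that the two populations have different distributions.
rel = np.exp((min(aic1, aic2) - max(aic1, aic2)) / 2)
print(aic1, aic2, rel)
```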
Foundations of statistics.
Statistical inference is generally regarded as comprising hypothesis testing and estimation. Hypothesis testing can be done via AIC, as discussed above. Regarding estimation, there are two types: point estimation and interval estimation. Point estimation can be done within the AIC paradigm: it is provided by maximum likelihood estimation. Interval estimation can also be done within the AIC paradigm: it is provided by likelihood intervals. Hence, statistical inference generally can be done within the AIC paradigm.
The most commonly used paradigms for statistical inference are frequentist inference and Bayesian inference. AIC, though, can be used to do statistical inference without relying on either the frequentist paradigm or the Bayesian paradigm: because AIC can be interpreted without the aid of significance levels or Bayesian priors. In other words, AIC can be used to form a foundation of statistics that is distinct from both frequentism and Bayesianism.
Modification for small sample size.
When the sample size is small, there is a substantial probability that AIC will select models that have too many parameters, i.e. that AIC will overfit. To address such potential overfitting, AICc was developed: AICc is AIC with a correction for small sample sizes.
The formula for AICc depends upon the statistical model. Assuming that the model is univariate, is linear in its parameters, and has normally-distributed residuals (conditional upon regressors), then the formula for AICc is as follows.
formula_5
—where "n" denotes the sample size and "k" denotes the number of parameters. Thus, AICc is essentially AIC with an extra penalty term for the number of parameters. Note that as "n" → ∞, the extra penalty term converges to 0, and thus AICc converges to AIC.
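As a small illustration (a sketch only, valid under the univariate linear model with normal residuals assumed above), the correction can be computed directly from AIC, "n" and "k":

```python
# A sketch: AICc from AIC, the sample size n and the parameter count k.
def aicc(aic, n, k):
    """Small-sample corrected AIC; requires n > k + 1."""
    return aic + (2 * k ** 2 + 2 * k) / (n - k - 1)

# The correction matters for small samples and vanishes as n grows:
print(aicc(100.0, n=20, k=5))    # 100 + 60/14, roughly 104.3
print(aicc(100.0, n=2000, k=5))  # roughly 100.03, essentially plain AIC
```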
If the assumption that the model is univariate and linear with normal residuals does not hold, then the formula for AICc will generally be different from the formula above. For some models, the formula can be difficult to determine. For every model that has AICc available, though, the formula for AICc is given by AIC plus terms that include both "k" and "k"2. In comparison, the formula for AIC includes "k" but not "k"2. In other words, AIC is a first-order estimate (of the information loss), whereas AICc is a second-order estimate.
Further discussion of the formula, with examples of other assumptions, is given by and by . In particular, with other assumptions, bootstrap estimation of the formula is often feasible.
To summarize, AICc has the advantage of tending to be more accurate than AIC (especially for small samples), but AICc also has the disadvantage of sometimes being much more difficult to compute than AIC. Note that if all the candidate models have the same "k" and the same formula for AICc, then AICc and AIC will give identical (relative) valuations; hence, there will be no disadvantage in using AIC, instead of AICc. Furthermore, if "n" is many times larger than "k"2, then the extra penalty term will be negligible; hence, the disadvantage in using AIC, instead of AICc, will be negligible.
History.
The Akaike information criterion was formulated by the statistician Hirotsugu Akaike. It was originally named "an information criterion". It was first announced in English by Akaike at a 1971 symposium; the proceedings of the symposium were published in 1973. The 1973 publication, though, was only an informal presentation of the concepts. The first formal publication was a 1974 paper by Akaike.
The initial derivation of AIC relied upon some strong assumptions. showed that the assumptions could be made much weaker. Takeuchi's work, however, was in Japanese and was not widely known outside Japan for many years. (Translated in )
AICc was originally proposed for linear regression (only) by . That instigated the work of , and several further papers by the same authors, which extended the situations in which AICc could be applied.
The first general exposition of the information-theoretic approach was the volume by . It includes an English presentation of the work of Takeuchi. The volume led to far greater use of AIC, and it now has more than 64,000 citations on Google Scholar.
Akaike called his approach an "entropy maximization principle", because the approach is founded on the concept of entropy in information theory. Indeed, minimizing AIC in a statistical model is effectively equivalent to maximizing entropy in a thermodynamic system; in other words, the information-theoretic approach in statistics is essentially applying the Second Law of Thermodynamics. As such, AIC has roots in the work of Ludwig Boltzmann on entropy. For more on these issues, see and .
Usage tips.
Counting parameters.
A statistical model must account for random errors. A straight line model might be formally described as "y""i" = "b"0 + "b"1"x""i" + "ε""i". Here, the "ε""i" are the residuals from the straight line fit. If the "ε""i" are assumed to be i.i.d. Gaussian (with zero mean), then the model has three parameters:
"b"0, "b"1, and the variance of the Gaussian distributions.
Thus, when calculating the AIC value of this model, we should use "k"=3. More generally, for any least squares model with i.i.d. Gaussian residuals, the variance of the residuals' distributions should be counted as one of the parameters.
As another example, consider a first-order autoregressive model, defined by
"x""i" = "c" + "φx""i"−1 + "ε""i", with the "ε""i" being i.i.d. Gaussian (with zero mean). For this model, there are three parameters: "c", "φ", and the variance of the "ε""i". More generally, a "p"th-order autoregressive model has "p" + 2 parameters. (If, however, "c" is not estimated from the data, but instead given in advance, then there are only "p" + 1 parameters.)
Transforming data.
The AIC values of the candidate models must all be computed with the same data set. Sometimes, though, we might want to compare a model of the response variable, "y", with a model of the logarithm of the response variable, log("y"). More generally, we might want to compare a model of the data with a model of transformed data. Following is an illustration of how to deal with data transforms (adapted from : "Investigators should be sure that all hypotheses are modeled using the same response variable").
Suppose that we want to compare two models: one with a normal distribution of "y" and one with a normal distribution of log("y"). We should "not" directly compare the AIC values of the two models. Instead, we should transform the normal cumulative distribution function to first take the logarithm of "y". To do that, we need to perform the relevant integration by substitution: thus, we need to multiply by the derivative of the (natural) logarithm function, which is 1/"y". Hence, the transformed distribution has the following probability density function:
formula_6
—which is the probability density function for the log-normal distribution. We then compare the AIC value of the normal model against the AIC value of the log-normal model.
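A minimal sketch of this comparison is shown below (the data are simulated and the closed-form maximum-likelihood fits are assumptions of the example); the log-normal log-density is obtained from the normal log-density of log("y") minus log("y"), which is exactly the 1/"y" factor above.

```python
# A sketch with simulated data; both fits use closed-form maximum likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.5, size=50)   # strictly positive data

# Model A: y itself is normal (two parameters: mean and standard deviation).
mu_a, sigma_a = y.mean(), y.std()
aic_normal = 2 * 2 - 2 * stats.norm.logpdf(y, mu_a, sigma_a).sum()

# Model B: log(y) is normal; the 1/y Jacobian keeps the density on the scale
# of y, so both AIC values describe the same response variable.
logy = np.log(y)
mu_b, sigma_b = logy.mean(), logy.std()
aic_lognormal = 2 * 2 - 2 * (stats.norm.logpdf(logy, mu_b, sigma_b) - logy).sum()

print(aic_normal, aic_lognormal)
```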
For misspecified models, Takeuchi's Information Criterion (TIC) might be more appropriate. However, TIC often suffers from instability caused by estimation errors.
Comparisons with other model selection methods.
The critical difference between AIC and BIC (and their variants) is the asymptotic property under well-specified and misspecified model classes. Their fundamental differences have been well-studied in regression variable selection and autoregression order selection problems. In general, if the goal is prediction, AIC and leave-one-out cross-validation are preferred. If the goal is selection, inference, or interpretation, BIC or leave-many-out cross-validation is preferred. A comprehensive overview of AIC and other popular model selection methods is given by Ding et al. (2018).
Comparison with BIC.
The formula for the Bayesian information criterion (BIC) is similar to the formula for AIC, but with a different penalty for the number of parameters. With AIC the penalty is 2"k", whereas with BIC the penalty is ln("n")"k".
A comparison of AIC/AICc and BIC is given by , with follow-up remarks by . The authors show that AIC/AICc can be derived in the same Bayesian framework as BIC, just by using different prior probabilities. In the Bayesian derivation of BIC, though, each candidate model has a prior probability of 1/"R" (where "R" is the number of candidate models). Additionally, the authors present a few simulation studies that suggest AICc tends to have practical/performance advantages over BIC.
A point made by several researchers is that AIC and BIC are appropriate for different tasks. In particular, BIC is argued to be appropriate for selecting the "true model" (i.e. the process that generated the data) from the set of candidate models, whereas AIC is not appropriate. To be specific, if the "true model" is in the set of candidates, then BIC will select the "true model" with probability 1, as "n" → ∞; in contrast, when selection is done via AIC, the probability can be less than 1. Proponents of AIC argue that this issue is negligible, because the "true model" is virtually never in the candidate set. Indeed, it is a common aphorism in statistics that "all models are wrong"; hence the "true model" (i.e. reality) cannot be in the candidate set.
Another comparison of AIC and BIC is given by . Vrieze presents a simulation study—which allows the "true model" to be in the candidate set (unlike with virtually all real data). The simulation study demonstrates, in particular, that AIC sometimes selects a much better model than BIC even when the "true model" is in the candidate set. The reason is that, for finite "n", BIC can have a substantial risk of selecting a very bad model from the candidate set. This risk can arise even when "n" is much larger than "k"2. With AIC, the risk of selecting a very bad model is minimized.
If the "true model" is not in the candidate set, then the most that we can hope to do is select the model that best approximates the "true model". AIC is appropriate for finding the best approximating model, under certain assumptions. (Those assumptions include, in particular, that the approximating is done with regard to information loss.)
Comparison of AIC and BIC in the context of regression is given by . In regression, AIC is asymptotically optimal for selecting the model with the least mean squared error, under the assumption that the "true model" is not in the candidate set. BIC is not asymptotically optimal under the assumption. Yang additionally shows that the rate at which AIC converges to the optimum is, in a certain sense, the best possible.
Comparison with least squares.
Sometimes, each candidate model assumes that the residuals are distributed according to independent identical normal distributions (with zero mean). That gives rise to least squares model fitting.
With least squares fitting, the maximum likelihood estimate for the variance of a model's residuals distributions is
formula_7,
where the residual sum of squares is
formula_8
Then, the maximum value of a model's log-likelihood function is (see Normal distribution#Log-likelihood):
formula_9
where "C" is a constant independent of the model, and dependent only on the particular data points, i.e. it does not change if the data does not change.
That gives:
formula_10
Because only differences in AIC are meaningful, the constant "C" can be ignored, which allows us to conveniently take the following for model comparisons:
formula_11
Note that if all the models have the same "k", then selecting the model with minimum AIC is equivalent to selecting the model with minimum RSS—which is the usual objective of model selection based on least squares.
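A short sketch of this shortcut is given below (the RSS values, sample size and parameter counts are hypothetical; note that the parameter count includes the residual variance, as discussed under "Counting parameters").

```python
# A sketch: comparing least-squares fits on the same data needs only the
# residual sum of squares and the parameter count.  The numbers below are
# hypothetical.
import numpy as np

def delta_aic(rss, n, k):
    """AIC up to a model-independent constant, for i.i.d. normal residuals."""
    return 2 * k + n * np.log(rss / n)

print(delta_aic(12.4, n=40, k=3))   # e.g. a straight-line fit
print(delta_aic(11.9, n=40, k=5))   # e.g. a cubic fit with slightly lower RSS
```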
Comparison with cross-validation.
Leave-one-out cross-validation is asymptotically equivalent to AIC, for ordinary linear regression models. Asymptotic equivalence to AIC also holds for mixed-effects models.
Comparison with Mallows's "Cp".
Mallows's "Cp" is equivalent to AIC in the case of (Gaussian) linear regression.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat L"
},
{
"math_id": 1,
"text": "\\mathrm{AIC} \\, = \\, 2k - 2\\ln(\\hat L)"
},
{
"math_id": 2,
"text": "\n \\mathcal{L}(\\mu_1,\\sigma_1,\\mu_2,\\sigma_2) \\, = \\, \n"
},
{
"math_id": 3,
"text": " \\; \\; \\; \\; \\; \\; \\; \\; \n \\prod_{i=1}^{n_1} \\frac{1}{\\sqrt{2 \\pi}\\sigma_1} \\exp\\left( -\\frac{(x_i-\\mu_1)^2}{2\\sigma_1^2}\\right) \\; \\, \\boldsymbol\\cdot \\, \n \\prod_{i=n_1+1}^{n_1+n_2} \\frac{1}{\\sqrt{2 \\pi}\\sigma_2} \\exp\\left( -\\frac{(x_i-\\mu_2)^2}{2\\sigma_2^2}\\right) \n"
},
{
"math_id": 4,
"text": "\n\\mathcal{L}(p,q) \\, = \\, \n \\frac{m!}{m_1! (m-m_1)!} p^{m_1} (1-p)^{m-m_1} \\; \\, \\boldsymbol\\cdot \\; \\; \n \\frac{n!}{n_1! (n-n_1)!} q^{n_1} (1-q)^{n-n_1} \n"
},
{
"math_id": 5,
"text": "\\mathrm{AICc} \\, = \\, \\mathrm{AIC} + \\frac{2k^2 + 2k}{n - k - 1}"
},
{
"math_id": 6,
"text": "y \\mapsto \\, \\frac{1}{y} \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\,\\exp \\left(-\\frac{\\left(\\ln y-\\mu\\right)^2}{2\\sigma^2}\\right)"
},
{
"math_id": 7,
"text": "\\hat\\sigma^2 = \\mathrm{RSS}/n"
},
{
"math_id": 8,
"text": "\\textstyle \\mathrm{RSS} = \\sum_{i=1}^n (y_i-f(x_i;\\hat{\\theta}))^2"
},
{
"math_id": 9,
"text": "\\ln(\\hat L)=\n -\\frac{n}{2}\\ln(2\\pi) - \\frac{n}{2}\\ln(\\hat\\sigma^2) - \\frac{1}{2\\hat\\sigma^2}\\mathrm{RSS} \n\\, = \\, - \\frac{n}{2}\\ln(\\hat\\sigma^2) + C\n "
},
{
"math_id": 10,
"text": "\\mathrm{AIC} = 2k - 2\\ln(\\hat L) = 2k + n\\ln(\\hat\\sigma^2) - 2C"
},
{
"math_id": 11,
"text": "\\Delta \\mathrm{AIC} = 2k + n\\ln(\\hat\\sigma^2)"
}
]
| https://en.wikipedia.org/wiki?curid=690512 |
6906155 | Road speed limit enforcement in Australia | Overview of road speed limit enforcement methods applied in Australia
Road speed limit enforcement in Australia constitutes the actions taken by the authorities to force road users to comply with the speed limits in force on Australia's roads. Speed limit enforcement equipment such as speed cameras and other technologies such as radar and LIDAR are widely used by the authorities. In some regions, aircraft equipped with VASCAR devices are also used.
Each of the Australian states have their own speed limit enforcement policies and strategies and approved enforcement devices.
Methods.
Mobile Gatso speed camera.
This mobile camera or speed camera is used in Victoria and Queensland and can be operated in various manners. Without a flash, the only evidence of speed camera on the outside of the car is a black rectangular box, which sends out the radar beam, about 30 cm by 10 cm, mounted on the front of the car. On the older models of the camera, and on rainy days or in bad light, a cable is used to link it to a box with a flash placed just in front of the vehicle. The operator sits in the car and takes the pictures, which are then uploaded to a laptop computer. In both states unmarked cars are used. In Victoria these cameras are operated by Serco contractors, while in Queensland uniformed police officers operate them.
Many of the modern Gatso cameras now feature full capability, flashless operation. The advent of infra-red flash technology has provided Gatsos with the capacity to capture vehicles exceeding the limit in varying conditions - without emitting a bright flash, which in many cases can be considered distracting to the driver, especially if taken head-on. Infra-red light is invisible to the human eye, but when paired with a camera with an infra-red sensor, can be used as a flash to produce a clear image in low light conditions.
Mobile Multanova speed camera.
Used only in Western Australia, this Doppler RADAR-based camera is usually mounted on a tripod on the side of the road. It is sometimes covered by a black sheet and there is usually an "anywhere anytime" sign following it chained onto a pole or tree. It is sometimes incorrectly referred to as a "Multinova". Multanovas are manufactured by a Swiss company of the same name - Western Australia utilises the 6F and the 9F models.
During the daytime, the Multanova unit uses a standard "white" flash, but in low light or night time, a red filter is added to the flash so as to not dazzle the driver.
The camera is always accompanied by a white station wagon or by a black or, more commonly, a white, silver or brown Nissan X-Trail, staffed by an un-sworn police officer (not a contractor) who is responsible for assembling and disassembling the unit, supervising it and operating the accompanying laptop in the car for the few hours that it is deployed at a location. The Nissan X-Trail usually has a bull bar and spotlights on it and a large, thick antenna. The camera usually stays for about four to five hours. There were 25 in use in Perth at the beginning of 2008.
As of late 2011 Multanova use in WA has been discontinued in favour of LIDAR exclusively.
Fixed speed-only camera.
These cameras come in many forms, some free standing on poles; others mounted on bridges or overhead gantries. The cameras may consist of a box for taking photographs, as well as a smaller box for the flash, or only a single box containing all the instruments. Recently introduced infrared cameras do not emit a blinding flash and can therefore be used to take front-on photographs showing the driver's face.
Most states are now starting to replace older analogue film fixed cameras with modern digital variants.
Fixed speed cameras can use Doppler RADAR or Piezo strips embedded in the road to measure a vehicle's speed as it passes the camera.
However ANPR technology is also used to time vehicles between two or more fixed cameras that are a known distance apart (typically at least several kilometres). The average speed is then calculated using the formula: formula_0. The longer distance over which the speed is measured prevents drivers from slowing down momentarily for a camera before speeding up again. The SAFE-T-CAM system uses this technology, but was designed to target only heavy vehicles. Newer ANPR cameras in Victoria are able to target any vehicle.
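A minimal illustrative sketch of the average-speed calculation (the camera spacing, timestamps and speed limit are assumed values, not those of any particular deployment):

```python
# A sketch; camera spacing, timestamps and the speed limit are assumed values.
def average_speed_kmh(distance_km, seconds_elapsed):
    return distance_km / (seconds_elapsed / 3600.0)

# A vehicle photographed at both ends of a 10 km camera pair, 4 minutes apart:
speed = average_speed_kmh(10.0, seconds_elapsed=240.0)
print(speed, speed > 110.0)   # 150.0 km/h, exceeding an assumed 110 km/h limit
```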
Fixed dual speed and red light camera.
These cameras are used in the Northern Territory, South Australia, the Australian Capital Territory, Victoria, New South Wales and Western Australia. They detect speeding at the intersection as well as running a red light. They look the same as red light cameras, except they are digital and look slightly more modern. Some of the Victorian cameras are Traffipax brand.
In New South Wales and South Australia dual redlight/speed cameras are identified by a "Safety Camera" sign.
Queensland is in the process of investigating conversion to dual redlight/speed cameras as the current system is reaching end-of-life.
Other speed checking devices.
Police also use other technology that does not rely on photographs being taken of an offence, typically where officers enforce the speed limit in person.
'Silver Eagle'.
New South Wales police used the Silver Eagle vehicle-mounted unit. This radar device is typically mounted on the right hand side of the vehicle just behind the driver, and is operated from inside the vehicle. The units are approved for use only in rural areas where traffic is sparse, and may be used from a stationary or moving vehicle.
'Stalker'.
Police vehicles in New South Wales have recently been fitted with a dual-radar known as the Stalker DSR 2X, which is able to monitor vehicles moving in two different directions at the same time.
Other.
NSW police also use LIDAR devices as well as vehicle speedometers and speed estimates to prosecute speeding motorists.
The TIRTL device is deployed as a speed measurement sensor in Victoria and New South Wales. The device consists of a pair of sensors embedded in the curb that use a series of infrared beams to monitor vehicles at wheel height. Although the sensors themselves are very difficult to see, they are accompanied by a standard Traffipax camera to capture images of the offence. The state of New South Wales approved the device in November 2008 for use in the state as dual red light / speed cameras (named "safety cameras" under the Roads & Traffic Authority's terminology).
Motorcycle and bicycle-mounted police in New South Wales are equipped with the binocular-styled "Pro-Lite+" LIDAR device.
History.
Victoria.
Speed camera enforcement in Victoria started with a small trial in 1985 using signed cameras, with minimal effect. The major introduction was at the end of 1989 with hidden speed cameras starting at around 500 hours/month increasing to 4,000 hours/month by 1992. During the testing of the cameras the percentage of drivers speeding (over the speed camera thresholds) was 24% and by the end of 1992 this had dropped to 4%. The revenue collected by each camera dropped from $2,000/hour to $1,000/hour over 18 months. The road toll dropped from 776 in 1989 (no cameras) to 396 in 1992 (49% drop).
New South Wales.
Mobile speed cameras were first used in New South Wales in 1991. In 1999 the authorities began to install fixed cameras, and signs warning of their presence, at crash black spots.
Western Australia.
The government of Western Australia started using speed cameras in 1988.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " speed = {distance \\over time} "
}
]
| https://en.wikipedia.org/wiki?curid=6906155 |
69062088 | Configuration linear program | Linear programming for Combinatorial optimization
The configuration linear program (configuration-LP) is a linear programming technique used for solving combinatorial optimization problems. It was introduced in the context of the cutting stock problem. Later, it has been applied to the bin packing and job scheduling problems. In the configuration-LP, there is a variable for each possible "configuration" - each possible multiset of items that can fit in a single bin (these configurations are also known as "patterns"). Usually, the number of configurations is exponential in the problem size, but in some cases it is possible to attain approximate solutions using only a polynomial number of configurations.
In bin packing.
The integral LP.
In the bin packing problem, there are "n" items with different sizes. The goal is to pack the items into a minimum number of bins, where the total size of the items in each bin is at most "B". A "feasible configuration" is a set of sizes with a sum of at most "B".
Denote by "S" the set of different sizes (and their number). Denote by "C" the set of different configurations (and their number). For each size "s" in "S" and configuration "c" in "C", denote by "a""s","c" the number of items of size "s" in configuration "c", by "n""s" the number of items of size "s" that must be packed, and by "x""c" a variable counting the number of bins packed according to configuration "c".
Then, the configuration LP of bin-packing is: formula_0
formula_1 for all "s" in "S" (- all "ns" items of size "s" are packed).
formula_2 for all "c" in "C" (- there are at most "n" bins overall, so at most "n" of each individual configuration). The configuration LP is an integer linear program, so in general it is NP-hard. Moreover, even the problem itself is generally very large: it has "C" variables and "S" constraints. If the smallest item size is "eB" (for some fraction "e" in (0,1)), then there can be up to 1/"e" items in each bin, so the number of configurations "C" ~ "S"1/"e", which can be very large if "e" is small (if e is considered a constant, then the integer LP can be solved by exhaustive search: there are at most "S1/e" configurations, and for each configuration there are at most "n" possible values, so there are at most formula_3 combinations to check. For each combination, we have to check "S" constraints, so the run-time is formula_4, which is polynomial in "n" when "S, e" are constant).
However, this ILP serves as a basis for several approximation algorithms. The main idea of these algorithms is to reduce the original instance into a new instance in which "S" is small and "e" is large, so "C" is relatively small. Then, the ILP can be solved either by complete search (if "S", "C" are sufficiently small), or by relaxing it into a "fractional" LP.
The fractional LP.
The fractional configuration LP of bin-packing is the linear programming relaxation of the above ILP. It replaces the last constraint formula_2 with the constraint formula_5. In other words, each configuration can be used a fractional number of times. The relaxation was first presented by Gilmore and Gomory, and it is often called the Gilmore-Gomory linear program.
In short, the fractional LP can be written as follows:formula_6Where 1 is the vector (1...,1) of size "C", A is an "S"-by-"C" matrix in which each column represents a single configuration, and n is the vector ("n"1...,"nS").
Solving the fractional LP.
A linear program with no integrality constraints can be solved in time polynomial in the number of variables and constraints. The problem is that the number of variables in the fractional configuration LP is equal to the number of possible configurations, which might be huge. Karmarkar and Karp present an algorithm that overcomes this problem.
First, they construct the dual linear program of the fractional LP:formula_7. It has "S" variables "y"1,...,"y""S", and "C" constraints: for each configuration "c", there is a constraint formula_8, where formula_9 is the column of A representing the configuration "c". It has the following economic interpretation. For each size "s", we should determine a nonnegative price "y""s". Our profit is the total price of all items. We want to maximize the profit n·y subject to the constraints that the total price of the items in each configuration is at most 1.
Second, they apply a variant of the ellipsoid method, which does not need to list all the constraints - it just needs a "separation oracle". A separation oracle is an algorithm that, given a vector y, either asserts that it is feasible, or finds a constraint that it violates. The separation oracle for the dual LP can be implemented by solving the knapsack problem with sizes s and values y: if the optimal solution of the knapsack problem has a total value "at most" 1, then y is feasible; if it is "larger" than 1, then y is "not" feasible, and the optimal solution of the knapsack problem identifies a configuration for which the constraint is violated.
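A sketch of such a separation oracle is given below (integer item sizes are assumed so that a simple unbounded-knapsack dynamic program applies; the candidate prices in the example are made up).

```python
# A sketch with integer sizes, so that an unbounded-knapsack dynamic program
# applies; the candidate prices y below are made up.
def separation_oracle(sizes, y, B):
    # best[v] = (value, configuration) of the most valuable configuration
    #           whose total size is at most v
    best = [(0.0, [0] * len(sizes))] * (B + 1)
    for v in range(1, B + 1):
        best[v] = best[v - 1]
        for i, s in enumerate(sizes):
            if s <= v:
                value, config = best[v - s]
                if value + y[i] > best[v][0]:
                    better = config.copy()
                    better[i] += 1
                    best[v] = (value + y[i], better)
    value, config = best[B]
    return None if value <= 1 else config   # None: y satisfies all constraints

# Prices that are too high: the oracle returns a violated configuration
# (two items of size 5, with total price 1.2 > 1).
print(separation_oracle([5, 4, 3], y=[0.6, 0.45, 0.35], B=10))
```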
Third, they show that, with an approximate solution to the knapsack problem, one can get an approximate solution to the dual LP, and from this, an approximate solution to the primal LP; see Karmarkar-Karp bin packing algorithms.
All in all, for any tolerance factor "h", the algorithm finds a basic feasible solution of cost at most LOPT(I) + "h", and runs in time:
formula_10,
where "S" is the number of different sizes, "n" is the number of different items, and the size of the smallest item is "eB". In particular, if "e" ≥ 1/"n" and "h"=1, the algorithm finds a solution with at most LOPT+1 bins in time: formula_11. A randomized variant of this algorithm runs in expected time:
formula_12.
Rounding the fractional LP.
Karmarkar and Karp further developed a way to round the fractional LP into an approximate solution to the integral LP; see Karmarkar-Karp bin packing algorithms. Their proof shows that the additive integrality gap of this LP is in O(log2("n")). Later, Hoberg and Rothvoss improved their result and proved that the integrality gap is in O(log("n")). The best known lower bound on the integrality gap is a constant Ω(1). Finding the exact integrality gap is an open problem.
In bin covering.
In the bin covering problem, there are "n" items with different sizes. The goal is to pack the items into a "maximum" number of bins, where each bin should contain "at least" "B". A natural configuration LP for this problem could be:formula_13where A represents all configurations of items with sum "at least" "B" (one can take only the inclusion-minimal configurations). The problem with this LP is that, in the bin-covering problem, handling small items is problematic, since small items may be essential for the optimal solution. With small items allowed, the number of configurations may be too large even for the technique of Karmarkar and Karp. Csirik, Johnson and Kenyon present an alternative LP. First, they define a set of items that are called "small". Let "T" be the total size of all small items. Then, they construct a matrix A representing all configurations with sum < 2. Then, they consider the above LP with one additional constraint:formula_14formula_15formula_16formula_17The additional constraint guarantees that the "vacant space" in the bins can be filled by the small items. The dual of this LP is more complex and cannot be solved by a simple knapsack-problem separation oracle. Csirik, Johnson and Kenyon present a different method to solve it approximately in time exponential in 1/epsilon. Jansen and Solis-Oba present an improved method to solve it approximately in time exponential in 1/epsilon.
In machine scheduling.
In the problem of unrelated-machines scheduling, there are some "m" different machines that should process some "n" different jobs. When machine "i" processes job "j", it takes time "pi","j". The goal is to partition the jobs among the machines such that the maximum completion time of a machine is as small as possible. The decision version of this problem is: given time "T", is there a partition in which the completion time of all machines is at most "T"?
For each machine "i", there are finitely many subsets of jobs that can be processed by machine "i" in time at most "T". Each such subset is called a "configuration" for machine "i". Denote by "Ci"("T") the set of all configurations for machine "i", given time "T". For each machine "i" and configuration "c" in "Ci"("T"), define a variable formula_18 which equals 1 iff the actual configuration used in machine "i" is "c", and 0 otherwise. Then, the LP constraints are:
formula_19 for every machine "i" (- each machine uses exactly one configuration);
formula_20 for every job "j" (- each job appears in exactly one chosen configuration);
formula_21 (- the variables are binary).
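The following sketch (with assumed processing times) lists the configurations "Ci"("T") of a single machine, i.e. the index set of the variables formula_18 above.

```python
# A sketch with assumed processing times for one machine and deadline T.
from itertools import combinations

def machine_configurations(times, T):
    """All subsets of job indices whose total processing time is at most T."""
    jobs = range(len(times))
    for r in range(len(times) + 1):
        for subset in combinations(jobs, r):
            if sum(times[j] for j in subset) <= T:
                yield subset

p_i = [4, 3, 5, 2]     # p_{i,j} for jobs 0..3 on machine i (assumed)
print(list(machine_configurations(p_i, T=7)))
```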
Properties.
The integrality gap of the configuration-LP for unrelated-machines scheduling is 2.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{minimize}~~~\\sum_{c\\in C}x_c~~~\\text{subject to}"
},
{
"math_id": 1,
"text": "\\sum_{c\\in C}a_{s,c}x_c \\geq n_s"
},
{
"math_id": 2,
"text": "x_c\\in\\{0,\\ldots,n\\}"
},
{
"math_id": 3,
"text": " n^{S^{1/e}}"
},
{
"math_id": 4,
"text": "S\\cdot n^{S^{1/e}}"
},
{
"math_id": 5,
"text": "x_c \\geq 0"
},
{
"math_id": 6,
"text": "\\text{minimize}~~\\mathbf{1}\\cdot \\mathbf{x}~~~\\text{s.t.}~~ A \\mathbf{x}\\geq \\mathbf{n}~~~\\text{and}~~ \\mathbf{x}\\geq 0"
},
{
"math_id": 7,
"text": "\\text{maximize}~~\\mathbf{n}\\cdot \\mathbf{y}~~~\\text{s.t.}~~ A^T \\mathbf{y} \\leq \\mathbf{1}~~~\\text{and}~~ \\mathbf{y}\\geq 0"
},
{
"math_id": 8,
"text": "A^c\\cdot y\\leq 1"
},
{
"math_id": 9,
"text": "A^c"
},
{
"math_id": 10,
"text": "O\\left(S^8 \\log{S} \\log^2(\\frac{S n}{e h}) + \\frac{S^4 n \\log{S}}{h}\\log(\\frac{S n}{e h}) \\right)"
},
{
"math_id": 11,
"text": "O\\left(S^8 \\log{S} \\log^2{n} + S^4 n \\log{S}\\log{n} \\right)"
},
{
"math_id": 12,
"text": "O\\left(S^7 \\log{S} \\log^2(\\frac{S n}{e h}) + \\frac{S^4 n \\log{S}}{h}\\log(\\frac{S n}{e h}) \\right)"
},
{
"math_id": 13,
"text": "\\text{maximize}~~\\mathbf{1}\\cdot \\mathbf{x}~~~\\text{s.t.}~~ A \\mathbf{x}\\leq \\mathbf{n}~~~\\text{and}~~ \\mathbf{x}\\geq 0"
},
{
"math_id": 14,
"text": "\\text{maximize}~~\\mathbf{1}\\cdot \\mathbf{x}~~\\text{s.t.}"
},
{
"math_id": 15,
"text": "A \\mathbf{x}\\leq \\mathbf{n}"
},
{
"math_id": 16,
"text": "\\sum_{c\\in C: sum(c)<B} (B-sum(c))\\cdot x_c \\leq T"
},
{
"math_id": 17,
"text": "\\mathbf{x}\\geq 0"
},
{
"math_id": 18,
"text": "x_{i,c}"
},
{
"math_id": 19,
"text": "\\sum_{c\\in C_i(T)}x_{i,c} = 1"
},
{
"math_id": 20,
"text": "\\sum_{i=1}^m \\sum_{c\\ni j, c\\in C_i(T)}x_{i,c} = 1"
},
{
"math_id": 21,
"text": "x_{i,j} \\in \\{0,1\\}"
}
]
| https://en.wikipedia.org/wiki?curid=69062088 |
690647 | Edge coloring | Problem of coloring a graph's edges such that meeting edges do not match
In graph theory, a proper edge coloring of a graph is an assignment of "colors" to the edges of the graph so that no two incident edges have the same color. For example, the figure to the right shows an edge coloring of a graph by the colors red, blue, and green. Edge colorings are one of several different types of graph coloring. The edge-coloring problem asks whether it is possible to color the edges of a given graph using at most k different colors, for a given value of k, or with the fewest possible colors. The minimum required number of colors for the edges of a given graph is called the chromatic index of the graph. For example, the edges of the graph in the illustration can be colored by three colors but cannot be colored by two colors, so the graph shown has chromatic index three.
By Vizing's theorem, the number of colors needed to edge color a simple graph is either its maximum degree Δ or Δ+1. For some graphs, such as bipartite graphs and high-degree planar graphs, the number of colors is always Δ, and for multigraphs, the number of colors may be as large as 3Δ/2. There are polynomial time algorithms that construct optimal colorings of bipartite graphs, and colorings of non-bipartite simple graphs that use at most Δ+1 colors; however, the general problem of finding an optimal edge coloring is NP-hard and the fastest known algorithms for it take exponential time. Many variations of the edge-coloring problem, in which an assignments of colors to edges must satisfy other conditions than non-adjacency, have been studied. Edge colorings have applications in scheduling problems and in frequency assignment for fiber optic networks.
Examples.
A cycle graph may have its edges colored with two colors if the length of the cycle is even: simply alternate the two colors around the cycle. However, if the length is odd, three colors are needed.
A complete graph Kn with n vertices is edge-colorable with "n" − 1 colors when n is an even number; this is a special case of Baranyai's theorem. provides the following geometric construction of a coloring in this case: place n points at the vertices and center of a regular ("n" − 1)-sided polygon. For each color class, include one edge from the center to one of the polygon vertices, and all of the perpendicular edges connecting pairs of polygon vertices. However, when n is odd, n colors are needed: each color can only be used for ("n" − 1)/2 edges, a 1/"n" fraction of the total.
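The construction for even n can be written out directly; the sketch below (a straightforward transcription of the rotation idea, with no claims about the original presentation) returns the "n" − 1 color classes, each of which is a perfect matching.

```python
# A sketch of the rotation ("circle") construction: vertices 0..n-2 on a
# regular polygon plus a centre vertex n-1; each color class pairs the centre
# with one polygon vertex and adds the chords perpendicular to that spoke.
def complete_graph_edge_coloring(n):
    assert n % 2 == 0, "this construction needs an even number of vertices"
    m = n - 1                        # polygon vertices 0..m-1, centre vertex m
    classes = []
    for r in range(m):               # one color per rotation
        matching = [(m, r)]          # spoke from the centre to vertex r
        for i in range(1, m // 2 + 1):
            matching.append(((r + i) % m, (r - i) % m))
        classes.append(matching)
    return classes                   # n-1 perfect matchings covering all edges

for color, matching in enumerate(complete_graph_edge_coloring(6)):
    print(color, sorted(tuple(sorted(e)) for e in matching))
```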
Several authors have studied edge colorings of the odd graphs, n-regular graphs in which the vertices represent teams of "n" − 1 players selected from a pool of 2"n" − 1 players, and in which the edges represent possible pairings of these teams (with one player left as "odd man out" to referee the game). The case that "n" = 3 gives the well-known Petersen graph. As explains the problem (for "n" = 6), the players wish to find a schedule for these pairings such that each team plays each of its six games on different days of the week, with Sundays off for all teams; that is, formalizing the problem mathematically, they wish to find a 6-edge-coloring of the 6-regular odd graph "O"6. When n is 3, 4, or 8, an edge coloring of "O""n" requires "n" + 1 colors, but when it is 5, 6, or 7, only n colors are needed.
Definitions.
As with its vertex counterpart, an edge coloring of a graph, when mentioned without any qualification, is always assumed to be a proper coloring of the edges, meaning no two adjacent edges are assigned the same color. Here, two distinct edges are considered to be adjacent when they share a common vertex. An edge coloring of a graph G may also be thought of as equivalent to a vertex coloring of the line graph "L"("G"), the graph that has a vertex for every edge of G and an edge for every pair of adjacent edges in G.
A proper edge coloring with k different colors is called a (proper) k-edge-coloring. A graph that can be assigned a k-edge-coloring is said to be k-edge-colorable. The smallest number of colors needed in a (proper) edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′("G"). The chromatic index is also sometimes written using the notation χ1("G"); in this notation, the subscript one indicates that edges are one-dimensional objects. A graph is k-edge-chromatic if its chromatic index is exactly k. The chromatic index should not be confused with the chromatic number χ("G") or χ0("G"), the minimum number of colors needed in a proper vertex coloring of G.
Unless stated otherwise all graphs are assumed to be simple, in contrast to multigraphs in which two or more edges may connect the same pair of endpoints and in which there may be self-loops. For many problems in edge coloring, simple graphs behave differently from multigraphs, and additional care is needed to extend theorems about edge colorings of simple graphs to the multigraph case.
Relation to matching.
A matching in a graph G is a set of edges, no two of which are adjacent; a perfect matching is a matching that includes edges touching all of the vertices of the graph, and a maximum matching is a matching that includes as many edges as possible. In an edge coloring, the set of edges with any one color must all be non-adjacent to each other, so they form a matching. That is, a proper edge coloring is the same thing as a partition of the graph into disjoint matchings.
If the size of a maximum matching in a given graph is small, then many matchings will be needed in order to cover all of the edges of the graph. Expressed more formally, this reasoning implies that if a graph has m edges in total, and if at most β edges may belong to a maximum matching, then every edge coloring of the graph must use at least "m"/β different colors. For instance, the 16-vertex planar graph shown in the illustration has "m" = 24 edges. In this graph, there can be no perfect matching; for, if the center vertex is matched, the remaining unmatched vertices may be grouped into three different connected components with four, five, and five vertices, and the components with an odd number of vertices cannot be perfectly matched. However, the graph has maximum matchings with seven edges, so β = 7. Therefore, the number of colors needed to edge-color the graph is at least 24/7, and since the number of colors must be an integer it is at least four.
For a regular graph of degree k that does not have a perfect matching, this lower bound can be used to show that at least "k" + 1 colors are needed. In particular, this is true for a regular graph with an odd number of vertices (such as the odd complete graphs); for such graphs, by the handshaking lemma, k must itself be even. However, the inequality χ′ ≥ "m"/β does not fully explain the chromatic index of every regular graph, because there are regular graphs that do have perfect matchings but that are not "k"-edge-colorable. For instance, the Petersen graph is regular, with "m" = 15 and with β = 5 edges in its perfect matchings, but it does not have a 3-edge-coloring.
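The lower bound χ′ ≥ "m"/β can be evaluated mechanically; the sketch below (assuming the networkx library is available) does so for the Petersen graph, where the bound gives 3 even though four colors are needed.

```python
# A sketch assuming the networkx library is available.
import math
import networkx as nx

G = nx.petersen_graph()
m = G.number_of_edges()                                      # 15
beta = len(nx.max_weight_matching(G, maxcardinality=True))   # 5
print(math.ceil(m / beta))   # 3: a valid lower bound, but chi' = 4 here
```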
Relation to degree.
Vizing's theorem.
The edge chromatic number of a graph G is very closely related to the maximum degree Δ("G"), the largest number of edges incident to any single vertex of G. Clearly, χ′("G") ≥ Δ("G"), for if Δ different edges all meet at the same vertex v, then all of these edges need to be assigned different colors from each other, and that can only be possible if there are at least Δ colors available to be assigned. Vizing's theorem (named for Vadim G. Vizing who published it in 1964) states that this bound is almost tight: for any graph, the edge chromatic number is either Δ("G") or Δ("G") + 1.
When χ′("G") = Δ("G"), "G" is said to be of class 1; otherwise, it is said to be of class 2.
Every bipartite graph is of class 1, and almost all random graphs are of class 1. However, it is NP-complete to determine whether an arbitrary graph is of class 1.
proved that planar graphs of maximum degree at least eight are of class one and conjectured that the same is true for planar graphs of maximum degree seven or six. On the other hand, there exist planar graphs of maximum degree ranging from two through five that are of class two. The conjecture has since been proven for graphs of maximum degree seven. Bridgeless planar cubic graphs are all of class 1; this is an equivalent form of the four color theorem.
Regular graphs.
A 1-factorization of a "k"-regular graph, a partition of the edges of the graph into perfect matchings, is the same thing as a "k"-edge-coloring of the graph. That is, a regular graph has a 1-factorization if and only if it is of class 1. As a special case of this, a 3-edge-coloring of a cubic (3-regular) graph is sometimes called a Tait coloring.
Not every regular graph has a 1-factorization; for instance, the Petersen graph does not. More generally the snarks are defined as the graphs that, like the Petersen graph, are bridgeless, 3-regular, and of class 2.
According to the theorem of , every bipartite regular graph has a 1-factorization. The theorem was stated earlier in terms of projective configurations and was proven by Ernst Steinitz.
Multigraphs.
For multigraphs, in which multiple parallel edges may connect the same two vertices, results that are similar to but weaker than Vizing's theorem are known relating the edge chromatic number χ′("G"), the maximum degree Δ("G"), and the multiplicity μ("G"), the maximum number of edges in any bundle of parallel edges. As a simple example showing that Vizing's theorem does not generalize to multigraphs, consider a Shannon multigraph, a multigraph with three vertices and three bundles of μ("G") parallel edges connecting each of the three pairs of vertices. In this example, Δ("G") = 2μ("G") (each vertex is incident to only two out of the three bundles of μ("G") parallel edges) but the edge chromatic number is 3μ("G") (there are 3μ("G") edges in total, and every two edges are adjacent, so all edges must be assigned different colors to each other). In a result that inspired Vizing, showed that this is the worst case: χ′("G") ≤ (3/2)Δ("G") for any multigraph G. Additionally, for any multigraph G, χ′("G") ≤ Δ("G") + μ("G"), an inequality that reduces to Vizing's theorem in the case of simple graphs (for which μ("G") = 1).
Algorithms.
Because the problem of testing whether a graph is class 1 is NP-complete, there is no known polynomial time algorithm for edge-coloring every graph with an optimal number of colors. Nevertheless, a number of algorithms have been developed that relax one or more of these criteria: they only work on a subset of graphs, or they do not always use an optimal number of colors, or they do not always run in polynomial time.
Optimally coloring special classes of graphs.
In the case of bipartite graphs or multigraphs with maximum degree Δ, the optimal number of colors is exactly Δ. showed that an optimal edge coloring of these graphs can be found in the near-linear time bound O("m" log Δ), where m is the number of edges in the graph; simpler, but somewhat slower, algorithms are described by and . The algorithm of begins by making the input graph regular, without increasing its degree or significantly increasing its size, by merging pairs of vertices that belong to the same side of the bipartition and then adding a small number of additional vertices and edges. Then, if the degree is odd, Alon finds a single perfect matching in near-linear time, assigns it a color, and removes it from the graph, causing the degree to become even. Finally, Alon applies an observation of , that selecting alternating subsets of edges in an Euler tour of the graph partitions it into two regular subgraphs, to split the edge coloring problem into two smaller subproblems, and his algorithm solves the two subproblems recursively. The total time for his algorithm is O("m" log "m").
For planar graphs with maximum degree Δ ≥ 7, the optimal number of colors is again exactly Δ. With the stronger assumption that Δ ≥ 9, it is possible to find an optimal edge coloring in linear time .
For d-regular graphs which are pseudo-random in the sense that their adjacency matrix has second largest eigenvalue (in absolute value) at most d1−ε, d is the optimal number of colors .
Algorithms that use more than the optimal number of colors.
and describe polynomial time algorithms for coloring any graph with Δ + 1 colors, meeting the bound given by Vizing's theorem; see Misra & Gries edge coloring algorithm.
For multigraphs, present the following algorithm, which they attribute to Eli Upfal. Make the input multigraph G Eulerian by adding a new vertex connected by an edge to every odd-degree vertex, find an Euler tour, and choose an orientation for the tour. Form a bipartite graph H in which there are two copies of each vertex of G, one on each side of the bipartition, with an edge from a vertex u on the left side of the bipartition to a vertex v on the right side of the bipartition whenever the oriented tour has an edge from u to v in G. Apply a bipartite graph edge coloring algorithm to H. Each color class in H corresponds to a set of edges in G that form a subgraph with maximum degree two; that is, a disjoint union of paths and cycles, so for each color class in H it is possible to form three color classes in G. The time for the algorithm is bounded by the time to edge color a bipartite graph, O("m" log Δ) using the algorithm of . The number of colors this algorithm uses is at most formula_0, close to but not quite the same as Shannon's bound of formula_1. It may also be made into a parallel algorithm in a straightforward way. In the same paper, Karloff and Shmoys also present a linear time algorithm for coloring multigraphs of maximum degree three with four colors (matching both Shannon's and Vizing's bounds) that operates on similar principles: their algorithm adds a new vertex to make the graph Eulerian, finds an Euler tour, and then chooses alternating sets of edges on the tour to split the graph into two subgraphs of maximum degree two. The paths and even cycles of each subgraph may be colored with two colors per subgraph. After this step, each remaining odd cycle contains at least one edge that may be colored with one of the two colors belonging to the opposite subgraph. Removing this edge from the odd cycle leaves a path, which may be colored using the two colors for its subgraph.
A greedy coloring algorithm that considers the edges of a graph or multigraph one by one, assigning each edge the first available color, may sometimes use as many as 2Δ − 1 colors, which may be nearly twice as many colors as necessary. However, it has the advantage that it may be used in the online algorithm setting in which the input graph is not known in advance; in this setting, its competitive ratio is two, and this is optimal: no other online algorithm can achieve a better performance. However, if edges arrive in a random order, and the input graph has a degree that is at least logarithmic, then smaller competitive ratios can be achieved.
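A sketch of the greedy procedure (the edge list below, a 5-cycle, and its arrival order are arbitrary examples):

```python
# A sketch; the edge list below (a 5-cycle) and its order are arbitrary.
from collections import defaultdict

def greedy_edge_coloring(edges):
    used_at = defaultdict(set)          # vertex -> colors of incident edges
    coloring = {}
    for u, v in edges:                  # edges are processed in arrival order
        color = 0
        while color in used_at[u] or color in used_at[v]:
            color += 1                  # first color free at both endpoints
        coloring[(u, v)] = color
        used_at[u].add(color)
        used_at[v].add(color)
    return coloring

print(greedy_edge_coloring([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
# uses colors {0, 1, 2}: three colors, which is optimal for an odd cycle
```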
Several authors have made conjectures that imply that the fractional chromatic index of any multigraph (a number that can be computed in polynomial time using linear programming) is within one of the chromatic index. If these conjectures are true, it would be possible to compute a number that is never more than one off from the chromatic index in the multigraph case, matching what is known via Vizing's theorem for simple graphs. Although unproven in general, these conjectures are known to hold when the chromatic index is at least formula_2, as can happen for multigraphs with sufficiently large multiplicity.
Exact algorithms.
It is straightforward to test whether a graph may be edge colored with one or two colors, so the first nontrivial case of edge coloring is testing whether a graph has a 3-edge-coloring.
As showed, it is possible to test whether a graph has a 3-edge-coloring in time O(1.344"n"), while using only polynomial space. Although this time bound is exponential, it is significantly faster than a brute force search over all possible assignments of colors to edges. Every biconnected 3-regular graph with n vertices has O(2"n"/2) 3-edge-colorings; all of which can be listed in time O(2"n"/2) (somewhat slower than the time to find a single coloring); as Greg Kuperberg observed, the graph of a prism over an "n"/2-sided polygon has Ω(2"n"/2) colorings (lower instead of upper bound), showing that this bound is tight.
By applying exact algorithms for vertex coloring to the line graph of the input graph, it is possible to optimally edge-color any graph with m edges, regardless of the number of colors needed, in time 2"m""m"O(1) and exponential space, or in time O(2.2461"m") and only polynomial space .
Because edge coloring is NP-complete even for three colors, it is unlikely to be fixed parameter tractable when parametrized by the number of colors. However, it is tractable for other parameters. In particular, showed that for graphs of treewidth w, an optimal edge coloring can be computed in time O("nw"(6"w")"w"("w" + 1)/2), a bound that depends superexponentially on w but only linearly on the number n of vertices in the graph.
formulate the edge coloring problem as an integer program and describe their experience using an integer programming solver to edge color graphs. However, they did not perform any complexity analysis of their algorithm.
Additional properties.
A graph is uniquely k-edge-colorable if there is only one way of partitioning the edges into k color classes, ignoring the "k"! possible permutations of the colors. For "k" ≠ 3, the only uniquely k-edge-colorable graphs are paths, cycles, and stars, but for "k" = 3 other graphs may also be uniquely k-edge-colorable. Every uniquely 3-edge-colorable graph has exactly three Hamiltonian cycles (formed by deleting one of the three color classes) but there exist 3-regular graphs that have three Hamiltonian cycles and are not uniquely 3-colorable, such as the generalized Petersen graphs "G"(6"n" + 3, 2) for "n" ≥ 2. The only known nonplanar uniquely 3-colorable graph is the generalized Petersen graph "G"(9,2), and it has been conjectured that no others exist.
investigated the non-increasing sequences of numbers "m"1, "m"2, "m"3, ... with the property that there exists a proper edge coloring of a given graph G with "m"1 edges of the first color, "m"2 edges of the second color, etc. They observed that, if a sequence P is feasible in this sense, and is greater in lexicographic order than a sequence Q with the same sum, then Q is also feasible. For, if "P" > "Q" in lexicographic order, then P can be transformed into Q by a sequence of steps, each of which reduces one of the numbers "mi" by one unit and increases another later number
"mj" with "i" < "j" by one unit. In terms of edge colorings, starting from a coloring that realizes P, each of these same steps may be performed by swapping colors i and j on a Kempe chain, a maximal path of edges that alternate between the two colors. In particular, any graph has an equitable edge coloring, an edge coloring with an optimal number of colors in which every two color classes differ in size by at most one unit.
The De Bruijn–Erdős theorem may be used to transfer many edge coloring properties of finite graphs to infinite graphs. For instance, Shannon's and Vizing's theorems relating the degree of a graph to its chromatic index both generalize straightforwardly to infinite graphs.
considers the problem of finding a graph drawing of a given cubic graph with the properties that all of the edges in the drawing have one of three different slopes and that no two edges lie on the same line as each other. If such a drawing exists, then clearly the slopes of the edges may be used as colors in a 3-edge-coloring of the graph. For instance, the drawing of the utility graph "K"3,3 as the edges and long diagonals of a regular hexagon represents a 3-edge-coloring of the graph in this way. As Richter shows, a 3-regular simple bipartite graph, with a given Tait coloring, has a drawing of this type that represents the given coloring if and only if the graph is 3-edge-connected. For a non-bipartite graph, the condition is a little more complicated: a given coloring can be represented by a drawing if the bipartite double cover of the graph is 3-edge-connected, and if deleting any monochromatic pair of edges leads to a subgraph that is still non-bipartite. These conditions may all be tested easily in polynomial time; however, the problem of testing whether a 4-edge-colored 4-regular graph has a drawing with edges of four slopes, representing the colors by slopes, is complete for the existential theory of the reals, a complexity class at least as difficult as being NP-complete.
As well as being related to the maximum degree and maximum matching number of a graph, the chromatic index is closely related to the linear arboricity la("G") of a graph G, the minimum number of linear forests (disjoint unions of paths) into which the graph's edges may be partitioned. A matching is a special kind of linear forest, and in the other direction, any linear forest can be 2-edge-colored, so for every G it follows that la("G") ≤ χ′("G") ≤ 2 la("G"). Akiyama's conjecture (named for Jin Akiyama) states that formula_3, from which it would follow more strongly that 2 la("G") − 2 ≤ χ′("G") ≤ 2 la("G"). For graphs of maximum degree three, la("G") is always exactly two, so in this case the bound χ′("G") ≤ 2 la("G") matches the bound given by Vizing's theorem.
Other types.
The Thue number of a graph is the number of colors required in an edge coloring meeting the stronger requirement that, in every even-length path, the first and second halves of the path form different sequences of colors.
The arboricity of a graph is the minimum number of colors required so that the edges of each color have no cycles (rather than, in the standard edge coloring problem, having no adjacent pairs of edges). That is, it is the minimum number of forests into which the edges of the graph may be partitioned into. Unlike the chromatic index, the arboricity of a graph may be computed in polynomial time.
List edge-coloring is a problem in which one is given a graph in which each edge is associated with a list of colors, and must find a proper edge coloring in which the color of each edge is drawn from that edge's list. The list chromatic index of a graph G is the smallest number k with the property that, no matter how one chooses lists of colors for the edges, as long as each edge has at least k colors in its list, then a coloring is guaranteed to be possible. Thus, the list chromatic index is always at least as large as the chromatic index. The Dinitz conjecture on the completion of partial Latin squares may be rephrased as the statement that the list edge chromatic number of the complete bipartite graph "Kn,n" equals its edge chromatic number, n. resolved the conjecture by proving, more generally, that in every bipartite graph the chromatic index and list chromatic index are equal. The equality between the chromatic index and the list chromatic index has been conjectured to hold, even more generally, for arbitrary multigraphs with no self-loops; this conjecture remains open.
Many other commonly studied variations of vertex coloring have also been extended to edge colorings. For instance, complete edge coloring is the edge-coloring variant of complete coloring, a proper edge coloring in which each pair of colors must be represented by at least one pair of adjacent edges and in which the goal is to maximize the total number of colors. Strong edge coloring is the edge-coloring variant of strong coloring, an edge coloring in which every two edges with adjacent endpoints must have different colors. Strong edge coloring has applications in channel allocation schemes for wireless networks.
Acyclic edge coloring is the edge-coloring variant of acyclic coloring, an edge coloring for which every two color classes form an acyclic subgraph (that is, a forest). The acyclic chromatic index of a graph formula_4, denoted by formula_5, is the smallest number of colors needed to have a proper acyclic edge coloring of formula_4. It has been conjectured that formula_6, where formula_7 is the maximum degree of formula_4. Currently the best known bound is formula_8. The problem becomes easier when formula_4 has large girth. More specifically, there is a constant formula_9 such that if the girth of formula_4 is at least formula_10, then formula_11. A similar result is that for all formula_12 there exists an formula_13 such that if formula_4 has girth at least formula_13, then formula_14.
3-edge-colorings of cubic graphs with the additional property that no two bichromatic cycles share more than a single edge with each other have also been studied. The existence of such a coloring is equivalent to the existence of a drawing of the graph on a three-dimensional integer grid, with edges parallel to the coordinate axes and each axis-parallel line containing at most two vertices. However, like the standard 3-edge-coloring problem, finding a coloring of this type is NP-complete.
Total coloring is a form of coloring that combines vertex and edge coloring, by requiring both the vertices and edges to be colored. Any incident pair of a vertex and an edge, or an edge and an edge, must have distinct colors, as must any two adjacent vertices. It has been conjectured (combining Vizing's theorem and Brooks' theorem) that any graph has a total coloring in which the number of colors is at most the maximum degree plus two, but this remains unproven.
If a 3-regular graph on a surface is 3-edge-colored, its dual graph forms a triangulation of the surface which is also edge colored (although not, in general, properly edge colored) in such a way that every triangle has one edge of each color. Other colorings and orientations of triangulations, with other local constraints on how the colors are arranged at the vertices or faces of the triangulation, may be used to encode several types of geometric object. For instance, rectangular subdivisions (partitions of a rectangular subdivision into smaller rectangles, with three rectangles meeting at every vertex) may be described combinatorially by a "regular labeling", a two-coloring of the edges of a triangulation dual to the subdivision, with the constraint that the edges incident to each vertex form four contiguous subsequences, within each of which the colors are the same. This labeling is dual to a coloring of the rectangular subdivision itself in which the vertical edges have one color and the horizontal edges have the other color. Similar local constraints on the order in which colored edges may appear around a vertex may also be used to encode straight-line grid embeddings of planar graphs and three-dimensional polyhedra with axis-parallel sides. For each of these three types of regular labelings, the set of regular labelings of a fixed graph forms a distributive lattice that may be used to quickly list all geometric structures based on the same graph (such as all axis-parallel polyhedra having the same skeleton) or to find structures satisfying additional constraints.
A deterministic finite automaton may be interpreted as a directed graph in which each vertex has the same out-degree d, and in which the edges are d-colored in such a way that every two edges with the same source vertex have distinct colors. The road coloring problem is the problem of edge-coloring a directed graph with uniform out-degrees, in such a way that the resulting automaton has a synchronizing word. The road coloring theorem states that such a coloring can be found whenever the given graph is strongly connected and aperiodic.
Ramsey's theorem concerns the problem of k-coloring the edges of a large complete graph "Kn" in order to avoid creating monochromatic complete subgraphs "Ks" of some given size s. According to the theorem, there exists a number "R""k"("s") such that, whenever "n" ≥ "R""k"("s"), such a coloring is not possible. For instance, "R"2(3) = 6, that is, if the edges of the graph "K"6 are 2-colored, there will always be a monochromatic triangle.
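As a small illustration of the case "R"2(3) = 6, the following Python sketch (an exhaustive check added here only as an example, not part of the standard treatment) verifies that some 2-coloring of the edges of "K"5 avoids monochromatic triangles while every 2-coloring of the edges of "K"6 contains one:

```python
# Brute-force check of R_2(3) = 6: K_5 admits a triangle-free 2-coloring
# of its edges (pentagon vs. pentagram), while K_6 does not.
from itertools import combinations, product

def has_monochromatic_triangle(n, coloring):
    # coloring maps each edge (i, j) with i < j to 0 or 1
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def some_coloring_avoids_triangle(n):
    edges = list(combinations(range(n), 2))
    return any(
        not has_monochromatic_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))
    )

print(some_coloring_avoids_triangle(5))  # True
print(some_coloring_avoids_triangle(6))  # False
```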
A path in an edge-colored graph is said to be a rainbow path if no color repeats on it. A graph is said to be rainbow colored if there is a rainbow path between every pair of vertices.
An edge coloring of a graph G with colors 1, ..., t is an interval t-coloring if all colors are used, and the colors of edges incident to each vertex of G are distinct and form an interval of integers.
Applications.
Edge colorings of complete graphs may be used to schedule a round-robin tournament into as few rounds as possible so that each pair of competitors plays each other in one of the rounds; in this application, the vertices of the graph correspond to the competitors in the tournament, the edges correspond to games, and the edge colors correspond to the rounds in which the games are played. Similar coloring techniques may also be used to schedule other sports pairings that are not all-play-all; for instance, in the National Football League, the pairs of teams that will play each other in a given year are determined, based on the teams' records from the previous year, and then an edge coloring algorithm is applied to the graph formed by the set of pairings in order to assign games to the weekends on which they are played. For this application, Vizing's theorem implies that no matter what set of pairings is chosen (as long as no teams play each other twice in the same season), it is always possible to find a schedule that uses at most one more weekend than there are games per team.
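One standard way to produce such a schedule explicitly is the classical "circle method"; the following Python sketch (team labels 0, ..., n − 1 are arbitrary placeholders) generates n − 1 rounds for an even number n of teams, each round being a perfect matching, which amounts to a proper edge coloring of "K""n" with n − 1 colors:

```python
# Circle method: a proper edge coloring of K_n (n even) with n - 1 colors,
# i.e. a round-robin schedule in which every round is a perfect matching.
def round_robin(n):
    assert n % 2 == 0, "for odd n, add a dummy team that gives a bye"
    rounds = []
    for r in range(n - 1):
        pairs = [(n - 1, r)]                     # the fixed team plays team r
        for i in range(1, n // 2):
            pairs.append(((r + i) % (n - 1), (r - i) % (n - 1)))
        rounds.append(pairs)
    return rounds

for week, games in enumerate(round_robin(6), start=1):
    print(f"round {week}: {games}")
```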
Open shop scheduling is a problem of scheduling production processes, in which there are a set of objects to be manufactured, each object has a set of tasks to be performed on it (in any order), and each task must be performed on a specific machine, preventing any other task that requires the same machine from being performed at the same time. If all tasks have the same length, then this problem may be formalized as one of edge coloring a bipartite multigraph, in which the vertices on one side of the bipartition represent the objects to be manufactured, the vertices on the other side of the bipartition represent the manufacturing machines, the edges represent tasks that must be performed, and the colors represent time steps in which each task may be performed. Since bipartite edge coloring may be performed in polynomial time, the same is true for this restricted case of open shop scheduling.
The problem of link scheduling for time-division multiple access network communications protocols on sensor networks has been studied as a variant of edge coloring. In this problem, one must choose time slots for the edges of a wireless communications network so that each node of the network can communicate with each neighboring node without interference. Using a strong edge coloring (and using two time slots for each edge color, one for each direction) would solve the problem but might use more time slots than necessary. Instead, one seeks a coloring of the directed graph formed by doubling each undirected edge of the network, with the property that each directed edge uv has a different color from the edges that go out from v and from the neighbors of v. A proposed heuristic for this problem is based on a distributed algorithm for (Δ + 1)-edge-coloring together with a postprocessing phase that reschedules edges that might interfere with each other.
In fiber-optic communication, the path coloring problem is the problem of assigning colors (frequencies of light) to pairs of nodes that wish to communicate with each other, and paths through a fiber-optic communications network for each pair, subject to the restriction that no two paths that share a segment of fiber use the same frequency as each other. Paths that pass through the same communication switch but not through any segment of fiber are allowed to use the same frequency. When the communications network is arranged as a star network, with a single central switch connected by separate fibers to each of the nodes, the path coloring problem may be modeled exactly as a problem of edge coloring a graph or multigraph, in which the communicating nodes form the graph vertices, pairs of nodes that wish to communicate form the graph edges, and the frequencies that may be used for each pair form the colors of the edge coloring problem. For communications networks with a more general tree topology, local path coloring solutions for the star networks defined by each switch in the network may be patched together to form a single global solution.
Open problems.
Twenty-three open problems concerning edge coloring have been catalogued in the literature on graph coloring problems.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "3 \\left\\lceil\\frac\\Delta2\\right\\rceil"
},
{
"math_id": 1,
"text": "\\left\\lfloor\\frac{3\\Delta}2\\right\\rfloor"
},
{
"math_id": 2,
"text": "\\Delta+\\sqrt{\\Delta/2}"
},
{
"math_id": 3,
"text": "\\mathop{\\mathrm{la}}(G) \\leq \\left\\lceil\\frac{\\Delta+1}{2}\\right\\rceil"
},
{
"math_id": 4,
"text": "G"
},
{
"math_id": 5,
"text": "a'(G)"
},
{
"math_id": 6,
"text": "a'(G) \\le \\Delta + 2 "
},
{
"math_id": 7,
"text": "\\Delta"
},
{
"math_id": 8,
"text": "a'(G) \\le \\lceil 3.74 (\\Delta -1) \\rceil"
},
{
"math_id": 9,
"text": "c"
},
{
"math_id": 10,
"text": " c \\Delta \\log \\Delta"
},
{
"math_id": 11,
"text": "a'(G) \\le \\Delta + 2"
},
{
"math_id": 12,
"text": "\\epsilon > 0"
},
{
"math_id": 13,
"text": "g"
},
{
"math_id": 14,
"text": "a'(G) \\le (1+\\epsilon) \\Delta"
}
]
| https://en.wikipedia.org/wiki?curid=690647 |
690669 | List coloring | In graph theory, a branch of mathematics, list coloring is a type of graph coloring where each vertex can be restricted to a list of allowed colors. It was first studied in the 1970s in independent papers by Vizing
and by Erdős, Rubin, and Taylor.
Definition.
Given a graph "G" and given a set "L"("v") of colors for each vertex "v" (called a list), a list coloring is a "choice function" that maps every vertex "v" to a color in the list "L"("v"). As with graph coloring, a list coloring is generally assumed to be proper, meaning no two adjacent vertices receive the same color. A graph is k"-choosable (or k"-list-colorable) if it has a proper list coloring no matter how one assigns a list of "k" colors to each vertex. The choosability (or list colorability or list chromatic number) ch("G") of a graph "G" is the least number "k" such that "G" is "k"-choosable.
More generally, for a function "f" assigning a positive integer "f"("v") to each vertex "v", a graph "G" is f"-choosable (or f"-list-colorable) if it has a list coloring no matter how one assigns a list of "f"("v") colors to each vertex "v". In particular, if formula_0 for all vertices "v", "f"-choosability corresponds to "k"-choosability.
Examples.
Consider the complete bipartite graph "G" = "K"2,4, having six vertices "A", "B", "W", "X", "Y", "Z" such that "A" and "B" are each connected to all of "W", "X", "Y", and "Z", and no other vertices are connected. As a bipartite graph, "G" has usual chromatic number 2: one may color "A" and "B" in one color and "W", "X", "Y", "Z" in another and no two adjacent vertices will have the same color. On the other hand, "G" has list-chromatic number larger than 2, as the following construction shows: assign to "A" and "B" the lists {red, blue} and {green, black}. Assign to the other four vertices the lists {red, green}, {red, black}, {blue, green}, and {blue, black}. No matter which choice one makes of a color from the list of "A" and a color from the list of "B", there will be some other vertex such that both of its choices are already used to color its neighbors. Thus, "G" is not 2-choosable.
On the other hand, it is easy to see that "G" is 3-choosable: picking arbitrary colors for the vertices "A" and "B" leaves at least one available color for each of the remaining vertices, and these colors may be chosen arbitrarily.
More generally, let "q" be a positive integer, and let "G" be the complete bipartite graph "K""q","q""q". Let the available colors be represented by the "q"2 different two-digit numbers in radix "q".
On one side of the bipartition, let the "q" vertices be given sets of colors {"i"0, "i"1, "i"2, ...} in which the first digits are equal to each other, for each of the "q" possible choices of the first digit "i".
On the other side of the bipartition, let the "qq" vertices be given sets of colors {0"a", 1"b", 2"c", ...} in which the first digits are all distinct, for each of the "qq" possible choices of the "q"-tuple ("a", "b", "c", ...).
The illustration shows a larger example of the same construction, with "q" = 3.
Then, "G" does not have a list coloring for "L": no matter what set of colors is chosen for the vertices on the small side of the bipartition, this choice will conflict with all of the colors for one of the vertices on the other side of the bipartition. For instance if the vertex with color set {00,01} is colored 01, and the vertex with color set {10,11} is colored 10, then the vertex with color set {01,10} cannot be colored.
Therefore, the list chromatic number of "G" is at least "q" + 1.
Similarly, if formula_1, then the complete bipartite graph "K""n", "n" is not "k"-choosable. For, suppose that 2"k" − 1 colors are available in total, and that, on a single side of the bipartition, each vertex has available to it a different "k"-tuple of these colors than each other vertex. Then, each side of the bipartition must use at least "k" colors, because every set of "k" − 1 colors will be disjoint from the list of one vertex. Since at least "k" colors are used on one side and at least "k" are used on the other, there must be one color which is used on both sides, but this implies that two adjacent vertices have the same color. In particular, the utility graph "K"3,3 has list-chromatic number at least three, and the graph "K"10,10 has list-chromatic number at least four.
Properties.
For a graph "G", let χ("G") denote the chromatic number and Δ("G") the maximum degree of "G". The list coloring number ch("G") satisfies the following properties.
Computing choosability and ("a", "b")-choosability.
Two algorithmic problems have been considered in the literature: "k"-choosability, that is, deciding whether a given graph is "k"-choosable for a given "k"; and ("a", "b")-choosability, that is, deciding whether a given graph is "f"-choosable for a given function formula_2.
It is known that "k"-choosability in bipartite graphs is formula_3-complete for any "k" ≥ 3, and the same applies for 4-choosability in planar graphs, 3-choosability in planar triangle-free graphs, and (2, 3)-choosability in bipartite planar graphs. For P5-free graphs, that is, graphs excluding a 5-vertex path graph, "k"-choosability is fixed-parameter tractable.
It is possible to test whether a graph is 2-choosable in linear time by repeatedly deleting vertices of degree zero or one until reaching the 2-core of the graph, after which no more such deletions are possible. The initial graph is 2-choosable if and only if its 2-core is either an even cycle or a theta graph formed by three paths with shared endpoints, with two paths of length two and the third path having any even length.
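The test just described can be sketched in Python as follows, assuming a simple connected graph given as an adjacency dictionary (a disconnected graph can be handled one component at a time):

```python
# Linear-time style 2-choosability test: peel vertices of degree <= 1 to get
# the 2-core, then check that the core is empty, an even cycle, or a theta
# graph with two paths of length two and a third path of even length.
from collections import deque

def two_core(adj):
    deg = {v: len(ns) for v, ns in adj.items()}
    gone = set()
    queue = deque(v for v, d in deg.items() if d <= 1)
    while queue:
        v = queue.popleft()
        if v in gone:
            continue
        gone.add(v)
        for u in adj[v]:
            if u not in gone:
                deg[u] -= 1
                if deg[u] <= 1:
                    queue.append(u)
    return {v: [u for u in adj[v] if u not in gone] for v in adj if v not in gone}

def is_connected(adj):
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for u in adj[stack.pop()]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == len(adj)

def is_2_choosable(adj):
    core = two_core(adj)
    if not core:
        return True                          # empty 2-core
    if not is_connected(core):
        return False
    degrees = sorted(len(ns) for ns in core.values())
    if all(d == 2 for d in degrees):         # the core is a single cycle
        return len(core) % 2 == 0            # ... which must be even
    if degrees.count(3) != 2 or degrees.count(2) != len(core) - 2:
        return False
    a, b = (v for v in core if len(core[v]) == 3)
    lengths = []
    for first in core[a]:                    # follow each path from a towards b
        prev, cur, steps = a, first, 1
        while cur != b and cur != a and len(core[cur]) == 2:
            nxt = next(u for u in core[cur] if u != prev)
            prev, cur = cur, nxt
            steps += 1
        if cur != b:
            return False                     # not a theta graph
        lengths.append(steps)
    lengths.sort()
    return lengths[:2] == [2, 2] and lengths[2] % 2 == 0
```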
Applications.
List coloring arises in practical problems concerning channel/frequency assignment.
References.
Further reading | [
{
"math_id": 0,
"text": "f(v) = k"
},
{
"math_id": 1,
"text": "n=\\binom{2k-1}{k}"
},
{
"math_id": 2,
"text": "f : V \\to \\{a,\\dots,b\\}"
},
{
"math_id": 3,
"text": "\\Pi^p_2"
}
]
| https://en.wikipedia.org/wiki?curid=690669 |
6906913 | Townsend discharge | Gas ionization process
In electromagnetism, the Townsend discharge or Townsend avalanche is an ionisation process for gases where free electrons are accelerated by an electric field, collide with gas molecules, and consequently free additional electrons. Those electrons are in turn accelerated and free additional electrons. The result is an avalanche multiplication that permits significantly increased electrical conduction through the gas. The discharge requires a source of free electrons and a significant electric field; without both, the phenomenon does not occur.
The Townsend discharge is named after John Sealy Townsend, who discovered the fundamental ionisation mechanism by his work circa 1897 at the Cavendish Laboratory, Cambridge.
General description.
The avalanche occurs in a gaseous medium that can be ionised (such as air). The electric field and the mean free path of the electron must allow free electrons to acquire an energy level (velocity) that can cause impact ionisation. If the electric field is too small, then the electrons do not acquire enough energy. If the mean free path is too short, then the electron gives up its acquired energy in a series of non-ionising collisions. If the mean free path is too long, then the electron reaches the anode before colliding with another molecule.
The avalanche mechanism is shown in the accompanying diagram. The electric field is applied across a gaseous medium; initial ions are created with ionising radiation (for example, cosmic rays). An original ionisation event produces an ion pair; the positive ion accelerates towards the cathode while the free electron accelerates towards the anode. If the electric field is strong enough, then the free electron can gain sufficient velocity (energy) to liberate another electron when it next collides with a molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause further impact ionisations, and so on. This process is effectively a chain reaction that generates free electrons. Initially, the number of collisions grows exponentially, but eventually, this relationship will break down—the limit to the multiplication in an electron avalanche is known as the Raether limit.
The Townsend avalanche can have a large range of current densities. In common gas-filled tubes, such as those used as gaseous ionisation detectors, magnitudes of currents flowing during this process can range from about 10−18 to 10−5 amperes.
Quantitative description.
Townsend's early experimental apparatus consisted of planar parallel plates forming two sides of a chamber filled with a gas. A direct-current high-voltage source was connected between the plates, the lower plate being the cathode while the other was the anode. He forced the cathode to emit electrons using the photoelectric effect by irradiating it with x-rays, and he found that the current flowing through the chamber depended on the electric field between the plates. However, this current showed an exponential increase as the plate gaps became small, leading to the conclusion that the gas ions were multiplying as they moved between the plates due to the high electric field.
Townsend observed currents varying exponentially over ten or more orders of magnitude with a constant applied voltage when the distance between the plates was varied. He also discovered that gas pressure influenced conduction: he was able to generate ions in gases at low pressure with a much lower voltage than that required to generate a spark. This observation overturned conventional thinking about the amount of current that an irradiated gas could conduct.
The data obtained from these experiments are described by the formula
formula_0
where "I" is the discharge current, "I"0 is the photoelectric current generated at the cathode, "d" is the distance between the plates, and α"n" (the first Townsend ionisation coefficient) is the number of ion pairs generated per unit length of path.
The almost-constant voltage between the plates is equal to the breakdown voltage needed to create a self-sustaining avalanche: it "decreases" when the current reaches the glow discharge regime. Subsequent experiments revealed that the current I rises faster than predicted by the above formula as the distance d increases; two different effects were considered in order to better model the discharge: positive ions and cathode emission.
Gas ionisation caused by motion of positive ions.
Townsend put forward the hypothesis that positive ions also produce ion pairs, introducing a coefficient formula_1 expressing the number of ion pairs generated per unit length by a positive ion (cation) moving from anode to cathode. The following formula was found:
formula_2
since formula_3; this expression is in very good agreement with experiments.
The "first Townsend coefficient" ( α ), also known as "first Townsend avalanche coefficient", is a term used where secondary ionisation occurs because the primary ionisation electrons gain sufficient energy from the accelerating electric field, or from the original ionising particle. The coefficient gives the number of secondary electrons produced by primary electron per unit path length.
Cathode emission caused by impact of ions.
Townsend, Holst and Oosterhuis also put forward an alternative hypothesis, considering the augmented emission of electrons by the cathode caused by impact of positive ions. This introduced "Townsend's second ionisation coefficient" formula_4, the average number of electrons released from a surface by an incident positive ion, according to the formula
formula_5
These two formulas may be thought as describing limiting cases of the effective behavior of the process: either can be used to describe the same experimental results. Other formulas describing various intermediate behaviors are found in the literature, particularly in reference 1 and citations therein.
Conditions.
A Townsend discharge can be sustained only over a limited range of gas pressure and electric field intensity. The accompanying plot shows the variation of voltage drop and the different operating regions for a gas-filled tube with a constant pressure, but a varying current between its electrodes. The Townsend avalanche phenomenon occurs on the sloping plateau B-D. Beyond D, the ionisation is sustained.
At higher pressures, discharges occur more rapidly than the calculated time for ions to traverse the gap between electrodes, and the streamer theory of spark discharge of Raether, Meek, and Loeb is applicable. In highly non-uniform electric fields, the corona discharge process is applicable. See Electron avalanche for further description of these mechanisms.
Discharges in vacuum require vaporization and ionisation of electrode atoms. An arc can be initiated without a preliminary Townsend discharge, for example when electrodes touch and are then separated.
Penning discharge.
In the presence of a magnetic field, the likelihood of an avalanche discharge occurring under high vacuum conditions can be increased through a phenomenon known as Penning discharge. This occurs when electrons can become trapped within a potential minimum, thereby extending the mean free path of the electrons [Fränkle 2014].
Applications.
Gas-discharge tubes.
The starting of Townsend discharge sets the upper limit to the blocking voltage a glow discharge gas-filled tube can withstand. This limit is the Townsend discharge breakdown voltage, also called ignition voltage of the tube.
The occurrence of Townsend discharge, leading to glow discharge breakdown, shapes the current–voltage characteristic of a gas-discharge tube such as a neon lamp in such a way that it has a negative differential resistance region of the S-type. The negative resistance can be used to generate electrical oscillations and waveforms, as in the relaxation oscillator whose schematic is shown in the picture on the right. The sawtooth shaped oscillation generated has frequency
formula_6
where formula_11 is the supply voltage, formula_8 is the ignition voltage of the lamp (the Townsend discharge breakdown voltage), formula_7 is the lower voltage at which the glow discharge extinguishes, formula_10 is the series resistance, and formula_9 is the capacitance.
Since temperature and time stability of the characteristics of gas diodes and neon lamps is low, and also the statistical dispersion of breakdown voltages is high, the above formula can only give a qualitative indication of what the real frequency of oscillation is.
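As a rough numerical illustration of the formula, the following Python sketch uses hypothetical component and lamp values (a real neon lamp's ignition and maintaining voltages vary widely, as noted above):

```python
# Order-of-magnitude estimate of a neon-lamp relaxation oscillator frequency.
# All values below are assumed for illustration only.
import math

V1 = 120.0        # supply voltage, volts (assumed)
V_TWN = 90.0      # ignition (Townsend breakdown) voltage, volts (assumed)
V_GLOW = 60.0     # glow-discharge maintaining voltage, volts (assumed)
R1 = 1.0e6        # series resistance, ohms (assumed)
C1 = 100e-9       # capacitance, farads (assumed)

f = 1.0 / (R1 * C1 * math.log((V1 - V_GLOW) / (V1 - V_TWN)))
print(f"approximate oscillation frequency: {f:.1f} Hz")  # about 14 Hz
```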
Gas phototubes.
Avalanche multiplication during Townsend discharge is naturally used in gas phototubes to amplify the photoelectric charge generated by incident radiation (visible light or not) on the cathode: the achievable current is typically 10–20 times greater than that generated by vacuum phototubes.
Ionising radiation detectors.
Townsend avalanche discharges are fundamental to the operation of gaseous ionisation detectors such as the Geiger–Müller tube and the proportional counter in either detecting ionising radiation or measuring its energy. The incident radiation will ionise atoms or molecules in the gaseous medium to produce ion pairs, but different use is made by each detector type of the resultant avalanche effects.
In the case of a GM tube, the high electric field strength is sufficient to cause complete ionisation of the fill gas surrounding the anode from the initial creation of just one ion pair. The GM tube output carries information that the event has occurred, but no information about the energy of the incident radiation.
In the case of proportional counters, multiple creation of ion pairs occurs in the "ion drift" region near the cathode. The electric field and chamber geometries are selected so that an "avalanche region" is created in the immediate proximity of the anode. A free electron drifting towards the anode enters this region and creates a localised avalanche that is independent of those from other ion pairs, but which can still provide a multiplication effect. In this way, spectroscopic information on the energy of the incident radiation is provided by the magnitude of the output pulse from each initiating event.
The accompanying plot shows the variation of ionisation current for a co-axial cylinder system. In the ion chamber region, there are no avalanches and the applied voltage only serves to move the ions towards the electrodes to prevent re-combination. In the proportional region, localised avalanches occur in the gas space immediately around the anode, which are numerically proportional to the number of original ionising events. Increasing the voltage further increases the number of avalanches until the Geiger region is reached, where the full volume of the fill gas around the anode is ionised and all proportional energy information is lost. Beyond the Geiger region, the gas is in continuous discharge owing to the high electric field strength.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{I}{I_0}=e^{\\alpha_n d}, \\, "
},
{
"math_id": 1,
"text": "\\alpha_p"
},
{
"math_id": 2,
"text": "\\frac{I}{I_0}=\\frac{(\\alpha_n-\\alpha_p)e^{(\\alpha_n-\\alpha_p)d}}{\\alpha_n-\\alpha_p e^{(\\alpha_n-\\alpha_p)d}}\n\\qquad\\Longrightarrow\\qquad \\frac{I}{I_0}\\cong\\frac{e^{\\alpha_n d}}{1 - ({\\alpha_p/\\alpha_n}) e^{\\alpha_n d}}"
},
{
"math_id": 3,
"text": "\\alpha_p \\ll \\alpha_n"
},
{
"math_id": 4,
"text": "\\epsilon_i"
},
{
"math_id": 5,
"text": "\\frac{I}{I_0}=\\frac{e^{\\alpha_n d}}{1 - {\\epsilon_i}\\left(e^{\\alpha_n d}-1\\right)}."
},
{
"math_id": 6,
"text": "f\\cong\\frac{1}{R_1C_1\\ln\\frac{V_1-V_\\text{GLOW}}{V_1-V_\\text{TWN}}},"
},
{
"math_id": 7,
"text": "V_\\text{GLOW}"
},
{
"math_id": 8,
"text": "V_\\text{TWN}"
},
{
"math_id": 9,
"text": "C_1"
},
{
"math_id": 10,
"text": "R_1"
},
{
"math_id": 11,
"text": "V_1"
}
]
| https://en.wikipedia.org/wiki?curid=6906913 |
690702 | Total coloring | In graph theory, total coloring is a type of graph coloring on the vertices and edges of a graph. When used without any qualification, a total coloring is always assumed to be "proper" in the sense that no adjacent edges, no adjacent vertices and no edge and either endvertex are assigned the same color. The total chromatic number χ″("G") of a graph "G" is the fewest colors needed in any total coloring of "G".
The total graph "T" = "T"("G") of a graph "G" is a graph such that (i) the vertex set of "T" corresponds to the vertices and edges of "G" and (ii) two vertices are adjacent in "T" if and only if their corresponding elements are either adjacent or incident in "G". Then total coloring of "G" becomes a (proper) vertex coloring of "T"("G"). A total coloring is a partitioning of the vertices and edges of the graph into total independent sets.
Some inequalities for χ″("G"):
Here Δ("G") is the maximum degree; and ch′("G"), the edge choosability.
Total coloring arises naturally since it is simply a mixture of vertex and edge colorings. The next step is to look for any Brooks-typed or Vizing-typed upper bound on the total chromatic number in terms of maximum degree.
The total coloring version of maximum degree upper bound is a difficult problem that has eluded mathematicians for 50 years. A trivial lower bound for χ″("G") is Δ("G") + 1. Some graphs such as cycles of length formula_0 and complete bipartite graphs of the form formula_1 need Δ("G") + 2 colors but no graph has been found that requires more colors. This leads to the speculation that every graph needs either Δ("G") + 1 or Δ("G") + 2 colors, but never more:
Total coloring conjecture (Behzad, Vizing). formula_2
Apparently, the term "total coloring" and the statement of total coloring conjecture were independently introduced by Behzad and Vizing in numerous occasions between 1964 and 1968 (see Jensen & Toft). The conjecture is known to hold for a few important classes of graphs, such as all bipartite graphs and most planar graphs except those with maximum degree 6. The planar case can be completed if Vizing's planar graph conjecture is true. Also, if the list coloring conjecture is true, then formula_3
Results related to total coloring have been obtained. For example, Kilakos and Reed (1993) proved that the fractional chromatic number of the total graph of a graph "G" is at most Δ("G") + 2. | [
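As a small computational illustration (added here only as an example), the following Python brute-force sketch confirms the claim above for the 5-cycle, whose length is not divisible by 3: it admits no total coloring with Δ + 1 = 3 colors but does admit one with Δ + 2 = 4.

```python
# Exhaustive (slow but tiny) check of the total chromatic number of C_5.
from itertools import product

n = 5
vertices = list(range(n))
edges = [(i, (i + 1) % n) for i in range(n)]
elements = [("v", v) for v in vertices] + [("e", e) for e in edges]

def is_total_coloring(col):
    for u, v in edges:
        if col[("v", u)] == col[("v", v)]:
            return False                      # adjacent vertices clash
        if col[("e", (u, v))] in (col[("v", u)], col[("v", v)]):
            return False                      # edge clashes with an endpoint
    for e1 in edges:
        for e2 in edges:
            if e1 < e2 and set(e1) & set(e2) and col[("e", e1)] == col[("e", e2)]:
                return False                  # adjacent edges clash
    return True

def colorable(k):
    return any(is_total_coloring(dict(zip(elements, c)))
               for c in product(range(k), repeat=len(elements)))

print(colorable(3))  # False: 3 colors are not enough for C_5
print(colorable(4))  # True: 4 colors suffice
```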
{
"math_id": 0,
"text": "n \\not \\equiv 0 \\bmod 3"
},
{
"math_id": 1,
"text": "K_{n,n}"
},
{
"math_id": 2,
"text": "\\chi''(G) \\le \\Delta(G)+2."
},
{
"math_id": 3,
"text": "\\chi''(G) \\le \\Delta(G) +3."
}
]
| https://en.wikipedia.org/wiki?curid=690702 |
69071767 | Prompt engineering | Structuring text as input to generative AI
Prompt engineering is the process of structuring an instruction that can be interpreted and understood by a generative AI model.
A prompt is natural language text describing the task that an AI should perform: a prompt for a text-to-text language model can be a query such as "what is Fermat's little theorem?", a command such as "write a poem about leaves falling", or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style, providing relevant context or assigning a role to the AI such as "Act as a native French speaker". A prompt may include a few examples for a model to learn from, such as asking the model to complete "maison → house, chat → cat, chien →" (the expected response being "dog"), an approach called few-shot learning.
When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as "a high-quality photo of an astronaut riding a horse" or "Lo-fi slow BPM electro chill with organic samples". Prompting a text-to-image model may involve adding, removing, emphasizing and re-ordering words to achieve a desired subject, style, layout, lighting, and aesthetic.
In-context learning.
Prompt engineering is enabled by in-context learning, defined as a model's ability to temporarily learn from prompts. The ability for in-context learning is an emergent ability of large language models. In-context learning itself is an emergent property of model scale, meaning breaks in downstream scaling laws occur such that its efficacy increases at a different rate in larger models than in smaller models.
In contrast to training and fine-tuning for each specific task, which are not temporary, what has been learnt during in-context learning is of a temporary nature. It does not carry the temporary contexts or biases, except the ones already present in the (pre)training dataset, from one conversation to the other. This result of "mesa-optimization" within transformer layers is a form of meta-learning or "learning to learn".
History.
In 2018, researchers first proposed that all previously separate tasks in NLP could be cast as a question answering problem over a context. In addition, they trained a first single, joint, multi-task model that would answer any task-related question like "What is the sentiment" or "Translate this sentence to German" or "Who is the president?"
In 2021, researchers fine-tuned one generatively pretrained model (T0) on performing 12 NLP tasks (using 62 datasets, as each task can have multiple datasets). The model showed good performance on new tasks, surpassing models trained directly on just performing one task (without pretraining). To solve a task, T0 is given the task in a structured prompt, for example codice_0 is the prompt used for making T0 solve entailment.
A repository for prompts reported that over 2,000 public prompts for around 170 datasets were available in February 2022.
In 2022 the "chain-of-thought" prompting technique was proposed by Google researchers.
In 2023 several text-to-text and text-to-image prompt databases were publicly available.
Text-to-text.
Chain-of-thought.
"Chain-of-thought" (CoT) prompting is a technique that allows large language models (LLMs) to solve a problem as a series of intermediate steps before giving a final answer. Chain-of-thought prompting improves reasoning ability by inducing the model to answer a multi-step problem with steps of reasoning that mimic a train of thought. It allows large language models to overcome difficulties with some reasoning tasks that require logical thinking and multiple steps to solve, such as arithmetic or commonsense reasoning questions.
For example, given the question "Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?", a CoT prompt might induce the LLM to answer "A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9."
As originally proposed, each CoT prompt included a few Q&A examples. This made it a "few-shot" prompting technique. However, simply appending the words "Let's think step-by-step" has also proven effective, which makes CoT a "zero-shot" prompting technique. This allows for better scaling as a user no longer needs to formulate many specific CoT Q&A examples.
When applied to PaLM, a 540B parameter language model, CoT prompting significantly aided the model, allowing it to perform comparably with task-specific fine-tuned models on several tasks, achieving state of the art results at the time on the GSM8K mathematical reasoning benchmark. It is possible to fine-tune models on CoT reasoning datasets to enhance this capability further and stimulate better interpretability.
Example:
A: Let's think step by step.
Other techniques.
Chain-of-thought prompting is just one of many prompt-engineering techniques. Various other techniques have been proposed. At least 29 distinct techniques have been published.
Chain-of-Symbol (CoS) Prompting
Chain-of-Symbol prompting, used in conjunction with CoT prompting, helps LLMs with the difficulty of spatial reasoning in text. In other words, using arbitrary symbols such as ' / ' assists the LLM in interpreting spacing in text. This assists in reasoning and increases the performance of the LLM.
Example:
Input:
There are a set of bricks. The yellow brick C is on top of the brick E. The yellow brick D is on top of the brick A. The yellow brick E is on top of the brick D. The white brick A is on top of the brick B. For the brick B, the color is white. Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick D?
B/A/D/E/C
C/E
E/D
D
Output:
So we get the result as C, E, D.
Generated knowledge prompting.
"Generated knowledge prompting" first prompts the model to generate relevant facts for completing the prompt, then proceed to complete the prompt. The completion quality is usually higher, as the model can be conditioned on relevant facts.
Example:
Generate some knowledge about the concepts in the input.
Knowledge:
Least-to-most prompting.
"Least-to-most prompting" prompts a model to first list the sub-problems to a problem, then solve them in sequence, such that later sub-problems can be solved with the help of answers to previous sub-problems.
Example:
A: Let's break down this problem:
1.
Self-consistency decoding.
"Self-consistency decoding" performs several chain-of-thought rollouts, then selects the most commonly reached conclusion out of all the rollouts. If the rollouts disagree by a lot, a human can be queried for the correct chain of thought.
Complexity-based prompting.
Complexity-based prompting performs several CoT rollouts, then selects the rollouts with the longest chains of thought, then selects the most commonly reached conclusion out of those.
Self-refine.
Self-refine prompts the LLM to solve the problem, then prompts the LLM to critique its solution, then prompts the LLM to solve the problem again in view of the problem, solution, and critique. This process is repeated until stopped, either by running out of tokens, time, or by the LLM outputting a "stop" token.
Example critique:
I have some code. Give one suggestion to improve readability. Don't fix the code, just give a suggestion.
Suggestion:
Example refinement:
Let's use this suggestion to improve the code.
New Code:
Tree-of-thought.
"Tree-of-thought prompting" generalizes chain-of-thought by prompting the model to generate one or more "possible next steps", and then running the model on each of the possible next steps by breadth-first, beam, or some other method of tree search.
Maieutic prompting.
Maieutic prompting is similar to tree-of-thought. The model is prompted to answer a question with an explanation. The model is then prompted to explain parts of the explanation, and so on. Inconsistent explanation trees are pruned or discarded. This improves performance on complex commonsense reasoning.
Example:
A: True, because
A: False, because
Directional-stimulus prompting.
"Directional-stimulus prompting" includes a hint or cue, such as desired keywords, to guide a language model toward the desired output.
Example:
Keywords:
Q: Write a short summary of the article in 2-4 sentences that accurately incorporates the provided keywords.
A:
Prompting to disclose uncertainty.
By default, the output of language models may not contain estimates of uncertainty. The model may output text that appears confident, though the underlying token predictions have low likelihood scores. Large language models like GPT-4 can have accurately calibrated likelihood scores in their token predictions, and so the model output uncertainty can be directly estimated by reading out the token prediction likelihood scores.
But if one cannot access such scores (such as when one is accessing the model through a restrictive API), uncertainty can still be estimated and incorporated into the model output. One simple method is to prompt the model to use words to estimate uncertainty. Another is to prompt the model to refuse to answer in a standardized way if the input does not satisfy conditions.
Automatic prompt generation.
Retrieval-augmented generation.
Retrieval-augmented generation (RAG) is a two-phase process involving document retrieval and answer formulation by a Large Language Model (LLM). The initial phase utilizes dense embeddings to retrieve documents. This retrieval can be based on a variety of database formats depending on the use case, such as a vector database, summary index, tree index, or keyword table index.
In response to a query, a document retriever selects the most relevant documents. This relevance is typically determined by first encoding both the query and the documents into vectors, then identifying documents whose vectors are closest in Euclidean distance to the query vector. Following document retrieval, the LLM generates an output that incorporates information from both the query and the retrieved documents. This method is particularly beneficial for handling proprietary or dynamic information that was not included in the initial training or fine-tuning phases of the model. RAG is also notable for its use of "few-shot" learning, where the model uses a small number of examples, often automatically retrieved from a database, to inform its outputs.
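The retrieval step can be sketched as follows using NumPy; the `embed` function is a hypothetical stand-in for a dense text encoder, and the prompt template is only an example of how retrieved text might be combined with the query.

```python
# RAG retrieval sketch: rank documents by Euclidean distance between
# embeddings, then prepend the nearest documents to the prompt.
import numpy as np

def retrieve(query, documents, embed, k=3):
    doc_vectors = np.stack([embed(d) for d in documents])
    query_vector = embed(query)
    distances = np.linalg.norm(doc_vectors - query_vector, axis=1)
    nearest = np.argsort(distances)[:k]
    return [documents[i] for i in nearest]

def build_rag_prompt(query, documents, embed, k=3):
    context = "\n\n".join(retrieve(query, documents, embed, k))
    return ("Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```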
Graph retrieval-augmented generation.
GraphRAG, coined by Microsoft Research, extends RAG such that instead of relying solely on vector similarity (as in most RAG approaches), GraphRAG uses the LLM-generated knowledge graph. This graph allows the model to connect disparate pieces of information, synthesize insights, and holistically understand summarized semantic concepts over large data collections.
Researchers have demonstrated GraphRAG's effectiveness using datasets like the Violent Incident Information from News Articles (VIINA). By combining LLM-generated knowledge graphs with graph machine learning, GraphRAG substantially improves both the comprehensiveness and diversity of generated answers for global sensemaking questions.
Earlier work showed the effectiveness of using a knowledge graph for question answering using text-to-query generation. These techniques can be combined to perform search across both unstructured and structured data, providing expanded context and improved ranking.
Using language models to generate prompts.
Large language models (LLM) themselves can be used to compose prompts for large language models.
The "automatic prompt engineer" algorithm uses one LLM to beam search over prompts for another LLM:
CoT examples can be generated by LLMs themselves. In "auto-CoT", a library of questions is converted to vectors by a model such as BERT. The question vectors are clustered. Questions nearest to the centroid of each cluster are selected. An LLM performs zero-shot CoT on each selected question. The resulting CoT examples are added to the dataset. When prompted with a new question, CoT examples for the nearest questions can be retrieved and added to the prompt.
Text-to-image.
In 2022, text-to-image models like DALL-E 2, Stable Diffusion, and Midjourney were released to the public. These models take text prompts as input and use them to generate AI art images. Text-to-image models typically do not understand grammar and sentence structure in the same way as large language models, and require a different set of prompting techniques.
Prompt formats.
A text-to-image prompt commonly includes a description of the subject of the art (such as "bright orange poppies"), the desired medium (such as "digital painting" or "photography"), style (such as "hyperrealistic" or "pop-art"), lighting (such as "rim lighting" or "crepuscular rays"), color and texture.
The Midjourney documentation encourages short, descriptive prompts: instead of "Show me a picture of lots of blooming California poppies, make them bright, vibrant orange, and draw them in an illustrated style with colored pencils", an effective prompt might be "Bright orange California poppies drawn with colored pencils".
Word order affects the output of a text-to-image prompt. Words closer to the start of a prompt may be emphasized more heavily.
Artist styles.
Some text-to-image models are capable of imitating the style of particular artists by name. For example, the phrase "in the style of Greg Rutkowski" has been used in Stable Diffusion and Midjourney prompts to generate images in the distinctive style of Polish digital artist Greg Rutkowski.
Negative prompts.
Text-to-image models do not natively understand negation. The prompt "a party with no cake" is likely to produce an image including a cake. As an alternative, "negative prompts" allow a user to indicate, in a separate prompt, which terms should not appear in the resulting image. A common approach is to include generic undesired terms such as "ugly, boring, bad anatomy" in the negative prompt for an image.
Text-to-video.
Text-to-video (TTV) generation is an emerging technology enabling the creation of videos directly from textual descriptions. This field holds potential for transforming video production, animation, and storytelling. By utilizing the power of artificial intelligence, TTV allows users to bypass traditional video editing tools and translate their ideas into moving images.
Several text-to-video models have been released.
Non-text prompts.
Some approaches augment or replace natural language text prompts with non-text input.
Textual inversion and embeddings.
For text-to-image models, "Textual inversion" performs an optimization process to create a new word embedding based on a set of example images. This embedding vector acts as a "pseudo-word" which can be included in a prompt to express the content or style of the examples.
Image prompting.
In 2023, Meta's AI research released Segment Anything, a computer vision model that can perform image segmentation by prompting. As an alternative to text prompts, Segment Anything can accept bounding boxes, segmentation masks, and foreground/background points.
Using gradient descent to search for prompts.
In "prefix-tuning", "prompt tuning" or "soft prompting", floating-point-valued vectors are searched directly by gradient descent, to maximize the log-likelihood on outputs.
Formally, let formula_0 be a set of soft prompt tokens (tunable embeddings), while formula_1 and formula_2 be the token embeddings of the input and output respectively. During training, the tunable embeddings, input, and output tokens are concatenated into a single sequence formula_3, and fed to the large language models (LLM). The losses are computed over the formula_4 tokens; the gradients are backpropagated to prompt-specific parameters: in prefix-tuning, they are parameters associated with the prompt tokens at each layer; in prompt tuning, they are merely the soft tokens added to the vocabulary.
More formally, this is prompt tuning. Let an LLM be written as formula_5, where formula_6 is a sequence of linguistic tokens, formula_7 is the token-to-vector function, and formula_8 is the rest of the model. In prompt tuning, one provides a set of input-output pairs formula_9, and then uses gradient descent to search for formula_10. In words, formula_11 is the log-likelihood of outputting formula_12, if the model first encodes the input formula_13 into the vector formula_14, then prepends the vector with the "prefix vector" formula_15, then applies formula_8.
For prefix tuning, it is similar, but the "prefix vector" formula_15 is prepended to the hidden states in every layer of the model.
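A minimal PyTorch sketch of prompt tuning is shown below. The tiny frozen "language model" (an embedding layer, a GRU and a linear head) is a stand-in rather than a real pretrained network; only the block of soft prompt embeddings is trained.

```python
# Prompt tuning sketch: trainable soft prompt embeddings are concatenated in
# front of the frozen model's input embeddings and optimized by gradient
# descent on the log-likelihood of the output tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d_model, k_soft = 100, 32, 8

embedding = nn.Embedding(vocab, d_model)                # frozen token embeddings
backbone = nn.GRU(d_model, d_model, batch_first=True)   # frozen "LM" body (toy)
head = nn.Linear(d_model, vocab)                        # frozen output head
for module in (embedding, backbone, head):
    for p in module.parameters():
        p.requires_grad_(False)

soft_prompt = nn.Parameter(torch.randn(k_soft, d_model) * 0.02)  # tunable E
optimizer = torch.optim.Adam([soft_prompt], lr=1e-2)

def loss_on_pair(x_tokens, y_tokens):
    # concat(E; X; Y) as in the text, then score only the Y positions
    x_emb = embedding(x_tokens)
    y_emb = embedding(y_tokens)
    seq = torch.cat([soft_prompt, x_emb, y_emb], dim=0).unsqueeze(0)
    hidden, _ = backbone(seq)
    logits = head(hidden[0])
    start = k_soft + x_tokens.numel()
    pred = logits[start - 1 : start - 1 + y_tokens.numel()]
    return F.cross_entropy(pred, y_tokens)

x = torch.randint(0, vocab, (5,))   # placeholder input token ids
y = torch.randint(0, vocab, (3,))   # placeholder output token ids
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_on_pair(x, y)
    loss.backward()                  # gradients flow only into soft_prompt
    optimizer.step()
```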
An earlier result uses the same idea of gradient descent search, but is designed for masked language models like BERT, and searches only over token sequences, rather than numerical vectors. Formally, it searches for formula_16 where formula_17 ranges over token sequences of a specified length.
Prompt injection.
"Prompt injection" is a family of related computer security exploits carried out by getting a machine learning model (such as an LLM) which was trained to follow human-given instructions to follow instructions provided by a malicious user. This stands in contrast to the intended operation of instruction-following systems, wherein the ML model is intended only to follow trusted instructions (prompts) provided by the ML model's operator. | [
{
"math_id": 0,
"text": "\\mathbf{E} = \\{\\mathbf{e_1}, \\dots, \\mathbf{e_k}\\}"
},
{
"math_id": 1,
"text": "\\mathbf{X} = \\{\\mathbf{x_1}, \\dots, \\mathbf{x_m}\\}"
},
{
"math_id": 2,
"text": "\\mathbf{Y} = \\{\\mathbf{y_1}, \\dots, \\mathbf{y_n}\\}"
},
{
"math_id": 3,
"text": "\\text{concat}(\\mathbf{E};\\mathbf{X};\\mathbf{Y})"
},
{
"math_id": 4,
"text": "\\mathbf{Y}"
},
{
"math_id": 5,
"text": "LLM(X) = F(E(X)) "
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "E"
},
{
"math_id": 8,
"text": "F"
},
{
"math_id": 9,
"text": "\\{(X^i, Y^i)\\}_i"
},
{
"math_id": 10,
"text": "\\arg\\max_{\\tilde Z} \\sum_i \\log Pr[Y^i | \\tilde Z \\ast E(X^i)]"
},
{
"math_id": 11,
"text": "\\log Pr[Y^i | \\tilde Z \\ast E(X^i)]"
},
{
"math_id": 12,
"text": "Y^i"
},
{
"math_id": 13,
"text": "X^i"
},
{
"math_id": 14,
"text": "E(X^i)"
},
{
"math_id": 15,
"text": "\\tilde Z"
},
{
"math_id": 16,
"text": "\\arg\\max_{\\tilde X} \\sum_i \\log Pr[Y^i | \\tilde X \\ast X^i]"
},
{
"math_id": 17,
"text": "\\tilde X"
}
]
| https://en.wikipedia.org/wiki?curid=69071767 |
69072129 | Token-based replay | Conformance checking algorithm
Token-based replay technique is a conformance checking algorithm that checks how well a process conforms with its model by replaying each trace on the model (in Petri net notation). Using the four counters "produced tokens", "consumed tokens", "missing tokens", and "remaining tokens", it records the situations where a transition is forced to fire and the tokens remaining after the replay ends. Based on the count at each counter, we can compute the "fitness value" between the trace and the model.
The algorithm.
The token-replay technique uses four counters to keep track of a trace during the replay: "p" (produced tokens), "c" (consumed tokens), "m" (missing tokens, added artificially when a transition is forced to fire without the required input tokens), and "r" (remaining tokens, left in the net after the replay ends).
Invariants: formula_0 and, at the end of the replay, formula_1.
At the beginning, a token is produced for the source place (p = 1) and at the end, a token is consumed from the sink place (c' = c + 1). When the replay ends, the fitness value can be computed as follows:
formula_2
Example.
Suppose there is a process model in Petri net notation as follows:
Example 1: Replay the trace (a, b, c, d) on the model M. A token is produced for the source place, so formula_3. Firing formula_4 consumes that token and produces two tokens, giving formula_5 and formula_6. Firing formula_7 gives formula_8 and formula_9. Firing formula_10 gives formula_11 and formula_12. Firing formula_13 consumes two tokens and produces a token in the sink place, giving formula_14 and formula_15. Finally, the token in the sink place is consumed, so formula_16; no tokens are missing or remaining.
The fitness of the trace (formula_17) on the model formula_18 is:
formula_19
Example 2: Replay the trace (a, b, d) on the model M. Firing formula_4 and formula_7 proceeds as before, but when formula_13 fires, one of its input tokens is not available: a missing token is recorded for place formula_21 (formula_20) and the transition is forced to fire, giving formula_22. Consuming the token from the sink place then gives formula_23. Because formula_10 never fires, the token in place formula_24 is never consumed, so one token remains at the end (formula_25).
The fitness of the trace (formula_26) on the model formula_18 is:
formula_27
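The replay procedure can be sketched in Python as follows. The Petri net used here is an assumed reconstruction of the model in the figure (transition a splits the flow into two branches handled by b and c, which are joined again by d); under this assumption the two traces above reproduce the fitness values 1 and 0.8.

```python
# Token-based replay fitness on an assumed reconstruction of the example net.
from collections import Counter

# each transition: (input places, output places) -- place names are assumed
NET = {
    "a": (["source"], ["p1", "p2"]),
    "b": (["p1"], ["p3"]),
    "c": (["p2"], ["p4"]),
    "d": (["p3", "p4"], ["sink"]),
}

def replay_fitness(trace, net, source="source", sink="sink"):
    marking = Counter({source: 1})
    p, c, m = 1, 0, 0                      # produced, consumed, missing
    for transition in trace:
        inputs, outputs = net[transition]
        for place in inputs:               # consume, adding missing tokens if forced
            if marking[place] == 0:
                m += 1
                marking[place] += 1        # artificially add the missing token
            marking[place] -= 1
            c += 1
        for place in outputs:              # produce
            marking[place] += 1
            p += 1
    if marking[sink] == 0:                 # the final sink token must be consumable
        m += 1
        marking[sink] += 1
    marking[sink] -= 1
    c += 1
    r = p + m - c                          # remaining tokens
    return 0.5 * (1 - m / c) + 0.5 * (1 - r / p)

print(replay_fitness(["a", "b", "c", "d"], NET))  # 1.0
print(replay_fitness(["a", "b", "d"], NET))       # 0.8
```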
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p+m \\ge c \\ge m"
},
{
"math_id": 1,
"text": "r = p + m - c"
},
{
"math_id": 2,
"text": "\\frac{1}{2}(1 - \\frac{m}{c}) + \\frac{1}{2}(1 - \\frac{r}{p})\n"
},
{
"math_id": 3,
"text": "p = 1"
},
{
"math_id": 4,
"text": "\\mathbf{a}"
},
{
"math_id": 5,
"text": "p = 1 + 2 = 3"
},
{
"math_id": 6,
"text": "c = 1"
},
{
"math_id": 7,
"text": "\\mathbf{b}"
},
{
"math_id": 8,
"text": "p = 3 + 1 = 4"
},
{
"math_id": 9,
"text": "c = 1 + 1 = 2"
},
{
"math_id": 10,
"text": "\\mathbf{c}"
},
{
"math_id": 11,
"text": "p = 4 + 1 = 5"
},
{
"math_id": 12,
"text": "c = 2 + 1 = 3"
},
{
"math_id": 13,
"text": "\\mathbf{d}"
},
{
"math_id": 14,
"text": "p = 5 + 1 = 6"
},
{
"math_id": 15,
"text": "c = 3 + 2 = 5"
},
{
"math_id": 16,
"text": "c = 5 + 1 = 6"
},
{
"math_id": 17,
"text": "\\mathbf{a, b, c, d}"
},
{
"math_id": 18,
"text": "\\mathbf{M}"
},
{
"math_id": 19,
"text": "\\frac{1}{2}(1 - \\frac{m}{c}) + \\frac{1}{2}(1 - \\frac{r}{p})\n= \\frac{1}{2}(1 - \\frac{0}{6}) + \\frac{1}{2}(1 - \\frac{0}{6})\n= 1"
},
{
"math_id": 20,
"text": "m = 1"
},
{
"math_id": 21,
"text": "[\\mathbf{b, d}]"
},
{
"math_id": 22,
"text": "c = 2 + 2 = 4"
},
{
"math_id": 23,
"text": "c = 4 + 1 = 5"
},
{
"math_id": 24,
"text": "[\\mathbf{a, c}]"
},
{
"math_id": 25,
"text": "r = 1"
},
{
"math_id": 26,
"text": "\\mathbf{a, b, d}"
},
{
"math_id": 27,
"text": "\\frac{1}{2}(1 - \\frac{m}{c}) + \\frac{1}{2}(1 - \\frac{r}{p})\n= \\frac{1}{2}(1 - \\frac{1}{5}) + \\frac{1}{2}(1 - \\frac{1}{5})\n= 0.8"
}
]
| https://en.wikipedia.org/wiki?curid=69072129 |
69072799 | Mixtilinear incircles of a triangle | Circle tangent to two sides of a triangle and its circumcircle
In plane geometry, a mixtilinear incircle of a triangle is a circle which is tangent to two of its sides and internally tangent to its circumcircle. The mixtilinear incircle of a triangle tangent to the two sides containing vertex formula_0 is called the "formula_0-mixtilinear incircle." Every triangle has three unique mixtilinear incircles, one corresponding to each vertex.
Proof of existence and uniqueness.
The formula_0-excircle of triangle formula_1 is unique. Let formula_2 be the transformation defined as the composition of an inversion centered at formula_0 with radius formula_3 and a reflection with respect to the angle bisector at formula_0. Since inversion and reflection are bijective and preserve tangencies, formula_2 does as well. The image of the formula_0-excircle under formula_2 is a circle internally tangent to sides formula_4 and the circumcircle of formula_1, that is, the formula_0-mixtilinear incircle. Therefore, the formula_0-mixtilinear incircle exists and is unique, and a similar argument proves the same for the mixtilinear incircles corresponding to formula_5 and formula_6.
Construction.
The formula_0-mixtilinear incircle can be constructed with the following sequence of steps: draw the incenter formula_7; draw the line through formula_7 perpendicular to formula_8, and let it meet the sides formula_9 and formula_10 at the points formula_11 and formula_12, which are the tangency points of the mixtilinear incircle; draw the perpendicular to formula_9 at formula_11 and let it meet the line formula_8 at the center formula_13; finally, draw the circle centered at formula_13 with radius formula_14.
This construction is possible because of the following fact:
Lemma.
The incenter is the midpoint of the touching points of the mixtilinear incircle with the two sides.
Proof.
Let formula_15 be the circumcircle of triangle formula_1 and formula_16 be the tangency point of the formula_0-mixtilinear incircle formula_17 and formula_15. Let formula_18 be the intersection of line formula_19 with formula_15 and formula_20 be the intersection of line formula_21 with formula_15. Homothety with center on formula_16 between formula_17 and formula_15 implies that formula_22 are the midpoints of formula_23 arcs formula_24 and formula_25 respectively. The inscribed angle theorem implies that formula_26 and formula_27 are triples of collinear points. Pascal's theorem on hexagon formula_28 inscribed in formula_29 implies that formula_30 are collinear. Since the angles formula_31 and formula_32 are equal, it follows that formula_33 is the midpoint of segment formula_34.
Other properties.
Radius.
The following formula relates the radius formula_35 of the incircle and the radius formula_36 of the formula_0-mixtilinear incircle of a triangle formula_1:formula_37
where formula_38 is the magnitude of the angle at formula_0.
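For example, in an equilateral triangle every angle measures 60°, so cos^2(30°) = 3/4 and the formula gives each of the three mixtilinear incircles a radius of 4"r"/3.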
Circles related to the tangency point with the circumcircle.
formula_44 and formula_45 are cyclic quadrilaterals.
Spiral similarities.
formula_16 is the center of a spiral similarity that maps formula_46 to formula_47 respectively.
Relationship between the three mixtilinear incircles.
Lines joining vertices and mixtilinear tangency points.
The three lines joining a vertex to the point of contact of the circumcircle with the corresponding mixtilinear incircle meet at the external center of similitude of the incircle and circumcircle. The Online Encyclopedia of Triangle Centers lists this point as X(56). It is defined by trilinear coordinates:
formula_48
and barycentric coordinates:
formula_49
Radical center.
The radical center of the three mixtilinear incircles is the point formula_50 which divides formula_51 in the ratio: formula_52where formula_53 are the incenter, inradius, circumcenter and circumradius respectively. | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "ABC"
},
{
"math_id": 2,
"text": "\\Phi"
},
{
"math_id": 3,
"text": "\\sqrt{AB \\cdot AC}"
},
{
"math_id": 4,
"text": "AB, AC"
},
{
"math_id": 5,
"text": "B"
},
{
"math_id": 6,
"text": "C"
},
{
"math_id": 7,
"text": "I"
},
{
"math_id": 8,
"text": "AI"
},
{
"math_id": 9,
"text": "AB"
},
{
"math_id": 10,
"text": "AC"
},
{
"math_id": 11,
"text": "D"
},
{
"math_id": 12,
"text": "E"
},
{
"math_id": 13,
"text": "O_A"
},
{
"math_id": 14,
"text": "O_AE"
},
{
"math_id": 15,
"text": "\\Gamma"
},
{
"math_id": 16,
"text": "T_A"
},
{
"math_id": 17,
"text": "\\Omega_A"
},
{
"math_id": 18,
"text": "X \\neq T_A"
},
{
"math_id": 19,
"text": "T_AD"
},
{
"math_id": 20,
"text": "Y \\neq T_A"
},
{
"math_id": 21,
"text": "T_AE"
},
{
"math_id": 22,
"text": "X, Y "
},
{
"math_id": 23,
"text": "\\Gamma "
},
{
"math_id": 24,
"text": "AB "
},
{
"math_id": 25,
"text": "AC "
},
{
"math_id": 26,
"text": "X, I, C "
},
{
"math_id": 27,
"text": "Y, I, B "
},
{
"math_id": 28,
"text": "XCABYT_A "
},
{
"math_id": 29,
"text": "\\Gamma "
},
{
"math_id": 30,
"text": "D, I, E "
},
{
"math_id": 31,
"text": "\\angle{DAI} "
},
{
"math_id": 32,
"text": "\\angle{IAE} "
},
{
"math_id": 33,
"text": "I "
},
{
"math_id": 34,
"text": "DE "
},
{
"math_id": 35,
"text": "r"
},
{
"math_id": 36,
"text": "\\rho_A"
},
{
"math_id": 37,
"text": "r = \\rho_A \\cdot \\cos^2{\\frac{\\alpha}{2}}"
},
{
"math_id": 38,
"text": "\\alpha"
},
{
"math_id": 39,
"text": "BC"
},
{
"math_id": 40,
"text": "T_AI"
},
{
"math_id": 41,
"text": "T_AXAY"
},
{
"math_id": 42,
"text": "T_AA"
},
{
"math_id": 43,
"text": "XT_AY"
},
{
"math_id": 44,
"text": "T_ABDI"
},
{
"math_id": 45,
"text": "T_ACEI"
},
{
"math_id": 46,
"text": "B, I"
},
{
"math_id": 47,
"text": "I, C"
},
{
"math_id": 48,
"text": "\\frac{a}{c+a-b} : \\frac{b}{c+a-b} : \\frac{c}{a+b-c},"
},
{
"math_id": 49,
"text": "\\frac{a^2}{b+c-a} : \\frac{b^2}{c+a-b} : \\frac{c^2}{a+b-c}."
},
{
"math_id": 50,
"text": "J"
},
{
"math_id": 51,
"text": "OI"
},
{
"math_id": 52,
"text": "OJ:JI=2R:-r"
},
{
"math_id": 53,
"text": "I, r, O, R"
}
]
| https://en.wikipedia.org/wiki?curid=69072799 |
690728 | Harmonious coloring | Vertex coloring where no two linked nodes have the same color pairing
In graph theory, a harmonious coloring is a (proper) vertex coloring in which every pair of colors appears on "at most" one pair of adjacent vertices. It is the opposite of the complete coloring, which instead requires every color pairing to occur "at least" once. The harmonious chromatic number χH("G") of a graph G is the minimum number of colors needed for any harmonious coloring of G.
Every graph has a harmonious coloring, since it suffices to assign every vertex a distinct color; thus χH("G") ≤ |V("G")|. There trivially exist graphs G with χH("G") > χ("G") (where χ is the chromatic number); one example is any path of length > 2, which can be 2-colored but has no harmonious coloring with 2 colors.
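The defining condition is easy to check directly; the following Python sketch (added here only as an illustration, using a path on four vertices as in the example above) tests whether a given vertex coloring is harmonious:

```python
# A coloring is harmonious if it is proper and each unordered pair of colors
# appears on at most one edge.
def is_harmonious(edges, coloring):
    seen_pairs = set()
    for u, v in edges:
        cu, cv = coloring[u], coloring[v]
        if cu == cv:
            return False                    # not a proper coloring
        pair = frozenset((cu, cv))
        if pair in seen_pairs:
            return False                    # this color pair already appears
        seen_pairs.add(pair)
    return True

# a path on 4 vertices: 2 colors give a proper but not harmonious coloring,
# while 3 colors suffice for a harmonious one
path = [(0, 1), (1, 2), (2, 3)]
print(is_harmonious(path, {0: "a", 1: "b", 2: "a", 3: "b"}))  # False
print(is_harmonious(path, {0: "a", 1: "b", 2: "c", 3: "a"}))  # True
```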
Some properties of χH("G"):
formula_0
where T"k",3 is the complete k-ary tree with 3 levels. (Mitchem 1989)
Harmonious coloring was first proposed by Harary and Plantholt (1982). Still very little is known about it. | [
{
"math_id": 0,
"text": "\\chi_{H}(T_{k,3}) = \\left\\lceil\\frac{3(k+1)}{2}\\right\\rceil,"
}
]
| https://en.wikipedia.org/wiki?curid=690728 |
6907330 | Conformational entropy | Entropy associated with a molecule's possible conformations
In chemical thermodynamics, conformational entropy is the entropy associated with the number of conformations of a molecule. The concept is most commonly applied to biological macromolecules such as proteins and RNA, but can also be used for polysaccharides and other molecules. To calculate the conformational entropy, the possible conformations of the molecule may first be discretized into a finite number of states, usually characterized by unique combinations of certain structural parameters, each of which has been assigned an energy. In proteins, backbone dihedral angles and side chain rotamers are commonly used as parameters, and in RNA the base pairing pattern may be used. These characteristics are used to define the degrees of freedom (in the statistical mechanics sense of a possible "microstate"). The conformational entropy associated with a particular structure or state, such as an alpha-helix, a folded or an unfolded protein structure, is then dependent on the probability of the occupancy of that structure.
The entropy of heterogeneous random coil or denatured proteins is significantly higher than that of the tertiary structure of its folded native state. In particular, the conformational entropy of the amino acid side chains in a protein is thought to be a major contributor to the energetic stabilization of the denatured state and thus a barrier to protein folding. However, a recent study has shown that side-chain conformational entropy can stabilize native structures among alternative compact structures. The conformational entropy of RNA and proteins can be estimated; for example, empirical methods to estimate the loss of conformational entropy in a particular side chain on incorporation into a folded protein can roughly predict the effects of particular point mutations in a protein. Side-chain conformational entropies can be defined as Boltzmann sampling over all possible rotameric states:
formula_0
where R is the gas constant and pi is the probability of a residue being in rotamer i.
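As a minimal numerical sketch of this formula, the following Python snippet evaluates the Boltzmann-sampling sum for an illustrative, made-up set of rotamer probabilities; the function name and example probabilities are not taken from any particular protein.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def conformational_entropy(probabilities):
    """Entropy in J/(mol*K) from rotamer occupation probabilities summing to 1."""
    return -R * sum(p * math.log(p) for p in probabilities if p > 0)

# Example: a side chain populating three rotamers with unequal probabilities.
print(conformational_entropy([0.6, 0.3, 0.1]))   # ~7.5 J/(mol*K)
# A uniformly populated 3-rotamer side chain gives the maximum, R*ln(3) ~ 9.1 J/(mol*K).
print(conformational_entropy([1/3, 1/3, 1/3]))
```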
The limited conformational range of proline residues lowers the conformational entropy of the denatured state and thus stabilizes the native states. A correlation has been observed between the thermostability of a protein and its proline residue content.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S = -R \\sum_{i} p_{i} \\ln p_{i}"
}
]
| https://en.wikipedia.org/wiki?curid=6907330 |
690736 | Complete coloring | Vertex coloring where every color pairing appears at least once
In graph theory, a complete coloring is a (proper) vertex coloring in which every pair of colors appears on "at least" one pair of adjacent vertices. Equivalently, a complete coloring is minimal in the sense that it cannot be transformed into a proper coloring with fewer colors by merging pairs of color classes. The achromatic number ψ("G") of a graph G is the maximum number of colors possible in any complete coloring of G.
A complete coloring is the opposite of a harmonious coloring, which requires every pair of colors to appear on "at most" one pair of adjacent vertices.
Complexity theory.
Finding ψ("G") is an optimization problem. The decision problem for complete coloring can be phrased as:
INSTANCE: a graph "G" = ("V", "E") and positive integer k
QUESTION: does there exist a partition of V into k or more disjoint sets "V"1, "V"2, …, "Vk" such that each Vi is an independent set for G and such that for each pair of distinct sets "Vi", "Vj", "Vi" ∪ "Vj" is not an independent set.
Determining the achromatic number is NP-hard; determining if it is greater than a given number is NP-complete, as shown by Yannakakis and Gavril in 1978 by transformation from the minimum maximal matching problem.
Note that any coloring of a graph with the minimum number of colors must be a complete coloring, so minimizing the number of colors in a complete coloring is just a restatement of the standard graph coloring problem.
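For very small graphs the achromatic number can be computed by brute force directly from the definition above. The following Python sketch (exponential time; the names are illustrative) does so for a path with four vertices.

```python
# Brute-force achromatic number: the largest k admitting a proper coloring in which
# every pair of the k colors appears on at least one edge.
from itertools import product, combinations

def is_complete_coloring(coloring, edges, k):
    if set(coloring) != set(range(k)):          # all k colors must actually be used
        return False
    pairs_on_edges = set()
    for u, v in edges:
        if coloring[u] == coloring[v]:          # properness
            return False
        pairs_on_edges.add(frozenset((coloring[u], coloring[v])))
    needed = {frozenset(p) for p in combinations(range(k), 2)}
    return needed <= pairs_on_edges

def achromatic_number(n_vertices, edges):
    for k in range(n_vertices, 0, -1):
        if any(is_complete_coloring(c, edges, k) for c in product(range(k), repeat=n_vertices)):
            return k
    return 0

path = [(0, 1), (1, 2), (2, 3)]
print(achromatic_number(4, path))  # 3
```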
Algorithms.
For any fixed "k", it is possible to determine whether the achromatic number of a given graph is at least "k", in linear time.
The optimization problem permits approximation and is approximable within a formula_0 approximation ratio.
Special classes of graphs.
The NP-completeness of the achromatic number problem holds also for some special classes of graphs:
bipartite graphs,
complements of bipartite graphs (that is, graphs having no independent set of more than two vertices), cographs and interval graphs, and even for trees.
For complements of trees, the achromatic number can be computed in polynomial time. For trees, it can be approximated to within a constant factor.
The achromatic number of an "n"-dimensional hypercube graph is known to be proportional to formula_1, but the constant of proportionality is not known precisely.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O\\left(|V|/\\sqrt{\\log |V|}\\right)"
},
{
"math_id": 1,
"text": "\\sqrt{n2^n}"
}
]
| https://en.wikipedia.org/wiki?curid=690736 |
690742 | Exact coloring | Concept in graph theory
In graph theory, an exact coloring is a (proper) vertex coloring in which every pair of colors appears on exactly one pair of adjacent vertices.
That is, it is a partition of the vertices of the graph into disjoint independent sets such that, for each pair of distinct independent sets in the partition, there is exactly one edge with endpoints in each set.
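The definition can be checked mechanically. The following Python sketch (with illustrative names only) tests whether a given coloring of a small graph is exact.

```python
# A coloring is exact when it is proper and every pair of distinct colors
# appears on exactly one edge.
from collections import Counter
from itertools import combinations

def is_exact_coloring(coloring, edges):
    colors = set(coloring)
    pair_counts = Counter()
    for u, v in edges:
        if coloring[u] == coloring[v]:           # not a proper coloring
            return False
        pair_counts[frozenset((coloring[u], coloring[v]))] += 1
    return all(pair_counts[frozenset(p)] == 1 for p in combinations(colors, 2))

# Path with three edges, colored 1,2,3,1: each of the three color pairs occurs exactly once.
print(is_exact_coloring([1, 2, 3, 1], [(0, 1), (1, 2), (2, 3)]))  # True
```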
Complete graphs, detachments, and Euler tours.
Every "n"-vertex complete graph "K""n" has an exact coloring with "n" colors, obtained by giving each vertex a distinct color.
Every graph with an "n"-color exact coloring may be obtained as a "detachment" of a complete graph, a graph obtained from the complete graph by splitting each vertex into an independent set and reconnecting each edge incident to the vertex to exactly one of the members of the corresponding independent set.
When "k" is an odd number, A path or cycle with formula_0 edges has an exact coloring, obtained by forming an exact coloring of the complete graph "K""k" and then finding an Euler tour of this complete graph. For instance, a path with three edges has a complete 3-coloring.
Related types of coloring.
Exact colorings are closely related to harmonious colorings (colorings in which each pair of colors appears at most once) and complete colorings (colorings in which each pair of colors appears at least once). Clearly, an exact coloring is a coloring that is both harmonious and complete. A graph "G" with "n" vertices and "m" edges has a harmonious "k"-coloring if and only if formula_1 and the graph formed from "G" by adding formula_2 isolated edges has an exact coloring. A graph "G" with the same parameters has a complete "k"-coloring if and only if formula_3 and there exists a subgraph "H" of "G" with an exact "k"-coloring in which each edge of "G" − "H" has endpoints of different colors. The need for the condition on the edges of "G" − "H" is shown by the example of a four-vertex cycle, which has a subgraph with an exact 3-coloring (the three-edge path) but does not have a complete 3-coloring itself.
Computational complexity.
It is NP-complete to determine whether a given graph has an exact coloring, even in the case that the graph is a tree. However, the problem may be solved in polynomial time for trees of bounded degree.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tbinom{k}{2}"
},
{
"math_id": 1,
"text": "m\\le\\tbinom{k}{2}"
},
{
"math_id": 2,
"text": "\\tbinom{k}{2}-m"
},
{
"math_id": 3,
"text": "m\\ge\\tbinom{k}{2}"
}
]
| https://en.wikipedia.org/wiki?curid=690742 |
69079808 | Lunar arithmetic | Arithmetic operations
Lunar arithmetic, formerly called dismal arithmetic, is a version of arithmetic in which the addition and multiplication operations on digits are defined as the max and min operations. Thus, in lunar arithmetic,
formula_0 and formula_1
The lunar arithmetic operations on nonnegative multidigit numbers are performed as in usual arithmetic as illustrated in the following examples. The world of lunar arithmetic is restricted to the set of nonnegative integers.
    976 +
    348
    ---
    978    (adding digits column-wise, i.e. keeping the larger digit in each column)

    976 ×
    348
  -----
    876    (multiplying the digits of 976 by 8)
   444     (multiplying the digits of 976 by 4, shifted one place to the left)
  333      (multiplying the digits of 976 by 3, shifted two places to the left)
  -----
  34876    (adding the three rows digit-wise, column by column, keeping the largest digit)
The concept of lunar arithmetic was proposed by David Applegate, Marc LeBrun, and Neil Sloane.
In the general definition of lunar arithmetic, one considers numbers expressed in an arbitrary base formula_2 and defines the lunar arithmetic operations as the max and min operations on the digits in that base. However, for simplicity, the following discussion assumes that the numbers are represented in base 10.
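The digit-wise description above translates directly into code. The following Python sketch (with illustrative function names) implements base-10 lunar addition and multiplication and reproduces the worked examples.

```python
def lunar_add(x, y):
    """Digit-wise maximum of two nonnegative integers, right-aligned, with no carries."""
    sx, sy = str(x), str(y)
    width = max(len(sx), len(sy))
    sx, sy = sx.zfill(width), sy.zfill(width)
    return int("".join(max(a, b) for a, b in zip(sx, sy)))

def lunar_mul(x, y):
    """Schoolbook multiplication with min for digit products and lunar_add for the sums."""
    total = 0
    for shift, d in enumerate(reversed(str(y))):
        # Partial product: each digit of x replaced by min(digit, d), then shifted.
        partial = int("".join(min(a, d) for a in str(x))) * 10 ** shift
        total = lunar_add(total, partial)
    return total

print(lunar_add(976, 348))   # 978
print(lunar_mul(976, 348))   # 34876
print(lunar_mul(2, 7))       # 2
```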
Properties of the lunar operations.
A few of the elementary properties of the lunar operations are listed below.
Lunar addition and lunar multiplication are both commutative and associative, and lunar multiplication distributes over lunar addition, so the nonnegative integers form a commutative semiring under these operations.
The number 0 is the identity for lunar addition, since the larger of a digit and 0 is the digit itself.
The number 9 is the identity for lunar multiplication, since the smaller of a digit and 9 is the digit itself.
There is no subtraction or division: in general, a number has neither an additive nor a multiplicative inverse.
Some standard sequences.
Even numbers.
It may be noted that, in lunar arithmetic, formula_3 and formula_4. The even numbers are numbers of the form formula_5. The first few distinct even numbers under lunar arithmetic are listed below:
formula_6
These are the numbers whose digits are all less than or equal to 2.
Squares.
A square number is a number of the form formula_7. So in lunar arithmetic, the first few squares are the following.
formula_8
Triangular numbers.
A triangular number is a number of the form formula_9. The first few triangular lunar numbers are:
formula_10
Factorials.
In lunar arithmetic, the first few values of the factorial formula_11 are as follows:
formula_12
Prime numbers.
In the usual arithmetic, a prime number is defined as a number formula_13 whose only possible factorisation is formula_14. Analogously, in the lunar arithmetic, a prime number is defined as a number formula_15 whose only factorisation is formula_16, where 9 is the multiplicative identity (corresponding to 1 in usual arithmetic). Accordingly, the following are the first few prime numbers in lunar arithmetic:
formula_17
formula_18
Every number of the form formula_19, where formula_20 is arbitrary, is a prime in lunar arithmetic. Since formula_20 is arbitrary, this shows that there are infinitely many primes in lunar arithmetic.
Sumsets and lunar multiplication.
There is an interesting relation between the operation of forming sumsets of subsets of nonnegative integers and lunar multiplication on binary numbers. Let formula_21 and formula_22 be nonempty subsets of the set formula_23 of nonnegative integers. The sumset formula_24 is defined by
formula_25
To the set formula_21 we can associate a unique binary number formula_26 as follows. Let formula_27.
For formula_28 we define
formula_29
and then we define
formula_30
It has been proved that
formula_31 where the "formula_32" on the right denotes the lunar multiplication on binary numbers.
Magic squares of squares using lunar arithmetic.
A magic square of squares is a magic square formed by squares of numbers. It is not known whether there are any magic squares of squares of order 3 with the usual addition and multiplication of integers. However, it has been observed that, if we consider the lunar arithmetic operations, there are infinitely many magic squares of squares of order 3. Here is an example:
formula_33
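This can be verified directly: squaring each entry with lunar multiplication and taking lunar sums of the rows, columns and diagonals gives the same value in every case. The following self-contained Python sketch (repeating the lunar helper functions; the names are illustrative) performs the check.

```python
def lunar_add(x, y):
    sx, sy = str(x), str(y)
    width = max(len(sx), len(sy))
    sx, sy = sx.zfill(width), sy.zfill(width)
    return int("".join(max(a, b) for a, b in zip(sx, sy)))

def lunar_mul(x, y):
    total = 0
    for shift, d in enumerate(reversed(str(y))):
        total = lunar_add(total, int("".join(min(a, d) for a in str(x))) * 10 ** shift)
    return total

base = [[44, 38, 45], [46, 0, 28], [18, 47, 8]]
squares = [[lunar_mul(v, v) for v in row] for row in base]

lines = [row for row in squares]                              # rows
lines += [list(col) for col in zip(*squares)]                 # columns
lines.append([squares[i][i] for i in range(3)])               # main diagonal
lines.append([squares[i][2 - i] for i in range(3)])           # anti-diagonal
print({lunar_add(lunar_add(a, b), c) for a, b, c in lines})   # a single common lunar sum
```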
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2+7=\\max\\{2,7\\}=7"
},
{
"math_id": 1,
"text": "2\\times 7 = \\min\\{2,7\\}=2."
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "n+n\\ne 2\\times n"
},
{
"math_id": 4,
"text": "n+n=n"
},
{
"math_id": 5,
"text": "2 \\times n"
},
{
"math_id": 6,
"text": "0,1,2,10,11,12,20,21,22,100, 101, 102, 120, 121, 122, \\ldots"
},
{
"math_id": 7,
"text": "n\\times n"
},
{
"math_id": 8,
"text": "0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 100, 111, 112, 113, 114, 115, 116, 117, 118, 119, 200, \\ldots"
},
{
"math_id": 9,
"text": "1+2+\\cdots+n"
},
{
"math_id": 10,
"text": "0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 29, 29, 29, 29, 29, \\ldots"
},
{
"math_id": 11,
"text": "n!=1\\times 2\\times \\cdots \\times n"
},
{
"math_id": 12,
"text": "1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 110, 1110, 11110, 111110, 1111110, \\ldots "
},
{
"math_id": 13,
"text": "p"
},
{
"math_id": 14,
"text": "1\\times p"
},
{
"math_id": 15,
"text": "m"
},
{
"math_id": 16,
"text": "9\\times n"
},
{
"math_id": 17,
"text": "19, 29, 39, 49, 59, 69, 79, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 109, 209, 219,"
},
{
"math_id": 18,
"text": "309, 319, 329, 409, 419, 429, 439, 509, 519, 529, 539, 549, 609, 619, 629, 639, \\dots"
},
{
"math_id": 19,
"text": "10 \\ldots ( n\\text{ zeros}) \\ldots 09"
},
{
"math_id": 20,
"text": "n"
},
{
"math_id": 21,
"text": "A "
},
{
"math_id": 22,
"text": "B "
},
{
"math_id": 23,
"text": "N"
},
{
"math_id": 24,
"text": "A+B "
},
{
"math_id": 25,
"text": "A+B=\\{a+b:a\\in A, \\, b\\in B\\}. "
},
{
"math_id": 26,
"text": "\\beta(A) "
},
{
"math_id": 27,
"text": "m=\\max(A) "
},
{
"math_id": 28,
"text": "i=0,1,\\ldots,m "
},
{
"math_id": 29,
"text": "b_i=\\begin{cases} 1& \\text{if } i\\in A\\\\ 0 &\\text{if } i\\notin A\\end{cases} "
},
{
"math_id": 30,
"text": "\\beta(A)=b_mb_{m-1}\\ldots b_0."
},
{
"math_id": 31,
"text": "\\beta(A+B)=\\beta(A)\\times\\beta(B) "
},
{
"math_id": 32,
"text": "\\times "
},
{
"math_id": 33,
"text": " \\begin{matrix}44^2 & 38^2 & 45^2\\\\ 46^2&0^2&28^2\\\\ 18^2 &47^2 &8^2\\end{matrix}"
}
]
| https://en.wikipedia.org/wiki?curid=69079808 |
69085621 | Triangle conic | Conic plane curve associated with a given triangle
In Euclidean geometry, a triangle conic is a conic in the plane of the reference triangle and associated with it in some way. For example, the circumcircle and the incircle of the reference triangle are triangle conics. Other examples are the Steiner ellipse, which is an ellipse passing through the vertices and having its centre at the centroid of the reference triangle; the Kiepert hyperbola which is a conic passing through the vertices, the centroid and the orthocentre of the reference triangle; and the Artzt parabolas, which are parabolas touching two sidelines of the reference triangle at vertices of the triangle.
The terminology of "triangle conic" is widely used in the literature without a formal definition; that is, without precisely formulating the relations a conic should have with the reference triangle so as to qualify it to be called a triangle conic (see ). However, Greek mathematician Paris Pamfilos defines a triangle conic as a "conic circumscribing a triangle △"ABC" (that is, passing through its vertices) or inscribed in a triangle (that is, tangent to its side-lines)". The terminology "triangle circle" (respectively, "ellipse, hyperbola, parabola") is used to denote a circle (respectively, ellipse, hyperbola, parabola) associated with the reference triangle is some way.
Even though several triangle conics have been studied individually, there is no comprehensive encyclopedia or catalogue of triangle conics similar to Clark Kimberling's Encyclopedia of Triangle Centres or Bernard Gibert's Catalogue of Triangle Cubics.
Equations of triangle conics in trilinear coordinates.
The equation of a general triangle conic in trilinear coordinates "x" : "y" : "z" has the form
formula_0
The equations of triangle circumconics and inconics have respectively the forms
formula_1
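As a small sanity check of these forms, the following Python sketch (with arbitrary illustrative coefficients) evaluates the general conic and confirms that a circumconic vanishes at the vertices, whose trilinear coordinates are 1 : 0 : 0, 0 : 1 : 0 and 0 : 0 : 1.

```python
def general_conic(r, s, t, u, v, w, x, y, z):
    """Left-hand side of the general triangle conic in trilinear coordinates."""
    return r*x*x + s*y*y + t*z*z + 2*u*y*z + 2*v*z*x + 2*w*x*y

def circumconic(u, v, w, x, y, z):
    """Left-hand side of the circumconic u*yz + v*zx + w*xy = 0."""
    return u*y*z + v*z*x + w*x*y

u, v, w = 2.0, -1.0, 3.0   # arbitrary coefficients for illustration
for vertex in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    print(circumconic(u, v, w, *vertex))   # 0.0 at each vertex: the conic passes through them
```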
Special triangle conics.
In the following, a few typical special triangle conics are discussed. In the descriptions, the standard notations are used: the reference triangle is always denoted by △"ABC". The angles at the vertices A, B, C are denoted by A, B, C and the lengths of the sides opposite to the vertices A, B, C are respectively a, b, c. The equations of the conics are given in the trilinear coordinates "x" : "y" : "z". The conics are selected as illustrative of the several different ways in which a conic could be associated with a triangle.
Families of triangle conics.
Hofstadter ellipses.
An Hofstadter ellipse is a member of a one-parameter family of ellipses in the plane of △"ABC" defined by the following equation in trilinear coordinates:
formula_2
where t is a parameter and
formula_3
The ellipses corresponding to t and 1 − "t" are identical. When "t" = 1/2 we have the inellipse
formula_4
and when "t" → 0 we have the circumellipse
formula_5
Conics of Thomson and Darboux.
The family of Thomson conics consists of those conics inscribed in the reference triangle △"ABC" having the property that the normals at the points of contact with the sidelines are concurrent. The family of Darboux conics contains as members those circumscribed conics of the reference △"ABC" such that the normals at the vertices of △"ABC" are concurrent. In both cases the points of concurrency lie on the Darboux cubic.
Conics associated with parallel intercepts.
Given an arbitrary point P in the plane of the reference triangle △"ABC", if lines are drawn through P parallel to the sidelines BC, CA, AB intersecting the other sides at Xb, Xc, Yc, Ya, Za, Zb, then these six points of intersection lie on a conic. If P is chosen as the symmedian point, the resulting conic is a circle called the Lemoine circle. If the trilinear coordinates of P are "u" : "v" : "w", the equation of the six-point conic is
formula_6
Yff conics.
The members of the one-parameter family of conics defined by the equation
formula_7
where formula_8 is a parameter, are the Yff conics associated with the reference triangle △"ABC". A member of the family is associated with every point "P"("u" : "v" : "w") in the plane by setting
formula_9
The Yff conic is a parabola if
formula_10 (say).
It is an ellipse if formula_11 and formula_12 and it is a hyperbola if formula_13. For formula_14, the conics are imaginary.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "rx^2 + sy^2 + tz^2 + 2uyz + 2vzx + 2wxy = 0."
},
{
"math_id": 1,
"text": "\\begin{align}\n& uyz + vzx + wxy = 0 \\\\[2pt]\n& l^2 x^2 + m^2 y^2 + n^2 z^2 - 2mnyz - 2nlzx - 2lmxy = 0\n\\end{align}"
},
{
"math_id": 2,
"text": "x^2 + y^2 + z^2 + yz\\left[D(t) + \\frac{1}{D(t)}\\right] + zx\\left[E(t) + \\frac{1}{E(t)}\\right] + xy\\left[F(t) + \\frac{1}{F(t)}\\right] = 0"
},
{
"math_id": 3,
"text": "\\begin{align}\n D(t) &= \\cos A - \\sin A \\cot tA \\\\ \n E(t) &= \\cos B - \\sin B \\cot tB \\\\\n F(t) &= \\sin C - \\cos C \\cot tC\n\\end{align}"
},
{
"math_id": 4,
"text": "x^2+y^2+z^2 - 2yz- 2zx - 2xy =0"
},
{
"math_id": 5,
"text": "\\frac{a}{Ax}+\\frac{b}{By}+\\frac{c}{Cz}=0."
},
{
"math_id": 6,
"text": "-(u + v + w)^2(bcuyz + cavzx + abwxy) + (ax + by + cz)(vw(v + w)ax + wu(w + u)by + uv(u + v)cz) = 0"
},
{
"math_id": 7,
"text": "x^2+y^2+z^2-2\\lambda(yz+zx+xy)=0,"
},
{
"math_id": 8,
"text": "\\lambda"
},
{
"math_id": 9,
"text": "\\lambda=\\frac{u^2+v^2+w^2}{2(vw+wu+uv)}."
},
{
"math_id": 10,
"text": "\\lambda=\\frac{a^2+b^2+c^2}{a^2+b^2+c^2-2(bc+ca+ab)}=\\lambda_0"
},
{
"math_id": 11,
"text": "\\lambda < \\lambda_0"
},
{
"math_id": 12,
"text": "\\lambda_0 > \\frac{1}{2}"
},
{
"math_id": 13,
"text": "\\lambda_0 < \\lambda < -1"
},
{
"math_id": 14,
"text": " -1 < \\lambda <\\frac{1}{2}"
}
]
| https://en.wikipedia.org/wiki?curid=69085621 |
690861 | Flash freezing | Process where objects are frozen quickly by exposure to cryogenic temperatures
In physics and chemistry, flash freezing is the process whereby objects are rapidly frozen. This is done by subjecting them to cryogenic temperatures, or it can be done through direct contact with liquid nitrogen at −196 °C (−320 °F). It is commonly used in the food industry.
Flash freezing is of great importance in atmospheric science, as its study is necessary for a proper climate model for the formation of ice clouds in the upper troposphere, which effectively scatter incoming solar radiation and prevent Earth from becoming overheated by the sun.
The process is also closely related to classical nucleation theory, which helps in the understanding of the many materials, phenomena, and theories in related situations.
Overview.
When water freezes slowly, crystals grow from fewer nucleation sites, resulting in fewer and larger crystals. This damages cell walls and causes cell dehydration. When water freezes quickly, as in flash freezing, there are more nucleation sites and more, smaller crystals. This results in much less damage to cell walls: the faster the freezing, the smaller the crystals and the less the damage. This is why flash freezing is good for food and tissue preservation.
Applications and techniques.
Flash freezing is used in the food industry to quickly freeze perishable food items (see frozen food). In this case, food items are subjected to temperatures well below the freezing point of water. Thus, smaller ice crystals are formed, causing less damage to cell membranes.
Flash freezing techniques are used to freeze biological samples quickly so that large ice crystals cannot form and damage the sample. This rapid freezing is done by submerging the sample in liquid nitrogen or a mixture of dry ice and ethanol.
American inventor Clarence Birdseye developed the "quick-freezing" process of food preservation in the 20th century using a cryogenic process. In practice, a mechanical freezing process is usually used instead because of its lower cost. There has been continuous optimization of the freezing rate in mechanical freezing to minimize ice crystal size.
The results have important implications in climate research. One of the current debates is whether the formation of ice occurs near the surface or within the micrometre-sized droplets suspended in clouds. If it is the former, effective engineering approaches might be taken to tune the surface tension of water so that the ice crystallization rate can be controlled.
How water freezes.
There are phenomena like supercooling, in which the water is cooled below its freezing point but remains liquid if there are too few defects to seed crystallization. One can therefore observe a delay until the water adjusts to the new, below-freezing temperature. Supercooled liquid water must become ice at −48 °C (−55 °F), not just because of the extreme cold, but because the molecular structure of water changes physically to form tetrahedron shapes, with each water molecule loosely bonded to four others. This suggests a structural change from liquid to "intermediate ice". The crystallization of ice from supercooled water is generally initiated by a process called nucleation, which takes place on time scales of nanoseconds and length scales of nanometers.
The surface environment does not play a decisive role in the formation of ice and snow. The density fluctuations inside drops result in the possible freezing regions covering the middle and the surface regions. The freezing from the surface or from within may be random. However, in the strange world of water, tiny amounts of liquid water are theoretically still present, even as temperatures go below and almost all the water has turned solid, either into crystalline ice or amorphous water. Below , ice is crystallizing too fast for any property of the remaining liquid to be measured. The freezing speed directly influences the nucleation process and ice crystal size. A supercooled liquid will stay in a liquid state below the normal freezing point when it has little opportunity for nucleation; that is if it is pure enough and has a smooth enough container. Once agitated it will rapidly become a solid. During the final stage of freezing, an ice drop develops a pointy tip, which is not observed for most other liquids, and arises because water expands as it freezes. Once the liquid is completely frozen, the sharp tip of the drop attracts water vapor in the air, much like a sharp metal lightning rod attracts electrical charges. The water vapor collects on the tip and a tree of small ice crystals starts to grow. An opposite effect has been shown to preferentially extract water molecules from the sharp edge of potato wedges in the oven.
If a microscopic droplet of water is cooled very fast, it forms what is called a glass (low-density amorphous ice) in which all the tetrahedrons of water molecules are not lined up, but amorphous. The change in the structure of water controls the rate at which ice forms. Depending on its temperature and pressure, water ice has 16 different crystalline forms in which water molecules cling to each other with hydrogen bonds. When water is cooled, its structure becomes closer to the structure of ice, which is why the density goes down, and this should be reflected in an increased crystallization rate showing these crystalline forms.
Related quantities.
Several related quantities are useful for understanding flash freezing.
Crystal growth or nucleation is the formation of a new thermodynamic phase or a new structure via self-assembly. Nucleation is often found to be very sensitive to impurities in the system. For nucleation of a new thermodynamic phase, such as the formation of ice in water below , if the system is not evolving with time and nucleation occurs in one step, then the probability that nucleation has not occurred should undergo exponential decay. This can also be observed in the nucleation of ice in supercooled small water droplets. The decay rate of the exponential gives the nucleation rate and is given by
formula_0
where
formula_1 is the height of the nucleation barrier, i.e. the free-energy cost of forming the critical nucleus,
formula_2 is the number of nucleation sites,
formula_3 is the rate at which molecules attach to the nucleus, causing it to grow, and
formula_4 is the Zeldovich factor, the probability that a nucleus at the top of the barrier goes on to form the new phase rather than dissolving.
Classical nucleation theory is a widely used approximate theory for estimating these rates and how they vary with variables such as temperature. It correctly predicts that the time needed for nucleation decreases extremely rapidly as the degree of supersaturation increases.
Nucleation can be divided into homogeneous nucleation and heterogeneous nucleation. Homogeneous nucleation is treated first because it is much simpler. Classical nucleation theory assumes that, for a microscopic nucleus of a new phase, the free energy of a droplet can be written as the sum of a bulk term, proportional to the volume of the nucleus, and a surface term, proportional to its surface area.
formula_5
The first term is the volume term and, assuming that the nucleus is spherical, it involves the volume of a sphere of radius formula_6. Here formula_7 is the difference in free energy per unit volume between the thermodynamic phase in which nucleation is occurring and the phase that is nucleating.
At some intermediate value of formula_6, the free energy goes through a maximum, and so the probability of formation of a nucleus goes through a minimum. This least-probable nucleus, i.e., the one with the highest value of formula_8, occurs where
formula_9
This is called the critical nucleus and occurs at a critical nucleus radius
formula_10
The addition of new molecules to nuclei larger than this critical radius decreases the free energy, so these nuclei are more probable.
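The expressions above are easy to evaluate numerically. The following Python sketch uses placeholder values of the surface tension and of the bulk free-energy difference (chosen only to illustrate the formulas, not measured values for water) to compute the free energy of a spherical nucleus, the critical radius and the barrier height.

```python
import math

sigma = 0.03   # surface (interfacial) free energy per unit area, J/m^2 (illustrative)
dg = -3.0e7    # bulk free-energy difference per unit volume, J/m^3 (negative below freezing)

def delta_G(r):
    """Free energy of a spherical nucleus of radius r: volume term plus surface term."""
    return (4.0 / 3.0) * math.pi * r**3 * dg + 4.0 * math.pi * r**2 * sigma

r_star = -2.0 * sigma / dg                              # critical nucleus radius
dG_star = (16.0 * math.pi / 3.0) * sigma**3 / dg**2     # barrier height, delta_G evaluated at r_star

print(r_star)                      # ~2 nm for these illustrative numbers
print(delta_G(r_star), dG_star)    # the two values agree
```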
Heterogeneous nucleation, nucleation with the nucleus at a surface, is much more common than homogeneous nucleation. Heterogeneous nucleation is typically much faster than homogeneous nucleation because the nucleation barrier formula_1 is much lower at a surface. This is because the nucleation barrier comes from the positive term in the free energy formula_8, which is the surface term. Thus, the nucleation probability is highest at a surface rather than at the center of a liquid.
The Laplace pressure is the pressure difference between the inside and the outside of a curved surface that forms the boundary between a gas region and a liquid region. It is determined from the Young–Laplace equation, given as
formula_11.
where formula_12 and formula_13 are the principal radii of curvature and formula_14 (also denoted as formula_15) is the surface tension.
The surface tension can be defined in terms of force or energy. The surface tension of a liquid is the ratio of the change in the energy of the liquid to the change in the surface area of the liquid that produced it. It can be written as formula_16, where the work W is interpreted as the potential energy stored in the surface.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R\\ =\\ N_S Zj\\exp \\left( \\frac{-\\Delta G^*}{k_BT} \\right)"
},
{
"math_id": 1,
"text": "{\\displaystyle \\Delta G^{*}} "
},
{
"math_id": 2,
"text": "{\\displaystyle N_{S}}"
},
{
"math_id": 3,
"text": "{\\displaystyle j}"
},
{
"math_id": 4,
"text": "{\\displaystyle Z}"
},
{
"math_id": 5,
"text": "{\\displaystyle \\Delta G={\\frac {4}{3}}\\pi r^{3}\\Delta g+4\\pi r^{2}\\sigma } "
},
{
"math_id": 6,
"text": "{\\displaystyle r}"
},
{
"math_id": 7,
"text": "{\\displaystyle \\Delta g}"
},
{
"math_id": 8,
"text": "{\\displaystyle \\Delta G} "
},
{
"math_id": 9,
"text": "{\\displaystyle {\\frac {dG}{dr}}=0}"
},
{
"math_id": 10,
"text": "{\\displaystyle r^{*}=-{\\frac {2\\sigma }{\\Delta g}}}"
},
{
"math_id": 11,
"text": "\\Delta P \\equiv P_\\text{inside} - P_\\text{outside} = \\gamma\\left(\\frac{1}{R_1}+\\frac{1}{R_2}\\right)"
},
{
"math_id": 12,
"text": "{\\displaystyle R_{1}}"
},
{
"math_id": 13,
"text": "{\\displaystyle R_{2}}"
},
{
"math_id": 14,
"text": "{\\displaystyle \\gamma }"
},
{
"math_id": 15,
"text": "{\\displaystyle \\sigma }"
},
{
"math_id": 16,
"text": "\\gamma=\\frac{W}{\\Delta A}"
}
]
| https://en.wikipedia.org/wiki?curid=690861 |
690874 | Matrix decoder | Matrix decoding is an audio technology where a small number of discrete audio channels (e.g., 2) are decoded into a larger number of channels on play back (e.g., 5). The channels are generally, but not always, arranged for transmission or recording by an encoder, and decoded for playback by a decoder. The function is to allow multichannel audio, such as quadraphonic sound or surround sound to be encoded in a stereo signal, and thus played back as stereo on stereo equipment, and as surround on surround equipment – this is "compatible" multichannel audio.
Process.
Matrix encoding does "not" allow one to encode several channels in "fewer" channels without losing information: one cannot fit 5 channels into 2 (or even 3 into 2) without losing information, as this loses dimensions: the decoded signals are not independent. The idea is rather to encode something that will both be an acceptable approximation of the surround sound when decoded, and acceptable (or even superior) stereo.
Notation.
The notation for matrix encoding lists the number of original discrete audio channels, the number of encoded (transmitted) channels, and the number of decoded channels, separated by colons. For example, four channels encoded into two discrete channels and decoded back to four channels would be notated:
4:2:4
Some methods derive new channels from the existing ones, with no special encoding of the audio source. For example, five discrete channels decoded to six channels would be notated:
5:5:6
Such derived channel "decoders" may take advantage of the Haas effect, as well as audio cues inherent in the source channels.
Many matrix encoding methods have been developed:
Hafler circuit (2:2:4).
The earliest and simpler form of decoding is the Hafler circuit, deriving back channels out of normal stereo recording (2:2:4). It was used for decoding only (encoding sound was not considered).
Dynaquad matrix (2:2:4) / (4:2:4).
The Dynaquad matrix introduced in 1969 was based on the Hafler circuit, but also used for a specific encoding of 4 sound channels in some albums (4:2:4).
Electro-Voice Stereo-4 matrix (2:2:4) / (4:2:4).
The Stereo-4 matrix was invented by Leonard Feldman and Jon Fixler, introduced in 1970, and sold by Electro-Voice and Radio Shack. This matrix was used to encode 4 sound channels on many record albums (4:2:4).
SQ matrix, "Stereo Quadraphonic", CBS SQ (4:2:4).
"formula_0 phase-shift, formula_1 phase-shift"
The basic SQ matrix had mono/stereo anomalies as well as encoding/decoding problems, heavily criticized by Michael Gerzon and others.
An attempt to improve the system led to the use of other encoders or sound capture techniques, yet the decoding matrix remained unchanged.
Position Encoder.
An N/2 encoder that encoded every position in a 360° circle - it had 16 inputs and each could be dialed to the exact direction desired, generating an optimized encode.
Forward-Oriented encoder.
"formula_0 phase-shift, formula_1 phase-shift"
The Forward-Oriented encoder caused Center Back to be encoded as Center Front and was recommended for live broadcast use for maximum mono compatibility - it also encoded Center Left/Center Right and both diagonal splits in the optimal manner.
Could be used to modify existing 2-channel stereo recordings and create 'synthesized SQ' that when played through a Full-Logic or Tate DES SQ decoder, exhibited a 180° or 270° synthesized quad effect. Many stereo FM radio stations broadcasting SQ in the 1970s used their Forward-Oriented SQ encoder for this. For SQ "decoders", CBS designed a circuit that produced the 270° enhancement using the 90° phase shifters in the decoder. Sansui's QS Encoders and QS Vario-Matrix Decoders had a similar capability.
Backwards-Oriented encoder.
"formula_0 phase-shift, formula_1 phase-shift"
The Backwards-Oriented Encoder was the reverse of the Forward-Oriented Encoder - it allowed sounds to be placed optimally in the back half of the room, but mono-compatibility was sacrificed.
When used with standard stereo recordings it created "extra wide" stereo with sounds outside the speakers.
Some encoding mixers had channel strips switchable between forward-oriented and backwards-oriented encoding.
London Box.
It encoded the Center Back in such a way that it didn't cancel in mono playback, thus its output was usually mixed with that of a Position Encoder or a Forward Oriented encoder. After 1972, the vast majority of SQ Encoded albums were mixed with either the Position Encoder or the Forward-Oriented encoder.
Ghent microphone.
In addition, CBS created the SQ Ghent Microphone, which was a spatial microphone system using the Neumann QM-69 mic. The signals from the QM-69 were differenced, and then phase-matrixed into 2-channel SQ.
With the Ghent Microphone, SQ was transformed from a Matrix into a Kernel and an additional signal could be derived to provide N:3:4 performance.
Universal SQ.
In 1976, Ben Bauer integrated matrix and discrete systems into USQ, or Universal SQ.
It was a hierarchical 4-4-4 discrete matrix that used the SQ matrix as the baseband for discrete quadraphonic FM broadcasts using additional difference signals called "T" and "Q". For a USQ FM broadcast, the additional "T" modulation was placed at 38 kHz in quadrature to the standard stereo difference signal and the "Q" modulation was placed on a carrier at 76 kHz. For standard 2-channel SQ Matrix broadcasts, CBS recommended that an optional pilot-tone be placed at 19 kHz in quadrature to the regular pilot-tone to indicate SQ encoded signals and activate the listeners Logic decoder.
CBS argued that the SQ system should be selected as the standard for quadraphonic FM because, in FCC listening tests of the various four channel broadcast proposals, the 4:2:4 SQ system, decoded with a CBS Paramatrix decoder, outperformed 4:3:4 (without logic) as well as all other 4:2:4 (with logic) systems tested, approaching the performance of a discrete master tape within a very slight margin. At the same time, the SQ "fold" to stereo and mono was preferred to the stereo and mono "fold" of 4:4:4, 4:3:4 and all other 4:2:4 encoding systems.
Tate DES decoder.
The Directional Enhancement System, also known as the Tate DES, was an advanced decoder that enhanced the directionality of the basic SQ matrix.
It first matrixed the four outputs of the SQ decoder to derive additional signals, then compared their envelopes to detect the predominant direction and degree of dominance.
A processor section, implemented outside of the Tate IC chips, applied variable attack/decay timing to the control signals and determined the coefficients of the "B" (Blend) matrices needed to enhance the directionality. These were acted upon by true analog multipliers in the Matrix Multiplier IC's, to multiply the incoming matrix by the "B" matrices and produce outputs in which the directionality of all predominant sounds were enhanced.
Since the DES could recognize all three directions of the Energy Sphere simultaneously, and enhance the separation, it had a very open and 'discrete' sounding soundfield.
In addition, the enhancement was done with sufficient additional complexity that all non-dominant sounds were kept at their proper levels.
Dolby used the Tate DES IC's in their theater processors until around 1986, when they developed the Pro Logic system. Unfortunately, delays and problems kept the Tate DES IC's from the market until the late-1970s and only two consumer decoders were ever made that employed them, the Audionics Space & Image Composer and the Fosgate Tate II 101A. The Fosgate used a faster, updated version of the IC, called the Tate II, and additional circuitry that provided for separation enhancement around the full 360 soundfield. Unlike the earlier Full Wave-matching Logic decoders for SQ, that varied the output levels to enhance directionality, the Tate DES cancelled SQ signal crosstalk as a function of the predominant directionality, keeping non-dominant sounds and reverberation in its proper spatial locations at their correct level.
QS matrix, "Regular Matrix", "Quadraphonic Sound" (4:2:4).
"formula_0 phase-shift, formula_1 phase-shift"
Matrix H (4:2:4).
"j = 20° phase-shift"
"k = 25° phase-shift"
"l = 55° phase-shift"
"m = 115° phase-shift"
Ambisonic UHJ kernel (3:2:4 or more).
"formula_0 phase-shift, formula_1 phase-shift"
Dolby Stereo and Dolby Surround (matrix) 4:2:4.
Dolby Stereo and Dolby Surround are also known as Dolby MP, Dolby SVA and Pro Logic.
Dolby SVA matrix is the original name of the Dolby Stereo 4:2:4 encoding matrix.
The term "Dolby Surround" refers to both the encoding and decoding in the home environment, while in the theater it is known "Dolby Stereo", "Dolby Motion Picture matrix" or "Dolby MP".
"Pro Logic" refers to the decoder used, there is no special Pro Logic encoding matrix.
The Ultra Stereo system, developed by a different company, is compatible with Dolby Stereo and uses similar matrixes.
The Dolby Stereo Matrix is straightforward: the four original channels, Left (L), Center (C), Right (R), and Surround (S), are combined into two, known as Left-total (LT) and Right-total (RT), by the following formula:
LT = L + 0.707 C + 0.707 jS
RT = R + 0.707 C − 0.707 jS
where "j = 90° phase-shift"
The center channel information is carried by both LT and RT in phase, and surround channel information by both LT and RT but out of phase. The surround channel is a single limited frequency-range (7 kHz low-pass filtered) mono rear channel, dynamically compressed and placed with a lower volume than the rest. This allows for better separation of signals.
This gives good compatibility with mono playback, which reproduces L, C and R from the mono speaker with C at a level 3 dB higher than L or R, while the surround information cancels out. It also gives good compatibility with two-channel stereo playback, where C is reproduced from both left and right speakers to form a phantom center and surround is reproduced from both speakers but in a diffuse manner.
A simple 4-channel decoder could simply send the sum signal (L+R) to the center speaker, and the difference signal (L-R) to the surrounds. But such a decoder would provide poor separation between adjacent speaker channels, thus anything intended for the center speaker would also reproduce from left and right speakers only 3 dB below the level in the center speaker. Similarly anything intended for the left speaker would be reproduced from both the center and surround speakers, again only 3 dB below the level in the left speaker. There is, however, complete separation between left and right, and between center and surround channels.
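The passive part of such a decoder amounts to forming the sum and difference of the two transmitted channels. The following Python/NumPy sketch illustrates this; the scaling factor, the function name and the sample data are illustrative, and the low-pass filtering, delay and logic steering described in the next paragraph are not modeled.

```python
import numpy as np

def passive_decode(lt, rt):
    """Return (left, center, right, surround) feeds from a matrix-encoded stereo pair."""
    left = lt
    right = rt
    center = 0.7071 * (lt + rt)      # in-phase content adds
    surround = 0.7071 * (lt - rt)    # out-of-phase content adds, in-phase content cancels
    return left, center, right, surround

# Tiny example: a stereo pair carrying only "center" content (identical channels).
lt = np.array([0.5, -0.2, 0.1])
rt = np.array([0.5, -0.2, 0.1])
_, center, _, surround = passive_decode(lt, rt)
print(center)    # scaled copy of the common signal
print(surround)  # zeros: the difference cancels
```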
To overcome this problem the cinema decoder uses so-called "logic" circuitry to improve the separation. The logic circuitry decides which speaker channel has the highest signal level and gives it priority, attenuating the signals fed to the adjacent channels. Because there already is complete separation between opposite channels there is no need to attenuate those, in effect the decoder switches between L and R priority and C and S priority. This places some limitations on mixing for Dolby Stereo and to ensure that sound mixers mixed soundtracks appropriately they would monitor the sound mix via a Dolby Stereo encoder and decoder in tandem. In addition to the logic circuitry the surround channel is also fed via a delay, adjustable up to 100 ms to suit auditoria of differing sizes, to ensure that any leakage of program material intended for left or right speakers into the surround channel is always heard first from the intended speaker. This exploits the "Precedence effect" to localize the sound to the intended direction.
Dolby Pro Logic II matrix (5:2:5).
"formula_0 phase-shift, formula_1 phase-shift"
The Pro Logic II matrix provides for stereo full frequency back channels.
Normally a sub-woofer channel is driven by simply filtering and redirecting the existing bass frequencies of the original stereo track.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "j = +90^\\circ"
},
{
"math_id": 1,
"text": "k = -90^\\circ"
}
]
| https://en.wikipedia.org/wiki?curid=690874 |
69087532 | Piecewise-constant valuation | Piece-wise division of objects
A piecewise-constant valuation is a kind of function that represents the utility of an agent over a continuous resource, such as land. It occurs when the resource can be partitioned into a finite number of regions, and in each region, the value-density of the agent is constant. A piecewise-uniform valuation is a piecewise-constant valuation in which the constant is the same in all regions.
Piecewise-constant and piecewise-uniform valuations are particularly useful in algorithms for fair cake-cutting.
Formal definition.
There is a "resource" represented by a set "C." There is a "valuation" over the resource, defined as a continuous measure formula_0. The measure "V" can be represented by a "value-density function" formula_1. The value-density function assigns, to each point of the resource, a real value. The measure "V" of each subset "X" of "C" is the integral of "v" over "X".
A valuation "V" is called piecewise-constant, if the corresponding value-density function "v" is a piecewise-constant function. In other words: there is a partition of the resource "C" into finitely many regions, "C"1...,"Ck", such that for each "j" in 1...,"k", the function "v" inside "Cj" equals some constant "Uj".
A valuation "V" is called piecewise-uniform if the constant is the same for all regions, that is, for each "j" in 1...,"k", the function "v" inside "Cj" equals some constant "U".
Generalization.
A piecewise-linear valuation is a generalization of piecewise-constant valuation in which the value-density in each region "j" is a linear function, "ajx"+"bj" (piecewise-constant corresponds to the special case in which "aj"=0 for all "j").
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V: 2^C\\to \\mathbb{R}"
},
{
"math_id": 1,
"text": "v: C\\to \\mathbb{R}"
}
]
| https://en.wikipedia.org/wiki?curid=69087532 |
69087855 | Lorentz oscillator model | Theoretical model describing the optical response of bound charges
The Lorentz oscillator model describes the optical response of bound charges. The model is named after the Dutch physicist Hendrik Antoon Lorentz. It is a classical, phenomenological model for materials with characteristic resonance frequencies (or other characteristic energy scales) for optical absorption, e.g. ionic and molecular vibrations, interband transitions (semiconductors), phonons, and collective excitations.
Derivation of electron motion.
The model is derived by modeling an electron orbiting a massive, stationary nucleus as a spring-mass-damper system. The electron is modeled to be connected to the nucleus via a hypothetical spring and its motion is damped via a hypothetical damper. The damping force ensures that the oscillator's response is finite at its resonance frequency. For a time-harmonic driving force which originates from the electric field, Newton's second law can be applied to the electron to obtain the motion of the electron and expressions for the dipole moment, polarization, susceptibility, and dielectric function.
Equation of motion for electron oscillator:
formula_0
where
formula_1 is the displacement of the electron from its equilibrium position,
formula_2 is time,
formula_3 is the relaxation (damping) time of the oscillator,
formula_4 is the spring constant of the restoring force,
formula_5 is the mass of the electron,
formula_6 is the resonance frequency of the oscillator,
formula_7 is the elementary charge, and
formula_8 is the driving electric field.
For time-harmonic fields:
formula_9
formula_10
The stationary solution of this equation of motion is:
formula_11
The fact that the above solution is complex means there is a time delay (phase shift) between the driving electric field and the response of the electron's motion.
Dipole moment.
The displacement, formula_12, induces a dipole moment, formula_13, given by
formula_14
formula_15 is the polarizability of a single oscillator, given by
formula_16
Polarization.
The polarization formula_17 is the dipole moment per unit volume. For macroscopic material properties, N is the density of charges (electrons) per unit volume. Since each electron contributes the same dipole moment, the polarization is given by
formula_18
Electric displacement.
The electric displacement formula_19 is related to the polarization density formula_17 by
formula_20
Dielectric function.
The complex dielectric function is given by
formula_21
where formula_22 and formula_23 is the so-called plasma frequency.
In practice, the model is commonly modified to account for multiple absorption mechanisms present in a medium. The modified version is given by
formula_24
where
formula_25
and
formula_26 is the high-frequency (background) value of the dielectric function, accounting for all higher-energy contributions,
formula_27 is the strength of the formula_29-th absorption mechanism, with formula_28 its oscillator strength, and
formula_30 is its broadening (damping) rate.
Separating the real and imaginary components,
formula_31
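The modified dielectric function above is straightforward to evaluate numerically. The following Python/NumPy sketch uses made-up oscillator parameters in arbitrary units, purely to illustrate the formula.

```python
import numpy as np

eps_inf = 2.0
oscillators = [(1.5, 1.0, 0.1), (0.5, 2.5, 0.3)]   # (s_j, omega_0j, Gamma_j), illustrative values

def epsilon(omega):
    """Complex dielectric function as a sum of Lorentzian contributions plus eps_inf."""
    eps = np.full_like(omega, eps_inf, dtype=complex)
    for s, w0, gamma in oscillators:
        eps += s / (w0**2 - omega**2 - 1j * gamma * omega)
    return eps

omega = np.linspace(0.1, 4.0, 5)
eps = epsilon(omega)
print(eps.real)   # epsilon_1: the real, dispersive part
print(eps.imag)   # epsilon_2: the imaginary, absorptive part, largest near the resonances
```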
Complex conductivity.
The complex optical conductivity is, in general, related to the complex dielectric function by
formula_32
Substituting the formula of formula_33 in the equation above we obtain
formula_34
Separating the real and imaginary components,
formula_35
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n \\mathbf F_\\text{net} =\\mathbf F_\\text{damping} + \\mathbf F_\\text{spring} + \\mathbf F_\\text{driving} &= m\\frac{\\mathrm d^2 \\mathbf r}{\\mathrm dt^2} \\\\[1ex]\n \\frac{ -m}{ \\tau} \\frac{\\mathrm d\\mathbf r}{\\mathrm dt} - k \\mathbf r - {e} \\mathbf E(t) &= m\\frac{\\mathrm d^2 \\mathbf r}{\\mathrm dt^2} \\\\[1ex]\n \\frac{\\mathrm d^2 \\mathbf r}{\\mathrm dt^2} + \\frac{ 1}{ \\tau} \\frac{\\mathrm d\\mathbf r}{\\mathrm dt} + \\omega_0^2 \\mathbf r\\; &= \\; \\frac{-e}{m} \\mathbf E(t)\n\\end{align}"
},
{
"math_id": 1,
"text": "\\mathbf r"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": " \\tau"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "\\omega_0 = \\sqrt{k / m }"
},
{
"math_id": 7,
"text": "e"
},
{
"math_id": 8,
"text": "\\mathbf E(t)"
},
{
"math_id": 9,
"text": "\\mathbf E(t) = \\mathbf E_0 e^{- i \\omega t}"
},
{
"math_id": 10,
"text": "\\mathbf r(t) = \\mathbf r_0 e^{- i \\omega t}"
},
{
"math_id": 11,
"text": "\\mathbf r(\\omega) = \\frac{\\frac{-e}{m}} {\\omega_0^2 - \\omega^2 - i \\omega/\\tau} \\mathbf E(\\omega) "
},
{
"math_id": 12,
"text": "\\mathbf r "
},
{
"math_id": 13,
"text": "\\mathbf p"
},
{
"math_id": 14,
"text": "\\mathbf p(\\omega) = -e \\mathbf r(\\omega) = \\hat\\alpha(\\omega) \\mathbf E(\\omega) . "
},
{
"math_id": 15,
"text": "\\hat \\alpha(\\omega)"
},
{
"math_id": 16,
"text": "\\hat \\alpha(\\omega) = \\frac{e^2}{m} \\frac{1}{(\\omega_0^2 - \\omega^2) - i \\omega/\\tau} ."
},
{
"math_id": 17,
"text": "\\mathbf P "
},
{
"math_id": 18,
"text": "\\mathbf P = N \\mathbf p = N \\hat \\alpha(\\omega) \\mathbf E(\\omega) . "
},
{
"math_id": 19,
"text": "\\mathbf D "
},
{
"math_id": 20,
"text": "\\mathbf D = \\hat\\varepsilon \\mathbf E = \\mathbf E + 4\\pi \\mathbf P = (1 + 4\\pi N \\hat \\alpha) \\mathbf E "
},
{
"math_id": 21,
"text": "\\hat \\varepsilon(\\omega) = 1 + \\frac{4\\pi N e^2}{m} \\frac{1}{(\\omega_0^2 - \\omega^2) - i \\omega/\\tau} "
},
{
"math_id": 22,
"text": "4\\pi N e^2/m = \\omega_p^2 "
},
{
"math_id": 23,
"text": " \\omega_p "
},
{
"math_id": 24,
"text": "\\hat \\varepsilon(\\omega) = \\varepsilon_{\\infty} + \\sum_{j} \\chi_{j}^{L}(\\omega; \\omega_{0,j}) "
},
{
"math_id": 25,
"text": "\\chi_{j}^{L}(\\omega; \\omega_{0,j}) = \\frac{s_j}{\\omega_{0,j}^2 - \\omega^2 - i \\Gamma_j \\omega} "
},
{
"math_id": 26,
"text": "\\varepsilon_{\\infty}"
},
{
"math_id": 27,
"text": "s_{j} = \\omega_p^{2} f_{j}"
},
{
"math_id": 28,
"text": "f_{j}"
},
{
"math_id": 29,
"text": "j"
},
{
"math_id": 30,
"text": "\\Gamma_{j} = 1/\\tau"
},
{
"math_id": 31,
"text": "\\hat \\varepsilon(\\omega)\n= \\varepsilon_1(\\omega) + i \\varepsilon_2(\\omega)\n= \\left[ \\varepsilon_{\\infty} + \\sum_{j} \\frac{s_{j} (\\omega_{0,j}^2 - \\omega^2)}{\\left(\\omega_{0,j}^{2} - \\omega^{2}\\right)^{2} + \\left(\\Gamma_{j} \\omega\\right)^2} \\right]\n + i \\left[ \\sum_{j} \\frac{s_{j} (\\Gamma_{j} \\omega)}{\\left(\\omega_{0,j}^{2} - \\omega^{2}\\right)^{2} + \\left(\\Gamma_{j} \\omega\\right)^{2}} \\right]"
},
{
"math_id": 32,
"text": " \\hat \\sigma(\\omega) = \\frac{\\omega}{4\\pi i} \\left(\\hat\\varepsilon(\\omega) - 1\\right) "
},
{
"math_id": 33,
"text": " \\hat\\varepsilon(\\omega)"
},
{
"math_id": 34,
"text": "\\hat \\sigma(\\omega) = \\frac{N e^2}{m} \\frac{\\omega}{\\omega/\\tau + i \\left(\\omega_0^2 - \\omega^2 \\right)} "
},
{
"math_id": 35,
"text": "\\hat \\sigma(\\omega) = \\sigma_1(\\omega) + i \\sigma_2(\\omega) =\n\\frac{N e^2}{m} \\frac{\\frac{\\omega^2}{\\tau}}{\\left(\\omega_0^2 - \\omega^2\\right)^2 + \\omega^2 / \\tau^2 } - i \\frac{N e^2}{m} \\frac{\\left(\\omega_0^2 - \\omega^2\\right) \\omega}{\\left(\\omega_0^2 - \\omega^2\\right)^2 + \\omega^2/\\tau^2}"
}
]
| https://en.wikipedia.org/wiki?curid=69087855 |