id | title | text | formulas | url
---|---|---|---|---|
67191679 | 2 Chronicles 2 | Second Book of Chronicles, chapter 2
2 Chronicles 2 is the second chapter of the Second Book of Chronicles in the Old Testament of the Christian Bible, or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and its final shape was established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingship of Solomon (2 Chronicles 1 to 9). The focus of this chapter is Solomon's preparation for the construction of the temple.
Text.
This chapter was originally written in the Hebrew language and is divided into 18 verses in Christian Bibles, but into 17 verses in the Hebrew Bible with the following verse numbering comparison:
This article generally follows the common numbering in Christian English Bible versions, with notes to the numbering in Hebrew Bible versions.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Solomon gathers building materials for the Temple (2:1–10).
The section records Solomon's request to Huram (or "Hiram" in 1 Kings) the king of Tyre, who was a friend of David (verses 2–9), in which the skilfully structured message actually contains temple worship theology, establishing the temple as the second tabernacle (verse 3) with rituals as stated in the Torah (verses 4–5; cf. Exodus 30:1-8; Leviticus 24:5-9; Numbers 28-29 etc.) as the ground for the dedicatory prayer in 2 Chronicles 6:18. The man sent by Huram should be skilled in carpentry, as well as other crafts and works with various materials (verse 6; for examples, the curtain in 2 Chronicles 3:14), basically an equivalent of Bezalel and his assistant Oholiab, who constructed the tabernacle at Mount Sinai (Exodus 31:18). Solomon worked together with the Phoenicians in parallel with what David did (2 Samuel 5:11; 1 Chronicles 22:4).
"Behold, I am building a temple for the name of the Lord my God, to dedicate it to Him, to burn before Him sweet incense, for the continual showbread, for the burnt offerings morning and evening, on the Sabbaths, on the New Moons, and on the set feasts of the Lord our God. This is an ordinance forever to Israel."
Huram's reply to Solomon (2:11–18).
The salutation 'my lord' in verse 14 indicates Solomon's supremacy over Huram. Joppa (verse 15) was an important Israelite seaport (cf. Ezra 3:7 on trading relations with the Phoenicians of Sidon and Tyre, which mentions Lebanese wood being transported across the sea to Joppa). According to 1 Kings 9:22 (cf. 2 Chronicles 8:9), Israelites were not employed as forced laborers, but foreigners were, the same as in the time of David (1 Chronicles 22:2).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67191679 |
67192104 | Year loss table | Mathematical object used in risk modelling.
A year loss table (YLT) is a table that lists historical or simulated years, with financial losses for each year. YLTs are widely used in catastrophe modeling as a way to record and communicate historical or simulated losses from catastrophes. The use of lists of years with historical or simulated financial losses is discussed in many references on catastrophe modelling and disaster risk management, but it is only more recently that the term "YLT" has been standardized.
Overview.
Year of interest.
In a simulated YLT, each year of simulated loss is considered a possible loss outcome for a single year, defined as the year of interest, which is usually in the future. In insurance industry catastrophe modelling, the year of interest is often this year or next, due to the annual nature of many insurance contracts.
Events.
Many YLTs are event based; that is, they are constructed from historical or simulated catastrophe events, each of which has an associated loss. Each event is allocated to one or more years in the YLT, and there may be multiple events in a year. The events may have an associated frequency model, which specifies the distribution of the number of events of different types per year, and an associated severity distribution, which specifies the distribution of loss for each event.
Use in insurance.
YLTs are widely used in the insurance industry, as they are a flexible way to store samples from a distribution of possible losses. Two properties make them particularly useful:
Examples of YLTs.
YLTs are often stored in either long-form or short-form.
Long-form YLTs.
In a long-form YLT, each row corresponds to a different loss-causing event. For each event, the YLT records the year, the event, the loss, and any other relevant information about the event.
For illustration, a hypothetical long-form YLT over three years, with invented event IDs and losses, might look like this:
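Year | Event ID | Loss
---|---|---
1 | 965 | 100
1 | 7 | 50
2 | 432 | 200

Year 3 contains no loss-causing events, so it has no rows in the long form.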
Short-form YLTs.
In a short-form YLT, each row of the YLT corresponds to a different year. For each event, the YLT records the year, the loss, and any other relevant information about that year.
The same YLT above, condensed to a short form, would look like:
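Continuing the illustrative example:

Year | Annual loss
---|---
1 | 150
2 | 200
3 | 0

Here each year's loss is the sum of that year's event losses above; year 3, which has no events, appears explicitly with a loss of zero.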
Frequency models.
The most commonly used frequency model for the events in a YLT is the Poisson distribution with constant parameters. An alternative frequency model is the mixed Poisson distribution, which allows for the temporal and spatial clustering of events.
Stochastic parameter YLTs.
When YLTs are generated from parametrized mathematical models, they may use the same parameter values in each year (fixed parameter YLTs), or different parameter values in each year (stochastic parameter YLTs).
As an example, the annual frequency of hurricanes hitting the United States might be modelled as a Poisson distribution with an estimated mean of 1.67 hurricanes per year. The estimation uncertainty around the estimate of the mean might be considered to be a gamma distribution. In a fixed parameter YLT, the number of hurricanes every year would be simulated using a Poisson distribution with a mean of 1.67 hurricanes per year, and the distribution of estimation uncertainty would be ignored. In a stochastic parameter YLT, the number of hurricanes in each year would be simulated by first simulating the mean number of hurricanes for that year from the gamma distribution, and then simulating the number of hurricanes itself from a Poisson distribution with the simulated mean.
In the fixed parameter YLT, the mean of the Poisson distribution used to model the frequency of hurricanes would be 1.67 in every year of the table. In the stochastic parameter YLT, the mean itself would vary from year to year, scattered around 1.67 according to the gamma distribution.
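As a minimal simulation sketch of the two approaches (the gamma shape parameter below is invented for illustration; only the 1.67 mean comes from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 100_000
mean_rate = 1.67           # estimated hurricanes per year (from the text)
shape = 25.0               # assumed gamma shape for estimation uncertainty
scale = mean_rate / shape  # chosen so the gamma mean equals 1.67

# Fixed parameter YLT: the same Poisson mean in every simulated year.
fixed_counts = rng.poisson(mean_rate, size=n_years)

# Stochastic parameter YLT: draw each year's mean from the gamma
# distribution, then draw that year's count from a Poisson with that mean.
yearly_means = rng.gamma(shape, scale, size=n_years)
stochastic_counts = rng.poisson(yearly_means)

# Both match on the average frequency, but the stochastic parameter YLT
# shows extra year-to-year variance (a negative binomial overall).
print(fixed_counts.mean(), fixed_counts.var())
print(stochastic_counts.mean(), stochastic_counts.var())
```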
Adjusting YLTs and WYLTs.
It is often of interest to adjust YLTs, perform sensitivity tests, or make adjustments for climate change. Adjustments can be made in several different ways. If a YLT has been created by simulating from a list of events with given frequencies, then one simple way to adjust the YLT is to resimulate but with different frequencies. Resimulation with different frequencies can be made much more accurate by using the incremental simulation approach.
YLTs can be adjusted by applying weights to the years, which converts a YLT to a WYLT. An example would be adjusting weather and climate risk YLTs to account for the effects of climate variability and change.
A general and principled method for applying weights to YLTs is importance sampling, in which the weight on the year formula_0 is given by the ratio of the probability of year formula_0 in the adjusted model to the probability of year formula_0 in the unadjusted model. Importance sampling can be applied to both fixed parameter YLTs and stochastic parameter YLTs.
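As a sketch of the importance-sampling weights, assuming for illustration that a year's probability under each model is determined only by its event count, with Poisson frequency models before and after adjustment (the rates are invented):

```python
from scipy.stats import poisson

def importance_weights(event_counts, lam_unadjusted, lam_adjusted):
    """Weight on each year = probability of that year under the adjusted
    model divided by its probability under the unadjusted model."""
    return (poisson.pmf(event_counts, lam_adjusted)
            / poisson.pmf(event_counts, lam_unadjusted))

# Reweight a YLT simulated at 1.67 events/year to represent 2.0 events/year.
counts = [2, 0, 1, 3, 1]
print(importance_weights(counts, 1.67, 2.0))
```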
WYLTs are less flexible in some ways than YLTs. For instance, two WYLTs with different weights, cannot easily be combined to create a new WYLT. For this reason, it may be useful to convert WYLTs to YLTs. This can be done using the method of repeat-and-delete, in which years with high weights are repeated one or more times and years with low weights are deleted.
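One simple sketch of repeat-and-delete, using stochastic rounding of the normalised weights (this is one possible variant, not a canonical implementation):

```python
import numpy as np

def repeat_and_delete(years, weights, rng=None):
    """Convert a WYLT into an unweighted YLT: after normalising so the mean
    weight is 1, each year is kept floor(w) times plus one more time with
    probability equal to the fractional part of w."""
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, dtype=float)
    w = w * len(w) / w.sum()                 # normalise: mean weight of 1
    copies = np.floor(w).astype(int)         # high-weight years repeated
    copies += rng.random(len(w)) < (w - np.floor(w))  # stochastic rounding
    return [y for y, c in zip(years, copies) for _ in range(c)]

# Low-weight years are usually deleted; a weight near 2 is usually kept twice.
print(repeat_and_delete([1, 2, 3, 4], [0.1, 2.0, 1.5, 0.4]))
```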
Calculating metrics from YLTs and WYLTs.
Standard risk metrics can be calculated straightforwardly from YLTs and WYLTs. Some examples are the average annual loss (the mean of the annual losses over all years, weighted by the year weights in the case of a WYLT) and loss exceedance probabilities, estimated from the empirical distribution of the annual losses.
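A sketch of two such metrics computed from a short-form (W)YLT, with invented annual losses:

```python
import numpy as np

def average_annual_loss(losses, weights=None):
    # Mean annual loss; for a WYLT, a weighted mean using the year weights.
    return np.average(losses, weights=weights)

def exceedance_probability(losses, threshold, weights=None):
    # Estimated probability that a year's loss exceeds the threshold.
    losses = np.asarray(losses, dtype=float)
    w = np.ones_like(losses) if weights is None else np.asarray(weights, float)
    return w[losses > threshold].sum() / w.sum()

annual_losses = [150.0, 200.0, 0.0, 75.0, 500.0]
print(average_annual_loss(annual_losses))            # 185.0
print(exceedance_probability(annual_losses, 100.0))  # 0.6
```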
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i"
}
]
| https://en.wikipedia.org/wiki?curid=67192104 |
6719257 | Introduction to the mathematics of general relativity | The mathematics of general relativity is complicated. In Newton's theories of motion, an object's length and the rate at which time passes remain constant while the object accelerates, meaning that many problems in Newtonian mechanics may be solved by algebra alone. In relativity, however, an object's length and the rate at which time passes both change appreciably as the object's speed approaches the speed of light, meaning that more variables and more complicated mathematics are required to calculate the object's motion. As a result, relativity requires the use of concepts such as vectors, tensors, pseudotensors and curvilinear coordinates.
For an introduction based on the example of particles following circular orbits about a large mass, nonrelativistic and relativistic treatments are given in, respectively, Newtonian motivations for general relativity and Theoretical motivation for general relativity.
Vectors and tensors.
Vectors.
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric or spatial vector, or – as here – simply a vector) is a geometric object that has both a magnitude (or length) and direction. A vector is what is needed to "carry" the point "A" to the point "B"; the Latin word "vector" means "one who carries". The magnitude of the vector is the distance between the two points and the direction refers to the direction of displacement from "A" to "B". Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity.
Tensors.
A tensor extends the concept of a vector to additional directions. A scalar, that is, a simple number without a direction, would be shown on a graph as a point, a zero-dimensional object. A vector, which has a magnitude and direction, would appear on a graph as a line, which is a one-dimensional object. A vector is a first-order tensor, since it holds one direction.
A second-order tensor has two magnitudes and two directions, and would appear on a graph as two lines similar to the hands of a clock. The "order" of a tensor is the number of directions contained within, which is separate from the dimensions of the individual directions. A second-order tensor in two dimensions might be represented mathematically by a 2-by-2 matrix, and in three dimensions by a 3-by-3 matrix, but in both cases the matrix is "square" for a second-order tensor. A third-order tensor has three magnitudes and directions, and would be represented by a cube of numbers, 3-by-3-by-3 for directions in three dimensions, and so on.
Applications.
Vectors are fundamental in the physical sciences. They can be used to represent any quantity that has both a magnitude and direction, such as velocity, the magnitude of which is speed. For example, the velocity "5 meters per second upward" could be represented by the vector (0, 5) (in 2 dimensions with the positive "y" axis as 'up'). Another quantity represented by a vector is force, since it has a magnitude and direction. Vectors also describe many other physical quantities, such as displacement, acceleration, momentum, and angular momentum. Other physical vectors, such as the electric and magnetic field, are represented as a system of vectors at each point of a physical space; that is, a vector field.
Tensors also have extensive applications in physics:
Dimensions.
In general relativity, four-dimensional vectors, or four-vectors, are required. These four dimensions are length, height, width and time. A "point" in this context would be an event, as it has both a location and a time. Similar to vectors, tensors in relativity require four dimensions. One example is the Riemann curvature tensor.
Coordinate transformation.
In physics, as well as mathematics, a vector is often identified with a tuple, or list of numbers, which depend on a coordinate system or reference frame. If the coordinates are transformed, such as by rotation or stretching the coordinate system, the components of the vector also transform. The vector itself does not change, but the reference frame does. This means that the components of the vector have to change to compensate.
The vector is called "covariant" or "contravariant" depending on how the transformation of the vector's components is related to the transformation of coordinates.
In Einstein notation, contravariant vectors and components of tensors are shown with superscripts, e.g. "xi", and covariant vectors and components of tensors with subscripts, e.g. "xi". Indices are "raised" or "lowered" by contraction with the metric tensor, which in flat space with Cartesian coordinates reduces to the identity matrix.
Coordinate transformation is important because relativity states that there is not one reference point (or perspective) in the universe that is more favored than another. On earth, we use dimensions like north, east, and elevation, which are used throughout the entire planet. There is no such system for space. Without a clear reference grid, it becomes more accurate to describe the four dimensions as towards/away, left/right, up/down and past/future. As an example event, assume that Earth is a motionless object, and consider the signing of the Declaration of Independence. To a modern observer on Mount Rainier looking east, the event is ahead, to the right, below, and in the past. However, to an observer in medieval England looking north, the event is behind, to the left, neither up nor down, and in the future. The event itself has not changed: the location of the observer has.
Oblique axes.
An oblique coordinate system is one in which the axes are not necessarily orthogonal to each other; that is, they meet at angles other than right angles. When using coordinate transformations as described above, the new coordinate system will often appear to have oblique axes compared to the old system.
Nontensors.
A nontensor is a tensor-like quantity that behaves like a tensor in the raising and lowering of indices, but that does not transform like a tensor under a coordinate transformation. For example, Christoffel symbols cannot be tensors themselves if the coordinates do not change in a linear way.
In general relativity, one cannot describe the energy and momentum of the gravitational field by an energy–momentum tensor. Instead, one introduces objects that behave as tensors only with respect to restricted coordinate transformations. Strictly speaking, such objects are not tensors at all. A famous example of such a pseudotensor is the Landau–Lifshitz pseudotensor.
Curvilinear coordinates and curved spacetime.
Curvilinear coordinates are coordinates in which the angles between axes can change from point to point. This means that rather than having a grid of straight lines, the grid instead has curvature.
A good example of this is the surface of the Earth. While maps frequently portray north, south, east and west as a simple square grid, that is not in fact the case. Instead, the longitude lines running north and south are curved and meet at the north pole. This is because the Earth is not flat, but instead round.
In general relativity, energy and mass have curvature effects on the four dimensions of the universe (= spacetime). This curvature gives rise to the gravitational force. A common analogy is placing a heavy object on a stretched out rubber sheet, causing the sheet to bend downward. This curves the coordinate system around the object, much like an object in the universe curves the coordinate system it sits in. The mathematics here are conceptually more complex than on Earth, as it results in four dimensions of curved coordinates instead of three as used to describe a curved 2D surface.
Parallel transport.
The interval in a high-dimensional space.
In a Euclidean space, the separation between two points is measured by the distance between the two points. The distance is purely spatial, and is always positive. In spacetime, the separation between two events is measured by the "invariant interval" between the two events, which takes into account not only the spatial separation between the events, but also their separation in time. The interval, "s"2, between two events is defined as:
formula_0(spacetime interval),
where "c" is the speed of light, and Δ"r" and Δ"t" denote differences of the space and time coordinates, respectively, between the events. The choice of signs for "s"2 above follows the space-like convention (−+++). A notation like Δ"r"2 means (Δ"r")2. The reason "s"2 is called the interval and not "s" is that "s"2 can be positive, zero or negative.
Spacetime intervals may be classified into three distinct types, based on whether the temporal separation ("c"2Δ"t"2) or the spatial separation (Δ"r"2) of the two events is greater: time-like, light-like or space-like.
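A small sketch of this classification in code, using the sign convention above:

```python
C = 299_792_458.0  # speed of light in m/s

def interval_squared(dr, dt, c=C):
    """s^2 = (delta r)^2 - c^2 (delta t)^2, space-like (-+++) convention."""
    return dr**2 - (c * dt)**2

def classify(dr, dt, c=C):
    s2 = interval_squared(dr, dt, c)
    if s2 > 0:
        return "space-like"   # spatial separation dominates
    if s2 < 0:
        return "time-like"    # temporal separation dominates
    return "light-like"       # the events lie on a common light cone

# One light-second of distance and one second of time: light-like.
print(classify(C, 1.0))
```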
Certain types of world lines are called geodesics of the spacetime – straight lines in the case of flat Minkowski spacetime and their closest equivalent in the curved spacetime of general relativity. In the case of purely time-like paths, geodesics are (locally) the paths of greatest separation (spacetime interval) as measured along the path between two events, whereas in Euclidean space and Riemannian manifolds, geodesics are paths of shortest distance between two points. The concept of geodesics becomes central in general relativity, since geodesic motion may be thought of as "pure motion" (inertial motion) in spacetime, that is, free from any external influences.
The covariant derivative.
The covariant derivative is a generalization of the directional derivative from vector calculus. As with the directional derivative, the covariant derivative is a rule, which takes as its inputs: (1) a vector, u, (along which the derivative is taken) defined at a point "P", and (2) a vector field, v, defined in a neighborhood of "P". The output is a vector, also at the point "P". The primary difference from the usual directional derivative is that the covariant derivative must, in a certain precise sense, be independent of the manner in which it is expressed in a coordinate system.
Parallel transport.
Given the covariant derivative, one can define the parallel transport of a vector v at a point "P" along a curve "γ" starting at "P". For each point "x" of "γ", the parallel transport of v at "x" will be a function of "x", and can be written as v("x"), where v(0) = v. The function v is determined by the requirement that the covariant derivative of v("x") along "γ" is 0. This is similar to the fact that a constant function is one whose derivative is constantly 0.
Christoffel symbols.
The equation for the covariant derivative can be written in terms of Christoffel symbols. The Christoffel symbols find frequent use in Einstein's theory of general relativity, where spacetime is represented by a curved 4-dimensional Lorentz manifold with a Levi-Civita connection. The Einstein field equations – which determine the geometry of spacetime in the presence of matter – contain the Ricci tensor. Since the Ricci tensor is derived from the Riemann curvature tensor, which can be written in terms of Christoffel symbols, a calculation of the Christoffel symbols is essential. Once the geometry is determined, the paths of particles and light beams are calculated by solving the geodesic equations in which the Christoffel symbols explicitly appear.
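As an illustrative sketch of such a calculation, using sympy and the metric of the unit 2-sphere (a small example, rather than a full 4-dimensional spacetime):

```python
import sympy as sp

# Christoffel symbols of the second kind from a metric, using
# Gamma^r_{mn} = (1/2) g^{rs} (d_m g_{sn} + d_n g_{sm} - d_s g_{mn}).
theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])  # metric of the unit sphere
g_inv = g.inv()
dim = len(coords)

Gamma = [[[sp.simplify(sp.Rational(1, 2) * sum(
    g_inv[r, s] * (sp.diff(g[s, n], coords[m])
                   + sp.diff(g[s, m], coords[n])
                   - sp.diff(g[m, n], coords[s]))
    for s in range(dim)))
    for n in range(dim)] for m in range(dim)] for r in range(dim)]

print(Gamma[0][1][1])  # Gamma^theta_{phi phi}: -sin(theta)*cos(theta)
print(Gamma[1][0][1])  # Gamma^phi_{theta phi}: cos(theta)/sin(theta)
```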
Geodesics.
In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational force, is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic.
In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting around a star is the projection of a geodesic of the curved 4-dimensional spacetime geometry around the star onto 3-dimensional space.
A curve is a geodesic if the tangent vector of the curve at any point is equal to the parallel transport of the tangent vector of the base point.
Curvature tensor.
The Riemann curvature tensor "Rρσμν" tells us, mathematically, how much curvature there is in any given region of space. In flat space this tensor is zero.
Contracting the tensor produces two more mathematical objects: the Ricci tensor, obtained by contracting a pair of its indices, and the scalar curvature, obtained by contracting the Ricci tensor in turn with the metric.
The Riemann curvature tensor can be expressed in terms of the covariant derivative.
The Einstein tensor G is a rank-2 tensor defined over pseudo-Riemannian manifolds. In index-free notation it is defined as
formula_1
where R is the Ricci tensor, g is the metric tensor and "R" is the scalar curvature. It is used in the Einstein field equations.
Stress–energy tensor.
The stress–energy tensor (sometimes stress–energy–momentum tensor or energy–momentum tensor) is a tensor quantity in physics that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. The stress–energy tensor is the source of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity. Because this tensor has 2 indices (see next section) the Riemann curvature tensor has to be contracted into the Ricci tensor, also with 2 indices.
Einstein equation.
The Einstein field equations (EFE) or Einstein's equations are a set of 10 equations in Albert Einstein's general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy. First published by Einstein in 1915 as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).
The Einstein field equations can be written as
formula_2
where "Gμν" is the Einstein tensor and "Tμν" is the stress–energy tensor.
This implies that the curvature of space (represented by the Einstein tensor) is directly connected to the presence of matter and energy (represented by the stress–energy tensor).
Schwarzschild solution and black holes.
In Einstein's theory of general relativity, the Schwarzschild metric (also Schwarzschild vacuum or Schwarzschild solution), is a solution to the Einstein field equations which describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, the angular momentum of the mass, and the universal cosmological constant are all zero. The solution is a useful approximation for describing slowly rotating astronomical objects such as many stars and planets, including Earth and the Sun. The solution is named after Karl Schwarzschild, who first published the solution in 1916, just before his death.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric, vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has no charge or angular momentum. A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "s^2 = \\Delta r^2 - c^2\\Delta t^2 \\,"
},
{
"math_id": 1,
"text": "\\mathbf{G}=\\mathbf{R}-\\tfrac12\\mathbf{g}R,"
},
{
"math_id": 2,
"text": "G_{\\mu \\nu}= {8 \\pi G \\over c^4} T_{\\mu \\nu} ,"
}
]
| https://en.wikipedia.org/wiki?curid=6719257 |
67193 | Testicle | Internal organ in the male reproductive system
A testicle or testis (pl.: testes) is the male gonad in all bilaterians, including humans. It is homologous to the female ovary. The functions of the testicles are to produce both sperm and androgens, primarily testosterone. Testosterone release is controlled by the anterior pituitary luteinizing hormone, whereas sperm production is controlled both by the anterior pituitary follicle-stimulating hormone and gonadal testosterone.
Structure.
Appearance.
Males have two testicles of similar size contained within the scrotum, which is an extension of the abdominal wall. Scrotal asymmetry, in which one testicle extends farther down into the scrotum than the other, is common, because of differences in the anatomy of the vasculature. For about 85% of men, the left testis hangs lower than the right one.
Measurement and volume.
The volume of the testicle can be estimated by palpating it and comparing it to ellipsoids (an orchidometer) of known sizes. Another method is to use calipers, a ruler, or an ultrasound image to obtain the three measurements of the x, y, and z axes (length, depth and width). These measurements can then be used to calculate the volume, using the formula for the volume of an ellipsoid:
formula_0
formula_1
However, the most accurate calculation of actual testicular volume is gained from the formula:
formula_2
An average adult testicle measures up to . The Tanner scale, which is used to assess the maturity of the male genitalia, assigns a maturity stage to the calculated volume ranging from stage I, a volume of less than 1.5 cm3; to stage V, a volume greater than 20 cm3. Normal volume is 15 to 25 cm3; the average is 18 cm3 per testis (range 12–30 cm3).
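A short sketch implementing the two volume formulas quoted above; the measurements are hypothetical and in centimetres:

```python
import math

def ellipsoid_volume(length, width, depth):
    """Volume from the ellipsoid formula (4/3)*pi*(L/2)*(W/2)*(D/2),
    equivalent to length x width x depth x 0.52."""
    return (4.0 / 3.0) * math.pi * (length / 2) * (width / 2) * (depth / 2)

def empirical_volume(length, width, depth):
    """The empirically corrected formula length x width x depth x 0.71."""
    return length * width * depth * 0.71

# Hypothetical ultrasound measurements in centimetres:
L, W, D = 4.5, 3.0, 2.5
print(round(ellipsoid_volume(L, W, D), 1))  # 17.7 cm3
print(round(empirical_volume(L, W, D), 1))  # 24.0 cm3
```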
The number of spermatozoa an adult human male produces is directly proportional to testicular volume, as larger testicles contain more seminiferous tubules and Sertoli cells as a result. As such, men with larger testicles produce on average more sperm cells in each ejaculate, as testicular volume is positively correlated with semen profiles.
Internal structure.
Duct system.
The testes are covered by a tough fibrous shell called the tunica albuginea. Under the tunica albuginea, the testes contain very fine-coiled tubes called seminiferous tubules. The tubules are lined with a layer of cells (germ cells) that develop from puberty through old age into sperm cells (also known as spermatozoa or male gametes). The developing sperm travel through the seminiferous tubules to the rete testis located in the mediastinum testis, to the efferent ducts, and then to the epididymis where newly created sperm cells mature (spermatogenesis). The sperm move into the vas deferens, and are eventually expelled through the urethra and out of the urethral orifice through muscular contractions.
Primary cell types.
Within the seminiferous tubules, the germ cells develop into spermatogonia, spermatocytes, spermatids and spermatozoa through the process of spermatogenesis. The gametes contain DNA for fertilization of an ovum. Sertoli cells, the true epithelium of the seminiferous tubules, are critical for the support of germ cell development into spermatozoa; they also secrete inhibin. Peritubular myoid cells surround the seminiferous tubules.
Between the tubules lie the interstitial cells, chiefly Leydig cells, which produce and secrete testosterone and other androgens important for puberty (including secondary sexual characteristics like facial hair), sexual behavior, and libido. Sertoli cells support spermatogenesis, and testosterone controls testicular volume.
Immature Leydig cells and interstitial macrophages and epithelial cells are also present.
Blood supply and lymphatic drainage.
The testis has three sources of arterial blood supply: the testicular artery, the cremasteric artery, and the artery to the ductus deferens. Blood supply and lymphatic drainage of the testes and scrotum are distinct: lymph from the testes drains along the testicular vessels to the para-aortic (lumbar) lymph nodes, whereas lymph from the scrotum drains to the superficial inguinal lymph nodes.
Layers.
Many anatomical features of the adult testis reflect its developmental origin in the abdomen. The layers of tissue enclosing each testicle are derived from the layers of the anterior abdominal wall. The cremasteric muscle arises from the internal oblique muscle.
The blood–testis barrier.
Large molecules cannot pass from the blood into the lumen of a seminiferous tubule due to the presence of tight junctions between adjacent Sertoli cells. The spermatogonia occupy the basal compartment (deep to the level of the tight junctions) and the more mature forms, such as primary and secondary spermatocytes and spermatids, occupy the adluminal compartment.
The function of the blood–testis barrier may be to prevent an auto-immune reaction. Mature sperm (and their antigens) arise long after immune tolerance is established in infancy. Since sperm are antigenically different from self-tissue, a male animal can react immunologically to his own sperm, and can make antibodies against them.
Injection of sperm antigens causes inflammation of the testis (auto-immune orchitis) and reduced fertility. The blood–testis barrier may reduce the likelihood that sperm proteins will induce an immune response.
Temperature regulation and responses.
Carl Richard Moore proposed in 1926 that the testicles are external because spermatogenesis is enhanced at temperatures slightly below core body temperature; spermatogenesis is less efficient at temperatures both below and above 33 °C. Because the testes are located outside the body, the smooth muscle of the scrotum can move them closer to or farther from the body. The temperature of the testes is maintained at 34.4 °C, a little below body temperature, as temperatures above 36.7 °C impede spermatogenesis. There are a number of mechanisms to maintain the testes at the optimum temperature.
The cremasteric muscle covers the testicles and the spermatic cord. When this muscle contracts, the cord shortens and the testicles move closer up toward the body, which provides slightly more warmth to maintain optimal testicular temperature. When cooling is required, the cremasteric muscle relaxes and the testicles lower away from the warm body and are able to cool. Contraction also occurs in response to physical stress, such as blunt trauma; the testicles withdraw and the scrotum shrinks very close to the body in an effort to protect them.
The cremasteric reflex will reflexively raise the testicles. The testicles can also be lifted voluntarily using the pubococcygeus muscle, which partially activates related muscles.
Gene and protein expression.
The human genome includes approximately 20,000 protein-coding genes: 80% of these genes are expressed in adult testes. The testes have the highest fraction of tissue-type-specific genes compared to other organs and tissues. About 1,000 of them are highly specific for the testes, and about 2,200 show an elevated pattern of expression. A majority of these genes encode proteins that are expressed in the seminiferous tubules and have functions related to spermatogenesis. Sperm cells express proteins that result in the development of flagella; these same proteins are expressed in the female in cells lining the fallopian tube, where they cause the development of cilia. Sperm cell flagella and fallopian tube cilia are homologous structures. The testis-specific proteins that show the highest level of expression are protamines.
Development.
There are two phases in which the testes grow substantially. These are the embryonic and pubertal phases.
During mammalian development, the gonads are at first capable of becoming either ovaries or testes. In humans, starting at about week 4, the gonadal rudiments are present within the intermediate mesoderm adjacent to the developing kidneys. At about week 6, sex cords develop within the forming testes. These are made up of early Sertoli cells that surround and nurture the germ cells that migrate into the gonads shortly before sex determination begins. In males, the sex-specific gene SRY that is found on the Y chromosome initiates sex determination by downstream regulation of sex-determining factors (such as GATA4, SOX9 and AMH), which lead to development of the male phenotype, including directing development of the early bipotential gonad toward the male path of development.
Testes follow the path of descent from high in the posterior fetal abdomen to the inguinal ring, and beyond through the inguinal canal and into the scrotum. In most cases (97% full-term, 70% preterm), both testes have descended by birth. In most other cases, only one testis fails to descend. This is called cryptorchidism. Cryptorchidism usually resolves itself within the first six months of life; however, if the testes do not descend far enough into the scrotum, surgical anchoring in the scrotum is required because of the risks of infertility and testicular cancer.
The testes grow in response to the start of spermatogenesis. Size depends on lytic function, sperm production (amount of spermatogenesis present in testis), interstitial fluid, and Sertoli cell fluid production. The testicles are fully descended before the male reaches puberty.
Clinical significance.
Diseases and conditions.
Testicular prostheses are available to mimic the appearance and feel of one or both testicles when absent, whether from injury or as treatment in association with gender dysphoria. There have also been some instances of their implantation in dogs.
Scientists are working on developing lab-grown testicles that might help infertile men in the future.
Effects of exogenous hormones.
To some extent, it is possible to change testicular size. Short of direct injury or subjecting them to adverse conditions, e.g., higher temperature than they are normally accustomed to, they can be shrunk by competing against their intrinsic hormonal function through the use of externally administered steroidal hormones. Steroids taken for muscle enhancement (especially anabolic steroids) often have the undesired side effect of testicular shrinkage.
Stimulation of testicular functions via gonadotropic-like hormones may enlarge their size. Testes may shrink or atrophy during hormone replacement therapy or through chemical castration.
In all cases, the loss in testes volume corresponds with a loss of spermatogenesis.
Society and culture.
The testicles of calves, lambs, roosters, turkeys, and other animals are eaten in many parts of the world, often under euphemistic culinary names. Testicles are a by-product of the castration of young animals raised for meat, so they might have been a late-spring seasonal specialty. In modern times, they are generally frozen and available year-round.
In the Middle Ages, men who wanted a boy sometimes had their left testicle removed. This was because people believed that the right testicle made "boy" sperm and the left made "girl" sperm. As early as 330 BC, Aristotle prescribed the ligation (tying off) of the left testicle in men wishing to have boys.
Etymology and slang.
One theory about the etymology of the word "testis" is based on Roman law. The original Latin word "testis", "witness", was used in the firmly established legal principle "testis unus, testis nullus" (one witness [equals] no witness), meaning that testimony by any one person in court was to be disregarded unless corroborated by the testimony of at least another. This led to the common practice of producing two witnesses, bribed to testify the same way in cases of lawsuits with ulterior motives. Since such witnesses always came in pairs, the meaning was accordingly extended, often in the diminutive ("testiculus, testiculi").
Another theory says that "testis" is influenced by a loan translation from Greek "parastatēs" "defender (in law), supporter", that is, "two glands side by side".
There are multiple slang terms for the testes. They may be referred to as "balls". Frequently, "nuts" (sometimes intentionally misspelled as "nutz") are also a slang term for the testes due to the geometric resemblance. One variant of the term includes "Deez Nuts", which was used for a satirical political candidate in 2016.
In Spanish, the term "huevos" is used, which is Spanish for eggs.
Other animals.
External appearance.
In seasonal breeders, the weight of the testes often increases during the breeding season. The testicles of a dromedary camel are long, deep and in width. The right testicle is often smaller than the left.
In sharks, the testicle on the right side is usually larger. In many bird and mammal species, the left may be larger. Fish usually have two testes of a similar size. The primitive jawless fish have only a single testis, located in the midline of the body, although this forms from the fusion of paired structures in the embryo.
Location.
Internal.
The basal condition for mammals is to have internal testes. The testes of monotremes, xenarthrans, and afrotherians remain within the abdomen (testicondy). There are also some marsupials with external testes and boreoeutherian mammals with internal testes, such as the rhinoceros. Cetaceans such as whales and dolphins also have internal testes. As external testes would increase drag in the water, they have internal testes, which are kept cool by special circulatory systems that cool the arterial blood going to the testes by placing the arteries near veins bringing cooled venous blood from the skin. In odobenids and phocids, the location of the testes is para-abdominal, though otariids have scrotal testes.
External.
Boreoeutherian land mammals, the large group of mammals that includes humans, have externalized testes. Their testes function best at temperatures lower than their core body temperature. Their testes are located outside of the body and are suspended by the spermatic cord within the scrotum.
There are several hypotheses as to why most boreotherian mammals have external testes that operate best at a temperature slightly below core body temperature. One view is that the external testes evolved for other reasons, and that spermatogenesis is now stuck with enzymes that evolved to work best at the resulting colder temperature. Another view is that the lower temperature of the testes simply is more efficient for sperm production.
The classic hypothesis is that the cooler temperature of the testes allows for more efficient, fertile spermatogenesis: on this view, no enzymes operating at normal core body temperature could be as efficient as the ones that have evolved to work at the cooler scrotal temperature.
Early mammals had lower body temperatures, and thus their testes worked efficiently within the body. However, boreotherian mammals may have higher body temperatures than other mammals and so had to develop external testes to keep them cool. One argument is that mammals with internal testes, such as the monotremes, armadillos, sloths, elephants, and rhinoceroses, have lower core body temperatures than mammals with external testes.
Researchers have wondered why birds, despite having very high core body temperatures, have internal testes and did not evolve external testes. It was once theorized that birds used their air sacs to cool the testes internally, but later studies revealed that birds' testes are able to function at core body temperature.
Some mammals with seasonal breeding cycles keep their testes internal until the breeding season. After that, their testes descend and increase in size and become external.
The ancestor of the boreoeutherian mammals may have been a small mammal that required very large testes for sperm competition and thus had to place its testes outside the body. This might have led to enzymes involved in spermatogenesis, spermatogenic DNA polymerase beta and recombinase activities evolving a unique temperature optimum that is slightly less than core body temperature. When the boreoeutherian mammals diversified into forms that were larger or did not require intense sperm competition, they still produced enzymes that operated best at cooler temperatures and had to keep their testes outside the body. This position is made less parsimonious because the kangaroo, a non-boreoeutherian mammal, has external testicles. Separately from boreotherian mammals, the ancestors of kangaroos might have also been subject to heavy sperm competition and thus developed external testes; however, kangaroo external testes are suggestive of a possible adaptive function for external testes in large animals.
One argument for the evolution of external testes is that it protects the testes from abdominal cavity pressure changes caused by jumping and galloping.
Mild, transient scrotal heat stress causes DNA damage, reduced fertility and abnormal embryonic development in mice. DNA strand breaks were found in spermatocytes recovered from testicles subjected to 40 °C or 42 °C for 30 minutes. These findings suggest that the external location of the testicles provides the adaptive benefit of protecting spermatogenic cells from heat-induced DNA damage that could otherwise lead to infertility and germline mutation.
Size.
The relative size of the testes is often influenced by mating systems. Testicular size as a proportion of body weight varies widely. Among mammals, there is a tendency for testicular size to correspond with multiple mates (e.g., harems, polygamy). Production of sperm and spermatic fluid is also larger in polygamous animals, possibly reflecting spermatogenic competition for survival. The testes of the right whale are likely to be the largest of any animal, each weighing around 500 kg (1,100 lb).
Among the Hominidae, gorillas have little female promiscuity and sperm competition and the testes are small compared to body weight (0.03%). Chimpanzees have high promiscuity and large testes compared to body weight (0.3%). Human testicular size falls between these extremes (0.08%).
Testis weight also varies in seasonal breeders like red foxes, golden jackals, and coyotes.
Internal structure.
Amphibians and most fish do not possess seminiferous tubules. Instead, the sperm are produced in spherical structures called "sperm ampullae". These are seasonal structures, releasing their contents during the breeding season, and then being reabsorbed by the body. Before the next breeding season, new sperm ampullae begin to form and ripen. The ampullae are otherwise essentially identical to the seminiferous tubules in higher vertebrates, including the same range of cell types.
See also.
<templatestyles src="Div col/styles.css"/>
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Volume = \\frac{4}{3} \\cdot \\pi \\cdot \\frac{length}{2} \\cdot \\frac{width}{2} \\cdot \\frac{depth}{2}"
},
{
"math_id": 1,
"text": "\\approx length \\cdot width \\cdot depth \\cdot 0.52"
},
{
"math_id": 2,
"text": "\\approx length \\cdot width \\cdot depth \\cdot 0.71"
}
]
| https://en.wikipedia.org/wiki?curid=67193 |
671956 | Dirac string | Unobservable spacetime curves needed to describe Dirac monopoles
In physics, a Dirac string is a one-dimensional curve in space, conceived of by the physicist Paul Dirac, stretching between two hypothetical Dirac monopoles with opposite magnetic charges, or from one magnetic monopole out to infinity. The gauge potential cannot be defined on the Dirac string, but it is defined everywhere else. The Dirac string acts as the solenoid in the Aharonov–Bohm effect, and the requirement that the position of the Dirac string should not be observable implies the Dirac quantization rule: the product of a magnetic charge and an electric charge must always be an integer multiple of formula_0. Also, a change of position of a Dirac string corresponds to a gauge transformation. This shows that Dirac strings are not gauge invariant, which is consistent with the fact that they are not observable.
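The quantization rule can be motivated by a short Aharonov–Bohm argument. Assuming a unit convention in which the string carries a total magnetic flux Φ equal to the monopole charge g, a particle of electric charge q transported around the string acquires the phase factor exp(iqΦ/ħ); requiring this factor to be trivial, so that the string is unobservable, gives:

```latex
% Phase acquired by a charge q encircling the string, which carries
% total flux \Phi; the convention \Phi = g is assumed here.
e^{\,iq\Phi/\hbar} = 1, \qquad \Phi = g
\quad\Longrightarrow\quad
qg = 2\pi\hbar\, n, \qquad n \in \mathbb{Z}.
```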
The Dirac string is the only way to incorporate magnetic monopoles into Maxwell's equations, since the magnetic flux running along the interior of the string maintains their validity. If Maxwell's equations are modified to allow magnetic charges at the fundamental level, then the magnetic monopoles are no longer Dirac monopoles, and do not require attached Dirac strings.
Details.
The quantization forced by the Dirac string can be understood in terms of the cohomology of the fibre bundle representing the gauge fields over the base manifold of space-time. The magnetic charges of a gauge field theory can be understood to be the group generators of the cohomology group formula_1 for the fiber bundle "M". The cohomology arises from the idea of classifying all possible gauge field strengths formula_2, which are manifestly exact forms, modulo all possible gauge transformations, given that the field strength "F" must be a closed form: formula_3. Here, "A" is the vector potential and "d" represents the gauge-covariant derivative, and "F" the field strength or curvature form on the fiber bundle. Informally, one might say that the Dirac string carries away the "excess curvature" that would otherwise prevent "F" from being a closed form, as one has that formula_3 everywhere except at the location of the monopole. | [
{
"math_id": 0,
"text": "2\\pi\\hbar"
},
{
"math_id": 1,
"text": "H^2(M)"
},
{
"math_id": 2,
"text": "F=dA"
},
{
"math_id": 3,
"text": "dF=0"
}
]
| https://en.wikipedia.org/wiki?curid=671956 |
67195961 | Aristotle's axiom | Axiom in the foundations of geometry
Aristotle's axiom is an axiom in the foundations of geometry, proposed by Aristotle in "On the Heavens" that states:
If formula_0 is an acute angle and AB is any segment, then there exists a point P on the ray formula_1 and a point Q on the ray formula_2, such that PQ is perpendicular to OX and PQ > AB.
Aristotle's axiom is a consequence of the Archimedean property, and the conjunction of Aristotle's axiom and the Lotschnittaxiom, which states that "Perpendiculars raised on each side of a right angle intersect", is equivalent to the Parallel Postulate.
Without the parallel postulate, Aristotle's axiom is equivalent to each of the following three incidence-geometric statements:
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\widehat{\\rm XOY}"
},
{
"math_id": 1,
"text": "\\overrightarrow{OY}"
},
{
"math_id": 2,
"text": "\\overrightarrow{OX}"
}
]
| https://en.wikipedia.org/wiki?curid=67195961 |
67197507 | Gouy–Stodola theorem | In thermodynamics and thermal physics, the Gouy-Stodola theorem is an important theorem for the quantification of irreversibilities in an open system, and aids in the exergy analysis of thermodynamic processes. It asserts that the rate at which work is lost during a process, or at which exergy is destroyed, is proportional to the rate at which entropy is generated, and that the proportionality coefficient is the temperature of the ambient heat reservoir. In the literature, the theorem often appears in a slightly modified form, changing the proportionality coefficient.
The theorem is named jointly after the French physicist Georges Gouy and Slovak physicist Aurel Stodola, who demonstrated the theorem in 1889 and 1905 respectively. Gouy used it while working on exergy and utilisable energy, and Stodola while working on steam and gas engines.
Overview.
The Gouy-Stodola theorem is often applied to an open thermodynamic system, which can exchange heat with some thermal reservoirs. It holds both for systems which cannot exchange mass, and for systems that mass can enter and leave.
Observe such a system as it goes through some process. It is in contact with multiple reservoirs, one of which, at temperature formula_0, is the environment reservoir. During the process, the system produces work and generates entropy. Under these conditions, the theorem has two general forms.
Work form.
The reversible work is the maximal useful work which can be obtained, formula_1, and can only be fully utilized in an ideal reversible process. An irreversible process produces some work formula_2, which is less than formula_3. The lost work is then formula_4; in other words, formula_5 is the work which was lost or not exploited during the process due to irreversibilities. In terms of lost work, the theorem generally states
formula_6
where formula_7 is the rate at which work is lost, and formula_8 is the rate at which entropy is generated. Time derivatives are denoted by dots. The theorem, as stated above, holds only for the entire thermodynamic universe - the system along with its surroundings, together:
formula_9
where the index "tot" denotes the total quantities produced within or by the entire universe.
Note that formula_7 is a relative quantity, in that it is measured in relation to a specific thermal reservoir. In the above equations, formula_7 is defined in reference to the environment reservoir, at formula_0. When comparing the actual process to an ideal, reversible process between the same endpoints (in order to evaluate formula_3, so as to find the value of formula_5), only the heat interaction with the reference reservoir formula_0 is allowed to vary. The heat interactions between the system and other reservoirs are kept the same. So, if a different reference reservoir formula_10 is chosen, the theorem would read formula_11, where this time formula_7 is in relation to formula_10, and in the corresponding reversible process, only the heat interaction with formula_10 is different.
By integrating over the lifetime of the process, the theorem can also be expressed in terms of final quantities, rather than rates: formula_12.
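As a minimal numeric sketch of the work form, consider heat leaking irreversibly from a hot reservoir to a colder one; the temperatures and heat amount below are illustrative:

```python
T0 = 300.0      # environment (reference) temperature, K
T_hot = 600.0   # hot reservoir, K
T_cold = 400.0  # cold reservoir, K
Q = 1000.0      # heat passed irreversibly from hot to cold, J

# Entropy generated by the irreversible heat transfer:
S_gen = Q / T_cold - Q / T_hot   # about 0.833 J/K

# Gouy-Stodola: the work lost, relative to a reversible process
# exchanging heat with the environment at T0:
W_lost = T0 * S_gen              # about 250 J
print(S_gen, W_lost)
```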
Adiabatic case.
The theorem also holds for adiabatic processes, that is, for closed systems which are not in thermal contact with any heat reservoirs.
Similarly to the non-adiabatic case, the lost work is measured relative to some reference reservoir formula_0. Even though the process itself is adiabatic, the corresponding reversible process may not be, and might require heat exchange with the reference reservoir. Thus, this can be thought of as a special case of the above statement of the theorem - an adiabatic process is one for which the heat interactions with all reservoirs are zero, and in the reversible process, only the heat interaction with the reference thermal reservoir may be different.
The adiabatic case of the theorem holds also for the other formulation of the theorem, presented below.
Exergy form.
The exergy of the system is the maximal amount of useful work that the system can generate, during a process which brings it to equilibrium with its environment, or the amount of energy available. During an irreversible process, such as heat exchanges with reservoirs, exergy is destroyed. Generally, the theorem states that
formula_13
where formula_14 is the rate at which exergy is destroyed, and formula_8 is the rate at which entropy is generated. As above, time derivatives are denoted by dots.
Unlike the lost work formulation, this version of the theorem holds for both the system (the control volume) and for its surroundings (the environment and the thermal reservoirs) separately:
formula_15
and
formula_16
where the index "sys" denotes quantities produced within or by the system itself, and "surr" within or by the surroundings. Therefore, summing these two forms, the theorem also holds for the thermodynamic universe as a whole:
formula_17
where the index "tot" denotes the total quantities of the entire universe.
Thus, the exergy formulation of the theorem is less limited, as it can be applied on different regions separately. Nevertheless, the work form is used more often.
The proof of the theorem, in both forms, uses the first law of thermodynamics, writing out the terms formula_7, formula_14, and formula_8 in the relevant regions, and comparing them.
Modified coefficient and effective temperature.
In many cases, it is preferable to use a slightly modified version of the Gouy-Stodola theorem in work form, where formula_0 is replaced by some effective temperature. When this is done, it often enlarges the scope of the theorem, and adapts it to be applicable to more systems or situations. For example, the corrections elaborated below are only necessary when the system exchanges heat with more than one reservoir - if it exchanges heat only at the environmental temperature formula_0, the simple form above holds true. Additionally, modifications may change the reversible process to which the real process is compared in calculating formula_7.
The modified theorem then reads
formula_18
where formula_19 is the effective temperature.
For a flow process, let formula_20 denote the specific entropy (entropy per unit mass) at the inlet, where mass flows in, and formula_21 the specific entropy at the outlet, where mass flows out. Similarly, denote the specific enthalpies by formula_22 and formula_23. The inlet and outlet, in this case, function as initial and final states a process: mass enters the system at an initial state (the inlet, indexed "1"), undergoes some process, and then leaves at a final state (the outlet, indexed "2").
This process is then compared to a reversible process, with the same initial state, but with a (possibly) different final state. The theoretical specific entropy and enthalpy after this ideal, isentropic process are given by formula_24 and formula_25, respectively. When the actual process is compared to this theoretical reversible process and formula_7 is evaluated, the proper effective temperature is given by
formula_26
In general, formula_19 lies somewhere in between the final temperature in the actual process formula_27 and the final temperature in the theoretical reversible process formula_28.
The equation above can sometimes be simplified. If both the pressure and the specific heat capacity remain constant, then the changes in enthalpy and entropy can be written in terms of the temperatures, and
formula_29
However, it is important to note that this version of the theorem does not relate the exact values which the original theorem does. Specifically, in comparing the actual process to a reversible one, the modified version allows the final state to be different between the two. This is in contrast to the original version, wherein the reversible process is constructed to match, so that the final states are the same.
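If, under those assumptions, the effective temperature is the ratio of the enthalpy difference to the entropy difference (as in the flow-process form above), the heat capacity cancels and a logarithmic mean of the two outlet temperatures remains; a sketch, assuming that simplified form:

```python
import math

def effective_temperature(T2, T2_rev):
    """Logarithmic mean of the actual and reversible outlet temperatures,
    assuming constant pressure and heat capacity, so that
    dh = cp*(T2 - T2_rev) and ds = cp*ln(T2 / T2_rev)."""
    if math.isclose(T2, T2_rev):
        return T2
    return (T2 - T2_rev) / math.log(T2 / T2_rev)

print(effective_temperature(450.0, 400.0))  # about 424.6 K, between the two
```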
Applications.
In general, the Gouy-Stodola theorem is used to quantify irreversibilities in a system and to perform exergy analysis. That is, it allows one to take a thermodynamic system and better understand how inefficient it is (energy-wise), how much work is lost, how much room there is for improvement and where. The second law of thermodynamics states, in essence, that the entropy of a system only increases. Over time, thermodynamic systems tend to gain entropy and lose energy (in approaching equilibrium): thus, the entropy is "somehow" related to how much exergy or potential for useful work a system has. The Gouy-Stodola theorem provides a concrete link. For the most part, this is how the theorem is used - to find and quantify inefficiencies in a system.
Flow processes.
A flow process is a type of thermodynamic process, where matter flows in and out of an open system called the control volume. Such a process may be steady, meaning that the matter and energy flowing into and out of the system are constant through time. It can also be unsteady, or transient, meaning that the flows may change and differ at different times.
Many proofs of the theorem demonstrate it specifically for flow systems. Thus, the theorem is particularly useful in performing exergy analysis on such systems.
Vapor compression and absorption.
The Gouy-Stodola theorem is often applied to refrigeration cycles. These are thermodynamic cycles or mechanical systems where external work can be used to move heat from low temperature sources to high temperature sinks, or vice versa. Specifically, the theorem is useful in analyzing vapor compression and vapor absorption refrigeration cycles.
The theorem can help identify which components of a system have major irreversibilities, and how much exergy they destroy. It can be used to find at which temperatures the performance is optimal, or what size system should be constructed. Overall, that is, the Gouy-Stodola theorem is a tool to find and quantify inefficiencies in a system, and can point to how to minimize them - this is the goal of exergy analysis. When the theorem is used for these purposes, it is usually applied in its modified form.
In ecology.
Macroscopically, the theorem may be useful environmentally, in ecophysics. An ecosystem is a complex system, where many factors and components interact, some biotic and some abiotic. The Gouy-Stodola theorem can be used to find how much entropy is generated by each part of the system, or how much work is lost. Where there is human interference in an ecosystem, whether the ecosystem continues to exist or is lost may depend on how many irreversibilities it can support. The amount of entropy which is generated or the amount of work the system can perform may vary. Hence, two different states (for example, a healthy forest versus one which has undergone significant deforestation) of the same ecosystem may be compared in terms of entropy generation, and this may be used to evaluate the sustainability of the ecosystem under human interference.
In biology.
The theorem is also useful on a more microscopic scale, in biology. Living systems, such as cells, can be analyzed thermodynamically. They are rather complex systems, where many energy transformations occur, and they often waste heat. Hence, the Gouy-Stodola theorem may be useful, in certain situations, to perform exergy analysis on such systems. In particular, it may help to highlight differences between healthy and diseased cells.
Generally, the theorem may find applications in fields of biomedicine, or where biology and physics cross over, such as biochemical engineering thermodynamics.
As a variational principle.
A variational principle in physics, such as the principle of least action or Fermat's principle in optics, allows one to describe the system in a global manner and to solve it using the calculus of variations. In thermodynamics, such a principle would allow a Lagrangian formulation. The Gouy-Stodola theorem can be used as the basis for such a variational principle, in thermodynamics. It has been proven to satisfy the necessary conditions.
This is fundamentally different from most of the theorem's other uses - here, it isn't being applied in order to locate components with irreversibilities or loss of exergy, but rather helps give some more general information about the system.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_0"
},
{
"math_id": 1,
"text": "W_{rev}=W_{max}"
},
{
"math_id": 2,
"text": "W_{actual}"
},
{
"math_id": 3,
"text": "W_{rev}"
},
{
"math_id": 4,
"text": "W_{lost}=W_{rev}-W_{actual}"
},
{
"math_id": 5,
"text": "W_{lost}"
},
{
"math_id": 6,
"text": "\\dot{W}_{lost}=T_0\\dot{S_{g}}"
},
{
"math_id": 7,
"text": "\\dot{W}_{lost}"
},
{
"math_id": 8,
"text": "\\dot{S_{g}}"
},
{
"math_id": 9,
"text": "\\dot{W}_{lost,tot}=T_0\\dot{S}_{g,tot}"
},
{
"math_id": 10,
"text": "T_{ref}"
},
{
"math_id": 11,
"text": "\\dot{W}_{lost,tot}=T_{ref}\\dot{S}_{g,tot}"
},
{
"math_id": 12,
"text": "{W}_{lost,tot}=T_0{S}_{g,tot}"
},
{
"math_id": 13,
"text": "\\dot{\\psi}_{d}=T_0\\dot{S_{g}}"
},
{
"math_id": 14,
"text": "\\dot{\\psi}_{d}"
},
{
"math_id": 15,
"text": "\\dot{\\psi}_{d,sys}=T_0\\dot{S}_{g,sys}"
},
{
"math_id": 16,
"text": "\\dot{\\psi}_{d,surr}=T_0\\dot{S}_{g,surr}"
},
{
"math_id": 17,
"text": "\\dot{\\psi}_{d,tot}=\\dot{\\psi}_{d,sys}+\\dot{\\psi}_{d,surr}=T_0\\dot{S}_{g,sys}+T_0\\dot{S}_{g,surr}=T_0\\dot{S}_{g,tot}"
},
{
"math_id": 18,
"text": "\\dot{W}_{lost}=T_{eff}\\dot{S_{g}}"
},
{
"math_id": 19,
"text": "T_{eff}"
},
{
"math_id": 20,
"text": "s_{1}"
},
{
"math_id": 21,
"text": "s_{2}"
},
{
"math_id": 22,
"text": "h_{1}"
},
{
"math_id": 23,
"text": "h_{2}"
},
{
"math_id": 24,
"text": "s_{2,rev}"
},
{
"math_id": 25,
"text": "h_{2,rev}"
},
{
"math_id": 26,
"text": "T_{eff}=\\frac{h_{2}-h_{2,rev}}{s_{2}-s_{2,rev}}"
},
{
"math_id": 27,
"text": "T_{2}"
},
{
"math_id": 28,
"text": "T_{2,rev}"
},
{
"math_id": 29,
"text": "T_{eff}=\\frac{T_{2}-T_{2,rev}}{\\ln T_{2}-\\ln T_{2,rev}}=\\frac{T_{2}-T_{2,rev}}{\\ln\\frac{T_{2}}{T_{2,rev}}}"
}
]
| https://en.wikipedia.org/wiki?curid=67197507 |
67206069 | 2 Chronicles 4 | Second Book of Chronicles, chapter 4
2 Chronicles 4 is the fourth chapter of the Second Book of Chronicles of the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingship of Solomon (2 Chronicles 1 to 9). The focus of this chapter is the construction of the temple's interior decoration.
Text.
This chapter was originally written in the Hebrew language and is divided into 22 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
The bronze altar and molten sea (4:1–5).
This section records the construction of the bronze altar (verse 1; cf. 1 Kings 8:64; 2 Kings 16:14–15; 2 Chronicles 1:5; Ezekiel 43:13–17) and the molten sea (verses 2–5; cf. 1 Kings 7:23–26). The altar was a formidable object, probably made of wood and covered with bronze, with the measures probably referring to the base.
"Then he made a bronze altar that was twenty cubits long, twenty cubits wide, and ten cubits high."
"Also he made a molten sea of ten cubits from brim to brim, round in compass, and five cubits the height thereof; and a line of thirty cubits did compass it round about."
Verse 2.
The approximation of the mathematical constant "π" ("pi"), defined as the ratio of a circle's circumference to its diameter, can apparently be calculated from this verse as 30 cubits divided by 10 cubits to yield "3". However, Matityahu Hacohen Munk observed that the spelling for "line" in Hebrew, normally written as "qaw", is written ("ketiv") as "qaweh". Using gematria, "qaweh" yields "111" whereas "qaw" yields "106", so scaling the plain ratio of 3 by 111/106 results in π = "3.1415094", very close to the modern value of "3.1415926". Charles Ryrie gives another explanation based on verse 5 (cf. 1 Kings 7:26): since the molten sea has a brim a handbreadth wide, the 10 cubits from "brim to brim" is an outer diameter; subtracting two handbreadths gives the inside diameter, which, multiplied by π, yields approximately 30 cubits, the circumference given in this verse.
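Both calculations can be checked with elementary arithmetic; the sketch below redoes them (the handbreadth-to-cubit conversion is an assumption made only for illustration, since the verse gives none).
```python
import math

# Munk's gematria reading: scale the verse's plain ratio 30/10 = 3 by 111/106.
pi_munk = 3 * 111 / 106
print(pi_munk, math.pi)                # 3.141509... vs 3.141592...

# Ryrie's reading: 10 cubits is the *outer* brim-to-brim diameter and the
# brim is a handbreadth thick on each side. The verse gives no conversion;
# 1 handbreadth ~ 1/6 cubit is an assumed value for illustration.
handbreadth = 1 / 6                    # cubits (assumption)
inner_diameter = 10 - 2 * handbreadth  # cubits
print(math.pi * inner_diameter)        # ~30.4 cubits, close to the stated 30
```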
"And the thickness of it was an handbreadth, and the brim of it like the work of the brim of a cup, with flowers of lilies; and it received and held three thousand baths."
The temple's interior (4:6–22).
Verses 10–22 closely parallel 1 Kings 7:39–50, except for the omission of the material in 1 Kings 7:27–37.
1 Kings 7:38 corresponds to 2 Chronicles 4:6, while 1 Kings 7:38–39a is reworked at 2 Chronicles 4:6a, but verses 6b–9 have no parallel in Kings, and 1 Kings 7:39b–51 corresponds to 2 Chronicles 4:10–5:1. The (lengthy) passage in Kings concerning the stands for the basins is only found in verse 14. The function of the basin (verse 6) is related to Exodus 30:17–21, where a copper basin is used for ceremonial washing. The list of golden materials in verses 7–9 corresponds to that in 1 Kings 7:48–50 (cf. verses 19–22), presented in the order of the Chronicler's (original) list in 1 Chronicles 28:15–18. Whereas the tabernacle was equipped with only one lampstand (Exodus 25:31–40; 31:8; Leviticus 24:1–4; Numbers 8:2–4), an interesting similarity to 13:11, there were ten in the Temple (verse 7; cf. multiple lampstands in 1 Chronicles 28:15; 2 Chronicles 4:20; 1 Kings 7:49). Both the tabernacle (Exodus 25:23–30; 26:35; Leviticus 24:5–9; 2 Chronicles 13:11) and Solomon's temple according to 1 Kings 7:48 mention only one shewbread table, but there were ten in verse 8, and in contrast to the one table, the ten tables in the Chronicles (1 Chronicles 28:16) are not explicitly characterized as covered in gold. Whereas 1 Kings 6:36 only briefly mentions the inner courtyard, the Chronicler clearly distinguishes between the priests' court (1 Kings 6:36; 7:12) and the precinct for laymen.
" In the plain of Jordan did the king cast them, in the clay ground between Succoth and Zeredathah."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67206069 |
67207752 | Hopfion | Topological soliton
A hopfion is a topological soliton. It is a stable three-dimensional localised configuration of a three-component field formula_0 with a knotted topological structure. Hopfions are the three-dimensional counterparts of skyrmions, which exhibit similar topological properties in two dimensions. They have been widely studied in many physical systems over the last half century, as summarized at http://hopfion.com
The soliton is mobile and stable: i.e., it is protected from decay by an energy barrier. It can be deformed but always conserves an integer Hopf topological invariant. It is named after the German mathematician Heinz Hopf.
A model that supports hopfions was proposed as follows:
formula_1
Higher-order derivative terms are required to stabilize the hopfions: by Derrick's theorem, the quadratic gradient term alone cannot support stable three-dimensional solitons against collapse.
Stable hopfions were predicted within various physical platforms, including Yang–Mills theory, superconductivity and magnetism.
Experimental observation.
Hopfions have been observed experimentally in chiral colloidal magnetic materials, in chiral liquid crystals, in Ir/Co/Pt multilayers using X-ray magnetic circular dichroism and in the polarization of free-space monochromatic light.
In chiral magnets, a helical-background variant of the hopfion has been theoretically predicted to occur within the spiral magnetic phase, where it was called a "heliknoton". In recent years, the concept of a "fractional hopfion" has also emerged, in which not all preimages of the magnetisation have a nonzero linking.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{n}=(n_x,n_y,n_z)"
},
{
"math_id": 1,
"text": "H= (\\partial {\\bf n})^2 + (\\epsilon_{ijk}{\\bf n}\\cdot\\partial_i {\\bf n}\\times \\partial_j{\\bf n})^2"
}
]
| https://en.wikipedia.org/wiki?curid=67207752 |
672080 | Homentropic flow | In fluid mechanics, a homentropic flow has uniform and constant entropy. It distinguishes itself from an isentropic or particle isentropic flow, where the entropy level of each fluid particle does not change with time, but may vary from particle to particle. This means that a homentropic flow is necessarily isentropic, but an isentropic flow need not be homentropic.
A homentropic and perfect gas is an example of a barotropic fluid where the pressure and density are related byformula_0where formula_1 is a constant. | [
{
"math_id": 0,
"text": "P(\\rho) = K\\rho^\\gamma,"
},
{
"math_id": 1,
"text": "K"
}
]
| https://en.wikipedia.org/wiki?curid=672080 |
672190 | Bifundamental representation | In mathematics and theoretical physics, a bifundamental representation is a representation obtained as a tensor product of two fundamental or antifundamental representations.
For example, the "MN"-dimensional representation ("M","N") of the group
formula_0
is a bifundamental representation.
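As a concrete sketch of how the product group acts on such a representation (the specific generator matrices below are illustrative assumptions, not from the source), the SU("M") factor acts on the tensor product as A ⊗ I and the SU("N") factor as I ⊗ B, and the two actions commute:
```python
import numpy as np

# Bifundamental of SU(2) x SU(3): a 2*3 = 6 dimensional space on which
# the su(2) factor acts as A (x) I_3 and the su(3) factor as I_2 (x) B.
A = np.array([[0, 1], [1, 0]]) / 2        # an su(2) generator (sigma_1 / 2)
B = np.diag([1.0, -1.0, 0.0]) / 2         # an su(3) generator (lambda_3 / 2)

G_A = np.kron(A, np.eye(3))               # 6x6 action of the SU(2) factor
G_B = np.kron(np.eye(2), B)               # 6x6 action of the SU(3) factor
assert G_A.shape == G_B.shape == (6, 6)   # dimension = M*N = 6

# Actions of the two factors commute, as required for a product group:
assert np.allclose(G_A @ G_B, G_B @ G_A)
```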
These representations occur in quiver diagrams. | [
{
"math_id": 0,
"text": "SU(M) \\times SU(N)"
}
]
| https://en.wikipedia.org/wiki?curid=672190 |
672202 | Yang–Mills theory | In physics, a quantum field theory
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in physics:
Yang–Mills theory and the mass gap. Quantum particles described by the theory have mass but the classical waves of the field travel at the speed of light.
Yang–Mills theory is a quantum field theory for nuclear binding devised by Chen Ning Yang and Robert Mills in 1953, as well as a generic term for the class of similar theories. The Yang–Mills theory is a gauge theory based on a special unitary group SU("n"), or more generally any compact Lie group. A Yang–Mills theory seeks to describe the behavior of elementary particles using these non-abelian Lie groups and is at the core of the unification of the electromagnetic force and weak forces (i.e. U(1) × SU(2)) as well as quantum chromodynamics, the theory of the strong force (based on SU(3)). Thus it forms the basis of the understanding of the Standard Model of particle physics.
History and qualitative description.
Gauge theory in electrodynamics.
All known fundamental interactions can be described in terms of gauge theories, but working this out took decades. Hermann Weyl's pioneering work on this project started in 1915 when his colleague Emmy Noether proved that every conserved physical quantity has a matching symmetry, and culminated in 1928 when he published his book applying the geometrical theory of symmetry (group theory) to quantum mechanics. Weyl named the relevant symmetry in Noether's theorem the "gauge symmetry", by analogy to distance standardization in railroad gauges.
Erwin Schrödinger in 1922, three years before working on his equation, connected Weyl's group concept to electron charge. Schrödinger showed that the group formula_0 produced a phase shift formula_1 in electromagnetic fields that matched the conservation of electric charge. As the theory of quantum electrodynamics developed in the 1930s and 1940s, the formula_0 group transformations played a central role. Many physicists thought there must be an analog for the dynamics of nucleons.
Chen Ning Yang in particular was obsessed with this possibility.
Yang and Mills find the nuclear force gauge theory.
Yang's core idea was to look for a conserved quantity in nuclear physics comparable to electric charge and use it to develop a corresponding gauge theory comparable to electrodynamics. He settled on conservation of isospin, a quantum number that distinguishes a neutron from a proton, but he made no progress on a theory. Taking a break from Princeton in the summer of 1953, Yang met a collaborator who could help: Robert Mills. As Mills himself describes:"During the academic year 1953–1954, Yang was a visitor to Brookhaven National Laboratory ... I was at Brookhaven also...and was assigned to the same office as Yang. Yang, who has demonstrated on a number of occasions his generosity to physicists beginning their careers, told me about his idea of generalizing gauge invariance and we discussed it at some length...I was able to contribute something to the discussions, especially with regard to the quantization procedures, and to a small degree in working out the formalism; however, the key ideas were Yang's."
In the summer of 1953, Yang and Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to non-abelian groups, selecting the group formula_2 to provide an explanation for isospin conservation in collisions involving the strong interactions. Yang's presentation of the work at Princeton in February 1954 was challenged by Pauli, who asked about the mass of the field developed with the gauge invariance idea. Pauli knew that this might be an issue, as he had worked on applying gauge invariance himself but chose not to publish it, viewing the massless excitations of the theory as "unphysical 'shadow particles'". Yang and Mills published in October 1954; near the end of the paper, they admit:
<templatestyles src="Template:Blockquote/styles.css" />
This problem of unphysical massless excitation blocked further progress.
The idea was set aside until 1960, when the concept of particles acquiring mass through symmetry breaking in massless theories was put forward, initially by Jeffrey Goldstone, Yoichiro Nambu, and Giovanni Jona-Lasinio. This prompted a significant restart of Yang–Mills theory studies that proved successful in the formulation of both electroweak unification and quantum chromodynamics (QCD). The electroweak interaction is described by the gauge group SU(2) × U(1), while QCD is an SU(3) Yang–Mills theory. The massless gauge bosons of the electroweak SU(2) × U(1) mix after spontaneous symmetry breaking to produce the three massive weak bosons (W+, W−, and Z0) as well as the still-massless photon field. The dynamics of the photon field and its interactions with matter are, in turn, governed by the U(1) gauge theory of quantum electrodynamics. The Standard Model combines the strong interaction with the unified electroweak interaction (unifying the weak and electromagnetic interaction) through the symmetry group SU(3) × SU(2) × U(1). In the current epoch the strong interaction is not unified with the electroweak interaction, but from the observed running of the coupling constants it is believed they all converge to a single value at very high energies.
Phenomenology at lower energies in quantum chromodynamics is not completely understood due to the difficulties of managing such a theory with a strong coupling. This may be the reason why confinement has not been theoretically proven, though it is a consistent experimental observation. This shows why QCD confinement at low energy is a mathematical problem of great relevance, and why the Yang–Mills existence and mass gap problem is a Millennium Prize Problem.
Parallel work on non-Abelian gauge theories.
In 1953, in a private correspondence, Wolfgang Pauli formulated a six-dimensional theory of Einstein's field equations of general relativity, extending the five-dimensional theory of Kaluza, Klein, Fock, and others to a higher-dimensional internal space. However, there is no evidence that Pauli developed the Lagrangian of a gauge field or the quantization of it. Because Pauli found that his theory "leads to some rather unphysical shadow particles", he refrained from publishing his results formally. Although Pauli did not publish his six-dimensional theory, he gave two seminar lectures about it in Zürich in November 1953.
In January 1954 Ronald Shaw, a graduate student at the University of Cambridge also developed a non-Abelian gauge theory for nuclear forces.
However, the theory needed massless particles in order to maintain gauge invariance. Since no such massless particles were known at the time, Shaw and his supervisor Abdus Salam chose not to publish their work.
Shortly after Yang and Mills published their paper in October 1954, Salam encouraged Shaw to publish his work to mark his contribution. Shaw declined, and instead it only forms a chapter of his PhD thesis published in 1956.
Mathematical overview.
Yang–Mills theories are special examples of gauge theories with a non-abelian symmetry group given by the Lagrangian
formula_3
with the generators formula_4 of the Lie algebra, indexed by a, corresponding to the F-quantities (the curvature or field-strength form) satisfying
formula_5
Here, the f^abc are structure constants of the Lie algebra (totally antisymmetric if the generators of the Lie algebra are normalised such that formula_6 is proportional to formula_7), the covariant derivative is defined as
formula_8
I is the identity matrix (matching the size of the generators), formula_9 is the vector potential, and g is the coupling constant. In four dimensions, the coupling constant g is a pure number, and for an SU("n") group one has formula_10
The relation
formula_11
can be derived by the commutator
formula_12
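These algebraic relations can be verified numerically for the simplest non-abelian case, SU(2), where one may take T^a = σ^a/2 (the Pauli matrices halved) and f^abc equal to the Levi-Civita symbol. The sketch below is illustrative, with indices running over 0, 1, 2 for convenience.
```python
import numpy as np

# Check tr(T^a T^b) = delta^ab / 2 and [T^a, T^b] = i f^abc T^c for SU(2),
# with T^a = sigma^a / 2 and f^abc the Levi-Civita symbol.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]

def eps(a, b, c):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return (a - b) * (b - c) * (c - a) / 2

for a in range(3):
    for b in range(3):
        # normalization: tr(T^a T^b) = delta^ab / 2
        assert np.isclose(np.trace(T[a] @ T[b]), 0.5 * (a == b))
        # commutator: [T^a, T^b] = i eps^abc T^c
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = sum(1j * eps(a, b, c) * T[c] for c in range(3))
        assert np.allclose(comm, rhs)
```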
The field has the property of being self-interacting, and the equations of motion that one obtains are said to be semilinear, as the nonlinearities occur both with and without derivatives. This means that one can manage this theory only through perturbation theory, when the nonlinearities are small.
Note that the transition between "upper" ("contravariant") and "lower" ("covariant") vector or tensor components is trivial for "a" indices (e.g. formula_13), whereas for μ and ν it is nontrivial, corresponding e.g. to the usual Lorentz signature, formula_14
From the given Lagrangian one can derive the equations of motion given by
formula_15
Putting formula_16 these can be rewritten as
formula_17
A Bianchi identity holds
formula_18
which is equivalent to the Jacobi identity
formula_19
since formula_20 Define the dual strength tensor
formula_21 then the Bianchi identity can be rewritten as
formula_22
A source formula_23 enters into the equations of motion as
formula_24
Note that the currents must properly change under gauge group transformations.
We give here some comments about the physical dimensions of the coupling. In D dimensions, the field scales as formula_25 and so the coupling must scale as formula_26 This implies that Yang–Mills theory is not renormalizable for dimensions greater than four. Furthermore, for D = 4 the coupling is dimensionless, and both the field and the square of the coupling have the same dimensions as the field and the coupling of a massless quartic scalar field theory. So, these theories share the scale invariance at the classical level.
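The quoted scalings follow from requiring the action to be dimensionless in natural units (ħ = c = 1), where length carries the only dimension; a brief sketch of the power counting:
```latex
% Power counting in natural units: the action S ~ \int d^Dx\, F^2 must be
% dimensionless, with F ~ \partial A + g A^2.
[S] = L^{D}\,[F]^{2} = 1,\qquad [F] = [\partial A] = L^{-1}[A]
\;\Rightarrow\; [A] = L^{(2-D)/2},
\qquad
[g A^{2}] = [\partial A] \;\Rightarrow\; [g] = L^{(D-4)/2}
\;\Rightarrow\; [g^{2}] = L^{D-4}.
```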
Quantization.
A method of quantizing the Yang–Mills theory is by functional methods, i.e. path integrals. One introduces a generating functional for n-point functions as
formula_27
but this integral is ill-defined as it stands, because the vector potential can be chosen arbitrarily due to the gauge freedom. This problem was already known for quantum electrodynamics, but here it becomes more severe due to the non-abelian properties of the gauge group. A way out was given by Ludvig Faddeev and Victor Popov with the introduction of a "ghost field" (see Faddeev–Popov ghost) that has the property of being unphysical since, although it agrees with Fermi–Dirac statistics, it is a complex scalar field, which violates the spin–statistics theorem. So, we can write the generating functional as
formula_28
being
formula_29
for the field,
formula_30
for the gauge fixing and
formula_31
for the ghost. This is the expression commonly used to derive Feynman's rules (see Feynman diagram). Here c^a denotes the ghost field, while ξ fixes the choice of gauge for the quantization.
The Feynman rules for this theory can be obtained when the generating functional given above is rewritten as
formula_32
with
formula_33
being the generating functional of the free theory. Expanding in g and computing the functional derivatives, we are able to obtain all the n-point functions with perturbation theory. Using the LSZ reduction formula, we get from the n-point functions the corresponding process amplitudes, cross sections and decay rates. The theory is renormalizable and corrections are finite at any order of perturbation theory.
For quantum electrodynamics the ghost field decouples because the gauge group is abelian. This can be seen from the coupling between the gauge field and the ghost field that is formula_34 For the abelian case, all the structure constants formula_35 are zero and so there is no coupling. In the non-abelian case, the ghost field appears as a useful way to rewrite the quantum field theory without physical consequences on the observables of the theory such as cross sections or decay rates.
One of the most important results obtained for Yang–Mills theory is asymptotic freedom. This result can be obtained by assuming that the coupling constant g is small (so small nonlinearities), as at high energies, and applying perturbation theory. The relevance of this result is that a Yang–Mills theory describes the strong interaction, and asymptotic freedom permits the proper treatment of experimental results coming from deep inelastic scattering.
To obtain the behavior of the Yang–Mills theory at high energies, and so to prove asymptotic freedom, one applies perturbation theory assuming a small coupling. This is verified a posteriori in the ultraviolet limit. In the opposite limit, the infrared limit, the situation is the opposite, as the coupling is too large for perturbation theory to be reliable. Most of the difficulty that research meets is in managing the theory at low energies. That is the interesting case, being inherent to the description of hadronic matter and, more generally, to all the observed bound states of gluons and quarks and their confinement (see hadrons). The most used method to study the theory in this limit is to try to solve it on computers (see lattice gauge theory). In this case, large computational resources are needed to be sure the correct limit of infinite volume (smaller lattice spacing) is obtained. This is the limit the results must be compared with. Smaller spacing and larger coupling are not independent of each other, and larger computational resources are needed for each. As of today, the situation appears somewhat satisfactory for the hadronic spectrum and the computation of the gluon and ghost propagators, but the glueball and hybrid spectra are still an open question in view of the experimental observation of such exotic states. Indeed, the σ resonance is not seen in any such lattice computations, and contrasting interpretations have been put forward. This is a hotly debated issue.
Open problems.
Yang–Mills theories met with general acceptance in the physics community after Gerard 't Hooft, in 1972, worked out their renormalization, relying on a formulation of the problem developed by his advisor Martinus Veltman.
Renormalizability is obtained even if the gauge bosons described by this theory are massive, as in the electroweak theory, provided the mass is only an "acquired" one, generated by the Higgs mechanism.
The mathematics of the Yang–Mills theory is a very active field of research, yielding e.g. invariants of differentiable structures on four-dimensional manifolds via work of Simon Donaldson. Furthermore, the field of Yang–Mills theories was included in the Clay Mathematics Institute's list of "Millennium Prize Problems". Here the prize-problem consists, especially, in a proof of the conjecture that the lowest excitations of a pure Yang–Mills theory (i.e. without matter fields) have a finite mass-gap with regard to the vacuum state. Another open problem, connected with this conjecture, is a proof of the confinement property in the presence of additional fermions.
In physics, the study of Yang–Mills theories does not usually proceed from perturbation analysis or analytical methods alone; more recently, it has proceeded through the systematic application of numerical methods to lattice gauge theories.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "U(1)"
},
{
"math_id": 1,
"text": "e^{i\\theta}"
},
{
"math_id": 2,
"text": "SU(2)"
},
{
"math_id": 3,
"text": "\\ \\mathcal{L}_\\mathrm{gf} = -\\tfrac{1}{2}\\operatorname{tr}(F^2) = - \\tfrac{1}{4} F^{a\\mu \\nu} F_{\\mu \\nu}^a\\ "
},
{
"math_id": 4,
"text": "\\ T^a\\ "
},
{
"math_id": 5,
"text": "\\ \\operatorname{tr}\\left( T^a\\ T^b \\right) = \\tfrac{1}{2} \\delta^{ab}\\ , \\qquad \\left[ T^a,\\ T^b \\right] = i\\ f^{abc}\\ T^c ~."
},
{
"math_id": 6,
"text": "\\ \\operatorname{tr}(T^a\\ T^b)\\ "
},
{
"math_id": 7,
"text": "\\ \\delta^{ab}\\ "
},
{
"math_id": 8,
"text": "\\ D_\\mu = I\\ \\partial_\\mu - i\\ g\\ T^a\\ A^a_\\mu\\ ,"
},
{
"math_id": 9,
"text": "\\ A^a_\\mu\\ "
},
{
"math_id": 10,
"text": "\\ a, b, c = 1 \\ldots n^2-1 ~."
},
{
"math_id": 11,
"text": "\\ F_{\\mu \\nu}^a = \\partial_\\mu A_\\nu^a - \\partial_\\nu A_\\mu^a + g\\ f^{abc}\\ A_\\mu^b\\ A_\\nu^c\\ "
},
{
"math_id": 12,
"text": "\\ \\left[ D_\\mu, D_\\nu \\right] = -i\\ g\\ T^a\\ F_{\\mu\\nu}^a ~."
},
{
"math_id": 13,
"text": "\\ f^{abc} = f_{abc}\\ "
},
{
"math_id": 14,
"text": "\\ \\eta_{\\mu \\nu } = {\\rm diag}(+---) ~."
},
{
"math_id": 15,
"text": "\\ \\partial^\\mu F_{\\mu\\nu}^a + g\\ f^{abc}\\ A^{\\mu b}\\ F_{\\mu\\nu}^c = 0 ~."
},
{
"math_id": 16,
"text": "\\ F_{\\mu\\nu}=T^aF^a_{\\mu\\nu}\\ ,"
},
{
"math_id": 17,
"text": "\\ \\left( D^\\mu F_{\\mu\\nu} \\right)^a = 0 ~."
},
{
"math_id": 18,
"text": "\\ \\left( D_\\mu\\ F_{\\nu \\kappa} \\right)^a + \\left( D_\\kappa\\ F_{\\mu \\nu} \\right)^a + \\left( D_\\nu\\ F_{\\kappa \\mu} \\right)^a = 0\\ "
},
{
"math_id": 19,
"text": "\\ \\left[ D_{\\mu}, \\left[ D_{\\nu},D_{\\kappa} \\right] \\right] + \\left[ D_{\\kappa}, \\left[ D_{\\mu},D_{\\nu} \\right] \\right] + \\left[ D_{\\nu}, \\left[ D_{\\kappa},D_{\\mu} \\right] \\right] = 0\\ "
},
{
"math_id": 20,
"text": "\\ \\left[ D_{\\mu},F^a_{\\nu\\kappa} \\right] = D_{\\mu}\\ F^a_{\\nu\\kappa} ~."
},
{
"math_id": 21,
"text": "\\ \\tilde{F}^{\\mu\\nu} = \\tfrac{1}{2}\\varepsilon^{\\mu\\nu\\rho\\sigma}F_{\\rho\\sigma}\\ ,"
},
{
"math_id": 22,
"text": "\\ D_{\\mu}\\tilde{F}^{\\mu\\nu} = 0 ~."
},
{
"math_id": 23,
"text": "\\ J_\\mu^a\\ "
},
{
"math_id": 24,
"text": "\\ \\partial^\\mu F_{\\mu\\nu}^a + g\\ f^{abc}\\ A^{b\\mu}\\ F_{\\mu\\nu}^c = -J_\\nu^a ~."
},
{
"math_id": 25,
"text": "\\ \\left[A\\right]=\\left[ L^{\\left(\\tfrac{2-D}{2}\\right)} \\right]\\ "
},
{
"math_id": 26,
"text": "\\ \\left[g^2\\right] = \\left[L^{\\left(D-4\\right)}\\right] ~."
},
{
"math_id": 27,
"text": "\\ Z[j] = \\int [\\mathrm{d}A]\\ \\exp\\left[- \\tfrac{i}{2} \\int \\mathrm{d}^4x\\ \\operatorname{tr}\\left( F^{\\mu \\nu}\\ F_{\\mu \\nu}\\right) + i\\ \\int \\mathrm{d}^4x\\ j^a_\\mu(x)\\ A^{a\\mu}(x) \\right]\\ ,"
},
{
"math_id": 28,
"text": "\\begin{align}\nZ[j,\\bar\\varepsilon,\\varepsilon] & = \\int [\\mathrm{d}\\ A] [\\mathrm{d}\\ \\bar c] [\\mathrm{d}\\ c]\\ \\exp\\Bigl\\{ i\\ S_F\\ \\left[\\partial A, A\\right] + i\\ S_{gf}\\left[\\partial A\\right] + i\\ S_g\\left[\\partial c, \\partial\\bar c, c,\\bar c, A \\right] \\Bigr\\} \\\\\n&\\exp\\left\\{i\\int \\mathrm{d}^4x\\ j^a_\\mu(x)A^{a\\mu}(x)+i\\int \\mathrm{d}^4x\\ \\left[\\bar c^a(x)\\ \\varepsilon^a(x) + \\bar\\varepsilon^a(x)\\ c^a(x)\\right]\\right\\}\n\\end{align}"
},
{
"math_id": 29,
"text": "S_F=- \\tfrac{1}{2} \\int \\mathrm{d}^4 x\\ \\operatorname{tr}\\left( F^{\\mu \\nu}\\ F_{\\mu \\nu} \\right)\\ "
},
{
"math_id": 30,
"text": "S_{gf} = -\\frac{1}{2\\xi} \\int \\mathrm{d}^4 x\\ (\\partial\\cdot A)^2\\ "
},
{
"math_id": 31,
"text": "\\ S_g = -\\int \\mathrm{d}^4 x\\ \\left(\\bar c^a\\ \\partial_\\mu\\partial^\\mu c^a + g\\ \\bar c^a\\ f^{abc}\\ \\partial_\\mu\\ A^{b\\mu}\\ c^c \\right)\\ "
},
{
"math_id": 32,
"text": "\\begin{align}\nZ[j,\\bar\\varepsilon,\\varepsilon] &= \\exp\\left(-i\\ g\\int \\mathrm{d}^4x\\ \\frac{\\delta}{i\\ \\delta\\ \\bar\\varepsilon^a(x)}\\ f^{abc}\\partial_\\mu\\ \\frac{i\\ \\delta}{\\delta\\ j^b_\\mu(x)}\\ \\frac{i\\ \\delta}{\\delta\\ \\varepsilon^c(x)} \\right)\\\\\n& \\qquad \\times \\exp\\left(-i\\ g\\int \\mathrm{d}^4x\\ f^{abc}\\partial_\\mu\\frac{i\\ \\delta}{\\delta\\ j^a_\\nu(x)}\\frac{i\\ \\delta}{\\delta\\ j^b_\\mu(x)}\\ \\frac{i\\ \\delta}{\\delta\\ j^{c\\nu}(x)}\\right)\\\\\n& \\qquad \\qquad \\times \\exp\\left(-i\\ \\frac{g^2}{4}\\int \\mathrm{d}^4x\\ f^{abc}\\ f^{ars}\\frac{i\\ \\delta}{\\delta\\ j^b_\\mu(x)}\\ \\frac{i\\ \\delta}{\\delta\\ j^c_\\nu(x)}\\ \\frac{\\ i\\delta}{\\delta\\ j^{r\\mu}(x)} \\frac{i\\ \\delta}{\\delta\\ j^{s\\nu}(x)} \\right) \\\\\n& \\qquad \\qquad \\qquad \\times Z_0[j,\\bar\\varepsilon,\\varepsilon]\n\\end{align}"
},
{
"math_id": 33,
"text": " Z_0[j,\\bar\\varepsilon,\\varepsilon] = \\exp \\left( -\\int \\mathrm{d}^4x\\ \\mathrm{d}^4y\\ \\bar\\varepsilon^a(x)\\ C^{ab}(x-y)\\ \\varepsilon^b(y) \\right)\\exp \\left( \\tfrac{1}{2} \\int \\mathrm{d}^4x\\ \\mathrm{d}^4y\\ j^a_\\mu(x)\\ D^{ab\\mu\\nu}(x-y)\\ j^b_\\nu(y) \\right)\\ "
},
{
"math_id": 34,
"text": "\\ \\bar c^a\\ f^{abc}\\ \\partial_\\mu A^{b\\mu}\\ c^c ~."
},
{
"math_id": 35,
"text": "\\ f^{abc}\\ "
}
]
| https://en.wikipedia.org/wiki?curid=672202 |
67220350 | 2 Chronicles 5 | Second Book of Chronicles, chapter 5
2 Chronicles 5 is the fifth chapter of the Second Book of Chronicles of the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingship of Solomon (2 Chronicles 1 to 9). The focus of this chapter is the installation of the Ark of the Covenant in the temple.
Text.
This chapter was originally written in the Hebrew language and is divided into 14 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
The construction of the Temple completed (5:1).
This verse concludes the section started in 2 Chronicles 4:6 with the placement of temple decorations into the finished building.
"Thus all the work that Solomon made for the house of the Lord was finished: and Solomon brought in all the things that David his father had dedicated; and the silver, and the gold, and all the instruments, put he among the treasures of the house of God."
Verse 1.
The construction of the temple started in the fourth year of Solomon's reign, took seven years to complete (1 Kings 6:1) and another thirteen years in furnishing it (1 Kings 9:1, 2), but this is not mentioned in the Chronicles.
The Ark brought into the Temple (5:2–14).
The section parallels 1 Kings 8:1–13, except for verses 11b–13a, which point to the implementation of David's Levitical and priestly orders (1 Chronicles 15–16; 25–26). All participants were sanctified (cf. 1 Chronicles 15:14) and all three musician families were present to play musical instruments and sing in unison 'For he is good, for his steadfast love endures for ever' (verse 13; cf. 1 Chronicles 16:41). Once the music began, a cloud filled the house (verse 13), recalling the cloud which came down on the tent of meeting in the desert (cf. Numbers 12:5).
"Then Solomon assembled the elders of Israel, and all the heads of the tribes, the chief of the fathers of the children of Israel, unto Jerusalem, to bring up the ark of the covenant of the Lord out of the city of David, which is Zion."
Verse 2.
The transfer of the ark from Mount Zion to the temple on Mount Moriah was the first part of the celebration.
"And all the elders of Israel came; and the Levites took up the ark."
Verse 4.
The Levites carried the ark in conformity with Moses' instructions (Deuteronomy 10:8; 31:25) and David's orders (1 Chronicles 15:2). In 1 Kings 8:3, 6 it was specified that the "priests" (who must be from the tribe of Levi) carried the ark, as Levites who were not priests were forbidden to enter the most holy place (as shown in verses 7 and 29:16).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67220350 |
672218 | Isospin | Quantum number related to the weak interaction
In nuclear physics and particle physics, isospin (I) is a quantum number related to the up- and down-quark content of the particle.
Isospin is also known as isobaric spin or isotopic spin.
Isospin symmetry is a subset of the flavour symmetry seen more broadly in the interactions of baryons and mesons.
The name of the concept contains the term "spin" because its quantum mechanical description is mathematically similar to that of angular momentum (in particular, in the way it couples; for example, a proton–neutron pair can be coupled either in a state of total isospin 1 or in one of 0). But unlike angular momentum, it is a dimensionless quantity and is not actually any type of spin.
Before the concept of quarks was introduced, particles that are affected equally by the strong force but had different charges (e.g. protons and neutrons) were considered different states of the same particle, but having isospin values related to the number of charge states. A close examination of isospin symmetry ultimately led directly to the discovery and understanding of quarks and to the development of Yang–Mills theory. Isospin symmetry remains an important concept in particle physics.
Isospin invariance.
To a good approximation the proton and neutron have the same mass: they can be interpreted as two states of the same particle. These states have different values for an internal isospin coordinate. The mathematical properties of this coordinate are completely analogous to intrinsic spin angular momentum. The third component of the operator for this coordinate, formula_0, has eigenvalues +1/2 and −1/2; it is related to the charge operator, formula_1:
formula_2
which has eigenvalues formula_3 for the proton and zero for the neutron. For a system of n nucleons, the charge operator depends upon the mass number A:
formula_4
Isobars, nuclei with the same mass number like 40K and 40Ar, only differ in the value of the formula_0 eigenvalue. For this reason isospin is also called "isobaric spin".
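A small worked illustration of the charge operator: inverting Q = e(T3 + A/2) with Q = Ze shows that the T3 eigenvalue of a nucleus is Z − A/2, so the isobars named above share A but differ in T3. (The sketch below just uses the standard Z and A values of these nuclides.)
```python
# T3 eigenvalue of a nucleus with Z protons among A nucleons,
# obtained by inverting Q = e*(T3 + A/2) with Q = Z*e.
def t3(Z, A):
    return Z - A / 2

print(t3(1, 1), t3(0, 1))    # proton +0.5, neutron -0.5
print(t3(19, 40))            # 40K  -> -1.0
print(t3(18, 40))            # 40Ar -> -2.0  (same A, different T3)
```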
The internal structure of these nucleons is governed by the strong interaction, but the Hamiltonian of the strong interaction is isospin invariant. As a consequence the nuclear forces are charge independent. Properties like the stability of deuterium can be predicted based on isospin analysis. However,
this invariance is not exact and the quark model gives more precise results.
Relation to hypercharge.
The charge operator can be expressed in terms of the projection of isospin formula_5 and hypercharge, formula_6:
formula_7
This is known as the Gell-Mann-Nishijima formula. The hypercharge is the center of splitting for the isospin multiplet:
formula_8
This relation has an analog in the weak interaction where T is the weak isospin.
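A quick check of the Gell-Mann-Nishijima formula on a few familiar hadrons (the Y and T3 assignments below are the standard values, with hypercharge Y = B + S, baryon number plus strangeness):
```python
# Gell-Mann--Nishijima check, Q = Y/2 + T3, for a few familiar hadrons.
hadrons = {            # name: (Y, T3, known charge)
    "p":      (1,  +1/2, +1),
    "n":      (1,  -1/2,  0),
    "pi+":    (0,  +1,   +1),
    "K+":     (1,  +1/2, +1),
    "Sigma-": (0,  -1,   -1),
    "Lambda": (0,   0,    0),
}
for name, (Y, T3, Q) in hadrons.items():
    assert Y / 2 + T3 == Q, name
```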
Quark content and isospin.
In the modern formulation, isospin (I) is defined as a vector quantity in which up and down quarks have a value of I = 1/2, with the 3rd-component (I3) being +1/2 for up quarks, and −1/2 for down quarks, while all other quarks have I = 0. Therefore, for hadrons in general, where nu and nd are the numbers of up and down quarks respectively,
formula_9
In any combination of quarks, the 3rd component of the isospin vector (I3) could either be aligned between a pair of quarks, or face the opposite direction, giving different possible values for total isospin for any combination of quark flavours. Hadrons with the same quark content but different total isospin can be distinguished experimentally, verifying that flavour is actually a vector quantity, not a scalar (up vs down simply being a projection in the quantum mechanical z axis of flavour space).
For example, a strange quark can be combined with an up and a down quark to form a baryon, but there are two different ways the isospin values can combine – either adding (due to being flavour-aligned) or cancelling out (due to being in opposite flavour directions). The isospin-1 state (the Σ0) and the isospin-0 state (the Λ0) have different experimentally detected masses and half-lives.
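A small worked sketch of the counting rule formula_9, with antiquarks counted with opposite sign (an anti-down contributes +1/2):
```python
# I3 from quark content via I3 = (n_u - n_d)/2, where each count is
# quarks minus antiquarks of that flavour.
def i3(u=0, d=0, ubar=0, dbar=0):
    return ((u - ubar) - (d - dbar)) / 2

assert i3(u=2, d=1) == +0.5      # proton (uud)
assert i3(u=1, d=2) == -0.5      # neutron (udd)
assert i3(u=3) == +1.5           # Delta++ (uuu)
assert i3(u=1, dbar=1) == +1.0   # pi+ (u dbar)
# uds gives I3 = 0 for both the Sigma0 and the Lambda0: same quark content
# and same I3, but different *total* isospin (1 vs 0), as described above.
assert i3(u=1, d=1) == 0.0
```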
Isospin and symmetry.
Isospin is regarded as a symmetry of the strong interaction under the action of the Lie group SU(2), the two states being the up flavour and down flavour. In quantum mechanics, when a Hamiltonian has a symmetry, that symmetry manifests itself through a set of states that have the same energy (the states are described as being "degenerate"). In simple terms, the energy operator for the strong interaction gives the same result when an up quark and an otherwise identical down quark are swapped around.
Like the case for regular spin, the isospin operator I is vector-valued: it has three components I"x", I"y", I"z", which are coordinates in the same 3-dimensional vector space where the 3 representation acts. Note that this vector space has nothing to do with the physical space, except similar mathematical formalism. Isospin is described by two quantum numbers: I – the total isospin, and I3 – an eigenvalue of the I"z" projection for which flavor states are eigenstates. In other words, each I3 state specifies a certain flavor state of a multiplet. The third coordinate (z), to which the "3" subscript refers, is chosen due to notational conventions that relate bases in 2 and 3 representation spaces. Namely, for the spin-1/2 case, the components of I are equal to the Pauli matrices divided by 2, and so I"z" = 1/2 τ3, where
formula_10
While the forms of these matrices are isomorphic to those of spin, these Pauli matrices only act within the Hilbert space of isospin, not that of spin, and therefore it is common to denote them with τ rather than σ to avoid confusion.
Although isospin symmetry is actually very slightly broken, SU(3) symmetry is more badly broken, due to the much higher mass of the strange quark compared to the up and down. The discovery of charm, bottomness and topness could lead to further expansions up to SU(6) flavour symmetry, which would hold if all six quarks were identical. However, the very much larger masses of the charm, bottom, and top quarks means that SU(6) flavour symmetry is very badly broken in nature (at least at low energies), and assuming this symmetry leads to qualitatively and quantitatively incorrect predictions. In modern applications, such as lattice QCD, isospin symmetry is often treated as exact for the three light quarks (uds), while the three heavy quarks (cbt) must be treated separately.
Hadron nomenclature.
Hadron nomenclature is based on isospin.
<templatestyles src="Reflist/styles.css" />
History.
Origin of isospin.
In 1932, Werner Heisenberg introduced a new (unnamed) concept to explain binding of the proton and the then newly discovered neutron (symbol n). His model resembled the bonding model for the hydrogen molecular ion, H2+: a single electron was shared by two protons.
Heisenberg's theory had several problems, most notably that it incorrectly predicted the exceptionally strong binding energy of He2+, the alpha particle. However, its equal treatment of the proton and neutron gained significance when several experimental studies showed these particles must bind almost equally. In response, Eugene Wigner used Heisenberg's concept in his 1937 paper where he introduced the term "isotopic spin" to indicate how the concept is similar to spin in behavior.
The particle zoo.
These considerations would also prove useful in the analysis of meson-nucleon interactions after the discovery of the pions in 1947. The three pions (π+, π0, π−) could be assigned to an isospin triplet with "I" = 1 and "I"3 = +1, 0 or −1. By assuming that isospin was conserved by nuclear interactions, the new mesons were more easily accommodated by nuclear theory.
As further particles were discovered, they were assigned into isospin multiplets according to the number of different charge states seen: two doublets "I" = 1/2 of K mesons (K+, K0) and (K̄0, K−), a triplet "I" = 1 of Sigma baryons (Σ+, Σ0, Σ−), a singlet "I" = 0 Lambda baryon (Λ0), a quartet "I" = 3/2 of Delta baryons (Δ++, Δ+, Δ0, Δ−), and so on.
The power of isospin symmetry and related methods comes from the observation that families of particles with similar masses tend to correspond to the invariant subspaces associated with the irreducible representations of the Lie algebra SU(2). In this context, an invariant subspace is spanned by basis vectors which correspond to particles in a family. Under the action of the Lie algebra SU(2), which generates rotations in isospin space, elements corresponding to definite particle states or superpositions of states can be rotated into each other, but can never leave the space (since the subspace is in fact invariant). This is reflective of the symmetry present. The fact that unitary matrices will commute with the Hamiltonian means that the physical quantities calculated do not change even under unitary transformation. In the case of isospin, this machinery is used to reflect the fact that the mathematics of the strong force behaves the same if a proton and neutron are swapped around (in the modern formulation, the up and down quark).
An example: Delta baryons.
For example, the particles known as the Delta baryons – baryons of spin 3/2 – were grouped together because they all have nearly the same mass (approximately 1232 MeV/c2) and interact in nearly the same way.
They could be treated as the same particle, with the difference in charge being due to the particle being in different states. Isospin was introduced in order to be the variable that defined this difference of state. In an analogue to spin, an isospin projection (denoted "I"3) is associated to each charged state; since there were four Deltas, four projections were needed. Like spin, isospin projections were made to vary in increments of 1. Hence, in order to have four values spaced in increments of 1, an isospin value of 3/2 is required (giving the projections "I"3 = +3/2, +1/2, −1/2, −3/2). Thus, all the Deltas were said to have isospin "I" = 3/2, and each individual charge had different "I"3 (e.g. the Δ++ was associated with "I"3 = +3/2).
In the isospin picture, the four Deltas and the two nucleons were thought to simply be the different states of two particles. The Delta baryons are now understood to be made of a mix of three up and down quarks – uuu (Δ++), uud (Δ+), udd (Δ0), and ddd (Δ−); the difference in charge being the difference in the charges of up and down quarks (+2/3 "e" and −1/3 "e" respectively); yet, they can also be thought of as the excited states of the nucleons.
Gauged isospin symmetry.
Attempts have been made to promote isospin from a global to a local symmetry. In 1954, Chen Ning Yang and Robert Mills suggested that the notion of protons and neutrons, which are continuously rotated into each other by isospin, should be allowed to vary from point to point. To describe this, the proton and neutron direction in isospin space must be defined at every point, giving local basis for isospin. A gauge connection would then describe how to transform isospin along a path between two points.
This Yang–Mills theory describes interacting vector bosons, like the photon of electromagnetism. Unlike the photon, the SU(2) gauge theory would contain self-interacting gauge bosons. The condition of gauge invariance suggests that they have zero mass, just as in electromagnetism.
Ignoring the massless problem, as Yang and Mills did, the theory makes a firm prediction: the vector particle should couple to all particles of a given isospin "universally". The coupling to the nucleon would be the same as the coupling to the kaons. The coupling to the pions would be the same as the self-coupling of the vector bosons to themselves.
When Yang and Mills proposed the theory, there was no candidate vector boson. J. J. Sakurai in 1960 predicted that there should be a massive vector boson which is coupled to isospin, and predicted that it would show universal couplings. The rho mesons were discovered a short time later, and were quickly identified as Sakurai's vector bosons. The couplings of the rho to the nucleons and to each other were verified to be universal, as best as experiment could measure. The fact that the diagonal isospin current contains part of the electromagnetic current led to the prediction of rho-photon mixing and the concept of vector meson dominance, ideas which led to successful theoretical pictures of GeV-scale photon-nucleus scattering.
The introduction of quarks.
The discovery and subsequent analysis of additional particles, both mesons and baryons, made it clear that the concept of isospin symmetry could be broadened to an even larger symmetry group, now called flavor symmetry. Once the kaons and their property of strangeness became better understood, it started to become clear that these, too, seemed to be a part of an enlarged symmetry that contained isospin as a subgroup. The larger symmetry was named the Eightfold Way by Murray Gell-Mann, and was promptly recognized to correspond to the adjoint representation of SU(3). To better understand the origin of this symmetry, Gell-Mann proposed the existence of up, down and strange quarks which would belong to the fundamental representation of the SU(3) flavor symmetry.
In the quark model, the isospin projection ("I"3) followed from the up and down quark content of particles; uud for the proton and udd for the neutron. Technically, the nucleon doublet states are seen to be linear combinations of products of 3-particle isospin doublet states and spin doublet states. That is, the (spin-up) proton wave function, in terms of quark-flavour eigenstates, is described by
formula_13
and the (spin-up) neutron by
formula_14
Here, formula_15 is the up quark flavour eigenstate, and formula_16 is the down quark flavour eigenstate, while formula_17 and formula_18 are the eigenstates of formula_19. Although these superpositions are the technically correct way of denoting a proton and neutron in terms of quark flavour and spin eigenstates, for brevity, they are often simply referred to as "uud" and "udd". The derivation above assumes exact isospin symmetry and is modified by SU(2)-breaking terms.
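As a quick consistency check (assuming, as is standard, that the nine flavour ⊗ spin product states appearing above are orthonormal), the squared coefficients of the quoted proton wavefunction sum to one:
```python
import numpy as np

# Normalization check for the proton spin-flavor wavefunction quoted above,
# assuming the nine |flavor> x |spin> product states are orthonormal.
M = np.array([[ 2, -1, -1],
              [-1,  2, -1],
              [-1, -1,  2]])
coeffs = M / (3 * np.sqrt(2))                 # coefficient of each product
assert np.isclose((coeffs ** 2).sum(), 1.0)   # <p|p> = (3*4 + 6*1)/18 = 1
```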
Similarly, the isospin symmetry of the pions is given by:
formula_20
Although the discovery of the quarks led to reinterpretation of mesons as a vector bound state of a quark and an antiquark, it is sometimes still useful to think of them as being the gauge bosons of a hidden local symmetry.
Weak isospin.
In 1961 Sheldon Glashow proposed that a relation similar to the Gell-Mann–Nishijima formula for charge and isospin would also apply to the weak interaction:
formula_21
Here the charge formula_22 is related to the projection of weak isospin formula_5 and the weak hypercharge formula_23.
Isospin and weak isospin are related to the same symmetry but for different forces. Weak isospin is the gauge symmetry of the weak interaction which connects quark and lepton doublets of left-handed particles in all generations; for example, up and down quarks, top and bottom quarks, electrons and electron neutrinos. By contrast (strong) isospin connects only up and down quarks, acts on both chiralities (left and right) and is a global (not a gauge) symmetry.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{T}_3"
},
{
"math_id": 1,
"text": "\\hat{Q}"
},
{
"math_id": 2,
"text": "\\hat{Q} = e(\\hat{T}_3 + \\frac{1}{2}) "
},
{
"math_id": 3,
"text": "e"
},
{
"math_id": 4,
"text": "\\hat{Q} = e(\\hat{T}_3 + \\frac{1}{2}A) "
},
{
"math_id": 5,
"text": "T_3"
},
{
"math_id": 6,
"text": "Y"
},
{
"math_id": 7,
"text": "Q=\\frac{1}{2}Y+T_3, \\ \\ \\ \\ T_3=T, T- 1,...,-T."
},
{
"math_id": 8,
"text": "\\frac{1}{2}Y = \\frac{1}{2}(Q_{\\textrm{min}}+Q_{\\textrm{max}})."
},
{
"math_id": 9,
"text": "I_3 = \\frac{1}{2}(n_u - n_d)."
},
{
"math_id": 10,
"text": "\\tau_3 = \\begin{pmatrix} 1 & 0 \\\\ 0 & -1 \\end{pmatrix}."
},
{
"math_id": 11,
"text": "\\mathrm{u\\bar{u}}"
},
{
"math_id": 12,
"text": "\\mathrm{d\\bar{d}}"
},
{
"math_id": 13,
"text": "\\vert \\mathrm{p}\\uparrow \\rangle = \\frac 1{3\\sqrt 2}\\left(\\begin{array}{ccc} \\vert \\mathrm{duu}\\rangle & \\vert \\mathrm{udu}\\rangle & \\vert \\mathrm{uud}\\rangle \\end{array}\\right) \\left(\\begin{array}{ccc} 2 & -1 & -1\\\\ -1 & 2 & -1\\\\ -1 & -1 & 2 \\end{array}\\right) \\left(\\begin{array}{c} \\left\\vert\\downarrow\\uparrow\\uparrow\\right\\rangle\\\\ \\left\\vert\\uparrow\\downarrow\\uparrow\\right\\rangle\\\\ \\left\\vert\\uparrow\\uparrow\\downarrow\\right\\rangle \\end{array}\\right)"
},
{
"math_id": 14,
"text": "\\vert \\mathrm{n}\\uparrow \\rangle = \\frac 1{3\\sqrt 2}\\left(\\begin{array}{ccc} \\vert \\mathrm{udd}\\rangle & \\vert \\mathrm{dud}\\rangle & \\vert \\mathrm{ddu}\\rangle \\end{array}\\right) \\left(\\begin{array}{ccc} 2 & -1 & -1\\\\ -1 & 2 & -1\\\\ -1 & -1 & 2 \\end{array}\\right) \\left(\\begin{array}{c} \\left\\vert\\downarrow\\uparrow\\uparrow\\right\\rangle\\\\ \\left\\vert\\uparrow\\downarrow\\uparrow\\right\\rangle\\\\ \\left\\vert\\uparrow\\uparrow\\downarrow\\right\\rangle \\end{array}\\right)."
},
{
"math_id": 15,
"text": "\\mathrm{\\vert u \\rangle}"
},
{
"math_id": 16,
"text": "\\mathrm{\\vert d \\rangle}"
},
{
"math_id": 17,
"text": "\\left\\vert\\uparrow\\right\\rangle"
},
{
"math_id": 18,
"text": "\\left\\vert\\downarrow\\right\\rangle"
},
{
"math_id": 19,
"text": "S_z"
},
{
"math_id": 20,
"text": "\\begin{align}\n \\vert \\pi^+\\rangle &= \\vert \\mathrm{u\\overline {d}}\\rangle \\\\\n \\vert \\pi^0\\rangle &= \\tfrac{1}{\\sqrt{2}}\\left(\\vert \\mathrm{u\\overline {u}}\\rangle - \\vert \\mathrm{d \\overline{d}} \\rangle \\right) \\\\\n \\vert \\pi^-\\rangle &= -\\vert \\mathrm{d\\overline {u}}\\rangle.\n\\end{align}"
},
{
"math_id": 21,
"text": "Q = T_3 + \\frac{1}{2}Y_w."
},
{
"math_id": 22,
"text": "Q"
},
{
"math_id": 23,
"text": "Y_w"
}
]
| https://en.wikipedia.org/wiki?curid=672218 |
67224319 | 2 Chronicles 6 | Second Book of Chronicles, chapter 6
2 Chronicles 6 is the sixth chapter of the Second Book of Chronicles of the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingship of Solomon (2 Chronicles 1 to 9). The focus of this chapter is Solomon's prayer and speech at the consecration of the temple.
Text.
This chapter was originally written in the Hebrew language and is divided into 42 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Solomon blesses the LORD (6:1–11).
The first part of this chapter starts with a doxology, followed by Solomon's speech about God's choice of Jerusalem and David for the temple's construction, showing that the Davidic promises regarding them (1 Chronicles 17:1–15) have been fulfilled.
"Then Solomon spoke:"
"The Lord said He would dwell in the dark cloud."
Verse 1.
The Hebrew expression that God 'wished to dwell in darkness' links to God's manifestation on Mount Sinai (Exodus 20:21; Deuteronomy 4:11; 5:22).
Verse 11.
The Chronicler sharpens the portrayal of David in relationship with Moses, most significantly in this verse, where at three points the Chronicler eliminated or altered allusions to the exodus themes found in Samuel–Kings.
Solomon's prayer of dedication (6:12–42).
The second part of the chapter contains a prayer of dedication that consists of seven petitions concerning a variety of predicaments in which Israel may find itself, including defeat by enemies (verses 24–25), drought (verses 26–27), open pitched battles (verses 34–35) or exile (verses 36–39); in each case Solomon asks God to be attentive to the prayers of His people from His heavenly dwelling.
Verses 32–33 concern with foreigners, whose significance to the people of Israel would be increased in the time between the writing of the books of Kings and that of the books of Chronicles. The theme of Babylonian Exile in 1 Kings 8 had developed into the theme of diaspora (for examples, in Babylon and Egypt) in the Chronicler's time, so the phrase 'and grant them compassion in the sight of their captors, so that they may have compassion on them' in 1 Kings 8:50 is omitted in the Chronicles here, although interestingly it is taken up in the letter written by Hezekiah to the rest of the northern kingdom
(2 Chronicles 30:9). In contrast to 1 Kings 8 the Chronicler omits the reference to the Exodus and therefore to Moses in verse 40 (as in verse 11), but ends in a more positive tone by taking and changing Psalm 132:8–10 to enhance the importance of the ark and the anointed (such as the terms 'salvation' in place of 'righteousness', 'rejoice' in place of 'shout for joy', and goodness).
"For Solomon had made a brasen scaffold, of five cubits long, and five cubits broad, and three cubits high, and had set it in the midst of the court: and upon it he stood, and kneeled down upon his knees before all the congregation of Israel, and spread forth his hands toward heaven."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67224319 |
672259 | Batalin–Vilkovisky formalism | Generalization of the BRST formalism
In theoretical physics, the Batalin–Vilkovisky (BV) formalism (named for Igor Batalin and Grigori Vilkovisky) was developed as a method for determining the ghost structure for Lagrangian gauge theories, such as gravity and supergravity, whose corresponding Hamiltonian formulation has constraints not related to a Lie algebra (i.e., the role of the Lie algebra structure constants is played by more general structure functions). The BV formalism, based on an action that contains both fields and "antifields", can be thought of as a vast generalization of the original BRST formalism for pure Yang–Mills theory to an arbitrary Lagrangian gauge theory. Other names for the Batalin–Vilkovisky formalism are field-antifield formalism, Lagrangian BRST formalism, or BV–BRST formalism. It should not be confused with the Batalin–Fradkin–Vilkovisky (BFV) formalism, which is the Hamiltonian counterpart.
Batalin–Vilkovisky algebras.
In mathematics, a Batalin–Vilkovisky algebra is a graded supercommutative algebra (with a unit 1) with a second-order nilpotent operator Δ of degree −1. More precisely, it satisfies the identities
formula_0 (the product is associative),
formula_1 (the product is (super-)commutative),
formula_2 (the product has degree 0),
formula_3 (Δ has degree −1),
formula_4 (Δ is nilpotent),
and the second-order condition
formula_5
One often also requires normalization:
formula_6
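A standard small example (added here for illustration; it is consistent with the Darboux description given later in this article) is the algebra of polynomials in one even variable x of degree 0 and one odd variable θ of degree 1, with Δ the mixed second derivative:

    A = \mathbb{R}[x] \oplus \theta\,\mathbb{R}[x], \qquad |x| = 0, \quad |\theta| = 1, \qquad \Delta = \frac{\partial}{\partial x}\frac{\partial}{\partial \theta}.

Here Δ² = 0 since ∂²/∂θ² = 0; Δ annihilates 1, x and θ, while Δ(xθ) = 1, so Δ has degree −1 and is normalized. Being a second-order differential operator, it satisfies the seven-term identity above, and its Gerstenhaber bracket (defined in the next subsection) gives (x, θ) = 1.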
Antibracket.
A Batalin–Vilkovisky algebra becomes a Gerstenhaber algebra if one defines the Gerstenhaber bracket by
formula_7
Other names for the Gerstenhaber bracket are Buttin bracket, antibracket, or odd Poisson bracket. The antibracket satisfies
formula_8 (the antibracket has degree −1),
formula_9 (skewsymmetry),
formula_10 (the Jacobi identity),
formula_11 (the Poisson property, i.e. the Leibniz rule).
Odd Laplacian.
The normalized operator is defined as
formula_12
It is often called the odd Laplacian, in particular in the context of odd Poisson geometry. It "differentiates" the antibracket:
formula_13
The square formula_15 of the normalized formula_14 operator is a Hamiltonian vector field with odd Hamiltonian Δ(1),
formula_16 (the Leibniz rule)
which is also known as the modular vector field. Assuming normalization Δ(1)=0, the odd Laplacian formula_17 is just the Δ operator, and the modular vector field formula_18 vanishes.
Compact formulation in terms of nested commutators.
If one introduces the left multiplication operator formula_19 as
formula_20
and the supercommutator [,] as
formula_21
for two arbitrary operators "S" and "T", then the definition of the antibracket may be written compactly as
formula_22
and the second order condition for Δ may be written compactly as
formula_23 (The Δ operator is of second order)
where it is understood that the pertinent operator acts on the unit element 1. In other words, formula_24 is a first-order (affine) operator, and formula_25 is a zeroth-order operator.
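As a check (a short derivation added here for illustration), expanding the nested commutators and using the supercommutativity of the product reproduces the antibracket defined above. Since |Δ| = −1 is odd, [Δ, L_a] = ΔL_a − (−1)^{|a|}L_aΔ, so

    [\Delta, L_a]\,1 = \Delta(a) - (-1)^{|a|}\,a\,\Delta(1),
    [[\Delta, L_a], L_b]\,1 = \Delta(ab) - (-1)^{|a|}\,a\,\Delta(b) - \Delta(a)\,b + (-1)^{|a|+|b|}\,ab\,\Delta(1),

and multiplying the last line by (−1)^{|a|} and rewriting (−1)^{|b|} ab Δ(1) as a Δ(1) b recovers the four-term expression for (a, b) given in the Antibracket subsection.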
Master equation.
The classical master equation for an even degree element "S" (called the action) of a Batalin–Vilkovisky algebra is the equation
formula_26
The quantum master equation for an even degree element "W" of a Batalin–Vilkovisky algebra is the equation
formula_27
or equivalently,
formula_28
Assuming normalization Δ(1) = 0, the quantum master equation reads
formula_29
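For orientation (an added sketch, assuming the normalization Δ(1) = 0), the equivalence of the exponential and bracket forms of the quantum master equation follows from the second-order property of Δ. For an even element "W" one finds by induction

    \Delta(W^{n}) = n\,W^{n-1}\,\Delta(W) + \binom{n}{2}\,W^{n-2}\,(W,W),

and summing the exponential series gives

    \Delta\,e^{\frac{i}{\hbar}W} = \left(\frac{i}{\hbar}\,\Delta(W) - \frac{1}{2\hbar^{2}}\,(W,W)\right) e^{\frac{i}{\hbar}W},

which vanishes precisely when (1/2)(W,W) = iħΔ(W).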
Generalized BV algebras.
In the definition of a generalized BV algebra, one drops the second-order assumption for Δ. One may then define an infinite hierarchy of higher brackets of degree −1
formula_30
The brackets are (graded) symmetric
formula_31 (Symmetric brackets)
where formula_32 is a permutation, and formula_33 is the Koszul sign of the permutation
formula_34.
The brackets constitute a homotopy Lie algebra, also known as an formula_35 algebra, which satisfies generalized Jacobi identities
formula_36 (Generalized Jacobi identities)
The first few brackets are:
formula_37 (the zero-bracket),
formula_38 (the one-bracket),
formula_39 (the two-bracket),
formula_40 (the three-bracket),
formula_41
In particular, the one-bracket formula_42 is the odd Laplacian, and the two-bracket formula_43 is the antibracket up to a sign. The first few generalized Jacobi identities are:
formula_44 (formula_45 is formula_46-closed),
formula_47 vanishes (the modular vector field formula_48 is the Hamiltonian vector field with odd Hamiltonian formula_45),
formula_49 (the odd Laplacian formula_46 differentiates the two-bracket),
formula_50 (the generalized Jacobi identity),
where the Jacobiator for the two-bracket formula_51 is defined as
formula_52
BV "n"-algebras.
The Δ operator is by definition of "n"th order if and only if the ("n" + 1)-bracket formula_53 vanishes. In that case, one speaks of a BV "n"-algebra. Thus a BV 2-algebra is by definition just a BV algebra. The Jacobiator formula_54 vanishes within a BV algebra, which means that the antibracket here satisfies the Jacobi identity. A BV 1-algebra that satisfies the normalization Δ(1) = 0 is the same as a differential graded algebra (DGA) with differential Δ. A BV 1-algebra has vanishing antibracket.
Odd Poisson manifold with volume density.
Let there be given an (n|n) supermanifold with an odd Poisson bi-vector formula_55 and a Berezin volume density formula_56, also known as a P-structure and an S-structure, respectively. Let the local coordinates be called formula_57. Let the derivatives formula_58 and
formula_59
denote the left and right derivatives of a function "f" with respect to formula_57, respectively. The odd Poisson bi-vector formula_55 satisfies
formula_60 (the odd Poisson structure has degree −1),
formula_61 (skewsymmetry),
formula_62 (the Jacobi identity).
Under change of coordinates formula_63 the odd Poisson bi-vector formula_55
and Berezin volume density formula_56 transform as
formula_64
formula_65
where "sdet" denotes the superdeterminant, also known as the Berezinian.
Then the odd Poisson bracket is defined as
formula_66
A Hamiltonian vector field formula_67 with Hamiltonian "f" can be defined as
formula_68
The (super-)divergence of a vector field formula_69 is defined as
formula_70
Recall that Hamiltonian vector fields are divergence-free in even Poisson geometry because of Liouville's Theorem.
In odd Poisson geometry the corresponding statement does not hold. The odd Laplacian formula_71 measures the failure of Liouville's Theorem. Up to a sign factor, it is defined as one half the divergence of the corresponding Hamiltonian vector field,
formula_72
The odd Poisson structure formula_55 and Berezin volume density formula_56 are said to be compatible if the modular vector field formula_18 vanishes. In that case the odd Laplacian formula_71 is a BV Δ operator with normalization Δ(1)=0. The corresponding BV algebra is the algebra of functions.
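As a small worked example (added here; it anticipates the Darboux description in the next section), take a single coordinate pair x = (q, p) with |q| = 0, |p| = 1, the Darboux bi-vector π^{qp} = 1 = −π^{pq}, and a constant density ρ = 1. Then

    \Delta_{\rho}(f) = \frac{(-1)^{|q|}}{2}\,\partial_{q}\big(\pi^{qp}\,\partial_{p}f\big) + \frac{(-1)^{|p|}}{2}\,\partial_{p}\big(\pi^{pq}\,\partial_{q}f\big) = \frac{\partial}{\partial q}\frac{\partial}{\partial p}\,f,

so the odd Laplacian with a flat density reduces to the mixed second derivative; its square vanishes, the modular vector field is zero, and the structures are compatible.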
Odd symplectic manifold.
If the odd Poisson bi-vector formula_55 is invertible, one has an odd symplectic manifold. In that case, there exists an odd Darboux Theorem. That is, there exist local Darboux coordinates, i.e., coordinates formula_73, and momenta formula_74, of degree
formula_75
such that the odd Poisson bracket is on Darboux form
formula_76
In theoretical physics, the coordinates formula_77 and momenta formula_78 are called fields and antifields, and are typically denoted formula_79 and formula_80, respectively.
In Darboux coordinates, Khudaverdian's operator
formula_81
acts on the vector space of semidensities, and is a globally well-defined operator on the atlas of Darboux neighborhoods. Khudaverdian's formula_82 operator depends only on the P-structure. It is manifestly nilpotent, formula_83, and of degree −1. Nevertheless, it is technically not a BV Δ operator, as the vector space of semidensities has no multiplication. (The product of two semidensities is a density rather than a semidensity.) Given a fixed density formula_56, one may construct a nilpotent BV Δ operator as
formula_84
whose corresponding BV algebra is the algebra of functions, or equivalently, scalars. The odd symplectic structure formula_55 and density formula_56 are compatible if and only if Δ(1) is an odd constant. | [
{
"math_id": 0,
"text": "(ab)c = a(bc) "
},
{
"math_id": 1,
"text": "ab = (-1)^{|a||b|}ba "
},
{
"math_id": 2,
"text": "|ab| = |a| + |b| "
},
{
"math_id": 3,
"text": "|\\Delta(a)| = |a| - 1 "
},
{
"math_id": 4,
"text": "\\Delta^2 = 0 "
},
{
"math_id": 5,
"text": "\\begin{align}\n0 = & \\Delta(abc) \\\\\n&-\\Delta(ab)c-(-1)^{|a|}a\\Delta(bc)-(-1)^{(|a|+1)|b|}b\\Delta(ac)\\\\\n&+\\Delta(a)bc+(-1)^{|a|}a\\Delta(b)c+(-1)^{|a|+|b|}ab\\Delta(c)\\\\\n&-\\Delta(1)abc\n\\end{align}"
},
{
"math_id": 6,
"text": "\\Delta(1)=0 "
},
{
"math_id": 7,
"text": "(a,b) := (-1)^{\\left|a\\right|}\\Delta(ab) - (-1)^{\\left|a\\right|}\\Delta(a)b - a\\Delta(b)+a\\Delta(1)b ."
},
{
"math_id": 8,
"text": "|(a,b)| = |a|+|b| - 1 "
},
{
"math_id": 9,
"text": " (a,b) = -(-1)^{(|a|+1)(|b|+1)}(b,a) "
},
{
"math_id": 10,
"text": " (-1)^{(|a|+1)(|c|+1)}(a,(b,c)) + (-1)^{(|b|+1)(|a|+1)}(b,(c,a)) + (-1)^{(|c|+1)(|b|+1)}(c,(a,b)) = 0 "
},
{
"math_id": 11,
"text": " (ab,c) = a(b,c) + (-1)^{|a||b|}b(a,c)"
},
{
"math_id": 12,
"text": " {\\Delta}_{\\rho} := \\Delta-\\Delta(1) . "
},
{
"math_id": 13,
"text": " {\\Delta}_{\\rho}(a,b) = ({\\Delta}_{\\rho}(a),b) - (-1)^{\\left|a\\right|}(a,{\\Delta}_{\\rho}(b)) "
},
{
"math_id": 14,
"text": "{\\Delta}_{\\rho}"
},
{
"math_id": 15,
"text": "{\\Delta}_{\\rho}^{2}=(\\Delta(1),\\cdot)"
},
{
"math_id": 16,
"text": " {\\Delta}_{\\rho}^{2}(ab) = {\\Delta}_{\\rho}^{2}(a)b+ a{\\Delta}_{\\rho}^{2}(b) "
},
{
"math_id": 17,
"text": " {\\Delta}_{\\rho} "
},
{
"math_id": 18,
"text": " {\\Delta}_{\\rho}^{2} "
},
{
"math_id": 19,
"text": "L_{a}"
},
{
"math_id": 20,
"text": " L_{a}(b) := ab , "
},
{
"math_id": 21,
"text": "[S,T]:=ST - (-1)^{\\left|S\\right|\\left|T\\right|}TS "
},
{
"math_id": 22,
"text": " (a,b) := (-1)^{\\left|a\\right|} [[\\Delta,L_{a}],L_{b}]1 , "
},
{
"math_id": 23,
"text": " [[[\\Delta,L_{a}],L_{b}],L_{c}]1 = 0 "
},
{
"math_id": 24,
"text": " [\\Delta,L_{a}] "
},
{
"math_id": 25,
"text": " [[\\Delta,L_{a}],L_{b}] "
},
{
"math_id": 26,
"text": "(S,S) = 0 . "
},
{
"math_id": 27,
"text": " \\Delta\\exp \\left[\\frac{i}{\\hbar}W\\right] = 0 ,"
},
{
"math_id": 28,
"text": "\\frac{1}{2}(W,W) = i\\hbar{\\Delta}_{\\rho}(W)+\\hbar^{2}\\Delta(1) . "
},
{
"math_id": 29,
"text": "\\frac{1}{2}(W,W) = i\\hbar\\Delta(W) . "
},
{
"math_id": 30,
"text": " \\Phi^{n}(a_{1},\\ldots,a_{n}) := \\underbrace{[[\\ldots[\\Delta,L_{a_{1}}],\\ldots],L_{a_{n}}]}_{n~{\\rm nested~commutators}}1 . "
},
{
"math_id": 31,
"text": " \\Phi^{n}(a_{\\pi(1)},\\ldots,a_{\\pi(n)}) = (-1)^{\\left|a_{\\pi}\\right|}\\Phi^{n}(a_{1},\\ldots, a_{n}) "
},
{
"math_id": 32,
"text": "\\pi\\in S_{n}"
},
{
"math_id": 33,
"text": "(-1)^{\\left|a_{\\pi}\\right|}"
},
{
"math_id": 34,
"text": "a_{\\pi(1)}\\ldots a_{\\pi(n)} = (-1)^{\\left|a_{\\pi}\\right|}a_{1}\\ldots a_{n}"
},
{
"math_id": 35,
"text": "L_{\\infty}"
},
{
"math_id": 36,
"text": " \\sum_{k=0}^n \\frac{1}{k!(n\\!-\\!k)!}\\sum_{\\pi\\in S_{n}}(-1)^{\\left|a_{\\pi}\\right|}\\Phi^{n-k+1}\\left(\\Phi^{k}(a_{\\pi(1)}, \\ldots, a_{\\pi(k)}), a_{\\pi(k+1)}, \\ldots, a_{\\pi(n)}\\right) = 0. "
},
{
"math_id": 37,
"text": " \\Phi^{0} := \\Delta(1) "
},
{
"math_id": 38,
"text": " \\Phi^{1}(a) := [\\Delta,L_{a}]1 = \\Delta(a) - \\Delta(1)a =: {\\Delta}_{\\rho}(a) "
},
{
"math_id": 39,
"text": " \\Phi^{2}(a,b) := [[\\Delta,L_{a}],L_{b}]1 =: (-1)^{\\left|a\\right|}(a,b) "
},
{
"math_id": 40,
"text": " \\Phi^{3}(a,b,c) := [[[\\Delta,L_{a}],L_{b}],L_{c}]1 "
},
{
"math_id": 41,
"text": " \\vdots "
},
{
"math_id": 42,
"text": " \\Phi^{1}={\\Delta}_{\\rho}"
},
{
"math_id": 43,
"text": " \\Phi^{2}"
},
{
"math_id": 44,
"text": " \\Phi^{1}(\\Phi^0) = 0 "
},
{
"math_id": 45,
"text": "\\Delta(1)"
},
{
"math_id": 46,
"text": "\\Delta_\\rho"
},
{
"math_id": 47,
"text": " \\Phi^{2}(\\Phi^{0},a)+\\Phi^{1}\\left(\\Phi^{1}(a)\\right)"
},
{
"math_id": 48,
"text": "{\\Delta}_{\\rho}^{2}"
},
{
"math_id": 49,
"text": " \\Phi^{3}(\\Phi^{0},a,b) + \\Phi^{2}\\left(\\Phi^{1}(a),b\\right)+(-1)^{|a|}\\Phi^{2}\\left(a,\\Phi^{1}(b)\\right) +\\Phi^{1}\\left(\\Phi^{2}(a,b)\\right) = 0 "
},
{
"math_id": 50,
"text": " \\Phi^{4}(\\Phi^{0},a,b,c) + {\\rm Jac}(a,b,c)+ \\Phi^{1}\\left(\\Phi^{3}(a,b,c)\\right) + \\Phi^{3}\\left(\\Phi^{1}(a),b,c\\right) + (-1)^{\\left|a\\right|}\\Phi^{3}\\left(a,\\Phi^{1}(b),c\\right) +(-1)^{\\left|a\\right|+\\left|b\\right|}\\Phi^{3}\\left(a,b,\\Phi^{1}(c)\\right) = 0 "
},
{
"math_id": 51,
"text": "\\Phi^{2}"
},
{
"math_id": 52,
"text": " {\\rm Jac}(a_{1},a_{2},a_{3}) := \n\\frac{1}{2} \\sum_{\\pi\\in S_{3}}(-1)^{\\left|a_{\\pi}\\right|}\n\\Phi^{2}\\left(\\Phi^{2}(a_{\\pi(1)},a_{\\pi(2)}),a_{\\pi(3)}\\right) . "
},
{
"math_id": 53,
"text": " \\Phi^{n+1} "
},
{
"math_id": 54,
"text": " {\\rm Jac}(a,b,c)=0 "
},
{
"math_id": 55,
"text": " \\pi^{ij}"
},
{
"math_id": 56,
"text": "\\rho"
},
{
"math_id": 57,
"text": "x^{i}"
},
{
"math_id": 58,
"text": " \\partial_{i}f "
},
{
"math_id": 59,
"text": " f\\stackrel{\\leftarrow}{\\partial}_{i}:=(-1)^{\\left|x^{i}\\right|(|f|+1)}\\partial_{i}f "
},
{
"math_id": 60,
"text": " \\left|\\pi^{ij}\\right| = \\left|x^{i}\\right| + \\left|x^{j}\\right| -1 "
},
{
"math_id": 61,
"text": " \\pi^{ji} = -(-1)^{(\\left|x^{i}\\right|+1)(\\left|x^{j}\\right|+1)} \\pi^{ij} "
},
{
"math_id": 62,
"text": " (-1)^{(\\left|x^{i}\\right|+1)(\\left|x^{k}\\right|+1)}\\pi^{i\\ell}\\partial_{\\ell}\\pi^{jk} + {\\rm cyclic}(i,j,k) = 0 "
},
{
"math_id": 63,
"text": "x^{i} \\to x^{\\prime i} "
},
{
"math_id": 64,
"text": " \\pi^{\\prime k\\ell} = x^{\\prime k}\\stackrel{\\leftarrow}{\\partial}_{i} \\pi^{ij} \\partial_{j}x^{\\prime \\ell} "
},
{
"math_id": 65,
"text": "\\rho^{\\prime} = \\rho/{\\rm sdet}(\\partial_{i}x^{\\prime j}) "
},
{
"math_id": 66,
"text": " (f,g) := f\\stackrel{\\leftarrow}{\\partial}_{i}\\pi^{ij}\\partial_{j}g . "
},
{
"math_id": 67,
"text": " X_{f}"
},
{
"math_id": 68,
"text": " X_{f}[g] := (f,g) ."
},
{
"math_id": 69,
"text": " X=X^{i}\\partial_{i} "
},
{
"math_id": 70,
"text": " {\\rm div}_{\\rho} X := \\frac{(-1)^{\\left|x^{i}\\right|(|X|+1)}}{\\rho} \\partial_{i}(\\rho X^{i}) "
},
{
"math_id": 71,
"text": " {\\Delta}_{\\rho}"
},
{
"math_id": 72,
"text": " {\\Delta}_{\\rho}(f) := \\frac{(-1)^{\\left|f\\right|}}{2}{\\rm div}_{\\rho} X_{f} = \\frac{(-1)^{\\left|x^{i}\\right|}}{2\\rho}\\partial_{i}\\rho \\pi^{ij}\\partial_{j}f."
},
{
"math_id": 73,
"text": " q^{1}, \\ldots, q^{n} "
},
{
"math_id": 74,
"text": " p_{1},\\ldots, p_{n} "
},
{
"math_id": 75,
"text": " \\left|q^{i}\\right|+\\left|p_{i}\\right|=1, "
},
{
"math_id": 76,
"text": " (q^{i},p_{j}) = \\delta^{i}_{j} . "
},
{
"math_id": 77,
"text": "q^{i} "
},
{
"math_id": 78,
"text": "p_{j} "
},
{
"math_id": 79,
"text": "\\phi^{i} "
},
{
"math_id": 80,
"text": "\\phi^{*}_{j} "
},
{
"math_id": 81,
"text": "\\Delta_{\\pi} := (-1)^{\\left|q^{i}\\right|}\\frac{\\partial}{\\partial q^{i}}\\frac{\\partial}{\\partial p_{i}} "
},
{
"math_id": 82,
"text": "\\Delta_{\\pi}"
},
{
"math_id": 83,
"text": "\\Delta_{\\pi}^{2}=0"
},
{
"math_id": 84,
"text": " \\Delta(f) :=\\frac{1}{\\sqrt{\\rho}}\\Delta_{\\pi}(\\sqrt{\\rho}f),"
}
]
| https://en.wikipedia.org/wiki?curid=672259 |
67227 | A Brief History of Time | 1988 book by Stephen Hawking
A Brief History of Time: From the Big Bang to Black Holes is a book on theoretical cosmology by the physicist Stephen Hawking. It was first published in 1988. Hawking wrote the book for readers who had no prior knowledge of physics.
In "A Brief History of Time", Hawking writes in non-technical terms about the structure, origin, development and eventual fate of the Universe, which is the object of study of astronomy and modern physics. He talks about basic concepts like space and time, basic building blocks that make up the Universe (such as quarks) and the fundamental forces that govern it (such as gravity). He writes about cosmological phenomena such as the Big Bang and black holes. He discusses two major theories, general relativity and quantum mechanics, that modern scientists use to describe the Universe. Finally, he talks about the search for a unifying theory that describes everything in the Universe in a coherent manner.
The book became a bestseller and has sold more than 25 million copies.
Publication.
Early in 1983, Hawking first approached Simon Mitton, the editor in charge of astronomy books at Cambridge University Press, with his ideas for a popular book on cosmology. Mitton was doubtful about all the equations in the draft manuscript, which he felt would put off the buyers in airport bookshops that Hawking wished to reach. With some difficulty, he persuaded Hawking to drop all but one equation. The author himself notes in the book's acknowledgements that he was warned that for every equation in the book, the readership would be halved, hence it includes only a single equation: formula_0. The book does employ a number of complex models, diagrams, and other illustrations to detail some of the concepts that it explores.
Contents.
In "A Brief History of Time", Stephen Hawking explains a range of subjects in cosmology, including the Big Bang, black holes and light cones, to the non-specialist reader. His main goal is to give an overview of the subject, but he also attempts to explain some complex mathematics. In the 1996 edition of the book and subsequent editions, Hawking discusses the possibility of time travel and wormholes and explores the possibility of having a Universe without a quantum singularity at the beginning of time. The 2017 edition of the book contained twelve chapters, whose contents are summarized below.
Chapter 1: Our Picture of the Universe.
In the first chapter, Hawking discusses the history of astronomical studies, particularly ancient Greek philosopher Aristotle's conclusions about spherical Earth and a circular geocentric model of the Universe, later elaborated upon by the second-century Greek astronomer Ptolemy. Hawking then depicts the rejection of the Aristotelian and Ptolemaic model and the gradual development of the currently accepted heliocentric model of the Solar System in the 16th, 17th, and 18th centuries, first proposed by the Polish priest Nicholas Copernicus in 1514, validated a century later by Italian scientist Galileo Galilei and German scientist Johannes Kepler (who proposed an elliptical orbit model instead of a circular one), and further supported mathematically by English scientist Isaac Newton in his 1687 book on gravity, "Principia Mathematica".
In this chapter, Hawking also covers how the topic of the origin of the Universe and time was studied and debated over the centuries: the perennial existence of the Universe hypothesised by Aristotle and other early philosophers was opposed by St. Augustine and other theologians' belief in its creation at a specific time in the past, where time is a concept that was born with the creation of the Universe. In the modern age, German philosopher Immanuel Kant argued again that time had no beginning. In 1929, American astronomer Edwin Hubble's discovery of the expanding Universe implied that between ten and twenty billion years ago, the entire Universe was contained in one singular extremely dense place. This discovery brought the concept of the beginning of the Universe within the province of science. Currently scientists use Albert Einstein's general theory of relativity and quantum mechanics to partially describe the workings of the Universe, while still looking for a complete Grand Unified Theory that would describe everything in the Universe.
Chapter 2: Space and Time.
In this chapter, Hawking describes the development of scientific thought regarding the nature of space and time. He first describes the Aristotelian idea that the naturally preferred state of a body is to be at rest, and that it moves only when acted on by a force, implying that heavier objects will fall faster. However, Italian scientist Galileo Galilei experimentally proved Aristotle's theory wrong by observing the motion of objects of different weights and concluding that all objects would fall at the same rate. This eventually led to English scientist Isaac Newton's laws of motion and gravity. However, Newton's laws implied that there is no such thing as an absolute state of rest or absolute space, as believed by Aristotle: whether an object is 'at rest' or 'in motion' depends on the inertial frame of reference of the observer.
Hawking then describes Aristotle and Newton's belief in absolute time, i.e. time can be measured accurately regardless of the state of motion of the observer. However, Hawking writes that this commonsense notion does not work at or near the speed of light. He mentions Danish scientist Ole Rømer's discovery that light travels at a very high but finite speed through his observations of Jupiter and one of its moons Io as well as British scientist James Clerk Maxwell's equations on electromagnetism which showed that light travels in waves moving at a fixed speed. Since the notion of absolute rest was abandoned in Newtonian mechanics, Maxwell and many other physicists argued that light must travel through a hypothetical fluid called aether, its speed being relative to that of aether. This was later disproved by the Michelson–Morley experiment, showing that the speed of light always remains constant regardless of the motion of the observer. Einstein and Henri Poincaré later argued that there is no need for aether to explain the motion of light, assuming that there is no absolute time. The special theory of relativity is based on this, arguing that light travels with a finite speed no matter what the speed of the observer is.
Mass and energy are related by the equation formula_0, which implies that an infinite amount of energy would be needed for any object with mass to reach the speed of light (3×10⁸ m/s). A new way of defining the metre using the speed of light was developed. "Events" can also be described by using light cones, a spacetime graphical representation that restricts what events are allowed and what are not, based on the past and the future light cones. A 4-dimensional spacetime is also described, in which 'space' and 'time' are intrinsically linked. The motion of an object through space inevitably affects the way in which it experiences time.
Einstein's general theory of relativity explains how the path of a ray of light is affected by 'gravity', which according to Einstein is an illusion caused by the warping of spacetime, in contrast to Newton's view, which described gravity as a force that matter exerts on other matter. In curved spacetime, light always travels along a straight path in the 4-dimensional "spacetime", but may appear to curve in 3-dimensional space due to gravitational effects; these straight-line paths are geodesics. The twin paradox, a thought experiment in special relativity involving identical twins, shows that the twins can age differently if they move at different speeds relative to each other, or even if they live in locations with different spacetime curvature. Special relativity is set in fixed arenas of space and time where events take place, whereas general relativity is dynamic: matter and energy can change spacetime curvature, which gives rise to a dynamic, expanding Universe. Hawking and Roger Penrose worked upon this and later proved, using general relativity, that if the Universe had a beginning a finite time ago, then it might also end at a finite time in the future.
Chapter 3: The Expanding Universe.
In this chapter, Hawking first describes how physicists and astronomers calculated the relative distance of stars from the Earth. In the 18th century, Sir William Herschel confirmed the positions and distances of many stars in the night sky. In 1924, Edwin Hubble discovered a method to measure the distance using the brightness of Cepheid variable stars as viewed from Earth. The luminosity, brightness, and distance of these stars are related by a simple mathematical formula. Using all these, he calculated distances of nine different galaxies. We live in a fairly typical spiral galaxy, containing vast numbers of stars.
The stars are very far away from us, so we can only observe their one characteristic feature, their light. When this light is passed through a prism, it gives rise to a spectrum. Every star has its own spectrum, and since each element has its own unique spectrum, we can measure a star's light spectrum to determine its chemical composition. The thermal spectrum of a star is used to determine its temperature. In 1920, when scientists were examining the spectra of different galaxies, they found that some of the characteristic lines of the star spectrum were shifted towards the red end of the spectrum. The implications of this phenomenon were explained by the Doppler effect, and it was clear that many galaxies were moving away from us.
It was assumed that, since some galaxies are red shifted, some galaxies would also be blue shifted. However, redshifted galaxies far outnumbered blueshifted galaxies. Hubble found that the amount of redshift is directly proportional to relative distance. From this, he determined that the Universe is expanding and had a beginning. Despite this, the concept of a static Universe persisted into the 20th century. Einstein was so sure of a static Universe that he developed the 'cosmological constant' and introduced 'anti-gravity' forces to allow a universe of infinite age to exist. Moreover, many astronomers also tried to avoid the implications of general relativity and stuck with their static Universe, with one especially notable exception, the Russian physicist Alexander Friedmann.
Friedmann made two very simple assumptions: the Universe looks identical in every direction that we look in, i.e. isotropy, and that it would look the same from wherever else we observed it, i.e. homogeneity. His results showed that the Universe is non-static. His assumptions were later supported when two physicists at Bell Labs, Arno Penzias and Robert Wilson, found unexpected microwave radiation, not only from one particular part of the sky but from everywhere, and in nearly the same amount. Thus Friedmann's first assumption was proved to be true.
At around the same time, Robert H. Dicke and Jim Peebles were also working on microwave radiation. They argued that they should be able to see the glow of the early Universe as background microwave radiation. Wilson and Penzias had already done this, so they were awarded the Nobel Prize in 1978. In addition, our place in the Universe is not exceptional, so we should see the Universe as approximately the same from any other part of space, which supports Friedmann's second assumption. His work remained largely unknown until similar models were made by Howard Robertson and Arthur Walker.
Friedmann's work gave rise to three different models for the evolution of the Universe. In the first, the Universe expands for a given amount of time; if the expansion rate is too low to overcome the gravitational pull of the Universe's density, the expansion eventually reverses and the Universe collapses at a later stage. In the second, the expansion rate and the density of the Universe balance exactly, so the Universe expands ever more slowly, approaching a somewhat static state. In the third, the density of the Universe is less than the critical amount required to halt the expansion, and the Universe continues to expand forever.
The first model depicts the space of the Universe as curved inwards. In the second model, space is flat, and in the third model space has negative, 'saddle-shaped' curvature. Calculations suggest that the current expansion rate is greater than the critical rate set by the density of the Universe, even when dark matter and all stellar masses are included. The first model included the beginning of the Universe as a Big Bang from a state of infinite density and zero volume known as a 'singularity', a point where the general theory of relativity (on which Friedmann's solutions are based) also breaks down.
This concept of the beginning of time (proposed by the Belgian Catholic priest Georges Lemaître) seemed originally to be motivated by religious beliefs, because it supported the biblical claim of the universe having a beginning in time instead of being eternal. So a new theory, the "steady state theory" of Hermann Bondi, Thomas Gold, and Fred Hoyle, was introduced to compete with the Big Bang theory. Its predictions also matched the observed structure of the Universe. However, the facts that there are far fewer radio-wave sources near us than in the distant Universe, and that there were many more radio sources in the past than at present, contradicted the steady state picture and resulted in the failure of this theory and the universal acceptance of the Big Bang theory. Evgeny Lifshitz and Isaak Markovich Khalatnikov also tried to find an alternative to the Big Bang theory but likewise failed.
Roger Penrose used light cones and general relativity to prove that a collapsing star could result in a region of zero size and infinite density and curvature, called a black hole. Hawking and Penrose proved together that the Universe should have arisen from a singularity, a conclusion Hawking himself later argued could be avoided once quantum effects are taken into account.
Chapter 4: The Uncertainty Principle.
In this chapter, Hawking first discusses nineteenth-century French mathematician Laplace's strong belief in scientific determinism, by which scientific laws would eventually be able to predict the future of the Universe with complete accuracy. He then discusses how calculations by the British scientists Lord Rayleigh and James Jeans implied that a hot body such as a star should radiate energy at an infinite rate, a problem resolved in 1900 by the German scientist Max Planck, who suggested that energy must radiate in small, finite packets called quanta.
Hawking then discusses the uncertainty principle formulated by German scientist Werner Heisenberg, according to which the speed and the position of a particle cannot both be precisely known, due to Planck's quantum hypothesis: increasing the accuracy in measuring its speed will decrease the certainty of its position, and vice versa. This disproved Laplace's idea of a completely deterministic theory of the universe. Hawking then describes the eventual development of quantum mechanics by Heisenberg, Austrian physicist Erwin Schrödinger and English physicist Paul Dirac in the 1920s, a theory which introduced an irreducible element of unpredictability into science; despite Albert Einstein's strong objections, it has proven very successful in describing the universe except for gravity and large-scale structure.
Hawking then discusses how Heisenberg's uncertainty principle implies the wave–particle duality behaviour of light (and particles in general).
He then describes the phenomenon of interference where multiple light waves interfere with each other to give rise to a single light wave with properties different from those of the component waves, as well as the interference within particles, exemplified by the two-slit experiment. Hawking writes how interference refined our understanding of the structure of atoms, the building blocks of matter. While Danish scientist Niels Bohr's theory only partially solved the problem of collapsing electrons, quantum mechanics completely resolved it. According to Hawking, American scientist Richard Feynman's sum over histories is a nice way of visualizing the wave-particle duality. Finally, Hawking mentions that Einstein's general theory of relativity is a classical, non-quantum theory which ignores the uncertainty principle and that it has to be reconciled with quantum theory in situations where gravity is very strong, such as black holes and the Big Bang.
Chapter 5: Elementary Particles and Forces of Nature.
In this chapter, Hawking traces the history of investigation about the nature of matter: Aristotle's four elements, Democritus's notion of indivisible atoms, John Dalton's ideas about atoms combining to form molecules, J. J. Thomson's discovery of electrons inside atoms, Ernest Rutherford's discovery of atomic nucleus and protons, James Chadwick's discovery of neutrons and finally Murray Gell-Mann's work on even smaller quarks which make up protons and neutrons. Hawking then discusses the six different "flavors" (up, down, strange, charm, bottom, and top) and three different "colors" of quarks (red, green, and blue). Later in the chapter he discusses anti-quarks, which are outnumbered by quarks due to the expansion and cooling of the Universe.
Hawking then discusses the "spin" property of particles, which determines what a particle looks like from different directions. Hawking then discusses two groups of particles in the Universe based on their spin: fermions and bosons. Fermions, with a spin of 1/2, follow the "Pauli exclusion principle", which states that they cannot share the same quantum state (for example, two "spin up" protons cannot occupy the same location in space). Without this rule, complex structures could not exist.
Bosons, the force-carrying particles with a spin of 0, 1, or 2, do not follow the exclusion principle. Hawking gives the examples of "virtual gravitons" and "virtual photons". Virtual gravitons, with a spin of 2, carry the force of gravity. Virtual photons, with a spin of 1, carry the electromagnetic force. Hawking then discusses the weak nuclear force (responsible for radioactivity and affecting mainly fermions) and the strong nuclear force, carried by the particle called the gluon, which binds quarks together into hadrons, usually neutrons and protons, and also binds neutrons and protons together into atomic nuclei. Hawking then writes about the phenomenon called color confinement, which prevents the observation of isolated quarks and gluons (except at extremely high temperature), as they remain confined within hadrons.
Hawking writes that at extremely high temperature, the electromagnetic force and weak nuclear force behave as a single electroweak force, giving rise to the speculation that at even higher temperatures, the electroweak force and strong nuclear force would also behave as a single force. Theories which attempt to describe the behaviour of this "combined" force are called Grand Unified Theories, which may help us explain many of the mysteries of physics that scientists have yet to solve.
Chapter 6: Black Holes.
In this chapter, Hawking discusses black holes, regions of spacetime where extremely strong gravity prevents everything, including light, from escaping. Hawking describes how most black holes are formed during the collapse of massive stars (at least 25 times heavier than the Sun) approaching the end of their lives. He writes about the event horizon, the black hole's boundary from which no particle can escape to the rest of spacetime. Hawking then discusses non-rotating black holes with spherical symmetry and rotating ones with axisymmetry. Hawking then describes how astronomers discover a black hole not directly, but indirectly, by observing with special telescopes the powerful X-rays emitted as it consumes a nearby star. Hawking ends the chapter by mentioning his famous 1974 bet with American physicist Kip Thorne, in which Hawking wagered that the X-ray source Cygnus X-1 did not contain a black hole. Hawking lost the bet, as new evidence proved that Cygnus X-1 was indeed a black hole.
Chapter 7: Black Holes Ain't So Black.
This chapter discusses an aspect of black holes' behavior that Stephen Hawking discovered in the 1970s. According to earlier theories, black holes can only become larger, and never smaller, because nothing which enters a black hole can come out. However, in 1974, Hawking published a new theory which argued that black holes can "leak" radiation. He imagined what might happen if a pair of virtual particles appeared near the edge of a black hole. Virtual particles briefly 'borrow' energy from spacetime itself, then annihilate with each other, returning the borrowed energy and ceasing to exist. However, at the edge of a black hole, one virtual particle might be trapped by the black hole while the other escapes. Because of the second law of thermodynamics, particles are 'forbidden' from taking energy from the vacuum. Thus, the escaping particle takes energy from the black hole instead of from the vacuum, and escapes from the black hole as Hawking radiation.
According to Hawking, black holes must very slowly shrink over time and eventually "evaporate" because of this radiation, rather than continue existing forever as scientists had previously believed.
Chapter 8: The Origin and Fate of the Universe.
The beginning and the end of the universe are discussed in this chapter.
Most scientists agree that the Universe began in an expansion called the "Big Bang". At the start of the Big Bang, the Universe had an extremely high temperature, which prevented the formation of complex structures like stars, or even very simple ones like atoms. During the Big Bang, a phenomenon called "inflation" took place, in which the Universe briefly expanded ("inflated") to a much larger size. Inflation explains some characteristics of the Universe that had previously greatly confused researchers. After inflation, the universe continued to expand at a slower pace. It became much colder, eventually allowing for the formation of such structures.
Hawking also discusses how the Universe might have appeared differently if it grew in size slower or faster than it actually has. For example, if the Universe expanded too slowly, it would collapse, and there would not be enough time for life to form. If the Universe expanded too quickly, it would have become almost empty.
Hawking ultimately proposes the conclusion that the universe might be finite, but boundless. In other words, it may have no beginning nor ending in time, but merely exist with a finite amount of matter and energy.
The concept of quantum gravity is also discussed in this chapter.
Chapter 9: The Arrow of Time.
In this chapter Hawking talks about why "real time", as Hawking calls time as humans observe and experience it (in contrast to "imaginary time", which Hawking claims is inherent to the laws of science) seems to have a certain direction, notably from the past towards the future. Hawking then discusses three "arrows of time" which, in his view, give time this property. Hawking's first arrow of time is the thermodynamic arrow of time: the direction in which entropy (which Hawking calls disorder) increases. According to Hawking, this is why we never see the broken pieces of a cup gather themselves together to form a whole cup. Hawking's second arrow is the psychological arrow of time, whereby our subjective sense of time seems to flow in one direction, which is why we remember the past and not the future. Hawking claims that our brain measures time in a way where disorder increases in the direction of time – we never observe it working in the opposite direction. In other words, he claims that the psychological arrow of time is intertwined with the thermodynamic arrow of time. Hawking's third and final arrow of time is the cosmological arrow of time: the direction of time in which the Universe is expanding rather than contracting. According to Hawking, during a contraction phase of the universe, the thermodynamic and cosmological arrows of time would not agree.
Hawking then claims that the "no boundary proposal" for the universe implies that the universe will expand for some time before contracting back again. He goes on to argue that the no boundary proposal is what drives entropy and that it predicts the existence of a well-defined thermodynamic arrow of time if and only if the universe is expanding, as it implies that the universe must have started in a smooth and ordered state that must grow toward disorder as time advances. He argues that, because of the no boundary proposal, a contracting universe would not have a well-defined thermodynamic arrow and therefore only a Universe which is in an expansion phase can support intelligent life. Using the weak anthropic principle, Hawking goes on to argue that the thermodynamic arrow must agree with the cosmological arrow in order for either to be observed by intelligent life. This, in Hawking's view, is why humans experience these three arrows of time going in the same direction.
Chapter 10: Wormholes and Time Travel.
In this chapter, Hawking discusses whether it is possible to travel in time, i.e., into the future or the past. He shows how physicists have attempted to devise possible methods by which humans with advanced technology might travel faster than the speed of light or backwards in time, concepts which have become mainstays of science fiction. Einstein–Rosen bridges were proposed early in the history of general relativity research. These "wormholes" would appear identical to black holes from the outside, but matter which entered would be relocated to a different location in spacetime, potentially in a distant region of space, or even backwards in time. However, later research demonstrated that such a wormhole, even if it could form in the first place, would not allow any material to pass through before turning back into a regular black hole. The only way that a wormhole could theoretically remain open, and thus allow faster-than-light travel or time travel, would require the existence of exotic matter with negative energy density, which violates the energy conditions of general relativity. As such, almost all physicists agree that faster-than-light travel and travel backwards in time are not possible.
Hawking also describes his own "chronology protection conjecture", which provides a more formal explanation for why faster-than-light and backwards time travel are almost certainly impossible.
Chapter 11: The Unification of Physics.
Quantum field theory (QFT) and general relativity (GR) describe the physics of the Universe with astounding accuracy within their own domains of applicability. However, these two theories contradict each other. For example, the uncertainty principle of QFT is incompatible with GR. This contradiction, and the fact that QFT and GR do not fully explain observed phenomena, have led physicists to search for a theory of "quantum gravity" that is both internally consistent and explains observed phenomena just as well as or better than existing theories do.
Hawking is cautiously optimistic that such a unified theory of the Universe may be found soon, in spite of significant challenges. At the time the book was written, "superstring theory" had emerged as the most popular theory of quantum gravity, but this theory and related string theories were still incomplete and had yet to be proven in spite of significant effort (this remains the case as of 2021). String theory proposes that particles behave like one-dimensional "strings", rather than as dimensionless particles as they do in QFT. These strings "vibrate" in many dimensions. Instead of 3 dimensions as in QFT or 4 dimensions as in GR, superstring theory requires a total of 10 dimensions. The nature of the six "hyperspace" dimensions required by superstring theory is difficult if not impossible to study, leaving countless theoretical string theory landscapes which each describe a universe with different properties. Without a means to narrow the scope of possibilities, it is likely impossible to find practical applications for string theory.
Alternative theories of quantum gravity, such as loop quantum gravity, similarly suffer from a lack of evidence and difficulty to study.
Hawking thus proposes three possibilities: 1) there exists a complete unified theory that we will eventually find; 2) the overlapping characteristics of different landscapes will allow us to gradually explain physics more accurately with time; and 3) there is no ultimate theory. The third possibility has been sidestepped by acknowledging the limits set by the uncertainty principle. The second possibility describes what has been happening in physical sciences so far, with increasingly accurate partial theories.
Hawking believes that such refinement has a limit and that by studying the very early stages of the Universe in a laboratory setting, a complete theory of Quantum Gravity will be found in the 21st century allowing physicists to solve many of the currently unsolved problems in physics.
Conclusion.
In this final chapter, Hawking summarises the efforts made by humans through their history to understand the Universe and their place in it: starting from the belief in anthropomorphic spirits controlling nature, followed by the recognition of regular patterns in nature, and finally with the scientific advancement in recent centuries, the inner workings of the universe have become far better understood. He recalls the suggestion of the nineteenth-century French mathematician Laplace that the Universe's structure and evolution could eventually be precisely explained by a set of laws whose origin is left in God's domain. However, Hawking states that the uncertainty principle introduced by the quantum theory in the twentieth century has set limits to the predictive accuracy of future laws to be discovered.
Hawking comments that historically, the study of cosmology (the study of the origin, evolution, and end of Earth and the Universe as a whole) has been primarily motivated by a search for philosophical and religious insights, for instance, to better understand the nature of God, or even whether God exists at all. However, for Hawking, most scientists today who work on these theories approach them with mathematical calculation and empirical observation, rather than asking such philosophical questions. In his mind, the increasingly technical nature of these theories have caused modern cosmology to become increasingly divorced from philosophical discussion. Hawking nonetheless expresses hope that one day everybody would talk about these theories in order to understand the true origin and nature of the Universe, and accomplish "the ultimate triumph of human reasoning".
Editions.
The introduction, written by Carl Sagan, was removed after the first edition, as it was copyrighted by Sagan rather than by Hawking or the publisher, and the publisher did not have the right to reprint it in perpetuity. Hawking wrote his own introduction for later editions.
Film.
In 1991, Errol Morris directed a documentary film about Hawking; although they share a title, the film is a biographical study of Hawking, and not a filmed version of the book.
Apps.
"Stephen Hawking's Pocket Universe: A Brief History of Time Revisited" is based on the book. The app was developed by Preloaded for Transworld publishers, a division of the Penguin Random House group.
The app was produced in 2016. It was designed by Ben Courtney and produced by Jemma Harris and is available on iOS only.
Opera.
The Metropolitan Opera commissioned an opera to premiere in the 2015–2016 season based on Hawking's book. It was to be composed by Osvaldo Golijov with a libretto by Alberto Manguel in a production by Robert Lepage. The planned opera was changed to be about a different subject and eventually canceled completely.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E = mc^2"
}
]
| https://en.wikipedia.org/wiki?curid=67227 |
67227898 | Edmond–Ogston model | The Edmond–Ogston model is a thermodynamic model proposed by Elizabeth Edmond and Alexander George Ogston in 1968 to describe phase separation of two-component polymer mixtures in a common solvent. At the core of the model is an expression for the Helmholtz free energy formula_0
formula_1
that takes into account terms in the concentration of the polymers up to second order, and needs three virial coefficients formula_2 and formula_3 as input. Here formula_4 is the molar concentration of polymer formula_5, formula_6 is the universal gas constant, formula_7 is the absolute temperature, formula_8 is the system volume. It is possible to obtain explicit solutions for the coordinates of the critical point
formula_9,
where formula_10 represents the slope of the binodal and spinodal in the critical point. Its value can be obtained by solving a third order polynomial in formula_11,
formula_12,
which can be done analytically using Cardano's method and choosing the solution for which both formula_13 and formula_14 are positive.
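As a numerical illustration (a minimal sketch; the virial coefficients below are arbitrary hypothetical values chosen only so that formula_13 and formula_14 come out positive, not values from the original paper), the cubic can also be solved with a standard polynomial root finder:

    import numpy as np

    # Hypothetical second virial coefficients (illustrative values only).
    B11, B22, B12 = 1.0e-3, 2.0e-3, 3.0e-3

    # Solve B22*x**3 + B12*x**2 - B12*x - B11 = 0 for x = sqrt(S_c).
    roots = np.roots([B22, B12, -B12, -B11])

    for x in roots:
        if abs(x.imag) > 1e-12 or x.real <= 0.0:
            continue  # only a real, positive root can give a physical critical point
        S_c = x.real ** 2
        c1_c = 1.0 / (2.0 * (B12 * S_c - B11))   # critical concentration of polymer 1
        c2_c = 1.0 / (2.0 * (B12 / S_c - B22))   # critical concentration of polymer 2
        if c1_c > 0.0 and c2_c > 0.0:            # keep the root with both concentrations positive
            print(f"S_c = {S_c:.4g}, c1_c = {c1_c:.4g}, c2_c = {c2_c:.4g}")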
The spinodal can also be expressed analytically, and the Lambert W function plays a central role in expressing the coordinates of the binodal and the tie-lines.
The model is closely related to the Flory–Huggins model.
The model and its solutions have been generalized to mixtures with an arbitrary number of components formula_15, with formula_15 greater than or equal to 2.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F"
},
{
"math_id": 1,
"text": "\\ F = RTV(\\,c_1\\ln\\ c_1 + c_2\\ln\\ c_2 + B_{11} {c_1}^2 + B_{22} {c_2}^2 + 2 B_{12} {c_1} {c_2}) \\,"
},
{
"math_id": 2,
"text": "B_{11}, B_{12}"
},
{
"math_id": 3,
"text": "B_{22}"
},
{
"math_id": 4,
"text": "c_i"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "V"
},
{
"math_id": 9,
"text": "(c_{1,c},c_{2,c}) = (\\frac{1}{2(B_{12} \\cdot S_c-B_{11})} \\,,\\frac{1}{2(B_{12}/S_c-B_{22})} \\,)"
},
{
"math_id": 10,
"text": "-S_c"
},
{
"math_id": 11,
"text": "\\sqrt{S_c}"
},
{
"math_id": 12,
"text": "\\ B_{22} {\\sqrt{S_c}}^3 + B_{12} {\\sqrt{S_c}}^2 - B_{12} {\\sqrt{S_c}} - B_{11} = 0 \\,"
},
{
"math_id": 13,
"text": "c_{1,c}"
},
{
"math_id": 14,
"text": "c_{2,c}"
},
{
"math_id": 15,
"text": "N"
}
]
| https://en.wikipedia.org/wiki?curid=67227898 |
672291 | Type I string theory | In theoretical physics, type I string theory is one of five consistent supersymmetric string theories in ten dimensions. It is the only one whose strings are unoriented (both orientations of a string are equivalent) and the only one which perturbatively contains not only closed strings, but also open strings. The terminology of type I and type II was coined by John Henry Schwarz in 1982 to classify the three string theories known at the time.
Overview.
The classic 1976 work of Ferdinando Gliozzi, Joël Scherk and David Olive paved the way, via modular invariance, to a systematic understanding of the rules behind string spectra in cases where only closed strings are present. It did not lead to similar progress for models with open strings, despite the fact that the original discussion was based on the type I string theory.
As first proposed by Augusto Sagnotti in 1988, the type I string theory can be obtained as an orientifold of type IIB string theory, with 32 half-D9-branes added in the vacuum to cancel various anomalies giving it a gauge group of SO(32) via Chan–Paton factors.
At low energies, type I string theory is described by the type I supergravity in ten dimensions coupled to the SO(32) supersymmetric Yang–Mills theory. The discovery in 1984 by Michael Green and John H. Schwarz that anomalies in type I string theory cancel sparked the first superstring revolution. However, a key property of these models, shown by A. Sagnotti in 1992, is that in general the Green–Schwarz mechanism takes a more general form, and involves several two forms in the cancellation mechanism.
The relation between the type IIB string theory and the type I string theory has a large number of surprising consequences, both in ten and in lower dimensions, that were first displayed by the String Theory Group at the University of Rome Tor Vergata in the early 1990s. It opened the way to the construction of entire new classes of string spectra with or without supersymmetry. Joseph Polchinski's work on D-branes provided a geometrical interpretation for these results in terms of extended objects (D-brane, orientifold).
In the 1990s it was first argued by Edward Witten that type I string theory with the string coupling constant formula_0 is equivalent to the SO(32) heterotic string with the coupling formula_1. This equivalence is known as S-duality.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g"
},
{
"math_id": 1,
"text": "1/g"
}
]
| https://en.wikipedia.org/wiki?curid=672291 |
672301 | Orientifold | In theoretical physics orientifold is a generalization of the notion of orbifold, proposed by Augusto Sagnotti in 1987. The novelty is that in the case of string theory the non-trivial element(s) of the orbifold group includes the reversal of the orientation of the string. Orientifolding therefore produces unoriented strings—strings that carry no "arrow" and whose two opposite orientations are equivalent. Type I string theory is the simplest example of such a theory and can be obtained by orientifolding type IIB string theory.
In mathematical terms, given a smooth manifold formula_0, two discrete, freely acting, groups formula_1 and formula_2 and the worldsheet parity operator formula_3 (such that formula_4) an orientifold is expressed as the quotient space formula_5. If formula_2 is empty, then the quotient space is an orbifold. If formula_2 is not empty, then it is an orientifold.
Application to string theory.
In string theory formula_0 is the compact space formed by rolling up the theory's extra dimensions, specifically a six-dimensional Calabi–Yau space. The simplest viable compact spaces are those formed by modifying a torus.
Supersymmetry breaking.
The six dimensions take the form of a Calabi–Yau for reasons of partially breaking the supersymmetry of the string theory to make it more phenomenologically viable. The Type II string theories have 32 real supercharges, and compactifying on a six-dimensional torus leaves them all unbroken. Compactifying on a more general Calabi–Yau sixfold, 3/4 of the supersymmetry is removed to yield a four-dimensional theory with 8 real supercharges (N=2). To break this further to the only non-trivial phenomenologically viable supersymmetry, N=1, half of the supersymmetry generators must be projected out and this is achieved by applying the orientifold projection.
Effect on field content.
A simpler alternative to using Calabi–Yaus to break to N=2 is to use an orbifold originally formed from a torus. In such cases it is simpler to examine the symmetry group associated to the space as the group is given in the definition of the space.
The orbifold group formula_1 is restricted to those groups which act crystallographically on the torus lattice, i.e. lattice-preserving. formula_2 is generated by an involution formula_6, not to be confused with the parameter signifying position along the length of a string. The involution acts on the holomorphic 3-form formula_7 (again, not to be confused with the parity operator above) in different ways depending on the particular string formulation being used:
formula_8 or formula_9 (type IIB orientifolds),
formula_10 (type IIA orientifolds).
The locus where the orientifold action reduces to the change of the string orientation is called the orientifold plane. The involution leaves the large dimensions of space-time unaffected and so orientifolds can have O-planes of at least dimension 3. In the case of formula_8 it is possible that all spatial dimensions are left unchanged and O9 planes can exist. The orientifold plane in type I string theory is the spacetime-filling O9-plane.
More generally, one can consider orientifold O"p"-planes where the dimension "p" is counted in analogy with D"p"-branes. O-planes and D-branes can be used within the same construction and generally carry opposite tension to one another.
However, unlike D-branes, O-planes are not dynamical. They are defined entirely by the action of the involution, not by string boundary conditions as D-branes are. Both O-planes and D-branes must be taken into account when computing tadpole constraints.
The involution also acts on the complex structure (1,1)-form "J":
formula_11 (type IIB orientifolds),
formula_12 (type IIA orientifolds).
This has the result that the number of moduli parameterising the space is reduced. Since formula_6 is an involution, it has eigenvalues formula_13. The (1,1)-form basis formula_14, with dimension formula_15 (as defined by the Hodge diamond of the orientifold's cohomology), is written in such a way that each basis form has a definite sign under formula_6. Since moduli formula_16 are defined by formula_17 and "J" must transform as listed above under formula_6, only those moduli paired with 2-form basis elements of the correct parity under formula_6 survive. Therefore, formula_6 creates a splitting of the cohomology as formula_18, and the number of moduli used to describe the orientifold is, in general, less than the number of moduli used to describe the orbifold used to construct the orientifold. It is important to note that although the orientifold projects out half of the supersymmetry generators, the number of moduli it projects out can vary from space to space. In some cases formula_19, in that all of the (1,1)-forms have the same parity under the orientifold projection. In such cases the way in which the different supersymmetry content enters into the moduli behaviour is through the flux-dependent scalar potential experienced by the moduli; the N=1 case is different from the N=2 case.
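Schematically (an added illustration using the notation above, with hypothetical labels ω^(±) for the basis forms of definite parity), expanding "J" in the eigenbasis of formula_6 gives

    J = \sum_{i=1}^{h^{1,1}_{+}} A_{i}\,\omega^{(+)}_{i} + \sum_{j=1}^{h^{1,1}_{-}} B_{j}\,\omega^{(-)}_{j}, \qquad \sigma\,\omega^{(\pm)} = \pm\,\omega^{(\pm)},

so imposing σ(J) = J sets all B_j = 0 and leaves h^{1,1}_+ surviving moduli, while σ(J) = −J instead retains only the h^{1,1}_− odd ones.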
Notes.
<templatestyles src="Reflist/styles.css" />
{
"math_id": 0,
"text": "\\mathcal{M}"
},
{
"math_id": 1,
"text": "G_{1}"
},
{
"math_id": 2,
"text": "G_{2}"
},
{
"math_id": 3,
"text": "\\Omega_{p}"
},
{
"math_id": 4,
"text": "\\Omega_{p} : \\sigma \\to 2\\pi - \\sigma"
},
{
"math_id": 5,
"text": "\\mathcal{M}/(G_{1} \\cup \\Omega G_{2})"
},
{
"math_id": 6,
"text": "\\sigma"
},
{
"math_id": 7,
"text": "\\Omega"
},
{
"math_id": 8,
"text": "\\sigma (\\Omega) = \\Omega"
},
{
"math_id": 9,
"text": "\\sigma (\\Omega) = -\\Omega"
},
{
"math_id": 10,
"text": "\\sigma (\\Omega) = \\bar{\\Omega}"
},
{
"math_id": 11,
"text": "\\sigma (J) = J"
},
{
"math_id": 12,
"text": "\\sigma (J) = -J"
},
{
"math_id": 13,
"text": "\\pm 1"
},
{
"math_id": 14,
"text": "\\omega_{i}"
},
{
"math_id": 15,
"text": "h^{1,1}"
},
{
"math_id": 16,
"text": "A_{i}"
},
{
"math_id": 17,
"text": "J = A_{i}\\omega_{i}"
},
{
"math_id": 18,
"text": "h^{1,1} = h^{1,1}_{+} + h^{1,1}_{-}"
},
{
"math_id": 19,
"text": "h^{1,1} = h^{1,1}_{\\pm}"
}
]
| https://en.wikipedia.org/wiki?curid=672301 |
67231 | Metre per second | SI derived unit of speed and velocity
<templatestyles src="Template:Infobox/styles-images.css" />
The metre per second is the unit of both speed (a scalar quantity) and velocity (a vector quantity, which has direction and magnitude) in the International System of Units (SI), equal to the speed of a body covering a distance of one metre in a time of one second. According to the definition of the metre, 1 m/s is exactly formula_0 of the speed of light.
The SI unit symbols are m/s, m·s−1, m s−1, or .
Conversions.
1 m/s is equivalent to:
= 3.6 km/h (exactly)
≈ 3.2808 feet per second (approximately)
≈ 2.2369 miles per hour (approximately)
≈ 1.9438 knots (approximately)
1 foot per second = 0.3048 m/s (exactly)
1 mile per hour = 0.44704 m/s (exactly)
1 km/h = 5/18 m/s (exactly) ≈ 0.2778 m/s
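As a small illustrative sketch (not part of the SI definition or any standard library), the exact factors above can be packaged into a conversion helper; the dictionary and function names are arbitrary choices:

```python
# Exact conversion factors to metres per second, as listed above.
FACTORS_TO_MPS = {
    "km/h": 1000.0 / 3600.0,  # exactly 5/18
    "ft/s": 0.3048,           # exact by definition of the foot
    "mph": 0.44704,           # exact: 1609.344 m per mile over 3600 s
    "knot": 1852.0 / 3600.0,  # exact: one nautical mile per hour
}

def to_metres_per_second(value: float, unit: str) -> float:
    """Convert a speed expressed in `unit` to metres per second."""
    return value * FACTORS_TO_MPS[unit]

print(to_metres_per_second(100, "km/h"))  # 27.77... m/s
```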
Relation to other measures.
The benz, named in honour of Karl Benz, has been proposed as a name for one metre per second. Although it has seen some support as a practical unit, primarily from German sources, it was rejected as the SI unit of velocity and has not seen widespread use or acceptance.
Unicode character.
The "metre per second" symbol is encoded by Unicode at code point .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{1}{299792458}"
}
]
| https://en.wikipedia.org/wiki?curid=67231 |
67232586 | 2 Chronicles 7 | Second Book of Chronicles, chapter 7
2 Chronicles 7 is the seventh chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingship of Solomon (2 Chronicles 1 to 9). The focus of this chapter is the conclusion of dedication ceremony and God's covenant for the temple.
Text.
This chapter was originally written in the Hebrew language and is divided into 22 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Fire from heaven (7:1–3).
This section reports God's positive response to the plea in 2 Chronicles 6:41 that He accepted the temple as His own, applying Leviticus 9:22–24 (without the people's blessings by Moses and Aaron in Leviticus 9:23). God's glory took provisional possession of the temple as in 2 Chronicles 5:13–14, but now with an endorsing fire falling from the heavens, which was witnessed by the religious elite as well as all the Israelites, because God's glory does not only fill the temple, but is also above it (cf. Exodus 40:34 for verse 2, which can be rendered as 'and during all this time the glory of YHWH still filled the temple').
"And when Solomon finished praying, fire came down from the heavens and consumed the burnt offering and sacrifices, and the glory of the Lord filled the temple."
Verse 1.
The 'divine consecration' of the burnt offering and the sacrifices by fire coming down from heaven (not found in 1 Kings 8–9) 'dramatically legitimates' Solomon's Temple as 'an enduring fixture of Israelite life' (cf. Leviticus 9:24; 1 Kings 18:36-39; 1 Chronicles 21:26).
"And when all the children of Israel saw how the fire came down, and the glory of the Lord upon the house, they bowed themselves with their faces to the ground upon the pavement, and worshipped, and praised the Lord, saying, For he is good; for his mercy endureth for ever."
Verse 3.
The song refrain is found in Psalm 136 (cf. 2 Chronicles 5:13; 7:6; 20:21; Ezra 3:11), which became a significant element in the postexilic Temple liturgy.
Sacrifices of dedication (7:4–10).
The celebration of the temple's dedication and the Feast of Tabernacles were two separate feasts, each lasting seven days, for a total of 14 days (clarifying 1 Kings 8:66): the temple dedication took place from the 8th to the 14th of the seventh month, while the Feast of Tabernacles lasted from the 15th until the 21st of the same month, with the concluding feast (as in Leviticus 23:36, 39; cf. Numbers 29:35–38; Nehemiah 8:18) on the 22nd; then Solomon dismissed the festive community on the 23rd, as stated in verse 10, showing a strict adherence to the festal calendar according to Moses' law.
God's response to Solomon (7:11–22).
Verse 11 bridges the previous section to the next, showing that Solomon was successful because he behaved in an exemplary manner. God lists four ways in which the Israelites could move Him to action (verse 14): humility, prayer, seeking His face, and turning from wicked ways; all these become repeated themes in the following chapters of Chronicles, whereas verses 17–22, a form of theodicy, lay out the explanation for the future collapse of David's monarchy and the destruction of the temple. The unique phrase in verse 18, 'a successor to rule over Israel' (instead of 'a successor on the throne of Israel' in 1 Kings 9:5, and also excluding the phrase 'over Israel for ever'), parallels Micah 5:1 with messianic undertones, followed by the exclusion of the phrase 'or your children' in verse 19 to invoke the responsibilities of the current generation.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67232586 |
67234101 | Jump flooding algorithm | Class of algorithms used for computing distance-related functions
The jump flooding algorithm (JFA) is a flooding algorithm used in the construction of Voronoi diagrams and distance transforms. The JFA was introduced by Rong Guodong at an ACM symposium in 2006.
The JFA has desirable attributes in GPU computation, notably for its efficient performance. However, it is only an approximate algorithm and does not always compute the correct result for every pixel, although in practice errors are few and the magnitude of errors is generally small.
Implementation.
The JFA's original formulation is simple to implement.
Take an formula_0 grid of pixels (like an image or texture). All pixels will start with an "undefined" color unless it is a uniquely-colored "seed" pixel. As the JFA progresses, each undefined pixel will be filled with a color corresponding to that of a seed pixel.
For each step size formula_1, run one iteration of the JFA:
Iterate over every pixel formula_2 at formula_3.
For each neighbor formula_4 at formula_5 where formula_6:
if formula_2 is undefined and formula_4 is colored, change formula_2's color to formula_4's
if formula_2 is colored and formula_4 is colored, if formula_7 where formula_8 and formula_9 are the seed pixels for formula_2 and formula_4, respectively, then change formula_2's color to formula_4's.
Note that pixels may change color more than once in each step, and that the JFA does not specify a method for resolving cases where distances are equal, therefore the last-checked pixel's color is used above.
The JFA finishes after evaluating the last pixel in the last step size. Regardless of the content of the initial data, the innermost loop runs a total of formula_10 times over each pixel, for an overall computational complexity of formula_11.
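The steps above translate directly into a short CPU reference implementation. The following is a minimal sketch (not from the original paper), using double buffering so that each round reads the previous round's results; the function and variable names are illustrative, and ties are resolved by keeping the first seed found rather than the last-checked one:

```python
import numpy as np

def jump_flood(n, seeds):
    """Approximate nearest-seed map on an n-by-n grid (n a power of two).

    `seeds` is a set of (x, y) coordinates. Each grid cell ends up holding
    the coordinates of its (approximately) nearest seed; (-1, -1) marks
    cells that are still undefined.
    """
    nearest = -np.ones((n, n, 2), dtype=int)  # (-1, -1) = undefined
    for (x, y) in seeds:
        nearest[x, y] = (x, y)

    k = n // 2
    while k >= 1:
        updated = nearest.copy()  # double buffering: read old, write new
        for x in range(n):
            for y in range(n):
                for dx in (-k, 0, k):
                    for dy in (-k, 0, k):
                        qx, qy = x + dx, y + dy
                        if not (0 <= qx < n and 0 <= qy < n):
                            continue
                        s = nearest[qx, qy]
                        if s[0] < 0:
                            continue  # neighbour carries no seed yet
                        p = updated[x, y]
                        d_s = (x - s[0]) ** 2 + (y - s[1]) ** 2
                        if p[0] < 0 or d_s < (x - p[0]) ** 2 + (y - p[1]) ** 2:
                            updated[x, y] = s
        nearest = updated
        k //= 2
    return nearest

# Three seeds on a 16x16 grid yield an approximate Voronoi partition.
voronoi = jump_flood(16, {(2, 2), (12, 5), (7, 13)})
```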
Variants.
Some variants of JFA are:
Uses.
The jump flooding algorithm and its variants may be used for calculating Voronoi maps and centroidal Voronoi tessellations (CVT), generating distance fields, point-cloud rendering, feature matching, the computation of power diagrams, and soft shadow rendering. The grand strategy game developer Paradox Interactive uses the JFA to render borders between countries and provinces.
Further developments.
The JFA has inspired the development of numerous similar algorithms. Some have well-defined error properties which make them useful for scientific computing.
In the computer vision domain, the JFA has inspired new belief propagation algorithms to accelerate the solution of a variety of problems.
References.
<templatestyles src="Reflist/styles.css" />
"As of [ this edit], this article uses content from "Is Jump Flood Algorithm Separable?", authored by alan-wolfe, trichoplax at Stack Exchange, which is licensed in a way that permits reuse under the , but not under the . All relevant terms must be followed." | [
{
"math_id": 0,
"text": "N \\times N"
},
{
"math_id": 1,
"text": "k \\in \\{\\frac N 2, \\frac N 4, \\dots, 1\\}"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "(x, y)"
},
{
"math_id": 4,
"text": "q"
},
{
"math_id": 5,
"text": "(x+i, y+j)"
},
{
"math_id": 6,
"text": "i,j \\in \\{-k, 0, k\\}"
},
{
"math_id": 7,
"text": "dist(p, s) > dist(p, s')"
},
{
"math_id": 8,
"text": "s"
},
{
"math_id": 9,
"text": "s'"
},
{
"math_id": 10,
"text": "9 \\log_2(N)"
},
{
"math_id": 11,
"text": "O(N^2 \\log_2(N))"
},
{
"math_id": 12,
"text": "^2"
},
{
"math_id": 13,
"text": "\\log_2(N)"
}
]
| https://en.wikipedia.org/wiki?curid=67234101 |
672358 | Type II string theory | In theoretical physics, type II string theory is a unified term that includes both type IIA strings and type IIB strings theories. Type II string theory accounts for two of the five consistent superstring theories in ten dimensions. Both theories have formula_0 extended supersymmetry which is maximal amount of supersymmetry — namely 32 supercharges — in ten dimensions. Both theories are based on oriented closed strings. On the worldsheet, they differ only in the choice of GSO projection. They were first discovered by Michael Green and John Henry Schwarz in 1982, with the terminology of type I and type II coined to classify the three string theories known at the time.
Type IIA string theory.
At low energies, type IIA string theory is described by type IIA supergravity in ten dimensions which is a non-chiral theory (i.e. left–right symmetric) with (1,1) "d"=10 supersymmetry; the fact that the anomalies in this theory cancel is therefore trivial.
In the 1990s it was realized by Edward Witten (building on previous insights by Michael Duff, Paul Townsend, and others) that the limit of type IIA string theory in which the string coupling goes to infinity becomes a new 11-dimensional theory called M-theory. Consequently, the low-energy type IIA supergravity theory can also be derived from the unique maximal supergravity theory in 11 dimensions (the low-energy limit of M-theory) via dimensional reduction.
The content of the massless sector of the theory (which is relevant in the low energy limit) is given by the formula_1 representation of SO(8), where formula_2 is the irreducible vector representation, and formula_3 and formula_4 are the irreducible representations with odd and even eigenvalues of the fermionic parity operator, often called the co-spinor and spinor representations. These three representations enjoy a triality symmetry, which is evident from the group's Dynkin diagram. The four sectors of the massless spectrum after GSO projection and decomposition into irreducible representations are
formula_5
formula_6
formula_7
formula_8
where formula_9 and formula_10 stand for the Ramond and Neveu–Schwarz sectors respectively. The numbers denote the dimension of the irreducible representation and, equivalently, the number of components of the corresponding fields. The various massless fields obtained are the graviton formula_11 with two superpartner gravitinos formula_12, which give rise to local spacetime supersymmetry; a scalar dilaton formula_13 with two superpartner spinors (the dilatinos formula_14); a 2-form spin-2 gauge field formula_15, often called the Kalb–Ramond field; a 1-form formula_16; and a 3-form formula_17. Since the formula_18-form gauge fields naturally couple to extended objects with a formula_19-dimensional world-volume, type IIA string theory naturally incorporates various extended objects such as the D0, D2, D4 and D6 branes (using Hodge duality) among the D-branes (which carry formula_9–formula_9 charge), and the F1 string and NS5 brane among other objects.
The mathematical treatment of type IIA string theory belongs to symplectic topology and algebraic geometry, particularly Gromov–Witten invariants.
Type IIB string theory.
At low energies, type IIB string theory is described by type IIB supergravity in ten dimensions which is a chiral theory (left–right asymmetric) with (2,0) "d"=10 supersymmetry; the fact that the anomalies in this theory cancel is therefore nontrivial.
In the 1990s it was realized that type IIB string theory with the string coupling constant "g" is equivalent to the same theory with the coupling "1/g". This equivalence is known as S-duality.
Orientifold of type IIB string theory leads to type I string theory.
The mathematical treatment of type IIB string theory belongs to algebraic geometry, specifically the deformation theory of complex structures originally studied by Kunihiko Kodaira and Donald C. Spencer.
In 1997 Juan Maldacena gave some arguments indicating that type IIB string theory is equivalent to N = 4 supersymmetric Yang–Mills theory in the 't Hooft limit; it was the first suggestion concerning the AdS/CFT correspondence.
Relationship between the type II theories.
In the late 1980s, it was realized that type IIA string theory is related to type IIB string theory by T-duality.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{N}=2"
},
{
"math_id": 1,
"text": "(8_v \\oplus 8_s) \\otimes (8_v \\oplus 8_c)"
},
{
"math_id": 2,
"text": "8_v"
},
{
"math_id": 3,
"text": "8_c"
},
{
"math_id": 4,
"text": "8_s"
},
{
"math_id": 5,
"text": "\\text{NS-NS}:~ 8_v \\otimes 8_v = 1 \\oplus 28\\oplus 35= \\Phi \\oplus B_{\\mu\\nu} \\oplus G_{\\mu\\nu}"
},
{
"math_id": 6,
"text": "\\text{NS-R}: 8_v\\otimes 8_c=8_s\\oplus 56_c=\\lambda^+ \\oplus \\psi_m^-"
},
{
"math_id": 7,
"text": "\\text{R-NS}: 8_c\\otimes 8_s=8_s\\oplus 56_s=\\lambda^-\\oplus \\psi_m^+"
},
{
"math_id": 8,
"text": "\\text{R-R}: 8_s\\otimes 8_c=8_v\\oplus 56_t=C_n \\oplus C_{nmp}"
},
{
"math_id": 9,
"text": "\\text{R}"
},
{
"math_id": 10,
"text": "\\text{NS}"
},
{
"math_id": 11,
"text": "G_{\\mu\\nu}"
},
{
"math_id": 12,
"text": "\\psi_m^\\pm "
},
{
"math_id": 13,
"text": "\\Phi"
},
{
"math_id": 14,
"text": "\\lambda^\\pm"
},
{
"math_id": 15,
"text": "B_{\\mu\\nu}"
},
{
"math_id": 16,
"text": "C_n"
},
{
"math_id": 17,
"text": "C_{nmp}"
},
{
"math_id": 18,
"text": "\\text{p}"
},
{
"math_id": 19,
"text": "\\text{p+1}"
}
]
| https://en.wikipedia.org/wiki?curid=672358 |
67247219 | 2 Chronicles 8 | Second Book of Chronicles, chapter 8
2 Chronicles 8 is the eighth chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingship of Solomon (2 Chronicles 1 to 9). The focus of this chapter is Solomon's other building projects and commercial efforts.
Text.
This chapter was originally written in the Hebrew language and is divided into 18 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Solomon's other building projects (8:1–11).
This section reports that Solomon received additional cities from Hiram the king of Tyre, whereas 1 Kings 9 reports that Solomon gave Hiram 20 cities, probably in exchange. Solomon sent Israelites to settle in those cities, similar to the policy of the Assyrians towards the defeated northern kingdom. The remaining population of non-Israelites was employed as slave workers by Solomon, with Israelites serving as guards, exempted from the works. Solomon was proud to have Pharaoh's daughter as his wife, so he built her a special house, also because she, as a woman (and foreigner), was not to come into contact with holy things.
"And it came to pass at the end of twenty years, wherein Solomon had built the house of the Lord, and his own house,"
"that the cities which Hiram had given to Solomon, Solomon built them; and he settled the children of Israel there."
"And Solomon went to Hamath Zobah and seized it."
"And he built Tadmor in the wilderness, and all the store cities, which he built in Hamath."
Public worship established at the Temple (8:12–16).
The passage details how Solomon kept the commandments of Moses regarding offerings and David's ordinances in the appointments of priests and Levites. The three annual festivals are named here, along with the daily sacrifices as well as those for the sabbath and the new moons.
Solomon's fleet (8:17–18).
This part parallels 1 Kings 9:26-28, with "the sea" referring to the Red Sea.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67247219 |
67250288 | 2 Chronicles 9 | Second Book of Chronicles, chapter 9
2 Chronicles 9 is the ninth chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingship of Solomon (2 Chronicles 1 to 9). The focus of this chapter is Solomon's fame and wealth with the visit of the queen of Sheba and the list of his treasures, ending with the report of his death and the history books containing his activities.
Text.
This chapter was originally written in the Hebrew language and is divided into 31 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
The queen of Sheba's visit (9:1–12).
The story of the Queen of Sheba's visit to Jerusalem is almost identical with that in 1 Kings 10 and fits well with the Chronicler's emphasis on international recognition of Judah's rulers (cf. e.g. 1 Chronicles 14:17).
"And she gave the king one hundred and twenty talents of gold, spices in great abundance, and precious stones; there never were any spices such as those the queen of Sheba gave to King Solomon."
Solomon's wealth (9:13–28).
This section, with substantial information regarding Solomon's splendor and power, largely parallels 1 Kings 10:14–28a, describing how the promise of unmatched wealth and wisdom (2 Chronicles 1:11–12) was fulfilled in Solomon. Verse 25a corresponds with 1 Kings 5:6, verse 25b with 1 Kings 10:26b, whereas verse 26 includes information recorded in 1 Kings 5:1 and verses 27–28 with 1 Kings 10:27–28. Some materials in 1 Kings 10 with no parallel in this chapter can be found in 2 Chronicles 1:14-17.
The death of Solomon (9:29–31).
The Chronicles report that Solomon enjoyed a peaceful reign over the unified kingdom of Israel from the beginning to the end ('from first to last'; instead of 'all that he did' in 1 Kings 11:41), omitting any negative information found in other documents (1 Kings 9:11–16; 11:1–38) and also declining to present Solomon's wisdom as his most significant quality. Instead of using 'the Book of the Acts of Solomon', the Chronicles use 'the history of the prophet Nathan, and in the prophecy of Ahijah the Shilonite, and in the visions of the seer Iddo concerning Jeroboam son of Nebat' – three prophetic sources, just as for David (1 Chronicles 29:29) – covering the beginning of Solomon's reign (Nathan, cf. 1 Kings 1) to its end (Ahijah; cf. 1 Kings 11:29), whereas Iddo is mentioned again as a source in 2 Chronicles 12:15 (Rehoboam) and 13:22 (Abijah).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67250288 |
6725174 | Heel lift | Type of shoe insert
Heel lifts, also known as shoe inserts, are commonly used as therapy for leg-length differences leading to knee, hip, and back pain. They attempt to reduce stress on the Achilles tendon during healing, and are used for various rehabilitation purposes.
The intent of a heel lift is not to absorb shock or spread pressure on the foot, but to raise one foot in order to shift balance and gait. As such, these products should be firm and not compressible, in order to add a constant amount of height without causing the heel to rub vertically in the shoe.
Calculation.
A commonly used formula for calculating the amount of lift necessary for short leg syndrome was presented by David Heilig:
formula_0
where
L is the amount of lift required,
SBU is the sacral base unleveling, and
D is the duration of the condition, scored as:
formula_1 years = formula_2
formula_3 years = formula_4
formula_5 years = formula_6
C is the compensation (of the spine), scored as:
absent (none) = 0 pts
sidebending and rotation (of the spine) = 1 pt
wedging, facet size changes, endplates with horizontal growths, spurring = 2 pts
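For illustration, consider a hypothetical case (assuming SBU and L are measured in the same unit, e.g. millimetres, which the formula leaves implicit): a patient with 6 mm of sacral base unleveling, a duration of 15 years (D = 2 pts), and sidebending/rotation compensation (C = 1 pt) would be given a lift of less than 6 / (2 + 1) = 2 mm.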
The maximum lift measure within the shoe (i.e., between the heel and the insole) is 1/4 inch, while the maximum lift from the heel to the floor is 1/2 inch.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L < [SBU] / [D + C]"
},
{
"math_id": 1,
"text": "0-10"
},
{
"math_id": 2,
"text": "1 pt"
},
{
"math_id": 3,
"text": "10-30"
},
{
"math_id": 4,
"text": "2 pts"
},
{
"math_id": 5,
"text": "30+"
},
{
"math_id": 6,
"text": "3 pts"
}
]
| https://en.wikipedia.org/wiki?curid=6725174 |
67256111 | BitClout | Blockchain-based social media app
Cryptocurrency
BitClout was an open source blockchain-based social media platform. On the platform, users could post short-form writings and photos, award money to posts they particularly like by clicking a diamond icon, as well as buy and sell "creator coins" (personalized tokens whose value depends on people's reputations). BitClout ran on a custom proof of work blockchain, and was a prototype of what can be built on DeSo (short for "Decentralized Social"). BitClout's founder and primary leader is Nader al-Naji, known pseudonymously as "Diamondhands". In July 2024, al-Naji was arrested and charged with fraud involving BitClout; the U.S. Securities and Exchange Commission also filed fraud charges against him.
Under development since 2019, BitClout's blockchain created its first block in January 2021, and BitClout itself launched publicly in March 2021. The platform launched with 15,000 "reserved" accounts - a move intended to prevent impersonation, but which backfired as some people with reserved accounts tried to actively distance themselves. Later, in September 2021, BitClout was revealed to be the flagship product of the DeSo blockchain.
History.
Origins (2019 - March 2021).
In early 2019, Nader al-Naji became interested in "mixing investing and social media". He started creating a custom blockchain in May 2019, but didn't tell anyone else until November 2020. However, in the fall of 2020, al-Naji pitched BitClout to its investors under his real name and began posting job listings for a "new operation".
Although BitClout was not originally intended to launch until mid-2021, its development was sped up due to "zeitgeist about decentralized social media" in January 2021.
BitClout's first block was mined on 18 January 2021. Its next block was mined on 1 March 2021.
As BitClout (March - September 2021).
In early March 2021, about fifty investors received links to a password-protected website with the BitClout white paper. They were encouraged to explore the site and send the same link to "two or three other 'trusted contacts'". Within weeks users were spending millions of dollars per day on the platform. The platform's founders said they were "completely unprepared", having planned to have a "soft-launch". The leader went by the name "diamondhands" on the platform.
On 24 March 2021, BitClout launched out of private beta. Investors include Sequoia Capital, Andreessen Horowitz, the venture capital firm Social Capital, Coinbase Ventures, Winklevoss Capital Management, Alexis Ohanian, Polychain, Pantera, and Digital Currency Group (CoinDesk's parent company). During its initial launch, BitClout's currency could be bought with bitcoin, but not sold except on Discord servers or Twitter threads. A single bitcoin wallet related to BitClout received more than $165M worth of deposits.
In March 2021, law firm Anderson Kill P.C. sent Nader al-Naji, the presumed leader of the BitClout platform, a cease-and-desist letter, demanding the removal of Brandon Curtis's account and alleging that BitClout violated sections 1798 and 3344 of the California Civil Code by using Curtis's name and likeness without his consent. Curtis also tweeted, "Adopting Bitcoin's aesthetic to raise VC funding to carry out unethical and blatantly illegal schemes like BitClout: not cool". (However, Curtis's coin, despite not being listed on the official website, can still be bought by users searching for the original username.) Additionally, in April 2021, Lee Hsien Loong asked for his name and photograph to be removed from the site, stating that he has "nothing to do with the platform" and that "it is misleading and done without [his] permission".
On 18 May 2021, diamondhands announced that 100% of the BitClout code went public.
On 12 June 2021, the supply of BitClout was capped at around 11 million coins.
On 18 July 2021, BitClout added the ability for users to mint and purchase NFTs within the platform.
As part of DeSo (September 2021 - present).
On 21 September 2021, it was revealed that BitClout was a prototype built on DeSo, short for "Decentralized Social". As a part of this revelation, diamondhands confirmed his identity as Nader al-Naji. (As early as April 2021, it had been believed that diamondhands was indeed that person.) The BitClout project raised $200M in funding, which went to setting up the DeSo Foundation.
Design.
BitClout is a social media platform. Its users can post short-form writings and photos (similarly to Twitter). They can award money to posts they particularly like by clicking a diamond icon (similarly to Twitch Bits).
The price of each account's "creator coin" goes up and down with the popularity of the celebrity behind it. For example, if someone says something negative, the value of their corresponding account may go down. This price is computed automatically according to the formula formula_0.
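As a minimal sketch (not from BitClout's actual codebase), the quoted bonding curve can be expressed as a one-line pricing function; the function name is an illustrative choice:

```python
# The quoted bonding curve: the spot price grows quadratically with
# the number of creator coins in circulation.
def creator_coin_price(coins_in_circulation: float) -> float:
    """Spot price of a creator coin, denominated in BitClout."""
    return 0.003 * coins_in_circulation ** 2

print(creator_coin_price(100))  # 30.0 BitClout per coin once 100 coins exist
```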
At launch time, BitClout scraped 15,000 profiles of celebrities from Twitter to create "reserved" accounts in their names. To claim a reserved account, the account holder would need to tweet about it (which also serves as a marketing strategy). At least 80 such reserved profiles have been claimed.
Proof of stake was introduced in March 2024.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "price\\_in\\_bitclout=.003*creator\\_coins\\_in\\_circulation^2"
}
]
| https://en.wikipedia.org/wiki?curid=67256111 |
67256476 | MacMahon Squares | Puzzle published in 1921 by Percy MacMahon
MacMahon Squares are an edge-matching puzzle first published by Percy MacMahon in 1921, using 24 unique squares with 3-color patterns; each of the four edges is assigned a single color. The complete set of 24 squares is arranged by matching edge colors to create a 4 by 6 grid. Such tessellation puzzles have multiple variants, which are determined by restrictions on how to arrange the 24 squares. This game has also been commercialized in numerous physical forms, by various companies.
The game.
MacMahon squares was first published in Percy Alexander MacMahon's 1921 treatise "New Mathematical Pastimes". The original version consisted of one copy of each of the 24 different squares that can be made by coloring the edges of a square with one of three colors. (Here "different" means up to rotations.) The goal is to arrange the squares into a 4 by 6 grid so that when two squares share an edge, the common edge is the same color in both squares.
In 1964, a supercomputer was used to produce 12,261 solutions to the basic version of the MacMahon Squares puzzle, with a runtime of about 40 hours.
Theory.
The MacMahon Squares game is an example of an edge-matching puzzle. The family of such problems is NP-complete. The first part of "New Mathematical Pastimes" describes these games in general, starting with linear forms (dominoes), then progressing in detail with similar games using tiles shaped as equilateral triangles, squares, right isosceles triangles, cubes, and hexagons.
There are a total of 24 distinct squares for 3 colors. For an arbitrary number of colors formula_0, the number of unique squares can be found by the expression formula_1.
For other shapes with formula_0 colors, MacMahon determined that the numbers of unique patterns are:
triangles: formula_2
pentagons: formula_3
hexagons: formula_4
heptagons: formula_5
For example, given a triangle with three sides, each of which is assigned one of four possible colors, there are 24 unique patterns.
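The square-counting formula can be checked by brute force. The following is a minimal sketch (not from MacMahon's book): it enumerates all edge colorings of a square and canonicalizes each under the four rotations, confirming the expression above for small numbers of colors:

```python
from itertools import product

# Brute-force check of the counting formula: enumerate all edge
# colorings of a square and identify those equal up to rotation.
def unique_squares(n):
    seen = set()
    for edges in product(range(n), repeat=4):
        rotations = [edges[i:] + edges[:i] for i in range(4)]
        seen.add(min(rotations))  # canonical representative
    return len(seen)

for n in range(1, 6):
    assert unique_squares(n) == n * (n + 1) * (n**2 - n + 2) // 4
print(unique_squares(3))  # 24: the classic MacMahon Squares set
```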
Variants.
Contact system.
In his book, MacMahon suggested the ability to define which borders can contact one another, based on their colors. This is done by choosing some permutation of the 3 colors, described by Ca,b,c. Here, a, b, and c represent the shift in colors to which the first, second, and third colors can be matched. A '1' means a color is matched to itself, and a '2' signifies that it must be matched with a different color.
For example, C1,1,1 represents 1 to 1, 2 to 2, and 3 to 3, as each of these matchings is represented by the number 1. Alternatively, C1,2 represents 1 to 1 and 2 to 3, as the 1 to 1 matching is represented by the number 1, and the matching between 2 and 3 is represented by 2. More colors can be described in a similar way. For example, a coloring of C1,2,2,2 represents 1 to 1, 2 to 3, 4 to 5, and 6 to 7.
From here we can see that the only possible numbers with which to describe the pairings are 1 and 2, since a 3 or above would merely skip over a color, which is equivalent to a relabeling, because colorings are relative.
Boundaries.
Another way to change the puzzle is to restrict which colors make up the border. In the classic MacMahon squares puzzle, there are a total of 20 places on the border. The number of each color that can be present on these 20 places can be described by Ba,b,c where a, b, and c are the number of border places given to each color.
For example, B20,0,0 represents 20 of the first color and none of the rest since the first color already constitutes all available border spaces. Alternatively, B10,10,0 represents 10 of the first color and 10 of the next. More colors can be described in a similar way. For example, a boundary of B22,16,8,2 represents 22 of the first, 16 of the second, 8 of the next, and 2 of the last colors to populate the border colors.
From here we can see that only even numbers can describe how many border places each color occupies: each color occurs an even number of times across the full set of tiles, and interior matches consume colors in pairs, so the remaining border occurrences of each color must also be even.
Analogous puzzles.
MacMahon Squares, along with variations on the idea, was commercialized as Multimatch.
Another puzzle with similar properties is MacMahon's Cubes, a set of 30 cubes with sides colored using 6 different colors. Unlike the MacMahon Squares puzzle, the set does not include all 2,226 possible cubes, but only the 30 cubes that contain all 6 distinct colors, with exactly one face of each color.
MacMahon Squares have served as a baseline for numerous other puzzles. Some of these include the Nelson Puzzle, the Wang Tile, and TetraVex.
The commercial board game "Trioker", patented in 1969 by Marc Odier, uses triangular tiles first proposed by MacMahon in 1921.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\frac{n}{4}\\cdot(n+1)\\cdot(n^2-n+2)"
},
{
"math_id": 2,
"text": "\\frac{n}{3}\\cdot(n^2+2)"
},
{
"math_id": 3,
"text": "\\frac{n}{5}\\cdot(n^4+4)"
},
{
"math_id": 4,
"text": "\\frac{n}{6}\\cdot(n+1)\\cdot(n^4-n^3+n^2+2)"
},
{
"math_id": 5,
"text": "\\frac{n}{7}\\cdot(n^6+6)"
}
]
| https://en.wikipedia.org/wiki?curid=67256476 |
672568 | Minimal model | Minimal model may refer to:
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title Minimal model.
{
"math_id": 0,
"text": "\\mathcal{M}"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "\\mathcal{M}'"
},
{
"math_id": 3,
"text": "\\mathcal{M}\\supset\\mathcal{M}'"
}
]
| https://en.wikipedia.org/wiki?curid=672568 |
67257628 | 2 Chronicles 10 | Second Book of Chronicles, chapter 10
2 Chronicles 10 is the tenth chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the kingdom of Israel's division in the beginning of Rehoboam's reign.
Text.
This chapter was originally written in the Hebrew language and is divided into 19 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Rebellion against Rehoboam (10:1–15).
The whole passage (until verse 19) parallels 1 Kings 12:1–19 with only a few verbal alterations. After inheriting the throne from his father, Rehoboam went to Shechem to be confirmed as king. The northern tribes of Israel called Jeroboam (who had fled to Egypt for fear of Solomon) to lead them in requesting a relaxation of the financial burden imposed by Solomon. Rehoboam, refusing the old men's counsel and following the advice of young men instead, replied to them roughly, so ten tribes (not including Judah and Benjamin) revolted and established the northern kingdom, killed Hadoram, Rehoboam's officer, and forced Rehoboam to flee to Jerusalem (verse 18).
"And Rehoboam went to Shechem for to Shechem were all Israel come to make him king."
Verse 1.
Jacob's well is located south-east of it, and Joseph's tomb is to the east.
The kingdom divided (10:16–19).
The kingdom's division is presented in the Chronicles as God's will, in accordance with the interpretation in 1 Kings, although some facts about Solomon's falling away and Jeroboam's background (explained in 1 Kings 11) are not reported. The war with Jeroboam was only a side issue in this chapter and is elaborated in chapter 13 (cf. 1 Kings 12).
"Then king Rehoboam sent Hadoram that was over the tribute; and the children of Israel stoned him with stones, that he died. But king Rehoboam made speed to get him up to his chariot, to flee to Jerusalem."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67257628 |
67266860 | 2 Chronicles 11 | Second Book of Chronicles, chapter 11
2 Chronicles 11 is the eleventh chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the fallout from the unified kingdom of Israel's division in the beginning of Rehoboam's reign.
Text.
This chapter was originally written in the Hebrew language and is divided into 23 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Rehoboam fortifies Judah (11:1–12).
Verses 1–4 parallel 1 Kings 12:21–24, but verses 5–12 have no parallel elsewhere. Rehoboam refrained from attacking Jeroboam because of a prophetic intervention (verse 4), an obedience for which he is rewarded. Instead, Rehoboam transformed some cities into fortresses (verses 6–10), all but Adoraim of which are mentioned elsewhere in the Hebrew Bible.
"‘Thus says the Lord:
""You shall not go up or fight against your brethren! Let every man return to his house, for this thing is from Me."‘"
"Therefore they obeyed the words of the Lord, and turned back from attacking Jeroboam."
Rehoboam's supporters and family (11:13–23).
Verses 13–17 describe the consequences in Judah of Jeroboam's cult 'reforms', as it is reported in verse 15 that Jeroboam made idols (1 Kings 12:28–29 details the placement of two golden calves in Bethel and Dan), then recruited new non-Levite priests who pledged allegiance to him, so the Levites (including the priests; verse 14) and the laymen (verse 16) from the northern kingdom came to Jerusalem for the legitimate sacrificial rite, exactly what Jeroboam wished to avoid with his religious policy. The Chronicles indicate that a large family and numerous children are a sign of God's blessing (without detailing Solomon's large family, perhaps because it is bound up with his idolatry), so the family of Rehoboam is recorded, especially in relation to his two wives, Mahalath and Maachah, who were closely related to David's family. Like David, his grandfather, Rehoboam places his sons in the administration of the kingdom (verses 22–23; cf. 1 Chronicles 18:17).
" For the Levites left their suburbs and their possession, and came to Judah and Jerusalem: for Jeroboam and his sons had cast them off from executing the priest's office unto the Lord:"
Verse 14.
The exodus of the priests and Levites from the northern Israel territory into Judah strengthened the southern kingdom and demonstrated Jeroboam's apostasy (cf. 1 Kings 12:28, 13:33; 14:8–9).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67266860 |
67267852 | 2 Chronicles 12 | Second Book of Chronicles, chapter 12
2 Chronicles 12 is the twelfth chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the kingdom of Israel's division in the beginning of Rehoboam's reign.
Text.
This chapter was originally written in the Hebrew language and is divided into 16 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Shishak attacked Jerusalem (12:1–12).
After a short recovery (three years of faithfully following the law of God; 2 Chronicles 11:17), Rehoboam and the people fell into apostasy, so Egypt could defeat them as a form of punishment. Uzziah also behaved similarly in 2 Chronicles 26:16. The siege of Jerusalem in Rehoboam's time is comparable to the one in Hezekiah's time (2 Chronicles 32).
"And it came to pass, that in the fifth year of king Rehoboam Shishak king of Egypt came up against Jerusalem, because they had transgressed against the LORD,"
"And Sousakim gave to Jeroboam Ano the eldest sister of Thekemina his wife, to him as wife; she was great among the king's daughters".
Verse 2.
Most scholars support Champollion's identification of Shishak with Shoshenq I of the 22nd dynasty (ruled Egypt 945–924 BCE), who left behind "explicit records of a campaign into Canaan (scenes; a long list of Canaanite place-names from the Negev to Galilee; stelae), including a stela [found] at Megiddo", and the Bubastite Portal at Karnak, although Jerusalem was not mentioned in any of these campaign records. A common variant of Shoshenq's name omits its 'n' glyphs, resulting in a pronunciation like "Shoshek".
Rehoboam's reign and death (12:13–16).
This section records events at a further phase of Rehoboam's rule, which follows a tragic pattern: 'As soon as he has recovered, Rehoboam immediately apostasizes again' (cf. verse 1), so the Chronicles note that 'he did not set his heart to seek the LORD'. The concluding remarks in verses 15–16 distinguish between the earlier and later acts of Rehoboam, although the time of separation is not entirely clear. His records were written in the books of Shemaiah and Iddo (it is unclear whether they were two separate sources or a single text; cf. e.g. 1 Chronicles 29:29; 2 Chronicles 9:29; also 2 Chronicles 11:2 or 1 Kings 12:22, with a different spelling, regarding Shemaiah, and 1 Kings 13:22 regarding Iddo, probably the same person as here).
"Thus King Rehoboam strengthened himself in Jerusalem and reigned. Now Rehoboam was forty-one years old when he became king; and he reigned seventeen years in Jerusalem, the city which the Lord had chosen out of all the tribes of Israel, to put His name there. His mother’s name was Naamah, an Ammonitess"
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67267852 |
67269640 | 2 Chronicles 13 | Second Book of Chronicles, chapter 13
2 Chronicles 13 is the thirteenth chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Abijah, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 22 verses in Christian Bibles, but 23 verses in the Hebrew Bible with the following verse numbering comparison:
This article generally follows the common numbering in Christian English Bible versions, with notes to the numbering in Hebrew Bible versions.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Abijah king of Judah (13:1–2).
The information about Abijah's reign over Judah contains no judgement (unlike the negative judgement in 1 Kings), a treatment otherwise given only to Jehoahaz (2 Chronicles 36:1–4), and includes a report of his (one and only) victory over Jeroboam (not recorded in 1 Kings), preceded by a sermon on the mount which portrays the basic relationship between the northern and southern kingdoms.
"In the eighteenth year of King Jeroboam, Abijah became king over Judah."
"He reigned three years in Jerusalem. His mother’s name was Michaiah the daughter of Uriel of Gibeah."
"And there was war between Abijah and Jeroboam."
War between Abijah and Jeroboam (13:3–22).
This section consists of preparations for war, a lengthy speech, and the description of an actual battle between the army of Abijah (kingdom of Judah) and that of Jeroboam (northern kingdom of Israel). Abijah's army (400,000 'valiant warriors ... picked men') was only half the size of Jeroboam's (800,000 'picked mighty warriors'—a number corresponding to 2 Samuel 24:9 and David's census), suggesting that in human terms, northern Israel should have been victorious. Still, Abijah made a 'stylistically and rhetorically artistic speech' (verses 5–12), calling on the people of the northern kingdom to return to the legitimate rule of David's line (YHWH's elect), the legitimate office of priesthood in Jerusalem, and the legitimate (and pure) worship in the temple of Jerusalem, away from the idolatrous worship of Jeroboam, with his own priests serving that of 'no gods' (, "lo elohim"; cf. Hosea 8:6). The 'enumeration of Judean orthopraxis' by Abijah describes the Temple worship during the period of the United Monarchy (1 Chronicles 15–16; 23–29; 2 Chronicles 2–4) with reference to the tabernacle worship at the time of Moses (Exodus 25:30-40; 29:1-9, 38-42; 30:7-10; Leviticus 24:3-9; Numbers 8:2-4; 28:3-8). The battle had elements both of tactics by the Israelites, who prepared an ambush, and of holy warfare for the Judeans, for whom the Lord acted to strike down Jeroboam's troops, enabling the Judeans to kill 500,000 chosen men of Israel (verses 16–17).
"Then Abijah stood on Mount Zemaraim, which is in the mountains of Ephraim, and said, “Hear me, Jeroboam and all Israel:"
"And the rest of the acts of Abijah, and his ways, and his sayings, are written in the story of the prophet Iddo."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67269640 |
672731 | Matrix exponential | Matrix operation generalizing exponentiation of scalar numbers
In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group.
Let X be an "n"×"n" real or complex matrix. The exponential of X, denoted by "e""X" or exp("X"), is the "n"×"n" matrix given by the power series
formula_0
where formula_1 is defined to be the identity matrix formula_2 with the same dimensions as formula_3. The series always converges, so the exponential of X is well-defined.
Equivalently,
formula_4
where I is the "n"×"n" identity matrix.
When X is an "n"×"n" diagonal matrix then exp("X") will be an "n"×"n" diagonal matrix with each diagonal element equal to the ordinary exponential applied to the corresponding diagonal element of X.
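As a minimal sketch (not a production algorithm; as noted below, libraries such as SciPy instead use Padé approximants), the defining power series can be summed term by term; the function name and tolerances here are illustrative choices:

```python
import numpy as np

def expm_series(X, tol=1e-12, max_terms=200):
    """Sum the defining power series of exp(X) until terms are negligible."""
    n = X.shape[0]
    term = np.eye(n)           # X^0 / 0!
    result = term.copy()
    for k in range(1, max_terms):
        term = term @ X / k    # X^k / k! from the previous term
        result += term
        if np.linalg.norm(term) < tol:
            break
    return result

print(expm_series(np.diag([1.0, 2.0])))  # diag(e, e^2): the diagonal rule
```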
Properties.
Elementary properties.
Let "X" and "Y" be "n"×"n" complex matrices and let "a" and "b" be arbitrary complex numbers. We denote the "n"×"n" identity matrix by "I" and the zero matrix by 0. The matrix exponential satisfies the following properties.
We begin with the properties that are immediate consequences of the definition as a power series:
"e"0 = "I"
exp("X"T) = (exp "X")T, where "X"T denotes the transpose of "X"
exp("X"∗) = (exp "X")∗, where "X"∗ denotes the conjugate transpose of "X"
If "Y" is invertible then "e""YXY"−1 = "Ye""X""Y"−1
The next key result is this one:
If "XY" = "YX" then "e""X""e""Y" = "e""X"+"Y".
The proof of this identity is the same as the standard power-series argument for the corresponding identity for the exponential of real numbers. That is to say, "as long as formula_3 and formula_7 commute", it makes no difference to the argument whether formula_3 and formula_7 are numbers or matrices. It is important to note that this identity typically does not hold if formula_3 and formula_7 do not commute (see Golden-Thompson inequality below).
Consequences of the preceding identity are the following:
"e""aX""e""bX" = "e"("a"+"b")"X"
"e""X""e"−"X" = "I"
Using the above results, we can easily verify the following claims. If "X" is symmetric then "e""X" is also symmetric, and if "X" is skew-symmetric then "e""X" is orthogonal. If "X" is Hermitian then "e""X" is also Hermitian, and if "X" is skew-Hermitian then "e""X" is unitary.
Finally, a Laplace transform of matrix exponentials amounts to the resolvent,
formula_8
for all sufficiently large positive values of s.
Linear differential equation systems.
One of the reasons for the importance of the matrix exponential is that it can be used to solve systems of linear ordinary differential equations. The solution of
formula_9
where A is a constant matrix and "y" is a column vector, is given by
formula_10
The matrix exponential can also be used to solve the inhomogeneous equation
formula_11
See the section on applications below for examples.
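As an illustration (not from the article itself), the homogeneous case can be solved numerically with SciPy's expm; the matrix A and initial condition below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # harmonic oscillator in first-order form
y0 = np.array([1.0, 0.0])

for t in (0.0, np.pi / 2, np.pi):
    print(t, expm(t * A) @ y0)  # y(t) = exp(tA) y0; reaches (-1, 0) at t = pi
```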
There is no closed-form solution for differential equations of the form
formula_12
where A is not constant, but the Magnus series gives the solution as an infinite sum.
The determinant of the matrix exponential.
By Jacobi's formula, for any complex square matrix the following trace identity holds:
formula_13
In addition to providing a computational tool, this formula demonstrates that a matrix exponential is always an invertible matrix. This follows from the fact that the right hand side of the above equation is always non-zero, and so det("eA") ≠ 0, which implies that "eA" must be invertible.
In the real-valued case, the formula also shows that the map
formula_14
is not surjective, in contrast to the complex case mentioned earlier. This follows from the fact that, for real-valued matrices, the right-hand side of the formula is always positive, while there exist invertible matrices with a negative determinant.
Real symmetric matrices.
The matrix exponential of a real symmetric matrix is positive definite. Let formula_15 be an "n"×"n" real symmetric matrix and formula_16 a column vector. Using the elementary properties of the matrix exponential and of symmetric matrices, we have:
formula_17
Since formula_18 is invertible, the equality only holds for formula_19, and we have formula_20 for all non-zero formula_21. Hence formula_22 is positive definite.
The exponential of sums.
For any real numbers (scalars) x and y we know that the exponential function satisfies "e""x"+"y" = "e""x" "e""y". The same is true for commuting matrices. If matrices X and Y commute (meaning that "XY" = "YX"), then,
formula_23
However, for matrices that do not commute the above equality does not necessarily hold.
The Lie product formula.
Even if X and Y do not commute, the exponential "e""X" + "Y" can be computed by the Lie product formula
formula_24
Using a large finite k to approximate the above is the basis of the Suzuki–Trotter expansion, often used in numerical time evolution.
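A minimal numerical sketch (illustrative, not from the article) of the Lie product formula for a non-commuting pair of nilpotent matrices:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])   # XY != YX

target = expm(X + Y)
for k in (1, 10, 100, 1000):
    approx = np.linalg.matrix_power(expm(X / k) @ expm(Y / k), k)
    print(k, np.linalg.norm(approx - target))  # error shrinks roughly as 1/k
```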
The Baker–Campbell–Hausdorff formula.
In the other direction, if X and Y are sufficiently small (but not necessarily commuting) matrices, we have
formula_25
where Z may be computed as a series in commutators of X and Y by means of the Baker–Campbell–Hausdorff formula:
formula_26
where the remaining terms are all iterated commutators involving X and Y. If X and Y commute, then all the commutators are zero and we have simply "Z" = "X" + "Y".
Inequalities for exponentials of Hermitian matrices.
For Hermitian matrices there is a notable theorem related to the trace of matrix exponentials.
If A and B are Hermitian matrices, then
formula_27
There is no requirement of commutativity. There are counterexamples to show that the Golden–Thompson inequality cannot be extended to three matrices – and, in any event, tr(exp("A")exp("B")exp("C")) is not guaranteed to be real for Hermitian "A", "B", "C". However, Lieb proved that it can be generalized to three matrices if we modify the expression as follows
formula_28
The exponential map.
The exponential of a matrix is always an invertible matrix. The inverse matrix of "e""X" is given by "e"−"X". This is analogous to the fact that the exponential of a complex number is always nonzero. The matrix exponential then gives us a map
formula_29
from the space of all "n"×"n" matrices to the general linear group of degree n, i.e. the group of all "n"×"n" invertible matrices. In fact, this map is surjective which means that every invertible matrix can be written as the exponential of some other matrix (for this, it is essential to consider the field C of complex numbers and not R).
For any two matrices X and Y,
formula_30
where ‖ · ‖ denotes an arbitrary matrix norm. It follows that the exponential map is continuous and Lipschitz continuous on compact subsets of "M""n"(C).
The map
formula_31
defines a smooth curve in the general linear group which passes through the identity element at "t" = 0.
In fact, this gives a one-parameter subgroup of the general linear group since
formula_32
The derivative of this curve (or tangent vector) at a point "t" is given by
The derivative at "t" = 0 is just the matrix "X", which is to say that "X" generates this one-parameter subgroup.
More generally, for a generic t-dependent exponent, "X"("t"),
formula_33
Taking the above expression "e""X"("t") outside the integral sign and expanding the integrand with the help of the Hadamard lemma one can obtain the following useful expression for the derivative of the matrix exponent,
formula_34
The coefficients in the expression above are different from what appears in the exponential. For a closed form, see derivative of the exponential map.
Directional derivatives when restricted to Hermitian matrices.
Let formula_3 be a formula_35 Hermitian matrix with distinct eigenvalues. Let formula_36 be its eigen-decomposition where formula_37 is a unitary matrix whose columns are the eigenvectors of formula_3, formula_38 is its conjugate transpose, and formula_39 the vector of corresponding eigenvalues. Then, for any formula_35 Hermitian matrix formula_40, the directional derivative of formula_41 at formula_3 in the direction formula_40 is
formula_42
where formula_43, the operator formula_44 denotes the Hadamard product, and, for all formula_45, the matrix formula_46 is defined as
formula_47
In addition, for any formula_35 Hermitian matrix formula_48, the second directional derivative in directions formula_48 and formula_40 is
formula_49
where the matrix-valued function formula_50 is defined, for all formula_45, as
formula_51
with
formula_52
Computing the matrix exponential.
Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis. Matlab, GNU Octave, R, and SciPy all use the Padé approximant. In this section, we discuss methods that are applicable in principle to any matrix, and which can be carried out explicitly for small matrices. Subsequent sections describe methods suitable for numerical evaluation on large matrices.
Diagonalizable case.
If a matrix is diagonal:
formula_53
then its exponential can be obtained by exponentiating each entry on the main diagonal:
formula_54
This result also allows one to exponentiate diagonalizable matrices. If
<templatestyles src="Block indent/styles.css"/>"A" = "UDU"−1
and "D" is diagonal, then
<templatestyles src="Block indent/styles.css"/>"e""A" = "Ue""D""U"−1.
Application of Sylvester's formula yields the same result. (To see this, note that addition and multiplication, hence also exponentiation, of diagonal matrices is equivalent to element-wise addition and multiplication, and hence exponentiation; in particular, the "one-dimensional" exponentiation is felt element-wise for the diagonal case.)
Example : Diagonalizable.
For example, the matrix
formula_55
can be diagonalized as
formula_56
Thus,
formula_57
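This worked example can be confirmed numerically with a short SciPy sketch:
```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 4.0],
              [1.0, 1.0]])
e = np.e
closed_form = np.array([[(e**4 + 1) / (2 * e), (e**4 - 1) / e],
                        [(e**4 - 1) / (4 * e), (e**4 + 1) / (2 * e)]])
print(np.allclose(expm(A), closed_form))   # True
```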
Nilpotent case.
A matrix N is nilpotent if "N""q" = 0 for some integer "q". In this case, the matrix exponential "e""N" can be computed directly from the series expansion, as the series terminates after a finite number of terms:
formula_58
Since the series has a finite number of steps, it is a matrix polynomial, which can be computed efficiently.
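For example (an illustrative sketch), a strictly upper-triangular matrix is nilpotent and its exponential series terminates after three terms:
```python
import numpy as np
from math import factorial
from scipy.linalg import expm

N = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])       # N^3 = 0

# Only the terms I + N + N^2/2! survive.
series = sum(np.linalg.matrix_power(N, k) / factorial(k) for k in range(3))
print(np.allclose(series, expm(N)))   # True
```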
General case.
Using the Jordan–Chevalley decomposition.
By the Jordan–Chevalley decomposition, any formula_35 matrix "X" with complex entries can be expressed as
formula_59
where
This means that we can compute the exponential of "X" by reducing to the previous two cases:
formula_60
Note that we need the commutativity of "A" and "N" for the last step to work.
Using the Jordan canonical form.
A closely related method is, if the field is algebraically closed, to work with the Jordan form of X. Suppose that "X" = "PJP"−1 where J is the Jordan form of X. Then
formula_61
Also, since
formula_62
Therefore, we need only know how to compute the matrix exponential of a Jordan block. But each Jordan block is of the form
formula_63
where N is a special nilpotent matrix. The matrix exponential of J is then given by
formula_64
Projection case.
If P is a projection matrix (i.e. is idempotent: "P"2 = "P"), its matrix exponential is:
<templatestyles src="Block indent/styles.css"/>"e""P" = "I" + ("e" − 1)"P".
Deriving this by expansion of the exponential function, each power of P reduces to P which becomes a common factor of the sum:
formula_65
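A quick numerical check (illustrative; this P projects onto the vector (1, 1)):
```python
import numpy as np
from scipy.linalg import expm

P = np.array([[0.5, 0.5],
              [0.5, 0.5]])                                 # idempotent: P @ P == P
print(np.allclose(expm(P), np.eye(2) + (np.e - 1) * P))    # True
```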
Rotation case.
For a simple rotation in which the perpendicular unit vectors a and b specify a plane, the rotation matrix R can be expressed in terms of a similar exponential function involving a generator G and angle θ.
formula_66
formula_67
The formula for the exponential results from reducing the powers of G in the series expansion and identifying the respective series coefficients of "G2" and G with −cos("θ") and sin("θ") respectively. The second expression here for "eGθ" is the same as the expression for "R"("θ") in the article containing the derivation of the generator, "R"("θ") = "eGθ".
In two dimensions, if formula_68 and formula_69, then formula_70, formula_71, and
formula_72
reduces to the standard matrix for a plane rotation.
The matrix "P" = −"G"2 projects a vector onto the ab-plane and the rotation only affects this part of the vector. An example illustrating this is a rotation of 30° = π/6 in the plane spanned by a and b,
formula_73
formula_74
Let "N" = "I" - "P", so "N"2 = "N" and its products with "P" and "G" are zero. This will allow us to evaluate powers of "R".
formula_75
Evaluation by Laurent series.
By virtue of the Cayley–Hamilton theorem the matrix exponential is expressible as a polynomial of order n−1.
If P and "Qt" are nonzero polynomials in one variable, such that "P"("A") = 0, and if the meromorphic function
formula_76
is entire, then
formula_77
To prove this, multiply the first of the two above equalities by "P"("z") and replace z by A.
Such a polynomial "Qt"("z") can be found as follows (see Sylvester's formula). Letting a be a root of P, "Qa,t"("z") is solved from the product of P by the principal part of the Laurent series of f at a: it is proportional to the relevant Frobenius covariant. Then the sum "St" of the "Qa,t", where a runs over all the roots of P, can be taken as a particular "Qt". All the other "Qt" will be obtained by adding a multiple of P to "St"("z"). In particular, "St"("z"), the Lagrange-Sylvester polynomial, is the only "Qt" whose degree is less than that of P.
Example: Consider the case of an arbitrary 2×2 matrix,
formula_78
The exponential matrix e"tA", by virtue of the Cayley–Hamilton theorem, must be of the form
formula_79
Let α and β be the roots of the characteristic polynomial of A,
formula_80
Then we have
formula_81
hence
formula_82
if "α" ≠ "β"; while, if "α" = "β",
formula_83
so that
formula_84
Defining
formula_85
we have
formula_86
where sin("qt")/"q" is 0 if "t" = 0, and t if "q" = 0.
Thus,
formula_87
Thus, as indicated above, the matrix A having decomposed into the sum of two mutually commuting pieces, the traceful piece and the traceless piece,
formula_88
the matrix exponential reduces to a plain product of the exponentials of the two respective pieces. This is a formula often used in physics, as it amounts to the analog of Euler's formula for Pauli spin matrices, that is rotations of the doublet representation of the group SU(2).
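The 2×2 closed form above translates directly into code; in the sketch below the helper name expm_2x2 is ours, and the degenerate case "q" = 0 is handled separately:
```python
import numpy as np
from scipy.linalg import expm

def expm_2x2(A, t):
    """exp(tA) for a 2x2 matrix via s = tr(A)/2, q = sqrt(-det(A - sI))."""
    A = np.asarray(A, dtype=complex)
    s = np.trace(A) / 2
    q = np.sqrt(-np.linalg.det(A - s * np.eye(2)) + 0j)
    if q == 0:                   # degenerate eigenvalue: sinh(qt)/q -> t
        c0, c1 = 1 - s * t, t
    else:
        c0 = np.cosh(q * t) - s * np.sinh(q * t) / q
        c1 = np.sinh(q * t) / q
    return np.exp(s * t) * (c0 * np.eye(2) + c1 * A)

A = np.array([[1.0, 4.0], [1.0, 1.0]])
print(np.allclose(expm_2x2(A, 0.7), expm(0.7 * A)))   # True
```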
The polynomial "St" can also be given the following "interpolation" characterization. Define "et"("z") ≡ "etz", and "n" ≡ deg "P". Then "St"("z") is the unique degree < "n" polynomial which satisfies "S""t"("k")("a") = "e""t"("k")("a") whenever k is less than the multiplicity of a as a root of P. We assume, as we obviously can, that P is the minimal polynomial of A. We further assume that A is a diagonalizable matrix. In particular, the roots of P are simple, and the "interpolation" characterization indicates that "St" is given by the Lagrange interpolation formula, so it is the Lagrange−Sylvester polynomial.
At the other extreme, if "P" = ("z" - "a")"n", then
formula_89
The simplest case not covered by the above observations is when formula_90 with "a" ≠ "b", which yields
formula_91
Evaluation by implementation of Sylvester's formula.
A practical, expedited computation of the above reduces to the following rapid steps. Recall from above that an "n×n" matrix exp("tA") amounts to a linear combination of the first n−1 powers of A by the Cayley–Hamilton theorem. For diagonalizable matrices, as illustrated above, e.g. in the 2×2 case, Sylvester's formula yields exp("tA") = "Bα" exp("tα") + "Bβ" exp("tβ"), where the Bs are the Frobenius covariants of A.
It is easiest, however, to simply solve for these Bs directly, by evaluating this expression and its first derivative at "t" = 0, in terms of A and I, to find the same answer as above.
But this simple procedure also works for defective matrices, in a generalization due to Buchheim. This is illustrated here for a 4×4 example of a matrix which is "not diagonalizable", and the Bs are not projection matrices.
Consider
formula_92
with eigenvalues "λ"1 = 3/4 and "λ"2 = 1, each with a multiplicity of two.
Consider the exponential of each eigenvalue multiplied by t, exp("λit"). Multiply each exponentiated eigenvalue by the corresponding undetermined coefficient matrix "B""i". If the eigenvalues have an algebraic multiplicity greater than 1, then repeat the process, but now multiplying by an extra factor of t for each repetition, to ensure linear independence. (If one eigenvalue had a multiplicity of three, then there would be the three terms: formula_93.)
Sum all such terms, here four such,
formula_94
To solve for all of the unknown matrices B in terms of the first three powers of A and the identity, one needs four equations, the above one providing one such at t = 0. Further, differentiate it with respect to t,
formula_95
and again,
formula_96
and once more,
formula_97
Setting t = 0 in these four equations, the four coefficient matrices Bs may now be solved for,
formula_98
to yield
formula_99
Substituting with the value for A yields the coefficient matrices
formula_100
so the final answer is
formula_101
The procedure is much shorter than Putzer's algorithm sometimes utilized in such cases.
Illustrations.
Suppose that we want to compute the exponential of
formula_102
Its Jordan form is
formula_103
where the matrix "P" is given by
formula_104
Let us first calculate exp("J"). We have
formula_105
The exponential of a 1×1 matrix is just the exponential of the one entry of the matrix, so exp("J"1(4)) = ["e"4]. The exponential of "J"2(16) can be calculated by the formula "e"(λ"I" + "N") = "e""λ" "e"N mentioned above; this yields
formula_106
Therefore, the exponential of the original matrix B is
formula_107
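As a numerical cross-check of this illustration (a short sketch, not part of the original computation):
```python
import numpy as np
from scipy.linalg import expm

B = np.array([[21.0, 17.0, 6.0],
              [-5.0, -1.0, -6.0],
              [4.0, 4.0, 16.0]])
e4, e16 = np.exp(4.0), np.exp(16.0)
closed_form = 0.25 * np.array(
    [[13 * e16 - e4, 13 * e16 - 5 * e4, 2 * e16 - 2 * e4],
     [-9 * e16 + e4, -9 * e16 + 5 * e4, -2 * e16 + 2 * e4],
     [16 * e16, 16 * e16, 4 * e16]])
print(np.allclose(closed_form, expm(B)))   # True
```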
Applications.
Linear differential equations.
The matrix exponential has applications to systems of linear differential equations. (See also matrix differential equation.) Recall from earlier in this article that a "homogeneous" differential equation of the form
formula_108
has solution "e""At" y(0).
If we consider the vector
formula_109
we can express a system of "inhomogeneous" coupled linear differential equations as
formula_110
Making an ansatz to use an integrating factor of "e"−"At" and multiplying throughout, yields
formula_111
The second step is possible due to the fact that, if "AB" = "BA", then "e""At""B" = "Be""At". So, calculating "e""At" leads to the solution to the system, by simply integrating the third step with respect to t.
A solution to this can be obtained by integrating and multiplying by formula_112 to eliminate the exponent in the LHS. Notice that while formula_112 is a matrix, it is invertible because it is a matrix exponential, and formula_113. In other words, formula_114.
Example (homogeneous).
Consider the system
formula_115
The associated defective matrix is
formula_116
The matrix exponential is
formula_117
so that the general solution of the homogeneous system is
formula_118
amounting to
formula_119
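The same solution can be checked by brute-force numerical integration; a sketch comparing exp("tA") y(0) with a general-purpose ODE solver:
```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[2.0, -1.0, 1.0],
              [0.0, 3.0, -1.0],
              [2.0, 1.0, 3.0]])
y0 = np.array([1.0, 2.0, 3.0])     # arbitrary initial condition (illustrative)

t_end = 0.5
y_expm = expm(t_end * A) @ y0
sol = solve_ivp(lambda t, y: A @ y, (0.0, t_end), y0, rtol=1e-10, atol=1e-12)
print(np.allclose(y_expm, sol.y[:, -1], rtol=1e-6))   # True
```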
Example (inhomogeneous).
Consider now the inhomogeneous system
formula_120
We again have
formula_121
and
formula_122
From before, we already have the general solution to the homogeneous equation. Since the sum of the homogeneous and particular solutions gives the general solution to the inhomogeneous problem, we now only need to find the particular solution.
We have, by above,
formula_123
which could be further simplified to get the requisite particular solution determined through variation of parameters.
Note c = y"p"(0). For more rigor, see the following generalization.
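Taking c = 0, the particular solution can be evaluated by component-wise quadrature and substituted back into the system; the following sketch is illustrative:
```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

A = np.array([[2.0, -1.0, 1.0],
              [0.0, 3.0, -1.0],
              [2.0, 1.0, 3.0]])
b = lambda u: np.exp(2 * u) * np.array([1.0, 0.0, 1.0])

def y_p(t):
    # Component-wise quadrature of exp((t-u)A) b(u) over [0, t].
    return np.array([quad(lambda u, i=i: (expm((t - u) * A) @ b(u))[i], 0, t)[0]
                     for i in range(3)])

t, h = 0.4, 1e-4
residual = (y_p(t + h) - y_p(t - h)) / (2 * h) - (A @ y_p(t) + b(t))
print(np.linalg.norm(residual))   # small: y_p' = A y_p + b up to numerical error
```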
Inhomogeneous case generalization: variation of parameters.
For the inhomogeneous case, we can use integrating factors (a method akin to variation of parameters). We seek a particular solution of the form yp("t") = exp("tA") z("t"),
formula_124
For y"p" to be a solution,
formula_125
Thus,
formula_126
where c is determined by the initial conditions of the problem.
More precisely, consider the equation
formula_127
with the initial condition "Y"("t"0) = "Y"0, where "A" is a constant "n" × "n" matrix, "F" is a continuous vector-valued function, formula_128 is a point in its domain, and formula_129 is a constant vector.
Left-multiplying the above displayed equality by "e−tA" yields
formula_130
We claim that the solution to the equation
formula_131
with the initial conditions formula_132 for 0 ≤ "k" < "n" is
formula_133
where the notation is as follows:
"sk"("t") is the coefficient of formula_136 in the polynomial denoted by formula_137 in Subsection Evaluation by Laurent series above.
To justify this claim, we transform our order n scalar equation into an order one vector equation by the usual reduction to a first order system. Our vector equation takes the form
formula_138
where A is the transpose companion matrix of P. We solve this equation as explained above, computing the matrix exponentials by the observation made in Subsection Evaluation by implementation of Sylvester's formula above.
In the case n = 2 we get the following statement. The solution to
formula_139
is
formula_140
where the functions "s"0 and "s"1 are as in Subsection Evaluation by Laurent series above.
Matrix-matrix exponentials.
The matrix exponential of another matrix (matrix-matrix exponential), is defined as
formula_141
formula_142
for any normal and non-singular "n"×"n" matrix X, and any complex "n"×"n" matrix Y.
For matrix-matrix exponentials, there is a distinction between the left exponential "Y""X" and the right exponential "X""Y", because matrix multiplication is not commutative. Moreover, if "X" is normal and non-singular, then "X""Y" and "Y""X" have the same set of eigenvalues.
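A numerical illustration (a sketch; the positive-definite X is an arbitrary choice satisfying the hypotheses): the left and right exponentials differ, but, being exponentials of "AB" and "BA" for "A" = log("X"), they share invariants such as the trace:
```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(4)
M = rng.standard_normal((3, 3))
X = expm(M + M.T)                  # symmetric positive definite: normal, non-singular
Y = rng.standard_normal((3, 3))

right = expm(logm(X) @ Y)          # the right exponential X^Y
left = expm(Y @ logm(X))           # the left exponential
print(np.allclose(right, left))                        # False in general
print(np.isclose(np.trace(right), np.trace(left)))     # True: exp(AB), exp(BA) are similar
```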
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e^X = \\sum_{k=0}^\\infty \\frac{1}{k!} X^k"
},
{
"math_id": 1,
"text": "X^0"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "e^X = \\lim_{k \\rightarrow \\infty} \\left(I + \\frac{X}{k} \\right)^k"
},
{
"math_id": 5,
"text": "XY=YX"
},
{
"math_id": 6,
"text": "e^Xe^Y=e^{X+Y}"
},
{
"math_id": 7,
"text": "Y"
},
{
"math_id": 8,
"text": "\\int_0^\\infty e^{-ts}e^{tX}\\,dt = (sI - X)^{-1}"
},
{
"math_id": 9,
"text": " \\frac{d}{dt} y(t) = Ay(t), \\quad y(0) = y_0, "
},
{
"math_id": 10,
"text": " y(t) = e^{At} y_0. "
},
{
"math_id": 11,
"text": " \\frac{d}{dt} y(t) = Ay(t) + z(t), \\quad y(0) = y_0. "
},
{
"math_id": 12,
"text": " \\frac{d}{dt} y(t) = A(t) \\, y(t), \\quad y(0) = y_0, "
},
{
"math_id": 13,
"text": "\\det\\left(e^A\\right) = e^{\\operatorname{tr}(A)}~."
},
{
"math_id": 14,
"text": "\\exp \\colon M_n(\\R) \\to \\mathrm{GL}(n, \\R)"
},
{
"math_id": 15,
"text": "S"
},
{
"math_id": 16,
"text": "x \\in \\R^n"
},
{
"math_id": 17,
"text": "x^Te^Sx=x^Te^{S/2}e^{S/2}x=x^T(e^{S/2})^Te^{S/2}x =(e^{S/2}x)^Te^{S/2}x=\\lVert e^{S/2}x\\rVert^2\\geq 0."
},
{
"math_id": 18,
"text": "e^{S/2}"
},
{
"math_id": 19,
"text": "x=0"
},
{
"math_id": 20,
"text": "x^Te^Sx > 0"
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "e^S"
},
{
"math_id": 23,
"text": "e^{X+Y} = e^Xe^Y."
},
{
"math_id": 24,
"text": "e^{X+Y} = \\lim_{k\\to\\infty} \\left(e^{\\frac{1}{k}X}e^{\\frac{1}{k}Y}\\right)^k."
},
{
"math_id": 25,
"text": "e^Xe^Y = e^Z,"
},
{
"math_id": 26,
"text": "Z = X + Y + \\frac{1}{2}[X,Y] + \\frac{1}{12}[X,[X,Y]] - \\frac{1}{12}[Y,[X,Y]]+ \\cdots,"
},
{
"math_id": 27,
"text": "\\operatorname{tr}\\exp(A + B) \\leq \\operatorname{tr}\\left[\\exp(A)\\exp(B)\\right]."
},
{
"math_id": 28,
"text": "\\operatorname{tr}\\exp(A + B + C) \\leq \\int_0^\\infty \\mathrm{d}t\\, \\operatorname{tr}\\left[e^A\\left(e^{-B} + t\\right)^{-1}e^C \\left(e^{-B} + t\\right)^{-1}\\right]."
},
{
"math_id": 29,
"text": "\\exp \\colon M_n(\\Complex) \\to \\mathrm{GL}(n, \\Complex)"
},
{
"math_id": 30,
"text": "\\left\\| e^{X+Y} - e^X\\right\\| \\le \\|Y\\| e^{\\|X\\|} e^{\\|Y\\|}, "
},
{
"math_id": 31,
"text": "t \\mapsto e^{tX}, \\qquad t \\in \\R"
},
{
"math_id": 32,
"text": "e^{tX}e^{sX} = e^{(t + s)X}."
},
{
"math_id": 33,
"text": "\\frac{d}{dt}e^{X(t)} = \\int_0^1 e^{\\alpha X(t)} \\frac{dX(t)}{dt} e^{(1 - \\alpha) X(t)}\\,d\\alpha ~. "
},
{
"math_id": 34,
"text": "\\left(\\frac{d}{dt}e^{X(t)}\\right)e^{-X(t)} = \\frac{d}{dt}X(t) + \\frac{1}{2!} \\left[X(t), \\frac{d}{dt}X(t)\\right] + \\frac{1}{3!} \\left[X(t), \\left[X(t), \\frac{d}{dt}X(t)\\right]\\right] + \\cdots "
},
{
"math_id": 35,
"text": "n \\times n"
},
{
"math_id": 36,
"text": "X = E \\textrm{diag}(\\Lambda) E^*"
},
{
"math_id": 37,
"text": "E"
},
{
"math_id": 38,
"text": "E^*"
},
{
"math_id": 39,
"text": "\\Lambda = \\left(\\lambda_1, \\ldots, \\lambda_n\\right)"
},
{
"math_id": 40,
"text": "V"
},
{
"math_id": 41,
"text": "\\exp: X \\to e^X"
},
{
"math_id": 42,
"text": "\nD \\exp (X) [V]\n\\triangleq\n\\lim_{\\epsilon \\to 0} \n\\frac{1}{\\epsilon}\n\\left(\\displaystyle\ne^{X + \\epsilon V}\n-\ne^{X}\n\\right)\n=\nE(G \\odot \\bar{V}) E^*\n"
},
{
"math_id": 43,
"text": "\\bar{V} = E^* V E"
},
{
"math_id": 44,
"text": "\\odot"
},
{
"math_id": 45,
"text": "1 \\leq i, j \\leq n"
},
{
"math_id": 46,
"text": "G"
},
{
"math_id": 47,
"text": "\nG_{i, j} = \\left\\{\\begin{align}\n& \\frac{e^{\\lambda_i} - e^{\\lambda_j}}{\\lambda_i - \\lambda_j} & \\text{ if } i \\neq j,\\\\\n& e^{\\lambda_i} & \\text{ otherwise}.\\\\\n\\end{align}\\right.\n"
},
{
"math_id": 48,
"text": "U"
},
{
"math_id": 49,
"text": "\nD^2 \\exp (X) [U, V]\n\\triangleq\n\\lim_{\\epsilon_u \\to 0} \n\\lim_{\\epsilon_v \\to 0} \n\\frac{1}{4 \\epsilon_u \\epsilon_v}\n\\left(\\displaystyle\ne^{X + \\epsilon_u U + \\epsilon_v V}\n-\ne^{X - \\epsilon_u U + \\epsilon_v V}\n-\ne^{X + \\epsilon_u U - \\epsilon_v V}\n+\ne^{X - \\epsilon_u U - \\epsilon_v V}\n\\right)\n=\nE F(U, V) E^*\n"
},
{
"math_id": 50,
"text": "F"
},
{
"math_id": 51,
"text": "\nF(U, V)_{i,j} = \\sum_{k=1}^n \\phi_{i,j,k}(\\bar{U}_{ik}\\bar{V}_{jk}^* + \\bar{V}_{ik}\\bar{U}_{jk}^*)\n"
},
{
"math_id": 52,
"text": "\n\\phi_{i,j,k} = \\left\\{\\begin{align}\n& \\frac{G_{ik} - G_{jk}}{\\lambda_i - \\lambda_j} & \\text{ if } i \\ne j,\\\\\n& \\frac{G_{ii} - G_{ik}}{\\lambda_i - \\lambda_k} & \\text{ if } i = j \\text{ and } k \\ne i,\\\\\n& \\frac{G_{ii}}{2} & \\text{ if } i = j = k.\\\\\n\\end{align}\\right.\n"
},
{
"math_id": 53,
"text": "A = \\begin{bmatrix}\na_1 & 0 & \\cdots & 0 \\\\\n0 & a_2 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & \\cdots & a_n\n\\end{bmatrix} ,"
},
{
"math_id": 54,
"text": "e^A = \\begin{bmatrix}\ne^{a_1} & 0 & \\cdots & 0 \\\\\n0 & e^{a_2} & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & \\cdots & e^{a_n}\n\\end{bmatrix} ."
},
{
"math_id": 55,
"text": " A = \\begin{bmatrix}\n1 & 4\\\\\n1 & 1\\\\\n\\end{bmatrix}"
},
{
"math_id": 56,
"text": "\\begin{bmatrix}\n-2 & 2\\\\\n1 & 1\\\\\n\\end{bmatrix}\\begin{bmatrix}\n-1 & 0\\\\\n0 & 3\\\\\n\\end{bmatrix}\\begin{bmatrix}\n-2 & 2\\\\\n1 & 1\\\\\n\\end{bmatrix}^{-1}."
},
{
"math_id": 57,
"text": "e^A = \\begin{bmatrix}\n-2 & 2\\\\\n1 & 1\\\\\n\\end{bmatrix}e^\\begin{bmatrix}\n-1 & 0\\\\\n0 & 3\\\\\n\\end{bmatrix}\\begin{bmatrix}\n-2 & 2\\\\\n1 & 1\\\\\n\\end{bmatrix}^{-1}=\\begin{bmatrix}\n-2 & 2\\\\\n1 & 1\\\\\n\\end{bmatrix}\\begin{bmatrix}\n\\frac{1}{e} & 0\\\\\n0 & e^3\\\\\n\\end{bmatrix}\\begin{bmatrix}\n-2 & 2\\\\\n1 & 1\\\\\n\\end{bmatrix}^{-1} = \\begin{bmatrix}\n\\frac{e^4+1}{2e} & \\frac{e^4-1}{e}\\\\\n\\frac{e^4-1}{4e} & \\frac{e^4+1}{2e}\\\\\n\\end{bmatrix}."
},
{
"math_id": 58,
"text": "e^N = I + N + \\frac{1}{2}N^2 + \\frac{1}{6}N^3 + \\cdots + \\frac{1}{(q - 1)!}N^{q-1} ~."
},
{
"math_id": 59,
"text": "X = A + N "
},
{
"math_id": 60,
"text": "e^X = e^{A+N} = e^A e^N. "
},
{
"math_id": 61,
"text": "e^{X} = Pe^{J}P^{-1}."
},
{
"math_id": 62,
"text": "\\begin{align}\n J &= J_{a_1}(\\lambda_1) \\oplus J_{a_2}(\\lambda_2) \\oplus \\cdots \\oplus J_{a_n}(\\lambda_n), \\\\\n e^J &= \\exp \\big( J_{a_1}(\\lambda_1) \\oplus J_{a_2}(\\lambda_2) \\oplus \\cdots \\oplus J_{a_n}(\\lambda_n) \\big) \\\\\n &= \\exp \\big( J_{a_1}(\\lambda_1) \\big) \\oplus \\exp \\big( J_{a_2}(\\lambda_2) \\big) \\oplus \\cdots \\oplus \\exp \\big( J_{a_n}(\\lambda_n) \\big).\n\\end{align}"
},
{
"math_id": 63,
"text": "\\begin{align}\n & & J_a(\\lambda) &= \\lambda I + N \\\\\n &\\Rightarrow & e^{J_a(\\lambda)} &= e^{\\lambda I + N} = e^\\lambda e^N.\n\\end{align}"
},
{
"math_id": 64,
"text": "e^J = e^{\\lambda_1} e^{N_{a_1}} \\oplus e^{\\lambda_2} e^{N_{a_2}} \\oplus \\cdots \\oplus e^{\\lambda_n} e^{N_{a_n}}"
},
{
"math_id": 65,
"text": "e^P = \\sum_{k=0}^{\\infty} \\frac{P^k}{k!} = I + \\left(\\sum_{k=1}^{\\infty} \\frac{1}{k!}\\right)P = I + (e - 1)P ~."
},
{
"math_id": 66,
"text": "\\begin{align}\n G &= \\mathbf{ba}^\\mathsf{T} - \\mathbf{ab}^\\mathsf{T} & P &= -G^2 = \\mathbf{aa}^\\mathsf{T} + \\mathbf{bb}^\\mathsf{T} \\\\\n P^2 &= P & PG &= G = GP ~,\n\\end{align}"
},
{
"math_id": 67,
"text": "\\begin{align}\n R\\left( \\theta \\right) = e^{G\\theta}\n &= I + G\\sin (\\theta) + G^2(1 - \\cos(\\theta)) \\\\\n &= I - P + P\\cos (\\theta) + G\\sin (\\theta ) ~.\\\\\n\\end{align}"
},
{
"math_id": 68,
"text": "a = \\left[\\begin{smallmatrix} 1 \\\\ 0 \\end{smallmatrix}\\right]"
},
{
"math_id": 69,
"text": "b = \\left[ \\begin{smallmatrix} 0 \\\\ 1 \\end{smallmatrix} \\right]"
},
{
"math_id": 70,
"text": "G = \\left[ \\begin{smallmatrix} 0 & -1 \\\\ 1 & 0\\end{smallmatrix} \\right]"
},
{
"math_id": 71,
"text": "G^2 = \\left[ \\begin{smallmatrix}-1 & 0 \\\\ 0 & -1\\end{smallmatrix} \\right]"
},
{
"math_id": 72,
"text": "R(\\theta) = \\begin{bmatrix}\\cos(\\theta) & -\\sin(\\theta)\\\\ \\sin(\\theta) & \\cos(\\theta)\\end{bmatrix} = I \\cos(\\theta) + G \\sin(\\theta)"
},
{
"math_id": 73,
"text": "\\begin{align}\n \\mathbf{a} &= \\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ \\end{bmatrix} &\n \\mathbf{b} &= \\frac{1}{\\sqrt{5}}\\begin{bmatrix} 0 \\\\ 1 \\\\ 2 \\\\ \\end{bmatrix}\n\\end{align}"
},
{
"math_id": 74,
"text": "\\begin{align}\n G = \\frac{1}{\\sqrt{5}}&\\begin{bmatrix}\n 0 & -1 & -2 \\\\\n 1 & 0 & 0 \\\\\n 2 & 0 & 0 \\\\\n \\end{bmatrix} &\n P = -G^2 &= \\frac{1}{5}\\begin{bmatrix}\n 5 & 0 & 0 \\\\\n 0 & 1 & 2 \\\\\n 0 & 2 & 4 \\\\\n \\end{bmatrix} \\\\\n P\\begin{bmatrix} 1 \\\\ 2 \\\\ 3 \\\\ \\end{bmatrix} =\n \\frac{1}{5}&\\begin{bmatrix} 5 \\\\ 8 \\\\ 16 \\\\ \\end{bmatrix} =\n \\mathbf{a} + \\frac{8}{\\sqrt{5}}\\mathbf{b} &\n R\\left(\\frac{\\pi}{6}\\right) &= \\frac{1}{10}\\begin{bmatrix}\n 5\\sqrt{3} & -\\sqrt{5} & -2\\sqrt{5} \\\\\n \\sqrt{5} & 8 + \\sqrt{3} & -4 + 2\\sqrt{3} \\\\\n 2\\sqrt{5} & -4 + 2\\sqrt{3} & 2 + 4\\sqrt{3} \\\\\n \\end{bmatrix} \\\\\n\\end{align}"
},
{
"math_id": 75,
"text": "\\begin{align}\n R\\left( \\frac{\\pi}{6} \\right) &= N + P\\frac{\\sqrt{3}}{2} + G\\frac{1}{2} \\\\\n R\\left( \\frac{\\pi}{6} \\right)^2 &= N + P\\frac{1}{2} + G\\frac{\\sqrt{3}}{2} \\\\\n R\\left( \\frac{\\pi}{6} \\right)^3 &= N + G \\\\\n R\\left( \\frac{\\pi}{6} \\right)^6 &= N - P \\\\\n R\\left( \\frac{\\pi}{6} \\right)^{12} &= N + P = I \\\\\n\\end{align}"
},
{
"math_id": 76,
"text": "f(z)=\\frac{e^{t z}-Q_t(z)}{P(z)}"
},
{
"math_id": 77,
"text": "e^{t A} = Q_t(A)."
},
{
"math_id": 78,
"text": "A := \\begin{bmatrix}\n a & b \\\\\n c & d\n\\end{bmatrix}."
},
{
"math_id": 79,
"text": "e^{tA} = s_0(t)\\, I + s_1(t)\\,A."
},
{
"math_id": 80,
"text": "P(z) = z^2 - (a + d)\\ z + ad - bc = (z - \\alpha)(z - \\beta) ~ ."
},
{
"math_id": 81,
"text": "S_t(z) = e^{\\alpha t} \\frac{z - \\beta}{\\alpha - \\beta} + e^{\\beta t} \\frac{z - \\alpha}{\\beta - \\alpha}~,"
},
{
"math_id": 82,
"text": "\\begin{align}\n s_0(t) &= \\frac{\\alpha\\,e^{\\beta t} - \\beta\\,e^{\\alpha t}}{\\alpha - \\beta}, &\n s_1(t) &= \\frac{e^{\\alpha t} - e^{\\beta t}}{\\alpha - \\beta}\n\\end{align}"
},
{
"math_id": 83,
"text": "S_t(z) = e^{\\alpha t} (1 + t (z - \\alpha)) ~,"
},
{
"math_id": 84,
"text": "\\begin{align}\n s_0(t) &= (1 - \\alpha\\,t)\\,e^{\\alpha t},&\n s_1(t) &= t\\,e^{\\alpha t}~.\n\\end{align}"
},
{
"math_id": 85,
"text": "\\begin{align}\n s &\\equiv \\frac{\\alpha + \\beta}{2} = \\frac{\\operatorname{tr} A}{2}~, &\n q &\\equiv \\frac{\\alpha - \\beta}{2} = \\pm\\sqrt{-\\det\\left(A - sI\\right)},\n\\end{align}"
},
{
"math_id": 86,
"text": "\\begin{align}\n s_0(t) &= e^{st}\\left(\\cosh(qt) - s\\frac{\\sinh(qt)}{q}\\right), &\n s_1(t) &= e^{st}\\frac{\\sinh(qt)}{q},\n\\end{align}"
},
{
"math_id": 87,
"text": "e^{tA}=e^{st}\\left(\\left(\\cosh(qt) - s\\frac{\\sinh(qt)}{q}\\right)~I~ + \\frac{\\sinh(qt)}{q} A\\right) ~."
},
{
"math_id": 88,
"text": "A = sI + (A-sI)~,"
},
{
"math_id": 89,
"text": "S_t = e^{at}\\ \\sum_{k=0}^{n-1}\\ \\frac{t^k}{k!}\\ (z - a)^k ~."
},
{
"math_id": 90,
"text": "P = (z - a)^2\\,(z - b)"
},
{
"math_id": 91,
"text": "S_t = e^{at}\\ \\frac{z - b}{a - b}\\ \\left(1 + \\left(t + \\frac{1}{b - a}\\right)(z - a)\\right) + e^{bt}\\ \\frac{(z - a)^2}{(b - a)^2}."
},
{
"math_id": 92,
"text": "A = \\begin{bmatrix}\n 1 & 1 & 0 & 0 \\\\\n 0 & 1 & 1 & 0 \\\\\n 0 & 0 & 1 & -\\frac{1}{8} \\\\\n 0 & 0 & \\frac{1}{2} & \\frac{1}{2}\n\\end{bmatrix} ~,"
},
{
"math_id": 93,
"text": "B_{i_1} e^{\\lambda_i t}, ~ B_{i_2} t e^{\\lambda_i t}, ~ B_{i_3} t^2 e^{\\lambda_i t} "
},
{
"math_id": 94,
"text": "\\begin{align}\n e^{At} &= B_{1_1} e^{\\lambda_1 t} + B_{1_2} t e^{\\lambda_1 t} + B_{2_1} e^{\\lambda_2 t} + B_{2_2} t e^{\\lambda_2 t} , \\\\\n e^{At} &= B_{1_1} e^{\\frac{3}{4} t} + B_{1_2} t e^{\\frac{3}{4} t} + B_{2_1} e^{1 t} + B_{2_2} t e^{1 t} ~.\n\\end{align}"
},
{
"math_id": 95,
"text": "A e^{A t} = \\frac{3}{4} B_{1_1} e^{\\frac{3}{4} t} + \\left( \\frac{3}{4} t + 1 \\right) B_{1_2} e^{\\frac{3}{4} t} + 1 B_{2_1} e^{1 t} + \\left(1 t + 1 \\right) B_{2_2} e^{1 t} ~,"
},
{
"math_id": 96,
"text": "\\begin{align}\n A^2 e^{At} &= \\left(\\frac{3}{4}\\right)^2 B_{1_1} e^{\\frac{3}{4} t} + \\left( \\left(\\frac{3}{4}\\right)^2 t + \\left( \\frac{3}{4} + 1 \\cdot \\frac{3}{4}\\right) \\right) B_{1_2} e^{\\frac{3}{4} t} + B_{2_1} e^{1 t} + \\left(1^2 t + (1 + 1 \\cdot 1 )\\right) B_{2_2} e^{1 t} \\\\\n &= \\left(\\frac{3}{4}\\right)^2 B_{1_1} e^{\\frac{3}{4} t} + \\left( \\left(\\frac{3}{4}\\right)^2 t + \\frac{3}{2} \\right) B_{1_2} e^{\\frac{3}{4} t} + B_{2_1} e^{t} + \\left(t + 2\\right) B_{2_2} e^{t} ~,\n\\end{align}"
},
{
"math_id": 97,
"text": "\\begin{align}\n A^3 e^{At} &= \\left(\\frac{3}{4}\\right)^3 B_{1_1} e^{\\frac{3}{4} t} + \\left( \\left(\\frac{3}{4}\\right)^3 t + \\left( \\left(\\frac{3}{4}\\right)^2 + \\left(\\frac{3}{2}\\right) \\cdot \\frac{3}{4}\\right) \\right) B_{1_2} e^{\\frac{3}{4} t} + B_{2_1} e^{1 t} + \\left(1^3 t + (1 + 2) \\cdot 1 \\right) B_{2_2} e^{1 t} \\\\\n &= \\left(\\frac{3}{4}\\right)^3 B_{1_1} e^{\\frac{3}{4} t}\\! + \\left( \\left(\\frac{3}{4}\\right)^3 t\\! + \\frac{27}{16} \\right) B_{1_2} e^{\\frac{3}{4} t}\\! + B_{2_1} e^{t}\\! + \\left(t + 3\\cdot 1\\right) B_{2_2} e^{t} ~.\n\\end{align}"
},
{
"math_id": 98,
"text": "\\begin{align}\nI &= B_{1_1} + B_{2_1} \\\\\nA &= \\frac{3}{4} B_{1_1} + B_{1_2} + B_{2_1} + B_{2_2} \\\\\nA^2 &= \\left(\\frac{3}{4}\\right)^2 B_{1_1} + \\frac{3}{2} B_{1_2} + B_{2_1} + 2 B_{2_2} \\\\\nA^3 &= \\left(\\frac{3}{4}\\right)^3 B_{1_1} + \\frac{27}{16} B_{1_2} + B_{2_1} + 3 B_{2_2} ~,\n\\end{align} "
},
{
"math_id": 99,
"text": "\\begin{align}\nB_{1_1} &= 128 A^3 - 366 A^2 + 288 A - 80 I \\\\\nB_{1_2} &= 16 A^3 - 44 A^2 + 40 A - 12 I \\\\\nB_{2_1} &= -128 A^3 + 366 A^2 - 288 A + 80 I \\\\\nB_{2_2} &= 16 A^3 - 40 A^2 + 33 A - 9 I ~.\n\\end{align}"
},
{
"math_id": 100,
"text": "\\begin{align}\nB_{1_1} &= \\begin{bmatrix}0 & 0 & 48 & -16\\\\ 0 & 0 & -8 & 2\\\\ 0 & 0 & 1 & 0\\\\ 0 & 0 & 0 & 1\\end{bmatrix}\\\\\nB_{1_2} &= \\begin{bmatrix}0 & 0 & 4 & -2\\\\ 0 & 0 & -1 & \\frac{1}{2}\\\\ 0 & 0 & \\frac{1}{4} & -\\frac{1}{8}\\\\ 0 & 0 & \\frac{1}{2} & -\\frac{1}{4} \\end{bmatrix}\\\\\nB_{2_1} &= \\begin{bmatrix}1 & 0 & -48 & 16\\\\ 0 & 1 & 8 & -2\\\\ 0 & 0 & 0 & 0\\\\ 0 & 0 & 0 & 0\\end{bmatrix}\\\\\nB_{2_2} &= \\begin{bmatrix}0 & 1 & 8 & -2\\\\ 0 & 0 & 0 & 0\\\\ 0 & 0 & 0 & 0\\\\ 0 & 0 & 0 & 0\\end{bmatrix}\n\\end{align}"
},
{
"math_id": 101,
"text": "e^{tA} = \\begin{bmatrix}\n e^t & te^t & \\left(8t - 48\\right) e^t\\! + \\left(4t + 48\\right)e^{\\frac{3}{4}t} & \\left(16 - 2\\,t\\right)e^t\\! + \\left(-2t - 16\\right)e^{\\frac{3}{4}t}\\\\\n 0 & e^t & 8e^t\\! + \\left(-t - 8\\right) e^{\\frac{3}{4}t} & -2e^t + \\frac{t + 4}{2}e^{\\frac{3}{4}t}\\\\\n 0 & 0 & \\frac{t + 4}{4}e^{\\frac{3}{4}t} & -\\frac{t}{8}e^{\\frac{3}{4}t}\\\\\n 0 & 0 & \\frac{t}{2}e^{\\frac{3}{4}t} & -\\frac{t - 4}{4}e^{\\frac{3}{4}t} ~.\n \\end{bmatrix}\n"
},
{
"math_id": 102,
"text": "B = \\begin{bmatrix}\n 21 & 17 & 6 \\\\\n -5 & -1 & -6 \\\\\n 4 & 4 & 16\n\\end{bmatrix}."
},
{
"math_id": 103,
"text": "J = P^{-1}BP = \\begin{bmatrix}\n 4 & 0 & 0 \\\\\n 0 & 16 & 1 \\\\\n 0 & 0 & 16\n\\end{bmatrix},"
},
{
"math_id": 104,
"text": "P = \\begin{bmatrix}\n -\\frac14 & 2 & \\frac54 \\\\\n \\frac14 & -2 & -\\frac14 \\\\\n 0 & 4 & 0\n\\end{bmatrix}."
},
{
"math_id": 105,
"text": "J = J_1(4) \\oplus J_2(16) "
},
{
"math_id": 106,
"text": "\\begin{align}\n &\\exp \\left( \\begin{bmatrix} 16 & 1 \\\\ 0 & 16 \\end{bmatrix} \\right)\n = e^{16} \\exp \\left( \\begin{bmatrix} 0 & 1 \\\\ 0 & 0 \\end{bmatrix} \\right) = \\\\[6pt]\n {}={} &e^{16} \\left(\\begin{bmatrix} 1 & 0 \\\\ 0 & 1 \\end{bmatrix} + \\begin{bmatrix} 0 & 1 \\\\ 0 & 0 \\end{bmatrix} + {1 \\over 2!}\\begin{bmatrix} 0 & 0 \\\\ 0 & 0 \\end{bmatrix} + \\cdots {} \\right)\n = \\begin{bmatrix} e^{16} & e^{16} \\\\ 0 & e^{16} \\end{bmatrix}.\n\\end{align}\n"
},
{
"math_id": 107,
"text": "\\begin{align}\n \\exp(B)\n &= P \\exp(J) P^{-1}\n = P \\begin{bmatrix} e^4 & 0 & 0 \\\\ 0 & e^{16} & e^{16} \\\\ 0 & 0 & e^{16} \\end{bmatrix} P^{-1} \\\\[6pt]\n &= {1 \\over 4} \\begin{bmatrix}\n 13e^{16} - e^4 & 13e^{16} - 5e^4 & 2e^{16} - 2e^4 \\\\\n -9e^{16} + e^4 & -9e^{16} + 5e^4 & -2e^{16} + 2e^4 \\\\\n 16e^{16} & 16e^{16} & 4e^{16}\n \\end{bmatrix}.\n\\end{align}"
},
{
"math_id": 108,
"text": " \\mathbf{y}' = A\\mathbf{y} "
},
{
"math_id": 109,
"text": " \\mathbf{y}(t) = \\begin{bmatrix} y_1(t) \\\\ \\vdots \\\\y_n(t) \\end{bmatrix} ~,"
},
{
"math_id": 110,
"text": " \\mathbf{y}'(t) = A\\mathbf{y}(t)+\\mathbf{b}(t)."
},
{
"math_id": 111,
"text": "\\begin{align}\n & & e^{-At}\\mathbf{y}'-e^{-At}A\\mathbf{y} &= e^{-At}\\mathbf{b} \\\\\n &\\Rightarrow & e^{-At}\\mathbf{y}'-Ae^{-At}\\mathbf{y} &= e^{-At}\\mathbf{b} \\\\\n &\\Rightarrow & \\frac{d}{dt} \\left(e^{-At}\\mathbf{y}\\right) &= e^{-At}\\mathbf{b}~.\n\\end{align}"
},
{
"math_id": 112,
"text": "e^{\\textbf{A}t}"
},
{
"math_id": 113,
"text": "e^{\\textbf{A}t} e^{-\\textbf{A}t} = I"
},
{
"math_id": 114,
"text": "\\exp{\\textbf{A}t} = \\exp{{(-\\textbf{A}t)}^{-1}}"
},
{
"math_id": 115,
"text": "\\begin{matrix}\n x' &=& 2x & -y & +z \\\\\n y' &=& & 3y & -1z \\\\\n z' &=& 2x & +y & +3z\n\\end{matrix}~."
},
{
"math_id": 116,
"text": "A = \\begin{bmatrix}\n 2 & -1 & 1 \\\\\n 0 & 3 & -1 \\\\\n 2 & 1 & 3\n\\end{bmatrix}~."
},
{
"math_id": 117,
"text": "e^{tA} = \\frac{1}{2}\\begin{bmatrix}\n e^{2t}\\left( 1 + e^{2t} - 2t\\right) & -2te^{2t} & e^{2t}\\left(-1 + e^{2t}\\right) \\\\\n -e^{2t}\\left(-1 + e^{2t} - 2t\\right) & 2(t + 1)e^{2t} & -e^{2t}\\left(-1 + e^{2t}\\right) \\\\\n e^{2t}\\left(-1 + e^{2t} + 2t\\right) & 2te^{2t} & e^{2t}\\left( 1 + e^{2t}\\right)\n\\end{bmatrix}~,"
},
{
"math_id": 118,
"text": "\\begin{bmatrix}x \\\\y \\\\ z\\end{bmatrix} =\n \\frac{x(0)}{2}\\begin{bmatrix}e^{2t}\\left(1 + e^{2t} - 2t\\right) \\\\ -e^{2t}\\left(-1 + e^{2t} - 2t\\right) \\\\ e^{2t}\\left(-1 + e^{2t} + 2t\\right)\\end{bmatrix}\n + \\frac{y(0)}{2}\\begin{bmatrix}-2te^{2t} \\\\ 2(t + 1)e^{2t} \\\\ 2te^{2t}\\end{bmatrix}\n + \\frac{z(0)}{2}\\begin{bmatrix}e^{2t}\\left(-1 + e^{2t}\\right) \\\\ -e^{2t}\\left(-1 + e^{2t}\\right) \\\\ e^{2t}\\left(1 + e^{2t}\\right)\\end{bmatrix} ~,\n"
},
{
"math_id": 119,
"text": "\\begin{align}\n 2x &= x(0)e^{2t}\\left(1 + e^{2t} - 2t\\right) + y(0)\\left(-2te^{2t}\\right) + z(0)e^{2t}\\left(-1 + e^{2t}\\right) \\\\[2pt]\n 2y &= x(0)\\left(-e^{2t}\\right)\\left(-1 + e^{2t} - 2t\\right) + y(0)2(t + 1)e^{2t} + z(0)\\left(-e^{2t}\\right)\\left(-1 + e^{2t}\\right) \\\\[2pt]\n 2z &= x(0)e^{2t}\\left(-1 + e^{2t} + 2t\\right) + y(0)2te^{2t} + z(0)e^{2t}\\left(1 + e^{2t}\\right) ~.\n\\end{align}"
},
{
"math_id": 120,
"text": "\\begin{matrix}\n x' &=& 2x & - & y & + & z & + & e^{2t} \\\\\n y' &=& & & 3y& - & z & \\\\\n z' &=& 2x & + & y & + & 3z & + & e^{2t}\n\\end{matrix} ~."
},
{
"math_id": 121,
"text": "A = \\left[\\begin{array}{rrr}\n 2 & -1 & 1 \\\\\n 0 & 3 & -1 \\\\\n 2 & 1 & 3\n\\end{array}\\right] ~,"
},
{
"math_id": 122,
"text": "\\mathbf{b} = e^{2t}\\begin{bmatrix}1 \\\\0\\\\1\\end{bmatrix}."
},
{
"math_id": 123,
"text": "\\begin{align}\n \\mathbf{y}_p\n &= e^{tA}\\int_0^t e^{(-u)A}\\begin{bmatrix}e^{2u} \\\\0\\\\e^{2u}\\end{bmatrix}\\,du+e^{tA}\\mathbf{c} \\\\[6pt]\n &= e^{tA}\\int_0^t \\begin{bmatrix}\n 2e^u - 2ue^{2u} & -2ue^{2u} & 0 \\\\\n -2e^u + 2(u+1)e^{2u} & 2(u+1)e^{2u} & 0 \\\\\n 2ue^{2u} & 2ue^{2u} & 2e^u\n \\end{bmatrix}\\begin{bmatrix}e^{2u} \\\\0 \\\\e^{2u}\\end{bmatrix}\\,du + e^{tA}\\mathbf{c} \\\\[6pt]\n &= e^{tA}\\int_0^t \\begin{bmatrix}\n e^{2u}\\left( 2e^u - 2ue^{2u}\\right) \\\\\n e^{2u}\\left(-2e^u + 2(1 + u)e^{2u}\\right) \\\\\n 2e^{3u} + 2ue^{4u}\n \\end{bmatrix}\\,du + e^{tA}\\mathbf{c} \\\\[6pt]\n &= e^{tA}\\begin{bmatrix}\n -{1 \\over 24}e^{3t}\\left(3e^t(4t - 1) - 16\\right) \\\\\n {1 \\over 24}e^{3t}\\left(3e^t(4t + 4) - 16\\right) \\\\\n {1 \\over 24}e^{3t}\\left(3e^t(4t - 1) - 16\\right)\n \\end{bmatrix} + \\begin{bmatrix}\n 2e^t - 2te^{2t} & -2te^{2t} & 0 \\\\\n -2e^t + 2(t + 1)e^{2t} & 2(t + 1)e^{2t} & 0 \\\\\n 2te^{2t} & 2te^{2t} & 2e^t\n \\end{bmatrix}\\begin{bmatrix}c_1 \\\\c_2 \\\\c_3\\end{bmatrix} ~,\n\\end{align}"
},
{
"math_id": 124,
"text": "\\begin{align}\n \\mathbf{y}_p'(t)\n & = \\left(e^{tA}\\right)'\\mathbf{z}(t) + e^{tA}\\mathbf{z}'(t) \\\\[6pt]\n & = Ae^{tA}\\mathbf{z}(t) + e^{tA}\\mathbf{z}'(t) \\\\[6pt]\n & = A\\mathbf{y}_p(t) + e^{tA}\\mathbf{z}'(t)~.\n\\end{align}"
},
{
"math_id": 125,
"text": "\\begin{align}\n e^{tA}\\mathbf{z}'(t) &= \\mathbf{b}(t) \\\\[6pt]\n \\mathbf{z}'(t) &= \\left(e^{tA}\\right)^{-1}\\mathbf{b}(t) \\\\[6pt]\n \\mathbf{z}(t) &= \\int_0^t e^{-uA}\\mathbf{b}(u)\\,du + \\mathbf{c} ~.\n\\end{align}"
},
{
"math_id": 126,
"text": "\\begin{align}\n \\mathbf{y}_p(t)\n & = e^{tA}\\int_0^t e^{-uA}\\mathbf{b}(u)\\,du + e^{tA}\\mathbf{c} \\\\\n & = \\int_0^t e^{(t - u)A}\\mathbf{b}(u)\\,du + e^{tA}\\mathbf{c}~,\n\\end{align}"
},
{
"math_id": 127,
"text": "Y' - A\\ Y = F(t)"
},
{
"math_id": 128,
"text": "t_0"
},
{
"math_id": 129,
"text": "Y_0"
},
{
"math_id": 130,
"text": "Y(t) = e^{(t - t_0)A}\\ Y_0 + \\int_{t_0}^t e^{(t - x)A}\\ F(x)\\ dx ~."
},
{
"math_id": 131,
"text": "P(d/dt)\\ y = f(t)"
},
{
"math_id": 132,
"text": "y^{(k)}(t_0) = y_k"
},
{
"math_id": 133,
"text": "y(t) = \\sum_{k=0}^{n-1}\\ y_k\\ s_k(t - t_0) + \\int_{t_0}^t s_{n-1}(t - x)\\ f(x)\\ dx ~,"
},
{
"math_id": 134,
"text": "P\\in\\mathbb{C}[X]"
},
{
"math_id": 135,
"text": "y_k"
},
{
"math_id": 136,
"text": "X^k"
},
{
"math_id": 137,
"text": "S_t\\in\\mathbb{C}[X]"
},
{
"math_id": 138,
"text": "\\frac{dY}{dt} - A\\ Y = F(t),\\quad Y(t_0) = Y_0,"
},
{
"math_id": 139,
"text": "\n y'' - (\\alpha + \\beta)\\ y' + \\alpha\\,\\beta\\ y = f(t),\\quad\n y(t_0) = y_0,\\quad\n y'(t_0) = y_1\n"
},
{
"math_id": 140,
"text": "y(t) = y_0\\ s_0(t - t_0) + y_1\\ s_1(t - t_0) + \\int_{t_0}^t s_1(t - x)\\,f(x)\\ dx,"
},
{
"math_id": 141,
"text": "X^Y = e^{\\log(X) \\cdot Y}"
},
{
"math_id": 142,
"text": "^Y\\!X = e^{Y \\cdot \\log(X)}"
}
]
| https://en.wikipedia.org/wiki?curid=672731 |
672758 | Worldsheet | Mathematical concept
In string theory, a worldsheet is a two-dimensional manifold which describes the embedding of a string in spacetime. The term was coined by Leonard Susskind as a direct generalization of the world line concept for a point particle in special and general relativity.
The type of string, the geometry of the spacetime in which it propagates, and the presence of long-range background fields (such as gauge fields) are encoded in a two-dimensional conformal field theory defined on the worldsheet. For example, the bosonic string in 26 dimensions has a worldsheet conformal field theory consisting of 26 free scalar bosons. Meanwhile, a superstring worldsheet theory in 10 dimensions consists of 10 free scalar fields and their fermionic superpartners.
Mathematical formulation.
Bosonic string.
We begin with the classical formulation of the bosonic string.
First fix a formula_0-dimensional flat spacetime (formula_0-dimensional Minkowski space), formula_1, which serves as the ambient space for the string.
A world-sheet formula_2 is then an embedded surface, that is, an embedded 2-manifold formula_3, such that the induced metric has signature formula_4 everywhere. Consequently it is possible to locally define coordinates formula_5 where formula_6 is time-like while formula_7 is space-like.
Strings are further classified into open and closed. The topology of the worldsheet of an open string is formula_8, where formula_9, a closed interval, and admits a global coordinate chart formula_10 with formula_11 and formula_12.
Meanwhile the topology of the worldsheet of a closed string is formula_13, and admits 'coordinates' formula_10 with formula_11 and formula_14. That is, formula_7 is a periodic coordinate with the identification formula_15. The redundant description (using quotients) can be removed by choosing a representative formula_16.
World-sheet metric.
In order to define the Polyakov action, the world-sheet is equipped with a world-sheet metric formula_17, which also has signature formula_18 but is independent of the induced metric.
Since Weyl transformations are considered a redundancy of the metric structure, the world-sheet is instead considered to be equipped with a conformal class of metrics formula_19. Then formula_20 defines the data of a conformal manifold with signature formula_18.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "\\Sigma"
},
{
"math_id": 3,
"text": "\\Sigma \\hookrightarrow M"
},
{
"math_id": 4,
"text": "(-,+)"
},
{
"math_id": 5,
"text": "(\\tau,\\sigma)"
},
{
"math_id": 6,
"text": "\\tau"
},
{
"math_id": 7,
"text": "\\sigma"
},
{
"math_id": 8,
"text": "\\mathbb{R}\\times I"
},
{
"math_id": 9,
"text": "I := [0,1]"
},
{
"math_id": 10,
"text": "(\\tau, \\sigma)"
},
{
"math_id": 11,
"text": "-\\infty < \\tau < \\infty"
},
{
"math_id": 12,
"text": "0 \\leq \\sigma \\leq 1"
},
{
"math_id": 13,
"text": "\\mathbb{R}\\times S^1"
},
{
"math_id": 14,
"text": "\\sigma \\in \\mathbb{R}/2\\pi\\mathbb{Z}"
},
{
"math_id": 15,
"text": "\\sigma \\sim \\sigma + 2\\pi"
},
{
"math_id": 16,
"text": "0 \\leq \\sigma < 2\\pi"
},
{
"math_id": 17,
"text": "\\mathbf{g}"
},
{
"math_id": 18,
"text": "(-, +)"
},
{
"math_id": 19,
"text": "[\\mathbf{g}]"
},
{
"math_id": 20,
"text": "(\\Sigma, [\\mathbf{g}])"
}
]
| https://en.wikipedia.org/wiki?curid=672758 |
67276781 | Delicate prime | A delicate prime, digitally delicate prime, or weakly prime number is a prime number where, under a given radix but generally decimal, replacing any one of its digits with any other digit always results in a composite number.
Definition.
A prime number is called a "digitally delicate prime number" when, under a given radix but generally decimal, replacing any one of its digits with any other digit always results in a composite number. A weakly prime base-"b" number with "n" digits must produce formula_0 composite numbers after every digit is individually changed to every other digit. There are infinitely many weakly prime numbers in any base. Furthermore, for any fixed base there is a positive proportion of such primes.
History.
In 1978, Murray S. Klamkin posed the question of whether these numbers existed. Paul Erdős proved that there are infinitely many "delicate primes" in any base.
In 2007, Jens Kruse Andersen found the 1000-digit weakly prime formula_1.
In 2011, Terence Tao proved that delicate primes exist in a positive proportion for all bases. "Positive proportion" here means that, asymptotically, a fixed positive fraction of all primes are delicate, so delicate primes are not scarce among the prime numbers.
Widely digitally delicate primes.
In 2021, Michael Filaseta of the University of South Carolina sought delicate primes that remain delicate when infinitely many leading zeros are prepended to them: replacing any one digit, including any of the leading zeros, must yield a composite number. He called these numbers "widely digitally delicate". With a student, he showed in the paper that infinitely many such numbers exist, although they could not produce a single example, having searched the integers from 1 to 1 billion. They also proved that a positive proportion of primes are widely digitally delicate.
Jon Grantham gave an explicit example of a 4032-digit widely digitally delicate prime.
Examples.
The smallest weakly prime base-"b" numbers for bases 2 through 10 are:
In the decimal number system, the first weakly prime numbers are:
294001, 505447, 584141, 604171, 971767, 1062599, 1282529, 1524181, 2017963, 2474431, 2690201, 3085553, 3326489, 4393139 (sequence in the OEIS).
For the first of these, each of the 54 numbers 094001, 194001, 394001, ..., 294009 is composite.
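A brute-force checker (an illustrative Python sketch; the function name is ours, and a replaced leading digit of 0 is read as a shorter integer, matching the 094001 example above) confirms the property for 294001:
```python
from sympy import isprime

def is_weakly_prime(p: int) -> bool:
    """True if p is prime and every single-digit replacement is not prime."""
    if not isprime(p):
        return False
    s = str(p)
    for i in range(len(s)):
        for d in "0123456789":
            if d != s[i] and isprime(int(s[:i] + d + s[i + 1:])):
                return False
    return True

print(is_weakly_prime(294001))   # True: all 54 variants are composite
print(is_weakly_prime(127))      # False: changing 2 to 3 gives the prime 137
```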
{
"math_id": 0,
"text": "(b - 1) \\times n"
},
{
"math_id": 1,
"text": "(17 \\times 10^{1000} - 17) / 99 + 21686652"
}
]
| https://en.wikipedia.org/wiki?curid=67276781 |
67296089 | AdS black brane | Solution to Einstein equation
An anti de Sitter black brane is a solution of the Einstein equations in the presence of a negative cosmological constant which possesses a planar event horizon. This is distinct from an anti de Sitter black hole solution which has a spherical event horizon. The negative cosmological constant implies that the spacetime will asymptote to an anti de Sitter spacetime at spatial infinity.
Math development.
The Einstein equation is given by formula_0 where formula_1 is the Ricci curvature tensor, R is the Ricci scalar, formula_2 is the cosmological constant and formula_3 is the metric we are solving for.
We will work in d spacetime dimensions with coordinates formula_4 where formula_5 and formula_6. The line element for a spacetime that is stationary, time reversal invariant, space inversion invariant, rotationally invariant
and translationally invariant in the formula_7 directions is given by,
formula_8.
Replacing the cosmological constant with a length scale L
formula_9,
we find that,
formula_10
formula_11
with formula_12 and formula_13 integration constants, is a solution to the Einstein equation.
The integration constant formula_12 is associated with a residual symmetry, a rescaling of the time coordinate.
formula_14, when r goes to infinity, then we must set formula_15.
The point formula_16 represents a curvature singularity and the point formula_17 is a coordinate singularity when formula_18. To see this, we switch to the coordinate system formula_19, where formula_20 and formula_21 is defined by the differential equation formula_22. The line element in this coordinate system is given by formula_23, which is regular at formula_17. The surface formula_17 is an event horizon.
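For concreteness, the tortoise coordinate formula_21 has a closed form; a SymPy sketch for "d" = 4 with the normalization "b" = 1 (an illustrative choice):
```python
import sympy as sp

r = sp.symbols("r", positive=True)
h = 1 - 1 / r**3                          # h(r) for d = 4, with b = 1
rstar = sp.integrate(1 / (r**2 * h), r)   # dr*/dr = 1/(r^2 h(r))
print(sp.simplify(rstar))
# Sanity check that the defining ODE holds:
print(sp.simplify(sp.diff(rstar, r) - 1 / (r**2 * h)))   # 0
```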
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R_{\\mu\\nu}-\\frac{1}{2}R g_{\\mu\\nu}+\\Lambda g_{\\mu\\nu}=0,"
},
{
"math_id": 1,
"text": "R_{\\mu\\nu}"
},
{
"math_id": 2,
"text": "\\Lambda"
},
{
"math_id": 3,
"text": "g_{\\mu\\nu}"
},
{
"math_id": 4,
"text": "(t,r,x_1,...,x_{d-2})"
},
{
"math_id": 5,
"text": "r\\geq0"
},
{
"math_id": 6,
"text": "-\\infin<t,x_1,...,x_{d-2}<\\infin"
},
{
"math_id": 7,
"text": "x_i"
},
{
"math_id": 8,
"text": "ds^2=L^2\\left(\\frac{dr^2}{r^2h(r)}+r^2(-dt^2f(r)+d\\vec{x}^2)\\right)"
},
{
"math_id": 9,
"text": "\\Lambda=-\\frac{1}{2L^2}(d-1)(d-2)"
},
{
"math_id": 10,
"text": "f(r)=a\\left(1-\\frac{b}{r^{d-1}}\\right)"
},
{
"math_id": 11,
"text": "h(r)=1-\\frac{b}{r^{d-1}}"
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": "b"
},
{
"math_id": 14,
"text": "ds^2=L^2\\left(\\frac{dr^2}{r^2}+r^2(-dt^2+d\\vec{x})\\right)"
},
{
"math_id": 15,
"text": "a=1"
},
{
"math_id": 16,
"text": "r=0"
},
{
"math_id": 17,
"text": "r^{d-1}=b"
},
{
"math_id": 18,
"text": "b>0"
},
{
"math_id": 19,
"text": "(v,r,x_1,...,x_{d-2})"
},
{
"math_id": 20,
"text": "v=t+r^*(r)"
},
{
"math_id": 21,
"text": "r^*(r)"
},
{
"math_id": 22,
"text": "\\frac{dr^*}{dr}=\\frac{1}{r^2h(r)}"
},
{
"math_id": 23,
"text": "ds^2=L^2(-r^2h(r)dv^2+2dvdr+r^2d\\vec{x}^2)"
}
]
| https://en.wikipedia.org/wiki?curid=67296089 |
6729896 | Surface reconstruction | Surface reconstruction refers to the process by which atoms at the surface of a crystal assume a different structure than that of the bulk. Surface reconstructions are important in that they help in the understanding of surface chemistry for various materials, especially in the case where another material is adsorbed onto the surface.
Basic principles.
In an ideal infinite crystal, the equilibrium position of each individual atom is determined by the forces exerted by all the other atoms in the crystal, resulting in a periodic structure. If a surface is introduced to the surroundings by terminating the crystal along a given plane, then these forces are altered, changing the equilibrium positions of the remaining atoms. This is most noticeable for the atoms at or near the surface plane, as they now only experience inter-atomic forces from one direction. This imbalance results in the atoms near the surface assuming positions with different spacing and/or symmetry from the bulk atoms, creating a different surface structure. This change in equilibrium positions near the surface can be categorized as either a relaxation or a reconstruction.
Relaxation refers to a change in the position of surface atoms relative to the bulk positions, while the bulk unit cell is preserved at the surface. Often this is a purely normal relaxation: that is, the surface atoms move in a direction normal to the surface plane, usually resulting in a smaller-than-usual inter-layer spacing. This makes intuitive sense, as a surface layer that experiences no forces from the open region can be expected to contract towards the bulk. Most metals experience this type of relaxation. Some surfaces also experience relaxations in the lateral direction as well as the normal, so that the upper layers become shifted relative to layers further in, in order to minimize the positional energy.
Reconstruction refers to a change in the two-dimensional structure of the surface layers, in addition to changes in the position of the entire layer. For example, in a cubic material the surface layer might re-structure itself to assume a smaller two-dimensional spacing between the atoms, as lateral forces from adjacent layers are reduced. The general symmetry of a layer might also change, as in the case of the Pt (100) surface, which reconstructs from a cubic to a hexagonal structure. A reconstruction can affect one or more layers at the surface and can either conserve the total number of atoms in a layer (a conservative reconstruction) or have a greater or lesser number than in the bulk (a non-conservative reconstruction).
Reconstruction due to adsorption.
The relaxations and reconstructions considered above would describe the ideal case of "atomically clean" surfaces in vacuum, in which the interaction with another medium is not considered. However, reconstructions can also be induced or affected by the adsorption of other atoms onto the surface, as the interatomic forces are changed. These reconstructions can assume a variety of forms when the detailed interactions between different types of atoms are taken into account, but some general principles can be identified.
The reconstruction of a surface with adsorption will depend on several factors, chiefly the composition of the substrate and adsorbate, the adsorbate coverage, and the ambient conditions such as temperature.
Composition plays an important role in that it determines the form that the adsorption process takes, whether by relatively weak physisorption through van der Waals interactions or stronger chemisorption through the formation of chemical bonds between the substrate and adsorbate atoms. Surfaces that undergo chemisorption generally result in more extensive reconstructions than those that undergo physisorption, as the breaking and formation of bonds between the surface atoms alter the interaction of the substrate atoms as well as the adsorbate.
Different reconstructions can also occur depending on the substrate and adsorbate coverages and the ambient conditions, as the equilibrium positions of the atoms are changed depending on the forces exerted. One example of this occurs in the case of In adsorbed on the Si(111) surface, in which the two differently reconstructed phases of Si(111)formula_0-In and Si(111)formula_1-In (in Wood's notation, see below) can actually coexist under certain conditions. These phases are distinguished by the In coverage in the different regions and occur for certain ranges of the average In coverage.
Notation of reconstructions.
In general, the change in a surface layer's structure due to a reconstruction can be completely specified by a matrix notation proposed by Park and Madden. If formula_2 and formula_3 are the basic translation vectors of the two-dimensional structure in the bulk, and formula_4 and formula_5 are the basic translation vectors of the "superstructure" or reconstructed plane, then the relationship between the two sets of vectors can be described by the following equations:
formula_6
formula_7
so that the two-dimensional reconstruction can be described by the matrix
formula_8
Note that this system does not describe any relaxation of the surface layers relative to the bulk inter-layer spacing, but only describes the change in the individual layer's structure.
Surface reconstructions are more commonly given in Wood's notation, which reduces the matrix above into a more compact notation
X("hkl") "m" × "n" - R"φ",
which describes the reconstruction of the ("hkl") plane (given by its Miller indices). In this notation, the surface unit cell is given as multiples of the nonreconstructed surface unit cell with the unit cell vectors "a" and "b". For example, a calcite(104) (2×1) reconstruction means that the unit cell is twice as long in direction "a" and has the same length in direction "b". If the unit cell is rotated with respect to the unit cell of the nonreconstructed surface, the angle "φ" is given in addition (usually in degrees). This notation is often used to describe reconstructions concisely, but does not directly indicate changes in the layer symmetry (for example, square to hexagonal).
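A small numerical sketch (illustrative) of this matrix notation for a (2×1) reconstruction of a square surface lattice:
```python
import numpy as np

a = np.array([1.0, 0.0])     # bulk surface translation vectors (square lattice)
b = np.array([0.0, 1.0])
G = np.array([[2, 0],        # Park-Madden matrix for a (2x1) reconstruction
              [0, 1]])

a_s = G[0, 0] * a + G[0, 1] * b
b_s = G[1, 0] * a + G[1, 1] * b
print(a_s, b_s)              # [2. 0.] [0. 1.]
print(np.linalg.det(G))      # 2.0: area ratio of reconstructed to bulk cells
```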
Measurement of reconstructions.
Determination of a material's surface reconstruction requires a measurement of the positions of the surface atoms that can be compared to a measurement of the bulk structure. While the bulk structure of crystalline materials can usually be determined by using a diffraction experiment to determine the Bragg peaks, any signal from a reconstructed surface is obscured due to the relatively tiny number of atoms involved.
Special techniques are thus required to measure the positions of the surface atoms, and these generally fall into two categories: diffraction-based methods adapted for surface science, such as low-energy electron diffraction (LEED) or Rutherford backscattering spectroscopy, and atomic-scale probe techniques such as scanning tunneling microscopy (STM) or atomic force microscopy. Of these, STM has been most commonly used in recent history due to its very high resolution and ability to resolve aperiodic features.
Examples of reconstructions.
To allow a better understanding of the variety of reconstructions in different systems, examine the following examples of reconstructions in metallic, semiconducting and insulating materials.
Silicon.
A very well known example of surface reconstruction occurs in silicon, a semiconductor commonly used in a variety of computing and microelectronics applications. With a diamond-like face-centered cubic (fcc) lattice, it exhibits several different well-ordered reconstructions depending on temperature and on which crystal face is exposed.
When Si is cleaved along the (100) surface, the ideal diamond-like structure is interrupted and results in a 1×1 square array of surface Si atoms. Each of these has two dangling bonds remaining from the diamond structure, creating a surface that can obviously be reconstructed into a lower-energy structure. The observed reconstruction is a 2×1 periodicity, explained by the formation of dimers, which consist of paired surface atoms, decreasing the number of dangling bonds by a factor of two. These dimers reconstruct in rows with a high long-range order, resulting in a surface of "filled" and "empty" rows. LEED studies and calculations also indicate that relaxations as deep as five layers into the bulk are also likely to occur.
The Si (111) structure, by comparison, exhibits a much more complex reconstruction. Cleavage along the (111) surface at low temperatures results in another 2×1 reconstruction, differing from the (100) surface by forming long π-bonded chains in the first and second surface layers. However, when heated above 400 °C, this structure converts irreversibly to the more complicated 7×7 reconstruction. In addition, a disordered 1×1 structure is regained at temperatures above 850 °C, which can be converted back to the 7×7 reconstruction by slow cooling.
The 7×7 reconstruction is modeled according to a dimer-adatom-stacking fault (DAS) model constructed by many research groups over a period of 25 years. Extending through the five top layers of the surface, the unit cell of the reconstruction contains 12 adatoms and 2 triangular subunits, 9 dimers, and a deep corner hole that extends to the fourth and fifth layers. This structure was gradually inferred from LEED and RHEED measurements and calculation, and was finally resolved in real space by Gerd Binnig, Heinrich Rohrer, Ch. Gerber and E. Weibel as a demonstration of the STM, which was developed by Binnig and Rohrer at IBM's Zurich Research Laboratory. The full structure with positions of all reconstructed atoms has also been confirmed by massively parallel computation.
A number of similar DAS reconstructions have also been observed on Si (111) in non-equilibrium conditions in a (2"n" + 1)×(2"n" + 1) pattern and include 3×3, 5×5 and 9×9 reconstructions. The preference for the 7×7 reconstruction is attributed to an optimal balance of charge transfer and stress, but the other DAS-type reconstructions can be obtained under conditions such as rapid quenching from the disordered 1×1 structure.
Gold.
The structure of the Au (100) surface is an interesting example of how a cubic structure can be reconstructed into a different symmetry, as well as of the temperature dependence of a reconstruction. In the bulk, gold is an fcc metal, but its (100) surface reconstructs into a distorted hexagonal phase. This hexagonal phase is often referred to as a (28×5) structure, distorted and rotated by about 0.81° relative to the [011] crystal direction. Molecular-dynamics simulations indicate that this rotation occurs to partly relieve a compressive strain developed in the formation of this hexagonal reconstruction, which is nevertheless favored thermodynamically over the unreconstructed structure. However, this rotation disappears in a phase transition at approximately "T" = 970 K, above which an un-rotated hexagonal structure is observed.
A second phase transition is observed at "T" = 1170 K, in which an order–disorder transition occurs, as entropic effects dominate at high temperature. The high-temperature disordered phase is explained as a quasi-melted phase in which only the surface becomes disordered between 1170 K and the bulk melting temperature of 1337 K. This phase is not completely disordered, however, as this melting process allows the effects of the substrate interactions to become important again in determining the surface structure. This results in a recovery of the square (1×1) structure within the disordered phase and makes sense as at high temperatures the energy reduction allowed by the hexagonal reconstruction can be presumed to be less significant.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{3}\\times\\sqrt{3}"
},
{
"math_id": 1,
"text": "\\sqrt{31}\\times\\sqrt{31}"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "b"
},
{
"math_id": 4,
"text": "a_s"
},
{
"math_id": 5,
"text": "b_s"
},
{
"math_id": 6,
"text": "a_s = G_{11} a + G_{12} b,"
},
{
"math_id": 7,
"text": "b_s = G_{21} a + G_{22} b,"
},
{
"math_id": 8,
"text": "G = \\begin{pmatrix}\n G_{11} & G_{12} \\\\\n G_{21} & G_{22}\n\\end{pmatrix}."
}
]
| https://en.wikipedia.org/wiki?curid=6729896 |
6729973 | Force concentration | Military strategy
Force concentration is the practice of concentrating a military force so as to bring to bear such overwhelming force against a portion of an enemy force that the disparity between the two forces alone acts as a force multiplier in favour of the concentrated forces.
Mass of decision.
Force concentration became integral to the Prussian military operational doctrine of the "mass of decision", which aimed to inflict disproportionate losses on the enemy and thereby destroy the enemy's ability to fight.
From an empirical examination of past battles, the Prussian military theorist Carl von Clausewitz (1780–1831) concluded:
[...] we may infer, that it is very difficult in the present state of Europe, for the most talented General to gain a victory over an enemy double his strength. Now if we see double numbers prove such a weight in the scale against the greatest Generals, we may be sure, that in ordinary cases, in small as well as great combats, an important superiority of numbers, but which need not be over two to one, will be sufficient to ensure the victory, however disadvantageous other circumstances may be.
Lanchester's laws.
During the First World War, Frederick W. Lanchester formulated Lanchester's laws, which state that the combat power of a military force is proportional to the square of the number of its units, so that the advantage a larger force holds is the difference of the squares of the two forces' strengths.
So a two-to-one advantage in units will quadruple the firepower and inflict four times the punishment; three times as many units will have nine times the combat ability, and so on. Basically, the greater the numerical superiority one side has, the greater the damage it can inflict on the other side and the smaller the cost to itself.
Mathematical model.
There is no battlefield where battle tactics can be reduced to a pure race of delivering damage while ignoring all other circumstances. However, in some types of warfare, such as a battle for air superiority, the confrontation of armoured forces in World War II or battleship-based naval battles, the ratio of armed forces can become the dominant factor. In that case, the equations stated in Lanchester's laws model the potential outcome of the conflict fairly well. The balance between the two opposing forces inclines to the side of the superior force by the factor of formula_0. For example, two tanks against one tank are superior by a factor of four.
This result could be understood if the rate of damage (considered as the only relevant factor in the model) is solved as a system of differential equations. The rate at which each army delivers damage to the opponent is proportional to the number of units – in the model each unit shoots at a given rate – and to the ability or effectiveness of each surviving unit to kill the enemy. The sizes of both armies decrease at different rates depending on the size of the other, and casualties of the superior army approach zero as the size of the inferior army approaches zero. This can be written in the equations:
formula_1
formula_2
where formula_3 is the size of the first army, formula_4 is the rate at which the second army is losing units, and formula_5 is the per-unit effectiveness of the first army (the remaining symbols are defined symmetrically).
The above equations result in the following homogeneous second-order linear ordinary differential equations:
formula_6
formula_7
To determine the time evolution of formula_3 and formula_8, these equations need to be solved using the known initial conditions (the initial size of the two armies prior to combat).
This model clearly demonstrates (see picture) that an inferior force can suffer devastating losses even when the superior force is only slightly larger, in case of equal per-unit qualitative capabilities: in the first example (see picture, top plot) the superior force starts only 40% larger, yet it brings about the total annihilation of the inferior force while suffering only 40% losses. Quality of the force may outweigh the quantitative inferiority of the force (middle plot) when it comes to battle outcomes.
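The coupled equations above are straightforward to integrate numerically. The following is a minimal Python sketch (the function name is ours) using a crude Euler step; for equal per-unit effectiveness it reproduces the square-law prediction that the survivors of the larger force number the square root of the difference of the squares of the initial strengths:

import math

def lanchester_square(n1, n2, c1=1.0, c2=1.0, dt=1e-5):
    """Euler integration of dN1/dt = -c2*N2, dN2/dt = -c1*N1
    until one side is annihilated; returns the surviving strengths."""
    while n1 > 0 and n2 > 0:
        n1, n2 = n1 - c2 * n2 * dt, n2 - c1 * n1 * dt
    return max(n1, 0.0), max(n2, 0.0)

# A force 40% larger, with equal per-unit effectiveness:
print(lanchester_square(140.0, 100.0))     # approximately (97.98, 0.0)
print(math.sqrt(140.0**2 - 100.0**2))      # 97.97..., the square-law prediction

The agreement reflects the conserved quantity of the square law, c1·N1² − c2·N2², whose value is unchanged along the exact trajectories.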
Business strategy.
In the 1960s, Lanchester's laws were popularised by the business consultant Nobuo Taoka and found favour with a segment of the Japanese business community. The laws were used to formulate plans and strategies to attack market share. The "Canon–Xerox copier battle" in the UK, for example, reads like a classic people's war campaign. In this case, the laws supported Canon's establishment of a "revolutionary base area" by concentrating resources on a single geographical area until dominance could be achieved, in this case in Scotland. After this, they carefully defined regions to be individually attacked again with a more focused allocation of resources. The sales and distribution forces built up to support these regions in turn were used in the final "determined push in London with a numerically larger salesforce".
Hypothetical example.
Imagine two equally matched sides each with two infantry and two armoured divisions. Now visualize a straight defensive line with the two infantry and two armoured divisions, deployed equally along the length of the line. Hypothetically the attacker can win by concentrating his armour at one point (with his infantry holding the rest of the line).
Traditionally it is accepted that a defending force has a 3:1 advantage over an attacker. In other words, a defending force can hold off three times its own number of attackers. Imagine, then, that the defensive line is four units in length, so that each portion of the line can be held by a single defending division. Assume that they can take on the oncoming armour on equal terms (with ATGWs, pre-prepared artillery fireplans etc.) and that they have had time to dig in. This single unit should be able to hold off 3 times its own number. With the attacking force having only two armoured units, the defenders should have the advantage.
However, as the defensive line increases from the imaginary four units in length, the advantage slips from the defender to the attacker. The longer the line to be held, the thinner the defenders will be spread. With the defender having sacrificed his mobility to dig in, the attacker can choose where and when to attack, either penetrating the line or turning a flank, and so destroy the enemy in detail. Thus, concentrating two divisions and attacking at a single point generates a far greater force than is achieved by spreading two divisions into a line and pushing forward on a broad front.
Concentration of force in this scenario requires mobility (to permit rapid concentration) and power (to be effective in combat once concentrated). The tank embodies these two properties and for the past seventy years has been seen as the primary weapon of conventional warfare.
No one side has a monopoly on military art, and what is obvious to one side is obvious to the other. A far more likely scenario is that both forces will choose to use their infantry to hold a line and to concentrate their armour, and rather than a line in the sand, the infantry line would be more of a trip wire, to warn of where the enemy has chosen to launch his attack, with the armoured forces jostling to find the right place to attack or counterattack. Other considerations, then, must come into play for a decisive blow to be achieved.
Such considerations may be economic or political in nature, e.g. one side is unable or unwilling to allow the sanctity of its soil to be violated, and thus insists on defending a line on a map.
History.
Force concentration has been a part of the military commander's repertoire since the dawn of warfare, though maybe not by that name. Commanders have always tried to have the advantage of numbers. The declined flank for example, was one way of achieving a force concentration during a battle.
Disposition of Roman Legions.
At the beginning of the Roman Empire, in the first years of the first millennium, Rome's Legions were grouped into battle groups of three or four Legions, on the Rhine, on the Danube and in the Levant. By the third century A.D. these Legions had been dispersed along the frontiers in frontier fortifications, and within the Empire as internal security troops. In the first case Rome's military might was disposed in a manner in which it had a concentration of force capable of offensive action; in the second case it could defend effectively but could only attack and counterattack with difficulty.
Guerrilla warfare.
As they are usually the smaller in number an appreciation of force concentration is especially important to guerrilla forces, who find it prudent initially to avoid confrontations with any large concentrations of government/occupying forces. However, through the use of small attacks, shows of strength, atrocities etc. in out of the way areas, they may be able to lure their opponents into spreading themselves out into isolated outposts, linked by convoys and patrols, in order to control territory. The guerrilla forces may then attempt to use force concentrations of their own; using unpredictable and unexpected concentrations of their forces, to destroy individual patrols, convoys and outposts. In this way they can hope to defeat their enemy in detail.
Regular forces, in turn, may act in order to invite such attacks by concentrations of enemy guerrillas, in order to bring an otherwise elusive enemy to battle, relying on its own superior training and firepower to win such battles. This was successfully practiced by the French during the First Indochina War at the Battle of Nà Sản, but a subsequent attempt to replicate this at Dien Bien Phu led to decisive defeat.
Aerial warfare.
During World War I the Central Powers became increasingly unable to meet the Allied Powers in terms of outright number of fighter aircraft. To overcome this shortcoming rather than deploying their fighters uniformly along the fronts, the Germans concentrated their fighters into large mobile Jagdgeschwader formations, the most famous of which was Manfred von Richthofen's Flying Circus, that could be moved rapidly and unexpectedly to different points along the front. This allowed them to create a "local superiority" in numbers, that could achieve air supremacy in a local area in support of ground operations or just to destroy Allied fighters in the overall strategy of attrition.
Similarly the Second World War Big Wing was one tactic that was evolved to cause maximum damage to the enemy with the minimum of casualties.
Blitzkrieg.
Modern armoured warfare doctrine was developed and established during the run-up to World War II. A fundamental key to conventional warfare is the concentration of force at a particular point (German: "Schwerpunkt"). Concentration of force increases the chance of victory in a particular engagement. Correctly chosen and exploited, victory in a given engagement or a chain of small engagements is often sufficient to win the battle.
Defence of France 1944.
The Nazi defence of France in 1944 could have followed one of the two models offered in the hypothetical example. The first was to distribute the available forces along the Atlantic Wall and throw the invading Allies back into the sea where and when they landed. The second was to keep the German Panzers concentrated and well away from the beaches. Territory could then be conceded to draw the invasion force away from their lodgement areas from which it would be nipped off by the cutting of their supply lines and then defeated in detail. The superiority of concentrated forces using maneuver warfare in the hypothetical example carried the proviso of "all other things being equal"; by 1944 things were far from being equal.
With Allied air superiority not only were major force concentrations vulnerable to tactical and heavy bombers themselves, but so were the vital assets—bridges, marshalling yards, fuel depots, etc.—needed to give them mobility. As it was in this case, the blitzkrieg solution was the worst of both worlds, neither being far enough forward to maximise the use of their defensive fortifications, nor far enough away and concentrated to give it room to manoeuvre.
Similarly, for the Japanese in the final stages of the Island hopping campaign of the Pacific War, with Allied naval and air superiority and non-existent room to manoeuvre, neither a water's edge defensive strategy nor a holding back and counterattacking strategy could succeed.
Cold War and beyond.
For much of the Cold War, to combat the overwhelming Soviet supremacy in armour and men, NATO planned to use much of West German territory as a flood plain in a defence in depth to absorb and disperse the momentum of a massed Soviet attack. Mobile anti-tank teams and counterattacking NATO armies would seek to cut off the leading Soviet echelons from their supporting echelons and then reduce the isolated elements with superior air power and conventional munitions, and if this failed, with nuclear munitions.
In an effort to avoid the use of nuclear munitions in an otherwise conventional war, the US invested heavily in a family of technologies it called "Assault Breaker". The first part of these programmes was an enhanced real-time intelligence, surveillance, target acquisition, and reconnaissance capability; the second was a series of stand-off precision-guided air-launched and artillery weapon systems, such as the MLRS, ICMs, M712 Copperhead, and the BLU-108 submunition. Against such weapons massed concentrations of armour and troops would no longer be a virtue but a liability. From the mid-eighties onward a much greater level of force dispersal became desirable rather than concentration.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\left ( \\frac{Power_2}{Power_1} \\right ) ^ 2 "
},
{
"math_id": 1,
"text": " \\frac{d}{d t}N_1=-c_2 N_2"
},
{
"math_id": 2,
"text": " \\frac{d}{d t}N_2=-c_1 N_1"
},
{
"math_id": 3,
"text": "N_1"
},
{
"math_id": 4,
"text": "\\frac{d}{d t}N_2"
},
{
"math_id": 5,
"text": "c_1"
},
{
"math_id": 6,
"text": " \\frac{d^2}{d t^2}N_1 = c_2 c_1 N_1 "
},
{
"math_id": 7,
"text": " \\frac{d^2}{d t^2}N_2 = c_2 c_1 N_2 "
},
{
"math_id": 8,
"text": "N_2"
}
]
| https://en.wikipedia.org/wiki?curid=6729973 |
6730121 | Presentation of a monoid | In algebra, a presentation of a monoid (or a presentation of a semigroup) is a description of a monoid (or a semigroup) in terms of a set Σ of generators and a set of relations on the free monoid Σ∗ (or the free semigroup Σ+) generated by Σ. The monoid is then presented as the quotient of the free monoid (or the free semigroup) by these relations. This is an analogue of a group presentation in group theory.
As a mathematical structure, a monoid presentation is identical to a string rewriting system (also known as a semi-Thue system). Every monoid may be presented by a semi-Thue system (possibly over an infinite alphabet).
A "presentation" should not be confused with a "representation".
Construction.
The relations are given as a (finite) binary relation R on Σ∗. To form the quotient monoid, these relations are extended to monoid congruences as follows:
First, one takes the symmetric closure R ∪ R−1 of R. This is then extended to a symmetric relation E ⊂ Σ∗ × Σ∗ by defining x ~E y if and only if x = sut and y = svt for some strings u, v, s, t ∈ Σ∗ with (u,v) ∈ R ∪ R−1. Finally, one takes the reflexive and transitive closure of E, which then is a monoid congruence.
In the typical situation, the relation R is simply given as a set of equations, so that formula_0. Thus, for example,
formula_1
is the equational presentation for the bicyclic monoid, and
formula_2
is the plactic monoid of degree 2 (it has infinite order). Elements of this plactic monoid may be written as formula_3 for integers i, j, k, as the relations show that ba commutes with both a and b.
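The defining relation of the bicyclic monoid gives a terminating and confluent rewriting system (delete any adjacent factor "pq"), so every element has a unique normal form q^a p^b. A minimal Python sketch (the function name is ours) computes this normal form with a stack in a single pass:

def bicyclic_normal_form(word: str) -> str:
    """Reduce a word over {p, q} modulo pq = 1 to its normal form q^a p^b."""
    stack = []
    for letter in word:
        if letter == "q" and stack and stack[-1] == "p":
            stack.pop()        # cancel an adjacent "pq" factor
        else:
            stack.append(letter)
    return "".join(stack)

print(bicyclic_normal_form("qppqqp"))  # "qp": qppqqp -> qpqp -> qp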
Inverse monoids and semigroups.
Presentations of inverse monoids and semigroups can be defined in a similar way using a pair
formula_4
where
formula_5
is the free monoid with involution on formula_6, and
formula_7
is a binary relation between words. We denote by formula_8 (respectively formula_9) the equivalence relation (respectively, the congruence) generated by "T".
We use this pair of objects to define an inverse monoid
formula_10
Let formula_11 be the Wagner congruence on formula_6, we define the inverse monoid
formula_12
"presented" by formula_4 as
formula_13
In the previous discussion, if we replace everywhere formula_14 with formula_15 we obtain a presentation (for an inverse semigroup) formula_4 and an inverse semigroup formula_16 presented by formula_4.
A trivial but important example is the free inverse monoid (or free inverse semigroup) on formula_6, that is usually denoted by formula_17 (respectively formula_18) and is defined by
formula_19
or
formula_20
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R=\\{u_1=v_1,\\ldots,u_n=v_n\\}"
},
{
"math_id": 1,
"text": "\\langle p,q\\,\\vert\\; pq=1\\rangle"
},
{
"math_id": 2,
"text": "\\langle a,b \\,\\vert\\; aba=baa, bba=bab\\rangle"
},
{
"math_id": 3,
"text": "a^ib^j(ba)^k"
},
{
"math_id": 4,
"text": "(X;T)"
},
{
"math_id": 5,
"text": " (X\\cup X^{-1})^* "
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "T\\subseteq (X\\cup X^{-1})^*\\times (X\\cup X^{-1})^*"
},
{
"math_id": 8,
"text": "T^{\\mathrm{e}}"
},
{
"math_id": 9,
"text": "T^\\mathrm{c}"
},
{
"math_id": 10,
"text": "\\mathrm{Inv}^1 \\langle X | T\\rangle."
},
{
"math_id": 11,
"text": "\\rho_X"
},
{
"math_id": 12,
"text": "\\mathrm{Inv}^1 \\langle X | T\\rangle"
},
{
"math_id": 13,
"text": "\\mathrm{Inv}^1 \\langle X | T\\rangle=(X\\cup X^{-1})^*/(T\\cup\\rho_X)^{\\mathrm{c}}."
},
{
"math_id": 14,
"text": "({X\\cup X^{-1}})^*"
},
{
"math_id": 15,
"text": "({X\\cup X^{-1}})^+"
},
{
"math_id": 16,
"text": "\\mathrm{Inv}\\langle X | T\\rangle"
},
{
"math_id": 17,
"text": "\\mathrm{FIM}(X)"
},
{
"math_id": 18,
"text": "\\mathrm{FIS}(X)"
},
{
"math_id": 19,
"text": "\\mathrm{FIM}(X)=\\mathrm{Inv}^1 \\langle X | \\varnothing\\rangle=({X\\cup X^{-1}})^*/\\rho_X,"
},
{
"math_id": 20,
"text": "\\mathrm{FIS}(X)=\\mathrm{Inv} \\langle X | \\varnothing\\rangle=({X\\cup X^{-1}})^+/\\rho_X."
}
]
| https://en.wikipedia.org/wiki?curid=6730121 |
67303968 | Antonella Cupillari | Italian-American mathematician
Antonella Cupillari (born 1955) is an Italian-American mathematician interested in the history of mathematics and mathematics education. She is an associate professor of mathematics at Penn State Erie, The Behrend College.
Education and career.
Cupillari earned a laurea at the University of L'Aquila in 1978, and completed her Ph.D. at the University at Albany, SUNY in 1984. Her dissertation, "A Small Boundary for formula_0 on a Strictly Pseudoconvex Domain", concerned functional analysis, and was supervised by R. Michael (Rolf) Range; she also published it in the "Proceedings of the American Mathematical Society".
Cupillari joined the faculty at Penn State Erie in 1984 and was promoted to associate professor in 1992.
Books.
Cupillari is the author of books on mathematics and the history of mathematics including:
Recognition.
Cupillari was the 2008 winner of the Award for Distinguished College or University Teaching of Mathematics of the Allegheny Mountain Section of the Mathematical Association of America.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H^{\\infty}"
}
]
| https://en.wikipedia.org/wiki?curid=67303968 |
67306546 | Pyramid vector quantization | Pyramid vector quantization (PVQ) is a method used in audio and video codecs to quantize and transmit unit vectors, i.e. vectors whose magnitudes are known to the decoder but whose directions are unknown. PVQ may also be used as part of a gain/shape quantization scheme, whereby the magnitude and direction of a vector are quantized separately from each other. PVQ was initially described in 1986 in the paper "A Pyramid Vector Quantizer" by Thomas R. Fischer.
One caveat of PVQ is that it operates under the taxicab distance (L1-norm). Conversion to/from the more familiar Euclidean distance (L2-norm) is possible via vector projection, though results in a less uniform distribution of quantization points (the poles of the Euclidean "n"-sphere become denser than non-poles). No efficient algorithm for the ideal (i.e., uniform) vector quantization of the Euclidean "n"-sphere is known as of 2010.
This non-uniformity can be reduced by applying a deformation, such as a coordinate-wise power, before projection, reducing the mean-squared quantization error by ~10%.
PVQ is used in the CELT audio codec (inherited into Opus) and the Daala video codec.
Overview.
As a form of vector quantization, PVQ defines a codebook of M quantization points, each of which is assigned an integer codeword from 0 to M−1. The goal of the encoder is to find the codeword of the closest vector, which the decoder must decode back into a vector.
The PVQ codebook consists of all N-dimensional points formula_0 with integer-only coordinates whose absolute values sum to a constant K (i.e. whose L1-norm equals K). In set-builder notation:
formula_1
where formula_2 denotes the L1-norm of formula_0.
As it stands, the set S tessellates the surface of an N-dimensional pyramid. If desired, we may reshape it into a sphere by "projecting" the points onto the sphere, i.e. by normalizing them:
formula_3
where formula_4 denotes the L2-norm of formula_0.
Increasing the parameter K results in more quantization points, and hence typically yields a more "accurate" approximation of the original unit vector formula_5 at the cost of larger integer codewords that require more bits to transmit.
Example.
Suppose we wish to quantize three-dimensional unit vectors using the parameter K=2. The codebook then consists of the 18 integer points whose coordinates' absolute values sum to 2: the 6 points of the form (±2, 0, 0) and the 12 points of the form (±1, ±1, 0) (taking all permutations of coordinates), each normalized onto the unit sphere and assigned a codeword from 0 to 17.
Now, suppose we wish to transmit the unit vector <0.592, −0.720, 0.362> (rounded here to 3 decimal places, for clarity). According to our codebook, the closest point we can pick is codeword 13 (<0.707, −0.707, 0.000>), located approximately 0.381 units away from our original point.
Increasing the parameter K results in a larger codebook, which typically increases the reconstruction accuracy. For example, based on the Python code below, K=5 (codebook size: 102) yields an error of only 0.097 units, and K=20 (codebook size: 1602) yields an error of only 0.042 units.
Python code.
import itertools
import math
from typing import List, NamedTuple, Tuple

class PVQEntry(NamedTuple):
    codeword: int
    point: Tuple[int, ...]
    normalizedPoint: Tuple[float, ...]

def create_pvq_codebook(n: int, k: int) -> List[PVQEntry]:
    """Naive algorithm to generate an n-dimensional PVQ codebook
    with k pulses. Runtime complexity: O(k**n)."""
    ret = []
    for p in itertools.product(range(-k, k + 1), repeat=n):
        if sum(abs(x) for x in p) == k:
            # Normalize the integer point onto the unit sphere.
            norm = math.sqrt(sum(x ** 2 for x in p))
            q = tuple(x / norm for x in p)
            ret.append(PVQEntry(len(ret), p, q))
    return ret

def search_pvq_codebook(
    codebook: List[PVQEntry], p: Tuple[float, ...]
) -> Tuple[PVQEntry, float]:
    """Naive algorithm to search the PVQ codebook. Returns the entry in the
    codebook that is closest to p, according to the Euclidean distance."""
    ret = None
    min_dist = None
    for entry in codebook:
        q = entry.normalizedPoint
        dist = math.sqrt(sum((q[j] - p[j]) ** 2 for j in range(len(p))))
        if min_dist is None or dist < min_dist:
            ret = entry
            min_dist = dist
    return ret, min_dist

def example(p: Tuple[float, ...], k: int) -> None:
    n = len(p)
    codebook = create_pvq_codebook(n, k)
    print("Number of codebook entries: " + str(len(codebook)))
    entry, dist = search_pvq_codebook(codebook, p)
    print("Best entry: " + str(entry))
    print("Distance: " + str(dist))

# Quantize the unit vector at spherical angles phi = 1.2, theta = 5.4
# (approximately <0.592, -0.720, 0.362>) with increasing K.
phi = 1.2
theta = 5.4
x = math.sin(phi) * math.cos(theta)
y = math.sin(phi) * math.sin(theta)
z = math.cos(phi)
p = (x, y, z)
example(p, 2)
example(p, 5)
example(p, 20)
Complexity.
The PVQ codebook can be searched in formula_7. Encoding and decoding can likewise be performed in formula_7 using formula_8 memory.
The codebook size obeys the recurrence
formula_9
with formula_10 for all formula_11 and formula_12 for all formula_13.
A closed-form solution is given by
formula_14
where formula_15 is the hypergeometric function.
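The recurrence can also be evaluated directly by dynamic programming; a minimal sketch (the function name is ours) that reproduces the codebook sizes quoted in the example above:

def pvq_codebook_size(n: int, k: int) -> int:
    """V(N, K) via V(N, K) = V(N-1, K) + V(N, K-1) + V(N-1, K-1),
    with V(N, 0) = 1 and V(0, K) = 0 for K != 0."""
    v = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        v[i][0] = 1
    for i in range(1, n + 1):
        for j in range(1, k + 1):
            v[i][j] = v[i - 1][j] + v[i][j - 1] + v[i - 1][j - 1]
    return v[n][k]

print(pvq_codebook_size(3, 2), pvq_codebook_size(3, 5), pvq_codebook_size(3, 20))
# 18 102 1602, matching the codebook sizes in the example above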
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{p}"
},
{
"math_id": 1,
"text": "S(N,K)=\\left\\{\\vec{p} \\in \\mathbb{Z}^N : \\left\\| \\vec{p} \\right\\|_1 = K\\right\\}"
},
{
"math_id": 2,
"text": "\\left\\| \\vec{p} \\right\\|_1"
},
{
"math_id": 3,
"text": "S_\\text{sphere}(N,K)=\\left\\{\\frac{\\vec{p}}{\\left\\| \\vec{p} \\right\\|_2} : \\vec{p} \\in S(N,K)\\right\\}"
},
{
"math_id": 4,
"text": "\\left\\| \\vec{p} \\right\\|_2"
},
{
"math_id": 5,
"text": "\\vec{v}"
},
{
"math_id": 6,
"text": "\\sqrt{2}/2"
},
{
"math_id": 7,
"text": "O(KN)"
},
{
"math_id": 8,
"text": "O(K+N)"
},
{
"math_id": 9,
"text": "V(N,K) = V(N-1,K) + V(N,K-1) + V(N-1,K-1)"
},
{
"math_id": 10,
"text": "V(N,0) = 1"
},
{
"math_id": 11,
"text": " N \\ge 0"
},
{
"math_id": 12,
"text": "V(0,K) = 0"
},
{
"math_id": 13,
"text": "K \\ne 0"
},
{
"math_id": 14,
"text": "V(N,K) = 2N \\cdot {}_2F_1 (1-K,1-N;2;2)."
},
{
"math_id": 15,
"text": "{}_2F_1"
}
]
| https://en.wikipedia.org/wiki?curid=67306546 |
6730796 | Concordant pair | In statistics, a concordant pair is a pair of observations, each on two variables, (X"1",Y"1") and (X"2",Y"2"), having the property that
formula_0
where "sgn" refers to whether a number is positive, zero, or negative (its sign). Specifically, the signum function, often represented as sgn, is defined as:
formula_1
That is, in a concordant pair, both elements of one pair are either greater than, equal to, or less than the corresponding elements of the other pair.
In contrast, a discordant pair is a pair of two-variable observations such that
formula_2
That is, in a discordant pair, the observation with the higher value of X has the lower value of Y.
Uses.
The Kendall tau distance between two series is the total number of discordant pairs. The Kendall tau rank correlation coefficient, which measures how closely related two series of numbers are, is proportional to the difference between the number of concordant pairs and the number of discordant pairs. An estimate of Goodman and Kruskal's gamma, another measure of rank correlation, is given by the ratio of the difference to the sum of the numbers of concordant and discordant pairs. Somers' D is another similar but asymmetric measure, given by the ratio of the difference in the number of concordant and discordant pairs to the number of pairs with unequal values for one of the two variables.
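For small samples these counts can be obtained by brute force. A minimal Python sketch (the function name is ours), assuming no ties, which also shows the Kendall tau estimate (concordant minus discordant, divided by the number of pairs):

from itertools import combinations

def count_pairs(xs, ys):
    """Count concordant and discordant pairs over all pairs of observations."""
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        s = (xs[j] - xs[i]) * (ys[j] - ys[i])  # positive iff the signs agree
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return concordant, discordant

xs, ys = [1, 2, 3, 4], [1, 3, 2, 4]
c, d = count_pairs(xs, ys)
n_pairs = len(xs) * (len(xs) - 1) // 2
print(c, d, (c - d) / n_pairs)  # 5 1 0.666..., the Kendall tau estimate
| [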
{
"math_id": 0,
"text": " \\sgn (X_2 - X_1)\\ = \\sgn (Y_2 - Y_1), "
},
{
"math_id": 1,
"text": " \\sgn x = \\begin{cases} \n-1, & x < 0 \\\\\n0 , & x = 0 \\\\\n1 , & x > 0\n\\end{cases} "
},
{
"math_id": 2,
"text": " \\sgn (X_2 - X_1)\\ = - \\sgn (Y_2 - Y_1). "
}
]
| https://en.wikipedia.org/wiki?curid=6730796 |
67314052 | 2 Chronicles 14 | Second Book of Chronicles, chapter 14
2 Chronicles 14 is the fourteenth chapter of the Second Book of Chronicles in the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Asa, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 15 verses in Christian Bibles, but 14 verses in the Hebrew Bible with the following verse numbering comparison:
This article generally follows the common numbering in Christian English Bible versions, with notes to the numbering in Hebrew Bible versions.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Asa king of Judah (14:1–2).
The record of Asa's reign in the Chronicles (2 Chronicles 14–16) is almost three times longer than in 1 Kings (15:9–24), consisting of two distinct phases: a first phase in which Asa relied on God and was blessed (2 Chronicles 14–15), and a second in which he did not rely on God and was punished (2 Chronicles 16).
Although not free from fault (2 Chronicles 16:7, 10, 12), the evaluation of Asa is positive (verse 2), because overall "he did that which was good and right" (cf. 1 Kings 15:14).
"And Abijah slept with his fathers, and they buried him in the City of David. Asa his son then reigned in his place. In his days the land was quiet for ten years."
Asa’s religious accomplishments (14:3–8).
This section deals with three themes: the removal of foreign cults (verses 3–5), the building of fortified cities (verses 6–7), and the organization of the army (verse 8).
The Chronicles omits the abolition of the "hierodules" ("male prostitutes") and all edifices recorded in 1 Kings 15:12.
War between Asa and Zerah the Ethiopian (14:9–15).
This section records a sacral war (cf. 2 Chronicles 13:2–20), where the outnumbered army of Judah faced a strong enemy, but when they cried to God (in accordance to 2 Chronicles 6:34–35), they achieved a victory and took abundant booty (verses 12–15). The phrase "cities around Gerar" (verse 14) and the words "tents ... sheep... goats ...camels" indicate that the defeated enemy was an "Arab-Edomite tribe".
"Then Zerah the Ethiopian came out against them with an army of a million men and three hundred chariots, and he came to Mareshah."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67314052 |
67314545 | 2 Chronicles 15 | Second Book of Chronicles, chapter 15
2 Chronicles 15 is the fifteenth chapter of the Second Book of Chronicles in the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Asa, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 19 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
The prophecy of Azariah (15:1–7).
The section records Azariah's speech, which can be divided into three parts (after the introduction): the principle of reciprocity in divine-human relations (verse 2), a historical retrospect (verses 3–6), and a concluding call to courageous deeds (verse 7).
The speech's introduction addresses a broad audience: king Asa, people of Judah, and of Benjamin (excluding the Israelites from the northern kingdom). Verse 2 speaks of the reciprocity principles in divine-human relations "The Lord is with you, while you are with him", corollary of the 'measure-for-measure' principle spoken by Shemaiah in 2 Chronicles 12:5. The historical parts could refer to the judges period (cf. e.g. Judges 2:11-14; 17:6) or to a midrash-like reworking of Hosea 3:4. Azariah's speech concluded with a 'call for courageous deeds', patterned after Jeremiah 31:16: 'For your work shall be rewarded'.
"And the Spirit of God came upon Azariah the son of Oded:"
Asa’s reforms (15:8–19).
Asa responded immediately to Azariah's sermon by carrying out religious reforms, and then initiated a great assembly (modelled on 2 Kings 23) to establish a covenant renewal (cf. Exodus 19:3–8), accompanied with a joyful and enthusiastic sacrificial ceremony. The general assembly (verse 9) not only included the people of Judah and Benjamin, but also those from the northern kingdom who were regarded as 'strangers' from the Chronicler's perspective, from the tribes Ephraim, Manasseh, and Simeon (cf. 2 Chronicles 34:6). The "third month" points to the date of "the Sinai theophany" and the "Feast of Weeks" ("Shavuot" or "Pentecost") with sacrifices based on the number "seven" (seven hundred...seven thousand) to link with that particular feast (seven times seven).
"And when Asa heard these words, and the prophecy of Oded the prophet, he took courage, and put away the abominable idols out of all the land of Judah and Benjamin, and out of the cities which he had taken from mount Ephraim, and renewed the altar of the Lord, that was before the porch of the Lord."
Verse 8.
This may refer to cities in the vicinity of the mountains of Ephraim (cf. 2 Chronicles 16:6), which Asa took after his war with Baasha, or even to his father Abijah's conquest (2 Chronicles 13:19).
"So they gathered together at Jerusalem in the third month, in the fifteenth year of the reign of Asa."
"And there was no war until the thirty-fifth year of the reign of Asa."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67314545 |
67315073 | 2 Chronicles 16 | Second Book of Chronicles, chapter 16
2 Chronicles 16 is the sixteenth chapter of the Second Book of Chronicles in the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Asa, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 14 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
War between Asa and Baasha (16:1–6).
The war against Baasha of Israel marks the second phase of Asa's reign, when he behaved badly and was accordingly punished. During the first period Asa relied on God in battle and listened to God's prophet (Azariah), but in the second he did not rely on God, making an alliance with Ben-hadad of Aram in his war and later ignoring Hanani's sermon. As a consequence, throughout this last phase of his reign Asa was always plagued by wars (cf. 1 Kings 15:16).
"In the thirty-sixth year of the reign of Asa, Baasha king of Israel came up against Judah and built Ramah, that he might let none go out or come in to Asa king of Judah."
Verse 1.
In Thiele's chronology, this year corresponds to September 895 to September 894 BCE. Thiele assumes that the year here refers not to Asa's personal rule but to the duration of the kingdom of Judah (cf. the "20th year of Artaxerxes", 445 BCE, in Nehemiah 2:1 was calculated from the beginning of Xerxes' reign in 465 BCE). The Chronicler placed the invasions in a correct chronology after the rest in the first 10 years of Asa's rule (from 910 to 900 BCE), starting with the attack of the Cushites and the Lubim (but no war with Israel as yet) just before the third month of Asa's 15th year (between September 896 and September 895 BCE), which ended with a victory celebration of Judah. This caused a migration of people from the northern kingdom to the south, which Baasha's invasion attempted to halt. 1 Kings 15:33 notes that Baasha became the king of Israel in the third year of Asa's reign (909/908 BCE) and ruled for 24 years (until 886/885 BCE), thus only until the 26th year of Asa (1 Kings 16:8). Therefore, the 36th year from the Division was also the 16th year of Asa.
Hanani’s message to Asa (16:7–10).
The short but strong speech of Hanani has the elements of three other prophets: the proclamation of Isaiah (verse 7; cf. Isaiah 7:9; 10:20; 31:1), the words of Zechariah (verse 9; cf. Zechariah 4:10), and the suffering of Jeremiah (verse 10; Jeremiah 20:2–3).
"And at that time Hanani the seer came to King Asa of Judah saying, “Because you depended on the king of Aram and did not depend on the Lord your God, therefore the army of the king of Aram escaped from your hand."
Death of Asa (16:11–14).
The extensive concluding acknowledgement of Asa's reign, with the unusual words of appreciation before the description of his burial, indicates that his son Jehoshaphat had already taken on the business of government since Asa's illness rendered him unable to rule (verse 12). The sickness of Asa was seen as a punishment for his shameful behaviour towards Hanani the seer (1 Kings 15:23), an irony given that the king's name can be interpreted as "YHWH heals". Since the word "Asa" could also be derived from the Aramaic word for 'myrrh', the funeral pyre (cf. Jeremiah 34:5; 2 Chronicles 21:19) with the incense and the delicate spices shows that Asa was buried in 'a way that accorded with his name'.
"And in the thirty-ninth year of his reign, Asa became diseased in his feet, and his malady was severe; yet in his disease he did not seek the Lord, but the physicians."
"And Asa slept with his fathers, dying in the forty-first year of his reign."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67315073 |
67316066 | High-energy string scattering amplitudes | The "Gross conjecture" regarding high energy symmetry of string theory was based on the saddle-point calculation of hard string scattering amplitudes (SSA) of both the closed and open string theories. The conjecture claimed that there existed infinite "linear relations" among hard SSA of different string states. Moreover, these infinite linear relations were so powerful that they can be used to solve all the hard SSA and express them in terms of one amplitude. Some monographs had made speculations about this hidden stringy symmetry without getting any conclusive results. However, the saddle-point calculation of the hard SSA which was claimed to be valid for all string states and all string loop orders was pointed out to be inconsistent for the cases of the "excited" string states in a series of works done by the method of decoupling of zero-norm states (ZNS).
It was then further shown that, even at the closed string-tree level, there was "no" reliable saddle-point in the hard SSA calculation. Three pieces of evidence were given to demonstrate the inconsistency of the "saddle-point". So instead of using the saddle-point method, the authors used the KLT formula to obtain the correct hard closed SSA, which differs from the result of Gross and Mende by an oscillating prefactor. This prefactor consistently implied the existence of infinitely many zeros and poles in the hard SSA.
Soon afterwards a similar conclusion was reached based on the group theoretical calculation of SSA. It was found that up to the string one-loop level the saddle-point calculation was valid only for the hard four-tachyon SSA, but was incorrect for other hard SSA of excited string states. For this reason, the authors admitted that they could not consistently find any linear relations as suggested in the Gross conjecture.
For the case of the open bosonic string at the mass level formula_0, as an example, the hard open SSA of Gross and Manes were miscalculated to be
formula_1
which are inconsistent with the Ward identities, or the decoupling of zero-norm states (ZNS), in the hard scattering limit, to be discussed below.
The importance of two types of ZNS was stressed in the massive background field calculation of stringy symmetries. It was shown that in the weak field approximation (but valid for all energies) an "inter-particle" symmetry transformation
formula_2
for two propagating states formula_3 and formula_4 at mass level formula_0 of open bosonic string can be generated by the formula_5 vector ZNS with polarization formula_6
formula_7
Incidentally, a set of discrete ZNS formula_8 were shown to form the formula_9 spacetime symmetry algebra of the toy formula_10 string theory.
The first set of linear relations among hard SSA was obtained for the mass level formula_0 of the formula_11 open bosonic string theory by the method of decoupling of ZNS. (Note that the decoupling of ZNS was also used in the group theoretical calculation of SSA to fix the measure in the SSA calculation). By solving the following three linear relations or stringy Ward identities among the four leading order hard SSA
formula_12
one obtains the ratios
formula_13
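These ratios can be verified directly by solving the three stringy Ward identities above as a linear system. A minimal sketch using sympy (the variable names are ours; T_LTs and T_LTa denote the symmetric and antisymmetric (LT) amplitudes):

from sympy import symbols, solve

T_TTT, T_LLT, T_LTs, T_LTa = symbols("T_TTT T_LLT T_LTs T_LTa")

# The three Ward identities at mass level M^2 = 4, each set to zero:
eqs = [
    T_LLT + T_LTs,
    10 * T_LLT + T_TTT + 18 * T_LTs,
    T_LLT + T_TTT + 9 * T_LTa,
]
print(solve(eqs, [T_TTT, T_LTs, T_LTa]))
# {T_TTT: 8*T_LLT, T_LTs: -T_LLT, T_LTa: -T_LLT}, i.e. the ratios 8 : 1 : -1 : -1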
The ratios were justified by a set of sample calculations of hard SSA. Similar results were obtained for the mass level formula_14. On the other hand, a remedy calculation was performed to recover the missing terms calculated by Gross and Manes in order to obtain the correct four ratios above.
The ratios calculated above for the mass level formula_0 can be generalized to arbitrary mass levels formula_15
formula_16
In addition to the method of decoupling of ZNS, a dual method called the Virasoro constraint method and a corrected saddle-point calculation (for string-tree amplitudes) also gave the same ratios above. It is important to note that the linear relations and ratios obtained by the decoupling of ZNS are valid for all string-loop orders, since ZNS should decouple from all loop amplitudes due to the unitarity of the theory. This important fact was shared neither by the saddle-point calculation nor by the group theoretical calculation of SSA. On the other hand, one believes that by keeping formula_17 fixed as a finite constant one can obtain more information about the high energy behavior of string theory compared to the tensionless string (formula_18) approach, in which all string states are massless.
Since the linear relations obtained by the decoupling of ZNS are valid order by order and share the same forms for all orders in string perturbation theory, one expects that there exists a "stringy symmetry" of the theory. Indeed, two such symmetry groups were suggested recently to be the formula_19 group in the "Regge" scattering limit and the formula_20 group in the "non-relativistic" scattering limit. Moreover, it was shown that the linear ratios for the mass level formula_21 can be extracted from the Regge SSA.
More recently, researchers constructed the exact SSA of three tachyons and one arbitrary string state, or the Lauricella SSA (LSSA)
formula_23
in the formula_11 open bosonic string theory. In addition, they discovered the Lie algebra of the formula_22 symmetry group
formula_24
valid for "all" kinematic regimes of the LSSA. Moreover, the linear ratios presented above for the mass level formula_21 can be rederived by the LSSA in the hard scattering limit.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M^{2}=4"
},
{
"math_id": 1,
"text": " T_{TTT}\\propto T_{[LT]}, T_{LLT}=T_{(LT)}=0,"
},
{
"math_id": 2,
"text": " \\delta C_{(\\mu\\nu\\lambda)}=\\frac{1}{2}\\partial_{(\\mu}\\partial_{\\nu} \\theta_{\\lambda)}^{2}-2\\eta_{(\\mu\\nu}\\theta_{\\lambda)}^{2},\\delta C_{[\\mu\\nu ]}=9\\partial_{\\lbrack\\mu}\\theta_{\\nu]}^{2} "
},
{
"math_id": 3,
"text": "C_{(\\mu\\nu\\lambda)}"
},
{
"math_id": 4,
"text": "C_{[\\mu\\nu]}"
},
{
"math_id": 5,
"text": "D_{2}"
},
{
"math_id": 6,
"text": "\\theta_{\\mu}^{2}"
},
{
"math_id": 7,
"text": " |D_{2}\\rangle=[(\\frac{1}{2}k_{\\mu}k_{\\nu}\\theta_{\\lambda}^{2}+2\\eta_{\\mu\\nu }\\theta_{\\lambda}^{2})\\alpha_{-1}^{\\mu}\\alpha_{-1}^{\\nu}\\alpha_{-1}^{\\lambda }+9k_{\\mu}\\theta_{\\nu}^{2}\\alpha_{-2}^{[\\mu}\\alpha_{-1}^{\\nu]}-6\\theta_{\\mu }^{2}\\alpha_{-3}^{\\mu}]\\left\\vert 0,k\\right\\rangle , k\\cdot\\theta ^{2}=0. "
},
{
"math_id": 8,
"text": "G_{J,M}^{+}"
},
{
"math_id": 9,
"text": "w_{\\infty}"
},
{
"math_id": 10,
"text": "2D"
},
{
"math_id": 11,
"text": "26D"
},
{
"math_id": 12,
"text": "T_{LLT}^{5\\rightarrow 3}+T_{(LT)}^{3} =0, 10T_{LLT}^{5\\rightarrow 3}+T_{TTT}^{3}+18 T_{(LT)}^{3} =0, T_{LLT}^{5\\rightarrow 3}+T_{TTT}^{3}+9 T_{[LT]}^{3}=0,"
},
{
"math_id": 13,
"text": "T_{TTT}:T_{LLT}:T_{(LT)}:T_{[LT]}=8:1:-1:-1."
},
{
"math_id": 14,
"text": "M^{2}=6 "
},
{
"math_id": 15,
"text": "M^{2}=2(N-1)"
},
{
"math_id": 16,
"text": " \\frac{T^{(N,2m,q)}}{T^{(N,0,0)}}=\\left( -\\frac{1}{M}\\right) ^{2m+q}\\left( \\frac{1}{2}\\right) ^{m+q}(2m-1)!!. "
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "\\alpha^{\\prime}\\rightarrow\\infty"
},
{
"math_id": 19,
"text": "SL(5,\\mathbb{C})"
},
{
"math_id": 20,
"text": "SL(4,\\mathbb{C})"
},
{
"math_id": 21,
"text": "M^2=2(N-1)"
},
{
"math_id": 22,
"text": "SL(K+3,\\mathbb{C})"
},
{
"math_id": 23,
"text": " A_{st}^{(r_{n}^{T},r_{m}^{P},r_{l}^{L})} =\\prod_{n=1}\\left[ -(n-1)!k_{3}^{T}\\right] ^{r_{n}^{T}}\\cdot\\prod_{m=1}\\left[ -(m-1)!k_{3} ^{P}\\right] ^{r_{m}^{P}}\\prod_{l=1}\\left[ -(l-1)!k_{3}^{L}\\right] ^{r_{l}^{L}}\\cdot B\\left( -\\frac{t}{2}-1,-\\frac{s}{2}-1\\right) F_{D}^{(K)}\\left( -\\frac{t}{2}-1;R_{n}^{T},R_{m}^{P},R_{l}^{L};\\frac{u}{2}+2-N;\\tilde{Z}_{n} ^{T},\\tilde{Z}_{m}^{P},\\tilde{Z}_{l}^{L}\\right) "
},
{
"math_id": 24,
"text": "[E_{ij},E_{kl}]=\\delta_{jk}E_{il}-\\delta_{li}E_{kj}; 1\\le i,j\\le K+3 "
}
]
| https://en.wikipedia.org/wiki?curid=67316066 |
67321416 | Besicovitch inequality | In mathematics, the Besicovitch inequality is a geometric inequality relating volume of a set and distances between certain subsets of its boundary. The inequality was first formulated by Abram Besicovitch.
Consider the n-dimensional cube formula_0 with a Riemannian metric formula_1. Let
formula_2
denote the distance between opposite faces of the cube. The Besicovitch inequality asserts that
formula_3
For the standard Euclidean metric on the cube, each distance d_i equals 1 and the volume equals 1, so equality holds and the inequality is sharp.
The inequality can be generalized in the following way. Given an n-dimensional Riemannian manifold M with connected boundary and a smooth map formula_4, such that the restriction of f to the boundary of M is a degree 1 map onto formula_5, define
formula_6
Then formula_7.
The Besicovitch inequality was used to prove systolic inequalities on surfaces.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[0,1]^n"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "d_i= dist_g(\\{x_i=0\\}, \\{x_i=1\\})"
},
{
"math_id": 3,
"text": "\\prod_i d_i \\geq Vol([0,1]^n,g)"
},
{
"math_id": 4,
"text": "f: M \\rightarrow [0,1]^n"
},
{
"math_id": 5,
"text": " \\partial [0,1]^n"
},
{
"math_id": 6,
"text": "d_i= dist_M(f^{-1}(\\{x_i=0\\}), f^{-1}(\\{x_i=1\\}))"
},
{
"math_id": 7,
"text": "\\prod_i d_i \\geq Vol(M)"
}
]
| https://en.wikipedia.org/wiki?curid=67321416 |
673234 | Prefix grammar | In theoretical computer science and formal language theory, a prefix grammar is a type of string rewriting system, consisting of a set of string rewriting rules, and similar to a formal grammar or a semi-Thue system. What is specific about prefix grammars is not the shape of their rules, but the way in which they are applied: only prefixes are rewritten. The prefix grammars describe exactly all regular languages.
Formal definition.
A prefix grammar "G" is a 3-tuple, (Σ, "S", "P"), where
For strings "x", "y", we write "x" →"G" "y" (and say: "G" can derive "y" from "x" in one step) if there are strings "u, v, w" such that &NoBreak;&NoBreak;, and "v" → "w" is in "P". Note that →"G" is a binary relation on the strings of Σ.
The "language" of "G", denoted &NoBreak;&NoBreak;, is the set of strings derivable from "S" in zero or more steps: formally, the set of strings "w" such that for some "s" in "S", "s R w", where "R" is the transitive closure of →"G".
Example.
The prefix grammar with Σ = {0, 1}, S = {01, 10}, and P = {01 → 0101, 10 → 100}
describes the language defined by the regular expression
formula_0
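The derivation relation is easy to animate. A minimal Python sketch (the function name is ours) that enumerates the language of the example grammar breadth-first, up to a length bound:

from collections import deque

def enumerate_language(start, rules, max_len=8):
    """BFS over prefix rewriting: x derives y when x = vu and y = wu for a rule v -> w."""
    seen = set(start)
    queue = deque(start)
    while queue:
        x = queue.popleft()
        for v, w in rules:
            if x.startswith(v):
                y = w + x[len(v):]
                if len(y) <= max_len and y not in seen:
                    seen.add(y)
                    queue.append(y)
    return sorted(seen, key=lambda s: (len(s), s))

print(enumerate_language({"01", "10"}, [("01", "0101"), ("10", "100")]))
# ['01', '10', '100', '0101', '1000', ...]: exactly 01(01)* and 100* up to length 8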
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 01(01)^* \\cup 100^* "
}
]
| https://en.wikipedia.org/wiki?curid=673234 |
6732384 | Graphical timeline of the Stelliferous Era | This is the timeline of the Stelliferous era but also partly charts the Primordial era, and charts more of the Degenerate era of the heat death scenario.
Timeline.
The scale is formula_0, where formula_1 is the time since the Big Bang, expressed in years. For example, 10 billion years after the Big Bang (3.8 billion years ago) corresponds to formula_2.
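A minimal sketch of the scale function (the function name is ours), reproducing the worked example above:

import math

def timeline_scale(years_since_big_bang: float) -> float:
    """Chart position for a time t, in years since the Big Bang: 10 * log10(t)."""
    return 10 * math.log10(years_since_big_bang)

print(timeline_scale(10_000_000_000))  # 100.0, as in the example above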
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "10 \\times \\log_{10}\\{t\\}"
},
{
"math_id": 1,
"text": "\\{t\\}"
},
{
"math_id": 2,
"text": "10 \\times \\log_{10}\\{10,000,000,000\\} = 10 \\times 10 = 100"
}
]
| https://en.wikipedia.org/wiki?curid=6732384 |
6732652 | Gibbs–Donnan effect | Behaviour of charged particles near a semi-permeable membrane
The Gibbs–Donnan effect (also known as the Donnan's effect, Donnan law, Donnan equilibrium, or Gibbs–Donnan equilibrium) is a name for the behaviour of charged particles near a semi-permeable membrane that sometimes fail to distribute evenly across the two sides of the membrane. The usual cause is the presence of a different charged substance that is unable to pass through the membrane and thus creates an uneven electrical charge. For example, the large anionic proteins in blood plasma are not permeable to capillary walls. Because small cations are attracted, but are not bound to the proteins, small anions will cross capillary walls away from the anionic proteins more readily than small cations.
Thus, some ionic species can pass through the barrier while others cannot. The solutions may be gels or colloids as well as solutions of electrolytes, and as such the phase boundary between gels, or a gel and a liquid, can also act as a selective barrier. The electric potential arising between two such solutions is called the Donnan potential.
The effect is named after the American physicist Josiah Willard Gibbs, who proposed it in 1878, and the British chemist Frederick G. Donnan, who studied it experimentally in 1911.
The Donnan equilibrium is prominent in the triphasic model for articular cartilage proposed by Mow and Lai, as well as in electrochemical fuel cells and dialysis.
The Donnan effect is the osmotic pressure attributable to cations (Na+ and K+) attached to dissolved plasma proteins.
Example.
The presence of a charged impermeant ion (for example, a protein) on one side of a membrane will result in an asymmetric distribution of permeant charged ions. The Gibbs–Donnan equation at equilibrium states (assuming the permeant ions are Na+ and Cl−):
formula_0
Equivalently,
formula_1
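As an illustration, consider two equal-volume compartments: side α initially holding NaCl and side β the sodium salt of an impermeant monovalent anion P−. Writing x for the concentration of NaCl that crosses to β, electroneutrality on each side plus the Donnan product condition gives (c_α − x)² = (c_β + x)·x, which has a closed-form solution. A minimal Python sketch (the function name and setup are ours, assuming ideal dilute solutions):

def donnan_equilibrium(c_nacl: float, c_nap: float) -> dict:
    """Equilibrium concentrations (mol/L) for equal volumes:
    side alpha starts with NaCl at c_nacl, side beta with Na+P- at c_nap,
    where P- cannot cross the membrane; x mol/L of Na+ and Cl- move to beta."""
    x = c_nacl ** 2 / (2 * c_nacl + c_nap)  # from (c_nacl - x)**2 = (c_nap + x) * x
    return {
        "Na_alpha": c_nacl - x, "Cl_alpha": c_nacl - x,
        "Na_beta": c_nap + x, "Cl_beta": x, "P_beta": c_nap,
    }

eq = donnan_equilibrium(0.1, 0.1)
print(eq)
# The Donnan condition [Na]a[Cl]a = [Na]b[Cl]b holds at the solution:
assert abs(eq["Na_alpha"] * eq["Cl_alpha"] - eq["Na_beta"] * eq["Cl_beta"]) < 1e-12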
Double Donnan.
Note that Sides 1 and 2 are no longer in osmotic equilibrium (i.e. the total osmolytes on each side are not the same)
"In vivo", ion balance does equilibriate at the proportions that would be predicted by the Gibbs–Donnan model, because the cell cannot tolerate the attendant large influx of water. This is balanced by instating a functionally impermeant cation, Na+, extracellularly to counter the anionic protein. Na+ does cross the membrane via leak channels (the permeability is approximately 1/10 that of K+, the most permeant ion) but, as per the pump-leak model, it is extruded by the Na+/K+-ATPase.
pH change.
Because there is a difference in concentration of ions on either side of the membrane, the pH (defined using the relative activity) may also differ when protons are involved. In many instances, from ultrafiltration of proteins to ion exchange chromatography, the pH of the buffer adjacent to the charged groups of the membrane is different from the pH of the rest of the buffer solution. When the charged groups are negative (basic), then they will attract protons so that the pH will be lower than the surrounding buffer. When the charged groups are positive (acidic), then they will repel protons so that the pH will be higher than the surrounding buffer.
Physiological applications.
Red blood cells.
When tissue cells are in a protein-containing fluid, the Donnan effect of the cytoplasmic proteins is equal and opposite to the Donnan effect of the extracellular proteins. The opposing Donnan effects cause chloride ions to migrate inside the cell, increasing the intracellular chloride concentration. The Donnan effect may explain why some red blood cells do not have active sodium pumps; the effect relieves the osmotic pressure of plasma proteins, which is why sodium pumping is less important for maintaining the cell volume.
Neurology.
Brain tissue swelling, known as cerebral oedema, results from brain injury and other traumatic head injuries that can increase intracranial pressure (ICP). Negatively charged molecules within cells create a fixed charge density, which increases intracranial pressure through the Donnan effect. ATP pumps maintain a negative membrane potential even though negative charges leak across the membrane; this action establishes a chemical and electrical gradient.
The negative charge in the cell and ions outside the cell creates a thermodynamic potential; if damage occurs to the brain and cells lose their membrane integrity, ions will rush into the cell to balance chemical and electrical gradients that were previously established. The membrane voltage will become zero, but the chemical gradient will still exist. To neutralize the negative charges within the cell, cations flow in, which increases the osmotic pressure inside relative to the outside of the cell. The increased osmotic pressure forces water to flow into the cell and tissue swelling occurs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[\\text{Na}^+]_\\alpha [\\text{Cl}^-]_\\alpha = [\\text{Na}^+]_\\beta [\\text{Cl}^-]_\\beta"
},
{
"math_id": 1,
"text": "\\frac{[\\text{Na}^+]_\\alpha}{[\\text{Na}^+]_\\beta} = \\frac{[\\text{Cl}^-]_\\beta}{[\\text{Cl}^-]_\\alpha}"
}
]
| https://en.wikipedia.org/wiki?curid=6732652 |
67327045 | Allan Adams | American physicist and oceanographer
Allan Adams is an American physicist and oceanographer. His research in physics has focused on string theory, QFT, and fluid dynamics, while his work in oceanography and ocean engineering have focused on high-precision optical sensing and imaging and on low-cost scalable instrumentation. He currently leads the Future Ocean Lab at Massachusetts Institute of Technology and is a visiting oceanographer at the Woods Hole Oceanographic Institution.
Adams earned degrees in physics from Harvard College, UC Berkeley, and Stanford University before joining the faculty of the MIT Department of Physics in 2008. Adams opened the Future Ocean Lab at MIT in January 2017, and became a Visiting Investigator at the Woods Hole Oceanographic Institution in 2018. In 2021, Adams co-founded Station B, a non-profit ocean engineering field station in Bermuda.
Adams is an avid sailplane pilot, cave diver, and father of two boys. He is married to MIT cognitive neuroscientist Rebecca Saxe.
Awards and recognition.
Adams was a Junior Fellow in the Harvard Society of Fellows before joining the faculty at MIT. Adams has received numerous awards for his teaching and mentorship, including MIT's School of Science Teaching Prize, the Buechner Teaching and Advising Prizes, and the Baker Memorial Award. His introductory lectures on Quantum Mechanics are freely available via MIT OpenCourseWare and have been viewed more than 10 million times. His talks on gravitational waves at TED 2016 and TED 2014 have been viewed more than 4.7 million times.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{N}={1}"
}
]
| https://en.wikipedia.org/wiki?curid=67327045 |
67330391 | 2 Chronicles 17 | Second Book of Chronicles, chapter 17
2 Chronicles 17 is the seventeenth chapter of the Second Book of Chronicles in the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter (and the next ones until chapter 20) is the reign of Jehoshaphat, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 19 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter divides into three parts: two general judgements on Jehoshaphat's rule (17:1–6, 10–19) and one report on teaching the law to the people (17:7–9). The first judgement (verses 1–6) focuses on domestic politics and religion, whereas the second (verses 10–19) concerns foreign and military policy. Both 1 Kings and 2 Chronicles commend his reign (1 Kings 22:43–44; 2 Chronicles 16:3–4; 20:32–33), but the Chronicles provide more information not recorded in 1 Kings.
Jehoshaphat, king of Judah (17:1–6).
Jehoshaphat's reign was marked by peace; in particular, there were no conflicts with the northern kingdom (verse 1), drawing a parallel with Solomon. Despite his successes as ruler—honored and wealthy (verse 5)—Jehoshaphat remained humble and God-fearing.
Jehoshaphat’s educational plan (17:7–9).
The education of all the people of Judah in the book of the law of the Lord (Deuteronomy 17:18–20; 2 Kings 22:8–13) was performed by the royal officers, Levites, and priests (in that particular order), reflecting the growing importance of Torah teaching and of the Levites as teachers in the postexilic era (Ezra 7:25; Nehemiah 8).
"Also in the third year of his reign he sent to his princes, even to Benhail, and to Obadiah, and to Zechariah, and to Nethaneel, and to Michaiah, to teach in the cities of Judah."
Jehoshaphat’s military power (17:10–19).
This section contains a second summarizing description of Jehoshaphat's reign from the perspective of foreign and military policy: all Judah and the lands around Judah were struck by fear of the Lord (verse 10) and paid tributes to the king (verse 11; cf. 2 Chronicles 27:5). The army's composition (verses 14–19) was closely linked with the construction of forts, differentiating between army divisions from Judah and from Benjamin (the latter smaller than Judah's and equipped with light armour consisting of bows and shields).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67330391 |
67330944 | 2 Chronicles 18 | Second Book of Chronicles, chapter 18
2 Chronicles 18 is the eighteenth chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter (as all chapters from 17 to 20) is the reign of Jehoshaphat, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 34 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter parallels 1 Kings 22:1–40 closely (especially from verse 4), with a different introduction (the Chronicles do not mention the three years of conflict between the northern kingdom of Israel and Aram leading to the battle) and conclusion (a shorter narrative of Ahab's death than 1 Kings 22), mainly to demonstrate that 'true YHWH-prophecy also existed in the northern kingdom'. The alliance with Ahab was the first of Jehoshaphat's two missteps (both involving the northern kingdom).
Jehoshaphat’s alliance with Ahab (18:1–11).
Verse 1 refers to 2 Chronicles 17:5 concerning Jehoshaphat's wealth and to 2 Kings 8:18, 27 about the marriage of Jehoshaphat's son, Joram, to Ahab's daughter, Athaliah (2 Chronicles 21:6; 22:2; cf. 2 Kings 8:18), probably motivated by mutual political interests, but drawing the royal house of Judah away from the Lord (2 Chronicles 21:6; 22:3–5).
"Jehoshaphat had riches and honor in abundance; and by marriage he allied himself with Ahab."
Micaiah's message of defeat (18:12–27).
Micaiah's speech describes a meeting of the Lord with his heavenly council (verses 18–22; cf. Job 1:6; 2:1; Psalm 82:1) where the prophet was a witness to the conversation (cf. Jeremiah 23:18, 22).
"And Micaiah said, "If you return in peace, the Lord has not spoken by me." And he said, "Hear, all you peoples!""
Verse 27.
The last words of the prophet Micaiah the son of Imlah ("šim-‘ū ‘amîm kulām", "hear all you peoples") are exactly the first words of the prophet Micah the Morasthite in the Book of Micah (Micah 1:2).
Death of Ahab, king of Israel (18:28–34).
This section parallels the account in 1 Kings closely, with some differences in the last parts. For example, in verse 34 the sentence [Ahab] "was (or, continued) holding himself up in the chariot, facing Aram, until the evening" is a clearer rendering of 1 Kings 22:35, which reads that [Ahab] "was held up in the chariot, ... and he died in the evening". The Chronicler also omits the remaining narrative regarding the return of the army and the washing of Ahab's chariot at the pool of Samaria (1 Kings 22:36–38), which did not concern Jehoshaphat.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67330944 |
67331072 | Wells-Riley model | Model of airborne disease transmission
The Wells-Riley model is a simple model of the airborne transmission of infectious diseases, developed by William F. Wells and Richard L. Riley for tuberculosis and measles.
Wells-Riley can also be applied to other diseases transmitted through the air, such as COVID-19. The model describes the situation where one or more infected people are sharing a room (and so the room air) with other people who are susceptible to infection. It makes predictions for the probability that a susceptible person becomes infected. The prediction is that infection is more likely in small, poorly ventilated rooms, and if the infected person is highly infectious.
The Wells-Riley model is a highly simplified description of a very complex process, but it does at least predict how the probability of infection varies with factors within our control, such as room ventilation.
Description of model.
Wells-Riley assumes that the air contains doses of the infectious bacterium or virus, and that you become infected if you breathe one dose in, i.e., the probability a person becomes infected, formula_0, is given by
formula_1
This dose is not a single bacterium or virus, but however many are needed to start an infection. These infectious doses are sometimes called 'quanta' - no relation to quantum physics. The doses are breathed out or otherwise emitted by the infectious person into the air of the room, such that in the room there is a concentration formula_2 of these doses per unit volume of the air. If you are breathing in air at volume rate formula_3, then after a time formula_4 in the room, the
formula_5
The Wells-Riley model then relies on standard Poisson statistics, which predicts for the probability of infection
formula_6
after a time formula_4 in the room. This is just the Poisson statistics expression for the probability of one or more doses being inhaled, once we know the mean number.
So the prediction is that you are more likely to become infected if the concentration of infectious doses in the room air is high, or if you spend longer in the room. The concentration of doses will tend to be high in small, poorly ventilated rooms, and smaller in larger, better ventilated rooms.
Relation of the Wells-Riley model to epidemiology.
Note that the Wells-Riley model approaches transmission of airborne diseases as a physical transport problem, i.e., as the problem of how a virus or bacterium gets from one human body to another.
For transmission of COVID-19, for example, this would be how a virus breathed out by an infected person, can cross a room and be breathed in by a susceptible person. This is a different approach from that taken in the epidemiology of infectious diseases, which may gather information about who (e.g., nurses, factory workers) becomes infected, in what situations (e.g., the home, factories), and understand the spread of a disease in those terms - without considering how a virus or bacterium actually gets from one person to another.
However, the probability of infection predicted by the Wells-Riley model is close to the attack rate (also called secondary attack rate, and note that this 'rate' is a probability not a rate) in epidemiology. Compare the definition of formula_0 in this page with the definition of the attack rate.
Mechanism of transmission of infectious diseases through the air.
Wells-Riley is only applicable for transmission directly via the air, not via the susceptible person picking up the infectious agent from a surface (fomite transmission). Because the model assumes the air is well mixed, it does not account for the region within one or two metres of an infected person having a higher concentration of the infectious agent. Breathing and speaking produce a cone of warm (body temperature), humid air that moves out into and dissipates into the room air over a distance of about one to two metres, while a sneeze, with its much faster moving air, produces air movement up to metres away. If the person breathing/speaking/sneezing is infected, then an infectious agent such as the tuberculosis bacterium or a respiratory virus is expected to be more concentrated in this cone of air, but the infectious agent can also (at least in some cases) spread into the room air.
Assumptions made by the Wells-Riley model.
Estimating the number of inhaled doses requires more assumptions. The assumptions made by the model are essentially:
Assumptions 4 to 6 mean that the concentration of doses in the room air, formula_2, is
formula_9
Doses can be removed in three ways: the infectious agent can decay and lose infectivity, the doses can settle out of the air onto surfaces or the floor, and they can be removed by room ventilation or filtration.
Assuming we can add the rates of these processes
formula_10
for formula_11 the lifetime of the infectious agent in air, formula_12 the lifetime of a dose in the air before settling onto a surface or the floor, and formula_13 the lifetime of the dose before it is removed by room ventilation or filtration. Then the concentration of doses is
formula_14
If the susceptible person spends a time formula_4 inside the room and inhales air at a rate (volume per unit time) formula_3 then they inhale a volume formula_15 and so a number of infectious doses
formula_16
or
formula_17
Prediction equation.
Putting all this together, the Wells-Riley prediction for the probability of infection is
formula_18
where formula_7 is the rate at which the infected person emits infectious doses, formula_3 is the rate at which the susceptible person breathes in air, formula_4 is the time the susceptible person spends in the room, formula_19 is the volume of the room, and formula_11, formula_12 and formula_13 are the dose lifetimes defined above.
The lifetime of room air formula_13 is just one over the air changes per hour - one measure of how well ventilated a room is. Building standards recommend several air changes per hour, in which case formula_13 will be tens of minutes.
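The prediction equation can be evaluated numerically. The following is a minimal Python sketch in which every parameter value is an illustrative assumption, not a measured quantity:

```python
# A minimal sketch of the Wells-Riley prediction equation.
# All parameter values below are illustrative assumptions.
import math

r_dout = 10.0    # infectious doses (quanta) emitted per hour
B = 0.5          # breathing rate of the susceptible person, m^3 per hour
t_R = 2.0        # time spent in the room, hours
V_room = 100.0   # room volume, m^3
tau_D, tau_F, tau_VF = 1.0, 1.0, 0.5   # dose lifetimes, hours

removal_rate = 1/tau_D + 1/tau_F + 1/tau_VF            # total removal rate, per hour
mean_doses = r_dout * B * t_R / (V_room * removal_rate)
P_i = 1 - math.exp(-mean_doses)
print(f"mean inhaled doses = {mean_doses:.4f}, infection probability = {P_i:.4f}")
```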
Relation to carbon dioxide concentration in a room.
The Wells-Riley model assumes that an infected person continuously breathes out infectious virus. They will also continuously breathe out carbon dioxide, and so excess carbon dioxide concentration has been proposed as a proxy for infection risk. In other words, the higher the carbon dioxide concentration in a room, the higher the risk of infection by an airborne disease. The excess concentration of carbon dioxide is that over the background level in the Earth's atmosphere, which is assumed to come from human respiration (in the absence of another source such as fire). Then the excess concentration of carbon dioxide formula_20 is
formula_21
for formula_22 people each exhaling carbon dioxide at a rate formula_23. Carbon dioxide neither sediments out (it is a molecule) nor decays, leaving ventilation as the only process that removes it. In the second equality we used formula_24, i.e., the rate of production of carbon dioxide is the breathing rate (volume of air exhaled per second = volume of air inhaled per second) formula_25 times the concentration of carbon dioxide in exhaled breath formula_26. Note that this implies that we can estimate how well ventilated a room is if we know how many people are in the room, and the room's volume, from
formula_27
If ventilation is the dominant route for removal of the virus, i.e., formula_28, the Wells-Riley prediction for the infection probability is then
formula_29
which predicts that the higher the room concentration of carbon dioxide, the higher the infection risk.
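These relations can be combined in a short numerical sketch; again, all parameter values are illustrative assumptions:

```python
# A sketch of the CO2 proxy: infer the room-air lifetime from the excess
# CO2 concentration, then evaluate the ventilation-dominated infection
# probability. All values are illustrative assumptions.
import math

c_excess = 800e-6      # excess CO2 concentration (800 ppm above background)
c_breath = 40_000e-6   # CO2 concentration in exhaled breath (~40,000 ppm)
N_P = 10               # number of people in the room
B = 0.5                # breathing rate, m^3 per hour
V_room = 100.0         # room volume, m^3
r_dout = 10.0          # quanta emitted per hour by the infected occupant
t_R = 2.0              # time spent in the room, hours

tau_VF = c_excess * V_room / (N_P * B * c_breath)   # room-air lifetime, hours
P_i = 1 - math.exp(-c_excess * r_dout * t_R / (c_breath * N_P))
print(f"tau_VF = {tau_VF:.2f} h, infection probability = {P_i:.4f}")
```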
Application to COVID-19.
Although originally developed for other diseases such as tuberculosis, Wells-Riley has been applied to try and understand (the still poorly understood) transmission of COVID-19, notably for a superspreading event in a chorale rehearsal in Skagit Valley (USA).
The Wells-Riley model is implemented as an interactive Google Sheets spreadsheet, and interactive apps showing estimates of the probability of infection. Even for the simple Wells-Riley model, the infection probability, formula_0, depends on seven parameters. The probability of becoming infected is predicted to increase with how infectious the person is (formula_7, which may peak around the time of the onset of symptoms and is likely to vary hugely from one infectious person to another), with how rapidly they are breathing (which for example will increase with exercise), and with the length of time they are in the room, as well as with the lifetime of the virus in the room air.
This lifetime can be reduced by both ventilation and by removing the virus by filtration. Large rooms also dilute the infectious agent and so reduce risk - although this assumes that the air is well mixed, a highly approximate assumption. A study of a COVID-19 transmission event in a restaurant in Guangzhou went beyond this well-mixed approximation, to show that a group of three tables shared air with each other to a greater extent than with the remainder of the (poorly ventilated) restaurant. One infected person at one of these tables (a few metres apart) infected people at the other two tables.
The COVID-19 pandemic has led to work on improving the Wells-Riley model to account for factors such as the virus being in droplets of varying size which have varying lifetimes, and an improved model also has an interactive app.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P_i"
},
{
"math_id": 1,
"text": "P_i= \\mbox{probability one or more doses are inhaled}"
},
{
"math_id": 2,
"text": "c_{DOSE}"
},
{
"math_id": 3,
"text": "B"
},
{
"math_id": 4,
"text": "t_R"
},
{
"math_id": 5,
"text": "\\mbox{mean number of doses inhaled}= c_{DOSE}Bt_R"
},
{
"math_id": 6,
"text": "P_i=1-\\exp\\left(-c_{DOSE}Bt_R\\right)"
},
{
"math_id": 7,
"text": "r_{DOUT}"
},
{
"math_id": 8,
"text": "\\tau"
},
{
"math_id": 9,
"text": "c_{DOSE}=\\frac{r_{DOUT}\\tau}{V_{ROOM}}"
},
{
"math_id": 10,
"text": "\\frac{1}{\\tau}=\\frac{1}{\\tau_D}+\\frac{1}{\\tau_F}+\\frac{1}{\\tau_{VF}}"
},
{
"math_id": 11,
"text": "\\tau_D"
},
{
"math_id": 12,
"text": "\\tau_F"
},
{
"math_id": 13,
"text": "\\tau_{VF}"
},
{
"math_id": 14,
"text": "c_{DOSE}=\\frac{r_{DOUT}}{V_{ROOM}\\left(1/\\tau_D+1/\\tau_F+1/\\tau_{VF}\\right)}"
},
{
"math_id": 15,
"text": "B t_R"
},
{
"math_id": 16,
"text": "\\mbox{mean number of inhaled doses}=c_{DOSE}Bt_R"
},
{
"math_id": 17,
"text": "\\mbox{mean number of inhaled doses}=\\frac{r_{DOUT}Bt_R}{V_{ROOM}\\left(1/\\tau_D+1/\\tau_F+1/\\tau_{VF}\\right)}"
},
{
"math_id": 18,
"text": "P_i=1-\\exp\\left(-\\frac{r_{DOUT}Bt_R}{V_{ROOM}\\left(1/\\tau_D+1/\\tau_F+1/\\tau_{VF}\\right)}\\right)"
},
{
"math_id": 19,
"text": "V_{ROOM}"
},
{
"math_id": 20,
"text": "c_{CO2}^{(ex)}"
},
{
"math_id": 21,
"text": "c_{CO2}^{(ex)}=\\frac{N_P r_{CO2}\\tau_{VF}}{V_{ROOM}}=\\frac{N_P Bc_{CO2}^{(breath)}\\tau_{VF}}{V_{ROOM}}"
},
{
"math_id": 22,
"text": "N_P\n"
},
{
"math_id": 23,
"text": "r_{CO2}"
},
{
"math_id": 24,
"text": " r_{CO2}=Bc_{CO2}^{(breath)}"
},
{
"math_id": 25,
"text": " B"
},
{
"math_id": 26,
"text": " c_{CO2}^{(breath)}\\simeq 40,000~\\mbox{ppm}"
},
{
"math_id": 27,
"text": "\\frac{1}{\\tau_{VF}}=\\frac{N_P Bc_{CO2}^{(breath)}}{c_{CO2}^{(ex)}V_{ROOM}}"
},
{
"math_id": 28,
"text": "\\tau_D, \\tau_F \\gg \\tau_{VF}"
},
{
"math_id": 29,
"text": "P_i=1-\\exp\\left(-\\frac{c^{(ex)}_{CO2}r_{DOUT}t_R}{c_{CO2}^{(breath)}N_P}\\right)\n~~~~~~\\tau_D, \\tau_F \\gg \\tau_{VF}"
}
]
| https://en.wikipedia.org/wiki?curid=67331072 |
67331224 | Anderson's bridge | Circuit in electronics
In electronics, Anderson's bridge is a bridge circuit used to measure the self-inductance of a coil. It enables measurement of inductance by utilizing other circuit components such as resistors and capacitors.
Anderson's bridge was invented by Alexander Anderson in 1891. He modified Maxwell's inductance capacitance bridge so that it gives very accurate measurement of self-inductance.
Balance conditions.
The balance conditions for Anderson's bridge or, equivalently, the values of the self-inductance and resistance of the given coil, can be found using basic circuit analysis techniques such as Kirchhoff's current law (KCL), Kirchhoff's voltage law (KVL) and phasors.
Consider the circuit diagram of Anderson's bridge in the given figure. Let L1 be the self-inductance and R1 the electrical resistance of the coil under consideration. Since the voltmeter is assumed to have nearly infinite impedance, the currents in branches ab and bc and those in branches de and ec are taken to be equal. Applying Kirchhoff's current law at node d, it can be shown that:
formula_0
Since the analysis is made under the balanced condition of the bridge, the voltage drop across the voltmeter is essentially zero. On applying Kirchhoff's voltage law to the appropriate loops (in the anti-clockwise direction), the following relations hold:
formula_1
On solving these sets of equations, one can finally obtain the self-inductance and resistance of the coil as:
formula_2
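These balance conditions can be checked symbolically. The following SymPy sketch (not part of the original derivation) eliminates the branch currents from the loop equations and separates real and imaginary parts; the capacitor-branch current is set to 1, since balance must hold for any current:

```python
# A SymPy sketch verifying the balance conditions of Anderson's bridge.
import sympy as sp

R2, R3, R4, r, r1, C, w = sp.symbols('R2 R3 R4 r r_1 C omega', positive=True)
R1, L1 = sp.symbols('R1 L1', real=True)
j = sp.I

Ic = 1                           # capacitor-branch current (arbitrary scale)
I1 = Ic / (j*w*C*R3)             # from I1*R3 - Ic/(j*w*C) = 0
I4 = Ic*(r + 1/(j*w*C)) / R4     # from Ic*r + Ic/(j*w*C) - I4*R4 = 0
I2 = I4 + Ic                     # KCL at node d

# The remaining loop equation, normalised by I1; balance requires it to vanish:
expr = sp.expand((I1*(R1 + r1 + j*w*L1) - I2*R2 - Ic*r) / I1)
re_part, im_part = expr.as_real_imag()
sol = sp.solve([re_part, im_part], [R1, L1], dict=True)[0]

print(sp.simplify(sol[R1]))   # expected: R2*R3/R4 - r_1
print(sp.factor(sol[L1]))     # expected: C*R3*(R2*R4 + r*(R2 + R4))/R4
```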
Advantages.
Anderson's bridge can also be used the other way round; that is, it can be used to measure the capacitance of an unknown capacitor using an inductor coil whose self-inductance and electrical resistance have been pre-determined to a high degree of precision. Notably, the measured self-inductance of the coil does not change even when dielectric loss within the capacitor is taken into account. Another advantage of using this modified bridge is that, unlike the variable capacitor used in the Maxwell bridge, it makes use of a fixed capacitor, which is considerably cheaper.
Disadvantages.
One of the obvious difficulties associated with Anderson's bridge is the relatively complex balance equation calculation compared to the Maxwell bridge. The circuit connections and computations are similarly more cumbersome.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n I_4 + I_c &= I_2\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}\n I_1(R_1 + r_1 + j \\omega L_1) - I_2 R_2 - I_c r = 0 \\\\\n I_1 R_3 - \\frac{I_c}{j \\omega C} = 0 \\\\\n I_c r + \\frac{I_c}{j \\omega C} - I_4 R_4 = 0\n\\end{align}"
},
{
"math_id": 2,
"text": "\\begin{align}\n L_1 &= (\\frac{R_3}{R_4})(R_2 R_4 + r(R_2 + R_4))C \\\\\n R_1 &= \\frac{R_2 R_3}{R_4} - r_1\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=67331224 |
673356 | Supercharge | In theoretical physics, a supercharge is a generator of supersymmetry transformations. It is an example of the general notion of a charge in physics.
Supercharge, denoted by the symbol Q, is an operator which transforms bosons into fermions, and vice versa. Since the supercharge operator changes a particle with spin one-half to a particle with spin one or zero, the supercharge itself is a spinor that carries one half unit of spin.
Depending on the context, supercharges may also be called "Grassmann variables" or "Grassmann directions"; they are generators of the exterior algebra of anti-commuting numbers, the Grassmann numbers. All these various usages are essentially synonymous; they refer to the formula_0 grading between bosons and fermions, or equivalently, the grading between "c-numbers" and "a-numbers". Calling it a charge emphasizes the notion of a symmetry at work.
Commutation.
Supercharge is described by the super-Poincaré algebra.
Supercharge commutes with the Hamiltonian operator:
[ Q , H ] = 0
So does its adjoint. | [
{
"math_id": 0,
"text": "\\mathbb{Z}_2"
}
]
| https://en.wikipedia.org/wiki?curid=673356 |
6734 | Garbage collection (computer science) | Form of automatic memory management
In computer science, garbage collection (GC) is a form of automatic memory management. The "garbage collector" attempts to reclaim memory that was allocated by the program, but is no longer referenced; such memory is called "garbage". Garbage collection was invented by American computer scientist John McCarthy around 1959 to simplify manual memory management in Lisp.
Garbage collection relieves the programmer from doing manual memory management, where the programmer specifies what objects to de-allocate and return to the memory system and when to do so. Other, similar techniques include stack allocation, region inference, and memory ownership, and combinations thereof. Garbage collection may take a significant proportion of a program's total processing time, and affect performance as a result.
Resources other than memory, such as network sockets, database handles, windows, file descriptors, and device descriptors, are not typically handled by garbage collection, but rather by other methods (e.g. destructors). Some such methods de-allocate memory also.
Overview.
Many programming languages require garbage collection, either as part of the language specification (e.g., RPL, Java, C#, D, Go, and most scripting languages) or effectively for practical implementation (e.g., formal languages like lambda calculus). These are said to be "garbage-collected languages". Other languages, such as C and C++, were designed for use with manual memory management, but have garbage-collected implementations available. Some languages, like Ada, Modula-3, and C++/CLI, allow both garbage collection and manual memory management to co-exist in the same application by using separate heaps for collected and manually managed objects. Still others, like D, are garbage-collected but allow the user to manually delete objects or even disable garbage collection entirely when speed is required.
Although many languages integrate GC into their compiler and runtime system, "post-hoc" GC systems also exist, such as Automatic Reference Counting (ARC). Some of these "post-hoc" GC systems do not require recompilation.
Advantages.
GC frees the programmer from manually de-allocating memory. This helps avoid some kinds of errors, such as dangling pointer bugs (where memory is freed while still referenced), double-free bugs, and certain kinds of memory leaks.
Disadvantages.
GC uses computing resources to decide which memory to free. Therefore, the penalty for the convenience of not annotating object lifetime manually in the source code is overhead, which can impair program performance. A peer-reviewed paper from 2005 concluded that GC needs five times the memory to compensate for this overhead and to perform as fast as the same program using idealized explicit memory management. The comparison, however, is made to a program generated by inserting deallocation calls using an oracle, implemented by collecting traces from programs run under a profiler, and the program is only correct for one particular execution. Interaction with memory hierarchy effects can make this overhead intolerable in circumstances that are hard to predict or to detect in routine testing. The impact on performance was given by Apple as a reason for not adopting garbage collection in iOS, despite it being the most desired feature.
The moment when the garbage is actually collected can be unpredictable, resulting in stalls (pauses to shift/free memory) scattered throughout a session. Unpredictable stalls can be unacceptable in real-time environments, in transaction processing, or in interactive programs. Incremental, concurrent, and real-time garbage collectors address these problems, with varying trade-offs.
Strategies.
Tracing.
Tracing garbage collection is the most common type of garbage collection, so much so that "garbage collection" often refers to tracing garbage collection, rather than other methods such as reference counting. The overall strategy consists of determining which objects should be garbage collected by tracing which objects are "reachable" by a chain of references from certain root objects, and considering the rest as garbage and collecting them. However, there are a large number of algorithms used in implementation, with widely varying complexity and performance characteristics.
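As an illustration of the reachability idea, a naive mark-and-sweep pass can be sketched in a few lines of Python (a toy, not a production collector):

```python
# A toy mark-and-sweep sketch: objects reachable from the roots by
# following references are kept; everything else is treated as garbage.
class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []   # outgoing references to other objects

def collect(roots, heap):
    marked = set()
    stack = list(roots)
    while stack:                     # mark phase: trace reachability
        obj = stack.pop()
        if id(obj) not in marked:
            marked.add(id(obj))
            stack.extend(obj.refs)
    return [o for o in heap if id(o) in marked]   # sweep phase

a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)                     # a -> b; c is unreachable
print([o.name for o in collect(roots=[a], heap=[a, b, c])])   # ['a', 'b']
```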
Reference counting.
Reference counting garbage collection is where each object has a count of the number of references to it. Garbage is identified by having a reference count of zero. An object's reference count is incremented when a reference to it is created and decremented when a reference is destroyed. When the count reaches zero, the object's memory is reclaimed.
As with manual memory management, and unlike tracing garbage collection, reference counting guarantees that objects are destroyed as soon as their last reference is destroyed, and usually only accesses memory which is either in CPU caches, in objects to be freed, or directly pointed to by those, and thus tends to not have significant negative side effects on CPU cache and virtual memory operation.
There are a number of disadvantages to reference counting, most notably the inability to reclaim cyclic reference structures without a supplementary cycle detector, and the space and time overhead of storing and updating the counts; these can generally be solved or mitigated by more sophisticated algorithms.
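The counting mechanism can be sketched as follows; the class below is a toy illustration, not how any particular runtime implements it (CPython's own counts can be inspected with sys.getrefcount):

```python
# A toy reference-counting sketch: the count is incremented when a
# reference is created, decremented when one is destroyed, and the
# object is reclaimed immediately when the count reaches zero.
import sys

class RefCounted:
    def __init__(self, payload):
        self.payload = payload
        self.refcount = 0

    def incref(self):
        self.refcount += 1

    def decref(self):
        self.refcount -= 1
        if self.refcount == 0:
            print(f"reclaiming {self.payload!r}")

obj = RefCounted("data")
obj.incref()   # first reference created
obj.incref()   # second reference created
obj.decref()   # one reference destroyed
obj.decref()   # count reaches zero -> reclaimed immediately

x = []
print(sys.getrefcount(x))   # CPython's count (includes the call's own temporary reference)
```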
Escape analysis.
Escape analysis is a compile-time technique that can convert heap allocations to stack allocations, thereby reducing the amount of garbage collection to be done. This analysis determines whether an object allocated inside a function is accessible outside of it. If a function-local allocation is found to be accessible to another function or thread, the allocation is said to "escape" and cannot be done on the stack. Otherwise, the object may be allocated directly on the stack and released when the function returns, bypassing the heap and associated memory management costs.
Availability.
Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In some languages lacking built-in garbage collection, it can be added through a library, as with the Boehm garbage collector for C and C++.
Most functional programming languages, such as ML, Haskell, and APL, have garbage collection built in. Lisp is especially notable as both the first functional programming language and the first language to introduce garbage collection.
Other dynamic languages, such as Ruby, Julia, JavaScript and ECMAScript, also tend to use GC (but not Perl 5 or PHP before version 5.3, which both use reference counting). Object-oriented programming languages such as Smalltalk, RPL and Java usually provide integrated garbage collection. Notable exceptions are C++ and Delphi, which have destructors.
BASIC.
BASIC and Logo have often used garbage collection for variable-length data types, such as strings and lists, so as not to burden programmers with memory management details. On the Altair 8800, programs with many string variables and little string space could cause long pauses due to garbage collection. Similarly the Applesoft BASIC interpreter's garbage collection algorithm repeatedly scans the string descriptors for the string having the highest address in order to compact it toward high memory, resulting in formula_0 performance and pauses anywhere from a few seconds to a few minutes. A replacement garbage collector for Applesoft BASIC by Randy Wigginton identifies a group of strings in every pass over the heap, reducing collection time dramatically. BASIC.SYSTEM, released with ProDOS in 1983, provides a windowing garbage collector for BASIC that is many times faster.
Objective-C.
While Objective-C traditionally had no garbage collection, Apple introduced garbage collection for Objective-C 2.0 with the release of OS X 10.5 in 2007, using an in-house developed runtime collector.
However, with the 2012 release of OS X 10.8, garbage collection was deprecated in favor of LLVM's automatic reference counter (ARC), which was introduced with OS X 10.7. Furthermore, since May 2015 Apple has forbidden the usage of garbage collection for new OS X applications in the App Store. For iOS, garbage collection was never introduced due to problems with application responsiveness and performance; instead, iOS uses ARC.
Limited environments.
Garbage collection is rarely used on embedded or real-time systems because of the usual need for very tight control over the use of limited resources. However, garbage collectors compatible with many limited environments have been developed. The Microsoft .NET Micro Framework, .NET nanoFramework and Java Platform, Micro Edition are embedded software platforms that, like their larger cousins, include garbage collection.
Java.
Garbage collectors available in Java JDKs include the serial, parallel, Concurrent Mark Sweep (CMS) and Garbage-First (G1) collectors, as well as the low-latency ZGC and Shenandoah collectors.
Compile-time use.
Compile-time garbage collection is a form of static analysis allowing memory to be reused and reclaimed based on invariants known during compilation.
This form of garbage collection has been studied in the Mercury programming language, and it saw greater usage with the introduction of LLVM's automatic reference counter (ARC) into Apple's ecosystem (iOS and OS X) in 2011.
Real-time systems.
Incremental, concurrent, and real-time garbage collectors have been developed, for example by Henry Baker and by Henry Lieberman.
In Baker's algorithm, the allocation is done in either half of a single region of memory. When it becomes half full, a garbage collection is performed which moves the live objects into the other half and the remaining objects are implicitly deallocated. The running program (the 'mutator') has to check that any object it references is in the correct half, and if not move it across, while a background task is finding all of the objects.
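A toy sketch of the copying idea follows; it is simplified to a recursive copy rather than Baker's incremental scan:

```python
# A toy semispace-copying sketch: live objects are copied out of the full
# half ("from-space"); whatever is not copied is implicitly deallocated.
class Node:
    def __init__(self, name, refs=()):
        self.name, self.refs = name, list(refs)

def copy_collect(roots):
    to_space, forwarded = [], set()
    def copy(obj):
        if id(obj) not in forwarded:   # not yet moved to to-space
            forwarded.add(id(obj))
            to_space.append(obj)
            for child in obj.refs:
                copy(child)
    for root in roots:
        copy(root)
    return to_space   # everything left behind in from-space is garbage

c = Node("c")
b = Node("b", [c])
a = Node("a", [b])
unreachable = Node("unreachable")
print([n.name for n in copy_collect([a])])   # ['a', 'b', 'c']
```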
Generational garbage collection schemes are based on the empirical observation that most objects die young. In generational garbage collection, two or more allocation regions (generations) are maintained, kept separate based on the objects' ages. New objects are created in the "young" generation that is regularly collected, and when a generation is full, the objects that are still referenced from older regions are copied into the next oldest generation. Occasionally a full scan is performed.
Some high-level language computer architectures include hardware support for real-time garbage collection.
Most implementations of real-time garbage collectors use tracing. Such real-time garbage collectors meet hard real-time constraints when used with a real-time operating system.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(n^2)"
}
]
| https://en.wikipedia.org/wiki?curid=6734 |
67341981 | Methylenation | Chemical reaction which adds a –CH2– group to a molecule
In organic chemistry, methylenation is a chemical reaction that inserts a methylene () group into a chemical compound:
formula_0
In a related sense, it also describes a process in which a divalent group of a starting material is removed and replaced with a terminal CH2 group:
formula_1
Methylenation in this context is also known as methenylation. Most commonly, E is an oxygen atom, so that the reaction results in terminal alkenes from aldehydes and ketones, or more rarely, enol ethers from esters or enamines from amides.
Methods.
Methylene Insertion into Alkanes.
Singlet methylene (1[:CH2]), produced from photolysis of diazomethane under ultraviolet irradiation, methylenates hydrocarbons. Arenes and olefins undergo methylenation to give cyclopropanated products. In the case of arenes, the cyclopropanation product undergoes further electrocyclic ring opening to give cycloheptatriene products (Buchner ring expansion). Alkenes undergo both C=C methylenation and C–H insertion, giving a mixture of cyclopropanation and homologation products.
Reflecting the exceptionally high reactivity of singlet methylene, normally unreactive alkanes undergo methylenation to give homologation products, even at –75 °C.
formula_2
Photolysis of a solution of diazomethane in "n-"pentane gives a mixture of hexanes and higher homologues. At –75 °C, the product ratio is 48:35:17 mixture of "n"-hexane, 2-methylpentane, and 3-methylpentane. The ratio is remarkably close to the statistical product ratio of 6:4:2 (~50:33:17) based on the number of available C–H bonds at each position that could undergo methylene insertion. As a result, Doering and coworkers concluded:"Methylene must be classed as the most indiscriminate reagent known in organic chemistry."
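The statistical expectation can be reproduced by simply counting the available C–H bonds at each position of "n"-pentane:

```python
# Statistical product ratio for methylene insertion into n-pentane,
# assuming every C-H bond is equally likely to react.
h_counts = {"n-hexane": 6, "2-methylpentane": 4, "3-methylpentane": 2}
total = sum(h_counts.values())
for product, n_h in h_counts.items():
    print(f"{product}: {100 * n_h / total:.0f}%")
# -> 50%, 33%, 17%; close to the observed 48:35:17
```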
Methylene-for-oxo reactions.
A common method for methylenation involves the Wittig reaction using methylenetriphenylphosphorane with an aldehyde (Ph = phenyl, ):
<chem>RCHO + Ph3P=CH2 -> RCH=CH2 + Ph3PO</chem>
A related reaction can be accomplished with Tebbe's reagent, which is sufficiently versatile to allow methylenation of esters:
<chem>RCO2R' + Cp2Ti(Cl)CH2AlMe2 -> RC(OR')=CH2 + Cp2TiOAlMe2Cl</chem>
Other less well-defined titanium reagents, e.g., Lombardo's reagent, effect similar transformations.
Carbanions derived from methylsulfones have also been employed, equivalently to the Wittig reaction.
Methylenation adjacent to carbonyl groups.
Ketones and esters can be methylenated at the α position to give α,β-unsaturated carbonyl products containing an additional terminal CH2 group in a three-step process known as the Eschenmoser methylenation. An enolate is generated by deprotonation of the α-C–H bond using a hindered lithium amide (LiNR2) base (e.g., LDA, LHMDS). Subsequently, the enolate is reacted with Eschenmoser's salt ([Me2N=CH2]+I–) to give a β-dimethylamino carbonyl compound (Mannich base). The Mannich base is then subjected to methylation or N-oxidation to give a trimethylammonium salt or amine "N"-oxide, which is then subjected to Hofmann elimination or Cope elimination, respectively to give the α-methylene carbonyl compound. If the Hofmann elimination is used, the process can be represented as follows:
<chem>RC(=O)CH2R' -> RC(=O) CH(CH2NMe2)R' -> [RC(=O) CH(CH2NMe3)R']+I- -> RC(=O) C(=CH2)(R')</chem>
Other approaches.
Ethenolysis is a method for methylenation of internal alkenes, as illustrated by the following example:
formula_3
In principle, the addition of across a double bond could be classified as a methylenation, but such transformations are commonly described as cyclopropanations. | [
{
"math_id": 0,
"text": "\\ce{A-B} \\longrightarrow \\ce{A}{\\color{red}\\ce{-CH2 -}}\\ce{B}"
},
{
"math_id": 1,
"text": "\\ce{A=E} \\longrightarrow \\ce{A}{\\color{red}\\ce{=CH2}} "
},
{
"math_id": 2,
"text": "\\ce{R-H} + {\\color{red}\\ce{CH2}}\\longrightarrow \\ce{R}{\\color{red}\\ce{-CH2}}\\ce{H}"
},
{
"math_id": 3,
"text": "\\ce{RCH=CHR} + {\\color{red}\\ce{CH2=CH2}} \\longrightarrow \\ce{2 RCH=}{\\color{red}\\ce{CH2}}"
}
]
| https://en.wikipedia.org/wiki?curid=67341981 |
67343258 | Inclusive wealth | Aggregate value of all capital assets in a given region
Inclusive wealth is the aggregate value of all capital assets in a given region, including human capital, social capital, public capital, and natural capital. Maximizing inclusive wealth is often a goal of sustainable development. The Inclusive Wealth Index is a metric for inclusive wealth within countries: unlike gross domestic product (GDP), the Inclusive Wealth Index "provides a tool for countries to measure whether they are developing in a way that allows future generations to meet their own needs".
The United Nations Environment Programme (UNEP) published reports in 2012, 2014, and 2018 on inclusive wealth. The 2018 "Inclusive Wealth Report" found that, of 140 countries analyzed, inclusive wealth increased by 44% from 1990 to 2014, implying an average annual growth rate of 1.8%. On a per capita basis, 89 of 140 countries had increased inclusive wealth per capita. 96 of 140 countries had increased inclusive wealth per capita when adjusted. Roughly 40% of analyzed countries had stagnant or declining inclusive wealth, sometimes despite increasing GDP. Many countries showed a decline in natural capital during this period, fueling an increase in human capital.
<templatestyles src="Template:TOC limit/styles.css" />
Inclusive Wealth Index.
The Inclusive Wealth Index (IWI) was developed by UNEP in partnership with Kyushu University. The Index calculation is based on estimating stocks of human, natural and produced (manufactured) capital which make up the productive base of an economy. Biennial Inclusive Wealth Reports (IWR) track progress on sustainability across the world for 140 countries. The IWI is UNEP's metric for measuring intergenerational well-being. Implementing the IWI has been undertaken by many individual countries with UNEP support by a scientific panel headed by Sir Partha Dasgupta of Cambridge University.
Inclusive wealth is complementary to Gross Domestic Product (GDP). In a 'stocks and flows' model, capital assets are stocks, and the goods and services provided by the assets are flows (GDP). A tree is a stock; its fruit is a flow, while its leaves provide a continuous flow of services by pulling carbon dioxide from the atmosphere to store as carbon. It is a multi-purpose indicator capable of measuring traditional stocks of wealth along with skill sets, health care, and environmental assets that underlie human progress. The effective management of this capital supports the ultimate purpose of an economy – societal well-being.
Conceptual framework.
Produced capital (also referred to as manufactured capital) includes investment in roads, buildings, machines, equipment, and other physical infrastructure. Human capital comprises knowledge, education, skills, health and aptitude. Natural capital includes forests, fossil fuels, fisheries, agricultural land, sub-soil resources, rivers and estuaries, oceans, the atmosphere and ecosystems, more generally. Social capital includes trust, the strength of community and institutions, and the ability of societies to overcome problems. An economy's institutions and politics determine the social value of its assets, because they influence what people are able to enjoy from them. IWI does not directly measure social capital, which is considered to be embedded in other capital types. Not all components of capital that are conceptually components of wealth are currently included in the Inclusive Wealth methodology. This is due to difficulties in measuring certain assets, as well as data availability and comparability constraints.
Methodology.
Source:
The conceptual framework looks at well-being at time "t" as:
formula_0
Denoting produced, human, and natural capital as 𝐾, 𝐻, and 𝑁, the change in inclusive wealth 𝑊 is expressed by:
formula_1
where 𝑝𝐾, 𝑝𝐻 and 𝑝N are the marginal shadow prices of produced, human, and natural capital, respectively. They are formally defined by,
formula_2
given a forecast of how produced, human, and natural capitals, as well as other flow variables, evolve in the economy in question.
Practically, shadow prices act as weights attached to each capital, resulting in the measure of wealth, or:
formula_3
In practice, "W" and "IWI" can be used interchangeably, although they can differ in that IWI also uses shadow prices on the margin. In addition, the unit of IWI is monetary rather than utility.
This does not affect the sustainability assessment overall.
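As a toy numerical illustration of this weighted sum (the stocks and shadow prices below are hypothetical, not figures from the Inclusive Wealth Reports):

```python
# The Inclusive Wealth Index as a shadow-price-weighted sum of capital
# stocks. All numbers are hypothetical illustration values.
stocks = {"produced": 1200.0, "human": 3400.0, "natural": 900.0}   # stock units
shadow_prices = {"produced": 1.0, "human": 0.8, "natural": 1.5}    # value per unit

iwi = sum(shadow_prices[k] * stocks[k] for k in stocks)
print(f"IWI = {iwi:.1f}")
```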
Natural capital.
The components of natural capital include renewable resources (agricultural land, forests, and fisheries) and nonrenewable resources (fossil fuels and minerals).
The inclusion of fossil fuels within an indicator that tracks sustainability may appear counterintuitive because fossil fuels are often considered liabilities or stranded assets. The mechanism assumed in the IWI framework is the business-as-usual scenario of the imperfect economies that form the basis of our societies. The shadow price of any type of natural capital represents its marginal contribution towards social wellbeing. In this context, the potential benefit of fossil fuels for driving investment in other types of capital outweighs the drawbacks of the social costs of carbon.
Non-renewable resources.
Non-renewable natural capital resources are oil, coal, natural gas, minerals and metals. To measure a fossil fuel, the stock is measured and compared with data from other years, in order to develop a time series that reflects accurate flows. The unit shadow price for non-renewables is the price net of extraction cost, also called the rental price. The rental rate of the total price is assumed constant. Ideally, the marginal cost of extraction should be used for the corresponding remaining stock, but this is hard to obtain. The accounting for minerals is similar to that used for fossil fuels. For rental rates, the sectoral rental rates of different mineral industries are used, as well as U.S. Geological Survey data.
Renewable resources.
Timber.
Timber stocks included in IWI estimates are those that are commercially available. To calculate the quantity of timber available, the total forest area, excluding cultivated forest[3], is multiplied by the timber density per area and percentage of total volume that is commercially available. The exclusion of cultivated forest from this category is debatable, as it is regarded as contributing to timber and non-timber values. Forest cultivation is categorized as a production activity in the System of National Accounts.
Following the estimation of physical stocks, shadow prices are computed to convert the wealth into monetary terms. The World Bank's approach uses a weighted average price of two commodities for industrial roundwood and fuelwood. Country-specific GDP deflators are used to convert prices from current to constant units, and regional rental rates for timber are applied, which are assumed to be constant over time. To obtain the proxy value for the shadow price of timber, the average price over the study period (1990 to 2014) is taken. Wealth corresponding to timber value is taken as the product of quantity, price and average rental rate over time.
Non-timber forest benefits.
Aside from the provisional ecosystem service of timber production, forests yield many other services. These additional ecosystem services are accounted for in the following manner: Non-cultivated forest area is retrieved from FAO (2015). The fraction of the forest area that contributes to human well-being is assumed to be 10%. The unit benefit of non-timber forest to inter-temporal social well-being is obtained from the Ecosystem Service Valuation Database (ESVD) database. This is expressed as USD/ha/year. Finally, to translate this benefit into capital asset value, we take its net present value, using the discount rate of 5%.
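The capitalisation step amounts to valuing a perpetuity; the annual flow value below is a hypothetical placeholder, not an ESVD figure:

```python
# Capitalising an annual ecosystem-service flow into an asset value as a
# perpetuity at the 5% discount rate used above. Flow value is hypothetical.
annual_benefit_per_ha = 120.0   # USD/ha/year (assumed)
discount_rate = 0.05

asset_value_per_ha = annual_benefit_per_ha / discount_rate   # NPV of a perpetuity
print(asset_value_per_ha)   # 2400.0 USD/ha
```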
Fishery stocks.
Fishery stocks cannot be estimated based on habitat area, unlike forests or agricultural lands. Marine fishery habitats often cross national borders. Global fish stocks are often assessed using trends in catch or harvest data. With a country's harvest and effort data, along with a catchability coefficient, stocks can be estimated using the Schaefer production function. For estimating fishery stocks in countries that lack sufficient effort data, a resource dynamic approach is taken.
Agricultural land.
Agricultural land is composed of cropland and pastureland. Data from Food and Agriculture Organization (2015) is employed to quantify cropland and pastureland area. Market prices are often unavailable for agricultural land. A shadow price is computed as the net present value of the annual flow of services per hectare, in line with World Bank (2011). IWI assumes the shadow price of pastureland is equal to that of cropland.
Shadow price.
Shadow prices are the estimated price of a good or a service that does not have a market price. The calculation of shadow prices is central to the IWI, particularly for natural capital. Various non-market valuation techniques provide estimates for these prices. The use of shadow prices for natural capital is controversial, mainly regarding the knowledge gap surrounding how to represent production functions of life-supporting ecosystems. Nevertheless, shadow prices based on willingness to pay measures are considered the best available approach for estimating their value.
Human capital.
The main components of human capital are health and education, but also parenting, on-the-job training, informal education and migration.
Human health affects daily well-being, productivity and lifespan. The latter is computed as a proxy for health-related human capital, largely because the options for quantifying the others are limited. The shadow price of health capital is the value of a statistical life year (VSLY).
IWI methodology focuses on the return on formal education, acknowledging that non-formal education such as early childhood learning and vocational training also contribute to wealth. Using data from Barro and Lee (2013), educational attainment is proxied by the average years of schooling per person. The rate of return on education is assumed to be 8.5%, and then multiplied by the educated population.
Produced capital.
Produced capital, also referred to as manufactured capital, includes physical infrastructure, land, facilities of private firms, and dwelling places. IWI uses the perpetual inventory method (PIM), which is a simple summation of gross investment net of depreciation that occurs in each period.
Adjustments.
Three adjustments influence wealth and social well-being, but are not covered by the official capital assets: carbon damage, oil capital gains, and total factor productivity.
Carbon damage can be regarded mostly as an exogenous change in social well-being. Calculation involves:
Oil prices are notorious for rapid fluctuations. Oil-rich nations benefit from spiking oil prices. Conversely, rising oil prices may result in reductions in social well-being for oil importing countries. An annual increase of 3% in the price of oil is assumed, corresponding to the annual average oil price increase during 1990–2014, implying that even if no oil is withdrawn, a nation can enjoy 3% growth in wealth.
Total factor productivity (TFP) measures residual contributions to social well-being. IWI includes TFP as an adjustment term. A non-parametric analysis called Malmquist productivity index is employed, which is based on the concept of data envelopment analysis.
History.
IWI was inaugurated in 2012 with the launch of the Inclusive Wealth Report (IWR) at the United Nations Conference on Sustainable Development (Rio+20). IWR 2012 compared the relative change of natural capital against produced capital and human capital. The results showed that changes in natural capital can significantly impact a nation's well-being, and that it is therefore possible to trace changes in components of wealth by country and link these to economic progress. The 2014 and 2018 IWRs expanded scope to cover 140 countries. The main focus of IWR 2014 was to estimate the education component of human capital. In IWR 2018, health was added to human capital, and fisheries were added to natural capital.
Changes in inclusive wealth are calculated using 25-year annual average growth rates. The results show that the growth of inclusive wealth is positive for many countries. Top performers include the Republic of Korea, Singapore and Malta, among others. However, in many countries, the population is growing more quickly than the inclusive wealth. These places experienced negative per capita wealth growth. Some of the negative per capita growth of wealth occurred in countries that experienced absolute gains in wealth.
IWI looks at each country's assets and assesses the changing health of these assets over 25 years. IWR 2018 shows that 44 out of the 140 countries have suffered a decline in inclusive wealth per capita since 1992, even though GDP per capita increased in all but a handful of them. This statistic shows that their growth is unsustainably depleting resources.
Inclusive Wealth Index and Sustainable Development Goals.
Sustainable Development Goal (SDG) 17 calls for developing "measurements of progress on sustainable development that complement GDP." The inclusive wealth index is one way of measuring progress on the SDGs and positive development trajectories.
Infrastructure and industrialization can occur in line with sustainability considerations. On a global level, produced capital per capita has experienced the largest increase compared to human and natural capital, often at the expense of the latter. The IWI framework provides data and guidance in monitoring the trade-offs without compromising other development goals.
IWI provides governments a new and holistic guide. If inclusive wealth (adjusted for population and wealth distribution) increases as governments try to meet SDGs, the SDGs will be sustainable; if it declines, the SDGs will be unsustainable. It could be that the goals are reached, but are not sustainable because the development paths that nations choose to follow erode their productive capacities.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V(t)=\\int_{t}^{\\infty} U(C_\\tau)e^{-\\delta(\\tau-t)} d\\tau "
},
{
"math_id": 1,
"text": "dW=(K,H,N,t)/dt= p_k (dK/dt) + p_H(DH/dt) + p_N (dN/dt) \\delta V/\\delta N "
},
{
"math_id": 2,
"text": "p_K \\equiv \\delta V/\\delta K, p_H\\equiv \\delta V/ \\delta H, p_N \\equiv \\delta V /\\delta N "
},
{
"math_id": 3,
"text": "IWI = p_K (K)+ p_H(H)+p_N(N) "
}
]
| https://en.wikipedia.org/wiki?curid=67343258 |
67345527 | 2 Chronicles 19 | Second Book of Chronicles, chapter 19
2 Chronicles 19 is the nineteenth chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter (as all chapters from 17 to 20) is the reign of Jehoshaphat, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 11 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Jehoshaphat reproved by Jehu (19:1–3).
Jehoshaphat's safe home return (in contrast to Ahab's death) literally fulfilled Micaiah's demands (18:16), although 'not complying with the spirit of his plea'. The prophet Jehu, the son of Hanani (cf. 2 Chronicles 16:7–9), rebuked Jehoshaphat for making an alliance with Ahab of Israel (cf. 2 Chronicles 16:1–6), because with the act, Jehoshaphat was not faithful to God in that 'he proposed to help the wicked and was loyal to those who hate the Lord'. 'Love' and 'hate' here are not 'emotional terms', but parts of a 'political vocabulary'; 'to love' means virtually 'to form a coalition'. The attack by the Transjordanian alliance in chapter 20 could be interpreted as the realization of God's anger at Jehoshaphat.
Jehoshaphat's reforms (19:4–11).
Jehoshaphat continued to develop the policies he established at the start of his reign (2 Chronicles 17), extending them to the territory of the northern kingdom he controlled (verse 4). The judicial reform may reflect the element "shaphat" ("to judge") in Jehoshaphat's name (cf. 2 Chronicles 16:12).
"And, behold, Amariah the chief priest is over you in all matters of the Lord; and Zebadiah the son of Ishmael, the ruler of the house of Judah, for all the king's matters: also the Levites shall be officers before you. Deal courageously, and the Lord shall be with the good."
Verse 11.
The distinction between "matters of the Lord" and "matters of the king" is found only in the books of Chronicles (1 Chronicles 26:30, 32; 2 Chronicles 19:11) and book of Ezra (Ezra 7:26).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67345527 |
673457 | Vinculum (symbol) | Horizontal line used in mathematical notation
Vinculum usage:
formula_0 : line segment from A to B
1⁄7 = 0.142857 : repeating decimal, 0.1428571428571428571...
formula_1 : complex conjugate
formula_2 : boolean NOT (A AND B)
formula_3 : radical, ab + 2
formula_4 = a − (b + c) : bracketing function
A vinculum (from Latin 'fetter, chain, tie') is a horizontal line used in mathematical notation for various purposes. It may be placed as an "overline" or "underline" above or below a mathematical expression to group the expression's elements. Historically, vincula were extensively used to group items together, especially in written mathematics, but in modern mathematics its use for this purpose has almost entirely been replaced by the use of parentheses. It was also used to mark Roman numerals whose values are multiplied by 1,000. Today, however, the common usage of a vinculum to indicate the repetend of a repeating decimal is a significant exception and reflects the original usage.
History.
The vinculum, in its general use, was introduced by Frans van Schooten in 1646 as he edited the works of François Viète (who had himself not used this notation). However, earlier versions, such as using an underline as Chuquet did in 1484, or in limited form as Descartes did in 1637, using it only in relation to the radical sign, were common.
Usage.
Modern.
A vinculum can indicate a line segment where "A" and "B" are the endpoints:
formula_5
A vinculum can indicate the repetend of a repeating decimal value; for example, 1⁄7 = 0.142857, with the vinculum drawn over the repeating digits (0.1428571428571428571...).
A vinculum can indicate the complex conjugate of a complex number:
formula_6
The logarithm of a number less than 1 can conveniently be represented using a vinculum:
formula_7
In Boolean algebra, a vinculum may be used to represent the operation of inversion (also known as the NOT function):
formula_8
meaning that Y is false only when A and B are both true - or by extension, Y is true when either A or B is false.
Similarly, it is used to show the repeating terms in a periodic continued fraction. Quadratic irrational numbers are the only numbers that have these.
Historical.
Formerly its main use was as a notation to indicate a group (a bracketing device serving the same function as parentheses):
formula_9
meaning to add "b" and "c" first and then subtract the result from "a", which would be written more commonly today as "a" − ("b" + "c"). Parentheses, used for grouping, are only rarely found in the mathematical literature before the eighteenth century. The vinculum was used extensively, usually as an overline, but Chuquet in 1484 used the underline version.
In India, the use of this notation is still tested in primary school.
As a part of a radical.
The vinculum is used as part of the notation of a radical to indicate the radicand whose root is being indicated. In the following, the quantity formula_10 is the whole radicand, and thus has a vinculum over it:
formula_11
In 1637 Descartes was the first to unite the German radical sign √ with the vinculum to create the radical symbol in common use today.
The symbol used to indicate a vinculum need not be a line segment (overline or underline); sometimes braces can be used (pointing either up or down).
Encodings.
TeX.
In LaTeX, text can be overlined with codice_0. The inner codice_1 is necessary to override the math mode (here invoked by the dollar signs) that codice_2 demands.
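A minimal LaTeX sketch of these overlines; it assumes the unresolved codice placeholders above refer to the standard \overline and \mbox commands (an assumption based on the surrounding description):

```latex
% Overlines (vincula) in LaTeX.
\documentclass{article}
\begin{document}
$\overline{AB}$                 % line segment AB
$0.\overline{142857}$           % repeating decimal 1/7
$\overline{z}$                  % complex conjugate
$\overline{\mbox{some text}}$   % \mbox keeps the text in text mode
                                % even though $...$ opens math mode
\end{document}
```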
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\overline{\\rm AB}"
},
{
"math_id": 1,
"text": "\\overline{a + bi}"
},
{
"math_id": 2,
"text": "Y = \\overline{AB}"
},
{
"math_id": 3,
"text": "\\sqrt[n]{ab+2}"
},
{
"math_id": 4,
"text": "a-\\overline{b+c}"
},
{
"math_id": 5,
"text": "\\overline{\\rm AB}."
},
{
"math_id": 6,
"text": "\\overline{2 + 3i} = 2 - 3i"
},
{
"math_id": 7,
"text": "\\log 2 = 0.301 \\Rightarrow \\log 0.2 = \\overline{1}.301 = -0.699"
},
{
"math_id": 8,
"text": "Y = \\overline{AB},"
},
{
"math_id": 9,
"text": "a-\\overline{b+c},"
},
{
"math_id": 10,
"text": "ab+2"
},
{
"math_id": 11,
"text": "\\sqrt[n]{ab+2}."
}
]
| https://en.wikipedia.org/wiki?curid=673457 |
67349962 | Propagation graph | Models signal dispersion by representing the radio propagation environment by a graph
Propagation graphs are a mathematical modelling method for radio propagation channels. A propagation graph is a signal flow graph in which vertices represent transmitters, receivers or scatterers. Edges in the graph model propagation conditions between vertices. Propagation graph models were initially developed by Troels Pedersen, et al. for multipath propagation in scenarios with multiple scattering, such as indoor radio propagation. The method has since been applied in many other scenarios.
Mathematical definition.
A propagation graph is a simple directed graph formula_0 with vertex set formula_1 and edge set formula_2.
The vertices model objects in the propagation scenario. The vertex set formula_1 is split into three disjoint sets as
formula_3 where formula_4 is the set of transmitters,
formula_5 is the set of receivers and
formula_6 is the set of objects named "scatterers".
The edge set formula_7 models the propagation conditions between vertices. Since formula_8 is assumed simple, formula_9 holds, and an edge may be identified by a pair of vertices as formula_10
An edge formula_10 is included in formula_2 if a signal emitted by vertex formula_11 can propagate to formula_12. In a propagation graph, transmitters cannot have incoming edges and receivers cannot have outgoing edges.
Two propagation rules are assumed: signals are scaled at each vertex according to a vertex gain, and a signal propagating along an edge is filtered by that edge's transfer function.
The definition of the vertex gain scaling and the edge transfer functions can be adapted to accommodate particular scenarios and should be defined in order to use the model in simulations. A variety of such definitions have been considered for different propagation graph models in the published literature.
The edge transfer functions (in the Fourier domain) can be grouped into transfer matrices according to the vertex types they connect: the direct transmitter-to-receiver matrix formula_14, the transmitter-to-scatterer matrix formula_15, the scatterer-to-receiver matrix formula_16, and the scatterer-to-scatterer matrix formula_17,
where formula_18 is the frequency variable.
Denoting the Fourier transform of the transmitted signal by formula_19, the received signal reads in the frequency domain
formula_20
Transfer function.
The transfer function formula_21 of a propagation graph forms an infinite series
formula_22
The transfer function is a Neumann series of operators. Alternatively, it can be viewed pointwise in frequency as a geometric series of matrices. This observation yields a closed form expression for the transfer function as
formula_23
where formula_24 denotes the identity matrix and formula_25 is the spectral radius of the matrix given as argument. The transfer function accounts for propagation paths irrespective of the number of 'bounces'.
The series is similar to the Born series from multiple scattering theory.
The impulse responses formula_26 are obtained by inverse Fourier transform of formula_21
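A minimal numerical sketch (not from the source; the matrix sizes and random entries are illustrative assumptions) of evaluating the closed-form transfer function at a single frequency and checking it against a truncated Neumann series:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_r, n_s = 2, 3, 5                  # transmitters, receivers, scatterers

D = 0.1 * rng.normal(size=(n_r, n_t))    # direct transmitter -> receiver
T = 0.1 * rng.normal(size=(n_s, n_t))    # transmitter -> scatterer
R = 0.1 * rng.normal(size=(n_r, n_s))    # scatterer -> receiver
B = 0.1 * rng.normal(size=(n_s, n_s))    # scatterer -> scatterer

assert max(abs(np.linalg.eigvals(B))) < 1        # spectral radius condition

H_closed = D + R @ np.linalg.inv(np.eye(n_s) - B) @ T

H_series = D.copy()                      # D + sum over k of R B^k T
Bk = np.eye(n_s)
for _ in range(50):
    H_series += R @ Bk @ T
    Bk = Bk @ B

print(np.allclose(H_closed, H_series))   # True: the series matches
```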
Partial transfer function.
Closed form expressions are available for partial sums, i.e. by considering only some of the terms in the transfer function. The partial transfer function for signal components propagating via at least formula_27 and at most formula_28 interactions is defined as
formula_29 where
formula_30
Here formula_31 denotes the number of interactions or the "bouncing order".
The partial transfer function is then
formula_32
Special cases: formula_33 recovers the full transfer function, while formula_34 gives the response excluding the direct term formula_14; splitting the response into formula_35 and formula_36 separates components with at most formula_28 interactions from the higher-order remainder.
One application of partial transfer functions is in hybrid models, where propagation graphs are employed to model part of the response (usually the higher-order interactions).
The partial impulse responses formula_37 are obtained from formula_38 by the inverse Fourier transform.
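Extending the same sketch, the closed-form partial transfer function can be evaluated directly; the function below and its argument shapes are assumptions for illustration:

```python
import numpy as np

def partial_transfer(D, R, B, T, K, L):
    # H_{K:L}(f) for components with at least K and at most L interactions
    I = np.eye(B.shape[0])
    inv = np.linalg.inv(I - B)
    BL = np.linalg.matrix_power(B, L)
    if K == 0:
        return D + R @ (I - BL) @ inv @ T
    return R @ (np.linalg.matrix_power(B, K - 1) - BL) @ inv @ T

# As L grows, B^L -> 0 (spectral radius below 1), so H_{0:L} approaches
# the full transfer function H(f).
```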
Propagation graph models.
The propagation graph methodology has been applied in various settings to create radio channel models. Such a model is referred to as a "propagation graph model". Such models have been derived for scenarios including
Calibration of propagation graph models.
To calibrate a propagation graph model, its parameters should be set to reasonable values. Different approaches can be taken.
Certain parameters can be derived from a simplified geometry of the room. In particular, reverberation time can be computed via room electromagnetics. Alternatively, the parameters can be set according to measurement data using inference techniques such as the method of moments (statistics), approximate Bayesian computation, or deep neural networks.
Related radio channel model types.
The method of propagation graph modeling is related to other methods. Notably,
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal G = (\\mathcal V, \\mathcal E)"
},
{
"math_id": 1,
"text": "\\mathcal V"
},
{
"math_id": 2,
"text": "\\mathcal E"
},
{
"math_id": 3,
"text": "\\mathcal V = \\mathcal V_t \\cup \\mathcal V_r \\cup\\mathcal V_s"
},
{
"math_id": 4,
"text": "\\mathcal V_t "
},
{
"math_id": 5,
"text": "\\mathcal V_r"
},
{
"math_id": 6,
"text": "\\mathcal V_s "
},
{
"math_id": 7,
"text": "\\mathcal E "
},
{
"math_id": 8,
"text": "\\mathcal G"
},
{
"math_id": 9,
"text": "\\mathcal E \\subset \\mathcal V^2"
},
{
"math_id": 10,
"text": "e = (v,v')"
},
{
"math_id": 11,
"text": "v"
},
{
"math_id": 12,
"text": "v'"
},
{
"math_id": 13,
"text": "e=(v,v')"
},
{
"math_id": 14,
"text": "\\mathbf D(f)"
},
{
"math_id": 15,
"text": "\\mathbf T(f)"
},
{
"math_id": 16,
"text": "\\mathbf R(f)"
},
{
"math_id": 17,
"text": "\\mathbf B(f)"
},
{
"math_id": 18,
"text": "f"
},
{
"math_id": 19,
"text": "\\mathbf X(f)"
},
{
"math_id": 20,
"text": "\\mathbf Y (f) = \\mathbf D(f) \\mathbf X (f) + \\mathbf R (f)\\mathbf T (f) \\mathbf X (f) + \\mathbf R (f)\\mathbf B(f) \\mathbf T (f) \\mathbf X (f) +\\mathbf R (f)\\mathbf B^2(f) \\mathbf T (f) \\mathbf X (f) + \\cdots"
},
{
"math_id": 21,
"text": "\\mathbf H(f)"
},
{
"math_id": 22,
"text": "\n\\begin{align}\n\\mathbf H(f) &= \\mathbf D(f)+ \\mathbf R (f)[ \\mathbf I+ \\mathbf B(f) + \\mathbf B(f)^{2} + \\cdots ] \\mathbf T (f)\\\\\n&= \\mathbf D(f)+ \\mathbf R (f) \\sum_{k=0}^\\infty \\mathbf B(f)^k \\mathbf T(f)\n\\end{align}\n"
},
{
"math_id": 23,
"text": "\\mathbf H(f) = \\mathbf D(f) + \\mathbf R(f) [\\mathbf I - \\mathbf B(f)]^{-1} \\mathbf T(f),\\qquad \\rho(\\mathbf B(f))<1 "
},
{
"math_id": 24,
"text": "\\mathbf I"
},
{
"math_id": 25,
"text": "\\rho(\\cdot)"
},
{
"math_id": 26,
"text": "\\mathbf h(\\tau)"
},
{
"math_id": 27,
"text": "K"
},
{
"math_id": 28,
"text": "L"
},
{
"math_id": 29,
"text": "\\mathbf H_{K:L}(f) = \\sum_{k=K}^{L} \\mathbf H_k(f)"
},
{
"math_id": 30,
"text": "\\mathbf H_k(f) = \n\\begin{cases} \\mathbf D(f),& k=0\\\\\n\\mathbf R(f) \\mathbf B^{k-1}(f) \\mathbf T(f), & k = 1,2,3,\\ldots\n\\end{cases}\n"
},
{
"math_id": 31,
"text": "k"
},
{
"math_id": 32,
"text": "\\mathbf H_{K:L}(f) = \n\\begin{cases}\n\\mathbf D(f) + \\mathbf R(f) [\\mathbf I-\\mathbf B^L(f)] \\cdot [\\mathbf I-\\mathbf B(f)]^{-1} \\cdot \\mathbf T(f), & K = 0\\\\\n \\mathbf R(f) [\\mathbf B^{K-1}(f)-\\mathbf B^L(f)] \\cdot [\\mathbf I-\\mathbf B(f)]^{-1} \\cdot \\mathbf T(f), & \\text{otherwise}.\\\\\n\\end{cases}\n"
},
{
"math_id": 33,
"text": "\\mathbf H_{0:\\infty}(f) = \\mathbf H(f) "
},
{
"math_id": 34,
"text": "\\mathbf H_{1:\\infty}(f) = \\mathbf R(f) [\\mathbf I-\\mathbf B(f)]^{-1} \\mathbf T(f) "
},
{
"math_id": 35,
"text": "\\mathbf H_{0:L}(f)"
},
{
"math_id": 36,
"text": "\\mathbf H_{L+1:\\infty}(f)"
},
{
"math_id": 37,
"text": "\\mathbf h_{K:L}(\\tau) "
},
{
"math_id": 38,
"text": "\\mathbf H_{K:L}(f) "
}
]
| https://en.wikipedia.org/wiki?curid=67349962 |
67353527 | Introgressive hybridization in plants | Introgressive hybridization, also known as introgression, is the flow of genetic material between divergent lineages via repeated backcrossing. In plants, this backcrossing occurs when an formula_0 generation hybrid breeds with one or both of its parental species.
Source of variation.
Although some genera of plants hybridize and introgress more easily than others, in certain scenarios external factors may contribute to an increased rate of hybridization. The phenomenon known as Hybridization of the Habitat echoes this idea, explaining that disturbances in a natural habitat can lead species which typically do not hybridize and backcross to do so with relative ease. Plant breeders also manipulate their subjects to hybridize in order to optimize their hardiness, appearance, or other desired traits. This type of hybridization has been particularly impactful for the production of many crop species, including but not limited to: certain types of rice, corn, wheat, barley, and rye. Natural introgression can occur in many genera and species, but manipulating the gene pool with artificial/forced introgression is useful for honing in on desired characteristics, such as drought tolerance or pest resistance.
Background.
In the early days of hybrid research, it was commonly believed that there was insufficient evidence of hybridization in nature because hybridization would mostly produce sterile or unfit offspring. Through experimentation and improved phylogenetic testing capabilities, it is now clear that the ability to produce fertile hybrid offspring varies by genus within the plant kingdom. A few examples of species with the capacity to produce fertile hybrids are given below.
Examples of natural introgression.
Irises.
One of the most significant early studies of plant hybridization involved three species of irises. Although they commonly form crosses where their natural habitats overlap, there is no evidence that "Iris fulva", "Iris hexagona", or "Iris brevicaulis" are closely related and their phenotypic differences (color/pattern/size) are distinct. Once introgression occurs, the resulting offspring display a wide array of color combinations, as well as varying flower size. Iris fulva shows a tendency for asymmetrical introgression, where it transfers more genetic material into hybrid offspring than either "Iris hexagona or Iris brevicaulis".
Sunflowers.
Differential introgression of chloroplast and nuclear genomes was first observed in the common sunflower ("Helianthus annuus ssp. texanus"). Within a particular region, the population showed differences in morphological features which indicated there may be hybridization with "H. debilis ssp. cucumerifolius". Researchers discovered that these "H. a. texanus" plants contained chloroplast DNA from "H. d. cucumerifolius", indicating introgression had occurred in one direction.
Poplars.
Hybridization among poplars is common wherever populations overlap; however, the degree of introgression varies greatly depending on the species. One study exploring the extent of introgression among three species of poplar trees (P. balsamifera, P. angustifolia and P. trichocarpa) along the Rocky Mountain range in the U.S. and Canada found extensive introgression in areas where the species converge. Genomic sequencing even revealed a trispecies hybrid in these overlapping areas. Another study found a hybrid zone in Utah with a unidirectional flow of introgression between "P. angustifolia and P. fremontii."
Examples of artificial introgression.
Wheat.
Introgression has played a major role in the development of wheat for crop production. One of the ways crop species can be manipulated is by crossing them with wild-type species. For instance, the wild wheat relative Agropyron elongatum has been crossed and introgressed with the domesticated wheat Triticum aestivum. The resulting hybrids have higher water-stress adaptation and greater root and shoot biomass, both of which can improve the fitness of the crop.
Daffodils.
Daffodils (genus "Narcissus") are able to produce semi-fertile or fertile offspring, even from wide crosses. The ability of daffodils, such as the yellow trumpet Narcissi and Poets’ Narcissi to hybridize and backcross allows for the vast variety of options modern-day gardeners have to select from. Although daffodils do hybridize and introgress in nature, artificial introgression allows for breeders to take species that are geographically separated and make unique crosses that would not appear naturally.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_1"
}
]
| https://en.wikipedia.org/wiki?curid=67353527 |
67354329 | Beth Gladen | American biostatistician
Beth Charmaine Gladen is an American biostatistician who worked at the National Institute of Environmental Health Sciences, where she did pioneering research on children’s environmental health with physician Walter J. Rogan. Their work included influential studies on the harmful effects of polychlorinated biphenyls transmitted to children in utero and through breast milk, and on correlations between breastfeeding and infant mental development. The Rogan–Gladen estimator, a frequentist correction of observed prevalence rates that accounts for misclassification based on sensitivity and specificity according to the formula
formula_0
is named in part for Gladen in theoretical work she published early in her career, again with Rogan.
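A minimal sketch of the estimator as stated above; the numerical inputs, and the clipping of the estimate to [0, 1], are illustrative assumptions rather than part of the original formula:

```python
def rogan_gladen(observed_prevalence, sensitivity, specificity):
    # corrected prevalence per the Rogan-Gladen formula
    est = (observed_prevalence + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(est, 0.0), 1.0)   # clip to a valid proportion

# 12% observed positives with a 90%-sensitive, 95%-specific test:
print(rogan_gladen(0.12, 0.90, 0.95))   # about 0.082
```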
Gladen completed her Ph.D. in statistics at Stanford University in 1977. Her dissertation, "Inference from Stopped Bernoulli Trials", was supervised by Paul Switzer. She was elected as a Fellow of the American Statistical Association in 1999. She retired before 2007.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{prevalence}=\\frac{\\text{observed prevalence}+\\text{specificity}-1}{\\text{sensitivity}+\\text{specificity}-1},"
}
]
| https://en.wikipedia.org/wiki?curid=67354329 |
673575 | Angular defect | In geometry, the (angular) defect (or deficit or deficiency) means the failure of some angles to add up to the expected amount of 360° or 180°, as they would in the Euclidean plane. The opposite notion is the excess.
Classically the defect arises in two ways: the defect of a vertex of a polyhedron, and the defect of a hyperbolic triangle;
and the excess also arises in two ways: the excess of a spherical triangle, and the excess of a vertex at which the surrounding angles sum to more than 360° (as at some vertices of non-convex polyhedra).
In the Euclidean plane, angles about a point add up to 360°, while interior angles in a triangle add up to 180° (equivalently, "exterior" angles add up to 360°). However, on a convex polyhedron the angles at a vertex add up to less than 360°, on a spherical triangle the interior angles always add up to more than 180° (the exterior angles add up to "less" than 360°), and the angles in a hyperbolic triangle always add up to less than 180° (the exterior angles add up to "more" than 360°).
In modern terms, the defect at a vertex is a discrete version of the curvature of the polyhedral surface concentrated at that point, and the Gauss–Bonnet theorem gives the total curvature as formula_0 times the Euler characteristic formula_1, so the sum of the defects is formula_2.
Defect of a vertex.
For a polyhedron, the defect at a vertex equals 2π minus the sum of all the angles at the vertex (all the faces at the vertex are included). If a polyhedron is convex, then the defect of each vertex is always positive. If the sum of the angles exceeds a full turn, as occurs in some vertices of many non-convex polyhedra, then the defect is negative.
The concept of defect extends to higher dimensions as the amount by which the sum of the dihedral angles of the cells at a peak falls short of a full circle.
Examples.
The defect of any of the vertices of a regular dodecahedron (in which three regular pentagons meet at each vertex) is 36°, or π/5 radians, or 1/10 of a circle. Each of the angles measures 108°; three of these meet at each vertex, so the defect is 360° − (108° + 108° + 108°) = 36°.
The same procedure can be followed for the other Platonic solids: the defect is 180° at each vertex of the regular tetrahedron (three 60° angles), 90° at each vertex of the cube (three 90° angles), 120° at each vertex of the regular octahedron (four 60° angles), and 60° at each vertex of the regular icosahedron (five 60° angles).
Descartes's theorem.
Descartes's theorem on the "total defect" of a polyhedron states that if the polyhedron is homeomorphic to a sphere (i.e. topologically equivalent to a sphere, so that it may be deformed into a sphere by stretching without tearing), the "total defect", i.e. the sum of the defects of all of the vertices, is two full circles (or 720° or 4π radians). The polyhedron need not be convex.
A generalization says the number of circles in the total defect equals the Euler characteristic of the polyhedron. This is a special case of the Gauss–Bonnet theorem which relates the integral of the Gaussian curvature to the Euler characteristic. Here the Gaussian curvature is concentrated at the vertices: on the faces and edges the Gaussian curvature is zero and the integral of Gaussian curvature at a vertex is equal to the defect there.
This can be used to calculate the number "V" of vertices of a polyhedron by totaling the angles of all the faces, and adding the total defect. This total will have one complete circle for every vertex in the polyhedron. Care has to be taken to use the correct Euler characteristic for the polyhedron.
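A short sketch (using the standard interior angles of the regular faces) that verifies Descartes's theorem numerically for the five Platonic solids:

```python
solids = {
    # name: (interior face angle in degrees, faces meeting per vertex, vertices)
    "tetrahedron":  (60, 3, 4),
    "cube":         (90, 3, 8),
    "octahedron":   (60, 4, 6),
    "dodecahedron": (108, 3, 20),
    "icosahedron":  (60, 5, 12),
}

for name, (angle, q, v) in solids.items():
    defect = 360 - q * angle                  # defect at a single vertex
    total = defect * v                        # total defect; always 720
    print(f"{name}: {defect} degrees per vertex, {total} degrees total")
```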
A converse to this theorem is given by Alexandrov's uniqueness theorem, according to which a metric space that is locally Euclidean except for a finite number of points of positive angular defect, adding to 4π, can be realized in a unique way as the surface of a convex polyhedron.
Positive defects on non-convex figures.
It is tempting to think that every non-convex polyhedron must have some vertices whose defect is negative, but this need not be the case. Two counterexamples to this are the small stellated dodecahedron and the great stellated dodecahedron, which have twelve convex points each with positive defects.
A counterexample which does not intersect itself is provided by a cube where one face is replaced by a square pyramid: this elongated square pyramid is convex and the defects at each vertex are each positive. Now consider the same cube where the square pyramid is pushed into the cube instead of protruding from it: this is concave, but the defects remain the same and so are all positive.
Negative defect indicates that the vertex resembles a saddle point (negative curvature), whereas positive defect indicates that the vertex resembles a local maximum or minimum (positive curvature). | [
{
"math_id": 0,
"text": "2\\pi"
},
{
"math_id": 1,
"text": "\\chi = 2"
},
{
"math_id": 2,
"text": "4\\pi"
}
]
| https://en.wikipedia.org/wiki?curid=673575 |
67358788 | Galperin configuration | The Galperin configuration is a particular arrangement of sensing elements found in a class of seismic instruments that measure ground motion. It is named after Soviet seismologist Evsey Iosifovich Galperin, who introduced it in 1955 for petroleum exploration.
Description.
Common triaxial seismometers provide signal outputs in three orthogonal axes oriented towards east–west (E), north–south (N) and up-down (Z), i.e. in the Cartesian coordinate system. In contrast, the Galperin configuration consists of three orthogonal axes (U, V, W) that are oriented at precisely the same angle with respect to the horizontal plane (α=35.26°). The projection of all three axes onto the horizontal plane are all separated by 120°, which results in the "symmetric triaxial" design. The recordings acquired with the Galperin configuration are brought to the Cartesian coordinate system by the following coordinate transformation, where β=30°:
formula_0
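A numerical sketch of this transformation; the sample (U, V, W) values are illustrative:

```python
import numpy as np

alpha = np.radians(35.26)      # tilt of each axis from the horizontal
beta = np.radians(30.0)
ca, sa = np.cos(alpha), np.sin(alpha)
sb, cb = np.sin(beta), np.cos(beta)

M = np.array([
    [-ca,  ca * sb,  ca * sb],
    [0.0,  ca * cb, -ca * cb],
    [ sa,       sa,       sa],
])

uvw = np.array([0.3, -0.1, 0.2])   # recorded U, V, W components
enz = M @ uvw                       # east, north, vertical ground motion
print(enz)
```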
A main advantage of the Galperin configuration is that all three receivers have identical orientation with respect to the vertical axis and thus have identical instrument responses. Another advantage is the ability to build smaller packages (i.e., instruments) compared to the Cartesian orientation, which makes the Galperin configuration especially applicable for borehole installations. Other benefits of the Galperin configuration include easier distinction between external and internal noise sources and the fact that the configuration is not sensitive to rotation around the vertical axis. However, the main drawback of the configuration is that all input vectors are linked by the rotational matrix, which causes failure of the entire system when one of the three sensors is malfunctioning. In the Cartesian configuration, for example, both horizontal components still provide useful data in case the vertical (Z) component fails.
The Galperin configuration found wide application in seismometer design, including models for borehole, ocean bottom, and vault installations. The Galperin configuration can also be applied at the source side to simulate three-component seismic sources.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{bmatrix} E \\\\ N \\\\ Z \\end{bmatrix}\n\n=\n\n\\begin{bmatrix} -\\cos\\alpha & \\cos\\alpha\\sin\\beta & \\cos\\alpha\\sin\\beta \\\\ 0 & \\cos\\alpha\\cos\\beta & -\\cos\\alpha\\cos\\beta \\\\ \\sin\\alpha & \\sin\\alpha & \\sin\\alpha \\end{bmatrix}\n\n\\begin{bmatrix} U\\\\V\\\\W \\end{bmatrix}"
}
]
| https://en.wikipedia.org/wiki?curid=67358788 |
67362962 | 2 Chronicles 20 | Second Book of Chronicles, chapter 20
2 Chronicles 20 is the twentieth chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter (as all chapters from 17 to 20) is the reign of Jehoshaphat, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 37 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Jehoshaphat Defeats Moab and Ammon (20:1–30).
This section contains the battle report of Jehoshaphat against the southeastern Transjordanian coalition of powers, but it was exclusively a sacral war (verse 15: "the battle is not yours, but God's") as the enemies destroyed themselves and the people of Judah only came to sing and pick up the spoils of war. Informed about the invasion of a huge enemy, Jehoshaphat resorted to prayer (verses 6–12), which was also called a 'national lament' (echoing Solomon's prayer in 2 Chronicles 6:28, 34), addressing YHWH as 'O LORD, God of our ancestors' and 'the ruler of all peoples who gave the Israelites their land'. YHWH ordered Israel not to attack these Transjordanian neighbors (Deuteronomy 2), but as they attacked, Jehoshaphat appealed to YHWH for their expulsion from his land. Jahaziel, a Levitical singer, served as the designated priest to proclaim God's assurance of victory (cf. Deuteronomy 20:2-4; 1 Chronicles 25:1-8), as a result of faith in God, quoting both Moses (Exodus 14:13-14) and David (1 Samuel 17:47). As in previous sacral wars, 'the fear of God descends upon all the kingdoms of the countries' (cf. Exodus 15:14-16; Deuteronomy 2:25; 11:25; Joshua 2:9, 11, 24; 10:1-2; 1 Samuel 4:7-8; 14:15; 1 Chronicles 14:7; 2 Chronicles 14:13). Jehoshaphat, all Judeans and the citizens of Jerusalem reacted joyfully by worshipping YHWH (verses 18–19) followed by the Levites, who sang praises to God, even before the salvation happened. The entire action of God (verse 20) took place early in the morning (that is, the time at which God usually acted), leaving no survivor among the enemy armies and the largest spoils in the entire Hebrew Bible (taking three days to collect). The war ended where it began, in the temple of Jerusalem (verses 26–28) and with music (verses 29–30, cf. 17:10; typical for the Chronicles). As fear of YHWH struck not only Judah's neighboring kingdoms, but also all the kingdoms in the region, Judah was at peace as a reward for the nation's exemplary conduct.
The end of Jehoshaphat's reign (20:31–37).
Jehoshaphat's second misstep happened at the end of his reign, when he again worked together with a king of the northern kingdom (Ahaziah the son of Ahab). Despite a warning given through a prophet, Jehoshaphat went on with his alliance and was therefore condemned to failure, although this (as well as the previous misstep) did not affect the positive judgement of his reign.
"And Jehoshaphat reigned over Judah. He was thirty-five years old when he began his reign, and he was king in Jerusalem for twenty-five years. The name of his mother was Azubah the daughter of Shilhi."
"And he allied himself with him to make ships to go to Tarshish, and they made the ships in Ezion Geber."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67362962 |
673693 | Eliyahu Rips | Israeli mathematician of Latvian origin (1948–2024)
Eliyahu Rips (; ; ; 12 December 1948 – 19 July 2024) was an Israeli mathematician of Latvian origin known for his research in geometric group theory. He became known to the general public following his co-authoring a paper on what is popularly known as Bible code, the supposed coded messaging in the Hebrew text of the Torah.
Biography.
Ilya (Eliyahu) Rips grew up in Latvia (then part of the Soviet Union). His mother was Jewish and from Riga, the only one of nine siblings to survive the war; the others were killed in Rumbula and other places. His father Aaron was a Jewish mathematician from Belarus; his wife, children, and all of his relatives were killed during the Holocaust.
Rips was the first high school student from Latvia to participate in the International Mathematical Olympiad. In January 1969, he learnt from listening to Western radio broadcasts (then illegal in the USSR) of the self-immolation of Czechoslovak student Jan Palach. On 13 April 1969, Rips, then a graduate student at the University of Latvia, attempted self-immolation in a protest against the Soviet invasion of Czechoslovakia. After unwrapping a self-made slogan condemning the occupation of Czechoslovakia, he lit a candle and set his gasoline-soaked clothes ablaze. A group of bystanders was able to quickly put the fire out, resulting only in burns to Rips' neck and hands. Though injured, he was first taken to the local KGB office and interrogated. He was incarcerated by the Soviet government for two years. After his story spread among Western mathematical circles and a wave of petitions, Rips was freed in 1971. The following year, he was allowed to immigrate to Israel.
Rips joined the Department of Mathematics at the Hebrew University of Jerusalem, and in 1975 completed his Ph.D. in mathematics there. His topic was the dimensional subgroup problem. He was awarded the Aharon Katzir Prize. In 1979, Rips received the Erdős Prize from the Israel Mathematical Society, and was a sectional speaker at the International Congress of Mathematicians in 1994.
Rips died on 19 July 2024, at the age of 75.
Academic career.
Rips was a professor in the Department of Mathematics at Hebrew University. His research interests were geometric and combinatorial methods in infinite group theory. This includes small cancellation theory and its generalizations, (Gromov) hyperbolic group theory, Bass-Serre theory and the actions of groups on formula_0-trees.
Rips' work on group actions on formula_0-trees is mostly unpublished. The Rips machine, in the hands of Rips and his student Zlil Sela, has proven to be effective in obtaining classification results such as a solution to the isomorphism problem for hyperbolic groups.
"The Bible Code" controversy.
In the late 1970s, Rips began looking with the help of a computer for codes in the Torah. In 1994, Rips, together with Doron Witztum and Yoav Rosenberg, published in the journal "Statistical Science" an article, "Equidistant Letter Sequences in the Book of Genesis", which claimed the discovery of encoded messages in the Hebrew text of the Book of Genesis. This, in turn, was the inspiration for the 1997 book "The Bible Code" by journalist Michael Drosnin. While Rips originally claimed that he agreed with Drosnin's findings, in 1997 Rips described Drosnin's book as "on very shaky ground" and "of no value." Since Drosnin's book, Bible codes have been a subject of controversy, with the claims being criticized by Brendan McKay and others. An early supporter of Rips' theories was Robert Aumann, Nobel Prize Laureate in Economics 2005, who headed a commission overseeing Rips' experiments attempting to prove the existence of a secret code from God in the Torah. Eventually, Aumann abandoned the idea and withdrew his support from Rips.
"The Bible Code" treats the text of the Bible as a word search puzzle: for example, a word may be spelled diagonally moving in a north west direction, or perhaps left-to-right taking every second letter. The more patterns that are allowed, the more words that can be found. Elementary statistics can be used to estimate the probabilities of finding certain hidden messages. The statistician Jeffrey S. Rosenthal shows in his book "Struck by Lightning: The Curious World of Probabilities" that "hidden messages" are statistically expected and hence should not be seen as divine messages, much less as predictions of the future. Mathematician Brendan McKay illustrated this point by finding messages in the English text of "Moby Dick" that supposedly "predicted" famous assassinations of the past, such as the assassination of John F. Kennedy and the assassination of Indira Gandhi.
The 1997 "Ig Nobel Prize for Literature" was awarded to Eliyahu Rips, Doron Witztum, Yoav Rosenberg, and Michael Drosnin, for their work on Bible codes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb R"
}
]
| https://en.wikipedia.org/wiki?curid=673693 |
67378 | Ulam spiral | Visualization of the prime numbers formed by arranging the integers into a spiral
The Ulam spiral or prime spiral is a graphical depiction of the set of prime numbers, devised by mathematician Stanisław Ulam in 1963 and popularized in Martin Gardner's "Mathematical Games" column in "Scientific American" a short time later. It is constructed by writing the positive integers in a square spiral and specially marking the prime numbers.
Ulam and Gardner emphasized the striking appearance in the spiral of prominent diagonal, horizontal, and vertical lines containing large numbers of primes. Both Ulam and Gardner noted that the existence of such prominent lines is not unexpected, as lines in the spiral correspond to quadratic polynomials, and certain such polynomials, such as Euler's prime-generating polynomial "x"2 − "x" + 41, are believed to produce a high density of prime numbers. Nevertheless, the Ulam spiral is connected with major unsolved problems in number theory such as Landau's problems. In particular, no quadratic polynomial has ever been proved to generate infinitely many primes, much less to have a high asymptotic density of them, although there is a well-supported conjecture as to what that asymptotic density should be.
In 1932, 31 years prior to Ulam's discovery, the herpetologist Laurence Klauber constructed a triangular, non-spiral array containing vertical and diagonal lines exhibiting a similar concentration of prime numbers. Like Ulam, Klauber noted the connection with prime-generating polynomials, such as Euler's.
Construction.
The Ulam spiral is constructed by writing the positive integers in a spiral arrangement on a square lattice:
and then marking the prime numbers:
In the figure, primes appear to concentrate along certain diagonal lines. In the 201×201 Ulam spiral shown above, diagonal lines are clearly visible, confirming the pattern to that point. Horizontal and vertical lines with a high density of primes, while less prominent, are also evident. Most often, the number spiral is started with the number 1 at the center, but it is possible to start with any number, and the same concentration of primes along diagonal, horizontal, and vertical lines is observed. Starting with 41 at the center gives a diagonal containing an unbroken string of 40 primes (starting from 1523 southwest of the origin, decreasing to 41 at the origin, and increasing to 1601 northeast of the origin), the longest example of its kind.
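A minimal sketch (an illustrative implementation, not from the source) of this construction: walk a counterclockwise square spiral outward from 1 at the origin, with run lengths 1, 1, 2, 2, 3, 3, ..., and record where the primes land:

```python
def is_prime(n):
    # trial division; adequate for a small demonstration
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def ulam_spiral(n_max):
    x = y = 0
    dx, dy = 1, 0              # start by moving east
    run, steps, turns = 1, 0, 0
    prime_positions = {}
    for n in range(1, n_max + 1):
        if is_prime(n):
            prime_positions[n] = (x, y)
        x, y = x + dx, y + dy
        steps += 1
        if steps == run:       # turn left; lengthen the run every 2 turns
            steps = 0
            dx, dy = -dy, dx
            turns += 1
            if turns % 2 == 0:
                run += 1
    return prime_positions

# Primes 3, 13, 31 fall on a common diagonal near the origin.
print(ulam_spiral(50))
```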
History.
According to Gardner, Ulam discovered the spiral in 1963 while doodling during the presentation of "a long and very boring paper" at a scientific meeting. These hand calculations amounted to "a few hundred points". Shortly afterwards, Ulam, with collaborators Myron Stein and Mark Wells, used MANIAC II at Los Alamos Scientific Laboratory to extend the calculation to about 100,000 points. The group also computed the density of primes among numbers up to 10,000,000 along some of the prime-rich lines as well as along some of the prime-poor lines. Images of the spiral up to 65,000 points were displayed on "a scope attached to the machine" and then photographed. The Ulam spiral was described in Martin Gardner's March 1964 "Mathematical Games" column in "Scientific American" and featured on the front cover of that issue. Some of the photographs of Stein, Ulam, and Wells were reproduced in the column.
In an addendum to the "Scientific American" column, Gardner mentioned the earlier paper of Klauber.
Klauber describes his construction as follows, "The integers are arranged in triangular order with 1 at the apex, the second line containing numbers 2 to 4, the third 5 to 9, and so forth. When the primes have been indicated, it is found that there are concentrations in certain vertical and diagonal lines, and amongst these the so-called Euler sequences with high concentrations of primes are discovered."
Explanation.
Diagonal, horizontal, and vertical lines in the number spiral correspond to polynomials of the form
formula_0
where "b" and "c" are integer constants. When "b" is even, the lines are diagonal, and either all numbers are odd, or all are even, depending on the value of "c". It is therefore no surprise that all primes other than 2 lie in alternate diagonals of the Ulam spiral. Some polynomials, such as formula_1, while producing only odd values, factorize over the integers formula_2 and are therefore never prime except possibly when one of the factors equals 1. Such examples correspond to diagonals that are devoid of primes or nearly so.
To gain insight into why some of the remaining odd diagonals may have a higher concentration of primes than others, consider formula_3 and formula_4. Compute remainders upon division by 3 as "n" takes successive values 0, 1, 2, ... For the first of these polynomials, the sequence of remainders is 1, 2, 2, 1, 2, 2, ..., while for the second, it is 2, 0, 0, 2, 0, 0, ... This implies that in the sequence of values taken by the second polynomial, two out of every three are divisible by 3, and hence certainly not prime, while in the sequence of values taken by the first polynomial, none are divisible by 3. Thus it seems plausible that the first polynomial will produce values with a higher density of primes than will the second. At the very least, this observation gives little reason to believe that the corresponding diagonals will be equally dense with primes. One should, of course, consider divisibility by primes other than 3. Examining divisibility by 5 as well, remainders upon division by 15 repeat with pattern 1, 11, 14, 10, 14, 11, 1, 14, 5, 4, 11, 11, 4, 5, 14 for the first polynomial, and with pattern 5, 0, 3, 14, 3, 0, 5, 3, 9, 8, 0, 0, 8, 9, 3 for the second, implying that only three out of 15 values in the second sequence are potentially prime (being divisible by neither 3 nor 5), while 12 out of 15 values in the first sequence are potentially prime (since only three are divisible by 5 and none are divisible by 3).
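A short sketch reproducing the remainder computation described above:

```python
# Remainders of 4n^2 + 6n + c modulo 3 for n = 0, 1, ..., 5.
for c in (1, 5):
    residues = [(4 * n * n + 6 * n + c) % 3 for n in range(6)]
    print(f"c = {c}: {residues}")
# c = 1: [1, 2, 2, 1, 2, 2]   -> never divisible by 3
# c = 5: [2, 0, 0, 2, 0, 0]   -> two of every three values divisible by 3
```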
While rigorously-proved results about primes in quadratic sequences are scarce, considerations like those above give rise to a plausible conjecture on the asymptotic density of primes in such sequences, which is described in the next section.
Hardy and Littlewood's Conjecture F.
In their 1923 paper on the Goldbach Conjecture, Hardy and Littlewood stated a series of conjectures, one of which, if true, would explain some of the striking features of the Ulam spiral. This conjecture, which Hardy and Littlewood called "Conjecture F", is a special case of the Bateman–Horn conjecture and asserts an asymptotic formula for the number of primes of the form "ax"2 + "bx" + "c". Rays emanating from the central region of the Ulam spiral making angles of 45° with the horizontal and vertical correspond to numbers of the form 4"x"2 + "bx" + "c" with "b" even; horizontal and vertical rays correspond to numbers of the same form with "b" odd. Conjecture F provides a formula that can be used to estimate the density of primes along such rays. It implies that there will be considerable variability in the density along different rays. In particular, the density is highly sensitive to the discriminant of the polynomial, "b"2 − 16"c".
Conjecture F is concerned with polynomials of the form "ax"2 + "bx" + "c" where "a", "b", and "c" are integers and "a" is positive. If the coefficients contain a common factor greater than 1 or if the discriminant Δ = "b"2 − 4"ac" is a perfect square, the polynomial factorizes and therefore produces composite numbers as "x" takes the values 0, 1, 2, ... (except possibly for one or two values of "x" where one of the factors equals 1). Moreover, if "a" + "b" and "c" are both even, the polynomial produces only even values, and is therefore composite except possibly for the value 2. Hardy and Littlewood assert that, apart from these situations, "ax"2 + "bx" + "c" takes prime values infinitely often as "x" takes the values 0, 1, 2, ... This statement is a special case of an earlier conjecture of Bunyakovsky and remains open. Hardy and Littlewood further assert that, asymptotically, the number "P"("n") of primes of the form "ax"2 + "bx" + "c" and less than "n" is given by
formula_5
where "A" depends on "a", "b", and "c" but not on "n". By the prime number theorem, this formula with "A" set equal to one is the asymptotic number of primes less than "n" expected in a random set of numbers having the same density as the set of numbers of the form "ax"2 + "bx" + "c". But since "A" can take values bigger or smaller than 1, some polynomials, according to the conjecture, will be especially rich in primes, and others especially poor. An unusually rich polynomial is 4"x"2 − 2"x" + 41 which forms a visible line in the Ulam spiral. The constant "A" for this polynomial is approximately 6.6, meaning that the numbers it generates are almost seven times as likely to be prime as random numbers of comparable size, according to the conjecture. This particular polynomial is related to Euler's prime-generating polynomial "x"2 − "x" + 41 by replacing "x" with 2"x", or equivalently, by restricting "x" to the even numbers. The constant "A" is given by a product running over all prime numbers,
formula_6,
in which formula_7 is number of zeros of the quadratic polynomial modulo "p" and therefore takes one of the values 0, 1, or 2. Hardy and Littlewood break the product into three factors as
formula_8.
Here the factor ε, corresponding to the prime 2, is 1 if "a" + "b" is odd and 2 if "a" + "b" is even. The first product index "p" runs over the finitely-many odd primes dividing both "a" and "b". For these primes formula_9 since "p" then cannot divide "c". The second product index formula_10 runs over the infinitely-many odd primes not dividing "a". For these primes formula_7 equals 1, 2, or 0 depending on whether the discriminant is 0, a non-zero square, or a non-square modulo "p". This is accounted for by the use of the Legendre symbol, formula_11. When a prime "p" divides "a" but not "b" there is one root modulo "p". Consequently, such primes do not contribute to the product.
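A rough numerical sketch of the constant "A" for 4"x"2 − 2"x" + 41 (here Δ = 4 − 656 = −652, "a" + "b" = 2 is even so ε = 2, and no odd prime divides both "a" and "b"); truncating the infinite product is an approximation, and convergence toward the quoted value of about 6.6 is slow:

```python
def odd_primes(limit):
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:limit + 1:p] = bytearray(len(range(p * p, limit + 1, p)))
    return [p for p in range(3, limit + 1) if sieve[p]]

delta = -652
A = 2.0                                   # the factor for the prime 2
for p in odd_primes(100_000):
    ls = pow(delta % p, (p - 1) // 2, p)  # Euler's criterion
    chi = 0 if ls == 0 else (1 if ls == 1 else -1)   # Legendre symbol
    A *= 1.0 - chi / (p - 1)
print(A)                                  # a rough approximation of A
```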
A quadratic polynomial with "A" ≈ 11.3, currently the highest known value, has been discovered by Jacobson and Williams.
Variants.
Klauber's 1932 paper describes a triangle in which row "n" contains the numbers ("n" − 1)2 + 1 through "n"2. As in the Ulam spiral, quadratic polynomials generate numbers that lie in straight lines. Vertical lines correspond to numbers of the form "k"2 − "k" + "M". Vertical and diagonal lines with a high density of prime numbers are evident in the figure.
Robert Sacks devised a variant of the Ulam spiral in 1994. In the Sacks spiral, the non-negative integers are plotted on an Archimedean spiral rather than the square spiral used by Ulam, and are spaced so that one perfect square occurs in each full rotation. (In the Ulam spiral, two squares occur in each rotation.) Euler's prime-generating polynomial, "x"2 − "x" + 41, now appears as a single curve as "x" takes the values 0, 1, 2, ... This curve asymptotically approaches a horizontal line in the left half of the figure. (In the Ulam spiral, Euler's polynomial forms two diagonal lines, one in the top half of the figure, corresponding to even values of "x" in the sequence, the other in the bottom half of the figure corresponding to odd values of "x" in the sequence.)
Additional structure may be seen when composite numbers are also included in the Ulam spiral. The number 1 has only a single factor, itself; each prime number has two factors, itself and 1; composite numbers are divisible by at least three different factors. Using the size of the dot representing an integer to indicate the number of factors and coloring prime numbers red and composite numbers blue produces the figure shown.
Spirals following other tilings of the plane also generate lines rich in prime numbers, for example hexagonal spirals.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f(n) = 4 n^2 + b n + c"
},
{
"math_id": 1,
"text": "4 n^2 + 8 n + 3"
},
{
"math_id": 2,
"text": "(4 n^2 + 8 n + 3)=(2n+1)(2n+3)"
},
{
"math_id": 3,
"text": "4 n^2 + 6 n + 1"
},
{
"math_id": 4,
"text": "4 n^2 + 6 n + 5"
},
{
"math_id": 5,
"text": "P(n)\\sim A\\frac{1}{\\sqrt{a}}\\frac{\\sqrt{n}}{\\log n}"
},
{
"math_id": 6,
"text": " A = \\prod\\limits_{p} \\frac{p-\\omega(p)}{p-1}~"
},
{
"math_id": 7,
"text": "\\omega (p)"
},
{
"math_id": 8,
"text": "A = \\varepsilon\\prod_p \\biggl(\\frac{p}{p-1}\\biggr)\\,\\prod_{\\varpi}\\biggl(1-\\frac{1}{\\varpi-1}\\Bigl(\\frac{\\Delta}{\\varpi}\\Bigr)\\biggr)"
},
{
"math_id": 9,
"text": "\\omega (p)=0"
},
{
"math_id": 10,
"text": "\\varpi"
},
{
"math_id": 11,
"text": "\\left(\\frac{\\Delta}{\\varpi}\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=67378 |
67378517 | Vertical-axis wind turbine | Type of wind turbine
A vertical-axis wind turbine (VAWT) is a type of wind turbine where the main rotor shaft is set transverse to the wind while the main components are located at the base of the turbine. This arrangement allows the generator and gearbox to be located close to the ground, facilitating service and repair. VAWTs do not need to be pointed into the wind, which removes the need for wind-sensing and orientation mechanisms. Major drawbacks for the early designs (Savonius, Darrieus and giromill) included the significant torque ripple during each revolution, and the large bending moments on the blades. Later designs addressed the torque ripple by sweeping the blades helically (Gorlov type). Savonius vertical-axis wind turbines (VAWT) are not widespread, but their simplicity and better performance in disturbed flow-fields, compared to small horizontal-axis wind turbines (HAWT) make them a good alternative for distributed generation devices in an urban environment.
A vertical axis wind turbine has its axis perpendicular to the wind streamlines and vertical to the ground. A more general term that includes this option is a "transverse axis wind turbine" or "cross-flow wind turbine". For example, the original Darrieus patent, US patent 1835018, includes both options.
Drag-type VAWTs such as the Savonius rotor typically operate at lower tip speed ratios than lift-based VAWTs such as Darrieus rotors and cycloturbines.
Computer modelling suggests that wind farms constructed using vertical-axis wind turbines are 15% more efficient than conventional horizontal axis wind turbines as they generate less turbulence.
General aerodynamics.
The forces and the velocities acting in a Darrieus turbine are depicted in the accompanying figure. The resultant velocity vector, formula_0, is the vectorial sum of the undisturbed upstream air velocity, formula_1, and the velocity vector of the advancing blade, formula_2.
formula_3
Thus the oncoming fluid velocity varies during each cycle. Maximum velocity is found for formula_4 and the minimum is found for formula_5, where formula_6 is the azimuthal or orbital blade position. The angle of attack, formula_7, is the angle between the oncoming air speed, W, and the blade's chord. The resultant airflow creates a varying, positive angle of attack to the blade in the upstream zone of the machine, switching sign in the downstream zone of the machine.
It follows from geometric considerations of angular velocity as seen in the accompanying figure that:
formula_8
and:
formula_9
Solving for the relative velocity as the resultant of the tangential and normal components yields:
formula_10
Thus, combining the above with the definitions for the tip speed ratio formula_11 yields the following expression for the resultant velocity:
formula_12
Angle of attack is solved as:
formula_13
Which when substituting the above yields:
formula_14
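A short sketch evaluating the resultant-velocity and angle-of-attack expressions above over one revolution; the tip speed ratio and free-stream speed are illustrative assumptions:

```python
import numpy as np

U, lam = 10.0, 4.0                  # free-stream speed (m/s), tip speed ratio
theta = np.radians([0, 60, 120, 180, 240, 300])

W = U * np.sqrt(1 + 2 * lam * np.cos(theta) + lam ** 2)
alpha = np.arctan2(np.sin(theta), np.cos(theta) + lam)

for t, w, a in zip(np.degrees(theta), W, np.degrees(alpha)):
    print(f"theta = {t:5.1f} deg  W = {w:5.2f} m/s  alpha = {a:6.2f} deg")
# W peaks at theta = 0 (U(1 + lambda) = 50 m/s) and is smallest
# at theta = 180 (U(lambda - 1) = 30 m/s).
```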
The resultant aerodynamic force is resolved either into lift (L) - drag (D) components or normal (N) - tangential (T) components. The forces are considered acting at the quarter-chord point, and the pitching moment is determined to resolve the aerodynamic forces. The aeronautical terms "lift" and "drag" refer to the forces across (lift) and along (drag) the approaching net relative airflow. The tangential force acts along the blade's velocity, pulling the blade around, and the normal force acts radially, pushing against the shaft bearings. The lift and the drag force are useful when dealing with the aerodynamic forces around the blade such as dynamic stall, boundary layer etc.; while when dealing with global performance, fatigue loads, etc., it is more convenient to have a normal-tangential frame. The lift and the drag coefficients are usually normalised by the dynamic pressure of the relative airflow, while the normal and tangential coefficients are usually normalised by the dynamic pressure of undisturbed upstream fluid velocity.
formula_15
A = Blade Area (not to be confused with the Swept Area, which is equal to the height of the blade/rotor times the rotor diameter),
R = Radius of turbine
The amount of power, P, that can be absorbed by a wind turbine:
formula_16
Where formula_17 is the power coefficient, formula_18 is air density, formula_19 is the swept area of the turbine, and formula_20 is the wind speed.
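A minimal sketch of this power relation; every input value is an illustrative assumption:

```python
rho = 1.225            # air density at sea level, kg/m^3
Cp = 0.30              # assumed power coefficient
A = 2.0 * 3.0          # swept area: rotor diameter x blade height, m^2
v = 8.0                # wind speed, m/s

P = 0.5 * Cp * rho * A * v ** 3
print(f"P = {P:.0f} W")   # about 564 W
```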
Types.
There are two main types of vertical-axis wind turbines: the Savonius wind turbine and the Darrieus wind turbine. The Darrieus rotor comes in various subforms, including helix-shaped, disc-like, and the H-rotor with straight blades. These turbines typically have three slim rotor blades driven by lift forces, allowing them to achieve high speeds.
Various simple designs exist for vertical wind turbines, as detailed below. In practice, a range of variations and combinations may be encountered, with developers frequently demonstrating their creativity in crafting diverse forms of vertical wind turbines.
Savonius.
The Savonius wind turbine (SWT) is a drag-type VAWT. The common design includes a rotating shaft with two or three scoops that catch the incoming wind. Due to its simplistic and robust design and its relatively low efficiency it is used whenever reliability is more important than efficiency. One reason for the low efficiency of a Savonius wind turbine is that roughly only half of the turbine generates positive torque, while the other side moves against the wind and thus produces negative torque. A variant of SWT is the Harmony wind turbine with helix-shaped blades and an automatic furling mechanism during high-speed wind conditions.
Darrieus.
The Darrieus wind turbine is a lift-type VAWT. The original design included a number of curved aerofoil blades with the tips attached on a rotating shaft. However, there are also designs that use straight vertical airfoils, referred to as H-rotor or Giromill Darrieus wind turbines. Furthermore, the blades of the Darrieus wind turbine can be shaped into a helix to reduce the torque ripple effect on the turbine by spreading the torque evenly over the revolution.
Being lift-type devices, the Darrieus wind turbines can extract more power from the wind than drag-type wind turbines, such as the Savonius wind turbine.
Revolving wing.
Revolving wing wind turbines or rotating wing wind turbines are a new category of lift-type VAWTs which use 1 vertically standing, non-helical airfoil to generate 360-degree rotation around a vertical shaft that runs through the center of the airfoil.
Advantages.
VAWTs offer a number of advantages over traditional horizontal-axis wind turbines (HAWTs):
Disadvantages.
As the rotational speed of a VAWT grows, so does the power; past a certain peak point, however, the power progressively decreases toward zero even while the rotor speed remains at its greatest. For this reason, disc brakes are used to slow the turbine in high-wind conditions. However, sometimes due to disc-brake overheating, the turbine can catch fire.
VAWTs often suffer from dynamic stall of the blades as the angle of attack varies rapidly.
The blades of a VAWT are fatigue-prone due to the wide variation in applied forces during each rotation. The vertically oriented blades can twist and bend during each turn, shortening their usable lifetimes.
Other than the drag-types, VAWTs have proven less reliable than HAWTs, although modern designs have overcome many early issues.
Research.
A 2021 study simulated a VAWT configuration that allowed VAWTs to beat a comparable HAWT installation by 15%. An 11,500-hour simulation demonstrated the increased efficiency, in part by using a grid formation. One effect is to avoid downstream turbulence stemming from grid-arranged HAWTs that lowers efficiency. Other optimizations included array angle, rotation direction, turbine spacing, and number of rotors.
In 2022 Norway's World Wide Wind introduced floating VAWTs with two sets of counter-rotating blades. The two sets are fixed to concentric shafts. Each has an attached turbine. One is attached to the rotor, the other to the stator. This has the effect of doubling their speed relative to each other versus a static stator. They claimed to more than double the output compared to the largest HAWTs. HAWTs require heavy drivetrains, gearboxes, generators and blades at the top of the tower, necessitating heavy underwater counterbalances. VAWTs place most of the heavy components at the bottom of the tower, reducing the need for counterbalance. The blades sweep a conical area, which helps reduce the turbulence downwind of each tower, increasing the maximum tower density. The company claims it will build a 40-megawatt unit.
Applications.
The Windspire, a small VAWT intended for individual (home or office) use was developed in the early 2000s by US company Mariah Power. The company reported that several units had been installed across the US by June 2008.
Arborwind, an Ann Arbor, Michigan, based company, produces a patented small VAWT which has been installed at several US locations as of 2013.
In 2011, Sandia National Laboratories wind-energy researchers began a five-year study of applying VAWT design technology to offshore wind farms. The researchers stated: "The economics of offshore windpower are different from land-based turbines, due to installation and operational challenges. VAWTs offer three big advantages that could reduce the cost of wind energy: a lower turbine center of gravity; reduced machine complexity; and better scalability to very large sizes. A lower center of gravity means improved stability afloat and lower gravitational fatigue loads. Additionally, the drivetrain on a VAWT is at or near the surface, potentially making maintenance easier and less time-consuming. Fewer parts, lower fatigue loads and simpler maintenance all lead to reduced maintenance costs."
A 24-unit VAWT demonstration plot was installed in southern California in the early 2010s by Caltech aeronautical professor John Dabiri. His design was incorporated in a 10-unit generating farm installed in 2013 in the Alaskan village of Igiugig.
Dulas, Anglesey, received permission in March 2014 to install a prototype VAWT on the breakwater at Port Talbot waterside. The turbine is a new design, supplied by Wales-based C-FEC (Swansea), and will be operated for a two-year trial. This VAWT incorporates a wind shield which blocks the wind from the advancing blades, and thus requires a wind-direction sensor and a positioning mechanism, as opposed to the "egg-beater" types of VAWTs discussed above.
StrongWind, a Canadian based company produces a patented urban VAWT which has been installed in several Canadian and international locations as of 2023.
Architect Michael Reynolds (known for his Earthship house designs) developed a 4th-generation VAWT named "Dynasphere". It has two 1.5 kW generators and can produce electricity at very low speeds.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{W}"
},
{
"math_id": 1,
"text": "\\vec{U}"
},
{
"math_id": 2,
"text": "-\\vec{\\omega }\\times\\vec{R}"
},
{
"math_id": 3,
"text": "\\vec{W}=\\vec{U}+\\left( -\\vec{\\omega }\\times\\vec{R} \\right)"
},
{
"math_id": 4,
"text": "\\theta =0{}^\\circ "
},
{
"math_id": 5,
"text": "\\theta =180{}^\\circ "
},
{
"math_id": 6,
"text": "\\theta "
},
{
"math_id": 7,
"text": "\\alpha "
},
{
"math_id": 8,
"text": "V_t=R \\omega + U\\cos(\\theta) "
},
{
"math_id": 9,
"text": "V_n=U \\sin(\\theta) "
},
{
"math_id": 10,
"text": " W= \\sqrt{V_t^2+V_n^2} "
},
{
"math_id": 11,
"text": "\\lambda =(\\omega R) /U"
},
{
"math_id": 12,
"text": "W=U\\sqrt{1+2\\lambda \\cos \\theta +\\lambda ^{2}}"
},
{
"math_id": 13,
"text": " \\alpha = \\tan^{-1} \\left( \\frac{V_n}{V_t} \\right) "
},
{
"math_id": 14,
"text": "\\alpha =\\tan ^{-1}\\left( \\frac{\\sin \\theta }{\\cos \\theta +\\lambda } \\right)"
},
{
"math_id": 15,
"text": "C_{L}=\\frac{F_L}{{1}/{2}\\;\\rho AW^{2}}\\text{ };\\text{ }C_{D}=\\frac{D}{{1}/{2}\\;\\rho AW^{2}}\\text{ };\\text{ }C_{T}=\\frac{T}{{1}/{2}\\;\\rho AU^{2}R}\\text{ };\\text{ }C_{N}=\\frac{N}{{1}/{2}\\;\\rho AU^{2}}"
},
{
"math_id": 16,
"text": " P=\\frac{1}{2}C_{p}\\rho A\\nu^{3} "
},
{
"math_id": 17,
"text": "C_{p}"
},
{
"math_id": 18,
"text": "\\rho"
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "\\nu"
}
]
| https://en.wikipedia.org/wiki?curid=67378517 |
67378608 | Envy minimization | Problem in computer science and operations research
In computer science and operations research, the envy minimization problem is the problem of allocating discrete items among agents with different valuations over the items, such that the amount of envy is as small as possible.
Ideally, from a fairness perspective, one would like to find an envy-free item allocation - an allocation in which no agent envies another agent. That is: no agent prefers the bundle allocated to another agent. However, with indivisible items this might be impossible. One approach for coping with this impossibility is to turn the problem to an optimization problem, in which the loss function is a function describing the amount of envy. In general, this optimization problem is NP-hard, since even deciding whether an envy-free allocation exists is equivalent to the partition problem. However, there are optimization algorithms that can yield good results in practice.
Defining the amount of envy.
There are several ways to define the objective function (the amount of envy) for minimization. Some of them are:
Minimizing the envy-ratio.
With "general valuations," any deterministic algorithm that minimizes the maximum envy-ratio requires a number of queries which is exponential in the number of goods in the worst case.
With "additive and identical valuations":
With "additive and different valuations":
Distributed envy minimization.
In some cases, it is required to compute an envy-minimizing allocation in a distributed manner, i.e., each agent should compute his/her own allocation, in a way that guarantees that the resulting allocation is consistent. This problem can be solved by presenting it as an Asymmetric distributed constraint optimization problem (ADCOP) as follows.
The problem can be solved using the following local search algorithm.
Online minimization of the envy-difference.
Sometimes, the items to allocate are not available all at once, but rather arrive over time in an online fashion. Each arriving item must be allocated immediately. An example application is the problem of a food bank, which accepts food donations and must allocate them immediately to charities.
Benade, Kazachkov, Procaccia and Psomas consider the problem of minimizing the maximum "envy-difference", where the valuations are normalized such that the value of each item is in [0,1]. Note that in the offline variant of this setting, it is easy to find an allocation in which the maximum envy-difference is 1 (such an allocation is called EF1; see envy-free item allocation for more details). However, in the online variant the envy-difference increases with the number of items. They show an algorithm in which the envy after "T" items is in formula_1. Jiang, Kulkarni and Singla improve their envy bound using an algorithm for "online two-dimensional discrepancy minimization".
Other settings.
Other settings in which envy minimization was studied are: | [
{
"math_id": 0,
"text": "\\text{EnvyRatio}(X,i,j) := \\max\\left(1, {u_i(X_j)\\over u_i(X_i)}\\right)"
},
{
"math_id": 1,
"text": "O(\\sqrt{T/n})"
}
]
| https://en.wikipedia.org/wiki?curid=67378608 |
67379953 | 2 Chronicles 21 | Second Book of Chronicles, chapter 21
2 Chronicles 21 is the twenty-first chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Jehoram, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 20 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Jehoram, king of Judah (21:1–7).
This section contains the record of Jehoram's reign, but uniquely also has the records of the king's brothers (verses 2–4), which only occurs with David's family in Kings or Chronicles. As soon as Jehoram had established his power, he brutally murdered all his brothers, who were in charge of fortified cities, and several notables, most likely driven by his lust for control or fear of losing it. However, the divine wrath was restrained for the kingdom, because of the promise to David (1 Chronicles 17:1–15; cf. 2 Kings 8:17–19).
"Jehoram was thirty-two years old when he became king, and he reigned eight years in Jerusalem."
Edom and Libnah Rebel (21:8–11).
The text gives an unclear description of whether Jehoram managed to defeat the Edomites, stating only that Edom and Libnah successfully revolted against the kingdom of Judah (verse 10), which should have given ample warning to Jehoram to repent of his sins; instead he continued to establish idol worship in Judah.
Elijah’s Letter to Jehoram (21:12–15).
In Jehoram's regnal record, not a single prophet appears in flesh and blood; the prophetic warning only came in a letter sent by Elijah, who was active in the northern kingdom. Elijah's threats of divine punishment for Jehoram (verses 14–15) were all fulfilled and fell on Jehoram's people, family, property and own body (verses 16–19).
Death of Jehoram (21:16–20).
The punishment for Jehoram came from the south-western neighbors of the kingdom ("the Arabs who are near the Ethiopians"; cf. 2 Chronicles 14:9), and with only the youngest son of Jehoram left alive, the Davidic line was on the brink of total eradication. The Chronicler extensively describes Jehoram's final punishment in the form of a painful, incurable, yet indefinable sickness (probably a stomach ulcer leading to a chronic rectal prolapse).
"He was thirty-two when he began to reign, and he reigned eight years in Jerusalem. And he departed with no one’s regret. They buried him in the City of David, but not in the tombs of the kings."
Verse 20.
The repetition of Jehoram's age and length of reign (cf. verse 5) indicates a transcription from another source.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67379953 |
6738151 | Faulhaber's formula | Expression for sums of powers
In mathematics, Faulhaber's formula, named after the early 17th century mathematician Johann Faulhaber, expresses the sum of the "p"-th powers of the first "n" positive integers
formula_0
as a polynomial in "n". In modern notation, Faulhaber's formula is
formula_1
Here, formula_2 is the binomial coefficient "p + 1 choose r", and the "Bj" are the Bernoulli numbers with the convention that formula_3.
The result: Faulhaber's formula.
Faulhaber's formula concerns expressing the sum of the "p"-th powers of the first "n" positive integers
formula_0
as a ("p" + 1)th-degree polynomial function of "n".
The first few examples are well known. For "p" = 0, we have
formula_4
For "p" = 1, we have the triangular numbers
formula_5
For "p" = 2, we have the square pyramidal numbers
formula_6
The coefficients of Faulhaber's formula in its general form involve the Bernoulli numbers "Bj". The Bernoulli numbers begin
formula_7
where here we use the convention that formula_3. The Bernoulli numbers have various definitions (see Bernoulli number#Definitions), such as that they are the coefficients of the exponential generating function
formula_8
Then Faulhaber's formula is that
formula_9
Here, the "Bj" are the Bernoulli numbers as above, and
formula_10
is the binomial coefficient "p + 1 choose k".
Examples.
So, for example, one has for "p" = 4,
formula_11
The first seven examples of Faulhaber's formula are
formula_12
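The formula lends itself to direct computation in exact rational arithmetic. The following minimal sketch generates the Bernoulli numbers (with the convention formula_3) by the standard recurrence and then evaluates the sum, checking the result against the brute-force sum; all names are illustrative:

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(m):
    """Bernoulli numbers B_0..B_m with the convention B_1 = +1/2."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for j in range(1, m + 1):
        # recurrence sum_{k=0}^{j} C(j+1, k) B_k = 0 (B_1 = -1/2 convention)
        B[j] = -sum(comb(j + 1, k) * B[k] for k in range(j)) / (j + 1)
    if m >= 1:
        B[1] = Fraction(1, 2)  # switch to the B_1 = +1/2 convention used above
    return B

def faulhaber_sum(p, n):
    """Sum of k^p for k = 1..n via Faulhaber's formula."""
    B = bernoulli_plus(p)
    return sum(Fraction(comb(p + 1, k)) * B[k] * n ** (p + 1 - k)
               for k in range(p + 1)) / (p + 1)

# sanity check against the direct sum
assert all(faulhaber_sum(p, n) == sum(k ** p for k in range(1, n + 1))
           for p in range(8) for n in range(1, 20))
```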
History.
Ancient period.
The history of the problem begins in antiquity and coincides with that of some of its special cases. The case formula_13 coincides with the calculation of the arithmetic series, the sum of the first formula_14 values of an arithmetic progression. This problem is quite simple, but the case already known by the Pythagorean school for its connection with triangular numbers is historically interesting:
formula_15, the polynomial formula_16 calculating the sum of the first formula_14 natural numbers.
For formula_17 the first cases encountered in the history of mathematics are:
formula_18, the polynomial formula_19 calculating the sum of the first formula_20 successive odds forming a square. This property was probably well known by the Pythagoreans themselves who, in constructing their figured numbers, had to add each time a gnomon consisting of an odd number of points to obtain the next perfect square.
formula_21, the polynomial formula_22 calculating the sum of the squares of the successive integers. This property is demonstrated in "Spirals", a work of Archimedes.
formula_23, the polynomial formula_24 calculating the sum of the cubes of the successive integers. This is a corollary of a theorem of Nicomachus of Gerasa.
The set formula_25 of cases, to which the two preceding polynomials belong, constitutes the classical problem of powers of successive integers.
Middle period.
Over time, many other mathematicians became interested in the problem and made various contributions to its solution. These include Aryabhata, Al-Karaji, Ibn al-Haytham, Thomas Harriot, Johann Faulhaber, Pierre de Fermat and Blaise Pascal, who recursively solved the problem of the sum of powers of successive integers by considering an identity that allowed one to obtain a polynomial of degree formula_26 from those of lower degree already known.
Faulhaber's formula is also called Bernoulli's formula. Faulhaber did not know the properties of the coefficients later discovered by Bernoulli. Rather, he knew at least the first 17 cases, as well as the existence of the Faulhaber polynomials for odd powers described below.
In 1713, Jacob Bernoulli published under the title "Summae Potestatum" an expression of the sum of the p powers of the "n" first integers as a ("p" + 1)th-degree polynomial function of "n", with coefficients involving numbers "Bj", now called Bernoulli numbers:
formula_27
Introducing also the first two Bernoulli numbers (which Bernoulli did not), the previous formula becomes
formula_28
using the Bernoulli number of the second kind for which formula_29, or
formula_30
using the Bernoulli number of the first kind for which formula_31
A rigorous proof of these formulas and Faulhaber's assertion that such formulas would exist for all odd powers took until Carl Jacobi (1834), two centuries later. Jacobi benefited from the progress of mathematical analysis using the development in infinite series of an exponential function generating Bernoulli numbers.
Modern period.
In 1982 A.W.F. Edwards published an article in which he showed that Pascal's identity can be expressed by means of triangular matrices containing Pascal's triangle deprived of the last element of each line:
formula_32
The example is limited by the choice of a fifth-order matrix but is easily extendable to higher orders. The equation can be written as formula_33, and multiplying both sides of the equation on the left by formula_34, the inverse of the matrix A, we obtain formula_35, which allows one to arrive directly at the polynomial coefficients without directly using the Bernoulli numbers. Other authors after Edwards, dealing with various aspects of the power sum problem, take the matrix path, developing in their articles useful tools such as the Vandermonde vector. Other researchers continue to explore through the traditional analytic route and generalize the problem of the sum of successive integers to any geometric progression.
Proof with exponential generating function.
Let
formula_36
denote the sum under consideration for integer formula_37
Define the following exponential generating function with (initially) indeterminate formula_38
formula_39
We find
formula_40
This is an entire function in formula_38 so that formula_38 can be taken to be any complex number.
We next recall the exponential generating function for the Bernoulli polynomials formula_41
formula_42
where formula_43 denotes the Bernoulli number with the convention formula_44. This may be converted to a generating function with the convention formula_45 by the addition of formula_46 to the coefficient of formula_47 in each formula_41 (formula_48 does not need to be changed):
formula_49
It follows immediately that
formula_50
for all formula_51.
Faulhaber polynomials.
The term Faulhaber polynomials is used by some authors to refer to another polynomial sequence related to that given above.
Write
formula_52
Faulhaber observed that if "p" is odd then formula_53 is a polynomial function of "a".
For "p" = 1, it is clear that
formula_54
For "p" = 3, the result that
formula_55
is known as Nicomachus's theorem.
Further, we have
formula_56
More generally,
formula_57
Some authors call the polynomials in "a" on the right-hand sides of these identities "Faulhaber polynomials". These polynomials are divisible by "a"2 because the Bernoulli number "B""j" is 0 for odd "j" > 1.
Inversely, writing for simplicity formula_58, we have
formula_59
and generally
formula_60
Faulhaber also knew that if a sum for an odd power is given by
formula_61
then the sum for the even power just below is given by
formula_62
Note that the polynomial in parentheses is the derivative of the polynomial above with respect to "a".
Since "a" = "n"("n" + 1)/2, these formulae show that for an odd power (greater than 1), the sum is a polynomial in "n" having factors "n"2 and ("n" + 1)2, while for an even power the polynomial has factors "n", "n" + 1/2 and "n" + 1.
Expressing products of power sums as linear combinations of power sums.
Products of two (and thus by iteration, several) power sums formula_63 can be written as linear combinations of power sums with either all degrees even or all degrees odd, depending on the total degree of the product as a polynomial in formula_64, e.g. formula_65.
Note that the sums of coefficients must be equal on both sides, as can be seen by putting formula_66, which makes all the formula_67 equal to 1. Some general formulae include:
formula_68
Note that in the second formula, for even formula_69 the term corresponding to formula_70 is different from the other terms in the sum, while for odd formula_69, this additional term vanishes because of formula_71.
Matrix form.
Faulhaber's formula can also be written in a form using matrix multiplication.
Take the first seven examples
formula_72
Writing these polynomials as a product between matrices gives
formula_73
where
formula_74
Surprisingly, inverting the matrix of polynomial coefficients yields something more familiar:
formula_75
In the inverted matrix, Pascal's triangle can be recognized, without the last element of each row, and with alternating signs.
Let formula_76 be the matrix obtained from formula_77 by changing the signs of the entries in odd diagonals, that is by replacing formula_78 by formula_79, let formula_80 be the matrix obtained from formula_81 with a similar transformation, then
formula_82
and
formula_83
Also
formula_84
This is because it is evident that formula_85, and that therefore polynomials of degree formula_86 of the form formula_87, after subtracting the monomial difference formula_88, become formula_89.
This is true for every order, that is, for each positive integer m, one has formula_90 and formula_91
Thus, it is possible to obtain the coefficients of the polynomials of the sums of powers of successive integers without resorting to the Bernoulli numbers, by inverting the matrix easily obtained from Pascal's triangle.
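As a minimal illustration, the following sketch builds the matrix formula_76 for m = 7 (rows of Pascal's triangle with the last element of each row dropped) and inverts it exactly over the rationals, recovering the coefficient matrix formula_80 shown above; the helper names are illustrative:

```python
from fractions import Fraction
from math import comb

def pascal_A(m):
    """Rows of Pascal's triangle with the last element of each row dropped."""
    return [[Fraction(comb(i + 1, j)) if j <= i else Fraction(0)
             for j in range(m)] for i in range(m)]

def invert(M):
    """Exact Gauss-Jordan inversion over the rationals; M is lower
    triangular with nonzero diagonal, so no pivoting is needed."""
    m = len(M)
    A = [row[:] + [Fraction(1 if i == j else 0) for j in range(m)]
         for i, row in enumerate(M)]
    for c in range(m):
        piv = A[c][c]
        A[c] = [x / piv for x in A[c]]
        for r in range(m):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [row[m:] for row in A]

Ainv = invert(pascal_A(7))
# row p holds the coefficients of n, n^2, ... in sum_{k=0}^{n-1} k^p
print(Ainv[2])  # -> 1/6, -1/2, 1/3, 0, ...  i.e. n/6 - n^2/2 + n^3/3
```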
Variations.
where formula_99 can be interpreted as "negative" Bernoulli numbers with formula_100.
Interpreting the Stirling numbers of the second kind, formula_110, as the number of set partitions of formula_111 into formula_92 parts, the identity has a direct combinatorial proof since both sides count the number of functions formula_112 with formula_113 maximal. The index of summation on the left-hand side represents formula_114, while the index on the right-hand side represents the number of elements in the image of f.
formula_115
This in particular yields the examples below – e.g., take "k" = 1 to get the first example. In a similar fashion we also find
formula_116
formula_118.
Relationship to Riemann zeta function.
Using formula_119, one can write
formula_120
If we consider the generating function formula_101 in the large formula_96 limit for formula_121, then we find
formula_122
Heuristically, this suggests that
formula_123
This result agrees with the value of the Riemann zeta function formula_124 for negative integers formula_125 on appropriately analytically continuing formula_126.
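This agreement can be checked symbolically; below is a minimal sketch using SymPy (only Bernoulli numbers of index at least 2 occur here, so the two sign conventions for the first Bernoulli number are immaterial):

```python
from sympy import zeta, bernoulli, Rational

# zeta(-p) = -B_{p+1} / (p + 1) for p >= 1
for p in range(1, 12):
    assert zeta(-p) == -bernoulli(p + 1) / Rational(p + 1)
```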
Umbral form.
In the umbral calculus, one treats the Bernoulli numbers formula_127, formula_128, formula_129, ... "as if" the index "j" in formula_130 were actually an exponent, and so "as if" the Bernoulli numbers were powers of some object "B".
Using this notation, Faulhaber's formula can be written as
formula_131
Here, the expression on the right must be understood by expanding out to get terms formula_130 that can then be interpreted as the Bernoulli numbers. Specifically, using the binomial theorem, we get
formula_132
A derivation of Faulhaber's formula using the umbral form is available in "The Book of Numbers" by John Horton Conway and Richard K. Guy.
Classically, this umbral form was considered as a notational convenience. In the modern umbral calculus, on the other hand, this is given a formal mathematical underpinning. One considers the linear functional "T" on the vector space of polynomials in a variable "b" given by formula_133 Then one can say
formula_134
A general formula.
The series formula_135 as a function of m is often abbreviated as formula_136. Beardon (see External Links) has published formulas for powers of formula_136. For example, Beardon 1996 stated this general formula for powers of formula_137, which shows that formula_138 raised to a power N can be written as a linear sum of terms formula_139... For example, by taking N to be 2, then 3, then 4 in Beardon's formula we get the identities formula_140.
Other formulae, such as formula_141 and formula_142 are known but no general formula for formula_143, where m, N are positive integers, has been published to date. In an unpublished paper by Derby (2019) the following formula was stated and proved:
formula_144.
This can be calculated in matrix form, as described above. In the case when m = 1 it replicates Beardon's formula for formula_145. When m = 2 and N = 2 or 3 it generates the given formulas for formula_146 and formula_147.
formula_148 and formula_149.
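A direct recursive implementation of the stated formula is straightforward. The following minimal sketch takes the recursion at face value (using S_m^0 = 1 as base case, an assumption) and checks it against the brute-force power of the sum over small ranges only, since the naive recursion is exponential without memoization:

```python
from math import comb

def S(m, n):
    """Brute-force power sum 1^m + 2^m + ... + n^m."""
    return sum(k**m for k in range(1, n + 1))

def S_power(m, N, n):
    """(S_m(n))^N via the stated recursion, with S_m^0 = 1 as base case."""
    if N == 0:
        return 1
    return sum((-1)**(k - 1) * comb(N, k)
               * sum(r**(m * k) * S_power(m, N - k, r) for r in range(1, n + 1))
               for k in range(1, N + 1))

assert all(S_power(m, N, n) == S(m, n)**N
           for m in range(1, 4) for N in range(1, 4) for n in range(1, 8))
```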
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{k=1}^n k^p = 1^p + 2^p + 3^p + \\cdots + n^p"
},
{
"math_id": 1,
"text": " \\sum_{k=1}^n k^{p} = \\frac{1}{p+1} \\sum_{r=0}^p \\binom{p+1}{r} B_r n^{p-r+1} ."
},
{
"math_id": 2,
"text": "\\binom{p+1}{r}"
},
{
"math_id": 3,
"text": "B_1 = +\\frac12"
},
{
"math_id": 4,
"text": "\\sum_{k=1}^n k^0 = \\sum_{k=1}^n 1 = n ."
},
{
"math_id": 5,
"text": "\\sum_{k=1}^n k^1 = \\sum_{k=1}^n k = \\frac{n(n+1)}{2} = \\frac12(n^2 + n) ."
},
{
"math_id": 6,
"text": "\\sum_{k=1}^n k^2 = \\frac{n(n+1)(2n+1)}{6} = \\frac13(n^3 + \\tfrac32 n^2 + \\tfrac12n) ."
},
{
"math_id": 7,
"text": " \\begin{align}\nB_0 &= 1 & B_1 &= \\tfrac12 & B_2 &= \\tfrac16 & B_3 &= 0 \\\\\nB_4 &= -\\tfrac{1}{30} & B_5 &= 0 & B_6 &= \\tfrac{1}{42} & B_7 &= 0 ,\n\\end{align} "
},
{
"math_id": 8,
"text": " \\frac{t}{1 - \\mathrm{e}^{-t}} = \\frac{t}{2} \\left( \\operatorname{coth} \\frac{t}{2} +1 \\right) = \\sum_{k=0}^\\infty B_k \\frac{t^k}{k!} . "
},
{
"math_id": 9,
"text": " \\sum_{k=1}^n k^{p} = \\frac{1}{p+1} \\sum_{k=0}^p \\binom{p+1}{k} B_k n^{p-k+1} ."
},
{
"math_id": 10,
"text": "\\binom{p+1}{k} = \\frac{(p+1)!}{(p+1-k)!\\,k!} = \\frac{(p+1)p(p-1) \\cdots (p-k+3)(p-k+2)}{k(k-1)(k-2)\\cdots2\\cdot 1}"
},
{
"math_id": 11,
"text": "\\begin{align} 1^4 + 2^4 + 3^4 + \\cdots + n^4 &= \\frac{1}{5} \\sum_{j=0}^4 {5 \\choose j} B_j n^{5-j}\\\\\n&= \\frac{1}{5} \\left(B_0 n^5+5B_1n^4+10B_2n^3+10B_3n^2+5B_4n\\right)\\\\\n&= \\frac{1}{5} \\left(n^5 + \\tfrac{5}{2}n^4+ \\tfrac{5}{3}n^3- \\tfrac{1}{6}n \\right) .\\end{align}"
},
{
"math_id": 12,
"text": " \\begin{align}\n\\sum_{k=1}^n k^0 &=\t\\frac{1}{1} \\, \\big(n \\big)\t\\\\\n\\sum_{k=1}^n k^1 &=\t\\frac{1}{2} \\, \\big(n^2 + \\tfrac{2}{2} n \\big)\t\\\\\n\\sum_{k=1}^n k^2 &=\t\\frac{1}{3} \\, \\big(n^3 + \\tfrac{3}{2} n^2 + \\tfrac{ 3}{6} n \\big)\t\\\\\n\\sum_{k=1}^n k^3 &=\t\\frac{1}{4} \\, \\big(n^4 + \\tfrac{4}{2} n^3 + \\tfrac{ 6}{6} n^2 + 0n \\big)\t\\\\\n\\sum_{k=1}^n k^4 &=\t\\frac{1}{5} \\, \\big(n^5 + \\tfrac{5}{2} n^4 + \\tfrac{10}{6} n^3 + 0n^2 - \\tfrac{ 5}{30} n \\big)\t\\\\\n\\sum_{k=1}^n k^5 &=\t\\frac{1}{6} \\, \\big(n^6 + \\tfrac{6}{2} n^5 + \\tfrac{15}{6} n^4 + 0n^3 - \\tfrac{15}{30} n^2 + 0n \\big)\t\\\\\n\\sum_{k=1}^n k^6 &=\t\\frac{1}{7} \\, \\big(n^7 + \\tfrac{7}{2} n^6 + \\tfrac{21}{6} n^5 + 0n^4 - \\tfrac{35}{30} n^3 + 0n^2 + \\tfrac{7}{42}n \\big) .\n\\end{align}"
},
{
"math_id": 13,
"text": "p = 1 "
},
{
"math_id": 14,
"text": " n "
},
{
"math_id": 15,
"text": "1+2+\\dots+n=\\frac{1}{2}n^2+\\frac{1}{2}n, "
},
{
"math_id": 16,
"text": " S_{1,1}^1(n)"
},
{
"math_id": 17,
"text": " m> 1, "
},
{
"math_id": 18,
"text": "1+3+\\dots+2n-1=n^2, "
},
{
"math_id": 19,
"text": "S_{1,2}^1(n)"
},
{
"math_id": 20,
"text": " n"
},
{
"math_id": 21,
"text": "1^2+2^2+\\ldots+n^2=\\frac{1}{3}n^3+\\frac{1}{2}n^2+\\frac{1}{6}n, "
},
{
"math_id": 22,
"text": "S_{1,1}^2(n) "
},
{
"math_id": 23,
"text": "1^3+2^3+\\ldots+n^3=\\frac{1}{4}n^4+\\frac{1}{2}n^3+\\frac{1}{4}n^2, "
},
{
"math_id": 24,
"text": "S_{1,1}^3(n)"
},
{
"math_id": 25,
"text": "S_{1,1}^m(n) "
},
{
"math_id": 26,
"text": " m + 1 "
},
{
"math_id": 27,
"text": " \\sum_{k=1}^n k^{p} = \\frac{n^{p+1}}{p+1}+\\frac{1}{2}n^p+{1 \\over p+1}\n\\sum_{j=2}^p {p+1 \\choose j} B_j n^{p+1-j}."
},
{
"math_id": 28,
"text": "\\sum_{k=1}^n k^p = {1 \\over p+1} \\sum_{j=0}^p {p+1 \\choose j} B_j n^{p+1-j},"
},
{
"math_id": 29,
"text": "B_1=\\frac{1}{2}"
},
{
"math_id": 30,
"text": "\\sum_{k=1}^n k^p = {1 \\over p+1} \\sum_{j=0}^p (-1)^j{p+1 \\choose j} B_j^- n^{p+1-j},"
},
{
"math_id": 31,
"text": "B_1^- =-\\frac{1}{2}."
},
{
"math_id": 32,
"text": "\\begin{pmatrix}\nn\\\\\nn^2\\\\\nn^3\\\\\nn^4\\\\\nn^5\\\\\n\\end{pmatrix}=\\begin{pmatrix}\n1 &0 &0 &0 &0\\\\\n1&2&0 &0 &0 \\\\\n1&3&3&0 &0 \\\\\n1&4&6&4 &0 \\\\\n1&5&10&10&5\n\\end{pmatrix}\\begin{pmatrix}\nn\\\\\n\\sum_{k=0}^{n-1} k^1\\\\\n\\sum_{k=0}^{n-1} k^2\\\\\n\\sum_{k=0}^{n-1} k^3\\\\\n\\sum_{k=0}^{n-1} k^4\\\\\n\\end{pmatrix}"
},
{
"math_id": 33,
"text": "\\vec{N}=A\\vec{S} "
},
{
"math_id": 34,
"text": " A^{-1} "
},
{
"math_id": 35,
"text": " A^{-1}\\vec{N}=\\vec{S} "
},
{
"math_id": 36,
"text": "\nS_{p}(n)=\\sum_{k=1}^{n} k^p,\n"
},
{
"math_id": 37,
"text": "p\\ge 0."
},
{
"math_id": 38,
"text": "z"
},
{
"math_id": 39,
"text": "\nG(z,n)=\\sum_{p=0}^{\\infty} S_{p}(n) \\frac{1}{p!}z^p.\n"
},
{
"math_id": 40,
"text": "\n\\begin{align}\nG(z,n) =& \\sum_{p=0}^{\\infty} \\sum_{k=1}^{n} \\frac{1}{p!}(kz)^p\n=\\sum_{k=1}^{n}e^{kz}=e^{z}\\cdot\\frac{1-e^{nz}}{1-e^{z}},\\\\\n=& \\frac{1-e^{nz}}{e^{-z}-1}.\n\\end{align}\n"
},
{
"math_id": 41,
"text": "B_j(x)"
},
{
"math_id": 42,
"text": "\n\\frac{ze^{zx}}{e^{z}-1}=\\sum_{j=0}^{\\infty} B_j(x) \\frac{z^j}{j!},\n"
},
{
"math_id": 43,
"text": "B_j=B_j(0)"
},
{
"math_id": 44,
"text": "B_{1}=-\\frac{1}{2}"
},
{
"math_id": 45,
"text": "B^+_1=\\frac{1}{2}"
},
{
"math_id": 46,
"text": "j"
},
{
"math_id": 47,
"text": "x^{j-1}"
},
{
"math_id": 48,
"text": "B_0"
},
{
"math_id": 49,
"text": "\n\\begin{align}\n\\sum_{j=0}^{\\infty} B^+_j(x) \\frac{z^j}{j!} =& \\frac{ze^{zx}}{e^{z}-1}+\\sum_{j=1}^{\\infty}jx^{j-1}\\frac{z^j}{j!}\\\\\n=& \\frac{ze^{zx}}{e^{z}-1}+\\sum_{j=1}^{\\infty}x^{j-1}\\frac{z^j}{(j-1)!}\\\\\n=& \\frac{ze^{zx}}{e^{z}-1}+ze^{zx}\\\\\n=& \\frac{ze^{zx}+ze^{z}e^{zx}-ze^{zx}}{e^{z}-1}\\\\\n=& \\frac{ze^{zx}}{1-e^{-z}}\n\\end{align}\n"
},
{
"math_id": 50,
"text": "\nS_p(n)=\\frac{B^+_{p+1}(n)-B^+_{p+1}(0)}{p+1}\n"
},
{
"math_id": 51,
"text": "p"
},
{
"math_id": 52,
"text": "a = \\sum_{k=1}^n k = \\frac{n(n+1)}{2} . "
},
{
"math_id": 53,
"text": "\\sum_{k=1}^n k^p"
},
{
"math_id": 54,
"text": "\\sum_{k=1}^n k^1 = \\sum_{k=1}^n k = \\frac{n(n+1)}{2} = a. "
},
{
"math_id": 55,
"text": "\\sum_{k=1}^n k^3 = \\frac{n^2(n+1)^2}{4} = a^2 "
},
{
"math_id": 56,
"text": " \\begin{align}\n\\sum_{k=1}^n k^5 &= \\frac{4a^3 - a^2}{3} \\\\\n\\sum_{k=1}^n k^7 &= \\frac{6a^4 -4a^3 + a^2}{3} \\\\\n\\sum_{k=1}^n k^9 &= \\frac{16a^5 - 20a^4 +12a^3 - 3a^2}{5} \\\\\n\\sum_{k=1}^n k^{11} &= \\frac{16a^6 - 32a^5 + 34a^4 - 20a^3 + 5a^2}{3}\n\\end{align} "
},
{
"math_id": 57,
"text": "\n\\sum_{k=1}^n k^{2m+1} = \\frac{1}{2^{2m+2}(2m+2)} \\sum_{q=0}^m \\binom{2m+2}{2q}\n(2-2^{2q})~ B_{2q} ~\\left[(8a+1)^{m+1-q}-1\\right].\n"
},
{
"math_id": 58,
"text": "s_j: = \\sum_{k=1}^n k^j"
},
{
"math_id": 59,
"text": " \\begin{align}\n4a^3 &= 3s_5 +s_3 \\\\\n8a^4 &= 4s_7+4s_5 \\\\\n16a^5 &= 5s_9+10s_7+s_5\n\\end{align} "
},
{
"math_id": 60,
"text": " 2^{m-1} a^m = \\sum_{j>0} \\binom{m}{2j-1} s_{2m-2j+1}."
},
{
"math_id": 61,
"text": "\\sum_{k=1}^n k^{2m+1} = c_1 a^2 + c_2 a^3 + \\cdots + c_m a^{m+1}"
},
{
"math_id": 62,
"text": "\\sum_{k=1}^n k^{2m} = \\frac{n+\\frac12}{2m+1}(2 c_1 a + 3 c_2 a^2+\\cdots + (m+1) c_m a^m)."
},
{
"math_id": 63,
"text": "s_{j_r}:=\\sum_{k=1}^n k^{j_r} "
},
{
"math_id": 64,
"text": "n "
},
{
"math_id": 65,
"text": "30 s_2s_4=-s_3+15s_5+16s_7 "
},
{
"math_id": 66,
"text": "n=1 "
},
{
"math_id": 67,
"text": "s_j "
},
{
"math_id": 68,
"text": " \\begin{align}\n(m+1)s_m^2 &= 2\\sum_{j=0}^{\\lfloor\\frac{m}2\\rfloor}\\binom{m+1}{2j}(2m+1-2j)B_{2j}s_{2m+1-2j}.\\\\\nm(m+1)s_ms_{m-1}&=m(m+1)B_ms_m+\\sum_{j=0}^{\\lfloor\\frac{m-1}2\\rfloor}\\binom{m+1}{2j}(2m+1-2j)B_{2j}s_{2m-2j}.\\\\ \n 2^{m-1} s_1^m &= \\sum_{j=1}^{\\lfloor\\frac{m+1}2\\rfloor} \\binom{m}{2j-1} s_{2m+1-2j}.\\end{align}"
},
{
"math_id": 69,
"text": "m "
},
{
"math_id": 70,
"text": "j=\\dfrac m2 "
},
{
"math_id": 71,
"text": "B_m=0 "
},
{
"math_id": 72,
"text": "\\begin{align}\n\\sum_{k=1}^n k^0 &= \\phantom{-}1n \\\\\n\\sum_{k=1}^n k^1 &= \\phantom{-}\\tfrac{1}{2}n+\\tfrac{1}{2}n^2 \\\\\n\\sum_{k=1}^n k^2 &= \\phantom{-}\\tfrac{1}{6}n+\\tfrac{1}{2}n^2+\\tfrac{1}{3}n^3 \\\\\n\\sum_{k=1}^n k^3 &= \\phantom{-}0n+\\tfrac{1}{4}n^2+\\tfrac{1}{2}n^3+\\tfrac{1}{4}n^4 \\\\\n\\sum_{k=1}^n k^4 &= -\\tfrac{1}{30}n+0n^2+\\tfrac{1}{3}n^3+\\tfrac{1}{2}n^4+\\tfrac{1}{5}n^5 \\\\\n\\sum_{k=1}^n k^5 &= \\phantom{-}0n-\\tfrac{1}{12}n^2+0n^3+\\tfrac{5}{12}n^4+\\tfrac{1}{2}n^5+\\tfrac{1}{6}n^6 \\\\\n\\sum_{k=1}^n k^6 &= \\phantom{-}\\tfrac{1}{42}n+0n^2-\\tfrac{1}{6}n^3+0n^4+\\tfrac{1}{2}n^5+\\tfrac{1}{2}n^6+\\tfrac{1}{7}n^7 .\n\\end{align}"
},
{
"math_id": 73,
"text": "\\begin{pmatrix}\n\\sum k^0 \\\\\n\\sum k^1 \\\\\n\\sum k^2 \\\\\n\\sum k^3 \\\\\n\\sum k^4 \\\\\n\\sum k^5 \\\\\n\\sum k^6 \n\\end{pmatrix} =\nG_7\n\\begin{pmatrix}\nn \\\\\nn^2 \\\\\nn^3 \\\\\nn^4 \\\\\nn^5 \\\\\nn^6 \\\\\nn^7 \n\\end{pmatrix} ,"
},
{
"math_id": 74,
"text": " G_7 = \\begin{pmatrix}\n1& 0& 0& 0& 0&0& 0\\\\\n{1\\over 2}& {1\\over 2}& 0& 0& 0& 0& 0\\\\\n{1\\over 6}& {1\\over 2}&{1\\over 3}& 0& 0& 0& 0\\\\\n0& {1\\over 4}& {1\\over 2}& {1\\over 4}& 0&0& 0\\\\\n-{1\\over 30}& 0& {1\\over 3}& {1\\over 2}& {1\\over 5}&0& 0\\\\\n0& -{1\\over 12}& 0& {5\\over 12}& {1\\over 2}& {1\\over 6}& 0\\\\\n{1\\over 42}& 0& -{1\\over 6}& 0& {1\\over 2}&{1\\over 2}& {1\\over 7}\n\\end{pmatrix} . "
},
{
"math_id": 75,
"text": "\nG_7^{-1}=\\begin{pmatrix}\n 1& 0& 0& 0& 0& 0& 0\\\\\n-1& 2& 0& 0& 0& 0& 0\\\\\n 1& -3& 3& 0& 0& 0& 0\\\\\n-1& 4& -6& 4& 0& 0& 0\\\\\n 1& -5& 10& -10& 5& 0& 0\\\\\n-1& 6& -15& 20& -15& 6& 0\\\\\n 1& -7& 21& -35& 35& -21& 7\\\\\n\\end{pmatrix} = \\overline{A}_7\n"
},
{
"math_id": 76,
"text": "A_7"
},
{
"math_id": 77,
"text": "\\overline{A}_7"
},
{
"math_id": 78,
"text": "a_{i,j}"
},
{
"math_id": 79,
"text": "(-1)^{i+j} a_{i,j}"
},
{
"math_id": 80,
"text": "\\overline{G}_7"
},
{
"math_id": 81,
"text": "G_7"
},
{
"math_id": 82,
"text": "\nA_7=\\begin{pmatrix}\n1& 0& 0& 0& 0& 0& 0\\\\\n1& 2& 0& 0& 0& 0& 0\\\\\n1& 3& 3& 0& 0& 0& 0\\\\\n1& 4& 6& 4& 0& 0& 0\\\\\n1& 5& 10& 10& 5& 0& 0\\\\\n1& 6& 15& 20& 15& 6& 0\\\\\n1& 7& 21& 35& 35& 21& 7\\\\\n\\end{pmatrix} "
},
{
"math_id": 83,
"text": " A_7^{-1}=\\begin{pmatrix}\n1& 0& 0& 0& 0&0& 0\\\\\n-{1\\over 2}& {1\\over 2}& 0& 0& 0& 0& 0\\\\\n{1\\over 6}& -{1\\over 2}&{1\\over 3}& 0& 0& 0& 0\\\\\n0& {1\\over 4}& -{1\\over 2}& {1\\over 4}& 0&0& 0\\\\\n-{1\\over 30}& 0& {1\\over 3}& -{1\\over 2}& {1\\over 5}&0& 0\\\\\n0& -{1\\over 12}& 0& {5\\over 12}& -{1\\over 2}& {1\\over 6}& 0\\\\\n{1\\over 42}& 0& -{1\\over 6}& 0& {1\\over 2}&-{1\\over 2}& {1\\over 7}\n\\end{pmatrix}=\\overline{G}_7 .\n"
},
{
"math_id": 84,
"text": "\\begin{pmatrix}\n\\sum_{k=0}^{n-1} k^0 \\\\\n\\sum_{k=0}^{n-1} k^1 \\\\\n\\sum_{k=0}^{n-1} k^2 \\\\\n\\sum_{k=0}^{n-1} k^3 \\\\\n\\sum_{k=0}^{n-1} k^4 \\\\\n\\sum_{k=0}^{n-1} k^5 \\\\\n\\sum_{k=0}^{n-1} k^6 \\\\\n\\end{pmatrix} =\n\\overline{G}_7\\begin{pmatrix}\nn \\\\\nn^2 \\\\\nn^3 \\\\\nn^4 \\\\\nn^5 \\\\\nn^6 \\\\\nn^7 \\\\\n\\end{pmatrix}\n"
},
{
"math_id": 85,
"text": "\\sum_{k=1}^{n}k^m-\\sum_{k=0}^{n-1}k^m=n^m "
},
{
"math_id": 86,
"text": "m+ 1"
},
{
"math_id": 87,
"text": "\\frac{1}{m+1}n^{m+1}+\\frac{1}{2}n^m+\\cdots "
},
{
"math_id": 88,
"text": " n^m "
},
{
"math_id": 89,
"text": "\\frac{1}{m+1}n^{m+1}-\\frac{1}{2}n^m+\\cdots "
},
{
"math_id": 90,
"text": "G_m^{-1} = \\overline{A}_m"
},
{
"math_id": 91,
"text": "\\overline{G}_m^{-1} = A_m."
},
{
"math_id": 92,
"text": "k"
},
{
"math_id": 93,
"text": "p-k"
},
{
"math_id": 94,
"text": "\n\\sum_{k=1}^n k^p= \\sum_{k=0}^p \\frac1{k+1}{p \\choose k} B_{p-k} n^{k+1}.\n"
},
{
"math_id": 95,
"text": "n^p"
},
{
"math_id": 96,
"text": "n"
},
{
"math_id": 97,
"text": "1"
},
{
"math_id": 98,
"text": " \n\\begin{align}\n\\sum_{k=1}^n k^{p} &= \\frac{1}{p+1} \\sum_{k=0}^p \\binom{p+1}{k} (-1)^kB_k (n+1)^{p-k+1} \\\\\n&= \\sum_{k=0}^p \\frac{1}{k+1} \\binom{p}{k} (-1)^{p-k}B_{p-k} (n+1)^{k+1}, \n\\end{align}\n"
},
{
"math_id": 99,
"text": "(-1)^kB_k = B^-_k"
},
{
"math_id": 100,
"text": "B^-_1=-\\tfrac12"
},
{
"math_id": 101,
"text": "G(z,n)"
},
{
"math_id": 102,
"text": "\n\\begin{align}\nG(z,n) &= \\frac{e^{(n+1)z}}{e^z -1} - \\frac{e^z}{e^z -1}\\\\\n&= \\sum_{j=0}^{\\infty} \\left(B_j(n+1)-(-1)^j B_j\\right) \\frac{z^{j-1}}{j!}, \n\\end{align}\n"
},
{
"math_id": 103,
"text": "\n\\sum_{k=1}^n k^p = \\frac{1}{p+1} \\left(B_{p+1}(n+1)-(-1)^{p+1}B_{p+1}\\right) = \\frac{1}{p+1}\\left(B_{p+1}(n+1)-B_{p+1}(1) \\right).\n"
},
{
"math_id": 104,
"text": "B_n = 0"
},
{
"math_id": 105,
"text": " n > 1"
},
{
"math_id": 106,
"text": " (-1)^{p+1} "
},
{
"math_id": 107,
"text": "p > 0"
},
{
"math_id": 108,
"text": "\n\\sum_{k=0}^n k^p = \\sum_{k=0}^p \\left\\{{p\\atop k}\\right\\}\\frac{(n+1)_{k+1}}{k+1},\n"
},
{
"math_id": 109,
"text": "\n\\sum_{k=1}^n k^p = \\sum_{k=1}^{p+1} \\left\\{{p+1\\atop k}\\right\\}\\frac{(n)_k}{k}.\n"
},
{
"math_id": 110,
"text": "\\left\\{{p+1\\atop k}\\right\\}"
},
{
"math_id": 111,
"text": "\\lbrack p+1\\rbrack"
},
{
"math_id": 112,
"text": "f:\\lbrack p+1\\rbrack \\to \\lbrack n\\rbrack "
},
{
"math_id": 113,
"text": "f(1)"
},
{
"math_id": 114,
"text": "k=f(1)"
},
{
"math_id": 115,
"text": "\\begin{align}(n+1)^{k+1}-1 &= \\sum_{m = 1}^n \\left((m+1)^{k+1} - m^{k+1}\\right)\\\\\n&= \\sum_{p = 0}^k \\binom{k+1}{p} (1^p+2^p+ \\dots + n^p).\\end{align}"
},
{
"math_id": 116,
"text": "\\begin{align}n^{k+1} = \\sum_{m = 1}^n \\left(m^{k+1} - (m-1)^{k+1}\\right) = \\sum_{p = 0}^k (-1)^{k+p}\\binom{k+1}{p} (1^p+2^p+ \\dots + n^p).\\end{align}"
},
{
"math_id": 117,
"text": "A_n(x)"
},
{
"math_id": 118,
"text": "\\sum_{n=1}^\\infty n^k x^n=\\frac{x}{(1-x)^{k+1}}A_k(x)"
},
{
"math_id": 119,
"text": "B_k=-k\\zeta(1-k)"
},
{
"math_id": 120,
"text": "\n\\sum\\limits_{k=1}^n k^p = \\frac{n^{p+1}}{p+1} - \\sum\\limits_{j=0}^{p-1}{p \\choose j}\\zeta(-j)n^{p-j}.\n"
},
{
"math_id": 121,
"text": "\\Re (z)<0"
},
{
"math_id": 122,
"text": "\n\\lim_{n\\rightarrow \\infty}G(z,n) = \\frac{1}{e^{-z}-1}=\\sum_{j=0}^{\\infty} (-1)^{j-1}B_j \\frac{z^{j-1}}{j!}\n"
},
{
"math_id": 123,
"text": "\n\\sum_{k=1}^{\\infty} k^p=\\frac{(-1)^{p} B_{p+1}}{p+1}.\n"
},
{
"math_id": 124,
"text": "\\zeta(s)=\\sum_{n=1}^{\\infty}\\frac{1}{n^s}"
},
{
"math_id": 125,
"text": "s=-p<0"
},
{
"math_id": 126,
"text": "\\zeta(s)"
},
{
"math_id": 127,
"text": "B^0 = 1"
},
{
"math_id": 128,
"text": "B^1 = \\frac{1}{2}"
},
{
"math_id": 129,
"text": "B^2 = \\frac{1}{6}"
},
{
"math_id": 130,
"text": "B^j"
},
{
"math_id": 131,
"text": " \\sum_{k=1}^n k^p = \\frac{1}{p+1} \\big( (B+n)^{p+1} - B^{p+1} \\big) . "
},
{
"math_id": 132,
"text": "\\begin{align}\n\\frac{1}{p+1} \\big( (B+n)^{p+1} - B^{p+1} \\big)\n&= {1 \\over p+1} \\left( \\sum_{k=0}^{p+1} \\binom{p+1}{k} B^k n^{p+1-k} - B^{p+1} \\right) \\\\\n&= {1 \\over p+1} \\sum_{k=0}^{p} \\binom{p+1}{j} B^k n^{p+1-k} .\n\\end{align}"
},
{
"math_id": 133,
"text": "T(b^j) = B_j."
},
{
"math_id": 134,
"text": "\\begin{align}\n\\sum_{k=1}^n k^p &= {1 \\over p+1} \\sum_{j=0}^p {p+1 \\choose j} B_j n^{p+1-j} \\\\\n&= {1 \\over p+1} \\sum_{j=0}^p {p+1 \\choose j} T(b^j) n^{p+1-j} \\\\\n&= {1 \\over p+1} T\\left(\\sum_{j=0}^p {p+1 \\choose j} b^j n^{p+1-j} \\right) \\\\\n&= T\\left({(b+n)^{p+1} - b^{p+1} \\over p+1}\\right).\n\\end{align}"
},
{
"math_id": 135,
"text": "1^m + 2^m + 3^m + . . n^m"
},
{
"math_id": 136,
"text": "S_m"
},
{
"math_id": 137,
"text": "S_1:\\;\\;\\;S_1^{\\;N} = \\frac{1}{2^N}\\sum_{r=0}^{N} {N \\choose r}S_{N+r}(1-(-1)^{N-r})"
},
{
"math_id": 138,
"text": "S_1"
},
{
"math_id": 139,
"text": "S_3,\\;\\; S_5,\\;\\; S_7"
},
{
"math_id": 140,
"text": "S_1^{\\;2} = S_3,\\;\\;S_1^{\\;3} = \\frac{1}{4}S_3 + \\frac{3}{4}S_5,\\;\\; S_1^{\\;4} = \\frac{1}{2}S_5 + \\frac{1}{2}S_7 "
},
{
"math_id": 141,
"text": "S_2^{\\;2} = \\frac{1}{3}S_4 + \\frac{2}{3}S_5 "
},
{
"math_id": 142,
"text": "S_2^{\\;3} = \\frac{1}{12}S_4 + \\frac{7}{12}S_6+ \\frac{1}{3}S_8"
},
{
"math_id": 143,
"text": "S_m^{\\;N}"
},
{
"math_id": 144,
"text": "S_m^{\\;N} = \\sum_{k=1}^{N}(-1)^{k-1} {N \\choose k}\\sum_{r=1}^{n}r^{mk}S_m^{\\;\\;N-k}(r)"
},
{
"math_id": 145,
"text": "S_1^{\\;N}"
},
{
"math_id": 146,
"text": "S_2^{\\;\\;2}"
},
{
"math_id": 147,
"text": "S_2^{\\;3}"
},
{
"math_id": 148,
"text": "S_2^{\\;4} = \\frac{1}{54}S_5 + \\frac{5}{18}S_7 + \\frac{5}{9}S_9 + \\frac{4}{27}S_{11} "
},
{
"math_id": 149,
"text": "S_6^{\\;3} = \\frac{1}{588}S_8 - \\frac{1}{42}S_{10} + \\frac{13}{84}S_{12}- \\frac{47}{98}S_{14}+ \\frac{17}{28}S_{16} + \\frac{19}{28}S_{18}+ \\frac{3}{49}S_{20}"
}
]
| https://en.wikipedia.org/wiki?curid=6738151 |
67383410 | Land cover maps | Land cover maps are tools that provide vital information about the Earth's land use and cover patterns. They aid policy development, urban planning, and forest and agricultural monitoring.
The systematic mapping of land cover patterns, including change detection, often follows two main approaches:
Image pre-processing is normally done through radiometric corrections, while image processing involves the application of either unsupervised or supervised classifications and vegetation indices quantification for land cover map production.
Supervised classification.
A supervised classification is a system of classification in which the user builds a series of randomly generated training datasets or spectral signatures representing different land-use and land-cover (LULC) classes and applies these datasets in machine learning models to predict and spatially classify LULC patterns and evaluate classification accuracies.
Algorithms.
Several machine learning algorithms have been developed for supervised classification.
Unsupervised classification.
Unsupervised classification is a system of classification in which single pixels or groups of pixels are automatically classified by the software without the user applying signature files or training data. The user defines only the number of classes, which the computer then generates automatically by grouping similar pixels into a single category using a clustering algorithm. This system of classification is mostly used in areas with no field observations or prior knowledge of the available land cover types.
Vegetation indices classification.
Vegetation indices classification is a system in which two or more spectral bands are combined through defined statistical algorithms to reflect the spatial properties of a vegetation cover.
Most of these indices make use of the relationship between red and near-infrared (NIR) bands of satellite images to generate vegetation properties. Several vegetation indices have been developed; scientists apply these via remote sensing to effectively classify forest cover and land use patterns.
These spectral indices use two or more bands to accurately acquire surface reflectance of land features, thereby improving classification accuracy. A computational sketch of one such index follows the definitions below.
formula_0
This index measures vegetation greenness, with values ranging between -1 and 1. High NDVI values represent dense vegetation cover, moderate NDVI values represent sparse vegetation cover, and low NDVI values correspond to non-vegetated areas (e.g., barren or bare lands).
formula_1
with default values usually taken as L = 0.5 and G = 2.5.
formula_2
formula_3
where both red and green range between 0 and 256.
formula_4
where red ranges between 0 and 256.
formula_5
formula_6
formula_7
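As a minimal computational sketch, the NDVI defined above can be evaluated pixel-wise on NIR and red reflectance arrays; the array values below are hypothetical, and a small epsilon guards against division by zero over dark pixels:

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Pixel-wise NDVI = (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# toy 2x2 scene with reflectances in [0, 1]
nir = np.array([[0.60, 0.55], [0.30, 0.05]])
red = np.array([[0.10, 0.12], [0.25, 0.04]])
print(ndvi(nir, red))  # dense vegetation ~0.7, sparse ~0.1, bare ground near 0
```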
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{NVDI} = {(\\text{NIR} - \\text{Red}) \\over (\\text{NIR} + \\text{Red})}"
},
{
"math_id": 1,
"text": "G \\times {(\\text{NIR}-\\text{Red}) \\over (\\text{NIR} + C_1 \\times \\text{Red} - C_2 \\times \\text{Blue} + L)} "
},
{
"math_id": 2,
"text": "\\text{SAVI} = (1 + L) \\times {(\\text{NIR} - \\text{Red}) \\over (\\text{NIR} + \\text{Red} + L)}"
},
{
"math_id": 3,
"text": "\\text{SI} = \\sqrt[]{(256-\\text{Green}) \\times (256-\\text{Red})}"
},
{
"math_id": 4,
"text": "\\text{AVI} = \\sqrt[3]{(\\text{NIR} + 1) \\times (\\text{256} - \\text{Red}) \\times (\\text{NIR} - \\text{Red})}"
},
{
"math_id": 5,
"text": "\\text{BSI} = {(\\text{NIR}+\\text{Green})- \\text{Red} \\over (\\text{NIR}+\\text{Green})+ \\text{Red}}"
},
{
"math_id": 6,
"text": "\\text{NDWI} = {\\text{NIR} - \\text{SWIR} \\over \\text{NIR} + \\text{SWIR}}"
},
{
"math_id": 7,
"text": "\\text{NDBI} = {\\text{SWIR} - \\text{NIR} \\over \\text{SWIR} + \\text{NIR}}"
}
]
| https://en.wikipedia.org/wiki?curid=67383410 |
6738820 | Élisabeth Lutz | French mathematician
Élisabeth Lutz (May 14, 1914 – July 31, 2008) was a French mathematician. The Nagell–Lutz theorem in Diophantine geometry describes the torsion points of elliptic curves; it is named after Lutz and Trygve Nagell, who both published it in the 1930s.
Lutz was a student of André Weil at the University of Strasbourg, from 1934 to 1938. She earned a thesis for her research under him, on elliptic curves over formula_0-adic fields. She completed her doctorate ("thèse d’état") on formula_0-adic Diophantine approximation at the University of Grenoble in 1951 under the supervision of Claude Chabauty; her dissertation was "Sur les approximations diophantiennes linéaires formula_0-adiques".
She became a professor of mathematics at the University of Grenoble.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
}
]
| https://en.wikipedia.org/wiki?curid=6738820 |
67390901 | Induction, bounding and least number principles | In first-order arithmetic, the induction principles, bounding principles, and least number principles are three related families of first-order principles, which may or may not hold in nonstandard models of arithmetic. These principles are often used in reverse mathematics to calibrate the axiomatic strength of theorems.
Definitions.
Informally, for a first-order formula of arithmetic formula_0 with one free variable, the induction principle for formula_1 expresses the validity of mathematical induction over formula_1, while the least number principle for formula_1 asserts that if formula_1 has a witness, it has a least one. For a formula formula_2 in two free variables, the bounding principle for formula_3 states that, for a fixed "bound" formula_4, if for every formula_5 there is formula_6 such that formula_7, then we can find a bound on the formula_6's.
Formally, the induction principle for formula_1 is the sentence:
formula_8
There is a similar strong induction principle for formula_1:
formula_9
The least number principle for formula_1 is the sentence:
formula_10
Finally, the bounding principle for formula_3 is the sentence:
formula_11
More commonly, we consider these principles not just for a single formula, but for a class of formulae in the arithmetical hierarchy. For example, formula_12 is the axiom schema consisting of formula_13 for every formula_14 formula formula_0 in one free variable.
Nonstandard models.
It may seem that the principles formula_13, formula_15, formula_16, formula_17 are trivial, and indeed, they hold for all formulae formula_1, formula_3 in the standard model of arithmetic formula_18. However, they become more relevant in nonstandard models. Recall that a nonstandard model of arithmetic has the form formula_19 for some linear order formula_20. In other words, it consists of an initial copy of formula_18, whose elements are called "finite" or "standard", followed by many copies of formula_21 arranged in the shape of formula_20, whose elements are called "infinite" or "nonstandard".
Now, considering the principles formula_13, formula_15, formula_16, formula_17 in a nonstandard model formula_22, we can see how they might fail. For example, the hypothesis of the induction principle formula_13 only ensures that formula_0 holds for all elements in the "standard" part of formula_22 - it may not hold for the nonstandard elements, which cannot be reached by iterating the successor operation from zero. Similarly, the bounding principle formula_17 might fail if the bound formula_23 is nonstandard, as then the (infinite) collection of formula_24 could be cofinal in formula_22.
Relations between the principles.
The following relations hold between the principles (over the weak base theory formula_25):
Over formula_31, Slaman proved that formula_32.
Reverse mathematics.
The induction, bounding and least number principles are commonly used in reverse mathematics and second-order arithmetic. For example, formula_33 is part of the definition of the subsystem formula_34 of second-order arithmetic. Hence, formula_35, formula_36 and formula_37 are all theorems of formula_34. The subsystem formula_38 proves all the principles formula_13, formula_15, formula_16, formula_17 for arithmetical formula_1, formula_3. The infinite pigeonhole principle is known to be equivalent to formula_39 and formula_40 over formula_34.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi(x)"
},
{
"math_id": 1,
"text": "\\varphi"
},
{
"math_id": 2,
"text": "\\psi(x,y)"
},
{
"math_id": 3,
"text": "\\psi"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "n < k"
},
{
"math_id": 6,
"text": "m_n"
},
{
"math_id": 7,
"text": "\\psi(n,m_n)"
},
{
"math_id": 8,
"text": "\\mathsf{I}\\varphi: \\quad \\big[ \\varphi(0) \\land \\forall x \\big( \\varphi(x) \\to \\varphi(x+1) \\big) \\big] \\to \\forall x\\ \\varphi (x)"
},
{
"math_id": 9,
"text": "\\mathsf{I}'\\varphi: \\quad \\forall x \\big[ \\big( \\forall y<x\\ \\ \\varphi(y) \\big) \\to \\varphi(x) \\big] \\to \\forall x\\ \\varphi (x)"
},
{
"math_id": 10,
"text": "\\mathsf{L}\\varphi: \\quad \\exists x\\ \\varphi (x) \\to \\exists x' \\big( \\varphi (x') \\land \\forall y < x'\\ \\, \\lnot \\varphi(y) \\big)"
},
{
"math_id": 11,
"text": "\\mathsf{B}\\psi: \\quad \\forall u \\big[ \\big( \\forall x < u\\ \\, \\exists y\\ \\, \\psi(x,y) \\big) \\to \\exists v\\ \\, \\forall x < u\\ \\, \\exists y < v\\ \\, \\psi(x,y) \\big]"
},
{
"math_id": 12,
"text": "\\mathsf{I}\\Sigma_2"
},
{
"math_id": 13,
"text": "\\mathsf{I}\\varphi"
},
{
"math_id": 14,
"text": "\\Sigma_2"
},
{
"math_id": 15,
"text": "\\mathsf{I}'\\varphi"
},
{
"math_id": 16,
"text": "\\mathsf{L}\\varphi"
},
{
"math_id": 17,
"text": "\\mathsf{B}\\psi"
},
{
"math_id": 18,
"text": "\\mathbb{N}"
},
{
"math_id": 19,
"text": "\\mathbb{N} + \\mathbb{Z} \\cdot K"
},
{
"math_id": 20,
"text": "K"
},
{
"math_id": 21,
"text": "\\mathbb{Z}"
},
{
"math_id": 22,
"text": "\\mathcal{M}"
},
{
"math_id": 23,
"text": "u"
},
{
"math_id": 24,
"text": "y"
},
{
"math_id": 25,
"text": "\\mathsf{PA}^- +\\mathsf{I}\\Sigma_0"
},
{
"math_id": 26,
"text": "\\mathsf{I}'\\varphi \\equiv \\mathsf{L}\\lnot\\varphi"
},
{
"math_id": 27,
"text": "\\mathsf{I}\\Sigma_n \\equiv \\mathsf{I}\\Pi_n \\equiv \\mathsf{I}'\\Sigma_n \\equiv \\mathsf{I}'\\Pi_n \\equiv \\mathsf{L}\\Sigma_n \\equiv \\mathsf{L}\\Pi_n"
},
{
"math_id": 28,
"text": "\\mathsf{I}\\Sigma_{n+1} \\implies \\mathsf{B}\\Sigma_{n+1} \\implies \\mathsf{I}\\Sigma_n"
},
{
"math_id": 29,
"text": "\\mathsf{B}\\Sigma_{n+1} \\equiv \\mathsf{B}\\Pi_n \\equiv \\mathsf{L}\\Delta_{n+1}"
},
{
"math_id": 30,
"text": "\\mathsf{L}\\Delta_n \\implies \\mathsf{I}\\Delta_n"
},
{
"math_id": 31,
"text": "\\mathsf{PA}^- +\\mathsf{I}\\Sigma_0 + \\mathsf{exp}"
},
{
"math_id": 32,
"text": "\\mathsf{B}\\Sigma_n \\equiv \\mathsf{L}\\Delta_n \\equiv \\mathsf{I}\\Delta_n"
},
{
"math_id": 33,
"text": "\\mathsf{I}\\Sigma_1"
},
{
"math_id": 34,
"text": "\\mathsf{RCA}_0"
},
{
"math_id": 35,
"text": "\\mathsf{I}'\\Sigma_1"
},
{
"math_id": 36,
"text": "\\mathsf{L}\\Sigma_1"
},
{
"math_id": 37,
"text": "\\mathsf{B}\\Sigma_1"
},
{
"math_id": 38,
"text": "\\mathsf{ACA}_0"
},
{
"math_id": 39,
"text": "\\mathsf{B}\\Pi_1"
},
{
"math_id": 40,
"text": "\\mathsf{B}\\Sigma_2"
}
]
| https://en.wikipedia.org/wiki?curid=67390901 |
67392565 | 2 Chronicles 22 | Second Book of Chronicles, chapter 22
2 Chronicles 22 is the twenty-second chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reigns of Ahaziah and Athaliah, rulers of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 12 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Ahaziah, king of Judah (22:1–9).
The section contains the record of Ahaziah's reign, with some events in the northern kingdom mentioned as necessary. Verses 1–6 correspond with 2 Kings 8:24b–29, whereas verses 7–9 are concise parallels to 2 Kings 9:1–28 and 10:12–14. Uniquely in verse 1, Ahaziah was said to be made king by the "people of Jerusalem", whereas such passages elsewhere in the Hebrew Bible involve the "people of the land". It refers back to 2 Chronicles 21:17 for the explanation of why the youngest of all Jehoram's sons should become king. The alliance with the northern kingdom and the emulation of its worship practices in Ahaziah's time threatened the elimination of the Davidic dynasty as well as the traditional Temple worship practice in Jerusalem (2 Chronicles 23:18; 24:7). Verse 9 provides another point of view concerning Ahaziah's death in comparison to 2 Kings 9, which recorded that Ahaziah was wounded while fleeing near Ibleam, but reached Megiddo, where he died, whereas the Chronicler only records that Ahaziah died in Samaria, the 'evil capital'. The Chronicler does not explicitly confirm that Ahaziah was buried in Samaria, only that he received a burial for the sake of his God-fearing ancestor, Jehoshaphat, so it is not a contradiction to the statement in 2 Kings 9 that Ahaziah's dead body was brought to Jerusalem to be buried there. The anointing of Jehu and the assassination of Joram, king of Israel, were described in 2 Kings 9:1–26.
"Ahaziah was forty-two years old when he became king, and he reigned one year in Jerusalem. His mother’s name was Athaliah the granddaughter of Omri."
Athaliah, queen of Judah (22:10–12).
As this section opens, Ahaziah and the Judean princes ("sons of Ahaziah's brothers"; verse 8) have been murdered, so the kingdom of Judah was in a situation similar to that at the end of Saul's reign (cf. 1 Chronicles 10), giving a significant meaning to the promise for David (cf. 2 Chronicles 23:3).
"But when Athaliah the mother of Ahaziah saw that her son was dead, she arose and destroyed all the seed royal of the house of Judah."
"But Jehoshabeath, the daughter of the king, took Joash the son of Ahaziah, and stole him away from among the king’s sons who were being murdered, and put him and his nurse in a bedroom. So Jehoshabeath, the daughter of King Jehoram, the wife of Jehoiada the priest (for she was the sister of Ahaziah), hid him from Athaliah so that she did not kill him."
Verse 11.
The unique information in the Chronicles that Jehoram's daughter Jehoshabeath (spelled as "Jehosheba" in 2 Kings 11) was the wife of Jehoiada, the high priest, could be historically reliable, despite the lack of support elsewhere in the Hebrew Bible, and it can explain why she could stay in the temple grounds.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67392565 |
67399793 | Quantum optical coherence tomography | Quantum optical coherence tomography (Q-OCT) is an imaging technique that uses nonclassical (quantum) light sources to generate high-resolution images based on the Hong-Ou-Mandel effect (HOM). Q-OCT is similar to conventional OCT but uses a fourth-order interferometer that incorporates two photodetectors rather than a second-order interferometer with a single photodetector. The primary advantage of Q-OCT over OCT is insensitivity to even-order dispersion in multi-layered and scattering media.
Several quantum sources of light have been developed so far. An example of such nonclassical sources is spontaneous parametric down-conversion that generates entangled photon pairs (twin-photon). The entangled photons are emitted in pairs and have stronger-than-classical temporal and spatial correlations. The entangled photons are anti-correlated in frequencies and directions. However, the nonclassical light sources are expensive and limited, several quantum-mimetic light sources are developed by classical light and nonlinear optics, which mimic dispersion cancellation and unique additional benefits.
Theory.
The principle of Q-OCT is fourth-order interferometry. The optical setup is based on a Hong–Ou–Mandel (HOM) interferometer with a twin-photon source. The twin photons travel through the reference and sample arms, recombine at the beam splitter, and the coincidence rate is measured as a function of the time delay.
The nonlinear crystal is pumped by a laser and generates photon pairs with anti-correlation in frequency. One photon travels through the sample, while the other passes through a variable time delay before the interferometer. The photon-coincidence rate at the output ports of the beam splitter is measured as a function of the length difference (formula_0) by a pair of single-photon-counting detectors and a coincidence counter.
Due to the quantum destructive interference, both photons emerge from the same port when the optical path lengths are equal. The coincidence rate has a sharp dip when the optical path length difference is zero. Such dips are used to monitor the reflectance of the sample as a function of depth.
The twin-photon source is characterized by the frequency-entangled state:
formula_1
where formula_2 is the angular frequency deviation about the central angular frequency formula_3 of the twin-photon wave packet, formula_4 is the spectral probability amplitude.
A reflecting sample is described by a transfer function:
formula_5
where formula_6 is the complex reflection coefficient from depth formula_7.
The coincidence rate formula_8 is then given by
formula_9
where
formula_10,
and
formula_11
represent the constant (self-interference) and varying (cross-interference) contributions.
Dips in the coincidence rate plot arise from reflections from each of the two surfaces. When the two photons have equal overall path lengths, destructive interference of the two photon-pair probability amplitudes occurs.
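The shape of this dip can be reproduced numerically from the expressions above. The following minimal sketch assumes a single reflecting surface with a frequency-flat reflection coefficient and a Gaussian spectral density, with hypothetical parameter values in arbitrary units:

```python
import numpy as np

sigma = 1.0                        # spectral width of S(Omega)
r = 0.8                            # frequency-flat reflection amplitude
omega = np.linspace(-8, 8, 4001)   # detuning grid for the integrals
S = np.exp(-omega**2 / (2 * sigma**2))

tau = np.linspace(-5, 5, 501)      # path-length delay tau_q
lam0 = r**2 * np.trapz(S, omega)
# Re Lambda(2*tau_q) = r^2 * integral of S(Omega) * cos(2*Omega*tau_q) dOmega
lam = r**2 * np.trapz(S[None, :] * np.cos(2.0 * omega[None, :] * tau[:, None]),
                      omega, axis=1)
C = lam0 - lam                     # coincidence rate, up to a constant factor
# C vanishes at tau_q = 0 (complete destructive interference) and recovers
# to lam0 once |tau_q| >> 1/sigma, tracing out the HOM dip
```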
Advantages.
Compared with conventional OCT, Q-OCT has several advantages:
Applications.
Similar to FD-OCT, Q-OCT can provide 3D imaging of biological samples with better resolution due to photon entanglement. Q-OCT permits a direct determination of the group-velocity dispersion (GVD) coefficients of the media. The development of quantum-mimetic light sources offers unique additional benefits to quantum imaging, such as an enhanced signal-to-noise ratio, better resolution, and faster acquisition rates. Although Q-OCT is not expected to replace OCT, it does offer some advantages as a biological imaging paradigm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c\\tau_q"
},
{
"math_id": 1,
"text": " \\left | \\psi \\right\\rangle = \\int \\,d \\Omega \\zeta (\\Omega) \\left | \\omega_0 + \\Omega \\right\\rangle_1 \n\\left | \\omega_0 - \\Omega \\right\\rangle_2, "
},
{
"math_id": 2,
"text": " \\Omega "
},
{
"math_id": 3,
"text": " \\omega_0 "
},
{
"math_id": 4,
"text": " \\zeta (\\Omega) "
},
{
"math_id": 5,
"text": " H(\\omega) = \\int\\limits_{0}^{\\infty} \\,d z r(z,\\omega)e^{i2\\phi(z,\\omega)}, "
},
{
"math_id": 6,
"text": " H(\\omega) = r(z,\\omega)"
},
{
"math_id": 7,
"text": " z"
},
{
"math_id": 8,
"text": "C(\\tau_q)"
},
{
"math_id": 9,
"text": " C(\\tau_q) \\propto \\Lambda_0 - Re{\\Lambda(2\\tau_q)},"
},
{
"math_id": 10,
"text": " \\Lambda_0 = \\int \\,d\\Omega |H(\\omega_0 + \\Omega)|^2 S(\\Omega)"
},
{
"math_id": 11,
"text": " \\Lambda(\\tau_q) = \\int \\,d\\Omega H(\\omega_0 + \\Omega) H^{\\ast}(\\omega_0 - \\Omega) S(\\Omega)e^{-i\\Omega\\tau_q},"
}
]
| https://en.wikipedia.org/wiki?curid=67399793 |
67401546 | 2 Chronicles 23 | Second Book of Chronicles, chapter 23
2 Chronicles 23 is the twenty-third chapter of the Second Book of Chronicles the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reigns of Athaliah and Joash, rulers of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 21 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Joash anointed king of Judah (23:1–11).
The section describes the anointing of Joash as the king of Judah (verses 1–3 parallel to 2 Kings 11:4) involving not only the 'captains of the royal guard', but also the Levites, 'the heads of the families of Israel' and the 'entire community', Except for "Elishaphat", all other names can be found in the lists of priests and Levites in the books of Ezra, Nehemiah, and Chronicles. The temple personnel organization and working schedule (1 Chronicles 23–26) were indicated in verse 8 ('for the priest Jehoiada did not dismiss the divisions').
"And they brought out the king’s son, put the crown on him, gave him the Testimony, and made him king. Then Jehoiada and his sons anointed him, and said, “Long live the king!”"
Death of Athaliah (23:12–15).
The section about the slaughter of Athaliah (verses 12–15) parallels closely to 2 Kings 11:13–16. Athaliah heard the 'noise of the people' which is an 'unusual commotion', accompanied by the 'blast of the trumpets and the vehement acclamations of the people' across the Tyropœon and this attracted her attention, or 'excited her fears'. She was caught by the guards and taken "by the way by the which horses came into the king's house: and there was she slain" (2 Kings 11:16). Josephus explains that "the way" refers to the road to bring the horses into the king's (horses') house (not into [the king's house] of residence) or "hippodrome" (the gate of the king's mules) that was built on the southeast of the temple, near the horse gate in the valley of Kidron Athaliah's reign was the 'gravest threat' to the continuation of Davidic dynasty.
"And she looked, and, behold, the king stood at his pillar at the entering in, and the princes and the trumpets by the king: and all the people of the land rejoiced, and sounded with trumpets, also the singers with instruments of musick, and such as taught to sing praise. Then Athaliah rent her clothes, and said, Treason, Treason."
Jehoiada restored the worship of the LORD (23:16–21).
The high priest Jehoiada organized the offices (priests and Levites) and their duties (sacrifices and music) to undo the damage inflicted by Athaliah and prior rulers (cf. 2 Kings 11:17–20) and to restore the law of Moses and David's orders (as Moses made no law concerning music for worship). That Jerusalem became 'quiet' is a 'sign of divine blessing' (1 Chronicles 4:40; 22:9; 2 Chronicles 14:1, 6; 20:30).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67401546 |
674050 | Picard theorem | Theorem about the range of an analytic function
In complex analysis, Picard's great theorem and Picard's little theorem are related theorems about the range of an analytic function. They are named after Émile Picard.
The theorems.
Little Picard Theorem: If a function formula_0 is entire and non-constant, then the set of values that formula_1 assumes is either the whole complex plane or the plane minus a single point.
Sketch of Proof: Picard's original proof was based on properties of the modular lambda function, usually denoted by formula_2, which performs, using modern terminology, the holomorphic universal covering of the twice punctured plane by the unit disc. This function is explicitly constructed in the theory of elliptic functions. If formula_3 omits two values, then the composition of formula_3 with the inverse of the modular function maps the plane into the unit disc, which implies that formula_3 is constant by Liouville's theorem.
This theorem is a significant strengthening of Liouville's theorem which states that the image of an entire non-constant function must be unbounded. Many different proofs of Picard's theorem were later found and Schottky's theorem is a quantitative version of it. In the case where the values of formula_3 are missing a single point, this point is called a lacunary value of the function.
Great Picard's Theorem: If an analytic function formula_3 has an essential singularity at a point formula_4, then on any punctured neighborhood of formula_5 takes on all possible complex values, with at most a single exception, infinitely often.
This is a substantial strengthening of the Casorati–Weierstrass theorem, which only guarantees that the range of formula_3 is dense in the complex plane. A consequence of the Great Picard Theorem is that any entire, non-polynomial function attains all possible complex values infinitely often, with at most one exception.
The "single exception" is needed in both theorems, as demonstrated here:
Proof.
Little Picard Theorem.
Suppose formula_7 is an entire function that omits two values formula_8 and formula_9. By considering formula_10 we may assume without loss of generality that formula_11 and formula_12.
Because formula_13 is simply connected and the range of formula_3 omits formula_14, formula_3 has a holomorphic logarithm. Let formula_15 be an entire function such that formula_16. Then the range of formula_15 omits all integers. By a similar argument using the quadratic formula, there is an entire function formula_17 such that formula_18. Then the range of formula_17 omits all complex numbers of the form formula_19, where formula_20 is an integer and formula_21 is a nonnegative integer.
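To make the "sufficiently large disk" step explicit, the spacing of the omitted values can be bounded as follows (a supplementary computation added here, not part of the original sketch; the constant 4 is just one convenient choice):

\[
  \cosh^{-1}(m+1) - \cosh^{-1}(m)
  \;=\; \int_{m}^{m+1} \frac{dt}{\sqrt{t^{2}-1}}
  \;\le\; \cosh^{-1}(2) \;<\; \tfrac{3}{2}
  \qquad (m \ge 1),
\]

so the omitted values form a grid with horizontal gaps of exactly $2\pi$ and vertical gaps below $3/2$; every point of the plane therefore lies within $\sqrt{\pi^{2} + (3/4)^{2}} < 4$ of an omitted value, and every disk of radius 4 contains one.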
By Landau's theorem, if formula_22, then for all formula_23, the range of formula_17 contains a disk of radius formula_24. But from above, any sufficiently large disk contains at least one number that the range of formula_17 omits. Therefore formula_25 for all formula_4. By the fundamental theorem of calculus, formula_17 is constant, so formula_3 is constant.
Generalization and current research.
"Great Picard's theorem" is true in a slightly more general form that also applies to meromorphic functions:
Great Picard's Theorem (meromorphic version): If "M" is a Riemann surface, "w" a point on "M", P1(C) = C ∪ {∞} denotes the Riemann sphere and "f" : "M"\{"w"} → P1(C) is a holomorphic function with essential singularity at "w", then on any open subset of "M" containing "w", the function "f"("z") attains all but at most "two" points of P1(C) infinitely often.
Example: The function "f"("z") = 1/(1 − "e"1/"z") is meromorphic on C* = C - {0}, the complex plane with the origin deleted. It has an essential singularity at "z" = 0 and attains the value ∞ infinitely often in any neighborhood of 0; however it does not attain the values 0 or 1.
With this generalization, "Little Picard Theorem" follows from "Great Picard Theorem" because an entire function is either a polynomial or it has an essential singularity at infinity. As with the little theorem, the (at most two) points that are not attained are lacunary values of the function.
The following conjecture is related to "Great Picard's Theorem":
Conjecture: Let {"U"1, ..., "Un"} be a collection of open connected subsets of C that cover the punctured unit disk D \ {0}. Suppose that on each "Uj" there is an injective holomorphic function "fj", such that d"f""j" = d"fk" on each intersection "U""j" ∩ "U""k". Then the differentials glue together to a meromorphic 1-form on D.
It is clear that the differentials glue together to a holomorphic 1-form "g" d"z" on D \ {0}. In the special case where the residue of "g" at 0 is zero the conjecture follows from the "Great Picard's Theorem". | [
{
"math_id": 0,
"text": "f: \\mathbb{C} \\to\\mathbb{C}"
},
{
"math_id": 1,
"text": "f(z)"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "w"
},
{
"math_id": 5,
"text": "w, f(z)"
},
{
"math_id": 6,
"text": "e^{\\frac{1}{z}}"
},
{
"math_id": 7,
"text": "f: \\mathbb{C}\\to\\mathbb{C}"
},
{
"math_id": 8,
"text": "z_0"
},
{
"math_id": 9,
"text": "z_1\n"
},
{
"math_id": 10,
"text": "\\frac{f(z)-z_0}{z_1 - z_0}"
},
{
"math_id": 11,
"text": "z_0 = 0"
},
{
"math_id": 12,
"text": "z_1=1"
},
{
"math_id": 13,
"text": "\\mathbb{C}"
},
{
"math_id": 14,
"text": "0\n"
},
{
"math_id": 15,
"text": "g"
},
{
"math_id": 16,
"text": "f(z)=e^{2\\pi ig(z)}"
},
{
"math_id": 17,
"text": "h"
},
{
"math_id": 18,
"text": "g(z)=\\cos(h(z))"
},
{
"math_id": 19,
"text": "2\\pi n \\pm i \\cosh^{-1}(m)"
},
{
"math_id": 20,
"text": "n\n"
},
{
"math_id": 21,
"text": "m"
},
{
"math_id": 22,
"text": "h'(w) \\ne 0"
},
{
"math_id": 23,
"text": "{R > 0}"
},
{
"math_id": 24,
"text": "|h'(w)| R/72"
},
{
"math_id": 25,
"text": "h'(w)=0"
}
]
| https://en.wikipedia.org/wiki?curid=674050 |
6740565 | Helly's selection theorem | On convergent subsequences of functions that are locally of bounded total variation
In mathematics, Helly's selection theorem (also called the "Helly selection principle") states that a uniformly bounded sequence of monotone real functions admits a convergent subsequence.
In other words, it is a sequential compactness theorem for the space of uniformly bounded monotone functions.
It is named for the Austrian mathematician Eduard Helly.
A more general version of the theorem asserts compactness of the space BVloc of functions locally of bounded total variation that are uniformly bounded at a point.
The theorem has applications throughout mathematical analysis. In probability theory, the result implies compactness of a tight family of measures.
Statement of the theorem.
Let ("f""n")"n" ∈ N be a sequence of increasing functions mapping the real line R into itself,
and suppose that it is uniformly bounded: there are "a,b" ∈ R such that "a" ≤ "f""n" ≤ "b" for every "n" ∈ N.
Then the sequence ("f""n")"n" ∈ N admits a pointwise convergent subsequence.
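A standard proof sketch (added here for orientation; one of several possible arguments) proceeds by a diagonal argument over the rationals:

% Enumerate \mathbb{Q} = \{q_1, q_2, \dots\}; uniform boundedness allows nested
% subsequences converging at each q_j, and the diagonal subsequence (f_{n_k})
% converges at every rational. Define
\[
  g(q) := \lim_{k \to \infty} f_{n_k}(q) \quad (q \in \mathbb{Q}),
  \qquad
  \tilde{g}(x) := \inf_{q \in \mathbb{Q},\, q > x} g(q).
\]
% Monotonicity forces f_{n_k}(x) \to \tilde{g}(x) at every continuity point x of
% the monotone function \tilde{g}; since \tilde{g} has at most countably many
% discontinuities, one further diagonal extraction handles those points.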
Generalisation to BVloc.
Let "U" be an open subset of the real line and let "f""n" : "U" → R, "n" ∈ N, be a sequence of functions. Suppose that
("f""n") has uniformly bounded total variation on any "W" that is compactly embedded in "U". That is, for all sets "W" ⊆ "U" with compact closure "W̄" ⊆ "U",
formula_0
where the derivative is taken in the sense of tempered distributions.
Then, there exists a subsequence "f""n""k", "k" ∈ N, of "f""n" and a function "f" : "U" → R, locally of bounded variation, such that
formula_1
formula_2
Further generalizations.
There are many generalizations and refinements of Helly's theorem. The following theorem, for BV functions taking values in Banach spaces, is due to Barbu and Precupanu:
Let "X" be a reflexive, separable Hilbert space and let "E" be a closed, convex subset of "X". Let Δ : "X" → [0, +∞) be positive-definite and homogeneous of degree one. Suppose that "z""n" is a uniformly bounded sequence in BV([0, "T"]; "X") with "z""n"("t") ∈ "E" for all "n" ∈ N and "t" ∈ [0, "T"]. Then there exists a subsequence "z""n""k" and functions "δ", "z" ∈ BV([0, "T"]; "X") such that
formula_3
formula_4
formula_5
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sup_{n \\in \\mathbf{N}} \\left( \\left\\| f_{n} \\right\\|_{L^{1} (W)} + \\left\\| \\frac{\\mathrm{d} f_{n}}{\\mathrm{d} t} \\right\\|_{L^{1} (W)} \\right) < + \\infty,"
},
{
"math_id": 1,
"text": "\\lim_{k \\to \\infty} \\int_{W} \\big| f_{n_{k}} (x) - f(x) \\big| \\, \\mathrm{d} x = 0;"
},
{
"math_id": 2,
"text": "\\left\\| \\frac{\\mathrm{d} f}{\\mathrm{d} t} \\right\\|_{L^{1} (W)} \\leq \\liminf_{k \\to \\infty} \\left\\| \\frac{\\mathrm{d} f_{n_{k}}}{\\mathrm{d} t} \\right\\|_{L^{1} (W)}. "
},
{
"math_id": 3,
"text": "\\int_{[0, t)} \\Delta (\\mathrm{d} z_{n_{k}}) \\to \\delta(t);"
},
{
"math_id": 4,
"text": "z_{n_{k}} (t) \\rightharpoonup z(t) \\in E;"
},
{
"math_id": 5,
"text": "\\int_{[s, t)} \\Delta(\\mathrm{d} z) \\leq \\delta(t) - \\delta(s)."
}
]
| https://en.wikipedia.org/wiki?curid=6740565 |
67409235 | Polar factorization theorem | Theorem in Optimal Transport
In optimal transport, a branch of mathematics, polar factorization of vector fields is a basic result due to Brenier (1987), with antecedents of Knott-Smith (1984) and Rachev (1985), that generalizes many existing results among which are the polar decomposition of real matrices, and the rearrangement of real-valued functions.
The theorem.
" Notation. "Denote formula_0 the image measure of formula_1 through the map formula_2.
" Definition: Measure preserving map. " Let formula_3 and formula_4 be some probability spaces and formula_5 a measurable map. Then, formula_6 is said to be measure preserving iff formula_7, where formula_8 is the pushforward measure. Spelled out: for every formula_9-measurable subset formula_10 of formula_11, formula_12 is formula_1-measurable, and formula_13. The latter is equivalent to:
formula_14
where formula_15 is formula_9-integrable and formula_16 is formula_1-integrable.
" Theorem. " Consider a map formula_17 where formula_10 is a convex subset of formula_18, and formula_1 a measure on formula_10 which is absolutely continuous. Assume that formula_19 is absolutely continuous. Then there is a convex function formula_20 and a map formula_21 preserving formula_1 such that
formula_22
In addition, formula_23 and formula_6 are uniquely defined almost everywhere.
Applications and connections.
Dimension 1.
In dimension 1, and when formula_1 is the Lebesgue measure over the unit interval, the result specializes to Ryff's theorem. When formula_24 and formula_1 is the uniform distribution over formula_25, the polar decomposition boils down to
formula_26
where formula_27 is the cumulative distribution function of the random variable formula_28 and formula_29 has a uniform distribution over formula_30. formula_27 is assumed to be continuous, and formula_31 preserves the Lebesgue measure on formula_30.
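As a concrete numerical illustration (our addition; the values and names below are arbitrary), the one-dimensional factorization can be mimicked on a uniform grid: the increasing rearrangement of the sampled values plays the role of the quantile function, and the rank permutation plays the role of the measure-preserving map formula_6.

import numpy as np

# Illustrative sketch only: sample xi on a uniform grid over [0, 1).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
xi = np.cos(2.0 * np.pi * t) + 0.1 * rng.standard_normal(t.size)  # an arbitrary map

ranks = xi.argsort().argsort()     # discrete measure-preserving map: t_i -> rank_i
sigma = (ranks + 0.5) / xi.size    # uniformly distributed values in (0, 1)
quantile = np.sort(xi)             # increasing rearrangement = discrete quantile function

# Recover xi as the monotone factor composed with the rank map.
assert np.allclose(quantile[ranks], xi)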
Polar decomposition of matrices.
When formula_2 is a linear map and formula_1 is the Gaussian normal distribution, the result coincides with the polar decomposition of matrices. Assuming formula_32 where formula_33 is an invertible formula_34 matrix and considering formula_1 the formula_35 probability measure, the polar decomposition boils down to
formula_36
where formula_37 is a symmetric positive definite matrix, and formula_38 an orthogonal matrix. The connection with the polar factorization is formula_39 which is convex, and formula_40 which preserves the formula_35 measure.
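This special case is easy to verify numerically; the sketch below (our addition) computes the factors from the singular value decomposition, one standard route when formula_33 is invertible.

import numpy as np

# Left polar decomposition M = S O obtained from the SVD M = U diag(s) Vt.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))   # a random matrix, almost surely invertible

U, s, Vt = np.linalg.svd(M)
S = U @ np.diag(s) @ U.T          # symmetric positive definite factor
O = U @ Vt                        # orthogonal factor

assert np.allclose(M, S @ O)
assert np.allclose(S, S.T) and np.all(np.linalg.eigvalsh(S) > 0)
assert np.allclose(O @ O.T, np.eye(4))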
Helmholtz decomposition.
The results also allow to recover Helmholtz decomposition. Letting formula_41 be a smooth vector field it can then be written in a unique way as
formula_42
where formula_43 is a smooth real function defined on formula_10, unique up to an additive constant, and formula_44 is a smooth divergence free vector field, parallel to the boundary of formula_10.
The connection can be seen by assuming formula_45 is the Lebesgue measure on a compact set formula_46 and by writing formula_2 as a perturbation of the identity map
formula_47
where formula_48 is small. The polar decomposition of formula_49 is given by formula_50. Then, for any test function formula_51 the following holds:
formula_52
where the fact that formula_53 was preserving the Lebesgue measure was used in the second equality.
In fact, as formula_54, one can expand formula_55, and therefore formula_56. As a result, formula_57 for any smooth function formula_15, which implies that formula_58 is divergence-free.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\xi_\\# \\mu"
},
{
"math_id": 1,
"text": "\\mu"
},
{
"math_id": 2,
"text": "\\xi"
},
{
"math_id": 3,
"text": "(X,\\mu)"
},
{
"math_id": 4,
"text": "(Y,\\nu)"
},
{
"math_id": 5,
"text": "\\sigma :X \\rightarrow Y"
},
{
"math_id": 6,
"text": "\\sigma"
},
{
"math_id": 7,
"text": "\\sigma_{\\#}\\mu = \\nu"
},
{
"math_id": 8,
"text": "\\#"
},
{
"math_id": 9,
"text": "\\nu"
},
{
"math_id": 10,
"text": "\\Omega"
},
{
"math_id": 11,
"text": "Y"
},
{
"math_id": 12,
"text": "\\sigma^{-1}(\\Omega)"
},
{
"math_id": 13,
"text": "\\mu(\\sigma^{-1}(\\Omega))=\\nu(\\Omega )"
},
{
"math_id": 14,
"text": " \\int_{X}(f\\circ \\sigma)(x) \\mu(dx) =\\int_X (\\sigma^*f)(x) \\mu(dx) =\\int_Y f(y) (\\sigma_{\\#}\\mu)(dy) = \\int_{Y}f(y) \\nu(dy)"
},
{
"math_id": 15,
"text": "f"
},
{
"math_id": 16,
"text": " f\\circ \\sigma "
},
{
"math_id": 17,
"text": "\\xi :\\Omega \\rightarrow R^{d}"
},
{
"math_id": 18,
"text": "R^{d}"
},
{
"math_id": 19,
"text": "\\xi_{\\#}\\mu"
},
{
"math_id": 20,
"text": "\\varphi :\\Omega \\rightarrow R"
},
{
"math_id": 21,
"text": "\\sigma :\\Omega \\rightarrow \\Omega"
},
{
"math_id": 22,
"text": " \\xi =\\left( \\nabla \\varphi \\right) \\circ \\sigma "
},
{
"math_id": 23,
"text": "\\nabla \\varphi"
},
{
"math_id": 24,
"text": "d=1"
},
{
"math_id": 25,
"text": "\\left[0,1\\right]"
},
{
"math_id": 26,
"text": " \\xi \\left( t\\right) =F_{X}^{-1}\\left( \\sigma \\left( t\\right) \\right) "
},
{
"math_id": 27,
"text": "F_{X}"
},
{
"math_id": 28,
"text": "\\xi \\left( U\\right)"
},
{
"math_id": 29,
"text": "U"
},
{
"math_id": 30,
"text": "\\left[ 0,1\\right]"
},
{
"math_id": 31,
"text": "\\sigma \\left( t\\right)=F_{X}\\left( \\xi \\left( t\\right) \\right)"
},
{
"math_id": 32,
"text": "\\xi \\left( x\\right) =Mx"
},
{
"math_id": 33,
"text": "M"
},
{
"math_id": 34,
"text": "d\\times d"
},
{
"math_id": 35,
"text": "\\mathcal{N}\\left( 0,I_{d}\\right)"
},
{
"math_id": 36,
"text": "\nM=SO\n"
},
{
"math_id": 37,
"text": "S"
},
{
"math_id": 38,
"text": "O"
},
{
"math_id": 39,
"text": "\\varphi \\left(x\\right) =x^{\\top }Sx/2"
},
{
"math_id": 40,
"text": "\\sigma \\left( x\\right) =Ox"
},
{
"math_id": 41,
"text": "x\\rightarrow V\\left( x\\right)"
},
{
"math_id": 42,
"text": "\nV=w+\\nabla p\n"
},
{
"math_id": 43,
"text": "p"
},
{
"math_id": 44,
"text": "w"
},
{
"math_id": 45,
"text": "\\mu "
},
{
"math_id": 46,
"text": "\\Omega \\subset R^{n}"
},
{
"math_id": 47,
"text": "\n\\xi _{\\epsilon }(x)=x+\\epsilon V(x)\n"
},
{
"math_id": 48,
"text": "\\epsilon"
},
{
"math_id": 49,
"text": "\\xi _{\\epsilon }"
},
{
"math_id": 50,
"text": "\\xi _{\\epsilon }=(\\nabla \\varphi_{\\epsilon })\\circ \\sigma_{\\epsilon }"
},
{
"math_id": 51,
"text": "f:R^{n}\\rightarrow R"
},
{
"math_id": 52,
"text": "\n\\int_{\\Omega }f(x+\\epsilon V(x))dx=\\int_{\\Omega }f((\\nabla \\varphi\n_{\\epsilon })\\circ \\sigma _{\\epsilon }\\left( x\\right) )dx=\\int_{\\Omega\n}f(\\nabla \\varphi _{\\epsilon }\\left( x\\right) )dx\n"
},
{
"math_id": 53,
"text": "\\sigma _{\\epsilon }"
},
{
"math_id": 54,
"text": "\\textstyle \\varphi _{0}(x)=\\frac{1}{2}\\Vert x\\Vert ^{2}"
},
{
"math_id": 55,
"text": "\\textstyle \\varphi _{\\epsilon }(x)=\\frac{1}{2}\\Vert x\\Vert ^{2}+\\epsilon p(x)+O(\\epsilon ^{2})"
},
{
"math_id": 56,
"text": "\\textstyle \\nabla \\varphi_{\\epsilon }\\left( x\\right) =x+\\epsilon \\nabla p(x)+O(\\epsilon ^{2})"
},
{
"math_id": 57,
"text": "\\textstyle \\int_{\\Omega }\\left( V(x)-\\nabla p(x)\\right) \\nabla f(x))dx"
},
{
"math_id": 58,
"text": "w\\left( x\\right) =V(x)-\\nabla p(x)"
}
]
| https://en.wikipedia.org/wiki?curid=67409235 |
6740960 | Cheerios effect | When floating objects attract each other
In fluid mechanics, the Cheerios effect is a colloquial name for the phenomenon of floating objects appearing to either attract or repel one another. The example which gives the effect its name is the observation that pieces of breakfast cereal (for example, Cheerios) floating on the surface of a bowl will tend to clump together, or appear to stick to the side of the bowl.
Description.
The effect is observed in small objects which are supported by the surface of a liquid. There are two types of such objects: objects which are sufficiently buoyant that they will always float on the surface (for example, Cheerios in milk), and objects which are heavy enough to sink when immersed, but not so heavy as to overcome the surface tension of the liquid (for example, steel pins on water). Objects of the same type will appear to attract one another, and objects of opposite types will appear to repel one another.
In addition, the same attractive or repulsive effect can be observed between objects and the wall of the container. Once again there are two possibilities: the interface between the liquid and the container wall is either a concave or a convex meniscus. Buoyant objects will be attracted in the case of a concave meniscus and repelled for convex. Non-buoyant floating objects will do the opposite.
Explanation.
All objects in a fluid experience two opposed forces in the vertical direction: gravity (determined by the mass of the object) and buoyancy (determined by the density of the fluid and the volume of liquid displaced by the object). If the buoyant force is greater than the force of gravity acting on an object, it will rise to the top of the liquid. On the other hand, an object immersed in a liquid which experiences a gravitational force greater than its buoyant force will sink.
At the surface of the liquid, a third effect comes into play - surface tension. This effect is due to the fact that molecules of the liquid are more strongly attracted to each other than they are to the air above the liquid. As such, non-wetting objects on the surface of the liquid will experience an upward force due to surface tension. If the upward force is sufficient to balance the force of gravity on the object, it will float on the surface of the liquid, while deforming the surface down. By contrast, objects with a net positive buoyancy will deform the water surface upward around them as they press against the surface.
This deformation of the liquid surface, combined with the net upwards or downwards force experienced by each object, is the cause of the Cheerios effect. Objects experiencing a net upward force will follow the surface of the liquid as it curves upward. Therefore two objects with an upward deformation will move toward each other as each follows the surface of the liquid upward. Similarly, objects with a net downward force will follow the curve of the liquid surface in the downward direction, and will move horizontally together as they do so.
The same principle holds at the side of the container, where the surface of the liquid is deformed by the meniscus effect. If the container is wetting with respect to the liquid, the meniscus will slope upwards at the wall of the container, and buoyant objects will move towards the wall as a result of travelling upward along the surface. By contrast, non-buoyant floating objects will move away from the walls of such a container for the same reason.
More complex behavior resulting from the same principles can be observed in shapes which do not have simple concave or convex meniscus behavior. When such objects come close to each other they rotate in the plane of the water surface until they find an optimum relative orientation then move toward each other.
Simplified calculation.
Writing in the American Journal of Physics, Dominic Vella and L. Mahadevan of Harvard University discuss the Cheerios effect and suggest that it may be useful in the study of the self-assembly of small structures. They calculate the force between two spheres of density formula_0 and radius formula_1 floating distance formula_2 apart in liquid of density formula_3 as
formula_4
where formula_5 is the surface tension, formula_6 is a modified Bessel function of the second kind, formula_7 is the Bond number, and
formula_8
is a nondimensional factor in terms of the contact angle formula_9.
Here formula_10 is a convenient meniscus length scale.
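For a feeling of the magnitudes involved, the formula can be evaluated numerically; all parameter values below are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.special import k1   # modified Bessel function of the second kind, order 1

gamma = 0.072                  # surface tension of water, N/m
rho, rho_s = 1000.0, 200.0     # liquid and sphere densities, kg/m^3
R, g = 1e-3, 9.81              # sphere radius (m) and gravitational acceleration
theta = np.deg2rad(60.0)       # an assumed contact angle

B = rho * g * R**2 / gamma     # Bond number
L_c = R / np.sqrt(B)           # meniscus length scale (about 2.7 mm here)
Sigma = (2 * rho_s / rho - 1) / 3 - np.cos(theta) / 2 + np.cos(theta) ** 3 / 6

ell = 5e-3                     # centre-to-centre separation, m
F = 2 * np.pi * gamma * R * B**2.5 * Sigma**2 * k1(ell / L_c)
print(F)                       # interaction magnitude in newtons (~1e-7 N here)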
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho_s"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "\\ell"
},
{
"math_id": 3,
"text": "\\rho"
},
{
"math_id": 4,
"text": "\n2\\pi\\gamma RB^{5/2}\\Sigma^2K_1\\left(\\frac{\\ell}{L_c}\\right)\n"
},
{
"math_id": 5,
"text": "\\gamma"
},
{
"math_id": 6,
"text": "K_1"
},
{
"math_id": 7,
"text": "B=\\rho gR^2/\\gamma"
},
{
"math_id": 8,
"text": "\n\\Sigma=\\frac{2\\rho_s/\\rho-1}{3}-\\frac{\\cos\\theta}{2}+\\frac{\\cos^3\\theta}{6}"
},
{
"math_id": 9,
"text": "\\theta"
},
{
"math_id": 10,
"text": "L_C=R/\\sqrt{B}"
}
]
| https://en.wikipedia.org/wiki?curid=6740960 |
67415801 | Jason P. Miller | American mathematician (born 1983)
Jason Peter Miller (born November 23, 1983) is an American mathematician, specializing in probability theory.
After graduating from Okemos High School, Miller matriculated in 2002 at the University of Michigan, where he graduated in 2006 with a B.S. with joint majors in mathematics, computer science, and economics. In 2006 he became a graduate student in mathematics at Stanford University. In 2011 he graduated there with a PhD supervised by Amir Dembo with the dissertation "Limit theorems for Ginzburg–Landau formula_0 random surfaces". Miller was a summer intern in 2009 at Microsoft Research and in 2010 at D.E. Shaw & Co. He was a postdoctoral researcher from September 2010 to July 2012 at Microsoft and from July 2012 to July 2015 (as a Schramm Fellow and an NSF Fellow) at MIT's department of mathematics, where he worked with Scott Sheffield. In 2015 Miller became a reader at Trinity College, Cambridge and in the University of Cambridge's Statistics Laboratory.
His research deals with many aspects of probability theory, including "stochastic interface models (random surfaces and SLE), random walk, mixing times for Markov chains, and interacting particle systems."
With Scott Sheffield, he did research on the geometry of "d"-dimensional Gaussian free fields (GFFs), also called (Euclidean bosonic) massless free fields, which are "d"-dimensional analogs of Brownian motion. The two mathematicians introduced an "imaginary geometry" which made it possible to integrate the Schramm–Loewner evolution in many GFF fields. Miller and Sheffield also proved that two models of measure-endowed random surfaces, namely Liouville quantum gravity and the Brownian map, are equivalent. (The Liouville quantum gravity model was introduced by Alexander Markovich Polyakov.)
Miller won the Rollo Davidson Prize in 2015, the Whitehead Prize in 2016, the Clay Research Award in 2017 (with Scott Sheffield), and the Doeblin Prize in 2018. He was an invited speaker with talk "Liouville quantum gravity as a metric space and a scaling limit" at the International Congress of Mathematicians in 2018 in Rio de Janeiro. He was awarded the Leonard Eisenbud Prize for Mathematics and Physics of the AMS in 2023 jointly with Scott Sheffield. In 2023 he received the Fermat Prize. | [
{
"math_id": 0,
"text": "\\nabla \\varphi"
}
]
| https://en.wikipedia.org/wiki?curid=67415801 |
67416519 | Ultrafilter on a set | Maximal proper filter
In the mathematical field of set theory, an ultrafilter on a set formula_0 is a "maximal filter" on the set formula_1 In other words, it is a collection of subsets of formula_0 that satisfies the definition of a filter on formula_0 and that is maximal with respect to inclusion, in the sense that there does not exist a strictly larger collection of subsets of formula_0 that is also a filter. (In the above, by definition a filter on a set does not contain the empty set.) Equivalently, an ultrafilter on the set formula_0 can also be characterized as a filter on formula_0 with the property that for every subset formula_2 of formula_0 either formula_2 or its complement formula_3 belongs to the ultrafilter.
Ultrafilters on sets are an important special instance of ultrafilters on partially ordered sets, where the partially ordered set consists of the power set formula_4 and the partial order is subset inclusion formula_5 This article deals specifically with ultrafilters on a set and does not cover the more general notion.
There are two types of ultrafilter on a set. A principal ultrafilter on formula_0 is the collection of all subsets of formula_0 that contain a fixed element formula_6. The ultrafilters that are not principal are the free ultrafilters. The existence of free ultrafilters on any infinite set is implied by the ultrafilter lemma, which can be proven in ZFC. On the other hand, there exist models of ZF where every ultrafilter on a set is principal.
Ultrafilters have many applications in set theory, model theory, and topology. Usually, only free ultrafilters lead to non-trivial constructions. For example, an ultraproduct modulo a principal ultrafilter is always isomorphic to one of the factors, while an ultraproduct modulo a free ultrafilter usually has a more complex structure.
Definitions.
Given an arbitrary set formula_7 an ultrafilter on formula_0 is a non-empty family formula_8 of subsets of formula_0 such that:
1. Proper or non-degenerate: the empty set is not an element of formula_9
2. Upward closed in formula_0: if formula_10 and if formula_11 is any superset of formula_2 (that is, if formula_12) then formula_13
3. π-system: if formula_2 and formula_14 are elements of formula_8 then so is their intersection formula_15
4. If formula_16 then either formula_2 or its complement formula_17 belongs to formula_9
Properties (1), (2), and (3) are the defining properties of a filter on formula_1 Some authors do not include non-degeneracy (which is property (1) above) in their definition of "filter". However, the definition of "ultrafilter" (and also of "prefilter" and "filter subbase") always includes non-degeneracy as a defining condition. This article requires that all filters be proper although a filter might be described as "proper" for emphasis.
A filter subbase is a non-empty family of sets that has the finite intersection property (i.e. all finite intersections are non-empty). Equivalently, a filter subbase is a non-empty family of sets that is contained in some (proper) filter. The smallest (relative to formula_18) filter containing a given filter subbase is said to be generated by the filter subbase.
The upward closure in formula_0 of a family of sets formula_19 is the set
formula_20
A <templatestyles src="Template:Visible anchor/styles.css" />prefilter or <templatestyles src="Template:Visible anchor/styles.css" />filter base is a non-empty and proper (i.e. formula_21) family of sets formula_19 that is downward directed, which means that if formula_22 then there exists some formula_23 such that formula_24 Equivalently, a prefilter is any family of sets formula_19 whose upward closure formula_25 is a filter, in which case this filter is called the filter generated by formula_19 and formula_19 is said to be a filter base for formula_26
The dual in formula_0 of a family of sets formula_19 is the set formula_27 For example, the dual of the power set formula_4 is itself: formula_28
A family of sets is a proper filter on formula_0 if and only if its dual is a proper ideal on formula_0 ("proper" means not equal to the power set).
Generalization to ultra prefilters.
A family formula_29 of subsets of formula_0 is called <templatestyles src="Template:Visible anchor/styles.css" />ultra if formula_30 and any of the following equivalent conditions are satisfied:
A filter subbase that is ultra is necessarily a prefilter.
The ultra property can now be used to define both ultrafilters and ultra prefilters:
An <templatestyles src="Template:Visible anchor/styles.css" />ultra prefilter is a prefilter that is ultra. Equivalently, it is a filter subbase that is ultra.
An <templatestyles src="Template:Visible anchor/styles.css" />ultrafilter on formula_0 is a (proper) filter on formula_0 that is ultra. Equivalently, it is any filter on formula_0 that is generated by an ultra prefilter.
Ultra prefilters as maximal prefilters
To characterize ultra prefilters in terms of "maximality," the following relation is needed.
Given two families of sets formula_34 and formula_35 the family formula_34 is said to be coarser than formula_35 and formula_36 is finer than and subordinate to formula_37 written formula_38 or "N" ⊢ "M", if for every formula_39 there is some formula_40 such that formula_41 The families formula_34 and formula_36 are called equivalent if formula_38 and formula_42 The families formula_34 and formula_36 are comparable if one of these sets is finer than the other.
The subordination relationship, i.e. formula_43 is a preorder so the above definition of "equivalent" does form an equivalence relation.
If formula_44 then formula_38 but the converse does not hold in general.
However, if formula_36 is upward closed, such as a filter, then formula_38 if and only if formula_45
Every prefilter is equivalent to the filter that it generates. This shows that it is possible for filters to be equivalent to sets that are not filters.
If two families of sets formula_34 and formula_36 are equivalent then either both formula_34 and formula_36 are ultra (resp. prefilters, filter subbases) or otherwise neither one of them is ultra (resp. a prefilter, a filter subbase).
In particular, if a filter subbase is not also a prefilter, then it is not equivalent to the filter or prefilter that it generates. If formula_34 and formula_36 are both filters on formula_0 then formula_34 and formula_36 are equivalent if and only if formula_46 If a proper filter (resp. ultrafilter) is equivalent to a family of sets formula_34 then formula_34 is necessarily a prefilter (resp. ultra prefilter).
Using the following characterization, it is possible to define prefilters (resp. ultra prefilters) using only the concept of filters (resp. ultrafilters) and subordination:
An arbitrary family of sets is a prefilter if and only it is equivalent to a (proper) filter.
An arbitrary family of sets is an ultra prefilter if and only it is equivalent to an ultrafilter.
A <templatestyles src="Template:Visible anchor/styles.css" />maximal prefilter on formula_0 is a prefilter formula_47 that satisfies any of the following equivalent conditions:
Characterizations.
There are no ultrafilters on the empty set, so it is henceforth assumed that formula_0 is nonempty.
A filter subbase formula_8 on formula_0 is an ultrafilter on formula_0 if and only if any of the following equivalent conditions hold:
A (proper) filter formula_8 on formula_0 is an ultrafilter on formula_0 if and only if any of the following equivalent conditions hold:
Grills and filter-grills.
If formula_49 then its grill on formula_0 is the family
formula_50
where formula_51 may be written if formula_0 is clear from context.
For example, formula_52 and if formula_53 then formula_54
If formula_55 then formula_56 and moreover, if formula_33 is a filter subbase then formula_57
The grill formula_58 is upward closed in formula_0 if and only if formula_59 which will henceforth be assumed. Moreover, formula_60 so that formula_33 is upward closed in formula_0 if and only if formula_61
The grill of a filter on formula_0 is called a filter-grill on formula_1 For any formula_62 formula_33 is a filter-grill on formula_0 if and only if (1) formula_33 is upward closed in formula_0 and (2) for all sets formula_63 and formula_64 if formula_65 then formula_66 or formula_67 The grill operation formula_68 induces a bijection
formula_69
whose inverse is also given by formula_70 If formula_71 then formula_72 is a filter-grill on formula_0 if and only if formula_73 or equivalently, if and only if formula_72 is an ultrafilter on formula_1 That is, a filter on formula_0 is a filter-grill if and only if it is ultra. For any non-empty formula_74 formula_72 is both a filter on formula_0 and a filter-grill on formula_0 if and only if (1) formula_75 and (2) for all formula_48 the following equivalences hold:
formula_76 if and only if formula_77 if and only if formula_78
Free or principal.
If formula_19 is any non-empty family of sets then the Kernel of formula_19 is the intersection of all sets in formula_79
formula_80
A non-empty family of sets formula_19 is called:
1. free if formula_81 and fixed if formula_82 in which case formula_19 is said to be fixed by any point of formula_85
2. principal if formula_83
3. principal at a point if formula_86 is a singleton set.
If a family of sets formula_19 is fixed then formula_19 is ultra if and only if some element of formula_19 is a singleton set, in which case formula_19 will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter formula_19 is ultra if and only if formula_85 is a singleton set. A singleton set is ultra if and only if its sole element is also a singleton set.
The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point.
<templatestyles src="Math_theorem/styles.css" />
Proposition — If formula_8 is an ultrafilter on formula_0 then the following are equivalent:
Every filter on formula_0 that is principal at a single point is an ultrafilter, and if in addition formula_0 is finite, then there are no ultrafilters on formula_0 other than these. In particular, if a set formula_0 has finite cardinality formula_88 then there are exactly formula_89 ultrafilters on formula_0 and those are the ultrafilters generated by each singleton subset of formula_1 Consequently, free ultrafilters can only exist on an infinite set.
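The finite case can be checked exhaustively. The brute-force sketch below (an illustration added here, not a standard construction) verifies for a three-element set that the ultrafilters are exactly the three principal ones.

from itertools import combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(4) for c in combinations(sorted(X), r)]

def is_ultrafilter(F):
    if not F or frozenset() in F:                  # non-empty and proper
        return False
    if any(A <= B and B not in F for A in F for B in subsets):
        return False                               # upward closed
    if any(A & B not in F for A in F for B in F):
        return False                               # closed under intersections
    return all((A in F) != (X - A in F) for A in subsets)  # exactly one of A, X \ A

families = (frozenset(c) for r in range(len(subsets) + 1)
            for c in combinations(subsets, r))
ultras = [F for F in families if is_ultrafilter(F)]

assert len(ultras) == 3                            # one ultrafilter per point of X
assert all(any(frozenset({x}) in F for x in X) for F in ultras)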
Examples, properties, and sufficient conditions.
If formula_0 is an infinite set then there are as many ultrafilters over formula_0 as there are families of subsets of formula_90 explicitly, if formula_0 has infinite cardinality formula_91 then the set of ultrafilters over formula_0 has the same cardinality as formula_92 that cardinality being formula_93
If formula_8 and formula_31 are families of sets such that formula_8 is ultra, formula_94 and formula_95 then formula_31 is necessarily ultra.
A filter subbase formula_8 that is not a prefilter cannot be ultra; but it is nevertheless still possible for the prefilter and filter generated by formula_8 to be ultra.
Suppose formula_47 is ultra and formula_96 is a set.
The trace formula_97 is ultra if and only if it does not contain the empty set.
Furthermore, at least one of the sets formula_98 and formula_99 will be ultra (this result extends to any finite partition of formula_0).
If formula_100 are filters on formula_7 formula_8 is an ultrafilter on formula_7 and formula_101 then there is some formula_102 that satisfies formula_103
This result is not necessarily true for an infinite family of filters.
The image under a map formula_104 of an ultra set formula_47 is again ultra and if formula_8 is an ultra prefilter then so is formula_105 The property of being ultra is preserved under bijections. However, the preimage of an ultrafilter is not necessarily ultra, not even if the map is surjective. For example, if formula_0 has more than one point and if the range of formula_104 consists of a single point formula_106 then formula_106 is an ultra prefilter on formula_96 but its preimage is not ultra. Alternatively, if formula_8 is a principal filter generated by a point in formula_107 then the preimage of formula_8 contains the empty set and so is not ultra.
The elementary filter induced by an infinite sequence, all of whose points are distinct, is not an ultrafilter. If formula_108 then formula_109 denotes the set consisting of all subsets of formula_0 having cardinality formula_110 and if formula_0 contains at least formula_111 (formula_112) distinct points, then formula_109 is ultra but it is not contained in any prefilter. This example generalizes to any integer formula_113 and also to formula_114 if formula_0 contains more than one element. Ultra sets that are not also prefilters are rarely used.
For every formula_115 and every formula_116 let formula_117 If formula_118 is an ultrafilter on formula_0 then the set of all formula_115 such that formula_119 is an ultrafilter on formula_120
Monad structure.
The functor associating to any set formula_0 the set formula_121 of all ultrafilters on formula_0 forms a monad called the <templatestyles src="Template:Visible anchor/styles.css" />ultrafilter monad. The unit map
formula_122
sends any element formula_6 to the principal ultrafilter given by formula_87
This ultrafilter monad is the codensity monad of the inclusion of the category of finite sets into the category of all sets, which gives a conceptual explanation of this monad.
Similarly, the ultraproduct monad is the codensity monad of the inclusion of the category of finite families of sets into the category of all families of sets. So in this sense, ultraproducts are categorically inevitable.
The ultrafilter lemma.
The ultrafilter lemma was first proved by Alfred Tarski in 1930.
<templatestyles src="Math_theorem/styles.css" />
The <templatestyles src="Template:Visible anchor/styles.css" />ultrafilter lemma/principle/theorem — Every proper filter on a set formula_0 is contained in some ultrafilter on formula_1
The ultrafilter lemma is equivalent to each of the following statements:
A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it.
The following results can be proven using the ultrafilter lemma.
A free ultrafilter exists on a set formula_0 if and only if formula_0 is infinite. Every proper filter is equal to the intersection of all ultrafilters containing it. Since there are filters that are not ultra, this shows that the intersection of a family of ultrafilters need not be ultra. A family of sets formula_123 can be extended to a free ultrafilter if and only if the intersection of any finite family of elements of formula_124 is infinite.
Relationships to other statements under ZF.
Throughout this section, ZF refers to Zermelo–Fraenkel set theory and ZFC refers to ZF with the Axiom of Choice (AC). The ultrafilter lemma is independent of ZF. That is, there exist models in which the axioms of ZF hold but the ultrafilter lemma does not. There also exist models of ZF in which every ultrafilter is necessarily principal.
Every filter that contains a singleton set is necessarily an ultrafilter and, given formula_125, the definition of the discrete ultrafilter formula_126 does not require more than ZF.
If formula_0 is finite then every ultrafilter is a discrete filter at a point; consequently, free ultrafilters can only exist on infinite sets.
In particular, if formula_0 is finite then the ultrafilter lemma can be proven from the axioms ZF.
The existence of free ultrafilters on infinite sets can be proven if the axiom of choice is assumed.
More generally, the ultrafilter lemma can be proven by using the axiom of choice, which in brief states that any Cartesian product of non-empty sets is non-empty. Under ZF, the axiom of choice is, in particular, equivalent to (a) Zorn's lemma, (b) Tychonoff's theorem, (c) the weak form of the vector basis theorem (which states that every vector space has a basis), (d) the strong form of the vector basis theorem, and other statements.
However, the ultrafilter lemma is strictly weaker than the axiom of choice.
While free ultrafilters can be proven to exist, it is not possible to construct an explicit example of a free ultrafilter (using only ZF and the ultrafilter lemma); that is, free ultrafilters are intangible.
Alfred Tarski proved that under ZFC, the cardinality of the set of all free ultrafilters on an infinite set formula_0 is equal to the cardinality of formula_127 where formula_4 denotes the power set of formula_1
Other authors attribute this discovery to Bedřich Pospíšil (following a combinatorial argument from Fichtenholz and Kantorovitch, improved by Hausdorff).
Under ZF, the axiom of choice can be used to prove both the ultrafilter lemma and the Krein–Milman theorem; conversely, under ZF, the ultrafilter lemma together with the Krein–Milman theorem can prove the axiom of choice.
Statements that cannot be deduced.
The ultrafilter lemma is a relatively weak axiom. For example, each of the statements in the following list cannot be deduced from ZF together with only the ultrafilter lemma:
Equivalent statements.
Under ZF, the ultrafilter lemma is equivalent to each of the following statements:
Completeness.
The completeness of an ultrafilter formula_8 on a powerset is the smallest cardinal κ such that there are κ elements of formula_8 whose intersection is not in formula_9 The definition of an ultrafilter implies that the completeness of any powerset ultrafilter is at least formula_128. An ultrafilter whose completeness is greater than formula_128—that is, the intersection of any countable collection of elements of formula_8 is still in formula_8—is called countably complete or σ-complete.
The completeness of a countably complete nonprincipal ultrafilter on a powerset is always a measurable cardinal.
<templatestyles src="Template:Visible anchor/styles.css" />Ordering on ultrafilters.
The <templatestyles src="Template:Visible anchor/styles.css" />Rudin–Keisler ordering (named after Mary Ellen Rudin and Howard Jerome Keisler) is a preorder on the class of powerset ultrafilters defined as follows: if formula_8 is an ultrafilter on formula_129 and formula_32 an ultrafilter on formula_130 then formula_131 if there exists a function formula_104 such that
formula_132 if and only if formula_133
for every subset formula_134
Ultrafilters formula_8 and formula_32 are called <templatestyles src="Template:Visible anchor/styles.css" />Rudin–Keisler equivalent, denoted "U" ≡RK "V", if there exist sets formula_10 and formula_135 and a bijection formula_136 that satisfies the condition above. (If formula_0 and formula_96 have the same cardinality, the definition can be simplified by fixing formula_137 formula_138)
It is known that ≡RK is the kernel of ≤RK, i.e., that "U" ≡RK "V" if and only if formula_139 and formula_140
Ultrafilters on ℘(ω).
There are several special properties that an ultrafilter on formula_141 where formula_142 is the set of natural numbers, may possess, which prove useful in various areas of set theory and topology.
1. A non-principal ultrafilter formula_8 is called a P-point (or weakly selective) if for every partition formula_143 of formula_142 such that for all formula_144 formula_145 there exists some formula_10 such that formula_146 is a finite set for each formula_147
2. A non-principal ultrafilter formula_8 is called Ramsey (or selective) if for every partition as above there exists some formula_10 such that formula_146 is a singleton set for each formula_147
It is a trivial observation that all Ramsey ultrafilters are P-points. Walter Rudin proved that the continuum hypothesis implies the existence of Ramsey ultrafilters.
In fact, many hypotheses imply the existence of Ramsey ultrafilters, including Martin's axiom. Saharon Shelah later showed that it is consistent that there are no P-point ultrafilters. Therefore, the existence of these types of ultrafilters is independent of ZFC.
P-points are so called because they are topological P-points in the usual topology of the space βω \ ω of non-principal ultrafilters. The name Ramsey comes from Ramsey's theorem. To see why, one can prove that an ultrafilter is Ramsey if and only if for every 2-coloring of formula_148 there exists an element of the ultrafilter that has a homogeneous color.
An ultrafilter on formula_149 is Ramsey if and only if it is minimal in the Rudin–Keisler ordering of non-principal powerset ultrafilters.
Notes.
<templatestyles src="Reflist/styles.css" />
Proofs
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X."
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "X\\setminus A"
},
{
"math_id": 4,
"text": "\\wp(X)"
},
{
"math_id": 5,
"text": "\\,\\subseteq."
},
{
"math_id": 6,
"text": "x \\in X"
},
{
"math_id": 7,
"text": "X,"
},
{
"math_id": 8,
"text": "U"
},
{
"math_id": 9,
"text": "U."
},
{
"math_id": 10,
"text": "A \\in U"
},
{
"math_id": 11,
"text": "B \\subseteq X"
},
{
"math_id": 12,
"text": "A \\subseteq B \\subseteq X"
},
{
"math_id": 13,
"text": "B \\in U."
},
{
"math_id": 14,
"text": "B"
},
{
"math_id": 15,
"text": "A \\cap B."
},
{
"math_id": 16,
"text": "A \\subseteq X"
},
{
"math_id": 17,
"text": "X \\setminus A"
},
{
"math_id": 18,
"text": "\\subseteq"
},
{
"math_id": 19,
"text": "P"
},
{
"math_id": 20,
"text": "P^{\\uparrow X} := \\{S : A \\subseteq S \\subseteq X \\text{ for some } A \\in P\\}."
},
{
"math_id": 21,
"text": "\\varnothing \\not\\in P"
},
{
"math_id": 22,
"text": "B, C \\in P"
},
{
"math_id": 23,
"text": "A \\in P"
},
{
"math_id": 24,
"text": "A \\subseteq B \\cap C."
},
{
"math_id": 25,
"text": "P^{\\uparrow X}"
},
{
"math_id": 26,
"text": "P^{\\uparrow X}."
},
{
"math_id": 27,
"text": "X \\setminus P := \\{X \\setminus B : B \\in P\\}."
},
{
"math_id": 28,
"text": "X \\setminus \\wp(X) = \\wp(X)."
},
{
"math_id": 29,
"text": "U \\neq \\varnothing"
},
{
"math_id": 30,
"text": "\\varnothing \\not\\in U"
},
{
"math_id": 31,
"text": "S"
},
{
"math_id": 32,
"text": "V"
},
{
"math_id": 33,
"text": "\\mathcal{B}"
},
{
"math_id": 34,
"text": "M"
},
{
"math_id": 35,
"text": "N,"
},
{
"math_id": 36,
"text": "N"
},
{
"math_id": 37,
"text": "M,"
},
{
"math_id": 38,
"text": "M \\leq N"
},
{
"math_id": 39,
"text": "C \\in M,"
},
{
"math_id": 40,
"text": "F \\in N"
},
{
"math_id": 41,
"text": "F \\subseteq C."
},
{
"math_id": 42,
"text": "N \\leq M."
},
{
"math_id": 43,
"text": "\\,\\geq,\\,"
},
{
"math_id": 44,
"text": "M \\subseteq N"
},
{
"math_id": 45,
"text": "M \\subseteq N."
},
{
"math_id": 46,
"text": "M = N."
},
{
"math_id": 47,
"text": "U \\subseteq \\wp(X)"
},
{
"math_id": 48,
"text": "R, S \\subseteq X,"
},
{
"math_id": 49,
"text": "\\mathcal{B} \\subseteq \\wp(X)"
},
{
"math_id": 50,
"text": "\\mathcal{B}^{\\# X} := \\{S \\subseteq X ~:~ S \\cap B \\neq \\varnothing \\text{ for all } B \\in \\mathcal{B}\\}"
},
{
"math_id": 51,
"text": "\\mathcal{B}^{\\#}"
},
{
"math_id": 52,
"text": "\\varnothing^{\\#} = \\wp(X)"
},
{
"math_id": 53,
"text": "\\varnothing \\in \\mathcal{B}"
},
{
"math_id": 54,
"text": "\\mathcal{B}^{\\#} = \\varnothing."
},
{
"math_id": 55,
"text": "\\mathcal{A} \\subseteq \\mathcal{B}"
},
{
"math_id": 56,
"text": "\\mathcal{B}^{\\#} \\subseteq \\mathcal{A}^{\\#}"
},
{
"math_id": 57,
"text": "\\mathcal{B} \\subseteq \\mathcal{B}^{\\#}."
},
{
"math_id": 58,
"text": "\\mathcal{B}^{\\# X}"
},
{
"math_id": 59,
"text": "\\varnothing \\not\\in \\mathcal{B},"
},
{
"math_id": 60,
"text": "\\mathcal{B}^{\\#\\#} = \\mathcal{B}^{\\uparrow X}"
},
{
"math_id": 61,
"text": "\\mathcal{B}^{\\#\\#} = \\mathcal{B}."
},
{
"math_id": 62,
"text": "\\varnothing \\neq \\mathcal{B} \\subseteq \\wp(X),"
},
{
"math_id": 63,
"text": "R"
},
{
"math_id": 64,
"text": "S,"
},
{
"math_id": 65,
"text": "R \\cup S \\in \\mathcal{B}"
},
{
"math_id": 66,
"text": "R \\in \\mathcal{B}"
},
{
"math_id": 67,
"text": "S \\in \\mathcal{B}."
},
{
"math_id": 68,
"text": "\\mathcal{F} \\mapsto \\mathcal{F}^{\\# X}"
},
{
"math_id": 69,
"text": "{\\bull}^{\\# X} ~:~ \\operatorname{Filters}(X) \\to \\operatorname{FilterGrills}(X)"
},
{
"math_id": 70,
"text": "\\mathcal{F} \\mapsto \\mathcal{F}^{\\# X}."
},
{
"math_id": 71,
"text": "\\mathcal{F} \\in \\operatorname{Filters}(X)"
},
{
"math_id": 72,
"text": "\\mathcal{F}"
},
{
"math_id": 73,
"text": "\\mathcal{F} = \\mathcal{F}^{\\# X},"
},
{
"math_id": 74,
"text": "\\mathcal{F} \\subseteq \\wp(X),"
},
{
"math_id": 75,
"text": "\\varnothing \\not\\in \\mathcal{F}"
},
{
"math_id": 76,
"text": "R \\cup S \\in \\mathcal{F}"
},
{
"math_id": 77,
"text": "R, S \\in \\mathcal{F}"
},
{
"math_id": 78,
"text": "R \\cap S \\in \\mathcal{F}."
},
{
"math_id": 79,
"text": "P:"
},
{
"math_id": 80,
"text": "\\operatorname{ker} P := \\bigcap_{B \\in P} B."
},
{
"math_id": 81,
"text": "\\operatorname{ker} P = \\varnothing"
},
{
"math_id": 82,
"text": "\\operatorname{ker} P \\neq \\varnothing"
},
{
"math_id": 83,
"text": "\\operatorname{ker} P \\in P."
},
{
"math_id": 84,
"text": "\\operatorname{ker} P \\in P"
},
{
"math_id": 85,
"text": "\\operatorname{ker} P"
},
{
"math_id": 86,
"text": "\\operatorname{ker} P = \\{x\\}"
},
{
"math_id": 87,
"text": "x."
},
{
"math_id": 88,
"text": "n < \\infty,"
},
{
"math_id": 89,
"text": "n"
},
{
"math_id": 90,
"text": "X;"
},
{
"math_id": 91,
"text": "\\kappa"
},
{
"math_id": 92,
"text": "\\wp(\\wp(X));"
},
{
"math_id": 93,
"text": "2^{2^{\\kappa}}."
},
{
"math_id": 94,
"text": "\\varnothing \\not\\in S,"
},
{
"math_id": 95,
"text": "U \\leq S,"
},
{
"math_id": 96,
"text": "Y"
},
{
"math_id": 97,
"text": "U\\vert_Y := \\{B \\cap Y : B \\in U\\}"
},
{
"math_id": 98,
"text": "U\\vert_Y \\setminus \\{\\varnothing\\}"
},
{
"math_id": 99,
"text": "U\\vert_{X \\setminus Y} \\setminus \\{\\varnothing\\}"
},
{
"math_id": 100,
"text": "F_1, \\ldots, F_n"
},
{
"math_id": 101,
"text": "F_1 \\cap \\cdots \\cap F_n \\leq U,"
},
{
"math_id": 102,
"text": "F_i"
},
{
"math_id": 103,
"text": "F_i \\leq U."
},
{
"math_id": 104,
"text": "f : X \\to Y"
},
{
"math_id": 105,
"text": "f(U)."
},
{
"math_id": 106,
"text": "\\{ y \\}"
},
{
"math_id": 107,
"text": "Y \\setminus f(X)"
},
{
"math_id": 108,
"text": "n = 2,"
},
{
"math_id": 109,
"text": "U_n"
},
{
"math_id": 110,
"text": "n,"
},
{
"math_id": 111,
"text": "2 n - 1"
},
{
"math_id": 112,
"text": "=3"
},
{
"math_id": 113,
"text": "n > 1"
},
{
"math_id": 114,
"text": "n = 1"
},
{
"math_id": 115,
"text": "S \\subseteq X \\times X"
},
{
"math_id": 116,
"text": "a \\in X,"
},
{
"math_id": 117,
"text": "S\\big\\vert_{\\{a\\} \\times X} := \\{y \\in X ~:~ (a, y) \\in S\\}."
},
{
"math_id": 118,
"text": "\\mathcal{U}"
},
{
"math_id": 119,
"text": "\\left\\{a \\in X ~:~ S\\big\\vert_{\\{a\\} \\times X} \\in \\mathcal{U}\\right\\} \\in \\mathcal{U}"
},
{
"math_id": 120,
"text": "X \\times X."
},
{
"math_id": 121,
"text": "U(X)"
},
{
"math_id": 122,
"text": "X \\to U(X)"
},
{
"math_id": 123,
"text": "\\mathbb{F} \\neq \\varnothing"
},
{
"math_id": 124,
"text": "\\mathbb{F}"
},
{
"math_id": 125,
"text": "x \\in X,"
},
{
"math_id": 126,
"text": "\\{S \\subseteq X : x \\in S\\}"
},
{
"math_id": 127,
"text": "\\wp(\\wp(X)),"
},
{
"math_id": 128,
"text": "\\aleph_0"
},
{
"math_id": 129,
"text": "\\wp(X),"
},
{
"math_id": 130,
"text": "\\wp(Y),"
},
{
"math_id": 131,
"text": "V \\leq {}_{RK} U"
},
{
"math_id": 132,
"text": "C \\in V"
},
{
"math_id": 133,
"text": "f^{-1}[C] \\in U"
},
{
"math_id": 134,
"text": "C \\subseteq Y."
},
{
"math_id": 135,
"text": "B \\in V"
},
{
"math_id": 136,
"text": "f : A \\to B"
},
{
"math_id": 137,
"text": "A = X,"
},
{
"math_id": 138,
"text": "B = Y."
},
{
"math_id": 139,
"text": "U \\leq {}_{RK} V"
},
{
"math_id": 140,
"text": "V \\leq {}_{RK} U."
},
{
"math_id": 141,
"text": "\\wp(\\omega),"
},
{
"math_id": 142,
"text": "\\omega"
},
{
"math_id": 143,
"text": "\\left\\{ C_n : n < \\omega \\right\\}"
},
{
"math_id": 144,
"text": "n < \\omega,"
},
{
"math_id": 145,
"text": "C_n \\not\\in U,"
},
{
"math_id": 146,
"text": "A \\cap C_n"
},
{
"math_id": 147,
"text": "n."
},
{
"math_id": 148,
"text": "[\\omega]^2"
},
{
"math_id": 149,
"text": "\\wp(\\omega)"
}
]
| https://en.wikipedia.org/wiki?curid=67416519 |
67417478 | Mie potential | The Mie potential is an interaction potential describing the interactions between particles on the atomic level. It is mostly used for describing intermolecular interactions, but at times also for modeling intramolecular interaction, i.e. bonds.
The Mie potential is named after the German physicist Gustav Mie; yet the history of intermolecular potentials is more complicated. The Mie potential is the generalized case of the Lennard-Jones (LJ) potential, which is perhaps the most widely used pair potential.
The Mie potential formula_0 is a function of formula_1, the distance between two particles, and is written as
formula_2
with
formula_3 .
The Lennard-Jones potential corresponds to the special case where formula_4 and formula_5 in Eq. (1).
In Eq. (1), formula_6 is the dispersion energy, and formula_7 indicates the distance at which formula_8, which is sometimes called the "collision radius." The parameter formula_9 is generally indicative of the size of the particles involved in the collision. The parameters formula_10 and formula_11 characterize the shape of the potential: formula_10 describes the character of the repulsion and formula_11 describes the character of the attraction.
The attractive exponent formula_5 is physically justified by the London dispersion force, whereas no justification for a certain value for the repulsive exponent is known. The repulsive steepness parameter formula_10 has a significant influence on the modeling of thermodynamic derivative properties, e.g. the compressibility and the speed of sound. Therefore, the Mie potential is a more flexible intermolecular potential than the simpler Lennard-Jones potential.
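A minimal sketch of Eq. (1) follows (parameter values are illustrative only); note that for formula_4 and formula_5 the prefactor evaluates to the familiar Lennard-Jones constant 4.

def mie_potential(r, eps, sigma, n=12, m=6):
    """Mie pair potential, Eq. (1); n = 12, m = 6 recovers Lennard-Jones."""
    C = (n / (n - m)) * (n / m) ** (m / (n - m))
    return C * eps * ((sigma / r) ** n - (sigma / r) ** m)

# Same attraction (m = 6) but a steeper repulsive wall (n = 20):
v_lj = mie_potential(1.2, eps=1.0, sigma=1.0)
v_20 = mie_potential(1.2, eps=1.0, sigma=1.0, n=20)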
The Mie potential is used today in many force fields in molecular modeling. Typically, the attractive exponent is chosen to be formula_5, whereas the repulsive exponent is used as an adjustable parameter during the model fitting.
Thermophysical properties of the Mie substance.
Just as Lennard-Jonesium is the theoretical substance defined by particles interacting through the Lennard-Jones potential, a class of Mie substances exists, defined as single-site spherical particles interacting by a given Mie potential. Since an infinite number of Mie potentials exist (using different "n, m" parameters), equally many Mie substances exist, as opposed to Lennard-Jonesium, which is uniquely defined. For practical applications in molecular modelling, the Mie substances are mostly relevant for modelling small molecules, e.g. noble gases, and for coarse grain modelling, where larger molecules, or even a collection of molecules, are simplified in their structure and described by a single Mie particle. However, more complex molecules, such as long-chained alkanes, have successfully been modelled as homogeneous chains of Mie particles. As such, the Mie potential is useful for modelling far more complex systems than those whose behaviour is accurately captured by "free" Mie particles.
Thermophysical properties of both the Mie fluid and chain molecules built from Mie particles have been the subject of numerous papers in recent years. Investigated properties include virial coefficients as well as interfacial, vapor-liquid equilibrium, and transport properties. Based on such studies, the relation between the shape of the interaction potential (described by "n" and "m") and the thermophysical properties has been elucidated.
Also, many theoretical (analytical) models have been developed for describing thermophysical properties of Mie substances and chain molecules formed from Mie particles, such as several thermodynamic equations of state and models for transport properties.
It has been observed that many combinations of different (formula_12) can yield similar phase behaviour, and that this degeneracy is captured by the parameter
formula_13,
where fluids with different exponents, but the same formula_14-parameter, will exhibit the same phase behaviour.
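The parameter is straightforward to evaluate; a minimal sketch (our addition) for the Lennard-Jones exponents gives formula_14 = 8/9 ≈ 0.89.

def mie_alpha(n, m):
    """Degeneracy parameter alpha for Mie exponents (n, m), with n > m > 3."""
    C = (n / (n - m)) * (n / m) ** (m / (n - m))
    return C * (1.0 / (m - 3) - 1.0 / (n - 3))

print(mie_alpha(12, 6))   # 8/9, approximately 0.889, for the Lennard-Jones case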
Mie potential used in molecular modeling.
Due to its flexibility, the Mie potential is a popular choice for modelling real fluids in force fields. It is used as the interaction potential in many molecular models today. Several (reliable) united atom transferable force fields are based on the Mie potential, such as that developed by Potoff and co-workers. The Mie potential has also been used for coarse-grain modeling. Electronic tools are available for building Mie force field models for both united atom force fields and transferable force fields. The Mie potential has also been used for modeling small spherical molecules (i.e. directly the Mie substance - see above); such molecular models have only the parameters of the Mie potential itself.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V(r)"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "V(r) = C \\, \\varepsilon \\left[ \\left(\\frac{\\sigma}{r} \\right)^{n}- \\left( \\frac{\\sigma}{r}\\right)^m \\right] ,~~~~~~ (1) "
},
{
"math_id": 3,
"text": "C = \\frac{n}{n-m} \\left( \\frac{n}{m}\\right)^ {\\frac{m}{n-m}} "
},
{
"math_id": 4,
"text": "n=12"
},
{
"math_id": 5,
"text": "m=6"
},
{
"math_id": 6,
"text": "\\varepsilon"
},
{
"math_id": 7,
"text": "\\sigma"
},
{
"math_id": 8,
"text": "V = 0 "
},
{
"math_id": 9,
"text": "\\sigma"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "m"
},
{
"math_id": 12,
"text": "n, m"
},
{
"math_id": 13,
"text": "\\alpha = C \\left[\\frac{1}{m - 3} - \\frac{1}{n - 3}\\right]"
},
{
"math_id": 14,
"text": "\\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=67417478 |
674274 | Vascular resistance | Force from blood vessels that affects blood flow
Vascular resistance is the resistance that must be overcome for blood to flow through the circulatory system. The resistance offered by the systemic circulation is known as the systemic vascular resistance (SVR) or may sometimes be called by the older term total peripheral resistance (TPR), while the resistance offered by the pulmonary circulation is known as the pulmonary vascular resistance (PVR). Systemic vascular resistance is used in calculations of blood pressure, blood flow, and cardiac function. Vasoconstriction (i.e., decrease in blood vessel diameter) increases SVR, whereas vasodilation (increase in diameter) decreases SVR.
Units for measuring.
Units for measuring vascular resistance are dyn·s·cm−5, pascal seconds per cubic metre (Pa·s/m3) or, for ease of deriving it by pressure (measured in mmHg) and cardiac output (measured in L/min), it can be given in mmHg·min/L. This is numerically equivalent to hybrid resistance units (HRU), also known as Wood units (in honor of Paul Wood, an early pioneer in the field), frequently used by pediatric cardiologists. The conversion between these units is:
formula_0
Calculation.
Hydraulic resistance is calculated as driving pressure divided by volumetric flow rate:
formula_1
where "R" is the vascular resistance, Δ"P" is the pressure difference across the vascular bed (the driving pressure), and "Q" is the volumetric flow rate of blood.
This is Darcy's law, the hydraulic version of Ohm's law, in which the pressure difference is analogous to the electrical voltage difference, volumetric flow is analogous to electric current flow, and vascular resistance is analogous to electrical resistance.
SVR.
The SVR can therefore be calculated in units of dyn·s·cm−5 as
formula_2
where the pressures are measured in mmHg and the cardiac output is measured in units of litres per minute (L/min). Mean arterial pressure is the cycle average of blood pressure and is commonly approximated as (2 × diastolic blood pressure + systolic blood pressure)/3 [or diastolic blood pressure + 1/3(systolic blood pressure − diastolic blood pressure)]. Mean right atrial pressure, or central venous pressure, is usually very low (normally around 4 mmHg), and as a result it is frequently disregarded.
As an example: if systolic blood pressure = 120 mmHg, diastolic blood pressure = 80 mmHg, right atrial mean pressure = 3 mmHg and cardiac output = 5 L/min, then mean arterial pressure = (2 × 80 + 120)/3 = 93.3 mmHg, and SVR = (93.3 − 3)/5 ≈ 18 Wood units, or equivalently about 1440 dyn·s·cm−5.
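A minimal Python sketch of this example calculation (the function names are illustrative, not a clinical tool) is:
<syntaxhighlight lang="python">
def mean_arterial_pressure(sbp, dbp):
    """Approximate MAP in mmHg as (2 x diastolic + systolic) / 3."""
    return (2 * dbp + sbp) / 3

def svr_wood_units(map_mmhg, ra_mmhg, co_l_min):
    """Systemic vascular resistance in Wood units (mmHg.min/L)."""
    return (map_mmhg - ra_mmhg) / co_l_min

map_mmhg = mean_arterial_pressure(120, 80)  # 93.3 mmHg
svr = svr_wood_units(map_mmhg, 3, 5)        # ~18 Wood units
print(map_mmhg, svr, svr * 80)              # x80 converts Wood units to dyn.s.cm^-5 (~1440)
</syntaxhighlight>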
It is difficult to measure or monitor SVR in most locations outside the ICU, as an invasive catheter is necessary. SVR, BP and CO are related to each other, but only BP is easily measured. In the typical bedside situation there is therefore one equation with three variables: one known (BP) and two unknown (CO and SVR). For this reason, BP is frequently used as a practical but somewhat inadequate proxy for shock or the state of blood flow.
PVR.
The PVR can be calculated similarly (in units of dyn·s·cm−5 ) as:
formula_3
where the units of measurement are the same as for SVR. The pulmonary artery wedge pressure (also called pulmonary artery occlusion pressure or PAOP) is a measurement in which one of the pulmonary arteries is occluded, and the pressure downstream from the occlusion is measured in order to approximate the left atrial pressure. Therefore, the numerator of the above equation is the pressure difference between the input to the pulmonary blood circuit (where the heart's right ventricle connects to the pulmonary trunk) and the output of the circuit (which is the input to the left atrium of the heart).
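The same sketch extends directly to PVR (the values below are example numbers, not reference values):
<syntaxhighlight lang="python">
def pvr_wood_units(mean_pa_mmhg, pawp_mmhg, co_l_min):
    """Pulmonary vascular resistance in Wood units; multiply by 80 for dyn.s.cm^-5."""
    return (mean_pa_mmhg - pawp_mmhg) / co_l_min

print(pvr_wood_units(15, 8, 5) * 80)  # (15 - 8) / 5 = 1.4 Wood units = 112 dyn.s.cm^-5
</syntaxhighlight>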
Regulation.
There are many factors that influence vascular resistance. Vascular compliance is determined by the muscle tone in the smooth muscle tissue of the tunica media and the elasticity of the elastic fibers there, but the muscle tone is subject to continual homeostatic changes by hormones and cell signaling molecules that induce vasodilation and vasoconstriction to keep blood pressure and blood flow within reference ranges.
In a first approach, based on fluid dynamics (where the flowing material is continuous and made of continuous atomic or molecular bonds, and the internal friction happens between continuous parallel layers of different velocities), factors that influence vascular resistance are represented in an adapted form of the Hagen–Poiseuille equation:
formula_4
where "R" is the resistance to flow, "L" is the length of the vessel, η is the viscosity of the blood, and "r" is the radius of the vessel.
Vessel length is generally not subject to change in the body.
In the Hagen–Poiseuille equation, the flow layers start from the wall and, by viscosity, reach each other at the central line of the vessel, following a parabolic velocity profile.
In a second, more realistic approach, derived from experimental observations of blood flow, there is, according to Thurston, a plasma release-cell layering at the walls surrounding a plugged flow. It is a fluid layer in which, at a distance δ from the wall, the viscosity η is a function of δ, written η(δ), and these surrounding layers do not meet at the vessel centre in real blood flow. Instead, there is the plugged flow, which is hyperviscous because it holds a high concentration of RBCs. Thurston incorporated this layer into the flow resistance to describe blood flow by means of a viscosity η(δ) and thickness δ of the wall layer.
The blood flow resistance law then appears as "R" adapted to the blood flow profile:
formula_5
where "R" is the resistance to blood flow, "c" is a constant coefficient of flow, "L" is the length of the vessel, η(δ) is the viscosity of blood in the wall plasma release-cell layering, "r" is the radius of the vessel, and δ is the distance in the plasma release-cell layer.
Blood resistance varies depending on blood viscosity and its plugged flow (or sheath flow since they are complementary across the vessel section) size as well, and on the size of the vessels.
Blood viscosity increases as blood is more hemoconcentrated, and decreases as blood is more dilute. The greater the viscosity of blood, the larger the resistance will be. In the body, blood viscosity increases as red blood cell concentration increases, thus more hemodilute blood will flow more readily, while more hemoconcentrated blood will flow more slowly.
Counteracting this effect, decreased viscosity in a liquid results in the potential for increased turbulence. Turbulence can be viewed from outside of the closed vascular system as increased resistance, thereby countering the ease of flow of more hemodilute blood. Turbulence, particularly in large vessels, may account for some pressure change across the vascular bed.
The major regulator of vascular resistance in the body is regulation of vessel radius. In humans, there is very little pressure change as blood flows from the aorta to the large arteries, but the small arteries and arterioles are the site of about 70% of the pressure drop, and are the main regulators of SVR. When environmental changes occur (e.g. exercise, immersion in water), neuronal and hormonal signals, including binding of norepinephrine and epinephrine to the α1 receptor on vascular smooth muscles, cause either vasoconstriction or vasodilation. Because resistance is inversely proportional to the fourth power of vessel radius, changes to arteriole diameter can result in large increases or decreases in vascular resistance.
If the resistance is inversely proportional to the fourth power of the vessel radius, the resulting drag force exerted on the vessel walls is inversely proportional to the second power of the radius. The force exerted by the blood flow on the vessel walls is, according to the Poiseuille equation, the wall shear stress. This wall shear stress is proportional to the pressure drop. The pressure drop is applied over the cross-sectional surface of the vessel, and the wall shear stress is applied on the sides of the vessel. So the total force on the wall is proportional to the pressure drop and the second power of the radius. Thus the force exerted on the vessel walls is inversely proportional to the second power of the radius.
The blood flow resistance in a vessel is mainly regulated by the vessel radius and blood viscosity, and blood viscosity itself varies with the vessel radius. According to very recent results showing a sheath flow surrounding the plug flow in a vessel, the sheath flow size is not negligible in the real blood flow velocity profile within a vessel. The velocity profile is directly linked to flow resistance in a vessel. The viscosity variations, according to Thurston, are also balanced by the size of the sheath flow around the plug flow. The secondary regulators of vascular resistance, after vessel radius, are the sheath flow size and its viscosity.
Thurston also shows that the resistance "R" is constant, since, for a defined vessel radius, the value η(δ)/δ is constant in the sheath flow.
Vascular resistance depends on blood flow, which is divided into two adjacent parts: a plug flow, highly concentrated in RBCs, and a sheath flow, a more fluid plasma release-cell layering. Both coexist and have different viscosities, sizes and velocity profiles in the vascular system.
Combining Thurston's work with the Hagen-Poiseuille equation shows that blood flow exerts a force on vessel walls which is inversely proportional to the radius and the sheath flow thickness. It is proportional to the mass flow rate and blood viscosity.
formula_6
where "F" is the force exerted by the blood flow on the vessel walls, "Q" is the volumetric flow rate, "c" is a constant coefficient of flow, "L" is the length of the vessel, η(δ) is the viscosity of blood in the wall plasma release-cell layering, "r" is the radius of the vessel, and δ is the distance in the plasma release-cell layer.
Other factors.
Many of the platelet-derived substances, including serotonin, are vasodilatory when the endothelium is intact and are vasoconstrictive when the endothelium is damaged.
Cholinergic stimulation causes release of endothelium-derived relaxing factor (EDRF) (later it was discovered that EDRF was nitric oxide) from intact endothelium, causing vasodilation. If the endothelium is damaged, cholinergic stimulation causes vasoconstriction.
Adenosine most likely does not play a role in maintaining the vascular resistance in the resting state. However, it causes vasodilation and decreased vascular resistance during hypoxia. Adenosine is formed in the myocardial cells during hypoxia, ischemia, or vigorous work, due to the breakdown of high-energy phosphate compounds (e.g., adenosine monophosphate, AMP). Most of the adenosine that is produced leaves the cell and acts as a direct vasodilator on the vascular wall. Because adenosine acts as a direct vasodilator, it is not dependent on an intact endothelium to cause vasodilation.
Adenosine causes vasodilation in the small and medium-sized resistance arterioles (less than 100 μm in diameter). When adenosine is administered it can cause a coronary steal phenomenon, where the vessels in healthy tissue dilate more than diseased vessels. When this happens blood is shunted from potentially ischemic tissue that can now become ischemic tissue. This is the principle behind adenosine stress testing. Adenosine is quickly broken down by adenosine deaminase, which is present in red cells and the vessel wall. The coronary steal and the stress test can be quickly terminated by stopping the adenosine infusion.
Systemic.
A decrease in SVR (e.g., during exercising) will result in an increased flow to tissues and an increased venous flow back to the heart. An increased SVR, as occurs with some medications, will decrease flow to tissues and decrease venous flow back to the heart. Vasoconstriction and an increased SVR are particularly associated with drugs that stimulate alpha(1) adrenergic receptors.
Pulmonary.
The major determinant of vascular resistance is "small arteriolar" (known as resistance arterioles) tone. These vessels are from 450 μm down to 100 μm in diameter (as a comparison, the diameter of a capillary is about 5 to 10 μm). Another determinant of vascular resistance is the "pre-capillary arterioles". These arterioles are less than 100 μm in diameter. They are sometimes known as autoregulatory vessels since they can dynamically change in diameter to increase or reduce blood flow.
Any change in the viscosity of blood (such as due to a change in hematocrit) would also affect the measured vascular resistance.
Pulmonary vascular resistance (PVR) also depends on the lung volume, and PVR is lowest at the functional residual capacity (FRC). The highly compliant nature of the pulmonary circulation means that the degree of lung distention has a large effect on PVR. This results primarily from effects on the alveolar and extra-alveolar vessels. During inspiration, increased lung volumes cause alveolar expansion and lengthwise stretching of the interstitial alveolar vessels. This increases their length and reduces their diameter, thus increasing alveolar vessel resistance. On the other hand, decreased lung volumes during expiration cause the extra-alveolar arteries and veins to become narrower due to decreased radial traction from adjacent tissues. This leads to an increase in extra-alveolar vessel resistance. PVR is calculated as a sum of the alveolar and extra-alveolar resistances as these vessels lie in series with each other. Because the alveolar and extra-alveolar resistances are increased at high and low lung volumes respectively, the total PVR takes the shape of a U curve. The point at which PVR is the lowest is near the FRC.
Coronary.
The regulation of tone in the coronary arteries is a complex subject. There are a number of mechanisms for regulating coronary vascular tone, including metabolic demands (i.e. hypoxia), neurologic control, and endothelial factors (i.e. EDRF, endothelin).
Local metabolic control (based on metabolic demand) is the most important mechanism of control of coronary flow. Decreased tissue oxygen content and increased tissue CO2 content act as vasodilators. Acidosis acts as a direct coronary vasodilator and also potentiates the actions of adenosine on the coronary vasculature. | [
{
"math_id": 0,
"text": "\n1\\, \\frac{\\text{mmHg} \\cdot \\text{min}}{\\text{ L }} (\\text{HRUs})\n= 8\\, \\frac{\\text{MPa} \\cdot \\text{s}}{\\text{m}^3}\n= 80\\, \\frac{\\text{dyn} \\cdot \\text{sec}}{\\text{cm}^5}\n"
},
{
"math_id": 1,
"text": "R = \\Delta P / Q "
},
{
"math_id": 2,
"text": "\\frac {80 \\cdot (mean\\ arterial\\ pressure - mean \\ right \\ atrial \\ pressure)} {cardiac\\ output}"
},
{
"math_id": 3,
"text": "\\frac {80 \\cdot (mean\\ pulmonary\\ arterial\\ pressure - mean \\ pulmonary \\ artery \\ wedge \\ pressure)} {cardiac\\ output}"
},
{
"math_id": 4,
"text": "R = \\frac{8 L \\eta} {\\pi r^4} "
},
{
"math_id": 5,
"text": "R = \\frac{c L \\eta(\\delta)}{\\pi \\delta r^3} "
},
{
"math_id": 6,
"text": "F = \\frac{Q c L \\eta(\\delta)}{\\pi \\delta r} "
}
]
| https://en.wikipedia.org/wiki?curid=674274 |
67433501 | Generalized renewal process | In the mathematical theory of probability, a generalized renewal process (GRP) or G-renewal process is a stochastic point process used to model failure/repair behavior of repairable systems in reliability engineering. Poisson point process is a particular case of GRP.
Probabilistic model.
Virtual age.
The G-renewal process was introduced by Kijima and Sumita through the notion of the "virtual age".
formula_0
where:
formula_1 and formula_2 is real and virtual age (respectively) of the system at/after the "i"th repair,
formula_3 is the "restoration factor" (a.k.a., repair effectiveness factor),
formula_4 represents the condition of a perfect repair, where the system age is reset to zero after the repair. This condition corresponds to the "Ordinary Renewal Process".
formula_5 represents the condition of a minimal repair, where the system condition after the repair remains the same as right before the repair. This condition corresponds to the "Non-Homogeneous Poisson Process".
formula_6 represents the condition of a general repair, where the system condition is between perfect repair and minimal repair. This condition corresponds to the "Generalized Renewal Process".
Kaminskiy and Krivtsov extended the Kijima models by allowing "q" > 1, so that the repair damages (ages) the system to a higher degree than it was just before the respective failure.
G-renewal equation.
Mathematically, the G-renewal process is quantified through the solution of the G-renewal equation:
formula_7
where,
formula_8 formula_9
formula_10
"f"("t") is the probability density function (PDF) of the underlying failure time distribution,
"F"("t") is the cumulative distribution function (CDF) of the underlying failure time distribution,
"q" is the restoration factor,
formula_11 is the vector of parameters of the underlying failure-time distribution.
A closed-form solution to the G-renewal equation is not possible. Also, numerical approximations are difficult to obtain due to the recurrent infinite series. A Monte Carlo based approach to solving the G-renewal equation was developed by Kaminskiy and Krivtsov.
Statistical estimation.
The G–renewal process gained its practical popularity in reliability engineering only after methods for estimating its parameters had become available.
Monte Carlo approach.
The nonlinear LSQ estimation of the G–renewal process was first offered by Kaminskiy & Krivtsov. A random inter-arrival time from a parameterized G-Renewal process is given by:
formula_12
where,
formula_13 is the cumulative real age before the ith inter-arrival,
formula_14 is a uniformly distributed random variable,
formula_15 is the CDF of the underlying failure-time distribution.
The Monte Carlo solution was subsequently improved and implemented as a web resource.
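A minimal Python sketch of this inverse-CDF recursion, assuming (purely for illustration) a Weibull underlying failure-time distribution, could look as follows; the function name and parameter values are hypothetical:
<syntaxhighlight lang="python">
import numpy as np

def sample_grp_times(n_events, q, shape, scale, rng):
    """Simulate one G-renewal realization via X_i = F^-1(1 - U_i [1 - F(q S_{i-1})]) - q S_{i-1}.

    Assumes a Weibull underlying distribution: F(t) = 1 - exp(-(t/scale)^shape).
    Returns the cumulative event times S_1, ..., S_n.
    """
    F = lambda t: 1.0 - np.exp(-(t / scale) ** shape)
    F_inv = lambda u: scale * (-np.log(1.0 - u)) ** (1.0 / shape)
    S, times = 0.0, []
    for _ in range(n_events):
        u = 1.0 - rng.random()                         # U in (0, 1], avoids log(0)
        x = F_inv(1.0 - u * (1.0 - F(q * S))) - q * S  # inter-arrival time, always >= 0
        S += x
        times.append(S)
    return times

rng = np.random.default_rng(42)
print(sample_grp_times(5, q=0.5, shape=1.5, scale=100.0, rng=rng))
</syntaxhighlight>
Note that q = 0 reduces the recursion to ordinary renewal sampling and q = 1 to sampling the remaining life at the current real age, mirroring the Kijima conditions above.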
Maximum likelihood approach.
The maximum likelihood procedures were subsequently discussed by Yañez, et al., and Mettas & Zhao. The estimation of the G–renewal restoration factor was addressed in detail by Kahle & Love.
Regularization method in estimating GRP parameters.
The estimation of G–renewal process parameters is an ill–posed inverse problem, and therefore, the solution may not be unique and is sensitive to the input data. Krivtsov & Yevkin suggested first estimating the underlying distribution parameters using the times to first failure only. The obtained parameters are then used as the initial values for the second step, in which all model parameters (including the restoration factor(s)) are estimated simultaneously. This approach allows, on the one hand, avoiding irrelevant solutions (wrong local maxima or minima of the objective function) and, on the other hand, improving computational speed, as the number of iterations significantly depends on the selected initial values.
Limitations.
One limitation of the Generalized Renewal Process is that it cannot account for "better-than-new" repair. The G1-renewal process has been developed which applies the restoration factor to the life parameter of a location-scale distribution to be able to account for "better-than-new" repair in addition to other repair types.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y_i = q t_i "
},
{
"math_id": 1,
"text": " t_i "
},
{
"math_id": 2,
"text": " y_i "
},
{
"math_id": 3,
"text": " q "
},
{
"math_id": 4,
"text": " q=0 "
},
{
"math_id": 5,
"text": " q=1 "
},
{
"math_id": 6,
"text": " 0<q<1 "
},
{
"math_id": 7,
"text": "W(t) = \\int_0^t( g(\\tau \\mid 0) + \\int_0^\\tau w(x) \\cdot (g(\\tau - x \\mid x) \\,dx )\\,d\\tau "
},
{
"math_id": 8,
"text": "g(\\tau \\mid x) = \\frac{(t+qx,\\theta)}{1-F(qx,\\theta)}"
},
{
"math_id": 9,
"text": " t,x \\geq 0 "
},
{
"math_id": 10,
"text": " w(x) = \\frac{dW(x)}{dx} "
},
{
"math_id": 11,
"text": " {\\theta} "
},
{
"math_id": 12,
"text": "X_i = F^{-1} (1-U_i[1- F(qS_{i-1})]) - qS_{i-1} "
},
{
"math_id": 13,
"text": " S_{i-1} "
},
{
"math_id": 14,
"text": " U_i "
},
{
"math_id": 15,
"text": " F "
}
]
| https://en.wikipedia.org/wiki?curid=67433501 |
67434894 | KBD algorithm | Cluster update algorithm
The KBD algorithm is a cluster update algorithm designed for the fully frustrated Ising model in two dimensions, or more generally any two-dimensional spin glass with frustrated plaquettes arranged in a checkered pattern. It was discovered in 1990 by Daniel Kandel, Radel Ben-Av, and Eytan Domany, and generalized by P. D. Coddington and L. Han in 1994. It is the inspiration for cluster algorithms used in quantum Monte Carlo simulations.
Motivation.
The SW algorithm was the first non-local algorithm designed for efficient simulation of ferromagnetic spin models. However, it was soon realized that the efficiency of the algorithm cannot be extended to frustrated systems, due to an overly large correlation length of the generated clusters with respect to the underlying spin system. The KBD algorithm is an attempt to extend the bond-formation rule to the plaquettes of the lattice, such that the generated clusters are informed by the frustration profile, resulting in them being smaller than the SW ones, thereby making the algorithm more efficient in comparison. However, at the current stage, it is not known whether this algorithm can be generalized for arbitrary spin glass models.
Algorithm.
We begin by decomposing the square lattice into plaquettes arranged in a checkered pattern (such that the plaquettes only overlap vertex-wise but not edge-wise). Since the spin model is fully frustrated, each plaquette must contain exactly one or three negative (unsatisfied) interactions. If the plaquette contains three negative interactions, then no bonds can be formed. However, if the plaquette contains one negative interaction, then two parallel bonds can be formed (perpendicular to the negative edge) with probability formula_0, where formula_1 is the inverse temperature of the spin model.
The bonds will then form clusters on the lattice, on which the spins can be collectively flipped (either with the SW rule or the Wolff rule). It can be shown that the update satisfies detailed balance, meaning that correctness is guaranteed if the algorithm is used in conjunction with ergodic algorithms like single spin-flip updates.
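A minimal Python sketch of one KBD update, assuming a fully frustrated model on a periodic "L" × "L" lattice with "L" even (the data layout and function names are illustrative, not taken from the original paper), could look as follows:
<syntaxhighlight lang="python">
import numpy as np

def kbd_update(spins, Jx, Jy, beta, rng):
    """One KBD cluster update. spins[i, j] = +-1; Jx[i, j] couples (i, j)-(i, j+1)
    and Jy[i, j] couples (i, j)-(i+1, j), indices taken mod L (periodic lattice)."""
    L = spins.shape[0]
    parent = np.arange(L * L)

    def find(a):                      # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    p = 1.0 - np.exp(-4.0 * beta)
    for i in range(L):
        for j in range(L):
            if (i + j) % 2:           # checkered pattern: use only half the plaquettes
                continue
            ip, jp = (i + 1) % L, (j + 1) % L
            c = [i * L + j, i * L + jp, ip * L + j, ip * L + jp]  # plaquette corners
            # bond energies: J * s_a * s_b < 0 means the bond is unsatisfied
            edges = {"t": Jx[i, j] * spins[i, j] * spins[i, jp],
                     "b": Jx[ip, j] * spins[ip, j] * spins[ip, jp],
                     "l": Jy[i, j] * spins[i, j] * spins[ip, j],
                     "r": Jy[i, jp] * spins[i, jp] * spins[ip, jp]}
            unsat = [k for k, e in edges.items() if e < 0]
            if len(unsat) != 1 or rng.random() >= p:
                continue              # three unsatisfied edges, or bond move not taken
            if unsat[0] in "tb":      # unsatisfied edge horizontal: open both vertical bonds
                union(c[0], c[2]); union(c[1], c[3])
            else:                     # unsatisfied edge vertical: open both horizontal bonds
                union(c[0], c[1]); union(c[2], c[3])

    # Swendsen-Wang step: flip each resulting cluster with probability 1/2
    flip = {root: rng.random() < 0.5 for root in {find(s) for s in range(L * L)}}
    for s in range(L * L):
        if flip[find(s)]:
            spins[s // L, s % L] *= -1
</syntaxhighlight>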
Topological features.
At zero temperature, or the formula_2 limit, all the plaquettes will contain exactly one negative edge. In this case, on each checkered plaquette, the KBD algorithm will always open two parallel bonds perpendicular to the negative edge, meaning that the bond will be closed on the negative edge along with the edge opposite to it. If we track the closed bonds in the dual lattice by drawing a straight or bent line inside each plaquette such that it intersects the closed bonds, then it can be shown that a path following the lines must form a cycle.
Furthermore, it can be shown that there must be at least two such cycles, and that the cycles cannot intersect. Most importantly, each cycle cannot be contracted to a point in the underlying surface that the lattice is embedded in. On a periodic lattice (or a torus), this means that the cycles of closed bonds must wind around the torus in the same direction, from which one can show that the largest cluster (which must be "squeezed" between these cycles) at zero temperature cannot span a finite fraction of the lattice size in the thermodynamic limit.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p = 1-e^{-4\\beta}"
},
{
"math_id": 1,
"text": "\\beta"
},
{
"math_id": 2,
"text": "\\beta \\to \\infty"
}
]
| https://en.wikipedia.org/wiki?curid=67434894 |
67437670 | Replica cluster move | Replica cluster move in condensed matter physics refers to a family of non-local cluster algorithms used to simulate spin glasses. It is an extension of the Swendsen-Wang algorithm in that it generates non-trivial spin clusters informed by the interaction states on two (or more) replicas instead of just one. It is different from the replica exchange method (or parallel tempering), as it performs a non-local update on a fraction of the sites between the two replicas at the same temperature, while parallel tempering directly exchanges all the spins between two replicas at different temperature. However, the two are often used alongside to achieve state-of-the-art efficiency in simulating spin-glass models.
The Chayes-Machta-Redner representation.
The Chayes-Machta-Redner (CMR) representation is a graphical representation of the Ising spin glass which extends the standard FK representation. It is based on the observation that the total Hamiltonian of two independent Ising replicas α and β,
formula_0
can be written as the Hamiltonian of a 4-state clock model. To see this, we define the following mapping
formula_1
where formula_2 is the orientation of the 4-state clock; the total Hamiltonian can then be represented as
formula_3
In the graphical representation of this model, there are two types of bonds that can be open, referred to as blue and red. To generate the bonds on the lattice, the following rules are imposed: if formula_4 on an edge formula_5 (the interaction is fully satisfied), a blue bond is opened on that edge with probability formula_6; if formula_7 on an edge formula_5 (the interaction is half-satisfied), a red bond is opened on that edge with probability formula_8.
Under these rules, it can be checked that a cycle of open bonds can only contain an even number of red bonds. A cluster formed with blue bonds is referred to as a blue cluster, and a super-cluster formed from both blue and red bonds is referred to as a grey cluster.
Once the clusters are generated, there are two types of non-local updates that can be made to the clock states independently in the clock clusters (and thus the spin states in both replicas). First, for every blue cluster, we can flip (or rotate formula_9) the clock states with some arbitrary probability. Following this, for every grey cluster (blue clusters connected with red bonds), we can rotate all the clock states simultaneously by a random angle.
It can be shown that both updates are consistent with the bond-formation rules, and satisfy detailed balance. Therefore, an algorithm based on this CMR representation will be correct when used in conjunction with other ergodic algorithms. However, the algorithm is not necessarily efficient, as a giant grey cluster will tend to span the entire lattice at sufficiently low temperatures (e.g. even at paramagnetic phases of spin-glass models).
Houdayer cluster move.
The Houdayer cluster move is a simpler cluster algorithm based on a site percolation process on sites with negative spin overlaps. It is discovered by Jerome Houdayer in 2001. For two independent Ising replicas, we can define the spin overlap as
formula_10
and a cluster is formed by randomly selecting a site and percolating through the adjacent sites with formula_11 (with a percolation ratio of 1) until the maximal cluster is formed. The spins in the cluster are then exchanged between the two replicas. It can be shown that the exchange update is isoenergetic, meaning that the total energy is conserved in the update. This gives an acceptance ratio of 1 as calculated from the Metropolis-Hastings rule. In other words, the update is rejection-free.
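A minimal Python sketch of a single Houdayer cluster move (an illustrative reconstruction; the names and data layout are hypothetical) is:
<syntaxhighlight lang="python">
import numpy as np
from collections import deque

def houdayer_move(s_alpha, s_beta, rng):
    """One Houdayer move between two replicas at the same temperature.
    s_alpha, s_beta: L x L arrays of +-1 on a periodic square lattice."""
    L = s_alpha.shape[0]
    q = s_alpha * s_beta                       # local overlap q_i at every site
    negatives = np.argwhere(q == -1)
    if len(negatives) == 0:
        return                                 # replicas agree everywhere; nothing to do
    seed = tuple(negatives[rng.integers(len(negatives))])
    cluster, queue = {seed}, deque([seed])
    while queue:                               # grow the maximal cluster of q = -1 sites
        i, j = queue.popleft()
        for nb in (((i - 1) % L, j), ((i + 1) % L, j),
                   (i, (j - 1) % L), (i, (j + 1) % L)):
            if q[nb] == -1 and nb not in cluster:
                cluster.add(nb)
                queue.append(nb)
    for i, j in cluster:                       # isoenergetic, rejection-free exchange
        s_alpha[i, j], s_beta[i, j] = s_beta[i, j], s_alpha[i, j]
</syntaxhighlight>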
Suppressing percolation of large clusters.
The efficiency of this algorithm is highly sensitive to the site percolation threshold of the underlying lattice. If the percolation threshold is too small, then a giant cluster will likely span the entire lattice, resulting in the trivial update of exchanging nearly all the spins between the replicas. This is why the original algorithm only performs well in low-dimensional settings (where the site percolation threshold is sufficiently high). To efficiently extend this algorithm to higher dimensions, one has to perform certain algorithmic interventions.
For instance, one can restrict the cluster moves to low-temperature replicas, where one expects only a small number of negative-overlap sites to appear (such that the algorithm does not percolate supercritically). In addition, one can perform a global spin-flip in one of the two replicas when the number of negative-overlap sites exceeds half the lattice size, in order to further suppress the percolation process.
The Jorg cluster move is another way to reduce the sizes of the Houdayer clusters. In each Houdayer cluster, the algorithm forms open bonds with probability formula_12, similar to the Swendsen-Wang algorithm. This will form sub-clusters that are smaller than the Houdayer clusters, and the spins in these sub-clusters can then be exchanged between replicas in a similar fashion to a Houdayer cluster move.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " H = -\\sum_{<ij>}J_{ij} \\big( \\sigma_i^{\\alpha}\\sigma_j^{\\alpha} + \\sigma_i^{\\beta}\\sigma_j^{\\beta} \\big), "
},
{
"math_id": 1,
"text": " (\\sigma^{\\alpha},\\sigma^{\\beta}) \\to \\theta: \n\\quad \\big\\{ (+1,+1), (+1,-1), (-1,-1), (-1,+1) \\big\\} \\mapsto \\big\\{ 0, \\frac{\\pi}{2}, \\pi, \\frac{3\\pi}{2} \\big\\}, "
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "H = - 2 J_{ij}\\sum_{<ij>} \\cos(\\theta_j - \\theta_i)."
},
{
"math_id": 4,
"text": " J_{ij}\\cos(\\theta_j - \\theta_i) = 1 "
},
{
"math_id": 5,
"text": " (i,j) "
},
{
"math_id": 6,
"text": " p_{\\text{blue}} = 1-e^{-4\\beta|J_{ij}|} "
},
{
"math_id": 7,
"text": " J_{ij}\\cos(\\theta_j - \\theta_i) = 0 "
},
{
"math_id": 8,
"text": " p_{\\text{red}} = 1-e^{-2\\beta|J_{ij}|} "
},
{
"math_id": 9,
"text": " 180^{\\circ} "
},
{
"math_id": 10,
"text": " q_i = \\sigma_i^{\\alpha} \\sigma_j^{\\beta}, "
},
{
"math_id": 11,
"text": " q = -1 "
},
{
"math_id": 12,
"text": "1 - e^{-4\\beta|J_{ij}|}"
}
]
| https://en.wikipedia.org/wiki?curid=67437670 |
6743885 | Steven Zucker | American mathematician (1949–2019)
Steven Mark Zucker (12 September 1949 – 13 September 2019) was an American mathematician who introduced the Zucker conjecture, proved in different ways by Eduard Looijenga (1988) and by Leslie Saper and Mark Stern (1990).
Zucker completed his Ph.D. in 1974 at Princeton University under the supervision of Spencer Bloch. His work with David A. Cox led to the creation of the Cox–Zucker machine, an algorithm for determining if a given set of sections provides a basis (up to torsion) for the Mordell–Weil group of an elliptic surface formula_0, where formula_1 is isomorphic to the projective line.
He was part of the mathematics faculty at the Johns Hopkins University. In 2012, he became a fellow of the American Mathematical Society.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E \\to S"
},
{
"math_id": 1,
"text": "S"
}
]
| https://en.wikipedia.org/wiki?curid=6743885 |
67439252 | Lam's problem | In finite geometry, Lam's problem is the problem of determining if a finite projective plane of order ten exists.
The order ten case is the first theoretically uncertain case, as all smaller orders can be resolved by purely theoretical means.
Lam's problem is named after Clement W. H. Lam who experimentally determined that projective planes of order ten do not exist via exhaustive computational searches.
Introduction.
A finite projective plane of order formula_0 is a collection of points and lines such that every line contains formula_1 points, every point lies on formula_1 lines, any two distinct lines intersect in exactly one point, and any two distinct points lie on exactly one line.
A consequence of this definition is that a projective plane of order formula_0 will contain formula_2 points and formula_2 lines.
The incidence relation between points and lines may equivalently be described using an incidence matrix. In this context a projective plane of order formula_0
is equivalent to a formula_3 matrix with formula_4 entries such that every row and column has formula_1 ones
and the inner product between any two rows or columns is exactly formula_5.
Using the incidence matrix representation, Lam's problem is equivalent to determining if there is a way of placing 0s and 1s in a formula_6 matrix such that there are exactly eleven 1s in each row and column and any pair of
rows share a single 1 in the same column.
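For illustration, the following Python sketch checks this incidence-matrix characterization on the smallest nontrivial case, the Fano plane of order 2 (the helper name is hypothetical):
<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations

def is_projective_plane(A, n):
    """Check that A is the incidence matrix of a projective plane of order n:
    an (n^2+n+1) x (n^2+n+1) 0/1 matrix with n+1 ones per row and column, and
    any two distinct rows (and columns) sharing exactly one common 1."""
    N = n * n + n + 1
    if A.shape != (N, N):
        return False
    if not (A.sum(axis=0) == n + 1).all() or not (A.sum(axis=1) == n + 1).all():
        return False
    for M in (A, A.T):
        for r1, r2 in combinations(range(N), 2):
            if M[r1] @ M[r2] != 1:             # inner product of distinct rows must be 1
                return False
    return True

# Fano plane: 7 points, 7 lines, 3 points per line
lines = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]
A = np.zeros((7, 7), dtype=int)
for li, pts in enumerate(lines):
    A[li, list(pts)] = 1
print(is_projective_plane(A, 2))               # True; for Lam's problem, n = 10 and N = 111
</syntaxhighlight>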
Clement W. H. Lam considered studying the existence of a projective plane of order ten in his PhD thesis but was dissuaded by his thesis advisor H. J. Ryser who believed the problem was too difficult.
Resolution.
Edward Assmus presented a connection between projective planes and coding theory at the conference "Combinatorial Aspects of Finite Geometries" in 1970.
He studied the code generated by the rows of the incidence matrix of a hypothetical projective plane of order ten and derived a number of restrictive properties that such a code must satisfy.
In particular, the enumerator polynomial of the code is completely determined by the number of words of weights 12, 15, and 16 in the code.
Over the next two decades a number of computer searches showed that the hypothetical code associated with the projective plane of order ten does not contain words of weights 15, 12, and 16—which implied that it must contain words of weight 19.
Finally, Clement Lam, Larry Thiel and Stanley Swiercz used about three months of time on a Cray-1A supercomputer to show that words of weight 19 are also not present in the code.
This resolved Lam's problem in the negative.
Their result was independently verified in 2021 by using a SAT solver to generate computer-verifiable certificates for the correctness of the exhaustive searches.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n+1"
},
{
"math_id": 2,
"text": "n^2+n+1"
},
{
"math_id": 3,
"text": "(n^2+n+1)\\times(n^2+n+1)"
},
{
"math_id": 4,
"text": "\\{0,1\\}"
},
{
"math_id": 5,
"text": "1"
},
{
"math_id": 6,
"text": "111\\times111"
}
]
| https://en.wikipedia.org/wiki?curid=67439252 |
67445633 | SARS-CoV-2 Delta variant | Variant of SARS-CoV-2 detected late 2020
The Delta variant (B.1.617.2) was a variant of SARS-CoV-2, the virus that causes COVID-19. It was first detected in India on 5 October 2020. The Delta variant was named on 31 May 2021 and had spread to over 179 countries by 22 November 2021. The World Health Organization (WHO) indicated in June 2021 that the Delta variant was becoming the dominant strain globally.
It has mutations in the gene encoding the SARS-CoV-2 spike protein causing the substitutions T478K, P681R and L452R, which are known to affect transmissibility of the virus as well as whether it can be neutralised by antibodies for previously circulating variants of the COVID-19 virus. In August 2021, Public Health England (PHE) reported the secondary attack rate in household contacts of non-travel or unknown cases for Delta to be 10.8% "vis-à-vis" 10.2% for the Alpha variant; the case fatality rate for those 386,835 people with Delta is 0.3%, where 46% of the cases and 6% of the deaths are unvaccinated and below 50 years old. Immunity from previous infection or from COVID-19 vaccines is effective in preventing severe disease or hospitalisation from infection with the variant.
On 7 May 2021, PHE changed their classification of lineage B.1.617.2 from a variant under investigation (VUI) to a variant of concern (VOC) based on an assessment of transmissibility being at least equivalent to B.1.1.7 (Alpha variant); using May data, the UK's SAGE estimated a "realistic" possibility of it being 50% more transmissible. On 11 May 2021, the WHO also classified this lineage as a VOC, and said that it showed evidence of higher transmissibility and reduced neutralisation. On 15 June 2021, the Centers for Disease Control and Prevention (CDC) declared Delta a variant of concern.
The variant is thought to be partly responsible for India's deadly second wave of the pandemic beginning in February 2021. It later contributed to a third wave in Fiji, the United Kingdom and South Africa, and the WHO warned in July 2021 that it could have a similar effect elsewhere in Europe and Africa. By late July, it had also driven an increase in daily infections in parts of Asia, the United States, Australia, and New Zealand.
Classification.
The Delta variant has mutations in the gene encoding the SARS-CoV-2 spike protein causing the substitutions D614G, T478K, P681R and L452R. It is identified as the 21A, 21I, and 21J clades under the Nextstrain phylogenetic classification system.
Names.
The virus has also been referred to by the term "Indian Variant" as it was originally detected in India. However, the Delta variant is only one of three variants of the lineage B.1.617, all of which were first detected in India. At the end of May 2021, the WHO assigned the label Delta to lineage B.1.617.2 after introducing a new policy of using Greek letters for variants of concern and variants of interest.
Other sublineages of B.1.617.
There are three sublineages of lineage B.1.617 categorised so far.
B.1.617.1 (Kappa variant) was designated a Variant Under Investigation in April 2021 by Public Health England. Later in April 2021, two other variants B.1.617.2 and B.1.617.3 were designated as Variants Under Investigation. While B.1.617.3 shares the L452R and E484Q mutations found in B.1.617.1, B.1.617.2 lacks the E484Q mutation. B.1.617.2 has the T478K mutation, not found in B.1.617.1 and B.1.617.3. Simultaneously, the ECDC released a brief maintaining all three sublineages of B.1.617 as VOI, estimating that a "greater understanding of the risks related to these B.1.617 lineages is needed before any modification of current measures can be considered".
Mutations.
The Delta/ B.1.617.2 genome has 13 mutations (15 or 17 according to some sources, depending on whether more common mutations are included) which produce alterations in the amino-acid sequences of the proteins it encodes.
The list of spike protein mutations is: T19R, (G142D), Δ156-157, R158G, L452R, T478K, D614G, P681R, D950N according to GVN, or T19R, G142D, del 156–157, R158G, L452R, T478K, D614G, P681R according to Genscript. Four of them, all of which are in the virus's spike protein code, are of particular concern: D614G, T478K, L452R and P681R.
The E484Q mutation is not present in the B.1.617.2 genome.
Lineages.
As of August 2021, Delta variants have been subdivided in the Pango lineage designation system into variants from AY.1 to AY.28. However, there is no information on whether such classification correlates with biological characteristic changes of the virus. It is said that, as of August 2021, AY.4 to AY.11 are predominant in the UK, AY.12 in Israel, AY.2, AY.3, AY.13, AY.14, AY.25 in the US, AY.20 in the US and Mexico, AY.15 in Canada, AY.16 in Kenya, AY.17 in Ireland and Northern Ireland, AY.19 in South Africa, AY.21 in Italy and Switzerland, AY.22 in Portugal, AY.24 in Indonesia, and AY.23 in Indonesia, Singapore, Japan, and South Korea.
"Delta plus" variant.
Delta with K417N originally corresponded to lineages AY.1 and AY.2, subsequently also lineage AY.3, and has been nicknamed "Delta plus" or "Nepal variant". It has the K417N mutation, which is also present in the Beta variant. The exchange at position 417 is a lysine-to-asparagine substitution.
As of mid-October 2021, the AY.3 variant accounted for a cumulative prevalence of approximately 5% in the United States, and 2% worldwide. In mid-October the AY.4.2 Delta sublineage was expanding in England, and being monitored and assessed. It contains mutations A222V and Y145H in its spike protein, not considered of particular concern. It has been suggested that AY.4.2 might be 10-15% more transmissible than the original Delta variant. In mid-October 2021, AY.4.2 accounted for an estimated 10% of cases, and had led to an additional growth rate rising to about 1% (10% of 10%) per generational time of five days or so. This additional growth rate would grow with increasing prevalence. Without AY.4.2, and with no other changes, the number of cases in the UK would have been about 10% lower. AY.4.2 grows about 15% faster per week. In the UK it was reclassified as a "variant under investigation" (but not "of concern") in late October 2021. In Denmark, after a drop in AY.4.2 cases, a new fast surge was detected and monitored, but was not yet considered a cause of concern.
Symptoms.
The most common symptoms may have changed from those previously associated with standard COVID-19. Infected people may mistake the symptoms for a bad cold and not realize they need to isolate. Common symptoms reported have been headaches, sore throat, a runny nose or a fever.
Prevention.
WHO has not issued preventative measures against Delta specifically; non-pharmaceutical measures recommended to prevent wild type COVID-19 should still be effective. These would include washing hands, wearing a mask, maintaining distance from others, avoiding touching the mouth, nose or eyes, avoiding crowded indoor spaces with poor ventilation especially where people are talking, going to get tested if one develops symptoms and isolating if one becomes sick. Public Health authorities should continue to find infected individuals using testing, trace their contacts, and isolate those who have tested positive or been exposed. Event organizers should assess the potential risks of any mass gathering and develop a plan to mitigate these risks. See also Non-pharmaceutical intervention (epidemiology).
The Indian Council of Medical Research (ICMR) found that convalescent sera of COVID-19 cases and recipients of Bharat Biotech's BBV152 (Covaxin) were able to neutralise VUI B.1.617, although with a lower efficacy.
Anurag Agrawal, the director of the Institute of Genomics and Integrative Biology (IGIB), said the study on the effectiveness of the available vaccines on lineage B.1.617 suggests that post-vaccination, the infections are milder.
Anthony Fauci, the Chief Medical Advisor to the President of the United States, has also expressed his confidence regarding the preliminary results. In an interview on 28 April, he said: <templatestyles src="Template:Blockquote/styles.css" />This is something where we're still gaining data daily. But the most recent data was looking at convalescent sera of COVID-19 cases and people who received the vaccine used in India, the Covaxin. It was found to neutralise the 617 variants.
Another study by the Centre for Cellular and Molecular Biology (CCMB) in Hyderabad found Covishield (Oxford–AstraZeneca) vaccinated sera offers protection against lineage B.1.617.
A study conducted by Public Health England (PHE) found that, compared to those who were unvaccinated, those vaccinated with either the Pfizer-BioNTech or AstraZeneca-Oxford vaccine had 33% fewer instances of symptomatic disease caused by the variant after the first dose. Among those who were two weeks past their second dose of the Pfizer-BioNTech vaccine, 88% fewer subjects had symptomatic disease from the Delta variant versus those who were unvaccinated. Among those who were two weeks past their second dose of the AstraZeneca-Oxford vaccine, 60% fewer subjects had symptomatic disease from the Delta variant versus those who were unvaccinated.
A study by a group of researchers from the Francis Crick Institute, published in "The Lancet", shows that humans fully vaccinated with the Pfizer-BioNTech vaccine are likely to have more than five times lower levels of neutralizing antibodies against the Delta variant compared to the original COVID-19 strain.
In June 2021, PHE announced it had conducted a study which found that after two shots, the Pfizer-BioNTech vaccine and the AstraZeneca vaccine are respectively 96% and 92% effective at preventing hospitalisation from the Delta variant.
On July 3, researchers from the universities of Toronto and Ottawa in Ontario, Canada, released a preprint study suggesting that the Moderna vaccine may be effective against death or hospitalization from the Delta variant.
A study by the University of Sri Jayewardenepura in July 2021 found that the Sinopharm BIBP vaccine caused seroconversion in 95% of the individuals studied who had received both doses of the vaccine. The rate was higher in the 20-39 age group (98.9%) but slightly lower in the over-60 age group (93.3%). Neutralising antibodies were present in 81.25% of the vaccinated individuals studied.
On 29 June 2021, the director of the Gamaleya Institute, Denis Logunov, said that Sputnik V is about 90% effective against the Delta variant.
On July 21, researchers from PHE published a study finding that the Pfizer vaccine was 93.7% effective against symptomatic disease from Delta after 2 doses, while the Astrazeneca vaccine was 67% effective.
On August 2, several experts expressed concern that achieving herd immunity may not currently be possible because the Delta variant is transmitted among those immunized with current vaccines.
On August 10, a study showed that the full vaccination coverage rate was inversely correlated with the SARS-CoV-2 Delta variant mutation frequency in 16 countries (R-squared = 0.878). The data strongly indicate that full vaccination against COVID-19 may slow down virus evolution.
Treatment.
In vitro experiments suggest that bamlanivimab may not be effective against Delta on its own. At high enough concentrations, casirivimab, etesevimab and imdevimab appear to still be effective. A preprint study suggests that sotrovimab may also be effective against Delta. Doctors in Singapore have been using supplemental oxygen, remdesivir and corticosteroids on more Delta patients than they did on previous variants.
Epidemiology.
Transmissibility.
UK scientists have said that the Delta variant is between 40% and 60% more transmissible than the previously dominant Alpha variant, which was first identified in the UK (as the Kent variant). Given that Alpha is already 150% as transmissible as the original SARS-CoV-2 strain that emerged in late 2019 in Wuhan, and if Delta is 150% as transmissible as Alpha, then Delta may be 225% as transmissible as the original strain. BBC reported that formula_0 – basic reproduction number, or the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection – for the first detected SARS-CoV-2 virus is 2.4–2.6, whereas Alpha's reproduction number is 4–5 and Delta's is 5–9. These basic reproduction numbers can be compared to MERS (0.29-0.80), seasonal influenza (1.2–1.4), Ebola (1.4–1.8), common cold (2–3), SARS (2–4), smallpox (3.5–6), and chickenpox (10–12). Due to Delta's high transmissibility, even those who are vaccinated are vulnerable, albeit to a lesser extent.
A study published online (not peer-reviewed) by Guangdong Provincial Center for Disease Control and Prevention may partly explain the increased transmissibility: people with infection caused by Delta had 1,000 times more copies of the virus in the respiratory tracts than those with infection caused by variants first identified in the beginning of the pandemic; and it took on average 4 days for people infected with Delta for the virus to be detectable compared to 6 days with initially identified variants.
Surveillance data from the U.S., Germany and the Netherlands indicates the Delta variant is growing by about a factor of 4 every two weeks with respect to the Alpha variant.
In India, the United Kingdom, Portugal, Russia, Mexico, Australia, Indonesia, South Africa, Germany, Luxembourg, the United States, the Netherlands, Denmark, France and probably many other countries, the Delta variant had become the dominant strain by July 2021. Depending on country, there is typically a lag from a few days to several weeks between cases and variant reporting. As of July 20, this variant had spread to 124 countries, and WHO had indicated that it was becoming the dominant strain, if not one already.
In the Netherlands, the virus was still able to propagate significantly in the population, with over 93.4% of blood donors testing positive for SARS-CoV-2 antibodies after week 28, 2021. Many people there were not fully vaccinated, so those antibodies would have been developed from exposure to the wild virus or from a vaccine. Similarly high seroimmunity levels occur in the United Kingdom in blood donors and general surveillance.
A preprint found that the viral load in the first positive test of infections with the variant was on average ~1000 times higher than in comparable infections during 2020. Preliminary data from a study with 100,000 volunteers in the UK from May to July 2021, when Delta was spreading rapidly, indicates that vaccinated people who test positive for COVID-19, including asymptomatic cases, have a lower viral load on average. Data from the US, UK, and Singapore indicate that vaccinated people infected by Delta may have viral loads as high as unvaccinated infected people, but might remain infectious for a shorter period.
Infection age groups.
Surveillance data from the Indian government's Integrated Disease Surveillance Programme (IDSP) shows that around 32% of patients, both hospitalised and outside hospitals, were aged below 30 in the second wave compared to 31% during the first wave; among people aged 30–40, the infection rate stayed at 21%. Hospitalisation in the 20–39 bracket increased to 25.5% from 23.7% while the 0–19 range increased to 5.8% from 4.2%. The data also showed a higher proportion of asymptomatic patients were admitted during the second wave, with more complaints of breathlessness.
Virulence.
A few early studies suggest the Delta variant causes more severe illness than other strains. On 7 June 2021, researchers at the National Centre for Infectious Diseases in Singapore posted a paper suggesting that patients testing positive for Delta are more likely to develop pneumonia and/or require oxygen than patients with wild type or Alpha. On June 11, Public Health England released a report finding that there was "significantly increased risk of hospitalization" from Delta as compared with Alpha; the risk was approximately twice as high for those infected with the Delta variant. On June 14, researchers from Public Health Scotland found that the risk of hospitalization from Delta was roughly double that of from Alpha. On July 7, a preprint study from epidemiologists at the University of Toronto found that Delta had a 120% greater – or more than twice as large – risk of hospitalization, 287% greater risk of ICU admission and 137% greater risk of death compared to non-variant of concern strains of SARS-COV-2. However, on July 9, Public Health England reported that the Delta variant in England had a case fatality rate (CFR) of 0.2%, while the Alpha variant's case fatality rate was 1.9%, although the report warns that "case fatality rates are not comparable across variants as they have peaked at different points in the pandemic, and so vary in background hospital pressure, vaccination availability and rates and case profiles, treatment options, and impact of reporting delay, among other factors." James McCreadie, a spokesperson for Public Health England, clarified "It is too early to assess the case fatality ratio compared to other variants."
A Canadian study released on 5 October 2021 revealed that the Delta variant caused a 108 percent rise in hospitalization, a 235 percent increase in ICU admission, and a 133 percent surge in death compared to other variants. The study concluded that Delta infection is more serious and results in an increased risk of death compared to previous variants, odds that are significantly decreased with immunization.
Statistics.
The chance of detecting a Delta case varies significantly, especially depending on a country's sequencing rate (from less than 0.05% of all COVID-19 cases sequenced in the lowest-sequencing countries to around 50 percent in the highest).
By 22 June 2021, more than 4,500 sequences of the variant had been detected in about 78 countries.
History.
The first cases of the variant outside India were detected in late February 2021, including the United Kingdom on 22 February, the United States on 23 February and Singapore on 26 February.
British scientists at Public Health England redesignated the B.1.617.2 variant on 7 May 2021 as "variant of concern" (VOC-21APR-02), after they flagged evidence in May 2021 that it spreads more quickly than the original version of the virus. Another reason was that they identified 48 clusters of B.1.617.2, some of which revealed a degree of community transmission. With cases from the Delta variant having risen quickly, British scientists considered the Delta variant to have overtaken the Alpha variant as the dominant variant of SARS-CoV-2 in the UK in early June 2021. Researchers at Public Health England later found that over 90% of new cases in the UK in the early part of June 2021 were the Delta variant; they also cited evidence that the Delta variant was associated with an approximately 60% increased risk of household transmission compared to the Alpha variant.
Canada's first confirmed case of the variant was identified in Quebec on 21 April 2021, and later the same day 39 cases of the variant were identified in British Columbia. Alberta reported a single case of the variant on 22 April 2021. Nova Scotia reported two Delta variant cases in June 2021.
Fiji also confirmed its first case of the variant on 19 April 2021 in Lautoka, and cases have since climbed to 47,000 and counting. The variant has been identified as a super-spreader and has led to the lockdowns of five cities (Lautoka, Nadi, Suva, Lami and Nausori), an area which accounts for almost two-thirds of the country's population.
On 29 April 2021, health officials from Finland's Ministry of Social Affairs and Health (STM) and the Finnish Institute for Health and Welfare (THL) reported that the variant had been detected in three samples dating back to March 2021.
The Philippines confirmed its first two cases of the variant on 11 May 2021, despite the country's travel ban on arrivals from nations in the Indian subcontinent (except for Bhutan and the Maldives). Neither patient had a travel history from India in the past 14 days; they had instead arrived from Oman and the UAE.
North Macedonia confirmed its first case of the variant on 7 June 2021 after a person who was recovering from the virus in Iraq was transported to North Macedonia. In a laboratory test, the variant was detected in the person. On 22 June 2021, the country reported its second case of the Delta variant in a colleague of the first case who had also been in Iraq and who subsequently developed symptoms.
The detection of B.1.617 was hampered in some countries by a lack of specialised kits for the variant and laboratories that can perform the genetic test. For example, as of 18 May, Pakistan had not reported any cases, but authorities noted that 15% of COVID-19 samples in the country were of an "unknown variant"; they could not say if it was B.1.617 because they were unable to test for it. Other countries had reported travellers arriving from Pakistan that were infected with B.1.617.
In June 2021, scientist Vinod Scaria of India's Institute of Genomics and Integrative Biology highlighted the existence of the B.1.617.2.1 variant, also known as AY.1 or Delta plus, which has an additional K417N mutation compared to the Delta variant. B.1.617.2.1 was detected in Europe in March 2021, and has since been detected in Asia and America.
On 23 June 2021, the European Centre for Disease Prevention and Control (ECDC) warned that the variant would represent 90% of all new cases in the European Union by the end of August.
By 3 July 2021, Delta became dominant in the US.
On 9 July 2021, Public Health England issued Technical Briefing 18 on SARS-CoV-2 variants, documenting 112 deaths among 45,136 UK cases of SARS-CoV-2 Delta variant with 28 days follow-up, for a fatality rate of 0.2%. Briefing 16 notes that "[M]ortality is a lagged indicator, which means that the number of cases who have completed 28 days of follow up is very low – therefore, it is too early to provide a formal assessment of the case fatality of Delta, stratified by age, compared to other variants." Briefing 18 warns that "Case fatality is not comparable across variants as they have peaked at different points in the pandemic, and so vary in background hospital pressure, vaccination availability and rates and case profiles, treatment options, and impact of reporting delay, among other factors." The most concerning issue is the logistic growth rate of 0.93/week relative to Alpha. This means that per week, the number of Delta samples/cases was growing by a factor of exp(0.93) ≈ 2.5 with respect to the Alpha variant. This results, under the same infection prevention measures, in a much greater case load over time until a large fraction of people have been infected by it.
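The weekly factor quoted above follows directly from the logistic growth rate, as this small arithmetic check (illustrative only) shows:
<syntaxhighlight lang="python">
import math

r = 0.93                   # logistic growth rate per week relative to Alpha
weekly = math.exp(r)       # ~2.53: Delta grew ~2.5x per week relative to Alpha
print(round(weekly, 2), round(weekly ** 2, 1))  # compounded over two weeks: ~6.4x
</syntaxhighlight>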
Government responses.
After the rise in cases from the second wave, at least 20 countries imposed travel bans and restrictions on passengers from India in April and May. UK prime minister Boris Johnson cancelled his visit to India twice, while Japanese Prime Minister Yoshihide Suga postponed his April trip.
In May 2021, residents of two tower blocks in Velbert, Germany, were quarantined after a woman in the building tested positive for the Delta variant.
In May, Delhi Chief Minister Arvind Kejriwal said that a new coronavirus variant from Singapore was extremely dangerous for children and could result in a third wave in India.
From 16 May to 13 June 2021, and again from 22 July to 10 August 2021, Singapore entered lockdowns, known as "Phase 2 Heightened Alert", similar to those of 2020.
On 14 June, the British prime minister Boris Johnson announced that the proposed end of all restrictions on 21 June in the United Kingdom was delayed for up to four weeks and vaccination roll-out was accelerated following concerns over the Delta variant, which accounted for the vast majority (90%) of new infections. UK scientists have said that the Delta variant is between 40% and 60% more transmissible than the previously dominant Alpha variant, which was first identified in the UK (as the Kent variant).
On 23 June, the province of Ontario in Canada accelerated 2nd dose vaccine appointments for people living in Delta hot spots such as Toronto, Peel and Hamilton.
On 25 June, Israel restored their mask mandate citing the threat of Delta.
On 28 June, Sydney and Darwin went back into lockdown because of Delta outbreaks. South Africa banned indoor and outdoor gatherings apart from funerals, imposed a curfew, and banned the sale of alcohol.
On 3 July, the islands of Bali and Java in Indonesia went into emergency lockdown.
On 8 July, Japanese Prime Minister Yoshihide Suga announced that Tokyo would once again enter a state of emergency, and that most spectators would be barred from attending the Olympics set to start there on 23 July.
On 9 July, Seoul, South Korea, ramped up restrictions, urging people to wear masks outdoors and limiting the size of gatherings.
On 12 July, French President Emmanuel Macron announced that all health care workers will need to be vaccinated by 15 September and that France will start using health passports to enter bars, cafés, restaurants and shopping centres from August.
Los Angeles announced it would require masks indoors starting 17 July 2021.
The United Kingdom lifted most COVID-19 restrictions on 19 July, despite a surge in cases as the Delta variant became dominant. The government cited the protection and wide coverage of the COVID-19 vaccination programme, although health experts expressed concern at the move.
On 23 July, Vietnam extended its lockdown of Ho Chi Minh City to 1 August, and announced lockdown restrictions would be put in place in Hanoi, affecting a third of the country's population. The Delta variant had brought upon the country's largest outbreak to date, after mostly successful containment measures throughout 2020.
On 17 August, New Zealand went into an alert level 4 lockdown, following a positive case being reported in Auckland. More cases soon followed in the Coromandel Peninsula. This was the first reported community transmission case in the country in 170 days (since February 2021).
Extinction.
In October 2021, Dr Jenny Harries, chief executive of the UK Health Security Agency, stated that previously circulating variants such as Alpha had 'disappeared' and been replaced by the Delta variant. In March 2022, the World Health Organization listed the Alpha, Beta and Gamma variants as previously circulating, citing the lack of any detected cases in the preceding weeks and months, in part due to the dominance of the Delta variant and the subsequent Omicron variant. However, within a few months the Delta variant was itself listed as a previously circulating variant, with countries such as Australia going 12 weeks without any detection of Delta.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_0"
}
]
| https://en.wikipedia.org/wiki?curid=67445633 |
67446776 | 2 Chronicles 26 | Second Book of Chronicles, chapter 26
2 Chronicles 26 is the twenty-sixth chapter of the Second Book of Chronicles of the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Uzziah, king of Judah.
Text.
This chapter was originally written in the Hebrew language and is divided into 23 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Uzziah, king of Judah (26:1–15).
Like some kings of Judah before him, Uzziah had a reign that can be divided into two periods: one positive (Uzziah's successes as a result of seeking God) and one negative (Uzziah's illness caused by his disobedience, as described in verses 16–21). The Chronicles portray Uzziah as a major reformer whose fame reached Egypt, detailing his wars, construction projects, agriculture and military organization, whereas 2 Kings 14:21–22 and 15:1–7 only report the fortification and conquest of Elath as well as Uzziah's illness.
"Uzziah was sixteen years old when he began to reign, and he reigned fifty-two years in Jerusalem. His mother's name was Jecoliah of Jerusalem."
"And he sought God in the days of Zechariah, who had understanding in the visions of God: and as long as he sought the Lord, God made him to prosper."
"Also he built towers in the desert. He dug many wells, for he had much livestock, both in the lowlands and in the plains; he also had farmers and vinedressers in the mountains and in Carmel, for he loved the soil."
Verse 10.
Archaeological evidence supports the record of Uzziah's building projects in the south, although it could also point to the time of Jehoshaphat or other kings.
Uzziah’s disease (26:16–23).
In the latter period of his reign, Uzziah grew proud and attempted to burn incense, a rite which could only be performed by the priests. When the priests warned him, Uzziah became angry with them, but the king was immediately smitten with leprosy, so he had to live in a separate house from then on while his son, Jotham, ruled as regent.
"So Uzziah slept with his fathers, and they buried him with his fathers in the field of the burial which belonged to the kings; for they said, He is a leper: and Jotham his son reigned in his stead."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=67446776 |
67447218 | Huff model | In spatial analysis, the Huff model is a widely used tool for predicting the probability of a consumer visiting a site, as a function of the distance of the site, its attractiveness, and the relative attractiveness of alternatives. It was formulated by David Huff in 1963. It is used in marketing, economics, retail research and urban planning, and is implemented in several commercially available GIS systems.
Its relative ease of use and applicability to a wide range of problems contribute to its enduring appeal.
The formula is given as:
formula_0
where formula_1 is a measure of the attractiveness of site "j" (for example, its floor area), formula_2 is the distance from consumer "i" to site "j", formula_3 and formula_4 are parameters controlling sensitivity to attractiveness and to distance respectively, and formula_5 is the number of candidate sites.
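As a minimal illustration, the following Python sketch (using NumPy) evaluates these probabilities for a single consumer; the store sizes, distances and exponent values are purely hypothetical.
<syntaxhighlight lang="python">
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """Probability that the consumer visits each of the n candidate sites."""
    utility = attractiveness**alpha * distances**(-beta)  # A_j^a * D_ij^(-b)
    return utility / utility.sum()                        # normalize over k

# Three hypothetical stores: floor areas and travel distances.
p = huff_probabilities(np.array([500.0, 1000.0, 750.0]),
                       np.array([2.0, 5.0, 3.0]))
print(p, p.sum())  # the probabilities sum to 1
</syntaxhighlight>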
{
"math_id": 0,
"text": "P_{ij}=\n\\frac{A_j^\\alpha D_{ij}^{- \\beta}}\n{\\sum_{k=1}^{n}A_k^{\\alpha} D_{ik}^{- \\beta}}\n"
},
{
"math_id": 1,
"text": "A_j"
},
{
"math_id": 2,
"text": "D_{ij}"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=67447218 |
674484 | Bilinear interpolation | Method of interpolating functions on a 2D grid
In mathematics, bilinear interpolation is a method for interpolating functions of two variables (e.g., "x" and "y") using repeated linear interpolation. It is usually applied to functions sampled on a 2D rectilinear grid, though it can be generalized to functions defined on the vertices of (a mesh of) arbitrary convex quadrilaterals.
Bilinear interpolation is performed using linear interpolation first in one direction, and then again in another direction. Although each step is linear in the sampled values and in the position, the interpolation as a whole is not linear but rather quadratic in the sample location.
Bilinear interpolation is one of the basic resampling techniques in computer vision and image processing, where it is also called bilinear filtering or bilinear texture mapping.
Computation.
Suppose that we want to find the value of the unknown function "f" at the point ("x", "y"). It is assumed that we know the value of "f" at the four points "Q"11 = ("x"1, "y"1), "Q"12 = ("x"1, "y"2), "Q"21 = ("x"2, "y"1), and "Q"22 = ("x"2, "y"2).
Repeated linear interpolation.
We first do linear interpolation in the "x"-direction. This yields
formula_0
We proceed by interpolating in the "y"-direction to obtain the desired estimate:
formula_1
Note that we will arrive at the same result if the interpolation is done first along the "y" direction and then along the "x" direction.
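The two-step procedure above translates directly into code. The following is a minimal Python sketch; the function name and argument order are illustrative choices, not a standard API.
<syntaxhighlight lang="python">
def bilinear(x, y, x1, x2, y1, y2, q11, q12, q21, q22):
    """Interpolate f at (x, y), given the values qij = f(Qij) at the
    four corners Q11=(x1,y1), Q12=(x1,y2), Q21=(x2,y1), Q22=(x2,y2)."""
    # Linear interpolation in the x-direction on both rows:
    fxy1 = ((x2 - x) * q11 + (x - x1) * q21) / (x2 - x1)  # f(x, y1)
    fxy2 = ((x2 - x) * q12 + (x - x1) * q22) / (x2 - x1)  # f(x, y2)
    # Then linear interpolation in the y-direction:
    return ((y2 - y) * fxy1 + (y - y1) * fxy2) / (y2 - y1)

# Sanity check: the interpolant reproduces a corner value exactly.
assert bilinear(0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 5.0, 6.0, 7.0, 8.0) == 5.0
</syntaxhighlight>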
Polynomial fit.
An alternative way is to write the solution to the interpolation problem as a multilinear polynomial
formula_2
where the coefficients are found by solving the linear system
formula_3
yielding the result
formula_4
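In practice the coefficients can also be obtained numerically rather than from the closed-form inverse. A minimal NumPy sketch, assuming the four corner values are given as plain floats:
<syntaxhighlight lang="python">
import numpy as np

def bilinear_coefficients(x1, x2, y1, y2, q11, q12, q21, q22):
    """Solve for (a00, a10, a01, a11) in f(x,y) ~ a00 + a10*x + a01*y + a11*x*y."""
    A = np.array([[1.0, x1, y1, x1 * y1],
                  [1.0, x1, y2, x1 * y2],
                  [1.0, x2, y1, x2 * y1],
                  [1.0, x2, y2, x2 * y2]])
    b = np.array([q11, q12, q21, q22])
    return np.linalg.solve(A, b)  # the 4x4 system above
</syntaxhighlight>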
Weighted mean.
The solution can also be written as a weighted mean of the "f"("Q"):
formula_5
where the weights sum to 1 and satisfy the transposed linear system
formula_6
yielding the result
formula_7
which simplifies to
formula_8
in agreement with the result obtained by repeated linear interpolation. The set of weights can also be interpreted as a set of generalized barycentric coordinates for a rectangle.
Alternative matrix form.
Combining the above, we have
formula_9
On the unit square.
If we choose a coordinate system in which the four points where "f" is known are (0, 0), (0, 1), (1, 0), and (1, 1), then the interpolation formula simplifies to
formula_10
or equivalently, in matrix operations:
formula_11
Here we also recognize the weights:
formula_12
Alternatively, the interpolant on the unit square can be written as
formula_13
where
formula_14
In both cases, the number of constants (four) corresponds to the number of data points where "f" is given.
Properties.
Despite its name, the bilinear interpolant is "not" linear; but it is linear (i.e. affine) along lines parallel to either the "x" or the "y" direction, equivalently if "x" or "y" is held constant. Along any other straight line, the interpolant is quadratic. Even though the interpolation is "not" linear in the position ("x" and "y"), at a fixed point it "is" linear in the interpolation values, as can be seen in the (matrix) equations above.
The result of bilinear interpolation is independent of which axis is interpolated first and which second. If we had first performed the linear interpolation in the "y" direction and then in the "x" direction, the resulting approximation would be the same.
The interpolant is a bilinear polynomial, which is also a harmonic function satisfying Laplace's equation. Its graph is a bilinear Bézier surface patch.
Inverse and generalization.
In general, the interpolant will assume any value (in the convex hull of the vertex values) at an infinite number of points (forming branches of hyperbolas), so the interpolation is not invertible.
However, when bilinear interpolation is applied to two functions simultaneously, such as when interpolating a vector field, then the interpolation is invertible (under certain conditions). In particular, this inverse can be used to find the "unit square coordinates" of a point inside any convex quadrilateral (by considering the coordinates of the quadrilateral as a vector field which is bilinearly interpolated on the unit square). Using this procedure bilinear interpolation can be extended to any convex quadrilateral, though the computation is significantly more complicated if it is not a parallelogram. The resulting map between quadrilaterals is known as a "bilinear transformation", "bilinear warp" or "bilinear distortion".
Alternatively, a projective mapping between a quadrilateral and the unit square may be used, but the resulting interpolant will not be bilinear.
In the special case when the quadrilateral is a parallelogram, a linear mapping to the unit square exists and the generalization follows easily.
The obvious extension of bilinear interpolation to three dimensions is called trilinear interpolation.
Application in image processing.
In computer vision and image processing, bilinear interpolation is used to resample images and textures. An algorithm is used to map a screen pixel location to a corresponding point on the texture map. A weighted average of the attributes (color, transparency, etc.) of the four surrounding texels is computed and applied to the screen pixel. This process is repeated for each pixel forming the object being textured.
When an image needs to be scaled up, each pixel of the original image needs to be moved in a certain direction based on the scale constant. However, when scaling up an image by a non-integral scale factor, there are pixels (i.e., "holes") that are not assigned appropriate pixel values. In this case, those "holes" should be assigned appropriate RGB or grayscale values so that the output image does not have non-valued pixels.
Bilinear interpolation can be used where perfect image transformation with pixel matching is impossible, so that one can calculate and assign appropriate intensity values to pixels. Unlike other interpolation techniques such as nearest-neighbor interpolation and bicubic interpolation, bilinear interpolation uses values of only the 4 nearest pixels, located in diagonal directions from a given pixel, in order to find the appropriate color intensity values of that pixel.
Bilinear interpolation considers the closest 2 × 2 neighborhood of known pixel values surrounding the unknown pixel's computed location. It then takes a weighted average of these 4 pixels to arrive at its final, interpolated value.
Example.
As an example, the intensity value at the pixel computed to be at row 20.2, column 14.5 can be calculated by first linearly interpolating between the values at columns 14 and 15 on each of rows 20 and 21, giving
formula_15
and then interpolating linearly between these values, giving
formula_16
This algorithm reduces some of the visual distortion caused by resizing an image to a non-integral zoom factor, as opposed to nearest-neighbor interpolation, which will make some pixels appear larger than others in the resized image.
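For concreteness, the worked example can be reproduced in a few lines of Python; the four intensity values are the ones quoted above.
<syntaxhighlight lang="python">
def lerp(a, b, t):
    """Linear interpolation between a and b with parameter t in [0, 1]."""
    return a + t * (b - a)

# Intensities at (row, column): I[20,14]=91, I[20,15]=210, I[21,14]=162, I[21,15]=95.
i_row20 = lerp(91, 210, 14.5 - 14)        # 150.5
i_row21 = lerp(162, 95, 14.5 - 14)        # 128.5
print(lerp(i_row20, i_row21, 20.2 - 20))  # 146.1
</syntaxhighlight>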
A Simplification of Terms.
This example uses tabulated pressure (columns) versus temperature (rows) data as a lookup table for some variable "V".
The following standard calculation by parts requires 18 operations.
formula_17
This can be simplified from the initial 18 individual operations to 16 individual operations, as follows:
formula_18
The above expression contains two repeated operations:
formula_19
These two repeated terms can be assigned to temporary variables while computing a single interpolation, which reduces the number of calculations to 14 operations, the minimum number of steps required to produce the desired interpolation. Performing the interpolation in 14 rather than 18 operations makes it about 22% more efficient.
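A sketch of the factored form in Python, with the two repeated differences hoisted into temporaries (variable and function names are illustrative):
<syntaxhighlight lang="python">
def interpolate_table(P1, P2, T1, T2, V11, V12, V21, V22, Px, Tx):
    """Bilinear lookup of V at pressure Px and temperature Tx, with the
    repeated pressure differences computed once and reused."""
    dP_hi = P2 - Px  # first repeated term
    dP_lo = Px - P1  # second repeated term
    return (((dP_hi * V11 + dP_lo * V12) * (T2 - Tx)
             + (dP_hi * V21 + dP_lo * V22) * (Tx - T1))
            / ((P2 - P1) * (T2 - T1)))
</syntaxhighlight>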
Simplification of terms is good practice when applying mathematical methods to engineering problems, and can reduce the computational and energy requirements of a process.
{
"math_id": 0,
"text": "\\begin{align}\n f(x, y_1) = \\frac{x_2-x}{x_2-x_1} f(Q_{11}) + \\frac{x-x_1}{x_2-x_1} f(Q_{21}), \\\\\n f(x, y_2) = \\frac{x_2-x}{x_2-x_1} f(Q_{12}) + \\frac{x-x_1}{x_2-x_1} f(Q_{22}).\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}\n f(x,y) &= \\frac{y_2-y}{y_2-y_1} f(x, y_1) + \\frac{y-y_1}{y_2-y_1} f(x, y_2) \\\\\n &= \\frac{y_2-y}{y_2-y_1} \\left( \\frac{x_2-x}{x_2-x_1} f(Q_{11}) + \\frac{x-x_1}{x_2-x_1} f(Q_{21}) \\right) +\n \\frac{y-y_1}{y_2-y_1} \\left( \\frac{x_2-x}{x_2-x_1} f(Q_{12}) + \\frac{x-x_1}{x_2-x_1} f(Q_{22}) \\right) \\\\\n &= \\frac{1}{(x_2-x_1)(y_2-y_1)} \\left( f(Q_{11})(x_2-x)(y_2-y) + f(Q_{21})(x-x_1)(y_2-y)+ f(Q_{12})(x_2-x)(y-y_1) + f(Q_{22})(x-x_1)(y-y_1) \\right) \\\\\n &= \\frac{1}{(x_2-x_1)(y_2-y_1)} \\begin{bmatrix} x_2-x & x-x_1 \\end{bmatrix} \\begin{bmatrix} f(Q_{11}) & f(Q_{12}) \\\\ f(Q_{21}) & f(Q_{22}) \\end{bmatrix} \\begin{bmatrix}\ny_2-y \\\\ y-y_1 \\end{bmatrix}.\n\\end{align}"
},
{
"math_id": 2,
"text": "f(x, y) \\approx a_{00} + a_{10}x + a_{01}y + a_{11}xy,"
},
{
"math_id": 3,
"text": "\\begin{align}\n\\begin{bmatrix}\n1 & x_1 & y_1 & x_1 y_1 \\\\\n1 & x_1 & y_2 & x_1 y_2 \\\\\n1 & x_2 & y_1 & x_2 y_1 \\\\\n1 & x_2 & y_2 & x_2 y_2 \n\\end{bmatrix}\\begin{bmatrix}\na_{00}\\\\a_{10}\\\\a_{01}\\\\a_{11}\n\\end{bmatrix} = \\begin{bmatrix}\nf(Q_{11})\\\\f(Q_{12})\\\\f(Q_{21})\\\\f(Q_{22})\n\\end{bmatrix},\n\\end{align}"
},
{
"math_id": 4,
"text": "\\begin{align}\n\\begin{bmatrix}\na_{00}\\\\a_{10}\\\\a_{01}\\\\a_{11}\n\\end{bmatrix} = \\frac{1}{(x_2-x_1)(y_2-y_1)}\\begin{bmatrix}\nx_2y_2 & -x_2y_1 & -x_1y_2 & x_1y_1 \\\\\n-y_2 & y_1 & y_2 & -y_1 \\\\\n-x_2 & x_2 & x_1 & -x_1 \\\\\n1 & -1 & -1 & 1 \n\\end{bmatrix}\\begin{bmatrix}\nf(Q_{11})\\\\f(Q_{12})\\\\f(Q_{21})\\\\f(Q_{22})\n\\end{bmatrix}.\n\\end{align}"
},
{
"math_id": 5,
"text": "f(x, y) \\approx w_{11} f(Q_{11}) + w_{12} f(Q_{12}) + w_{21} f(Q_{21}) + w_{22} f(Q_{22}),"
},
{
"math_id": 6,
"text": "\n \\begin{bmatrix}\n 1 & 1 & 1 & 1 \\\\\n x_1 & x_1 & x_2 & x_2 \\\\\n y_1 & y_2 & y_1 & y_2 \\\\\n x_1y_1 & x_1y_2 & x_2y_1 & x_2y_2\n\\end{bmatrix}\n\\begin{bmatrix}\n w_{11} \\\\ w_{12} \\\\ w_{21} \\\\ w_{22}\n\\end{bmatrix} =\n\\begin{bmatrix}\n 1 \\\\ x \\\\ y \\\\ xy\n\\end{bmatrix},\n"
},
{
"math_id": 7,
"text": "\\begin{align}\n\\begin{bmatrix}\nw_{11}\\\\w_{21}\\\\w_{12}\\\\w_{22}\n\\end{bmatrix} = \\frac{1}{(x_2-x_1)(y_2-y_1)}\\begin{bmatrix}\nx_2y_2 & -y_2 & -x_2 & 1 \\\\\n-x_2y_1 & y_1 & x_2 & -1 \\\\\n-x_1y_2 & y_2 & x_1 & -1 \\\\\nx_1y_1 & -y_1 & -x_1 & 1 \n\\end{bmatrix}\\begin{bmatrix}\n1\\\\x\\\\y\\\\xy\n\\end{bmatrix},\n\\end{align}"
},
{
"math_id": 8,
"text": "\\begin{align}\n w_{11} &= (x_2 - x )(y_2 - y ) / ((x_2 - x_1)(y_2 - y_1)), \\\\\n w_{12} &= (x_2 - x )(y - y_1) / ((x_2 - x_1)(y_2 - y_1)), \\\\\n w_{21} &= (x - x_1)(y_2 - y ) / ((x_2 - x_1)(y_2 - y_1)), \\\\\n w_{22} &= (x - x_1)(y - y_1) / ((x_2 - x_1)(y_2 - y_1)),\n\\end{align}"
},
{
"math_id": 9,
"text": "\\begin{align}\nf(x,y) \\approx \\frac{1}{(x_2-x_1)(y_2-y_1)}\n\\begin{bmatrix}f(Q_{11}) & f(Q_{12}) & f(Q_{21}) & f(Q_{22})\\end{bmatrix}\\begin{bmatrix}\nx_2y_2 & -y_2 & -x_2 & 1 \\\\\n-x_2y_1 & y_1 & x_2 & -1 \\\\\n-x_1y_2 & y_2 & x_1 & -1 \\\\\nx_1y_1 & -y_1 & -x_1 & 1 \n\\end{bmatrix}\\begin{bmatrix}\n1\\\\x\\\\y\\\\xy\n\\end{bmatrix}.\n\\end{align}"
},
{
"math_id": 10,
"text": "f(x, y) \\approx f(0, 0) (1 - x)(1 - y) + f(0, 1) (1 - x)y + f(1, 0) x(1 - y) + f(1, 1) xy, "
},
{
"math_id": 11,
"text": "f(x, y) \\approx \\begin{bmatrix} 1 - x & x \\end{bmatrix} \\begin{bmatrix} f(0, 0) & f(0, 1) \\\\ f(1, 0) & f(1, 1) \\end{bmatrix} \\begin{bmatrix} 1 - y \\\\ y \\end{bmatrix}."
},
{
"math_id": 12,
"text": "\\begin{align}\n w_{11} &= (1 - x)(1 - y), \\\\\n w_{12} &= (1 - x)y, \\\\\n w_{21} &= x(1 - y), \\\\\n w_{22} &= xy.\n\\end{align}"
},
{
"math_id": 13,
"text": "f(x, y) = a_{00} + a_{10}x + a_{01}y + a_{11}xy,"
},
{
"math_id": 14,
"text": "\\begin{align}\n a_{00} &= f(0, 0), \\\\\n a_{10} &= f(1, 0) - f(0, 0), \\\\\n a_{01} &= f(0, 1) - f(0, 0), \\\\\n a_{11} &= f(1, 1) - f(1, 0) - f(0, 1) + f(0, 0).\n\\end{align}"
},
{
"math_id": 15,
"text": "\\begin{align}\n I_{20, 14.5} &= \\frac{15 - 14.5}{15 - 14} \\cdot 91 + \\frac{14.5 - 14}{15 - 14} \\cdot 210 = 150.5, \\\\\n I_{21, 14.5} &= \\frac{15 - 14.5}{15 - 14} \\cdot 162 + \\frac{14.5 - 14}{15 - 14} \\cdot 95 = 128.5,\n\\end{align}"
},
{
"math_id": 16,
"text": "I_{20.2, 14.5} = \\frac{21 - 20.2}{21 - 20} \\cdot 150.5 + \\frac{20.2 - 20}{21 - 20} \\cdot 128.5 = 146.1."
},
{
"math_id": 17,
"text": "\\begin{align}\nI_{T_{1}, P_{1}-P_{2}} &= \\frac{P_{2} - P_{x}}{P_{2} - P_{1}} \\cdot V_{11} + \\frac{P_{x} - P_{1}}{P_{2} - P_{1}} \\cdot V_{12} = V_{1x},\\\\\nI_{T_{2}, P_{1}-P_{2}} &= \\frac{P_{2} - P_{x}}{P_{2} - P_{1}} \\cdot V_{21} + \\frac{P_{x} - P_{1}}{P_{2} - P_{1}} \\cdot V_{22} = V_{2x},\\\\\nI_{P_{x}, T_{1}-T_{2}} &= \\frac{T_{2} - T_{x}}{T_{2} - T_{1}} \\cdot V_{1x} + \\frac{T_{x} - T_{1}}{T_{2} - T_{1}} \\cdot V_{2x} = V_{xx}\n\\end{align}"
},
{
"math_id": 18,
"text": "V_{xx} = \\frac{((P_{2} - P_{x}) \\cdot V_{11} + (P_{x} - P_{1}) \\cdot V_{12}) \\cdot (T_{2} - T_{x}) + ((P_{2} - P_{x}) \\cdot V_{21} + (P_{x} - P_{1}) \\cdot V_{22}) \\cdot (T_{x} - T_{1})}{(P_{2} - P_{1}) \\cdot (T_{2} - T_{1})}."
},
{
"math_id": 19,
"text": "(P_{2} - P_{x}), (P_{x} - P_{1})."
}
]
| https://en.wikipedia.org/wiki?curid=674484 |
67454670 | Quantum state discrimination | The term quantum state discrimination collectively refers to quantum-informatics techniques, with the help of which, by performing a small number of measurements on a physical system, its specific quantum state can be identified . And this is provided that the set of states in which the system can be is known in advance, and we only need to determine which one it is. This assumption distinguishes such techniques from quantum tomography, which does not impose additional requirements on the state of the system, but requires many times more measurements.
If the set of states in which the investigated system can be is represented by mutually orthogonal vectors, the situation is particularly simple: to determine the state of the system unambiguously, it suffices to perform a quantum measurement in the basis formed by these vectors, and the given quantum state can then be identified without error from the measured value. Conversely, it can easily be shown that if the individual states are not mutually orthogonal, there is no way to tell them apart with certainty, so the possibility of an incorrect or inconclusive determination of the state must always be taken into account. Techniques exist to alleviate this deficiency; with few exceptions, they fall into two groups: those based on minimizing the error probability, and those that allow the state to be determined unambiguously at the cost of lower efficiency.
The first group of techniques is based on the works of Carl W. Helstrom from the 1960s and 1970s and, in its basic form, consists in performing a projective quantum measurement in which the measurement operators are projectors. The second group is based on the conclusions of a paper published by I. D. Ivanovic in 1987 and requires the use of a generalized measurement, in which the elements of a POVM (positive operator-valued measure) are taken as measurement operators. Both groups of techniques are currently the subject of active, primarily theoretical, research, and apart from a number of special cases there is no general solution that would allow the measurement operators to be chosen in the form of an explicit analytical formula.
More precisely, in its standard formulation, the problem involves performing some POVM formula_0 on a given unknown state formula_1, under the promise that the state received is an element of a collection of states formula_2, with formula_3 occurring with probability formula_4, that is, formula_5. The task is then to find the probability of the POVM formula_0 correctly guessing which state was received. Since the probability of the POVM returning the formula_6-th outcome when the given state was formula_7 has the form formula_8, it follows that the probability of successfully determining the correct state is formula_9.
Helstrom Measurement.
The discrimination of two states can be solved optimally using the Helstrom measurement. With the two states formula_10 come two prior probabilities formula_11 and two POVM elements formula_12. Since formula_13 for any POVM, formula_14. The probability of success is therefore:
formula_15
To maximize the probability of success, the trace needs to be maximized. This is accomplished when formula_16 is the projector onto the positive eigenspace of formula_17, and the maximal probability of success is given by
formula_18
where formula_19 denotes the trace norm.
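A minimal NumPy sketch of this result computes the trace norm from the eigenvalues of the Hermitian operator formula_17; the two equiprobable pure qubit states in the example are hypothetical.
<syntaxhighlight lang="python">
import numpy as np

def helstrom_success(p0, rho0, p1, rho1):
    """Optimal two-state discrimination probability:
    1/2 + (1/2) * || p0*rho0 - p1*rho1 ||_1."""
    gamma = p0 * rho0 - p1 * rho1
    eigenvalues = np.linalg.eigvalsh(gamma)  # gamma is Hermitian
    return 0.5 + 0.5 * np.abs(eigenvalues).sum()

# Equiprobable pure states |0> and |+>:
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
print(helstrom_success(0.5, np.outer(ket0, ket0),
                       0.5, np.outer(ketp, ketp)))  # ~0.8536
</syntaxhighlight>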
Discriminating between multiple states.
If the task is to discriminate between more than two quantum states, there is no general formula for the optimal POVM and success probability. Nonetheless, the optimal success probability for the task of discriminating between the elements of a given ensemble formula_20 can always be written as formula_21 This is obtained by observing that formula_22 is the "a priori" probability of getting the formula_23-th state, and formula_24 is the probability of (correctly) guessing the input to be formula_25, conditioned on having indeed received the state formula_25.
While this expression cannot be given an explicit form in the general case, it can be solved numerically via semidefinite programming. An alternative approach to discriminating between the states of a given ensemble is to use the so-called Pretty Good Measurement (PGM), also known as the square root measurement. This is an alternative discrimination strategy that is not in general optimal, but can still be shown to work pretty well.
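As a sketch of the latter strategy, the following NumPy snippet builds the square root measurement for an ensemble of density matrices and evaluates its success probability; the inverse square root is taken on the support of formula_1 only, and the 1e-12 cutoff is an arbitrary numerical tolerance.
<syntaxhighlight lang="python">
import numpy as np

def pgm_success(priors, states):
    """Success probability of the square root measurement
    E_i = rho^(-1/2) (p_i sigma_i) rho^(-1/2), rho = sum_i p_i sigma_i."""
    rho = sum(p * s for p, s in zip(priors, states))
    w, V = np.linalg.eigh(rho)
    inv_sqrt = np.array([1.0 / np.sqrt(x) if x > 1e-12 else 0.0 for x in w])
    R = (V * inv_sqrt) @ V.conj().T  # rho^(-1/2) on its support
    return sum(p * np.trace(R @ (p * s) @ R @ s).real
               for p, s in zip(priors, states))

# Two equiprobable orthogonal states are discriminated perfectly:
s0 = np.diag([1.0, 0.0])
s1 = np.diag([0.0, 1.0])
print(pgm_success([0.5, 0.5], [s0, s1]))  # 1.0
</syntaxhighlight>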
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(E_i)_i"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "\\{\\sigma_i\\}_i"
},
{
"math_id": 3,
"text": "\\sigma_i"
},
{
"math_id": 4,
"text": "p_i"
},
{
"math_id": 5,
"text": "\\rho=\\sum_i p_i \\sigma_i"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "\\sigma_j"
},
{
"math_id": 8,
"text": "\\text{Prob}(i|j) = \\operatorname{tr}(E_i \\sigma_j) "
},
{
"math_id": 9,
"text": " P_{\\rm success} = \\sum_{i} p_{i}\\operatorname{tr}(\\sigma_{i} E_{i}) "
},
{
"math_id": 10,
"text": "\\{\\sigma_{0}, \\sigma_{1}\\}"
},
{
"math_id": 11,
"text": "\\{p_{0}, p_{1}\\}"
},
{
"math_id": 12,
"text": "\\{E_{0}, E_{1}\\}"
},
{
"math_id": 13,
"text": "\\sum_{i} E_{i} = I"
},
{
"math_id": 14,
"text": "E_{1} = I - E_{0}"
},
{
"math_id": 15,
"text": "\n\\begin{align}\nP_\\text{success} & = p_{0}\\operatorname{tr}(\\sigma_{0} E_{0}) + p_{1} \\operatorname{tr}(\\sigma_{1} E_{1}) \\\\\n& = p_{0}\\operatorname{tr}(\\sigma_{0} E_{0}) + p_{1} \\operatorname{tr}(\\sigma_{1} I - \\sigma_{1} E_{0}) \\\\\n& = p_{1} + \\operatorname{tr}[(p_{0} \\sigma_{0} - p_{1} \\sigma_{1}) E_{0}]\n\\end{align}\n"
},
{
"math_id": 16,
"text": " E_{0} "
},
{
"math_id": 17,
"text": " p_{0} \\sigma_{0} - p_{1} \\sigma_{1}"
},
{
"math_id": 18,
"text": " P_\\text{success} = \\frac12 + \\frac12\\|p_{0} \\sigma_{0} - p_{1} \\sigma_{1} \\|_1,"
},
{
"math_id": 19,
"text": "\\|\\cdot\\|_1"
},
{
"math_id": 20,
"text": " \\{(p_i,\\sigma_i)\\}_{i=1}^N "
},
{
"math_id": 21,
"text": " P_{\\rm success} = \\max_{ \\{E_i\\} }\\sum_i p_i \\operatorname{tr}(E_i \\sigma_i). "
},
{
"math_id": 22,
"text": " p_i "
},
{
"math_id": 23,
"text": " i "
},
{
"math_id": 24,
"text": " \\operatorname{tr}(E_i \\sigma_i) "
},
{
"math_id": 25,
"text": " \\sigma_i "
}
]
| https://en.wikipedia.org/wiki?curid=67454670 |
674582 | Trilinear interpolation | Method of multivariate interpolation on a 3-dimensional regular grid
Trilinear interpolation is a method of multivariate interpolation on a 3-dimensional regular grid. It approximates the value of a function at an intermediate point formula_0 within the local axial rectangular prism linearly, using function data on the lattice points. For an arbitrary, unstructured mesh (as used in finite element analysis), other methods of interpolation must be used; if all the mesh elements are tetrahedra (3D simplices), then barycentric coordinates provide a straightforward procedure.
Trilinear interpolation is frequently used in numerical analysis, data analysis, and computer graphics.
Compared to linear and bilinear interpolation.
Trilinear interpolation is the extension of linear interpolation, which operates in spaces with dimension formula_1, and bilinear interpolation, which operates with dimension formula_2, to dimension formula_3. These interpolation schemes all use polynomials of order 1, giving an accuracy of order 2, and they require formula_4 adjacent pre-defined values surrounding the interpolation point. There are several ways to arrive at trilinear interpolation, which is equivalent to 3-dimensional tensor B-spline interpolation of order 1, and the trilinear interpolation operator is also a tensor product of 3 linear interpolation operators.
Method.
On a periodic and cubic lattice, let formula_5, formula_6, and formula_7
be the differences between each of formula_8, formula_9, formula_10 and the smaller coordinate related, that is:
formula_11
where formula_12 indicates the lattice point below formula_13, and formula_14 indicates the lattice point above formula_13 and similarly for
formula_15 and formula_16.
First we interpolate along formula_8 (imagine we are "pushing" the face of the cube defined by formula_17 to the opposing face, defined by formula_18), giving:
formula_19
Where formula_20 means the function value of formula_21 Then we interpolate these values (along formula_9, "pushing" from formula_22 to formula_23), giving:
formula_24
Finally we interpolate these values along formula_10 (walking through a line):
formula_25
This gives us a predicted value for the point.
The result of trilinear interpolation is independent of the order of the interpolation steps along the three axes: any other order, for instance along formula_9, then along formula_10, and finally along formula_8, produces the same value.
The above operations can be visualized as follows: First we find the eight corners of a cube that surround our point of interest. These corners have the values formula_20, formula_26, formula_27, formula_28, formula_29, formula_30, formula_31, formula_32.
Next, we perform linear interpolation between formula_20 and formula_26 to find formula_33, formula_29 and formula_30 to find formula_34, formula_31 and formula_32 to find formula_35, formula_27 and formula_28 to find formula_36.
Now we do interpolation between formula_33 and formula_36 to find formula_37, formula_34 and formula_35 to find formula_38. Finally, we calculate the value formula_39 via linear interpolation of formula_37 and formula_38
In practice, a trilinear interpolation is identical to two bilinear interpolations combined with a linear interpolation:
formula_40
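A minimal Python sketch of the procedure, following the same x-then-y-then-z order; the argument names mirror the notation above.
<syntaxhighlight lang="python">
def trilinear(xd, yd, zd, c000, c100, c010, c110, c001, c101, c011, c111):
    """Trilinear interpolation from the eight corner values, where
    xd, yd, zd in [0, 1] are the normalized offsets defined above."""
    # Interpolate along x:
    c00 = c000 * (1 - xd) + c100 * xd
    c01 = c001 * (1 - xd) + c101 * xd
    c10 = c010 * (1 - xd) + c110 * xd
    c11 = c011 * (1 - xd) + c111 * xd
    # Then along y:
    c0 = c00 * (1 - yd) + c10 * yd
    c1 = c01 * (1 - yd) + c11 * yd
    # Finally along z:
    return c0 * (1 - zd) + c1 * zd
</syntaxhighlight>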
Alternative algorithm.
An alternative way to write the solution to the interpolation problem is
formula_41
where the coefficients are found by solving the linear system
formula_42
yielding the result
formula_43 | [
{
"math_id": 0,
"text": "(x, y, z)"
},
{
"math_id": 1,
"text": "D = 1"
},
{
"math_id": 2,
"text": "D = 2"
},
{
"math_id": 3,
"text": "D = 3"
},
{
"math_id": 4,
"text": "2^D = 8"
},
{
"math_id": 5,
"text": "x_\\text{d}"
},
{
"math_id": 6,
"text": "y_\\text{d}"
},
{
"math_id": 7,
"text": "z_\\text{d}"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "y"
},
{
"math_id": 10,
"text": "z"
},
{
"math_id": 11,
"text": "\\begin{align}\n x_\\text{d} = \\frac{x - x_0}{x_1 - x_0} \\\\\n y_\\text{d} = \\frac{y - y_0}{y_1 - y_0} \\\\\n z_\\text{d} = \\frac{z - z_0}{z_1 - z_0}\n\\end{align}"
},
{
"math_id": 12,
"text": " x_0 "
},
{
"math_id": 13,
"text": " x "
},
{
"math_id": 14,
"text": " x_1 "
},
{
"math_id": 15,
"text": "y_0, y_1, z_0"
},
{
"math_id": 16,
"text": "z_1"
},
{
"math_id": 17,
"text": "C_{0jk}"
},
{
"math_id": 18,
"text": "C_{1jk}"
},
{
"math_id": 19,
"text": "\\begin{align}\n c_{00} &= c_{000} (1 - x_\\text{d}) + c_{100} x_\\text{d} \\\\\n c_{01} &= c_{001} (1 - x_\\text{d}) + c_{101} x_\\text{d} \\\\\n c_{10} &= c_{010} (1 - x_\\text{d}) + c_{110} x_\\text{d} \\\\\n c_{11} &= c_{011} (1 - x_\\text{d}) + c_{111} x_\\text{d}\n\\end{align}"
},
{
"math_id": 20,
"text": "c_{000}"
},
{
"math_id": 21,
"text": " (x_0, y_0, z_0). "
},
{
"math_id": 22,
"text": "C_{i0k}"
},
{
"math_id": 23,
"text": "C_{i1k}"
},
{
"math_id": 24,
"text": "\\begin{align}\n c_0 &= c_{00}(1 - y_\\text{d}) + c_{10}y_\\text{d} \\\\\n c_1 &= c_{01}(1 - y_\\text{d}) + c_{11}y_\\text{d}\n\\end{align}"
},
{
"math_id": 25,
"text": "c = c_0(1 - z_\\text{d}) + c_1z_\\text{d} ."
},
{
"math_id": 26,
"text": "c_{100}"
},
{
"math_id": 27,
"text": "c_{010}"
},
{
"math_id": 28,
"text": "c_{110}"
},
{
"math_id": 29,
"text": "c_{001}"
},
{
"math_id": 30,
"text": "c_{101}"
},
{
"math_id": 31,
"text": "c_{011}"
},
{
"math_id": 32,
"text": "c_{111}"
},
{
"math_id": 33,
"text": "c_{00}"
},
{
"math_id": 34,
"text": "c_{01}"
},
{
"math_id": 35,
"text": "c_{11}"
},
{
"math_id": 36,
"text": "c_{10}"
},
{
"math_id": 37,
"text": "c_{0}"
},
{
"math_id": 38,
"text": "c_{1}"
},
{
"math_id": 39,
"text": "c"
},
{
"math_id": 40,
"text": "c \\approx l\\left( b(c_{000}, c_{010}, c_{100}, c_{110}),\\, b(c_{001}, c_{011}, c_{101}, c_{111})\\right)"
},
{
"math_id": 41,
"text": "f(x, y, z) \\approx a_0 + a_1 x + a_2 y + a_3 z + a_4 x y + a_5 x z + a_6 y z + a_7 x y z"
},
{
"math_id": 42,
"text": "\\begin{align}\n \\begin{bmatrix}\n 1 & x_0 & y_0 & z_0 & x_0 y_0 & x_0 z_0 & y_0 z_0 & x_0 y_0 z_0 \\\\\n 1 & x_1 & y_0 & z_0 & x_1 y_0 & x_1 z_0 & y_0 z_0 & x_1 y_0 z_0 \\\\\n 1 & x_0 & y_1 & z_0 & x_0 y_1 & x_0 z_0 & y_1 z_0 & x_0 y_1 z_0 \\\\\n 1 & x_1 & y_1 & z_0 & x_1 y_1 & x_1 z_0 & y_1 z_0 & x_1 y_1 z_0 \\\\\n 1 & x_0 & y_0 & z_1 & x_0 y_0 & x_0 z_1 & y_0 z_1 & x_0 y_0 z_1 \\\\\n 1 & x_1 & y_0 & z_1 & x_1 y_0 & x_1 z_1 & y_0 z_1 & x_1 y_0 z_1 \\\\\n 1 & x_0 & y_1 & z_1 & x_0 y_1 & x_0 z_1 & y_1 z_1 & x_0 y_1 z_1 \\\\\n 1 & x_1 & y_1 & z_1 & x_1 y_1 & x_1 z_1 & y_1 z_1 & x_1 y_1 z_1 \n \\end{bmatrix}\\begin{bmatrix}\n a_0 \\\\ a_1 \\\\ a_2 \\\\ a_3 \\\\ a_4 \\\\ a_5 \\\\ a_6 \\\\ a_7\n \\end{bmatrix} = \\begin{bmatrix}\n c_{000} \\\\ c_{100} \\\\ c_{010} \\\\ c_{110} \\\\ c_{001} \\\\ c_{101} \\\\ c_{011} \\\\ c_{111}\n \\end{bmatrix},\n\\end{align}"
},
{
"math_id": 43,
"text": "\\begin{align}\n a_0 ={}\n &\\frac{-c_{000} x_1 y_1 z_1 + c_{001} x_1 y_1 z_0 + c_{010} x_1 y_0 z_1 - c_{011} x_1 y_0 z_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)} +{} \\\\\n &\\frac{ c_{100} x_0 y_1 z_1 - c_{101} x_0 y_1 z_0 - c_{110} x_0 y_0 z_1 + c_{111} x_0 y_0 z_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)}, \\\\[4pt]\n a_1 ={}\n &\\frac{ c_{000} y_1 z_1 - c_{001} y_1 z_0 - c_{010} y_0 z_1 + c_{011} y_0 z_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)} +{} \\\\\n &\\frac{-c_{100} y_1 z_1 + c_{101} y_1 z_0 + c_{110} y_0 z_1 - c_{111} y_0 z_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)}, \\\\[4pt]\n a_2 ={}\n &\\frac{ c_{000} x_1 z_1 - c_{001} x_1 z_0 - c_{010} x_1 z_1 + c_{011} x_1 z_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)} +{} \\\\\n &\\frac{-c_{100} x_0 z_1 + c_{101} x_0 z_0 + c_{110} x_0 z_1 - c_{111} x_0 z_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)}, \\\\[4pt]\n a_3 ={}\n &\\frac{ c_{000} x_1 y_1 - c_{001} x_1 y_1 - c_{010} x_1 y_0 + c_{011} x_1 y_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)} +{} \\\\\n &\\frac{-c_{100} x_0 y_1 + c_{101} x_0 y_1 + c_{110} x_0 y_0 - c_{111} x_0 y_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)}, \\\\[4pt]\n a_4 ={}\n &\\frac{-c_{000} z_1 + c_{001} z_0 + c_{010} z_1 - c_{011} z_0 + c_{100} z_1 - c_{101} z_0 - c_{110} z_1 + c_{111} z_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)}, \\\\[4pt]\n a_5 =\n &\\frac{-c_{000} y_1 + c_{001} y_1 + c_{010} y_0 - c_{011} y_0 + c_{100} y_1 - c_{101} y_1 - c_{110} y_0 + c_{111} y_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)}, \\\\[4pt]\n a_6 ={}\n &\\frac{-c_{000} x_1 + c_{001} x_1 + c_{010} x_1 - c_{011} x_1 + c_{100} x_0 - c_{101} x_0 - c_{110} x_0 + c_{111} x_0}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)}, \\\\[4pt]\n a_7 ={}\n &\\frac{ c_{000} - c_{001} - c_{010} + c_{011} - c_{100} + c_{101} + c_{110} - c_{111}}{(x_0 - x_1) (y_0 - y_1) (z_0 - z_1)}.\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=674582 |