id | title | text | formulas | url
---|---|---|---|---|
13333998 | Dielectric resonator antenna | A dielectric resonator antenna (DRA) is a radio antenna mostly used at microwave frequencies and higher, that consists of a block of ceramic material of various shapes, the dielectric resonator, mounted on a metal surface, a ground plane. Radio waves are introduced into the inside of the resonator material from the transmitter circuit and bounce back and forth between the resonator walls, forming standing waves. The walls of the resonator are partially transparent to radio waves, allowing the radio power to radiate into space.
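As a rough illustration of the size scaling implied by the formula list attached to this entry (the resonator's characteristic dimension is commonly cited as being on the order of λ0/√εr, with εr roughly 10–100), here is a minimal Python sketch; the 10 GHz design frequency and the specific permittivity values are only examples, not data from this article:
```python
import math

def dra_characteristic_dimension(freq_hz: float, eps_r: float) -> float:
    """Rule-of-thumb DRA size: on the order of lambda_0 / sqrt(eps_r)."""
    c = 299_792_458.0                 # speed of light in vacuum, m/s
    lambda_0 = c / freq_hz            # free-space wavelength
    return lambda_0 / math.sqrt(eps_r)

# At 10 GHz the free-space wavelength is about 30 mm, so a resonator with
# eps_r between 10 and 100 is only a few millimetres across.
for eps_r in (10, 40, 100):
    size_mm = dra_characteristic_dimension(10e9, eps_r) * 1e3
    print(f"eps_r = {eps_r:>3}: ~{size_mm:.1f} mm")
```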
An advantage of dielectric resonator antennas is that they lack metal parts, which become lossy at high frequencies and dissipate energy. These antennas can therefore have lower losses and be more efficient than metal antennas at high microwave and millimeter-wave frequencies. Dielectric waveguide antennas are used in some compact portable wireless devices and in military millimeter-wave radar equipment. The antenna was first proposed by Robert Richtmyer in 1939. In 1982, Long et al. carried out the first design and test of dielectric resonator antennas, using a leaky-waveguide model that treats the dielectric surface as a magnetic conductor. In that first investigation, Long et al. exploited the "HEM11d" mode of a cylindrical ceramic block to radiate broadside. Three decades later, in 2012, Guha introduced another mode ("HEM12d") with an identical broadside pattern.
An antenna-like effect is achieved by the periodic swing of electrons between the capacitive element and the ground plane, which behaves like an inductor. The authors further argued that the operation of a dielectric antenna resembles the antenna conceived by Marconi; the only difference is that the inductive element is replaced by the dielectric material.
Features.
Dielectric resonator antennas offer the following attractive features:
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\lambda_0} {\\sqrt{\\varepsilon_r}}"
},
{
"math_id": 1,
"text": "\\lambda_0"
},
{
"math_id": 2,
"text": "\\varepsilon_r"
},
{
"math_id": 3,
"text": "\\varepsilon_r\\approx10-100"
}
]
| https://en.wikipedia.org/wiki?curid=13333998 |
133345 | Dead reckoning | Means of calculating position
In navigation, dead reckoning is the process of calculating the current position of a moving object by using a previously determined position, or fix, and incorporating estimates of speed, heading (or direction or course), and elapsed time. The corresponding term in biology, to describe the processes by which animals update their estimates of position or heading, is path integration.
Advances in navigational aids that give accurate information on position, in particular satellite navigation using the Global Positioning System, have made simple dead reckoning by humans obsolete for most purposes. However, inertial navigation systems, which provide very accurate directional information, use dead reckoning and are very widely applied.
Etymology.
Contrary to myth, the term "dead reckoning" was not originally used to abbreviate "deduced reckoning", nor is it a misspelling of the term "ded reckoning". The use of "ded" or "deduced reckoning" is not known to have appeared earlier than 1931, much later in history than "dead reckoning", which appeared as early as 1613 in the Oxford English Dictionary. The original intention of "dead" in the term is generally assumed to mean using a stationary object that is "dead in the water" as a basis for calculations. Additionally, at the time of the first appearance of "dead reckoning", "ded" was considered a common spelling of "dead". This potentially led to later confusion about the origin of the term.
By analogy with their navigational use, the words "dead reckoning" are also used to mean the process of estimating the value of any variable quantity by using an earlier value and adding whatever changes have occurred in the meantime. Often, this usage implies that the changes are not known accurately. The earlier value and the changes may be measured or calculated quantities.
Errors.
While dead reckoning can give the best available information on the present position with little math or analysis, it is subject to significant errors of approximation. For precise positional information, both speed and direction must be accurately known at all times during travel. Most notably, dead reckoning does not account for directional drift during travel through a fluid medium. These errors tend to compound themselves over greater distances, making dead reckoning a difficult method of navigation for longer journeys.
For example, if displacement is measured by the number of rotations of a wheel, any discrepancy between the actual and assumed traveled distance per rotation, due perhaps to slippage or surface irregularities, will be a source of error. As each estimate of position is relative to the previous one, errors are cumulative, or compounding, over time.
The accuracy of dead reckoning can be increased significantly by using other, more reliable methods to get a new fix part way through the journey. For example, if one was navigating on land in poor visibility, then dead reckoning could be used to get close enough to the known position of a landmark to be able to see it, before walking to the landmark itself—giving a precisely known starting point—and then setting off again.
Localization of mobile sensor nodes.
Localizing a static sensor node is not a difficult task, because attaching a Global Positioning System (GPS) device suffices for localization. But a mobile sensor node, which continuously changes its geographical location with time, is difficult to localize. Mobile sensor nodes are mostly used within some particular domain for data collection, e.g., a sensor node attached to an animal within a grazing field or attached to a soldier on a battlefield. In these scenarios a GPS device for each sensor node cannot be afforded, for reasons including the cost, size and battery drain of constrained sensor nodes.
To overcome this problem, a limited number of reference nodes (with GPS) within a field are employed. These nodes continuously broadcast their locations, and other nodes in proximity receive these locations and calculate their own positions using a mathematical technique such as trilateration. At least three known reference locations are necessary for localization. Several localization algorithms based on the Sequential Monte Carlo (SMC) method have been proposed in the literature. Sometimes a node receives only two known locations, and hence it becomes impossible to localize it. To overcome this problem, the dead reckoning technique is used: a sensor node uses its previously calculated location for localization at later time intervals. For example, at time instant 1, if node A calculates its position as "loca_1" with the help of three known reference locations, then at time instant 2 it uses "loca_1" along with two other reference locations received from two other reference nodes. This not only localizes a node in less time but also works in places where it is difficult to receive three reference locations.
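A minimal sketch of this scheme, assuming a flat 2-D field and idealized range measurements (all coordinates, ranges, and node labels here are hypothetical): the node solves a small least-squares trilateration problem from three references, and at the next interval reuses its own previous estimate as one of the three.
```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares 2-D position from >= 3 (x, y) anchors and their ranges.

    Linearizes the range equations by subtracting the last anchor's equation
    from the others, then solves the resulting linear system.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x_n, y_n = anchors[-1]
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - (x_n ** 2 + y_n ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Time 1: three GPS-equipped reference nodes are heard -> loca_1 (about (5, 5)).
loca_1 = trilaterate([(0, 0), (10, 0), (0, 10)], [7.1, 7.1, 7.1])

# Time 2: only two references are heard; the dead-reckoned loca_1 serves as the
# third "reference", with a range equal to how far the node estimates it moved.
loca_2 = trilaterate([(10, 10), (20, 0), tuple(loca_1)], [6.0, 14.0, 2.0])
print(loca_1, loca_2)
```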
Animal navigation.
In studies of animal navigation, dead reckoning is more commonly (though not exclusively) known as path integration. Animals use it to estimate their current location based on their movements from their last known location. Animals such as ants, rodents, and geese have been shown to track their locations continuously relative to a starting point and to return to it, an important skill for foragers with a fixed home.
Vehicular navigation.
Marine.
In marine navigation a "dead" reckoning plot generally does not take into account the effect of currents or wind. Aboard ship a dead reckoning plot is considered important in evaluating position information and planning the movement of the vessel.
Dead reckoning begins with a known position, or fix, which is then advanced, mathematically or directly on the chart, by means of recorded heading, speed, and time. Speed can be determined by many methods. Before modern instrumentation, it was determined aboard ship using a chip log. More modern methods include pit log referencing engine speed (e.g., in rpm) against a table of total displacement (for ships) or referencing one's indicated airspeed fed by the pressure from a pitot tube. This measurement is converted to an equivalent airspeed based upon known atmospheric conditions and measured errors in the indicated airspeed system. A naval vessel uses a device called a pit sword (rodmeter), which uses two sensors on a metal rod to measure the electromagnetic variance caused by the ship moving through water. This change is then converted to the ship's speed. Distance is determined by multiplying the speed and the time. This initial position can then be adjusted, resulting in an estimated position, by taking into account the current (known as set and drift in marine navigation). If there is no positional information available, a new dead reckoning plot may start from an estimated position. In this case subsequent dead reckoning positions will have taken into account estimated set and drift.
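A rough illustration of advancing a fix by heading, speed, and time, and then applying set and drift, as a plane-sailing sketch in Python (the fix, course, speed, and current values are hypothetical, and the flat-earth approximation is only suitable for short runs away from the poles):
```python
import math

def advance_position(lat, lon, course_deg, speed_kn, hours):
    """Advance a position along a course at a given speed for a time interval.

    Plane-sailing approximation: 1 degree of latitude ~ 60 nautical miles,
    longitude scaled by cos(latitude). Adequate only for short runs.
    """
    distance_nm = speed_kn * hours                       # Distance = Speed x Time
    course = math.radians(course_deg)
    dlat = distance_nm * math.cos(course) / 60.0
    dlon = (distance_nm * math.sin(course) / 60.0) / math.cos(math.radians(lat))
    return lat + dlat, lon + dlon

# Dead-reckoned position: course 090 true at 12 knots for 3 hours from a fix.
dr = advance_position(50.0, -5.0, 90.0, 12.0, 3.0)

# Estimated position: the same run, then corrected for a current setting
# 180 degrees at 2 knots (set and drift) over the same 3 hours.
ep = advance_position(*dr, 180.0, 2.0, 3.0)
print(dr, ep)
```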
Dead reckoning positions are calculated at predetermined intervals and are maintained between fixes. The duration of the interval varies. Factors such as one's speed made good, the nature of heading and other course changes, and the navigator's judgment determine when dead reckoning positions are calculated.
Before the 18th-century development of the marine chronometer by John Harrison and the lunar distance method, dead reckoning was the primary method of determining longitude available to mariners such as Christopher Columbus and John Cabot on their trans-Atlantic voyages. Tools such as the traverse board were developed to enable even illiterate crew members to collect the data needed for dead reckoning. Polynesian navigation, however, uses different wayfinding techniques.
Air.
On 14 June 1919, John Alcock and Arthur Brown took off from Lester's Field in St. John's, Newfoundland in a Vickers Vimy. They navigated across the Atlantic Ocean by dead reckoning and landed in County Galway, Ireland at 8:40 a.m. on 15 June, completing the first non-stop transatlantic flight.
On 21 May 1927 Charles Lindbergh landed in Paris, France after a successful non-stop flight from the United States in the single-engined "Spirit of St. Louis". As the aircraft was equipped with very basic instruments, Lindbergh used dead reckoning to navigate.
Dead reckoning in the air is similar to dead reckoning on the sea, but slightly more complicated. The density of the air the aircraft moves through affects its performance as well as winds, weight, and power settings.
The basic formula for DR is Distance = Speed x Time. An aircraft flying at 250 knots airspeed for 2 hours has flown 500 nautical miles through the air. The wind triangle is used to calculate the effects of wind on heading and airspeed to obtain a magnetic heading to steer and the speed over the ground (groundspeed). Printed tables, formulae, or an E6B flight computer are used to calculate the effects of air density on aircraft rate of climb, rate of fuel burn, and airspeed.
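A minimal sketch of the wind-triangle calculation just described (the wind and airspeed values are made up for illustration; a real flight plan would also correct indicated airspeed for air density and apply magnetic variation):
```python
import math

def wind_triangle(true_course_deg, tas_kn, wind_from_deg, wind_speed_kn):
    """Solve the wind triangle: heading to steer and resulting groundspeed.

    true_course_deg : desired track over the ground (degrees true)
    tas_kn          : true airspeed in knots
    wind_from_deg   : direction the wind is blowing FROM (degrees true)
    wind_speed_kn   : wind speed in knots
    """
    rel = math.radians(wind_from_deg - true_course_deg)
    # Wind correction angle: crab into the wind so the crosswind component cancels.
    wca = math.asin(wind_speed_kn * math.sin(rel) / tas_kn)
    heading = (true_course_deg + math.degrees(wca)) % 360
    groundspeed = tas_kn * math.cos(wca) - wind_speed_kn * math.cos(rel)
    return heading, groundspeed

# The 250 kn / 2 h example above, with a 30 kn wind from 300 degrees while
# tracking 270 degrees: steer slightly right of track, and the ground
# distance covered is groundspeed x time, not airspeed x time.
hdg, gs = wind_triangle(270, 250, 300, 30)
print(f"heading {hdg:.0f} deg, groundspeed {gs:.0f} kn, "
      f"distance in 2 h = {gs * 2:.0f} nm")
```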
A course line is drawn on the aeronautical chart along with estimated positions at fixed intervals (say every half hour). Visual observations of ground features are used to obtain fixes. By comparing the fix and the estimated position, corrections are made to the aircraft's heading and groundspeed.
Dead reckoning is on the curriculum for VFR (visual flight rules – or basic level) pilots worldwide. It is taught regardless of whether the aircraft has navigation aids such as GPS, ADF and VOR, and it is an ICAO requirement. Many flying training schools will prevent a student from using electronic aids until they have mastered dead reckoning.
Inertial navigation systems (INSes), which are nearly universal on more advanced aircraft, use dead reckoning internally. The INS provides reliable navigation capability under virtually any conditions, without the need for external navigation references, although it is still prone to slight errors.
Automotive.
Dead reckoning is today implemented in some high-end automotive navigation systems in order to overcome the limitations of GPS/GNSS technology alone. Satellite microwave signals are unavailable in parking garages and tunnels, and often severely degraded in urban canyons and near trees due to blocked lines of sight to the satellites or multipath propagation. In a dead-reckoning navigation system, the car is equipped with sensors that know the wheel circumference and record wheel rotations and steering direction. These sensors are often already present in cars for other purposes (anti-lock braking system, electronic stability control) and can be read by the navigation system from the controller-area network bus. The navigation system then uses a Kalman filter to integrate the always-available sensor data with the accurate but occasionally unavailable position information from the satellite data into a combined position fix.
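A toy illustration of the wheel-and-steering dead reckoning described above, as a simplified kinematic bicycle model (the wheel circumference and wheelbase are hypothetical; a production system would fuse these increments with GNSS fixes in a Kalman filter rather than integrate them alone):
```python
import math

def dead_reckon_step(x, y, heading, wheel_rotations, wheel_circumference_m,
                     steering_angle_rad, wheelbase_m):
    """One dead-reckoning update from wheel-rotation and steering sensors.

    Simplified kinematic bicycle model: the heading change over the step is
    derived from the steering geometry, then the position is advanced along
    the new heading.
    """
    distance = wheel_rotations * wheel_circumference_m
    heading += distance * math.tan(steering_angle_rad) / wheelbase_m
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading

# Drive straight for 10 wheel turns, then gently turn left for 10 more.
state = (0.0, 0.0, 0.0)
state = dead_reckon_step(*state, 10, 1.9, 0.0, 2.7)
state = dead_reckon_step(*state, 10, 1.9, 0.05, 2.7)
print(state)
```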
Autonomous navigation in robotics.
Dead reckoning is utilized in some robotic applications. It is usually used to reduce the need for sensing technology, such as ultrasonic sensors, GPS, or placement of some linear and rotary encoders, in an autonomous robot, thus greatly reducing cost and complexity at the expense of performance and repeatability. The proper utilization of dead reckoning in this sense would be to supply a known percentage of electrical power or hydraulic pressure to the robot's drive motors over a given amount of time from a general starting point. Dead reckoning is not totally accurate, which can lead to errors in distance estimates ranging from a few millimeters (in CNC machining) to kilometers (in UAVs), based upon the duration of the run, the speed of the robot, the length of the run, and several other factors.
Pedestrian dead reckoning.
With the increased sensor offering in smartphones, built-in accelerometers can be used as a pedometer and built-in magnetometer as a compass heading provider. Pedestrian dead reckoning (PDR) can be used to supplement other navigation methods in a similar way to automotive navigation, or to extend navigation into areas where other navigation systems are unavailable.
In a simple implementation, the user holds their phone in front of them and each step causes position to move forward a fixed distance in the direction measured by the compass. Accuracy is limited by the sensor precision, magnetic disturbances inside structures, and unknown variables such as carrying position and stride length. Another challenge is differentiating walking from running, and recognizing movements like bicycling, climbing stairs, or riding an elevator.
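A minimal sketch of that simple step-and-heading implementation (the fixed stride length and the east/north coordinate convention are assumptions; real systems estimate stride length per user and filter the compass heading):
```python
import math

def pdr_update(x, y, heading_deg, stride_m=0.7):
    """Advance a pedestrian dead-reckoning position by one detected step.

    heading_deg comes from the magnetometer compass; stride_m is an assumed
    fixed stride length, itself a major source of error in practice.
    """
    h = math.radians(heading_deg)
    return x + stride_m * math.sin(h), y + stride_m * math.cos(h)

# Ten steps heading roughly east (090), then five steps heading north (000).
x, y = 0.0, 0.0
for _ in range(10):
    x, y = pdr_update(x, y, 90.0)
for _ in range(5):
    x, y = pdr_update(x, y, 0.0)
print(x, y)   # roughly (7.0, 3.5) metres east/north of the start
```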
Before phone-based systems, many custom PDR systems were built. While a pedometer can only be used to measure linear distance traveled, PDR systems have an embedded magnetometer for heading measurement. Custom PDR systems can take many forms, including special boots, belts, and watches, where the variability of carrying position has been minimized to better utilize magnetometer heading. True dead reckoning is fairly complicated, as it is not only important to minimize basic drift, but also to handle different carrying scenarios and movements, as well as hardware differences across phone models.
Directional dead reckoning.
The south-pointing chariot was an ancient Chinese device consisting of a two-wheeled horse-drawn vehicle which carried a pointer that was intended always to aim to the south, no matter how the chariot turned. The chariot pre-dated the navigational use of the magnetic compass, and could not "detect" the direction that was south. Instead it used a kind of directional dead reckoning: at the start of a journey, the pointer was aimed southward by hand, using local knowledge or astronomical observations e.g. of the Pole Star. Then, as it traveled, a mechanism possibly containing differential gears used the different rotational speeds of the two wheels to turn the pointer relative to the body of the chariot by the angle of turns made (subject to available mechanical accuracy), keeping the pointer aiming in its original direction, to the south. Errors, as always with dead reckoning, would accumulate as distance traveled increased.
For networked games.
Networked games and simulation tools routinely use dead reckoning to predict where an actor should be right now, using its last known kinematic state (position, velocity, acceleration, orientation, and angular velocity). This is primarily needed because it is impractical to send network updates at the rate that most games run, 60 Hz. The basic solution starts by projecting into the future using linear physics:
formula_0
This formula is used to move the object until a new update is received over the network. At that point, the problem is that there are now two kinematic states: the currently estimated position and the just received, actual position. Resolving these two states in a believable way can be quite complex. One approach is to create a curve (e.g. cubic Bézier splines, centripetal Catmull–Rom splines, and Hermite curves) between the two states while still projecting into the future. Another technique is to use projective velocity blending, which is the blending of two projections (last known and current) where the current projection uses a blending between the last known and current velocity over a set time.
The first equation calculates a blended velocity formula_5 given the client-side velocity at the time of the last server update formula_6 and the last known server-side velocity formula_7. This essentially blends from the client-side velocity towards the server-side velocity for a smooth transition. Note that formula_8 should go from zero (at the time of the server update) to one (at the time at which the next update should be arriving). A late server update is unproblematic as long as formula_8 remains at one.
Next, two positions are calculated: firstly, the blended velocity formula_5 and the last known server-side acceleration formula_9 are used to calculate formula_10. This is a position which is projected from the client-side start position formula_11 based on formula_12, the time which has passed since the last server update. Secondly, the same equation is used with the last known server-side parameters to calculate the position projected from the last known server-side position formula_13 and velocity formula_7, resulting in formula_14.
Finally, the new position to display on the client formula_15 is the result of interpolating from the projected position based on client information formula_10 towards the projected position based on the last known server information formula_14. The resulting movement smoothly resolves the discrepancy between client-side and server-side information, even if this server-side information arrives infrequently or inconsistently. It is also free of oscillations which spline-based interpolation may suffer from.
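A minimal sketch of the projective velocity blending just described, with variable names mirroring the symbols in the formulas above (the 2-D example values are made up; a game engine would run this per network-replicated object each frame):
```python
import numpy as np

def projective_velocity_blend(p0, v0, p0_srv, v0_srv, a0_srv, t_t, t_hat):
    """Projective velocity blending, following the formulas above.

    p0, v0                 : client-side position/velocity at the last server update
    p0_srv, v0_srv, a0_srv : last known server-side position/velocity/acceleration
    t_t                    : time elapsed since the last server update
    t_hat                  : blend factor, 0 at the update, clamped to 1 afterwards
    """
    t_hat = min(max(t_hat, 0.0), 1.0)
    v_b = v0 + (v0_srv - v0) * t_hat                            # blended velocity
    p_t = p0 + v_b * t_t + 0.5 * a0_srv * t_t ** 2              # client-based projection
    p_t_srv = p0_srv + v0_srv * t_t + 0.5 * a0_srv * t_t ** 2   # server-based projection
    return p_t + (p_t_srv - p_t) * t_hat                        # displayed position

# A 2-D example: the client thought the object was at (10, 0) moving right,
# while the server update says (9.5, 0.2) with a slightly different velocity.
pos = projective_velocity_blend(
    p0=np.array([10.0, 0.0]),     v0=np.array([5.0, 0.0]),
    p0_srv=np.array([9.5, 0.2]),  v0_srv=np.array([5.2, 0.1]),
    a0_srv=np.array([0.0, 0.0]),  t_t=0.1, t_hat=0.5)
print(pos)
```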
Computer science.
In computer science, dead-reckoning refers to navigating an array data structure using indexes. Since every array element has the same size, the address of any element can be computed directly from the address of any other element and the difference between their indexes.
Given the following array:
knowing the memory address where the array starts, it is easy to compute the memory address of D:
formula_16
Likewise, knowing D's memory address, it is easy to compute the memory address of B:
formula_17
This property is particularly important for performance when used in conjunction with arrays of structures because data can be directly accessed, without going through a pointer dereference.
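A small sketch of this address arithmetic in Python, using NumPy only to expose a contiguous buffer's base address and element size (the array contents and the element labels A–E are hypothetical):
```python
import numpy as np

arr = np.array([10, 20, 30, 40, 50], dtype=np.int32)   # elements A, B, C, D, E

base = arr.ctypes.data          # memory address where the array starts
size = arr.itemsize             # every element has the same size (4 bytes here)

# address_D = address_start + size * index_D        (formula_16 above)
addr_D = base + size * 3

# address_B = address_D - size * (index_D - index_B)  (formula_17 above)
addr_B = addr_D - size * (3 - 1)

assert addr_B == base + size * 1
print(hex(base), hex(addr_D), hex(addr_B))
```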
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n P_t = P_0 + V_0T + \\frac{1}{2}A_0T^2 \n"
},
{
"math_id": 1,
"text": " V_b = V_0 + \\left (\\acute{V}_0 - V_0 \\right)\\hat{T} "
},
{
"math_id": 2,
"text": " P_t = P_0 + V_bT_t + \\frac{1}{2}\\acute{A}_0T_t^2 "
},
{
"math_id": 3,
"text": " \\acute{P}_t = \\acute{P}_0 + \\acute{V}_0T_t + \\frac{1}{2}\\acute{A}_0T_t^2 "
},
{
"math_id": 4,
"text": " Pos = P_t + \\left (\\acute{P}_t - P_t \\right)\\hat{T} "
},
{
"math_id": 5,
"text": "V_b"
},
{
"math_id": 6,
"text": "V_0"
},
{
"math_id": 7,
"text": "\\acute{V}_0"
},
{
"math_id": 8,
"text": "\\hat{T}"
},
{
"math_id": 9,
"text": "\\acute{A}_0"
},
{
"math_id": 10,
"text": "P_t"
},
{
"math_id": 11,
"text": "P_0"
},
{
"math_id": 12,
"text": "T_t"
},
{
"math_id": 13,
"text": "\\acute{P}_0"
},
{
"math_id": 14,
"text": "\\acute{P}_t"
},
{
"math_id": 15,
"text": "Pos"
},
{
"math_id": 16,
"text": "\\text{address}_\\text{D} = \\text{address}_\\text{start of array} + ( \\text{size}_\\text{array element} * \\text{arrayIndex}_\\text{D} )"
},
{
"math_id": 17,
"text": "\\text{address}_\\text{B} = \\text{address}_\\text{D} - ( \\text{size}_\\text{array element} * ( \\text{arrayIndex}_\\text{D} - \\text{arrayIndex}_\\text{B} ) )"
}
]
| https://en.wikipedia.org/wiki?curid=133345 |
13337259 | Retardation factor | Fraction of an analyte in chromatography
In chromatography, the retardation factor (R) is the fraction of an analyte in the mobile phase of a chromatographic system. In planar chromatography in particular, the retardation factor RF is defined as the ratio of the distance traveled by the center of a spot to the distance traveled by the solvent front. Ideally, the values for "RF" are equivalent to the R values used in column chromatography.
Although the term retention factor is sometimes used synonymously with retardation factor in regard to planar chromatography, the term is not defined in this context. However, in column chromatography, the retention factor or capacity factor (k) is defined as the ratio of the time an analyte is retained in the stationary phase to the time it is retained in the mobile phase, and it is inversely related to the retardation factor.
General definition.
In chromatography, the retardation factor, "R", is the fraction of the sample in the mobile phase at equilibrium, defined as:
formula_0
Planar chromatography.
The retardation factor, "RF", is commonly used in paper chromatography and thin layer chromatography (TLC) for analyzing and comparing different substances. It can be mathematically described by the following ratio:
formula_1
An "RF" value will always be in the range 0 to 1; if the substance moves, it can only move in the direction of the solvent flow, and cannot move faster than the solvent. For example, if particular substance in an unknown mixture travels 2.5 cm and the solvent front travels 5.0 cm, the retardation factor would be 0.50. One can choose a mobile phase with different characteristics (particularly polarity) in order to control how far the substance being investigated migrates.
An "RF" value is characteristic for any given compound (provided that the same stationary and mobile phases are used). It can provide corroborative evidence as to the identity of a compound. If the identity of a compound is suspected but not yet proven, an authentic sample of the compound, or standard, is spotted and run on a TLC plate side by side (or on top of each other) with the compound in question. Note that this identity check must be performed on a single plate, because it is difficult to duplicate all the factors which influence RF exactly from experiment to experiment.
Relationship with retention factor.
In terms of retention factor ("k"), retardation factor ("R") is defined as follows:
formula_2
based on the definition of "k":
formula_3
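As a small worked sketch of these relationships (reusing the 2.5 cm / 5.0 cm example above, and taking the planar "RF" as an estimate of the general "R", which is the idealization stated earlier):
```python
def retardation_factor(migration_substance_cm, migration_solvent_cm):
    """R_F for planar chromatography: spot distance / solvent-front distance."""
    return migration_substance_cm / migration_solvent_cm

def retention_factor_from_r(r):
    """k = (1 - R) / R, the rearrangement of R = 1 / (1 + k) given above."""
    return (1 - r) / r

rf = retardation_factor(2.5, 5.0)          # the worked example above: 0.50
print(rf, retention_factor_from_r(rf))     # R = 0.5 corresponds to k = 1.0
```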
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ R = \\frac{\\mbox{quantity of substance in the mobile phase}}{\\mbox{total quantity of substance in the system}}"
},
{
"math_id": 1,
"text": "\\ R_F = \\frac{\\mbox{migration distance of substance}}{\\mbox{migration distance of solvent front}}"
},
{
"math_id": 2,
"text": "\\ R = \\frac{1}{1+k} "
},
{
"math_id": 3,
"text": "\\ k = \\frac{1-R}{R} "
}
]
| https://en.wikipedia.org/wiki?curid=13337259 |
1333992 | Metabolic acidosis | Imbalance in the body's acid-base equilibrium
Medical condition
Metabolic acidosis is a serious electrolyte disorder characterized by an imbalance in the body's acid–base equilibrium. Metabolic acidosis has three main root causes: increased acid production, loss of bicarbonate, and a reduced ability of the kidneys to excrete excess acids. Metabolic acidosis can lead to acidemia, which is defined as an arterial blood pH lower than 7.35. Acidemia and acidosis are not mutually exclusive – pH and hydrogen ion concentrations also depend on the coexistence of other acid–base disorders; therefore, pH levels in people with metabolic acidosis can range from low to high.
Acute metabolic acidosis, lasting from minutes to several days, often occurs during serious illnesses or hospitalizations, and is generally caused when the body produces an excess amount of organic acids (ketoacids in ketoacidosis, or lactic acid in lactic acidosis). A state of chronic metabolic acidosis, lasting several weeks to years, can be the result of impaired kidney function (chronic kidney disease) and/or bicarbonate wasting. The adverse effects of acute versus chronic metabolic acidosis also differ, with acute metabolic acidosis impacting the cardiovascular system in hospital settings, and chronic metabolic acidosis affecting muscles, bones, kidney and cardiovascular health.
Signs and symptoms.
Acute metabolic acidosis.
Symptoms are not specific, and diagnosis can be difficult unless patients present with clear indications for blood gas sampling. Symptoms may include palpitations, headache, altered mental status such as severe anxiety due to hypoxia, decreased visual acuity, nausea, vomiting, abdominal pain, altered appetite and weight gain, muscle weakness, bone pain, and joint pain. People with acute metabolic acidosis may exhibit deep, rapid breathing called Kussmaul respirations which is classically associated with diabetic ketoacidosis. Rapid deep breaths increase the amount of carbon dioxide exhaled, thus lowering the serum carbon dioxide levels, resulting in some degree of compensation. Overcompensation via respiratory alkalosis to form an alkalemia does not occur.
Extreme acidemia can also lead to neurological and cardiac complications:
Physical examination can occasionally reveal signs of the disease, but is often otherwise normal. Cranial nerve abnormalities are reported in ethylene glycol poisoning, and retinal edema can be a sign of methanol intoxication.
Chronic metabolic acidosis.
Chronic metabolic acidosis has non-specific clinical symptoms but can be readily diagnosed by testing serum bicarbonate levels in patients with chronic kidney disease (CKD) as part of a comprehensive metabolic panel. Patients with CKD Stages G3–G5 should be routinely screened for metabolic acidosis.
Diagnostic approach and causes.
Metabolic acidosis results in a reduced serum pH that is due to metabolic and not respiratory dysfunction. Typically the serum bicarbonate concentration will be <22 mEq/L, below the normal range of 22 to 29 mEq/L, the standard base excess will be more negative than −2 (a base deficit), and the pCO2 will be reduced as a result of hyperventilation in an attempt to restore the pH closer to normal. Occasionally, in a mixed acid–base disorder where metabolic acidosis is not the primary disorder present, the pH may be normal or high. In the absence of chronic respiratory alkalosis, metabolic acidosis can be clinically diagnosed by analysis of the calculated serum bicarbonate level.
Causes.
Generally, metabolic acidosis occurs when the body produces too much acid (e.g., lactic acidosis, see below section), there is a loss of bicarbonate from the blood, or when the kidneys are not removing enough acid from the body.
Chronic metabolic acidosis is most often caused by a decreased capacity of the kidneys to excrete excess acids through renal ammoniagenesis. The typical Western diet generates 75–100 mEq of acid daily, and individuals with normal kidney function increase the production of ammonia to get rid of this dietary acid. As kidney function declines, the tubules lose the ability to excrete excess acid, and this results in buffering of acid using serum bicarbonate, as well as bone and muscle stores.
There are many causes of acute metabolic acidosis, and thus it is helpful to group them by the presence or absence of a normal anion gap.
Increased anion gap
Causes of increased anion gap include:
Normal anion gap
Causes of normal anion gap include:
To distinguish between the main types of metabolic acidosis, a clinical tool called the anion gap is very useful. The anion gap is calculated by subtracting the sum of the serum concentrations of the major anions, chloride and bicarbonate, from the serum concentration of the major cation, sodium. (The serum potassium concentration may be added to the calculation, but this merely changes the normal reference range for what is considered a normal anion gap.)
Because the concentration of serum sodium is greater than the combined concentrations of chloride and bicarbonate, an 'anion gap' is noted. In reality serum is electroneutral because of the presence of other minor cations (potassium, calcium and magnesium) and anions (albumin, sulphate and phosphate) that are not measured in the equation that calculates the anion gap.
The normal value for the anion gap is 8–16 mmol/L (12±4). An elevated anion gap (i.e. > 16 mmol/L) indicates the presence of excess 'unmeasured' anions, such as lactic acid in anaerobic metabolism resulting from tissue hypoxia, glycolic and formic acid produced by the metabolism of toxic alcohols, ketoacids produced when acetyl-CoA undergoes ketogenesis rather than entering the tricarboxylic (Krebs) cycle, and failure of renal excretion of products of metabolism such as sulphates and phosphates.
Adjunctive tests are useful in determining the aetiology of a raised anion gap metabolic acidosis, including detection of an osmolar gap indicative of the presence of a toxic alcohol, measurement of serum ketones indicative of ketoacidosis, and renal function tests and urinalysis to detect renal dysfunction.
Elevated protein (albumin, globulins) may theoretically increase the anion gap, but high levels are not usually encountered clinically. Hypoalbuminaemia, which is frequently encountered clinically, will "mask" an anion gap. As a rule of thumb, a decrease in serum albumin of 1 g/L will decrease the anion gap by 0.25 mmol/L.
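A small sketch of the anion gap calculation and the albumin rule of thumb just given (the laboratory values and the assumed "normal" albumin of 40 g/L are illustrative only, not reference values):
```python
def anion_gap(na, cl, hco3):
    """Serum anion gap in mmol/L (potassium omitted, as in the text above)."""
    return na - (cl + hco3)

def albumin_corrected_gap(gap, albumin_g_l, normal_albumin_g_l=40.0):
    """Apply the rule of thumb above: each 1 g/L fall in albumin hides ~0.25 mmol/L."""
    return gap + 0.25 * (normal_albumin_g_l - albumin_g_l)

gap = anion_gap(na=140, cl=100, hco3=12)               # 28 mmol/L: raised (> 16)
print(gap, albumin_corrected_gap(gap, albumin_g_l=25)) # correction for hypoalbuminaemia
```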
Pathophysiology.
Compensatory mechanisms.
Metabolic acidosis is characterized by a low concentration of bicarbonate (HCO3-), which can happen with increased generation of acids (such as ketoacids or lactic acid), excess loss of HCO3- by the kidneys or gastrointestinal tract, or an inability to generate sufficient HCO3-. This demonstrates the importance of maintaining the balance between acids and bases in the body for the optimal functioning of organs, tissues and cells.
The body regulates the acidity of the blood by four buffering mechanisms.
Buffer.
The decreased bicarbonate that distinguishes metabolic acidosis is therefore due to two separate processes: the buffer (from water and carbon dioxide) and additional renal generation. The buffer reactions are: <chem display=block>H+ + HCO3- <=> H2CO3 <=> CO2 + H2O</chem>
The Henderson–Hasselbalch equation mathematically describes the relationship between blood pH and the components of the bicarbonate buffering system: formula_0 where "pK"a ≈ 6.1. In clinical practice, the CO2 concentration is usually determined via Henry's law from "P"aCO2, the CO2 partial pressure in arterial blood: formula_1
For example, blood gas machines usually determine bicarbonate concentrations from measured "p"H and "P"aCO2 values. Mathematically, the algorithm substitutes the Henry's law formula into the Henderson–Hasselbalch equation and then rearranges: formula_2 At sea level, normal numbers might be "p"H ≈ 7.4 and "P"aCO2 ≈ 40 mmHg; these then imply formula_3
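A minimal numerical sketch of that rearranged equation (the second set of input values is hypothetical, chosen only to show a lower bicarbonate of the kind seen in metabolic acidosis):
```python
def bicarbonate_from_ph_paco2(ph, paco2_mmhg, pka=6.1, solubility=0.03):
    """[HCO3-] in mmol/L from pH and PaCO2, per the rearranged equation above.

    solubility is the CO2 solubility coefficient, 0.03 mmol/L per mmHg.
    """
    return solubility * paco2_mmhg * 10 ** (ph - pka)

print(bicarbonate_from_ph_paco2(7.4, 40))    # ~24 mmol/L, the normal sea-level value above
print(bicarbonate_from_ph_paco2(7.25, 30))   # lower, as expected in metabolic acidosis
```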
Consequences.
Acute metabolic acidosis.
Acute metabolic acidosis most often occurs during hospitalizations, and acute critical illnesses. It is often associated with poor prognosis, with a mortality rate as high as 57% if the pH remains untreated at 7.20. At lower pH levels, acute metabolic acidosis can lead to impaired circulation and end organ function.
Chronic metabolic acidosis.
Chronic metabolic acidosis commonly occurs in people with chronic kidney disease (CKD) with an eGFR of less than 45 ml/min/1.73m2, most often with mild to moderate severity; however, metabolic acidosis can manifest earlier on in the course of CKD. Multiple animal and human studies have shown that metabolic acidosis in CKD, given its chronic nature, has a profound adverse impact on cellular function, overall contributing to high morbidities in patients.
The most adverse consequences of chronic metabolic acidosis in people with CKD, and in particular, for those who have end-stage renal disease (ESRD), are detrimental changes to the bones and muscles. Acid buffering leads to loss of bone density, resulting in an increased risk of bone fractures, renal osteodystrophy, and bone disease; as well, increased protein catabolism leads to muscle wasting. Furthermore, metabolic acidosis in CKD is also associated with a reduction in eGFR; it is both a complication of CKD, as well as an underlying cause of CKD progression.
Treatment.
Treatment of metabolic acidosis depends on the underlying cause, and should target reversing the main process. When considering course of treatment, it is important to distinguish between acute versus chronic forms.
Acute metabolic acidosis.
Bicarbonate therapy is generally administered in patients with severe acute acidemia (pH < 7.11), or with less severe acidemia (pH 7.1–7.2) who have severe acute kidney injury. Bicarbonate therapy is not recommended for people with less severe acidosis (pH ≥ 7.1), unless severe acute kidney injury is present. In the BICAR-ICU trial, bicarbonate therapy for maintaining a pH > 7.3 had no overall effect on the composite outcome of all-cause mortality and the presence of at least one organ failure at day 7. However, among the sub-group of patients with severe acute kidney injury, bicarbonate therapy significantly decreased the primary composite outcome and 28-day mortality, along with the need for dialysis.
Chronic metabolic acidosis.
For people with chronic kidney disease (CKD), treating metabolic acidosis slows the progression of CKD. Dietary interventions for treatment of chronic metabolic acidosis include base-inducing fruits and vegetables that assist with reducing the urine net acid excretion, and increase TCO2. Recent research has also suggested that dietary protein restriction, through ketoanalogue-supplemented vegetarian very low protein diets are also a nutritionally safe option for correction of metabolic acidosis in people with CKD.
Currently, the most commonly used treatment for chronic metabolic acidosis is oral bicarbonate. The NKF/KDOQI guidelines recommend starting treatment when serum bicarbonate levels are <22 mEq/L, in order to maintain levels ≥ 22 mEq/L. Studies investigating the effects of oral alkali therapy demonstrated improvements in serum bicarbonate levels, resulting in a slower decline in kidney function, and reduction in proteinuria – leading to a reduction in the risk of progressing to kidney failure. However, side effects of oral alkali therapy include gastrointestinal intolerance, worsening edema, and worsening hypertension. Furthermore, large doses of oral alkali are required to treat chronic metabolic acidosis, and the pill burden can limit adherence.
Veverimer (TRC 101) is a promising investigational drug designed to treat metabolic acidosis by binding with the acid in the gastrointestinal tract and removing it from the body through excretion in the feces, in turn decreasing the amount of acid in the body, and increasing the level of bicarbonate in the blood. Results from a Phase 3, double-blind placebo-controlled 12-week clinical trial in people with CKD and metabolic acidosis demonstrated that Veverimer effectively and safely corrected metabolic acidosis in the short-term, and a blinded, placebo-controlled, 40-week extension of the trial assessing long-term safety, demonstrated sustained improvements in physical function and a combined endpoint of death, dialysis, or 50% decline in eGFR.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p\\ce{H}=pK_\\text{a}+\\operatorname{\\mathrm{Log}}\\frac{\\left[\\ce{HCO3^-}\\right]}{\\left[\\ce{CO2}\\right]}\\text{,}"
},
{
"math_id": 1,
"text": "[\\ce{CO2}] = (0.03\\text{ L}^{-1}/\\text{mmHg})\\times P_{\\text{a}\\ce{CO2}}\\text{.}"
},
{
"math_id": 2,
"text": "\\left[\\ce{HCO3^-}\\right]=(0.03\\text{ L}^{-1}/\\text{mmHg})P_{\\text{a}\\ce{CO2}}\\cdot 10^{p\\ce{H}-pK_\\text{a}}"
},
{
"math_id": 3,
"text": "\\begin{align}\n\\left[\\ce{HCO3^-}\\right]&=(0.03\\text{ L}^{-1}/\\text{mmHg})(40\\text{ mmHg})\\cdot10^{7.4-6.1} \\\\\n&=24\\text{ L}^{-1}\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=1333992 |
1334113 | Body armor | Protective clothing; armor worn on the body
Body armor, personal armor (also spelled "armour"), armored suit ("armoured") or coat of armor, among others, is armor for a person's body: protective clothing or close-fitting hands-free shields designed to absorb or deflect physical attacks. Historically used to protect military personnel, today it is also used by various types of police (riot police in particular), private security guards, or bodyguards, and occasionally ordinary citizens. Today there are two main types: regular non-plated body armor for moderate to substantial protection, and hard-plate reinforced body armor for maximum protection, such as used by combatants.
<templatestyles src="Template:TOC limit/styles.css" />
History.
Many factors have affected the development of personal armor throughout human history. Significant factors in the development of armor include the economic and technological necessities of armor production. For instance full plate armor first appeared in Medieval Europe when water-powered trip hammers made the formation of plates faster and cheaper. At times the development of armor has run parallel to the development of increasingly effective weaponry on the battlefield, with armorers seeking to create better protection without sacrificing mobility.
Ancient.
The first record of body armor in history was found on the Stele of Vultures in ancient Sumer in today's south Iraq. The oldest known Western armor is the Dendra panoply, dating from the Mycenaean Era around 1400 BC.
Mail, also referred to as chainmail, is made of interlocking iron rings, which may be riveted or welded shut. It is believed to have been invented by Celtic people in Europe about 500 BC: most cultures that used mail used the Celtic word or a variant, suggesting the Celts as the originators. The Romans widely adopted mail as the lorica hamata, although they also made use of lorica segmentata and lorica squamata. While no non-metallic armor is known to have survived, it was likely to have been commonplace due to its lower cost.
Eastern armor has a long history, beginning in Ancient China. In East Asian history laminated armor such as lamellar, and styles similar to the coat of plates and brigandine, were commonly used. Later cuirasses and plates were also used. In pre-Qin dynasty times, leather armor was made out of rhinoceros hide. The use of iron plate armor on the Korean peninsula was developed during the Gaya Confederacy of 42 CE – 562 CE. The iron was mined and refined in the area surrounding Gimhae (Gyeongsangnam Province, South Korea). Using both vertical and triangular plate designs, the plate armor sets consisted of 27 or more individual thick curved plates, which were secured together by nail or hinge. The recovered sets include accessories such as iron arm guards, neck guards, leg guards, and horse armor/bits. The use of these armor types disappeared from the Korean Peninsula after the fall of the Gaya Confederacy to the Silla Dynasty, during the Three Kingdoms of Korea era, in 562 CE.
Middle Ages.
In European history, well-known armor types include the mail hauberk of the early medieval age, and the full steel plate harness worn by later Medieval and Renaissance knights, and a few key components (breast and back plates) by heavy cavalry in several European countries until the first year of World War I (1914–1915).
The Japanese armor known today as samurai armor appeared in the Heian period (794–1185). These early samurai armors are called the "ō-yoroi" and "dō-maru".
Plate.
Gradually, small additional plates or discs of iron were added to the mail to protect vulnerable areas. By the late 13th century, the knees were capped, and two circular discs, called besagews were fitted to protect the underarms.
A variety of methods for improving the protection provided by mail were used as armorers seemingly experimented. Hardened leather and splinted construction were used for arm and leg pieces. The coat of plates was developed, an armor made of large plates sewn inside a textile or leather coat.
Early plate in Italy, and elsewhere in the 13th to 15th centuries were made of iron. Iron armor could be carburized or case hardened to give a surface of harder steel. Plate armor became cheaper than mail by the 15th century as it required much less labor and labor had become much more expensive after the Black Death, though it did require larger furnaces to produce larger blooms. Mail continued to be used to protect those joints which could not be adequately protected by plate, such as the armpit, crook of the elbow and groin. Another advantage of plate was that a lance rest could be fitted to the breast plate.
The small skull cap evolved into a bigger true helmet, the bascinet, as it was lengthened downward to protect the back of the neck and the sides of the head. Additionally, several new forms of fully enclosed helmets were introduced in the late 14th century to replace the great helm, such as the sallet and barbute and later the armet and close helm.
Probably the most recognized style of armor in the world became the plate armor associated with the knights of the European Late Middle Ages, but continuing to the early 17th-century Age of Enlightenment in all European countries.
By about 1400, the full harness of plate armor had been developed in armories of Lombardy. Heavy cavalry dominated the battlefield for centuries in part because of their armor.
In the early 15th century, small "hand cannon" first began to be used, in the Hussite Wars, in combination with Wagenburg tactics, allowing infantry to defeat armored knights on the battlefield. At the same time crossbows were made more powerful to pierce armor, and the development of the Swiss Pike square formation also created substantial problems for heavy cavalry. Rather than dooming the use of body armor, the threat of small firearms intensified the use and further refinement of plate armor. There was a 150-year period in which better and more metallurgically advanced steel armor was being used, precisely because of the danger posed by the gun. Hence, guns and cavalry in plate armor were "threat and remedy" together on the battlefield for almost 400 years. By the 15th-century, Italian armor plates were almost always made of steel. In Southern Germany armorers began to harden their steel armor only in the late 15th century. They would continue to harden their steel for the next century because they quenched and tempered their product which allowed for the fire-gilding to be combined with tempering.
The quality of the metal used in armor deteriorated as armies became bigger and armor was made thicker, necessitating breeding of larger cavalry horses. If during the 14th and 15th centuries armor seldom weighed more than , then by the late 16th century it weighed . The increasing weight and thickness of late 16th-century armor therefore gave substantial resistance.
In the early years of pistols and arquebuses, black powder muzzleloading firearms were fired at a relatively low velocity (usually below ). The full suits of plate armor, or only breast plates could actually stop bullets fired from a modest distance. The front breast plates were, in fact, commonly shot as a test. The impact point would often be encircled with engraving to point it out. This was called the "proof". Armor often also bore an insignia of the maker, especially if it was of good quality. Crossbow bolts or quarrels, if still used, would seldom penetrate good plate, nor would any bullet unless fired from close range.
In effect, rather than making plate armor obsolete, the use of firearms stimulated the development of plate armor into its later stages. For most of that period, it allowed horsemen to fight while being the targets of defending arquebusiers without being easily killed. Full suits of armor were actually worn by generals and princely commanders until the 1710s.
Horse armor.
The horse was afforded protection from cavalry and infantry weapons by steel plate barding. This gave the horse protection and enhanced the visual impression of a mounted knight. Late in the era, elaborate barding was used as parade armor.
Gunpowder era.
As gunpowder weapons greatly improved from the 16th century onward, it became cheaper and more effective to have groups of unarmored infantry with early guns than to have expensive knights mounted on horseback, which was the primary cause for armor to be largely discarded. Most light cavalry units discarded their armor, though some heavy cavalry units continued to use it, such as German reiters, Polish hussars, and French cuirassiers.
Late modern use.
Metal armor remained in limited use long after its general obsolescence. Soldiers in the American Civil War (1861–1865) bought iron and steel vests from peddlers (both sides had considered but rejected it for standard issue). The effectiveness of the vests varied widely—some successfully deflected bullets and saved lives, but others were poorly made and resulted in tragedy for the soldiers. In any case the vests were abandoned by many soldiers due to their weight on long marches, as well as the stigma of cowardice they attracted from their fellow troops.
At the start of World War I in 1914, thousands of the French cuirassiers rode out to engage the German cavalry who likewise used helmets and armor. By that period, the shiny armor plate was covered in dark paint and a canvas wrap covered their elaborate Napoleonic-style helmets. Their armor was meant to protect only against sabers and lances. The cavalry had to beware of rifles and machine guns, like the infantry soldiers, who at least had a trench to give them some protection.
Some Arditi assault troops of the Italian army wore body armor in 1916 and 1917.
By the end of the war the Germans had made some 400,000 "Sappenpanzer" suits. Too heavy and restrictive for infantry, most were worn by spotters, sentries, machine gunners, and other troops who stayed in one place.
Modern non-metallic armor.
Soldiers use metal or ceramic plates in their bullet resistant vests, providing additional protection from pistol and rifle bullets. Metallic components or tightly woven fiber layers can give soft armor resistance to stab and slash attacks from combat knives and knife bayonets. Chain mail armored gloves continue to be used by butchers and abattoir workers to prevent cuts and wounds while cutting up carcasses.
Ceramic.
Boron carbide is used in hard plate armor capable of defeating rifle and armor piercing ammunition. The ceramic material is typically structured with a Kevlar layer on one side and a nylon spall shield on the other, optimizing ballistic resistance against different projectile threats, including various calibers of shells and bullets. Boron carbide ceramics were first used in the 1960s in designing bulletproof vests, cockpit floor and pilot seats of gunships. It was used in armor plates like the SAPI series, and today in most civilian accessible body armors.
Other materials include boron suboxide, alumina, and silicon carbide, which are used for varying reasons from protecting from tungsten carbide penetrators, to improved weight to area ratios. Ceramic body armor is made up of a hard and rigid ceramic strike face bonded to a ductile fiber composite backing layer. The projectile is shattered, turned, or eroded as it impacts the ceramic strike face, and much of its kinetic energy is consumed as it interacts with the ceramic layer; the fiber composite backing layer absorbs residual kinetic energy and catches bullet and ceramic debris (spalling). This allows such armor to defeat armor-piercing 5.56×45mm, 7.62×51mm, and 7.62x39mm bullets, among others, with little or no felt blunt trauma. High-end ceramic armor plates typically utilize ultra-high-molecular-weight polyethylene fiber composite backing layers, whereas budget plates will utilize aramid or fiberglass.
Fibers.
DuPont Kevlar is well known as a component of some bullet resistant vests and bullet resistant face masks. The PASGT helmet and vest used by United States military forces since the early 1980s both have Kevlar as a key component, as do their replacements. Civilian applications include Kevlar reinforced clothing for motorcycle riders to protect against abrasion injuries. Kevlar in non-woven long strand form is used inside an outer protective cover to form chaps that loggers use while operating a chainsaw. If the moving chain contacts and tears through the outer cover, the long fibers of Kevlar tangle, clog, and stop the chain from moving as they get drawn into the workings of the drive mechanism of the saw. Kevlar is also used in emergency services protective gear where high heat is involved, e.g., tackling a fire, and in Kevlar vests for police officers, security, and SWAT. The latest Kevlar material that DuPont has developed is Kevlar XP. In comparison with "normal" Kevlar, Kevlar XP is more lightweight and more comfortable to wear, as its quilt stitch is not required for the ballistic package.
Twaron is similar to Kevlar. They both belong to the aramid family of synthetic fibers. The only difference is that Twaron was first developed by Akzo in the 1970s. Twaron was first commercially produced in 1986. Now, Twaron is manufactured by Teijin Aramid. Like Kevlar, Twaron is a strong, synthetic fiber. It is also heat resistant and has many applications. It can be used in the production of several materials that include the military, construction, automotive, aerospace, and even sports market sectors. Among the examples of Twaron-made materials are body armor, helmets, ballistic vests, speaker woofers, drumheads, tires, turbo hoses, wire ropes, and cables.
Another fiber used to manufacture a bullet-resistant vest is Dyneema ultra-high-molecular-weight polyethylene. Developed in the Netherlands, Dyneema has an extremely high strength-to-weight ratio (a diameter rope of Dyneema can bear up to a load), is light enough (low density) that it can float on water, and has high energy absorption characteristics. Since the introduction of the Dyneema Force Multiplier Technology in 2013, many body armor manufacturers have switched to Dyneema for their high-end armor solutions.
Protected areas.
Shield.
A shield is held in the hand or arm. Its purpose is to intercept attacks, either by stopping projectiles such as arrows or by glancing a blow to the side of the shield-user, and it can also be used offensively as a bludgeoning weapon. Shields vary greatly in size, ranging from large shields that protect the user's entire body to small shields that are mostly for use in hand-to-hand combat. Shields also vary a great deal in thickness; whereas some shields were made of thick wooden planking, to protect soldiers from spears and crossbow bolts, other shields were thinner and designed mainly for glancing blows away (such as a sword blow). In prehistory, shields were made of wood, animal hide, or wicker. In antiquity and in the Middle Ages, shields were used by foot soldiers and mounted soldiers. Even after the invention of gunpowder and firearms, shields continued to be used. In the 18th century, Scottish clans continued to use small shields, and in the 19th century, some non-industrialized peoples continued to use shields. In the 20th and 21st centuries, ballistic shields are used by military and police units that specialize in anti-terrorist action, hostage rescue, and siege-breaching.
Head.
A combat helmet is among the oldest forms of personal protective equipment, and is known to have been worn in ancient India around 1700 BC and the Assyrians around 900 BC, followed by the ancient Greeks and Romans, throughout the Middle Ages, and up to the modern era. Their materials and construction became more advanced as weapons became more and more powerful. Initially constructed from leather and brass, and then bronze and iron during the Bronze and Iron Ages, they soon came to be made entirely from forged steel in many societies after about AD 950. At that time, they were purely military equipment, protecting the head from cutting blows with swords, flying arrows, and low-velocity musketry. Some late medieval helmets, like the great bascinet, rested on the shoulders and prevented the wearer from turning his head, greatly restricting mobility. During the 18th and 19th centuries, helmets were not widely used in warfare; instead, many armies used unarmored hats that offered no protection against blade or bullet. The arrival of World War I, with its trench warfare and wide use of artillery, led to mass adoption of metal helmets once again, this time with a shape that offered mobility, a low profile, and compatibility with gas masks. Today's militaries often use high-quality helmets made of ballistic materials such as Kevlar and Twaron, which have excellent bullet and fragmentation stopping power. Some helmets also have good non-ballistic protective qualities, though many do not. The two most popular ballistic helmet models are the PASGT and the MICH. The Modular Integrated Communications Helmet (MICH) type helmet has a slightly smaller coverage at the sides which allows tactical headsets and other communication equipment. The MICH model has standard pad suspension and four-point chinstrap. The Personal Armor System for Ground Troops (PASGT) helmet has been in use since 1983 and has slowly been replaced by the MICH helmet.
A ballistic face mask is designed to protect the wearer from ballistic threats. Ballistic face masks are usually made of kevlar or other bullet-resistant materials and the inside of the mask may be padded for shock absorption, depending on the design. Due to weight restrictions, protection levels range only up to NIJ Level IIIA.
Torso.
A ballistic vest helps absorb the impact from firearm-fired projectiles and shrapnel from explosions, and is worn on the torso. Soft vests are made from many layers of woven or laminated fibers and can be capable of protecting the wearer from small caliber handgun and shotgun projectiles, and small fragments from explosives, such as hand grenades.
Metal or ceramic plates can be used with a soft vest, providing additional protection from rifle rounds, and metallic components or tightly woven fiber layers can give soft armor resistance to stab and slash attacks from a bayonet or knife. Soft vests are commonly worn by police forces, private citizens and private security guards or bodyguards, whereas hard-plate reinforced vests are mainly worn by combat soldiers, police tactical units and hostage rescue teams.
A modern equivalent may combine a ballistic vest with other items of protective clothing, such as a combat helmet. Vests intended for police and military use may also include ballistic shoulder and side protection armor components, and explosive ordnance disposal technicians wear heavy armor and helmets with face visors and spine protection.
Limbs.
Medieval armor often offered protection for all of the limbs, including metal boots for the lower legs, gauntlets for the hands and wrists, and greaves for the legs. Today, protection of limbs from bombs is provided by a bombsuit. Most modern soldiers sacrifice limb protection for mobility, since armor thick enough to stop bullets would greatly inhibit movement of the arms and legs.
Performance standards.
Due to the various different types of projectiles, it is often inaccurate to refer to a particular product as "bulletproof" because this suggests that it will protect against any and all projectiles. Instead, the term bullet resistant is generally preferred.
Standards are regional. Around the world ammunition varies and armor testing must reflect the threats found locally.
While many standards exist, a few standards are widely used as models. The US National Institute of Justice ballistic and stab documents are examples of broadly accepted standards. In addition to the NIJ, the United Kingdom's Home Office Scientific Development Branch (HOSDB—formerly the Police Scientific Development Branch (PSDB)) standards are also used by a number of other countries and organizations. These "model" standards are usually adapted by other countries by following the same basic test methodologies, while changing the specific ammunition tested. NIJ Standard-0101.06 has specific performance standards for bullet resistant vests used by law enforcement. This rates vests on the following scale against penetration and also blunt trauma protection (deformation):
In 2018 or 2019, NIJ was expected to introduce the new NIJ Standard-0101.07. This new standard will completely replace the NIJ Standard-0101.06. The current system of using Roman numerals (II, IIIA, III, and IV) to indicate the level of threat will disappear and be replaced by a naming convention similar to the standard developed by UK Home Office Scientific Development Branch. HG (Hand Gun) is for soft armor and RF (Rifle) is for hard armor. Another important change is that the test-round velocity for conditioned armor will be the same as that for new armor during testing. For example, for NIJ Standard-0101.06 Level IIIA the .44 Magnum round is currently shot at for conditioned armor and at for new armor. For the NIJ Standard-0101.07, the velocity for both conditioned and new armor will be the same.
In January 2012, the NIJ introduced BA 9000, body armor quality management system requirements as a quality standard not unlike ISO 9001 (and much of the standards were based on ISO 9001).
In addition to the NIJ and HOSDB standards, other important standards include: the German Police's Technische Richtlinie (TR) Ballistische Schutzwesten, Draft ISO prEN ISO 14876, and Underwriters Laboratories (UL Standard 752).
Textile armor is tested for both penetration resistance by bullets and for the impact energy transmitted to the wearer. The "backface signature" or transmitted impact energy is measured by shooting armor mounted in front of a backing material, typically oil-based modelling clay. The clay is used at a controlled temperature and verified for impact flow before testing. After the armor is impacted with the test bullet the vest is removed from the clay and the depth of the indentation in the clay is measured.
The backface signature allowed by different test standards can be difficult to compare. Both the clay materials and the bullets used for the test are not common. In general the British, German and other European standards allow of backface signature, while the US-NIJ standards allow for , which can potentially cause internal injury. The allowable backface signature for this has been controversial from its introduction in the first NIJ test standard and the debate as to the relative importance of penetration-resistance vs. backface signature continues in the medical and testing communities.
In general, a vest's textile material temporarily degrades when wet. Neutral water at room temperature does not affect para-aramid or UHMWPE, but acidic, basic and some other solutions can permanently reduce para-aramid fiber tensile strength. (As a result, the major test standards call for wet testing of textile armor.) The mechanisms for this wet loss of performance are not known. Vests that will be tested after ISO-type water immersion tend to have heat-sealed enclosures, and those that are tested under NIJ-type water spray methods tend to have water-resistant enclosures.
From 2003 to 2005, a large study of the environmental degradation of Zylon armor was undertaken by the US-NIJ. This concluded that water, long-term use, and temperature exposure significantly affect tensile strength and the ballistic performance of PBO or Zylon fiber. This NIJ study on vests returned from the field demonstrated that environmental effects on Zylon resulted in ballistic failures under standard test conditions.
Ballistic testing V50 and V0.
Measuring the ballistic performance of armor is based on determining the kinetic energy of a bullet at impact. Because the energy of a bullet is a key factor in its penetrating capacity, velocity is used as the primary independent variable in ballistic testing. For most users the key measurement is the velocity at which no bullets will penetrate the armor. Measuring this zero penetration velocity (V0) must take into account variability in armor performance and test variability. Ballistic testing has a number of sources of variability: the armor, test backing materials, bullet, casing, powder, primer and the gun barrel, to name a few.
Variability reduces the predictive power of a determination of V0. If, for example, the V0 of an armor design is measured to be with a 9 mm FMJ bullet based on 30 shots, the test is only an estimate of the real V0 of this armor. The problem is variability. If the V0 is tested again with a second group of 30 shots on the same vest design, the result will not be identical.
Only a single low velocity penetrating shot is required to reduce the V0 value. The more shots made the lower the V0 will go. In terms of statistics, the zero penetration velocity is the tail end of the distribution curve. If the variability is known and the standard deviation can be calculated, one can rigorously set the V0 at a confidence interval. Test Standards now define how many shots must be used to estimate a V0 for the armor certification. This procedure defines a confidence interval of an estimate of V0. (See "NIJ and HOSDB test methods".)
V0 is difficult to measure, so a second concept has been developed in ballistic testing called V50. This is the velocity at which 50 percent of the shots go through and 50 percent are stopped by the armor. US military standards define a commonly used procedure for this test. The goal is to get three shots that penetrate and a second group of three shots that are stopped by the armor all within a specified velocity range. It is possible, and desirable, to have a penetration velocity lower than a stop velocity. These three stops and three penetrations can then be used to calculate a V50 velocity.
In practice this measurement of V50 often requires 1–2 vest panels and 10–20 shots. A very useful concept in armor testing is the offset velocity between the V0 and V50. If this offset has been measured for an armor design, then V50 data can be used to measure and estimate changes in V0. For vest manufacturing, field evaluation and life testing both V0 and V50 are used. However, as a result of the simplicity of making V50 measurements, this method is more important for control of armor after certification.
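As a rough illustration of the V50 calculation described above, the following Python sketch averages an equal number of the highest stop velocities and the lowest penetration velocities from a set of shots; the shot data, the choice of three stops and three penetrations, and the spread check are hypothetical and are not taken from any particular standard.
    shots = [(430, False), (445, False), (452, True), (449, False),
             (458, True), (461, False), (466, True), (472, True)]   # (velocity m/s, penetrated?) -- hypothetical data
    stops = sorted((v for v, pen in shots if not pen), reverse=True)   # highest stop velocities first
    pens = sorted(v for v, pen in shots if pen)                        # lowest penetration velocities first
    group = stops[:3] + pens[:3]              # three stops and three penetrations, as in the text
    spread = max(group) - min(group)          # would have to fall within the spread a standard allows
    v50 = sum(group) / len(group)
    print(f"V50 estimate: {v50:.1f} m/s (spread {spread} m/s)")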
Cunniff analysis.
Using dimensional analysis, Cunniff arrived at a relation connecting the V50 and the system parameters for textile-based body armors. Under the assumption that the energy of impact is dissipated in breaking the yarn, it was shown that
formula_0
Here,
formula_1
formula_2 are the failure stress, failure strain, density and elastic modulus of the yarn
formula_3 is the mass per unit area of the armor
formula_4 is the mass per unit area of the projectile
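As an illustration of this relation, the following Python sketch evaluates the yarn parameter and the velocity scale it implies; the fibre values are rough, Kevlar-like figures chosen purely for demonstration and are not taken from Cunniff's work.
    from math import sqrt
    # Rough, Kevlar-like yarn values, chosen only for illustration
    sigma = 3.6e9        # failure stress, Pa
    eps = 0.04           # failure strain
    rho = 1440.0         # density, kg/m^3
    E = 90e9             # elastic modulus, Pa
    u_star = (sigma * eps / (2 * rho)) * sqrt(E / rho)   # units m^3 s^-3
    print(f"U* = {u_star:.3e} m^3/s^3, (U*)^(1/3) = {u_star ** (1 / 3):.0f} m/s")
    # V50 is this velocity scale multiplied by a dimensionless function of A_d / A_p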
Military testing.
After the Vietnam War, military planners developed a concept of "Casualty Reduction". The large body of casualty data made clear that in a combat situation, fragments, not bullets, were the greatest threat to soldiers. After World War II vests were being developed and fragment testing was in its early stages. Artillery shells, mortar shells, aerial bombs, grenades, and antipersonnel mines are fragmentation devices. They all contain a steel casing that is designed to burst into small steel fragments or shrapnel, when their explosive core detonates. After considerable effort measuring fragment size distribution from various NATO and Soviet Bloc munitions, a fragment test was developed. Fragment simulators were designed and the most common shape is a Right Circular Cylinder or RCC simulator. This shape has a length equal to its diameter. These RCC Fragment Simulation Projectiles (FSPs) are tested as a group. The test series most often includes , , , and mass RCC FSP testing. The 2-4-16-64 series is based on the measured fragment size distributions.
The second part of "Casualty Reduction" strategy is a study of velocity distributions of fragments from munitions. Warhead explosives have blast speeds of to . As a result, they are capable of ejecting fragments at speeds of over , implying very high energy (where the energy of a fragment is <templatestyles src="Fraction/styles.css" />1⁄2 mass × velocity2, neglecting rotational energy). The military engineering data showed that, like the fragment size, the fragment velocities had characteristic distributions. It is possible to segment the fragment output from a warhead into velocity groups. For example, 95% of all fragments from a bomb blast under have a velocity of or less. This established a set of goals for military ballistic vest design.
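For a sense of scale, the kinetic energy expression quoted above can be evaluated directly; in the Python sketch below the 64 grain mass matches the largest FSP in the 2-4-16-64 series, while the velocity is an arbitrary illustrative value, not a figure from the text.
    GRAIN_TO_KG = 6.479891e-5             # 1 grain in kilograms
    mass_kg = 64 * GRAIN_TO_KG            # largest FSP in the 2-4-16-64 grain series, about 4.15 g
    velocity = 1000.0                     # m/s -- arbitrary illustrative value
    energy = 0.5 * mass_kg * velocity ** 2
    print(f"mass = {mass_kg * 1e3:.2f} g, kinetic energy = {energy:.0f} J")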
The random nature of fragmentation required the military vest specification to trade off mass vs. ballistic-benefit. Hard vehicle armor is capable of stopping all fragments, but military personnel can only carry a limited amount of gear and equipment, so the weight of the vest is a limiting factor in vest fragment protection. The 2-4-16-64 grain series at limited velocity can be stopped by an all-textile vest of approximately . In contrast to deformable lead bullets, fragments do not change shape; they are steel and can not be deformed by textile materials. The FSP (the smallest fragment projectile commonly used in testing) is about the size of a grain of rice; such small, fast-moving fragments can potentially slip through the vest, moving between yarns. As a result, fabrics optimized for fragment protection are tightly woven, although these fabrics are not as effective at stopping lead bullets.
By the 2010s, the development of body armor had been stymied with regard to weight: designers had trouble increasing the protective capability of body armor while still maintaining or decreasing its weight.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " V_{50} = (U^* )^{1/3} f\\left(\\frac{A_d}{A_p}\\right)."
},
{
"math_id": 1,
"text": "U^* = \\frac{\\sigma\\epsilon}{2\\rho}\\sqrt\\frac{E}{\\rho}"
},
{
"math_id": 2,
"text": "\\sigma,\\epsilon,\\rho,E"
},
{
"math_id": 3,
"text": "A_d"
},
{
"math_id": 4,
"text": "A_p"
}
]
| https://en.wikipedia.org/wiki?curid=1334113 |
13341540 | Asymmetric norm | Generalization of the concept of a norm
In mathematics, an asymmetric norm on a vector space is a generalization of the concept of a norm.
Definition.
An asymmetric norm on a real vector space formula_0 is a function formula_1 that has the following properties:
Asymmetric norms differ from norms in that they need not satisfy the equality formula_6
If the condition of positive definiteness is omitted, then formula_7 is an asymmetric seminorm. A weaker condition than positive definiteness is non-degeneracy: that for formula_8 at least one of the two numbers formula_9 and formula_10 is not zero.
Examples.
On the real line formula_11 the function formula_7 given by
formula_12
is an asymmetric norm but not a norm.
In a real vector space formula_13 the Minkowski functional formula_14 of a convex subset formula_15 that contains the origin is defined by the formula
formula_16 for formula_17.
This functional is an asymmetric seminorm if formula_18 is an absorbing set, which means that formula_19 and ensures that formula_9 is finite for each formula_20
Correspondence between asymmetric seminorms and convex subsets of the dual space.
If formula_21 is a convex set that contains the origin, then an asymmetric seminorm formula_7 can be defined on formula_22 by the formula
formula_23
For instance, if formula_24 is the square with vertices formula_25 then formula_7 is the taxicab norm formula_26 Different convex sets yield different seminorms, and every asymmetric seminorm on formula_22 can be obtained from some convex set, called its dual unit ball. Therefore, asymmetric seminorms are in one-to-one correspondence with convex sets that contain the origin. The seminorm formula_7 is positive definite if and only if formula_27 contains the origin in its topological interior, degenerate if and only if formula_27 is contained in a linear subspace of dimension less than formula_28 and symmetric if and only if formula_29
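For a polytope, the maximum in this definition is attained at a vertex, so the seminorm can be evaluated by checking vertices only. The Python sketch below verifies the taxicab example above and also shows a non-symmetric case; the triangle used there is an arbitrary illustration, not taken from the text.
    def seminorm(vertices, x):
        # p(x) = max over phi in B* of <phi, x>; for a polytope the maximum is at a vertex
        return max(sum(a * b for a, b in zip(phi, x)) for phi in vertices)
    square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]        # vertices of the square above
    for x in [(3, -2), (-1, 5), (0, 0)]:
        assert seminorm(square, x) == abs(x[0]) + abs(x[1])   # the taxicab norm, as stated
    triangle = [(0, 0), (2, 0), (0, 1)]                  # a convex set that is not symmetric about the origin
    print(seminorm(triangle, (1, 1)), seminorm(triangle, (-1, -1)))   # 2 and 0: asymmetric, and not positive definite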
More generally, if formula_0 is a finite-dimensional real vector space and formula_30 is a compact convex subset of the dual space formula_31 that contains the origin, then formula_32 is an asymmetric seminorm on formula_33 | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "p : X \\to [0, +\\infty)"
},
{
"math_id": 2,
"text": "p(x + y) \\leq p(x) + p(y) \\text{ for all } x, y \\in X."
},
{
"math_id": 3,
"text": "p(rx) = r p(x) \\text{ for all } x \\in X"
},
{
"math_id": 4,
"text": "r \\geq 0."
},
{
"math_id": 5,
"text": "p(x) > 0 \\text{ unless } x = 0"
},
{
"math_id": 6,
"text": "p(-x) = p(x)."
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "x \\neq 0,"
},
{
"math_id": 9,
"text": "p(x)"
},
{
"math_id": 10,
"text": "p(-x)"
},
{
"math_id": 11,
"text": "\\R,"
},
{
"math_id": 12,
"text": "p(x) = \\begin{cases}|x|, & x \\leq 0; \\\\ 2 |x|, & x \\geq 0; \\end{cases}"
},
{
"math_id": 13,
"text": "X,"
},
{
"math_id": 14,
"text": "p_B"
},
{
"math_id": 15,
"text": "B\\subseteq X"
},
{
"math_id": 16,
"text": "p_B(x) = \\inf \\left\\{r \\geq 0: x \\in r B \\right\\}\\,"
},
{
"math_id": 17,
"text": "x \\in X"
},
{
"math_id": 18,
"text": "B"
},
{
"math_id": 19,
"text": "\\bigcup_{r \\geq 0} r B = X,"
},
{
"math_id": 20,
"text": "x \\in X."
},
{
"math_id": 21,
"text": "B^* \\subseteq \\R^n"
},
{
"math_id": 22,
"text": "\\R^n"
},
{
"math_id": 23,
"text": "p(x) = \\max_{\\varphi \\in B^*} \\langle\\varphi, x \\rangle."
},
{
"math_id": 24,
"text": "B^* \\subseteq \\R^2"
},
{
"math_id": 25,
"text": "(\\pm 1,\\pm 1),"
},
{
"math_id": 26,
"text": "x = \\left(x_0, x_1\\right) \\mapsto \\left|x_0\\right| + \\left|x_1\\right|."
},
{
"math_id": 27,
"text": "B^*"
},
{
"math_id": 28,
"text": "n,"
},
{
"math_id": 29,
"text": "B^* = -B^*."
},
{
"math_id": 30,
"text": "B^* \\subseteq X^*"
},
{
"math_id": 31,
"text": "X^*"
},
{
"math_id": 32,
"text": "p(x) = \\max_{\\varphi \\in B^*} \\varphi(x)"
},
{
"math_id": 33,
"text": "X."
}
]
| https://en.wikipedia.org/wiki?curid=13341540 |
13342698 | Robbins lemma | In statistics, the Robbins lemma, named after Herbert Robbins, states that if "X" is a random variable having a Poisson distribution with parameter "λ", and "f" is any function for which the expected value E("f"("X")) exists, then
formula_0
Robbins introduced this proposition while developing empirical Bayes methods.
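The identity is easy to check numerically. The following Python sketch truncates the Poisson series far into the tail and compares the two sides for an arbitrary test function; the parameter value and the function are illustrative choices.
    from math import exp
    lam, K = 2.5, 60                      # Poisson parameter and truncation point (the tail beyond K is negligible)
    pmf = [exp(-lam)]
    for k in range(1, K):
        pmf.append(pmf[-1] * lam / k)     # Poisson probabilities built recursively
    def f(x):
        return x * x + 3.0                # an arbitrary test function
    lhs = sum(pmf[k] * k * f(k - 1) for k in range(K))
    rhs = lam * sum(pmf[k] * f(k) for k in range(K))
    print(lhs, rhs)                       # both equal 29.375 for lam = 2.5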
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\operatorname{E}(X f(X - 1)) = \\lambda \\operatorname{E}(f(X)). "
}
]
| https://en.wikipedia.org/wiki?curid=13342698 |
13345478 | Schilder's theorem | In mathematics, Schilder's theorem is a generalization of the Laplace method from integrals on formula_0 to functional Wiener integration. The theorem is used in the large deviations theory of stochastic processes. Roughly speaking, out of Schilder's theorem one gets an estimate for the probability that a (scaled-down) sample path of Brownian motion will stray far from the mean path (which is constant with value 0). This statement is made precise using rate functions. Schilder's theorem is generalized by the Freidlin–Wentzell theorem for Itō diffusions.
Statement of the theorem.
Let "C"0 = "C"0([0, "T"]; R"d") be the Banach space of continuous functions formula_1 such that formula_2, equipped with the supremum norm ||⋅||∞ and formula_3 be the subspace of absolutely continuous functions whose derivative is in formula_4 (the so-called Cameron-Martin space). Define the rate function
formula_5
on formula_3 and let formula_6 be two given functions, such that formula_7 (the "action") has a unique minimum formula_8.
Then under some differentiability and growth assumptions on formula_9 which are detailed in Schilder 1966, one has
formula_10
where formula_11 denotes expectation with respect to the Wiener measure formula_12 on formula_13 and formula_14 is the Hessian of formula_15 at the minimum formula_16; formula_17 is meant in the sense of an formula_18 inner product.
Application to large deviations on the Wiener measure.
Let "B" be a standard Brownian motion in "d"-dimensional Euclidean space R"d" starting at the origin, 0 ∈ R"d"; let W denote the law of "B", i.e. classical Wiener measure. For "ε" > 0, let W"ε" denote the law of the rescaled process √"ε""B". Then, on the Banach space "C"0 = "C"0([0, "T"]; R"d") of continuous functions formula_1 such that formula_2, equipped with the supremum norm ||⋅||∞, the probability measures W"ε" satisfy the large deviations principle with good rate function "I" : "C"0 → R ∪ {+∞} given by
formula_19
if "ω" is absolutely continuous, and "I"("ω") = +∞ otherwise. In other words, for every open set "G" ⊆ "C"0 and every closed set "F" ⊆ "C"0,
formula_20
and
formula_21
Example.
Taking "ε" = 1/"c"2, one can use Schilder's theorem to obtain estimates for the probability that a standard Brownian motion "B" strays further than "c" from its starting point over the time interval [0, "T"], i.e. the probability
formula_22
as "c" tends to infinity. Here B"c"(0; ||⋅||∞) denotes the open ball of radius "c" about the zero function in "C"0, taken with respect to the supremum norm. First note that
formula_23
Since the rate function is continuous on "A", Schilder's theorem yields
formula_24
making use of the fact that the infimum over paths in the collection "A" is attained for "ω"("t") = "t"/"T". This result can be heuristically interpreted as saying that, for large "c" and/or large "T"
formula_25
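The heuristic can be checked by simulation. The following Python sketch draws Brownian paths on [0, "T"] with "T" = 1 and compares the logarithm of the empirical exceedance probability, divided by "c"², with −1/(2"T"); the path counts, step size and values of "c" are arbitrary illustrative choices, and the ratio only approaches −1/(2"T") slowly as "c" grows.
    import numpy as np
    rng = np.random.default_rng(0)
    T, n_steps, batch, n_batches = 1.0, 500, 10_000, 20
    dt = T / n_steps
    cs = np.array([1.5, 2.0, 2.5, 3.0])
    hits = np.zeros_like(cs)
    for _ in range(n_batches):
        inc = rng.normal(0.0, np.sqrt(dt), size=(batch, n_steps))
        sup_abs = np.abs(np.cumsum(inc, axis=1)).max(axis=1)   # sup_t |B_t| for each path
        hits += (sup_abs[:, None] > cs).sum(axis=0)
    p = hits / (batch * n_batches)
    print(np.column_stack([cs, p, np.log(p) / cs ** 2]))       # last column tends (slowly) to -1/(2T) = -0.5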
In fact, the above probability can be estimated more precisely: for "B" a standard Brownian motion in R"n", and any "T", "c" and "ε" > 0, we have:
formula_26 | [
{
"math_id": 0,
"text": "\\mathbb{R}^n"
},
{
"math_id": 1,
"text": " f : [0,T] \\longrightarrow \\mathbf{R}^d"
},
{
"math_id": 2,
"text": "f(0)=0"
},
{
"math_id": 3,
"text": "C_0^\\ast"
},
{
"math_id": 4,
"text": "L^2"
},
{
"math_id": 5,
"text": "I(\\omega) = \\frac{1}{2} \\int_{0}^{T} \\| \\dot{\\omega}(t) \\|^{2} \\, \\mathrm{d} t"
},
{
"math_id": 6,
"text": "F:C_0\\to\\mathbb{R},G:C_0\\to\\mathbb{C}"
},
{
"math_id": 7,
"text": "S:=I+F"
},
{
"math_id": 8,
"text": "\\Omega\\in C_0^\\ast"
},
{
"math_id": 9,
"text": "F,G"
},
{
"math_id": 10,
"text": "\\lim_{\\lambda\\to\\infty}\\frac{\\mathbb{E}\\left[\\exp\\left(-\\lambda F(\\lambda^{-1/2} \\omega)\\right)G(\\lambda^{-1/2} \\omega)\\right]}{\\exp\\left(-\\lambda S(\\Omega)\\right)} = G(\\Omega)\\mathbb{E}\\left[\\exp\\left(-\\frac{1}{2}\\langle\\omega, D(\\Omega) \\omega\\rangle\\right)\\right]"
},
{
"math_id": 11,
"text": "\\mathbb{E}"
},
{
"math_id": 12,
"text": "\\mathbb{P}"
},
{
"math_id": 13,
"text": "C_0"
},
{
"math_id": 14,
"text": "D(\\Omega)"
},
{
"math_id": 15,
"text": "F"
},
{
"math_id": 16,
"text": "\\Omega"
},
{
"math_id": 17,
"text": "\\langle\\omega, D(\\Omega) \\omega\\rangle"
},
{
"math_id": 18,
"text": "L^2([0,T])"
},
{
"math_id": 19,
"text": "I(\\omega) = \\frac{1}{2} \\int_{0}^{T} | \\dot{\\omega}(t) |^{2} \\, \\mathrm{d} t"
},
{
"math_id": 20,
"text": "\\limsup_{\\varepsilon \\downarrow 0} \\varepsilon \\log \\mathbf{W}_{\\varepsilon} (F) \\leq - \\inf_{\\omega \\in F} I(\\omega)"
},
{
"math_id": 21,
"text": "\\liminf_{\\varepsilon \\downarrow 0} \\varepsilon \\log \\mathbf{W}_{\\varepsilon} (G) \\geq - \\inf_{\\omega \\in G} I(\\omega)."
},
{
"math_id": 22,
"text": "\\mathbf{W} (C_0 \\smallsetminus \\mathbf{B}_c (0; \\| \\cdot \\|_\\infty)) \\equiv \\mathbf{P} \\big[ \\| B \\|_\\infty > c \\big],"
},
{
"math_id": 23,
"text": "\\| B \\|_\\infty > c \\iff \\sqrt{\\varepsilon} B \\in A := \\left \\{ \\omega \\in C_0 \\mid |\\omega(t)| > 1 \\text{ for some } t \\in [0, T] \\right\\}."
},
{
"math_id": 24,
"text": "\\begin{align}\n\\lim_{c \\to \\infty} \\frac{\\log \\left (\\mathbf{P} \\left [ \\| B \\|_\\infty > c \\right] \\right )}{c^2} &= \\lim_{\\varepsilon \\to 0} \\varepsilon \\log \\left (\\mathbf{P} \\left[ \\sqrt{\\varepsilon} B \\in A \\right] \\right ) \\\\[6pt]\n&= - \\inf \\left\\{ \\left. \\frac{1}{2} \\int_0^T | \\dot{\\omega}(t) |^2 \\, \\mathrm{d} t \\,\\right|\\, \\omega \\in A \\right\\} \\\\[6pt]\n&= - \\frac{1}{2} \\int_0^T \\frac{1}{T^2} \\, \\mathrm{d} t \\\\[6pt]\n&= - \\frac{1}{2 T},\n\\end{align}"
},
{
"math_id": 25,
"text": "\\frac{\\log \\left (\\mathbf{P} \\left [ \\| B \\|_\\infty > c \\right] \\right )}{c^2} \\approx - \\frac{1}{2T} \\qquad \\text{or} \\qquad \\mathbf{P} \\left[ \\| B \\|_\\infty > c \\right ] \\approx \\exp \\left( - \\frac{c^2}{2 T} \\right)."
},
{
"math_id": 26,
"text": "\\mathbf{P} \\left[ \\sup_{0 \\leq t \\leq T} \\left| \\sqrt{\\varepsilon} B_t \\right | \\geq c \\right] \\leq 4 n \\exp \\left( - \\frac{c^2}{2 n T \\varepsilon} \\right)."
}
]
| https://en.wikipedia.org/wiki?curid=13345478 |
13345571 | Sammon mapping | Machine learning algorithm
Sammon mapping or Sammon projection is an algorithm that maps a high-dimensional space to a space of lower dimensionality (see multidimensional scaling) by trying to preserve the structure of inter-point distances in high-dimensional space in the lower-dimension projection.
It is particularly suited for use in exploratory data analysis.
The method was proposed by John W. Sammon in 1969.
It is considered a non-linear approach, as the mapping cannot be represented as a linear combination of the original variables, as is possible in techniques such as principal component analysis; this also makes it more difficult to use for classification applications.
Denote the distance between ith and jth objects in the original space by formula_0, and the distance between their projections by formula_1.
Sammon's mapping aims to minimize the following error function, which is often referred to as Sammon's stress or Sammon's error:
formula_2
The minimization can be performed either by gradient descent, as proposed initially, or by other means, usually involving iterative methods.
The number of iterations needs to be experimentally determined and convergent solutions are not always guaranteed.
Many implementations prefer to use the first Principal Components as a starting configuration.
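A minimal Python sketch of this procedure is shown below: the original inter-point distances are computed once, the projection is initialised with the first principal components, and Sammon's stress is then minimised with a general-purpose optimiser rather than Sammon's original pseudo-Newton iteration. The toy data set, the optimiser choice and its settings are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import pdist
    def sammon_stress(y_flat, d_star, n, k):
        # Sammon's error E for the projected points (flattened to 1-D for the optimiser)
        d = pdist(y_flat.reshape(n, k))
        return np.sum((d_star - d) ** 2 / d_star) / d_star.sum()
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))                      # toy high-dimensional data
    d_star = pdist(X)                                 # original inter-point distances d*_ij
    n, k = len(X), 2
    Xc = X - X.mean(axis=0)                           # PCA initialisation, as suggested above
    Y0 = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][:k].T
    res = minimize(sammon_stress, Y0.ravel(), args=(d_star, n, k), method="L-BFGS-B")
    Y = res.x.reshape(n, k)                           # the 2-D Sammon projection
    print("Sammon stress:", res.fun)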
The Sammon mapping has been one of the most successful nonlinear metric multidimensional scaling methods since its advent in 1969, but effort has been focused on algorithm improvement rather than on the form of the stress function.
The performance of the Sammon mapping has been improved by extending its stress function using left Bregman divergence
and right Bregman divergence.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle d^{*}_{ij}"
},
{
"math_id": 1,
"text": "\\scriptstyle d^{}_{ij}"
},
{
"math_id": 2,
"text": "E = \\frac{1}{\\sum\\limits_{i<j}d^{*}_{ij}}\\sum_{i<j}\\frac{(d^{*}_{ij}-d_{ij})^2}{d^{*}_{ij}}."
}
]
| https://en.wikipedia.org/wiki?curid=13345571 |
13345788 | Freidlin–Wentzell theorem | In mathematics, the Freidlin–Wentzell theorem (due to Mark Freidlin and Alexander D. Wentzell) is a result in the large deviations theory of stochastic processes. Roughly speaking, the Freidlin–Wentzell theorem gives an estimate for the probability that a (scaled-down) sample path of an Itō diffusion will stray far from the mean path. This statement is made precise using rate functions. The Freidlin–Wentzell theorem generalizes Schilder's theorem for standard Brownian motion.
Statement.
Let "B" be a standard Brownian motion on R"d" starting at the origin, 0 ∈ R"d", and let "X""ε" be an R"d"-valued Itō diffusion solving an Itō stochastic differential equation of the form
formula_0
where the drift vector field "b" : R"d" → R"d" is uniformly Lipschitz continuous. Then, on the Banach space "C"0 = "C"0([0, "T"]; R"d") equipped with the supremum norm ||⋅||∞, the family of processes ("X""ε")"ε">0 satisfies the large deviations principle with good rate function "I" : "C"0 → R ∪ {+∞} given by
formula_1
if "ω" lies in the Sobolev space "H"1([0, "T"]; R"d"), and "I"("ω") = +∞ otherwise. In other words, for every open set "G" ⊆ "C"0 and every closed set "F" ⊆ "C"0,
formula_2
and
formula_3 | [
{
"math_id": 0,
"text": "\\begin{cases} dX_t^\\varepsilon = b(X_t^\\varepsilon) \\, dt + \\sqrt{\\varepsilon} \\, dB_t, \\\\ X_0^\\varepsilon = 0, \\end{cases}"
},
{
"math_id": 1,
"text": "I(\\omega) = \\frac{1}{2} \\int_0^T | \\dot{\\omega}_t - b(\\omega_t) |^2 \\, dt"
},
{
"math_id": 2,
"text": "\\limsup_{\\varepsilon \\downarrow 0} \\big( \\varepsilon \\log \\mathbf{P} \\big[ X^\\varepsilon \\in F \\big]\\big) \\leq -\\inf_{\\omega \\in F} I(\\omega)"
},
{
"math_id": 3,
"text": "\\liminf_{\\varepsilon \\downarrow 0} \\big( \\varepsilon \\log \\mathbf{P} \\big[ X^{\\varepsilon} \\in G \\big]\\big) \\geq - \\inf_{\\omega \\in G} I(\\omega)."
}
]
| https://en.wikipedia.org/wiki?curid=13345788 |
13345968 | Virtual black hole | Black holes appearing from quantum spacetime fluctuations
In quantum gravity, a virtual black hole is a hypothetical micro black hole that exists temporarily as a result of a quantum fluctuation of spacetime. It is an example of quantum foam and is the gravitational analog of the virtual electron–positron pairs found in quantum electrodynamics. Theoretical arguments suggest that virtual black holes should have mass on the order of the Planck mass, lifetime around the Planck time, and occur with a number density of approximately one per Planck volume.
The emergence of virtual black holes at the Planck scale is a consequence of the uncertainty relation
formula_0
where formula_1 is the radius of curvature of spacetime small domain, formula_2 is the coordinate of the small domain, formula_3 is the Planck length, formula_4 is the reduced Planck constant, formula_5 is the Newtonian constant of gravitation, and formula_6 is the speed of light. These uncertainty relations are another form of Heisenberg's uncertainty principle at the Planck scale.
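For orientation, the Planck-scale quantities that set the mass, lifetime and size mentioned above follow directly from formula_4, formula_5 and formula_6; the Python sketch below evaluates them from CODATA values.
    from math import sqrt
    hbar = 1.054571817e-34     # J s
    G = 6.67430e-11            # m^3 kg^-1 s^-2
    c = 2.99792458e8           # m/s
    l_p = sqrt(hbar * G / c ** 3)   # Planck length, ~1.6e-35 m
    t_p = l_p / c                   # Planck time,  ~5.4e-44 s
    m_p = sqrt(hbar * c / G)        # Planck mass,  ~2.2e-8 kg
    print(f"l_P = {l_p:.3e} m, t_P = {t_p:.3e} s, m_P = {m_p:.3e} kg")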
If virtual black holes exist, they provide a mechanism for proton decay. This is because a black hole's mass increases when matter falls into the hole and is theorized to decrease when Hawking radiation is emitted from the hole, and the elementary particles emitted are, in general, not the same as those that fell in. Therefore, if two of a proton's constituent quarks fall into a virtual black hole, it is possible for an antiquark and a lepton to emerge, thus violating conservation of baryon number.
The existence of virtual black holes aggravates the black hole information loss paradox, as any physical process may potentially be disrupted by interaction with a virtual black hole.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta R_{\\mu}\\Delta x_{\\mu}\\ge\\ell^2_{P}=\\frac{\\hbar G}{c^3}"
},
{
"math_id": 1,
"text": "R_{\\mu}"
},
{
"math_id": 2,
"text": "x_{\\mu}"
},
{
"math_id": 3,
"text": "\\ell_{P}"
},
{
"math_id": 4,
"text": "\\hbar"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "c"
}
]
| https://en.wikipedia.org/wiki?curid=13345968 |
13347172 | Luminosity function (astronomy) | In astronomy, a luminosity function gives the number of stars or galaxies per luminosity interval. Luminosity functions are used to study the properties of large groups or classes of objects, such as the stars in clusters or the galaxies in the Local Group.
Note that the term "function" is slightly misleading, and the luminosity function might better be described as a luminosity "distribution". Given a luminosity as input, the luminosity function essentially returns the abundance of objects with that luminosity (specifically, number density per luminosity interval).
Main sequence luminosity function.
The main sequence luminosity function maps the distribution of main sequence stars according to their luminosity. It is used to compare star formation and death rates, and evolutionary models, with observations. Main sequence luminosity functions vary depending on their host galaxy and on selection criteria for the stars, for example in the Solar neighbourhood or the Small Magellanic Cloud.
White dwarf luminosity function.
The white dwarf luminosity function (WDLF) gives the number of white dwarf stars with a given luminosity. As this is determined by the rates at which these stars form and cool, it is of interest for the information it gives about the physics of white dwarf cooling and the age and history of the Galaxy.
Schechter luminosity function.
The Schechter luminosity function formula_0 provides an approximation of the abundance of galaxies in a luminosity interval formula_1. The luminosity function has units of a number density formula_2 per unit luminosity and is given by a power law with an exponential cut-off at high luminosity
formula_3
where formula_4 is a characteristic galaxy luminosity controlling the cut-off, and the normalization formula_5 has units of number density.
Equivalently, this equation can be expressed in terms of log-quantities with
formula_6
The galaxy luminosity function may have different parameters for different populations and environments; it is not a universal function. One measurement from field galaxies is formula_7.
It is often more convenient to rewrite the Schechter function in terms of magnitudes, rather than luminosities. In this case, the Schechter function becomes:
formula_8
Note that because the magnitude system is logarithmic, the power law has logarithmic slope formula_9. This is why a Schechter function with formula_10 is said to be flat.
Integrals of the Schechter function can be expressed via the incomplete gamma function
formula_11
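As a numerical illustration, the Python sketch below evaluates the Schechter form with the field-galaxy parameters quoted above and integrates it to estimate the number density of galaxies brighter than formula_4; direct numerical quadrature is used here simply as a convenient stand-in for the incomplete gamma function.
    import numpy as np
    from scipy.integrate import quad
    alpha, phi_star = -1.25, 1.2e-2                   # field-galaxy values quoted above (phi* in h^3 Mpc^-3)
    def schechter(x):                                 # dn/dx with x = L / L*
        return phi_star * x ** alpha * np.exp(-x)
    n_bright, _ = quad(schechter, 1.0, np.inf)        # number density of galaxies with L > L*
    print(f"n(L > L*) ~ {n_bright:.2e} h^3 Mpc^-3")   # equals phi* * Gamma(alpha + 1, 1)
    # the integral down to L = 0 diverges for alpha <= -1, so faint-end counts need a lower cut-off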
Historically, the Schechter luminosity function was inspired by the Press–Schechter model. However, the connection between the two is not straightforward. If one assumes that every dark matter halo hosts one galaxy, then the Press–Schechter model yields a slope formula_12 for galaxies instead of the value given above, which is closer to -1. The reason for this failure is that large halos tend to have a large host galaxy and many smaller satellites, and small halos may not host any galaxies with stars. See, e.g., the halo occupation distribution for a more detailed description of the halo–galaxy connection.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "[L+dL]"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "dn(L) = \\phi~ dL = \\phi^* \\left(\\frac{L}{L^*}\\right)^\\alpha \\mathrm{e}^{-L/L^*} d\\left(\\frac{L}{L^*}\\right),"
},
{
"math_id": 4,
"text": "L^*"
},
{
"math_id": 5,
"text": "\\,\\!\\phi^*"
},
{
"math_id": 6,
"text": "dn(L) = \\ln(10) \\phi^* \\left(\\frac{L}{L^*}\\right)^{\\alpha+1} \\mathrm{e}^{-L/L^*} d\\left(\\log_{10}L\\right)."
},
{
"math_id": 7,
"text": "\\alpha=-1.25,\\ \\phi^* = 1.2 \\times 10^{-2} \\ h^3 \\ \\mathrm{Mpc}^{-3}"
},
{
"math_id": 8,
"text": " n(M)~ dM = (0.4 \\ \\ln 10) \\ \\phi^* \\ [ 10^{ 0.4 ( M^* - M ) } ]^{ \\alpha + 1} \\exp [ -10^{ 0.4 ( M^* - M ) } ] ~ dM .\n"
},
{
"math_id": 9,
"text": " \\alpha + 1 "
},
{
"math_id": 10,
"text": " \\alpha = -1 "
},
{
"math_id": 11,
"text": " \\int_a^b \\left(\\frac{L}{L^*}\\right)^\\alpha e^{-\\left(\\frac{L}{L^*}\\right)} d \\left(\\frac{L}{L^*}\\right)=\\Gamma(\\alpha+1,a)-\\Gamma(\\alpha+1,b) "
},
{
"math_id": 12,
"text": "\\alpha\\sim-3.5"
}
]
| https://en.wikipedia.org/wiki?curid=13347172 |
133496 | Parallelogram | Quadrilateral with two pairs of parallel sides
In Euclidean geometry, a parallelogram is a simple (non-self-intersecting) quadrilateral with two pairs of parallel sides. The opposite or facing sides of a parallelogram are of equal length and the opposite angles of a parallelogram are of equal measure. The congruence of opposite sides and opposite angles is a direct consequence of the Euclidean parallel postulate and neither condition can be proven without appealing to the Euclidean parallel postulate or one of its equivalent formulations.
By comparison, a quadrilateral with at least one pair of parallel sides is a trapezoid in American English or a trapezium in British English.
The three-dimensional counterpart of a parallelogram is a parallelepiped.
The word comes from the Greek παραλληλό-γραμμον, "parallēló-grammon", which means a shape "of parallel lines".
Characterizations.
A simple (non-self-intersecting) quadrilateral is a parallelogram if and only if any one of the following statements is true:
Thus all parallelograms have all the properties listed above, and conversely, if just one of these statements is true in a simple quadrilateral, then it is a parallelogram.
Area formula.
All of the area formulas for general convex quadrilaterals apply to parallelograms. Further formulas are specific to parallelograms:
A parallelogram with base "b" and height "h" can be divided into a trapezoid and a right triangle, and rearranged into a rectangle, as shown in the figure to the left. This means that the area of a parallelogram is the same as that of a rectangle with the same base and height:
formula_0
The base × height area formula can also be derived using the figure to the right. The area "K" of the parallelogram to the right (the blue area) is the total area of the rectangle less the area of the two orange triangles. The area of the rectangle is
formula_1
and the area of a single triangle is
formula_2
Therefore, the area of the parallelogram is
formula_3
Another area formula, for two sides "B" and "C" and angle θ, is
formula_4
Provided that the parallelogram is not a rhombus, the area can be expressed using sides "B" and "C" and angle formula_5 at the intersection of the diagonals:
formula_6
When the parallelogram is specified from the lengths "B" and "C" of two adjacent sides together with the length "D"1 of either diagonal, then the area can be found from Heron's formula. Specifically it is
formula_7
where formula_8 and the leading factor 2 comes from the fact that the chosen diagonal divides the parallelogram into "two" congruent triangles.
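As a quick numerical check, the Python sketch below computes the area from two adjacent sides and a diagonal via Heron's formula and compares it with formula_4 for an arbitrary example (sides 3 and 4 with a 60° included angle).
    from math import cos, sin, sqrt, radians
    B, C, theta = 3.0, 4.0, radians(60)                  # two adjacent sides and the included angle
    D1 = sqrt(B ** 2 + C ** 2 - 2 * B * C * cos(theta))  # the diagonal joining their far endpoints
    S = (B + C + D1) / 2
    K_heron = 2 * sqrt(S * (S - B) * (S - C) * (S - D1))
    K_trig = B * C * sin(theta)
    print(K_heron, K_trig)                               # both about 10.392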
From vertex coordinates.
Let vectors formula_9 and let formula_10 denote the matrix with elements of a and b. Then the area of the parallelogram generated by a and b is equal to formula_11.
Let vectors formula_12 and let formula_13. Then the area of the parallelogram generated by a and b is equal to formula_14.
Let points formula_15. Then the signed area of the parallelogram with vertices at "a", "b" and "c" is equivalent to the determinant of a matrix built using "a", "b" and "c" as rows with the last column padded using ones as follows:
formula_16
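The three coordinate formulas above can be checked numerically; in the Python sketch below the spanning vectors and vertices are arbitrary illustrative values.
    import numpy as np
    a, b = np.array([3.0, 1.0]), np.array([1.0, 2.0])    # spanning vectors
    V = np.array([a, b])
    area_det = abs(np.linalg.det(V))                     # |a1*b2 - a2*b1|
    area_gram = np.sqrt(np.linalg.det(V @ V.T))          # also valid for vectors in R^n
    vertices = np.array([[0.0, 0.0], a, b])              # three vertices: the origin, a and b
    M = np.column_stack([vertices, np.ones(3)])
    area_signed = np.linalg.det(M)                       # signed area from the 3x3 determinant
    print(area_det, area_gram, area_signed)              # all equal 5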
Proof that diagonals bisect each other.
To prove that the diagonals of a parallelogram bisect each other, we will use congruent triangles:
formula_17 "(alternate interior angles are equal in measure)"
formula_18 "(alternate interior angles are equal in measure)".
(since these are angles that a transversal makes with parallel lines "AB" and "DC").
Also, side "AB" is equal in length to side "DC", since opposite sides of a parallelogram are equal in length.
Therefore, triangles "ABE" and "CDE" are congruent (ASA postulate, "two corresponding angles and the included side").
Therefore,
formula_19
formula_20
Since the diagonals "AC" and "BD" divide each other into segments of equal length, the diagonals bisect each other.
Separately, since the diagonals "AC" and "BD" bisect each other at point "E", point "E" is the midpoint of each diagonal.
Lattice of parallelograms.
Parallelograms can tile the plane by translation. If edges are equal, or angles are right, the symmetry of the lattice is higher. These represent the four Bravais lattices in 2 dimensions.
Parallelograms arising from other figures.
Automedian triangle.
An automedian triangle is one whose medians are in the same proportions as its sides (though in a different order). If "ABC" is an automedian triangle in which vertex "A" stands opposite the side "a", "G" is the centroid (where the three medians of "ABC" intersect), and "AL" is one of the extended medians of "ABC" with "L" lying on the circumcircle of "ABC", then "BGCL" is a parallelogram.
Varignon parallelogram.
Varignon's theorem holds that the midpoints of the sides of an arbitrary quadrilateral are the vertices of a parallelogram, called its "Varignon parallelogram". If the quadrilateral is convex or concave (that is, not self-intersecting), then the area of the Varignon parallelogram is half the area of the quadrilateral.
Proof without words (see figure):
Tangent parallelogram of an ellipse.
For an ellipse, two diameters are said to be conjugate if and only if the tangent line to the ellipse at an endpoint of one diameter is parallel to the other diameter. Each pair of conjugate diameters of an ellipse has a corresponding tangent parallelogram, sometimes called a bounding parallelogram, formed by the tangent lines to the ellipse at the four endpoints of the conjugate diameters. All tangent parallelograms for a given ellipse have the same area.
It is possible to reconstruct an ellipse from any pair of conjugate diameters, or from any tangent parallelogram.
Faces of a parallelepiped.
A parallelepiped is a three-dimensional figure whose six faces are parallelograms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K = bh."
},
{
"math_id": 1,
"text": "K_\\text{rect} = (B+A) \\times H\\,"
},
{
"math_id": 2,
"text": "K_\\text{tri} = \\frac{A}{2} \\times H. \\,"
},
{
"math_id": 3,
"text": "K = K_\\text{rect} - 2 \\times K_\\text{tri} = ( (B+A) \\times H) - ( A \\times H) = B \\times H."
},
{
"math_id": 4,
"text": "K = B \\cdot C \\cdot \\sin \\theta.\\,"
},
{
"math_id": 5,
"text": "\\gamma"
},
{
"math_id": 6,
"text": "K = \\frac{|\\tan \\gamma|}{2} \\cdot \\left| B^2 - C^2 \\right|."
},
{
"math_id": 7,
"text": "K=2\\sqrt{S(S-B)(S-C)(S-D_1)}=\\frac{1}{2}\\sqrt{(B+C+D_1)(-B+C+D_1)(B-C+D_1)(B+C-D_1)},"
},
{
"math_id": 8,
"text": "S=(B+C+D_1)/2"
},
{
"math_id": 9,
"text": "\\mathbf{a},\\mathbf{b}\\in\\R^2"
},
{
"math_id": 10,
"text": "V = \\begin{bmatrix} a_1 & a_2 \\\\ b_1 & b_2 \\end{bmatrix} \\in\\R^{2 \\times 2}"
},
{
"math_id": 11,
"text": "|\\det(V)| = |a_1b_2 - a_2b_1|\\,"
},
{
"math_id": 12,
"text": "\\mathbf{a},\\mathbf{b}\\in\\R^n"
},
{
"math_id": 13,
"text": "V = \\begin{bmatrix} a_1 & a_2 & \\dots & a_n \\\\ b_1 & b_2 & \\dots & b_n \\end{bmatrix} \\in\\R^{2 \\times n}"
},
{
"math_id": 14,
"text": "\\sqrt{\\det(V V^\\mathrm{T})}"
},
{
"math_id": 15,
"text": "a,b,c\\in\\R^2"
},
{
"math_id": 16,
"text": "K = \\left| \\begin{matrix}\n a_1 & a_2 & 1 \\\\\n b_1 & b_2 & 1 \\\\\n c_1 & c_2 & 1\n \\end{matrix} \\right|. "
},
{
"math_id": 17,
"text": "\\angle ABE \\cong \\angle CDE"
},
{
"math_id": 18,
"text": "\\angle BAE \\cong \\angle DCE"
},
{
"math_id": 19,
"text": "AE = CE"
},
{
"math_id": 20,
"text": "BE = DE."
}
]
| https://en.wikipedia.org/wiki?curid=133496 |
1335 | Associative property | Property of a mathematical operation
In mathematics, the associative property is a property of some binary operations that means that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs.
Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is (after rewriting the expression with parentheses and in infix notation if necessary), rearranging the parentheses in such an expression will not change its value. Consider the following equations:
formula_0
Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that "addition and multiplication of real numbers are associative operations".
Associativity is not the same as commutativity, which addresses whether the order of two operands affects the result. For example, the order does not matter in the multiplication of real numbers, that is, a × b = b × a, so we say that the multiplication of real numbers is a commutative operation. However, operations such as function composition and matrix multiplication are associative, but not (generally) commutative.
Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative.
However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation, and the vector cross product. In contrast to the theoretical properties of real numbers, the addition of floating point numbers in computer science is not associative, and the choice of how to associate an expression can have a significant effect on rounding error.
Definition.
Formally, a binary operation formula_1 on a set S is called associative if it satisfies the associative law:
formula_2, for all formula_3 in S.
Here, ∗ is used to replace the symbol of the operation, which may be any symbol, and even the absence of symbol (juxtaposition) as for multiplication.
formula_4, for all formula_3 in S.
The associative law can also be expressed in functional notation thus: formula_5
Generalized associative law.
If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law.
The number of possible bracketings is just the Catalan number, formula_6, for "n" operations on "n+1" values. For instance, a product of 3 operations on 4 elements may be written (ignoring permutations of the arguments) in formula_7 possible ways: formula_8, formula_9, formula_10, formula_11 and formula_12.
If the product operation is associative, the generalized associative law says that all these expressions will yield the same result. So unless the expression with omitted parentheses already has a different meaning (see below), the parentheses can be considered unnecessary and "the" product can be written unambiguously as
formula_13
As the number of elements increases, the number of possible ways to insert parentheses grows quickly, but they remain unnecessary for disambiguation.
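The generalized associative law can be illustrated by brute force: the Python sketch below enumerates every bracketing of a short sequence and shows that addition yields a single value while subtraction does not; the particular numbers are arbitrary.
    from operator import add, sub
    def bracketings(values, op):
        # all results obtainable by fully parenthesising `values` under the binary operation `op`
        if len(values) == 1:
            return [values[0]]
        results = []
        for i in range(1, len(values)):                    # choose the top-level split
            for left in bracketings(values[:i], op):
                for right in bracketings(values[i:], op):
                    results.append(op(left, right))
        return results
    vals = [5, 3, 2, 1]
    print(len(bracketings(vals, sub)))            # 5 bracketings of 4 values (the Catalan number C_3)
    print(set(bracketings(vals, add)))            # {11}: addition is associative
    print(set(bracketings(vals, sub)))            # {-1, 1, 3, 5}: subtraction is not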
An example where this does not work is the logical biconditional ↔. It is associative; thus, is equivalent to , but most commonly means , which is not equivalent.
Examples.
Some examples of associative operations include the following.
Propositional logic.
Rule of replacement.
In standard truth-functional propositional logic, "association", or "associativity" are two valid rules of replacement. The rules allow one to move parentheses in logical expressions in logical proofs. The rules (using logical connectives notation) are:
formula_14
and
formula_15
where "formula_16" is a metalogical symbol representing "can be replaced in a proof with".
Truth functional connectives.
"Associativity" is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that associativity is a property of particular connectives. The following (and their converses, since ↔ is commutative) are truth-functional tautologies.
formula_17
formula_18
formula_19
Joint denial is an example of a truth functional connective that is "not" associative.
Non-associative operation.
A binary operation formula_20 on a set "S" that does not satisfy the associative law is called non-associative. Symbolically,
formula_21
For such an operation the order of evaluation "does" matter. For example:
formula_22
formula_23
formula_24
formula_25
Also although addition is associative for finite sums, it is not associative inside infinite sums (series). For example,
formula_26
whereas
formula_27
Some non-associative operations are fundamental in mathematics. They appear often as the multiplication in structures called non-associative algebras, which have also an addition and a scalar multiplication. Examples are the octonions and Lie algebras. In Lie algebras, the multiplication satisfies Jacobi identity instead of the associative law; this allows abstracting the algebraic nature of infinitesimal transformations.
Other examples are quasigroup, quasifield, non-associative ring, and commutative non-associative magmas.
Nonassociativity of floating point calculation.
In mathematics, addition and multiplication of real numbers is associative. By contrast, in computer science, the addition and multiplication of floating point numbers is "not" associative, as different rounding errors may be introduced when dissimilar-sized values are joined together in a different order.
To illustrate this, consider a floating point representation with a 4-bit significand:
<templatestyles src="Block indent/styles.css"/>(1.0002×20 + 1.0002×20) +
1.0002×24 = 1.0002×21 + 1.0002×24 = 1.0012×24
<templatestyles src="Block indent/styles.css"/>1.0002×20 + (1.0002×20 +
1.0002×24) = 1.0002×20 + 1.0002×24 = 1.0002×24
Even though most computers compute with 24 or 53 bits of significand, this is still an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimise the errors. It can be especially problematic in parallel computing.
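The same effect is easy to reproduce in double precision, and a compensated (Kahan) summation recovers most of the lost accuracy; the following Python sketch uses arbitrary illustrative values.
    print((0.1 + 0.2) + 0.3)   # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))   # 0.6
    def kahan_sum(xs):
        total, comp = 0.0, 0.0
        for x in xs:
            y = x - comp               # subtract the running compensation
            t = total + y
            comp = (t - total) - y     # what was lost in this addition
            total = t
        return total
    xs = [1e16] + [1.0] * 1000 + [-1e16]
    print(sum(xs), kahan_sum(xs))      # 0.0 versus 1000.0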
Notation for non-associative operations.
In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression (unless the notation specifies the order in another way, like formula_28). However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses.
A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e.,
formula_29
while a right-associative operation is conventionally evaluated from right to left:
formula_30
Both left-associative and right-associative operations occur. Left-associative operations include the following:
formula_31
formula_32
formula_33
This notation can be motivated by the currying isomorphism, which enables partial application.
Right-associative operations include the following:
formula_34
Exponentiation is commonly used with brackets or right-associatively because a repeated left-associative exponentiation operation is of little use. Repeated powers would mostly be rewritten with multiplication:
formula_35
Formatted correctly, the superscript inherently behaves as a set of parentheses; e.g. in the expression formula_36 the addition is performed before the exponentiation despite there being no explicit parentheses formula_37 wrapped around it. Thus given an expression such as formula_38, the full exponent formula_39 of the base formula_40 is evaluated first. However, in some contexts, especially in handwriting, the difference between formula_41, formula_42 and formula_34 can be hard to see. In such a case, right-associativity is usually implied.
formula_43
formula_44
Using right-associative notation for these operations can be motivated by the Curry–Howard correspondence and by the currying isomorphism.
Non-associative operations for which no conventional evaluation order is defined include the following.
formula_45
formula_46
formula_47
formula_48
formula_49
formula_50. (Compare material nonimplication in logic.)
History.
William Rowan Hamilton seems to have coined the term "associative property" around 1844, a time when he was contemplating the non-associative algebra of the octonions he had learned about from John T. Graves.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n(2 + 3) + 4 &= 2 + (3 + 4) = 9 \\,\\\\\n2 \\times (3 \\times 4) &= (2 \\times 3) \\times 4 = 24 .\n\\end{align}"
},
{
"math_id": 1,
"text": "\\ast"
},
{
"math_id": 2,
"text": "(x \\ast y) \\ast z = x \\ast (y \\ast z)"
},
{
"math_id": 3,
"text": "x,y,z"
},
{
"math_id": 4,
"text": "(xy)z = x(yz)"
},
{
"math_id": 5,
"text": "(f \\circ (g \\circ h))(x) = ((f \\circ g) \\circ h)(x)"
},
{
"math_id": 6,
"text": "C_n"
},
{
"math_id": 7,
"text": "C_3 = 5"
},
{
"math_id": 8,
"text": "((ab)c)d"
},
{
"math_id": 9,
"text": "(a(bc))d"
},
{
"math_id": 10,
"text": "a((bc)d)"
},
{
"math_id": 11,
"text": "(a(b(cd))"
},
{
"math_id": 12,
"text": "(ab)(cd)"
},
{
"math_id": 13,
"text": "abcd"
},
{
"math_id": 14,
"text": "(P \\lor (Q \\lor R)) \\Leftrightarrow ((P \\lor Q) \\lor R)"
},
{
"math_id": 15,
"text": "(P \\land (Q \\land R)) \\Leftrightarrow ((P \\land Q) \\land R),"
},
{
"math_id": 16,
"text": "\\Leftrightarrow"
},
{
"math_id": 17,
"text": "((P \\lor Q) \\lor R) \\leftrightarrow (P \\lor (Q \\lor R))"
},
{
"math_id": 18,
"text": "((P \\land Q) \\land R) \\leftrightarrow (P \\land (Q \\land R))"
},
{
"math_id": 19,
"text": "((P \\leftrightarrow Q) \\leftrightarrow R) \\leftrightarrow (P \\leftrightarrow (Q \\leftrightarrow R))"
},
{
"math_id": 20,
"text": "*"
},
{
"math_id": 21,
"text": "(x*y)*z\\ne x*(y*z)\\qquad\\mbox{for some }x,y,z\\in S."
},
{
"math_id": 22,
"text": "\n(5-3)-2 \\, \\ne \\, 5-(3-2)\n"
},
{
"math_id": 23,
"text": "\n(4/2)/2 \\, \\ne \\, 4/(2/2)\n"
},
{
"math_id": 24,
"text": "\n2^{(1^2)} \\, \\ne \\, (2^1)^2\n"
},
{
"math_id": 25,
"text": "\\begin{align}\n \\mathbf{i} \\times (\\mathbf{i} \\times \\mathbf{j}) &= \\mathbf{i} \\times \\mathbf{k} = -\\mathbf{j} \\\\\n (\\mathbf{i} \\times \\mathbf{i}) \\times \\mathbf{j} &= \\mathbf{0} \\times \\mathbf{j} = \\mathbf{0}\n\\end{align}"
},
{
"math_id": 26,
"text": "\n(1+-1)+(1+-1)+(1+-1)+(1+-1)+(1+-1)+(1+-1)+\\dots = 0\n"
},
{
"math_id": 27,
"text": "\n1+(-1+1)+(-1+1)+(-1+1)+(-1+1)+(-1+1)+(-1+1)+\\dots = 1.\n"
},
{
"math_id": 28,
"text": "\\dfrac{2}{3/4}"
},
{
"math_id": 29,
"text": "\n\\left.\n\\begin{array}{l}\na*b*c=(a*b)*c\n\\\\\na*b*c*d=((a*b)*c)*d\n\\\\\na*b*c*d*e=(((a*b)*c)*d)*e\\quad\n\\\\\n\\mbox{etc.}\n\\end{array}\n\\right\\}\n\\mbox{for all }a,b,c,d,e\\in S\n"
},
{
"math_id": 30,
"text": "\n\\left.\n\\begin{array}{l}\nx*y*z=x*(y*z)\n\\\\\nw*x*y*z=w*(x*(y*z))\\quad\n\\\\\nv*w*x*y*z=v*(w*(x*(y*z)))\\quad\\\\\n\\mbox{etc.}\n\\end{array}\n\\right\\}\n\\mbox{for all }z,y,x,w,v\\in S\n"
},
{
"math_id": 31,
"text": "x-y-z=(x-y)-z"
},
{
"math_id": 32,
"text": "x/y/z=(x/y)/z"
},
{
"math_id": 33,
"text": "(f \\, x \\, y) = ((f \\, x) \\, y)"
},
{
"math_id": 34,
"text": "x^{y^z}=x^{(y^z)}"
},
{
"math_id": 35,
"text": "(x^y)^z=x^{(yz)}"
},
{
"math_id": 36,
"text": "2^{x+3}"
},
{
"math_id": 37,
"text": "2^{(x+3)}"
},
{
"math_id": 38,
"text": "x^{y^z}"
},
{
"math_id": 39,
"text": "y^z"
},
{
"math_id": 40,
"text": "x"
},
{
"math_id": 41,
"text": "{x^y}^z=(x^y)^z"
},
{
"math_id": 42,
"text": "x^{yz}=x^{(yz)}"
},
{
"math_id": 43,
"text": "\\mathbb{Z} \\rarr \\mathbb{Z} \\rarr \\mathbb{Z} = \\mathbb{Z} \\rarr (\\mathbb{Z} \\rarr \\mathbb{Z})"
},
{
"math_id": 44,
"text": "x \\mapsto y \\mapsto x - y = x \\mapsto (y \\mapsto x - y)"
},
{
"math_id": 45,
"text": "(x^\\wedge y)^\\wedge z\\ne x^\\wedge(y^\\wedge z)"
},
{
"math_id": 46,
"text": " a \\uparrow \\uparrow (b \\uparrow \\uparrow c) \\ne (a \\uparrow \\uparrow b) \\uparrow \\uparrow c"
},
{
"math_id": 47,
"text": " a \\uparrow \\uparrow \\uparrow (b \\uparrow \\uparrow \\uparrow c) \\ne (a \\uparrow \\uparrow \\uparrow b) \\uparrow \\uparrow \\uparrow c"
},
{
"math_id": 48,
"text": "\\vec a \\times (\\vec b \\times \\vec c) \\neq (\\vec a \\times \\vec b ) \\times \\vec c \\qquad \\mbox{ for some } \\vec a,\\vec b,\\vec c \\in \\mathbb{R}^3"
},
{
"math_id": 49,
"text": "{(x+y)/2+z\\over2}\\ne{x+(y+z)/2\\over2} \\qquad \\mbox{for all }x,y,z\\in\\mathbb{R} \\mbox{ with }x\\ne z."
},
{
"math_id": 50,
"text": "(A\\backslash B)\\backslash C \\neq A\\backslash (B\\backslash C)"
}
]
| https://en.wikipedia.org/wiki?curid=1335 |
1335094 | Random password generator | Program that generates password from random number generator
A random password generator is a software program or hardware device that takes input from a random or pseudo-random number generator and automatically generates a password. Random passwords can be generated manually, using simple sources of randomness such as dice or coins, or they can be generated using a computer.
While there are many examples of "random" password generator programs available on the Internet, generating randomness can be tricky, and many programs do not generate random characters in a way that ensures strong security. A common recommendation is to use open source security tools where possible, since they allow independent checks on the quality of the methods used. Simply generating a password at random does not ensure the password is a strong password, because it is possible, although highly unlikely, to generate an easily guessed or cracked password. In fact, there is no need at all for a password to have been produced by a perfectly random process: it just needs to be sufficiently difficult to guess.
A password generator can be part of a password manager. When a password policy enforces complex rules, it can be easier to use a password generator based on that set of rules than to manually create passwords.
Long strings of random characters are difficult for most people to memorize. Mnemonic hashes, which reversibly convert random strings into more memorable passwords, can substantially improve the ease of memorization. As the hash can be processed by a computer to recover the original random string, it has at least as much information content as that string. Similar techniques are used in memory sport.
Password type and strength.
Random password generators normally output a string of symbols of specified length. These can be individual characters from some character set, syllables designed to form pronounceable passwords, or words from some word list to form a passphrase. The program can be customized to ensure the resulting password complies with the local password policy, say by always producing a mix of letters, numbers and special characters. Such policies typically reduce strength slightly below the formula that follows, because symbols are no longer independently produced.
The Password strength of a random password against a particular attack (brute-force search), can be calculated by computing the information entropy of the random process that produced it. If each symbol in the password is produced independently and with uniform probability, the entropy in bits is given by the formula formula_0, where "N" is the number of possible symbols and "L" is the number of symbols in the password. The function log2 is the base-2 logarithm. "H" is typically measured in bits.
Any password generator is limited by the state space of the pseudo-random number generator used if it is based on one. Thus a password generated using a 32-bit generator is limited to 32 bits entropy, regardless of the number of characters the password contains.
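A minimal Python sketch of these two points, generating a password from a cryptographically secure source and computing formula_0 for it, is shown below; the alphabet and length are arbitrary illustrative choices.
    import math
    import secrets
    import string
    alphabet = string.ascii_letters + string.digits           # N = 62 symbols
    length = 16
    password = "".join(secrets.choice(alphabet) for _ in range(length))
    entropy_bits = length * math.log2(len(alphabet))           # H = L log2 N
    print(password, f"{entropy_bits:.1f} bits")                # about 95 bits for these choices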
Websites.
A large number of password generator programs and websites are available on the Internet. Their quality varies and can be hard to assess if there is no clear description of the source of randomness that is used and if source code is not provided to allow claims to be checked. Furthermore, and probably most importantly, transmitting candidate passwords over the Internet raises obvious security concerns, particularly if the connection to the password generation site's program is not properly secured or if the site is compromised in some way. Without a secure channel, it is not possible to prevent eavesdropping, especially over public networks such as the Internet. A possible solution to this issue is to generate the password using a client-side programming language such as JavaScript. The advantage of this approach is that the generated password stays in the client computer and is not transmitted to or from an external server.
Web Cryptography API.
The Web Cryptography API is the World Wide Web Consortium’s (W3C) recommendation for a low-level interface that would increase the security of web applications by allowing them to perform cryptographic functions without having to access raw keying material. The Web Crypto API provides a reliable way to generate passwords in client-side JavaScript using the codice_0 method.
FIPS 181 standard.
Many computer systems already have an application (typically named "apg") to implement the password generator standard FIPS 181. FIPS 181—Automated Password Generator—describes a standard process for converting random bits (from a hardware random number generator) into somewhat pronounceable "words" suitable for a passphrase. However, in 1994 an attack on the FIPS 181 algorithm was discovered, such that an attacker can expect, on average, to break into 1% of accounts that have passwords based on the algorithm, after searching just 1.6 million passwords. This is due to the non-uniformity in the distribution of passwords generated, which can be addressed by using longer passwords or by modifying the algorithm.
Mechanical methods.
Yet another method is to use physical devices such as dice to generate the randomness. One simple way to do this uses a 6 by 6 table of characters. The first die roll selects a row in the table and the second a column. So, for example, a roll of 2 followed by a roll of 4 would select the letter "j" from the fractionation table below. To generate upper/lower case characters or some symbols a coin flip can be used, heads capital, tails lower case. If a digit was selected in the dice rolls, a heads coin flip might select the symbol above it on a standard keyboard, such as the '$' above the '4' instead of '4'.
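A short JavaScript simulation of this procedure is sketched below. The 6 by 6 table is filled alphabetically with a–z followed by 0–9, which is consistent with the example above (a roll of 2 then 4 selects "j"); "Math.random" merely stands in for physical dice and coins and is not a cryptographic source.

```javascript
// Illustrative 6x6 fractionation table, filled with a-z then 0-9.
// Row 2, column 4 is "j", matching the example in the text.
const TABLE = [
  ["a", "b", "c", "d", "e", "f"],
  ["g", "h", "i", "j", "k", "l"],
  ["m", "n", "o", "p", "q", "r"],
  ["s", "t", "u", "v", "w", "x"],
  ["y", "z", "0", "1", "2", "3"],
  ["4", "5", "6", "7", "8", "9"],
];

// Math.random() only stands in for physical dice and coins here;
// it is not a cryptographically secure source.
const rollDie = () => Math.floor(Math.random() * 6) + 1;
const flipHeads = () => Math.random() < 0.5;

function diceCharacter() {
  const row = rollDie(), col = rollDie();
  let ch = TABLE[row - 1][col - 1];
  // Heads makes a letter upper case; a similar flip could map a digit
  // to the shifted symbol above it on a keyboard, as described above.
  if (flipHeads() && /[a-z]/.test(ch)) ch = ch.toUpperCase();
  return ch;
}

console.log(Array.from({ length: 10 }, diceCharacter).join(""));
```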
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H = L\\,\\log_2 N"
}
]
| https://en.wikipedia.org/wiki?curid=1335094 |
13351034 | Werner Kuhn (chemist) | Swiss chemist (1899–1963)
Werner Kuhn (February 6, 1899 – August 27, 1963) was a Swiss physical chemist who developed the first model of the viscosity of polymer solutions using statistical mechanics. He is known for being the first to apply Boltzmann's entropy formula:
formula_0
to the modeling of rubber molecules, i.e. the "rubber band entropy model", in which the molecules are imagined as chains of "N" independently oriented links of length "b" with an end-to-end distance of "r". This model, which resulted in the derivation of the thermal equation of state of rubber, has since been extrapolated to the entropic modeling of proteins and other conformational polymer chained molecules attached to a surface.
Kuhn received a degree in chemical engineering at the Eidgenössische Technische Hochschule (ETH, Federal Institute of Technology), in Zürich, and later a doctorate (1923) in physical chemistry. He was appointed professor of physical chemistry at the University of Kiel (1936–39) and then returned to Switzerland as director of the Physico-Chemical Institute of the University of Basel (1939–63), where he also served as rector (1955–56).
In a 1951 lecture along with his student V.B. Hargitay, he was the first to hypothesize the countercurrent multiplier mechanism in the mammalian kidney, later to be discovered in many other similar biological systems.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S = k \\log W \\!"
}
]
| https://en.wikipedia.org/wiki?curid=13351034 |
13353871 | Dawson–Gärtner theorem | Mathematical result in large deviations theory
In mathematics, the Dawson–Gärtner theorem is a result in large deviations theory. Heuristically speaking, the Dawson–Gärtner theorem allows one to transport a large deviation principle on a “smaller” topological space to a “larger” one.
Statement of the theorem.
Let ("Y""j")"j"∈"J" be a projective system of Hausdorff topological spaces with maps "p""ij" : "Y""j" → "Y""i". Let "X" be the projective limit (also known as the inverse limit) of the system ("Y""j", "p""ij")"i","j"∈"J", i.e.
formula_0
Let ("μ""ε")"ε">0 be a family of probability measures on "X". Assume that, for each "j" ∈ "J", the push-forward measures ("p""j"∗"μ""ε")"ε">0 on "Y""j" satisfy the large deviation principle with good rate function "I""j" : "Y""j" → R ∪ {+∞}. Then the family ("μ""ε")"ε">0 satisfies the large deviation principle on "X" with good rate function "I" : "X" → R ∪ {+∞} given by
formula_1 | [
{
"math_id": 0,
"text": "X = \\varprojlim_{j \\in J} Y_{j} = \\left\\{ \\left. y = (y_{j})_{j \\in J} \\in Y = \\prod_{j \\in J} Y_{j} \\right| i < j \\implies y_{i} = p_{ij} (y_{j}) \\right\\}."
},
{
"math_id": 1,
"text": "I(x) = \\sup_{j \\in J} I_{j}(p_{j}(x))."
}
]
| https://en.wikipedia.org/wiki?curid=13353871 |
1335392 | Continued fraction factorization | In number theory, the continued fraction factorization method (CFRAC) is an integer factorization algorithm. It is a general-purpose algorithm, meaning that it is suitable for factoring any integer "n", not depending on special form or properties. It was described by D. H. Lehmer and R. E. Powers in 1931, and developed as a computer algorithm by Michael A. Morrison and John Brillhart in 1975.
The continued fraction method is based on Dixon's factorization method. It uses convergents in the regular continued fraction expansion of
formula_0.
Since this is a quadratic irrational, the continued fraction must be periodic (unless "n" is square, in which case the factorization is obvious).
It has a time complexity of formula_1, in the O and L notations.
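The expansion itself can be computed with the standard recurrence for the continued fraction of a quadratic irrational. The JavaScript sketch below is an illustration of this expansion only, not a factoring implementation: it prints the partial quotients and the small centred residues of the squared convergent numerators mod "n", which CFRAC combines (typically with a factor base and linear algebra over GF(2), as in Dixon's method) into a congruence of squares. The value of "n" is an arbitrary example composite and "k" = 1 is assumed.

```javascript
// Continued fraction expansion of sqrt(n) via the standard recurrence for a
// quadratic irrational, together with the convergent numerators h_i (mod n).
// The centred residues h_i^2 mod n stay small, which is what CFRAC exploits;
// a real implementation would factor these residues over a factor base.
// n is an arbitrary illustrative composite (and not a perfect square).
const n = 13290059n;

// Integer square root of a BigInt by Newton's method.
const isqrt = (x) => {
  let a = x, b = (x + 1n) / 2n;
  while (b < a) { a = b; b = (a + x / a) / 2n; }
  return a;
};

const a0 = isqrt(n);
let m = 0n, d = 1n, a = a0;          // recurrence state for the expansion
let hPrev = 1n, hPrevPrev = 0n;      // h_{-1} = 1, h_{-2} = 0

for (let i = 0; i < 10; i++) {
  const h = (a * hPrev + hPrevPrev) % n;   // h_i mod n
  let r = (h * h) % n;                     // h_i^2 mod n ...
  if (r > n / 2n) r -= n;                  // ... as a centred (possibly negative) residue
  console.log(`a_${i} = ${a}, h_${i}^2 mod n = ${r}`);
  hPrevPrev = hPrev;
  hPrev = h;
  m = d * a - m;
  d = (n - m * m) / d;                     // exact division in this recurrence
  a = (a0 + m) / d;                        // floor, since all quantities are non-negative
}
```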
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{kn},\\qquad k\\in\\mathbb{Z^+}"
},
{
"math_id": 1,
"text": "O\\left(e^{\\sqrt{2\\log n \\log\\log n}}\\right)=L_n\\left[1/2,\\sqrt{2}\\right]"
}
]
| https://en.wikipedia.org/wiki?curid=1335392 |
13354070 | Varadhan's lemma | In mathematics, Varadhan's lemma is a result from the large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic "φ"("Z""ε") of a family of random variables "Z""ε" as "ε" becomes small in terms of a rate function for the variables.
Statement of the lemma.
Let "X" be a regular topological space; let ("Z""ε")"ε">0 be a family of random variables taking values in "X"; let "μ""ε" be the law (probability measure) of "Z""ε". Suppose that ("μ""ε")"ε">0 satisfies the large deviation principle with good rate function "I" : "X" → [0, +∞]. Let "ϕ" : "X" → R be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition
formula_0
where 1("E") denotes the indicator function of the event "E"; or, for some "γ" > 1, the moment condition
formula_1
Then
formula_2 | [
{
"math_id": 0,
"text": "\\lim_{M \\to \\infty} \\limsup_{\\varepsilon \\to 0} \\big(\\varepsilon \\log \\mathbf{E} \\big[ \\exp\\big(\\phi(Z_{\\varepsilon}) / \\varepsilon\\big)\\,\\mathbf{1}\\big(\\phi(Z_{\\varepsilon}) \\geq M\\big) \\big]\\big) = -\\infty,"
},
{
"math_id": 1,
"text": "\\limsup_{\\varepsilon \\to 0} \\big(\\varepsilon \\log \\mathbf{E} \\big[ \\exp\\big(\\gamma \\phi(Z_{\\varepsilon}) / \\varepsilon\\big) \\big]\\big) < \\infty."
},
{
"math_id": 2,
"text": "\\lim_{\\varepsilon \\to 0} \\varepsilon \\log \\mathbf{E} \\big[ \\exp\\big(\\phi(Z_{\\varepsilon}) / \\varepsilon\\big) \\big] = \\sup_{x \\in X} \\big( \\phi(x) - I(x) \\big)."
}
]
| https://en.wikipedia.org/wiki?curid=13354070 |
1335495 | Lambda point | The lambda point is the temperature at which normal fluid helium (helium I) makes the transition to superfluid helium II (approximately 2.17 K at 1 atmosphere). The lowest pressure at which He-I and He-II can coexist is the vapor−He-I−He-II triple point at and , which is the "saturated vapor pressure" at that temperature (pure helium gas in thermal equilibrium over the liquid surface, in a hermetic container). The highest pressure at which He-I and He-II can coexist is the bcc−He-I−He-II triple point with a helium solid at , .
The point's name derives from the graph (pictured) that results from plotting the specific heat capacity as a function of temperature (for a given pressure in the above range, in the example shown, at 1 atmosphere), which resembles the Greek letter lambda formula_0. The specific heat capacity has a sharp peak as the temperature approaches the lambda point. The tip of the peak is so sharp that a critical exponent characterizing the divergence of the heat capacity can be measured precisely only in zero gravity, to provide a uniform density over a substantial volume of fluid. Hence the heat capacity was measured within 2 nK below the transition in an experiment included in a Space Shuttle payload in 1992.
Although the heat capacity has a peak, it does not tend towards infinity (contrary to what the graph may suggest), but has finite limiting values when approaching the transition from above and below. The behavior of the heat capacity near the peak is described by the formula formula_1 where formula_2 is the reduced temperature, formula_3 is the Lambda point temperature, formula_4 are constants (different above and below the transition temperature), and "α" is the critical exponent: formula_5. Since this exponent is negative for the superfluid transition, specific heat remains finite.
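Numerically, the finiteness can be seen from the fact that "t"−α = "t"|α| tends to 0 as "t" → 0 when "α" is negative, so "C" approaches "B"±. A small JavaScript illustration (the amplitudes below are arbitrary placeholders, not measured values):

```javascript
// C(t) ≈ A * t^(-alpha) + B near the transition, with alpha = -0.0127,
// so t^(-alpha) = t^0.0127.  A and B are arbitrary placeholder amplitudes,
// not experimental values; only the behaviour of the power term matters here.
const alpha = -0.0127, A = 1, B = 1;
for (const t of [1e-2, 1e-4, 1e-6, 1e-8]) {
  const term = Math.pow(t, -alpha);
  console.log(`t = ${t}: t^(-alpha) = ${term.toFixed(4)}, C = ${(A * term + B).toFixed(4)}`);
}
```

Because |"α"| is small the singular term decays only slowly, but it shrinks rather than diverges, so the heat capacity stays bounded.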
The quoted experimental value of "α" is in significant disagreement with the most precise theoretical determinations coming from high-temperature expansion techniques, Monte Carlo methods and the conformal bootstrap.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda"
},
{
"math_id": 1,
"text": "C\\approx A_\\pm t^{-\\alpha}+B_\\pm"
},
{
"math_id": 2,
"text": "t=|1-T/T_c|"
},
{
"math_id": 3,
"text": "T_c"
},
{
"math_id": 4,
"text": "A_\\pm,B_\\pm"
},
{
"math_id": 5,
"text": "\\alpha=-0.0127(3)"
}
]
| https://en.wikipedia.org/wiki?curid=1335495 |
1335536 | Butterfly theorem | About the midpoint of a chord of a circle, through which two other chords are drawn
The butterfly theorem is a classical result in Euclidean geometry, which can be stated as follows:
Let "M" be the midpoint of a chord "PQ" of a circle, through which two other chords "AB" and "CD" are drawn; "AD" and "BC" intersect chord "PQ" at "X" and "Y" correspondingly. Then "M" is the midpoint of "XY".
Proof.
A formal proof of the theorem is as follows:
Let the perpendiculars "XX′" and "XX″" be dropped from the point "X" on the straight lines "AM" and "DM" respectively. Similarly, let "YY′" and "YY″" be dropped from the point "Y" perpendicular to the straight lines "BM" and "CM" respectively.
Since
formula_0
formula_1
formula_2
formula_3
formula_4
formula_5
formula_6
formula_7
From the preceding equations and the intersecting chords theorem, it can be seen that
formula_8
formula_9
formula_10
formula_11
formula_12
since "PM"
"MQ".
So,
formula_13
Cross-multiplying in the latter equation,
formula_14
Cancelling the common term
formula_15
from both sides of the resulting equation yields
formula_16
hence "MX"
"MY", since MX, MY, and PM are all positive, real numbers.
Thus, "M" is the midpoint of "XY".
Other proofs too exist, including one using projective geometry.
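The statement is also easy to check numerically. The following JavaScript sketch uses the unit circle with an arbitrarily chosen chord "PQ" and two arbitrary chords through its midpoint, and confirms that the two intersection points are equidistant from "M" up to floating-point error; it is an illustration, not a proof.

```javascript
// Numerical check of the butterfly theorem on the unit circle.
// Chord PQ is the horizontal line y = c, so its midpoint is M = (0, c).
const c = 0.5;                       // arbitrary choice of chord PQ
const M = { x: 0, y: c };

// Intersections of the line through M with direction angle theta and the circle.
function chordThrough(theta) {
  const u = { x: Math.cos(theta), y: Math.sin(theta) };
  const b = M.x * u.x + M.y * u.y;   // from |M + t u|^2 = 1: t^2 + 2bt + (|M|^2 - 1) = 0
  const k = M.x * M.x + M.y * M.y - 1;
  const s = Math.sqrt(b * b - k);
  return [-b + s, -b - s].map(t => ({ x: M.x + t * u.x, y: M.y + t * u.y }));
}

// x-coordinate where the line through P1 and P2 meets the line y = c.
function hitPQ(P1, P2) {
  const t = (c - P1.y) / (P2.y - P1.y);
  return P1.x + t * (P2.x - P1.x);
}

const [A, B] = chordThrough(1.1);    // chord AB through M (arbitrary angle)
const [C, D] = chordThrough(2.0);    // chord CD through M (arbitrary angle)
const X = hitPQ(A, D);               // AD meets PQ at X
const Y = hitPQ(B, C);               // BC meets PQ at Y
console.log("MX =", Math.abs(X - M.x), "MY =", Math.abs(Y - M.x)); // equal up to rounding
```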
History.
Proving the butterfly theorem was posed as a problem by William Wallace in "The Gentleman's Mathematical Companion" (1803). Three solutions were published in 1804, and in 1805 Sir William Herschel posed the question again in a letter to Wallace. Rev. Thomas Scurr asked the same question again in 1814 in the "Gentleman's Diary or Mathematical Repository".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\triangle MXX' \\sim \\triangle MYY',"
},
{
"math_id": 1,
"text": " {MX \\over MY} = {XX' \\over YY'}, "
},
{
"math_id": 2,
"text": " \\triangle MXX'' \\sim \\triangle MYY'',"
},
{
"math_id": 3,
"text": " {MX \\over MY} = {XX'' \\over YY''}, "
},
{
"math_id": 4,
"text": " \\triangle AXX' \\sim \\triangle CYY'',"
},
{
"math_id": 5,
"text": " {XX' \\over YY''} = {AX \\over CY}, "
},
{
"math_id": 6,
"text": " \\triangle DXX'' \\sim \\triangle BYY',"
},
{
"math_id": 7,
"text": " {XX'' \\over YY'} = {DX \\over BY}. "
},
{
"math_id": 8,
"text": " \\left({MX \\over MY}\\right)^2 = {XX' \\over YY' } {XX'' \\over YY''}, "
},
{
"math_id": 9,
"text": " {} = {AX \\cdot DX \\over CY \\cdot BY}, "
},
{
"math_id": 10,
"text": " {} = {PX \\cdot QX \\over PY \\cdot QY}, "
},
{
"math_id": 11,
"text": " {} = {(PM-XM) \\cdot (MQ+XM) \\over (PM+MY) \\cdot (QM-MY)}, "
},
{
"math_id": 12,
"text": " {} = { (PM)^2 - (MX)^2 \\over (PM)^2 - (MY)^2}, "
},
{
"math_id": 13,
"text": " { (MX)^2 \\over (MY)^2} = {(PM)^2 - (MX)^2 \\over (PM)^2 - (MY)^2}. "
},
{
"math_id": 14,
"text": " {(MX)^2 \\cdot (PM)^2 - (MX)^2 \\cdot (MY)^2} = {(MY)^2 \\cdot (PM)^2 - (MX)^2 \\cdot (MY)^2} . "
},
{
"math_id": 15,
"text": " { -(MX)^2 \\cdot (MY)^2} "
},
{
"math_id": 16,
"text": " {(MX)^2 \\cdot (PM)^2} = {(MY)^2 \\cdot (PM)^2}, "
}
]
| https://en.wikipedia.org/wiki?curid=1335536 |
1336000 | Cartan subalgebra | Nilpotent subalgebra of a Lie algebra
In mathematics, a Cartan subalgebra, often abbreviated as CSA, is a nilpotent subalgebra formula_0 of a Lie algebra formula_1 that is self-normalising (if formula_2 for all formula_3, then formula_4). They were introduced by Élie Cartan in his doctoral thesis. It controls the representation theory of a semi-simple Lie algebra formula_1 over a field of characteristic formula_5.
In a finite-dimensional semisimple Lie algebra over an algebraically closed field of characteristic zero (e.g., formula_6), a Cartan subalgebra is the same thing as a maximal abelian subalgebra consisting of elements "x" such that the adjoint endomorphism formula_7 is semisimple (i.e., diagonalizable). Sometimes this characterization is simply taken as the definition of a Cartan subalgebra.pg 231
In general, a subalgebra is called toral if it consists of semisimple elements. Over an algebraically closed field, a toral subalgebra is automatically abelian. Thus, over an algebraically closed field of characteristic zero, a Cartan subalgebra can also be defined as a maximal toral subalgebra.
Kac–Moody algebras and generalized Kac–Moody algebras also have subalgebras that play the same role as the Cartan subalgebras of semisimple Lie algebras (over a field of characteristic zero).
Existence and uniqueness.
Cartan subalgebras exist for finite-dimensional Lie algebras whenever the base field is infinite. One way to construct a Cartan subalgebra is by means of a regular element. Over a finite field, the question of the existence is still open.
For a finite-dimensional semisimple Lie algebra formula_8 over an algebraically closed field of characteristic zero, there is a simpler approach: by definition, a toral subalgebra is a subalgebra of formula_8 that consists of semisimple elements (an element is semisimple if the adjoint endomorphism induced by it is diagonalizable). A Cartan subalgebra of formula_8 is then the same thing as a maximal toral subalgebra and the existence of a maximal toral subalgebra is easy to see.
In a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero, all Cartan subalgebras are conjugate under automorphisms of the algebra, and in particular are all isomorphic. The common dimension of a Cartan subalgebra is then called the rank of the algebra.
For a finite-dimensional complex semisimple Lie algebra, the existence of a Cartan subalgebra is much simpler to establish, assuming the existence of a compact real form. In that case, formula_0 may be taken as the complexification of the Lie algebra of a maximal torus of the compact group.
If formula_1 is a linear Lie algebra (a Lie subalgebra of the Lie algebra of endomorphisms of a finite-dimensional vector space "V") over an algebraically closed field, then any Cartan subalgebra of formula_1 is the centralizer of a maximal toral subalgebra of formula_1. If formula_1 is semisimple and the field has characteristic zero, then a maximal toral subalgebra is self-normalizing, and so is equal to the associated Cartan subalgebra. If in addition formula_8 is semisimple, then the adjoint representation presents formula_8 as a linear Lie algebra, so that a subalgebra of formula_8 is Cartan if and only if it is a maximal toral subalgebra.
Cartan subalgebras of semisimple Lie algebras.
For a finite-dimensional semisimple Lie algebra formula_8 over an algebraically closed field of characteristic 0, a Cartan subalgebra formula_26 has the following properties: it is abelian, and its image formula_28 under the adjoint representation formula_27 consists of semisimple (i.e., diagonalizable) endomorphisms.
These two properties say that the operators in formula_28 are simultaneously diagonalizable and that there is a direct sum decomposition of formula_1 as
formula_29
where
formula_30.
Let formula_31. Then formula_32 is a root system and, moreover, formula_33; i.e., the centralizer of formula_0 coincides with formula_0. The above decomposition can then be written as:
formula_34
As it turns out, for each formula_35, formula_36 has dimension one and so:
formula_37.
See also Semisimple Lie algebra#Structure for further information.
Decomposing representations with dual Cartan subalgebra.
Given a Lie algebra formula_1 over a field of characteristic formula_18, and a Lie algebra representationformula_38 there is a decomposition related to the decomposition of the Lie algebra from its Cartan subalgebra. If we set
formula_39
with formula_40, called the weight space for weight formula_41, there is a decomposition of the representation in terms of these weight spaces formula_42 In addition, whenever formula_43 we call formula_44 a weight of the formula_1-representation formula_45.
Classification of irreducible representations using weights.
It turns out that these weights can be used to classify the irreducible representations of the Lie algebra formula_1. For a finite-dimensional irreducible formula_1-representation formula_45, there exists a unique highest weight formula_35 with respect to a partial ordering on formula_46. Moreover, given a formula_35 such that formula_47 for every positive root formula_48, there exists a unique irreducible representation formula_49. This means the root system formula_32 contains all of the information about the representation theory of formula_1.
Splitting Cartan subalgebra.
Over non-algebraically closed fields, not all Cartan subalgebras are conjugate. An important class are splitting Cartan subalgebras: if a Lie algebra admits a splitting Cartan subalgebra formula_0 then it is called "splittable," and the pair formula_50 is called a split Lie algebra; over an algebraically closed field every semisimple Lie algebra is splittable. Any two splitting Cartan algebras are conjugate, and they fulfill a similar function to Cartan algebras in semisimple Lie algebras over algebraically closed fields, so split semisimple Lie algebras (indeed, split reductive Lie algebras) share many properties with semisimple Lie algebras over algebraically closed fields.
Over a non-algebraically closed field not every semisimple Lie algebra is splittable, however.
Cartan subgroup.
A Cartan subgroup of a Lie group is a subgroup whose Lie algebra is a Cartan subalgebra. Because a subgroup and its identity component have the same Lie algebra, there is no universally agreed-upon definition of which subgroup with this property should be called the Cartan subgroup, especially for disconnected groups.
For a compact connected Lie group, a Cartan subgroup is a maximal connected abelian subgroup, often referred to as a maximal torus; its Lie algebra is a Cartan subalgebra.
For disconnected compact Lie groups there are several inequivalent definitions. One, due to David Vogan, defines a Cartan subgroup as the group of elements that normalize a fixed maximal torus and preserve the fundamental Weyl chamber; this is sometimes called the large Cartan subgroup. There is also a small Cartan subgroup, defined as the centralizer of a maximal torus. These Cartan subgroups need not be abelian in general.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak{h}"
},
{
"math_id": 1,
"text": "\\mathfrak{g}"
},
{
"math_id": 2,
"text": "[X,Y] \\in \\mathfrak{h}"
},
{
"math_id": 3,
"text": "X \\in \\mathfrak{h}"
},
{
"math_id": 4,
"text": "Y \\in \\mathfrak{h}"
},
{
"math_id": 5,
"text": " 0 "
},
{
"math_id": 6,
"text": "\\mathbb{C}"
},
{
"math_id": 7,
"text": "\\operatorname{ad}(x) : \\mathfrak{g} \\to \\mathfrak{g}"
},
{
"math_id": 8,
"text": "\\mathfrak g"
},
{
"math_id": 9,
"text": "\\mathfrak{gl}_{n}"
},
{
"math_id": 10,
"text": "n\\times n"
},
{
"math_id": 11,
"text": " \\mathfrak{sl}_n(\\mathbb{C})"
},
{
"math_id": 12,
"text": "\\mathfrak{h} = \\left\\{ d(a_1,\\ldots,a_n) \\mid a_i \\in \\mathbb{C} \\text{ and } \\sum_{i=1}^n a_i = 0 \\right\\}"
},
{
"math_id": 13,
"text": " d(a_1,\\ldots,a_n) = \\begin{pmatrix}\na_1 & 0 & \\cdots & 0 \\\\\n0 & \\ddots & & 0 \\\\\n\\vdots & & \\ddots & \\vdots \\\\\n0 & \\cdots & \\cdots &a_n\n\\end{pmatrix}\n"
},
{
"math_id": 14,
"text": "\\mathfrak{sl}_2(\\mathbb{C})"
},
{
"math_id": 15,
"text": " \\mathfrak{h} = \\left\\{\n\\begin{pmatrix}\na & 0 \\\\\n0 & -a\n\\end{pmatrix} : a \\in \\mathbb{C}\n\\right\\}"
},
{
"math_id": 16,
"text": "\\mathfrak{sl}_{2}(\\mathbb{R})"
},
{
"math_id": 17,
"text": "2"
},
{
"math_id": 18,
"text": "0"
},
{
"math_id": 19,
"text": "\\mathfrak{sl}_{2n}(\\mathbb{C})"
},
{
"math_id": 20,
"text": "2n"
},
{
"math_id": 21,
"text": "2n-1"
},
{
"math_id": 22,
"text": "n^{2}"
},
{
"math_id": 23,
"text": " \\begin{pmatrix} 0 & A\\\\ 0 & 0 \\end{pmatrix}"
},
{
"math_id": 24,
"text": "A"
},
{
"math_id": 25,
"text": "n"
},
{
"math_id": 26,
"text": "\\mathfrak h"
},
{
"math_id": 27,
"text": "\\operatorname{ad} : \\mathfrak{g} \\to \\mathfrak{gl}(\\mathfrak{g})"
},
{
"math_id": 28,
"text": "\\operatorname{ad}(\\mathfrak h)"
},
{
"math_id": 29,
"text": "\\mathfrak{g} = \\bigoplus_{\\lambda \\in \\mathfrak{h}^*} \\mathfrak{g}_\\lambda"
},
{
"math_id": 30,
"text": "\\mathfrak{g}_\\lambda = \\{ x \\in \\mathfrak{g} : \\text{ad}(h)x = \\lambda(h)x, \\text{ for } h \\in \\mathfrak{h}\n\\}"
},
{
"math_id": 31,
"text": "\\Phi = \\{ \\lambda \\in \\mathfrak{h}^* \\setminus \\{0\\} | \\mathfrak{g}_\\lambda \\ne \\{0\\} \\}"
},
{
"math_id": 32,
"text": "\\Phi"
},
{
"math_id": 33,
"text": "\\mathfrak{g}_0 = \\mathfrak h"
},
{
"math_id": 34,
"text": "\\mathfrak{g} = \\mathfrak{h} \\oplus \\left(\n\\bigoplus_{\\lambda \\in \\Phi} \\mathfrak{g}_\\lambda\n\\right)"
},
{
"math_id": 35,
"text": "\\lambda \\in \\Phi"
},
{
"math_id": 36,
"text": "\\mathfrak{g}_{\\lambda}"
},
{
"math_id": 37,
"text": "\\dim \\mathfrak{g} = \\dim \\mathfrak{h} + \\# \\Phi"
},
{
"math_id": 38,
"text": "\\sigma: \\mathfrak{g}\\to \\mathfrak{gl}(V)"
},
{
"math_id": 39,
"text": "V_\\lambda = \\{v \\in V : (\\sigma(h))(v) = \\lambda(h) v \\text{ for } h \\in \\mathfrak{h} \\}"
},
{
"math_id": 40,
"text": "\\lambda \\in \\mathfrak{h}^*"
},
{
"math_id": 41,
"text": "\\lambda"
},
{
"math_id": 42,
"text": "V = \\bigoplus_{\\lambda \\in \\mathfrak{h}^*} V_\\lambda"
},
{
"math_id": 43,
"text": "V_\\lambda \\neq \\{0\\}"
},
{
"math_id": 44,
"text": "\\lambda "
},
{
"math_id": 45,
"text": "V"
},
{
"math_id": 46,
"text": "\\mathfrak{h}^*"
},
{
"math_id": 47,
"text": "\\langle \\alpha, \\lambda\\rangle \\in \\mathbb{N}"
},
{
"math_id": 48,
"text": "\\alpha \\in \\Phi^+"
},
{
"math_id": 49,
"text": "L^+(\\lambda)"
},
{
"math_id": 50,
"text": "(\\mathfrak{g},\\mathfrak{h})"
}
]
| https://en.wikipedia.org/wiki?curid=1336000 |
13361521 | L-shell | Mathematical parameter used to describe planetary magnetic field lines
The L-shell, L-value, or McIlwain L-parameter (after Carl E. McIlwain) is a parameter describing a particular set of planetary magnetic field lines. Colloquially, L-value often describes the set of magnetic field lines which cross the Earth's magnetic equator at a number of Earth-radii equal to the L-value. For example, formula_0 describes the set of the Earth's magnetic field lines which cross the Earth's magnetic equator two earth radii from the center of the Earth. L-shell parameters can also describe the magnetic fields of other planets. In such cases, the parameter is renormalized for that planet's radius and magnetic field model.
Although L-value is formally defined in terms of the Earth's true instantaneous magnetic field (or a high-order model like IGRF), it is often used to give a general picture of magnetic phenomena near the Earth, in which case it can be approximated using the dipole model of the Earth's magnetic field.
Charged particle motions in a dipole field.
The motions of low-energy charged particles in the Earth's magnetic field (or in any nearly-dipolar magnetic field) can be usefully described in terms of McIlwain's ("B","L") coordinates, the first of which, "B" is just the magnitude (or length) of the magnetic field vector.
This description is most valuable when the gyroradius of the charged particle orbit is small compared to the spatial scale for changes in the field. Then a charged particle will basically follow a helical path orbiting the local field line. In a local coordinate system "{x,y,z}" where "z" is along the field, the transverse motion will be nearly a circle, orbiting the "guiding center", that is the center of the orbit or the local "B" line, with the gyroradius and frequency characteristic of cyclotron motion for the field strength, while the simultaneous motion along "z" will be at nearly uniform velocity, since the component of the Lorentz force along the field line is zero.
At the next level of approximation, as the particle orbits and moves along the field line, along which the field changes slowly, the radius of the orbit changes so as to keep the magnetic flux enclosed by the orbit constant. Since the Lorentz force is strictly perpendicular to the velocity, it cannot change the energy of a charged particle moving in it. Thus the particle's kinetic energy remains constant. Then so also must its speed be constant. Then it can be shown that the particle's velocity parallel to the local field must decrease if the field is increasing along its "z" motion, and increase if the field decreases, while the components of the velocity transverse to the field increase or decrease so as to keep the magnitude of the total velocity constant. Conservation of energy prevents the transverse velocity from increasing without limit, and eventually the longitudinal component of the velocity becomes zero, while the pitch angle, of the particle with respect to the field line, becomes 90°. Then the longitudinal motion is stopped and reversed, and the particle is reflected back towards regions of weaker field, the guiding center now retracing its previous motion along the field line, with the particle's transverse velocity decreasing and its longitudinal velocity increasing.
In the (approximately) dipole field of the Earth, the magnitude of the field is greatest near the magnetic poles, and least near the magnetic Equator. Thus after the particle crosses the Equator, it will again encounter regions of increasing field, until it once again stops at the magnetic mirror point, on the opposite side of the Equator. The result is that, as the particle orbits its guiding center on the field line, it bounces back and forth between the north mirror point and the south mirror point, remaining approximately on the same field line. The particle is therefore endlessly trapped, and cannot escape from the region of the Earth. Particles with too-small pitch angles may strike the top of the atmosphere if they are not mirrored before their field line reaches too close to the Earth, in which case they will eventually be scattered by atoms in the air, lose energy, and be lost from the belts.
However, for particles which mirror at safe altitudes (in yet a further level of approximation), the fact that the field generally increases towards the center of the Earth means that the curvature on the side of the orbit nearest the Earth is somewhat greater than on the opposite side, so that the orbit is slightly non-circular, with a (prolate) cycloidal shape, and the guiding center slowly moves perpendicular both to the field line and to the radial direction. The guiding center of the cyclotron orbit, instead of moving exactly along the field line, therefore drifts slowly east or west (depending on the sign of the charge of the particle), and the local field line connecting the two mirror points at any moment slowly sweeps out a surface connecting them as it moves in longitude. Eventually the particle will drift entirely around the Earth, and the surface will be closed upon itself. These drift surfaces, nested like the skin of an onion, are the surfaces of constant "L" in the McIlwain coordinate system. They apply not only for a perfect dipole field, but also for fields that are approximately dipolar. For a given particle, as long as only the Lorentz force is involved, "B" and "L" remain constant and particles can be trapped indefinitely. Use of ("B","L") coordinates provides us with a way of mapping the real, non-dipolar terrestrial or planetary field into coordinates that behave essentially like those of a perfect dipole. The "L" parameter is traditionally labeled in Earth radii of the point where the shell of the equivalent dipole crosses the magnetic Equator. "B" is measured in gauss.
Equation for L in a dipole magnetic field.
In a centered dipole magnetic field model, the path along a given L shell can be described as
formula_1
where formula_2 is the radial distance (in planetary radii) to a point on the line, formula_3 is its geomagnetic latitude, and formula_4 is the L-shell of interest.
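As an illustration, the following JavaScript sketch traces the "L" = 2 field line mentioned above by evaluating this equation at a few latitudes; the 15° step is an arbitrary choice.

```javascript
// Points along a dipole field line of a given L-shell: r = L * cos^2(lambda),
// with r in planetary radii and lambda the magnetic latitude.
function fieldLine(L, stepDeg = 15) {
  const pts = [];
  // The field line reaches r = 1 (the surface) where cos^2(lambda) = 1/L.
  const maxDeg = (Math.acos(Math.sqrt(1 / L)) * 180) / Math.PI;
  for (let deg = -maxDeg; deg <= maxDeg + 1e-9; deg += stepDeg) {
    const lam = (deg * Math.PI) / 180;
    pts.push({ latitudeDeg: +deg.toFixed(1), r: +(L * Math.cos(lam) ** 2).toFixed(3) });
  }
  return pts;
}

console.table(fieldLine(2));   // the L = 2 shell crosses the magnetic equator at r = 2
```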
L-shells on Earth.
For the Earth, L-shells uniquely define regions of particular geophysical interest. Certain physical phenomena occur in the ionosphere and magnetosphere at characteristic L-shells. For instance, auroral light displays are most common around L=6, can reach L=4 during moderate disturbances, and during the most severe geomagnetic storms, may approach L=2. The Van Allen radiation belts roughly correspond to L=, and L=. The plasmapause is typically around L=5.
L-shells on Jupiter.
The Jovian magnetic field is the strongest planetary field in the solar system. Its magnetic field traps electrons with energies greater than 500 MeV. The characteristic L-shells are L=6, where the electron distribution undergoes a marked hardening (increase of energy), and L=20-50, where the electron energy decreases to the VHF regime and the magnetosphere eventually gives way to the solar wind. Because Jupiter's trapped electrons contain so much energy, they more easily diffuse across L-shells than trapped electrons in Earth's magnetic field. One consequence of this is a more continuous and smoothly-varying radio spectrum emitted by trapped electrons in gyro-resonance.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L = 2"
},
{
"math_id": 1,
"text": " r = L\\cos^2\\lambda "
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "L"
}
]
| https://en.wikipedia.org/wiki?curid=13361521 |
13362584 | Smith–Minkowski–Siegel mass formula | In mathematics, the Smith–Minkowski–Siegel mass formula (or Minkowski–Siegel mass formula) is a formula for the sum of the weights of the lattices (quadratic forms) in a genus, weighted by the reciprocals of the orders of their automorphism groups. The mass formula is often given for integral quadratic forms, though it can be generalized to quadratic forms over any algebraic number field.
In 0 and 1 dimensions the mass formula is trivial, in 2 dimensions it is essentially equivalent to Dirichlet's class number formulas for imaginary quadratic fields, and in 3 dimensions some partial results were given by Gotthold Eisenstein.
The mass formula in higher dimensions was first given by H. J. S. Smith (1867), though his results were forgotten for many years.
It was rediscovered by H. Minkowski (1885), and an error in Minkowski's paper was found and corrected by C. L. Siegel (1935).
Many published versions of the mass formula have errors; in particular the 2-adic densities are difficult to get right, and it is sometimes forgotten that the trivial cases of dimensions 0 and 1 are different from the cases of dimension at least 2.
give an expository account and precise statement of the mass formula for integral quadratic forms, which is reliable because they check it on a large number of explicit cases.
For recent proofs of the mass formula see and .
The Smith–Minkowski–Siegel mass formula is essentially the constant term of the Weil–Siegel formula.
Statement of the mass formula.
If "f" is an "n"-dimensional positive definite integral quadratic form (or lattice) then the mass
of its genus is defined to be
formula_0
where the sum is over all integrally inequivalent forms in the same genus as "f", and Aut(Λ) is the automorphism group of Λ.
The form of the mass formula given by states that for "n" ≥ 2 the mass is given by
formula_1
where "m""p"("f") is the "p"-mass of "f", given by
formula_2
for sufficiently large "r", where "p""s" is the highest power of "p" dividing the determinant of "f". The number "N"("p""r") is the number of "n" by "n" matrices
"X" with coefficients that are integers mod "p" "r" such that
formula_3
where "A" is the Gram matrix of "f", or in other words the order of the automorphism group of the form reduced mod "p" "r".
Some authors state the mass formula in terms of the "p"-adic density
formula_4
instead of the "p"-mass. The "p"-mass is invariant under rescaling "f" but the "p"-density is not.
In the (trivial) cases of dimension 0 or 1 the mass formula needs some modifications. The factor of 2 in front represents the Tamagawa number of the special orthogonal group, which is only 1 in dimensions 0 and 1. Also the factor of 2 in front of "m""p"("f") represents the index of the special orthogonal group in the orthogonal group, which is only 1 in 0 dimensions.
Evaluation of the mass.
The mass formula gives the mass as an infinite product over all primes. This can be rewritten as a finite product as follows. For all but a finite number of primes (those not dividing 2 det("ƒ")) the "p"-mass "m""p"("ƒ") is equal to the standard p-mass std"p"("ƒ"), given by
formula_5 (for "n" = dim("ƒ") even)
formula_6 (for "n" = dim("ƒ") odd)
where the Legendre symbol in the second line is interpreted as 0 if "p" divides 2 det("ƒ").
If all the "p"-masses have their standard value, then the total mass is the
standard mass
formula_7 (For "n" odd)
formula_8 (For "n" even)
where
formula_9
"D" = (−1)"n"/2 det("ƒ")
The values of the Riemann zeta function for even integers "s" are given in terms of Bernoulli numbers by
formula_10
So the mass of "ƒ" is given as a finite product of rational numbers as
formula_11
Evaluation of the "p"-mass.
If the form "f" has a p-adic Jordan decomposition
formula_12
where "q" runs through powers of "p" and "f""q" has determinant prime to "p" and dimension "n"("q"),
then the "p"-mass is given by
formula_13
Here "n"(II) is the sum of the dimensions of all Jordan components of type 2 and "p" = 2, and "n"(I,I) is the total number of pairs of adjacent constituents "f""q", "f"2"q" that are both of type I.
The factor "M""p"("f""q") is called a diagonal factor and is a power of "p" times the order of a certain orthogonal group over the field with "p" elements.
For odd "p" its value is given by
formula_14
when "n" is odd, or
formula_15
when "n" is even and (−1)"n"/2"d""q" is a quadratic residue, or
formula_16
when "n" is even and (−1)"n"/2"d""q" is a quadratic nonresidue.
For "p" = 2 the diagonal factor "M""p"("f""q") is notoriously tricky to calculate. (The notation is misleading as it depends not only on "f""q" but also on "f"2"q" and "f""q"/2.)
Then the diagonal factor "M""p"("f""q") is given as follows.
formula_17
when the form is bound or has octane value +2 or −2 mod 8 or
formula_18
when the form is free and has octane value −1 or 0 or 1 mod 8 or
formula_19
when the form is free and has octane value −3 or 3 or 4 mod 8.
Evaluation of ζ"D"("s").
The required values of the Dirichlet series ζ"D"("s") can be evaluated as follows. We write χ for the Dirichlet character with χ("m") given by 0 if "m" is even, and the Jacobi symbol formula_20 if "m" is odd. We write "k" for the modulus of this character and "k"1 for its conductor, and put χ = χ1ψ where χ1 is the principal character mod "k" and ψ is a primitive character mod "k"1. Then
formula_21
The functional equation for the L-series is
formula_22
where "G" is the Gauss sum
formula_23
If "s" is a positive integer then
formula_24
where "B""s"("x") is a Bernoulli polynomial.
Examples.
For the case of even unimodular lattices Λ of dimension "n" > 0 divisible by 8 the mass formula is
formula_25
where "B""k" is a Bernoulli number.
Dimension "n" = 0.
The formula above fails for "n" = 0, and in general the mass formula needs to be modified in the trivial cases when the dimension is at most 1. For "n" = 0 there is just one lattice, the zero lattice, of weight 1, so the total mass is 1.
Dimension "n" = 8.
The mass formula gives the total mass as
formula_26
There is exactly one even unimodular lattice of dimension 8, the E8 lattice, whose automorphism group is the Weyl group of "E"8 of order 696729600, so this verifies the mass formula in this case.
Smith originally gave a nonconstructive proof of the existence of an even unimodular lattice of dimension 8 using the fact that the mass is non-zero.
Dimension "n" = 16.
The mass formula gives the total mass as
formula_27
There are two even unimodular lattices of dimension 16, one with root system "E"82
and automorphism group of order 2×6967296002 = 970864271032320000, and one with root system "D"16 and automorphism group of order 21516! = 685597979049984000.
So the mass formula is
formula_28
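Both of these identities can be checked with exact integer arithmetic. The following JavaScript sketch uses BigInt rationals to reproduce the dimension-8 and dimension-16 computations above; the Bernoulli numbers are entered by hand rather than computed.

```javascript
// Exact rational arithmetic with BigInt pairs [numerator, denominator],
// enough to reproduce the two mass computations above.
const gcd = (a, b) => (b === 0n ? a : gcd(b, a % b));
const frac = (p, q) => { const g = gcd(p, q); return [p / g, q / g]; };
const mul = ([a, b], [c, d]) => frac(a * c, b * d);
const add = ([a, b], [c, d]) => frac(a * d + c * b, b * d);

// Absolute values of the Bernoulli numbers used above:
// |B2|=1/6, |B4|=1/30, |B6|=1/42, |B8|=1/30, |B10|=5/66, |B12|=691/2730, |B14|=7/6.
const B = { 2: [1n, 6n], 4: [1n, 30n], 6: [1n, 42n], 8: [1n, 30n],
            10: [5n, 66n], 12: [691n, 2730n], 14: [7n, 6n] };

// Dimension 8: |B4|/8 * |B2|/4 * |B4|/8 * |B6|/12
let m8 = [1n, 1n];
for (const [k, d] of [[4, 8n], [2, 4n], [4, 8n], [6, 12n]]) m8 = mul(m8, mul(B[k], [1n, d]));
console.log(m8);    // [ 1n, 696729600n ]

// Dimension 16: |B8|/16 * |B2|/4 * |B4|/8 * |B6|/12 * |B8|/16 * |B10|/20 * |B12|/24 * |B14|/28
let m16 = [1n, 1n];
for (const [k, d] of [[8, 16n], [2, 4n], [4, 8n], [6, 12n], [8, 16n], [10, 20n], [12, 24n], [14, 28n]])
  m16 = mul(m16, mul(B[k], [1n, d]));
console.log(m16);   // [ 691n, 277667181515243520000n ]

// 1/|Aut(E8 x E8)| + 1/|Aut(D16+)| should give the same mass.
console.log(add([1n, 970864271032320000n], [1n, 685597979049984000n]));
```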
Dimension "n" = 24.
There are 24 even unimodular lattices of dimension 24, called the Niemeier lattices. The mass formula for them is checked in .
Dimension "n" = 32.
The mass in this case is large, more than 40 million. This implies that there are more than 80 million even
unimodular lattices of dimension 32, as each has automorphism group of order at least 2 so contributes at most 1/2 to the mass. By refining this argument, showed that there are more than a billion such lattices. In higher dimensions the mass, and hence the number of lattices, increases very rapidly.
Generalizations.
Siegel gave a more general formula that counts the weighted number of representations of one quadratic form by forms in some genus; the Smith–Minkowski–Siegel mass formula is the special case when one form is the zero form.
Tamagawa showed that the mass formula was equivalent to the statement that the Tamagawa number of
the orthogonal group is 2, which is equivalent to saying that the Tamagawa number of its simply connected cover, the spin group, is 1. André Weil conjectured more generally that the Tamagawa number of any simply connected semisimple group is 1, and this conjecture was proved by Kottwitz in 1988.
gave a mass formula for unimodular lattices without roots (or with given root system). | [
{
"math_id": 0,
"text": "m(f) = \\sum_{\\Lambda}{1\\over|{\\operatorname{Aut}(\\Lambda)}|}"
},
{
"math_id": 1,
"text": "m(f) = 2\\pi^{-n(n+1)/4}\\prod_{j=1}^n\\Gamma(j/2)\\prod_{p\\text{ prime}}2m_p(f)"
},
{
"math_id": 2,
"text": "m_p(f) = {p^{(rn(n-1)+s(n+1))/2}\\over N(p^r)}"
},
{
"math_id": 3,
"text": "X^\\text{tr}AX \\equiv A\\ \\bmod\\ p^r"
},
{
"math_id": 4,
"text": "\\alpha_p(f) = {N(p^r)\\over p^{rn(n-1)/2}} = {p^{s(n+1)/2}\\over m_p(f)}"
},
{
"math_id": 5,
"text": "\\operatorname{std}_p(f)= {1\\over 2(1-p^{-2})(1-p^{-4})\\dots(1-p^{2-n}) (1-{(-1)^{n/2}\\det(f)\\choose p}p^{-n/2})} \\quad"
},
{
"math_id": 6,
"text": "\\operatorname{std}_p(f)= {1\\over 2(1-p^{-2})(1-p^{-4})\\dots(1-p^{1-n}) } "
},
{
"math_id": 7,
"text": "\\operatorname{std}(f) = 2\\pi^{-n(n+1)/4}\\left(\\prod_{j=1}^n\\Gamma(j/2)\\right) \\zeta(2)\\zeta(4)\\dots \\zeta(n-1)"
},
{
"math_id": 8,
"text": "\\operatorname{std}(f) = 2\\pi^{-n(n+1)/4}\\left(\\prod_{j=1}^n\\Gamma(j/2)\\right) \\zeta(2)\\zeta(4)\\dots \\zeta(n-2)\\zeta_D(n/2)"
},
{
"math_id": 9,
"text": "\\zeta_D(s) = \\prod_p{1\\over 1-{\\big(\\frac{D}{p}\\big)}p^{-s}}"
},
{
"math_id": 10,
"text": "\\zeta(s) = {(2\\pi)^s\\over 2\\times s!}|B_s|."
},
{
"math_id": 11,
"text": "m(f) = \\operatorname{std}(f)\\prod_{p|2\\det(f)}{m_p(f)\\over \\operatorname{std}_p(f)}."
},
{
"math_id": 12,
"text": "f=\\sum qf_q"
},
{
"math_id": 13,
"text": "m_p(f) = \\prod_qM_p(f_q)\\times \\prod_{q<q'}(q'/q)^{n(q)n(q')/2}\\times 2^{n(I,I)-n(II)}"
},
{
"math_id": 14,
"text": "{1\\over 2(1-p^{-2})(1-p^{-4})\\dots (1-p^{1-n})}"
},
{
"math_id": 15,
"text": "{1\\over 2(1-p^{-2})(1-p^{-4})\\dots (1-p^{2-n})(1-p^{-n/2})}"
},
{
"math_id": 16,
"text": "{1\\over 2(1-p^{-2})(1-p^{-4})\\dots (1-p^{2-n})(1+p^{-n/2})}"
},
{
"math_id": 17,
"text": "{1\\over 2(1-p^{-2})(1-p^{-4})\\dots (1-p^{-2t})}"
},
{
"math_id": 18,
"text": "{1\\over 2(1-p^{-2})(1-p^{-4})\\dots (1-p^{2-2t})(1-p^{-t})}"
},
{
"math_id": 19,
"text": "{1\\over 2(1-p^{-2})(1-p^{-4})\\dots (1-p^{2-2t})(1+p^{-t})}"
},
{
"math_id": 20,
"text": "{\\left(\\frac{D}{m}\\right)}"
},
{
"math_id": 21,
"text": "\\zeta_D(s) = L(s,\\chi) = L(s,\\psi)\\prod_{p|k}\\left(1 - {\\psi(p)\\over p^s}\\right)"
},
{
"math_id": 22,
"text": "L(1-s,\\psi)= {k_1^{s-1}\\Gamma(s)\\over (2\\pi)^s} (i^{-s}+\\psi(-1)i^s)G(\\psi)L(s,\\psi)"
},
{
"math_id": 23,
"text": "G(\\psi) = \\sum_{r=1}^{k_1}\\psi(r)e^{2\\pi i r/k_1}."
},
{
"math_id": 24,
"text": "L(1-s,\\psi) = -{k_1^{s-1}\\over s} \\sum_{r=1}^{k_1}\\psi(r)B_s(r/k_1)"
},
{
"math_id": 25,
"text": "\\sum_{\\Lambda}{1\\over|\\operatorname{Aut}(\\Lambda)|} = {|B_{n/2}|\\over n}\\prod_{1\\le j< n/2}{|B_{2j}|\\over 4j}"
},
{
"math_id": 26,
"text": "{|B_4|\\over 8}{|B_2|\\over 4}{|B_4|\\over 8}{|B_6|\\over 12} = {1/30\\over 8}\\;{1/6\\over 4}\\;{1/30\\over 8}\\;{1/42\\over 12} = {1\\over 696729600}."
},
{
"math_id": 27,
"text": "{|B_8|\\over 16}{|B_2|\\over 4}{|B_4|\\over 8}{|B_6|\\over 12}{|B_8|\\over 16}{|B_{10}|\\over 20}{|B_{12}|\\over 24}{|B_{14}|\\over 28} = {691\\over 277667181515243520000 }."
},
{
"math_id": 28,
"text": "{1\\over 970864271032320000} + {1\\over 685597979049984000} = {691\\over 277667181515243520000 }."
}
]
| https://en.wikipedia.org/wiki?curid=13362584 |
13363621 | Lundquist number | In plasma physics, the Lundquist number (denoted by formula_0) is a dimensionless ratio which compares the timescale of an Alfvén wave crossing to the timescale of resistive diffusion. It is a special case of the magnetic Reynolds number when the Alfvén velocity is the typical velocity scale of the system, and is given by
formula_1
where formula_2 is the typical length scale of the system, formula_3 is the magnetic diffusivity and formula_4 is the Alfvén velocity of the plasma.
High Lundquist numbers indicate highly conducting plasmas, while low Lundquist numbers indicate more resistive plasmas. Laboratory plasma experiments typically have Lundquist numbers between formula_5, while in astrophysical situations the Lundquist number can be greater than formula_6. Considerations of Lundquist number are especially important in magnetic reconnection. | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "S = \\frac{Lv_A}{\\eta} ,"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "\\eta"
},
{
"math_id": 4,
"text": "v_A"
},
{
"math_id": 5,
"text": "10^2-10^8"
},
{
"math_id": 6,
"text": "10^{20}"
}
]
| https://en.wikipedia.org/wiki?curid=13363621 |
1336457 | Loop algebra | Type of Lie algebra of interest in physics
In mathematics, loop algebras are certain types of Lie algebras, of particular interest in theoretical physics.
Definition.
For a Lie algebra formula_0 over a field formula_1, if formula_2 is the space of Laurent polynomials, then
formula_3
with the inherited bracket
formula_4
Geometric definition.
If formula_0 is a Lie algebra, the tensor product of formula_0 with "C"∞("S"1), the algebra of (complex) smooth functions over the circle manifold "S"1 (equivalently, smooth complex-valued periodic functions of a given period),
formula_5
is an infinite-dimensional Lie algebra with the Lie bracket given by
formula_6
Here "g"1 and "g"2 are elements of formula_0 and "f"1 and "f"2 are elements of "C"∞("S"1).
This isn't precisely what would correspond to the direct product of infinitely many copies of formula_0, one for each point in "S"1, because of the smoothness restriction. Instead, it can be thought of in terms of smooth map from "S"1 to formula_0; a smooth parametrized loop in formula_0, in other words. This is why it is called the loop algebra.
Gradation.
Defining formula_7 to be the linear subspace formula_8 the bracket restricts to a product
formula_9
hence giving the loop algebra a formula_10-graded Lie algebra structure.
In particular, the bracket restricts to the 'zero-mode' subalgebra formula_11.
Derivation.
There is a natural derivation on the loop algebra, conventionally denoted formula_12 acting as
formula_13
formula_14
and so can be thought of formally as formula_15.
It is required to define affine Lie algebras, which are used in physics, particularly conformal field theory.
Loop group.
Similarly, a set of all smooth maps from "S"1 to a Lie group "G" forms an infinite-dimensional Lie group (Lie group in the sense we can define functional derivatives over it) called the loop group. The Lie algebra of a loop group is the corresponding loop algebra.
Affine Lie algebras as central extension of loop algebras.
If formula_0 is a semisimple Lie algebra, then a nontrivial central extension of its loop algebra formula_16 gives rise to an affine Lie algebra. Furthermore this central extension is unique.
The central extension is given by adjoining a central element formula_17, that is, for all formula_18,
formula_19
and modifying the bracket on the loop algebra to
formula_20
where formula_21 is the Killing form.
The central extension is, as a vector space, formula_22 (in its usual definition; more generally, formula_23 can be replaced by an arbitrary field).
Cocycle.
Using the language of Lie algebra cohomology, the central extension can be described using a 2-cocycle on the loop algebra. This is the map
formula_24
satisfying
formula_25
Then the extra term added to the bracket is formula_26
Affine Lie algebra.
In physics, the central extension formula_27 is sometimes referred to as the affine Lie algebra. In mathematics, this is insufficient, and the full affine Lie algebra is the vector space
formula_28
where formula_12 is the derivation defined above.
On this space, the Killing form can be extended to a non-degenerate form, and so allows a root system analysis of the affine Lie algebra.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak{g}"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "K[t,t^{-1}]"
},
{
"math_id": 3,
"text": "L\\mathfrak{g} := \\mathfrak{g}\\otimes K[t,t^{-1}],"
},
{
"math_id": 4,
"text": "[X\\otimes t^m, Y\\otimes t^n] = [X,Y]\\otimes t^{m+n}."
},
{
"math_id": 5,
"text": "\\mathfrak{g}\\otimes C^\\infty(S^1),"
},
{
"math_id": 6,
"text": "[g_1\\otimes f_1,g_2 \\otimes f_2]=[g_1,g_2]\\otimes f_1 f_2."
},
{
"math_id": 7,
"text": "\\mathfrak{g}_i"
},
{
"math_id": 8,
"text": "\\mathfrak{g}_i = \\mathfrak{g}\\otimes t^i < L\\mathfrak{g},"
},
{
"math_id": 9,
"text": "[\\cdot\\, , \\, \\cdot]: \\mathfrak{g}_i \\times \\mathfrak{g}_j \\rightarrow \\mathfrak{g}_{i+j},"
},
{
"math_id": 10,
"text": "\\mathbb{Z}"
},
{
"math_id": 11,
"text": "\\mathfrak{g}_0 \\cong \\mathfrak{g}"
},
{
"math_id": 12,
"text": "d"
},
{
"math_id": 13,
"text": "d: L\\mathfrak{g} \\rightarrow L\\mathfrak{g}"
},
{
"math_id": 14,
"text": "d(X\\otimes t^n) = nX\\otimes t^n"
},
{
"math_id": 15,
"text": "d = t\\frac{d}{dt}"
},
{
"math_id": 16,
"text": "L\\mathfrak g"
},
{
"math_id": 17,
"text": "\\hat k"
},
{
"math_id": 18,
"text": "X\\otimes t^n \\in L\\mathfrak{g}"
},
{
"math_id": 19,
"text": "[\\hat k, X\\otimes t^n] = 0,"
},
{
"math_id": 20,
"text": "[X\\otimes t^m, Y\\otimes t^n] = [X,Y] \\otimes t^{m + n} + mB(X,Y) \\delta_{m+n,0} \\hat k,"
},
{
"math_id": 21,
"text": "B(\\cdot, \\cdot)"
},
{
"math_id": 22,
"text": "L\\mathfrak{g} \\oplus \\mathbb{C}\\hat k"
},
{
"math_id": 23,
"text": "\\mathbb{C}"
},
{
"math_id": 24,
"text": "\\varphi: L\\mathfrak g \\times L\\mathfrak g \\rightarrow \\mathbb{C}"
},
{
"math_id": 25,
"text": "\\varphi(X\\otimes t^m, Y\\otimes t^n) = mB(X,Y)\\delta_{m+n,0}."
},
{
"math_id": 26,
"text": "\\varphi(X\\otimes t^m, Y\\otimes t^n)\\hat k."
},
{
"math_id": 27,
"text": "L\\mathfrak g \\oplus \\mathbb C \\hat k"
},
{
"math_id": 28,
"text": "\\hat \\mathfrak{g} = L\\mathfrak{g} \\oplus \\mathbb C \\hat k \\oplus \\mathbb C d"
}
]
| https://en.wikipedia.org/wiki?curid=1336457 |
1336913 | Double dissolution | Procedure of dissolving both houses of the Australian Parliament
A double dissolution is a procedure permitted under the Australian Constitution to resolve deadlocks in the bicameral Parliament of Australia between the House of Representatives (lower house) and the Senate (upper house). A double dissolution is the only circumstance in which the entire Senate can be dissolved.
Similar to the United States Congress, but unlike the British Parliament, Australia's two parliamentary houses generally have almost equal legislative power (the Senate may reject outright but cannot amend appropriation (money) bills, which must originate in the House of Representatives). Governments, which are formed in the House of Representatives, can be frustrated by a Senate determined to reject their legislation.
If the conditions (called a trigger) are satisfied, the prime minister can advise the governor-general to dissolve both houses of Parliament and call a full election. If, after the election, the legislation that triggered the double dissolution is still not passed by the two houses, then a joint sitting of the two houses of parliament can be called to vote on the legislation. If the legislation is passed by the joint sitting, it is deemed to have passed both the House of Representatives and the Senate. The 1974 joint sitting remains the only occurrence in federal Australian history.
Historically, a double dissolution election has been called in lieu of an early election, with the formal trigger bill not playing a significant role during the subsequent election campaign.
There are also similar double dissolution provisions in the South Australian state constitution.
Constitutional basis.
Part of section 57 of the Constitution provides:
If the House of Representatives passes any proposed law, and the Senate rejects or fails to pass it, or passes it with amendments to which the House of Representatives will not agree, and if after an interval of three months the House of Representatives, in the same or the next session, again passes the proposed law with or without any amendments which have been made, suggested, or agreed to by the Senate, and the Senate rejects or fails to pass it, or passes it with amendments to which the House of Representatives will not agree, the Governor-General may dissolve the Senate and the House of Representatives simultaneously. But such dissolution shall not take place within six months before the date of the expiry of the House of Representatives by effluxion of time.
Section 57 also provides that, following the election, if the Senate a third time rejects the bill or bills that were the subject of the double dissolution, the Governor-General may convene a joint sitting of the two houses to consider the bill or bills, including any amendments which have been previously proposed in either house, or any new amendments. If a bill is passed by an absolute majority of the total membership of the joint sitting, it is treated as though it had been passed separately by both houses, and is presented for royal assent. The only time this procedure was invoked was in the 1974 joint sitting.
Trigger event.
The double dissolution provision comes into play if the Senate and House twice fail to agree on a piece of legislation (in section 57 called a "proposed law", and commonly referred to as a "trigger"). When one or more such triggers exist, the Governor-General may dissolve both the House and Senate – pursuant to section 57 of the Constitution – and issue writs for an election in which every seat in the Parliament is contested.
The conditions stipulated by section 57 of the Constitution are:
There is no similar provision for resolving deadlocks with respect to bills that have originated in the Senate and are blocked in the House of Representatives.
Though the Constitution refers to possible actions by the Governor-General, it had long been presumed that convention required the Governor-General to act only on the advice of the Prime Minister and the Cabinet. However, as the 1975 constitutional crisis demonstrated, the Governor-General is not compelled to follow the Prime Minister's advice. In these cases, he or she must be personally satisfied that the conditions specified in the Constitution apply, and is entitled to seek additional information or advice before coming to a decision.
Practice and misconceptions.
As a High Court Chief Justice Barwick observed in a unanimous decision in "Cormack v Cope (Joint Sittings Case)" (1974) (with emphasis added):
<templatestyles src="Template:Blockquote/styles.css" />
History.
There have been seven double dissolutions: in 1914, 1951, 1974, 1975, 1983, 1987 and 2016. However, a joint sitting following a double dissolution pursuant to section 57 has only taken place once, in 1974.
Summary.
The following table is a summary of the relevant details:
Elections.
A double dissolution affects the outcome of elections for houses of parliament using proportional representation over multiple elections, such as the proportional voting system for the Senate where each state normally only elects half its Senate delegation, but following a double dissolution, each state elects its entire senate delegation. The outcome is affected in two ways:
Neither of these issues arise in relation to the two territories represented in the Senate as each elects its two senators to a term ending at the dissolution of the House of Representatives.
Quota.
Under proportional representation, the more seats there are, the easier it is for smaller parties to win a seat. A double dissolution increases the number of available seats because all seats are contested in the same election. The following calculations refer to the current arrangement of 12 Senate seats per state, in place since 1984; the calculations are similar for the period from 1949 until 1983, when there were 10 Senate seats per state. The quota for the election of each senator in each Australian state in a full Senate election is 7.69% (formula_0), while in a normal half-Senate election the quota is 14.28% (formula_1).
While the threshold is lower for smaller parties, for more significant parties the distribution of candidates' votes as they are eliminated has a rounding effect. A double dissolution favours parties that have a vote significantly greater than a multiple of the required double dissolution vote and greater than a multiple of the normal quota. It disadvantages those that do not. For example, a party achieving 10% of the vote is likely to get one candidate out of six elected in a regular election (as minor parties' votes are distributed until they get to 14.28%) but the same party with the same vote is likely to have one candidate out of 12 elected during a double dissolution election (as their second candidate will be left with 2.31% and be excluded early in the count). A party with 25% is likely to achieve three candidates out of 12 during a double dissolution election (three candidates and 1.83% of the vote for their 4th candidate distributed to other candidates) and two out of six in a regular election (one candidate taking 14.28% and the second holding 10.72% remains standing until minor parties' preferences push the second candidate to a quota).
Since the abolition of group voting tickets in the lead-up to the 2016 general election, it is no longer possible to create "calculators" that assess the senate election outcome with reasonable accuracy. Antony Green's working guide is that "if a party has more than 0.5 of a quota, it will be in the race for one of the final seats". The percentage of the primary vote required for the first six full and half quotas at a double dissolution election can be calculated directly from the quota, as sketched below.
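The thresholds follow directly from the 7.69% quota. The JavaScript sketch below computes the primary vote corresponding to "k" full quotas and to "k" − 1/2 quotas at a 12-seat double dissolution election; it reproduces the arithmetic of the quota formula rather than any published table.

```javascript
// Quota for electing s senators, as in the formulas above: 1 / (s + 1).
// At a double dissolution all 12 state seats are contested, so the quota is 1/13.
const quota = 1 / (12 + 1);   // about 7.69% of the formal vote

for (let k = 1; k <= 6; k++) {
  const full = (k * quota * 100).toFixed(2);
  const half = ((k - 0.5) * quota * 100).toFixed(2);
  console.log(`${k} quotas: ${full}%   ${k - 0.5} quotas: ${half}%`);
}
```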
Unlike the case of a normal half-Senate election, the newly elected Senate, like the House, takes office immediately. The Senate cycle is altered, with the next change of Senate membership scheduled for the third date that falls on 1 July after the election. The senators from each state are divided into two classes: the first class receive three-year terms and the second class receive six-year terms (both of these may be interrupted by another double dissolution). Thus for the Parliament elected in the March 1983 double dissolution election, the next two Senate changeovers would have been due on 1 July 1985 and 1 July 1988, while the term of the new House of Representatives would have expired in 1986. Bob Hawke decided to call a regular federal election for December 1984 after only 18 months in office, to bring the two election cycles back into synchronisation.
Allocation of long-term and short-term seats.
In order to return to the normal arrangement of half the state senators being contested at each election, following a double dissolution, section 13 of the Australian Constitution requires the Senate to divide the state senators into two classes, with three-year and six-year terms. This has traditionally been done by allocating the long terms to the senators elected earliest in the count. The 1984 amendments to the Commonwealth Electoral Act required the Australian Electoral Commission to conduct a notional recount as if only half the seats were to be elected, which was seen as producing a fairer allocation. This alternative allocation has not yet been used. Following the double dissolution elections in 1987 and 2016, the order-elected method continued to be used, despite Senate resolutions in 1998 and 2010 agreeing to use the new method.
South Australian double dissolutions.
Under section 41 of the South Australian constitution, if a bill is passed by the House of Assembly during a session of Parliament and in the following Parliament after a general election for the lower house is rejected by the Legislative Council on both occasions, it is permitted for the Governor of South Australia to either issue a writ for the election of 2 additional members of the Legislative Council or to dissolve both houses at the same time to elect an entirely new Parliament. As the upper house consists of 22 members, with 11 elected statewide at each general election for an 8-year term at a quota of 8.33%, this would result in an election for all 22 members at a quota of 4.35%.
Although it has been threatened, this South Australian double dissolution procedure has never been used.
Explanatory notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\dfrac{1}{12+1}"
},
{
"math_id": 1,
"text": "\\dfrac{1}{6+1}"
}
]
| https://en.wikipedia.org/wiki?curid=1336913 |
1336960 | Failure rate | Frequency with which an engineered system or component fails
Failure rate is the frequency with which an engineered system or component fails, expressed in failures per unit of time. It is usually denoted by the Greek letter λ (lambda) and is often used in reliability engineering.
The failure rate of a system usually depends on time, with the rate varying over the life cycle of the system. For example, an automobile's failure rate in its fifth year of service may be many times greater than its failure rate during its first year of service. One does not expect to replace an exhaust pipe, overhaul the brakes, or have major transmission problems in a new vehicle.
In practice, the mean time between failures (MTBF, 1/λ) is often reported instead of the failure rate. This is valid and useful if the failure rate may be assumed constant – as is often done for complex units/systems and electronics – and is the general convention in some reliability standards (military and aerospace). In that case the MTBF relates only to the flat region of the bathtub curve, also called the "useful life period". Because of this, it is incorrect to extrapolate MTBF to give an estimate of the service lifetime of a component, which will typically be much less than suggested by the MTBF due to the much higher failure rates in the "end-of-life wearout" part of the "bathtub curve".
The reason for the preferred use for MTBF numbers is that the use of large positive numbers (such as 2000 hours) is more intuitive and easier to remember than very small numbers (such as 0.0005 per hour).
The MTBF is an important system parameter in systems where failure rate needs to be managed, in particular for safety systems. The MTBF appears frequently in the engineering design requirements, and governs frequency of required system maintenance and inspections. In special processes called renewal processes, where the time to recover from failure can be neglected and the likelihood of failure remains constant with respect to time, the failure rate is simply the multiplicative inverse of the MTBF (1/λ).
A similar ratio used in the transport industries, especially in railways and trucking is "mean distance between failures", a variation which attempts to correlate actual loaded distances to similar reliability needs and practices.
Failure rates are important factors in the insurance, finance, commerce and regulatory industries and fundamental to the design of safe systems in a wide variety of applications.
Failure rate data.
Failure rate data can be obtained in several ways. The most common means are:
Given a component database calibrated with field failure data that is reasonably accurate, the method can predict product-level failure rate and failure-mode data for a given application. The predictions have been shown to be more accurate than field warranty return analysis or even typical field failure analysis, given that these methods depend on reports that typically do not have sufficiently detailed information in failure records.
Failure rate in the discrete sense.
The failure rate can be defined as the following:
The total number of failures within an item population, divided by the total time expended by that population, during a particular measurement interval under stated conditions. (MacDiarmid, "et al.")
Although the failure rate, formula_0, is often thought of as the probability that a failure occurs in a specified interval given no failure before time formula_1, it is not actually a probability because it can exceed 1. Expressing the failure rate as a percentage can therefore give a misleading impression of the measure, especially when it is calculated for repairable systems or for multiple systems with non-constant failure rates or different operating times. It can be defined with the aid of the reliability function, also called the survival function, formula_2, the probability of no failure before time formula_1.
formula_3, where formula_4 is the time to (first) failure distribution (i.e. the failure density function).
formula_5
over a time interval formula_6 = formula_7 from formula_8 (or formula_1) to formula_9. Note that this is a conditional probability, where the condition is that no failure has occurred before time formula_1. Hence the formula_10 in the denominator.
Hazard rate and ROCOF (rate of occurrence of failures) are often incorrectly seen as the same and equal to the failure rate. To clarify: the more promptly items are repaired, the sooner they will break again, so the higher the ROCOF. The hazard rate, however, is independent of the time to repair and of the logistic delay time.
Failure rate in the continuous sense.
Calculating the failure rate for ever smaller intervals of time results in the <templatestyles src="Template:Visible anchor/styles.css" />hazard function (also called hazard rate), formula_11. This becomes the "instantaneous" failure rate, or instantaneous hazard rate, as formula_12 approaches zero:
formula_13
A continuous failure rate depends on the existence of a failure distribution, formula_14, which is a cumulative distribution function that describes the probability of failure (at least) up to and including time "t",
formula_15
where formula_16 is the failure time.
The failure distribution function is the integral of the failure "density" function, "f"("t"),
formula_17
The hazard function can be defined now as
formula_18
Many probability distributions can be used to model the failure distribution ("see List of important probability distributions"). A common model is the exponential failure distribution,
formula_19
which is based on the exponential density function. The hazard rate function for this is:
formula_20
Thus, for an exponential failure distribution, the hazard rate is a constant with respect to time (that is, the distribution is "memory-less"). For other distributions, such as a Weibull distribution, log-normal distribution, or a hypertabastic distribution, the hazard function may not be constant with respect to time. For some, such as the deterministic distribution, it is monotonically increasing (analogous to "wearing out"); for others, such as the Pareto distribution, it is monotonically decreasing (analogous to "burning in"); while for many it is not monotonic.
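As a hedged numerical illustration (Python with SciPy; the rate and shape parameters below are arbitrary choices, not values from the text), the hazard function formula_18 can be evaluated for an exponential and a Weibull distribution, showing a constant hazard in the first case and an increasing one in the second:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def hazard(dist, t):
    """Hazard function h(t) = f(t) / (1 - F(t)) = f(t) / R(t)."""
    return dist.pdf(t) / dist.sf(t)   # sf(t) is the survival (reliability) function R(t)

t = np.array([500.0, 1000.0, 2000.0, 4000.0])          # hours (illustrative)

exp_dist = stats.expon(scale=1 / 0.001)                # lambda = 0.001 failures/hour (illustrative)
weib_dist = stats.weibull_min(c=2.0, scale=1000.0)     # shape 2 gives an increasing hazard (illustrative)

print("exponential hazard:", hazard(exp_dist, t))      # constant, equal to 0.001 at every t
print("Weibull hazard:    ", hazard(weib_dist, t))     # grows with t (wear-out behaviour)
</syntaxhighlight>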
Solving the differential equation
formula_21
for formula_14, it can be shown that
formula_22
Decreasing failure rate.
A decreasing failure rate (DFR) describes a phenomenon where the probability of an event in a fixed time interval in the future decreases over time. A decreasing failure rate can describe a period of "infant mortality" where earlier failures are eliminated or corrected and corresponds to the situation where λ("t") is a decreasing function.
Mixtures of DFR variables are DFR. Mixtures of exponentially distributed random variables are hyperexponentially distributed.
Renewal processes.
For a renewal process with DFR renewal function, inter-renewal times are concave. Brown conjectured the converse, that DFR is also necessary for the inter-renewal times to be concave; however, it has been shown that this conjecture holds neither in the discrete case nor in the continuous case.
Applications.
Increasing failure rate is an intuitive concept caused by components wearing out. Decreasing failure rate describes a system which improves with age.
Decreasing failure rates have been found in the lifetimes of spacecraft, Baker and Baker commenting that "those spacecraft that last, last on and on." The failure times of aircraft air conditioning systems were individually found to have an exponential distribution, and thus the pooled population exhibits a DFR.
Coefficient of variation.
When the failure rate is decreasing the coefficient of variation is ⩾ 1, and when the failure rate is increasing the coefficient of variation is ⩽ 1. Note that this result only holds when the failure rate is defined for all t ⩾ 0 and that the converse result (coefficient of variation determining nature of failure rate) does not hold.
Units.
Failure rates can be expressed using any measure of time, but hours is the most common unit in practice. Other units, such as miles, revolutions, etc., can also be used in place of "time" units.
Failure rates are often expressed in engineering notation as failures per million, or 10⁻⁶, especially for individual components, since their failure rates are often very low.
The Failures In Time (FIT) rate of a device is the number of failures that can be expected in one billion (10⁹) device-hours of operation.
(E.g. 1000 devices for 1 million hours, or 1 million devices for 1000 hours each, or some other combination.) This term is used particularly by the semiconductor industry.
The relationship of FIT to MTBF may be expressed as: MTBF = 1,000,000,000 × 1/FIT.
Additivity.
Under certain engineering assumptions (e.g. besides the above assumptions for a constant failure rate, the assumption that the considered system has no relevant redundancies), the failure rate for a complex system is simply the sum of the individual failure rates of its components, as long as the units are consistent, e.g. failures per million hours. This permits testing of individual components or subsystems, whose failure rates are then added to obtain the total system failure rate.
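A small sketch (Python; the component FIT values are made-up illustrative numbers, not taken from any handbook) combining this additivity with the FIT/MTBF relation given above:
<syntaxhighlight lang="python">
# Illustrative (made-up) component failure rates in FIT (failures per 10^9 device-hours).
component_fits = {"capacitor": 2.5, "resistor": 0.5, "microcontroller": 40.0, "connector": 7.0}

system_fit = sum(component_fits.values())      # additivity of constant failure rates (series system)
system_mtbf_hours = 1e9 / system_fit           # MTBF = 1,000,000,000 x 1/FIT

print(f"System failure rate: {system_fit} FIT")
print(f"System MTBF: {system_mtbf_hours:,.0f} hours")
</syntaxhighlight>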
Adding "redundant" components to eliminate a single point of failure improves the mission failure rate, but makes the series failure rate (also called the logistics failure rate) worse—the extra components improve the mean time between critical failures (MTBCF), even though the mean time before something fails is worse.
Example.
Suppose it is desired to estimate the failure rate of a certain component. A test can be performed to estimate its failure rate. Ten identical components are each tested until they either fail or reach 1000 hours, at which time the test is terminated for that component. (The level of statistical confidence is not considered in this example.) The results are as follows:
Estimated failure rate is
formula_23
or 799.8 failures for every million hours of operation.
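A minimal check of this arithmetic (Python; only the totals quoted above are used, since the per-unit operating times are not reproduced here):
<syntaxhighlight lang="python">
failures = 6
total_hours = 7502                                # total operating time accumulated by the ten units

failure_rate = failures / total_hours             # failures per hour
print(f"{failure_rate:.7f} failures/hour")        # 0.0007998
print(f"{failure_rate * 1e6:.1f} failures per million hours")   # 799.8
print(f"MTBF = {1 / failure_rate:.0f} hours")     # about 1250 hours, assuming a constant rate
</syntaxhighlight>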
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda (t)"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "R(t)=1-F(t)"
},
{
"math_id": 3,
"text": "\\lambda(t) = \\frac{f(t)}{R(t)}"
},
{
"math_id": 4,
"text": "f(t)"
},
{
"math_id": 5,
"text": "\\lambda(t) = \\frac{R(t_1)-R(t_2)}{(t_2-t_1) \\cdot R(t_1)}\n = \\frac{R(t)-R(t+\\Delta t)}{\\Delta t \\cdot R(t)} \\!"
},
{
"math_id": 6,
"text": "\\Delta t"
},
{
"math_id": 7,
"text": "(t_2-t_1)"
},
{
"math_id": 8,
"text": "t_1"
},
{
"math_id": 9,
"text": "t_2"
},
{
"math_id": 10,
"text": "R(t)"
},
{
"math_id": 11,
"text": "h(t)"
},
{
"math_id": 12,
"text": "\\Delta t "
},
{
"math_id": 13,
"text": "h(t)=\\lim_{\\Delta t \\to 0} \\frac{R(t)-R(t+\\Delta t)}{\\Delta t \\cdot R(t)} =-\\frac{\\mathrm{d} }{\\mathrm{d} t}\\ln R(t)."
},
{
"math_id": 14,
"text": "F(t)"
},
{
"math_id": 15,
"text": "\\operatorname{P}(T\\le t)=F(t)=1-R(t),\\quad t\\ge 0. \\!"
},
{
"math_id": 16,
"text": "{T}"
},
{
"math_id": 17,
"text": "F(t)=\\int_{0}^{t} f(\\tau)\\, d\\tau. \\!"
},
{
"math_id": 18,
"text": "h(t)=\\frac{f(t)}{1-F(t)}=\\frac{f(t)}{R(t)}."
},
{
"math_id": 19,
"text": "F(t)=\\int_{0}^{t} \\lambda e^{-\\lambda \\tau}\\, d\\tau = 1 - e^{-\\lambda t}, \\!"
},
{
"math_id": 20,
"text": "h(t) = \\frac{f(t)}{R(t)} = \\frac{\\lambda e^{-\\lambda t}}{e^{-\\lambda t}} = \\lambda ."
},
{
"math_id": 21,
"text": "h(t)=\\frac{f(t)}{1-F(t)}=\\frac{F'(t)}{1-F(t)}"
},
{
"math_id": 22,
"text": "F(t) = 1 - \\exp{\\left(-\\int_0^t h(t) dt \\right)}."
},
{
"math_id": 23,
"text": "\\frac{6\\text{ failures}}{7502\\text{ hours}} = 0.0007998\\, \\frac{\\text{failures}}{\\text{hour}} = 799.8 \\times 10^{-6}\\, \\frac{\\text{failures}}{\\text{hour}}, "
}
]
| https://en.wikipedia.org/wiki?curid=1336960 |
13371195 | ANOVA–simultaneous component analysis | In computational biology and bioinformatics, analysis of variance – simultaneous component analysis (ASCA or ANOVA–SCA) is a method that partitions variation and enables interpretation of these partitions by SCA, a method that is similar to principal components analysis (PCA). Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures used to analyze differences between group means. Simultaneous component analysis (SCA) models several groups of objects or subjects with one common set of component loadings and is mathematically closely related to PCA.
This method is a multivariate, or even megavariate, extension of analysis of variance (ANOVA). The variation partitioning is similar to ANOVA: each partition matches all variation induced by an effect or factor, usually a treatment regime or experimental condition. The calculated effect partitions are called effect estimates. Because even the effect estimates are multivariate, interpretation of these effect estimates is not intuitive. By applying SCA to the effect estimates one gets a simple, interpretable result. In the case of more than one effect, this method estimates the effects in such a way that the different effects are not correlated.
Details.
Many research areas see increasingly large numbers of variables measured in only a few samples. The low sample-to-variable ratio creates problems known as multicollinearity and singularity. Because of this, most traditional multivariate statistical methods cannot be applied.
ASCA algorithm.
This section details how to calculate the ASCA model for a case with two main effects and one interaction effect. It is easy to extend the rationale given here to more main effects and more interaction effects. If the first effect is time and the second effect is dosage, only the interaction between time and dosage exists. We assume there are four time points and three dosage levels.
Let X be a matrix that holds the data. X is mean centered, thus having zero mean columns. Let A and B denote the main effects and AB the interaction of these effects. Two main effects in a biological experiment can be time (A) and pH (B), and these two effects may interact. In designing such experiments one controls the main effects to several (at least two) levels. The different levels of an effect can be referred to as A1, A2, A3 and A4, representing 2, 3, 4, 5 hours from the start of the experiment. The same thing holds for effect B, for example, pH 6, pH 7 and pH 8 can be considered effect levels.
A and B are required to be balanced if the effect estimates need to be orthogonal and the partitioning unique. Matrix E holds the information that is not assigned to any effect. The partitioning gives the following notation:
formula_0
Calculating main effect estimate A (or B).
Find all rows that correspond to effect A level 1 and average these rows. The result is a vector. Repeat this for the other effect levels. Make a new matrix of the same size as X and place the calculated averages in the matching rows. That is, give all rows that match effect A level 1 the average of effect A level 1.
After completing the level estimates for the effect, perform an SCA. The scores of this SCA are the sample deviations for the effect, the important variables of this effect are in the weights of the SCA loading vector.
Calculating interaction effect estimate AB.
Estimating the interaction effect is similar to estimating main effects. The difference is that for interaction estimates the rows that match effect A level 1 are combined with effect B level 1, and all combinations of effects and levels are cycled through. In our example setting, with four time points and three dosage levels, there are 12 interaction sets {A1B1, A1B2, A2B1, A2B2 and so on}. It is important to deflate (remove) the main effects before estimating the interaction effect.
SCA on partitions A, B and AB.
Simultaneous component analysis is mathematically identical to PCA, but is semantically different in that it models different objects or subjects at the same time.
The standard notation for a SCA – and PCA – model is:
formula_1
where "X" is the data, "T" are the component scores and "P" are the component loadings. "E" is the residual or error matrix. Because ASCA models the variation partitions by SCA, the model for effect estimates looks like this:
formula_2
formula_3
formula_4
formula_5
Note that every partition has its own error matrix. However, algebra dictates that in a balanced, mean-centered data set every two-level system is of rank 1. This results in zero errors, since any rank-1 matrix can be written as the product of a single component score and loading vector.
The full ASCA model with two effects and interaction including the SCA looks like this:
Decomposition:
formula_6
formula_7
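A compact numerical sketch of this partitioning (Python with NumPy; the design follows the running example of four time points and three dosage levels, while the replicate count, number of variables, simulated values and helper names are all ad hoc choices):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Balanced design: 4 time points (A) x 3 dosage levels (B) x 2 replicates, 10 variables.
levels_a = np.repeat(np.arange(4), 3 * 2)            # effect A label for each row
levels_b = np.tile(np.repeat(np.arange(3), 2), 4)    # effect B label for each row
X = rng.normal(size=(levels_a.size, 10))
X -= X.mean(axis=0)                                  # mean-centre the columns

def effect_estimate(M, labels):
    """Replace each row by the average of all rows sharing its factor level."""
    est = np.zeros_like(M)
    for lv in np.unique(labels):
        mask = labels == lv
        est[mask] = M[mask].mean(axis=0)
    return est

A = effect_estimate(X, levels_a)
B = effect_estimate(X, levels_b)
# Interaction: cell averages per (A, B) combination, after deflating the main effects.
AB = effect_estimate(X - A - B, levels_a * 3 + levels_b)
E = X - A - B - AB                                   # residual partition

def sca(M, n_components=2):
    """SCA of one partition via the SVD (mathematically the same as PCA)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    T = U[:, :n_components] * s[:n_components]       # scores: sample deviations for the effect
    P = Vt[:n_components].T                          # loadings: weights of the variables
    return T, P

T_a, P_a = sca(A)                                    # interpret effect A through its scores and loadings
</syntaxhighlight>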
Time as an effect.
Because 'time' is treated as a qualitative factor in the ANOVA decomposition preceding ASCA, a nonlinear multivariate time trajectory can be modeled. An example of this is shown in Figure 10 of this reference.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X = A+B+AB+E \\,"
},
{
"math_id": 1,
"text": "X=TP^{'}+E \\,"
},
{
"math_id": 2,
"text": "A=T_{a}P_{a}^{'}+E_{a} \\,"
},
{
"math_id": 3,
"text": "B=T_{b}P_{b}^{'}+E_{b} \\,"
},
{
"math_id": 4,
"text": "AB=T_{ab}P_{ab}^{'}+E_{ab} \\,"
},
{
"math_id": 5,
"text": "E=T_{e}P_{e}^{'}+E_{e} \\,"
},
{
"math_id": 6,
"text": "X=A+B+AB+E \\,"
},
{
"math_id": 7,
"text": "X=T_{a}P_{a}^{'}+T_{b}P_{b}^{'}+T_{ab}P_{ab}^{'}+T_{e}P_{e}^{'}+E_{a}+E_{b}+E_{ab}+E_{e}+E \\,"
}
]
| https://en.wikipedia.org/wiki?curid=13371195 |
1337282 | Topological property | Mathematical property of a space
In topology and related areas of mathematics, a topological property or topological invariant is a property of a topological space that is invariant under homeomorphisms. Alternatively, a topological property is a proper class of topological spaces which is closed under homeomorphisms. That is, a property of spaces is a topological property if whenever a space "X" possesses that property every space homeomorphic to "X" possesses that property. Informally, a topological property is a property of the space that can be expressed using open sets.
A common problem in topology is to decide whether two topological spaces are homeomorphic or not. To prove that two spaces are "not" homeomorphic, it is sufficient to find a topological property which is not shared by them.
Properties of topological properties.
A property formula_0 is:
Common topological properties.
Separation.
Some of these terms are defined differently in older mathematical literature; see history of the separation axioms.
Non-topological properties.
There are many examples of properties of metric spaces, etc., which are not topological properties. To show a property formula_0 is not topological, it is sufficient to find two homeomorphic topological spaces formula_24 such that formula_6 has formula_0, but formula_25 does not have formula_0.
For example, the metric space properties of boundedness and completeness are not topological properties. Let formula_26 and formula_27 be metric spaces with the standard metric. Then, formula_24 via the homeomorphism formula_28. However, formula_6 is complete but not bounded, while formula_25 is bounded but not complete.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "(X, \\mathcal{T})"
},
{
"math_id": 2,
"text": "S \\subseteq X,"
},
{
"math_id": 3,
"text": "\\left(S, \\mathcal{T}|_S\\right)"
},
{
"math_id": 4,
"text": "P."
},
{
"math_id": 5,
"text": "\\vert X \\vert"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "\\vert \\tau(X)\\vert"
},
{
"math_id": 8,
"text": "w(X)"
},
{
"math_id": 9,
"text": "d(X)"
},
{
"math_id": 10,
"text": "f \\colon [0,1]\\to X"
},
{
"math_id": 11,
"text": "p(0) = x"
},
{
"math_id": 12,
"text": "p(1) = y"
},
{
"math_id": 13,
"text": "f \\colon S^1 \\to X"
},
{
"math_id": 14,
"text": "(X,T)"
},
{
"math_id": 15,
"text": "T(d)"
},
{
"math_id": 16,
"text": "T."
},
{
"math_id": 17,
"text": "f \\colon X \\to X"
},
{
"math_id": 18,
"text": "f(x) = y."
},
{
"math_id": 19,
"text": "\\kappa"
},
{
"math_id": 20,
"text": "\\Delta(X)"
},
{
"math_id": 21,
"text": "\\Delta(X) =\n\\min\\{|G| : G\\neq \\varnothing, G\\mbox{ is open}\\}."
},
{
"math_id": 22,
"text": "X."
},
{
"math_id": 23,
"text": "D"
},
{
"math_id": 24,
"text": "X \\cong Y"
},
{
"math_id": 25,
"text": "Y"
},
{
"math_id": 26,
"text": "X = \\R"
},
{
"math_id": 27,
"text": "Y = (-\\tfrac{\\pi}{2},\\tfrac{\\pi}{2})"
},
{
"math_id": 28,
"text": "\\operatorname{arctan}\\colon X \\to Y"
}
]
| https://en.wikipedia.org/wiki?curid=1337282 |
13373259 | Smooth coarea formula | In Riemannian geometry, the smooth coarea formulas relate integrals over the domain of certain mappings with integrals over their codomains.
Let formula_0 be smooth Riemannian manifolds of respective dimensions formula_1. Let formula_2 be a smooth surjection such that the pushforward (differential) of formula_3 is surjective almost everywhere. Let formula_4 be a measurable function. Then, the following two equalities hold:
formula_5
formula_6
where formula_7 is the normal Jacobian of formula_3, i.e. the determinant of the derivative restricted to the orthogonal complement of its kernel.
Note that, by Sard's lemma, almost every point formula_8 is a regular value of formula_3 and hence the set formula_9 is a Riemannian submanifold of formula_10, so the integrals on the right-hand side of the formulas above make sense.
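As a hedged numerical check (Python with SciPy; the map, domain and integrand are convenient choices, not part of the statement above), the formulas can be verified for formula_3 given by the radius function on a disk, where the normal Jacobian equals 1 and each fibre is a circle:
<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate

# F(x, y) = sqrt(x^2 + y^2) maps the disk of radius 2 onto [0, 2]; its normal Jacobian is 1
# away from the origin (a measure-zero critical set), and F^{-1}(r) is the circle of radius r.
phi = lambda x, y: np.exp(-(x**2 + y**2))

# Left-hand side: integral of phi * NJ F over M (here NJ F = 1), as a plain area integral.
lhs, _ = integrate.dblquad(lambda y, x: phi(x, y), -2, 2,
                           lambda x: -np.sqrt(4 - x**2), lambda x: np.sqrt(4 - x**2))

# Right-hand side: integrate phi over each fibre (circumference 2*pi*r), then over N = [0, 2].
rhs, _ = integrate.quad(lambda r: 2 * np.pi * r * np.exp(-r**2), 0, 2)

print(lhs, rhs)   # both approximately pi * (1 - exp(-4)) = 3.0839...
</syntaxhighlight>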
{
"math_id": 0,
"text": "\\scriptstyle M,\\,N"
},
{
"math_id": 1,
"text": "\\scriptstyle m\\,\\geq\\, n"
},
{
"math_id": 2,
"text": "\\scriptstyle F:M\\,\\longrightarrow\\, N"
},
{
"math_id": 3,
"text": "\\scriptstyle F"
},
{
"math_id": 4,
"text": "\\scriptstyle\\varphi:M\\,\\longrightarrow\\, [0,\\infty)"
},
{
"math_id": 5,
"text": "\\int_{x\\in M}\\varphi(x)\\,dM = \\int_{y\\in N}\\int_{x\\in F^{-1}(y)}\\varphi(x)\\frac{1}{N\\!J\\;F(x)}\\,dF^{-1}(y)\\,dN"
},
{
"math_id": 6,
"text": "\\int_{x\\in M}\\varphi(x)N\\!J\\;F(x)\\,dM = \\int_{y\\in N}\\int_{x\\in F^{-1}(y)} \\varphi(x)\\,dF^{-1}(y)\\,dN"
},
{
"math_id": 7,
"text": "\\scriptstyle N\\!J\\;F(x)"
},
{
"math_id": 8,
"text": "\\scriptstyle y\\,\\in\\, N"
},
{
"math_id": 9,
"text": "\\scriptstyle F^{-1}(y)"
},
{
"math_id": 10,
"text": "\\scriptstyle M"
}
]
| https://en.wikipedia.org/wiki?curid=13373259 |
1337370 | Cross section (geometry) | Geometrical concept
In geometry and science, a cross section is the non-empty intersection of a solid body in three-dimensional space with a plane, or the analog in higher-dimensional spaces. Cutting an object into slices creates many parallel cross-sections. The boundary of a cross-section in three-dimensional space that is parallel to two of the axes, that is, parallel to the plane determined by these axes, is sometimes referred to as a contour line; for example, if a plane cuts through mountains of a raised-relief map parallel to the ground, the result is a contour line in two-dimensional space showing points on the surface of the mountains of equal elevation.
In technical drawing a cross-section, being a projection of an object onto a plane that intersects it, is a common tool used to depict the internal arrangement of a 3-dimensional object in two dimensions. It is traditionally crosshatched with the style of crosshatching often indicating the types of materials being used.
With computed axial tomography, computers can construct cross-sections from x-ray data.
Definition.
If a plane intersects a solid (a 3-dimensional object), then the region common to the plane and the solid is called a cross-section of the solid. A plane containing a cross-section of the solid may be referred to as a "cutting plane".
The shape of the cross-section of a solid may depend upon the orientation of the cutting plane to the solid. For instance, while all the cross-sections of a ball are disks, the cross-sections of a cube depend on how the cutting plane is related to the cube. If the cutting plane is perpendicular to a line joining the centers of two opposite faces of the cube, the cross-section will be a square, however, if the cutting plane is perpendicular to a diagonal of the cube joining opposite vertices, the cross-section can be either a point, a triangle or a hexagon.
Plane sections.
A related concept is that of a plane section, which is the curve of intersection of a plane with a "surface". Thus, a plane section is the boundary of a cross-section of a solid in a cutting plane.
If a surface in a three-dimensional space is defined by a function of two variables, i.e., "z" = "f"("x", "y"), the plane sections by cutting planes that are parallel to a coordinate plane (a plane determined by two coordinate axes) are called level curves or isolines.
More specifically, cutting planes with equations of the form "z" = "k" (planes parallel to the xy-plane) produce plane sections that are often called contour lines in application areas.
Mathematical examples of cross sections and plane sections.
A cross section of a polyhedron is a polygon.
The conic sections – circles, ellipses, parabolas, and hyperbolas – are plane sections of a cone with the cutting planes at various different angles, as seen in the diagram at left.
Any cross-section passing through the center of an ellipsoid forms an elliptic region, while the corresponding plane sections are ellipses on its surface. These degenerate to disks and circles, respectively, when the cutting planes are perpendicular to a symmetry axis. In more generality, the plane sections of a quadric are conic sections.
A cross-section of a solid right circular cylinder extending between two bases is a disk if the cross-section is parallel to the cylinder's base, or an elliptic region (see diagram at right) if it is neither parallel nor perpendicular to the base. If the cutting plane is perpendicular to the base, the cross-section is a rectangle (not shown), unless the plane is just tangent to the cylinder, in which case it is a single line segment.
The term cylinder can also mean the lateral surface of a solid cylinder (see cylinder (geometry)). If a cylinder is used in this sense, the above paragraph would read as follows: A plane section of a right circular cylinder of finite length is a circle if the cutting plane is perpendicular to the cylinder's axis of symmetry, or an ellipse if it is neither parallel nor perpendicular to that axis. If the cutting plane is parallel to the axis the plane section consists of a pair of parallel line segments unless the cutting plane is tangent to the cylinder, in which case, the plane section is a single line segment.
A plane section can be used to visualize the partial derivative of a function with respect to one of its arguments, as shown. Suppose "z" = "f"("x", "y"). In taking the partial derivative of "f"("x", "y") with respect to "x", one can take a plane section of the function "f" at a fixed value of "y" to plot the level curve of "z" solely against "x"; then the partial derivative with respect to "x" is the slope of the resulting two-dimensional graph.
In related subjects.
A plane section of a probability density function of two random variables in which the cutting plane is at a fixed value of one of the variables is a conditional density function of the other variable (conditional on the fixed value defining the plane section). If instead the plane section is taken for a fixed value of the density, the result is an iso-density contour. For the normal distribution, these contours are ellipses.
In economics, a production function "f"("x", "y") specifies the output that can be produced by various quantities "x" and "y" of inputs, typically labor and physical capital. The production function of a firm or a society can be plotted in three-dimensional space. If a plane section is taken parallel to the "xy"-plane, the result is an isoquant showing the various combinations of labor and capital usage that would result in the level of output given by the height of the plane section. Alternatively, if a plane section of the production function is taken at a fixed level of "y"—that is, parallel to the "xz"-plane—then the result is a two-dimensional graph showing how much output can be produced at each of various values of usage "x" of one input combined with the fixed value of the other input "y".
Also in economics, a cardinal or ordinal utility function "u"("w", "v") gives the degree of satisfaction of a consumer obtained by consuming quantities "w" and "v" of two goods. If a plane section of the utility function is taken at a given height (level of utility), the two-dimensional result is an indifference curve showing various alternative combinations of consumed amounts "w" and "v" of the two goods all of which give the specified level of utility.
Area and volume.
Cavalieri's principle states that solids with corresponding cross-sections of equal areas have equal volumes.
The cross-sectional area (formula_0) of an object when viewed from a particular angle is the total area of the orthographic projection of the object from that angle. For example, a cylinder of height "h" and radius "r" has formula_1 when viewed along its central axis, and formula_2 when viewed from an orthogonal direction. A sphere of radius "r" has formula_1 when viewed from any angle. More generically, formula_0 can be calculated by evaluating the following surface integral:
formula_3
where formula_4 is the unit vector pointing along the viewing direction toward the viewer, formula_5 is a surface element with an outward-pointing normal, and the integral is taken only over the top-most surface, that part of the surface that is "visible" from the perspective of the viewer. For a convex body, each ray through the object from the viewer's perspective crosses just two surfaces. For such objects, the integral may be taken over the entire surface (formula_6) by taking the absolute value of the integrand (so that the "top" and "bottom" of the object do not subtract away, as would be required by the Divergence Theorem applied to the constant vector field formula_4) and dividing by two:
formula_7
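As a hedged numerical check of the convex-body form of this integral (Python with SciPy; the unit sphere and the z viewing direction are arbitrary choices), the result for a sphere of radius 1 should reduce to formula_1 with "r" = 1, i.e. π:
<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate

# Unit sphere, viewing direction along z. In spherical coordinates the projected surface
# element is |dA . z_hat| = |cos(theta)| * sin(theta) dtheta dphi.
integrand = lambda theta, phi: abs(np.cos(theta)) * np.sin(theta)

total, _ = integrate.dblquad(integrand, 0, 2 * np.pi, 0, np.pi)   # integral over the whole surface
print(0.5 * total)    # half of the full surface integral, approximately 3.14159 = pi * r^2
</syntaxhighlight>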
In higher dimensions.
In analogy with the cross-section of a solid, the cross-section of an n-dimensional body in an n-dimensional space is the non-empty intersection of the body with a hyperplane (an ("n" − 1)-dimensional subspace). This concept has sometimes been used to help visualize aspects of higher dimensional spaces. For instance, if a four-dimensional object passed through our three-dimensional space, we would see a three-dimensional cross-section of the four-dimensional object. In particular, a 4-ball (hypersphere) passing through 3-space would appear as a 3-ball that increased to a maximum and then decreased in size during the transition. This dynamic object (from the point of view of 3-space) is a sequence of cross-sections of the 4-ball.
Examples in science.
In geology, the structure of the interior of a planet is often illustrated using a diagram of a cross-section of the planet that passes through the planet's center, as in the cross-section of Earth at right.
Cross-sections are often used in anatomy to illustrate the inner structure of an organ, as shown at the left.
A cross-section of a tree trunk, as shown at left, reveals growth rings that can be used to find the age of the tree and the temporal properties of its environment.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A'"
},
{
"math_id": 1,
"text": "A' = \\pi r^2"
},
{
"math_id": 2,
"text": "A' = 2 rh"
},
{
"math_id": 3,
"text": " A' = \\iint \\limits_\\mathrm{top} d\\mathbf{A} \\cdot \\mathbf{\\hat{r}}, "
},
{
"math_id": 4,
"text": "\\mathbf{\\hat{r}}"
},
{
"math_id": 5,
"text": "d\\mathbf{A}"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": " A' = \\frac{1}{2} \\iint \\limits_A | d\\mathbf{A} \\cdot \\mathbf{\\hat{r}}| "
}
]
| https://en.wikipedia.org/wiki?curid=1337370 |
1337505 | Space diagonal | In geometry, a space diagonal (also interior diagonal or body diagonal) of a polyhedron is a line connecting two vertices that are not on the same face. Space diagonals contrast with "face diagonals", which connect vertices on the same face (but not on the same edge) as each other.
For example, a pyramid has no space diagonals, while a cube (shown at right) or more generally a parallelepiped has four space diagonals.
Axial diagonal.
An axial diagonal is a space diagonal that passes through the center of a polyhedron.
For example, in a cube with edge length "a", all four space diagonals are axial diagonals, of common length formula_0 More generally, a cuboid with edge lengths "a", "b", and "c" has all four space diagonals axial, with common length formula_1
A regular octahedron has 3 axial diagonals, of length formula_2, with edge length "a".
A regular icosahedron has 6 axial diagonals of length formula_3, where formula_4 is the golden ratio formula_5.
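A minimal sketch (Python; the sample edge lengths are arbitrary) of the cube and cuboid formulas above:
<syntaxhighlight lang="python">
import math

def cuboid_space_diagonal(a, b, c):
    """Length of a space diagonal of a cuboid with edge lengths a, b and c."""
    return math.sqrt(a**2 + b**2 + c**2)

print(cuboid_space_diagonal(1, 1, 1))   # cube with a = 1: sqrt(3) = 1.732...
print(cuboid_space_diagonal(2, 3, 6))   # illustrative cuboid: exactly 7.0
</syntaxhighlight>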
Space diagonals of magic cubes.
A magic square is an arrangement of numbers in a square grid so that the sum of the numbers along every row, column, and diagonal is the same. Similarly, one may define a magic cube to be an arrangement of numbers in a cubical grid so that the sum of the numbers on the four space diagonals is the same as the sum of the numbers in each row, each column, and each pillar.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a\\sqrt {3}."
},
{
"math_id": 1,
"text": "\\sqrt{a^2+b^2+c^2}. "
},
{
"math_id": 2,
"text": "a\\sqrt {2}"
},
{
"math_id": 3,
"text": "a\\sqrt {2+\\varphi}"
},
{
"math_id": 4,
"text": "\\varphi"
},
{
"math_id": 5,
"text": "(1+\\sqrt 5)/2"
}
]
| https://en.wikipedia.org/wiki?curid=1337505 |
1337587 | Semisimple Lie algebra | Direct sum of simple Lie algebras
In mathematics, a Lie algebra is semisimple if it is a direct sum of simple Lie algebras. (A simple Lie algebra is a non-abelian Lie algebra without any non-zero proper ideals.)
Throughout the article, unless otherwise stated, a Lie algebra is a finite-dimensional Lie algebra over a field of characteristic 0. For such a Lie algebra formula_0, if nonzero, the following conditions are equivalent:
Significance.
The significance of semisimplicity comes firstly from the Levi decomposition, which states that every finite dimensional Lie algebra is the semidirect product of a solvable ideal (its radical) and a semisimple algebra. In particular, there is no nonzero Lie algebra that is both solvable and semisimple.
Semisimple Lie algebras have a very elegant classification, in stark contrast to solvable Lie algebras. Semisimple Lie algebras over an algebraically closed field of characteristic zero are completely classified by their root system, which are in turn classified by Dynkin diagrams. Semisimple algebras over non-algebraically closed fields can be understood in terms of those over the algebraic closure, though the classification is somewhat more intricate; see real form for the case of real semisimple Lie algebras, which were classified by Élie Cartan.
Further, the representation theory of semisimple Lie algebras is much cleaner than that for general Lie algebras. For example, the Jordan decomposition in a semisimple Lie algebra coincides with the Jordan decomposition in its representation; this is not the case for Lie algebras in general.
If formula_0 is semisimple, then formula_1. In particular, every linear semisimple Lie algebra is a subalgebra of formula_2, the special linear Lie algebra. The study of the structure of formula_2 constitutes an important part of the representation theory for semisimple Lie algebras.
History.
The semisimple Lie algebras over the complex numbers were first classified by Wilhelm Killing (1888–90), though his proof lacked rigor. His proof was made rigorous by Élie Cartan (1894) in his Ph.D. thesis; Cartan also classified semisimple real Lie algebras. This was subsequently refined, and the present classification by Dynkin diagrams was given by the then 22-year-old Eugene Dynkin in 1947. Some minor modifications have been made (notably by J. P. Serre), but the proof is unchanged in its essentials and can be found in any standard reference, such as .
Jordan decomposition.
Each endomorphism "x" of a finite-dimensional vector space over a field of characteristic zero can be decomposed uniquely into a semisimple (i.e., diagonalizable over the algebraic closure) and nilpotent part
formula_10
such that "s" and "n" commute with each other. Moreover, each of "s" and "n" is a polynomial in "x". This is the Jordan decomposition of "x".
The above applies to the adjoint representation formula_3 of a semisimple Lie algebra formula_0. An element "x" of formula_0 is said to be semisimple (resp. nilpotent) if formula_11 is a semisimple (resp. nilpotent) operator. If formula_12, then the abstract Jordan decomposition states that "x" can be written uniquely as:
formula_13
where formula_14 is semisimple, formula_15 is nilpotent and formula_16. Moreover, if formula_17 commutes with "x", then it commutes with both formula_18 as well.
The abstract Jordan decomposition factors through any representation of formula_0 in the sense that given any representation ρ,
formula_19
is the Jordan decomposition of ρ("x") in the endomorphism algebra of the representation space. (This is proved as a consequence of Weyl's complete reducibility theorem; see .)
Structure.
Let formula_0 be a (finite-dimensional) semisimple Lie algebra over an algebraically closed field of characteristic zero. The structure of formula_0 can be described by an adjoint action of a certain distinguished subalgebra on it, a Cartan subalgebra. By definition, a Cartan subalgebra (also called a maximal toral subalgebra) formula_20 of formula_0 is a maximal subalgebra such that, for each formula_21, formula_22 is diagonalizable. As it turns out, formula_20 is abelian and so all the operators in formula_23 are simultaneously diagonalizable. For each linear functional formula_24 of formula_20, let
formula_25.
(Note that formula_26 is the centralizer of formula_20.) Then
Let formula_33 with the commutation relations formula_34; i.e., the formula_35 correspond to the standard basis of formula_31.
The linear functionals in formula_28 are called the roots of formula_0 relative to formula_20. The roots span formula_36 (since if formula_37, then formula_22 is the zero operator; i.e., formula_38 is in the center, which is zero.) Moreover, from the representation theory of formula_31, one deduces the following symmetry and integral properties of formula_28: for each formula_29,
Note that formula_39 has the properties (1) formula_40 and (2) the fixed-point set is formula_41, which means that formula_39 is the reflection with respect to the hyperplane corresponding to formula_24. The above then says that formula_28 is a root system.
It follows from the general theory of a root system that formula_28 contains a basis formula_42 of formula_43 such that each root is a linear combination of formula_42 with integer coefficients of the same sign; the roots formula_44 are called simple roots. Let formula_45, etc. Then the formula_46 elements formula_47 (called Chevalley generators) generate formula_0 as a Lie algebra. Moreover, they satisfy the relations (called Serre relations): with formula_48,
formula_49
formula_50
formula_51
formula_52.
The converse of this is also true: i.e., the Lie algebra generated by the generators and the relations like the above is a (finite-dimensional) semisimple Lie algebra that has the root space decomposition as above (provided the formula_53 is a Cartan matrix). This is a theorem of Serre. In particular, two semisimple Lie algebras are isomorphic if they have the same root system.
The implication of the axiomatic nature of a root system and Serre's theorem is that one can enumerate all possible root systems; hence, "all possible" semisimple Lie algebras (finite-dimensional over an algebraically closed field of characteristic zero).
The Weyl group is the group of linear transformations of formula_54 generated by the formula_55's. The Weyl group is an important symmetry of the problem; for example, the weights of any finite-dimensional representation of formula_5 are invariant under the Weyl group.
Example root space decomposition in sln(C).
For formula_56 and the Cartan subalgebra formula_27 of diagonal matrices, define formula_57 by
formula_58,
where formula_59 denotes the diagonal matrix with formula_60 on the diagonal. Then the decomposition is given by
formula_61
where
formula_62
for the vector formula_63 in formula_64 with the standard (matrix) basis, meaning formula_63 represents the basis vector in the formula_65-th row and formula_66-th column. This decomposition of formula_5 has an associated root system:
formula_67
sl2(C).
For example, in formula_68 the decomposition is
formula_69
and the associated root system is
formula_70
sl3(C).
In formula_71 the decomposition is
formula_72
and the associated root system is given by
formula_73
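A hedged numerical check of this decomposition (Python with NumPy; the particular Cartan element is an arbitrary trace-zero diagonal matrix), confirming that each matrix unit spans a root space for the root λi − λj:
<syntaxhighlight lang="python">
import numpy as np

n = 3   # sl_3(C), as in the example above

def E(i, j):
    """Matrix unit e_ij: 1 in row i, column j (0-indexed), zeros elsewhere."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

a = np.array([2.0, -0.5, -1.5])   # arbitrary trace-zero diagonal entries
h = np.diag(a)                    # an element of the Cartan subalgebra of diagonal matrices

for i in range(n):
    for j in range(n):
        if i != j:
            bracket = h @ E(i, j) - E(i, j) @ h        # ad(h) applied to e_ij
            expected = (a[i] - a[j]) * E(i, j)         # (lambda_i - lambda_j)(h) times e_ij
            assert np.allclose(bracket, expected)
print("each e_ij spans the root space of the root lambda_i - lambda_j")
</syntaxhighlight>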
Examples.
As noted in #Structure, semisimple Lie algebras over formula_74 (or more generally an algebraically closed field of characteristic zero) are classified by the root system associated to their Cartan subalgebras, and the root systems, in turn, are classified by their Dynkin diagrams.
Examples of semisimple Lie algebras, the classical Lie algebras, with notation coming from their Dynkin diagrams, are:
The restriction formula_83 in the formula_84 family is needed because formula_85 is one-dimensional and commutative and therefore not semisimple.
These Lie algebras are numbered so that "n" is the rank. Almost all of these semisimple Lie algebras are actually simple and the members of these families are almost all distinct, except for some collisions in small rank. For example formula_86 and formula_87. These four families, together with five exceptions (E6, E7, E8, F4, and G2), are in fact the "only" simple Lie algebras over the complex numbers.
Classification.
Every semisimple Lie algebra over an algebraically closed field of characteristic 0 is a direct sum of simple Lie algebras (by definition), and the finite-dimensional simple Lie algebras fall in four families – An, Bn, Cn, and Dn – with five exceptions
E6, E7, E8, F4, and G2. Simple Lie algebras are classified by the connected Dynkin diagrams, shown on the right, while semisimple Lie algebras correspond to not necessarily connected Dynkin diagrams, where each component of the diagram corresponds to a summand of the decomposition of the semisimple Lie algebra into simple Lie algebras.
The classification proceeds by considering a Cartan subalgebra (see below) and its adjoint action on the Lie algebra. The root system of the action then both determines the original Lie algebra and must have a very constrained form, which can be classified by the Dynkin diagrams. See the section below describing Cartan subalgebras and root systems for more details.
The classification is widely considered one of the most elegant results in mathematics – a brief list of axioms yields, via a relatively short proof, a complete but non-trivial classification with surprising structure. This should be compared to the classification of finite simple groups, which is significantly more complicated.
The enumeration of the four families is non-redundant and consists only of simple algebras if formula_88 for An, formula_89 for Bn, formula_90 for Cn, and formula_91 for Dn. If one starts numbering lower, the enumeration is redundant, and one has exceptional isomorphisms between simple Lie algebras, which are reflected in isomorphisms of Dynkin diagrams; the En can also be extended down, but below E6 are isomorphic to other, non-exceptional algebras.
Over a non-algebraically closed field, the classification is more complicated – one classifies simple Lie algebras over the algebraic closure, then for each of these, one classifies simple Lie algebras over the original field which have this form (over the closure). For example, to classify simple real Lie algebras, one classifies real Lie algebras with a given complexification, which are known as real forms of the complex Lie algebra; this can be done by Satake diagrams, which are Dynkin diagrams with additional data ("decorations").
Representation theory of semisimple Lie algebras.
Let formula_0 be a (finite-dimensional) semisimple Lie algebra over an algebraically closed field of characteristic zero. Then, as in #Structure, formula_92 where formula_28 is the root system. Choose the simple roots in formula_28; a root formula_24 of formula_28 is then called positive and is denoted by formula_93 if it is a linear combination of the simple roots with non-negative integer coefficients. Let formula_94, which is a maximal solvable subalgebra of formula_0, the Borel subalgebra.
Let "V" be a (possibly-infinite-dimensional) simple formula_0-module. If "V" happens to admit a formula_95-weight vector formula_96, then it is unique up to scaling and is called the highest weight vector of "V". It is also an formula_20-weight vector and the formula_20-weight of formula_96, a linear functional of formula_20, is called the highest weight of "V". The basic yet nontrivial facts then are (1) to each linear functional formula_97, there exists a simple formula_0-module formula_98 having formula_99 as its highest weight and (2) two simple modules having the same highest weight are equivalent. In short, there exists a bijection between formula_36 and the set of the equivalence classes of simple formula_0-modules admitting a Borel-weight vector.
For applications, one is often interested in a finite-dimensional simple formula_0-module (a finite-dimensional irreducible representation). This is especially the case when formula_0 is the Lie algebra of a Lie group (or complexification of such), since, via the Lie correspondence, a Lie algebra representation can be integrated to a Lie group representation when the obstructions are overcome. The next criterion then addresses this need: by the positive Weyl chamber formula_100, we mean the convex cone formula_101 where formula_102 is a unique vector such that formula_103. The criterion then reads:
A linear functional formula_99 satisfying the above equivalent condition is called a dominant integral weight. Hence, in summary, there exists a bijection between the dominant integral weights and the equivalence classes of finite-dimensional simple formula_0-modules, a result known as the theorem of the highest weight. The character of a finite-dimensional simple module is in turn computed by the Weyl character formula.
The theorem due to Weyl says that, over a field of characteristic zero, every finite-dimensional module of a semisimple Lie algebra formula_0 is completely reducible; i.e., it is a direct sum of simple formula_0-modules. Hence, the above results then apply to finite-dimensional representations of a semisimple Lie algebra.
Real semisimple Lie algebra.
For a semisimple Lie algebra over a field that has characteristic zero but is not algebraically closed, there is no general structure theory like the one for those over an algebraically closed field of characteristic zero. But over the field of real numbers, there are still the structure results.
Let formula_0 be a finite-dimensional real semisimple Lie algebra and formula_107 the complexification of it (which is again semisimple). The real Lie algebra formula_0 is called a real form of formula_108. A real form is called a compact form if the Killing form on it is negative-definite; it is necessarily the Lie algebra of a compact Lie group (hence, the name).
Compact case.
Suppose formula_0 is a compact form and formula_109 a maximal abelian subspace. One can show (for example, from the fact formula_0 is the Lie algebra of a compact Lie group) that formula_23 consists of skew-Hermitian matrices, diagonalizable over formula_74 with imaginary eigenvalues. Hence, formula_110 is a Cartan subalgebra of formula_108 and there results in the root space decomposition (cf. #Structure)
formula_111
where each formula_112 is real-valued on formula_113; thus, can be identified with a real-linear functional on the real vector space formula_113.
For example, let formula_114 and take formula_109 the subspace of all diagonal matrices. Note formula_115. Let formula_116 be the linear functional on formula_117 given by formula_118 for formula_119. Then for each formula_120,
formula_121
where formula_122 is the matrix that has 1 on the formula_123-th spot and zero elsewhere. Hence, each root formula_24 is of the form formula_124 and the root space decomposition is the decomposition of matrices:
formula_125
Noncompact case.
Suppose formula_0 is not necessarily a compact form (i.e., the signature of the Killing form is not all negative). Suppose, moreover, it has a Cartan involution formula_126 and let formula_127 be the eigenspace decomposition of formula_126, where formula_128 are the eigenspaces for 1 and -1, respectively. For example, if formula_129 and formula_126 the negative transpose, then formula_130.
Let formula_131 be a maximal abelian subspace. Now, formula_132 consists of symmetric matrices (with respect to a suitable inner product) and thus the operators in formula_133 are simultaneously diagonalizable, with real eigenvalues. By repeating the arguments for the algebraically closed base field, one obtains the decomposition (called the restricted root space decomposition):
formula_134
where
Moreover, formula_28 is a root system but not necessarily reduced one (i.e., it can happen formula_138 are both roots).
The case of sl(n,C).
If formula_139, then formula_27 may be taken to be the diagonal subalgebra of formula_5, consisting of diagonal matrices whose diagonal entries sum to zero. Since formula_27 has dimension formula_140, we see that formula_141 has rank formula_140.
The root vectors formula_142 in this case may be taken to be the matrices formula_143 with formula_144, where formula_143 is the matrix with a 1 in the formula_145 spot and zeros elsewhere. If formula_146 is a diagonal matrix with diagonal entries formula_147, then we have
formula_148.
Thus, the roots for formula_149 are the linear functionals formula_150 given by
formula_151.
After identifying formula_27 with its dual, the roots become the vectors formula_152 in the space of formula_15-tuples that sum to zero. This is the root system known as formula_153 in the conventional labeling.
The reflection associated to the root formula_150 acts on formula_27 by transposing the formula_65 and formula_66 diagonal entries. The Weyl group is then just the permutation group on formula_15 elements, acting by permuting the diagonal entries of matrices in formula_27.
Generalizations.
Semisimple Lie algebras admit certain generalizations. Firstly, many statements that are true for semisimple Lie algebras are true more generally for reductive Lie algebras. Abstractly, a reductive Lie algebra is one whose adjoint representation is completely reducible, while concretely, a reductive Lie algebra is a direct sum of a semisimple Lie algebra and an abelian Lie algebra; for example, formula_154 is semisimple, and formula_155 is reductive. Many properties of semisimple Lie algebras depend only on reducibility.
Many properties of complex semisimple/reductive Lie algebras are true not only for semisimple/reductive Lie algebras over algebraically closed fields, but more generally for split semisimple/reductive Lie algebras over other fields: semisimple/reductive Lie algebras over algebraically closed fields are always split, but over other fields this is not always the case. Split Lie algebras have essentially the same representation theory as semisimple Lie algebras over algebraically closed fields, for instance, the splitting Cartan subalgebra playing the same role as the Cartan subalgebra plays over algebraically closed fields. This is the approach followed in , for instance, which classifies representations of split semisimple/reductive Lie algebras.
Semisimple and reductive groups.
A connected Lie group is called semisimple if its Lie algebra is a semisimple Lie algebra, i.e. a direct sum of simple Lie algebras. It is called reductive if its Lie algebra is a direct sum of simple and trivial (one-dimensional) Lie algebras. Reductive groups occur naturally as symmetries of a number of mathematical objects in algebra, geometry, and physics. For example, the group formula_156 of symmetries of an "n"-dimensional real vector space (equivalently, the group of invertible matrices) is reductive.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak g"
},
{
"math_id": 1,
"text": "\\mathfrak g = [\\mathfrak g, \\mathfrak g]"
},
{
"math_id": 2,
"text": "\\mathfrak{sl}"
},
{
"math_id": 3,
"text": "\\operatorname{ad}"
},
{
"math_id": 4,
"text": "\\operatorname{Der}(\\mathfrak g)"
},
{
"math_id": 5,
"text": "\\mathfrak{g}"
},
{
"math_id": 6,
"text": "\\operatorname{ad}: \\mathfrak{g} \\overset{\\sim}\\to \\operatorname{Der}(\\mathfrak g)"
},
{
"math_id": 7,
"text": "\\mathfrak g/[\\mathfrak g, \\mathfrak g]"
},
{
"math_id": 8,
"text": "\\mathfrak{g} \\otimes_k F"
},
{
"math_id": 9,
"text": "F \\supset k"
},
{
"math_id": 10,
"text": "x=s+n\\ "
},
{
"math_id": 11,
"text": "\\operatorname{ad}(x)"
},
{
"math_id": 12,
"text": "x\\in\\mathfrak g"
},
{
"math_id": 13,
"text": "x = s + n"
},
{
"math_id": 14,
"text": "s"
},
{
"math_id": 15,
"text": "n"
},
{
"math_id": 16,
"text": "[s, n] = 0"
},
{
"math_id": 17,
"text": "y \\in \\mathfrak g"
},
{
"math_id": 18,
"text": "s, n"
},
{
"math_id": 19,
"text": "\\rho(x) = \\rho(s) + \\rho(n)\\,"
},
{
"math_id": 20,
"text": "\\mathfrak h"
},
{
"math_id": 21,
"text": "h \\in \\mathfrak h"
},
{
"math_id": 22,
"text": "\\operatorname{ad}(h)"
},
{
"math_id": 23,
"text": "\\operatorname{ad}(\\mathfrak h)"
},
{
"math_id": 24,
"text": "\\alpha"
},
{
"math_id": 25,
"text": "\\mathfrak{g}_{\\alpha} = \\{ x \\in \\mathfrak{g} | \\operatorname{ad}(h) x := [h, x] = \\alpha(h) x \\, \\text{ for all } h \\in \\mathfrak h \\}"
},
{
"math_id": 26,
"text": "\\mathfrak{g}_0"
},
{
"math_id": 27,
"text": "\\mathfrak{h}"
},
{
"math_id": 28,
"text": "\\Phi"
},
{
"math_id": 29,
"text": "\\alpha, \\beta \\in \\Phi"
},
{
"math_id": 30,
"text": "\\dim \\mathfrak{g}_{\\alpha} = 1"
},
{
"math_id": 31,
"text": "\\mathfrak{sl}_2"
},
{
"math_id": 32,
"text": "\\dim \\mathfrak g < \\infty"
},
{
"math_id": 33,
"text": "h_{\\alpha} \\in \\mathfrak{h}, e_{\\alpha} \\in \\mathfrak{g}_{\\alpha}, f_{\\alpha} \\in \\mathfrak{g}_{-\\alpha}"
},
{
"math_id": 34,
"text": "[e_{\\alpha}, f_{\\alpha}] = h_{\\alpha}, [h_{\\alpha}, e_{\\alpha}] = 2e_{\\alpha}, [h_{\\alpha}, f_{\\alpha}] = -2f_{\\alpha}"
},
{
"math_id": 35,
"text": "h_{\\alpha}, e_{\\alpha}, f_{\\alpha}"
},
{
"math_id": 36,
"text": "\\mathfrak h^*"
},
{
"math_id": 37,
"text": "\\alpha(h) = 0, \\alpha \\in \\Phi"
},
{
"math_id": 38,
"text": "h"
},
{
"math_id": 39,
"text": "s_{\\alpha}"
},
{
"math_id": 40,
"text": "s_{\\alpha}(\\alpha) = -\\alpha"
},
{
"math_id": 41,
"text": "\\{ \\gamma \\in \\mathfrak{h}^* | \\gamma(h_\\alpha) = 0 \\}"
},
{
"math_id": 42,
"text": "\\alpha_1, \\dots, \\alpha_l"
},
{
"math_id": 43,
"text": "\\mathfrak{h}^*"
},
{
"math_id": 44,
"text": "\\alpha_i"
},
{
"math_id": 45,
"text": "e_i = e_{\\alpha_i}"
},
{
"math_id": 46,
"text": "3l"
},
{
"math_id": 47,
"text": "e_i, f_i, h_i"
},
{
"math_id": 48,
"text": "a_{ij} = \\alpha_j(h_i)"
},
{
"math_id": 49,
"text": "[h_i, h_j] = 0,"
},
{
"math_id": 50,
"text": "[e_i, f_i] = h_i, [e_i, f_j] = 0, i \\ne j,"
},
{
"math_id": 51,
"text": "[h_i, e_j] = a_{ij} e_j, [h_i, f_j] = -a_{ij} f_j,"
},
{
"math_id": 52,
"text": "\\operatorname{ad}(e_i)^{-a_{ij} + 1}(e_j) = \\operatorname{ad}(f_i)^{-a_{ij} + 1}(f_j) = 0, i \\ne j"
},
{
"math_id": 53,
"text": "[a_{ij}]_{1 \\le i, j \\le l}"
},
{
"math_id": 54,
"text": "\\mathfrak{h}^* \\simeq \\mathfrak{h}"
},
{
"math_id": 55,
"text": "s_\\alpha"
},
{
"math_id": 56,
"text": "\\mathfrak{g} = \\mathfrak{sl}_n(\\mathbb{C})\n"
},
{
"math_id": 57,
"text": "\\lambda_i \\in \\mathfrak{h}^*"
},
{
"math_id": 58,
"text": "\\lambda_i(d(a_1,\\ldots, a_n)) = a_i"
},
{
"math_id": 59,
"text": "d(a_1,\\ldots, a_n)"
},
{
"math_id": 60,
"text": "a_1,\\ldots, a_n"
},
{
"math_id": 61,
"text": "\\mathfrak{g} = \\mathfrak{h}\\oplus \\left( \\bigoplus_{i \\neq j} \\mathfrak{g}_{\\lambda_i - \\lambda_j} \\right)"
},
{
"math_id": 62,
"text": "\\mathfrak{g}_{\\lambda_i - \\lambda_j} = \\text{Span}_\\mathbb{C}(e_{ij})"
},
{
"math_id": 63,
"text": "e_{ij}"
},
{
"math_id": 64,
"text": "\\mathfrak{sl}_n(\\mathbb{C})"
},
{
"math_id": 65,
"text": "i"
},
{
"math_id": 66,
"text": "j"
},
{
"math_id": 67,
"text": "\\Phi = \\{ \\lambda_i - \\lambda_j : i \\neq j \\}"
},
{
"math_id": 68,
"text": "\\mathfrak{sl}_2(\\mathbb{C})"
},
{
"math_id": 69,
"text": "\\mathfrak{sl}_2= \\mathfrak{h}\\oplus\n\\mathfrak{g}_{\\lambda_1 - \\lambda_2}\\oplus\n\\mathfrak{g}_{\\lambda_2 - \\lambda_1}"
},
{
"math_id": 70,
"text": "\\Phi = \\{\\lambda_1 - \\lambda_2, \\lambda_2 - \\lambda_1 \\}"
},
{
"math_id": 71,
"text": "\\mathfrak{sl}_3(\\mathbb{C})"
},
{
"math_id": 72,
"text": "\\mathfrak{sl}_3 = \\mathfrak{h} \\oplus \\mathfrak{g}_{\\lambda_1 - \\lambda_2}\n\\oplus \\mathfrak{g}_{\\lambda_1 - \\lambda_3}\n\\oplus \\mathfrak{g}_{\\lambda_2 - \\lambda_3}\n\n\\oplus \\mathfrak{g}_{\\lambda_2 - \\lambda_1}\n\\oplus \\mathfrak{g}_{\\lambda_3 - \\lambda_1}\n\\oplus \\mathfrak{g}_{\\lambda_3 - \\lambda_2}\n"
},
{
"math_id": 73,
"text": "\\Phi = \\{\\pm(\\lambda_1 - \\lambda_2),\\pm(\\lambda_1 - \\lambda_3),\\pm(\\lambda_2 - \\lambda_3) \\}"
},
{
"math_id": 74,
"text": "\\mathbb{C}"
},
{
"math_id": 75,
"text": "A_n:"
},
{
"math_id": 76,
"text": "\\mathfrak {sl}_{n+1}"
},
{
"math_id": 77,
"text": "B_n:"
},
{
"math_id": 78,
"text": "\\mathfrak{so}_{2n+1}"
},
{
"math_id": 79,
"text": "C_n:"
},
{
"math_id": 80,
"text": "\\mathfrak {sp}_{2n}"
},
{
"math_id": 81,
"text": "D_n:"
},
{
"math_id": 82,
"text": "\\mathfrak{so}_{2n}"
},
{
"math_id": 83,
"text": "n>1"
},
{
"math_id": 84,
"text": "D_n"
},
{
"math_id": 85,
"text": "\\mathfrak{so}_{2}"
},
{
"math_id": 86,
"text": "\\mathfrak{so}_{4} \\cong \\mathfrak{so}_{3} \\oplus \\mathfrak{so}_{3} "
},
{
"math_id": 87,
"text": "\\mathfrak{sp}_{2} \\cong \\mathfrak{so}_{5}"
},
{
"math_id": 88,
"text": "n \\geq 1"
},
{
"math_id": 89,
"text": "n \\geq 2"
},
{
"math_id": 90,
"text": "n \\geq 3"
},
{
"math_id": 91,
"text": "n \\geq 4"
},
{
"math_id": 92,
"text": "\\mathfrak g = \\mathfrak h \\oplus \\bigoplus_{\\alpha \\in \\Phi} \\mathfrak g_{\\alpha}"
},
{
"math_id": 93,
"text": "\\alpha > 0"
},
{
"math_id": 94,
"text": "\\mathfrak b = \\mathfrak h \\oplus \\bigoplus_{\\alpha > 0} \\mathfrak g_{\\alpha}"
},
{
"math_id": 95,
"text": "\\mathfrak b"
},
{
"math_id": 96,
"text": "v_0"
},
{
"math_id": 97,
"text": "\\mu \\in \\mathfrak h^*"
},
{
"math_id": 98,
"text": "V^{\\mu}"
},
{
"math_id": 99,
"text": "\\mu"
},
{
"math_id": 100,
"text": "C \\subset \\mathfrak{h}^*"
},
{
"math_id": 101,
"text": "C = \\{ \\mu \\in \\mathfrak{h}^* | \\mu(h_{\\alpha}) \\ge 0, \\alpha \\in \\Phi > 0 \\}"
},
{
"math_id": 102,
"text": "h_{\\alpha} \\in [\\mathfrak g_{\\alpha}, \\mathfrak g_{-\\alpha}]"
},
{
"math_id": 103,
"text": "\\alpha(h_{\\alpha}) = 2"
},
{
"math_id": 104,
"text": "\\dim V^{\\mu} < \\infty"
},
{
"math_id": 105,
"text": "\\mu(h_{\\alpha})"
},
{
"math_id": 106,
"text": "C"
},
{
"math_id": 107,
"text": "\\mathfrak{g}^{\\mathbb{C}} = \\mathfrak{g} \\otimes_{\\mathbb{R}} \\mathbb{C}"
},
{
"math_id": 108,
"text": "\\mathfrak{g}^{\\mathbb{C}}"
},
{
"math_id": 109,
"text": "\\mathfrak h \\subset \\mathfrak g"
},
{
"math_id": 110,
"text": "\\mathfrak h^{\\mathbb{C}}"
},
{
"math_id": 111,
"text": "\\mathfrak{g}^{\\mathbb{C}} = \\mathfrak{h}^{\\mathbb{C}} \\oplus \\bigoplus_{\\alpha \\in \\Phi} \\mathfrak{g}_{\\alpha}"
},
{
"math_id": 112,
"text": "\\alpha \\in \\Phi"
},
{
"math_id": 113,
"text": "i \\mathfrak{h}"
},
{
"math_id": 114,
"text": "\\mathfrak{g} = \\mathfrak{su}(n)"
},
{
"math_id": 115,
"text": "\\mathfrak{g}^{\\mathbb{C}} = \\mathfrak{sl}_n \\mathbb{C}"
},
{
"math_id": 116,
"text": "e_i"
},
{
"math_id": 117,
"text": "\\mathfrak{h}^{\\mathbb{C}}"
},
{
"math_id": 118,
"text": "e_i(H) = h_i"
},
{
"math_id": 119,
"text": "H = \\operatorname{diag}(h_1, \\dots, h_n)"
},
{
"math_id": 120,
"text": "H \\in \\mathfrak{h}^{\\mathbb{C}}"
},
{
"math_id": 121,
"text": "[H, E_{ij}] = (e_i(H) - e_j(H)) E_{ij}"
},
{
"math_id": 122,
"text": "E_{ij}"
},
{
"math_id": 123,
"text": "(i, j)"
},
{
"math_id": 124,
"text": "\\alpha = e_i - e_j, i \\ne j"
},
{
"math_id": 125,
"text": "\\mathfrak{g}^{\\mathbb{C}} = \\mathfrak{h}^{\\mathbb{C}} \\oplus \\bigoplus_{i \\ne j} \\mathbb{C} E_{ij}."
},
{
"math_id": 126,
"text": "\\theta"
},
{
"math_id": 127,
"text": "\\mathfrak g = \\mathfrak k \\oplus \\mathfrak p"
},
{
"math_id": 128,
"text": "\\mathfrak k, \\mathfrak p"
},
{
"math_id": 129,
"text": "\\mathfrak g = \\mathfrak{sl}_n \\mathbb{R}"
},
{
"math_id": 130,
"text": "\\mathfrak k = \\mathfrak{so}(n)"
},
{
"math_id": 131,
"text": "\\mathfrak a \\subset \\mathfrak p"
},
{
"math_id": 132,
"text": "\\operatorname{ad}(\\mathfrak p)"
},
{
"math_id": 133,
"text": "\\operatorname{ad}(\\mathfrak a)"
},
{
"math_id": 134,
"text": "\\mathfrak g = \\mathfrak g_0 \\oplus \\bigoplus_{\\alpha \\in \\Phi} \\mathfrak{g}_{\\alpha}"
},
{
"math_id": 135,
"text": "\\theta(\\mathfrak{g}_{\\alpha}) = \\mathfrak{g}_{-\\alpha}"
},
{
"math_id": 136,
"text": "-\\Phi \\subset \\Phi"
},
{
"math_id": 137,
"text": "\\mathfrak g_0 = \\mathfrak a \\oplus Z_{\\mathfrak k}(\\mathfrak a)"
},
{
"math_id": 138,
"text": "\\alpha, 2\\alpha"
},
{
"math_id": 139,
"text": "\\mathfrak{g}=\\mathrm{sl}(n,\\mathbb{C})"
},
{
"math_id": 140,
"text": "n-1"
},
{
"math_id": 141,
"text": "\\mathrm{sl}(n;\\mathbb{C})"
},
{
"math_id": 142,
"text": "X"
},
{
"math_id": 143,
"text": "E_{i,j}"
},
{
"math_id": 144,
"text": "i\\neq j"
},
{
"math_id": 145,
"text": "(i,j)"
},
{
"math_id": 146,
"text": "H"
},
{
"math_id": 147,
"text": "\\lambda_1,\\ldots,\\lambda_n"
},
{
"math_id": 148,
"text": "[H,E_{i,j}]=(\\lambda_i-\\lambda_j)E_{i,j}"
},
{
"math_id": 149,
"text": "\\mathrm{sl}(n,\\mathbb{C})"
},
{
"math_id": 150,
"text": "\\alpha_{i,j}"
},
{
"math_id": 151,
"text": "\\alpha_{i,j}(H)=\\lambda_i-\\lambda_j"
},
{
"math_id": 152,
"text": "\\alpha_{i,j}:=e_i-e_j"
},
{
"math_id": 153,
"text": "A_{n-1}"
},
{
"math_id": 154,
"text": "\\mathfrak{sl}_n"
},
{
"math_id": 155,
"text": "\\mathfrak{gl}_n"
},
{
"math_id": 156,
"text": "GL_n(\\mathbb{R})"
}
]
| https://en.wikipedia.org/wiki?curid=1337587 |
13376002 | Optical modulation amplitude | In telecommunications, optical modulation amplitude (OMA) is the difference between two optical power levels, of a digital signal generated by an optical source, "e.g.," a laser diode.
It is given by
formula_0
where "P"1 is the optical power level generated when the light source is "on," and "P"0 is the power level generated when the light source is "off." The OMA may be specified in peak-to-peak mW.
The OMA can be related to the average power formula_1 and the extinction ratio formula_2 by
formula_3
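As a rough numerical illustration of this relation (the function name and power levels below are made up for the example and are not part of any standard), a short Python sketch can check it against the defining difference "P"1 − "P"0:
def oma_from_average_and_extinction(p_avg, r_e):
    """Optical modulation amplitude from average power and extinction ratio."""
    return 2.0 * p_avg * (r_e - 1.0) / (r_e + 1.0)

p1, p0 = 1.0, 0.1  # illustrative power levels, in mW
print(oma_from_average_and_extinction((p1 + p0) / 2.0, p1 / p0))  # 0.9, equal to p1 - p0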
In the limit of a high extinction ratio, formula_4. However, OMA is often used to express the effective usable modulation in a signal when the extinction ratio is not high and this approximation may not be valid. | [
{
"math_id": 0,
"text": "\\text{OMA} = P_1 - P_0 \\, "
},
{
"math_id": 1,
"text": "P_{\\text{av}} = (P_1+P_0)/2"
},
{
"math_id": 2,
"text": "r_{e} = P_{1}/P_{0}"
},
{
"math_id": 3,
"text": " \\text{OMA} = 2 P_{\\text{av}} \\frac{r_{e}-1}{r_{e}+1}"
},
{
"math_id": 4,
"text": " \\text{OMA} \\approx 2P_{\\text{av}} "
}
]
| https://en.wikipedia.org/wiki?curid=13376002 |
1337678 | PN | "PN" may refer to:
<templatestyles src="Template:TOC_right/styles.css" />
Other uses.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title PN.
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "P_n"
},
{
"math_id": 2,
"text": "\\mathbb{P}_n"
}
]
| https://en.wikipedia.org/wiki?curid=1337678 |
13380321 | Reversible diffusion | In mathematics, a reversible diffusion is a specific example of a reversible stochastic process. Reversible diffusions have an elegant characterization due to the Russian mathematician Andrey Nikolaevich Kolmogorov.
Kolmogorov's characterization of reversible diffusions.
Let "B" denote a "d"-dimensional standard Brownian motion; let "b" : R"d" → R"d" be a Lipschitz continuous vector field. Let "X" : [0, +∞) × Ω → R"d" be an Itō diffusion defined on a probability space (Ω, Σ, P) and solving the Itō stochastic differential equation
formula_0
with square-integrable initial condition, i.e. "X"0 ∈ "L"2(Ω, Σ, P; R"d"). Then the following are equivalent: | [
{
"math_id": 0,
"text": "\\mathrm{d} X_{t} = b(X_{t}) \\, \\mathrm{d} t + \\mathrm{d} B_{t}"
},
{
"math_id": 1,
"text": "\\frac{\\mathrm{d} \\mu (x)}{\\mathrm{d} x} = \\exp \\left( - 2 \\Phi (x) \\right)"
},
{
"math_id": 2,
"text": "\\int_{\\mathbf{R}^{d}} \\exp \\left( - 2 \\Phi (x) \\right) \\, \\mathrm{d} x = 1."
}
]
| https://en.wikipedia.org/wiki?curid=13380321 |
13384414 | Bessel's correction | Correction for sample variance bias
In statistics, Bessel's correction is the use of "n" − 1 instead of "n" in the formula for the sample variance and sample standard deviation, where "n" is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation. However, the correction often increases the mean squared error in these estimations. This technique is named after Friedrich Bessel.
Formulation.
In estimating the population variance from a sample when the population mean is unknown, the uncorrected sample variance is the "mean" of the squares of deviations of sample values from the sample mean (i.e., using a multiplicative factor 1/"n"). In this case, the sample variance is a biased estimator of the population variance.
Multiplying the uncorrected sample variance by the factor
formula_0
gives an "unbiased" estimator of the population variance. In some literature, the above factor is called Bessel's correction.
One can understand Bessel's correction as the degrees of freedom in the residuals vector (residuals, not errors, because the population mean is unknown):
formula_1
where formula_2 is the sample mean. While there are "n" independent observations in the sample, there are only "n" − 1 independent residuals, as they sum to 0. For a more intuitive explanation of the need for Bessel's correction, see .
Generally Bessel's correction is an approach to reduce the bias due to finite sample size. Such finite-sample bias correction is also needed for other estimates like skew and kurtosis, but in these the inaccuracies are often significantly larger. To fully remove such bias it is necessary to do a more complex multi-parameter estimation. For instance a correct correction for the standard deviation depends on the kurtosis (normalized central 4th moment), but this again has a finite sample bias and it depends on the standard deviation, i.e., both estimations have to be merged.
Caveats.
There are three caveats to consider regarding Bessel's correction:
Firstly, while the sample variance (using Bessel's correction) is an unbiased estimator of the population variance, its square root, the sample standard deviation, is a "biased" estimate of the population standard deviation; because the square root is a concave function, the bias is downward, by Jensen's inequality. There is no general formula for an unbiased estimator of the population standard deviation, though there are correction factors for particular distributions, such as the normal; see unbiased estimation of standard deviation for details. An approximation for the exact correction factor for the normal distribution is given by using "n" − 1.5 in the formula: the bias decays quadratically (rather than linearly, as in the uncorrected form and Bessel's corrected form).
Secondly, the unbiased estimator does not minimize mean squared error (MSE), and generally has worse MSE than the uncorrected estimator (this varies with excess kurtosis). MSE can be minimized by using a different factor. The optimal value depends on excess kurtosis, as discussed in mean squared error: variance; for the normal distribution this is optimized by dividing by "n" + 1 (instead of "n" − 1 or "n").
Thirdly, Bessel's correction is only necessary when the population mean is unknown, and one is estimating "both" population mean "and" population variance from a given sample, using the sample mean to estimate the population mean. In that case there are "n" degrees of freedom in a sample of "n" points, and simultaneous estimation of mean and variance means one degree of freedom goes to the sample mean and the remaining "n" − 1 degrees of freedom (the "residuals") go to the sample variance. However, if the population mean is known, then the deviations of the observations from the population mean have "n" degrees of freedom (because the mean is not being estimated – the deviations are not residuals but "errors") and Bessel's correction is not applicable.
Source of bias.
Most simply, to understand the bias that needs correcting, think of an extreme case. Suppose the population is (0,0,0,1,2,9), which has a population mean of 2 and a population variance of formula_3. A sample of "n" = 1 is drawn, and it turns out to be formula_4 The best estimate of the population mean is formula_5 But what if we use the formula formula_6 to estimate the variance? The estimate of the variance would be zero – and the estimate would be zero for any population and any sample of "n" = 1. The problem is that in estimating the sample mean, the process has already made our estimate of the mean close to the value we sampled (identical, for "n" = 1). In the case of "n" = 1, the variance just cannot be estimated, because there is no variability in the sample.
But consider "n" = 2. Suppose the sample were (0, 2). Then formula_7 and formula_8, but with Bessel's correction, formula_9, which is an unbiased estimate (if all possible samples of "n" = 2 are taken and this method is used, the average estimate will be 12.4, same as the sample variance with Bessel's correction.)
To see this in more detail, consider the following example. Suppose the mean of the whole population is 2050, but the statistician does not know that, and must estimate it based on this small sample chosen randomly from the population:
formula_10
One may compute the sample average:
formula_11
This may serve as an observable estimate of the unobservable population average, which is 2050. Now we face the problem of estimating the population variance. That is the average of the squares of the deviations from 2050. If we knew that the population average is 2050, we could proceed as follows:
formula_12
But our estimate of the population average is the sample average, 2052. The actual average, 2050, is unknown. So the sample average, 2052, must be used:
formula_13
The variance is now smaller, and it (almost) always is. The only exception occurs when the sample average and the population average are the same. To understand why, consider that variance "measures distance from a point", and within a given sample, the average is precisely that point which minimises the distances. A variance calculation using "any" other average value must produce a larger result.
To see this algebraically, we use a simple identity:
formula_14
With formula_15 representing the deviation of an individual sample from the sample mean, and formula_16 representing the deviation of the sample mean from the population mean. Note that we've simply decomposed the actual deviation of an individual sample from the (unknown) population mean into two components: the deviation of the single sample from the sample mean, which we can compute, and the additional deviation of the sample mean from the population mean, which we can not. Now, we apply this identity to the squares of deviations from the population mean:
formula_17
Now apply this to all five observations and observe certain patterns:
formula_18
The sum of the entries in the middle column must be zero because the term "a" will be added across all 5 rows, which itself must equal zero. That is because "a" contains the 5 individual samples (left side within parentheses) which – when added – naturally have the same sum as adding 5 times the sample mean of those 5 numbers (2052). This means that a subtraction of these two sums must equal zero. The factor 2 and the term b in the middle column are equal for all rows, meaning that the relative difference across all rows in the middle column stays the same and can therefore be disregarded. The following statements explain the meaning of the remaining columns:
Therefore:
That is why the sum of squares of the deviations from the "sample" mean is too small to give an unbiased estimate of the population variance when the average of those squares is found. The smaller the sample size, the larger is the difference between the sample variance and the population variance.
Terminology.
This correction is so common that the term "sample variance" and "sample standard deviation" are frequently used to mean the corrected estimators (unbiased sample variation, less biased sample standard deviation), using "n" − 1. However caution is needed: some calculators and software packages may provide for both or only the more unusual formulation. This article uses the following symbols and definitions:
The standard deviations will then be the square roots of the respective variances. Since the square root introduces bias, the terminology "uncorrected" and "corrected" is preferred for the standard deviation estimators:
Formula.
The sample mean is given by
formula_19
The biased sample variance is then written:
formula_20
and the unbiased sample variance is written:
formula_21
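In Python, the two estimators can be written out directly; the function names below are illustrative, and the sample reuses the worked example above (the standard library offers the same pair as statistics.pvariance and statistics.variance):
def biased_sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def unbiased_sample_variance(xs):
    # Bessel's correction: divide by n - 1 instead of n.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

sample = [2051, 2053, 2055, 2050, 2051]
print(biased_sample_variance(sample))    # 3.2, as in the worked example above
print(unbiased_sample_variance(sample))  # 4.0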
Proof.
Suppose thus that formula_22 are independent and identically distributed random variables with expectation formula_23 and variance formula_24.
Knowing the values of the formula_22 at an outcome formula_25 of the underlying sample space, we would like to get a good estimate for the variance formula_24, which is unknown. To this end, we construct a mathematical formula containing the formula_22 such that the expectation of this formula is precisely formula_24. This means that on average, this formula should produce the right answer.
The educated, but naive way of guessing such a formula would be
formula_26,
where formula_27; this would be the variance if we had a discrete random variable on the discrete probability space formula_28 that had value formula_29 at formula_30. But let us calculate the expected value of this expression:
formula_31
here we have (by independence, symmetric cancellation and identical distributions)
formula_32
and therefore
formula_33.
In contrast,
formula_34.
Therefore, our initial guess was wrong by a factor of
formula_35,
and this is precisely Bessel's correction.
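The factor formula_35 can also be seen numerically; the following sketch is only an illustration and assumes a uniform population on [0, 1), whose variance is 1/12:
import random

def uncorrected_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n, trials = 5, 200_000
avg = sum(uncorrected_variance([random.random() for _ in range(n)])
          for _ in range(trials)) / trials
print(avg * 12)  # close to (n - 1)/n = 0.8, not 1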
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac n {n-1}"
},
{
"math_id": 1,
"text": "(x_1-\\overline{x},\\,\\dots,\\,x_n-\\overline{x}),"
},
{
"math_id": 2,
"text": "\\overline{x}"
},
{
"math_id": 3,
"text": "31/3"
},
{
"math_id": 4,
"text": "x_1=0."
},
{
"math_id": 5,
"text": "\\bar{x} = x_1/n = 0/1 = 0."
},
{
"math_id": 6,
"text": " (x_1-\\bar{x})^2/n = (0-0)/1 = 0"
},
{
"math_id": 7,
"text": "\\bar{x}=1"
},
{
"math_id": 8,
"text": " \\left[(x_1-\\bar{x})^2 + (x_2-\\bar{x})^2\\right] /n = (1+1)/2 = 1"
},
{
"math_id": 9,
"text": "\\left[(x_1-\\bar{x})^2 + (x_2-\\bar{x})^2\\right] /(n-1) = (1+1)/1 = 2"
},
{
"math_id": 10,
"text": " 2051,\\quad 2053,\\quad 2055,\\quad 2050,\\quad 2051 "
},
{
"math_id": 11,
"text": " \\frac{1}{5}\\left(2051 + 2053 + 2055 + 2050 + 2051\\right) = 2052"
},
{
"math_id": 12,
"text": "\\begin{align}\n {} & \\frac{1}{5}\\left[(2051 - 2050)^2 + (2053 - 2050)^2 + (2055 - 2050)^2 + (2050 - 2050)^2 + (2051 - 2050)^2\\right] \\\\[6pt]\n = {} & \\frac{36}{5} = 7.2\n\\end{align}"
},
{
"math_id": 13,
"text": "\\begin{align}\n {} & \\frac{1}{5}\\left[(2051 - 2052)^2 + (2053 - 2052)^2 + (2055 - 2052)^2 + (2050 - 2052)^2 + (2051 - 2052)^2\\right] \\\\[6pt]\n = {} & \\frac{16}{5} = 3.2\n\\end{align}"
},
{
"math_id": 14,
"text": "(a+b)^2 = a^2 + 2ab + b^2"
},
{
"math_id": 15,
"text": "a"
},
{
"math_id": 16,
"text": "b"
},
{
"math_id": 17,
"text": "\\begin{align}\n {[}\\,\\underbrace{2053 - 2050}_{\\begin{smallmatrix} \\text{Deviation from} \\\\ \\text{the population} \\\\ \\text{mean} \\end{smallmatrix}}\\,]^2 & = [\\,\\overbrace{(\\,\\underbrace{2053 - 2052}_{\\begin{smallmatrix} \\text{Deviation from} \\\\ \\text{the sample mean} \\end{smallmatrix}}\\,)}^{\\text{This is }a.} + \\overbrace{(2052 - 2050)}^{\\text{This is }b.}\\,]^2 \\\\\n & = \\overbrace{(2053 - 2052)^2}^{\\text{This is }a^2.} + \\overbrace{2(2053 - 2052)(2052 - 2050)}^{\\text{This is }2ab.} + \\overbrace{(2052 - 2050)^2}^{\\text{This is }b^2.}\n\\end{align}"
},
{
"math_id": 18,
"text": "\\begin{alignat}{2}\n \\overbrace{(2051 - 2052)^2}^{\\text{This is }a^2.}\\ &+\\ \\overbrace{2(2051 - 2052)(2052 - 2050)}^{\\text{This is }2ab.}\\ &&+\\ \\overbrace{(2052 - 2050)^2}^{\\text{This is }b^2.} \\\\\n (2053 - 2052)^2\\ &+\\ 2(2053 - 2052)(2052 - 2050)\\ &&+\\ (2052 - 2050)^2 \\\\\n (2055 - 2052)^2\\ &+\\ 2(2055 - 2052)(2052 - 2050)\\ &&+\\ (2052 - 2050)^2 \\\\\n (2050 - 2052)^2\\ &+\\ 2(2050 - 2052)(2052 - 2050)\\ &&+\\ (2052 - 2050)^2 \\\\\n (2051 - 2052)^2\\ &+\\ \\underbrace{2(2051 - 2052)(2052 - 2050)}_{\\begin{smallmatrix} \\text{The sum of the entries in this} \\\\ \\text{middle column must be 0.} \\end{smallmatrix}}\\ &&+\\ (2052 - 2050)^2\n\\end{alignat}"
},
{
"math_id": 19,
"text": "\\overline{x}=\\frac{1}{n}\\sum_{i=1}^n x_i."
},
{
"math_id": 20,
"text": "s_n^2 = \\frac {1}{n} \\sum_{i=1}^n \\left(x_i - \\overline{x} \\right)^ 2 = \\frac{\\sum_{i=1}^n x_i^2}{n} - \\frac{\\left(\\sum_{i=1}^n x_i\\right)^2}{n^2}"
},
{
"math_id": 21,
"text": "s^2 = \\frac {1}{n-1} \\sum_{i=1}^n \\left(x_i - \\overline{x} \\right)^ 2 = \\frac{\\sum_{i=1}^n x_i^2}{n-1} - \\frac{\\left(\\sum_{i=1}^n x_i\\right)^2}{(n-1)n} = \\left(\\frac{n}{n-1}\\right)\\,s_n^2."
},
{
"math_id": 22,
"text": "X_1, \\ldots, X_n"
},
{
"math_id": 23,
"text": "\\mu"
},
{
"math_id": 24,
"text": "\\sigma^2"
},
{
"math_id": 25,
"text": "\\omega \\in \\Omega"
},
{
"math_id": 26,
"text": "\\frac{1}{n} \\sum_{k=1}^n (x_k - \\overline x)^2"
},
{
"math_id": 27,
"text": "x_k = X_k(\\omega)"
},
{
"math_id": 28,
"text": "\\{1, \\ldots, n\\}"
},
{
"math_id": 29,
"text": "x_k"
},
{
"math_id": 30,
"text": "k"
},
{
"math_id": 31,
"text": "\\begin{align}\n\\mathbb E \\left[ \\frac{1}{n} \\sum_{k=1}^n (x_k - \\overline x)^2 \\right] & = \\mathbb E \\left[ \\frac{1}{n} \\sum_{k=1}^n \\left( x_k - \\frac{1}{n} \\sum_{j=1}^n x_j \\right)^2 \\right] \\\\\n& = \\mathbb E \\left[ \\frac{1}{n} \\sum_{k=1}^n \\left( \\frac{1}{n} \\sum_{j=1}^n (x_k - x_j) \\right)^2 \\right];\n\\end{align}"
},
{
"math_id": 32,
"text": "\\begin{align}\n\\mathbb E \\left[ \\left( \\sum_{j=1}^n (x_k - x_j) \\right)^2 \\right] & = \\mathbb E \\left[ \\sum_{j=1}^n \\sum_{l=1}^n (x_k - x_j)(x_k - x_l) \\right] \\\\\n& = n(n-1) \\mathbb E[X_1^2] - n(n-1) \\mathbb E[X_1]^2,\n\\end{align}"
},
{
"math_id": 33,
"text": "\\mathbb E \\left[ \\frac{1}{n} \\sum_{k=1}^n (x_k - \\overline x)^2 \\right] = \\frac{n-1}{n} \\left( \\mathbb E[X_1^2] - \\mathbb E[X_1]^2 \\right)"
},
{
"math_id": 34,
"text": "\\operatorname{Var}(X_1) = \\mathbb E[X_1^2] - \\mathbb E[X_1]^2"
},
{
"math_id": 35,
"text": "\\frac{n-1}{n}"
}
]
| https://en.wikipedia.org/wiki?curid=13384414 |
1338683 | Corecursion | Type of algorithm in computer science
In computer science, corecursion is a type of operation that is dual to recursion. Whereas recursion works analytically, starting on data further from a base case and breaking it down into smaller data and repeating until one reaches a base case, corecursion works synthetically, starting from a base case and building it up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as they become available, and needed, to produce further bits of data. A similar but distinct concept is "generative recursion", which may lack a definite "direction" inherent in corecursion and recursion.
Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as streams, so long as it can be produced from simple data (base cases) in a sequence of "finite" steps. Where recursion may not terminate, never reaching a base state, corecursion starts from a base state, and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-"productive". Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example recurrence relations such as the factorial.
Corecursion can produce both finite and infinite data structures as results, and may employ self-referential data structures. Corecursion is often used in conjunction with lazy evaluation, to produce only a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once). Corecursion is a particularly important concept in functional programming, where corecursion and codata allow total languages to work with infinite data structures.
Examples.
Corecursion can be understood by contrast with recursion, which is more familiar. While corecursion is primarily of interest in functional programming, it can be illustrated using imperative programming, which is done below using the generator facility in Python. In these examples local variables are used, and assigned values imperatively (destructively), though these are not necessary in corecursion in pure functional programming. In pure functional programming, rather than assigning to local variables, these computed values form an invariable sequence, and prior values are accessed by self-reference (later values in the sequence reference earlier values in the sequence to be computed). The assignments simply express this in the imperative paradigm and explicitly specify where the computations happen, which serves to clarify the exposition.
Factorial.
A classic example of recursion is computing the factorial, which is defined recursively by "0! := 1" and "n! := n × (n - 1)!".
To "recursively" compute its result on a given input, a recursive function calls (a copy of) "itself" with a different ("smaller" in some way) input and uses the result of this call to construct its result. The recursive call does the same, unless the "base case" has been reached. Thus a call stack develops in the process. For example, to compute "fac(3)", this recursively calls in turn "fac(2)", "fac(1)", "fac(0)" ("winding up" the stack), at which point recursion terminates with "fac(0) = 1", and then the stack unwinds in reverse order and the results are calculated on the way back along the call stack to the initial call frame "fac(3)" that uses the result of "fac(2) = 2" to calculate the final result as "3 × 2 = 3 × fac(2) =: fac(3)" and finally return "fac(3) = 6". In this example a function returns a single value.
This stack unwinding can be explicated, defining the factorial "corecursively", as an iterator, where one "starts" with the case of formula_0, then from this starting value constructs factorial values for increasing numbers "1, 2, 3..." as in the above recursive definition with "time arrow" reversed, as it were, by reading it "backwards" as formula_1. The corecursive algorithm thus defined produces a "stream" of "all" factorials. This may be concretely implemented as a generator. Symbolically, noting that computing next factorial value requires keeping track of both "n" and "f" (a previous factorial value), this can be represented as:
formula_2
or in Haskell,
(\(n,f) -> (n+1, f*(n+1))) `iterate` (0,1)
meaning, "starting from formula_3, on each step the next values are calculated as formula_4". This is mathematically equivalent and almost identical to the recursive definition, but the formula_5 emphasizes that the factorial values are being built "up", going forwards from the starting case, rather than being computed after first going backwards, "down" to the base case, with a formula_6 decrement. The direct output of the corecursive function does not simply contain the factorial formula_7 values, but also includes for each value the auxiliary data of its index "n" in the sequence, so that any one specific result can be selected among them all, as and when needed.
There is a connection with denotational semantics, where the denotations of recursive programs is built up corecursively in this way.
In Python, a recursive factorial function can be defined as:
def factorial(n: int) -> int:
    """Recursive factorial function."""
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
This could then be called for example as codice_0 to compute "5!".
A corresponding corecursive generator can be defined as:
def factorials():
    """Corecursive generator."""
    n, f = 0, 1
    while True:
        yield f
        n, f = n + 1, f * (n + 1)
This generates an infinite stream of factorials in order; a finite portion of it can be produced by:
def n_factorials(n: int):
    k, f = 0, 1
    while k <= n:
        yield f
        k, f = k + 1, f * (k + 1)
This could then be called to produce the factorials up to "5!" via:
for f in n_factorials(5):
    print(f)
If we're only interested in a certain factorial, just the last value can be taken, or we can fuse the production and the access into one function,
def nth_factorial(n: int):
    k, f = 0, 1
    while k < n:
        k, f = k + 1, f * (k + 1)
    return f
As can be readily seen here, this is practically equivalent (just by substituting codice_1 for the only codice_2 there) to the accumulator argument technique for tail recursion, unwound into an explicit loop. Thus it can be said that the concept of corecursion is an explication of the embodiment of iterative computation processes by recursive definitions, where applicable.
Fibonacci sequence.
In the same way, the Fibonacci sequence can be represented as:
formula_8
Because the Fibonacci sequence is a recurrence relation of order 2, the corecursive relation must track two successive terms, with the formula_9 corresponding to shift forward by one step, and the formula_10 corresponding to computing the next term. This can then be implemented as follows (using parallel assignment):
def fibonacci_sequence():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b
In Haskell,
map fst ( (\(a,b) -> (b,a+b)) `iterate` (0,1) )
Tree traversal.
Tree traversal via a depth-first approach is a classic example of recursion. Dually, breadth-first traversal can very naturally be implemented via corecursion.
Iteratively, one may traverse a tree by placing its root node in a data structure, then iterating with that data structure while it is non-empty, on each step removing the first node from it and placing the removed node's "child nodes" back into that data structure. If the data structure is a stack (LIFO), this yields depth-first traversal, and if the data structure is a queue (FIFO), this yields breadth-first traversal:
formula_11
formula_12
formula_13
Using recursion, a depth-first traversal of a tree is implemented simply as recursively traversing each of the root node's child nodes in turn. Thus the second child subtree is not processed until the first child subtree is finished. The root node's value is handled separately, whether before the first child is traversed (resulting in pre-order traversal), after the first is finished and before the second (in-order), or after the second child node is finished (post-order) — assuming the tree is binary, for simplicity of exposition. The call stack (of the recursive traversal function invocations) corresponds to the stack that would be iterated over with the explicit LIFO structure manipulation mentioned above. Symbolically,
formula_14
formula_15
formula_16
"Recursion" has two meanings here. First, the recursive invocations of the tree traversal functions formula_17. More pertinently, we need to contend with how the resulting "list of values" is built here. Recursive, bottom-up output creation will result in the right-to-left tree traversal. To have it actually performed in the intended left-to-right order the sequencing would need to be enforced by some extraneous means, or it would be automatically achieved if the output were to be built in the top-down fashion, i.e. "corecursively".
A breadth-first traversal creating its output in the top-down order, corecursively, can be also implemented by starting at the root node, outputting its value, then breadth-first traversing the subtrees – i.e., passing on the "whole list" of subtrees to the next step (not a single subtree, as in the recursive approach) – at the next step outputting the values of all of their root nodes, then passing on "their" child subtrees, etc. In this case the generator function, indeed the output sequence itself, acts as the queue. As in the factorial example above, where the auxiliary information of the index (which step one was at, "n") was pushed forward, in addition to the actual output of "n"!, in this case the auxiliary information of the remaining subtrees is pushed forward, in addition to the actual output. Symbolically,
formula_18
meaning that at each step, one outputs the list of values in this level's nodes, then proceeds to the next level's nodes. Generating just the node values from this sequence simply requires discarding the auxiliary child tree data, then flattening the list of lists (values are initially grouped by level (depth); flattening (ungrouping) yields a flat linear list). This is extensionally equivalent to the formula_19 specification above. In Haskell,
concatMap fst ( (\(v, ts) -> (rootValues ts, childTrees ts)) `iterate` ([], [fullTree]) )
Notably, given an infinite tree, the corecursive breadth-first traversal will traverse all nodes, just as for a finite tree, while the recursive depth-first traversal will go down one branch and not traverse all nodes, and indeed if traversing post-order, as in this example (or in-order), it will visit no nodes at all, because it never reaches a leaf. This shows the usefulness of corecursion rather than recursion for dealing with infinite data structures. One caveat still remains for trees with the infinite branching factor, which need a more attentive interlacing to explore the space better. See dovetailing.
In Python, this can be implemented as follows.
The usual post-order depth-first traversal can be defined as:
def df(node):
    """Post-order depth-first traversal."""
    if node is not None:
        df(node.left)
        df(node.right)
        print(node.value)
This can then be called by codice_3 to print the values of the nodes of the tree in post-order depth-first order.
The breadth-first corecursive generator can be defined as:
def bf(tree):
    """Breadth-first corecursive generator."""
    tree_list = [tree]
    while tree_list:
        new_tree_list = []
        for tree in tree_list:
            if tree is not None:
                yield tree.value
                new_tree_list.append(tree.left)
                new_tree_list.append(tree.right)
        tree_list = new_tree_list
This can then be called to print the values of the nodes of the tree in breadth-first order:
for i in bf(t):
    print(i)
Definition.
Initial data types can be defined as being the least fixpoint (up to isomorphism) of some type equation; the isomorphism is then given by an initial algebra. Dually, final (or terminal) data types can be defined as being the greatest fixpoint of a type equation; the isomorphism is then given by a final coalgebra.
If the domain of discourse is the category of sets and total functions, then final data types may contain infinite, non-wellfounded values, whereas initial types do not. On the other hand, if the domain of discourse is the category of complete partial orders and continuous functions, which corresponds roughly to the Haskell programming language, then final types coincide with initial types, and the corresponding final coalgebra and initial algebra form an isomorphism.
Corecursion is then a technique for recursively defining functions whose range (codomain) is a final data type, dual to the way that ordinary recursion recursively defines functions whose domain is an initial data type.
The discussion below provides several examples in Haskell that distinguish corecursion. Roughly speaking, if one were to port these definitions to the category of sets, they would still be corecursive. This informal usage is consistent with existing textbooks about Haskell. The examples used in this article predate the attempts to define corecursion and explain what it is.
Discussion.
The rule for "primitive corecursion" on codata is the dual to that for primitive recursion on data. Instead of descending on the argument by pattern-matching on its constructors (that "were called up before", somewhere, so we receive a ready-made datum and get at its constituent sub-parts, i.e. "fields"), we ascend on the result by filling-in its "destructors" (or "observers", that "will be called afterwards", somewhere - so we're actually calling a constructor, creating another bit of the result to be observed later on). Thus corecursion "creates" (potentially infinite) codata, whereas ordinary recursion "analyses" (necessarily finite) data. Ordinary recursion might not be applicable to the codata because it might not terminate. Conversely, corecursion is not strictly necessary if the result type is data, because data must be finite.
In "Programming with streams in Coq: a case study: the Sieve of Eratosthenes" we find
hd (conc a s) = a
tl (conc a s) = s
(sieve p s) = if div p (hd s) then sieve p (tl s)
              else conc (hd s) (sieve p (tl s))
hd (primes s) = (hd s)
tl (primes s) = primes (sieve (hd s) (tl s))
where primes "are obtained by applying the primes operation to the stream (Enu 2)". Following the above notation, the sequence of primes (with a throwaway 0 prefixed to it) and numbers streams being progressively sieved, can be represented as
formula_20
or in Haskell,
(\(p, s) -> (head s, sieve (head s) (tail s))) `iterate` (0, [2..])
The authors discuss how the definition of codice_4 is not guaranteed always to be "productive", and could become stuck e.g. if called with codice_5 as the initial stream.
Here is another example in Haskell. The following definition produces the list of Fibonacci numbers in linear time:
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
This infinite list depends on lazy evaluation; elements are computed on an as-needed basis, and only finite prefixes are ever explicitly represented in memory. This feature allows algorithms on parts of codata to terminate; such techniques are an important part of Haskell programming.
This can be done in Python as well:
>>> from itertools import tee, chain, islice
>>> def fibonacci():
...     def deferred_output():
...         yield from output
...     result, c1, c2 = tee(deferred_output(), 3)
...     paired = (x + y for x, y in zip(c1, islice(c2, 1, None)))
...     output = chain([0, 1], paired)
...     return result
>>> print(*islice(fibonacci(), 20), sep=', ')
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181
The definition of codice_6 can be inlined, leading to this:
fibs = 0 : 1 : next fibs
  where
    next (a: t@(b:_)) = (a+b):next t
This example employs a self-referential "data structure". Ordinary recursion makes use of self-referential "functions", but does not accommodate self-referential data. However, this is not essential to the Fibonacci example. It can be rewritten as follows:
fibs = fibgen (0,1)
fibgen (x,y) = x : fibgen (y,x+y)
This employs only a self-referential "function" to construct the result. If it were used with a strict list constructor it would be an example of runaway recursion, but with a non-strict list constructor this guarded recursion gradually produces an indefinitely defined list.
Corecursion need not produce an infinite object; a corecursive queue is a particularly good example of this phenomenon. The following definition produces a breadth-first traversal of a binary tree in the "top-down" manner, in linear time (already incorporating the flattening mentioned above):
data Tree a b = Leaf a
              | Branch b (Tree a b) (Tree a b)

bftrav :: Tree a b -> [Tree a b]
bftrav tree = tree : ts
  where
    ts = gen 1 (tree : ts)
    gen 0   p                  = []
    gen len (Leaf   _     : p) =         gen (len-1) p
    gen len (Branch _ l r : p) = l : r : gen (len+1) p
    --       ----read----        ----write-ahead---
-- bfvalues tree = [v | (Branch v _ _) <- bftrav tree]
This definition takes a tree and produces a list of its sub-trees (nodes and leaves). This list serves a dual purpose as both the input queue and the result (gen produces its output len notches ahead of its input back-pointer, p, along the list). It is finite if and only if the initial tree is finite. The length of the queue must be explicitly tracked in order to ensure termination; this can safely be elided if this definition is applied only to infinite trees.
This Haskell code uses self-referential data structure, but does not "essentially" depend on lazy evaluation. It can be straightforwardly translated into e.g. Prolog which is not a lazy language. What "is" essential is the ability to build a list (used as the queue) in the "top-down" manner. For that, Prolog has tail recursion modulo cons (i.e. open ended lists). Which is also emulatable in Scheme, C, etc. using linked lists with mutable tail sentinel pointer:
bftrav( Tree, [Tree|TS]) :- bfgen( 1, [Tree|TS], TS).
bfgen( 0, _, []) :- !. % 0 entries in the queue -- stop and close the list
bfgen( N, [leaf(_) |P], TS ) :- N2 is N-1, bfgen( N2, P, TS).
bfgen( N, [branch(_,L,R)|P], [L,R|TS]) :- N2 is N+1, bfgen( N2, P, TS).
%% ----read----- --write-ahead--
Another particular example gives a solution to the problem of breadth-first labeling. The function codice_7 visits every node in a binary tree in the breadth first fashion, replacing each label with an integer, each subsequent integer bigger than the last by 1. This solution employs a self-referential data structure, and the binary tree can be finite or infinite.
label :: Tree a b -> Tree Int Int
label t = tn
  where
    (tn, ns) = go t (1:ns)

    go :: Tree a b -> [Int] -> (Tree Int Int, [Int])
    go (Leaf _ )      (i:a) = (Leaf i ,        i+1:a)
    go (Branch _ l r) (i:a) = (Branch i ln rn, i+1:c)
      where
        (ln, b) = go l a
        (rn, c) = go r b
Or in Prolog, for comparison,
label( Tree, Tn) :- label( Tree, [1|Ns], Tn, Ns).
label( leaf(_), [I|A], leaf( I), [I+1|A]).
label( branch(_,L,R),[I|A], branch(I,Ln,Rn),[I+1|C]) :-
label( L, A, Ln, B),
label( R, B, Rn, C).
An apomorphism (such as an anamorphism, such as unfold) is a form of corecursion in the same way that a paramorphism (such as a catamorphism, such as fold) is a form of recursion.
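To make the analogy concrete, here is a small Python sketch of a generic unfold (an anamorphism on streams); the helper name is illustrative, and the example merely rebuilds the factorial stream shown earlier:
def unfold(step, seed):
    """Anamorphism sketch: corecursively grow a stream from a seed."""
    while True:
        value, seed = step(seed)
        yield value

facts = unfold(lambda nf: (nf[1], (nf[0] + 1, nf[1] * (nf[0] + 1))), (0, 1))
print([next(facts) for _ in range(6)])  # [1, 1, 2, 6, 24, 120]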
The Coq proof assistant supports corecursion and coinduction using the CoFixpoint command.
History.
Corecursion, referred to as "circular programming," dates at least to , who credits John Hughes and Philip Wadler; more general forms were developed in . The original motivations included producing more efficient algorithms (allowing a single pass over data in some cases, instead of requiring multiple passes) and implementing classical data structures, such as doubly linked lists and queues, in functional languages.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "1 =: 0!"
},
{
"math_id": 1,
"text": "n! \\times (n+1) =: (n+1)!"
},
{
"math_id": 2,
"text": "n, f = (0, 1) : (n + 1, f \\times (n+1))"
},
{
"math_id": 3,
"text": "n, f = 0, 1"
},
{
"math_id": 4,
"text": "n+1, f \\times (n+1)"
},
{
"math_id": 5,
"text": "+1"
},
{
"math_id": 6,
"text": "-1"
},
{
"math_id": 7,
"text": "n!"
},
{
"math_id": 8,
"text": "a, b = (0, 1) : (b, a+b)"
},
{
"math_id": 9,
"text": "(b, -)"
},
{
"math_id": 10,
"text": "(-, a+b)"
},
{
"math_id": 11,
"text": "\\text{trav}_\\text{nn}(t) = \\text{aux}_\\text{nn}([t])"
},
{
"math_id": 12,
"text": "\\text{aux}_\\text{df}([t,\\ ...ts]) = \\text{val}(t) ;\\ \\text{aux}_\\text{df}([\\ ...\\text{children}(t),\\ ...ts\\ ])"
},
{
"math_id": 13,
"text": "\\text{aux}_\\text{bf}([t,\\ ...ts]) = \\text{val}(t) ;\\ \\text{aux}_\\text{bf}([\\ ...ts,\\ ...\\text{children}(t)\\ ])"
},
{
"math_id": 14,
"text": "\\text{df}_\\text{in}(t) = [\\ ...\\text{df}_\\text{in}(\\text{left}(t)),\\ \\text{val}(t),\\ ...\\text{df}_\\text{in}(\\text{right}(t))\\ ]"
},
{
"math_id": 15,
"text": "\\text{df}_\\text{pre}(t) = [\\ \\text{val}(t),\\ ...\\text{df}_\\text{pre}(\\text{left}(t)),\\ ...\\text{df}_\\text{pre}(\\text{right}(t))\\ ]"
},
{
"math_id": 16,
"text": "\\text{df}_\\text{post}(t) = [\\ ...\\text{df}_\\text{post}(\\text{left}(t)),\\ ...\\text{df}_\\text{post}(\\text{right}(t)),\\ \\text{val}(t)\\ ]"
},
{
"math_id": 17,
"text": "\\text{df}_{xyz}"
},
{
"math_id": 18,
"text": "v, ts = ([], [\\text{FullTree}]) : (\\text{RootValues}(ts), \\text{ChildTrees}(ts))"
},
{
"math_id": 19,
"text": "\\text{aux}_\\text{bf}"
},
{
"math_id": 20,
"text": "p, s = (0, [2..]) : (hd(s), sieve(hd(s),tl(s)))"
}
]
| https://en.wikipedia.org/wiki?curid=1338683 |
13390326 | Bitmap | Computing term
In computing, a bitmap (also called raster) graphic is an image formed from rows of different colored pixels. A GIF is an example of a graphics image file that uses a bitmap.
As a noun, the term "bitmap" is very often used to refer to a particular bitmapping application: the pixmap, which refers to a map of pixels, where each pixel may store more than two colors, thus using more than one bit per pixel. In such a case, the domain in question is the array of pixels which constitute a digital graphic output device (a screen or monitor). In some contexts, the term "bitmap" implies one bit per pixel, whereas "pixmap" is used for images with multiple bits per pixel.
A bitmap is a type of memory organization or image file format used to store digital images. The term "bitmap" comes from the computer programming terminology, meaning just a "map of bits", a spatially mapped array of bits. Now, along with "pixmap", it commonly refers to the similar concept of a spatially mapped array of pixels. Raster images in general may be referred to as bitmaps or pixmaps, whether synthetic or photographic, in files or memory.
Many graphical user interfaces use bitmaps in their built-in graphics subsystems. For example, the Microsoft Windows and OS/2 platforms' GDI subsystem uses the "Windows and OS/2 bitmap file format", usually named with the file extension codice_0 (or codice_1 for "device-independent bitmap"). Besides BMP, other file formats that store literal bitmaps include InterLeaved Bitmap (ILBM), Portable Bitmap (PBM), X Bitmap (XBM), and Wireless Application Protocol Bitmap (WBMP). Similarly, most other image file formats, such as JPEG, TIFF, PNG, and GIF, also store bitmap images (as opposed to vector graphics), but they are not usually referred to as "bitmaps", since they use compressed formats internally.
Pixel storage.
In typical uncompressed bitmaps, image pixels are generally stored with a variable number of bits per pixel which identify its color (the color depth). Pixels of 8 bits and fewer can represent either grayscale or indexed color. An alpha channel (for transparency) may be stored in a separate bitmap, where it is similar to a grayscale bitmap, or in a fourth channel that, for example, converts 24-bit images to 32 bits per pixel.
The bits representing the bitmap pixels may be packed or unpacked (spaced out to byte or word boundaries), depending on the format or device requirements. Depending on the color depth, a pixel in the picture will occupy at least n/8 bytes, where n is the bit depth.
For an uncompressed, packed-within-rows bitmap, such as is stored in Microsoft DIB or BMP file format, or in uncompressed TIFF format, a lower bound on storage size for a n-bit-per-pixel (2n colors) bitmap, in bytes, can be calculated as:
formula_0
where width and height are given in pixels.
In the formula above, header size and color palette size, if any, are not included. Due to effects of row padding to align each row start to a storage unit boundary, such as a word, additional bytes may be needed.
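As an illustration, the following Python sketch evaluates this lower bound and can optionally model per-row padding (the function name is made up here; 4-byte row alignment is the convention used by the BMP format):
def bitmap_size_bytes(width, height, bits_per_pixel, row_align=1):
    row_bytes = (width * bits_per_pixel + 7) // 8        # bytes per packed row
    padded_row = ((row_bytes + row_align - 1) // row_align) * row_align
    return padded_row * height

print(bitmap_size_bytes(1920, 1080, 24))               # 6220800 bytes (about 5.9 MiB)
print(bitmap_size_bytes(1921, 1080, 24, row_align=4))  # slightly larger, due to row padding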
Device-independent bitmaps and BMP file format.
Microsoft has defined a particular representation of color bitmaps of different color depths, as an aid to exchanging bitmaps between devices and applications with a variety of internal representations. They called these device-independent bitmaps "DIBs", and the file format for them is called DIB file format or BMP file format. According to Microsoft support:
A device-independent bitmap (DIB) is a format used to define device-independent bitmaps in various color resolutions. The main purpose of DIBs is to allow bitmaps to be moved from one device to another (hence, the device-independent part of the name). A DIB is an external format, in contrast to a device-dependent bitmap, which appears in the system as a bitmap object (created by an application...). A DIB is normally transported in metafiles (usually using the StretchDIBits() function), BMP files, and the Clipboard (CF_DIB data format).
Here, "device independent" refers to the format, or storage arrangement, and should not be confused with device-independent color.
Other bitmap file formats.
The X Window System uses a similar XBM format for black-and-white images, and XPM ("pixelmap") for color images. Numerous other uncompressed bitmap file formats are in use, though most not widely. For most purposes, standardized compressed bitmap files such as GIF, PNG, TIFF, and JPEG are used. Lossless compression in particular provides the same information as a bitmap in a smaller file size. TIFF and JPEG have various options. JPEG is usually lossy compression. TIFF is usually either uncompressed, or lossless Lempel-Ziv-Welch compressed like GIF. PNG uses deflate lossless compression, another Lempel-Ziv variant.
There are also a variety of "raw" image files, which store raw bitmaps with no other information. Such raw files are just bitmaps in files, often with no header or size information (they are distinct from photographic raw image formats, which store raw unprocessed sensor data in a structured container such as TIFF format along with extensive image metadata).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{size} = \\text{width} \\cdot \\text{height} \\cdot n/8"
}
]
| https://en.wikipedia.org/wiki?curid=13390326 |
1339097 | Infinite-dimensional holomorphy | In mathematics, infinite-dimensional holomorphy is a branch of functional analysis. It is concerned with generalizations of the concept of holomorphic function to functions defined and taking values in complex Banach spaces (or Fréchet spaces more generally), typically of infinite dimension. It is one aspect of nonlinear functional analysis.
Vector-valued holomorphic functions defined in the complex plane.
A first step in extending the theory of holomorphic functions beyond one complex dimension is considering so-called "vector-valued holomorphic functions", which are still defined in the complex plane C, but take values in a Banach space. Such functions are important, for example, in constructing the holomorphic functional calculus for bounded linear operators.
Definition. A function "f" : "U" → "X", where "U" ⊂ C is an open subset and "X" is a complex Banach space, is called "holomorphic" if it is complex-differentiable; that is, for each point "z" ∈ "U" the following limit exists:
formula_0
One may define the line integral of a vector-valued holomorphic function "f" : "U" → "X" along a rectifiable curve γ : ["a", "b"] → "U" in the same way as for complex-valued holomorphic functions, as the limit of sums of the form
formula_1
where "a" = "t"0 < "t"1 < ... < "t""n" = "b" is a subdivision of the interval ["a", "b"], as the lengths of the subdivision intervals approach zero.
It is a quick check that the Cauchy integral theorem also holds for vector-valued holomorphic functions. Indeed, if "f" : "U" → "X" is such a function and "T" : "X" → C a bounded linear functional, one can show that
formula_2
Moreover, the composition "T" "f" : "U" → C is a complex-valued holomorphic function. Therefore, for γ a simple closed curve whose interior is contained in "U", the integral on the right is zero, by the classical Cauchy integral theorem. Then, since "T" is arbitrary, it follows from the Hahn–Banach theorem that
formula_3
which proves the Cauchy integral theorem in the vector-valued case.
Using this powerful tool one may then prove Cauchy's integral formula, and, just like in the classical case, that any vector-valued holomorphic function is analytic.
A useful criterion for a function "f" : "U" → "X" to be holomorphic is that "T" "f" : "U" → C is a holomorphic complex-valued function for every continuous linear functional "T" : "X" → C. Such an "f" is weakly holomorphic. It can be shown that a function defined on an open subset of the complex plane with values in a Fréchet space is holomorphic if, and only if, it is weakly holomorphic.
Holomorphic functions between Banach spaces.
More generally, given two complex Banach spaces "X" and "Y" and an open set "U" ⊂ "X", "f" : "U" → "Y" is called holomorphic if the Fréchet derivative of "f" exists at every point in "U". One can show that, in this more general context, it is still true that a holomorphic function is analytic, that is, it can be locally expanded in a power series. It is no longer true however that if a function is defined and holomorphic in a ball, its power series around the center of the ball is convergent in the entire ball; for example, there exist holomorphic functions defined on the entire space which have a finite radius of convergence.
Holomorphic functions between topological vector spaces.
In general, given two complex topological vector spaces "X" and "Y" and an open set "U" ⊂ "X", there are various ways of defining holomorphy of a function "f" : "U" → "Y". Unlike the finite dimensional setting, when "X" and "Y" are infinite dimensional, the properties of holomorphic functions may depend on which definition is chosen. To restrict the number of possibilities we must consider, we shall only discuss holomorphy in the case when "X" and "Y" are locally convex.
This section presents a list of definitions, proceeding from the weakest notion to the strongest notion. It concludes with a discussion of some theorems relating these definitions when the spaces "X" and "Y" satisfy some additional constraints.
Gateaux holomorphy.
Gateaux holomorphy is the direct generalization of weak holomorphy to the fully infinite dimensional setting.
Let "X" and "Y" be locally convex topological vector spaces, and "U" ⊂ "X" an open set. A function "f" : "U" → "Y" is said to be Gâteaux holomorphic if, for every "a" ∈ "U" and "b" ∈ "X", and every continuous linear functional φ : "Y" → C, the function
formula_4
is a holomorphic function of "z" in a neighborhood of the origin. The collection of Gâteaux holomorphic functions is denoted by HG("U","Y").
In the analysis of Gateaux holomorphic functions, any properties of finite-dimensional holomorphic functions hold on finite-dimensional subspaces of "X". However, as usual in functional analysis, these properties may not piece together uniformly to yield any corresponding properties of these functions on full open sets.
For example, every "f" ∈ HG("U","Y") has Gateaux derivatives of all orders, and admits at each point "x" ∈ "U" a Taylor series expansion
formula_5
Here, formula_6 is the homogeneous polynomial of degree "n" in "y" associated with the multilinear operator "Dnf"("x"). The convergence of this series is not uniform. More precisely, if "V" ⊂ "X" is a "fixed" finite-dimensional subspace, then the series converges uniformly on sufficiently small compact neighborhoods of 0 in "V". However, if the subspace "V" is permitted to vary, then the convergence will in general fail to be uniform with respect to this variation. Note that this is in sharp contrast with the finite dimensional case.
Examples.
If "f" : ("U" ⊂ "X"1) × ("V" ⊂ "X"2) → "Y" is a function which is "separately" Gateaux holomorphic in each of its arguments, then "f" is Gateaux holomorphic on the product space.
Hypoanalyticity.
A function "f" : ("U" ⊂ "X") → "Y" is hypoanalytic if "f" ∈ "H"G("U","Y") and in addition "f" is continuous on relatively compact subsets of "U".
Holomorphy.
A function "f" ∈ HG(U,"Y") is holomorphic if, for every "x" ∈ "U", the Taylor series expansion
formula_5
(which is already guaranteed to exist by Gateaux holomorphy) converges and is continuous for "y" in a neighborhood of 0 ∈ "X". Thus holomorphy combines the notion of weak holomorphy with the convergence of the power series expansion. The collection of holomorphic functions is denoted by H("U","Y").
Locally bounded holomorphy.
A function "f" : ("U" ⊂ "X") → "Y" is said to be locally bounded if each point of "U" has a neighborhood whose image under "f" is bounded in "Y". If, in addition, "f" is Gateaux holomorphic on "U", then "f" is locally bounded holomorphic. In this case, we write "f" ∈ HLB("U","Y").
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f'(z)=\\lim_{\\zeta\\to z} \\frac{f(\\zeta)-f(z)}{\\zeta - z}."
},
{
"math_id": 1,
"text": "\\sum_{1 \\le k \\le n} f(\\gamma(t_k)) ( \\gamma(t_k) - \\gamma(t_{k-1}) )"
},
{
"math_id": 2,
"text": "T\\left(\\int_\\gamma f(z)\\,dz\\right)=\\int_\\gamma (T\\circ f)(z)\\,dz."
},
{
"math_id": 3,
"text": "\\int_\\gamma f(z)\\,dz=0"
},
{
"math_id": 4,
"text": "f_{\\varphi}(z) = \\varphi\\circ f(a+zb)"
},
{
"math_id": 5,
"text": "f(x+y)=\\sum_{n=0}^\\infty \\frac{1}{n!} \\widehat{D}^nf(x)(y)"
},
{
"math_id": 6,
"text": "\\widehat{D}^nf(x)(y)"
}
]
| https://en.wikipedia.org/wiki?curid=1339097 |
13392068 | Tennenbaum's theorem | Tennenbaum's theorem, named for Stanley Tennenbaum who presented the theorem in 1959, is a result in mathematical logic that states that no countable nonstandard model of first-order Peano arithmetic (PA) can be recursive (Kaye 1991:153ff).
Recursive structures for PA.
A structure formula_0 in the language of PA is recursive if there are recursive functions formula_1 and formula_2 from formula_3 to formula_4, a recursive two-place relation <"M" on formula_4, and distinguished constants formula_5 such that
formula_6
where formula_7 indicates isomorphism and formula_4 is the set of (standard) natural numbers. Because the isomorphism must be a bijection, every recursive model is countable. There are many nonisomorphic countable nonstandard models of PA.
Statement of the theorem.
Tennenbaum's theorem states that no countable nonstandard model of PA is recursive. Moreover, neither the addition nor the multiplication of such a model can be recursive.
Proof sketch.
This sketch follows the argument presented by Kaye (1991). The first step in the proof is to show that, if "M" is any countable nonstandard model of PA, then the standard system of "M" (defined below) contains at least one nonrecursive set "S". The second step is to show that, if either the addition or multiplication operation on "M" were recursive, then this set "S" would be recursive, which is a contradiction.
Through the methods used to code ordered tuples, each element formula_8 can be viewed as a code for a set formula_9 of elements of "M". In particular, if we let formula_10 be the "i"th prime in "M", then formula_11. Each set formula_9 will be bounded in "M", but if "x" is nonstandard then the set formula_9 may contain infinitely many standard natural numbers. The standard system of the model is the collection formula_12. It can be shown that the standard system of any nonstandard model of PA contains a nonrecursive set, either by appealing to the incompleteness theorem or by directly considering a pair of recursively inseparable r.e. sets (Kaye 1991:154). These are disjoint r.e. sets formula_13 so that there is no recursive set formula_14 with formula_15 and formula_16.
For the latter construction, begin with a pair of recursively inseparable r.e. sets "A" and "B". For each natural number "x" there is a "y" such that, for all "i < x", if formula_17 then formula_18 and if formula_19 then formula_20. By the overspill property, this means that there is some nonstandard "x" in "M" for which there is a (necessarily nonstandard) "y" in "M" so that, for every formula_21 with formula_22, we have
formula_23
Let formula_24 be the corresponding set in the standard system of "M". Because "A" and "B" are r.e., one can show that formula_25 and formula_26. Hence "S" is a separating set for "A" and "B", and by the choice of "A" and "B" this means "S" is nonrecursive.
Now, to prove Tennenbaum's theorem, begin with a nonstandard countable model "M" and an element "a" in "M" so that formula_27 is nonrecursive. The proof method shows that, because of the way the standard system is defined, it is possible to compute the characteristic function of the set "S" using the addition function formula_1 of "M" as an oracle. In particular, if formula_28 is the element of "M" corresponding to 0, and formula_29 is the element of "M" corresponding to 1, then for each formula_30 we can compute formula_31 ("i" times). To decide if a number "n" is in "S", first compute "p", the "n"th prime in formula_4. Then, search for an element "y" of "M" so that
formula_32
for some formula_33. This search will halt because the Euclidean algorithm can be applied to any model of PA. Finally, we have formula_34 if and only if the "i" found in the search was 0. Because "S" is not recursive, this means that the addition operation on "M" is nonrecursive.
A similar argument shows that it is possible to compute the characteristic function of "S" using the multiplication of "M" as an oracle, so the multiplication operation on "M" is also nonrecursive (Kaye 1991:154).
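To make the decision procedure concrete, the following sketch (an illustration only, not part of the original argument) simulates it over the standard model, where the addition oracle is ordinary addition on the natural numbers: membership of "n" in the coded set is decided by repeatedly adding a candidate "y" to itself "p" times and testing the possible remainders "i", exactly as described above. In a genuinely nonstandard model the same steps would have to be carried out with the model's addition function, and the theorem shows that this function cannot be recursive.
<syntaxhighlight lang="python">
def nth_prime(n):
    """Return the n-th prime (n = 0 gives 2); simple trial division for the demo."""
    count, candidate = 0, 1
    while True:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
            if count == n + 1:
                return candidate

def in_coded_set(a, n, add):
    """Decide whether n lies in S_a using only the addition oracle `add` and equality tests.

    Searches for y and i < p with  a = y (+) y (+) ... (+) y  (p times) (+) i,
    where p is the n-th prime; then n is in S_a iff the remainder i found is 0,
    i.e. iff p divides a.
    """
    p = nth_prime(n)
    y = 0
    while True:
        total = 0
        for _ in range(p):              # p-fold sum of y, using only the oracle
            total = add(total, y)
        for i in range(p):              # try the p possible remainders (here i stands for n_i)
            if add(total, i) == a:
                return i == 0
        y += 1

# Example: a = 2 * 5 * 11 codes the set {0, 2, 4} (the 0th, 2nd and 4th primes divide a).
a = 2 * 5 * 11
print([n for n in range(6) if in_coded_set(a, n, add=lambda u, v: u + v)])   # [0, 2, 4]
</syntaxhighlight>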
Turing degrees of models of PA.
Jockusch and Soare have shown that there exists a model of PA with low degree.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "\\oplus"
},
{
"math_id": 2,
"text": "\\otimes"
},
{
"math_id": 3,
"text": " \\mathbb{N} \\times \\mathbb{N}"
},
{
"math_id": 4,
"text": "\\mathbb{N}"
},
{
"math_id": 5,
"text": "n_0,n_1"
},
{
"math_id": 6,
"text": "\n(\\mathbb{N}, \\oplus,\\otimes,<_M,n_{0},n_{1}) \\cong M,\n"
},
{
"math_id": 7,
"text": "\\cong"
},
{
"math_id": 8,
"text": "x \\in M"
},
{
"math_id": 9,
"text": "S_x"
},
{
"math_id": 10,
"text": "p_i"
},
{
"math_id": 11,
"text": "z \\in S_x \\leftrightarrow M \\vDash p_z | x"
},
{
"math_id": 12,
"text": "\\{ S_x \\cap \\mathbb{N} : x \\in M\\}"
},
{
"math_id": 13,
"text": "A,B \\subseteq \\mathbb{N}"
},
{
"math_id": 14,
"text": "C \\subseteq \\mathbb{N}"
},
{
"math_id": 15,
"text": "A \\subseteq C"
},
{
"math_id": 16,
"text": "B \\cap C = \\emptyset"
},
{
"math_id": 17,
"text": "i \\in A"
},
{
"math_id": 18,
"text": "p_i | y"
},
{
"math_id": 19,
"text": "i \\in B"
},
{
"math_id": 20,
"text": "p_i \\nmid y"
},
{
"math_id": 21,
"text": "m \\in M"
},
{
"math_id": 22,
"text": "m <_M x"
},
{
"math_id": 23,
"text": "M \\vDash (m \\in A\\to p_m |y)\\land(m\\in B \\to p_m \\nmid y)"
},
{
"math_id": 24,
"text": "S = \\mathbb{N} \\cap S_y"
},
{
"math_id": 25,
"text": "A \\subseteq S"
},
{
"math_id": 26,
"text": "B \\cap S = \\emptyset"
},
{
"math_id": 27,
"text": "S = \\mathbb{N} \\cap S_a"
},
{
"math_id": 28,
"text": "n_0"
},
{
"math_id": 29,
"text": "n_1"
},
{
"math_id": 30,
"text": "i \\in \\mathbb{N}"
},
{
"math_id": 31,
"text": "n_i = n_1 \\oplus \\cdots \\oplus n_1"
},
{
"math_id": 32,
"text": "a = \\underbrace{y \\oplus y \\oplus \\cdots \\oplus y}_{p \\text{ times}} \\oplus n_i"
},
{
"math_id": 33,
"text": "i < p"
},
{
"math_id": 34,
"text": "n \\in S"
}
]
| https://en.wikipedia.org/wiki?curid=13392068 |
13395792 | Albrecht Beutelspacher | German mathematician
Albrecht Beutelspacher (born 5 June 1950) is a German mathematician and founder of the Mathematikum. He is a professor emeritus at the University of Giessen, where he held the chair for geometry and discrete mathematics from 1988 to 2018.
Biography.
From 1969 to 1973, Beutelspacher studied mathematics, physics and philosophy at the University of Tübingen, and he received his PhD in 1976 from the University of Mainz. His PhD advisor was Judita Cofman. From 1982 to 1985 he was an associate professor at the University of Mainz, and from 1985 to 1988 he worked at a research department of Siemens. From 1988 to 2018 he was a tenured professor of geometry and discrete mathematics at the University of Giessen. He became a well-known popularizer of mathematics in Germany by writing several books on popular science and recreational mathematics and by founding Germany's first mathematics museum, the Mathematikum. He received several awards for his contributions to popularizing mathematics. He wrote a mathematics column in the German popular science magazine Bild der Wissenschaft and presented a popular mathematics series for the TV channel BR-formula_0 (educational TV).
{
"math_id": 0,
"text": " \\alpha "
}
]
| https://en.wikipedia.org/wiki?curid=13395792 |
1339598 | Banach measure | In the mathematical discipline of measure theory, a Banach measure is a certain way to assign a size (or area) to all subsets of the Euclidean plane, consistent with but extending the commonly used Lebesgue measure. While there are certain subsets of the plane which are not Lebesgue measurable, all subsets of the plane have a Banach measure. On the other hand, the Lebesgue measure is countably additive while a Banach measure is only finitely additive (and is therefore known as a "content").
Stefan Banach proved the existence of Banach measures in 1923. This established in particular that paradoxical decompositions as provided by the Banach-Tarski paradox in Euclidean space R3 cannot exist in the Euclidean plane R2.
Definition.
A Banach measure on R"n" is a function formula_0 (assigning a non-negative extended real number to each subset of R"n") such that "μ" is finitely additive, meaning that formula_1 for any two disjoint sets formula_2; "μ" extends the Lebesgue measure "λ", meaning that formula_3 for every Lebesgue-measurable set formula_4; and "μ" is invariant under isometries, meaning that formula_5 for every formula_4 and every isometry formula_6.
Properties.
The finite additivity of "μ" implies that formula_7 and formula_8 for any pairwise disjoint sets formula_9. We also have formula_10 whenever formula_11.
Since "μ" extends Lebesgue measure, we know that formula_12 whenever "A" is a finite or a countable set and that formula_13 for any product of intervals formula_14.
Since "μ" is invariant under isometries, it is in particular invariant under rotations and translations.
Results.
Stefan Banach showed that Banach measures exist on R1 and on R2. These results can be derived from the fact that the groups of isometries of R1 and of R2 are solvable.
The existence of these measures proves the impossibility of a Banach–Tarski paradox in one or two dimensions: it is not possible to decompose a one- or two-dimensional set of finite Lebesgue measure into finitely many sets that can be reassembled into a set with a different Lebesgue measure, because this would violate the properties of the Banach measure that extends the Lebesgue measure.
Conversely, the existence of the Banach-Tarski paradox in all dimensions "n ≥ 3" shows that no Banach measure can exist in these dimensions.
As Vitali's paradox shows, Banach measures cannot be strengthened to countably additive ones: there exist subsets of R"n" that are not Lebesgue measurable, for all "n ≥ 1".
Most of these results depend on some form of the axiom of choice. Using only the axioms of Zermelo-Fraenkel set theory without the axiom of choice, it is not possible to derive the Banach-Tarski paradox, nor is it possible to prove the existence of sets that are not Lebesgue-measurable (the latter claim depends on a fairly weak and widely believed assumption, namely that the existence of inaccessible cardinals is consistent). The existence of Banach measures on R1 and on R2 also cannot be proven in the absence of the axiom of choice. In particular, no concrete formula for these Banach measures can be given.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu: {\\mathcal P}(\\R^n)\\to [0,\\infty]"
},
{
"math_id": 1,
"text": "\\mu(A \\cup B) = \\mu(A)+ \\mu(B) "
},
{
"math_id": 2,
"text": "A,B\\subseteq \\R^n"
},
{
"math_id": 3,
"text": "\\mu(A)=\\lambda(A) "
},
{
"math_id": 4,
"text": "A\\subseteq \\R^n"
},
{
"math_id": 5,
"text": "\\mu(A)=\\mu(f(A)) "
},
{
"math_id": 6,
"text": "f : \\R^n\\to\\R^n"
},
{
"math_id": 7,
"text": "\\mu(\\varnothing) = 0 "
},
{
"math_id": 8,
"text": "\\mu(A_1 \\cup \\cdots \\cup A_k) = \\sum_{i=1}^k\\mu(A_i) "
},
{
"math_id": 9,
"text": "A_1,\\ldots,A_k\\subseteq \\R^n"
},
{
"math_id": 10,
"text": "\\mu(A)\\leq\\mu(B) "
},
{
"math_id": 11,
"text": "A\\subseteq B\\subseteq \\R^n"
},
{
"math_id": 12,
"text": "\\mu(A)=0 "
},
{
"math_id": 13,
"text": "\\mu([a_1,b_1]\\times \\cdots \\times [a_n,b_n]) =(b_1-a_1)\\cdots(b_n-a_n) "
},
{
"math_id": 14,
"text": "[a_1,b_1]\\times \\cdots \\times [a_1,b_1]\\subseteq \\R^n"
}
]
| https://en.wikipedia.org/wiki?curid=1339598 |
1339615 | Stopping power | Ability of a firearm or other weapon to cause immediate incapacitation
Stopping power is the ability of a weapon – typically a ranged weapon such as a firearm – to cause a target (human or animal) to be incapacitated or immobilized. Stopping power contrasts with lethality in that it pertains only to a weapon's ability to make the target cease action, regardless of whether or not death ultimately occurs. Which ammunition cartridges have the greatest stopping power is a much-debated topic.
Stopping power is related to the physical properties and terminal behavior of the projectile (bullet, shot, or slug), the biology of the target, and the wound location, but the issue is complicated and not easily studied. Although higher-caliber ammunition usually has greater muzzle energy and momentum, and has thus traditionally been widely associated with higher stopping power, the physics involved are multifactorial, with caliber, muzzle velocity, bullet mass, bullet shape and bullet material all contributing to the ballistics.
Despite much disagreement, the most popular theory of stopping power is that it is usually caused not by the force of the bullet but by the wounding effects of the bullet, which are typically a rapid loss of blood causing a circulatory failure, which leads to impaired motor function and/or unconsciousness. The "Big Hole School" and the principles of penetration and permanent tissue damage are in line with this way of thinking. The other prevailing theories focus more on the energy of the bullet and its effects on the nervous system, including hydrostatic shock and energy transfer, which is similar to kinetic energy deposit.
History.
The concept of stopping power appeared at the tail end of the 19th century, when colonial troops (including American troops in the Philippines during the Moro Rebellion, and British soldiers during the New Zealand Wars) at close quarters found that their pistols were not able to stop charging native tribesmen. This led to the introduction or reintroduction of larger caliber weapons (such as the older .45 Colt and the newly developed .45 ACP) capable of stopping opponents with a single round.
During the Seymour Expedition in China, at one of the battles at Langfang, Chinese Boxers, armed with swords and spears, conducted a massed infantry charge against the forces of the Eight-Nation Alliance, who were equipped with rifles. At point-blank range, a British soldier had to fire four .303 Lee-Metford bullets into a Boxer before he stopped charging. U.S. Navy officer Bowman McCalla reported that single rifle shots were not enough: multiple rifle shots were needed to halt a Boxer. Only machine guns were effective in immediately stopping the Boxers.
In the Moro Rebellion, Moro Muslim Juramentados in suicide attacks continued to charge against American soldiers even after being shot. Panglima Hassan in the Hassan uprising had to be shot dozens of times before he died. This forced the Americans to phase out .38 Long Colt revolvers and start using .45 Colt against the Moros.
British troops used expanding bullets during various conflicts in the Northwest Frontier in India, and the Mahdist War in Sudan. The British government voted against a prohibition on their use at the Hague Convention of 1899, although the prohibition only applied to international warfare.
In response to addressing stopping power issues, the Mozambique Drill was developed to maximize the likelihood of a target's quick incapacitation.
"Manstopper" is an informal term used to refer to any combination of firearm and ammunition that can reliably incapacitate, or "stop", a human target immediately. For example, the .45 ACP round and the .357 Magnum round both have firm reputations as "manstoppers". Historically, one type of ammunition has had the specific tradename "Manstopper". Officially known as the Mk III cartridge, these were made to suit the British Webley .455 service revolver in the early 20th century. The ammunition used a cylindrical bullet with hemispherical depressions at both ends. The front acted as a hollow point deforming on impact while the base opened to seal the round in the barrel. It was introduced in 1898 for use against "savage foes", but fell quickly from favor due to concerns of breaching the Hague Convention's international laws on military ammunition, and was replaced in 1900 by re-issued Mk II pointed-bullet ammunition.
Some sporting arms are also referred to as "stoppers" or "stopping rifles". These powerful arms are often used by game hunters (or their guides) for stopping a suddenly charging animal, like a buffalo or an elephant.
Dynamics of bullets.
A bullet will destroy or damage any tissues which it penetrates, creating a wound channel. It will also cause nearby tissue to stretch and expand as it passes through tissue. These two effects are typically referred to as "permanent cavity" (the track left by the bullet as it penetrates flesh) and "temporary cavity," which, as the name implies, is the temporary (instantaneous) displacement caused as the bullet travels through flesh, and is many times larger than the actual diameter of the bullet. These phenomena are unrelated to low-pressure cavitation in liquids.
The degree to which permanent and temporary cavitation occur is dependent on the mass, diameter, material, design and velocity of the bullet. This is because bullets "crush" tissue, and do not cut it. A bullet constructed with a half diameter ogive designed meplat and hard, solid copper alloy material may crush only the tissue directly in front of the bullet. This type of bullet (monolithic-solid rifle bullet) is conducive to causing more temporary cavitation as the tissue flows around the bullet, resulting in a deep and narrow wound channel. A bullet constructed with a two diameter, hollow point ogive designed meplat and low-antimony lead-alloy core with a thin gilding metal jacket material will crush tissue in front and to the sides as the bullet expands. Due to the energy expended in bullet expansion, velocity is lost more quickly. This type of bullet (hollow-point hand gun bullet) is conducive to causing more permanent cavitation as the tissue is crushed and accelerated into other tissues by the bullet, causing a shorter and wider wound channel. The exception to this general rule is non-expanding bullets which are long relative to their diameter. These tend to destabilize and yaw (tumble) soon after impact, increasing both temporary and permanent cavitation.
Bullets are constructed to behave in different ways, depending on the intended target. Different bullets are constructed variously to: not expand upon impact, expand upon impact at high velocity, expand upon impact, expand across a broad range of velocities, expand upon impact at low velocity, tumble upon impact, fragment upon impact, or disintegrate upon impact.
To control the expansion of a bullet, meplat design and materials are engineered. The meplat designs are: flat; round to pointed depending on the ogive; hollow pointed which can be large in diameter and shallow or narrow in diameter and deep and truncated which is a long narrow punched hole in the end of a monolithic-solid type bullet. The materials used to make bullets are: pure lead; alloyed lead for hardness; gilding metal jacket which is a copper alloy of nickel and zinc to promote higher velocities; pure copper; copper alloy of bronze with tungsten steel alloy inserts to promote weight.
Some bullets are constructed by bonding the lead core to the jacket to promote higher weight retention upon impact, causing a larger and deeper wound channel. Some bullets have a web in the center of the bullet to limit the expansion of the bullet while promoting penetration. Some bullets have dual cores to promote penetration.
Bullets that might be considered to have stopping power for dangerous large game animals are usually 11.63 mm (.458 caliber) and larger, including 12-gauge shotgun slugs. These bullets are monolithic-solids; full metal jacketed and tungsten steel insert. They are constructed to hold up during close range, high velocity impacts. These bullets are expected to impact and penetrate, and transfer energy to the surrounding tissues and vital organs through the entire length of a game animal's body if need be.
The stopping power of firearms when used against humans is a more complex subject, in part because many persons voluntarily cease hostile actions when shot; they either flee, surrender, or fall immediately. This is sometimes referred to as "psychological incapacitation".
Physical incapacitation is primarily a matter of shot location; most persons who are shot in the head are immediately incapacitated, and most who are shot in the extremities are not, regardless of the firearm or ammunition involved. Shotguns will usually incapacitate with one shot to the torso, but rifles and especially handguns are less reliable, particularly those which do not meet the FBI's penetration standard, such as .25ACP, .32 S&W, and rimfire models. More powerful handguns may or may not meet the standard, or may even overpenetrate, depending on what ammunition is used.
Fully jacketed bullets penetrate deeply without much expansion, while soft or hollow point bullets create a wider, shallower wound channel. Pre-fragmented bullets such as Glaser Safety Slugs and MagSafe ammunition are designed to fragment into birdshot on impact with the target. This fragmentation is intended to create more trauma to the target, and also to reduce collateral damage caused from ricocheting or overpenetrating of the target and the surrounding environments such as walls. Fragmenting rounds have been shown to be unlikely to obtain deep penetration necessary to disrupt vital organs located at the back of a hostile human.
Wounding effects.
Physical.
Permanent and temporary cavitation cause very different biological effects. A hole through the heart will cause loss of pumping efficiency, loss of blood, and eventual cardiac arrest. A hole through the liver or lung will be similar, with the lung shot having the added effect of reducing blood oxygenation; these effects however are generally slower to arise than damage to the heart. A hole through the brain can cause instant unconsciousness and will likely kill the recipient. A hole through the spinal cord will instantly interrupt the nerve signals to and from some or all extremities, disabling the target and in many cases also resulting in death (as the nerve signals to and from the heart and lungs are interrupted by a shot high in the chest or to the neck). By contrast, a hole through an arm or leg which hits only muscle will cause a great deal of pain but is unlikely to be fatal, unless one of the large blood vessels (femoral or brachial arteries, for example) is also severed in the process.
The effects of temporary cavitation are less well understood, due to a lack of a test material identical to living tissue. Studies on the effects of bullets typically are based on experiments using ballistic gelatin, in which temporary cavitation causes radial tears where the gelatin was stretched. Although such tears are visually engaging, some animal tissues (but not bone or liver) are more elastic than gelatin. In most cases, temporary cavitation is unlikely to cause anything more than a bruise. Some speculation states that nerve bundles can be damaged by temporary cavitation, creating a stun effect, but this has not been confirmed.
One exception to this is when a very powerful temporary cavity intersects with the spine. In this case, the resulting blunt trauma can slam the vertebrae together hard enough to either sever the spinal cord, or damage it enough to knock out, stun, or paralyze the target. For instance, in the shootout between eight FBI agents and two bank robbers in the 1986 FBI Miami shootout, Special Agent Gordon McNeill was struck in the neck by a high-velocity .223 bullet fired by Michael Platt. While the bullet did not directly contact the spine, and the wound incurred was not ultimately fatal, the temporary cavitation was sufficient to render SA McNeill paralyzed for several hours. Temporary cavitation may similarly fracture the femur if it is narrowly missed by a bullet.
Temporary cavitation can also cause the tearing of tissues if a very large amount of force is involved. The tensile strength of muscle ranges roughly from 1 to 4 MPa (145 to 580 lbf/in2), and minimal damage will result if the pressure exerted by the temporary cavitation is below this. Gelatin and other less elastic media have much lower tensile strengths, thus they exhibit more damage after being struck with the same amount of force. At typical handgun velocities, bullets will create temporary cavities with much less than 1 MPa of pressure, and thus are incapable of causing damage to elastic tissues that they do not directly contact.
Rifle bullets that strike a major bone (such as a femur) can expend their entire energy into the surrounding tissue. The struck bone is commonly shattered at the point of impact.
High velocity fragmentation can also increase the effect of temporary cavitation. The fragments sheared from the bullet cause many small permanent cavities around the main entry point. The main mass of the bullet can then cause a truly massive amount of tearing as the perforated tissue is stretched.
Whether a person or animal will be incapacitated (i.e. "stopped") when shot, depends on a large number of factors, including physical, physiological, and psychological effects.
Neurological.
The only way to immediately incapacitate a person or animal is to damage or disrupt their central nervous system (CNS) to the point of paralysis, unconsciousness, or death. Bullets can achieve this directly or indirectly. If a bullet causes sufficient damage to the brain or spinal cord, immediate loss of consciousness or paralysis, respectively, can result. However, these targets are relatively small and mobile, making them extremely difficult to hit even under optimal circumstances.
Bullets can indirectly disrupt the CNS by damaging the cardiovascular system so that it can no longer provide enough oxygen to the brain to sustain consciousness. This can be the result of bleeding from a perforation of a large blood vessel or blood-bearing organ, or the result of damage to the lungs or airway. If blood flow is completely cut off from the brain, a human still has enough oxygenated blood in their brain for 10–15 seconds of wilful action, though with rapidly decreasing effectiveness as the victim begins to lose consciousness.
Unless a bullet directly damages or disrupts the central nervous system, a person or animal will not be instantly and completely incapacitated by physiological damage. However, bullets can cause other disabling injuries that prevent specific actions (a person shot in the femur cannot run) and the physiological pain response from severe injuries will temporarily disable most individuals.
Several scientific papers reveal ballistic pressure wave effects on wounding and incapacitation, including central nervous system injuries from hits to the thorax and extremities. These papers document remote wounding effects for both rifle and pistol levels of energy transfer.
Recent work by Courtney and Courtney provides compelling support for the role of a ballistic pressure wave in creating remote neural effects leading to incapacitation and injury. This work builds upon the earlier works of Suneson et al. where the researchers implanted high-speed pressure transducers into the brain of pigs and demonstrated that a significant pressure wave reaches the brain of pigs shot in the thigh. These scientists observed neural damage in the brain caused by the distant effects of the ballistic pressure wave originating in the thigh. The results of Suneson et al. were confirmed and expanded upon by a later experiment in dogs which "confirmed that distant effect exists in the central nervous system after a high-energy missile impact to an extremity. A high-frequency oscillating pressure wave with large amplitude and short duration was found in the brain after the extremity impact of a high-energy missile ..." Wang et al. observed significant damage in both the hypothalamus and hippocampus regions of the brain due to remote effects of the ballistic pressure wave.
Psychological.
Emotional shock, terror, or surprise can cause a person to faint, surrender, or flee when shot or shot at. There are many documented instances where people have instantly dropped unconscious when the bullet only hit an extremity, or even completely missed. Additionally, the muzzle blast and flash from many firearms are substantial and can cause disorientation, dazzling, and stunning effects. Flashbangs (stun grenades) and other less-lethal "distraction devices" rely exclusively on these effects.
Pain is another psychological factor, and can be enough to dissuade a person from continuing their actions.
Temporary cavitation can emphasize the impact of a bullet, since the resulting tissue compression is identical to simple blunt force trauma. It is easier for someone to feel when they have been shot if there is considerable temporary cavitation, and this can contribute to either psychological factor of incapacitation.
However, if a person is sufficiently enraged, determined, or intoxicated, they can simply shrug off the psychological effects of being shot. During the colonial era, when native tribesmen came into contact with firearms for the first time, there was no psychological conditioning that being shot could be fatal, and most colonial powers eventually sought to create more effective manstoppers.
Therefore, such effects are not as reliable as physiological effects at stopping people. Animals will not faint or surrender if injured, though they may become frightened by the loud noise and pain of being shot, so psychological mechanisms are generally less effective against non-humans.
Penetration.
According to Dr. Martin Fackler and the International Wound Ballistics Association (IWBA), between of penetration in calibrated tissue simulant is optimal performance for a bullet which is meant to be used defensively, against a human adversary. They also believe that penetration is one of the most important factors when choosing a bullet (and that the number one factor is shot placement). If the bullet penetrates less than their guidelines, it is inadequate, and if it penetrates more, it is still satisfactory though not optimal. The FBI's penetration requirement is very similar at .
A penetration depth of may seem excessive, but a bullet sheds velocity—and crushes a narrower hole—as it penetrates deeper, so the bullet might be crushing a very small amount of tissue (simulating an "ice pick" injury) during its last two or three inches of travel, giving only between of effective wide-area penetration. Also, skin is elastic and tough enough to cause a bullet to be retained in the body, even if the bullet had a relatively high velocity when it hit the skin. About velocity is required for an expanded hollow point bullet to puncture skin 50% of the time.
The IWBA's and FBI's penetration guidelines are to ensure that the bullet can reach a vital structure from most angles, while retaining enough velocity to generate a large diameter hole through tissue. An extreme example where penetration would be important is if the bullet first had to enter and then exit an outstretched arm before impacting the torso. A bullet with low penetration might embed itself in the arm whereas a higher penetrating bullet would penetrate the arm then enter the thorax where it would have a chance of hitting a vital organ.
Overpenetration.
Excessive penetration or "overpenetration" occurs when a bullet passes through its intended target and out of the other side, with enough residual kinetic energy to continue flying as a stray projectile and risk causing unintended collateral damage to objects or persons beyond. This happens because the bullet has not released all its energy within the target, according to the energy transfer hypothesis.
Other hypotheses.
These hypotheses are a matter of some debate among scientists in the field:
Energy transfer.
The energy transfer hypothesis states that for small arms in general, the more energy transferred to the target, the greater the stopping power. It postulates that the pressure wave exerted on soft tissues by the bullet's temporary cavity hits the nervous system with a jolt of shock and pain and thereby forces incapacitation.
Proponents of this theory contend that the incapacitation effect is similar to that seen in non-concussive blunt-force trauma events, such as a knock-out punch to the body, a football player "shaken up" as a result of a hard tackle, or a hitter being struck by a fastball. Pain in general has an inhibitory and weakening effect on the body, causing a person under physical stress to take a seat or even collapse. The force put on the body by the temporary cavity is supersonic compression, like the lash of a whip. While the lash only affects a short line of tissue across the back of the victim, the temporary cavity affects a volume of tissue roughly the size and shape of a football. Giving further credence to this theory are the observed effects of drugs on incapacitation: pain killers, alcohol, and PCP have all been known to decrease the effects of nociception and increase a person's resistance to incapacitation, all while having no effect on blood loss.
Kinetic energy is a function of the bullet's mass and the square of its velocity. Generally speaking, it is the intention of the shooter to deliver an adequate amount of energy to the target via the projectiles. All else held equal, bullets that are light and fast tend to have more energy than those that are heavy and slow.
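As a rough numerical illustration of this point, the following sketch computes kinetic energy (½mv²) and momentum (mv) for two hypothetical bullets. The masses and velocities are invented round numbers chosen only to contrast a light, fast projectile with a heavy, slow one; they are not data for any real cartridge.
<syntaxhighlight lang="python">
def kinetic_energy_joules(mass_kg, velocity_mps):
    return 0.5 * mass_kg * velocity_mps ** 2

def momentum_kg_mps(mass_kg, velocity_mps):
    return mass_kg * velocity_mps

# Hypothetical example projectiles (illustrative values only, not load data).
light_fast = {"mass_kg": 0.008, "velocity_mps": 900.0}   # 8 g at 900 m/s
heavy_slow = {"mass_kg": 0.015, "velocity_mps": 260.0}   # 15 g at 260 m/s

for name, b in (("light/fast", light_fast), ("heavy/slow", heavy_slow)):
    ke = kinetic_energy_joules(**b)
    p = momentum_kg_mps(**b)
    print(f"{name}: kinetic energy = {ke:.0f} J, momentum = {p:.1f} kg*m/s")

# The light, fast bullet carries far more energy (3240 J vs. about 507 J),
# while its momentum advantage is much smaller (7.2 vs. 3.9 kg*m/s),
# because energy grows with the square of velocity but momentum only linearly.
</syntaxhighlight>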
Over-penetration is detrimental to stopping power in regards to energy. This is because a bullet that passes through the target does not transfer all of its energy to the target. Lighter bullets tend to have less penetration in soft tissue and therefore are less likely to over-penetrate. Expanding bullets and other tip variations can increase the friction of the bullet through soft tissue, and/or allow internal ricochets off bone, therefore helping prevent over-penetration.
Non-penetrating projectiles can also possess stopping power and give support to the energy transfer hypothesis. Notable examples of projectiles designed to deliver stopping power without target penetration are Flexible baton rounds (commonly known as "beanbag bullets") and the rubber bullet, types of reduced-lethality ammunition.
The force exerted by a projectile upon tissue is equal to the bullet's local rate of kinetic energy loss with distance, formula_0 (the first derivative of the bullet's kinetic energy with respect to position). The ballistic pressure wave is proportional to this retarding force (Courtney and Courtney), and this retarding force is also the origin of both temporary cavitation and prompt damage (CE Peters).
Hydrostatic shock.
Hydrostatic shock is a controversial theory of terminal ballistics that states a penetrating projectile (such as a bullet) can produce a sonic pressure wave that causes "remote neural damage", "subtle damage in neural tissues" and/or "rapid incapacitating effects" in living targets. Proponents of the theory contend that damage to the brain from hydrostatic shock from a shot to the chest occurs in humans with most rifle cartridges and some higher-velocity handgun cartridges. Hydrostatic shock is not the shock from the temporary cavity itself, but rather the sonic pressure wave that radiates away from its edges through static soft tissue.
Knockback.
The idea of "knockback" implies that a bullet can have enough force to stop the forward motion of an attacker and physically knock them backwards or downwards. It follows from the law of conservation of momentum that no "knockback" could ever exceed the recoil felt by the shooter, and therefore has no use as a weapon. The myth of "knockback" has been spread through its confusion with the phrase "stopping power" as well as by many films, which show bodies flying backward after being shot.
The idea of knockback was first widely expounded in ballistics discussions during American involvement in Philippine insurrections and, simultaneously, in British conflicts in its colonial empire, when front-line reports stated that the .38 Long Colt caliber revolvers carried by U.S. and British soldiers were incapable of bringing down a charging warrior. Thus, in the early 1900s, the U.S. reverted to the .45 Colt in single action revolvers, and later adopted the .45 ACP cartridge in what was to become the M1911A1 pistol, and the British adopted the .455 Webley caliber cartridge in the Webley Revolver. The larger cartridges were chosen largely due to the Big Hole Theory (a larger hole does more damage), but the common interpretation was that these were changes from a light, deeply penetrating bullet to a larger, heavier "manstopper" bullet.
Though popularized in television and movies, and commonly referred to as "true stopping power" by uneducated proponents of large powerful calibers such as .44 Magnum, the effect of knockback from a handgun and indeed most personal weapons is largely a myth. The momentum of the so-called "manstopper" .45 ACP bullet is roughly comparable to that of a thrown baseball. Such a force is simply incapable of arresting a running target's forward momentum. In addition, bullets are designed to penetrate instead of strike a blunt force blow, because, in penetrating, more severe tissue damage is done. A bullet with sufficient energy to knock down an assailant, such as a high-speed rifle bullet, would be more likely to instead pass straight through, while not transferring the full energy (in fact only a very small percentage of the full energy) of the bullet to the victim. Most energy from a fully stopped rifle round instead goes into formation of the temporary cavity and the destruction of the round, the wound channel, and some of the surrounding tissues. There is no physical principle preventing a hypervelocity round from causing a splash injury in which the ejecta create rocket-like impulse on their way out to cause knockback, and indeed, no principle preventing a similar effect for exit wounds causing "knockforward", but this is still generally not anywhere near the impulse required to stop the motion of a sprinting person or knock them over from pure momentum.
Sometimes "knockdown power" is a phrase used interchangeably with "knockback", while other times it's used interchangeably with "stopping power". The misuse and fluid meaning of these phrases have done their part in confusing the issue of stopping power. The ability of a bullet to "knock down" a metal or otherwise inanimate target falls under the category of momentum, as explained above, and has little correlation with stopping power.
One-shot stop.
This hypothesis, promoted by Evan P. Marshall, is based on statistical analysis of actual shooting incidents from various reporting sources (typically police agencies). It is intended to be used as a unit of measurement and not as a tactical philosophy, as mistakenly believed by some. It considers the history of shooting incidents for a given factory ammunition load and compiles the percentage of "one-shot-stops" achieved with each specific ammunition load. That percentage is then intended to be used with other information to help predict the effectiveness of that load getting a "one-shot-stop". For example, if an ammunition load is used in 10 torso shootings, incapacitating all but two with one shot, the "one-shot-stop" percentage for the total sample would be 80%.
Some argue that this hypothesis ignores any inherent selection bias. For example, high-velocity 9×19mm Parabellum hollow point rounds appear to have the highest percentage of one-shot stops. Rather than identifying this as an inherent property of the firearm/bullet combination, the situations where these have occurred need to be considered. The 9mm has been the predominantly used caliber of many police departments, so many of these one-shot-stops were probably made by well-trained police officers, where accurate placement would be a contributory factor. However, Marshall's database of "one-shot-stops" does include shootings from law enforcement agencies, private citizens, and criminals alike.
Critics of this theory point out that bullet placement is a very significant factor, but is only generally used in such one-shot-stop calculations, covering shots to the torso. Others contend that the importance of "one-shot stop" statistics is overstated, pointing out that most gun encounters do not involve a "shoot once and see how the target reacts" situation. Proponents contend that studying one-shot situations is the best way to compare cartridges as comparing a person shot once to a person shot twice does not maintain a control and has no value.
Big hole school.
This school of thought says that the bigger the hole in the target, the higher the rate of bleed-out and thus the higher the rate of the aforementioned "one-shot stop". According to this theory, as the bullet does not pass entirely through the body, it incorporates the energy transfer and the overpenetration ideals. Those that support this theory cite the .40 S&W round, arguing that it has a better ballistic profile than the .45 ACP, and more stopping power than a 9mm.
The theory centers on the "permanent cavitation" element of a handgun wound. A big hole damages more tissue. It is therefore valid to a point, but penetration is also important, as a large bullet that does not penetrate will be less likely to strike vital blood vessels and blood-carrying organs such as the heart and liver, while a smaller bullet that penetrates deep enough to strike these organs or vessels will cause faster bleed-out through a smaller hole. The ideal may therefore be a combination: a large bullet that penetrates deeply, which can be achieved with a larger, slower non-expanding bullet, or a smaller, faster expanding bullet such as a hollow point.
In the extreme, a heavier bullet (which preserves momentum greater than a lighter bullet of the same caliber) may "overpenetrate", passing completely through the target without expending all of its kinetic energy. So-called "overpenetration" is not an important consideration when it comes to wounding incapacitation or "stopping power" because: (a) while a lower "proportion" of the bullet's energy is transferred to the target, a higher "absolute amount" of energy is shed than in partial penetration, and (b) overpenetration creates an exit wound.
Other contributing factors.
As mentioned earlier, there are many factors, such as drug and alcohol levels within the body, body mass index, mental illness, motivation levels, and gunshot location on the body which may determine which round will kill or at least catastrophically affect a target during any given situation.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{d}E_k/\\mathrm{d}x"
}
]
| https://en.wikipedia.org/wiki?curid=1339615 |
1339640 | Oversampling | Sampling higher than the Nyquist rate
In signal processing, oversampling is the process of sampling a signal at a sampling frequency significantly higher than the Nyquist rate. Theoretically, a bandwidth-limited signal can be perfectly reconstructed if sampled at the Nyquist rate or above it. The Nyquist rate is defined as twice the bandwidth of the signal. Oversampling is capable of improving resolution and signal-to-noise ratio, and can be helpful in avoiding aliasing and phase distortion by relaxing anti-aliasing filter performance requirements.
A signal is said to be oversampled by a factor of "N" if it is sampled at "N" times the Nyquist rate.
Motivation.
There are three main reasons for performing oversampling: to improve anti-aliasing performance, to increase resolution and to reduce noise.
Anti-aliasing.
Oversampling can make it easier to realize analog anti-aliasing filters. Without oversampling, it is very difficult to implement filters with the sharp cutoff necessary to maximize use of the available bandwidth without exceeding the Nyquist limit. By increasing the bandwidth of the sampling system, design constraints for the anti-aliasing filter may be relaxed. Once sampled, the signal can be digitally filtered and downsampled to the desired sampling frequency. In modern integrated circuit technology, the digital filter associated with this downsampling is easier to implement than a comparable analog filter required by a non-oversampled system.
Resolution.
In practice, oversampling is implemented in order to reduce cost and improve performance of an analog-to-digital converter (ADC) or digital-to-analog converter (DAC). When oversampling by a factor of N, the dynamic range also increases by a factor of N because there are N times as many possible values for the sum. However, the signal-to-noise ratio (SNR) increases only by a factor of formula_0, because summing up uncorrelated noise increases its amplitude by a factor of formula_0, while summing up a coherent signal increases its average by a factor of N; the ratio of signal to noise therefore improves by formula_0.
For instance, to implement a 24-bit converter, it is sufficient to use a 20-bit converter that can run at 256 times the target sampling rate. Combining 256 consecutive 20-bit samples can increase the SNR by a factor of 16, effectively adding 4 bits to the resolution and producing a single sample with 24-bit resolution.
The number of samples required to get formula_1 bits of additional data precision is
formula_2
To get the mean sample scaled up to an integer with formula_1 additional bits, the sum of formula_3 samples is divided by formula_4:
formula_5
This averaging is only effective if the signal contains sufficient uncorrelated noise to be recorded by the ADC. If not, in the case of a stationary input signal, all formula_3 samples would have the same value and the resulting average would be identical to this value; so in this case, oversampling would have made no improvement. In similar cases where the ADC records no noise and the input signal is changing over time, oversampling improves the result, but to an inconsistent and unpredictable extent.
Adding some dithering noise to the input signal can actually improve the final result because the dither noise allows oversampling to work to improve resolution. In many practical applications, a small increase in noise is well worth a substantial increase in measurement resolution. In practice, the dithering noise can often be placed outside the frequency range of interest to the measurement, so that this noise can be subsequently filtered out in the digital domain—resulting in a final measurement, in the frequency range of interest, with both higher resolution and lower noise.
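The following sketch simulates this effect; it assumes NumPy is available and uses arbitrary example parameters. A slowly varying signal is quantized by an idealized coarse ADC, once without and once with added dither noise, and blocks of formula_3 samples are averaged to gain formula_1 extra bits of effective resolution. With dither, the averaged output tracks the input far more closely than the raw quantizer can.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, step):
    """Idealized uniform ADC: round to the nearest multiple of `step`."""
    return np.round(x / step) * step

n_extra_bits = 4                          # bits of resolution to gain
oversample = (2 ** n_extra_bits) ** 2     # 2^(2n) = 256 samples averaged per output point
step = 1.0                                # coarse quantizer step (1 LSB)

t = np.linspace(0.0, 1.0, 200 * oversample)
signal = 3.0 * np.sin(2.0 * np.pi * 2.0 * t)      # slowly varying input

def decimate(x):
    """Average consecutive blocks of `oversample` samples (the scaled mean)."""
    return x.reshape(-1, oversample).mean(axis=1)

plain = quantize(signal, step)                     # no dither: averaging gains little
noisy = signal + rng.uniform(-step / 2, step / 2, signal.shape)
dithered = quantize(noisy, step)                   # dither spreads values across codes

ideal = decimate(signal)
err_plain = np.abs(decimate(plain) - ideal).max()
err_dithered = np.abs(decimate(dithered) - ideal).max()
print(f"max error without dither: {err_plain:.4f}")
print(f"max error with dither:    {err_dithered:.4f}")   # substantially smaller
</syntaxhighlight>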
Noise.
If multiple samples are taken of the same quantity with uncorrelated noise added to each sample, then, because uncorrelated signals combine more weakly than correlated ones (as discussed above), averaging "N" samples reduces the noise power by a factor of "N". If, for example, we oversample by a factor of 4, the signal-to-noise ratio in terms of power improves by a factor of four, which corresponds to a factor of two improvement in terms of voltage.
Certain kinds of ADCs known as delta-sigma converters produce disproportionately more quantization noise at higher frequencies. By running these converters at some multiple of the target sampling rate, and low-pass filtering the oversampled signal down to half the target sampling rate, a final result with "less" noise (over the entire band of the converter) can be obtained. Delta-sigma converters use a technique called noise shaping to move the quantization noise to the higher frequencies.
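To illustrate noise shaping, the following sketch (assuming NumPy, with arbitrary example parameters) runs an idealized first-order delta-sigma modulator: an integrator followed by a 1-bit quantizer whose output is fed back and subtracted from the input. Inspecting the spectrum of the 1-bit output shows the quantization noise concentrated at high frequencies, where a digital low-pass filter can later remove it; real converters are considerably more elaborate.
<syntaxhighlight lang="python">
import numpy as np

fs, n = 64_000, 1 << 14                      # oversampled rate and number of samples
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 440.0 * t)      # low-frequency test tone, amplitude < 1

# First-order delta-sigma modulator: integrate the error, quantize to +/-1, feed back.
integrator = 0.0
y = np.empty(n)
for i in range(n):
    feedback = y[i - 1] if i > 0 else 0.0
    integrator += x[i] - feedback
    y[i] = 1.0 if integrator >= 0.0 else -1.0

# Spectrum of the 1-bit stream: the tone stays at 440 Hz while the quantization
# noise rises toward high frequencies (the noise is "shaped" upward).
spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

def band_mean(lo, hi):
    sel = (freqs >= lo) & (freqs < hi)
    return spectrum[sel].mean()

print(f"mean noise magnitude 1-2 kHz:   {band_mean(1_000, 2_000):.2f}")
print(f"mean noise magnitude 20-30 kHz: {band_mean(20_000, 30_000):.2f}  # much larger")
</syntaxhighlight>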
Example.
Consider a signal with a bandwidth or highest frequency of "B" = 100 Hz. The sampling theorem states that sampling frequency would have to be greater than 200 Hz. Sampling at four times that rate requires a sampling frequency of 800 Hz. This gives the anti-aliasing filter a transition band of 300 Hz (("f"s/2) − "B" = (800 Hz/2) − 100 Hz = 300 Hz) instead of 0 Hz if the sampling frequency was 200 Hz. Achieving an anti-aliasing filter with 0 Hz transition band is unrealistic whereas an anti-aliasing filter with a transition band of 300 Hz is not difficult.
Reconstruction.
The term oversampling is also used to denote a process used in the reconstruction phase of digital-to-analog conversion, in which an intermediate high sampling rate is used between the digital input and the analog output. Here, digital interpolation is used to add additional samples between recorded samples, thereby converting the data to a higher sample rate, a form of upsampling. When the resulting higher-rate samples are converted to analog, a less complex and less expensive analog reconstruction filter is required. Essentially, this is a way to shift some of the complexity of reconstruction from analog to the digital domain. Oversampling in the ADC can achieve some of the same benefits as using a higher sample rate at the DAC.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{N}"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\mbox{number of samples} = (2^n)^2 = 2^{2n}."
},
{
"math_id": 3,
"text": "2^{2n}"
},
{
"math_id": 4,
"text": "2^n"
},
{
"math_id": 5,
"text": "\\mbox{scaled mean} = \\frac{ \\sum\\limits^{2^{2n}-1}_{i=0} 2^n \\text{data}_i}{2^{2n}} = \\frac{\\sum\\limits^{2^{2n}-1}_{i=0} \\text{data}_i}{2^n}."
}
]
| https://en.wikipedia.org/wiki?curid=1339640 |
13396591 | Möbius–Kantor graph | Symmetric bipartite cubic graph with 16 vertices and 24 edges
In the mathematical field of graph theory, the Möbius–Kantor graph is a symmetric bipartite cubic graph with 16 vertices and 24 edges named after August Ferdinand Möbius and Seligmann Kantor. It can be defined as the generalized Petersen graph "G"(8,3): that is, it is formed by the vertices of an octagon, connected to the vertices of an eight-point star in which each point of the star is connected to the points three steps away from it.
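As a quick illustration of this construction, the following sketch (assuming the NetworkX library; the vertex labels 0–15 are an arbitrary choice) builds "G"(8,3) directly from the definition and checks its basic parameters.
<syntaxhighlight lang="python">
import networkx as nx

# Build G(8, 3): an outer octagon 0..7, inner vertices 8..15 in which inner
# vertex i is joined to the inner vertex three steps away, plus spokes.
G = nx.Graph()
for i in range(8):
    G.add_edge(i, (i + 1) % 8)            # outer octagon edge
    G.add_edge(8 + i, 8 + (i + 3) % 8)    # inner "star" edge (step 3)
    G.add_edge(i, 8 + i)                  # spoke joining the two rings

print(G.number_of_nodes(), G.number_of_edges())    # 16 24
print(all(d == 3 for _, d in G.degree()))          # True: the graph is cubic
print(nx.is_bipartite(G))                          # True
</syntaxhighlight>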
Möbius–Kantor configuration.
Möbius (1828) asked whether there exists a pair of polygons with "p" sides each, having the property that the vertices of one polygon lie on the lines through the edges of the other polygon, and vice versa. If so, the vertices and edges of these polygons would form a projective configuration. For "p" = 4 there is no solution in the Euclidean plane, but Kantor (1882) found pairs of polygons of this type, for a generalization of the problem in which the points and edges belong to the complex projective plane. That is, in Kantor's solution, the coordinates of the polygon vertices are complex numbers. Kantor's solution for "p" = 4, a pair of mutually-inscribed quadrilaterals in the complex projective plane, is called the Möbius–Kantor configuration. The Möbius–Kantor graph derives its name from being the Levi graph of the Möbius–Kantor configuration. It has one vertex per point and one vertex per line (each line being a triple of points), with an edge connecting two vertices if they correspond to a point and to a line that contains that point.
The configuration may also be described algebraically in terms of the abelian group formula_0 with nine elements.
This group has four subgroups of order three (the subsets of elements of the form formula_1, formula_2, formula_3, and formula_4), each of which can be used to partition the nine group elements into three cosets of three elements per coset. These nine elements and twelve cosets form a configuration, the Hesse configuration. Removing the zero element and the four cosets containing zero gives rise to the Möbius–Kantor configuration.
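This construction is small enough to verify by direct enumeration. The sketch below (plain Python, writing the group additively as pairs of residues mod 3) lists the four order-three subgroups, forms all twelve cosets, discards the zero element and the four cosets through zero, and checks that what remains is an 8-point, 8-line configuration with three points per line and three lines per point.
<syntaxhighlight lang="python">
from itertools import product

elements = list(product(range(3), repeat=2))       # the nine elements of Z_3 x Z_3

# The four subgroups of order three: {(i,0)}, {(i,i)}, {(i,2i)}, {(0,i)} for i = 0,1,2.
subgroups = [
    {(i, 0) for i in range(3)},
    {(i, i) for i in range(3)},
    {(i, (2 * i) % 3) for i in range(3)},
    {(0, i) for i in range(3)},
]

def coset(g, H):
    return frozenset(((g[0] + h[0]) % 3, (g[1] + h[1]) % 3) for h in H)

# Twelve cosets in total (three per subgroup): the lines of the Hesse configuration.
cosets = {coset(g, H) for H in subgroups for g in elements}
assert len(cosets) == 12

# Remove the zero element and the four cosets containing zero.
points = [g for g in elements if g != (0, 0)]
lines = [c for c in cosets if (0, 0) not in c]

print(len(points), len(lines))                                   # 8 8
print(all(len(l) == 3 for l in lines))                           # True: 3 points per line
print(all(sum(p in l for l in lines) == 3 for p in points))      # True: 3 lines per point
</syntaxhighlight>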
As a subgraph.
The Möbius–Kantor graph is a subgraph of the four-dimensional hypercube graph, formed by removing eight edges from the hypercube. Since the hypercube is a unit distance graph, the Möbius–Kantor graph can also be drawn in the plane with all edges unit length, although such a drawing will necessarily have some pairs of crossing edges.
The Möbius–Kantor graph also occurs many times as an induced subgraph of the Hoffman–Singleton graph. Each of these instances is in fact an eigenvector of the Hoffman-Singleton graph, with associated eigenvalue -3. Each vertex "not" in the induced Möbius–Kantor graph is adjacent to exactly four vertices "in" the Möbius–Kantor graph, two each in half of a bipartition of the Möbius–Kantor graph.
Topology.
The Möbius–Kantor graph cannot be embedded without crossings in the plane; it has crossing number 4, and is the smallest cubic graph with that crossing number. Additionally, it provides an example of a graph all of whose subgraphs' crossing numbers differ from it by two or more.
However, it is a toroidal graph: it has an embedding in the torus in which all faces are hexagons. The dual graph of this embedding is the hyperoctahedral graph "K"2,2,2,2.
There is an even more symmetric embedding of the Möbius–Kantor graph in the double torus which is a regular map, with six octagonal faces, in which all 96 symmetries of the graph can be realized as symmetries of the embedding. Its 96-element symmetry group has a Cayley graph that can itself be embedded on the double torus, and has been shown to be the unique group with genus two. The Cayley graph on 96 vertices is a flag graph of the genus 2 regular map having the Möbius–Kantor graph as a skeleton. This means it can be obtained from the regular map as a skeleton of the dual of its barycentric subdivision. A sculpture by DeWitt Godfrey and Duane Martinez showing the double torus embedding of the symmetries of the Möbius–Kantor graph was unveiled at the Technical Museum of Slovenia as part of the 6th Slovenian International Conference on Graph Theory in 2007. In 2013 a rotating version of the sculpture was unveiled at Colgate University.
The Möbius–Kantor graph admits an embedding into a triple torus (genus 3 torus) that is a regular map having four 12-gonal faces, and is the Petrie dual of the double torus embedding described above.
Motivated by an investigation of potential chemical structures of carbon compounds, researchers have studied the family of all embeddings of the Möbius–Kantor graph onto 2-manifolds; they showed that there are 759 inequivalent embeddings. The genus 1 embedding, which is not a regular map, is seen in the diagram above.
Algebraic properties.
The automorphism group of the Möbius–Kantor graph is a group of order 96. It acts transitively on the vertices, on the edges and on the arcs of the graph. Therefore, the Möbius–Kantor graph is a symmetric graph. It has automorphisms that take any vertex to any other vertex and any edge to any other edge. According to the "Foster census", the Möbius–Kantor graph is the unique cubic symmetric graph with 16 vertices, and the smallest cubic symmetric graph which is not also distance-transitive. The Möbius–Kantor graph is also a Cayley graph.
The generalized Petersen graph "G"("n,k") is vertex-transitive if and only if "n" = 10 and "k" = 2 or if "k"² ≡ ±1 (mod "n"), and is edge-transitive only in the following seven cases: ("n,k") = (4,1), (5,2), (8,3), (10,2), (10,3), (12,5), or (24,5). So the Möbius–Kantor graph is one of only seven symmetric generalized Petersen graphs. Its symmetric double torus embedding is correspondingly one of only seven regular cubic maps in which the total number of vertices is twice the number of vertices per face. Among the seven symmetric generalized Petersen graphs are the cubical graph formula_5, the Petersen graph formula_6, the dodecahedral graph formula_7, the Desargues graph formula_8 and the Nauru graph formula_9.
The characteristic polynomial of the Möbius–Kantor graph is equal to
formula_10
The Möbius–Kantor graph is a double cover of the graph of the cube.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Z}_3\\times \\mathbb{Z}_3"
},
{
"math_id": 1,
"text": "(i,0)"
},
{
"math_id": 2,
"text": "(i,i)"
},
{
"math_id": 3,
"text": "(i,2i)"
},
{
"math_id": 4,
"text": "(0,i)"
},
{
"math_id": 5,
"text": "G(4,1)"
},
{
"math_id": 6,
"text": "G(5,2)"
},
{
"math_id": 7,
"text": "G(10,2)"
},
{
"math_id": 8,
"text": "G(10,3)"
},
{
"math_id": 9,
"text": "G(12,5)"
},
{
"math_id": 10,
"text": "(x-3)(x-1)^3(x+1)^3(x+3)(x^2-3)^4.\\ "
}
]
| https://en.wikipedia.org/wiki?curid=13396591 |
13396856 | Möbius–Kantor configuration | Geometric structure of 8 points and 8 lines
In geometry, the Möbius–Kantor configuration is a configuration consisting of eight points and eight lines, with three points on each line and three lines through each point. It is not possible to draw points and lines having this pattern of incidences in the Euclidean plane, but it is possible in the complex projective plane.
Coordinates.
August Ferdinand Möbius (1828) asked whether there exists a pair of polygons with "p" sides each, having the property that the vertices of one polygon lie on the lines through the edges of the other polygon, and vice versa. If so, the vertices and edges of these polygons would form a projective configuration. For formula_0 there is no solution in the Euclidean plane, but Seligmann Kantor (1882) found pairs of polygons of this type, for a generalization of the problem in which the points and edges belong to the complex projective plane. That is, in Kantor's solution, the coordinates of the polygon vertices are complex numbers. Kantor's solution for formula_0, a pair of mutually-inscribed quadrilaterals in the complex projective plane, is called the Möbius–Kantor configuration.
H. S. M. Coxeter (1950) supplies the following simple complex projective coordinates for the eight points of the Möbius–Kantor configuration:
(1,0,0), (0,0,1), (ω, −1, 1), (−1, 0, 1),
(−1,ω2,1), (1,ω,0), (0,1,0), (0,−1,1),
where ω denotes a complex cube root of 1.
The eight points and eight lines of the Möbius–Kantor configuration, with these coordinates, form the eight vertices and eight 3-edges of the complex polygon 3{3}3. Coxeter named it a Möbius–Kantor polygon.
Abstract incidence pattern.
More abstractly, the Möbius–Kantor configuration can be described as a system of eight points and eight triples of points such that each point belongs to exactly three of the triples. With the additional conditions (natural to points and lines) that no pair of points belong to more than one triple and that no two triples have more than one point in their intersection, any two systems of this type are equivalent under some permutation of the points. That is, the Möbius–Kantor configuration is the unique projective configuration of type (8₃8₃).
The Möbius–Kantor graph derives its name from being the Levi graph of the Möbius–Kantor configuration. It has one vertex per point and one vertex per triple, with an edge connecting two vertices if they correspond to a point and to a triple that contains that point.
The points and lines of the Möbius–Kantor configuration can be described as a matroid, whose elements are the points of the configuration and whose nontrivial flats are the lines of the configuration. In this matroid, a set "S" of points is independent if and only if either formula_1 or "S" consists of three non-collinear points. As a matroid, it has been called the MacLane matroid, after the work of Saunders MacLane (1936) proving that it cannot be oriented; it is one of several known minor-minimal non-orientable matroids.
Related configurations.
The solution to Möbius' problem of mutually inscribed polygons for values of "p" greater than four is also of interest. In particular, one possible solution for formula_2 is the Desargues configuration, a set of ten points and ten lines, three points per line and three lines per point, that does admit a Euclidean realization. The Möbius configuration is a three-dimensional analogue of the Möbius–Kantor configuration consisting of two mutually inscribed tetrahedra.
The Möbius–Kantor configuration can be augmented by adding four lines through the four pairs of points not already connected by lines, and by adding a ninth point on the four new lines. The resulting configuration, the Hesse configuration, shares with the Möbius–Kantor configuration the property of being realizable with complex coordinates but not with real coordinates. Deleting any one point from the Hesse configuration produces a copy of the Möbius–Kantor configuration.
Both configurations may also be described algebraically in terms of the abelian group formula_3 with nine elements.
This group has four subgroups of order three (the subsets of elements of the form formula_4, formula_5, formula_6, and formula_7 respectively), each of which can be used to partition the nine group elements into three cosets of three elements per coset. These nine elements and twelve cosets form the Hesse configuration. Removing the zero element and the four cosets containing zero gives rise to the Möbius–Kantor configuration.
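This construction is easy to verify computationally. The following Python sketch (an illustration added here, not taken from the sources above) enumerates the cosets of the four order-three subgroups, discards the zero element and the four cosets through zero, and checks that what remains is an (8₃8₃) configuration:

from itertools import product

points = list(product(range(3), repeat=2))            # elements of Z_3 x Z_3
generators = [(1, 0), (1, 1), (1, 2), (0, 1)]         # one generator per order-3 subgroup

def coset(rep, gen):
    # The coset rep + {0, gen, 2*gen}.
    return frozenset(((rep[0] + k * gen[0]) % 3, (rep[1] + k * gen[1]) % 3) for k in range(3))

lines = {coset(p, g) for g in generators for p in points}
assert len(lines) == 12                               # Hesse configuration: 9 points, 12 lines

zero = (0, 0)
config_points = [p for p in points if p != zero]
config_lines = [l for l in lines if zero not in l]
assert len(config_points) == 8 and len(config_lines) == 8
assert all(len(l) == 3 for l in config_lines)
assert all(sum(p in l for l in config_lines) == 3 for p in config_points)
print("8 points, 8 lines, 3 points per line, 3 lines per point")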
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p = 4"
},
{
"math_id": 1,
"text": "|S|\\le 2"
},
{
"math_id": 2,
"text": "p = 5"
},
{
"math_id": 3,
"text": "\\Z_3\\times \\Z_3"
},
{
"math_id": 4,
"text": "(i,0)"
},
{
"math_id": 5,
"text": "(i,i)"
},
{
"math_id": 6,
"text": "(i,2i)"
},
{
"math_id": 7,
"text": "(0,i)"
}
]
| https://en.wikipedia.org/wiki?curid=13396856 |
13398615 | Super-prime | Prime numbers that occupy prime-numbered positions
Super-prime numbers, also known as higher-order primes or prime-indexed primes (PIPs), are the subsequence of prime numbers that occupy prime-numbered positions within the sequence of all prime numbers. In other words, if prime numbers are matched with ordinal numbers, starting with prime number 2 matched with ordinal number 1, the primes matched with prime ordinal numbers are the super primes.
The subsequence begins
3, 5, 11, 17, 31, 41, 59, 67, 83, 109, 127, 157, 179, 191, 211, 241, 277, 283, 331, 353, 367, 401, 431, 461, 509, 547, 563, 587, 599, 617, 709, 739, 773, 797, 859, 877, 919, 967, 991, ... (sequence in the OEIS).
That is, if "p"("n") denotes the "n"th prime number, the numbers in this sequence are those of the form "p"("p"("n")).
A computer-aided proof (based on calculations involving the subset sum problem) shows that every integer greater than 96 may be represented as a sum of distinct super-prime numbers. The proof relies on a result resembling Bertrand's postulate, stating that (after the larger gap between super-primes 5 and 11) each super-prime number is less than twice its predecessor in the sequence.
It has been shown that there are
formula_0
super-primes up to "x".
This can be used to show that the set of all super-primes is small.
One can also define "higher-order" primeness in much the same way and obtain analogous sequences of primes.
A variation on this theme is the sequence of prime numbers with palindromic prime indices, beginning with
3, 5, 11, 17, 31, 547, 739, 877, 1087, 1153, 2081, 2381, ... (sequence in the OEIS). | [
{
"math_id": 0,
"text": "\\frac{x}{(\\log x)^2} + O\\left(\\frac{x\\log\\log x}{(\\log x)^3}\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=13398615 |
13398693 | Marginal model | In statistics, marginal models (Heagerty & Zeger, 2000) are a technique for obtaining regression estimates in multilevel modeling, also called hierarchical linear models.
People often want to know the effect of a predictor/explanatory variable "X" on a response variable "Y". One way to get an estimate for such effects is through regression analysis.
Why the name marginal model?
In a typical multilevel model, there are level 1 & 2 residuals (R and U variables). The two variables form a joint distribution for the response variable (formula_0). In a marginal model, we collapse over the level 1 & 2 residuals and thus "marginalize" (see also conditional probability) the joint distribution into a univariate normal distribution. We then fit the marginal model to data.
For example, for the following hierarchical model,
level 1: formula_1, the residual is formula_2, and formula_3
level 2: formula_4, the residual is formula_5, and formula_6
Thus, the marginal model is,
formula_7
This is the model that is fitted to the data in order to obtain the regression estimates.
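As an illustration (not part of Heagerty and Zeger's presentation), the collapse can be checked by simulation: drawing data from the two-level model and computing the pooled mean and variance recovers the marginal parameters. A small Python sketch, with example parameter values chosen arbitrarily:

import random

gamma00, tau0, sigma = 10.0, 2.0, 1.0       # arbitrary example values
groups, per_group = 2000, 5

ys = []
for j in range(groups):
    u0j = random.gauss(0, tau0)              # level-2 residual U_0j
    beta0j = gamma00 + u0j                   # level-2 equation
    for i in range(per_group):
        rij = random.gauss(0, sigma)         # level-1 residual R_ij
        ys.append(beta0j + rij)              # level-1 equation: Y_ij = beta_0j + R_ij

mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / (len(ys) - 1)
print(round(mean, 2), round(var, 2))         # close to 10 and to tau0**2 + sigma**2 = 5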
References.
Heagerty, P. J., & Zeger, S. L. (2000). Marginalized multilevel models and likelihood inference. "Statistical Science, 15(1)", 1-26. | [
{
"math_id": 0,
"text": "Y_{ij}"
},
{
"math_id": 1,
"text": "Y_{ij} = \\beta_{0j} + R_{ij}"
},
{
"math_id": 2,
"text": "R_{ij}"
},
{
"math_id": 3,
"text": "\\operatorname{var}(R_{ij}) = \\sigma^2"
},
{
"math_id": 4,
"text": "\\beta_{0j} = \\gamma_{00} + U_{0j}"
},
{
"math_id": 5,
"text": "U_{0j}"
},
{
"math_id": 6,
"text": "\\operatorname{var}(U_{0j}) = \\tau_0^2"
},
{
"math_id": 7,
"text": "Y_{ij} \\sim N(\\gamma_{00},(\\tau_0^2+\\sigma^2))"
}
]
| https://en.wikipedia.org/wiki?curid=13398693 |
13400209 | Sullivan conjecture | Mathematical conjecture
In mathematics, Sullivan conjecture or Sullivan's conjecture on maps from classifying spaces can refer to any of several results and conjectures prompted by homotopy theory work of Dennis Sullivan. A basic theme and motivation concerns the fixed point set in group actions of a finite group formula_0. The most elementary formulation, however, is in terms of the classifying space formula_1 of such a group. Roughly speaking, it is difficult to map such a space formula_1 continuously into a finite CW complex formula_2 in a non-trivial manner. Such a version of the Sullivan conjecture was first proved by Haynes Miller. Specifically, in 1984, Miller proved that the function space, carrying the compact-open topology, of base point-preserving mappings from formula_1 to formula_2 is weakly contractible.
This is equivalent to the statement that the map formula_2 → formula_3 from X to the function space of maps formula_1 → formula_2, not necessarily preserving the base point, given by sending a point formula_4 of formula_2 to the constant map whose image is formula_4 is a weak equivalence. The mapping space formula_3 is an example of a homotopy fixed point set. Specifically, formula_3 is the homotopy fixed point set of the group formula_0 acting by the trivial action on formula_2. In general, for a group formula_0 acting on a space formula_2, the homotopy fixed points are the fixed points formula_5 of the mapping space formula_6 of maps from the universal cover formula_7 of formula_1 to formula_2 under the formula_0-action on formula_6 given by formula_8 in formula_0 acts on a map formula_9 in formula_6 by sending it to formula_10. The formula_0-equivariant map from formula_7 to a single point formula_11 induces a natural map η: formula_12→formula_5 from the fixed points to the homotopy fixed points of formula_0 acting on formula_2. Miller's theorem is that η is a weak equivalence for trivial formula_0-actions on finite-dimensional CW complexes. An important ingredient and motivation for his proof is a result of Gunnar Carlsson on the homology of formula_13 as an unstable module over the Steenrod algebra.
Miller's theorem generalizes to a version of Sullivan's conjecture in which the action on formula_2 is allowed to be non-trivial. Sullivan conjectured that η is a weak equivalence after a certain p-completion procedure due to A. Bousfield and D. Kan for the group formula_14. This conjecture was incorrect as stated, but a correct version was given by Miller, and proven independently by Dwyer-Miller-Neisendorfer, Carlsson, and Jean Lannes, showing that the natural map formula_15 → formula_16 is a weak equivalence when the order of formula_0 is a power of a prime p, and where formula_17 denotes the Bousfield-Kan p-completion of formula_2. Miller's proof involves an unstable Adams spectral sequence, Carlsson's proof uses his affirmative solution of the Segal conjecture and also provides information about the homotopy fixed points formula_18 before completion, and Lannes's proof involves his T-functor.
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "BG"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "F(BG, X)"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "F(EG, X)^G"
},
{
"math_id": 6,
"text": "F(EG, X)"
},
{
"math_id": 7,
"text": "EG"
},
{
"math_id": 8,
"text": "g"
},
{
"math_id": 9,
"text": "f"
},
{
"math_id": 10,
"text": "gfg^{-1}"
},
{
"math_id": 11,
"text": "*"
},
{
"math_id": 12,
"text": "X^G = F(*,X)^G"
},
{
"math_id": 13,
"text": "BZ/2"
},
{
"math_id": 14,
"text": "G=Z/2"
},
{
"math_id": 15,
"text": "(X^G)_p"
},
{
"math_id": 16,
"text": "F(EG, (X)_p)^G"
},
{
"math_id": 17,
"text": "(X)_p"
},
{
"math_id": 18,
"text": "F(EG,X)^G"
}
]
| https://en.wikipedia.org/wiki?curid=13400209 |
13400442 | Emil Cohn | German physicist
Emil Georg Cohn (28 September 1854 – 28 January 1944), was a German physicist.
Life.
Cohn was born in Neustrelitz, Mecklenburg on 28 September 1854. He was the son of August Cohn, a lawyer, and Charlotte Cohn. At the age of 17, Cohn began to study jurisprudence at the University of Leipzig. However, at the Ruprecht Karl University of Heidelberg and the University of Strasbourg he began to study physics. In Strasbourg, he graduated in 1879. From 1881 to 1884, he was an assistant of August Kundt at the physical institute. In 1884 he habilitated in theoretical physics and was admitted as a private lecturer. From 1884 to 1918, he was a faculty member of the University of Strasbourg and was nominated as an assistant professor on 27 September 1884. He dealt with experimental physics at first, and then turned completely to theoretical physics. In 1918 he was nominated as an extraordinary professor.
After the end of World War I and the occupation of Alsace-Lorraine by France, Cohn and his family were expelled from Strasbourg on Christmas Eve 1918. In April 1919, he was nominated as a professor at the University of Rostock. From June 1920, he gave lectures on theoretical physics at the University of Freiburg. In 1935 he retired to Heidelberg, where he lived until 1939. He resigned from the Deutsche Physikalische Gesellschaft (DPG), together with other physicists such as Richard Gans, Leo Graetz, George Jaffé and Walter Kaufmann, in protest at the despotism of the Nazi regime.
Cohn was a baptized Protestant and was married to Marie Goldschmidt (1864–1950), with whom he had two daughters. Because of his Jewish descent, he was forced to emigrate to Switzerland under the pressure of the Nazi regime. He lived in Hasliberg-Hohfluh at first, and from 1942 in Ringgenberg, Switzerland, where he died at the age of 90.
Cohn's younger brother, Carl Cohn (1857–1931) was a successful overseas merchant from Hamburg, who worked from 1921 until 1929 as a senator in Hamburg.
Work.
At the beginning of the 20th century, Cohn was one of the most respected experts in the area of theoretical electrodynamics. He was unsatisfied with the Lorentzian theory of electrodynamics for moving bodies and proposed an independent theory. His alternative theory, which was based on a modification of the Maxwell field-equations, was compatible with all relevant electrodynamic and optical experiments known at that time (1900–1904), including the Michelson–Morley experiment (MMX) of 1887. Cohn's electrodynamics of moving bodies was based on the assumption that light travels within the Earth's atmosphere with a constant velocity; however, his theory suffered from internal failures. While the theory predicted the negative result of MMX within air, a positive result would be expected within vacuum. Another weak point stems from the fact that his concept was formulated without the use of atoms and electrons. So after 1905 his theory was superseded by Hendrik Lorentz's and Albert Einstein's.
Regarding his own theory (developed in 1900 and 1901), he used the Principle of Economy to eliminate the known concept of luminiferous aether (but also the concept of atoms) and argued that one can simply call it vacuum. He also maintained that one can use a frame of reference in which the fixed stars are at rest. As a heuristic concept this can be described as a material "aether", but in Cohn's opinion this would be only "metaphorical" and would not affect the consequences of his theory. He also incorporated the transformation equations "x'=x-vt" and "t'=t-vx/c²" introduced by Lorentz in 1895 into his theory, calling them the "Lorentzian Transformation". In 1905 this name (for transformations valid to "all" orders in v/c) was altered by Henri Poincaré into the commonly used expression "Lorentz transformation".
In 1904 he compared his theory with Lorentz's mature 1904 theory, employing physical interpretations of the Lorentz transformation that were similar to those later used in Albert Einstein's special relativity in 1905. For instance, local time was described by him as a consequence of the assumption that light propagates in spherical waves with constant velocity in all directions (a similar definition was already given by Poincaré in 1900).
<templatestyles src="Template:Blockquote/styles.css" />Everywhere, where the propagation of radiation is not the object of measurement, we define identical moments of time at different points of Earth's surface, by treating the propagation of light as "timeless". In optics, however, we "define" these identical moments of time by assuming, that the propagation takes place in "spherical" waves for every relatively resting and isotropic medium. This means: the "time" which actually serves us for the representation of terrestrial processes, is the ""local time" formula_0, for which the equations I'b to IVb hold, – not the "general time"" formula_1.
He also illustrated the effects of length contraction and time dilation by using moving rods and clocks.
<templatestyles src="Template:Blockquote/styles.css" />formula_2 are those measuring numbers being read at an "initially correct" measuring-rod (initially = when at rest), after it was introduced into the system and was accordingly deformed. [...] formula_3 are those time intervals indicated by an "initially correctly ticking" clock, after it was inserted into the system and accordingly has changed its rate.
He critically remarked that the distinction between "true time" and "local time" in Lorentz's theory is artificial, because it cannot be verified by experiment. However, Cohn himself believed that the validity of Lorentz's theory is limited to optical phenomena, whereas in his own theory it is possible that mechanical clocks might indicate the "true" time. Later in 1911 (after his own theory was disproved), Cohn accepted the relativity principle of "Lorentz and Einstein" and wrote a summary on special relativity, which was applauded by Einstein.
Sources.
<templatestyles src="Reflist/styles.css" />
{
"math_id": 0,
"text": "t'"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "x_0\\ y_0\\ z_0"
},
{
"math_id": 3,
"text": "t_0"
}
]
| https://en.wikipedia.org/wiki?curid=13400442 |
13403260 | Generalised metric | Metric geometry
In mathematics, the concept of a generalised metric is a generalisation of that of a metric, in which the distance is not a real number but taken from an arbitrary ordered field.
In general, when we define a metric space, the distance function is taken to be a real-valued function. The real numbers form an ordered field which is Archimedean and order complete. These metric spaces have some nice properties: for example, in a metric space compactness, sequential compactness and countable compactness are equivalent. These properties may not, however, hold so easily if the distance function is taken in an arbitrary ordered field, instead of in formula_0
Preliminary definition.
Let formula_1 be an arbitrary ordered field, and formula_2 a nonempty set; a function formula_3 is called a metric on formula_4 if the following conditions hold: formula_5 if and only if formula_6; formula_7 (symmetry); and formula_8 (the triangle inequality).
It is not difficult to verify that the open balls formula_9 form a basis for a suitable topology, the latter called the "metric topology" on formula_4 with the metric in formula_10
In view of the fact that formula_11 in its order topology is monotonically normal, we would expect formula_2 to be at least regular.
Further properties.
However, under the axiom of choice, every general metric is monotonically normal: given formula_12 where formula_13 is open, there is an open ball formula_14 such that formula_15 Take formula_16 One may then verify the conditions for monotone normality.
Remarkably, even without the axiom of choice, general metrics are monotonically normal.
"proof".
Case I: formula_11 is an Archimedean field.
Now, if formula_17 in formula_18 open, we may take formula_19 where formula_20 and the trick is done without choice.
Case II: formula_11 is a non-Archimedean field.
For given formula_21 where formula_13 is open, consider the set
formula_22
The set formula_23 is non-empty. For, as formula_13 is open, there is an open ball formula_24 within formula_25 Now, as formula_11 is non-Archimedean, formula_26 is not bounded above, hence there is some formula_27 such that for all formula_28 formula_29 Putting formula_30 we see that formula_31 is in formula_32
Now define formula_33 We would show that with respect to this mu operator, the space is monotonically normal. Note that formula_34
If formula_35 is not in formula_13 (an open set containing formula_17) and formula_17 is not in formula_36 (an open set containing formula_35), we show that formula_37 is empty. If not, say formula_38 is in the intersection. Then
formula_39
From the above, we get that formula_40 which is impossible since this would imply that either formula_35 belongs to formula_41 or formula_17 belongs to formula_42
This completes the proof.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle \\R."
},
{
"math_id": 1,
"text": "(F, +, \\cdot, <)"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "d : M \\times M \\to F^+ \\cup \\{0\\}"
},
{
"math_id": 4,
"text": "M,"
},
{
"math_id": 5,
"text": "d(x, y) = 0"
},
{
"math_id": 6,
"text": "x = y"
},
{
"math_id": 7,
"text": "d(x, y) = d(y, x)"
},
{
"math_id": 8,
"text": "d(x, y) + d(y, z) \\geq d(x, z)"
},
{
"math_id": 9,
"text": "B(x, \\delta)\\; := \\{y \\in M\\; : d(x, y) < \\delta\\}"
},
{
"math_id": 10,
"text": "F."
},
{
"math_id": 11,
"text": "F"
},
{
"math_id": 12,
"text": "x \\in G,"
},
{
"math_id": 13,
"text": "G"
},
{
"math_id": 14,
"text": "B(x, \\delta)"
},
{
"math_id": 15,
"text": "x \\in B(x, \\delta) \\subseteq G."
},
{
"math_id": 16,
"text": "\\mu(x, G) = B\\left(x, \\delta/2\\right)."
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "G, G"
},
{
"math_id": 19,
"text": "\\mu(x, G) := B(x, 1/2n(x,G)),"
},
{
"math_id": 20,
"text": "n(x, G) := \\min\\{n \\in \\N : B(x, 1/n) \\subseteq G\\},"
},
{
"math_id": 21,
"text": "x \\in G"
},
{
"math_id": 22,
"text": "A(x, G) := \\{a \\in F : \\text{ for all } n \\in \\N, B(x, n \\cdot a) \\subseteq G\\}."
},
{
"math_id": 23,
"text": "A(x, G)"
},
{
"math_id": 24,
"text": "B(x, k)"
},
{
"math_id": 25,
"text": "G."
},
{
"math_id": 26,
"text": "\\N_F"
},
{
"math_id": 27,
"text": "\\xi \\in F"
},
{
"math_id": 28,
"text": "n \\in \\N,"
},
{
"math_id": 29,
"text": "n \\cdot 1 \\leq \\xi."
},
{
"math_id": 30,
"text": "a = k \\cdot (2 \\xi)^{-1},"
},
{
"math_id": 31,
"text": "a"
},
{
"math_id": 32,
"text": "A(x, G)."
},
{
"math_id": 33,
"text": "\\mu(x, G) = \\bigcup\\{B(x, a) : a \\in A(x, G)\\}."
},
{
"math_id": 34,
"text": "\\mu(x,G)\\subseteq G."
},
{
"math_id": 35,
"text": "y"
},
{
"math_id": 36,
"text": "H"
},
{
"math_id": 37,
"text": "\\mu(x, G) \\cap \\mu(y, H)"
},
{
"math_id": 38,
"text": "z"
},
{
"math_id": 39,
"text": "\\exists a \\in A(x, G) \\colon d(x, z) < a;\\;\\;\n\\exists b \\in A(y, H) \\colon d(z, y) < b."
},
{
"math_id": 40,
"text": "d(x, y) \\leq d(x, z) + d(z, y) < 2 \\cdot \\max\\{a, b\\},"
},
{
"math_id": 41,
"text": "\\mu(x, G) \\subseteq G"
},
{
"math_id": 42,
"text": "\\mu(y, H) \\subseteq H."
}
]
| https://en.wikipedia.org/wiki?curid=13403260 |
13404205 | Hofstadter sequence | In mathematics, a Hofstadter sequence is a member of a family of related integer sequences defined by non-linear recurrence relations.
Sequences presented in "Gödel, Escher, Bach: an Eternal Golden Braid".
The first Hofstadter sequences were described by Douglas Richard Hofstadter in his book "Gödel, Escher, Bach". In order of their presentation in chapter III on figures and background (Figure-Figure sequence) and chapter V on recursive structures and processes (remaining sequences), these sequences are:
Hofstadter Figure-Figure sequences.
The Hofstadter Figure-Figure (R and S) sequences are a pair of complementary integer sequences defined as follows
formula_0
with the sequence formula_1 defined as a strictly increasing series of positive integers not present in formula_2. The first few terms of these sequences are
R: 1, 3, 7, 12, 18, 26, 35, 45, 56, 69, 83, 98, 114, 131, 150, 170, 191, 213, 236, 260, ... (sequence in the OEIS)
S: 2, 4, 5, 6, 8, 9, 10, 11, 13, 14, 15, 16, 17, 19, 20, 21, 22, 23, 24, 25, ... (sequence in the OEIS)
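Because S is defined as the complement of R, the two sequences must be built together. The following Python sketch (an illustrative implementation, not Hofstadter's) does this and reproduces the terms above:

def figure_figure(n_terms):
    R, S = [1], []
    r_values = {1}
    c = 2                                     # next integer to classify as a term of S or not
    while len(R) < n_terms or len(S) < n_terms:
        if len(S) >= len(R):
            nxt = R[-1] + S[len(R) - 1]       # R(n) = R(n-1) + S(n-1)
            R.append(nxt)
            r_values.add(nxt)
        else:
            # Any still-uncomputed term of R exceeds R[-1] + c, so checking the
            # terms of R computed so far is enough to classify c.
            if c not in r_values:
                S.append(c)
            c += 1
    return R[:n_terms], S[:n_terms]

R, S = figure_figure(10)
print(R)   # [1, 3, 7, 12, 18, 26, 35, 45, 56, 69]
print(S)   # [2, 4, 5, 6, 8, 9, 10, 11, 13, 14]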
Hofstadter G sequence.
The Hofstadter G sequence is defined as follows
formula_3
The first few terms of this sequence are
0, 1, 1, 2, 3, 3, 4, 4, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 12, 12, ... (sequence in the OEIS)
Hofstadter H sequence.
The Hofstadter H sequence is defined as follows
formula_4
The first few terms of this sequence are
0, 1, 1, 2, 3, 4, 4, 5, 5, 6, 7, 7, 8, 9, 10, 10, 11, 12, 13, 13, 14, ... (sequence in the OEIS)
Hofstadter Female and Male sequences.
The Hofstadter Female (F) and Male (M) sequences are defined as follows
formula_5
The first few terms of these sequences are
F: 1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 12, 13, ... (sequence in the OEIS)
M: 0, 0, 1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 7, 8, 9, 9, 10, 11, 11, 12, 12, ... (sequence in the OEIS)
Hofstadter Q sequence.
The Hofstadter Q sequence is defined as follows
formula_6
The first few terms of the sequence are
1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10, 11, 11, 12, ... (sequence in the OEIS)
Hofstadter named the terms of the sequence "Q numbers"; thus the Q number of 6 is 4. The presentation of the Q sequence in Hofstadter's book is actually the first known mention of a meta-Fibonacci sequence in the literature.
While the terms of the Fibonacci sequence are determined by summing the two preceding terms, the two preceding terms of a Q number determine how far to go back in the Q sequence to find the two terms to be summed. The indices of the summation terms thus depend on the Q sequence itself.
Q(1), the first element of the sequence, is never one of the two terms being added to produce a later element; it is involved only within an index in the calculation of Q(3).
Although the terms of the Q sequence seem to flow chaotically, like many meta-Fibonacci sequences its terms can be grouped into blocks of successive generations. In case of the Q sequence, the "k"-th generation has 2"k" members. Furthermore, with "g" being the generation that a Q number belongs to, the two terms to be summed to calculate the Q number, called its parents, reside by far mostly in generation "g" − 1 and only a few in generation "g" − 2, but never in an even older generation.
Most of these findings are empirical observations, since virtually nothing has been proved rigorously about the "Q" sequence so far. It is specifically unknown if the sequence is well-defined for all "n"; that is, if the sequence "dies" at some point because its generation rule tries to refer to terms which would conceptually sit left of the first term Q(1).
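Despite the lack of rigorous results, the sequence is easy to experiment with numerically. The following Python sketch (illustrative only) computes the first terms directly from the definition:

def hofstadter_q(n_terms):
    q = [None, 1, 1]                          # 1-indexed: Q(1) = Q(2) = 1
    for n in range(3, n_terms + 1):
        q.append(q[n - q[n - 1]] + q[n - q[n - 2]])
    return q[1:]

print(hofstadter_q(20))
# [1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10, 11, 11, 12]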
Generalizations of the "Q" sequence.
Hofstadter–Huber "Q""r","s"("n") family.
20 years after Hofstadter first described the "Q" sequence, he and Greg Huber used the character "Q" to name the generalization of the "Q" sequence toward a family of sequences, and renamed the original "Q" sequence of his book to "U" sequence.
The original "Q" sequence is generalized by replacing ("n" − 1) and ("n" − 2) by ("n" − "r") and ("n" − "s"), respectively.
This leads to the sequence family
formula_7
where "s" ≥ 2 and "r" < "s".
With ("r","s") = (1,2), the original "Q" sequence is a member of this family. So far, only three sequences of the family "Q""r","s" are known, namely the "U" sequence with ("r","s") = (1,2) (which is the original "Q" sequence); the "V" sequence with ("r","s") = (1,4); and the W sequence with (r,s) = (2,4). Only the V sequence, which does not behave as chaotically as the others, is proven not to "die". Similar to the original "Q" sequence, virtually nothing has been proved rigorously about the W sequence to date.
The first few terms of the V sequence are
1, 1, 1, 1, 2, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 11, ... (sequence in the OEIS)
The first few terms of the W sequence are
1, 1, 1, 1, 2, 4, 6, 7, 7, 5, 3, 8, 9, 11, 12, 9, 9, 13, 11, 9, ... (sequence in the OEIS)
For other values ("r","s") the sequences sooner or later "die" i.e. there exists an "n" for which "Q""r","s"("n") is undefined because "n" − "Q""r","s"("n" − "r") < 1.
Pinn "F""i","j"("n") family.
In 1998, Klaus Pinn, a scientist at the University of Münster (Germany) who was in close communication with Hofstadter, suggested another generalization of Hofstadter's "Q" sequence, which Pinn called "F" sequences.
The family of Pinn "F""i","j" sequences is defined as follows:
formula_8
Thus Pinn introduced additional constants "i" and "j" which shift the index of the terms of the summation conceptually to the left (that is, closer to start of the sequence).
Only "F" sequences with ("i","j") = (0,0), (0,1), (1,0), and (1,1), the first of which represents the original "Q" sequence, appear to be well-defined. Unlike "Q"(1), the first elements of the Pinn "F""i","j"("n") sequences are terms of summations in calculating later elements of the sequences when any of the additional constants is 1.
The first few terms of the Pinn "F"0,1 sequence are
1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 6, 7, 8, 8, 8, 8, 9, 10, 10, 11, ... (sequence in the OEIS)
Hofstadter–Conway $10,000 sequence.
The Hofstadter–Conway $10,000 sequence is defined as follows
formula_9
The first few terms of this sequence are
1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8, 9, 10, 11, 12, ... (sequence in the OEIS)
The values formula_10 converge to 1/2, and this sequence acquired its name because John Horton Conway offered a prize of $10,000 to anyone who could determine its rate of convergence. The prize, since reduced to $1,000, was claimed by Collin Mallows, who proved that
formula_11
In private communication with Klaus Pinn, Hofstadter later claimed that he had found the sequence and its structure about 10–15 years before Conway posed his challenge.
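The slow convergence can be observed numerically. The following Python sketch (illustrative only) computes the sequence and prints, for a few blocks of indices, the largest deviation of a(n)/n from 1/2:

def conway(n_terms):
    a = [None, 1, 1]                          # 1-indexed: a(1) = a(2) = 1
    for n in range(3, n_terms + 1):
        a.append(a[a[n - 1]] + a[n - a[n - 1]])
    return a

a = conway(2 ** 16)
for top in (2 ** 8, 2 ** 12, 2 ** 16):
    worst = max(abs(a[n] / n - 0.5) for n in range(top // 2, top + 1))
    print(top, worst)                         # the worst deviation in each block shrinks slowly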
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\nR(1)&=1~ ;\\ S(1)=2 \\\\\nR(n)&=R(n-1)+S(n-1), \\quad n>1.\n\\end{align}\n"
},
{
"math_id": 1,
"text": "S(n)"
},
{
"math_id": 2,
"text": "R(n)"
},
{
"math_id": 3,
"text": "\n\\begin{align}\nG(0)&=0 \\\\\nG(n)&=n-G(G(n-1)), \\quad n>0.\n\\end{align}\n"
},
{
"math_id": 4,
"text": "\n\\begin{align}\nH(0)&=0 \\\\\nH(n)&=n-H(H(H(n-1))), \\quad n>0.\n\\end{align}\n"
},
{
"math_id": 5,
"text": "\n\\begin{align}\nF(0)&=1~ ;\\ M(0)=0 \\\\\nF(n)&=n-M(F(n-1)), \\quad n>0 \\\\\nM(n)&=n-F(M(n-1)), \\quad n>0.\n\\end{align}\n"
},
{
"math_id": 6,
"text": "\n\\begin{align}\nQ(1)&=Q(2)=1, \\\\\nQ(n)&=Q(n-Q(n-1))+Q(n-Q(n-2)), \\quad n>2.\n\\end{align}\n"
},
{
"math_id": 7,
"text": "\nQ_{r,s}(n) =\n\\begin{cases}\n1 , \\quad 1 \\le n \\le s, \\\\\nQ_{r,s}(n-Q_{r,s}(n-r))+Q_{r,s}(n-Q_{r,s}(n-s)), \\quad n > s,\n\\end{cases}\n"
},
{
"math_id": 8,
"text": "\nF_{i,j}(n) =\n\\begin{cases}\n1 , \\quad n=1,2, \\\\\nF_{i,j}(n-i-F_{i,j}(n-1))+F_{i,j}(n-j-F_{i,j}(n-2)), \\quad n > 2.\n\\end{cases}\n"
},
{
"math_id": 9,
"text": "\n \\begin{align}\n a(1) &= a(2) = 1, \\\\\n a(n) &= a\\big(a(n - 1)\\big) + a\\big(n - a(n - 1)\\big), \\quad n > 2.\n \\end{align}\n"
},
{
"math_id": 10,
"text": "a(n)/n"
},
{
"math_id": 11,
"text": "\n \\left|\\frac{a(n)}{n} - \\frac{1}{2}\\right| = O\\left(\\frac{1}{\\sqrt{\\log n}}\\right).\n"
}
]
| https://en.wikipedia.org/wiki?curid=13404205 |
13408015 | Heavy baryon chiral perturbation theory | Effective theory for baryons
Heavy baryon chiral perturbation theory (HBChPT) is an effective quantum field theory used to describe the interactions of pions and nucleons/baryons. It is an extension of chiral perturbation theory (ChPT), which describes only the low-energy interactions of pions. In a richer theory one would also like to describe the interactions of baryons with pions. A fully relativistic Lagrangian of nucleons is non-predictive, as the quantum corrections, or loop diagrams, can count as formula_0 quantities and therefore do not describe higher-order corrections.
Because the baryons are much heavier than the pions, HBChPT rests on the use of a nonrelativistic description of baryons compared to that of the pions. Therefore, higher order terms in the HBChPT Lagrangian come in at higher orders of formula_1 where formula_2 is the baryon mass. | [
{
"math_id": 0,
"text": "\\mathcal{O}(1)"
},
{
"math_id": 1,
"text": "m_B^{-n}"
},
{
"math_id": 2,
"text": "m_B"
}
]
| https://en.wikipedia.org/wiki?curid=13408015 |
13408203 | FRACTRAN | Turing-complete esoteric programming language invented by John Conway
FRACTRAN is a Turing-complete esoteric programming language invented by the mathematician John Conway. A FRACTRAN program is an ordered list of positive fractions together with an initial positive integer input "n". The program is run by updating the integer "n" as follows:
gives the following FRACTRAN program, called PRIMEGAME, which finds successive prime numbers:
formula_0
Starting with "n"=2, this FRACTRAN program generates the following sequence of integers:
After 2, this sequence contains the following powers of 2:
formula_1 (sequence in the OEIS)
The exponent part of these powers of two are primes, 2, 3, 5, etc.
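Running a FRACTRAN program only requires repeatedly finding the first fraction that gives an integer product. The following Python sketch (an illustrative interpreter, not part of Conway's presentation) executes PRIMEGAME and reports the powers of two among its first 1000 states:

from fractions import Fraction

PRIMEGAME = [Fraction(a, b) for a, b in [
    (17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29), (95, 23),
    (77, 19), (1, 17), (11, 13), (13, 11), (15, 2), (1, 7), (55, 1)]]

def fractran(program, n, steps):
    # Yield successive states; stop early if no fraction applies.
    for _ in range(steps):
        for f in program:
            if (n * f).denominator == 1:
                n = int(n * f)
                yield n
                break
        else:
            return

for state in fractran(PRIMEGAME, 2, 1000):
    if state & (state - 1) == 0:                        # state is a power of two
        print(state, "= 2 **", state.bit_length() - 1)  # 4, 8, 32 and 128 appear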
Understanding a FRACTRAN program.
A FRACTRAN program can be seen as a type of register machine where the registers are stored in prime exponents in the argument "n".
Using Gödel numbering, a positive integer "n" can encode an arbitrary number of arbitrarily large positive integer variables. The value of each variable is encoded as the exponent of a prime number in the prime factorization of the integer. For example, the integer
formula_2
represents a register state in which one variable (which we will call v2) holds the value 2 and two other variables (v3 and v5) hold the value 1. All other variables hold the value 0.
A FRACTRAN program is an ordered list of positive fractions. Each fraction represents an instruction that tests one or more variables, represented by the prime factors of its denominator. For example:
formula_3
tests v2 and v5. If formula_4 and formula_5, then it subtracts 2 from v2 and 1 from v5 and adds 1 to v3 and 1 to v7. For example:
formula_6
Since the FRACTRAN program is just a list of fractions, these test-decrement-increment instructions are the only allowed instructions in the FRACTRAN language. In addition the following restrictions apply:
Creating simple programs.
Addition.
The simplest FRACTRAN program is a single instruction such as
formula_7
This program can be represented as a (very simple) algorithm as follows:
Given an initial input of the form formula_9, this program will compute the sequence formula_10, formula_11, etc., until eventually, after formula_12 steps, no factors of 2 remain and the product with formula_8 no longer yields an integer; the machine then stops with a final output of formula_13. It therefore adds two integers together.
Multiplication.
We can create a "multiplier" by "looping" through the "adder". In order to do this we need to introduce states into our algorithm. This algorithm will take a number formula_9 and produce formula_14:
State B is a loop that adds v3 to v5 and also moves v3 to v7, and state A is an outer control loop that repeats the loop in state B v2 times. State A also restores the value of v3 from v7 after the loop in state B has completed.
We can implement states using new variables as state indicators. The state indicators for state B will be v11 and v13. Note that we require two state control indicators for one loop; a primary flag (v11) and a secondary flag (v13). Because each indicator is consumed whenever it is tested, we need a secondary indicator to say "continue in the current state"; this secondary indicator is swapped back to the primary indicator in the next instruction, and the loop continues.
We now add FRACTRAN state indicators and instructions to the multiplication algorithm.
When we write out the FRACTRAN instructions, we must put the state A instructions last, because state A has no state indicators - it is the default state if no state indicators are set. So as a FRACTRAN program, the multiplier becomes:
formula_15
With input 2"a"3"b" this program produces output 5"ab".
Subtraction and division.
In a similar way, we can create a FRACTRAN "subtractor", and repeated subtractions allow us to create a "quotient and remainder" algorithm.
Writing out the FRACTRAN program, we have:
formula_16
and input 2"n"3"d"11 produces output 5"q"7"r" where "n" = "qd" + "r" and 0 ≤ "r" < "d".
Conway's prime algorithm.
Conway's prime generating algorithm above is essentially a quotient and remainder algorithm within two loops. Given input of the form formula_17 where 0 ≤ "m" < "n", the algorithm tries to divide "n"+1 by each number from "n" down to 1, until it finds the largest number "k" that is a divisor of "n"+1. It then returns 2"n"+1 7"k"-1 and repeats. The only times that the sequence of state numbers generated by the algorithm produces a power of 2 is when "k" is 1 (so that the exponent of 7 is 0), which only occurs if the exponent of 2 is a prime. A step-by-step explanation of Conway's algorithm can be found in Havil (2007).
For this program, reaching the prime number 2, 3, 5, 7... requires respectively 19, 69, 281, 710... steps (sequence in the OEIS).
A variant of Conway's program also exists, which differs from the above version by two fractions:
formula_18
This variant is a little faster: reaching 2, 3, 5, 7... takes it 19, 69, 280, 707... steps (sequence in the OEIS). A single iteration of this program, checking a particular number "N" for primeness, takes the following number of steps:
formula_19
where formula_20 is the largest integer divisor of "N" and formula_21 is the floor function.
In 1999, Devin Kilminster demonstrated a shorter, ten-instruction program:
formula_22
For the initial input "n = 10" successive primes are generated by subsequent powers of 10.
Other examples.
The following FRACTRAN program:
formula_23
calculates the Hamming weight H("a") of the binary expansion of "a", i.e. the number of 1s in the binary expansion of "a". Given input 2"a", its output is 13H("a"), as the small simulation below illustrates.
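The check below is an illustration only (not part of the original description), reusing the same style of interpreter as the PRIMEGAME example above:

from fractions import Fraction

HAMMING = [Fraction(a, b) for a, b in [
    (33, 20), (5, 11), (13, 10), (1, 5), (2, 3), (10, 7), (7, 2)]]

def run_to_halt(program, n):
    while True:
        for f in program:
            if (n * f).denominator == 1:
                n = int(n * f)
                break
        else:
            return n

for a in range(1, 9):
    final = run_to_halt(HAMMING, 2 ** a)
    weight = 0
    while final % 13 == 0:                    # read off the exponent of 13
        final //= 13
        weight += 1
    print(a, bin(a).count("1"), weight)       # the last two columns agree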
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\left( \\frac{17}{91}, \\frac{78}{85}, \\frac{19}{51}, \\frac{23}{38}, \\frac{29}{33}, \\frac{77}{29}, \\frac{95}{23}, \\frac{77}{19}, \\frac{1}{17}, \\frac{11}{13}, \\frac{13}{11}, \\frac{15}{2}, \\frac{1}{7}, \\frac{55}{1} \\right)"
},
{
"math_id": 1,
"text": "2^2=4,\\, 2^3=8,\\, 2^5=32,\\, 2^7=128,\\, 2^{11}=2048,\\, 2^{13}=8192,\\, 2^{17}=131072,\\, 2^{19}=524288,\\, \\dots"
},
{
"math_id": 2,
"text": "60 = 2^2 \\times 3^1 \\times 5^1"
},
{
"math_id": 3,
"text": "f_1 = \\frac{21}{20} = \\frac{3 \\times 7}{2^2 \\times 5^1}"
},
{
"math_id": 4,
"text": "v_2 \\ge 2"
},
{
"math_id": 5,
"text": "v_5 \\ge 1"
},
{
"math_id": 6,
"text": "60 \\cdot f_1 = 2^2 \\times 3^1 \\times 5^1 \\cdot \\frac{3 \\times 7}{2^2 \\times 5^1} = 3^2 \\times 7^1"
},
{
"math_id": 7,
"text": "\\left( \\frac{3}{2} \\right)"
},
{
"math_id": 8,
"text": "\\frac{3}{2}"
},
{
"math_id": 9,
"text": "2^a 3^b"
},
{
"math_id": 10,
"text": "2^{a-1} 3^{b+1}"
},
{
"math_id": 11,
"text": "2^{a-2} 3^{b+2}"
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": " 3^{a + b} "
},
{
"math_id": 14,
"text": "5^{ab}"
},
{
"math_id": 15,
"text": "\\left( \\frac{455}{33}, \\frac{11}{13}, \\frac{1}{11}, \\frac{3}{7}, \\frac{11}{2}, \\frac{1}{3} \\right)"
},
{
"math_id": 16,
"text": "\\left( \\frac{91}{66}, \\frac{11}{13}, \\frac{1}{33}, \\frac{85}{11}, \\frac{57}{119}, \\frac{17}{19}, \\frac{11}{17}, \\frac{1}{3} \\right)"
},
{
"math_id": 17,
"text": "2^n 7^m"
},
{
"math_id": 18,
"text": "\\left( \\frac{17}{91}, \\frac{78}{85}, \\frac{19}{51}, \\frac{23}{38}, \\frac{29}{33}, \\frac{77}{29}, \\frac{95}{23}, \\frac{77}{19}, \\frac{1}{17}, \\frac{11}{13}, \\frac{13}{11}, \\frac{15}{14}, \\frac{15}{2}, \\frac{55}{1} \\right)"
},
{
"math_id": 19,
"text": "N - 1 + (6N+2)(N-b) + 2 \\sum\\limits^{N-1}_{d=b} \\left\\lfloor \\frac{N}{d} \\right\\rfloor,"
},
{
"math_id": 20,
"text": "b < N"
},
{
"math_id": 21,
"text": "\\lfloor x \\rfloor"
},
{
"math_id": 22,
"text": "\\left( \\frac{7}{3}, \\frac{99}{98}, \\frac{13}{49}, \\frac{39}{35}, \\frac{36}{91}, \\frac{10}{143}, \\frac{49}{13}, \\frac{7}{11}, \\frac{1}{2}, \\frac{91}{1} \\right)."
},
{
"math_id": 23,
"text": "\\left( \\frac{3 \\cdot 11}{2^2 \\cdot 5} , \\frac{5}{11}, \\frac{13}{2 \\cdot 5}, \\frac{1}{5}, \\frac{2}{3}, \\frac{2 \\cdot 5}{7}, \\frac{7}{2} \\right)"
}
]
| https://en.wikipedia.org/wiki?curid=13408203 |
13409455 | Clarkson's inequalities | In mathematics, Clarkson's inequalities, named after James A. Clarkson, are results in the theory of "L""p" spaces. They give bounds for the "L""p"-norms of the sum and difference of two measurable functions in "L""p" in terms of the "L""p"-norms of those functions individually.
Statement of the inequalities.
Let ("X", Σ, "μ") be a measure space; let "f", "g" : "X" → R be measurable functions in "L""p". Then, for 2 ≤ "p" < +∞,
formula_0
For 1 < "p" < 2,
formula_1
where
formula_2
i.e., "q" = "p" ⁄ ("p" − 1).
The case "p" ≥ 2 is somewhat easier to prove, being a simple application of the triangle inequality and the convexity of
formula_3 | [
{
"math_id": 0,
"text": "\\left\\| \\frac{f + g}{2} \\right\\|_{L^p}^p + \\left\\| \\frac{f - g}{2} \\right\\|_{L^p}^p \\le \\frac{1}{2} \\left( \\| f \\|_{L^p}^p + \\| g \\|_{L^p}^p \\right)."
},
{
"math_id": 1,
"text": "\\left\\| \\frac{f + g}{2} \\right\\|_{L^p}^q + \\left\\| \\frac{f - g}{2} \\right\\|_{L^p}^q \\le \\left( \\frac{1}{2} \\| f \\|_{L^p}^p +\\frac{1}{2} \\| g \\|_{L^p}^p \\right)^\\frac{q}{p},"
},
{
"math_id": 2,
"text": "\\frac1{p} + \\frac1{q} = 1,"
},
{
"math_id": 3,
"text": "x \\mapsto x^p. "
}
]
| https://en.wikipedia.org/wiki?curid=13409455 |
1341151 | Cousin | Descendant of an ancestor's sibling
A cousin is a relative that is the child of a parent's sibling; this is more specifically referred to as a first cousin.
More generally, in the kinship system used in the English-speaking world, a cousin is a type of relationship in which relatives are two or more generations away from their most recent common ancestor. For this definition degrees and removals are used to further specify the relationship.
Degree measures how distant the relationship is from the most recent common ancestor(s). If the cousins do not come from the same generation, removal is specified, as removal measures the difference in generations between the two cousins. When the removal is not specified, no removal is assumed.
Various governmental entities have established systems for legal use that can precisely specify kinship with common ancestors any number of generations in the past; for example, in medicine and in law, a first cousin is a type of third-degree relative.
Basic definitions.
People are related with a type of cousin relationship if they share a common ancestor, and are separated from their most recent common ancestor by two or more generations. This means neither person is an ancestor of the other, they do not share a parent (are not siblings), and neither is a sibling of the other's parent (are not the other's aunt/uncle nor niece/nephew). In the English system the cousin relationship is further detailed by the concepts of "degree" and "removal".
The "degree" is the number of generations subsequent to the common ancestor before a parent of one of the cousins is found. This means the degree is the separation of the cousin from the common ancestor less one. Also, if the cousins are not separated from the common ancestor by the same number of generations, the cousin with the smallest separation is used to determine the degree. The "removal" is the difference between the number of generations from each cousin to the common ancestor. Two people can be removed but be around the same age due to differences in birth dates of parents, children, and other relevant ancestors.
Additional terms.
Gender-based distinctions.
A maternal cousin is a cousin that is related to the mother's side of the family, while a paternal cousin is a cousin that is related to the father's side of the family. This relationship is not necessarily reciprocal, as the maternal cousin of one person could be the paternal cousin of the other. In the example Basic family tree, Emma is David's maternal cousin and David is Emma's paternal cousin.
Parallel and cross cousins on the other hand are reciprocal relationships. Parallel cousins are descended from same-sex siblings. A parallel first cousin relationship exists when both the subject and relative are maternal cousins, or both are paternal cousins. Cross cousins are descendants from opposite-sex siblings. A cross first cousin relationship exists when the subject and the relative are maternal cousin and paternal cousin to each other. In the basic family tree example, David and Emma are cross cousins.
Multiplicities.
Double cousins are relatives who are cousins from two different branches of the family tree. This occurs when siblings, respectively, reproduce with different siblings from another family. This may also be referred to as "cousins on both sides". The resulting children are related to each other through both their parents and are thus doubly related. Double first cousins share both sets of grandparents.
Half cousins are descended from half siblings and would share one grandparent. The children of two half siblings are first half cousins. If half siblings have children with another pair of half siblings, the resulting children would be double half first cousins.
While there is no agreed upon term, it is possible for cousins to share three grandparents if a pair of half siblings had children with a pair of full siblings.
Non-blood relations.
Step-cousins are either stepchildren of an individual's aunt or uncle, nieces and nephews of one's step-parent, or the children of one's parent's step-sibling. A cousin-in-law is the cousin of one's spouse or the spouse of one's cousin.
Consanguinity.
Consanguinity is a measure of how closely individuals are related to each other. It is measured by the coefficient of relationship. Below, when discussing the coefficient of relationship, we assume the subject and the relative are related only through the kinship term. A coefficient of one represents the relationship one has with oneself. Consanguinity decreases by half for every generation of separation from the most recent common ancestor, as there are two parents for each child. When there is more than one common ancestor, the consanguinity between each ancestor is added together to get the final result.
Between first cousins, there are two shared ancestors each with four generations of separation, up and down the family tree: formula_0; their consanguinity is one-eighth. For each additional removal of the cousin relationship, consanguinity is reduced by half, as the generations of separation increase by one. For each additional degree of the cousin relationship, consanguinity is reduced by a quarter as the generations of separation increase by one on both sides.
Half cousins have half the consanguinity of ordinary cousins as they have half the common ancestors (i.e. one vs two). Double cousins have twice the consanguinity of ordinary cousins as they have twice the number of common ancestors (i.e. four vs two). Double first cousins share the same consanguinity as half-siblings. Likewise, double half cousins share the same consanguinity as first cousins as they both have two common ancestors. If there are half-siblings on one side and full siblings on the other, they would have three-halves the consanguinity of ordinary first cousins.
In a scenario where two monozygotic (identical) twins have children with another pair of monozygotic twins, the resulting double cousins would test as genetically similar as siblings.
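These rules can be summarised in a single expression. The following Python sketch (an illustrative formula following the conventions above, not a standard library function) computes the coefficient of relationship for cousins of a given degree, removal and number of shared ancestors:

from fractions import Fraction

def cousin_consanguinity(degree, removal=0, shared_ancestors=2):
    # Each path through a common ancestor has (degree + 1) generations up and
    # (degree + 1 + removal) generations down, halving the coefficient at each step.
    separation = 2 * (degree + 1) + removal
    return shared_ancestors * Fraction(1, 2) ** separation

print(cousin_consanguinity(1))                         # first cousins: 1/8
print(cousin_consanguinity(1, removal=1))              # first cousins once removed: 1/16
print(cousin_consanguinity(2))                         # second cousins: 1/32
print(cousin_consanguinity(1, shared_ancestors=1))     # half first cousins: 1/16
print(cousin_consanguinity(1, shared_ancestors=4))     # double first cousins: 1/4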
Reproduction.
Couples that are closely related have an increased chance of sharing genes, including mutations that occurred in their family tree. If the mutation is a recessive trait, it will not reveal itself unless both father and mother share it. Due to the risk that the trait is harmful, children of high-consanguinity parents have an increased risk of recessive genetic disorders. See inbreeding for more information.
Closely related couples have more children. Couples related with consanguinity equivalent to that of third cousins have the greatest reproductive success. This seems counterintuitive as closely related parents have a higher probability of having offspring that are unfit, yet closer kinship can also decrease the likelihood of immunological incompatibility during pregnancy.
Cousin marriage.
Cousin marriage is important in several anthropological theories, which often differentiate between matriarchal and patriarchal parallel and cross cousins.
Currently about 10% and historically as high as 80% of all marriages are between first or second cousins. Cousin marriages are often arranged. Anthropologists believe it is used as a tool to strengthen the family, conserve its wealth, protect its cultural heritage, and retain the power structure of the family and its place in the community. Some groups encourage cousin marriage while others attach a strong social stigma to it. In some regions in the Middle East, more than half of all marriages are between first or second cousins (in some of the countries in this region, this may exceed 70%). Just outside this region, it is often legal but infrequent. Many cultures have encouraged specifically cross-cousin marriages. In other places, it is legally prohibited and culturally equivalent to incest. Supporters of cousin marriage often view the prohibition as discrimination, while opponents claim potential immorality.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(\\tfrac{1}{2}\\right)^4 + \\left(\\tfrac{1}{2}\\right)^4"
}
]
| https://en.wikipedia.org/wiki?curid=1341151 |
13415343 | Regular matroid | Matroid that can be represented over all fields
In mathematics, a regular matroid is a matroid that can be represented over all fields.
Definition.
A matroid is defined to be a family of subsets of a finite set, satisfying certain axioms. The sets in the family are called "independent sets". One of the ways of constructing a matroid is to select a finite set of vectors in a vector space, and to define a subset of the vectors to be independent in the matroid when it is linearly independent in the vector space. Every family of sets constructed in this way is a matroid, but not every matroid can be constructed in this way, and the vector spaces over different fields lead to different sets of matroids that can be constructed from them.
A matroid formula_0 is regular when, for every field formula_1, formula_0 can be represented by a system of vectors over formula_1.
Properties.
If a matroid is regular, so is its dual matroid, and so is every one of its minors. Every direct sum of regular matroids remains regular.
Every graphic matroid (and every co-graphic matroid) is regular. Conversely, every regular matroid may be constructed by combining graphic matroids, co-graphic matroids, and a certain ten-element matroid that is neither graphic nor co-graphic, using an operation for combining matroids that generalizes the clique-sum operation on graphs.
The number of bases in a regular matroid may be computed as the determinant of an associated matrix, generalizing Kirchhoff's matrix-tree theorem for graphic matroids.
Characterizations.
The uniform matroid formula_2 (the four-point line) is not regular: it cannot be realized over the two-element finite field GF(2), so it is not a binary matroid, although it can be realized over all other fields. The matroid of the Fano plane (a rank-three matroid in which seven of the triples of points are dependent) and its dual are also not regular: they can be realized over GF(2), and over all fields of characteristic two, but not over any other fields than those. As showed, these three examples are fundamental to the theory of regular matroids: every non-regular matroid has at least one of these three as a minor. Thus, the regular matroids are exactly the matroids that do not have one of the three forbidden minors formula_2, the Fano plane, or its dual.
If a matroid is regular, it must clearly be realizable over the two fields GF(2) and GF(3). The converse is true: every matroid that is realizable over both of these two fields is regular. The result follows from a forbidden minor characterization of the matroids realizable over these fields, part of a family of results codified by Rota's conjecture.
The regular matroids are the matroids that can be defined from a totally unimodular matrix, a matrix in which every square submatrix has determinant 0, 1, or −1. The vectors realizing the matroid may be taken as the rows of the matrix. For this reason, regular matroids are sometimes also called unimodular matroids. The equivalence of regular matroids and unimodular matrices, and their characterization by forbidden minors, are deep results of W. T. Tutte, originally proved by him using the Tutte homotopy theorem. later published an alternative and simpler proof of the characterization of unimodular matrices by forbidden minors.
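For small matrices the defining property can be checked by brute force. The following Python sketch is an illustration only; the example matrix is the oriented incidence matrix of a directed 4-cycle, a standard source of totally unimodular matrices:

from itertools import combinations

def det(m):
    # Cofactor expansion along the first row; fine for tiny matrices.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def totally_unimodular(matrix):
    rows, cols = len(matrix), len(matrix[0])
    for k in range(1, min(rows, cols) + 1):
        for ri in combinations(range(rows), k):
            for ci in combinations(range(cols), k):
                sub = [[matrix[r][c] for c in ci] for r in ri]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

example = [[ 1,  0,  0, -1],
           [-1,  1,  0,  0],
           [ 0, -1,  1,  0],
           [ 0,  0, -1,  1]]
print(totally_unimodular(example))   # True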
Algorithms.
There is a polynomial time algorithm for testing whether a matroid is regular, given access to the matroid through an independence oracle.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "U{}^2_4"
}
]
| https://en.wikipedia.org/wiki?curid=13415343 |
1341579 | Angle of parallelism | An angle in certain right triangles in the hyperbolic plane
In hyperbolic geometry, angle of parallelism formula_0 is the angle at the non-right angle vertex of a right hyperbolic triangle having two asymptotic parallel sides. The angle depends on the segment length "a" between the right angle and the vertex of the angle of parallelism.
Given a point not on a line, drop a perpendicular to the line from the point. Let "a" be the length of this perpendicular segment, and formula_0 be the least angle such that the line drawn through the point at that angle does not intersect the given line. Since two sides are asymptotically parallel,
formula_1
There are five equivalent expressions that relate " formula_2" and "a":
formula_3
formula_4
formula_5
formula_6
formula_7
where sinh, cosh, tanh, sech and csch are hyperbolic functions and gd is the Gudermannian function.
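As a quick numerical sanity check of these equivalent expressions (the value "a" = 1.25 below is an arbitrary choice), one can compute formula_0 from the fourth expression and verify the others:

```python
import math

def angle_of_parallelism(a):
    # From tan(Pi(a)/2) = exp(-a)
    return 2.0 * math.atan(math.exp(-a))

a = 1.25
phi = angle_of_parallelism(a)

assert math.isclose(math.sin(phi), 1.0 / math.cosh(a))        # sin Pi(a) = sech a
assert math.isclose(math.cos(phi), math.tanh(a))               # cos Pi(a) = tanh a
assert math.isclose(math.tan(phi), 1.0 / math.sinh(a))         # tan Pi(a) = csch a
gd = math.atan(math.sinh(a))                                    # Gudermannian function
assert math.isclose(phi, math.pi / 2.0 - gd)                    # Pi(a) = pi/2 - gd(a)
print(math.degrees(phi))
```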
Construction.
János Bolyai discovered a construction which gives the asymptotic parallel "s" to a line "r" passing through a point "A" not on "r". Drop a perpendicular from "A" onto "B" on "r". Choose any point "C" on "r" different from "B". Erect a perpendicular "t" to "r" at "C". Drop a perpendicular from "A" onto "D" on "t". Then length "DA" is longer than "CB", but shorter than "CA". Draw a circle around "C" with radius equal to "DA". It will intersect the segment "AB" at a point "E". Then the angle "BEC" is independent of the length "BC", depending only on "AB"; it is the angle of parallelism. Construct "s" through "A" at angle "BEC" from "AB".
formula_8
See Trigonometry of right triangles for the formulas used here.
History.
The angle of parallelism was developed in 1840 in the German publication "Geometrische Untersuchungen zur Theorie der Parallellinien" by Nikolai Lobachevsky.
This publication became widely known in English after the Texas professor G. B. Halsted produced a translation in 1891. ("Geometrical Researches on the Theory of Parallels")
The following passages define this pivotal concept in hyperbolic geometry:
"The angle HAD between the parallel HA and the perpendicular AD is called the parallel angle (angle of parallelism) which we will here designate by Π(p) for AD = p".
Demonstration.
In the Poincaré half-plane model of the hyperbolic plane (see Hyperbolic motions), one can establish the relation of "Φ" to "a" with Euclidean geometry. Let "Q" be the semicircle with diameter on the "x"-axis that passes through the points (1,0) and (0,"y"), where "y" > 1. Since "Q" is tangent to the unit semicircle centered at the origin, the two semicircles represent "parallel hyperbolic lines". The "y"-axis crosses both semicircles, making a right angle with the unit semicircle and a variable angle "Φ" with "Q". The angle at the center of "Q" subtended by the radius to (0, "y") is also "Φ" because the two angles have sides that are perpendicular, left side to left side, and right side to right side. The semicircle "Q" has its center at ("x", 0), "x" < 0, so its radius is 1 − "x". Thus, the radius squared of "Q" is
formula_9
hence
formula_10
The metric of the Poincaré half-plane model of hyperbolic geometry parametrizes distance on the ray {(0, "y") : "y" > 0 } with logarithmic measure. Let the hyperbolic distance from (0, "y") to (0, 1) be "a", so: log "y" − log 1 = "a", so "y" = "e"^"a" where "e" is the base of the natural logarithm. Then
the relation between "Φ" and "a" can be deduced from the triangle {("x", 0), (0, 0), (0, "y")}, for example:
formula_11
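The relation derived from this triangle can also be verified numerically; a short sketch (with an arbitrary value of "a") follows:

```python
import math

a = 0.8
y = math.exp(a)                 # the point (0, y) at hyperbolic distance a from (0, 1)
x = (1 - y**2) / 2              # center of the semicircle Q, as derived above
phi = math.atan2(y, -x)         # the angle at the center of Q

assert math.isclose(math.tan(phi), 1.0 / math.sinh(a))
assert math.isclose(phi, 2.0 * math.atan(math.exp(-a)))   # agrees with the angle of parallelism
```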
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Pi(a) "
},
{
"math_id": 1,
"text": " \\lim_{a\\to 0} \\Pi(a) = \\tfrac{1}{2}\\pi\\quad\\text{ and }\\quad\\lim_{a\\to\\infty} \\Pi(a) = 0. "
},
{
"math_id": 2,
"text": " \\Pi(a)"
},
{
"math_id": 3,
"text": " \\sin\\Pi(a) = \\operatorname{sech} a = \\frac{1}{\\cosh a} =\\frac{2}{e^a + e^{-a}} \\ , "
},
{
"math_id": 4,
"text": " \\cos\\Pi(a) = \\tanh a = \\frac {e^a - e^{-a}} {e^a + e^{-a}} \\ , "
},
{
"math_id": 5,
"text": " \\tan\\Pi(a) = \\operatorname{csch} a = \\frac{1}{\\sinh a} = \\frac {2}{e^a - e^{-a}} \\ , "
},
{
"math_id": 6,
"text": " \\tan \\left( \\tfrac{1}{2}\\Pi(a) \\right) = e^{-a}, "
},
{
"math_id": 7,
"text": " \\Pi(a) = \\tfrac{1}{2}\\pi - \\operatorname{gd}(a), "
},
{
"math_id": 8,
"text": " \\sin BEC = \\frac{ \\sinh {BC} }{ \\sinh {CE} } = \\frac{ \\sinh {BC} }{ \\sinh {DA} } = \\frac{ \\sinh {BC} }{ \\sin {ACD} \\sinh {CA} } = \\frac{ \\sinh {BC} }{ \\cos {ACB} \\sinh {CA} } = \\frac{ \\sinh {BC} \\tanh {CA} }{ \\tanh {CB} \\sinh {CA} } = \\frac{ \\cosh {BC} }{ \\cosh {CA} } = \\frac{ \\cosh {BC} }{ \\cosh {CB} \\cosh {AB} } = \\frac{ 1 }{ \\cosh {AB} } \\,."
},
{
"math_id": 9,
"text": " x^2 + y^2 = (1 - x)^2, "
},
{
"math_id": 10,
"text": " x = \\tfrac{1}{2}(1 - y^2). "
},
{
"math_id": 11,
"text": " \\tan\\phi = \\frac{y}{-x} = \\frac{2y}{y^2 - 1} = \\frac{2e^a}{e^{2a} - 1} = \\frac{1}{\\sinh a}. "
}
]
| https://en.wikipedia.org/wiki?curid=1341579 |
1341657 | Engel's theorem | Theorem in Lie representation theory
In representation theory, a branch of mathematics, Engel's theorem states that a finite-dimensional Lie algebra formula_0 is a nilpotent Lie algebra if and only if for each formula_1, the adjoint map
formula_2
given by formula_3, is a nilpotent endomorphism on formula_4; i.e., formula_5 for some "k". It is a consequence of the theorem, also called Engel's theorem, which says that if a Lie algebra of matrices consists of nilpotent matrices, then the matrices can all be simultaneously brought to a strictly upper triangular form. Note that if we merely have a Lie algebra of matrices which is nilpotent "as a Lie algebra", then this conclusion does "not" follow (i.e. the naïve replacement in Lie's theorem of "solvable" with "nilpotent", and "upper triangular" with "strictly upper triangular", is false; this already fails for the one-dimensional Lie subalgebra of scalar matrices).
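As a small concrete check of the connection between simultaneously strictly upper triangular matrices and nilpotent adjoint maps, one can take the three-dimensional Heisenberg Lie algebra; the basis and coordinate choices below are conventional illustrations, not part of the theorem:

```python
import numpy as np

def comm(a, b):
    """Lie bracket [a, b] = ab - ba of matrices."""
    return a @ b - b @ a

# Heisenberg Lie algebra: strictly upper triangular 3x3 matrices, with [X, Y] = Z
X = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
Y = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
Z = np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]])
basis = [X, Y, Z]

def ad_matrix(A):
    """Matrix of ad(A) in the ordered basis (X, Y, Z)."""
    cols = []
    for B in basis:
        C = comm(A, B)
        cols.append([C[0, 1], C[1, 2], C[0, 2]])   # coordinates of C in the basis
    return np.array(cols).T

for A in basis:
    # Each basis element is a nilpotent matrix, and its adjoint map is nilpotent too
    assert np.allclose(np.linalg.matrix_power(A, 3), 0)
    assert np.allclose(np.linalg.matrix_power(ad_matrix(A), 3), 0)
```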
The theorem is named after the mathematician Friedrich Engel, who sketched a proof of it in a letter to Wilhelm Killing dated 20 July 1890 . Engel's student K.A. Umlauf gave a complete proof in his 1891 dissertation, reprinted as .
Statements.
Let formula_6 be the Lie algebra of the endomorphisms of a finite-dimensional vector space "V" and formula_7 a subalgebra. Then Engel's theorem states the following are equivalent:
1. Each formula_8 is a nilpotent endomorphism on "V".
2. There exists a flag formula_9 such that formula_10; i.e., the elements of formula_0 are simultaneously strictly upper-triangularizable.
Note that no assumption on the underlying base field is required.
We note that Statement 2. for various formula_0 and "V" is equivalent to the statement: for each nonzero finite-dimensional vector space "V" and a subalgebra formula_7 consisting of nilpotent endomorphisms, there exists a nonzero vector "v" in "V" such that formula_11 for each formula_12
This is the form of the theorem proven in #Proof. (This statement is trivially equivalent to Statement 2 since it allows one to inductively construct a flag with the required property.)
In general, a Lie algebra formula_0 is said to be nilpotent if the lower central series of it vanishes in a finite step; i.e., for formula_13 = ("i"+1)-th power of formula_0, there is some "k" such that formula_14. Then Engel's theorem implies the following theorem (also called Engel's theorem): when formula_0 has finite dimension, formula_0 is nilpotent if and only if formula_16 consists of nilpotent endomorphisms.
Indeed, if formula_16 consists of nilpotent operators, then by 1. formula_17 2. applied to the algebra formula_18, there exists a flag formula_19 such that formula_20. Since formula_21, this implies formula_0 is nilpotent. (The converse follows straightforwardly from the definition.)
Proof.
We prove the following form of the theorem: "if formula_22 is a Lie subalgebra such that every formula_8 is a nilpotent endomorphism and if "V" has positive dimension, then there exists a nonzero vector "v" in "V" such that formula_11 for each "X" in formula_4."
The proof is by induction on the dimension of formula_4 and consists of a few steps. (Note the structure of the proof is very similar to that for Lie's theorem, which concerns a solvable algebra.) The basic case is trivial and we assume the dimension of formula_4 is positive.
Step 1: Find an ideal formula_23 of codimension one in formula_4.
This is the most difficult step. Let formula_23 be a maximal (proper) subalgebra of formula_4, which exists by finite-dimensionality. We claim it is an ideal of codimension one. For each formula_24, it is easy to check that (1) formula_15 induces a linear endomorphism formula_25 and (2) this induced map is nilpotent (in fact, formula_15 is nilpotent as formula_26 is nilpotent; see Jordan decomposition in Lie algebras). Thus, by inductive hypothesis applied to the Lie subalgebra of formula_27 generated by formula_28, there exists a nonzero vector "v" in formula_29 such that formula_30 for each formula_31. That is to say, if formula_32 for some "Y" in formula_4 but not in formula_33, then formula_34 for every formula_31. But then the subspace formula_35 spanned by formula_23 and "Y" is a Lie subalgebra in which formula_23 is an ideal of codimension one. Hence, by maximality, formula_36. This proves the claim.
Step 2: Let formula_37. Then formula_4 stabilizes "W"; i.e., formula_38 for each formula_39.
Indeed, for formula_40 in formula_4 and formula_26 in formula_23, we have: formula_41 since formula_23 is an ideal and so formula_42. Thus, formula_43 is in "W".
Step 3: Finish up the proof by finding a nonzero vector that gets killed by formula_4.
Write formula_44 where "L" is a one-dimensional vector subspace. Let "Y" be a nonzero vector in "L" and "v" a nonzero vector in "W". Now, formula_40 is a nilpotent endomorphism (by hypothesis) and so formula_45 for some "k". Then formula_46 is a required vector as the vector lies in "W" by Step 2. formula_47
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Works cited.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak g"
},
{
"math_id": 1,
"text": "X \\in \\mathfrak g"
},
{
"math_id": 2,
"text": "\\operatorname{ad}(X)\\colon \\mathfrak{g} \\to \\mathfrak{g},"
},
{
"math_id": 3,
"text": "\\operatorname{ad}(X)(Y) = [X, Y]"
},
{
"math_id": 4,
"text": "\\mathfrak{g}"
},
{
"math_id": 5,
"text": "\\operatorname{ad}(X)^k = 0"
},
{
"math_id": 6,
"text": "\\mathfrak{gl}(V)"
},
{
"math_id": 7,
"text": "\\mathfrak g \\subset \\mathfrak{gl}(V)"
},
{
"math_id": 8,
"text": "X \\in \\mathfrak{g}"
},
{
"math_id": 9,
"text": "V = V_0 \\supset V_1 \\supset \\cdots \\supset V_n =\n0, \\, \\operatorname{codim} V_i = i"
},
{
"math_id": 10,
"text": "\\mathfrak g \\cdot V_i \\subset V_{i+1}"
},
{
"math_id": 11,
"text": "X(v) = 0"
},
{
"math_id": 12,
"text": "X \\in \\mathfrak g."
},
{
"math_id": 13,
"text": "C^0 \\mathfrak g = \\mathfrak g, C^i \\mathfrak g = [\\mathfrak g, C^{i-1} \\mathfrak g]"
},
{
"math_id": 14,
"text": "C^k \\mathfrak g = 0"
},
{
"math_id": 15,
"text": "\\operatorname{ad}(X)"
},
{
"math_id": 16,
"text": "\\operatorname{ad}(\\mathfrak g)"
},
{
"math_id": 17,
"text": "\\Leftrightarrow"
},
{
"math_id": 18,
"text": "\\operatorname{ad}(\\mathfrak g) \\subset \\mathfrak{gl}(\\mathfrak g)"
},
{
"math_id": 19,
"text": "\\mathfrak g = \\mathfrak{g}_0 \\supset \\mathfrak{g}_1 \\supset \\cdots \\supset \\mathfrak{g}_n = 0"
},
{
"math_id": 20,
"text": "[\\mathfrak g, \\mathfrak g_i] \\subset \\mathfrak g_{i+1}"
},
{
"math_id": 21,
"text": "C^i \\mathfrak g\\subset \\mathfrak g_i"
},
{
"math_id": 22,
"text": "\\mathfrak{g} \\subset \\mathfrak{gl}(V)"
},
{
"math_id": 23,
"text": "\\mathfrak{h}"
},
{
"math_id": 24,
"text": "X \\in \\mathfrak h"
},
{
"math_id": 25,
"text": "\\mathfrak{g}/\\mathfrak{h} \\to \\mathfrak{g}/\\mathfrak{h}"
},
{
"math_id": 26,
"text": "X"
},
{
"math_id": 27,
"text": "\\mathfrak{gl}(\\mathfrak{g}/\\mathfrak{h})"
},
{
"math_id": 28,
"text": "\\operatorname{ad}(\\mathfrak{h})"
},
{
"math_id": 29,
"text": "\\mathfrak{g}/\\mathfrak{h}"
},
{
"math_id": 30,
"text": "\\operatorname{ad}(X)(v) = 0"
},
{
"math_id": 31,
"text": "X \\in \\mathfrak{h}"
},
{
"math_id": 32,
"text": "v = [Y]"
},
{
"math_id": 33,
"text": "\\mathfrak h"
},
{
"math_id": 34,
"text": "[X, Y] = \\operatorname{ad}(X)(Y) \\in \\mathfrak{h}"
},
{
"math_id": 35,
"text": "\\mathfrak{h}' \\subset \\mathfrak{g}"
},
{
"math_id": 36,
"text": "\\mathfrak{h}' = \\mathfrak g"
},
{
"math_id": 37,
"text": "W = \\{ v \\in V | X(v) = 0, X \\in \\mathfrak{h} \\}"
},
{
"math_id": 38,
"text": "X (v) \\in W"
},
{
"math_id": 39,
"text": "X \\in \\mathfrak{g}, v \\in W"
},
{
"math_id": 40,
"text": "Y"
},
{
"math_id": 41,
"text": "X(Y(v)) = Y(X(v)) + [X, Y](v) = 0"
},
{
"math_id": 42,
"text": "[X, Y] \\in \\mathfrak{h}"
},
{
"math_id": 43,
"text": "Y(v)"
},
{
"math_id": 44,
"text": "\\mathfrak{g} = \\mathfrak{h} + L"
},
{
"math_id": 45,
"text": "Y^k(v) \\ne 0, Y^{k+1}(v) = 0"
},
{
"math_id": 46,
"text": "Y^k(v)"
},
{
"math_id": 47,
"text": "\\square"
}
]
| https://en.wikipedia.org/wiki?curid=1341657 |
1342156 | Artin–Mazur zeta function | In mathematics, the Artin–Mazur zeta function, named after Michael Artin and Barry Mazur, is a function that is used for studying the iterated functions that occur in dynamical systems and fractals.
It is defined from a given function formula_0 as the formal power series
formula_1
where formula_2 is the set of fixed points of the formula_3th iterate of the function formula_0, and formula_4 is the number of fixed points (i.e. the cardinality of that set).
Note that the zeta function is defined only if the set of fixed points is finite for each formula_3. This definition is formal in that the series does not always have a positive radius of convergence.
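For a map of a finite set the fixed-point counts, and hence the power-series coefficients of the zeta function, can be computed directly. The sketch below uses a small, arbitrarily chosen map whose periodic points form one fixed point and one 3-cycle, so its zeta function is 1/((1 − "z")(1 − "z"³)):

```python
from fractions import Fraction

# An arbitrary map f of the finite set {0, ..., 4}
f = {0: 1, 1: 2, 2: 0, 3: 3, 4: 3}

def fix_count(n):
    """Number of fixed points of the n-th iterate of f."""
    total = 0
    for x in f:
        y = x
        for _ in range(n):
            y = f[y]
        total += (y == x)
    return total

N = 6
fixes = [fix_count(n) for n in range(1, N + 1)]

# Coefficients of zeta_f(z) = exp(sum_n |Fix(f^n)| z^n / n), via zeta' = (log zeta)' * zeta
zeta = [Fraction(1)] + [Fraction(0)] * N
for n in range(1, N + 1):
    zeta[n] = sum(Fraction(fixes[k - 1]) * zeta[n - k] for k in range(1, n + 1)) / n

print(zeta)   # [1, 1, 1, 2, 2, 2, 3], matching the expansion of 1/((1 - z)(1 - z^3))
```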
The Artin–Mazur zeta function is invariant under topological conjugation.
The Milnor–Thurston theorem states that the Artin–Mazur zeta function of an interval map formula_0 is the inverse of the kneading determinant of formula_0.
Analogues.
The Artin–Mazur zeta function is formally similar to the local zeta function, when a diffeomorphism on a compact manifold replaces the Frobenius mapping for an algebraic variety over a finite field.
The Ihara zeta function of a graph can be interpreted as an example of the Artin–Mazur zeta function. | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "\\zeta_f(z)=\\exp \\left(\\sum_{n=1}^\\infty \n\\bigl|\\operatorname{Fix} (f^n)\\bigr| \\frac {z^n}{n}\\right),"
},
{
"math_id": 2,
"text": "\\operatorname{Fix} (f^n)"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "|\\operatorname{Fix} (f^n)|"
}
]
| https://en.wikipedia.org/wiki?curid=1342156 |
1342362 | Ihara zeta function | In mathematics, the Ihara zeta function is a zeta function associated with a finite graph. It closely resembles the Selberg zeta function, and is used to relate closed walks to the spectrum of the adjacency matrix. The Ihara zeta function was first defined by Yasutaka Ihara in the 1960s in the context of discrete subgroups of the two-by-two p-adic special linear group. Jean-Pierre Serre suggested in his book "Trees" that Ihara's original definition can be reinterpreted graph-theoretically. It was Toshikazu Sunada who put this suggestion into practice in 1985. As observed by Sunada, a regular graph is a Ramanujan graph if and only if its Ihara zeta function satisfies an analogue of the Riemann hypothesis.
Definition.
The Ihara zeta function is defined as the analytic continuation of the infinite product
formula_0
where "L"("p") is the "length" formula_1 of formula_2.
The product in the definition is taken over all prime closed geodesics formula_2 of the graph formula_3, where geodesics which differ by a cyclic rotation are considered equal. A "closed geodesic" formula_2 on formula_4 (known in graph theory as a "reduced closed walk"; it is not a graph geodesic) is a finite sequence of vertices formula_5 such that
formula_6
formula_7
The integer formula_8 is the length formula_1. The closed geodesic formula_2 is "prime" if it cannot be obtained by repeating a closed geodesic formula_9 times, for an integer formula_10.
This graph-theoretic formulation is due to Sunada.
Ihara's formula.
Ihara (and Sunada in the graph-theoretic setting) showed that for regular graphs the zeta function is a rational function.
If formula_4 is a formula_11-regular graph with adjacency matrix formula_12 then
formula_13
where formula_14 is the circuit rank of formula_4. If formula_4 is connected and has formula_15 vertices, formula_16.
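As an illustration of Ihara's formula, the reciprocal of the zeta function of the complete graph K4 (an arbitrarily chosen 3-regular graph, so "q" = 2) can be computed symbolically; the sympy-based sketch below is just one way to do this:

```python
import sympy as sp

u = sp.symbols('u')

# Complete graph K4: a (q+1)-regular graph with q = 2 and n = 4 vertices
A = sp.Matrix([[0, 1, 1, 1],
               [1, 0, 1, 1],
               [1, 1, 0, 1],
               [1, 1, 1, 0]])
q, n = 2, 4
r_minus_one = (q - 1) * n // 2      # r(G) - 1 for a connected graph, here 2

inv_zeta = sp.expand((1 - u**2)**r_minus_one
                     * (sp.eye(n) - A * u + q * u**2 * sp.eye(n)).det())
print(sp.factor(inv_zeta))          # 1 / zeta_G(u), a polynomial of degree 2|E| = 12
```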
The Ihara zeta-function is in fact always the reciprocal of a graph polynomial:
formula_17
where formula_18 is Ki-ichiro Hashimoto's edge adjacency operator. Hyman Bass gave a determinant formula involving the adjacency operator.
Applications.
The Ihara zeta function plays an important role in the study of free groups, spectral graph theory, and dynamical systems, especially symbolic dynamics, where the Ihara zeta function is an example of a Ruelle zeta function.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\zeta_{G}(u)=\\prod_{p}\\frac{1}{1-u^{{L}(p)}},"
},
{
"math_id": 1,
"text": "L(p)"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "G = (V, E)"
},
{
"math_id": 4,
"text": "G"
},
{
"math_id": 5,
"text": "p = (v_0, \\ldots, v_{k-1})"
},
{
"math_id": 6,
"text": " (v_i, v_{(i+1)\\bmod k}) \\in E, "
},
{
"math_id": 7,
"text": " v_i \\neq v_{(i+2) \\bmod k}. "
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "m"
},
{
"math_id": 10,
"text": "m > 1"
},
{
"math_id": 11,
"text": "q+1"
},
{
"math_id": 12,
"text": "A"
},
{
"math_id": 13,
"text": "\\zeta_G(u) = \\frac{1}{(1-u^2)^{r(G)-1}\\det(I - Au + qu^2I)}, "
},
{
"math_id": 14,
"text": "r(G)"
},
{
"math_id": 15,
"text": "n"
},
{
"math_id": 16,
"text": "r(G)-1=(q-1)n/2"
},
{
"math_id": 17,
"text": "\\zeta_G(u) = \\frac{1}{\\det (I-Tu)}~,"
},
{
"math_id": 18,
"text": "T"
}
]
| https://en.wikipedia.org/wiki?curid=1342362 |
1342978 | Lerch zeta function | In mathematics, the Lerch zeta function, sometimes called the Hurwitz–Lerch zeta function, is a special function that generalizes the Hurwitz zeta function and the polylogarithm. It is named after Czech mathematician Mathias Lerch, who published a paper about the function in 1887.
Definition.
The Lerch zeta function is given by
formula_0
A related function, the Lerch transcendent, is given by
formula_1.
The series defining the transcendent converges for any real number formula_2, provided that either:
formula_3, or
formula_4 and formula_5 (see https://arxiv.org/pdf/math/0506319.pdf).
The two are related, as
formula_6
Integral representations.
The Lerch transcendent has an integral representation:
formula_7
The proof is based on using the integral definition of the Gamma function to write
formula_8
and then interchanging the sum and integral. The resulting integral representation converges for formula_9 Re("s") > 0, and Re("a") > 0. This analytically continues formula_10 to "z" outside the unit disk. The integral formula also holds if "z" = 1, Re("s") > 1, and Re("a") > 0; see Hurwitz zeta function.
A contour integral representation is given by
formula_11
where "C" is a Hankel contour counterclockwise around the positive real axis, not enclosing any of the points formula_12 (for integer "k") which are poles of the integrand. The integral assumes Re("a") > 0.
Other integral representations.
A Hermite-like integral representation is given by
formula_13
for
formula_14
and
formula_15
for
formula_16
Similar representations include
formula_17
and
formula_18
holding for positive "z" (and more generally wherever the integrals converge). Furthermore,
formula_19
The last formula is also known as "Lipschitz formula".
Special cases.
The Lerch zeta function and Lerch transcendent generalize various special functions.
The Hurwitz zeta function is the special case
formula_20
The polylogarithm is another special case:
formula_21
The Riemann zeta function is a special case of both of the above:
formula_22
Other special cases include:
formula_23
formula_24
formula_25
formula_26
Identities.
For λ rational, the summand is a root of unity, and thus formula_27 may be expressed as a finite sum over the Hurwitz zeta function. Suppose formula_28 with formula_29 and formula_30. Then formula_31 and formula_32.
formula_33
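This identity can be checked numerically, for example with mpmath; the parameter values below are arbitrary, and "q" = 2 makes the root of unity equal to −1:

```python
import mpmath as mp

mp.mp.dps = 30
s, a = mp.mpf(2), mp.mpf('0.7')    # arbitrary test values with Re(s) > 1, a > 0
p, q = 1, 2
omega = -1                          # exp(2*pi*i*p/q) for p/q = 1/2

lhs = mp.lerchphi(omega, s, a)
rhs = sum(omega**m * q**(-s) * mp.zeta(s, (m + a) / q) for m in range(q))
print(abs(lhs - rhs))               # ~ 0 to working precision
```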
Various identities include:
formula_34
and
formula_35
and
formula_36
Series representations.
A series representation for the Lerch transcendent is given by
formula_37
The series is valid for all "s", and for complex "z" with Re("z")<1/2. Here formula_38 denotes a binomial coefficient.
A Taylor series in the first parameter was given by Arthur Erdélyi. It may be written as the following series, which is valid for
formula_39
formula_40
If "n" is a positive integer, then
formula_41
where formula_42 is the digamma function.
A Taylor series in the third variable is given by
formula_43
where formula_44 is the Pochhammer symbol.
Series at "a" = −"n" is given by
formula_45
A special case for "n" = 0 has the following series
formula_46
where formula_47 is the polylogarithm.
An asymptotic series for formula_48
formula_49
for formula_50
and
formula_51
for formula_52
An asymptotic series in the incomplete gamma function
formula_53
for formula_54
The representation as a generalized hypergeometric function is
formula_55
Asymptotic expansion.
The polylogarithm function formula_56 is defined as
formula_57
Let
formula_58
For formula_59 and formula_60, an asymptotic expansion of formula_10 for large formula_61 and fixed formula_62 and formula_63 is given by
formula_64
for formula_65, where formula_66 is the Pochhammer symbol.
Let
formula_67
Let formula_68 be its Taylor coefficients at formula_69. Then for fixed formula_70 and formula_71,
formula_72
as formula_73.
Software.
The Lerch transcendent is implemented as LerchPhi in Maple and Mathematica, and as lerchphi in mpmath and SymPy.
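For instance, the mpmath implementation can be compared against a direct summation of the defining series; the parameter values below are arbitrary and chosen so that the series converges:

```python
import mpmath as mp

mp.mp.dps = 30
z, s, a = mp.mpf('0.5'), mp.mpf(3), mp.mpf(2)

series = mp.nsum(lambda n: z**n / (n + a)**s, [0, mp.inf])   # defining series, |z| < 1
builtin = mp.lerchphi(z, s, a)
print(series - builtin)   # ~ 0 to working precision
```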
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L(\\lambda, s, \\alpha) = \\sum_{n=0}^\\infty\n\\frac { e^{2\\pi i\\lambda n}} {(n+\\alpha)^s}."
},
{
"math_id": 1,
"text": "\\Phi(z, s, \\alpha) = \\sum_{n=0}^\\infty\n\\frac { z^n} {(n+\\alpha)^s}"
},
{
"math_id": 2,
"text": "\\alpha > 0"
},
{
"math_id": 3,
"text": "|z| < 1"
},
{
"math_id": 4,
"text": "\\mathfrak{R}(s) > 1"
},
{
"math_id": 5,
"text": "|z| = 1"
},
{
"math_id": 6,
"text": "\\,\\Phi(e^{2\\pi i\\lambda}, s,\\alpha)=L(\\lambda, s, \\alpha)."
},
{
"math_id": 7,
"text": "\n\\Phi(z,s,a)=\\frac{1}{\\Gamma(s)}\\int_0^\\infty\n\\frac{t^{s-1}e^{-at}}{1-ze^{-t}}\\,dt"
},
{
"math_id": 8,
"text": "\\Phi(z,s,a)\\Gamma(s)\n= \\sum_{n=0}^\\infty \\frac{z^n}{(n+a)^s} \\int_0^\\infty x^s e^{-x} \\frac{dx}{x}\n= \\sum_{n=0}^\\infty \\int_0^\\infty t^s z^n e^{-(n+a)t} \\frac{dt}{t}"
},
{
"math_id": 9,
"text": "z \\in \\Complex \\setminus [1,\\infty),"
},
{
"math_id": 10,
"text": "\\Phi(z,s,a)"
},
{
"math_id": 11,
"text": "\n\\Phi(z,s,a)=-\\frac{\\Gamma(1-s)}{2\\pi i} \\int_C \\frac{(-t)^{s-1}e^{-at}}{1-ze^{-t}}\\,dt"
},
{
"math_id": 12,
"text": "t = \\log(z) + 2k\\pi i"
},
{
"math_id": 13,
"text": "\n\\Phi(z,s,a)=\n\\frac{1}{2a^s}+\n\\int_0^\\infty \\frac{z^t}{(a+t)^s}\\,dt+\n\\frac{2}{a^{s-1}}\n\\int_0^\\infty\n\\frac{\\sin(s\\arctan(t)-ta\\log(z))}{(1+t^2)^{s/2}(e^{2\\pi at}-1)}\\,dt\n"
},
{
"math_id": 14,
"text": "\\Re(a)>0\\wedge |z|<1 "
},
{
"math_id": 15,
"text": "\n\\Phi(z,s,a)=\\frac{1}{2a^s}+\n\\frac{\\log^{s-1}(1/z)}{z^a}\\Gamma(1-s,a\\log(1/z))+\n\\frac{2}{a^{s-1}}\n\\int_0^\\infty\n\\frac{\\sin(s\\arctan(t)-ta\\log(z))}{(1+t^2)^{s/2}(e^{2\\pi at}-1)}\\,dt\n"
},
{
"math_id": 16,
"text": "\\Re(a)>0. "
},
{
"math_id": 17,
"text": "\n\\Phi(z,s,a)= \\frac{1}{2a^s} + \\int_{0}^{\\infty}\\frac{\\cos(t\\log z)\\sin\\Big(s\\arctan\\tfrac{t}{a}\\Big) - \\sin(t\\log z)\\cos\\Big(s\\arctan\\tfrac{t}{a}\\Big)}{\\big(a^2 + t^2\\big)^{\\frac{s}{2}} \\tanh\\pi t }\\,dt,\n"
},
{
"math_id": 18,
"text": "\\Phi(-z,s,a)= \\frac{1}{2a^s} + \\int_{0}^{\\infty}\\frac{\\cos(t\\log z)\\sin\\Big(s\\arctan\\tfrac{t}{a}\\Big) - \\sin(t\\log z)\\cos\\Big(s\\arctan\\tfrac{t}{a}\\Big)}{\\big(a^2 + t^2\\big)^{\\frac{s}{2}} \\sinh\\pi t }\\,dt,"
},
{
"math_id": 19,
"text": "\\Phi(e^{i\\varphi},s,a)=L\\big(\\tfrac{\\varphi}{2\\pi}, s, a\\big)= \\frac{1}{a^s} + \\frac{1}{2\\Gamma(s)}\\int_{0}^{\\infty}\\frac{t^{s-1}e^{-at}\\big(e^{i\\varphi}-e^{-t}\\big)}{\\cosh{t}-\\cos{\\varphi}}\\,dt,"
},
{
"math_id": 20,
"text": "\\zeta(s,\\alpha) = L(0, s, \\alpha) = \\Phi(1,s,\\alpha) = \\sum_{n=0}^\\infty \\frac{1}{(n+\\alpha)^s}."
},
{
"math_id": 21,
"text": "\\textrm{Li}_s(z)=z\\Phi(z,s,1) = \\sum_{n=1}^\\infty \\frac{z^n}{n^s}."
},
{
"math_id": 22,
"text": "\\zeta(s) = \\Phi(1,s,1) = \\sum_{n=1}^\\infty \\frac{1}{n^s}"
},
{
"math_id": 23,
"text": "\\eta(s) = \\Phi(-1,s,1) = \\sum_{n=1}^\\infty \\frac{(-1)^{n-1}}{n^s}"
},
{
"math_id": 24,
"text": "\\beta(s) = 2^{-s} \\Phi(-1,s,1/2) = \\sum_{k=0}^\\infty \\frac{(-1)^{k}}{(2k+1)^s}"
},
{
"math_id": 25,
"text": "\\chi_s(z)=2^{-s}z \\Phi(z^2,s,1/2) = \\sum_{k=0}^\\infty \\frac{z^{2k+1}}{(2k+1)^s}"
},
{
"math_id": 26,
"text": "\\psi^{(n)}(\\alpha)= (-1)^{n+1} n!\\Phi (1,n+1,\\alpha)"
},
{
"math_id": 27,
"text": "L(\\lambda, s, \\alpha)"
},
{
"math_id": 28,
"text": "\\lambda = \\frac{p}{q}"
},
{
"math_id": 29,
"text": "p, q \\in \\Z"
},
{
"math_id": 30,
"text": "q > 0"
},
{
"math_id": 31,
"text": "z = \\omega = e^{2 \\pi i \\frac{p}{q}}"
},
{
"math_id": 32,
"text": "\\omega^q = 1"
},
{
"math_id": 33,
"text": "\\Phi(\\omega, s, \\alpha) = \\sum_{n=0}^\\infty\n\\frac {\\omega^n} {(n+\\alpha)^s} = \\sum_{m=0}^{q-1} \\sum_{n=0}^\\infty \\frac {\\omega^{qn + m}}{(qn + m + \\alpha)^s} = \\sum_{m=0}^{q-1} \\omega^m q^{-s} \\zeta \\left( s,\\frac{m + \\alpha}{q} \\right) "
},
{
"math_id": 34,
"text": "\\Phi(z,s,a)=z^n \\Phi(z,s,a+n) + \\sum_{k=0}^{n-1} \\frac {z^k}{(k+a)^s}"
},
{
"math_id": 35,
"text": "\\Phi(z,s-1,a)=\\left(a+z\\frac{\\partial}{\\partial z}\\right) \\Phi(z,s,a)"
},
{
"math_id": 36,
"text": "\\Phi(z,s+1,a)=-\\frac{1}{s}\\frac{\\partial}{\\partial a} \\Phi(z,s,a)."
},
{
"math_id": 37,
"text": "\\Phi(z,s,q)=\\frac{1}{1-z}\n\\sum_{n=0}^\\infty \\left(\\frac{-z}{1-z} \\right)^n\n\\sum_{k=0}^n (-1)^k \\binom{n}{k} (q+k)^{-s}."
},
{
"math_id": 38,
"text": "\\tbinom{n}{k}"
},
{
"math_id": 39,
"text": "\\left|\\log(z)\\right| < 2 \\pi;s\\neq 1,2,3,\\dots; a\\neq 0,-1,-2,\\dots"
},
{
"math_id": 40,
"text": "\n\\Phi(z,s,a)=z^{-a}\\left[\\Gamma(1-s)\\left(-\\log (z)\\right)^{s-1}\n+\\sum_{k=0}^\\infty \\zeta(s-k,a)\\frac{\\log^k (z)}{k!}\\right]\n"
},
{
"math_id": 41,
"text": "\n\\Phi(z,n,a)=z^{-a}\\left\\{\n\\sum_{{k=0}\\atop k\\neq n-1}^ \\infty \\zeta(n-k,a)\\frac{\\log^k (z)}{k!}\n+\\left[\\psi(n)-\\psi(a)-\\log(-\\log(z))\\right]\\frac{\\log^{n-1}(z)}{(n-1)!}\n\\right\\},\n"
},
{
"math_id": 42,
"text": "\\psi(n)"
},
{
"math_id": 43,
"text": "\\Phi(z,s,a+x)=\\sum_{k=0}^\\infty \\Phi(z,s+k,a)(s)_{k}\\frac{(-x)^k}{k!};|x|<\\Re(a),"
},
{
"math_id": 44,
"text": "(s)_{k}"
},
{
"math_id": 45,
"text": "\n\\Phi(z,s,a)=\\sum_{k=0}^n \\frac{z^k}{(a+k)^s}\n+z^n\\sum_{m=0}^\\infty (1-m-s)_{m}\\operatorname{Li}_{s+m}(z)\\frac{(a+n)^m}{m!};\\ a\\rightarrow-n\n"
},
{
"math_id": 46,
"text": "\n\\Phi(z,s,a)=\\frac{1}{a^s}\n+\\sum_{m=0}^\\infty (1-m-s)_m \\operatorname{Li}_{s+m}(z)\\frac{a^m}{m!}; |a|<1,\n"
},
{
"math_id": 47,
"text": "\\operatorname{Li}_s(z)"
},
{
"math_id": 48,
"text": "s\\rightarrow-\\infty"
},
{
"math_id": 49,
"text": "\\Phi(z,s,a)=z^{-a}\\Gamma(1-s)\\sum_{k=-\\infty}^\\infty [2k\\pi i-\\log(z)]^{s-1}e^{2k\\pi ai}\n"
},
{
"math_id": 50,
"text": "|a|<1;\\Re(s)<0 ;z\\notin (-\\infty,0) "
},
{
"math_id": 51,
"text": "\n\\Phi(-z,s,a)=z^{-a}\\Gamma(1-s)\\sum_{k=-\\infty}^\\infty\n[(2k+1)\\pi i-\\log(z)]^{s-1}e^{(2k+1)\\pi ai}\n"
},
{
"math_id": 52,
"text": "|a|<1;\\Re(s)<0 ;z\\notin (0,\\infty). "
},
{
"math_id": 53,
"text": "\n\\Phi(z,s,a)=\\frac{1}{2a^s}+\n\\frac{1}{z^a}\\sum_{k=1}^\\infty\n\\frac{e^{-2\\pi i(k-1)a}\\Gamma(1-s,a(-2\\pi i(k-1)-\\log(z)))}\n {(-2\\pi i(k-1)-\\log(z))^{1-s}}+\n\\frac{e^{2\\pi ika}\\Gamma(1-s,a(2\\pi ik-\\log(z)))}{(2\\pi ik-\\log(z))^{1-s}}\n"
},
{
"math_id": 54,
"text": "|a|<1;\\Re(s)<0."
},
{
"math_id": 55,
"text": "\n\\Phi(z,s,\\alpha)=\\frac{1}{\\alpha^s}{}_{s+1}F_s\\left(\\begin{array}{c}\n1,\\alpha,\\alpha,\\alpha,\\cdots\\\\\n1+\\alpha,1+\\alpha,1+\\alpha,\\cdots\\\\\n\\end{array}\\mid z\\right).\n"
},
{
"math_id": 56,
"text": "\\mathrm{Li}_n(z)"
},
{
"math_id": 57,
"text": "\\mathrm{Li}_0(z)=\\frac{z}{1-z}, \\qquad \\mathrm{Li}_{-n}(z)=z \\frac{d}{dz} \\mathrm{Li}_{1-n}(z)."
},
{
"math_id": 58,
"text": "\n\\Omega_{a} \\equiv\\begin{cases}\n\\mathbb{C}\\setminus[1,\\infty) & \\text{if } \\Re a > 0, \\\\\n{z \\in \\mathbb{C}, |z|<1} & \\text{if } \\Re a \\le 0.\n\\end{cases}\n"
},
{
"math_id": 59,
"text": "|\\mathrm{Arg}(a)|<\\pi, s \\in \\mathbb{C}"
},
{
"math_id": 60,
"text": "z \\in \\Omega_{a}"
},
{
"math_id": 61,
"text": "a"
},
{
"math_id": 62,
"text": "s"
},
{
"math_id": 63,
"text": "z"
},
{
"math_id": 64,
"text": "\n \\Phi(z,s,a) = \\frac{1}{1-z} \\frac{1}{a^{s}}\n +\n \\sum_{n=1}^{N-1} \\frac{(-1)^{n} \\mathrm{Li}_{-n}(z)}{n!} \\frac{(s)_{n}}{a^{n+s}}\n +O(a^{-N-s})\n"
},
{
"math_id": 65,
"text": "N \\in \\mathbb{N}"
},
{
"math_id": 66,
"text": "(s)_n = s (s+1)\\cdots (s+n-1)"
},
{
"math_id": 67,
"text": "f(z,x,a) \\equiv \\frac{1-(z e^{-x})^{1-a}}{1-z e^{-x}}."
},
{
"math_id": 68,
"text": "C_{n}(z,a)"
},
{
"math_id": 69,
"text": "x=0"
},
{
"math_id": 70,
"text": "N \\in \\mathbb{N}, \\Re a > 1"
},
{
"math_id": 71,
"text": "\\Re s > 0"
},
{
"math_id": 72,
"text": "\n\\Phi(z,s,a) - \\frac{\\mathrm{Li}_{s}(z)}{z^{a}}\n=\n\\sum_{n=0}^{N-1}\nC_{n}(z,a) \\frac{(s)_{n}}{a^{n+s}}\n+\nO\\left( (\\Re a)^{1-N-s}+a z^{-\\Re a} \\right),\n"
},
{
"math_id": 73,
"text": "\\Re a \\to \\infty"
}
]
| https://en.wikipedia.org/wiki?curid=1342978 |
13431536 | Material point method | Numerical technique used to simulate the behavior of solids, liquids, gases, and any other continuum material
The material point method (MPM) is a numerical technique used to simulate the behavior of solids, liquids, gases, and any other continuum material. In particular, it is a robust spatial discretization method for simulating multi-phase (solid-fluid-gas) interactions. In the MPM, a continuum body is described by a number of small Lagrangian elements referred to as 'material points'. These material points are surrounded by a background mesh/grid that is used to calculate terms such as the deformation gradient. Unlike other mesh-based methods like the finite element method, finite volume method or finite difference method, the MPM is not a mesh-based method and is instead categorized as a meshless/meshfree or continuum-based particle method, examples of which are smoothed particle hydrodynamics and peridynamics. Despite the presence of a background mesh, the MPM does not encounter the drawbacks of mesh-based methods (high-deformation tangling, advection errors, etc.), which makes it a promising and powerful tool in computational mechanics.
The MPM was originally proposed, as an extension of a similar method known as FLIP (a further extension of a method called PIC) to computational solid dynamics, in the early 1990s by Professors Deborah L. Sulsky, Zhen Chen and Howard L. Schreyer at the University of New Mexico. After this initial development, the MPM has been further developed both in national laboratories and at the University of New Mexico, Oregon State University, the University of Utah, and elsewhere across the US and the world. Recently, the number of institutions researching the MPM has been growing with added popularity and awareness coming from various sources such as the MPM's use in the Disney film "Frozen".
The algorithm.
An MPM simulation consists of the following stages:
"(Prior to the time integration phase)"
"(During the time integration phase - explicit formulation)"
History of PIC/MPM.
The PIC method was originally conceived to solve problems in fluid dynamics, and developed by Harlow at Los Alamos National Laboratory in 1957. One of the first PIC codes was the Fluid-Implicit Particle (FLIP) program, which was created by Brackbill in 1986 and has been constantly in development ever since. Until the 1990s, the PIC method was used principally in fluid dynamics.
Motivated by the need for better simulating penetration problems in solid dynamics, Sulsky, Chen and Schreyer started in 1993 to reformulate the PIC and develop the MPM, with funding from Sandia National Laboratories. The original MPM was then further extended by Bardenhagen "et al.". to include frictional contact, which enabled the simulation of granular flow, and by Nairn to include explicit cracks and crack propagation (known as CRAMP).
Recently, an MPM implementation based on a micro-polar Cosserat continuum has been used to simulate high-shear granular flow, such as silo discharge. MPM's uses were further extended into Geotechnical engineering with the recent development of a quasi-static, implicit MPM solver which provides numerically stable analyses of large-deformation problems in Soil mechanics.
Annual workshops on the use of MPM are held at various locations in the United States. The Fifth MPM Workshop was held at Oregon State University, in Corvallis, OR, on April 2 and 3, 2009.
Applications of PIC/MPM.
The uses of the PIC or MPM method can be divided into two broad categories: firstly, there are many applications involving fluid dynamics, plasma physics, magnetohydrodynamics, and multiphase applications. The second category of applications comprises problems in solid mechanics.
Fluid dynamics and multiphase simulations.
The PIC method has been used to simulate a wide range of fluid-solid interactions, including sea ice dynamics, penetration of biological soft tissues, fragmentation of gas-filled canisters, dispersion of atmospheric pollutants, multiscale simulations coupling molecular dynamics with MPM, and fluid-membrane interactions. In addition, the PIC-based FLIP code has been applied in magnetohydrodynamics and plasma processing tools, and simulations in astrophysics and free-surface flow.
As a result of a joint effort between UCLA's mathematics department and Walt Disney Animation Studios, MPM was successfully used to simulate snow in the 2013 animated film "Frozen".
Solid mechanics.
MPM has also been used extensively in solid mechanics, to simulate impact, penetration, collision and rebound, as well as crack propagation. MPM has also become a widely used method within the field of soil mechanics: it has been used to simulate granular flow, quickness test of sensitive clays, landslides, silo discharge, pile driving, fall-cone test, bucket filling, and material failure; and to model soil stress distribution, compaction, and hardening. It is now being used in wood mechanics problems such as simulations of transverse compression on the cellular level including cell wall contact. The work also received the George Marra Award for paper of the year from the Society of Wood Science and Technology.
Classification of PIC/MPM codes.
MPM in the context of numerical methods.
One subset of numerical methods are Meshfree methods, which are defined as methods for which "a predefined mesh is not necessary, at least in field variable interpolation". Ideally, a meshfree method does not make use of a mesh "throughout the process of solving the problem governed by partial differential equations, on a given arbitrary domain, subject to all kinds of boundary conditions," although existing methods are not ideal and fail in at least one of these respects. Meshless methods, which are also sometimes called particle methods, share a "common feature that the history of state variables is traced at points (particles) which are not connected with any element mesh, the distortion of which is a source of numerical difficulties." As can be seen by these varying interpretations, some scientists consider MPM to be a meshless method, while others do not. All agree, however, that MPM is a particle method.
The Arbitrary Lagrangian Eulerian (ALE) methods form another subset of numerical methods which includes MPM. Purely "Lagrangian" methods employ a framework in which a space is discretised into initial subvolumes, whose flowpaths are then charted over time. Purely "Eulerian" methods, on the other hand, employ a framework in which the motion of material is described relative to a mesh that remains fixed in space throughout the calculation. As the name indicates, ALE methods combine Lagrangian and Eulerian frames of reference.
Subclassification of MPM/PIC.
PIC methods may be based on either the strong form collocation or a weak form discretisation of the underlying partial differential equation (PDE). Those based on the strong form are properly referred to as finite-volume PIC methods. Those based on the weak form discretisation of PDEs may be called either PIC or MPM.
MPM solvers can model problems in one, two, or three spatial dimensions, and can also model axisymmetric problems. MPM can be implemented to solve either quasi-static or dynamic equations of motion, depending on the type of problem that is to be modeled. Several versions of MPM exist, including the Generalized Interpolation Material Point Method, the Convected Particle Domain Interpolation Method, and the Convected Particle Least Squares Interpolation Method.
The time-integration used for MPM may be either "explicit" or "implicit". The advantage to implicit integration is guaranteed stability, even for large timesteps. On the other hand, explicit integration runs much faster and is easier to implement.
Advantages.
Compared to FEM.
Unlike "FEM", MPM does not require periodical remeshing steps and remapping of state variables, and is therefore better suited to the modeling of large material deformations. In MPM, particles and not the mesh points store all the information on the state of the calculation. Therefore, no numerical error results from the mesh returning to its original position after each calculation cycle, and no remeshing algorithm is required.
The particle basis of MPM allows it to treat crack propagation and other discontinuities better than FEM, which is known to impose the mesh orientation on crack propagation in a material. Also, particle methods are better at handling history-dependent constitutive models.
Compared to pure particle methods.
Because in MPM nodes remain fixed on a regular grid, the calculation of gradients is trivial.
In simulations with two or more phases it is rather easy to detect contact between entities, as particles can interact via the grid with other particles in the same body, with other solid bodies, and with fluids.
Disadvantages of MPM.
MPM is more expensive in terms of storage than other methods, as MPM makes use of mesh as well as particle data. MPM is more computationally expensive than FEM, as the grid must be reset at the end of each MPM calculation step and reinitialised at the beginning of the following step. Spurious oscillation may occur as particles cross the boundaries of the mesh in MPM, although this effect can be minimized by using generalized interpolation methods (GIMP). In MPM as in FEM, the size and orientation of the mesh can impact the results of a calculation: for example, in MPM, strain localisation is known to be particularly sensitive to mesh refinement.
One stability problem in MPM that does not occur in FEM is the occurrence of cell-crossing errors and null-space errors, which arise because the number of integration points (material points) in a cell does not remain constant.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m_{mp}"
},
{
"math_id": 1,
"text": "\\vec{P_{mp}}"
},
{
"math_id": 2,
"text": "\\boldsymbol{\\bar{\\bar{\\sigma}}}_{mp}"
},
{
"math_id": 3,
"text": "\\vec{b}"
},
{
"math_id": 4,
"text": "N_{nd-mp}"
},
{
"math_id": 5,
"text": "M_{node}"
},
{
"math_id": 6,
"text": "\\vec{V_{node}}"
},
{
"math_id": 7,
"text": "\\vec{F_{node}^{\\mathsf{internal}}}"
},
{
"math_id": 8,
"text": "\\vec{F_{node}^{\\mathsf{external}}}"
},
{
"math_id": 9,
"text": "\n M_{node} = \\sum_{mp} m_{mp} ~~ N_{mp-nd}\n"
},
{
"math_id": 10,
"text": "\n \\vec{V_{node}} = {1 \\over M_{node}} ~~ \\sum_{mp} \\vec{P_{mp}} ~~ N_{mp-nd}\n"
},
{
"math_id": 11,
"text": "\n \\vec{F_{node}^{internal}} = \\sum_{mp} ~~\\bar{\\bar{\\sigma}}_{mp} ~~ \\nabla N_{mp-nd}\n"
},
{
"math_id": 12,
"text": "\n \\vec{F_{node}^{\\mathsf{external}}} = \\sum_{mp} \\vec{b}~~N_{mp-nd}\n"
},
{
"math_id": 13,
"text": "\\vec{A_{node}}"
},
{
"math_id": 14,
"text": "\\vec{A_{node}} = {\\vec{F^{external}_{node}+\\vec{F^{internal}_{node}} \\over M_{node}}} "
},
{
"math_id": 15,
"text": "\\tilde{\\vec{V_{node}}}"
},
{
"math_id": 16,
"text": "\\tilde{\\vec{V_{node}}} = \\vec{V_{node}} + \\vec{A_{node}}\\mathrm d t"
},
{
"math_id": 17,
"text": "\\vec{a_{mp}}"
},
{
"math_id": 18,
"text": "\\mathcal{\\bar{\\bar{F_{mp}}}}"
},
{
"math_id": 19,
"text": "\\bar{\\bar{\\dot{\\varepsilon}_{mp}}}"
},
{
"math_id": 20,
"text": "N_{nd-mp}"
},
{
"math_id": 21,
"text": "\n \\vec{a_{mp}} = \\sum_{nd} \\vec{A_{node}} ~~N_{nd-mp}\n"
},
{
"math_id": 22,
"text": "\n \\bar{\\bar{\\dot{\\varepsilon}_{mp}}} = \\sum_{nd} ~{1 \\over 2}~~[\\vec{V_{node}} \\nabla N_{nd-mp} + (V_{node} \\nabla N_{nd-mp})^T ]\n"
}
]
| https://en.wikipedia.org/wiki?curid=13431536 |
1343550 | P wave | Type of seismic wave
A P wave (primary wave or pressure wave) is one of the two main types of elastic body waves, called seismic waves in seismology. P waves travel faster than other seismic waves and hence are the first signal from an earthquake to arrive at any affected location or at a seismograph. P waves may be transmitted through gases, liquids, or solids.
Nomenclature.
The name "P wave" can stand for either pressure wave (as it is formed from alternating compressions and rarefactions) or primary wave (as it has high velocity and is therefore the first wave to be recorded by a seismograph). The name "S wave" represents another seismic wave propagation mode, standing for secondary or shear wave, a usually more destructive wave than the primary wave.
Seismic waves in the Earth.
Primary and secondary waves are body waves that travel within the Earth. The motion and behavior of both P and S waves in the Earth are monitored to probe the interior structure of the Earth. Discontinuities in velocity as a function of depth are indicative of changes in phase or composition. Differences in arrival times of waves originating in a seismic event like an earthquake as a result of waves taking different paths allow mapping of the Earth's inner structure.
P-wave shadow zone.
Almost all the information available on the structure of the Earth's deep interior is derived from observations of the travel times, reflections, refractions and phase transitions of seismic body waves, or normal modes. P waves travel through the fluid layers of the Earth's interior, and yet they are refracted slightly when they pass through the transition between the semisolid mantle and the liquid outer core. As a result, there is a P wave "shadow zone" between 103° and 142° from the earthquake's focus, where the initial P waves are not registered on seismometers. In contrast, S waves do not travel through liquids.
As an earthquake warning.
Advance earthquake warning is possible by detecting the nondestructive primary waves that travel more quickly through the Earth's crust than do the destructive secondary and Rayleigh waves.
The amount of warning depends on the delay between the arrival of the P wave and other destructive waves, generally on the order of seconds up to about 60 to 90 seconds for deep, distant, large quakes such as the 2011 Tohoku earthquake. The effectiveness of a warning depends on accurate detection of the P waves and rejection of ground vibrations caused by local activity (such as trucks or construction). Earthquake early warning systems can be automated to allow for immediate safety actions, such as issuing alerts, stopping elevators at the nearest floors, and switching off utilities.
Propagation.
Velocity.
In isotropic and homogeneous solids, a P wave travels in a straight line longitudinally; thus, the particles in the solid vibrate along the axis of propagation (the direction of motion) of the wave energy. The velocity of P waves in that kind of medium is given by
formula_0
where K is the bulk modulus (the modulus of incompressibility), μ is the shear modulus (modulus of rigidity, sometimes denoted as G and also called the second Lamé parameter), ρ is the density of the material through which the wave propagates, and λ is the first Lamé parameter.
In typical situations in the interior of the Earth, the density ρ usually varies much less than K or μ, so the velocity is mostly "controlled" by these two parameters.
The elastic moduli P-wave modulus, formula_1, is defined so that formula_2 and thereby
formula_3
Typical values for P wave velocity in earthquakes are in the range 5 to 8 km/s. The precise speed varies according to the region of the Earth's interior, from less than 6 km/s in the Earth's crust to 13.5 km/s in the lower mantle, and 11 km/s through the inner core.
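For a rough feel of the formula, plugging in illustrative (not measured) crustal values for the bulk modulus, shear modulus and density gives a velocity within the quoted range:

```python
import math

# Illustrative values only: K = 50 GPa, mu = 30 GPa, rho = 2700 kg/m^3
K = 50e9
mu = 30e9
rho = 2700.0

v_p = math.sqrt((K + 4.0 * mu / 3.0) / rho)
print(v_p / 1000.0, "km/s")   # about 5.8 km/s
```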
Geologist Francis Birch discovered a relationship between the velocity of P waves and the density of the material the waves are traveling in:
formula_4
which later became known as Birch's law. (The symbol "a" denotes an empirically tabulated function of the mean atomic weight, and "b" is a constant.)
{
"math_id": 0,
"text": "v_\\mathrm{p} \\; = \\; \\sqrt{ \\frac{\\, K + \\tfrac{4}{3} \\mu \\;}{\\rho} } \\; = \\; \\sqrt{ \\frac{\\, \\lambda + 2 \\mu \\;}{\\rho} } "
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "\\, M = K + \\tfrac{4}{3} \\mu \\,"
},
{
"math_id": 3,
"text": "v_\\mathrm{p} = \\sqrt{ \\frac{\\, M \\;}{\\rho} } "
},
{
"math_id": 4,
"text": " v_\\mathrm{p} = a ( \\bar{M} ) + b \\, \\rho "
}
]
| https://en.wikipedia.org/wiki?curid=1343550 |
13437009 | Hereditary C*-subalgebra | In mathematics, a hereditary C*-subalgebra of a C*-algebra is a particular type of C*-subalgebra whose structure is closely related to that of the larger C*-algebra. A C*-subalgebra "B" of "A" is a hereditary C*-subalgebra if for all "a" ∈ "A" and "b" ∈ "B" such that 0 ≤ "a" ≤ "b", we have "a" ∈ "B".
Correspondence with closed left ideals.
There is a bijective correspondence between closed left ideals and hereditary C*-subalgebras of "A". If "L" ⊂ "A" is a closed left ideal, let "L"* denote the image of "L" under the *-operation. The set "L"* is a right ideal and "L"* ∩ "L" is a C*-subalgebra. In fact, "L"* ∩ "L" is hereditary and the map "L" ↦ "L"* ∩ "L" is a bijection. It follows from this correspondence that every closed ideal is a hereditary C*-subalgebra. Another corollary is that a hereditary C*-subalgebra of a simple C*-algebra is also simple.
Connections with positive elements.
If "p" is a projection of "A" (or a projection of the multiplier algebra of "A"), then "pAp" is a hereditary C*-subalgebra known as a corner of "A". More generally, given a positive "a" ∈ "A", the closure of the set "aAa" is the smallest hereditary C*-subalgebra containing "a", denoted by Her("a"). If "A" is separable, then every hereditary C*-subalgebra has this form.
These hereditary C*-subalgebras can bring some insight into the notion of Cuntz subequivalence. In particular, if "a" and "b" are positive elements of a C*-algebra "A", then formula_0 if "b" ∈ Her("a"). Hence, "a" ~ "b" if Her("a") = Her("b").
If "A" is unital and the positive element "a" is invertible, then Her("a") = "A". This suggests the following notion for the non-unital case: "a" ∈ "A" is said to be strictly positive if Her("a") = "A". For example, in the C*-algebra "K"("H") of compact operators acting on Hilbert space "H", a compact operator is strictly positive if and only if its range is dense in "H". A commutative C*-algebra contains a strictly positive element if and only if the spectrum of the algebra is σ-compact. More generally, a C*-algebra contains a strictly positive element if and only if the algebra has a sequential approximate identity. | [
{
"math_id": 0,
"text": "a \\precsim b"
}
]
| https://en.wikipedia.org/wiki?curid=13437009 |
1343748 | Transition radiation detector | A transition radiation detector (TRD) is a particle detector using the formula_0-dependent threshold of transition radiation in a stratified material. It contains many layers of materials with different indices of refraction. At each interface between materials, the probability of transition radiation increases with the relativistic gamma factor. Thus particles with large formula_0 give off many photons, and small formula_0 give off few. For a given energy, this allows a discrimination between a lighter particle (which has a high formula_0 and therefore radiates) and a heavier particle (which has a low formula_0 and radiates much less).
The passage of the particle is observed through many thin layers of material placed in air or gas. The radiated photon deposits its energy by the photoelectric effect, and the signal is detected as ionization. Usually materials with low formula_1 are preferred for the radiator (formula_2, formula_3), while materials with high formula_1 are used for detecting the photons in order to obtain a high cross section for the photoelectric effect (e.g. formula_4).
TRD detectors are used in the ALICE and ATLAS experiments at the Large Hadron Collider. The ALICE TRD operates together with a large TPC (Time Projection Chamber) and TOF (Time of Flight counter) to perform particle identification in ion collisions. The ATLAS TRD is called the TRT (Transition Radiation Tracker), which also serves as a tracker, measuring particle trajectories simultaneously.
{
"math_id": 0,
"text": "\\gamma"
},
{
"math_id": 1,
"text": "Z"
},
{
"math_id": 2,
"text": "Li"
},
{
"math_id": 3,
"text": "Be"
},
{
"math_id": 4,
"text": "Xe"
}
]
| https://en.wikipedia.org/wiki?curid=1343748 |
1343951 | Seifert surface | Orientable surface whose boundary is a knot or link
In mathematics, a Seifert surface (named after German mathematician Herbert Seifert) is an orientable surface whose boundary is a given knot or link.
Such surfaces can be used to study the properties of the associated knot or link. For example, many knot invariants are most easily calculated using a Seifert surface. Seifert surfaces are also interesting in their own right, and the subject of considerable research.
Specifically, let "L" be a tame oriented knot or link in Euclidean 3-space (or in the 3-sphere). A Seifert surface is a compact, connected, oriented surface "S" embedded in 3-space whose boundary is "L" such that the orientation on "L" is just the induced orientation from "S".
Note that any compact, connected, oriented surface with nonempty boundary in Euclidean 3-space is the Seifert surface associated to its boundary link. A single knot or link can have many different inequivalent Seifert surfaces. A Seifert surface must be oriented. Surfaces which are not oriented, or which are not orientable, can also be associated to knots.
Examples.
The standard Möbius strip has the unknot for a boundary but is not a Seifert surface for the unknot because it is not orientable.
The "checkerboard" coloring of the usual minimal crossing projection of the trefoil knot gives a Mobius strip with three half twists. As with the previous example, this is not a Seifert surface as it is not orientable. Applying Seifert's algorithm to this diagram, as expected, does produce a Seifert surface; in this case, it is a punctured torus of genus "g" = 1, and the Seifert matrix is
formula_0
Existence and Seifert matrix.
It is a theorem that any link always has an associated Seifert surface. This theorem was first published by Frankl and Pontryagin in 1930. A different proof was published in 1934 by Herbert Seifert and relies on what is now called the Seifert algorithm. The algorithm produces a Seifert surface formula_1, given a projection of the knot or link in question.
Suppose that link has "m" components ("m"
1 for a knot), the diagram has "d" crossing points, and resolving the crossings (preserving the orientation of the knot) yields "f" circles. Then the surface formula_1 is constructed from "f" disjoint disks by attaching "d" bands. The homology group formula_2 is free abelian on 2"g" generators, where
formula_3
is the genus of formula_1. The intersection form "Q" on formula_2 is skew-symmetric, and there is a basis of 2"g" cycles formula_4 with
formula_5 equal to a direct sum of "g" copies of the matrix
formula_6
The 2"g" × 2"g" integer Seifert matrix
formula_7
has formula_8 the linking number in Euclidean 3-space (or in the 3-sphere) of "a""i" and the "pushoff" of "a""j" in the positive direction of formula_1. More precisely, recalling that Seifert surfaces are bicollared, meaning that we can extend the embedding of formula_1 to an embedding of formula_9, given some representative loop formula_10 which is homology generator in the interior of formula_1, the positive pushout is formula_11 and the negative pushout is formula_12.
With this, we have
formula_13
where "V"∗ = ("v"("j", "i")) the transpose matrix. Every integer 2"g" × 2"g" matrix formula_14 with formula_15 arises as the Seifert matrix of a knot with genus "g" Seifert surface.
The Alexander polynomial is computed from the Seifert matrix by formula_16 which is a polynomial of degree at most 2"g" in the indeterminate formula_17 The Alexander polynomial is independent of the choice of Seifert surface formula_18 and is an invariant of the knot or link.
The signature of a knot is the signature of the symmetric Seifert matrix formula_19 It is again an invariant of the knot or link.
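Using the trefoil Seifert matrix given in the Examples section, both invariants can be computed directly; the sympy sketch below is one way to do so (the sign of the signature depends on the chirality convention chosen for the trefoil):

```python
import sympy as sp

t = sp.symbols('t')
V = sp.Matrix([[1, -1], [0, 1]])        # Seifert matrix of the trefoil from above

# Alexander polynomial A(t) = det(V - t * V^T)
alexander = sp.expand((V - t * V.T).det())
print(alexander)                         # t**2 - t + 1

# Signature: signature of the symmetric matrix V + V^T
S = V + V.T
signature = sum(m if ev > 0 else -m for ev, m in S.eigenvals().items())
print(signature)                         # 2 for this particular matrix
```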
Genus of a knot.
Seifert surfaces are not at all unique: a Seifert surface "S" of genus "g" and Seifert matrix "V" can be modified by a topological surgery, resulting in a Seifert surface "S"′ of genus "g" + 1 and Seifert matrix
formula_20
The genus of a knot "K" is the knot invariant defined by the minimal genus "g" of a Seifert surface for "K".
For instance:
A fundamental property of the genus is that it is additive with respect to the knot sum:
formula_21
In general, the genus of a knot is difficult to compute, and the Seifert algorithm usually does not produce a Seifert surface of least genus. For this reason other related invariants are sometimes useful. The canonical genus formula_22 of a knot is the least genus of all Seifert surfaces that can be constructed by the Seifert algorithm, and the free genus formula_23 is the least genus of all Seifert surfaces whose complement in formula_24 is a handlebody. (The complement of a Seifert surface generated by the Seifert algorithm is always a handlebody.) For any knot the inequality formula_25 obviously holds, so in particular these invariants place upper bounds on the genus.
The knot genus is NP-complete by work of Ian Agol, Joel Hass and William Thurston.
It has been shown that there are Seifert surfaces of the same genus that do not become isotopic either topologically or smoothly in the 4-ball.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V = \\begin{pmatrix}1 & -1 \\\\ 0 & 1\\end{pmatrix}."
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "H_1(S)"
},
{
"math_id": 3,
"text": "g = \\frac{1}{2}(2 + d - f - m)"
},
{
"math_id": 4,
"text": "a_1, a_2, \\ldots, a_{2g}"
},
{
"math_id": 5,
"text": "Q = (Q(a_i, a_j))"
},
{
"math_id": 6,
"text": "\\begin{pmatrix} 0 & -1 \\\\ 1 & 0 \\end{pmatrix}"
},
{
"math_id": 7,
"text": "V = (v(i, j))"
},
{
"math_id": 8,
"text": "v(i, j)"
},
{
"math_id": 9,
"text": "S \\times [-1, 1]"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "x \\times \\{1\\}"
},
{
"math_id": 12,
"text": "x \\times \\{-1\\}"
},
{
"math_id": 13,
"text": "V - V^* = Q,"
},
{
"math_id": 14,
"text": "V"
},
{
"math_id": 15,
"text": "V - V^* = Q"
},
{
"math_id": 16,
"text": "A(t) = \\det\\left(V - tV^*\\right),"
},
{
"math_id": 17,
"text": "t."
},
{
"math_id": 18,
"text": "S,"
},
{
"math_id": 19,
"text": "V + V^\\mathrm{T}."
},
{
"math_id": 20,
"text": "V' = V \\oplus \\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix}."
},
{
"math_id": 21,
"text": "g(K_1 \\mathbin{\\#} K_2) = g(K_1) + g(K_2)"
},
{
"math_id": 22,
"text": "g_c"
},
{
"math_id": 23,
"text": "g_f"
},
{
"math_id": 24,
"text": "S^3"
},
{
"math_id": 25,
"text": "g \\leq g_f \\leq g_c"
}
]
| https://en.wikipedia.org/wiki?curid=1343951 |
1343980 | Deontic logic | Field of philosophical logic
Deontic logic is the field of philosophical logic that is concerned with obligation, permission, and related concepts. Alternatively, a deontic logic is a formal system that attempts to capture the essential logical features of these concepts. It can be used to formalize imperative logic, or directive modality in natural languages. Typically, a deontic logic uses "OA" to mean "it is obligatory that A" (or "it ought to be (the case) that A"), and "PA" to mean "it is permitted (or permissible) that A", which is defined as formula_0.
In natural language, the statement "You may go to the zoo OR the park" should be understood as formula_1 instead of formula_2, as both options are permitted by the statement. When there are multiple agents involved in the domain of discourse, the deontic modal operator can be specified to each agent to express their individual obligations and permissions. For example, by using a subscript formula_3 for agent formula_4, formula_5 means that "It is an obligation for agent formula_4 (to bring it about/make it happen) that formula_6". Note that formula_6 could be stated as an action by another agent; one example is "It is an obligation for Adam that Bob doesn't crash the car", which would be represented as formula_7, where B="Bob doesn't crash the car".
Etymology.
The term "deontic" is derived from the (gen.: ), meaning "that which is binding or proper."
Standard deontic logic.
In Georg Henrik von Wright's first system, obligatoriness and permissibility were treated as features of "acts". Soon after this, it was found that a deontic logic of "propositions" could be given a simple and elegant Kripke-style semantics, and von Wright himself joined this movement. The deontic logic so specified came to be known as "standard deontic logic," often referred to as SDL, KD, or simply D. It can be axiomatized by adding the following axioms to a standard axiomatization of classical propositional logic:
formula_8
formula_9
formula_10
In English, these axioms say, respectively:
If A is a theorem (a tautology of the underlying logic), then it ought to be that A (the necessitation rule);
If it ought to be that A implies B, then if it ought to be that A, it ought to be that B (the distribution axiom K);
If it ought to be that A, then it is permissible that A (the axiom D).
"FA", meaning it is forbidden that "A", can be defined (equivalently) as formula_11 or formula_12.
There are two main extensions of SDL that are usually considered. The first results by adding an alethic modal operator formula_13 in order to express the Kantian claim that "ought implies can":
formula_14
where formula_15. It is generally assumed that formula_13 is at least a KT operator, but most commonly it is taken to be an S5 operator. In practical situations, obligations are usually assigned in anticipation of future events, in which case alethic possibilities can be hard to judge; therefore, obligation assignments may be performed under the assumption of different conditions on different branches of timelines in the future, and past obligation assignments may be updated due to unforeseen developments that happened along the timeline.
The other main extension results by adding a "conditional obligation" operator O(A/B) read "It is obligatory that A given (or conditional on) B". Motivation for a conditional operator is given by considering the following ("Good Samaritan") case. It seems true that the starving and poor ought to be fed. But that the starving and poor are fed implies that there are starving and poor. By basic principles of SDL we can infer that there ought to be starving and poor! The argument is due to the basic K axiom of SDL together with the following principle valid in any normal modal logic:
formula_16
If we introduce an intensional conditional operator then we can say that the starving ought to be fed "only on the condition that there are in fact starving": in symbols O(A/B). But then the following argument fails on the usual (e.g. Lewis 73) semantics for conditionals: from O(A/B) and that A implies B, infer OB.
Indeed, one might define the unary operator O in terms of the binary conditional one O(A/B) as formula_17, where formula_18 stands for an arbitrary tautology of the underlying logic (which, in the case of SDL, is classical).
Semantics of standard deontic logic.
The accessibility relation between possible worlds is interpreted as an "acceptability" relation: formula_19 is an acceptable world (viz. formula_20) if and only if all the obligations in formula_21 are fulfilled in formula_19 (viz. formula_22).
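To make this concrete, here is a toy model-checking sketch in Python; the worlds, relation and valuation are invented for illustration and are not drawn from the literature. It evaluates OA as truth of A at every acceptable world and, via the definition formula_0, PA as truth of A at some acceptable world.

# Toy Kripke-style model for SDL: worlds, a serial acceptability relation R, and a valuation.
worlds = {"w", "v1", "v2"}
R = {("w", "v1"), ("w", "v2"), ("v1", "v1"), ("v2", "v2")}
val = {"w": set(), "v1": {"A"}, "v2": {"A"}}      # atomic sentences true at each world

def O(p, w):
    # "It is obligatory that p" at w: p holds at every acceptable world.
    return all(p in val[v] for (u, v) in R if u == w)

def P(p, w):
    # "It is permitted that p" at w: p holds at some acceptable world.
    return any(p in val[v] for (u, v) in R if u == w)

print(O("A", "w"), P("A", "w"))   # True True; seriality of R validates axiom D (OA -> PA)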
Anderson's deontic logic.
Alan R. Anderson (1959) shows how to define formula_23 in terms of the alethic operator formula_13 and a deontic constant (i.e. 0-ary modal operator) formula_24 standing for some sanction (i.e. bad thing, prohibition, etc.): formula_25. Intuitively, the right side of the biconditional says that A's failing to hold necessarily (or strictly) implies a sanction.
In addition to the usual modal axioms (necessitation rule N and distribution axiom K) for the alethic operator formula_13, Anderson's deontic logic only requires one additional axiom for the deontic constant formula_24: formula_26, which means that it is alethically possible to fulfill all obligations and avoid the sanction. This version of Anderson's deontic logic is equivalent to SDL.
However, when modal axiom T is included for the alethic operator (formula_27), it can be proved in Anderson's deontic logic that formula_28, which is not included in SDL. Anderson's deontic logic inevitably couples the deontic operator formula_23 with the alethic operator formula_13, which can be problematic in certain cases.
Dyadic deontic logic.
An important problem of deontic logic is that of how to properly represent conditional obligations, e.g. "If you smoke (s), then you ought to use an ashtray (a). " It is not clear that either of the following representations is adequate:
formula_29
formula_30
Under the first representation it is vacuously true that if you commit a forbidden act, then you ought to commit any other act, regardless of whether that second act was obligatory, permitted or forbidden (Von Wright 1956, cited in Aqvist 1994). Under the second representation, we are vulnerable to the gentle murder paradox, where the plausible statements (1) "if you murder, you ought to murder gently", (2) "you do commit murder", and (3) "to murder gently you must murder" imply the less plausible statement "you ought to murder". Others argue that "must" in the phrase "to murder gently you must murder" is a mistranslation from the ambiguous English word (meaning either "implies" or "ought"). Interpreting "must" as "implies" does not allow one to conclude "you ought to murder" but only a repetition of the given "you murder". Misinterpreting "must" as "ought" results in a perverse axiom, not a perverse logic. With use of negations one can easily check whether the ambiguous word was mistranslated by considering which of the following two English statements is equivalent to the statement "to murder gently you must murder": is it equivalent to "if you murder gently it is forbidden not to murder" or to "if you murder gently it is impossible not to murder"?
Some deontic logicians have responded to this problem by developing dyadic deontic logics, which contain binary deontic operators:
formula_31 means "it is obligatory that A, given B"
formula_32 means "it is permissible that A, given B".
(The notation is modeled on that used to represent conditional probability.) Dyadic deontic logic escapes some of the problems of standard (unary) deontic logic, but it is subject to some problems of its own.
Other variations.
Many other varieties of deontic logic have been developed, including non-monotonic deontic logics, paraconsistent deontic logics, dynamic deontic logics, and hyperintensional deontic logics.
History.
Early deontic logic.
Philosophers from the Indian Mimamsa school to those of Ancient Greece have remarked on the formal logical relations of deontic concepts and philosophers from the late Middle Ages compared deontic concepts with alethic ones.
In his "Elementa juris naturalis" (written between 1669 and 1671), Gottfried Wilhelm Leibniz notes the logical relations between the "licitum" (permitted), the "illicitum" (prohibited), the "debitum" (obligatory), and the "indifferens" (facultative) are equivalent to those between the "possibile", the "impossibile", the "necessarium", and the "contingens" respectively.
Mally's first deontic logic and von Wright's first "plausible" deontic logic.
Ernst Mally, a pupil of Alexius Meinong, was the first to propose a formal system of deontic logic in his "Grundgesetze des Sollens" (1926) and he founded it on the syntax of Whitehead's and Russell's propositional calculus. Mally's deontic vocabulary consisted of the logical constants formula_33 and formula_34, unary connective formula_35, and binary connectives formula_36 and formula_37.
* Mally read formula_38 as "A ought to be the case".
* He read formula_39 as "A requires B".
* He read formula_40 as "A and B require each other."
* He read formula_33 as "the unconditionally obligatory".
* He read formula_34 as "the unconditionally forbidden".
Mally defined formula_36, formula_37, and formula_34 as follows:
Def. formula_41
Def. formula_42
Def. formula_43
Mally proposed five informal principles:
(i) If A requires B and if B requires C, then A requires C.
(ii) If A requires B and if A requires C, then A requires B and C.
(iii) A requires B if and only if it is obligatory that if A then B.
(iv) The unconditionally obligatory is obligatory.
(v) The unconditionally obligatory does not require its own negation.
He formalized these principles and took them as his axioms:
I. formula_44
II. formula_45
III. formula_46
IV. formula_47
V. formula_48
From these axioms Mally deduced 35 theorems, many of which he rightly considered strange. Karl Menger showed that formula_49 is a theorem and thus that the introduction of the ! sign is irrelevant and that A ought to be the case if A is the case. After Menger, philosophers no longer considered Mally's system viable. Gert Lokhorst lists Mally's 35 theorems and gives a proof for Menger's theorem at the Stanford Encyclopedia of Philosophy under "Mally's Deontic Logic".
The first plausible system of deontic logic was proposed by G. H. von Wright in his paper "Deontic Logic" in the philosophical journal "Mind" in 1951. (Von Wright was also the first to use the term "deontic" in English to refer to this kind of logic although Mally published the German paper "Deontik" in 1926.) Since the publication of von Wright's seminal paper, many philosophers and computer scientists have investigated and developed systems of deontic logic. Nevertheless, to this day deontic logic remains one of the most controversial and least agreed-upon areas of logic.
G. H. von Wright did not base his 1951 deontic logic on the syntax of the propositional calculus as Mally had done, but was instead influenced by alethic modal logics, which Mally had not benefited from. In 1964, von Wright published "A New System of Deontic Logic", which was a return to the syntax of the propositional calculus and thus a significant return to Mally's system. (For more on von Wright's departure from and return to the syntax of the propositional calculus, see "Deontic Logic: A Personal View" and "A New System of Deontic Logic", both by Georg Henrik von Wright.) G. H. von Wright's adoption of the modal logic of possibility and necessity for the purposes of normative reasoning was a return to Leibniz.
Although von Wright's system represented a significant improvement over Mally's, it raised a number of problems of its own. For example, "Ross's paradox" applies to von Wright's deontic logic, allowing us to infer from "It is obligatory that the letter is mailed" to "It is obligatory that either the letter is mailed or the letter is burned", which seems to imply it is permissible that the letter is burned. The "Good Samaritan paradox" also applies to his system, allowing us to infer from "It is obligatory to nurse the man who has been robbed" that "It is obligatory that the man has been robbed". Another major source of puzzlement is "Chisholm's paradox", named after American philosopher and logician Roderick Chisholm. There is no formalisation in von Wright's system of the following claims that allows them to be both jointly satisfiable and logically independent:
It ought to be that Jones goes to assist his neighbours.
It ought to be that if Jones goes, then he tells them he is coming.
If Jones does not go, then he ought not to tell them he is coming.
Jones does not go.
Several extensions or revisions of Standard Deontic Logic have been proposed over the years, with a view to solving these and other puzzles and paradoxes (such as the Gentle Murderer and free choice permission).
Jørgensen's dilemma.
Deontic logic faces Jørgensen's dilemma.
This problem is best seen as a trilemma.
The following three claims are incompatible:
Normative statements such as imperatives are not capable of being true or false.
Logical inference requires that the statements involved are capable of being true or false.
There are logical inferences whose premises or conclusions are normative statements.
Responses to this problem involve rejecting one of the three premises.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "PA\\equiv \\neg O\\neg A"
},
{
"math_id": 1,
"text": "Pz\\land Pp"
},
{
"math_id": 2,
"text": "Pz\\lor Pp"
},
{
"math_id": 3,
"text": "O_i"
},
{
"math_id": 4,
"text": "a_i"
},
{
"math_id": 5,
"text": "O_iA"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "O_{Adam}B"
},
{
"math_id": 8,
"text": "(\\models A) \\rightarrow (\\models OA)"
},
{
"math_id": 9,
"text": "O(A \\rightarrow B) \\rightarrow (OA \\rightarrow OB)"
},
{
"math_id": 10,
"text": "OA\\to PA"
},
{
"math_id": 11,
"text": "O \\lnot A"
},
{
"math_id": 12,
"text": "\\lnot PA"
},
{
"math_id": 13,
"text": "\\Box"
},
{
"math_id": 14,
"text": " OA \\to \\Diamond A. "
},
{
"math_id": 15,
"text": "\\Diamond\\equiv\\lnot\\Box\\lnot"
},
{
"math_id": 16,
"text": "\\vdash A\\to B\\Rightarrow\\ \\vdash OA\\to OB."
},
{
"math_id": 17,
"text": "OA\\equiv O(A/\\top)"
},
{
"math_id": 18,
"text": "\\top"
},
{
"math_id": 19,
"text": "v"
},
{
"math_id": 20,
"text": "wRv"
},
{
"math_id": 21,
"text": "w"
},
{
"math_id": 22,
"text": "(w\\models OA)\\to (v\\models A)"
},
{
"math_id": 23,
"text": "O"
},
{
"math_id": 24,
"text": "s"
},
{
"math_id": 25,
"text": "OA\\equiv\\Box(\\lnot A\\to s)"
},
{
"math_id": 26,
"text": "\\neg \\Box s\\equiv \\Diamond \\neg s"
},
{
"math_id": 27,
"text": "\\Box A\\to A"
},
{
"math_id": 28,
"text": "O(OA \\to A)"
},
{
"math_id": 29,
"text": "O(\\mathrm{smoke} \\rightarrow \\mathrm{ashtray})"
},
{
"math_id": 30,
"text": "\\mathrm{smoke} \\rightarrow O(\\mathrm{ashtray})"
},
{
"math_id": 31,
"text": "O(A \\mid B)"
},
{
"math_id": 32,
"text": "P(A \\mid B)"
},
{
"math_id": 33,
"text": "\\cup"
},
{
"math_id": 34,
"text": "\\cap"
},
{
"math_id": 35,
"text": "!"
},
{
"math_id": 36,
"text": "f"
},
{
"math_id": 37,
"text": "\\infty"
},
{
"math_id": 38,
"text": "!A"
},
{
"math_id": 39,
"text": "A f B"
},
{
"math_id": 40,
"text": "A \\infty B"
},
{
"math_id": 41,
"text": "\n\n\nf. A f B = A \\rightarrow !B"
},
{
"math_id": 42,
"text": "\\infty. A \\infty B = (A f B) \\& (B f A)"
},
{
"math_id": 43,
"text": "\\cap.\\rightarrow \\cap=\\lnot\\cup\n "
},
{
"math_id": 44,
"text": "\\rightarrow ((A f B) \\& (B \\rightarrow C)) \\rightarrow (A f C)"
},
{
"math_id": 45,
"text": "\\rightarrow ((A f B) \\& (A f C)) \\rightarrow (A f (B \\& C))"
},
{
"math_id": 46,
"text": "\\rightarrow (A f B) \\leftrightarrow !(A \\rightarrow B)"
},
{
"math_id": 47,
"text": "\\rightarrow \\exists \\cup ! \\cup"
},
{
"math_id": 48,
"text": "\\rightarrow \\lnot (\\cup f \\cap)"
},
{
"math_id": 49,
"text": "!A \\leftrightarrow A"
}
]
| https://en.wikipedia.org/wiki?curid=1343980 |
13439882 | Universal conductance fluctuations | Universal conductance fluctuations (UCF) in mesoscopic physics are a phenomenon encountered in electrical transport experiments on mesoscopic samples. The measured electrical conductance varies from sample to sample, mainly due to inhomogeneous scattering sites. The fluctuations originate from coherence effects of the electronic wavefunctions, and thus the phase-coherence length formula_0 needs to be larger than the momentum relaxation length formula_1. UCF is more pronounced when electrical transport is in the weak localization regime, formula_2, where formula_3, formula_4 is the number of conduction channels, and formula_1 is the momentum relaxation length (mean free path) set by phonon scattering events. For weakly localized samples the fluctuation in conductance is equal to the fundamental conductance formula_5, regardless of the number of channels.
Many factors will influence the amplitude of UCF. At zero temperature without decoherence, the UCF is influenced by mainly two factors, the symmetry and the shape of the sample. Recently, a third key factor, anisotropy of Fermi surface, is also found to fundamentally influence the amplitude of UCF.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\textstyle l_\\phi"
},
{
"math_id": 1,
"text": "\\textstyle l_m"
},
{
"math_id": 2,
"text": "\\textstyle l_\\phi<l_c"
},
{
"math_id": 3,
"text": "l_c=M\\cdot l_m"
},
{
"math_id": 4,
"text": "\\textstyle M"
},
{
"math_id": 5,
"text": "\\textstyle G_o=2e^2/h"
}
]
| https://en.wikipedia.org/wiki?curid=13439882 |
13442784 | Median polish | The median polish is a simple and robust exploratory data analysis procedure proposed by the statistician John Tukey. The purpose of median polish is to find an additively-fit model for data in a two-way layout table (usually, results from a factorial experiment) of the form row effect + column effect + overall median.
Median polish utilizes the medians obtained from the rows and the columns of a two-way table to iteratively calculate the row effect and column effect on the data. The results are not meant to be sensitive to the outliers, as the iterative procedure uses the medians rather than the means.
Model for a two-way table.
Suppose an experiment observes the variable Y under the influence of two variables. We can arrange the data in a two-way table in which one variable is constant along the rows and the other variable constant along the columns. Let "i" and "j" denote the position of rows and columns (e.g. y"ij" denotes the value of y at the "i"th row and the "j"th column). Then we can obtain a simple linear regression equation:
formula_0
where "b"0, "b"1, "b"2 are constants, and "xi" and "zj" are values associated with rows and columns, respectively.
The equation can be further simplified if no "xi" and "zj" values are present for the analysis:
formula_1
where "ci" and "dj" denote row effects and column effects, respectively.
Procedure.
To carry out median polish:
(1) find the row medians for each row, find the median of the row medians, record this as the overall effect.
(2) subtract each element in a row by its row median, do this for all rows.
(3) subtract the overall effect from each row median.
(4) do the same for each column, and add the overall effect from column operations to the overall effect generated from row operations.
(5) repeat (1)-(4) until only negligible changes occur in the row or column medians.
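The procedure can be sketched in code as follows. This is an illustrative Python sketch; the function and variable names are not from any standard library (R provides an equivalent routine as medpolish in its stats package).

import numpy as np

def median_polish(table, max_iter=10, tol=1e-6):
    # Iteratively sweep row and column medians out of the residual table.
    resid = np.asarray(table, dtype=float).copy()
    overall = 0.0
    row_eff = np.zeros(resid.shape[0])
    col_eff = np.zeros(resid.shape[1])
    for _ in range(max_iter):
        # remove row medians from the residuals and accumulate them as row effects
        row_med = np.median(resid, axis=1)
        resid -= row_med[:, None]
        row_eff += row_med
        # move the median of the column effects into the overall effect
        delta = np.median(col_eff)
        col_eff -= delta
        overall += delta
        # remove column medians and accumulate them as column effects
        col_med = np.median(resid, axis=0)
        resid -= col_med[None, :]
        col_eff += col_med
        # move the median of the row effects into the overall effect
        delta = np.median(row_eff)
        row_eff -= delta
        overall += delta
        # stop when the medians just removed are negligible
        if max(np.abs(row_med).max(), np.abs(col_med).max()) < tol:
            break
    # y_ij is approximated by overall + row_eff[i] + col_eff[j] + resid[i, j]
    return overall, row_eff, col_eff, resid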
{
"math_id": 0,
"text": "\\mathbf{y}_{ij} = b_0 + b_1x_i + b_2z_j + \\varepsilon_{ij}, "
},
{
"math_id": 1,
"text": "\\mathbf{y}_{ij} = b_0 + c_i + d_j + \\varepsilon_{ij}, "
}
]
| https://en.wikipedia.org/wiki?curid=13442784 |
13443170 | Chirikov criterion | The Chirikov criterion or Chirikov resonance-overlap criterion
was established by the Russian physicist Boris Chirikov.
Back in 1959, he published a seminal article,
where he introduced the very first physical criterion for the onset of chaotic motion in
deterministic Hamiltonian systems. He then applied such a criterion to explain
puzzling experimental results on plasma confinement in magnetic bottles
obtained by Rodionov at the Kurchatov Institute.
Description.
According to this criterion, a deterministic trajectory will begin to move between two nonlinear resonances in a chaotic and unpredictable manner in the parameter range
formula_0
Here formula_1 is the perturbation parameter, while formula_2 is the resonance-overlap parameter, given by the ratio of the unperturbed resonance width in frequency formula_3 (often computed in the pendulum approximation and proportional to the square root of the perturbation) to the frequency difference formula_4 between two unperturbed resonances. Since its introduction, the Chirikov criterion has become an important analytical tool for the determination of the chaos border.
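As a purely arithmetical illustration of the criterion (the width and spacing values below are invented for the example and do not refer to any particular physical system), the overlap parameter can be computed directly:

def chirikov_overlap(delta_omega_r, delta_d):
    # Resonance-overlap parameter S and the criterion K = S^2 > 1.
    S = delta_omega_r / delta_d
    K = S ** 2
    return S, K, K > 1

# Example: resonance width 0.3 (in frequency units) and resonance spacing 0.2
print(chirikov_overlap(0.3, 0.2))   # (1.5, 2.25, True) -> overlap, chaotic motion expected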
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nK \\approx S^2 = (\\Delta \\omega_r/\\Delta_d)^2 > 1 .\n"
},
{
"math_id": 1,
"text": " K"
},
{
"math_id": 2,
"text": " S = \\Delta \\omega_r/\\Delta_d"
},
{
"math_id": 3,
"text": " \\Delta \\omega_r"
},
{
"math_id": 4,
"text": " \\Delta_d"
}
]
| https://en.wikipedia.org/wiki?curid=13443170 |
134433 | Cholesky decomposition | Matrix decomposition method
In linear algebra, the Cholesky decomposition or Cholesky factorization (pronounced /ʃəˈlɛski/ shə-LES-kee) is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924.
When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
Statement.
The Cholesky decomposition of a Hermitian positive-definite matrix A, is a decomposition of the form
formula_0
where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition.
The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite.
When A is a real matrix (hence symmetric positive-definite), the factorization may be written
formula_1
where L is a real lower triangular matrix with positive diagonal entries.
Positive semidefinite matrices.
If a Hermitian matrix A is only positive semidefinite, instead of positive definite, then it still has a decomposition of the form A = LL* where the diagonal entries of L are allowed to be zero.
The decomposition need not be unique, for example:
formula_2
for any θ. However, if the rank of A is r, then there is a unique lower triangular L with exactly r positive diagonal elements and "n" − "r" columns containing all zeroes.
Alternatively, the decomposition can be made unique when a pivoting choice is fixed. Formally, if A is an "n" × "n" positive semidefinite matrix of rank r, then there is at least one permutation matrix P such that P A PT has a unique decomposition of the form P A PT = L L* with
formula_3,
where L1 is an "r" × "r" lower triangular matrix with positive diagonal.
LDL decomposition.
A closely related variant of the classical Cholesky decomposition is the LDL decomposition,
formula_4
where L is a lower unit triangular (unitriangular) matrix, and D is a diagonal matrix. That is, the diagonal elements of L are required to be 1 at the cost of introducing an additional diagonal matrix D in the decomposition. The main advantage is that the LDL decomposition can be computed and used with essentially the same algorithms, but avoids extracting square roots.
For this reason, the LDL decomposition is often called the "square-root-free Cholesky" decomposition. For real matrices, the factorization has the form A = LDLT and is often referred to as LDLT decomposition (or LDLT decomposition, or LDL′). It is reminiscent of the eigendecomposition of real symmetric matrices, A = QΛQT, but is quite different in practice because Λ and D are not similar matrices.
The LDL decomposition is related to the classical Cholesky decomposition of the form LL* as follows:
formula_5
Conversely, given the classical Cholesky decomposition formula_6 of a positive definite matrix, if S is a diagonal matrix that contains the main diagonal of formula_7, then A can be decomposed as formula_8 where
formula_9 (this rescales each column to make diagonal elements 1),
formula_10
If A is positive definite then the diagonal elements of D are all positive.
For positive semidefinite A, an formula_8 decomposition exists where the number of non-zero elements on the diagonal D is exactly the rank of A.
Some indefinite matrices for which no Cholesky decomposition exists have an LDL decomposition with negative entries in D: it suffices that the first "n" − 1 leading principal minors of A are non-singular.
Example.
Here is the Cholesky decomposition of a symmetric real matrix:
formula_11
And here is its LDLT decomposition:
formula_12
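The factorizations above can be verified numerically; the following sketch assumes NumPy is available (its numpy.linalg.cholesky returns the lower-triangular factor):

import numpy as np

A = np.array([[  4.,  12., -16.],
              [ 12.,  37., -43.],
              [-16., -43.,  98.]])

L = np.linalg.cholesky(A)        # lower-triangular Cholesky factor
print(L)                         # [[ 2, 0, 0], [ 6, 1, 0], [-8, 5, 3]]
print(np.allclose(L @ L.T, A))   # True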
Geometric interpretation.
The Cholesky decomposition is equivalent to a particular choice of conjugate axes of an ellipsoid. In detail, let the ellipsoid be defined as formula_15, then by definition, a set of vectors formula_16 are conjugate axes of the ellipsoid iff formula_17. Then, the ellipsoid is preciselyformula_18where formula_19 maps the basis vector formula_20, and formula_21 is the unit sphere in n dimensions. That is, the ellipsoid is a linear image of the unit sphere.
Define the matrix formula_22, then formula_17 is equivalent to formula_23. Different choices of the conjugate axes correspond to different decompositions.
The Cholesky decomposition corresponds to choosing formula_13 to be parallel to the first axis, formula_14 to be within the plane spanned by the first two axes, and so on. This makes formula_24 an upper-triangular matrix. Then, there is formula_25, where formula_26 is lower-triangular.
Similarly, principal component analysis corresponds to choosing formula_16 to be perpendicular. Then, let formula_27 and formula_28, and there is formula_29 where formula_30 is an orthogonal matrix. This then yields formula_31.
Applications.
Numerical solution of system of linear equations.
The Cholesky decomposition is mainly used for the numerical solution of linear equations formula_32. If A is symmetric and positive definite, then formula_32 can be solved by first computing the Cholesky decomposition formula_33, then solving formula_34 for y by forward substitution, and finally solving formula_35 for x by back substitution.
An alternative way to eliminate taking square roots in the formula_36 decomposition is to compute the LDL decomposition formula_37, then solving formula_34 for y, and finally solving formula_38.
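A minimal sketch of the factor-then-substitute scheme just described follows. It is illustrative only; production code would normally call a library routine such as SciPy's cho_factor/cho_solve rather than hand-written loops.

import numpy as np

def solve_spd(A, b):
    # Solve A x = b for symmetric positive-definite A via A = L L^T.
    L = np.linalg.cholesky(A)
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                       # forward substitution: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in reversed(range(n)):             # back substitution: L^T x = y
        x[i] = (y[i] - L[i+1:, i] @ x[i+1:]) / L[i, i]
    return x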
For linear systems that can be put into symmetric form, the Cholesky decomposition (or its LDL variant) is the method of choice, for superior efficiency and numerical stability. Compared to the LU decomposition, it is roughly twice as efficient.
Linear least squares.
Systems of the form Ax = b with A symmetric and positive definite arise quite often in applications. For instance, the normal equations in linear least squares problems are of this form. It may also happen that matrix A comes from an energy functional, which must be positive from physical considerations; this happens frequently in the numerical solution of partial differential equations.
Non-linear optimization.
Non-linear multi-variate functions may be minimized over their parameters using variants of Newton's method called "quasi-Newton" methods. At iteration k, the search steps in a direction formula_39 defined by solving formula_40 for formula_39, where formula_39 is the step direction, formula_41 is the gradient, and formula_42 is an approximation to the Hessian matrix formed by repeating rank-1 updates at each iteration. Two well-known update formulas are called Davidon–Fletcher–Powell (DFP) and Broyden–Fletcher–Goldfarb–Shanno (BFGS). Loss of the positive-definite condition through round-off error is avoided if rather than updating an approximation to the inverse of the Hessian, one updates the Cholesky decomposition of an approximation of the Hessian matrix itself.
Monte Carlo simulation.
The Cholesky decomposition is commonly used in the Monte Carlo method for simulating systems with multiple correlated variables. The covariance matrix is decomposed to give the lower-triangular L. Applying this to a vector of uncorrelated observations in a sample u produces a sample vector Lu with the covariance properties of the system being modeled.
The following simplified example shows the economy one gets from the Cholesky decomposition: suppose the goal is to generate two correlated normal variables formula_43 and formula_44 with given correlation coefficient formula_45. To accomplish that, it is necessary to first generate two uncorrelated Gaussian random variables formula_46 and formula_47 (for example, via a Box–Muller transform). Given the required correlation coefficient formula_45, the correlated normal variables can be obtained via the transformations formula_48 and formula_49.
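The same construction in code (a sketch assuming NumPy; the correlation value 0.8 is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
rho = 0.8
cov = np.array([[1.0, rho],
                [rho, 1.0]])
L = np.linalg.cholesky(cov)
z = rng.standard_normal((2, 100_000))   # uncorrelated standard normal samples
x = L @ z                               # correlated samples with covariance ≈ cov
print(np.corrcoef(x)[0, 1])             # ≈ 0.8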
Kalman filters.
Unscented Kalman filters commonly use the Cholesky decomposition to choose a set of so-called sigma points. The Kalman filter tracks the average state of a system as a vector x of length N and covariance as an "N" × "N" matrix P. The matrix P is always positive semi-definite and can be decomposed into LLT. The columns of L can be added and subtracted from the mean x to form a set of 2"N" vectors called "sigma points". These sigma points completely capture the mean and covariance of the system state.
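A sketch of the sigma-point construction just described (it follows the unscaled description in the text; practical unscented filters typically also scale the columns and include the mean itself as an additional point):

import numpy as np

def sigma_points(x, P):
    # 2N sigma points: the mean plus and minus each column of the Cholesky factor of P.
    L = np.linalg.cholesky(P)
    return np.hstack([x[:, None] + L, x[:, None] - L])

x = np.array([1.0, 2.0])
P = np.array([[0.5, 0.1],
              [0.1, 0.3]])
pts = sigma_points(x, P)
print(pts.shape)         # (2, 4)
print(pts.mean(axis=1))  # recovers the mean x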
Matrix inversion.
The explicit inverse of a Hermitian matrix can be computed by Cholesky decomposition, in a manner similar to solving linear systems, using formula_50 operations (formula_51 multiplications). The entire inversion can even be efficiently performed in-place.
A non-Hermitian matrix B can also be inverted using the following identity, where BB* will always be Hermitian:
formula_52
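A small numerical check of this identity (a sketch assuming NumPy; for brevity it inverts the triangular factor explicitly, whereas careful implementations use triangular solves instead):

import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))          # a generic (non-Hermitian) invertible matrix
H = B @ B.conj().T                       # B B* is Hermitian positive definite
L = np.linalg.cholesky(H)
Linv = np.linalg.inv(L)
Hinv = Linv.conj().T @ Hinv if False else Linv.conj().T @ Linv   # (L L*)^{-1} = L^{-*} L^{-1}
Binv = B.conj().T @ Hinv                 # B^{-1} = B* (B B*)^{-1}
print(np.allclose(Binv @ B, np.eye(4)))  # True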
Computation.
There are various methods for calculating the Cholesky decomposition. The computational complexity of commonly used algorithms is "O"("n"3) in general. The algorithms described below all involve about (1/3)"n"3 FLOPs ("n"3/6 multiplications and the same number of additions) for real flavors and (4/3)"n"3 FLOPs for complex flavors, where n is the size of the matrix A. Hence, they have half the cost of the LU decomposition, which uses 2"n"3/3 FLOPs (see Trefethen and Bau 1997).
Which of the algorithms below is faster depends on the details of the implementation. Generally, the first algorithm will be slightly slower because it accesses the data in a less regular manner.
The Cholesky algorithm.
The Cholesky algorithm, used to calculate the decomposition matrix L, is a modified version of Gaussian elimination.
The recursive algorithm starts with "i" := 1 and
A(1) := A.
At step i, the matrix A("i") has the following form:
formula_53
where I"i"−1 denotes the identity matrix of dimension "i" − 1.
If the matrix L"i" is defined by
formula_54
(note that "a""i,i" > 0 since A("i") is positive definite),
then A("i") can be written as
formula_55
where
formula_56
Note that b"i" b*"i" is an outer product, therefore this algorithm is called the "outer-product version" in (Golub & Van Loan).
This is repeated for i from 1 to n. After n steps, A("n"+1) = I is obtained, and hence, the lower triangular matrix L sought for is calculated as
formula_57
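A compact sketch of this outer-product version (illustrative Python; it overwrites a working copy of A instead of forming the matrices L_i explicitly):

import numpy as np

def cholesky_outer_product(A):
    A = np.array(A, dtype=float)   # working copy that is progressively reduced
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = np.sqrt(A[i, i])
        L[i+1:, i] = A[i+1:, i] / L[i, i]
        # subtract the outer product b_i b_i^* / a_ii from the trailing block
        A[i+1:, i+1:] -= np.outer(L[i+1:, i], L[i+1:, i])
    return L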
The Cholesky–Banachiewicz and Cholesky–Crout algorithms.
If the equation
formula_58
is written out, the following is obtained:
formula_59
and therefore the following formulas for the entries of L:
formula_60
formula_61
For complex and real matrices, inconsequential arbitrary sign changes of diagonal and associated off-diagonal elements are allowed. The expression under the square root is always positive if A is real and positive-definite.
For complex Hermitian matrix, the following formula applies:
formula_62
formula_63
So it now is possible to compute the ("i", "j") entry if the entries to the left and above are known. The computation is usually arranged in either of the following orders:
for (i = 0; i < dimensionSize; i++) {
    for (j = 0; j <= i; j++) {
        float sum = 0;
        for (k = 0; k < j; k++)
            sum += L[i][k] * L[j][k];

        if (i == j)
            L[i][j] = sqrt(A[i][i] - sum);
        else
            L[i][j] = (1.0 / L[j][j] * (A[i][j] - sum));
    }
}
The above algorithm can be succinctly expressed as combining a dot product and matrix multiplication in vectorized programming languages such as Fortran as the following,
do i = 1, size(A,1)
L(i,i) = sqrt(A(i,i) - dot_product(L(i,1:i-1), L(i,1:i-1)))
L(i+1:,i) = (A(i+1:,i) - matmul(conjg(L(i,1:i-1)), L(i+1:,1:i-1))) / L(i,i)
end do
where codice_0 refers to complex conjugate of the elements.
for (j = 0; j < dimensionSize; j++) {
    float sum = 0;
    for (k = 0; k < j; k++) {
        sum += L[j][k] * L[j][k];
    }
    L[j][j] = sqrt(A[j][j] - sum);

    for (i = j + 1; i < dimensionSize; i++) {
        sum = 0;
        for (k = 0; k < j; k++) {
            sum += L[i][k] * L[j][k];
        }
        L[i][j] = (1.0 / L[j][j] * (A[i][j] - sum));
    }
}
The above algorithm can be succinctly expressed as combining a dot product and matrix multiplication in vectorized programming languages such as Fortran as the following,
do i = 1, size(A,1)
L(i,i) = sqrt(A(i,i) - dot_product(L(1:i-1,i), L(1:i-1,i)))
L(i,i+1:) = (A(i,i+1:) - matmul(conjg(L(1:i-1,i)), L(1:i-1,i+1:))) / L(i,i)
end do
where codice_0 refers to complex conjugate of the elements.
Either pattern of access allows the entire computation to be performed in-place if desired.
Stability of the computation.
Suppose that there is a desire to solve a well-conditioned system of linear equations. If the LU decomposition is used, then the algorithm is unstable unless some sort of pivoting strategy is used. In the latter case, the error depends on the so-called growth factor of the matrix, which is usually (but not always) small.
Now, suppose that the Cholesky decomposition is applicable. As mentioned above, the algorithm will be twice as fast. Furthermore, no pivoting is necessary, and the error will always be small. Specifically, if Ax = b, and y denotes the computed solution, then y solves the perturbed system (A + E)y = b, where
formula_64
Here ||·||2 is the matrix 2-norm, "cn" is a small constant depending on n, and ε denotes the unit round-off.
One concern with the Cholesky decomposition to be aware of is the use of square roots. If the matrix being factorized is positive definite as required, the numbers under the square roots are always positive "in exact arithmetic". Unfortunately, the numbers can become negative because of round-off errors, in which case the algorithm cannot continue. However, this can only happen if the matrix is very ill-conditioned. One way to address this is to add a diagonal correction matrix to the matrix being decomposed in an attempt to promote the positive-definiteness. While this might lessen the accuracy of the decomposition, it can be very favorable for other reasons; for example, when performing Newton's method in optimization, adding a diagonal matrix can improve stability when far from the optimum.
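One simple way to implement the diagonal correction mentioned above is to retry the factorization with growing "jitter". The following is an illustrative sketch; the jitter schedule is arbitrary.

import numpy as np

def cholesky_with_jitter(A, jitters=(0.0, 1e-10, 1e-8, 1e-6, 1e-4)):
    # Retry the factorization with an increasing diagonal correction if it fails.
    n = A.shape[0]
    for j in jitters:
        try:
            return np.linalg.cholesky(A + j * np.eye(n))
        except np.linalg.LinAlgError:
            continue
    raise np.linalg.LinAlgError("matrix is not positive definite, even after correction")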
LDL decomposition.
An alternative form, eliminating the need to take square roots when A is symmetric, is the symmetric indefinite factorization
formula_65
The following recursive relations apply for the entries of D and L:
formula_66
formula_67
This works as long as the generated diagonal elements in D stay non-zero. The decomposition is then unique. D and L are real if A is real.
For complex Hermitian matrix A, the following formula applies:
formula_68
formula_69
Again, the pattern of access allows the entire computation to be performed in-place if desired.
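The recursions above translate directly into code; the following is an illustrative sketch for the real symmetric case, without pivoting:

import numpy as np

def ldl_decompose(A):
    # A = L D L^T with unit lower-triangular L and diagonal D (real symmetric A).
    n = A.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        D[j] = A[j, j] - np.sum(L[j, :j] ** 2 * D[:j])
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.sum(L[i, :j] * L[j, :j] * D[:j])) / D[j]
    return L, D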
Block variant.
When used on indefinite matrices, the LDL* factorization is known to be unstable without careful pivoting; specifically, the elements of the factorization can grow arbitrarily. A possible improvement is to perform the factorization on block sub-matrices, commonly 2 × 2:
formula_70
where every element in the matrices above is a square submatrix. From this, these analogous recursive relations follow:
formula_71
formula_72
This involves matrix products and explicit inversion, thus limiting the practical block size.
Updating the decomposition.
A task that often arises in practice is that one needs to update a Cholesky decomposition. In more detail, one has already computed the Cholesky decomposition formula_73 of some matrix formula_74, then one changes the matrix formula_74 in some way into another matrix, say formula_75, and one wants to compute the Cholesky decomposition of the updated matrix: formula_76. The question is now whether one can use the Cholesky decomposition of formula_74 that was computed before to compute the Cholesky decomposition of formula_75.
Rank-one update.
The specific case, where the updated matrix formula_75 is related to the matrix formula_74 by formula_77, is known as a "rank-one update".
Here is a function written in Matlab syntax that realizes a rank-one update:
function [L] = cholupdate(L, x)
n = length(x);
for k = 1:n
r = sqrt(L(k, k)^2 + x(k)^2);
c = r / L(k, k);
s = x(k) / L(k, k);
L(k, k) = r;
if k < n
L((k+1):n, k) = (L((k+1):n, k) + s * x((k+1):n)) / c;
x((k+1):n) = c * x((k+1):n) - s * L((k+1):n, k);
end
end
end
A "rank-n update" is one where for a matrix formula_78 one updates the decomposition such that formula_79. This can be achieved by successively performing rank-one updates for each of the columns of formula_78.
Rank-one downdate.
A "rank-one downdate" is similar to a rank-one update, except that the addition is replaced by subtraction: formula_80. This only works if the new matrix formula_75 is still positive definite.
The code for the rank-one update shown above can easily be adapted to do a rank-one downdate: one merely needs to replace the two additions in the assignment to codice_2 and codice_3 by subtractions.
Adding and removing rows and columns.
If a symmetric and positive definite matrix formula_81 is represented in block form as
formula_82
and its upper Cholesky factor
formula_83
then for a new matrix formula_75, which is the same as formula_81 but with the insertion of new rows and columns,
formula_84
Now there is an interest in finding the Cholesky factorization of formula_75, which can be called formula_85, without directly computing the entire decomposition.
formula_86
Writing formula_87 for the solution of formula_88, which can be found easily for triangular matrices, and formula_89 for the Cholesky decomposition of formula_90, the following relations can be found:
formula_91
These formulas may be used to determine the Cholesky factor after the insertion of rows or columns in any position, if the row and column dimensions are appropriately set (including to zero). The inverse problem,
formula_84
with known Cholesky decomposition
formula_92
and the desire to determine the Cholesky factor
formula_93
of the matrix formula_81 with rows and columns removed,
formula_94
yields the following rules:
formula_95
Notice that the equations above that involve finding the Cholesky decomposition of a new matrix are all of the form formula_96, which allows them to be efficiently calculated using the update and downdate procedures detailed in the previous section.
Proof for positive semi-definite matrices.
Proof by limiting argument.
The above algorithms show that every positive definite matrix formula_97 has a Cholesky decomposition. This result can be extended to the positive semi-definite case by a limiting argument. The argument is not fully constructive, i.e., it gives no explicit numerical algorithms for computing Cholesky factors.
If formula_97 is an formula_98 positive semi-definite matrix, then the sequence formula_99 consists of positive definite matrices. (This is an immediate consequence of, for example, the spectral mapping theorem for the polynomial functional calculus.) Also,
formula_100
in operator norm. From the positive definite case, each formula_101 has Cholesky decomposition formula_102. By property of the operator norm,
formula_103
The inequality formula_104 holds because formula_105 equipped with the operator norm is a C* algebra. So formula_106 is a bounded set in the Banach space of operators, therefore relatively compact (because the underlying vector space is finite-dimensional).
Consequently, it has a convergent subsequence, also denoted by formula_107, with limit formula_108.
It can be easily checked that this formula_108 has the desired properties, i.e. formula_109, and formula_108 is lower triangular with non-negative diagonal entries: for all formula_110 and formula_111,
formula_112
Therefore, formula_109.
Because the underlying vector space is finite-dimensional, all topologies on the space of operators are equivalent.
So formula_107 tends to formula_108 in norm means formula_107 tends to formula_108 entrywise.
This in turn implies that, since each formula_113 is lower triangular with non-negative diagonal entries, formula_108 is also.
Proof by QR decomposition.
Let formula_74 be a positive semi-definite Hermitian matrix. Then it can be written as a product of its square root matrix, formula_114. Now QR decomposition can be applied to formula_115, resulting in formula_116, where formula_117 is unitary and formula_118 is upper triangular. Inserting the decomposition into the original equality yields formula_119. Setting formula_120 completes the proof.
Generalization.
The Cholesky factorization can be generalized to (not necessarily finite) matrices with operator entries. Let formula_121 be a sequence of Hilbert spaces. Consider the operator matrix
formula_122
acting on the direct sum
formula_123
where each
formula_124
is a bounded operator. If A is positive (semidefinite) in the sense that for all finite k and for any
formula_125
there is formula_126, then there exists a lower triangular operator matrix L such that A = LL*. One can also take the diagonal entries of L to be positive.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{A} = \\mathbf{L L}^{*},"
},
{
"math_id": 1,
"text": "\\mathbf{A} = \\mathbf{L L}^\\mathsf{T},"
},
{
"math_id": 2,
"text": "\\begin{bmatrix}0 & 0 \\\\0 & 1\\end{bmatrix} = \\mathbf L \\mathbf L^*, \\quad \\quad \\mathbf L=\\begin{bmatrix}0 & 0\\\\ \\cos \\theta & \\sin\\theta\\end{bmatrix},"
},
{
"math_id": 3,
"text": " \\mathbf L = \\begin{bmatrix} \\mathbf L_1 & 0 \\\\ \\mathbf L_2 & 0\\end{bmatrix} "
},
{
"math_id": 4,
"text": "\\mathbf{A} = \\mathbf{L D L}^*,"
},
{
"math_id": 5,
"text": "\\mathbf{A} = \\mathbf{L D L}^* = \\mathbf L \\mathbf D^{1/2} \\left(\\mathbf D^{1/2} \\right)^* \\mathbf L^* =\n\\mathbf L \\mathbf D^{1/2} \\left(\\mathbf L \\mathbf D^{1/2}\\right)^*."
},
{
"math_id": 6,
"text": "\\mathbf A = \\mathbf C \\mathbf C^*"
},
{
"math_id": 7,
"text": "\\mathbf C"
},
{
"math_id": 8,
"text": "\\mathbf L \\mathbf D \\mathbf L^*"
},
{
"math_id": 9,
"text": " \\mathbf L = \\mathbf C \\mathbf S^{-1} "
},
{
"math_id": 10,
"text": " \\mathbf D = \\mathbf S^2. "
},
{
"math_id": 11,
"text": "\\begin{align}\n \\begin{pmatrix}\n 4 & 12 & -16 \\\\\n 12 & 37 & -43 \\\\\n -16 & -43 & 98 \\\\\n \\end{pmatrix}\n=\n \\begin{pmatrix}\n 2 & 0 & 0 \\\\\n 6 & 1 & 0 \\\\\n -8 & 5 & 3 \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n 2 & 6 & -8 \\\\\n 0 & 1 & 5 \\\\\n 0 & 0 & 3 \\\\\n \\end{pmatrix}.\n\\end{align}"
},
{
"math_id": 12,
"text": "\\begin{align}\n \\begin{pmatrix}\n 4 & 12 & -16 \\\\\n 12 & 37 & -43 \\\\\n -16 & -43 & 98 \\\\\n \\end{pmatrix}\n& =\n \\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 3 & 1 & 0 \\\\\n -4 & 5 & 1 \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n 4 & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n 0 & 0 & 9 \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n 1 & 3 & -4 \\\\\n 0 & 1 & 5 \\\\\n 0 & 0 & 1 \\\\\n \\end{pmatrix}.\n\\end{align}"
},
{
"math_id": 13,
"text": "v_1"
},
{
"math_id": 14,
"text": "v_2"
},
{
"math_id": 15,
"text": "y^TAy = 1"
},
{
"math_id": 16,
"text": "v_1, ..., v_n"
},
{
"math_id": 17,
"text": "v_i^T A v_j = \\delta_{ij}"
},
{
"math_id": 18,
"text": "\\left\\{ \\sum_i x_i v_i : x^T x = 1 \\right\\} = f(\\mathbb S^n)"
},
{
"math_id": 19,
"text": "f"
},
{
"math_id": 20,
"text": "e_i \\mapsto v_i"
},
{
"math_id": 21,
"text": "\\mathbb S^n"
},
{
"math_id": 22,
"text": "V := [v_1 | v_2 | \\cdots | v_n]"
},
{
"math_id": 23,
"text": "V^TAV = I"
},
{
"math_id": 24,
"text": "V"
},
{
"math_id": 25,
"text": "A = LL^T"
},
{
"math_id": 26,
"text": "L = (V^{-1})^T"
},
{
"math_id": 27,
"text": "\\lambda = 1/\\|v_i\\|^2"
},
{
"math_id": 28,
"text": "\\Sigma = \\mathrm{diag}(\\lambda_1, ..., \\lambda_n)"
},
{
"math_id": 29,
"text": "V = U\\Sigma^{-1/2}"
},
{
"math_id": 30,
"text": "U"
},
{
"math_id": 31,
"text": "A = U\\Sigma U^T"
},
{
"math_id": 32,
"text": "\\mathbf{Ax} = \\mathbf{b}"
},
{
"math_id": 33,
"text": "\\mathbf{A} = \\mathbf{LL}^\\mathrm{*}"
},
{
"math_id": 34,
"text": "\\mathbf{Ly} = \\mathbf{b}"
},
{
"math_id": 35,
"text": "\\mathbf{L^*x} = \\mathbf{y}"
},
{
"math_id": 36,
"text": "\\mathbf{LL}^\\mathrm{*}"
},
{
"math_id": 37,
"text": "\\mathbf{A} = \\mathbf{LDL}^\\mathrm{*}"
},
{
"math_id": 38,
"text": "\\mathbf{DL}^\\mathrm{*}\\mathbf{x} = \\mathbf{y}"
},
{
"math_id": 39,
"text": " p_k "
},
{
"math_id": 40,
"text": " B_k p_k = -g_k "
},
{
"math_id": 41,
"text": " g_k "
},
{
"math_id": 42,
"text": " B_k "
},
{
"math_id": 43,
"text": "x_1"
},
{
"math_id": 44,
"text": "x_2"
},
{
"math_id": 45,
"text": "\\rho"
},
{
"math_id": 46,
"text": "z_1"
},
{
"math_id": 47,
"text": "z_2"
},
{
"math_id": 48,
"text": "x_1 = z_1"
},
{
"math_id": 49,
"text": "x_2 = \\rho z_1 + \\sqrt{1 - \\rho^2} z_2"
},
{
"math_id": 50,
"text": "n^3"
},
{
"math_id": 51,
"text": "\\tfrac{1}{2} n^3"
},
{
"math_id": 52,
"text": "\\mathbf{B}^{-1} = \\mathbf{B}^* (\\mathbf{B B}^*)^{-1}."
},
{
"math_id": 53,
"text": "\\mathbf{A}^{(i)}=\n\\begin{pmatrix}\n\\mathbf{I}_{i-1} & 0 & 0 \\\\\n0 & a_{i,i} & \\mathbf{b}_{i}^{*} \\\\\n0 & \\mathbf{b}_{i} & \\mathbf{B}^{(i)}\n\\end{pmatrix},\n"
},
{
"math_id": 54,
"text": "\\mathbf{L}_{i}:=\n\\begin{pmatrix}\n\\mathbf{I}_{i-1} & 0 & 0 \\\\\n0 & \\sqrt{a_{i,i}} & 0 \\\\\n0 & \\frac{1}{\\sqrt{a_{i,i}}} \\mathbf{b}_{i} & \\mathbf{I}_{n-i}\n\\end{pmatrix},\n"
},
{
"math_id": 55,
"text": "\\mathbf{A}^{(i)} = \\mathbf{L}_{i} \\mathbf{A}^{(i+1)} \\mathbf{L}_{i}^{*}"
},
{
"math_id": 56,
"text": "\\mathbf{A}^{(i+1)}=\n\\begin{pmatrix}\n\\mathbf{I}_{i-1} & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & \\mathbf{B}^{(i)} - \\frac{1}{a_{i,i}} \\mathbf{b}_{i} \\mathbf{b}_{i}^{*}\n\\end{pmatrix}."
},
{
"math_id": 57,
"text": "\\mathbf{L} := \\mathbf{L}_{1} \\mathbf{L}_{2} \\dots \\mathbf{L}_{n}."
},
{
"math_id": 58,
"text": "\\begin{align}\n\\mathbf{A} = \\mathbf{LL}^T & =\n\\begin{pmatrix} L_{11} & 0 & 0 \\\\\n L_{21} & L_{22} & 0 \\\\\n L_{31} & L_{32} & L_{33}\\\\\n\\end{pmatrix}\n\\begin{pmatrix} L_{11} & L_{21} & L_{31} \\\\\n 0 & L_{22} & L_{32} \\\\\n 0 & 0 & L_{33}\n\\end{pmatrix} \\\\[8pt]\n& =\n\\begin{pmatrix} L_{11}^2 & &(\\text{symmetric}) \\\\\n L_{21}L_{11} & L_{21}^2 + L_{22}^2& \\\\\n L_{31}L_{11} & L_{31}L_{21}+L_{32}L_{22} & L_{31}^2 + L_{32}^2+L_{33}^2\n\\end{pmatrix},\n\\end{align}"
},
{
"math_id": 59,
"text": "\\begin{align}\n\\mathbf{L} = \n\\begin{pmatrix} \\sqrt{A_{11}} & 0 & 0 \\\\\nA_{21}/L_{11} & \\sqrt{A_{22} - L_{21}^2} & 0 \\\\\nA_{31}/L_{11} & \\left( A_{32} - L_{31}L_{21} \\right) /L_{22} &\\sqrt{A_{33}- L_{31}^2 - L_{32}^2}\n\\end{pmatrix}\n\\end{align}"
},
{
"math_id": 60,
"text": " L_{j,j} = (\\pm)\\sqrt{ A_{j,j} - \\sum_{k=1}^{j-1} L_{j,k}^2 }, "
},
{
"math_id": 61,
"text": " L_{i,j} = \\frac{1}{L_{j,j}} \\left( A_{i,j} - \\sum_{k=1}^{j-1} L_{i,k} L_{j,k} \\right) \\quad \\text{for } i>j. "
},
{
"math_id": 62,
"text": " L_{j,j} = \\sqrt{ A_{j,j} - \\sum_{k=1}^{j-1} L_{j,k}^*L_{j,k} }, "
},
{
"math_id": 63,
"text": " L_{i,j} = \\frac{1}{L_{j,j}} \\left( A_{i,j} - \\sum_{k=1}^{j-1} L_{j,k}^* L_{i,k} \\right) \\quad \\text{for } i>j. "
},
{
"math_id": 64,
"text": " \\|\\mathbf{E}\\|_2 \\le c_n \\varepsilon \\|\\mathbf{A}\\|_2. "
},
{
"math_id": 65,
"text": "\n\\begin{align}\n\\mathbf{A} = \\mathbf{LDL}^\\mathrm{T} & =\n\\begin{pmatrix} 1 & 0 & 0 \\\\\n L_{21} & 1 & 0 \\\\\n L_{31} & L_{32} & 1\\\\\n\\end{pmatrix}\n\\begin{pmatrix} D_1 & 0 & 0 \\\\\n 0 & D_2 & 0 \\\\\n 0 & 0 & D_3\\\\\n\\end{pmatrix}\n\\begin{pmatrix} 1 & L_{21} & L_{31} \\\\\n 0 & 1 & L_{32} \\\\\n 0 & 0 & 1\\\\\n\\end{pmatrix} \\\\[8pt]\n& = \\begin{pmatrix} D_1 & &(\\mathrm{symmetric}) \\\\\n L_{21}D_1 & L_{21}^2D_1 + D_2& \\\\\n L_{31}D_1 & L_{31}L_{21}D_{1}+L_{32}D_2 & L_{31}^2D_1 + L_{32}^2D_2+D_3.\n\\end{pmatrix}.\n\\end{align}\n"
},
{
"math_id": 66,
"text": " D_j = A_{jj} - \\sum_{k=1}^{j-1} L_{jk}^2 D_k, "
},
{
"math_id": 67,
"text": " L_{ij} = \\frac{1}{D_j} \\left( A_{ij} - \\sum_{k=1}^{j-1} L_{ik} L_{jk} D_k \\right) \\quad \\text{for } i>j. "
},
{
"math_id": 68,
"text": " D_{j} = A_{jj} - \\sum_{k=1}^{j-1} L_{jk}L_{jk}^* D_k, "
},
{
"math_id": 69,
"text": " L_{ij} = \\frac{1}{D_j} \\left( A_{ij} - \\sum_{k=1}^{j-1} L_{ik} L_{jk}^* D_k \\right) \\quad \\text{for } i>j. "
},
{
"math_id": 70,
"text": "\\begin{align}\n\\mathbf{A} = \\mathbf{LDL}^\\mathrm{T} & =\n\\begin{pmatrix}\n \\mathbf I & 0 & 0 \\\\\n \\mathbf L_{21} & \\mathbf I & 0 \\\\\n \\mathbf L_{31} & \\mathbf L_{32} & \\mathbf I\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n \\mathbf D_1 & 0 & 0 \\\\\n 0 & \\mathbf D_2 & 0 \\\\\n 0 & 0 & \\mathbf D_3\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n \\mathbf I & \\mathbf L_{21}^\\mathrm T & \\mathbf L_{31}^\\mathrm T \\\\\n 0 & \\mathbf I & \\mathbf L_{32}^\\mathrm T \\\\\n 0 & 0 & \\mathbf I\\\\\n\\end{pmatrix} \\\\[8pt]\n& = \\begin{pmatrix}\n \\mathbf D_1 & &(\\mathrm{symmetric}) \\\\\n \\mathbf L_{21} \\mathbf D_1 & \\mathbf L_{21} \\mathbf D_1 \\mathbf L_{21}^\\mathrm T + \\mathbf D_2& \\\\\n \\mathbf L_{31} \\mathbf D_1 & \\mathbf L_{31} \\mathbf D_{1} \\mathbf L_{21}^\\mathrm T + \\mathbf L_{32} \\mathbf D_2 & \\mathbf L_{31} \\mathbf D_1 \\mathbf L_{31}^\\mathrm T + \\mathbf L_{32} \\mathbf D_2 \\mathbf L_{32}^\\mathrm T + \\mathbf D_3\n\\end{pmatrix},\n\\end{align}\n"
},
{
"math_id": 71,
"text": "\\mathbf D_j = \\mathbf A_{jj} - \\sum_{k=1}^{j-1} \\mathbf L_{jk} \\mathbf D_k \\mathbf L_{jk}^\\mathrm T,"
},
{
"math_id": 72,
"text": "\\mathbf L_{ij} = \\left(\\mathbf A_{ij} - \\sum_{k=1}^{j-1} \\mathbf L_{ik} \\mathbf D_k \\mathbf L_{jk}^\\mathrm T\\right) \\mathbf D_j^{-1}."
},
{
"math_id": 73,
"text": "\\mathbf{A} = \\mathbf{L}\\mathbf{L}^*"
},
{
"math_id": 74,
"text": "\\mathbf{A}"
},
{
"math_id": 75,
"text": " \\tilde{\\mathbf{A}} "
},
{
"math_id": 76,
"text": " \\tilde{\\mathbf{A}} = \\tilde{\\mathbf{L}} \\tilde{\\mathbf{L}}^* "
},
{
"math_id": 77,
"text": " \\tilde{\\mathbf{A}} = \\mathbf{A} + \\mathbf{x} \\mathbf{x}^* "
},
{
"math_id": 78,
"text": "\\mathbf{M}"
},
{
"math_id": 79,
"text": " \\tilde{\\mathbf{A}} = \\mathbf{A} + \\mathbf{M} \\mathbf{M}^* "
},
{
"math_id": 80,
"text": " \\tilde{\\mathbf{A}} = \\mathbf{A} - \\mathbf{x} \\mathbf{x}^* "
},
{
"math_id": 81,
"text": " \\mathbf A "
},
{
"math_id": 82,
"text": "\n\\mathbf{A} = \n\\begin{pmatrix}\n \\mathbf A_{11} & \\mathbf A_{13} \\\\\n \\mathbf A_{13}^{\\mathrm{T}} & \\mathbf A_{33} \\\\\n\\end{pmatrix}\n"
},
{
"math_id": 83,
"text": "\n\\mathbf{L} = \n\\begin{pmatrix}\n \\mathbf L_{11} & \\mathbf L_{13} \\\\\n 0 & \\mathbf L_{33} \\\\\n\\end{pmatrix},\n"
},
{
"math_id": 84,
"text": "\\begin{align}\n\\tilde{\\mathbf{A}} &= \n\\begin{pmatrix}\n \\mathbf A_{11} & \\mathbf A_{12} & \\mathbf A_{13} \\\\\n \\mathbf A_{12}^{\\mathrm{T}} & \\mathbf A_{22} & \\mathbf A_{23} \\\\\n \\mathbf A_{13}^{\\mathrm{T}} & \\mathbf A_{23}^{\\mathrm{T}} & \\mathbf A_{33} \\\\\n\\end{pmatrix}\n\\end{align}\n"
},
{
"math_id": 85,
"text": " \\tilde{\\mathbf S} "
},
{
"math_id": 86,
"text": "\\begin{align}\n\\tilde{\\mathbf{S}} &= \n\\begin{pmatrix}\n \\mathbf S_{11} & \\mathbf S_{12} & \\mathbf S_{13} \\\\\n 0 & \\mathbf S_{22} & \\mathbf S_{23} \\\\\n 0 & 0 & \\mathbf S_{33} \\\\\n\\end{pmatrix}.\n\\end{align}\n"
},
{
"math_id": 87,
"text": " \\mathbf A \\setminus \\mathbf{b}"
},
{
"math_id": 88,
"text": " \\mathbf A \\mathbf x = \\mathbf b"
},
{
"math_id": 89,
"text": " \\text{chol} (\\mathbf M)"
},
{
"math_id": 90,
"text": " \\mathbf M "
},
{
"math_id": 91,
"text": "\\begin{align}\n\\mathbf S_{11} &= \\mathbf L_{11}, \\\\\n\\mathbf S_{12} &= \\mathbf L_{11}^{\\mathrm{T}} \\setminus \\mathbf A_{12}, \\\\\n\\mathbf S_{13} &= \\mathbf L_{13}, \\\\\n\\mathbf S_{22} &= \\mathrm{chol} \\left(\\mathbf A_{22} - \\mathbf S_{12}^{\\mathrm{T}} \\mathbf S_{12}\\right), \\\\\n\\mathbf S_{23} &= \\mathbf S_{22}^{\\mathrm{T}} \\setminus \\left(\\mathbf A_{23} - \\mathbf S_{12}^{\\mathrm{T}} \\mathbf S_{13}\\right), \\\\\n\\mathbf S_{33} &= \\mathrm{chol} \\left(\\mathbf L_{33}^{\\mathrm{T}} \\mathbf L_{33} - \\mathbf S_{23}^{\\mathrm{T}} \\mathbf S_{23}\\right).\n\\end{align}\n"
},
{
"math_id": 92,
"text": "\\begin{align}\n\\tilde{\\mathbf{S}} &= \n\\begin{pmatrix}\n \\mathbf S_{11} & \\mathbf S_{12} & \\mathbf S_{13} \\\\\n 0 & \\mathbf S_{22} & \\mathbf S_{23} \\\\\n 0 & 0 & \\mathbf S_{33} \\\\\n\\end{pmatrix}\n\\end{align}\n"
},
{
"math_id": 93,
"text": "\\begin{align}\n\\mathbf{L} &= \n\\begin{pmatrix}\n \\mathbf L_{11} & \\mathbf L_{13} \\\\\n 0 & \\mathbf L_{33} \\\\\n\\end{pmatrix}\n\\end{align}\n"
},
{
"math_id": 94,
"text": "\\begin{align}\n\\mathbf{A} &= \n\\begin{pmatrix}\n \\mathbf A_{11} & \\mathbf A_{13} \\\\\n \\mathbf A_{13}^{\\mathrm{T}} & \\mathbf A_{33} \\\\\n\\end{pmatrix},\n\\end{align}\n"
},
{
"math_id": 95,
"text": "\\begin{align}\n\\mathbf L_{11} &= \\mathbf S_{11}, \\\\\n\\mathbf L_{13} &= \\mathbf S_{13}, \\\\\n\\mathbf L_{33} &= \\mathrm{chol} \\left(\\mathbf S_{33}^{\\mathrm{T}} \\mathbf S_{33} + \\mathbf S_{23}^{\\mathrm{T}} \\mathbf S_{23}\\right).\n\\end{align}\n"
},
{
"math_id": 96,
"text": " \\tilde{\\mathbf{A}} = \\mathbf{A} \\pm \\mathbf{x} \\mathbf{x}^* "
},
{
"math_id": 97,
"text": " \\mathbf{A} "
},
{
"math_id": 98,
"text": " n \\times n "
},
{
"math_id": 99,
"text": " \\left(\\mathbf{A}_k\\right)_k := \\left(\\mathbf{A} + \\frac{1}{k} \\mathbf{I}_n\\right)_k "
},
{
"math_id": 100,
"text": " \n\\mathbf{A}_k \\rightarrow \\mathbf{A}\n\\quad \\text{for} \\quad\nk \\rightarrow \\infty\n"
},
{
"math_id": 101,
"text": " \\mathbf{A}_k "
},
{
"math_id": 102,
"text": " \\mathbf{A}_k = \\mathbf{L}_k\\mathbf{L}_k^* "
},
{
"math_id": 103,
"text": "\\| \\mathbf{L}_k \\|^2 \\leq \\| \\mathbf{L}_k \\mathbf{L}_k^* \\| = \\| \\mathbf{A}_k \\| \\,."
},
{
"math_id": 104,
"text": "\\leq"
},
{
"math_id": 105,
"text": "M_n(\\mathbb{C})"
},
{
"math_id": 106,
"text": " \\left(\\mathbf{L}_k \\right)_k"
},
{
"math_id": 107,
"text": " \\left( \\mathbf{L}_k \\right)_k"
},
{
"math_id": 108,
"text": " \\mathbf{L}"
},
{
"math_id": 109,
"text": " \\mathbf{A} = \\mathbf{L}\\mathbf{L}^* "
},
{
"math_id": 110,
"text": " x"
},
{
"math_id": 111,
"text": " y"
},
{
"math_id": 112,
"text": " \n\\langle \\mathbf{A} x, y \\rangle \n= \\left\\langle \\lim \\mathbf{A}_k x, y \\right\\rangle \n= \\langle \\lim \\mathbf{L}_k \\mathbf{L}_k^* x, y \\rangle \n= \\langle \\mathbf{L} \\mathbf{L}^*x, y \\rangle \\,. \n"
},
{
"math_id": 113,
"text": " \\mathbf{L}_k"
},
{
"math_id": 114,
"text": "\\mathbf{A} = \\mathbf{B} \\mathbf{B}^*"
},
{
"math_id": 115,
"text": "\\mathbf{B}^*"
},
{
"math_id": 116,
"text": "\\mathbf{B}^* = \\mathbf{Q}\\mathbf{R}"
},
{
"math_id": 117,
"text": "\\mathbf{Q}"
},
{
"math_id": 118,
"text": "\\mathbf{R}"
},
{
"math_id": 119,
"text": "A = \\mathbf{B} \\mathbf{B}^* = (\\mathbf{QR})^*\\mathbf{QR} = \\mathbf{R}^*\\mathbf{Q}^*\\mathbf{QR} = \\mathbf{R}^*\\mathbf{R}"
},
{
"math_id": 120,
"text": "\\mathbf{L} = \\mathbf{R}^*"
},
{
"math_id": 121,
"text": "\\{\\mathcal{H}_n \\}"
},
{
"math_id": 122,
"text": "\n\\mathbf{A} =\n\\begin{bmatrix}\n\\mathbf{A}_{11} & \\mathbf{A}_{12} & \\mathbf{A}_{13} & \\; \\\\\n\\mathbf{A}_{12}^* & \\mathbf{A}_{22} & \\mathbf{A}_{23} & \\; \\\\\n\\mathbf{A} _{13}^* & \\mathbf{A}_{23}^* & \\mathbf{A}_{33} & \\; \\\\\n\\; & \\; & \\; & \\ddots\n\\end{bmatrix}\n"
},
{
"math_id": 123,
"text": "\\mathcal{H} = \\bigoplus_n \\mathcal{H}_n,"
},
{
"math_id": 124,
"text": "\\mathbf{A}_{ij} : \\mathcal{H}_j \\rightarrow \\mathcal{H} _i"
},
{
"math_id": 125,
"text": "h \\in \\bigoplus_{n = 1}^k \\mathcal{H}_k ,"
},
{
"math_id": 126,
"text": "\\langle h, \\mathbf{A} h\\rangle \\ge 0"
},
{
"math_id": 127,
"text": "A = R^* R"
},
{
"math_id": 128,
"text": "R"
}
]
| https://en.wikipedia.org/wiki?curid=134433 |
1344480 | Uniform isomorphism | Uniformly continuous homeomorphism
In the mathematical field of topology, a uniform isomorphism or uniform homeomorphism is a special isomorphism between uniform spaces that respects uniform properties. Uniform spaces with uniform maps form a category. An isomorphism between uniform spaces is called a uniform isomorphism.
Definition.
A function formula_0 between two uniform spaces formula_1 and formula_2 is called a uniform isomorphism if it satisfies the following properties:
formula_0 is a bijection,
formula_0 is uniformly continuous, and
the inverse function formula_3 is uniformly continuous.
In other words, a uniform isomorphism is a uniformly continuous bijection between uniform spaces whose inverse is also uniformly continuous.
If a uniform isomorphism exists between two uniform spaces they are called uniformly isomorphic or uniformly equivalent.
Uniform embeddings.
A uniform embedding is an injective uniformly continuous map formula_4 between uniform spaces whose inverse formula_5 is also uniformly continuous, where the image formula_6 has the subspace uniformity inherited from formula_7
Examples.
The uniform structures induced by equivalent norms on a vector space are uniformly isomorphic. | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "f^{-1}"
},
{
"math_id": 4,
"text": "i : X \\to Y"
},
{
"math_id": 5,
"text": "i^{-1} : i(X) \\to X"
},
{
"math_id": 6,
"text": "i(X)"
},
{
"math_id": 7,
"text": "Y."
}
]
| https://en.wikipedia.org/wiki?curid=1344480 |
13445497 | Boat rigging | Setting up a rowing boat to accommodate the crew for rowing
Boats used in the sport of rowing may be adjusted in many different ways according to the needs of the crew, the type of racing, and anticipated rowing conditions. The primary objective of rigging a boat is to accommodate the different physiques and styles of rowing of the crew in such a way that the oars move in similar arcs through the water, thus improving the crew's efficiency and cohesiveness.
Together, the various adjustments are known as the 'rig' of the boat. Within a multi-rower crew, such as an eight, different oarsmen will make small adjustments to their own position, though most settings are usually uniform throughout the crew.
The order of the outriggers on the boat can also be altered so that rowers on different sides can row in different positions in the boat. This is covered in the article on Boat positions.
Gearing.
The oar acts as a lever, pivoting around the gate, which acts as a fulcrum. The oar's button sets the leverage ratio between the inboard and outboard portions of the oar and therefore sets the gearing.
Moving the button towards the handle reduces the inboard and increases the outboard, making each stroke harder but more effective. Such a gearing might be used for sprint racing.
The distance of the gate from the boat's centerline is usually adjustable by 3 to 4 cm (roughly 1.2 to 1.6 inches).
The gearing is usually set the same for all rowers in a crew, though a particularly tall or strong oarsman may have a different gearing to accommodate them.
The gear ratio is calculated slightly differently for sculling and sweep boats.
Sculling:
formula_0
Sweep:
formula_1
where formula_2 is the overall oar length, formula_3 is the inboard length, and formula_4 is the spread.
The reason for two formulas is that spread is typically measured as the distance between port and starboard pins in a sculling boat and the distance between the keel and the pin for a sweep boat. Common gear ratios for sculling are between 2.4 and 2.6; for sweep common gear ratios are between 3.0 and 3.2.
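The calculation can be illustrated in code. The sketch below is not part of the original text; the example lengths (in centimetres) are hypothetical values chosen only to fall inside the typical ranges quoted above.
<syntaxhighlight lang="python">
# Gear ratio from overall length (OA), inboard length (IB) and spread (S).
def sculling_gear(oa_cm: float, ib_cm: float, spread_cm: float) -> float:
    """G = 2(OA - IB) / S, with the spread measured pin to pin."""
    return 2 * (oa_cm - ib_cm) / spread_cm

def sweep_gear(oa_cm: float, ib_cm: float, spread_cm: float) -> float:
    """G = (OA - IB) / S, with the spread measured keel to pin."""
    return (oa_cm - ib_cm) / spread_cm

# Hypothetical example dimensions, not taken from the article:
print(round(sculling_gear(288, 88, 159), 2))   # 2.52 -- inside the 2.4-2.6 sculling range
print(round(sweep_gear(376, 116, 85), 2))      # 3.06 -- inside the 3.0-3.2 sweep range
</syntaxhighlight>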
Height.
The height of the gate can be adjusted, usually by moving washers on the pin from below it to above it, or vice versa. This may be required if the boat is sitting particularly low or high in the water, due to the crew's weight. If the crew anticipates rough water the boat may be rigged higher to allow more clearance of the blade above the water on the recovery.
The height of the gate is usually measured from the lowest point on the top of the seat.
Transverse Pitch.
The gate can be rotated so that the blade is presented to the water at a slight angle, usually so that the top of the blade is further towards the stern than the bottom while it is in the water. This makes it easier to keep the blade at the right height during the stroke and to extract it at the end. Usually 3 to 5 degrees of transverse pitch is used, and when rowers talk of 'pitch' they are referring to transverse pitch.
Transverse pitch may be achieved by rotating the pin on which the gate pivots, or by adding shaped wedges into the back of the gate for the oar to rest on.
Transverse pitch is usually set the same for all members of a crew. If the rowers on one side had a different transverse pitch than the other it would tend to unbalance the boat.
In the UK it is called Stern pitch.
Lateral Pitch.
Lateral pitch is the angle by which the pin leans away from the boat, with the top of the pin further from the boat's centreline than the bottom. Lateral pitch typically ranges from 0 to 2 degrees.
The effect of lateral pitch is to give more transverse pitch at the start of the stroke, and less at the finish, and may make the rower feel that the oar stays at the right height in the water more easily.
Footstretcher.
The footstretcher is where the rower's feet are attached to the boat, and has a pair of shoes or simple clogs attached to it. Adjustments of the footstretcher are usually made on the basis of the individual rower's physique.
Rake (angle).
The footstretcher can sometimes be adjusted for the angle to the horizontal, allowing for more or less flexibility in the rower's ankles.
In most boats the footstretcher is set at about a 45-degree angle relative to the keel/waterline, although around 42 degrees is often considered ideal.
- Flatter for inflexible ankles: too much reduces the effectiveness of the leg push and increases the likelihood of over-reaching.
- Steeper for more flexible ankles, and can be used to prevent over-reaching: too much will increase the likelihood of Achilles injuries.
Height.
Changing the height of the feet changes how easy it is for the rower to reach forward and the amount of power they can comfortably apply. Lowering the feet allows a greater body angle at the catch, while raising the feet reduces the moment arm between the handle and the force at the feet, allowing greater force application for the same core/postural muscle strength.
Position.
The footstretcher can also move bow-wards or sternwards, usually to accommodate the length of the rower's legs. Typically a coach will start rigging the boat on the basis of all of the crew achieving the same position at the finish of the stroke, by adjusting the positions of the footstretchers.
The Slide.
The slide (the runners on which the seat rolls) can usually be adjusted fore and aft so that the rower can use full reach. If the coach considers that a rower is over-reaching at the catch, he may adjust the slide so that the rower hits the end of the slide ('frontstops') when the legs are compressed to the correct angle, preventing over-compression. | [
{
"math_id": 0,
"text": "G = \\frac{2(OA-IB)}{S}"
},
{
"math_id": 1,
"text": "G = \\frac{(OA-IB)}{S}"
},
{
"math_id": 2,
"text": "OA"
},
{
"math_id": 3,
"text": "IB"
},
{
"math_id": 4,
"text": "S"
}
]
| https://en.wikipedia.org/wiki?curid=13445497 |
13446 | Hebrew alphabet | Alphabet of the Hebrew language
The Hebrew alphabet (,<templatestyles src="Citation/styles.css"/>[a] ), known variously by scholars as the Ktav Ashuri, Jewish script, square script and block script, is traditionally an abjad script used in the writing of the Hebrew language and other Jewish languages, most notably Yiddish, Ladino, Judeo-Arabic, and Judeo-Persian. In modern Hebrew, vowels are increasingly introduced. It is also used informally in Israel to write Levantine Arabic, especially among Druze. It is an offshoot of the Imperial Aramaic alphabet, which flourished during the Achaemenid Empire and which itself derives from the Phoenician alphabet.
Historically, two separate abjad scripts have been used to write Hebrew. The original, old Hebrew script, known as the paleo-Hebrew alphabet, has been largely preserved in a variant form as the Samaritan alphabet. The present "Jewish script" or "square script", on the contrary, is a stylized form of the Aramaic alphabet and was technically known by Jewish sages as Ashurit (lit. "Assyrian script"), since its origins were alleged to be from Assyria.
Various "styles" (in current terms, "fonts") of representation of the Jewish script letters described in this article also exist, including a variety of cursive Hebrew styles. In the remainder of this article, the term "Hebrew alphabet" refers to the square script unless otherwise indicated.
The Hebrew alphabet has 22 letters. It does not have case. Five letters have different forms when used at the end of a word. Hebrew is written from right to left. Originally, the alphabet was an abjad consisting only of consonants, but is now considered an "impure abjad". As with other abjads, such as the Arabic alphabet, during its centuries-long use scribes devised means of indicating vowel sounds by separate vowel points, known in Hebrew as "niqqud." In both biblical and rabbinic Hebrew, the letters can also function as "matres lectionis", which is when certain consonants are used to indicate vowels. There is a trend in Modern Hebrew towards the use of "matres lectionis" to indicate vowels that have traditionally gone unwritten, a practice known as "full spelling".
The Yiddish alphabet, a modified version of the Hebrew alphabet used to write Yiddish, is a true alphabet, with all vowels rendered in the spelling, except in the case of inherited Hebrew words, which typically retain their Hebrew consonant-only spellings.
The Arabic and Hebrew alphabets have similarities because they are both derived from the Aramaic alphabet, which in turn derives either from paleo-Hebrew or the Phoenician alphabet, both being slight regional variations of the Proto-Canaanite alphabet used in ancient times to write the various Canaanite languages (including Hebrew, Moabite, Phoenician, Punic, et cetera).
History.
The Canaanite dialects were largely indistinguishable before around 1000 BCE. An example of related early Semitic inscriptions from the area include the tenth-century Gezer calendar over which scholars are divided as to whether its language is Hebrew or Phoenician and whether the script is Proto-Canaanite or paleo-Hebrew.
A Hebrew variant of the Proto-Canaanite alphabet, called the paleo-Hebrew alphabet by scholars, began to emerge around 800 BCE. An example is the Siloam inscription (c. 700 BCE).
The paleo-Hebrew alphabet was used in the ancient kingdoms of Israel and Judah. Following the Babylonian exile of the Kingdom of Judah in the 6th century BCE, Jews began using a form of the Imperial Aramaic alphabet, another offshoot of the same family of scripts, which flourished during the Achaemenid Empire. The Samaritans, who remained in the Land of Israel, continued to use the paleo-Hebrew alphabet. During the 3rd century BCE, Jews began to use a stylized, "square" form of the Aramaic alphabet that was used by the Persian Empire (and which in turn had been adopted from the Assyrians), while the Samaritans continued to use a form of the paleo-Hebrew script called the Samaritan alphabet. After the fall of the Persian Empire in 330 BCE, Jews used both scripts before settling on the square Assyrian form.
The square Hebrew alphabet was later adapted and used for writing languages of the Jewish diaspora – such as Karaim, the Judeo-Arabic languages, Judaeo-Spanish, and Yiddish. The Hebrew alphabet continued in use for scholarly writing in Hebrew and came again into everyday use with the rebirth of the Hebrew language as a spoken language in the 18th and 19th centuries, especially in Israel.
Description.
General.
In the traditional form, the Hebrew alphabet is an abjad consisting only of consonants, written from right to left. It has 22 letters, five of which use different forms at the end of a word.
Vowels.
In the traditional form, vowels are indicated by the weak consonants Aleph (<templatestyles src="Script/styles_hebrew.css" />א), He (<templatestyles src="Script/styles_hebrew.css" />ה), Waw/Vav (<templatestyles src="Script/styles_hebrew.css" />ו), or Yodh (<templatestyles src="Script/styles_hebrew.css" />י) serving as vowel letters, or "matres lectionis": the letter is combined with a previous vowel and becomes silent, or by imitation of such cases in the spelling of other forms. Also, a system of vowel points to indicate vowels (diacritics), called niqqud, was developed. In modern forms of the alphabet, as in the case of Yiddish and to some extent Modern Hebrew, vowels may be indicated. Today, the trend is toward full spelling with the weak letters acting as true vowels.
When used to write Yiddish, vowels are indicated, using certain letters, either with niqqud diacritics (e.g. <templatestyles src="Script/styles_hebrew.css" />אָ or <templatestyles src="Script/styles_hebrew.css" />יִ) or without (e.g. <templatestyles src="Script/styles_hebrew.css" />ע or <templatestyles src="Script/styles_hebrew.css" />י), except for Hebrew words, which in Yiddish are written in their Hebrew spelling.
To preserve the proper vowel sounds, scholars developed several different sets of vocalization and diacritical symbols called "nequdot" (<templatestyles src="Script/styles_hebrew.css" />נקודות, literally "points"). One of these, the Tiberian system, eventually prevailed. Aaron ben Moses ben Asher, and his family for several generations, are credited for refining and maintaining the system. These points are normally used only for special purposes, such as Biblical books intended for study, in poetry or when teaching the language to children. The Tiberian system also includes a set of cantillation marks, called "trope" or , used to indicate how scriptural passages should be chanted in synagogue recitations of scripture (although these marks do not appear in the scrolls). In everyday writing of modern Hebrew, "niqqud" are absent; however, patterns of how words are derived from Hebrew roots (called "shorashim" or "triliterals") allow Hebrew speakers to determine the vowel-structure of a given word from its consonants based on the word's context and part of speech.
Alphabet.
Unlike the Paleo-Hebrew writing script, the modern Hebrew script has five letters that have special final forms,<templatestyles src="Citation/styles.css"/>[c] called sofit (, meaning in this context "final" or "ending") form, used only at the end of a word, somewhat as in the Greek or in the Arabic and Mandaic alphabets.<templatestyles src="Citation/styles.css"/>[b] These are shown below the normal form in the following table (letter names are Unicode standard). Although Hebrew is read and written from right to left, the following table shows the letters in order from left to right:
Order.
As far back as the 13th century BCE, ancient Hebrew abecedaries indicate a slightly different ordering of the alphabet. The Zayit Stone, Izbet Sartah ostracon, and one inscription from Kuntillet Ajrud each contain a number of reverse letter orders; such as -, -, -, etc.
A reversal to can be clearly seen in the Book of Lamentations, whose first four chapters are ordered as alphabetical acrostics. In the Masoretic text, the first chapter has the now-usual ordering, and the second, third and fourth chapters exhibit . In the Dead Sea Scrolls version (4QLam/4Q111), reversed ordering also appears in the first chapter (i.e. in all the first four chapters). The fact that these chapters follows the pre-exilic order is evidence for them being written shortly after the events described, rather than being later, post-exilic compositions.
Pronunciation.
Alphabet.
The descriptions that follow are based on the pronunciation of modern standard Israeli Hebrew.
By analogy with the other dotted/dotless pairs, dotless tav, <templatestyles src="Script/styles_hebrew.css" />ת, would be expected to be pronounced /θ/ (voiceless dental fricative), and dotless dalet <templatestyles src="Script/styles_hebrew.css" />ד as /ð/ (voiced dental fricative), but these were lost among most Jews due to these sounds not existing in the countries where they lived (such as in nearly all of Eastern Europe). Yiddish modified /θ/ to /s/ (cf. seseo in Spanish), but in modern Israeli Hebrew, it is simply pronounced /t/. Likewise, historical /ð/ is simply pronounced /d/.
Shin and sin.
"Shin" and "sin" are represented by the same letter, <templatestyles src="Script/styles_hebrew.css" />ש, but are two separate phonemes. When vowel diacritics are used, the two phonemes are differentiated with a "shin"-dot or "sin"-dot; the "shin"-dot is above the upper-right side of the letter, and the "sin"-dot is above the upper-left side of the letter.
Historically, "left-dot-sin" corresponds to Proto-Semitic *, which in biblical-Judaic-Hebrew corresponded to the voiceless alveolar lateral fricative (or /ś/).
Dagesh.
Historically, the consonants <templatestyles src="Script/styles_hebrew.css" />ב "bet", <templatestyles src="Script/styles_hebrew.css" />ג "gimmel", <templatestyles src="Script/styles_hebrew.css" />ד "daleth", <templatestyles src="Script/styles_hebrew.css" />כ "kaf", <templatestyles src="Script/styles_hebrew.css" />פ "pe" and <templatestyles src="Script/styles_hebrew.css" />ת "tav" each had two sounds: one hard (plosive), and one soft (fricative), depending on the position of the letter and other factors. When vowel diacritics are used, the hard sounds are indicated by a central dot called "dagesh" (<templatestyles src="Script/styles_hebrew.css" />דגש), while the soft sounds lack a "dagesh". In modern Hebrew, however, the "dagesh" only changes the pronunciation of <templatestyles src="Script/styles_hebrew.css" />ב "bet", <templatestyles src="Script/styles_hebrew.css" />כ "kaf", and <templatestyles src="Script/styles_hebrew.css" />פ "pe", and does not affect the name of the letter. The differences are as follows:
In other dialects (mainly liturgical) there are variations from this pattern.
Sounds represented with diacritic geresh.
The sounds , , , written ⟨<templatestyles src="Script/styles_hebrew.css" />צ׳⟩, ⟨<templatestyles src="Script/styles_hebrew.css" />ג׳⟩, ⟨<templatestyles src="Script/styles_hebrew.css" />ז׳⟩, and , non-standardly sometimes transliterated ⟨<templatestyles src="Script/styles_hebrew.css" />וו⟩, are often found in slang and loanwords that are part of the everyday Hebrew colloquial vocabulary. The symbol resembling an apostrophe after the Hebrew letter modifies the pronunciation of the letter and is called a "geresh".
The pronunciation of the following letters can also be modified with the geresh diacritic. The represented sounds are however foreign to Hebrew phonology, i.e., these symbols mainly represent sounds in foreign words or names when transliterated with the Hebrew alphabet, and not loanwords.
"Geresh" is also used to denote an abbreviation consisting of a single Hebrew letter, while "gershayim" (a doubled "geresh") are used to denote acronyms pronounced as a string of letters; "geresh" and "gershayim" are also used to denote Hebrew numerals consisting of a single Hebrew letter or of multiple Hebrew letters, respectively. Geresh is also the name of a cantillation mark used for Torah recitation, though its visual appearance and function are different in that context.
Identical pronunciation.
In much of Israel's general population, especially where Ashkenazic pronunciation is prevalent, many letters have the same pronunciation. They are as follows:
Ancient Hebrew pronunciation.
Some of the variations in sound mentioned above are due to a systematic feature of Ancient Hebrew. The six consonants were pronounced differently depending on their position. These letters were also called "BeGeD KeFeT" letters . The full details are very complex; this summary omits some points. They were pronounced as plosives at the beginning of a syllable, or when doubled. They were pronounced as fricatives when preceded by a vowel (commonly indicated with a macron, ḇ ḡ ḏ ḵ p̄ ṯ). The plosive and double pronunciations were indicated by the "dagesh". In Modern Hebrew the sounds ḏ and ḡ have reverted to and , respectively, and ṯ has become , so only the remaining three consonants show variation. <templatestyles src="Script/styles_hebrew.css" />ר "resh" may have also been a "doubled" letter, making the list "BeGeD KePoReT". (Sefer Yetzirah, 4:1)
Regional and historical variation.
The following table contains the pronunciation of the Hebrew letters in reconstructed historical forms and dialects using the . The apostrophe-looking symbol after some letters is not a yud but a geresh. It is used for loanwords with non-native Hebrew sounds. The dot in the middle of some of the letters, called a "dagesh kal", also modifies the sounds of the letters <templatestyles src="Script/styles_hebrew.css" />ב, <templatestyles src="Script/styles_hebrew.css" />כ and <templatestyles src="Script/styles_hebrew.css" />פ in modern Hebrew (in some forms of Hebrew it modifies also the sounds of the letters <templatestyles src="Script/styles_hebrew.css" />ג, <templatestyles src="Script/styles_hebrew.css" />ד and/or <templatestyles src="Script/styles_hebrew.css" />ת; the "dagesh chazak" – orthographically indistinguishable from the "dagesh kal" – designates gemination, which today is realized only rarely – e.g. in biblical recitations or when using Arabic loanwords).
Vowels.
Matres lectionis.
<templatestyles src="Script/styles_hebrew.css" />א "alef", <templatestyles src="Script/styles_hebrew.css" />ע "ayin", <templatestyles src="Script/styles_hebrew.css" />ו "waw/vav" and <templatestyles src="Script/styles_hebrew.css" />י "yod" are letters that can sometimes indicate a vowel instead of a consonant (which would be, respectively, ). When they do, <templatestyles src="Script/styles_hebrew.css" />ו and <templatestyles src="Script/styles_hebrew.css" />י are considered to constitute part of the vowel designation in combination with a niqqud symbol – a vowel diacritic (whether or not the diacritic is marked), whereas <templatestyles src="Script/styles_hebrew.css" />א and <templatestyles src="Script/styles_hebrew.css" />ע are considered to be mute, their role being purely indicative of the non-marked vowel.
Vowel points.
"Niqqud" is the system of dots that help determine vowels and consonants. In Hebrew, all forms of "niqqud" are often omitted in writing, except for children's books, prayer books, poetry, foreign words, and words which would be ambiguous to pronounce. Israeli Hebrew has five vowel phonemes, , but many more written symbols for them:
Meteg.
By adding a vertical line (called "Meteg") underneath the letter and to the left of the vowel point, the vowel is made long. The "meteg" is only used in Biblical Hebrew, not Modern Hebrew.
Sh'va.
By adding two vertical dots (called "Sh'va") underneath the letter, the vowel is made very short. When sh'va is placed on the first letter of the word, mostly it is "è" (but in some instances, it makes the first letter silent without a vowel (vowel-less): e.g. וְ "wè" to "w")
Gershayim.
The symbol <templatestyles src="Script/styles_hebrew.css" />״ is called a gershayim and is a punctuation mark used in the Hebrew language to denote acronyms. It is written before the last letter in the acronym, e.g. <templatestyles src="Script/styles_hebrew.css" />ר״ת. Gershayim is also the name of a cantillation mark in the reading of the Torah, printed above the accented letter, e.g. <templatestyles src="Script/styles_hebrew.css" />א֞.
Stylistic variants.
The following table displays typographic and chirographic variants of each letter. For the five letters that have a different final form used at the end of words, the final forms are displayed beneath the regular form.
The block (square, or "print" type) and cursive ("handwritten" type) are the only variants in widespread contemporary use. Rashi is also used, for historical reasons, in a handful of standard texts.
Numeric values of letters.
Following the adoption of Greek Hellenistic alphabetic numeration practice, Hebrew letters started being used to denote numbers in the late 2nd century BC, and performed this arithmetic function for about a thousand years. Nowadays alphanumeric notation is used only in specific contexts, e.g. denoting dates in the Hebrew calendar, denoting grades of school in Israel, other listings (e.g. , – "phase a, phase b"), commonly in Kabbalah (Jewish mysticism) in a practice known as gematria, and often in religious contexts.
The numbers 500, 600, 700, 800 and 900 are commonly represented by the juxtapositions <templatestyles src="Script/styles_hebrew.css" />ת״ק, <templatestyles src="Script/styles_hebrew.css" />ת״ר, <templatestyles src="Script/styles_hebrew.css" />ת״ש, <templatestyles src="Script/styles_hebrew.css" />ת״ת, and <templatestyles src="Script/styles_hebrew.css" />תת״ק respectively.
Adding a geresh ("<templatestyles src="Script/styles_hebrew.css" />׳") to a letter multiplies its value by one thousand, for example, the year 5778 is portrayed as <templatestyles src="Script/styles_hebrew.css" />ה׳תשע״ח, where <templatestyles src="Script/styles_hebrew.css" />ה׳ represents 5000, and <templatestyles src="Script/styles_hebrew.css" />תשע״ח represents 778.
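As an illustration of the scheme described above, the short sketch below (not part of the article) decodes such an alphanumeric string into an integer. It uses the standard letter values, treats final forms as having the same values as the regular letters, and handles the geresh and gershayim marks as described; the function name is arbitrary.
<syntaxhighlight lang="python">
# Standard numeric values of the letters; final forms are mapped to the same values here.
VALUES = {
    'א': 1, 'ב': 2, 'ג': 3, 'ד': 4, 'ה': 5, 'ו': 6, 'ז': 7, 'ח': 8, 'ט': 9,
    'י': 10, 'כ': 20, 'ל': 30, 'מ': 40, 'נ': 50, 'ס': 60, 'ע': 70, 'פ': 80, 'צ': 90,
    'ק': 100, 'ר': 200, 'ש': 300, 'ת': 400,
    'ך': 20, 'ם': 40, 'ן': 50, 'ף': 80, 'ץ': 90,
}
GERESH, GERSHAYIM = '\u05F3', '\u05F4'    # the marks ׳ and ״

def hebrew_numeral_value(s: str) -> int:
    total, value = 0, 0
    for ch in s:
        if ch == GERSHAYIM:
            continue                       # gershayim only marks the last letter group
        elif ch == GERESH:
            total += value * 1000          # geresh multiplies the letters before it by 1000
            value = 0
        else:
            value += VALUES[ch]
    return total + value

print(hebrew_numeral_value('ת\u05F4ק'))            # 500  (tav + qof)
print(hebrew_numeral_value('ה\u05F3תשע\u05F4ח'))   # 5778 (the year example above)
</syntaxhighlight>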
Transliterations and transcriptions.
The following table lists transliterations and transcriptions of Hebrew letters used in Modern Hebrew.
Clarifications:
Note: SBL's transliteration system, recommended in its "Handbook of Style", differs slightly from the 2006 "precise" transliteration system of the Academy of the Hebrew Language; for "<templatestyles src="Script/styles_hebrew.css" />צ" SBL uses "ṣ" (≠ AHL "ẓ"), and for with no dagesh, SBL uses the same symbols as for with dagesh (i.e. "b", "g", "d", "k", "f", "t").
<templatestyles src="Citation/styles.css"/> <templatestyles src="Citation/styles.css"/> <templatestyles src="Citation/styles.css"/> <templatestyles src="Citation/styles.css"/> A1<templatestyles src="Citation/styles.css"/>^ 2<templatestyles src="Citation/styles.css"/>^ 3<templatestyles src="Citation/styles.css"/>^ 4<templatestyles src="Citation/styles.css"/>^ In transliterations of modern Israeli Hebrew, initial and final <templatestyles src="Script/styles_hebrew.css" />ע (in regular transliteration), silent or initial <templatestyles src="Script/styles_hebrew.css" />א, and silent <templatestyles src="Script/styles_hebrew.css" />ה are "not" transliterated. To the eye of readers orientating themselves on Latin (or similar) alphabets, these letters might seem to be transliterated as vowel letters; however, these are in fact transliterations of the vowel diacritics – niqqud (or are representations of the spoken vowels). E.g., in ("if", ), ("mother", ) and ("nut", ), the letter <templatestyles src="Script/styles_hebrew.css" />א always represents the same consonant: (glottal stop), whereas the vowels /i/, /e/ and /o/ respectively represent the spoken vowel, whether it is orthographically denoted by diacritics or not. Since the Academy of the Hebrew Language ascertains that <templatestyles src="Script/styles_hebrew.css" />א in initial position is not transliterated, the symbol for the glottal stop ʾ is omitted from the transliteration, and only the subsequent vowels are transliterated (whether or not their corresponding vowel diacritics appeared in the text being transliterated), resulting in "im", "em" and "om", respectively.
<templatestyles src="Citation/styles.css"/> <templatestyles src="Citation/styles.css"/> <templatestyles src="Citation/styles.css"/> B1<templatestyles src="Citation/styles.css"/>^ 2<templatestyles src="Citation/styles.css"/>^ 3<templatestyles src="Citation/styles.css"/>^ The diacritic geresh – "<templatestyles src="Script/styles_hebrew.css" />׳" – is used with some other letters as well (<templatestyles src="Script/styles_hebrew.css" />ד׳, <templatestyles src="Script/styles_hebrew.css" />ח׳, <templatestyles src="Script/styles_hebrew.css" />ט׳, <templatestyles src="Script/styles_hebrew.css" />ע׳, <templatestyles src="Script/styles_hebrew.css" />ר׳, <templatestyles src="Script/styles_hebrew.css" />ת׳), but only to transliterate "from" other languages "to" Hebrew – never to spell Hebrew words; therefore they were not included in this table (correctly translating a Hebrew text with these letters would require using the spelling in the language from which the transliteration to Hebrew was originally made). The non-standard "<templatestyles src="Script/styles_hebrew.css" />ו׳" and "<templatestyles src="Script/styles_hebrew.css" />וו" <templatestyles src="Citation/styles.css"/>[e1] are sometimes used to represent , which like , and appears in Hebrew slang and loanwords.
<templatestyles src="Citation/styles.css"/> <templatestyles src="Citation/styles.css"/> C1<templatestyles src="Citation/styles.css"/>^ 2<templatestyles src="Citation/styles.css"/>^ The Sound (as "ch" in loch) is often transcribed "ch", inconsistently with the guidelines specified by the Academy of the Hebrew Language: → "cham"; → "schach".
<templatestyles src="Citation/styles.css"/> D<templatestyles src="Citation/styles.css"/>^ Although the Bible does include a single occurrence of a final pe with a dagesh (Book of Proverbs 30, 6: ""), in modern Hebrew is always represented by pe in its regular, not final, form "<templatestyles src="Script/styles_hebrew.css" />פ", even when in final word position, which occurs with loanwords (e.g. "shop"), foreign names (e.g. "Philip") and some slang (e.g. "slept deeply").
Religious use.
The letters of the Hebrew alphabet have played varied roles in Jewish religious literature over the centuries, primarily in mystical texts. Some sources in classical rabbinical literature seem to acknowledge the historical provenance of the currently used Hebrew alphabet and deal with them as a mundane subject (the Jerusalem Talmud, for example, records that "the Israelites took for themselves square calligraphy", and that the letters "came with the Israelites from Ashur [Assyria]"); others attribute mystical significance to the letters, connecting them with the process of creation or the redemption. In mystical conceptions, the alphabet is considered eternal, pre-existent to the Earth, and the letters themselves are seen as having holiness and power, sometimes to such an extent that several stories from the Talmud illustrate the idea that they cannot be destroyed.
The idea of the letters' creative power finds its greatest vehicle in the Sefer Yezirah, or "Book of Creation", a mystical text of uncertain origin which describes a story of creation highly divergent from that in the Book of Genesis, largely through exposition on the powers of the letters of the alphabet. The supposed creative powers of the letters are also referenced in the Talmud and Zohar.
Another book, the 13th-century Kabbalistic text Sefer HaTemunah, holds that a single letter of unknown pronunciation, held by some to be the four-pronged shin on one side of the teffilin box, is missing from the current alphabet. The world's flaws, the book teaches, are related to the absence of this letter, the eventual revelation of which will repair the universe. Another example of messianic significance attached to the letters is the teaching of Rabbi Eliezer that the five letters of the alphabet with final forms hold the "secret of redemption".
In addition, the letters occasionally feature in aggadic portions of non-mystical rabbinic literature. In such aggada the letters are often given anthropomorphic qualities and depicted as speaking to God. Commonly their shapes are used in parables to illustrate points of ethics or theology. An example from the Babylonian Talmud (a parable intended to discourage speculation about the universe before creation):
<templatestyles src="Template:Quote_box/styles.css" />
"Why does the story of creation begin with bet?... In the same manner that the letter bet is closed on all sides and only open in front, similarly you are not permitted to inquire into what is before or what was behind, but only from the actual time of Creation."
Babylonian Talmud, Tractate Hagigah, 77c
Extensive instructions about the proper methods of forming the letters are found in Mishnat Soferim, within Mishna Berura of Yisrael Meir Kagan.
Mathematical use.
In set theory, formula_0, pronounced aleph-naught, aleph-zero, or aleph-null, is used to mark the cardinal number of an infinite countable set, such as formula_1, the set of all integers. More generally, the formula_2 aleph number notation marks the ordered sequence of all distinct infinite cardinal numbers.
Less frequently used, the formula_3 beth number notation is used for the iterated power sets of formula_0. The second element formula_4 is the cardinality of the continuum. Very occasionally, a gimel function is used in cardinal notation.
Unicode and HTML.
The Unicode Hebrew block extends from U+0590 to U+05FF and from U+FB1D to U+FB4F. It includes letters, ligatures, combining diacritical marks ("Niqqud" and cantillation marks) and punctuation. The Numeric Character References is included for HTML. These can be used in many markup languages, and they are often used in Wiki to create the Hebrew glyphs compatible with the majority of web browsers.
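For instance, the following minimal sketch (not from the article) produces Hebrew letters from their Unicode code points and prints the corresponding HTML numeric character references; the variable names are arbitrary.
<syntaxhighlight lang="python">
alef = chr(0x05D0)             # א (U+05D0 HEBREW LETTER ALEF)
tav = chr(0x05EA)              # ת (U+05EA HEBREW LETTER TAV)
print(alef, tav)
print(f"&#x{ord(alef):04X};")  # hexadecimal reference: &#x05D0;
print(f"&#{ord(alef)};")       # decimal reference: &#1488;
</syntaxhighlight>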
Standard Hebrew keyboards have a 101-key layout. Like the standard QWERTY layout, the Hebrew layout was derived from the order of letters on Hebrew typewriters.
Notes.
a<templatestyles src="Citation/styles.css"/>^ "Alef-bet" is commonly written in Israeli Hebrew without the "" (, "[Hebrew] hyphen"), , as opposed to with the hyphen, .
b<templatestyles src="Citation/styles.css"/>^ The Arabic letters generally (as six of the primary letters can have only two variants) have four forms, according to their place in the word. The same goes with the Mandaic ones, except for three of the 22 letters, which have only one form.
c<templatestyles src="Citation/styles.css"/>^ In forms of Hebrew older than Modern Hebrew, <templatestyles src="Script/styles_hebrew.css" />בי״ת, <templatestyles src="Script/styles_hebrew.css" />כ״ף, and <templatestyles src="Script/styles_hebrew.css" />פ״א can only be read "b", "k" and "p", respectively, at the beginning of a word, while they will have the sole value of "v", "kh" and "f" in a "sofit" (final) position, with few exceptions. In medial positions, both pronunciations are possible. In Modern Hebrew this restriction is not absolute, e.g. and never (= "physicist"), and never (= "snob"). A "dagesh" may be inserted to unambiguously denote the plosive variant: <templatestyles src="Script/styles_hebrew.css" />בּ = , <templatestyles src="Script/styles_hebrew.css" />כּ = , <templatestyles src="Script/styles_hebrew.css" />פּ =; similarly (though today very rare in Hebrew and common only in Yiddish) a rafé placed above the letter unambiguously denotes the fricative variant: <templatestyles src="Script/styles_hebrew.css" />בֿ = , <templatestyles src="Script/styles_hebrew.css" />כֿ = and <templatestyles src="Script/styles_hebrew.css" />פֿ = . In Modern Hebrew orthography, the sound at the end of a word is denoted by the regular form "<templatestyles src="Script/styles_hebrew.css" />פ", as opposed to the final form "<templatestyles src="Script/styles_hebrew.css" />ף", which always denotes (see table of transliterations and transcriptions, comment<templatestyles src="Citation/styles.css"/>[D]).
d<templatestyles src="Citation/styles.css"/>^ However, <templatestyles src="Script/styles_hebrew.css" />וו (two separate vavs), used in Ktiv male, is to be distinguished from the "Yiddish ligature" <templatestyles src="Script/styles_hebrew.css" />װ (also two vavs but together as one character).
e1<templatestyles src="Citation/styles.css"/>^ e2<templatestyles src="Citation/styles.css"/>^ e3<templatestyles src="Citation/styles.css"/>^ e4<templatestyles src="Citation/styles.css"/>^ e5<templatestyles src="Citation/styles.css"/>^ The Academy of the Hebrew Language states that both and be indistinguishably represented in Hebrew using the letter vav. Sometimes the vav is indeed doubled, however not to denote as opposed to but rather, when spelling without niqqud, to denote the phoneme /v/ at a non-initial and non-final position in the word, whereas a single vav at a non-initial and non-final position in the word in spelling without niqqud denotes one of the phonemes /u/ or /o/. To pronounce foreign words and loanwords containing the sound , Hebrew readers must therefore rely on former knowledge and context.
Explanatory footnotes
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\aleph_0"
},
{
"math_id": 1,
"text": "\\mathbb Z"
},
{
"math_id": 2,
"text": "\\aleph_\\alpha"
},
{
"math_id": 3,
"text": "\\beth_\\alpha"
},
{
"math_id": 4,
"text": "\\beth_1"
}
]
| https://en.wikipedia.org/wiki?curid=13446 |
13454825 | List of Runge–Kutta methods | Runge–Kutta methods are methods for the numerical solution of the ordinary differential equation
formula_0
Explicit Runge–Kutta methods take the form
formula_1
The stages of an implicit method with "s" stages take the more general form below, with the stage values found by solving a coupled system over all "s" stages:
formula_2
Each method listed on this page is defined by its Butcher tableau, which puts the coefficients of the method in a table as follows:
formula_3
For adaptive and implicit methods, the Butcher tableau is extended to give values of formula_4, and the estimated error is then
formula_5.
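To make the notation concrete, the following is a minimal, illustrative Python sketch (not taken from any of the referenced works) of a single explicit Runge–Kutta step driven by a Butcher tableau; the function name and the trailing example are purely for illustration.
<syntaxhighlight lang="python">
import numpy as np

def explicit_rk_step(f, t, y, h, c, A, b):
    """One explicit Runge-Kutta step for y' = f(t, y), driven by a Butcher tableau.

    c and b have length s; A is s-by-s with zeros on and above the diagonal.
    """
    s = len(b)
    k = np.zeros((s,) + np.shape(y))
    for i in range(s):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))   # only earlier stages are used
        k[i] = f(t + c[i] * h, yi)
    return y + h * sum(b[i] * k[i] for i in range(s))

# Example: the forward Euler tableau (c = [0], A = [[0]], b = [1]) applied to y' = -y.
y = np.array([1.0])
for n in range(10):
    y = explicit_rk_step(lambda t, y: -y, 0.1 * n, y, 0.1, [0.0], [[0.0]], [1.0])
print(y)   # about 0.3487, versus the exact exp(-1) = 0.3679 (first-order error)
</syntaxhighlight>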
Explicit methods.
The explicit methods are those where the matrix formula_6 is lower triangular.
Forward Euler.
The Euler method is first order. The lack of stability and accuracy limits its popularity mainly to use as a simple introductory example of a numeric solution method.
formula_7
Explicit midpoint method.
The (explicit) midpoint method is a second-order method with two stages (see also the implicit midpoint method below):
formula_8
Heun's method.
Heun's method is a second-order method with two stages. It is also known as the explicit trapezoid rule, improved Euler's method, or modified Euler's method:
formula_9
Ralston's method.
Ralston's method is a second-order method with two stages and a minimum local error bound:
formula_10
Generic second-order method.
The generic two-stage, second-order method with free parameter "α" is:
formula_11
Kutta's third-order method.
formula_12
Generic third-order method.
See Sanderse and Veldman (2019).
for "α" ≠ 0, <templatestyles src="Fraction/styles.css" />2⁄3, 1:
formula_13
Heun's third-order method.
formula_14
Van der Houwen's/Wray's third-order method.
formula_15
Ralston's third-order method.
Ralston's third-order method is used in the embedded Bogacki–Shampine method.
formula_16
Third-order Strong Stability Preserving Runge–Kutta method (SSPRK3).
formula_17
Classic fourth-order method.
The "original" Runge–Kutta method.
formula_18
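The classic tableau translates directly into code. The sketch below is an illustration rather than an implementation from the original paper.
<syntaxhighlight lang="python">
def rk4_step(f, t, y, h):
    """One step of the classic fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = -y, y(0) = 1: ten steps of h = 0.1.
y = 1.0
for n in range(10):
    y = rk4_step(lambda t, y: -y, 0.1 * n, y, 0.1)
print(y)   # about 0.3678798, versus the exact exp(-1) = 0.3678794
</syntaxhighlight>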
3/8-rule fourth-order method.
This method is not as well known as the "classic" method, but it is just as classic, having been proposed in the same paper (Kutta, 1901).
formula_19
Ralston's fourth-order method.
This fourth-order method has minimum truncation error.
formula_20
Embedded methods.
The embedded methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step, and as result, allow to control the error with adaptive stepsize. This is done by having two methods in the tableau, one with order p and one with order p-1.
The lower-order step is given by
formula_21
where the formula_22 are the same as for the higher order method. Then the error is
formula_23
which is formula_24. The Butcher Tableau for this kind of method is extended to give the values of formula_4
formula_25
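In code, an embedded pair yields the higher-order update and the error estimate from the same stage values. The sketch below is illustrative only; the step-size update rule shown is a common choice and is not prescribed by the methods themselves.
<syntaxhighlight lang="python">
import numpy as np

def embedded_rk_step(f, t, y, h, c, A, b, b_star):
    """One explicit step plus the embedded lower-order error estimate."""
    s = len(b)
    k = [None] * s
    for i in range(s):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k[i] = np.asarray(f(t + c[i] * h, yi))
    y_new = y + h * sum(bi * ki for bi, ki in zip(b, k))
    err = h * sum((bi - bsi) * ki for bi, bsi, ki in zip(b, b_star, k))
    return y_new, float(np.max(np.abs(err)))

def new_step_size(h, err, tol, p, safety=0.9):
    """A common (not the only) controller: aim the error estimate at the tolerance."""
    return h * min(2.0, max(0.2, safety * (tol / max(err, 1e-16)) ** (1.0 / p)))
</syntaxhighlight>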
Heun–Euler.
The simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher Tableau is:
formula_26
The error estimate is used to control the stepsize.
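As a concrete illustration (reusing the embedded_rk_step helper sketched in the previous section, and equally unofficial), the tableau above corresponds to the following coefficient arrays:
<syntaxhighlight lang="python">
c      = [0.0, 1.0]
A      = [[0.0, 0.0],
          [1.0, 0.0]]
b      = [0.5, 0.5]    # Heun's method (order 2)
b_star = [1.0, 0.0]    # Euler's method (order 1)

y_new, err = embedded_rk_step(lambda t, y: -y, 0.0, np.array([1.0]), 0.1, c, A, b, b_star)
print(y_new, err)      # about [0.905] and 0.005 (= h*h/2 for this problem)
</syntaxhighlight>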
Fehlberg RK1(2).
The Fehlberg method has two methods of orders 1 and 2. Its extended Butcher Tableau is:
The first row of "b" coefficients gives the second-order accurate solution, and the second row has order one.
Bogacki–Shampine.
The Bogacki–Shampine method has two methods of orders 2 and 3. Its extended Butcher Tableau is:
The first row of "b" coefficients gives the third-order accurate solution, and the second row has order two.
Fehlberg.
The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4; it is sometimes dubbed RKF45. Its extended Butcher Tableau is:
formula_27
The first row of "b" coefficients gives the fifth-order accurate solution, and the second row has order four.
The coefficients here allow for an adaptive stepsize to be determined automatically.
Cash-Karp.
Cash and Karp have modified Fehlberg's original idea. The extended tableau for the Cash–Karp method is
The first row of "b" coefficients gives the fifth-order accurate solution, and the second row has order four.
Dormand–Prince.
The extended tableau for the Dormand–Prince method is
The first row of "b" coefficients gives the fifth-order accurate solution, and the second row gives the fourth-order accurate solution.
Implicit methods.
Backward Euler.
The backward Euler method is first order. It is unconditionally stable and non-oscillatory for linear diffusion problems.
formula_28
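Because the stage is implicit, each step requires solving an equation for the new value. The following sketch is an illustration, not a standard implementation: it uses plain fixed-point iteration, whereas stiff problems would normally use Newton's method on the same equation.
<syntaxhighlight lang="python">
def backward_euler_step(f, t, y, h, iterations=50):
    """One backward Euler step: solve y_new = y + h*f(t + h, y_new) by fixed-point iteration."""
    y_new = y                          # initial guess: the previous value
    for _ in range(iterations):
        y_new = y + h * f(t + h, y_new)
    return y_new

# y' = -y with h = 0.1: each step multiplies y by 1/(1 + h).
print(backward_euler_step(lambda t, y: -y, 0.0, 1.0, 0.1))   # about 0.909090...
</syntaxhighlight>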
Implicit midpoint.
The implicit midpoint method is of second order. It is the simplest method in the class of collocation methods known as the Gauss-Legendre methods. It is a symplectic integrator.
formula_29
Crank-Nicolson method.
The Crank–Nicolson method corresponds to the implicit trapezoidal rule and is a second-order accurate and A-stable method.
formula_30
Gauss–Legendre methods.
These methods are based on the points of Gauss–Legendre quadrature. The Gauss–Legendre method of order four has Butcher tableau:
formula_31
The Gauss–Legendre method of order six has Butcher tableau:
formula_32
Diagonally Implicit Runge–Kutta methods.
Diagonally Implicit Runge–Kutta (DIRK) formulae have been widely used for the numerical solution of stiff initial value problems;
the advantage of this approach is that the stage values can be found sequentially rather than simultaneously.
The simplest method from this class is the order 2 implicit midpoint method.
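In code, the lower-triangular structure means each stage equation involves only that stage and the ones already computed, so the stages can be solved one at a time. The sketch below is illustrative; it uses fixed-point iteration for each stage solve purely for simplicity.
<syntaxhighlight lang="python">
def dirk_step(f, t, y, h, c, A, b, iterations=50):
    """One diagonally implicit Runge-Kutta step, solving the stages one after another."""
    s = len(b)
    k = []
    for i in range(s):
        known = y + h * sum(A[i][j] * k[j] for j in range(i))   # earlier stages only
        ki = f(t + c[i] * h, known)                              # initial guess
        for _ in range(iterations):
            ki = f(t + c[i] * h, known + h * A[i][i] * ki)       # solve k_i = f(..., ... + h*a_ii*k_i)
        k.append(ki)
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Implicit midpoint as a one-stage DIRK (c = [1/2], A = [[1/2]], b = [1]) on y' = -y:
print(dirk_step(lambda t, y: -y, 0.0, 1.0, 0.1, [0.5], [[0.5]], [1.0]))
# about 0.904762, i.e. (1 - h/2)/(1 + h/2)
</syntaxhighlight>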
Kraaijevanger and Spijker's two-stage Diagonally Implicit Runge–Kutta method:
formula_33
Qin and Zhang's two-stage, 2nd order, symplectic Diagonally Implicit Runge–Kutta method:
formula_34
Pareschi and Russo's two-stage 2nd order Diagonally Implicit Runge–Kutta method:
formula_35
This Diagonally Implicit Runge–Kutta method is A-stable if and only if formula_36. Moreover, this method is L-stable if and only if formula_37 equals one of the roots of the polynomial formula_38, i.e. if formula_39.
Qin and Zhang's Diagonally Implicit Runge–Kutta method corresponds to Pareschi and Russo's Diagonally Implicit Runge–Kutta method with formula_40.
Two-stage 2nd order Diagonally Implicit Runge–Kutta method:
formula_41
Again, this Diagonally Implicit Runge–Kutta method is A-stable if and only if formula_36. As the previous method, this method is again L-stable if and only if formula_37 equals one of the roots of the polynomial formula_38, i.e. if formula_39. This condition is also necessary for 2nd order accuracy.
Crouzeix's two-stage, 3rd order Diagonally Implicit Runge–Kutta method:
formula_42
Crouzeix's three-stage, 4th order Diagonally Implicit Runge–Kutta method:
formula_43
with formula_44.
Three-stage, 3rd order, L-stable Diagonally Implicit Runge–Kutta method:
formula_45
with formula_46
Nørsett's three-stage, 4th order Diagonally Implicit Runge–Kutta method has the following Butcher tableau:
formula_47
with formula_37 one of the three roots of the cubic equation formula_48. The three roots of this cubic equation are approximately formula_49, formula_50, and formula_51. The root formula_52 gives the best stability properties for initial value problems.
Four-stage, 3rd order, L-stable Diagonally Implicit Runge–Kutta method
formula_53
Lobatto methods.
There are three main families of Lobatto methods, called IIIA, IIIB and IIIC (in classical mathematical literature, the symbols I and II are reserved for two types of Radau methods). These are named after Rehuel Lobatto as a reference to the Lobatto quadrature rule, but were introduced by Byron L. Ehle in his thesis. All are implicit methods, have order 2"s" − 2 and they all have "c"1 = 0 and "c""s" = 1. Unlike explicit methods, these methods can have order greater than the number of stages. Lobatto lived before the classic fourth-order method was popularized by Runge and Kutta.
Lobatto IIIA methods.
The Lobatto IIIA methods are collocation methods. The second-order method is known as the trapezoidal rule:
formula_54
The fourth-order method is given by
formula_55
These methods are A-stable, but not L-stable and B-stable.
Lobatto IIIB methods.
The Lobatto IIIB methods are not collocation methods, but they can be viewed as discontinuous collocation methods. The second-order method is given by
formula_56
The fourth-order method is given by
formula_57
Lobatto IIIB methods are A-stable, but not L-stable and B-stable.
Lobatto IIIC methods.
The Lobatto IIIC methods also are discontinuous collocation methods. The second-order method is given by
formula_58
The fourth-order method is given by
formula_59
They are L-stable. They are also algebraically stable and thus B-stable, which makes them suitable for stiff problems.
Lobatto IIIC* methods.
The Lobatto IIIC* methods are also known as Lobatto III methods (Butcher, 2008), Butcher's Lobatto methods (Hairer et al., 1993), and Lobatto IIIC methods (Sun, 2000) in the literature. The second-order method is given by
formula_60
Butcher's three-stage, fourth-order method is given by
formula_61
These methods are not A-stable, B-stable or L-stable. The Lobatto IIIC* method for formula_62 is sometimes called the explicit trapezoidal rule.
Generalized Lobatto methods.
One can consider a very general family of methods with three real parameters formula_63 by considering Lobatto coefficients of the form
formula_64,
where
formula_65.
For example, Lobatto IIID family introduced in (Nørsett and Wanner, 1981), also called Lobatto IIINW, are given by
formula_66
and
formula_67
These methods correspond to formula_68, formula_69, formula_70, and formula_71. The methods are L-stable. They are algebraically stable and thus B-stable.
Radau methods.
Radau methods are fully implicit methods (the matrix "A" of such methods can have any structure). Radau methods attain order 2"s" − 1 for "s" stages. They are A-stable, but expensive to implement, and they can suffer from order reduction.
The first-order Radau method is similar to the backward Euler method.
Radau IA methods.
The third-order method is given by
formula_72
The fifth-order method is given by
formula_73
Radau IIA methods.
The "c"i of this method are zeros of
formula_74.
The third-order method is given by
formula_75
The fifth-order method is given by
formula_76
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{d y}{d t} = f(t, y)."
},
{
"math_id": 1,
"text": "\\begin{align}\ny_{n+1} &= y_n + h \\sum_{i=1}^s b_i k_i \\\\\nk_1 &= f(t_n, y_n), \\\\\nk_2 &= f(t_n+c_2h, y_n+h(a_{21}k_1)), \\\\\nk_3 &= f(t_n+c_3h, y_n+h(a_{31}k_1+a_{32}k_2)), \\\\\n&\\;\\;\\vdots \\\\\nk_i &= f\\left(t_n + c_i h, y_n + h \\sum_{j = 1}^{i-1} a_{ij} k_j\\right).\n\\end{align}"
},
{
"math_id": 2,
"text": "k_i = f\\left(t_n + c_i h, y_n + h \\sum_{j = 1}^{s} a_{ij} k_j\\right). "
},
{
"math_id": 3,
"text": "\n\\begin{array}{c|cccc}\nc_1 & a_{11} & a_{12}& \\dots & a_{1s}\\\\\nc_2 & a_{21} & a_{22}& \\dots & a_{2s}\\\\\n\\vdots & \\vdots & \\vdots& \\ddots& \\vdots\\\\\nc_s & a_{s1} & a_{s2}& \\dots & a_{ss} \\\\\n\\hline\n & b_1 & b_2 & \\dots & b_s\\\\\n\\end{array}\n"
},
{
"math_id": 4,
"text": "b^*_i"
},
{
"math_id": 5,
"text": " e_{n+1} = h\\sum_{i=1}^s (b_i - b^*_i) k_i"
},
{
"math_id": 6,
"text": "[a_{ij}]"
},
{
"math_id": 7,
"text": "\n\\begin{array}{c|c}\n0 & 0 \\\\\n\\hline\n & 1 \\\\\n\\end{array}\n"
},
{
"math_id": 8,
"text": "\n\\begin{array}{c|cc}\n0 & 0 & 0 \\\\\n1/2 & 1/2 & 0 \\\\\n\\hline\n & 0 & 1 \\\\\n\\end{array}\n"
},
{
"math_id": 9,
"text": "\n\\begin{array}{c|cc}\n0 & 0 & 0 \\\\\n1 & 1 & 0 \\\\\n\\hline\n & 1/2 & 1/2 \\\\\n\\end{array}\n"
},
{
"math_id": 10,
"text": "\n\\begin{array}{c|cc}\n0 & 0 & 0 \\\\\n2/3 & 2/3 & 0 \\\\\n\\hline\n & 1/4 & 3/4 \\\\\n\\end{array}\n"
},
{
"math_id": 11,
"text": "\n\\begin{array}{c|ccc}\n0 & 0 & 0 \\\\\n\\alpha & \\alpha & 0 \\\\\n\\hline\n & 1-\\frac{1}{2\\alpha} & \\frac{1}{2\\alpha} \\\\\n\\end{array}\n"
},
{
"math_id": 12,
"text": "\n\\begin{array}{c|ccc}\n0 & 0 & 0 & 0 \\\\\n1/2 & 1/2 & 0 & 0 \\\\\n1 & -1 & 2 & 0 \\\\\n\\hline\n & 1/6 & 2/3 & 1/6 \\\\\n\\end{array}\n"
},
{
"math_id": 13,
"text": "\n\\begin{array}{c|ccc} \n0 & 0 & 0 & 0\\\\ \n\\alpha & \\alpha & 0 & 0\\\\ \n1 &1+\\frac{1- \\alpha}{\\alpha (3\\alpha -2)} & -\\frac{1- \\alpha}{\\alpha(3\\alpha -2)} & 0\\\\ \n\\hline\n & \\frac{1}{2}-\\frac{1}{6\\alpha} & \\frac{1}{6\\alpha(1-\\alpha)} & \\frac{2-3\\alpha}{6(1-\\alpha)} \\\\\n\\end{array}\n"
},
{
"math_id": 14,
"text": "\n\\begin{array}{c|ccc}\n0 & 0 & 0 & 0 \\\\\n1/3 & 1/3 & 0 & 0 \\\\\n2/3 & 0 & 2/3 & 0 \\\\\n\\hline\n & 1/4 & 0 & 3/4 \\\\\n\\end{array}\n"
},
{
"math_id": 15,
"text": "\n\\begin{array}{c|ccc}\n0 & 0 & 0 & 0 \\\\\n8/15 & 8/15 & 0 & 0 \\\\\n2/3 & 1/4 & 5/12 & 0 \\\\\n\\hline\n & 1/4 & 0 & 3/4 \\\\\n\\end{array}\n"
},
{
"math_id": 16,
"text": "\n\\begin{array}{c|ccc}\n0 & 0 & 0 & 0 \\\\\n1/2 & 1/2 & 0 & 0 \\\\\n3/4 & 0 & 3/4 & 0 \\\\\n\\hline\n & 2/9 & 1/3 & 4/9 \\\\\n\\end{array}\n"
},
{
"math_id": 17,
"text": "\n\\begin{array}{c|ccc}\n0 & 0 & 0 & 0 \\\\\n1 & 1 & 0 & 0 \\\\\n1/2 & 1/4 & 1/4 & 0 \\\\\n\\hline\n & 1/6 & 1/6 & 2/3 \\\\\n\\end{array}\n"
},
{
"math_id": 18,
"text": "\n\\begin{array}{c|cccc}\n0 & 0 & 0 & 0 & 0\\\\\n1/2 & 1/2 & 0 & 0 & 0\\\\\n1/2 & 0 & 1/2 & 0 & 0\\\\\n1 & 0 & 0 & 1 & 0\\\\\n\\hline\n & 1/6 & 1/3 & 1/3 & 1/6\\\\\n\\end{array}\n"
},
{
"math_id": 19,
"text": "\n\\begin{array}{c|cccc}\n0 & 0 & 0 & 0 & 0\\\\\n1/3 & 1/3 & 0 & 0 & 0\\\\\n2/3 & -1/3 & 1 & 0 & 0\\\\\n1 & 1 & -1 & 1 & 0\\\\ \n\\hline\n & 1/8 & 3/8 & 3/8 & 1/8\\\\\n\\end{array}\n"
},
{
"math_id": 20,
"text": "\n\\begin{array}{c|cccc}\n0 & 0 & 0 & 0 & 0\\\\\n\\frac{2}{5} & \\frac{2}{5} & 0 & 0 & 0\\\\\n\\frac{14 - 3 \\sqrt{5}}{16} & \\frac{-2\\,889 + 1\\,428\\sqrt{5}}{1\\,024} & \\frac{3\\,785 - 1\\,620\\sqrt{5}}{1\\,024} & 0 & 0\\\\\n1 & \\frac{-3\\,365 + 2\\,094\\sqrt{5}}{6\\,040} & \\frac{-975 - 3\\,046\\sqrt{5}}{2\\,552} & \\frac{467\\,040 + 203\\,968\\sqrt{5}}{240\\,845} & 0\\\\\n\\hline\n & \\frac{263 + 24\\sqrt{5}}{1\\,812} & \\frac{125 - 1000\\sqrt{5}}{3\\,828} & \\frac{3\\,426\\,304 + 1\\,661\\,952\\sqrt{5}}{5\\,924\\,787} & \\frac{30 - 4\\sqrt{5}}{123}\\\\\n\\end{array}\n"
},
{
"math_id": 21,
"text": " y^*_{n+1} = y_n + h\\sum_{i=1}^s b^*_i k_i, "
},
{
"math_id": 22,
"text": "k_i"
},
{
"math_id": 23,
"text": " e_{n+1} = y_{n+1} - y^*_{n+1} = h\\sum_{i=1}^s (b_i - b^*_i) k_i, "
},
{
"math_id": 24,
"text": "O(h^p)"
},
{
"math_id": 25,
"text": "\n\\begin{array}{c|cccc}\nc_1 & a_{11} & a_{12}& \\dots & a_{1s}\\\\\nc_2 & a_{21} & a_{22}& \\dots & a_{2s}\\\\\n\\vdots & \\vdots & \\vdots& \\ddots& \\vdots\\\\\nc_s & a_{s1} & a_{s2}& \\dots & a_{ss} \\\\\n\\hline\n & b_1 & b_2 & \\dots & b_s\\\\\n & b_1^* & b_2^* & \\dots & b_s^*\\\\\n\\end{array}\n"
},
{
"math_id": 26,
"text": "\n\\begin{array}{c|cc}\n\t0&\\\\\n\t1& \t1 \\\\\n\\hline\n&\t1/2& \t1/2\\\\\n\t&\t1 &\t0\n\\end{array}\n"
},
{
"math_id": 27,
"text": "\\begin{array}{r|ccccc}\n0 & & & & & \\\\\n1 / 4 & 1 / 4 & & & \\\\\n3 / 8 & 3 / 32 & 9 / 32 & & \\\\\n12 / 13 & 1932 / 2197 & -7200 / 2197 & 7296 / 2197 & \\\\\n1 & 439 / 216 & -8 & 3680 / 513 & -845 / 4104 & \\\\\n1 / 2 & -8 / 27 & 2 & -3544 / 2565 & 1859 / 4104 & -11 / 40 \\\\\n\\hline & 16 / 135 & 0 & 6656 / 12825 & 28561 / 56430 & -9 / 50 & 2 / 55 \\\\\n& 25 / 216 & 0 & 1408 / 2565 & 2197 / 4104 & -1 / 5 & 0\n\\end{array}"
},
{
"math_id": 28,
"text": "\n\\begin{array}{c|c}\n1 & 1 \\\\\n\\hline\n & 1 \\\\\n\\end{array}\n"
},
{
"math_id": 29,
"text": "\n\\begin{array}{c|c}\n1/2 & 1/2 \\\\\n\\hline\n & 1\n\\end{array}\n"
},
{
"math_id": 30,
"text": "\n\\begin{array}{c|cc}\n0 & 0 & 0 \\\\\n1 & 1/2 & 1/2 \\\\\n\\hline\n & 1/2 & 1/2 \\\\\n\\end{array}\n"
},
{
"math_id": 31,
"text": "\n\\begin{array}{c|cc}\n\\frac{1}{2}-\\frac{\\sqrt3}{6} & \\frac{1}{4} & \\frac{1}{4}-\\frac{\\sqrt3}{6} \\\\\n\\frac{1}{2}+\\frac{\\sqrt3}{6} & \\frac{1}{4}+\\frac{\\sqrt3}{6} &\\frac{1}{4} \\\\\n\\hline \n & \\frac{1}{2} & \\frac{1}{2}\\\\\n & \\frac12+\\frac{\\sqrt3}{2} & \\frac12-\\frac{\\sqrt3}{2} \\\\\n\\end{array}\n"
},
{
"math_id": 32,
"text": "\n\\begin{array}{c|ccc}\n\\frac{1}{2} - \\frac{\\sqrt{15}}{10} & \\frac{5}{36} & \\frac{2}{9}- \\frac{\\sqrt{15}}{15} & \\frac{5}{36} - \\frac{\\sqrt{15}}{30} \\\\\n\\frac{1}{2} & \\frac{5}{36} + \\frac{\\sqrt{15}}{24} & \\frac{2}{9} & \\frac{5}{36} - \\frac{\\sqrt{15}}{24}\\\\\n\\frac{1}{2} + \\frac{\\sqrt{15}}{10} & \\frac{5}{36} + \\frac{\\sqrt{15}}{30} & \\frac{2}{9} + \\frac{\\sqrt{15}}{15} & \\frac{5}{36} \\\\\n\\hline\n & \\frac{5}{18} & \\frac{4}{9} & \\frac{5}{18} \\\\\n & -\\frac56 & \\frac83 & -\\frac56\n\\end{array}\n"
},
{
"math_id": 33,
"text": "\n\\begin{array}{c|cc}\n1/2 & 1/2 & 0 \\\\\n3/2 & -1/2 & 2 \\\\\n\\hline \n & -1/2 & 3/2 \\\\\n\\end{array}\n"
},
{
"math_id": 34,
"text": "\n\\begin{array}{c|cc}\n1/4 & 1/4 & 0 \\\\\n3/4 & 1/2 & 1/4 \\\\\n\\hline \n & 1/2 & 1/2 \\\\\n\\end{array}\n"
},
{
"math_id": 35,
"text": "\n\\begin{array}{c|cc}\nx & x & 0 \\\\\n1 - x & 1 - 2x & x \\\\\n\\hline \n & \\frac{1}{2} & \\frac{1}{2}\\\\\n\\end{array}\n"
},
{
"math_id": 36,
"text": "x \\ge \\frac{1}{4}"
},
{
"math_id": 37,
"text": "x"
},
{
"math_id": 38,
"text": "x^2 - 2x + \\frac{1}{2}"
},
{
"math_id": 39,
"text": "x = 1 \\pm \\frac{\\sqrt2}{2}"
},
{
"math_id": 40,
"text": "x = 1/4"
},
{
"math_id": 41,
"text": "\n\\begin{array}{c|cc}\nx & x & 0 \\\\\n1 & 1 - x & x \\\\\n\\hline \n & 1 - x & x\\\\\n\\end{array}\n"
},
{
"math_id": 42,
"text": "\n\\begin{array}{c|cc}\n\\frac{1}{2}+\\frac{\\sqrt3}{6} & \\frac{1}{2}+\\frac{\\sqrt3}{6} & 0 \\\\\n\\frac{1}{2}-\\frac{\\sqrt3}{6} & -\\frac{\\sqrt3}{3} & \\frac{1}{2}+\\frac{\\sqrt3}{6} \\\\\n\\hline \n & \\frac{1}{2} & \\frac{1}{2}\\\\\n\\end{array}\n"
},
{
"math_id": 43,
"text": "\n\\begin{array}{c|ccc}\n\\frac{1+\\alpha}{2} & \\frac{1+\\alpha}{2} & 0 & 0 \\\\\n\\frac{1}{2} & -\\frac{\\alpha}{2} & \\frac{1+\\alpha}{2} & 0 \\\\\n\\frac{1-\\alpha}{2} & 1+\\alpha & -(1+2\\,\\alpha) & \\frac{1+\\alpha}{2} \\\\\\hline \n & \\frac{1}{6\\alpha^2} & 1 - \\frac{1}{3\\alpha^2} & \\frac{1}{6\\alpha^2}\\\\\n\\end{array}\n"
},
{
"math_id": 44,
"text": "\\alpha = \\frac{2}{\\sqrt3}\\cos{\\frac{\\pi}{18}}"
},
{
"math_id": 45,
"text": "\n\\begin{array}{c|ccc}\nx & x & 0 & 0 \\\\\n\\frac{1+x}{2} & \\frac{1-x}{2} & x & 0 \\\\\n1 & -3x^2/2+4x-1/4 & 3x^2/2-5x+5/4 & x \\\\\n\\hline \n & -3x^2/2+4x-1/4 & 3x^2/2-5x+5/4 & x \\\\\n\\end{array}\n"
},
{
"math_id": 46,
"text": "x = 0.4358665215"
},
{
"math_id": 47,
"text": "\n\\begin{array}{c|ccc}\nx & x & 0 & 0 \\\\\n1/2 & 1/2-x & x & 0 \\\\\n1-x & 2x & 1-4x & x \\\\\n\\hline\n & \\frac{1}{6(1-2x)^2} & \\frac{3(1-2x)^2 - 1}{3(1-2x)^2} & \\frac{1}{6(1-2x)^2} \\\\\n\\end{array}\n"
},
{
"math_id": 48,
"text": "x^3 -3x^2/2 + x/2 - 1/24 = 0"
},
{
"math_id": 49,
"text": "x_1 = 1.06858"
},
{
"math_id": 50,
"text": "x_2 = 0.30254"
},
{
"math_id": 51,
"text": "x_3 = 0.12889"
},
{
"math_id": 52,
"text": "x_1"
},
{
"math_id": 53,
"text": "\n\\begin{array}{c|cccc}\n1/2 & 1/2 & 0 & 0 & 0 \\\\\n2/3 & 1/6 & 1/2 & 0 & 0 \\\\\n1/2 & -1/2 & 1/2 & 1/2 & 0 \\\\\n1 & 3/2 & -3/2 & 1/2 & 1/2 \\\\\n\\hline\n & 3/2 & -3/2 & 1/2 & 1/2 \\\\\n\\end{array}\n"
},
{
"math_id": 54,
"text": "\n\\begin{array}{c|cc}\n0 & 0 & 0 \\\\\n1 & 1/2 & 1/2\\\\\n\\hline\n & 1/2 & 1/2\\\\\n& 1 & 0 \\\\\n\\end{array}\n"
},
{
"math_id": 55,
"text": "\n\\begin{array}{c|ccc}\n0 & 0 & 0 & 0 \\\\\n1/2 & 5/24& 1/3 & -1/24\\\\\n1 & 1/6 & 2/3 & 1/6 \\\\\n\\hline\n & 1/6 & 2/3 & 1/6 \\\\\n& -\\frac12 & 2 & -\\frac12 \\\\\n\\end{array}\n"
},
{
"math_id": 56,
"text": "\n\\begin{array}{c|cc}\n0 & 1/2 & 0 \\\\\n1 & 1/2 & 0 \\\\\n\n\\hline\n & 1/2 & 1/2\\\\\n& 1 & 0 \\\\\n\n\\end{array}\n"
},
{
"math_id": 57,
"text": "\n\\begin{array}{c|ccc}\n0 & 1/6 & -1/6& 0 \\\\\n1/2 & 1/6 & 1/3 & 0 \\\\\n1 & 1/6 & 5/6 & 0 \\\\\n\\hline\n & 1/6 & 2/3 & 1/6 \\\\\n& -\\frac12 & 2 & -\\frac12 \\\\\n\\end{array}\n"
},
{
"math_id": 58,
"text": "\n\\begin{array}{c|cc}\n0 & 1/2 & -1/2\\\\\n1 & 1/2 & 1/2 \\\\\n\\hline\n & 1/2 & 1/2 \\\\\n& 1 & 0 \\\\\n\\end{array}\n"
},
{
"math_id": 59,
"text": "\n\\begin{array}{c|ccc}\n0 & 1/6 & -1/3& 1/6 \\\\\n1/2 & 1/6 & 5/12& -1/12\\\\\n1 & 1/6 & 2/3 & 1/6 \\\\\n\\hline\n & 1/6 & 2/3 & 1/6 \\\\\n& -\\frac12 & 2 & -\\frac12 \\\\\n\\end{array}\n"
},
{
"math_id": 60,
"text": "\n\\begin{array}{c|cc}\n0 & 0 & 0\\\\\n1 & 1 & 0 \\\\\n\\hline\n & 1/2 & 1/2 \\\\\n\\end{array}\n"
},
{
"math_id": 61,
"text": "\n\\begin{array}{c|ccc}\n0 & 0 & 0 & 0 \\\\\n1/2 & 1/4 & 1/4 & 0\\\\\n1 & 0 & 1 & 0 \\\\\n\\hline\n & 1/6 & 2/3 & 1/6 \\\\\n\\end{array}\n"
},
{
"math_id": 62,
"text": "s = 2"
},
{
"math_id": 63,
"text": " (\\alpha_{A},\\alpha_{B},\\alpha_{C}) "
},
{
"math_id": 64,
"text": "a_{i,j}(\\alpha_{A},\\alpha_{B},\\alpha_{C}) = \\alpha_{A}a_{i,j}^A + \\alpha_{B}a_{i,j}^B + \\alpha_{C}a_{i,j}^C + \\alpha_{C*}a_{i,j}^{C*} "
},
{
"math_id": 65,
"text": "\\alpha_{C*} = 1 - \\alpha_{A} - \\alpha_{B} - \\alpha_{C}"
},
{
"math_id": 66,
"text": "\n\\begin{array}{c|cc}\n0 & 1/2 & 1/2\\\\\n1 & -1/2 & 1/2 \\\\\n\\hline\n & 1/2 & 1/2 \\\\\n\\end{array}\n"
},
{
"math_id": 67,
"text": "\n\\begin{array}{c|ccc}\n0 & 1/6 & 0 & -1/6 \\\\\n1/2 & 1/12 & 5/12 & 0\\\\\n1 & 1/2 & 1/3 & 1/6 \\\\\n\\hline\n & 1/6 & 2/3 & 1/6 \\\\\n\\end{array}\n"
},
{
"math_id": 68,
"text": "\\alpha_{A} = 2"
},
{
"math_id": 69,
"text": "\\alpha_{B} = 2"
},
{
"math_id": 70,
"text": "\\alpha_{C} = -1"
},
{
"math_id": 71,
"text": "\\alpha_{C*} = -2"
},
{
"math_id": 72,
"text": "\n\\begin{array}{c|cc}\n0 & 1/4 & -1/4 \\\\\n2/3 & 1/4 & 5/12 \\\\\n\\hline\n & 1/4 & 3/4 \\\\\n\\end{array}\n"
},
{
"math_id": 73,
"text": "\n\\begin{array}{c|ccc}\n0 & \\frac{1}{9} & \\frac{-1 - \\sqrt{6}}{18} & \\frac{-1 + \\sqrt{6}}{18} \\\\\n\\frac{3}{5} - \\frac{\\sqrt{6}}{10} & \\frac{1}{9} & \\frac{11}{45} + \\frac{7\\sqrt{6}}{360} & \\frac{11}{45} - \\frac{43\\sqrt{6}}{360}\\\\\n\\frac{3}{5} + \\frac{\\sqrt{6}}{10} & \\frac{1}{9} & \\frac{11}{45} + \\frac{43\\sqrt{6}}{360} & \\frac{11}{45} - \\frac{7\\sqrt{6}}{360} \\\\\n\\hline\n & \\frac{1}{9} & \\frac{4}{9} + \\frac{\\sqrt{6}}{36} & \\frac{4}{9} - \\frac{\\sqrt{6}}{36} \\\\\n\\end{array}\n"
},
{
"math_id": 74,
"text": "\\frac{d^{s-1}}{dx^{s-1}}(x^{s-1}(x-1)^s)"
},
{
"math_id": 75,
"text": "\n\\begin{array}{c|cc}\n1/3 & 5/12 & -1/12\\\\\n1 & 3/4 & 1/4 \\\\\n\\hline\n & 3/4 & 1/4 \\\\\n\\end{array}\n"
},
{
"math_id": 76,
"text": "\n\\begin{array}{c|ccc}\n\\frac{2}{5} - \\frac{\\sqrt{6}}{10} & \\frac{11}{45} - \\frac{7\\sqrt{6}}{360} & \\frac{37}{225} - \\frac{169\\sqrt{6}}{1800} & -\\frac{2}{225} + \\frac{\\sqrt{6}}{75} \\\\\n\\frac{2}{5} + \\frac{\\sqrt{6}}{10} & \\frac{37}{225} + \\frac{169\\sqrt{6}}{1800} & \\frac{11}{45} + \\frac{7\\sqrt{6}}{360} & -\\frac{2}{225} - \\frac{\\sqrt{6}}{75}\\\\\n1 & \\frac{4}{9} - \\frac{\\sqrt{6}}{36} & \\frac{4}{9} + \\frac{\\sqrt{6}}{36} & \\frac{1}{9} \\\\\n\\hline\n & \\frac{4}{9} - \\frac{\\sqrt{6}}{36} & \\frac{4}{9} + \\frac{\\sqrt{6}}{36} & \\frac{1}{9} \\\\\n\\end{array}\n"
}
]
| https://en.wikipedia.org/wiki?curid=13454825 |
1345686 | Post-money valuation | Post-money valuation is a way of expressing the value of a company after an investment has been made. This value is equal to the sum of the pre-money valuation and the amount of new equity.
These valuations are used to express how much ownership external investors, such as venture capitalists and angel investors, receive when they make a cash injection into a company. The amount external investors invest into a company is equal to the company's post-money valuation multiplied by the fraction of the company those investors own after the investment. Equivalently, the implied post-money valuation is calculated as the dollar amount of investment divided by the equity stake gained in an investment.
More specifically, the post-money valuation of a financial investment deal is given by the formula formula_0, where "PMV" is the post-money valuation, "N" is the number of shares the company has after the investment, and "P" is the price per share at which the investment was made. This formula is similar to the market capitalization formula used to express the value of public companies.
Example 1.
If a company is worth $100 million (pre-money) and an investor makes an investment of $25 million, the new, post-money valuation of the company will be $125 million. The investor will now own 20% of the company.
This basic example illustrates the general concept. However, in actual, real-life scenarios, the calculation of post-money valuation can be more complicated—because the capital structure of companies often includes convertible loans, warrants, and option-based management incentive schemes.
Strictly speaking, the calculation is the price paid per share multiplied by the total number of shares existing after the investment—i.e., it takes into account the number of shares arising from the conversion of loans, exercise of in-the-money warrants, and any in-the-money options. Thus it is important to confirm that the number is a fully diluted and fully converted post-money valuation.
In this scenario, the pre-money valuation should be calculated as the post-money valuation minus the total money coming into the company—not only from the purchase of shares, but also from the conversion of loans, the nominal interest, and the money paid to exercise in-the-money options and warrants.
Example 2.
Consider a company with 1,000,000 shares, a convertible loan note for $1,000,000 converting at 75% of the next round price, warrants for 200,000 shares at $10 a share, and a granted employee stock ownership plan of 200,000 shares at $4 per share. The company receives an offer to invest $8,000,000 at $8 per share.
The post-money valuation is equal to $8 times the number of shares existing after the transaction—in this case, 2,366,667 shares. This figure includes the original 1,000,000 shares, plus 1,000,000 shares from new investment, plus 166,667 shares from the loan conversion ($1,000,000 divided by 75% of the next investment round price of $8, or $1,000,000 / (.75 * 8) ), plus 200,000 shares from in-the-money options. The fully converted, fully diluted post-money valuation in this example is $18,933,336.
The pre-money valuation would be $9,133,336—calculated by taking the post-money valuation of $18,933,336 and subtracting the $8,000,000 of new investment, as well as $1,000,000 for the loan conversion and $800,000 from the exercise of the rights under the ESOP. Note that the warrants cannot be exercised because they are not in-the-money (i.e. their price, $10 a share, is still higher than the new investment price of $8 a share).
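The arithmetic above can be reproduced with a short script. This is a minimal sketch using only the figures quoted in Example 2; the variable names are illustrative, and the loan-conversion share count is rounded to a whole share as in the text.

```python
# Minimal sketch of the fully diluted, fully converted post-money calculation
# from Example 2. All inputs are the figures quoted above; names are illustrative.
existing_shares = 1_000_000
price_per_share = 8.0
new_investment = 8_000_000
loan_principal = 1_000_000
loan_discount = 0.75                      # note converts at 75% of the round price
esop_shares, esop_strike = 200_000, 4.0   # in the money (strike below $8)
warrant_strike = 10.0                     # out of the money, so warrants are ignored

new_shares = new_investment / price_per_share
loan_shares = round(loan_principal / (loan_discount * price_per_share))  # 166,667
option_shares = esop_shares if esop_strike < price_per_share else 0

total_shares = existing_shares + new_shares + loan_shares + option_shares
post_money = price_per_share * total_shares
pre_money = post_money - new_investment - loan_principal - esop_shares * esop_strike

print(int(total_shares))  # 2,366,667 shares
print(int(post_money))    # 18,933,336
print(int(pre_money))     # 9,133,336
```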
Versus market value.
Importantly, a company's post-money valuation is not equal to its market value. The post-money valuation formula does not take into account the special features of preferred stock. It assumes that preferred stock has the same value as common stock, which is usually not true as preferred stock often has liquidation preference, participation, and other features that make it worth more than common stock. Because preferred stock is worth more than common stock, post-money valuations tend to overstate the value of companies. Will Gornall and Ilya Strebulaev provide the fair values of 135 of the largest U.S. venture capital-backed companies and argue that these companies' post-money valuations are an average of 50% above their market values.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " PMV = N \\times P "
}
]
| https://en.wikipedia.org/wiki?curid=1345686 |
1345771 | Position (geometry) | Vector representing the position of a point with respect to a fixed origin
In geometry, a position or position vector, also known as location vector or radius vector, is a Euclidean vector that represents a point "P" in space. Its length represents the distance in relation to an arbitrary reference origin "O", and its direction represents the angular orientation with respect to given reference axes. Usually denoted x, r, or s, it corresponds to the straight line segment from "O" to "P".
In other words, it is the displacement or translation that maps the origin to "P":
formula_0
The term position vector is used mostly in the fields of differential geometry, mechanics and occasionally vector calculus.
Frequently this is used in two-dimensional or three-dimensional space, but can be easily generalized to Euclidean spaces and affine spaces of any dimension.
Relative position.
The relative position of a point "Q" with respect to point "P" is the Euclidean vector resulting from the subtraction of the two absolute position vectors (each with respect to the origin):
formula_1
where formula_2.
The relative direction between two points is their relative position normalized as a unit vector:
formula_3
where the denominator is the distance between the two points, formula_4.
A relative direction is a bound vector, in contrast to an ordinary direction, which is a free vector.
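As a small numerical illustration (a sketch with arbitrarily chosen points, using NumPy), the relative position, distance, and relative direction follow directly from the two position vectors:

```python
import numpy as np

# Hypothetical position vectors of points P and Q with respect to the origin O.
r_P = np.array([1.0, 2.0, 2.0])
r_Q = np.array([4.0, 6.0, 2.0])

delta_r = r_Q - r_P                    # relative position of Q with respect to P
distance = np.linalg.norm(delta_r)     # distance between the two points
direction = delta_r / distance         # relative direction (unit vector)

print(delta_r)    # [3. 4. 0.]
print(distance)   # 5.0
print(direction)  # [0.6 0.8 0. ]
```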
Definition and representation.
Three dimensions.
In three dimensions, any set of three-dimensional coordinates and their corresponding basis vectors can be used to define the location of a point in space—whichever is the simplest for the task at hand may be used.
Commonly, one uses the familiar Cartesian coordinate system, or sometimes spherical polar coordinates, or cylindrical coordinates:
formula_5
where "t" is a parameter, owing to their rectangular or circular symmetry. These different coordinates and corresponding basis vectors represent the same position vector. More general curvilinear coordinates could be used instead and are in contexts like continuum mechanics and general relativity (in the latter case one needs an additional time coordinate).
"n" dimensions.
Linear algebra allows for the abstraction of an "n"-dimensional position vector. A position vector can be expressed as a linear combination of basis vectors:
formula_6
The set of all position vectors forms position space (a vector space whose elements are the position vectors), since positions can be added (vector addition) and scaled in length (scalar multiplication) to obtain another position vector in the space. The notion of "space" is intuitive, since each "xi" ("i" = 1, 2, …, "n") can have any value; the collection of values defines a point in space.
The "dimension" of the position space is "n" (also denoted dim("R") = "n"). The "coordinates" of the vector r with respect to the basis vectors e"i" are "x""i". The vector of coordinates forms the coordinate vector or "n"-tuple ("x"1, "x"2, …, "xn").
Each coordinate "xi" may be parameterized a number of parameters "t". One parameter "xi"("t") would describe a curved 1D path, two parameters "xi"("t"1, "t"2) describes a curved 2D surface, three "xi"("t"1, "t"2, "t"3) describes a curved 3D volume of space, and so on.
The linear span of a basis set "B" = {e1, e2, …, e"n"} equals the position space "R", denoted span("B") = "R".
Applications.
Differential geometry.
Position vector fields are used to describe continuous and differentiable space curves, in which case the independent parameter need not be time, but can be (e.g.) arc length of the curve.
Mechanics.
In any equation of motion, the position vector r("t") is usually the most sought-after quantity because this function defines the motion of a particle (i.e. a point mass) – its location relative to a given coordinate system at some time "t".
To define motion in terms of position, each coordinate may be parametrized by time; since each successive value of time corresponds to a sequence of successive spatial locations given by the coordinates, the continuum limit of many successive locations is a path the particle traces.
In the case of one dimension, the position has only one component, so it effectively degenerates to a scalar coordinate. It could be, say, a vector in the "x" direction, or the radial "r" direction. Equivalent notations include
formula_7
Derivatives.
For a position vector r that is a function of time "t", the time derivatives can be computed with respect to "t". These derivatives have common utility in the study of kinematics, control theory, engineering and other sciences.
formula_8
where dr is an infinitesimally small displacement (vector).
formula_9
formula_10
These names for the first, second and third derivative of position are commonly used in basic kinematics. By extension, the higher-order derivatives can be computed in a similar fashion. Study of these higher-order derivatives can improve approximations of the original displacement function. Such higher-order terms are required in order to accurately represent the displacement function as a sum of an infinite sequence, enabling several analytical techniques in engineering and physics. | [
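For a concrete illustration, the sketch below differentiates a hypothetical position vector r(t) symbolically with SymPy; the particular trajectory (circular motion with a constant drift) is chosen only for demonstration.

```python
import sympy as sp

t = sp.symbols('t')
# Hypothetical trajectory: circular motion in the x-y plane plus a constant drift in z.
r = sp.Matrix([sp.cos(t), sp.sin(t), t])

v = r.diff(t)      # velocity: first time derivative of position
a = v.diff(t)      # acceleration: second derivative
j = a.diff(t)      # jerk: third derivative

print(v.T)   # Matrix([[-sin(t), cos(t), 1]])
print(a.T)   # Matrix([[-cos(t), -sin(t), 0]])
print(j.T)   # Matrix([[sin(t), -cos(t), 0]])
```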
{
"math_id": 0,
"text": "\\mathbf{r}=\\overrightarrow{OP}."
},
{
"math_id": 1,
"text": "\\Delta \\mathbf{r}=\\mathbf{s} - \\mathbf{r}=\\overrightarrow{PQ},"
},
{
"math_id": 2,
"text": "\\mathbf{s}=\\overrightarrow{OQ}"
},
{
"math_id": 3,
"text": "\\Delta \\mathbf{\\hat{r}}=\\Delta \\mathbf{r} / \\|\\Delta \\mathbf{r}\\|,"
},
{
"math_id": 4,
"text": "\\| \\Delta \\mathbf{r} \\|"
},
{
"math_id": 5,
"text": " \\begin{align} \n \\mathbf{r}(t) \n & \\equiv \\mathbf{r}(x,y,z) \\equiv x(t)\\mathbf{\\hat{e}}_x + y(t)\\mathbf{\\hat{e}}_y + z(t)\\mathbf{\\hat{e}}_z \\\\\n & \\equiv \\mathbf{r}(r,\\theta,\\phi) \\equiv r(t)\\mathbf{\\hat{e}}_r\\big(\\theta(t), \\phi(t)\\big) \\\\\n & \\equiv \\mathbf{r}(r,\\phi,z) \\equiv r(t)\\mathbf{\\hat{e}}_r\\big(\\phi(t)\\big) + z(t)\\mathbf{\\hat{e}}_z, \\\\\n\\end{align}"
},
{
"math_id": 6,
"text": "\\mathbf{r} = \\sum_{i=1}^n x_i \\mathbf{e}_i = x_1 \\mathbf{e}_1 + x_2 \\mathbf{e}_2 + \\dotsb + x_n \\mathbf{e}_n. "
},
{
"math_id": 7,
"text": " \\mathbf{x} \\equiv x \\equiv x(t), \\quad r \\equiv r(t), \\quad s \\equiv s(t)."
},
{
"math_id": 8,
"text": "\\mathbf{v} = \\frac{\\mathrm{d}\\mathbf{r}}{\\mathrm{d}t},"
},
{
"math_id": 9,
"text": "\\mathbf{a} = \\frac{\\mathrm{d}\\mathbf{v}}{\\mathrm{d}t} = \\frac{\\mathrm{d}^2\\mathbf{r}}{\\mathrm{d}t^2}."
},
{
"math_id": 10,
"text": "\\mathbf{j} = \\frac{\\mathrm{d}\\mathbf{a}}{\\mathrm{d}t} = \\frac{\\mathrm{d}^2\\mathbf{v}}{\\mathrm{d}t^2} = \\frac{\\mathrm{d}^3\\mathbf{r}}{\\mathrm{d}t^3}."
}
]
| https://en.wikipedia.org/wiki?curid=1345771 |
13460400 | Plus–minus method | The plus–minus method, also known as CRM (conventional reciprocal method), is a geophysical method to analyze seismic refraction data developed by J. G. Hagedoorn. It can be used to calculate the depth and velocity variations of an undulating layer boundary for slope angles less than ~10°.
Theory.
In the plus–minus method, the near surface is modeled as a layer above a halfspace where both the layer and the halfspace are allowed to have varying velocities. The method is based on the analysis of the so-called 'plus time' formula_0 and 'minus time' formula_1 that are given by:
formula_2
formula_3
where formula_4 is the traveltime from A to B, formula_5 the traveltime from A to X and formula_6 the traveltime from B to X.
Assuming that the layer boundary is planar between A" and B" and that the dip is small (<10°), the plus time formula_7 corresponds to the intercept time in classic refraction analysis and the minus time formula_8 can be expressed as:
formula_9
where formula_10 is the offset between A and X and formula_11 is the velocity of the halfspace.
Therefore, the slope of the minus time formula_12 can be used to estimate the velocity of the halfspace formula_11:
formula_13
The interval formula_14 over which the slope is estimated should be chosen according to data quality. A larger formula_14 results in more stable velocity estimates but also introduces stronger smoothing.
Like in classical refraction analysis, the thickness of the upper layer can be derived from the intercept time formula_0:
formula_15
This requires an estimation of the velocity of the upper layer formula_16 which can be obtained from the direct wave in the traveltime diagram.
Furthermore, the results of the plus–minus method can be used to calculate the shot-receiver static shift formula_17:
formula_18
where formula_19 is the datum elevation and formula_20 the surface elevation at station X.
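The sketch below runs the procedure on synthetic travel times for a flat two-layer model (all model parameters are assumed for illustration, not taken from a real survey): the halfspace velocity is recovered from the slope of the minus time and the layer thickness from the plus time.

```python
import numpy as np

# Assumed flat two-layer model (illustrative values only).
v1, v2, z_true, L = 800.0, 2400.0, 20.0, 200.0        # m/s, m/s, m, shot spacing (m)
ti = 2 * z_true * np.sqrt(v2**2 - v1**2) / (v1 * v2)  # head-wave intercept time

x = np.linspace(40.0, 160.0, 13)     # receiver positions between shots A (0 m) and B (L)
t_AX = x / v2 + ti                   # head-wave traveltimes from shot A
t_BX = (L - x) / v2 + ti             # head-wave traveltimes from shot B
t_AB = L / v2 + ti                   # reciprocal traveltime between the shots

t_plus = t_AX + t_BX - t_AB          # plus time
t_minus = t_AX - t_BX - t_AB         # minus time

slope = np.polyfit(x, t_minus, 1)[0]                  # d(t-)/dx over the spread
v2_est = 2.0 / slope                                  # halfspace velocity
z_est = t_plus * v1 * v2_est / (2 * np.sqrt(v2_est**2 - v1**2))  # layer thickness

print(v2_est)        # ~2400 m/s
print(z_est.mean())  # ~20 m
```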
Applications.
The plus–minus method was developed for shallow seismic surveys where a thin, low-velocity weathering layer covers the more solid basement. The thickness of the weathering layer is, among other things, important for static corrections in reflection seismic processing or for engineering purposes. An important advantage of the method is that it does not require manual interpretation of the intercept time or the crossover point. This also makes it easy to implement in computer programs. However, it is only applicable if the layer boundary is planar in parts and the dips are small. These assumptions often lead to smoothing of the actual topography of the layer boundary. Nowadays, the plus–minus method has mostly been replaced by more advanced inversion methods that have fewer restrictions. However, the plus–minus method is still used for real-time processing in the field because of its simplicity and low computational costs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t^+"
},
{
"math_id": 1,
"text": "t^-"
},
{
"math_id": 2,
"text": " t^+ = t_{AX} + t_{BX} - t_{AB} "
},
{
"math_id": 3,
"text": " t^- = t_{AX} - t_{BX} - t_{AB} "
},
{
"math_id": 4,
"text": "t_{AB}"
},
{
"math_id": 5,
"text": "t_{AX}"
},
{
"math_id": 6,
"text": "t_{BX}"
},
{
"math_id": 7,
"text": "t^+ "
},
{
"math_id": 8,
"text": "t^- "
},
{
"math_id": 9,
"text": " t^- = t^+ + \\frac{2x}{v_2} "
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "v_2"
},
{
"math_id": 12,
"text": "\\triangle t^-/\\triangle x"
},
{
"math_id": 13,
"text": " v_2(x) = 2 \\frac{\\triangle x}{\\triangle t^-} "
},
{
"math_id": 14,
"text": "\\triangle x"
},
{
"math_id": 15,
"text": " z(x) = \\frac{t^+ v_1(x) v_2(x)}{ 2 \\sqrt{v_2^2 - v_1^2}} "
},
{
"math_id": 16,
"text": "v_1(x)"
},
{
"math_id": 17,
"text": "\\triangle \\tau(x)"
},
{
"math_id": 18,
"text": " \\triangle \\tau(x) = - \\frac{z(x)}{v_1(x)} + \\frac{E_X - E_S + z(x)}{v_2(x)} "
},
{
"math_id": 19,
"text": "E_X"
},
{
"math_id": 20,
"text": "E_S"
}
]
| https://en.wikipedia.org/wiki?curid=13460400 |
1346058 | Mode choice | Mode choice analysis is the third step in the conventional four-step transportation forecasting model of transportation planning, following trip distribution and preceding route assignment. From origin-destination table inputs provided by trip distribution, mode choice analysis allows the modeler to determine probabilities that travelers will use a certain mode of transport. These probabilities are called the modal share, and can be used to produce an estimate of the amount of trips taken using each feasible mode.
History.
The early transportation planning model developed by the Chicago Area Transportation Study (CATS) focused on transit. It wanted to know how much travel would continue by transit. The CATS divided transit trips into two classes: trips to the Central Business District, or CBD (mainly by subway/elevated transit, express buses, and commuter trains) and other (mainly on the local bus system). For the latter, increases in auto ownership and use were a trade-off against bus use; trend data were used. CBD travel was analyzed using historic mode choice data together with projections of CBD land uses. Somewhat similar techniques were used in many studies. Two decades after CATS, for example, the London study followed essentially the same procedure, but in this case, researchers first divided trips into those made in the inner part of the city and those in the outer part. This procedure was followed because it was thought that income (resulting in the purchase and use of automobiles) drove mode choice.
Diversion curve techniques.
The CATS had diversion curve techniques available and used them for some tasks. At first, the CATS studied the diversion of auto traffic from streets and arterial roads to proposed expressways. Diversion curves were also used for bypasses built around cities to find out what percent of traffic would use the bypass. The mode choice version of diversion curve analysis proceeds this way: one forms a ratio, say:
formula_0
where:
"cm" = travel time by mode "m" and
"R" is empirical data in the form:
Given the "R" that we have calculated, the graph tells us the percent of users in the market that will choose transit. A variation on the technique is to use costs rather than time in the diversion ratio. The decision to use a time or cost ratio turns on the problem at hand. Transit agencies developed diversion curves for different kinds of situations, so variables like income and population density entered implicitly.
Diversion curves are based on empirical observations, and their improvement has resulted from better (more and more pointed) data. Curves are available for many markets. It is not difficult to obtain data and array results. Expansion of transit has motivated data development by operators and planners. Yacov Zahavi’s UMOT studies, discussed earlier, contain many examples of diversion curves.
In a sense, diversion curve analysis is expert system analysis. Planners could "eyeball" neighborhoods and estimate transit ridership by routes and time of day. Instead, diversion is observed empirically and charts drawn.
Disaggregate travel demand models.
Travel demand theory was introduced in the appendix on traffic generation. The core of the field is the set of models developed following work by Stan Warner in 1962 (Strategic Choice of Mode in Urban Travel: A Study of Binary Choice). Using data from the CATS, Warner investigated classification techniques using models from biology and psychology. Building from Warner and other early investigators, disaggregate demand models emerged. Analysis is disaggregate in that individuals are the basic units of observation, yet aggregate because models yield a single set of parameters describing the choice behavior of the population. Behavior enters because the theory made use of consumer behavior concepts from economics and parts of choice behavior concepts from psychology. Researchers at the University of California, Berkeley (especially Daniel McFadden, who won a Nobel Prize in Economics for his efforts) and the Massachusetts Institute of Technology (Moshe Ben-Akiva) (and in MIT associated consulting firms, especially Cambridge Systematics) developed what has become known as choice models, direct demand models (DDM), Random Utility Models (RUM) or, in its most used form, the multinomial logit model (MNL).
Choice models have attracted a lot of attention and work; the Proceedings of the International Association for Travel Behavior Research chronicles the evolution of the models. The models are treated in modern transportation planning and transportation engineering textbooks.
One reason for rapid model development was a felt need. Systems were being proposed (especially transit systems) where no empirical experience of the type used in diversion curves was available. Choice models permit comparison of more than two alternatives and the importance of attributes of alternatives. There was the general desire for an analysis technique that depended less on aggregate analysis and with a greater behavioral content. And there was attraction, too, because choice models have logical and behavioral roots extended back to the 1920s as well as roots in Kelvin Lancaster’s consumer behavior theory, in utility theory, and in modern statistical methods.
Psychological roots.
Early psychology work involved the typical experiment: Here are two objects with weights, "w1" and "w2", which is heavier? The finding from such an experiment would be that the greater the difference in weight, the greater the probability of choosing correctly. Graphs similar to the one on the right result.
Louis Leon Thurstone proposed (in the 1920s) that perceived weight,
"w" = "v" + "e",
where "v" is the true weight and "e" is random with
"E"("e") = 0.
The assumption that "e" is normally and identically distributed (NID) yields the binary probit model.
Econometric formulation.
Economists deal with utility rather than physical weights, and say that
observed utility = mean utility + random term.
The characteristics of the object, x, must be considered, so we have
"u"("x") = "v"("x") + "e"("x").
If we follow Thurston's assumption, we again have a probit model.
An alternative is to assume that the error terms are independently and identically distributed with a Weibull, Gumbel Type I, or double exponential distribution. (They are much the same, and differ slightly in their tails (thicker) from the normal distribution). This yields the multinomial logit model (MNL). Daniel McFadden argued that the Weibull had desirable properties compared to other distributions that might be used. Among other things, the error terms are independently and identically distributed. The logit model is simply a log ratio of the probability of choosing a mode to the probability of not choosing a mode.
formula_1
Observe the mathematical similarity between the logit model and the S-curves we estimated earlier, although here share increases with utility rather than time. With a choice model we are explaining the share of travelers using a mode (or the probability that an individual traveler uses a mode multiplied by the number of travelers).
The comparison with S-curves is suggestive that modes (or technologies) get adopted as their utility increases, which happens over time for several reasons. First, because the utility itself is a function of network effects: the more users, the more valuable the service, and the higher the utility associated with joining the network. Second, because utility increases as user costs drop, which happens when fixed costs can be spread over more users (another network effect). Third, technological advances, which occur over time and as the number of users increases, drive down relative cost.
An illustration of a utility expression is given:
formula_2
where
"Pi" = Probability of choosing mode i.
"PA" = Probability of taking auto
"cA,cT" = cost of auto, transit
"tA,tT" = travel time of auto, transit
"I" = income
"N" = Number of travelers
With algebra, the model can be translated to its most widely used form:
formula_3
formula_4
formula_5
formula_6
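As a toy illustration of this form, the sketch below evaluates the auto share for the utility expression given earlier; the coefficient values and attribute levels are purely hypothetical and not estimated from any data set.

```python
import math

# Hypothetical coefficients of the utility expression v_A (illustrative only).
b0, b1, b2, b3, b4 = -0.5, -0.05, -0.04, 0.002, 0.1

# Hypothetical attributes: costs in dollars, times in minutes, income, party size.
c_auto, c_transit = 6.0, 2.5
t_auto, t_transit = 25.0, 40.0
income, n_travelers = 50.0, 1

v_A = b0 + b1 * (c_auto - c_transit) + b2 * (t_auto - t_transit) + b3 * income + b4 * n_travelers
P_auto = math.exp(v_A) / (1.0 + math.exp(v_A))   # share (probability) choosing auto

print(round(P_auto, 3))   # the predicted auto share for these inputs
```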
It is fair to make two conflicting statements about the estimation and use of this model:
The "house of cards" problem largely arises from the utility theory basis of the model specification. Broadly, utility theory assumes that (1) users and suppliers have perfect information about the market; (2) they have deterministic functions (faced with the same options, they will always make the same choices); and (3) switching between alternatives is costless. These assumptions don’t fit very well with what is known about behavior. Furthermore, the aggregation of utility across the population is impossible since there is no universal utility scale.
Suppose an option has a net utility "ujk" (option "k", person "j"). We can imagine that having a systematic part "vjk" that is a function of the characteristics of an object and person "j", plus a random part "ejk", which represents tastes, observational errors and a bunch of other things (it gets murky here). (An object such as a vehicle does not have utility, it is characteristics of a vehicle that have utility.) The introduction of "e" lets us do some aggregation. As noted above, we think of observable utility as being a function:
formula_7
where each variable represents a characteristic of the auto trip. The value "β0" is termed an alternative specific constant. Most modelers say it represents characteristics left out of the equation (e.g., the political correctness of a mode, if I take transit I feel morally righteous, so "β"0 may be negative for the automobile), but it includes whatever is needed to make error terms NID.
Econometric estimation.
Turning now to some technical matters, how do we estimate "v(x)"? Utility ("v(x)") isn’t observable. All we can observe are choices (say, measured as 0 or 1), and we want to talk about probabilities of choices that range from 0 to 1. (If we do a regression on 0s and 1s we might measure for "j" a probability of 1.4 or −0.2 of taking an auto.) Further, the distribution of the error terms wouldn’t have appropriate statistical characteristics.
The MNL approach is to make a maximum likelihood estimate of this functional form. The likelihood function is:
formula_8
we solve for the estimated parameters
formula_9
that max "L"*. This happens when:
formula_10
The log-likelihood is easier to work with, as the products turn to sums:
formula_11
If "γ" takes the value 0, the probability of drawing our sample is 0. If "γ" is 0.1, then the probability of getting our sample is: f(1,1,1,0,1) = f(1)f(1)f(1)f(0)f(1) = 0.1×0.1×0.1×0.9×0.1 = 0.00009 We can compute the probability of obtaining our sample over a range of "γ" – this is our likelihood function. The likelihood function for n independent observations in a logit model is
formula_12
where: "Yi" = 1 or 0 (choosing e.g. auto or not-auto) and Pi = the probability of observing "Y""i" = 1
The log likelihood is thus:
formula_13
In the binomial (two alternative) logit model,
formula_14, so
formula_15
The log-likelihood function is maximized setting the partial derivatives to zero:
formula_16
The above gives the essence of modern MNL choice modeling.
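A bare-bones sketch of the estimation is shown below: synthetic binary choices are generated from known coefficients and then recovered by maximizing the log-likelihood with gradient ascent, using the first-order condition above. It is a toy estimator only, not a substitute for econometric software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample: a constant plus one attribute difference per traveler,
# generated from assumed "true" coefficients (illustrative only).
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.4, -0.8])
P_true = 1.0 / (1.0 + np.exp(-X @ beta_true))
Y = (rng.random(n) < P_true).astype(float)        # observed 0/1 choices

# Maximize ln L* = sum[ Y*v - ln(1 + e^v) ] by gradient ascent;
# the gradient is sum (Y_i - P_i) x_i, the first-order condition above.
beta = np.zeros(2)
for _ in range(200):
    P = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += (4.0 / n) * (X.T @ (Y - P))

print(beta)   # should land close to beta_true
```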
Additional topics.
Topics not touched on include the “red bus, blue bus” problem; the use of nested models (e.g., estimate choice between auto and transit, and then estimate choice between rail and bus transit); how consumers’ surplus measurements may be obtained; and model estimation, goodness of fit, etc. For these topics see a textbook such as Ortuzar and Willumsen (2001).
Returning to roots.
The discussion above is based on the economist’s utility formulation. At the time MNL modeling was developed there was some attention to psychologist's choice work (e.g., Luce’s choice axioms discussed in his Individual Choice Behavior, 1959). It has an analytic side in computational process modeling. Emphasis is on how people think when they make choices or solve problems (see Newell and Simon 1972). Put another way, in contrast to utility theory, it stresses not the choice but the way the choice was made. It provides a conceptual framework for travel choices and agendas of activities involving considerations of long and short term memory, effectors, and other aspects of thought and decision processes. It takes the form of rules dealing with the way information is searched and acted on. Although there is a lot of attention to behavioral analysis in transportation work, the best of modern psychological ideas are only beginning to enter the field. (e.g. Golledge, Kwan and Garling 1984; Garling, Kwan, and Golledge 1994). | [
{
"math_id": 0,
"text": "\n\\frac{c_\\text{transit} }\n{c_\\text{auto} } = R\n"
},
{
"math_id": 1,
"text": "\n\\log \\left( \\frac{P_i }\n{1 - P_i } \\right) = v(x_i )\n"
},
{
"math_id": 2,
"text": "\n\\log \\left( \\frac{P_A }\n{1 - P_A } \\right) = \\beta _0 + \\beta _1 \\left( c_A - c_T \\right) + \\beta _2 \\left( t_A - t_T \\right) + \\beta _3 I + \\beta _4 N = v_A \n"
},
{
"math_id": 3,
"text": "\n\\frac{P_A } {1 - P_A } = e^{v_A }\n"
},
{
"math_id": 4,
"text": "\nP_A = e^{v_A } - P_A e^{v_A } \n"
},
{
"math_id": 5,
"text": "\nP_A \\left( 1 + e^{v_A } \\right) = e^{v_A } \n"
},
{
"math_id": 6,
"text": "\nP_A = \\frac{e^{v_A } } {1 + e^{v_A } } \n"
},
{
"math_id": 7,
"text": "\nv_A = \\beta _0 + \\beta _1 \\left( c_A - c_T \\right) + \\beta _2 \\left( t_A - t_T \\right) + \\beta _3 I + \\beta _4 N\n"
},
{
"math_id": 8,
"text": "\nL^* = \\prod_{n = 1}^N {f\\left( {y_n \\left| {x_n ,\\theta } \\right.} \\right)} \n"
},
{
"math_id": 9,
"text": "\n\\hat \\theta \\,\n"
},
{
"math_id": 10,
"text": "\n\\frac{\\partial L}\n{\\partial \\hat \\theta _N } = 0\n"
},
{
"math_id": 11,
"text": "\n\\ln L^* = \\sum_{n = 1}^N \\ln f\\left( y_n \\left| x_n ,\\theta \\right. \\right)\n"
},
{
"math_id": 12,
"text": "\nL^* = \\prod_{n = 1}^N {P_i ^{Y_i } } \\left( 1 - P_i \\right)^{1 - Y_i } \n"
},
{
"math_id": 13,
"text": "\n\\ell = \\ln L^* = \\sum_{i = 1}^n \\left[ Y_i \\ln P_i + \\left( 1 - Y_i \\right)\\ln \\left( 1 - P_i \\right) \\right]\n"
},
{
"math_id": 14,
"text": "\nP_\\text{auto} = \\frac{e^{v(x_\\text{auto} )} }\n{1 + e^{v(x_\\text{auto} )} }\n"
},
{
"math_id": 15,
"text": "\n\\ell = \\ln L^* = \\sum_{i = 1}^n \\left[ Y_i v(x_\\text{auto} ) - \\ln \\left( 1 + e^{v(x_\\text{auto} )} \\right) \\right]\n"
},
{
"math_id": 16,
"text": "\n\\frac{\\partial \\ell}{\\partial \\beta} = \\sum_{i = 1}^n \\left( Y_i - \\hat P_i \\right) = 0\n"
}
]
| https://en.wikipedia.org/wiki?curid=1346058 |
1346096 | Transfer operator | In mathematics, the transfer operator encodes information about an iterated map and is frequently used to study the behavior of dynamical systems, statistical mechanics, quantum chaos and fractals. In all usual cases, the largest eigenvalue is 1, and the corresponding eigenvector is the invariant measure of the system.
The transfer operator is sometimes called the Ruelle operator, after David Ruelle, or the Perron–Frobenius operator or Ruelle–Perron–Frobenius operator, in reference to the applicability of the Perron–Frobenius theorem to the determination of the eigenvalues of the operator.
Definition.
The iterated function to be studied is a map formula_0 for an arbitrary set formula_1.
The transfer operator is defined as an operator formula_2 acting on the space of functions formula_3 as
formula_4
where formula_5 is an auxiliary valuation function. When formula_6 has a Jacobian determinant formula_7, then formula_8 is usually taken to be formula_9.
The above definition of the transfer operator can be shown to be the point-set limit of the measure-theoretic pushforward of "g": in essence, the transfer operator is the direct image functor in the category of measurable spaces. The left-adjoint of the Perron–Frobenius operator is the Koopman operator or composition operator. The general setting is provided by the Borel functional calculus.
As a general rule, the transfer operator can usually be interpreted as a (left-)shift operator acting on a shift space. The most commonly studied shifts are the subshifts of finite type. The adjoint to the transfer operator can likewise usually be interpreted as a right-shift. Particularly well studied right-shifts include the Jacobi operator and the Hessenberg matrix, both of which generate systems of orthogonal polynomials via a right-shift.
Applications.
Whereas the iteration of a function formula_6 naturally leads to a study of the orbits of points of X under iteration (the study of point dynamics), the transfer operator defines how (smooth) maps evolve under iteration. Thus, transfer operators typically appear in physics problems, such as quantum chaos and statistical mechanics, where attention is focused on the time evolution of smooth functions. In turn, this has medical applications to rational drug design, through the field of molecular dynamics.
It is often the case that the transfer operator is positive, has discrete positive real-valued eigenvalues, with the largest eigenvalue being equal to one. For this reason, the transfer operator is sometimes called the Frobenius–Perron operator.
The eigenfunctions of the transfer operator are usually fractals. When the logarithm of the transfer operator corresponds to a quantum Hamiltonian, the eigenvalues will typically be very closely spaced, and thus even a very narrow and carefully selected ensemble of quantum states will encompass a large number of very different fractal eigenstates with non-zero support over the entire volume. This can be used to explain many results from classical statistical mechanics, including the irreversibility of time and the increase of entropy.
The transfer operator of the Bernoulli map formula_10 is exactly solvable and is a classic example of deterministic chaos; the discrete eigenvalues correspond to the Bernoulli polynomials. This operator also has a continuous spectrum consisting of the Hurwitz zeta function.
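This can be checked numerically. The sketch below applies the transfer operator of the Bernoulli map, with g = 1/|J| = 1/2 and preimages x/2 and (x+1)/2, to the first Bernoulli polynomial B1(x) = x - 1/2 and recovers the eigenvalue 1/2.

```python
import numpy as np

def transfer_bernoulli(f):
    """Transfer operator of b(x) = 2x mod 1, with g = 1/|J| = 1/2.
    The two preimages of x under the map are x/2 and (x+1)/2."""
    return lambda x: 0.5 * (f(x / 2.0) + f((x + 1.0) / 2.0))

B1 = lambda x: x - 0.5              # first Bernoulli polynomial
LB1 = transfer_bernoulli(B1)

x = np.linspace(0.0, 1.0, 11)
print(np.allclose(LB1(x), 0.5 * B1(x)))   # True: B1 is an eigenfunction with eigenvalue 1/2
```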
The transfer operator of the Gauss map formula_11 is called the Gauss–Kuzmin–Wirsing (GKW) operator. The theory of the GKW dates back to a hypothesis by Gauss on continued fractions and is closely related to the Riemann zeta function. | [
{
"math_id": 0,
"text": "f\\colon X\\rightarrow X"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\mathcal{L}"
},
{
"math_id": 3,
"text": "\\{\\Phi\\colon X\\rightarrow \\mathbb{C}\\}"
},
{
"math_id": 4,
"text": "(\\mathcal{L}\\Phi)(x) = \\sum_{y\\,\\in\\, f^{-1}(x)} g(y) \\Phi(y)"
},
{
"math_id": 5,
"text": "g\\colon X\\rightarrow\\mathbb{C}"
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "|J|"
},
{
"math_id": 8,
"text": "g"
},
{
"math_id": 9,
"text": "g=1/|J|"
},
{
"math_id": 10,
"text": "b(x)=2x-\\lfloor 2x\\rfloor"
},
{
"math_id": 11,
"text": "h(x)=1/x-\\lfloor 1/x \\rfloor"
}
]
| https://en.wikipedia.org/wiki?curid=1346096 |
1346182 | Isochron dating | Technique of radiometric dating
Isochron dating is a common technique of radiometric dating and is applied to date certain events, such as crystallization, metamorphism, shock events, and differentiation of precursor melts, in the history of rocks. Isochron dating can be further separated into "mineral isochron dating" and "whole rock isochron dating"; both techniques are applied frequently to date terrestrial and also extraterrestrial rocks (meteorites). The advantage of isochron dating as compared to simple radiometric dating techniques is that no assumptions are needed about the initial amount of the daughter nuclide in the radioactive decay sequence. Indeed, the initial amount of the daughter product can be determined using isochron dating. This technique can be applied if the daughter element has at least one stable isotope other than the daughter isotope into which the parent nuclide decays.
Basis for method.
All forms of isochron dating assume that the source of the rock or rocks contained unknown amounts of both radiogenic and non-radiogenic isotopes of the daughter element, along with some amount of the parent nuclide. Thus, at the moment of crystallization, the ratio of the concentration of the radiogenic isotope of the daughter element to that of the non-radiogenic isotope is some value independent of the concentration of the parent. As time goes on, some amount of the parent decays into the radiogenic isotope of the daughter, increasing the ratio of the concentration of the radiogenic isotope to that of the non-radiogenic isotope of the daughter element. The greater the initial concentration of the parent, the greater the concentration of the radiogenic daughter isotope will be at some particular time. Thus, the ratio of the radiogenic to non-radiogenic isotopes of the daughter element will become larger with time, while the ratio of parent to daughter will become smaller. For rocks that start out with a small concentration of the parent, the radiogenic/non-radiogenic ratio of the daughter element will not change as quickly as it will with rocks that start out with a large concentration of the parent.
Assumptions.
An isochron diagram will only give a valid age if all samples are "cogenetic" (that is, the rocks are from the same unit, the minerals are from the same rock, etc.), all samples have "the same initial isotopic composition" (at t0), and the system has remained closed.
Isochron plots.
The mathematical expression from which the isochron is derived is
formula_0
where
"t" is age of the sample,
"D"* is number of atoms of the radiogenic daughter isotope in the sample,
"D"0 is number of atoms of the daughter isotope in the original or initial composition,
n is number of atoms of the parent isotope in the sample at the present,
"λ" is the decay constant of the parent isotope, equal to the inverse of the radioactive half-life of the parent isotope times the natural logarithm of 2, and
("e"λ"t"-1) is the slope of the isochron which defines the age of the system.
Because the isotopes are measured by mass spectrometry, ratios are used instead of absolute concentrations since mass spectrometers usually measure the former rather than the latter. (See the section on isotope ratio mass spectrometry.) As such, isochrons are typically defined by the following equation, which normalizes the concentration of parent and radiogenic daughter isotopes to the concentration of a non-radiogenic isotope of the daughter element that is assumed to be constant:
formula_1
where
formula_2 is the concentration of the non-radiogenic isotope of the daughter element (assumed constant),
formula_3 is the present concentration of the radiogenic daughter isotope,
formula_4 is the initial concentration of the radiogenic daughter isotope, and
formula_5 is the present concentration of the parent isotope that has decayed over time formula_6.
To perform dating, a rock is crushed to a fine powder, and minerals are separated by various physical and magnetic means. Each mineral has different ratios between parent and daughter concentrations. For each mineral, the ratios are related by the following equation:
formula_7 (1)
where
formula_8 is the initial concentration of the parent isotope, and
formula_9 is the total amount of the parent isotope which has decayed by time formula_6.
The proof of (1) amounts to simple algebraic manipulation. It is useful in this form because it exhibits the relationship between quantities that actually exist at present. To wit, formula_10, formula_11 and formula_2 respectively correspond to the concentrations of parent, daughter and non-radiogenic isotopes found in the rock at the time of measurement.
The ratios formula_12or formula_13 (relative concentration of present daughter and non-radiogenic isotopes) and formula_14 or formula_15 (relative concentration of present parent and non-radiogenic isotope) are measured by mass spectrometry and plotted against each other in a three-isotope plot known as an "isochron plot".
If all data points lie on a straight line, this line is called an isochron. The better the fit of the data points to a line, the more reliable the resulting age estimate. Since the ratio of the daughter and non-radiogenic isotopes is proportional to the ratio of the parent and non-radiogenic isotopes, the slope of the isochron gets steeper with time. The change in slope from initial conditions—assuming an initial isochron slope of zero (a horizontal isochron) at the point of intersection (intercept) of the isochron with the y-axis—to the current computed slope gives the age of the rock. The slope of the isochron, formula_16 or formula_17, represents the ratio of daughter to parent as used in standard radiometric dating and can be derived to calculate the age of the sample at time "t". The y-intercept of the isochron line yields the initial radiogenic daughter ratio, formula_18.
Whole rock isochron dating uses the same ideas but instead of different minerals obtained from one rock uses different types of rocks that are derived from a common reservoir; e.g. the same precursor melt. It is possible to date the differentiation of the precursor melt which then cooled and crystallized into the different types of rocks.
One of the best known isotopic systems for isochron dating is the rubidium–strontium system. Other systems that are used for isochron dating include samarium–neodymium and uranium–lead. Some isotopic systems based on short-lived extinct radionuclides such as 53Mn, 26Al, 129I, 60Fe and others are used for isochron dating of events in the early history of the Solar System. However, methods using extinct radionuclides give only relative ages and have to be calibrated with radiometric dating techniques based on long-lived radionuclides like Pb-Pb dating to give absolute ages.
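A minimal numerical sketch (synthetic rubidium–strontium data with an assumed decay constant of about 1.42e-11 per year) shows the procedure: a straight line is fitted to the measured ratios, the age is recovered from the slope, and the intercept returns the initial daughter ratio.

```python
import numpy as np

lam = 1.42e-11          # assumed 87Rb decay constant in 1/yr
t_true = 1.0e9          # age (1 Gyr) used to generate the synthetic data
initial_ratio = 0.705   # assumed initial 87Sr/86Sr

# Synthetic minerals with different present-day 87Rb/86Sr ratios.
parent_ratio = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
daughter_ratio = initial_ratio + parent_ratio * (np.exp(lam * t_true) - 1.0)

slope, intercept = np.polyfit(parent_ratio, daughter_ratio, 1)
age = np.log(slope + 1.0) / lam      # invert slope = e^(lambda*t) - 1

print(age)        # ~1.0e9 years
print(intercept)  # ~0.705, the initial daughter ratio
```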
Application.
Isochron dating is useful in the determination of the age of igneous rocks, which have their initial origin in the cooling of liquid magma. It is also useful to determine the time of metamorphism, shock events (such as the consequence of an asteroid impact) and other events depending on the behaviour of the particular isotopic systems under such events. It can be used to determine the age of grains in sedimentary rocks and understand their origin by a method known as a provenance study.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\mathrm{D*}} = {\\mathrm{D}}_{\\mathrm{0}} + \\mathrm{n} \\cdot (e^{\\lambda t}-1),"
},
{
"math_id": 1,
"text": "\\left(\\frac{\\mathrm{D*}}{\\mathrm{D}_{ref}}\\right)_{\\mathrm{present}} = \\left(\\frac{\\mathrm{D_0}}{\\mathrm{D}_{ref}}\\right)_{\\mathrm{initial}} + \\left(\\frac{\\mathrm{P_t}}{\\mathrm{D}_{ref}}\\right) \\cdot (e^{\\lambda t}-1),"
},
{
"math_id": 2,
"text": "D_{ref}"
},
{
"math_id": 3,
"text": "D*"
},
{
"math_id": 4,
"text": "D_0"
},
{
"math_id": 5,
"text": "P_t"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "{\\mathrm{D}_0 + \\Delta{P}_t \\over D_{ref} } = \n{\\Delta{P}_t \\over P_i-\\Delta{P}_t } \\left ( { P_i-\\Delta{P}_t \\over D_{ref} }\\right ) + {D_0 \\over D_{ref}}"
},
{
"math_id": 8,
"text": "P_i"
},
{
"math_id": 9,
"text": "\\Delta{P}_t"
},
{
"math_id": 10,
"text": "P_i-\\Delta{P}_t"
},
{
"math_id": 11,
"text": "D_0+\\Delta{P}_t"
},
{
"math_id": 12,
"text": "\\frac{\\mathrm{D*}}{\\mathrm{D}_{ref}}"
},
{
"math_id": 13,
"text": "D_0+\\Delta{P}_t \\over D_{ref}"
},
{
"math_id": 14,
"text": "\\frac{\\mathrm{P_t}}{\\mathrm{D}_{ref}}"
},
{
"math_id": 15,
"text": "{ P_i-\\Delta{P}_t \\over D_{ref} }"
},
{
"math_id": 16,
"text": "(e^{\\lambda t}-1)"
},
{
"math_id": 17,
"text": "\\Delta{P}_t \\over P-\\Delta{P}_t"
},
{
"math_id": 18,
"text": "\\frac{\\mathrm{D_0}}{\\mathrm{D}_{ref}}"
}
]
| https://en.wikipedia.org/wiki?curid=1346182 |
13461936 | Timoshenko–Ehrenfest beam theory | Model of shear deformation and bending effects
The Timoshenko–Ehrenfest beam theory was developed by Stephen Timoshenko and Paul Ehrenfest early in the 20th century. The model takes into account shear deformation and rotational bending effects, making it suitable for describing the behaviour of thick beams, sandwich composite beams, or beams subject to high-frequency excitation when the wavelength approaches the thickness of the beam. The resulting equation is of 4th order but, unlike Euler–Bernoulli beam theory, there is also a second-order partial derivative present. Physically, taking into account the added mechanisms of deformation effectively lowers the stiffness of the beam, while the result is a larger deflection under a static load and lower predicted eigenfrequencies for a given set of boundary conditions. The latter effect is more noticeable for higher frequencies as the wavelength becomes shorter (in principle comparable to the height of the beam or shorter), and thus the distance between opposing shear forces decreases.
The rotary inertia effect was introduced by Bresse and Rayleigh.
If the shear modulus of the beam material approaches infinity—and thus the beam becomes rigid in shear—and if rotational inertia effects are neglected, Timoshenko beam theory converges towards Euler–Bernoulli beam theory.
Quasistatic Timoshenko beam.
In static Timoshenko beam theory without axial effects, the displacements of the beam are assumed to be given by
formula_0
where formula_1 are the coordinates of a point in the beam, formula_2 are the components of the displacement vector in the three coordinate directions, formula_3 is the angle of rotation of the normal to the mid-surface of the beam, and formula_4 is the displacement of the mid-surface in the formula_5-direction.
The governing equations are the following coupled system of ordinary differential equations:
formula_6
The Timoshenko beam theory for the static case is equivalent to the Euler–Bernoulli theory when the last term above is neglected, an approximation that is valid when
formula_7
where formula_8 is the length of the beam, formula_9 is the cross-sectional area, formula_10 is the elastic modulus, formula_11 is the shear modulus, formula_12 is the second moment of area, formula_13 is the Timoshenko shear coefficient (which depends on the geometry; normally formula_14 for a rectangular section), and formula_15 is a distributed load (force per length).
Combining the two equations gives, for a homogeneous beam of constant cross-section,
formula_16
The bending moment formula_17 and the shear force formula_18 in the beam are related to the displacement formula_4 and the rotation formula_3. These relations, for a linear elastic Timoshenko beam, are:
formula_19
Boundary conditions.
The two equations that describe the deformation of a Timoshenko beam have to be augmented with boundary conditions if they are to be solved. Four boundary conditions are needed for the problem to be well-posed. Typical boundary conditions are: at a clamped end, the deflection "w" and the rotation formula_3 vanish; at a simply supported end, the deflection "w" and the bending moment formula_17 vanish; and at a free end, the bending moment formula_17 and the shear force formula_18 vanish.
Strain energy of a Timoshenko beam.
The strain energy of a Timoshenko beam is expressed as a sum of strain energy due to bending and shear. Both these components are quadratic in their variables. The strain energy function of a Timoshenko beam can be written as,
formula_21
Example: Cantilever beam.
For a cantilever beam, one boundary is clamped while the other is free. Let us use a right handed coordinate system where the formula_22 direction is positive towards right and the formula_5 direction is positive upward. Following normal convention, we assume that positive forces act in the positive directions of the formula_22 and formula_5 axes and positive moments act in the clockwise direction. We also assume that the sign convention of the stress resultants (formula_17 and formula_18) is such that positive bending moments compress the material at the bottom of the beam (lower formula_5 coordinates) and positive shear forces rotate the beam in a counterclockwise direction.
Let us assume that the clamped end is at formula_23 and the free end is at formula_24. If a point load formula_25 is applied to the free end in the positive formula_5 direction, a free body diagram of the beam gives us
formula_26
and
formula_27
Therefore, from the expressions for the bending moment and shear force, we have
formula_28
Integration of the first equation, and application of the boundary condition formula_29 at formula_30, leads to
formula_31
The second equation can then be written as
formula_32
Integration and application of the boundary condition formula_33 at formula_30 gives
formula_34
The axial stress is given by
formula_35
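A short numerical sketch of the result (with assumed values for a rectangular steel section; they are illustrative only) compares the Timoshenko tip deflection w(0) = PL/(κAG) + PL^3/(3EI) from the expression above with the Euler–Bernoulli value PL^3/(3EI):

```python
# Tip deflection of the end-loaded cantilever above (assumed example values).
E, nu = 210e9, 0.3                  # steel: Young's modulus (Pa), Poisson's ratio
G = E / (2 * (1 + nu))              # shear modulus
b, h, L = 0.05, 0.20, 1.0           # section width, height and beam length (m)
A, I = b * h, b * h**3 / 12         # area and second moment of area
kappa = 5.0 / 6.0                   # shear coefficient for a rectangular section
P = 10e3                            # end load (N)

w_shear = P * L / (kappa * A * G)   # shear contribution, from w(x) at x = 0
w_bend = P * L**3 / (3 * E * I)     # bending contribution (the Euler–Bernoulli result)

print(w_bend, w_shear, w_bend + w_shear)    # Timoshenko adds the shear term
print(3 * E * I / (kappa * L**2 * A * G))   # validity ratio; ~0.03 here, so shear adds little
```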
Dynamic Timoshenko beam.
In Timoshenko beam theory without axial effects, the displacements of the beam are assumed to be given by
formula_36
where formula_1 are the coordinates of a point in the beam, formula_2 are the components of the displacement vector in the three coordinate directions, formula_3 is the angle of rotation of the normal to the mid-surface of the beam, and formula_4 is the displacement of the mid-surface in the formula_5-direction.
Starting from the above assumption, the Timoshenko beam theory, allowing for vibrations, may be described with the coupled linear partial differential equations:
formula_37
formula_38
where the dependent variables are formula_39, the translational displacement of the beam, and formula_40, the angular displacement. Note that unlike the Euler–Bernoulli theory, the angular deflection is another variable and not approximated by the slope of the deflection. Also, formula_41 is the density of the beam material, formula_42 is the mass per unit length, and formula_43 is the rotary inertia per unit length.
These parameters are not necessarily constants.
For a linear elastic, isotropic, homogeneous beam of constant cross-section these two equations can be combined to give
formula_44
However, it can easily be shown that this equation is incorrect. Consider the case where q is constant and does not depend on x or t; combined with the presence of a small damping, all time derivatives will go to zero when t goes to infinity. The shear terms are not present in this situation, resulting in the Euler–Bernoulli beam theory, where shear deformation is neglected.
The Timoshenko equation predicts a critical frequency
formula_45
For normal modes the Timoshenko equation can be solved. Being a fourth order equation, there are four independent solutions, two oscillatory and two evanescent for frequencies below formula_46.
For frequencies larger than formula_46 all solutions are oscillatory and, as a consequence, a second spectrum appears.
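For orientation, the sketch below evaluates this critical frequency for an assumed rectangular steel section (the same kind of illustrative values as in the static example above):

```python
import math

rho, E, nu = 7850.0, 210e9, 0.3        # assumed steel density and elastic constants
G = E / (2 * (1 + nu))
b, h = 0.05, 0.20                      # rectangular cross-section (m)
A, I = b * h, b * h**3 / 12
kappa = 5.0 / 6.0

omega_C = math.sqrt(kappa * G * A / (rho * I))   # critical angular frequency (rad/s)
f_C = omega_C / (2 * math.pi)                    # critical frequency (Hz)

print(f_C)   # above this frequency all four solutions are oscillatory
```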
Axial effects.
If the displacements of the beam are given by
formula_47
where formula_48 is an additional displacement in the formula_22-direction, then the governing equations of a Timoshenko beam take the form
formula_49
where formula_50 and formula_51 is an externally applied axial force. Any external axial force is balanced by the stress resultant
formula_52
where formula_53 is the axial stress and the thickness of the beam has been assumed to be formula_54.
The combined beam equation with axial force effects included is
formula_55
Damping.
If, in addition to axial forces, we assume a damping force that is proportional to the velocity with the form
formula_56
the coupled governing equations for a Timoshenko beam take the form
formula_57
formula_58
and the combined equation becomes
formula_59
A caveat to this Ansatz damping force (resembling viscosity) is that, whereas viscosity leads to a frequency-dependent and amplitude-independent damping rate of beam oscillations, the empirically measured damping rates are frequency-insensitive, but depend on the amplitude of beam deflection.
Shear coefficient.
Determining the shear coefficient is not straightforward (nor are the determined values widely accepted, i.e. there's more than one answer); generally it must satisfy:
formula_60 .
The shear coefficient depends on Poisson's ratio. Attempts to provide precise expressions have been made by many scientists, including Stephen Timoshenko, Raymond D. Mindlin, G. R. Cowper, N. G. Stephen, J. R. Hutchinson etc. (see also the derivation of the Timoshenko beam theory as a refined beam theory based on the variational-asymptotic method in the book by Khanh C. Le, leading to different shear coefficients in the static and dynamic cases). In engineering practice, the expressions by Stephen Timoshenko are sufficient in most cases. In 1975, Kaneko published an excellent review of studies of the shear coefficient. More recently, new experimental data have shown that the shear coefficient is underestimated.
Corrective shear coefficients for homogeneous isotropic beam according to Cowper - selection.
where formula_61 is Poisson's ratio.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n u_x(x,y,z) = -z~\\varphi(x) ~;~~ u_y(x,y,z) = 0 ~;~~ u_z(x,y) = w(x)\n"
},
{
"math_id": 1,
"text": "(x,y,z)"
},
{
"math_id": 2,
"text": "u_x, u_y, u_z"
},
{
"math_id": 3,
"text": "\\varphi"
},
{
"math_id": 4,
"text": "w"
},
{
"math_id": 5,
"text": "z"
},
{
"math_id": 6,
"text": "\n \\begin{align}\n & \\frac{\\mathrm{d}^2}{\\mathrm{d} x^2}\\left(EI\\frac{\\mathrm{d} \\varphi}{\\mathrm{d} x}\\right) = q(x) \\\\\n & \\frac{\\mathrm{d} w}{\\mathrm{d} x} = \\varphi - \\frac{1}{\\kappa AG} \\frac{\\mathrm{d}}{\\mathrm{d} x}\\left(EI\\frac{\\mathrm{d} \\varphi}{\\mathrm{d} x}\\right).\n \\end{align}\n"
},
{
"math_id": 7,
"text": "\n\\frac{3EI}{\\kappa L^2 A G} \\ll 1\n"
},
{
"math_id": 8,
"text": "L"
},
{
"math_id": 9,
"text": "A"
},
{
"math_id": 10,
"text": "E"
},
{
"math_id": 11,
"text": "G"
},
{
"math_id": 12,
"text": "I"
},
{
"math_id": 13,
"text": "\\kappa"
},
{
"math_id": 14,
"text": "\\kappa = 5/6"
},
{
"math_id": 15,
"text": "q(x)"
},
{
"math_id": 16,
"text": "\n EI~\\cfrac{\\mathrm{d}^4 w}{\\mathrm{d} x^4} = q(x) - \\cfrac{EI}{\\kappa A G}~\\cfrac{\\mathrm{d}^2 q}{\\mathrm{d} x^2}\n "
},
{
"math_id": 17,
"text": "M_{xx}"
},
{
"math_id": 18,
"text": "Q_x"
},
{
"math_id": 19,
"text": "\n M_{xx} = -EI~\\frac{\\partial \\varphi}{\\partial x} \\quad \\text{and} \\quad\n Q_{x} = \\kappa~AG~\\left(-\\varphi + \\frac{\\partial w}{\\partial x}\\right) \\,.\n"
},
{
"math_id": 20,
"text": "q(x,t)"
},
{
"math_id": 21,
"text": "\nW=\\int_{[0,L]} \\frac{EI}{2}\\left(\\frac{d \\varphi}{d x}\\right)^2+\\frac{kGA}{2}\\left(\\varphi-\\frac{d w}{d x}\\right)^2\n"
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "x=L"
},
{
"math_id": 24,
"text": "x=0"
},
{
"math_id": 25,
"text": "P"
},
{
"math_id": 26,
"text": "\n -Px - M_{xx} = 0 \\implies M_{xx} = -Px\n "
},
{
"math_id": 27,
"text": " P + Q_x = 0 \\implies Q_x = -P\\,.\n "
},
{
"math_id": 28,
"text": "\n Px = EI\\,\\frac{d\\varphi}{dx} \\qquad \\text{and} \\qquad -P = \\kappa AG\\left(-\\varphi + \\frac{dw}{dx}\\right) \\,.\n "
},
{
"math_id": 29,
"text": "\\varphi = 0"
},
{
"math_id": 30,
"text": "x = L"
},
{
"math_id": 31,
"text": "\n \\varphi(x) = -\\frac{P}{2EI}\\,(L^2-x^2) \\,.\n "
},
{
"math_id": 32,
"text": "\n \\frac{dw}{dx} = -\\frac{P}{\\kappa AG} - \\frac{P}{2EI}\\,(L^2-x^2)\\,.\n "
},
{
"math_id": 33,
"text": "w = 0"
},
{
"math_id": 34,
"text": "\n w(x) = \\frac{P(L-x)}{\\kappa AG} - \\frac{Px}{2EI}\\,\\left(L^2-\\frac{x^2}{3}\\right) + \\frac{PL^3}{3EI} \\,.\n "
},
{
"math_id": 35,
"text": "\n \\sigma_{xx}(x,z) = E\\,\\varepsilon_{xx} = -E\\,z\\,\\frac{d\\varphi}{dx} = -\\frac{Pxz}{I} = \\frac{M_{xx}z}{I} \\,.\n "
},
{
"math_id": 36,
"text": "\n u_x(x,y,z,t) = -z~\\varphi(x,t) ~;~~ u_y(x,y,z,t) = 0 ~;~~ u_z(x,y,z,t) = w(x,t)\n"
},
{
"math_id": 37,
"text": "\n\\rho A\\frac{\\partial^{2}w}{\\partial t^{2}} - q(x,t) = \\frac{\\partial}{\\partial x}\\left[ \\kappa AG \\left(\\frac{\\partial w}{\\partial x}-\\varphi\\right)\\right]\n"
},
{
"math_id": 38,
"text": "\n\\rho I\\frac{\\partial^{2}\\varphi}{\\partial t^{2}} = \\frac{\\partial}{\\partial x}\\left(EI\\frac{\\partial \\varphi}{\\partial x}\\right)+\\kappa AG\\left(\\frac{\\partial w}{\\partial x}-\\varphi\\right)\n"
},
{
"math_id": 39,
"text": "w(x,t)"
},
{
"math_id": 40,
"text": "\\varphi(x,t)"
},
{
"math_id": 41,
"text": "\\rho"
},
{
"math_id": 42,
"text": "m := \\rho A"
},
{
"math_id": 43,
"text": "J := \\rho I"
},
{
"math_id": 44,
"text": "\n EI~\\cfrac{\\partial^4 w}{\\partial x^4} + m~\\cfrac{\\partial^2 w}{\\partial t^2} - \\left(J + \\cfrac{E I m}{\\kappa A G}\\right)\\cfrac{\\partial^4 w}{\\partial x^2~\\partial t^2} + \\cfrac{m J}{\\kappa A G}~\\cfrac{\\partial^4 w}{\\partial t^4} = q(x,t) + \\cfrac{J}{\\kappa A G}~\\cfrac{\\partial^2 q}{\\partial t^2} - \\cfrac{EI}{\\kappa A G}~\\cfrac{\\partial^2 q}{\\partial x^2}\n "
},
{
"math_id": 45,
"text": "\n \\omega_C=2 \\pi f_c=\\sqrt{\\frac{\\kappa GA}{\\rho I}}. \n"
},
{
"math_id": 46,
"text": "f_c"
},
{
"math_id": 47,
"text": "\n u_x(x,y,z,t) = u_0(x,t)-z~\\varphi(x,t) ~;~~ u_y(x,y,z,t) = 0 ~;~~ u_z(x,y,z,t) = w(x,t)\n"
},
{
"math_id": 48,
"text": "u_0"
},
{
"math_id": 49,
"text": "\n \\begin{align}\nm \\frac{\\partial^{2}w}{\\partial t^{2}} & = \\frac{\\partial}{\\partial x}\\left[ \\kappa AG \\left(\\frac{\\partial w}{\\partial x}-\\varphi\\right)\\right] + q(x,t) \\\\\nJ \\frac{\\partial^{2}\\varphi}{\\partial t^{2}} & = N(x,t)~\\frac{\\partial w}{\\partial x} + \\frac{\\partial}{\\partial x}\\left(EI\\frac{\\partial \\varphi}{\\partial x}\\right)+\\kappa AG\\left(\\frac{\\partial w}{\\partial x}-\\varphi\\right)\n \\end{align}\n"
},
{
"math_id": 50,
"text": "J = \\rho I"
},
{
"math_id": 51,
"text": "N(x,t)"
},
{
"math_id": 52,
"text": "\n N_{xx}(x,t) = \\int_{-h}^{h} \\sigma_{xx}~dz\n "
},
{
"math_id": 53,
"text": "\\sigma_{xx}"
},
{
"math_id": 54,
"text": "2h"
},
{
"math_id": 55,
"text": "\n EI~\\cfrac{\\partial^4 w}{\\partial x^4} + N~\\cfrac{\\partial^2 w}{\\partial x^2} + m~\\frac{\\partial^2 w}{\\partial t^2} - \\left(J+\\cfrac{mEI}{\\kappa AG}\\right)~\\cfrac{\\partial^4 w}{\\partial x^2 \\partial t^2} + \\cfrac{mJ}{\\kappa AG}~\\cfrac{\\partial^4 w}{\\partial t^4} = q + \\cfrac{J}{\\kappa AG}~\\frac{\\partial^2 q}{\\partial t^2} - \\cfrac{EI}{\\kappa A G}~\\frac{\\partial^2 q}{\\partial x^2}\n"
},
{
"math_id": 56,
"text": "\n \\eta(x)~\\cfrac{\\partial w}{\\partial t}\n "
},
{
"math_id": 57,
"text": "\nm \\frac{\\partial^{2}w}{\\partial t^{2}} + \\eta(x)~\\cfrac{\\partial w}{\\partial t} = \\frac{\\partial}{\\partial x}\\left[ \\kappa AG \\left(\\frac{\\partial w}{\\partial x}-\\varphi\\right)\\right] + q(x,t)\n"
},
{
"math_id": 58,
"text": "\nJ \\frac{\\partial^{2}\\varphi}{\\partial t^{2}} = N\\frac{\\partial w}{\\partial x} + \\frac{\\partial}{\\partial x}\\left(EI\\frac{\\partial \\varphi}{\\partial x}\\right)+\\kappa AG\\left(\\frac{\\partial w}{\\partial x}-\\varphi\\right)\n"
},
{
"math_id": 59,
"text": "\n \\begin{align}\n EI~\\cfrac{\\partial^4 w}{\\partial x^4} & + N~\\cfrac{\\partial^2 w}{\\partial x^2} + m~\\frac{\\partial^2 w}{\\partial t^2} - \\left(J+\\cfrac{mEI}{\\kappa AG}\\right)~\\cfrac{\\partial^4 w}{\\partial x^2 \\partial t^2} + \\cfrac{mJ}{\\kappa AG}~\\cfrac{\\partial^4 w}{\\partial t^4} + \\cfrac{J \\eta(x)}{\\kappa AG}~\\cfrac{\\partial^3 w}{\\partial t^3} \\\\\n & -\\cfrac{EI}{\\kappa AG}~\\cfrac{\\partial^2}{\\partial x^2}\\left(\\eta(x)\\cfrac{\\partial w}{\\partial t}\\right) + \\eta(x)\\cfrac{\\partial w}{\\partial t} = q + \\cfrac{J}{\\kappa AG}~\\frac{\\partial^2 q}{\\partial t^2} - \\cfrac{EI}{\\kappa A G}~\\frac{\\partial^2 q}{\\partial x^2}\n \\end{align}\n"
},
{
"math_id": 60,
"text": "\\int_A \\tau dA = \\kappa A G (\\varphi - \\frac{\\partial w}{\\partial x})"
},
{
"math_id": 61,
"text": "\\nu"
}
]
| https://en.wikipedia.org/wiki?curid=13461936 |
13463690 | Contraction principle (large deviations theory) | In mathematics — specifically, in large deviations theory — the contraction principle is a theorem that states how a large deviation principle on one space "pushes forward" (via the pushforward of a probability measure) to a large deviation principle on another space "via" a continuous function.
Statement.
Let "X" and "Y" be Hausdorff topological spaces and let ("μ""ε")"ε">0 be a family of probability measures on "X" that satisfies the large deviation principle with rate function "I" : "X" → [0, +∞]. Let "T" : "X" → "Y" be a continuous function, and let "ν""ε" = "T"∗("μ""ε") be the push-forward measure of "μ""ε" by "T", i.e., for each measurable set/event "E" ⊆ "Y", "ν""ε"("E") = "μ""ε"("T"−1("E")). Let
formula_0
with the convention that the infimum of "I" over the empty set ∅ is +∞. Then "J" is a rate function on "Y", and ("ν""ε")"ε">0 satisfies the large deviation principle on "Y" with rate function "J".
{
"math_id": 0,
"text": "J(y) := \\inf \\{ I(x) \\mid x \\in X \\text{ and } T(x) = y \\},"
}
]
| https://en.wikipedia.org/wiki?curid=13463690 |
13463844 | Richardson's theorem | Undecidability of equality of real numbers
In mathematics, Richardson's theorem establishes the undecidability of the equality of real numbers defined by expressions involving integers, π, formula_0 and exponential and sine functions. It was proved in 1968 by the mathematician and computer scientist Daniel Richardson of the University of Bath.
Specifically, the class of expressions for which the theorem holds is that generated by rational numbers, the number π, the number ln 2, the variable "x", the operations of addition, subtraction, multiplication, composition, and the sin, exp, and abs functions.
For some classes of expressions generated by other primitives than in Richardson's theorem, there exist algorithms that can determine whether an expression is zero.
Statement of the theorem.
Richardson's theorem can be stated as follows:
Let "E" be a set of expressions that represent formula_1 functions. Suppose that "E" includes these expressions:
Suppose "E" is also closed under a few standard operations. Specifically, suppose that if "A" and "B" are in "E", then all of the following are also in "E":
Then the following decision problems are unsolvable:
Extensions.
After Hilbert's tenth problem was solved in 1970, B. F. Caviness observed that the use of "ex" and ln 2 could be removed.
Wang later noted that under the same assumptions under which the question of whether there was "x" with "A"("x") < 0 was insolvable, the question of whether there was "x" with "A"("x") = 0 was also insolvable.
Miklós Laczkovich removed also the need for π and reduced the use of composition. In particular, given an expression "A"("x") in the ring generated by the integers, "x", sin "xn", and sin("x" sin "xn") (for "n" ranging over positive integers), both the question of whether "A"("x") > 0 for some "x" and whether "A"("x") = 0 for some "x" are unsolvable.
By contrast, the Tarski–Seidenberg theorem says that the first-order theory of the real field is decidable, so it is not possible to remove the sine function entirely.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ln 2,"
},
{
"math_id": 1,
"text": "\\R\\to\\R"
},
{
"math_id": 2,
"text": "e^{ax^2}"
}
]
| https://en.wikipedia.org/wiki?curid=13463844 |
13464844 | Bell Laboratories Layered Space-Time | Bell Laboratories Layered Space-Time (BLAST) is a transceiver architecture for offering spatial multiplexing over multiple-antenna wireless communication systems. Such systems have multiple antennas at both the transmitter and the receiver in an effort to exploit the many different paths between the two in a highly-scattering wireless environment. BLAST was developed by Gerard Foschini at Lucent Technologies' Bell Laboratories (now Nokia Bell Labs). By careful allocation of the data to be transmitted to the transmitting antennas, multiple data streams can be transmitted simultaneously within a single frequency band — the data capacity of the system then grows directly in line with the number of antennas (subject to certain assumptions). This represents a significant advance on current, single-antenna systems.
V-BLAST.
V-BLAST (Vertical-Bell Laboratories Layered Space-Time) is a detection algorithm for the receiver of multi-antenna MIMO systems. It was first proposed in 1996 at Bell Laboratories in New Jersey, United States, by Gerard J. Foschini. It works by successively cancelling the interference contributed by each transmitter.
Its principle is quite simple: first detect the most powerful signal, then regenerate that user's contribution to the received signal from this decision. The regenerated signal is subtracted from the received signal and, with this new signal, the detector moves on to the next most powerful user, since the first has already been removed, and so forth. The result at each stage is a received vector containing less interference.
The complete detection algorithm can be summarized recursively as follows:
Initialize:
formula_0
Recursive:
formula_1
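A minimal Python sketch of this ordered successive-interference-cancellation idea is given below. It assumes BPSK symbols and the MMSE nulling matrix of the formulas above; the toy channel, the noise level, and the function name are illustrative assumptions rather than part of the original algorithm description.

import numpy as np

def vblast_detect(H, r, sigma2):
    # Detect BPSK symbols by ordered successive interference cancellation.
    # H: (Nr x Nt) complex channel matrix, r: received vector, sigma2: noise variance.
    Nr, Nt = H.shape
    active = list(range(Nt))              # transmit antennas not yet detected
    s_hat = np.zeros(Nt)
    while active:
        Ha = H[:, active]
        # MMSE nulling matrix G = (Ha^H Ha + sigma2 I)^-1 Ha^H
        G = np.linalg.solve(Ha.conj().T @ Ha + sigma2 * np.eye(len(active)), Ha.conj().T)
        k = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))   # row of G with the smallest norm
        y = G[k] @ r                      # nulling:  y_k = w_k^T r_i
        s = np.sign(y.real)               # BPSK slicing (hard decision)
        j = active[k]
        s_hat[j] = s
        r = r - s * H[:, j]               # cancel the detected stream from the received vector
        active.pop(k)
    return s_hat

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))) / np.sqrt(2)
s = rng.choice([-1.0, 1.0], size=3)
r = H @ s + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(vblast_detect(H, r, sigma2=0.05), "sent:", s)   # at this low noise level the symbols should be recovered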
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\ni&\\leftarrow 1 \\\\\nr_1&=r \\\\\nG_1&=(H^HH+\\sigma ^2I_{N_t})^{-1}H^H \\\\\nk_1&=\\arg \\min \\left \\| (G_1)_j \\right \\|^2 \\\\\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\n\\begin{align}\nw_k &= (G_i)_{ki} \\\\\ny_k&=w^T_k\\times r_i \\\\\n\\hat{s}_k&=sign(y_k) \\\\\nr_{i+1}&=r_i-\\hat{s}_k(H)_{ki} \\\\\nG_{i+1}&=((H^H_iH_i)+\\sigma^2I_{Nt})^{-1}H^H_i \\\\\nk_{i+1}&=\\arg \\min \\left\\| (G_{i+1})_j \\right\\|^2 \\\\\ni &\\leftarrow i+1\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=13464844 |
1346524 | Route assignment | Transportation networks
Route assignment, route choice, or traffic assignment concerns the selection of routes (alternatively called paths) between origins and destinations in transportation networks. It is the fourth step in the conventional transportation forecasting model, following trip generation, trip distribution, and mode choice. The zonal interchange analysis of trip distribution provides origin-destination trip tables. Mode choice analysis tells which travelers will use which mode. To determine facility needs and costs and benefits, we need to know the number of travelers on each route and link of the network (a route is simply a chain of links between an origin and destination). We need to undertake traffic (or trip) assignment. Suppose there is a network of highways and transit systems and a proposed addition. We first want to know the present pattern of traffic delay and then what would happen if the addition were made.
General Approaches.
Long-standing techniques.
The problem of estimating how many users are on each route is long standing. Planners started looking hard at it as freeways and expressways began to be developed. The freeway offered a superior level of service over the local street system, and diverted traffic from the local system. At first, diversion was the technique. Ratios of travel time were used, tempered by considerations of costs, comfort, and level of service.
The Chicago Area Transportation Study (CATS) researchers developed diversion curves for freeways versus local streets. There was much work in California also, for California had early experiences with freeway planning. In addition to work of a diversion sort, the CATS attacked some technical problems that arise when one works with complex networks. One result was the Bellman–Ford–Moore algorithm for finding shortest paths on networks.
The issue the diversion approach did not handle was the feedback from the quantity of traffic on links and routes. If a lot of vehicles try to use a facility, the facility becomes congested and travel time increases. Absent some way to consider feedback, early planning studies (actually, most in the period 1960-1975) ignored feedback. They used the Moore algorithm to determine shortest paths and assigned all traffic to shortest paths. That is called all or nothing assignment because either all of the traffic from "i" to "j" moves along a route or it does not.
The all-or-nothing or shortest path assignment is not trivial from a technical-computational view. Each traffic zone is connected to "n - 1" zones, so there are numerous paths to be considered. In addition, we are ultimately interested in traffic on links. A link may be a part of several paths, and traffic along paths has to be summed link by link.
An argument can be made favoring the all-or-nothing approach. It goes this way: The planning study is to support investments so that a good level of service is available on all links. Using the travel times associated with the planned level of service, calculations indicate how traffic will flow once improvements are in place. Knowing the quantities of traffic on links, the capacity to be supplied to meet the desired level of service can be calculated.
Heuristic procedures.
To take account of the effect of traffic loading on travel times and traffic equilibria, several heuristic calculation procedures were developed. One heuristic proceeds incrementally. The traffic to be assigned is divided into parts (usually 4). Assign the first part of the traffic. Compute new travel times and assign the next part of the traffic. The last step is repeated until all the traffic is assigned. The CATS used a variation on this; it assigned row by row in the O-D table.
The heuristic included in the FHWA collection of computer programs proceeds another way.
These procedures seem to work "pretty well," but they are not exact.
Frank-Wolfe algorithm.
Dafermos (1968) applied the Frank-Wolfe algorithm (1956, Florian 1976), which can be used to deal with the traffic equilibrium problem. Suppose we are considering a highway network. For each link there is a function stating the relationship between resistance and volume of traffic. The Bureau of Public Roads (BPR) developed a link (arc) congestion (or volume-delay, or link performance) function, which we will term "Sa(va)"
formula_0
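A one-function Python version of this volume-delay relation, using the standard BPR constants 0.15 and 4 (the sample volume and capacity below are illustrative):

def bpr_time(volume, free_flow_time, capacity, alpha=0.15, beta=4):
    # travel time on a link as a function of the volume assigned to it
    return free_flow_time * (1.0 + alpha * (volume / capacity) ** beta)

print(bpr_time(volume=8000, free_flow_time=15, capacity=10000))   # about 15.9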
There are other congestion functions. The CATS has long used a function different from that used by the BPR, but there seems to be little difference between results when the CATS and BPR functions are compared.
Equilibrium assignment.
To assign traffic to paths and links we have to have rules, and there are the well-known Wardrop equilibrium conditions. The essence of these is that travelers will strive to find the shortest (least resistance) path from origin to destination, and network equilibrium occurs when no traveler can decrease travel effort by shifting to a new path. These are termed user optimal conditions, for no user will gain from changing travel paths once the system is in equilibrium.
The user optimum equilibrium can be found by solving the following nonlinear programming problem
formula_1
subject to:
formula_2
formula_3
formula_4
where
formula_5
is the number of vehicles on path "r" from origin "i" to destination "j". So constraint (2) says that all travel must take place –"i = 1 ... n; j = 1 ... n"
formula_6 = 1 if link a is on path r from i to j ; zero otherwise. So constraint (1) sums traffic on each link. There is a constraint for each link on the network. Constraint (3) assures no negative traffic.
Example.
An example from Eash, Janson, and Boyce (1979) will illustrate the solution to the nonlinear program problem. There are two links from node 1 to node 2, and there is a resistance function for each link (see Figure 1). Areas under the curves in Figure 2 correspond to the integration from 0 to "a" in equation 1, they sum to 220,674. Note that the function for link "b" is plotted in the reverse direction.
formula_7
formula_8
formula_9
Figure 1: Two Route Network
Figure 2: Graphical Solution to the Equilibrium Assignment Problem
Figure 3: Allocation of Vehicles not Satisfying the Equilibrium Condition
At equilibrium there are 2,152 vehicles on link "a" and 5,847 on link "b". Travel time is the same on each route: about 63.
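These figures can be checked numerically with nothing more than the two link performance functions above and a bisection search for the volume split at which both route times are equal (a sketch, not part of the original study):

def S_a(v):                               # link a performance function
    return 15 * (1 + 0.15 * (v / 1000) ** 4)

def S_b(v):                               # link b performance function
    return 20 * (1 + 0.15 * (v / 3000) ** 4)

lo, hi = 0.0, 8000.0
for _ in range(60):                       # bisection on S_a(v_a) - S_b(8000 - v_a)
    mid = 0.5 * (lo + hi)
    if S_a(mid) > S_b(8000 - mid):
        hi = mid
    else:
        lo = mid
v_a = 0.5 * (lo + hi)
print(round(v_a), round(8000 - v_a), round(S_a(v_a), 1))   # about 2,152 and 5,848, with a common travel time near 63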
Figure 3 illustrates an allocation of vehicles that is not consistent with the equilibrium solution. The curves are unchanged. But with the new allocation of vehicles to routes the shaded area has to be included in the solution, so the Figure 3 solution is larger than the solution in Figure 2 by the area of the shaded area.
Integrating travel choices.
The urban transportation planning model evolved as a set of steps to be followed, and models evolved for use in each step. Sometimes there were steps within steps, as was the case for the first statement of the Lowry model. In some cases, it has been noted that steps can be integrated. More generally, the steps abstract from decisions that may be made simultaneously, and it would be desirable to better replicate that in the analysis.
Disaggregate demand models were first developed to treat the mode choice problem. That problem assumes that one has decided to take a trip, where that trip will go, and at what time the trip will be made. They have been used to treat the implied broader context. Typically, a nested model will be developed, say, starting with the probability of a trip being made, then examining the choice among places, and then mode choice. The time of travel is a bit harder to treat.
Wilson's doubly constrained entropy model has been the point of departure for efforts at the aggregate level. That model contains the constraint
formula_10
where the formula_11 are the link travel costs, formula_12 refers to traffic on a link, and C is a resource constraint to be sized when fitting the model with data. Instead of using that form of the constraint, the monotonically increasing resistance function used in traffic assignment can be used. The result determines zone-to-zone movements and assigns traffic to networks, and that makes much sense from the way one would imagine the system works – zone-to-zone traffic depends on the resistance occasioned by congestion.
Alternatively, the link resistance function may be included in the objective function (and the total cost function eliminated from the constraints).
A generalized disaggregate choice approach has evolved as has a generalized aggregate approach. The large question is that of the relations between them. When we use a macro model, we would like to know the disaggregate behavior it represents. If we are doing a micro analysis, we would like to know the aggregate implications of the analysis.
Wilson derives a gravity-like model with weighted parameters that say something about the attractiveness of origins and destinations. Without too much math we can write probability of choice statements based on attractiveness, and these take a form similar to some varieties of disaggregate demand models.
Integrating travel demand with route assignment.
It has long been recognized that travel demand is influenced by network supply. The example of a new bridge opening where none was before inducing additional traffic has been noted for centuries. Much research has gone into developing methods for allowing the forecasting system to directly account for this phenomenon. Evans (1974) published a doctoral dissertation on a mathematically rigorous combination of the gravity distribution model with the equilibrium assignment model. The earliest citation of this integration is the work of Irwin and Von Cube, as related by Florian et al. (1975), who comment on the work of Evans:
"The work of Evans resembles somewhat the algorithms developed by Irwin and Von Cube ['Capacity Restraint in Multi-Travel Mode Assignment Programs' H.R.B. Bulletin 347 (1962)] for a transportation study of Toronto. Their work allows for feedback between congested assignment and trip distribution, although they apply sequential procedures. Starting from an initial solution of the distribution problem, the interzonal trips are assigned to the initial shortest routes. For successive iterations, new shortest routes are computed, and their lengths are used as access times for input the distribution model. The new interzonal flows are then assigned in some proportion to the routes already found. The procedure is stopped when the interzonal times for successive iteration are quasi-equal."
Florian et al. proposed a somewhat different method for solving the combined distribution assignment, applying directly the Frank-Wolfe algorithm. Boyce et al. (1988) summarize the research on Network Equilibrium Problems, including the assignment with elastic demand.
Discussion.
A three-link problem cannot be solved graphically, and most transportation network problems involve large numbers of nodes and links. Eash et al., for instance, studied the road network of DuPage County, where there were about 30,000 one-way links and 9,500 nodes. Because problems are large, an algorithm is needed to solve the assignment problem, and the Frank-Wolfe algorithm (with various modern modifications since first published) is used. Start with an all-or-nothing assignment, and then follow the rule developed by Frank-Wolfe to iterate toward the minimum value of the objective function. (The algorithm applies successive feasible solutions to achieve convergence to the optimal solution. It uses an efficient search procedure to move the calculation rapidly toward the optimal solution.) Travel times correspond to the dual variables in this programming problem.
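The loop below sketches this procedure on a toy network with a single origin-destination pair and three parallel links, so the all-or-nothing step is simply "load everything on the currently fastest link"; a real implementation builds shortest-path trees over the full network. The link data, demand, and iteration counts are illustrative assumptions:

import numpy as np

links = {"a": (15.0, 1000.0), "b": (20.0, 3000.0), "c": (25.0, 5000.0)}   # (free-flow time, capacity)
demand = 8000.0

def travel_time(link, v):
    t0, c = links[link]
    return t0 * (1.0 + 0.15 * (v / c) ** 4)          # BPR volume-delay function

def objective(v):
    # Beckmann objective: sum over links of the integral of the volume-delay function
    return sum(t0 * (v[l] + 0.03 * v[l] ** 5 / c ** 4) for l, (t0, c) in links.items())

def all_or_nothing(times):
    best = min(links, key=times.get)                  # cheapest link gets the whole demand
    return {l: (demand if l == best else 0.0) for l in links}

v = all_or_nothing({l: travel_time(l, 0.0) for l in links})           # initial all-or-nothing load
for _ in range(200):
    y = all_or_nothing({l: travel_time(l, v[l]) for l in links})      # Frank-Wolfe direction
    lams = np.linspace(0.0, 1.0, 1001)                                # crude grid line search
    lam = min(lams, key=lambda s: objective({l: (1 - s) * v[l] + s * y[l] for l in links}))
    v = {l: (1 - lam) * v[l] + lam * y[l] for l in links}

print({l: round(x) for l, x in v.items()})
print({l: round(travel_time(l, v[l]), 1) for l in links})   # the three route times approach a common value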
It is interesting that the Frank-Wolfe algorithm was available in 1956. Its application was developed in 1968, and it took almost another two decades before the first equilibrium assignment algorithm was embedded in commonly used transportation planning software (Emme and Emme/2, developed by Florian and others in Montreal). We would not want to draw any general conclusion from the slow application observation, mainly because we can find counter examples about the pace and pattern of technique development. For example, the simplex method for the solution of linear programming problems was worked out and widely applied prior to the development of much of programming theory.
The problem statement and algorithm have general applications across civil engineering: hydraulics, structures, and construction. (See Hendrickson and Janson 1984).
Empirical Studies of Route Choice.
Route assignment models are based at least to some extent on empirical studies of how people choose routes in a network. Such studies are generally focused on a particular mode, and make use of either stated preference or revealed preference models.
Bicycle.
Cyclists have been found to prefer designated bike lanes and avoid steep hills.
Public Transport.
Public transport has long been considered in the context of route assignment and many studies have been conducted on transit route choice. Among other factors, transit users attempt to minimize total travel time, time or distance walking, and number of transfers.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nS_a \\left( {v_a } \\right) = t_a \\left( {1 + 0.15\\left( {\\frac{{v_a }}\n{{c_a }}} \\right)^4 } \\right)\n"
},
{
"math_id": 1,
"text": "\n\\min \\sum_a {\\int_0^{v_a} {S_a \\left( x \\right)} } dx\n"
},
{
"math_id": 2,
"text": " v_a = \\sum_i {\\sum_j {\\sum_r {\\alpha _{ij}^{ar} x_{ij}^r } } } "
},
{
"math_id": 3,
"text": "\n\\sum_r {x_{ij}^r = T_{ij} } \n"
},
{
"math_id": 4,
"text": "\nv_a \\geq 0,\\;x_{ij}^r \\geq 0\n"
},
{
"math_id": 5,
"text": "\nx_{ij}^r \n"
},
{
"math_id": 6,
"text": "\\alpha _{ij}^{ar} "
},
{
"math_id": 7,
"text": "\nS_a = 15\\left( {1 + 0.15\\left( {\\frac{{v_a }}{{1000}}} \\right)^4 } \\right) \n"
},
{
"math_id": 8,
"text": " \nS_b = 20\\left( {1 + 0.15\\left( {\\frac{{v_b }}{{3000}}} \\right)^4 } \\right)\n"
},
{
"math_id": 9,
"text": "\nv_a + v_b = 8000\n"
},
{
"math_id": 10,
"text": "t_{ij}c_{ij}=C"
},
{
"math_id": 11,
"text": "c_{ij}"
},
{
"math_id": 12,
"text": "t_{ij}"
}
]
| https://en.wikipedia.org/wiki?curid=1346524 |
13467150 | Net realizable value | Net realizable value (NRV) is a measure of a fixed or current asset's worth when held in inventory, in the field of accounting. NRV is part of the Generally Accepted Accounting Principles (GAAP) and International Financial Reporting Standards (IFRS) that apply to valuing inventory, so as to not overstate or understate the value of inventory goods. Net realizable value is generally equal to the selling price of the inventory goods less the selling costs (completion and disposal). Therefore, it is the expected sales price less selling costs (e.g. repair and disposal costs). NRV prevents overstating or understating of an asset's value. NRV is the price cap when using the Lower of Cost or Market Rule.
Under IFRS, companies need to record the cost of their ending inventory at the lower of cost and NRV, to ensure that their inventory and income statement are not overstated (under ASPE, companies record the lower of cost and market value). For example, under IFRS, at a company's year end, if an unfinished good that already cost $25 is expected to sell for $100 to a customer, but it will take an additional $20 to complete and $10 to advertise to the customer, its NRV will be formula_0. In this year's income statement, since the cost of the good ($25) is less than its NRV ($70), the cost of the good will get recorded as the cost of inventory. In next year's income statement after the good was sold, this company will record a revenue of $100, cost of goods sold of $25, and cost of completion and disposal of formula_1. This leads to a profit of formula_2 on this transaction.
Suppose we changed the example so that it costs $60 to advertise to the customer. Now the good's NRV will be formula_3. In this year's income statement, since the NRV ($20) is less than the cost of the good ($25), the NRV will get recorded as the Cost of Ending Inventory. To do so, an inventory write down of formula_4 is done, and hence a decrease of $5 in this year's income statement. In the next year's income statement after the good was sold, this company will record a revenue of $100, Cost of Goods Sold of $20, and Cost of Completion and Disposal of formula_5. This leads to the company breaking even on this transaction (formula_6).
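The two examples can be reproduced with a few lines of code; the rule applied is simply "carry inventory at the lower of its cost and its NRV" (the function names below are illustrative):

def net_realizable_value(selling_price, completion_cost, selling_cost):
    return selling_price - completion_cost - selling_cost

def inventory_value(cost, nrv):
    return min(cost, nrv)                          # lower of cost and NRV

nrv1 = net_realizable_value(100, 20, 10)           # first example: 70, so inventory stays at its cost of 25
print(nrv1, inventory_value(25, nrv1))
nrv2 = net_realizable_value(100, 20, 60)           # second example: 20, so a write-down of 25 - 20 = 5 is recorded
print(nrv2, inventory_value(25, nrv2))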
Inventory can be valued at either its historical cost or its market value. Because the market value of an inventory is not always available, NRV is sometimes used as a substitute for this value.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\$100-\\$20-\\$10=\\$70"
},
{
"math_id": 1,
"text": "\\$20+\\$10=\\$30"
},
{
"math_id": 2,
"text": "\\$100-\\$25-\\$30=\\$45"
},
{
"math_id": 3,
"text": "\\$100-\\$20 -\\$60=\\$20"
},
{
"math_id": 4,
"text": "\\$25-\\$20=\\$5"
},
{
"math_id": 5,
"text": "\\$20+\\$60 = \\$80"
},
{
"math_id": 6,
"text": "\\$100-\\$20-\\$80=\\$0"
}
]
| https://en.wikipedia.org/wiki?curid=13467150 |
1346871 | Lévy distribution | Probability distribution
In probability theory and statistics, the Lévy distribution, named after Paul Lévy, is a continuous probability distribution for a non-negative random variable. In spectroscopy, this distribution, with frequency as the dependent variable, is known as a van der Waals profile. It is a special case of the inverse-gamma distribution. It is a stable distribution.
Definition.
The probability density function of the Lévy distribution over the domain formula_2 is
formula_3
where formula_0 is the location parameter, and formula_4 is the scale parameter. The cumulative distribution function is
formula_5
where formula_6 is the complementary error function, and formula_7 is the Laplace function (CDF of the standard normal distribution). The shift parameter formula_0 has the effect of shifting the curve to the right by an amount formula_0 and changing the support to the interval [formula_0, formula_1). Like all stable distributions, the Lévy distribution has a standard form "f"("x"; 0, 1) which has the following property:
formula_8
where "y" is defined as
formula_9
The characteristic function of the Lévy distribution is given by
formula_10
Note that the characteristic function can also be written in the same form used for the stable distribution with formula_11 and formula_12:
formula_13
Assuming formula_14, the "n"th moment of the unshifted Lévy distribution is formally defined by
formula_15
which diverges for all formula_16, so that the integer moments of the Lévy distribution do not exist (only some fractional moments).
The moment-generating function would be formally defined by
formula_17
however, this diverges for formula_18 and is therefore not defined on an interval around zero, so the moment-generating function is actually undefined.
Like all stable distributions except the normal distribution, the wing of the probability density function exhibits heavy tail behavior falling off according to a power law:
formula_19 as formula_20
which shows that the Lévy distribution is not just heavy-tailed but also fat-tailed. This is illustrated in the diagram below, in which the probability density functions for various values of "c" and formula_14 are plotted on a log–log plot:
The standard Lévy distribution satisfies the condition of being stable:
formula_21
where formula_22 are independent standard Lévy-variables with formula_23
Random-sample generation.
Random samples from the Lévy distribution can be generated using inverse transform sampling. Given a random variate "U" drawn from the uniform distribution on the unit interval (0, 1], the variate "X" given by
formula_35
is Lévy-distributed with location formula_0 and scale formula_4. Here formula_7 is the cumulative distribution function of the standard normal distribution.
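A short sketch of this inverse-transform recipe, assuming NumPy and SciPy's quantile function for the standard normal distribution:

import numpy as np
from scipy.stats import norm

def levy_samples(n, mu=0.0, c=1.0, rng=None):
    # inverse transform sampling: X = c / (Phi^-1(1 - U/2))^2 + mu
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(0.0, 1.0, size=n)
    return c / norm.ppf(1.0 - u / 2.0) ** 2 + mu

x = levy_samples(100_000, rng=np.random.default_rng(0))
print(np.median(x))    # about 2.2: the median is finite even though the integer moments diverge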
Footnotes.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\infty"
},
{
"math_id": 2,
"text": "x \\ge \\mu"
},
{
"math_id": 3,
"text": "f(x; \\mu, c) = \\sqrt{\\frac{c}{2\\pi}} \\, \\frac{e^{-\\frac{c}{2(x - \\mu)}}}{(x - \\mu)^{3/2}},"
},
{
"math_id": 4,
"text": "c"
},
{
"math_id": 5,
"text": "F(x; \\mu, c) = \\operatorname{erfc}\\left(\\sqrt{\\frac{c}{2(x - \\mu)}}\\right) = 2 - 2 \\Phi\\left({\\sqrt{\\frac{c}{(x - \\mu)}}}\\right),"
},
{
"math_id": 6,
"text": "\\operatorname{erfc}(z)"
},
{
"math_id": 7,
"text": "\\Phi(x)"
},
{
"math_id": 8,
"text": "f(x; \\mu, c) \\,dx = f(y; 0, 1) \\,dy,"
},
{
"math_id": 9,
"text": "y = \\frac{x - \\mu}{c}."
},
{
"math_id": 10,
"text": "\\varphi(t; \\mu, c) = e^{i\\mu t - \\sqrt{-2ict}}."
},
{
"math_id": 11,
"text": "\\alpha = 1/2"
},
{
"math_id": 12,
"text": "\\beta = 1"
},
{
"math_id": 13,
"text": "\\varphi(t; \\mu, c) = e^{i\\mu t - |ct|^{1/2} (1 - i\\operatorname{sign}(t))}."
},
{
"math_id": 14,
"text": "\\mu = 0"
},
{
"math_id": 15,
"text": "m_n\\ \\stackrel{\\text{def}}{=}\\ \\sqrt{\\frac{c}{2\\pi}} \\int_0^\\infty \\frac{e^{-c/2x} x^n}{x^{3/2}} \\,dx,"
},
{
"math_id": 16,
"text": "n \\geq 1/2"
},
{
"math_id": 17,
"text": "M(t; c)\\ \\stackrel{\\mathrm{def}}{=}\\ \\sqrt{\\frac{c}{2\\pi}} \\int_0^\\infty \\frac{e^{-c/2x + tx}}{x^{3/2}} \\,dx,"
},
{
"math_id": 18,
"text": "t > 0"
},
{
"math_id": 19,
"text": "f(x; \\mu, c) \\sim \\sqrt{\\frac{c}{2\\pi}} \\, \\frac{1}{x^{3/2}}"
},
{
"math_id": 20,
"text": "x \\to \\infty,"
},
{
"math_id": 21,
"text": "(X_1 + X_2 + \\dotsb + X_n) \\sim n^{1/\\alpha}X,"
},
{
"math_id": 22,
"text": "X_1, X_2, \\ldots, X_n, X"
},
{
"math_id": 23,
"text": "\\alpha = 1/2."
},
{
"math_id": 24,
"text": "X \\sim \\operatorname{Levy}(\\mu, c)"
},
{
"math_id": 25,
"text": "kX + b \\sim \\operatorname{Levy}(k\\mu + b, kc)."
},
{
"math_id": 26,
"text": "X \\sim \\operatorname{Levy}(0, c)"
},
{
"math_id": 27,
"text": "X \\sim \\operatorname{Inv-Gamma}(1/2, c/2)"
},
{
"math_id": 28,
"text": "Y \\sim \\operatorname{Normal}(\\mu, \\sigma^2)"
},
{
"math_id": 29,
"text": "(Y - \\mu)^{-2} \\sim \\operatorname{Levy}(0, 1/\\sigma^2)."
},
{
"math_id": 30,
"text": "X \\sim \\operatorname{Normal}(\\mu, 1/\\sqrt{\\sigma})"
},
{
"math_id": 31,
"text": "(X - \\mu)^{-2} \\sim \\operatorname{Levy}(0, \\sigma)"
},
{
"math_id": 32,
"text": "X \\sim \\operatorname{Stable}(1/2, 1, c, \\mu)"
},
{
"math_id": 33,
"text": "X\\,\\sim\\,\\operatorname{Scale-inv-\\chi^2}(1, c)"
},
{
"math_id": 34,
"text": "(X - \\mu)^{-1/2} \\sim \\operatorname{FoldedNormal}(0, 1/\\sqrt{c})"
},
{
"math_id": 35,
"text": "X = F^{-1}(U) = \\frac{c}{(\\Phi^{-1}(1 - U/2))^2} + \\mu"
},
{
"math_id": 36,
"text": "\\alpha"
},
{
"math_id": 37,
"text": "c=\\alpha^2"
}
]
| https://en.wikipedia.org/wiki?curid=1346871 |
13471652 | Generalized forces | Concept in Lagrangian mechanics
In analytical mechanics (particularly Lagrangian mechanics), generalized forces are conjugate to generalized coordinates. They are obtained from the applied forces F"i", "i" = 1, …, "n", acting on a system that has its configuration defined in terms of generalized coordinates. In the formulation of virtual work, each generalized force is the coefficient of the variation of a generalized coordinate.
Virtual work.
Generalized forces can be obtained from the computation of the virtual work, δW, of the applied forces.
The virtual work of the forces, F"i", acting on the particles "Pi", "i" = 1, ..., "n", is given by
formula_0
where "δ"r"i" is the virtual displacement of the particle Pi.
Generalized coordinates.
Let the position vectors of each of the particles, r"i", be a function of the generalized coordinates, "qj", "j" = 1, ..., "m". Then the virtual displacements "δ"r"i" are given by
formula_1
where δqj is the virtual displacement of the generalized coordinate qj.
The virtual work for the system of particles becomes
formula_2
Collect the coefficients of δqj so that
formula_3
Generalized forces.
The virtual work of a system of particles can be written in the form
formula_4
where
formula_5
are called the generalized forces associated with the generalized coordinates "qj", "j" = 1, ..., "m".
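As a small symbolic illustration of this definition (not part of the article itself), consider a planar pendulum of length "l" whose only applied force is gravity; the generalized force conjugate to the swing angle comes out as −"mgl" sin "θ". The SymPy sketch below assumes the usual Cartesian parametrization of the bob position:

import sympy as sp

m, g, l, theta = sp.symbols('m g l theta', real=True)

r = sp.Matrix([l * sp.sin(theta), -l * sp.cos(theta)])   # position of the bob
F = sp.Matrix([0, -m * g])                               # applied force: gravity

# generalized force conjugate to theta:  Q = F . (dr/dtheta)
Q_theta = sp.simplify(F.dot(sp.diff(r, theta)))
print(Q_theta)                                           # -g*l*m*sin(theta)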
Velocity formulation.
In the application of the principle of virtual work it is often convenient to obtain virtual displacements from the velocities of the system. For the n particle system, let the velocity of each particle Pi be V"i", then the virtual displacement "δ"r"i" can also be written in the form
formula_6
This means that the generalized force, Qj, can also be determined as
formula_7
D'Alembert's principle.
D'Alembert formulated the dynamics of a particle as the equilibrium of the applied forces with an inertia force (apparent force), called D'Alembert's principle. The inertia force of a particle, Pi, of mass mi is
formula_8
where A"i" is the acceleration of the particle.
If the configuration of the particle system depends on the generalized coordinates "qj", "j" = 1, ..., "m", then the generalized inertia force is given by
formula_9
D'Alembert's form of the principle of virtual work yields
formula_10
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\delta W = \\sum_{i=1}^n \\mathbf F_i \\cdot \\delta \\mathbf r_i"
},
{
"math_id": 1,
"text": "\\delta \\mathbf{r}_i = \\sum_{j=1}^m \\frac {\\partial \\mathbf {r}_i} {\\partial q_j} \\delta q_j,\\quad i=1,\\ldots, n,"
},
{
"math_id": 2,
"text": "\\delta W = \\mathbf F_1 \\cdot \\sum_{j=1}^m \\frac {\\partial \\mathbf r_1} {\\partial q_j} \\delta q_j +\\ldots+ \\mathbf F_n \\cdot \\sum_{j=1}^m \\frac {\\partial \\mathbf r_n} {\\partial q_j} \\delta q_j."
},
{
"math_id": 3,
"text": "\\delta W = \\sum_{i=1}^n \\mathbf F_i \\cdot \\frac {\\partial \\mathbf r_i} {\\partial q_1} \\delta q_1 +\\ldots+ \\sum_{i=1}^n \\mathbf F_i \\cdot \\frac {\\partial \\mathbf r_i} {\\partial q_m} \\delta q_m."
},
{
"math_id": 4,
"text": " \\delta W = Q_1\\delta q_1 + \\ldots + Q_m\\delta q_m,"
},
{
"math_id": 5,
"text": "Q_j = \\sum_{i=1}^n \\mathbf F_i \\cdot \\frac {\\partial \\mathbf r_i} {\\partial q_j},\\quad j=1,\\ldots, m,"
},
{
"math_id": 6,
"text": "\\delta \\mathbf r_i = \\sum_{j=1}^m \\frac {\\partial \\mathbf V_i} {\\partial \\dot q_j} \\delta q_j,\\quad i=1,\\ldots, n."
},
{
"math_id": 7,
"text": "Q_j = \\sum_{i=1}^n \\mathbf F_i \\cdot \\frac {\\partial \\mathbf V_i} {\\partial \\dot{q}_j}, \\quad j=1,\\ldots, m."
},
{
"math_id": 8,
"text": "\\mathbf F_i^*=-m_i\\mathbf A_i,\\quad i=1,\\ldots, n,"
},
{
"math_id": 9,
"text": "Q^*_j = \\sum_{i=1}^n \\mathbf F^*_{i} \\cdot \\frac {\\partial \\mathbf V_i} {\\partial \\dot q_j},\\quad j=1,\\ldots, m."
},
{
"math_id": 10,
"text": " \\delta W = (Q_1+Q^*_1)\\delta q_1 + \\ldots + (Q_m+Q^*_m)\\delta q_m."
}
]
| https://en.wikipedia.org/wiki?curid=13471652 |
13473033 | Bending stiffness | Continuum mechanics
The bending stiffness (formula_0) is the resistance of a member against bending deflection or deformation. It is a function of the Young's modulus formula_1, the second moment of area formula_2 of the beam cross-section about the axis of interest, the length of the beam, and the beam boundary conditions. The bending stiffness of a beam can be derived analytically from the equation of beam deflection when a force is applied to it.
formula_3
where formula_4 is the applied force and formula_5 is the deflection. According to elementary beam theory, the relationship between the applied bending moment formula_6 and the resulting curvature formula_7 of the beam is:
formula_8
where formula_9 is the deflection of the beam and formula_10 is the distance along the beam. Double integration of the above equation leads to computing the deflection of the beam, and in turn, the bending stiffness of the beam.
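For the common textbook case of a cantilever with a transverse point load P at its free end, carrying out this double integration gives a tip deflection of PL³/(3EI), so the bending (tip) stiffness is K = 3EI/L³. The snippet below evaluates this for an illustrative steel rectangular cross-section; all numerical values are assumptions, not taken from the article:

E = 200e9                 # Young's modulus, Pa (typical steel)
b, h = 0.05, 0.01         # cross-section width and height, m
I = b * h**3 / 12         # second moment of area, m^4
L = 1.0                   # beam length, m

K = 3 * E * I / L**3      # tip stiffness of the cantilever, N/m
print(K)                  # about 2.5e3 N/m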
Bending stiffness in beams is also known as Flexural rigidity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "K = \\frac{\\mathrm{p}}{\\mathrm{w}}"
},
{
"math_id": 4,
"text": "\\mathrm{p}"
},
{
"math_id": 5,
"text": "\\mathrm{w}"
},
{
"math_id": 6,
"text": "M"
},
{
"math_id": 7,
"text": "\\kappa"
},
{
"math_id": 8,
"text": "M = E I \\kappa \\approx E I \\frac{\\mathrm{d}^2 w}{\\mathrm{d} x^2}"
},
{
"math_id": 9,
"text": "w"
},
{
"math_id": 10,
"text": "x"
}
]
| https://en.wikipedia.org/wiki?curid=13473033 |
13473221 | Quantum bus | A quantum bus is a device which can be used to store or transfer information between independent qubits in a quantum computer, or combine two qubits into a superposition. It is the quantum analog of a classical bus.
There are several physical systems that can be used to realize a quantum bus, including trapped ions, photons, and superconducting qubits. Trapped ions, for example, can use the quantized motion of ions (phonons) as a quantum bus, while photons can act as a carrier of quantum information by utilizing the increased interaction strength provided by cavity quantum electrodynamics. Circuit quantum electrodynamics, which uses superconducting qubits coupled to a microwave cavity on a chip, is another example of a quantum bus that has been successfully demonstrated in experiments.
History.
The concept was first demonstrated by researchers at Yale University and the National Institute of Standards and Technology (NIST) in 2007. Prior to this experimental demonstration, the quantum bus had been described by scientists at NIST as one of the possible cornerstone building blocks in quantum computing architectures.
Mathematical description.
A quantum bus for superconducting qubits can be built with a resonance cavity. The Hamiltonian for a system with qubit A, qubit B, and the resonance cavity or quantum bus connecting the two is formula_0 where formula_1 is the single-qubit Hamiltonian, formula_2 is the raising or lowering operator for creating or destroying excitations in the formula_3th qubit, and formula_4 is controlled by the amplitude of the D.C. and radio-frequency flux bias.
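A numerical sketch of this Hamiltonian can be assembled with Kronecker products. Here ħ = 1, the cavity term is taken to be ω_r a†a (an assumption, since the article leaves the cavity part unspecified), the cavity Fock space is truncated, and all frequencies and couplings are illustrative:

import numpy as np

N = 5                                    # cavity Fock-space truncation
wr, wA, wB = 5.0, 4.8, 5.2               # cavity and qubit frequencies (illustrative)
gA, gB = 0.05, 0.05                      # qubit-cavity couplings (illustrative)

a = np.diag(np.sqrt(np.arange(1, N)), 1)       # cavity annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])        # qubit lowering operator sigma_-
I2, Ic = np.eye(2), np.eye(N)

def kron3(x, y, z):                            # operator acting on cavity (x), qubit A (y), qubit B (z)
    return np.kron(np.kron(x, y), z)

H  = wr * kron3(a.T @ a, I2, I2)                          # resonator term
H += 0.5 * wA * kron3(Ic, sm.T @ sm, I2)                  # qubit A term, (1/2) w_A sigma_+ sigma_-
H += 0.5 * wB * kron3(Ic, I2, sm.T @ sm)                  # qubit B term
H += gA * (kron3(a.T, sm, I2) + kron3(a, sm.T, I2))       # exchange between qubit A and the bus
H += gB * (kron3(a.T, I2, sm) + kron3(a, I2, sm.T))       # exchange between qubit B and the bus

print(H.shape)    # (20, 20): joint Hilbert space of the bus and the two qubits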
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{H}=\\hat{H}_r+\\sum\\limits_{j=A,B} \\hat{H}_j +\\sum\\limits_{j=A,B}hg_i\\left(\\hat{a}^\\dagger\\hat{\\sigma}^j_-+\\hat{a}\\hat{\\sigma}^j_{\\text{+}}\\right)"
},
{
"math_id": 1,
"text": "\\hat{H}_j = \\frac{1}{2}\\hbar\\omega_j\\hat{\\sigma}^j_+\\hat{\\sigma}^j_-"
},
{
"math_id": 2,
"text": "\\hat{\\sigma}^j_+\\hat{\\sigma}^j_-"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "\\hbar\\omega_j"
}
]
| https://en.wikipedia.org/wiki?curid=13473221 |
13474705 | Tilted large deviation principle | Mathematical formula
In mathematics — specifically, in large deviations theory — the tilted large deviation principle is a result that allows one to generate a new large deviation principle from an old one by exponential tilting, i.e. integration against an exponential functional. It can be seen as an alternative formulation of Varadhan's lemma.
Statement of the theorem.
Let "X" be a Polish space (i.e., a separable, completely metrizable topological space), and let ("μ""ε")"ε">0 be a family of probability measures on "X" that satisfies the large deviation principle with rate function "I" : "X" → [0, +∞]. Let "F" : "X" → R be a continuous function that is bounded from above. For each Borel set "S" ⊆ "X", let
formula_0
and define a new family of probability measures ("ν""ε")"ε">0 on "X" by
formula_1
Then ("ν""ε")"ε">0 satisfies the large deviation principle on "X" with rate function "I""F" : "X" → [0, +∞] given by
formula_2 | [
{
"math_id": 0,
"text": "J_{\\varepsilon} (S) = \\int_{S} e^{ F(x) / \\varepsilon} \\, \\mathrm{d} \\mu_{\\varepsilon} (x)"
},
{
"math_id": 1,
"text": "\\nu_{\\varepsilon} (S) = \\frac{J_{\\varepsilon} (S)}{J_{\\varepsilon} (X)}."
},
{
"math_id": 2,
"text": "I^{F} (x) = \\sup_{y \\in X} \\big[ F(y) - I(y) \\big] - \\big[ F(x) - I(x) \\big]."
}
]
| https://en.wikipedia.org/wiki?curid=13474705 |
13475776 | Resistive skin time | The resistive skin time is a characteristic time of typical magnetohydrodynamic (MHD) phenomena.
Definition.
The resistive skin time is defined as:
formula_0
where formula_1 is the resistivity, formula_2 is a typical radius of the device and formula_3 is the magnetic permeability. | [
{
"math_id": 0,
"text": "\\tau_R=\\frac{\\mu_0 a^2}{\\eta}"
},
{
"math_id": 1,
"text": "\\eta"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "\\mu_0"
}
]
| https://en.wikipedia.org/wiki?curid=13475776 |
13477276 | Conjunctive query | In database theory, a conjunctive query is a restricted form of first-order queries using the logical conjunction operator. Many first-order queries can be written as conjunctive queries. In particular, a large part of queries issued on relational databases can be expressed in this way. Conjunctive queries also have a number of desirable theoretical properties that larger classes of queries (e.g., the relational algebra queries) do not share.
Definition.
The conjunctive queries are the fragment of (domain independent) first-order logic given by the set of
formulae that can be constructed from atomic formulae using conjunction ∧ and
existential quantification ∃, but not using disjunction ∨, negation ¬,
or universal quantification ∀.
Each such formula can be rewritten (efficiently) into an equivalent formula in prenex normal form, thus this form is usually simply assumed.
Thus conjunctive queries are of the following general form:
formula_0,
with the free variables formula_1 being called distinguished variables, and the bound variables formula_2 being called undistinguished variables. formula_3 are atomic formulae.
As an example of why the restriction to domain independent first-order logic is important, consider formula_4, which is not domain independent; see Codd's theorem. This formula cannot be implemented in the select-project-join fragment of relational algebra, and hence should not be considered a conjunctive query.
Conjunctive queries can express a large proportion of queries that are frequently issued on relational databases. To give an example, imagine a relational database for storing information about students, their address, the courses they take and their gender. Finding all male students and their addresses who attend a course that is also attended by a female student is expressed by the following conjunctive query:
(student, address) . ∃ (student2, course) .
attends(student, course) ∧ gender(student, 'male') ∧
attends(student2, course) ∧
gender(student2, 'female') ∧ lives(student, address)
Note that since the only entity of interest is the male student and his address, these are the only distinguished variables, while the variables codice_0, codice_1 are only existentially quantified, i.e. undistinguished.
Fragments.
Conjunctive queries without distinguished variables are called boolean conjunctive queries. Conjunctive queries where all variables are distinguished (and no variables are bound) are called equi-join queries, because they are the equivalent, in the relational calculus, of the equi-join queries in the relational algebra (when selecting all columns of the result).
Relationship to other query languages.
Conjunctive queries also correspond to select-project-join queries in relational algebra (i.e.,
relational algebra queries that do not use the operations union or difference) and to select-from-where queries in SQL in which the where-condition uses exclusively conjunctions of atomic equality conditions, i.e. conditions constructed from column names and constants using no comparison operators other than "=", combined using "and". Notably, this excludes the use of aggregation and subqueries. For example, the above query can be written as an SQL query of the conjunctive query fragment as
select l.student, l.address
from attends a1, gender g1,
attends a2, gender g2,
lives l
where a1.student = g1.student and
a2.student = g2.student and
l.student = g1.student and
a1.course = a2.course and
g1.gender = 'male' and
g2.gender = 'female';
Datalog.
Besides their logical notation, conjunctive queries can also be written as Datalog rules. Many authors in fact prefer the following Datalog notation for the query above:
result(student, address) :- attends(student, course), gender(student, male),
attends(student2, course), gender(student2, female),
lives(student, address).
Although there are no quantifiers in this notation, variables appearing in the head of the rule are still implicitly universally quantified, while variables only appearing in the body of the rule are still implicitly existentially quantified.
While any conjunctive query can be written as a Datalog rule, not every Datalog program can be written as a conjunctive query. In fact, only single rules over extensional predicate symbols can be easily rewritten as an equivalent conjunctive query. The problem of deciding whether for a given Datalog program there is an equivalent nonrecursive program (corresponding to a positive relational algebra query, or, equivalently, a formula of positive existential first-order logic, or, as a special case, a conjunctive query) is known as the Datalog boundedness problem and is undecidable.
Extensions.
Extensions of conjunctive queries capturing more expressive power include:
The formal study of all of these extensions is justified by their application in relational databases and is in the realm of database theory.
Complexity.
For the study of the computational complexity of evaluating conjunctive queries, two problems have to be distinguished. The first is the problem of evaluating a conjunctive query on a relational database where both the query and the database are considered part of the input. The complexity of this problem is usually referred to as combined complexity, while the complexity of the problem of evaluating a query on a relational database, where the query is assumed fixed, is
called data complexity.
Conjunctive queries are NP-complete with respect to combined complexity, while the data complexity of conjunctive queries is very low, in the parallel complexity class AC0, which is contained in LOGSPACE and thus in polynomial time. The NP-hardness of conjunctive queries may appear surprising, since relational algebra and SQL strictly subsume the conjunctive queries and are thus at least as hard (in fact, relational algebra is PSPACE-complete with respect to combined complexity and is therefore even harder under widely held complexity-theoretic assumptions). However, in the usual application scenario, databases are large, while queries are very small, and the data complexity model may be appropriate for studying and describing their difficulty.
The problem of listing all answers to a non-Boolean conjunctive query has been studied in the context of enumeration algorithms, with a characterization (under some computational hardness assumptions) of the queries for which enumeration can be performed with linear time preprocessing and constant delay between each solution. Specifically, these are the acyclic conjunctive queries which also satisfy a "free-connex" condition.
Formal properties.
Conjunctive queries are one of the great success stories of database theory in that many interesting problems that are computationally hard or undecidable for larger classes of queries are feasible for conjunctive queries. For example, consider the query containment problem. We write formula_5 for two database relations formula_6 of the same schema if and only if each tuple occurring in formula_7 also occurs in formula_8. Given a query formula_9 and a relational database instance formula_10, we write the result relation of evaluating the query on the instance simply as formula_11. Given two queries formula_12 and formula_13 and a database schema, the query containment problem is the problem of deciding whether for all possible database instances formula_10 over the input database schema, formula_14. The main application of query containment is in query optimization: Deciding whether two queries are equivalent is possible by simply checking mutual containment.
The query containment problem is undecidable for relational algebra and SQL but is decidable and NP-complete for conjunctive queries. In fact, it turns out that the query containment problem for conjunctive queries is exactly the same problem as the query evaluation problem. Since queries tend to be small, NP-completeness here is usually considered acceptable. The query containment problem for conjunctive queries is also equivalent to the constraint satisfaction problem.
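A brute-force sketch of this check: Q1 is contained in Q2 exactly when there is a homomorphism from Q2 into Q1 that maps the head of Q2 onto the head of Q1 (equivalently, evaluating Q2 over the body of Q1 viewed as a canonical database). The representation below and the convention that lowercase strings are variables are illustrative; the exhaustive search is exponential, consistent with NP-completeness:

from itertools import product

def is_var(t):
    return isinstance(t, str) and t.islower()        # convention: lowercase strings are variables

def contained_in(q1, q2):
    # q = (head, body); head is a tuple of terms, body is a list of atoms (relation, (terms...))
    head1, body1 = q1
    head2, body2 = q2
    vars2 = sorted({t for _, args in body2 for t in args if is_var(t)} | {t for t in head2 if is_var(t)})
    terms1 = sorted({t for _, args in body1 for t in args} | set(head1))
    atoms1 = {(rel, tuple(args)) for rel, args in body1}
    for image in product(terms1, repeat=len(vars2)):  # try every mapping of q2's variables into q1's terms
        h = dict(zip(vars2, image))
        def m(t):
            return h.get(t, t)                        # constants are mapped to themselves
        if tuple(m(t) for t in head2) != tuple(head1):
            continue
        if all((rel, tuple(m(t) for t in args)) in atoms1 for rel, args in body2):
            return True                               # homomorphism found: q1 is contained in q2
    return False

q_sym  = (("x",), [("R", ("x", "y")), ("R", ("y", "x"))])   # x lies on a 2-cycle of R
q_edge = (("x",), [("R", ("x", "y"))])                      # x has an outgoing R-edge
print(contained_in(q_sym, q_edge))   # True
print(contained_in(q_edge, q_sym))   # False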
An important class of conjunctive queries that have polynomial-time combined complexity are the acyclic conjunctive queries. The query evaluation, and thus query containment, is LOGCFL-complete and thus in polynomial time. Acyclicity of conjunctive queries is a structural property of queries that is defined with respect to the query's hypergraph: a conjunctive query is acyclic if and only if it has hypertree-width 1. For the special case of conjunctive queries in which all relations used are binary, this notion corresponds to the treewidth of the dependency graph of the variables in the query (i.e., the graph having the variables of the query as nodes and an undirected edge formula_15 between two variables if and only if there is an atomic formula formula_16 or formula_17 in the query) and the conjunctive query is acyclic if and only if its dependency graph is acyclic.
An important generalization of acyclicity is the notion of bounded hypertree-width, which is a measure of how close to acyclic a hypergraph is, analogous to bounded treewidth in graphs. Conjunctive queries of bounded tree-width have LOGCFL combined complexity.
Unrestricted conjunctive queries over tree data (i.e., a relational database consisting of a binary child relation of a tree as well as unary relations for labeling the tree nodes) have polynomial time combined complexity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(x_1, \\ldots, x_k).\\exists x_{k+1}, \\ldots x_m. A_1 \\wedge \\ldots \\wedge A_r"
},
{
"math_id": 1,
"text": "x_1, \\ldots, x_k"
},
{
"math_id": 2,
"text": "x_{k+1}, \\ldots, x_m"
},
{
"math_id": 3,
"text": "A_1, \\ldots, A_r"
},
{
"math_id": 4,
"text": "x_1.\\exists x_2. R(x_2)"
},
{
"math_id": 5,
"text": "R \\subseteq S"
},
{
"math_id": 6,
"text": "R, S"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "Q"
},
{
"math_id": 10,
"text": "I"
},
{
"math_id": 11,
"text": "Q(I)"
},
{
"math_id": 12,
"text": "Q_1"
},
{
"math_id": 13,
"text": "Q_2"
},
{
"math_id": 14,
"text": "Q_1(I) \\subseteq Q_2(I)"
},
{
"math_id": 15,
"text": "\\{x,y\\}"
},
{
"math_id": 16,
"text": "R(x,y)"
},
{
"math_id": 17,
"text": "R(y,x)"
}
]
| https://en.wikipedia.org/wiki?curid=13477276 |
13478 | H | 8th letter of the Latin alphabet
H, or h, is the eighth letter of the Latin alphabet, used in the modern English alphabet, including the alphabets of other western European languages and others worldwide. Its name in English is "aitch" (pronounced , plural "aitches"), or regionally "haitch" , plural "haitches."
Name.
English.
For most English speakers, the name for the letter is pronounced as and spelled "aitch" or occasionally "eitch". The pronunciation and the associated spelling "haitch" are often considered to be h-adding and are considered non-standard in England. It is, however, a feature of Hiberno-English, and occurs sporadically in various other dialects.
The perceived name of the letter affects the choice of indefinite article before initialisms beginning with H: for example "an H-bomb" or "a H-bomb". The pronunciation may be a hypercorrection formed by analogy with the names of the other letters of the alphabet, most of which include the sound they represent.
The "haitch" pronunciation of "h" has spread in England, being used by approximately 24% of English people born since 1982, and polls continue to show this pronunciation becoming more common among younger native speakers. Despite this increasing number, the pronunciation without the sound is still considered standard in England, although the pronunciation with is also attested as a legitimate variant. In Northern Ireland, the pronunciation of the letter has been used as a shibboleth, with Catholics typically pronouncing it with the and Protestants pronouncing the letter without it.
Authorities disagree about the history of the letter's name. The "Oxford English Dictionary" says the original name of the letter was in Latin; this became in Vulgar Latin, passed into English via Old French , and by Middle English was pronounced . "The American Heritage Dictionary of the English Language" derives it from French "hache" from Latin "haca" or "hic". Anatoly Liberman suggests a conflation of two obsolete orderings of the alphabet, one with "H" immediately followed by "K" and the other without any "K": reciting the former's "..., H, K, L..." as when reinterpreted for the latter "..., H, L..." would imply a pronunciation of for "H".
History.
The original Semitic letter Heth most likely represented the voiceless pharyngeal fricative (). The form of the letter probably stood for a fence or posts.
The Greek Eta 'Η' in archaic Greek alphabets, before coming to represent a long vowel, , still represented a similar sound, the voiceless glottal fricative . In this context, the letter eta is also known as "Heta". Thus, in the Old Italic alphabets, the letter Heta of the Euboean alphabet was adopted with its original sound value .
While Etruscan and Latin had as a phoneme, almost all Romance languages lost the sound—Romanian later re-borrowed the phoneme from its neighbouring Slavic languages, and Spanish developed a secondary from , before losing it again; various Spanish dialects have developed as an allophone of or in most Spanish-speaking countries, and various dialects of Portuguese use it as an allophone of . 'H' is also used in many spelling systems in digraphs and trigraphs, such as 'ch', which represents in Spanish, Galician, and Old Portuguese; in French and modern Portuguese; in Italian and French.
Use in writing systems.
English.
In English, ⟨h⟩ occurs as a single-letter grapheme (being either silent or representing the voiceless glottal fricative and in various digraphs:
The letter is silent in a syllable rime, as in "ah", "ohm", "dahlia", "cheetah", and "pooh-poohed", as well as in certain other words (mostly of French origin) such as "hour", "honest", "herb" (in American but not British English) and "vehicle" (in certain varieties of English). Initial is often not pronounced in the weak form of some function words, including "had", "has", "have", "he", "her", "him", "his", and in some varieties of English (including most regional dialects of England and Wales), it is often omitted in all words. It was formerly common for "an" rather than "a" to be used as the indefinite article before a word beginning with in an unstressed syllable, as in "an historian", but the use of "a" is now more usual.
In English, the pronunciation of ⟨h⟩ as /h/ can be analyzed as a voiceless vowel. That is, when the phoneme /h/ precedes a vowel, /h/ may be realized as a voiceless version of the subsequent vowel. For example, the word ⟨hit⟩, /hɪt/ is realized as [ɪ̥ɪt].
H is the eighth most frequently used letter in the English language (after S, N, I, O, A, T, and E), with a frequency of about 4.2% in words.
Other languages.
In German, following a vowel, it often silently indicates that the vowel is long: In the word ('heighten'), the second ⟨h⟩ is mute for most speakers outside of Switzerland. In 1901, a spelling reform eliminated the silent ⟨h⟩ in nearly all instances of ⟨th⟩ in native German words such as "thun" ('to do') or "Thür" ('door'). It has been left unchanged in words derived from Greek, such as ('theater') and ('throne'), which continue to be spelled with ⟨th⟩ even after the last German spelling reform.
In Spanish and Portuguese, ⟨h⟩ is a silent letter with no pronunciation, as in ('son') and ('Hungarian'). The spelling reflects an earlier pronunciation of the sound . In words where the ⟨h⟩ is derived from a Latin , it is still sometimes pronounced with the value in some regions of Andalusia, Extremadura, Canarias, Cantabria, and the Americas. Some words beginning with or , such as and , were given an initial ⟨h⟩ to avoid confusion between their initial semivowels and the consonants ⟨j⟩ and ⟨v⟩. This is because ⟨j⟩ and ⟨v⟩ used to be considered variants of ⟨i⟩ and ⟨u⟩ respectively. ⟨h⟩ also appears in the digraph ⟨ch⟩, which represents in Spanish and northern Portugal, and in varieties that have merged both sounds (the latter originally represented by ⟨x⟩ instead), such as most of the Portuguese language and some Spanish dialects, prominently Chilean Spanish.
French orthography classifies words that begin with this letter in two ways, one of which can affect the pronunciation, even though it is a silent letter either way. The "H muet", or "mute" ⟨h⟩, is considered as though the letter were not there at all. For example, the singular definite article "le" or "la", which is elided to "l"' before a vowel, elides before an "H muet" followed by a vowel. For example, "le + hébergement" becomes "l'hébergement" ('the accommodation'). The other kind of ⟨h⟩ is called "h aspiré" ("aspirated '⟨h⟩'", though it is not normally aspirated phonetically), and does not allow elision or liaison. For example, in "le homard" ('the lobster') the article "le" remains unelided, and may be separated from the noun with a bit of a glottal stop. Most words that begin with an "H muet" come from Latin ("honneur", "homme") or from Greek through Latin ("hécatombe"), whereas most words beginning with an "H aspiré" come from Germanic ("harpe", "hareng") or non-Indo-European languages ("harem", "hamac", "haricot"); in some cases, an orthographic ⟨h⟩ was added to disambiguate the and semivowel pronunciations before the introduction of the distinction between the letters ⟨v⟩ and ⟨u⟩: "huit" (from "uit", ultimately from Latin "octo"), "huître" (from "uistre", ultimately from Greek through Latin "ostrea").
In Italian, ⟨h⟩ has no phonological value. Its most important uses are in the digraphs 'ch' and 'gh' , as well as to differentiate the spellings of certain short words that are homophones, for example, some present tense forms of the verb "avere" ('to have') (such as "hanno", 'they have', vs. "anno", 'year'), and in short interjections ("oh", "ehi").
Some languages, including Czech, Slovak, Hungarian, Finnish, and Estonian, use ⟨h⟩ as a breathy voiced glottal fricative , often as an allophone of otherwise voiceless in a voiced environment.
In Hungarian, the letter represents a phoneme with four allophones: before vowels, between two vowels, after front vowels, and word-finally after back vowels. It can also be a silent word-finally after back vowels. It is when geminated. In archaic spelling, the digraph ⟨ch⟩ represents (as in the name "Széchenyi") and (as in "pech", which is pronounced ); in certain environments it breaks palatalization of a consonant, as in the name "Beöthy", which is pronounced (without the intervening "h," the name "Beöty" could be pronounced ); and finally, it acts as a silent component of a digraph, as in the name "Vargha," pronounced .
In Ukrainian and Belarusian, when written in the Latin alphabet, ⟨h⟩ is also commonly used for , which is otherwise written with the Cyrillic letter ⟨г⟩.
In Irish, ⟨h⟩ is not considered an independent letter, except for a very few non-native words; however, ⟨h⟩ placed after a consonant is known as a "séimhiú" and indicates the lenition of that consonant; ⟨h⟩ began to replace the original form of a séimhiú, a dot placed above the consonant, after the introduction of typewriters.
In most dialects of Polish, both ⟨h⟩ and the digraph ⟨ch⟩ always represent .
In Basque, during the 20th century, it was not used in the orthography of the Basque dialects in Spain but it marked an aspiration in the North-Eastern dialects. During the standardization of Basque in the 1970s, a compromise was reached that "h" would be accepted if it were the first consonant in a syllable. Hence, "herri" ("people") and "etorri" ("to come") were accepted instead of "erri" (Biscayan) and "ethorri" (Souletin).
Other systems.
As a phonetic symbol in the International Phonetic Alphabet (IPA), it is used mainly for the so-called aspirations (fricative or trills), and variations of the plain letter are used to represent two sounds: the lowercase form represents the voiceless glottal fricative, and the small capital form represents the voiceless epiglottal fricative (or trill). With a bar, minuscule is used for a voiceless pharyngeal fricative. Specific to the IPA, a hooked is used for a voiced glottal fricative, and a superscript is used to represent aspiration.
Other representations.
Computing.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{H}"
}
]
| https://en.wikipedia.org/wiki?curid=13478 |
13478284 | Differentiation of integrals | Problem in mathematics
In mathematics, the problem of differentiation of integrals is that of determining under what circumstances the mean value integral of a suitable function on a small neighbourhood of a point approximates the value of the function at that point. More formally, given a space "X" with a measure "μ" and a metric "d", one asks for what functions "f" : "X" → R does
formula_0
for all (or at least "μ"-almost all) "x" ∈ "X"? (Here, as in the rest of the article, "B""r"("x") denotes the open ball in "X" with "d"-radius "r" and centre "x".) This is a natural question to ask, especially in view of the heuristic construction of the Riemann integral, in which it is almost implicit that "f"("x") is a "good representative" for the values of "f" near "x".
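For one-dimensional Lebesgue measure, the quantity in question can be made concrete with a short numerical sketch (the function and the point below are arbitrary illustrative choices): the average of "f" over the interval "B""r"("x") is computed for shrinking "r" and compared with "f"("x").

```python
import numpy as np

def ball_average(f, x, r, n=10_000):
    """Average of f over B_r(x) = (x - r, x + r) with respect to 1-D Lebesgue
    measure, approximated from n equally spaced sample points."""
    ys = np.linspace(x - r, x + r, n)
    return float(np.mean(f(ys)))

f = lambda y: np.sin(y) + y**2      # any continuous function serves here
x = 0.7

for r in (1.0, 0.1, 0.01, 0.001):
    print(f"r = {r:7.3f}   ball average = {ball_average(f, x, r):.6f}   f(x) = {f(x):.6f}")
# As r shrinks, the ball averages approach f(x), as the heuristic suggests for
# continuous functions.
```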
Theorems on the differentiation of integrals.
Lebesgue measure.
One result on the differentiation of integrals is the Lebesgue differentiation theorem, as proved by Henri Lebesgue in 1910. Consider "n"-dimensional Lebesgue measure "λ""n" on "n"-dimensional Euclidean space R"n". Then, for any locally integrable function "f" : R"n" → R, one has
formula_1
for "λ""n"-almost all points "x" ∈ R"n". It is important to note, however, that the measure zero set of "bad" points depends on the function "f".
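The dependence of the exceptional set on "f" can be seen in a small numerical experiment (purely illustrative, with an arbitrarily chosen indicator function): the ball averages converge to "f"("x") at every point except the two boundary points of the interval, where they approach 1/2 instead.

```python
import numpy as np

f = lambda y: (np.abs(y) <= 1.0).astype(float)    # indicator function of [-1, 1]

def ball_average(x, r, n=100_001):
    ys = np.linspace(x - r, x + r, n)
    return float(f(ys).mean())

for x in (0.0, 0.5, 2.0, 1.0):                    # x = 1.0 lies on the boundary
    averages = [ball_average(x, r) for r in (0.1, 0.01, 0.001)]
    print(x, ["%.4f" % a for a in averages])
# Interior points give averages near 1, exterior points near 0, but the boundary
# point x = 1 gives averages near 0.5: the measure-zero "bad" set {-1, 1} is
# specific to this choice of f.
```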
Borel measures on R"n".
The result for Lebesgue measure turns out to be a special case of the following result, which is based on the Besicovitch covering theorem: if "μ" is any locally finite Borel measure on R"n" and "f" : R"n" → R is locally integrable with respect to "μ", then
formula_2
for "μ"-almost all points "x" ∈ R"n".
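As a purely illustrative sketch of the same statement for a measure other than Lebesgue measure, one can take "μ" to have a smooth density with respect to Lebesgue measure (the density w(y) = exp(−y²) and the function below are arbitrary choices); the "μ"-averages over shrinking balls again approach "f"("x").

```python
import numpy as np

w = lambda y: np.exp(-y**2)     # density of the Borel measure μ w.r.t. Lebesgue measure
f = lambda y: np.cos(y)

def mu_ball_average(x, r, n=20_000):
    """(1/μ(B_r(x))) ∫_{B_r(x)} f dμ with dμ = w dλ, approximated on a uniform grid."""
    ys = np.linspace(x - r, x + r, n)
    return float(np.sum(f(ys) * w(ys)) / np.sum(w(ys)))

x = 1.2
for r in (1.0, 0.1, 0.01):
    print(f"r = {r:5.2f}   mu-average = {mu_ball_average(x, r):.6f}   f(x) = {np.cos(x):.6f}")
```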
Gaussian measures.
The problem of the differentiation of integrals is much harder in an infinite-dimensional setting. Consider a separable Hilbert space ("H", ⟨ , ⟩) equipped with a Gaussian measure "γ". As stated in the article on the Vitali covering theorem, the Vitali covering theorem fails for Gaussian measures on infinite-dimensional Hilbert spaces. Two results of David Preiss (1981 and 1983) show the kind of difficulties that one can expect to encounter in this setting: there is a Gaussian measure "γ" on a separable Hilbert space "H" and a Borel set "M" ⊆ "H", not of full "γ"-measure, such that, for "γ"-almost all "x" ∈ "H",
formula_3
Moreover, there is a Gaussian measure "γ" on a separable Hilbert space "H" and a function "f" ∈ "L"1("H", "γ"; R) such that
formula_4
However, there is some hope if one has good control over the covariance of "γ". Let the covariance operator of "γ" be "S" : "H" → "H" given by
formula_5
or, for some countable orthonormal basis ("e""i")"i"∈N of "H",
formula_6
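In a finite-dimensional truncation, the defining identity of the covariance operator can be checked by Monte Carlo sampling; the sketch below uses arbitrarily chosen eigenvalues "σ""i"2 and is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = np.array([1.0, 0.5, 0.25, 0.125])   # illustrative eigenvalues σ_i² of S
d = len(sigma2)

# Draw samples z ~ γ = N(0, S), with S diagonal in the chosen orthonormal basis.
z = rng.normal(scale=np.sqrt(sigma2), size=(200_000, d))

x = rng.normal(size=d)
y = rng.normal(size=d)

lhs = np.mean((z @ x) * (z @ y))    # Monte Carlo estimate of ∫ <x,z><y,z> dγ(z)
rhs = np.dot(sigma2 * x, y)         # <Sx, y>, using Sx = Σ_i σ_i² <x,e_i> e_i
print(lhs, rhs)                     # the two numbers agree up to sampling error
```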
In 1981, Preiss and Jaroslav Tišer showed that if there exists a constant 0 < "q" < 1 such that
formula_7
then, for all "f" ∈ "L"1("H", "γ"; R),
formula_8
where the convergence is convergence in measure with respect to "γ". In 1988, Tišer showed that if
formula_9
for some "α" > 5 ⁄ 2, then
formula_10
for "γ"-almost all "x" and all "f" ∈ "L""p"("H", "γ"; R), "p" > 1.
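Both sufficient conditions are elementary decay requirements on the eigenvalue sequence and can be verified directly for a candidate sequence; the sketch below uses an arbitrarily chosen, rapidly decaying sequence purely for illustration.

```python
import math

def satisfies_1981(sigma2, q):
    """Check σ_{i+1}² ≤ q·σ_i² for all i, for a fixed constant 0 < q < 1."""
    return 0 < q < 1 and all(b <= q * a for a, b in zip(sigma2, sigma2[1:]))

def satisfies_1988(sigma2, alpha):
    """Check σ_{i+1}² ≤ σ_i² / i^α for all i ≥ 1 (the list starts at i = 1)."""
    return all(sigma2[i] <= sigma2[i - 1] / i**alpha
               for i in range(1, len(sigma2)))

# Illustrative sequence: σ_i² = (i!)^(-3) decays fast enough for both conditions.
sigma2 = [1.0 / math.factorial(i)**3 for i in range(1, 30)]
print(satisfies_1981(sigma2, q=0.5))        # True
print(satisfies_1988(sigma2, alpha=2.6))    # True, with α = 2.6 > 5/2
```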
As of 2007, it is still an open question whether there exists an infinite-dimensional Gaussian measure "γ" on a separable Hilbert space "H" so that, for all "f" ∈ "L"1("H", "γ"; R),
formula_11
for "γ"-almost all "x" ∈ "H". However, it is conjectured that no such measure exists, since the "σ""i" would have to decay very rapidly.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\lim_{r \\to 0} \\frac1{\\mu \\big( B_{r} (x) \\big)} \\int_{B_{r} (x)} f(y) \\, \\mathrm{d} \\mu(y) = f(x)"
},
{
"math_id": 1,
"text": "\\lim_{r \\to 0} \\frac1{\\lambda^{n} \\big( B_{r} (x) \\big)} \\int_{B_{r} (x)} f(y) \\, \\mathrm{d} \\lambda^{n} (y) = f(x)"
},
{
"math_id": 2,
"text": "\\lim_{r \\to 0} \\frac1{\\mu \\big( B_{r} (x) \\big)} \\int_{B_{r} (x)} f(y) \\, \\mathrm{d} \\mu (y) = f(x)"
},
{
"math_id": 3,
"text": "\\lim_{r \\to 0} \\frac{\\gamma \\big( M \\cap B_{r} (x) \\big)}{\\gamma \\big( B_{r} (x) \\big)} = 1."
},
{
"math_id": 4,
"text": "\\lim_{r \\to 0} \\inf \\left\\{ \\left. \\frac1{\\gamma \\big( B_{s} (x) \\big)} \\int_{B_{s} (x)} f(y) \\, \\mathrm{d} \\gamma(y) \\right| x \\in H, 0 < s < r \\right\\} = + \\infty."
},
{
"math_id": 5,
"text": "\\langle Sx, y \\rangle = \\int_{H} \\langle x, z \\rangle \\langle y, z \\rangle \\, \\mathrm{d} \\gamma(z),"
},
{
"math_id": 6,
"text": "Sx = \\sum_{i \\in \\mathbf{N}} \\sigma_{i}^{2} \\langle x, e_{i} \\rangle e_{i}."
},
{
"math_id": 7,
"text": "\\sigma_{i + 1}^{2} \\leq q \\sigma_{i}^{2},"
},
{
"math_id": 8,
"text": "\\frac1{\\gamma \\big( B_{r} (x) \\big)} \\int_{B_{r} (x)} f(y) \\, \\mathrm{d} \\gamma(y) \\xrightarrow[r \\to 0]{\\gamma} f(x),"
},
{
"math_id": 9,
"text": "\\sigma_{i + 1}^{2} \\leq \\frac{\\sigma_{i}^{2}}{i^{\\alpha}}"
},
{
"math_id": 10,
"text": "\\frac1{\\gamma \\big( B_{r} (x) \\big)} \\int_{B_{r} (x)} f(y) \\, \\mathrm{d} \\gamma(y) \\xrightarrow[r \\to 0]{} f(x),"
},
{
"math_id": 11,
"text": "\\lim_{r \\to 0} \\frac{1}{\\gamma \\big( B_{r} (x) \\big)} \\int_{B_{r} (x)} f(y) \\, \\mathrm{d} \\gamma(y) = f(x)"
}
]
| https://en.wikipedia.org/wiki?curid=13478284 |
13478480 | Biological exponential growth | Biological exponential growth is the unrestricted growth of a population of organisms, occurring when resources in its habitat are unlimited. Most commonly apparent in species that reproduce quickly and asexually, like bacteria, exponential growth is intuitive from the fact that each organism can divide and produce two copies of itself. Each descendent bacterium can itself divide, again doubling the population size. The bacterium Escherichia coli, under optimal conditions, may divide as often as twice per hour. Left unrestricted, a colony would cover the Earth's surface in less than a day.
If, in a hypothetical population of size "N", the birth rates (per capita) are represented as "b" and death rates (per capita) as "d", then the increase or decrease in "N" during a time period "t" will be
formula_0
The quantity (b-d), often denoted "r", is called the 'intrinsic rate of natural increase' and is an important parameter for assessing the impact of any biotic or abiotic factor on population growth.
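For illustration (the rates and the initial population below are hypothetical values), the equation dN/dt = (b − d)N has the closed-form solution N(t) = N(0)·exp((b − d)t), which the short script compares with a simple numerical integration.

```python
import math

b, d = 0.9, 0.3     # hypothetical per-capita birth and death rates
r = b - d           # intrinsic rate of natural increase
N0 = 100.0          # initial population size

def euler(N0, r, t_end, dt=0.001):
    """Integrate dN/dt = r*N with the forward Euler method."""
    N, t = N0, 0.0
    while t < t_end:
        N += r * N * dt
        t += dt
    return N

for t in (1, 5, 10):
    exact = N0 * math.exp(r * t)            # closed-form solution N(t) = N0*e^(r*t)
    print(f"t = {t:2d}   exact = {exact:12.2f}   Euler = {euler(N0, r, t):12.2f}")
```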
Resource availability is essential for the unimpeded growth of a population. Ideally, when resources in the habitat are unlimited, each species can fully realize its innate potential to grow in number, as Charles Darwin observed while developing his theory of natural selection. Any species growing exponentially under unlimited resource conditions can reach enormous population densities in a short time. Darwin showed how even a slow-growing animal like the elephant could theoretically reach an enormous population if there were unlimited resources for its growth in its habitat. This is unrealistic in almost all situations (with exceptions, such as a laboratory); there is simply a finite quantity of everything necessary for life, and individuals in a population will compete with their own or other species for these finite resources. As the population approaches its carrying capacity, the rate of growth decreases, and the population trend will become logistic.
Once the carrying capacity, or K, is incorporated to account for the finite resources that a population will be competing for within an environment, the aforementioned equation becomes the following:
formula_1
A graph of this equation produces an S-shaped curve: initial population growth is nearly exponential because resources are abundant and competition is weak. As resources become more limited, the growth rate slows, and the population size levels off once it reaches the carrying capacity of the environment. This logistic model is generally a more accurate description of real-life population growth than unrestricted exponential growth.
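A minimal simulation of the logistic equation (with hypothetical parameter values chosen only for illustration) reproduces this S-shaped behaviour: growth is near-exponential at first and then levels off as "N" approaches the carrying capacity "K".

```python
N, K, r_max = 10.0, 1000.0, 0.5     # hypothetical initial size, carrying capacity, max rate
dt, steps = 0.1, 200

trajectory = []
for _ in range(steps):
    trajectory.append(N)
    N += r_max * N * (K - N) / K * dt   # Euler step for dN/dt = r_max*N*(K - N)/K

# Growth is near-exponential at first and levels off as N approaches K = 1000.
print(round(trajectory[0]), round(trajectory[50]), round(trajectory[100]), round(trajectory[-1]))
```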
References.
<templatestyles src="Reflist/styles.css" />
Sources.
John A. Miller and Stephen B. Harley zoology 4th edition | [
{
"math_id": 0,
"text": "\\frac{dN}{dt}=(b-d)N"
},
{
"math_id": 1,
"text": "\\frac{dN}{dt}=r_{max}N\\frac{K-N}{K}"
}
]
| https://en.wikipedia.org/wiki?curid=13478480 |