id stringlengths 2–8 | title stringlengths 1–130 | text stringlengths 0–252k | formulas listlengths 1–823 | url stringlengths 38–44 |
---|---|---|---|---|
574759
|
Feed forward (control)
|
Control paradigm in which disturbances are measured and compensated before they can affect a system
A feed forward (sometimes written feedforward) is an element or pathway within a control system that passes a controlling signal from a source in its external environment to a load elsewhere in its external environment. This is often a command signal from an external operator.
In control engineering, a feedforward control system is a control system that uses sensors to detect disturbances affecting the system and then applies an additional input to minimize the effect of the disturbance. This requires a mathematical model of the system so that the effect of disturbances can be properly predicted.
A control system which has only feed-forward behavior responds to its control signal in a pre-defined way without responding to the way the system reacts; it is in contrast with a system that also has feedback, which adjusts the input to take account of how it affects the system, and how the system itself may vary unpredictably.
In a feed-forward system, the control variable adjustment is not error-based. Instead it is based on knowledge about the process in the form of a mathematical model of the process and knowledge about, or measurements of, the process disturbances.
Some prerequisites are needed for a control scheme to be reliable by pure feed-forward without feedback: the external command or controlling signal must be available, and the effect of the output of the system on the load should be known (which usually means that the load must be predictably unchanging with time). Sometimes pure feed-forward control without feedback is called 'ballistic', because once a control signal has been sent, it cannot be further adjusted; any corrective adjustment must be by way of a new control signal. In contrast, 'cruise control' adjusts the output in response to the load that it encounters, by a feedback mechanism.
These systems could relate to control theory, physiology, or computing.
Overview.
With feed-forward or feedforward control, the disturbances are measured and accounted for before they have time to affect the system. For example, in a house-heating system, a feed-forward controller may measure the fact that the door is opened and automatically turn on the heater before the house can get too cold. The difficulty with feed-forward control is that the effects of the disturbances on the system must be accurately predicted, and there must not be any unmeasured disturbances. For instance, if a window were opened that was not being measured, the feed-forward-controlled thermostat might still let the house cool down.
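A minimal sketch of this heater example is given below. The controller reacts only to the measured disturbance (door open, outside temperature) through an assumed heat-loss model; it never looks at the resulting room temperature. All function names and numbers are hypothetical illustrations, not part of the article.

```python
# Minimal sketch of feed-forward heating control (illustrative numbers only).
def feedforward_heater_power(door_open: bool, outside_temp: float, setpoint: float) -> float:
    """Return heater power in watts based only on measured disturbances and a model."""
    base_loss_w_per_deg = 50.0   # assumed model: steady-state loss through the walls
    door_loss_w_per_deg = 200.0  # assumed model: extra loss while the door is open
    delta = setpoint - outside_temp
    power = base_loss_w_per_deg * delta
    if door_open:
        power += door_loss_w_per_deg * delta  # compensate before the room cools
    return max(power, 0.0)

# An unmeasured disturbance (e.g. an open window) is invisible to this controller.
print(feedforward_heater_power(door_open=True, outside_temp=0.0, setpoint=20.0))
```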
The term has specific meaning within the field of CPU-based automatic control. The discipline of feedforward control as it relates to modern, CPU based automatic controls is widely discussed, but is seldom practiced due to the difficulty and expense of developing or providing for the mathematical model required to facilitate this type of control. Open-loop control and feedback control, often based on canned PID control algorithms, are much more widely used.
There are three types of control systems: open loop, feed-forward, and feedback. An example of a pure open loop control system is manual non-power-assisted steering of a motor car; the steering system does not have access to an auxiliary power source and does not respond to varying resistance to turning of the steered wheels; the driver must make that response without help from the steering system. In comparison, power steering has access to a controlled auxiliary power source, which depends on the engine speed. When the steering wheel is turned, a valve is opened which allows fluid under pressure to turn the steered wheels. A sensor monitors that pressure so that the valve only opens enough to cause the correct pressure to reach the wheel-turning mechanism. This is feed-forward control, where the output of the system, the change in direction of travel of the vehicle, plays no part in the system. See Model predictive control.
If the driver is included in the system, then they do provide a feedback path by observing the direction of travel and compensating for errors by turning the steering wheel. In that case the overall arrangement is a feedback system, with the power-steering mechanism itself acting as a feed-forward subsystem within it.
In other words, systems of different types can be nested, and the overall system regarded as a black-box.
Feedforward control is distinctly different from open loop control and teleoperator systems. Feedforward control requires a mathematical model of the plant (process and/or machine being controlled) and the plant's relationship to any inputs or feedback the system might receive. Neither open loop control nor teleoperator systems require the sophistication of a mathematical model of the physical system or plant being controlled. Control based on operator input without integral processing and interpretation through a mathematical model of the system is a teleoperator system and is not considered feedforward control.
History.
Historically, the use of the term "feedforward" is found in works by Harold S. Black in US patent 1686792 (invented 17 March 1923) and D. M. MacKay as early as 1956. While MacKay's work is in the field of biological control theory, he speaks only of feedforward systems. MacKay does not mention "feedforward control" or allude to the discipline of "feedforward controls". MacKay and other early writers who use the term "feedforward" are generally writing about theories of how human or animal brains work. Black also has US patent 2102671 invented 2 August 1927 on the technique of feedback applied to electronic systems.
The discipline of "feedforward controls" was largely developed by professors and graduate students at Georgia Tech, MIT, Stanford and Carnegie Mellon. Feedforward is not typically hyphenated in scholarly publications. Meckl and Seering of MIT and Book and Dickerson of Georgia Tech began the development of the concepts of Feedforward Control in the mid-1970s. The discipline of Feedforward Controls was well defined in many scholarly papers, articles and books by the late 1980s.
Benefits.
The benefits of feedforward control are significant and can often justify the extra cost, time and effort required to implement the technology. Control accuracy can often be improved by as much as an order of magnitude if the mathematical model is of sufficient quality and implementation of the feedforward control law is well thought out. Energy consumption by the feedforward control system and its driver is typically substantially lower than with other controls. Stability is enhanced such that the controlled device can be built of lower cost, lighter weight, springier materials while still being highly accurate and able to operate at high speeds. Other benefits of feedforward control include reduced wear and tear on equipment, lower maintenance costs, higher reliability and a substantial reduction in hysteresis. Feedforward control is often combined with feedback control to optimize performance.
Model.
The mathematical model of the plant (machine, process or organism) used by the feedforward control system may be created and input by a control engineer or it may be learned by the control system. Control systems capable of learning and/or adapting their mathematical model have become more practical as microprocessor speeds have increased. The discipline of modern feedforward control was itself made possible by the invention of microprocessors.
Feedforward control requires integration of the mathematical model into the control algorithm such that it is used to determine the control actions based on what is known about the state of the system being controlled. In the case of control for a lightweight, flexible robotic arm, this could be as simple as compensating for the difference between when the robot arm is carrying a payload and when it is not. The target joint angles are adjusted to place the payload in the desired position, based on knowing the deflections in the arm from the mathematical model's interpretation of the disturbance caused by the payload. Systems that plan actions and then pass the plan to a different system for execution do not satisfy the above definition of feedforward control. Unless the system includes a means to detect a disturbance or receive an input and process that input through the mathematical model to determine the required modification to the control action, it is not true feedforward control.
Open system.
In systems theory, an open system is a feed forward system that does not have any feedback loop to control its output. In contrast, a closed system uses a feedback loop to control the operation of the system. In an open system, the output of the system is not fed back into the input to the system for control or operation.
Applications.
Physiological feed-forward system.
In physiology, feed-forward control is exemplified by the normal anticipatory regulation of heartbeat in advance of actual physical exertion by the central autonomic network. Feed-forward control can be likened to learned anticipatory responses to known cues (predictive coding). Feedback regulation of the heartbeat provides further adaptiveness to the running eventualities of physical exertion. Feedforward systems are also found in the biological control of other variables by many regions of animal brains.
Even in the case of biological feedforward systems, such as in the human brain, knowledge or a mental model of the plant (body) can be considered to be mathematical as the model is characterized by limits, rhythms, mechanics and patterns.
A pure feed-forward system is different from a homeostatic control system, which has the function of keeping the body's internal environment 'steady' or in a 'prolonged steady state of readiness.' A homeostatic control system relies mainly on feedback (especially negative), in addition to the feedforward elements of the system.
Gene regulation and feed-forward.
Feed-forward loops (FFLs), three-node graphs in which A affects both B and C and B affects C, are frequently observed in transcription networks of several organisms, including "E. coli" and "S. cerevisiae", suggesting that they perform functions that are important for these organisms. In "E. coli" and "S. cerevisiae", whose transcription networks have been extensively studied, FFLs occur approximately three times more frequently than expected based on random (Erdös-Rényi) networks.
Edges in transcription networks are directed and signed, as they represent activation (+) or repression (-). The sign of a path in a transcription network can be obtained by multiplying the signs of the edges in the path, so a path with an odd number of negative signs is negative. There are eight possible three-node FFLs as each of the three arrows can be either repression or activation, which can be classified into coherent or incoherent FFLs. Coherent FFLs have the same sign for both the paths from A to C, and incoherent FFLs have different signs for the two paths.
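The eight FFL types and their coherent/incoherent classification can be enumerated directly from the sign rule described above. The short sketch below is an illustration only; it is not code from any cited study.

```python
from itertools import product

# Classify the eight possible three-node feed-forward loops (edges A->B, B->C, A->C)
# as coherent or incoherent: the loop is coherent when the sign of the indirect path
# A->B->C (the product of its two edge signs) equals the sign of the direct edge A->C.
for s_ab, s_bc, s_ac in product((+1, -1), repeat=3):
    kind = "coherent" if s_ab * s_bc == s_ac else "incoherent"
    print(f"A->B:{s_ab:+d}  B->C:{s_bc:+d}  A->C:{s_ac:+d}  ->  {kind}")
```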
The temporal dynamics of FFLs show that coherent FFLs can act as sign-sensitive delays that filter input into the circuit. Consider the differential equations for a Type-1 coherent FFL, where all the arrows are positive:
formula_0
formula_1
Where formula_2 and formula_3 are increasing functions in formula_4 and formula_5 representing production, and formula_6 and formula_7 are rate constants representing degradation or dilution of formula_5 and formula_8 respectively. formula_9 can represent an AND gate, where formula_10 if either formula_11 or formula_12, for instance if formula_13 where formula_14 and formula_15 are step functions. In this case the FFL creates a time delay between a sustained on-signal, i.e. an increase in formula_4, and the increase in the output formula_8. This is because formula_4 must first induce production of formula_5, which is then needed to induce production of formula_8. However, there is no time delay for an off-signal, because a reduction of formula_4 immediately results in a decrease in the production term formula_9. This system therefore filters out fluctuations in the on-signal and detects persistent signals. This is particularly relevant in settings with stochastically fluctuating signals. In bacteria these circuits create time delays ranging from a few minutes to a few hours.
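The sign-sensitive delay can be seen in a rough numerical integration of these equations. In the sketch below the rate constants, thresholds and input profile are arbitrary illustrative choices, not values from the article.

```python
import numpy as np

# Euler integration of a Type-1 coherent FFL with an AND gate at C (illustrative parameters).
beta_B, gamma_B = 1.0, 1.0
beta_C, gamma_C = 1.0, 1.0
K_AB, K_AC, K_BC = 0.5, 0.5, 0.5
step = lambda x: 1.0 if x > 0 else 0.0   # Heaviside step function

dt = 0.01
B = C = 0.0
t_on, t_C_rise = 2.0, None
for t in np.arange(0.0, 20.0, dt):
    A = 1.0 if t_on < t < 12.0 else 0.0                        # sustained ON pulse of the input A
    dB = beta_B * step(A - K_AB) - gamma_B * B                 # dB/dt = beta_B(A) - gamma_B * B
    dC = beta_C * step(A - K_AC) * step(B - K_BC) - gamma_C * C
    B, C = B + dB * dt, C + dC * dt
    if t_C_rise is None and C > 0.05:
        t_C_rise = t

# C only starts rising after B has crossed its threshold (delay after the ON step),
# while production of C stops immediately when A switches off (no delay after the OFF step).
print(f"A switches on at t = {t_on}; C starts rising at t ~ {t_C_rise:.2f}")
```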
Similarly, an inclusive-OR gate in which formula_8 is activated by either formula_4 or formula_5 is a sign-sensitive delay with no delay after the ON step but with a delay after the OFF step. This is because an ON pulse immediately activates B and C, but an OFF step does not immediately result in deactivation of C because B can still be active. This can protect the system from fluctuations that result in the transient loss of the ON signal and can also provide a form of memory. Kalir, Mangan, and Alon, 2005 show that the regulatory system for flagella in "E. coli" is regulated with a Type 1 coherent feedforward loop.
For instance, the regulation of the shift from one carbon source to another in diauxic growth in "E. coli" can be controlled via a type-1 coherent FFL. In diauxic growth, cells grow using two carbon sources by first rapidly consuming the preferred carbon source and then slowing growth in a lag phase before consuming the second, less preferred carbon source. In "E. coli", glucose is preferred over both arabinose and lactose. The absence of glucose is signalled by the small molecule cAMP. Diauxic growth in glucose and lactose is regulated by a simple regulatory system involving cAMP and the lac operon. However, growth in arabinose is regulated by a feedforward loop with an AND gate, which confers an approximately 20 minute time delay between the ON-step, in which cAMP concentration increases as glucose is consumed, and the expression of arabinose transporters. There is no time delay with the OFF signal, which occurs when glucose is present. This prevents the cell from shifting to growth on arabinose based on short term fluctuations in glucose availability.
Additionally, feedforward loops can facilitate cellular memory. Doncic and Skotheim (2003) show this effect in the regulation of mating in yeast, where extracellular mating pheromone induces mating behavior, including preventing cells from entering the cell cycle. The mating pheromone activates the MAPK pathway, which then activates the cell-cycle inhibitor Far1 and the transcription factor Ste12, which in turn increases the synthesis of inactive Far1. In this system, the concentration of active Far1 depends on the time integral of a function of the external mating pheromone concentration. This dependence on past levels of mating pheromone is a form of cellular memory. This system simultaneously allows for both stability and reversibility.
Incoherent feedforward loops, in which the two paths from the input to the output node have different signs, result in short pulses in response to an ON signal. In this system, input A simultaneously increases synthesis of the output node C directly and decreases it indirectly. If the indirect path to C (via B) is slower than the direct path, a pulse of output is produced in the time period before levels of B are high enough to inhibit synthesis of C. The response to epidermal growth factor (EGF) in dividing mammalian cells is an example of a Type-1 incoherent FFL.
The frequent observation of feed-forward loops in various biological contexts across multiple scales suggests that they have structural properties that are highly adaptive in many contexts. Several theoretical and experimental studies including those discussed here show that FFLs create a mechanism for biological systems to process and store information, which is important for predictive behavior and survival in complex dynamically changing environments.
Feed-forward systems in computing.
In computing, feed-forward normally refers to a perceptron network in which the outputs from all neurons go to following but not preceding layers, so there are no feedback loops. The connections are set up during a training phase, during which the system effectively operates as a feedback system.
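A tiny sketch of such a feed-forward pass is shown below; layer sizes and weights are arbitrary and purely illustrative.

```python
import numpy as np

# Feed-forward pass through a two-layer perceptron network:
# activations flow only from each layer to the next, with no feedback connections.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)  # 4 inputs -> 3 hidden units
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)  # 3 hidden units -> 2 outputs

def forward(x):
    h = np.tanh(W1 @ x + b1)   # hidden activations
    return W2 @ h + b2         # outputs; nothing is fed back to earlier layers

print(forward(np.array([0.5, -1.0, 2.0, 0.0])))
```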
Long distance telephony.
In the early 1970s, intercity coaxial transmission systems, including L-carrier, used feed-forward amplifiers to diminish linear distortion. This more complex method allowed wider bandwidth than earlier feedback systems. Optical fiber, however, made such systems obsolete before many were built.
Automation and machine control.
Feedforward control is a discipline within the field of automatic controls used in automation.
Parallel feed-forward compensation with derivative (PFCD).
The method is a relatively new technique that changes the phase of the open-loop transfer function of a non-minimum-phase system into minimum phase.
|
[
{
"math_id": 0,
"text": "\\frac{\\delta B}{\\delta t} = \\beta_B (A) - \\gamma_{B}B"
},
{
"math_id": 1,
"text": "\\frac{\\delta C}{\\delta t} = \\beta_C (A, B) - \\gamma_{C}C"
},
{
"math_id": 2,
"text": "\\beta_y"
},
{
"math_id": 3,
"text": "\\beta_z"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "B"
},
{
"math_id": 6,
"text": "\\gamma_Y"
},
{
"math_id": 7,
"text": "\\gamma_z"
},
{
"math_id": 8,
"text": "C"
},
{
"math_id": 9,
"text": "\\beta_C (A,B)"
},
{
"math_id": 10,
"text": "\\beta_C (A,B)=0"
},
{
"math_id": 11,
"text": "A = 0"
},
{
"math_id": 12,
"text": "B = 0"
},
{
"math_id": 13,
"text": " \\beta_C (A,B)=\\beta_C \\theta_A (A>k_{AC} ) \\theta_A (B>k_{ABC} )"
},
{
"math_id": 14,
"text": "\\theta_A"
},
{
"math_id": 15,
"text": "\\theta_B"
}
] |
https://en.wikipedia.org/wiki?curid=574759
|
57477295
|
Design optimization
|
Design optimization is an engineering design methodology using a mathematical formulation of a design problem to support selection of the optimal design among many alternatives. Design optimization involves the following stages:
Design optimization problem.
The formal mathematical (standard form) statement of the design optimization problem is
formula_0
where
formula_1 is the vector of design variables formula_2;
formula_3 is the objective function;
formula_4 are the equality constraint functions, formula_5 in number;
formula_6 are the inequality constraint functions, formula_7 in number;
formula_8 is a set constraint on the design variables.
The problem formulation stated above is a convention called the "negative null form", since all constraint functions are expressed as equalities and negative inequalities with zero on the right-hand side. This convention is used so that numerical algorithms developed to solve design optimization problems can assume a standard expression of the mathematical problem.
We can introduce the vector-valued functions
formula_9
to rewrite the above statement in the compact expression
formula_10
We call formula_11 the "set" or "system of" ("functional") "constraints" and formula_8 the "set constraint".
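A minimal numerical sketch of solving a problem stated in the negative null form is given below. The particular objective, constraints and numbers are made up for illustration; note that SciPy's convention for inequality constraints is fun(x) >= 0, so the negative null form constraint g(x) <= 0 is passed as -g(x) >= 0.

```python
from scipy.optimize import minimize

# Negative null form: minimize f(x) subject to h(x) = 0 and g(x) <= 0 (illustrative problem).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2   # objective
h = lambda x: x[0] + x[1] - 2.0                        # equality constraint, h(x) = 0
g = lambda x: x[0] ** 2 - x[1]                         # inequality constraint, g(x) <= 0

constraints = [
    {"type": "eq", "fun": h},
    {"type": "ineq", "fun": lambda x: -g(x)},          # SciPy expects fun(x) >= 0
]
result = minimize(f, x0=[0.0, 0.0], constraints=constraints)
print(result.x, result.fun)                            # optimum near x = (0.5, 1.5)
```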
Application.
Design optimization applies the methods of mathematical optimization to design problem formulations, and it is sometimes used interchangeably with the term engineering optimization. When the objective function "f" is a vector rather than a scalar, the problem becomes a multi-objective optimization one. If the design optimization problem has more than one mathematical solution, the methods of global optimization are used to identify the global optimum.
Optimization checklist.
A detailed and rigorous description of the stages and practical applications with examples can be found in the book Principles of Optimal Design.
Practical design optimization problems are typically solved numerically, and many optimization software tools exist in academic and commercial forms. There are several domain-specific applications of design optimization that pose their own specific challenges in formulating and solving the resulting problems; these include shape optimization, wing-shape optimization, topology optimization, architectural design optimization, and power optimization. Several books, articles and journal publications are listed below for reference.
One modern application of design optimization is structural design optimization (SDO) in the building and construction sector. SDO emphasizes automating and optimizing structural designs and dimensions to satisfy a variety of performance objectives. These advancements aim to optimize the configuration and dimensions of structures to augment strength, minimize material usage, reduce costs, enhance energy efficiency, improve sustainability, and optimize several other performance criteria. Concurrently, structural design automation endeavors to streamline the design process, mitigate human errors, and enhance productivity through computer-based tools and optimization algorithms. Prominent practices and technologies in this domain include parametric design, generative design, building information modelling (BIM) technology, machine learning (ML), and artificial intelligence (AI), as well as integrating finite element analysis (FEA) with simulation tools.
|
[
{
"math_id": 0,
"text": "\\begin{align}\n&{\\operatorname{minimize}}& & f(x) \\\\\n&\\operatorname{subject\\;to}\n& &h_i(x) = 0, \\quad i = 1, \\dots,m_1 \\\\ \n&&&g_j(x) \\leq 0, \\quad j = 1,\\dots,m_2 \\\\\n&\\operatorname{and}\n& &x \\in X \\subseteq R^n\n\\end{align}"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "x_1, x_2, ..., x_n"
},
{
"math_id": 3,
"text": "f(x)"
},
{
"math_id": 4,
"text": "h_i(x)"
},
{
"math_id": 5,
"text": "m_1"
},
{
"math_id": 6,
"text": "g_j(x)"
},
{
"math_id": 7,
"text": "m_2"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "\\begin{align}\n&&&{h = (h_1,h_2,\\dots,h_{m1})}\\\\\n\\operatorname{and}\\\\\n&&&{g = (g_1, g_2,\\dots, g_{m2})}\n\\end{align}\n\n"
},
{
"math_id": 10,
"text": "\\begin{align}\n&{\\operatorname{minimize}}& & f(x) \\\\\n&\\operatorname{subject\\;to}\n& &h(x) = 0,\\quad g(x) \\leq 0,\\quad x \\in X \\subseteq R^n\\\\\n\\end{align}"
},
{
"math_id": 11,
"text": "h, g"
}
] |
https://en.wikipedia.org/wiki?curid=57477295
|
57478553
|
PC-SAFT
|
PC-SAFT (perturbed chain SAFT) is an equation of state that is based on statistical associating fluid theory (SAFT). Like other SAFT equations of state, it makes use of chain and association terms developed by Chapman et al. from perturbation theory. However, unlike earlier SAFT equations of state that used unbonded spherical particles as a reference fluid, it uses spherical particles in the context of hard chains as the reference fluid for the dispersion term.
PC-SAFT was developed by Joachim Gross and Gabriele Sadowski, and was first presented in their 2001 article. Further research extended PC-SAFT for use with associating and polar molecules, and it has also been modified for use with polymers. A version of PC-SAFT has also been developed to describe mixtures with ionic compounds (called electrolyte PC-SAFT or ePC-SAFT).
Form of the Equation of State.
The equation of state is organized into terms that account for different types of intermolecular interactions, including terms for hard-chain repulsion, dispersion, association, dipolar interactions, and ionic interactions.
The equation is most often expressed in terms of the residual Helmholtz energy because all other thermodynamic properties can be easily found by taking the appropriate derivatives of the Helmholtz energy.
formula_0
Here formula_1 is the molar residual Helmholtz energy.
Hard Chain Term.
formula_2
where formula_3 is the number of components in the mixture, formula_4 is the mole fraction of component "i", formula_5 is the mean segment number (with "m""i" the number of segments of component "i"), formula_6 is the reduced Helmholtz energy of the hard-sphere reference fluid, and formula_7 is the radial distribution function of the hard-sphere fluid for segments of component "i", evaluated at contact.
|
[
{
"math_id": 0,
"text": "a = a^\\text{hc} + a^\\text{disp} + a^\\text{assoc} + a^\\text{dipole} + a^\\text{ion}"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "\\frac{a^\\text{hc}}{kT} = \\overline{m} \\cdot a^\\text{hs} - \\sum_{i=1}^{NC} {x_i \\cdot (m_i-1) \\cdot \\ln(g^\\text{hs}_{i,i}) }"
},
{
"math_id": 3,
"text": "NC"
},
{
"math_id": 4,
"text": "x_i"
},
{
"math_id": 5,
"text": "\\overline{m} = \\sum_{i=1}^{NC} {x_i m_i} "
},
{
"math_id": 6,
"text": " a^\\text{hs} "
},
{
"math_id": 7,
"text": " g^\\text{hs}_{i,i} "
}
] |
https://en.wikipedia.org/wiki?curid=57478553
|
57479201
|
Leftist grammar
|
In formal language theory, a leftist grammar is a formal grammar in which certain restrictions are placed on the left and right sides of the grammar's productions. Only two types of productions are allowed, namely those of the form formula_0 (insertion rules) and formula_1 (deletion rules). Here, formula_2 and formula_3 are terminal symbols. This type of grammar was motivated by accessibility problems in the field of computer security.
Computational properties.
The membership problem for leftist grammars is decidable.
|
[
{
"math_id": 0,
"text": "a \\to ba"
},
{
"math_id": 1,
"text": "cd \\to d"
},
{
"math_id": 2,
"text": "a,b,c"
},
{
"math_id": 3,
"text": "d"
}
] |
https://en.wikipedia.org/wiki?curid=57479201
|
57481189
|
Viscous damping
|
Form of damping resulting from an object moving through a viscous fluid
In continuum mechanics, viscous damping is a formulation of the damping phenomena, in which the source of damping force is modeled as a function of the volume, shape, and velocity of an object traversing through a real fluid with viscosity.
Typical examples of viscous damping in mechanical systems include:
Viscous damping also refers to damping devices. Most often they damp motion by providing a force or torque opposing motion proportional to the velocity. This may be affected by fluid flow or motion of magnetic structures. The intended effect is to improve the damping ratio.
Single-degree-of-freedom system.
In a single-degree-of-freedom system, the viscous damping model relates force to velocity as shown below:
formula_0
Where formula_1 is the viscous damping coefficient with SI units of formula_2. This model adequately describes the damping force on a body that is moving at a moderate speed through a fluid. It is also the most common modeling choice for damping.
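A minimal sketch of this damping law in a mass-spring-damper simulation is given below; all numerical parameter values are arbitrary and chosen only for illustration.

```python
import math

# Mass-spring-damper with viscous damping force f = c * x_dot (illustrative values).
m, k, c = 1.0, 4.0, 0.4          # mass [kg], stiffness [N/m], viscous damping coefficient [N*s/m]
dt, T = 1e-3, 10.0
x, v = 1.0, 0.0                  # initial displacement [m] and velocity [m/s]
for _ in range(int(T / dt)):
    a = (-k * x - c * v) / m     # spring force plus velocity-proportional damping force
    v += a * dt
    x += v * dt

zeta = c / (2.0 * math.sqrt(k * m))   # damping ratio implied by these values
print(f"displacement after {T} s: {x:.4f} m (damping ratio {zeta:.2f})")
```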
|
[
{
"math_id": 0,
"text": "f=c\\dot x"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "N\\cdot s/m"
}
] |
https://en.wikipedia.org/wiki?curid=57481189
|
57482546
|
Fuchs relation
|
In mathematics, the Fuchs relation is a relation between the starting exponents of formal series solutions of certain linear differential equations, so called "Fuchsian equations". It is named after Lazarus Immanuel Fuchs.
Definition Fuchsian equation.
A linear differential equation in which every singular point, including the point at infinity, is a regular singularity is called "Fuchsian equation" or "equation of Fuchsian type". For Fuchsian equations a formal fundamental system exists at any point, due to the Fuchsian theory.
Coefficients of a Fuchsian equation.
Let formula_0 be the formula_1 regular singularities in the finite part of the complex plane of the linear differential equation formula_2
with meromorphic functions formula_3. For linear differential equations the singularities are exactly the singular points of the coefficients. formula_4 is a Fuchsian equation if and only if the coefficients are rational functions of the form
formula_5
with the polynomial formula_6 and certain polynomials formula_7 for formula_8, such that formula_9. This means the coefficient formula_3 has poles of order at most formula_10, for formula_8.
Fuchs relation.
Let formula_4 be a Fuchsian equation of order formula_11 with the singularities formula_12 and the point at infinity. Let formula_13 be the roots of the indicial polynomial relative to formula_14, for formula_15. Let formula_16 be the roots of the indicial polynomial relative to formula_17, which is given by the indicial polynomial of formula_18 transformed by formula_19 at formula_20. Then the so called "Fuchs relation" holds:
formula_21.
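As an illustration (a standard example, not taken from the text above), the Fuchs relation can be checked for the Gauss hypergeometric equation, a Fuchsian equation of order 2 with two finite regular singularities at 0 and 1 plus the point at infinity:

```latex
% Check of the Fuchs relation for the hypergeometric equation
%   z(1-z)f'' + (c - (a+b+1)z)f' - ab\, f = 0,
% which has n = 2 and r = 2 (regular singularities at z = 0, 1, and infinity).
\begin{aligned}
\text{exponents at } 0:\;& 0,\ 1-c, \\
\text{exponents at } 1:\;& 0,\ c-a-b, \\
\text{exponents at } \infty:\;& a,\ b, \\
\text{sum} =\;& (1-c) + (c-a-b) + (a+b) = 1 = \tfrac{n(n-1)(r-1)}{2} = \tfrac{2 \cdot 1 \cdot 1}{2}.
\end{aligned}
```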
The Fuchs relation can be rewritten as an infinite sum. Let formula_22 denote the indicial polynomial relative to formula_23 of the Fuchsian equation formula_4. Define formula_24 as
formula_25
where formula_26 gives the trace of a polynomial formula_27, i. e., formula_28 denotes the sum of a polynomial's roots counted with multiplicity.
This means that formula_29 for any ordinary point formula_30, due to the fact that the indicial polynomial relative to any ordinary point is formula_31. The transformation formula_19, that is used to obtain the indicial equation relative to formula_17, motivates the changed sign in the definition of formula_32 for formula_33. The rewritten Fuchs relation is:
formula_34
|
[
{
"math_id": 0,
"text": "a_1, \\dots, a_r \\in \\mathbb{C}"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "Lf := \\frac{d^nf}{dz^n} + q_1\\frac{d^{n-1}f}{dz^{n-1}} + \\cdots + q_{n-1}\\frac{df}{dz} + q_nf"
},
{
"math_id": 3,
"text": "q_i"
},
{
"math_id": 4,
"text": "Lf=0"
},
{
"math_id": 5,
"text": "q_i(z) = \\frac{Q_i(z)}{\\psi^i}"
},
{
"math_id": 6,
"text": "\\psi := \\prod_{j=0}^r (z-a_j) \\in\\mathbb{C}[z]"
},
{
"math_id": 7,
"text": "Q_i \\in \\mathbb{C}[z]"
},
{
"math_id": 8,
"text": "i\\in \\{1,\\dots,n\\}"
},
{
"math_id": 9,
"text": "\\deg(Q_i) \\leq i(r-1)"
},
{
"math_id": 10,
"text": "i"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "a_1, \\dots, a_r\\in\\mathbb{C}"
},
{
"math_id": 13,
"text": "\\alpha_{i1},\\dots,\\alpha_{in}\\in\\mathbb{C}"
},
{
"math_id": 14,
"text": "a_i"
},
{
"math_id": 15,
"text": "i\\in\\{1,\\dots,r\\}"
},
{
"math_id": 16,
"text": "\\beta_1,\\dots,\\beta_n\\in\\mathbb{C}"
},
{
"math_id": 17,
"text": "\\infty"
},
{
"math_id": 18,
"text": "Lf"
},
{
"math_id": 19,
"text": "z=x^{-1}"
},
{
"math_id": 20,
"text": "x=0"
},
{
"math_id": 21,
"text": "\\sum_{i=1}^r \\sum_{k=1}^n \\alpha_{ik} + \\sum_{k=1}^n \\beta_{k} = \\frac{n(n-1)(r-1)}{2}"
},
{
"math_id": 22,
"text": "P_{\\xi}"
},
{
"math_id": 23,
"text": "\\xi\\in\\mathbb{C}\\cup\\{\\infty\\}"
},
{
"math_id": 24,
"text": "\\operatorname{defect}: \\mathbb{C}\\cup\\{\\infty\\}\\to\\mathbb{C}"
},
{
"math_id": 25,
"text": "\\operatorname{defect}(\\xi):=\n\\begin{cases}\n\t\\operatorname{Tr}(P_\\xi) - \\frac{n(n-1)}{2}\\text{, for }\\xi\\in\\mathbb{C}\\\\\n\t\\operatorname{Tr}(P_\\xi) + \\frac{n(n-1)}{2}\\text{, for }\\xi=\\infty\n\\end{cases}"
},
{
"math_id": 26,
"text": "\\operatorname{Tr}(P):=\\sum_{\\{z\\in\\mathbb{C}: P(z)=0\\}} z"
},
{
"math_id": 27,
"text": "P"
},
{
"math_id": 28,
"text": "\\operatorname{Tr}"
},
{
"math_id": 29,
"text": "\\operatorname{defect}(\\xi)=0"
},
{
"math_id": 30,
"text": "\\xi"
},
{
"math_id": 31,
"text": "P_\\xi(\\alpha)= \\alpha(\\alpha-1)\\cdots(\\alpha-n+1)"
},
{
"math_id": 32,
"text": "\\operatorname{defect}"
},
{
"math_id": 33,
"text": "\\xi=\\infty"
},
{
"math_id": 34,
"text": "\\sum_{\\xi\\in\\mathbb{C}\\cup\\{\\infty\\}} \\operatorname{defect}(\\xi) = 0."
}
] |
https://en.wikipedia.org/wiki?curid=57482546
|
57482552
|
Fuchsian theory
|
The Fuchsian theory of linear differential equations, which is named after Lazarus Immanuel Fuchs, provides a characterization of various types of singularities and the relations among them.
At any ordinary point of a homogeneous linear differential equation of order formula_0 there exists a fundamental system of formula_0 linearly independent power series solutions. A non-ordinary point is called a singularity. At a singularity the maximal number of linearly independent power series solutions may be less than the order of the differential equation.
Generalized series solutions.
The generalized series at formula_1 is defined by
formula_2
which is known as "Frobenius series", due to the connection with the Frobenius series method. Frobenius series solutions are formal solutions of differential equations. The formal derivative of formula_3, with formula_4, is defined such that formula_5. Let formula_6 denote a Frobenius series relative to formula_7, then
formula_8
where formula_9 denotes the falling factorial notation.
Indicial equation.
Let formula_10 be a Frobenius series relative to formula_11. Let formula_12 be a linear differential operator of order formula_0 with single-valued coefficient functions formula_13. Let all coefficients formula_14 be expandable as Laurent series with finite principal part at formula_7. Then there exists a smallest formula_15 such that formula_16 is a power series for all formula_17. Hence, formula_18 is a Frobenius series of the form formula_19, with a certain power series formula_20 in formula_21. The "indicial polynomial" is defined by formula_22, which is a polynomial in formula_23, i.e., formula_24 equals the coefficient of formula_18 with lowest degree in formula_21. For each formal Frobenius series solution formula_6 of formula_25, formula_23 must be a root of the indicial polynomial at formula_7, i.e., formula_23 needs to solve the "indicial equation" formula_26.
If formula_7 is an ordinary point, the resulting indicial equation is given by formula_27. If formula_7 is a regular singularity, then formula_28, and if formula_7 is an irregular singularity, formula_29 holds. This is illustrated by the later examples. The indicial equation relative to formula_30 is defined by the indicial equation of formula_31, where formula_32 denotes the differential operator formula_33 transformed by formula_34, which is a linear differential operator in formula_35, at formula_36.
Example: Regular singularity.
The differential operator of order formula_37, formula_38, has a regular singularity at formula_39. Consider a Frobenius series solution relative to formula_40, formula_41 with formula_42.
formula_43
This implies that the degree of the indicial polynomial relative to formula_40 is equal to the order of the differential equation, formula_44.
Example: Irregular singularity.
The differential operator of order formula_37, formula_45, has an irregular singularity at formula_39. Let formula_6 be a Frobenius series solution relative to formula_40.
formula_46
Here at least one coefficient of the lower derivatives pushes the exponent of formula_47 down, so that the term of smallest exponent comes from a lower derivative. Consequently, the degree of the indicial polynomial relative to formula_40 is less than the order of the differential equation, formula_48.
Formal fundamental systems.
Given a homogeneous linear differential equation formula_25 of order formula_0 with coefficients that are expandable as Laurent series with finite principal part, the goal is to obtain a fundamental set of formal Frobenius series solutions relative to any point formula_1. This can be done by the Frobenius series method: the starting exponents are given by the solutions of the indicial equation, and the coefficients satisfy a polynomial recursion. Without loss of generality, assume formula_49.
Fundamental system at ordinary point.
If formula_40 is an ordinary point, a fundamental system is formed by the formula_0 linearly independent formal Frobenius series solutions formula_50, where formula_51 denotes a formal power series in formula_47 with formula_52, for formula_53. Because the starting exponents are integers, the Frobenius series are power series.
Fundamental system at regular singularity.
If formula_40 is a regular singularity, one has to pay attention to roots of the indicial polynomial that differ by integers. In this case the recursive calculation of the Frobenius series' coefficients stops for some roots and the Frobenius series method does not give an formula_0-dimensional solution space. The following can be shown independent of the distance between roots of the indicial polynomial: Let formula_4 be a formula_54-fold root of the indicial polynomial relative to formula_40. Then the part of the fundamental system corresponding to formula_23 is given by the formula_54 linearly independent formal solutions
formula_55
where formula_51 denotes a formal power series in formula_47 with formula_52, for formula_56. One obtains a fundamental set of formula_0 linearly independent formal solutions, because the indicial polynomial relative to a regular singularity is of degree formula_0.
General result.
One can show that a linear differential equation of order formula_0 always has formula_0 linearly independent solutions of the form
formula_57
where formula_58 and formula_59, and the formal power series formula_60.
formula_40 is an irregular singularity if and only if there is a solution with formula_61. Hence, a differential equation is of Fuchsian type if and only if for all formula_62 there exists a fundamental system of Frobenius series solutions with formula_63 at formula_7.
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\xi\\in\\mathbb{C}"
},
{
"math_id": 2,
"text": "(z-\\xi)^\\alpha\\sum_{k=0}^\\infty c_k(z-\\xi)^k, \\text{ with } \\alpha,c_k \\in \\mathbb{C} \\text{ and } c_0\\neq0, "
},
{
"math_id": 3,
"text": "z^\\alpha"
},
{
"math_id": 4,
"text": "\\alpha\\in\\mathbb{C}"
},
{
"math_id": 5,
"text": "(z^\\alpha)'=\\alpha z^{\\alpha-1}"
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "\\xi"
},
{
"math_id": 8,
"text": "{d^nf \\over d z^n} = (z-\\xi)^{\\alpha-n}\\sum_{k=0}^\\infty (\\alpha+k)^{\\underline{n}} c_k(z-\\xi)^k,"
},
{
"math_id": 9,
"text": "\\alpha^{\\underline{n}}:=\\prod_{i=0}^{n-1}(\\alpha-i) = \\alpha(\\alpha-1)\\cdots(\\alpha-n+1)"
},
{
"math_id": 10,
"text": "f:=(z-\\xi)^{\\alpha}\\sum_{k=0}^{\\infty}c_k(z-\\xi)^k"
},
{
"math_id": 11,
"text": "\\xi \\in \\mathbb{C}"
},
{
"math_id": 12,
"text": "Lf=f^{(n)} + q_1f^{(n-1)} + \\cdots + q_nf\n"
},
{
"math_id": 13,
"text": "q_1, \\dots, q_n"
},
{
"math_id": 14,
"text": "q_1,\\dots,q_n"
},
{
"math_id": 15,
"text": "N\\in\\mathbb{N}"
},
{
"math_id": 16,
"text": "(z-\\xi)^Nq_i"
},
{
"math_id": 17,
"text": "i\\in\\{1,\\dots, n\\}"
},
{
"math_id": 18,
"text": "Lf"
},
{
"math_id": 19,
"text": "Lf=(z-\\xi)^{\\alpha-n-N}\\psi(z)"
},
{
"math_id": 20,
"text": "\\psi(z)"
},
{
"math_id": 21,
"text": "(z-\\xi)"
},
{
"math_id": 22,
"text": "P_{\\xi}:=\\psi(0)"
},
{
"math_id": 23,
"text": "\\alpha"
},
{
"math_id": 24,
"text": "P_{\\xi}"
},
{
"math_id": 25,
"text": "Lf=0"
},
{
"math_id": 26,
"text": "P_{\\xi}(\\alpha) = 0"
},
{
"math_id": 27,
"text": "\\alpha^{\\underline{n}}=0"
},
{
"math_id": 28,
"text": "\\deg(P_{\\xi}(\\alpha))=n"
},
{
"math_id": 29,
"text": "\\deg(P_{\\xi}(\\alpha))<n"
},
{
"math_id": 30,
"text": "\\xi=\\infty"
},
{
"math_id": 31,
"text": "\\widetilde{L}f"
},
{
"math_id": 32,
"text": "\\widetilde{L}"
},
{
"math_id": 33,
"text": "L"
},
{
"math_id": 34,
"text": "z=x^{-1}"
},
{
"math_id": 35,
"text": "x"
},
{
"math_id": 36,
"text": "x=0"
},
{
"math_id": 37,
"text": "2"
},
{
"math_id": 38,
"text": "Lf := f''+\\frac{1}{z}f'+\\frac{1}{z^2}f"
},
{
"math_id": 39,
"text": "z=0"
},
{
"math_id": 40,
"text": "0"
},
{
"math_id": 41,
"text": "f := z^\\alpha(c_0 + c_1z + c_2 z^2 + \\cdots)"
},
{
"math_id": 42,
"text": "c_0\\neq0"
},
{
"math_id": 43,
"text": "\n\\begin{align}\nLf & = z^{\\alpha-2}(\\alpha(\\alpha-1)c_0 + \\cdots) + \\frac{1}{z}z^{\\alpha-1}(\\alpha c_0 + \\cdots) + \\frac{1}{z^2}z^{\\alpha}(c_0 + \\cdots) \\\\[5pt]\n& = z^{\\alpha-2}c_0(\\alpha(\\alpha-1) + \\alpha + 1) + \\cdots.\n\\end{align}\n"
},
{
"math_id": 44,
"text": "\\deg(P_0(\\alpha)) = \\deg(\\alpha^2 + 1) = 2"
},
{
"math_id": 45,
"text": "Lf:=f''+\\frac{1}{z^2}f' + f"
},
{
"math_id": 46,
"text": "\n\\begin{align}\nLf & = z^{\\alpha-2}(\\alpha(\\alpha-1)c_0 + \\cdots) + \\frac{1}{z^2}z^{\\alpha-1}(\\alpha c_0 + \\cdots) + z^{\\alpha}(c_0 + \\cdots) \\\\[5pt]\n& = z^{\\alpha-3} c_0 \\alpha + z^{\\alpha-2}(c_0\\alpha(\\alpha-1) + c_1) + \\cdots.\n\\end{align}\n"
},
{
"math_id": 47,
"text": "z"
},
{
"math_id": 48,
"text": "\\deg(P_0(\\alpha)) = \\deg(\\alpha) = 1 < 2"
},
{
"math_id": 49,
"text": "\\xi=0"
},
{
"math_id": 50,
"text": "\\psi_1, z\\psi_2, \\dots, z^{n-1}\\psi_{n}"
},
{
"math_id": 51,
"text": "\\psi_i\\in\\mathbb{C}[[z]]"
},
{
"math_id": 52,
"text": "\\psi(0)\\neq0"
},
{
"math_id": 53,
"text": "i\\in\\{1,\\dots,n\\}"
},
{
"math_id": 54,
"text": "\\mu"
},
{
"math_id": 55,
"text": "\\begin{align}\n& z^\\alpha \\psi_0 \\\\\n& z^\\alpha \\psi_1 + z^\\alpha\\log(z)\\psi_0\\\\\n& z^\\alpha \\psi_2 + 2z^\\alpha\\log(z)\\psi_1 + z^\\alpha\\log^2(z)\\psi_0\\\\\n& \\qquad \\vdots\\\\\n& z^\\alpha \\psi_{\\mu-1} + \\cdots + \\binom{\\mu-1}{k} z^{\\alpha}\\log^k(z)\\psi_{\\mu-k} + \\cdots + z^\\alpha \\log^{\\mu-1}(z)\\psi_0\n\\end{align}\n"
},
{
"math_id": 56,
"text": "i\\in\\{0,\\dots,\\mu-1\\}"
},
{
"math_id": 57,
"text": "\\exp(u(z^{-1/s}))\\cdot z^\\alpha(\\psi_0(z^{1/s}) + \\cdots + \\log^k(z) \\psi_k(z^{1/s}) + \\cdots + \\log^{w}(z) \\psi_w(z^{1/s}))"
},
{
"math_id": 58,
"text": "s\\in\\mathbb{N}\\setminus\\{0\\}, u(z)\\in\\mathbb{C}[z]"
},
{
"math_id": 59,
"text": "u(0)=0, \\alpha\\in\\mathbb{C}, w\\in\\mathbb{N}"
},
{
"math_id": 60,
"text": "\\psi_0(z),\\dots,\\psi_w\\in\\mathbb{C}[[z]]"
},
{
"math_id": 61,
"text": "u\\neq 0"
},
{
"math_id": 62,
"text": "\\xi\\in\\mathbb{C}\\cup\\{\\infty\\}"
},
{
"math_id": 63,
"text": "u=0"
}
] |
https://en.wikipedia.org/wiki?curid=57482552
|
5748495
|
Young model
|
Cellular Communications systems
Young model is a radio propagation model that was built on data collected in New York City. It typically models the behaviour of cellular communication systems in large cities.
Applicable to/under conditions.
This model is ideal for modeling the behaviour of cellular communications in large cities with tall structures.
Coverage.
Frequency: 150 MHz to 3700 MHz
History.
The Young model was built on data collected in New York City in 1952.
Mathematical formulation.
The mathematical formulation for Young model is:
formula_0
Where,
"L" = path loss. Unit: decibel (dB)
"G"B = gain of base transmitter. Unit: decibel (dB)
"G"M = gain of mobile transmitter. Unit: decibel (dB)
"h"B = height of base station antenna. Unit: meter (m)
"h"M = height of mobile station antenna. Unit: meter (m)
"d" = link distance. Unit: kilometer (km)
formula_1 = clutter factor
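A small numerical illustration of evaluating the formula above is given below. All input values (gains, antenna heights, distance and clutter factor) are made up for the example, and the decibel conversion is shown only for convenience.

```python
import math

# Illustrative evaluation of the Young model formula L = G_B * G_M * (h_B*h_M / d^2)^2 * beta.
def young_path_loss(G_B, G_M, h_B, h_M, d_km, beta):
    """Return L for the given gains, antenna heights [m], link distance [km] and clutter factor."""
    return G_B * G_M * (h_B * h_M / d_km ** 2) ** 2 * beta

L = young_path_loss(G_B=1.6, G_M=1.6, h_B=30.0, h_M=1.5, d_km=5.0, beta=25.0)
print(f"L = {L:.3e}  ({10 * math.log10(L):.1f} dB)")
```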
|
[
{
"math_id": 0,
"text": "L \\; = \\; G_B \\; G_M \\; \\left (\\frac{h_B \\; h_M}{d^2} \\right )^2\\beta"
},
{
"math_id": 1,
"text": "\\beta"
}
] |
https://en.wikipedia.org/wiki?curid=5748495
|
5748810
|
Maximal arc
|
A maximal arc in a finite projective plane is a largest possible ("k","d")-arc in that projective plane. If the finite projective plane has order "q" (there are "q"+1 points on any line), then for a maximal arc, "k", the number of points of the arc, is the maximum possible (= "qd" + "d" - "q") with the property that no "d"+1 points of the arc lie on the same line.
Definition.
Let formula_0 be a finite projective plane of order "q" (not necessarily desarguesian). Maximal arcs of "degree" "d" ( 2 ≤ "d" ≤ "q"- 1) are ("k","d")-arcs in formula_0, where "k" is maximal with respect to the parameter "d", in other words, "k" = "qd" + "d" - "q".
Equivalently, one can define maximal arcs of degree "d" in formula_0 as non-empty sets of points "K" such that every line intersects the set either in 0 or "d" points.
Some authors permit the degree of a maximal arc to be 1, "q" or even "q"+ 1. Letting "K" be a maximal ("k", "d")-arc in a projective plane of order "q": if "d" = 1, then "K" is a single point; if "d" = "q", then "K" is the complement of a line (an affine plane of order "q"); and if "d" = "q" + 1, then "K" is the entire projective plane.
All of these cases are considered to be "trivial" examples of maximal arcs, existing in any type of projective plane for any value of "q". When 2 ≤ "d" ≤ "q"- 1, the maximal arc is called "non-trivial", and the definition given above and the properties listed below all refer to non-trivial maximal arcs.
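For a concrete non-trivial illustration (a standard example, not stated in the text above), a hyperoval in a projective plane of even order "q" is a maximal arc of degree "d" = 2:

```latex
% Point count of a degree-2 maximal arc (hyperoval) in a plane of order q:
k = qd + d - q = 2q + 2 - q = q + 2
% e.g. for q = 4 this gives k = 6 points, no three of which are collinear.
```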
Partial geometries.
One can construct partial geometries, derived from maximal arcs:
|
[
{
"math_id": 0,
"text": "\\pi"
},
{
"math_id": 1,
"text": "(q+1)-\\frac{q}{d}"
},
{
"math_id": 2,
"text": "S(K)=(P,B,I)"
},
{
"math_id": 3,
"text": "pg(q-d,q-\\frac{q}{d},q-\\frac{q}{d}-d+1)"
},
{
"math_id": 4,
"text": "PG(3,2^h) (h\\geq 1)"
},
{
"math_id": 5,
"text": "d=2^s (1\\leq s\\leq m)"
},
{
"math_id": 6,
"text": "T_2^{*}(K)=(P,B,I)"
},
{
"math_id": 7,
"text": "T_2^{*}(K)"
},
{
"math_id": 8,
"text": "pg(2^h-1,(2^h+1)(2^m-1),2^m-1)\\,"
}
] |
https://en.wikipedia.org/wiki?curid=5748810
|
57495962
|
Connected relation
|
Property of a relation on a set
<templatestyles src="Stack/styles.css"/>
In mathematics, a relation on a set is called connected or complete or total if it relates (or "compares") all distinct pairs of elements of the set in one direction or the other while it is called strongly connected if it relates all pairs of elements. As described in the terminology section below, the terminology for these properties is not uniform. This notion of "total" should not be confused with that of a total relation in the sense that for all formula_0 there is a formula_1 so that formula_2 (see serial relation).
Connectedness features prominently in the definition of total orders: a total (or linear) order is a partial order in which any two elements are comparable; that is, the order relation is connected. Similarly, a strict partial order that is connected is a strict total order.
A relation is a total order if and only if it is both a partial order and strongly connected. A relation is a strict total order if, and only if, it is a strict partial order and just connected. A strict total order can never be strongly connected (except on an empty domain).
Formal definition.
A relation formula_3 on a set formula_4 is called connected when for all formula_5
formula_6
or, equivalently, when for all formula_5
formula_7
A relation with the property that for all formula_5
formula_8
is called strongly connected.
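The two definitions can be checked directly on a finite set. The sketch below is an illustration only; function names are hypothetical.

```python
from itertools import product

# Check connectedness and strong connectedness of a relation R (a set of pairs) on a finite set X.
def is_connected(X, R):
    """x != y implies x R y or y R x."""
    return all(x == y or (x, y) in R or (y, x) in R for x, y in product(X, repeat=2))

def is_strongly_connected(X, R):
    """For all x, y: x R y or y R x (so in particular x R x)."""
    return all((x, y) in R or (y, x) in R for x, y in product(X, repeat=2))

X = {1, 2, 3}
less = {(x, y) for x, y in product(X, repeat=2) if x < y}    # strict total order
leq = {(x, y) for x, y in product(X, repeat=2) if x <= y}    # total order
print(is_connected(X, less), is_strongly_connected(X, less))  # True False
print(is_connected(X, leq), is_strongly_connected(X, leq))    # True True
```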
Terminology.
The main use of the notion of connected relation is in the context of orders, where it is used to define total, or linear, orders. In this context, the property is often not specifically named. Rather, total orders are defined as partial orders in which any two elements are comparable.
Thus, total is used more generally for relations that are connected or strongly connected. However, this notion of "total relation" must be distinguished from the property of being serial, which is also called total. Similarly, connected relations are sometimes called , although this, too, can lead to confusion: The universal relation is also called complete, and "complete" has several other meanings in order theory.
Connected relations are also said to satisfy trichotomy (although the more common definition of trichotomy is stronger in that exactly one of the three options formula_9 must hold).
When the relations considered are not orders, being connected and being strongly connected are importantly different properties. Sources which define both then use pairs of terms such as complete and strongly complete, or total and complete, as alternative names for the notions of connected and strongly connected as defined above.
Characterizations.
Let formula_3 be a homogeneous relation. The following are equivalent:
formula_3 is strongly connected; formula_10; formula_11; formula_12 is asymmetric;
where formula_13 is the universal relation and formula_14 is the converse relation of formula_15
The following are equivalent:
formula_3 is connected; formula_16; formula_17; formula_12 is antisymmetric;
where formula_18 is the complementary relation of the identity relation formula_19 and formula_14 is the converse relation of formula_15
Introducing progressions, Russell invoked the axiom of connection:
<templatestyles src="Template:Blockquote/styles.css" />Whenever a series is originally given by a transitive asymmetrical relation, we can express connection by the condition that any two terms of our series are to have the generating relation.
|
[
{
"math_id": 0,
"text": "x \\in X"
},
{
"math_id": 1,
"text": "y \\in X"
},
{
"math_id": 2,
"text": "x \\mathrel{R} y"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "x, y \\in X,"
},
{
"math_id": 6,
"text": "\\text{ if } x \\neq y \\text{ then } x \\mathrel{R} y \\quad \\text{or} \\quad y \\mathrel{R} x,"
},
{
"math_id": 7,
"text": "x \\mathrel{R} y \\quad \\text{or} \\quad y \\mathrel{R} x \\quad \\text{or} \\quad x = y."
},
{
"math_id": 8,
"text": "x \\mathrel{R} y \\quad \\text{or} \\quad y \\mathrel{R} x"
},
{
"math_id": 9,
"text": "x \\mathrel{R} y, y \\mathrel{R} x, x = y"
},
{
"math_id": 10,
"text": "U \\subseteq R \\cup R^\\top"
},
{
"math_id": 11,
"text": "\\overline{R} \\subseteq R^\\top"
},
{
"math_id": 12,
"text": "\\overline{R}"
},
{
"math_id": 13,
"text": "U"
},
{
"math_id": 14,
"text": "R^\\top"
},
{
"math_id": 15,
"text": "R."
},
{
"math_id": 16,
"text": "\\overline{I} \\subseteq R \\cup R^\\top"
},
{
"math_id": 17,
"text": "\\overline{R} \\subseteq R^\\top \\cup I"
},
{
"math_id": 18,
"text": "\\overline{I}"
},
{
"math_id": 19,
"text": "I"
},
{
"math_id": 20,
"text": "E"
},
{
"math_id": 21,
"text": "G"
},
{
"math_id": 22,
"text": "\\{ a, b, c \\},"
},
{
"math_id": 23,
"text": "\\{ (a, b), (b, c), (c, a) \\}"
},
{
"math_id": 24,
"text": "X,"
}
] |
https://en.wikipedia.org/wiki?curid=57495962
|
57497987
|
+ h.c.
|
+ h.c. is an abbreviation for "plus the Hermitian conjugate"; it means that there are additional terms which are the Hermitian conjugates of all of the preceding terms, and is a convenient shorthand to omit writing out half the terms actually present.
Context and use.
The notation convention "+ h.c." is common in quantum mechanics in the context of writing out formulas for Lagrangians and Hamiltonians, which conventionally are both required to be Hermitian operators.
The expression
formula_0
means
formula_1
The mathematics of quantum mechanics is based on complex numbers, whereas almost all observations (measurements) are only real numbers. Adding an operator's Hermitian conjugate to the operator itself guarantees that the combination is Hermitian, which in turn guarantees that the combined operator's eigenvalues are real numbers, suitable for predicting the values of observations / measurements.
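This can be verified numerically in a small sketch (illustrative only; the random matrix stands in for an arbitrary non-Hermitian term):

```python
import numpy as np

# Adding the Hermitian conjugate ("+ h.c.") of a non-Hermitian term makes the total Hermitian,
# so its eigenvalues are real.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))   # some non-Hermitian term
L_total = A + A.conj().T                                     # "A + h.c."

print(np.allclose(L_total, L_total.conj().T))                # True: Hermitian
print(np.linalg.eigvalsh(L_total))                           # real eigenvalues
```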
Dagger and asterisk notation.
In the expressions above, formula_2 is used as the symbol for the Hermitian conjugate (also called the "conjugate transpose") of formula_3, defined as applying both the complex conjugate and the transpose transformations to the operator formula_3, in any order.
The dagger (formula_4) is an old notation in mathematics, but is still widespread in quantum mechanics. In mathematics (particularly linear algebra) the Hermitian conjugate of formula_3 is commonly written as formula_5, but in quantum mechanics the asterisk (formula_6) notation is sometimes used for the complex conjugate only, and not the combined conjugate transpose (Hermitian conjugate).
|
[
{
"math_id": 0,
"text": "\\mathcal{L} = A + B + C + ~ \\text{h.c.} ~"
},
{
"math_id": 1,
"text": "\\mathcal{L} = A + B + C + A^\\dagger + B^\\dagger + C^\\dagger ~."
},
{
"math_id": 2,
"text": "A^\\dagger"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "\\dagger"
},
{
"math_id": 5,
"text": "A^\\ast"
},
{
"math_id": 6,
"text": "\\ast"
}
] |
https://en.wikipedia.org/wiki?curid=57497987
|
57498426
|
Periodic table of topological invariants
|
Classification of topological invariants of insulators and superconductors by symmetry class and dimension
The periodic table of topological invariants is an application of topology to physics. It indicates the group of topological invariants for topological insulators and topological superconductors in each dimension and in each discrete symmetry class.
Discrete symmetry classes.
There are ten discrete symmetry classes of topological insulators and superconductors, corresponding to the ten Altland–Zirnbauer classes of random matrices. They are defined by three symmetries of the Hamiltonian formula_0 (where formula_1 and formula_2 are the annihilation and creation operators of mode formula_3, in some arbitrary spatial basis): time reversal symmetry, particle hole (or charge conjugation) symmetry, and chiral (or sublattice) symmetry.
Chiral symmetry is a unitary operator formula_4 that acts on formula_1 as a unitary rotation (formula_5) and satisfies formula_6. A Hamiltonian formula_7 possesses chiral symmetry when formula_8 for some choice of formula_4 (on the level of first-quantised Hamiltonians, this means that formula_9 and formula_7 are anticommuting matrices).
Time reversal is an antiunitary operator formula_10 that acts on formula_11 (where formula_12 is an arbitrary complex coefficient and formula_13 denotes complex conjugation) as formula_14. It can be written as formula_15, where formula_16 is the complex conjugation operator and formula_17 is a unitary matrix. Either formula_18 or formula_19. A Hamiltonian with time reversal symmetry satisfies formula_20, or on the level of first-quantised matrices, formula_21 for some choice of formula_17.
Charge conjugation formula_22 is also an antiunitary operator which acts on formula_11 as formula_23, and can be written as formula_24, where formula_25 is unitary. Again, either formula_26 or formula_27, depending on formula_25. A Hamiltonian with particle hole symmetry satisfies formula_28, or on the level of first-quantised Hamiltonian matrices, formula_29 for some choice of formula_25.
In the Bloch Hamiltonian formalism for periodic crystals, where the Hamiltonian formula_30 acts on modes of crystal momentum formula_31, the chiral symmetry, TRS, and PHS conditions become formula_32, formula_33 and formula_34.
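These Bloch-space conditions can be checked numerically for a concrete model. The sketch below uses the SSH chain purely as an assumed illustrative example (class BDI), with the standard symmetry representations U_S = sigma_z, U_T = 1 and U_C = sigma_z for that model; the explicit conditions written in the comments are the standard forms, since formula_32, formula_33 and formula_34 are not reproduced above.

```python
import numpy as np

# Check  chiral: U_S H(k) U_S^{-1} = -H(k)
#        TRS:    U_T H(-k)* U_T^{-1} = H(k)
#        PHS:    U_C H(-k)* U_C^{-1} = -H(k)
# for the SSH Bloch Hamiltonian H(k) = (t1 + t2 cos k) s_x + t2 sin k s_y.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(k, t1=1.0, t2=0.7):
    return (t1 + t2 * np.cos(k)) * sx + t2 * np.sin(k) * sy

U_S, U_T, U_C = sz, np.eye(2, dtype=complex), sz   # assumed representations for the SSH model
for k in np.linspace(-np.pi, np.pi, 7):
    assert np.allclose(U_S @ H(k) @ np.linalg.inv(U_S), -H(k))
    assert np.allclose(U_T @ H(-k).conj() @ np.linalg.inv(U_T), H(k))
    assert np.allclose(U_C @ H(-k).conj() @ np.linalg.inv(U_C), -H(k))
print("SSH model satisfies the chiral, time-reversal and particle-hole symmetry conditions")
```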
It is evident that if two of these three symmetries are present, then the third is also present, due to the relation formula_35.
The aforementioned discrete symmetries label 10 distinct discrete symmetry classes, which coincide with the Altland–Zirnbauer classes of random matrices.
Equivalence classes of Hamiltonians.
A bulk Hamiltonian in a particular symmetry group is restricted to be a Hermitian matrix with no zero-energy eigenvalues (i.e. so that the spectrum is "gapped" and the system is a bulk insulator) satisfying the symmetry constraints of the group. In the case of formula_36 dimensions, this Hamiltonian is a continuous function formula_30 of the formula_37 parameters in the Bloch momentum vector formula_38 in the Brillouin zone; then the symmetry constraints must hold for all formula_39.
Given two Hamiltonians formula_40 and formula_41, it may be possible to continuously deform formula_40 into formula_41 while maintaining the symmetry constraint and gap (that is, there exists a continuous function formula_42 such that for all formula_43 the Hamiltonian has no zero eigenvalue and the symmetry condition is maintained, and formula_44 and formula_45). Then we say that formula_40 and formula_41 are equivalent.
However, it may also turn out that there is no such continuous deformation. In this case, physically, if two materials with bulk Hamiltonians formula_40 and formula_41, respectively, neighbor each other with an edge between them, then as one moves continuously across the edge one must encounter a zero eigenvalue (as there is no continuous transformation that avoids this). This may manifest as a gapless zero-energy edge mode or an electric current that only flows along the edge.
An interesting question is to ask, given a symmetry class and a dimension of the Brillouin zone, what are all the equivalence classes of Hamiltonians. Each equivalence class can be labeled by a topological invariant; two Hamiltonians whose topological invariant are different cannot be deformed into each other and belong to different equivalence classes.
Classifying spaces of Hamiltonians.
For each of the symmetry classes, the question can be simplified by deforming the Hamiltonian into a "projective" Hamiltonian, and considering the symmetric space in which such Hamiltonians live. These classifying spaces are shown for each symmetry class:
For example, a (real symmetric) Hamiltonian in symmetry class AI can have its formula_46 positive eigenvalues deformed to +1 and its formula_47 negative eigenvalues deformed to -1; the resulting matrices are described by the union of real Grassmannians formula_48
Classification of invariants.
The strong topological invariants of a many-band system in formula_37 dimensions can be labeled by the elements of the formula_37-th homotopy group of the symmetric space. These groups are displayed in this table, called the periodic table of topological insulators:
There may also exist weak topological invariants (associated to the fact that the suspension of the Brillouin zone is in fact equivalent to a formula_50 sphere wedged with lower-dimensional spheres), which are not included in this table. Furthermore, the table assumes the limit of an infinite number of bands, i.e. involves formula_51 Hamiltonians for formula_52.
The table also is periodic in the sense that the group of invariants in formula_37 dimensions is the same as the group of invariants in formula_53 dimensions. In the case of no antiunitary symmetries, the invariant groups are periodic in dimension by 2.
For nontrivial symmetry classes, the actual invariant can be defined by one of the following integrals over all or part of the Brillouin zone: the Chern number, the Wess-Zumino winding number, the Chern–Simons invariant, the Fu–Kane invariant.
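For example, for a one-dimensional two-band Hamiltonian with chiral symmetry, such as the SSH chain used as an illustration above, the invariant is a winding number; the following sketch (with arbitrarily chosen hopping amplitudes) evaluates it numerically and distinguishes the topological phase from the trivial one.

import numpy as np

def winding_number(t1, t2, nk=2001):
    # winding of h(k) = d_x(k) + i d_y(k), the off-diagonal element of the
    # chiral (off-diagonal) form of the SSH Bloch Hamiltonian
    k = np.linspace(-np.pi, np.pi, nk)
    h = (t1 + t2 * np.cos(k)) + 1j * t2 * np.sin(k)
    phase = np.unwrap(np.angle(h))
    return (phase[-1] - phase[0]) / (2 * np.pi)

print(round(winding_number(1.0, 2.0)))   # topological phase: 1
print(round(winding_number(2.0, 1.0)))   # trivial phase:     0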
Dimensional reduction and Bott clock.
The periodic table also displays a peculiar property: the invariant groups in formula_37 dimensions are identical to those in formula_54 dimensions but in a different symmetry class. Among the complex symmetry classes, the invariant group for A in formula_37 dimensions is the same as that for AIII in formula_54 dimensions, and vice versa. One can also imagine arranging each of the eight real symmetry classes on the Cartesian plane such that the formula_55 coordinate is formula_56 if time reversal symmetry is present and formula_49 if it is absent, and the formula_57 coordinate is formula_58 if particle hole symmetry is present and formula_49 if it is absent. Then the invariant group in formula_37 dimensions for a certain real symmetry class is the same as the invariant group in formula_54 dimensions for the symmetry class directly one space clockwise. This phenomenon was termed the "Bott clock" by Alexei Kitaev, in reference to the Bott periodicity theorem.
The Bott clock can be understood by considering the problem of Clifford algebra extensions. Near an interface between two inequivalent bulk materials, the Hamiltonian approaches a gap closing. To lowest order expansion in momentum slightly away from the gap closing, the Hamiltonian takes the form of a Dirac Hamiltonian formula_59. Here, formula_60 are a representation of the Clifford algebra formula_61, while formula_62 is an added "mass term" that anticommutes with the rest of the Hamiltonian and vanishes at the interface (thus giving the interface a gapless edge mode at formula_63). The formula_62 term for the Hamiltonian on one side of the interface cannot be continuously deformed into the formula_62 term for the Hamiltonian on the other side of the interface. Thus (letting formula_64 be an arbitrary positive scalar) the problem of classifying topological invariants reduces to the problem of classifying all possible inequivalent choices of formula_65 to extend the Clifford algebra to one higher dimension, while maintaining the symmetry constraints.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\hat{H} = \\sum_{i,j} H_{ij} c_i^{\\dagger} c_j"
},
{
"math_id": 1,
"text": "c_i"
},
{
"math_id": 2,
"text": "c_i^{\\dagger}"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "S c_i S^{-1} = (U_S)_{ij} c_j"
},
{
"math_id": 6,
"text": "S^2 = 1"
},
{
"math_id": 7,
"text": "H"
},
{
"math_id": 8,
"text": " S\\hat{H}S^{-1}=-\\hat{H}"
},
{
"math_id": 9,
"text": "U_S"
},
{
"math_id": 10,
"text": "T"
},
{
"math_id": 11,
"text": "\\alpha c_i"
},
{
"math_id": 12,
"text": "\\alpha"
},
{
"math_id": 13,
"text": "^*"
},
{
"math_id": 14,
"text": "T \\alpha c_i T^{-1} = \\alpha^* {(U_T)}_{ij}c_j"
},
{
"math_id": 15,
"text": "T = U_T \\mathcal{K}"
},
{
"math_id": 16,
"text": "\\mathcal{K}"
},
{
"math_id": 17,
"text": "U_T"
},
{
"math_id": 18,
"text": "T^2 = 1"
},
{
"math_id": 19,
"text": "T^2 = -1"
},
{
"math_id": 20,
"text": "T\\hat{H}T^{-1} = \\hat{H}"
},
{
"math_id": 21,
"text": "U_T H^* U_T^{-1} = H"
},
{
"math_id": 22,
"text": "C"
},
{
"math_id": 23,
"text": "C \\alpha c_i C^{-1} = \\alpha^* (U_C^{\\dagger})_{ji}c_j"
},
{
"math_id": 24,
"text": "C = U_C \\mathcal{K}"
},
{
"math_id": 25,
"text": "U_C"
},
{
"math_id": 26,
"text": "C^2 =1"
},
{
"math_id": 27,
"text": "C^2 = -1"
},
{
"math_id": 28,
"text": "C\\hat{H}C^{-1} = - \\hat{H}"
},
{
"math_id": 29,
"text": "U_C H^* U_C^{-1} = - H"
},
{
"math_id": 30,
"text": "H(k)"
},
{
"math_id": 31,
"text": "k"
},
{
"math_id": 32,
"text": "U_S H(k) U_S^{-1} = -H(k)"
},
{
"math_id": 33,
"text": "U_T H(k)^* U_T^{-1} = H(-k)"
},
{
"math_id": 34,
"text": "U_C H(k)^* U_C^{-1} = -H(-k)"
},
{
"math_id": 35,
"text": "S= TC"
},
{
"math_id": 36,
"text": "d>0"
},
{
"math_id": 37,
"text": "d"
},
{
"math_id": 38,
"text": " \\vec{k}"
},
{
"math_id": 39,
"text": "\\vec{k}"
},
{
"math_id": 40,
"text": "H_1"
},
{
"math_id": 41,
"text": "H_2"
},
{
"math_id": 42,
"text": "H(t, \\vec{k})"
},
{
"math_id": 43,
"text": "0 \\le t \\le 1"
},
{
"math_id": 44,
"text": "H(0, \\vec{k} ) = H_1( \\vec{k})"
},
{
"math_id": 45,
"text": "H(1, \\vec{k} ) = H_2( \\vec{k})"
},
{
"math_id": 46,
"text": "n"
},
{
"math_id": 47,
"text": "N-n"
},
{
"math_id": 48,
"text": "\\bigcup_{n=0}^\\infty Gr(n, N) = \\bigcup_{n=0}^\\infty O(N)/O(n)\\times O(N-n)"
},
{
"math_id": 49,
"text": "0"
},
{
"math_id": 50,
"text": "d+1"
},
{
"math_id": 51,
"text": "N \\times N"
},
{
"math_id": 52,
"text": "N \\to \\infty"
},
{
"math_id": 53,
"text": "d+8"
},
{
"math_id": 54,
"text": "d-1"
},
{
"math_id": 55,
"text": "x"
},
{
"math_id": 56,
"text": "T^2"
},
{
"math_id": 57,
"text": "y"
},
{
"math_id": 58,
"text": "C^2"
},
{
"math_id": 59,
"text": "H_\\text{Dirac}(\\vec{k}) = \\sum_{j=1}^d \\Gamma_j v_j k_j + m\\Gamma_0 "
},
{
"math_id": 60,
"text": "\\Gamma_1, \\Gamma_2, \\ldots, \\Gamma_d"
},
{
"math_id": 61,
"text": "\\lbrace \\Gamma_i , \\Gamma_j \\rbrace = 2\\delta_{ij}"
},
{
"math_id": 62,
"text": "m\\Gamma_0"
},
{
"math_id": 63,
"text": "k=0"
},
{
"math_id": 64,
"text": "m"
},
{
"math_id": 65,
"text": "\\Gamma_0"
}
] |
https://en.wikipedia.org/wiki?curid=57498426
|
57499209
|
Higher-spin theory
|
A theory with particles of spin more than two
Higher-spin theory or higher-spin gravity is a common name for field theories that contain massless fields of spin greater than two. Usually, the spectrum of such theories contains the graviton as a massless spin-two field, which explains the second name. Massless fields are gauge fields and the theories should be (almost) completely fixed by these higher-spin symmetries. Higher-spin theories are supposed to be consistent quantum theories and, for this reason, to give examples of quantum gravity. Most of the interest in the topic is due to the AdS/CFT correspondence where there is a number of conjectures relating higher-spin theories to weakly coupled conformal field theories. It is important to note that only certain parts of these theories are known at present (in particular, standard action principles are not known) and not many examples have been worked out in detail except some specific toy models (such as the higher-spin extension of pure Chern–Simons, Jackiw–Teitelboim, selfdual (chiral) and Weyl gravity theories).
Free higher-spin fields.
Systematic study of massless arbitrary spin fields was initiated by Christian Fronsdal. A free spin-s field can be represented by a tensor gauge field.
formula_0
This (linearised) gauge symmetry generalises that of massless spin-one (photon) formula_1 and that of massless spin-two (graviton) formula_2. Fronsdal also found linear equations of motion and a quadratic action that is invariant under the symmetries above. For example, the equations are
formula_3
where in the first bracket one needs formula_4 terms more to make the expression symmetric and in the second bracket one needs formula_5 permutations. The equations are gauge invariant provided the field is double-traceless formula_6 and the gauge parameter is traceless formula_7.
Essentially, the higher spin problem can be stated as a problem to find a nontrivial interacting theory with at least one massless higher-spin field (higher in this context usually means greater than two).
A theory for "massive" arbitrary higher-spin fields is proposed by C. Hagen and L. Singh. This massive theory is important because, according to various conjectures, spontaneously broken gauges of higher-spins may contain an infinite tower of "massive" higher-spin particles on the top of the massless modes of lower spins s ≤ 2 like graviton similarly as in string theories.
The linearized version of the higher-spin supergravity gives rise to dual graviton field in first order form. Interestingly, the Curtright field of such dual gravity model is of a mixed symmetry, hence the dual gravity theory can also be "massive". Also the chiral and nonchiral actions can be obtained from the manifestly covariant Curtright action.
No-go theorems.
Possible interactions of massless higher spin particles with themselves and with low spin particles are (over)constrained by the basic principles of quantum field theory like Lorentz invariance. Many results in the form of no-go theorems have been obtained up to date
Flat space.
Most of the no-go theorems constrain interactions in the flat space.
One of the most well-known is the Weinberg low energy theorem that explains "why there are no macroscopic fields corresponding to particles of spin 3 or higher". The Weinberg theorem can be interpreted in the following way: Lorentz invariance of the S-matrix is equivalent, for massless particles, to decoupling of longitudinal states. The latter is equivalent to gauge invariance under the linearised gauge symmetries above. These symmetries lead, for formula_8, to 'too many' conservation laws that trivialise scattering so that formula_9.
Another well-known result is the Coleman–Mandula theorem, which, under certain assumptions, states that any symmetry group of the S-matrix is "necessarily locally isomorphic to the direct product of an internal symmetry group and the Poincaré group". This means that there cannot be any symmetry generators transforming as tensors of the Lorentz group – the S-matrix cannot have symmetries that would be associated with higher-spin charges.
Massless higher spin particles also cannot consistently couple to nontrivial gravitational backgrounds. An attempt to simply replace partial derivatives with the covariant ones turns out to be inconsistent with gauge invariance. Nevertheless, a consistent gravitational coupling does exist in the light-cone gauge (to the lowest order).
Other no-go results include a direct analysis of possible interactions and show, for example, that the gauge symmetries cannot be deformed in a consistent way so that they form an algebra.
Anti-de Sitter space.
In anti-de Sitter space some of the flat space no-go results are still valid and some get slightly modified. In particular, it was shown by Fradkin and Vasiliev that one can consistently couple massless higher-spin fields to gravity at the first non-trivial order. The same result in flat space was obtained by Bengtsson, Bengtsson and Linden in the light-cone gauge the same year. The difference between the flat space result and the AdS one is that the gravitational coupling of massless higher-spin fields cannot be written in the manifestly covariant form in flat space as different from the AdS case.
An AdS analog of the Coleman–Mandula theorem was obtained by Maldacena and Zhiboedov. The AdS/CFT correspondence replaces the flat space S-matrix with holographic correlation functions. It can then be shown that asymptotic higher-spin symmetry in anti-de Sitter space implies that the holographic correlation functions are those of the singlet sector of a free vector model conformal field theory (see also higher-spin AdS/CFT correspondence below). Note that the n-point correlation functions do not all vanish, so this statement is not exactly the analogue of the triviality of the S-matrix. An important difference from the flat space results, e.g. the Coleman–Mandula and Weinberg theorems, is that one can break higher-spin symmetry in a controllable way, which is called slightly broken higher-spin symmetry. In the latter case the holographic S-matrix corresponds to highly nontrivial Chern–Simons matter theories rather than to a free CFT.
As in the flat space case, other no-go results include a direct analysis of possible interactions. Starting from the quartic order a generic higher-spin gravity (defined to be the dual of the free vector model, see also higher-spin AdS/CFT correspondence below) is plagued by non-localities, which is the same problem as in flat space.
Various approaches to higher-spin theories.
The existence of many higher-spin theories is well justified on the basis of the AdS/CFT correspondence, but none of these hypothetical theories is known in full detail. Most of the common approaches to the higher-spin problem are described below.
Chiral higher-spin gravity.
Generic theories with massless higher-spin fields are obstructed by non-localities, see No-go theorems. Chiral higher-spin gravity is a unique higher-spin theory with propagating massless fields that is not plagued by non-localities. It is the smallest nontrivial extension of the graviton with massless higher-spin fields in four dimensions. It has a simple action in the light-cone gauge:
formula_10
where formula_11 represents the two helicity eigenstates formula_12 of a massless spin-formula_13 field in four dimensions (for low spins one finds formula_14 representing a scalar field, where the light-cone gauge makes no difference; one finds formula_15 for photons and formula_16 for gravitons). The action has two coupling constants: a dimensionless formula_17 and a dimensionful formula_18, which can be associated with the Planck length. For three fixed helicities formula_19 there is a unique cubic interaction formula_20, which in the spinor-helicity basis can be represented as formula_21 for positive formula_22. The main feature of the chiral theory is the dependence of the couplings on the helicities, formula_23, which forces the sum formula_22 to be positive (there exists an anti-chiral theory where the sum is negative). The theory is one-loop finite and its one-loop amplitudes are related to those of self-dual Yang–Mills theory. The theory can be thought of as a higher-spin extension of self-dual Yang–Mills theory. The chiral theory admits an extension to anti-de Sitter space, where it is a unique perturbatively local higher-spin theory with propagating massless higher-spin fields.
Conformal higher-spin gravity.
Usual massless higher-spin symmetries generalise the action of the linearised diffeomorphisms from the metric tensor to higher-spin fields. In the
context of gravity one may also be interested in conformal gravity, which enlarges diffeomorphisms with Weyl transformations formula_24, where formula_25 is an arbitrary function. The simplest example of conformal gravity is in four dimensions
formula_26
One can try to generalise this idea to higher-spin fields by postulating the linearised gauge transformations of the form
formula_27
where formula_28 is a higher-spin generalisation of the Weyl symmetry. Unlike massless higher-spin fields, conformal higher-spin fields are much more tractable: they can propagate on nontrivial gravitational backgrounds and admit interactions in flat space. In particular, the action of conformal higher-spin
theories is known to some extent – it can be obtained as an effective action for a free conformal field theory coupled to the conformal higher-spin background.
Collective dipole.
The idea is conceptually similar to the reconstruction approach described below, but performs a complete reconstruction in some sense. One begins with the free formula_29 model partition function and performs a change of variables by passing from the formula_29 scalar fields formula_30, formula_31 to a new bi-local variable formula_32. In the limit of large formula_33 this change of variables is well-defined, but has a nontrivial Jacobian. The same partition function can then be rewritten as a path integral over the bi-local formula_34. It can also be shown that in the free approximation the bi-local variables describe free massless fields of all spins formula_35 in anti-de Sitter space. Therefore, the action in terms of the bi-local formula_34 is a candidate for the action of a higher-spin theory.
Holographic RG flow.
The idea is that the equations of the exact renormalization group can be reinterpreted as equations of motions with the RG energy scale playing the role of the radial coordinate in anti-de Sitter space. This idea can be applied to the conjectural duals of higher-spin theories, for example, to the free formula_29 model.
Noether procedure.
Noether procedure is a canonical perturbative method to introduce interactions. One begins with a sum of free (quadratic) actions formula_36 and linearised gauge symmetries formula_37, which are given by Fronsdal Lagrangian and by the gauge transformations above. The idea is to add all possible corrections that are cubic in the fields formula_38 and, at the same time, allow for field-dependent deformations formula_39 of the gauge transformations. One then requires the full action to be gauge invariant
formula_40
and solves this constraint at the first nontrivial order in the weak-field expansion (note that formula_41 because the free action is gauge invariant). Therefore, the first condition is formula_42. One has to mod out by the trivial solutions that result from nonlinear field redefinitions in the free action. The deformation procedure may not stop at this order, and one may have to add quartic terms formula_43 and further corrections formula_44 to the gauge transformations that are quadratic in the fields, and so on. The systematic approach is via BV-BRST techniques. Unfortunately, the Noether procedure approach has not yet given any complete example of a higher-spin theory, the difficulties being not only technical but also in the conceptual understanding of locality in higher-spin theories. Unless locality is imposed, one can always find a solution to the Noether procedure (for example, by inverting the kinetic operator in formula_42 that results from the second term) or, conversely, one can remove any interaction by performing a suitable nonlocal field redefinition. At present, it seems that higher-spin theories cannot be fully understood as field theories due to the quite non-local interactions they have.
Reconstruction.
The higher-spin AdS/CFT correspondence can be used in the reverse order – one can attempt to build the interaction vertices of the higher-spin theory in such a way that they reproduce the correlation functions of a given conjectural CFT dual. This approach takes advantage of the fact that the kinematics of AdS theories is, to some extent, equivalent to the kinematics of conformal field theories in one dimension lower – one has exactly the same number of independent structures on both sides. In particular, the cubic part of the action of the Type-A higher-spin theory was found by inverting the three-point functions of the higher-spin currents in the free scalar CFT. Some quartic vertices have been reconstructed too.
Three dimensions and Chern–Simons.
In three dimensions neither gravity nor massless higher-spin fields have any propagating degrees of freedom. It is known that the Einstein–Hilbert action with negative cosmological constant can be rewritten in the Chern–Simons form for formula_45
formula_46
where there are two independent formula_47-connections, formula_48 and formula_49. Due to the isomorphisms formula_50 and formula_51 the algebra formula_47 can be understood as the Lorentz algebra in three dimensions. These two connections are related to the vielbein formula_52 and the spin-connection formula_53 (note that in three dimensions the spin-connection, being anti-symmetric in formula_54, is equivalent to an formula_55 vector via formula_56, where formula_57 is the totally anti-symmetric Levi-Civita symbol). Higher-spin extensions are straightforward to construct: instead of an formula_58 connection one can take a connection of formula_59, where formula_60 is any Lie algebra containing the 'gravitational' formula_47 subalgebra. Such theories have been extensively studied due to their relation to AdS/CFT and to W-algebras as asymptotic symmetries.
Vasiliev equations.
Vasiliev equations are formally consistent gauge invariant nonlinear equations whose linearization over a specific vacuum solution describes free massless higher-spin fields on anti-de Sitter space. The Vasiliev equations are classical equations and no Lagrangian is known that starts from the canonical two-derivative Fronsdal Lagrangian and is completed by interaction terms. There are a number of variations of the Vasiliev equations that work in three, four and an arbitrary number of space-time dimensions. Vasiliev's equations admit supersymmetric extensions with any number of supersymmetries and allow for Yang–Mills gaugings. Vasiliev's equations are background independent, the simplest exact solution being anti-de Sitter space. However, locality was not an assumption used in the derivation and, for this reason, some of the results obtained from the equations are inconsistent with higher-spin theories and AdS/CFT duality. Locality issues remain to be clarified.
Higher-spin AdS/CFT correspondence.
Higher-spin theories are of interest as models of AdS/CFT correspondence.
Klebanov–Polyakov conjecture.
In 2002, Klebanov and Polyakov put forward a conjecture that the free and critical formula_29 vector models, as conformal field theories in three dimensions, should be dual to a theory in four-dimensional anti-de Sitter space with an infinite number of massless higher-spin gauge fields. This conjecture was further extended and generalised to Gross–Neveu and supersymmetric models. The most general extension is to a class of Chern–Simons matter theories.
The rationale for the conjectures is that there are some conformal field theories that, in addition to the stress-tensor, have an infinite number of conserved tensors formula_61, where spin runs over all positive integers formula_62 (in the formula_29 model the spin is even). The stress-tensor corresponds to the formula_63 case. By the standard AdS/CFT lore, the fields that are dual to conserved currents have to be gauge fields. For example, the stress-tensor is dual to the spin-two graviton field. A generic example of a conformal field theory with higher-spin currents is any free CFT. For instance, the free formula_29 model is defined by
formula_64
where formula_65. It can be shown that there exists an infinite number of quasi-primary operators
formula_66
that are conserved. Under certain assumptions it was shown by Maldacena and Zhiboedov that 3d conformal field theories with higher spin currents are free, which can be extended to any dimension greater than two. Therefore, higher-spin theories are generic duals of free conformal field theories. A theory that is dual to the free scalar CFT is called Type-A in the literature and the theory that is dual to the free fermion CFT is called Type-B.
Another example is the critical vector model, which is a theory with action
formula_67
taken at the fixed point. This theory is interacting and does not have conserved higher-spin currents. However, in the large N limit it can be shown to have 'almost' conserved
higher-spin currents and the conservation is broken by formula_68 effects. More generally, free and critical vector models belong to the class of Chern–Simons matter theories that have slightly broken higher-spin symmetry.
Gaberdiel–Gopakumar conjecture.
The conjecture put forward by Gaberdiel and Gopakumar is an extension of the Klebanov–Polyakov conjecture to formula_69. It states that the formula_70 minimal models in the large formula_33 limit should be dual to theories with massless higher-spin fields and two scalar fields. Massless higher-spin fields do not propagate in three dimensions, but can be described, as is discussed above, by the Chern–Simons action. However, it is not known how to extend this action to include the matter fields required by the duality.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\delta \\Phi_{\\mu_1\\mu_2...\\mu_s}=\\partial_{\\mu_1}\\xi_{\\mu_2...\\mu_s}+\\text{permutations}"
},
{
"math_id": 1,
"text": "\\delta A_\\mu =\\partial_\\mu \\xi"
},
{
"math_id": 2,
"text": "\\delta h_{\\mu\\nu}=\\partial_\\mu \\xi_\\nu +\\partial_\\nu \\xi_\\mu"
},
{
"math_id": 3,
"text": " \\square \\Phi_{\\mu_1\\mu_2...\\mu_s} -\\left(\\partial_{\\mu_1}\\partial^\\nu \\Phi_{\\nu\\mu_2...\\mu_s}+\\text{ permutations}\\right) + \\frac12 \\left(\n\\partial_{\\mu_1}\\partial_{\\mu_2}\\Phi^\\nu{}_{\\nu\\mu_3...\\mu_s}+\\text{permutations}\\right)=0"
},
{
"math_id": 4,
"text": "s-1"
},
{
"math_id": 5,
"text": "s(s-1)/2-1"
},
{
"math_id": 6,
"text": "\\Phi^\\nu{}_\\nu{}^\\lambda{}_{\\lambda\\mu_5...\\mu_s}=0"
},
{
"math_id": 7,
"text": "\\xi^\\nu{}_{\\nu\\mu_3...\\mu_{s-1}}=0"
},
{
"math_id": 8,
"text": "s>2"
},
{
"math_id": 9,
"text": "S=1"
},
{
"math_id": 10,
"text": "\\mathcal{S}=\\int \\mathrm{d}^4x \\left[\\sum_{\\lambda\\geq0} \\Phi_{-\\lambda} \\square \\Phi_{\\lambda}+\\sum_{\\lambda_{1,2,3}} \\frac{g\\, {\\mathrm{l_p}}^{\\lambda_1+\\lambda_2+\\lambda_3-1} }{\\Gamma(\\lambda_1+\\lambda_2+\\lambda_3)} V_{\\lambda_1,\\lambda_2,\\lambda_3} \\Phi_{\\lambda_1}\\Phi_{\\lambda_2}\\Phi_{\\lambda_3}\\right]"
},
{
"math_id": 11,
"text": "\\Phi_\\lambda(x)"
},
{
"math_id": 12,
"text": "\\lambda=\\pm s"
},
{
"math_id": 13,
"text": "s"
},
{
"math_id": 14,
"text": "\\Phi_0"
},
{
"math_id": 15,
"text": "\\Phi_{\\pm1}"
},
{
"math_id": 16,
"text": "\\Phi_{\\pm2}"
},
{
"math_id": 17,
"text": "g"
},
{
"math_id": 18,
"text": "\\mathrm{l}_p"
},
{
"math_id": 19,
"text": "\\lambda_{1,2,3}"
},
{
"math_id": 20,
"text": "V_{\\lambda_1,\\lambda_2,\\lambda_3}"
},
{
"math_id": 21,
"text": "[12]^{\\lambda_1+\\lambda_2-\\lambda_3}[23]^{\\lambda_2+\\lambda_3-\\lambda_1}[13]^{\\lambda_1+\\lambda_3-\\lambda_2}"
},
{
"math_id": 22,
"text": "\\lambda_1+\\lambda_2+\\lambda_3"
},
{
"math_id": 23,
"text": "\\Gamma(\\lambda_1+\\lambda_2+\\lambda_3)^{-1}"
},
{
"math_id": 24,
"text": "g_{\\mu\\nu}\\rightarrow\\Omega^2(x)g_{\\mu\\nu}"
},
{
"math_id": 25,
"text": "\\Omega(x)"
},
{
"math_id": 26,
"text": "\\mathcal{S}=\\int \\mathrm{d}^4x \\sqrt{-g} C_{\\mu\\nu\\lambda\\rho}C^{\\mu\\nu\\lambda\\rho}"
},
{
"math_id": 27,
"text": " \\delta \\Phi_{\\mu_1\\mu_2...\\mu_s}=\\partial_{\\mu_1}\\xi_{\\mu_2...\\mu_s}+g_{\\mu_1\\mu_2} \\zeta_{\\mu_3...\\mu_s}+\\text{permutations}"
},
{
"math_id": 28,
"text": "\\zeta_{\\mu_1...\\mu_{s-2}}"
},
{
"math_id": 29,
"text": "O(N)"
},
{
"math_id": 30,
"text": "\\phi^i(x)"
},
{
"math_id": 31,
"text": "i=1,...,N"
},
{
"math_id": 32,
"text": "\\Psi(x,y)=\\sum_i \\phi^i(x)\\phi^i(y)"
},
{
"math_id": 33,
"text": "N"
},
{
"math_id": 34,
"text": "\\Psi(x,y)"
},
{
"math_id": 35,
"text": "s=0,1,2,3,...."
},
{
"math_id": 36,
"text": "S_2"
},
{
"math_id": 37,
"text": "\\delta_0 "
},
{
"math_id": 38,
"text": "S_3"
},
{
"math_id": 39,
"text": "\\delta_1"
},
{
"math_id": 40,
"text": "0=\\delta S=\\delta_0 S_2+\\delta_0 S_3 +\\delta_1 S_2+..."
},
{
"math_id": 41,
"text": "\\delta_0 S_2=0"
},
{
"math_id": 42,
"text": "\\delta_0 S_3 +\\delta_1 S_2=0"
},
{
"math_id": 43,
"text": "S_4"
},
{
"math_id": 44,
"text": "\\delta_2"
},
{
"math_id": 45,
"text": "SL(2,\\mathbb{R})\\oplus SL(2,\\mathbb{R})"
},
{
"math_id": 46,
"text": "S=S_{CS}(A)-S_{CS}(\\bar{A}) \\qquad \\qquad S_{CS}(A)=\\frac{k}{4\\pi} \\int \\mathrm{tr}(A\\wedge dA+\\frac23 A\\wedge A \\wedge A)\\,,"
},
{
"math_id": 47,
"text": "sl(2,\\mathbb{R})"
},
{
"math_id": 48,
"text": "A"
},
{
"math_id": 49,
"text": "\\bar{A}"
},
{
"math_id": 50,
"text": "so(2,2)\\sim sl(2,\\mathbb{R})\\oplus sl(2,\\mathbb{R})"
},
{
"math_id": 51,
"text": "sl(2,\\mathbb{R})\\sim so(2,1)"
},
{
"math_id": 52,
"text": "e^a_\\mu "
},
{
"math_id": 53,
"text": "\\omega^{a,b}_\\mu"
},
{
"math_id": 54,
"text": "a,b"
},
{
"math_id": 55,
"text": "so(2,1)"
},
{
"math_id": 56,
"text": "\\tilde{\\omega}^a_\\mu=\\epsilon^a{}_{bc}\\omega^{b,c}_\\mu"
},
{
"math_id": 57,
"text": "\\epsilon^{abc}"
},
{
"math_id": 58,
"text": "sl(2,\\mathbb{R})\\oplus sl(2,\\mathbb{R})"
},
{
"math_id": 59,
"text": "\\mathfrak{g}\\oplus \\mathfrak{g}"
},
{
"math_id": 60,
"text": "\\mathfrak{g}"
},
{
"math_id": 61,
"text": "\\partial^c j_{ca_2...a_s}=0"
},
{
"math_id": 62,
"text": "s=1,2,3,..."
},
{
"math_id": 63,
"text": "s=2"
},
{
"math_id": 64,
"text": "S=\\frac12 \\int d^dx\\, \\partial_m\\phi^i \\partial^m \\phi^j \\delta_{ij},"
},
{
"math_id": 65,
"text": "i,j=1,...,N"
},
{
"math_id": 66,
"text": "j_{a_1a_2...a_s}=\\partial_{a_1}...\\partial_{a_s}\\phi^i \\phi^j\\delta_{ij} +\\text{plus terms with different arrangement of derivatives and minus traces}"
},
{
"math_id": 67,
"text": "S= \\int d^3x\\, \\frac12\\partial_m\\phi^i \\partial^m \\phi^j \\delta_{ij}+\\frac{\\lambda}{4} (\\phi^i \\phi^j \\delta_{ij})^2"
},
{
"math_id": 68,
"text": "1/N"
},
{
"math_id": 69,
"text": "AdS_3/CFT^2"
},
{
"math_id": 70,
"text": "W_N"
}
] |
https://en.wikipedia.org/wiki?curid=57499209
|
57500874
|
Jostel's TSH index
|
Jostel's TSH index (TSHI or JTI), also referred to as Jostel's thyrotropin index or Thyroid Function index (TFI), is a method for estimating the thyrotropic (i.e. thyroid stimulating) function of the anterior pituitary lobe in a quantitative way. The equation has been derived from the logarithmic standard model of thyroid homeostasis. A 2014 paper suggested that further study was needed to establish its usefulness, but the 2018 guideline of the European Thyroid Association for the diagnosis of uncertain cases of central hypothyroidism regarded it as beneficial. It is also recommended for purposes of differential diagnosis in the sociomedical expert assessment.
How to determine JTI.
Jostel's TSH index can be calculated with
formula_0
from equilibrium serum concentrations of thyrotropin (TSH), free T4 (FT4) and a correction coefficient derived from the logarithmic standard model (β = 0.1345).
An alternative standardised form (the standardised TSH index or sTSHI) is calculated with
formula_1
as a z-transformed value incorporating the mean (2.7) and standard deviation (0.676) of TSHI in a reference population.
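A minimal computational sketch of the two formulas above (the TSH and FT4 values are hypothetical, and FT4 is assumed to be given in pmol/L):

import math

def tsh_index(tsh, ft4, beta=0.1345):
    # Jostel's TSH index: TSHI = ln(TSH) + beta * FT4
    return math.log(tsh) + beta * ft4

def standardised_tsh_index(tshi, mean=2.7, sd=0.676):
    # standardised (z-transformed) TSH index
    return (tshi - mean) / sd

tshi = tsh_index(tsh=1.8, ft4=15.0)   # hypothetical laboratory values
print(round(tshi, 2), round(standardised_tsh_index(tshi), 2))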
Clinical significance.
The TSH index is reduced in patients with secondary hypothyroidism resulting from thyrotropic insufficiency. For this indication, it has, however, up to now only been validated in adults. JTI was also found reduced in cases of TACITUS syndrome (non-thyroidal illness syndrome) as an example of type 1 thyroid allostasis. Conversely, an elevated thyroid function index may serve as a biomarker for type 2 allostasis and contextual stress.
Jostel's TSH index may decrease under therapy with the antidiabetic drug metformin, especially in women under oral contraceptives.
In two large population-based cohorts included in the Study of Health in Pomerania, the TSH index was differentially correlated with several markers of body composition. The correlation was positive with body mass index (BMI), waist circumference and fat mass, but negative with body cell mass. With the exception of fat mass, all correlations were age-dependent. Very similar observations had been made earlier in the NHANES dataset.
In Parkinson's disease, JTI is significantly elevated in early sub-types of the disease compared to an advanced group.
A longitudinal study in euthyroid subjects with structural heart disease found that JTI predicts the risk of malignant arrhythmia including ventricular fibrillation and ventricular tachycardia. This applies to both incidence and event-free survival. It was therefore concluded that an elevated set point of thyroid homeostasis may contribute to cardiovascular risk. A positive correlation of JTI to SIQALS 2, a score for allostatic load, suggests that thyroid hormones are among the mediators linking stress to major cardiovascular endpoints.
Another study demonstrated that the TSH index correlates inversely with the thyroid's secretory capacity and thyroid volume. It is unclear whether this finding reflects shortcomings of the index (i.e. low specificity in the setting of subclinical hypothyroidism) or plastic responses of the pituitary gland to incipient hypothyroidism.
In subjects with type 2 diabetes, treatment with beta blockers resulted in increased TSH index, but the mechanism is unclear.
Negative correlation of Jostel's TSH index to the urinary excretion of certain phthalates suggests that endocrine disruptors may affect the central set point of thyroid homeostasis.
Drugs that reduce the TSH index, probably via effects on the central set point of the feedback loop, include mirtazapine and oxcarbazepine.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "TSHI = \\ln(TSH) + 0.1345 \\cdot FT4"
},
{
"math_id": 1,
"text": "sTSHI = \\frac {TSHI - 2.7} {0.676}"
}
] |
https://en.wikipedia.org/wiki?curid=57500874
|
57504451
|
Algorithms and Combinatorics
|
Algorithms and Combinatorics () is a book series in mathematics, and particularly in combinatorics and the design and analysis of algorithms. It is published by Springer Science+Business Media, and was founded in 1987.
Books.
The books published in this series include:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}^n"
}
] |
https://en.wikipedia.org/wiki?curid=57504451
|
57506816
|
Hall circles
|
Hall circles (also known as M-circles and N-circles) are a graphical tool in control theory used to obtain values of a closed-loop transfer function from the Nyquist plot (or the Nichols plot) of the associated open-loop transfer function. Hall circles were introduced into control theory by Albert C. Hall in his thesis.
Construction.
Consider a closed-loop linear control system with open-loop transfer function formula_0 and unit gain in the feedback loop. The closed-loop transfer function is given by formula_1.
To check the stability of "T"("s"), it is possible to use the Nyquist stability criterion with the Nyquist plot of the open-loop transfer function "G"("s"). Note, however, that the Nyquist plot of "G"("s") alone does not give the actual values of "T"("s"). To obtain this information in the "G"("s")-plane, Hall proposed to construct the locus of points in the "G"("s")-plane such that "T"("s") has constant magnitude, and also the locus of points in the "G"("s")-plane such that "T"("s") has constant phase angle.
Given a positive real value "M" representing a fixed magnitude, and denoting G(s) by "z", the points satisfying formula_2are given by the points "z" in the "G"("s")-plane such that the ratio of the distance between "z" and 0 and the distance between "z" and -1 is equal to "M". The points "z" satisfying this locus condition are circles of Apollonius, and this locus is known in the context of control systems as "M-circles".
Given a positive real value "N" representing a phase angle, the points satisfying formula_3are given by the points z in the "G"("s")-plane such that the angle between -1 and z and the angle between 0 and z is constant. In other words, the angle opposed to the line segment between -1 and 0 must be constant. This implies that the points z satisfying this locus condition are arcs of circles, and this locus is known in the context of control systems as "N-circles".
Usage.
To use the Hall circles, the M and N circles are plotted over the Nyquist plot of the open-loop transfer function. The points of intersection between these graphs give the corresponding values of the closed-loop transfer function.
Hall circles are also used with the Nichols plot and in this setting are also known as the Nichols chart. Rather than overlaying the Hall circles directly on the Nichols plot, the points of the circles are transferred to a new coordinate system in which the ordinate is given by formula_4 and the abscissa is given by formula_5. The advantage of using the Nichols chart is that adjusting the gain of the open-loop transfer function directly translates the Nichols plot up or down in the chart.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G(s)"
},
{
"math_id": 1,
"text": " T(s) = \\frac{G(s)}{1+G(s)} "
},
{
"math_id": 2,
"text": " M = |T(s)| = \\frac{|G(s)|}{|1+G(s)|} = \\frac{|z|}{|1+z|} "
},
{
"math_id": 3,
"text": " N = \\arg \\left[\\frac{G(s)}{1+G(s)}\\right] = \\arg[G(s)] - \\arg[1+G(s)] = \\arg[z] - \\arg[1+z] "
},
{
"math_id": 4,
"text": " 20 \\log_{10}(|G(s)|) "
},
{
"math_id": 5,
"text": " \\arg(G(s))"
}
] |
https://en.wikipedia.org/wiki?curid=57506816
|
57522708
|
Ayşe Şahin
|
Turkish-American mathematician
Ayşe Arzu Şahin is a Turkish-American mathematician who works in dynamical systems. She was appointed the Dean of the College of Science and Mathematics at Wright State University in June 2020, and is a co-author of two textbooks on calculus and dynamical systems.
Education and career.
Şahin graduated from Mount Holyoke College in 1988. She completed her Ph.D. in 1994 at the University of Maryland, College Park. Her dissertation, "Tiling Representations of formula_0 Actions and formula_1-Equivalence in Two Dimensions", was supervised by Daniel Rudolph.
She joined the mathematics faculty at North Dakota State University, where she worked from 1994 until 2001, when she moved to DePaul University. At DePaul, she became a full professor in 2010, and co-directed a master's program in Middle School Mathematics. She moved again in 2015, becoming Chair of the Department of Mathematics and Statistics at Wright State University.
Books.
In 2017, with Kathleen Madden and Aimee Johnson, Şahin published the textbook "Discovering Discrete Dynamical Systems" through the Mathematical Association of America. She is also a co-author of "Calculus: Single and Multivariable" (7th ed., Wiley, 2016), a text whose many other co-authors include Deborah Hughes Hallett, William G. McCallum, and Andrew M. Gleason.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{R}^2"
},
{
"math_id": 1,
"text": "\\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=57522708
|
5752486
|
Hydroxymethylglutaryl-CoA reductase
|
In enzymology, a Hydroxymethylglutaryl-CoA reductase (EC 1.1.1.88) is an enzyme that catalyzes the chemical reaction
(R)-mevalonate + CoA + 2 NAD+ formula_0 3-hydroxy-3-methylglutaryl-CoA + 2 NADH + 2 H+
The 3 substrates of this enzyme are (R)-mevalonate, CoA, and NAD+, whereas its 3 products are 3-hydroxy-3-methylglutaryl-CoA, NADH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (R)-mevalonate:NAD+ oxidoreductase (CoA-acylating). Other names in common use include beta-hydroxy-beta-methylglutaryl coenzyme A reductase, beta-hydroxy-beta-methylglutaryl CoA-reductase, 3-hydroxy-3-methylglutaryl coenzyme A reductase, and hydroxymethylglutaryl coenzyme A reductase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=5752486
|
57525937
|
Orbital angular momentum of free electrons
|
Quantised attribute of electrons in free space
Electrons in free space can carry quantized orbital angular momentum (OAM) projected along the direction of propagation. This orbital angular momentum corresponds to helical wavefronts, or, equivalently, a phase proportional to the azimuthal angle. Electron beams with quantized orbital angular momentum are also called electron vortex beams.
Theory.
An electron in free space travelling at non-relativistic speeds follows the Schrödinger equation for a free particle, that is formula_0 where formula_1 is the reduced Planck constant, formula_2 is the single-electron wave function, formula_3 its mass, formula_4 the position vector, and formula_5 is time.
This equation is a type of wave equation and when written in the Cartesian coordinate system (formula_6,formula_7,formula_8), the solutions are given by a linear combination of plane waves, in the form of formula_9 where formula_10 is the linear momentum and formula_11 is the electron energy, given by the usual dispersion relation formula_12. By measuring the momentum of the electron, its wave function must collapse and give a particular value. If the energy of the electron beam is selected beforehand, the total momentum (not its directional components) of the electrons is fixed to a certain degree of precision.
When the Schrödinger equation is written in the cylindrical coordinate system (formula_13,formula_14,formula_8), the solutions are no longer plane waves, but instead are given by Bessel beams, solutions that are a linear combination of formula_15 that is, the product of three types of functions: a plane wave with momentum formula_16 in the formula_8-direction, a radial component written as a Bessel function of the first kind formula_17, where formula_18 is the linear momentum in the radial direction, and finally an azimuthal component written as formula_19 where formula_20 (sometimes written formula_21) is the magnetic quantum number related to the angular momentum formula_22 in the formula_8-direction. Thus, the dispersion relation reads formula_23. By azimuthal symmetry, the wave function has the property that formula_24 is necessarily an integer, thus formula_25 is quantized. If a measurement of formula_22 is performed on an electron with selected energy, as formula_26 does not depend on formula_20, it can give any integer value. It is possible to experimentally prepare states with non-zero formula_20 by adding an azimuthal phase to an initial state with formula_27; experimental techniques designed to measure the orbital angular momentum of a single electron are under development. Simultaneous measurement of electron energy and orbital angular momentum is allowed because the Hamiltonian commutes with the angular momentum operator related to formula_22.
Note that the equations above hold for any free quantum particle with mass, not necessarily electrons. The quantization of formula_28 can also be shown in the spherical coordinate system, where the wave function reduces to a product of spherical Bessel functions and spherical harmonics.
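As an illustration of the cylindrical solutions above, the following sketch constructs the transverse part of a Bessel beam for a chosen value of formula_20 and checks that its phase winds by formula_20 full turns around the propagation axis; the parameter values are arbitrary and units with formula_1 = 1 are assumed.

import numpy as np
from scipy.special import jv

l = 2            # orbital angular momentum quantum number
p_rho = 1.0      # radial momentum (units with hbar = 1)

# transverse wave function psi(rho, theta) = J_|l|(p_rho * rho) * exp(i * l * theta),
# sampled on a circle of radius rho0 around the propagation axis
theta = np.linspace(-np.pi, np.pi, 400)
rho0 = 5.0
psi = jv(abs(l), p_rho * rho0) * np.exp(1j * l * theta)

phase = np.unwrap(np.angle(psi))
print(round((phase[-1] - phase[0]) / (2 * np.pi)))   # -> 2: the phase winds l times

# on the axis the amplitude vanishes for l != 0 (the vortex core)
print(jv(abs(l), 0.0))                                # -> 0.0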
Preparation.
There are a variety of methods to prepare an electron in an orbital angular momentum state. All methods involve an interaction with an optical element such that the electron acquires an azimuthal phase. The optical element can be material, magnetostatic, or electrostatic. It is possible to either directly imprint an azimuthal phase, or to imprint an azimuthal phase with a holographic diffraction grating, where the grating pattern is defined by the interference of the azimuthal phase and a planar or spherical carrier wave.
Applications.
Electron vortex beams have a variety of proposed and demonstrated applications, including for mapping magnetization, studying chiral molecules and chiral plasmon resonances, and identification of crystal chirality.
Measurement.
Interferometric methods borrowed from light optics also work to determine the orbital angular momentum of free electrons in pure states. Interference with a planar reference wave, diffractive filtering and self-interference can serve to characterize a prepared electron orbital angular momentum state. In order to measure the orbital angular momentum of a superposition or of the mixed state that results from interaction with an atom or material, a non-interferometric method is necessary. Wavefront flattening, transformation of an orbital angular momentum state into a planar wave, or cylindrically symmetric Stern-Gerlach-like measurement is necessary to measure the orbital angular momentum mixed or superposition state.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "i\\hbar\\frac{\\partial}{\\partial t} \\Psi(\\mathbf{r},t) = \\frac{-\\hbar^2}{2m}\\nabla^2 \\Psi(\\mathbf{r},t),"
},
{
"math_id": 1,
"text": "\\hbar"
},
{
"math_id": 2,
"text": "\\Psi(\\mathbf r , t)"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "\\mathbf r"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "y"
},
{
"math_id": 8,
"text": "z"
},
{
"math_id": 9,
"text": "\\Psi_{\\mathbf p}(\\mathbf{r},t)\\propto e^{i(\\mathbf{p}\\cdot\\mathbf{r}-E(\\mathbf{p})t)/\\hbar}"
},
{
"math_id": 10,
"text": "\\mathbf{p}"
},
{
"math_id": 11,
"text": "E(\\mathbf{p})"
},
{
"math_id": 12,
"text": "E(\\mathbf{p})=\\frac{p^2}{2m}"
},
{
"math_id": 13,
"text": "\\rho"
},
{
"math_id": 14,
"text": "\\theta"
},
{
"math_id": 15,
"text": "\\Psi_{p_\\rho,\\,p_z,\\,\\ell}(\\rho,\\theta,z)\\propto J_{|\\ell|}\\left(\\frac{p_\\rho \\rho}{\\hbar} \\right)e^{i(p_zz-Et)/\\hbar}e^{i\\ell\\theta},"
},
{
"math_id": 16,
"text": "p_z"
},
{
"math_id": 17,
"text": "J_{|\\ell|}"
},
{
"math_id": 18,
"text": "p_\\rho"
},
{
"math_id": 19,
"text": "e^{i\\ell\\theta}"
},
{
"math_id": 20,
"text": "\\ell"
},
{
"math_id": 21,
"text": "m_z"
},
{
"math_id": 22,
"text": "L_z"
},
{
"math_id": 23,
"text": "E=(p_z^2+p_\\rho^2)/2m"
},
{
"math_id": 24,
"text": "\\ell=0,\\pm 1,\\pm2,\\cdots"
},
{
"math_id": 25,
"text": "L_z = \\hbar\\ell"
},
{
"math_id": 26,
"text": "E"
},
{
"math_id": 27,
"text": "\\ell = 0"
},
{
"math_id": 28,
"text": "L_z"
}
] |
https://en.wikipedia.org/wiki?curid=57525937
|
57526
|
Péclet number
|
Ratio of a fluid's advective and diffusive transport rates
In continuum mechanics, the Péclet number (Pe, after Jean Claude Eugène Péclet) is a class of dimensionless numbers relevant in the study of transport phenomena in a continuum. It is defined to be the ratio of the rate of advection of a physical quantity by the flow to the rate of diffusion of the same quantity driven by an appropriate gradient. In the context of species or mass transfer, the Péclet number is the product of the Reynolds number and the Schmidt number (Re × Sc). In the context of the thermal fluids, the thermal Péclet number is equivalent to the product of the Reynolds number and the Prandtl number (Re × Pr).
The Péclet number is defined as:
formula_0
For mass transfer, it is defined as:
formula_1
This ratio can also be rewritten in terms of times, as a ratio between the characteristic time scales of the system:
formula_2
For formula_3, diffusion happens over a much longer time than advection, and therefore the latter of the two phenomena predominates in the mass transport.
For heat transfer, the Péclet number is defined as:
formula_4
where L is the characteristic length, u the local flow velocity, D the mass diffusion coefficient, Re the Reynolds number, Sc the Schmidt number, Pr the Prandtl number, and α the thermal diffusivity,
formula_5
where k is the thermal conductivity, ρ the density, and cp the specific heat capacity.
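A minimal numerical sketch of these definitions, using hypothetical, roughly water-like property values and an arbitrary 1 cm channel:

L   = 0.01       # characteristic length, m
u   = 0.1        # local flow velocity, m/s
D   = 1e-9       # mass diffusion coefficient, m^2/s
k   = 0.6        # thermal conductivity, W/(m K)
rho = 1000.0     # density, kg/m^3
cp  = 4180.0     # specific heat capacity, J/(kg K)

alpha   = k / (rho * cp)   # thermal diffusivity, m^2/s
Pe_mass = L * u / D        # mass-transfer Peclet number (= Re * Sc)
Pe_heat = L * u / alpha    # thermal Peclet number       (= Re * Pr)

print(f"Pe (mass) = {Pe_mass:.3g}, Pe (heat) = {Pe_heat:.3g}")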
In engineering applications the Péclet number is often very large. In such situations, the dependency of the flow upon "downstream" locations is diminished, and variables in the flow tend to become 'one-way' properties. Thus, when modelling certain situations with high Péclet numbers, simpler computational models can be adopted.
A flow will often have different Péclet numbers for heat and mass. This can lead to the phenomenon of double diffusive convection.
In the context of particulate motion the Péclet number has also been called Brenner number, with symbol Br, in honour of Howard Brenner.
The Péclet number also finds applications beyond transport phenomena, as a general measure of the relative importance of random fluctuations and of the systematic average behavior in mesoscopic systems.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{Pe} = \\dfrac{ \\mbox{advective transport rate} }{ \\mbox{diffusive transport rate} }"
},
{
"math_id": 1,
"text": "\\mathrm{Pe}_L = \\frac{L u}{D} = \\mathrm{Re}_L \\, \\mathrm{Sc}"
},
{
"math_id": 2,
"text": "\\mathrm{Pe}_L = \\frac{u/L}{D/L^2} = \\frac{L^2/D}{L/u} = \\frac{\\mbox{diffusion time}}{\\mbox{advection time}}"
},
{
"math_id": 3,
"text": "\\mathrm{Pe_L} \\gg 1"
},
{
"math_id": 4,
"text": "\\mathrm{Pe}_L = \\frac{L u}{\\alpha} = \\mathrm{Re}_L \\, \\mathrm{Pr}."
},
{
"math_id": 5,
"text": "\\alpha = \\frac{k}{\\rho c_p}"
}
] |
https://en.wikipedia.org/wiki?curid=57526
|
57528170
|
Rayleigh–Lorentz pendulum
|
The Rayleigh–Lorentz pendulum (or Lorentz pendulum), named after Lord Rayleigh and Hendrik Lorentz, is a simple pendulum whose frequency is slowly varied by an external action (for example, by slowly varying the pendulum length). This problem formed the basis for the concept of adiabatic invariants in mechanics. On account of the slow variation of the frequency, it can be shown that the ratio of the average energy to the frequency remains constant.
History.
The pendulum problem was first formulated by Lord Rayleigh in 1902, although some mathematical aspects had been discussed earlier by Léon Lecornu in 1895 and Charles Bossut in 1778. Unaware of Rayleigh's work, at the first Solvay conference in 1911, Hendrik Lorentz posed the question, "How does a simple pendulum behave when the length of the suspending thread is gradually shortened?", in order to clarify the quantum theory of the time. Albert Einstein responded the next day by saying that both the energy and the frequency of the quantum pendulum change such that their ratio is constant, so that the pendulum remains in the same quantum state as the initial state. These two separate works formed the basis for the concept of the adiabatic invariant, which found applications in various fields and in the old quantum theory. In 1958, Subrahmanyan Chandrasekhar took interest in the problem and studied it, renewing interest in the topic, which was subsequently studied by many other researchers, including John Edensor Littlewood.
Mathematical description.
The equation of the simple harmonic motion with frequency formula_0 for the displacement formula_1 is given by
formula_2
If the frequency is constant, the solution is simply given by formula_3. But if the frequency is allowed to vary slowly with time, formula_4, or more precisely, if the characteristic time scale of the frequency variation is much longer than the period of oscillation, i.e.,
formula_5
then it can be shown that
formula_6
where formula_7 is the energy averaged over an oscillation. Since the frequency changes with time due to the external action, conservation of energy no longer holds and the energy over a single oscillation is not constant. During an oscillation the frequency changes (however slowly), and so does the energy. Therefore, to describe the system, one defines the average energy per unit mass for a given potential formula_8 as follows
formula_9
where the closed integral denotes that it is taken over a complete oscillation. Defined this way, the averaging weights each element of the orbit by the fraction of time that the pendulum spends in that element. For the simple harmonic oscillator, it reduces to
formula_10
where both the amplitude and frequency are now functions of time.
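A numerical sketch of this statement, with an arbitrarily chosen slowly varying frequency profile: integrating the oscillator equation and evaluating the instantaneous energy shows that the energy changes while the ratio of energy to frequency stays nearly constant.

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01                                                  # slowness parameter
omega = lambda t: 1.0 + 0.5 * np.tanh(eps * (t - 200.0))    # slowly varying frequency

def rhs(t, y):
    x, v = y
    return [v, -omega(t)**2 * x]

sol = solve_ivp(rhs, (0.0, 400.0), [1.0, 0.0], max_step=0.01, dense_output=True)

for t in (0.0, 200.0, 400.0):
    x, v = sol.sol(t)
    E = 0.5 * v**2 + 0.5 * omega(t)**2 * x**2    # instantaneous energy per unit mass
    print(f"t={t:5.0f}  omega={omega(t):.3f}  E={E:.4f}  E/omega={E/omega(t):.4f}")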
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\omega"
},
{
"math_id": 1,
"text": "x(t)"
},
{
"math_id": 2,
"text": "\\ddot{x} +\\omega^2 x=0."
},
{
"math_id": 3,
"text": "x=A\\cos(\\omega t+\\phi)"
},
{
"math_id": 4,
"text": "\\omega = \\omega(t)"
},
{
"math_id": 5,
"text": "\\left|\\frac{1}{\\omega} \\frac{d\\omega}{dt}\\right| \\ll \\omega,"
},
{
"math_id": 6,
"text": "\\frac{\\bar{E}}{\\omega} = \\text{constant},"
},
{
"math_id": 7,
"text": "\\bar{E}"
},
{
"math_id": 8,
"text": "V(x;\\omega)"
},
{
"math_id": 9,
"text": "\\bar{E} = \\frac{\\displaystyle\\oint dt \\left[\\tfrac{1}{2} \\left(\\dot{x}\\right)^2 + V(x(t);\\omega(t))\\right] }{\\displaystyle \\oint dt}"
},
{
"math_id": 10,
"text": "\\bar{E} = \\tfrac{1}{2} A^2\\omega^2"
}
] |
https://en.wikipedia.org/wiki?curid=57528170
|
57530022
|
Richard Fork
|
American physicist
Richard L. Fork (1 September 1935 – 16 May 2018) was an American physicist.
Biography.
Fork received a bachelor's degree in mathematics and physics from Principia College in 1957, and earned his doctorate in physics from the Massachusetts Institute of Technology. He began working for Bell Laboratories in 1962, and joined the faculty of Rensselaer Polytechnic Institute in 1990. Four years later, Dr. Fork left Rensselaer for the University of Alabama in Huntsville. Over the course of his career, Fork was granted fellowship of the American Physical Society and the Optical Society of America. He retired in 2017 and died on May 16, 2018, of respiratory arrest in Huntsville. Dr. Fork also acted as a mentor who guided and assisted dozens of students pursuing optics-, physics-, and laser-based degrees at UAH.
Achievements.
Richard Fork was very active in the field of generating light pulses with lasers.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "l_{eff} = [l-2(f_1+f_2)](f_1/f_2)^2"
},
{
"math_id": 1,
"text": "l_{eff}"
}
] |
https://en.wikipedia.org/wiki?curid=57530022
|
57530282
|
Tennis ball theorem
|
Smooth curves that evenly divide the area of a sphere have at least 4 inflections
In geometry, the tennis ball theorem states that any smooth curve on the surface of a sphere that divides the sphere into two equal-area subsets without touching or crossing itself must have at least four inflection points, points at which the curve does not consistently bend to only one side of its tangent line.
The tennis ball theorem was first published under this name by Vladimir Arnold in 1994, and is often attributed to Arnold, but a closely related result appears earlier in a 1968 paper by Beniamino Segre, and the tennis ball theorem itself is a special case of a theorem in a 1977 paper by Joel L. Weiner. The name of the theorem comes from the standard shape of a tennis ball, whose seam forms a curve that meets the conditions of the theorem; the same kind of curve is also used for the seams on baseballs.
The tennis ball theorem can be generalized to any curve that is not contained in a closed hemisphere. A centrally symmetric curve on the sphere must have at least six inflection points. The theorem is analogous to the four-vertex theorem according to which any smooth closed plane curve has at least four points of extreme curvature.
Statement.
Precisely, an inflection point of a twice continuously differentiable (formula_0) curve on the surface of a sphere is a point formula_1 with the following property: let formula_2 be the connected component containing formula_1 of the intersection of the curve with its tangent great circle at formula_1. (For most curves formula_2 will just be formula_1 itself, but it could also be an arc of the great circle.) Then, for formula_1 to be an inflection point, every neighborhood of formula_2 must contain points of the curve that belong to both of the hemispheres separated by this great circle.
The theorem states that every formula_0 curve that partitions the sphere into two equal-area components has at least four inflection points in this sense.
Examples.
The tennis ball and baseball seams can be modeled mathematically by a curve made of four semicircular arcs, with exactly four inflection points where pairs of these arcs meet.
A great circle also bisects the sphere's surface, and has infinitely many inflection points, one at each point of the curve. However, the condition that the curve divide the sphere's surface area equally is a necessary part of the theorem. Other curves that do not divide the area equally, such as circles that are not great circles, may have no inflection points at all.
Proof by curve shortening.
One proof of the tennis ball theorem uses the curve-shortening flow, a process for continuously moving the points of the curve towards their local centers of curvature. Applying this flow to the given curve can be shown to preserve the smoothness and area-bisecting property of the curve. Additionally, as the curve flows, its number of inflection points never increases. This flow eventually causes the curve to transform into a great circle, and
the convergence to this circle can be approximated by a Fourier series. Because curve-shortening does not change any other great circle, the first term in this series is zero, and combining this with a theorem of Sturm on the number of zeros of Fourier series shows that, as the curve nears this great circle, it has at least four inflection points. Therefore, the original curve also has at least four inflection points.
Related theorems.
A generalization of the tennis ball theorem applies to any simple smooth curve on the sphere that is not contained in a closed hemisphere. As in the original tennis ball theorem, such curves must have at least four inflection points. If a curve on the sphere is centrally symmetric, it must have at least six inflection points.
A closely related theorem of Segre also concerns simple closed spherical curves, on spheres embedded into three-dimensional space. If, for such a curve, formula_3 is any point of the three-dimensional convex hull of a smooth curve on the sphere that is not a vertex of the curve, then at least four points of the curve have osculating planes passing through formula_3. In particular, for a curve not contained in a hemisphere, this theorem can be applied with formula_3 at the center of the sphere. Every inflection point of a spherical curve has an osculating plane that passes through the center of the sphere, but this might also be true of some other points.
This theorem is analogous to the four-vertex theorem, that every smooth simple closed curve in the plane has four vertices (extreme points of curvature). It is also analogous to a theorem of August Ferdinand Möbius that every non-contractible smooth curve in the projective plane has at least three inflection points.
|
[
{
"math_id": 0,
"text": "C^2"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "o"
}
] |
https://en.wikipedia.org/wiki?curid=57530282
|
5753249
|
Artin–Hasse exponential
|
In mathematics, the Artin–Hasse exponential, introduced by Artin and Hasse (1928), is the power series given by
formula_0
Motivation.
One motivation for considering this series to be analogous to the exponential function comes from infinite products. In the ring of formal power series Q[["x"]] we have the identity
formula_1
where μ(n) is the Möbius function. This identity can be verified by showing that the logarithmic derivatives of the two sides are equal and that both sides have the same constant term. In a similar way, one can verify a product expansion for the Artin–Hasse exponential:
formula_2
So passing from a product over all "n" to a product over only "n" prime to "p", which is a typical operation in "p"-adic analysis, leads from "e""x" to "E""p"("x").
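Both product expansions can be checked as identities of truncated formal power series. The sketch below (an illustrative addition, not from the original text; the truncation order and the prime are arbitrary choices) multiplies the factors with exact rational arithmetic, compares the full product with the exponential series, and then restricts to "n" prime to "p":

```python
from fractions import Fraction
from math import factorial

N = 16  # truncation order: work with power series modulo x^N

def mobius(n):
    result, m, d = 1, n, 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0          # repeated prime factor
            result = -result
        d += 1
    return -result if m > 1 else result

def binom(alpha, k):
    """Generalized binomial coefficient alpha*(alpha-1)*...*(alpha-k+1)/k!."""
    num = Fraction(1)
    for i in range(k):
        num *= alpha - i
    return num / factorial(k)

def factor_series(n, alpha):
    """(1 - x^n)^alpha as a coefficient list modulo x^N."""
    c = [Fraction(0)] * N
    k = 0
    while n * k < N:
        c[n * k] = binom(alpha, k) * (-1) ** k
        k += 1
    return c

def mul(a, b):
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def product_over(ns):
    prod = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for n in ns:
        prod = mul(prod, factor_series(n, Fraction(-mobius(n), n)))
    return prod

# Full product over all n reproduces e^x (coefficients 1/n!) ...
assert product_over(range(1, N)) == [Fraction(1, factorial(n)) for n in range(N)]

# ... while keeping only n prime to p gives E_p(x); here p = 2.
p = 2
E = product_over(n for n in range(1, N) if n % p != 0)
print([str(c) for c in E[:8]])
```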
Properties.
The coefficients of "E""p"("x") are rational. We can use either formula for "E""p"("x") to prove that, unlike "e""x", all of its coefficients are "p"-integral; in other words, the denominators of the coefficients of "E""p"("x") are not divisible by "p". A first proof uses the definition of "E""p"("x") and Dwork's lemma, which says that a power series "f"("x") = 1 + ... with rational coefficients has "p"-integral coefficients if and only if "f"("x"^"p")/"f"("x")^"p" ≡ 1 mod "p""Z""p"[["x"]]. When "f"("x") = "E""p"("x"), we have "f"("x"^"p")/"f"("x")^"p" = "e"^(−"px"), whose constant term is 1 and all higher coefficients are in "p""Z""p".
A second proof comes from the infinite product for "E""p"("x"): each exponent -μ("n")/"n" for "n" not divisible by "p" is "p"-integral, and when a rational number "a" is "p"-integral all coefficients in the binomial expansion of (1 - "x"^"n")^"a" are "p"-integral, by "p"-adic continuity of the binomial coefficient polynomials "t"("t"-1)...("t"-"k"+1)/"k"! in "t" together with their obvious integrality when "t" is a nonnegative integer ("a" is a "p"-adic limit of nonnegative integers). Thus each factor in the product of "E""p"("x") has "p"-integral coefficients, so "E""p"("x") itself has "p"-integral coefficients.
The ("p"-integral) series expansion has radius of convergence 1.
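As a small computational sanity check (an illustrative addition; the prime "p" = 3 and the truncation order are arbitrary choices), one can expand the exponential of the defining sum with exact rational arithmetic and confirm that no denominator is divisible by "p":

```python
from fractions import Fraction
from math import factorial

p, N = 3, 20  # prime and truncation order (arbitrary choices for the demo)

# f(x) = x + x^p/p + x^{p^2}/p^2 + ... , truncated below degree N
f = [Fraction(0)] * N
pk = 1
while pk < N:
    f[pk] = Fraction(1, pk)
    pk *= p

def mul(a, b):
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

# E_p(x) = exp(f(x)) = sum_k f(x)^k / k!   (f has no constant term, so the
# terms with k < N already determine all coefficients below degree N)
E = [Fraction(1)] + [Fraction(0)] * (N - 1)
power = list(E)  # f^0
for k in range(1, N):
    power = mul(power, f)
    for i, c in enumerate(power):
        E[i] += c / factorial(k)

# p-integrality: no denominator is divisible by p (unlike e^x, where the
# coefficient of x^p already has a denominator divisible by p)
assert all(c.denominator % p != 0 for c in E)
print([str(c) for c in E[:8]])
```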
Combinatorial interpretation.
The Artin–Hasse exponential is the generating function for the probability that a uniformly randomly selected element of "S""n" (the symmetric group on "n" elements) has "p"-power order (the number of such elements is denoted by "t""p,n"):
formula_3
This gives a third proof that the coefficients of "E""p"("x") are "p"-integral, using the theorem of Frobenius that in a finite group of order divisible by "d" the number of elements of order dividing "d" is also divisible by "d". Apply this theorem to the "n"th symmetric group with "d" equal to the highest power of "p" dividing "n"!.
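For small "n" this interpretation can be checked by brute force (an illustrative addition; the prime "p" = 2 and the range of "n" are arbitrary choices). The coefficients "a""n" of "E""p"("x") are generated from the recurrence "n"·"a""n" = Σ "a""n−p^k" over all "p"^"k" ≤ "n", which follows from taking the logarithmic derivative of the definition, and are compared with direct counts of elements of "p"-power order in "S""n":

```python
from fractions import Fraction
from itertools import permutations
from math import gcd, factorial

p, NMAX = 2, 7  # arbitrary choices: prime p and largest n checked

# Coefficients a_n of E_p(x) via n*a_n = sum over p^k <= n of a_{n - p^k},
# which follows from E_p'(x) = E_p(x)*(1 + x^{p-1} + x^{p^2-1} + ...).
a = [Fraction(1)]
for n in range(1, NMAX + 1):
    s, pk = Fraction(0), 1
    while pk <= n:
        s += a[n - pk]
        pk *= p
    a.append(s / n)

def count_p_power_order(n):
    """Number of permutations in S_n whose order is a power of p (brute force)."""
    count = 0
    for perm in permutations(range(n)):
        seen, order = [False] * n, 1
        for i in range(n):
            if not seen[i]:
                length, j = 0, i
                while not seen[j]:
                    seen[j] = True
                    j = perm[j]
                    length += 1
                order = order * length // gcd(order, length)  # lcm of cycle lengths
        while order % p == 0:
            order //= p
        if order == 1:
            count += 1
    return count

for n in range(NMAX + 1):
    t = count_p_power_order(n)
    assert a[n] == Fraction(t, factorial(n))
    print(n, t, a[n])
```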
More generally, for any topologically finitely generated profinite group "G" there is an identity
formula_4
where "H" runs over open subgroups of "G" with finite index (there are finitely many of each index since "G" is topologically finitely generated) and "aG,n" is the number of continuous homomorphisms from "G" to "Sn". Two special cases are worth noting. (1) If "G" is the "p"-adic integers, it has exactly one open subgroup of each "p"-power index and a continuous homomorphism from "G" to "Sn" is essentially the same thing as choosing an element of "p"-power order in "Sn", so we have recovered the above combinatorial interpretation of the Taylor coefficients in the Artin–Hasse exponential series. (2) If "G" is a finite group then the sum in the exponential is a finite sum running over all subgroups of "G", and continuous homomorphisms from "G" to "Sn" are simply homomorphisms from "G" to "Sn". The result in this case is due to Wohlfahrt (1977). The special case when "G" is a finite cyclic group is due to Chowla, Herstein, and Scott (1952), and takes the form
formula_5
where "a""m","n" is the number of solutions to "g"^"m" = 1 in "S""n".
David Roberts provided a natural combinatorial link between the Artin–Hasse exponential and the regular exponential in the spirit of the ergodic perspective (linking the "p"-adic and regular norms over the rationals) by showing that the Artin–Hasse exponential is also the generating function for the probability that an element of the symmetric group is unipotent in characteristic "p", whereas the regular exponential is the probability that an element of the same group is unipotent in characteristic zero.
Conjectures.
At the 2002 PROMYS program, Keith Conrad conjectured that the coefficients of formula_6 are uniformly distributed in the p-adic integers with respect to the normalized Haar measure, with supporting computational evidence. The problem is still open.
Dinesh Thakur has also posed the problem of whether the Artin–Hasse exponential reduced mod "p" is transcendental over formula_7.
|
[
{
"math_id": 0,
"text": " E_p(x) = \\exp\\left(x + \\frac{x^p}{p} + \\frac{x^{p^2}}{p^2} + \\frac{x^{p^3}}{p^3} +\\cdots\\right)."
},
{
"math_id": 1,
"text": "e^x = \\prod_{n \\geq 1}(1-x^n)^{-\\mu(n)/n},"
},
{
"math_id": 2,
"text": "E_p(x) = \\prod_{(p,n)=1}(1-x^n)^{-\\mu(n)/n}."
},
{
"math_id": 3,
"text": "E_p(x)=\\sum_{n\\ge 0} \\frac{t_{p,n}}{n!}x^n."
},
{
"math_id": 4,
"text": "\\exp(\\sum_{H \\subset G} x^{[G:H]}/[G:H])=\\sum_{n\\ge 0} \\frac{a_{G,n}}{n!}x^n,"
},
{
"math_id": 5,
"text": "\\exp(\\sum_{d|m} x^d/d)=\\sum_{n\\ge 0} \\frac{a_{m,n}}{n!}x^n,"
},
{
"math_id": 6,
"text": "E_p(x)"
},
{
"math_id": 7,
"text": "\\mathbb{F}_p(x)"
}
] |
https://en.wikipedia.org/wiki?curid=5753249
|
57533821
|
Bodenstein number
|
The Bodenstein number (abbreviated "Bo", named after Max Bodenstein) is a dimensionless parameter in chemical reaction engineering, which describes the ratio of the amount of substance introduced by convection to that introduced by diffusion. Hence, it characterises the backmixing in a system and indicates whether, and to what extent, volume elements or substances within a chemical reactor mix as a result of the prevailing flow. It is defined as the ratio of the convection current to the dispersion current. The Bodenstein number is an element of the "dispersion model of residence times" and is therefore also called the "dimensionless dispersion coefficient".
Mathematically, two idealized extreme cases exist for the Bodenstein number; these, however, cannot be fully reached in practice: formula_0 corresponds to an ideally mixed stirred-tank reactor (complete backmixing), while formula_1 corresponds to an ideal plug flow reactor (no backmixing).
Controlling the flow velocity within a reactor allows the Bodenstein number to be adjusted to a pre-calculated desired value, so that the desired degree of backmixing of the substances in the reactor can be reached.
Determination of the Bodenstein number.
The Bodenstein number is calculated according to
formula_2
where formula_3 is the flow velocity, formula_4 the length of the reactor and formula_5 the axial dispersion coefficient.
It can also be determined experimentally from the distribution of the residence times. Assuming an open system:
formula_6
holds, where formula_7 is the dimensionless variance of the residence time distribution, formula_8 its variance and formula_9 the mean residence time.
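As an illustration (a sketch added here, not part of the original article; all numerical values are assumptions), the following Python snippet evaluates formula_2 for assumed flow conditions and then inverts the variance relation above to recover the Bodenstein number from a measured dimensionless variance:

```python
import math

# Assumed flow conditions (illustrative values only):
u = 0.05      # flow velocity, m/s
L = 2.0       # reactor length, m
D_ax = 1e-3   # axial dispersion coefficient, m^2/s

Bo = u * L / D_ax
print(f"Bo = u*L/D_ax = {Bo:.1f}")   # 100.0

def bodenstein_from_variance(sigma_theta_sq):
    # sigma_theta^2 = 2/Bo + 8/Bo^2  ->  sigma_theta^2*Bo^2 - 2*Bo - 8 = 0;
    # keep the positive root of the quadratic.
    return (1.0 + math.sqrt(1.0 + 8.0 * sigma_theta_sq)) / sigma_theta_sq

sigma_theta_sq = 2.0 / Bo + 8.0 / Bo ** 2   # forward evaluation of the relation
print(bodenstein_from_variance(sigma_theta_sq))  # recovers Bo = 100.0
```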
|
[
{
"math_id": 0,
"text": " Bo = 0 "
},
{
"math_id": 1,
"text": " Bo = \\infty "
},
{
"math_id": 2,
"text": "\\mathit{Bo}=\\frac{u \\cdot L}{D_\\mathrm{ax}}"
},
{
"math_id": 3,
"text": "u"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "D_\\mathrm{ax}"
},
{
"math_id": 6,
"text": "\\sigma_\\theta^2=\\frac{\\sigma^2}{\\tau^2}=\\frac{2}{\\mathit{Bo}}+\\frac{8}{\\mathit{Bo}^2}"
},
{
"math_id": 7,
"text": "\\sigma^{2}_{\\theta}"
},
{
"math_id": 8,
"text": "\\sigma^2"
},
{
"math_id": 9,
"text": "\\tau"
}
] |
https://en.wikipedia.org/wiki?curid=57533821
|
5753978
|
Ohnesorge number
|
Number that relates the viscous forces to inertial and surface tension forces
The Ohnesorge number (Oh) is a dimensionless number that relates the viscous forces to inertial and surface tension forces. The number was defined by Wolfgang von Ohnesorge in his 1936 doctoral thesis.
It is defined as:
formula_0
where μ is the dynamic viscosity of the liquid, ρ its density, σ the surface tension and L the characteristic length scale (typically the drop diameter); We is the Weber number and Re the Reynolds number. The Ohnesorge number is also related to the Laplace number: formula_1.
Applications.
The Ohnesorge number for a 3 mm diameter rain drop is typically ~0.002. Larger Ohnesorge numbers indicate a greater influence of the viscosity.
The Ohnesorge number is often used in free-surface fluid dynamics, such as the dispersion of liquids in gases and in spray technology.
In inkjet printing, liquids whose Ohnesorge number is in the range 0.1 < "Oh" < 1.0 are jettable (1<Z<10 where Z is the reciprocal of the Ohnesorge number).
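As a quick sanity check (a sketch added here, not part of the original article; the fluid properties are assumed textbook values for water at room temperature), the snippet below evaluates Oh for a 3 mm water drop and the corresponding inkjet parameter Z = 1/Oh:

```python
import math

def ohnesorge(mu, rho, sigma, L):
    """Oh = mu / sqrt(rho * sigma * L)."""
    return mu / math.sqrt(rho * sigma * L)

# Assumed properties of water at room temperature (illustrative values):
mu = 1.0e-3    # dynamic viscosity, Pa*s
rho = 1000.0   # density, kg/m^3
sigma = 0.072  # surface tension, N/m
L = 3.0e-3     # characteristic length: 3 mm drop diameter, m

Oh = ohnesorge(mu, rho, sigma, L)
Z = 1.0 / Oh
print(f"Oh = {Oh:.4f}, Z = {Z:.0f}")
# Oh is roughly 0.002, matching the figure quoted above; Z is far outside the
# 1 < Z < 10 window, so a millimetre-scale rain drop is not "jettable".
```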
|
[
{
"math_id": 0,
"text": " \\mathrm{Oh} = \\frac{ \\mu}{ \\sqrt{\\rho \\, \\sigma \\, L }} = \\frac{\\sqrt{\\mathrm{We}}}{\\mathrm{Re}} \\sim \\frac{\\mbox{viscous forces}}{\\sqrt{{\\mbox{inertia}} \\cdot {\\mbox{surface tension}}}} "
},
{
"math_id": 1,
"text": " \\mathrm{Oh} = 1/\\sqrt{\\mathrm{La}}"
}
] |
https://en.wikipedia.org/wiki?curid=5753978
|
57552333
|
Health effects of Bisphenol A
|
Controversy centering on concerns about the biomedical significance of bisphenol A (BPA)
Bisphenol A controversy centers on concerns and debates about the biomedical significance of bisphenol A (BPA), which is a precursor to polymers that are used in some consumer products, including some food containers. The concerns began with the hypothesis that BPA is an endocrine disruptor, i.e. that it mimics endocrine hormones and thus has unintended and possibly far-reaching effects on people in physical contact with the chemical.
Since 2008, several governments have investigated its safety, which prompted some retailers to withdraw polycarbonate products. The U.S. Food and Drug Administration (FDA) ended its authorization of the use of BPA in baby bottles and infant formula packaging, based on market abandonment, not safety. The European Union and Canada have banned BPA use in baby bottles.
The U.S. FDA states "BPA is safe at the current levels occurring in foods" based on extensive research, including two more studies issued by the agency in early 2014. The European Food Safety Authority (EFSA) reviewed new scientific information on BPA in 2008, 2009, 2010, 2011 and 2015: EFSA's experts concluded on each occasion that they could not identify any new evidence which would lead them to revise their opinion that the known level of exposure to BPA is safe; however, the EFSA does recognize some uncertainties, and will continue to investigate them.
In February 2016, France stated that it intends to propose BPA as a REACH Regulation candidate substance of very high concern (SVHC). The European Chemicals Agency agreed to the proposal in June 2017.
Production.
The BPA controversy has gained momentum because of the quantity of BPA produced by the chemical industry. World production capacity of BPA was 1 million tons in the 1980s, and more than 2.2 million tons in 2009. It is a high production volume chemical. In 2003, U.S. consumption was 856,000 tons, 72% of which was used to make polycarbonate plastic and 21% of which went into epoxy resins. In the U.S., less than 5% of the BPA produced is used in food contact applications, but it remains in use in the canned food industry and in printing applications such as sales receipts. On 20 February 2018, "Packaging Digest" reported that "At least 90%" of food cans no longer contained BPA.
Occurrence.
Free BPA is rarely encountered in industrial products: it is almost always bound in a polymeric structure. Concerns about exposure therefore focus on the degradation, mainly by hydrolysis, of these polymers and the plastic objects derived therefrom.
Polycarbonate plastic, which is formed from BPA, is used to make a variety of common products including baby and water bottles, sports equipment, medical and dental devices, dental fillings sealants, CDs and DVDs, household electronics, eyeglass lenses, foundry castings, and the lining of water pipes.
BPA is also used in the synthesis of polysulfones and polyether ketones, as an antioxidant in some plasticizers, and as a polymerization inhibitor in PVC. Epoxy resins derived from bisphenol A are used as coatings on the inside of almost all food and beverage cans; however, due to BPA health concerns, in Japan epoxy coating was mostly replaced by PET film.
Bisphenol A is a preferred color developer in carbonless copy paper and thermal point of sale receipt paper. When used in thermal paper, BPA is present as "free" (i.e., discrete, non-polymerized) BPA, which is likely to be more available for exposure than BPA polymerized into a resin or plastic. Upon handling, BPA in thermal paper can be transferred to skin, and there is some concern that residues on hands could be ingested through incidental hand-to-mouth contact. Furthermore, some studies suggest that dermal absorption may contribute some small fraction to the overall human exposure. European data indicate that the use of BPA in paper may also contribute to the presence of BPA in the stream of recycled paper and in landfills. Although there are no estimates for the amount of BPA used in thermal paper in the United States, in Western Europe, the volume of BPA reported to be used in thermal paper in 2005/2006 was 1,890 tonnes per year, while total production was estimated at 1,150,000 tonnes per year. (Figures taken from 2012 EPA draft paper.) Studies document potential spreading and accumulation of BPA in paper recycling, suggesting its presence for decades in paper recycling loop even after a hypothetical ban. Epoxy resin may or may not contain BPA, and is employed to bind gutta percha in some root canal procedures.
Biomedical history.
In the early 1930s, the British biochemist Edward Charles Dodds tested BPA as an artificial estrogen, but found it to be 37,000 times less effective than estradiol. Dodds eventually developed a structurally similar compound, diethylstilbestrol (DES), which was used as a synthetic estrogen drug in women and animals until it was banned due to its risk of causing cancer; the ban on use of DES in humans came in 1971 and in animals, in 1979. BPA was never used as a drug. BPA's ability to mimic the effects of natural estrogen derives from the similarity of phenol groups on both BPA and estradiol, which enable this synthetic molecule to trigger estrogenic pathways in the body. Typically phenol-containing molecules similar to BPA are known to exert weak estrogenic activities, thus it is also considered an endocrine disrupter (ED) and estrogenic chemical. Xenoestrogens is another category the chemical BPA fits under because of its capability to interrupt the network that regulates the signals which control the reproductive development in humans and animals.
In 1997, adverse effects of low-dose BPA exposure in laboratory animals were first proposed. Modern studies began finding possible connections to health issues caused by exposure to BPA during pregnancy and during development. See Public health regulatory history in the United States and Chemical manufacturers' reactions to bans. As of 2014, research and debates are ongoing as to whether BPA should be banned or not.
A 2007 study investigated the interaction between bisphenol A and estrogen-related receptor γ (ERR-γ). This orphan receptor (endogenous ligand unknown) behaves as a constitutive activator of transcription. BPA seems to bind strongly to ERR-γ (dissociation constant = 5.5 nM), but only weakly to the ER. BPA binding to ERR-γ preserves its basal constitutive activity. It can also protect it from deactivation by the SERM 4-hydroxytamoxifen (afimoxifene). This may be the mechanism by which BPA acts as a xenoestrogen. Different expression of ERR-γ in different parts of the body may account for variations in bisphenol A effects. For instance, ERR-γ has been found in high concentration in the placenta, explaining reports of high bisphenol accumulation in this tissue. BPA has also been found to act as an agonist of the GPER (GPR30).
Safety.
Health effects.
According to the European Food Safety Authority "BPA poses no health risk to consumers of any age group (including unborn children, infants and adolescents) at current exposure levels". But in 2017 the European Chemicals Agency concluded that BPA should be listed as a substance of very high concern due to its properties as an endocrine disruptor.
In 2012, the United States' Food and Drug Administration (FDA) banned the use of BPA in baby bottles intended for children under 12 months. The Natural Resources Defense Council called the move inadequate, saying the FDA needed to ban BPA from all food packaging. The FDA maintains that the agency continues to support the safety of BPA for use in products that hold food.
The U.S. Environmental Protection Agency (EPA) also holds the position that BPA is not a health concern. In 2011, Andrew Wadge, the chief scientist of the United Kingdom's Food Standards Agency, commented on a 2011 U.S. study on dietary exposure of adult humans to BPA, saying, "This corroborates other independent studies and adds to the evidence that BPA is rapidly absorbed, detoxified, and eliminated from humans – therefore is not a health concern."
The Endocrine Society said in 2015 that the results of ongoing laboratory research gave grounds for concern about the potential hazards of endocrine-disrupting chemicals – including BPA – in the environment, and that on the basis of the precautionary principle these substances should continue to be assessed and tightly regulated. A 2016 review of the literature said that the potential harms caused by BPA were a topic of scientific debate and that further investigation was a priority because of the association between BPA exposure and adverse human health effects including reproductive and developmental effects and metabolic disease.
United States expert panel conclusions.
In 2007, the U.S. federal government invited experts to Chapel Hill, North Carolina to perform a scientific assessment of literature on BPA. Thirty-eight experts in fields involved with bisphenol A gathered in Chapel Hill, North Carolina to review several hundred studies on BPA, many conducted by members of the group. At the end of the meeting, the group issued the Chapel Hill Consensus Statement, which stated "BPA at concentrations found in the human body is associated with organizational changes in the prostate, breast, testis, mammary glands, body size, brain structure and chemistry, and behavior of laboratory animals." The Chapel Hill Consensus Statement stated that average BPA levels in people were above those that cause harm to many animals in laboratory experiments. It noted that while BPA is not persistent in the environment or in humans, biomonitoring surveys indicate that exposure is continuous. This is problematic because acute animal exposure studies are used to estimate daily human exposure to BPA, and no studies that had examined BPA pharmacokinetics in animal models had followed continuous low-level exposures. The authors added that measurement of BPA levels in serum and other body fluids suggests the possibilities that BPA intake is much higher than accounted for or that BPA can bioaccumulate in some conditions (such as pregnancy). Following the Chapel Hill Statement, the US National Toxicology Program – Center for the Evaluation of Risks to Human Reproduction (NTP – CERHR), sponsored another literature assessment. The report, released in 2008, noted that "the possibility that bisphenol A may alter human development cannot be dismissed". Despite this report, the US Food and Drug Administration (FDA) BPA Task Force (formed in April 2008), concluded that products containing BPA were safe. In 2009, the FDA Science Board Subcommittee on Bisphenol A, an external committee assigned to review the FDA's report "concluded that the FDA failed to conduct a rigorous or extensive exposure assessment", leading the US Environmental Protection Agency (EPA) to conduct their own assessment.
The United States Federal Interagency Working Group (FIW) included a goal to reduce BPA exposure in the 2 December 2010 release of their 2020 Healthy People national objectives for improving the health of all Americans.
Metabolic disease.
Numerous animal studies have demonstrated an association between endocrine disrupting chemicals (including BPA) and obesity. However, the relationship between bisphenol A exposure and obesity in humans is unclear. Cohort studies have shown an association between prenatal BPA exposure and increased body fat percentage at age 7 and increased BMI by age 9. Not all studies have shown a positive relationship between BPA exposure and obesity; further studies on the effects of BPA on metabolic diseases need to take diet into consideration to remove any influence it might have on the outcome. Proposed mechanisms for BPA exposure to increase the risk of obesity include BPA-induced thyroid dysfunction, activation of the PPAR-gamma receptor, and disruption of neural circuits that regulate feeding behavior. BPA works by imitating the natural hormone 17β-estradiol. In the past, BPA was considered a weak mimic of estrogen, but newer evidence indicates that it is a potent mimic. When it binds to estrogen receptors it triggers alternative estrogenic effects that begin outside of the nucleus. This different path induced by BPA has been shown to alter glucose and lipid metabolism in animal studies.
There are different effects of BPA exposure during different stages of development. During adulthood, BPA exposure modifies insulin sensitivity and insulin release without affecting weight.
Thyroid function.
A 2007 review concluded that bisphenol-A has been shown to bind to thyroid hormone receptor and perhaps has selective effects on its functions.
A 2009 review about environmental chemicals and thyroid function raised concerns about BPA effects on triiodothyronine and concluded that "available evidence suggests that governing agencies need to regulate the use of thyroid-disrupting chemicals, particularly as such uses relate exposures of pregnant women, neonates and small children to the agents".
A 2009 review summarized BPA adverse effects on thyroid hormone action.
A 2016 case-control study found a significant association between urinary BPA levels and increased levels of thyroid-stimulating hormone (TSH) in a group of adult women.
Neurological effects.
Limited epidemiological evidence suggests that exposure to BPA in the uterus and during childhood is associated with poor behavioral outcomes in humans. Exposure may be associated with higher levels of anxiety, depression, hyperactivity, and aggression in children. A panel convened by the National Toxicology Program (NTP) of the U.S. National Institutes of Health determined that there was "some concern" about BPA's effects on fetal and infant brain development and behavior. In January 2010, based on the NTP report, the FDA expressed the same level of concern.
A 2007 literature review concluded that BPA, like other chemicals that mimic estrogen (xenoestrogens), should be considered as a player within the nervous system that can regulate or alter its functions through multiple pathways. A 2008 review of animal research found that low-dose BPA maternal exposure can cause long-term consequences for the neurobehavioral development in mice.
A 2009 review raised concerns about a BPA effect on the anteroventral periventricular nucleus.
Disruption of the dopaminergic system.
A 2008 review of human participants has concluded that BPA mimics estrogenic activity and affects various dopaminergic processes to enhance mesolimbic dopamine activity resulting in hyperactivity, attention deficits, and a heightened sensitivity to drugs of abuse.
Cancer.
According to the WHO's INFOSAN, carcinogenicity studies conducted under the U.S. National Toxicology Program have shown increases in leukemia and testicular interstitial cell tumors in male rats. However, according to the note, "these studies have not been considered as convincing evidence of a potential cancer risk because of the doubtful statistical significance of the small differences in incidences from controls."
A 2010 review concluded that bisphenol A may increase cancer risk.
Several studies report evidence that the risk of prostate cancer in men increases with BPA exposure. Male subjects diagnosed with prostate cancer were found to have higher urinary concentrations of BPA than those found in the control group. This correlation may be due to BPA's ability to induce proliferation of prostate cancer cells.
Breast cancer.
Higher susceptibility to breast cancer has been found in many studies of rodents and primates exposed to BPA. However, the impact BPA has on breast cancer development in humans is unclear, as it is difficult to quantify an individual's BPA exposure over their lifetime. BPA, which contains a phenolic structure, has shown agonist and antagonist activity at endocrine receptors implicated in endocrine disorders such as breast and prostate cancer. Other endocrine disorders include infertility, polycystic ovary syndrome, and precocious puberty.
Several in vitro studies found that oxidative stress in breast cancer cells increased in proportion to BPA exposure. Additionally, occupational exposure to BPA, and exposure in postmenopausal women, has been suggested to increase breast cancer incidence.
Mechanism of action.
BPA is an endocrine disruptor, meaning that it has a similar structure to oestrogen (the natural ligand) and can bind to and activate the oestrogen receptors ERα and ERβ.
Oestrogen is hydrophobic and is able to diffuse through the plasma membrane and into the target cell. Oestradiol binding to the oestrogen receptor releases the heat shock protein from the ligand binding domain of the receptor causing dimerization. The nuclear localisation signal targets the ligand-receptor complex to the nucleus where it can bind oestrogen response elements within the promoter of target genes on DNA. Subsequently, various cofactors are recruited allowing transcription of genes including those involved in cell proliferation.
When BPA-containing polymers are exposed to high temperatures or changes in pH, the ester bonds linking BPA monomers are hydrolysed. Free BPA then competes with oestrogen for ERα and ERβ binding sites. When BPA successfully binds the receptor, it interacts with oestrogen response elements and increases expression of target genes such as WNT-4 and RANKL, two key players in stem cell proliferation and carcinogenesis. BPA has also been shown to inactivate p53, which normally prevents tumour formation by triggering apoptosis.
Fertility.
As of 2022, current evidence shows a possible correlation between BPA levels and lower sperm quality, decreased motility and increased sperm immaturity. There is tentative evidence to support the idea that BPA exposure has negative effects on human fertility. Few studies have investigated whether recurrent miscarriage is associated with BPA levels. Exposure to BPA does not appear to be linked with higher rates of endometrial hyperplasia. A 2009 cohort study of women undergoing IVF egg retrieval found an inverse correlation between urinary BPA concentration and oocyte release. The study found that for each unit increase in day 3 FSH (IU/L), there was an average decrease of 9% in the number of oocytes retrieved. The positive correlations found in animal studies warrant continued research on BPA and couple fecundity.
BPA is ubiquitous in the environment through consumer products such as reusable plastics, food and beverage container liners, baby bottles and water-resistant clothing. It has been identified as an EDC and found in urine, blood, amniotic fluid, breast milk and cord blood. A comparison of blood BPA and phthalate levels between fertile and infertile women aged 20–40, using gas chromatography–mass spectrometry to analyze the amounts of BPA, phthalates and their metabolites in peripheral venous blood, showed significantly elevated serum BPA levels in infertile women, as well as in women with PCOS (polycystic ovarian syndrome) and women with endometriosis.
BPA has been shown to have a transgenerational effect on ovarian function through changes in the structural integrity of the microtubules that constitute meiotic spindles. BPA contaminants passing through the amniotic fluid can alter steroidogenesis during fetal development. This can result in oocyte maturation failure and reduced fertility, which in turn can produce transgenerational effects extending to the third generation of offspring.
Sexual function.
Higher BPA exposure has been associated with increased self-reporting of decreased male sexual function but few studies examining this relationship have been conducted.
Asthma.
Studies in mice have found a link between BPA exposure and asthma; a 2010 study on mice has concluded that perinatal exposure to 10 μg/mL of BPA in drinking water enhances allergic sensitization and bronchial inflammation and responsiveness in an animal model of asthma. A study published in JAMA Pediatrics has found that prenatal exposure to BPA is also linked to lower lung capacity in some young children. This study had 398 mother-infant pairs and looked at their urine samples to detect concentrations of BPA. The study found that every 10-fold increase in BPA was tied to a 55% increase in the odds of wheezing. Higher concentrations of BPA during pregnancy were linked to decreased lung capacity in children under four years old, but the link disappeared at age 5. An associate professor of pediatrics at the University of Maryland School of Medicine said, "Exposure during pregnancy, not after, appears to be the critical time for BPA, possibly because it's affecting important pathways that help the lung develop."
In 2013, research from scientists at the Columbia Center for Children's Environmental Health also found a link between the compound and an increased risk for asthma. The research team reported that children with higher levels of BPA at ages 3, 5 and 7 had increased odds of developing asthma when they were between the ages of 5 and 12. The children in this study had about the same concentration of BPA exposure as the average U.S. child. Dr. Kathleen Donohue, an instructor at Columbia University Medical Center said, "they saw an increased risk of asthma at fairly routine, low doses of BPA." Kim Harley, who studies environmental chemicals and children's health, commented in the Scientific American journal saying while the study does not show that BPA causes asthma or wheezing, "it's an important study because we don't know a lot right now about how BPA affects immune response and asthma...They measured BPA at different ages, measured asthma and wheeze at multiple points, and still found consistent associations."
Animal research.
The first evidence of the estrogenicity of bisphenol A came from experiments on rats conducted in the 1930s, but it was not until 1997 that adverse effects of low-dose exposure on laboratory animals were first reported.
Bisphenol A is an endocrine disruptor that can mimic estrogen and has been shown to cause negative health effects in animal studies. Bisphenol A closely mimics the structure and function of the hormone estradiol by binding to and activating the same estrogen receptor as the natural hormone. Early developmental stages appear to be the period of greatest sensitivity to its effects.
A study from 2008 concluded that blood levels of bisphenol A in neonatal mice are the same whether it is injected or ingested.
The current U.S. human exposure limit set by the EPA is 50 μg/kg/day.
In a 2010 commentary, a group of scientists criticized a study of low-dose BPA exposure published in "Toxicological Sciences", and a later editorial in the same journal, claiming that the rats used in the study were insensitive to estrogen and that the study had other problems, such as the use of BPA-containing polycarbonate cages; the study's authors disagreed.
Environmental effects.
In 2010, the U.S. Environmental Protection Agency reported that over one million pounds of BPA are released into the environment annually. BPA can be released into the environment by both pre-consumer and post-consumer leaching. Common pre-consumer routes of introduction into the environment are directly from chemical plastics, coating and staining manufacturers, foundries that use BPA in casting sand, or transport of BPA and BPA-containing products. Post-consumer BPA waste comes from effluent discharge from municipal wastewater treatment plants, irrigation pipes used in agriculture, ocean-borne plastic trash, indirect leaching from plastic, paper, and metal waste in landfills, and paper or material recycling companies. Despite a rapid soil and water half-life of 4.5 days, and an air half-life of less than one day, BPA's ubiquity makes it an important pollutant. BPA has a low rate of evaporation from water and soil, which presents issues, despite its biodegradability and low concern for bio-accumulation. BPA has low volatility in the atmosphere and a low vapor pressure between 5.00 and 5.32 Pascals. BPA has a high water solubility of about 120 mg/L and most of its reactions in the environment are aqueous. BPA dust is flammable, but it has a minimum explosive concentration in air. Also, in aqueous solutions, BPA has shown absorption of wavelengths greater than 250 nm.
The ubiquitous nature of BPA makes the compound an important pollutant to study as it has been shown to interfere with nitrogen fixation at the roots of leguminous plants associated with the bacterial symbiont "Sinorhizobium meliloti." A 2013 study also observed changes in plant health due to BPA exposure. The study exposed soybean seedlings to various concentrations of BPA and saw changes in root growth, nitrate production, ammonium production, and changes in the activities of nitrate reductase and nitrite reductase. At low doses of BPA, the growth of roots was improved, the amount of nitrate in roots increased, the amount of ammonium in roots decreased, and the nitrate and nitrite reductase activities remained unchanged. However, at considerably higher concentrations of BPA, the opposite effects were seen for all but an increase in nitrate concentration and a decrease in nitrite and nitrate reductase activities. Nitrogen is not only a plant nutrient but also the basis of growth and development in plants. Changing concentrations of BPA can be harmful to the ecology of an ecosystem, as well as to humans if the plants are grown for consumption.
The amount of BPA adsorbed on sediment was also seen to decrease with increasing temperature, as demonstrated by a study in 2006 with various plants from the XiangJiang River in Central-South China. In general, as temperature increases, the water solubility of a compound increases. Therefore, the amount of sorbate that enters the solid phase will be lower at equilibrium. It was also observed that the adsorption of BPA on sediment is exothermic: the molar formation enthalpy ΔH° was negative, the free energy ΔG° was negative, and the molar entropy ΔS° was positive. This indicates that the adsorption of BPA is driven by enthalpy. The adsorption of BPA has also been observed to decrease with increasing pH.
A 2005 study conducted in the United States had found that 91–98% of BPA may be removed from water during treatment at municipal water treatment plants. A more detailed explanation of aqueous reactions of BPA can be observed in the Degradation of BPA section below. Nevertheless, a 2009 meta-analysis of BPA in the surface water system showed BPA present in surface water and sediment in the United States and Europe. According to Environment Canada in 2011, "BPA can currently be found in municipal wastewater. […]initial assessment shows that at low levels, bisphenol A can harm fish and organisms over time."
BPA affects growth, reproduction, and development in aquatic organisms. Among freshwater organisms, fish appear to be the most sensitive species. Evidence of endocrine-related effects in fish, aquatic invertebrates, amphibians, and reptiles has been reported at environmentally relevant exposure levels lower than those required for acute toxicity. There is a widespread variation in reported values for endocrine-related effects, but many fall in the range of 1μg/L to 1 mg/L.
A 2009 review of the biological impacts of plasticizers on wildlife published by the Royal Society with a focus on aquatic and terrestrial annelids, molluscs, crustaceans, insects, fish and amphibians concluded that BPA affects reproduction in all studied animal groups, impairs development in crustaceans and amphibians and induces genetic aberrations.
Vertebrates.
BPA is known as an endocrine disruptor compound (EDC) and has major neurological effects on vertebrates. Depending on the vertebrate species studied, the documented effects of ingestion and exposure to BPA may differ. In species such as zebrafish, BPA affects the lateral line, which is crucial for sensory perception, and may affect the expression of genes that control heart and skeletal muscle metabolism, as well as insulin secretion. Aquatic vertebrates are especially impacted by BPA in reproduction. In the broad-snouted caiman, Caiman latirostris, sex is normally determined by the temperature at which the egg is incubated. A study was conducted in which their eggs were exposed to BPA. The first set was exposed to about 1000 μg/egg and all of the offspring were female. When the eggs were exposed to a lower concentration of about 90 μg/egg, the offspring produced were males. These male offspring exhibited disrupted seminiferous tubules. In mice, maternal diet has been studied and found to have a major effect on the offspring that were exposed to BPA during certain developmental stages. There are no direct studies on humans; however, studies on vertebrates suggest the potential harm it may have.
Reproductive effects.
Bisphenol A (BPA) is an environmental contaminant that disrupts the ecosystem, with the most profound effects observed in vertebrates. BPA infiltrates the environment through runoff from landfills and is therefore mostly found in water. Aquatic vertebrates are thus the most affected by this form of pollution. After aquatic vertebrates take in BPA through their gills or skin, they are mainly affected at the cellular level, where it alters their estrogen levels. BPA binds to the estrogen receptors and has an antagonist effect, which means that it decreases the amount of estrogen produced. To regulate reproductive functions, gonadotropin-releasing hormone (GnRH) is released. This helps with maturation of the sex organs in both males and females. Another study found that Barbus sp., immature barbels, in a river with traces of BPA expressed intersex characteristics. They had gonads with oogonia, spermatogonia and spermatocytes. Researchers concluded that BPA did not induce but did contribute to these intersex morphological expressions. After being exposed to 1 μg/L BPA, Salmo trutta, brown trout, had reduced sperm density and mobility. In Pimephales promelas, fathead minnows, there was a reduction of sperm production. Both species were also exposed to 2 μg/L and 5 μg/L of BPA and it resulted in delayed ovulation or no ovulation for the fish.
A study was conducted using adult female Gobiocypris rarus, a rare minnow. The fish were exposed to 5 μg/L, 15 μg/L and 50 μg/L of BPA for 14 and 35 days. The results showed that the group exposed to the highest amount of BPA (50 μg/L) for 35 days had suppressed oocyte development. Exposure also had a stimulatory effect on hepatic vitellogenin (VTG) transcription in all groups. VTG is an indicator that the vertebrate has been exposed to environmental estrogens. The groups exposed to lower concentrations of BPA (5 μg/L and 15 μg/L) showed an increase in expressed ovarian steroidogenic genes, while the group exposed to the higher concentration (50 μg/L) showed a decrease.
Although aquatic vertebrates are most commonly affected by BPA exposure in natural settings, researchers often learn how BPA affects other vertebrates using experimental mouse models. In a study conducted twenty years ago, an accidental BPA exposure resulted in an increase in chromosomally abnormal eggs. This led researchers to question what other effects BPA has on mammals, and showed that BPA leads to meiotic changes affecting fertility and the maturation of sex organs. Scientists began to realize that this type of exposure could lead to mutations and affect multiple generations. Because of this, "BPA free" products started to be made, often by substituting bisphenol S (BPS). A study showed that exposure to BPS increased mutations before zygotic development, suggesting that it is just as dangerous as BPA.
Behavioral effects.
BPA has major effects on the behavior of vertebrates, especially in their sensory processing systems. In zebrafish, BPA can disrupt signaling in the endocrine system and affect auditory development and function. Similar to the human ear, zebrafish have a sensory organ called the lateral line that detects different forms of vibration. The hair cells within the lateral line are very sensitive to the toxic effects of BPA and are commonly killed by it; fish are able to regrow hair cells, but BPA decreases their ability to regenerate them efficiently. Fish without a fully functioning lateral line show behavioral changes such as a higher risk of predation, reduced prey detection and possibly impaired reproduction. Unlike fish, mammals risk going deaf if exposed directly.
As well as being an endocrine disrupting compound (EDC), BPA has been found to inhibit nerve conduction. In the sciatic nerve of a frog (Rana tigrina), BPA inhibits the fast-conducting compound action potential (CAP). Estrogen receptors found in the plasma membrane of the sciatic nerve are affected by BPA and mediate the inhibition of the CAP. However, estrogen receptors are not the only route of inhibition; BPA is able to inhibit nerve function without acting through estrogen.
A study in mice shows that BPA, as an EDC, acts as an agonist/antagonist for behavioral effects. BPA caused a decrease in exploratory and spatial behaviors in male mice that were exposed during development. To expose the males, pregnant females were fed BPA in their food; these mice were compared to males whose mothers were fed a phytoestrogen-free control (CTL) diet. Males exposed to BPA during development were less likely to be territorial when other male mice were present. BPA exposure changed sex- and species-dependent behavior. These findings support the idea that BPA can affect sexually selected traits. Furthermore, maternal diet and developmental exposure to BPA may cause harm and lead to sexually dimorphic responses.
Positions of national and international bodies.
World Health Organization.
In November 2009, the WHO announced that it would organize an expert consultation in 2010 to assess the health effects of low-dose BPA exposure, focusing on the nervous and behavioral systems and on exposure of young children. The 2010 WHO expert panel recommended no new regulations limiting or banning the use of bisphenol-A, stating that "initiation of public health measures would be premature."
United States.
In 2013, the FDA posted on its web site: "Is BPA safe? Yes. Based on FDA's ongoing safety review of scientific evidence, the available information continues to support the safety of BPA for the approved uses in food containers and packaging. People are exposed to low levels of BPA because, like many packaging components, very small amounts of BPA may migrate from the food packaging into foods or beverages." FDA issued a statement on the basis of three previous reviews by a group of assembled Agency experts in 2014 in its "Final report for the review of literature and data on BPA" that said in part, "The results of these new toxicity data and studies do not affect the dose-effect level and the existing NOAEL (5 mg/kg bw/day; oral exposure)."
Australia and New Zealand.
In 2009 the Australia and New Zealand Food Safety Authority (Food Standards Australia New Zealand) did not see any health risk with bisphenol A baby bottles if the manufacturer's instructions were followed, as levels of exposure were very low and would not pose a significant health risk. It added that "the move by overseas manufacturers to stop using BPA in baby bottles is a voluntary action and not the result of a specific action by regulators." In 2008 it had suggested the use of glass baby bottles if parents had concerns.
In 2012 the Australian Government introduced a voluntary phase out of BPA use in polycarbonate baby bottles.
Canada.
In April 2008, Health Canada concluded that, while adverse health effects were not expected, the margin of safety was too small for formula-fed infants and proposed classifying the chemical as "'toxic' to human health and the environment." The Canadian Minister of Health announced Canada's intent to ban the import, sale, and advertisement of polycarbonate baby bottles containing bisphenol A due to safety concerns, and investigate ways to reduce BPA contamination of baby formula packaged in metal cans. Subsequent news reports from April 2008 showed many retailers removing polycarbonate drinking products from their shelves.
On 18 October 2008, Health Canada noted that "bisphenol A exposure to newborns and infants is below levels that cause effects" and that the "general public need not be concerned".
In 2010, Canada's department of the environment declared BPA to be a "toxic substance" and added it to schedule 1 of the Canadian Environmental Protection Act, 1999.
European Union.
The 2008 European Union Risk Assessment Report on bisphenol A, published by the European Commission and European Food Safety Authority (EFSA), concluded that bisphenol A-based products, such as polycarbonate plastic and epoxy resins, are safe for consumers and the environment when used as intended. By October 2008, after the Lang Study was published, the EFSA issued a statement concluding that the study provided no grounds to revise the current Tolerable Daily Intake (TDI) level for BPA of 0.05 mg/kg bodyweight.
On 22 December 2009, the EU Environment ministers released a statement expressing concerns over recent studies showing adverse effects of exposure to endocrine disruptors.
In September 2010, the European Food Safety Authority (EFSA) concluded after a "comprehensive evaluation of recent toxicity data […] that no new study could be identified, which would call for a revision of the current TDI". The Panel noted that some studies conducted on developing animals have suggested BPA-related effects of possible toxicological relevance, in particular biochemical changes in brain, immune-modulatory effects and enhanced susceptibility to breast tumours but considered that those studies had several shortcomings so the relevance of these findings for human health could not be assessed.
On 25 November 2010, the European Union executive commission said it planned to ban the manufacture of polycarbonate baby bottles containing the organic compound bisphenol A from 1 March 2011 and their marketing and market placement from 1 June 2011, according to John Dalli, commissioner in charge of health and consumer policy. This was backed by a majority of EU governments. The ban was called an over-reaction by Richard Sharpe, of the Medical Research Council's Human Reproductive Sciences Unit, who said he was unaware of any convincing evidence justifying the measure and criticized it as being done on political, rather than scientific, grounds.
In January 2011 use of bisphenol A in baby bottles was forbidden in all EU-countries.
After reviewing more recent research, in 2012 EFSA made a decision to re-evaluate the human risks associated with exposure to BPA. They completed a draft assessment of consumer exposure to BPA in July 2013 and at that time asked for public input from all stakeholders to assist in forming a final report, which is expected to be completed in 2014.
In January 2014, EFSA presented a second part of the draft opinion which discussed the human health risks posed by BPA. The draft opinion was accompanied by an eight-week public consultation and also included adverse effects on the liver and kidney as related to BPA. From this it was recommended that the current TDI to be revised. In January 2015 EFSA indicated that the TDI was reduced from 50 to 4 μg/kg body weight/day – a recommendation, as national legislatures make the laws.
The EU Commission issued a new regulation regarding the use of bisphenol A in thermal paper on 12 December 2016. According to this new regulation, thermal paper containing bisphenol A cannot be placed on the EU market after 2 January 2020. This regulation came into effect on 2 January 2017 but there is a transition period of three years.
On 12 January 2017, BPA was added to the candidate list of substances of very high concern (SVHC). Candidate SVHC listing is a first step towards restricting the importing and use of a chemical in the EU. If the European Chemical Agency assigns SVHC status, the presence of BPA in a product at a concentration above 0.1% must be disclosed to a purchaser (with different rules for consumer and business purchasers). In February 2016, France had announced that it intended to propose BPA as a candidate SVHC by 8 August 2016.
Denmark.
In May 2009, the Danish parliament passed a resolution to ban the use of BPA in baby bottles, which had not been enacted by April 2010. In March 2010, a temporary ban was declared by the Health Minister.
Belgium.
In March 2010, senator Philippe Mahoux proposed legislation to ban BPA in food contact plastics. In May 2011, senators Dominique Tilmans and Jacques Brotchi proposed legislation to ban BPA from thermal paper.
France.
On 5 February 2010, the French Food Safety Agency (AFSSA) questioned the previous assessments of the health risks of BPA, especially in regard to behavioral effects observed in rat pups following exposure in utero and during the first months of life. In April 2010, the AFSSA suggested the adoption of better labels for food products containing BPA.
On 24 March 2010, the French Senate unanimously approved a bill to ban BPA from baby bottles. The National Assembly (lower house) approved the text on 23 June 2010, and it has been applicable law since 2 July 2010. On 12 October 2011, the French National Assembly passed a law forbidding the use of bisphenol A in products aimed at children under 3 years old from 2013, and in all food containers from 2014.
On 9 October 2012, the French Senate unanimously adopted the bill to suspend the manufacture, import, export and marketing of all food containers that include bisphenol A from 2015. The 2013 ban of bisphenol A in food products designed for children under 3 years old was maintained.
Germany.
On 19 September 2008, the German Federal Institute for Risk Assessment (Bundesinstitut für Risikobewertung, BfR) stated that there was no reason to change the current risk assessment for bisphenol A on the basis of the Lang Study.
In October 2009, the German environmental organization Bund für Umwelt und Naturschutz Deutschland requested a ban on BPA for children's products, especially pacifiers, and products that make contact with food. In response, some manufacturers voluntarily removed the problematic pacifiers from the market.
Netherlands.
On 3 March 2016, the Netherlands Food and Consumer Product Safety Authority (NVWA) issued cautionary recommendations to the Minister of Health, Welfare, and Sport and the Secretary for Economic Affairs, on the public intake of BPA, especially for vulnerable groups such as women who are pregnant or breastfeeding, and those with developing immune systems such as children below the age of 10. This was done in response to recent published research, and conclusions reached by the European Food Safety Authority. It also called for the concentration of BPA in drinking water to be lowered below 0.2 μg/L, in line with the maximum tolerable intake they recommend.
Switzerland.
In February 2009, the Swiss Federal Office for Public Health, based on reports of other health agencies, stated that the intake of bisphenol A from food represents no risk to the consumer, including newborns and infants. However, in the same statement, it advised for proper use of polycarbonate baby bottles and listed alternatives.
Sweden.
By 26 May 1995, the Swedish Chemicals Agency asked for a BPA ban in baby bottles, but the Swedish Food Safety Authority prefers to await the expected European Food Safety Authority's updated review. The Minister of Environment said to wait for the EFSA review but not for too long.
From March 2011 it has been prohibited to manufacture baby bottles containing bisphenol A, and from July 2011 they cannot be bought in stores. On 12 April 2012, the Swedish government announced that Sweden would ban BPA in cans containing food for children under the age of three.
Since January 2, 2020, BPA has been banned in thermal receipts as a consequence of the EU wide ban.
Since September 1, 2016, it is prohibited to use BPA when relining water pipes with CIPP.
United Kingdom.
In December 2009, responding to a letter from a group of seven scientists that urged the UK Government to "adopt a standpoint consistent with the approach taken by other Governments who have ended the use of BPA in food contact products marketed at children", the UK Food Standards Agency reaffirmed, in January 2009, its view that "exposure of UK consumers to BPA from all sources, including food contact materials, was well below levels considered harmful".
Turkey.
As of 10 June 2011, Turkey banned the use of BPA in baby bottles and other PC items produced for babies.
Japan.
Between 1998 and 2003, the canning industry voluntarily replaced its BPA-containing epoxy resin can liners with BPA-free polyethylene terephthalate (PET) in many of its products. For other products, it switched to a different epoxy lining that yielded much less migration of BPA into food than the previously used resin. In addition, polycarbonate tableware for school lunches was replaced by BPA-free plastics.
Human exposure sources.
The major human exposure route to BPA is diet, including ingestion of contaminated food and water.
It is especially likely to leach from plastics when they are cleaned with harsh detergents or when they contain acidic or high-temperature liquids. BPA is used to form epoxy resin coating of water pipes; in older buildings, such resin coatings are used to avoid replacement of deteriorating pipes. In the workplace, while handling and manufacturing products which contain BPA, inhalation and dermal exposures are the most probable routes. There are many uses of BPA for which related potential exposures have not been fully assessed including digital media, electrical and electronic equipment, automobiles, sports safety equipment, electrical laminates for printed circuit boards, composites, paints, and adhesives. In addition to being present in many products that people use on a daily basis, BPA has the ability to bioaccumulate, especially in water bodies. In one review, it was seen that although BPA is biodegradable, it is still detected after wastewater treatment in many waterways at concentrations of approximately 1 ug/L. This study also looked at other pathways where BPA could potentially bioaccumulate and found "low-moderate potential...in microorganisms, algae, invertebrates, and fish in the environment" suggesting that some environmental exposures are less likely.
In November 2009, the "Consumer Reports" magazine published an analysis of BPA content in some canned foods and beverages, where in specific cases the content of a single can of food could exceed the FDA "Cumulative Exposure Daily Intake" limit.
The CDC had found bisphenol A in the urine of 95% of adults sampled in 1988–1994 and in 93% of children and adults tested in 2003–04. The USEPA Reference dose (RfD) for BPA is 50 μg/kg/day which is not enforceable but is the recommended safe level of exposure. The most sensitive animal studies show effects at much lower doses, and several studies of children, who tend to have the highest levels, have found levels over the EPA's suggested safe limit figure.
A 2009 Health Canada study found that the majority of canned soft drinks it tested had low, but measurable levels of bisphenol A. A study conducted by the University of Texas School of Public Health in 2010 found BPA in 63 of 105 samples of fresh and canned foods, including fresh turkey sold in plastic packaging and canned infant formula. A 2011 study published in "Environmental Health Perspectives", "Food Packaging and Bisphenol A and Bis(2-Ethyhexyl) Phthalate Exposure: Findings from a Dietary Intervention," selected 20 participants based on their self-reported use of canned and packaged foods to study BPA. Participants ate their usual diets, followed by three days of consuming foods that were not canned or packaged. The study's findings include: 1) evidence of BPA in participants' urine decreased by 50% to 70% during the period of eating fresh foods; and 2) participants' reports of their food practices suggested that consumption of canned foods and beverages and restaurant meals were the most likely sources of exposure to BPA in their usual diets. The researchers note that, even beyond these 20 participants, BPA exposure is widespread, with detectable levels in urine samples in more than an estimated 90% of the U.S. population. Another U.S. study found that consumption of soda, school lunches, and meals prepared outside the home were statistically significantly associated with higher urinary BPA.
A 2011 experiment by researchers at the Harvard School of Public Health indicated that BPA used in the lining of food cans is absorbed by the food and then ingested by consumers. Of 75 participants, half ate a lunch of canned vegetable soup for five days, followed by five days of fresh soup, while the other half did the same experiment in reverse order. "The analysis revealed that when participants ate the canned soup, they experienced more than a 1,000 percent increase in their urinary concentrations of BPA, compared to when they dined on fresh soup."
A 2009 study found that drinking from polycarbonate bottles increased urinary bisphenol A levels by two-thirds, from 1.2 μg/g creatinine to 2 μg/g creatinine. Consumer groups recommend that people wishing to lower their exposure to bisphenol A avoid canned food and polycarbonate plastic containers (which shares resin identification code 7 with many other plastics) unless the packaging indicates the plastic is bisphenol A-free. To avoid the possibility of BPA leaching into food or drink, the National Toxicology Panel recommends avoiding microwaving food in plastic containers, putting plastics in the dishwasher, or using harsh detergents.
Besides diet, exposure can also occur through air and through skin absorption. Free BPA is found in high concentration in thermal paper and carbonless copy paper, which would be expected to be more available for exposure than BPA bound into resin or plastic. Popular uses of thermal paper include receipts, event and cinema tickets, labels, and airline tickets. A Swiss study found that 11 of 13 thermal printing papers contained 8 – 17 g/kg bisphenol A (BPA). Upon dry finger contact with a thermal paper receipt, roughly 1 μg BPA (0.2 – 6 μg) was transferred to the forefinger and the middle finger. For wet or greasy fingers approximately 10 times more was transferred. Extraction of BPA from the fingers was possible up to 2 hours after exposure. Further, it has been demonstrated that thermal receipts placed in contact with paper currency in a wallet for 24 hours cause a dramatic increase in the concentration of BPA in paper currency, making paper money a secondary source of exposure. Another study has identified BPA in all of the waste paper samples analysed (newspapers, magazines, office paper, etc.), indicating direct results of contamination through paper recycling. Free BPA can readily be transferred to skin, and residues on hands can be ingested. Bodily intake through dermal absorption (99% of which comes from handling receipts) has been shown for the general population to be 0.219 ng/kg bw/day (occupationally exposed persons absorb higher amounts at 16.3 ng/kg bw/day) whereas aggregate intake (food/beverage/environment) for adults is estimated at 0.36–0.43 μg/kg bw/day (estimated intake for occupationally exposed adults is 0.043–100 μg/kg bw/day).
A study from 2011 found that Americans of all age groups had twice as much BPA in their bodies as Canadians; the reasons for the disparity were unknown, as there was no evidence to suggest higher amounts of BPA in U.S. foods, or that consumer products available in the U.S. containing BPA were BPA-free in Canada. According to another study it may have been due to differences in how and when the surveys were done, because "although comparisons of measured concentrations can be made across populations, this must be done with caution owing to differences in sampling, in the analytical methods used and in the sensitivity of the assays."
Comparing data from the National Health and Nutrition Examination Surveys (NHANES) from four time periods between 2003 and 2012, urinary BPA data indicate that the median daily intake for the overall population is approximately 25 ng/kg/day, below current health-based guidelines. Additionally, daily intake of BPA in the United States has decreased significantly compared to the intakes measured in 2003–2004. Public attention and governmental action during this time period may have decreased exposure to BPA somewhat, but these studies did not include children under the age of six. According to the Endocrine Society, age of exposure is an important factor in determining the extent to which endocrine-disrupting chemicals will have an effect, and the effects on developing fetuses or infants are quite different from those on adults.
Fetal and early-childhood exposures.
A 2009 study found higher urinary concentrations in young children than in adults under typical exposure scenarios. In adults, BPA is eliminated from the body through a detoxification process in the liver. In infants and children, this pathway is not fully developed, so they have a decreased ability to clear BPA from their systems. Several recent studies of children have found levels that exceed the EPA's suggested safe limit figure.
Infants fed with liquid formula are among the most exposed, and those fed formula from polycarbonate bottles can consume up to 13 micrograms of bisphenol A per kg of body weight per day (μg/kg/day; see table below). In the U.S. and Canada, BPA has been found in infant liquid formula in concentrations varying from 0.48 to 11 ng/g. BPA has been rarely found in infant powder formula (only 1 of 14). The U.S. Department of Health & Human Services (HHS) states that "the benefit of a stable source of good nutrition from infant formula and food outweighs the potential risk of BPA exposure". BPA is present in human breast milk, having been found by several studies in 62–75% of breast milk samples. This is presumably due to the mothers being exposed to BPA since it is not naturally produced by the body.
Children may be more susceptible to BPA exposure than adults (see health effects).
A 2010 study of people in Austria, Switzerland, and Germany suggested polycarbonate (PC) baby bottles as the most prominent route of exposure for infants, and canned food for adults and teenagers. In the United States, the growing concern over BPA exposure in infants in recent years has led the manufacturers of plastic baby bottles to stop using BPA in their bottles. The FDA banned the use of BPA in baby bottles and sippy cups (July 2012) as well as the use of epoxy resins in infant formula packaging. However, babies may still be exposed if they are fed with old or hand-me-down bottles bought before the companies stopped using BPA.
One often overlooked source of exposure occurs when a pregnant woman is exposed, thereby exposing the fetus. Animal studies have shown that BPA can be found in both the placenta and the amniotic fluid of pregnant mice. Since BPA was also "detected in the urine and serum of pregnant women and the serum, plasma, and placenta of newborn infants", a study examining the externalizing behaviors associated with prenatal exposure to BPA was performed; it suggests that exposures earlier in development have more of an effect on behavioral outcomes and that female children (2 years old) are affected more than males. A study of 244 mothers indicated that exposure to BPA before birth could affect the behavior of girls at age 3. Girls whose mothers' urine contained high levels of BPA during pregnancy scored worse on tests of anxiety and hyperactivity. Although these girls still scored within a normal range, for every 10-fold increase in the mother's BPA level, the girls scored at least six points lower on the tests. Boys did not seem to be affected by their mothers' BPA levels during pregnancy. After the baby is born, maternal exposure can continue to affect the infant through transfer of BPA to the infant via breast milk. Because of these exposures that can occur both during and after pregnancy, mothers wishing to limit their child's exposure to BPA should attempt to limit their own exposures during that time period.
While the majority of exposures have been shown to come through the diet, accidental ingestion can also be considered a source of exposure. One study conducted in Japan tested plastic baby books to look for possible leaching into saliva when babies chew on them. While the results of this study have yet to be replicated, it gives reason to question whether exposure can also occur in infants through ingestion by chewing on certain books or toys.
Regulation.
Public health regulatory history in the United States.
Charles Schumer introduced a 'BPA-Free Kids Act of 2008' to the U.S. Senate seeking to ban BPA in any product designed for use by children and require the Centers for Disease Control and Prevention to conduct a study of the health effects of BPA exposure. It was reintroduced in 2009 in both Senate and House, but died in committee each time.
In 2008, the FDA reassured consumers that current limits were safe, but convened an outside panel of experts to review the issue. The Lang study was released, and co-author David Melzer presented the results of the study before the FDA panel. An editorial accompanying the Lang study's publication criticized the FDA's assessment of bisphenol A: "A fundamental problem is that the current ADI [acceptable daily intake] for BPA is based on experiments conducted in the early 1980s using outdated methods (only very high doses were tested) and insensitive assays. More recent findings from independent scientists were rejected by the FDA, apparently because those investigators did not follow the outdated testing guidelines for environmental chemicals, whereas studies using the outdated, insensitive assays (predominantly involving studies funded by the chemical industry) are given more weight in arriving at the conclusion that BPA is not harmful at current exposure levels." The FDA was criticized for "basing its conclusion on two studies while downplaying the results of hundreds of other studies." Diana Zuckerman, president of the National Research Center for Women and Families, criticized the FDA in her testimony at the FDA's public meeting on the draft assessment of bisphenol A for use in food contact applications, saying that "At the very least, the FDA should require a prominent warning on products made with BPA".
In March 2009 Suffolk County, New York became the first county to pass legislation to ban baby beverage containers made with bisphenol A. By March 2009, legislation to ban bisphenol A had been proposed in both House and Senate.
In the same month, Rochelle Tyl, author of two studies used by FDA to assert BPA safety in August 2008, said those studies did not claim that BPA is safe, because they were not designed to cover all aspects of the chemical's effects. In May 2009, Minnesota and Chicago were the first U.S. jurisdictions to pass regulations limiting or banning BPA. In June 2009, the FDA announced its decision to reconsider the BPA safety levels. Grassroots political action led Connecticut to become the first U.S. state to ban bisphenol A not only from infant formula and baby food containers, but also from any reusable food or beverage container. In July 2009, the California Environmental Protection Agency's Developmental and Reproductive Toxicity Identification Committee in the California Office of Environmental Health Hazard Assessment unanimously voted against placing Bisphenol A on the state's list of chemicals that are believed to cause reproductive harm. The panel, while concerned over the growing scientific evidence showing BPA's reproductive harm in animals, found that there were insufficient data on its effects in humans. Critics pointed out that the same panel failed to add second-hand smoke to the list until 2006, and only one chemical was added to the list in the last three years. In September, the U.S. Environmental Protection Agency announced that it was evaluating BPA for action plan development. In October, the NIH announced $30,000,000 in stimulus grants to study the health effects of BPA. This money was expected to result in many peer-reviewed publications.
On 15 January 2010, the FDA expressed "some concern", the middle level in its scale of concerns, about the potential effects of BPA on the brain, behavior, and prostate gland in fetuses, infants, and young children, and announced that it was taking reasonable steps to reduce human exposure to BPA in the food supply. However, the FDA was not recommending that families change the use of infant formula or foods, as it saw the benefit of a stable source of good nutrition as outweighing the potential risk from BPA exposure. On the same date, the Department of Health and Human Services released information to help parents to reduce children's BPA exposure. As of 2010 many U.S. states were considering some sort of BPA ban.
In June 2010 the 2008–2009 Annual Report of the President's Cancer Panel was released and recommended: "Because of the long latency period of many cancers, the available evidence argues for a precautionary approach to these diverse chemicals, which include (…) bisphenol A". In August 2010, the Maine Board of Environmental Protection voted unanimously to ban the sale of baby bottles and other reusable food and beverage containers made with bisphenol A as of January 2012. In February 2011, the newly elected governor of Maine, Paul LePage, gained national attention when he spoke on a local TV news show saying he hoped to repeal the ban because, "There hasn't been any science that identifies that there is a problem" and added: "The only thing that I've heard is if you take a plastic bottle and put it in the microwave and you heat it up, it gives off a chemical similar to estrogen. So the worst case is some women may have little beards." In April 2011, the Maine legislature passed a bill to ban the use of BPA in baby bottles, sippy cups, and other reusable food and beverage containers, effective 1 January 2012. Governor LePage refused to sign the bill.
In October 2011, California banned BPA from baby bottles and toddlers' drinking cups, effective 1 July 2013. By 2011, 26 states had proposed legislation that would ban certain uses of BPA. Many bills died in committee. In July 2011, the American Medical Association (AMA) declared that feeding products for babies and infants containing BPA should be banned. It recommended better federal oversight of BPA and clear labeling of products containing it. It stressed that it is important for the FDA to "actively incorporate current science into the regulation of food and beverage BPA-containing products."
In 2012, the FDA concluded an assessment of scientific research on the effects of BPA and stated in the March 2012 Consumer Update that "the scientific evidence at this time does not suggest that the very low levels of human exposure to BPA through the diet are unsafe" although recognizing "potential uncertainties in the overall interpretation of these studies including route of exposure used in the studies and the relevance of animal models to human health. The FDA is continuing to pursue additional research to resolve these uncertainties." Yet on 17 July 2012, the FDA banned BPA from baby bottles and sippy cups. An FDA spokesman said the agency's action was not based on safety concerns and that "the agency continues to support the safety of BPA for use in products that hold food." Since manufacturers had already stopped using the chemical in baby bottles and sippy cups, the decision was a response to a request by the American Chemistry Council, the chemical industry's main trade association, which believed that a ban would boost consumer confidence. The ban was criticized as "purely cosmetic" by the Environmental Working Group, which stated that "If the agency truly wants to prevent people from being exposed to this toxic chemical associated with a variety of serious and chronic conditions it should ban its use in cans of infant formula, food and beverages." The Natural Resources Defense Council called the move inadequate, saying the FDA needs to ban BPA from all food packaging.
As of 2014, 12 states had banned BPA from children's bottles and feeding containers.
Environmental regulation in the United States.
On 30 December 2009 EPA released a so-called action plan for four chemicals, including BPA, which would have added it to the list of "chemicals of concern" regulated under the Toxic Substances Control Act. In February 2010, after lobbyists for the chemical industry had met with administration officials, the EPA delayed BPA regulation by not including the chemical on the list.
On 29 March 2010, EPA published a revised action plan for BPA as a "chemical of concern". An advance Notice of Proposed Rulemaking for BPA testing, initiated in October 2010, was published in the Federal Register in July 2011. Although the Office of Information and Regulatory Affairs (OIRA), part of the Office of Management and Budget (OMB), is required to review draft proposals within 3 months, after more than 3 years it had not done so.
In September 2013 EPA withdrew its 2010 draft BPA rule, saying the rule was "no longer necessary" because EPA was taking a different track at looking at chemicals, a so-called "Work Plan" of more than 80 chemicals for risk assessment and risk reduction. Another proposed rule that EPA withdrew would have limited industry's claims of confidential business information (CBI) for the health and safety studies required when new chemicals are submitted under TSCA for review. The EPA said it continued "to try to reduce unwarranted claims of confidentiality and has taken a number of significant steps that have had dramatic results... tightening policies for CBI claims and declassifying unwarranted confidentiality claims, challenging companies to review existing CBI claims to ensure that they are still valid and providing easier and enhanced access to a wider array of information."
The chemical industry group American Chemistry Council commended EPA for "choosing a course of action that will ultimately strengthen the performance of the nation's primary chemical management law." Richard Denison, senior scientist with the Environmental Defense Fund, commented "both rules were subject to intense opposition and lobbying from the chemical industry" and "Faced presumably with the reality that [the Office of Information and Regulatory Affairs] was never going to let EPA even propose the rules for public comment, EPA decided to withdraw them."
On 29 January 2014 EPA released a final alternatives assessment for BPA in thermal paper as part of its Design for the Environment program.
Chemical manufacturers reactions to bans.
In March 2009 the six largest U.S. producers of baby bottles decided to stop using bisphenol A in their products. The same month Sunoco, a producer of gasoline and chemicals, refused to sell BPA to companies for use in food and water containers for children younger than 3, saying it could not be certain of the compound's safety.
In May 2009, Lyndsey Layton of the Washington Post accused manufacturers of food and beverage containers, and some of their biggest customers, of pursuing a public relations and lobbying strategy to block government BPA bans. She noted that, "Despite more than 100 published studies by government scientists and university laboratories that have raised health concerns about the chemical, the Food and Drug Administration has deemed it safe largely because of two studies, both funded by a chemical industry trade group". In August 2009 the "Milwaukee Journal Sentinel" investigative series into BPA and its effects revealed the Society of the Plastics Industry's plans for a major public relations blitz to promote BPA, including plans to attack and discredit those who report or comment negatively on BPA and its effects.
BPA-free, unknown substitute.
The chemical industry over time responded to criticism of BPA by promoting "BPA-free" products. For example, in 2010, General Mills announced it had found a "BPA-free alternative" can liner that works with tomatoes. It said it would begin using the BPA-free alternative in tomato products sold by its organic foods subsidiary Muir Glen with that year's tomato harvest. As of 2014, General Mills has refused to state which alternative chemical it uses, and whether it uses it on any of its other canned products.
BPA-free, epoxy-free.
A minority of companies have stated what alternative compound(s) they use. Following an inquiry by Representative Edward Markey (D-Mass.), seventeen companies replied saying they were going BPA-free, including Campbell Soup Company and General Mills Inc. None of the companies said they are or were going to use bisphenol S; only four stated the alternative to BPA that they would be using. ConAgra stated in 2013 "alternate liners for tomatoes are vinyl...New aerosol cans are lined with polyester resin". Eden Foods stated that only their "beans are canned with a liner of an oleoresinous c-enamel that does not contain the endocrine disruptor BPA. Oleoresin is a mixture of oil and resin extracted from plants such as pine or balsam fir". Hain Celestial Group will use "modified polyester and/or acrylic … by June 2014 for our canned soups, beans, and vegetables". Heinz stated in 2011 it "intend[s] to replace epoxy linings in all our food containers…. We have prioritized baby foods", and in 2012 "no BPA in any plastic containers we use".
BPA substitute BPS.
Some "BPA free" plastics are made from epoxy containing a compound called bisphenol S (BPS). BPS shares a similar structure and versatility to BPA and has been used in numerous products from currency to thermal receipt paper. Widespread human exposure to BPS was confirmed in an analysis of urine samples taken in the U.S., Japan, China, and five other Asian countries. Researchers found BPS in all the receipt paper, 87 percent of the paper currency and 52 percent of recycled paper they tested. The study found that people may be absorbing 19 times more BPS through their skin than the amount of BPA they absorbed, when it was more widely used.
In a 2011 study researchers looked at 455 common plastic products and found that 70% tested positive for estrogenic activity. After the products had been washed or microwaved the proportion rose to 95%. The study concluded: "Almost all commercially available plastic products we sampled, independent of the type of resin, product, or retail source, leached chemicals having reliably-detectable EA [endocrine activity], including those advertised as BPA-free. In some cases, BPA-free products released chemicals having more EA than BPA-containing products." A systematic review published in 2015 found that "based on the current literature, BPS and BPF are as hormonally active as BPA, and have endocrine disrupting effects."
Phenol-based substitutes.
Among potential substitutes for BPA, phenol-based chemicals closely related to BPA have been identified. The non-exhaustive list includes bisphenol E (BPE), bisphenol B (BPB), 4-cumylphenol (HPP) and bisphenol F (BPF), with only BPS currently being used as the main substitute in thermal paper.
Degradation of BPA.
Microbial degradation.
The enzyme 4-hydroxyacetophenone monooxygenase, which can be found in "Pseudomonas fluorescens", uses 4-hydroxyacetophenone (1-(4-hydroxyphenyl)ethan-1-one), NADPH, H+ and O2 to produce 4-hydroxyphenyl acetate, NADP+, and H2O.
The fungus "Cunninghamella elegans" is also able to degrade synthetic phenolic compounds like bisphenol A.
Plant degradation.
"Portulaca oleracea" efficiently removes bisphenol A from a hydroponic solution. How this happens is unclear.
Photodegradation.
Photodegradation is BPA's main route of natural weathering in the environment, proceeding via the photo-Fries rearrangement. Experimentally, BPA has been shown to photodegrade in reactions catalyzed by zinc oxide, titanium dioxide, and tin dioxide, which have been studied as methods of water decontamination. The photo-Fries degradation is a complex rearrangement of the aromatic carbonate backbone of BPA into phenyl salicylate and dihydroxybenzophenone derivatives before the energized ring releases carbon dioxide. In aqueous solution, BPA shows UV absorption at wavelengths between 250 nm and 360 nm, and the photo-Fries degradation occurs at wavelengths below 300 nm. The reaction begins with an alpha cleavage between the carbonyl carbon and the oxygen in the carbonate linkage, followed by the photo-Fries rearrangement of the products.
Combustion of BPA.
Hydroxyl radicals are powerful oxidants that transform BPA into different phenolic compounds. The advanced photocatalytic oxidation of BPA, using compounds like sodium hypochlorite (NaOCl) as the oxidizing agent, can accelerate the degradation by releasing oxygen into the water. This decomposition occurs when BPA is exposed to UV irradiation. The released oxygen, another strong oxidant, also causes BPA to disintegrate in aqueous conditions, producing carbon dioxide and water. The dissolved carbon dioxide increases the amount of carbonic acid in the water, acidifying it.
Oxidation of BPA by ozone.
During water treatment, BPA can be removed through ozonation. A 2008 study identified the degradation products of this reaction through the use of liquid chromatography and mass spectrometry.
Solutions of BPA in water decreased in pH after the ozonation process was completed; drops from pH 6.5 to 4.5 were observed, likely because of the formation of carboxylic acids. These products were formed at 20±2 °C and have high molecular weights. Because ozone is electrophilic, it reacts with the aromatic rings by electrophilic substitution.
Kinetics of BPA degradation.
In 1991, the first rate expression for BPA degradation through ozonation was determined.
formula_0
This second-order rate law relates the rate of change of the BPA concentration to the apparent rate constant, the concentration of BPA, and the concentration of ozone.
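For illustration, the sketch below numerically integrates this rate law for hypothetical starting concentrations and an assumed apparent rate constant; all numbers are placeholders rather than measured values, and the ozone concentration is treated as constant so the decay is effectively pseudo-first-order.

```python
# Illustrative numerical integration of -d[BPA]/dt = k_app * [BPA] * [O3].
# All numbers (k_app, initial concentrations) are hypothetical placeholders.
def integrate_bpa_decay(bpa0, o3, k_app, dt=0.1, t_end=60.0):
    """Forward-Euler integration, holding the ozone concentration constant."""
    bpa, t = bpa0, 0.0
    history = [(t, bpa)]
    while t < t_end:
        rate = k_app * bpa * o3          # second-order rate law
        bpa = max(bpa - rate * dt, 0.0)  # Euler step, clipped at zero
        t += dt
        history.append((t, bpa))
    return history

if __name__ == "__main__":
    # With constant [O3], the decay behaves like exp(-k_app*[O3]*t).
    for t, c in integrate_bpa_decay(bpa0=1.0e-5, o3=2.0e-5, k_app=1.0e4)[::100]:
        print(f"t = {t:5.1f} s  [BPA] = {c:.3e} M")
```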
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "-{d[BPA] \\over dt} = k_\\text{apparent}[BPA][O_3]"
}
] |
https://en.wikipedia.org/wiki?curid=57552333
|
5755288
|
Harmonic polynomial
|
Polynomial whose Laplacian is zero
In mathematics, a polynomial formula_0 whose Laplacian is zero is termed a harmonic polynomial.
The harmonic polynomials form a subspace of the vector space of polynomials over the given field. In fact, they form a graded subspace. For the real field (formula_1), the harmonic polynomials are important in mathematical physics.
The Laplacian is the sum of second-order partial derivatives with respect to each of the variables, and is an invariant differential operator under the action of the orthogonal group via the group of rotations.
The standard separation of variables theorem states that every multivariate polynomial over a field can be decomposed as a finite sum of products of a radial polynomial and a harmonic polynomial. This is equivalent to the statement that the polynomial ring is a free module over the ring of radial polynomials.
Examples.
Consider a degree-formula_2 univariate polynomial formula_3. In order to be harmonic, this polynomial must satisfy
formula_4
at all points formula_5. In particular, when formula_6, we have a polynomial formula_7, which must satisfy the condition formula_8. Hence, the only harmonic polynomials of one (real) variable are affine functions formula_9.
In the multivariable case, one finds nontrivial spaces of harmonic polynomials. Consider for instance the bivariate quadratic polynomial
formula_10
where formula_11 are real coefficients. The Laplacian of this polynomial is given by
formula_12
Hence, in order for formula_13 to be harmonic, its coefficients need only satisfy the relationship formula_14. Equivalently, all (real) quadratic bivariate harmonic polynomials are linear combinations of the polynomials
formula_15
Note that, as in any vector space, there are other choices of basis for this same space of polynomials.
A basis for real bivariate harmonic polynomials up to degree 6 is given as follows:
formula_16
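A quick way to verify that polynomials such as these are harmonic is to apply the Laplacian symbolically. The following sketch (using SymPy) checks a few of the bivariate polynomials listed above, together with one non-example; it is only an illustration, not part of the mathematical development.

```python
# Symbolic check that some bivariate polynomials are harmonic
# (Laplacian identically zero). Requires SymPy.
import sympy as sp

x, y = sp.symbols("x y")

def is_harmonic(p):
    """Return True if d^2p/dx^2 + d^2p/dy^2 simplifies to 0."""
    return sp.simplify(sp.diff(p, x, 2) + sp.diff(p, y, 2)) == 0

candidates = {
    "x*y": x*y,
    "x**2 - y**2": x**2 - y**2,
    "x**3 - 3*x*y**2": x**3 - 3*x*y**2,
    "x**2 + y**2 (not harmonic)": x**2 + y**2,
}

for name, p in candidates.items():
    print(f"{name}: harmonic = {is_harmonic(p)}")
```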
|
[
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "\\mathbb{R}"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "p(x) := \\textstyle\\sum_{k=0}^d a_k x^k"
},
{
"math_id": 4,
"text": "0 = \\tfrac{\\partial^2}{\\partial x^2} p(x) = \\sum_{k=2}^d k(k-1) a_k x^{k-2}"
},
{
"math_id": 5,
"text": "x \\in \\mathbb{R}"
},
{
"math_id": 6,
"text": "d=2"
},
{
"math_id": 7,
"text": "p(x) = a_0 + a_1 x + a_2 x^2"
},
{
"math_id": 8,
"text": "a_2 = 0"
},
{
"math_id": 9,
"text": "x \\mapsto a_0 + a_1 x"
},
{
"math_id": 10,
"text": "p(x,y) := a_{0,0} + a_{1,0} x + a_{0,1} y + a_{1,1} x y + a_{2,0} x^2 + a_{0,2} y^2, "
},
{
"math_id": 11,
"text": "a_{0,0}, a_{1,0}, a_{0,1}, a_{1,1}, a_{2,0}, a_{0,2}"
},
{
"math_id": 12,
"text": "\\Delta p(x,y) = \\tfrac{\\partial^2}{\\partial x^2} p(x,y) + \\tfrac{\\partial^2}{\\partial y^2} p(x,y) = 2(a_{2,0} + a_{0,2})."
},
{
"math_id": 13,
"text": "p(x,y)"
},
{
"math_id": 14,
"text": "a_{2,0} = -a_{0,2}"
},
{
"math_id": 15,
"text": "\n1, \\quad x, \\quad y, \\quad xy, \\quad x^2 - y^2.\n"
},
{
"math_id": 16,
"text": "\\begin{align}\n\\phi_{0} (x,y) &= 1 \\\\\n\\phi_{1,1}(x,y) &= x & \\phi_{1,2}(x,y) &= y \\\\\n\\phi_{2,1}(x,y) &= x y & \\phi_{2,2}(x,y) &= x^2 - y^2 \\\\\n\\phi_{3,1}(x,y) &= y^3 - 3 x^2 y & \\phi_{3,2}(x,y) &= x^3 - 3 x y^2 \\\\\n\\phi_{4,1}(x,y) &= x^3 y - x y^3 & \\phi_{4,2}(x,y) &= -x^4 + 6 x^2 y^2 - y^4 \\\\\n\\phi_{5,1}(x,y) &= 5 x^4 y - 10 x^2 y^3 + y^5 & \\phi_{5,2}(x,y) &= x^5 - 10 x^3 y^2 + 5 x y^4 \\\\\n\\phi_{6,1}(x,y) &= 3 x^5 y - 10 x^3 y^3 + 3 x y^5 & \\phi_{6,2}(x,y) &= -x^6 + 15 x^4 y^2 - 15 x^2 y^4 + y^6\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=5755288
|
5755338
|
Radical polynomial
|
Type of multivariate polynomial in abstract algebra
In mathematics, in the realm of abstract algebra, a radical polynomial is a multivariate polynomial over a field that can be expressed as a polynomial in the sum of squares of the variables. That is, if
formula_0
is a polynomial ring, the ring of radical polynomials is the subring generated by the polynomial
formula_1
Radical polynomials are characterized as precisely those polynomials that are invariant under the action of the orthogonal group.
The ring of radical polynomials is a graded subalgebra of the ring of all polynomials.
The standard separation of variables theorem asserts that every polynomial can be expressed as a finite sum of terms, each term being a product of a radical polynomial and a harmonic polynomial. This is equivalent to the statement that the ring of all polynomials is a free module over the ring of radical polynomials.
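To illustrate the invariance of radical polynomials under the orthogonal group, the following SymPy sketch rotates two variables by an arbitrary angle and confirms that the generator x² + y² is unchanged, while a non-radical (harmonic) polynomial is not. This is only a two-variable illustration under the stated substitution, not a general proof.

```python
# Check rotational invariance of the radical generator x^2 + y^2 in two variables.
import sympy as sp

x, y, t = sp.symbols("x y theta", real=True)

# Rotate coordinates by an angle theta.
xr = sp.cos(t)*x - sp.sin(t)*y
yr = sp.sin(t)*x + sp.cos(t)*y

radical = x**2 + y**2          # generator of the ring of radical polynomials
non_radical = x**2 - y**2      # harmonic, but not rotation-invariant

# Difference after rotation: zero for the radical polynomial, nonzero otherwise.
print(sp.simplify(radical.subs({x: xr, y: yr}, simultaneous=True) - radical))
print(sp.simplify(non_radical.subs({x: xr, y: yr}, simultaneous=True) - non_radical))
```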
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k[x_1, x_2,\\ldots, x_n]"
},
{
"math_id": 1,
"text": "\\sum_{i=1}^n x_i^2."
}
] |
https://en.wikipedia.org/wiki?curid=5755338
|
57554421
|
Pomeranchuk instability
|
The Pomeranchuk instability is an instability in the shape of the Fermi surface of a material with interacting fermions, causing Landau’s Fermi liquid theory to break down. It occurs when a Landau parameter in Fermi liquid theory has a sufficiently negative value, causing deformations of the Fermi surface to be energetically favourable. It is named after the Soviet physicist Isaak Pomeranchuk.
Introduction: Landau parameter for a Fermi liquid.
In a Fermi liquid, renormalized single electron propagators (ignoring spin) are formula_0
where capital momentum letters denote four-vectors formula_1 and the Fermi surface has zero energy; poles of this function determine the quasiparticle energy-momentum dispersion relation. The four-point vertex function formula_2 describes the diagram with two incoming electrons of momentum formula_3 and formula_4; two outgoing electrons of momentum formula_5 and formula_6; and amputated external lines:formula_7 Call the momentum transferformula_8 When formula_9 is very small (the regime of interest here), the "T"-channel dominates the "S"- and "U"-channels. The Dyson equation then offers a simpler description of the four-point vertex function in terms of the 2-particle irreducible formula_10, which corresponds to all diagrams connected after cutting two electron propagators: formula_11 Solving for formula_12 shows that, in the similar-momentum, similar-wavelength limit formula_13, the former tends towards an operator formula_14 satisfyingformula_15 whereformula_16 The normalized Landau parameter is defined in terms of formula_14 as formula_17 where formula_18 is the density of Fermi surface states. In the Legendre eigenbasis formula_19, the parameter formula_20 admits the expansion formula_21 Pomeranchuk's analysis revealed that each formula_22 cannot be very negative.
Stability criterion.
In a 3D isotropic Fermi liquid, consider small density fluctuations formula_23 around the Fermi momentum formula_24, where the shift in Fermi surface expands in spherical harmonics as formula_25 The energy associated with a perturbation is approximated by the functional formula_26 where formula_27. Assuming formula_28, these terms are,formula_29 and so formula_30
When the Pomeranchuk stability criterion formula_31 is satisfied, this value is positive, and the Fermi surface distortion formula_32 requires energy to form. Otherwise, formula_32 releases energy, and will grow without bound until the model breaks down. That process is known as Pomeranchuk instability.
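As a numerical illustration of the criterion, the snippet below evaluates the sign of the coefficient 1 + f_l/(2l + 1) that appears in the energy functional above for a few assumed Landau parameters; the values are arbitrary examples, not data for any particular material.

```python
# Sign of the Fermi-surface deformation energy coefficient 1 + f_l/(2l+1).
# A negative value signals a Pomeranchuk instability in that angular channel.
def stable(f_l, l):
    """Return True if the l-th channel satisfies f_l > -(2l + 1)."""
    return f_l > -(2 * l + 1)

# Arbitrary example Landau parameters for channels l = 0, 1, 2.
example_parameters = {0: 0.5, 1: -1.2, 2: -5.5}

for l, f_l in example_parameters.items():
    coeff = 1.0 + f_l / (2 * l + 1)
    print(f"l={l}: f_l={f_l:+.1f}, energy coefficient={coeff:+.2f}, stable={stable(f_l, l)}")
```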
In 2D, a similar analysis, with circular wave fluctuations formula_33 instead of spherical harmonics and Chebyshev polynomials instead of Legendre polynomials, yields an analogous Pomeranchuk constraint. In anisotropic materials, the same qualitative result holds: for sufficiently negative Landau parameters, unstable fluctuations spontaneously destroy the Fermi surface.
The point at which formula_34 is of much theoretical interest, as it indicates a quantum phase transition from a Fermi liquid to a different state of matter. Above zero temperature a quantum critical state exists.
Physical quantities with manifest Pomeranchuk criterion.
Many physical quantities in Fermi liquid theory are simple expressions of components of Landau parameters. A few standard ones are listed here; they diverge or become unphysical beyond the quantum critical point.
Isothermal compressibility: formula_35
Effective mass: formula_36
Speed of first sound: formula_37
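The sketch below evaluates these expressions for assumed values of f0 and f1 in arbitrary consistent units (illustrative numbers only), showing how the compressibility diverges and the first-sound speed goes to zero as f0 approaches −1.

```python
import math

# Fermi-liquid quantities expressed through the Landau parameters f0 and f1,
# following the formulas quoted above. All inputs are illustrative placeholders
# in arbitrary consistent units, not values for a real material.
def compressibility(N, n, f0):
    """kappa = (N / n^2) / (1 + f0); diverges as f0 -> -1."""
    return (N / n**2) / (1.0 + f0)

def effective_mass(m, f1):
    """m* = m (1 + f1 / 3)."""
    return m * (1.0 + f1 / 3.0)

def first_sound_speed(p_f, m, f0, f1):
    """C = sqrt(p_F^2 (1 + f0) / (m^2 (3 + f1)))."""
    return math.sqrt(p_f**2 * (1.0 + f0) / (m**2 * (3.0 + f1)))

for f0 in (0.5, 0.0, -0.9, -0.99):
    kappa = compressibility(N=1.0, n=1.0, f0=f0)
    c1 = first_sound_speed(p_f=1.0, m=1.0, f0=f0, f1=0.0)
    print(f"f0={f0:+.2f}  kappa={kappa:9.2f}  first-sound speed={c1:.3f}")

print("m*/m for f1 = 6:", effective_mass(1.0, 6.0))  # 3.0
```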
Unstable zero sound modes.
The Pomeranchuk instability manifests in the dispersion relation for the zeroth sound, which describes how the localized fluctuations of the momentum density function formula_38 propagate through space and time.
Just as the quasiparticle dispersion is given by the pole of the one-particle propagator, the zero sound dispersion relation is given by the pole of the "T"-channel of the vertex function formula_39 near small formula_40. Physically, this describes the propagation of an electron hole pair, which is responsible for the fluctuations in formula_38.
From the relation formula_41, and ignoring the contributions of formula_42 for formula_43, the zero sound spectrum is given by the four-vectors formula_44 satisfying formula_45 Equivalently, this condition can be rewritten (equation (1)) in terms of the dimensionless variables formula_46 and formula_47.
When formula_48, equation (1) can be solved implicitly for a real solution formula_49, corresponding to a real dispersion relation of oscillatory waves.
When formula_50, the solution formula_49 is pure imaginary, corresponding to an exponential change in amplitude over time. For formula_51, the imaginary part formula_52, damping waves of zeroth sound. But for formula_53 and sufficiently small formula_54, the imaginary part formula_55, implying exponential growth of any low-momentum zero sound perturbation.
Nematic phase transition.
Pomeranchuk instabilities in non-relativistic systems at formula_56 cannot exist. However, instabilities at formula_57 have interesting solid state applications. From the form of spherical harmonics formula_58 (or formula_59 in 2D), the Fermi surface is distorted into an ellipsoid (or ellipse). Specifically, in 2D, the quadrupole moment order parameter formula_60 has nonzero vacuum expectation value in the formula_57 Pomeranchuk instability. The Fermi surface has eccentricity formula_61 and spontaneous major axis orientation formula_62. Gradual spatial variation in formula_63 produces gapless Goldstone modes, forming a nematic liquid statistically analogous to a liquid crystal. Oganesyan et al.'s analysis of a model interaction between quadrupole moments predicts damped zero sound fluctuations of the quadrupole moment condensate for waves oblique to the ellipse axes.
The 2D square-lattice tight-binding Hubbard Hamiltonian with next-to-nearest neighbour interaction has been found by Halboth and Metzner to display an instability in the susceptibility for "d"-wave fluctuations under renormalization group flow. Thus, the Pomeranchuk instability is suspected to explain the experimentally measured anisotropy in cuprate superconductors such as LSCO and YBCO.
|
[
{
"math_id": 0,
"text": "G(K)=\\frac{Z}{k_0 -\\epsilon_{\\vec{k}} + i\\eta \\sgn(k_0)}\\text{,}"
},
{
"math_id": 1,
"text": "K=(k_0,\\vec{k})"
},
{
"math_id": 2,
"text": "\\Gamma_{(K_3,K_4;K_1,K_2)}"
},
{
"math_id": 3,
"text": "K_1"
},
{
"math_id": 4,
"text": "K_2"
},
{
"math_id": 5,
"text": "K_3"
},
{
"math_id": 6,
"text": "K_4"
},
{
"math_id": 7,
"text": "\\begin{align}\n\\Gamma_{(K_3, K_4 ; K_1, K_2)}&=\\int{\\prod_{i=1}^2{dX_i\\,e^{iK_i X_i}}\\prod_{i=3}^4{dX_i\\,e^{-iK_i X_i}}\\langle T\\psi^{\\dagger}(X_3)\\psi^{\\dagger}(X_4)\\psi(X_1)\\psi(X_2)\\rangle} \\\\\n&=(2\\pi)^8 \\delta(K_1-K_3)\\delta(K_2-K_4) G(K_1) G(K_2) - {} \\\\\n&\\phantom{{}={}}(2\\pi)^8 \\delta(K_1-K_4)\\delta(K_2-K_3) G(K_1) G(K_2) + {} \\\\\n&\\phantom{{}={}}(2\\pi)^4 \\delta({K_1+K_2-K_3-K_4}) G(K_1)G(K_2)G(K_3)G(K_4) i\\Gamma_{(K_3, K_4 ; K_1, K_2)}\\text{.}\n\\end{align}"
},
{
"math_id": 8,
"text": "K'=(k'_0,\\vec{k'})=K_1-K_3\\text{.}"
},
{
"math_id": 9,
"text": "K'"
},
{
"math_id": 10,
"text": "\\tilde{\\Gamma}"
},
{
"math_id": 11,
"text": "\\Gamma_{K_3, K_4; K_1, K_2} = \\tilde\\Gamma_{K_3, K_4; K_1, K_2} - i \\sum_Q \\tilde\\Gamma _ {K_3, Q+K';K_1,Q} G(Q)G(Q+K') \\Gamma_{Q,K_4; Q+K', K_2}\\text{.}"
},
{
"math_id": 12,
"text": "\\Gamma"
},
{
"math_id": 13,
"text": "k'\\ll\\omega'\\ll1"
},
{
"math_id": 14,
"text": "\\Gamma_{K_1,K_2}^{\\omega}"
},
{
"math_id": 15,
"text": "L=\\Gamma^{-1}-(\\Gamma^\\omega)^{-1}\\text{,}"
},
{
"math_id": 16,
"text": "L_{Q''+K'', Q'-K'; Q'', Q'} = -i\\delta_{Q'',Q'}\\delta_{K'',K'}G(Q')G(K'+Q')\\text{.}"
},
{
"math_id": 17,
"text": "f_{kk'} = Z^2 N \\Gamma^\\omega ( (\\epsilon_{\\rm F}, \\vec{k}) , (\\epsilon_{\\rm F}, \\vec{k'}))\\text{,}"
},
{
"math_id": 18,
"text": "N=\\frac{p_{\\mathrm{F}}m_{\\mathrm{F}}^*}{\\pi^2}"
},
{
"math_id": 19,
"text": "\\{P_\\ell\\}_\\ell"
},
{
"math_id": 20,
"text": "f"
},
{
"math_id": 21,
"text": "f_{p_{\\rm F} \\hat{k}, p_{\\rm F} \\hat{k'}} = \\sum_{\\ell=0}^{\\infty}{P_\\ell(\\hat{k} \\cdot \\hat{k'})f_\\ell}\\text{.}"
},
{
"math_id": 22,
"text": "f_\\ell"
},
{
"math_id": 23,
"text": "\\delta n_k=\\Theta(|k|-p_{\\mathrm{F}})-\\Theta(|k|-p_{\\mathrm{F}}'(\\hat{k}))"
},
{
"math_id": 24,
"text": "p_\\mathrm{F}"
},
{
"math_id": 25,
"text": "p_{\\rm F}'(\\hat{k}) = \\sum_{l=0}^\\infty Y_{l,m}(\\hat{k}) \\delta \\phi_{lm}\\text{.}"
},
{
"math_id": 26,
"text": "E = \\sum_{\\vec{k}} \\epsilon_{\\vec{k}} \\delta n_{\\vec{k}} + \\sum_{\\vec{k},\\vec{k'}}{ \\frac{1}{2NV}f_{\\vec{k}\\vec{k'}} \\delta n_{\\vec{k}} \\delta n_\\vec{k'} }"
},
{
"math_id": 27,
"text": "\\vec{\\epsilon_k}=v_\\mathrm{F}(|\\vec{k}|-p_\\mathrm{F})"
},
{
"math_id": 28,
"text": "|\\delta\\phi_{lm}|\\ll|p_{\\rm F}|"
},
{
"math_id": 29,
"text": "\\begin{align}\n&\\sum_{k} \\epsilon_k \\delta n_k = \\frac{2}{( 2 \\pi)^3}\\int d^2 \\hat{k} \\int_{p_{\\rm F}}^{p_{\\rm F}'(\\hat{k})} v_{\\rm F} (p'-p_{\\rm F}) p'^2 d p' = \\frac{p_{\\rm F}^2 v_{\\rm F}}{(2 \\pi)^3} \\sum_{lm} (\\delta \\phi_{lm})^2 \\frac{4 \\pi}{2l+1} \\frac{ (l+m)!}{(l-m)!} \\\\\n&\\sum_{k, k'} f_{k k'} \\delta n_k \\delta n_{k'} = \\frac{2 p_{\\rm F}^4}{(2\\pi)^6 } \\int d^2 \\hat{k} d^2 \\hat{k'} (p_{\\rm F}'(\\hat{k})-p_{\\rm F})(p_{\\rm F}'(\\hat{k'})_{\\rm F})f_{p_{\\rm F} \\hat{k}, p_{\\rm F} \\hat{k'}}\n\\end{align}"
},
{
"math_id": 30,
"text": "E = \\frac{p_{\\rm F}^2 v_{\\rm F}}{2 (\\pi)^2} \\sum_{lm} (\\delta \\phi_{lm})^2 \\frac{(l+m)!}{(2l+1)(l-m)!}\\left( 1+ \\frac{f_l}{2l+1}\\right)\\text{.}"
},
{
"math_id": 31,
"text": "f_l >-(2l+1)"
},
{
"math_id": 32,
"text": "\\delta\\phi_{lm}"
},
{
"math_id": 33,
"text": " \\propto e^{i l \\theta}"
},
{
"math_id": 34,
"text": "F_l = - (2l+1)"
},
{
"math_id": 35,
"text": "\\kappa = -\\frac{1}{V} \\frac{\\partial V}{\\partial P} =\\frac{N/n^2}{1+f_0} "
},
{
"math_id": 36,
"text": "m^* = \\frac{p_{\\rm F}}{v_{\\rm F}} = m(1+f_1/3)"
},
{
"math_id": 37,
"text": "C = \\sqrt{\\frac{p_{\\rm F}^2 (1+ f_0)}{m^2( 3+f_1)}}"
},
{
"math_id": 38,
"text": "\\delta n_k"
},
{
"math_id": 39,
"text": "\\Gamma(K_3, K_4; K_1, K_2)"
},
{
"math_id": 40,
"text": "K_1-K_3"
},
{
"math_id": 41,
"text": "\\Gamma= ((\\Gamma^\\omega)^{-1} - L)^{-1}"
},
{
"math_id": 42,
"text": "f_\\ell"
},
{
"math_id": 43,
"text": "\\ell >0"
},
{
"math_id": 44,
"text": "K' = (\\omega(\\vec{k'}), \\vec{k'})"
},
{
"math_id": 45,
"text": "\\frac{Z^2 N}{f_0} =-i \\sum_Q G(Q+K')G(Q+K)\\text{.}"
},
{
"math_id": 46,
"text": "s = \\frac{\\omega(\\vec{k})}{|\\vec{k}|p_{\\rm F}} "
},
{
"math_id": 47,
"text": "x = \\frac{|k|}{p_{\\rm F}}"
},
{
"math_id": 48,
"text": "f_0>0"
},
{
"math_id": 49,
"text": "s(x)"
},
{
"math_id": 50,
"text": "f_0<0"
},
{
"math_id": 51,
"text": "-1<f_0<0"
},
{
"math_id": 52,
"text": "\\Im(s(x))<0"
},
{
"math_id": 53,
"text": "-1 >f_0"
},
{
"math_id": 54,
"text": "x"
},
{
"math_id": 55,
"text": "\\Im(s(x))>0"
},
{
"math_id": 56,
"text": "l=1"
},
{
"math_id": 57,
"text": "l=2"
},
{
"math_id": 58,
"text": "Y_{2,m} (\\theta, \\phi) "
},
{
"math_id": 59,
"text": "e^{2i\\theta}"
},
{
"math_id": 60,
"text": "\\tilde{Q}(q) = \\sum_k e^{2i \\theta_q} \\psi^{\\dagger}_{k+q} \\psi_k "
},
{
"math_id": 61,
"text": "|\\langle \\tilde{Q}(0) \\rangle|"
},
{
"math_id": 62,
"text": "\\theta =\\arg(\\langle \\tilde{Q}(0) \\rangle)"
},
{
"math_id": 63,
"text": "\\theta(\\vec{r})"
}
] |
https://en.wikipedia.org/wiki?curid=57554421
|
57555
|
Acid dissociation constant
|
Measure of an acid's strength in solution
In chemistry, an acid dissociation constant (also known as acidity constant, or acid-ionization constant; denoted "K"a) is a quantitative measure of the strength of an acid in solution. It is the equilibrium constant for a chemical reaction
<chem>HA <=> A^- + H^+</chem>
known as dissociation in the context of acid–base reactions. The chemical species HA is an acid that dissociates into A−, called the conjugate base of the acid, and a hydrogen ion, H+. The system is said to be in equilibrium when the concentrations of its components do not change over time, because both forward and backward reactions are occurring at the same rate.
The dissociation constant is defined by
formula_0 or by its logarithmic form
formula_1
where quantities in square brackets represent the molar concentrations of the species at equilibrium. For example, for a hypothetical weak acid having "K"a = 10−5, log "K"a is the exponent (−5), giving p"K"a = 5. For acetic acid, "K"a = 1.8 × 10−5, so p"K"a is about 4.7. A higher "K"a corresponds to a stronger acid (an acid that is more dissociated at equilibrium). The form p"K"a is often used because it provides a convenient logarithmic scale, where a lower p"K"a corresponds to a stronger acid.
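Since p"K"a is simply the negative base-10 logarithm of "K"a, conversion between the two forms is a one-line calculation; a minimal sketch in Python, using the acetic acid value quoted above:

```python
import math

def pKa_from_Ka(Ka):
    """pKa = -log10(Ka)."""
    return -math.log10(Ka)

def Ka_from_pKa(pKa):
    """Ka = 10**(-pKa)."""
    return 10.0 ** (-pKa)

print(round(pKa_from_Ka(1.8e-5), 2))  # acetic acid: about 4.74
print(Ka_from_pKa(5.0))               # 1e-05
```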
Theoretical background.
The acid dissociation constant for an acid is a direct consequence of the underlying thermodynamics of the dissociation reaction; the p"K"a value is directly proportional to the standard Gibbs free energy change for the reaction. The value of the p"K"a changes with temperature and can be understood qualitatively based on Le Châtelier's principle: when the reaction is endothermic, "K"a increases and p"K"a decreases with increasing temperature; the opposite is true for exothermic reactions.
The value of p"K"a also depends on molecular structure of the acid in many ways. For example, Pauling proposed two rules: one for successive p"K"a of polyprotic acids (see Polyprotic acids below), and one to estimate the p"K"a of oxyacids based on the number of =O and −OH groups (see Factors that affect p"K"a values below). Other structural factors that influence the magnitude of the acid dissociation constant include inductive effects, mesomeric effects, and hydrogen bonding. Hammett type equations have frequently been applied to the estimation of p"K"a.
The quantitative behaviour of acids and bases in solution can be understood only if their p"K"a values are known. In particular, the pH of a solution can be predicted when the analytical concentration and p"K"a values of all acids and bases are known; conversely, it is possible to calculate the equilibrium concentration of the acids and bases in solution when the pH is known. These calculations find application in many different areas of chemistry, biology, medicine, and geology. For example, many compounds used for medication are weak acids or bases, and a knowledge of the p"K"a values, together with the octanol-water partition coefficient, can be used for estimating the extent to which the compound enters the blood stream. Acid dissociation constants are also essential in aquatic chemistry and chemical oceanography, where the acidity of water plays a fundamental role. In living organisms, acid–base homeostasis and enzyme kinetics are dependent on the p"K"a values of the many acids and bases present in the cell and in the body. In chemistry, a knowledge of p"K"a values is necessary for the preparation of buffer solutions and is also a prerequisite for a quantitative understanding of the interaction between acids or bases and metal ions to form complexes. Experimentally, p"K"a values can be determined by potentiometric (pH) titration, but for values of p"K"a less than about 2 or more than about 11, spectrophotometric or NMR measurements may be required due to practical difficulties with pH measurements.
Definitions.
According to Arrhenius's original molecular definition, an acid is a substance that dissociates in aqueous solution, releasing the hydrogen ion (a proton):
<chem>HA <=> A- + H+</chem>
The equilibrium constant for this dissociation reaction is known as a dissociation constant. The liberated proton combines with a water molecule to give a hydronium (or oxonium) ion (naked protons do not exist in solution), and so Arrhenius later proposed that the dissociation should be written as an acid–base reaction:
<chem>HA + H2O <=> A- + H3O+</chem>
Brønsted and Lowry generalised this further to a proton exchange reaction:
formula_2
The acid loses a proton, leaving a conjugate base; the proton is transferred to the base, creating a conjugate acid. For aqueous solutions of an acid HA, the base is water; the conjugate base is A− and the conjugate acid is the hydronium ion. The Brønsted–Lowry definition applies to other solvents, such as dimethyl sulfoxide: the solvent S acts as a base, accepting a proton and forming the conjugate acid SH+.
<chem>HA + S <=> A- + SH+</chem>
In solution chemistry, it is common to use H+ as an abbreviation for the solvated hydrogen ion, regardless of the solvent. In aqueous solution H+ denotes a solvated hydronium ion rather than a proton.
The designation of an acid or base as "conjugate" depends on the context. The conjugate acid of a base B dissociates according to
<chem>BH+ + OH- <=> B + H2O</chem>
which is the reverse of the equilibrium
formula_3
The hydroxide ion OH−, a well known base, is here acting as the conjugate base of the acid water. Acids and bases are thus regarded simply as donors and acceptors of protons respectively.
A broader definition of acid dissociation includes hydrolysis, in which protons are produced by the splitting of water molecules. For example, boric acid (B(OH)3) produces H3O+ as if it were a proton donor, but it has been confirmed by Raman spectroscopy that this is due to the hydrolysis equilibrium:
<chem>B(OH)3 + 2 H2O <=> B(OH)4- + H3O+</chem>
Similarly, metal ion hydrolysis causes ions such as [Al(H2O)6]3+ to behave as weak acids:
<chem>[Al(H2O)6]^3+ + H2O <=> [Al(H2O)5(OH)]^2+ + H3O+</chem>
According to Lewis's original definition, an acid is a substance that accepts an electron pair to form a coordinate covalent bond.
Equilibrium constant.
An acid dissociation constant is a particular example of an equilibrium constant. The dissociation of a monoprotic acid, HA, in dilute solution can be written as
<chem>HA <=> A- + H+</chem>
The thermodynamic equilibrium constant, "K"⊖, can be defined by
formula_4
where formula_5 represents the activity, at equilibrium, of the chemical species X. formula_6 is dimensionless since activity is dimensionless. Activities of the products of dissociation are placed in the numerator, activities of the reactants are placed in the denominator. See activity coefficient for a derivation of this expression.
Since activity is the product of concentration and activity coefficient ("γ") the definition could also be written as
formula_7
where formula_8 represents the concentration of HA and Γ is a quotient of activity coefficients.
To avoid the complications involved in using activities, dissociation constants are determined, where possible, in a medium of high ionic strength, that is, under conditions in which Γ can be assumed to be always constant. For example, the medium might be a solution of 0.1 molar (M) sodium nitrate or 3 M potassium perchlorate. With this assumption,
formula_9
formula_10
is obtained. Note, however, that all published dissociation constant values refer to the specific ionic medium used in their determination and that different values are obtained with different conditions, as shown for acetic acid in the illustration above. When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion theory (SIT) and other theories.
Cumulative and stepwise constants.
A cumulative equilibrium constant, denoted by β, is related to the product of stepwise constants, denoted by "K". For a dibasic acid the relationship between stepwise and overall constants is as follows
<chem>H2A <=> A^2- + 2H+</chem>
formula_11
formula_12
Note that in the context of metal-ligand complex formation, the equilibrium constants for the formation of metal complexes are usually defined as "association" constants. In that case, the equilibrium constants for ligand protonation are also defined as association constants. The numbering of association constants is the reverse of the numbering of dissociation constants; in this example formula_13
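A small numerical illustration of the product relationship between stepwise and cumulative dissociation constants, using arbitrary example values of "K"1 and "K"2 for a hypothetical dibasic acid:

```python
import math

# Cumulative constant for a dibasic acid H2A <=> A^2- + 2H+ as the product
# of the stepwise dissociation constants; the numbers are arbitrary examples.
K1, K2 = 1.0e-3, 1.0e-8        # stepwise dissociation constants
beta = K1 * K2                 # cumulative constant

print(f"beta    = {beta:.1e}")                   # 1.0e-11
print(f"p(beta) = {-math.log10(beta):.1f}")      # 11.0 = pK1 + pK2
```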
Association and dissociation constants.
When discussing the properties of acids it is usual to specify equilibrium constants as acid dissociation constants, denoted by "K"a, with numerical values given the symbol p"K"a.
formula_14
On the other hand, association constants are used for bases.
formula_15
However, general purpose computer programs that are used to derive equilibrium constant values from experimental data use association constants for both acids and bases. Because stability constants for a metal-ligand complex are always specified as association constants, ligand protonation must also be specified as an association reaction. The definitions show that the value of an acid dissociation constant is the reciprocal of the value of the corresponding association constant:
formula_16
formula_17
formula_18
Notes
formula_19
Temperature dependence.
All equilibrium constants vary with temperature according to the van 't Hoff equation
formula_20
"R" is the gas constant and "T" is the absolute temperature. Thus, for exothermic reactions, the standard enthalpy change, Δ"H"⊖, is negative and "K" decreases with temperature. For endothermic reactions, Δ"H"⊖ is positive and "K" increases with temperature.
The standard enthalpy change for a reaction is itself a function of temperature, according to Kirchhoff's law of thermochemistry:
formula_21
where Δ"C"p⊖ is the heat capacity change at constant pressure. In practice Δ"C"p⊖ may be taken to be constant over a small temperature range.
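Assuming the standard enthalpy change is constant over a modest temperature interval, the integrated van 't Hoff equation relates the constants at two temperatures; the sketch below applies it with illustrative numbers only (the enthalpy values are placeholders, not data for a specific acid).

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def K_at_T2(K1, T1, T2, delta_H):
    """Integrated van 't Hoff equation, assuming delta_H (J/mol) is
    temperature-independent over the interval [T1, T2]."""
    return K1 * math.exp(-(delta_H / R) * (1.0 / T2 - 1.0 / T1))

K_298 = 1.8e-5  # equilibrium constant at 298.15 K (illustrative)

# Endothermic dissociation (delta_H > 0): K increases with temperature.
print(K_at_T2(K_298, T1=298.15, T2=323.15, delta_H=+10_000))
# Exothermic dissociation (delta_H < 0): K decreases with temperature.
print(K_at_T2(K_298, T1=298.15, T2=323.15, delta_H=-10_000))
```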
Dimensionality.
In the equation
formula_22
"K"a appears to have dimensions of concentration. However, since formula_23, the equilibrium constant, &NoBreak;&NoBreak;, "cannot" have a physical dimension. This apparent paradox can be resolved in various ways.
The procedures, (1) and (2), give identical numerical values for an equilibrium constant. Furthermore, since a concentration is simply proportional to mole fraction and density:
formula_24
and since the molar mass is a constant in dilute solutions, an equilibrium constant value determined using (3) will be simply proportional to the values obtained with (1) and (2).
It is common practice in biochemistry to quote a value with a dimension as, for example, ""K"a = 30 mM" in order to indicate the scale, millimolar (mM) or micromolar (μM) of the concentration values used for its calculation.
Strong acids and bases.
An acid is classified as "strong" when the concentration of its undissociated species is too low to be measured. Any aqueous acid with a p"K"a value of less than 0 is almost completely deprotonated and is considered a "strong acid". All such acids transfer their protons to water and form the solvent cation species (H3O+ in aqueous solution) so that they all have essentially the same acidity, a phenomenon known as solvent leveling. They are said to be "fully dissociated" in aqueous solution because the amount of undissociated acid, in equilibrium with the dissociation products, is below the detection limit. Likewise, any aqueous base with an association constant p"K"b less than about 0, corresponding to p"K"a greater than about 14, is leveled to OH− and is considered a "strong base".
Nitric acid, with a p"K" value of around −1.7, behaves as a strong acid in aqueous solutions with a pH greater than 1. At lower pH values it behaves as a weak acid.
p"K"a values for strong acids have been estimated by theoretical means. For example, the p"K"a value of aqueous HCl has been estimated as −9.3.
Monoprotic acids.
After rearranging the expression defining "K"a, and putting pH = −log10[H+], one obtains
formula_25
This is the Henderson–Hasselbalch equation, from which the following conclusions can be drawn.
At half-neutralization the ratio [A−]/[HA] = 1; since log(1) = 0, the pH at half-neutralization is numerically equal to p"K"a. Conversely, when pH = p"K"a, the concentration of HA is equal to the concentration of A−.
In water, measurable p"K"a values range from about −2 for a strong acid to about 12 for a very weak acid (or strong base).
A buffer solution of a desired pH can be prepared as a mixture of a weak acid and its conjugate base. In practice, the mixture can be created by dissolving the acid in water, and adding the requisite amount of strong acid or base. When the p"K"a and analytical concentration of the acid are known, the extent of dissociation and pH of a solution of a monoprotic acid can be easily calculated using an ICE table.
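For a monoprotic weak acid of analytical concentration "C", the ICE-table treatment (neglecting water autoionization) leads to the quadratic x² + "K"a·x − "K"a·"C" = 0 for x = [H+]; a minimal sketch, using the acetic acid "K"a quoted earlier:

```python
import math

def weak_acid_pH(C, Ka):
    """pH of a monoprotic weak acid of analytical concentration C (mol/L),
    from the ICE-table quadratic x^2 + Ka*x - Ka*C = 0 with x = [H+].
    Water autoionization is neglected, so very dilute or very weak acids
    fall outside the range of validity of this sketch."""
    x = (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * C)) / 2.0  # positive root
    return -math.log10(x)

# 0.10 M solution of an acid with Ka = 1.8e-5 (the acetic acid value quoted above).
print(round(weak_acid_pH(0.10, 1.8e-5), 2))   # about 2.88
```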
Polyprotic acids.
A polyprotic acid is a compound which may lose more than one proton. Stepwise dissociation constants are each defined for the loss of a single proton. The constant for dissociation of the first proton may be denoted as "K"a1 and the constants for dissociation of successive protons as "K"a2, etc. Phosphoric acid, H3PO4, is an example of a polyprotic acid as it can lose three protons.
When the difference between successive p"K" values is about four or more, as in this example, each species may be considered as an acid in its own right; in fact, salts of H2PO4− may be crystallised from solution by adjustment of pH to about 5.5 and salts of HPO4^2− may be crystallised from solution by adjustment of pH to about 10. The species distribution diagram shows that the concentrations of the two ions are maximum at pH 5.5 and 10.
When the difference between successive p"K" values is less than about four there is overlap between the pH range of existence of the species in equilibrium. The smaller the difference, the more the overlap. The case of citric acid is shown at the right; solutions of citric acid are buffered over the whole range of pH 2.5 to 7.5.
According to Pauling's first rule, successive p"K" values of a given acid increase (p"K"a2 > p"K"a1). For oxyacids with more than one ionizable hydrogen on the same atom, the p"K"a values often increase by about 5 units for each proton removed, as in the example of phosphoric acid above.
It can be seen in the table above that the second proton is removed from a negatively charged species. Since the proton carries a positive charge extra work is needed to remove it, which is why p"K"a2 is greater than p"K"a1. p"K"a3 is greater than p"K"a2 because there is further charge separation. When an exception to Pauling's rule is found, it indicates that a major change in structure is also occurring. In the case of the vanadium(V) aqua ion, the vanadium is octahedral, 6-coordinate, whereas vanadic acid is tetrahedral, 4-coordinate. This means that four "particles" are released with the first dissociation, but only two "particles" are released with the other dissociations, resulting in a much greater entropy contribution to the standard Gibbs free energy change for the first reaction than for the others.
Isoelectric point.
For substances in solution, the isoelectric point (p"I") is defined as the pH at which the sum, weighted by charge value, of concentrations of positively charged species is equal to the weighted sum of concentrations of negatively charged species. In the case that there is one species of each type, the isoelectric point can be obtained directly from the p"K" values. Take the example of glycine, defined as AH. There are two dissociation equilibria to consider.
<chem>AH2+ <=> AH~+ H+ \qquad [AH][H+] = \mathit{K}_1 [AH2+]</chem>
<chem>AH <=> A^-~+H+ \qquad [A^- ][H+] = \mathit{K}_2 [AH]</chem>
Substitute the expression for [AH] from the second equation into the first equation
<chem>[A^- ][H+]^2 = \mathit{K}_1 \mathit{K}_2 [AH2+]</chem>
At the isoelectric point the concentration of the positively charged species, AH2+, is equal to the concentration of the negatively charged species, A−, so
formula_26
Therefore, taking cologarithms, the pH is given by
formula_27
p"I" values for amino acids are listed at proteinogenic amino acid. When more than two charged species are in equilibrium with each other a full speciation calculation may be needed.
Bases and basicity.
The equilibrium constant "K"b for a base is usually defined as the "association" constant for protonation of the base, B, to form the conjugate acid, HB+.
<chem>B + H2O <=> HB+ + OH-</chem>
Using similar reasoning to that used before
formula_28
"K"b is related to "K"a for the conjugate acid. In water, the concentration of the hydroxide ion, , is related to the concentration of the hydrogen ion by , therefore
formula_29
Substitution of the expression for [OH−] into the expression for "K"b gives
formula_30
When "K"a, "K"b and "K"w are determined under the same conditions of temperature and ionic strength, it follows, taking cologarithms, that p"K"b = p"K"w − p"K"a. In aqueous solutions at 25 °C, p"K"w is 13.9965, so
formula_31
with sufficient accuracy for most practical purposes. In effect there is no need to define p"K"b separately from p"K"a, but it is done here as often only p"K"b values can be found in the older literature.
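A minimal sketch of the conversion just described, using the p"K"w value quoted above; the p"K"a of 9.25 chosen for the conjugate acid (ammonium-like) is an illustrative assumption, not a value from the text.

```python
pKw = 13.9965  # water at 25 C, as quoted above

def pKb_from_pKa(pKa, pKw=pKw):
    # pKb = pKw - pKa, valid when both refer to the same conditions
    return pKw - pKa

# Illustrative: a base whose conjugate acid has pKa = 9.25 (ammonium-like)
print(round(pKb_from_pKa(9.25), 2))  # ~4.75
```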
For a hydrolyzed metal ion, "K"b can also be defined as a stepwise "dissociation" constant
formula_32
formula_33
This is the reciprocal of an association constant for formation of the complex.
Basicity expressed as dissociation constant of conjugate acid.
Because the relationship p"K"b = p"K"w − p"K"a holds only in aqueous solutions (though analogous relationships apply for other amphoteric solvents), subdisciplines of chemistry like organic chemistry that usually deal with nonaqueous solutions generally do not use p"K"b as a measure of basicity. Instead, the p"K"a of the conjugate acid, denoted by p"K"aH, is quoted when basicity needs to be quantified. For base B and its conjugate acid BH+ in equilibrium, this is defined as
formula_34
A higher value for p"K"aH corresponds to a stronger base. For example, the p"K"aH value of triethylamine is higher than that of pyridine, indicating that triethylamine is the stronger base.
Amphoteric substances.
An amphoteric substance is one that can act as an acid or as a base, depending on pH. Water (below) is amphoteric. Another example of an amphoteric molecule is the bicarbonate ion that is the conjugate base of the carbonic acid molecule H2CO3 in the equilibrium
<chem>H2CO3 + H2O <=> HCO3- + H3O+</chem>
but also the conjugate acid of the carbonate ion in (the reverse of) the equilibrium
<chem>HCO3- + OH- <=> CO3^2- + H2O</chem>
Carbonic acid equilibria are important for acid–base homeostasis in the human body.
An amino acid is also amphoteric with the added complication that the neutral molecule is subject to an internal acid–base equilibrium in which the basic amino group attracts and binds the proton from the acidic carboxyl group, forming a zwitterion.
<chem>NH2CHRCO2H <=> NH3+CHRCO2-</chem>
At pH less than about 5 both the carboxylate group and the amino group are protonated. As pH increases the acid dissociates according to
<chem>NH3+CHRCO2H <=> NH3+CHRCO2- + H+</chem>
At high pH a second dissociation may take place.
<chem>NH3+CHRCO2- <=> NH2CHRCO2- + H+</chem>
Thus the amino acid molecule is amphoteric because it may either be protonated or deprotonated.
Water self-ionization.
The water molecule may either gain or lose a proton. It is said to be amphiprotic. The ionization equilibrium can be written
<chem>H2O <=> OH- + H+</chem>
where H+ in aqueous solution denotes a solvated proton. Often this is written as the hydronium ion, H3O+, but this formula is not exact because in fact there is solvation by more than one water molecule, and larger hydrated proton clusters are also present.
The equilibrium constant is given by
formula_35
With solutions in which the solute concentrations are not very high, the concentration of H2O can be assumed to be constant, regardless of solute(s); this expression may then be replaced by
formula_36
The self-ionization constant of water, "K"w, is thus just a special case of an acid dissociation constant. A logarithmic form analogous to p"K"a may also be defined
formula_37
The variation of p"K"w with temperature can be modelled by a parabola:
formula_38
From this equation, p"K"w = 14 at 24.87 °C. At that temperature both hydrogen and hydroxide ions have a concentration of 10−7 M.
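The quoted parabola can be checked numerically; the following short sketch simply evaluates it at 24.87 °C and at 25 °C.

```python
def pKw(T_celsius):
    # Parabolic fit quoted above; T in degrees Celsius
    return 14.94 - 0.04209 * T_celsius + 0.0001718 * T_celsius ** 2

print(round(pKw(24.87), 3))  # ~13.999, i.e. pKw = 14 at 24.87 C
print(round(pKw(25.0), 3))   # ~13.995, close to the 13.9965 quoted earlier
```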
Acidity in nonaqueous solutions.
A solvent will be more likely to promote ionization of a dissolved acidic molecule if it is a protic solvent capable of hydrogen bonding, if it has a high donor number making it a strong Lewis base, and if it has a high dielectric constant (relative permittivity), making it a good solvent for ionic species.
p"K"a values of organic compounds are often obtained using the aprotic solvents dimethyl sulfoxide (DMSO) and acetonitrile (ACN).
DMSO is widely used as an alternative to water because it has a lower dielectric constant than water, and is less polar and so dissolves non-polar, hydrophobic substances more easily. It has a measurable p"K"a range of about 1 to 30. Acetonitrile is less basic than DMSO, and, so, in general, acids are weaker and bases are stronger in this solvent. Some p"K"a values at 25 °C for acetonitrile (ACN) and dimethyl sulfoxide (DMSO) are shown in the following tables. Values for water are included for comparison.
Ionization of acids is less in an acidic solvent than in water. For example, hydrogen chloride is a weak acid when dissolved in acetic acid. This is because acetic acid is a much weaker base than water.
<chem>HCl + CH3CO2H <=> Cl- + CH3C(OH)2+</chem>
formula_2
Compare this reaction with what happens when acetic acid is dissolved in the more acidic solvent pure sulfuric acid:
<chem>H2SO4 + CH3CO2H <=> HSO4- + CH3C(OH)2+</chem>
The unlikely geminal diol species CH3C(OH)2+ is stable in these environments. For aqueous solutions the pH scale is the most convenient acidity function. Other acidity functions have been proposed for non-aqueous media, the most notable being the Hammett acidity function, "H"0, for superacid media and its modified version "H"− for superbasic media.
In aprotic solvents, oligomers, such as the well-known acetic acid dimer, may be formed by hydrogen bonding. An acid may also form hydrogen bonds to its conjugate base. This process, known as homoconjugation, has the effect of enhancing the acidity of acids, lowering their effective p"K"a values, by stabilizing the conjugate base. Homoconjugation enhances the proton-donating power of toluenesulfonic acid in acetonitrile solution by a factor of nearly 800.
In aqueous solutions, homoconjugation does not occur, because water forms stronger hydrogen bonds to the conjugate base than does the acid.
Mixed solvents.
When a compound has limited solubility in water it is common practice (in the pharmaceutical industry, for example) to determine p"K"a values in a solvent mixture such as water/dioxane or water/methanol, in which the compound is more soluble. In the example shown at the right, the p"K"a value rises steeply with increasing percentage of dioxane as the dielectric constant of the mixture is decreasing.
A p"K"a value obtained in a mixed solvent cannot be used directly for aqueous solutions. The reason for this is that when the solvent is in its standard state its activity is "defined" as one. For example, the standard state of water:dioxane mixture with 9:1 mixing ratio is precisely that solvent mixture, with no added solutes. To obtain the p"K"a value for use with aqueous solutions it has to be extrapolated to zero co-solvent concentration from values obtained from various co-solvent mixtures.
These facts are obscured by the omission of the solvent from the expression that is normally used to define p"K"a, but p"K"a values obtained in a "given" mixed solvent can be compared to each other, giving relative acid strengths. The same is true of p"K"a values obtained in a particular non-aqueous solvent such as DMSO.
A universal, solvent-independent, scale for acid dissociation constants has not been developed, since there is no known way to compare the standard states of two different solvents.
Factors that affect p"K"a values.
Pauling's second rule is that the value of the first p"K"a for acids of the formula XO"m"(OH)"n" depends primarily on the number of oxo groups "m", and is approximately independent of the number of hydroxy groups "n", and also of the central atom X. Approximate values of p"K"a are 8 for "m" = 0, 2 for "m" = 1, −3 for "m" = 2 and < −10 for "m" = 3. Alternatively, various numerical formulas have been proposed including p"K"a = 8 − 5"m" (known as Bell's rule), p"K"a = 7 − 5"m", or p"K"a = 9 − 7"m". The dependence on "m" correlates with the oxidation state of the central atom, X: the higher the oxidation state the stronger the oxyacid.
For example, p"K"a for HClO is 7.2, for HClO2 is 2.0, for HClO3 is −1 and HClO4 is a strong acid (p"K"a ≪ 0). The increased acidity on adding an oxo group is due to stabilization of the conjugate base by delocalization of its negative charge over an additional oxygen atom. This rule can help assign molecular structure: for example, phosphorous acid, having molecular formula H3PO3, has a p"K"a near 2, which suggested that the structure is HPO(OH)2, as later confirmed by NMR spectroscopy, and not P(OH)3, which would be expected to have a p"K"a near 8.
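The following short sketch compares Bell's rule, p"K"a = 8 − 5"m", with the observed values for the chlorine oxyacids quoted above; it is only an illustration of how the rule is applied, not a fit to data.

```python
def pKa_bell(m):
    # Bell's rule quoted above: pKa ~ 8 - 5m for an oxyacid XO_m(OH)_n
    return 8 - 5 * m

# Chlorine oxyacids, with the observed values quoted above for comparison
for name, m, observed in [("HClO", 0, 7.2), ("HClO2", 1, 2.0), ("HClO3", 2, -1.0)]:
    print(f"{name}: m = {m}, Bell's rule estimate {pKa_bell(m)}, observed {observed}")
```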
Inductive effects and mesomeric effects affect the p"K"a values. A simple example is provided by the effect of replacing the hydrogen atoms in acetic acid by the more electronegative chlorine atom. The electron-withdrawing effect of the substituent makes ionisation easier, so successive p"K"a values decrease in the series 4.7, 2.8, 1.4, and 0.7 when 0, 1, 2, or 3 chlorine atoms are present. The Hammett equation provides a general expression for the effect of substituents.
log("K"a) = log("K") + ρσ.
"K"a is the dissociation constant of a substituted compound, "K" is the dissociation constant when the substituent is hydrogen, ρ is a property of the unsubstituted compound and σ has a particular value for each substituent. A plot of log("K"a) against σ is a straight line with intercept log("K") and slope ρ. This is an example of a linear free energy relationship as log("K"a) is proportional to the standard free energy change. Hammett originally formulated the relationship with data from benzoic acid with different substituents in the "ortho-" and "para-" positions: some numerical values are in Hammett equation. This and other studies allowed substituents to be ordered according to their electron-withdrawing or electron-releasing power, and to distinguish between inductive and mesomeric effects.
Alcohols do not normally behave as acids in water, but the presence of a double bond adjacent to the OH group can substantially decrease the p"K"a by the mechanism of keto–enol tautomerism. Ascorbic acid is an example of this effect. The diketone 2,4-pentanedione (acetylacetone) is also a weak acid because of the keto–enol equilibrium. In aromatic compounds, such as phenol, which have an OH substituent, conjugation with the aromatic ring as a whole greatly increases the stability of the deprotonated form.
Structural effects can also be important. The difference between fumaric acid and maleic acid is a classic example. Fumaric acid is (E)-1,4-but-2-enedioic acid, a "trans" isomer, whereas maleic acid is the corresponding "cis" isomer, i.e. (Z)-1,4-but-2-enedioic acid (see cis-trans isomerism). Fumaric acid has p"K"a values of approximately 3.0 and 4.5. By contrast, maleic acid has p"K"a values of approximately 1.5 and 6.5. The reason for this large difference is that when one proton is removed from the "cis" isomer (maleic acid) a strong intramolecular hydrogen bond is formed with the nearby remaining carboxyl group. This favors the formation of the hydrogen maleate monoanion, and it opposes the removal of the second proton from that species. In the "trans" isomer, the two carboxyl groups are always far apart, so hydrogen bonding is not observed.
Proton sponge, 1,8-bis(dimethylamino)naphthalene, has a p"K"a value of 12.1. It is one of the strongest amine bases known. The high basicity is attributed to the relief of strain upon protonation and strong internal hydrogen bonding.
Effects of the solvent and solvation should also be mentioned in this section. These influences turn out to be more subtle than those of the dielectric medium mentioned above. For example, the order of basicity of methylamines expected from the electronic effects of the methyl substituents, and observed in the gas phase, Me3N > Me2NH > MeNH2 > NH3, is changed in water to Me2NH > MeNH2 > Me3N > NH3. Neutral methylamine molecules are hydrogen-bonded to water molecules mainly through one acceptor interaction, N···H–OH, and only occasionally through one more donor bond, N–H···OH2. Hence, methylamines are stabilized to about the same extent by hydration, regardless of the number of methyl groups. In stark contrast, the corresponding methylammonium cations always use all the available protons for donor N–H···OH2 bonding. The relative stabilization of methylammonium ions thus decreases with the number of methyl groups, explaining the order of basicity of methylamines in water.
Thermodynamics.
An equilibrium constant is related to the standard Gibbs energy change for the reaction, so for an acid dissociation constant
formula_39.
"R" is the gas constant and "T" is the absolute temperature. Note that p"K"a
−log("K"a) and 2.303 ≈ ln(10). At 25 °C, Δ"G"⊖ in kJ·mol−1 ≈ 5.708 p"K"a (1 kJ·mol−1 = 1000 joules per mole). Free energy is made up of an enthalpy term and an entropy term.
formula_40
The standard enthalpy change can be determined by calorimetry or by using the van 't Hoff equation, though the calorimetric method is preferable. When both the standard enthalpy change and acid dissociation constant have been determined, the standard entropy change is easily calculated from the equation above. In the following table, the entropy terms are calculated from the experimental values of p"K"a and Δ"H"⊖. The data were critically selected and refer to 25 °C and zero ionic strength, in water.
The first point to note is that, when p"K"a is positive, the standard free energy change for the dissociation reaction is also positive. Second, some reactions are exothermic and some are endothermic, but, when Δ"H"⊖ is negative "T"ΔS⊖ is the dominant factor, which determines that Δ"G"⊖ is positive. Last, the entropy contribution is always unfavourable (Δ"S"⊖ < 0) in these reactions. Ions in aqueous solution tend to orient the surrounding water molecules, which orders the solution and decreases the entropy. The contribution of an ion to the entropy is the partial molar entropy which is often negative, especially for small or highly charged ions. The ionization of a neutral acid involves formation of two ions so that the entropy decreases (Δ"S"⊖ < 0). On the second ionization of the same acid, there are now three ions and the anion has a charge, so the entropy again decreases.
Note that the "standard" free energy change for the reaction is for the changes "from" the reactants in their standard states "to" the products in their standard states. The free energy change "at" equilibrium is zero since the chemical potentials of reactants and products are equal at equilibrium.
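A minimal sketch of the bookkeeping described above: Δ"G"⊖ is obtained from p"K"a, and the entropy term then follows from Δ"G"⊖ = Δ"H"⊖ − "T"Δ"S"⊖. The input values (p"K"a = 4.756 and Δ"H"⊖ ≈ −0.4 kJ·mol−1, acetic-acid-like) are illustrative assumptions rather than values from the table.

```python
import math

R = 8.314    # J/(mol*K)
T = 298.15   # K

def thermodynamics_from_pKa(pKa, dH_kJ):
    """Return (dG, T*dS) in kJ/mol from pKa and the standard enthalpy change,
    using dG = RT*ln(10)*pKa and dG = dH - T*dS."""
    dG = R * T * math.log(10) * pKa / 1000.0   # ~5.708 * pKa at 25 C
    TdS = dH_kJ - dG
    return dG, TdS

# Illustrative (assumed) values resembling acetic acid: pKa 4.756, dH ~ -0.4 kJ/mol
dG, TdS = thermodynamics_from_pKa(4.756, -0.4)
print(round(dG, 1), round(TdS, 1))  # ~27.1, ~-27.5 (entropy term unfavourable)
```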
Experimental determination.
The experimental determination of p"K"a values is commonly performed by means of titrations, in a medium of high ionic strength and at constant temperature. A typical procedure would be as follows. A solution of the compound in the medium is acidified with a strong acid to the point where the compound is fully protonated. The solution is then titrated with a strong base until all the protons have been removed. At each point in the titration pH is measured using a glass electrode and a pH meter. The equilibrium constants are found by fitting calculated pH values to the observed values, using the method of least squares.
The total volume of added strong base should be small compared to the initial volume of titrand solution in order to keep the ionic strength nearly constant. This will ensure that p"K"a remains invariant during the titration.
A calculated titration curve for oxalic acid is shown at the right. Oxalic acid has p"K"a values of 1.27 and 4.27. Therefore, the buffer regions will be centered at about pH 1.3 and pH 4.3. The buffer regions carry the information necessary to get the p"K"a values as the concentrations of acid and conjugate base change along a buffer region.
Between the two buffer regions there is an end-point, or equivalence point, at about pH 3. This end-point is not sharp and is typical of a diprotic acid whose buffer regions overlap by a small amount: p"K"a2 − p"K"a1 is about three in this example. (If the difference in p"K" values were about two or less, the end-point would not be noticeable.) The second end-point begins at about pH 6.3 and is sharp. This indicates that all the protons have been removed. When this is so, the solution is not buffered and the pH rises steeply on addition of a small amount of strong base. However, the pH does not continue to rise indefinitely. A new buffer region begins at about pH 11 (p"K"w − 3), which is where self-ionization of water becomes important.
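As a rough consistency check on the end-point pH mentioned above, the pH of the amphiprotic intermediate of a diprotic acid is approximately the average of the two p"K"a values (valid when the concentration is not too low). A one-line sketch using the oxalic acid values quoted above:

```python
def first_equivalence_ph(pKa1, pKa2):
    # Approximate pH of the amphiprotic intermediate (e.g. HA- for a diprotic acid):
    # pH ~ (pKa1 + pKa2) / 2, valid when the concentration is not too low
    return (pKa1 + pKa2) / 2

# Oxalic acid values quoted above
print(first_equivalence_ph(1.27, 4.27))  # 2.77, near the "about pH 3" end-point
```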
It is very difficult to measure pH values of less than two in aqueous solution with a glass electrode, because the Nernst equation breaks down at such low pH values. To determine p"K" values of less than about 2 or more than about 11 spectrophotometric or NMR measurements may be used instead of, or combined with, pH measurements.
When the glass electrode cannot be employed, as with non-aqueous solutions, spectrophotometric methods are frequently used. These may involve absorbance or fluorescence measurements. In both cases the measured quantity is assumed to be proportional to the sum of contributions from each photo-active species; with absorbance measurements the Beer–Lambert law is assumed to apply.
Isothermal titration calorimetry (ITC) may be used to determine both a p"K" value and the corresponding standard enthalpy for acid dissociation. Software to perform the calculations is supplied by the instrument manufacturers for simple systems.
Aqueous solutions with normal water cannot be used for 1H NMR measurements but heavy water, , must be used instead. 13C NMR data, however, can be used with normal water and 1H NMR spectra can be used with non-aqueous media. The quantities measured with NMR are time-averaged chemical shifts, as proton exchange is fast on the NMR time-scale. Other chemical shifts, such as those of 31P can be measured.
Micro-constants.
For some polyprotic acids, dissociation (or association) occurs at more than one nonequivalent site, and the observed macroscopic equilibrium constant, or macro-constant, is a combination of micro-constants involving distinct species. When one reactant forms two products in parallel, the macro-constant is a sum of two micro-constants, formula_41 This is true for example for the deprotonation of the amino acid cysteine, which exists in solution as a neutral zwitterion . The two micro-constants represent deprotonation either at sulphur or at nitrogen, and the macro-constant sum here is the acid dissociation constant formula_42
Similarly, a base such as spermine has more than one site where protonation can occur. For example, mono-protonation can occur at a terminal group or at internal groups. The "K"b values for dissociation of spermine protonated at one or other of the sites are examples of micro-constants. They cannot be determined directly by means of pH, absorbance, fluorescence or NMR measurements; a measured "K"b value is the sum of the K values for the micro-reactions.
formula_43
Nevertheless, the site of protonation is very important for biological function, so mathematical methods have been developed for the determination of micro-constants.
When two reactants form a single product in parallel, the macro-constant formula_44 For example, the abovementioned equilibrium for spermine may be considered in terms of "K"a values of two tautomeric conjugate acids, with macro-constant In this case formula_45 This is equivalent to the preceding expression since formula_46 is proportional to formula_47
When a reactant undergoes two reactions in series, the macro-constant for the combined reaction is the product of the micro-constant for the two steps. For example, the abovementioned cysteine zwitterion can lose two protons, one from sulphur and one from nitrogen, and the overall macro-constant for losing two protons is the product of two dissociation constants formula_48 This can also be written in terms of logarithmic constants as formula_49
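A minimal numerical sketch of how micro-constants combine: for parallel deprotonations the macro-constant is a sum, and for two steps in series it is a product, so the p"K" values add. The micro-constant values used (p"K" 8.5 and 10.0, cysteine-like) are illustrative assumptions, not measured values from the text.

```python
import math

def pK(K):
    return -math.log10(K)

# Parallel deprotonation at two sites (cysteine-like): the macro-constant is a sum
K_SH, K_NH3 = 10 ** -8.5, 10 ** -10.0   # illustrative (assumed) micro-constants
print(round(pK(K_SH + K_NH3), 2))        # ~8.49, dominated by the larger micro-constant

# Two deprotonations in series: the macro-constant is the product,
# so the logarithmic constants simply add
print(round(pK(K_SH * K_NH3), 1))        # 18.5 = 8.5 + 10.0
```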
Applications and significance.
A knowledge of p"K"a values is important for the quantitative treatment of systems involving acid–base equilibria in solution. Many applications exist in biochemistry; for example, the p"K"a values of proteins and amino acid side chains are of major importance for the activity of enzymes and the stability of proteins. Protein p"K"a values cannot always be measured directly, but may be calculated using theoretical methods. Buffer solutions are used extensively to provide solutions at or near the physiological pH for the study of biochemical reactions; the design of these solutions depends on a knowledge of the p"K"a values of their components. Important buffer solutions include MOPS, which provides a solution with pH 7.2, and tricine, which is used in gel electrophoresis. Buffering is an essential part of acid base physiology including acid–base homeostasis, and is key to understanding disorders such as acid–base disorder. The isoelectric point of a given molecule is a function of its p"K" values, so different molecules have different isoelectric points. This permits a technique called isoelectric focusing, which is used for separation of proteins by 2-D gel polyacrylamide gel electrophoresis.
Buffer solutions also play a key role in analytical chemistry. They are used whenever there is a need to fix the pH of a solution at a particular value. Compared with an aqueous solution, the pH of a buffer solution is relatively insensitive to the addition of a small amount of strong acid or strong base. The buffer capacity of a simple buffer solution is largest when pH = p"K"a. In acid–base extraction, the efficiency of extraction of a compound into an organic phase, such as an ether, can be optimised by adjusting the pH of the aqueous phase using an appropriate buffer. At the optimum pH, the concentration of the electrically neutral species is maximised; such a species is more soluble in organic solvents having a low dielectric constant than it is in water. This technique is used for the purification of weak acids and bases.
A pH indicator is a weak acid or weak base that changes colour in the transition pH range, which is approximately p"K"a ± 1. The design of a universal indicator requires a mixture of indicators whose adjacent p"K"a values differ by about two, so that their transition pH ranges just overlap.
In pharmacology, ionization of a compound alters its physical behaviour and macro properties such as solubility and lipophilicity, log "p"). For example, ionization of any compound will increase the solubility in water, but decrease the lipophilicity. This is exploited in drug development to increase the concentration of a compound in the blood by adjusting the p"K"a of an ionizable group.
Knowledge of p"K"a values is important for the understanding of coordination complexes, which are formed by the interaction of a metal ion, Mm+, acting as a Lewis acid, with a ligand, L, acting as a Lewis base. However, the ligand may also undergo protonation reactions, so the formation of a complex in aqueous solution could be represented symbolically by the reaction
formula_50
To determine the equilibrium constant for this reaction, in which the ligand loses a proton, the p"K"a of the protonated ligand must be known. In practice, the ligand may be polyprotic; for example EDTA4− can accept four protons; in that case, all p"K"a values must be known. In addition, the metal ion is subject to hydrolysis, that is, it behaves as a weak acid, so the p"K" values for the hydrolysis reactions must also be known.
Assessing the hazard associated with an acid or base may require a knowledge of p"K"a values. For example, hydrogen cyanide is a very toxic gas, because the cyanide ion inhibits the iron-containing enzyme cytochrome c oxidase. Hydrogen cyanide is a weak acid in aqueous solution with a p"K"a of about 9. In strongly alkaline solutions, above pH 11, say, it follows that sodium cyanide is "fully dissociated" so the hazard due to the hydrogen cyanide gas is much reduced. An acidic solution, on the other hand, is very hazardous because all the cyanide is in its acid form. Ingestion of cyanide by mouth is potentially fatal, independently of pH, because of the reaction with cytochrome c oxidase.
In environmental science acid–base equilibria are important for lakes and rivers; for example, humic acids are important components of natural waters. Another example occurs in chemical oceanography: in order to quantify the solubility of iron(III) in seawater at various salinities, the p"K"a values for the formation of the iron(III) hydrolysis products , and were determined, along with the solubility product of iron hydroxide.
Values for common substances.
There are multiple techniques to determine the p"K"a of a chemical, leading to some discrepancies between different sources. Well measured values are typically within 0.1 units of each other. Data presented here were taken at 25 °C in water. More values can be found in the Thermodynamics section, above. A table of p"K"a of carbon acids, measured in DMSO, can be found on the page on carbanions.
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_\\text{a} = \\mathrm{\\frac{[A^-] [H^+]}{[HA]}},"
},
{
"math_id": 1,
"text": "\\mathrm{p}K_\\ce{a} = - \\log_{10} K_\\text{a} = \\log_{10}\\frac{\\ce{[HA]}}{[\\ce{A^-}][\\ce{H+}]}"
},
{
"math_id": 2,
"text": "\\text{acid} + \\text{base } \\ce{<=>} \\text{ conjugate base} + \\text{conjugate acid}"
},
{
"math_id": 3,
"text": "\\ce{H2O}\\text{ (acid)} + \\ce{B}\\text{ (base) } \\ce{<=> OH-}\\text{ (conjugate base)} + \\ce{BH+}\\text{ (conjugate acid)}"
},
{
"math_id": 4,
"text": "K^\\ominus = \\frac{\\{\\ce{A^-}\\} \\{\\ce{H+}\\}}{\\ce{\\{HA\\} }}"
},
{
"math_id": 5,
"text": "\\{X\\}"
},
{
"math_id": 6,
"text": "K^\\ominus"
},
{
"math_id": 7,
"text": "K^\\ominus = {\\frac{[\\ce{A^-}] [\\ce{H+}]}\\ce{[HA] }\\Gamma}, \\quad \\Gamma=\\frac{\\gamma_\\ce{A^-} \\ \\gamma_\\ce{H+}}{\\gamma_\\ce{HA} \\ } "
},
{
"math_id": 8,
"text": "[\\text{HA}]"
},
{
"math_id": 9,
"text": "K_\\text{a} = \\frac{K^\\ominus}{\\Gamma} = \\mathrm{\\frac{[A^-] [H^+]}{[HA]}}"
},
{
"math_id": 10,
"text": "\\mathrm{p}K_\\ce{a} = -\\log_{10}\\frac{[\\ce{A^-}][\\ce{H^+}]}{[\\ce{HA}]} = \\log_{10}\\frac{\\ce{[HA]}}{[\\ce{A^-}][\\ce{H+}]}"
},
{
"math_id": 11,
"text": "\\beta_2 = \\frac{\\ce{[H2A]}}{[\\ce{A^2-}][\\ce{H+}]^2}"
},
{
"math_id": 12,
"text": "\\log \\beta_2 = \\mathrm{p}K_\\ce{a1} + \\mathrm{p}K_\\ce{a2}"
},
{
"math_id": 13,
"text": "\\log \\beta_1 = \\mathrm{p}K_\\ce{a2}"
},
{
"math_id": 14,
"text": "K_\\text{dissoc} = \\frac{ \\ce{[A- ][H+]}}{\\ce{[HA]}}: \\mathrm{p}K_\\text{a} = -\\log K_\\text{dissoc} "
},
{
"math_id": 15,
"text": "K_\\text{assoc} = \\frac{\\ce{[HA]}}{\\ce{[A- ][H+]}} "
},
{
"math_id": 16,
"text": "K_\\text{dissoc} = \\frac{1}{K_\\text{assoc}}"
},
{
"math_id": 17,
"text": "\\log K_\\text{dissoc} = - \\log K_\\text{assoc}"
},
{
"math_id": 18,
"text": "\\mathrm{p}K_\\text{dissoc} = - \\mathrm{p}K_\\text{assoc}"
},
{
"math_id": 19,
"text": "\\begin{align}\n\\log K_{\\text{assoc},1} &= \\mathrm{p}K_{\\text{dissoc},3} \\\\\n\\log K_{\\text{assoc},2} &= \\mathrm{p}K_{\\text{dissoc},2} \\\\\n\\log K_{\\text{assoc},3} &= \\mathrm{p}K_{\\text{dissoc},1}\n\\end{align}"
},
{
"math_id": 20,
"text": "\n \\frac{\\mathrm{d} \\ln\\left(K\\right)}{\\mathrm{d}T} = \\frac{\\Delta H^\\ominus}{RT^2}\n"
},
{
"math_id": 21,
"text": "\\left(\\frac{\\partial\\Delta H}{\\partial T}\\right)_p = \\Delta C_p"
},
{
"math_id": 22,
"text": "K_\\mathrm{a} = \\mathrm{\\frac{[A^-] [H^+]}{[HA]}},"
},
{
"math_id": 23,
"text": "\\Delta G = -RT\\ln K"
},
{
"math_id": 24,
"text": "c_i = \\frac{x_i\\rho}{M} "
},
{
"math_id": 25,
"text": "\n \\mathrm{pH} = \\mathrm{p}K_\\text{a} + \\log\\mathrm{\\frac{[A^-]}{[HA]}}\n"
},
{
"math_id": 26,
"text": "[\\ce{H+}]^2 = K_1 K_2"
},
{
"math_id": 27,
"text": "\\mathrm{p}I = \\frac{\\mathrm{p}K_1 + \\mathrm{p}K_2}{2}"
},
{
"math_id": 28,
"text": "\n\\begin{align}\nK_\\text{b} &= \\mathrm{\\frac{[HB^+] [OH^-]}{[B]}} \\\\\n\\mathrm{p}K_\\text{b} &= - \\log_{10}\\left(K_\\text{b}\\right)\n\\end{align}"
},
{
"math_id": 29,
"text": "\n\\mathrm{[OH^-]} = \\frac{K_\\mathrm{w}}{\\mathrm{[H^+]}}\n"
},
{
"math_id": 30,
"text": "\n K_\\text{b} = \\frac{[\\mathrm{HB^+}]K_\\text{w}}{\\mathrm{[B] [H^+]}} = \\frac{K_\\text{w}}{K_\\text{a}}\n"
},
{
"math_id": 31,
"text": "\\mathrm{p}K_\\text{b} \\approx 14 - \\mathrm{p}K_\\text{a}"
},
{
"math_id": 32,
"text": "\\mathrm{M}_p(\\ce{OH})_q \\leftrightharpoons \\mathrm{M}_p(\\ce{OH})^{+}_{q-1} + \\ce{OH-}"
},
{
"math_id": 33,
"text": "K_\\mathrm{b} = \\frac{[\\mathrm{M}_p(\\ce{OH})^{+}_{q-1}] [\\ce{OH-}]}{[\\mathrm{M}_p(\\ce{OH})_q]}"
},
{
"math_id": 34,
"text": "\\mathrm{p}K_\\mathrm{aH}(\\mathrm{B})=\\mathrm{p}K_\\mathrm{a}(\\ce{BH+})=-\\log_{10}\\Big(\\frac{[\\ce{B}][\\ce{H+}]}{[\\ce{BH+}]}\\Big)"
},
{
"math_id": 35,
"text": "\n K_\\text{a} = \\mathrm{\\frac{[H^+] [OH^-]}{[H_2O]}}\n"
},
{
"math_id": 36,
"text": "\n K_\\text{w} = [\\mathrm{H}^+] [\\mathrm{OH}^-]\\,\n"
},
{
"math_id": 37,
"text": "\\mathrm{p}K_\\text{w} = - \\log_{10}\\left(K_\\text{w}\\right)"
},
{
"math_id": 38,
"text": "\\mathrm p K_\\mathrm w = 14.94 - 0.04209\\ T + 0.0001718\\ T^2"
},
{
"math_id": 39,
"text": "\\Delta G^\\ominus = -RT \\ln K_\\text{a} \\approx 2.303 RT\\ \\mathrm{p}K_\\text{a}"
},
{
"math_id": 40,
"text": "\\Delta G^\\ominus = \\Delta H^\\ominus - T \\Delta S^\\ominus"
},
{
"math_id": 41,
"text": "K = K_X + K_Y."
},
{
"math_id": 42,
"text": "K_\\mathrm a = K_\\mathrm a \\ce{(-SH)} + K_\\mathrm a \\ce{(-NH3+)}."
},
{
"math_id": 43,
"text": "K_\\text{b} = K_\\text{terminal} + K_\\text{internal}"
},
{
"math_id": 44,
"text": "1/K = 1/K_X + 1/K_Y ."
},
{
"math_id": 45,
"text": "1/K_\\text{a} = 1/K_{\\text{a},\\text{terminal}} + 1/K_{\\text{a},\\text{internal}}."
},
{
"math_id": 46,
"text": "K_\\mathrm{b}"
},
{
"math_id": 47,
"text": "1/K_\\mathrm{a}."
},
{
"math_id": 48,
"text": "K = K_\\mathrm a \\ce{(-SH)} K_\\mathrm a \\ce{(-NH3+)}."
},
{
"math_id": 49,
"text": "\\mathrm p K = \\mathrm p K_\\mathrm a \\ce{(-SH)} + \\mathrm p K_\\mathrm a \\ce{(-NH3+)}."
},
{
"math_id": 50,
"text": "[\\ce{M(H2O)_\\mathit{n}}]^{m+} + \\ce{LH <=> } \\ [\\ce{M(H2O)}_{n - 1} \\ce{L}]^{(m - 1)+} + \\ce{H3O+}"
}
] |
https://en.wikipedia.org/wiki?curid=57555
|
575641
|
Casting out nines
|
Arithmetic procedure of verifying operations using modulo characteristics of digit 9
Casting out nines is any of three arithmetical procedures: computing the digit sum of a number, computing its digital root, and using these operations as a sanity check on the results of arithmetical calculations.
Digit sums.
To "cast out nines" from a single number, its decimal digits can be simply added together to obtain its so-called digit sum. The digit sum of 2946, for example is 2 + 9 + 4 + 6 = 21. Since 21 = 2946 − 325 × 9, the effect of taking the digit sum of 2946 is to "cast out" 325 lots of 9 from it. If the digit 9 is ignored when summing the digits, the effect is to "cast out" one more 9 to give the result 12.
More generally, when casting out nines by summing digits, any set of digits which add up to 9, or a multiple of 9, can be ignored. In the number 3264, for example, the digits 3 and 6 sum to 9. Ignoring these two digits, therefore, and summing the other two, we get 2 + 4 = 6. Since 6 = 3264 − 362 × 9, this computation has resulted in casting out 362 lots of 9 from 3264.
For an arbitrary number, formula_0, normally represented by the sequence of decimal digits, formula_1, the digit sum is formula_2. The difference between the original number and its digit sum is
formula_3
Because numbers of the form formula_4 are always divisible by 9 (since formula_5), replacing the original number by its digit sum has the effect of casting out
formula_6
lots of 9.
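A minimal Python sketch of the digit-sum operation and of the count of nines cast out, reproducing the 2946 example above:

```python
def digit_sum(n):
    """Sum of the decimal digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

def nines_cast_out(n):
    """How many 'lots of 9' are removed when n is replaced by its digit sum."""
    return (n - digit_sum(n)) // 9

print(digit_sum(2946), nines_cast_out(2946))  # 21, 325  (2946 - 21 = 325 * 9)
```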
Digital roots.
If the procedure described in the preceding paragraph is repeatedly applied to the result of each previous application, the eventual result will be a single-digit number from which "all" 9s, with the possible exception of one, have been "cast out". The resulting single-digit number is called the "digital root" of the original. The exception occurs when the original number has a digital root of 9, whose digit sum is itself, and therefore will not be cast out by taking further digit sums.
The number 12565, for instance, has digit sum 1+2+5+6+5 = 19, which, in turn, has digit sum 1+9=10, which, in its turn has digit sum 1+0=1, a single-digit number. The digital root of 12565 is therefore 1, and its computation has the effect of casting out (12565 - 1)/9 = 1396 lots of 9 from 12565.
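A short sketch of the digital root, reproducing the 12565 example; for a positive integer the result is simply the number reduced modulo 9, with 9 standing in for 0.

```python
def digital_root(n):
    """Repeatedly take digit sums until a single digit remains.
    For positive n this equals 1 + (n - 1) % 9."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

print(digital_root(12565))  # 1, as in the example above (12565 - 1 = 1396 * 9)
```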
Checking calculations by casting out nines.
To check the result of an arithmetical calculation by casting out nines, each number in the calculation is replaced by its digital root and the same calculations applied to these digital roots. The digital root of the result of this calculation is then compared with that of the result of the original calculation. If no mistake has been made in the calculations, these two digital roots must be the same. Examples in which casting-out-nines has been used to check addition, subtraction, multiplication, and division are given below.
Examples.
Addition.
In each addend, cross out all 9s and pairs of digits that total 9, then add together what remains. These new values are called "excesses". Add up leftover digits for each addend until one digit is reached. Now process the sum and also the excesses to get a "final" excess.
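A small sketch of the check as applied to addition; the particular addends (3264, 8415, 2946) are arbitrary illustrative numbers. The check is necessary but not sufficient: as noted below, about one erroneous result in nine will slip through.

```python
def digital_root(n):
    # Equivalent to 1 + (n - 1) % 9 for positive n
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def check_sum(addends, claimed_total):
    """Casting-out-nines consistency check for addition."""
    lhs = digital_root(sum(digital_root(a) for a in addends))
    return lhs == digital_root(claimed_total)

print(check_sum([3264, 8415, 2946], 14625))  # True: the check passes
print(check_sum([3264, 8415, 2946], 14626))  # False: the error is detected
```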
How it works.
The method works because the original numbers are written in decimal (base 10), the modulus (9) is chosen to differ from the base by 1, and casting out is equivalent to taking a digit sum. In general, two integers "x" and "y" reduced with respect to any smaller modulus (for example, modulo 7) to "x"′ and "y"′ will have a sum, difference or product that is congruent, with respect to that modulus, to the sum, difference or product of the originals. This property is also preserved by the digit sum when the base and the modulus differ by 1.
If a calculation was correct before casting out, casting out on both sides will preserve correctness. However, it is possible that two previously unequal integers will be identical modulo 9 (on average, a ninth of the time).
The operation does not work on fractions, since a given fractional number does not have a unique representation.
A variation on the explanation.
A trick to learn to add with nines is to add ten to the digit and to count back one. Since we are adding 1 to the tens digit and subtracting one from the units digit, the sum of the digits should remain the same. For example, 9 + 2 = 11 with 1 + 1 = 2. When adding 9 to itself, we would thus expect the sum of the digits to be 9 as follows: 9 + 9 = 18, (1 + 8 = 9) and 9 + 9 + 9 = 27, (2 + 7 = 9). Let us look at a simple multiplication: 5 × 7 = 35, (3 + 5 = 8). Now consider (7 + 9) × 5 = 16 × 5 = 80, (8 + 0 = 8) or 7 × (9 + 5) = 7 × 14 = 98, (9 + 8 = 17), (1 + 7 = 8).
Any non-negative integer can be written as 9×n + a, where 'a' is a single digit from 0 to 8, and 'n' is some non-negative integer.
Thus, using the distributive rule, (9×n + a)×(9×m + b) = 9×9×n×m + 9(am + bn) + ab. Since the first two terms are multiples of 9, their digit sums will be 9 or 0, leaving us with 'ab'. In our example, 'a' was 7 and 'b' was 5. We would expect that in any base system, the number before that base would behave just like the nine.
Limitation to casting out nines.
While extremely useful, casting out nines does not catch all errors made while doing calculations. For example, the casting-out-nines method would not recognize the error in a calculation of 5 × 7 which produced any of the erroneous results 8, 17, 26, etc. (that is, any result congruent to 8 modulo 9). In particular, casting out nines does not catch transposition errors, such as 1324 instead of 1234. In other words, the method only catches erroneous results whose digital root is one of the 8 digits that is different from that of the correct result.
History.
A form of casting out nines known to ancient Greek mathematicians was described by the Roman bishop Hippolytus (170–235) in "The Refutation of all Heresies", and more briefly by the Syrian Neoplatonist philosopher Iamblichus (c.245–c.325) in his commentary on the "Introduction to Arithmetic" of Nicomachus of Gerasa. Both Hippolytus's and Iamblichus's descriptions, though, were limited to an explanation of how repeated digital sums of Greek numerals were used to compute a unique "root" between 1 and 9. Neither of them displayed any awareness of how the procedure could be used to check the results of arithmetical computations.
The earliest known surviving work which describes how casting out nines can be used to check the results of arithmetical computations is the "Mahâsiddhânta", written around 950 by the Indian mathematician and astronomer Aryabhata II (c.920–c.1000).
Writing about 1020, the Persian polymath, Ibn Sina (Avicenna) (c.980–1037), also gave full details of what he called the "Hindu method" of checking arithmetical calculations by casting out nines.
The procedure was described by Fibonacci in his "Liber Abaci".
Generalization.
This method can be generalized to determine the remainders of division by certain prime numbers.
Since 3·3 = 9,
formula_7
So we can use the remainder from casting out nines to get the remainder of division by three.
Casting out ninety-nines is done by adding groups of two digits instead of just one digit.
Since 11·9 = 99,
formula_8
So we can use the remainder from casting out ninety nines to get the remainder of division by eleven. This is called casting out elevens. The same result can also be calculated directly by alternately adding and subtracting the digits that make up formula_9. Eleven divides formula_9 if and only if eleven divides that sum.
Casting out nine hundred ninety nines is done by adding groups of three digits.
Since 37·27 = 999,
formula_10
So we can use the remainder from casting out nine hundred ninety nines to get the remainder of division by thirty seven.
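A minimal sketch of the generalization: summing groups of two digits and reducing modulo 11 (casting out elevens), or groups of three digits and reducing modulo 37, gives the same remainder as reducing the original number directly. The test value 123456789 is arbitrary.

```python
def cast_out(n, group):
    """Sum groups of `group` decimal digits, taken from the right.
    group=1 casts out nines, group=2 casts out ninety-nines, group=3 casts out 999s."""
    base = 10 ** group
    total = 0
    while n > 0:
        total += n % base
        n //= base
    return total

n = 123456789
print(cast_out(n, 2) % 11, n % 11)  # both give the remainder of division by 11
print(cast_out(n, 3) % 37, n % 37)  # both give the remainder of division by 37
```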
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "10^n d_n + 10^{n-1} d_{n-1} + \\cdots + d_0"
},
{
"math_id": 1,
"text": "d_nd_{n-1} \\dots d_0"
},
{
"math_id": 2,
"text": "d_n + d_{n-1} + \\cdots + d_0"
},
{
"math_id": 3,
"text": "\n\\begin{align}\n& 10^n d_n + 10^{n-1} d_{n-1} + \\cdots + d_0 - \\left(d_n + d_{n-1} + \\cdots + d_0\\right) \\\\\n= {} & \\left(10^n-1\\right)d_n + \\left(10^{n-1}-1\\right)d_{n-1} + \\cdots + 9 d_1.\n\\end{align}\n"
},
{
"math_id": 4,
"text": "10^i -1"
},
{
"math_id": 5,
"text": "10^i -1 = 9\\times\\left(10^{i-1} + 10^{i-2} + \\cdots + 1\\right)"
},
{
"math_id": 6,
"text": "\n\\frac{10^n -1}{9}d_n + \\frac{10^{n -1}-1}{9}d_{n-1} + \\cdots + d_1\n"
},
{
"math_id": 7,
"text": " n \\bmod 3 = ( n \\bmod 9 ) \\bmod 3. "
},
{
"math_id": 8,
"text": " n \\bmod 11 = ( n \\bmod 99 ) \\bmod 11. "
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": " n \\bmod 37 = ( n \\bmod 999 ) \\bmod 37. "
}
] |
https://en.wikipedia.org/wiki?curid=575641
|
5756885
|
Luzin N property
|
Measure theory concept
In mathematics, a function "f" on the interval ["a", "b"] has the Luzin N property, named after Nikolai Luzin (also called Luzin property or N property) if for all formula_0 such that formula_1, there holds: formula_2, where formula_3 stands for the Lebesgue measure.
Note that the image of such a set "N" is not necessarily measurable, but since the Lebesgue measure is complete, it follows that if the Lebesgue outer measure of that set is zero, then it is measurable and its Lebesgue measure is zero as well.
Properties.
Any differentiable function has the Luzin N property. This extends to functions that are differentiable on a cocountable set, as the image of a countable set is countable and thus a null set, but not to functions differentiable on a conull set:
The Cantor function does not have the Luzin N property, as the Lebesgue measure of the Cantor set is zero, but its image is the complete [0,1] interval.
A function "f" on the interval ["a","b"] is absolutely continuous if and only if it is continuous, is of bounded variation and has the Luzin N property.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N\\subset[a,b]"
},
{
"math_id": 1,
"text": "\\lambda(N)=0"
},
{
"math_id": 2,
"text": "\\lambda(f(N))=0"
},
{
"math_id": 3,
"text": "\\lambda"
}
] |
https://en.wikipedia.org/wiki?curid=5756885
|
57569740
|
Open microfluidics
|
Microfluidics refers to the flow of fluid in channels or networks with at least one dimension on the micron scale. In open microfluidics, also referred to as open surface microfluidics or open-space microfluidics, at least one boundary confining the fluid flow of a system is removed, exposing the fluid to air or another interface such as a second fluid.
Types of open microfluidics.
Open microfluidics can be categorized into various subsets. Some examples of these subsets include open-channel microfluidics, paper-based, and thread-based microfluidics.
Open-channel microfluidics.
In open-channel microfluidics, a surface-tension-driven capillary flow occurs, referred to as spontaneous capillary flow (SCF). SCF occurs when the pressure at the advancing meniscus is negative. The geometry of the channel and the contact angle of the fluid have been shown to produce SCF when the following inequality holds:
formula_0
where "p"f is the free perimeter of the channel (i.e., the interface not in contact with the channel wall), "p"w is the wetted perimeter (i.e., the walls in contact with the fluid), and "θ" is the contact angle of the fluid on the material of the device.
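A minimal sketch of this SCF criterion applied to an open rectangular channel; the channel dimensions (100 μm wide, 300 μm deep, open on top) and the 60° contact angle are assumed illustrative values, not taken from the text.

```python
import math

def spontaneous_capillary_flow(p_free, p_wetted, contact_angle_deg):
    """Geometric SCF criterion quoted above: flow occurs when
    p_free / p_wetted < cos(theta)."""
    return (p_free / p_wetted) < math.cos(math.radians(contact_angle_deg))

# Illustrative (assumed) open rectangular channel: width 100 um, depth 300 um,
# open on top, so p_free = 100 and p_wetted = 100 + 2*300 = 700 (um)
print(spontaneous_capillary_flow(100, 700, 60))  # True: 0.143 < 0.5
```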
Paper-based microfluidics.
Paper-based microfluidics utilizes the wicking ability of paper for functional readouts. Paper-based microfluidics is an attractive method because paper is cheap, easily accessible, and has a low environmental impact. Paper is also versatile because it is available in various thicknesses and pore sizes. Coatings such as wax have been used to guide flow in paper microfluidics. In some cases, dissolvable barriers have been used to create boundaries on the paper and control the fluid flow. The application of paper as a diagnostic tool has shown to be powerful because it has successfully been used to detect glucose levels, bacteria, viruses, and other components in whole blood. Cell culture methods within paper have also been developed. Lateral flow immunoassays, such as those used in pregnancy tests, are one example of the application of paper for point of care or home-based diagnostics. Disadvantages include difficulty of fluid retention and high limits of detection.
Thread-based microfluidics.
Thread-based microfluidics, an offshoot from paper-based microfluidics, utilizes the same capillary based wicking capabilities. Common thread materials include nitrocellulose, rayon, nylon, hemp, wool, polyester, and silk. Threads are versatile because they can be woven to form specific patterns. Additionally, two or more threads can converge together in a knot bringing two separate ‘streams’ of fluid together as a reagent mixing method. Threads are also relatively strong and difficult to break from handling which makes them stable over time and easy to transport. Thread-based microfluidics has been applied to 3D tissue engineering and analyte analysis.
Capillary filaments in open microfluidics.
Open capillary microfluidic channels expose fluids to the open air by excluding the ceiling and/or floor of the channel. Rather than relying on pumps or syringes to maintain flow, open capillary microfluidics uses surface tension to drive the flow. Eliminating the infusion source reduces the size of the device and its associated apparatus and removes hardware that could otherwise obstruct its use. The dynamics of capillary-driven flow in open microfluidics depend strongly on two types of channel geometry, commonly known as rectangular U-grooves and triangular V-grooves. The geometry of the channels dictates the flow along the interior walls, which are fabricated with various ever-evolving processes.
Capillary filaments in U-groove.
Rectangular open-surface U-grooves are the easiest type of open microfluidic channel to fabricate. This design can maintain a velocity of the same order of magnitude as a V-groove. Channels are made of glass or high-clarity glass substitutes such as polymethyl methacrylate (PMMA), polycarbonate (PC), or cyclic olefin copolymer (COC). To eliminate the remaining flow resistance after etching, channels are given a hydrophilic treatment using oxygen plasma or deep reactive-ion etching (DRIE).
Capillary filaments in V-groove.
The V-groove, unlike the U-groove, allows for a range of velocities depending on the groove angle. V-grooves with a sharp groove angle exhibit interface curvature at the corners, explained by the reduced Concus–Finn conditions. In a perfect inner corner of a V-groove, the filament will advance indefinitely along the groove, allowing the formation of a capillary filament depending on the wetting conditions. The width of the groove plays an important role in controlling the fluid flow: the narrower the V-groove, the better the capillary flow of liquids, even for highly viscous liquids such as blood; this effect has been used to produce an autonomous assay. Fabrication of a V-groove is more difficult than a U-groove and poses a higher risk of faulty construction, since the corner has to be tightly sealed.
Advantages.
One of the main advantages of open microfluidics is ease of accessibility, which enables intervention (i.e., for adding or removing reagents) in the flowing liquid of the system. Open microfluidics also simplifies fabrication, eliminating the need to bond surfaces. When one of the boundaries of a system is removed, a larger liquid–gas interface results, which enables liquid–gas reactions. Because at least one side of the system is not covered by the device material, open microfluidic devices also offer better optical transparency, which can reduce autofluorescence during imaging. Further, open systems minimize and sometimes eliminate bubble formation, a common problem in closed systems.
In closed-system microfluidics, the flow in the channels is driven by pressure via pumps (syringe pumps), valves (trigger valves), or an electric field. One method for achieving low flow rates, based on temperature-controlled evaporation, has been described for an open microfluidic system; it allows long incubation times for biological applications and requires only small sample volumes. Open-system microfluidics enables surface-tension-driven flow in channels, thereby eliminating the need for external pumping methods. For example, some open microfluidic devices consist of a reservoir port and a pumping port that can be filled with fluid using a pipette. Eliminating external pumping requirements lowers cost and enables device use in all laboratories with pipettes.
Materials Solutions.
While many problems exist with PDMS (see Materials below), many solutions have also been developed. To address the hydrophobicity and porosity that PDMS exhibits, researchers have started to use coatings such as BSA (bovine serum albumin) or charged molecules to create a layer between the native PDMS and the cells. Other researchers have successfully employed several of the Pluronic surfactants, tri-block copolymers that have two hydrophilic blocks surrounding a hydrophobic core and are often used to increase the hydrophilic nature of numerous substrates, and even borosilicate glass coatings to address the hydrophobicity problem. Treatment with either of the prior two compounds can prevent non-specific protein adsorption, as they (and other coatings) form stable adsorption interactions with the PDMS, which aids in reducing PDMS interference with cell culture media. These compounds and materials can affect surface properties and should be carefully tested to note the impact on cultured cells. Researchers have developed 3D scaffolding systems to mimic "in vivo" environments so that more cells and cell types can grow, addressing the problem that not all cell types grow well on PDMS. Like coating the PDMS, 3D scaffolding systems employ alternative materials such as ECM (extracellular matrix) proteins, so rather than binding the native PDMS, cells are more likely to bind to the proteins. Lastly, researchers have addressed the permeability of PDMS to water vapor: for example, a portion of the microfluidic system can be designated for humidification and cast in PDMS or another material such as glass.
Disadvantages.
Some drawbacks of open microfluidics include evaporation, contamination, and limited flow rate. Open systems are susceptible to evaporation which can greatly affect readouts when fluid volumes are on the microscale. Additionally, due to the nature of open systems, they are more susceptible to contamination than closed systems. Cell culture and other methods where contamination or small particulates are a concern must be carefully performed to prevent contamination. Lastly, open systems have a limited flow rate because induced pressures cannot be used to drive flow.
Materials.
Polydimethylsiloxane (PDMS) is a widely used material for fabricating microfluidic devices for cell culture applications because of several advantageous properties such as low processing costs, ease of manufacture, rapid prototyping, ease of surface modification, and cellular non-toxicity. While there are several benefits that arise from using native PDMS, there are also some drawbacks that researchers must account for in their experiments. First, PDMS is both hydrophobic and porous, meaning that small molecules or other hydrophobic molecules can be adsorbed onto it; such molecules include methyl- or alkyl-containing molecules and even certain dyes like Nile Red. Researchers showed in 2008 that plasma treatment could be used to reduce the hydrophobicity of PDMS, though the hydrophobicity returned about two weeks after treatment. Some researchers postulate that integrating removable polycaprolactone (PCL) fiber-based electrospun scaffolds treated with NaOH enhances hydrophilicity and mitigates hydrophobicity, while promoting more efficient cell communication. Another problem is that PDMS can interfere with the media that circulates in the channels: incomplete curing of PDMS channels can lead to PDMS leaching into the media and, even when curing is complete, components of the media can still unintentionally attach to free hydrophobic sites on the PDMS walls. Yet another problem arises from the gas permeability of PDMS. Most researchers take advantage of this to oxygenate both the PDMS and the circulating media, but this trait also makes the microfluidic system especially vulnerable to water vapor loss. Lastly, not all cell types can grow, or will grow at the same levels, on native PDMS. For instance, high levels of rapid cell death in two fibroblast types grown on native PDMS were observed as early as 1994, which posed problems for the widespread use of PDMS in microfluidic cell culture.
Applications.
Like many microfluidic technologies, open system microfluidics has been applied to nanotechnology, biotechnology, fuel cells, and point of care (POC) testing. For cell-based studies, open-channel microfluidic devices enable access to cells for single cell probing within the channel. Other applications include capillary gel electrophoresis, water-in-oil emulsions, and biosensors for POC systems. Suspended microfluidic devices, open microfluidic devices where the floor of the device is removed, have been used to study cellular diffusion and migration of cancer cells. Suspended and rail-based microfluidics have been used for micropatterning and studying cell communication.
Materials Solutions Applications.
Applications of these solutions are still in use today, as the following examples show. In 2014, Lei et al. tested the impedance of human oral cancer cells in the presence of cisplatin, a known anti-cancer drug, by growing the cells in a 3D scaffold. The authors had noted from previous studies that cellular impedance could be correlated with cellular viability and proliferation in 2D cell culture and hoped to translate that correlation into 3D cell culture. Using agarose to create the 3D scaffold, the researchers measured the growth and proliferation of human oral cancer cells in the presence and absence of cisplatin using fluorescent DNA assays and observed that there was indeed a correlation like that observed in the 2D model. Not only did this show that principles from 2D cell culture could be translated to 3D open microfluidic cell culture, but it also potentially lays the foundation for a more personalized treatment plan for cancer patients. They postulated that future developments could transform this method into an assay that could test patient cancer cell response to known anti-cancer drugs.
Another group used a similar method, but instead of creating a 3D scaffold, they employed several different PDMS coatings to determine the best option for studying cancer stem cells. The group looked at BSA and ECM proteins and found that, while their experimental evidence supported BSA as the best coating for circulating cancer cells (CSCs), phenotypic changes did occur to the cells (namely, elongation) but did not impact the cells' ability to perform normal cell functions. A key caveat to note here is that BSA is not a blanket solution that works for every cell type; different coatings work better or worse for certain cell types, and these differences should be considered when developing an experiment.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "{pf \\over pw}< cos(\\theta)"
}
] |
https://en.wikipedia.org/wiki?curid=57569740
|
57570231
|
MyAnimeList
|
English-language anime and manga database website
MyAnimeList, often abbreviated as MAL, is an anime and manga social networking and social cataloging application website run by volunteers. The site provides its users with a list-like system to organize and score anime and manga. It facilitates finding users who share similar tastes and provides a large database on anime and manga. As of March 2024, the site reported having approximately 26,417 anime and 68,308 manga entries. In 2015, the site received 120 million visitors per month.
History.
The site was launched in November 2004 by Garrett Gyssler and maintained solely by him until 2008. Originally, the website was called AnimeList, but Gyssler decided to add the possessive "My" at the beginning, following the fashion of Myspace, the most popular social network at the time.
On August 4, 2008, CraveOnline, a men's entertainment and lifestyle site owned by AtomicOnline, purchased MyAnimeList for an undisclosed sum of money. In 2015, DeNA announced that it had purchased MyAnimeList from CraveOnline, and that they would partner with Anime Consortium Japan to stream anime on the service, via Daisuki.
In April 2016, MyAnimeList announced that it had embedded episodes from Crunchyroll and Hulu directly onto the site, making over 20,000 episodes available.
On March 8, 2018, MyAnimeList opened an online manga store, in partnership with Kodansha Comics and Viz Media, allowing users to purchase manga digitally from the website. The service originally launched in Canada but later expanded to the United States, the United Kingdom, and several other English-speaking countries.
MAL became inaccessible for several days in May and June 2018 when site staff took it offline for maintenance, citing security and privacy concerns. The site operators also disabled the API for third-party apps, rendering them unusable. The moves were made in an effort to comply with the European Union's General Data Protection Regulation (GDPR).
MyAnimeList was acquired by Media Do in January 2019; with the purchase, Media Do announced its intention to focus on marketing and e-book sales to strengthen the site.
On September 25, 2019, HIDIVE and MyAnimeList announced a partnership which would incorporate MyAnimeList's content ratings into HIDIVE's streaming platform, while exclusively providing MyAnimeList users with a curated selection of embedded HIDIVE content for free.
On February 18, 2021, MyAnimeList announced it had conducted a third-party allotment of ¥, with Kodansha, Shueisha, and Shogakukan, and parent company Media Do collectively investing ¥. On May 31, 2021, it was revealed that Akatsuki, The Anime Times Company, DMM.com, and Kadokawa Corporation had invested ¥ during its initial third-party allotment. On July 26, 2021, it was revealed that Bushiroad, Dentsu, and other companies had invested ¥, with the total third-party allotment rising to ¥.
In October 2021, MyAnimeList collaborated with e-book publisher and parent company Media Do to release "Fist of the North Star Manga Fragments: Dying Like a Man", a series of non-fungible token (NFT) products based on the "Fist of the North Star" manga.
On May 10, 2023, MyAnimeList underwent emergency maintenance after being hacked, with the titles of all anime replaced with a reference to "Serial Experiments Lain". On May 13, the website resumed operation after restoring its databases. Users' personal information and data were not breached during the hack; however, any list updates, forum posts, edits, etc. made in the roughly 8.5 hours before the incident had to be remade.
Features.
MyAnimeList catalogs only anime (including aeni and donghua) and related print media such as manga, manhwa, manhua, doujinshi and light novels. Users create lists that they strive to complete. Users can submit reviews, write recommendations and blogs, produce interest stacks, post in the site's forum, create clubs to unite with people of similar interests, and subscribe to the RSS news feed of anime- and manga-related news. MAL also initiates challenges for users to complete their lists.
Scoring.
MyAnimeList allows users to score the anime and manga on their list on a scale from 1 to 10. These scores are then aggregated to give each show in the database a rank from best to worst. A show's rank is calculated twice a day using the following formula:
formula_0
where formula_1 stands for the total number of user votes, formula_2 for the average user score, formula_3 for the minimum number of votes required to get a calculated score (currently 50), and formula_4 for the average score across the entire anime/manga database. Only scores from users who have completed at least 20% of the anime/manga are counted.
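As an illustration of the weighting described above, the following sketch implements the formula directly; only the minimum of 50 votes comes from the text, while the vote counts, mean score, and site-wide average are invented placeholder numbers, so it shows the shape of the calculation rather than real MyAnimeList data.

```python
# Hypothetical sketch of the weighted rank R = (v*S + m*C) / (v + m).
def weighted_score(v, S, m=50, C=7.0):
    """v: eligible user votes, S: their mean score,
    m: minimum votes for a calculated score (50 per the text),
    C: mean score across the whole database (assumed value here)."""
    return (v * S + m * C) / (v + m)

# A show with few votes is pulled toward the database-wide mean...
print(round(weighted_score(v=60, S=9.2), 2))       # 8.2
# ...while a heavily voted show keeps essentially its raw average.
print(round(weighted_score(v=200_000, S=9.2), 2))  # 9.2
```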
In February 2020, MyAnimeList updated its scoring system to prevent vote brigading.
Controversy.
In January 2017, MyAnimeList rewrote an anti-Nazi article that a contributor had written on the site to be more pro-Nazi, without notifying the contributor.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R = \\frac{vS + mC}{v + m}"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "C"
}
] |
https://en.wikipedia.org/wiki?curid=57570231
|
57574496
|
Contracted Bianchi identities
|
In general relativity and tensor calculus, the contracted Bianchi identities are:
formula_0
where formula_1 is the Ricci tensor, formula_2 the scalar curvature, and formula_3 indicates covariant differentiation.
These identities are named after Luigi Bianchi, although they had already been derived by Aurel Voss in 1880. In the Einstein field equations, the contracted Bianchi identity ensures consistency with the vanishing divergence of the matter stress–energy tensor.
Proof.
Start with the Bianchi identity
formula_4
Contract both sides of the above equation with a pair of metric tensors:
formula_5
formula_6
formula_7
formula_8
The first term on the left contracts to yield a Ricci scalar, while the third term contracts to yield a mixed Ricci tensor,
formula_9
The last two terms are the same (changing dummy index "n" to "m") and can be combined into a single term which shall be moved to the right,
formula_10
which is the same as
formula_11
Swapping the index labels "l" and "m" on the left side yields
formula_12
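The identity can also be checked symbolically for a concrete metric. The sketch below, using SymPy, computes the Christoffel symbols, Ricci tensor and scalar curvature for the two-dimensional metric ds² = dx² + f(x)² dy² (an arbitrary illustrative choice, not part of the derivation above) and verifies that the difference between the two sides of the contracted identity vanishes for both coordinate indices.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x)
coords = [x, y]
g = sp.Matrix([[1, 0], [0, f**2]])      # metric ds^2 = dx^2 + f(x)^2 dy^2
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
               - sp.diff(g[b, c], coords[d])) for d in range(n)) / 2
           for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}, then Ricci tensor R_{bd} = R^a_{bad} and scalar curvature
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][b][d], coords[c]) - sp.diff(Gamma[a][b][c], coords[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][b][d] - Gamma[a][d][e] * Gamma[e][b][c]
                for e in range(n))
    return expr

Ric = sp.Matrix(n, n, lambda b, d: sp.simplify(sum(riemann(a, b, a, d) for a in range(n))))
Rscal = sp.simplify(sum(ginv[b, d] * Ric[b, d] for b in range(n) for d in range(n)))
RicMixed = (ginv * Ric).applyfunc(sp.simplify)   # mixed Ricci tensor R^a_b

# Covariant divergence of the mixed Ricci tensor:
# div_mu = d_rho R^rho_mu + Gamma^rho_{rho lam} R^lam_mu - Gamma^lam_{rho mu} R^rho_lam
def div_ricci(mu):
    expr = sum(sp.diff(RicMixed[r, mu], coords[r]) for r in range(n))
    expr += sum(Gamma[r][r][l] * RicMixed[l, mu] for r in range(n) for l in range(n))
    expr -= sum(Gamma[l][r][mu] * RicMixed[r, l] for r in range(n) for l in range(n))
    return sp.simplify(expr)

for mu in range(n):
    # Contracted Bianchi identity: both entries print 0
    print(mu, sp.simplify(div_ricci(mu) - sp.diff(Rscal, coords[mu]) / 2))
```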
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\nabla_\\rho {R^\\rho}_\\mu = {1 \\over 2} \\nabla_{\\mu} R"
},
{
"math_id": 1,
"text": "{R^\\rho}_\\mu"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "\\nabla_\\rho"
},
{
"math_id": 4,
"text": " R_{abmn;\\ell} + R_{ab\\ell m;n} + R_{abn\\ell;m} = 0."
},
{
"math_id": 5,
"text": " g^{bn} g^{am} (R_{abmn;\\ell} + R_{ab\\ell m;n} + R_{abn\\ell;m}) = 0,"
},
{
"math_id": 6,
"text": " g^{bn} (R^m {}_{bmn;\\ell} - R^m {}_{bm\\ell;n} + R^m {}_{bn\\ell;m}) = 0,"
},
{
"math_id": 7,
"text": " g^{bn} (R_{bn;\\ell} - R_{b\\ell;n} - R_b {}^m {}_{n\\ell;m}) = 0,"
},
{
"math_id": 8,
"text": " R^n {}_{n;\\ell} - R^n {}_{\\ell;n} - R^{nm} {}_{n\\ell;m} = 0."
},
{
"math_id": 9,
"text": " R_{;\\ell} - R^n {}_{\\ell;n} - R^m {}_{\\ell;m} = 0."
},
{
"math_id": 10,
"text": " R_{;\\ell} = 2 R^m {}_{\\ell;m},"
},
{
"math_id": 11,
"text": " \\nabla_m R^m {}_\\ell = {1 \\over 2} \\nabla_\\ell R."
},
{
"math_id": 12,
"text": " \\nabla_\\ell R^\\ell {}_m = {1 \\over 2} \\nabla_m R."
}
] |
https://en.wikipedia.org/wiki?curid=57574496
|
57577573
|
Simon problems
|
Fifteen problems in mathematical physics
In mathematics, the Simon problems (or Simon's problems) are a series of fifteen questions posed in the year 2000 by Barry Simon, an American mathematical physicist. Inspired by other collections of mathematical problems and open conjectures, such as the famous list by David Hilbert, the Simon problems concern quantum operators. Eight of the problems pertain to anomalous spectral behavior of Schrödinger operators, and five concern operators that incorporate the Coulomb potential.
In 2014, Artur Avila won a Fields Medal for work including the solution of three Simon problems. Among these was the problem of proving that the set of energy levels of one particular abstract quantum system was in fact the Cantor set, a challenge known as the "Ten Martini Problem" after the reward that Mark Kac offered for solving it.
The 2000 list was a refinement of a similar set of problems that Simon had posed in 1984.
Context.
Background definitions for the "Coulomb energies" problems (formula_0 non-relativistic particles (electrons) in formula_1 with spin formula_2 and an infinitely heavy nucleus with charge formula_3 and Coulombic mutual interaction):
The 1984 list.
Simon listed the following problems in 1984:
In 2000, Simon claimed that five of the problems he listed had been solved.
The 2000 list.
The Simon problems as listed in 2000 (with original categorizations) are:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "\\mathbb{R}^{3}"
},
{
"math_id": 2,
"text": " 1/2 "
},
{
"math_id": 3,
"text": "Z"
},
{
"math_id": 4,
"text": "\\mathcal{H}_f^{(N)}"
},
{
"math_id": 5,
"text": "L^2(\\mathbb{R}^{3N}; \\mathbb{C}^{2N})"
},
{
"math_id": 6,
"text": " (L^2(\\mathbb{R}^{3})\\otimes \\mathbb{C}^{2})^{\\otimes N}"
},
{
"math_id": 7,
"text": "H(N, Z) := \\sum_{i = 1}^N(-\\Delta_i - \\frac{Z}{|x_i|} ) + \\sum_{i < j}\\frac{1}{|x_i - x_j|}"
},
{
"math_id": 8,
"text": "x_i \\in \\mathbb{R}^3 "
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "\\Delta_i"
},
{
"math_id": 11,
"text": "x_i"
},
{
"math_id": 12,
"text": "E(N, Z) := \\min_{\\mathcal{H}_f} H(N, Z)"
},
{
"math_id": 13,
"text": "(N,Z)"
},
{
"math_id": 14,
"text": "N_0(Z)"
},
{
"math_id": 15,
"text": "E(N + j, Z) = E(N, Z)"
},
{
"math_id": 16,
"text": "j"
},
{
"math_id": 17,
"text": "2Z"
}
] |
https://en.wikipedia.org/wiki?curid=57577573
|
5758323
|
Porphobilinogen deaminase
|
Porphobilinogen deaminase (hydroxymethylbilane synthase, or uroporphyrinogen I synthase) is an enzyme (EC 2.5.1.61) that in humans is encoded by the HMBS gene. Porphobilinogen deaminase is involved in the third step of the heme biosynthetic pathway. It catalyzes the head to tail condensation of four porphobilinogen molecules into the linear hydroxymethylbilane while releasing four ammonia molecules:
4 porphobilinogen + H2O formula_0 hydroxymethylbilane + 4 NH3
Structure and function.
Functionally, porphobilinogen deaminase catalyzes the loss of ammonia from the porphobilinogen monomer (deamination) and its subsequent polymerization to a linear tetrapyrrole, which is released as hydroxymethylbilane:
The structure of 40-42 kDa porphobilinogen deaminase, which is highly conserved amongst organisms, consists of three domains. Domains 1 and 2 are structurally very similar: each consisting of five beta-sheets and three alpha helices in humans. Domain 3 is positioned between the other two and has a flattened beta-sheet geometry. A dipyrrole, a cofactor of this enzyme consisting of two condensed porphobilinogen molecules, is covalently attached to domain 3 and extends into the active site, the cleft between domains 1 and 2. Several positively charged arginine residues, positioned to face the active site from domains 1 and 2, have been shown to stabilize the carboxylate functionalities on the incoming porphobilinogen as well as the growing pyrrole chain. These structural features presumably favor the formation of the final hydroxymethylbilane product. Porphobilinogen deaminase usually exists in dimer units in the cytoplasm of the cell.
Reaction mechanism.
The first step is believed to involve an E1 elimination of ammonia from porphobilinogen, generating a carbocation intermediate (1). This intermediate is then attacked by the dipyrrole cofactor of porphobilinogen deaminase, which, after losing a proton, yields a trimer covalently bound to the enzyme (2). This intermediate is then open to further reaction with porphobilinogen (steps 1 and 2 are repeated three more times). Once a hexamer is formed, hydrolysis releases hydroxymethylbilane and regenerates the cofactor (3).
Pathology.
The most well-known health issue involving porphobilinogen deaminase is acute intermittent porphyria, an autosomal dominant genetic disorder in which insufficient hydroxymethylbilane is produced, leading to a build-up of porphobilinogen in the cytoplasm. In about 90% of cases, the underlying gene mutation decreases the amount of enzyme produced; however, mutations that produce less-active enzymes and/or different isoforms have also been described. At least 115 disease-causing mutations in this gene have been discovered.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=5758323
|
5758721
|
Rayl
|
A rayl (symbol Rayl) is one of two units of specific acoustic impedance and characteristic acoustic impedance; one an MKS unit, and the other a CGS unit. These have the same dimensions as momentum per volume.
The units are named after John William Strutt, 3rd Baron Rayleigh. They are not to be confused with the unit of photon flux, the rayleigh.
Explanation.
Specific acoustic impedance.
When sound waves pass through any physical substance, the pressure of the waves causes the particles of the substance to move. The specific acoustic impedance is the ratio of the sound pressure to the particle velocity it produces.
"Specific acoustic impedance" is defined as:
formula_0
where formula_1 and formula_2 are the specific acoustic impedance, pressure and particle velocity phasors, formula_3 is the position and formula_4 is the frequency.
Characteristic acoustic impedance.
The rayl is also used for the "characteristic (acoustic) impedance" of a medium, which is an inherent property of a medium:
formula_5
Here, formula_6 is the characteristic impedance, and formula_7 and formula_8 are the density and speed of sound in the unperturbed medium (i.e. when there are no sound waves travelling in it).
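As a simple numerical illustration (not part of the definition above), the characteristic impedance follows directly from the product of density and sound speed; the values below are typical textbook figures for air at room temperature and for fresh water, used only as examples.

```python
# Characteristic acoustic impedance Z0 = rho0 * c0, in MKS rayl (Pa*s/m).
def characteristic_impedance(rho0_kg_per_m3, c0_m_per_s):
    return rho0_kg_per_m3 * c0_m_per_s

print(characteristic_impedance(1.21, 343.0))    # air:   ~415 rayl
print(characteristic_impedance(998.0, 1481.0))  # water: ~1.48e6 rayl (1.48 MRayl)
```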
In a viscous medium, there will be a phase difference between the pressure and velocity, so the specific acoustic impedance formula_9 will be different from the characteristic acoustic impedance formula_6.
MKS and CGS units.
Subscripts are used in this section to distinguish identically named units. Texts often refer to "the MKS rayl" to ensure clarity.
The MKS unit of specific acoustic impedance is the pascal-second per meter, and is often called the rayl (MKS: 1 Rayl = 1 Pa·s·m−1).
The MKS unit and the CGS unit confusingly have the same name but are not the same unit: the CGS rayl is the dyne-second per cubic centimetre (CGS: 1 rayl = 1 dyn·s·cm−3), so 1 CGS rayl equals 10 MKS rayl (10 Pa·s·m−1).
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "{\\underline{Z}(\\mathbf{r},\\omega) = \\frac{\\underline{p}(\\mathbf{r},\\omega)}{\\underline{v}(\\mathbf{r},\\omega)}}"
},
{
"math_id": 1,
"text": "\\underline{Z}, \\underline{p}"
},
{
"math_id": 2,
"text": "\\underline{v}"
},
{
"math_id": 3,
"text": "\\mathbf{r}"
},
{
"math_id": 4,
"text": "\\omega"
},
{
"math_id": 5,
"text": "{Z_0= \\rho_0 c_0}"
},
{
"math_id": 6,
"text": "{Z_0}"
},
{
"math_id": 7,
"text": "{\\rho_0}"
},
{
"math_id": 8,
"text": "{c_0}"
},
{
"math_id": 9,
"text": "\\underline{Z}"
}
] |
https://en.wikipedia.org/wiki?curid=5758721
|
5759
|
Complex analysis
|
Branch of mathematics studying functions of a complex variable
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.
As a differentiable function of a complex variable is equal to its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, "holomorphic functions".
The concept can be extended to functions of several complex variables.
History.
Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Gösta Mittag-Leffler, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory which examines conformal invariants in quantum field theory.
Complex functions.
A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a (not necessarily proper) subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane.
For any complex function, the values formula_0 from the domain and their images formula_1 in the range may be separated into real and imaginary parts:
formula_2
where formula_3 are all real-valued.
In other words, a complex function formula_4 may be decomposed into
formula_5 and formula_6
i.e., into two real-valued functions (formula_7, formula_8) of two real variables (formula_9, formula_10).
Similarly, any complex-valued function f on an arbitrary set X can be considered as an ordered pair of two real-valued functions: (Re "f", Im "f") or, alternatively, as a vector-valued function from X into formula_11
Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector valued functions of two real variables. Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domain (if the domains are connected). The latter property is the basis of the principle of analytic continuation which allows extending every real analytic function in a unique way for getting a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions.
Holomorphic functions.
Complex functions that are differentiable at every point of an open subset formula_12 of the complex plane are said to be "holomorphic on" formula_12. In the context of complex analysis, the derivative of formula_13 at formula_14 is defined to be
formula_15
Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach formula_14 in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the "n"th derivative need not imply the existence of the ("n" + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on formula_12 can be approximated arbitrarily well by polynomials in some neighborhood of every point in formula_12. This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are "nowhere" analytic.
Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions formula_16, are holomorphic over the entire complex plane, making them "entire" "functions", while rational functions formula_17, where "p" and "q" are polynomials, are holomorphic on domains that exclude points where "q" is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as "meromorphic functions". On the other hand, the functions formula_18, formula_19, and formula_20 are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below).
An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If formula_4, defined by formula_21, where formula_22, is holomorphic on a region formula_12, then for all formula_23,
formula_24
In terms of the real and imaginary parts of the function, "u" and "v", this is equivalent to the pair of equations formula_25 and formula_26, where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions, without additional continuity conditions (see Looman–Menchoff theorem).
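The conditions can be checked mechanically for specific functions. The short SymPy sketch below (an illustration, not drawn from the article) extracts u and v from f(x + iy) and tests whether u_x = v_y and u_y = −v_x; it confirms the conditions for the square and exponential functions and their failure for complex conjugation.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

def satisfies_cauchy_riemann(f):
    """Split f(x + i*y) into u + i*v and test u_x = v_y and u_y = -v_x."""
    w = sp.expand(f(z), complex=True)
    u, v = sp.re(w), sp.im(w)
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0 and
            sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0)

print(satisfies_cauchy_riemann(lambda w: w**2))   # True  (holomorphic)
print(satisfies_cauchy_riemann(sp.exp))           # True  (holomorphic)
print(satisfies_cauchy_riemann(sp.conjugate))     # False (not holomorphic)
```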
Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: formula_27, formula_28, or formula_29 for some formula_30. In other words, if two distinct complex numbers formula_0 and formula_31 are not in the range of an entire function formula_13, then formula_13 is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset.
Major results.
One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials.
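These statements are easy to probe numerically. The sketch below (an illustration, not taken from the text) approximates a contour integral around the unit circle with a simple Riemann sum; for the entire function e^z the integral is essentially zero, as Cauchy's integral theorem requires, while for e^z/z, which has a simple pole at the origin with residue 1, it comes out to 2πi, as the residue theorem predicts.

```python
import numpy as np

def unit_circle_integral(f, n=100_000):
    """Approximate the contour integral of f around |z| = 1,
    parametrized as z(t) = exp(i t), dz = i exp(i t) dt."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.exp(1j * t)
    return np.sum(f(z) * 1j * z) * (2.0 * np.pi / n)

print(unit_circle_integral(np.exp))                    # ~0 (Cauchy's integral theorem)
print(unit_circle_integral(lambda z: np.exp(z) / z))   # ~2*pi*i (residue theorem)
print(2j * np.pi)                                      # 6.283185307179586j
```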
A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof for the fundamental theorem of algebra which states that the field of complex numbers is algebraically closed.
If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface.
All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not carry over. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions.
A major application of certain complex spaces is in quantum mechanics as wave functions.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "z"
},
{
"math_id": 1,
"text": "f(z)"
},
{
"math_id": 2,
"text": "z=x+iy \\quad \\text{ and } \\quad f(z) = f(x+iy)=u(x,y)+iv(x,y),"
},
{
"math_id": 3,
"text": "x,y,u(x,y),v(x,y)"
},
{
"math_id": 4,
"text": "f:\\mathbb{C}\\to\\mathbb{C}"
},
{
"math_id": 5,
"text": "u:\\mathbb{R}^2\\to\\mathbb{R} \\quad"
},
{
"math_id": 6,
"text": "\\quad v:\\mathbb{R}^2\\to\\mathbb{R},"
},
{
"math_id": 7,
"text": "u"
},
{
"math_id": 8,
"text": "v"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "y"
},
{
"math_id": 11,
"text": "\\mathbb R^2."
},
{
"math_id": 12,
"text": "\\Omega"
},
{
"math_id": 13,
"text": "f"
},
{
"math_id": 14,
"text": "z_0"
},
{
"math_id": 15,
"text": "f'(z_0) = \\lim_{z \\to z_0} \\frac{f(z)-f(z_0)}{z-z_0}."
},
{
"math_id": 16,
"text": "\\mathbb{C}\\to\\mathbb{C}"
},
{
"math_id": 17,
"text": "p/q"
},
{
"math_id": 18,
"text": "z\\mapsto \\Re(z)"
},
{
"math_id": 19,
"text": "z\\mapsto |z|"
},
{
"math_id": 20,
"text": "z\\mapsto \\bar{z}"
},
{
"math_id": 21,
"text": "f(z) = f(x + iy) = u(x, y) + iv(x, y)"
},
{
"math_id": 22,
"text": "x, y, u(x, y),v(x, y) \\in \\R"
},
{
"math_id": 23,
"text": "z_0\\in \\Omega"
},
{
"math_id": 24,
"text": "\\frac{\\partial f}{\\partial\\bar{z}}(z_0) = 0,\\ \\text{where } \\frac\\partial{\\partial\\bar{z}} \\mathrel{:=} \\frac12\\left(\\frac\\partial{\\partial x} + i\\frac\\partial{\\partial y}\\right)."
},
{
"math_id": 25,
"text": "u_x = v_y"
},
{
"math_id": 26,
"text": "u_y=-v_x"
},
{
"math_id": 27,
"text": "\\mathbb{C}"
},
{
"math_id": 28,
"text": "\\mathbb{C}\\setminus\\{z_0\\}"
},
{
"math_id": 29,
"text": "\\{z_0\\}"
},
{
"math_id": 30,
"text": "z_0\\in\\mathbb{C}"
},
{
"math_id": 31,
"text": "w"
}
] |
https://en.wikipedia.org/wiki?curid=5759
|
5759586
|
List of psychoactive plants
|
List of plant species with reported psychoactive properties
This is a list of plant species that, when consumed by humans, are known or suspected to produce psychoactive effects: changes in nervous system function that alter perception, mood, consciousness, cognition or behavior. Many of these plants are used intentionally as psychoactive drugs, for medicinal, religious, and/or recreational purposes. Some have been used ritually as entheogens for millennia.
The plants are listed according to the specific psychoactive chemical substances they contain; many contain multiple known psychoactive compounds.
Cannabinoids.
Species of the genus "Cannabis", known colloquially as marijuana, including "Cannabis sativa" and "Cannabis indica", are popular psychoactive plants that are often used medically and recreationally. The principal psychoactive substance in "Cannabis", tetrahydrocannabinol (THC), contains no nitrogen, unlike many (but not all) other psychoactive substances, and is not an indole, tryptamine, phenethylamine, anticholinergic (deliriant) or dissociative drug. THC is just one of more than 100 identified cannabinoid compounds in "Cannabis", which also include cannabinol (CBN) and cannabidiol (CBD).
"Cannabis" plants vary widely, with different strains producing dynamic balances of cannabinoids (THC, CBD, etc.) and yielding markedly different effects. Popular strains are often hybrids of "C. sativa" and "C. indica".
The medicinal effects of cannabis are widely studied, and are active topics of research both at universities and private research firms. Many jurisdictions have laws regulating or prohibiting the cultivation, sale and/or use of medical and recreational cannabis.
Tryptamines.
Many psychedelic plants contain dimethyltryptamine (DMT) or other tryptamines, which are either snorted (Virola, Yopo snuffs), vaporized, or drunk with MAOIs (Ayahuasca). DMT cannot simply be eaten, as it is not orally active without an MAOI, and it needs to be extremely concentrated to be vaporized.
Acanthaceae.
"Species", "Alkaloid content, where given, refers to dried material"
Fabaceae (Leguminosae).
Reported alkaloids, with the plant part in which they occur: 1,2,3,4-Tetrahydro-6-methoxy-2,9-dimethyl-beta-carboline (plant); 1,2,3,4-Tetrahydro-6-methoxy-2-methyl-beta-carboline (plant); 5-Methoxy-N,N-dimethyltryptamine (bark); 5-Methoxy-N-methyltryptamine (bark); Bufotenin (plant, beans); Bufotenin N-oxide (fruit, beans); N,N-Dimethyltryptamine-oxide (fruit).
Poaceae (Gramineae).
Some Gramineae (grass) species contain gramine, which can cause brain damage, other organ damage, central nervous system damage and death in sheep.
None of the above alkaloids are said to have been found in "Phalaris californica", "Phalaris canariensis", "Phalaris minor" and hybrids of "P. arundinacea" together with "P. aquatica".
Phenethylamines.
Species, Alkaloid Content (Fresh) – Alkaloid Content (Dried)
Beta-carbolines.
Beta-carbolines are "reversible" MAO-A inhibitors. They are found in some plants used to make Ayahuasca. In high doses the harmala alkaloids are somewhat hallucinogenic on their own. β-carboline is a benzodiazepine receptor inverse agonist and can therefore have convulsive, anxiogenic and memory enhancing effects.
Opiates.
Opiates are the natural products of many plants, the most famous and historically relevant of which is Papaver somniferum. Opiates are defined as natural products (or their esters and salts that revert to the natural product in the human body), whereas opioids are defined as semi-synthetic or fully synthetic compounds that trigger the opioid receptor of the mu sub-type. Other opiate receptors, such as the kappa- and delta-opiate receptors, are part of this system but do not cause the characteristic behavioral depression and analgesia, which is mostly mediated through the mu-opiate receptor.
An opiate, in classical pharmacology, is a substance derived from opium. In more modern usage, the term opioid is used to designate all substances, both natural and synthetic, that bind to opioid receptors in the brain (including antagonists). Opiates are alkaloid compounds naturally found in the Papaver somniferum plant (opium poppy). The psychoactive compounds found in the opium plant include morphine, codeine, and thebaine. Opiates have long been used for a variety of medical conditions with evidence of opiate trade and use for pain relief as early as the eighth century AD. Opiates are considered drugs with moderate to high abuse potential and are listed on various "Substance-Control Schedules" under the Uniform Controlled Substances Act of the United States of America.
In 2014, between 13 and 20 million people used opiates recreationally (0.3% to 0.4% of the global population between the ages of 15 and 65). According to the CDC, from this population, there were 47,000 deaths, with a total of 500,000 deaths from 2000 to 2014. In 2016, the World Health Organization reported that 27 million people suffer from opioid use disorder. They also reported that in 2015, 450,000 people died as a result of drug use, with between a third and a half of that number being attributed to opioids.
Papaver somniferum.
The plant contains a latex that thickens into opium when it is dried. Opium contains approximately 40 alkaloids, which are summarized as opium alkaloids. The main psychoactive alkaloids are:
Atherospermataceae.
Laurelia novae-zelandiae ~ pukateine
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\le"
}
] |
https://en.wikipedia.org/wiki?curid=5759586
|
57598662
|
Extended hemispherical lens
|
The extended hemispherical lens is a commonly used lens for millimeter-wave electromagnetic radiation. Such lenses are typically fabricated from dielectric materials such as Teflon or silicon. The geometry consists of a hemisphere of radius R on a cylinder of length L, with the same radius.
Scanning performance.
When a feed element is placed a distance d off the central axis, the main beam is steered an angle γ off-axis. The relation between d and γ can be determined from geometrical optics:
formula_0
This relation is used when designing focal plane arrays to be used with the extended hemispherical lens.
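A minimal numerical sketch of this relation follows; the extension length and feed offsets below are arbitrary example values, not taken from any particular design.

```python
import math

def steering_angle_deg(d, L):
    """Beam-steering angle gamma from the geometrical-optics relation d/L = tan(gamma)."""
    return math.degrees(math.atan2(d, L))

L_ext = 5.0                      # cylindrical extension length (e.g. mm), example value
for d in (0.0, 0.5, 1.0, 2.0):   # off-axis feed positions in the same units
    print(f"d = {d}: gamma = {steering_angle_deg(d, L_ext):.1f} deg")
# e.g. a feed offset of d = 1.0 on L = 5.0 steers the beam by about 11.3 degrees
```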
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\frac{d}{L} = \\tan \\gamma. "
}
] |
https://en.wikipedia.org/wiki?curid=57598662
|
5760445
|
Tesla Roadster (first generation)
|
Electric convertible sports car produced 2008–2012
The Tesla Roadster is a battery electric sports car, based on the Lotus Elise chassis, produced by Tesla Motors (now Tesla, Inc.) from 2008 to 2012. The Roadster was the first highway legal, serial production, all-electric car to use lithium-ion battery cells, and the first production all-electric car to travel more than per charge. It is also the first production car to be launched into deep space, carried by a Falcon Heavy rocket in a test flight on February 6, 2018.
Tesla sold about 2,450 Roadsters in over 30 countries, and most of the last Roadsters were sold in Europe and Asia during the fourth quarter of 2012. Tesla produced right-hand-drive Roadsters from early 2010. The Roadster qualified for government incentives in several nations.
According to the U.S. EPA, the Roadster can travel on a single charge of its lithium-ion battery pack. The vehicle can accelerate from in 3.7 or 3.9 seconds depending on the model. It has a top speed of . The Roadster's efficiency, as of 2008, was reported as (2.0 L/100 km). It uses 21.7 kWh/100 mi (135 Wh/km) battery-to-wheel, and has an efficiency of 88% on average.
History.
Prototypes of the car were officially revealed to the public on July 19, 2006, in Santa Monica, California, at a 350-person invitation-only event held in Barker Hangar at Santa Monica Airport. It was featured in "Time" in December 2006 as the recipient of the magazine's "Best Inventions 2006—Transportation Invention" award.
The first Tesla Roadster was delivered in February 2008 to Tesla co-founder, chairman and product architect Elon Musk. The company produced 500 similar vehicles through June 2009. In July 2009, Tesla began production of its 2010 model-year Roadster—the first major product upgrade. Simultaneously, Tesla began producing the Roadster Sport, the first derivative of Tesla's proprietary, patented powertrain. The car accelerates from in 3.7 seconds, compared to 3.9 seconds for the standard Roadster.
Changes for the 2010 model-year cars included:
All of these features, except for the motor, were available either as standard or as add-on option for the non-sport model.
Beginning mid-March 2010, Tesla, in an effort to show off the practicality of its electric cars, sent one of its Roadsters around the world. Starting at the Geneva International Motor Show, the Roadster completed its journey upon its arrival in Paris on September 28, 2010.
In July 2010, Tesla introduced the "Roadster 2.5" update including:
Tesla produced the Roadster until January 2012, when its supply of gliders ran out, as its contract with Lotus Cars for 2,500 gliders expired at the end of 2011. Tesla stopped taking orders for the Roadster in the U.S. market in August 2011. Featuring new options and enhanced components, the 2012 Tesla Roadster was sold in limited numbers only in Europe, Asia, and Australia. Tesla's U.S. exemption for not having advanced (two-stage) passenger airbags expired for cars made after the end of 2011 so the last Roadsters could not be sold in the American market. Fifteen Final Edition Roadsters were produced to close the manufacturing cycle of Tesla's first electric car.
For a time, Tesla offered an optional upgrade to existing Roadsters, the Roadster 3.0. It offered a new battery pack with cells from LG Chem increasing capacity by 50% to 70 kWh, a new aero kit designed to reduce drag, and new tires with lower rolling resistance. The upgrade was offered between September 2015 and late 2016 at a cost of US$.
In November 2023, Tesla open-sourced some of the Roadster's design and engineering documents, as well as diagnostic software.
Development.
After Martin Eberhard sold NuvoMedia to TV Guide, he wanted a sports car, but could not find one to his liking. His battery experience with the Rocket eBook inspired him to develop an electric car.
During his search, Eberhard test drove the tzero, a concept car from the small automaker AC Propulsion. Eberhard and Marc Tarpenning, who had also driven the tzero, tried to convince the company to put the car into production, but when it declined, they decided to establish Tesla Motors in Delaware on July 1, 2003, to pursue the idea commercially. South African-born entrepreneur Elon Musk also test drove a tzero and encouraged AC Propulsion to put the car into production; instead, the company connected Musk with Eberhard and Tarpenning. Musk took an active role within the company starting in 2004, including investing US$7.5 million (~$ in 2023), overseeing Roadster product design from the beginning, and greatly expanding Tesla's long-term strategic sales goals by using the sports car to fund the development of mainstream vehicles. Musk became Tesla's chairman of the board in April 2004 and helped recruit J. B. Straubel as chief technology officer in March 2004. Musk received the Global Green 2006 product design award for the design of the Tesla Roadster, presented by Mikhail Gorbachev, and the 2007 Index Design award for the same design.
Before Tesla had developed the Roadster's proprietary powertrain, they borrowed a tzero for use as a development mule and converted the vehicle from lead–acid AGM batteries to lithium-ion cells, which substantially increased the range, reduced weight, and boosted 0 to 60 mph performance. Tesla licensed AC Propulsion's EV power system design and reductive charging patent, which covers integration of the charging electronics with the inverter, thus reducing mass, complexity, and cost. Tesla, however, was dissatisfied with how the motor and transmission worked in the chassis. Tesla then designed and built its own power electronics, motor, and other drivetrain components that incorporated this licensed technology from AC Propulsion. Given the extensive redevelopment of the vehicle, Tesla Motors no longer licenses any proprietary technology from AC Propulsion. The Roadster's powertrain is unique.
On July 11, 2005, Tesla and British sports car maker Lotus entered an agreement covering products and services based on the Lotus Elise, under which Lotus provided advice on designing and developing a vehicle as well as producing partly assembled vehicles; the agreement, as amended in 2009, also covered basic chassis development. The Roadster has a parts overlap of roughly 6% with the Lotus Elise, a 2-inch-longer wheelbase, and, according to Eberhard, a slightly stiffer chassis. Tesla's designers chose to construct the body panels using resin transfer molded carbon fiber composite to minimize weight; this choice makes the Roadster one of the least expensive cars with an entirely carbon fiber skin.
Several prototypes of the Tesla Roadster were produced from 2004 through 2007. Initial studies were done in two development mule vehicles based on Lotus Elises equipped with all-electric drive systems. Tesla then built and tested ten engineering prototypes (EP1 through EP10) in late 2006 and early 2007, which led to many minor changes. Next, Tesla produced at least 26 validation prototypes, which were delivered beginning in March 2007. These final revisions were endurance and crash tested in preparation for series production.
In August 2007, Martin Eberhard was replaced by an interim CEO, Michael Marks. Marks accepted the temporary position while a recruitment was undertaken. In December 2007, Ze'ev Drori became the CEO and president of Tesla. In October 2008, Musk succeeded Drori as CEO. Drori left the company in December. In January 2008, the U.S. National Highway Traffic Safety Administration (NHTSA) announced that it would grant Tesla a waiver of the "advanced" (two-stage) air bag rule noting that the Roadster includes standard air bags. Similar waivers were granted to other small volume manufacturers, including Lotus, Ferrari, and Bugatti. Tesla delivered its first production car in February 2008 to Musk.
Tesla announced in early August 2009 that Roadster sales had resulted in overall corporate profitability for the month of July 2009, earning US$ on revenue of US$.
Tesla, which signed a production contract with Lotus in 2007 to produce "gliders" (complete cars minus electric powertrain) for the Roadster, announced in early 2010 that Roadster production would continue until early 2012. Starting one year prior to the end of the contract, no changes to the order were allowed, to give time for tooling changes at Lotus's assembly plant in the UK.
Several years later, in 2018, Musk said that using the Lotus Elise as a base for the Roadster had been a poor strategy, because the Elise was incompatible with the intended AC Propulsion technology and was modified so extensively that only about 7% of the Elise remained in common with the final production Roadster.
Production.
Tesla's cumulative production of the Roadster reached 1,000 cars in January 2010. The Roadster is considered an American car though many carry a Vehicle Identification Number beginning with the letter "S", which is the designation for the United Kingdom. Some, however, carry a number starting with "5" appropriate to the US. Parts were sourced from around the world. The body panels came from French supplier Sotira. These were sent from France to Hethel, U.K., where Tesla contracted with Lotus to build the Roadster's unique chassis. The Roadster shares roughly 7% of its components with the Lotus Elise including the windshield, airbags, some dashboard parts, and suspension components. The Roadster's single-speed gearbox was made in Detroit by BorgWarner. Brakes and airbags were made by Siemens in Germany, and some crash testing was conducted at Siemens as well. 30–40% of components were sourced from Taiwan.
For Roadsters bound for customers in North America, the glider was sent to Tesla's facility in Menlo Park, California for final assembly, and for Roadsters bound for customers in Europe or elsewhere outside of North America, the glider was sent to a facility at Wymondham near Hethel for final assembly. At these locations, Tesla employees installed the powertrain, which consisted of the battery pack, power electronics module, gearbox and motor.
Tesla ordered 2,500 gliders from Lotus, which ceased production in December 2011 when their contract expired. Tesla ended production of the Roadster in January 2012.
Timeline.
Subsequent to completion of the first production car, the company announced problems with transmission reliability. The development transmission, with first gear enabled to accelerate in 4 seconds, was reported to have a life expectancy as low as a few thousand miles. Tesla's first two transmission suppliers were unable to produce transmissions, in quantity, that could withstand the gear-shift requirements of the high-torque, high-rpm electric motor. In December 2007, Tesla announced plans to ship the initial Roadsters with the transmissions locked into second gear, providing acceleration in 5.7 seconds and allowing customers to swap out transmissions under warranty when the finalized transmission, power electronics module (PEM), and cooling system became available. The EPA range of the car was also restated downward from . The downward revision was attributed to an error in equipment calibration at the laboratory that conducted the original test.
Special final edition.
Tesla produced a special edition of 15 Final Edition Roadsters to close the production cycle of the electric car. The 15 special-edition cars were sold across the three sales regions, North America, Europe and Asia, with five units allocated to each. The Final Edition Roadster did not have any performance modifications, but sported atomic red paint, a pair of dark silver stripes on its hood and rear clamshell, and exclusive anthracite aluminum wheels.
Specifications.
Motor.
The Roadster is powered by a 3-phase, 4-pole, induction electric motor with a maximum output power of . Its maximum torque of is immediately available and remains constant from 0 to 6,000 rpm; nearly instantaneous torque is a characteristic of electric motors and offers one of the biggest performance differences from internal combustion engines. The motor is air-cooled and does not need a liquid cooling system.
The Sport model introduced during the Jan 2009 Detroit Auto Show includes a motor with a higher density, hand-wound stator that produces a maximum of . Both motors are designed for rotational speeds of up to 14,000 rpm, and the regular motor delivers a typical efficiency of 88% or 90%; 80% at peak power. It weighs less than .
Transmission.
Starting in September 2008 Tesla selected BorgWarner to manufacture gearboxes and began equipping all Roadsters with a single speed, fixed gear gearbox (8.2752:1 ratio) with an electrically actuated parking pawl mechanism and a mechanical lubrication pump.
The company previously worked with several companies, including XTrac and Magna International, to find the right automatic transmission, but a two-gear solution proved to be too challenging. This led to substantial delays in production. At the "Town Hall Meeting" with owners in December 2007, Tesla announced plans to ship the initial 2008 Roadsters with their interim Magna two-speed direct shift manual transmissions locked into second gear, limiting the performance of the car to less than what was originally stated ( in 5.7 seconds instead of the announced 4.0 seconds). Tesla also announced it would upgrade those transmissions under warranty when the final transmission became available. At the "Town Hall Meeting" with owners on January 30, 2008, Tesla Motors described the planned transmission upgrade as a single-speed gearbox with a drive ratio of 8.27:1 combined with improved electronics and motor cooling that retain the acceleration from in under 4 seconds and an improved motor limit of 14,000 rpm to retain the top speed.
Gear selector.
In the interior the gear selector is similar to a push-button automatic with buttons labeled P, R, N and D. Some earlier models have a gear lever similar to that in cars with manual transmission.
Performance.
The Roadster's acceleration time is 3.9 seconds for the Standard model and 3.7 seconds for the 2010 V2.5 Sport, which "Motor Trend" confirmed in the first independent, instrumented testing of the V2.5 Sport model. The magazine also recorded a time of 12.6 seconds at . Tesla said the top speed is electronically limited to . Tesla claims it has a weight of , a drag coefficient of Cd = 0.35–0.36 and a rolling resistance coefficient of Crr = 0.011.
Tesla began delivering the higher performance version of the Roadster in July 2009. The Roadster Sport has adjustable dampers and a new hand-wound motor, capable of in 3.7 seconds. Scotty Pollacheck, a high-performance driver for Killacycle, drove a 2010 Tesla Roadster Sport at the Wayland Invitational Drag Race in Portland, Oregon, in July 2009. He did a quarter-mile (~400 m) in dry conditions in 12.643 seconds, setting a new record in the National Electric Drag Racing Association among the SP/A3 class of vehicles. The combined range (specifying distance traveled between charges) measured in February 2008 for early production Roadsters was city, highway, and combined (city/highway). In August 2008, additional testing with the newer Powertrain 1.5 resulted in an EPA combined range of . The vehicle set a new distance record when it completed the Rallye Monte Carlo d'Energies Alternatives with left on the charge. A Roadster drove around the world (although flying as cargo over oceans) in 2012, and repeated it in 80 days with other electric cars in 2016.
Simon Hackett and Emilis Prelgauskas broke the distance record for an electric vehicle, driving from Alice Springs to Marla, South Australia, in Simon's Tesla Roadster. The car had about of range left when the drive was completed.
Battery system.
Tesla refers to the Roadster's battery pack as the Energy Storage System or ESS. The ESS contains 6,831 lithium ion cells arranged into 11 "sheets" connected in series; each sheet contains 9 "bricks" connected in series; each "brick" contains 69 cells connected in parallel (11S 9S 69P). The cells are of the 18650 form factor commonly found in laptop batteries. Sources disagree on the exact type of Li-Ion cells—GreenCar says lithium cobalt oxide (LiCo), while researchers at DTU/INESC Porto state lithium manganese oxide (LMO). LiCo has higher reaction energy during thermal runaway than LMO.
The pack is designed to prevent catastrophic cell failures from propagating to adjacent cells (thermal runaway), even when the cooling system is off. Coolant is pumped continuously through the ESS both when the car is running and when the car is turned off if the pack retains more than a 90% charge. The coolant pump draws 146 watts. The cooling and battery management system keeps the temperatures and voltages within specific limits.
A full recharge to 53 kWh requires about 3½ hours using the "High Power Wall Connector", which supplies 70-amp, 240-volt electricity.
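Putting the figures above together gives a rough sanity check; this back-of-the-envelope sketch ignores charger losses and charge taper, so the charging time it prints is only a lower bound consistent with the quoted "about 3½ hours".

```python
# Pack arithmetic from the quoted figures: 11 sheets x 9 bricks x 69 cells,
# ~53 kWh of energy, and a 240 V / 70 A High Power Wall Connector.
cells = 11 * 9 * 69
print(cells)                                   # 6831 cells

pack_wh = 53_000
print(round(pack_wh / cells, 1))               # ~7.8 Wh per 18650 cell

charger_kw = 240 * 70 / 1000                   # 16.8 kW, losses and taper ignored
print(round(pack_wh / 1000 / charger_kw, 2))   # ~3.15 h minimum charge time
```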
Tesla said in February 2009 that the ESS had an expected life span of seven years/, and began selling pre-purchase battery replacements for about one third of the battery's price at the time, with the replacement to be delivered after seven years. Tesla says the ESS retains 70% capacity after five years and of driving, assuming driven each year. A July 2013 study found that after , Roadster batteries still had 80%–85% capacity and that the only significant factor was mileage (not temperature).
Tesla announced plans to sell the battery system to "TH!NK" and possibly others through its Tesla Energy Group division. The TH!NK plans were put on hold by interim CEO Michael Marks in September 2007. TH!NK now obtains its lithium-ion batteries from Enerdel.
Recharging.
The Roadster uses a proprietary AC charging connector, although Tesla sells a mobile adapter that enables recharging with an SAE J1772 connector. The vehicle was not provided with any DC fast-charging ability and was not retrofitted later on when the Tesla Supercharger network was established. It can be recharged with AC using:
Charging times vary depending on the ESS's state-of-charge, the available voltage, and the available circuit breaker amp rating (current). In a best-case scenario using a 240 V charger on a 90 A circuit breaker, Tesla documents a recharging rate of of range for each hour charging; a complete recharge from empty would require just under four hours. The slowest charging rate using a 120 V outlet on a 15 A circuit breaker would add of range for each hour charging; a complete recharge from empty would require 48 hours.
Energy efficiency.
In June 2006, Tesla reported the Roadster's battery-to-wheel efficiency as 110 Wh/km (17.7 kWh/100 mi) on an unspecified driving cycle—either a constant-speed or SAE J1634 test—and stated a charging efficiency of 86% for an overall plug-to-wheel efficiency of 128 Wh/km (20.5 kWh/100 mi).
In March 2007, Tesla reported the Roadster's efficiency on the EPA highway cycle as "135 mpg [U.S.] equivalent, per the conversion rate used by the EPA" or 133 Wh/km (21.5 kWh/100 mi) battery-to-wheel and 155 Wh/km (24.9 kWh/100 mi) plug-to-wheel. The official U.S. window sticker of the 2009 Tesla Roadster showed an EPA rated energy consumption of 32 kWh/100 mi in city and 33 kWh/100 mi on the highway, equivalent to 105 mpg city and 102 mpg highway. The EPA rating for on board energy efficiency for electric vehicles before 2010 was expressed as kilowatt-hour per 100 miles (kWh/100 mi). Since November 2010, with the introduction of the Nissan Leaf and the Chevrolet Volt, EPA began using a new metric, miles per gallon gasoline equivalent (MPGe). The Roadster was never officially rated by the EPA in MPGe.
In August 2007, Tesla dynamometer testing of a validation prototype on the EPA combined cycle yielded a range of using 23.9 kWh/100 mi (149 Wh/km) battery-to-wheel and 33.6 kWh/100 mi (209 Wh/km) plug-to-wheel.
In February 2008, Tesla reported improved plug-to-wheel efficiency after testing a validation prototype car at an EPA-certified location. Those tests yielded a range of and a plug-to-wheel efficiency of 32.1 kWh/100 mi (199 Wh/km).
In August 2008, Tesla reported on testing with the new, single-speed gearbox and upgraded electronics of powertrain 1.5, which yielded an EPA range of and an EPA combined cycle, plug-to-wheel efficiency of 28 kWh/100 mi (174 Wh/km).
In 2007, the Roadster's battery-to-wheel motor efficiency was reported as 88% to 90% on average and 80% at peak power. For comparison, internal combustion engines have a tank-to-wheel efficiency of about 15%. Taking a more complete picture including the cost of energy drawn from its source, Tesla reports that their technology, assuming electricity generated from natural gas-burning power plants, has a high well-to-wheel efficiency of 1.14 km per megajoule, compared to 0.202 km/MJ for gasoline-powered sports cars, 0.478 km/MJ for gasoline-powered commuter cars, 0.556 km/MJ for hybrid cars, and 0.348 km/MJ for hydrogen fuel cell vehicles.
Petroleum-equivalent efficiency.
As the Roadster does not use gasoline, petroleum efficiency (MPG, L/100 km) cannot be measured directly but instead is calculated using one of several equivalent methods:
A number comparable to the typical Monroney sticker's "pump-to-wheel" fuel efficiency can be calculated based on DOE regulations, using the DOE's energy content for a U.S. gallon of gasoline of 33,705 Wh/gal (also called the Lower Heating Value (LHV) of gasoline):
formula_0
For CAFE regulatory purposes, the DOE's full petroleum-equivalency equation combines the primary energy efficiencies of the US electric grid and the well-to-pump path with a "fuel content factor" that quantifies the value of conservation, scarcity of fuels, and energy security in the US. This combination yields a factor of 82,049 Wh/gal in the above equation and a regulatory fuel efficiency of 293 mpg-e (CAFE).
Recharging with electricity from the average US grid, the "fuel content factor" of 1/0.15 is removed and the factor changes to 12,307 Wh/gal, so the above equation yields a full-cycle energy equivalency of 44.0 mpg-e. For full-cycle comparisons, the sticker or "pump-to-wheel" value from a gasoline-fueled vehicle must be multiplied by the fuel's "well-to-pump" efficiency; the DOE regulation specifies a "well-to-pump" efficiency of 83% for gasoline. The Prius's sticker , for example, converts to a full-cycle energy equivalent of 38.2 mpg.
Recharging with electricity generated by newer, 58%-efficient CCGT power plants changes the factor to 21,763 Wh/gal in the above equation and yields a fuel efficiency of 77.7 mpg-e.
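The conversions above reduce to a single division. The sketch below reproduces them using the Roadster's EPA combined plug-to-wheel consumption of 28 kWh/100 mi (280 Wh/mi) quoted earlier, together with the Wh-per-gallon factors from this section; it is an arithmetic illustration, not an official calculation.

```python
WH_PER_MILE = 280.0   # 28 kWh/100 mi plug-to-wheel (EPA combined, powertrain 1.5)

factors_wh_per_gallon = {
    "sticker equivalent (gasoline LHV)":     33_705,
    "CAFE regulatory (DOE full equation)":   82_049,
    "full cycle, average US grid":           12_307,
    "full cycle, 58%-efficient CCGT plant":  21_763,
}

for label, wh_per_gallon in factors_wh_per_gallon.items():
    print(f"{label}: {wh_per_gallon / WH_PER_MILE:.1f} mpg-e")
# prints roughly 120.4, 293.0, 44.0 and 77.7 mpg-e respectively
```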
Recharging with non-fossil fuel electricity sources such as hydroelectric, solar power, wind or nuclear, the petroleum equivalent efficiency can be even higher as fossil fuel is not directly used in refueling.
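The conversions above can be reproduced with a short Python sketch. The Wh-per-gallon factors, the 135 Wh/km plug-to-wheel figure and the 77.6% charging efficiency are taken from the text; the 1.609 km/mi constant and all names in the snippet are assumptions made only for this illustration.

```python
# Minimal sketch of the petroleum-equivalent conversion described above.
WH_PER_KM = 135          # Roadster plug-to-wheel consumption (from the text)
KM_PER_MI = 1.609        # assumed km-per-mile constant
CHARGING_EFF = 0.776     # charging efficiency used in the sticker-style equation

def mpg_gasoline_equivalent(wh_per_gal: float) -> float:
    """Petroleum-equivalent mpg for a given energy-content factor (Wh per gallon)."""
    wh_per_mile = WH_PER_KM * KM_PER_MI
    return wh_per_gal / wh_per_mile * CHARGING_EFF

factors = {
    "DOE lower heating value (sticker-style)": 33_705,
    "CAFE petroleum-equivalency factor": 82_049,
    "average US grid, no fuel-content factor": 12_307,
    "58%-efficient CCGT generation": 21_763,
}
for label, wh_per_gal in factors.items():
    print(f"{label}: {mpg_gasoline_equivalent(wh_per_gal):.0f} mpg-ge")
# Prints approximately 120, 293, 44 and 78 mpg-ge respectively.
```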
Service.
Whereas vehicles with internal combustion engines require more frequent service for oil changes and routine maintenance on engine components and other related systems, Tesla's website recommends the owner bring the vehicle in for service "once a year or every 12,000 miles". For other concerns with vehicles, Tesla created a "mobile service unit" that dispatches company-trained technicians to customers' homes or offices in case the owner is experiencing problems. Tesla charges the customer according to the distance the service unit needs to travel: one US dollar per mile roundtrip with a 100-dollar minimum. Technicians drive company vans equipped with numerous tools and testing equipment to do "in the field" repairs, enhancements and software upgrades. Tesla debuted this "house call" approach in the spring of 2009, when the company announced a recall due to a manufacturing problem in the Lotus assembly plant, which also affected the Lotus Elise and other models from the British sports car maker.
The first Tesla service center, in Los Angeles, California, was opened on Santa Monica Boulevard on May 1, 2008. Tesla publicly opened their second showroom and service area in Menlo Park, California on July 22, 2008. The Menlo Park location is also the final assembly area for Tesla Roadsters. Tesla also operates service centers in New York City, Miami, Chicago, and Seattle.
In 2007, Tesla announced plans to build additional service centers over the following few years to support sales of its next vehicle, the Model S sports sedan. This included an additional 15 service centers in United States major metropolitan locations. Possible locations for sales and service locations in Europe were announced in a letter to customers in May 2008.
Recalls.
As of 2017, Tesla had issued two product safety recalls for the Roadster.
In May 2009, Tesla issued a recall for 345 Roadsters manufactured before April 22, 2009. Tesla sent technicians to customers' homes to tighten the rear, inner hub flange bolts. Using wording from the National Highway Traffic Safety Administration, Tesla told customers that without this adjustment, the driver could lose control of the car. The problem originated at the Lotus assembly line, where the Roadster glider was built. Lotus also recalled some Elise and Exige vehicles for the same reason.
On October 1, 2010, Tesla issued a second product safety recall in the US affecting 439 Roadsters. The recall involved the 12 V low-voltage auxiliary cable from a redundant back-up system. The recall followed an incident where the low voltage auxiliary cable in a vehicle chafed against the edge of a carbon fiber panel, causing a short, smoke and a possible fire behind the right front headlamp. This issue was limited to the 12 V low-voltage auxiliary cable and did not involve the main battery pack or main power system.
Reviews.
Tesla Roadster reviews can be grouped in two main categories: older reviews of "validation prototypes" (2006–2008), before Tesla began serial production and customer deliveries, and reviews on cars in serial production (2008–2010).
The global online auto review site Autoguide.com tested Tesla's fourth-generation car in October 2010. Autoguide editor Derek Kreindler said "The Tesla Roadster 2.5 S is a massively impressive vehicle, more spacecraft than sports car. Theories like global warming, peak oil and rising oil prices should no longer bring heart palpitations to car fans. The Tesla shows just how good zero-emissions 'green' technology can be. Quite frankly, getting into a normal car at the end of the test drive was a major letdown. The whirr of the engine, the shove in the backside and the little roadster that seems to pivot around you is replaced by a grunting, belching, feedback-free driving experience". He added, however, that "for a $100,000 car, it could use some work", complaining of some deliberately cheap-feeling workmanship.
In the March 2010 print edition of British enthusiast magazine "EVO" (p. 120), editor Richard Meaden was the first to review the all-new right-hand-drive version of the Roadster. He said the car had "serious, instantaneous muscle". "With so much torque from literally no revs the acceleration punch is wholly alien. Away from traffic lights you'd murder anything, be it a 911 Turbo, GT-R or 599, simply because while they have to mess about with balancing revs and clutch, or fiddle with launch controls and invalid warranties, all you have to do is floor the throttle and wave goodbye".
In December 2009, "The Wall Street Journal" editor Joseph White conducted an extended test-drive and determined that "you can have enormous fun within the legal speed limit as you whoosh around unsuspecting Camry drivers, zapping from 40 to 60 miles per hour in two seconds while the startled victims eat your electric dust". White praised the car's environmental efficiency but said consumer demand reflected not the environmental attributes of the car but its performance. "The Tesla turns the frugal environmentalist aesthetic on its head. Sure, it doesn't burn petroleum, and if plugged into a wind turbine or a nuclear plant, it would be a very low-carbon machine. But anyone who buys one will get the most satisfaction from smoking someone's doors off. The Tesla's message is that 'green' technology can appeal to the id, not just the superego".
In December 2009, "Motor Trend" was the first to independently confirm the Roadster Sport's reported time of 3.7 seconds. ("Motor Trend" recorded of 3.70 seconds; it recorded a quarter-mile test at 12.6 sec at .) Engineering editor Kim Reynolds called the acceleration "breathtaking" and said the car confirms "Tesla as an actual car company. ...Tesla is the first maker to crack the EV legitimacy barrier in a century".
In November 2009, "Automobile Magazine West Coast" editor Jason Cammisa spent a week driving a production Tesla Roadster. Cammisa was immediately impressed with the acceleration, saying the car "explodes off the line, pulling like a small jet plane. ... It's like driving a Lamborghini with a big V-12 revved over 6000 rpm at all times, waiting to pounce—without the noise, vibration, or misdemeanor arrest for disturbing the peace". He also took the car to Infineon Raceway in Sonoma, California, and praised the car for its robustness, saying the Roadster:
wins the Coolest Car I've Ever Driven award. Why? Despite the flat-out sprints, the drag racing, the donuts, the top-speed runs, and dicing through traffic like there's a jet pack strapped to the trunk, Pacific Gas and Electric—which generated power for the Tesla—released into the atmosphere the same amount of carbon dioxide as would a gasoline-powered car getting 99 mpg. And the Roadster didn't break. It didn't smoke, lock up, freeze, or experience flux-capacitor failure. Over the past ten decades, no company has been able to reinvent the car—not General Motors with the EV1, not Toyota with the Prius. And now, a bunch of dudes from Silicon Valley have created an electric car that really works—as both an environmental fix and a speed fix
In 2009 the Tesla Roadster was one of the Scandinavian Sports Car of the Year participants. In a comparison made by Nordic car magazines "Tekniikan Maailma" (Finland), "Teknikens Värld" (Sweden) and "Bil Magasinet" (Denmark), critics praised the car's torque and its track-car construction, but also highlighted negative aspects such as short battery life; they were unable to complete a full track lap in dry track conditions.
In May 2009, "Car and Driver" technical editor Aaron Robinson wrote a review based on the first extended test-drive of a production Tesla Roadster. Robinson had the car for nearly a week at his home. He complained of "design anomalies, daily annoyances, absurd ergonomics, and ridiculous economics" and stated he never got to see if the car could go 240 miles on a single charge because the torturous seating forced him to stop driving the car. He also complained of Tesla increasing the car prices on those who had already made deposits and charging extra for previously free necessary components.
In February 2009, automotive critic Dan Neil of the "Los Angeles Times" called the production Tesla Roadster "a superb piece of machinery: stiff, well sorted, highly focused, dead-sexy and eerily quick". Neil said he had the car for 24 hours but "caned it like the Taliban caned Gillette salesmen and it never even blinked".
In February 2009, "Road & Track" tested another production vehicle and conducted the first independently verified metered testing of the Roadster. Engineering editor Dennis Simanitis said the testing confirmed what he called "extravagant claims", that the Roadster had a 4.0 s acceleration and a range. They said the Roadster felt like "an over-ballasted Lotus Elise", but the weight was well-distributed, so the car remained responsive. "Fit and finish of our Tesla were exemplary", which "Road & Track" thought fit the target market. Overall, they considered it a "delight" to drive. Testing a pre-production car in early 2008, "Road & Track" said "The Tesla feels composed and competent at speed with great turn-in and transitioning response", though they recommended against it as a "primary grocery-getter".
In January 2009, automotive critic Warren Brown of "The Washington Post" called the production Roadster "a head-turner, jaw-dropper. It is sexy as all get-out". He described the feeling behind the wheel as, "Wheeeeeee! Drive a Tesla, even if you have to fly to Tesla's Menlo Park, Calif., headquarters, to get your hands on one for a day. ... If this is the future of the automobile, I want it".
In a review of a Roadster prototype before the cars were in serial production, "Motor Trend" gave a generally favorable review in March 2008, stating that, it was "undeniably, unbelievably efficient" and would be "profoundly humbling to just about any rumbling Ferrari or Porsche that makes the mistake of pulling up next to a silent, Tesla Roadster at a stoplight"; they nonetheless detected a "nasty drive-train buck" during the test drive of an early Roadster with the older, two-speed transmission.
In a July 8, 2007, review of a prototype Roadster, Jay Leno wrote, "If you like sports cars and you want to be green, this is the only way to go. The Tesla is a car that you can live with, drive and enjoy as a sports car. I had a brief drive in the car and it was quite impressive. This is an electric car that is fun to drive".
In a November 27, 2006, review of a prototype Roadster in "Slate", Paul Boutin wrote, "A week ago, I went for a spin in the fastest, most fun car I've ever ridden in—and that includes the Aston Martin I tried to buy once. I was so excited, in fact, that I decided to take a few days to calm down before writing about it. Well, my waiting period is over, I'm thinking rationally, and I'm still unbelievably stoked about the Tesla".
"Top Gear" controversy.
In the third quarter of 2008, "Top Gear"'s Jeremy Clarkson reviewed two production Roadsters with the v1.5 transmission and described the driving experience with the exclamations "God Almighty! Wave goodbye to the world of dial-up, and say hello to the world of broadband motoring!" and "This car is biblically quick!" when comparing the acceleration versus the car the Roadster was based on, a Lotus Elise. Clarkson also noted, however, that the handling of the car was not as sharp as that of the Elise: "through the corners things are less rosy".
The segment also claimed that the car's batteries would run flat after heavy use on a track and showed the car being pushed off the track.
A Tesla spokesperson responded with statements in blogs and to mainstream news organizations that the cars provided to "Top Gear" never had less than 20% charge and never experienced brake failure.
In addition, neither car provided to "Top Gear" needed to be pushed off the track at any point.
Clarkson also showed a wind turbine with stationary rotor blades and complained that it would take countless hours to refuel the car using such a source of electricity, although the car can be charged from a 240 V 70 A outlet in as little as 3.5 hours.
After numerous blogs and several large news organizations began following the controversy, the BBC issued a statement saying "the tested Tesla was filmed being pushed into the shed in order to show what would happen if the Roadster had run out of charge. "Top Gear" stands by the findings in this film and is content that it offers a fair representation of the Tesla's performance on the day it was tested", without addressing the other alleged misrepresentations that Tesla highlighted to the media.
After several weeks of increasing pressure and inquiries from the BBC, Clarkson wrote a blog entry for "The Times", acknowledging that "Inevitably, the film we had shot was a bit of a mess. There was a handful of shots of a silver car. Some of a grey car". "But as a device for moving you and your things around, it is about as much use as a bag of muddy spinach". In the months that followed Clarkson's acknowledgment, the original episode—including the misstatements—reran on BBC America and elsewhere without any editing.
On March 29, 2011, Tesla sued the programme for libel and malicious falsehood, while simultaneously launching the website TeslaVsTopGear.com, on which it set out its position on the dispute.
In a blogpost, producer Andy Wilman has referred to Tesla's allegations as a "crusade" and contested the truth value of Tesla's statements.
On October 19, 2011, the High Court in London rejected Tesla's libel claim.
Tesla appealed the High Court's decision to the Court of Appeal, where a three-judge panel of Lords Justices upheld the lower court's decision and ordered Tesla to pay the BBC's legal costs of £100,000.
Sales.
Tesla delivered approximately 2,450 Roadsters worldwide between February 2008 and December 2012. Featuring new options and enhanced features, the 2012 Tesla Roadster was sold in limited numbers only in mainland Europe, Asia and Australia, and as of July 2012, fewer than 140 units were available for sale in Europe and Asia before the remaining inventory would be sold out. Tesla's US exemption for not having special two-stage passenger airbags expired for cars made after the end of 2011, so the last Roadsters were not sold in the American market for regulatory reasons. The U.S. was the leading market with about 1,800 Roadsters sold. There were fewer than 50 right-hand-drive models of the Tesla Roadster produced and hand built in the UK.
United States.
The Roadster had a three-year warranty. Tesla also offered an extended powertrain warranty and a battery replacement warranty.
In July 2009, Tesla announced that US consumers could finance the Roadster through Bank of America. Financing was available for up to 75% of the total vehicle purchase price.
Tesla sold Roadsters directly to customers online, over the phone, and through 13 company-owned showrooms in North America and Europe; it did not operate through franchise dealerships. The company said that it took its retail cues from Apple, Starbucks and other non-automotive retailers.
Outside the United States.
The company had been shipping cars to European customers since mid-2009. Tesla sold out of its EU special-edition vehicle, which had a 2010 model-year production run of 250 cars. A total of 575 units had been sold in Europe through October 2012.
Tesla's first overseas showroom opened in London in 2009, with right-hand-drive models promised for early 2010. Showrooms in Munich and Monaco were also added in 2009, followed by Zurich and Copenhagen in 2010 and Milan in 2011. Reservations for the 2010 Roadster were available for a €3,000 refundable reservation fee.
From 2009 to 2014, Hansjoerg von Gemmingen of Karlsruhe, Germany, drove his Tesla Roadster far enough to set the mileage world record for an all-electric vehicle, a record he extended through 2017. He also covered a substantial distance in a Tesla Model S and voiced his plan to become the first person to travel a million kilometres in an electric vehicle.
Kevin Yu, the director of Tesla Motors Asia Pacific, said Roadsters in Japan had additional yearly taxes for exceeding the width limit of normal sized cars.
Pricing complaints.
In 2009, Roadster reservation holders who had already placed deposits up to US$50,000 to lock in their orders were informed that their orders had been unlocked and that they had to re-option their ordered vehicles on the threat of losing their spot on the orders list. Tesla then raised the prices of several options, and a new Tesla Roadster with the same set of features that had previously been standard became US$6,700 more expensive than before. For example, the high performance charger that was previously claimed to be standard on all vehicles was changed to be an optional feature costing US$3,000, and the previously claimed standard forged alloy wheels became a US$2,300 upgrade. One person who pre-ordered a Tesla Roadster complained:
<templatestyles src="Template:Blockquote/styles.css" />I am [pre-ordered owner] number 395. I am not a rich person dabbling in a plaything. I thought I was actually doing some good by supporting a company that was moving us to a more sustainable future. I put $50,000 of my own money down on this car in May of 2007. I withstood the delays. I held in there when it almost seemed the company was going bankrupt. Now, after locking in my options, they pull this on me.
Awards.
The world distance record for a production electric car on a single charge was set by a Roadster on October 27, 2009, during the Global Green Challenge in outback Australia. In March 2010, a Tesla Roadster became the first electric vehicle to win the Monte Carlo Alternative Energy Rally and the first to win any Fédération Internationale de l'Automobile-sanctioned championship when a Roadster driven by former Formula One driver Érik Comas beat 96 competitors for range, efficiency and performance in the three-day challenge.
Space launch.
In December 2017, Elon Musk announced that his personal Tesla Roadster, sn:686, would be launched into space, serving as dummy payload on the maiden flight of the SpaceX Falcon Heavy rocket. The launch on February 6, 2018, was successful; the vehicle was placed into a heliocentric orbit that took it beyond Mars's orbital path around the Sun.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{33705\\,\\frac{\\mathrm{Wh}}{\\mathrm{gal_{ge}}}}\n {135\\,\\frac{\\mathrm{Wh}}{\\mathrm{km}} \\times \\frac{1.6\\, \\mathrm{km}}{\\mathrm{mi}}}\n \\times 77.6 \\% {\\mathrm{_{charging\\ eff.}}}= 120 \\,\\mathrm{mpg_{ge}} = 1.95 \\frac{\\mathrm{L_{ge}}}{100\\, \\mathrm{km}}"
}
] |
https://en.wikipedia.org/wiki?curid=5760445
|
576108
|
Parametric equation
|
Representation of a curve by a function of a parameter
In mathematics, a parametric equation defines a group of quantities as functions of one or more independent variables called parameters. Parametric equations are commonly used to express the coordinates of the points that make up a geometric object such as a curve or surface, called a parametric curve and parametric surface, respectively. In such cases, the equations are collectively called a parametric representation, or parametric system, or parameterization (alternatively spelled as parametrisation) of the object.
For example, the equations
formula_0
form a parametric representation of the unit circle, where t is the parameter: A point ("x", "y") is on the unit circle if and only if there is a value of t such that these two equations generate that point. Sometimes the parametric equations for the individual scalar output variables are combined into a single parametric equation in vectors:
formula_1
Parametric representations are generally nonunique (see the "Examples in two dimensions" section below), so the same quantities may be expressed by a number of different parameterizations.
In addition to curves and surfaces, parametric equations can describe manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is "one" and "one" parameter is used, for surfaces dimension "two" and "two" parameters, etc.).
Parametric equations are commonly used in kinematics, where the trajectory of an object is represented by equations depending on time as the parameter. Because of this application, a single parameter is often labeled t; however, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience. Parameterizations are non-unique; more than one set of parametric equations can specify the same curve.
Implicitization.
Converting a set of parametric equations to a single implicit equation involves eliminating the variable t from the simultaneous equations formula_2 This process is called <dfn >implicitization</dfn>. If one of these equations can be solved for t, the expression obtained can be substituted into the other equation to obtain an equation involving x and y only: Solving formula_3 to obtain formula_4 and using this in formula_5 gives the explicit equation formula_6 while more complicated cases will give an implicit equation of the form formula_7
If the parametrization is given by rational functions
formula_8
where p, q, and r are set-wise coprime polynomials, a resultant computation allows one to implicitize. More precisely, the implicit equation is the resultant with respect to t of "xr"("t") – "p"("t") and "yr"("t") – "q"("t").
In higher dimensions (either more than two coordinates or more than one parameter), the implicitization of rational parametric equations may be done with Gröbner basis computation.
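To make the resultant computation concrete, here is a small SymPy sketch (the use of SymPy, and the variable names, are assumptions made for the example) that implicitizes the rational unit-circle parameterization given later in this article:

```python
# Resultant-based implicitization of x = (1 - t^2)/(1 + t^2), y = 2t/(1 + t^2).
from sympy import symbols, resultant, factor

t, x, y = symbols('t x y')

p = 1 - t**2   # numerator of x(t)
q = 2*t        # numerator of y(t)
r = 1 + t**2   # common denominator r(t)

# Implicit equation: resultant of x*r(t) - p(t) and y*r(t) - q(t) with respect to t
implicit = resultant(x*r - p, y*r - q, t)
print(factor(implicit))   # 4*(x**2 + y**2 - 1), i.e. the unit circle up to a constant factor
```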
To take the example of the circle of radius a, the parametric equations
formula_9
can be implicitized in terms of x and y by way of the Pythagorean trigonometric identity. With
formula_10
and
formula_11
we get
formula_12
and thus
formula_13
which is the standard equation of a circle centered at the origin.
Parametric plane curves.
Parabola.
The simplest equation for a parabola,
formula_14
can be (trivially) parameterized by using a free parameter t, and setting
formula_15
Explicit equations.
More generally, any curve given by an explicit equation
formula_16
can be (trivially) parameterized by using a free parameter t, and setting
formula_17
Circle.
A more sophisticated example is the following. Consider the unit circle which is described by the ordinary (Cartesian) equation
formula_18
This equation can be parameterized as follows:
formula_19
With the Cartesian equation it is easier to check whether a point lies on the circle or not. With the parametric version it is easier to obtain points on a plot.
In some contexts, parametric equations involving only rational functions (that is fractions of two polynomials) are preferred, if they exist. In the case of the circle, such a "<dfn >rational parameterization</dfn>" is
formula_20
With this pair of parametric equations, the point (−1, 0) is not represented by a real value of t, but by the limit of x and y when t tends to infinity.
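A quick numerical check of this limiting behaviour can be written in a few lines of Python; the sample values of t below are arbitrary choices for the sketch.

```python
# Check that the rational parameterization traces the unit circle and
# only approaches (-1, 0) as t grows without bound.
def rational_circle(t: float) -> tuple[float, float]:
    return (1 - t**2) / (1 + t**2), 2*t / (1 + t**2)

for t in (0.0, 1.0, 5.0, 1e3, 1e6):
    x, y = rational_circle(t)
    print(f"t = {t}: x = {x:+.6f}, y = {y:+.6f}, x^2 + y^2 = {x*x + y*y:.12f}")
# Every finite t gives a point with x^2 + y^2 = 1; as t grows, (x, y) approaches (-1, 0).
```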
Ellipse.
An ellipse in canonical position (center at origin, major axis along the x-axis) with semi-axes a and b can be represented parametrically as
formula_21
An ellipse in general position can be expressed as
formula_22
as the parameter t varies from 0 to 2"π". Here ("X"c , "Y"c) is the center of the ellipse, and φ is the angle between the x-axis and the major axis of the ellipse.
Both parameterizations may be made rational by using the tangent half-angle formula and setting formula_23
Lissajous curve.
A Lissajous curve is similar to an ellipse, but the x and y sinusoids are not in phase. In canonical position, a Lissajous curve is given by
formula_24
where kx and ky are constants describing the number of lobes of the figure.
Hyperbola.
An east-west opening hyperbola can be represented parametrically by
formula_25
or, rationally
formula_26
A north-south opening hyperbola can be represented parametrically as
formula_27
or, rationally
formula_28
In all these formulae ("h" , "k") are the center coordinates of the hyperbola, a is the length of the semi-major axis, and b is the length of the semi-minor axis. Note that in the rational forms of these formulae, the points ("−a" , 0) and (0 , "−a"), respectively, are not represented by a real value of t, but are the limit of x and y as t tends to infinity.
Hypotrochoid.
A hypotrochoid is a curve traced by a point attached to a circle of radius r rolling around the inside of a fixed circle of radius R, where the point is at a distance d from the center of the interior circle.
The parametric equations for the hypotrochoids are:
formula_29
Some examples:
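For instance, the following Python sketch samples points of one hypotrochoid from the equations above; the particular values R = 5, r = 3 and d = 5 are illustrative only.

```python
# Sample points on a hypotrochoid from the parametric equations above.
import math

def hypotrochoid(theta: float, R: float = 5.0, r: float = 3.0, d: float = 5.0):
    x = (R - r) * math.cos(theta) + d * math.cos((R - r) / r * theta)
    y = (R - r) * math.sin(theta) - d * math.sin((R - r) / r * theta)
    return x, y

# Sample three full turns of theta; this particular curve closes after 3 turns.
points = [hypotrochoid(6 * math.pi * k / 1800) for k in range(1800)]
print(points[0])   # (7.0, 0.0): theta = 0 gives the point (R - r + d, 0)
```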
Parametric space curves.
Helix.
Parametric equations are convenient for describing curves in higher-dimensional spaces. For example:
formula_30
describes a three-dimensional curve, the helix, with a radius of a and rising by 2"πb" units per turn. Projected onto the plane "z" = 0, the equations are identical to those for a circle of radius a.
Such expressions as the one above are commonly written as
formula_31
where r is a three-dimensional vector.
Parametric surfaces.
A torus with major radius R and minor radius r may be defined parametrically as
formula_32
where the two parameters t and u both vary between 0 and 2"π".
As u varies from 0 to 2"π" the point on the surface moves about a short circle passing through the hole in the torus. As t varies from 0 to 2"π" the point on the surface moves about a long circle around the hole in the torus.
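As a check, the sketch below evaluates this parameterization on a small grid of (t, u) values and verifies that each point satisfies the implicit torus equation (√(x² + y²) − R)² + z² = r²; the radii R = 3 and r = 1 are arbitrary assumptions for the example.

```python
# Evaluate the torus parameterization on a small grid and verify the implicit equation.
import math

def torus_point(t: float, u: float, R: float = 3.0, r: float = 1.0):
    x = math.cos(t) * (R + r * math.cos(u))
    y = math.sin(t) * (R + r * math.cos(u))
    z = r * math.sin(u)
    return x, y, z

N = 8
for i in range(N):
    for j in range(N):
        x, y, z = torus_point(2 * math.pi * i / N, 2 * math.pi * j / N)
        assert abs((math.hypot(x, y) - 3.0) ** 2 + z ** 2 - 1.0) < 1e-9
print("all sampled points lie on the torus")
```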
Straight line.
The parametric equation of the line through the point formula_33 and parallel to the vector formula_34 is
formula_35
Applications.
Kinematics.
In kinematics, objects' paths through space are commonly described as parametric curves, with each spatial coordinate depending explicitly on an independent parameter (usually time). Used in this way, the set of parametric equations for the object's coordinates collectively constitute a vector-valued function for position. Such parametric curves can then be integrated and differentiated termwise. Thus, if a particle's position is described parametrically as
formula_36
then its velocity can be found as
formula_37
and its acceleration as
formula_38
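For example, the following SymPy sketch differentiates a parametric position termwise to obtain velocity and acceleration; the helix-like path used here is purely illustrative.

```python
# Termwise differentiation of a parametric position r(t) with SymPy.
import sympy as sp

t = sp.symbols('t')
a, b = sp.symbols('a b', positive=True)

r = sp.Matrix([a*sp.cos(t), a*sp.sin(t), b*t])   # position r(t)
v = r.diff(t)                                    # velocity  r'(t)
acc = v.diff(t)                                  # acceleration r''(t)

print(v.T)    # Matrix([[-a*sin(t), a*cos(t), b]])
print(acc.T)  # Matrix([[-a*cos(t), -a*sin(t), 0]])
```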
Computer-aided design.
Another important use of parametric equations is in the field of computer-aided design (CAD). For example, consider the following three representations, all of which are commonly used to describe planar curves.
Each representation has advantages and drawbacks for CAD applications.
The explicit representation may be very complicated, or even may not exist. Moreover, it does not behave well under geometric transformations, and in particular under rotations. On the other hand, as a parametric equation and an implicit equation may easily be deduced from an explicit representation, when a simple explicit representation exists, it has the advantages of both other representations.
Implicit representations may make it difficult to generate points on the curve, and even to decide whether there are real points. On the other hand, they are well suited for deciding whether a given point is on a curve, or whether it is inside or outside of a closed curve.
Such decisions may be difficult with a parametric representation, but parametric representations are best suited for generating points on a curve, and for plotting it.
Integer geometry.
Numerous problems in integer geometry can be solved using parametric equations. A classical such solution is Euclid's parametrization of right triangles such that the lengths of their sides "a", "b" and their hypotenuse "c" are coprime integers. As "a" and "b" cannot both be even (otherwise "a", "b" and "c" would not be coprime) and cannot both be odd (otherwise the square of "c" would be congruent to 2 modulo 4, which is impossible for a square), exactly one of them is even, and one may exchange them to have "a" even; the parameterization is then
formula_39
where the parameters m and n are positive coprime integers that are not both odd.
By multiplying "a", "b" and c by an arbitrary positive integer, one gets a parametrization of all right triangles whose three sides have integer lengths.
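A short Python sketch can enumerate primitive triples from this parameterization; the bound on m and the helper name are assumptions made for the example.

```python
# Enumerate primitive Pythagorean triples via Euclid's parameterization above.
from math import gcd

def primitive_triples(limit: int):
    for m in range(2, limit):
        for n in range(1, m):
            if gcd(m, n) == 1 and (m - n) % 2 == 1:   # coprime and not both odd
                yield 2*m*n, m*m - n*n, m*m + n*n      # (a, b, c)

for a, b, c in primitive_triples(5):
    assert a*a + b*b == c*c
    print(a, b, c)   # (4, 3, 5), (12, 5, 13), (8, 15, 17), (24, 7, 25)
```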
Underdetermined linear systems.
A system of m linear equations in n unknowns is underdetermined if it has more than one solution. This occurs when the matrix of the system and its augmented matrix have the same rank r and "r" < "n". In this case, one can select "n" − "r" unknowns as parameters and represent all solutions as a parametric equation where all unknowns are expressed as linear combinations of the selected ones. That is, if the unknowns are formula_40 one can reorder them for expressing the solutions as
formula_41
Such a parametric equation is called a <dfn >parametric form</dfn> of the solution of the system.
The standard method for computing a parametric form of the solution is to use Gaussian elimination to compute a reduced row echelon form of the augmented matrix. Then the unknowns that can be used as parameters are the ones that correspond to columns not containing any leading entry (that is, the leftmost nonzero entry in a row of the matrix), and the parametric form can be straightforwardly deduced.
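As an illustration, the following SymPy sketch computes such a parametric form for a small underdetermined system; the 2-equation, 3-unknown system is invented for the example.

```python
# Parametric form of an underdetermined linear system from the RREF of its augmented matrix.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
A = sp.Matrix([[1, 2, -1],
               [2, 4,  1]])
b = sp.Matrix([3, 9])

reduced, pivots = A.row_join(b).rref()
print(reduced, pivots)                    # leading entries in columns 0 and 2
print(sp.linsolve((A, b), x1, x2, x3))    # {(4 - 2*x2, x2, 1)}: x2 is the free parameter
```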
|
[
{
"math_id": 0,
"text": "\\begin{align}\n x &= \\cos t \\\\\n y &= \\sin t\n\\end{align}"
},
{
"math_id": 1,
"text": "(x, y)=(\\cos t, \\sin t)."
},
{
"math_id": 2,
"text": "x=f(t),\\ y=g(t)."
},
{
"math_id": 3,
"text": "y=g(t)"
},
{
"math_id": 4,
"text": "t=g^{-1}(y)"
},
{
"math_id": 5,
"text": "x=f(t)"
},
{
"math_id": 6,
"text": " x=f(g^{-1}(y)),"
},
{
"math_id": 7,
"text": "h(x,y)=0."
},
{
"math_id": 8,
"text": "x=\\frac{p(t)}{r(t)},\\qquad y=\\frac{q(t)}{r(t)},"
},
{
"math_id": 9,
"text": "\\begin{align}\n x &= a \\cos(t) \\\\\n y &= a \\sin(t)\n\\end{align}"
},
{
"math_id": 10,
"text": "\\begin{align}\n\\frac{x}{a} &= \\cos(t) \\\\\n\\frac{y}{a} &= \\sin(t) \\\\\n\\end{align}"
},
{
"math_id": 11,
"text": "\\cos(t)^2 + \\sin(t)^2 = 1,"
},
{
"math_id": 12,
"text": "\\left(\\frac{x}{a}\\right)^2 + \\left(\\frac{y}{a}\\right)^2 = 1,"
},
{
"math_id": 13,
"text": "x^2+y^2=a^2,"
},
{
"math_id": 14,
"text": "y = x^2"
},
{
"math_id": 15,
"text": "x = t, y = t^2 \\quad \\mathrm{for} -\\infty < t < \\infty."
},
{
"math_id": 16,
"text": "y = f(x)"
},
{
"math_id": 17,
"text": "x = t, y = f(t) \\quad \\mathrm{for} -\\infty < t < \\infty."
},
{
"math_id": 18,
"text": " x^2 + y^2 = 1."
},
{
"math_id": 19,
"text": "(x,y)=(\\cos(t),\\; \\sin(t))\\quad\\mathrm{for}\\ 0\\leq t < 2\\pi."
},
{
"math_id": 20,
"text": "\\begin{align}\n x &= \\frac{1 - t^2}{1 + t^2} \\\\\n y &= \\frac{2t}{1 + t^2}\\,.\n\\end{align}"
},
{
"math_id": 21,
"text": "\\begin{align}\n x &= a\\,\\cos t \\\\\n y &= b\\,\\sin t\\,.\n\\end{align}"
},
{
"math_id": 22,
"text": "\\begin{alignat}{4}\n x ={}&& X_\\mathrm{c} &+ a\\,\\cos t\\,\\cos \\varphi {}&&- b\\,\\sin t\\,\\sin\\varphi \\\\\n y ={}&& Y_\\mathrm{c} &+ a\\,\\cos t\\,\\sin \\varphi {}&&+ b\\,\\sin t\\,\\cos\\varphi\n\\end{alignat}"
},
{
"math_id": 23,
"text": "\\tan\\frac{t}{2} = u\\,."
},
{
"math_id": 24,
"text": "\\begin{align}\n x &= a\\,\\cos(k_xt) \\\\\n y &= b\\,\\sin(k_yt)\n\\end{align}"
},
{
"math_id": 25,
"text": "\\begin{align}\n x &= a\\sec t + h \\\\\n y &= b\\tan t + k\\,,\n\\end{align}"
},
{
"math_id": 26,
"text": "\\begin{align}\n x &= a\\frac{1 + t^2}{1 - t^2} + h \\\\\n y &= b\\frac{2t}{1 - t^2} + k\\,.\n\\end{align}"
},
{
"math_id": 27,
"text": "\\begin{align}\n x &= b\\tan t + h \\\\\n y &= a\\sec t + k\\,,\n\\end{align}"
},
{
"math_id": 28,
"text": "\\begin{align}\n x &= b\\frac{2t}{1 - t^2} + h \\\\\n y &= a\\frac{1 + t^2}{1 - t^2} + k\\,.\n\\end{align}"
},
{
"math_id": 29,
"text": "\\begin{align}\n x (\\theta) &= (R - r)\\cos\\theta + d\\cos\\left({R - r \\over r}\\theta\\right) \\\\\n y (\\theta) &= (R - r)\\sin\\theta - d\\sin\\left({R - r \\over r}\\theta\\right)\\,.\n\\end{align}"
},
{
"math_id": 30,
"text": "\\begin{align}\n x &= a \\cos(t) \\\\\n y &= a \\sin(t) \\\\\n z &= bt\\,\n\\end{align}"
},
{
"math_id": 31,
"text": "\\begin{align}\n\\mathbf{r}(t) &= (x(t), y(t), z(t)) \\\\\n &= (a \\cos(t), a \\sin(t), b t)\\,,\n\\end{align}"
},
{
"math_id": 32,
"text": "\\begin{align}\nx &= \\cos(t)\\left(R + r \\cos(u)\\right), \\\\\ny &= \\sin(t)\\left(R + r \\cos(u)\\right), \\\\\nz &= r \\sin(u)\\,.\n\\end{align}"
},
{
"math_id": 33,
"text": "\\left( x_0, y_0, z_0 \\right)"
},
{
"math_id": 34,
"text": " a \\hat\\mathbf{i} + b \\hat\\mathbf{j} + c \\hat\\mathbf{k}"
},
{
"math_id": 35,
"text": "\\begin{align}\nx & = x_0 +a t \\\\\ny & = y_0 +b t \\\\\nz & = z_0 +c t\n\\end{align}"
},
{
"math_id": 36,
"text": "\\mathbf{r}(t) = (x(t), y(t), z(t))\\,,"
},
{
"math_id": 37,
"text": "\\begin{align}\n\\mathbf{v}(t) &= \\mathbf{r}'(t) \\\\\n &= (x'(t), y'(t), z'(t))\\,,\n\\end{align}"
},
{
"math_id": 38,
"text": "\\begin{align}\n\\mathbf{a}(t) &= \\mathbf{v}'(t) = \\mathbf{r}''(t) \\\\\n &= (x''(t), y''(t), z''(t))\\,.\n\\end{align}"
},
{
"math_id": 39,
"text": "\\begin{align}\na &= 2mn \\\\\nb &= m^2 - n^2 \\\\\nc &= m^2 + n^2\\,,\n\\end{align}"
},
{
"math_id": 40,
"text": "x_1, \\ldots, x_n,"
},
{
"math_id": 41,
"text": "\n\\begin{align}\nx_1 &= \\beta_1+\\sum_{j=r+1}^n \\alpha_{1,j}x_j\\\\\n\\vdots\\\\\nx_r &= \\beta_r+\\sum_{j=r+1}^n \\alpha_{r,j}x_j\\\\\nx_{r+1} &= x_{r+1}\\\\\n\\vdots\\\\\nx_n &= x_n.\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=576108
|
5761408
|
Indecomposable distribution
|
Probability distribution
In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: "Z" ≠ "X" + "Y". If it can be so expressed, it is decomposable: "Z" = "X" + "Y". If, further, it can be expressed as the distribution of the sum of two or more independent "identically" distributed random variables, then it is divisible: "Z" = "X"1 + "X"2.
The simplest examples are Bernoulli distributions: if
formula_0
then the probability distribution of "X" is indecomposable.
Proof: Given non-constant distributions "U" and "V," so that "U" assumes at least two values "a", "b" and "V" assumes two values "c", "d," with "a" < "b" and "c" < "d", then "U" + "V" assumes at least three distinct values: "a" + "c", "a" + "d", "b" + "d" ("b" + "c" may be equal to "a" + "d", for example if one uses 0, 1 and 0, 1). Thus the sum of non-constant distributions assumes at least three values, so the Bernoulli distribution is not the sum of non-constant distributions.
Suppose "a" + "b" + "c" = 1, with "a", "b", "c" ≥ 0, and
formula_1
This probability distribution is decomposable (as the distribution of the sum of two Bernoulli-distributed random variables) if
formula_2
and otherwise indecomposable. To see, this, suppose "U" and "V" are independent random variables and "U" + "V" has this probability distribution. Then we must have
formula_3
for some "p", "q" ∈ [0, 1], by similar reasoning to the Bernoulli case (otherwise the sum "U" + "V" will assume more than three values). It follows that
formula_4
formula_5
formula_6
This system of two quadratic equations in two variables "p" and "q" has a solution ("p", "q") ∈ [0, 1]2 if and only if
formula_7
Thus, for example, the discrete uniform distribution on the set {0, 1, 2} is indecomposable, but the binomial distribution for two trials each having probabilities 1/2, thus giving respective probabilities "a, b, c" as 1/4, 1/2, 1/4, is decomposable.
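The criterion can be checked numerically. In the sketch below, p and q are recovered (when they exist) as the roots of z² − (1 + a − c)z + a = 0, which follows from pq = a and (1 − p)(1 − q) = c; the helper name is an assumption made for the example.

```python
# Numerical sketch of the decomposability criterion for the three-point distribution.
from math import sqrt

def bernoulli_factors(a: float, c: float):
    """Return (p, q) with pq = a and (1-p)(1-q) = c if they exist in [0, 1], else None."""
    s = 1 + a - c                 # p + q
    disc = s * s - 4 * a          # discriminant of z^2 - s*z + a
    if disc < 0:
        return None
    p, q = (s + sqrt(disc)) / 2, (s - sqrt(disc)) / 2
    return (p, q) if 0 <= q <= p <= 1 else None

print(bernoulli_factors(0.25, 0.25))   # (0.5, 0.5): binomial(2, 1/2) is decomposable
print(bernoulli_factors(1/3, 1/3))     # None: uniform on {0, 1, 2} is indecomposable
print(sqrt(1/3) + sqrt(1/3) <= 1)      # False, consistent with the criterion
```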
The distribution whose probability density function is
formula_8
is indecomposable.
By contrast, the uniform distribution on the interval [0, 1] is decomposable, since it can be written as
formula_9
where the independent random variables "X""n" are each equal to 0 or 1 with equal probabilities; each binary digit of the expansion is thus the outcome of a Bernoulli trial.
Consider a random variable "Y" with the geometric distribution
formula_10
on {0, 1, 2, ...}.
For any positive integer "k", there is a sequence of negative-binomially distributed random variables "Y""j", "j" = 1, ..., "k", such that "Y"1 + ... + "Y""k" has this geometric distribution. Therefore, this distribution is infinitely divisible.
On the other hand, let "D""n" be the "n"th binary digit of "Y", for "n" ≥ 0. Then the "D""n"'s are independent and
formula_11
and each term in this sum is indecomposable.
Related concepts.
At the other extreme from indecomposability is infinite divisibility.
|
[
{
"math_id": 0,
"text": "X = \\begin{cases}\n1 & \\text{with probability } p, \\\\\n0 & \\text{with probability } 1-p,\n\\end{cases}\n"
},
{
"math_id": 1,
"text": "X = \\begin{cases}\n2 & \\text{with probability } a, \\\\\n1 & \\text{with probability } b, \\\\\n0 & \\text{with probability } c.\n\\end{cases}\n"
},
{
"math_id": 2,
"text": "\\sqrt{a} + \\sqrt{c} \\le 1 \\ "
},
{
"math_id": 3,
"text": "\n\\begin{matrix}\nU = \\begin{cases}\n1 & \\text{with probability } p, \\\\\n0 & \\text{with probability } 1 - p,\n\\end{cases}\n& \\mbox{and} &\nV = \\begin{cases}\n1 & \\text{with probability } q, \\\\\n0 & \\text{with probability } 1 - q,\n\\end{cases}\n\\end{matrix}\n"
},
{
"math_id": 4,
"text": "a = pq, \\, "
},
{
"math_id": 5,
"text": "c = (1-p)(1-q), \\, "
},
{
"math_id": 6,
"text": "b = 1 - a - c. \\, "
},
{
"math_id": 7,
"text": "\\sqrt{a} + \\sqrt{c} \\le 1. \\ "
},
{
"math_id": 8,
"text": "f(x) = {1 \\over \\sqrt{2\\pi\\,}} x^2 e^{-x^2/2}"
},
{
"math_id": 9,
"text": " \\sum_{n=1}^\\infty {X_n \\over 2^n }, "
},
{
"math_id": 10,
"text": "\\Pr(Y = n) = (1-p)^n p\\, "
},
{
"math_id": 11,
"text": " Y = \\sum_{n=1}^\\infty 2^n D_n, "
}
] |
https://en.wikipedia.org/wiki?curid=5761408
|
57621327
|
Radiation efficiency
|
Telecommunications performance metric
In antenna theory, radiation efficiency is a measure of how well a radio antenna converts the radio-frequency power accepted at its terminals into radiated power. Likewise, in a receiving antenna it describes the proportion of the radio wave's power intercepted by the antenna which is actually delivered as an electrical signal. It is not to be confused with antenna efficiency, which applies to aperture antennas such as a parabolic reflector or phased array, or antenna/aperture illumination efficiency, which relates the maximum directivity of an antenna/aperture to its standard directivity.
Definition.
Radiation efficiency is defined as "The ratio of the total power radiated by an antenna to the net power accepted by the antenna from the connected transmitter." It is sometimes expressed as a percentage (less than 100), and is frequency dependent. It can also be described in decibels. The gain of an antenna is the directivity multiplied by the radiation efficiency. Thus, we have
formula_0
where formula_1 is the gain of the antenna in a specified direction, formula_2 is the radiation efficiency, and formula_3 is the directivity of the antenna in the specified direction.
For wire antennas which have a defined radiation resistance the radiation efficiency is the ratio of the radiation resistance to the total resistance of the antenna including ground loss (see below) and conductor resistance. In practical cases the resistive loss in any tuning and/or matching network is often included, although network loss is strictly not a property of the antenna.
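As a minimal numerical sketch of these relations, the snippet below computes the radiation efficiency from a series radiation/loss resistance model and the resulting gain in decibels; the 36 Ω and 4 Ω resistances and the 5.15 dBi directivity are illustrative assumptions, not measured data.

```python
# Radiation efficiency from radiation/loss resistances, and gain = efficiency * directivity.
import math

def radiation_efficiency(r_radiation: float, r_loss: float) -> float:
    """e_R = R_rad / (R_rad + R_loss) for a series radiation/loss resistance model."""
    return r_radiation / (r_radiation + r_loss)

def gain_dbi(directivity_dbi: float, efficiency: float) -> float:
    """G = e_R * D, expressed in decibels: G[dBi] = D[dBi] + 10*log10(e_R)."""
    return directivity_dbi + 10 * math.log10(efficiency)

e = radiation_efficiency(r_radiation=36.0, r_loss=4.0)
print(f"radiation efficiency = {e:.2f} ({10 * math.log10(e):.2f} dB)")
print(f"gain = {gain_dbi(5.15, e):.2f} dBi for a 5.15 dBi directivity")
```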
For other types of antenna the radiation efficiency is less easy to calculate and is usually determined by measurements.
Radiation efficiency of an antenna or antenna array having several ports.
In the case of an antenna or antenna array having multiple ports, the radiation efficiency depends on the excitation. More precisely, the radiation efficiency depends on the relative phases and the relative amplitudes of the signals applied to the different ports. This dependence is always present, but it is easier to interpret in the case where the interactions between the ports are sufficiently small. These interactions may be large in many actual configurations, for instance in an antenna array built in a mobile phone to provide spatial diversity and/or spatial multiplexing. In this context, it is possible to define an efficiency metric as the minimum radiation efficiency for all possible excitations, denoted by formula_4, which is related to the radiation efficiency figure given by formula_5.
Another interesting efficiency metric is the maximum radiation efficiency for all possible excitations, denoted by formula_6. It is possible to consider that using formula_4 as a design parameter is particularly relevant to a multiport antenna array intended for MIMO transmission with spatial multiplexing, and that using formula_6 as a design parameter is particularly relevant to a multiport antenna array intended for beamforming in a single direction or over a small solid angle.
Measurement of the radiation efficiency.
Measurements of the radiation efficiency are difficult. Classical techniques include the ″Wheeler method″ (also referred to as ″Wheeler cap method″) and the ″Q factor method″. The Wheeler method uses two impedance measurements, one of which with the antenna located in a metallic box (the cap). Unfortunately, the presence of the cap is likely to significantly modify the current distribution on the antenna, so that the resulting accuracy is difficult to determine. The Q factor method does not use a metallic enclosure, but the method is based on the assumption that the Q factor of an ideal antenna is known, the ideal antenna being identical to the actual antenna except that the conductors have perfect conductivity and any dielectrics have zero loss. Thus, the Q factor method is only semi-experimental, because it relies on a theoretical computation using an assumed geometry of the actual antenna. Its accuracy is also difficult to determine. Other radiation efficiency measurement techniques include: the pattern integration method, which requires gain measurements over many directions and two polarizations; and reverberation chamber techniques, which utilize a mode-stirred reverberation chamber.
Ohmic and ground loss.
The loss of radio-frequency power to heat can be subdivided many different ways, depending on the number of significantly lossy objects electrically coupled to the antenna, and on the level of detail desired. Typically the simplest is to consider two types of loss: "ohmic loss" and "ground loss".
When discussed as distinct from "ground loss", the term "ohmic loss" refers to the heat-producing resistance to the flow of radio current in the conductors of the antenna, their electrical connections, and possibly loss in the antenna's feed cable. Because of the skin effect, resistance to radio-frequency current is generally much higher than direct current resistance.
For vertical monopoles and other antennas placed near the ground, "ground loss" occurs due to the electrical resistance encountered by radio-frequency fields and currents passing through the soil in the vicinity of the antenna, as well as ohmic resistance in metal objects in the antenna's surroundings (such as its mast or stalk), and "ohmic resistance" in its ground plane / counterpoise, and in electrical and mechanical bonding connections. When considering antennas that are mounted a few wavelengths above the earth on a non-conducting, radio-transparent mast, ground losses are small enough compared to conductor losses that they can be ignored.
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G= e_R \\, D"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "e_R"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "e_{R\\,MIN}"
},
{
"math_id": 5,
"text": "F_{RE}=\\sqrt{1-e_{R\\,MIN}}"
},
{
"math_id": 6,
"text": "e_{R\\,MAX}"
}
] |
https://en.wikipedia.org/wiki?curid=57621327
|
57622
|
Common sunflower
|
Species of flowering plant in the family of Asteraceae
<templatestyles src="Template:Taxobox/core/styles.css" />
The common sunflower (Helianthus annuus) is a species of large annual forb of the daisy family Asteraceae. The common sunflower is harvested for its edible oily seeds which are used in the production of cooking oil, as well as other uses such as food for livestock, bird food, and planting in domestic gardens for aesthetics. Wild plants are known for their multiple flower heads, whereas the domestic sunflower often possesses a single large flower head atop an unbranched stem.
Etymology.
In the binomial name "Helianthus annuus", the genus name is derived from the Greek "ἥλιος : hḗlios" 'sun' and "ἄνθος : ánthos" 'flower'. The species name "annuus" means 'annual' in Latin.
Domestication.
The plant was first domesticated in the Americas. Sunflower seeds were brought to Europe from the Americas in the 16th century, where, along with sunflower oil, they became a widespread cooking ingredient. With time, the bulk of industrial-scale production has shifted to Eastern Europe, and as of 2020 Russia and Ukraine together accounted for over half of worldwide seed production.
Description.
The plant has an erect rough-hairy stem, reaching typical heights of . The tallest sunflower on record achieved . Sunflower leaves are broad, coarsely toothed, rough and mostly alternate; those near the bottom are largest and commonly heart-shaped.
Flower.
The plant flowers in summer. What is often called the "flower" of the sunflower is actually a "flower head" (pseudanthium), wide, of numerous small individual five-petaled flowers ("florets"). The outer flowers, which resemble petals, are called ray flowers. Each "petal" consists of a ligule composed of fused petals of an asymmetrical ray flower. They are sexually sterile and may be yellow, red, orange, or other colors. The spirally arranged flowers in the center of the head are called disk flowers. These mature into fruit (sunflower "seeds").
The prairie sunflower ("H. petiolaris") is similar in appearance to the wild common sunflower; the scales in its central disk are tipped by white hairs.
Heliotropism.
A common misconception is that flowering sunflower heads track the Sun across the sky. Although immature flower buds exhibit this behaviour, the mature flowering heads point in a fixed (and typically easterly) direction throughout the day. This old misconception was disputed in 1597 by the English botanist John Gerard, who grew sunflowers in his famous herbal garden: "[some] have reported it to turn with the Sun, the which I could never observe, although I have endeavored to find out the truth of it." The uniform alignment of sunflower heads in a field might give some people the false impression that the flowers are tracking the Sun.
This alignment results from heliotropism in an earlier development stage, the young flower stage, before full maturity of flower heads (anthesis). Young sunflowers orient themselves in the direction of the sun. At dawn the head of the flower faces east and moves west throughout the day. When sunflowers reach full maturity they no longer follow the sun, and continuously face east. Young flowers reorient overnight to face east in anticipation of the morning. Their heliotropic motion is a circadian rhythm, synchronized by the sun, which continues if the sun disappears on cloudy days or if plants are moved to constant light. They are able to regulate their circadian rhythm in response to the blue light emitted by a light source. If a sunflower plant in the bud stage is rotated 180°, the bud will be turning away from the sun for a few days, as resynchronization with the sun takes time.
When growth of the flower stalk stops and the flower is mature, the heliotropism also stops and the flower faces east from that moment onward. This eastward orientation allows rapid warming in the morning and, as a result, an increase in pollinator visits. Sunflowers do not have a pulvinus below their inflorescence. A pulvinus is a flexible segment in the leaf stalks (petiole) of some plant species and functions as a 'joint'. It effectuates leaf motion due to reversible changes in turgor pressure, which occurs without growth. The sensitive plant's closing leaves are a good example of reversible leaf movement via pulvinuli.
Floret arrangement.
Generally, each floret is oriented toward the next by approximately the golden angle, 137.5°, producing a pattern of interconnecting spirals, where the number of left spirals and the number of right spirals are successive Fibonacci numbers. Typically, there are 34 spirals in one direction and 55 in the other; however, in a very large sunflower head there could be 89 in one direction and 144 in the other. This pattern produces the most efficient packing of seeds mathematically possible within the flower head.
A model for the pattern of florets in the head of a sunflower was proposed by H. Vogel in 1979. This is expressed in polar coordinates
formula_0
formula_1
where θ is the angle, "r" is the radius or distance from the center, and "n" is the index number of the floret and "c" is a constant scaling factor. It is a form of Fermat's spiral. The angle 137.5° is related to the golden ratio (55/144 of a circular angle, where 55 and 144 are Fibonacci numbers) and gives a close packing of florets. This model has been used to produce computer generated representations of sunflowers.
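A few lines of Python reproduce Vogel's model directly; the scale factor c = 1 and the number of florets are arbitrary choices made for this sketch.

```python
# Vogel's floret model: the n-th floret sits at angle n * 137.5 degrees and radius c * sqrt(n).
import math

ANGLE = math.radians(137.5)

def floret(n: int, c: float = 1.0):
    theta = n * ANGLE
    r = c * math.sqrt(n)
    return r * math.cos(theta), r * math.sin(theta)   # Cartesian coordinates

head = [floret(n) for n in range(1, 500)]
print(head[:3])   # positions of the first three florets
```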
Genome.
The sunflower genome is diploid with a base chromosome number of 17 and an estimated genome size of 2,871–3,189 million base pairs. Some sources claim its true size is around 3.5 billion base pairs (slightly larger than the human genome).
Distribution and habitat.
The plant was first domesticated in the Americas. Sunflowers grow best in fertile, moist, well-drained soil with heavy mulch. They often appear on dry open areas and foothills. Outside of cultivation, the common sunflower is found on moist clay-based soils in areas with climates similar to Texas. In contrast the related "Helianthus debilis" and "Helianthus petiolaris" are found on drier, sandier soils.
The precise native range is difficult to determine. According to Plants of the World Online (POWO) it is native to Arizona, California, and Nevada in the present day United States and to all parts of Mexico except the Gulf Coast and southeast. Though not giving much detail, the Missouri Botanical Garden Plant Finder also lists it as native to the Western United States and Canada. The information published by the Biota of North America Program (BONAP) largely agrees with this, showing the common sunflower as native to states west of the Mississippi, though also listed as a noxious weed in Iowa, Minnesota, and Texas. Regardless of its original range it can now be found in almost every part of the world that is not tropical, desert, or tundra.
Ecology.
Threats and diseases.
One of the major threats that sunflowers face today is "Fusarium", a filamentous fungus that is found largely in soil and plants. It is a pathogen that over the years has caused an increasing amount of damage and loss of sunflower crops, some as extensive as 80% of damaged crops.
Downy mildew is another disease to which sunflowers are susceptible. Susceptibility is heightened by the way the crop is grown: sunflower seeds are generally planted only about an inch deep, and such shallow planting in moist, waterlogged soil increases the chance of diseases such as downy mildew.
Another major threat to sunflower crops is broomrape, a parasite that attacks the root of the sunflower and causes extensive damage to sunflower crops, as high as 100%.
Cultivation.
In commercial planting, seeds are planted apart and deep.
History.
Common sunflower was one of several plants cultivated by Native Americans in prehistoric North America as part of the Eastern Agricultural Complex. Although it was commonly accepted that the sunflower was first domesticated in what is now the southeastern US, roughly 5,000 years ago, there is evidence that it was first domesticated in Mexico around 2600 BCE. These crops were found in Tabasco, Mexico, at the San Andres dig site. The earliest known examples in the US of a fully domesticated sunflower have been found in Tennessee, and date to around 2300 BCE. Other very early examples come from rockshelter sites in Eastern Kentucky. Many indigenous American peoples used the sunflower as the symbol of their solar deity, including the Aztecs and the Otomi of Mexico and the Incas in South America. In 1510, early Spanish explorers encountered the sunflower in the Americas and carried its seeds back to Europe. Of the four plants known to have been domesticated in eastern North America and to have become important agricultural commodities, the sunflower is currently the most economically important.
Research of phylogeographic relations and population demographic patterns across sunflowers has demonstrated that earlier cultivated sunflowers form a clade from wild populations from the Great Plains, which indicates that there was a single domestication event in central North America. Following the cultivated sunflower's origin, it may have gone through significant bottlenecks dating back to ~5,000 years ago.
In the 16th century the first crop breeds were brought from America to Europe by explorers. Domestic sunflower seeds have been found in Mexico, dating to 2100 BCE. Native American people grew sunflowers as a crop from Mexico to southern Canada. The plant was then introduced to the Russian Empire, where oilseed cultivators were located, and the flowers were developed and grown on an industrial scale. Russian-bred oilseed cultivars were reintroduced to North America in the mid-20th century, and North America began its commercial era of sunflower production and breeding. New breeds of "Helianthus" species became more prominent in new geographical areas. During the 18th century, the use of sunflower oil became very popular in Russia, particularly with members of the Russian Orthodox Church, because only plant-based fats were allowed during Lent, according to fasting traditions. In the early 19th century, it was first commercialized in the village of Alexeyevka in Voronezh Governorate by a merchant named Daniil Bokaryov, who developed a technology suitable for its large-scale extraction; the practice quickly spread. The town's coat of arms has included an image of a sunflower ever since.
Production.
In 2020, world production of sunflower seeds was 50 million tonnes, led by Russia and Ukraine with 53% combined of the total (table).
Fertilizer use.
Researchers have analyzed the impact of various nitrogen-based fertilizers on the growth of sunflowers. Ammonium nitrate was found to produce better nitrogen absorption than urea, which performed better in low-temperature areas.
Crop rotation.
Sunflower cultivation typically uses crop rotation, often with cereals, soybean, or rapeseed. This reduces idle periods and increases total sunflower production and profitability.
Hybrids and cultivars.
In today's market, most of the sunflower seeds provided or grown by farmers are hybrids. Hybrids or hybridized sunflowers are produced by cross-breeding different types and species, for example cultivated sunflowers with wild species. By doing so, new genetic recombinations are obtained ultimately leading to the production of new hybrid species. These hybrid species generally have a higher fitness and carry properties or characteristics that farmers look for, such as resistance to pathogens.
Hybrid, "Helianthus annuus dwarf2" does not contain the hormone gibberellin and does not display heliotropic behavior. Plants treated with an external application of the hormone display a temporary restoration of elongation growth patterns. This growth pattern diminished by 35% 7–14 days after final treatment.
Hybrid male-sterile and male-fertile flowers that display heterogeneity see little crossover in honeybee visitation. Sensory cues such as pollen odor, diameter of the seed head, and height may influence visitation by pollinators that display constancy behavior patterns.
Sunflowers are grown as ornamentals in a domestic setting. Being easy to grow and producing spectacular results in any good, moist soil in full sun, they are a favourite subject for children. A large number of cultivars, of varying size and color, are now available to grow from seed. The following are cultivars of sunflowers (those marked agm have gained the Royal Horticultural Society's Award of Garden Merit):-
Uses.
Sunflower "whole seed" (fruit) are sold as a snack food, raw or after roasting in ovens, with or without salt and/or seasonings added. Sunflower seeds can be processed into a peanut butter alternative, sunflower butter. It is also sold as food for birds and can be used directly in cooking and salads. Native Americans had multiple uses for sunflowers in the past, such as in bread, medical ointments, dyes and body paints.
Sunflower oil, extracted from the seeds, is used for cooking, as a carrier oil and to produce margarine and biodiesel, as it is cheaper than olive oil. A range of sunflower varieties exist with differing fatty acid compositions; some "high-oleic" types contain a higher level of monounsaturated fats in their oil than even olive oil. The oil is also sometimes used in soap. After World War I, during the Russian Civil War, people in Ukraine used sunflower seed oil in lamps as a substitute for kerosene due to shortages. The light from such a lamp has been described as "miserable" and "smoky".
The cake remaining after the seeds have been processed for oil is used as livestock feed. The hulls resulting from the dehulling of the seeds before oil extraction can also be fed to domestic animals. Some recently developed cultivars have drooping heads. These cultivars are less attractive to gardeners growing the flowers as ornamental plants, but appeal to farmers, because they reduce bird damage and losses from some plant diseases. Sunflowers also produce latex, and are the subject of experiments to improve their suitability as an alternative crop for producing hypoallergenic rubber.
Traditionally, several Native American groups planted sunflowers on the north edges of their gardens as a "fourth sister" to the better-known three sisters combination of corn, beans, and squash. Annual species are often planted for their allelopathic properties. It was also used by Native Americans to dress hair. Among the Zuni people, the fresh or dried root is chewed by the medicine man before sucking venom from a snakebite and applying a poultice to the wound. This compound poultice of the root is applied with much ceremony to rattlesnake bites.
However, for commercial farmers growing other commodity crops, the wild sunflower is often considered a weed. Especially in the Midwestern US, wild (perennial) species are often found in corn and soybean fields and can decrease yields. The decrease in yield can be attributed to the production of phenolic compounds which are used to reduce competition for nutrients in nutrient-poor growing areas of the common sunflower.
Phytoremediation.
"Helianthus annuus" can be used in phytoremediation to extract pollutants from soil such as lead and other heavy metals, such as cadmium, zinc, cesium, strontium, and uranium. The phytoremediation process begins by absorbing the heavy metal(s) through the roots, which gradually accumulate in other areas, such as the shoots and leaves. "Helianthus annuus" can also be used in rhizofiltration to neutralize radionuclides, such as caesium-137 and strontium-90 from a pond after the Chernobyl disaster. A similar campaign was mounted in response to the Fukushima Daiichi nuclear disaster.
Culture.
According to Iroquois mythology, the first sunflowers grew out of Earth Woman's legs after she died giving birth to her twin sons, Sapling and Flint.
During the 19th century, it was believed that nearby plants of the species would protect a home from malaria.
The Zuni people use the blossoms ceremonially for anthropic worship. Sunflowers were also worshipped by the Incas, who viewed them as a symbol of the Sun.
The flowers are the subject of Vincent van Gogh's "Sunflowers" series of still-life paintings.
In July 2015, viable seeds were acquired from the field where Malaysia Airlines Flight 17 had crashed a year earlier and were grown in tribute to the 15 Dutch residents of Hilversum who were killed. Earlier that year, Fairfax chief correspondent Paul McGeough and photographer Kate Geraghty had collected 1.5 kg of sunflower seeds from the wreck site, aiming to give family and friends of the 38 Australian victims a poignant symbol of hope.
On 13 May 2021, during the National Costume competition of the Miss Universe 2020 beauty pageant, Miss Dominican Republic Kimberly Jiménez wore a "Goddess of Sunflowers" costume covered in gold and yellow rhinestones that included several real sunflowers sewn onto the fabric.
Modern stories often claim that in Greek mythology, the nymph Clytie transformed into a sunflower when she pined after her former lover Helios, the god of the sun, who spurned her and left her for another. However, sunflowers are not native to Greece or Italy, but to North America. The original story is about another flower, the heliotropium.
National and state symbol.
The sunflower is the national flower of Ukraine. Ukrainians used sunflower oil as a main cooking fat in place of butter or lard, which were forbidden by the Orthodox Church when observing Lent. Sunflowers were also planted for bioremediation at Chernobyl. In June 1996, officials from the United States, Russia, and Ukraine planted sunflowers at the Pervomaysk missile base where Soviet nuclear weapons had formerly been placed. During the Russian invasion of Ukraine, a video widely shared on social media showed a Ukrainian woman confronting a Russian soldier, telling the latter to "take these seeds and put them in your pockets so at least sunflowers will grow when you all lie down here". The sunflower has since become a global symbol of resistance, unity, and hope.
The sunflower is also the state flower of the US state of Kansas, and one of the city flowers of Kitakyūshū, Japan.
Movement symbol.
During the late 19th century, the flower was used as the symbol of the Aesthetic Movement.
The sunflower was chosen as the symbol of the Spiritualist Church, for many reasons, but mostly because of the (false) belief that the flowers turn toward the sun as "Spiritualism turns toward the light of truth". Modern Spiritualists often have art or jewelry with sunflower designs.
The sunflower is often used as a symbol of green ideology. The flower is also the symbol of the Vegan Society.
The sunflower is the symbol behind the Sunflower Movement, a 2014 mass protest in Taiwan.
The Hidden Disabilities Sunflower was first used as a visible symbol (typically worn on a lanyard) in May 2016 at London Gatwick Airport. It has since come into common usage throughout the UK, and in the Commonwealth more generally.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "r=c \\sqrt{n},"
},
{
"math_id": 1,
"text": "\\theta=n \\times 137.5^{\\circ},"
}
] |
https://en.wikipedia.org/wiki?curid=57622
|
57625144
|
Adamic–Adar index
|
The Adamic–Adar index is a measure introduced in 2003 by Lada Adamic and Eytan Adar to predict links in a social network, according to the number of links shared between two nodes. It is defined as the sum of the inverse logarithmic degree centrality of the neighbours shared by the two nodes
formula_0
where formula_1 is the set of nodes adjacent to formula_2. The definition is based on the concept that common elements with very large neighbourhoods are less significant when predicting a connection between two nodes compared with elements shared between a small number of nodes.
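As an illustration of the definition, the following Python sketch (not from the original paper; the graph and the function name are invented for this example, and the natural logarithm is used as a convention) computes the index for a pair of nodes from a dictionary mapping each node to its set of neighbours:
import math

def adamic_adar(adjacency, x, y):
    """Adamic-Adar index of nodes x and y.

    adjacency maps each node to the set of its neighbours. Shared neighbours
    with only one link are skipped, since log(1) = 0 would make the term undefined."""
    score = 0.0
    for u in adjacency[x] & adjacency[y]:
        degree = len(adjacency[u])
        if degree > 1:
            score += 1.0 / math.log(degree)
    return score

# Toy graph: nodes 1 and 2 share the neighbours 3 (degree 3) and 4 (degree 2).
graph = {
    1: {3, 4},
    2: {3, 4},
    3: {1, 2, 5},
    4: {1, 2},
    5: {3},
}
print(adamic_adar(graph, 1, 2))   # 1/log(3) + 1/log(2), approximately 2.35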
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\nA(x,y) = \\sum_{u \\in N(x) \\cap N(y)} \\frac{1}{\\log{|N(u)|}}\n"
},
{
"math_id": 1,
"text": "N(u)"
},
{
"math_id": 2,
"text": "u"
}
] |
https://en.wikipedia.org/wiki?curid=57625144
|
57633087
|
Compact object (mathematics)
|
Mathematical concept
In mathematics, compact objects, also referred to as finitely presented objects, or objects of finite presentation, are objects in a category satisfying a certain finiteness condition.
Definition.
An object "X" in a category "C" which admits all filtered colimits (also known as direct limits) is called compact if the functor
formula_0
commutes with filtered colimits, i.e., if the natural map
formula_1
is a bijection for any filtered system of objects formula_2 in "C". Since elements in the filtered colimit at the left are represented by maps formula_3, for some "i", the surjectivity of the above map amounts to requiring that a map formula_4 factors over some formula_2.
The terminology is motivated by an example arising from topology mentioned below. Several authors also use a terminology which is more closely related to algebraic categories: some use the terminology "finitely presented object" instead of compact object, while others call these the "objects of finite presentation".
Compactness in ∞-categories.
The same definition also applies if "C" is an ∞-category, provided that the above set of morphisms gets replaced by the mapping space in "C" (and the filtered colimits are understood in the ∞-categorical sense, sometimes also referred to as filtered homotopy colimits).
Compactness in triangulated categories.
For a triangulated category "C" which admits all coproducts, Neeman defines an object to be compact if
formula_5
commutes with coproducts. The relation of this notion and the above is as follows: suppose "C" arises as the homotopy category of a stable ∞-category admitting all filtered colimits. (This condition is widely satisfied, but not automatic.) Then an object in "C" is compact in Neeman's sense if and only if it is compact in the ∞-categorical sense. The reason is that in a stable ∞-category, formula_6 always commutes with finite colimits since these are limits. Then, one uses a presentation of filtered colimits as a coequalizer (which is a finite colimit) of an infinite coproduct.
Examples.
The compact objects in the category of sets are precisely the finite sets.
For a ring "R", the compact objects in the category of "R"-modules are precisely the finitely presented "R"-modules. In particular, if "R" is a field, then compact objects are finite-dimensional vector spaces.
Similar results hold for any category of algebraic structures given by operations on a set obeying equational laws. Such categories, called varieties, can be studied systematically using Lawvere theories. For any Lawvere theory "T", there is a category Mod("T") of models of "T", and the compact objects in Mod("T") are precisely the finitely presented models. For example: suppose "T" is the theory of groups. Then Mod("T") is the category of groups, and the compact objects in Mod("T") are the finitely presented groups.
The compact objects in the derived category formula_7 of "R"-modules are precisely the perfect complexes.
Compact topological spaces are "not" the compact objects in the category of topological spaces. Instead these are precisely the finite sets endowed with the discrete topology. The link between compactness in topology and the above categorical notion of compactness is as follows: for a fixed topological space formula_8, there is the category formula_9 whose objects are the open subsets of formula_8 (and inclusions as morphisms). Then, formula_8 is a compact topological space if and only if formula_8 is compact as an object in formula_9.
If formula_10 is any category, the category of presheaves formula_11 (i.e., the category of functors from formula_12 to sets) has all colimits. The original category formula_10 is connected to formula_11 by the Yoneda embedding formula_13. For "any" object formula_8 of formula_10, formula_14 is a compact object (of formula_11).
In a similar vein, any category formula_10 can be regarded as a full subcategory of the category formula_15 of ind-objects in formula_10. Regarded as an object of this larger category, "any" object of formula_10 is compact. In fact, the compact objects of formula_15 are precisely the objects of formula_10 (or, more precisely, their images in formula_15).
Non-examples.
Derived category of sheaves of Abelian groups on a noncompact X.
The unbounded derived category of sheaves of Abelian groups formula_16 on a non-compact topological space formula_8 is generally not a compactly generated category. Some evidence for this can be found by considering an open cover formula_17 (which, by the non-compactness of formula_8, can never be refined to a finite subcover) and taking a map formula_18 for some formula_19. Then, for this map formula_20 to lift to an element formula_21 it would have to factor through some formula_22, which is not guaranteed. Proving this requires showing that any compact object has support in some compact subset of formula_8, and then showing this subset must be empty.
Derived category of quasi-coherent sheaves on an Artin stack.
For algebraic stacks formula_23 over positive characteristic, the unbounded derived category formula_24 of quasi-coherent sheaves is in general not compactly generated, even if formula_23 is quasi-compact and quasi-separated. In fact, for the algebraic stack formula_25, there are no compact objects other than the zero object. This observation can be generalized to the following theorem: if the stack formula_23 has a stabilizer group formula_26 such that
then the only compact object in formula_24 is the zero object. In particular, the category is not compactly generated.
This theorem applies, for example, to formula_30 by means of the embedding formula_31 sending a point formula_32 to the identity matrix plus formula_33 at the formula_34-th column in the first row.
Compactly generated categories.
In most categories, the condition of being compact is quite strong, so that most objects are not compact. A category formula_10 is compactly generated if any object can be expressed as a filtered colimit of compact objects in formula_10. For example, any vector space "V" is the filtered colimit of its finite-dimensional (i.e., compact) subspaces. Hence the category of vector spaces (over a fixed field) is compactly generated.
Categories which are compactly generated and also admit all colimits are called accessible categories.
Relation to dualizable objects.
For categories "C" with a well-behaved tensor product (more formally, "C" is required to be a monoidal category), there is another condition imposing some kind of finiteness, namely the condition that an object is "dualizable". If the monoidal unit in "C" is compact, then any dualizable object is compact as well. For example, "R" is compact as an "R"-module, so this observation can be applied. Indeed, in the category of "R"-modules the dualizable objects are the finitely presented projective modules, which are in particular compact. In the context of ∞-categories, dualizable and compact objects tend to be more closely linked, for example in the ∞-category of complexes of "R"-modules, compact and dualizable objects agree. This and more general example where dualizable and compact objects agree are discussed in .
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\operatorname{Hom}_C(X, \\cdot) : C \\to \\mathrm{Sets}, Y \\mapsto \\operatorname{Hom}_C(X, Y)"
},
{
"math_id": 1,
"text": "\\operatorname{colim} \\operatorname{Hom}_C(X, Y_i) \\to \\operatorname{Hom}_C(X, \\operatorname{colim}_i Y_i)"
},
{
"math_id": 2,
"text": "Y_i"
},
{
"math_id": 3,
"text": "X \\to Y_i"
},
{
"math_id": 4,
"text": "X \\to \\operatorname{colim}_i Y_i"
},
{
"math_id": 5,
"text": "\\operatorname{Hom}_C(X, \\cdot) : C \\to \\mathrm{Ab}, Y \\mapsto \\operatorname{Hom}_C(X, Y)"
},
{
"math_id": 6,
"text": "\\operatorname{Hom}_C(X, -)"
},
{
"math_id": 7,
"text": "D(R-\\text{Mod})"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "\\text{Open}(X)"
},
{
"math_id": 10,
"text": "C"
},
{
"math_id": 11,
"text": "\\text{PreShv}(C)"
},
{
"math_id": 12,
"text": "C^{op}"
},
{
"math_id": 13,
"text": "h_{(-)}: C \\to \\text{PreShv}(C), X \\mapsto h_{X} := \\operatorname{Hom}(-, X)"
},
{
"math_id": 14,
"text": "h_X"
},
{
"math_id": 15,
"text": "\\text{Ind}(C)"
},
{
"math_id": 16,
"text": "D(\\text{Sh}(X;\\text{Ab}))"
},
{
"math_id": 17,
"text": "\\mathcal{U} = \\{U_i \\}_{i \\in I}"
},
{
"math_id": 18,
"text": "\\phi\\in\\text{Hom}(\\mathcal{F}^\\bullet,\\underset{i\\in I}{\\text{colim}} \\mathbb{Z}_{U_i})"
},
{
"math_id": 19,
"text": "\\mathcal{F}^\\bullet \\in \\text{Ob}(D(\\text{Sh}(X;\\text{Ab})))"
},
{
"math_id": 20,
"text": "\\phi"
},
{
"math_id": 21,
"text": "\\psi \\in \\underset{i \\in I}{\\text{colim}} \\text{ Hom}(\\mathcal{F}^\\bullet, \\mathbb{Z}_{U_i})"
},
{
"math_id": 22,
"text": "\\mathbb{Z}_{U_i}"
},
{
"math_id": 23,
"text": "\\mathfrak{X}"
},
{
"math_id": 24,
"text": "D_{qc}(\\mathfrak{X})"
},
{
"math_id": 25,
"text": "B\\mathbb{G}_a"
},
{
"math_id": 26,
"text": "G"
},
{
"math_id": 27,
"text": "k"
},
{
"math_id": 28,
"text": "\\overline{G} = G\\otimes_k\\overline{k}"
},
{
"math_id": 29,
"text": "\\mathbb{G}_a"
},
{
"math_id": 30,
"text": "G=GL_n"
},
{
"math_id": 31,
"text": "\\mathbb{G}_a \\to GL_n"
},
{
"math_id": 32,
"text": "x \\in \\mathbb{G}_a(S)"
},
{
"math_id": 33,
"text": "x"
},
{
"math_id": 34,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=57633087
|
576354
|
142857
|
Sequence of six digits
Natural number
142,857 is the natural number following 142,856 and preceding 142,858. It is a Kaprekar number.
142857, the six repeating digits of 1/7 (0.142857...), is the best-known cyclic number in base 10. If it is multiplied by 2, 3, 4, 5, or 6, the answer will be a cyclic permutation of itself, and will correspond to the repeating digits of 2/7, 3/7, 4/7, 5/7, or 6/7 respectively.
1 × 142,857 = 142,857
2 × 142,857 = 285,714
3 × 142,857 = 428,571
4 × 142,857 = 571,428
5 × 142,857 = 714,285
6 × 142,857 = 857,142
7 × 142,857 = 999,999
Calculation.
If multiplying by an integer greater than 7, there is a simple process to get to a cyclic permutation of 142857. By adding the rightmost six digits (ones through hundred thousands) to the remaining digits and repeating this process until only six digits are left, it will result in a cyclic permutation of 142857:
142857 × 8 = 1142856
1 + 142856 = 142857
142857 × 815 = 116428455
116 + 428455 = 428571
142857² = 142857 × 142857 = 20408122449
20408 + 122449 = 142857
Multiplying by a multiple of 7 will result in 999999 through this process:
142857 × 7⁴ = 342999657
342 + 999657 = 999999
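The chunk-and-add procedure above is easy to automate. The following Python sketch (illustrative only) splits a product into six-digit groups from the right, adds the groups, and repeats until at most six digits remain:
def reduce_to_six_digits(n):
    """Add six-digit groups, taken from the right, until at most six digits remain."""
    while n > 999999:
        n = n // 1000000 + n % 1000000
    return n

for factor in (8, 815, 142857):
    print(reduce_to_six_digits(142857 * factor))   # 142857, 428571, 142857

print(reduce_to_six_digits(142857 * 7**4))         # 999999, since 7**4 is a multiple of 7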
If you square the last three digits and subtract the square of the first three digits, you also get back a cyclic permutation of the number.
857² = 734449
142² = 20164
734449 − 20164 = 714285
It is the repeating part in the decimal expansion of the rational number 1/7 = 0.142857.... Thus, multiples of 1/7 are simply repeated copies of the corresponding multiples of 142857:
formula_0
Connection to the enneagram.
The 142857 number sequence is used in the enneagram figure, a symbol of the Gurdjieff Work used to explain and visualize the dynamics of the interaction between the two great laws of the Universe (according to G. I. Gurdjieff), the Law of Three and the Law of Seven. The movement of the numbers 142857 under division by 7, and the subsequent movement of the enneagram, are portrayed in Gurdjieff's sacred dances known as the movements.
Other properties.
The 142857 number sequence is also found in the decimal expansions of several fractions in which the denominator has a factor of 7. In the examples below, the numerators are all 1; however, there are instances where they do not have to be, such as 2/7 (0.285714...).
For example, consider the fractions and equivalent decimal values listed below:
1/7 = 0.142857...
1/14 = 0.0714285...
1/28 = 0.03571428...
1/35 = 0.0285714...
1/56 = 0.017857142...
1/70 = 0.0142857...
The above decimals follow the 142857 rotational sequence. There are other fractions in which the denominator has a factor of 7 that do not follow this sequence and have other values in their decimal digits.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\begin{align}\n\\tfrac17 & = 0.\\overline{142857}\\ldots \\\\[3pt]\n\\tfrac27 & = 0.\\overline{285714}\\ldots \\\\[3pt]\n\\tfrac37 & = 0.\\overline{428571}\\ldots \\\\[3pt]\n\\tfrac47 & = 0.\\overline{571428}\\ldots \\\\[3pt]\n\\tfrac57 & = 0.\\overline{714285}\\ldots \\\\[3pt]\n\\tfrac67 & = 0.\\overline{857142}\\ldots \\\\[3pt]\n\\tfrac77 & = 0.\\overline{999999}\\ldots = 1 \\\\[3pt]\n\\tfrac87 & = 1.\\overline{142857}\\ldots \\\\[3pt]\n\\tfrac97 & = 1.\\overline{285714}\\ldots \\\\\n& \\,\\,\\,\\vdots\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=576354
|
576387
|
Finite field arithmetic
|
Arithmetic in a field with a finite number of elements
In mathematics, finite field arithmetic is arithmetic in a finite field (a field containing a finite number of elements) contrary to arithmetic in a field with an infinite number of elements, like the field of rational numbers.
There are infinitely many different finite fields. Their number of elements is necessarily of the form "pn" where "p" is a prime number and "n" is a positive integer, and two finite fields of the same size are isomorphic. The prime "p" is called the characteristic of the field, and the positive integer "n" is called the dimension of the field over its prime field.
Finite fields are used in a variety of applications, including in classical coding theory in linear block codes such as BCH codes and Reed–Solomon error correction, in cryptography algorithms such as the Rijndael (AES) encryption algorithm, in tournament scheduling, and in the design of experiments.
Effective polynomial representation.
The finite field with "p""n" elements is denoted GF("p""n") and is also called the Galois field of order "p""n", in honor of the founder of finite field theory, Évariste Galois. GF("p"), where "p" is a prime number, is simply the ring of integers modulo "p". That is, one can perform operations (addition, subtraction, multiplication) using the usual operation on integers, followed by reduction modulo "p". For instance, in GF(5), 4 + 3 = 7 is reduced to 2 modulo 5. Division is multiplication by the inverse modulo "p", which may be computed using the extended Euclidean algorithm.
A particular case is GF(2), where addition is exclusive OR (XOR) and multiplication is AND. Since the only invertible element is 1, division is the identity function.
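These prime-field operations translate directly into code. The following Python sketch is purely illustrative (the function names are ad hoc); it reproduces the GF(5) example above and computes the multiplicative inverse with the extended Euclidean algorithm:
def gf_p_add(a, b, p):      # addition in GF(p)
    return (a + b) % p

def gf_p_mul(a, b, p):      # multiplication in GF(p)
    return (a * b) % p

def gf_p_inv(a, p):
    """Multiplicative inverse in GF(p) via the extended Euclidean algorithm."""
    t, new_t, r, new_r = 0, 1, p, a % p
    while new_r != 0:
        q = r // new_r
        t, new_t = new_t, t - q * new_t
        r, new_r = new_r, r - q * new_r
    if r != 1:
        raise ValueError("a is not invertible modulo p")
    return t % p

print(gf_p_add(4, 3, 5))                # 2, since 4 + 3 = 7 is reduced to 2 modulo 5
print(gf_p_inv(3, 5))                   # 2, since 3 * 2 = 6 is 1 modulo 5
print(gf_p_mul(4, gf_p_inv(3, 5), 5))   # 3, i.e. 4 divided by 3 in GF(5)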
Elements of GF("p""n") may be represented as polynomials of degree strictly less than "n" over GF("p"). Operations are then performed modulo "m(x)" where "m(x)" is an irreducible polynomial of degree "n" over GF("p"), for instance using polynomial long division. Addition is the usual addition of polynomials, but the coefficients are reduced modulo "p". Multiplication is also the usual multiplication of polynomials, but with coefficients multiplied modulo "p" and polynomials multiplied modulo the polynomial "m(x)". This representation in terms of polynomial coefficients is called a monomial basis (a.k.a. 'polynomial basis').
There are other representations of the elements of GF("p""n"); some are isomorphic to the polynomial representation above and others look quite different (for instance, using matrices). Using a normal basis may have advantages in some contexts.
When the prime is 2, it is conventional to express elements of GF("p""n") as binary numbers, with the coefficient of each term in a polynomial represented by one bit in the corresponding element's binary expression. Braces ( "{" and "}" ) or similar delimiters are commonly added to binary numbers, or to their hexadecimal equivalents, to indicate that the value gives the coefficients of a basis of a field, thus representing an element of the field. For example, the following are equivalent representations of the same value in a characteristic 2 finite field:
Primitive polynomials.
There are many irreducible polynomials (sometimes called reducing polynomials) that can be used to generate a finite field, but they do not all give rise to the same representation of the field.
A monic irreducible polynomial of degree "n" having coefficients in the finite field GF("q"), where "q" = "p""t" for some prime p and positive integer t, is called a primitive polynomial if all of its roots are primitive elements of GF("qn"). In the polynomial representation of the finite field, this implies that x is a primitive element. There is at least one irreducible polynomial for which x is a primitive element. In other words, for a primitive polynomial, the powers of "x" generate every nonzero value in the field.
In the following examples it is best not to use the polynomial representation, as the meaning of "x" changes between the examples. The monic irreducible polynomial "x"8 + "x"4 + "x"3 + "x" + 1 over GF(2) is not primitive. Let "λ" be a root of this polynomial (in the polynomial representation this would be "x"), that is, "λ"8 + "λ"4 + "λ"3 + "λ" + 1 = 0. Now "λ"51 = 1, so "λ" is not a primitive element of GF(28) and generates a multiplicative subgroup of order 51. The monic irreducible polynomial "x"8 + "x"4 + "x"3 + "x"2 + 1 over GF(2) is primitive, and all 8 roots are generators of GF(28).
GF(28) has a total of 128 generators (see Number of primitive elements), and for a primitive polynomial, 8 of them are roots of the reducing polynomial. Having "x" as a generator for a finite field is beneficial for many computational mathematical operations.
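The distinction between the two reducing polynomials above can be checked directly by computing the multiplicative order of the element "x" (that is, {02}). The following Python sketch is illustrative only (the helper names are ad hoc); it repeatedly multiplies by "x" with reduction until the value returns to 1, giving order 51 under 0x11b and order 255 under the primitive polynomial 0x11d:
def xtime(a, poly):
    """Multiply a field element by x, reducing by the given degree-8 polynomial."""
    a <<= 1
    if a & 0x100:
        a ^= poly
    return a

def order_of_x(poly):
    """Multiplicative order of the element x (i.e. {02}) in GF(2^8) defined by poly."""
    value, n = 0x02, 1
    while value != 1:
        value = xtime(value, poly)
        n += 1
    return n

print(order_of_x(0x11b))   # 51  -> x^8 + x^4 + x^3 + x + 1 is irreducible but not primitive
print(order_of_x(0x11d))   # 255 -> x^8 + x^4 + x^3 + x^2 + 1 is primitive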
Addition and subtraction.
Addition and subtraction are performed by adding or subtracting two of these polynomials together, and reducing the result modulo the characteristic.
In a finite field with characteristic 2, addition modulo 2, subtraction modulo 2, and XOR are identical. Thus,
Under regular addition of polynomials, the sum would contain a term 2"x"6. This term becomes 0"x"6 and is dropped when the answer is reduced modulo 2.
Here is a table with both the normal algebraic sum and the characteristic 2 finite field sum of a few polynomials:
In computer science applications, the operations are simplified for finite fields of characteristic 2, also called GF(2"n") Galois fields, making these fields especially popular choices for applications.
Multiplication.
Multiplication in a finite field is multiplication modulo an irreducible reducing polynomial used to define the finite field. (I.e., it is multiplication followed by division using the reducing polynomial as the divisor—the remainder is the product.) The symbol "•" may be used to denote multiplication in a finite field.
Rijndael's (AES) finite field.
Rijndael (standardised as AES) uses the characteristic 2 finite field with 256 elements, which can also be called the Galois field GF(28). It employs the following reducing polynomial for multiplication:
"x"8 + "x"4 + "x"3 + "x" + 1.
For example, {53} • {CA} = {01} in Rijndael's field because
and
The latter can be demonstrated through long division (shown using binary notation, since it lends itself well to the task. Notice that exclusive OR is applied in the example and not arithmetic subtraction, as one might use in grade-school long division.):
11111101111110 (mod) 100011011
^100011011
01110000011110
^100011011
0110110101110
^100011011
010101110110
^100011011
00100011010
^100011011
000000001
Multiplication in this particular finite field can also be done using a modified version of the "peasant's algorithm". Each polynomial is represented using the same binary notation as above. Eight bits is sufficient because only degrees 0 to 7 are possible in the terms of each (reduced) polynomial.
This algorithm uses three variables (in the computer programming sense), each holding an eight-bit representation. a and b are initialized with the multiplicands; p accumulates the product and must be initialized to 0.
At the start and end of the algorithm, and the start and end of each iteration, this invariant is true: a b + p is the product. This is obviously true when the algorithm starts. When the algorithm terminates, a or b will be zero so p will contain the product.
This algorithm generalizes easily to multiplication over other fields of characteristic 2, changing the lengths of a, b, and p and the value codice_0 appropriately.
Multiplicative inverse.
The multiplicative inverse for an element a of a finite field can be calculated a number of different ways:
Implementation tricks.
Generator based tables.
When developing algorithms for Galois field computation on small Galois fields, a common performance optimization approach is to find a generator "g" and use the identity:
formula_0
to implement multiplication as a sequence of table look ups for the log"g"("a") and "g""y" functions and an integer addition operation. This exploits the property that every finite field contains generators. In the Rijndael field example, the polynomial "x" + 1 (or {03}) is one such generator. A necessary but not sufficient condition for a polynomial to be a generator is to be irreducible.
An implementation must test for the special case of "a" or "b" being zero, as the product will also be zero.
This same strategy can be used to determine the multiplicative inverse with the identity:
formula_1
Here, the order of the generator, |"g"|, is the number of non-zero elements of the field. In the case of GF(28) this is 28 − 1 = 255. That is to say, for the Rijndael example: ("x" + 1)255 = 1. So this can be performed with two look up tables and an integer subtract. Using this idea for exponentiation also derives benefit:
formula_2
This requires two table look ups, an integer multiplication and an integer modulo operation. Again a test for the special case "a" = 0 must be performed.
However, in cryptographic implementations, one has to be careful with such implementations since the cache architecture of many microprocessors leads to variable timing for memory access. This can lead to implementations that are vulnerable to a timing attack.
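A concrete, illustrative Python sketch of this table-based approach for the Rijndael field (not a production implementation; in particular it makes no attempt to be constant-time, so the caveat above applies) builds the exponential and logarithm tables from the generator {03} and then multiplies and inverts using only integer arithmetic on exponents:
def mul_by_generator(a):
    """Multiply a field element by {03} = x + 1 under the Rijndael polynomial 0x11b."""
    doubled = a << 1
    if doubled & 0x100:
        doubled ^= 0x11b
    return doubled ^ a

exp_table = [0] * 255          # exp_table[i] = {03}^i
log_table = [0] * 256          # log_table[v] = i such that {03}^i = v (v nonzero)
value = 1
for i in range(255):
    exp_table[i] = value
    log_table[value] = i
    value = mul_by_generator(value)

def gf_mul(a, b):
    """Multiplication in GF(2^8) via the log/antilog tables."""
    if a == 0 or b == 0:       # the special case mentioned above
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 255]

def gf_inv(a):
    """Multiplicative inverse (a must be nonzero): a^-1 = {03}^(255 - log(a))."""
    return exp_table[(255 - log_table[a]) % 255]

print(hex(gf_mul(0x53, 0xca)))   # 0x1, matching the {53} • {CA} = {01} example above
print(hex(gf_inv(0x53)))         # 0xca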
Carryless multiply.
For binary fields GF(2"n"), field multiplication can be implemented using a carryless multiply such as CLMUL instruction set, which is good for "n" ≤ 64. A multiplication uses one carryless multiply to produce a product (up to 2"n" − 1 bits), another carryless multiply of a pre-computed inverse of the field polynomial to produce a quotient = ⌊product / (field polynomial)⌋, a multiply of the quotient by the field polynomial, then an xor: result = product ⊕ ((field polynomial) ⌊product / (field polynomial)⌋). The last 3 steps (pclmulqdq, pclmulqdq, xor) are used in the Barrett reduction step for fast computation of CRC using the x86 pclmulqdq instruction.
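On platforms without such an instruction, the same two-step structure (a carry-less multiply followed by a reduction by the field polynomial) can be imitated with ordinary integer operations. The Python sketch below is illustrative only and reduces by repeated shift-and-XOR rather than by the Barrett-style quotient computation described above:
def clmul(a, b):
    """Carry-less multiplication: combine shifted copies of a with XOR."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def poly_reduce(value, poly=0x11b):
    """Reduce a bit-polynomial modulo the field polynomial by shift-and-XOR."""
    target_degree = poly.bit_length() - 1
    while value.bit_length() - 1 >= target_degree:
        value ^= poly << (value.bit_length() - 1 - target_degree)
    return value

def gf256_mul(a, b):
    return poly_reduce(clmul(a, b))

print(hex(gf256_mul(0x53, 0xca)))   # 0x1, as in the Rijndael example above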
Composite exponent.
When "k" is a composite number, there will exist isomorphisms from a binary field GF(2"k") to an extension field of one of its subfields, that is, GF((2"m")"n") where "k" = "m" "n". Utilizing one of these isomorphisms can simplify the mathematical considerations as the degree of the extension is smaller with the trade off that the elements are now represented over a larger subfield. To reduce gate count for hardware implementations, the process may involve multiple nesting, such as mapping from GF(28) to GF(((22)2)2).
Program examples.
C programming example.
Here is some C code which will add and multiply numbers in the characteristic 2 finite field of order 28, used for example by Rijndael algorithm or Reed–Solomon, using the Russian peasant multiplication algorithm:
#include <stdint.h>

/* Add two numbers in the GF(2^8) finite field */
uint8_t gadd(uint8_t a, uint8_t b) {
    return a ^ b;
}

/* Multiply two numbers in the GF(2^8) finite field defined
 * by the modulo polynomial relation x^8 + x^4 + x^3 + x + 1 = 0
 * (the other way being to do carryless multiplication followed by a modular reduction) */
uint8_t gmul(uint8_t a, uint8_t b) {
    uint8_t p = 0; /* accumulator for the product of the multiplication */
    while (a != 0 && b != 0) {
        if (b & 1) /* if the polynomial for b has a constant term, add the corresponding a to p */
            p ^= a; /* addition in GF(2^m) is an XOR of the polynomial coefficients */

        if (a & 0x80) /* GF modulo: if a has a nonzero term x^7, then must be reduced when it becomes x^8 */
            a = (a << 1) ^ 0x11b; /* subtract (XOR) the reducing polynomial x^8 + x^4 + x^3 + x + 1 (0b1_0001_1011) – you can change it but it must be irreducible */
        else
            a <<= 1; /* equivalent to a*x */

        b >>= 1;
    }
    return p;
}
This example has cache, timing, and branch prediction side-channel leaks, and is not suitable for use in cryptography.
D programming example.
This D program will multiply numbers in Rijndael's finite field and generate a PGM image:
/**
Multiply two numbers in the GF(2^8) finite field defined
by the polynomial x^8 + x^4 + x^3 + x + 1.
*/
ubyte gMul(ubyte a, ubyte b) pure nothrow {
    ubyte p = 0;

    foreach (immutable ubyte counter; 0 .. 8) {
        p ^= -(b & 1) & a;
        auto mask = -((a >> 7) & 1);
        // 0b1_0001_1011 is x^8 + x^4 + x^3 + x + 1.
        a = cast(ubyte)((a << 1) ^ (0b1_0001_1011 & mask));
        b >>= 1;
    }

    return p;
}

void main() {
    import std.stdio, std.conv;

    enum width = ubyte.max + 1, height = width;

    auto f = File("rijndael_finite_field_multiplication.pgm", "wb");
    f.writefln("P5\n%d %d\n255", width, height);
    foreach (immutable y; 0 .. height)
        foreach (immutable x; 0 .. width) {
            immutable char c = gMul(x.to!ubyte, y.to!ubyte);
            f.write(c);
        }
}
This example does not use any branches or table lookups in order to avoid side channels and is therefore suitable for use in cryptography.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "ab = g^{\\log_g(ab)} = g^{\\log_g(a) + \\log_g (b)}"
},
{
"math_id": 1,
"text": "a^{-1} = g^{\\log_g\\left(a^{-1}\\right)} = g^{-\\log_g(a)} = g^{|g| - \\log_g(a)}"
},
{
"math_id": 2,
"text": "a^n = g^{\\log_g\\left(a^n\\right)} = g^{n\\log_g(a)} = g^{n\\log_g(a) \\pmod{|g|}}"
}
] |
https://en.wikipedia.org/wiki?curid=576387
|
57638895
|
Unseen species problem
|
Estimating the numbers of species
The unseen species problem in ecology deals with the estimation of the number of species represented in an ecosystem that were not observed by samples. It more specifically relates to how many new species would be discovered if more samples were taken in an ecosystem. The study of the unseen species problem was started in the early 1940s, by Alexander Steven Corbet. He spent two years in British Malaya trapping butterflies and was curious how many new species he would discover if he spent another two years trapping. Many different estimation methods have been developed to determine how many new species would be discovered given more samples.
The unseen species problem also applies more broadly, as the estimators can be used to estimate any new elements of a set not previously found in samples. An example of this is determining how many words William Shakespeare knew based on all of his written works.
The unseen species problem can be stated mathematically as follows: if formula_0 independent samples are taken, formula_1, and then formula_2 further independent samples are taken, the number of unseen species that will be discovered by the additional samples is given by
formula_3
with formula_4 being the second set of formula_2 samples.
History.
In the early 1940s Alexander Steven Corbet spent 2 years in British Malaya trapping butterflies. He kept track of how many species he observed, and how many members of each species were captured. For example, there were 74 different species of which he captured only 2 individual butterflies each.
When Corbet returned to the United Kingdom, he approached biostatistician Ronald Fisher and asked how many new species of butterflies he could expect to catch if he went trapping for another two years; in essence, Corbet was asking how many species he observed zero times.
Fisher responded with a simple estimation: for an additional 2 years of trapping, Corbet could expect to capture 75 new species. He did this using a simple summation (using the data provided by Orlitsky in the example below):
formula_5
Here formula_6 corresponds to the number of individual species that were observed formula_7 times. Fisher's sum was later confirmed by Good–Toulmin.
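Fisher's alternating sum is simple to reproduce in code. In the Python sketch below, the list of formula_6 values is the commonly reproduced version of Corbet's counts; the underlying table is not reprinted in this article, so the specific numbers should be read as an assumption of the example:
# phi[i-1] = number of species captured exactly i times (i = 1..15);
# assumed values, corresponding to the commonly cited Corbet butterfly data.
phi = [118, 74, 44, 24, 29, 22, 20, 19, 20, 15, 12, 14, 6, 12, 6]

def fisher_estimate(phi):
    """Alternating sum phi_1 - phi_2 + phi_3 - ... used by Fisher."""
    return sum((-1) ** i * count for i, count in enumerate(phi))

print(fisher_estimate(phi))   # 75, the number of new species Fisher predicted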
Estimators.
To estimate the number of unseen species, let formula_8 be the number of future samples (formula_2) divided by the number of past samples (formula_0), or formula_9. Let formula_6 be the number of individual species observed formula_7 times (for example, if there were 74 species of butterflies with 2 observed members throughout the samples, then formula_10).
Good–Toulmin estimator.
The Good–Toulmin (GT) estimator was developed by Good and Toulmin in 1953. The estimate of the unseen species based on the Good–Toulmin estimator is given by
formula_11
The Good–Toulmin Estimator has been shown to be a good estimate for values of formula_12 The Good–Toulmin estimator also approximately satisfies
formula_13
This means that formula_14 estimates formula_15 to within formula_16 as long as formula_12
However, for formula_17, the Good–Toulmin estimator fails to capture accurate results. This is because, if formula_17 formula_14 increases by formula_18 for formula_7 with formula_19 meaning that if formula_19 formula_14 grows super-linearly in formula_20 but formula_15 can grow at most linearly with formula_21 Therefore, when formula_17 formula_14 grows "faster" than formula_15 and does "not" approximate the true value.
To compensate for this, Efron and Thisted in 1976 showed that a truncated Euler transform can also be a usable estimate (the "ET" estimate):
formula_22
with
formula_23
where formula_24 and
formula_25
where formula_26 is the location chosen to truncate the Euler transform.
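The truncated Euler transform can be evaluated with a few lines of code. The sketch below is illustrative: it uses Python's math.comb for the binomial tail probability, and both the input counts and the choice of formula_26 are assumptions of the example rather than values taken from the original papers:
from math import comb

def tail_prob(i, k, t):
    """P(X >= i) for X ~ Binomial(k, 1/(1+t)); zero for i > k."""
    if i > k:
        return 0.0
    return sum(comb(k, j) * t ** (k - j) for j in range(i, k + 1)) / (1 + t) ** k

def euler_transform_estimate(phi, t, k):
    """Truncated Euler transform: sum over i of (-t)^(i+1) * P(X >= i) * phi_i."""
    return sum((-t) ** (i + 1) * tail_prob(i, k, t) * count
               for i, count in enumerate(phi, start=1))

# Same assumed counts as in the history section; with t = 1 and k = 15 the
# smoothed estimate comes out near 82, comparable to Fisher's unsmoothed 75.
phi = [118, 74, 44, 24, 29, 22, 20, 19, 20, 15, 12, 14, 6, 12, 6]
print(round(euler_transform_estimate(phi, t=1.0, k=15), 1))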
Smoothed Good–Toulmin estimator.
Similar to the approach by Efron and Thisted, Alon Orlitsky, Ananda Theertha Suresh, and Yihong Wu developed the smooth Good–Toulmin estimator. They realized that the Good–Toulmin estimator failed because of the exponential growth, and not its bias. Therefore, they estimated the number of unseen species by truncating the series
formula_27
Orlitsky, Suresh, and Wu also noted that for distributions with formula_28, the driving term in the summation estimate is the formula_29 term, regardless of which value of formula_30 is chosen. To solve this, they selected a random nonnegative integer formula_31, truncated the series at formula_31, and then took the average over a distribution about formula_31. The resulting estimator is
formula_32
This method was chosen because the bias of formula_33 shifts signs due to the formula_34 coefficient. Averaging over a distribution of formula_31 therefore reduces the bias. This means that the estimator can be written as the linear combination of the prevalence:
formula_35
Depending on the distribution of formula_31 chosen, the results will vary. With this method, estimates can be made for formula_36, and this is the best possible.
Species discovery curve.
The species discovery curve can also be used. This curve relates the number of species found in an area as a function of the time. These curves can also be created by using estimators (such as the Good–Toulmin estimator) and plotting the number of unseen species at each value for formula_37.
A species discovery curve is always increasing, as there is never a sample that could decrease the number of discovered species. Furthermore, the species discovery curve is also decelerating – the more samples taken, the fewer unseen species are expected to be discovered. The species discovery curve will also never asymptote, as it is assumed that although the discovery rate might become infinitely slow, it will never actually stop. Two common models for a species discovery curve are the logarithmic and the exponential function.
Example: Corbet's butterflies.
As an example, consider the data Corbet provided Fisher in the 1940s. Using the Good–Toulmin model, the number of unseen species is found using
formula_38
This can then be used to create a relationship between formula_37 and formula_15.
At formula_39, which was the value of formula_37 that Corbet brought to Fisher, the resulting estimate of formula_15 is 75, matching what Fisher found. This relationship also acts as a species discovery curve for this ecosystem and defines how many new species will be discovered as formula_37 increases (and more samples are taken).
Other uses.
There are numerous uses for the predictive algorithm. Because the estimators are accurate, they allow scientists to extrapolate the results of polling people by a factor of 2: the number of unique answers can be predicted from the number of people that have answered similarly. The method can also be used to estimate the extent of someone's knowledge.
Example: How many words did Shakespeare know?
Based on research of Shakespeare's known works done by Thisted and Efron, there are 884,647 total words. The research also found that there are a total of formula_40 different words that appear more than 100 times. Therefore, the total number of unique words was found to be 31,534. Applying the Good–Toulmin model, if an equal number of works by Shakespeare were discovered, then it is estimated that formula_41 unique words would be found. The goal would be to derive formula_42 for formula_43. Thisted and Efron estimate that formula_44, meaning that Shakespeare most likely knew over twice as many words as he actually used in all of his writings.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "X^n \\triangleq X_1, \\ldots, X_n"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "\nU \\triangleq U(X^n, X_{n+1}^{m+n}) \\triangleq \\left|\\{X_{n+1}^{m+n}\\} \\setminus \\{X^n\\}\\right|,\n"
},
{
"math_id": 4,
"text": "X_{n+1}^{m+n} \\triangleq X_{n+1}, \\ldots, X_{n+m}"
},
{
"math_id": 5,
"text": "\nU = \\sum_{i=1}^n (-1)^{i+1} \\varphi_i = 118 - 74 + 44 - 24 + \\cdots - 12 + 6 = 75.\n"
},
{
"math_id": 6,
"text": "\\varphi_i"
},
{
"math_id": 7,
"text": "i"
},
{
"math_id": 8,
"text": "t \\triangleq m/n"
},
{
"math_id": 9,
"text": "m = t n"
},
{
"math_id": 10,
"text": "\\varphi_2 = 74"
},
{
"math_id": 11,
"text": "\nU^\\text{GT} \\triangleq U^\\text{GT}(X^n, t) \\triangleq \\sum_{i=1}^\\infty (-t)^{i+1} \\varphi_i.\n"
},
{
"math_id": 12,
"text": "t \\leq 1."
},
{
"math_id": 13,
"text": "\n\\operatorname{\\mathbb E}(U^\\text{GT} - U)^2 \\lesssim nt^2.\n"
},
{
"math_id": 14,
"text": "U^\\text{GT}"
},
{
"math_id": 15,
"text": "U"
},
{
"math_id": 16,
"text": "\\sqrt{n} \\cdot t,"
},
{
"math_id": 17,
"text": "t > 1,"
},
{
"math_id": 18,
"text": "(-t)^i \\varphi_i"
},
{
"math_id": 19,
"text": "\\varphi_i > 0,"
},
{
"math_id": 20,
"text": "t,"
},
{
"math_id": 21,
"text": "t."
},
{
"math_id": 22,
"text": "\nU^\\text{ET} \\triangleq \\sum_{i=1}^n h_h^\\text{ET} \\cdot \\varphi_i,\n"
},
{
"math_id": 23,
"text": "\nh_i^\\text{ET} \\triangleq (-t)^{i+1} \\cdot \\mathbb{P}(X \\geq i),\n"
},
{
"math_id": 24,
"text": "X \\sim \\operatorname{Bin}\\left(k, \\frac{1}{1 + t}\\right),"
},
{
"math_id": 25,
"text": "\n\\mathbb{P}(X \\geq i) =\n\\begin{cases}\n \\displaystyle \\sum_{j=i}^k \\binom{k}{j} \\frac{t^{k-j}}{(1 + t)^k} & \\text{ for } i \\leq k, \\\\\n 0 & \\text{ for } i > k,\n\\end{cases}\n"
},
{
"math_id": 26,
"text": "k"
},
{
"math_id": 27,
"text": "\nU^l \\triangleq -\\sum_{i=1}^l (-t)^i \\varphi_i.\n"
},
{
"math_id": 28,
"text": "t > 1"
},
{
"math_id": 29,
"text": "l-\\text{th}"
},
{
"math_id": 30,
"text": "l"
},
{
"math_id": 31,
"text": "L"
},
{
"math_id": 32,
"text": "\nU^L = \\operatorname{E}_L\\left[-\\sum_{i=1}^L (-t)^i \\varphi_i\\right].\n"
},
{
"math_id": 33,
"text": "U^l"
},
{
"math_id": 34,
"text": "(-t)^i"
},
{
"math_id": 35,
"text": "\nU^L = \\operatorname{E}_L\\left[-\\sum_{i\\geq 1} (-t)^i \\varphi_i \\mathbf{1}_{i\\leq L}\\right]\n= -\\sum_{i \\geq 1} (-t)^i \\Pr(L \\geq i) \\varphi_i.\n"
},
{
"math_id": 36,
"text": "t \\propto \\ln n"
},
{
"math_id": 37,
"text": "t"
},
{
"math_id": 38,
"text": "\nU = -\\sum_{i=1}^\\infty (-t)^i \\varphi_i.\n"
},
{
"math_id": 39,
"text": "t = 1"
},
{
"math_id": 40,
"text": "N = 864"
},
{
"math_id": 41,
"text": "U^\\text{words} \\approx 11{,}460"
},
{
"math_id": 42,
"text": "U^\\text{words}"
},
{
"math_id": 43,
"text": "t = \\infty"
},
{
"math_id": 44,
"text": "U^\\text{words}(t \\to \\infty) \\approx 35{,}000"
}
] |
https://en.wikipedia.org/wiki?curid=57638895
|
5764764
|
Radiation trapping
|
Radiation trapping, imprisonment of resonance radiation, radiative transfer of spectral lines, line transfer or radiation diffusion is a phenomenon in physics whereby radiation may be "trapped" in a system as it is emitted by one atom and absorbed by another.
Classical description.
Classically, one can think of radiation trapping as a multiple-scattering phenomenon, in which a photon is scattered by multiple atoms in a cloud. This motivates treatment as a diffusion problem. As such, one can primarily consider the mean free path of light, defined as the reciprocal of the product of the density of scatterers and the scattering cross section:
formula_0
One can assume for simplicity that the scattering diagram is isotropic, which ends up being a good approximation for atoms with equally populated sublevels of total angular momentum. In the classical limit, we can think of the electromagnetic energy density as what is being diffused. So, we consider the diffusion constant in three dimensions,
formula_1
where formula_2 is the transport time. The transport time accounts for both the group delay between scattering events and Wigner's delay time, which is associated with an elastic scattering process. It is written as
formula_3
where formula_4 is the group velocity. When the photons are near resonance, the lifetime of an excited state in the atomic vapor is equal to the transport time, formula_5, independent of the detuning. This comes in handy, since the average number of scattering events is the ratio of the time spent in the system to the lifetime of the excited state (or equivalently, the scattering time). Since in a 3D diffusion process the electromagnetic energy density spreads as formula_6, we can find the average number of scattering events for a photon before it escapes:
formula_7
Finally, the number of scattering events can be related to the optical depth formula_8 as follows. Since formula_9, the number of scattering events scales with the square of the optical depth.
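This diffusive scaling can be demonstrated with a toy Monte Carlo calculation. The Python sketch below is a deliberately simplified classical random walk (isotropic scattering, exponentially distributed path lengths, and no frequency redistribution, so it is not a full Holstein-type simulation): photons start at the centre of a sphere whose radius is formula_8 mean free paths, and the number of scatterings before escape is counted; its average grows roughly like the square of formula_8:
import math, random

def scatterings_until_escape(b, rng):
    """Count scattering events before a photon leaves a sphere of radius b
    (distances measured in mean free paths)."""
    x = y = z = 0.0
    n = 0
    while True:
        cos_t = rng.uniform(-1.0, 1.0)              # isotropic direction
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        step = rng.expovariate(1.0)                 # exponentially distributed free path
        x += step * sin_t * math.cos(phi)
        y += step * sin_t * math.sin(phi)
        z += step * cos_t
        if x * x + y * y + z * z >= b * b:
            return n                                # the photon escaped the vapor
        n += 1                                      # absorbed and re-emitted once more

rng = random.Random(0)
for b in (2, 4, 8):
    mean = sum(scatterings_until_escape(b, rng) for _ in range(2000)) / 2000
    print(b, round(mean, 1))                        # grows roughly like b squared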
Derivation of the Holstein equation.
In 1947, Theodore Holstein attacked the problem of imprisonment of resonance radiation in a novel way. Foregoing the classical method presented in the prior section, Holstein asserted that there could not exist a mean free path for the photons. His treatment begins with the introduction of a probability function formula_10, which describes the probability that a photon emitted at one point is absorbed within the volume element formula_12 about the point formula_11. Additionally, one can enforce atom number conservation to write
formula_13
where formula_14 represent, respectively, the increase and decrease in the number of excited atoms, and formula_15 is the number density of excited atoms. If the reciprocal lifetime of an excited atom is given by formula_16, then formula_17 is given by
formula_18
Then formula_19 is obtained by considering all other volume elements, which is where the introduction of formula_20 becomes useful. The contribution of an outside volume formula_21 to the number of excited atoms is given by the number of photons emitted by that outside volume formula_21 multiplied by the probability that those photons are absorbed within the volume formula_12. Integration over all outside volume elements yields
formula_22
Substituting formula_19 and formula_17 into the particle conservation law, we arrive at an integral equation for the density of excited atoms – the Holstein equation
formula_23
Finding the escape probability of photons from the Holstein equation.
Now to find the escape probability of the photons, we consider solutions by ansatz of the form
formula_24
Observing the Holstein equation, one can note that these solutions are subject to the constraint
formula_25
Aided by the exchange symmetry of formula_26, namely that formula_27, one can use variational methods to assert that formula_28 leads to
formula_29
Completing the square and introducing the escape probability formula_30, whose definition follows from the fact that every photon must either be absorbed or escape, with total probability 1, an equation in terms of the escape probability is derived:
formula_31
Numerical methods for solving the Holstein equation.
Many contemporary studies in atomic physics utilize numerical solutions to Holstein's equation to both show the presence of radiation trapping in their experimental system and to discuss its effects on the atomic spectra. Radiation trapping has been observed in a variety of experiments, including in the trapping of cesium atoms in a magneto-optical trap (MOT), in the spectroscopic characterization of dense Rydberg gases of strontium atoms, and in lifetime analyses of doped ytterbium(III) oxide for laser improvement.
To solve or simulate the Holstein equation, the Monte Carlo method is commonly employed. An absorption coefficient is calculated for an experiment with a certain opacity, atomic species, Doppler-broadened lineshape, etc., and then a test is made to see whether the photon escapes after formula_32 flights through the atomic vapor (see Figure 1 in the reference).
Other methods include transforming the Holstein equation into a linear generalized eigenvalue problem, which is more computationally expensive and requires the usage of several simplifying assumptions, including but not limited to that the lowest eigenmode of the Holstein equation is parabolic in shape, the atomic vapor is spherical, the atomic vapor has reached a steady state after the near-resonant laser has been shut off, etc.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ell_\\text{mf} = \\frac{1}{\\rho\\sigma_\\text{sc}}."
},
{
"math_id": 1,
"text": "D = \\frac{\\ell^2_\\text{mf}}{3\\tau_r},"
},
{
"math_id": 2,
"text": "\\tau_r"
},
{
"math_id": 3,
"text": "\\tau_r = \\frac{\\ell_\\text{mf}}{\\nu_\\text{g}} + \\tau_\\text{W},"
},
{
"math_id": 4,
"text": "\\nu_\\text{g}"
},
{
"math_id": 5,
"text": "\\tau_{r} = \\tau_{at}"
},
{
"math_id": 6,
"text": "\\langle r^2\\rangle = 6Dt"
},
{
"math_id": 7,
"text": "\\langle N^2_\\text{sc}\\rangle = \\frac{\\langle r^2 \\rangle}{6D\\tau_{at}}."
},
{
"math_id": 8,
"text": "b"
},
{
"math_id": 9,
"text": "\\sqrt{\\langle r^2\\rangle} \\sim b\\ell_\\text{mf}"
},
{
"math_id": 10,
"text": "G(\\mathbf r, \\mathbf r')\\,d\\mathbf r"
},
{
"math_id": 11,
"text": "\\mathbf r"
},
{
"math_id": 12,
"text": "d\\mathbf r"
},
{
"math_id": 13,
"text": "A - B = dt\\,d\\mathbf r\\,\\frac{\\partial n(\\mathbf r)}{\\partial t},"
},
{
"math_id": 14,
"text": "A, B"
},
{
"math_id": 15,
"text": "n(\\mathbf r)"
},
{
"math_id": 16,
"text": "\\Gamma"
},
{
"math_id": 17,
"text": "B"
},
{
"math_id": 18,
"text": "B = \\Gamma n(\\mathbf r)\\,d\\mathbf r\\,dt."
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "G(\\mathbf r, \\mathbf r')"
},
{
"math_id": 21,
"text": "d\\mathbf r'"
},
{
"math_id": 22,
"text": "A = \\Gamma \\,dt\\,d\\mathbf r\\,\\int d\\mathbf r'\\, n(\\mathbf r') G(\\mathbf r, \\mathbf r')."
},
{
"math_id": 23,
"text": "\\frac{\\partial n(\\mathbf r)}{\\partial t} = -\\Gamma n(\\mathbf r) + \\Gamma \\int d\\mathbf r'\\, n(\\mathbf r') G(\\mathbf r, \\mathbf r')."
},
{
"math_id": 24,
"text": "n(\\mathbf r, t) = n(r) e^{\\beta}."
},
{
"math_id": 25,
"text": "(1 - \\beta/\\Gamma) n(\\mathbf r) = \\int d\\mathbf r'\\, n(\\mathbf r') G(\\mathbf r, \\mathbf r')."
},
{
"math_id": 26,
"text": "G"
},
{
"math_id": 27,
"text": "G(\\mathbf r, \\mathbf r') = G(\\mathbf r', \\mathbf r)"
},
{
"math_id": 28,
"text": "\\delta(\\beta/\\Gamma) = 0"
},
{
"math_id": 29,
"text": "\\frac{\\beta}{\\Gamma} = 1 - \\frac{\\iint d\\mathbf r\\,d\\mathbf r'\\, G(\\mathbf r, \\mathbf r') n(\\mathbf r) n(\\mathbf r')}{\\int d\\mathbf r\\, n^2(\\mathbf r)}."
},
{
"math_id": 30,
"text": "E(\\mathbf r) \\equiv 1 - \\int d\\mathbf r'\\, G(\\mathbf r, \\mathbf r')"
},
{
"math_id": 31,
"text": "\\frac{\\beta}{\\Gamma} = \\frac{\\int d\\mathbf r\\, n^2(\\mathbf r) E(\\mathbf r) + \\frac{1}{2} \\iint d\\mathbf r\\,d\\mathbf r'\\,[n(\\mathbf r) - n(\\mathbf r')]^2 G(\\mathbf r, \\mathbf r')}{\\int d\\mathbf r\\, n^2(\\mathbf r)}."
},
{
"math_id": 32,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=5764764
|
57656
|
Jacobson radical
|
In mathematics, more specifically ring theory, the Jacobson radical of a ring "R" is the ideal consisting of those elements in "R" that annihilate all simple right "R"-modules. It happens that substituting "left" in place of "right" in the definition yields the same ideal, and so the notion is left–right symmetric. The Jacobson radical of a ring is frequently denoted by J("R") or rad("R"); the former notation will be preferred in this article, because it avoids confusion with other radicals of a ring. The Jacobson radical is named after Nathan Jacobson, who was the first to study it for arbitrary rings.
The Jacobson radical of a ring has numerous internal characterizations, including a few definitions that successfully extend the notion to non-unital rings. The radical of a module extends the definition of the Jacobson radical to include modules. The Jacobson radical plays a prominent role in many ring- and module-theoretic results, such as Nakayama's lemma.
Definitions.
There are multiple equivalent definitions and characterizations of the Jacobson radical, but it is useful to consider the definitions based on whether or not the ring is commutative.
Commutative case.
In the commutative case, the Jacobson radical of a commutative ring "R" is defined as the intersection of all maximal ideals formula_0. If we denote by Specm "R" the set of all maximal ideals in "R", then formula_1 This definition can be used for explicit calculations in a number of simple cases, such as for local rings ("R", formula_2), which have a unique maximal ideal, Artinian rings, and products thereof. See the examples section for explicit computations.
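As a small illustration of the commutative definition (this example is not taken from the article's examples section), the Python sketch below computes J(Z/"n"Z) in two ways: as the ideal generated by the product of the distinct primes dividing "n" (the intersection of the maximal ideals), and through the standard element-wise characterization that "r" lies in the Jacobson radical exactly when 1 − "rx" is a unit for every "x"; the two computations agree:
from math import gcd

def radical_generator(n):
    """Product of the distinct primes dividing n."""
    rad, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            rad *= p
            while m % p == 0:
                m //= p
        p += 1
    return rad * m if m > 1 else rad

def jacobson_radical_from_maximal_ideals(n):
    """J(Z/nZ): multiples of the product of the distinct primes dividing n."""
    g = radical_generator(n)
    return {r for r in range(n) if r % g == 0}

def jacobson_radical_elementwise(n):
    """r is in J(Z/nZ) iff 1 - r*x is a unit (coprime to n) for every x."""
    return {r for r in range(n)
            if all(gcd((1 - r * x) % n, n) == 1 for x in range(n))}

print(sorted(jacobson_radical_from_maximal_ideals(12)))   # [0, 6]: J(Z/12Z) = (6)
print(jacobson_radical_from_maximal_ideals(12) == jacobson_radical_elementwise(12))  # True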
Noncommutative/general case.
For a general ring with unity "R", the Jacobson radical J("R") is defined as the ideal of all elements "r" ∈ "R" such that "rM" = 0 whenever "M" is a simple "R"-module. That is,
formula_3
This is equivalent to the definition in the commutative case for a commutative ring "R" because the simple modules over a commutative ring are of the form "R" / formula_0 for some maximal ideal formula_0 of "R", and the annihilators of "R" / formula_0 in "R" are precisely the elements of formula_0, i.e. Ann"R"("R" / formula_0) = formula_0.
Motivation.
Understanding the Jacobson radical lies in a few different cases: namely its applications and the resulting geometric interpretations, and its algebraic interpretations.
Geometric applications.
Although Jacobson originally introduced his radical as a technique for building a theory of radicals for arbitrary rings, one of the motivating reasons for why the Jacobson radical is considered in the commutative case is because of its appearance in Nakayama's lemma. This lemma is a technical tool for studying finitely generated modules over commutative rings that has an easy geometric interpretation: If we have a vector bundle "E" → "X" over a topological space "X", and pick a point "p" ∈ "X", then any basis of "E"|"p" can be extended to a basis of sections of "E"|"U" → "U" for some neighborhood "p" ∈ "U" ⊆ "X".
Another application is in the case of finitely generated commutative rings of the form formula_4 for some base ring "k" (such as a field, or the ring of integers). In this case the nilradical and the Jacobson radical coincide. This means that, by the Hilbert Nullstellensatz, the Jacobson radical can be interpreted as a measure of how far the ideal "I" defining the ring "R" is from defining the ring of functions on an algebraic variety. This is because algebraic varieties cannot have a ring of functions with infinitesimals: this is a structure that is only considered in scheme theory.
Equivalent characterizations.
The Jacobson radical of a ring has various internal and external characterizations. The following equivalences appear in many standard texts on noncommutative algebra.
The following are equivalent characterizations of the Jacobson radical in rings with unity (characterizations for rings without unity are given immediately afterward):
For rings without unity it is possible to have "R" = J("R"); however, the equation J("R" / J("R")) = {0} still holds. The following are equivalent characterizations of J("R") for rings without unity:
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathfrak{m}"
},
{
"math_id": 1,
"text": "\\mathrm{J}(R) = \\bigcap_{\n \\mathfrak{m} \\,\\in\\, \\operatorname{Specm}R\n} \\mathfrak{m}"
},
{
"math_id": 2,
"text": "\\mathfrak{p}"
},
{
"math_id": 3,
"text": "\\mathrm{J}(R) = \\{r \\in R \\mid rM = 0 \\text{ where } M \\text{ is simple} \\}."
},
{
"math_id": 4,
"text": "R = k[x_1,\\ldots, x_n]\\,/\\,I "
}
] |
https://en.wikipedia.org/wiki?curid=57656
|
57663216
|
Abstract differential equation
|
In mathematics, an abstract differential equation is a differential equation in which the unknown function and its derivatives take values in some generic abstract space (a Hilbert space, a Banach space, etc.). Equations of this kind arise e.g. in the study of partial differential equations: if one of the variables is given a privileged position (e.g. time, in heat or wave equations) and all the others are grouped together, an ordinary differential equation with respect to the distinguished variable is obtained. Adding boundary conditions can often be translated into considering solutions in some convenient function space.
The classical abstract differential equation which is most frequently encountered is the equation
formula_0
where the unknown function formula_1 belongs to some function space formula_2, formula_3 and formula_4 is an operator (usually a linear operator) acting on this space. An exhaustive treatment of the homogeneous (formula_5) case with a constant operator is given by the theory of C0-semigroups. Very often, the study of other abstract differential equations amounts (by e.g. reduction to a set of equations of the first order) to the study of this equation.
The theory of abstract differential equations was founded by Einar Hille in several papers and in his book "Functional Analysis and Semi-Groups." Other main contributors were Kōsaku Yosida, Ralph Phillips, Isao Miyadera, and Selim Grigorievich Krein.
Abstract Cauchy problem.
Definition.
Let formula_6 and formula_7 be two linear operators, with domains formula_8 and formula_9, acting in a Banach space formula_2. A function formula_10 is said to have strong derivative (or to be Frechet differentiable or simply differentiable) at the point formula_11 if there exists an element formula_12 such that
formula_13
and its derivative is formula_14.
A solution of the equation
formula_15
is a function formula_16 such that:
The Cauchy problem consists in finding a solution of the equation, satisfying the initial condition formula_22.
Well posedness.
According to the definition of well-posed problem by Hadamard, the Cauchy problem is said to be well posed (or correct) on formula_23 if:
A well posed Cauchy problem is said to be uniformly well posed if formula_25 implies formula_27 uniformly in formula_21 on each finite interval formula_29.
Semigroup of operators associated to a Cauchy problem.
To an abstract Cauchy problem one can associate a semigroup of operators formula_30, i.e. a family of bounded linear operators depending on a parameter formula_21 (formula_31) such that
formula_32
Consider the operator formula_30 which assigns to the element formula_26 the value of the solution formula_33 of the Cauchy problem (formula_34) at the moment of time formula_35. If the Cauchy problem is well posed, then the operator formula_30 is defined on formula_36 and forms a semigroup.
Additionally, if formula_36 is dense in formula_2, the operator formula_30 can be extended to a bounded linear operator defined on the entire space formula_2. In this case one can associate to any formula_37 the function formula_38, for any formula_35. Such a function is called generalized solution of the Cauchy problem.
If formula_36 is dense in formula_2 and the Cauchy problem is uniformly well posed, then the associated semigroup formula_30 is a C0-semigroup in formula_2.
Conversely, if formula_6 is the infinitesimal generator of a C0-semigroup formula_30, then the Cauchy problem
formula_39
is uniformly well posed and the solution is given by
formula_40
Nonhomogeneous problem.
The Cauchy problem
formula_41
with formula_42, is called nonhomogeneous when formula_43. The following theorem gives some sufficient conditions for the existence of the solution:
Theorem. If formula_6 is an infinitesimal generator of a C0-semigroup formula_44 and formula_45 is continuously differentiable, then the function
formula_46
is the unique solution to the (abstract) nonhomogeneous Cauchy problem.
The integral on the right-hand side is to be understood as a Bochner integral.
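In the simplest concrete setting — a finite-dimensional space X = R^n with A a constant matrix, so that the semigroup is the matrix exponential — the formula above can be checked numerically. The following Python sketch is an illustration only, with an arbitrarily chosen matrix and forcing term; it compares the mild-solution formula against a direct numerical integration of the ODE.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

# Finite-dimensional illustration: X = R^2, A a constant matrix, T(t) = expm(t*A).
# The matrix and the forcing term f are arbitrary choices for the demonstration.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
u0 = np.array([1.0, 0.0])
f = lambda t: np.array([np.sin(t), 0.0])   # continuously differentiable forcing

def mild_solution(t, n_quad=2000):
    """u(t) = T(t) u0 + integral_0^t T(t-s) f(s) ds (the Bochner integral reduces
    to an ordinary vector-valued integral in finite dimensions)."""
    s = np.linspace(0.0, t, n_quad)
    integrand = np.array([expm((t - si) * A) @ f(si) for si in s])
    return expm(t * A) @ u0 + trapezoid(integrand, s, axis=0)

# Cross-check: integrate u' = A u + f(t), u(0) = u0 directly.
ref = solve_ivp(lambda t, u: A @ u + f(t), (0.0, 3.0), u0, rtol=1e-9, atol=1e-12)
print(mild_solution(3.0))   # the two results should agree closely
print(ref.y[:, -1])
```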
Time-dependent problem.
The problem of finding a solution to the initial value problem
formula_47
where the unknown is a function formula_48, formula_49 is given and, for each formula_50, formula_51 is a given, closed, linear operator in formula_2 with domain formula_52, independent of formula_21 and dense in formula_2, is called time-dependent Cauchy problem.
An operator valued function formula_53 with values in formula_54 (the space of all bounded linear operators from formula_2 to formula_2), defined and strongly continuous jointly in formula_55 for formula_56, is called a fundamental solution of the time-dependent problem if:
formula_53 is also called evolution operator, propagator, solution operator or Green's function.
A function formula_48 is called a mild solution of the time-dependent problem if it admits the integral representation
formula_62
There are various known sufficient conditions for the existence of the evolution operator formula_53. In practically all cases considered in the literature formula_63 is assumed to be the infinitesimal generator of a C0-semigroup on formula_2. Roughly speaking, if formula_63 is the infinitesimal generator of a contraction semigroup the equation is said to be of "hyperbolic type"; if formula_63 is the infinitesimal generator of an analytic semigroup the equation is said to be of "parabolic type".
Non linear problem.
The problem of finding a solution to either
formula_64
where formula_65 is given, or
formula_66
where formula_6 is a nonlinear operator with domain formula_67, is called nonlinear Cauchy problem.
References.
|
[
{
"math_id": 0,
"text": "\\frac{\\mathrm{d}u}{\\mathrm{d}t}=Au+f"
},
{
"math_id": 1,
"text": "u=u(t)"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "0\\le t\\le T \\le \\infin"
},
{
"math_id": 4,
"text": "A:X\\to X"
},
{
"math_id": 5,
"text": "f=0"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "B"
},
{
"math_id": 8,
"text": "D(A)"
},
{
"math_id": 9,
"text": "D(B)"
},
{
"math_id": 10,
"text": "u(t):[0,T]\\to X"
},
{
"math_id": 11,
"text": "t_0"
},
{
"math_id": 12,
"text": "y\\in X"
},
{
"math_id": 13,
"text": "\\lim_{h\\to 0}\\left\\|\\frac{u(t_0+h)-u(t_0)}{h}-y\\right\\|=0"
},
{
"math_id": 14,
"text": "u'(t_0)=y"
},
{
"math_id": 15,
"text": "B\\frac{\\mathrm{d}u}{\\mathrm{d}t}=Au"
},
{
"math_id": 16,
"text": "u(t):[0,\\infty)\\to D(A)\\cap D(B)"
},
{
"math_id": 17,
"text": "(Bu)(t)\\in C([0,\\infty);X),"
},
{
"math_id": 18,
"text": "u'(t)"
},
{
"math_id": 19,
"text": "\\forall t \\in [0,\\infty)"
},
{
"math_id": 20,
"text": "u'(t)\\in D(B)"
},
{
"math_id": 21,
"text": "t"
},
{
"math_id": 22,
"text": "u(0)=u_0 \\in D(A)\\cap D(B)"
},
{
"math_id": 23,
"text": "[0,\\infty)"
},
{
"math_id": 24,
"text": "u_0 \\in D(A)\\cap D(B)"
},
{
"math_id": 25,
"text": "u_n(0)\\to 0"
},
{
"math_id": 26,
"text": "u_n(0)\\in D(A)\\cap D(B)"
},
{
"math_id": 27,
"text": "u_n(t)\\to 0"
},
{
"math_id": 28,
"text": "t \\in [0,\\infty)."
},
{
"math_id": 29,
"text": "[0,T]"
},
{
"math_id": 30,
"text": "U(t)"
},
{
"math_id": 31,
"text": "0<t<\\infty"
},
{
"math_id": 32,
"text": "U(t_1+t_2)=U(t_1)U(t_2)\\quad (0<t_1,t_2<\\infty)."
},
{
"math_id": 33,
"text": "u(t)"
},
{
"math_id": 34,
"text": "u(0)=u_0"
},
{
"math_id": 35,
"text": "t>0"
},
{
"math_id": 36,
"text": "D(A)\\cap D(B)"
},
{
"math_id": 37,
"text": "x_0\\in X"
},
{
"math_id": 38,
"text": "U(t)x_0"
},
{
"math_id": 39,
"text": "\\frac{\\mathrm{d}u}{\\mathrm{d}t}=Au\\quad u(0)=u_0 \\in D(A)"
},
{
"math_id": 40,
"text": "u(t)=U(t)u_0."
},
{
"math_id": 41,
"text": "\\frac{\\mathrm{d}u}{\\mathrm{d}t}=Au+f \\quad u(0)=u_0\\in D(A)"
},
{
"math_id": 42,
"text": "f:[0,\\infty)\\to X"
},
{
"math_id": 43,
"text": "f(t)\\neq 0"
},
{
"math_id": 44,
"text": "T(t)"
},
{
"math_id": 45,
"text": "f"
},
{
"math_id": 46,
"text": "u(t)=T(t)u_0+\\int_0^t T(t-s)f(s) \\, ds,\\quad t\\geq 0"
},
{
"math_id": 47,
"text": "\\frac{\\mathrm{d}u}{\\mathrm{d}t}=A(t)u+f \\quad u(0)=u_0\\in D(A),"
},
{
"math_id": 48,
"text": "u:[0,T]\\to X"
},
{
"math_id": 49,
"text": "f:[0,T]\\to X"
},
{
"math_id": 50,
"text": "t\\in [0,T]"
},
{
"math_id": 51,
"text": "A(t)"
},
{
"math_id": 52,
"text": "D[A(t)]=D"
},
{
"math_id": 53,
"text": "U(t,\\tau)"
},
{
"math_id": 54,
"text": "B(X)"
},
{
"math_id": 55,
"text": "t,\\tau"
},
{
"math_id": 56,
"text": "0\\leq \\tau\\leq t\\leq T"
},
{
"math_id": 57,
"text": "\\frac{\\mathrm{\\delta}U(t,\\tau)}{\\mathrm{\\delta}t}"
},
{
"math_id": 58,
"text": "D"
},
{
"math_id": 59,
"text": "\\frac{\\mathrm{\\delta}U(t,\\tau)}{\\mathrm{\\delta}t}+A(t)U(t,\\tau)=0, \\quad 0\\leq \\tau\\leq t\\leq T,"
},
{
"math_id": 60,
"text": "U(\\tau,\\tau)=I"
},
{
"math_id": 61,
"text": "U(\\tau,\\tau)"
},
{
"math_id": 62,
"text": "u(t)=U(t,0)u_0+\\int_0^t U(t,s)f(s)\\,ds,\\quad t\\geq 0."
},
{
"math_id": 63,
"text": "-A(t)"
},
{
"math_id": 64,
"text": "\\frac{\\mathrm{d}u}{\\mathrm{d}t}=f(t,u) \\quad u(0)=u_0\\in X"
},
{
"math_id": 65,
"text": "f:[0,T]\\times X\\to X"
},
{
"math_id": 66,
"text": "\\frac{\\mathrm{d}u}{\\mathrm{d}t}=A(t)u \\quad u(0)=u_0\\in D(A)"
},
{
"math_id": 67,
"text": "D(A)\\in X"
}
] |
https://en.wikipedia.org/wiki?curid=57663216
|
57665268
|
Tits metric
|
In mathematics, the Tits metric is a metric defined on the ideal boundary of an Hadamard space (also called a complete CAT(0) space). It is named after Jacques Tits.
Ideal boundary of an Hadamard space.
Let ("X", "d") be an Hadamard space. Two geodesic rays "c"1, "c"2 : [0, ∞] → "X" are called asymptotic if they stay within a certain distance when traveling, i.e.
formula_0
Equivalently, the Hausdorff distance between the two rays is finite.
The asymptotic property defines an equivalence relation on the set of geodesic rays, and the set of equivalence classes is called the ideal boundary ∂"X" of "X". An equivalence class of geodesic rays is called a boundary point of "X". For any equivalence class of rays and any point "p" in "X", there is a unique ray in the class that issues from "p".
Definition of the Tits metric.
First we define an angle between boundary points with respect to a point "p" in "X". For any two boundary points formula_1 in ∂"X", take the two geodesic rays "c"1, "c"2 issuing from "p" corresponding to the two boundary points respectively. One can define an angle of the two rays at "p" called the Alexandrov angle. Intuitively, take the triangle with vertices "p", "c"1("t"), "c"2("t") for a small "t", and construct a triangle in the flat plane with the same side lengths as this triangle. Consider the angle at the vertex of the flat triangle corresponding to "p". The limit of this angle when "t" goes to zero is defined as the Alexandrov angle of the two rays at "p". (By definition of a CAT(0) space, the angle monotonically decreases as "t" decreases, so the limit exists.) Now we define formula_2 to be this angle.
To define the angular metric on the boundary ∂"X" that does not depend on the choice of "p", we take the supremum over all points in "X"
formula_3
The Tits metric "d"T is the length metric associated to the angular metric, that is for any two boundary points, the Tits distance between them is the infimum of lengths of all the curves on the boundary that connect them measured in the angular metric. If there is no such curve with finite length, the Tits distance between the two points is defined as infinity.
The ideal boundary of "X" equipped with the Tits metric is called the Tits boundary, denoted as ∂T"X".
For a complete CAT(0) space, it can be shown that its ideal boundary with the angular metric is a complete CAT(1) space, and its Tits boundary is also a complete CAT(1) space. Thus for any two boundary points formula_1 such that formula_4, we have
formula_5
and the points can be joined by a unique geodesic segment on the boundary. If the space is proper, then any two boundary points at finite Tits distance apart can be joined by a geodesic segment on the boundary.
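In the model case of the Hadamard space X = R^n, geodesic rays from a fixed point are straight half-lines, the ideal boundary can be identified with the unit sphere of directions, and the angle defined above is simply the Euclidean angle between directions (the Alexandrov angle is the same at every basepoint, so the supremum is attained everywhere). A minimal numerical sketch of this special case:

```python
import numpy as np

def boundary_angle(v1, v2):
    """Angle between two boundary points of X = R^n, each represented by the
    direction vector of a geodesic ray. In Euclidean space this equals both the
    angular metric and the Tits distance between the two boundary points."""
    u1 = np.asarray(v1, dtype=float)
    u2 = np.asarray(v2, dtype=float)
    u1 = u1 / np.linalg.norm(u1)
    u2 = u2 / np.linalg.norm(u2)
    return float(np.arccos(np.clip(u1 @ u2, -1.0, 1.0)))

print(boundary_angle([1, 0], [0, 1]))    # pi/2
print(boundary_angle([1, 0], [-1, 0]))   # pi (antipodal directions)
```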
|
[
{
"math_id": 0,
"text": "\\sup_{t \\ge 0} d(c_1(t), c_2(t)) < \\infty. "
},
{
"math_id": 1,
"text": "\\xi_1, \\xi_2"
},
{
"math_id": 2,
"text": "\\angle_p(\\xi_1, \\xi_2)"
},
{
"math_id": 3,
"text": "\\angle(\\xi_1, \\xi_2) := \\sup_{p\\in X}\\angle_p(\\xi_1, \\xi_2)."
},
{
"math_id": 4,
"text": "\\angle(\\xi_1, \\xi_2) < \\pi"
},
{
"math_id": 5,
"text": "d_\\mathrm{T}(\\xi_1, \\xi_2) = \\angle(\\xi_1,\\xi_2),"
}
] |
https://en.wikipedia.org/wiki?curid=57665268
|
5766547
|
Lebesgue differentiation theorem
|
Mathematical theorem in real analysis
In mathematics, the Lebesgue differentiation theorem is a theorem of real analysis, which states that for almost every point, the value of an integrable function is the limiting average taken around the point. The theorem is named for Henri Lebesgue.
Statement.
For a Lebesgue integrable real or complex-valued function "f" on R"n", the indefinite integral is a set function which maps a measurable set "A" to the Lebesgue integral of formula_0, where formula_1 denotes the characteristic function of the set "A". It is usually written
formula_2
with "λ" the "n"–dimensional Lebesgue measure.
The "derivative" of this integral at "x" is defined to be
formula_3
where |"B"| denotes the volume (i.e., the Lebesgue measure) of a ball "B" centered at "x", and "B" → "x" means that the diameter of "B" tends to 0.
The "Lebesgue differentiation theorem" states that this derivative exists and is equal to "f"("x") at almost every point "x" ∈ R"n". In fact a slightly stronger statement is true. Note that:
formula_4
The stronger assertion is that the right hand side tends to zero for almost every point "x". The points "x" for which this is true are called the Lebesgue points of "f".
A more general version also holds. One may replace the balls "B" by a family formula_5 of sets "U" of "bounded eccentricity". This means that there exists some fixed "c" > 0 such that each set "U" from the family is contained in a ball "B" with formula_6. It is also assumed that every point "x" ∈ R"n" is contained in arbitrarily small sets from formula_5. When these sets shrink to "x", the same result holds: for almost every point "x",
formula_7
The family of cubes is an example of such a family formula_5, as is the family formula_5("m") of rectangles in R2 such that the ratio of sides stays between "m"−1 and "m", for some fixed "m" ≥ 1. If an arbitrary norm is given on R"n", the family of balls for the metric associated to the norm is another example.
The one-dimensional case was proved earlier by . If "f" is integrable on the real line, the function
formula_8
is almost everywhere differentiable, with formula_9 Were formula_10 defined by a Riemann integral this would be essentially the fundamental theorem of calculus, but Lebesgue proved that it remains true when using the Lebesgue integral.
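As a purely numerical illustration of the one-dimensional statement (not part of the theorem or its proof), one can watch the averages over shrinking intervals converge to f(x) at a point where f is merely integrable, with a discontinuity elsewhere; the function and radii below are arbitrary choices:

```python
import numpy as np
from scipy.integrate import quad

# f is integrable and discontinuous at 0; every point other than 0 is a Lebesgue point.
f = lambda t: np.sign(t) + t**2

def ball_average(x, r):
    """(1/|B|) * integral of f over the ball B = [x - r, x + r], with |B| = 2r."""
    value, _ = quad(f, x - r, x + r)
    return value / (2 * r)

x = 0.5
for r in [1.0, 0.1, 0.01, 0.001]:
    print(r, ball_average(x, r))   # tends to f(0.5) = 1.25 as r -> 0
```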
Proof.
The theorem in its stronger form—that almost every point is a Lebesgue point of a locally integrable function "f"—can be proved as a consequence of the weak–"L"1 estimates for the Hardy–Littlewood maximal function. The proof below follows the standard treatment that can be found in , , and .
Since the statement is local in character, "f" can be assumed to be zero outside some ball of finite radius and hence integrable. It is then sufficient to prove that the set
formula_11
has measure 0 for all "α" > 0.
Let "ε" > 0 be given. Using the density of continuous functions of compact support in "L"1(R"n"), one can find such a function "g" satisfying
formula_12
It is then helpful to rewrite the main difference as
formula_13
The first term can be bounded by the value at "x" of the maximal function for "f" − "g", denoted here by formula_14:
formula_15
The second term disappears in the limit since "g" is a continuous function, and the third term is bounded by |"f"("x") − "g"("x")|. For the absolute value of the original difference to be greater than 2"α" in the limit, at least one of the first or third terms must be greater than "α" in absolute value. However, the estimate on the Hardy–Littlewood function says that
formula_16
for some constant "An" depending only upon the dimension "n". The Markov inequality (also called Tchebyshev's inequality) says that
formula_17
thus
formula_18
Since "ε" was arbitrary, it can be taken to be arbitrarily small, and the theorem follows.
Discussion of proof.
The Vitali covering lemma is vital to the proof of this theorem; its role lies in proving the estimate for the Hardy–Littlewood maximal function.
The theorem also holds if balls are replaced, in the definition of the derivative, by families of sets with diameter tending to zero satisfying the "Lebesgue's regularity condition", defined above as "family of sets with bounded eccentricity". This follows since the same substitution can be made in the statement of the Vitali covering lemma.
Discussion.
This is an analogue, and a generalization, of the fundamental theorem of calculus, which equates a Riemann integrable function and the derivative of its (indefinite) integral. It is also possible to show a converse – that every differentiable function is equal to the integral of its derivative, but this requires a Henstock–Kurzweil integral in order to be able to integrate an arbitrary derivative.
A special case of the Lebesgue differentiation theorem is the Lebesgue density theorem, which is equivalent to the differentiation theorem for characteristic functions of measurable sets. The density theorem is usually proved using a simpler method (e.g. see Measure and Category).
This theorem is also true for every finite Borel measure on R"n" instead of Lebesgue measure (a proof can be found in e.g. ). More generally, it is true of any finite Borel measure on a separable metric space such that at least one of the following holds:
A proof of these results can be found in sections 2.8–2.9 of (Federer 1969).
References.
|
[
{
"math_id": 0,
"text": "f \\cdot \\mathbf{1}_A"
},
{
"math_id": 1,
"text": "\\mathbf{1}_{A}"
},
{
"math_id": 2,
"text": " A \\mapsto \\int_{A} f\\ \\mathrm{d}\\lambda,"
},
{
"math_id": 3,
"text": "\\lim_{B \\to x} \\frac{1}{|B|} \\int_{B}f \\, \\mathrm{d}\\lambda,"
},
{
"math_id": 4,
"text": "\\left|\\frac{1}{|B|} \\int_{B}f(y) \\, \\mathrm{d}\\lambda(y) - f(x)\\right| = \\left|\\frac{1}{|B|} \\int_{B}(f(y) - f(x))\\, \\mathrm{d}\\lambda(y)\\right| \\le \\frac{1}{|B|} \\int_{B}|f(y) -f(x)|\\, \\mathrm{d}\\lambda(y)."
},
{
"math_id": 5,
"text": "\\mathcal{V}"
},
{
"math_id": 6,
"text": "|U| \\ge c \\, |B|"
},
{
"math_id": 7,
"text": " f(x) = \\lim_{U \\to x, \\, U \\in \\mathcal{V}} \\frac{1}{|U|} \\int_U f \\, \\mathrm{d}\\lambda."
},
{
"math_id": 8,
"text": "F(x) = \\int_{(-\\infty,x]} f(t) \\, \\mathrm{d} t"
},
{
"math_id": 9,
"text": "F'(x) = f(x)."
},
{
"math_id": 10,
"text": "F"
},
{
"math_id": 11,
"text": "E_\\alpha = \\Bigl\\{ x \\in \\mathbf{R}^n :\\limsup_{|B|\\rightarrow 0, \\, x \\in B} \\frac{1}{|B|} \\bigg|\\int_B f(y) -f(x)\\, \\mathrm{d}y\\bigg| > 2\\alpha \\Bigr\\}"
},
{
"math_id": 12,
"text": "\\|f - g\\|_{L^1} = \\int_{\\mathbf{R}^n} |f(x) - g(x)| \\, \\mathrm{d}x < \\varepsilon."
},
{
"math_id": 13,
"text": " \\frac{1}{|B|} \\int_B f(y) \\, \\mathrm{d}y - f(x) = \\Bigl(\\frac{1}{|B|} \\int_B \\bigl(f(y) - g(y)\\bigr) \\, \\mathrm{d}y \\Bigr) + \\Bigl(\\frac{1}{|B|}\\int_B g(y) \\, \\mathrm{d}y - g(x) \\Bigr)+ \\bigl(g(x) - f(x)\\bigr)."
},
{
"math_id": 14,
"text": "(f-g)^*(x)"
},
{
"math_id": 15,
"text": " \\frac{1}{|B|} \\int_B |f(y) - g(y)| \\, \\mathrm{d}y \\leq \\sup_{r>0} \\frac{1}{|B_r(x)|}\\int_{B_r(x)} |f(y)-g(y)| \\, \\mathrm{d}y = (f-g)^*(x)."
},
{
"math_id": 16,
"text": " \\Bigl| \\left \\{ x : (f-g)^*(x) > \\alpha \\right \\} \\Bigr| \\leq \\frac{A_n}{\\alpha} \\, \\|f - g\\|_{L^1} < \\frac{A_n}{\\alpha} \\, \\varepsilon,"
},
{
"math_id": 17,
"text": " \\Bigl|\\left\\{ x : |f(x) - g(x)| > \\alpha \\right \\}\\Bigr| \\leq \\frac{1}{\\alpha} \\, \\|f - g\\|_{L^1} < \\frac{1}{\\alpha} \\, \\varepsilon"
},
{
"math_id": 18,
"text": " |E_\\alpha| \\leq \\frac{A_n+1}{\\alpha} \\, \\varepsilon."
}
] |
https://en.wikipedia.org/wiki?curid=5766547
|
57667287
|
Dirichlet hyperbola method
|
Mathematical tool for summing multiplicative functions
In number theory, the Dirichlet hyperbola method is a technique to evaluate the sum
formula_0
where formula_1 is a multiplicative function. The first step is to find a pair of multiplicative functions formula_2 and formula_3 such that, using Dirichlet convolution, we have formula_4; the sum then becomes
formula_5
where the inner sum runs over all ordered pairs formula_6 of positive integers such that formula_7. In the Cartesian plane, these pairs lie on a hyperbola, and when the double sum is fully expanded, there is a bijection between the terms of the sum and the lattice points in the first quadrant on the hyperbolas of the form formula_7, where formula_8 runs over the integers formula_9: for each such point formula_6, the sum contains a term formula_10, and vice versa.
Let formula_11 be a real number, not necessarily an integer, such that formula_12, and let formula_13. Then the lattice points can be split into three overlapping regions: one region is bounded by formula_14 and formula_15, another region is bounded by formula_16 and formula_17, and the third is bounded by formula_14 and formula_16. In the diagram, the first region is the union of the blue and red regions, the second region is the union of the red and green, and the third region is the red. Note that this third region is the intersection of the first two regions. By the principle of inclusion and exclusion, the full sum is therefore the sum over the first region, plus the sum over the second region, minus the sum over the third region. This yields the formula
formula_31 (1)
Examples.
Let formula_18 be the divisor-counting function, and let formula_19 be its summatory function:
formula_20
Computing formula_19 naïvely requires factoring every integer in the interval formula_21; an improvement can be made by using a modified Sieve of Eratosthenes, but this still requires formula_22 time. Since formula_23 admits the Dirichlet convolution formula_24, taking formula_25 in (1) yields the formula
formula_26
which simplifies to
formula_27
which can be evaluated in formula_28 operations.
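A short sketch of this computation in Python (the naive double loop is included only as a cross-check for small n; the function names are ad hoc):

```python
import math

def divisor_summatory(n):
    """D(n) = sum_{k<=n} sigma_0(k) via the hyperbola method, in O(sqrt(n)) time."""
    a = math.isqrt(n)
    return 2 * sum(n // x for x in range(1, a + 1)) - a * a

def divisor_summatory_naive(n):
    """Direct cross-check by trial division, only usable for small n."""
    return sum(sum(1 for d in range(1, k + 1) if k % d == 0) for k in range(1, n + 1))

assert divisor_summatory(1000) == divisor_summatory_naive(1000)
print(divisor_summatory(10**12))   # feasible: only about 10**6 terms are summed
```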
The method also has theoretical applications: for example, Peter Gustav Lejeune Dirichlet introduced the technique in 1849 to obtain the estimate
formula_29
where formula_30 is the Euler–Mascheroni constant.
|
[
{
"math_id": 0,
"text": "F(n) = \\sum_{k=1}^n f(k),"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "h"
},
{
"math_id": 4,
"text": "f = g \\ast h"
},
{
"math_id": 5,
"text": "F(n) = \\sum_{k=1}^n \\sum_{xy=k}^{} g(x) h(y),"
},
{
"math_id": 6,
"text": "(x,y)"
},
{
"math_id": 7,
"text": "xy=k"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "1 \\leq k \\leq n"
},
{
"math_id": 10,
"text": "g(x)h(y)"
},
{
"math_id": 11,
"text": "a"
},
{
"math_id": 12,
"text": "1 < a < n"
},
{
"math_id": 13,
"text": "b = n/a"
},
{
"math_id": 14,
"text": "1 \\leq x \\leq a"
},
{
"math_id": 15,
"text": "1 \\leq y \\leq n/x"
},
{
"math_id": 16,
"text": "1 \\leq y \\leq b"
},
{
"math_id": 17,
"text": " 1 \\leq x \\leq n/y"
},
{
"math_id": 18,
"text": "\\sigma_0(n)"
},
{
"math_id": 19,
"text": "D(n)"
},
{
"math_id": 20,
"text": "D(n) = \\sum_{k=1}^n \\sigma_0(k)."
},
{
"math_id": 21,
"text": "[1, n]"
},
{
"math_id": 22,
"text": "\\widetilde{O}(n)"
},
{
"math_id": 23,
"text": "\\sigma_0"
},
{
"math_id": 24,
"text": "\\sigma_0 = 1 \\ast 1"
},
{
"math_id": 25,
"text": "a=b=\\sqrt{n}"
},
{
"math_id": 26,
"text": "D(n) = \\sum_{x=1}^{\\sqrt{n}} \\sum_{y=1}^{n/x} 1 \\cdot 1 + \\sum_{y=1}^{\\sqrt{n}} \\sum_{x=1}^{n/y} 1 \\cdot 1 - \\sum_{x=1}^{\\sqrt{n}} \\sum_{y=1}^{\\sqrt{n}} 1 \\cdot 1,"
},
{
"math_id": 27,
"text": "D(n) = 2 \\cdot \\sum_{x=1}^{\\sqrt{n}} \\left\\lfloor \\frac{n}{x} \\right\\rfloor - \\left\\lfloor \\sqrt{n} \\right\\rfloor^2,"
},
{
"math_id": 28,
"text": "O\\left(\\sqrt{n}\\right)"
},
{
"math_id": 29,
"text": "D(n) = n \\log n + (2\\gamma - 1)n + O(\\sqrt{n}),"
},
{
"math_id": 30,
"text": "\\gamma"
},
{
"math_id": 31,
"text": "\\sum_{k=1}^n f(k) = \\sum_{x=1}^{a} \\sum_{y=1}^{n/x} g(x) h(y) + \\sum_{y=1}^{b} \\sum_{x=1}^{n/y} g(x) h(y) - \\sum_{x=1}^{a} \\sum_{y=1}^{b} g(x) h(y)"
}
] |
https://en.wikipedia.org/wiki?curid=57667287
|
576681
|
Speedometer
|
Speed gauge in motor vehicles
A speedometer or speed meter is a gauge that measures and displays the instantaneous speed of a vehicle. Now universally fitted to motor vehicles, they started to be available as options in the early 20th century, and as standard equipment from about 1910 onwards. Other vehicles may use devices analogous to the speedometer with different means of sensing speed, e.g. boats use a pit log, while aircraft use an airspeed indicator.
Charles Babbage is credited with creating an early type of speedometer, which was usually fitted to locomotives.
The electric speedometer was invented by the Croat Josip Belušić in 1888 and was originally called a velocimeter.
Operation.
The speedometer was originally patented by Josip Belušić (Giuseppe Bellussich) in 1888. He presented his invention at the 1889 Exposition Universelle in Paris. His invention had a pointer and a magnet, using electricity to work.
German inventor Otto Schultze patented his version (which, like Belušić's, ran on eddy currents) on 7 October 1902.
Mechanical.
Many speedometers use a rotating flexible cable driven by gearing linked to the vehicle's transmission. The early Volkswagen Beetle and many motorcycles, however, use a cable driven from a front wheel.
Some early mechanical speedometers operated on the governor principle where a rotating weight acting against a spring moved further out as the speed increased, similar to the governor used on steam engines. This movement was transferred to the pointer to indicate speed.
This was followed by the Chronometric speedometer where the distance traveled was measured over a precise interval of time (Some Smiths speedometers used 3/4 of a second) measured by an escapement. This was transferred to the speedometer pointer. The chronometric speedometer is tolerant of vibration and was used in motorcycles up to the 1970s.
When the vehicle is in motion, a speedometer gear assembly turns a speedometer cable, which then turns the speedometer mechanism itself. A small permanent magnet affixed to the speedometer cable interacts with a small aluminium cup (called a "speedcup") attached to the shaft of the pointer on the analogue speedometer instrument. As the magnet rotates near the cup, the changing magnetic field produces eddy current in the cup, which itself produces another magnetic field. The effect is that the magnet exerts a torque on the cup, "dragging" it, and thus the speedometer pointer, in the direction of its rotation with no mechanical connection between them.
The pointer shaft is held toward zero by a fine torsion spring. The torque on the cup increases with the speed of rotation of the magnet. Thus an increase in the speed of the car will twist the cup and speedometer pointer against the spring. The cup and pointer will turn until the torque of the eddy currents on the cup is balanced by the opposing torque of the spring, and then stop. Given the torque on the cup is proportional to the car's speed, and the spring's deflection is proportional to the torque, the angle of the pointer is also proportional to the speed, so that equally spaced markers on the dial correspond to equal increments of speed. At a given speed, the pointer will remain motionless and point to the appropriate number on the speedometer's dial.
The return spring is calibrated such that a given revolution speed of the cable corresponds to a specific speed indication on the speedometer. This calibration must take into account several factors, including ratios of the tail shaft gears that drive the flexible cable, the final drive ratio in the differential, and the diameter of the driven tires.
One of the key disadvantages of the eddy current speedometer is that it cannot show the vehicle speed when running in reverse gear since the cup would turn in the opposite direction – in this scenario, the needle would be driven against its mechanical stop pin on the zero position.
Electronic.
Many modern speedometers are electronic. In designs derived from earlier eddy-current models, a rotation sensor mounted in the transmission delivers a series of electronic pulses whose frequency corresponds to the (average) rotational speed of the driveshaft, and therefore the vehicle's speed, assuming the wheels have full traction. The sensor is typically a set of one or more magnets mounted on the output shaft or (in transaxles) differential crown wheel, or a toothed metal disk positioned between a magnet and a magnetic field sensor. As the part in question turns, the magnets or teeth pass beneath the sensor, each time producing a pulse in the sensor as they affect the strength of the magnetic field it is measuring. Alternatively, particularly in vehicles with multiplex wiring, some manufacturers use the pulses coming from the ABS wheel sensors which communicate to the instrument panel via the CAN Bus. Most modern electronic speedometers have the additional ability over the eddy current type to show the vehicle's speed when moving in reverse gear.
A computer converts the pulses to a speed and displays this speed on an electronically controlled, analogue-style needle or a digital display. Pulse information is also used for a variety of other purposes by the ECU or full-vehicle control system, e.g. triggering ABS or traction control, calculating average trip speed, or incrementing the odometer in place of it being turned directly by the speedometer cable.
Another early form of electronic speedometer relies upon the interaction between a precision watch mechanism and a mechanical pulsator driven by the car's wheel or transmission. The watch mechanism endeavours to push the speedometer pointer toward zero, while the vehicle-driven pulsator tries to push it toward infinity. The position of the speedometer pointer reflects the relative magnitudes of the outputs of the two mechanisms.
Virtual speedometer.
A virtual speedometer is a computer-generated tool that displays the current speed of a vehicle or object. The virtual speedometer typically calculates the object's speed based on the distance it travels over time. Such speedometers are implemented using web technologies such as HTML, CSS, and JavaScript. The program uses the mobile device's GPS module.
Continuous use of the GPS module on mobile devices can result in faster battery drain. Furthermore, virtual speedometers calculate speed by measuring the distance and time between two points using GPS signals. However, various environmental factors such as weather conditions, terrain, and obstructions can interfere with the accuracy of these signals and result in inaccurate speed readings.
Bicycle speedometers.
Typical bicycle speedometers measure the time between each wheel revolution and give a readout on a small, handlebar-mounted digital display. The sensor is mounted on the bike at a fixed location, pulsing when the spoke-mounted magnet passes by. In this way, it is analogous to an electronic car speedometer using pulses from an ABS sensor, but with a much cruder time/distance resolution – typically one pulse/display update per revolution, or as seldom as once every 2–3 seconds at low speed with a wheel. However, this is rarely a critical problem, and the system provides frequent updates at higher road speeds where the information is of more importance. The low pulse frequency also has little impact on measurement accuracy, as these digital devices can be programmed by wheel size, or additionally by wheel or tire circumference to make distance measurements more accurate and precise than a typical motor vehicle gauge. However, these devices carry some minor disadvantages in requiring power from batteries that must be replaced every so often in the receiver (and sensor, for wireless models), and, in wired models, the signal is carried by a thin cable that is much less robust than that used for brakes, gears, or cabled speedometers.
Other, usually older bicycle speedometers are cable driven from one or other wheel, as in the motorcycle speedometers described above. These do not require battery power, but can be relatively bulky and heavy, and may be less accurate. The turning force at the wheel may be provided either from a gearing system at the hub (making use of the presence of e.g. a hub brake, cylinder gear, or dynamo) as per a typical motorcycle, or with a friction wheel device that pushes against the outer edge of the rim (same position as rim brakes, but on the opposite edge of the fork) or the sidewall of the tire itself. The former type is quite reliable and low maintenance but needs a gauge and hub gearing properly matched to the rim and tire size, whereas the latter requires little or no calibration for a moderately accurate readout (with standard tires, the "distance" covered in each wheel rotation by a friction wheel set against the rim should scale fairly linearly with wheel size, almost as if it were rolling along the ground itself) but are unsuitable for off-road use, and must be kept properly tensioned and clean of road dirt to avoid slipping or jamming.
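The arithmetic performed by a pulse-counting cyclocomputer is straightforward: with one pulse per wheel revolution, the speed is the programmed circumference divided by the pulse interval. A minimal sketch (the circumference used is an assumed typical figure, not a specification):

```python
def speed_kmh(circumference_m, pulse_interval_s):
    """One magnet pulse per wheel revolution: speed = circumference / interval."""
    return circumference_m / pulse_interval_s * 3.6

# 2.1 m is an assumed, roughly typical circumference for a 700c road wheel.
print(speed_kmh(2.1, 0.25))   # ~30 km/h at 4 pulses per second
print(speed_kmh(2.1, 2.5))    # ~3 km/h: one display update every 2.5 s at walking pace
```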
Error.
Most speedometers have tolerances of some ±10%, mainly due to variations in tire diameter. Sources of error due to tire diameter variations are wear, temperature, pressure, vehicle load, and nominal tire size. Vehicle manufacturers usually calibrate speedometers to read high by an amount equal to the average error, to ensure that their speedometers never indicate a lower speed than the actual speed of the vehicle, so that they are not liable for drivers violating speed limits.
Excessive speedometer error after manufacture can come from several causes, but is most commonly due to a nonstandard tire diameter, in which case the error is:
formula_0
Nearly all tires now have their size shown as "T/A_W" on the side of the tire (see Tire code), and the tire's diameter can be calculated as follows:
formula_1
formula_2
For example, a standard tire is "185/70R14" with diameter = 2*185*(70/100)+(14*25.4) = 614.6 mm (185x70/1270 + 14 = 24.20 in). Another is "195/50R15" with 2*195*(50/100)+(15*25.4) = 576.0 mm (195x50/1270 + 15 = 22.68 in). Replacing the first tire (and wheels) with the second (on 15" = 381 mm wheels), a speedometer reads 100 * ((614.6/576) - 1) = 100 * (24.20/22.68 - 1) = 6.7% higher than the actual speed. At an actual speed of 100 km/h (60 mph), the speedometer will indicate 100 x 1.067 = 106.7 km/h (60 * 1.067 = 64.02 mph), approximately.
In the case of wear, a new "185/70R14" tire of 620 mm (24.4 inch) diameter will have ≈8 mm tread depth, at legal limit this reduces to 1.6 mm, the difference being 12.8 mm in diameter or 0.5 inches which is 2% in 620 mm (24.4 inches).
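The two diameter formulas and the resulting speedometer error can be packaged in a few lines; the sketch below (with ad hoc helper names) reproduces the 185/70R14 versus 195/50R15 example above, using the convention of that worked example (indicated speed relative to actual speed):

```python
def tire_diameter_mm(width_mm, aspect_percent, wheel_in):
    """Overall diameter = 2 * sidewall height + wheel diameter (see Tire code)."""
    return 2 * width_mm * aspect_percent / 100 + wheel_in * 25.4

def speedometer_error_percent(new_diameter, standard_diameter):
    """Positive value: the speedometer reads higher than the true speed
    (the convention used in the worked example above)."""
    return 100 * (standard_diameter / new_diameter - 1)

std = tire_diameter_mm(185, 70, 14)   # 614.6 mm
new = tire_diameter_mm(195, 50, 15)   # 576.0 mm
err = speedometer_error_percent(new, std)
print(round(std, 1), round(new, 1), round(err, 1))   # 614.6 576.0 6.7
print(round(100 * (1 + err / 100), 1))               # ~106.7 indicated at a true 100 km/h
```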
International agreements.
In many countries the legislated error in speedometer readings is ultimately governed by the United Nations Economic Commission for Europe (UNECE) Regulation 39, which covers those aspects of vehicle type approval that relate to speedometers. The main purpose of the UNECE regulations is to facilitate trade in motor vehicles by agreeing on uniform type approval standards rather than requiring a vehicle model to undergo different approval processes in each country where it is sold.
European Union member states must also grant type approval to vehicles meeting similar EU standards. The ones covering speedometers are similar to the UNECE regulation in that they specify that:
The standards specify both the limits on accuracy and many of the details of how it should be measured during the approvals process. For example, the test measurements should be made (for most vehicles) at , and at a particular ambient temperature and road surface. There are slight differences between the different standards, for example in the minimum accuracy of the equipment measuring the true speed of the vehicle.
The UNECE regulation relaxes the requirements for vehicles mass-produced following type approval. At Conformity of Production Audits the upper limit on indicated speed is increased to 110 percent plus for cars, buses, trucks, and similar vehicles, and 110 percent plus for two- or three-wheeled vehicles that have a maximum speed above (or a cylinder capacity, if powered by a heat engine, of more than ). European Union Directive 2000/7/EC, which relates to two- and three-wheeled vehicles, provides similar slightly relaxed limits in production.
Australia.
There were no Australian Design Rules in place for speedometers in Australia before July 1988. They had to be introduced when speed cameras were first used. This means there are no legally accurate speedometers for these older vehicles. All vehicles manufactured on or after 1 July 2007, and all models of vehicle introduced on or after 1 July 2006, must conform to UNECE Regulation 39.
The speedometers in vehicles manufactured before these dates but after 1 July 1995 (or 1 January 1995 for forward control passenger vehicles and off-road passenger vehicles) must conform to the previous Australian design rule. This specifies that they need only display the speed to an accuracy of ±10% at speeds above 40 km/h, and there is no specified accuracy at all for speeds below 40 km/h.
All vehicles manufactured in Australia or imported for supply to the Australian market must comply with the Australian Design Rules. The state and territory governments may set policies for the tolerance of speed over the posted speed limits that may be lower than the 10% in the earlier versions of the Australian Design Rules permitted, such as in Victoria. This has caused some controversy since it would be possible for a driver to be unaware that they are speeding should their vehicle be fitted with an under-reading speedometer.
United Kingdom.
The amended Road Vehicles (Construction and Use) Regulations 1986 permits the use of speedometers that meet either the requirements of EC Council Directive 75/443 (as amended by Directive 97/39) or UNECE Regulation 39.
The Motor Vehicles (Approval) Regulations 2001 permits single vehicles to be approved. As with the UNECE regulation and the EC Directives, the speedometer must never show an indicated speed less than the actual speed. However, it differs slightly from them in specifying that for all actual speeds between 25 mph and 70 mph (or the vehicles' maximum speed if it is lower than this), the indicated speed must not exceed 110% of the actual speed, plus 6.25 mph.
For example, if the vehicle is actually traveling at 50 mph, the speedometer must not show more than 61.25 mph or less than 50 mph.
United States.
Federal standards in the United States allow a maximum 5 mph error at a speed of 50 mph on speedometer readings for commercial vehicles. Aftermarket modifications, such as different tire and wheel sizes or different differential gearing, can cause speedometer inaccuracy.
Regulation in the US.
Starting with U.S. automobiles manufactured on or after 1 September 1979, the NHTSA required speedometers to have a special emphasis on 55 mph (90 km/h) and display no more than a maximum speed of 85 mph (136 km/h). On 25 March 1982, the NHTSA revoked the rule because no "significant safety benefits" could come from maintaining the standard.
GPS.
GPS devices can measure speeds in two ways:
As mentioned in the satnav article, GPS data has been used to overturn a speeding ticket; the GPS logs showed the defendant traveling below the speed limit when they were ticketed. That the data came from a GPS device was likely less important than the fact that it was logged; logs from the vehicle's speedometer could likely have been used instead, had they existed.
References.
|
[
{
"math_id": 0,
"text": " \\mbox {Percentage error} = 100\\times\\left(1 - \\frac \\mbox{new diameter} \\mbox{standard diameter}\\right) "
},
{
"math_id": 1,
"text": " \\mbox {Diameter in millimetres} = 2 \\times T \\times A / 100 + W \\times 25.4 "
},
{
"math_id": 2,
"text": " \\mbox {Diameter in inches} = T \\times A / 1270 + W "
}
] |
https://en.wikipedia.org/wiki?curid=576681
|
57671069
|
Two capacitor paradox
|
Thought experiment in physics
The two capacitor paradox or capacitor paradox is a paradox, or counterintuitive thought experiment, in electric circuit theory. The thought experiment is usually described as follows:
Two identical capacitors are connected in parallel with an open switch between them. One of the capacitors is charged with a voltage of formula_0, the other is uncharged. When the switch is closed, some of the charge formula_1 on the first capacitor flows into the second, reducing the voltage on the first and increasing the voltage on the second. When a steady state is reached and the current goes to zero, the voltage on the two capacitors must be equal since they are connected together. Since they both have the same capacitance formula_2 the charge will be divided equally between the capacitors so each capacitor will have a charge of formula_3 and a voltage of formula_4. At the beginning of the experiment the total initial energy formula_5 in the circuit is the energy stored in the charged capacitor:
formula_6
At the end of the experiment the final energy formula_7 is equal to the sum of the energy in the two capacitors
formula_8
Thus the final energy formula_7 is equal to half of the initial energy formula_5. Where did the other half of the initial energy go?
Solutions.
This problem has been discussed in electronics literature at least as far back as 1955. Unlike some other paradoxes in science, this paradox is not due to the underlying physics, but to the limitations of the 'ideal circuit' conventions used in circuit theory. The description specified above is not physically realizable if the circuit is assumed to be made of ideal circuit elements, as is usual in circuit theory. If the wires connecting the two capacitors, the switch, and the capacitors themselves are idealized as having no electrical resistance or inductance as is usual, then closing the switch would connect points at different voltage with a perfect conductor, causing an infinite current to flow. Therefore a solution requires that one or more of the 'ideal' characteristics of the elements in the circuit be relaxed, which was not specified in the above description. The solution differs depending on which of the assumptions about the actual characteristics of the circuit elements is abandoned:
Various additional solutions have been devised, based on more detailed assumptions about the characteristics of the components.
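One commonly cited resolution, adding a series resistance R between the capacitors, can be checked numerically: integrating the dissipation in the resulting RC circuit shows that the lost energy equals half the initial energy regardless of the value of R. A sketch under that assumption (component values are arbitrary):

```python
import numpy as np

def dissipated_energy(C, Vi, R, n=200_000):
    """Two equal capacitors C joined through a resistor R:
    i(t) = (Vi/R) * exp(-2t/(R*C)).  Returns the integral of i(t)^2 * R dt,
    which should equal C*Vi**2/4 for any R, i.e. half the initial energy."""
    tau = R * C / 2.0
    t = np.linspace(0.0, 40.0 * tau, n)
    i = (Vi / R) * np.exp(-t / tau)
    dt = t[1] - t[0]
    return float(np.sum(i**2 * R) * dt)   # crude Riemann sum, accurate enough here

C, Vi = 1e-6, 10.0
initial_energy = 0.5 * C * Vi**2
for R in [0.1, 10.0, 1000.0]:
    print(R, dissipated_energy(C, Vi, R) / initial_energy)   # ~0.5 in every case
```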
Alternate versions.
There are several alternate versions of the paradox. One is the original circuit with the two capacitors initially charged with equal and opposite voltages formula_9 and formula_10. Another equivalent version is a single charged capacitor short circuited by a perfect conductor. In these cases in the final state the entire charge has been neutralized, the final voltage on the capacitors is zero, so the entire initial energy has vanished. The solutions to where the energy went are similar to those described in the previous section.
References.
|
[
{
"math_id": 0,
"text": "V_i"
},
{
"math_id": 1,
"text": "Q = CV_i"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "{Q \\over 2}"
},
{
"math_id": 4,
"text": "V_f = {Q \\over 2C} = {V_i \\over 2}"
},
{
"math_id": 5,
"text": "W_i"
},
{
"math_id": 6,
"text": "W_i = {1 \\over 2}CV_i^2"
},
{
"math_id": 7,
"text": "W_f"
},
{
"math_id": 8,
"text": "W_f = {1 \\over 2}CV_f^2 + {1 \\over 2}CV_f^2 = CV_f^2 = C\\left({V_i \\over 2}\\right)^2 = {1 \\over 4}CV_i^2 = {1 \\over 2}W_i"
},
{
"math_id": 9,
"text": "+V_i"
},
{
"math_id": 10,
"text": "-V_i"
}
] |
https://en.wikipedia.org/wiki?curid=57671069
|
576713
|
Congruence of squares
|
In number theory, a congruence of squares is a congruence commonly used in integer factorization algorithms.
Derivation.
Given a positive integer "n", Fermat's factorization method relies on finding numbers "x" and "y" satisfying the equality
formula_0
We can then factor "n" = "x"2 − "y"2 = ("x" + "y")("x" − "y"). This algorithm is slow in practice because we need to search many such numbers, and only a few satisfy the equation. However, "n" may also be factored if we can satisfy the weaker congruence of squares conditions:
formula_1
formula_2
From here we easily deduce
formula_3
formula_4
This means that "n" divides the product ("x" + "y")("x" − "y"). The second non-triviality condition guarantees that "n" does not divide ("x" + "y") nor ("x" − "y") individually. Thus ("x" + "y") and ("x" − "y") each contain some, but not all, factors of "n", and the greatest common divisors of ("x" + "y", "n") and of ("x" − "y", "n") will give us these factors. This can be done quickly using the Euclidean algorithm.
Most algorithms for finding congruences of squares do not actually guarantee non-triviality; they only make it likely. There is a chance that a congruence found will be trivial, in which case we need to continue searching for another "x" and "y".
Congruences of squares are extremely useful in integer factorization algorithms. Conversely, because finding square roots modulo a composite number turns out to be probabilistic polynomial-time equivalent to factoring that number, any integer factorization algorithm can be used efficiently to identify a congruence of squares.
Using a factor base.
A technique pioneered by Dixon's factorization method and improved by continued fraction factorization, the quadratic sieve, and the general number field sieve, is to construct a congruence of squares using a factor base.
Instead of looking for one pair formula_5 directly, we find many "relations" formula_6 where the "y" have only small prime factors (they are smooth numbers), and multiply some of them together to get a square on the right-hand side.
The set of small primes which all the "y" factor into is called the factor base. Construct a logical matrix where each row describes one "y", each column corresponds to one prime in the factor base, and the entry is the parity (even or odd) of the number of times that factor occurs in "y". Our goal is to select a subset of rows whose sum is the all-zero row. This corresponds to a set of "y" values whose product is a square number, i.e. one whose factorization has only even exponents. The products of "x" and "y" values together form a congruence of squares.
This is a classic problem of solving a system of linear equations over GF(2), and can be efficiently solved using Gaussian elimination as soon as the number of rows exceeds the number of columns. Some additional rows are often included to ensure that several solutions exist in the nullspace of our matrix, in case the first solution produces a trivial congruence.
A great advantage of this technique is that the search for relations is embarrassingly parallel; a large number of computers can be set to work searching different ranges of "x" values and trying to factor the resultant "y"s. Only the found relations need to be reported to a central computer, and there is no particular hurry to do so. The searching computers do not even have to be trusted; a reported relation can be verified with minimal effort.
There are numerous elaborations on this technique. For example, in addition to relations where "y" factors completely in the factor base, the "large prime" variant also collects "partial relations" where "y" factors completely except for one larger factor. A second partial relation with the same larger factor can be multiplied by the first to produce a "complete relation".
Examples.
Factorize 35.
We take "n" = 35 and find that
formula_7.
We thus factor as
formula_8
Factorize 1649.
Using "n" = 1649, as an example of finding a congruence of squares built up from the products of non-squares (see Dixon's factorization method), first we obtain several congruences
formula_9
formula_10
formula_11
Of these, the first and third have only small primes as factors, and a product of these has an even power of each small prime, and is therefore a square
formula_12
yielding the congruence of squares
formula_13
So using the values of 80 and 114 as our "x" and "y" gives factors
formula_14
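The bookkeeping of this example can be replayed in a few lines of Python (a sketch of the final combination and gcd step only, not of a full Dixon or quadratic sieve implementation):

```python
from math import gcd, isqrt

n = 1649
# Relations found above: 41^2 = 32 and 43^2 = 200 (mod n), both products of small primes.
x = (41 * 43) % n            # 114
y = isqrt(32 * 200)          # 80, since 32 * 200 = 6400 is a perfect square
assert (x * x - y * y) % n == 0          # congruence of squares
assert x % n not in (y % n, (-y) % n)    # non-triviality condition
print(gcd(x - y, n), gcd(x + y, n))      # 17 97
```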
References.
|
[
{
"math_id": 0,
"text": "x^2 - y^2 = n"
},
{
"math_id": 1,
"text": "x^2 \\equiv y^2 \\pmod{n}"
},
{
"math_id": 2,
"text": "x \\not\\equiv \\pm y \\,\\pmod{n}"
},
{
"math_id": 3,
"text": "x^2 - y^2 \\equiv 0 \\pmod{n}"
},
{
"math_id": 4,
"text": "(x + y)(x - y) \\equiv 0 \\pmod{n}"
},
{
"math_id": 5,
"text": "\\textstyle x^2 \\equiv y^2 \\pmod n"
},
{
"math_id": 6,
"text": "\\textstyle x^2 \\equiv y \\pmod n"
},
{
"math_id": 7,
"text": "\\textstyle 6^2 = 36 \\equiv 1 = 1^2 \\pmod{35}"
},
{
"math_id": 8,
"text": " \\gcd( 6-1, 35 ) \\cdot \\gcd( 6+1, 35 ) = 5 \\cdot 7 = 35"
},
{
"math_id": 9,
"text": "41^2 \\equiv 32 = 2^5 \\pmod{1649},"
},
{
"math_id": 10,
"text": "42^2 \\equiv 115 = 5 \\cdot 23 \\pmod{1649},"
},
{
"math_id": 11,
"text": "43^2 \\equiv 200 = 2^3 \\cdot 5^2 \\pmod{1649}."
},
{
"math_id": 12,
"text": "32 \\cdot 200 = 2^{5+3} \\cdot 5^2 = 2^8 \\cdot 5^2 = (2^4 \\cdot 5)^2 = 80^2"
},
{
"math_id": 13,
"text": "32 \\cdot 200 = 80^2 \\equiv 41^2 \\cdot 43^2 \\equiv 114^2 \\pmod{1649}."
},
{
"math_id": 14,
"text": "\\gcd( 114-80, 1649 ) \\cdot \\gcd( 114+80, 1649 ) = 17 \\cdot 97 = 1649."
}
] |
https://en.wikipedia.org/wiki?curid=576713
|
5767229
|
Zsigmondy's theorem
|
On prime divisors of differences of two nth powers
In number theory, Zsigmondy's theorem, named after Karl Zsigmondy, states that if formula_0 are coprime integers, then for any integer formula_1, there is a prime number "p" (called a "primitive prime divisor") that divides formula_2 and does not divide formula_3 for any positive integer formula_4, with the following exceptions:
formula_5, formula_6; then formula_7, which has no prime divisors.
formula_8, and formula_9 a power of two; then any prime factor of formula_10 must already divide formula_11.
formula_12, formula_13, formula_14; then formula_15, so every prime factor already divides an earlier term.
This generalizes Bang's theorem, which states that if formula_16 and formula_17 is not equal to 6, then formula_18 has a prime divisor not dividing any formula_19 with formula_4.
Similarly, formula_20 has at least one primitive prime divisor with the exception formula_21.
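The statement is easy to check computationally for small parameters. The sketch below (using SymPy for factorization) lists the primitive prime divisors of 2^n − 1 for small n, making the exceptional cases n = 1 and n = 6 visible:

```python
from sympy import primefactors

def primitive_prime_divisors(a, b, n):
    """Primes dividing a^n - b^n but not dividing a^k - b^k for any 1 <= k < n."""
    earlier = set().union(*(primefactors(a**k - b**k) for k in range(1, n)))
    return [p for p in primefactors(a**n - b**n) if p not in earlier]

for n in range(1, 13):
    print(n, primitive_prime_divisors(2, 1, n))
# Only n = 1 (2 - 1 = 1) and n = 6 (2^6 - 1 = 63 = 3^2 * 7) give an empty list.
```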
Zsigmondy's theorem is often useful, especially in group theory, where it is used to prove that various groups have distinct orders except when they are known to be the same.
History.
The theorem was discovered by Zsigmondy, who worked in Vienna from 1894 until 1925.
Generalizations.
Let formula_22 be a sequence of nonzero integers.
The Zsigmondy set associated to the sequence is the set
formula_23
i.e., the set of indices formula_17 such that every prime dividing formula_24 also divides some formula_25 for some formula_26. Thus Zsigmondy's theorem implies that formula_27, and Carmichael's theorem says that the Zsigmondy set of the Fibonacci sequence is formula_28, and that of the Pell sequence is formula_29. In 2001 Bilu, Hanrot, and Voutier
proved that in general, if formula_22 is a Lucas sequence or a Lehmer sequence, then formula_30 (see OEIS: , there are only 13 such formula_17s, namely 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 13, 18, 30).
Lucas and Lehmer sequences are examples of divisibility sequences.
It is also known that if formula_31 is an elliptic divisibility sequence, then its Zsigmondy
set formula_32 is finite. However, the result is ineffective in the sense that the proof does not give an explicit upper bound for the largest element in formula_32,
although it is possible to give an effective upper bound for the number of elements in formula_32.
References.
|
[
{
"math_id": 0,
"text": "a>b>0"
},
{
"math_id": 1,
"text": "n \\ge 1"
},
{
"math_id": 2,
"text": "a^n-b^n"
},
{
"math_id": 3,
"text": "a^k-b^k"
},
{
"math_id": 4,
"text": "k<n"
},
{
"math_id": 5,
"text": "n=1"
},
{
"math_id": 6,
"text": "a-b=1"
},
{
"math_id": 7,
"text": "a^n-b^n=1"
},
{
"math_id": 8,
"text": "n=2"
},
{
"math_id": 9,
"text": "a+b"
},
{
"math_id": 10,
"text": "a^2-b^2=(a+b)(a^1-b^1)"
},
{
"math_id": 11,
"text": "a^1-b^1"
},
{
"math_id": 12,
"text": "n=6"
},
{
"math_id": 13,
"text": "a=2"
},
{
"math_id": 14,
"text": "b=1"
},
{
"math_id": 15,
"text": "a^6-b^6=63=3^2\\times 7=(a^2-b^2)^2 (a^3-b^3)"
},
{
"math_id": 16,
"text": "n>1"
},
{
"math_id": 17,
"text": "n"
},
{
"math_id": 18,
"text": "2^n-1"
},
{
"math_id": 19,
"text": "2^k-1"
},
{
"math_id": 20,
"text": "a^n+b^n"
},
{
"math_id": 21,
"text": "2^3+1^3=9"
},
{
"math_id": 22,
"text": "(a_n)_{n\\ge1}"
},
{
"math_id": 23,
"text": "\\mathcal{Z}(a_n) = \\{n \\ge 1 : a_n \\text{ has no primitive prime divisors}\\}."
},
{
"math_id": 24,
"text": "a_n"
},
{
"math_id": 25,
"text": "a_m"
},
{
"math_id": 26,
"text": "m < n"
},
{
"math_id": 27,
"text": "\\mathcal{Z}(a^n-b^n)\\subset\\{1,2,6\\}"
},
{
"math_id": 28,
"text": "\\{1,2,6,12\\}"
},
{
"math_id": 29,
"text": "\\{1\\}"
},
{
"math_id": 30,
"text": "\\mathcal{Z}(a_n) \\subseteq \\{ 1 \\le n \\le 30 \\}"
},
{
"math_id": 31,
"text": "(W_n)_{n\\ge1}"
},
{
"math_id": 32,
"text": "\\mathcal{Z}(W_n)"
}
] |
https://en.wikipedia.org/wiki?curid=5767229
|
57672953
|
Concrete cone failure
|
Failure mode of anchors in concrete submitted to tensile force
Concrete cone failure is one of the failure modes of anchors in concrete loaded by a tensile force. The failure is governed by crack growth in concrete, which forms a typical cone shape having the anchor's axis as revolution axis.
Mechanical models.
ACI 349-85.
Under tension loading, the concrete cone failure surface has 45° inclination. A constant distribution of tensile stresses is then assumed. The concrete cone failure load formula_0 of a single anchor in uncracked concrete unaffected by edge influences or overlapping cones of neighboring anchors is given by:
formula_1
Where:
formula_2 - tensile strength of concrete
formula_3 - Cone's projected area
Concrete capacity design (CCD) approach for fastening to concrete.
Under tension loading, the concrete capacity of a single anchor is calculated assuming an inclination between the failure surface and surface of the concrete member of about 35°. The concrete cone failure load formula_0 of a single anchor in uncracked concrete unaffected by edge influences or overlapping cones of neighboring anchors is given by:
formula_4,
Where:
formula_5 - 13.5 for post-installed fasteners, 15.5 for cast-in-site fasteners
formula_6 - Concrete compressive strength measured on cubes [MPa]
formula_7 - Embedment depth of the anchor [mm]
The model is based on fracture mechanics theory and takes into account the size effect, particularly through the factor formula_8, which differs from the formula_9 scaling expected from the first model. In the case of concrete tensile failure with increasing member size, the failure load increases less than the available failure surface; that means the nominal stress at failure (peak load divided by failure area) decreases.
Current codes take into account a reduction of the theoretical concrete cone capacity formula_0 considering: (i) the presence of edges; (ii) the overlapping cones due to group effect; (iii) the presence of an eccentricity of the tension load.
Difference between models.
The tension failure loads predicted by the CCD method fit experimental results over a wide range of embedment depths (e.g. 100–600 mm). The anchor load-bearing capacity provided by ACI 349 does not consider the "size effect"; thus an overestimated value for the load-carrying capacity is obtained for large embedment depths.
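The size effect is easy to visualise by evaluating both expressions side by side. The sketch below is an illustration only: the constant k, the concrete strengths and the projected cone area (taken as πh_ef², ignoring the anchor head area) are assumed inputs, not values taken from either standard.

```python
import math

def ccd_capacity_kN(h_ef_mm, f_cc_MPa, k=15.5):
    """CCD method: N0 = k * sqrt(f_cc) * h_ef^1.5 [N], converted here to kN."""
    return k * math.sqrt(f_cc_MPa) * h_ef_mm**1.5 / 1000.0

def cone45_capacity_kN(h_ef_mm, f_ct_MPa):
    """ACI 349-85 style: N0 = f_ct * A_N for a 45-degree cone; the projected
    area is taken here as pi * h_ef^2, ignoring the anchor head area."""
    return f_ct_MPa * math.pi * h_ef_mm**2 / 1000.0

f_cc, f_ct = 30.0, 2.0            # assumed concrete strengths [MPa]
for h in (100, 300, 600):         # embedment depths [mm]
    print(f"h_ef={h} mm  CCD={ccd_capacity_kN(h, f_cc):.0f} kN  "
          f"45-deg cone={cone45_capacity_kN(h, f_ct):.0f} kN")
# The cone-area prediction grows as h_ef^2 while CCD (and test data) grow as
# h_ef^1.5, so the two predictions diverge with increasing embedment depth.
```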
Influence of the head size.
For large head sizes, the bearing pressure in the bearing zone diminishes, and an increase of the anchor's load-carrying capacity is observed. Different modification factors have been proposed in the technical literature.
Un-cracked and cracked concrete.
Anchors experimentally show a lower load-bearing capacity when installed in a cracked concrete member. The reduction is up to 40% with respect to the un-cracked condition, depending on the crack width. The reduction is due to the impossibility of transferring both normal and tangential stresses across the crack plane.
References.
|
[
{
"math_id": 0,
"text": " N_0"
},
{
"math_id": 1,
"text": "N_0 = f_{ct} {A_{N}} [N] "
},
{
"math_id": 2,
"text": "f_{ct}"
},
{
"math_id": 3,
"text": "A_{N}"
},
{
"math_id": 4,
"text": "N_0 = k \\sqrt{f_{cc}} {h_{ef}}^{1.5} [N] "
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "f_{cc}"
},
{
"math_id": 7,
"text": "{h_{ef}}"
},
{
"math_id": 8,
"text": "{h_{ef}}^{1.5}"
},
{
"math_id": 9,
"text": "{h_{ef}}^{2}"
}
] |
https://en.wikipedia.org/wiki?curid=57672953
|
5767604
|
Frequency divider
|
Circuit
A frequency divider, also called a clock divider or scaler or prescaler, is a circuit that takes an input signal of a frequency, formula_0, and generates an output signal of a frequency:
formula_1
where formula_2 is an integer. Phase-locked loop frequency synthesizers make use of frequency dividers to generate a frequency that is a multiple of a reference frequency. Frequency dividers can be implemented for both analog and digital applications.
Analog.
Analog frequency dividers are less common and used only at very high frequencies. Digital dividers implemented in modern IC technologies can work up to tens of GHz.
Regenerative.
A regenerative frequency divider, also known as a Miller frequency divider, mixes the input signal with the feedback signal from the mixer.
The feedback signal is formula_3. This produces sum and difference frequencies formula_3, formula_4 at the output of the mixer. A low pass filter removes the higher frequency, and the formula_3 frequency is amplified and fed back into the mixer.
Injection-locked.
A free-running oscillator which has a small amount of a higher-frequency signal fed to it will tend to oscillate in step with the input signal. Such frequency dividers were essential in the development of television.
It operates similarly to an injection locked oscillator. In an injection-locked frequency divider, the frequency of the input signal is a multiple (or fraction) of the free-running frequency of the oscillator. While these frequency dividers tend to be lower power than broadband static (or flip-flop-based) frequency dividers, the drawback is their low locking range. The ILFD locking range is inversely proportional to the quality factor (Q) of the oscillator tank. In integrated circuit designs, this makes an ILFD sensitive to process variations. Care must be taken to ensure that the tuning range of the driving circuit (for example, a voltage-controlled oscillator) falls within the input locking range of the ILFD.
Digital.
For power-of-2 integer division, a simple binary counter can be used, clocked by the input signal. The least-significant output bit alternates at 1/2 the rate of the input clock, the next bit at 1/4 the rate, the third bit at 1/8 the rate, etc. An arrangement of flip-flops is a classic method for integer-n division. Such division is frequency and phase coherent to the source over environmental variations, including temperature. The easiest configuration is a series where each flip-flop is a divide-by-2. For a series of three of these, such a system would be a divide-by-8. By adding additional logic gates to the chain of flip-flops, other division ratios can be obtained. Integrated circuit logic families can provide a single-chip solution for some common division ratios.
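As a rough illustration of the ripple-counter behaviour just described, the following sketch (not part of the original article; function and variable names are ad hoc) simulates a chain of toggle flip-flops and shows that the output of stage n completes one cycle every 2^(n+1) input clock edges:

```python
# Minimal sketch of a ripple (asynchronous) binary counter used as a divider.
# Each stage toggles when the previous stage's output makes a 1 -> 0 transition.

def ripple_divider(num_stages, num_input_edges):
    """Return the recorded output of each stage for a given number of input clock edges."""
    stages = [0] * num_stages                     # current state of each flip-flop
    history = [[] for _ in range(num_stages)]
    for _ in range(num_input_edges):
        carry = True                              # the first flip-flop toggles on every input edge
        for i in range(num_stages):
            if carry:
                falling = stages[i] == 1          # a 1 -> 0 transition clocks the next stage
                stages[i] ^= 1
                carry = falling
            history[i].append(stages[i])
    return history

if __name__ == "__main__":
    out = ripple_divider(num_stages=3, num_input_edges=16)
    for i, bits in enumerate(out):
        # Stage i completes one full cycle every 2**(i+1) input edges (divide-by-2, -4, -8).
        print(f"divide-by-{2 ** (i + 1)}:", bits)
```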
Another popular circuit to divide a digital signal by an even integer multiple is a Johnson counter. This is a type of shift register network that is clocked by the input signal. The last register's complemented output is fed back to the first register's input. The output signal is derived from one or more of the register outputs. For example, a divide-by-6 divider can be constructed with a 3-register Johnson counter. The six valid values of the counter are 000, 100, 110, 111, 011, and 001. This pattern repeats each time the input signal clocks the network. The output of each register is an f/6 square wave with 120° of phase shift between registers. Additional registers can be added to provide additional integer divisors.
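The divide-by-6 example can likewise be checked with a small simulation (illustrative only; the state sequence matches the six valid values listed above):

```python
# Sketch of a 3-register Johnson (twisted-ring) counter: the complemented output
# of the last register feeds the first register, giving a divide-by-2n counter.

def johnson_counter(num_registers, num_clocks):
    state = [0] * num_registers
    states = []
    for _ in range(num_clocks):
        feedback = 1 - state[-1]            # complemented output of the last register
        state = [feedback] + state[:-1]     # shift by one register on each input clock
        states.append("".join(map(str, state)))
    return states

if __name__ == "__main__":
    # With 3 registers the counter cycles through 6 states (divide-by-6):
    # 100, 110, 111, 011, 001, 000, then the pattern repeats.
    print(johnson_counter(num_registers=3, num_clocks=12))
```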
Mixed signal.
("Classification:" "asynchronous sequential logic")<br>
As noted above, an arrangement of D flip-flops is a classic method for integer-n division, with each flip-flop in a series acting as a divide-by-2. More complicated configurations have been found that generate odd factors, such as a divide-by-5. Standard, classic logic chips that implement this or similar frequency division functions include the 7456, 7457, 74292, and 74294. (see list of 7400 series and list of 4000 series logic chips)
Fractional-N synthesis.
A fractional-n frequency synthesizer can be constructed using two integer dividers, a divide-by-N, and a divide-by-(N + 1) frequency divider. With a modulus controller, N is toggled between the two values so that the VCO alternates between one locked frequency and the other. The VCO stabilizes at a frequency that is the time average of the two locked frequencies. By varying the percentage of time the frequency divider spends at the two divider values, the frequency of the locked VCO can be selected with very fine granularity.
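A short sketch (illustrative, with an assumed accumulator-style modulus controller and arbitrary values for N and the fractional part) shows how toggling between the two divider values yields a time-averaged ratio of N plus the selected fraction:

```python
# Sketch: effective division ratio of a fractional-N divider that toggles between
# divide-by-N and divide-by-(N+1) under a simple first-order accumulator control.

def fractional_n_ratio(n, frac, cycles=10000):
    acc = 0.0
    total_input_periods = 0
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:                  # modulus controller selects divide-by-(N+1)
            acc -= 1.0
            total_input_periods += n + 1
        else:                           # otherwise divide-by-N
            total_input_periods += n
    return total_input_periods / cycles

if __name__ == "__main__":
    # The average ratio approaches N + frac, e.g. N = 100, frac = 0.25 -> 100.25.
    print(fractional_n_ratio(n=100, frac=0.25))
```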
Delta-sigma.
If the sequence of divide by N and divide by (N + 1) is periodic, spurious signals appear at the VCO output in addition to the desired frequency. Delta-sigma fractional-n dividers overcome this problem by randomizing the selection of N and (N + 1) while maintaining the time-averaged ratios.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f_{in}"
},
{
"math_id": 1,
"text": "\nf_{out} = \\frac{f_{in}}{N}\n"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "f_{in}/2"
},
{
"math_id": 4,
"text": "3f_{in}/2"
}
] |
https://en.wikipedia.org/wiki?curid=5767604
|
5767900
|
Total maximum daily load
|
A total maximum daily load (TMDL) is a regulatory term in the U.S. Clean Water Act, describing a plan for restoring impaired waters that identifies the maximum amount of a pollutant that a body of water can receive while still meeting water quality standards.
State and federal agency responsibilities.
The Clean Water Act requires that state environmental agencies complete TMDLs for impaired waters and that the United States Environmental Protection Agency (EPA) review and approve or disapprove those TMDLs. Because both state and federal governments are involved in completing TMDLs, the TMDL program is an example of cooperative federalism. If a state does not take action to develop TMDLs, or if EPA disapproves state-developed TMDLs, the EPA is responsible for issuing TMDLs. EPA published regulations in 1992 establishing TMDL procedures. Application of TMDLs has broadened significantly in the last decade to include many watershed-scale efforts, including the Chesapeake Bay TMDL. TMDLs identify all point source and nonpoint source pollutants within a watershed.
State inventories.
The Clean Water Act requires states to compile lists of water bodies that do not fully support beneficial uses such as aquatic life, fisheries, drinking water, recreation, industry, or agriculture; and to prioritize those water bodies for TMDL development. These inventories are known as "303(d) lists" and characterize waters as "fully supporting", "impaired", or in some cases "threatened" for beneficial uses.
Planning process.
Beneficial use determinations must have sufficient credible water quality data for TMDL planning.
Throughout the U.S., data often lack adequate spatial or temporal coverage to reliably establish the sources and magnitude of water quality degradation.
TMDL planning in large watersheds is a process that typically involves the following steps:
Water quality targets.
The purpose of water quality targets is to protect or restore beneficial uses and protect human health. These targets may include state/federal numerical water quality standards or narrative standards, i.e. within the range of "natural" conditions. Establishing targets to restore beneficial uses is challenging and sometimes controversial. For example, the restoration of a fishery may require reducing temperatures, nutrients, sediments, and improving habitat.
Necessary values for each pollutant target to restore fisheries can be uncertain. The potential for a water body to support a fishery even in a pristine state can be uncertain.
Background.
Calculating the TMDL for any given body of water involves the combination of factors that contribute to the problem of nutrient-concentrated runoff. Bodies of water are tested for contaminants based on their intended use. Each body of water is tested similarly but designated with a different TMDL. Drinking water reservoirs are designated differently from areas for public swimming, and water bodies intended for fishing are designated differently from water located in wildlife conservation areas. The size of the water body is also taken into consideration when the TMDL calculation is undertaken: the larger the body of water, the greater the amount of contaminants that can be present while still maintaining a margin of safety. The "margin of safety" (MOS) is a numeric estimate included in the TMDL calculation, sometimes 10% of the TMDL, intended to allow a safety buffer between the calculated TMDL and the actual load that will allow the water body to meet its beneficial use (since the natural world is complex and several variables may alter future conditions). The TMDL is the end product of summing all point and nonpoint sources of a single contaminant. Pollutants that originate from a point source are given allowable levels of contaminants to be discharged; this is the "waste load allocation" (WLA). Nonpoint source pollutants are also calculated into the TMDL equation with the "load allocation" (LA).
Calculation.
The calculation of a TMDL is as follows:
formula_0
where WLA is the waste load allocation for point sources, LA is the load allocation for nonpoint sources, and MOS is the margin of safety.
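As a purely hypothetical numerical illustration (the loads and the 10% margin of safety below are invented, not taken from any actual TMDL), the equation can be rearranged when the MOS is expressed as a fraction of the TMDL:

```python
# Hypothetical example: compute a TMDL when MOS is expressed as 10% of the TMDL.
# TMDL = WLA + LA + MOS, with MOS = 0.10 * TMDL  =>  TMDL = (WLA + LA) / 0.90

wla = 450.0   # waste load allocation for point sources (e.g. kg/day), hypothetical
la = 270.0    # load allocation for nonpoint sources (e.g. kg/day), hypothetical

tmdl = (wla + la) / (1.0 - 0.10)   # solve TMDL = WLA + LA + 0.10 * TMDL
mos = 0.10 * tmdl

print(f"TMDL = {tmdl:.1f} kg/day, of which MOS = {mos:.1f} kg/day")
```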
Load allocations.
Setting load allocations is just as challenging as setting targets. Load allocations provide a framework for determining the relative share of natural sources and human sources of pollution.
The natural background load for a pollutant may be imprecisely understood. Industrial dischargers, farmers, land developers, municipalities, natural resource agencies, and other watershed stakeholders each have a vested interest in the outcome.
Implementation.
To implement TMDLs with point sources, wasteload allocations are incorporated into discharge permits for these sources. The permits are issued by EPA or delegated state agencies under the National Pollutant Discharge Elimination System (NPDES). Nonpoint source discharges (e.g. agriculture) are generally in a voluntary compliance scenario. The TMDL implementation plan is intended to help bridge this divide and ensure that watershed beneficial uses are restored and maintained. Local watershed groups play a critical role in educating stakeholders, generating funding, and implementing projects to reduce nonpoint sources of pollution.
References.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "TMDL = WLA + LA + MOS"
}
] |
https://en.wikipedia.org/wiki?curid=5767900
|
5767980
|
Cross-entropy method
|
Monte Carlo method for importance sampling and optimization
The cross-entropy (CE) method is a Monte Carlo method for importance sampling and optimization. It is applicable to both combinatorial and continuous problems, with either a static or noisy objective.
The method approximates the optimal importance sampling estimator by repeating two phases: drawing a sample from a probability distribution, and minimizing the cross-entropy between this distribution and a target distribution to produce a better sample in the next iteration.
Reuven Rubinstein developed the method in the context of "rare-event simulation", where tiny probabilities must be estimated, for example in network reliability analysis, queueing models, or performance analysis of telecommunication systems. The method has also been applied to the traveling salesman, quadratic assignment, DNA sequence alignment, max-cut and buffer allocation problems.
Estimation via importance sampling.
Consider the general problem of estimating the quantity
formula_0,
where formula_1 is some "performance function" and formula_2 is a member of some parametric family of distributions. Using importance sampling this quantity can be estimated as
formula_3,
where formula_4 is a random sample from formula_5. For positive formula_1, the theoretically "optimal" importance sampling density (PDF) is given by
formula_6.
This, however, depends on the unknown formula_7. The CE method aims to approximate the optimal PDF by adaptively selecting members of the parametric family that are closest (in the Kullback–Leibler sense) to the optimal PDF formula_8.
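A minimal numerical sketch of the importance sampling estimator above (illustrative; the rare-event problem, the proposal distribution and the sample size are arbitrary choices, not from the original text):

```python
import math
import random

# Sketch: estimate l = P(X >= 4) for X ~ N(0, 1) by drawing from the proposal
# g = N(4, 1) and reweighting each sample with the likelihood ratio f(x)/g(x).

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_sampling_estimate(n=100_000, threshold=4.0):
    total = 0.0
    for _ in range(n):
        x = random.gauss(4.0, 1.0)                              # sample from the proposal g
        h = 1.0 if x >= threshold else 0.0                      # performance function H(x)
        w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 4.0, 1.0)   # likelihood ratio f/g
        total += h * w
    return total / n

if __name__ == "__main__":
    # The exact value is about 3.17e-5; plain Monte Carlo from f would need on the
    # order of 10^7 samples to see even a handful of hits.
    print(importance_sampling_estimate())
```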
Generic CE algorithm.
The generic CE algorithm consists of the following steps:
1. Choose an initial parameter vector formula_9; set t = 1.
2. Generate a random sample formula_4 from formula_10.
3. Solve for formula_11, where formula_12.
4. If convergence is reached then stop; otherwise, increase t by 1 and reiterate from step 2.
In several cases, the solution to step 3 can be found "analytically". Situations in which this occurs are:
- When formula_13 belongs to the natural exponential family.
- When formula_13 is discrete with finite support.
- When formula_14 and formula_15, then formula_11 corresponds to the maximum likelihood estimator based on those formula_16.
Continuous optimization—example.
The same CE algorithm can be used for optimization, rather than estimation.
Suppose the problem is to maximize some function formula_17, for example,
formula_18.
To apply CE, one considers first the "associated stochastic problem" of estimating
formula_19
for a given "level" formula_20, and parametric family formula_21, for example the 1-dimensional
Gaussian distribution,
parameterized by its mean formula_22 and variance formula_23 (so formula_24 here).
Hence, for a given formula_20, the goal is to find formula_25 so that
formula_26
is minimized. This is done by solving the sample version (stochastic counterpart) of the KL divergence minimization problem, as in step 3 above.
It turns out that parameters that minimize the stochastic counterpart for this choice of target distribution and
parametric family are the sample mean and sample variance corresponding to the "elite samples", which are those samples that have objective function value formula_27.
The worst of the elite samples is then used as the level parameter for the next iteration.
This yields the following randomized algorithm that happens to coincide with the so-called Estimation of Multivariate Normal Algorithm (EMNA), an estimation of distribution algorithm.
Pseudocode.
"// Initialize parameters"
μ := −6
σ2 := 100
t := 0
maxits := 100
N := 100
Ne := 10
"// While maxits not exceeded and not converged"
while t < maxits and σ2 > ε do
"// Obtain N samples from current sampling distribution"
X := SampleGaussian(μ, σ2, N)
"// Evaluate objective function at sampled points"
S := exp(−(X − 2) ^ 2) + 0.8 exp(−(X + 2) ^ 2)
"// Sort X by objective function values in descending order"
X := sort(X, S)
"// Update parameters of sampling distribution via elite samples"
μ := mean(X(1:Ne))
σ2 := var(X(1:Ne))
t := t + 1
"// Return mean of final sampling distribution as solution"
return μ
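The pseudocode can be translated into a short runnable script. The sketch below is an illustrative NumPy translation that keeps the same parameter values; the convergence threshold ε is not specified in the pseudocode, so a value is assumed. It converges to the global maximiser near x = 2:

```python
import numpy as np

# Runnable translation of the pseudocode above; epsilon is an assumed value.
mu, sigma2 = -6.0, 100.0      # initial sampling distribution N(mu, sigma2)
t, maxits = 0, 100
N, Ne = 100, 10               # samples per iteration / number of elite samples
eps = 1e-3                    # convergence threshold (not specified in the pseudocode)

rng = np.random.default_rng(0)
while t < maxits and sigma2 > eps:
    X = rng.normal(mu, np.sqrt(sigma2), N)                      # obtain N samples
    S = np.exp(-(X - 2) ** 2) + 0.8 * np.exp(-(X + 2) ** 2)     # evaluate objective
    elite = X[np.argsort(S)[::-1][:Ne]]                         # best Ne samples
    mu, sigma2 = elite.mean(), elite.var()                      # update sampling distribution
    t += 1

print(mu)   # close to 2, the global maximiser of S
```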
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ell = \\mathbb{E}_{\\mathbf{u}}[H(\\mathbf{X})] = \\int H(\\mathbf{x})\\, f(\\mathbf{x}; \\mathbf{u})\\, \\textrm{d}\\mathbf{x}"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "f(\\mathbf{x};\\mathbf{u})"
},
{
"math_id": 3,
"text": "\\hat{\\ell} = \\frac{1}{N} \\sum_{i=1}^N H(\\mathbf{X}_i) \\frac{f(\\mathbf{X}_i; \\mathbf{u})}{g(\\mathbf{X}_i)}"
},
{
"math_id": 4,
"text": "\\mathbf{X}_1,\\dots,\\mathbf{X}_N"
},
{
"math_id": 5,
"text": "g\\,"
},
{
"math_id": 6,
"text": " g^*(\\mathbf{x}) = H(\\mathbf{x}) f(\\mathbf{x};\\mathbf{u})/\\ell"
},
{
"math_id": 7,
"text": "\\ell"
},
{
"math_id": 8,
"text": "g^*"
},
{
"math_id": 9,
"text": "\\mathbf{v}^{(0)}"
},
{
"math_id": 10,
"text": "f(\\cdot;\\mathbf{v}^{(t-1)})"
},
{
"math_id": 11,
"text": "\\mathbf{v}^{(t)}"
},
{
"math_id": 12,
"text": "\\mathbf{v}^{(t)} = \\mathop{\\textrm{argmax}}_{\\mathbf{v}} \\frac{1}{N} \\sum_{i=1}^N H(\\mathbf{X}_i)\\frac{f(\\mathbf{X}_i;\\mathbf{u})}{f(\\mathbf{X}_i;\\mathbf{v}^{(t-1)})} \\log f(\\mathbf{X}_i;\\mathbf{v})"
},
{
"math_id": 13,
"text": "f\\,"
},
{
"math_id": 14,
"text": "H(\\mathbf{X}) = \\mathrm{I}_{\\{\\mathbf{x}\\in A\\}}"
},
{
"math_id": 15,
"text": "f(\\mathbf{X}_i;\\mathbf{u}) = f(\\mathbf{X}_i;\\mathbf{v}^{(t-1)})"
},
{
"math_id": 16,
"text": "\\mathbf{X}_k \\in A"
},
{
"math_id": 17,
"text": "S"
},
{
"math_id": 18,
"text": "S(x) = \\textrm{e}^{-(x-2)^2} + 0.8\\,\\textrm{e}^{-(x+2)^2}"
},
{
"math_id": 19,
"text": "\\mathbb{P}_{\\boldsymbol{\\theta}}(S(X)\\geq\\gamma)"
},
{
"math_id": 20,
"text": "\\gamma\\,"
},
{
"math_id": 21,
"text": "\\left\\{f(\\cdot;\\boldsymbol{\\theta})\\right\\}"
},
{
"math_id": 22,
"text": "\\mu_t\\,"
},
{
"math_id": 23,
"text": "\\sigma_t^2"
},
{
"math_id": 24,
"text": "\\boldsymbol{\\theta} = (\\mu,\\sigma^2)"
},
{
"math_id": 25,
"text": "\\boldsymbol{\\theta}"
},
{
"math_id": 26,
"text": "D_{\\mathrm{KL}}(\\textrm{I}_{\\{S(x)\\geq\\gamma\\}}\\|f_{\\boldsymbol{\\theta}})"
},
{
"math_id": 27,
"text": "\\geq\\gamma"
}
] |
https://en.wikipedia.org/wiki?curid=5767980
|
57680998
|
Matrix factorization (recommender systems)
|
Mathematical procedure
Matrix factorization is a class of collaborative filtering algorithms used in recommender systems. Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices. This family of methods became widely known during the Netflix prize challenge due to its effectiveness as reported by Simon Funk in his 2006 blog post, where he shared his findings with the research community. The prediction results can be improved by assigning different regularization weights to the latent factors based on items' popularity and users' activeness.
Techniques.
The idea behind matrix factorization is to represent users and items in a lower dimensional latent space. Since the initial work by Funk in 2006 a multitude of matrix factorization approaches have been proposed for recommender systems. Some of the most used and simpler ones are listed in the following sections.
Funk MF.
The original algorithm proposed by Simon Funk in his blog post factorized the user-item rating matrix as the product of two lower dimensional matrices, the first one has a row for each user, while the second has a column for each item. The row or column associated to a specific user or item is referred to as "latent factors". Note that in Funk MF no singular value decomposition is applied; it is an SVD-like machine learning model.
The predicted ratings can be computed as formula_0, where formula_1 is the user-item rating matrix, formula_2 contains the user's latent factors and formula_3 the item's latent factors.
Specifically, the predicted rating user "u" will give to item "i" is computed as:
formula_4
It is possible to tune the expressive power of the model by changing the number of latent factors. It has been demonstrated that a matrix factorization with one latent factor is equivalent to a "most popular" or "top popular" recommender (e.g. recommends the items with the most interactions without any personalization). Increasing the number of latent factors will improve personalization, therefore recommendation quality, until the number of factors becomes too high, at which point the model starts to overfit and the recommendation quality will decrease. A common strategy to avoid overfitting is to add regularization terms to the objective function.
Funk MF was developed as a "rating prediction" problem, therefore it uses explicit numerical ratings as user-item interactions.
All things considered, Funk MF minimizes the following objective function:
formula_5
where formula_6 is defined to be the Frobenius norm, whereas the other norms might be either the Frobenius norm or another norm, depending on the specific recommendation problem.
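A minimal sketch of Funk-style matrix factorization trained with stochastic gradient descent on the observed ratings (an illustrative implementation with arbitrary hyperparameters; it follows the prediction rule and regularized objective described above, not any particular library's API):

```python
import numpy as np

# Sketch: Funk-style MF with SGD on observed (user, item, rating) triples.
def funk_mf(ratings, n_users, n_items, n_factors=8, lr=0.01, reg=0.05, epochs=50):
    rng = np.random.default_rng(0)
    H = rng.normal(scale=0.1, size=(n_users, n_factors))    # user latent factors
    W = rng.normal(scale=0.1, size=(n_factors, n_items))    # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            hu = H[u].copy()
            err = r - hu @ W[:, i]                           # prediction error
            H[u] += lr * (err * W[:, i] - reg * hu)          # regularized SGD steps
            W[:, i] += lr * (err * hu - reg * W[:, i])
    return H, W

if __name__ == "__main__":
    # Toy data: (user, item, rating) triples.
    ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
    H, W = funk_mf(ratings, n_users=3, n_items=3)
    print(np.round(H @ W, 2))   # predicted rating matrix, the product of the two factor matrices
```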
SVD++.
While Funk MF is able to provide very good recommendation quality, its ability to use only explicit numerical ratings as user-items interactions constitutes a limitation. Modern day recommender systems should exploit all available interactions both explicit (e.g. numerical ratings) and implicit (e.g. likes, purchases, skipped, bookmarked). To this end SVD++ was designed to take into account implicit interactions as well.
Compared to Funk MF, SVD++ takes also into account user and item bias.
The predicted rating user "u" will give to item "i" is computed as:
formula_7
Where formula_8 refers to the overall average rating over all items, and formula_9 and formula_10 refer to the observed deviations of item "i" and user "u", respectively, from the average. SVD++ has however some disadvantages, with the main drawback being that this method is not "model-based": if a new user is added, the algorithm is incapable of modeling it unless the whole model is retrained. Even though the system might have gathered some interactions for that new user, its latent factors are not available and therefore no recommendations can be computed. This is an example of a cold-start problem, that is, the recommender cannot deal efficiently with new users or items, and specific strategies should be put in place to handle this disadvantage.
A possible way to address this cold start problem is to modify SVD++ in order for it to become a "model-based" algorithm, therefore allowing to easily manage new items and new users.
As previously mentioned in SVD++ we don't have the latent factors of new users, therefore it is necessary to represent them in a different way. The user's latent factors represent the preference of that user for the corresponding item's latent factors, therefore user's latent factors can be estimated via the past user interactions. If the system is able to gather some interactions for the new user it is possible to estimate its latent factors.
Note that this does not entirely solve the cold-start problem, since the recommender still requires some reliable interactions for new users, but at least there is no need to recompute the whole model every time. It has been demonstrated that this formulation is almost equivalent to a SLIM model, which is an item-item model based recommender.
formula_11
With this formulation, the equivalent item-item recommender would be formula_12. Therefore the similarity matrix is symmetric.
Asymmetric SVD.
Asymmetric SVD aims at combining the advantages of SVD++ while being a model-based algorithm, therefore being able to consider new users with a few ratings without needing to retrain the whole model. As opposed to the model-based SVD, here the user latent factor matrix H is replaced by Q, which learns the user's preferences as a function of their ratings.
The predicted rating user "u" will give to item "i" is computed as:
formula_13
With this formulation, the equivalent item-item recommender would be formula_14. Since matrices Q and W are different the similarity matrix is asymmetric, hence the name of the model.
Group-specific SVD.
A group-specific SVD can be an effective approach for the cold-start problem in many scenarios. It clusters users and items based on dependency information and similarities in characteristics. Then, once a new user or item arrives, we can assign a group label to it and approximate its latent factors by the group effects (of the corresponding group). Therefore, although ratings associated with the new user or item are not necessarily available, the group effects provide immediate and effective predictions.
The predicted rating user "u" will give to item "i" is computed as:
formula_15
Here formula_16 and formula_17 represent the group label of user "u" and item "i", respectively, which are identical across members from the same group. And S and T are matrices of group effects. For example, for a new user formula_18 whose latent factor formula_19 is not available, we can at least identify their group label formula_20, and predict their ratings as:
formula_21
This provides a good approximation to the unobserved ratings.
Hybrid MF.
In recent years many other matrix factorization models have been developed to exploit the ever increasing amount and variety of available interaction data and use cases. Hybrid matrix factorization algorithms are capable of merging explicit and implicit interactions, or both content and collaborative data.
Deep-learning MF.
In recent years a number of neural and deep-learning techniques have been proposed, some of which generalize traditional Matrix factorization algorithms via a non-linear neural architecture.
While deep learning has been applied to many different scenarios (context-aware, sequence-aware, social tagging, etc.), its real effectiveness when used in a simple collaborative filtering scenario has been put into question. Systematic analysis of publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), has shown that on average less than 40% of articles are reproducible, with as little as 14% in some conferences. Overall the studies identified 26 articles; only 12 of them could be reproduced, and 11 of them could be outperformed by much older and simpler, properly tuned baselines. The articles also highlight a number of potential problems in today's research scholarship and call for improved scientific practices in that area. Similar issues have been spotted also in sequence-aware recommender systems.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tilde{R}=H W"
},
{
"math_id": 1,
"text": "\\tilde{R} \\in \\mathbb{R}^{\\text{users} \\times \\text{items}}"
},
{
"math_id": 2,
"text": "H \\in \\mathbb{R}^{\\text{users} \\times \\text{latent factors}}"
},
{
"math_id": 3,
"text": "W \\in \\mathbb{R}^{\\text{latent factors} \\times \\text{items}}"
},
{
"math_id": 4,
"text": "\\tilde{r}_{ui} = \\sum_{f=0}^{\\text{n factors} } H_{u,f}W_{f,i}"
},
{
"math_id": 5,
"text": "\\underset{H, W}{\\operatorname{arg\\,min}}\\, \\|R - \\tilde{R}\\|_{\\rm F} + \\alpha\\|H\\| + \\beta\\|W\\|"
},
{
"math_id": 6,
"text": "\\|.\\|_{\\rm F}"
},
{
"math_id": 7,
"text": "\\tilde{r}_{ui} = \\mu + b_i + b_u + \\sum_{f=0}^{\\text{n factors}} H_{u,f}W_{f,i}"
},
{
"math_id": 8,
"text": "\\mu"
},
{
"math_id": 9,
"text": "b_i"
},
{
"math_id": 10,
"text": "b_u"
},
{
"math_id": 11,
"text": "\\tilde{r}_{ui} = \\mu + b_i + b_u + \\sum_{f=0}^{\\text{n factors}} \\biggl( \\sum_{j=0}^{\\text{n items}} r_{uj} W^T_{j,f} \\biggr) W_{f,i}"
},
{
"math_id": 12,
"text": "\\tilde{R} = R S = R W^{\\rm T} W"
},
{
"math_id": 13,
"text": "\\tilde{r}_{ui} = \\mu + b_i + b_u + \\sum_{f=0}^{\\text{n factors}} \\sum_{j=0}^{\\text{n items}} r_{uj} Q_{j,f}W_{f,i}"
},
{
"math_id": 14,
"text": "\\tilde{R} = R S = R Q^{\\rm T} W"
},
{
"math_id": 15,
"text": "\\tilde{r}_{ui} = \\sum_{f=0}^{\\text{n factors}} (H_{u,f}+S_{v_u,f})(W_{f,i}+T_{f,j_i})"
},
{
"math_id": 16,
"text": "v_u"
},
{
"math_id": 17,
"text": "j_i"
},
{
"math_id": 18,
"text": "u_{new}"
},
{
"math_id": 19,
"text": "H_{u_{new}}"
},
{
"math_id": 20,
"text": "v_{u_{new}}"
},
{
"math_id": 21,
"text": "\\tilde{r}_{u_{new}i} = \\sum_{f=0}^{\\text{n factors}} S_{v_{u_{new}},f}(W_{f,i}+T_{f,j_i})"
}
] |
https://en.wikipedia.org/wiki?curid=57680998
|
57681488
|
Stored Energy at Sea
|
Submerged offshore pump storage system
The Stored Energy at Sea (StEnSEA) project is a pump storage system designed to store significant quantities of electrical energy offshore. After research and development, it was tested on a model scale in November 2016. It is designed to link in well with offshore wind platforms and their issues caused by electrical production fluctuations. It works by water flowing into a container, at significant pressure, thus driving a turbine. When there is spare electricity the water is pumped out, allowing electricity to be generated at a time of increased need.
Development history.
In 2011, the physicist Prof. Dr. Horst Schmidt-Böcking (Goethe University Frankfurt) and Dr. Gerhard Luther (Saarland University) had the idea of a pump storage system that would be placed on the sea bed. This system would use the high water pressure at great water depths to store energy in hollow bodies.
Shortly after their idea was published on 1 April 2011 in the newspaper Frankfurter Allgemeine Zeitung, a consortium of the "Fraunhofer Institute for Energy Economics and Energy System Technology" and the construction company "Hochtief AG" was set up. In collaboration they produced a first preliminary design study, which demonstrated the feasibility of the pump storage concept. Subsequently, the German Federal Ministry for Economic Affairs and Energy supported the development and testing of the new concept.
Physical principle.
The functionality of a seawater pressure storage power plant is based on usual pumped-hydro storage plants. A hollow concrete sphere with an integrated pump-turbine will be installed on the bottom of the sea. Compared to well known pumped-hydro storage plants, the sea that surrounds the sphere represents the upper water basin. The hollow sphere represents the lower water basin. The StEnSea concept uses the high water pressure difference between the hollow sphere and the surrounding sea, which is about 75 bar (≈1 bar per 10 meters).
In case of overproduction from adjacent energy sources such as wind turbines or photovoltaic systems, the pump-turbine pumps water from the cavity against the pressure into the surrounding sea. An empty hollow sphere means a fully charged storage system. When electricity is needed, water from the surrounding sea is guided through the turbine into the cavity, generating electricity. The higher the pressure difference between the hollow sphere and the surrounding sea, the higher the energy yield during discharging. When the hollow sphere is emptied, a near vacuum is created inside. To avoid cavitation, the pump-turbine and all other electrical components are placed in a centrally mounted cylinder. An auxiliary feed pump in the bottom of the cylinder is required to fill the cylinder with water and to produce an internal pressure.
"Both pumps require an input pressure above the net positive suction head to avoid cavitation while pumping water from the inner volume into the cylinder or from the cylinder out of the sphere. As the pressure difference for the additional pump is much lower than for the pump turbine the required input pressure is lower as well. The input pressure of both pumps is given by the water column above them. For the additional pump this is the water column in the sphere and for the pump turbine it is the water column in the cylinder."
The maximum capacity for the hollow concrete sphere depends on the total pump-turbine efficiency, the installation depth and the inner volume.
formula_0
The stored energy is proportional to the ambient pressure in the depths of the sea. Problems considered during the construction of the hollow sphere were choosing a construction type that withstands the high water pressure and that is heavy enough to keep the buoyancy force lower than the gravitational force. This resulted in a spherical construction with an inner diameter of 28.6 meters and a 2.72-meter-thick wall made of normal watertight concrete.
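Plugging the figures mentioned in this section into the capacity formula gives an order-of-magnitude estimate. In the sketch below the turbine efficiency and the exact installation depth are assumed values, and the divisor 3.69e9 is taken as printed in the formula:

```python
import math

# Order-of-magnitude estimate using the capacity formula above.
# Assumed values (not all are stated explicitly in the article):
rho_water = 1025.0          # seawater density [kg/m^3]
eta_turb = 0.80             # assumed round-trip pump-turbine efficiency
depth = 700.0               # assumed installation depth [m], within the 600-800 m range
g = 9.81                    # gravitational acceleration [m/s^2]
inner_diameter = 28.6       # inner diameter of the sphere [m]
V_inner = math.pi / 6 * inner_diameter ** 3     # inner volume of the sphere [m^3]

C_max = rho_water * eta_turb * depth * g * V_inner / 3.69e9   # divisor as given in the formula

# Roughly 19, i.e. on the order of 20 MWh per sphere if the divisor converts
# joules to MWh-scale units.
print(f"V_inner = {V_inner:.0f} m^3, C_max = {C_max:.1f}")
```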
Pilot test.
To prove feasibility under real conditions and to acquire measurement data, the Fraunhofer engineers started implementing a pilot project. Hochtief Solutions AG constructed a pilot hollow sphere at a scale of 1:10 out of concrete, with an outer diameter of three meters and an inner volume of eight m3. On 9 November 2016 it was installed in Lake Constance at a depth of 100 meters and tested for four weeks.
During the test phase, the engineers were able to successfully store energy and operate the system in different operating modes. The engineers also studied whether a pressure equalization line to the surface is required. In the case of operation without the compensating line, a reduction of costs and effort would be possible. The pilot test revealed that both operating variants work and would be possible to run.
In the next step, a possible test location in the sea for carrying out a demonstration project is to be identified. Then a sphere with the planned demonstration diameter of 30 meters should be built and installed at a suitable location in the sea. Possible installation sites near a coast would be, for example, the Norwegian Trench or some Spanish sea areas.
Furthermore, partners from industry financing half of the project must be found in order to receive further public funding from the BMWi, because the total costs for the demonstration project are estimated at a low double-digit million euro amount.
Potential installation sites.
The identification of potential installation sites was undertaken in three consecutive steps. At first, several criteria describing the quality of a potential location were determined. Besides the installation depth, which is the main factor involved, variables like slope, geomorphology, distance to a possible grid connection point as well as to bases for servicing and set-up, marine reserves and the requirement for power storage in the surroundings were taken into account.
In the following step, specific values were assigned to the hard parameters, which are required for the use of the technology. Many of these values were determined in a previous feasibility analysis; a few had to be assessed by using comparable applications from different offshore industries. The installation depth of the concrete sphere should be 600–800 m below sea level, with an angle of inclination of less than or equal to 1°. In addition, the next grid connection point must be reachable within one hundred kilometres, as must a base from which maintenance and repair measures can be carried out. Furthermore, an installation base should not be more than 500 km away, and areas with inappropriate geomorphology, for example canyons, were excluded.
Finally, a global location analysis, based on geo-datasets and the above defined restrictions, was carried out with a Geographical Information System (GIS). In order to make a statement about the potential storage capacities, the resulting areas were assigned to the Exclusive Economic Zones (EEZ) of the affected states. Those and the corresponding capacities for storing electricity are displayed in the table below.
Economic assessment of StEnSea.
StEnSea is a modular high-capacity energy storage technology. Its profitability depends on the number of installed units (concrete hollow spheres) per facility (causing scale effects), on the price arbitrage realized on the energy market, on the operating hours per year, and on the investment and operating costs.
In the following chart the relevant economic parameters for an economic assessment are pictured. About 800 to 1000 full operation cycles per annum are required.
For the operation and management of a storage farm, personnel expenditure is based on 0.5–2 staff per storage farm, depending on the farm capacity. Labor costs of 70 k€ per year and member of staff are used for the calculation. The price arbitrage is set to be 6 €ct per kWh for the economic assessment, resulting from an average electricity purchase price of 2 €ct per kWh and an average sale price of 8 €ct per kWh. This price arbitrage includes the purchase of other services such as the provision of positive or negative balance power, frequency control or reactive power, all of which are not separately considered in the calculations. Planning and approval costs include costs for the site evaluation (as a prerequisite for the permission), power plant certification, as well as project development and management.
Depending on the number of storage units per farm, the unit-specific costs for planning and approval vary in the range from 1.07 million € at 120 units to 1.74 million € at 5 units. The annuities also depend directly on the number of installed units: with 120 units an annuity of 544 k€ can be achieved, while with 5 installed units only a 232 k€ annuity is possible.
Ecological effects.
Due to the main components of the construction (primarily steel, concrete for the hollow sphere and cables for the connection), this system presents minimal risks to the ecosystem. To avoid sea animals being sucked into the turbine, a fine-meshed grid is installed. In addition, the flow speed of the water rushing into the hollow sphere is kept low.
Media coverage.
A video post by the public television station ZDF called the hollow concrete balls a "possible solution to store solar and wind energy". The data gained helped to better understand the project. For further tests on a bigger scale, Christian Dick, also a member of the Fraunhofer IEE team, is considering constructing a large concrete hollow sphere in the sea.
The TV station ZDF nano produced a documentary about the field study StEnSea in
Lake Constance (German: Bodensee). Christian Dick was cited that “the ball exactly worked like it was supposed to work”. The most important finding was that an air-connection to the surface is not needed, reducing the technical effort significantly. Project leader Matthias Puchta from Fraunhofer IEE said “by pumping out the water we created a nearly total vacuum. Demonstrating that was very exciting, because nobody was able to do that before by using this technology. We showed it works.” For maintenance and possible technical problems the technology will be located in a cylinder, easy to recover and maintain with a robotic submarine. After all this technology could be “a mosaic of our future energy supply".
This opinion was shared by the Swiss radio channel SRF, which reported on the project as a "potentially path-breaking experiment". Thanks to the successful project in the lake, where energy was fed into a test grid and drawn from it, the team intends to install a concrete ball with a diameter 10 times larger than the pilot project (30 meters). Because Germany's coastal waters are too shallow, the country will not be used for further projects. The Spanish coastline offers good conditions for a long-term project, which should last between three and five years under real-life conditions and is supposed to provide the data for subsequent commercialization.
"Der Spiegel" reported that the technology of StEnSea could be also interesting for offshore wind parks. The economically efficient storage of surplus energy is one of the key tasks for the grid and the energy market, as more and more renewables are taken into the system. Therefore, the technology's role in reorganizing the energy system can be crucial.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C_{max}=\\frac{\\rho_{water} \\cdot \\eta_{turb} \\cdot d \\cdot g \\cdot V_{inner}}{3,69E9}"
}
] |
https://en.wikipedia.org/wiki?curid=57681488
|
5768230
|
Carrier recovery
|
A carrier recovery system is a circuit used to estimate and compensate for frequency and phase differences between a received signal's carrier wave and the receiver's local oscillator for the purpose of coherent demodulation.
In the transmitter of a communications carrier system, a carrier wave is modulated by a baseband signal. At the receiver, the baseband information is extracted from the incoming modulated waveform.
In an ideal communications system, the carrier signal oscillators of the transmitter and receiver would be perfectly matched in frequency and phase, thereby permitting perfect coherent demodulation of the modulated baseband signal.
However, transmitters and receivers rarely share the same carrier oscillator. Communications receiver systems are usually independent of transmitting systems and contain their own oscillators, which exhibit frequency and phase offsets and instabilities. Doppler shift may also contribute to frequency differences in mobile radio frequency communications systems.
All these frequencies and phase variations must be estimated using the information in the received signal to reproduce or recover the carrier signal at the receiver and permit coherent demodulation.
Methods.
For a quiet carrier or a signal containing a dominant carrier spectral line, carrier recovery can be accomplished with a simple band-pass filter at the carrier frequency or with a phase-locked loop, or both.
However, many modulation schemes make this simple approach impractical because most signal power is devoted to modulation—where the information is present—and not to the carrier frequency. Reducing the carrier power results in greater transmitter efficiency. Different methods must be employed to recover the carrier in these conditions.
Non-data-aided.
Non-data-aided/“blind” carrier recovery methods do not rely on knowledge of the modulation symbols. They are typically used for simple carrier recovery schemes or as the initial coarse carrier frequency recovery method. Closed-loop non-data-aided systems are frequently maximum likelihood frequency error detectors.
Multiply-filter-divide.
In this method of non-data-aided carrier recovery, a non-linear operation (frequency multiplier) is applied to the modulated signal to create harmonics of the carrier frequency with the modulation removed (see example below). The carrier harmonic is then band-pass filtered and frequency divided to recover the carrier frequency. (This may be followed by a PLL.) Multiply-filter-divide is an example of open-loop carrier recovery, which is favored in burst transactions (burst mode clock and data recovery) since the acquisition time is typically shorter than for closed-loop synchronizers.
If the phase-offset/delay of the multiply-filter-divide system is known, it can be compensated for to recover the correct phase. In practice, applying this phase compensation is complicated.
In general, the modulation's order matches the nonlinear operator required to produce a clean carrier harmonic.
As an example, consider a BPSK signal. We can recover the RF carrier frequency, formula_0 by squaring:
formula_1
This produces a signal at twice the RF carrier frequency with no phase modulation (modulo formula_2, the phase term is effectively zero, so the modulation is removed)
For a QPSK signal, we can take the fourth power:
formula_3
Two terms (plus a DC component) are produced. An appropriate filter around formula_4 recovers this frequency.
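The effect of the squaring step can be checked numerically. The sketch below (illustrative, with arbitrary signal parameters) generates a random BPSK waveform, squares it, and verifies that the strongest spectral line of the squared signal sits at twice the carrier frequency:

```python
import numpy as np

# Sketch: recovering the carrier of a BPSK signal by squaring (multiply-filter-divide).
fs = 10_000.0                 # sample rate [Hz]
f_rf = 1_000.0                # carrier frequency [Hz]
symbol_rate = 100.0           # BPSK symbol rate [Hz]

t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
symbols = rng.choice([0, 1], size=int(symbol_rate))            # random bits
phase = np.pi * symbols[(t * symbol_rate).astype(int)]         # 0 or pi phase per symbol
v_bpsk = np.cos(2 * np.pi * f_rf * t + phase)                  # modulated signal

squared = v_bpsk ** 2                                          # non-linear operation
spectrum = np.abs(np.fft.rfft(squared - squared.mean()))       # remove DC, take FFT
freqs = np.fft.rfftfreq(len(t), 1 / fs)

print("strongest line at", freqs[np.argmax(spectrum)], "Hz")   # ~2000 Hz = 2 * f_rf
```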
Costas loop.
The carrier frequency and phase recovery, as well as demodulation, can be accomplished using a Costas loop of the appropriate order. A Costas loop is a cousin of the PLL that uses coherent quadrature signals to measure phase error. This phase error is used to discipline the loop's oscillator. Once correctly aligned/recovered, the quadrature signals also successfully demodulate the signal. Costas loop carrier recovery may be used for any M-ary PSK modulation scheme. One of the Costas Loop's inherent shortcomings is a 360/M degree phase ambiguity present on the demodulated output.
Decision-directed.
At the start of the carrier recovery process, it is possible to achieve symbol synchronization before full carrier recovery because symbol timing can be determined without knowledge of the carrier phase or the carrier's minor frequency variation/offset. In decision directed carrier recovery the output of a symbol decoder is fed to a comparison circuit and the phase difference/error between the decoded symbol and the received signal is used to discipline the local oscillator. Decision-directed methods are suited to synchronizing frequency differences that are less than the symbol rate because comparisons are performed on symbols at or near the symbol rate. Other frequency recovery methods may be necessary to achieve initial frequency acquisition.
A common form of decision-directed carrier recovery begins with quadrature phase correlators producing in-phase and quadrature signals representing a symbol coordinate in the complex plane. This point should correspond to a location in the modulation constellation diagram. The phase error between the received value and the nearest/decoded symbol is calculated using the arc tangent (or an approximation). However, the arc tangent can only compute a phase correction between 0 and formula_5. Most QAM constellations also have formula_5 phase symmetry. Both of these shortcomings can be overcome by using differential coding.
In low SNR conditions, the symbol decoder will make errors more frequently. Exclusively using the corner symbols in rectangular constellations or giving them more weight versus lower SNR symbols reduces the impact of low SNR decision errors.
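A minimal sketch of the phase-error computation in a decision-directed loop (illustrative; a real receiver would feed this error through a loop filter to the local oscillator, and the QPSK constellation here is just an example):

```python
import numpy as np

# Sketch: decision-directed phase error for QPSK (per received complex sample).
constellation = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def phase_error(received_sample):
    decision = constellation[np.argmin(np.abs(constellation - received_sample))]
    # Angle between the received sample and the decided symbol; this error would
    # be used to discipline the local oscillator.
    return np.angle(received_sample * np.conj(decision))

# Example: a symbol rotated by a small residual carrier phase of 0.1 rad.
sample = constellation[0] * np.exp(1j * 0.1)
print(phase_error(sample))   # ~0.1
```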
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\omega_{RF}"
},
{
"math_id": 1,
"text": "\\begin{align}\nV_{BPSK}(t) &{}= A(t) \\cos(\\omega_{RF}t + n\\pi); n = 0,1 \\\\\nV^2_{BPSK}(t) &{}= A^2(t) \\cos^2(\\omega_{RF}t + n\\pi) \\\\\nV^2_{BPSK}(t) &{}= \\frac{A^2(t)}{2}[1 + \\cos(2\\omega_{RF}t + n2{\\pi})]\n\\end{align}"
},
{
"math_id": 2,
"text": "2\\pi"
},
{
"math_id": 3,
"text": "\\begin{align}\nV_{QPSK}(t) &{}= A(t) \\cos(\\omega_{RF}t + n\\frac{\\pi}{2}); n = 0,1,2,3 \\\\\nV^4_{QPSK}(t) &{}= A^4(t) \\cos^4(\\omega_{RF}t + n\\frac{\\pi}{2}) \\\\\nV^4_{QPSK}(t) &{}= \\frac{A^4(t)}{8}[3 + 4\\cos(2\\omega_{RF}t + n{\\pi}) + \\cos(4\\omega_{RF}t + n2\\pi)]\n\\end{align}"
},
{
"math_id": 4,
"text": "4\\omega_{RF}"
},
{
"math_id": 5,
"text": "\\pi/2"
}
] |
https://en.wikipedia.org/wiki?curid=5768230
|
576855
|
Binary decision diagram
|
Data structure for Boolean functions
In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, i.e. without decompression.
Similar data structures include negation normal form (NNF), Zhegalkin polynomials, and propositional directed acyclic graphs (PDAG).
Definition.
A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several (decision) nodes and two terminal nodes. The two terminal nodes are labeled 0 (FALSE) and 1 (TRUE). Each (decision) node formula_0 is labeled by a Boolean variable formula_1 and has two child nodes called low child and high child. The edge from node formula_0 to a low (or high) child represents an assignment of the value FALSE (or TRUE, respectively) to variable formula_1. Such a BDD is called 'ordered' if different variables appear in the same order on all paths from the root. A BDD is said to be 'reduced' if the following two rules have been applied to its graph: merge any isomorphic subgraphs, and eliminate any node whose two children are isomorphic.
In popular usage, the term BDD almost always refers to Reduced Ordered Binary Decision Diagram (ROBDD in the literature, used when the ordering and reduction aspects need to be emphasized). The advantage of an ROBDD is that it is canonical (unique) for a particular function and variable order. This property makes it useful in functional equivalence checking and other operations like functional technology mapping.
A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the represented Boolean function is true. As the path descends to a low (or high) child from a node, then that node's variable is assigned to 0 (respectively 1).
Example.
The left figure below shows a binary decision "tree" (the reduction rules are not applied), and a truth table, each representing the function formula_2. In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child, while solid lines represent edges to a high child. Therefore, to find formula_3, begin at x1, traverse down the dotted line to x2 (since x1 has an assignment to 0), then down two solid lines (since x2 and x3 each have an assignment to one). This leads to the terminal 1, which is the value of formula_3.
The binary decision "tree" of the left figure can be transformed into a binary decision "diagram" by maximally reducing it according to the two reduction rules. The resulting BDD is shown in the right figure.
Another notation for writing this Boolean function is formula_4.
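The traversal described above can be written down directly. The sketch below (illustrative; the node structure and variable names are ad hoc rather than a standard BDD library API) encodes the reduced BDD of this example and evaluates it for the assignment (x1, x2, x3) = (0, 1, 1):

```python
from dataclasses import dataclass
from typing import Union

# Sketch: a tiny hand-built ROBDD for the example function above and the
# path-following evaluation described in the text.

@dataclass
class Node:
    var: str                        # decision variable labelling this node
    low: Union["Node", int]         # child followed when var = 0 (dotted edge)
    high: Union["Node", int]        # child followed when var = 1 (solid edge)

def evaluate(node, assignment):
    """Follow low/high edges according to the assignment until a terminal (0 or 1) is reached."""
    while isinstance(node, Node):
        node = node.high if assignment[node.var] else node.low
    return node

# Reduced ordered BDD for f = x1'x2'x3' + x1x2 + x2x3 with order x1 < x2 < x3.
x3_not = Node("x3", low=1, high=0)            # represents NOT x3
x3_id = Node("x3", low=0, high=1)             # represents x3
x2_low = Node("x2", low=x3_not, high=x3_id)   # x1 = 0 branch: x2 XNOR x3
x2_high = Node("x2", low=0, high=1)           # x1 = 1 branch: x2
root = Node("x1", low=x2_low, high=x2_high)

print(evaluate(root, {"x1": 0, "x2": 1, "x3": 1}))   # 1, as in the worked example
```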
Complemented edges.
An ROBDD can be represented even more compactly, using complemented edges. Complemented edges are formed by annotating low edges as complemented or not. If an edge is complemented, then it refers to the negation of the Boolean function that corresponds to the node that the edge points to (the Boolean function represented by the BDD with root that node). High edges are not complemented, in order to ensure that the resulting BDD representation is a canonical form. In this representation, BDDs have a single leaf node, for reasons explained below.
Two advantages of using complemented edges when representing BDDs are that negating a function takes constant time, and that node sharing between a function and its negation reduces memory usage.
A reference to a BDD in this representation is a (possibly complemented) "edge" that points to the root of the BDD. This is in contrast to a reference to a BDD in the representation without use of complemented edges, which is the root node of the BDD. The reason why a reference in this representation needs to be an edge is that for each Boolean function, the function and its negation are represented by an edge to the root of a BDD, and a complemented edge to the root of the same BDD. This is why negation takes constant time. It also explains why a single leaf node suffices: FALSE is represented by a complemented edge that points to the leaf node, and TRUE is represented by an ordinary edge (i.e., not complemented) that points to the leaf node.
For example, assume that a Boolean function is represented with a BDD represented using complemented edges. To find the value of the Boolean function for a given assignment of (Boolean) values to the variables, we start at the reference edge, which points to the BDD's root, and follow the path that is defined by the given variable values (following a low edge if the variable that labels a node equals FALSE, and following the high edge if the variable that labels a node equals TRUE), until we reach the leaf node. While following this path, we count how many complemented edges we have traversed. If when we reach the leaf node we have crossed an odd number of complemented edges, then the value of the Boolean function for the given variable assignment is FALSE, otherwise (if we have crossed an even number of complemented edges), then the value of the Boolean function for the given variable assignment is TRUE.
An example diagram of a BDD in this representation is shown on the right, and represents the same Boolean expression as shown in diagrams above, i.e., formula_5. Low edges are dashed, high edges solid, and complemented edges are signified by a circle at their source. The node with the @ symbol represents the reference to the BDD, i.e., the reference edge is the edge that starts from this node.
History.
The basic idea from which the data structure was created is the Shannon expansion. A switching function is split into two sub-functions (cofactors) by assigning one variable (cf. "if-then-else normal form"). If such a sub-function is considered as a sub-tree, it can be represented by a "binary decision tree". Binary decision diagrams (BDDs) were introduced by C. Y. Lee, and further studied and made known by Sheldon B. Akers and Raymond T. Boute. Independently of these authors, a BDD under the name "canonical bracket form" was realized by Yu. V. Mamrukov in a CAD system for the analysis of speed-independent circuits. The full potential for efficient algorithms based on the data structure was investigated by Randal Bryant at Carnegie Mellon University: his key extensions were to use a fixed variable ordering (for canonical representation) and shared sub-graphs (for compression). Applying these two concepts results in an efficient data structure and algorithms for the representation of sets and relations. By extending the sharing to several BDDs, i.e. one sub-graph is used by several BDDs, the data structure "Shared Reduced Ordered Binary Decision Diagram" is defined. The notion of a BDD is now generally used to refer to that particular data structure.
In his video lecture "Fun With Binary Decision Diagrams (BDDs)", Donald Knuth calls BDDs "one of the only really fundamental data structures that came out in the last twenty-five years" and mentions that Bryant's 1986 paper was for some time one of the most-cited papers in computer science.
Adnan Darwiche and his collaborators have shown that BDDs are one of several normal forms for Boolean functions, each induced by a different combination of requirements. Another important normal form identified by Darwiche is decomposable negation normal form or DNNF.
Applications.
BDDs are extensively used in CAD software to synthesize circuits (logic synthesis) and in formal verification. There are several lesser known applications of BDD, including fault tree analysis, Bayesian reasoning, product configuration, and private information retrieval.
Every arbitrary BDD (even if it is not reduced or ordered) can be directly implemented in hardware by replacing each node with a 2 to 1 multiplexer; each multiplexer can be directly implemented by a 4-LUT in an FPGA. It is not so simple to convert from an arbitrary network of logic gates to a BDD (unlike the and-inverter graph).
BDDs have been applied in efficient Datalog interpreters.
Variable ordering.
The size of the BDD is determined both by the function being represented and by the chosen ordering of the variables. There exist Boolean functions formula_6 for which depending upon the ordering of the variables we would end up getting a graph whose number of nodes would be linear (in "n") at best and exponential at worst (e.g., a ripple carry adder). Consider the Boolean function formula_7 Using the variable ordering formula_8, the BDD needs formula_9 nodes to represent the function. Using the ordering formula_10, the BDD consists of formula_11 nodes.
It is of crucial importance to care about variable ordering when applying this data structure in practice. The problem of finding the best variable ordering is NP-hard. For any constant "c" > 1 it is even NP-hard to compute a variable ordering resulting in an OBDD with a size that is at most "c" times larger than an optimal one. However, there exist efficient heuristics to tackle the problem.
There are functions for which the graph size is always exponential—independent of variable ordering. This holds e.g. for the multiplication function. In fact, the function computing the middle bit of the product of two formula_12-bit numbers does not have an OBDD smaller than formula_13 vertices. (If the multiplication function had polynomial-size OBDDs, it would show that integer factorization is in P/poly, which is not known to be true.)
Researchers have suggested refinements on the BDD data structure giving way to a number of related graphs, such as BMD (binary moment diagrams), ZDD (zero-suppressed decision diagrams), FBDD (free binary decision diagrams), FDD (functional decision diagrams), PDD (parity decision diagrams), and MTBDDs (multiple terminal BDDs).
Logical operations on BDDs.
Many logical operations on BDDs can be implemented by polynomial-time graph manipulation algorithms; examples include conjunction, disjunction, negation, existential abstraction, and universal abstraction.
However, repeating these operations several times, for example forming the conjunction or disjunction of a set of BDDs, may in the worst case result in an exponentially big BDD. This is because any of the preceding operations for two BDDs may result in a BDD with a size proportional to the product of the BDDs' sizes, and consequently for several BDDs the size may be exponential in the number of operations. Variable ordering needs to be considered afresh; what may be a good ordering for (some of) the set of BDDs may not be a good ordering for the result of the operation. Also, since constructing the BDD of a Boolean function solves the NP-complete Boolean satisfiability problem and the co-NP-complete tautology problem, constructing the BDD can take exponential time in the size of the Boolean formula even when the resulting BDD is small.
Computing existential abstraction over multiple variables of reduced BDDs is NP-complete.
Model-counting, counting the number of satisfying assignments of a Boolean formula, can be done in polynomial time for BDDs. For general propositional formulas the problem is ♯P-complete and the best known algorithms require an exponential time in the worst case.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "u"
},
{
"math_id": 1,
"text": "x_i"
},
{
"math_id": 2,
"text": "f(x_1, x_2, x_3)"
},
{
"math_id": 3,
"text": "f(0, 1, 1)"
},
{
"math_id": 4,
"text": "\\overline{x}_1 \\overline{x}_2 \\overline{x}_3 + x_1 x_2 + x_2 x_3"
},
{
"math_id": 5,
"text": "(\\neg x_1 \\wedge \\neg x_2 \\wedge \\neg x_3) \\vee (x_1 \\wedge x_2) \\vee (x_2 \\wedge x_3)"
},
{
"math_id": 6,
"text": "f(x_1,\\ldots, x_{n})"
},
{
"math_id": 7,
"text": "f(x_1,\\ldots, x_{2n}) = x_1x_2 + x_3x_4 + \\cdots + x_{2n-1}x_{2n}."
},
{
"math_id": 8,
"text": "x_1 < x_3 < \\cdots < x_{2n-1} < x_2 < x_4 < \\cdots < x_{2n}"
},
{
"math_id": 9,
"text": "2^{n+1}"
},
{
"math_id": 10,
"text": "x_1 < x_2 < x_3 < x_4 < \\cdots < x_{2n-1} < x_{2n}"
},
{
"math_id": 11,
"text": "2n+2"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "2^{\\lfloor n/2 \\rfloor} / 61 - 4"
}
] |
https://en.wikipedia.org/wiki?curid=576855
|
57687722
|
Spinterface
|
Spinterface is a term coined to indicate an interface between a ferromagnet and an organic semiconductor.
This is a widely investigated topic in molecular spintronics, since interfaces play a huge part in the functioning of a device. In particular, spinterfaces are widely studied in the scientific community because of their hybrid organic/inorganic composition. In fact, the hybridization between the metal and the organic material can be controlled by acting on the molecules, which are more responsive to electrical and optical stimuli than metals. This gives rise to the possibility of efficiently tuning the magnetic properties of the interface at the atomic scale.
History.
The field of spintronics, the scientific field that studies spin-dependent electron transport in solid-state devices, emerged in the last decades of the 20th century, first with the observation of the injection of a spin-polarized current from a ferromagnetic to a paramagnetic metal and subsequently with the discovery of tunnel magnetoresistance and giant magnetoresistance. The field then evolved towards spin-orbit related phenomena, such as the Rashba effect. Only more recently has spintronics been extended to the organic world, with the idea of exploiting the weak spin-relaxation mechanisms of molecules in order to use them for spin transport. Research in this field started with hybrid replicas of inorganic spintronic devices, such as spin valves and magnetic tunneling junctions, trying to obtain spin transport in molecular films. However, some devices did not behave as expected; for example, vertical spin valves displayed a negative magnetoresistance. It was then quickly understood that the molecular layers do not merely play a transport role but can also act on the spin polarization of the ferromagnet at the interface. Because of this, interest in ferromagnet/organic interfaces rapidly increased in the scientific community and the term "spinterface" was born. Research is currently aimed at building devices with interfaces engineered to tailor the spin injection.
Scientific interest.
The shrinking of device sizes and the attention towards low-power applications have led to an ever-growing interest in the physics of surfaces and interfaces, which play a fundamental role in the functioning of many applications. The breaking of bulk symmetry that occurs at a surface leads to different physical and chemical properties, sometimes impossible to find in the bulk material. In particular, when a solid-state material is interfaced with another solid, the terminations of the two different materials influence each other by means of chemical bonds. The behavior of the interface is highly influenced by the properties of the materials. In spinterfaces, a metal and an organic semiconductor, which display very different electronic properties, are interfaced and usually form a strong hybridization. With the final aim of being able to tune and change the electronic and magnetic behavior of the interface, spinterfaces are studied both by inserting them into spintronic devices and, on a more basic level, by investigating the growth of ultra-thin molecular layers on ferromagnetic substrates with a surface science approach. The aim of building such interfaces is, on one side, to exploit the spin-polarized character of the electronic structure of the ferromagnet to induce a spin polarization in the molecular layer and, on the other, to influence the magnetic character of the ferromagnetic layer by means of hybridization. Combined with the fact that molecules usually have a very high responsivity to stimuli (typically impossible to achieve in inorganic materials), there is the hope of being able to easily change the character of the hybridization, hence tuning the properties of the spinterface. This could give rise to a new class of spintronic devices, in which the spinterface plays a fundamental and active role.
Physics and applications.
Organic semiconductors are currently used in various applications, for example OLED displays, which can be flexible, thinner, faster and more power efficient than LCD screens, and organic field-effect transistors, intended for large, low-cost electronic products and biodegradable electronics.
In terms of spintronic applications, there are no available commercial devices yet, but the applied research is headed towards the use of spinterfaces mainly for magnetic tunneling junctions and organic spin valves.
Spin-Filtering.
The physical principle mainly exploited in spinterfaces is spin filtering. This is schematized in the figure: when the ferromagnet and the organic semiconductor are considered on their own (panel a), the density of states (DOS) of the metal is unbalanced between the two spin channels, with the difference between the up and down DOS at the Fermi level governing the spin polarization of the current flow; the DOS of the organic semiconductor has no unbalance between the spin channels and displays localized energy levels, namely the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), with zero DOS at the Fermi level. When the two materials are put into contact they influence each other's DOS at the interface: the main effects are a broadening of the molecular orbitals and a possible shift of their energy. These effects are in general spin-dependent, since they arise from the hybridization, which depends strictly on the DOS of the two materials, itself spin-unbalanced in the case of the ferromagnet. As an example, panel b represents the case of a parallel spin polarization of the injected current, while panel c schematizes an antiparallel spin polarization of the current injected in the semiconductor. In this way, the injected current is polarized according to the interface DOS at the Fermi level. Exploiting the fact that molecules usually have intrinsically weak spin-relaxation mechanisms, molecular layers are excellent candidates for spin transport applications. With a good choice of materials, one is then able to filter the spins at the spinterface.
Magnetic Tunneling Junction.
Applied research on spinterfaces is often focused on studying the tunnel magnetoresistance (TMR) in hybrid magnetic tunneling junctions (MTJs). Conventional MTJs are composed of two ferromagnetic electrodes separated by an insulating layer, thin enough for electron tunneling events to be relevant. The idea of using spinterfaces consists of replacing the inorganic insulating barrier with an organic one. The motivation for this is given by the flexibility, low cost and longer spin-relaxation times of molecules and by the possibility of chemically engineering the interfaces. The physical principle behind MTJs is that the tunneling current of the junction depends on the relative orientation of the magnetization of the ferromagnetic electrodes. In fact, in the Jullière model, the tunneling current that passes through the junction is proportional to the sum of the products of the DOS of the single spin channels:
formula_0 formula_1
The picture of spin-dependent tunneling is represented in the figure, and what is observed is that there is usually a larger tunneling current in the case of parallel alignment of the electrode magnetizations. This is because, in this case, the term formula_2 is much larger than all the other terms, making formula_3.
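The Jullière expressions above can be evaluated directly. The following sketch uses purely illustrative spin-resolved DOS values (not taken from any specific material) and one common definition of the TMR ratio.

    # Illustrative spin-resolved DOS at the Fermi level (arbitrary units)
    d1_up, d1_dn = 8.0, 2.0   # electrode 1
    d2_up, d2_dn = 7.0, 3.0   # electrode 2

    j_parallel     = d1_up * d2_up + d1_dn * d2_dn   # proportional to J^p
    j_antiparallel = d1_up * d2_dn + d1_dn * d2_up   # proportional to J^ap

    tmr = (j_parallel - j_antiparallel) / j_antiparallel
    print(j_parallel, j_antiparallel, tmr)   # 62.0 38.0 ~0.63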
By changing the relative orientation of the magnetization of the electrodes it is possible to control the conductance state of the tunneling junction and use this principle for applications, for example read-heads of hard disk drives and MRAMs.
If an organic material is inserted as the tunneling barrier, the picture becomes more complex, as spin-hybridization-induced polarized states form. These states may affect the tunneling transmission coefficient, which is usually kept constant in the Jullière model. Barraud et al., in a Nature Physics paper, developed a spin transport model that takes into account the effect of the spinterface hybridization. They observed that this hybridization is not only relevant to the spin tunneling process, but is also capable of inverting the sign of the TMR. This opens the door to a new research front, aimed at tailoring the properties of spintronic devices through the right combination of ferromagnetic metals and molecules.
Spin Valves.
Conventional spin valves are built in a very similar way to magnetic tunneling junctions; the difference is that the two ferromagnetic electrodes are separated by a non-magnetic metal instead of an insulator. The physical principle exploited in this case is no longer related to tunneling but to electrical resistance.
The spin-polarized current, coming from one ferromagnetic electrode, can travel in a non-magnetic metal for a certain distance, given by the spin diffusion length of that metal. When the current enters another ferromagnetic material, the relative orientation of the magnetization with respect to the first electrode can lead to a change in the resistance of the junction: if the alignment of the magnetizations is parallel, the spin valve will exhibit a low resistance state, while, in the case of antiparallel alignment, reflection and spin flip scattering events give rise to a high resistance state. From these considerations one can define and evaluate the magnetoresistance of the spin valve:
formula_4
where formula_5 and formula_6 are respectively the resistances for the antiparallel and parallel alignment.
The usual way of enabling both parallel and antiparallel alignment is either to pin one of the electrodes by means of exchange bias or to use materials with different coercive fields for the two electrodes (pseudo spin valves). The proposed use of spinterfaces in spin valve applications is to interface one of the electrodes with a molecular layer, which is capable of tuning the magnetization properties of the electrode through a change in hybridization. This change of hybridization at the spinterface can in principle be induced both by light (making these systems suitable for ultra-fast applications) and by electric voltages. If this process is reversible, there is the possibility of switching from high to low resistance in a very effective way, making the devices faster and more efficient.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " J^{p} \\propto D^{\\uparrow}_1 \\cdot D^{\\uparrow}_2+D^{\\downarrow}_1 \\cdot D^{\\downarrow}_2 \\qquad "
},
{
"math_id": 1,
"text": " \\qquad J^{ap} \\propto D^{\\uparrow}_1 \\cdot D^{\\downarrow}_2+D^{\\downarrow}_1 \\cdot D^{\\uparrow}_2 "
},
{
"math_id": 2,
"text": " D^{\\downarrow}_1 \\cdot D^{\\downarrow}_2 "
},
{
"math_id": 3,
"text": " J^{p} > J^{ap} "
},
{
"math_id": 4,
"text": " MR = \\frac{\\rho_{ap} - \\rho_p}{\\rho_p} "
},
{
"math_id": 5,
"text": " \\rho_{ap} "
},
{
"math_id": 6,
"text": " \\rho_{p} "
}
] |
https://en.wikipedia.org/wiki?curid=57687722
|
57687737
|
Incremental deformations
|
In solid mechanics, the linear stability analysis of an elastic solution is studied using the method of incremental deformations superposed on finite deformations. The method of incremental deformation can be used to solve static, quasi-static and time-dependent problems. The governing equations of the motion are those of classical mechanics, such as the conservation of mass and the balance of linear and angular momentum, which provide the equilibrium configuration of the material. The main corresponding mathematical framework is described in Raymond Ogden's book "Non-linear elastic deformations" and in Biot's book "Mechanics of incremental deformations", which is a collection of his main papers.
Nonlinear Elasticity.
Kinematics and Mechanics.
Let formula_0 be a three-dimensional Euclidean space. Let formula_1 be two regions occupied by the material at two different instants of time. Let formula_2 be the deformation which transforms the body from formula_3, i.e. the material/reference configuration, to the loaded configuration formula_4, i.e. the current configuration. Let formula_5 be a formula_6-diffeomorphism from formula_3 to formula_4, with formula_7 being the current position vector, given as a function of the material position formula_8. The deformation gradient is given by
formula_9
Considering a hyperelastic material with an elastic strain energy density formula_10, the Piola-Kirchhoff stress tensor formula_11 is given by formula_12.
For a quasi-static problem, without body forces, the equilibrium equation is
formula_13
where formula_14 is the divergence with respect to the material coordinates.
If the material is incompressible, i.e. the volume of every subdomain does not change during the deformation, a Lagrangian multiplier is typically introduced to enforce the internal isochoric constraint formula_15. The expression of the Piola stress tensor then becomes
formula_16
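As a concrete illustration of the stress definition above, the following sketch evaluates the incompressible Piola stress for a neo-Hookean energy W = mu/2 (tr(F^T F) - 3), for which dW/dF = mu F, under a uniaxial stretch. The energy choice, stretch value and the zero-lateral-traction condition used to fix the Lagrange multiplier p are illustrative assumptions, not taken from the article.

    import numpy as np

    mu, lam = 1.0, 1.3                      # shear modulus and axial stretch (illustrative)
    F = np.diag([lam, lam**-0.5, lam**-0.5])  # incompressible uniaxial stretch, det F = 1

    dW_dF = mu * F                          # for W = mu/2 (tr(F^T F) - 3)
    Finv = np.linalg.inv(F)

    # Zero lateral traction S_22 = 0 fixes the Lagrange multiplier p
    p = mu * F[1, 1] / Finv[1, 1]           # = mu / lam
    S = dW_dF - p * Finv                    # Piola stress, S = dW/dF - p F^{-1}

    print(np.isclose(np.linalg.det(F), 1.0))   # isochoric check
    print(S[0, 0], mu * (lam - lam**-2))       # both equal the classical uniaxial result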
Boundary conditions.
Let formula_17 be the boundary of formula_3, the reference configuration, and formula_18 the boundary of formula_19, the current configuration. One defines the subset formula_20 of formula_17 on which Dirichlet conditions are applied, while Neumann conditions hold on formula_21, such that formula_22. If formula_23 is the displacement vector to be assigned on the portion formula_20 and formula_24 is the traction vector to be assigned on the portion formula_21, the boundary conditions can be written in mixed form as
formula_25
where formula_26 is the displacement and the vector formula_27 is the unit outward normal to formula_28.
Basic solution.
The problem defined above is called the boundary value problem (BVP). Hence, let formula_29 be a solution of the BVP. Since formula_30 depends nonlinearly on the deformation gradient, this solution is generally not unique, and it depends on the geometrical and material parameters of the problem. One therefore employs the method of incremental deformations in order to highlight the existence of an adjacent solution for a critical value of a dimensionless parameter, called the control parameter formula_31, which "controls" the onset of the instability. This means that by increasing the value of this parameter, at a certain point new solutions appear. Hence, the selected basic solution is no longer the stable one but becomes unstable. Physically, at a certain point the stored energy of the basic solution, i.e. the integral of the density formula_30 over the whole domain, becomes larger than that of the new solutions. To restore equilibrium, the configuration of the material moves to another configuration with lower energy.
Method of incremental deformations superposed on finite deformations.
To apply this method, one superposes a small displacement formula_32 on the finite deformation basic solution formula_33. So that:
formula_34,
where formula_35 is the perturbed position and formula_36 maps the basic position vector formula_33 into the perturbed configuration formula_37.
In the following, the incremental variables are indicated by formula_38, while the perturbed ones are indicated by formula_39.
Deformation gradient.
The perturbed deformation gradient is given by:
formula_40,
where formula_41, with formula_42 being the gradient operator with respect to the current configuration.
Stresses.
The perturbed Piola stress is given by:
formula_43
where formula_44 denotes the contraction between two tensors, a fourth-order tensor formula_45 and a second-order tensor formula_46. Since formula_11 depends on formula_47 through formula_48, its expression can be rewritten by emphasizing this dependence, such as
formula_49
If the material is incompressible, one gets
formula_50
where formula_51 is the increment in formula_52 and formula_53 is called the tensor of elastic moduli associated with the pair formula_54.
It is useful to derive the push-forward of the perturbed Piola stress, defined as
formula_55
where formula_56 is also known as "the tensor of instantaneous moduli", whose components are:
formula_57.
Incremental governing equations.
Expanding the equilibrium equation around the basic solution, one gets
formula_58
Since formula_59 is the solution to the equation at zero order, the incremental equation can be rewritten as
formula_60
where formula_61 is the divergence operator with respect to the actual configuration.
The incremental incompressibility constraint reads
formula_62
Expanding this equation around the basic solution, as before, one gets
formula_63
Incremental boundary conditions.
Let formula_64 and formula_65 be the prescribed increments of formula_66 and formula_67, respectively. Hence, the perturbed boundary conditions are
formula_68
where formula_69 is the incremental displacement and formula_70.
Solution of the incremental problem.
The incremental equations
formula_71
formula_72
represent the incremental boundary value problem (BVP) and define a system of partial differential equations (PDEs). The unknowns of the problem depend on the considered case. In the first, compressible, case there are three unknowns: the components of the incremental deformation formula_73, linked to the perturbed deformation by the relation formula_74. In the latter, incompressible, case one also has to take into account the increment formula_51 of the Lagrange multiplier formula_52, introduced to impose the isochoric constraint.
The main difficulty in solving this problem is to transform it into a form more suitable for implementing an efficient and robust numerical solution procedure. The one used in this area is the Stroh formalism. It was originally developed by Stroh for a steady-state elastic problem and allows the set of four PDEs with the associated boundary conditions to be transformed into a set of first-order ODEs with initial conditions. The number of equations depends on the dimension of the space in which the problem is set. To do this, one has to apply separation of variables and assume periodicity in a given direction, depending on the considered situation. In particular cases, the system can be rewritten in compact form by using the Stroh formalism. Indeed, the system takes the form
formula_75
where formula_76 is the vector which contains all the unknowns of the problem, formula_77 is the only variable on which the rewritten problem depends, and the matrix formula_78 is the so-called Stroh matrix, which has the following form
formula_79
where each block is a matrix whose dimension depends on the dimension of the problem. Moreover, a crucial property of this approach is that formula_80, i.e. formula_81 is the Hermitian conjugate of formula_82.
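Numerically, once the Stroh matrix is assembled the incremental problem reduces to integrating the first-order system above. The sketch below only illustrates the structure of such a computation: the 2x2 blocks, the interval and the initial condition are placeholders and are not derived from any specific constitutive model.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative blocks; in a real problem G1..G4 follow from the
    # instantaneous moduli and satisfy the property G4 = G1* stated above.
    G1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    G2 = np.eye(2)
    G3 = -2.0 * np.eye(2)
    G4 = G1.conj().T
    G = np.block([[G1, G2], [G3, G4]])

    def rhs(x, eta):
        # d eta / d x = G eta
        return G @ eta

    eta0 = np.array([1.0, 0.0, 0.0, 0.0])      # illustrative initial condition
    sol = solve_ivp(rhs, (0.0, 1.0), eta0, dense_output=True)
    print(sol.y[:, -1])                        # state vector at x = 1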
Conclusion and remark.
The Stroh formalism provides an optimal form for solving a great variety of elastic problems. Optimal means that one can construct an efficient numerical procedure to solve the incremental problem. By solving the incremental boundary value problem, one finds the relations among the material and geometrical parameters of the problem and the perturbation modes by which the wave propagates in the material, i.e. the modes that denote the instability. Everything depends on formula_31, the parameter selected as the control parameter.
In this analysis, in a plot of perturbation mode versus control parameter, the minimum of the curve represents the first mode at which one can see the onset of the instability. For instance, in the picture, the first value of the mode formula_83 at which the instability emerges is around formula_84, since the trivial solution formula_85 and formula_86 should not be considered.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathcal{E} \\in \\mathbb{R}^3 "
},
{
"math_id": 1,
"text": " \\mathcal{B}_0, \\mathcal{B}_a \\in \\mathcal{E}"
},
{
"math_id": 2,
"text": "{\\bf \\chi}"
},
{
"math_id": 3,
"text": " \\mathcal{B}_0 "
},
{
"math_id": 4,
"text": " \\mathcal{B}_a "
},
{
"math_id": 5,
"text": " \\chi "
},
{
"math_id": 6,
"text": " C^1"
},
{
"math_id": 7,
"text": " {\\bf x}= {\\bf \\chi}({\\bf X}) "
},
{
"math_id": 8,
"text": " {\\bf X} "
},
{
"math_id": 9,
"text": " {\\bf F} = {\\rm Grad} \\, {\\bf x}=\\frac{\\partial {\\bf \\chi}(\\bf X)}{\\partial {\\bf X}}."
},
{
"math_id": 10,
"text": " W({\\bf F}) "
},
{
"math_id": 11,
"text": " {\\bf S} "
},
{
"math_id": 12,
"text": " {\\bf S} = \\frac{\\partial W}{\\partial \\,{\\bf F}} "
},
{
"math_id": 13,
"text": "\\begin{align}\n{\\rm Div} \\, {\\bf S} &=0\n & & \\qquad\\text{Equilibrium}, \\\\[3pt]\n\\end{align} "
},
{
"math_id": 14,
"text": " {\\rm Div} "
},
{
"math_id": 15,
"text": " \\det {\\bf F} = 1 "
},
{
"math_id": 16,
"text": "\\begin{align}\n {\\bf S} &= \\frac{\\partial W}{\\partial {\\bf F}} -p {\\bf F}^{-1}\n & & \\qquad\\text{Definition of the stress}.\\\\\n\\end{align} "
},
{
"math_id": 17,
"text": " \\partial \\mathcal{B}_0 "
},
{
"math_id": 18,
"text": " \\partial \\mathcal{B}_a "
},
{
"math_id": 19,
"text": " \\mathcal{B}_a "
},
{
"math_id": 20,
"text": " \\Gamma_{D} "
},
{
"math_id": 21,
"text": " \\Gamma_{N} "
},
{
"math_id": 22,
"text": " \\partial \\mathcal{B}_0 = \\Gamma_{D}\\cup \\Gamma_{N}"
},
{
"math_id": 23,
"text": " {\\bf u}_0^{*} "
},
{
"math_id": 24,
"text": " {\\bf t}_0^{*} "
},
{
"math_id": 25,
"text": " \\begin{align}\n {\\bf u}(\\bf{X}) &={\\bf u}_0^{*}\n & & \\qquad{\\rm on} \\, \\Gamma_{D}, \\\\[3pt]\n {\\bf S}^{\\rm T} \\cdot {\\bf N} &= {\\bf t}_0^{*}\n & & \\qquad {\\rm on} \\, \\Gamma_{N}, \\\\[3pt]\n\\end{align}"
},
{
"math_id": 26,
"text": " {\\bf u} = {\\bf x} - {\\bf X} = \\chi({\\bf X})-{\\bf X} "
},
{
"math_id": 27,
"text": " {\\bf N}"
},
{
"math_id": 28,
"text": " \\partial \\mathcal{B}_0"
},
{
"math_id": 29,
"text": " {\\bf x}^{0} = \\chi^{0}({\\bf X}) "
},
{
"math_id": 30,
"text": " W "
},
{
"math_id": 31,
"text": " \\gamma "
},
{
"math_id": 32,
"text": " \\delta{\\bf x} "
},
{
"math_id": 33,
"text": " {\\bf x}^0"
},
{
"math_id": 34,
"text": " \\bar{{\\bf x}} = {\\bf x}^0 + \\delta{\\bf x} = {\\bf x}^0 + \\chi^1({\\bf x}^0) "
},
{
"math_id": 35,
"text": " \\bar{{\\bf x}} "
},
{
"math_id": 36,
"text": " \\chi^1({\\bf x}^0) "
},
{
"math_id": 37,
"text": " \\delta\\mathcal{B}_a "
},
{
"math_id": 38,
"text": " \\delta(\\bullet) "
},
{
"math_id": 39,
"text": " \\bar{\\bullet} "
},
{
"math_id": 40,
"text": " \\bar{{\\bf F}} = {\\bf F}^0 + \\delta{\\bf F} =(\\mathbf{I}+ \\mathbf{\\Gamma}){\\bf F}^0"
},
{
"math_id": 41,
"text": " \\mathbf{\\Gamma} = {\\rm grad} \\,\\chi^1({\\bf x}^0)"
},
{
"math_id": 42,
"text": " {\\rm grad}"
},
{
"math_id": 43,
"text": " \\bar{{\\bf S}} = {\\bf S}^0+ \\delta{\\bf S} = {\\bf S}^0 + \\frac{\\partial {\\bf S}^0}{\\partial {\\bf F}}\\bigg|_{{\\bf F}^0} : \\delta {\\bf F}\n"
},
{
"math_id": 44,
"text": " :"
},
{
"math_id": 45,
"text": " \\frac{\\partial {\\bf S}^0}{\\partial {\\bf F}}\\bigg|_{{\\bf F}^0} = \\mathcal{A}^1 "
},
{
"math_id": 46,
"text": " \\delta {\\bf F} "
},
{
"math_id": 47,
"text": " {\\bf F}"
},
{
"math_id": 48,
"text": " W"
},
{
"math_id": 49,
"text": " \\bar{{\\bf S}} = \\frac{\\partial W}{\\partial {\\bf F}}\\bigg|_{{\\bf F}^0}+ \\frac{\\partial W}{\\partial {\\bf F} \\partial {\\bf F}}\\bigg|_{{\\bf F}^0} : \\delta {\\bf F}.\n"
},
{
"math_id": 50,
"text": " \\delta{\\bf S} = \\frac{\\partial {\\bf S}^0}{\\partial {\\bf F}} \\delta {\\bf F} = \\mathcal{A}^1 \\delta {\\bf F} + p({\\bf F}^0)^{-1} \\delta {\\bf F}({\\bf F}^0)^{-1}-\\delta p \\,({\\bf F}^{0})^{-1},\n"
},
{
"math_id": 51,
"text": " \\delta p"
},
{
"math_id": 52,
"text": " p"
},
{
"math_id": 53,
"text": " \\mathcal{A}^1"
},
{
"math_id": 54,
"text": " ({\\bf S},{\\bf F})"
},
{
"math_id": 55,
"text": " \\delta {\\bf S}_0={\\bf F}^0\\,\\delta{\\bf S} = \\mathcal{A}^1_0 \\Gamma + p \\Gamma-\\delta p \\, {\\rm I}, "
},
{
"math_id": 56,
"text": " \\mathcal{A}^1_0 "
},
{
"math_id": 57,
"text": " (\\mathcal{A}^1_0)_{ijhk} = {\\bf F}^0_{i\\gamma}\\,{\\bf F}^0_{h\\beta}\\,\\mathcal{A}^1_{\\gamma j \\beta k}"
},
{
"math_id": 58,
"text": " {\\rm Div}(\\bar{\\bf S}) = {\\rm Div}({\\bf S}^0+ \\delta{\\bf S})= {\\rm Div}({\\bf S}^0)+ \\, {\\rm Div}(\\delta {\\bf S})=0."
},
{
"math_id": 59,
"text": " {\\bf S}^0 "
},
{
"math_id": 60,
"text": " {\\rm div}(\\delta {\\bf S}_0)=0,"
},
{
"math_id": 61,
"text": " {\\rm div}"
},
{
"math_id": 62,
"text": "\\det ({\\bf F}^0+\\delta {\\bf F}) =1."
},
{
"math_id": 63,
"text": " {\\rm tr}(\\mathbf{\\Gamma})=0."
},
{
"math_id": 64,
"text": " \\overline{\\delta {\\bf u}}"
},
{
"math_id": 65,
"text": " \\overline{\\delta {\\bf t}}"
},
{
"math_id": 66,
"text": " {\\bf u}_0^{*}"
},
{
"math_id": 67,
"text": " {\\bf t}_0^{*}"
},
{
"math_id": 68,
"text": "\n {\n \\begin{align}\n \\delta {\\bf u}({\\bf x}) &= \\overline{\\delta {\\bf u}} &&\\qquad {\\bf x} \\in {\\Gamma_{D}} \\\\[3pt]\n \\delta {\\bf S}_0^{\\rm T} \\cdot {\\bf n} &= \\overline{\\delta {\\bf t}} &&\\qquad {\\bf x} \\in {\\Gamma_{N}}, \\\\[3pt]\n \\end{align}\n }\n "
},
{
"math_id": 69,
"text": " \\delta {\\bf u} = \\chi^1({\\bf x}^0)-{\\bf x} "
},
{
"math_id": 70,
"text": " {\\Gamma_{D}} \\cup {\\Gamma_{N}} = {\\partial \\Omega} "
},
{
"math_id": 71,
"text": " {\\rm div}(\\delta {\\bf S}_0)=0 \\qquad \\hbox{for compressible material}"
},
{
"math_id": 72,
"text": "\n {\n \\begin{align}\n \\begin{cases}\n {\\rm div}(\\delta {\\bf S}_0)&=0\\\\\n {\\rm tr}(\\mathbf{\\Gamma}) &= 0\n \\end{cases}\n \\qquad \\hbox{for incompressible material}\n \\end{align}\n }\n "
},
{
"math_id": 73,
"text": " \\delta{u}_1({\\bf x}), \\delta{u}_2({\\bf x}), \\delta{u}_3({\\bf x}) "
},
{
"math_id": 74,
"text": " \\chi^1({\\bf x}) = \\delta{u}_1({\\bf x}){\\bf e}_1+\\delta{u}_2({\\bf x}){\\bf e}_2+\\delta{u}_3({\\bf x}){\\bf e}_3 "
},
{
"math_id": 75,
"text": " \\frac{d}{d\\, {\\rm x}} {\\eta} = \\mathbf{G} \\, \\eta,"
},
{
"math_id": 76,
"text": " \\eta "
},
{
"math_id": 77,
"text": " {\\rm x} "
},
{
"math_id": 78,
"text": " {\\bf G} "
},
{
"math_id": 79,
"text": "\n{\\bf G} =\n{ \n\\begin{bmatrix}\n{\\bf G}_1 & {\\bf G}_2\\\\\n{\\bf G}_3 & {\\bf G}_4\n\\end{bmatrix},\n}\n "
},
{
"math_id": 80,
"text": " {\\bf G}_4 = ({\\bf G}_1)^{*} "
},
{
"math_id": 81,
"text": " {\\bf G}_4 "
},
{
"math_id": 82,
"text": " {\\bf G}_1 "
},
{
"math_id": 83,
"text": " kz "
},
{
"math_id": 84,
"text": " 0.3 "
},
{
"math_id": 85,
"text": " \\gamma =0 "
},
{
"math_id": 86,
"text": " kz = 0 "
}
] |
https://en.wikipedia.org/wiki?curid=57687737
|
57687782
|
Capillary breakup rheometry
|
Capillary breakup rheometry is an experimental technique used to assess the extensional rheological response of low-viscosity fluids. Unlike most shear and extensional rheometers, this technique does not involve active stretching or measurement of stress or strain but exploits only surface tension to create a uniaxial extensional flow. Hence, although it is common practice to use the name rheometer, capillary breakup techniques are better referred to as indexers.
Capillary breakup rheometry is based on the observation of the breakup dynamics of a thin fluid thread, governed by the interplay of capillary, viscous, inertial and elastic forces. Since no external forcing is exerted in these experiments, the fluid thread can spatially rearrange and select its own time scales. Quantitative observations of the strain rate, along with an apparent extensional viscosity and the breakup time of the fluid, can be estimated from the evolution of the minimal diameter of the filament. Moreover, theoretical considerations based on the balance of the forces acting in the liquid filament allow one to derive information such as the extent of non-Newtonian behaviour and the relaxation time.
The information obtained in capillary breakup experiments is a very effective tool for quantifying heuristic concepts such as "stringiness" or "tackiness", which are commonly used as performance indices in several industrial operations.
At present, the only commercially available device based on the capillary breakup technique is the CaBER.
Theoretical framework.
Capillary breakup rheometry and its recent developments are based on the original experimental and theoretical work of Schümmer and Tebel and of Entov and co-workers. Nonetheless, this technique finds its origins at the end of the 19th century with the pioneering work of Joseph Plateau and Lord Rayleigh. Their work entailed considerable progress in describing and understanding surface-tension-driven flows and the physics underlying the tendency of falling liquid streams to spontaneously break into droplets. This phenomenon is known as the Plateau–Rayleigh instability.
The linear stability analysis introduced by Plateau and Rayleigh can be employed to determine a wavelength for which a perturbation on a jet surface is unstable. In this case, the pressure gradient across the free-surface can cause the fluid in the thinnest region to be "squeezed" out towards the swollen bulges, thus creating a strong uniaxial extensional flow in the necked region.
As the instability grows and strains become progressively larger, the thinning is governed by non-linear effects. Theoretical considerations on the fluid motion suggested that the behaviour approaching the breakup singularity can be captured using self-similarity. Depending on the relative intensity of inertial, elastic and viscous stresses, different scaling laws based on self-similar considerations have been established to describe the trend of the filament profile near breakup throughout the time.
Experimental configurations.
Capillary thinning and breakup of complex fluids can be studied using different configurations. Historically, mainly three types of free-surface conformations have been employed in experiments: statically unstable liquid bridges, dripping from a nozzle under gravity, and continuous jets. Even though the initial evolution of the capillary instability is affected by the type of conformation used, each configuration captures the same phenomenon in the last stages close to breakup, where the thinning dynamics is dominated exclusively by fluid properties.
The different configurations can best be distinguished based on the Weber number, hence on the relative magnitude between the imposed velocity and the intrinsic capillary speed of the considered material, defined as the ratio between the surface tension and the shear viscosity (formula_0).
In the first geometry, the imposed velocity is zero (We = 0), after an unstable liquid bridge is generated by the rapid motion of two coaxial cylindrical plates. The thinning of the capillary bridge is purely governed by the interplay of inertial, viscous, elastic and capillary forces. This configuration is employed in the CaBER device and is at present the most widely used geometry, thanks to its main advantage of keeping the thinnest point of the filament approximately located at the same position.
In the dripping configuration, the fluid leaves a nozzle at a very low velocity (We < 1), allowing the formation of a hemispherical droplet at the tip of the nozzle. When the drop becomes sufficiently heavy, gravitational forces overcome surface tension, and a capillary bridge is formed, connecting the nozzle and the droplet. As the drop falls, the liquid filament becomes progressively thinner, to the point at which gravity becomes unimportant (low Bond number) and the breakup is driven only by capillary action. At this stage, the thinning dynamics is determined by the balance between capillarity and fluid properties.
Lastly, the third configuration consists of a continuous jet exiting a nozzle at a velocity higher than the intrinsic capillary velocity (We > 1). As the fluid leaves the nozzle, capillary instabilities naturally emerge on the jet and the formed filaments progressively thin as they are convected downstream with the flow, until eventually the jet breaks into separate droplets. The jetting-based configuration is generally less reproducible than the former two due to various experimental challenges, such as accurately controlling the sinusoidal disturbance.
Force balance and apparent extensional viscosity.
The temporal evolution of the thinnest region is determined by a force balance in the fluid filament. A simplified approximate force balance can be written as
formula_1
where formula_2 is the surface tension of the fluid, formula_3 the strain rate at the filament midpoint, formula_4 the extensional viscosity, and the term in square brackets represents the non-Newtonian contribution to the total normal stress difference. The stress balance shows that, if gravity and inertia can be neglected, the capillary pressure formula_5 is counteracted by the viscous extensional contribution formula_6 and by the non-Newtonian (elastic) contribution.
Depending on the type of fluid, appropriate constitutive models have to be considered in order to extract the relevant material functions.
Without any consideration of the nature of the tested fluid, it is possible to obtain a quantitative parameter, the apparent extensional viscosity formula_7, directly from the force balance between capillary pressure and viscous stresses alone. Assuming an initial cylindrical shape of the filament, the strain rate evolution is defined as
formula_8
Thus, the apparent extensional viscosity is given by
formula_9
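In practice, the apparent extensional viscosity is extracted by numerically differentiating the measured diameter decay. The following sketch uses synthetic, purely illustrative D_min(t) data and an assumed surface tension; the magnitude of the decay rate is used since D_min decreases in time.

    import numpy as np

    gamma = 0.05                              # surface tension, N/m (illustrative)
    t = np.linspace(0.0, 0.9, 200)            # time, s
    D_min = 1e-3 * (1.0 - t)                  # synthetic linear decay, m

    dDdt = np.gradient(D_min, t)              # numerical derivative dD_min/dt
    strain_rate = -2.0 * dDdt / D_min         # strain rate per the definition above
    eta_app = gamma / np.abs(dDdt)            # apparent extensional viscosity, Pa.s

    print(strain_rate[0], eta_app[0])         # 2.0 1/s and 50 Pa.s for this data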
Scaling laws.
The behaviour of the fluid determines the relative importance of the viscous and elastic terms in resisting the capillary action. Combining the force balance with different constitutive models, several analytical solutions have been derived to describe the thinning dynamics. These scaling laws can be used to identify the fluid type and extract material properties.
Scaling law for visco-capillary thinning of Newtonian fluids.
In the absence of inertia (Ohnesorge number larger than 1) and gravitational effects, the thinning dynamics of a Newtonian fluid is governed purely by the balance between capillary pressure and viscous stresses. The visco-capillary thinning is described by the similarity solution derived by Papageorgiou; the temporal evolution of the midpoint diameter may be written as:
formula_10
According to the scaling law, a linear decay of the filament diameter in time and the breaking of the filament in the middle are the characteristic fingerprints of visco-capillary breakup. A linear regression of the experimental data allows one to extract the time-to-breakup formula_11 and the capillary speed.
Scaling law for elasto-capillary thinning of elastic fluids.
For non-Newtonian elastic fluids, such as polymer solutions, an elasto-capillary balance governs the breakup dynamics. Different constitutive models have been used to model the elastic contribution (Oldroyd-B, FENE-P, ...). Using an upper-convected Maxwell constitutive model, the self-similar thinning process is described by an analytical solution of the form
formula_12
where formula_13 is the initial diameter of the filament. A regression of the experimental data allows one to extract the elastic modulus formula_14 of the polymer in the solution and the relaxation time formula_15. The scaling law expresses an exponential decay of the filament diameter in time.
The different form of the scaling law for viscoelastic fluids shows that their thinning behaviour is very distinct from that of Newtonian liquids. Even the presence of a small amount of flexible polymer can significantly alter the breakup dynamics. The elastic stresses generated by the presence of polymers rapidly increase as the filament diameter decreases. The liquid filament is then progressively stabilized by the growing stresses, and it assumes a uniform cylindrical shape, contrary to the case of visco-capillary thinning where the minimum diameter is localized at the filament midpoint.
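In practice, the scaling laws are fitted to the measured diameter decay: a linear fit of D_min(t) gives the capillary speed and the time-to-breakup for a Newtonian fluid, while a linear fit of ln D_min(t) gives the relaxation time for an elastic fluid. The sketch below uses synthetic, purely illustrative data for the elasto-capillary case.

    import numpy as np

    # Synthetic elasto-capillary data: exponential decay with lambda_c = 5 ms
    t = np.linspace(0.0, 0.02, 100)            # s
    lambda_c_true = 5e-3
    D = 5e-4 * np.exp(-t / (3.0 * lambda_c_true))

    # Fit ln(D) = a - t / (3 lambda_c): the slope equals -1 / (3 lambda_c)
    slope, intercept = np.polyfit(t, np.log(D), 1)
    lambda_c_fit = -1.0 / (3.0 * slope)
    print(lambda_c_fit)                        # ~5e-3 s

    # For a Newtonian fluid, D_min(t) is linear; a degree-1 fit of D itself
    # gives the slope 0.1418*gamma/eta_s and the breakup time as its root.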
Instruments.
CaBER.
The CaBER (Capillary Breakup Extensional Rheometer) was the only commercially available instrument based on capillary breakup. Based on the experimental work of Entov, Bazilevsky and co-workers, the CaBER was developed by McKinley and co-workers at MIT in collaboration with the Cambridge Polymer Group in the early 2000s. It was manufactured by Thermo Scientific with the commercial name HAAKE CaBER 1.
The CaBER experiments employ a liquid bridge configuration and can be thought of as a quantitative version of a "thumb & forefinger" test.
In CaBER experiments, a small amount of sample is placed between two measurement plates, forming an initial cylindrical configuration. The plates are then rapidly separated over a short predefined distance: the imposed step strain generates an "hour-glass"-shaped liquid bridge. The necked sample subsequently thins and eventually breaks under the action of capillary forces.
During the surface-tension-driven thinning process, the evolution of the mid-filament diameter (Dmid(t)) is monitored via a laser micrometer.
The raw CaBER output (the Dmid vs time curve) shows different characteristic shapes depending on the tested liquid, and both quantitative and qualitative information can be extracted from it. The time-to-breakup is the most direct qualitative information that can be obtained. Although this parameter does not represent a property of the fluid itself, it is certainly useful to quantify the processability of complex fluids.
In terms of quantitative parameters, rheological properties such as the shear viscosity and the relaxation time can be obtained by fitting the diameter evolution data with the appropriate scaling laws. The second quantitative information that can be extracted is the apparent extensional viscosity.
Despite the great potential of the CaBER, this technique also presents a number of experimental challenges, mainly related to its susceptibility to solvent evaporation and to the difficulty of creating a statically unstable bridge with fluids of very low visco-elasticity, for which the fluid filament often breaks already during the stretch phase. Different modifications of the commercial instrument have been presented to overcome these issues, amongst others the use of surrounding media other than air and the Slow Retraction Method (SRM).
Other techniques.
In recent years a number of different techniques have been developed to characterize fluids with very low visco-elasticity, which commonly cannot be tested in CaBER devices.
Applications.
There are many processes and applications that involve free-surface flows and uniaxial extension of liquid filaments or jets. Using capillary breakup rheometry to quantify the dynamics of the extensional response provides an effective tool to control processing parameters as well as to design complex fluids with the required processability.
A list of relevant applications and processes includes:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\gamma/\\eta"
},
{
"math_id": 1,
"text": "\\eta_{E}\\dot{\\epsilon}_{min}(t)\\approx\\frac{\\gamma}{D_{min}(t)/2}-\\left[\\tau_{zz}-\\tau_{rr} \\right]"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "\\dot{\\epsilon}_{mid}"
},
{
"math_id": 4,
"text": "\\eta_{E}"
},
{
"math_id": 5,
"text": "\\gamma/D_{min}"
},
{
"math_id": 6,
"text": "3\\eta_{s}\\dot{\\epsilon}_{min}(t)"
},
{
"math_id": 7,
"text": "\\eta_{E,app}(t)"
},
{
"math_id": 8,
"text": "\\dot{\\epsilon}(t)=-\\frac{2}{D_{min}}\\frac{dD_{min}}{dt}"
},
{
"math_id": 9,
"text": "\\eta_{E,app}(t)=\\frac{\\gamma}{\\frac{dD_{min}(t)}{dt}}"
},
{
"math_id": 10,
"text": "D_{min}(t)=0.1418\\frac{\\gamma}{\\eta_{s}}(t_{b}-t)"
},
{
"math_id": 11,
"text": "t_{b}"
},
{
"math_id": 12,
"text": "\\frac{D_{min}(t)}{D_{0}}=\\left( \\frac{GD_{0}}{4\\gamma} \\right)^{1/3}exp\\left(-t/3\\lambda_{c}\\right)"
},
{
"math_id": 13,
"text": "D_{0}"
},
{
"math_id": 14,
"text": "G"
},
{
"math_id": 15,
"text": "\\lambda_{c}"
}
] |
https://en.wikipedia.org/wiki?curid=57687782
|
57688582
|
CMOS amplifier
|
CMOS amplifiers (complementary metal–oxide–semiconductor amplifiers) are ubiquitous analog circuits used in computers, audio systems, smartphones, cameras, telecommunication systems, biomedical circuits, and many other systems. Their performance impacts the overall specifications of the systems. They take their name from the use of MOSFETs (metal–oxide–semiconductor field-effect transistors) as opposed to bipolar junction transistors (BJTs). MOSFETs are simpler to fabricate and therefore less expensive than BJTs, while still providing a sufficiently high transconductance to allow the design of very high performance circuits. In high performance CMOS (complementary metal–oxide–semiconductor) amplifier circuits, transistors are not only used to amplify the signal but are also used as active loads to achieve higher gain and output swing in comparison with resistive loads.
CMOS technology was introduced primarily for digital circuit design. In the last few decades, to improve speed, power consumption, required area, and other aspects of digital integrated circuits (ICs), the feature size of MOSFET transistors has shrunk (the minimum channel length of transistors is reduced in newer CMOS technologies). This trend, predicted by Gordon Moore in 1975 and known as Moore's law, states that the number of transistors for the same silicon area of an IC doubles roughly every two years. Progress in memory circuit design is an interesting example of how process advancement has affected the required size and performance of ICs over the last decades. In 1956, a 5 MB hard disk drive (HDD) weighed over a ton, while nowadays a drive with 50,000 times more capacity and a weight of several tens of grams is very common.
While digital ICs have benefited from the feature size shrinking, analog CMOS amplifiers have not gained corresponding advantages due to the intrinsic limitations of analog design, such as the intrinsic gain reduction of short-channel transistors, which affects the overall amplifier gain. Novel techniques that achieve higher gain also create new problems, like amplifier stability for closed-loop applications. The following addresses both aspects and summarizes different methods to overcome these problems.
Intrinsic gain reduction in modern CMOS technologies.
The maximum gain of a single MOSFET transistor is called intrinsic gain and is equal to
formula_0
where formula_1 is the transconductance and formula_2 is the output resistance of the transistor. As a first-order approximation, formula_2 is directly proportional to the channel length of the transistor. In a single-stage amplifier, one can increase the channel length to get higher output resistance and gain, but this also increases the parasitic capacitance of the transistors, which limits the amplifier bandwidth. The transistor channel length is smaller in modern CMOS technologies, which makes achieving high gain in single-stage amplifiers very challenging. To achieve high gain, the literature has suggested many techniques. The following sections look at different amplifier topologies and their features.
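As a rough numeric illustration (assuming a simple square-law transistor model with channel-length-modulation parameter lambda; all values are hypothetical), the intrinsic gain can be estimated as follows.

    I_d  = 100e-6     # drain bias current, A
    V_ov = 0.2        # overdrive voltage, V
    lam  = 0.25       # channel-length modulation, 1/V (decreases for longer channels)

    g_m = 2.0 * I_d / V_ov          # transconductance of a square-law MOSFET
    r_o = 1.0 / (lam * I_d)         # small-signal output resistance
    A_intrinsic = g_m * r_o         # equals 2 / (lam * V_ov)

    print(g_m, r_o, A_intrinsic)    # 1e-3 S, 40 kOhm, intrinsic gain of 40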
Single-stage amplifiers.
Telescopic, folded cascode (FC), and recycling FC (RFC) are the most common single-stage amplifiers. All these structures use transistors as active loads to provide higher output resistance (hence higher gain) and output swing. A telescopic amplifier provides higher gain (due to higher output resistance) and higher bandwidth (due to a smaller non-dominant pole at the cascode node). In contrast, it has limited output swing and difficulty in implementing a unity-gain buffer. Although the FC has lower gain and bandwidth, it can provide a higher output swing, an important advantage in modern CMOS technologies with reduced supply voltage. Also, since the DC voltage of the input and output nodes can be the same, it is more suitable for implementing a unity-gain buffer. The FC has recently been used to implement an integrator in a bio-nano sensor application. It can also be used as a stage in multi-stage amplifiers. As an example, an FC is used as the input stage of a two-stage amplifier in the design of a potentiostat circuit, which measures neuronal activity or performs DNA sensing. It can also be used to realize a transimpedance amplifier (TIA). TIAs can be used in amperometric biosensors to measure the current of cells or solutions in order to characterize a device under test.
In the last decade, circuit designers have proposed different modified versions of the FC circuit. The RFC is one of the modified versions of the FC amplifier, providing higher gain, higher bandwidth, and also higher slew rate in comparison with the FC (for the same power consumption). Recently, the RFC amplifier has been used in a hybrid CMOS–graphene sensor array for subsecond measurement of dopamine, where it serves as a low-noise amplifier to implement an integrator.
Stability.
In many applications, an amplifier drives a capacitor as a load. In some applications, like switched capacitor circuits, the value of the capacitive load changes in different cycles, and therefore it affects the output node time constant and the amplifier frequency response. Stable behavior of the amplifier for all possible capacitive loads is necessary, and the designer must consider this issue when designing the circuit, ensuring that the phase margin (PM) of the circuit is sufficient for the worst case. To have proper circuit behavior and time response, designers usually target a PM of 60 degrees. For higher PM values, the circuit is more stable, but it takes longer for the output voltage to reach its final value. In telescopic and FC amplifiers, the dominant pole is at the output node. Also, there is a non-dominant pole at the cascode node. Since the capacitive load is connected to the output node, its value affects the location of the dominant pole. The figure shows how the capacitive load affects the location of the dominant pole formula_3 and the stability. Increasing the capacitive load moves the dominant pole toward the origin, and since the unity gain frequency formula_4 is formula_5 (the amplifier gain) times formula_6 it also moves toward the origin. Therefore, the PM increases, which improves stability. So, if we ensure stability of a circuit for the minimum capacitive load, it remains stable for larger load values. To achieve more than 60 degrees of PM, the non-dominant pole formula_7 must be greater than formula_8
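The phase-margin criterion above can be checked numerically for a simple two-pole model of the open-loop gain. The sketch below uses illustrative gain and pole values and confirms that placing the non-dominant pole at about 1.7 times the unity-gain frequency yields roughly 60 degrees of PM.

    import numpy as np

    A0 = 1000.0                 # DC gain
    w1 = 1e3                    # dominant pole, rad/s
    w_unity = A0 * w1           # approximate unity-gain frequency
    w2 = 1.7 * w_unity          # non-dominant pole placed per the rule above

    w = np.logspace(2, 8, 200000)
    H = A0 / ((1 + 1j * w / w1) * (1 + 1j * w / w2))

    i_cross = np.argmin(np.abs(np.abs(H) - 1.0))      # |H| = 1 crossover
    pm = 180.0 + np.degrees(np.angle(H[i_cross]))
    print(pm)                                          # close to 60 degrees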
Multi-stage amplifiers.
In some applications, like switched capacitor filters or integrators and different types of analog-to-digital converters, high gain (70-80 dB) is needed, and achieving the required gain is sometimes impossible with single-stage amplifiers. This is more serious in modern CMOS technologies, in which transistors have smaller output resistance due to the shorter channel length. To achieve high gain as well as high output swing, multi-stage amplifiers have been invented. To implement a two-stage amplifier, one can use an FC amplifier as the first stage and a common-source amplifier as the second stage. Also, to implement a four-stage amplifier, three common-source amplifiers can be cascaded with an FC amplifier. It should be mentioned that to drive large capacitive loads or small resistive loads, the output stage should be class AB. For example, a common-source amplifier with class AB behavior can be used as the final stage in a three-stage amplifier to improve not only the drive capability but also the gain. A class AB amplifier can be used as a column driver in LCDs.
Stability in two-stage amplifiers.
Unlike single-stage amplifiers, multi-stage amplifiers usually have 3 or more poles, and if they are used in feedback networks, the closed-loop system is likely to be unstable. To have stable behavior in multi-stage amplifiers, it is necessary to use a compensation network. The main goal of the compensation network is to modify the transfer function of the system in such a way as to achieve enough PM. So, by the use of a compensation network, we should get a frequency response similar to the one shown for single-stage amplifiers. In single-stage amplifiers, the capacitive load is connected to the output node, where the dominant pole occurs, and increasing its value improves the PM; it therefore acts like a compensation capacitor (network). To compensate multi-stage amplifiers, a compensation capacitor is usually used to move the dominant pole to a lower frequency in order to achieve enough PM.
The following figure shows the block diagram of a two-stage amplifier in fully differential and single-ended modes. In a two-stage amplifier, the input stage can be a telescopic or FC amplifier. For the second stage, a common-source amplifier with an active load is a common choice. Since the output resistance of the first stage is much greater than that of the second stage, the dominant pole is at the output of the first stage.
Without compensation, the amplifier is unstable, or at least does not have enough PM. The load capacitance is connected to the output of the second stage, where the non-dominant pole occurs. Therefore, unlike single-stage amplifiers, increasing the capacitive load moves the non-dominant pole to a lower frequency and deteriorates the PM. Mesri et al. suggested two-stage amplifiers that behave like single-stage amplifiers and remain stable for larger values of capacitive load.
To have proper behavior, we need to compensate two-stage or multi-stage amplifiers. The simplest way to compensate a two-stage amplifier, as shown in the left block diagram of the figure below, is to connect a compensation capacitor at the output of the first stage and move the dominant pole to lower frequencies. However, realizing a capacitor on a silicon chip requires considerable area. The most common compensation method in two-stage amplifiers is Miller compensation (middle block diagram in the figure below). In this method, a compensation capacitor is placed between the input and output nodes of the second stage. In this case, the compensation capacitor appears formula_9 times greater at the output of the first stage, and pushes the dominant pole as well as the unity gain frequency to lower frequencies. Moreover, because of the pole splitting effect, it also moves the non-dominant pole to higher frequencies. Therefore, it is a good candidate for making the amplifier stable. The main advantage of the Miller compensation method is that it reduces the size of the required compensation capacitor by a factor of formula_10 The issue raised by the Miller compensation capacitor is that it introduces a right-half-plane (RHP) zero, which reduces the PM. Fortunately, different methods have been suggested to solve this issue. As an example, to cancel the effect of the RHP zero, a nulling resistor can be used in series with the compensation capacitor (right block diagram of the figure below). Based on the resistor value, we can push the RHP zero to a higher frequency (to cancel its effect on the PM), move it to the LHP (to improve the PM), or even cancel the first non-dominant pole to improve the bandwidth and PM. This method of compensation has recently been used in an amplifier design for a potentiostat circuit. Because of process variation, the resistor value can change by more than 10%, which therefore affects stability. Using a current buffer or voltage buffer in series with the compensation capacitor is another option to get better results.
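A first-order sketch of the standard textbook estimates for Miller compensation (all component values hypothetical) illustrates the Miller-multiplied dominant pole, the pole splitting, and the RHP zero discussed above, together with the nulling-resistor value that removes the zero.

    gm2 = 2e-3        # second-stage transconductance, S
    R1  = 200e3       # output resistance of the first stage, Ohm
    Av2 = 40.0        # second-stage gain magnitude
    Cc  = 2e-12       # Miller compensation capacitor, F
    CL  = 5e-12       # load capacitance, F

    p1 = 1.0 / (R1 * Cc * (1.0 + Av2))   # dominant pole with Miller-multiplied Cc, rad/s
    p2 = gm2 / CL                        # approximate non-dominant pole after pole splitting
    z_rhp = gm2 / Cc                     # right-half-plane zero
    Rz = 1.0 / gm2                       # nulling resistor that cancels the RHP zero

    print(p1, p2, z_rhp, Rz)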
|
[
{
"math_id": 0,
"text": "A v_\\text{int} = g_m r_o,"
},
{
"math_id": 1,
"text": "g_m"
},
{
"math_id": 2,
"text": "r_o"
},
{
"math_id": 3,
"text": "(\\omega_1)"
},
{
"math_id": 4,
"text": "(\\omega_\\text{unity})"
},
{
"math_id": 5,
"text": "A_v"
},
{
"math_id": 6,
"text": "\\omega_1,"
},
{
"math_id": 7,
"text": "(\\omega_2)"
},
{
"math_id": 8,
"text": "1.7\\,\\omega_\\text{unity}."
},
{
"math_id": 9,
"text": "1+|A_{v2}|"
},
{
"math_id": 10,
"text": "1+|A_{v2}|."
}
] |
https://en.wikipedia.org/wiki?curid=57688582
|
57691633
|
Weakly dependent random variables
|
In probability, weak dependence of random variables is a generalization of independence that is weaker than the concept of a martingale. A (time) sequence of random variables is weakly dependent if distinct portions of the sequence have a covariance that asymptotically decreases to 0 as the blocks are further separated in time. Weak dependence primarily appears as a technical condition in various probabilistic limit theorems.
Formal definition.
Fix a set S, a sequence of sets of measurable functions formula_0, a decreasing sequence formula_1, and a function formula_2. A sequence formula_3 of random variables is formula_4-weakly dependent iff, for all formula_5, for all formula_6, and formula_7, we have
formula_8
Note that the covariance does "not" decay to 0 uniformly in d and e.
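As a simple numerical illustration (not part of the formal framework above), a stationary AR(1) process is weakly dependent in this sense: its covariances decay geometrically with the separation delta, as the following sketch estimates from simulation.

    import numpy as np

    rng = np.random.default_rng(0)
    a, n = 0.7, 200_000
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + rng.standard_normal()

    for delta in (1, 5, 10, 20):
        cov = np.cov(x[:-delta], x[delta:])[0, 1]
        print(delta, cov)     # decays roughly like a**delta / (1 - a**2)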
Common applications.
Weak dependence is a sufficiently weak condition that many natural instances of stochastic processes exhibit it. In particular, weak dependence is a natural condition for the ergodic theory of random functions.
A sufficient substitute for independence in the Lindeberg–Lévy central limit theorem is weak dependence. For this reason, specializations often appear in the probability literature on limit theorems. These include Withers' condition for strong mixing, Tran's "absolute regularity in the locally transitive sense," and Birkel's "asymptotic quadrant independence."
Weak dependence also functions as a substitute for strong mixing. Again, generalizations of the latter are specializations of the former; an example is Rosenblatt's mixing condition.
Other uses include a generalization of the Marcinkiewicz–Zygmund inequality and Rosenthal inequalities.
Martingales are weakly dependent, so many results about martingales also hold true for weakly dependent sequences. An example is Bernstein's bound on higher moments, which can be relaxed to only require
formula_9
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\{\\mathcal{F}_d\\}_{d=1}^{\\infty}\\in\\prod_{d=1}^{\\infty}{\\left(S^d\\to\\mathbb{R}\\right)}"
},
{
"math_id": 1,
"text": "\\{\\theta_{\\delta}\\}_{\\delta=1}^{\\infty}\\to0"
},
{
"math_id": 2,
"text": "\\psi\\in\\mathcal{F}^2\\times(\\mathbb{Z}^+)^2\\to\\mathbb{R}^+"
},
{
"math_id": 3,
"text": "\\{X_n\\}_{n=1}^{\\infty}"
},
{
"math_id": 4,
"text": "(\\{\\mathcal{F}_d\\}_{d=1}^{\\infty},\\{\\theta_{\\delta}\\}_{\\delta},\\psi)"
},
{
"math_id": 5,
"text": "j_1\\leq j_2\\leq\\dots\\leq j_d<j_d+\\delta\\leq k_1\\leq k_2\\leq\\dots\\leq k_e"
},
{
"math_id": 6,
"text": "\\phi\\in\\mathcal{F}_d"
},
{
"math_id": 7,
"text": "\\theta\\in\\mathcal{F}_e"
},
{
"math_id": 8,
"text": "|\\operatorname{Cov}{(\\phi(X_{j_1},\\dots,X_{j_d}), \\theta(X_{k_1},\\dots,X_{k_e}))}|\\leq\\psi(\\phi,\\theta,d,e)\\theta_{\\delta}"
},
{
"math_id": 9,
"text": "\\begin{align}\n\\operatorname{E} \\left [ X_i \\mid X_1, \\dots, X_{i-1} \\right ] &= 0, \\\\\n\\operatorname{E} \\left [ X_i^2 \\mid X_1, \\dots, X_{i-1} \\right ] &\\leq R_i \\operatorname{E} \\left [ X_i^2 \\right ], \\\\\n\\operatorname{E} \\left [ X_i^k \\mid X_1, \\dots, X_{i-1} \\right ] &\\leq \\tfrac{1}{2} \\operatorname{E} \\left[ X_i^2 \\mid X_1, \\dots, X_{i-1} \\right ] L^{k-2} k!\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=57691633
|
577003
|
Decision tree learning
|
Machine learning algorithm
<templatestyles src="Machine learning/styles.css"/>
Decision tree learning is a supervised learning approach used in statistics, data mining and machine learning. In this formalism, a classification or regression decision tree is used as a predictive model to draw conclusions about a set of observations.
Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences.
Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity.
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making).
General.
Decision tree learning is a method commonly used in data mining. The goal is to create a model that predicts the value of a target variable based on several input variables.
A decision tree is a simple representation for classifying examples. For this section, assume that all of the input features have finite discrete domains, and there is a single target feature called the "classification". Each element of the domain of the classification is called a "class".
A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of that feature, or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes).
A tree is built by splitting the source set, constituting the root node of the tree, into subsets—which constitute the successor children. The splitting is based on a set of splitting rules based on classification features. This process is repeated on each derived subset in a recursive manner called recursive partitioning.
The recursion is completed when the subset at a node has all the same values of the target variable, or when splitting no longer adds value to the predictions. This process of "top-down induction of decision trees" (TDIDT) is an example of a greedy algorithm, and it is by far the most common strategy for learning decision trees from data.
In data mining, decision trees can be described also as the combination of mathematical and computational techniques to aid the description, categorization and generalization of a given set of data.
Data comes in records of the form:
formula_0
The dependent variable, formula_1, is the target variable that we are trying to understand, classify or generalize. The vector formula_2 is composed of the features, formula_3 etc., that are used for that task.
Decision tree types.
Decision trees used in data mining are of two main types:
The term classification and regression tree (CART) analysis is an umbrella term used to refer to either of the above procedures, first introduced by Breiman et al. in 1984. Trees used for regression and trees used for classification have some similarities – but also some differences, such as the procedure used to determine where to split.
Some techniques, often called "ensemble" methods, construct more than one decision tree:
A special case of a decision tree is a decision list, which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node). While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity; they also permit non-greedy learning methods and allow monotonic constraints to be imposed.
Notable decision tree algorithms include:
ID3 and CART were invented independently at around the same time (between 1970 and 1980), yet follow a similar approach for learning a decision tree from training tuples.
It has also been proposed to leverage concepts of fuzzy set theory for the definition of a special version of decision tree, known as Fuzzy Decision Tree (FDT).
In this type of fuzzy classification, generally, an input vector formula_2 is associated with multiple classes, each with a different confidence value.
Boosted ensembles of FDTs have been recently investigated as well, and they have shown performances comparable to those of other very efficient fuzzy classifiers.
Metrics.
Algorithms for constructing decision trees usually work top-down, by choosing a variable at each step that best splits the set of items. Different algorithms use different metrics for measuring "best". These generally measure the homogeneity of the target variable within the subsets. Some examples are given below. These metrics are applied to each candidate subset, and the resulting values are combined (e.g., averaged) to provide a measure of the quality of the split. Depending on the underlying metric, the performance of various heuristic algorithms for decision tree learning may vary significantly.
Estimate of Positive Correctness.
A simple and effective metric can be used to identify the degree to which true positives outweigh false positives (see Confusion matrix). This metric, "Estimate of Positive Correctness", is defined below:
formula_4
In this equation, the total false positives (FP) are subtracted from the total true positives (TP). The resulting number gives an estimate on how many positive examples the feature could correctly identify within the data, with higher numbers meaning that the feature could correctly classify more positive samples. Below is an example of how to use the metric when the full confusion matrix of a certain feature is given:
Feature A Confusion Matrix
Here we can see that the TP value would be 8 and the FP value would be 2 (the underlined numbers in the table). When we plug these numbers into the equation we are able to calculate the estimate: formula_5. This means that using the estimate on this feature would have it receive a score of 6.
However, it is worth noting that this number is only an estimate. For example, if two features both had a FP value of 2 while one of the features had a higher TP value, that feature would be ranked higher than the other because the resulting estimate when using the equation would give a higher value. This could lead to some inaccuracies when using the metric if some features have more positive samples than others. To combat this, one could use a more powerful metric known as Sensitivity that takes into account the proportions of the values from the confusion matrix to give the actual true positive rate (TPR). The difference between these metrics is shown in the example below:
In this example, Feature A had an estimate of 6 and a TPR of approximately 0.73 while Feature B had an estimate of 4 and a TPR of 0.75. This shows that although the positive estimate for some feature may be higher, the more accurate TPR value for that feature may be lower when compared to other features that have a lower positive estimate. Depending on the situation and knowledge of the data and decision trees, one may opt to use the positive estimate for a quick and easy solution to their problem. On the other hand, a more experienced user would most likely prefer to use the TPR value to rank the features because it takes into account the proportions of the data and all the samples that should have been classified as positive.
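As a minimal illustration (not part of the original example), the following Python sketch computes both metrics from raw confusion-matrix counts; the false-negative counts used for the two features are inferred from the TPR values quoted above rather than taken from a table.

```python
def positive_correctness(tp, fp):
    """Estimate of Positive Correctness: E_P = TP - FP."""
    return tp - fp

def true_positive_rate(tp, fn):
    """Sensitivity / TPR = TP / (TP + FN)."""
    return tp / (tp + fn)

# Hypothetical counts consistent with the example:
# Feature A (TP=8, FP=2, FN=3), Feature B (TP=6, FP=2, FN=2).
for name, tp, fp, fn in [("A", 8, 2, 3), ("B", 6, 2, 2)]:
    print(name, positive_correctness(tp, fp), round(true_positive_rate(tp, fn), 2))
# A 6 0.73
# B 4 0.75
```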
Gini impurity.
Gini impurity, Gini's diversity index, or Gini-Simpson Index in biodiversity research, is named after Italian mathematician Corrado Gini and used by the CART (classification and regression tree) algorithm for classification trees. Gini impurity measures how often a randomly chosen element of a set would be incorrectly labeled if it were labeled randomly and independently according to the distribution of labels in the set. It reaches its minimum (zero) when all cases in the node fall into a single target category.
For a set of items with formula_6 classes and relative frequencies formula_7, formula_8, the probability of choosing an item with label formula_9 is formula_7, and the probability of miscategorizing that item is formula_10. The Gini impurity is computed by summing pairwise products of these probabilities for each class label:
formula_11
The Gini impurity is also an information theoretic measure and corresponds to Tsallis Entropy with deformation coefficient formula_12, which in physics is associated with the lack of information in out-of-equilibrium, non-extensive, dissipative and quantum systems. For the limit formula_13 one recovers the usual Boltzmann-Gibbs or Shannon entropy. In this sense, the Gini impurity is nothing but a variation of the usual entropy measure for decision trees.
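A short Python sketch of the Gini impurity computation is shown below; it takes a node's class labels directly, and the function name is arbitrary.

```python
from collections import Counter

def gini_impurity(labels):
    """I_G(p) = 1 - sum_i p_i^2 over the class frequencies in the node."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

print(gini_impurity(["yes"] * 6))               # 0.0  (pure node)
print(gini_impurity(["yes"] * 3 + ["no"] * 3))  # 0.5  (maximally mixed binary node)
```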
Information gain.
Used by the ID3, C4.5 and C5.0 tree-generation algorithms. Information gain is based on the concept of entropy and information content from information theory.
Entropy is defined as below
formula_14
where formula_15 are fractions that add up to 1 and represent the percentage of each class present in the child node that results from a split in the tree.
formula_16formula_17
Averaging over the possible values of formula_18,
formula_19formula_20
where the weighted sum of entropies is given by
formula_21
That is, the expected information gain equals the mutual information between "T" and "A"; on average, splitting on "A" reduces the entropy of "T" by exactly their mutual information.
Information gain is used to decide which feature to split on at each step in building the tree. Simplicity is best, so we want to keep our tree small. To do so, at each step we should choose the split that results in the most consistent child nodes. A commonly used measure of consistency is called information which is measured in bits. For each node of the tree, the information value "represents the expected amount of information that would be needed to specify whether a new instance should be classified yes or no, given that the example reached that node".
Consider an example data set with four attributes: "outlook" (sunny, overcast, rainy), "temperature" (hot, mild, cool), "humidity" (high, normal), and "windy" (true, false), with a binary (yes or no) target variable, "play", and 14 data points. To construct a decision tree on this data, we need to compare the information gain of each of four trees, each split on one of the four features. The split with the highest information gain will be taken as the first split and the process will continue until all children nodes each have consistent data, or until the information gain is 0.
To find the information gain of the split using "windy", we must first calculate the information in the data before the split. The original data contained nine yes's and five no's.
formula_22
The split using the feature "windy" results in two children nodes, one for a "windy" value of true and one for a "windy" value of false. In this data set, there are six data points with a true "windy" value, three of which have a "play" (where "play" is the target variable) value of yes and three with a "play" value of no. The eight remaining data points with a "windy" value of false contain two no's and six yes's. The information of the "windy"=true node is calculated using the entropy equation above. Since there is an equal number of yes's and no's in this node, we have
formula_23
For the node where "windy"=false there were eight data points, six yes's and two no's. Thus we have
formula_24
To find the information of the split, we take the weighted average of these two numbers based on how many observations fell into which node.
formula_25
Now we can calculate the information gain achieved by splitting on the "windy" feature.
formula_26
To build the tree, the information gain of each possible first split would need to be calculated. The best first split is the one that provides the most information gain. This process is repeated for each impure node until the tree is complete. This example is adapted from the example appearing in Witten et al.
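The calculation above can be reproduced with a short Python sketch of the entropy and information-gain formulas; the class lists simply encode the counts quoted in the example (9 yes / 5 no overall, 3/3 for "windy" = true, 6/2 for "windy" = false).

```python
import math
from collections import Counter

def entropy(labels):
    """H(T) = -sum_i p_i * log2(p_i) over the class frequencies."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    """IG = H(parent) minus the weighted sum of the children's entropies."""
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

parent      = ["yes"] * 9 + ["no"] * 5
windy_true  = ["yes"] * 3 + ["no"] * 3
windy_false = ["yes"] * 6 + ["no"] * 2
print(round(information_gain(parent, [windy_true, windy_false]), 2))  # 0.05
```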
Information gain is also known as Shannon index in biodiversity research.
Variance reduction.
Introduced in CART, variance reduction is often employed in cases where the target variable is continuous (regression tree), meaning that use of many other metrics would first require discretization before being applied. The variance reduction of a node N is defined as the total reduction of the variance of the target variable Y due to the split at this node:
formula_27
where formula_28, formula_29, and formula_30 are the set of presplit sample indices, the set of sample indices for which the split test is true, and the set of sample indices for which the split test is false, respectively. Each of the above summands is indeed a variance estimate, written in a form that does not directly refer to the mean.
By replacing formula_31 in the formula above with the dissimilarity formula_32 between two objects formula_9 and formula_33, the variance reduction criterion applies to any kind of object for which pairwise dissimilarities can be computed.
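The following Python sketch evaluates the formula above literally in its pairwise-difference form (each normalized pairwise sum is a variance estimate that never references the mean); the function names and the toy split are illustrative only.

```python
def pairwise_spread(ys):
    """0.5 * sum over all ordered pairs (i, j) of (y_i - y_j)^2."""
    return 0.5 * sum((a - b) ** 2 for a in ys for b in ys)

def variance_reduction(y_all, y_true, y_false):
    """I_V(N): the parent's spread minus the children's spreads, each normalized by |S|^2."""
    n_sq = len(y_all) ** 2
    return (pairwise_spread(y_all)
            - pairwise_spread(y_true)
            - pairwise_spread(y_false)) / n_sq

# A hypothetical split of continuous targets that separates low from high values.
print(variance_reduction([1.0, 2.0, 8.0, 9.0], [1.0, 2.0], [8.0, 9.0]))  # 12.375
```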
Measure of "goodness".
Used by CART in 1984, the measure of "goodness" is a function that seeks to optimize the balance of a candidate split's capacity to create pure children with its capacity to create equally-sized children. This process is repeated for each impure node until the tree is complete. The function formula_34, where formula_35 is a candidate split at node formula_36, is defined as below
formula_37
where formula_38 and formula_39 are the left and right children of node formula_36 using split formula_35, respectively; formula_40 and formula_41 are the proportions of records in formula_36 in formula_38 and formula_39, respectively; and formula_42 and formula_43 are the proportions of class formula_33 records in formula_38 and formula_39, respectively.
Consider an example data set with three attributes: "savings" (low, medium, high), "assets" (low, medium, high), and "income" (numerical value), together with a binary target variable "credit risk" (good, bad) and 8 data points. The full data is presented in the table below. To start a decision tree, we will calculate the maximum value of formula_34 using each feature to find which one will split the root node. This process will continue until all children are pure or all formula_34 values are below a set threshold.
To find formula_34 of the feature "savings", we need to note the quantity of each value. The original data contained three lows, three mediums, and two highs. Of the lows, one had a good "credit risk", while of the mediums and highs, four had a good "credit risk". Assume a candidate split formula_35 such that records with a low "savings" will be put in the left child and all other records will be put into the right child.
formula_44
To build the tree, the "goodness" of all candidate splits for the root node need to be calculated. The candidate with the maximum value will split the root node, and the process will continue for each impure node until the tree is complete.
Compared to other metrics such as information gain, the measure of "goodness" will attempt to create a more balanced tree, leading to more-consistent decision time. However, it sacrifices some priority for creating pure children which can lead to additional splits that are not present with other metrics.
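As a sketch (with arbitrary function and variable names), the "goodness" of the "savings" split described above can be computed as follows; the label lists encode 1 good / 2 bad records in the left child and 4 good / 1 bad in the right.

```python
def goodness(left_labels, right_labels):
    """phi(s|t) = 2 * P_L * P_R * sum_j |P(j|t_L) - P(j|t_R)|."""
    n = len(left_labels) + len(right_labels)
    p_l, p_r = len(left_labels) / n, len(right_labels) / n
    classes = set(left_labels) | set(right_labels)
    diff = sum(abs(left_labels.count(j) / len(left_labels)
                   - right_labels.count(j) / len(right_labels))
               for j in classes)
    return 2 * p_l * p_r * diff

left = ["good"] + ["bad"] * 2     # the three "low savings" records
right = ["good"] * 4 + ["bad"]    # the five remaining records
print(round(goodness(left, right), 2))  # 0.44
```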
Uses.
Advantages.
Amongst other data mining methods, decision trees have various advantages:
Implementations.
Many data mining software packages provide implementations of one or more decision tree algorithms (e.g. random forest).
Open source examples include:
Notable commercial software:
Extensions.
Decision graphs.
In a decision tree, all paths from the root node to the leaf node proceed by way of conjunction, or "AND". In a decision graph, it is possible to use disjunctions (ORs) to join two or more paths together using minimum message length (MML). Decision graphs have been further extended to allow for previously unstated new attributes to be learnt dynamically and used at different places within the graph. The more general coding scheme results in better predictive accuracy and log-loss probabilistic scoring. In general, decision graphs infer models with fewer leaves than decision trees.
Alternative search methods.
Evolutionary algorithms have been used to avoid local optimal decisions and search the decision tree space with little "a priori" bias.
It is also possible for a tree to be sampled using MCMC.
The tree can be searched for in a bottom-up fashion, or several trees can be constructed in parallel to reduce the expected number of tests until classification.
See also.
References.
|
[
{
"math_id": 0,
"text": "(\\textbf{x},Y) = (x_1, x_2, x_3, ..., x_k, Y)"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "\\textbf{x}"
},
{
"math_id": 3,
"text": "x_1, x_2, x_3"
},
{
"math_id": 4,
"text": " E_P = TP - FP "
},
{
"math_id": 5,
"text": "E_p = TP - FP = 8 - 2 = 6"
},
{
"math_id": 6,
"text": "J"
},
{
"math_id": 7,
"text": "p_i"
},
{
"math_id": 8,
"text": "i \\in \\{1, 2, ...,J\\}"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "\\sum_{k \\ne i} p_k = 1-p_i"
},
{
"math_id": 11,
"text": "\\operatorname{I}_G(p) = \\sum_{i=1}^J \\left( p_i \\sum_{k\\neq i} p_k \\right)\n = \\sum_{i=1}^J p_i (1-p_i)\n = \\sum_{i=1}^J (p_i - p_i^2)\n = \\sum_{i=1}^J p_i - \\sum_{i=1}^J p_i^2\n = 1 - \\sum^J_{i=1} p_i^2. "
},
{
"math_id": 12,
"text": "q=2"
},
{
"math_id": 13,
"text": "q\\to 1"
},
{
"math_id": 14,
"text": "\\Eta(T) = \\operatorname{I}_{E}\\left(p_1, p_2, \\ldots, p_J\\right)\n = - \\sum^J_{i=1} p_i \\log_2 p_i"
},
{
"math_id": 15,
"text": "p_1, p_2, \\ldots"
},
{
"math_id": 16,
"text": " \\overbrace{IG(T,a)}^\\text{information gain}\n= \\overbrace{\\Eta(T)}^\\text{entropy (parent)}\n- \\overbrace{\\Eta(T\\mid a)}^\\text{sum of entropies (children)} "
},
{
"math_id": 17,
"text": "=-\\sum_{i=1}^J p_i\\log_2 p_i - \\sum_{i=1}^J - \\Pr(i\\mid a)\\log_2 \\Pr(i\\mid a)"
},
{
"math_id": 18,
"text": "A"
},
{
"math_id": 19,
"text": " \\overbrace{E_A(\\operatorname{IG}(T,a))}^\\text{expected information gain}\n= \\overbrace{I(T; A)}^{\\text{mutual information between } T \\text{ and } A}\n= \\overbrace{\\Eta(T)}^\\text{entropy (parent)}\n- \\overbrace{\\Eta(T\\mid A)}^\\text{weighted sum of entropies (children)} "
},
{
"math_id": 20,
"text": "=-\\sum_{i=1}^J p_i\\log_2 p_i - \\sum_a p(a)\\sum_{i=1}^J-\\Pr(i\\mid a) \\log_2 \\Pr(i\\mid a) "
},
{
"math_id": 21,
"text": "{\\Eta(T\\mid A)}= \\sum_a p(a)\\sum_{i=1}^J-\\Pr(i\\mid a) \\log_2 \\Pr(i\\mid a)"
},
{
"math_id": 22,
"text": " I_E([9,5]) = -\\frac 9 {14}\\log_2 \\frac 9 {14} - \\frac 5 {14}\\log_2 \\frac 5 {14} = 0.94 "
},
{
"math_id": 23,
"text": " I_E([3,3]) = -\\frac 3 6\\log_2 \\frac 3 6 - \\frac 3 6\\log_2 \\frac 3 6 = -\\frac 1 2\\log_2 \\frac 1 2 - \\frac 1 2\\log_2 \\frac 1 2 = 1 "
},
{
"math_id": 24,
"text": " I_E([6,2]) = -\\frac 6 8\\log_2 \\frac 6 8 - \\frac 2 8\\log_2 \\frac 2 8 = -\\frac 3 4\\log_2 \\frac 3 4 - \\frac 1 4\\log_2 \\frac 1 4 = 0.81 "
},
{
"math_id": 25,
"text": " I_E([3,3],[6,2]) = I_E(\\text{windy or not}) = \\frac 6 {14} \\cdot 1 + \\frac 8 {14} \\cdot 0.81 = 0.89 "
},
{
"math_id": 26,
"text": " \\operatorname{IG}(\\text{windy}) = I_E([9,5]) - I_E([3,3],[6,2]) = 0.94 - 0.89 = 0.05 "
},
{
"math_id": 27,
"text": "\nI_V(N) = \\frac{1}{|S|^2}\\sum_{i\\in S} \\sum_{j\\in S} \\frac{1}{2}(y_i - y_j)^2 - \\left(\\frac{|S_t|^2}{|S|^2}\\frac{1}{|S_t|^2}\\sum_{i\\in S_t} \\sum_{j\\in S_t} \\frac{1}{2}(y_i - y_j)^2 + \\frac{|S_f|^2}{|S|^2}\\frac{1}{|S_f|^2}\\sum_{i\\in S_f} \\sum_{j\\in S_f} \\frac{1}{2}(y_i - y_j)^2\\right)\n"
},
{
"math_id": 28,
"text": "S"
},
{
"math_id": 29,
"text": "S_t"
},
{
"math_id": 30,
"text": "S_f"
},
{
"math_id": 31,
"text": "(y_i - y_j)^2"
},
{
"math_id": 32,
"text": "d_{ij}"
},
{
"math_id": 33,
"text": "j"
},
{
"math_id": 34,
"text": "\\varphi(s\\mid t)"
},
{
"math_id": 35,
"text": "s"
},
{
"math_id": 36,
"text": "t"
},
{
"math_id": 37,
"text": "\n\\varphi(s\\mid t) = 2P_L P_R \\sum_{j=1}^\\text{class count}|P(j\\mid t_L) - P(j\\mid t_R)|\n"
},
{
"math_id": 38,
"text": "t_L"
},
{
"math_id": 39,
"text": "t_R"
},
{
"math_id": 40,
"text": "P_L"
},
{
"math_id": 41,
"text": "P_R"
},
{
"math_id": 42,
"text": "P(j\\mid t_L)"
},
{
"math_id": 43,
"text": "P(j\\mid t_R)"
},
{
"math_id": 44,
"text": "\n\\varphi(s\\mid\\text{root}) = 2\\cdot\\frac 3 8\\cdot\\frac 5 8\\cdot \\left(\\left|\\left(\\frac 1 3 - \\frac 4 5\\right)\\right| + \\left|\\left(\\frac 2 3 - \\frac 1 5\\right)\\right|\\right) = 0.44\n"
}
] |
https://en.wikipedia.org/wiki?curid=577003
|
57702051
|
Space of directions
|
In metric geometry, the space of directions at a point describes the directions of curves that start at the point. It generalizes the tangent space in a differentiable manifold.
Definitions.
Let ("M", "d") be a metric space. First we define the upper angle for two curves starting at the same point in "M". So let
formula_0 be two curves with formula_1. The upper angle between them at "p" is
formula_2
The upper angle satisfies the triangle inequality: For three curves formula_3 starting at "p",
formula_4
A curve is said to have a direction if the upper angle of two copies of itself at the starting point is zero. For curves which have directions at a point, we define an equivalence relation on them by saying that two curves are equivalent if the upper angle between them at the point is zero. Two equivalent curves are said to have the same direction at the point.
The set of equivalence classes of curves with directions at the point "p" equipped with the upper angle is a metric space, called the space of directions at the point, denoted as formula_5. The metric completion of the space of directions is called the completed space of directions, denoted as formula_6.
For an Alexandrov space with curvature bounded either above or below, there is also a similar definition in which shortest paths, which always have directions, are used. The space of directions at a point is then defined as the metric completion of the set of equivalence classes of shortest paths starting at the point.
|
[
{
"math_id": 0,
"text": "\\alpha, \\beta:[0,\\varepsilon)\\to M"
},
{
"math_id": 1,
"text": "\\alpha(0)=\\beta(0)=p"
},
{
"math_id": 2,
"text": "\\angle_U(\\alpha,\\beta) := \\varlimsup_{s,t\\to 0} \\arccos \\frac {d(\\alpha(s),p)^2 + d(\\beta(t),p)^2 - d(\\alpha(s), \\beta(t))^2} {2 d(\\alpha(s),p) d(\\beta(t),p)}."
},
{
"math_id": 3,
"text": "\\alpha_1, \\alpha_2, \\alpha_3"
},
{
"math_id": 4,
"text": "\\angle_U(\\alpha_1,\\alpha_3) \\le \\angle_U(\\alpha_1,\\alpha_2) + \\angle_U(\\alpha_2,\\alpha_3)."
},
{
"math_id": 5,
"text": "\\Omega_p(M)"
},
{
"math_id": 6,
"text": "\\overline{\\Omega_p(M)}"
}
] |
https://en.wikipedia.org/wiki?curid=57702051
|
577053
|
Association rule learning
|
Method for discovering interesting relations between variables in databases
Association rule learning is a rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness. In any given transaction with a variety of items, association rules are meant to discover the rules that determine how or why certain items are connected.
Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule formula_0 found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as, e.g., promotional pricing or product placements.
In addition to the above example from market basket analysis, association rules are employed today in many application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
Association rule algorithms involve various parameters that can make them difficult to apply for those without some expertise in data mining, and they can produce many rules that are arduous to understand.
Definition.
Following the original definition by Agrawal, Imieliński, Swami the problem of association rule mining is defined as:
Let formula_1 be a set of n binary attributes called "items".
Let formula_2 be a set of transactions called the "database".
Each "transaction" in D has a unique transaction ID and contains a subset of the items in I.
A "rule" is defined as an implication of the form:
formula_3, where formula_4.
In Agrawal, Imieliński, Swami a "rule" is defined only between a set and a single item, formula_5 for formula_6.
Every rule is composed by two different sets of items, also known as "itemsets", X and Y, where X is called "antecedent" or left-hand-side (LHS) and Y "consequent" or right-hand-side (RHS). The antecedent is that item that can be found in the data while the consequent is the item found when combined with the antecedent. The statement formula_3 is often read as "if X then Y", where the antecedent (X ) is the "if" and the consequent (Y) is the "then". This simply implies that, in theory, whenever X occurs in a dataset, then Y will as well.
Process.
Association rules are made by searching data for frequent if-then patterns and by using certain criteria, called Support and Confidence, to define what the most important relationships are. Support gives evidence of how frequently an item appears in the given data, while Confidence is defined by how many times the if-then statements are found true. A third criterion, called Lift, can be used to compare the actual Confidence with the expected Confidence; that is, it shows how many times more often the if-then statement is found to be true than would be expected if the items were independent.
Association rules are calculated from itemsets, which are made up of two or more items. If the rules were built by analyzing all possible itemsets in the data, there would be so many rules that they would not have any meaning. That is why association rules are typically made from rules that are well represented by the data.
There are many different data mining techniques that can be used to find certain analytics and results, for example Classification analysis, Clustering analysis, and Regression analysis. Which technique to use depends on what you are looking for in your data. Association rules are primarily used to find analytics and to predict customer behavior. Classification analysis is most likely used to question, make decisions, and predict behavior. Clustering analysis is primarily used when there are no assumptions made about the likely relationships within the data. Regression analysis is used when you want to predict the value of a continuous dependent variable from a number of independent variables.
Benefits
There are many benefits of using Association rules like finding the pattern that helps understand the correlations and co-occurrences between data sets. A very good real-world example that uses Association rules would be medicine. Medicine uses Association rules to help diagnose patients. When diagnosing patients there are many variables to consider as many diseases will share similar symptoms. With the use of the Association rules, doctors can determine the conditional probability of an illness by comparing symptom relationships from past cases.
Downsides
However, Association rules also lead to many different downsides such as finding the appropriate parameter and threshold settings for the mining algorithm. But there is also the downside of having a large number of discovered rules. The reason is that this does not guarantee that the rules will be found relevant, but it could also cause the algorithm to have low performance. Sometimes the implemented algorithms will contain too many variables and parameters. For someone that doesn’t have a good concept of data mining, this might cause them to have trouble understanding it.
Thresholds
When using Association rules, you are most likely to only use Support and Confidence. However, this means you have to satisfy a user-specified minimum support and a user-specified minimum confidence at the same time. Usually, the Association rule generation is split into two different steps that need to be applied:
The Support Threshold is 30%, Confidence Threshold is 50%
The table on the left is the original unorganized data and the table on the right is organized by the thresholds. In this case, Item C is above the thresholds for both Support and Confidence, which is why it is first. Item A is second because its values meet the thresholds exactly. Item D has met the threshold for Support but not Confidence. Item B has not met the threshold for either Support or Confidence, and that is why it is last.
Finding all the frequent itemsets in a database is not an easy task, since it involves going through all the data to find all possible item combinations from all possible itemsets. The set of possible itemsets is the power set over I and has size formula_7 (excluding the empty set, which is not considered a valid itemset). The size of the power set grows exponentially in the number of items n in I. An efficient search is possible by using the downward-closure property of support (also called "anti-monotonicity"), which guarantees that every subset of a frequent itemset is also frequent, and hence that no frequent itemset can have an infrequent subset. Exploiting this property, efficient algorithms (e.g., Apriori and Eclat) can find all frequent itemsets.
Useful Concepts.
To illustrate the concepts, we use a small example from the supermarket domain. Table 2 shows a small database containing the items where, in each entry, the value 1 means the presence of the item in the corresponding transaction, and the value 0 represents the absence of an item in that transaction. The set of items is formula_8.
An example rule for the supermarket could be formula_9 meaning that if butter and bread are bought, customers also buy milk.
In order to select interesting rules from the set of all possible rules, constraints on various measures of significance and interest are used. The best-known constraints are minimum thresholds on support and confidence.
Let formula_10 be itemsets, formula_3 an association rule and T a set of transactions of a given database.
Note: this example is extremely small. In practical applications, a rule needs a support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions.
Support.
Support is an indication of how frequently the itemset appears in the dataset.
In our example, it can be easier to explain support by writing formula_11 where A and B are separate item sets that occur at the same time in a transaction.
Using Table 2 as an example, the itemset formula_12 has a support of 1/5=0.2 since it occurs in 20% of all transactions (1 out of 5 transactions). The argument of "support of X" is a set of preconditions, and thus becomes more restrictive as it grows (instead of more inclusive).
Furthermore, the itemset formula_13 has a support of 1/5=0.2 as it appears in 20% of all transactions as well.
When using antecedents and consequents, it allows a data miner to determine the support of multiple items being bought together in comparison to the whole data set. For example, Table 2 shows that the rule "if milk is bought, then bread is bought" has a support of 0.4 or 40%. This is because in 2 out of 5 of the transactions, milk as well as bread are bought. In smaller data sets like this example, it is harder to see a strong correlation when there are few samples, but when the data set grows larger, support can be used to find correlation between two or more products in the supermarket example.
Minimum support thresholds are useful for determining which itemsets are preferred or interesting.
If we set the support threshold to ≥0.4 in Table 3, then the rule formula_14 would be removed since it did not meet the minimum threshold of 0.4. The minimum threshold is used to remove samples where there is not a strong enough support or confidence to deem the sample important or interesting in the dataset.
Another way of finding interesting samples is to find the value of (support)×(confidence); this allows a data miner to see the samples where support and confidence are high enough to be highlighted in the dataset and prompt a closer look at the sample to find more information on the connection between the items.
Support can be beneficial for finding the connection between products in comparison to the whole dataset, whereas confidence looks at the connection between one or more items and another item. Below is a table that shows the comparison and contrast between support and support × confidence, using the information from Table 4 to derive the confidence values.
The support of X with respect to T is defined as the proportion of transactions in the dataset which contains the itemset X. Denoting a transaction by formula_15 where i is the unique identifier of the transaction and t is its itemset, the support may be written as:
formula_16
This notation can be used when defining more complicated datasets where the items and itemsets may not be as easy as our supermarket example above. Other examples of where support can be used is in finding groups of genetic mutations that work collectively to cause a disease, investigating the number of subscribers that respond to upgrade offers, and discovering which products in a drug store are never bought together.
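A minimal Python sketch of the support computation is shown below. The full Table 2 is not reproduced in this text, so the transaction list used here is a hypothetical one chosen to be consistent with the support figures quoted above.

```python
def support(itemset, transactions):
    """supp(X): fraction of transactions containing every item of X."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

# Hypothetical five-transaction database consistent with the quoted examples.
transactions = [
    {"milk", "bread", "fruit"},
    {"butter", "eggs", "fruit"},
    {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"},
    {"bread"},
]
print(support({"beer", "diapers"}, transactions))          # 0.2
print(support({"milk", "bread", "butter"}, transactions))  # 0.2
print(support({"milk", "bread"}, transactions))            # 0.4
```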
Confidence.
Confidence is the percentage of all transactions satisfying X that also satisfy Y.
With respect to T, the confidence value of an association rule, often denoted as formula_3, is the ratio of transactions containing both X and Y to the total number of transactions containing X, where X is the antecedent and Y is the consequent.
Confidence can also be interpreted as an estimate of the conditional probability formula_17, the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS.
It is commonly depicted as:
formula_18
The equation illustrates that confidence can be computed by calculating the co-occurrence of transactions X and Y within the dataset in ratio to the transactions containing X. This means that the number of transactions containing both X and Y is divided by the number of transactions containing X.
For example, Table 2 shows the rule formula_9 which has a confidence of formula_19 in the dataset, which denotes that every time a customer buys butter and bread, they also buy milk. This particular example demonstrates the rule being correct 100% of the time for transactions containing both butter and bread. The rule formula_20, however, has a confidence of formula_21. This suggests that eggs are bought 67% of the times that fruit is bought. Within this particular dataset, fruit is purchased a total of 3 times, with two of those times consisting of egg purchases.
For larger datasets, a minimum threshold, or a percentage cutoff, for the confidence can be useful for determining item relationships. When applying this method to some of the data in Table 2, information that does not meet the requirements is removed. Table 4 shows association rule examples where the minimum threshold for confidence is 0.5 (50%). Any data that does not have a confidence of at least 0.5 is omitted. Generating thresholds allows the association between items to become stronger as the data is further researched by emphasizing those that co-occur the most. The table uses the confidence information from Table 3 to implement the Support × Confidence column, which highlights the relationship between items via both their confidence and support, instead of just one concept. Ranking the rules by Support × Confidence multiplies the confidence of a particular rule by its support and is often used for a more in-depth understanding of the relationship between the items.
Overall, using confidence in association rule mining is a good way to bring awareness to data relations. Its greatest benefit is highlighting the relationship between particular items within the set, as it compares co-occurrences of items to the total occurrence of the antecedent in the specific rule. However, confidence is not the optimal method for every concept in association rule mining. The disadvantage of using it is that it does not offer multiple different outlooks on the associations. Unlike support, for instance, confidence does not provide the perspective of relationships between certain items in comparison to the entire dataset, so while milk and bread, for example, may occur 100% of the time for confidence, the itemset only has a support of 0.4 (40%). This is why it is important to look at other viewpoints, such as Support × Confidence, instead of relying solely on one concept to define the relationships.
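Continuing the support sketch above, confidence can be computed in the same hedged fashion (the `support` helper and the hypothetical transaction list are the ones defined earlier):

```python
def confidence(antecedent, consequent, transactions):
    """conf(X => Y) = supp(X and Y together) / supp(X)."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

print(confidence({"butter", "bread"}, {"milk"}, transactions))  # 1.0
print(round(confidence({"fruit"}, {"eggs"}, transactions), 2))  # 0.67
```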
Lift.
The "lift" of a rule is defined as:
formula_22
or the ratio of the observed support to that expected if X and Y were independent.
For example, the rule formula_23 has a lift of formula_24.
If the rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events.
If the lift is > 1, that lets us know the degree to which those two occurrences are dependent on one another, and makes those rules potentially useful for predicting the consequent in future data sets.
If the lift is < 1, that lets us know the items are substitutes for each other. This means that the presence of one item has a negative effect on the presence of the other item and vice versa.
The value of lift is that it considers both the support of the rule and the overall data set.
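Using the same helpers, lift is the observed joint support divided by the support expected under independence; again, this is only a sketch over the hypothetical transactions introduced earlier.

```python
def lift(antecedent, consequent, transactions):
    """lift(X => Y) = supp(X and Y together) / (supp(X) * supp(Y))."""
    joint = support(set(antecedent) | set(consequent), transactions)
    return joint / (support(antecedent, transactions)
                    * support(consequent, transactions))

print(lift({"milk", "bread"}, {"butter"}, transactions))  # 1.25
```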
Conviction.
The "conviction" of a rule is defined as formula_25.
For example, the rule formula_23 has a conviction of formula_26, and can be interpreted as the ratio of the expected frequency that X occurs without Y (that is to say, the frequency that the rule makes an incorrect prediction) if X and Y were independent divided by the observed frequency of incorrect predictions. In this example, the conviction value of 1.2 shows that the rule formula_23 would be incorrect 20% more often (1.2 times as often) if the association between X and Y was purely random chance.
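Conviction follows the same pattern; note that the ratio is undefined (division by zero) for rules whose confidence is exactly 1. As before, this is a sketch over the hypothetical transactions used above.

```python
def conviction(antecedent, consequent, transactions):
    """conv(X => Y) = (1 - supp(Y)) / (1 - conf(X => Y))."""
    return ((1 - support(consequent, transactions))
            / (1 - confidence(antecedent, consequent, transactions)))

print(round(conviction({"milk", "bread"}, {"butter"}, transactions), 2))  # 1.2
```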
Alternative measures of interestingness.
In addition to confidence, other measures of "interestingness" for rules have been proposed. Some popular measures are:
Several more measures are presented and compared by Tan et al. and by Hahsler. Looking for techniques that can model what the user has known (and using these models as interestingness measures) is currently an active research trend under the name of "Subjective Interestingness."
History.
The concept of association rules was popularized particularly due to the 1993 article of Agrawal et al., which has acquired more than 23,790 citations according to Google Scholar, as of April 2021, and is thus one of the most cited papers in the Data Mining field. However, what is now called "association rules" is introduced already in the 1966 paper on GUHA, a general data mining method developed by Petr Hájek et al.
An early (circa 1989) use of minimum support and confidence to find all association rules is the Feature Based Modeling framework, which found all rules with formula_27 and formula_28 greater than user defined constraints.
Statistically sound associations.
One limitation of the standard approach to discovering associations is that by searching massive numbers of possible associations to look for collections of items that appear to be associated, there is a large risk of finding many spurious associations. These are collections of items that co-occur with unexpected frequency in the data, but only do so by chance. For example, suppose we are considering a collection of 10,000 items and looking for rules containing two items in the left-hand-side and 1 item in the right-hand-side. There are approximately 1,000,000,000,000 such rules. If we apply a statistical test for independence with a significance level of 0.05 it means there is only a 5% chance of accepting a rule if there is no association. If we assume there are no associations, we should nonetheless expect to find 50,000,000,000 rules. Statistically sound association discovery controls this risk, in most cases reducing the risk of finding "any" spurious associations to a user-specified significance level.
Algorithms.
Many algorithms for generating association rules have been proposed.
Some well-known algorithms are Apriori, Eclat and FP-Growth, but they only do half the job, since they are algorithms for mining frequent itemsets. Another step needs to be done after to generate rules from frequent itemsets found in a database.
Apriori algorithm.
Apriori was proposed by R. Agrawal and R. Srikant in 1994 for frequent item set mining and association rule learning. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often. The name of the algorithm is Apriori because it uses prior knowledge of frequent itemset properties.
Overview: Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as "candidate generation"), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori uses breadth-first search and a Hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length k from item sets of length k − 1. Then it prunes the candidates which have an infrequent sub pattern. According to the downward closure lemma, the candidate set contains all frequent k-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates.
Example: Assume that each row is a cancer sample with a certain combination of mutations labeled by a character in the alphabet. For example a row could have {a, c} which means it is affected by mutation 'a' and mutation 'c'.
Now we will generate the frequent item set by counting the number of occurrences of each character. This is also known as finding the support values. Then we will prune the item set by picking a minimum support threshold. For this pass of the algorithm we will pick 3.
Since all support values are three or above there is no pruning. The frequent item set is {a}, {b}, {c}, and {d}. After this we will repeat the process by counting pairs of mutations in the input set.
Now we will make our minimum support value 4 so only {a, d} will remain after pruning. Now we will use the frequent item set to make combinations of triplets. We will then repeat the process by counting occurrences of triplets of mutations in the input set.
Since we only have one item the next set of combinations of quadruplets is empty so the algorithm will stop.
Advantages and Limitations:
Apriori has some limitations. Candidate generation can result in large candidate sets. For example, 10^4 frequent 1-itemsets will generate roughly 10^7 candidate 2-itemsets. The algorithm also needs to scan the database frequently, to be specific n+1 scans where n is the length of the longest pattern. Apriori is slower than the Eclat algorithm. However, Apriori performs well compared to Eclat when the dataset is large. This is because in the Eclat algorithm, if the dataset is too large, the tid-lists become too large for memory. FP-growth outperforms both Apriori and Eclat. This is due to the FP-growth algorithm not having candidate generation or testing, using a compact data structure, and requiring only two database scans.
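A compact, unoptimized Python sketch of the level-wise Apriori idea (candidate generation plus downward-closure pruning) is given below; the toy transactions are hypothetical and the implementation omits the hash-tree counting used by the real algorithm.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every frequent itemset (frozenset) with its absolute support count."""
    transactions = [frozenset(t) for t in transactions]
    current = {frozenset([item]) for t in transactions for item in t}
    frequent, k = {}, 1
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Join frequent k-itemsets into (k+1)-candidates, then prune any candidate
        # that has an infrequent k-subset (downward closure).
        candidates = {a | b for a, b in combinations(level, 2) if len(a | b) == k + 1}
        current = {c for c in candidates
                   if all(frozenset(sub) in level for sub in combinations(c, k))}
        k += 1
    return frequent

data = [{"a", "c"}, {"a", "d"}, {"a", "b", "d"}, {"b", "c", "d"}, {"a", "d"}]
print(apriori(data, min_support=3))  # {a}: 4, {d}: 4, {a, d}: 3
```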
Eclat algorithm.
Eclat (alt. ECLAT, stands for Equivalence Class Transformation) is a backtracking algorithm, which traverses the frequent itemset lattice graph in a depth-first search (DFS) fashion. Whereas the breadth-first search (BFS) traversal used in the Apriori algorithm will end up checking every subset of an itemset before checking it, DFS traversal checks larger itemsets and can save on checking the support of some of its subsets by virtue of the downward-closure property. Furthermore it will almost certainly use less memory as DFS has a lower space complexity than BFS.
To illustrate this, let there be a frequent itemset {a, b, c}. A DFS may check the nodes in the frequent itemset lattice in the following order: {a} → {a, b} → {a, b, c}, at which point it is known that {b}, {c}, {a, c}, {b, c} all satisfy the support constraint by the downward-closure property. BFS would explore each subset of {a, b, c} before finally checking it. As the size of an itemset increases, the number of its subsets undergoes combinatorial explosion.
It is suitable for both sequential as well as parallel execution with locality-enhancing properties.
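The tid-list intersection at the heart of Eclat can be sketched in a few lines of Python; the dataset and recursion structure here are illustrative only, and a production implementation would typically use diffsets or bit vectors.

```python
def eclat(prefix, items, min_support, results):
    """DFS over the itemset lattice; each item carries the set of transaction ids containing it."""
    while items:
        item, tids = items.pop()
        if len(tids) >= min_support:
            results[frozenset(prefix | {item})] = len(tids)
            # Extend the current prefix by intersecting tid-sets with the remaining items.
            suffix = [(other, tids & other_tids) for other, other_tids in items]
            eclat(prefix | {item}, suffix, min_support, results)
    return results

transactions = [{"a", "c"}, {"a", "d"}, {"a", "b", "d"}, {"b", "c", "d"}, {"a", "d"}]
tidlists = {}
for tid, t in enumerate(transactions):
    for item in t:
        tidlists.setdefault(item, set()).add(tid)

print(eclat(set(), sorted(tidlists.items()), min_support=3, results={}))
# {d}: 4, {a, d}: 3, {a}: 4
```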
FP-growth algorithm.
FP stands for frequent pattern.
In the first pass, the algorithm counts the occurrences of items (attribute-value pairs) in the dataset of transactions, and stores these counts in a 'header table'. In the second pass, it builds the FP-tree structure by inserting transactions into a trie.
Items in each transaction have to be sorted by descending order of their frequency in the dataset before being inserted so that the tree can be processed quickly.
Items in each transaction that do not meet the minimum support requirement are discarded.
If many transactions share most frequent items, the FP-tree provides high compression close to tree root.
Recursive processing of this compressed version of the main dataset grows frequent item sets directly, instead of generating candidate items and testing them against the entire database (as in the apriori algorithm).
Growth begins from the bottom of the header table i.e. the item with the smallest support by finding all sorted transactions that end in that item. Call this item formula_29.
A new conditional tree is created which is the original FP-tree projected onto formula_29. The supports of all nodes in the projected tree are re-counted with each node getting the sum of its children counts. Nodes (and hence subtrees) that do not meet the minimum support are pruned. Recursive growth ends when no individual items conditional on formula_29 meet the minimum support threshold. The resulting paths from root to formula_29 will be frequent itemsets. After this step, processing continues with the next least-supported header item of the original FP-tree.
Once the recursive process has completed, all frequent item sets will have been found, and association rule creation begins.
Others.
ASSOC.
The ASSOC procedure is a GUHA method which mines for generalized association rules using fast bitstring operations. The association rules mined by this method are more general than those output by Apriori; for example, "items" can be connected both with conjunction and disjunction, and the relation between the antecedent and consequent of the rule is not restricted to setting minimum support and confidence as in Apriori: an arbitrary combination of supported interest measures can be used.
OPUS search.
OPUS is an efficient algorithm for rule discovery that, in contrast to most alternatives, does not require either monotone or anti-monotone constraints such as minimum support. Initially used to find rules for a fixed consequent it has subsequently been extended to find rules with any item as a consequent. OPUS search is the core technology in the popular Magnum Opus association discovery system.
Lore.
A famous story about association rule mining is the "beer and diaper" story. A purported survey of behavior of supermarket shoppers discovered that customers (presumably young men) who buy diapers tend also to buy beer. This anecdote became popular as an example of how unexpected association rules might be found from everyday data. There are varying opinions as to how much of the story is true. Daniel Powers says:
In 1992, Thomas Blischok, manager of a retail consulting group at Teradata, and his staff prepared an analysis of 1.2 million market baskets from about 25 Osco Drug stores. Database queries were developed to identify affinities. The analysis "did discover that between 5:00 and 7:00 p.m. that consumers bought beer and diapers". Osco managers did NOT exploit the beer and diapers relationship by moving the products closer together on the shelves.
Other types of association rule mining.
Multi-Relation Association Rules (MRAR): These are association rules where each item may have several relations. These relations indicate indirect relationships between the entities. Consider the following MRAR where the first item consists of three relations "live in", "nearby" and "humid": “Those who "live in" a place which is "nearby" a city with "humid" climate type and also are "younger" than 20 formula_30 their "health condition" is good”. Such association rules can be extracted from RDBMS data or semantic web data.
Contrast set learning is a form of associative learning. Contrast set learners use rules that differ meaningfully in their distribution across subsets.
Weighted class learning is another form of associative learning where weights may be assigned to classes to give focus to a particular issue of concern for the consumer of the data mining results.
High-order pattern discovery facilitates the capture of high-order (polythetic) patterns or event associations that are intrinsic to complex real-world data.
K-optimal pattern discovery provides an alternative to the standard approach to association rule learning which requires that each pattern appear frequently in the data.
Approximate Frequent Itemset mining is a relaxed version of Frequent Itemset mining that allows some of the items in some of the rows to be 0.
Generalized Association Rules: hierarchical taxonomy (concept hierarchy)
Quantitative Association Rules: categorical and quantitative data
Interval Data Association Rules: e.g. partition the age into 5-year-increment ranges
Sequential pattern mining discovers subsequences that are common to more than minsup (minimum support threshold) sequences in a sequence database, where minsup is set by the user. A sequence is an ordered list of transactions.
Subspace Clustering, a specific type of clustering high-dimensional data, is in many variants also based on the downward-closure property for specific clustering models.
Warmr, shipped as part of the ACE data mining suite, allows association rule learning for first order relational rules.
References.
|
[
{
"math_id": 0,
"text": "\\{\\mathrm{onions, potatoes}\\} \\Rightarrow \\{\\mathrm{burger}\\}"
},
{
"math_id": 1,
"text": "I=\\{i_1, i_2,\\ldots,i_n\\}"
},
{
"math_id": 2,
"text": "D = \\{t_1, t_2, \\ldots, t_m\\}"
},
{
"math_id": 3,
"text": "X \\Rightarrow Y"
},
{
"math_id": 4,
"text": "X, Y \\subseteq I"
},
{
"math_id": 5,
"text": "X \\Rightarrow i_j"
},
{
"math_id": 6,
"text": "i_j \\in I"
},
{
"math_id": 7,
"text": "2^n-1"
},
{
"math_id": 8,
"text": "I= \\{\\mathrm{milk, bread, butter, beer, diapers, eggs, fruit}\\}"
},
{
"math_id": 9,
"text": "\\{\\mathrm{butter, bread}\\} \\Rightarrow \\{\\mathrm{milk}\\}"
},
{
"math_id": 10,
"text": "X, Y"
},
{
"math_id": 11,
"text": "\\text{support} = P(A\\cap B)= \\frac{(\\text{number of transactions containing }A\\text{ and }B)}\\text{ (total number of transactions)} "
},
{
"math_id": 12,
"text": "X=\\{\\mathrm{beer, diapers}\\}"
},
{
"math_id": 13,
"text": "Y=\\{\\mathrm{milk, bread, butter}\\}"
},
{
"math_id": 14,
"text": "\\{\\mathrm{milk}\\} \\Rightarrow \\{\\mathrm{eggs}\\}"
},
{
"math_id": 15,
"text": "(i,t)"
},
{
"math_id": 16,
"text": "\\mathrm{support\\,of\\,X} = \\frac{|\\{(i,t) \\in T : X \\subseteq t \\}|}{|T|}"
},
{
"math_id": 17,
"text": "P(E_Y | E_X)"
},
{
"math_id": 18,
"text": "\\mathrm{conf}(X \\Rightarrow Y) = P(Y | X) = \\frac{\\mathrm{supp}(X \\cup Y)}{ \\mathrm{supp}(X) }=\\frac{\\text{number of transactions containing }X\\text{ and }Y}{\\text{number of transactions containing }X}"
},
{
"math_id": 19,
"text": "\\frac{1/5}{1/5}=\\frac{0.2}{0.2}=1.0"
},
{
"math_id": 20,
"text": "\\{\\mathrm{fruit}\\} \\Rightarrow \\{\\mathrm{eggs}\\}"
},
{
"math_id": 21,
"text": "\\frac{2/5}{3/5}=\\frac{0.4}{0.6}=0.67"
},
{
"math_id": 22,
"text": " \\mathrm{lift}(X\\Rightarrow Y) = \\frac{ \\mathrm{supp}(X \\cap Y)}{ \\mathrm{supp}(X) \\times \\mathrm{supp}(Y) } "
},
{
"math_id": 23,
"text": "\\{\\mathrm{milk, bread}\\} \\Rightarrow \\{\\mathrm{butter}\\}"
},
{
"math_id": 24,
"text": "\\frac{0.2}{0.4 \\times 0.4} = 1.25 "
},
{
"math_id": 25,
"text": " \\mathrm{conv}(X\\Rightarrow Y) =\\frac{ 1 - \\mathrm{supp}(Y) }{ 1 - \\mathrm{conf}(X\\Rightarrow Y)}"
},
{
"math_id": 26,
"text": "\\frac{1 - 0.4}{1 - 0.5} = 1.2 "
},
{
"math_id": 27,
"text": "\\mathrm{supp}(X)"
},
{
"math_id": 28,
"text": "\\mathrm{conf}(X \\Rightarrow Y)"
},
{
"math_id": 29,
"text": "I"
},
{
"math_id": 30,
"text": "\\implies"
}
] |
https://en.wikipedia.org/wiki?curid=577053
|
57709305
|
Ultrawide formats
|
Photo and video display formats
Ultrawide formats refers to photos, videos, and displays with aspect ratios greater than 2. There were multiple moves in history towards wider formats, including one by Disney, with some of them being more successful than others.
Cameras usually capture ultra-wide photos and videos using an anamorphic format lens, which shrinks the extended horizontal field-of-view (FOV) while saving on film or disk.
Historic Ultrawide Cinema.
Historically, ultrawide movie formats have varied between ~2.35 (1678:715), ~2.39 (1024:429) and 2.4. To complicate matters further, films were also produced in the following ratios: 2.55, 2.76 and 4.
Developed by Rowe E. Carney Jr. and Tom F. Smith, the Smith-Carney System used a 3 camera system, with 4.6945 (1737:370) ratio, to project movies in 180°. Disney even created a 6.85 ratio, using 5 projectors to display 200°. The only movie filmed in Disney's 6.85 ratio is "Impressions de France".
Wide aspect ratios.
Suggested by Kerns H. Powers of SMPTE in the USA, the 16:9 aspect ratio was developed to unify all other aspect ratios. Subsequently it became the universal standard for "widescreen" and high-definition television.
Around 2007, cameras and non-television screens began to switch to 16:9 resolutions.
Extra-wide aspect ratios.
Univisium is an aspect ratio of 2:1, created by Vittorio Storaro of the American Society of Cinematographers (ASC), originally intended to unify all other aspect ratios used in movies.
It is popular on smartphones and cheap VR displays. VR displays halve the screen into two, one for each eye, so a 2:1 VR screen would be halved into two 1:1 screens. Smartphones began moving to this aspect ratio in the late 2010s with the release of the Samsung Galaxy S8, advertised as 18:9.
Ultra-wide aspect ratios.
21:9 is a consumer electronics (CE) marketing term to describe the ultra-widescreen aspect ratio of 64:27 (21 1⁄3:9) = 1024:432 for multiples of 1080 lines. It is used for multiple anamorphic formats and DCI 1024:429 (21.482517:9), but also for ultrawide computer monitors, including 43:18 (21 1⁄2:9) for resolutions based on 720 lines and 12:5 (21 3⁄5:9) for ultrawide variants of resolutions based either on 960 pixels width or 900 lines height.
The 64:27 aspect ratio is the logical extension of the existing video aspect ratios 4:3 and 16:9. It is the third power of 4:3, whereas 16:9 of widescreen HDTV is 4:3 squared. This allows electronic scalers and optical anamorphic lenses to use an easily implementable 4:3 (1.33) scaling factor.
formula_0
21:9 movies usually refers to 1024:429 ≈ 2.387, the aspect ratio of digital ultrawide cinema formats, which is often rounded up to 2.39:1 or 2.4:1.
Ultrawide resolutions can also be described by their height; for example, "UW 1080" and "1080p ultrawide" both stand for the same 2560×1080 resolution.
Super-wide aspect ratios.
In 2016, IMAX announced the release of films in the Ultra-WideScreen 3.6 format, with an aspect ratio of 18:5 (36:10). A year later, Samsung and Philips announced 'super ultra-wide displays', with an aspect ratio of 32:9, for "iMax-style cinematic viewing". Panacast developed a 32:9 webcam with three integrated cameras giving a 180° view, and a resolution matching upcoming 5K 32:9 monitors, 5120×1440. In Q4 2018, Dell released the U4919DW, a 5K 32:9 monitor with a resolution of 5120×1440, and Philips announced the 499P9H with the same resolution. 32:9 ultrawide monitors are often sold as an alternative to dual 16:9 monitor setups and for more immersive experiences while playing video games, and many are capable of displaying two 16:9 inputs at the same time.
The 32:9 aspect ratio is derived from 16:9, being twice as wide. Some manufacturers therefore refer to the resulting total display resolution with a D prefix for "dual" or "double".
Super-wide resolutions are those with an aspect ratio greater than 3.
Ultra-WideScreen 3.6 video never spread, as cinemas in an even wider ScreenX 270° format were released.
4:1 (36:9).
Abel Gance experimented with ultrawide formats, including making a film in 4:1 (36:9). He made a rare use of Polyvision, three 35 mm 1.3 images projected side by side, in the 1927 film "Napoléon".
At NAB 2019, Sony introduced a 19.2-metre-wide by 5.4-metre-tall commercial 16K display. It is made up of 576 modules (48 by 12), each 360 pixels across, resulting in a 4:1, 17280×4320 screen.
Multi-Screen Theaters.
Developed by CJ CGV in 2012, ScreenX uses three (or more) projectors to display 270° content, with an unknown aspect ratio above 4. Walls on both sides of a ScreenX theatre are used as projector screens.
Developed by Barco N.V. in 2015, Barco Escape used three projectors of 2.39 ratio to display 270° content, with an aspect ratio of 7.17. The two side screens were angled at 45 degrees in order to cover peripheral vision. Barco Escape shut down in February 2018.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tfrac{4}{3} \\cdot \\tfrac{4}{3} \\cdot \\tfrac{4}{3} = \\tfrac{64}{27}"
}
] |
https://en.wikipedia.org/wiki?curid=57709305
|
577097
|
Quadtree
|
Tree data structure in which each internal node has exactly four children, to partition a 2D area
A quadtree is a tree data structure in which each internal node has exactly four children. Quadtrees are the two-dimensional analog of octrees and are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. The data associated with a leaf cell varies by application, but the leaf cell represents a "unit of interesting spatial information".
The subdivided regions may be square or rectangular, or may have arbitrary shapes. This data structure was named a quadtree by Raphael Finkel and J.L. Bentley in 1974. A similar partitioning is also known as a "Q-tree".
All forms of quadtrees share some common features:
A tree-pyramid (T-pyramid) is a "complete" tree; every node of the T-pyramid has four child nodes except leaf nodes; all leaves are on the same level, the level that corresponds to individual pixels in the image. The data in a tree-pyramid can be stored compactly in an array as an implicit data structure similar to the way a complete binary tree can be stored compactly in an array.
Types.
Quadtrees may be classified according to the type of data they represent, including areas, points, lines and curves. Quadtrees may also be classified by whether the shape of the tree is independent of the order in which data is processed. The following are common types of quadtrees.
Region quadtree.
The region quadtree represents a partition of space in two dimensions by decomposing the region into four equal quadrants, subquadrants, and so on with each leaf node containing data corresponding to a specific subregion. Each node in the tree either has exactly four children, or has no children (a leaf node). The height of quadtrees that follow this decomposition strategy (i.e. subdividing subquadrants as long as there is interesting data in the subquadrant for which more refinement is desired) is sensitive to and dependent on the spatial distribution of interesting areas in the space being decomposed. The region quadtree is a type of trie.
A region quadtree with a depth of n may be used to represent an image consisting of 2^n × 2^n pixels, where each pixel value is 0 or 1. The root node represents the entire image region. If the pixels in any region are not entirely 0s or 1s, it is subdivided. In this application, each leaf node represents a block of pixels that are all 0s or all 1s. Note the potential savings in terms of space when these trees are used for storing images; images often have many regions of considerable size that have the same colour value throughout. Rather than store a big 2-D array of every pixel in the image, a quadtree can capture the same information potentially many divisive levels higher than the pixel-resolution sized cells that we would otherwise require. The tree resolution and overall size is bounded by the pixel and image sizes.
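As an illustration (not part of the original article, and with names chosen only for this sketch), the following Python snippet builds such a region quadtree from a small binary image. A leaf is represented by its pixel value (0 or 1); an internal node by a list of its four children.

# A minimal sketch of a region quadtree built from a 2^n × 2^n binary image.
def build(image, x, y, size):
    first = image[y][x]
    if all(image[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return first                      # uniform block becomes a leaf
    half = size // 2
    return [build(image, x,        y,        half),   # NW
            build(image, x + half, y,        half),   # NE
            build(image, x,        y + half, half),   # SW
            build(image, x + half, y + half, half)]   # SE

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 0, 1],
       [0, 0, 1, 1]]
tree = build(img, 0, 0, 4)   # e.g. [0, 1, 0, [0, 1, 1, 1]]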
A region quadtree may also be used as a variable resolution representation of a data field. For example, the temperatures in an area may be stored as a quadtree, with each leaf node storing the average temperature over the subregion it represents.
Point quadtree.
The point quadtree is an adaptation of a binary tree used to represent two-dimensional point data. It shares the features of all quadtrees but is a true tree as the center of a subdivision is always on a point. It is often very efficient in comparing two-dimensional, ordered data points, usually operating in O(log n) time. Point quadtrees are worth mentioning for completeness, but they have been surpassed by "k"-d trees as tools for generalized binary search.
Point quadtrees are constructed as follows. Given the next point to insert, we find the cell in which it lies and add it to the tree. The new point is added such that the cell that contains it is divided into quadrants by the vertical and horizontal lines that run through the point. Consequently, cells are rectangular but not necessarily square. In these trees, each node contains one of the input points.
Since the division of the plane is decided by the order of point-insertion, the tree's height is sensitive to and dependent on insertion order. Inserting in a "bad" order can lead to a tree of height linear in the number of input points (at which point it becomes a linked-list). If the point-set is static, pre-processing can be done to create a tree of balanced height.
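As a rough illustration (not taken from the article; all names are hypothetical), the following Python sketch inserts points into a point quadtree, with each inserted point splitting its cell into four quadrants through the vertical and horizontal lines that run through it.

# A minimal point-quadtree insertion sketch; every node stores one input point.
class PointNode:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.nw = self.ne = self.sw = self.se = None   # quadrant children

def insert(node, x, y):
    if node is None:
        return PointNode(x, y)
    if x < node.x and y >= node.y:
        node.nw = insert(node.nw, x, y)
    elif x >= node.x and y >= node.y:
        node.ne = insert(node.ne, x, y)
    elif x < node.x and y < node.y:
        node.sw = insert(node.sw, x, y)
    else:
        node.se = insert(node.se, x, y)
    return node

root = None
for p in [(5, 5), (2, 8), (7, 1)]:   # insertion order determines the tree's shape
    root = insert(root, *p)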
Node structure for a point quadtree.
A node of a point quadtree is similar to a node of a binary tree, with the major difference being that it has four pointers (one for each quadrant) instead of two ("left" and "right") as in an ordinary binary tree. Also a key is usually decomposed into two parts, referring to x and y coordinates. Therefore, a node contains the following information:
Point-region (PR) quadtree.
Point-region (PR) quadtrees are very similar to region quadtrees. The difference is the type of information stored about the cells. In a region quadtree, a uniform value is stored that applies to the entire area of the cell of a leaf. The cells of a PR quadtree, however, store a list of points that exist within the cell of a leaf. As mentioned previously, for trees following this decomposition strategy the height depends on the spatial distribution of the points. Like the point quadtree, the PR quadtree may also have a linear height when given a "bad" set.
Edge quadtree.
Edge quadtrees (much like PM quadtrees) are used to store lines rather than points. Curves are approximated by subdividing cells to a very fine resolution, specifically until there is a single line segment per cell. Near corners/vertices, edge quadtrees will continue dividing until they reach their maximum level of decomposition. This can result in extremely unbalanced trees which may defeat the purpose of indexing.
Polygonal map (PM) quadtree.
The polygonal map quadtree (or PM Quadtree) is a variation of quadtree which is used to store collections of polygons that may be degenerate (meaning that they have isolated vertices or edges).
A big difference between PM quadtrees and edge quadtrees is that the cell under consideration is not subdivided if the segments meet at a vertex in the cell.
There are three main classes of PM Quadtrees, which vary depending on what information they store within each black node. PM3 quadtrees can store any number of non-intersecting edges and at most one point. PM2 quadtrees are the same as PM3 quadtrees except that all edges must share the same end point. Finally, PM1 quadtrees are similar to PM2, but black nodes can contain either a point together with its edges, or just a set of edges that share a point; a black node cannot contain a point and a set of edges that do not contain that point.
Compressed quadtrees.
This section summarizes a subsection from a book by Sariel Har-Peled.
If we were to store every node corresponding to a subdivided cell, we may end up storing a lot of empty nodes. We can cut down on the size of such sparse trees by only storing subtrees whose leaves have interesting data (i.e. "important subtrees"). We can actually cut down on the size even further. When we only keep important subtrees, the pruning process may leave long paths in the tree where the intermediate nodes have degree two (a link to one parent and one child). It turns out that we only need to store the node formula_0 at the beginning of this path (and associate some meta-data with it to represent the removed nodes) and attach the subtree rooted at its end to formula_0. It is still possible for these compressed trees to have a linear height when given "bad" input points.
Although we trim a lot of the tree when we perform this compression, it is still possible to achieve logarithmic-time search, insertion, and deletion by taking advantage of "Z"-order curves. The "Z"-order curve maps each cell of the full quadtree (and hence even the compressed quadtree) in formula_1 time to a one-dimensional line (and maps it back in formula_1 time too), creating a total order on the elements. Therefore, we can store the quadtree in a data structure for ordered sets (in which we store the nodes of the tree).
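A small sketch (not from the article) of the "Z"-order mapping assumed here: interleaving the bits of a cell's x and y coordinates gives its Morton code, and sorting cells by that code yields the total order described above. The function name is hypothetical.

# Interleave the bits of x and y to obtain the cell's position on the Z-order curve.
def morton(x, y, bits=16):
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # even bit positions come from x
        code |= ((y >> i) & 1) << (2 * i + 1)    # odd bit positions come from y
    return code

cells = [(3, 1), (0, 0), (2, 2), (1, 3)]
# Sorting by Morton code gives the total order used to store the quadtree's
# nodes in an ordered-set data structure.
print(sorted(cells, key=lambda c: morton(*c)))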
We must state a reasonable assumption before we continue: we assume that given two real numbers formula_2 expressed as binary, we can compute in formula_1 time the index of the first bit in which they differ. We also assume that we can compute in formula_1 time the lowest common ancestor of two points/cells in the quadtree and establish their relative "Z"-ordering, and we can compute the floor function in formula_1 time.
With these assumptions, point location of a given point formula_3 (i.e. determining the cell that would contain formula_3), insertion, and deletion operations can all be performed in formula_4 time (i.e. the time it takes to do a search in the underlying ordered set data structure).
To perform a point location for formula_3 (i.e. find its cell in the compressed tree):
Without going into specific details, to perform insertions and deletions we first do a point location for the thing we want to insert/delete, and then insert/delete it. Care must be taken to reshape the tree as appropriate, creating and removing nodes as needed.
Image processing using quadtrees.
Quadtrees, particularly the region quadtree, have lent themselves well to image processing applications. We will limit our discussion to binary image data, though region quadtrees and the image processing operations performed on them are just as suitable for colour images.
Image union / intersection.
One of the advantages of using quadtrees for image manipulation is that the set operations of union and intersection can be done simply and quickly.
Given two binary images, the image union (also called "overlay") produces an image wherein a pixel is black if either of the input images has a black pixel in the same location. That is, a pixel in the output image is white only when the corresponding pixel in "both" input images is white, otherwise the output pixel is black. Rather than do the operation pixel by pixel, we can compute the union more efficiently by leveraging the quadtree's ability to represent multiple pixels with a single node. For the purposes of discussion below, if a subtree contains both black and white pixels we will say that the root of that subtree is coloured grey.
The algorithm works by traversing the two input quadtrees (formula_7 and formula_8) while building the output quadtree formula_9. Informally, the algorithm is as follows. Consider the nodes formula_10 and formula_11 corresponding to the same region in the images.
While this algorithm works, it does not by itself guarantee a minimally sized quadtree. For example, consider the result if we were to union a checkerboard (where every tile is a pixel) of size formula_14 with its complement. The result is a giant black square which should be represented by a quadtree with just the root node (coloured black), but instead the algorithm produces a full 4-ary tree of depth formula_15. To fix this, we perform a bottom-up traversal of the resulting quadtree where we check if the four children nodes have the same colour, in which case we replace their parent with a leaf of the same colour.
The intersection of two images is almost the same algorithm. One way to think about the intersection of the two images is that we are doing a union with respect to the "white" pixels. As such, to perform the intersection we swap the mentions of black and white in the union algorithm.
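The following Python sketch (not the article's code) follows the union procedure described above, using the same nested-list representation as the earlier region-quadtree sketch: 1 is a black leaf, 0 is a white leaf, and a four-element list of children plays the role of a grey node.

def union(v1, v2):
    if v1 == 1 or v2 == 1:
        return 1                      # a black leaf dominates the whole region
    if v1 == 0:
        return v2                     # a white leaf contributes nothing
    if v2 == 0:
        return v1
    kids = [union(a, b) for a, b in zip(v1, v2)]
    # Bottom-up merge: if all four children are leaves of the same colour,
    # collapse them into one leaf so the output quadtree stays minimal.
    if all(k == kids[0] for k in kids) and kids[0] in (0, 1):
        return kids[0]
    return kids

checkerboard = [1, 0, 0, 1]              # a depth-1 "checkerboard" and its complement
complement   = [0, 1, 1, 0]
print(union(checkerboard, complement))   # -> 1, a single black leaf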
Connected component labelling.
Consider two neighbouring black pixels in a binary image. They are "adjacent" if they share a bounding horizontal or vertical edge. In general, two black pixels are "connected" if one can be reached from the other by moving only to adjacent pixels (i.e. there is a path of black pixels between them where each consecutive pair is adjacent). Each maximal set of connected black pixels is a "connected component". Using the quadtree representation of images, Samet showed how we can find and label these connected components in time proportional to the size of the quadtree. This algorithm can also be used for polygon colouring.
The algorithm works in three steps:
To simplify the discussion, let us assume the children of a node in the quadtree follow the "Z"-order (SW, NW, SE, NE). Since we can count on this structure, for any cell we know how to navigate the quadtree to find the adjacent cells in the different levels of the hierarchy.
Step one is accomplished with a post-order traversal of the quadtree. For each black leaf formula_5 we look at the node or nodes representing cells that are Northern neighbours and Eastern neighbours (i.e. the Northern and Eastern cells that share edges with the cell of formula_5). Since the tree is organized in "Z"-order, we have the invariant that the Southern and Western neighbours have already been taken care of and accounted for. Let the Northern or Eastern neighbour currently under consideration be formula_0. If formula_0 represents black pixels:
Step two can be accomplished using the union-find data structure. We start with each unique label as a separate set. For every equivalence relation noted in the first step, we union the corresponding sets. Afterwards, each distinct remaining set will be associated with a distinct connected component in the image.
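A minimal union–find sketch (illustrative, not the article's code) for this second step: provisional labels recorded as equivalent are merged so that each surviving root identifies one connected component.

parent = {}

def find(label):
    parent.setdefault(label, label)
    while parent[label] != label:
        parent[label] = parent[parent[label]]   # path halving keeps chains short
        label = parent[label]
    return label

def union_labels(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# Suppose step one recorded that label 1 touches label 2, and 2 touches 3:
union_labels(1, 2)
union_labels(2, 3)
assert find(1) == find(2) == find(3)   # all three belong to one component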
Step three performs another post-order traversal. This time, for each black node formula_5 we use the union-find's "find" operation (with the old label of formula_5) to find and assign formula_5 its new label (associated with the connected component of which formula_5 is part).
Mesh generation using quadtrees.
This section summarizes a chapter from a book by Har-Peled and de Berg et al.
Mesh generation is essentially the triangulation of a point set for which further processing may be performed. As such, it is desirable for the resulting triangulation to have certain properties (like non-uniformity, triangles that are not "too skinny", large triangles in sparse areas and small triangles in dense ones, etc.) to make further processing quicker and less error-prone. Quadtrees built on the point set can be used to create meshes with these desired properties.
Consider a leaf of the quadtree and its corresponding cell formula_5. We say formula_5 is "balanced" (for mesh generation) if the cell's sides are intersected by the corner points of neighbouring cells at most once on each side. This means that the quadtree levels of leaves adjacent to formula_5 differ by at most one from the level of formula_5. When this is true for all leaves, we say the whole quadtree is balanced (for mesh generation).
Consider the cell formula_5 and the formula_16 neighbourhood of same-sized cells centred at formula_5. We call this neighbourhood the "extended cluster". We say the quadtree is "well-balanced" if it is balanced, and for every leaf formula_0 that contains a point of the point set, its extended cluster is also in the quadtree and the extended cluster contains no other point of the point set.
Creating the mesh is done as follows:
We consider the corner points of the tree cells as vertices in our triangulation. Before the transformation step we have a bunch of boxes with points in some of them. The transformation step is done in the following manner: for each point, warp the closest corner of its cell to meet it and triangulate the resulting four quadrangles to make "nice" triangles (the interested reader is referred to chapter 12 of Har-Peled for more details on what makes "nice" triangles).
The remaining squares are triangulated according to some simple rules. For each regular square (no points within and no corner points in its sides), introduce the diagonal. Due to the way in which we separated points with the well-balancing property, no square with a corner intersecting a side is one that was warped. As such, we can triangulate squares with intersecting corners as follows. If there is one intersected side, the square becomes three triangles by adding the long diagonals connecting the intersection with opposite corners. If there are four intersected sides, we split the square in half by adding an edge between two of the four intersections, and then connect these two endpoints to the remaining two intersection points. For the other squares, we introduce a point in the middle and connect it to all four corners of the square as well as each intersection point.
At the end of it all, we have a nice triangulated mesh of our point set built from a quadtree.
Pseudocode.
The following pseudo code shows one means of implementing a quadtree which handles only points. There are other approaches available.
Prerequisites.
It is assumed these structures are used.
"// Simple coordinate object to represent points and vectors"
struct XY
float x;
float y;
"// Axis-aligned bounding box with half dimension and center"
struct AABB
XY center;
float halfDimension;
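"// It is assumed that AABB also provides the containsPoint(XY) and intersectsAABB(AABB)"
"// methods used by insert() and queryRange() below."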
QuadTree class.
This class represents both one quad tree and the node where it is rooted.
class QuadTree
"// Arbitrary constant to indicate how many elements can be stored in this quad tree node"
constant int QT_NODE_CAPACITY = 4;
"// Axis-aligned bounding box stored as a center with half-dimensions"
"// to represent the boundaries of this quad tree"
AABB boundary;
"// Points in this quad tree node"
Array of XY [size = QT_NODE_CAPACITY] points;
"// Children"
QuadTree* northWest;
QuadTree* northEast;
QuadTree* southWest;
QuadTree* southEast;
"// Methods"
function subdivide() {...} "// create four children that fully divide this quad into four quads of equal area"
Insertion.
The following method inserts a point into the appropriate quad of a quadtree, splitting if necessary.
class QuadTree
...
"// Insert a point into the QuadTree"
function insert("XY" p)
"// Ignore objects that do not belong in this quad tree"
if (!boundary.containsPoint(p))
return "false"; "// object cannot be added"
"// If there is space in this quad tree and if doesn't have subdivisions, add the object here"
if (points.size < QT_NODE_CAPACITY && northWest == "null")
points.append(p);
return "true";
"// Otherwise, subdivide and then add the point to whichever node will accept it"
if (northWest == "null")
subdivide();
"// We have to add the points/data contained in this quad array to the new quads if we only want"
"// the last node to hold the data"
if (northWest->insert(p)) return "true";
if (northEast->insert(p)) return "true";
if (southWest->insert(p)) return "true";
if (southEast->insert(p)) return "true";
"// Otherwise, the point cannot be inserted for some unknown reason (this should never happen)"
return "false";
Query range.
The following method finds all points contained within a range.
class QuadTree
...
"// Find all points that appear within a range"
function queryRange("AABB" range)
"// Prepare an array of results"
"Array of XY" pointsInRange;
"// Automatically abort if the range does not intersect this quad"
if (!boundary.intersectsAABB(range))
return pointsInRange; "// empty list"
"// Check objects at this quad level"
for (int p = 0; p < points.size; p++)
if (range.containsPoint(points[p]))
pointsInRange.append(points[p]);
"// Terminate here, if there are no children"
if (northWest == "null")
return pointsInRange;
"// Otherwise, add the points from the children"
pointsInRange.appendArray(northWest->queryRange(range));
pointsInRange.appendArray(northEast->queryRange(range));
pointsInRange.appendArray(southWest->queryRange(range));
pointsInRange.appendArray(southEast->queryRange(range));
return pointsInRange;
References.
Surveys by Aluru and Samet give a nice overview of quadtrees.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "u"
},
{
"math_id": 1,
"text": "O(1)"
},
{
"math_id": 2,
"text": "\\alpha, \\beta \\in [0, 1)"
},
{
"math_id": 3,
"text": "q"
},
{
"math_id": 4,
"text": "O(\\log{n})"
},
{
"math_id": 5,
"text": "v"
},
{
"math_id": 6,
"text": "q \\in v"
},
{
"math_id": 7,
"text": "T_1"
},
{
"math_id": 8,
"text": "T_2"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "v_1 \\in T_1"
},
{
"math_id": 11,
"text": "v_2 \\in T_2"
},
{
"math_id": 12,
"text": "v_1"
},
{
"math_id": 13,
"text": "v_2"
},
{
"math_id": 14,
"text": "2^{k} \\times 2^{k}"
},
{
"math_id": 15,
"text": "k"
},
{
"math_id": 16,
"text": "5\\times 5"
}
] |
https://en.wikipedia.org/wiki?curid=577097
|
57712889
|
Self-adaptive mechanisms
|
Self-adaptive mechanisms, sometimes simply called adaptive mechanisms, in engineering, are underactuated mechanisms that can adapt to their environment. Some of the most well-known examples of this type of mechanism are underactuated fingers, grippers, and robotic hands. Contrary to standard underactuated mechanisms, where the motion is governed by the dynamics of the system, the motion of self-adaptive mechanisms is generally constrained by compliant elements cleverly located in the mechanism.
Definition.
Underactuated mechanisms have a lower number of actuators than the number of degrees of freedom (DOF). In a two-dimensional plane, a mechanism can have up to three DOF (two translations, one rotation), and in three-dimensional Euclidean space, up to six (three translations, three rotations). In the case of self-adaptive mechanisms, the lack of actuators is compensated for by passive elements that constrain the motion of the system. Springs are a good example of such elements, but others can be used depending on the type of mechanism.
One of the earliest examples of a self-adaptive mechanism is the flapping wing proposed by Leonardo da Vinci in the Codex Atlanticus.
Underactuated hands.
The first commonly known underactuated finger was the Soft-Gripper designed by Shigeo Hirose in the late 1970s. The most common type of transmission mechanisms used in self-adaptive hands are linkages and tendons.
Kinetostatics.
Underactuated fingers and hands are usually analyzed with respect to their kinetostatics (negligible kinetic energy, static analysis of a mechanism in motion) rather than the dynamics of the system, as the kinetic energy of these systems is generally negligible compared to the potential energy stored in the passive elements. The forces applied by each phalanx of an underactuated finger can be computed with the following expression:
formula_0
where F is the vector of the forces applied by the phalanges, J is the Jacobian matrix of the finger, T* is the transmission matrix, and t is the torque vector (comprising the actuator and passive-element torques).
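For illustration only (the matrices and torques below are made-up values, not taken from any reference design), a short Python sketch evaluating this kinetostatic relation for a two-phalanx finger:

# Evaluate F = J^{-T} T*^{T} t for illustrative two-phalanx values.
import numpy as np

J = np.array([[0.05, 0.00],      # hypothetical finger Jacobian
              [0.08, 0.04]])
T_star = np.array([[1.0, 0.0],   # hypothetical transmission matrix
                   [0.6, 1.0]])
t = np.array([1.2, -0.1])        # actuator torque and passive (spring) torque, N·m

F = np.linalg.inv(J).T @ T_star.T @ t   # contact force applied by each phalanx, N
print(F)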
Applications.
A self-adaptive robotic hand, SARAH (Self-Adaptive Robot Auxiliary Hand), was designed and built to be part of Dextre's toolbox. Dextre is a robotic telemanipulator that resides at the end of Canadarm2 on the International Space Station. The Yale OpenHand is an example of open-source self-adaptive mechanisms that can be found online. Some companies also sell self-adaptive hands for industrial purposes. Prosthetics is another application for self-adaptive hands; one known example is the SPRING (Self-Adaptive Prosthesis for Restoring Natural Grasping) hand.
Other examples.
Self-adaptive mechanisms can be used for other applications, such as walking robots.
Compliant mechanisms are another example of self-adaptive mechanisms, where the passive elements and the transmission mechanism are a single monolithic block.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{F}=\\mathbf{J}^{-T}\\mathbf{T}^{*T}\\mathbf{t}"
}
] |
https://en.wikipedia.org/wiki?curid=57712889
|
577162
|
Relativistic wave equations
|
Wave equations respecting special and general relativity
In physics, specifically relativistic quantum mechanics (RQM) and its applications to particle physics, relativistic wave equations predict the behavior of particles at high energies and velocities comparable to the speed of light. In the context of quantum field theory (QFT), the equations determine the dynamics of quantum fields.
The solutions to the equations, universally denoted as ψ or Ψ (Greek psi), are referred to as "wave functions" in the context of RQM, and "fields" in the context of QFT. The equations themselves are called "wave equations" or "field equations", because they have the mathematical form of a wave equation or are generated from a Lagrangian density and the field-theoretic Euler–Lagrange equations (see classical field theory for background).
In the Schrödinger picture, the wave function or field is the solution to the Schrödinger equation;
formula_0
one of the postulates of quantum mechanics. All relativistic wave equations can be constructed by specifying various forms of the Hamiltonian operator "Ĥ" describing the quantum system. Alternatively, Feynman's path integral formulation uses a Lagrangian rather than a Hamiltonian operator.
More generally – the modern formalism behind relativistic wave equations is Lorentz group theory, wherein the spin of the particle has a correspondence with the representations of the Lorentz group.
History.
Early 1920s: Classical and quantum mechanics.
The failure of classical mechanics when applied to molecular, atomic, and nuclear systems, and smaller, induced the need for a new mechanics: "quantum mechanics". The mathematical formulation was led by de Broglie, Bohr, Schrödinger, Pauli, Heisenberg, and others, around the mid-1920s, and at that time was analogous to that of classical mechanics. The Schrödinger equation and the Heisenberg picture resemble the classical equations of motion in the limit of large quantum numbers and as the reduced Planck constant "ħ", the quantum of action, tends to zero. This is the correspondence principle. At this point, special relativity was not fully combined with quantum mechanics, so the Schrödinger and Heisenberg formulations, as originally proposed, could not be used in situations where the particles travel near the speed of light, or when the number of each type of particle changes (this happens in real particle interactions; the numerous forms of particle decays, annihilation, matter creation, pair production, and so on).
Late 1920s: Relativistic quantum mechanics of spin-0 and spin-1/2 particles.
A description of quantum mechanical systems which could account for "relativistic" effects was sought for by many theoretical physicists from the late 1920s to the mid-1940s. The first basis for relativistic quantum mechanics, i.e. special relativity applied with quantum mechanics together, was found by all those who discovered what is frequently called the Klein–Gordon equation:
by inserting the energy operator and momentum operator into the relativistic energy–momentum relation:
The solutions to (1) are scalar fields. The KG equation is undesirable due to its prediction of "negative" energies and probabilities, as a result of the quadratic nature of (2) – inevitable in a relativistic theory. This equation was initially proposed by Schrödinger, and he discarded it for such reasons, only to realize a few months later that its non-relativistic limit (what is now called the Schrödinger equation) was still of importance. Nevertheless, (1) is applicable to spin-0 bosons.
Neither the non-relativistic nor relativistic equations found by Schrödinger could predict the fine structure in the hydrogen spectral series. The mysterious underlying property was "spin". The first two-dimensional "spin matrices" (better known as the Pauli matrices) were introduced by Pauli in the Pauli equation; the Schrödinger equation with a non-relativistic Hamiltonian including an extra term for particles in magnetic fields, but this was "phenomenological". Weyl found a relativistic equation in terms of the Pauli matrices; the Weyl equation, for "massless" spin-<templatestyles src="Fraction/styles.css" />1⁄2 fermions. The problem was resolved by Dirac in the late 1920s, when he furthered the application of equation (2) to the electron – by various manipulations he factorized the equation into the form:
and one of these factors is the Dirac equation (see below), upon inserting the energy and momentum operators. For the first time, this introduced new four-dimensional spin matrices α and "β" in a relativistic wave equation, and explained the fine structure of hydrogen. The solutions to (3A) are multi-component spinor fields, and each component satisfies (1). A remarkable result of spinor solutions is that half of the components describe a particle while the other half describe an antiparticle; in this case the electron and positron. The Dirac equation is now known to apply for all massive spin-<templatestyles src="Fraction/styles.css" />1⁄2 fermions. In the non-relativistic limit, the Pauli equation is recovered, while the massless case results in the Weyl equation.
Although a landmark in quantum theory, the Dirac equation is only true for spin-<templatestyles src="Fraction/styles.css" />1⁄2 fermions, and still predicts negative energy solutions, which caused controversy at the time (in particular – not all physicists were comfortable with the "Dirac sea" of negative energy states).
1930s–1960s: Relativistic quantum mechanics of higher-spin particles.
The natural problem became clear: to generalize the Dirac equation to particles with "any spin"; both fermions and bosons, and in the same equations their antiparticles (possible because of the spinor formalism introduced by Dirac in his equation, and then-recent developments in spinor calculus by van der Waerden in 1929), and ideally with positive energy solutions.
This was introduced and solved by Majorana in 1932, by a deviated approach to Dirac. Majorana considered one "root" of (3A):
where "ψ" is a spinor field now with infinitely many components, irreducible to a finite number of tensors or spinors, to remove the indeterminacy in sign. The matrices α and "β" are infinite-dimensional matrices, related to infinitesimal Lorentz transformations. He did not demand that each component of 3B satisfy equation (2); instead he regenerated the equation using a Lorentz-invariant action, via the principle of least action, and application of Lorentz group theory.
Majorana produced other important contributions that were unpublished, including wave equations of various dimensions (5, 6, and 16). They were anticipated later (in a more involved way) by de Broglie (1934), and Duffin, Kemmer, and Petiau (around 1938–1939) see Duffin–Kemmer–Petiau algebra. The Dirac–Fierz–Pauli formalism was more sophisticated than Majorana's, as spinors were new mathematical tools in the early twentieth century, although Majorana's paper of 1932 was difficult to fully understand; it took Pauli and Wigner some time to understand it, around 1940.
Dirac in 1936, and Fierz and Pauli in 1939, built equations from irreducible spinors "A" and "B", symmetric in all indices, for a massive particle of spin "n" + <templatestyles src="Fraction/styles.css" />1⁄2 for integer "n" (see Van der Waerden notation for the meaning of the dotted indices):
where "p" is the momentum as a covariant spinor operator. For "n"
0, the equations reduce to the coupled Dirac equations and "A" and "B" together transform as the original Dirac spinor. Eliminating either "A" or "B" shows that "A" and "B" each fulfill (1). The direct derivation of the Dirac-Pauli-Fierz equations using the Bargmann-Wigner operators is given in.
In 1941, Rarita and Schwinger focussed on spin-<templatestyles src="Fraction/styles.css" />3⁄2 particles and derived the Rarita–Schwinger equation, including a Lagrangian to generate it, and later generalized the equations analogous to spin "n" + <templatestyles src="Fraction/styles.css" />1⁄2 for integer "n". In 1945, Pauli suggested Majorana's 1932 paper to Bhabha, who returned to the general ideas introduced by Majorana in 1932. Bhabha and Lubanski proposed a completely general set of equations by replacing the mass terms in (3A) and (3B) by an arbitrary constant, subject to a set of conditions which the wave functions must obey.
Finally, in the year 1948 (the same year as Feynman's path integral formulation was cast), Bargmann and Wigner formulated the general equation for massive particles which could have any spin, by considering the Dirac equation with a totally symmetric finite-component spinor, and using Lorentz group theory (as Majorana did): the Bargmann–Wigner equations. In the early 1960s, a reformulation of the Bargmann–Wigner equations was made by H. Joos and Steven Weinberg, the Joos–Weinberg equation. Various theorists at this time did further research in relativistic Hamiltonians for higher spin particles.
1960s–present.
The relativistic description of spin particles has been a difficult problem in quantum theory. It is still an area of present-day research, because the problem is only partially solved; including interactions in the equations is problematic, and paradoxical predictions (even from the Dirac equation) are still present.
Linear equations.
The following equations have solutions which satisfy the superposition principle, that is, the wave functions are additive.
Throughout, the standard conventions of tensor index notation and Feynman slash notation are used, including Greek indices which take the values 1, 2, 3 for the spatial components and 0 for the timelike component of the indexed quantities. The wave functions are denoted "ψ", and ∂"μ" are the components of the four-gradient operator.
In matrix equations, the Pauli matrices are denoted by "σμ" in which "μ" = 0, 1, 2, 3, where "σ"0 is the 2 × 2 identity matrix:
formula_1
and the other matrices have their usual representations. The expression
formula_2
is a 2 × 2 matrix operator which acts on 2-component spinor fields.
The gamma matrices are denoted by "γ""μ", in which again "μ" = 0, 1, 2, 3, and there are a number of representations to select from. The matrix "γ"0 is "not" necessarily the 4 × 4 identity matrix. The expression
formula_3
is a 4 × 4 matrix operator which acts on 4-component spinor fields.
Note that terms such as "mc" scalar multiply an identity matrix of the relevant dimension, the common sizes are 2 × 2 or 4 × 4, and are "conventionally" not written for simplicity.
Linear gauge fields.
The Duffin–Kemmer–Petiau equation is an alternative equation for spin-0 and spin-1 particles:
formula_4
Constructing RWEs.
Using 4-vectors and the energy–momentum relation.
Start with the standard special relativity (SR) 4-vectors
Note that each 4-vector is related to another by a Lorentz scalar:
Now, just apply the standard Lorentz scalar product rule to each one:
The last equation is a fundamental quantum relation.
When applied to a Lorentz scalar field formula_20, one gets the Klein–Gordon equation, the most basic of the quantum relativistic wave equations.
The Schrödinger equation is the low-velocity limiting case ("v" ≪ "c") of the Klein–Gordon equation.
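As an illustrative check (not part of the article), the following Python/SymPy sketch verifies that a one-dimensional plane wave satisfies the Klein–Gordon equation whenever its frequency and wavenumber obey the relativistic energy–momentum relation; all symbol names are chosen for this example only.

# Check that exp(i(kx - ωt)) solves the Klein–Gordon equation when
# ω² = c²k² + (m0 c²/ħ)², i.e. E² = (pc)² + (m0 c²)² with E = ħω, p = ħk.
import sympy as sp

t, x, hbar, c, m0 = sp.symbols('t x hbar c m_0', positive=True)
k = sp.symbols('k', real=True)
omega = sp.sqrt(c**2 * k**2 + (m0 * c**2 / hbar)**2)
psi = sp.exp(sp.I * (k * x - omega * t))

# Klein–Gordon operator in one spatial dimension applied to psi:
kg = sp.diff(psi, t, 2) / c**2 - sp.diff(psi, x, 2) + (m0 * c / hbar)**2 * psi
print(sp.simplify(kg))   # prints 0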
When the relation is applied to a four-vector field formula_24 instead of a Lorentz scalar field formula_20, then one gets the Proca equation (in Lorenz gauge):
formula_25
If the rest mass term is set to zero (light-like particles), then this gives the free Maxwell equation (in Lorenz gauge)
formula_26
Representations of the Lorentz group.
Under a proper orthochronous Lorentz transformation "x" → Λ"x" in Minkowski space, all one-particle quantum states "ψ""j""σ" of spin "j" with spin z-component "σ" locally transform under some representation "D" of the Lorentz group:
formula_27
where "D"(Λ) is some finite-dimensional representation, i.e. a matrix. Here "ψ" is thought of as a column vector containing components with the allowed values of "σ". The quantum numbers "j" and "σ" as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. One value of "σ" may occur more than once depending on the representation. Representations with several possible values for "j" are considered below.
The irreducible representations are labeled by a pair of half-integers or integers ("A", "B"). From these all other representations can be built up using a variety of standard methods, like taking tensor products and direct sums. In particular, space-time itself constitutes a 4-vector representation (1/2, 1/2) so that Λ ∈ "D"(1/2, 1/2). To put this into context; Dirac spinors transform under the (1/2, 0) ⊕ (0, 1/2) representation. In general, the ("A", "B") representation space has subspaces that under the subgroup of spatial rotations, SO(3), transform irreducibly like objects of spin "j", where each allowed value:
formula_28
occurs exactly once. In general, "tensor products of irreducible representations" are reducible; they decompose as direct sums of irreducible representations.
The representations "D"("j", 0) and "D"(0, "j") can each separately represent particles of spin "j". A state or quantum field in such a representation would satisfy no field equation except the Klein–Gordon equation.
Non-linear equations.
There are equations which have solutions that do not satisfy the superposition principle.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " i\\hbar\\frac{\\partial}{\\partial t}\\psi = \\hat{H} \\psi"
},
{
"math_id": 1,
"text": "\\sigma^0 = \\begin{pmatrix} 1&0 \\\\ 0&1 \\\\ \\end{pmatrix} "
},
{
"math_id": 2,
"text": "\\sigma^\\mu \\partial_\\mu \\equiv \\sigma^0 \\partial_0 + \\sigma^1 \\partial_1 + \\sigma^2 \\partial_2 + \\sigma^3 \\partial_3 "
},
{
"math_id": 3,
"text": "i\\hbar \\gamma^\\mu \\partial_\\mu + mc \\equiv i\\hbar(\\gamma^0 \\partial_0 + \\gamma^1 \\partial_1 + \\gamma^2 \\partial_2 + \\gamma^3 \\partial_3) + mc \\begin{pmatrix}1&0&0&0\\\\ 0&1&0&0 \\\\ 0&0&1&0 \\\\ 0&0&0&1 \\end{pmatrix} "
},
{
"math_id": 4,
"text": "(i \\hbar \\beta^{a} \\partial_a - m c) \\psi = 0"
},
{
"math_id": 5,
"text": "X^\\mu = \\mathbf{X} = (ct,\\vec{\\mathbf{x}})"
},
{
"math_id": 6,
"text": "U^\\mu = \\mathbf{U} = \\gamma(c,\\vec{\\mathbf{u}})"
},
{
"math_id": 7,
"text": "P^\\mu = \\mathbf{P} = \\left(\\frac{E}{c},\\vec{\\mathbf{p}}\\right)"
},
{
"math_id": 8,
"text": "K^\\mu = \\mathbf{K} = \\left(\\frac{\\omega}{c},\\vec{\\mathbf{k}}\\right)"
},
{
"math_id": 9,
"text": "\\partial^\\mu = \\mathbf{\\partial} = \\left(\\frac{\\partial_t}{c},-\\vec{\\mathbf{\\nabla}}\\right)"
},
{
"math_id": 10,
"text": "\\mathbf{U} = \\frac{d}{d\\tau} \\mathbf{X}"
},
{
"math_id": 11,
"text": "\\tau"
},
{
"math_id": 12,
"text": "\\mathbf{P} = m_0 \\mathbf{U}"
},
{
"math_id": 13,
"text": "m_0"
},
{
"math_id": 14,
"text": "\\mathbf{K} = (1/\\hbar) \\mathbf{P}"
},
{
"math_id": 15,
"text": "\\mathbf{\\partial} = -i \\mathbf{K}"
},
{
"math_id": 16,
"text": "\\mathbf{U} \\cdot \\mathbf{U} = (c)^2"
},
{
"math_id": 17,
"text": "\\mathbf{P} \\cdot \\mathbf{P} = (m_0 c)^2"
},
{
"math_id": 18,
"text": "\\mathbf{K} \\cdot \\mathbf{K} = \\left(\\frac{m_0 c}{\\hbar}\\right)^2"
},
{
"math_id": 19,
"text": "\\mathbf{\\partial} \\cdot \\mathbf{\\partial} = \\left(\\frac{-i m_0 c}{\\hbar}\\right)^2 = -\\left(\\frac{m_0 c}{\\hbar}\\right)^2"
},
{
"math_id": 20,
"text": "\\psi"
},
{
"math_id": 21,
"text": "\\left[\\mathbf{\\partial} \\cdot \\mathbf{\\partial} + \\left(\\frac{m_0 c}{\\hbar}\\right)^2\\right]\\psi = 0"
},
{
"math_id": 22,
"text": "\\left[\\partial_\\mu \\partial^\\mu + \\left(\\frac{m_0 c}{\\hbar}\\right)^2\\right]\\psi = 0"
},
{
"math_id": 23,
"text": "\\left[(\\hbar \\partial_{\\mu} + i m_0 c)(\\hbar \\partial^{\\mu} -i m_0 c)\\right]\\psi = 0"
},
{
"math_id": 24,
"text": "A^\\mu"
},
{
"math_id": 25,
"text": "\\left[\\mathbf{\\partial} \\cdot \\mathbf{\\partial} + \\left(\\frac{m_0 c}{\\hbar}\\right)^2\\right]A^\\mu = 0"
},
{
"math_id": 26,
"text": "[\\mathbf{\\partial} \\cdot \\mathbf{\\partial}]A^\\mu = 0"
},
{
"math_id": 27,
"text": "\\psi(x) \\rightarrow D(\\Lambda) \\psi(\\Lambda^{-1}x) "
},
{
"math_id": 28,
"text": "j = A + B, A + B - 1, \\dots, |A - B|,"
},
{
"math_id": 29,
"text": "R_{\\mu \\nu} - \\frac{1}{2} g_{\\mu \\nu}\\,R + g_{\\mu \\nu} \\Lambda = \\frac{8 \\pi G}{c^4} T_{\\mu \\nu}"
}
] |
https://en.wikipedia.org/wiki?curid=577162
|
57721868
|
Mass-flux fraction
|
Ratio of mass-flux of chemical species to total mass flux
The mass-flux fraction (or Hirschfelder–Curtiss variable or Kármán–Penner variable) is the ratio of the mass flux of a particular chemical species to the total mass flux of a gaseous mixture. It includes both the convective mass flux and the diffusive mass flux. It was introduced by Joseph O. Hirschfelder and Charles F. Curtiss in 1948 and later by Theodore von Kármán and Sol Penner in 1954. The mass-flux fraction of a species "i" is defined as
formula_0
where formula_1 is the mass fraction of species "i", formula_2 is the flow velocity, formula_3 is the diffusion velocity of species "i", formula_4 is the density of species "i", and formula_5 is the density of the mixture.
It satisfies the identity
formula_6,
similar to the mass fraction, but the mass-flux fraction can take both positive and negative values. This variable is used in steady, one-dimensional combustion problems in place of the mass fraction. For one-dimensional (formula_7 direction) steady flows, the conservation equation for the mass-flux fraction reduces to
formula_8,
where formula_9 is the mass production rate of species "i".
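As a purely illustrative numerical sketch (values invented for this example, not taken from any reference), the following Python snippet evaluates the mass-flux fractions for a three-species mixture and checks that they sum to one when the diffusive mass fluxes sum to zero:

# eps_i = Y_i * (1 + V_i / v); with sum(Y_i) = 1 and sum(Y_i * V_i) = 0,
# the mass-flux fractions sum to one.
Y = [0.2, 0.3, 0.5]          # mass fractions
v = 1.5                      # bulk (mass-averaged) flow velocity, m/s
V = [0.05, -0.01, -0.014]    # diffusion velocities chosen so sum(Y_i * V_i) = 0

assert abs(sum(y * vi for y, vi in zip(Y, V))) < 1e-12
eps = [y * (1 + vi / v) for y, vi in zip(Y, V)]
print(eps, sum(eps))         # the mass-flux fractions sum to 1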
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\epsilon_i = \\frac{\\rho_i (v+ V_i)}{\\rho v} = Y_i\\left(1+\\frac{V_i}{v}\\right) "
},
{
"math_id": 1,
"text": "Y_i=\\rho_i/\\rho"
},
{
"math_id": 2,
"text": "v"
},
{
"math_id": 3,
"text": "V_i"
},
{
"math_id": 4,
"text": "\\rho_i"
},
{
"math_id": 5,
"text": "\\rho"
},
{
"math_id": 6,
"text": "\\sum_i \\epsilon_i =1"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "\\frac{d\\epsilon_i}{dx} = \\frac{w_i}{\\rho v}"
},
{
"math_id": 9,
"text": "w_i"
}
] |
https://en.wikipedia.org/wiki?curid=57721868
|
57724562
|
Universal dielectric response
|
In physics and electrical engineering, the universal dielectric response, or UDR, refers to the observed emergent behaviour of the dielectric properties exhibited by diverse solid-state systems. In particular, this widely observed response involves power-law scaling of dielectric properties with frequency under alternating-current (AC) conditions. First defined in a landmark article by A. K. Jonscher published in "Nature" in 1977, the origins of the UDR were attributed to the dominance of many-body interactions in such systems and to their equivalence to analogous RC networks.
The universal dielectric response manifests in the variation of AC conductivity with frequency and is most often observed in complex systems consisting of multiple phases of similar or dissimilar materials. Such systems, which can be called heterogeneous or composite materials, can be described from a dielectric perspective as a large network of resistor and capacitor elements, also known as an RC network. At low and high frequencies, the dielectric response of heterogeneous materials is governed by percolation pathways. If a heterogeneous material is represented by a network in which more than 50% of the elements are capacitors, percolation through capacitor elements will occur, resulting in conductivity at high and low frequencies that is directly proportional to frequency. Conversely, if the fraction of capacitor elements in the representative RC network (Pc) is lower than 0.5, the dielectric behaviour in the low- and high-frequency regimes is independent of frequency. At intermediate frequencies, a very broad range of heterogeneous materials shows a well-defined emergent region in which a power-law correlation of admittance to frequency is observed. This power-law emergent region is the key feature of the UDR. In materials or systems exhibiting UDR, the overall dielectric response from high to low frequencies is symmetrical, being centred at the middle point of the emergent region, which occurs in equivalent RC networks at a frequency of formula_0. In the power-law emergent region, the admittance of the overall system follows the general power-law proportionality formula_1, where the power-law exponent α can be approximated by the fraction of capacitors in the equivalent RC network of the system, α ≅ Pc.
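As an illustration only (synthetic data and hypothetical values, not from any measurement), the following Python sketch shows how the power-law exponent α of the emergent region can be extracted from admittance–frequency data by a log–log fit:

import numpy as np

omega = np.logspace(2, 6, 50)        # angular frequencies spanning the emergent region
alpha_true = 0.6                     # e.g. a network with ~60% capacitor elements
Y = 1e-9 * omega**alpha_true         # synthetic admittance magnitude obeying Y ∝ ω^α

# The slope of log|Y| versus log(ω) estimates the power-law exponent α ≈ Pc.
slope, intercept = np.polyfit(np.log(omega), np.log(Y), 1)
print(f"fitted exponent alpha ≈ {slope:.2f}")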
Significance of the UDR.
The power law scaling of dielectric properties with frequency is valuable in interpreting impedance spectroscopy data towards the characterisation of responses in emerging ferroelectric and multiferroic materials.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\omega = (RC)^{-1} "
},
{
"math_id": 1,
"text": " Y\\propto\\omega^{\\alpha} "
}
] |
https://en.wikipedia.org/wiki?curid=57724562
|
577248
|
New riddle of induction
|
Philosophical paradox introduced by Nelson Goodman
The new riddle of induction was presented by Nelson Goodman in "Fact, Fiction, and Forecast" as a successor to Hume's original problem. It presents the logical predicates grue and bleen which are unusual due to their time-dependence. Many have tried to solve the new riddle on those terms, but Hilary Putnam and others have argued such time-dependency depends on the language adopted, and in some languages it is equally true for natural-sounding predicates such as "green". For Goodman they illustrate the problem of projectible predicates and ultimately, which empirical generalizations are law-like and which are not. Goodman's construction and use of "grue" and "bleen" illustrates how philosophers use simple examples in conceptual analysis.
Grue and bleen.
Goodman defined "grue" relative to an arbitrary but fixed time "t": an object is grue if and only if it is observed before "t" and is green, or else is not so observed and is blue. An object is "bleen" if and only if it is observed before "t" and is blue, or else is not so observed and is green.
For some arbitrary future time "t", for all green things observed prior to "t", such as emeralds and well-watered grass, both the predicates "green" and "grue" apply. Likewise for all blue things observed prior to "t", such as bluebirds or blue flowers, both the predicates "blue" and "bleen" apply. After "t", however, newly observed emeralds and well-watered grass are "bleen", and newly observed bluebirds or blue flowers are "grue". The predicates "grue" and "bleen" are not the kinds of predicates used in everyday life or in science, but they apply in just the same way as the predicates "green" and "blue" up until some future time "t". From the perspective of observers before time "t" it is indeterminate which predicates are future projectible ("green" and "blue" or "grue" and "bleen").
The new riddle of induction.
In this section, Goodman's new riddle of induction is outlined in order to set the context for his introduction of the predicates "grue" and "bleen" and thereby illustrate their philosophical importance.
The old problem of induction and its dissolution.
Goodman poses Hume's problem of induction as a problem of the validity of the predictions we make. Since predictions are about what has yet to be observed and because there is no necessary connection between what has been observed and what will be observed, there is no objective justification for these predictions. Deductive logic cannot be used to infer predictions about future observations based on past observations because there are no valid rules of deductive logic for such inferences. Hume's answer was that observations of one kind of event following another kind of event result in habits of regularity (i.e., associating one kind of event with another kind). Predictions are then based on these regularities or habits of mind.
Goodman takes Hume's answer to be a serious one. He rejects other philosophers' objection that Hume is merely explaining the origin of our predictions and not their justification. His view is that Hume has identified something deeper. To illustrate this, Goodman turns to the problem of justifying a system of rules of deduction. For Goodman, the validity of a deductive system is justified by its conformity to good deductive practice. The justification of rules of a deductive system depends on our judgements about whether to reject or accept specific deductive inferences. Thus, for Goodman, the problem of induction dissolves into the same problem as justifying a deductive system and while, according to Goodman, Hume was on the right track with habits of mind, the problem is more complex than Hume realized.
In the context of justifying rules of induction, this becomes the problem of confirmation of generalizations for Goodman. However, the confirmation is not a problem of justification but instead it is a problem of precisely defining how evidence confirms generalizations. It is with this turn that "grue" and "bleen" have their philosophical role in Goodman's view of induction.
Projectible predicates.
The new riddle of induction, for Goodman, rests on our ability to distinguish "lawlike" from "non-lawlike" generalizations. "Lawlike" generalizations are capable of confirmation while "non-lawlike" generalizations are not. "Lawlike" generalizations are required for making predictions. Using examples from Goodman, the generalization that all copper conducts electricity is capable of confirmation by a particular piece of copper whereas the generalization that all men in a given room are third sons is not "lawlike" but accidental. The generalization that all copper conducts electricity is a basis for predicting that this piece of copper will conduct electricity. The generalization that all men in a given room are third sons, however, is not a basis for predicting that a given man in that room is a third son.
The question, therefore, is what makes some generalizations "lawlike" and others accidental. This, for Goodman, becomes a problem of determining which predicates are projectible (i.e., can be used in "lawlike" generalizations that serve as predictions) and which are not. Goodman argues that this is where the fundamental problem lies. This problem is known as Goodman's paradox: from the apparently strong evidence that all emeralds examined thus far have been green, one may inductively conclude that all future emeralds will be green. However, whether this prediction is "lawlike" or not depends on the predicates used in this prediction. Goodman observed that (assuming "t" has yet to pass) it is equally true that every emerald that has been observed is "grue". Thus, by the same evidence we can conclude that all future emeralds will be "grue". The new problem of induction becomes one of distinguishing projectible predicates such as "green" and "blue" from non-projectible predicates such as "grue" and "bleen".
Hume, Goodman argues, missed this problem. We do not, by habit, form generalizations from all associations of events we have observed but only some of them. All past observed emeralds were green, and we formed a habit of thinking the next emerald will be green, but they were equally grue, and we do not form habits concerning grueness. "Lawlike" predictions (or projections) ultimately are distinguishable by the predicates we use. Goodman's solution is to argue that "lawlike" predictions are based on projectible predicates such as "green" and "blue" and not on non-projectible predicates such as "grue" and "bleen" and what makes predicates projectible is their "entrenchment", which depends on their successful past projections. Thus, "grue" and "bleen" function in Goodman's arguments to both illustrate the new riddle of induction and to illustrate the distinction between projectible and non-projectible predicates via their relative entrenchment.
Responses.
One response is to appeal to the artificially disjunctive definition of grue. The notion of predicate "entrenchment" is not required. Goodman said that this does not succeed. If we take "grue" and "bleen" as primitive predicates, we can define green as ""grue" if first observed before "t" and "bleen" otherwise", and likewise for blue. To deny the acceptability of this disjunctive definition of green would be to beg the question.
Another proposed resolution that does not require predicate "entrenchment" is that ""x" is grue" is not solely a predicate of "x", but of "x" and a time "t"—we can know that an object is green without knowing the time "t", but we cannot know that it is grue. If this is the case, we should not expect ""x" is grue" to remain true when the time changes. However, one might ask why ""x" is green" is "not" considered a predicate of a particular time "t"—the more common definition of "green" does not require any mention of a time "t", but the definition "grue" does. Goodman also addresses and rejects this proposed solution as question begging because "blue" can be defined in terms of "grue" and "bleen", which explicitly refer to time.
Swinburne.
Richard Swinburne gets past the objection that green may be redefined in terms of "grue" and "bleen" by making a distinction based on how we test for the applicability of a predicate in a particular case. He distinguishes between qualitative and locational predicates. Qualitative predicates, like green, "can" be assessed without knowing the spatial or temporal relation of "x" to a particular time, place or event. Locational predicates, like "grue", "cannot" be assessed without knowing the spatial or temporal relation of "x" to a particular time, place or event, in this case whether "x" is being observed before or after time "t". Although green can be given a definition in terms of the locational predicates "grue" and "bleen", this is irrelevant to the fact that green meets the criterion for being a qualitative predicate whereas "grue" is merely locational. He concludes that if some "x"'s under examination—like emeralds—satisfy both a qualitative and a locational predicate, but projecting these two predicates yields conflicting predictions, namely, whether emeralds examined after time "t" shall appear grue or green, we should project the qualitative predicate, in this case green.
Carnap.
Rudolf Carnap responded to Goodman's 1946 article. Carnap's approach to inductive logic is based on the notion of "degree of confirmation" "c"("h","e") of a given hypothesis "h" by a given evidence "e". Both "h" and "e" are logical formulas expressed in a simple language "L" which allows for
The universe of discourse consists of denumerably many individuals, each of which is designated by its own constant symbol; such individuals are meant to be regarded as positions ("like space-time points in our actual world") rather than extended physical bodies. A state description is a (usually infinite) conjunction containing every possible ground atomic sentence, either negated or unnegated; such a conjunction describes a possible state of the whole universe. Carnap requires the following semantic properties:
Carnap distinguishes three kinds of properties: purely qualitative properties, purely positional properties, and mixed properties.
To illuminate this taxonomy, let "x" be a variable and "a" a constant symbol; then an example of 1. could be ""x" is blue or "x" is non-warm", an example of 2. ""x" = "a"", and an example of 3. ""x" is red and not "x" = "a"".
Based on his theory of inductive logic sketched above, Carnap formalizes Goodman's notion of projectibility of a property "W" as follows: the higher the relative frequency of "W" in an observed sample, the higher is the probability that a non-observed individual has the property "W". Carnap suggests "as a tentative answer" to Goodman, that all purely qualitative properties are projectible, all purely positional properties are non-projectible, and mixed properties require further investigation.
Quine.
Willard Van Orman Quine discusses an approach to consider only "natural kinds" as projectible predicates.
He first relates Goodman's grue paradox to Hempel's raven paradox by defining two predicates "F" and "G" to be (simultaneously) projectible if all their shared instances count toward confirmation of the claim "each "F" is a "G"". Then Hempel's paradox just shows that the complements of projectible predicates (such as "is a raven", and "is black") need not be projectible, while Goodman's paradox shows that "is green" is projectible, but "is grue" is not.
Next, Quine reduces projectibility to the subjective notion of "similarity". Two green emeralds are usually considered more similar than two grue ones if only one of them is green. Observing a green emerald makes us expect a similar observation (i.e., a green emerald) next time. Green emeralds are a "natural kind", but grue emeralds are not. Quine investigates "the dubious scientific standing of a general notion of similarity, or of kind". Both are basic to thought and language, like the logical notions of e.g. identity, negation, disjunction. However, it remains unclear how to relate the logical notions to "similarity" or "kind"; Quine therefore tries to relate at least the latter two notions to each other.
Relation between similarity and kind
Assuming finitely many "kinds" only, the notion of "similarity" can be defined by that of "kind": an object "A" is more similar to "B" than to "C" if "A" and "B" belong jointly to more kinds than "A" and "C" do.
Vice versa, it remains again unclear how to define "kind" by "similarity". Defining e.g. the kind of red things as the set of all things that are more similar to a fixed "paradigmatical" red object than this is to another fixed "foil" non-red object (cf. left picture) isn't satisfactory, since the degree of overall similarity, including e.g. shape, weight, will afford little evidence of degree of redness. (In the picture, the yellow paprika might be considered more similar to the red one than the orange.)
An alternative approach inspired by Carnap defines a natural kind to be a set whose members are more similar to each other than each non-member is to at least one member.
However, Goodman argued that this definition would make the set of all red round things, red wooden things, and round wooden things (cf. right picture) meet the proposed definition of a natural kind, while "surely it is not what anyone means by a kind".
While neither of the notions of similarity and kind can be defined by the other, they at least vary together: if "A" is reassessed to be more similar to "C" than to "B" rather than the other way around, the assignment of "A", "B", "C" to kinds will be permuted correspondingly; and conversely.
Basic importance of similarity and kind
In language, every general term owes its generality to some resemblance of the things referred to. Learning to use a word depends on a double resemblance, viz. between the present and past circumstances in which the word was used, and between the present and past phonetic utterances of the word.
Every reasonable expectation depends on resemblance of circumstances, together with our tendency to expect similar causes to have similar effects. This includes any scientific experiment, since it can be reproduced only under similar, but not under completely identical, circumstances. Heraclitus' famous saying "No man ever steps in the same river twice" already highlighted the distinction between similar and identical circumstances.
Genesis of similarity and kind
In a behavioral sense, humans and other animals have an innate standard of similarity. It is part of our animal birthright, and characteristically animal in its lack of intellectual status, e.g. its alienness to mathematics and logic, cf. bird example.
Habit formation.
Induction itself is essentially animal expectation or habit formation. Ostensive learning is a case of induction, and a curiously comfortable one, since each man's spacing of qualities and kind is enough like his neighbor's. In contrast, the "brute irrationality of our sense of similarity" offers little reason to expect it to be somehow in tune with inanimate nature, which we never made. Why inductively obtained theories about it should be trusted is the perennial philosophical problem of induction. Quine, following Watanabe, suggests Darwin's theory as an explanation: if people's innate spacing of qualities is a gene-linked trait, then the spacing that has made for the most successful inductions will have tended to predominate through natural selection. However, this cannot account for the human ability to dynamically refine one's spacing of qualities in the course of getting acquainted with a new area.
Similar predicates used in philosophical analysis.
Quus.
In his book "Wittgenstein on Rules and Private Language", Saul Kripke proposed a related argument that leads to skepticism about meaning rather than skepticism about induction, as part of his personal interpretation (nicknamed "Kripkenstein" by some) of the private language argument. He proposed a new form of addition, which he called "quus", which is identical with "+" in all cases except those in which either of the numbers added are equal to or greater than 57; in which case the answer would be 5, i.e.:
formula_0
He then asks how, given certain obvious circumstances, anyone could know that previously when I thought I had meant "+", I had not actually meant "quus". Kripke then argues for an interpretation of Wittgenstein as holding that the meanings of words are not individually contained mental entities.
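The definition is simple enough to state as a few lines of Python; this is only an illustrative sketch (the function name `quus` is ours, not Kripke's):

```python
def quus(x: int, y: int) -> int:
    """Kripke's 'quus': agrees with ordinary addition unless either
    argument is 57 or more, in which case the result is simply 5."""
    if x < 57 and y < 57:
        return x + y
    return 5

assert quus(2, 3) == 2 + 3   # indistinguishable from '+' on small arguments
assert quus(57, 1) == 5      # diverges once an argument reaches the threshold
```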
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "x\\text{ quus }y= \\begin{cases} x+y & \\text{for }x,y <57 \\\\[12pt] 5 & \\text{for } x\\ge 57 \\text{ or } y\\ge57 \\end{cases} "
}
] |
https://en.wikipedia.org/wiki?curid=577248
|
57726601
|
Comparison matrix
|
In linear algebra, let "A" = ("aij") be an "n" × "n" complex matrix. The comparison matrix "M"("A") = ("αij") of the complex matrix "A" is defined as
formula_0
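As an illustration, the definition translates directly into a few lines of NumPy (the helper name `comparison_matrix` is ours):

```python
import numpy as np

def comparison_matrix(A: np.ndarray) -> np.ndarray:
    """Return M(A): |a_ij| on the diagonal and -|a_ij| off the diagonal."""
    M = -np.abs(A)                              # off-diagonal entries: -|a_ij|
    np.fill_diagonal(M, np.abs(np.diag(A)))     # diagonal entries: |a_ii|
    return M

A = np.array([[1 + 2j, -3],
              [4j,      5]])
print(comparison_matrix(A))
# approximately [[ 2.236 -3.   ]
#                [-4.     5.   ]]
```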
|
[
{
"math_id": 0,
"text": "\\alpha_{ij} = \\begin{cases}\n-|a_{ij}| &\\text{if } i \\neq j, \\\\\n|a_{ij}| &\\text{if } i=j. \\end{cases}"
}
] |
https://en.wikipedia.org/wiki?curid=57726601
|
577301
|
Magnitude (mathematics)
|
Property determining comparison and ordering
In mathematics, the magnitude or size of a mathematical object is a property which determines whether the object is larger or smaller than other objects of the same kind. More formally, an object's magnitude is the displayed result of an ordering (or ranking) of the class of objects to which it belongs. Magnitude as a concept dates to Ancient Greece and has been applied as a measure of distance from one object to another. For numbers, the absolute value of a number is commonly applied as the measure of units between a number and zero.
In vector spaces, the Euclidean norm is a measure of magnitude used to define a distance between two points in space. In physics, magnitude can be defined as quantity or distance. An order of magnitude is typically defined as a unit of distance between one number and another's numerical places on the decimal scale.
History.
Ancient Greeks distinguished between several types of magnitude, including:
They proved that the first two could not be the same, or even isomorphic systems of magnitude. They did not consider negative magnitudes to be meaningful, and "magnitude" is still primarily used in contexts in which zero is either the smallest size or less than all possible sizes.
Numbers.
The magnitude of any number formula_0 is usually called its "absolute value" or "modulus", denoted by formula_1.
Real numbers.
The absolute value of a real number "r" is defined by:
formula_2
formula_3
Absolute value may also be thought of as the number's distance from zero on the real number line. For example, the absolute value of both 70 and −70 is 70.
Complex numbers.
A complex number "z" may be viewed as the position of a point "P" in a 2-dimensional space, called the complex plane. The absolute value (or "modulus") of "z" may be thought of as the distance of "P" from the origin of that space. The formula for the absolute value of "z" = "a" + "bi" is similar to that for the Euclidean norm of a vector in a 2-dimensional Euclidean space:
formula_4
where the real numbers "a" and "b" are the real part and the imaginary part of "z", respectively. For instance, the modulus of −3 + 4"i" is formula_5. Alternatively, the magnitude of a complex number "z" may be defined as the square root of the product of itself and its complex conjugate, formula_6, where for any complex number formula_7, its complex conjugate is formula_8.
formula_9
(where formula_10).
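For example, both routes to the modulus can be checked in a couple of lines of Python:

```python
import math

z = complex(-3, 4)

modulus = math.hypot(z.real, z.imag)                          # sqrt(a^2 + b^2)
modulus_via_conjugate = math.sqrt((z * z.conjugate()).real)   # sqrt(z * conj(z))

print(modulus, modulus_via_conjugate, abs(z))                 # all three give 5.0
```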
Vector spaces.
Euclidean vector space.
A Euclidean vector represents the position of a point "P" in a Euclidean space. Geometrically, it can be described as an arrow from the origin of the space (vector tail) to that point (vector tip). Mathematically, a vector x in an "n"-dimensional Euclidean space can be defined as an ordered list of "n" real numbers (the Cartesian coordinates of "P"): "x" = ["x"1, "x"2, ..., "x""n"]. Its magnitude or length, denoted by formula_11, is most commonly defined as its Euclidean norm (or Euclidean length):
formula_12
For instance, in a 3-dimensional space, the magnitude of [3, 4, 12] is 13 because formula_13
This is equivalent to the square root of the dot product of the vector with itself:
formula_14
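A short NumPy check of these two equivalent expressions, using the [3, 4, 12] example above:

```python
import numpy as np

x = np.array([3.0, 4.0, 12.0])

norm_direct = np.sqrt(np.sum(x ** 2))   # sqrt(3^2 + 4^2 + 12^2)
norm_via_dot = np.sqrt(np.dot(x, x))    # sqrt(x . x)

print(norm_direct, norm_via_dot, np.linalg.norm(x))   # all give 13.0
```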
The Euclidean norm of a vector is just a special case of Euclidean distance: the distance between its tail and its tip. Two similar notations are used for the Euclidean norm of a vector "x":
A disadvantage of the second notation is that it can also be used to denote the absolute value of scalars and the determinants of matrices, which introduces an element of ambiguity.
Normed vector spaces.
By definition, all Euclidean vectors have a magnitude (see above). However, a vector in an abstract vector space does not possess a magnitude.
A vector space endowed with a norm, such as the Euclidean space, is called a normed vector space. The norm of a vector "v" in a normed vector space can be considered to be the magnitude of "v".
Pseudo-Euclidean space.
In a pseudo-Euclidean space, the magnitude of a vector is the value of the quadratic form for that vector.
Logarithmic magnitudes.
When comparing magnitudes, a logarithmic scale is often used. Examples include the loudness of a sound (measured in decibels), the brightness of a star, and the Richter scale of earthquake intensity. Logarithmic magnitudes can be negative. In the natural sciences, a logarithmic magnitude is typically referred to as a "level".
Order of magnitude.
Orders of magnitude denote differences in numeric quantities, usually measurements, by a factor of 10—that is, a difference of one digit in the location of the decimal point.
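A common convention computes the order of magnitude of a positive number as the integer part of its base-10 logarithm; the sketch below uses that convention (others exist, for example rounding to the nearest power of ten):

```python
import math

def order_of_magnitude(x: float) -> int:
    """Order of magnitude of a positive number: the floor of log10(x)."""
    return math.floor(math.log10(x))

print(order_of_magnitude(3.2e4))   # 4
print(order_of_magnitude(0.005))   # -3

# Difference in orders of magnitude between two quantities
a, b = 2_000_000.0, 300.0
print(round(math.log10(a / b)))    # the quantities differ by about 4 orders of magnitude
```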
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "|x|"
},
{
"math_id": 2,
"text": " \\left| r \\right| = r, \\text{ if } r \\text{ ≥ } 0 "
},
{
"math_id": 3,
"text": " \\left| r \\right| = -r, \\text{ if } r < 0 ."
},
{
"math_id": 4,
"text": "\\left| z \\right| = \\sqrt{a^2 + b^2}"
},
{
"math_id": 5,
"text": "\\sqrt{(-3)^2+4^2} = 5"
},
{
"math_id": 6,
"text": "\\bar{z}"
},
{
"math_id": 7,
"text": " z = a + bi"
},
{
"math_id": 8,
"text": " \\bar{z} = a -bi"
},
{
"math_id": 9,
"text": " \\left| z \\right| = \\sqrt{z\\bar{z} } = \\sqrt{(a+bi)(a-bi)} = \\sqrt{a^2 -abi + abi - b^2i^2} = \\sqrt{a^2 + b^2 }"
},
{
"math_id": 10,
"text": "i^2 = -1"
},
{
"math_id": 11,
"text": "\\|x\\|"
},
{
"math_id": 12,
"text": "\\|\\mathbf{x}\\| = \\sqrt{x_1^2 + x_2^2 + \\cdots + x_n^2}."
},
{
"math_id": 13,
"text": "\\sqrt{3^2 + 4^2 + 12^2} = \\sqrt{169} = 13."
},
{
"math_id": 14,
"text": "\\|\\mathbf{x}\\| = \\sqrt{\\mathbf{x} \\cdot \\mathbf{x}}."
},
{
"math_id": 15,
"text": "\\left \\| \\mathbf{x} \\right \\|,"
},
{
"math_id": 16,
"text": "\\left | \\mathbf{x} \\right |."
}
] |
https://en.wikipedia.org/wiki?curid=577301
|
57736318
|
Force-sensing capacitor
|
Material whose capacitance changes when a force is applied
A force-sensing capacitor is a material whose capacitance changes when a force, pressure or mechanical stress is applied. They are also known as "force-sensitive capacitors". They can provide improved sensitivity and repeatability compared to force-sensitive resistors but traditionally required more complicated electronics.
Operation principle.
Typical force-sensitive capacitors are examples of parallel plate capacitors. For small deflections, there is a linear relationship between applied force and change in capacitance, which can be shown as follows:
The capacitance, formula_0, equals formula_1, where formula_2 is the permittivity of the dielectric, formula_3 is the area of the sensor and formula_4 is the distance between the parallel plates. If the material is linearly elastic (so it follows Hooke's law), then the displacement due to an applied force formula_5 is formula_6, where formula_7 is the spring constant. Combining these equations gives the capacitance after an applied force as:
formula_8, where formula_9 is the separation between parallel plates when no force is applied.
This can be rearranged to:
formula_10
Assuming that formula_11, which is true for small deformations where formula_12, we can simplify this to:
C formula_13
It follows that:
C formula_14
C formula_15 where formula_16, which is constant for a given sensor.
We can express the change in capacitance formula_17 as:
formula_18
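A small numerical sketch of this model is given below; the permittivity, area, spring constant and plate separation are illustrative assumptions only, not the parameters of any particular sensor:

```python
EPSILON_0 = 8.854e-12              # vacuum permittivity, F/m

# Illustrative (assumed) sensor parameters
eps = 3.0 * EPSILON_0              # permittivity of the dielectric
A = (8e-3) ** 2                    # plate area, m^2 (an 8 mm square pad)
d_nominal = 0.2e-3                 # plate separation with no load, m
k = 2.0e5                          # effective spring constant, N/m

C_nominal = eps * A / d_nominal        # capacitance at zero force
B = eps * A / (k * d_nominal ** 2)     # sensitivity (F per N), as derived above

F = 2.0                                # applied force, N (chosen so that F/k << d_nominal)
delta_C_linear = B * F                 # small-deflection approximation
delta_C_exact = eps * A / (d_nominal - F / k) - C_nominal   # exact parallel-plate value

print(f"C_nominal      = {C_nominal * 1e12:.2f} pF")
print(f"delta_C linear = {delta_C_linear * 1e12:.3f} pF")
print(f"delta_C exact  = {delta_C_exact * 1e12:.3f} pF")
```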
Production.
SingleTact makes force-sensitive capacitors using moulded silicone between two layers of polyimide to construct a 0.35 mm thick sensor, with force ranges from 1 N to 450 N. The 8 mm SingleTact has a nominal capacitance of 75 pF, which increases by 2.2 pF when the rated force is applied. It can be mounted on many surfaces for direct force measurement.
Uses.
Force-sensing capacitors can be used to create low-profile force-sensitive buttons. They have been used in medical imaging to map pressures in the esophagus and to image breast and prostate cancer.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": " \\varepsilon A /d "
},
{
"math_id": 2,
"text": " \\varepsilon "
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "d"
},
{
"math_id": 5,
"text": "F"
},
{
"math_id": 6,
"text": "x=F/k"
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": " C =\\varepsilon A /(d_{nominal}-F/k) "
},
{
"math_id": 9,
"text": " d_{nominal} "
},
{
"math_id": 10,
"text": " C = (\\varepsilon Ad_{nominal} + \\varepsilon AF/k)/(d_{nominal}^2-F^2/k^2) "
},
{
"math_id": 11,
"text": " d_{nominal}^2 >> F^2/k^2 "
},
{
"math_id": 12,
"text": " d_{nominal} >> x "
},
{
"math_id": 13,
"text": " \\simeq(\\varepsilon Ad_{nominal} + \\varepsilon AF/k)/(d_{nominal}^2) "
},
{
"math_id": 14,
"text": " \\simeq C_{nominal} + \\varepsilon AF/kd_{nominal}^2 "
},
{
"math_id": 15,
"text": " \\simeq C_{nominal} + BF "
},
{
"math_id": 16,
"text": " B = \\epsilon A/kd^2 "
},
{
"math_id": 17,
"text": " \\Delta C "
},
{
"math_id": 18,
"text": " \\Delta C = BF "
}
] |
https://en.wikipedia.org/wiki?curid=57736318
|
577366
|
Banach–Alaoglu theorem
|
Theorem in functional analysis
In functional analysis and related branches of mathematics, the Banach–Alaoglu theorem (also known as Alaoglu's theorem) states that the closed unit ball of the dual space of a normed vector space is compact in the weak* topology.
A common proof identifies the unit ball with the weak-* topology as a closed subset of a product of compact sets with the product topology.
As a consequence of Tychonoff's theorem, this product, and hence the unit ball within, is compact.
This theorem has applications in physics when one describes the set of states of an algebra of observables, namely that any state can be written as a convex linear combination of so-called pure states.
History.
According to Lawrence Narici and Edward Beckenstein, the Alaoglu theorem is a “very important result—maybe the most important fact about the weak-* topology—[that] echoes throughout functional analysis.”
In 1912, Helly proved that the unit ball of the continuous dual space of formula_0 is countably weak-* compact.
In 1932, Stefan Banach proved that the closed unit ball in the continuous dual space of any separable normed space is sequentially weak-* compact (Banach only considered sequential compactness).
The proof for the general case was published in 1940 by the mathematician Leonidas Alaoglu.
According to Pietsch [2007], there are at least twelve mathematicians who can lay claim to this theorem or an important predecessor to it.
The Bourbaki–Alaoglu theorem is a generalization of the original theorem by Bourbaki to dual topologies on locally convex spaces.
This theorem is also called the Banach–Alaoglu theorem or the weak-* compactness theorem and it is commonly called simply the Alaoglu theorem.
Statement.
If formula_1 is a vector space over the field formula_2 then formula_3 will denote the algebraic dual space of formula_1 and these two spaces are henceforth associated with the bilinear evaluation map formula_4 defined by
formula_5
where the triple formula_6 forms a dual system called the canonical dual system.
If formula_1 is a topological vector space (TVS) then its continuous dual space will be denoted by formula_7 where formula_8 always holds.
Denote the weak-* topology on formula_3 by formula_9 and denote the weak-* topology on formula_10 by formula_11
The weak-* topology is also called the topology of pointwise convergence because given a map formula_12 and a net of maps formula_13 the net formula_14 converges to formula_12 in this topology if and only if for every point formula_15 in the domain, the net of values formula_16 converges to the value formula_17
<templatestyles src="Math_theorem/styles.css" />
Alaoglu theorem —
For any topological vector space (TVS) formula_1 (not necessarily Hausdorff or locally convex) with continuous dual space formula_7 the polar
formula_18
of any neighborhood formula_19 of origin in formula_1 is compact in the weak-* topology formula_20 on formula_21 Moreover, formula_22 is equal to the polar of formula_19 with respect to the canonical system formula_23 and it is also a compact subset of formula_24
Proof involving duality theory.
<templatestyles src="Math_proof/styles.css" />Proof
Denote the underlying field of formula_1 by formula_25 which is either the real numbers formula_26 or the complex numbers formula_27
This proof will use some of the basic properties that are listed in the articles: polar set, dual system, and continuous linear operator.
To start the proof, some definitions and readily verified results are recalled. When formula_3 is endowed with the weak-* topology formula_28 then this Hausdorff locally convex topological vector space is denoted by formula_24
The space formula_29 is always a complete TVS; however, formula_30 may fail to be a complete space, which is the reason why this proof involves the space formula_24
Specifically, this proof will use the fact that a subset of a complete Hausdorff space is compact if (and only if) it is closed and totally bounded.
Importantly, the subspace topology that formula_10 inherits from formula_29 is equal to formula_11 This can be readily verified by showing that given any formula_31 a net in formula_10 converges to formula_12 in one of these topologies if and only if it also converges to formula_12 in the other topology (the conclusion follows because two topologies are equal if and only if they have the exact same convergent nets).
The triple formula_32 is a dual pairing although unlike formula_33 it is in general not guaranteed to be a dual system.
Throughout, unless stated otherwise, all polar sets will be taken with respect to the canonical pairing formula_34
Let formula_19 be a neighborhood of the origin in formula_1 and let:
A well-known fact about polar sets is that formula_40
If formula_1 is a normed vector space, then the polar of a neighborhood is closed and norm-bounded in the dual space.
In particular, if formula_19 is the open (or closed) unit ball in formula_1 then the polar of formula_19 is the closed unit ball in the continuous dual space formula_10 of formula_1 (with the usual dual norm).
Consequently, this theorem can be specialized to:
<templatestyles src="Math_theorem/styles.css" />
Banach–Alaoglu theorem — If formula_1 is a normed space then the closed unit ball in the continuous dual space formula_10 (endowed with its usual operator norm) is compact with respect to the weak-* topology.
When the continuous dual space formula_10 of formula_1 is an infinite dimensional normed space then it is impossible for the closed unit ball in formula_10 to be a compact subset when formula_10 has its usual norm topology.
This is because the unit ball in the norm topology is compact if and only if the space is finite-dimensional (cf. F. Riesz theorem).
This theorem is one example of the utility of having different topologies on the same vector space.
It should be cautioned that despite appearances, the Banach–Alaoglu theorem does not imply that the weak-* topology is locally compact.
This is because the closed unit ball is only a neighborhood of the origin in the strong topology, but is usually not a neighborhood of the origin in the weak-* topology, as it has empty interior in the weak* topology, unless the space is finite-dimensional.
In fact, it is a result of Weil that all locally compact Hausdorff topological vector spaces must be finite-dimensional.
Elementary proof.
The following elementary proof does not utilize duality theory and requires only basic concepts from set theory, topology, and functional analysis.
What is needed from topology is a working knowledge of net convergence in topological spaces and familiarity with the fact that a linear functional is continuous if and only if it is bounded on a neighborhood of the origin (see the articles on continuous linear functionals and sublinear functionals for details).
Also required is a proper understanding of the technical details of how the space formula_69 of all functions of the form formula_70 is identified as the Cartesian product formula_71 and the relationship between pointwise convergence, the product topology, and subspace topologies they induce on subsets such as the algebraic dual space formula_3 and products of subspaces such as formula_72
An explanation of these details is now given for readers who are interested.
The essence of the Banach–Alaoglu theorem can be found in the next proposition, from which the Banach–Alaoglu theorem follows.
Unlike the Banach–Alaoglu theorem, this proposition does not require the vector space formula_1 to be endowed with any topology.
<templatestyles src="Math_theorem/styles.css" />
Proposition —
Let formula_19 be a subset of a vector space formula_1 over the field formula_2 (where formula_85) and for every real number formula_73 endow the closed ball formula_86 with its usual topology (formula_1 need not be endowed with any topology, but formula_2 has its usual Euclidean topology).
Define
formula_87
If for every formula_84 formula_88 is a real number such that formula_89 then formula_41 is a closed and compact subspace of the product space formula_75 (where because this product topology is identical to the topology of pointwise convergence, which is also called the weak-* topology in functional analysis, this means that formula_41 is compact in the weak-* topology or "weak-* compact" for short).
Before proving the proposition above, it is first shown how the Banach–Alaoglu theorem follows from it (unlike the proposition, Banach–Alaoglu assumes that formula_1 is a topological vector space (TVS) and that formula_19 is a neighborhood of the origin).
<templatestyles src="Math_proof/styles.css" />Proof that Banach–Alaoglu follows from the proposition above
Assume that formula_1 is a topological vector space with continuous dual space formula_10 and that formula_19 is a neighborhood of the origin.
Because formula_19 is a neighborhood of the origin in formula_64 it is also an absorbing subset of formula_64 so for every formula_84 there exists a real number formula_88 such that formula_90
Thus the hypotheses of the above proposition are satisfied, and so the set formula_41 is therefore compact in the weak-* topology.
The proof of the Banach–Alaoglu theorem will be complete once it is shown that formula_91
where recall that formula_22 was defined as
formula_92
Proof that formula_93
Because formula_94 the conclusion is equivalent to formula_95
If formula_57 then formula_58 which states exactly that the linear functional formula_12 is bounded on the neighborhood formula_96 thus formula_12 is a continuous linear functional (that is, formula_59), as desired.
formula_68
<templatestyles src="Math_proof/styles.css" />Proof of Proposition
The product space formula_97 is compact by Tychonoff's theorem (since each closed ball formula_98 is a Hausdorff compact space). Because a closed subset of a compact space is compact, the proof of the proposition will be complete once it is shown that
formula_99
is a closed subset of formula_72
The following statements guarantee this conclusion:
Proof of (1):
For any formula_81 let formula_102 denote the projection to the formula_74th coordinate (as defined above).
To prove that formula_103 it is sufficient (and necessary) to show that formula_104 for every formula_76
So fix formula_105 and let formula_106
Because formula_107 it remains to show that formula_108
Recall that formula_88 was defined in the proposition's statement as being any positive real number that satisfies formula_109 (so for example, formula_110 would be a valid choice for each formula_111), which implies formula_112
Because formula_12 is a positive homogeneous function that satisfies formula_58
formula_113
Thus formula_114 which shows that formula_115 as desired.
Proof of (2):
The algebraic dual space formula_3 is always a closed subset of formula_80 (this is proved in the lemma below for readers who are not familiar with this result).
The set
formula_116
is closed in the product topology on formula_117 since it is a product of closed subsets of formula_118
Thus formula_119 is an intersection of two closed subsets of formula_120 which proves (2).
formula_68
The conclusion that the set formula_122 is closed can also be reached by applying the following more general result, this time proved using nets, to the special case formula_123 and formula_124
Observation: If formula_77 is any set and if formula_125 is a closed subset of a topological space formula_126 then formula_127 is a closed subset of formula_128 in the topology of pointwise convergence.
Proof of observation: Let formula_129 and suppose that formula_82 is a net in formula_130 that converges pointwise to formula_131 It remains to show that formula_132 which by definition means formula_133 For any formula_134 because formula_135 in formula_136 and every value formula_137 belongs to the closed (in formula_136) subset formula_138 so too must this net's limit belong to this closed set; thus formula_139 which completes the proof. formula_68
<templatestyles src="Math_theorem/styles.css" />
Lemma (formula_3 is closed in formula_69) —
The algebraic dual space formula_3 of any vector space formula_1 over a field formula_2 (where formula_2 is formula_26 or formula_140) is a closed subset of formula_80 in the topology of pointwise convergence. (The vector space formula_1 need not be endowed with any topology).
The lemma above actually also follows from its corollary below since formula_121 is a Hausdorff complete uniform space and any subset of such a space (in particular formula_3) is closed if and only if it is complete.
<templatestyles src="Math_theorem/styles.css" />
Corollary to lemma (formula_3 is weak-* complete) —
When the algebraic dual space formula_3 of a vector space formula_1 is equipped with the topology formula_9 of pointwise convergence (also known as the weak-* topology) then the resulting topological space formula_29 is a complete Hausdorff locally convex topological vector space.
The above elementary proof of the Banach–Alaoglu theorem actually shows that if formula_77 is any subset that satisfies formula_141 (such as any absorbing subset of formula_1), then formula_142 is a weak-* compact subset of formula_66
As a side note, with the help of the above elementary proof, it may be shown (see this footnote)
that there exist formula_1-indexed non-negative real numbers formula_145 such that
formula_146
where these real numbers formula_147 can also be chosen to be "minimal" in the following sense:
using formula_148 (so formula_149 as in the proof) and defining the notation formula_150 for any formula_151 if
formula_152
then formula_143 and for every formula_84 formula_153
which shows that these numbers formula_147 are unique; indeed, this infimum formula can be used to define them.
In fact, if formula_144 denotes the set of all such products of closed balls containing the polar set formula_154
formula_155
then
formula_156
where formula_157 denotes the intersection of all sets belonging to formula_158
This implies (among other things)
that formula_159 is the unique least element of formula_144 with respect to formula_160 this may be used as an alternative definition of this (necessarily convex and balanced) set.
The function formula_161 is a seminorm and it is unchanged if formula_19 is replaced by the convex balanced hull of formula_19 (because formula_162).
Similarly, because formula_163 formula_147 is also unchanged if formula_19 is replaced by its closure in formula_83
Sequential Banach–Alaoglu theorem.
A special case of the Banach–Alaoglu theorem is the sequential version of the theorem, which asserts that the closed unit ball of the dual space of a separable normed vector space is sequentially compact in the weak-* topology.
In fact, the weak* topology on the closed unit ball of the dual of a separable space is metrizable, and thus compactness and sequential compactness are equivalent.
Specifically, let formula_1 be a separable normed space and formula_164 the closed unit ball in formula_21 Since formula_1 is separable, let formula_165 be a countable dense subset.
Then the following defines a metric, where for any formula_166
formula_167
in which formula_168 denotes the duality pairing of formula_10 with formula_83
Sequential compactness of formula_164 in this metric can be shown by a diagonalization argument similar to the one employed in the proof of the Arzelà–Ascoli theorem.
Due to the constructive nature of its proof (as opposed to the general case, which is based on the axiom of choice), the sequential Banach–Alaoglu theorem is often used in the field of partial differential equations to construct solutions to PDE or variational problems.
For instance, if one wants to minimize a functional formula_169 on the dual of a separable normed vector space formula_64 one common strategy is to first construct a minimizing sequence formula_170 which approaches the infimum of formula_171 use the sequential Banach–Alaoglu theorem to extract a subsequence that converges in the weak* topology to a limit formula_172 and then establish that formula_15 is a minimizer of formula_173
The last step often requires formula_79 to obey a (sequential) lower semi-continuity property in the weak* topology.
When formula_10 is the space of finite Radon measures on the real line (so that formula_174 is the space of continuous functions vanishing at infinity, by the Riesz representation theorem), the sequential Banach–Alaoglu theorem is equivalent to the Helly selection theorem.
<templatestyles src="Math_proof/styles.css" />Proof
For every formula_84 let
formula_175
and let
formula_176
be endowed with the product topology.
Because every formula_177 is a compact subset of the complex plane, Tychonoff's theorem guarantees that their product formula_178 is compact.
The closed unit ball in formula_7 denoted by formula_179 can be identified as a subset of formula_178 in a natural way:
formula_180
This map is injective and it is continuous when formula_181 has the weak-* topology.
This map's inverse, defined on its image, is also continuous.
It will now be shown that the image of the above map is closed, which will complete the proof of the theorem.
Given a point formula_182 and a net formula_183 in the image of formula_79 indexed by formula_78 such that
formula_184
the functional formula_185 defined by
formula_186
lies in formula_181 and formula_187
formula_68
Consequences.
Consequences for normed spaces.
Assume that formula_1 is a normed space and endow its continuous dual space formula_10 with the usual dual norm.
Relation to the axiom of choice and other statements.
The Banach–Alaoglu theorem may be proven by using Tychonoff's theorem, which under the Zermelo–Fraenkel set theory (ZF) axiomatic framework is equivalent to the axiom of choice.
Most mainstream functional analysis relies on ZF + the axiom of choice, which is often denoted by ZFC.
However, the theorem does not rely upon the axiom of choice in the separable case (see above): in this case there actually exists a constructive proof.
In the general case of an arbitrary normed space, the ultrafilter lemma, which is strictly weaker than the axiom of choice and equivalent to Tychonoff's theorem for compact Hausdorff spaces, suffices for the proof of the Banach–Alaoglu theorem, and is in fact equivalent to it.
The Banach–Alaoglu theorem is equivalent to the ultrafilter lemma, which implies the Hahn–Banach theorem for real vector spaces (HB) but is not equivalent to it (said differently, Banach–Alaoglu is also strictly stronger than HB).
However, the Hahn–Banach theorem is equivalent to the following weak version of the Banach–Alaoglu theorem for normed spaces, in which the conclusion of compactness (in the weak-* topology of the closed unit ball of the dual space) is replaced with the conclusion of quasicompactness (also sometimes called convex compactness):
<templatestyles src="Math_theorem/styles.css" />
<templatestyles src="Template:Visible anchor/styles.css" />Weak version of Alaoglu theorem —
Let formula_1 be a normed space and let formula_164 denote the closed unit ball of its continuous dual space formula_21 Then formula_164 has the following property, called (weak-*) quasicompactness or convex compactness: whenever formula_188 is a cover of formula_164 by convex weak-* closed subsets of formula_10 such that formula_189 has the finite intersection property, then formula_190 is not empty.
Compactness implies convex compactness because a topological space is compact if and only if every family of closed subsets having the finite intersection property (FIP) has non-empty intersection.
The definition of convex compactness is similar to this characterization of compact spaces in terms of the FIP, except that it only involves those closed subsets that are also convex (rather than all closed subsets).
Notes.
<templatestyles src="Reflist/styles.css" />
Proofs
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "C([a, b])"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\mathbb{K}"
},
{
"math_id": 3,
"text": "X^{\\#}"
},
{
"math_id": 4,
"text": "\\left\\langle \\cdot, \\cdot \\right\\rangle : X \\times X^{\\#} \\to \\mathbb{K}"
},
{
"math_id": 5,
"text": "\\left\\langle x, f \\right\\rangle ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ f(x)"
},
{
"math_id": 6,
"text": "\\left\\langle X, X^{\\#},\\left\\langle \\cdot, \\cdot \\right\\rangle \\right\\rangle"
},
{
"math_id": 7,
"text": "X^{\\prime},"
},
{
"math_id": 8,
"text": "X^{\\prime} \\subseteq X^{\\#}"
},
{
"math_id": 9,
"text": "\\sigma\\left(X^{\\#}, X\\right)"
},
{
"math_id": 10,
"text": "X^{\\prime}"
},
{
"math_id": 11,
"text": "\\sigma\\left(X^{\\prime}, X\\right)."
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "f_{\\bull} = \\left(f_i\\right)_{i \\in I},"
},
{
"math_id": 14,
"text": "f_{\\bull}"
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "\\left(f_i(x)\\right)_{i \\in I}"
},
{
"math_id": 17,
"text": "f(x)."
},
{
"math_id": 18,
"text": "U^{\\circ} = \\left\\{f \\in X^{\\prime} ~:~ \\sup_{u \\in U} |f(u)| \\leq 1\\right\\}"
},
{
"math_id": 19,
"text": "U"
},
{
"math_id": 20,
"text": "\\sigma\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 21,
"text": "X^{\\prime}."
},
{
"math_id": 22,
"text": "U^{\\circ}"
},
{
"math_id": 23,
"text": "\\left\\langle X, X^{\\#} \\right\\rangle"
},
{
"math_id": 24,
"text": "\\left(X^{\\#}, \\sigma\\left(X^{\\#}, X\\right)\\right)."
},
{
"math_id": 25,
"text": "\\mathbb{K},"
},
{
"math_id": 26,
"text": "\\R"
},
{
"math_id": 27,
"text": "\\Complex."
},
{
"math_id": 28,
"text": "\\sigma\\left(X^{\\#}, X\\right),"
},
{
"math_id": 29,
"text": "\\left(X^{\\#}, \\sigma\\left(X^{\\#}, X\\right)\\right)"
},
{
"math_id": 30,
"text": "\\left(X^{\\prime}, \\sigma\\left(X^{\\prime}, X\\right)\\right)"
},
{
"math_id": 31,
"text": "f \\in X^{\\prime},"
},
{
"math_id": 32,
"text": "\\left\\langle X, X^{\\prime} \\right\\rangle"
},
{
"math_id": 33,
"text": "\\left\\langle X, X^{\\#} \\right\\rangle,"
},
{
"math_id": 34,
"text": "\\left\\langle X, X^{\\prime} \\right\\rangle."
},
{
"math_id": 35,
"text": "U^{\\circ} = \\left\\{f \\in X^{\\prime} ~:~ \\sup_{u \\in U} |f(u)| \\leq 1\\right\\}"
},
{
"math_id": 36,
"text": "U^{\\circ\\circ} = \\left\\{x \\in X ~:~ \\sup_{f \\in U^{\\circ}} |f(x)|\\leq 1 \\right\\}"
},
{
"math_id": 37,
"text": "U^{\\#} = \\left\\{f \\in X^{\\#} ~:~ \\sup_{u \\in U} |f(u)| \\leq 1\\right\\}"
},
{
"math_id": 38,
"text": "\\left\\langle X, X^{\\#} \\right\\rangle."
},
{
"math_id": 39,
"text": "U^{\\circ} = U^{\\#} \\cap X^{\\prime}."
},
{
"math_id": 40,
"text": "U^{\\circ\\circ\\circ} \\subseteq U^{\\circ}."
},
{
"math_id": 41,
"text": "U^{\\#}"
},
{
"math_id": 42,
"text": "X^{\\#}:"
},
{
"math_id": 43,
"text": "f \\in X^{\\#}"
},
{
"math_id": 44,
"text": "f_{\\bull} = \\left(f_i\\right)_{i \\in I}"
},
{
"math_id": 45,
"text": "f \\in U^{\\#},"
},
{
"math_id": 46,
"text": "|f(u)| \\leq 1"
},
{
"math_id": 47,
"text": "u \\in U."
},
{
"math_id": 48,
"text": "f_i(u) \\to f(u)"
},
{
"math_id": 49,
"text": "f_i(u)"
},
{
"math_id": 50,
"text": "\\left\\{ s \\in \\mathbb{K} : |s| \\leq 1 \\right\\},"
},
{
"math_id": 51,
"text": "f(u)"
},
{
"math_id": 52,
"text": "|f(u)| \\leq 1."
},
{
"math_id": 53,
"text": "U^{\\#} = U^{\\circ}"
},
{
"math_id": 54,
"text": "\\left(X^{\\prime}, \\sigma\\left(X^{\\prime}, X\\right)\\right):"
},
{
"math_id": 55,
"text": "U^{\\circ} \\subseteq U^{\\#}"
},
{
"math_id": 56,
"text": "\\,U^{\\#} \\subseteq U^{\\circ},\\,"
},
{
"math_id": 57,
"text": "f \\in U^{\\#}"
},
{
"math_id": 58,
"text": "\\;\\sup_{u \\in U} |f(u)| \\leq 1,\\,"
},
{
"math_id": 59,
"text": "f \\in X^{\\prime}"
},
{
"math_id": 60,
"text": "f \\in U^{\\circ},"
},
{
"math_id": 61,
"text": "U^{\\#} \\cap X^{\\prime} = U^{\\circ} \\cap X^{\\prime} = U^{\\circ}"
},
{
"math_id": 62,
"text": "X^{\\prime}:"
},
{
"math_id": 63,
"text": "U \\subseteq U^{\\circ\\circ}"
},
{
"math_id": 64,
"text": "X,"
},
{
"math_id": 65,
"text": "U^{\\circ\\circ};"
},
{
"math_id": 66,
"text": "X^{\\#}."
},
{
"math_id": 67,
"text": "\\left(X^{\\#}, \\sigma\\left(X^{\\#}, X\\right)\\right),"
},
{
"math_id": 68,
"text": "\\blacksquare"
},
{
"math_id": 69,
"text": "\\mathbb{K}^X"
},
{
"math_id": 70,
"text": "X \\to \\mathbb{K}"
},
{
"math_id": 71,
"text": "\\prod_{x \\in X} \\mathbb{K},"
},
{
"math_id": 72,
"text": "\\prod_{x \\in X} B_{r_x}."
},
{
"math_id": 73,
"text": "r,"
},
{
"math_id": 74,
"text": "z"
},
{
"math_id": 75,
"text": "\\prod_{x \\in X} B_{r_x}"
},
{
"math_id": 76,
"text": "x \\in X."
},
{
"math_id": 77,
"text": "U \\subseteq X"
},
{
"math_id": 78,
"text": "i \\in I"
},
{
"math_id": 79,
"text": "F"
},
{
"math_id": 80,
"text": "\\mathbb{K}^X = \\prod_{x \\in X} \\mathbb{K}"
},
{
"math_id": 81,
"text": "z \\in X,"
},
{
"math_id": 82,
"text": "\\left(f_i\\right)_{i \\in I}"
},
{
"math_id": 83,
"text": "X."
},
{
"math_id": 84,
"text": "x \\in X,"
},
{
"math_id": 85,
"text": "\\mathbb{K} = \\R \\text{ or } \\mathbb{K} = \\Complex"
},
{
"math_id": 86,
"text": "B_r ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{s \\in \\mathbb{K} : |s| \\leq r\\}"
},
{
"math_id": 87,
"text": "U^{\\#} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\Big\\{f \\in X^{\\#} ~:~ \\sup_{u \\in U} |f(u)| \\leq 1\\Big\\}."
},
{
"math_id": 88,
"text": "r_x > 0"
},
{
"math_id": 89,
"text": "x \\in r_x U,"
},
{
"math_id": 90,
"text": "x \\in r_x U."
},
{
"math_id": 91,
"text": "U^{\\#} = U^{\\circ},"
},
{
"math_id": 92,
"text": "U^{\\circ} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\Big\\{f \\in X^{\\prime} ~:~ \\sup_{u \\in U} |f(u)| \\leq 1\\Big\\} ~=~ U^{\\#} \\cap X^{\\prime}."
},
{
"math_id": 93,
"text": "U^{\\circ} = U^{\\#}:"
},
{
"math_id": 94,
"text": "U^{\\circ} = U^{\\#} \\cap X^{\\prime},"
},
{
"math_id": 95,
"text": "U^{\\#} \\subseteq X^{\\prime}."
},
{
"math_id": 96,
"text": "U;"
},
{
"math_id": 97,
"text": "\\prod_{x \\in X} B_{r_x}"
},
{
"math_id": 98,
"text": "B_{r_x} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{s \\in \\mathbb{K} : |s| \\leq r_x\\}"
},
{
"math_id": 99,
"text": "U^{\\#} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\Big\\{f \\in X^{\\#} ~:~ \\sup_{u \\in U} |f(u)| \\leq 1\\Big\\} ~=~ \\left\\{f \\in X^{\\#} ~:~ f(U) \\subseteq B_1\\right\\}"
},
{
"math_id": 100,
"text": "U^{\\#} \\subseteq \\prod_{x \\in X} B_{r_x}."
},
{
"math_id": 101,
"text": "\\prod_{x \\in X} \\mathbb{K} = \\mathbb{K}^X."
},
{
"math_id": 102,
"text": "\\Pr{}_z : \\prod_{x \\in X} \\mathbb{K} \\to \\mathbb{K}"
},
{
"math_id": 103,
"text": "U^{\\#} \\subseteq \\prod_{x \\in X} B_{r_x},"
},
{
"math_id": 104,
"text": "\\Pr{}_x\\left(U^{\\#}\\right) \\subseteq B_{r_x}"
},
{
"math_id": 105,
"text": "x \\in X"
},
{
"math_id": 106,
"text": "f \\in U^{\\#}."
},
{
"math_id": 107,
"text": "\\Pr{}_x(f) \\,=\\, f(x),"
},
{
"math_id": 108,
"text": "f(x) \\in B_{r_x}."
},
{
"math_id": 109,
"text": "x \\in r_x U"
},
{
"math_id": 110,
"text": "r_u := 1"
},
{
"math_id": 111,
"text": "u \\in U"
},
{
"math_id": 112,
"text": "\\,u_x ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\frac{1}{r_x} \\, x \\in U.\\,"
},
{
"math_id": 113,
"text": "\\frac{1}{r_x}|f(x)|\n= \\left|\\frac{1}{r_x} f(x)\\right|\n= \\left|f\\left(\\frac{1}{r_x} x\\right)\\right|\n= \\left|f\\left(u_x\\right)\\right| \n\\leq \\sup_{u \\in U} |f(u)| \n\\leq 1."
},
{
"math_id": 114,
"text": "|f(x)| \\leq r_x,"
},
{
"math_id": 115,
"text": "f(x) \\in B_{r_x},"
},
{
"math_id": 116,
"text": "\\begin{alignat}{9}\nU_{B_1} \n&\\,\\stackrel{\\scriptscriptstyle\\text{def}}{=}\\, \\Big\\{ ~~\\;~~\\;~~\\;~~ f\\ \\in \\mathbb{K}^X ~~\\;~~ : \\sup_{u \\in U} |f(u)| \\leq 1\\Big\\} \\\\\n&= \\big\\{ ~~\\;~~\\;~~\\;~~f \\, \\in \\mathbb{K}^X ~~\\;~~ : f(u) \\in B_1 \\text{ for all } u \\in U\\big\\} \\\\\n&= \\Big\\{\\left(f_x\\right)_{x \\in X} \\in \\prod_{x \\in X} \\mathbb{K} \\,~:~ \\; ~f_u~ \\in B_1 \\text{ for all } u \\in U\\Big\\} \\\\\n&= \\prod_{x \\in X} C_x \\quad \\text{ where } \\quad C_x ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\begin{cases}\nB_1 & \\text{ if } x \\in U \\\\\n\\mathbb{K} & \\text{ if } x \\not\\in U \\\\\n\\end{cases} \\\\\n\\end{alignat}"
},
{
"math_id": 117,
"text": "\\prod_{x \\in X} \\mathbb{K} = \\mathbb{K}^X"
},
{
"math_id": 118,
"text": "\\mathbb{K}."
},
{
"math_id": 119,
"text": "U_{B_1} \\cap X^{\\#} = U^{\\#}"
},
{
"math_id": 120,
"text": "\\mathbb{K}^X,"
},
{
"math_id": 121,
"text": "\\prod_{x \\in X} \\mathbb{K}"
},
{
"math_id": 122,
"text": "U_{B_1} = \\left\\{f \\in \\mathbb{K}^X : f(U) \\subseteq B_1\\right\\}"
},
{
"math_id": 123,
"text": "Y := \\mathbb{K}"
},
{
"math_id": 124,
"text": "B := B_1."
},
{
"math_id": 125,
"text": "B \\subseteq Y"
},
{
"math_id": 126,
"text": "Y,"
},
{
"math_id": 127,
"text": "U_B ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\left\\{f \\in Y^X : f(U) \\subseteq B\\right\\}"
},
{
"math_id": 128,
"text": "Y^X"
},
{
"math_id": 129,
"text": "f \\in Y^X"
},
{
"math_id": 130,
"text": "U_B"
},
{
"math_id": 131,
"text": "f."
},
{
"math_id": 132,
"text": "f \\in U_B,"
},
{
"math_id": 133,
"text": "f(U) \\subseteq B."
},
{
"math_id": 134,
"text": "u \\in U,"
},
{
"math_id": 135,
"text": "\\left(f_i(u)\\right)_{i \\in I} \\to f(u)"
},
{
"math_id": 136,
"text": "Y"
},
{
"math_id": 137,
"text": "f_i(u) \\in f_i(U) \\subseteq B"
},
{
"math_id": 138,
"text": "B,"
},
{
"math_id": 139,
"text": "f(u) \\in B,"
},
{
"math_id": 140,
"text": "\\Complex"
},
{
"math_id": 141,
"text": "X = (0, \\infty) U ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\{r u : r > 0, u \\in U\\}"
},
{
"math_id": 142,
"text": "U^{\\#} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\left\\{f \\in X^{\\#} : f(U) \\subseteq B_1\\right\\}"
},
{
"math_id": 143,
"text": "m_{\\bull} \\in T_P"
},
{
"math_id": 144,
"text": "\\operatorname{Box}_P"
},
{
"math_id": 145,
"text": "m_{\\bull} = \\left(m_x\\right)_{x \\in X}"
},
{
"math_id": 146,
"text": "\\begin{alignat}{4}\nU^{\\circ} \n&= U^{\\#} && \\\\\n&= X^{\\#} && \\cap \\prod_{x \\in X} B_{m_x} \\\\\n&= X^{\\prime} && \\cap \\prod_{x \\in X} B_{m_x} \\\\\n\\end{alignat}"
},
{
"math_id": 147,
"text": "m_{\\bull}"
},
{
"math_id": 148,
"text": "P ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ U^{\\circ}"
},
{
"math_id": 149,
"text": "P = U^{\\#}"
},
{
"math_id": 150,
"text": "\\prod B_{R_\\bull} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\prod_{x \\in X} B_{R_x}"
},
{
"math_id": 151,
"text": "R_{\\bull} = \\left(R_x\\right)_{x \\in X} \\in \\R^X,"
},
{
"math_id": 152,
"text": "T_P ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\left\\{R_{\\bull} \\in \\R^X ~:~ P \\subseteq \\prod B_{R_\\bull}\\right\\}"
},
{
"math_id": 153,
"text": "m_x = \\inf \\left\\{ R_x : R_{\\bull} \\in T_P \\right\\},"
},
{
"math_id": 154,
"text": "P,"
},
{
"math_id": 155,
"text": "\\operatorname{Box}_P ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\left\\{ \\prod B_{R_\\bull} ~:~ R_{\\bull} \\in T_P \\right\\} ~=~ \\left\\{ \\prod B_{R_\\bull} ~:~ P \\subseteq \\prod B_{R_\\bull} \\right\\},"
},
{
"math_id": 156,
"text": "\\prod B_{m_\\bull} = \\cap \\operatorname{Box}_P \\in \\operatorname{Box}_P"
},
{
"math_id": 157,
"text": "\\bigcap \\operatorname{Box}_P"
},
{
"math_id": 158,
"text": "\\operatorname{Box}_P."
},
{
"math_id": 159,
"text": "\\prod B_{m_\\bull} = \\prod_{x \\in X} B_{m_x}"
},
{
"math_id": 160,
"text": "\\,\\subseteq;"
},
{
"math_id": 161,
"text": "m_{\\bull} ~\\stackrel{\\scriptscriptstyle\\text{def}}{=}~ \\left(m_x\\right)_{x \\in X} : X \\to [0, \\infty)"
},
{
"math_id": 162,
"text": "U^{\\#} = [\\operatorname{cobal} U]^{\\#}"
},
{
"math_id": 163,
"text": "U^{\\circ} = \\left[\\operatorname{cl}_X U\\right]^{\\circ},"
},
{
"math_id": 164,
"text": "B"
},
{
"math_id": 165,
"text": "x_{\\bull} = \\left(x_n\\right)_{n=1}^{\\infty}"
},
{
"math_id": 166,
"text": "x, y \\in B"
},
{
"math_id": 167,
"text": "\\rho(x,y) = \\sum_{n=1}^\\infty \\, 2^{-n} \\, \\frac{\\left|\\langle x - y, x_n \\rangle\\right|}{1 + \\left|\\langle x - y, x_n \\rangle\\right|}"
},
{
"math_id": 168,
"text": "\\langle\\cdot, \\cdot\\rangle"
},
{
"math_id": 169,
"text": "F : X^{\\prime} \\to \\R"
},
{
"math_id": 170,
"text": "x_1, x_2, \\ldots \\in X^{\\prime}"
},
{
"math_id": 171,
"text": "F,"
},
{
"math_id": 172,
"text": "x,"
},
{
"math_id": 173,
"text": "F."
},
{
"math_id": 174,
"text": "X = C_0(\\R)"
},
{
"math_id": 175,
"text": "D_x = \\{c \\in \\Complex : |c| \\leq \\|x\\|\\}"
},
{
"math_id": 176,
"text": "D = \\prod_{x \\in X} D_x"
},
{
"math_id": 177,
"text": "D_x"
},
{
"math_id": 178,
"text": "D"
},
{
"math_id": 179,
"text": "B_1^{\\,\\prime},"
},
{
"math_id": 180,
"text": "\\begin{alignat}{4}\nF :\\;&& B_1^{\\,\\prime} &&\\;\\to \\;& D \\\\[0.3ex]\n && f &&\\;\\mapsto\\;& (f(x))_{x \\in X}. \\\\\n\\end{alignat}"
},
{
"math_id": 181,
"text": "B_1^{\\,\\prime}"
},
{
"math_id": 182,
"text": "\\lambda_{\\bull} = \\left(\\lambda_x\\right)_{x \\in X} \\in D"
},
{
"math_id": 183,
"text": "\\left(f_i(x)\\right)_{x \\in X}"
},
{
"math_id": 184,
"text": "\\lim_{i} \\left(f_i(x)\\right)_{x \\in X} \\to \\lambda_{\\bull} \\quad \\text{ in } D,"
},
{
"math_id": 185,
"text": "g : X \\to \\Complex"
},
{
"math_id": 186,
"text": "g(x) = \\lambda_x \\qquad \\text{ for every } x \\in X,"
},
{
"math_id": 187,
"text": "F(g) = \\lambda_{\\bull}."
},
{
"math_id": 188,
"text": "\\mathcal{C}"
},
{
"math_id": 189,
"text": "\\{B \\cap C : C \\in \\mathcal{C}\\}"
},
{
"math_id": 190,
"text": "B \\cap \\bigcap_{C \\in \\mathcal{C}} C"
}
] |
https://en.wikipedia.org/wiki?curid=577366
|
577441
|
Compact operator
|
Type of continuous linear operator
In functional analysis, a branch of mathematics, a compact operator is a linear operator formula_0, where formula_1 are normed vector spaces, with the property that formula_2 maps bounded subsets of formula_3 to relatively compact subsets of formula_4 (subsets with compact closure in formula_4). Such an operator is necessarily a bounded operator, and so continuous. Some authors require that formula_1 be Banach spaces, but the definition can be extended to more general spaces.
Any bounded operator "formula_2" that has finite rank is a compact operator; indeed, the class of compact operators is a natural generalization of the class of finite-rank operators in an infinite-dimensional setting. When "formula_4" is a Hilbert space, it is true that any compact operator is a limit of finite-rank operators, so that the class of compact operators can be defined alternatively as the closure of the set of finite-rank operators in the norm topology. Whether this was true in general for Banach spaces (the approximation property) was an unsolved question for many years; in 1973 Per Enflo gave a counter-example, building on work by Grothendieck and Banach.
The origin of the theory of compact operators is in the theory of integral equations, where integral operators supply concrete examples of such operators. A typical Fredholm integral equation gives rise to a compact operator "K" on function spaces; the compactness property is shown by equicontinuity. The method of approximation by finite-rank operators is basic in the numerical solution of such equations. The abstract idea of Fredholm operator is derived from this connection.
Equivalent formulations.
A linear map formula_0 between two topological vector spaces is said to be compact if there exists a neighborhood "formula_5" of the origin in "formula_3" such that formula_6 is a relatively compact subset of "formula_4".
Let formula_1 be normed spaces and formula_0 a linear operator. Then the following statements are equivalent, and some of them are used as the principal definition by different authors
If in addition "formula_4" is Banach, these statements are also equivalent to:
If a linear operator is compact, then it is continuous.
Important properties.
In the following, formula_11 are Banach spaces, formula_12 is the space of bounded operators formula_13 under the operator norm, and formula_14 denotes the space of compact operators formula_13. formula_15 denotes the identity operator on formula_3, formula_16, and formula_17.
Now suppose that formula_3 is a Banach space and formula_23 is a compact linear operator, and formula_24 is the adjoint or transpose of "T".
Origins in integral equation theory.
A crucial property of compact operators is the Fredholm alternative, which asserts that the existence of solution of linear equations of the form
formula_44
(where "K" is a compact operator, "f" is a given function, and "u" is the unknown function to be solved for) behaves much like as in finite dimensions. The spectral theory of compact operators then follows, and it is due to Frigyes Riesz (1918). It shows that a compact operator "K" on an infinite-dimensional Banach space has spectrum that is either a finite subset of C which includes 0, or the spectrum is a countably infinite subset of C which has 0 as its only limit point. Moreover, in either case the non-zero elements of the spectrum are eigenvalues of "K" with finite multiplicities (so that "K" − λ"I" has a finite-dimensional kernel for all complex λ ≠ 0).
An important example of a compact operator is compact embedding of Sobolev spaces, which, along with the Gårding inequality and the Lax–Milgram theorem, can be used to convert an elliptic boundary value problem into a Fredholm integral equation. Existence of the solution and spectral properties then follow from the theory of compact operators; in particular, an elliptic boundary value problem on a bounded domain has infinitely many isolated eigenvalues. One consequence is that a solid body can vibrate only at isolated frequencies, given by the eigenvalues, and arbitrarily high vibration frequencies always exist.
The compact operators from a Banach space to itself form a two-sided ideal in the algebra of all bounded operators on the space. Indeed, the compact operators on an infinite-dimensional separable Hilbert space form a maximal ideal, so the quotient algebra, known as the Calkin algebra, is simple. More generally, the compact operators form an operator ideal.
Compact operator on Hilbert spaces.
For Hilbert spaces, another equivalent definition of compact operators is given as follows.
An operator formula_2 on an infinite-dimensional Hilbert space formula_45,
formula_46,
is said to be "compact" if it can be written in the form
formula_47,
where formula_48 and formula_49 are orthonormal sets (not necessarily complete), and formula_50 is a sequence of positive numbers with limit zero, called the singular values of the operator, and the series on the right hand side converges in the operator norm. The singular values can accumulate only at zero. If the sequence becomes stationary at zero, that is formula_51 for some formula_52 and every formula_53, then the operator has finite rank, "i.e.", a finite-dimensional range, and can be written as
formula_54.
An important subclass of compact operators is the trace-class or nuclear operators, i.e., those for which formula_55. While all trace-class operators are compact operators, the converse is not necessarily true. For example, formula_56 tends to zero as formula_57, while formula_58.
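Although every operator on a finite-dimensional space is trivially compact, the singular-value form above can still be illustrated numerically: truncating the singular value decomposition of a matrix gives a finite-rank approximation whose operator-norm error is the first discarded singular value. This is only a sketch with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((8, 8))

# Singular value decomposition: T = sum_n s_n * u_n v_n^T (the form used above)
U, s, Vt = np.linalg.svd(T)

N = 3
T_N = (U[:, :N] * s[:N]) @ Vt[:N, :]     # rank-N truncation, a finite-rank operator

# The operator (spectral) norm of the error equals the first discarded singular value
err = np.linalg.norm(T - T_N, ord=2)
print(err, s[N])                         # the two numbers agree to machine precision
```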
Completely continuous operators.
Let "X" and "Y" be Banach spaces. A bounded linear operator "T" : "X" → "Y" is called completely continuous if, for every weakly convergent sequence formula_59 from "X", the sequence formula_60 is norm-convergent in "Y" . Compact operators on a Banach space are always completely continuous. If "X" is a reflexive Banach space, then every completely continuous operator "T" : "X" → "Y" is compact.
Somewhat confusingly, compact operators are sometimes referred to as "completely continuous" in older literature, even though they are not necessarily completely continuous by the definition of that phrase in modern terminology.
|
[
{
"math_id": 0,
"text": "T: X \\to Y"
},
{
"math_id": 1,
"text": "X,Y"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "U"
},
{
"math_id": 6,
"text": "T(U)"
},
{
"math_id": 7,
"text": "V\\subseteq Y"
},
{
"math_id": 8,
"text": "T(U)\\subseteq V"
},
{
"math_id": 9,
"text": "(x_n)_{n\\in \\N}"
},
{
"math_id": 10,
"text": "(Tx_n)_{n\\in\\N}"
},
{
"math_id": 11,
"text": "X, Y, Z, W"
},
{
"math_id": 12,
"text": "B(X,Y)"
},
{
"math_id": 13,
"text": "X \\to Y"
},
{
"math_id": 14,
"text": "K(X,Y)"
},
{
"math_id": 15,
"text": "\\operatorname{Id}_X"
},
{
"math_id": 16,
"text": "B(X) = B(X,X)"
},
{
"math_id": 17,
"text": "K(X) = K(X,X)"
},
{
"math_id": 18,
"text": "(T_n)_{n \\in \\mathbf{N}}"
},
{
"math_id": 19,
"text": "B(Y,Z)\\circ K(X,Y)\\circ B(W,X)\\subseteq K(W,Z)."
},
{
"math_id": 20,
"text": "K(X)"
},
{
"math_id": 21,
"text": "B(X)"
},
{
"math_id": 22,
"text": "T: X \\to X"
},
{
"math_id": 23,
"text": "T\\colon X \\to X"
},
{
"math_id": 24,
"text": "T^* \\colon X^* \\to X^*"
},
{
"math_id": 25,
"text": "T\\in K(X)"
},
{
"math_id": 26,
"text": "{\\operatorname{Id}_X} - T"
},
{
"math_id": 27,
"text": "\\operatorname{Im}({\\operatorname{Id}_X} - T)"
},
{
"math_id": 28,
"text": "M"
},
{
"math_id": 29,
"text": "N"
},
{
"math_id": 30,
"text": "M+N"
},
{
"math_id": 31,
"text": "S\\colon X \\to X"
},
{
"math_id": 32,
"text": "S \\circ T"
},
{
"math_id": 33,
"text": "T \\circ S"
},
{
"math_id": 34,
"text": "\\lambda \\neq 0"
},
{
"math_id": 35,
"text": "T - \\lambda \\operatorname{Id}_X"
},
{
"math_id": 36,
"text": "\\dim \\ker \\left( T - \\lambda \\operatorname{Id}_X \\right) \n= \\dim\\big(X / \\operatorname{Im}\\left( T - \\lambda \\operatorname{Id}_X \\right) \\big)\n= \\dim \\ker \\left( T^* - \\lambda \\operatorname{Id}_{X^*} \\right)\n= \\dim\\big(X^* / \\operatorname{Im}\\left( T^* - \\lambda \\operatorname{Id}_{X^*} \\right) \\big)"
},
{
"math_id": 37,
"text": "\\sigma(T)"
},
{
"math_id": 38,
"text": "0 \\in \\sigma(T)"
},
{
"math_id": 39,
"text": "\\lambda \\in \\sigma(T)"
},
{
"math_id": 40,
"text": "\\lambda"
},
{
"math_id": 41,
"text": "T^{*}"
},
{
"math_id": 42,
"text": "r > 0"
},
{
"math_id": 43,
"text": "E_r = \\left\\{ \\lambda \\in \\sigma(T) : | \\lambda | > r \\right\\}"
},
{
"math_id": 44,
"text": "(\\lambda K + I)u = f "
},
{
"math_id": 45,
"text": "(\\mathcal{H}, \\langle \\cdot, \\cdot \\rangle)"
},
{
"math_id": 46,
"text": "T\\colon\\mathcal{H} \\to \\mathcal{H}"
},
{
"math_id": 47,
"text": "T = \\sum_{n=1}^\\infty \\lambda_n \\langle f_n, \\cdot \\rangle g_n"
},
{
"math_id": 48,
"text": "\\{f_1,f_2,\\ldots\\}"
},
{
"math_id": 49,
"text": "\\{g_1,g_2,\\ldots\\}"
},
{
"math_id": 50,
"text": "\\lambda_1,\\lambda_2,\\ldots"
},
{
"math_id": 51,
"text": "\\lambda_{N+k}=0"
},
{
"math_id": 52,
"text": "N \\in \\N"
},
{
"math_id": 53,
"text": "k = 1,2,\\dots"
},
{
"math_id": 54,
"text": "T = \\sum_{n=1}^N \\lambda_n \\langle f_n, \\cdot \\rangle g_n"
},
{
"math_id": 55,
"text": "\\operatorname{Tr}(|T|)<\\infty"
},
{
"math_id": 56,
"text": "\\lambda_n = \\frac{1}{n}"
},
{
"math_id": 57,
"text": "n \\to \\infty"
},
{
"math_id": 58,
"text": "\\sum_{n=1}^{\\infty} |\\lambda_n| = \\infty"
},
{
"math_id": 59,
"text": "(x_n)"
},
{
"math_id": 60,
"text": "(Tx_n)"
},
{
"math_id": 61,
"text": "\\ell^p"
},
{
"math_id": 62,
"text": "(Tf)(x) = \\int_0^x f(t)g(t) \\, \\mathrm{d} t."
},
{
"math_id": 63,
"text": "(T f)(x) = \\int_{\\Omega} k(x, y) f(y) \\, \\mathrm{d} y"
}
] |
https://en.wikipedia.org/wiki?curid=577441
|
5774572
|
Ship resistance and propulsion
|
A ship must be designed to move efficiently through the water with a minimum of external force. For thousands of years ship designers and builders of sailing vessels used rules of thumb based on the midship-section area to size the sails for a given vessel. The hull form and sail plan for the clipper ships, for example, evolved from experience, not from theory. It was not until the advent of steam power and the construction of large iron ships in the mid-19th century that it became clear to ship owners and builders that a more rigorous approach was needed.
Definition.
Ship resistance is defined as the force required to tow the ship in calm water at a constant velocity.
Components of resistance.
A body in water which is stationary with respect to water, experiences only hydrostatic pressure. Hydrostatic pressure always acts to oppose the weight of the body. The total (upward) force due to this buoyancy is equal to the (downward) weight of the displaced water. If the body is in motion, then there are also hydrodynamic pressures that act on the body. For a displacement vessel, that is the usual type of ship, three main types of resistance are considered: that due to wave-making, that due to the pressure of the moving water on the form, often not calculated or measured separately, and that due to friction of moving water on the wetted surface of the hull. These can be split up into more components:
Froude's experiments.
Froude's method for extrapolating the results of model tests to ships was adopted in the 1870s. Another method, created by Hughes, was introduced in the 1950s and was later adopted by the International Towing Tank Conference (ITTC). Froude's method tends to overestimate the power required for very large ships.
Froude had observed that when a ship or model was at its so-called hull speed, the wave pattern of the transverse waves (the waves along the hull) had a wavelength equal to the length of the waterline. This means that the ship's bow was riding on one wave crest and so was its stern. This speed is often called the hull speed and is a function of the length of the ship:
formula_0
where the constant (k) should be taken as: 2.43 for velocity (V) in kn and length (L) in metres (m), or 1.34 for velocity (V) in kn and length (L) in feet (ft).
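As a quick sketch of how this rule of thumb is applied (the waterline lengths below are purely illustrative, and the snippet simply evaluates formula_0 with the two constants quoted above):

```python
import math

def hull_speed_knots(waterline_length, k):
    """Hull speed V = k * sqrt(L); k = 2.43 for L in metres, 1.34 for L in feet."""
    return k * math.sqrt(waterline_length)

# The same (hypothetical) 100 m waterline expressed in metres and then in feet:
print(hull_speed_knots(100.0, 2.43))    # ~24.3 kn
print(hull_speed_knots(328.1, 1.34))    # ~24.3 kn, consistent with the metric value
```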
Observing this, Froude realized that the ship resistance problem had to be broken into two different parts: residuary resistance (mainly wave making resistance) and frictional resistance. To get the proper residuary resistance, it was necessary to recreate the wave train created by the ship in the model tests. He found for any ship and geometrically similar model towed at the suitable speed that:
There is a frictional drag that is given by the shear due to the viscosity. This can result in 50% of the total resistance in fast ship designs and 80% of the total resistance in slower ship designs.
To account for the frictional resistance Froude decided to tow a series of flat plates and measure the resistance of these plates, which were of the same wetted surface area and length as the model ship, and subtract this frictional resistance from the total resistance and get the remainder as the residuary resistance.
Friction.
(Main article: Skin friction drag) In a viscous fluid, a boundary layer is formed. This causes a net drag due to friction. The boundary layer undergoes shear at different rates extending from the hull surface until it reaches the field flow of the water.
Wave-making resistance.
(Main article: Wave-making resistance) A ship moving over the surface of undisturbed water sets up waves emanating mainly from the bow and stern of the ship. The waves created by the ship consist of divergent and transverse waves. The divergent waves are observed as the wake of a ship with a series of diagonal or oblique crests moving outwardly from the point of disturbance. These waves were first studied by William Thomson, 1st Baron Kelvin, who found that regardless of the speed of the ship, they were always contained within the 39° wedge shape (19.5° on each side) following the ship. The divergent waves do not cause much resistance against the ship's forward motion. However, the transverse waves appear as troughs and crests along the length of a ship and constitute the major part of the wave-making resistance of a ship. The energy associated with the transverse wave system travels at the group velocity of the waves, which is one half of their phase velocity. The prime mover of the vessel must put additional energy into the system in order to overcome this expense of energy. The relationship between the velocity of ships and that of the transverse waves can be found by equating the wave celerity and the ship's velocity.
Propulsion.
(Main article: Marine propulsion) Ships can be propelled by numerous sources of power: human, animal, or wind power (sails, kites, rotors and turbines), water currents, chemical or atomic fuels and stored electricity, pressure, heat or solar power supplying engines and motors. Most of these can propel a ship directly (e.g. by towing or chain), via hydrodynamic "drag" devices (e.g. oars and paddle wheels) and via hydrodynamic "lift" devices (e.g. propellers or jets). A few exotic means also exist, such as "fish-tail propulsion", rockets or magnetohydrodynamic propulsion.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V=k\\sqrt{L}"
}
] |
https://en.wikipedia.org/wiki?curid=5774572
|
57750049
|
Induction-induction
|
In intuitionistic type theory (ITT), a discipline within mathematical logic, induction-induction is for simultaneously declaring some inductive type and some inductive predicate over this type.
An inductive definition is given by rules for generating elements of some type. One can then define some predicate on that type by providing constructors for forming the elements of the predicate, defined inductively on the way the elements of the type are generated. Induction-induction generalizes this situation, since one can "simultaneously" define the type and the predicate: the rules for generating elements of the type formula_0 are allowed to refer to the predicate formula_1.
Induction-induction can be used to define larger types, including various universe constructions in type theory and limit constructions in category/topos theory.
Example 1.
Present the type formula_2 as having the following constructors (note the early reference to the predicate formula_3):
and simultaneously present the predicate formula_3 as having the following constructors:
Example 2.
A simple common example is the Universe à la Tarski type former. It creates some inductive type formula_13 and some inductive predicate formula_14. For every type in the type theory (except formula_15 itself!), there will be some element of formula_15 which may be seen as a code for the corresponding type; the predicate formula_16 inductively relates each possible type to the corresponding element of formula_15; and constructing new codes in formula_15 will require referring to the decoding-as-type of earlier codes, via the predicate formula_16.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A : \\mathsf{Type}"
},
{
"math_id": 1,
"text": "B : A \\to \\mathsf{Type}"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "B"
},
{
"math_id": 4,
"text": "aa : A"
},
{
"math_id": 5,
"text": "\\ell\\ell : \\sum_{x : A} B(x) \\to A; "
},
{
"math_id": 6,
"text": "\\mathsf{Tru} : B(aa)"
},
{
"math_id": 7,
"text": "\\mathsf{Fal} : B(aa)"
},
{
"math_id": 8,
"text": "x : A"
},
{
"math_id": 9,
"text": "y : B(x)"
},
{
"math_id": 10,
"text": "\\mathsf{Zer} : B(\\ell\\ell(x,y))"
},
{
"math_id": 11,
"text": "z : B(\\ell\\ell(x,y))"
},
{
"math_id": 12,
"text": "\\mathsf{Suc}(z) : B(\\ell\\ell(x,y))"
},
{
"math_id": 13,
"text": "U : \\mathsf{Type}"
},
{
"math_id": 14,
"text": "T : U \\to \\mathsf{Type}"
},
{
"math_id": 15,
"text": "U"
},
{
"math_id": 16,
"text": "T"
}
] |
https://en.wikipedia.org/wiki?curid=57750049
|
57763
|
Aerosol
|
Suspension of fine solid particles or liquid droplets in a gas
An aerosol is a suspension of fine solid particles or liquid droplets in air or another gas. Aerosols can be generated from natural or human causes. The term "aerosol" commonly refers to the mixture of particulates in air, and not to the particulate matter alone. Examples of natural aerosols are fog, mist or dust. Examples of human caused aerosols include particulate air pollutants, mist from the discharge at hydroelectric dams, irrigation mist, perfume from atomizers, smoke, dust, sprayed pesticides, and medical treatments for respiratory illnesses.
Several types of atmospheric aerosol have a significant effect on Earth's climate: volcanic, desert dust, sea-salt, that originating from biogenic sources and human-made. Volcanic aerosol forms in the stratosphere after an eruption as droplets of sulfuric acid that can prevail for up to two years, and reflect sunlight, lowering temperature. Desert dust, mineral particles blown to high altitudes, absorb heat and may be responsible for inhibiting storm cloud formation. Human-made sulfate aerosols, primarily from burning oil and coal, affect the behavior of clouds. When aerosols absorb pollutants, it facilitates the deposition of pollutants to the surface of the earth as well as to bodies of water. This has the potential to be damaging to both the environment and human health.
Ship tracks are clouds that form around the exhaust released by ships into the still ocean air. Water molecules collect around the tiny particles (aerosols) from exhaust to form a cloud seed. More and more water accumulates on the seed until a visible cloud is formed. In the case of ship tracks, the cloud seeds are stretched over a long narrow path where the wind has blown the ship's exhaust, so the resulting clouds resemble long strings over the ocean.
The warming caused by human-produced greenhouse gases has been somewhat offset by the cooling effect of human-produced aerosols. In 2020, regulations on fuel significantly cut sulfur dioxide emissions from international shipping by approximately 80%, leading to an unexpected global geoengineering termination shock.
The liquid or solid particles in an aerosol have diameters typically less than 1 μm. Larger particles with a significant settling speed make the mixture a suspension, but the distinction is not clear. In everyday language, "aerosol" often refers to a dispensing system that delivers a consumer product from a spray can.
Diseases can spread by means of small droplets in the breath, sometimes called bioaerosols.
<templatestyles src="Template:TOC limit/styles.css" />
Definitions.
Aerosol is defined as a suspension system of solid or liquid particles in a gas. An aerosol includes both the particles and the suspending gas, which is usually air. Meteorologists usually refer to them as particulate matter (PM2.5 or PM10, depending on their size). Frederick G. Donnan presumably first used the term "aerosol" during World War I to describe an aero-solution, clouds of microscopic particles in air. This term developed analogously to the term hydrosol, a colloid system with water as the dispersed medium. "Primary aerosols" contain particles introduced directly into the gas; "secondary aerosols" form through gas-to-particle conversion.
Key aerosol groups include sulfates, organic carbon, black carbon, nitrates, mineral dust, and sea salt; they usually clump together to form a complex mixture. Various types of aerosol, classified according to physical form and how they were generated, include dust, fume, mist, smoke and fog.
There are several measures of aerosol concentration. Environmental science and environmental health often use the "mass concentration" ("M"), defined as the mass of particulate matter per unit volume, in units such as μg/m3. Also commonly used is the "number concentration" ("N"), the number of particles per unit volume, in units such as number per m3 or number per cm3.
Particle size has a major influence on particle properties, and the aerosol particle radius or diameter ("dp") is a key property used to characterise aerosols.
Aerosols vary in their dispersity. A "monodisperse" aerosol, producible in the laboratory, contains particles of uniform size. Most aerosols, however, as "polydisperse" colloidal systems, exhibit a range of particle sizes. Liquid droplets are almost always nearly spherical, but scientists use an "equivalent diameter" to characterize the properties of various shapes of solid particles, some very irregular. The equivalent diameter is the diameter of a spherical particle with the same value of some physical property as the irregular particle. The "equivalent volume diameter" ("de") is defined as the diameter of a sphere of the same volume as that of the irregular particle. Also commonly used is the aerodynamic diameter, "da".
Generation and applications.
People generate aerosols for various purposes, including:
Some devices for generating aerosols are:
In the atmosphere.
Several types of atmospheric aerosol have a significant effect on Earth's climate: volcanic, desert dust, sea-salt, that originating from biogenic sources and human-made. Volcanic aerosol forms in the stratosphere after an eruption as droplets of sulfuric acid that can prevail for up to two years, and reflect sunlight, lowering temperature. Desert dust, mineral particles blown to high altitudes, absorb heat and may be responsible for inhibiting storm cloud formation. Human-made sulfate aerosols, primarily from burning oil and coal, affect the behavior of clouds.
Although all hydrometeors, solid and liquid, can be described as aerosols, a distinction is commonly made between such dispersions (i.e. clouds) containing activated drops and crystals, and aerosol particles. The atmosphere of Earth contains aerosols of various types and concentrations, including quantities of:
Aerosols can be found in urban ecosystems in various forms, for example:
The presence of aerosols in the Earth's atmosphere can influence its climate, as well as human health.
Effects.
Volcanic eruptions release large amounts of sulphuric acid, hydrogen sulfide and hydrochloric acid into the atmosphere. These gases represent aerosols and eventually return to earth as acid rain, having a number of adverse effects on the environment and human life.
When aerosols absorb pollutants, it facilitates the deposition of pollutants to the surface of the earth as well as to bodies of water. This has the potential to be damaging to both the environment and human health.
Aerosols interact with the Earth's energy budget in two ways, directly and indirectly.
* E.g., a "direct" effect is that aerosols scatter and absorb incoming solar radiation. This will mainly lead to a cooling of the surface (solar radiation is scattered back to space) but may also contribute to a warming of the surface (caused by the absorption of incoming solar energy). This will be an additional element to the greenhouse effect and therefore contributing to the global climate change.
* The "indirect" effects refer to the aerosol interfering with formations that interact directly with radiation. For example, they are able to modify the size of the cloud particles in the lower atmosphere, thereby changing the way clouds reflect and absorb light and therefore modifying the Earth's energy budget.
* There is evidence to suggest that anthropogenic aerosols actually offset the effects of greenhouse gases in some areas, which is why the Northern Hemisphere shows slower surface warming than the Southern Hemisphere, although that just means that the Northern Hemisphere will absorb the heat later through ocean currents bringing warmer waters from the South. On a global scale however, aerosol cooling decreases greenhouse-gases-induced heating without offsetting it completely.
Ship tracks are clouds that form around the exhaust released by ships into the still ocean air. Water molecules collect around the tiny particles (aerosols) from exhaust to form a cloud seed. More and more water accumulates on the seed until a visible cloud is formed. In the case of ship tracks, the cloud seeds are stretched over a long narrow path where the wind has blown the ship's exhaust, so the resulting clouds resemble long strings over the ocean.
The warming caused by human-produced greenhouse gases has been somewhat offset by the cooling effect of human-produced aerosols. In 2020, regulations on fuel significantly cut sulfur dioxide emissions from international shipping by approximately 80%, leading to an unexpected global geoengineering termination shock.
Aerosols in the 20 μm range show a particularly long persistence time in air conditioned rooms due to their "jet rider" behaviour (move with air jets, gravitationally fall out in slowly moving air); as this aerosol size is most effectively adsorbed in the human nose, the primordial infection site in COVID-19, such aerosols may contribute to the pandemic.
Aerosol particles with an effective diameter smaller than 10 μm can enter the bronchi, while the ones with an effective diameter smaller than 2.5 μm can enter as far as the gas exchange region in the lungs, which can be hazardous to human health.
Size distribution.
For a monodisperse aerosol, a single number—the particle diameter—suffices to describe the size of the particles. However, more complicated particle-size distributions describe the sizes of the particles in a polydisperse aerosol. This distribution defines the relative amounts of particles, sorted according to size. One approach to defining the particle size distribution uses a list of the sizes of every particle in a sample. However, this approach proves tedious to ascertain in aerosols with millions of particles and awkward to use. Another approach splits the size range into intervals and finds the number (or proportion) of particles in each interval. These data can be presented in a histogram with the area of each bar representing the proportion of particles in that size bin, usually normalised by dividing the number of particles in a bin by the width of the interval so that the area of each bar is proportionate to the number of particles in the size range that it represents. If the width of the bins tends to zero, the frequency function is:
formula_0
where
formula_1 is the diameter of the particles
formula_2 is the fraction of particles having diameters between formula_3 and formula_3 + formula_4
formula_5 is the frequency function
Therefore, the area under the frequency curve between two sizes "a" and "b" represents the total fraction of the particles in that size range:
formula_6
It can also be formulated in terms of the total number density "N":
formula_7
Assuming spherical aerosol particles, the aerosol surface area per unit volume ("S") is given by the second moment:
formula_8
And the third moment gives the total volume concentration ("V") of the particles:
formula_9
The particle size distribution can be approximated. The normal distribution usually does not suitably describe particle size distributions in aerosols because of the skewness associated with a long tail of larger particles. Also, for a quantity that varies over a large range, as many aerosol sizes do, the width of the distribution would imply negative particle sizes, which is not physically realistic. However, the normal distribution can be suitable for some aerosols, such as test aerosols, certain pollen grains and spores.
A more widely chosen log-normal distribution gives the number frequency as:
formula_10
where:
formula_11 is the standard deviation of the size distribution and
formula_12 is the arithmetic mean diameter.
The log-normal distribution has no negative values, can cover a wide range of values, and fits many observed size distributions reasonably well.
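A minimal numerical sketch of the log-normal number frequency above; the median diameter and width used here are arbitrary illustrative values, and formula_12 is treated as the mean of ln("dp") (the log of the median diameter), as the form of the exponent requires:

```python
import math

def lognormal_frequency(d_p, sigma, mu):
    """Log-normal number frequency f(d_p) from formula_10; mu is the mean of ln(d_p)."""
    return (1.0 / (d_p * sigma * math.sqrt(2.0 * math.pi))
            * math.exp(-(math.log(d_p) - mu) ** 2 / (2.0 * sigma ** 2)))

# Sanity check: the frequency function should integrate to ~1 over all diameters.
mu, sigma = math.log(0.2), 0.5      # assumed median diameter 0.2 um, width 0.5
total, dd, d = 0.0, 1e-3, 1e-3
while d < 20.0:                     # diameters in micrometres
    total += lognormal_frequency(d, sigma, mu) * dd
    d += dd
print(total)                        # ~1.0
```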
Other distributions sometimes used to characterise particle size include: the Rosin-Rammler distribution, applied to coarsely dispersed dusts and sprays; the Nukiyama–Tanasawa distribution, for sprays of extremely broad size ranges; the power function distribution, occasionally applied to atmospheric aerosols; the exponential distribution, applied to powdered materials; and for cloud droplets, the Khrgian–Mazin distribution.
Physics.
Terminal velocity of a particle in a fluid.
For low values of the Reynolds number (<1), true for most aerosol motion, Stokes' law describes the force of resistance on a solid spherical particle in a fluid. However, Stokes' law is only valid when the velocity of the gas at the surface of the particle is zero. For small particles (< 1 μm) that characterize aerosols, however, this assumption fails. To account for this failure, one can introduce the Cunningham correction factor, always greater than 1. Including this factor, one finds the relation between the resisting force on a particle and its velocity:
formula_13
where
formula_14 is the resisting force on a spherical particle
formula_15 is the dynamic viscosity of the gas
formula_16 is the particle velocity
formula_17 is the Cunningham correction factor.
This allows us to calculate the terminal velocity of a particle undergoing gravitational settling in still air. Neglecting buoyancy effects, we find:
formula_18
where
formula_19 is the terminal settling velocity of the particle.
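A short sketch of formula_18 in code. The Cunningham factor is not specified in the text above, so a widely used empirical fit (with constants 1.257, 0.4 and 1.1) is assumed here, along with room-temperature air properties:

```python
import math

def cunningham_factor(d, mean_free_path=66e-9):
    """Slip correction C_c; the empirical constants are commonly quoted values (assumed)."""
    kn = 2.0 * mean_free_path / d
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def settling_velocity(d, rho_p, eta=1.81e-5, g=9.81):
    """Terminal settling velocity V_TS = rho_p * d**2 * g * C_c / (18 * eta)  (formula_18)."""
    return rho_p * d ** 2 * g * cunningham_factor(d) / (18.0 * eta)

# A 1 um unit-density sphere in room-temperature air settles at roughly 3.5e-5 m/s.
print(settling_velocity(1e-6, 1000.0))
```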
The terminal velocity can also be derived for other kinds of forces. If Stokes' law holds, then the resistance to motion is directly proportional to speed. The constant of proportionality is the mechanical mobility ("B") of a particle:
formula_20
A particle traveling at any reasonable initial velocity approaches its terminal velocity exponentially with an "e"-folding time equal to the relaxation time:
formula_21
where:
formula_22 is the particle speed at time t
formula_23 is the final particle speed
formula_24 is the initial particle speed
To account for the effect of the shape of non-spherical particles, a correction factor known as the "dynamic shape factor" is applied to Stokes' law. It is defined as the ratio of the resistive force of the irregular particle to that of a spherical particle with the same volume and velocity:
formula_25
where:
formula_26 is the dynamic shape factor
Aerodynamic diameter.
The aerodynamic diameter of an irregular particle is defined as the diameter of the spherical particle with a density of 1000 kg/m3 and the same settling velocity as the irregular particle.
Neglecting the slip correction, the particle settles at the terminal velocity proportional to the square of the aerodynamic diameter, "da":
formula_27
where
formula_28 = standard particle density (1000 kg/m3).
This equation gives the aerodynamic diameter:
formula_29
One can apply the aerodynamic diameter to particulate pollutants or to inhaled drugs to predict where in the respiratory tract such particles deposit. Pharmaceutical companies typically use aerodynamic diameter, not geometric diameter, to characterize particles in inhalable drugs.
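A small sketch of the conversion in formula_29; the particle density and dynamic shape factor below are assumed, illustrative values:

```python
def aerodynamic_diameter(d_e, rho_p, chi=1.0, rho_0=1000.0):
    """d_a = d_e * sqrt(rho_p / (rho_0 * chi)), with rho_0 = 1000 kg/m^3  (formula_29)."""
    return d_e * (rho_p / (rho_0 * chi)) ** 0.5

# Hypothetical compact dust particle: equivalent volume diameter 2 um,
# density 2600 kg/m^3, assumed dynamic shape factor 1.4.
print(aerodynamic_diameter(2e-6, 2600.0, chi=1.4))   # ~2.7e-6 m
```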
Dynamics.
The previous discussion focused on single aerosol particles. In contrast, "aerosol dynamics" explains the evolution of complete aerosol populations. The concentrations of particles will change over time as a result of many processes. External processes that move particles outside a volume of gas under study include diffusion, gravitational settling, and electric charges and other external forces that cause particle migration. A second set of processes internal to a given volume of gas include particle formation (nucleation), evaporation, chemical reaction, and coagulation.
A differential equation called the "Aerosol General Dynamic Equation" (GDE) characterizes the evolution of the number density of particles in an aerosol due to these processes.
formula_30
Change in time = Convective transport + Brownian diffusion + gas-particle interactions + coagulation + migration by external forces
Where:
formula_31 is number density of particles of size category formula_32
formula_33 is the particle velocity
formula_34 is the particle Stokes-Einstein diffusivity
formula_35 is the particle velocity associated with an external force
Coagulation.
As particles and droplets in an aerosol collide with one another, they may undergo coalescence or aggregation. This process leads to a change in the aerosol particle-size distribution, with the mode increasing in diameter as total number of particles decreases. On occasion, particles may shatter apart into numerous smaller particles; however, this process usually occurs primarily in particles too large for consideration as aerosols.
Dynamics regimes.
The Knudsen number of the particle defines three different dynamical regimes that govern the behaviour of an aerosol:
formula_36
where formula_37 is the mean free path of the suspending gas and formula_38 is the diameter of the particle. For particles in the "free molecular regime", "Kn" » 1; particles small compared to the mean free path of the suspending gas. In this regime, particles interact with the suspending gas through a series of "ballistic" collisions with gas molecules. As such, they behave similarly to gas molecules, tending to follow streamlines and diffusing rapidly through Brownian motion. The mass flux equation in the free molecular regime is:
formula_39
where "a" is the particle radius, "P"∞ and "PA" are the pressures far from the droplet and at the surface of the droplet respectively, "kb" is the Boltzmann constant, "T" is the temperature, "CA" is mean thermal velocity and "α" is mass accommodation coefficient. The derivation of this equation assumes constant pressure and constant diffusion coefficient.
Particles are in the "continuum regime" when Kn « 1. In this regime, the particles are big compared to the mean free path of the suspending gas, meaning that the suspending gas acts as a continuous fluid flowing round the particle. The molecular flux in this regime is:
formula_40
where "a" is the radius of the particle "A", "MA" is the molecular mass of the particle "A", "DAB" is the diffusion coefficient between particles "A" and "B", "R" is the ideal gas constant, "T" is the temperature (in absolute units like kelvin), and "PA∞" and "PAS" are the pressures at infinite and at the surface respectively.
The "transition regime" contains all the particles in between the free molecular and continuum regimes or "Kn" ≈ 1. The forces experienced by a particle are a complex combination of interactions with individual gas molecules and macroscopic interactions. The semi-empirical equation describing mass flux is:
formula_41
where "I"cont is the mass flux in the continuum regime. This formula is called the Fuchs-Sutugin interpolation formula. These equations do not take into account the heat release effect.
Partitioning.
Aerosol partitioning theory governs condensation onto and evaporation from an aerosol surface. Condensation of mass causes the mode of the particle-size distributions of the aerosol to increase; conversely, evaporation causes the mode to decrease. Nucleation is the process of forming aerosol mass from the condensation of a gaseous precursor, specifically a vapor. Net condensation of the vapor requires supersaturation, a partial pressure greater than its vapor pressure. This can happen for three reasons:
There are two types of nucleation processes. Gases preferentially condense onto surfaces of pre-existing aerosol particles, known as heterogeneous nucleation. This process causes the diameter at the mode of particle-size distribution to increase with constant number concentration. With sufficiently high supersaturation and no suitable surfaces, particles may condense in the absence of a pre-existing surface, known as homogeneous nucleation. This results in the addition of very small, rapidly growing particles to the particle-size distribution.
Activation.
Water coats particles in aerosols, making them "activated", usually in the context of forming a cloud droplet (such as natural cloud seeding by aerosols from trees in a forest). Following the Kelvin equation (based on the curvature of liquid droplets), smaller particles need a higher ambient relative humidity to maintain equilibrium than larger particles do. The following formula gives relative humidity at equilibrium:
formula_42
where formula_43 is the saturation vapor pressure above a particle at equilibrium (around a curved liquid droplet), "p"0 is the saturation vapor pressure (flat surface of the same liquid) and "S" is the saturation ratio.
Kelvin equation for saturation vapor pressure above a curved surface is:
formula_44
where "rp" droplet radius, "σ" surface tension of droplet, "ρ" density of liquid, "M" molar mass, "T" temperature, and "R" molar gas constant.
Solution to the general dynamic equation.
There are no general solutions to the general dynamic equation (GDE); common methods used to solve the general dynamic equation include:
Detection.
Aerosol can either be measured in-situ or with remote sensing techniques.
"In situ" observations.
Some available in situ measurement techniques include:
Remote sensing approach.
Remote sensing approaches include:
Size selective sampling.
Particles can deposit in the nose, mouth, pharynx and larynx (the head airways region), deeper within the respiratory tract (from the trachea to the terminal bronchioles), or in the alveolar region. The location of deposition of aerosol particles within the respiratory system strongly determines the health effects of exposure to such aerosols. This phenomenon led people to invent aerosol samplers that select a subset of the aerosol particles that reach certain parts of the respiratory system.
Examples of these subsets of the particle-size distribution of an aerosol, important in occupational health, include the inhalable, thoracic, and respirable fractions. The fraction that can enter each part of the respiratory system depends on the deposition of particles in the upper parts of the airway. The inhalable fraction of particles, defined as the proportion of particles originally in the air that can enter the nose or mouth, depends on external wind speed and direction and on the particle-size distribution by aerodynamic diameter. The thoracic fraction is the proportion of the particles in ambient aerosol that can reach the thorax or chest region. The respirable fraction is the proportion of particles in the air that can reach the alveolar region. To measure the respirable fraction of particles in air, a pre-collector is used with a sampling filter. The pre-collector excludes particles as the airways remove particles from inhaled air. The sampling filter collects the particles for measurement. It is common to use cyclonic separation for the pre-collector, but other techniques include impactors, horizontal elutriators, and large pore membrane filters.
Two alternative size-selective criteria, often used in atmospheric monitoring, are PM10 and PM2.5. PM10 is defined by ISO as "particles which pass through a size-selective inlet with a 50% efficiency cut-off at 10 μm aerodynamic diameter" and PM2.5 as "particles which pass through a size-selective inlet with a 50% efficiency cut-off at 2.5 μm aerodynamic diameter". PM10 corresponds to the "thoracic convention" as defined in ISO 7708:1995, Clause 6; PM2.5 corresponds to the "high-risk respirable convention" as defined in ISO 7708:1995, 7.1. The United States Environmental Protection Agency replaced the older standards for particulate matter based on Total Suspended Particulate with another standard based on PM10 in 1987 and then introduced standards for PM2.5 (also known as fine particulate matter) in 1997.
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathrm{d}f = f(d_p) \\,\\mathrm{d}d_p"
},
{
"math_id": 1,
"text": " d_p "
},
{
"math_id": 2,
"text": " \\,\\mathrm{d}f "
},
{
"math_id": 3,
"text": "d_p"
},
{
"math_id": 4,
"text": "\\mathrm{d}d_p"
},
{
"math_id": 5,
"text": "f(d_p)"
},
{
"math_id": 6,
"text": " f_{ab}=\\int_a^b f(d_p) \\,\\mathrm{d}d_p"
},
{
"math_id": 7,
"text": " dN = N(d_p) \\,\\mathrm{d}d_p"
},
{
"math_id": 8,
"text": " S= \\pi/2 \\int_0^\\infty N(d_p)d_p^2 \\,\\mathrm{d}d_p"
},
{
"math_id": 9,
"text": " V= \\pi/6 \\int_0^\\infty N(d_p)d_p^3 \\,\\mathrm{d}d_p"
},
{
"math_id": 10,
"text": " \\mathrm{d}f = \\frac{1}{d_p \\sigma\\sqrt{2\\pi}} e^{-\\frac{(ln(d_p) - \\bar{d_p})^2}{2 \\sigma^2} }\\mathrm{d}d_p"
},
{
"math_id": 11,
"text": " \\sigma"
},
{
"math_id": 12,
"text": " \\bar{d_p}"
},
{
"math_id": 13,
"text": "F_D = \\frac {3 \\pi \\eta V d}{C_c}"
},
{
"math_id": 14,
"text": "F_D"
},
{
"math_id": 15,
"text": "\\eta"
},
{
"math_id": 16,
"text": "V"
},
{
"math_id": 17,
"text": "C_c"
},
{
"math_id": 18,
"text": "V_{TS} = \\frac{\\rho_p d^2 g C_c}{18 \\eta}"
},
{
"math_id": 19,
"text": "V_{TS}"
},
{
"math_id": 20,
"text": "B = \\frac{V}{F_D} = \\frac {C_c}{3 \\pi \\eta d}"
},
{
"math_id": 21,
"text": "V(t) = V_{f}-(V_{f}-V_{0})e^{-\\frac{t}{\\tau}}"
},
{
"math_id": 22,
"text": "V(t)"
},
{
"math_id": 23,
"text": "V_f"
},
{
"math_id": 24,
"text": "V_0"
},
{
"math_id": 25,
"text": "\\chi = \\frac{F_D}{3 \\pi \\eta V d_e}"
},
{
"math_id": 26,
"text": "\\chi"
},
{
"math_id": 27,
"text": "V_{TS} = \\frac{\\rho_0 d_a^2 g}{18 \\eta}"
},
{
"math_id": 28,
"text": "\\ \\rho_0"
},
{
"math_id": 29,
"text": "d_a=d_e\\left(\\frac{\\rho_p}{\\rho_0 \\chi}\\right)^{\\frac{1}{2}} "
},
{
"math_id": 30,
"text": "\\frac{\\partial{n_i}}{\\partial{t}} = -\\nabla \\cdot n_i \\mathbf{q} +\\nabla \\cdot D_p\\nabla_i n_i+ \\left(\\frac{\\partial{n_i}}{\\partial{t}}\\right)_\\mathrm{growth} + \\left(\\frac{\\partial{n_i}}{\\partial{t}}\\right)_\\mathrm{coag} -\\nabla \\cdot \\mathbf{q}_F n_i"
},
{
"math_id": 31,
"text": "n_i"
},
{
"math_id": 32,
"text": "i"
},
{
"math_id": 33,
"text": "\\mathbf{q}"
},
{
"math_id": 34,
"text": "D_p"
},
{
"math_id": 35,
"text": "\\mathbf{q}_F"
},
{
"math_id": 36,
"text": "K_n=\\frac{2\\lambda}{d}"
},
{
"math_id": 37,
"text": "\\lambda"
},
{
"math_id": 38,
"text": "d"
},
{
"math_id": 39,
"text": " I = \\frac{\\pi a^2}{k_b} \\left( \\frac{P_\\infty}{T_\\infty} - \\frac{P_A}{T_A} \\right) \\cdot C_A \\alpha "
},
{
"math_id": 40,
"text": " I_{cont} \\sim \\frac{4 \\pi a M_A D_{AB}}{RT} \\left( P_{A \\infty} - P_{AS}\\right)"
},
{
"math_id": 41,
"text": " I = I_{cont} \\cdot \\frac{1 + K_n}{1 + 1.71 K_n + 1.33 {K_n}^2}"
},
{
"math_id": 42,
"text": " RH = \\frac{p_s}{p_0} \\times 100\\% = S \\times 100\\%"
},
{
"math_id": 43,
"text": "p_s"
},
{
"math_id": 44,
"text": " \\ln{p_s \\over p_0} = \\frac{2 \\sigma M}{RT \\rho \\cdot r_p} "
}
] |
https://en.wikipedia.org/wiki?curid=57763
|
5776733
|
Projective connection
|
In differential geometry, a projective connection is a type of Cartan connection on a differentiable manifold.
The structure of a projective connection is modeled on the geometry of projective space, rather than the affine space corresponding to an affine connection. Much like affine connections, projective connections also define geodesics. However, these geodesics are not affinely parametrized. Rather they are projectively parametrized, meaning that their preferred class of parameterizations is acted upon by the group of fractional linear transformations.
Like an affine connection, projective connections have associated torsion and curvature.
Projective space as the model geometry.
The first step in defining any Cartan connection is to consider the flat case: in which the connection corresponds to the Maurer-Cartan form on a homogeneous space.
In the projective setting, the underlying manifold formula_0 of the homogeneous space is the projective space RPn which we shall represent by homogeneous coordinates formula_1. The symmetry group of formula_0 is "G" = PSL("n"+1,R). Let "H" be the isotropy group of the point formula_2. Thus, "M" = "G"/"H" presents formula_0 as a homogeneous space.
Let formula_3 be the Lie algebra of "G", and formula_4 that of "H". Note that formula_5. As matrices relative to the homogeneous basis, formula_3 consists of trace-free formula_6 matrices:
formula_7.
And formula_4 consists of all these matrices with formula_8. Relative to the matrix representation above, the Maurer-Cartan form of "G" is a system of "1-forms" formula_9 satisfying the structural equations (written using the Einstein summation convention):
formula_10
formula_11
formula_12
formula_13
Projective structures on manifolds.
A projective structure is a "linear geometry" on a manifold in which two nearby points are connected by a line (i.e., an unparametrized "geodesic") in a unique manner. Furthermore, an infinitesimal neighborhood of each point is equipped with a class of "projective frames". According to Cartan (1924),
"Une variété (ou espace) à connexion projective est une variété numérique qui, au voisinage immédiat de chaque point, présente tous les caractères d'un espace projectif et douée de plus d'une loi permettant de raccorder en un seul espace projectif les deux petits morceaux qui entourent deux points infiniment voisins. ..."
"Analytiquement, on choisira, d'une manière d'ailleurs arbitraire, dans l'espace projectif attaché à chaque point a de la variété, un "repére" définissant un système de coordonnées projectives. ... Le raccord entre les espaces projectifs attachés à deux points infiniment voisins a et a' se traduira analytiquement par une transformation homographique. ..."
This is analogous to Cartan's notion of an "affine connection", in which nearby points are thus connected and have an affine frame of reference which is transported from one to the other (Cartan, 1923):
"La variété sera dite à "connexion affine" lorsqu'on aura défini, d'une manière d'ailleurs arbitraire, une loi permettant de repérer l'un par rapport à l'autre les espaces affines attachés à deux points "infiniment voisins" quelconques m et m' de la variété; cete loi permettra de dire que tel point de l'espace affine attaché au point m' correspond à tel point de l'espace affine attaché au point m, que tel vecteur du premier espace es parallèle ou équipollent à tel vecteur du second espace."
In modern language, a projective structure on an "n"-manifold "M" is a Cartan geometry modelled on projective space, where the latter is viewed as a homogeneous space for PSL("n"+1,R). In other words it is a PSL("n"+1,R)-bundle equipped with
such that the solder form induced by these data is an isomorphism.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "[x_0,\\dots,x_n]"
},
{
"math_id": 2,
"text": "[1,0,0,\\ldots,0]"
},
{
"math_id": 3,
"text": "{\\mathfrak g}"
},
{
"math_id": 4,
"text": "{\\mathfrak h}"
},
{
"math_id": 5,
"text": "{\\mathfrak g} = {\\mathfrak s}{\\mathfrak l}(n+1,{\\mathbb R})"
},
{
"math_id": 6,
"text": "(n+1)\\times(n+1)"
},
{
"math_id": 7,
"text": "\\left(\n\\begin{matrix}\n\\lambda&v^i\\\\\nw_j&a_j^i\n\\end{matrix}\n\\right),\\quad \n(v^i)\\in {\\mathbb R}^{1\\times n}, (w_j)\\in {\\mathbb R}^{n\\times 1}, (a_j^i)\\in {\\mathbb R}^{n\\times n}, \\lambda = -\\sum_i a_i^i\n"
},
{
"math_id": 8,
"text": "(w_j)=0"
},
{
"math_id": 9,
"text": "(\\xi, \\alpha_j, \\alpha_j^i, \\alpha^i)"
},
{
"math_id": 10,
"text": "d\\xi + \\alpha^i \\wedge \\alpha_i = 0"
},
{
"math_id": 11,
"text": "d a_j+a_j \\wedge \\zeta+a_{j}^{k}\\wedge a_{k}=0"
},
{
"math_id": 12,
"text": "d a_{j}^{i}+a^{i} \\wedge a_{j}+a_{k}^{i}\\wedge a_{j}^{k}=0"
},
{
"math_id": 13,
"text": "d a^{i}+\\zeta \\wedge a^{i}+a^{k}\\wedge a_{k}^{i}=0"
}
] |
https://en.wikipedia.org/wiki?curid=5776733
|
5777537
|
Hack's law
|
Hydrological relationship
Hack's law is an empirical relationship between the length of streams and the area of their basins. If "L" is the length of the longest stream in a basin, and "A" is the area of the basin, then Hack's law may be written as
formula_0
for some constant "C" where the exponent "h" is slightly less than 0.6 in most basins. "h" varies slightly from region to region and slightly decreases for larger basins (>8,000 mi2, or 20,720 km2). In addition to the catchment-scales, Hack's law was observed on unchanneled small-scale surfaces when the morphology measured at high resolutions (Cheraghi et al., 2018).
The law is named after American geomorphologist John Tilton Hack.
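A one-line numerical sketch of the relationship above; the coefficient used below is a commonly quoted value for "L" in miles and "A" in square miles, and both constants are illustrative rather than taken from this article:

```python
def hack_stream_length(area, C=1.4, h=0.6):
    """Hack's law L = C * A**h (formula_0); constants are illustrative, units must match C."""
    return C * area ** h

print(hack_stream_length(100.0))   # longest-stream length for a 100 sq mi basin, ~22 mi
```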
|
[
{
"math_id": 0,
"text": "L = C A^h\\ "
}
] |
https://en.wikipedia.org/wiki?curid=5777537
|
57778150
|
Cole equation of state
|
An equation of state introduced by R. H. Cole
formula_0
where formula_1 is a reference density, formula_2 is the adiabatic index, and formula_3 is a parameter with pressure units.
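A minimal sketch of the equation of state; the parameter values below are typical of weakly compressible water models (for example in SPH simulations) and are assumptions, not values from this article:

```python
def cole_pressure(rho, rho_0, B, gamma):
    """Cole equation of state: p = B * ((rho / rho_0)**gamma - 1)  (formula_0)."""
    return B * ((rho / rho_0) ** gamma - 1.0)

# Assumed water-like parameters: gamma = 7, B = c0**2 * rho_0 / gamma with c0 = 1480 m/s.
rho_0 = 1000.0
B = 1480.0 ** 2 * rho_0 / 7.0
print(cole_pressure(1010.0, rho_0, B, 7.0))   # pressure for a 1% density increase, in Pa
```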
|
[
{
"math_id": 0,
"text": "p = B \\left[ \\left( \\frac{\\rho}{\\rho_0} \\right)^\\gamma -1 \\right] ,"
},
{
"math_id": 1,
"text": "\\rho_0"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "B"
}
] |
https://en.wikipedia.org/wiki?curid=57778150
|
5777979
|
Bond albedo
|
Fraction of incident light reflected by an astronomical body
The Bond albedo (also called spheric albedo, planetary albedo, and bolometric albedo), named after the American astronomer George Phillips Bond (1825–1865), who originally proposed it, is the fraction of power in the total electromagnetic radiation incident on an astronomical body that is scattered back out into space.
Because the Bond albedo accounts for all of the light scattered from a body at all wavelengths and all phase angles, it is a necessary quantity for determining how much energy a body absorbs. This, in turn, is crucial for determining the equilibrium temperature of a body.
Because bodies in the outer Solar System are always observed at very low phase angles from the Earth, the only reliable data for measuring their Bond albedo comes from spacecraft.
Phase integral.
The Bond albedo ("A") is related to the geometric albedo ("p") by the expression
formula_0
where "q" is termed the "phase integral" and is given in terms of the directional scattered flux "I"("α") into phase angle "α" (averaged over all wavelengths and azimuthal angles) as
formula_1
The phase angle "α" is the angle between the source of the radiation (usually the Sun) and the observing direction, and varies from zero for light scattered back towards the source, to 180° for observations looking towards the source. For example, during opposition or looking at the full moon, α is very small, while backlit objects or the new moon have α close to 180°.
Examples.
The Bond albedo is a value strictly between 0 and 1, as it includes all possible scattered light (but not radiation from the body itself). This is in contrast to other definitions of albedo such as the geometric albedo, which can be above 1. In general, though, the Bond albedo may be greater or smaller than the geometric albedo, depending on the surface and atmospheric properties of the body in question.
Some examples:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A = pq"
},
{
"math_id": 1,
"text": "q = 2\\int_0^\\pi \\frac{I(\\alpha)}{I(0)} \\sin\\alpha \\, d\\alpha."
}
] |
https://en.wikipedia.org/wiki?curid=5777979
|
57781155
|
Skew-merged permutation
|
In the theory of permutation patterns, a skew-merged permutation is a permutation that can be partitioned into an increasing sequence and a decreasing sequence. They were first studied by and given their name by .
Characterization.
The two smallest permutations that cannot be partitioned into an increasing and a decreasing sequence are 3412 and 2143. was the first to establish that a skew-merged permutation can also be equivalently defined as a permutation that avoids the two patterns 3412 and 2143.
A permutation is skew-merged if and only if its associated permutation graph is a split graph, a graph that can be partitioned into a clique (corresponding to the descending subsequence) and an independent set (corresponding to the ascending subsequence). The two forbidden patterns for skew-merged permutations, 3412 and 2143, correspond to two of the three forbidden induced subgraphs for split graphs, a four-vertex cycle and a graph with two disjoint edges, respectively. The third forbidden induced subgraph, a five-vertex cycle, cannot exist in a permutation graph (see ).
Enumeration.
For formula_0 the number of skew-merged permutations of length formula_1 is
1, 2, 6, 22, 86, 340, 1340, 5254, 20518, 79932, 311028, 1209916, 4707964, 18330728, ... (sequence in the OEIS).
was the first to show that the generating function of these numbers is
formula_2
from which it follows that the number of skew-merged permutations of length formula_1 is given by the formula
formula_3
and that these numbers obey the recurrence relation
formula_4
Another derivation of the generating function for skew-merged permutations was given by .
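The counts and the recurrence above can be cross-checked by brute force using the pattern characterization (avoidance of 3412 and 2143); the script below is only an illustrative verification, not part of the cited derivations:

```python
from itertools import combinations, permutations

def contains(perm, pattern):
    """True if perm contains pattern as a classical permutation pattern."""
    k = len(pattern)
    for idx in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idx]
        if all((vals[a] < vals[b]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def is_skew_merged(perm):
    """Characterization from above: avoid both 3412 and 2143."""
    return not contains(perm, (3, 4, 1, 2)) and not contains(perm, (2, 1, 4, 3))

def counts_by_recurrence(n_max):
    """P_n from the recurrence (formula_4), seeded with P_1, P_2, P_3 = 1, 2, 6."""
    P = {1: 1, 2: 2, 3: 6}
    for n in range(4, n_max + 1):
        P[n] = ((9*n - 8)*P[n-1] - (26*n - 46)*P[n-2] + (24*n - 60)*P[n-3]) // n
    return [P[n] for n in range(1, n_max + 1)]

brute = [sum(is_skew_merged(p) for p in permutations(range(1, n + 1)))
         for n in range(1, 8)]
print(brute)                      # [1, 2, 6, 22, 86, 340, 1340]
print(counts_by_recurrence(7))    # matches the brute-force counts
```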
Computational complexity.
Testing whether one permutation is a pattern in another can be solved efficiently when the larger of the two permutations is skew-merged, as shown by .
|
[
{
"math_id": 0,
"text": "n=1,2,3,\\dots"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\frac{1-3x}{(1-2x)\\sqrt{1-4x}},"
},
{
"math_id": 3,
"text": "\\binom{2n}{n}\\sum_{m=0}^{n-1}2^{n-m-1}\\binom{2m}{m}"
},
{
"math_id": 4,
"text": "P_n=\\frac{(9n-8)P_{n-1} - (26n-46)P_{n-2} + (24n-60)P_{n-3}}{n}."
}
] |
https://en.wikipedia.org/wiki?curid=57781155
|
5778255
|
Rabi frequency
|
Frequency in atomic physics
The Rabi frequency is the frequency at which the probability amplitudes of two atomic energy levels fluctuate in an oscillating electromagnetic field. It is proportional to the transition dipole moment of the two levels and to the amplitude ("not" intensity) of the electromagnetic field. Population transfer between the levels of such a 2-level system illuminated with light exactly resonant with the difference in energy between the two levels will occur at the Rabi frequency; when the incident light is detuned from this energy difference (detuned from resonance) then the population transfer occurs at the generalized Rabi frequency. The Rabi frequency is a semiclassical concept since it treats the atom as an object with quantized energy levels and the electromagnetic field as a continuous wave.
In the context of a nuclear magnetic resonance experiment, the Rabi frequency is the nutation frequency of a sample's net nuclear magnetization vector about a radio-frequency field. (Note that this is distinct from the Larmor frequency, which characterizes the precession of a transverse nuclear magnetization about a static magnetic field.)
Derivation.
Consider two energy eigenstates of a quantum system with Hamiltonian formula_0 (for example, this could be the Hamiltonian of a particle in a formula_1 potential, like the hydrogen atom or the alkali atoms):
formula_2
We want to consider the time dependent Hamiltonian
formula_3
where formula_4 is the potential of the electromagnetic field. Treating the potential as a perturbation, we can expect the eigenstates of the perturbed Hamiltonian to be some mixture of the eigenstates of the original Hamiltonian with time dependent coefficients:
formula_5
Plugging this into the time dependent Schrödinger equation
formula_6
taking the inner product with each of formula_7 and formula_8, and using the orthogonality condition of eigenstates formula_9, we arrive at two equations in the coefficients formula_10 and formula_11:
formula_12
where formula_13. The two terms in parentheses are dipole matrix elements dotted into the polarization vector of the electromagnetic field. In considering the spherically symmetric spatial eigenfunctions formula_14 of the hydrogen atom potential, the diagonal matrix elements go to zero, leaving us with
formula_15
or
formula_16
Here formula_17, where formula_18 is the Rabi frequency.
Intuition.
In the numerator we have the transition dipole moment for the formula_19 transition, whose squared amplitude represents the strength of the interaction between the electromagnetic field and the atom, and formula_20 is the vector electric field amplitude, which includes the polarization. The numerator has dimensions of energy, so dividing by formula_21 gives an angular frequency.
By analogy with a classical dipole, it is clear that an atom with a large dipole moment will be more susceptible to perturbation by an electric field. The dot product includes a factor of formula_22, where formula_23 is the angle between the polarization of the light and the transition dipole moment. When they are parallel the interaction is strongest, when they are perpendicular there is no interaction at all.
If we rewrite the differential equations found above:
formula_24
and apply the rotating-wave approximation, which assumes that formula_25 so that we can discard the rapidly oscillating terms, we have
formula_26
where formula_27 is called the detuning between the laser and the atomic frequencies.
We can solve these equations, assuming at time formula_28 the atom is in formula_29 (i.e. formula_30) to find
formula_31
This is the probability, as a function of detuning and time, of finding the population in state formula_32. Plotting it as a function of detuning while ramping the time from 0 to formula_33 illustrates the behaviour of the system.
We see that for formula_34 the population will oscillate between the two states at the Rabi frequency.
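A minimal numerical sketch of formula_31; the Rabi frequency chosen below is an arbitrary illustrative value:

```python
import math

def excited_population(t, omega, delta):
    """|c_2(t)|^2 = Omega^2 * sin^2(sqrt(Omega^2 + delta^2) * t / 2) / (Omega^2 + delta^2)."""
    w = math.sqrt(omega ** 2 + delta ** 2)     # generalized Rabi frequency
    return (omega ** 2 / w ** 2) * math.sin(w * t / 2.0) ** 2

omega = 2.0 * math.pi * 1.0e6     # assumed Rabi frequency of 2*pi * 1 MHz
t_pi = math.pi / omega            # duration of a resonant pi-pulse

print(excited_population(t_pi, omega, delta=0.0))     # ~1.0: complete transfer on resonance
print(excited_population(t_pi, omega, delta=omega))   # reduced transfer off resonance
```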
Generalized Rabi frequency.
The quantity formula_35 is commonly referred to as the "generalized Rabi frequency." For cases in which formula_36, Rabi flopping actually occurs at this frequency, where formula_37 is the detuning, a measure of how far the light is off-resonance relative to the transition. For instance, examining the above animation at an offset frequency of ±1.73, one can see that during the 1/2 Rabi cycle (at resonance) shown during the animation, the oscillation instead undergoes one "full" cycle, thus at twice the (normal) Rabi frequency formula_38, just as predicted by this equation. Also note that as the incident light frequency shifts further from the transition frequency, the amplitude of the Rabi oscillation decreases, as is illustrated by the dashed envelope in the above plot.
Two-photon Rabi frequency.
Coherent Rabi oscillations may also be driven by two-photon transitions. In this case we consider a system with three atomic energy levels, formula_29, formula_14, and formula_39, where formula_14 represents a so-called intermediate state with corresponding frequency formula_40, and an electromagnetic field with two frequency components:
formula_41
Now, formula_40 may be much greater than both formula_42 and formula_43, or formula_44, as illustrated in the figure on the right.
A two-photon transition is "not" the same as excitation from the ground to intermediate state, and then out of the intermediate state to the excited state. Instead, the atom absorbs two photons simultaneously and is promoted directly between the initial and final states. There are two necessary conditions for this two-photon process (also known as a Raman process) to be the dominant mode of the light-matter interaction:
formula_45
In words, the sum of the frequencies of the two photons must be on resonance with the transition between the initial and final states, and the individual frequencies of the photons must be detuned from the intermediate state to initial and final state transitions. If the latter condition is not met and formula_46, the dominant process will instead be governed by rate equations: the intermediate state becomes populated, and stimulated and spontaneous emission events from that state prevent the possibility of driving coherent oscillations between the initial and final states.
We may derive the two-photon Rabi frequency by returning to the equations
formula_47
which now describe excitation between the ground and intermediate states. We know we have the solution
formula_48
where formula_49 is the generalized Rabi frequency for the transition from the initial to intermediate state. Similarly for the intermediate to final state transition we have the equations
formula_50
Now we plug formula_51 into the above equation for formula_52
formula_53
Such that, upon solving this equation, we find the coefficient to be proportional to:
formula_54
This is the effective or two-photon Rabi frequency. It is the product of the individual Rabi frequencies for the formula_55 and formula_56 transitions, divided by twice the detuning from the intermediate state formula_14.
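A one-function sketch of the effective two-photon Rabi frequency implied by the proportionality above; the variable names and example values are hypothetical, and the expression is only valid for large detuning from the intermediate state:

```python
def two_photon_rabi(omega_1i, omega_i2, delta):
    """Effective Rabi frequency Omega_eff = Omega_1i * Omega_i2 / (2 * Delta)  (from formula_54)."""
    return omega_1i * omega_i2 / (2.0 * delta)

# Assumed single-photon Rabi frequencies of 10 MHz each, with a 1 GHz detuning:
print(two_photon_rabi(10e6, 10e6, 1e9))    # 5e4, i.e. a 50 kHz effective Rabi frequency
```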
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\hat{H}_0 "
},
{
"math_id": 1,
"text": " \\frac{1}{r} "
},
{
"math_id": 2,
"text": "\\begin{align}\n\\psi_1(\\mathbf{r}, t) &= e^{-i\\omega_1 t} |1\\rangle\\\\\n\\psi_2(\\mathbf{r}, t) &= e^{-i\\omega_2 t} |2\\rangle\n\\end{align}"
},
{
"math_id": 3,
"text": "\n\\hat{\\mathcal{H}} = \\hat{H}_0 + \\hat{V}(t)\n"
},
{
"math_id": 4,
"text": " \\hat{V}(t) = e\\mathbf{r} \\cdot \\mathbf{E}_0 \\cos(\\omega t) "
},
{
"math_id": 5,
"text": "\n\\Psi(\\mathbf{r}, t) = c_1(t) e^{-i\\omega_1 t} |1\\rangle + c_2(t) e^{-i\\omega_2 t} |2\\rangle\n"
},
{
"math_id": 6,
"text": "\ni\\hbar \\frac{\\partial \\Psi(\\mathbf{r}, t)}{\\partial t} = \\hat{\\mathcal{H}} \\Psi(\\mathbf{r}, t)\n"
},
{
"math_id": 7,
"text": " e^{i\\omega_1 t}\\langle 1| "
},
{
"math_id": 8,
"text": " e^{i\\omega_2 t}\\langle 2| "
},
{
"math_id": 9,
"text": " \\langle i | j \\rangle = \\delta_{i,j} "
},
{
"math_id": 10,
"text": " c_1(t) "
},
{
"math_id": 11,
"text": " c_2(t) "
},
{
"math_id": 12,
"text": "\\begin{align}\ni \\dot{c}_1(t) &= \\frac{c_1(t) \\cos(\\omega t)}{\\hbar} \\langle 1| e\\mathbf{r} \\cdot \\mathbf{E}_0 |1 \\rangle + \\frac{e^{i\\omega_0 t}c_2(t) \\cos(\\omega t)}{\\hbar} \\langle 1| e\\mathbf{r} \\cdot \\mathbf{E}_0 |2 \\rangle \\\\\ni \\dot{c}_2(t) &= \\frac{c_2(t) \\cos(\\omega t)}{\\hbar} \\langle 2| e\\mathbf{r} \\cdot \\mathbf{E}_0 |2 \\rangle + \\frac{e^{-i\\omega_0 t}c_1(t) \\cos(\\omega t)}{\\hbar} \\langle 2| e\\mathbf{r} \\cdot \\mathbf{E}_0 |1 \\rangle\n\\end{align}"
},
{
"math_id": 13,
"text": " \\omega_0 = \\omega_1 - \\omega_2 "
},
{
"math_id": 14,
"text": " |i\\rangle "
},
{
"math_id": 15,
"text": "\\begin{align}\ni \\dot{c}_1(t) &= \\frac{c_2(t) \\cos(\\omega t)}{\\hbar} \\langle 1| e\\mathbf{r} \\cdot \\mathbf{E}_0 |2 \\rangle e^{i\\omega_0 t} \\\\\ni \\dot{c}_2(t) &= \\frac{c_1(t) \\cos(\\omega t)}{\\hbar} \\langle 2| e\\mathbf{r} \\cdot \\mathbf{E}_0 |1 \\rangle e^{-i\\omega_0 t}\n\\end{align}"
},
{
"math_id": 16,
"text": "\\begin{align}\ni \\dot{c}_1(t) &= \\Omega c_2(t) \\cos(\\omega t) e^{i\\omega_0 t} \\\\\ni \\dot{c}_2(t) &= \\Omega^* c_1(t) \\cos(\\omega t) e^{-i\\omega_0 t}\n\\end{align}"
},
{
"math_id": 17,
"text": " \\Omega := \\Omega_{1,2} "
},
{
"math_id": 18,
"text": " \\Omega_{i,j} = \\frac{\\langle i| e\\mathbf{r} \\cdot \\mathbf{E}_0 |j \\rangle}{\\hbar}"
},
{
"math_id": 19,
"text": "i \\to j"
},
{
"math_id": 20,
"text": "\\mathbf{E}_0 = \\hat{\\epsilon} E_0"
},
{
"math_id": 21,
"text": "\\hbar"
},
{
"math_id": 22,
"text": "\\cos\\theta"
},
{
"math_id": 23,
"text": "\\theta"
},
{
"math_id": 24,
"text": "\\begin{align}\ni \\dot{c}_1(t) = \\Omega c_2(t) \\cos(\\omega t) e^{i\\omega_0 t} &\\to \\frac{\\Omega c_2}{2} (e^{i(\\omega - \\omega_0)t} + e^{-i(\\omega + \\omega_0)t})\\\\\ni \\dot{c}_2(t) = \\Omega^* c_1(t) \\cos(\\omega t) e^{-i\\omega_0 t} &\\to \\frac{\\Omega^* c_1}{2} (e^{i(\\omega + \\omega_0)t} + e^{-i(\\omega - \\omega_0)t})\n\\end{align}"
},
{
"math_id": 25,
"text": " \\omega + \\omega_0 >> \\omega - \\omega_0 "
},
{
"math_id": 26,
"text": "\\begin{align}\ni \\dot{c}_1(t) &= \\frac{\\Omega c_2}{2} e^{i\\delta t}\\\\\ni \\dot{c}_2(t) &= \\frac{\\Omega^* c_1}{2} e^{-i\\delta t}\n\\end{align}"
},
{
"math_id": 27,
"text": " \\delta = \\omega - \\omega_0 "
},
{
"math_id": 28,
"text": " t = 0 "
},
{
"math_id": 29,
"text": " |1\\rangle "
},
{
"math_id": 30,
"text": " c_1(0) = 1 "
},
{
"math_id": 31,
"text": "\n|c_2(t)|^2 = \\frac{\\Omega^2 \\sin^2\\bigg(\\frac{\\sqrt{\\Omega^2 + \\delta^2}t}{2}\\bigg)}{\\Omega^2 + \\delta^2}\n"
},
{
"math_id": 32,
"text": " | 2 \\rangle "
},
{
"math_id": 33,
"text": " t = \\frac{\\pi}{\\Omega} "
},
{
"math_id": 34,
"text": " \\delta = 0 "
},
{
"math_id": 35,
"text": " \\sqrt{\\Omega^2 + \\delta^2} "
},
{
"math_id": 36,
"text": " \\delta \\neq 0 "
},
{
"math_id": 37,
"text": " \\delta "
},
{
"math_id": 38,
"text": "\\Omega_{i,j}"
},
{
"math_id": 39,
"text": " |2\\rangle "
},
{
"math_id": 40,
"text": " \\omega_i "
},
{
"math_id": 41,
"text": "\n\\hat{V}(t) = e\\mathbf{r} \\cdot \\mathbf{E}_{L1} \\cos(\\omega_{L1} t) + e\\mathbf{r} \\cdot \\mathbf{E}_{L2} \\cos(\\omega_{L2} t)\n"
},
{
"math_id": 42,
"text": " \\omega_1 "
},
{
"math_id": 43,
"text": " \\omega_2 "
},
{
"math_id": 44,
"text": " \\omega_2 > \\omega_i > \\omega_1 "
},
{
"math_id": 45,
"text": "\\begin{align}\n\\omega_{L2} + \\omega_{L1} &= \\omega_{1} - \\omega_{2}\\\\\n\\Delta = |\\omega_{L1} - \\omega_{1}| &>> 0\n\\end{align}"
},
{
"math_id": 46,
"text": " \\Delta \\to 0 "
},
{
"math_id": 47,
"text": "\\begin{align}\ni \\dot{c}_1(t) &= \\frac{\\Omega_{1i} c_2}{2} e^{i\\Delta t}\\\\\ni \\dot{c}_i(t) &= \\frac{\\Omega^*_{1i} c_1}{2} e^{-i\\Delta t}\n\\end{align}"
},
{
"math_id": 48,
"text": "\nc_i(t) = \\frac{\\Omega_{1i} \\sin\\bigg(\\frac{\\tilde{\\Omega}_{1i}t}{2}\\bigg)}{\\tilde{\\Omega}_{1i}}\n"
},
{
"math_id": 49,
"text": " \\tilde{\\Omega}_{1i} "
},
{
"math_id": 50,
"text": "\\begin{align}\ni \\dot{c}_i(t) &= \\frac{\\Omega_{i2} c_2}{2} e^{i\\Delta t}\\\\\ni \\dot{c}_2(t) &= \\frac{\\Omega^*_{i2} c_i}{2} e^{-i\\Delta t}\n\\end{align}"
},
{
"math_id": 51,
"text": " c_i(t) "
},
{
"math_id": 52,
"text": " \\dot{c}_2(t) "
},
{
"math_id": 53,
"text": "\ni \\dot{c}_2(t) = \\frac{\\Omega^*_{i2}\\Omega_{1i} \\sin\\bigg(\\frac{\\tilde{\\Omega}_{1i}t}{2}\\bigg)}{2\\tilde{\\Omega}_{1i}} e^{-i\\Delta t}\n"
},
{
"math_id": 54,
"text": "\nc_2(t) \\propto \\frac{\\Omega_{i2}\\Omega_{1i}}{2\\Delta}\n"
},
{
"math_id": 55,
"text": " |1\\rangle \\to |i\\rangle "
},
{
"math_id": 56,
"text": " |i\\rangle \\to |2\\rangle "
}
] |
https://en.wikipedia.org/wiki?curid=5778255
|
577830
|
Titration curve
|
Graph in acid-base chemistry
Titrations are often recorded on graphs called titration curves, which generally plot the volume of titrant added as the independent variable and the pH of the solution as the dependent variable (since the pH changes with the composition of the mixture).
The equivalence point on the graph is where all of the starting solution (usually an acid) has been neutralized by the titrant (usually a base). It can be located precisely by computing the second derivative of the titration curve and finding where it crosses zero, i.e. the points of inflection (where the graph changes concavity); in most cases, however, simple visual inspection of the curve will suffice. In the curve given to the right, both equivalence points are visible, after roughly 15 and 30 mL of NaOH solution have been added to the oxalic acid solution. To calculate the logarithmic acid dissociation constant (pKa), one must find the pH at the half-equivalence point, that is, the point at which half of the titrant needed to reach the next equivalence point has been added (forming, here, first sodium hydrogen oxalate and then disodium oxalate). At these half-equivalence volumes, 7.5 mL and 22.5 mL, the observed pH was about 1.5 and 4, giving the two pKa values.
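In practice, the second-derivative method lends itself to a simple numerical treatment of digitized titration data. The sketch below is illustrative only: the volume and pH arrays are hypothetical stand-ins for a measured monoprotic curve, and real data would usually be smoothed before differentiation.

import numpy as np

# Hypothetical titration data for a weak monoprotic acid:
# volume of titrant added (mL) and the measured pH at each point.
volume = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 13.0, 14.0,
                   14.5, 15.0, 15.5, 16.0, 18.0, 20.0, 22.0, 24.0])
ph = np.array([2.9, 3.8, 4.2, 4.5, 4.8, 5.1, 5.4, 5.7, 6.1,
               6.6, 8.7, 10.7, 11.0, 11.4, 11.6, 11.8, 11.9])

# Numerical first and second derivatives of pH with respect to volume.
d1 = np.gradient(ph, volume)
d2 = np.gradient(d1, volume)

# The equivalence point is an inflection point: the second derivative
# changes sign there, and the slope (first derivative) is at its maximum.
candidates = np.where(np.diff(np.sign(d2)) != 0)[0]
v_eq = volume[candidates[np.argmax(d1[candidates])]]

# For a monoprotic acid, the half-equivalence volume is half of v_eq;
# the pH read there is the pKa.
v_half = v_eq / 2.0
pka = np.interp(v_half, volume, ph)
print(f"equivalence point ~ {v_eq:.1f} mL, pKa ~ {pka:.2f}")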
For a weak monoprotic acid, the point halfway between the beginning of the curve (before any titrant has been added) and the equivalence point is significant: at that point, the concentrations of the two species (the acid and its conjugate base) are equal. The Henderson–Hasselbalch equation therefore simplifies as follows:
formula_0
formula_1
formula_2
Therefore, one can easily find the pKa of the weak monoprotic acid by finding the pH of the point halfway between the beginning of the curve and the equivalence point, and solving the simplified equation. In the case of the sample curve, the acid dissociation constant "K"a = 10^(−pKa) would be approximately 1.78×10−5 from visual inspection (the actual "K"a2 is 1.7×10−5).
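As a quick numerical check (the half-equivalence pH of about 4.75 used here is an assumed reading, chosen only because it reproduces the value quoted above):

pka = 4.75               # assumed pH read at the half-equivalence point
ka = 10 ** -pka          # Ka = 10^(-pKa)
print(f"Ka ~ {ka:.2e}")  # prints Ka ~ 1.78e-05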
For polyprotic acids, calculating the acid dissociation constants is only marginally more difficult: the first acid dissociation constant can be calculated the same way it is for a monoprotic acid. The pKa corresponding to the second dissociation, however, is the pH at the point halfway between the first and second equivalence points (and so on for acids that release more than two protons, such as phosphoric acid).
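Applied to the diprotic example described above (equivalence points at roughly 15 mL and 30 mL), the half-equivalence volumes and the corresponding dissociation constants work out as in the short sketch below; the pH readings of 1.5 and 4 are the approximate values quoted earlier, not exact measurements.

v_eq1, v_eq2 = 15.0, 30.0            # approximate equivalence volumes (mL)
v_half1 = v_eq1 / 2.0                # 7.5 mL: read pH here for pKa1
v_half2 = (v_eq1 + v_eq2) / 2.0      # 22.5 mL: read pH here for pKa2
pka1, pka2 = 1.5, 4.0                # approximate pH readings at those volumes
ka1, ka2 = 10 ** -pka1, 10 ** -pka2  # about 3.2e-02 and 1.0e-04
print(v_half1, v_half2, ka1, ka2)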
|
[
{
"math_id": 0,
"text": "\\mathrm{pH} = \\mathrm pK_{\\mathrm a} + \\log \\left( \\frac{[\\mbox{base}]}{[\\mbox{acid}]} \\right)"
},
{
"math_id": 1,
"text": "\\mathrm{pH} = \\mathrm pK_{\\mathrm a} + \\log(1)\\,"
},
{
"math_id": 2,
"text": "\\mathrm{pH} = \\mathrm pK_{\\mathrm a} \\,"
}
] |
https://en.wikipedia.org/wiki?curid=577830